Easy Log Analysis with Filebeat Modules


Ever since Elastic{ON} 17, we’ve been excited about all of the upcoming features in the Elastic Stack, especially the new Filebeat modules concept. Usually, when you want to start grabbing data with Filebeat, you need to configure Filebeat, create an Elasticsearch mapping template, create and test an ingest pipeline or Logstash instance, and then create the Kibana visualizations for that dataset. The Beats team has now made that setup process a whole lot easier with the modules concept.

A Filebeat module rolls up all of those configuration steps into a package that can then be enabled by a single command. Filebeat 5.3.0 and later ships with modules for mysql, nginx, apache, and system logs, but it’s also easy to create your own.

Filebeat modules have been available for a few weeks now, so I wanted to write a quick blog on how to use them with non-local Elasticsearch clusters, like those on the ObjectRocket service. We have just launched Elasticsearch version 5.4 on the ObjectRocket service, so you can try out Filebeat modules today and take advantage of the new auditd module and the Linux system.auth fileset.

Setting up

The steps below reference ObjectRocket for Elasticsearch instances and our UI, but should be easy enough to modify for other services or your own clusters. Also, we’ll be using the “system” module in this example, but including the other modules is straightforward.

What you’ll need: an Elasticsearch 5.3 or later instance (I tried some earlier 5.x versions and they worked fine, but no guarantees), Kibana of the same version, the hostname(s) used to connect to the Elasticsearch cluster, user credentials with the ability to write to the cluster and create indices, and Filebeat version 5.3 or later.

Assuming your Elasticsearch cluster and Kibana are already set up, you’ll first need to download Filebeat for whatever type of system you’re running from the Elastic downloads page, then extract it on the system where you’d like to gather the logs. For this example, I’m using a MacBook running macOS and connecting to an Elasticsearch 5.4.0 cluster, so I’ll use the filebeat-5.4.0-darwin-x86_64.tar.gz package. Extract that archive and then we’re ready to set up Filebeat.
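
For example, on macOS the download and extraction look something like this (the URL below follows Elastic’s artifact naming pattern for the 5.4.0 darwin build; swap in the package for your platform and version):

curl -L -O https://artifacts.elastic.co/downloads/beats/filebeat/filebeat-5.4.0-darwin-x86_64.tar.gz
tar xzvf filebeat-5.4.0-darwin-x86_64.tar.gz
cd filebeat-5.4.0-darwin-x86_64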

Setting up Filebeat

By default, Filebeat tries to connect to an Elasticsearch instance on your local machine and read all logs in /var/log/*.log with the included filebeat.yml file, so we first need to modify that to point to your cluster and strip out any other prospectors. If you’re using the ObjectRocket service, you can start by just copying the Beats snippet from our UI, which auto-populates the specific hostnames for your cluster.

After stripping out everything else and filling in username/password, your entire filebeat.yml file should look something like this:

output:
  elasticsearch:
    # The Elasticsearch cluster
    hosts: ["https://dfw-xxxx-0.es.objectrocket.com:xxxx", "https://dfw-xxxx-1.es.objectrocket.com:xxxx", "https://dfw-xxxx-2.es.objectrocket.com:xxxx", "https://dfw-xxxx-3.es.objectrocket.com:xxxx"]

    # HTTP basic auth
    username: "elasticsearch"
    password: "supersecretpassword"

That’s it. Note that all prospectors have been stripped out, so the file only includes the Elasticsearch connection info. At this point, also make sure you can reach your Elasticsearch cluster from this machine (for example, that any ACLs have been updated to include your IP). Now let’s use Filebeat to put everything in motion.
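
A quick way to verify connectivity (using the same placeholder hostname and credentials from the filebeat.yml above) is to hit the cluster’s root endpoint, which should return a small JSON banner with the cluster name and version:

curl -u elasticsearch:supersecretpassword https://dfw-xxxx-0.es.objectrocket.com:xxxx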

Running Filebeat

You’ve got everything configured to talk to your cluster, so as long as you made all of your changes to filebeat.yml in the default location, you’ll just run the following command from the filebeat directory:

./filebeat -e -modules=system -setup

The -e makes Filebeat log to stderr rather than the syslog, -modules=system tells Filebeat to use the system module, and -setup tells Filebeat to load up the module’s Kibana dashboards. You only need to include the -setup part of the command the first time, or after upgrading Filebeat, since it just loads up the default dashboards into Kibana. If you want to run multiple modules, you can list them all separated by commas (no spaces).
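
For example, to also enable the new auditd module mentioned earlier (assuming you’re shipping from a Linux host that actually has audit logs):

./filebeat -e -modules=system,auditd -setup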

Note: There is a bug in Filebeat 5.4.0 that may cause the -setup part of the command to fail on certain systems. You can work around it by raising the open-file limit (run ulimit -n 2048 in the same shell before starting Filebeat) or by using Filebeat 5.3.x.

Filebeat should now be doing its thing. As long as you don’t see any errors, let’s pop over to Kibana and check on it.

Looking at the data

Once you’re logged into Kibana, there should be a new filebeat-* index pattern along with some new visualizations and dashboards available. Go to “Dashboards”, and open the “Filebeat syslog dashboard”.

Voilà. As long as your system log has something in it, you should now have some nice visualizations of your data.

Note: If there are no apparent errors from Filebeat and there’s no data in Kibana, your system may just have a very quiet system log. I had to set the date picker back a little farther than the default of ‘Last 15 minutes’ to see some data.
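
If Kibana still looks empty, you can also confirm that events are reaching Elasticsearch at all by listing the Filebeat indices directly (same placeholder hostname and credentials as before):

curl -u elasticsearch:supersecretpassword "https://dfw-xxxx-0.es.objectrocket.com:xxxx/_cat/indices/filebeat-*?v"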

Wrapping up

This is a really cool step towards making Beats even easier to use for the most common use cases, and the basics of this example can easily be extended to the other modules that are now available. For example, lots of web apps (like WordPress, Drupal, Magento, etc.) have a web server and some MySQL behind them. With this one set of modules and a single command, you can start shipping logs for the web server, the database, and the system logs of that application, all in one step.
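
For instance, if one of those apps runs on Nginx with MySQL behind it, a single command covers all three log sources:

./filebeat -e -modules=nginx,mysql,system -setup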

If you’re looking for more information and want to dig into how these modules are constructed, check out the developer documentation and poke around in the module directory of your Filebeat installation. You should be able to match up the prospectors, pipelines, templates, and visualizations that the module is using with what you’re seeing in Kibana.
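
As a rough sketch (the exact layout can vary a bit between Filebeat versions), each module directory groups one or more filesets, each with its own config, pipeline, and metadata, something like this:

module/system/
  _meta/          # module-level Kibana dashboards, docs, and field definitions
  syslog/         # one fileset
    manifest.yml  # the fileset’s variables and defaults
    config/       # the prospector configuration template
    ingest/       # the Elasticsearch ingest pipeline definition
  auth/           # another fileset (the system.auth logs)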