
  1. First, we need to edit the Filebeat configuration file (filebeat.yml) so that our events carry the extra fields we want. Custom fields are added with a "fields" option, which belongs at the input (or top) level of the configuration, not under the output section.

    Here's an example:

    filebeat.inputs:
      - type: log
        paths:
          - /var/log/postgresql/*.log
        fields:
          service: my_app

    output.elasticsearch:
      hosts: ["localhost:9200"]
      index: "my_logs-%{+yyyy.MM.dd}"

    In this example, we're adding a "service" field with the value "my_app" to every event read from the PostgreSQL log files.
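
    By default, Filebeat nests custom fields under a top-level "fields" key, so the field above is indexed as "fields.service". If you'd rather have it at the root of the event, Filebeat's standard fields_under_root option does that. A sketch (the input type and paths are illustrative):

    ```yaml
    filebeat.inputs:
      - type: log
        paths:
          - /var/log/postgresql/*.log
        fields:
          service: my_app
        # Promote custom fields to the event root, so the field is
        # indexed as "service" rather than "fields.service".
        fields_under_root: true
    ```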

  2. Next, we need to create a Logstash pipeline to receive the Filebeat events and split them into different fields.

    Here's an example pipeline:

    input {
      beats {
        port => 5044
      }
    }

    filter {
      grok {
        match => { "message" => "%{TIMESTAMP_ISO8601:timestamp} %{LOGLEVEL:loglevel} \[%{DATA:component}\] %{GREEDYDATA:message}" }
        # Replace the original "message" field with the captured trailing text,
        # instead of ending up with an array containing both values.
        overwrite => [ "message" ]
      }
      mutate {
        add_field => {
          "application" => "my_app"
          "type" => "postgresql"
        }
      }
    }

    output {
      elasticsearch {
        hosts => ["localhost:9200"]
        index => "my_logs-%{+yyyy.MM.dd}"
      }
    }


    In this pipeline, the grok filter splits each log line into separate fields (timestamp, loglevel, component, and the remaining message) using named grok patterns, and the mutate filter adds two new fields: "application" with the value "my_app" and "type" with the value "postgresql".

    Note that this example assumes the logs come from a PostgreSQL server. If your logs come from a different source, you may need to adjust the grok pattern to match your log format.
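
    You can sanity-check the pattern locally before deploying. Here's a small Python sketch that approximates the grok patterns above with ordinary regular expressions and runs them against a sample PostgreSQL-style line (the pattern definitions and the sample line are simplified assumptions, not the exact grok library definitions):

    ```python
    import re

    # Rough Python approximations of the grok patterns used in the filter
    # (the real grok definitions are more permissive).
    TIMESTAMP_ISO8601 = (
        r"\d{4}-\d{2}-\d{2}[T ]\d{2}:\d{2}(?::\d{2})?(?:\.\d+)?"
        r"(?: ?(?:Z|[A-Z]{2,4}|[+-]\d{2}:?\d{2}))?"
    )
    LOGLEVEL = r"(?:TRACE|DEBUG|INFO|NOTICE|WARN(?:ING)?|ERROR|CRIT(?:ICAL)?|FATAL|LOG)"

    pattern = re.compile(
        rf"(?P<timestamp>{TIMESTAMP_ISO8601}) (?P<loglevel>{LOGLEVEL}) "
        rf"\[(?P<component>[^\]]*)\] (?P<message>.*)"
    )

    # A sample PostgreSQL-style log line (illustrative).
    line = "2023-05-01 12:34:56 UTC ERROR [postmaster] could not connect"
    match = pattern.match(line)
    print(match.groupdict() if match else "no match")
    ```

    If the pattern doesn't match your real log lines, adjust the grok expression accordingly before restarting Logstash.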

  3. Finally, we need to make sure that Filebeat sends its events to Logstash instead of directly to Elasticsearch. Filebeat allows only one output to be enabled at a time, so comment out (or remove) the output.elasticsearch section and configure the Logstash output instead.

    Here's an example:

    output.logstash:
      hosts: ["localhost:5044"]

    This sends all Filebeat events to the Logstash server listening on localhost:5044; from there, Logstash writes them to Elasticsearch using the index defined in its own pipeline.

With these steps, we should now be able to receive our log events with separate fields, including a "type" field specifically for PostgreSQL logs.
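
Putting the three steps together, an event indexed by this setup might look roughly like the following (an illustrative sketch only; field names follow the configs above, and real events carry extra metadata such as @version, host, and agent):

```python
import json

# Illustrative shape of one indexed event produced by the pipeline above.
event = {
    "@timestamp": "2023-05-01T12:34:56.000Z",
    "timestamp": "2023-05-01 12:34:56 UTC",  # extracted by the grok filter
    "loglevel": "ERROR",                     # extracted by the grok filter
    "component": "postmaster",               # extracted by the grok filter
    "message": "could not connect",          # remainder of the original line
    "fields": {"service": "my_app"},         # added by Filebeat in step 1
    "application": "my_app",                 # added by the mutate filter
    "type": "postgresql",                    # added by the mutate filter
}
print(json.dumps(event, indent=2))
```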