Following up on the last blog post: Filebeat is very easy to set up, but it doesn't do log pattern matching, so I'll need Logstash after all.
The first step is to install Logstash, of course. Telling Filebeat to feed Logstash instead of Elasticsearch is straightforward; here are some configuration snippets:
Filebeat K8s ConfigMap:
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: filebeat-config
  namespace: kube-system
  labels:
    k8s-app: filebeat
    kubernetes.io/cluster-service: "true"
data:
  filebeat.yml: |-
    filebeat.config:
    ...
    # replace output.elasticsearch with this
    output.logstash:
      hosts: ['${LOGSTASH_HOST:logstash}:${LOGSTASH_PORT:5044}']
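The LOGSTASH_HOST default of logstash only works if that name resolves inside the cluster, typically via a Service sitting in front of the Logstash pods. As a minimal sketch (assuming Logstash is deployed in kube-system with a k8s-app: logstash label; the names here are illustrative, not from the original setup), such a Service could look like this:
apiVersion: v1
kind: Service
metadata:
  name: logstash            # must match the host name Filebeat resolves
  namespace: kube-system
  labels:
    k8s-app: logstash
spec:
  selector:
    k8s-app: logstash       # assumed label on the Logstash pods
  ports:
    - name: beats
      port: 5044            # port Filebeat's output.logstash points at
      targetPort: 5044
      protocol: TCP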
Sample Logstash configuration:
input {
  beats {
    port => "5044"
  }
}
filter {
  grok {
    match => { "message" => "%{COMBINEDAPACHELOG}" }
  }
}
output {
  elasticsearch {
    hosts => [ "localhost:9200" ]
    index => "%{[@metadata][beat]}-%{[@metadata][version]}-%{+YYYY.MM.dd}"
  }
}
COMBINEDAPACHELOG is the grok pattern for the standard Apache combined log format (which nginx uses by default as well). By using this predefined pattern, values like the request URI or referrer URL become available as fields in Elasticsearch.
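For example, with the classic (pre-ECS) pattern definitions the grok filter above would turn a combined-format line into fields roughly like the ones below; the log line and values are made up for illustration, and the field names differ if ECS compatibility is enabled:
# Illustrative input line:
# 203.0.113.9 - - [01/Jan/2024:10:00:00 +0000] "GET /index.html HTTP/1.1" 200 2326 "http://example.com/" "Mozilla/5.0"
clientip: '203.0.113.9'
timestamp: '01/Jan/2024:10:00:00 +0000'
verb: 'GET'
request: '/index.html'
httpversion: '1.1'
response: '200'
bytes: '2326'
referrer: '"http://example.com/"'
agent: '"Mozilla/5.0"'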
🙂
