How to easily set up ELK on Docker Swarm
In this tutorial you’ll see how to easily set up an ELK (Elasticsearch, Logstash, Kibana) stack to get a centralized logging solution for your Docker Swarm cluster.
Install the stack⌗
Below you’ll find the full stack file for a working ELK stack on your Docker Swarm.
version: '3'
services:
  elasticsearch:
    image: elasticsearch:7.9.3
    environment:
      - discovery.type=single-node
      - ES_JAVA_OPTS=-Xms2g -Xmx2g
    volumes:
      - esdata:/usr/share/elasticsearch/data
  kibana:
    image: kibana:7.9.3
    depends_on:
      - elasticsearch
    ports:
      - "5601:5601"
  logstash:
    image: logstash:7.9.3
    ports:
      - "12201:12201/udp"
    depends_on:
      - elasticsearch
    deploy:
      mode: global
    volumes:
      - logstash-pipeline:/usr/share/logstash/pipeline/
volumes:
  esdata:
    driver: local
  logstash-pipeline:
    driver: local
I will break down the configuration to explain what’s going on:
elasticsearch:
  image: elasticsearch:7.9.3
  environment:
    - discovery.type=single-node
    - ES_JAVA_OPTS=-Xms2g -Xmx2g
  volumes:
    - esdata:/usr/share/elasticsearch/data
First we need to create an Elasticsearch container, nothing fancy:
- discovery.type=single-node
We need to set the discovery type to single-node to skip the bootstrap checks. See this for more information.
- ES_JAVA_OPTS=-Xms2g -Xmx2g
Here we set the JVM heap size for the container to 2 GB (both minimum and maximum).
kibana:
  image: kibana:7.9.3
  depends_on:
    - elasticsearch
  ports:
    - "5601:5601"
This one is simple: we are creating a Kibana container to view the logs. The container exposes port 5601 so that you can reach the dashboard. In a production context it would be better to expose Kibana through Traefik with a Basic Auth middleware, for example (see the sketch below).
The Kibana container will automatically try to connect to an Elasticsearch instance at the address elasticsearch. So as long as the Elasticsearch container is named elasticsearch, no further configuration is required to make it work.
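Here is a rough sketch of what exposing Kibana through Traefik could look like, using Traefik v2 Swarm labels. The hostname, middleware name, and htpasswd hash are placeholders, and a running Traefik instance is assumed:
kibana:
  image: kibana:7.9.3
  deploy:
    labels:
      - traefik.enable=true
      # placeholder hostname, replace with your own
      - traefik.http.routers.kibana.rule=Host(`kibana.example.org`)
      - traefik.http.routers.kibana.middlewares=kibana-auth
      # placeholder hash: generate one with `htpasswd -nb admin <password>`
      # and double every `$` sign inside a compose file
      - traefik.http.middlewares.kibana-auth.basicauth.users=admin:$$apr1$$REPLACE_ME
      # Swarm mode cannot auto-detect the port, so it must be set explicitly
      - traefik.http.services.kibana.loadbalancer.server.port=5601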
logstash:
  image: logstash:7.9.3
  ports:
    - "12201:12201/udp"
  depends_on:
    - elasticsearch
  deploy:
    mode: global
  volumes:
    - logstash-pipeline:/usr/share/logstash/pipeline/
Final part: the Logstash container. It’s the process that will collect the container logs, aggregate them, and push them to ES.
ports:
  - "12201:12201/udp"
The Logstash process will expose a GELF UDP endpoint on port 12201, and the Docker engine will push the logs to that endpoint.
deploy:
  mode: global
We need one Logstash agent running per node, so that every container can push its logs to it.
volumes:
  - logstash-pipeline:/usr/share/logstash/pipeline/
We need to configure Logstash to listen on that port and forward the logs to ES. This is done by creating a pipeline, which we will put in this volume.
Now we will need to bring up this stack:
creekorful@host:~$ docker stack deploy logs -c logs.yaml
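To verify that everything is running, you can list the services of the stack (the stack name logs comes from the deploy command above):
creekorful@host:~$ docker stack services logs
creekorful@host:~$ docker service ps logs_logstash
The second command should show one Logstash task per node, since the service runs in global mode.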
Configure Logstash⌗
Now that we have our 3 containers up and alive, we need to configure Logstash. This is done by putting the following logstash.conf file into the logstash-pipeline volume.
input {
  gelf {
  }
}

output {
  elasticsearch {
    hosts => "elasticsearch:9200"
  }
}
This configuration sets up a UDP (the default) GELF endpoint on port 12201 (the default), and outputs the logs to the elasticsearch host.
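To get the file into the volume, one approach (a sketch assuming the default local volume driver and the stack name logs) is to copy it into the volume’s mountpoint. Since Logstash runs globally, this has to be done on every node:
creekorful@host:~$ docker volume inspect --format '{{ .Mountpoint }}' logs_logstash-pipeline
creekorful@host:~$ sudo cp logstash.conf "$(docker volume inspect --format '{{ .Mountpoint }}' logs_logstash-pipeline)/"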
Now we need to restart the Logstash container so that it picks up the new configuration.
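With Swarm this can be done by forcing a rolling restart of the service (again assuming the stack name logs):
creekorful@host:~$ docker service update --force logs_logstash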
Configure the Docker logging driver⌗
Now that we have our containers up and running, we need to tell Docker to push the logs to Logstash. This can be done in two ways:
Global configuration⌗
We can change the Docker default logging driver so that every container created will automatically push its logs to the Logstash container. This is done by editing the /etc/docker/daemon.json config file.
{
  "log-driver": "gelf",
  "log-opts": {
    "gelf-address": "udp://localhost:12201"
  }
}
Note: you can see that we are submitting the logs to udp://localhost:12201; this is because the logs are submitted through the host network.
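Keep in mind that the daemon has to be restarted for the new default to apply, and that it only affects containers created afterwards. On a systemd-based host:
creekorful@host:~$ sudo systemctl restart docker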
Per-stack configuration⌗
If you prefer to configure logging only for some of your containers, this can be done individually in each stack file like this:
logging:
  driver: gelf
  options:
    gelf-address: "udp://localhost:12201"
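The logging block goes under a service definition. For example, here is a minimal stack file attaching the GELF driver to a hypothetical web service:
version: '3'
services:
  web:
    image: nginx:alpine
    logging:
      driver: gelf
      options:
        gelf-address: "udp://localhost:12201"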
Conclusion⌗
Now you should have a working centralized logging setup.
Long live Docker Swarm, and happy hacking!