docker logging monitoring

Containers write logs to the console (stdout/stderr). Docker then uses a logging driver to export container logs to a chosen destination. Because Docker workloads are transient, we should use logging drivers to export the logs to a particular place where they can be stored and persisted.

If you have a log analytics solution, the best place for your logs is within it. Docker supports multiple log targets via logging drivers:

  • none: There are no logs available for the container, and therefore they are not stored anywhere.
  • local: Logs are stored locally in a custom format, which minimizes overhead.
  • json-file: Log files are stored in JSON format; this is the default Docker logging driver.
  • syslog: Uses syslog to store the Docker logs. This option makes sense when syslog is already your default logging mechanism.
  • journald: Uses journald to store Docker logs. You can use the journald command line to browse the container logs and the Docker daemon logs.
  • gelf: Sends logs to a Graylog Extended Log Format (GELF) endpoint such as Graylog or Logstash.
  • fluentd: Sends logs to Fluentd.
  • awslogs: Sends logs to AWS CloudWatch.
  • splunk: Sends logs to Splunk using the HTTP Event Collector.
  • etwlogs: Writes logs as Event Tracing for Windows (ETW) events. You can use it only on Windows platforms.
  • gcplogs: Sends logs to Google Cloud Logging.
  • logentries: Sends logs to Rapid7 Logentries.
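
A quick way to check which driver an individual container ended up with is to inspect its log configuration (the container name below is only a placeholder):

$ docker inspect -f '{{.HostConfig.LogConfig.Type}}' <container_name>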

Configuring logging drivers

  • find the current logging driver:
$ docker info | grep "Logging Driver"
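
docker info also accepts Go templates, so the same value can be pulled without grep (standard Docker CLI formatting, shown here as an alternative):

$ docker info --format '{{.LoggingDriver}}'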

To change the default logging driver, we must configure the default logging driver in the daemon.json file.

$ vim /etc/docker/daemon.json

Add the log-driver entry to the daemon.json configuration file:

{
  "log-driver": "journald"
}

For Splunk:

{
  "log-driver": "splunk",
  "log-opts": {
    "splunk-token": "<Splunk HTTP Event Collector token>",
    "splunk-url": "<Splunk HTTP(S) url>"
  }
}
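
log-opts works the same way for local file drivers. As a sketch, assuming the documented max-size and max-file options of the json-file driver, a daemon.json enabling log rotation might look like this:

{
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "10m",
    "max-file": "3"
  }
}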
  • restart the Docker service:
$ sudo systemctl restart docker
$ sudo systemctl status docker
$ docker info | grep "Logging Driver"
  • visualize the logs
$ docker run --name nginx-journald -d nginx
$ sudo journalctl CONTAINER_NAME=nginx-journald
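
journalctl supports its usual flags here as well; for example, to follow the container logs live (assuming the nginx-journald container from above is still running):

$ sudo journalctl -f CONTAINER_NAME=nginx-journald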

You can also use different logging drivers for different containers by overriding the default with the --log-driver and --log-opt flags on the command line. Because our current default configuration is Splunk but we want this container's data in a JSON file, we can specify json-file as the log driver while running the container.

$ docker run --name nginx-json-file --log-driver json-file -d nginx
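
Driver options are passed the same way via --log-opt. A sketch combining the json-file driver with its rotation options (the container name and size limits are arbitrary):

$ docker run --name nginx-json-rotated -d \
    --log-driver json-file \
    --log-opt max-size=10m \
    --log-opt max-file=3 \
    nginx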

To visualize JSON logs, we need to look into the JSON log directory, that is, /var/lib/docker/containers/<container_id>/<container_id>-json.log.

$ cat /var/lib/docker/containers/379eb8d0162d98614d53ae1c81ea1ad154745f9edbd2f64cffc2279772198bb2/\
379eb8d0162d98614d53ae1c81ea1ad154745f9edbd2f64cffc2279772198bb2-json.log
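
Rather than assembling this path by hand, the exact log file location can be queried from the container metadata (LogPath is a standard docker inspect field):

$ cat $(docker inspect --format '{{.LogPath}}' nginx-json-file)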

One solution is to use the logging driver to forward the logs directly to a log analytics solution. But then logging becomes heavily dependent on the availability of that log analytics solution.

The best way to approach this problem is to use JSON files to store the logs temporarily on the virtual machine and use another container to push the logs to your chosen log analytics solution the old-fashioned way. That way, you decouple your application from its dependency on an external service.
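
As a rough sketch of that pattern, a log-shipper container (Fluentd here, though any agent would do) can mount the host's container log directory read-only and forward the JSON files according to its own configuration; the image, mount paths, and fluent.conf contents are assumptions that would need adjusting for your analytics target:

$ docker run -d --name log-shipper \
    -v /var/lib/docker/containers:/var/lib/docker/containers:ro \
    -v $(pwd)/fluent.conf:/fluentd/etc/fluent.conf:ro \
    fluent/fluentd

The fluent.conf would typically tail /var/lib/docker/containers/*/*-json.log and push records to the chosen backend, so the application keeps logging locally even when the backend is unreachable.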