Docker captures whatever a container writes to stdout and stderr and stashes it where the host can read it back, with the docker logs command as the front door. The most common workflow — "tail the logs for this container so I can see what it's doing right now" — is one flag away from the basic command, but Docker has enough log-related options that the right docker logs invocation depends on what you actually want to see: the last N lines, a live stream, a specific time window, only stderr, or the raw JSON file read directly. Each is one option away.
This guide covers every common workflow for tailing Docker container logs, the log-driver configuration that quietly decides whether docker logs works at all, the gotcha when the application logs to a file inside the container instead of stdout, and the equivalents for Docker Compose and Kubernetes.
How Docker captures logs
Docker doesn't capture logs by reading the container's filesystem — it captures whatever the container's main process (PID 1, plus any children that inherit its streams) writes to stdout and stderr. By default, those streams are saved to a JSON file on the host:
/var/lib/docker/containers/<container-id>/<container-id>-json.log
docker logs reads from that file. Three consequences worth internalising up front:
- If your app writes logs to a file inside the container (e.g. `/var/log/app/app.log`) instead of stdout/stderr, `docker logs` will not show them. You have to either change the app to log to stdout (the cloud-native convention) or `docker exec` into the container and tail the file directly. See "When the app logs to a file inside the container".
- If you change the log driver (e.g. to `journald`, `syslog`, `fluentd`, `awslogs`), the JSON file goes away and `docker logs` may stop working depending on the driver. The `local` and `json-file` drivers are the only ones that fully support `docker logs` with all flags.
- The JSON log file grows without bound by default. You almost certainly want to set log rotation (`max-size`, `max-file`) — see "Log rotation".
The five docker logs invocations to remember
docker logs <container> # all logs from start of the container
docker logs --tail 100 <container> # last 100 lines
docker logs -f <container> # follow / stream live (Ctrl-C to stop)
docker logs --tail 100 -f <container> # most useful: last 100 lines + follow
docker logs --since 10m <container> # last 10 minutes
<container> can be a name (my-api) or any unambiguous prefix of the container ID — often just the first few characters. Names are clearer for scripts; ID prefixes are faster when you've just docker ps'd.
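A quick sketch (the ID and name here are made up):

docker ps --format '{{.ID}}  {{.Names}}'
# 4f8d2c1ab9e0  my-api
docker logs --tail 50 4f8d      # any unambiguous ID prefix works
docker logs --tail 50 my-api    # same container, by name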
The combination most people want most of the time:
docker logs --tail 200 -f --timestamps <container>
That gives you the last 200 lines with timestamps, then continues streaming. Add --timestamps (or -t) when you need to correlate against other logs — by default docker logs doesn't show them.
Filter by time window
--since and --until accept either a Go-style duration (30s, 5m, 2h — there is no d unit, so write 168h rather than 7d) or an absolute timestamp (RFC 3339 / ISO 8601):
docker logs --since 5m <container> # last 5 minutes
docker logs --since 2026-05-09T10:00:00 <container> # since 10:00 today
docker logs --since 2026-05-09T10:00:00 --until 2026-05-09T11:00:00 <container> # specific hour window
For incidents, the --since shortcut paired with --tail is the fast triage shape:
docker logs --since 30m --tail 1000 <container> | grep -iE 'error|warn|fatal'
Stream selection: stdout vs stderr separately
docker logs writes the container's stdout to its stdout, and the container's stderr to its stderr. That means standard shell redirection works:
# Only stdout from the container
docker logs <container> 2>/dev/null
# Only stderr from the container
docker logs <container> 2>&1 >/dev/null
# Split into separate files
docker logs <container> > out.log 2> err.log
For typical web apps, errors go to stderr while access logs usually go to stdout — splitting the streams while debugging makes the signal much easier to read.
Search and filter
docker logs --tail 500 <container> | grep -i 'error'
docker logs --since 1h <container> 2>&1 | grep -E 'WARN|ERROR' | tail -50
# Live stream filtered for errors
docker logs -f <container> 2>&1 | grep --line-buffered -i 'error'
--line-buffered matters in the streaming case: grep buffers output by default when the destination is a pipe, so you'll see nothing until the buffer fills. --line-buffered forces line-by-line output.
For JSON-formatted logs (most modern apps), jq is the right next step:
docker logs --tail 200 <container> 2>&1 \
| jq -c 'select(.level == "error")'
When the app logs to a file inside the container
This is the most common "docker logs is empty" surprise. If the application writes to /var/log/<app>/<app>.log instead of stdout/stderr, docker logs shows nothing useful.
Two options:
Option A — fix the application (preferred). The cloud-native convention is to log to stdout/stderr. Most modern frameworks support this with one config knob:
- Nginx — `error_log /dev/stderr;` and `access_log /dev/stdout;`
- Apache — `ErrorLog /proc/self/fd/2`, `CustomLog /proc/self/fd/1 common`
- PHP-FPM — `error_log = /proc/self/fd/2`, `access.log = /proc/self/fd/2`
- Python (logging module) — configure a `StreamHandler(sys.stdout)`
- Node — `console.log` / `console.error` go to stdout/stderr by default; if a logger is writing to a file, point its transport at stdout
- Java (Logback) — use `<appender class="ch.qos.logback.core.ConsoleAppender">`
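If you can't change the app's configuration at all, a related trick (the official nginx image uses it) is to symlink the log files to the container's streams at build time. A minimal Dockerfile fragment, with illustrative paths:

# Anything the app writes to these files lands on stdout/stderr instead
# (paths are illustrative; substitute your app's real log files)
RUN ln -sf /dev/stdout /var/log/app/access.log \
 && ln -sf /dev/stderr /var/log/app/error.log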
Option B — read the file from inside the container. Useful when you can't change the app or are debugging an existing image:
docker exec -it <container> tail -f /var/log/nginx/access.log
# Or one-shot:
docker exec <container> cat /var/log/app/app.log
# If `tail` isn't installed in the (slim) image, shell builtins can still read the file:
docker exec <container> sh -c 'while IFS= read -r line; do echo "$line"; done < /var/log/app/app.log'
If the file is on a bind-mounted or named volume, you can also read it directly from the host without docker exec:
docker inspect <container> -f '{{ range .Mounts }}{{ .Source }} -> {{ .Destination }}{{"\n"}}{{ end }}'
# /opt/app/logs -> /var/log/app
sudo tail -f /opt/app/logs/app.log
For larger fleets, the durable fix is to ship application logs through a real logging pipeline (Loki, ELK, Fluentd → S3, etc.) so they're queryable cross-container, not just per-container.
Reading the raw JSON log file directly
When docker logs is slow or unavailable (host is heavily loaded; daemon is unresponsive; you want grep without a docker round-trip), tail the JSON file directly:
sudo tail -f /var/lib/docker/containers/<container-id>/<container-id>-json.log
Each line is one log entry as JSON:
{"log":"2026-05-09T10:11:12 GET /health 200 1.2ms\n","stream":"stdout","time":"2026-05-09T10:11:12.345678901Z"}
For a clean human-readable view:
sudo tail -f /var/lib/docker/containers/<container-id>/<container-id>-json.log \
  | jq -j '.log' | sed -u 's/\r$//'
Find the right path quickly:
docker inspect <container> -f '{{ .LogPath }}'
This is the trick to use when docker logs is hanging — usually a sign the daemon is overloaded or the log file has grown to multiple GB and Docker is being slow to serve it.
Log rotation: configure it before you need it
By default the JSON log file grows without bound. On a busy container that logs heavily (a noisy debug build, a request-logging web server), it can fill the disk in hours.
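A quick way to check whether any container on a host is already headed that way (GNU du and sort assumed):

# Largest JSON log files on this host, biggest last
sudo du -h /var/lib/docker/containers/*/*-json.log | sort -h | tail -5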
Two ways to set rotation:
Per-container (run-time flags):
docker run \
--log-driver json-file \
--log-opt max-size=10m \
--log-opt max-file=5 \
...
Daemon-wide (recommended for production): edit /etc/docker/daemon.json:
{
"log-driver": "json-file",
"log-opts": {
"max-size": "10m",
"max-file": "5"
}
}
Restart Docker (sudo systemctl restart docker) for daemon-level changes to take effect for new containers — running containers keep the configuration they were started with.
That config keeps at most 5 × 10 MB = 50 MB of logs per container, with the oldest rolling off as new logs arrive. Adjust upward if you need more retention; downward if disk is tight.
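If you deploy with Compose, the same two options can be set per service in the compose file; a sketch (the service name is illustrative):

services:
  api:
    logging:
      driver: json-file
      options:
        max-size: "10m"
        max-file: "5"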
To check what a running container is configured for:
docker inspect <container> -f '{{ .HostConfig.LogConfig }}'
# {json-file map[max-file:5 max-size:10m]}
Log drivers beyond json-file
The json-file driver is the default and works for docker logs. Other drivers redirect logs elsewhere; some break docker logs entirely:
| Driver | `docker logs` works? | What it does |
|---|---|---|
| `json-file` | Yes (default) | Writes a JSON file per container |
| `local` | Yes | More compact than `json-file`; binary format |
| `journald` | Yes (reads back from the journal) | Sends to systemd-journald |
| `syslog` | No | Forwards to a syslog daemon |
| `fluentd` | No | Forwards to a Fluentd collector |
| `awslogs` | No | Forwards to AWS CloudWatch Logs |
| `gcplogs` | No | Forwards to Google Cloud Logging |
| `splunk` | No | Forwards to Splunk HEC |
| `none` | No | Discards logs entirely |
If docker logs returns "configured logging driver does not support reading", you've hit one of the forwarding-only drivers — you need to read the logs from wherever they were forwarded.
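Two quick checks confirm which driver you're dealing with, one for the daemon default and one for a specific container:

# Daemon-wide default log driver
docker info --format '{{.LoggingDriver}}'
# Driver a specific container was started with
docker inspect <container> -f '{{ .HostConfig.LogConfig.Type }}'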
For containers running with journald:
journalctl CONTAINER_NAME=<container>
# or by container ID
journalctl CONTAINER_ID=<short-id>
journalctl -f CONTAINER_NAME=<container> # follow
For awslogs or other cloud drivers, log into the cloud's console (or use the cloud's CLI: aws logs tail) — there's no local equivalent.
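With the AWS CLI v2, for example, the rough equivalent of docker logs -f looks like this (the log group name is illustrative):

# Follow a CloudWatch log group, starting 30 minutes back
aws logs tail /ecs/my-api --follow --since 30m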
Docker Compose
docker compose logs is the multi-service equivalent. Same flag shape:
docker compose logs # all services, all logs
docker compose logs --tail 100 -f # follow last 100 lines from all services
docker compose logs -f api # only the 'api' service
docker compose logs -f api worker # multiple specific services
# Last 30 minutes from all services
docker compose logs --since 30m
By default Compose colours the service name in the output and prefixes each line — convenient for distinguishing which service emitted what when several are streaming together.
-t / --timestamps works the same as with docker logs.
Kubernetes (the very-similar cousin)
For completeness, since Kubernetes deployments often start as Docker Compose and grow up:
kubectl logs <pod> # all logs from the pod's current container
kubectl logs --tail=100 <pod> # last 100
kubectl logs -f <pod> # follow
kubectl logs --since=10m <pod> # last 10 minutes
kubectl logs <pod> -c <container> # specific container in a multi-container pod
kubectl logs --previous <pod> # logs from the previous (crashed) instance
# All pods of a deployment
kubectl logs -l app=my-api -f --tail=100 --max-log-requests=10
The mental model is the same: stdout/stderr captured by the runtime; kubectl logs reads it back. The flags differ slightly (-c for containers, --previous for the crashed instance — and when you query by label selector, kubectl defaults to the last 10 lines per pod rather than full history), but the underlying behaviour is parallel.
Continuous monitoring (production)
Tailing logs interactively is incident response. Catching a problem before it requires tailing logs is monitoring — and that's a different stack.
What good container log monitoring looks like:
- Ship logs to a central store — Loki, ELK, OpenSearch, CloudWatch, or a hosted equivalent. Per-container `docker logs` doesn't scale past about a dozen hosts.
- Alert on log patterns — error rate above a threshold, specific stack traces, OOM-kill messages. The point is to discover problems by signal, not by clicking through every container's logs.
- Correlate logs with metrics — a sudden burst of `ERROR` lines should sit on the same timeline as a CPU spike or a 5xx rate increase. The correlation is what tells you whether the errors caused the spike or vice versa.
For the host-level container view (CPU, memory, restarts, exit codes per container — the data that complements logs), see the existing KB on How to monitor Docker, and the broader server monitoring overview. Pair with Xitogent on the host to surface container restarts and exit codes alongside logs in one dashboard. For the upstream-of-Docker context (the host's CPU and processes), the companion articles are How to monitor CPU usage on Linux and How to check running processes on Ubuntu server.
Operational tips
- `docker logs --tail 100 -f --timestamps` is the muscle-memory invocation. Make it an alias if you type it a lot: `alias dlog='docker logs --tail 100 -f --timestamps'`, then `dlog my-api`.
- Set log rotation in `daemon.json` once, on every host. A container that fills `/var/lib/docker` is a Saturday-evening incident waiting to happen. `max-size: 10m`, `max-file: 5` is a sensible default.
- `--since` is your friend. For incidents, `--since <when-things-broke>` is much faster than scrolling. Format is duration (`30m`, `2h`) or RFC 3339.
- App logs to stdout, host handles the rest. Don't write to log files inside containers. The cloud-native pattern is "log to stdout, let Docker / Kubernetes / your log driver figure out where it goes". You'll thank yourself when the container restarts and you don't lose the last hour of logs.
- `--no-color` if you're piping Compose output to a file. Some apps emit ANSI colour codes; they look great in a terminal and ugly in a file. `docker compose logs --no-color` keeps output clean (plain `docker logs` has no such flag; it passes the app's bytes through untouched).
- Use exact timestamps, not "the last hour". Log entries between hosts are usually compared by time. Confirming your hosts are NTP-synced is part of the log-debugging workflow — see How to set up an NTP server on CentOS 7 for the broader context.
- Don't run `docker logs <container> | wc -l` casually. On a long-running container with no rotation, that's "read the entire JSON log file from disk" — can be tens of GB.
- Slim images may not have `tail`/`cat`. `distroless` and `scratch` images deliberately ship without shell utilities (`alpine` keeps a BusyBox shell, so it's usually fine). `docker exec <c> tail` will fail there. Either use `docker logs` (which doesn't need anything inside the container) or attach a throwaway debug container that shares the target's volumes (see the sketch after this list).
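One way to run that debug-container approach, assuming the log file lives on a volume, is to share the target's volumes with a throwaway image:

# Read a log file from a shell-less container via its volumes
# (path is illustrative; this only works if the file is on a volume)
docker run --rm --volumes-from <container> alpine tail -n 50 /var/log/app/app.log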
Troubleshooting
- `docker logs` is empty even though the app is doing things. The app is logging to a file inside the container instead of stdout. See the "app logs to file" section above — fix the app to log to stdout, or `docker exec` to read the file.
- `docker logs` returns "configured logging driver does not support reading". You're on `syslog` / `fluentd` / `awslogs` / similar — read the logs from wherever they were forwarded, not from `docker logs`. `docker inspect <container> -f '{{ .HostConfig.LogConfig }}'` confirms the driver.
- `docker logs` is slow or hangs. The JSON log file has probably grown past 1 GB. `du -sh /var/lib/docker/containers/<id>/<id>-json.log` to confirm. Configure rotation, then optionally `truncate -s 0` the file (the running container will keep writing to it; just expect a moment of inconsistency for any active tail).
- `docker logs -f` shows nothing live but the app is running. Check whether the app is buffering its stdout (Python, especially: set `PYTHONUNBUFFERED=1`, or run `python -u`; see the example after this list). Buffered output only flushes on a full buffer or process exit, which makes it look like the app is silent.
- `docker compose logs` mixes services in a confusing order. Add `--no-color` if you're piping; use `-f api` to follow a specific service; or open multiple terminals, one per service.
- Logs disappear after a container restart. With `json-file` and `local` the log file is per container instance: it survives a plain restart but is discarded when the container is removed or recreated (a Compose re-deploy, for example). Kubernetes keeps the crashed instance's logs behind `kubectl logs --previous`; plain Docker has no equivalent, so ship logs to a central store before they disappear.
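For the Python buffering case specifically, either of these works; the image name and command are illustrative:

# Unbuffered stdout, so `docker logs -f` shows lines immediately
docker run -d -e PYTHONUNBUFFERED=1 my-python-image
# Equivalent: pass -u to the interpreter
docker run -d my-python-image python -u app.py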
Summary
To tail Docker container logs effectively:
- Default invocation: `docker logs --tail 100 -f --timestamps <container>`. Last 100 lines, then live stream, with timestamps.
- Time windows: `--since 30m` (or `--since 2026-05-09T10:00:00`) for incident triage. `--until` to bound the upper edge.
- Stream separation: shell redirection works — `2>/dev/null` for stdout-only, `2>&1 >/dev/null` for stderr-only.
- Search: `| grep -i error`. For live streams, add `--line-buffered` so grep doesn't sit on a buffer.
- App-logs-to-file gotcha: change the app to log to stdout, or `docker exec <c> tail -f /path/to/log`. Don't lose logs to the container filesystem.
- Read the raw JSON file at `/var/lib/docker/containers/<id>/<id>-json.log` when `docker logs` is slow.
- Configure rotation in `daemon.json`: `max-size: 10m`, `max-file: 5`. Once, on every host. The default is unlimited, which fills disks.
- Compose: `docker compose logs --tail 100 -f <service>`. Same flag shape.
- Kubernetes: `kubectl logs --tail=100 -f <pod>` — flag shape differs slightly, behaviour is the same.
- Don't rely on `docker logs` for production observability. Ship to a central store, alert on patterns, correlate with metrics.
Tailing logs well is a 10-second operation once the muscle memory is in place. The bigger wins are in the configuration that doesn't change day-to-day: log rotation, stdout-as-default in the app, and a real log pipeline once the fleet is more than a handful of containers. Set those up once and the next incident is a matter of --since 5m and grep — not a Saturday spent triaging "where did the last hour of logs go?"