Docker's memory reporting has become fundamentally unreliable since the shift to cgroups v2. What used to be straightforward memory tracking now involves multiple accounting systems that report different values for the same containers. The problem isn't just cosmetic — it breaks capacity planning, alert thresholds, and resource allocation decisions across production environments.
This guide walks through a systematic approach to reconcile Docker's reported memory usage with actual host consumption using direct cgroups analysis.
Step 1: Identify Your cgroups Version
First, determine which cgroups version your system uses, as this fundamentally changes how memory is tracked.
Check your cgroups mount:
$ mount | grep cgroup
cgroup2 on /sys/fs/cgroup type cgroup2 (rw,nosuid,nodev,noexec,relatime)
If you see cgroup2, you're running the unified hierarchy. If you see multiple cgroup entries for different controllers (memory, cpu, blkio), you're still on cgroups v1. Most distributions shipped after 2022 default to v2.
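If you want to script this check, the mount line can be classified mechanically. A minimal sketch — the `classify_cgroup` helper name is ours, not a Docker tool:

```shell
# Hypothetical helper: classify a line from `mount | grep cgroup`
classify_cgroup() {
  case "$1" in
    *" type cgroup2 "*) echo "v2" ;;  # unified hierarchy
    *" type cgroup "*)  echo "v1" ;;  # legacy per-controller mounts
    *)                  echo "unknown" ;;
  esac
}

classify_cgroup "cgroup2 on /sys/fs/cgroup type cgroup2 (rw,nosuid,nodev,noexec,relatime)"
```

Alternatively, `stat -fc %T /sys/fs/cgroup` prints `cgroup2fs` on a v2 system and `tmpfs` on a v1 system.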
Step 2: Locate Container cgroup Paths
Find where your container's resource limits are actually enforced. Docker creates cgroups in predictable locations, but the path structure differs between systemd and non-systemd systems.
For systemd-managed Docker (most common):
ls /sys/fs/cgroup/system.slice/docker-{CONTAINER_ID}.scope/
For non-systemd Docker:
ls /sys/fs/cgroup/docker/{CONTAINER_ID}/
Replace {CONTAINER_ID} with your full container ID from docker ps --no-trunc.
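A small probe can try both layouts and print whichever exists. This is a sketch; the `find_cgroup` name and the optional root argument (useful for testing) are ours:

```shell
# Hypothetical helper: locate a container's cgroup directory under either layout
find_cgroup() {
  id="$1"                       # full container ID from `docker ps --no-trunc`
  root="${2:-/sys/fs/cgroup}"   # overridable cgroup root, mainly for testing
  for p in "$root/system.slice/docker-$id.scope" "$root/docker/$id"; do
    if [ -d "$p" ]; then
      echo "$p"
      return 0
    fi
  done
  echo "no cgroup found for $id" >&2
  return 1
}

# Usage: find_cgroup "$(docker ps --no-trunc -qf name=myapp)"
```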
Step 3: Compare Docker Stats with cgroup Files
Now examine what Docker reports versus what the kernel tracks. Run docker stats --no-stream {container_name} and note the memory usage. Then read the actual cgroup memory files.
For cgroups v2, check these key files in your container's cgroup directory:
- memory.current — total memory currently used
- memory.max — memory limit (replaces memory.limit_in_bytes from v1)
- memory.stat — detailed breakdown by memory type
The discrepancy usually appears in memory.stat. Docker stats calculates usage as memory.current minus inactive_file, but many monitoring tools read memory.current directly, so the two disagree whenever the container holds significant file cache.
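That subtraction is easy to reproduce straight from the cgroup files. A sketch, assuming the cgroups v2 file formats (all values in bytes); the `docker_style_usage` helper name is ours:

```shell
# Reproduce the docker stats number: memory.current minus inactive_file
docker_style_usage() {
  dir="$1"   # the container's cgroup directory
  current=$(cat "$dir/memory.current")
  inactive=$(awk '$1 == "inactive_file" { print $2 }' "$dir/memory.stat")
  echo $(( current - inactive ))
}
```

Reading memory.current alone gives the raw kernel figure; the function above gives the figure Docker reports. Comparing the two quantifies your discrepancy directly.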
Step 4: Analyse Memory Types Breaking Your Monitoring
Examine what's hidden in the memory.stat breakdown:
cat /sys/fs/cgroup/system.slice/docker-{CONTAINER_ID}.scope/memory.stat
Look for these common culprits:
- inactive_file — reclaimable file cache that Docker subtracts from usage but that still appears in memory.current
- shmem — shared memory segments, tmpfs mounts
- kernel_stack — kernel memory allocated for container processes
- sock — socket buffers not counted in application memory
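An awk filter can pull just those fields out of memory.stat. A sketch, assuming the v2 flat key-value format; the `show_culprits` name is ours:

```shell
# Hypothetical helper: print the accounting categories that most often explain the gap
show_culprits() {
  awk '$1 ~ /^(inactive_file|shmem|kernel_stack|sock)$/ { printf "%s %d\n", $1, $2 }' "$1"
}

# Example: show_culprits /sys/fs/cgroup/system.slice/docker-{CONTAINER_ID}.scope/memory.stat
```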
Step 5: Build Host-Level Memory Cross-Reference
Compare your cgroup analysis with host-level reporting. Check /proc/meminfo and match the numbers against your container's cgroup accounting.
For containers with significant shared memory or tmpfs usage, the host may show pressure that doesn't appear in Docker stats. This commonly happens with database containers using shared memory segments or applications mounting /tmp as tmpfs.
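For the shared-memory case specifically, you can set the container's shmem figure (bytes, from memory.stat) against the host-wide Shmem line in /proc/meminfo (kilobytes). A sketch; the `shmem_share` helper and its two file arguments are ours:

```shell
# Hypothetical helper: compare container shmem against host-wide Shmem
shmem_share() {
  stat_file="$1"     # the container's memory.stat
  meminfo="$2"       # normally /proc/meminfo
  container=$(awk '$1 == "shmem" { print $2 }' "$stat_file")   # bytes
  host_kb=$(awk '$1 == "Shmem:" { print $2 }' "$meminfo")      # kilobytes
  echo "container shmem: $container bytes of $(( host_kb * 1024 )) host bytes"
}
```

Mind the units: memory.stat reports bytes, /proc/meminfo reports kB, and mixing them is a classic source of phantom discrepancies.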
Step 6: Account for Memory Pressure Effects
Memory pressure changes how cgroups v2 reports usage. Under pressure, the kernel reclaims inactive memory more aggressively, creating timing-dependent reporting differences.
Check if your container has experienced memory pressure:
cat /sys/fs/cgroup/system.slice/docker-{CONTAINER_ID}.scope/memory.pressure
Any non-zero values indicate the kernel has been actively reclaiming memory, which affects the accuracy of point-in-time memory readings.
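The file uses the kernel's PSI format: a `some` line (at least one task stalled on memory) and a `full` line (all tasks stalled), each with rolling averages and a cumulative `total` in microseconds. A sketch for extracting the cumulative counter; the `pressure_total` name is ours:

```shell
# Hypothetical helper: read the cumulative stall time (microseconds) from a PSI file
pressure_total() {
  file="$1"   # a memory.pressure file
  kind="$2"   # "some" or "full"
  awk -v kind="$kind" '$1 == kind { sub(/total=/, "", $5); print $5 }' "$file"
}
```

Sampling this counter before and after an interval gives stall time per interval, which is a more robust pressure signal than a single point-in-time average.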
Common Scenarios Where Docker Stats Mislead
Shared Memory Applications
Databases and caching applications often allocate large shared memory segments that appear differently in cgroups v1 versus v2 accounting. PostgreSQL's shared_buffers and Redis's memory allocation can show 30-40% differences between Docker stats and actual memory pressure.
tmpfs Mount Points
Containers mounting /tmp, /var/run, or application directories as tmpfs create memory usage that shifts between accounting categories: tmpfs pages are charged to the cgroup as shmem. Docker stats may not surface this usage in its memory calculation, while the host kernel counts it against the container's memory limit.
Building Accurate Container Memory Monitoring
Effective container memory monitoring requires tracking both Docker's calculated usage and direct cgroup accounting. Server Scout's lightweight monitoring approach handles this complexity by reading cgroup files directly rather than relying on Docker's API calculations.
The key is establishing baselines for your specific application patterns. A container running MySQL with 4GB shared_buffers will show different accounting patterns than one running a stateless web application.
For environments running hundreds of containers, these accounting differences compound into significant capacity planning errors. Understanding the resource overhead of your monitoring solution itself becomes crucial when every megabyte matters for accurate resource allocation.
Memory Limit Enforcement Reality
Remember that cgroups v2 enforces limits at the memory.max boundary regardless of how Docker stats calculates usage. Your container can be OOM-killed even when Docker stats shows memory below the limit, if kernel memory or temporary file usage pushes total consumption over the cgroup boundary.
This enforcement mechanism operates independently of Docker's reporting layer. Monitoring the actual cgroup metrics provides the ground truth your alerting decisions should be based on.
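One way to make that ground truth actionable is to watch the cgroup's own OOM counters in memory.events, which increment even when Docker stats looked healthy at the time. A sketch; the `oom_kills` name is ours:

```shell
# Hypothetical helper: read the oom_kill counter from a cgroup v2 memory.events file
oom_kills() {
  awk '$1 == "oom_kill" { print $2 }' "$1"
}

# Example: oom_kills /sys/fs/cgroup/system.slice/docker-{CONTAINER_ID}.scope/memory.events
```

Alerting on a rising oom_kill count catches enforcement events that a usage-threshold alert built on Docker stats can miss entirely.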
The accounting differences between Docker stats and cgroups v2 aren't bugs — they're fundamental design choices about what constitutes "application memory usage." Understanding both perspectives lets you build monitoring that catches resource problems before they affect application performance.
FAQ
Why does Docker stats show lower memory usage than my host monitoring tools?
Docker stats subtracts inactive file cache from total usage, while host tools often read memory.current directly from cgroups. Both are technically correct but measure different things.
Can I force Docker to use the same memory accounting as cgroups v1?
No, cgroups v2 uses a unified hierarchy that can't be downgraded per-application. You need to adapt your monitoring to understand both calculation methods.
How often do these memory accounting differences cause real problems?
Most commonly with database containers using shared memory or applications with significant tmpfs usage. The differences can be 20-40% of reported usage in these scenarios.