
Docker Memory Limits vs Host Reporting: Why cgroups v2 Changes Everything

Server Scout

Your container monitoring dashboard shows 2GB memory usage. The host system reports 4GB for the same container. Both are technically correct, and that's exactly the problem.

This discrepancy hits production teams hard when they're trying to set meaningful alerts or debug memory pressure issues. The container sees its world through the lens of Docker's memory accounting, while the host system tracks every byte the container process actually consumes.

The cgroups v2 Difference

Most modern distributions now default to cgroups v2, which fundamentally changed how memory accounting works for containers. Unlike cgroups v1, which reported container memory usage in a somewhat opaque manner, v2's unified hierarchy exposes a per-cgroup memory.stat file with a much more granular breakdown: anonymous memory, file cache, shared memory, and kernel allocations are each reported separately.

The key difference is that Docker's metrics reflect the container's cgroup accounting (docker stats reports the cgroup's usage minus its reclaimable file cache), while the host tracks per-process RSS (Resident Set Size). Summed across a container's processes, RSS counts shared library pages once per process, so it can exceed what Docker counts against the container's limit.
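On a cgroups v2 system you can see that breakdown directly. Below is a minimal sketch; mem_breakdown is a hypothetical helper that takes the container's cgroup directory as its argument.

```shell
# Print the headline categories from a cgroup's memory.stat.
# memory.stat is "key value" per line; we pick out a few top-level keys.
mem_breakdown() {
  local cgdir="$1"
  awk '$1 ~ /^(anon|file|shmem|slab)$/ {print $1, $2}' "$cgdir/memory.stat"
}
```

Comparing the anon and file lines over time shows whether growth is application memory or just page cache the kernel can reclaim.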

Here's what actually happens when you run docker run -m 1g myapp. The container can allocate up to 1GB according to the cgroup limit, but the host might report 1.2GB or more for the same processes, because per-process RSS sums double-count shared pages and include memory that Docker's accounting treats differently.

Reading the Right Files

For container memory monitoring that matches Docker's own accounting, read /sys/fs/cgroup/memory/docker/{container_id}/memory.usage_in_bytes on cgroups v1 systems. On cgroups v2 the location depends on Docker's cgroup driver: /sys/fs/cgroup/docker/{container_id}/memory.current with the cgroupfs driver, or /sys/fs/cgroup/system.slice/docker-{container_id}.scope/memory.current under the systemd driver that most distributions default to.
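Detecting which hierarchy you are on is straightforward: cgroups v2 exposes a cgroup.controllers file at the mount root. The sketch below assumes the cgroupfs driver's docker/ layout for v2 (adjust for the systemd driver's system.slice naming), and makes the root overridable via CGROUP_ROOT so it can be exercised outside a real host.

```shell
# Return a container's memory usage in bytes, picking the v1 or v2 file.
container_mem_bytes() {
  local id="$1"
  local root="${CGROUP_ROOT:-/sys/fs/cgroup}"
  if [ -f "$root/cgroup.controllers" ]; then
    # cgroups v2 unified hierarchy (cgroupfs driver layout assumed)
    cat "$root/docker/$id/memory.current"
  else
    # cgroups v1 memory controller
    cat "$root/memory/docker/$id/memory.usage_in_bytes"
  fi
}
```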

These files give you Docker's view of memory usage, which should align with what docker stats reports. However, if you're running a monitoring dashboard that reads from /proc/{pid}/status, you'll get the host's perspective, which includes everything the kernel associates with those processes.
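The host-side reading can be sketched the same way. rss_kb is a hypothetical helper, with PROC_ROOT overridable for testing; in practice the PID would come from docker inspect, e.g. pid=$(docker inspect -f '{{.State.Pid}}' mycontainer).

```shell
# Read a process's RSS in kB from /proc/<pid>/status.
# The VmRSS line looks like: "VmRSS:     204800 kB"
rss_kb() {
  local pid="$1"
  local root="${PROC_ROOT:-/proc}"
  awk '/^VmRSS:/ {print $2}' "$root/$pid/status"
}
```

Comparing this number against container_mem_bytes for the same container is exactly where the discrepancy described above shows up.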

The practical impact is significant. Setting alerts based on host-level memory readings can trigger false positives when containers are operating normally within their Docker-imposed limits. Conversely, relying solely on Docker's metrics might miss situations where the host is under genuine memory pressure from container overhead.

Monitoring Both Perspectives

The solution isn't to pick one metric over the other, but to monitor both and understand when they diverge. A healthy container should show relatively stable ratios between Docker's memory accounting and the host's view.

When that ratio starts shifting, investigate whether the container is leaking memory in ways that Docker doesn't track, or if shared library usage patterns have changed. Check the cgroup documentation for the specific memory types included in each accounting method.

For production environments, track both metrics but alert primarily on Docker's view for container-specific issues and the host view for overall system health. This dual approach catches both application memory leaks and host-level resource exhaustion.
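One way to encode that dual rule is below. The thresholds (90% of the limit, 150% host/Docker drift) are illustrative assumptions, not Server Scout defaults.

```shell
# Dual-threshold check: flag when Docker's usage nears the cgroup limit,
# and separately when host RSS drifts well above Docker's own figure.
check_container_mem() {
  local docker_bytes="$1" limit_bytes="$2" host_bytes="$3"
  local use_pct=$(( docker_bytes * 100 / limit_bytes ))
  local drift_pct=$(( host_bytes * 100 / docker_bytes ))
  if [ "$use_pct" -ge 90 ]; then
    echo "ALERT container near limit: ${use_pct}% of memory limit"
  fi
  if [ "$drift_pct" -ge 150 ]; then
    echo "WARN host RSS is ${drift_pct}% of Docker's figure"
  fi
}
```

A healthy container prints nothing; a leak inside the container trips the first check, while growing shared or untracked memory trips the second.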

Server Scout's bash-based agent reads directly from cgroup files to provide accurate container memory tracking without the overhead of Docker API calls, making it easier to monitor both perspectives from a single dashboard.

Ready to Try Server Scout?

Start monitoring your servers and infrastructure in under 60 seconds. Free for 3 months.

Start Free Trial