
When ps Shows Different Memory Usage Than free: Understanding Linux Memory Accounting

Server Scout

The Mystery Numbers

You're investigating high memory usage on a production server. Running free -h shows 14GB used out of 16GB total, but when you add up all the RSS values from ps aux, you get maybe 8GB. The math doesn't work, and now you're wondering if your monitoring is lying to you.

This discrepancy isn't a bug - it's how Linux memory accounting actually works. Understanding the difference can save you hours of chasing phantom memory leaks and help you interpret your server metrics correctly.

Shared Memory Changes Everything

The biggest culprit is shared memory. When multiple processes use the same shared libraries (like glibc), that memory gets counted once in the kernel's accounting but appears in the RSS of every process that maps it.

A typical Apache server with 50 worker processes might show 100MB RSS per process in ps, suggesting 5GB total usage. But since they share the same executable and libraries, the actual physical memory used is closer to 2GB.
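You can reproduce the over-count yourself by summing per-process RSS. This sketch reads VmRSS straight from /proc (equivalent to summing the RSS column of ps aux); on a busy box the total can comfortably exceed physical RAM:

```shell
# Sum VmRSS (in kB) across all processes. Shared library and
# copy-on-write pages are counted once per process, which is
# exactly why this total overstates real physical memory use.
awk '/^VmRSS:/ { sum += $2 } END { printf "%.1f MB apparent RSS\n", sum / 1024 }' /proc/[0-9]*/status
```

Compare the result against the "used" figure from free -h to see how large the double-counting gets on your own servers.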

Similarly, when a process calls fork(), Linux uses copy-on-write semantics: the child shows the parent's full memory footprint in ps even though the pages are physically shared until one side writes to them.

The Kernel's Hidden Allocations

Kernel memory doesn't appear in process lists at all. Buffer cache, slab allocations, and kernel data structures can consume gigabytes on busy servers. The slabtop command reveals some of this:

sudo slabtop -o | head -10

On a server handling lots of network connections, you might find hundreds of megabytes allocated to socket buffers and connection tracking structures.
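The slab totals also appear in /proc/meminfo, so you can get the headline numbers without slabtop:

```shell
# Slab = SReclaimable + SUnreclaim. None of this kernel memory
# appears in any process's RSS, yet it counts toward "used" in free.
grep -E '^(Slab|SReclaimable|SUnreclaim):' /proc/meminfo
```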

Memory-Mapped Files Complicate Things Further

Applications increasingly use memory-mapped I/O for performance. When a database maps its data files into memory, those pages show up in the process's RSS, but they're really just cached file data that the kernel can reclaim instantly.

This is why an mmap-based database like LMDB with a 4GB data file might show 4GB of RSS even when its actual anonymous memory (heap allocations) is only 500MB.
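You can see this split for any process in /proc/PID/status (RssAnon and friends appeared in kernel 4.5). Here /proc/self/status, i.e. the grep process itself, stands in for a real PID:

```shell
# RssFile is mapped file data the kernel can drop instantly;
# RssAnon is heap/stack memory it cannot reclaim without swapping.
grep -E '^(VmRSS|RssAnon|RssFile|RssShmem):' /proc/self/status
```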

What Your Monitoring Should Actually Track

Instead of trying to reconcile process-level and system-level memory stats, focus on what matters:

  • Available memory from /proc/meminfo tells you how much RAM the kernel could free up immediately
  • Anonymous RSS shows memory that can't be reclaimed (actual application allocations)
  • Swap usage indicates real memory pressure
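A minimal sketch of the first metric, the available fraction computed straight from /proc/meminfo:

```shell
# MemAvailable is the kernel's own estimate of how much memory could be
# handed out without swapping, including reclaimable cache.
awk '/^MemTotal:/ { t = $2 } /^MemAvailable:/ { a = $2 } \
     END { printf "%.0f%% of RAM available\n", 100 * a / t }' /proc/meminfo
```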

A good monitoring system tracks these kernel-level metrics because they reflect the true memory state better than summing up individual processes.

Practical Debugging Tips

When hunting actual memory issues, /proc/meminfo is your friend:

grep -E '^(MemTotal|MemFree|MemAvailable|Buffers|Cached|AnonPages):' /proc/meminfo

If AnonPages is growing steadily, you've got a real leak. If Cached is consuming most memory, that's usually healthy - the kernel is just caching file data.
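One way to check for steady AnonPages growth is to sample it over time. A rough sketch (three samples here; in practice you'd run it for much longer):

```shell
# Print a timestamped AnonPages reading every few seconds.
# A value that climbs without ever plateauing points at a real leak.
for i in 1 2 3; do
    awk -v ts="$(date +%T)" '/^AnonPages:/ { print ts, $2, $3 }' /proc/meminfo
    sleep 5
done
```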

For process-specific investigation, look at /proc/PID/smaps_rollup rather than the RSS column in ps. It breaks down exactly what type of memory each process is using.

The Linux kernel documentation explains each field in detail, but Pss (Proportional Set Size) gives you a more accurate picture of per-process memory impact than RSS.

Making Sense of the Numbers

Next time your memory metrics don't add up, remember that Linux memory management is designed for sharing and efficiency, not accounting clarity. The kernel's view (what free shows) reflects physical reality better than the sum of process views.

Server Scout tracks the system-level metrics that actually matter for capacity planning, so you can focus on real issues rather than accounting mysteries.

Ready to Try Server Scout?

Start monitoring your servers and infrastructure in under 60 seconds. Free for 3 months.

Start Free Trial