The Problem: Production Processes That Can't Pause
You've got a critical service showing steady memory growth over days or weeks. It's handling live traffic, database connections, or customer requests. You know valgrind would give you the answers, but restarting with debugging enabled means downtime. The process needs to keep running while you figure out where those extra megabytes are going each hour.
This is where /proc filesystem analysis becomes your primary tool. Linux exposes detailed memory information for every running process, and you can extract surprisingly specific leak patterns without touching the application itself.
Reading /proc/PID/smaps for Memory Pattern Analysis
The smaps file breaks down every memory mapping for a process, showing exactly where growth is happening:
grep -B 1 -A 10 "heap\|anon" /proc/12345/smaps
This reveals anonymous memory regions (typically malloc allocations) and heap segments. For each mapping, you'll see:
- Size: total virtual memory of the mapping
- Rss: physical memory actually resident
- Pss: proportional share (useful for shared libraries)
- Private_Dirty: memory modified by this process only
A growing heap with increasing Private_Dirty values over time points to classic malloc leaks. Anonymous mappings that keep expanding suggest mmap-based allocations that aren't being freed.
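Those per-mapping values are easiest to track as a single number. A minimal sketch: sum Private_Dirty across every mapping, run it on a schedule, and compare successive totals. A steadily rising sum is the classic malloc-leak signature described above.

```shell
#!/bin/sh
# Sum Private_Dirty across all mappings of a process (PID as $1).
# smaps reports these values in kB, one "Private_Dirty:" line per mapping.
pid="$1"
awk '/^Private_Dirty:/ { sum += $2 } END { printf "%d kB\n", sum }' "/proc/$pid/smaps"
```

Run it twice, ten minutes apart, and subtract; anything beyond your application's normal working-set churn deserves a closer look.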
Tracking Growth Patterns Over Time
Raw /proc data is only useful when tracked historically. Set up a simple monitoring loop:
while true; do
echo "$(date): $(grep VmRSS /proc/12345/status | awk '{print $2}')"
sleep 300
done >> memory_tracking.log
But parsing log files by hand gets tedious quickly. Automated production memory-leak monitoring tracks the same patterns and alerts when growth exceeds normal bounds, letting you spot the trend before it becomes a crisis.
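Even without a monitoring system, the log produced by the loop above reduces to one growth figure. This sketch assumes each line ends in the VmRSS value in kB, which is what the loop writes:

```shell
#!/bin/sh
# Summarise memory_tracking.log into a single growth figure.
# Each log line ends in a VmRSS reading (kB), so $NF is the value.
awk '
  NR == 1 { first = $NF }   # remember the first sample
  { last = $NF; n = NR }    # keep overwriting with the latest sample
  END { if (n > 1) printf "growth: %d kB over %d samples\n", last - first, n }
' memory_tracking.log
```

With the 300-second sleep above, dividing that figure by the sample count gives kB per five minutes, which is the rate worth alerting on.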
Using pmap for Library-Specific Analysis
When smaps shows growth but you need to isolate which libraries or code sections are involved:
pmap -x 12345 | sort -k3 -nr
This sorts memory mappings by RSS (resident memory), showing which shared libraries or heap regions are consuming the most physical RAM. Growing entries in subsequent runs indicate where the leak originates.
For applications with complex memory pools (database servers, application servers), look for mappings that grow disproportionately compared to actual workload.
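Spotting "growing entries in subsequent runs" is easier with a diff than by eye. A sketch, assuming two pmap -x snapshots saved as snap1.txt and snap2.txt (the file names are placeholders; pmap -x prints Address, Kbytes, RSS, Dirty, Mode, Mapping):

```shell
#!/bin/sh
# Take two snapshots a few minutes apart:
#   pmap -x 12345 > snap1.txt; sleep 600; pmap -x 12345 > snap2.txt
# Then report every mapping whose RSS (column 3) grew between them.
awk '
  NR == FNR { rss[$1] = $3; next }           # first file: address -> RSS kB
  ($1 in rss) && ($3 > rss[$1]) {            # second file: print the growth
    printf "%s %s: +%d kB\n", $1, $NF, $3 - rss[$1]
  }
' snap1.txt snap2.txt
```

Mappings that appear in every diff are your leak candidates; one-off growth is usually just workload.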
Correlating Memory Growth with Application Behaviour
Memory leaks rarely happen in isolation. Cross-reference your memory tracking with:
- Connection counts: ss -tn state established | wc -l
- File descriptor usage: ls /proc/12345/fd | wc -l
- Thread counts: ps --no-headers -T -p 12345 | wc -l
Leaks often correlate with specific operations. A process that grows memory alongside connection counts suggests per-connection allocations that aren't cleaned up. Growing memory with stable connections points to request-handling leaks.
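The correlation is easiest to see when all four numbers land on one log line per interval. A sketch reusing the example PID 12345 from above; adjust to your service:

```shell
#!/bin/sh
# One timestamped line per run: RSS plus the counters it may track.
pid=12345
rss=$(awk '/^VmRSS:/ { print $2 }' "/proc/$pid/status")
conns=$(ss -Htn state established | wc -l)        # -H suppresses the header
fds=$(ls "/proc/$pid/fd" 2>/dev/null | wc -l)
threads=$(ps --no-headers -T -p "$pid" | wc -l)
echo "$(date +%s) rss=${rss}kB conns=$conns fds=$fds threads=$threads"
```

Feed it into the same five-minute loop as the VmRSS tracker; a column that rises in lockstep with rss is where to focus.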
The Linux kernel documentation provides complete details on /proc filesystem entries, including less common memory tracking files that can provide additional insight for complex debugging scenarios.
When You've Found the Pattern
Once you've isolated the leak to specific memory regions and correlated it with application behaviour, you can often implement targeted fixes or workarounds without needing full debugging sessions. Memory pool limits, connection timeouts, or periodic cleanup routines can contain leaks while you plan proper debugging with development tools.
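One common containment, assuming the service runs under systemd (the unit name here is hypothetical), is a cgroup memory cap plus automatic restart, which turns a slow leak into a brief, bounded restart instead of an outage:

```ini
# /etc/systemd/system/myservice.service.d/override.conf  (hypothetical unit)
[Service]
# Kernel-enforced cgroup limit: the service is OOM-killed past 2 GiB
MemoryMax=2G
# Bring it back automatically so the leak resets instead of causing downtime
Restart=on-failure
RestartSec=5
```

Apply with systemctl daemon-reload followed by a restart of the unit. This is a stopgap, not a fix, but it buys time for proper debugging.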
Server Scout's memory monitoring includes automatic leak detection based on these same /proc analysis techniques, with configurable thresholds that account for normal application growth patterns versus problematic trends.