PostgreSQL's max_connections parameter shows 100. Your connection count query returns 45. Yet new connections fail with "too many connections" errors. This contradiction points to connection pool leaks hiding in socket states that standard PostgreSQL monitoring never reveals.
Connection pool exhaustion often manifests as phantom connections — sockets that appear available to PostgreSQL but remain unusable due to improper cleanup. The /proc/net/tcp interface exposes these hidden states through raw TCP socket data that correlates directly with PostgreSQL backend processes.
Understanding the Connection Exhaustion Paradox
PostgreSQL tracks active connections through its internal process table, but the operating system maintains separate socket state information. When applications fail to properly close connections, sockets can remain in CLOSE_WAIT or FIN_WAIT states while PostgreSQL considers the connection slot available.
This creates a scenario where SELECT count(*) FROM pg_stat_activity shows normal connection counts, but the underlying TCP sockets consume system resources and prevent new connections from establishing properly.
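To make the mismatch visible, compare the kernel's count of sockets on the PostgreSQL port with what pg_stat_activity reports. A minimal sketch, assuming Linux and the default port 5432 (1538 in hex); port_sockets is a helper name introduced here, and the optional file argument exists only so the parsing can be exercised on sample data:

```shell
# Count kernel TCP sockets whose local port is 5432 (hex 1538).
# Takes an optional file argument so the logic can be tested on samples.
port_sockets() {
    awk 'NR>1 && $2 ~ /:1538$/' "${1:-/proc/net/tcp}" 2>/dev/null | wc -l
}

echo "kernel sockets on 5432: $(port_sockets)"
# Compare with PostgreSQL's own view (uncomment if psql is available):
# psql -Atc "SELECT count(*) FROM pg_stat_activity"
```

When the kernel's number is noticeably higher than the pg_stat_activity count, leaked sockets are the likely explanation.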
Step 1: Mapping PostgreSQL Processes to Network Sockets
Start by identifying all PostgreSQL backend processes currently running on your system. Use ps aux | grep postgres to list processes, then correlate each backend PID with its network socket usage.
Identifying PostgreSQL Backend PIDs
Extract PostgreSQL backend process IDs by filtering the ps output with awk. One workable filter matches the "postgres:" process title and excludes the auxiliary workers by name: ps aux | awk '/postgres:/ && !/checkpointer|walwriter|autovacuum|logger|awk/ {print $2, $11, $12}'. This isolates client backends while excluding auxiliary PostgreSQL processes like the checkpointer or autovacuum launcher; the !/awk/ term drops the awk process itself, which would otherwise match its own command line.
For each backend PID, examine its file descriptors in /proc/[pid]/fd/ to identify network sockets. Socket file descriptors appear as symbolic links pointing to socket inodes in the format socket:[inode_number].
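The file-descriptor walk described above can be sketched as a small helper. socket_inodes is a name introduced here, the PROCFS override exists purely to make the parsing testable, and the loop assumes you run with permission to read each backend's /proc/[pid]/fd directory:

```shell
# Print the socket inode numbers a process holds open.
# PROCFS defaults to /proc; it is overridable so the parsing is testable.
socket_inodes() {
    for fd in "${PROCFS:-/proc}/$1"/fd/*; do
        # Socket fds are symlinks shaped like "socket:[1234567]".
        link=$(readlink "$fd" 2>/dev/null) || continue
        case "$link" in
            socket:\[*\]) echo "$link" | tr -cd '0-9'; echo ;;
        esac
    done
    return 0
}

# Example: dump socket inodes for every process whose comm is "postgres".
for pid in $(ps -eo pid,comm 2>/dev/null | awk '$2 == "postgres" {print $1}'); do
    echo "backend $pid: $(socket_inodes "$pid" | tr '\n' ' ')"
done
```

The inode numbers printed here are the join key for the /proc/net/tcp analysis in the next steps.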
Step 2: Analyzing /proc/net/tcp Socket States
/proc/net/tcp contains hexadecimal-encoded socket information including local address, remote address, and connection state. The critical field is the fourth column, which represents socket state as a hexadecimal value.
Parsing TCP Connection States with awk
Socket states use specific hex values: 01 (ESTABLISHED), 08 (CLOSE_WAIT), 04 (FIN_WAIT1), 05 (FIN_WAIT2). Parse these states to identify problematic connections:
awk 'function hex2ip(h) {
         return strtonum("0x" substr(h,7,2)) "." strtonum("0x" substr(h,5,2)) "." \
                strtonum("0x" substr(h,3,2)) "." strtonum("0x" substr(h,1,2))
     }
     NR > 1 {
         state = ($4 == "01" ? "ESTABLISHED" : \
                  $4 == "08" ? "CLOSE_WAIT"  : \
                  $4 == "04" ? "FIN_WAIT1"   : \
                  $4 == "05" ? "FIN_WAIT2"   : "OTHER")
         printf "%s:%d -> %s:%d [%s]\n", \
                hex2ip($2), strtonum("0x" substr($2,10,4)), \
                hex2ip($3), strtonum("0x" substr($3,10,4)), state
     }' /proc/net/tcp
This awk script converts the hexadecimal addresses (stored least-significant byte first) to dotted decimal notation and translates state codes into readable connection states. Note that strtonum is a GNU awk extension; on systems where awk is mawk or BusyBox awk, invoke the script with gawk explicitly.
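To verify the decoding by hand, the same conversion can be done in plain bash on a single sample value. decode_addr is a helper name introduced for illustration, and 0100007F:1538 is a synthetic example (the loopback address on the default PostgreSQL port):

```shell
# Decode one hex "address:port" field from /proc/net/tcp.
# The four IPv4 octets are stored least-significant byte first.
decode_addr() {
    hex_ip=${1%%:*}; hex_port=${1##*:}
    printf '%d.%d.%d.%d:%d\n' \
        "0x${hex_ip:6:2}" "0x${hex_ip:4:2}" \
        "0x${hex_ip:2:2}" "0x${hex_ip:0:2}" "0x${hex_port}"
}

decode_addr "0100007F:1538"   # -> 127.0.0.1:5432
```

Spot-checking a few live entries this way confirms the awk script's byte-order handling before you trust its output.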
Step 3: Correlating Socket States with PostgreSQL Processes
Match socket inodes from PostgreSQL backend file descriptors with entries in /proc/net/tcp. Create a correlation script that iterates through each backend PID, extracts socket inodes from /proc/[pid]/fd/, then searches /proc/net/tcp for matching inode numbers.
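One way to sketch that correlation script, assuming Linux and sufficient privileges to read the backends' fd directories; match_inode is an illustrative name, and field 10 of /proc/net/tcp is the socket inode:

```shell
# Print the local address, remote address and state (fields 2, 3, 4)
# of the /proc/net/tcp row whose inode (field 10) matches.
match_inode() {
    awk -v ino="$2" 'NR>1 && $10 == ino {print $2, $3, $4}' "$1"
}

# Walk every backend: extract its socket inodes, then look each one up.
if command -v pgrep >/dev/null 2>&1; then
    for pid in $(pgrep -f 'postgres:' 2>/dev/null); do
        for fd in /proc/"$pid"/fd/*; do
            link=$(readlink "$fd" 2>/dev/null) || continue
            case "$link" in socket:\[*\]) ;; *) continue ;; esac
            inode=$(echo "$link" | tr -cd '0-9')
            row=$(match_inode /proc/net/tcp "$inode" 2>/dev/null) || row=""
            if [ -n "$row" ]; then
                echo "pid=$pid inode=$inode $row"
            fi
        done
    done
fi
```

Each output line ties one backend PID to the raw socket entry, so a backend whose rows show state 08 is holding a half-closed connection.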
Finding Leaked Connection Pools
Connection pool leaks typically appear as multiple sockets in CLOSE_WAIT state associated with the same PostgreSQL backend process. Use ls -la /proc/[pid]/fd/ | grep socket to list socket file descriptors, then cross-reference inode numbers with /proc/net/tcp entries.
Sockets stuck in CLOSE_WAIT indicate that the remote application closed its end of the connection and the kernel acknowledged the FIN, but the local process — the PostgreSQL backend or the pooler — never called close() on its socket. These connections consume backend slots while remaining functionally unusable.
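A quick tally by state code makes the pattern obvious at a glance. A minimal sketch assuming the default port 5432 (hex 1538); state_counts is a helper name introduced here, with an optional file argument for testing:

```shell
# Tally sockets on the PostgreSQL port (5432 = hex 1538) by state code.
# A concentration of "08" (CLOSE_WAIT) rows points at a leaking client.
state_counts() {
    awk 'NR>1 && $2 ~ /:1538$/ {n[$4]++}
         END {for (s in n) print s, n[s]}' "${1:-/proc/net/tcp}"
}

state_counts | sort
```

A healthy server shows mostly 01 (ESTABLISHED) with a handful of transient states; a large, stable 08 count is the leak signature.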
Step 4: Advanced Socket Analysis Techniques
Beyond basic state identification, analyse connection patterns over time to detect systematic leaks. Monitor socket state transitions by capturing /proc/net/tcp snapshots at regular intervals.
Time-based Connection Pattern Detection
Implement periodic socket state sampling with a simple bash loop, saving the Step 2 awk script to a file (here called tcp_states.awk, an arbitrary name): while true; do date; awk -f tcp_states.awk /proc/net/tcp | grep CLOSE_WAIT; sleep 30; done. This captures connection state changes that reveal leaking patterns.
Look for connections that remain in transitional states for extended periods — healthy connections should move quickly from ESTABLISHED to closed, while leaked connections linger in intermediate states.
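The lingering-connection check can be sketched by diffing two snapshots of CLOSE_WAIT inodes: any inode present in both samples has survived a full interval without closing. close_wait_inodes is a helper name introduced here:

```shell
# List the inodes of sockets sitting in CLOSE_WAIT (state code 08,
# field 4) in a /proc/net/tcp-format table; field 10 is the inode.
close_wait_inodes() {
    awk 'NR>1 && $4 == "08" {print $10}' "$1" | sort
}

# Usage against the live kernel table:
#   close_wait_inodes /proc/net/tcp > /tmp/cw_a
#   sleep 30
#   close_wait_inodes /proc/net/tcp > /tmp/cw_b
#   comm -12 /tmp/cw_a /tmp/cw_b    # inodes present in both samples
```

Because inode numbers are stable for the lifetime of a socket, an inode appearing in both snapshots is the same stuck connection, not a coincidence.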
Prevention and Monitoring Strategies
Establish baseline socket state monitoring using the techniques above. Socket-level monitoring through /proc analysis provides earlier warning than PostgreSQL's internal connection counts.
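A baseline check can be reduced to a cron-friendly script that warns when CLOSE_WAIT sockets cross a threshold. The threshold of 20, the close_wait_count name, and the default port 5432 (hex 1538) are illustrative assumptions; tune them against your own baseline:

```shell
# close_wait_count [FILE] counts CLOSE_WAIT sockets on port 5432 (hex 1538).
close_wait_count() {
    awk 'NR>1 && $4 == "08" && $2 ~ /:1538$/' "${1:-/proc/net/tcp}" 2>/dev/null | wc -l
}

# Cron-friendly check; route the warning to your alerting system.
THRESHOLD=20
if [ "$(close_wait_count)" -gt "$THRESHOLD" ]; then
    echo "WARNING: more than $THRESHOLD CLOSE_WAIT sockets on port 5432"
fi
```

Run it every minute and alert on the output; a threshold derived from a week of baseline samples avoids false positives during normal load spikes.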
For production environments, consider implementing automated socket state analysis as part of your regular monitoring infrastructure. Server Scout's connection pool monitoring can track these socket patterns automatically, alerting when connection leaks develop before they cause outages.
The Linux kernel documentation provides detailed information about TCP socket states and their meanings, essential for understanding the raw data from /proc/net/tcp.
Connection pool debugging requires correlation between application-level metrics and system-level socket states. While PostgreSQL shows available connection slots, the underlying TCP sockets reveal the actual network resource consumption. Systematic monitoring approaches prevent these hidden resource exhaustion scenarios from causing cascading failures.
This socket-level analysis method works regardless of PostgreSQL version or connection pooling software. By monitoring /proc/net/tcp directly, you bypass application-level abstractions that mask connection leaks until they cause visible failures.
FAQ
Why does PostgreSQL report fewer connections than max_connections but still reject new connections?
PostgreSQL's connection count reflects active backend processes, but leaked TCP sockets can consume system resources and prevent new connections from establishing properly, even when connection slots appear available.
How often should I monitor /proc/net/tcp for connection leaks?
Monitor every 30-60 seconds during normal operation, but increase frequency to every 10-15 seconds during high-load periods or when investigating suspected connection issues.
Can connection pool leaks affect PostgreSQL performance beyond just connection limits?
Yes, leaked sockets consume system memory and file descriptors, potentially leading to resource exhaustion that affects overall database performance even before hitting max_connections limits.