PostgreSQL's max_connections setting determines the maximum number of concurrent client connections the database server will accept. The default value of 100 is often too low for production workloads, so tuning this parameter is essential for application performance and scalability.
In this guide, we'll cover the key facets of configuring max_connections for optimal efficiency across both the database and the application architecture.
Checking the Default Max Connections
On any PostgreSQL instance, you can query the current max_connections value via:
SHOW max_connections;
Alternatively, find the setting in postgresql.conf:
max_connections = 100
Remember: once the limit is reached, additional connection attempts fail with "sorry, too many clients already" errors. A few slots (superuser_reserved_connections, 3 by default) are also held back for superusers.
Shared Memory Tradeoffs
Every client connection is served by a dedicated backend process, which consumes server resources: RAM for query planning and execution, catalog and relation caches, and so on.
So while raising max_connections allows more clients, it also increases memory usage. These resource tradeoffs are key when tuning for performance.
Let's explore how concurrent client connections interact with other memory settings in PostgreSQL.
Shared Buffers
The shared_buffers parameter sets the size of PostgreSQL's cache for frequently accessed data pages. This memory is allocated once at server start and shared by all backends; it is not consumed per connection, but it does compete for the same RAM that per-connection memory needs.
- Default: 128MB
- Starting point: 25-40% of total RAM (up to about 14GB of shared buffers)
- Decrease toward 20% of RAM for workloads with many connections
So with a higher max_connections, lower the shared_buffers percentage accordingly to leave headroom for per-connection memory.
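As a quick illustration (the 4GB figure assumes a 16GB server at the 25% guideline; adjust to your hardware), shared_buffers can be inspected and changed like so:
SHOW shared_buffers;
-- shared_buffers only takes effect after a full server restart
ALTER SYSTEM SET shared_buffers = '4GB';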
Work Memory
Sorts, hash joins, and similar operations spill to temporary disk files when insufficient memory is available to them.
The work_mem parameter caps the memory each such operation may use; note that a single complex query can run several of these operations at once. The default is 4MB.
For high concurrent connection counts, reduce work_mem to prevent overallocation across connections.
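A sketch of adjusting work_mem, globally or per session (the values here are illustrative, not recommendations):
-- Lower the cluster-wide default; new sessions pick it up after a reload
ALTER SYSTEM SET work_mem = '2MB';
SELECT pg_reload_conf();
-- Or override for a single memory-hungry session only
SET work_mem = '64MB';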
Maintaining Free Memory
PostgreSQL also needs free memory for background workers, the OS page cache, and transient allocations. Monitor for heavy swapping or disk paging, which indicates free memory is running low.
Aim to keep 10-20% of system RAM free when running with a high max_connections.
Configuring Max Connections
When setting max_connections, consider current usage patterns and scale requirements:
- Analyze historical connection logs to profile usage
- Web apps: Allow 2-3x peak hourly connections
- Batch workloads: Parallelize across more connections
Dynamic workloads may require evolving limits.
As a general rule of thumb for servers with 8GB+ RAM:
- Set max_connections to roughly 100-300 connections per 1GB of RAM
- Adjust shared_buffers to 20% of RAM
- Lower work_mem to 2-3MB
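As an illustration only, an 8GB server at the low end of the ranges above might carry postgresql.conf entries like these (benchmark before adopting any of them):
max_connections = 800        # ~100 per GB on an 8GB server
shared_buffers = 1600MB      # 20% of RAM
work_mem = 3MB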
But hardware capabilities and application needs vary widely, so benchmarking different configurations is key.
Updating the Configuration
There are two methods to update max_connections:
SQL Method:
ALTER SYSTEM SET max_connections = 250;
Restart the database for changes to apply.
Configuration File:
max_connections = 250
Save the postgresql.conf changes and restart.
After updating, validate new connections can open without errors. Monitor overall memory usage under load.
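Once the server is back up, a quick check such as this (a sketch; it compares current backends against the configured limit) confirms the change and the remaining headroom:
SHOW max_connections;
SELECT count(*) AS open_connections,
       current_setting('max_connections')::int AS max_allowed
FROM pg_stat_activity;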
Alternatives for Connection Handling
Other architectural approaches beyond raising limits provide more scalable connection handling:
Connection Pooling
Reuse established connections across application threads instead of opening a fresh connection per operation. This avoids the overhead of continuous open/close cycles.
- Effective for applications with recurring transient tasks
- Requires client-side or proxy connection management
- Managed services such as AWS RDS and Azure Database for PostgreSQL provide some pooling capabilities
Multiplexing Connections
Tools such as PgBouncer and Pgpool-II multiplex numerous application connections over fewer PostgreSQL connections. This concentrates load for more efficient server processing.
- Reduces memory overhead from lower backend connection count
- Increases throughput for transactional workloads
- Implemented at the application or proxy tier rather than inside the database
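As a sketch, a minimal PgBouncer configuration in transaction pooling mode might look like the following pgbouncer.ini (the database name, paths, and pool sizes are illustrative assumptions):
[databases]
mydb = host=127.0.0.1 port=5432 dbname=mydb

[pgbouncer]
listen_addr = 127.0.0.1
listen_port = 6432
auth_type = md5
auth_file = /etc/pgbouncer/userlist.txt
pool_mode = transaction
max_client_conn = 1000   ; application-facing connections
default_pool_size = 20   ; actual PostgreSQL backends per database/user pair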
Batching Operations
When possible, have clients batch multiple operations into fewer calls instead of high-frequency singleton transactions. Doing so reduces connection turnover and management overhead.
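For instance, a single multi-row INSERT replaces several round trips and short-lived transactions (the events table here is hypothetical):
-- One statement, one transaction, one connection checkout
INSERT INTO events (name, created_at) VALUES
  ('signup', now()),
  ('login',  now()),
  ('click',  now());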
Diagnosing Performance Bottlenecks
If connections leak or bottleneck during workload surges even after raising limits, further investigation into root causes is warranted:
Profile Connection Usage Patterns
Log connection open/close events over time. Sustained growth or spikes in open connections can indicate connections leaking, for example from unfinished distributed transactions.
Identify whether certain application operations or tenants disproportionately consume connections.
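A query along these lines (a sketch over the built-in pg_stat_activity view) breaks down current connections by user, application, and state:
SELECT usename, application_name, state, count(*)
FROM pg_stat_activity
GROUP BY usename, application_name, state
ORDER BY count(*) DESC;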
Tune Timeout Limits
The idle_session_timeout (PostgreSQL 14+) and idle_in_transaction_session_timeout settings control how long idle sessions and idle in-transaction sessions may remain open. Expire unused connections aggressively to free resources.
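Both settings can be changed without a restart; a sketch with illustrative durations:
ALTER SYSTEM SET idle_in_transaction_session_timeout = '5min';
ALTER SYSTEM SET idle_session_timeout = '10min';  -- PostgreSQL 14 and later
SELECT pg_reload_conf();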
Indexes and Queries
If individual queries consume significant memory per connection for sorting, hashing, etc., investigate potentially missing indexes or expensive table scans behind the poorly performing statements.
Slow queries themselves could be a root cause of apparent connection exhaustion issues.
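EXPLAIN ANALYZE makes such spills visible: a plan node reporting "Sort Method: external merge" means a sort exceeded work_mem and went to disk (the orders table here is hypothetical):
EXPLAIN (ANALYZE, BUFFERS)
SELECT * FROM orders ORDER BY created_at;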
Hardware Bottlenecks
Heavy disk paging signals that memory-, CPU-, or I/O-bound hardware cannot keep pace with connection demand. Scale up capacity to sustain performance across expanded connection pools.
Conclusion
PostgreSQL's conservative out-of-the-box connection limit ensures stability for small-scale workloads. But production application demands often require expanding max_connections in concert with the other memory settings covered above.
Analyze your app's concurrent usage patterns. Scale limits appropriately relative to available hardware resources. Offload connection handling to middle-tier pooling and multiplexing where possible. And optimize expensive queries to reduce the burden each connection carries.
Following these best practices will lead to high-throughput, responsive database backends. Your app can then scale to support modern demand while avoiding "too many clients" errors.