Database Connection Pooling — PgBouncer, pgpool-II, and Getting the Math Right
By Sanjeev Sharma (@webcoderspeed1)
Introduction
Every database connection costs CPU, memory, and time. Opening connections for each request destroys performance. Connection pooling multiplies your database throughput by reusing connections. Yet most teams misconfigure pooling, setting arbitrary limits that cause either connection exhaustion or wasted resources. This post teaches you the math and patterns for production pooling.
- Connection Lifecycle Cost
- PgBouncer Configuration
- PgBouncer Transaction vs Session Mode
- Pool Sizing Formula
- PostgreSQL max_connections Setting
- Connection Monitoring and Debugging
- Prisma Connection Pooling with Data Proxy
- Serverless Connection Pooling with Neon/PlanetScale
- Connection Pooling Checklist
- Conclusion
Connection Lifecycle Cost
Understanding the cost of connections guides pooling strategy:
#!/bin/bash
# Measure connection creation cost
time_connection_open() {
  local db_host="$1"
  local iterations=100
  echo "Measuring $iterations connection opens..."
  time {
    for i in $(seq 1 "$iterations"); do
      # Each psql invocation opens a fresh connection, runs one query, and closes
      psql -h "$db_host" -U postgres -d postgres -c "SELECT 1" > /dev/null
    done
  }
  # Typical output (network latency 10ms, auth 5ms):
  # real 0m1.500s
  # user 0m0.300s
  # sys  0m0.200s
  # Per connection: ~15ms
}

# Inspect current backends. Note: pg_stat_activity has no per-connection
# memory column; memory cost is per backend process (see below)
check_connection_resources() {
  psql -c "
    SELECT
      pid,
      usename,
      application_name,
      state,
      wait_event_type,
      backend_start
    FROM pg_stat_activity
    WHERE pid != pg_backend_pid()
    LIMIT 10;
  "
}

# Each connection uses:
# - ~5-10MB RAM (one PostgreSQL backend process per connection)
# - A network socket
# - Kernel scheduler time for context switches
Cost breakdown: 10-50ms to open, 5-10MB RAM per connection. Reuse aggressively.
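To see why reuse pays off, here is a minimal, illustrative pool in TypeScript (a sketch, not a production pool — real pools such as node-postgres's also handle async acquisition, validation, and timeouts; the `Conn` type stands in for a driver connection):

```typescript
// Minimal illustrative connection pool: connections are created lazily,
// parked on release, and handed back out instead of being reopened.
type Conn = { id: number };

class MiniPool {
  private idle: Conn[] = [];
  private created = 0;
  constructor(private max: number, private factory: () => Conn) {}

  acquire(): Conn {
    const conn = this.idle.pop();          // reuse an idle connection if any
    if (conn) return conn;
    if (this.created >= this.max) throw new Error("pool exhausted");
    this.created++;
    return this.factory();                 // pay the open cost only once
  }

  release(conn: Conn): void {
    this.idle.push(conn);                  // park for the next caller
  }

  get totalCreated(): number { return this.created; }
}

// 1000 sequential "requests" against a pool of 10 open only 1 real
// connection, because each request releases before the next acquires.
let nextId = 0;
const pool = new MiniPool(10, () => ({ id: nextId++ }));
for (let i = 0; i < 1000; i++) {
  const c = pool.acquire();
  // ... run query on c ...
  pool.release(c);
}
console.log(pool.totalCreated); // 1 connection for 1000 requests
```

At ~15ms per open, that is roughly 15 seconds of connection overhead avoided across those 1000 requests.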
PgBouncer Configuration
PgBouncer is lightweight, battle-tested pooling middleware:
; /etc/pgbouncer/pgbouncer.ini
[databases]
myapp         = host=primary.db.internal port=5432 user=app password=secret dbname=myapp
myapp_replica = host=replica.db.internal port=5432 user=app password=secret dbname=myapp

[pgbouncer]
; Core pooling settings
pool_mode = transaction          ; or session, statement

; Connection limits (critical)
max_client_conn = 2000           ; max connections from clients (can far exceed server-side connections)
max_db_connections = 100         ; per-database cap on connections to the server
default_pool_size = 25           ; server connections per user/database pair
min_pool_size = 5                ; minimum idle connections to keep open

; Rule of thumb: pool_size = ceil(peak concurrent queries * safety_factor).
; You do NOT need one server connection per user -- most client connections
; are idle at any given instant, which is the whole point of pooling.

; Reserve slots for bursts
reserve_pool_size = 3
reserve_pool_timeout = 3

; Timeouts
server_lifetime = 3600           ; recycle server connections after 1 hour
server_idle_timeout = 600        ; close unused server connections after 10 min
server_connect_timeout = 15      ; connection must establish within 15s
query_timeout = 0                ; no query timeout (set at app level)

; Logging
log_connections = 1
log_disconnections = 1
logfile = /var/log/pgbouncer/pgbouncer.log
pidfile = /var/run/pgbouncer/pgbouncer.pid

; Admin
admin_users = postgres
stats_period = 60                ; update stats every 60s
Transaction mode is safest for stateless applications. Session mode preserves connection state.
PgBouncer Transaction vs Session Mode
Choose the right mode for your application:
-- TRANSACTION MODE (recommended for web apps)
-- A server connection is assigned per transaction, then returned to the pool
-- Pros: a small pool serves many clients
-- Cons: session state (SET SESSION, PREPARE, advisory locks, LISTEN) does not
--       survive across transactions -- the next one may run on a different
--       server connection
SELECT * FROM users WHERE id = $1;        -- OK: stateless
SET SESSION random_page_cost = 1.5;       -- Unsafe: may leak onto another client's connection
PREPARE stmt AS SELECT * FROM users;      -- Unsafe: next txn may not see it
-- (PgBouncer 1.21+ can track protocol-level prepared statements in
-- transaction mode via max_prepared_statements)

-- Safe alternative in transaction mode: scope settings to the transaction
BEGIN;
SET LOCAL random_page_cost = 1.5;         -- discarded at COMMIT/ROLLBACK
SELECT * FROM users WHERE id = $1;
COMMIT;

-- SESSION MODE (use for stateful, long-lived connections)
-- A server connection is assigned per client connection
-- Pros: full SQL feature support; SET SESSION and PREPARE/EXECUTE work
-- Cons: pool size must track client count, so higher resource usage
PREPARE select_user AS SELECT * FROM users WHERE id = $1;
EXECUTE select_user(123);

-- Check the configured mode from the PgBouncer admin console
SHOW DATABASES;   -- the pool_mode column shows per-database overrides
Transaction mode fits most web applications; use session mode for ORMs or drivers that depend on persistent session state.
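Because everything inside one BEGIN/COMMIT stays on a single server connection even in transaction mode, app code can safely use session-style settings by scoping them with SET LOCAL inside a transaction. A hedged sketch of that pattern (the `QueryRunner` interface is a stand-in for a real driver client, e.g. node-postgres's `client.query`):

```typescript
// Sketch: wrap session-dependent statements in one transaction so that
// PgBouncer's transaction mode pins them to a single server connection.
interface QueryRunner { query(sql: string): void; }

function withPinnedSettings(client: QueryRunner, body: (c: QueryRunner) => void): void {
  client.query("BEGIN");
  try {
    // SET LOCAL only lasts until COMMIT/ROLLBACK, so it cannot leak
    // onto whichever client borrows this server connection next.
    client.query("SET LOCAL random_page_cost = 1.5");
    body(client);
    client.query("COMMIT");
  } catch (e) {
    client.query("ROLLBACK");
    throw e;
  }
}

// A recording fake shows the statement order without a live database.
const stmtLog: string[] = [];
const fakeClient: QueryRunner = { query: (sql) => { stmtLog.push(sql); } };
withPinnedSettings(fakeClient, (c) => c.query("SELECT * FROM users WHERE id = 1"));
console.log(stmtLog.join(" | "));
// BEGIN | SET LOCAL random_page_cost = 1.5 | SELECT * FROM users WHERE id = 1 | COMMIT
```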
Pool Sizing Formula
Correctly size pools using this formula:
#!/bin/bash
# Pool sizing starting point (the classic PostgreSQL rule of thumb):
#   pool_size = (core_count * 2) + effective_spindle_count
# For app clusters, also check the total across processes:
#   total = ceil(app_processes * avg_connections_per_process * safety_factor)

# Example: 4 app processes, avg 2 concurrent connections each, 20% margin
calculate_pool_size() {
  local app_processes=4
  local connections_per_process=2
  # Bash arithmetic is integer-only, so apply the 1.2 factor as *12, /10
  local pool_size=$(( app_processes * connections_per_process ))
  pool_size=$(( (pool_size * 12 + 9) / 10 ))   # ceil(pool_size * 1.2)
  echo "Recommended pool size: $pool_size"
}

# Conservative sizing by server type
# Single web server (4 cores): pool_size = 10-20
# 10 app processes in a cluster: 2-5 per process = 20-50 total
# High-concurrency API (nginx, 1000 client conns): pool_size = 50-100

# RAM budget
# PostgreSQL connection: 5-10MB; PgBouncer connection: 1-2MB
# Example: pool_size 100 = 500-1000MB PostgreSQL + 100-200MB PgBouncer

# Monitor actual pool usage via the PgBouncer admin console
# (cl_active etc. are SHOW POOLS columns, not SHOW STATS)
pgbouncer_stats() {
  psql -U pgbouncer -d pgbouncer -h localhost -p 6432 -c "SHOW POOLS;" \
    | grep -E "(database|cl_active|cl_waiting|sv_active|sv_idle)"
}
Use (cores * 2) + 1 as starting point. Monitor actual usage and adjust.
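Both rules of thumb can be captured in a small calculator (a sketch — the function names and inputs are illustrative; replace the example numbers with measured values from your own workload):

```typescript
// Pool sizing helpers. The (cores * 2) + spindles rule is the classic
// PostgreSQL starting point; the per-process rule covers app clusters.

// Starting point for a single database server (SSDs: treat spindles as 1).
function poolSizeByCores(cores: number, spindles: number): number {
  return cores * 2 + spindles;
}

// Cluster-wide total from app topology, with a safety margin.
function poolSizeByProcesses(
  processes: number,
  connsPerProcess: number,
  safetyFactor = 1.2,
): number {
  return Math.ceil(processes * connsPerProcess * safetyFactor);
}

console.log(poolSizeByCores(4, 1));     // 9  -> the "(cores * 2) + 1" default
console.log(poolSizeByProcesses(4, 2)); // 10 -> ceil(4 * 2 * 1.2)
```

Take the larger of the two results as the starting pool size, then let monitoring correct it.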
PostgreSQL max_connections Setting
Ensure PostgreSQL itself can handle pooled connections:
-- Current max_connections setting
SHOW max_connections;
-- Typical output: 100

-- Calculate required max_connections:
--   max_connections >= (pool_size * num_pools) + headroom for superuser sessions
--   Example: pool_size 50, 3 databases = 150 + 10 = 160

-- Verify total memory will suffice:
--   RAM needed = max_connections * ~5MB + shared_buffers
--   Example: 200 * 5MB = 1GB + shared_buffers (256MB) = ~1.25GB total

-- Note: max_connections cannot be changed with a reload -- it requires a restart
ALTER SYSTEM SET max_connections = 200;
-- Then restart the server, e.g.: sudo systemctl restart postgresql
SHOW max_connections;

-- Monitor connection usage
SELECT
  count(*) AS total_connections,
  (SELECT setting::int FROM pg_settings WHERE name = 'max_connections') AS max_connections,
  ROUND(100.0 * count(*) / (SELECT setting::int FROM pg_settings WHERE name = 'max_connections'), 1) AS usage_pct
FROM pg_stat_activity;
Set max_connections 20-30% higher than pooled connection count.
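The headroom and RAM rules above can be checked with a small estimator (a sketch under the ~5MB-per-backend assumption from the lifecycle section; the function names are illustrative):

```typescript
// Estimate the max_connections setting and RAM budget for a pooled setup.
// Assumes ~5 MB per PostgreSQL backend, per the measurements above.

function requiredMaxConnections(totalPoolSize: number, headroomPct = 25): number {
  // 20-30% above the pooled total leaves room for superuser and
  // maintenance sessions.
  return Math.ceil(totalPoolSize * (1 + headroomPct / 100));
}

function estimatedRamMb(
  maxConnections: number,
  sharedBuffersMb: number,
  mbPerConn = 5,
): number {
  return maxConnections * mbPerConn + sharedBuffersMb;
}

console.log(requiredMaxConnections(160)); // 200
console.log(estimatedRamMb(200, 256));    // 1256 (MB)
```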
Connection Monitoring and Debugging
Monitor pool health continuously:
-- PgBouncer: view active connections
SHOW CLIENTS;
-- Output:
-- type | user | database | state
-- C | app | myapp | active
-- C | app | myapp | idle
-- PgBouncer: view server connections
SHOW SERVERS;
-- Output:
-- type | user | database | state
-- S | app | myapp | active
-- S | app | myapp | idle
-- PgBouncer: view pool stats
SHOW POOLS;
-- Output:
-- database | user | cl_active | cl_waiting | sv_active | sv_idle | sv_used
-- myapp | app | 5 | 2 | 5 | 15 | 5
-- Interpretation:
-- cl_active: clients with active queries
-- cl_waiting: clients waiting for connection from pool
-- sv_active: database connections with active queries
-- sv_idle: database connections available to reuse
-- Alert thresholds:
-- If cl_waiting > 0: pool exhausted, increase pool_size
-- If sv_idle > sv_used * 5: pool oversized, decrease pool_size
-- If sv_active near pool_size: pool at capacity
-- PostgreSQL: view actual connections
SELECT
  usename,
  application_name,
  client_addr,
  state,
  query,
  query_start,
  state_change
FROM pg_stat_activity
WHERE datname = 'myapp'
ORDER BY query_start DESC;

-- Find idle-in-transaction sessions (they hold locks and pin snapshots).
-- state_change marks when the session went idle; query_start only marks
-- when its last query began.
SELECT
  pid, usename, application_name,
  state, query,
  EXTRACT(EPOCH FROM (NOW() - state_change)) AS idle_sec
FROM pg_stat_activity
WHERE state = 'idle in transaction'
  AND state_change < NOW() - INTERVAL '1 minute'
ORDER BY state_change ASC;

-- Terminate sessions stuck idle in transaction for over an hour
SELECT pg_terminate_backend(pid)
FROM pg_stat_activity
WHERE state = 'idle in transaction'
  AND state_change < NOW() - INTERVAL '1 hour'
  AND usename != 'postgres';
Monitor cl_waiting and sv_idle. Alert when either extreme is reached.
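The alert thresholds above can be encoded as a small checker over SHOW POOLS rows (field names match PgBouncer's output columns; the thresholds are the ones suggested above, and you should tune them to your workload):

```typescript
// Analyze one SHOW POOLS row against the alert thresholds above.
interface PoolRow {
  database: string;
  cl_active: number;  // clients with active queries
  cl_waiting: number; // clients waiting for a connection
  sv_active: number;  // server connections running queries
  sv_idle: number;    // server connections free to reuse
  sv_used: number;    // recently used server connections
  pool_size: number;  // configured pool size
}

function poolAlerts(row: PoolRow): string[] {
  const alerts: string[] = [];
  if (row.cl_waiting > 0) {
    alerts.push(`${row.database}: pool exhausted (${row.cl_waiting} clients waiting)`);
  }
  if (row.sv_idle > row.sv_used * 5) {
    alerts.push(`${row.database}: pool oversized (${row.sv_idle} idle vs ${row.sv_used} used)`);
  }
  if (row.sv_active >= row.pool_size) {
    alerts.push(`${row.database}: pool at capacity`);
  }
  return alerts;
}

// The SHOW POOLS example row from above trips the exhaustion alert.
const row: PoolRow = { database: "myapp", cl_active: 5, cl_waiting: 2,
                       sv_active: 5, sv_idle: 15, sv_used: 5, pool_size: 25 };
console.log(poolAlerts(row)); // one "pool exhausted" alert
```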
Prisma Connection Pooling with Data Proxy
Prisma's Data Proxy pools connections across serverless functions:
# .env  (dotenv files use # for comments)
DATABASE_URL="postgresql://user:password@primary.db/myapp"
PRISMA_CLIENT_ENGINE_TYPE="dataproxy"

// prisma/schema.prisma
datasource db {
  provider = "postgresql"
  url      = env("DATABASE_URL")
}

// With the Data Proxy engine, Prisma:
// - pools connections across serverless function instances
// - times out unused connections
// - caps the number of connections per pool

import { PrismaClient } from '@prisma/client';

// Global singleton so warm invocations reuse the same client (and pool)
let prisma: PrismaClient;

export const getPrismaClient = () => {
  if (!prisma) {
    prisma = new PrismaClient({
      log: [
        { emit: 'event', level: 'query' },
        { emit: 'event', level: 'error' },
      ],
    });
    prisma.$on('query', (e) => {
      console.log('Query: ' + e.query);
      console.log('Params: ' + JSON.stringify(e.params));
      console.log('Duration: ' + e.duration + 'ms');
    });
  }
  return prisma;
};

// Usage in a serverless function
export async function getUser(userId: string) {
  const prisma = getPrismaClient();
  return prisma.user.findUnique({ where: { id: userId } });
}
Prisma Data Proxy handles pooling transparently. One connection pool shared across all serverless functions.
Serverless Connection Pooling with Neon/PlanetScale
Neon and PlanetScale provide managed connection pooling:
// Neon serverless driver (pooling handled by Neon's proxy)
import { neon } from '@neondatabase/serverless';

const sql = neon(process.env.DATABASE_URL);

// Each request borrows from the pool rather than opening a dedicated connection
export async function handler(event) {
  const users = await sql`SELECT * FROM users LIMIT 10`;
  return { statusCode: 200, body: JSON.stringify(users) };
}

// Neon's server-side pooling is enabled by connecting to the pooled
// endpoint (the hostname with the -pooler suffix):
// postgresql://user:pass@ep-name-pooler.region.aws.neon.tech/dbname?sslmode=require

// PlanetScale serverless driver (the package exports connect(), and
// queries run via execute(), not query())
import { connect } from '@planetscale/database';

const conn = connect({
  host: process.env.PLANETSCALE_HOST,
  username: process.env.PLANETSCALE_USERNAME,
  password: process.env.PLANETSCALE_PASSWORD,
});

export async function handler(event) {
  const users = await conn.execute('SELECT * FROM users LIMIT 10');
  return { statusCode: 200, body: JSON.stringify(users.rows) };
}

// Branch databases (preview environments) are addressed the same way:
// each Neon branch gets its own endpoint, and each PlanetScale branch
// gets its own host and credentials -- point the env vars at the branch.
const sqlBranch = neon(`postgresql://user:pass@ep-branch-name.neon.tech/dbname`);
Serverless platforms handle pooling transparently. Focus on query performance, not connection management.
Connection Pooling Checklist
- PgBouncer deployed in front of PostgreSQL
- Pool mode chosen (transaction for stateless apps, session for stateful)
- Pool size calculated using formula: (cores * 2) + 1
- max_connections set 20-30% higher than pooled size
- min_pool_size set to keep warm connections
- server_lifetime capped at 1 hour (prevents long-lived stale connections)
- SHOW POOLS monitored (cl_waiting < 1, sv_idle reasonable)
- Idle-in-transaction queries identified and fixed
- Serverless: using connection pooling driver (Prisma, Neon, PlanetScale)
- Alerts configured for pool exhaustion
Conclusion
Connection pooling multiplies database throughput. PgBouncer transaction mode works for most stateless applications—configure pool_size using (cores * 2) + 1 and monitor for exhaustion. Set PostgreSQL's max_connections 20-30% higher than the pooled total. For serverless, use platform-provided pooling (Neon, PlanetScale, Prisma Data Proxy) that shares one pool across all functions. The key metric: watch SHOW POOLS for cl_waiting > 0 (pool exhausted) or sv_idle too high (oversized pool). Get pooling right and your database can handle 10x more concurrent load.