Node.js Memory Management — Finding Leaks Before They Kill Your Production Server
Author: Sanjeev Sharma (@webcoderspeed1)
Introduction
Memory leaks are silent killers in production. Your Node.js service runs fine for days, then gradually slows and finally crashes with an out-of-memory (OOM) error. Finding the culprit—a global cache growing without bound, event listeners that are never removed, or large objects pinned by global state—requires systematic profiling.
This post walks through production-grade memory debugging and prevention strategies.
- Heap Snapshots with --inspect and DevTools
- heapdump Module for Automated Snapshots
- Identifying Retained Objects in Chrome DevTools
- Common Memory Leak Patterns
- V8 Flags for GC Tuning
- --max-old-space-size Sizing
- clinic.js Heapprofiler Workflow
- WeakRef and FinalizationRegistry for Caches
- Checklist
- Conclusion
Heap Snapshots with --inspect and DevTools
The most direct way to find what's consuming memory is a heap snapshot. Start your Node process with --inspect and use Chrome DevTools.
// server.ts
import express from 'express';
const app = express();
const cache = new Map<string, Buffer>();
app.get('/data/:id', (req, res) => {
const { id } = req.params;
if (!cache.has(id)) {
// Simulating expensive computation
const data = Buffer.alloc(1_000_000); // 1MB
cache.set(id, data); // MEMORY LEAK: cache never cleared
}
res.json({ size: cache.get(id)!.length });
});
app.get('/health', (req, res) => {
res.json({ status: 'ok', cacheSize: cache.size });
});
app.listen(3000, () => {
console.log('Server running on port 3000');
console.log('Open chrome://inspect to profile');
});
Run with (compile the TypeScript first, or use a loader such as ts-node or tsx):
node --inspect=0.0.0.0:9229 server.js
Then in Chrome: open chrome://inspect → select the process → Take heap snapshot.
// Analyzing snapshot results:
// 1. Take initial snapshot
// 2. Make requests to trigger the leak
// 3. Take second snapshot
// 4. Compare: see which objects grew
// In DevTools Comparison view:
// - Closures retained by listeners → event listeners never removed
// - String arrays with thousands of entries → unbounded array growth
// - Map/Object entry counts growing between snapshots → cache without eviction
// Fix the leak:
const cache = new Map<string, Buffer>();
const CACHE_LIMIT = 100;
app.get('/data/:id', (req, res) => {
const { id } = req.params;
if (!cache.has(id)) {
if (cache.size >= CACHE_LIMIT) {
// Evict the oldest entry (Maps iterate in insertion order, so this is FIFO)
const firstKey = cache.keys().next().value;
if (firstKey !== undefined) cache.delete(firstKey);
}
const data = Buffer.alloc(1_000_000);
cache.set(id, data);
}
res.json({ size: cache.get(id)!.length });
});
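The eviction logic is easy to verify in isolation. This is a standalone sketch of the same FIFO policy, relying on the fact that JavaScript Maps iterate in insertion order:

```typescript
// Standalone FIFO eviction: a Map's first key is always its oldest entry.
function setWithEviction<K, V>(cache: Map<K, V>, limit: number, key: K, value: V): void {
  if (!cache.has(key) && cache.size >= limit) {
    const oldest = cache.keys().next().value as K;
    cache.delete(oldest);
  }
  cache.set(key, value);
}

const c = new Map<string, number>();
setWithEviction(c, 2, 'a', 1);
setWithEviction(c, 2, 'b', 2);
setWithEviction(c, 2, 'c', 3); // evicts 'a'
console.log([...c.keys()]); // ['b', 'c']
```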
heapdump Module for Automated Snapshots
For production servers where Chrome DevTools isn't available, use heapdump to generate snapshots programmatically.
import heapdump from 'heapdump';
import express from 'express';
import path from 'path';
const app = express();
let globalState: any = null; // Potential leak source
// Endpoint to trigger heap snapshot
app.post('/admin/heapdump', (req, res) => {
const timestamp = Date.now();
const filename = path.join('/tmp', `heapdump-${timestamp}.heapsnapshot`);
heapdump.writeSnapshot(filename, (err, written) => {
if (err) {
res.status(500).json({ error: err.message });
} else {
res.json({ file: written });
console.log(`Heap snapshot written: ${written}`);
}
});
});
// Automated snapshots when memory exceeds threshold
const MEMORY_THRESHOLD_MB = 512;
setInterval(() => {
const used = process.memoryUsage().heapUsed / 1024 / 1024;
if (used > MEMORY_THRESHOLD_MB) {
const filename = path.join('/tmp', `heapdump-auto-${Date.now()}.heapsnapshot`);
heapdump.writeSnapshot(filename, (err) => {
if (!err) {
console.warn(`High memory detected (${used.toFixed(2)}MB). Snapshot: ${filename}`);
}
});
}
}, 30000); // Check every 30 seconds
app.listen(3000);
Then analyze the .heapsnapshot file in Chrome DevTools by dragging it into the Memory panel.
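On modern Node versions you can skip the third-party dependency entirely: the built-in v8 module has had `writeHeapSnapshot()` since Node 11.13. A minimal sketch:

```typescript
import v8 from 'v8';
import os from 'os';
import path from 'path';

// writeHeapSnapshot() is synchronous: it blocks the event loop and can briefly
// use roughly double the heap while serializing, so gate it behind an admin
// endpoint or a signal handler rather than calling it on a hot path.
const file = path.join(os.tmpdir(), `heapdump-${Date.now()}.heapsnapshot`);
v8.writeHeapSnapshot(file);
console.log(`Snapshot written: ${file}`);
```

There is also a flag-based variant, `node --heapsnapshot-signal=SIGUSR2 server.js`, which writes a snapshot whenever the process receives that signal.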
Identifying Retained Objects in Chrome DevTools
After capturing a snapshot, use DevTools to find what's being retained.
// Example: event listener leak
import EventEmitter from 'events';
class DataProcessor extends EventEmitter {
constructor() {
super();
this.setupListeners();
}
private setupListeners(): void {
// LEAK: nothing ever removes these listeners, and no destroy() exists.
// (The handlers below are arrow-function fields, so no .bind() is needed.)
this.on('data', this.handleData);
this.on('error', this.handleError);
this.on('complete', this.handleComplete);
}
private handleData = (): void => {
console.log('Processing data');
};
private handleError = (): void => {
console.log('Error occurred');
};
private handleComplete = (): void => {
console.log('Complete');
};
// No destroy() method: the listeners can never be detached
}
// Leak demonstration
const processors: DataProcessor[] = [];
setInterval(() => {
const processor = new DataProcessor();
processors.push(processor);
// Array grows, listeners multiply
if (processors.length > 10000) {
processors.shift(); // Too late, already OOM
}
}, 100);
// FIX: proper cleanup
class ProperDataProcessor extends EventEmitter {
constructor() {
super();
this.setupListeners();
}
private _dataHandler!: () => void;
private setupListeners(): void {
// Keep the bound reference so the exact same function can be removed later
this._dataHandler = this.handleData.bind(this);
this.on('data', this._dataHandler);
}
private handleData(): void {
console.log('Processing data');
}
destroy(): void {
this.removeListener('data', this._dataHandler);
this.removeAllListeners();
}
}
const properProcessors: ProperDataProcessor[] = [];
setInterval(() => {
const processor = new ProperDataProcessor();
properProcessors.push(processor);
if (properProcessors.length > 10000) {
const old = properProcessors.shift()!;
old.destroy(); // Proper cleanup
}
}, 100);
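Node also gives you an early-warning signal for listener buildup: each EventEmitter warns (MaxListenersExceededWarning) once a single event name exceeds its max-listeners threshold, 10 by default. Lowering the threshold and checking `listenerCount()` makes this class of leak visible long before a snapshot is needed:

```typescript
import { EventEmitter } from 'events';

const emitter = new EventEmitter();
// Default warning threshold is 10 listeners per event name;
// lowering it surfaces accidental listener buildup sooner.
emitter.setMaxListeners(5);

for (let i = 0; i < 6; i++) {
  emitter.on('data', () => {});
}
// Exceeding the max emits a MaxListenersExceededWarning on the process
console.log('data listeners:', emitter.listenerCount('data'));
```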
Common Memory Leak Patterns
These patterns appear repeatedly. Know them.
// Pattern 1: Unbounded cache without eviction
class BadCache {
private data = new Map<string, any>();
set(key: string, value: any): void {
this.data.set(key, value); // Never evicted
}
get(key: string): any {
return this.data.get(key);
}
}
// Fix: LRU cache with eviction (lru-cache v7+ uses a named export)
import { LRUCache } from 'lru-cache';
const lru = new LRUCache<string, any>({
max: 500, // At most 500 items
ttl: 1000 * 60 * 5, // 5-minute TTL
updateAgeOnGet: true,
});
// Pattern 2: Closures capturing large objects
function createHandlers(): any {
const largeData = Buffer.alloc(10_000_000); // 10MB
return {
handler1: () => console.log(largeData.length), // Closure retains largeData
handler2: () => console.log('other'),
};
}
// Fix: capture only the derived value, not the large object
function createProperHandlers(largeData: Buffer): any {
const length = largeData.length; // the closure now retains a number, not 10MB
return {
handler1: () => console.log(length),
handler2: () => console.log('other'),
};
}
// Pattern 3: Global cache in request context
const requestCache = new Map<string, any>();
export async function handleRequest(id: string): Promise<any> {
if (!requestCache.has(id)) {
requestCache.set(id, await fetchData(id)); // LEAK: grows forever
}
return requestCache.get(id);
}
// Fix: use request-scoped cache
export async function handleProperRequest(
id: string,
cache: Map<string, any>
): Promise<any> {
if (!cache.has(id)) {
cache.set(id, await fetchData(id));
}
return cache.get(id);
}
// Pattern 4: setTimeout with retained context
class Timer {
private value = 0;
start(): void {
const largeData = Buffer.alloc(1_000_000);
// setTimeout retains largeData through closure
setTimeout(() => {
console.log(this.value, largeData.length);
}, 1000);
}
}
// Fix: capture only what the callback needs
class ProperTimer {
private value = 0;
start(): void {
const largeData = Buffer.alloc(1_000_000);
const snapshot = largeData.length; // a number, not the 1MB buffer
setTimeout(() => {
console.log(this.value, snapshot); // closure no longer references largeData
}, 1000);
// largeData goes out of scope here and becomes eligible for GC
}
}
// Pattern 5: global state pinning an object graph
interface TreeNode {
parent?: TreeNode;
children: TreeNode[];
data: Buffer;
}
function buildCircular(): void {
const root: TreeNode = { children: [], data: Buffer.alloc(1_000_000) };
const child: TreeNode = {
parent: root,
children: [],
data: Buffer.alloc(1_000_000),
};
root.children.push(child);
// The cycle itself is fine: V8's tracing GC collects circular references.
// The real leak is the global; anything reachable from globalThis is never freed.
(globalThis as any).rootNode = root;
}
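For data that should live exactly as long as a request (pattern 3 above), a WeakMap keyed by the request object is another option: entries disappear automatically once the request is garbage collected. A framework-agnostic sketch, where `req` is a hypothetical stand-in for a real request object:

```typescript
// Request-scoped cache keyed by the request object itself: when the request
// becomes unreachable, its cache entry is collected along with it.
const perRequest = new WeakMap<object, Map<string, unknown>>();

function getRequestCache(req: object): Map<string, unknown> {
  let cache = perRequest.get(req);
  if (!cache) {
    cache = new Map();
    perRequest.set(req, cache);
  }
  return cache;
}

const req = {}; // hypothetical request object
getRequestCache(req).set('user', { id: 1 });
console.log(getRequestCache(req).get('user')); // { id: 1 }
```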
V8 Flags for GC Tuning
Control garbage collection behavior with V8 flags for better debugging and performance.
// Run with GC flags:
// node --trace-gc server.ts
// node --expose-gc server.ts
// node --max-old-space-size=4096 server.ts (4GB heap)
// In code, use exposed gc (requires --expose-gc flag)
if (global.gc) {
console.log('GC exposed, can trigger manually');
// Force garbage collection
global.gc();
const before = process.memoryUsage().heapUsed;
// Do some work
const arr = Array(1_000_000).fill(Math.random());
const after = process.memoryUsage().heapUsed;
console.log(`Memory increased by ${(after - before) / 1024 / 1024}MB`);
// GC again
global.gc();
const final = process.memoryUsage().heapUsed;
console.log(`After GC: ${(final - before) / 1024 / 1024}MB`);
}
// Monitoring GC pauses with perf_hooks (built into Node, no native addon needed)
import { PerformanceObserver } from 'perf_hooks';
const gcObserver = new PerformanceObserver((list) => {
for (const entry of list.getEntries()) {
console.log(`GC pause: ${entry.duration.toFixed(2)}ms`);
}
});
gcObserver.observe({ entryTypes: ['gc'] });
// For richer analysis, use clinic.js (see the clinic.js section below)
--max-old-space-size Sizing
Set heap size appropriate for your workload. Too small = frequent GC pauses. Too large = long GC pauses.
// Get current limits
console.log('Memory usage:', process.memoryUsage());
// {
// rss: 81551360, // Resident set size
// heapTotal: 9043968, // Allocated heap
// heapUsed: 5234232, // Actual usage
// external: 1234,
// arrayBuffers: 0
// }
// Sizing recommendations:
// - Small app: 128-256MB
// - Medium app: 512MB-1GB
// - Large app: 2-4GB
// - Data-intensive: 4-8GB
// Monitor heap pressure against the configured limit
import v8 from 'v8';
const HEAP_WARNING_THRESHOLD = 0.85; // 85% of the limit
setInterval(() => {
const { heapUsed } = process.memoryUsage();
// heap_size_limit reflects --max-old-space-size; heapTotal is only what V8
// has currently allocated, so heapUsed / heapTotal overstates pressure
const limit = v8.getHeapStatistics().heap_size_limit;
const usageRatio = heapUsed / limit;
if (usageRatio > HEAP_WARNING_THRESHOLD) {
console.warn(`Heap usage critical: ${(usageRatio * 100).toFixed(1)}%`);
console.warn(`Used: ${(heapUsed / 1024 / 1024).toFixed(2)}MB / ${(limit / 1024 / 1024).toFixed(2)}MB`);
// Last-resort cleanup (only works with --expose-gc)
if (global.gc) global.gc();
}
}, 10000);
// Start process:
// node --max-old-space-size=2048 server.ts
clinic.js Heapprofiler Workflow
Clinic.js provides the most production-like profiling experience.
// Clinic.js is primarily a CLI; install it with: npm install -g clinic
// Usage:
// clinic doctor -- node server.js
// clinic heapprofiler -- node server.js
// clinic bubbleprof -- node server.js
// Programmatic use goes through the individual tool packages, e.g. @clinic/doctor:
import ClinicDoctor from '@clinic/doctor';
const doctor = new ClinicDoctor();
// collect() spawns the target process and gathers metrics while it runs
doctor.collect(['node', './server.js'], (err, filepath) => {
if (err) throw err;
console.log(`Profile data collected in ${filepath}`);
});
// Clinic detects:
// - Memory growth rate
// - GC frequency
// - Event loop latency
// - Libuv handle leaks
Run:
npx clinic doctor -- node server.js
npx clinic heapprofiler -- node server.js
# Each run generates an interactive HTML report
WeakRef and FinalizationRegistry for Caches
Use WeakRef to allow garbage collection of cached objects.
// WeakRef and FinalizationRegistry are language globals (Node 14.6+); no import is needed.
// ANTI-PATTERN: strong references prevent GC (same shape as BadCache above)
class StrongCache {
private cache = new Map<string, any>();
set(key: string, value: any): void {
this.cache.set(key, value);
}
get(key: string): any {
return this.cache.get(key);
}
}
// BETTER: weak references allow GC
class WeakCache {
private cache = new Map<string, WeakRef<any>>();
private registry = new FinalizationRegistry((key: string) => {
this.cache.delete(key);
});
set(key: string, value: object): void {
const ref = new WeakRef(value); // WeakRef targets must be objects, not primitives
this.cache.set(key, ref);
// Register finalizer: cleanup when value is GC'd
this.registry.register(value, key);
}
get(key: string): any {
const ref = this.cache.get(key);
if (!ref) return undefined;
const value = ref.deref(); // Try to get the value
if (!value) {
this.cache.delete(key); // Already GC'd
}
return value;
}
has(key: string): boolean {
const ref = this.cache.get(key);
if (!ref) return false;
const value = ref.deref();
if (!value) {
this.cache.delete(key);
return false;
}
return true;
}
}
// Real example: cached HTTP responses
class ResponseCache {
private cache = new WeakCache();
async getResponse(url: string): Promise<any> {
if (this.cache.has(url)) {
return this.cache.get(url);
}
const response = await fetch(url).then((r) => r.json());
this.cache.set(url, response);
return response;
}
}
// Usage: responses are GC'd when no longer referenced elsewhere
const cache = new ResponseCache();
const data1 = await cache.getResponse('https://api.example.com/data');
// If data1 goes out of scope and no other reference exists,
// it's eligible for GC and automatically removed from cache
Checklist
- ✓ Take heap snapshots in development when you suspect memory issues
- ✓ Use heapdump for production to capture memory state without downtime
- ✓ Implement cache eviction policies (LRU, TTL) instead of unbounded caches
- ✓ Review global state and event listeners: ensure cleanup on shutdown
- ✓ Use WeakRef for caches where values should be garbage collectible
- ✓ Monitor heap usage continuously and alert at 80%+ usage
- ✓ Use --expose-gc during profiling to manually trigger GC
- ✓ Run clinic.js heapprofiler in staging environments before production
Conclusion
Memory leaks are debuggable. Combine heap snapshots, automated profiling, and awareness of common leak patterns to catch problems early. Systematic memory profiling is the difference between a stable production system and one that crashes mysteriously at 3 AM.