Redis Eviction Causing Chaos — When Your Cache Turns on You
- Sanjeev Sharma (@webcoderspeed1)
Introduction
Your Redis instance hits its memory limit. Instead of throwing errors, it can silently start evicting keys, deleting them to make room for new ones. But which keys does it delete? Depending on your policy, it might delete your distributed locks, your rate-limit counters, or your session tokens.
Users get randomly logged out. Rate limiting stops working. Locks fail silently. And you have no idea why.
- Redis Eviction Policies
- Why This Causes Chaos
- Fix 1: Separate Redis Instances by Data Criticality
- Fix 2: Always Set TTLs on Cache Keys, Never on Critical Keys
- Fix 3: Monitor Memory Before It's Too Late
- Fix 4: Track Eviction Events
- Fix 5: Graceful Fallback When Cache Misses
- Fix 6: Right-Size Your Redis Instance
- The Redis Eviction Playbook
- Conclusion
Redis Eviction Policies
When Redis hits maxmemory, it uses an eviction policy to decide what to delete:
| Policy | What Gets Evicted |
|---|---|
| noeviction | Nothing — writes that need memory return an error |
| allkeys-lru | Least recently used keys across ALL keys |
| allkeys-lfu | Least frequently used keys across ALL keys |
| volatile-lru | LRU among keys WITH an expiry set |
| volatile-lfu | LFU among keys WITH an expiry set |
| volatile-ttl | Keys with the shortest TTL first |
| allkeys-random | Random keys across ALL keys — most dangerous! |
| volatile-random | Random keys among those with an expiry |
The default is noeviction. Most configs change it to allkeys-lru — which can silently evict your most critical keys.
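The danger in the table comes down to one question: can this policy delete a key that has no TTL? A tiny helper (our own sketch, not a Redis API; the strings are the real policy names) encodes that:

```typescript
// Which policies can evict a key, given whether it has a TTL?
const EVICTS_ANY_KEY = new Set(['allkeys-lru', 'allkeys-lfu', 'allkeys-random'])
const EVICTS_VOLATILE = new Set([
  'volatile-lru', 'volatile-lfu', 'volatile-ttl', 'volatile-random',
])

function canEvict(policy: string, hasTtl: boolean): boolean {
  if (EVICTS_ANY_KEY.has(policy)) return true    // a TTL is no protection here
  if (EVICTS_VOLATILE.has(policy)) return hasTtl // only keys with an expiry
  return false                                   // noeviction
}
```

Under allkeys-lru, `canEvict('allkeys-lru', false)` is true: your no-TTL session keys are fair game, which is exactly the chaos described next.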
Why This Causes Chaos
You have:
- session:abc123 (user session, no TTL!)
- ratelimit:ip:1.2.3.4 (rate limit counter, no TTL!)
- lock:payment:txn123 (distributed lock, no TTL!)
- cache:product:456 (product cache, TTL: 5min)
Policy: allkeys-lru
Redis gets full → evicts least recently used keys
If sessions/locks haven't been accessed recently → EVICTED silently
User gets logged out mysteriously.
Rate limits reset silently.
Distributed lock disappears mid-transaction → double payments.
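The double-payment failure can at least be detected: if each lock holder writes a unique token, it can re-check that token before committing. A minimal sketch (the key and token names are illustrative, not from the original setup):

```typescript
// A lock is only "still held" if the key exists AND holds OUR token.
// null means the key expired, or was silently evicted.
function lockStillHeld(storedValue: string | null, myToken: string): boolean {
  return storedValue === myToken
}

// Usage with ioredis (sketch):
//   await redis.set(`lock:payment:${txnId}`, myToken, 'EX', 30, 'NX')
//   ... do the work ...
//   if (!lockStillHeld(await redis.get(`lock:payment:${txnId}`), myToken)) {
//     throw new Error('Lock lost mid-transaction, aborting commit')
//   }
```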
Fix 1: Separate Redis Instances by Data Criticality
The most robust solution — never mix critical and cache data:
```typescript
import Redis from 'ioredis'

// Redis for CRITICAL data — never evict
const sessionRedis = new Redis({
  host: 'redis-sessions',
  // Server config: maxmemory-policy noeviction
  // Alert before full, never auto-evict
})

// Redis for CACHE data — safe to evict
const cacheRedis = new Redis({
  host: 'redis-cache',
  // Server config: maxmemory-policy allkeys-lfu
  // Fine to evict product caches
})

// Usage
await sessionRedis.set(`session:${token}`, userId, 'EX', 86400)
await cacheRedis.set(`product:${id}`, JSON.stringify(product), 'EX', 300)
```
```conf
# redis-sessions.conf
maxmemory 4gb
# Never auto-evict. Alert instead.
maxmemory-policy noeviction

# redis-cache.conf
maxmemory 8gb
# Evict least-used cache entries
maxmemory-policy allkeys-lfu
```

(Note: redis.conf does not support inline comments after a directive's arguments, so the comments go on their own lines.)
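Config drift can silently undo this split, so it is worth asserting the policy when the app starts. A sketch: ioredis returns CONFIG GET as a flat [name, value] array, which we parse and check (the helper names are ours):

```typescript
// CONFIG GET replies as a flat array: ['maxmemory-policy', 'noeviction']
function parseConfigReply(reply: string[]): Record<string, string> {
  const out: Record<string, string> = {}
  for (let i = 0; i + 1 < reply.length; i += 2) out[reply[i]] = reply[i + 1]
  return out
}

// Fail fast if an instance is not configured the way the code assumes
async function assertEvictionPolicy(
  client: { config(subcommand: string, key: string): Promise<unknown> },
  expected: string,
) {
  const reply = (await client.config('GET', 'maxmemory-policy')) as string[]
  const actual = parseConfigReply(reply)['maxmemory-policy']
  if (actual !== expected) {
    throw new Error(`maxmemory-policy is "${actual}", expected "${expected}"`)
  }
}

// e.g. await assertEvictionPolicy(sessionRedis, 'noeviction')
```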
Fix 2: Always Set TTLs on Cache Keys, Never on Critical Keys
```typescript
// ✅ Cache keys ALWAYS have a TTL — safe to evict
await redis.set(`product:${id}`, data, 'EX', 300)    // 5 min TTL
await redis.set(`feed:${userId}`, data, 'EX', 60)    // 1 min TTL
await redis.set(`config:global`, data, 'EX', 3600)   // 1 hr TTL

// ✅ Critical keys have NO TTL, or a TTL that carries meaning
await redis.set(`session:${token}`, userId)          // No TTL — expiry managed in app
await redis.set(`lock:${resource}`, '1', 'EX', 30)   // TTL = lock lease time
await redis.set(`ratelimit:${ip}`, count, 'EX', 60)  // TTL = rate-limit window

// With volatile-lru, only keys WITH a TTL are eviction candidates,
// so no-TTL keys like the sessions above are protected. Locks and
// rate limits DO carry (semantic) TTLs, so they remain evictable —
// one more reason to isolate them on a noeviction instance (Fix 1).
```
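One way to make the "cache keys always have a TTL" rule unskippable is a thin wrapper that refuses to write a cache key without one. A sketch: `cacheSet` is our own helper, not an ioredis method, and the `CacheClient` interface is just the slice of ioredis it needs:

```typescript
// Minimal shape of the client we need (ioredis satisfies this)
interface CacheClient {
  set(key: string, value: string, mode: 'EX', ttl: number): Promise<unknown>
}

// Refuse cache writes without a sane TTL: eviction-safe by construction
function cacheSet(client: CacheClient, key: string, value: string, ttlSeconds: number) {
  if (!Number.isInteger(ttlSeconds) || ttlSeconds <= 0) {
    throw new Error(`cacheSet(${key}): TTL must be a positive integer, got ${ttlSeconds}`)
  }
  return client.set(key, value, 'EX', ttlSeconds)
}
```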
Fix 3: Monitor Memory Before It's Too Late
```typescript
import Redis from 'ioredis'

const redis = new Redis()

// Stand-in logger — swap for your logging library
const logger = {
  warn: (msg: string) => console.warn(msg),
  alert: (msg: string) => console.error(msg), // wire this to paging
}

async function monitorRedisMemory() {
  const info = await redis.info('memory')
  const memInfo: Record<string, string> = {}
  for (const line of info.split('\r\n')) {
    const [key, value] = line.split(':')
    if (key && value) memInfo[key.trim()] = value.trim()
  }

  const usedMB = parseInt(memInfo['used_memory'], 10) / 1024 / 1024
  const maxBytes = parseInt(memInfo['maxmemory'], 10)
  if (!maxBytes) {
    // maxmemory 0 means "unlimited": no eviction, but the OS may OOM-kill
    logger.warn('maxmemory is 0 (unlimited): Redis will grow until the OS intervenes')
    return { usedMB, maxMB: 0, usedPercent: 0 }
  }
  const maxMB = maxBytes / 1024 / 1024
  const usedPercent = (usedMB / maxMB) * 100

  console.log(`Redis memory: ${usedMB.toFixed(0)}MB / ${maxMB.toFixed(0)}MB (${usedPercent.toFixed(1)}%)`)
  if (usedPercent > 80) {
    logger.warn(`Redis at ${usedPercent.toFixed(1)}% capacity — eviction risk!`)
  }
  if (usedPercent > 90) {
    logger.alert(`CRITICAL: Redis at ${usedPercent.toFixed(1)}% — evictions likely occurring!`)
  }
  return { usedMB, maxMB, usedPercent }
}

setInterval(monitorRedisMemory, 30_000)
```
Fix 4: Track Eviction Events
```typescript
// Enable keyevent notifications for expirations (x) AND evictions (e).
// 'Ex' alone only reports expired keys — the 'e' flag is what reports evictions.
await redis.config('SET', 'notify-keyspace-events', 'Exe')

const subscriber = new Redis()
await subscriber.subscribe('__keyevent@0__:expired', '__keyevent@0__:evicted')

subscriber.on('message', (channel, key) => {
  if (channel.endsWith(':evicted')) {
    logger.warn(`Key evicted: ${key}`)
    // Alert if a critical key was evicted
    if (key.startsWith('session:') || key.startsWith('lock:') || key.startsWith('ratelimit:')) {
      logger.alert(`CRITICAL key evicted: ${key} — possible data loss!`)
    }
  }
})
```
Fix 5: Graceful Fallback When Cache Misses
Write your application to handle cache misses gracefully — eviction just means a cache miss:
```typescript
async function getProduct(productId: string) {
  try {
    const cached = await cacheRedis.get(`product:${productId}`)
    if (cached) return JSON.parse(cached)
    // null = cache miss (never set, expired, or evicted) → fall through to DB
  } catch (err) {
    // Redis unreachable → also fall through to DB
    logger.warn(`Redis error for product:${productId} — fetching from DB`)
  }

  // Always have a DB fallback
  const product = await db.products.findById(productId)
  if (product) {
    // Re-populate the cache (fire and forget)
    cacheRedis
      .set(`product:${productId}`, JSON.stringify(product), 'EX', 300)
      .catch(err => logger.warn('Cache write failed:', err))
  }
  return product
}
```
Fix 6: Right-Size Your Redis Instance
Check actual memory usage patterns:
```bash
# See the biggest keys per data type
redis-cli --bigkeys

# Overall memory health and detailed statistics
redis-cli memory doctor
redis-cli memory stats

# Memory usage per key for a pattern (SCAN is non-blocking)
redis-cli --scan --pattern "cache:*" | xargs -I {} redis-cli memory usage {}
```
```typescript
// Monitor growth per key prefix (uses SCAN, never the blocking KEYS command)
async function getMemoryByPrefix(prefixes: string[]) {
  const results: Record<string, number> = {}
  for (const prefix of prefixes) {
    let totalMB = 0
    let sampled = 0
    let cursor = '0'
    do {
      const [next, keys] = await redis.scan(cursor, 'MATCH', `${prefix}*`, 'COUNT', 100)
      cursor = next
      for (const key of keys) {
        const usage = await redis.call('MEMORY', 'USAGE', key)
        totalMB += Number(usage) / 1024 / 1024
        sampled++
      }
    } while (cursor !== '0' && sampled < 1000) // Sample at most ~1000 keys per prefix
    results[prefix] = totalMB
  }
  return results
}
```
The Redis Eviction Playbook
| Data Type | Policy | TTL Strategy |
|---|---|---|
| Sessions | noeviction instance | No TTL (manage expiry in app) |
| Rate limits | noeviction instance | TTL = window duration |
| Distributed locks | noeviction instance | TTL = lock lease time |
| Product cache | allkeys-lfu instance | Short TTL (5-60 min) |
| Feature flags | volatile-lru | Long TTL (1 hour) |
| Leaderboards | volatile-lru | TTL or no eviction |
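To keep the playbook enforced in application code rather than in reviewers' heads, one option is routing every key to an instance by prefix. A sketch (the prefixes match this article's examples; the instance labels are illustrative):

```typescript
// Critical prefixes live on the noeviction instance; everything else is cache
const CRITICAL_PREFIXES = ['session:', 'lock:', 'ratelimit:']

function instanceFor(key: string): 'critical' | 'cache' {
  return CRITICAL_PREFIXES.some(p => key.startsWith(p)) ? 'critical' : 'cache'
}

// e.g. const client = instanceFor(key) === 'critical' ? sessionRedis : cacheRedis
```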
Conclusion
Redis eviction causing chaos is a configuration problem, not a Redis problem. Separate your critical data (sessions, locks, counters) from cache data into different Redis instances with different eviction policies. Always set TTLs on cache keys, never on critical keys. Monitor memory usage and get alerted before you hit the limit. With these practices, Redis eviction becomes a non-event — just cache misses, handled gracefully.