Redis Patterns in Production — Caching, Sessions, Locks, and Rate Limiting Done Right

Introduction

Redis is deceptively simple: a key-value store. Yet production deployments fail because teams misuse caching strategies, implement naive locking, or ignore memory limits. This post covers battle-tested patterns for caching, rate limiting, locking, and pub/sub that work at scale.

Cache-Aside vs Write-Through vs Write-Behind

Three caching strategies with different trade-offs:

// node-redis v4: createClient is a named export and connect() is explicit.
// `db` is assumed to be an already-initialized SQL client (e.g. pg).
import { createClient } from 'redis';
const client = createClient();
await client.connect();

// CACHE-ASIDE (most common)
// Read from cache, miss falls through to database
async function getUserCacheAside(userId: string) {
  // 1. Check cache
  const cached = await client.get(`user:${userId}`);
  if (cached) return JSON.parse(cached);

  // 2. Cache miss: read from database
  const user = await db.query('SELECT * FROM users WHERE id = $1', [userId]);

  // 3. Populate cache
  await client.setEx(`user:${userId}`, 3600, JSON.stringify(user));

  return user;
}

// WRITE-THROUGH
// Write to cache AND database before responding
async function updateUserWriteThrough(userId: string, updates: object) {
  // 1. Write to database first (source of truth)
  // Column names can't be bound as parameters; build the SET clause from a
  // trusted allowlist of keys, never directly from user input
  const columns = Object.keys(updates);
  const setClause = columns.map((col, i) => `${col} = $${i + 1}`).join(', ');
  const user = await db.query(
    `UPDATE users SET ${setClause} WHERE id = $${columns.length + 1} RETURNING *`,
    [...Object.values(updates), userId]
  );

  // 2. Update cache immediately
  await client.setEx(`user:${userId}`, 3600, JSON.stringify(user));

  return user;
}

// Problem: write-through blocks on database latency
// Solution: write-through with async cache update
async function updateUserWriteThroughAsync(userId: string, updates: object) {
  // 1. Write to database
  // Build the SET clause from a trusted allowlist of keys (see write-through above)
  const columns = Object.keys(updates);
  const setClause = columns.map((col, i) => `${col} = $${i + 1}`).join(', ');
  const user = await db.query(
    `UPDATE users SET ${setClause} WHERE id = $${columns.length + 1} RETURNING *`,
    [...Object.values(updates), userId]
  );

  // 2. Update cache asynchronously (don't wait)
  client.setEx(`user:${userId}`, 3600, JSON.stringify(user)).catch(console.error);

  return user;
}

// WRITE-BEHIND (cache-local write, async database flush)
// Fast writes, data loss risk if cache crashes
async function logEventWriteBehind(event: object) {
  // 1. Write to cache immediately (fast); embed the id so it survives the flush
  const eventId = Date.now();
  await client.lPush('events:queue', JSON.stringify({ id: eventId, ...event }));
  await client.incr('events:counter');  // For metrics

  return { id: eventId };
}

// Background job: flush events from cache to database
setInterval(async () => {
  // Producers LPUSH to the head, so the oldest events sit at the tail.
  // Reading and trimming from the tail avoids racing concurrent pushes
  // (trimming from the head would drop events that arrived mid-flush).
  const events = await client.lRange('events:queue', -1000, -1);
  if (events.length === 0) return;

  // Bulk insert to database (entries are already JSON strings)
  await db.query(
    'INSERT INTO events (data) VALUES ' +
    events.map((_, i) => `($${i + 1})`).join(', '),
    events
  );

  // Remove the flushed entries from the tail
  await client.lTrim('events:queue', 0, -(events.length + 1));
}, 5000);  // Flush every 5 seconds

Cache-aside works for reads. Write-through ensures consistency. Write-behind is fast but risky. Choose based on consistency vs latency requirements.
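One refinement worth adding to cache-aside: when a burst of keys is populated together with the same TTL, they all expire together and the resulting misses stampede the database at once. A little TTL jitter spreads the expiries out. A minimal sketch — `jitteredTtl` is a hypothetical helper, not a library function:

```typescript
// Return a TTL within +/- `spread` (as a fraction) of `base` seconds,
// so entries cached at the same moment don't all expire at the same moment.
function jitteredTtl(base: number, spread: number = 0.1): number {
  const delta = base * spread;  // e.g. 3600s +/- 10%
  return Math.round(base - delta + Math.random() * 2 * delta);
}

// Usage in the cache-aside populate step (sketch, with the client above):
// await client.setEx(`user:${userId}`, jitteredTtl(3600), JSON.stringify(user));
```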

Distributed Locks with Lua Scripts

A plain SET NX with an expiry acquires safely, but the release is the dangerous part: a bare DEL can delete a lock that another process now holds. Use a random token and a Lua script for atomic release:

import { createClient } from 'redis';
const client = createClient();
await client.connect();

// NAIVE (BROKEN): Race condition exists
async function naiveLock(key: string) {
  const existing = await client.get(key);
  if (existing) throw new Error('Already locked');

  // RACE: Another process can acquire lock between check and set
  await client.set(key, '1', { EX: 30 });
}

// BETTER: SET NX with expiry (atomic)
async function basicLock(key: string) {
  const acquired = await client.set(key, '1', { NX: true, EX: 30 });
  if (!acquired) throw new Error('Lock not acquired');
}

// PRODUCTION: Lua script with random token
async function acquireLockWithToken(key: string, ttl: number = 30) {
  const token = Math.random().toString(36).slice(2);  // simple unique token; prefer crypto.randomUUID() in production

  const acquired = await client.set(key, token, { NX: true, EX: ttl });
  if (!acquired) throw new Error('Lock not acquired');

  return token;
}

async function releaseLockWithToken(key: string, token: string) {
  // Lua script: check token before deleting (prevent accidental unlock)
  const script = `
    if redis.call('get', KEYS[1]) == ARGV[1] then
      return redis.call('del', KEYS[1])
    else
      return 0
    end
  `;

  const result = await client.eval(script, { keys: [key], arguments: [token] });
  if (result === 0) throw new Error('Token mismatch');
}

// Usage
async function criticalSection() {
  const token = await acquireLockWithToken('resource:critical');
  try {
    // Critical work: only one process at a time
    await db.query('UPDATE balance SET amount = amount - 100 WHERE id = 1');
  } finally {
    await releaseLockWithToken('resource:critical', token);
  }
}

// MULTI-LOCK (prevent deadlock with sorted order)
async function acquireMultipleLocks(keys: string[]) {
  // Sort keys so every process acquires in the same order (prevents deadlock)
  const sortedKeys = [...keys].sort();
  const tokens: Record<string, string> = {};

  try {
    for (const key of sortedKeys) {
      tokens[key] = await acquireLockWithToken(key, 30);
    }
  } catch (err) {
    // Release any locks already held before re-throwing
    await releaseMultipleLocks(tokens);
    throw err;
  }

  return tokens;
}

async function releaseMultipleLocks(tokens: Record<string, string>) {
  for (const [key, token] of Object.entries(tokens)) {
    await releaseLockWithToken(key, token);
  }
}

Always use token-based locks with Lua release to prevent accidental unlocks.
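acquireLockWithToken above fails fast when the lock is contended; callers usually retry with exponential backoff instead of erroring out immediately. A minimal generic sketch — `withRetries` is a hypothetical helper, not from any library:

```typescript
// Retry an async operation with exponential backoff plus jitter.
async function withRetries<T>(
  fn: () => Promise<T>,
  attempts: number = 5,
  baseMs: number = 50
): Promise<T> {
  for (let i = 0; i < attempts; i++) {
    try {
      return await fn();
    } catch (err) {
      if (i === attempts - 1) throw err;  // out of attempts: propagate
      const delay = baseMs * 2 ** i + Math.random() * baseMs;
      await new Promise(resolve => setTimeout(resolve, delay));
    }
  }
  throw new Error('unreachable');
}

// Usage with the lock helpers above (sketch):
// const token = await withRetries(() => acquireLockWithToken('resource:critical'));
```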

Sliding Window Rate Limiter in Lua

Implement rate limits over a window that slides continuously with time, rather than resetting at fixed intervals:

import { createClient } from 'redis';
const client = createClient();
await client.connect();

// SLIDING WINDOW RATE LIMITER
// Allow N requests per M seconds
class RateLimiter {
  constructor(
    private windowSize: number = 60,  // seconds
    private maxRequests: number = 100  // requests per window
  ) {}

  async isAllowed(key: string): Promise<boolean> {
    const now = Date.now() / 1000;
    const windowStart = now - this.windowSize;

    // Lua script: atomic increment and expiry
    const script = `
      local key = KEYS[1]
      local now = tonumber(ARGV[1])
      local window_start = tonumber(ARGV[2])
      local max_requests = tonumber(ARGV[3])
      local window_size = tonumber(ARGV[4])

      -- Remove old entries outside window
      redis.call('zremrangebyscore', key, '-inf', window_start)

      -- Count current requests in window
      local current = redis.call('zcard', key)

      if current < max_requests then
        -- Add this request; a random suffix keeps same-timestamp entries
        -- distinct (duplicate members would be silently collapsed)
        redis.call('zadd', key, now, now .. ':' .. math.random())
        redis.call('expire', key, window_size)
        return 1
      else
        return 0
      end
    `;

    const result = await client.eval(script, {
      keys: [key],
      // node-redis expects EVAL arguments as strings
      arguments: [now, windowStart, this.maxRequests, this.windowSize].map(String)
    });

    return result === 1;
  }

  async getRemainingRequests(key: string): Promise<number> {
    const now = Date.now() / 1000;
    const windowStart = now - this.windowSize;

    await client.zRemRangeByScore(key, '-inf', windowStart);
    const current = await client.zCard(key);
    return Math.max(0, this.maxRequests - current);
  }

  async resetQuota(key: string): Promise<void> {
    await client.del(key);
  }
}

// Usage: rate limit API requests
const limiter = new RateLimiter(60, 100);  // 100 req/min

app.get('/api/data', async (req, res) => {
  const userId = req.user.id;
  const allowed = await limiter.isAllowed(`user:${userId}`);

  if (!allowed) {
    res.status(429).json({
      error: 'Rate limit exceeded',
      retryAfter: 60,
      remaining: 0
    });
    return;
  }

  const remaining = await limiter.getRemainingRequests(`user:${userId}`);
  res.json({
    data: [],
    rateLimit: {
      remaining,
      limit: 100,
      resetInSeconds: 60
    }
  });
});

Lua scripts ensure atomic check-and-update operations without race conditions.
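One performance note: EVAL ships the full script body on every call. Redis caches scripts server-side keyed by the SHA-1 of their text, so hot paths can send just the digest with EVALSHA. The digest SCRIPT LOAD returns is literally the SHA-1 of the script string, so it can even be computed locally — a sketch, assuming node-redis v4's `scriptLoad`/`evalSha` method names:

```typescript
import { createHash } from 'crypto';

const releaseScript = `
  if redis.call('get', KEYS[1]) == ARGV[1] then
    return redis.call('del', KEYS[1])
  else
    return 0
  end
`;

// SCRIPT LOAD would return exactly this digest:
const sha = createHash('sha1').update(releaseScript).digest('hex');

// With a connected client (sketch):
// const loaded = await client.scriptLoad(releaseScript);  // === sha
// const result = await client.evalSha(loaded, { keys: [key], arguments: [token] });
// If Redis restarts, EVALSHA replies NOSCRIPT: fall back to client.eval once
// (which re-caches the script), then resume using evalSha.
```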

Pub/Sub for Real-Time Events

Implement publish/subscribe for real-time notifications:

import { createClient } from 'redis';

// A client in subscriber mode can't issue regular commands,
// so publisher and subscriber need separate connections
const publisher = createClient();
const subscriber = createClient();

async function setupSubscriber() {
  await publisher.connect();
  await subscriber.connect();

  // node-redis v4: the message listener is passed directly to subscribe()
  await subscriber.subscribe(
    ['notifications:users:123', 'notifications:system', 'events:orders'],
    (message, channel) => {
      console.log(`[${channel}] ${message}`);

      if (channel === 'notifications:users:123') {
        handleUserNotification(JSON.parse(message));
      }
    }
  );

  // Pattern subscriptions get their own listener via pSubscribe
  await subscriber.pSubscribe('notifications:users:*', (message, channel) => {
    console.log(`[notifications:users:*] ${channel}: ${message}`);
  });
}

// Publish messages
async function notifyUser(userId: string, message: object) {
  await publisher.publish(
    `notifications:users:${userId}`,
    JSON.stringify(message)
  );
}

async function broadcastNotification(message: object) {
  await publisher.publish(
    'notifications:system',
    JSON.stringify(message)
  );
}

// Real-world: notify order status change
async function notifyOrderStatusChange(orderId: string, status: string) {
  const message = {
    type: 'ORDER_STATUS_CHANGED',
    orderId,
    status,
    timestamp: new Date().toISOString()
  };

  // Get order details to find user
  const order = await db.query('SELECT user_id FROM orders WHERE id = $1', [orderId]);

  // Notify user and broadcast to system channel
  await Promise.all([
    publisher.publish(`notifications:users:${order.user_id}`, JSON.stringify(message)),
    publisher.publish('notifications:system', JSON.stringify(message))
  ]);
}

Pub/Sub is fast but does NOT persist messages: a subscriber must be connected at publish time, and anything published while it is offline is lost.

Sorted Sets for Leaderboards

Efficiently track and query rankings:

import { createClient } from 'redis';
const client = createClient();
await client.connect();

class Leaderboard {
  constructor(private key: string = 'leaderboard:global') {}

  async recordScore(userId: string, score: number) {
    // ZADD updates the score if the member already exists
    await client.zAdd(this.key, { score, value: userId });
  }

  async getTopPlayers(limit: number = 10) {
    // Top N players by score, descending
    const players = await client.zRangeWithScores(this.key, 0, limit - 1, { REV: true });

    return players.map((player, i) => ({
      rank: i + 1,
      userId: player.value,
      score: player.score
    }));
  }

  async getUserRank(userId: string) {
    // ZREVRANK is 0-based; add 1 for a 1-based rank
    const rank = await client.zRevRank(this.key, userId);
    if (rank === null) return null;

    const score = await client.zScore(this.key, userId);
    return { rank: rank + 1, score };
  }

  async getPlayersBetweenRanks(start: number, end: number) {
    // Ranks are 1-based on input; Redis indices are 0-based
    const players = await client.zRangeWithScores(
      this.key,
      start - 1,
      end - 1,
      { REV: true }
    );

    return players.map((player, i) => ({
      rank: start + i,
      userId: player.value,
      score: player.score
    }));
  }

  async incrementScore(userId: string, delta: number) {
    // ZINCRBY is atomic and returns the new score as a number
    return client.zIncrBy(this.key, delta, userId);
  }

  async removePlayer(userId: string) {
    await client.zRem(this.key, userId);
  }
}

// Usage
const leaderboard = new Leaderboard();

// Record score after game
await leaderboard.recordScore('user:123', 1500);
await leaderboard.recordScore('user:456', 1200);

// Get top 10
const topPlayers = await leaderboard.getTopPlayers(10);
console.log(topPlayers);
// [
//   { rank: 1, userId: 'user:123', score: 1500 },
//   { rank: 2, userId: 'user:456', score: 1200 }
// ]

// Get user's rank
const rank = await leaderboard.getUserRank('user:123');
console.log(rank);  // { rank: 1, score: 1500 }

// Increment score
const newScore = await leaderboard.incrementScore('user:456', 300);
console.log(newScore);  // 1500

Sorted sets are Redis's most powerful data structure for rankings and leaderboards.
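A common follow-on feature is "players around me": look up the user's rank, then fetch a window of neighbors with getPlayersBetweenRanks. The window clamping is easy to get wrong at the top of the board — a sketch, where `rankWindow` is a hypothetical helper:

```typescript
// Compute a 1-based rank window of `size` entries centered on `rank`,
// clamped so it never starts below rank 1.
function rankWindow(rank: number, size: number): { start: number; end: number } {
  const half = Math.floor(size / 2);
  const start = Math.max(1, rank - half);
  return { start, end: start + size - 1 };
}

// Usage with the Leaderboard class above (sketch):
// const me = await leaderboard.getUserRank('user:123');
// if (me) {
//   const { start, end } = rankWindow(me.rank, 5);
//   const neighbors = await leaderboard.getPlayersBetweenRanks(start, end);
// }
```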

Redis Streams for Durable Messaging

Unlike pub/sub, streams persist messages and support consumer groups:

import { createClient } from 'redis';
const client = createClient();
await client.connect();

// Producer: add message to stream
async function publishEvent(streamKey: string, fields: Record<string, string>) {
  // Stream entries are flat string field/value pairs
  const messageId = await client.xAdd(
    streamKey,
    '*',  // Auto-generate a timestamp-based ID
    fields
  );

  console.log(`Published event ${messageId}`);
  return messageId;
}

// Consumer group: track processed messages
async function setupConsumerGroup(streamKey: string, groupName: string) {
  try {
    // MKSTREAM creates the stream if it doesn't exist yet
    await client.xGroupCreate(streamKey, groupName, '0', { MKSTREAM: true });
  } catch (err: any) {
    if (!err.message.includes('BUSYGROUP')) throw err;
    // Group already exists
  }
}

// Consumer: read and process messages
async function consumeEvents(streamKey: string, groupName: string, consumerId: string) {
  while (true) {
    // Read 1 new message for this consumer (block up to 1 second)
    const response = await client.xReadGroup(
      groupName,
      consumerId,
      { key: streamKey, id: '>' },  // '>' = messages never delivered to this group
      { COUNT: 1, BLOCK: 1000 }
    );

    if (!response) continue;

    for (const stream of response) {
      for (const { id, message } of stream.messages) {
        try {
          console.log(`[${id}] Processing event:`, message);

          // Process message
          await processEvent(message);

          // Acknowledge message (removes it from the pending entries list)
          await client.xAck(streamKey, groupName, id);
        } catch (err) {
          console.error('Error processing message:', err);
          // Not acknowledged: stays pending and will be retried
        }
      }
    }
  }
}

// Monitor consumer group lag
async function getConsumerGroupStatus(streamKey: string, groupName: string) {
  // XINFO GROUPS returns one entry per consumer group on the stream
  const groups = await client.xInfoGroups(streamKey);
  for (const group of groups) {
    console.log({
      name: group.name,
      consumers: group.consumers,
      pending: group.pending,
      lastDeliveredId: group.lastDeliveredId
    });
  }
}

// Real-world: event sourcing
async function publishOrderEvent(orderId: string, event: string, data: object) {
  await publishEvent('stream:order-events', {
    event_type: event,
    order_id: orderId,
    data: JSON.stringify(data)
  });
}

async function startEventProcessor() {
  await setupConsumerGroup('stream:order-events', 'order-processor');
  await consumeEvents('stream:order-events', 'order-processor', 'worker-1');
}

Streams are ideal for durable message processing, event sourcing, and audit logs.
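One useful property of the auto-generated IDs above: `*` produces IDs of the form `<milliseconds>-<sequence>`, so every entry carries its arrival time. That enables time-based queries (e.g. XRANGE up to a cutoff) without storing a separate timestamp field. A sketch — `parseStreamId` is a hypothetical helper:

```typescript
// Split a Redis stream ID ("<ms>-<seq>") into its timestamp and sequence parts.
function parseStreamId(id: string): { ms: number; seq: number } {
  const [ms, seq] = id.split('-');
  return { ms: Number(ms), seq: Number(seq) };
}

// Usage (sketch): fetch entries older than one hour, e.g. for archival
// const cutoff = Date.now() - 3600_000;
// const old = await client.xRange('stream:order-events', '-', `${cutoff}-0`);
```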

Memory Management and Eviction Policies

Redis stores data in RAM. Manage memory carefully:

# Check memory usage
INFO memory
# Output (abridged):
#   used_memory: 104857600        (100MB)
#   used_memory_peak: 209715200   (200MB)
#   used_memory_human: 100M
#   mem_fragmentation_ratio: 1.1

# Eviction policy (in redis.conf, or at runtime via CONFIG SET)
maxmemory 1gb
maxmemory-policy allkeys-lru

# Policy options:
#   noeviction: return errors when full (default)
#   allkeys-lru: evict least recently used keys
#   allkeys-lfu: evict least frequently used keys
#   volatile-lru: evict LRU among keys with a TTL
#   volatile-lfu: evict LFU among keys with a TTL
#   allkeys-random: random eviction
#   volatile-random: random eviction (keys with a TTL only)
#   volatile-ttl: evict keys closest to expiry

# Get memory health advice (fragmentation, big clients, etc.)
MEMORY DOCTOR

# Estimate the size of a single key
MEMORY USAGE key-name

# Allocator statistics, broken down by allocation class
MEMORY STATS

# Find the largest keys by sampling the keyspace (run from a shell)
redis-cli --bigkeys

Set maxmemory-policy allkeys-lru for cache workloads. Monitor fragmentation.
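The INFO output above is a plain text block of `field:value` lines, so pulling `mem_fragmentation_ratio` out of it programmatically makes a simple health check. A sketch — `parseInfo` is a hypothetical helper, while `client.info('memory')` is the node-redis v4 call assumed in the comment:

```typescript
// Parse the "field:value" lines of an INFO section into a string map,
// skipping blank lines and "# Section" headers.
function parseInfo(info: string): Record<string, string> {
  const result: Record<string, string> = {};
  for (const line of info.split(/\r?\n/)) {
    if (!line || line.startsWith('#')) continue;
    const idx = line.indexOf(':');
    if (idx === -1) continue;
    result[line.slice(0, idx)] = line.slice(idx + 1);
  }
  return result;
}

// Usage with a connected client (sketch):
// const memory = parseInfo(await client.info('memory'));
// const ratio = parseFloat(memory['mem_fragmentation_ratio']);
// if (ratio > 1.5) console.warn(`High fragmentation: ${ratio}`);
```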

Redis Patterns Checklist

  • Cache-aside pattern for read caching
  • Write-through or write-behind for consistency vs latency trade-off
  • Distributed locks with token-based release via Lua
  • Sliding window rate limiter for API limits
  • Pub/Sub for notifications (non-durable)
  • Streams for durable event processing
  • Sorted sets for rankings/leaderboards
  • TTL set on all keys (prevent unbounded growth)
  • Maxmemory policy configured (allkeys-lru for caches)
  • Memory fragmentation monitored

Conclusion

Redis excels at specific patterns: fast caching, atomic operations (via Lua), real-time messaging, and data structures. Use cache-aside for reads, write-through for consistency, and distributed locks for critical sections. Implement rate limits with sliding window Lua scripts. Choose Pub/Sub for ephemeral notifications or Streams for durable messaging. Always set TTL, configure eviction policies, and monitor memory. Redis is a precision tool—misuse it and production burns.