Traffic Spike After Marketing Campaign — Surviving Your Own Success
By Sanjeev Sharma (@webcoderspeed1)
Introduction
Your marketing campaign launches at 9 AM. By 9:15, traffic is 50x normal. Your app is down. The campaign is a success — your infrastructure is not.
Traffic spikes from marketing campaigns are unique because you often know they're coming, they're sudden (not gradual), and they're the worst time for your app to fail.
- Why Spikes Are Different from Sustained Load
- Layer 1: CDN and Edge Caching (First Line of Defense)
- Layer 2: Application-Level Caching
- Layer 3: Rate Limiting and Load Shedding
- Layer 4: Circuit Breaker for Downstream Services
- Layer 5: Auto-Scaling Configuration
- Layer 6: Pre-warm Before the Campaign
- Layer 7: Queue Non-Critical Work
- The Campaign Launch Checklist
- Conclusion
Why Spikes Are Different from Sustained Load
Steady traffic growth gives your auto-scaler time to add capacity. A spike gives you seconds:
- Normal traffic: 1,000 req/s
- Campaign spike: 50,000 req/s (reached within 3 minutes)
- Auto-scaling time: 3-5 minutes to spin up new instances
- Gap: 3-5 minutes of 50x traffic with no extra capacity = crash
The solution is layered: serve gracefully what you can, and shed load safely when you can't.
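A quick back-of-envelope sketch makes the gap concrete (the function and numbers below are just this section's illustration, not any library API):

```typescript
// Requests that arrive above current capacity while auto-scaling is
// still spinning up: these must be served from cache, shed, or queued.
function excessRequestsDuringGap(
  spikeRps: number,    // traffic during the spike, req/s
  capacityRps: number, // what the current instances can absorb, req/s
  gapSeconds: number   // time until new instances are actually serving
): number {
  return Math.max(0, spikeRps - capacityRps) * gapSeconds
}

// 50,000 req/s against 1,000 req/s of capacity, with a 4-minute gap:
// 49,000 req/s x 240 s = 11,760,000 requests with nowhere to go.
const excess = excessRequestsDuringGap(50_000, 1_000, 240)
```

Each layer below either answers some of those requests without origin capacity, or rejects them cheaply.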
Layer 1: CDN and Edge Caching (First Line of Defense)
The fastest requests are those that never reach your server:
// Next.js — cache marketing landing pages aggressively
export async function generateStaticParams() {
  // Pre-render at build time → served from the CDN edge
  return [{ slug: 'campaign-2026' }]
}

// Force-cache API responses at the CDN level
app.get('/api/campaign/offers', (req, res) => {
  res.setHeader('Cache-Control', 'public, s-maxage=60, stale-while-revalidate=300')
  res.json(campaignOffers)
  // CDN caches this — your origin only gets called once per minute
})
# Nginx caching for campaign pages
proxy_cache_path /tmp/nginx_cache levels=1:2 keys_zone=campaign_cache:100m;

location /campaign {
  proxy_cache campaign_cache;
  proxy_cache_valid 200 60s;                    # Cache for 60 seconds
  proxy_cache_use_stale error timeout updating; # Serve stale during the spike
  add_header X-Cache-Status $upstream_cache_status;
  proxy_pass http://backend;
}
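It helps to spell out what `s-maxage=60, stale-while-revalidate=300` actually buys you. Here is a small model of the RFC 5861 semantics (illustrative only; real CDNs differ in details such as request coalescing):

```typescript
type CacheDecision = 'fresh' | 'stale-while-revalidate' | 'expired'

// What the edge does for a cached object of a given age, under
// 'public, s-maxage=60, stale-while-revalidate=300'.
function edgeBehavior(ageSeconds: number, sMaxage = 60, swr = 300): CacheDecision {
  // Within s-maxage: served from cache, origin untouched
  if (ageSeconds <= sMaxage) return 'fresh'
  // Within the SWR window: stale copy served instantly, refresh runs in the background
  if (ageSeconds <= sMaxage + swr) return 'stale-while-revalidate'
  // Beyond both: the next request must wait on the origin
  return 'expired'
}
```

During a spike, even the stale window counts as a win: users get an instant response, and the origin sees at most one background revalidation per key instead of the full request rate.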
Layer 2: Application-Level Caching
import { LRUCache } from 'lru-cache'

// In-memory cache for campaign-specific data
const campaignCache = new LRUCache<string, any>({
  max: 5000,
  ttl: 30_000, // 30-second TTL
})

app.get('/api/product/:id', async (req, res) => {
  const key = `product:${req.params.id}`

  // L1: in-process memory (~0ms)
  let data = campaignCache.get(key)
  if (data) return res.json(data)

  // L2: Redis (1-2ms)
  const cached = await redis.get(key)
  if (cached) {
    data = JSON.parse(cached)
    campaignCache.set(key, data)
    return res.json(data)
  }

  // L3: database (only on a full cache miss)
  data = await db.product.findById(req.params.id)
  await redis.setex(key, 60, JSON.stringify(data))
  campaignCache.set(key, data)
  res.json(data)
})
Layer 3: Rate Limiting and Load Shedding
When you can't serve everyone, fail fast and clearly — don't let requests queue forever:
import { RateLimiterRedis } from 'rate-limiter-flexible'

const rateLimiter = new RateLimiterRedis({
  storeClient: redis,
  keyPrefix: 'rl',
  points: 20,       // 20 requests
  duration: 1,      // per second
  blockDuration: 2, // block for 2s after the limit is hit
})

// Global request limiter
const globalLimiter = new RateLimiterRedis({
  storeClient: redis,
  keyPrefix: 'global',
  points: 50_000, // total requests per second allowed
  duration: 1,
})

app.use(async (req, res, next) => {
  try {
    // Check both the per-IP and the global limit
    await Promise.all([
      rateLimiter.consume(req.ip),
      globalLimiter.consume('global'),
    ])
    next()
  } catch {
    res.status(429)
      .setHeader('Retry-After', '2')
      .json({
        error: 'Too many requests',
        retryAfter: 2,
        message: 'We are experiencing high demand. Please try again in a moment.',
      })
  }
})
Layer 4: Circuit Breaker for Downstream Services
Protect your database and third-party services from the cascade:
import CircuitBreaker from 'opossum'

const dbBreaker = new CircuitBreaker(
  async (query: () => Promise<any>) => query(),
  {
    timeout: 3000,                // fail requests taking > 3s
    errorThresholdPercentage: 50, // open after a 50% failure rate
    resetTimeout: 30_000,         // try again after 30s
    volumeThreshold: 20,          // minimum 20 requests before evaluating
  }
)

dbBreaker.fallback(async () => {
  // Return a cached/degraded response instead of failing
  return getCachedFallbackData()
})

dbBreaker.on('open', () => {
  logger.alert('DB circuit breaker OPEN — serving cached data')
  // Trigger a PagerDuty or Slack alert
})

app.get('/api/featured', async (req, res) => {
  const data = await dbBreaker.fire(() => db.getFeaturedProducts())
  res.json(data)
})
Layer 5: Auto-Scaling Configuration
Pre-configure aggressive auto-scaling BEFORE the campaign:
# Kubernetes HPA — scale fast, scale early
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: api-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: api-deployment
  minReplicas: 3
  maxReplicas: 100
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 40 # Scale at 40% CPU (not 80%)
  behavior:
    scaleUp:
      stabilizationWindowSeconds: 0 # Scale up immediately
      policies:
        - type: Percent
          value: 100 # Double the pod count every 15s
          periodSeconds: 15
    scaleDown:
      stabilizationWindowSeconds: 300 # Scale down slowly

# Also configure the Cluster Autoscaler to add nodes quickly.
# On AWS EKS: enable Karpenter for sub-minute node provisioning.
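To sanity-check the scale-up policy above: "100 percent every 15 seconds" doubles the pod count each period until maxReplicas caps it. A quick model (it ignores pod startup and node-provisioning latency, which add real-world delay):

```typescript
// Upper bound on replica count over time under a 100%-per-period
// scale-up policy, capped at maxReplicas.
function replicasAfter(
  seconds: number,
  start = 3,
  maxReplicas = 100,
  periodSeconds = 15
): number {
  const periods = Math.floor(seconds / periodSeconds)
  return Math.min(maxReplicas, start * 2 ** periods)
}

// From 3 replicas: 3 → 6 → 12 → 24 → 48 → 96 → 100 (capped),
// i.e. about 90 seconds to hit maxReplicas if pods started instantly.
```

This is also why Layer 6 pre-scales anyway: the model assumes instant pod startup, while image pulls and node provisioning can easily add minutes.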
Layer 6: Pre-warm Before the Campaign
# Pre-scale before a known traffic spike
kubectl scale deployment api --replicas=20

# Pre-warm caches
curl -X POST https://your-api.com/internal/cache/warmup \
  -H "Authorization: Bearer $ADMIN_TOKEN" \
  -d '{"keys": ["featured_products", "campaign_offers", "homepage_feed"]}'
// Cache warmup endpoint: warms only the requested keys
app.post('/internal/cache/warmup', adminAuth, async (req, res) => {
  const { keys } = req.body

  const loaders: Record<string, () => Promise<unknown>> = {
    featured_products: async () =>
      redis.setex('featured_products', 300, JSON.stringify(await db.getFeaturedProducts())),
    campaign_offers: async () =>
      redis.setex('campaign_offers', 300, JSON.stringify(await db.getCampaignOffers())),
    homepage_feed: async () =>
      redis.setex('homepage_feed', 60, JSON.stringify(await db.getHomepageFeed())),
  }

  // Only warm keys we actually know how to load
  const warmed = keys.filter((key: string) => key in loaders)
  await Promise.all(warmed.map((key: string) => loaders[key]()))
  res.json({ warmed })
})
Layer 7: Queue Non-Critical Work
During spikes, defer anything not needed for the immediate response:
import Bull from 'bull'

const emailQueue = new Bull('emails', { redis })
const analyticsQueue = new Bull('analytics', { redis })

app.post('/api/order', async (req, res) => {
  // CRITICAL: create the order in the DB (must happen before we respond)
  const order = await db.order.create(req.body)

  // NON-CRITICAL: enqueue everything else. Enqueuing is a fast Redis write;
  // the actual sending and tracking happen later in separate worker processes.
  await emailQueue.add('confirmation', { orderId: order.id, email: order.email })
  await analyticsQueue.add('purchase', { orderId: order.id, amount: order.total })

  // Respond immediately
  res.status(201).json({ orderId: order.id })
})
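The route above only enqueues; separate worker processes drain the queues and do the actual sending. The boundary itself is simple enough to sketch in memory (illustrative only: a production deferral queue must be Redis-backed, as with Bull, so jobs survive restarts and can be retried):

```typescript
type Job = { name: string; payload: unknown }

// Minimal in-memory deferral queue: enqueueing is a cheap append that
// never blocks the request; a worker loop drains jobs at its own pace.
class DeferredQueue {
  private jobs: Job[] = []

  add(name: string, payload: unknown): void {
    this.jobs.push({ name, payload }) // request path: O(1), no I/O
  }

  // Worker side: process everything queued so far, return the count
  async drain(handler: (job: Job) => Promise<void>): Promise<number> {
    const batch = this.jobs.splice(0, this.jobs.length)
    for (const job of batch) await handler(job)
    return batch.length
  }
}
```

The key property is that a slow email provider or analytics endpoint can only slow down the worker loop, never the order response.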
The Campaign Launch Checklist
- ✅ Pre-scale pods/servers 30 minutes before campaign
- ✅ Warm caches (campaign pages, featured products)
- ✅ Test rate limiting and 429 responses
- ✅ Configure CDN to cache marketing pages aggressively
- ✅ Set up circuit breakers on DB and external APIs
- ✅ Configure HPA for aggressive scale-up
- ✅ Enable queue for emails, analytics, non-critical work
- ✅ Brief on-call team — someone watching dashboards
- ✅ Prepare a "high-traffic mode" toggle (simplify features)
- ✅ Load test at 10x expected peak before launch
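The "high-traffic mode" toggle from the checklist is worth sketching. The idea is a single flag, flipped at runtime, that sheds optional features so the core flow stays cheap. The flag store and feature names below are illustrative placeholders, not a real API:

```typescript
// Simple in-process flag. In production, read this from Redis or your
// feature-flag service so it can be flipped without a deploy.
let highTrafficMode = false

function setHighTrafficMode(on: boolean): void {
  highTrafficMode = on
}

// Core features always ship; optional ones are shed under load.
// (Feature names here are placeholders for your own.)
const CORE = ['checkout', 'product_pages']
const OPTIONAL = ['recommendations', 'live_inventory_badges', 'reviews']

function enabledFeatures(): string[] {
  return highTrafficMode ? [...CORE] : [...CORE, ...OPTIONAL]
}
```

Wiring this flag into route handlers means you can trade recommendations and reviews for checkout capacity with one command, instead of an emergency deploy mid-spike.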
Conclusion
Traffic spikes from marketing campaigns are the best kind of disaster — the ones you can see coming. Use CDN caching to serve static content without touching your servers, application caching to protect your database, rate limiting to shed load gracefully, circuit breakers to prevent cascades, and aggressive auto-scaling to grow capacity quickly. Do all this before the campaign launches, not during the incident.