Oct 20, 2018 · 2 min read

Mastering Redis Caching Strategies for Scalable APIs

Caching is one of the highest-leverage optimizations for API performance. Redis is a fast in-memory data store that helps reduce database load, improve response time, and smooth traffic spikes.

Why Redis for Caching?

  • Sub-millisecond read latency
  • Easy key-value model for API responses
  • TTL-based expiration built in
  • Rich data structures (hashes, sets, sorted sets)

Cache-Aside Pattern

The most common pattern is cache-aside:

  1. Read from Redis first
  2. If cache miss, read from database
  3. Store result in Redis with a TTL
  4. Return response
```ts
async function getUserProfile(userId: string) {
  const key = `user:${userId}:profile`

  const cached = await redis.get(key)
  if (cached) return JSON.parse(cached)

  const profile = await db.user.findUnique({ where: { id: userId } })
  if (profile) {
    // Cache for 5 minutes; skip caching misses so a missing user
    // isn't pinned as null for the full TTL.
    await redis.set(key, JSON.stringify(profile), { EX: 300 })
  }
  return profile
}
```

TTL Strategy

Different endpoints need different freshness windows:

  • Static/reference data: 30m–24h
  • User dashboards: 1–5m
  • Critical metrics: 15–60s

Use short TTLs for volatile data and longer TTLs for stable resources.
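As a sketch, these tiers can live in a single policy table so TTLs aren't scattered across handlers. The tier names and exact values below are illustrative, not prescriptive:

```typescript
// Hypothetical TTL policy table; values sit inside the bands above
// and should be tuned per endpoint.
const TTL_SECONDS = {
  reference: 60 * 60, // static/reference data: 1h (within the 30m–24h band)
  dashboard: 120,     // user dashboards: 2m
  metrics: 30,        // critical metrics: 30s
} as const;

type CacheTier = keyof typeof TTL_SECONDS;

// Look up the freshness window for a given tier.
function ttlFor(tier: CacheTier): number {
  return TTL_SECONDS[tier];
}
```

Centralizing the policy makes it trivial to audit and adjust freshness windows in one place.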

Invalidation Rules

Never rely on TTL alone for write-heavy domains.

  • On write/update: invalidate related keys
  • On delete: remove dependent cache entries
  • For aggregate data: use namespace versioning
```ts
// On write: drop every cache entry derived from this user
await redis.del(`user:${userId}:profile`)
await redis.del(`user:${userId}:stats`)
```
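Namespace versioning for aggregate data can be sketched as follows. `versionedKey` is a hypothetical helper, not part of any Redis client: readers embed a version number in every key, and bumping the version (e.g. with `INCR` on a counter key) invalidates the whole namespace at once while the stale entries age out via TTL.

```typescript
// Build a key with an embedded namespace version.
function versionedKey(ns: string, version: number, suffix: string): string {
  return `${ns}:v${version}:${suffix}`;
}

// Before the bump, readers build version-1 keys...
const before = versionedKey('tenant:42:stats', 1, 'daily');
// ...after the bump they build version-2 keys, so every
// version-1 entry is effectively invalidated in one step.
const after = versionedKey('tenant:42:stats', 2, 'daily');
```

This avoids enumerating and deleting many aggregate keys on every write.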

Avoiding Cache Stampede

When a hot key expires, many concurrent requests can miss at once and all fall through to the database.

Techniques:

  • Add jitter to TTL (`EX: base + random`)
  • Use request coalescing/single-flight
  • Pre-warm critical keys in background jobs
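The first two techniques can be sketched in a few lines, assuming a generic async loader (names here are illustrative):

```typescript
// Spread expirations so hot keys don't all expire together.
function jitteredTtl(baseSeconds: number, spreadSeconds: number): number {
  return baseSeconds + Math.floor(Math.random() * spreadSeconds);
}

// Collapse concurrent misses for the same key into one in-flight load.
const inFlight = new Map<string, Promise<unknown>>();

function singleFlight<T>(key: string, load: () => Promise<T>): Promise<T> {
  const existing = inFlight.get(key);
  if (existing) return existing as Promise<T>;
  const p = load().finally(() => inFlight.delete(key));
  inFlight.set(key, p);
  return p;
}
```

With single-flight in front of the database read, a burst of misses on one key triggers exactly one query; the other callers await the same promise.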

Keep keys predictable and namespaced:

  • `user:{id}:profile`
  • `tenant:{tenantId}:plan`
  • `feed:{userId}:page:{n}`

Good naming avoids collisions and simplifies invalidation.
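One way to enforce the scheme is a small set of key-builder functions (a hypothetical helper module, not a Redis feature), so no handler ever concatenates key strings by hand:

```typescript
// Central key builders matching the naming scheme above.
const keys = {
  userProfile: (id: string) => `user:${id}:profile`,
  tenantPlan: (tenantId: string) => `tenant:${tenantId}:plan`,
  feedPage: (userId: string, n: number) => `feed:${userId}:page:${n}`,
};
```

When invalidation rules change, there is exactly one place to update.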

Observability Checklist

Track these metrics:

  • Cache hit ratio
  • Evictions
  • Memory usage
  • P95/P99 latency
  • DB queries per request (before vs after caching)
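The hit ratio can be derived from the `keyspace_hits` and `keyspace_misses` counters reported by Redis's `INFO stats` command. The helper below is a sketch; fetching and parsing `INFO` is left out:

```typescript
// Hit ratio from Redis INFO stats counters.
function hitRatio(keyspaceHits: number, keyspaceMisses: number): number {
  const total = keyspaceHits + keyspaceMisses;
  return total === 0 ? 0 : keyspaceHits / total;
}
```

A sustained drop in this ratio usually means TTLs are too short, keys are being evicted under memory pressure, or traffic shifted to uncached paths.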

Final Takeaway

Redis caching works best when paired with clear key design, TTL policy, and invalidation discipline. Start with cache-aside, then optimize for hot paths and consistency needs.

Written by Anant Kumar

Systems Engineer & Full Stack Developer
