Intermediate · 18 min read · Topic 3.3

Cache write strategies

Write-through, write-back, write-around, read-through, cache-aside patterns

✍️ Key Takeaways

  1. Cache-aside (lazy loading): application manages cache manually — most common pattern
  2. Write-through: writes go to cache AND database simultaneously — consistent but slower writes
  3. Write-back: writes go to cache only, flushed to database periodically — fast writes but risk of data loss
  4. Read-through: cache automatically fetches from database on miss — simplifies application code

How Data Gets Into and Out of the Cache

The write strategy determines how data flows between your application, cache, and database. Choosing the wrong strategy leads to stale data, data loss, or unnecessary complexity. The right choice depends on your consistency requirements and read/write ratio.

Write Strategies Deep Dive

Cache-aside (lazy loading): the application manages the cache directly. On read: check cache → on miss, read the DB → write the result to the cache. On write: write to the DB → invalidate the cache entry.
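The read and write paths above can be sketched in a few lines. This is a minimal illustration, not a production client: plain dicts stand in for the database and for Redis, and the class name, TTL value, and method names are assumptions for the example.

```python
import time

class CacheAsideStore:
    """Cache-aside sketch: the application manages the cache itself.
    Dicts stand in for the database and the cache (e.g. Redis)."""

    def __init__(self, ttl_seconds=60.0):
        self.db = {}      # stand-in for the database
        self.cache = {}   # stand-in for the cache; values are (value, expires_at)
        self.ttl = ttl_seconds

    def read(self, key):
        # 1. Check the cache first.
        entry = self.cache.get(key)
        if entry is not None:
            value, expires_at = entry
            if time.monotonic() < expires_at:
                return value          # cache hit
            del self.cache[key]       # expired entry: treat as a miss
        # 2. Miss: read from the database.
        value = self.db.get(key)
        # 3. Populate the cache with a TTL so future reads are fast.
        if value is not None:
            self.cache[key] = (value, time.monotonic() + self.ttl)
        return value

    def write(self, key, value):
        # Write to the database, then invalidate the cached entry.
        self.db[key] = value
        self.cache.pop(key, None)
```

Note that the write path deletes the cache key rather than updating it; the next read repopulates it. That is the slow "first request after invalidation" mentioned under the cons below.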

This is the most common pattern, used by the majority of web applications with Redis.

Pros: Simple, only caches what's actually read, works with any database.

Cons: First request after invalidation is slow (cache miss). Risk of stale data if invalidation fails.

💡Most Common Production Stack
In practice, 90% of web applications use cache-aside with Redis: read → check Redis → miss → query PostgreSQL → write result to Redis with TTL. On write → update PostgreSQL → delete Redis key. Simple, proven, effective.

Advantages

  • Cache-aside is simple and proven
  • Write-through ensures consistency
  • Write-back enables highest write throughput

Disadvantages

  • Cache-aside has cold start penalty
  • Write-through slows writes
  • Write-back risks data loss
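The write-through versus write-back trade-off can be made concrete with a small sketch. These classes are illustrative assumptions (dicts stand in for the cache and database; the `flush` method models the periodic write-back), not any particular library's API.

```python
class WriteThrough:
    """Every write goes to both the cache and the database synchronously:
    consistent, but each write pays the database round-trip."""
    def __init__(self):
        self.db, self.cache = {}, {}

    def write(self, key, value):
        self.cache[key] = value
        self.db[key] = value   # slower write, but the DB is never behind

class WriteBack:
    """Writes land in the cache only; dirty keys are flushed later.
    Fast writes, but unflushed data is lost if the cache dies."""
    def __init__(self):
        self.db, self.cache = {}, {}
        self.dirty = set()     # keys not yet persisted

    def write(self, key, value):
        self.cache[key] = value   # fast: no database round-trip
        self.dirty.add(key)

    def flush(self):
        # Runs periodically (e.g. on a timer or when the dirty set grows).
        for key in self.dirty:
            self.db[key] = self.cache[key]
        self.dirty.clear()
```

The window between a `WriteBack.write` and the next `flush` is exactly where the data-loss risk lives: a crash in that window loses every key still in `dirty`.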

🧪 Test Your Understanding

Knowledge Check 1/1

Which write strategy has the highest risk of data loss?