Key Takeaways
- At-most-once: fire and forget; may lose messages (fastest, acceptable for metrics)
- At-least-once: retry until acknowledged; may deliver duplicates (most common, paired with idempotent consumers)
- Exactly-once: conceptually ideal but extremely hard — usually at-least-once + idempotent processing
- Idempotency keys ensure that processing the same message twice produces the same result
The Delivery Guarantee Spectrum
In distributed systems, messages can be lost (network failure) or duplicated (retry after timeout with delayed ack). The delivery guarantee determines how the system handles these edge cases.
True exactly-once delivery is generally considered impossible in distributed systems: a sender cannot distinguish a lost message from a lost acknowledgment, so it must either stop retrying (risking loss) or retry (risking duplicates). What we achieve in practice is exactly-once processing: at-least-once delivery + idempotent consumers.
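Why duplicates are unavoidable can be sketched with a tiny retry loop. This is a minimal, hypothetical send/ack interface (real clients such as Kafka or SQS producers handle this internally); if the first ack is merely delayed or lost, the broker receives the message twice even though the sender did nothing wrong:

```python
import time

def send_with_retry(send, max_attempts=3):
    """At-least-once: resend until acknowledged. A lost or delayed ack
    means the broker may have received the same message more than once."""
    for attempt in range(max_attempts):
        acked = send()          # returns True if an ack arrived in time
        if acked:
            return attempt + 1  # number of sends for one logical message
        time.sleep(0)           # placeholder for backoff between retries
    raise TimeoutError("message not acknowledged")

# Simulate an ack lost on the first attempt: the broker got the message
# both times, so a downstream consumer will see a duplicate.
attempts = iter([False, True])
sends = send_with_retry(lambda: next(attempts))
print(sends)  # → 2: two sends for one logical message
```

Choosing at-most-once instead simply means `max_attempts=1`: no duplicates, but the message is lost whenever the ack (or the message itself) goes missing.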
Guarantee Comparison
| Guarantee | Behavior | Risk | Performance | Use Case |
|---|---|---|---|---|
| At-most-once | Send once, no retry | Message loss | Fastest | Metrics, non-critical logs |
| At-least-once | Retry until ack | Duplicates | Medium | Most production systems |
| Exactly-once | Deliver + process once | None (ideal) | Slowest | Financial transactions |
Idempotent Consumer Pattern
```python
# Problem: at-least-once delivery may send the same message twice.
# Solution: idempotent processing — handling the same message twice
# produces the same result.

# Approach 1: Idempotency Key
def process_payment(message):
    idempotency_key = message.id            # unique per message
    if already_processed(idempotency_key):  # check dedup table
        return                              # skip duplicate
    execute_payment(message.amount)
    mark_as_processed(idempotency_key)      # record in dedup table
```
```python
# Approach 2: Natural Idempotency
# SET balance = 100           -- idempotent: same result regardless of repeats
# SET balance = balance + 10  -- NOT idempotent: each repeat adds 10 again
```
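The SET-vs-INCREMENT contrast can be checked directly in plain Python (a minimal sketch with illustrative names):

```python
balance = 0

def set_balance(value):
    """Idempotent: replaying the same message leaves the same state."""
    global balance
    balance = value

def increment_balance(by):
    """NOT idempotent: each replay changes state again."""
    global balance
    balance += by

set_balance(100)
set_balance(100)           # duplicate delivery: harmless
assert balance == 100

increment_balance(10)
increment_balance(10)      # duplicate delivery: applied twice
assert balance == 120      # 100 + 10 + 10, not the intended 110
```

This is why absolute writes ("state the new value") are preferred over relative writes ("apply a delta") when consumers may see duplicates.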
```python
# Approach 3: Database UPSERT
# INSERT ... ON CONFLICT (id) DO UPDATE — naturally idempotent
```

Production Best Practice
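A concrete UPSERT example using Python's built-in `sqlite3` (SQLite 3.24+ supports `ON CONFLICT ... DO UPDATE`; table and column names are illustrative):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE payments (id TEXT PRIMARY KEY, amount INTEGER)")

def record_payment(pay_id, amount):
    # Replaying the same message rewrites the same row instead of
    # inserting a second one, so the write is naturally idempotent.
    conn.execute(
        "INSERT INTO payments (id, amount) VALUES (?, ?) "
        "ON CONFLICT (id) DO UPDATE SET amount = excluded.amount",
        (pay_id, amount),
    )

record_payment("pay-42", 100)
record_payment("pay-42", 100)   # duplicate delivery
rows = conn.execute("SELECT COUNT(*), SUM(amount) FROM payments").fetchone()
print(rows)  # → (1, 100): one row, applied once
```

The primary-key constraint does the deduplication, so no separate dedup table is needed for this style of consumer.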
Use at-least-once delivery + idempotent consumers. This is what Kafka, SQS, and RabbitMQ use by default. Design every consumer to handle duplicates: use unique message IDs, database constraints, or idempotent operations (SET vs INCREMENT).
Advantages
- At-least-once + idempotency is robust and proven
- Idempotency keys are simple to implement
- Works across all message queue systems
Disadvantages
- Idempotent design requires careful thought for each consumer
- Deduplication tables add storage and lookup overhead
- Kafka's exactly-once semantics come with caveats (they cover Kafka-to-Kafka pipelines, not external side effects)
Test Your Understanding
Why is 'exactly-once delivery' considered practically impossible?