# Caching Strategy

Part of the `mukular/food-delivery-feastfast--architecture` GitHub Wiki.
This document explains how caching is used in the system: why it is needed, what is cached, and how Redis is used to keep the platform scalable.
## 1. Why Caching Is Required
This platform handles:
- Location-based restaurant discovery
- Paginated menu browsing with filters
- High read traffic compared to writes
- Repeated requests with similar parameters
Without caching:
- MongoDB would be overloaded
- Geo queries would be expensive
- Latency would increase with traffic
Redis is used as a fast in-memory cache to:
- Reduce database load
- Improve response time
- Handle traffic spikes
## 2. Caching Principles Used
The system follows these principles:
- Read-heavy optimization
- Short TTL (Time To Live)
- Stateless backend
- Safe cache invalidation
- Location-aware key design
## 3. What Is Cached

### 3.1 Restaurants Listing (`/restaurants`)
This is the most expensive query due to:
- Geo-spatial lookup
- Sorting
- Filters
- Pagination
**Cached.** TTL: 60 seconds.
### 3.2 Restaurant Menu (`/restaurants/:id/menu`)
Menu browsing involves:
- Filters
- Sorting
- Pagination
**Cached.** TTL: 120 seconds.
### 3.3 What Is NOT Cached
- Orders
- Payments
- Wallet balance
- Real-time delivery tracking
- Seller dashboards
These are write-heavy or real-time and must stay strongly consistent.
## 4. Redis Key Design

### 4.1 Location-Based Key Strategy

To avoid millions of unique keys, latitude and longitude are rounded to two decimal places before being embedded in the key:

```js
const roundedLat = Number(lat).toFixed(2);
const roundedLng = Number(lng).toFixed(2);
```
This means:
- Each cache cell spans ~1.1 km (0.01° of latitude)
- Massive reduction in key count
- Acceptable business accuracy
### 4.2 Example Restaurant Cache Key

```
restaurants:26.91:70.91:search=pizza|rating=4|sort=newest|skip=0|distance=200000
```
Includes:
- Rounded lat/lng
- Search text
- Filters
- Pagination
- Distance radius
### 4.3 Menu Cache Key Example

```
menu:shopId=abc123:search=burger|priceMin=100|priceMax=300|skip=7|sort=price_asc
```
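A key builder matching the restaurant key format above might look like the following. This is an illustrative sketch: the function name `buildRestaurantsKey` and its parameter shape are assumptions, not the project's actual code.

```js
// Hypothetical helper that builds the restaurants cache key shown above.
// The rounding collapses nearby coordinates into one ~1.1 km cache cell.
function buildRestaurantsKey({ lat, lng, search, rating, sort, skip, distance }) {
  const roundedLat = Number(lat).toFixed(2);
  const roundedLng = Number(lng).toFixed(2);
  const filters = [
    `search=${search}`,
    `rating=${rating}`,
    `sort=${sort}`,
    `skip=${skip}`,
    `distance=${distance}`,
  ].join('|');
  return `restaurants:${roundedLat}:${roundedLng}:${filters}`;
}

console.log(buildRestaurantsKey({
  lat: 26.9124, lng: 70.9089,
  search: 'pizza', rating: 4, sort: 'newest', skip: 0, distance: 200000,
}));
// → restaurants:26.91:70.91:search=pizza|rating=4|sort=newest|skip=0|distance=200000
```

Because two users 500 m apart round to the same `26.91:70.91` cell, their requests share one cache entry.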
## 5. Cache Read Flow

```
Request
  ↓
Generate Redis key
  ↓
Check Redis
  ├─ Cache hit  → return cached response
  └─ Cache miss → query MongoDB
                     ↓
                  store result in Redis
                     ↓
                  return response
```
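The flow above can be sketched as a read-through helper. To keep the example self-contained, an in-memory `Map` with expiry timestamps stands in for Redis; the function and variable names are assumptions, not the project's actual code.

```js
// Minimal read-through cache sketch. A Map with expiry timestamps stands in
// for Redis so the example runs on its own; in the real service the get/set
// would go through the node-redis client instead.
const cache = new Map();

async function readThrough(key, ttlSeconds, queryDb) {
  const hit = cache.get(key);
  if (hit && hit.expiresAt > Date.now()) {
    return JSON.parse(hit.value); // cache hit: MongoDB is never touched
  }
  const result = await queryDb(); // cache miss: fall through to the database
  cache.set(key, {
    value: JSON.stringify(result), // stored as serialized JSON, like Redis
    expiresAt: Date.now() + ttlSeconds * 1000,
  });
  return result;
}

(async () => {
  let dbCalls = 0;
  const fetchRestaurants = async () => { dbCalls += 1; return [{ name: 'Feast Pizza' }]; };
  await readThrough('restaurants:26.91:70.91:search=pizza', 60, fetchRestaurants);
  await readThrough('restaurants:26.91:70.91:search=pizza', 60, fetchRestaurants);
  console.log(dbCalls); // second call is served from cache → 1
})();
```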
## 6. Cache Write Strategy

On a cache miss, the result is written back with an expiry (`setEx`, as exposed by the node-redis v4 client):

```js
await redisClient.setEx(
  redisKey,
  TTL_IN_SECONDS,
  JSON.stringify(responseData)
);
```
- Redis stores serialized JSON
- TTL ensures auto-cleanup
- No manual eviction needed
## 7. Cache Invalidation Strategy
This project uses TTL-based invalidation, not manual deletion.
Why?
- Restaurant & menu data changes infrequently
- TTL keeps logic simple
- Avoids complex invalidation bugs
| Data Type | TTL |
|---|---|
| Restaurants | 60 sec |
| Menu Items | 120 sec |
Worst-case staleness: 2 minutes (acceptable)
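The TTLs in the table above could be centralized in a small constants object so the two values live in one place. A hypothetical sketch; the names are illustrative:

```js
// Hypothetical TTL constants mirroring the table above (values in seconds).
const CACHE_TTL = Object.freeze({
  RESTAURANTS: 60,  // worst-case staleness: 60 s
  MENU_ITEMS: 120,  // worst-case staleness: 120 s
});
```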
## 8. Memory Usage Analysis

**Average cache size:** ~2–5 KB per key.

**Example calculation:** 10,000 active keys × 5 KB ≈ 50 MB of RAM.

This is well within Redis Cloud / managed Redis limits.
## 9. Handling High Traffic (Swiggy / Zomato Style)
Large platforms reduce keys using:
- Lat/Lng rounding
- Shared geo buckets
- Short TTL
- LRU eviction (via Redis's `maxmemory-policy`; the default is `noeviction`, so `allkeys-lru` must be enabled explicitly)
This system already follows these patterns.
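The LRU behaviour comes from Redis's `maxmemory-policy` setting: a cache-only instance is typically given a memory cap plus `allkeys-lru`, so the least-recently-read keys are evicted first when the cap is reached. An illustrative `redis.conf` fragment (the cap value is an assumption, not taken from this project's deployment):

```conf
# redis.conf — cache-only instance (illustrative values)
maxmemory 256mb
maxmemory-policy allkeys-lru
```

With TTLs this short, eviction rarely fires; it is a safety net for traffic spikes that outpace the TTL cleanup.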
## 10. Redis Storage Model
- Redis stores data in RAM
- Optional persistence (RDB / AOF)
- Ultra-low latency (sub-millisecond)
## 11. Failure Handling

### Redis Down?

If Redis is unavailable, the system falls back to querying MongoDB directly:

- No user-facing impact
- Slight latency increase
- The service stays fully available; the cache is an optimization, not a dependency
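The fallback can be sketched as a wrapper that treats every Redis error as a miss. This is a minimal illustration; `getWithFallback`, `redisClient`, and `queryMongo` are hypothetical names, not the project's actual code.

```js
// Hypothetical fallback wrapper: any Redis failure degrades to a direct
// MongoDB read instead of surfacing an error to the user.
async function getWithFallback(redisClient, key, ttlSeconds, queryMongo) {
  try {
    const cached = await redisClient.get(key);
    if (cached !== null) return JSON.parse(cached);
  } catch (err) {
    // Redis unreachable: log and continue to the database.
    console.error('Redis read failed, falling back to MongoDB:', err.message);
  }
  const result = await queryMongo();
  try {
    await redisClient.setEx(key, ttlSeconds, JSON.stringify(result));
  } catch {
    // Best-effort write-back; a failed cache write is ignored.
  }
  return result;
}
```

Both the read and the write-back are wrapped separately, so a Redis outage at either step still returns a correct response from MongoDB.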