Caching Strategy (mukular/food-delivery-feastfast--architecture GitHub Wiki)
This document explains how caching is used in the system, why it is needed, what is cached, and how scalability is handled using Redis.
This platform handles:
- Location-based restaurant discovery
- Paginated menu browsing with filters
- High read traffic compared to writes
- Repeated requests with similar parameters
Without caching:
- MongoDB would be overloaded
- Geo queries would be expensive
- Latency would increase with traffic
Redis is used as a fast in-memory cache to:
- Reduce database load
- Improve response time
- Handle traffic spikes
The system follows these principles:
- Read-heavy optimization
- Short TTL (Time To Live)
- Stateless backend
- Safe cache invalidation
- Location-aware key design
Restaurant discovery is the most expensive query because it combines:
- Geo-spatial lookup
- Sorting
- Filters
- Pagination
These results are cached with a TTL of 60 seconds.
Menu browsing involves:
- Filters
- Sorting
- Pagination
These results are cached with a TTL of 120 seconds.
The following data is never cached:
- Orders
- Payments
- Wallet balance
- Real-time delivery tracking
- Seller dashboards
These are write-heavy or real-time and must stay strongly consistent.
To avoid millions of unique keys, latitude and longitude are rounded.
```javascript
const roundedLat = Number(lat).toFixed(2);
const roundedLng = Number(lng).toFixed(2);
```
This means:
- ~1.1 km radius per cache cell
- Massive reduction in key count
- Acceptable business accuracy
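The ~1.1 km figure follows from the size of a degree of latitude (~111.32 km). A small sketch of that arithmetic, with an illustrative helper name:

```javascript
// One degree of latitude spans ~111.32 km everywhere on Earth, so rounding
// coordinates to 2 decimal places buckets requests into cells roughly
// 1.11 km tall (longitude cells are the same size at the equator and
// shrink toward the poles).
const KM_PER_DEGREE_LAT = 111.32;

function cacheCellSizeKm(decimalPlaces) {
  return KM_PER_DEGREE_LAT * Math.pow(10, -decimalPlaces);
}

console.log(cacheCellSizeKm(2).toFixed(2)); // "1.11"
```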
Example restaurant cache key:

```
restaurants:26.91:70.91:search=pizza|rating=4|sort=newest|skip=0|distance=200000
```
Includes:
- Rounded lat/lng
- Search text
- Filters
- Pagination
- Distance radius
Example menu cache key:

```
menu:shopId=abc123:search=burger|priceMin=100|priceMax=300|skip=7|sort=price_asc
```
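A key builder matching these shapes might look like the following sketch. The helper name is illustrative, and the filters are sorted alphabetically here so that identical queries always produce identical keys (the examples above use a fixed hand-written order instead):

```javascript
// Build a deterministic restaurant cache key from rounded coordinates
// plus query filters joined with "|".
function restaurantKey(lat, lng, filters) {
  const roundedLat = Number(lat).toFixed(2);
  const roundedLng = Number(lng).toFixed(2);
  const parts = Object.entries(filters)
    .sort(([a], [b]) => a.localeCompare(b)) // stable order → stable key
    .map(([k, v]) => `${k}=${v}`)
    .join("|");
  return `restaurants:${roundedLat}:${roundedLng}:${parts}`;
}

console.log(restaurantKey(26.9124, 70.9083, { search: "pizza", rating: 4 }));
// restaurants:26.91:70.91:rating=4|search=pizza
```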
```
Request
  ↓
Generate Redis key
  ↓
Check Redis
  ↓
Cache hit  → return cached response
Cache miss → query MongoDB
  ↓
Store result in Redis (with TTL)
  ↓
Return response
```
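The flow above can be sketched as a single cache-aside helper. This is a minimal sketch assuming a connected node-redis v4 client; `fetchFromMongo` is a hypothetical callback standing in for the real MongoDB query:

```javascript
// Cache-aside read: check Redis first, fall through to MongoDB on a miss,
// then write the result back with a TTL so it expires automatically.
async function getWithCache(redisClient, redisKey, ttlSeconds, fetchFromMongo) {
  const cached = await redisClient.get(redisKey);
  if (cached !== null) {
    return JSON.parse(cached); // cache hit: MongoDB is never touched
  }
  const responseData = await fetchFromMongo(); // cache miss: run the real query
  await redisClient.setEx(redisKey, ttlSeconds, JSON.stringify(responseData));
  return responseData;
}
```

Because the write-back uses `setEx`, every entry carries its own TTL and no separate cleanup job is needed.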
```javascript
await redisClient.setEx(
  redisKey,
  TTL_IN_SECONDS,
  JSON.stringify(responseData)
);
```
- Redis stores serialized JSON
- TTL ensures auto-cleanup
- No manual eviction needed
This project uses TTL-based invalidation, not manual deletion.
Why?
- Restaurant & menu data changes infrequently
- TTL keeps logic simple
- Avoids complex invalidation bugs
| Data Type | TTL |
|---|---|
| Restaurants | 60 sec |
| Menu Items | 120 sec |
Worst-case staleness: 2 minutes (acceptable)
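The table maps naturally onto a small constants module, so the worst-case staleness stays visible in one place. A sketch (names are illustrative):

```javascript
// Centralized TTLs matching the table above.
const TTL_SECONDS = {
  restaurants: 60,
  menuItems: 120,
};

// Worst-case staleness is simply the longest TTL in use.
const worstCaseStalenessSec = Math.max(...Object.values(TTL_SECONDS));
console.log(worstCaseStalenessSec); // 120
```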
- Typical payload: ~2–5 KB per key
- 10,000 active keys × 5 KB ≈ 50 MB of RAM
This is well within Redis Cloud / Managed Redis limits.
Large platforms reduce keys using:
- Lat/Lng rounding
- Shared geo buckets
- Short TTL
- LRU eviction (configurable in Redis via `maxmemory-policy`)
This system already follows these patterns.
- Redis stores data in RAM
- Optional persistence (RDB / AOF)
- Ultra-low latency (sub-millisecond)
If Redis becomes unavailable, the system falls back to MongoDB:
- No user impact
- Slight latency increase
- Fully resilient
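The fallback behavior can be sketched by wrapping both Redis calls in try/catch, so a Redis outage degrades to a direct MongoDB read instead of an error. `fetchFromMongo` is again a hypothetical callback for the real query:

```javascript
// Resilient cache-aside read: any Redis failure is logged and ignored,
// and the request is served straight from MongoDB.
async function getWithFallback(redisClient, redisKey, ttlSeconds, fetchFromMongo) {
  try {
    const cached = await redisClient.get(redisKey);
    if (cached !== null) return JSON.parse(cached);
  } catch (err) {
    // Redis unreachable: the user still gets a (slightly slower) response.
    console.error("Redis read failed, falling back to MongoDB:", err.message);
  }
  const data = await fetchFromMongo();
  try {
    await redisClient.setEx(redisKey, ttlSeconds, JSON.stringify(data));
  } catch {
    // Best-effort write-back; cache errors never fail the request.
  }
  return data;
}
```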