Using the Azure Cosmos DB in the Free Tier - DAV-1000/react-material GitHub Wiki
Created: 8/3/2025 14:17
Updated: 8/3/2025 15:43
Exported: 8/14/2025 11:32
Link: https://chatgpt.com/c/688f6153-538c-8001-8ce2-9a724717ef5a
Tell me about using Cosmos DB in the Azure free tier
Here’s what you need to know about using Azure Cosmos DB Free Tier as of August 2025:
- When you create a new Cosmos DB account and opt in to Free Tier, you get 1,000 RU/s of provisioned throughput and 25 GB of storage free, for the entire lifetime of that account.
- Please note: billing doesn’t kick in unless you exceed 1,000 RU/s or 25 GB in that account.
- If you're also in your initial Azure Free Account (first 12 months), you get an extra 400 RU/s and 25 GB of storage for those 12 months, stacked on top of the Cosmos DB Free Tier allotment.
- That means you get up to 1,400 RU/s and 50 GB free during your first year on Azure.
- In a Free Tier account, you can have up to 5 shared throughput databases (per subscription), each provisioned at up to 1,000 RU/s.
- Each database can contain as many as 25 containers that share the throughput.
- The minimum provisioned throughput per shared database is 400 RU/s, even if you have only one container.
- Only one Free Tier–eligible Cosmos DB account per Azure subscription.
- The RU/s and storage benefits apply to provisioned throughput mode only; serverless mode is not covered by Free Tier.
- If you add regions or enable multi-region writes, throughput is provisioned (and billed) in each region, so the free 1,000 RU/s covers only part of the total and the extra regions can push you over the free allowance.
- If you delete your Free Tier account, you may create a new one later, but only one Free Tier account can exist at a time per subscription.
To stay within the free limits and avoid charges:
- Monitor your actual RU/s usage via the Azure portal or SDK logs; complex queries or large documents can burn RUs fast.
- Design your partition keys and indexing strategy to keep RU usage per operation low.
- Keep storage under 25 GB. Be aware that Cosmos DB can use more physical storage than your source dataset, especially when migrating from MongoDB; some users report a 2 GB dataset ballooning to 15 GB.
- If eligible, take advantage during your first year of the extra 400 RU/s + 25 GB from the Azure free account layered on top.
- Use the local Cosmos DB emulator for development to avoid using your live Free Tier quota (see the sketch below).
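For example, a minimal sketch of pointing the JavaScript SDK at the local emulator instead of your live account (assuming the @azure/cosmos package; the database/container names are placeholders and the key value comes from the emulator's documentation):

```js
// Minimal sketch: point the JavaScript SDK at the local Cosmos DB emulator
const { CosmosClient } = require("@azure/cosmos");

const client = new CosmosClient({
  endpoint: "https://localhost:8081",      // default emulator endpoint
  key: process.env.COSMOS_EMULATOR_KEY,    // the emulator's fixed, publicly documented key
});

async function main() {
  // You may need to trust the emulator's self-signed certificate for local HTTPS
  const { database } = await client.databases.createIfNotExists({ id: "dev-db" });
  const { container } = await database.containers.createIfNotExists({ id: "items", partitionKey: "/id" });
  await container.items.create({ id: "1", title: "Foo", description: "local test item" });
}

main().catch(console.error);
```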
Scenario | Free RU/s | Free Storage | Notes |
---|---|---|---|
Provisioned throughput Free Tier (post-12 mo) | 1,000 RU/s | 25 GB | Enable during account creation |
First 12 months (with Azure Free Account) | 1,400 RU/s | 50 GB | 1,000 from Cosmos DB + 400 from the Azure free account |
Serverless mode | × | × | Free Tier does not apply |
Shared throughput DBs | Shared across up to 25 containers per DB | As above | Each DB min 400 RU/s |
- Create a new Cosmos DB account via the Azure Portal (or CLI/ARM), and tick the "Apply Free Tier Discount" checkbox.
- Set up a provisioned-throughput database (or containers).
- Monitor your RU/s and storage. If usage stays below 1,000 RU/s and 25 GB (or 1,400 RU/s / 50 GB during your first year), you won’t incur charges.
- Use the Cosmos DB emulator for dev or testing to preserve your free-tier quota.
- Free Tier gives you 1,000 RU/s and 25 GB free, forever, on a single Cosmos DB account you opt into.
- During your first 12 months of Azure Free Account, you get an extra 400 RU/s and 25 GB.
- Free Tier is not available for serverless mode.
- Design workloads carefully to stay under limits—efficient indexing, small documents, and thoughtful partitioning are key.
Let me know if you want help planning throughput or estimating RUs for your scenarios!
What's an RU?
An RU (Request Unit) in Azure Cosmos DB is a performance currency that measures how much computing power (CPU, memory, IO, etc.) is needed to perform an operation.
- Just like dollars pay for different goods, RUs pay for database operations.
- Every operation—read, write, query, delete—costs a certain number of RUs.
- The more complex or data-heavy the operation, the more RUs it uses.
Operation | RU Cost (approx.) |
---|---|
Read a 1 KB document (point read) | 1 RU |
Insert a small document (1 KB) | 5+ RU |
Query across a large partition | 10–100+ RU |
Complex join or aggregation | 100–1,000+ RU |
- If you provision 1,000 RU/s, that means your Cosmos DB account can handle operations that cost up to 1,000 RUs per second.
- So if your workload is:
- 500 reads per second (1 RU each) = 500 RU/s used → ✅ fine
- 1 write (10 RUs) every second = 10 RU/s used → ✅ fine
- A big query that costs 1,200 RUs → ❌ too large unless spread over time or allowed to retry (with rate limiting)
- If your operation needs more RU/s than what’s available, Cosmos DB may throttle it, causing delays or retries.
- That’s why understanding RU usage helps you tune performance and costs.
- Azure Portal: View RU consumption per operation
- SDKs: Log RU usage after each request (see the sketch below)
- Cosmos DB Emulator (locally): See RU cost of queries before deploying
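As a concrete illustration of the SDK option, a minimal sketch using the @azure/cosmos JavaScript SDK (the connection string and the database/container names are placeholders):

```js
// Log the RU charge that Cosmos DB returns with every response
const { CosmosClient } = require("@azure/cosmos");

const container = new CosmosClient(process.env.COSMOS_CONNECTION_STRING)
  .database("mydb")
  .container("items");

async function readWithCharge() {
  // Point read: id + partition key value (here the partition key is assumed to be /id)
  const { resource, requestCharge } = await container.item("1", "1").read();
  console.log(`Read "${resource.title}" for ${requestCharge} RUs`);
}

readWithCharge().catch(console.error);
```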
RU = how much "effort" Cosmos DB spends to handle your request.
1 RU = cost of a simple 1 KB read.
The more data or complexity, the more RUs it uses.
Would you like help estimating RU costs for a specific type of query or app design?
If I have 100 simple documents in my Cosmos DB and want to retrieve the id and title of all those having the word Foo in the title or description, how many RUs per request?
Great question. Let's walk through a realistic estimate for this query.
- Documents: 100 simple documents
- Fields: Each has an `id`, `title`, and `description`
- Query: Something like `SELECT c.id, c.title FROM c WHERE CONTAINS(c.title, "Foo") OR CONTAINS(c.description, "Foo")`
- Goal: Estimate RU cost per request

What affects the cost:
- Query complexity: `CONTAINS()` on multiple fields is moderately expensive, especially without a filtered index.
- Document size: If your documents are small (e.g. 1 KB each), that helps.
- Partitioning:
  - If all docs are in one partition, the query is cheaper.
  - If spread across many partitions, the query hits each partition = higher RU cost.
- Indexing:
  - If `title` and `description` are indexed, it costs less.
  - If not, RU cost increases due to a full scan.
- Best case (small docs, single partition, indexed `title`/`description`): the query touches 100 docs, filters using CONTAINS(), and returns only a few fields. Typical RU cost: 10–30 RU per request.
- Without a useful index on those fields: roughly 30–100 RU per request.
- Cross-partition and unindexed (full scan): 100–300 RU per request or more.
You can get the actual RU cost by running the query via:
- Azure Portal → Data Explorer → Query → look at "Request Charge"
- SDK (e.g. Python, JavaScript) → each response returns the `x-ms-request-charge` header, surfaced as a request-charge property (see the sketch below)
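For example, with the JavaScript SDK you can run this exact query and log its charge (a sketch assuming @azure/cosmos; the connection string and database/container names are placeholders):

```js
// Run the CONTAINS query and log how many RUs it actually consumed
const { CosmosClient } = require("@azure/cosmos");

const container = new CosmosClient(process.env.COSMOS_CONNECTION_STRING)
  .database("mydb")
  .container("items");

async function searchWithCharge(term) {
  const querySpec = {
    query:
      "SELECT c.id, c.title FROM c WHERE CONTAINS(c.title, @term) OR CONTAINS(c.description, @term)",
    parameters: [{ name: "@term", value: term }],
  };
  const { resources, requestCharge } = await container.items.query(querySpec).fetchAll();
  console.log(`${resources.length} matches, request charge: ${requestCharge} RUs`);
  return resources;
}

searchWithCharge("Foo").catch(console.error);
```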
To keep costs low:
- Ensure `title` and `description` are indexed.
- Use a single partition if possible.
- Return only the fields you need (`id`, `title`).
- Avoid pulling large documents or unindexed text fields unnecessarily.
Want help checking or configuring the indexing policy for this scenario?
If I was consuming the Cosmos DB data from a Static Web App with an Azure Function back end, what are my options for minimizing RU consumption via caching?
If you're using Azure Static Web Apps with an Azure Function backend querying Cosmos DB, caching can greatly reduce RU consumption by avoiding unnecessary database hits.
Here's a breakdown of smart caching strategies to minimize RU usage:
🔹 In-memory caching in Functions. Use case: your data changes rarely and the Azure Function instance stays warm for a while.
🔧 How:
- Store recently queried data in a global variable inside the Azure Function host:
```js
// Module-scoped cache survives between invocations while the instance stays warm
let cache = null;
let cachedAt = 0;
const TTL_MS = 5 * 60 * 1000; // refresh after 5 minutes

module.exports = async function (context, req) {
  if (!cache || Date.now() - cachedAt > TTL_MS) {
    cache = await getDataFromCosmos(); // expensive RU query
    cachedAt = Date.now();
  }
  context.res = { body: cache };
};
```
- On first request (or if expired), query Cosmos DB.
- On subsequent requests, serve cached value.
- Functions scale out (each instance has its own memory).
- Cold starts or scale-up resets the cache.
- Best for read-heavy and low-change data.
🔹 Azure Cache for Redis. Use case: you need a shared, fast, persistent-ish cache across multiple function instances.
🔧 How:
- Query Redis first.
- Only hit Cosmos DB if Redis miss.
- Store Cosmos query results in Redis with a time-to-live (TTL) (e.g., 10–60 seconds for freshness).
Benefits:
- Centralized cache for all function instances
- Survives restarts
- Very fast (~1ms latency)
Costs: Redis Basic tier is ~$16/month, but saves RUs if you're on the edge of free limits.
🔹 JSON files in Blob Storage. Use case: data updates infrequently (e.g., daily summaries, lists).
🔧 How:
- Create a daily or hourly job (Timer-triggered Azure Function) to:
- Query Cosmos DB
- Write the result to a static JSON file in Azure Blob Storage
- Static Web App fetches the pre-baked JSON directly, no Cosmos DB hit.
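A minimal sketch of that timer job, assuming the @azure/cosmos and @azure/storage-blob packages; the database, container, blob path, and connection-string settings are placeholders:

```js
// Timer-triggered Azure Function: query Cosmos DB and write the result to a cache blob
const { CosmosClient } = require("@azure/cosmos");
const { BlobServiceClient } = require("@azure/storage-blob");

module.exports = async function (context, myTimer) {
  const { resources } = await new CosmosClient(process.env.COSMOS_CONNECTION_STRING)
    .database("mydb")
    .container("posts")
    .items.query("SELECT c.id, c.title FROM c ORDER BY c.createdAt DESC")
    .fetchAll();

  const body = JSON.stringify({ data: resources, meta: { cachedAt: new Date().toISOString() } });

  const blob = BlobServiceClient.fromConnectionString(process.env.STORAGE_CONNECTION_STRING)
    .getContainerClient("cache")
    .getBlockBlobClient("posts/recent/latest.json");

  await blob.upload(body, Buffer.byteLength(body), {
    blobHTTPHeaders: { blobContentType: "application/json" },
  });

  context.log(`Cached ${resources.length} posts to blob storage`);
};
```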
Pros:
- Zero RUs per request
- CDN-cached if using Azure CDN or Static Web Apps’ edge cache
- Easy to implement
Cons:
- Data is only as fresh as your last export (depends on update frequency)
🔹 HTTP response caching. Use case: the same query URL returns the same response.
🔧 How:
- Your function returns HTTP headers like:
Cache-Control: public, max-age=60
- Static Web App or CDN (e.g., Azure Front Door) can cache the response.
- No Cosmos DB hit until cache expires.
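For example, in the Function's response (a small sketch; `getPosts` is a hypothetical helper for your own query or cache lookup):

```js
// Return a cacheable response so the edge/CDN can serve repeats without invoking Cosmos DB
module.exports = async function (context, req) {
  const data = await getPosts(req.query.tag); // hypothetical helper: Cosmos DB or cache lookup

  context.res = {
    status: 200,
    headers: {
      "Cache-Control": "public, max-age=60", // edge and browser may reuse this response for 60 s
      "Content-Type": "application/json",
    },
    body: data,
  };
};
```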
Perfect for:
- Pages like `/api/posts?tag=foo` with consistent output
🔹 Client-side caching. Use case: avoid re-fetching on the front end unnecessarily.
🔧 How:
- Use browser caching + SWR (Stale-While-Revalidate) strategies
- Store responses in `localStorage` or `IndexedDB`
- Use libraries like React Query or SWR for auto-caching API responses
This doesn't reduce Cosmos RUs directly but prevents duplicate requests, especially during navigation or page reloads.
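For instance, with the SWR library in a React component (a sketch; the `/api/posts` endpoint and field names are illustrative):

```jsx
// Client-side caching with SWR: identical requests within 60 s are served from the cache
import useSWR from "swr";

const fetcher = (url) => fetch(url).then((res) => res.json());

export default function PostList({ tag }) {
  const { data, error } = useSWR(`/api/posts?tag=${encodeURIComponent(tag)}`, fetcher, {
    dedupingInterval: 60000, // treat repeat requests within 60 s as duplicates
  });

  if (error) return <p>Failed to load</p>;
  if (!data) return <p>Loading…</p>;
  return (
    <ul>
      {data.map((post) => (
        <li key={post.id}>{post.title}</li>
      ))}
    </ul>
  );
}
```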
Cache Type | Best For | RU Savings | Cost | Notes |
---|---|---|---|---|
In-Memory in Functions | Short-term, low-traffic | ✅ Low | Free | Lost on cold starts |
Azure Redis Cache | Mid-high traffic, shared cache | ✅✅✅ High | 💰 Paid | Best for scalability |
JSON in Blob Storage | Infrequent updates | ✅✅✅ High | Low | Serve via CDN or static file |
HTTP Response Cache | Repeat queries | ✅✅ Medium | Free | Edge cache if headers are set |
Client-Side Cache | UI refreshes | ✅ Medium | Free | Helps with perceived speed |
Combine:
- Blob/Redis or in-memory caching on the backend
- HTTP response caching at edge (with headers)
- Client-side caching on the frontend
This gives you multi-layered caching → minimal RU usage.
Would you like an example of implementing Redis or Blob caching with Azure Functions?
Tell me about redis
Sure — here's a clear breakdown of what Redis is and why it's useful, especially in cloud-based, serverless architectures like you're using.
Redis (REmote DIctionary Server) is a fast, in-memory key-value data store. It’s often used as:
- A cache
- A message broker
- A real-time database
- A session store
It stores data in RAM, so it's extremely fast — sub-millisecond latency for reads and writes.
Feature | What It Means |
---|---|
🧠 In-memory | All data lives in RAM → super fast access |
🧩 Key-Value store | Data is stored like `myKey → myValue` |
🔄 TTL (Time-To-Live) | You can set expiration per key |
🧮 Data structures | Not just strings — also supports lists, hashes, sets, sorted sets, etc. |
🧵 Pub/Sub | Built-in messaging for real-time apps |
💾 Persistence | Can persist to disk optionally (RDB, AOF modes) |
🔐 Secure and scalable | Azure Redis supports TLS, authentication, clustering, etc. |
Caching Cosmos DB query results:
- Store query results with a TTL (e.g., 60 seconds)
- Avoid re-hitting Cosmos DB for frequent reads
```js
// Pseudo-code: redis is an ioredis/node-redis client, cosmos.query() is your own query helper
let cached = await redis.get("search:Foo");
if (!cached) {
  const result = await cosmos.query(...); // expensive RU query against Cosmos DB
  cached = JSON.stringify(result);
  await redis.set("search:Foo", cached, "EX", 60); // TTL 60 sec
}
return JSON.parse(cached);
```
Session or token store:
- Store short-lived session or access tokens securely
- Fast lookup when validating requests

Rate limiting:
- Keep count of API requests per IP/user
- Reset counts automatically after a time window

Job queues / messaging:
- Redis can act as a broker (e.g., Celery for Python)
- Great for queueing long-running or delayed jobs
Microsoft offers Redis as a fully managed PaaS on Azure:
Tier | Use Case | Cost (est.) |
---|---|---|
Basic | Dev/test | ~$16/month |
Standard | Production | Starts ~$41/month |
Premium | Enterprise apps | Scales into the thousands |
Features like geo-replication, clustering, and VNet support are available in higher tiers.
You don’t need to install or manage Redis yourself — Azure handles uptime, security patches, scaling, and persistence (if enabled).
When Redis might not be the right fit:
- If your cache needs long-term persistence → use a DB instead.
- If your data changes very rapidly and you can't tolerate stale reads, Redis might not help unless you're tightly managing TTLs.
- If budget is extremely constrained — the Basic tier has a cost even when idle.
Redis = ultra-fast, in-memory key-value store, perfect for caching Cosmos DB queries, reducing RU usage, speeding up response times, and handling real-time data.
On Azure, you can use Azure Cache for Redis, which is fully managed and easy to connect to from Azure Functions or apps.
Let me know if you'd like an actual code example or pricing breakdown from the Azure Calculator.
With CDN caching, is it possible to invalidate the cache early?
Yes, you can invalidate CDN cache early (before TTL expires), but how you do it depends on the CDN you're using. Since you're on Azure, the default CDN options are:
a. Manual Purge (via Azure Portal or API)
You can purge cached content using:
- Azure Portal UI
- REST API / SDK
- Azure CLI
```bash
az cdn endpoint purge \
  --content-paths '/api/posts*' \
  --profile-name myCDNProfile \
  --name myCDNEndpoint \
  --resource-group myResourceGroup
```
This forces Azure CDN to re-fetch the content from your origin (Azure Function, Blob, etc.) the next time it's requested.
b. Programmatic Purge (from your app)
You can call the purge API in response to specific events—like content updates.
Example: After your function updates Cosmos DB, also trigger a CDN purge request.
⏱ Delay: Purge usually takes 30 seconds to a few minutes to propagate globally.
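A sketch of that trigger, assuming the @azure/arm-cdn and @azure/identity packages (the resource group, profile, and endpoint names are placeholders, and the exact purge method name can vary between SDK versions):

```js
// Purge a CDN path programmatically after writing new data to Cosmos DB
const { CdnManagementClient } = require("@azure/arm-cdn");
const { DefaultAzureCredential } = require("@azure/identity");

async function purgePostsCache() {
  const client = new CdnManagementClient(
    new DefaultAzureCredential(),
    process.env.AZURE_SUBSCRIPTION_ID
  );

  // Waits for the purge operation to finish before returning
  await client.endpoints.beginPurgeContentAndWait(
    "myResourceGroup",
    "myCDNProfile",
    "myCDNEndpoint",
    { contentPaths: ["/api/posts*"] }
  );
}

purgePostsCache().catch(console.error);
```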
If you're using Azure Front Door (the newer edge platform), cache invalidation is even more flexible:
- You can purge specific paths or all content
- Invalidation is faster than traditional Azure CDN (~seconds)
- You can also customize caching rules per path/method/header/query string
If you're okay with using versioned URLs, you can avoid purging altogether by:
```
GET /api/posts?tag=foo&v=20250803
```

Just change `v` when the data changes → CDN treats it as a new URL = new cache entry.
This gives you instant cache refresh without waiting for purges.
Method | Speed | Control | Notes |
---|---|---|---|
Manual purge | ⏳ 30s–5min | ✅ Per path | API/CLI/Portal available |
Auto purge from app | Same | ✅ | Trigger from Azure Function or backend |
Cache busting (URL versioning) | ⚡ Instant | ✅ | Best for deterministic cache control |
Front Door advanced rules | ⚡ Fast | ✅✅ | Better flexibility for caching logic |
Would you like a sample Azure Function that triggers a CDN purge after updating Cosmos DB?
How to structure the data in a blob cache
Structuring data in a Blob Storage cache is all about balancing retrievability, maintainability, and performance — especially when you're using it to reduce Cosmos DB load.
Here's a complete guide tailored to your use case (e.g., storing Cosmos DB query results in Azure Blob Storage to serve via Static Web App):
Blob storage is flat, but you can simulate folder hierarchies using `/` in blob names.
```
<category>/<query>/<version>.json

cache/search/foo/v1.json
cache/posts/recent/v1.json
cache/tags/all/v1.json
```
- Use lowercase, URL-safe keys.
- Versioning (e.g. `v1`, `20250803`, or a hash of params) allows cache busting without deletion.
- Consider expiration-friendly naming like `.../hourly/2025-08-03T14.json`.
Your blob will typically store the raw result of a Cosmos DB query.
```json
{
  "data": [
    { "id": "1", "title": "Foo", "description": "..." },
    { "id": "2", "title": "Another Foo", "description": "..." }
  ],
  "meta": {
    "query": "title or description contains 'Foo'",
    "cachedAt": "2025-08-03T14:52:00Z",
    "source": "cosmosdb"
  }
}
```
- Keep metadata (`cachedAt`, etc.) separate for clarity.
- Avoid huge nested documents if you can; keep it lean for fast delivery.
- Use a dedicated container like `cache/` or `api-cache/`.
Access options for the cache container:
- Private (accessed via a Function proxy)
- Blob (public read, no list)
- Optional: use Azure CDN for edge caching
There are 3 main patterns:

TTL (time-based) caching:
- Store blobs with timestamps (e.g. `/2025-08-03T14.json`)
- Your function always reads the latest timestamped blob
- Use an Azure Function Timer to update the cache hourly/daily

Event-based caching:
- Cache updates when data in Cosmos DB changes (e.g., via trigger or post-write logic)
- Write to a versioned blob or overwrite a standard one (like `.../latest.json`)

Lazy (on-demand) caching:
- If no cached blob exists, query Cosmos DB → store result → return it.
- Great for unpredictable queries.

To serve the cached data:
- Serve the blob URL directly if public
- Or read blob contents from your Azure Function and return it via API
- Optionally add headers:

  ```
  Cache-Control: public, max-age=300
  Content-Type: application/json
  ```
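A sketch of the second serving option, reading the blob from an Azure Function and returning it with cache headers (assuming the @azure/storage-blob package; the container and blob names are placeholders):

```js
// Serve a cached JSON blob through the Function API, with cache headers for the edge
const { BlobServiceClient } = require("@azure/storage-blob");

module.exports = async function (context, req) {
  const blob = BlobServiceClient.fromConnectionString(process.env.STORAGE_CONNECTION_STRING)
    .getContainerClient("cache")
    .getBlockBlobClient("posts/recent/latest.json");

  const buffer = await blob.downloadToBuffer(); // fine for small cache files; stream larger ones

  context.res = {
    status: 200,
    headers: {
      "Cache-Control": "public, max-age=300",
      "Content-Type": "application/json",
    },
    body: buffer.toString("utf8"),
  };
};
```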
Element | Recommendation |
---|---|
Blob naming | Use semantic, hierarchical names with versioning |
JSON format | Include metadata like `cachedAt`, keep lean |
Access control | Use private blobs or public with CDN for perf |
Cache strategy | Choose between TTL, event-based, or lazy caching |
Function logic | Check cache first → fallback to Cosmos DB → write to blob if needed |
Invalidation | Use versioning or overwrite blobs selectively |
Would you like a working example (Node.js, Python, or C#) that writes Cosmos DB data to a blob in this structure?
What about sorting and paging
Sorting and paging with Blob-based cache works a bit differently than with a database like Cosmos DB — since you're working with precomputed, static JSON, not live queries. But it's absolutely doable — here’s how you can structure and serve sorted + paginated data from a blob cache efficiently.
You have two core strategies:
- Option 1: Pre-paginate and cache each page as its own blob:
  ```
  cache/posts/sort=createdAt_desc/page=1.json
  cache/posts/sort=createdAt_desc/page=2.json
  ```
  Pros:
  - Extremely fast
  - No server-side computation

  Cons:
  - More blobs to manage
  - Must regenerate all pages when data changes
- Option 2: Cache the full result (or a partial batch) as a single blob:

  ```
  cache/posts/sort=createdAt_desc/all.json
  ```
- Then either:
  - Slice it in the Azure Function before sending (`skip`/`take` logic), or
  - Let the frontend page locally
```json
{
  "data": [
    { "id": "1", "title": "Foo", "createdAt": "2025-08-02T18:30Z" },
    ...
  ],
  "meta": {
    "sort": "createdAt_desc",
    "cachedAt": "2025-08-03T15:00Z",
    "total": 250
  }
}
```
In your Azure Function:
- Accept query params: `?sort=createdAt_desc&page=2&pageSize=10`
- Read the cached blob and parse it
- Sort it if needed (or pre-sort before caching)
- Return the correct page:

  ```js
  const pageSize = 10;
  const offset = (page - 1) * pageSize;
  const pagedData = fullResult.data.slice(offset, offset + pageSize);
  ```

- Add pagination metadata:

  ```json
  { "data": [...], "meta": { "page": 2, "pageSize": 10, "total": 250, "hasMore": true } }
  ```
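Putting those steps together, a compact sketch of the paging handler with basic parameter validation (`loadCachedResult` is a hypothetical helper that reads and parses the cached blob):

```js
// Validate paging params, then slice the cached result before responding
module.exports = async function (context, req) {
  const pageSize = Math.min(Math.max(parseInt(req.query.pageSize, 10) || 10, 1), 100);
  const page = Math.max(parseInt(req.query.page, 10) || 1, 1);
  const sort = req.query.sort === "title_asc" ? "title_asc" : "createdAt_desc"; // whitelist sorts

  const fullResult = await loadCachedResult(sort); // hypothetical helper: read + parse the cached blob

  const offset = (page - 1) * pageSize;
  const data = fullResult.data.slice(offset, offset + pageSize);

  context.res = {
    body: {
      data,
      meta: {
        page,
        pageSize,
        total: fullResult.data.length,
        hasMore: offset + pageSize < fullResult.data.length,
      },
    },
  };
};
```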
If you want to cache different sorts/pages individually:
```
cache/posts/sort=title_asc/page=1.json
cache/posts/sort=title_desc/page=3.json
```
Use consistent key formats, URL-safe characters, and short names.
Feature | Best Practice |
---|---|
Sorting | Pre-sort before writing to blob or sort in Azure Function |
Paging | Use .slice() on cached array in Function or serve full blob to frontend |
Blob granularity | Cache per page if dataset is large or fast-changing |
Naming | Use clean, structured paths like `cache/<entity>/sort=<field>_<dir>/page=<n>.json` |
- If your data is large (e.g. >5 MB JSON), paged blobs may scale better.
- For frequently changing data, consider using versioned cache keys with TTL.
- Always validate query params to avoid unbounded blob fetches or edge cases (e.g., negative page numbers).
Would you like a sample Azure Function that supports paged access to blob-cached JSON like this?
How about caching query results
Caching query results (like search/filter results from Cosmos DB) is a powerful way to save RUs — but it comes with some trade-offs around freshness and cache key management. Here's a full guide to efficiently caching query results, tailored to your Azure Function + Static Web App + Cosmos DB setup.
Cache if:
- The query is expensive (e.g. uses `CONTAINS`, scans many documents)
- The results don't change constantly
- The same query is likely to run repeatedly

Perfect for:
- Search (`title contains 'foo'`)
- Filtered views (`status = 'published'`)
- Sorted lists (`order by createdAt desc`)
- Home feeds, tag lists, category views
- User hits `/api/search?term=foo&sort=createdAt_desc&page=1`
- Azure Function builds a cache key:

  ```js
  const key = `search/term=foo/sort=createdAt_desc/page=1.json`;
  ```
- Function checks if the blob exists in Azure Blob Storage:
- ✅ If yes → serve blob response (cheap)
- ❌ If no → run query on Cosmos DB, store result in blob, return it
- Optionally: set a time-to-live (TTL) or version key to auto-expire
Cache key = your query fingerprint. Structure them logically:
```
cache/search/term=foo/sort=createdAt_desc/page=1.json
cache/posts/status=published/tag=azure/page=2.json
```
Use lowercase, no spaces, and encode query strings (e.g. URL-encode spaces)
You can also hash the full query string if it’s long:
cache/query/<sha256_of_query>.json
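Node's built-in crypto module covers that, for example:

```js
// Derive a stable cache key from an arbitrary query string
const crypto = require("crypto");

const queryString = "term=foo&sort=createdAt_desc&page=1"; // illustrative
const hash = crypto.createHash("sha256").update(queryString).digest("hex");
const cacheKey = `cache/query/${hash}.json`;
console.log(cacheKey);
```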
Keep it consistent and structured:
```json
{
  "data": [...],
  "meta": {
    "query": {
      "term": "foo",
      "sort": "createdAt_desc",
      "page": 1
    },
    "cachedAt": "2025-08-03T17:00Z",
    "ttlSeconds": 300,
    "source": "cosmosdb"
  }
}
```
Method | How it works | Use case |
---|---|---|
TTL-based | Set cache to expire after X seconds | Dynamic content (e.g. search results) |
Versioned keys | Add a `v=20250803` param | Home feeds, bulk updates |
Manual purge | Trigger a delete or overwrite | After admin update |
Lazy invalidation | Check if data changed before using cache | Advanced (requires metadata tracking) |
- Reduces Cosmos DB RU consumption dramatically
- Skips the Azure Function entirely (no cold starts) when cached static files are served via a CDN
- Works beautifully with Azure Static Web Apps + Blob Storage + CDN
Example flow inside your Azure Function (`blobExists`, `getBlob`, `saveBlob`, and `queryCosmosDB` are your own helpers):

```js
// Build a cache key from the query params, then read through to Cosmos DB on a miss
const cacheKey = `search/term=${term}/sort=${sort}/page=${page}.json`;

if (await blobExists(cacheKey)) {
  return getBlob(cacheKey); // Serve cached result, no RU cost
}

const queryResult = await queryCosmosDB(term, sort, page);
await saveBlob(cacheKey, queryResult, { ttl: 300 });
return queryResult;
```
Item | Recommendation |
---|---|
Cache key | Encode query params logically |
Blob content | Store data + meta with `cachedAt`, TTL |
Blob TTL | Use time-based or version-based expiry |
Serve from | Azure Blob, optionally behind CDN |
Invalidate | On demand, TTL, or version updates |
Want a sample implementation in a specific language (Node.js, Python, C#) for caching query results in blob storage?
How can I restrict access to the function such that only my app can call it?
To restrict access to your Azure Function so that only your Static Web App (or a specific frontend) can call it — and block all other clients — you have several solid options, depending on your level of control and security needs.
Here’s a clear breakdown:
If your Function is part of the same Static Web App project (under the `/api/` route), Azure handles authentication and routing automatically.
Result:
- The Function is only accessible via your Static Web App frontend
- External requests to the Function URL won’t work unless they come through the app
📎 Docs: Static Web Apps + Azure Functions
How to Use:
- Deploy your Function to `/api`
- Access it from your frontend like `fetch('/api/my-function')`
A second option, a shared API key or custom header check, works for standalone Functions not tightly integrated into SWA:
- Your frontend sends a custom header (e.g. `x-app-auth`) or shared API key.
- Your Azure Function checks for it.
- If missing or incorrect → return 403 Forbidden.
```js
module.exports = async function (context, req) {
  const allowedKey = process.env.MY_SECRET_KEY;
  const providedKey = req.headers['x-app-auth'];

  if (providedKey !== allowedKey) {
    context.res = { status: 403, body: "Forbidden" };
    return;
  }

  // continue with logic
};
```
✅ Store `MY_SECRET_KEY` securely in Function App → Configuration → Application settings
In your Function, you can inspect the `Origin` or `Referer` headers:
```js
const allowedOrigin = 'https://your-app-name.web.app';
const origin = req.headers['origin'];

if (origin !== allowedOrigin) {
  context.res = { status: 403, body: "Unauthorized" };
  return;
}
```
If your app is registered in Azure AD, you can:
- Protect your Function App with AAD login
- Require a valid AAD access token in each request
- Only apps/users with the right roles/scopes can call it
📎 AAD auth with Azure Functions
Best for enterprise scenarios or APIs shared across services.
Restrict your Function App to only allow traffic from known IPs — such as your Static Web App, a VNet, or a private backend.
🧱 Configure:
- In Function App > Networking > Access Restrictions
- Add Allow rules for your frontend’s IP (if known and static)
- Add Deny for everything else
- Don’t rely solely on CORS: It protects the browser, not the Function — attackers can still hit your endpoint directly.
- Don’t put secrets in client-side JS — even if obfuscated, they’re exposed.
If your Function serves only your Static Web App frontend, use:
- Integrated `/api` deployment (best, easiest)
- Optionally add referrer checks or CDN IP validation
Would you like a code-ready template for validating headers or deploying a secure SWA + API setup?