Use of Standards - bounswe/bounswe2026group4 GitHub Wiki

Use of Standards

This page documents the open standards the team has adopted — or plans to adopt — for the Local History Story Map project (Django REST Framework backend, React + Vite web frontend, Expo/React Native mobile, MySQL 8.0, nginx).

For transparency, the page is split into two parts:

  • Part A — Standards already in our code. Each entry points to the file / line where the standard is configured or applied.
  • Part B — Standards we plan to adopt. Each entry states what we will add and where it will live in the existing code.

Our product is a platform for discovering and contributing location-based community stories through a map and feed interface. Stories carry geographic (lat / lng + place name), temporal (year / year range / decade), and authorship metadata. Because our data model is built around places, creative works, and people, open web-data standards map onto it naturally.


Part A β€” Standards Already in Our Code

A.1 JSON Web Tokens (IETF RFC 7519) + OAuth 2.0 Bearer Scheme (RFC 6750)

Which standard: JWT — RFC 7519 (signed, self-contained access and refresh tokens) and OAuth 2.0 Bearer Token Usage — RFC 6750 (the Authorization: Bearer <token> HTTP scheme).

Why we chose it:

  • Our platform has three roles (guest, registered user, admin) and both a single-page React app and an Expo mobile client that need a stateless, cross-origin credential. A signed JWT lets the backend verify identity from the token itself, without touching the session store — which is the right fit for a REST API consumed by two separate client codebases.
  • RFC 7519 defines the claims (sub, exp, iat, jti) and signing method we rely on; RFC 6750 defines the header format both axios (web) and the mobile client send.
  • Staying on the standard means future integrations (SSO, third-party OAuth providers) can consume the same tokens without a translation layer.

How it is integrated (code pointers):

  • requirements/base.txt — djangorestframework-simplejwt==5.3.* pulled in as a dependency.
  • backend/config/settings/base.py — rest_framework_simplejwt.authentication.JWTAuthentication is the default authentication class inside REST_FRAMEWORK['DEFAULT_AUTHENTICATION_CLASSES'].
  • backend/config/settings/base.py — the SIMPLE_JWT dict sets 15-minute access tokens, 7-day refresh tokens, rotation + blacklist on refresh, and Bearer as the auth-header type.
  • backend/apps/users/views.py — LoginView issues an access/refresh pair; LogoutView blacklists the submitted refresh token.
  • Frontend axios client and the Expo mobile client attach Authorization: Bearer <access> on every authenticated request and call the refresh endpoint on 401.
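As a reference, the SIMPLE_JWT dict described above boils down to a handful of settings. The key names below are djangorestframework-simplejwt's documented settings; the values simply restate the lifetimes and options listed in this section, so treat this as a sketch rather than the exact file contents:

```python
from datetime import timedelta

# Sketch of the SIMPLE_JWT block in backend/config/settings/base.py.
SIMPLE_JWT = {
    "ACCESS_TOKEN_LIFETIME": timedelta(minutes=15),  # short-lived access token
    "REFRESH_TOKEN_LIFETIME": timedelta(days=7),     # longer-lived refresh token
    "ROTATE_REFRESH_TOKENS": True,                   # issue a fresh refresh token on each refresh
    "BLACKLIST_AFTER_ROTATION": True,                # revoke the old refresh token
    "AUTH_HEADER_TYPES": ("Bearer",),                # RFC 6750 Authorization scheme
}
```

Rotation plus blacklisting means a stolen refresh token becomes useless the moment the legitimate client refreshes.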

A.2 OpenAPI 3.0 (OpenAPI Initiative / Linux Foundation)

Which standard: OpenAPI Specification 3.0.

Why we chose it:

  • OpenAPI is the industry standard description format for HTTP APIs. Publishing our REST API under it means the frontend and mobile teams can auto-generate typed clients, external reviewers get an interactive explorer for free, and breaking changes surface in the schema diff rather than in production.
  • drf-spectacular generates the OpenAPI document directly from our DRF serializers and views, so keeping the published spec accurate adds almost no ongoing cost.

How it is integrated (code pointers):

  • requirements/base.txt — drf-spectacular==0.27.*.
  • backend/config/settings/base.py — 'drf_spectacular' is in THIRD_PARTY_APPS; 'DEFAULT_SCHEMA_CLASS': 'drf_spectacular.openapi.AutoSchema' is set inside the REST_FRAMEWORK dict; SPECTACULAR_SETTINGS defines the API title, description, and version.
  • As new endpoints ship, @extend_schema decorators are added so request/response shapes stay accurate in the generated spec.

A.3 ISO 8601 — Date and Time Representation (RFC 3339 profile)

Which standard: ISO 8601, profiled for internet use by RFC 3339.

Why we chose it:

  • ISO 8601 is the unambiguous, locale-independent format for dates and times. Our API and DB deal with submission timestamps, account-creation dates, and moderation audit times — all of which need to be interpretable by web and mobile clients in different time zones.
  • Django REST Framework already serializes DateTimeField / DateField as ISO 8601 strings in UTC, so we inherit the standard without extra code.

How it is integrated (code pointers):

  • backend/config/settings/base.py — TIME_ZONE = 'UTC' and USE_TZ = True ensure every timestamp is stored and emitted in UTC.
  • backend/apps/stories/models.py — Story.submitted_at = DateTimeField(auto_now_add=True) and Story.updated_at = DateTimeField(auto_now=True) produce RFC-3339-formatted values like "2026-03-01T12:00:00Z" when serialized.
  • backend/apps/users/models.py — User.date_joined, EmailVerificationCode.created_at / expires_at, and UserProfile.birth_date use the same mechanism.
  • Frontend renders all timestamps by parsing the ISO string with the native Date API.
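The two serialization details that matter in practice (UTC storage, RFC 3339 output) can be demonstrated with nothing but the standard library; the "Z"-suffixed string is the form DRF emits for UTC datetimes, and the "+00:00" form is Python's native spelling of the same instant:

```python
from datetime import datetime, timezone

ts = datetime(2026, 3, 1, 12, 0, 0, tzinfo=timezone.utc)

# Python's isoformat() uses the explicit "+00:00" offset form of RFC 3339...
assert ts.isoformat() == "2026-03-01T12:00:00+00:00"

# ...while DRF renders UTC datetimes with the equivalent "Z" suffix.
drf_style = ts.isoformat().replace("+00:00", "Z")
assert drf_style == "2026-03-01T12:00:00Z"

# Clients can round-trip the offset form losslessly with fromisoformat().
assert datetime.fromisoformat("2026-03-01T12:00:00+00:00") == ts
```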

A.4 WGS 84 / EPSG:4326 — Geographic Coordinate Reference System

Which standard: WGS 84 (EPSG:4326) — the coordinate reference system used by GPS, OpenStreetMap, Leaflet, and virtually every web-mapping stack.

Why we chose it:

  • We display story pins on a Leaflet map tiled from OpenStreetMap. Leaflet and OSM both expect WGS 84 longitude/latitude in decimal degrees, so choosing anything else would require a projection-conversion step we have no need for.
  • WGS 84 is also what GPS devices emit, which is how any future "post from your current location" feature on the mobile app would work.

How it is integrated (code pointers):

  • backend/apps/stories/models.py — Story.location_lat = DecimalField(max_digits=9, decimal_places=6) and Story.location_lng = DecimalField(max_digits=10, decimal_places=6). The precision (6 decimal places ≈ 11 cm) is enough for street-level pins and matches the [-90, 90] / [-180, 180] WGS 84 range.
  • backend/apps/stories/models.py — an index story_coords_idx on (location_lat, location_lng) supports bounding-box queries for the map view.
  • frontend/package.json — leaflet and react-leaflet are the map libraries; both assume WGS 84 by default.
  • The mobile app (mobile/) uses the same coordinate convention when displaying pins.
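A hypothetical validator (not in the codebase) makes the two constraints above concrete: values are quantized to the model's 6 decimal places and checked against the WGS 84 value ranges:

```python
from decimal import Decimal
from typing import Tuple

Q = Decimal("0.000001")  # 6 decimal places, matching the DecimalField precision

def validate_wgs84(lat: str, lng: str) -> Tuple[Decimal, Decimal]:
    """Quantize a coordinate pair to model precision and enforce WGS 84 ranges.

    Hypothetical helper for illustration; the real checks live in the
    DRF serializer / model validators.
    """
    lat_d = Decimal(lat).quantize(Q)
    lng_d = Decimal(lng).quantize(Q)
    if not (Decimal(-90) <= lat_d <= Decimal(90)):
        raise ValueError("latitude out of WGS 84 range [-90, 90]")
    if not (Decimal(-180) <= lng_d <= Decimal(180)):
        raise ValueError("longitude out of WGS 84 range [-180, 180]")
    return lat_d, lng_d
```

Using Decimal (as the model does) avoids the drift a binary float would introduce when coordinates are stored, compared, and indexed.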

A.5 HTTPS / TLS (RFC 8446) for Transport Security

Which standard: TLS 1.3 — RFC 8446 (with TLS 1.2 — RFC 5246 — as a fallback).

Why we chose it:

  • NFR 2.4.1 requires HTTPS for all client↔server communication. TLS protects credentials (login, JWTs in Authorization headers) and user-submitted story content on the wire.
  • Let's Encrypt certificates are free and automatable, so there is no reason not to require TLS everywhere in production.

How it is integrated (code pointers):

  • nginx/nginx.conf — the reverse proxy terminates TLS using a Let's Encrypt certificate. TLS versions are pinned explicitly: ssl_protocols TLSv1.2 TLSv1.3; (line 19) and ssl_ciphers HIGH:!aNULL:!MD5; (line 20). The HTTP :80 server block issues a 301 redirect to https:// (lines 5–9), so there is no plaintext surface.
  • nginx/nginx.conf — certificates are mounted at runtime from the nginx/certs volume (ssl_certificate, ssl_certificate_key on lines 17–18); provisioning via Certbot is documented in the project README.md.
  • backend/config/settings/production.py — production settings set ALLOWED_HOSTS and CORS origins to the HTTPS domain; the frontend VITE_API_URL is required to be https://….
  • Live domain (https://storymap.page) is reachable only over HTTPS.

A.6 IANA Media Types (MIME) for Uploads

Which standard: IANA Media Types registry — specifically the image/jpeg, image/png, audio/*, and video/* types.

Why we chose it:

  • Our story submissions accept images (with audio and video planned — req. 1.4.2.1). Using IANA-registered MIME types rather than ad-hoc file-extension checks lets us verify a file's actual content type from its bytes instead of trusting the HTTP Content-Type header, which is client-controlled and trivially spoofed.
  • The profile-photo requirement (1.2.2.4 — JPEG/PNG only, under 2 MB) is expressed directly in IANA type names.

How it is integrated (code pointers):

  • requirements/base.txt — Pillow==10.* for image decoding and python-magic==0.4.* for MIME detection from file bytes.
  • backend/apps/media/serializers.py — ImageUploadSerializer.validate_file performs two-layer validation: Pillow's ImageField confirms the file is a decodable image, then magic.from_buffer(value.read(1024), mime=True) (line 28) detects the real MIME type and checks it against ALLOWED_MIME_TYPES = {'image/jpeg', 'image/png'} (line 6).
  • backend/apps/media/models.py — MediaItem.media_type stores the broad category (image / audio / video) as a TextChoices enum, alongside file_size and original_filename for auditability.

A.7 CORS — Cross-Origin Resource Sharing (RFC 6454 / WHATWG Fetch Living Standard)

Which standard: RFC 6454 — The Web Origin Concept and the WHATWG Fetch Living Standard (which supersedes the W3C CORS spec for browsers).

Why we chose it:

  • Our platform is consumed by two separate origins: the React SPA (http://localhost:5173 in dev, https://storymap.page in production) and the Expo mobile client. Without an explicit CORS policy, browsers block cross-origin XMLHttpRequest / fetch calls by default.
  • CORS lets us declare exactly which origins may make credentialed requests to the Django API. Using CORS_ALLOWED_ORIGINS with an explicit allowlist is stricter and safer than enabling CORS_ALLOW_ALL_ORIGINS, which would expose the API to any browser-based origin.

How it is integrated (code pointers):

  • requirements/base.txt — django-cors-headers pulled in as a dependency.
  • backend/config/settings/base.py — 'corsheaders' in THIRD_PARTY_APPS; 'corsheaders.middleware.CorsMiddleware' is the second entry in MIDDLEWARE (positioned before CommonMiddleware, as the library requires).
  • backend/config/settings/development.py — CORS_ALLOWED_ORIGINS = ['http://localhost:5173'] allows the local Vite dev server.
  • backend/config/settings/production.py — CORS_ALLOWED_ORIGINS = env.list('CORS_ALLOWED_ORIGINS') reads the allowlist from the environment, keeping the production origin out of source control.

A.8 HTTP Security Headers — HSTS (RFC 6797), X-Frame-Options, X-Content-Type-Options

Which standard:

  • HSTS — RFC 6797: instructs browsers to only connect over HTTPS for a given duration.
  • X-Frame-Options: prevents the app from being embedded in an <iframe> on a foreign origin (clickjacking defence).
  • X-Content-Type-Options: disables browser MIME-type sniffing, which can otherwise be exploited to execute a script disguised as an image.

Why we chose it:

  • These three headers are a minimum-viable security baseline for any web-facing service. Together they defend against HTTPS downgrade attacks, clickjacking, and MIME confusion attacks — all cheap to add and with no compatibility cost on modern browsers.
  • Django's built-in SecurityMiddleware emits all three without requiring any additional dependency. The HSTS header is only activated in production, where TLS is enforced (see A.5).

How it is integrated (code pointers):

  • backend/config/settings/base.py — 'django.middleware.security.SecurityMiddleware' is the first entry in MIDDLEWARE; it adds X-Content-Type-Options: nosniff on every response (Django default, SECURE_CONTENT_TYPE_NOSNIFF = True).
  • backend/config/settings/base.py — 'django.middleware.clickjacking.XFrameOptionsMiddleware' (last entry in MIDDLEWARE) adds X-Frame-Options: DENY on every response.
  • backend/config/settings/production.py — SECURE_HSTS_SECONDS = 31536000 and SECURE_HSTS_INCLUDE_SUBDOMAINS = True instruct SecurityMiddleware to emit Strict-Transport-Security: max-age=31536000; includeSubDomains in production, telling browsers to enforce HTTPS for one year.
  • backend/config/settings/production.py — SECURE_PROXY_SSL_HEADER = ('HTTP_X_FORWARDED_PROTO', 'https'), SESSION_COOKIE_SECURE = True, and CSRF_COOKIE_SECURE = True complement HSTS by ensuring cookies are never transmitted over plain HTTP.

Part B β€” Standards We Plan to Adopt

The following seven standards are not yet in the code. They are listed here because each one maps cleanly onto our existing domain model or deployment and solves a concrete problem we have today (SEO-friendly structured data for stories, map interop, expressing uncertain historical dates, and hardening the API and frontend). For each one we state what we will add and where in the existing code it will land.

B.1 Schema.org Vocabulary — planned

Which standard: Schema.org — the types Article, Place, GeoCoordinates, Person, Comment, ImageObject, AudioObject, VideoObject, and InteractionCounter.

Why we chose it:

  • Schema.org is the shared vocabulary supported by Google, Bing, Yahoo, and Yandex. Because our stories are the kind of content search engines surface (articles tied to places and dates), adopting it makes our stories eligible for rich results and improves discoverability.
  • The hierarchy maps one-to-one onto our existing domain model: a Story is an Article, its geolocation is a Place with nested GeoCoordinates, its author is a Person, its attachments are ImageObject / AudioObject / VideoObject, and engagement counters (likes, saves) fit InteractionCounter.

How we plan to integrate it:

  • Add a <StructuredData story={...} /> component on the frontend (frontend/src/components/) that renders on the story-detail route (/stories/:id) and the public-profile route (/users/:id). The component transforms the existing REST response into a Schema.org entity and emits it in a <script type="application/ld+json"> tag.
  • Field mapping from our models (backend/apps/stories/models.py):
| Schema.org property | Source in our model |
| --- | --- |
| @type | "Article" |
| headline | Story.title |
| articleBody | Story.narrative |
| datePublished | Story.submitted_at |
| dateModified | Story.updated_at |
| author | Person built from Story.user (omitted if contributor_visible=False) |
| contentLocation | Place with nested GeoCoordinates from location_lat / location_lng / location_name |
| temporalCoverage | ISO 8601 interval derived from time_type + year fields (the richer EDTF form stays on the API side — see B.4) |
| keywords | Tags from apps/tags |
| image / audio / video | MediaItem entries grouped by media_type |
| interactionStatistic | InteractionCounter for like_count, save_count |
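To make the mapping concrete, here is a language-agnostic sketch of the transform (shown in Python for readability; the real code will live in the React <StructuredData> component). The story dict and the author_username key stand in for whatever shape the REST response actually has, so treat the field names as assumptions drawn from the table above:

```python
def story_to_jsonld(story: dict) -> dict:
    """Hypothetical sketch of the Schema.org field mapping for a story."""
    doc = {
        "@context": "https://schema.org",
        "@type": "Article",
        "headline": story["title"],
        "articleBody": story["narrative"],
        "datePublished": story["submitted_at"],
        "dateModified": story["updated_at"],
        "contentLocation": {
            "@type": "Place",
            "name": story["location_name"],
            "geo": {
                "@type": "GeoCoordinates",
                "latitude": story["location_lat"],
                "longitude": story["location_lng"],
            },
        },
    }
    # Per the mapping table: the author node is omitted entirely when the
    # contributor has opted out of visibility.
    if story.get("contributor_visible", True):
        doc["author"] = {"@type": "Person", "name": story["author_username"]}
    return doc
```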

B.2 JSON-LD 1.1 (W3C) — planned

Which standard: JSON-LD 1.1, a W3C Recommendation for encoding Linked Data in JSON.

Why we chose it:

  • JSON-LD is the W3C-recommended serialization for Schema.org and the format Google explicitly recommends for structured data (over RDFa and Microdata) because it lives outside the visible DOM and is therefore easier to maintain alongside our React components.
  • It keeps the Schema.org vocabulary introduced in B.1 machine-readable to crawlers and any future third-party integration (Wikimedia, Europeana, heritage aggregators) without breaking our existing plain-JSON REST API clients.

How we plan to integrate it:

  • The <StructuredData> component in B.1 emits a <script type="application/ld+json"> block with "@context": "https://schema.org" and the appropriate @type.
  • JSON-LD is scoped to HTML pages; the REST API at backend/apps/stories/views.py continues to emit plain JSON for app consumers, so no server-side change is required.
  • Example payload that will be embedded on a story detail page:
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "Article",
  "@id": "https://storymap.page/stories/42",
  "headline": "The old bakery on Galip Dede Street",
  "articleBody": "My grandmother used to tell me...",
  "datePublished": "2026-03-01T12:00:00Z",
  "temporalCoverage": "1960/1969",
  "author": {
    "@type": "Person",
    "name": "ayse_k",
    "url": "https://storymap.page/users/17"
  },
  "contentLocation": {
    "@type": "Place",
    "name": "Galip Dede Caddesi, Beyoğlu",
    "geo": {
      "@type": "GeoCoordinates",
      "latitude": 41.02765,
      "longitude": 28.97499
    }
  },
  "keywords": ["istanbul", "beyoglu", "oral-history"],
  "interactionStatistic": [
    { "@type": "InteractionCounter", "interactionType": "https://schema.org/LikeAction", "userInteractionCount": 12 }
  ]
}
</script>

B.3 GeoJSON (IETF RFC 7946) — planned

Which standard: GeoJSON — RFC 7946.

Why we chose it:

  • GeoJSON is the IETF-standardised JSON format for geographic features and is the format Leaflet (our map library) accepts natively via L.geoJSON(...). Emitting our map data as GeoJSON removes a custom conversion step on the frontend.
  • RFC 7946 mandates WGS 84 / EPSG:4326 coordinates with the [longitude, latitude] ordering — which is exactly what Leaflet expects and what we are already storing (see A.4).

How we plan to integrate it:

  • The map endpoint in backend/apps/stories/views.py (StoryMapView) currently returns a paginated plain JSON list via StoryMapSerializer (backend/apps/stories/serializers.py). We will add either a new content negotiation case (application/geo+json) or a dedicated GET /stories/map.geojson view that wraps the same queryset as a FeatureCollection:
    • each story becomes one Feature whose geometry is a Point at [location_lng, location_lat] (note: lng first, per RFC 7946 §3.1.1),
    • properties carries id, title, location_name, time_type, year, year_start, year_end.
  • Expected response shape:
{
  "type": "FeatureCollection",
  "features": [
    {
      "type": "Feature",
      "id": 42,
      "geometry": { "type": "Point", "coordinates": [28.97499, 41.02765] },
      "properties": {
        "title": "The old bakery on Galip Dede Street",
        "location_name": "Galip Dede Caddesi, Beyoğlu",
        "time_type": "decade",
        "year": 1960
      }
    }
  ]
}
  • On the frontend, the map component will pipe this directly into L.geoJSON(...) instead of iterating over pins manually.
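The planned transformation is mechanical enough to sketch up front. The function below is a hypothetical serializer-level helper (field names follow the properties list above), and its one non-obvious detail is the RFC 7946 position ordering:

```python
def stories_to_geojson(stories: list) -> dict:
    """Sketch of the planned FeatureCollection response for the map endpoint."""
    return {
        "type": "FeatureCollection",
        "features": [
            {
                "type": "Feature",
                "id": s["id"],
                # RFC 7946 §3.1.1: a position is [longitude, latitude],
                # the reverse of the lat/lng order humans usually write.
                "geometry": {
                    "type": "Point",
                    "coordinates": [s["location_lng"], s["location_lat"]],
                },
                "properties": {
                    "title": s["title"],
                    "location_name": s["location_name"],
                    "time_type": s["time_type"],
                    "year": s["year"],
                },
            }
            for s in stories
        ],
    }
```

Swapped coordinates are the classic GeoJSON bug (pins land in the wrong hemisphere), which is why the ordering is worth encoding in one tested helper rather than repeating it inline.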

B.4 EDTF (Extended Date/Time Format) — planned

Which standard: Extended Date/Time Format (EDTF) — a Library of Congress extension of ISO 8601 designed specifically for uncertain, approximate, and historical dates.

Why we chose it:

  • Our Story model carries four time_type values (exact_year, approximate_year, decade, year_range). Plain ISO 8601 can express exact_year (1965) and year_range (1950/1975) but has no notation for approximate_year or decade. EDTF adds exactly the missing forms: 1965~ for "approximately 1965", 196X for "some year in the 1960s", 1965? for "uncertain 1965" (EDTF qualifiers are suffixed to the date). This makes EDTF the right standard for oral-history content, where the narrator often cannot pin down an exact date.
  • EDTF is maintained by the Library of Congress and is the standard used by digital-archive and linked-data cultural-heritage systems — the same ecosystems our content could interoperate with in the future.
  • EDTF is a superset of ISO 8601, so values like 1965 and 1950/1975 remain valid in both notations — we don't break A.3.

How we plan to integrate it:

  • Add a derived, read-only temporal_coverage field to StoryDetailSerializer in backend/apps/stories/serializers.py that produces an EDTF string for every story. This is the internal / API representation and preserves the full uncertainty of the narrator's memory:
    • exact_year=1965 → "1965"
    • approximate_year=1965 → "1965~" (EDTF-specific, "approximately 1965")
    • decade=1960 → "196X" (EDTF-specific, "some year in the 1960s")
    • year_range=1950..1975 → "1950/1975"
  • Interop with Schema.org (B.1 / B.2): Google's Rich Results validator is not guaranteed to accept EDTF-specific forms like 1965~ or 196X in the temporalCoverage property — it expects plain ISO 8601. To keep structured-data validation passing, the <StructuredData> component will emit the ISO 8601 interval form into JSON-LD, derived from the same underlying fields:
    • exact_year=1965 / approximate_year=1965 → "1965"
    • decade=1960 → "1960/1969"
    • year_range=1950..1975 → "1950/1975"
  • So: the EDTF string is our richer, archive-grade internal representation; the ISO 8601 interval is the lossy-but-validator-safe subset we publish to search engines. Both are derived from the same time_type + year fields and stay in sync by construction.
  • This single standard replaces what would otherwise have been two separate entries (one for ISO 8601 intervals, one for approximate-date handling) and solves a concrete modelling problem we already have.
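Because both representations derive from the same fields, one function can produce the pair. This is a hypothetical helper matching the mappings listed above (the field names mirror the Story model pointers on this page, and the exact serializer shape is still to be decided):

```python
def temporal_coverage(time_type, year=None, year_start=None, year_end=None):
    """Derive (edtf, iso_interval) from the Story time fields.

    edtf is the archive-grade representation for the API; iso_interval is
    the validator-safe subset for JSON-LD. Sketch only.
    """
    if time_type == "exact_year":
        return str(year), str(year)
    if time_type == "approximate_year":
        # EDTF suffixes the ~ qualifier; ISO 8601 has no approximate form,
        # so the plain year is published (lossy).
        return f"{year}~", str(year)
    if time_type == "decade":
        # EDTF 'X' marks an unspecified digit; ISO gets a 10-year interval.
        return f"{year // 10}X", f"{year}/{year + 9}"
    if time_type == "year_range":
        return f"{year_start}/{year_end}", f"{year_start}/{year_end}"
    raise ValueError(f"unknown time_type: {time_type}")
```

Keeping the derivation in one place is what makes the "in sync by construction" claim above hold.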

B.5 Content Security Policy — W3C CSP Level 2/3 — planned

Which standard: Content Security Policy Level 2 and Level 3 — W3C Recommendations that allow a server to declare which content sources (scripts, styles, images, fonts, media) the browser is permitted to load for a given page.

Why we chose it:

  • CSP is the primary browser-enforced defence against Cross-Site Scripting (XSS). Even if an attacker manages to inject a <script> tag into our React frontend, a strict CSP can prevent it from executing or exfiltrating data by blocking inline scripts and restricting script sources to our own domain.
  • Our React SPA loads scripts only from its own origin and a CDN for map tiles (Leaflet / OpenStreetMap); this small, well-known set of sources makes a tight policy straightforward to write and maintain.
  • CSP also blocks mixed-content loads (HTTP assets on an HTTPS page), reinforcing A.5 (TLS) and A.8 (HSTS).

How we plan to integrate it:

  • Add add_header directives to the server block in nginx/nginx.conf. The header applies to all responses served by nginx (both the React SPA and static assets):
add_header Content-Security-Policy
  "default-src 'self';
   script-src  'self';
   style-src   'self' 'unsafe-inline';
   img-src     'self' data: https://tile.openstreetmap.org;
   font-src    'self';
   connect-src 'self' https://storymap.page;
   frame-ancestors 'none';"
  always;
  • frame-ancestors 'none' in the CSP supersedes X-Frame-Options (A.8) and provides the same clickjacking protection for CSP-aware browsers.
  • The policy will be validated with the CSP Evaluator before merging. A Content-Security-Policy-Report-Only mode will be used in staging first so any violations are logged without breaking functionality.

B.6 API Rate Limiting — RFC 6585 (429 Too Many Requests) — planned

Which standard: RFC 6585 §4 — 429 Too Many Requests, which defines the HTTP status code for rate-limit responses, and DRF's throttling framework, which implements the complementary Retry-After header.

Why we chose it:

  • The login (POST /auth/login/) and token-refresh (POST /auth/refresh/) endpoints are the most sensitive surfaces in the API: an attacker can test millions of credential combinations against them without any server-side pushback. Rate limiting contains the blast radius of a credential-stuffing or brute-force attack.
  • DRF's built-in throttle classes implement the RFC 6585 429 response and the Retry-After header with no additional dependencies, so the cost of adoption is a few lines of settings.

How we plan to integrate it:

  • Add DEFAULT_THROTTLE_CLASSES and DEFAULT_THROTTLE_RATES to REST_FRAMEWORK in backend/config/settings/base.py:
REST_FRAMEWORK = {
    ...
    'DEFAULT_THROTTLE_CLASSES': [
        'rest_framework.throttling.AnonRateThrottle',
        'rest_framework.throttling.UserRateThrottle',
    ],
    'DEFAULT_THROTTLE_RATES': {
        'anon': '60/minute',
        'user': '300/minute',
    },
}
  • Apply a tighter, view-level throttle on the authentication endpoints in backend/apps/users/views.py using throttle_classes = [ScopedRateThrottle] and a custom login scope (e.g. 'login': '10/minute').
  • On throttle, DRF returns HTTP 429 with a Retry-After header indicating how many seconds the client must wait, as specified in RFC 6585.
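The mechanism behind those settings, and the Retry-After value the 429 carries, can be illustrated in isolation. The class below is a toy model of the per-key request-history counting that DRF's SimpleRateThrottle family performs; it makes no claim to match the library's internals:

```python
import time

class ToyThrottle:
    """Toy sliding-window throttle: allow at most `limit` hits per `window` seconds."""

    def __init__(self, limit, window_seconds):
        self.limit = limit
        self.window = window_seconds
        self.history = {}  # key -> list of hit timestamps

    def allow(self, key, now=None):
        """Return (allowed, retry_after_seconds) for one request."""
        now = time.monotonic() if now is None else now
        # Drop hits that have aged out of the window.
        hits = [t for t in self.history.get(key, []) if now - t < self.window]
        if len(hits) >= self.limit:
            # Denied: the client would get HTTP 429 with Retry-After set to
            # the moment the oldest hit leaves the window.
            return False, self.window - (now - hits[0])
        hits.append(now)
        self.history[key] = hits
        return True, 0.0
```

With limit=10 and window_seconds=60 this models the planned 'login': '10/minute' scope: the eleventh login attempt inside a minute is rejected with a computed wait time.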

B.7 Argon2 Password Hashing — NIST SP 800-63B — planned

Which standard: NIST Special Publication 800-63B — Digital Identity Guidelines: Authentication and Lifecycle Management, specifically Section 5.1.1 on memorized secret verifiers, which requires passwords to be stored using a suitable one-way key derivation function (such as Argon2, bcrypt, or PBKDF2 with a high iteration count) and recommends memory-hard functions.

Why we chose it:

  • Django's default password hasher is PBKDF2-SHA256 with 870,000 iterations (Django 5.x). This satisfies NIST SP 800-63B's minimum requirement, but Argon2id — winner of the Password Hashing Competition (2015) — is memory-hard in addition to being compute-hard. Memory hardness makes GPU-based cracking attacks significantly more expensive than PBKDF2 alone.
  • NIST SP 800-63B also specifies minimum password length (8 characters) and prohibits storing passwords as plain text or with reversible encryption — both already satisfied by Django's auth framework; documenting the standard makes the policy explicit.
  • Argon2 is supported by Django via the argon2-cffi package with a one-line settings change; migrating is transparent for existing users because Django re-hashes on next login.

How we plan to integrate it:

  • Add argon2-cffi to requirements/base.txt.
  • Set PASSWORD_HASHERS in backend/config/settings/base.py so Argon2id is the primary hasher and PBKDF2 remains as a legacy fallback for existing hashes:
PASSWORD_HASHERS = [
    'django.contrib.auth.hashers.Argon2PasswordHasher',   # new hashes
    'django.contrib.auth.hashers.PBKDF2PasswordHasher',   # legacy fallback
]
  • Existing stored PBKDF2 hashes are not broken: Django detects the hasher from the stored prefix and re-hashes to Argon2 on the user's next successful login.
  • The MinimumLengthValidator already set in AUTH_PASSWORD_VALIDATORS (base.py line 104) enforces the NIST SP 800-63B minimum-length requirement.
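For context, what the current PBKDF2 default actually computes can be reproduced with the standard library. This is a sketch of the primitive only; Django additionally encodes the algorithm name, iteration count, and salt into the stored hash string so the right verifier can be selected later:

```python
import hashlib
import hmac
import os

ITERATIONS = 870_000  # Django 5.x default iteration count

def pbkdf2_hash(password: bytes, salt: bytes) -> bytes:
    # Compute-hard but not memory-hard: the property Argon2id adds on top.
    return hashlib.pbkdf2_hmac("sha256", password, salt, ITERATIONS)

salt = os.urandom(16)                       # per-user random salt
stored = pbkdf2_hash(b"correct horse battery staple", salt)

# Verification re-derives the digest from the stored salt and compares it
# in constant time to avoid timing side channels.
assert hmac.compare_digest(stored, pbkdf2_hash(b"correct horse battery staple", salt))
```

Because each guess costs a full 870,000-iteration derivation, offline cracking is slowed by design; Argon2id further forces each guess to hold megabytes of memory, which is what defeats massively parallel GPU attacks.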

Summary Table

| # | Standard | Status | Where in Our System |
| --- | --- | --- | --- |
| A.1 | JWT (RFC 7519) + OAuth 2.0 Bearer (RFC 6750) | Adopted | requirements/base.txt, config/settings/base.py (SimpleJWT + DRF), apps/users/views.py |
| A.2 | OpenAPI 3.0 | Adopted | requirements/base.txt (drf-spectacular), SPECTACULAR_SETTINGS in config/settings/base.py |
| A.3 | ISO 8601 (RFC 3339 profile) for system timestamps | Adopted | TIME_ZONE='UTC', USE_TZ=True in config/settings/base.py; DRF default serializers |
| A.4 | WGS 84 (EPSG:4326) | Adopted | Story.location_lat / location_lng decimal fields; Leaflet on the frontend |
| A.5 | HTTPS / TLS (RFC 8446) | Adopted | nginx reverse proxy with Let's Encrypt cert; production-only HTTPS origins |
| A.6 | IANA Media Types (MIME) | Adopted | MediaItem.media_type in apps/media/models.py; Pillow + python-magic in requirements/base.txt |
| A.7 | CORS (RFC 6454 / WHATWG Fetch) | Adopted | corsheaders middleware in base.py; CORS_ALLOWED_ORIGINS in development.py and production.py |
| A.8 | HTTP Security Headers: HSTS (RFC 6797), X-Frame-Options, X-Content-Type-Options | Adopted | SecurityMiddleware + XFrameOptionsMiddleware in base.py; SECURE_HSTS_* in production.py |
| B.1 | Schema.org vocabulary | Planned | <StructuredData> component on story and profile pages |
| B.2 | JSON-LD 1.1 | Planned | <script type="application/ld+json"> on story and profile pages |
| B.3 | GeoJSON (RFC 7946) | Planned | New content type / route on apps/stories/views.py::StoryMapView |
| B.4 | EDTF (Extended Date/Time Format) | Planned | Derived temporal_coverage field on StoryDetailSerializer in apps/stories/serializers.py |
| B.5 | Content Security Policy (W3C CSP Level 2/3) | Planned | add_header Content-Security-Policy in nginx/nginx.conf |
| B.6 | API Rate Limiting (RFC 6585, 429) | Planned | DEFAULT_THROTTLE_CLASSES / DEFAULT_THROTTLE_RATES in config/settings/base.py; throttle_classes on auth views |
| B.7 | Argon2 Password Hashing (NIST SP 800-63B) | Planned | PASSWORD_HASHERS in config/settings/base.py; argon2-cffi in requirements/base.txt |
