NGINX (pronounced "engine-x") is a high-performance, open-source web server that also functions as:
- a reverse proxy
- a load balancer
- an HTTP cache
- a mail proxy (SMTP, POP3, IMAP)
It was originally designed by Igor Sysoev in 2004 to solve the C10k problem (handling 10,000+ simultaneous connections). Today, it powers a huge portion of the internet, including big sites like Netflix, Airbnb, and Dropbox.
As a web server, it:
- Serves static files (HTML, CSS, JS, images, videos) extremely fast.
- Handles concurrent requests efficiently using an event-driven, asynchronous architecture (unlike Apache's traditional process/thread-per-connection model).
As a reverse proxy, it acts as an intermediary between clients (users) and backend servers (e.g., Flask, Django, Node.js). Benefits:
- Hides backend server details (extra security).
- Distributes requests efficiently.
- Enables SSL termination (offloads HTTPS from backend apps).
As a load balancer, it distributes incoming requests across multiple backend servers, supporting strategies like the following (see the sketch below):
- Round Robin (default)
- Least Connections
- IP Hash (same client → same server)
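A minimal sketch of the non-default strategies; the backend addresses are placeholders, and only one strategy should be set per `upstream` block:

```nginx
upstream myapp {
    least_conn;               # or: ip_hash;  (pick one strategy per upstream block)
    server 192.168.1.10:8000;
    server 192.168.1.11:8000;
}
```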
As an HTTP cache, it:
- Caches static and dynamic responses to reduce load on backend servers.
- Makes frequently accessed content load faster for users.
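A minimal caching sketch, assuming an application server on 127.0.0.1:8000 as in the examples further down; the cache path, zone name, and size/expiry values are illustrative, and `proxy_cache_path` belongs in the `http` context:

```nginx
proxy_cache_path /var/cache/nginx levels=1:2 keys_zone=app_cache:10m max_size=1g inactive=60m;

server {
    listen 80;
    server_name example.com;

    location / {
        proxy_cache app_cache;            # use the cache zone defined above
        proxy_cache_valid 200 302 10m;    # cache successful responses for 10 minutes
        proxy_cache_valid 404 1m;         # cache errors only briefly
        proxy_pass http://127.0.0.1:8000;
    }
}
```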
On the security side, it:
- Supports SSL/TLS termination.
- Can enforce security headers (HSTS, CSP, X-Frame-Options); see the sketch below.
- Acts as a Web Application Firewall (WAF) when combined with modules like ModSecurity.
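The security headers listed above can be set inside a `server` or `location` block; the values shown here are common starting points rather than a definitive policy:

```nginx
add_header Strict-Transport-Security "max-age=31536000; includeSubDomains" always;   # HSTS
add_header Content-Security-Policy "default-src 'self'" always;                      # CSP (tune per site)
add_header X-Frame-Options "SAMEORIGIN" always;                                      # clickjacking protection
```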
Other notable features:
- HTTP/2, gRPC, and WebSocket support.
- Rate limiting to throttle abusive clients and blunt simple DDoS attacks (see the sketch below).
- URL rewriting and redirection.
- Serving multiple sites on one server (via virtual hosts/server blocks).
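A sketch of rate limiting, again assuming a backend on 127.0.0.1:8000; the zone name, rate, and burst values are arbitrary examples:

```nginx
limit_req_zone $binary_remote_addr zone=per_ip:10m rate=10r/s;   # defined in the http context

server {
    listen 80;
    server_name example.com;

    location / {
        limit_req zone=per_ip burst=20 nodelay;   # allow short bursts, reject sustained floods
        proxy_pass http://127.0.0.1:8000;
    }
}
```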
NGINX uses an event-driven, non-blocking architecture:
- One master process manages configuration and worker processes.
- Multiple worker processes handle client requests.
- Each worker can handle thousands of simultaneous connections using asynchronous I/O (no one-thread-per-connection overhead).
This makes NGINX extremely fast, lightweight, and scalable.
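This process model maps onto two top-level directives in nginx.conf; the values below are common defaults rather than tuning advice:

```nginx
worker_processes auto;          # master process spawns one worker per CPU core

events {
    worker_connections 1024;    # max simultaneous connections handled by each worker
}
```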
Example: serving a static website.

```nginx
server {
    listen 80;
    server_name example.com;

    root /var/www/html;      # directory containing the static files
    index index.html;        # default file to serve
}
```
Example: reverse proxying to a local application server.

```nginx
server {
    listen 80;
    server_name example.com;

    location / {
        proxy_pass http://127.0.0.1:8000;         # forward to Gunicorn/CherryPy
        proxy_set_header Host $host;              # preserve the original Host header
        proxy_set_header X-Real-IP $remote_addr;  # pass the client's real IP to the backend
    }
}
```
Example: load balancing across two backend servers (round robin by default).

```nginx
upstream myapp {
    server 192.168.1.10:8000;
    server 192.168.1.11:8000;
}

server {
    listen 80;
    server_name example.com;

    location / {
        proxy_pass http://myapp;   # requests are distributed across the upstream servers
    }
}
```
Example: SSL/TLS termination with Let's Encrypt certificates.

```nginx
server {
    listen 443 ssl;
    server_name example.com;

    ssl_certificate     /etc/letsencrypt/live/example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem;

    location / {
        proxy_pass http://127.0.0.1:8000;   # backend receives plain HTTP; NGINX handles TLS
    }
}
```
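A common companion sketch: a plain-HTTP server block that redirects all traffic to the HTTPS block above.

```nginx
server {
    listen 80;
    server_name example.com;
    return 301 https://$host$request_uri;   # permanent redirect to the HTTPS server block
}
```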
Strengths:
- Extremely fast and lightweight
- Handles thousands of concurrent connections
- Low memory footprint
- Easy reverse proxy & load balancing setup
- Strong community & wide adoption
NGINX is a powerful, versatile web server and reverse proxy that:
- Serves static files efficiently
- Acts as a load balancer and reverse proxy
- Provides SSL/TLS termination and caching
- Is widely used in modern web deployments
In real-world deployments, a common stack is: NGINX (front-end reverse proxy, SSL, caching) → Gunicorn/uWSGI/CherryPy (application server) → Flask/Django app → Database.