# Nginx (sudo-arshia/tips_and_tricks GitHub Wiki)
- The `nginx -h` command displays the available command-line options for the NGINX web server. When executed, it prints a brief help message listing the options that can be used with the NGINX executable.
```
# nginx -h
nginx version: nginx/1.18.0 (Ubuntu)
Usage: nginx [-?hvVtTq] [-s signal] [-c filename] [-p prefix] [-g directives]
Options:
  -?,-h         : this help
  -v            : show version and exit
  -V            : show version and configure options then exit
  -t            : test configuration and exit
  -T            : test configuration, dump it and exit
  -q            : suppress non-error messages during configuration testing
  -s signal     : send signal to a master process: stop, quit, reopen, reload
  -p prefix     : set prefix path (default: /usr/share/nginx/)
  -c filename   : set configuration file (default: /etc/nginx/nginx.conf)
  -g directives : set global directives out of configuration file
```
- The `nginx -t` command tests the syntax of the NGINX configuration file without actually starting the server. When executed, it checks the configuration file for syntax errors or other issues and reports whether the configuration is valid.
Here is an example output of running `nginx -t`:

```
# nginx -t
nginx: the configuration file /etc/nginx/nginx.conf syntax is ok
nginx: configuration file /etc/nginx/nginx.conf test is successful
```
If the configuration file is correct and contains no errors, the output will indicate that the test was successful. However, if there are any syntax errors or other issues with the configuration file, the command will display an error message specifying the problem that needs to be addressed.
This command is useful when making changes to NGINX's configuration file, allowing you to verify the correctness of the configuration before applying the changes and restarting the server. It helps prevent potential errors or misconfigurations from causing issues with the NGINX server.
NGINX settings can be configured using simple directives on a single line or using blocks with multiple lines. Here is an explanation of the two formats:
- Simple directives on one line:
  - Syntax: `<directive> <value>;`
  - Example: `server_name binaryville.local;`
  - Description: The directive and its corresponding value are specified on a single line, separated by a space, and terminated with a semicolon (`;`).
- Blocks with multiple lines:
  - Syntax: `<block-directive> { <directive> <value>; <directive> <value>; ... }`
  - Example:
    ```nginx
    server {
        listen 80;
        server_name binaryville.local;
        root /var/www/binaryville;
    }
    ```
  - Description: A block is created using a block directive (e.g., `server`, `location`) followed by an opening curly brace (`{`). Inside the block, each directive and its corresponding value are specified on separate lines. The block is closed with a closing curly brace (`}`).
Using blocks allows for more complex configuration with nested directives and provides a clear structure for organizing directives related to a specific context (e.g., server, location). Simple directives on one line are convenient for straightforward settings that don't require additional complexity.
Remember to follow the correct syntax and indentation when using blocks to ensure the configuration is valid and properly interpreted by NGINX.
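For illustration, here is a small fragment (assumed to sit inside the `http` context; the names follow the Binaryville example used throughout these notes) showing a simple directive and a nested block together:

```nginx
server {                              # block directive opens a context
    listen 80;                        # simple directive: one line, ends with ;
    server_name binaryville.local;

    location /images {                # blocks can nest inside other blocks
        autoindex on;
    }
}
```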
- The main configuration file includes other configuration files for virtual hosts from the `/etc/nginx/conf.d/` directory and the `/etc/nginx/sites-enabled/` directory.
```
# vim /etc/nginx/nginx.conf
```
The `include` directive is used in NGINX configuration files to include other files or directories. In this case, it is used twice:

- `include /etc/nginx/conf.d/*.conf;` includes all files with the `.conf` extension in the `/etc/nginx/conf.d/` directory. These files typically contain individual virtual host configurations.
- `include /etc/nginx/sites-enabled/*;` includes all files in the `/etc/nginx/sites-enabled/` directory. This directory is often used to store symbolic links to virtual host configuration files kept elsewhere (on Ubuntu, conventionally `/etc/nginx/sites-available/`). By using symbolic links, you can easily enable or disable virtual hosts by adding or removing the corresponding links in the `sites-enabled` directory.
```nginx
##
# Virtual Host Configs
##
include /etc/nginx/conf.d/*.conf;
include /etc/nginx/sites-enabled/*;
```
By using these include directives, the Nginx configuration is modular and can be easily organized and maintained by splitting virtual host configurations into separate files.
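The enable/disable symlink pattern can be sketched with a scratch directory; the paths below are throwaway placeholders, not your real NGINX tree:

```shell
# Create a scratch stand-in for the nginx config tree
tmp=$(mktemp -d)
mkdir -p "$tmp/sites-available" "$tmp/sites-enabled"

# A virtual host definition lives in sites-available
echo "server { listen 80; }" > "$tmp/sites-available/binaryville"

# Enabling the site = creating a symlink in sites-enabled
ln -s "$tmp/sites-available/binaryville" "$tmp/sites-enabled/binaryville"
ls -l "$tmp/sites-enabled"

# Disabling the site = removing the symlink (the real file is untouched)
rm "$tmp/sites-enabled/binaryville"
```

On a real server the same moves apply, followed by `nginx -t` and a reload.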
```
/var/log/nginx# ll
total 40
drwxr-xr-x  2 root     adm     4096 Jun 18 11:59 ./
drwxr-xr-x 11 root     syslog  4096 Jun 18 11:59 ../
-rw-r----- 1 www-data adm    26123 Jun 18 18:39 access.log
-rw-r----- 1 www-data adm       78 Jun 18 11:59 error.log
```
Here are one-line examples for the log files in the /var/log/nginx directory:
1. Access log (`access.log`) sample line:
   ```
   192.168.0.1 - - [18/Jun/2023:15:30:22 +0000] "GET /index.html HTTP/1.1" 200 1234 "-" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/91.0.4472.124 Safari/537.36"
   ```
2. Error log (`error.log`) sample line:
   ```
   2023/06/18 11:59:01 [error] 1234#1234: *1 some error message
   ```
Please note that the actual log lines may vary depending on the log format and specific log entries in your system. The samples provided above are meant to give you a general idea of what log lines in each file may look like.
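Because the combined access-log format is whitespace-delimited, simple tools like `awk` can pull fields out of it. In this hypothetical line, field 9 is the HTTP status code:

```shell
# Print the status code (field 9) from a sample combined-format log line
echo '192.168.0.1 - - [18/Jun/2023:15:30:22 +0000] "GET /index.html HTTP/1.1" 200 1234 "-" "Mozilla/5.0"' \
  | awk '{print $9}'
# prints: 200
```

In practice you would pipe the real log file (e.g., `awk '{print $9}' /var/log/nginx/access.log`) instead of a sample line.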
```
# cat /var/www/html/index.nginx-debian.html
```
The contents of the index.nginx-debian.html file in the /var/www/html directory are as follows:
```html
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
body {
width: 35em;
margin: 0 auto;
font-family: Tahoma, Verdana, Arial, sans-serif;
}
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>
<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>
<p><em>Thank you for using nginx.</em></p>
</body>
</html>
```

This is the default HTML page served by NGINX when you access the web server. It provides a welcome message and some basic information about NGINX. You can modify this file to customize the default page displayed by NGINX.
- Remove the default site configuration:
  - Command: `unlink /etc/nginx/sites-enabled/default`
- Verify the default configuration file:
  - Command: `ls -ltr /etc/nginx/sites-available`
- Edit the configuration file for the Binaryville site:
  - Command: `nano /etc/nginx/conf.d/binaryville.conf`
- Add the server block to the configuration file:
  ```nginx
  server {
      listen 80;
      root /var/www/binaryville;
  }
  ```
- Save and exit the file:
  - Command: Press `Ctrl + O` to save, then `Ctrl + X` to exit
- Test the configuration:
  - Command: `nginx -t`
- Reload the configuration:
  - Command: `systemctl reload nginx`
- Create the root directory for the Binaryville site:
  - Command: `mkdir -p /var/www/binaryville`
- Create an index.html file in the root directory:
  - Command: `echo "site coming soon" > /var/www/binaryville/index.html`
- Verify the site response by accessing it in a web browser.
- Open the configuration file for editing:
  - Command: `vim /etc/nginx/conf.d/binaryville.conf`
- Add the `default_server` parameter to the `listen` directive by changing `listen 80;` to `listen 80 default_server;`
- Add the `server_name` directive to specify the site names. Add the following line after the `root` directive: `server_name binaryville.local www.binaryville.local;`
- Add the `index` directive to specify the default file. Add the following line after the `server_name` directive: `index index.html;`

The resulting configuration:

```nginx
server {
    listen 80 default_server;
    server_name binaryville.local www.binaryville.local;
    root /var/www/binaryville;
    index index.html;
}
```
- Save and exit the file:
  - Command: Press `Esc`, then type `:wq` and press Enter
- Test the updated configuration:
  - Command: `nginx -t`
- Reload the configuration:
  - Command: `systemctl reload nginx`
- Test the site by requesting it using `curl`:
  - Command: `curl localhost`
- Add content files to the root directory for the site.
- Clone the GitHub repository:
  - Command: `git clone https://github.com/LinkedInLearning/learning-nginx-2492317.git`
- List the contents of the current directory:
  - Command: `ll`
- Change into the cloned repository directory:
  - Command: `cd learning-nginx-2492317/`
- List the contents of the current directory:
  - Command: `ll`
- Extract the contents of the "Binaryville_robot_website_LIL_107684.tgz" file to the destination directory:
  - Command: `tar xvf Binaryville_robot_website_LIL_107684.tgz --directory /var/www/binaryville/`
Please note that the provided instructions assume that you are in the appropriate working directory when executing the commands. Additionally, make sure you have the necessary permissions to perform the file extraction to the /var/www/binaryville/ directory.
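The `--directory` flag can be tried safely with a scratch archive first; every path below is a throwaway temp path, not the lesson's files:

```shell
# Build a small archive, then extract it into a separate destination
src=$(mktemp -d); dest=$(mktemp -d)
echo "hello" > "$src/index.html"
tar czf "$src/site.tgz" -C "$src" index.html

# --directory tells tar where to place the extracted files
tar xvf "$src/site.tgz" --directory "$dest"
cat "$dest/index.html"   # prints: hello
```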
The location directive in NGINX allows us to configure specific behavior for different requests based on their Uniform Resource Identifier (URI). It is defined as a block inside the server block and can be nested within other location blocks. This flexibility eliminates the need for creating additional servers. The location directive uses modifiers, prefix strings, and regular expressions for matching.
```nginx
location [modifier] location_definition {
    # Directives specific to this location
}
```

| Modifier | Description |
|---|---|
| (none) | Treated as a prefix match on the URI. |
| `=` | Matches the definition exactly as specified. |
| `~` | Case-sensitive matching using regular expressions. |
| `~*` | Case-insensitive matching using regular expressions. |
| `^~` | Disables regular expression matching, prioritizing the location. |
- Prefix strings: can be used with or without the `=` modifier.
- Regular expressions: require the `~` or `~*` modifier.
- Exact matches: NGINX considers exact matches (`=`) first.
- Prefix strings: locations with prefix strings are checked next.
- Regular expressions: regular expression locations are checked in the order they appear in the configuration, and the first match wins.
Let's look at an example using our test website:
```nginx
location /images/ {
    # Directives for serving images
}

location = /admin {
    # Directives for handling requests to the /admin URI
}

location ~* ^/user/[0-9]+ {
    # Directives for handling user-related requests with numeric IDs
}

location ^~ /downloads/ {
    # Directives for serving files in the downloads directory
}
```

In this example, the `/images/` location serves image files, the `/admin` location handles requests to the `/admin` URI exactly, the `/user/[0-9]+` location processes user-related requests with numeric IDs, and the `/downloads/` location serves files from the downloads directory.
To configure the locations for the root of the site, images, and error pages on our demo site, follow these steps:
- Connect to your sandbox server as the root user.
- Clone the exercise files to your server and navigate to the directory for this lesson: https://github.com/LinkedInLearning/learning-nginx-2492317/tree/main/Ch02/02_03-configure-locations-part-2
- Open the configuration file using the `view` command. Enable line numbers for better reference with the command `:set number`.
- Jump to line number 8, which contains the location directive for the root (`/`) of the site.
```nginx
server {
    listen 80;
    root /var/www/binaryville;

    location / {
        # First attempt to serve a request as file, then
        # as directory, then fall back to displaying a 404.
        try_files $uri $uri/ =404;
    }

    location /images {
        # Allow the contents of the /images folder to be listed
        autoindex on;
    }

    # specify the page to serve for 404 errors
    error_page 404 /404.html;

    # an exact match location for the 404 page
    location = /404.html {
        # only use this location for internal requests
        internal;
    }

    # specify the page to serve for 500 errors
    error_page 500 502 503 504 /50x.html;

    # an exact match location for the 50x page
    location = /50x.html {
        # only use this location for internal requests
        internal;
    }

    # a location to demonstrate 500 errors
    location /500 {
        fastcgi_pass unix:/this/will/fail;
    }
}
```
- Inside this location block, on line 11 you'll find the `try_files` directive. The `try_files` directive provides a list of files or directories for NGINX to search for, relative to the location. The first matching item in the list is processed. If none of the items match, the last item is used as the URI or error code.
  - Use the `$uri` variable to look for a file matching the processed URI.
  - Append the `$uri` variable followed by a `/` to test for matching directories.
  - Add `=404` to serve a 404 error if no file or directory matches the requested URI.
- Next, review the `images` location on line 14. The `autoindex` directive allows browsers to list the files in the images directory. By default, this behavior is disabled.
- Following the `images` location, there are additional directives and locations for custom error pages. Lines 20 and 29 configure `error_page` directives to display custom error pages instead of the default ones.
- On lines 23 and 32, you'll find location directives using the `=` modifier followed by the name of the error page to be displayed. This means an exact match is required to serve these locations.
- Inside each location block, there's an `internal` directive that instructs NGINX to process any redirects to the custom error pages as internal redirects. Using the exact-match modifier and internal redirects helps NGINX serve the custom error pages more efficiently.
- To test the 404 page, request a page that you know is not set up on the site. For the 500 page, you need to add a specific location to test it.
  - Find the `/500` location on line 38.
  - Inside the location, a misconfigured `fastcgi_pass` directive will cause a 500 error.
- Stop viewing the file by typing `:q`.
- Copy the updated configuration file to `/etc/nginx/conf.d`.
- Test the configuration using `nginx -t`. If the configuration is valid, load it with `systemctl reload nginx`.
- Use your browser to test the locations you just added:
  - Check the images location by browsing to `/images`, which should display the contents of the images directory due to the `autoindex` directive.
  - Verify the custom 404 page by visiting a location that is not defined on your site, such as `/missing`.
  - Test the custom 500 page by appending `/500` to the site's URI.
By following these steps, you can ensure that your location configurations are functioning as expected on your NGINX server.
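A common variation on the `try_files` pattern described above, shown here as a sketch: single-page applications often fall back to the app's entry point instead of returning a 404 (the `/index.html` path is an assumption about your site layout):

```nginx
location / {
    # Serve the file, then the directory, then hand everything
    # else to the front-end entry point instead of returning 404
    try_files $uri $uri/ /index.html;
}
```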
To configure custom logging in NGINX on a Linux system, follow these steps:
- Connect to your Linux server using SSH as the root user.
- Navigate to the directory containing the NGINX configuration files. In this example, we'll use the directory `/etc/nginx/conf.d/`.
- Open the configuration file for custom logging using a text editor. In this example, the file is named `custom.conf`.
- Add the following lines to the configuration file to define the log file paths.

The default logging configuration is in the main configuration file, `/etc/nginx/nginx.conf`, in the `http` block:
```nginx
http {
    ...
    access_log /var/log/nginx/access.log;
    error_log /var/log/nginx/error.log;
    ...
}
```

This file has two logging directives: `access_log` and `error_log`.

If NGINX is set up to serve multiple sites, all the requests for all the sites will be written to the same logs. This can become an issue if, for example, you need to find the access logs for one specific site. Per-site log files can instead be set in each server block:
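Beyond choosing separate log files, the format of each line can also be customized with the `log_format` directive. This sketch reproduces the stock combined-style layout under the assumed name `main`:

```nginx
http {
    # Define a named format; the variables are standard NGINX variables
    log_format main '$remote_addr - $remote_user [$time_local] '
                    '"$request" $status $body_bytes_sent '
                    '"$http_referer" "$http_user_agent"';

    # Reference the named format wherever access_log is used
    access_log /var/log/nginx/access.log main;
}
```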
```nginx
server {
    listen 80;
    root /var/www/binaryville;
    server_name binaryville.local www.binaryville.local;
    index index.html index.htm index.php;

    access_log /var/log/nginx/binaryville.local.access.log;
    error_log /var/log/nginx/binaryville.local.error.log;

    location / {
        # First attempt to serve a request as file, then
        # as directory, then fall back to displaying a 404.
        try_files $uri $uri/ =404;
    }

    location /images {
        # Allow the contents of the /images folder to be listed
        autoindex on;
        access_log /var/log/nginx/binaryville.local.images.access.log;
        error_log /var/log/nginx/binaryville.local.images.error.log;
    }

    # specify the page to serve for 404 errors
    error_page 404 /404.html;

    # an exact match location for the 404 page
    location = /404.html {
        # only use this location for internal requests
        internal;
    }

    # specify the page to serve for 500 errors
    error_page 500 502 503 504 /50x.html;

    # an exact match location for the 50x page
    location = /50x.html {
        # only use this location for internal requests
        internal;
    }

    # a location to demonstrate 500 errors
    location /500 {
        fastcgi_pass unix:/this/will/fail;
    }
}
```

Note: Replace the log paths shown above with the desired paths and filenames for your custom logs.
- Save the configuration file and exit the text editor.
- Validate the NGINX configuration for syntax errors by running `nginx -t`. If there are no syntax errors, proceed to the next step. Otherwise, review and fix any errors before continuing.
- Reload the NGINX configuration to apply the changes: `systemctl reload nginx`
- Test the custom logging configuration by sending requests to the server using `curl`. For example, you can send 10 requests to the default server with: `for i in {1..10}; do curl -s -o /dev/null http://localhost/; done`
- Verify that the log files have been created and contain the access logs, for example with `tail /var/log/nginx/binaryville.local.access.log`. You should see the access logs corresponding to the requests made in the previous step.
- Repeat the steps above to configure custom logging for specific server blocks or location blocks within the NGINX configuration file. Adjust the log file paths accordingly.
That's it! You have now configured custom logging in NGINX on your Linux system.
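Once per-site logs exist, quick summaries are easy. This sketch counts requests per status code from two hypothetical combined-format lines; in practice you would point `awk` at the real log file:

```shell
# Count requests per status code (field 9 of the combined format)
printf '%s\n' \
  '1.2.3.4 - - [18/Jun/2023:15:30:22 +0000] "GET / HTTP/1.1" 200 512 "-" "curl"' \
  '1.2.3.4 - - [18/Jun/2023:15:30:23 +0000] "GET /missing HTTP/1.1" 404 153 "-" "curl"' \
  | awk '{count[$9]++} END {for (s in count) print s, count[s]}'
```

For example, `awk '{count[$9]++} END {for (s in count) print s, count[s]}' /var/log/nginx/binaryville.local.access.log` would summarize the site's access log the same way.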
- Check for configuration errors using the `nginx -T` command. This command helps identify any syntax errors or typos in your NGINX configuration files. It provides information about the specific file and line number where an error occurs.
- Review the `root` and `server_name` directives in your configuration files for typos. Sometimes these typos are not flagged during configuration testing. Ensure that all directives are spelled correctly and match the intended values.
- Verify that NGINX is running by using the command `systemctl status nginx`. Its output confirms whether NGINX is currently running or not.
- If you have made changes to your site but they are not reflected, or if you can't access your site at all, reload the NGINX configuration with `systemctl reload nginx`. Check the site again to see if the changes are now applied.
- Ensure that the standard ports for web traffic (ports 80 and 443) are open. You can use commands like `lsof` and `netstat` to check the status of these ports. For example, with `lsof -i` you can see whether NGINX is listening on ports 80 and 443 by grepping for nginx in the output.
- Another way to confirm which ports NGINX is listening on is `netstat -plan` (after installing the `net-tools` package on Ubuntu). This command provides information about network connections, and by grepping for nginx you can obtain the process ID listening on a specific port.
- If the configuration is correct, NGINX is running, and the ports are open, but something is still not functioning correctly, check the NGINX logs. Use `tail -F` to follow the NGINX logs in real time as they are written to disk. Access various parts of your site and check whether the accesses are recorded. Look for 4xx errors, which might indicate problems such as incorrect file locations or incorrect directory permissions.
- If you're unable to resolve the issue, seek help from online resources like search engines or platforms like Stack Overflow. Research the specific error or problem you are encountering; there is a good chance someone else has already encountered and solved the same issue.
Remember, troubleshooting may take time initially, but with experience, you will become more efficient in resolving NGINX-related problems.
In this post, we will explore two powerful features of NGINX: reverse proxying and load balancing. Understanding how to configure NGINX as a reverse proxy or load balancer opens up opportunities for improved performance, scalability, and high availability in your web applications. We'll dive into the concepts, use cases, and configuration options to help you harness the full potential of NGINX.
- Reverse Proxy:
- A reverse proxy acts as an intermediary between clients and a single server on the back end.
- It simplifies complex tasks like SSL termination, logging, and content caching.
- NGINX's reverse proxy capabilities optimize response times by caching frequently requested content and compressing data.
- It ensures a seamless connection between clients and the server, enhancing performance and security.
- Load Balancer:
- A load balancer distributes incoming client requests across multiple servers on the back end.
- It enables high scalability and reliability by sharing the traffic load among multiple servers.
- NGINX's load balancing capabilities improve application availability, as requests can be seamlessly redirected to functioning servers if one goes down.
- Load balancers also support session persistence, ensuring that client requests are consistently routed to the same server, beneficial for maintaining login sessions and preserving user context.
- Reverse Proxy:
- Simplifies SSL termination, logging, and content caching.
- Enhances performance by caching frequently requested content and compressing data.
- Streamlines the integration of different technologies, allowing NGINX to interface with various back-end servers seamlessly.
- Load Balancer:
- Enables scalability and high availability by distributing client requests across multiple servers.
- Improves application reliability as traffic load is shared among multiple servers.
- Facilitates server maintenance and updates without disrupting the entire application or website.
- Supports session persistence, ensuring consistent user experience and seamless login sessions.
| Feature | Reverse Proxies | Load Balancing |
|---|---|---|
| Functionality | Acts as an intermediary between clients and a single server on the back end. | Distributes client requests across multiple servers on the back end. |
| Use Case | Ideal for scenarios with a single server on the back end. | Suitable for applications that require multiple servers to handle client requests. |
| SSL Termination | Simplifies SSL termination for secure communication with clients. | Can handle SSL termination for multiple servers, offloading the SSL processing workload. |
| Logging | Provides centralized logging for client-server interactions. | Offers consolidated logs for multiple servers, simplifying monitoring and troubleshooting. |
| Content Caching | Caches frequently requested content, reducing server load and improving response times. | Distributes client requests across servers, optimizing resource utilization and improving performance. |
| Compression | Compresses data to reduce network bandwidth usage and improve transfer speeds. | Compresses data before distributing it to back-end servers, minimizing network overhead. |
| Scalability | Limited to a single server, making it less suitable for high-traffic or large-scale applications. | Enables horizontal scaling by distributing traffic across multiple servers, enhancing application scalability. |
| High Availability | Relies on a single server, so if it goes down, the application becomes unavailable. | Increases application availability as requests can be redirected to functioning servers if one fails. |
| Session Persistence | Doesn't inherently provide session persistence; each request may be directed to a different server. | Supports session persistence, ensuring requests from a client are routed to the same server, maintaining user context. |
| Maintenance Flexibility | No inherent maintenance flexibility; taking the server offline affects the entire application. | Offers maintenance flexibility as one server can be taken offline without impacting the entire application. |
| Configuration Complexity | Typically simpler to configure as it involves a single server. | Requires additional configuration to define server groups and load balancing algorithms. |
| Example NGINX Directive | `proxy_pass` directive for configuring reverse proxy behavior. | `upstream` directive for defining server groups and load balancing algorithms. |
Consider an e-commerce application that experiences high traffic and requires multiple servers to handle client requests. NGINX can serve as a load balancer in this scenario.
- Incoming requests are evenly distributed across multiple back-end servers.
- If one server becomes unavailable, NGINX intelligently redirects traffic to functioning servers.
- This setup ensures high availability, seamless user experience, and efficient resource utilization.
To configure NGINX as a reverse proxy, follow these steps:
- Open the NGINX configuration file (`/etc/nginx/nginx.conf` or a file in `/etc/nginx/conf.d/`) using a text editor.
- Locate the `http` block within the configuration file. This is where HTTP-related directives are defined.
- Inside the `http` block, define an `upstream` block to group together the servers that NGINX will connect to. For example:

```nginx
upstream app_server_7001 {
    server 127.0.0.1:7001;
}
```

In this example, the upstream block is named `app_server_7001`, and it includes a single server at `127.0.0.1:7001`. You can define multiple servers within an upstream block for load balancing.
- Outside the `upstream` block, define a `server` block to configure NGINX's behavior. Within the `server` block, specify the desired `listen` directive (e.g., `listen 80;`) and other server-specific directives.
- Within the `server` block, define a `location` block to specify the path that NGINX will proxy to the upstream server. For example:

```nginx
location /proxy {
    proxy_pass http://app_server_7001/;
}
```

In this example, requests to the `/proxy` path will be proxied to the `app_server_7001` upstream.
- Save the NGINX configuration file and exit the text editor.
- Test the NGINX configuration for syntax errors by running `nginx -t`. If the configuration is valid, reload NGINX to apply the changes using `systemctl reload nginx`.
With NGINX configured as a reverse proxy, requests to the specified path will be forwarded to the upstream server. You can customize the configuration further to meet your specific needs.
Example:

```
$ curl http://localhost/proxy
```
This command sends a request to the NGINX server, which proxies the request to the upstream server defined in the app_server_7001 upstream block. The response from the upstream server is then returned to the client.
By following these steps, you can configure NGINX as a reverse proxy to distribute incoming requests to one or more backend servers, providing load balancing and improved performance for your applications.
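Reverse-proxy setups commonly also forward client details to the backend. This is an optional sketch (not part of the steps above) using standard `proxy_set_header` directives:

```nginx
location /proxy {
    proxy_pass http://app_server_7001/;

    # Pass the original Host header and client address to the backend,
    # which otherwise only sees the proxy's address
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
}
```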
NGINX can be configured as a load balancer to distribute incoming requests across multiple upstream servers. NGINX provides several load balancing methods, including round-robin, least connections, IP hashing, and weighted load balancing. Let's explore how to configure NGINX as a load balancer with these methods.
You can find more information on load balancing and the directives used in the NGINX documentation. The following example configuration demonstrates all four methods:
```nginx
upstream round_robin {
    server 127.0.0.1:7001;
    server 127.0.0.1:7002;
    server 127.0.0.1:7003;
}

upstream least_connections {
    least_conn;
    server 127.0.0.1:7001;
    server 127.0.0.1:7002;
    server 127.0.0.1:7003;
}

upstream ip_hash {
    ip_hash;
    server 127.0.0.1:7001;
    server 127.0.0.1:7002;
    server 127.0.0.1:7003;
}

upstream weighted {
    server 127.0.0.1:7001 weight=2;
    server 127.0.0.1:7002;
    server 127.0.0.1:7003;
}

server {
    listen 80;
    root /var/www/binaryville;
    server_name binaryville.local www.binaryville.local;
    index index.html index.htm index.php;

    access_log /var/log/nginx/binaryville.local.access.log;
    error_log /var/log/nginx/binaryville.local.error.log;

    location / {
        try_files $uri $uri/ =404;
    }

    location /images {
        autoindex on;
        access_log /var/log/nginx/binaryville.local.images.access.log;
        error_log /var/log/nginx/binaryville.local.images.error.log;
    }

    error_page 404 /404.html;
    location = /404.html {
        internal;
    }

    error_page 500 502 503 504 /50x.html;
    location = /50x.html {
        internal;
    }

    location /500 {
        fastcgi_pass unix:/this/will/fail;
    }

    location /round-robin {
        proxy_pass http://round_robin/;
        access_log /var/log/nginx/binaryville.local.round-robin.access.log;
        error_log /var/log/nginx/binaryville.local.round-robin.error.log;
    }

    location /least-connections {
        proxy_pass http://least_connections/;
        access_log /var/log/nginx/binaryville.local.least-connections.access.log;
        error_log /var/log/nginx/binaryville.local.least-connections.error.log;
    }

    location /ip-hash {
        proxy_pass http://ip_hash/;
        access_log /var/log/nginx/binaryville.local.ip-hash.access.log;
        error_log /var/log/nginx/binaryville.local.ip-hash.error.log;
    }

    location /weighted {
        proxy_pass http://weighted/;
        access_log /var/log/nginx/binaryville.local.weighted.access.log;
        error_log /var/log/nginx/binaryville.local.weighted.error.log;
    }
}
```

In this example, we configure NGINX as a load balancer with four upstream blocks representing different load balancing methods. Here's a breakdown of the configuration:
- The `round_robin` upstream block uses the default round-robin load balancing method. Requests are distributed evenly across the servers `127.0.0.1:7001`, `127.0.0.1:7002`, and `127.0.0.1:7003`.
- The `least_connections` upstream block uses the least-connections load balancing method (`least_conn`). Requests are forwarded to the server with the fewest active connections.
- The `ip_hash` upstream block uses the IP-hashing load balancing method (`ip_hash`). Requests from the same IP address are consistently routed to the same upstream server.
- The `weighted` upstream block uses the round-robin method but assigns a weight of 2 to `127.0.0.1:7001`, indicating that it should receive twice as many requests as the other servers.
Each upstream block is associated with a specific location in the NGINX server block. Requests to these locations are proxied to the corresponding upstream servers.
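Each `server` line in an upstream block also accepts optional parameters that affect failover behavior. The values below are illustrative, not taken from the lesson:

```nginx
upstream app_servers {
    # Mark this server as failed after 3 errors; retry it after 30 seconds
    server 127.0.0.1:7001 max_fails=3 fail_timeout=30s;
    server 127.0.0.1:7002;
    # Only receives traffic when the other servers are unavailable
    server 127.0.0.1:7003 backup;
}
```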
To use this configuration, follow these steps:
- Copy the provided NGINX configuration to the appropriate location on your NGINX server (e.g., `/etc/nginx/conf.d/binaryville.conf`).
- Verify the configuration by running `nginx -t`.
- If the configuration test is successful, reload NGINX to apply the changes using `systemctl reload nginx` (or the equivalent command for your system).
- Start the app servers listening on the specified ports (e.g., `python3 start_app_servers.py`).
Access the load-balanced locations in a browser:
-
/round-robin- Requests will be distributed among the servers in a round-robin fashion. -
/least-connections- Requests will be routed to the server with the least number of active connections. -
/ip-hash- Requests from the same IP address will consistently go to the same server. -
/weighted- Requests will be distributed among the servers, with the weighted server receiving a higher proportion of requests.
-
By configuring NGINX as a load balancer, you can distribute incoming traffic among multiple upstream servers, improving the performance and availability of your application.
In this guide, we'll explore how to secure your site's content by limiting access using the NGINX HTTP Access module. This module includes allow and deny directives, which can regulate who can view specific content and who is denied access.
- NGINX installed on your server
- SSH access to the server
- Basic knowledge of Linux commands and IP address notation
The allow and deny directives in NGINX are rules you can use in the http, server, and location contexts of your configuration. They specify patterns to match incoming requests using either:
- The keyword `all`, which matches all IP addresses
- A specific IP address, like `192.168.1.1`
- A Classless Inter-Domain Routing (CIDR) block, like `192.168.0.0/16`
These directives are provided by the `ngx_http_access_module` in NGINX.
An example of the allow and deny directives in use might be the following configuration. This set of rules:
- Allows requests from the IP address `192.168.1.1`
- Allows requests from all private IP addresses using CIDR notation
- Denies requests from all other IP addresses
The code would look like:

```nginx
location /admin {
    allow 192.168.1.1;
    allow 10.0.0.0/8;
    allow 172.16.0.0/12;
    allow 192.168.0.0/16;
    deny all;
}
```

These rules are applied in the order they're defined, so `deny` rules should usually be placed after `allow` rules.
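To make the ordering concrete, here is a small Python model (not NGINX source code, just an illustration) of first-match-wins evaluation over rules like the ones above; note that, as in NGINX's access module, a request matching no rule is allowed:

```python
import ipaddress

# A simplified model of how ngx_http_access_module evaluates allow/deny
# directives: in order, stopping at the first rule the client IP matches.
RULES = [
    ("allow", "192.168.1.1/32"),
    ("allow", "10.0.0.0/8"),
    ("allow", "172.16.0.0/12"),
    ("allow", "192.168.0.0/16"),
    ("deny", "0.0.0.0/0"),  # equivalent of "deny all"
]

def allowed(client_ip: str) -> bool:
    addr = ipaddress.ip_address(client_ip)
    for action, network in RULES:
        if addr in ipaddress.ip_network(network):
            return action == "allow"
    return True  # no matching rule: access is granted

print(allowed("192.168.1.1"))   # True  (matches the first allow rule)
print(allowed("203.0.113.9"))   # False (falls through to "deny all")
```

Swapping `deny all` to the top of the list would reject every request before any `allow` rule is consulted, which is why ordering matters.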
Connect to your server using SSH:

```shell
ssh root@your_server_ip
```

To add an `allow` directive for your IP address:
1. Obtain your public IP address by running the following command on your local machine:

   ```shell
   curl https://api.ipify.org
   ```

   This command will output your public IP address.

2. Copy the returned IP address to your clipboard.

3. On your server, add an `allow` directive in the `location` block in your site's NGINX configuration. Replace `your_ip_address` with the IP address you obtained earlier:

   ```nginx
   location /admin {
       allow your_ip_address;
       deny all;
   }
   ```

4. Save and exit the file.

5. Test the new configuration:

   ```shell
   nginx -t
   ```

   If the configuration is valid, you should see a message like `nginx: configuration file /etc/nginx/nginx.conf test is successful`.

6. If there are no errors, reload NGINX to apply the changes:

   ```shell
   systemctl reload nginx
   ```
To test your new configuration, try to access the `/admin` path on your website. Because your IP address has been allowed, you should be able to reach this otherwise restricted path.
By using the `allow` and `deny` directives, you can control access to your website, enhancing security by limiting access to specific IP addresses or IP ranges. The ordering of these directives is crucial: `deny all` should usually be placed last, ensuring that the permitted IPs retain access.
In this guide, we'll explore how to set up password authentication for your website using NGINX's HTTP Auth Basic module. This module allows you to set up simple username and password prompts to protect specific parts of your site.
- NGINX installed on your server
- SSH access to the server
- Basic knowledge of Linux commands
The `auth_basic` and `auth_basic_user_file` directives from NGINX's HTTP Auth Basic module are used to configure password authentication:

- `auth_basic` can be used to prompt for a password or to disable authentication altogether using the keyword `off`.
- `auth_basic_user_file` specifies the file that contains the credentials. The file format is `username:encrypted_password`.
These directives can be used in the http, server, and location contexts, which means you can restrict access to your entire website or only specific portions.
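For example, moving the same pair of directives up into the `server` context protects every location on that server (a sketch; the server name is a placeholder):

```nginx
server {
    listen 80;
    server_name example.com;

    # Inherited by every location below, so the whole
    # site now requires credentials.
    auth_basic 'Restricted area';
    auth_basic_user_file /etc/nginx/passwords;

    location / {
        # ...
    }
}
```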
The `htpasswd` program is used to create and manage the password file. It's part of the `apache2-utils` package.

To install it, log into your server and run:

```shell
sudo apt install -y apache2-utils
```

Once `htpasswd` is installed, you can create a password file using the following command:

```shell
htpasswd -c /etc/nginx/passwords username
```

Replace `/etc/nginx/passwords` with your desired file path and `username` with your chosen username. The command will prompt you to enter and confirm a password for the user. Note that the `-c` flag creates a new file, so omit it when adding further users to an existing file.
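If `apache2-utils` isn't available, an equivalent entry can be appended with `openssl` (a sketch: the username `alice`, the password, and the local file path are placeholders; NGINX understands the APR1 hash format):

```shell
# Generate an APR1 (MD5-based) hash, the same scheme htpasswd uses by default,
# and append a username:hash entry to the password file.
HASH="$(openssl passwd -apr1 's3cret')"
printf 'alice:%s\n' "$HASH" >> ./passwords   # use /etc/nginx/passwords on the server
cat ./passwords
```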
After creating the password file, you can set up password authentication in your NGINX configuration. Here's an example of how to do this for an `/admin` location:

```nginx
location /admin {
    auth_basic 'Please authenticate...';
    auth_basic_user_file /etc/nginx/passwords;
    # the rest of your location block configuration...
}
```

In this example, any request to `/admin` will prompt users to enter a username and password. The message 'Please authenticate...' will be shown in the prompt.
After adding the directives, save your configuration file and exit.
Always test your configuration after making changes to ensure there are no syntax errors:
```shell
nginx -t
```

If the test is successful, reload the NGINX service to apply the changes:

```shell
systemctl reload nginx
```

Now, if you try to access the `/admin` section of your website, you should be prompted for a username and password. Enter the credentials you created earlier. If the username and password are correct, you should gain access.
By leveraging the auth_basic and auth_basic_user_file directives in NGINX, you can secure your website or specific sections of it by requiring users to authenticate. Keep in mind that basic authentication should be part of a more robust security setup and should be used in conjunction with other security measures.
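One way to layer it with the IP-based rules covered earlier is to apply both checks to the same location; with `satisfy all` (the default), a client must come from an allowed address *and* present valid credentials:

```nginx
location /admin {
    satisfy all;                       # default: all access-phase checks must pass
    allow 192.168.0.0/16;
    deny all;
    auth_basic 'Please authenticate...';
    auth_basic_user_file /etc/nginx/passwords;
}
```

Changing `satisfy all` to `satisfy any` would instead grant access when either check passes, e.g. letting office IPs in without a password while everyone else must authenticate.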