Step 1: Understanding NGINX Basics and Terminology
1. What is NGINX?
NGINX (pronounced “engine-x”) is web server software. It listens for requests from clients (usually browsers) and serves them web content. It can:
- Serve static files (HTML, CSS, images, JavaScript) quickly.
- Act as a reverse proxy to forward requests to backend servers such as Python apps.
- Load balance requests among multiple backend servers to distribute traffic.
2. What is a Web Server?
A web server is software that listens for HTTP (or HTTPS) requests from clients and responds with web content (web pages, images, data).
Examples: NGINX, Apache HTTP Server.
3. What is a Proxy Server?
A proxy server acts as an intermediary between clients and other servers. Instead of clients connecting directly to backend servers, they connect to the proxy which forwards requests.
4. What is a Reverse Proxy?
A reverse proxy is a proxy server that sits in front of one or more backend servers and forwards client requests to them.
Why use a reverse proxy?
- Security: Backend servers are hidden from the public.
- SSL termination: Proxy handles HTTPS, backend can use HTTP.
- Load balancing: Distributes traffic to multiple servers.
- Caching: Can cache responses to reduce load.
5. What is Load Balancing?
Load balancing is distributing incoming network traffic across multiple backend servers, so no single server becomes overwhelmed.
NGINX supports different load balancing methods:
- Round-robin (default): Distributes requests evenly.
- Least connections: Sends request to server with fewest active connections.
- IP hash: Routes requests from same client IP to the same backend.
6. What is a Server Block?
In NGINX, a server block is a section of configuration that defines how NGINX should handle requests for a specific domain or IP and port.
It’s like a “virtual host” in Apache. Multiple server blocks can exist to serve different sites/domains on one server.
Example:
```nginx
server {
    listen 80;
    server_name example.com;
    # other config here
}
```
7. What is a Location Block?
Inside a server block, location blocks specify how to handle requests for particular URIs (paths).
Example:
```nginx
location /static/ {
    root /var/www/html;
}
```
Means: Requests starting with /static/ will serve files from /var/www/html/static/.
8. What is the proxy_pass directive?
proxy_pass tells NGINX to forward the request to another server (backend).
Example:
```nginx
location / {
    proxy_pass http://127.0.0.1:8000;
}
```
Means: Forward all requests to the server running on localhost port 8000.
9. What is the index directive?
index tells NGINX which file to serve if a directory is requested.
Example:
```nginx
location / {
    root /var/www/html;
    index index.html index.htm;
}
```
Means: When a user requests /, serve /var/www/html/index.html or /var/www/html/index.htm if it exists.
10. How does NGINX process requests?
When NGINX receives an HTTP request, it:
- Finds the server block matching the Host header (domain).
- Within that server block, finds the best matching location block for the request URI.
- Executes directives inside that location block (serve static file, proxy, etc).
- Sends the response back to the client.
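The steps above can be sketched with a hypothetical config hosting two sites on one NGINX instance (the domain names and paths here are illustrative only):

```nginx
# Hypothetical example: two server blocks on one NGINX instance.
server {
    listen 80;
    server_name blog.example.com;      # chosen when the Host header is blog.example.com
    location / {
        root /var/www/blog;            # serve static files
    }
}

server {
    listen 80;
    server_name shop.example.com;      # chosen when the Host header is shop.example.com
    location /images/ {
        root /var/www/shop;            # longest matching prefix wins
    }
    location / {
        proxy_pass http://127.0.0.1:8000;   # everything else goes to a backend
    }
}
```

A request for `http://shop.example.com/images/logo.png` first selects the second server block (by Host header), then the `/images/` location (longest matching prefix), and serves `/var/www/shop/images/logo.png`.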
Summary of Key Terms
| Term | Explanation |
|---|---|
| NGINX | Web server software to serve websites and proxy requests |
| Web Server | Software that listens to HTTP requests and serves content |
| Proxy Server | Intermediary server forwarding requests between clients and backend servers |
| Reverse Proxy | Proxy in front of backend servers to forward requests |
| Load Balancer | Distributes incoming traffic among multiple backend servers |
| Server Block | Config block for a domain/IP and port (virtual host) |
| Location Block | Config block inside server block that matches request URI |
| proxy_pass | Directive to forward requests to backend servers |
| index | Directive specifying default files to serve in directories |
Step 2: Installing NGINX and Basic Setup
1. Installing NGINX
Before you can use NGINX, you need to install it on your system. The process differs slightly depending on your operating system.
Installation on Ubuntu/Debian
Update package lists
Run this command to make sure you get the latest version info for packages:
sudo apt update
Install NGINX
This command installs NGINX:
sudo apt install nginx
Check NGINX version
To verify installation and check the version:
nginx -v
You should see output like:
nginx version: nginx/1.18.0 (or higher depending on your system)
Installation on CentOS/RHEL
Enable EPEL repository
On older CentOS/RHEL releases, NGINX is available via the EPEL repository (newer RHEL-based systems ship it in their default repositories):
sudo yum install epel-release
Install NGINX
sudo yum install nginx
Check version
nginx -v
Installation on Windows
NGINX is primarily designed for Linux/Unix systems. On Windows, it’s recommended to use Windows Subsystem for Linux (WSL) or run NGINX inside a Docker container.
2. Starting and Stopping NGINX
NGINX runs as a service on your system.
Starting NGINX
sudo systemctl start nginx
This command starts the NGINX service so it begins listening for HTTP requests.
Enabling NGINX to start on boot
sudo systemctl enable nginx
This sets NGINX to automatically start when the system boots.
Stopping NGINX
sudo systemctl stop nginx
Stops the NGINX service.
Restarting NGINX
If you make changes to the config, restart NGINX to apply them:
sudo systemctl restart nginx
Reloading NGINX
Alternatively, to reload config without dropping connections:
sudo systemctl reload nginx
3. Checking NGINX Status
You can check if NGINX is running with:
sudo systemctl status nginx
Expected output shows if it’s active (running) or inactive.
4. NGINX Default File Structure
NGINX uses several directories and files for configuration and website files.
Key directories:
| Path | Description |
|---|---|
| /etc/nginx/nginx.conf | Main NGINX configuration file |
| /etc/nginx/sites-available/ | Directory to store site config files (Ubuntu/Debian) |
| /etc/nginx/sites-enabled/ | Symlinks to active site configs |
| /var/www/html/ | Default folder to serve website files |
| /var/log/nginx/ | Logs (access and error logs) |
What are these for?
- nginx.conf: This is the master configuration file. It can include other configs (like your site configs).
- sites-available: Store your site-specific config files here.
- sites-enabled: Symlinks (shortcuts) to enable the configs from sites-available.
- /var/www/html: The default document root from which NGINX serves static files.
5. Testing NGINX Installation
Once installed and running:
- Open a browser.
- Enter your server’s IP address, or `http://localhost` if you are on the local machine.
- You should see the default NGINX welcome page, which means NGINX is working!
6. Basic NGINX Configuration Files Overview
Open /etc/nginx/nginx.conf to view the main config. It looks like this (simplified):
```nginx
user www-data;
worker_processes auto;
pid /run/nginx.pid;

events {
    worker_connections 768;
}

http {
    include /etc/nginx/mime.types;
    default_type application/octet-stream;
    sendfile on;
    keepalive_timeout 65;
    include /etc/nginx/conf.d/*.conf;
    include /etc/nginx/sites-enabled/*;
}
```
- http block: Contains HTTP-related configuration.
- include lines: Pull in other config files, such as everything in your `sites-enabled` folder.
7. Enabling/Disabling Sites
When you create a new site config in /etc/nginx/sites-available/my_site, enable it by creating a symlink:
sudo ln -s /etc/nginx/sites-available/my_site /etc/nginx/sites-enabled/
To disable a site, remove the symlink:
sudo rm /etc/nginx/sites-enabled/my_site
Then reload NGINX:
```bash
sudo nginx -t                 # test config syntax
sudo systemctl reload nginx
```
Step 3: Serving Static Files Through NGINX
1. What Are Static Files?
Static files are files that do not change dynamically and are directly sent to the client (browser) as they are stored on disk.
Common static files include:
- HTML files (index.html)
- CSS files (style.css)
- JavaScript files (app.js)
- Images (PNG, JPG, SVG)
- Fonts
2. Why Serve Static Files with NGINX?
- Fast and efficient: NGINX is optimized to serve static files quickly with minimal resource use.
- Offloading backend: Serving static files directly with NGINX means your application servers (Python, Node, etc.) don’t have to serve them.
- Better scalability and performance.
3. Document Root (root) and Static Files
The document root is the directory where your static files live.
By default, this is /var/www/html on most Linux systems with NGINX.
Example:
- `/var/www/html/index.html`
- `/var/www/html/static/style.css`
4. NGINX Configuration to Serve Static Files
Basic server block to serve static files
Example config snippet for serving static files:
```nginx
server {
    listen 80;
    server_name example.com;

    location / {
        root /var/www/html;   # Document root
        index index.html index.htm;
    }
}
```
What this does:
- `listen 80;` — NGINX listens for HTTP requests on port 80.
- `server_name example.com;` — The server responds to requests for example.com.
- `location / { ... }` — For any URL starting with `/`, serve files from `/var/www/html`.
- `root /var/www/html;` — The directory containing the static files.
- `index index.html index.htm;` — If a directory is requested (like `/`), serve `index.html` or `index.htm` if found.
Example scenario:
- User visits `http://example.com/` → NGINX serves the `/var/www/html/index.html` file.
- User visits `http://example.com/static/style.css` → NGINX tries to serve the `/var/www/html/static/style.css` file.
5. Understanding the root Directive
root specifies the full directory path where NGINX looks for files.
For example, if your config has:
```nginx
location /static/ {
    root /var/www/html;
}
```
and user requests /static/image.png, NGINX will look for the file:
/var/www/html/static/image.png
6. Understanding the alias Directive (Advanced)
Sometimes you want to serve static files from a directory different from the URL path.
Example:
location /static/ {
alias /home/user/my_static_files/;
}
Here:
- Request: `/static/image.png`
- Served file: `/home/user/my_static_files/image.png`
Difference:
- `root` appends the location URI to the directory path.
- `alias` replaces the location URI with the directory path.
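A side-by-side sketch of the difference, using the paths from the examples above (only one of these location blocks would exist in a real config):

```nginx
# Variant 1: root appends the URI to the directory path.
location /static/ {
    root /var/www/html;                 # /static/image.png -> /var/www/html/static/image.png
}

# Variant 2: alias replaces the matched prefix with the directory path.
location /static/ {
    alias /home/user/my_static_files/;  # /static/image.png -> /home/user/my_static_files/image.png
}
```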
7. How to Set Up Your Static Files Folder
Place your website files in /var/www/html or your preferred folder.
Example:
sudo mkdir -p /var/www/html/static
sudo nano /var/www/html/index.html
sudo nano /var/www/html/static/style.css
Put your static content inside.
8. Permissions
Ensure that the NGINX process has read permission on the static files and folders.
Usually, NGINX runs under user www-data or nginx.
To set permissions:
sudo chown -R www-data:www-data /var/www/html
sudo chmod -R 755 /var/www/html
9. Testing Your Static Site
After putting files in place and configuring NGINX, test your config:
sudo nginx -t
Reload NGINX to apply:
sudo systemctl reload nginx
Visit your site in a browser:
- `http://example.com/` → loads index.html.
- `http://example.com/static/style.css` → loads the CSS file.
10. Example Complete Minimal Config for Static Site
```nginx
server {
    listen 80;
    server_name example.com;

    location / {
        root /var/www/html;
        index index.html index.htm;
        try_files $uri $uri/ =404;
    }

    location /static/ {
        root /var/www/html;
    }
}
```
`try_files $uri $uri/ =404;` — checks whether the requested file or directory exists; otherwise returns 404.
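A common variant (assumption: a single-page application whose client-side routes should all fall back to the main page rather than return 404) gives `try_files` a file as the final fallback:

```nginx
location / {
    root /var/www/html;
    # Try the exact file, then a directory, then fall back to index.html.
    try_files $uri $uri/ /index.html;
}
```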
Summary
| Term | Explanation |
|---|---|
| Static Files | Files like HTML, CSS, JS that do not change dynamically |
| Document Root | Directory from which NGINX serves files |
| root | Directive that specifies directory for static files |
| alias | Directive for mapping URL path to different directory |
| index | Default file to serve when directory is requested |
| location | Block defining how to serve requests to certain URL paths |
Step 4: Reverse Proxy with NGINX
1. What is a Reverse Proxy?
A reverse proxy is a server that sits in front of one or more backend servers (like your Python app) and forwards client requests to them.
- Clients only communicate with the reverse proxy.
- The reverse proxy forwards the request to backend servers.
- Responses from backend servers go back through the proxy to clients.
Visual Example:
Client Browser <---> NGINX Reverse Proxy <---> Backend Python App (gunicorn)
2. Why Use a Reverse Proxy?
- Security: Backend servers are hidden from the public internet. Only the proxy is exposed.
- SSL Termination: Proxy can handle HTTPS connections, while backend apps use plain HTTP.
- Load Balancing: Proxy can distribute traffic across multiple backend servers.
- Caching: Proxy can cache static or dynamic content.
- Compression: Proxy can compress responses to save bandwidth.
- Logging: Centralized access logs for requests.
- Performance: NGINX is optimized to handle many connections efficiently.
3. How Does NGINX Act as a Reverse Proxy?
NGINX listens on a public port (e.g., 80) and forwards matching requests to a backend server defined by proxy_pass.
4. Key NGINX Directives in Reverse Proxying
proxy_pass
This tells NGINX where to forward the request.
Example:
proxy_pass http://127.0.0.1:8000;
Means: Forward requests to the server running on localhost port 8000.
Proxy Headers
When proxying, it’s important to forward certain headers so the backend knows:
- The original host (Host header).
- The real client IP address.
- The protocol used (HTTP/HTTPS).
Common headers to set:
```nginx
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
```
What do these headers mean?
- `Host`: The domain the client requested (e.g., example.com). Useful for virtual hosting.
- `X-Real-IP`: The client’s real IP address.
- `X-Forwarded-For`: The chain of IP addresses the request passed through (helps track proxies).
- `X-Forwarded-Proto`: The original protocol used (http or https).
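To make the headers concrete, here is a small plain-Python sketch (not part of NGINX, and simpler than any real framework’s request API) of how a backend might recover the original client IP from them:

```python
# Illustrative only: a backend behind the proxy sees the proxy's address
# as its peer, so it must read the forwarded headers to find the client.

def client_ip(headers, remote_addr):
    """Return the best guess at the original client IP.

    headers: dict of headers as set by the NGINX config above
    remote_addr: the peer the backend actually sees (the proxy itself)
    """
    if "X-Real-IP" in headers:
        return headers["X-Real-IP"]
    if "X-Forwarded-For" in headers:
        # Comma-separated chain; the first entry is the original client.
        return headers["X-Forwarded-For"].split(",")[0].strip()
    return remote_addr

print(client_ip({"X-Forwarded-For": "203.0.113.7, 10.0.0.2"}, "127.0.0.1"))
# -> 203.0.113.7
```

Real frameworks (Flask, Django, FastAPI) expose the same data through their request objects.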
5. Basic Reverse Proxy Configuration Example
```nginx
server {
    listen 80;
    server_name example.com;

    location / {
        proxy_pass http://127.0.0.1:8000;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}
```
Explanation:
- NGINX listens on port 80 for requests to `example.com`.
- Any request under `/` is forwarded to the backend at `127.0.0.1:8000`.
- It sets the proxy headers to keep important client info intact.
6. How It Works in Practice
Say you have a Python Flask app running on localhost port 8000:
gunicorn --bind 127.0.0.1:8000 myapp:app
- Clients never talk to Gunicorn directly. They access `http://example.com`.
- NGINX accepts these requests and forwards them to Gunicorn, then sends Gunicorn’s response back to the client.
7. Why Not Just Use Gunicorn Directly?
Gunicorn is a great Python WSGI server but not designed for handling slow clients, HTTPS, or static files efficiently. NGINX excels at handling client connections, buffering slow clients, SSL termination, and serving static files. NGINX offloads these tasks from Gunicorn, improving reliability and performance.
8. Useful NGINX Proxy Settings (Optional but Recommended)
Timeouts
```nginx
proxy_connect_timeout 60s;
proxy_send_timeout 60s;
proxy_read_timeout 60s;
send_timeout 60s;
```
Buffering
```nginx
proxy_buffering on;
proxy_buffers 8 16k;
proxy_buffer_size 32k;
```
These control how NGINX handles slow clients and buffering backend responses.
9. Testing Reverse Proxy Setup
- Start your backend Python app on port 8000.
- Configure NGINX as above.
- Test NGINX config syntax:
sudo nginx -t
- Reload NGINX:
sudo systemctl reload nginx
- Open a browser and visit `http://example.com` (or your server IP). You should see your Python app’s response.
10. Summary
| Directive | Purpose |
|---|---|
| proxy_pass | Forward requests to backend server URL |
| proxy_set_header | Set headers forwarded to backend for correct info |
| listen | Define which port NGINX listens on |
| server_name | Domain or IP for which server block responds |
| location | URL path or pattern to match for proxying |
Step 5: Serving a Single Python Website Using NGINX
Overview
NGINX does not run Python code directly. Instead, you:
- Run your Python web app (Flask, Django, FastAPI, etc.) with a WSGI server like Gunicorn or uWSGI.
- Configure NGINX as a reverse proxy to forward HTTP requests to your app server.
Why use Gunicorn (or uWSGI)?
- Python web frameworks provide a WSGI interface.
- Gunicorn is a lightweight WSGI HTTP server optimized to run Python apps.
- Gunicorn handles running your app, managing worker processes, etc.
- NGINX manages client connections and forwards requests.
Step-by-step to serve a Python app with NGINX + Gunicorn
1. Create a Simple Python Web App
Let’s use Flask as an example.
Create a file called myapp.py:
```python
from flask import Flask

app = Flask(__name__)

@app.route('/')
def hello():
    return "Hello from Flask!"
```
2. Set Up Python Environment
Install Python (3.6+ recommended), then install Flask and Gunicorn:
pip install flask gunicorn
3. Run the App Locally with Gunicorn
Test running your app directly with Gunicorn on port 8000:
gunicorn --bind 127.0.0.1:8000 myapp:app
- `--bind 127.0.0.1:8000` tells Gunicorn to listen on localhost port 8000.
- `myapp:app` means: import `app` from the `myapp` Python file.
Open browser to http://127.0.0.1:8000 — you should see:
Hello from Flask!
4. Configure NGINX to Reverse Proxy to Gunicorn
Create a new NGINX site config file, e.g., /etc/nginx/sites-available/myapp:
```nginx
server {
    listen 80;
    server_name example.com;

    location / {
        proxy_pass http://127.0.0.1:8000;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}
```
Explanation:
- NGINX listens on port 80 for requests to `example.com`.
- It forwards all requests (`location /`) to Gunicorn on localhost port 8000.
- Proxy headers keep client info intact.
5. Enable the Site
Create a symlink to enable the site:
sudo ln -s /etc/nginx/sites-available/myapp /etc/nginx/sites-enabled/
6. Test and Reload NGINX Configuration
Check NGINX config syntax:
sudo nginx -t
If OK, reload NGINX:
sudo systemctl reload nginx
7. Adjust Firewall (if needed)
Make sure HTTP port 80 is open:
```bash
sudo ufw allow 'Nginx Full'
sudo ufw enable
```
8. Visit Your Website
Open a browser to http://example.com or your server IP.
You should see:
Hello from Flask!
Step 6: Serving Multiple Python Websites Using NGINX
Overview
If you have multiple Python web applications running on the same server, each typically listens on a different port via Gunicorn (or another WSGI server).
You want to:
- Serve these multiple apps using different domain names or subdomains.
- Use NGINX to route incoming requests to the appropriate backend app based on the domain.
Why serve multiple apps on one server?
- Save infrastructure costs by hosting multiple sites on one machine.
- Easily manage all apps via one reverse proxy (NGINX).
- Separate apps by domain or subdomain for isolation.
1. Run Each Python App on a Different Port
Example:
- App 1 on port 8000 (`app1.example.com`)
- App 2 on port 8001 (`app2.example.com`)
Run each app with Gunicorn:
```bash
# For app1
gunicorn --bind 127.0.0.1:8000 app1:app

# For app2
gunicorn --bind 127.0.0.1:8001 app2:app
```
2. NGINX Configuration for Multiple Apps
Create separate server blocks for each domain in NGINX config, e.g., /etc/nginx/sites-available/multiapp:
```nginx
# Server block for app1
server {
    listen 80;
    server_name app1.example.com;

    location / {
        proxy_pass http://127.0.0.1:8000;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}

# Server block for app2
server {
    listen 80;
    server_name app2.example.com;

    location / {
        proxy_pass http://127.0.0.1:8001;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}
```
3. Explanation:
- Each server block listens on port 80 but for a different domain name (`server_name`).
- Requests to `app1.example.com` go to the backend on `127.0.0.1:8000`.
- Requests to `app2.example.com` go to the backend on `127.0.0.1:8001`.
- Proxy headers keep important request info intact.
4. Enable the Site and Reload NGINX
If the config is in sites-available/multiapp:
```bash
sudo ln -s /etc/nginx/sites-available/multiapp /etc/nginx/sites-enabled/
sudo nginx -t                 # test config
sudo systemctl reload nginx
```
5. DNS Setup
Make sure both app1.example.com and app2.example.com point to your server IP address in DNS records.
6. Optional: Serve Multiple Apps on Different URL Paths (Not Domains)
Instead of different domains, you can serve multiple apps on different URL prefixes:
```nginx
server {
    listen 80;
    server_name example.com;

    location /app1/ {
        proxy_pass http://127.0.0.1:8000/;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }

    location /app2/ {
        proxy_pass http://127.0.0.1:8001/;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}
```
Note: This requires your Python apps to handle being served on sub-paths properly (URL prefixes).
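One detail worth knowing for path-based routing: the trailing slash in `proxy_pass` changes the URI the backend sees. A sketch of both variants:

```nginx
# With a trailing slash, NGINX replaces the matched /app1/ prefix:
location /app1/ {
    proxy_pass http://127.0.0.1:8000/;   # /app1/users -> backend receives /users
}

# Without the trailing slash, the original URI is forwarded unchanged:
# location /app1/ {
#     proxy_pass http://127.0.0.1:8000;  # /app1/users -> backend receives /app1/users
# }
```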
7. Running Multiple Gunicorn Services
Run each app with Gunicorn on different ports, or better, run them as systemd services with distinct service files.
Example for app1:
gunicorn --bind 127.0.0.1:8000 app1:app
For app2:
gunicorn --bind 127.0.0.1:8001 app2:app
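A minimal systemd unit sketch for app1 (assumptions: the app lives in `/srv/app1`, Gunicorn is installed system-wide, and the service runs as `www-data`; adjust paths and user to your setup). Save as e.g. `/etc/systemd/system/app1.service`:

```ini
[Unit]
Description=Gunicorn service for app1
After=network.target

[Service]
User=www-data
Group=www-data
WorkingDirectory=/srv/app1
ExecStart=/usr/bin/gunicorn --bind 127.0.0.1:8000 app1:app
Restart=on-failure

[Install]
WantedBy=multi-user.target
```

Then enable it with `sudo systemctl enable --now app1`, and create a copy bound to port 8001 for app2.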
8. Summary
| Concept | Description |
|---|---|
| Multiple Apps | Python apps run on different ports |
| Server Blocks | NGINX routes based on domain to different apps |
| Proxy Pass | Forward requests to correct app backend |
| DNS Setup | Domains/subdomains must point to your server |
| URL Path Routing | Alternative: route by URL path prefixes |
Step 7: NGINX as a Load Balancer
1. What is Load Balancing?
Load balancing means distributing incoming network or application traffic across multiple backend servers so no single server gets overwhelmed.
Benefits:
- Better performance — distributes requests evenly.
- High availability — if one server fails, traffic goes to others.
- Scalability — add more servers as demand grows.
2. How NGINX Load Balancer Works
NGINX listens for client requests and forwards them to one of the backend servers configured in an upstream block.
It supports several load balancing methods:
| Method | Description |
|---|---|
| round-robin | Default method, cycles through servers sequentially |
| least_conn | Sends to server with least active connections |
| ip_hash | Requests from same client IP always go to same server |
3. Basic Load Balancing Configuration
Step 1: Define an Upstream Group
In your NGINX config (e.g., /etc/nginx/nginx.conf or a site config), define backend servers:
```nginx
upstream backend_servers {
    server 127.0.0.1:8000;
    server 127.0.0.1:8001;
    server 127.0.0.1:8002;
}
```
This groups 3 backend servers running on different ports.
Step 2: Use Upstream in Server Block
```nginx
server {
    listen 80;
    server_name example.com;

    location / {
        proxy_pass http://backend_servers;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}
```
4. Explanation:
- Requests to `example.com` will be forwarded to one of the backend servers in the `backend_servers` group.
- By default, NGINX sends requests round-robin to each server.
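A toy Python model makes the routing behaviour concrete. This is an illustration only; NGINX’s real implementation (smooth weighted round-robin, its own IP-hashing scheme) differs in detail:

```python
# Toy model of two NGINX balancing methods.
import hashlib
import itertools

servers = ["127.0.0.1:8000", "127.0.0.1:8001", "127.0.0.1:8002"]

# Round-robin: each request goes to the next server in order.
rr = itertools.cycle(servers)
picks = [next(rr) for _ in range(4)]
print(picks)  # cycles through the servers and wraps around

# IP hash: the same client IP always maps to the same server.
def ip_hash(client_ip):
    digest = hashlib.md5(client_ip.encode()).hexdigest()
    return servers[int(digest, 16) % len(servers)]

print(ip_hash("203.0.113.7") == ip_hash("203.0.113.7"))  # True: sticky
```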
5. Load Balancing Methods
Round-robin (default)
Each request sent to the next server in order.
```nginx
upstream backend_servers {
    server 127.0.0.1:8000;
    server 127.0.0.1:8001;
}
```
Least Connections
Send requests to the server with the fewest active connections.
```nginx
upstream backend_servers {
    least_conn;
    server 127.0.0.1:8000;
    server 127.0.0.1:8001;
}
```
IP Hash
Requests from the same client IP always go to the same server (useful for session persistence).
```nginx
upstream backend_servers {
    ip_hash;
    server 127.0.0.1:8000;
    server 127.0.0.1:8001;
}
```
6. Health Checks and Failover (Basic)
NGINX Open Source does not support active health checks, but it performs passive checks: when requests to a backend fail, that server is temporarily marked unavailable.
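The passive behaviour is tuned per server with the `max_fails` and `fail_timeout` parameters: after `max_fails` failed attempts within `fail_timeout`, the server is skipped for the `fail_timeout` period. A sketch:

```nginx
upstream backend_servers {
    server 127.0.0.1:8000 max_fails=3 fail_timeout=30s;
    server 127.0.0.1:8001 max_fails=3 fail_timeout=30s;
}
```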
You can configure weights and backup servers:
```nginx
upstream backend_servers {
    server 127.0.0.1:8000 weight=3;
    server 127.0.0.1:8001;
    server 127.0.0.1:8002 backup;
}
```
- `weight=3` means the first server gets 3x more traffic.
- `backup` means the server is used only if the others fail.
7. Summary of Key Directives
| Directive | Purpose |
|---|---|
| upstream | Define a group of backend servers |
| server | Define backend server IP and port |
| proxy_pass | Forward requests to an upstream group or single backend |
| least_conn | Load balancing by least active connections |
| ip_hash | Load balancing by client IP hash |
| weight | Adjust traffic share among backend servers |
| backup | Define a backup server used if others fail |
8. Example Full Config
```nginx
upstream backend_servers {
    least_conn;
    server 127.0.0.1:8000;
    server 127.0.0.1:8001;
    server 127.0.0.1:8002;
}

server {
    listen 80;
    server_name example.com;

    location / {
        proxy_pass http://backend_servers;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}
```
9. Testing Your Load Balancer
- Run 2 or more Python apps on ports 8000, 8001, 8002 (each returning different text so you can identify the server).
- Configure NGINX as above.
- Reload NGINX.
- Open browser multiple times or use curl:
curl http://example.com
You should see responses cycling through different backends based on your load balancing method.
10. Final Notes
- For production-grade health checks, consider using NGINX Plus or external monitoring.
- You can combine load balancing with caching, SSL termination, and other NGINX features.
- This setup improves fault tolerance and scalability of your Python web apps.
Step 8: Setting Up SSL (HTTPS) in NGINX
1. What is SSL / TLS?
SSL (Secure Sockets Layer), now technically TLS (Transport Layer Security), is a security protocol that encrypts data exchanged between a client (browser) and a server.
In simple words: SSL ensures that data like passwords, cookies, and form inputs cannot be read or modified by attackers.
Why SSL is Important
- Encryption — protects sensitive data
- Authentication — proves the server is genuine
- Data integrity — prevents tampering
- SEO benefit — Google favors HTTPS
- Browser warnings — modern browsers warn users on HTTP sites
2. How HTTPS Works (High-Level)
- Client requests `https://example.com`
- Server sends its SSL certificate
- Client verifies the certificate
- A secure encrypted channel is established
- Data flows securely using HTTPS
3. Types of SSL Certificates
| Type | Description |
|---|---|
| Self-Signed | For testing, not trusted by browsers |
| CA-Signed | Issued by trusted Certificate Authorities |
| Let’s Encrypt | Free, automated, widely used |
| Wildcard | Covers all subdomains (*.example.com) |
For production: Let’s Encrypt (recommended)
4. Prerequisites
Before SSL setup:
- Domain name (`example.com`) pointing to your server IP
- NGINX installed and running
- Ports 80 and 443 open
Check NGINX:
nginx -v
5. Install Certbot (Let’s Encrypt Tool)
Ubuntu / Debian
```bash
sudo apt update
sudo apt install certbot python3-certbot-nginx -y
```
CentOS / RHEL
sudo yum install certbot python3-certbot-nginx -y
6. Obtain SSL Certificate (Automatic Method)
Run:
sudo certbot --nginx
You’ll be asked:
- Email address
- Agree to terms
- Select domains (`example.com`, `www.example.com`)
- Redirect HTTP → HTTPS? (Choose YES)
Certbot will:
- Generate SSL certificates
- Update NGINX config automatically
- Enable HTTPS
7. Manual NGINX SSL Configuration (Important for Understanding)
Certificate Files Location
Usually stored at:
/etc/letsencrypt/live/example.com/
Files:
- `fullchain.pem` → the certificate chain
- `privkey.pem` → the private key
Basic HTTPS Server Block
```nginx
server {
    listen 443 ssl;
    server_name example.com www.example.com;

    ssl_certificate /etc/letsencrypt/live/example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem;

    location / {
        root /var/www/html;
        index index.html;
    }
}
```
8. Redirect HTTP → HTTPS (Very Important)
```nginx
server {
    listen 80;
    server_name example.com www.example.com;
    return 301 https://$host$request_uri;
}
```
✔ Ensures all traffic uses HTTPS
9. SSL Configuration Best Practices
Strong SSL Settings
```nginx
ssl_protocols TLSv1.2 TLSv1.3;
ssl_prefer_server_ciphers on;
ssl_ciphers HIGH:!aNULL:!MD5;
ssl_session_cache shared:SSL:10m;
ssl_session_timeout 10m;
```
Enable HTTP/2 (Performance Boost)
```nginx
listen 443 ssl http2;
```
(On NGINX 1.25.1 and later, the `http2` parameter on `listen` is deprecated in favor of a separate `http2 on;` directive.)
Security Headers (Recommended)
```nginx
add_header Strict-Transport-Security "max-age=31536000; includeSubDomains" always;
add_header X-Frame-Options DENY;
add_header X-Content-Type-Options nosniff;
```
10. SSL with Reverse Proxy / Load Balancer
SSL Termination at NGINX
NGINX handles SSL and forwards traffic to backend servers over HTTP.
```nginx
server {
    listen 443 ssl;
    server_name example.com;

    ssl_certificate /etc/letsencrypt/live/example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem;

    location / {
        proxy_pass http://backend_servers;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-Proto https;
        proxy_set_header X-Real-IP $remote_addr;
    }
}
```
Backend servers don’t need SSL; NGINX acts as the SSL termination point.
11. Test SSL Configuration
Test NGINX Syntax
sudo nginx -t
Reload NGINX
sudo systemctl reload nginx
Browser Test
Open:
https://example.com
Look for 🔒 lock icon
12. Auto-Renew SSL Certificates
Let’s Encrypt certificates expire every 90 days.
Test Renewal
sudo certbot renew --dry-run
Auto Renewal (Cron / Systemd)
Usually already set by Certbot:
sudo systemctl status certbot.timer
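If no timer or cron job was installed, a manual crontab entry is a common fallback (a sketch; the times here are arbitrary and worth staggering across machines). Add it via `sudo crontab -e`:

```
0 3,15 * * * certbot renew --quiet
```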