Use NGINX as a Reverse Proxy
What is a Reverse Proxy?
A reverse proxy is a server that sits between internal applications and external clients, forwarding client requests to the appropriate server. While many common applications, such as Node.js, are able to function as servers on their own, NGINX has a number of advanced load balancing, security, and acceleration features that most specialized applications lack. Using NGINX as a reverse proxy enables you to add these features to any application.
This guide uses a simple Node.js app to demonstrate how to configure NGINX as a reverse proxy.
What Are The Benefits Of A Reverse Proxy?
Reverse proxy servers are able to support a number of use-cases. Some of the benefits of using a reverse proxy include:
- SSL Offloading or inspection
- Server load balancing
- Port forwarding
- Caching
- L7 filtering and routing
What Are The Benefits Of Using NGINX As Reverse Proxy?
Some common uses of NGINX as a reverse proxy include load balancing to maximize server capacity and speed, caching commonly requested content, and acting as an additional layer of security.
Install NGINX
These steps install NGINX Mainline on Ubuntu from NGINX Inc’s official repository. For other distributions, see the NGINX admin guide. For information on configuring NGINX for production environments, see our Getting Started with NGINX series.
Open /etc/apt/sources.list in a text editor and add the following line to the bottom. Replace CODENAME in this example with the codename of your Ubuntu release. For example, for Ubuntu 18.04, named Bionic Beaver, insert bionic in place of CODENAME below:

File: /etc/apt/sources.list

deb http://nginx.org/packages/mainline/ubuntu/ CODENAME nginx
Import the repository's package signing key and add it to apt:

sudo wget http://nginx.org/keys/nginx_signing.key
sudo apt-key add nginx_signing.key
Install NGINX:
sudo apt update
sudo apt install nginx
Ensure NGINX is running and enabled to start automatically on reboot:
sudo systemctl start nginx
sudo systemctl enable nginx
Create an Example App
Install Node.js
Though there are a number of options available to install Node.js, we recommend using NVM with the following steps:
Install the Node Version Manager (NVM) for Node.js. This program helps you manage different Node.js versions on a single system.
curl -o- https://raw.githubusercontent.com/nvm-sh/nvm/v0.35.3/install.sh | bash
To start using nvm in the same terminal, run the following commands:

export NVM_DIR="$([ -z "${XDG_CONFIG_HOME-}" ] && printf %s "${HOME}/.nvm" || printf %s "${XDG_CONFIG_HOME}/nvm")"
[ -s "$NVM_DIR/nvm.sh" ] && \. "$NVM_DIR/nvm.sh"
Verify that you have access to NVM by printing its current version.
nvm --version
You should see output similar to the following:
0.35.3
Install Node.js:
Note: As of writing this guide, the latest LTS version of Node.js is v12.16.2. Update this command with the version of Node.js you would like to install.

nvm install 12.16.2
Use NVM to run your preferred version of Node.js.
nvm use 12.16.2
Your output will resemble the following:
Now using node v12.16.2 (npm v6.14.4)
Configure the App
Create a directory for the example app:
mkdir nodeapp && cd nodeapp
Initialize a Node.js app in the directory:
npm init
Accept all defaults when prompted.
Install Express.js:
npm install --save express
Use a text editor to create app.js and add the following content:

File: app.js

const express = require('express')
const app = express()

app.get('/', (req, res) => res.send('Hello World!'))

app.listen(3000, () => console.log('Node.js app listening on port 3000.'))
Run the app:
node app.js
In a separate terminal window, use curl to verify that the app is running on localhost:

curl localhost:3000

Hello World!
Configure NGINX
At this point, you could configure Node.js to serve the example app on your Linode's public IP address, which would expose the app to the internet. Instead, this section configures NGINX to forward all requests from the public IP address to the server already listening on localhost.
Basic Configuration for an NGINX Reverse Proxy
Create a configuration file for the app in /etc/nginx/conf.d/. Replace example.com in this example with your app's domain or public IP address:

File: /etc/nginx/conf.d/nodeapp.conf

server {
    listen 80;
    listen [::]:80;
    server_name example.com;

    location / {
        proxy_pass http://localhost:3000/;
    }
}
The proxy_pass directive is what makes this configuration a reverse proxy. It specifies that all requests which match the location block (in this case the root / path) should be forwarded to port 3000 on localhost, where the Node.js app is running.

Disable or delete the default Welcome to NGINX page:
sudo mv /etc/nginx/conf.d/default.conf /etc/nginx/conf.d/default.conf.disabled
Test the configuration:
sudo nginx -t
If no errors are reported, reload the new configuration:
sudo nginx -s reload
In a browser, navigate to your Linode’s public IP address. You should see the “Hello World!” message displayed.
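One detail of this configuration worth knowing: the trailing slash in proxy_pass http://localhost:3000/ controls how the request URI is rewritten when the location prefix is longer than /. A sketch of the difference, using a hypothetical /app/ prefix for illustration:

```nginx
# With a URI part (the trailing slash), the matched location prefix is
# replaced: a request for /app/status is proxied as /status.
location /app/ {
    proxy_pass http://localhost:3000/;
}

# Without a URI part, the original URI is passed through unchanged:
# a request for /app/status is proxied as /app/status.
location /app/ {
    proxy_pass http://localhost:3000;
}
```

For the root location / used in this guide the two forms behave the same, but the distinction matters as soon as you proxy a sub-path.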
Advanced Options
For a simple app, the proxy_pass directive is sufficient. However, more complex apps may need additional directives. For example, Node.js is often used for apps that require many real-time interactions. To accommodate them, disable NGINX's buffering feature:

File: /etc/nginx/conf.d/nodeapp.conf

location / {
    proxy_pass http://localhost:3000/;
    proxy_buffering off;
}
You can also modify or add the headers that are forwarded along with the proxied requests with proxy_set_header:

File: /etc/nginx/conf.d/nodeapp.conf

location / {
    proxy_pass http://localhost:3000/;
    proxy_set_header X-Real-IP $remote_addr;
}
This configuration uses the built-in $remote_addr variable to send the IP address of the original client to the proxied server.
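On the application side, the proxied Node.js app can read these headers to recover the original client address. Below is a minimal sketch; the clientIp helper is our own illustration, not part of Express or NGINX, and the headers should only be trusted when the app is reachable exclusively through the proxy:

```javascript
// Recover the original client IP from reverse-proxy headers.
// Only trust these headers when the app sits behind NGINX and is
// not directly reachable from the internet.
function clientIp(headers, socketAddress) {
  // X-Forwarded-For can hold a comma-separated chain of addresses;
  // the first entry is the original client.
  const xff = headers['x-forwarded-for'];
  if (xff) {
    return xff.split(',')[0].trim();
  }
  // X-Real-IP holds a single address, as set by proxy_set_header above.
  const realIp = headers['x-real-ip'];
  if (realIp) {
    return realIp;
  }
  // No proxy headers: fall back to the TCP peer address.
  return socketAddress;
}

console.log(clientIp({ 'x-real-ip': '23.67.28.33' }, '127.0.0.1'));
// → 23.67.28.33
```

In the Express app above, this could be called as clientIp(req.headers, req.socket.remoteAddress).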
NGINX Reverse Proxy Configuration Options
NGINX provides a number of proxy directives and headers that are commonly used when serving content over HTTPS or proxying WebSocket connections. Here are a few recommended settings with typical parameters:
Directive | Parameter
---|---
proxy_pass | http://127.0.0.1:3000
proxy_http_version | 1.1
proxy_cache_bypass | $http_upgrade
proxy_set_header Upgrade | $http_upgrade
proxy_set_header Connection | "upgrade"
proxy_set_header Host | $host
proxy_set_header X-Real-IP | $remote_addr
proxy_set_header X-Forwarded-For | $proxy_add_x_forwarded_for
proxy_set_header X-Forwarded-Proto | $scheme
proxy_set_header X-Forwarded-Host | $host
proxy_set_header X-Forwarded-Port | $server_port
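Put together, a WebSocket-ready location block using the values from this table might look like the following sketch (adjust the upstream address to match your app):

```nginx
location / {
    proxy_pass http://127.0.0.1:3000;
    proxy_http_version 1.1;

    # Bypass the cache for WebSocket upgrade requests.
    proxy_cache_bypass $http_upgrade;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "upgrade";

    # Pass the original host and client information to the backend.
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Forwarded-Proto $scheme;
    proxy_set_header X-Forwarded-Host $host;
    proxy_set_header X-Forwarded-Port $server_port;
}
```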
The following is an explanation of what each setting does:

- proxy_http_version: Defaults to HTTP/1.0. Set it to 1.1 for features such as keepalive connections and WebSockets.
- proxy_cache_bypass $http_upgrade: Defines conditions under which the response is not taken from the cache; here, WebSocket upgrade requests bypass it.
- proxy_set_header Upgrade and Connection: Required headers if you are using WebSockets.
- proxy_set_header Host $host: Preferred over proxy_set_header Host $proxy_host, as you don't need to explicitly define proxy_host; it is accounted for by default. $host contains the hostname from the request line or, failing that, from the Host header field.
- proxy_set_header X-Real-IP $remote_addr: Sends the visitor's IP address to the proxied server.
- proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for: A comma-separated list of the addresses of every client and proxy the request passed through.
- proxy_set_header X-Forwarded-Proto $scheme: Tells the proxied server whether the original request used HTTP or HTTPS.
- proxy_set_header X-Forwarded-Host $host: The host the client originally requested.
- proxy_set_header X-Forwarded-Port $server_port: The port the client originally requested.
Nginx Forward Header For Reverse Proxy
Usually, the forwarding headers that a proxied backend receives look something like this:

X-Forwarded-For: 33.14.57.33, 12.26.13.54
X-Real-IP: 23.67.28.33
X-Forwarded-Host: linode.com
X-Forwarded-Proto: https
With X-Forwarded-For, each proxy appends the client address it saw to the header. However, to use X-Forwarded-For safely, the backend has to hard-code the proxy IP addresses it trusts, which may not be a good solution in some cases. A better option is to use the standardized Forwarded header in NGINX.
Forwarded in NGINX
The way Forwarded changes this is by allowing a proxy to send an obfuscated identifier in place of the real client address for identity management. To append to an X-Forwarded-For list we would use the built-in $proxy_add_x_forwarded_for variable, but NGINX has no equivalent variable for Forwarded, so we need to create a map object that enables its usage.
To do so add the following to your NGINX configuration file:
File: /etc/nginx/conf.d/nodeapp.conf

map $remote_addr $forwarded_proxy {
    # To send IPv4 addresses
    ~^[0-9.]+$ "for=$remote_addr";

    # Quote and bracket IPv6 addresses
    ~^[0-9A-Fa-f:.]+$ "for=\"[$remote_addr]\"";

    # RFC syntax; find more information at https://tools.ietf.org/html/rfc7239
    default "for=unknown";
}

map $http_forwarded $proxy_add_forwarded {
    # Check that the incoming header is valid, then append to it
    "~^(,[ \\t]*)*([!#$%&'*+.^_`|~0-9A-Za-z-]+=([!#$%&'*+.^_`|~0-9A-Za-z-]+|\"([\\t \\x21\\x23-\\x5B\\x5D-\\x7E\\x80-\\xFF]|\\\\[\\t \\x21-\\x7E\\x80-\\xFF])*\"))?(;([!#$%&'*+.^_`|~0-9A-Za-z-]+=([!#$%&'*+.^_`|~0-9A-Za-z-]+|\"([\\t \\x21\\x23-\\x5B\\x5D-\\x7E\\x80-\\xFF]|\\\\[\\t \\x21-\\x7E\\x80-\\xFF])*\"))?)*([ \\t]*,([ \\t]*([!#$%&'*+.^_`|~0-9A-Za-z-]+=([!#$%&'*+.^_`|~0-9A-Za-z-]+|\"([\\t \\x21\\x23-\\x5B\\x5D-\\x7E\\x80-\\xFF]|\\\\[\\t \\x21-\\x7E\\x80-\\xFF])*\"))?(;([!#$%&'*+.^_`|~0-9A-Za-z-]+=([!#$%&'*+.^_`|~0-9A-Za-z-]+|\"([\\t \\x21\\x23-\\x5B\\x5D-\\x7E\\x80-\\xFF]|\\\\[\\t \\x21-\\x7E\\x80-\\xFF])*\"))?)*)?)*$" "$http_forwarded, $forwarded_proxy";

    # Otherwise, replace it
    default "$forwarded_proxy";
}
Now, add a proxy_set_header directive alongside your proxy_pass directive to enable Forwarded. Add the following line:

proxy_set_header Forwarded $proxy_add_forwarded;
Check for invalid headers

The new configuration uses a regex to validate any incoming Forwarded header before appending to it; invalid headers are replaced rather than extended.
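On the receiving end, the backend can parse the Forwarded header into its per-hop parameters. The following is a simplified sketch of our own; it handles the common key=value and quoted-value cases but not the full RFC 7239 grammar that the NGINX regex above validates:

```javascript
// Parse an RFC 7239 Forwarded header value into one object per proxy hop.
// Example input: 'for=192.0.2.60;proto=http, for="[2001:db8::1]"'
function parseForwarded(value) {
  // Each comma-separated element describes one hop in the proxy chain.
  return value.split(',').map((element) => {
    const params = {};
    for (const pair of element.split(';')) {
      const idx = pair.indexOf('=');
      if (idx === -1) continue;
      const key = pair.slice(0, idx).trim().toLowerCase();
      let val = pair.slice(idx + 1).trim();
      // Strip surrounding quotes from quoted values like "[2001:db8::1]".
      if (val.startsWith('"') && val.endsWith('"')) {
        val = val.slice(1, -1);
      }
      params[key] = val;
    }
    return params;
  });
}

console.log(parseForwarded('for=192.0.2.60;proto=http'));
// → [ { for: '192.0.2.60', proto: 'http' } ]
```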
NGINX Reverse Proxy Buffers
When you use an NGINX reverse proxy, you risk degrading application performance, since you are adding another layer between clients and the server. NGINX's buffering capabilities are used to reduce this impact. Proxy servers affect both backend performance and client-to-proxy connections, and buffering lets you adjust and optimize for both.

Buffering directives can be set in the http, server, or location context. These directives are:

- proxy_buffering: Enabled by default. NGINX reads the response from the proxied server as quickly as possible and buffers it, rather than tying up the backend at the client's pace.
- proxy_buffer_size: The size of the buffer used for the first part of the response (the headers) from the backend server.
- proxy_busy_buffers_size: The maximum total size of buffers that can be busy sending data to the client while the response is not yet fully read.
- proxy_buffers: The number and size of buffers for a single connection. The more buffers you have, the more information you can buffer.
- proxy_max_temp_file_size: The maximum size of the temporary file on disk allowed per request.
- proxy_temp_file_write_size: The amount of data NGINX writes to a temporary file at one time.
- proxy_temp_path: The path where NGINX stores temporary files.
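A sketch of how these directives fit into the location block; the sizes below are illustrative values, not recommendations for every workload:

```nginx
location / {
    proxy_pass http://localhost:3000/;

    proxy_buffering on;
    proxy_buffer_size 4k;            # buffer for the response headers
    proxy_buffers 8 16k;             # 8 buffers of 16k each for the body
    proxy_busy_buffers_size 32k;     # max buffers busy sending to the client
    proxy_max_temp_file_size 1024m;  # cap on disk spillover per request
    proxy_temp_file_write_size 32k;  # data written to a temp file at a time
}
```

Note that proxy_busy_buffers_size must be at least as large as a single buffer and smaller than the total buffer space minus one buffer, or NGINX rejects the configuration.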
Configure HTTPS with Certbot
One advantage of a reverse proxy is that it is easy to set up HTTPS using a TLS certificate. Certbot is a tool that allows you to quickly obtain free certificates from Let's Encrypt. This guide uses Certbot on Ubuntu 18.04, but the official site maintains comprehensive installation and usage instructions for all major distros.
Follow these steps to get a certificate via Certbot. Certbot will automatically update your NGINX configuration files to use the new certificate:
Install the Certbot and web server-specific packages, then run Certbot:
sudo apt update
sudo apt install certbot python3-certbot-nginx
sudo certbot --nginx
Certbot will ask for information about the site. The responses will be saved as part of the certificate:
# sudo certbot --nginx
Saving debug log to /var/log/letsencrypt/letsencrypt.log
Plugins selected: Authenticator nginx, Installer nginx

Which names would you like to activate HTTPS for?
-------------------------------------------------------------------------------
1: example.com
2: www.example.com
-------------------------------------------------------------------------------
Select the appropriate numbers separated by commas and/or spaces, or leave input
blank to select all options shown (Enter 'c' to cancel):
Certbot will also ask if you would like to automatically redirect HTTP traffic to HTTPS traffic. It is recommended that you select this option.
When the tool completes, Certbot stores all generated keys and issued certificates in the /etc/letsencrypt/live/$domain directory, where $domain is the name of the domain entered during the certificate generation step.

Note: Certbot recommends pointing your web server configuration to the default certificates directory or creating symlinks. Keys and certificates should not be moved to a different directory.

Finally, Certbot updates your web server configuration so that it uses the new certificate, and also redirects HTTP traffic to HTTPS if you chose that option.
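The server block Certbot produces typically has the following shape (a sketch; the exact file Certbot edits and the certificate paths depend on your domain):

```nginx
server {
    listen 443 ssl;
    listen [::]:443 ssl;
    server_name example.com;

    # Certificate paths under the default Let's Encrypt live directory.
    ssl_certificate /etc/letsencrypt/live/example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem;

    location / {
        proxy_pass http://localhost:3000/;
    }
}
```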
If you have a firewall configured on your Linode, you can add a firewall rule to allow incoming and outgoing connections to the HTTPS service. On Ubuntu, UFW is a commonly used and simple tool for managing firewall rules. Install and configure UFW for HTTP and HTTPS traffic:
sudo apt install ufw
sudo systemctl start ufw && sudo systemctl enable ufw
sudo ufw allow http
sudo ufw allow https
sudo ufw enable
Next Steps
For more information about general NGINX configuration, see our NGINX series. For practical examples of NGINX used to reverse proxy applications, see our guides on RStudio Server and Thingsboard.
More Information
You may wish to consult the following resources for additional information on this topic. While these are provided in the hope that they will be useful, please note that we cannot vouch for the accuracy or timeliness of externally hosted materials.