Setting up a reverse proxy with Apache and Nginx: How to create the optimal server architecture

A reverse proxy is an effective way to deliver modern web applications securely, with high performance and scalability. In this guide, I show you step by step how to set up a reverse proxy with Apache or NGINX, including concrete configurations and a comparison of the most important features.

Key points

  • A reverse proxy handles incoming requests centrally and shields back-end systems
  • NGINX impresses with its speed, simple configuration and modern architecture
  • Apache offers a flexible modular structure, ideal for existing infrastructures
  • Load balancing enables even distribution of load across multiple servers
  • SSL offloading improves performance and simplifies certificate management

Basics: What does a reverse proxy do?

A reverse proxy is the interface between external requests and internal servers: it forwards client requests to suitable backend servers. Unlike a forward proxy, it does not protect the client but shields the internal server architecture. This design provides greater security, centralized administration and improved scalability. In addition, functions such as caching, SSL termination or authentication can be implemented centrally in one place.

Set up NGINX as a reverse proxy

NGINX is one of the most common reverse proxy solutions. I like to use it when I need fast response times and a lean configuration system. The necessary setup is done in just a few steps.

After installation, you activate the reverse proxy function with a simple server configuration. For example, an application server listening internally on port 8080 can be exposed to the outside world via NGINX:

server {
   listen 80;
   server_name example.com;

   location / {
      proxy_pass http://127.0.0.1:8080;
      proxy_set_header Host $host;
      proxy_set_header X-Real-IP $remote_addr;
      proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
      proxy_set_header X-Forwarded-Proto $scheme;
   }
}

I take this setup live with a symbolic link into sites-enabled and a reload of NGINX:

sudo ln -s /etc/nginx/sites-available/example.com /etc/nginx/sites-enabled/
sudo systemctl reload nginx

For load distribution I use upstream blocks. These define several target servers among which requests are distributed evenly.
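
A minimal sketch of such an upstream block might look like this; the backend addresses and ports are placeholders:

```nginx
upstream app_backend {
    # Requests are distributed round-robin by default
    server 127.0.0.1:8080;
    server 127.0.0.1:8081;
    # A backup server only receives traffic when the others fail
    server 127.0.0.1:8082 backup;
}

server {
    listen 80;
    server_name example.com;

    location / {
        proxy_pass http://app_backend;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}
```

Weighting (`weight=`) or alternative strategies such as `least_conn` can be added later without changing the server block.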

Configure Apache as a reverse proxy

The Apache HTTP Server stands out for its modular design, especially in environments where Apache is already in production use. I value Apache as a reverse proxy when I need granular control or want to keep existing configurations. The setup starts by enabling two important modules:

sudo a2enmod proxy proxy_http

I insert the forwarding directives in the appropriate virtual host:

<VirtualHost *:80>
    ServerName example.com

    ProxyPass / http://127.0.0.1:8080/
    ProxyPassReverse / http://127.0.0.1:8080/
</VirtualHost>

A configuration test followed by a reload activates the changes:

sudo apache2ctl configtest
sudo systemctl reload apache2

Optionally, mod_proxy_balancer can be used to build a load-balancing setup similar to NGINX's upstream blocks.
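
A minimal balancer sketch could look like this; it assumes mod_proxy_balancer and mod_lbmethod_byrequests are enabled and that two backends listen on ports 8080 and 8081:

```apache
<Proxy "balancer://appcluster">
    BalancerMember "http://127.0.0.1:8080"
    BalancerMember "http://127.0.0.1:8081"
    # Distribute requests evenly by request count
    ProxySet lbmethod=byrequests
</Proxy>

ProxyPass        "/" "balancer://appcluster/"
ProxyPassReverse "/" "balancer://appcluster/"
```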

NGINX + Apache: The hybrid variant

In many projects, I use a mixture of both systems. Here, NGINX serves as the front end, delivers static content extremely quickly, and forwards dynamic requests to Apache. Apache runs internally on a port such as 8080, while NGINX accepts public requests on port 80 or 443 (HTTPS).

I often use this configuration for WordPress hosting, where Apache offers advantages thanks to its .htaccess compatibility while NGINX provides the speed. Security can also be improved via firewall rules, by allowing only NGINX to connect to Apache.
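
Such a hybrid front end might be sketched like this, assuming Apache listens on 127.0.0.1:8080 and static files live under /var/www/example (both placeholders):

```nginx
server {
    listen 80;
    server_name example.com;

    # Serve static assets directly from disk
    location ~* \.(css|js|png|jpg|svg|woff2)$ {
        root /var/www/example;
        expires 30d;
    }

    # Forward everything else to Apache
    location / {
        proxy_pass http://127.0.0.1:8080;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
```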

Functional advantages of the reverse proxy architecture

Its use brings me tangible benefits every day. With a reverse proxy, I can handle central tasks much more efficiently. Setups with peak loads or sensitive applications benefit in particular.

These include:

  • Scaling via load balancing
  • Security features such as IP filters, DDoS mitigation or authentication
  • Central SSL termination to simplify the HTTPS infrastructure
  • Fast content delivery thanks to caching
  • Flexible routing based on URI or host name

Performance comparison: Apache vs. NGINX

After many projects, this overview makes it easier for me to decide which tool makes sense in which situation. The differences are clearly noticeable during operation:

| Feature               | NGINX                          | Apache                                   |
| --------------------- | ------------------------------ | ---------------------------------------- |
| Performance           | Very high                      | Solid, but weaker under high load        |
| Configuration         | Clear and direct               | Flexible thanks to modular architecture  |
| Reverse proxy support | Built in as standard           | Enabled via modules                      |
| SSL offloading        | Efficient                      | Feasible with configuration              |
| Static content        | Extremely fast                 | Rarely optimal                           |
| Compatibility         | Ideal for new web technologies | Perfect for PHP & .htaccess              |

Practical tips for configuration

In my daily practice, a few rules have proven themselves time and again: use the standard ports 80 and 443 for public traffic. Block backend ports via the firewall. Test every configuration with the respective configtest command (nginx -t or apache2ctl configtest) before reloading.

You should also implement SSL encryption consistently. I recommend using Let's Encrypt with automatic certificate renewal. This ensures that data streams are not transmitted unencrypted.
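
With Certbot, for example, obtaining a certificate for an NGINX host and verifying automatic renewal can look like this (the domain is a placeholder):

```shell
# Obtain a certificate and let Certbot adjust the NGINX config
sudo certbot --nginx -d example.com

# Verify that automatic renewal works
sudo certbot renew --dry-run
```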

Monitoring and reliability

The architecture of a reverse proxy combines well with health checks and logging. Tools such as fail2ban, Grafana or Prometheus help with monitoring and log analysis. I also activate status endpoints and set alerts for high latency.
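
A simple status endpoint in NGINX uses the stub_status module; here it is restricted to local access:

```nginx
server {
    listen 127.0.0.1:8081;

    location /nginx_status {
        stub_status;
        # Only allow requests from the local machine
        allow 127.0.0.1;
        deny all;
    }
}
```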

Centralized monitoring is mandatory in production systems. This minimizes downtime and allows rapid intervention.

Review & recommendations for use

Whether you use NGINX or Apache as a reverse proxy depends on your objectives, available tools and desired features. I like to use NGINX as a high-performance front end for static content, while Apache complements the whole thing with powerful logic in the back end. In combination, projects achieve maximum efficiency in setup and operation.

To get started, a simple reverse proxy between port 80 and a local backend is often sufficient. Features such as load balancers, SSL termination or authentication can be added later. Always keep an eye on security and monitoring. For larger setups, I use solutions such as Docker containers or Kubernetes, supplemented by internal routing.

Advanced security and performance tips

Especially if you provide critical applications on the public Internet, an extended security layer is essential. In addition to blocking backend ports, it is advisable to explicitly allow certain IP ranges at application level. This reduces potential attack vectors even before they reach your internal network.
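
In NGINX, such an allowlist can be sketched directly in the relevant location block; the path and IP range are placeholders:

```nginx
location /admin/ {
    # Only permit a trusted office network
    allow 203.0.113.0/24;
    deny all;

    proxy_pass http://127.0.0.1:8080;
}
```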

Additional security headers such as Content-Security-Policy, X-Frame-Options or X-Content-Type-Options are particularly effective. Setting them in the reverse proxy means you do not have to adapt each application individually. In NGINX, for example, you add them directly within the server block:

add_header X-Frame-Options SAMEORIGIN;
add_header X-Content-Type-Options nosniff;
add_header X-XSS-Protection "1; mode=block";

In Apache, similar settings can be integrated via mod_headers. These headers eliminate a number of attack scenarios such as clickjacking or MIME type sniffing. Also make sure to disable weak ciphers and SSLv3 connections in order to close known vulnerabilities.
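
With mod_headers enabled (sudo a2enmod headers), the equivalent Apache configuration might look like this:

```apache
Header always set X-Frame-Options "SAMEORIGIN"
Header always set X-Content-Type-Options "nosniff"
Header always set X-XSS-Protection "1; mode=block"
```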

When it comes to performance, it is worth taking a look at Gzip or Brotli compression. Activating these features means clients receive less data, which has a positive effect on loading times, especially for static content such as CSS or JavaScript files.
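
A minimal Gzip sketch for NGINX, limited to text-based MIME types:

```nginx
gzip on;
# Compress only responses above a minimum size
gzip_min_length 1024;
gzip_types text/css application/javascript application/json image/svg+xml;
```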

Caching options for high throughput

A major advantage of reverse proxies is their integrated caching. NGINX and Apache offer different approaches to store frequently requested resources in memory or on the hard disk. This reduces the load on your application server enormously.

In NGINX, you activate the proxy cache feature like this, for example:

proxy_cache_path /var/cache/nginx keys_zone=my_cache:10m;
server {
    listen 80;
    server_name example.com;

    location / {
        proxy_cache my_cache;
        proxy_cache_valid 200 302 10m;
        proxy_pass http://127.0.0.1:8080;
        add_header X-Cache-Status $upstream_cache_status;
    }
}

This mechanism creates a cache under /var/cache/nginx. You can configure granular rules to cache certain MIME types or directories for longer. As soon as a client requests the same resource again, NGINX serves the request directly from the cache. This speeds up page loads and reduces the number of requests to the backend.

Apache offers comparable mechanisms with mod_cache and mod_cache_disk. One advantage is that you can use the CacheEnable directive to define selectively which URLs or directories end up in the cache. For example, you can prevent sensitive areas such as login forms from being cached, while static images remain in the cache for the long term.
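
A sketch with mod_cache_disk could look like this; it assumes both modules are enabled (sudo a2enmod cache cache_disk), and the /static and /login paths are placeholders:

```apache
CacheRoot          /var/cache/apache2/mod_cache_disk
CacheEnable        disk /static
CacheDisable       /login
# Cache entries expire after one hour by default
CacheDefaultExpire 3600
```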

Health checks and high availability

If you want fail-safe operation, you need to ensure that failed or overloaded backend servers are detected automatically. This is exactly where health checks come in. NGINX Plus or additional modules can continuously query the status of your applications; the open-source version offers passive checks via the max_fails and fail_timeout parameters on upstream servers. If a server fails, NGINX automatically redirects traffic to the other available servers.

Apache enables similar functions via mod_proxy_hcheck and mod_watchdog. You configure an interval at which Apache checks a specific target for a success or error code. If the expected HTTP status is not returned, the host is temporarily removed from the pool. In particularly large installations, combining this with load balancers such as HAProxy is recommended in order to split load balancing and health checking in a targeted manner.
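
With mod_proxy_hcheck enabled, a health-checked balancer member might be sketched like this; the /health endpoint is an assumption:

```apache
<Proxy "balancer://hcluster">
    # Probe /health every 10 seconds with a GET request
    BalancerMember "http://127.0.0.1:8080" hcmethod=GET hcinterval=10 hcuri=/health
    BalancerMember "http://127.0.0.1:8081" hcmethod=GET hcinterval=10 hcuri=/health
</Proxy>
```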

To achieve real high availability, an additional failover or cluster setup can be used. Here, two or more reverse proxy instances run in parallel, while a virtual IP (VIP) managed via Keepalived or Corosync always directs traffic to the active proxy. If one instance fails, the other takes over automatically without interrupting client requests.
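
A minimal Keepalived sketch for the active proxy could look like this; the interface name, VIP and password are placeholders:

```conf
vrrp_instance VI_1 {
    state MASTER
    interface eth0
    virtual_router_id 51
    priority 100
    authentication {
        auth_type PASS
        auth_pass secret42
    }
    # The virtual IP that clients connect to
    virtual_ipaddress {
        192.0.2.10
    }
}
```

The standby instance uses the same configuration with state BACKUP and a lower priority.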

Optimized configuration for SSL and HTTP/2

A lot has happened in recent years, especially when it comes to encryption. HTTP/2 offers you the option of transferring several resources in parallel via a single TCP connection and thus significantly reducing latencies. Both Apache and NGINX support HTTP/2 - but usually only via an SSL-encrypted connection (HTTPS). So make sure that your virtual host is configured accordingly:

server {
    listen 443 ssl http2;
    server_name example.com;
    ssl_certificate /etc/letsencrypt/live/example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem;

    location / {
        proxy_pass http://127.0.0.1:8080;
    }
}

Also remember to configure modern cipher suites and disable older encryption protocols such as SSLv3. In Apache, for example, this is done in your <VirtualHost *:443> configuration with an entry like:

SSLProtocol all -SSLv3 -TLSv1 -TLSv1.1
SSLCipherSuite HIGH:!aNULL:!MD5
SSLHonorCipherOrder on

In addition, OCSP stapling is worthwhile: it caches the revocation status of SSL certificates directly on the server. Your visitors thus avoid additional requests to external certificate authorities, which improves performance and keeps those lookups from exposing browsing information to third parties.
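
In NGINX, OCSP stapling can be enabled like this; the chain file path and resolver are placeholders:

```nginx
ssl_stapling on;
ssl_stapling_verify on;
# Chain certificate used to verify the stapled OCSP response
ssl_trusted_certificate /etc/letsencrypt/live/example.com/chain.pem;
resolver 1.1.1.1 valid=300s;
```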

Integration in container environments

Both NGINX and Apache can be operated excellently in container environments such as Docker or Kubernetes. A typical scenario is that one container runs per application, while an additional container acts as a reverse proxy. This can be configured dynamically as soon as a new application container is started.

Tools such as nginx-proxy or Traefik help here, automatically detecting containers and defining suitable routes. However, a highly dynamic environment can also be handled with a self-configured NGINX or Apache container. In Kubernetes, an Ingress Controller is recommended, which uses NGINX or HAProxy, depending on the deployment scenario, to distribute traffic within the cluster.

Encapsulation in the container allows you to maintain a clean separation between your applications and flexibly scale reverse proxy or load balancing functions. In addition, rollbacks can be carried out much more easily if required by simply reactivating old container versions.
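
A minimal Docker Compose sketch of this separation might look like this; the image names and the mounted config path are assumptions:

```yaml
services:
  app:
    image: my-app:latest    # application container, not published externally
    expose:
      - "8080"
  proxy:
    image: nginx:stable
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - ./nginx.conf:/etc/nginx/conf.d/default.conf:ro
    depends_on:
      - app
```

Only the proxy publishes ports to the host; the application remains reachable solely over the internal container network.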

Migration strategies and best practices

If you are currently operating a monolithic server architecture, it is often worth gradually switching to reverse proxy structures. You can start by putting a single application behind NGINX or Apache and gaining initial experience - especially in terms of logging, error analysis and security. Then work your way forward and migrate other services.

Plan in advance exactly on which ports you can reach which backends. Enter profiles for development, staging and production environments in separate configuration files so that you can switch them on or off individually. This minimizes the risk of misconfigurations.

Another best practice is to map all configuration as code. Use version control systems such as Git so that you can track changes more easily and roll them back quickly in the event of problems. This is essential, especially if there are several administrators in the team.

Backup and recovery plans

Even the best setup does not completely protect you from failures or security incidents. A well thought-out backup and recovery concept is therefore a must on your agenda. Create regular snapshots of your configuration files and make sure that your central SSL certificates and any DNS settings are backed up. For critical systems, I recommend using a separate backup storage that is not permanently available on the same network. This will prevent data loss in the event of ransomware attacks or hardware defects.

During a restore, you should test whether all services run correctly after the proxy configuration is in place. Often new certificates are required, or updated DNS entries are missing. With a clearly documented checklist, the restore process is much faster and causes less downtime.

Outlook and further optimizations

As soon as your reverse proxy is stable, you can focus on more advanced topics. One option is to implement a web application firewall (WAF), for example ModSecurity under Apache or a dedicated module in NGINX. A WAF helps you to intercept known attacks directly at proxy level before they reach your applications. This step is particularly worthwhile for frequent attacks such as cross-site scripting (XSS), SQL injections or malware uploads.
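
Under Apache, ModSecurity can be switched from detection to blocking mode with a single directive; a minimal sketch, assuming the libapache2-mod-security2 package is installed:

```apache
<IfModule security2_module>
    # Actively block matching requests instead of only logging them
    SecRuleEngine On
</IfModule>
```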

Monitor your traffic closely to identify bottlenecks during peak loads. Tools such as Grafana or Prometheus can help you keep an eye on metrics such as CPU utilization, memory and bandwidth. If you find that a single reverse proxy is reaching its limits, it's time to think about horizontal scaling or clustering.

Ultimately, it is these constant optimizations and monitoring improvements that turn a simple reverse proxy into a highly available and high-performance interface to your applications. Through the interplay of caching, security optimizations and flexible architecture, you can gradually professionalize your infrastructure and offer customers and developers a stable, fast web experience.
