
Web server speed comparison: Apache vs. NGINX vs. LiteSpeed

I compare the web server speed of Apache, NGINX and LiteSpeed across typical traffic patterns: static files, PHP calls, TLS and caching. This lets you see at a glance which server leads on latency, requests per second and resource requirements in each scenario, and where a switch really pays off in performance. The focus throughout is practical.

Key points

  • Architecture: processes (Apache) vs. events (NGINX/LiteSpeed) determine throughput and latency
  • Static content: NGINX/OpenLiteSpeed deliver files extremely efficiently
  • Dynamic content: LiteSpeed scores with PHP via LSAPI, often faster than PHP-FPM
  • Resources: NGINX/OpenLiteSpeed save RAM/CPU; Apache needs more
  • Security: LiteSpeed ships integrated protection; NGINX offers clear hardening paths

Why the choice of web server matters

A web server affects your app's response time more than many people assume, especially under peak load. It determines how efficiently the kernel and TLS stack are used, how well caches work and how cleanly keep-alive connections behave. Different architectural approaches lead to significantly different results on the same resources. That's why I don't compare in a laboratory vacuum but against standard production patterns. This lets you make a decision with a measurable effect instead of one that only shines on paper.

Architecture in comparison: processes vs. events

Apache traditionally uses the prefork/worker/event MPMs with processes or threads, which creates more overhead under many simultaneous connections. NGINX and LiteSpeed are event-driven: a small set of workers manages a large number of connections asynchronously. This approach lowers context switching, reduces memory requirements and increases performance with long keep-alive or HTTP/2 streams. Under traffic with many concurrent requests, this has a direct impact on stability and throughput. For APIs and static delivery, NGINX and LiteSpeed therefore often deliver the smoother flow.
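
As a minimal sketch of what the event model looks like in practice, assuming an nginx-style configuration: a handful of workers, each multiplexing thousands of connections instead of one process per connection.

```nginx
# One worker per CPU core; each handles many connections via epoll/kqueue,
# rather than one process or thread per connection as in Apache's prefork MPM.
worker_processes auto;

events {
    worker_connections 4096;  # connections multiplexed per worker
    multi_accept on;          # accept all pending connections per wakeup
}
```

The values here are starting points, not tuned recommendations; the point is that total concurrency scales with connections per worker, not with process count.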

Static content: Deliver files faster

With static files, efficient syscalls, zero-copy strategies and cache hits are what count. NGINX and OpenLiteSpeed are often faster here because they need fewer process switches and are optimized around sendfile/splice. Apache can keep up, but it needs very good tuning profiles and more RAM for its workers. If you want a deeper comparison, this overview is worth reading: Apache vs. NGINX comparison. In CDN-adjacent setups, or on pages with many images and scripts, NGINX/OpenLiteSpeed usually deliver the lowest latency.
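
A hedged nginx sketch of the zero-copy path mentioned above; the cache sizes are illustrative assumptions:

```nginx
# Zero-copy delivery: the kernel streams file contents straight to the
# socket without copying them through userspace first.
sendfile on;
tcp_nopush on;   # send full packets (headers + file start together)

# Cache file descriptors and metadata for frequently requested files
open_file_cache max=10000 inactive=30s;
open_file_cache_valid 60s;
```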

Dynamic content and PHP: FPM vs. LSAPI

With PHP applications the field divides clearly, because LiteSpeed talks to PHP through its high-performance LSAPI interface. Compared to PHP-FPM (Apache/NGINX), latency drops and error recovery under load is smoother. LiteSpeed also integrates tightly with opcode caches and context pools, which improves warm-start behavior. NGINX with FPM remains strong, but requires more fine-tuning of max_children, timeouts and sockets. Anyone running WordPress, Shopware or WooCommerce often sees a noticeable TTFB benefit with LiteSpeed.
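
For the NGINX-plus-FPM side, a minimal sketch of the socket-based wiring discussed here; the socket path is a common default and an assumption, not a universal one:

```nginx
# Hand PHP requests to a local FPM pool over a Unix socket, which is
# usually faster than TCP when both run on the same host.
upstream php {
    server unix:/run/php/php-fpm.sock;  # assumed pool socket path
}

server {
    location ~ \.php$ {
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_pass php;
        fastcgi_read_timeout 30s;  # fail fast instead of queueing forever
    }
}
```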

Resource consumption and scaling

NGINX and OpenLiteSpeed sustain high connection counts with little RAM, which leads to more stable responses on smaller VM instances or containers. Apache usually needs more CPU and memory for the same throughput because of its workers and threads. Under peak load, the event-based model often scales more predictably and stays responsive. For horizontal scaling in Kubernetes environments, NGINX/OpenLiteSpeed score with lean pod resource profiles. That simplifies autoscaling and saves infrastructure budget.

Measured values at a glance

The following table shows typical directions of measurement: requests per second (RPS), average latency and approximate resource requirements under comparable load.

Web server       Speed (RPS)   Latency (ms)   Resource consumption
Apache           7508          26.5           High (CPU & RAM)
NGINX            7589          25.8           Low
LiteSpeed        8233          24.1           Efficient
Lighttpd         8645          22.4           Low
OpenLiteSpeed    8173          23.1           Low

Important: such benchmarks depend heavily on the test profile, hardware, kernel version and TLS setup. What matters is that the trend holds up in real deployments: NGINX/LiteSpeed/OpenLiteSpeed often deliver more RPS with less RAM. For workloads with many simultaneously waiting requests (long polling, SSE), the event approach pays off particularly well. Anyone running a WordPress shop will see this advantage quickly in the checkout. For legacy apps with many .htaccess rules, Apache remains very convenient.

HTTPS, HTTP/2/3 and TLS offloading

Under TLS, what counts is how efficiently connections are reused and packets are prioritized. NGINX and LiteSpeed support modern cipher suites, 0-RTT mechanisms and clean keep-alive strategies very well. HTTP/3 (QUIC) can reduce latency on lossy connections, especially on mobile devices. In practice, TLS offloading in front of app servers pays off: fewer CPU spikes and more consistent response times. Anyone with a high TLS handshake load benefits from session resumption, OCSP stapling and consistent H2/H3 use.
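
A hedged nginx sketch of the session resumption and stapling settings just mentioned, assuming nginx 1.25+ for the standalone `http2` directive:

```nginx
listen 443 ssl;
http2 on;                           # nginx >= 1.25; older: "listen 443 ssl http2;"

ssl_protocols TLSv1.2 TLSv1.3;
ssl_session_cache shared:SSL:10m;   # session resumption: cheaper reconnects
ssl_session_timeout 1h;

ssl_stapling on;                    # OCSP stapling saves the client a round trip
ssl_stapling_verify on;             # needs a resolver and the CA chain configured
```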

Caching: from microcaching to full page

Correctly configured caching beats any hardware upgrade, because it reduces latency and backend load immediately. NGINX shines with microcaching over short windows of a few seconds and is ideal for dynamic backends. LiteSpeed offers strong full-page caching and edge features for common CMSs. Apache can keep up if you orchestrate modules and TTLs carefully, but it demands more fine-tuning. This guide is a good starting point: Server-side caching guide.
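
A minimal microcaching sketch for nginx in front of PHP-FPM; zone name, paths and the one-second TTL are illustrative assumptions (the `fastcgi_cache_path` line belongs in the http block):

```nginx
# Cache dynamic responses for one second: absorbs traffic spikes while
# content stays effectively fresh.
fastcgi_cache_path /var/cache/nginx levels=1:2 keys_zone=micro:10m inactive=10s;

location ~ \.php$ {
    fastcgi_cache micro;
    fastcgi_cache_key $scheme$request_method$host$request_uri;
    fastcgi_cache_valid 200 1s;               # the short microcache window
    fastcgi_cache_use_stale updating error;   # serve stale while refreshing
    include fastcgi_params;
    fastcgi_pass unix:/run/php/php-fpm.sock;  # assumed pool socket
}
```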

Security and hardening

LiteSpeed ships integrated measures against volumetric attacks and can throttle request rates cleanly. NGINX allows clear rules for limits, timeouts and header validation, making hardening easy to reason about. Apache benefits from its long history and many modules for WAF, auth and input filtering. The interplay with upstream WAFs, rate limits and bot management remains crucial. Keep logs lean and analyzable, otherwise IO quickly eats away the latency gains.
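
The NGINX side of this can be sketched with the standard limit_req mechanism; rate and burst values are illustrative:

```nginx
# 10 requests/second per client IP with a small burst allowance;
# excess requests get 429 instead of piling onto the backend.
limit_req_zone $binary_remote_addr zone=perip:10m rate=10r/s;

server {
    location / {
        limit_req zone=perip burst=20 nodelay;
        limit_req_status 429;
    }
}
```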

Compatibility and migration

If you rely on many .htaccess and mod_rewrite rules, you will feel most at home with Apache. LiteSpeed understands large parts of this syntax and can often adopt it directly, which makes migrations easier. OpenLiteSpeed requires different configuration in places, but offers the event-model strength without license costs. Check the differences between OLS and LiteSpeed in advance: OpenLiteSpeed vs. LiteSpeed. For NGINX, a step-by-step migration with parallel reverse proxy operation and canary traffic is worthwhile.

Practical guide: Selection by application type

For pure file or API delivery, I prefer NGINX or OpenLiteSpeed for their low latency and good scaling. Shops and CMSs with a lot of PHP perform noticeably faster with LiteSpeed, especially during traffic peaks. Legacy projects with special .htaccess logic I keep on Apache or move slowly to NGINX/LiteSpeed. For edge features (Brotli, Early Hints, HTTP/3), I check the support matrix and build paths. In multi-tenant environments, it also counts how cleanly rate limits and isolation can be implemented.

Tuning checklist for fast response times

I start with keep-alive, pipelining/multiplexing and sensible timeouts, because they determine connection quality. Then I check TLS parameters, session resumption and OCSP stapling to take load off handshakes. For PHP, I set pools to realistic concurrency, avoid swapping and don't overfill the server with children. Microcaching or full-page caching lowers TTFB immediately wherever content is cacheable. I rotate logs aggressively and write them asynchronously so that IO doesn't become a brake.
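
The first checklist item, keep-alive and timeouts, looks roughly like this in nginx terms; the numbers are conservative starting points, not recommendations:

```nginx
keepalive_timeout 65s;       # reuse connections instead of re-handshaking
keepalive_requests 1000;     # requests allowed per kept-alive connection

client_header_timeout 10s;   # drop slow or stuck clients early
client_body_timeout 10s;
send_timeout 10s;
```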

Extended notes on reverse proxy and CDN

An upstream reverse proxy decouples TLS, caching and load distribution from the app and makes maintenance windows easier to plan. NGINX is ideal as a front layer in front of upstream servers; LiteSpeed can fill this role too. In front of a CDN, set cache-control headers, ETag strategy and variants consistently, otherwise the potential is wasted. It is important to terminate TLS and hand over H2/H3 correctly so that prioritization takes effect. The result is a chain that preserves performance instead of introducing new bottlenecks.
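
A hedged sketch of such a front layer in nginx; the upstream addresses are placeholders:

```nginx
upstream app {
    server 10.0.0.11:8080;    # hypothetical app servers, plain HTTP internally
    server 10.0.0.12:8080;
    keepalive 32;             # pool of reusable upstream connections
}

server {
    listen 443 ssl;           # TLS terminates here, offloaded from the apps
    location / {
        proxy_pass http://app;
        proxy_http_version 1.1;
        proxy_set_header Connection "";   # required for upstream keepalive
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
```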

Benchmark methodology: measuring realistically instead of calculating beautifully

Clean measurements start with clear targets and reproducible profiles. Use warm-ups so that caches and opcode caches are in their real-world state. Vary concurrency (e.g. 50/200/1000), keep test runs long enough (60-300 s) and measure H1, H2 and H3 separately. Pay attention to connection schemes (keep-alive on/off), TLS parameters (RSA vs. ECDSA, session resumption) and real payloads instead of "Hello World". Meanwhile, log system metrics (CPU steal, run queue, IRQs, sockets, file descriptors) and app metrics (TTFB, P95/P99 latency). Measure with cold and warm caches as well as under induced failure (throttled PHP workers) to expose backpressure and recovery behavior. Only when P95 and P99 are stable is a setup resilient in everyday use.
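
To make the P95/P99 point concrete, a small sketch with hypothetical latency samples: the mean looks harmless while the tail percentiles expose the outliers users actually feel.

```python
import math

def percentile(samples, p):
    """Nearest-rank percentile: value at position ceil(p/100 * n) of the sorted data."""
    s = sorted(samples)
    k = max(1, math.ceil(p / 100 * len(s)))
    return s[k - 1]

# Hypothetical latencies (ms) from one benchmark run, not real measurements
latencies = [24, 25, 25, 26, 27, 30, 31, 45, 80, 200]

print("mean:", sum(latencies) / len(latencies))   # looks fine
print("P50:", percentile(latencies, 50))
print("P95:", percentile(latencies, 95))          # the tail tells the truth
print("P99:", percentile(latencies, 99))
```

Here the mean is around 51 ms, but P95 sits at 200 ms, which is exactly why the text insists on stable P95/P99 rather than averages.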

OS and kernel tuning for high concurrency

Performance often fails at system limits, not at the web server. Raise file descriptors (ulimit, fs.file-max), set appropriate backlogs (net.core.somaxconn, net.ipv4.tcp_max_syn_backlog) and use accept queues sensibly. Activate reuseport only if load distribution across workers stays stable, and weigh NIC offloads (GRO/TSO/GSO) as CPU/latency trade-offs. IRQ affinity and RPS/XPS distribution reduce latency spikes. NUMA hosts benefit from local memory binding and a consistent CPU-pinning strategy. Be careful with aggressive TCP tuning: observation and small steps beat generic "best of" sysctl lists. Write logs asynchronously and rotate them to fast storage, otherwise IO limits RPS long before CPU or RAM is exhausted.
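
The limits named above can be sketched as a sysctl fragment; the values are starting points to observe and adjust, not a recommended list:

```ini
# /etc/sysctl.d/99-webserver.conf (hypothetical file) - raise the limits
# that high-concurrency servers actually hit first
fs.file-max = 1048576                  # system-wide file descriptor ceiling
net.core.somaxconn = 4096              # accept queue backlog per socket
net.ipv4.tcp_max_syn_backlog = 8192    # half-open connection queue
```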

HTTP/3/QUIC in practice

HTTP/3 offers advantages on lossy networks and for mobile access. Clean Alt-Svc advertising, correct stream prioritization and robust fallback to H2 are crucial. Watch for MTU/PMTUD issues and use conservative initial congestion windows to keep retransmits under control. In multi-layer setups (CDN → reverse proxy → app), the H3/H2 handover must stay consistent, otherwise prioritization is lost. Measure TTFB and "fully loaded" separately under H3, because header compression (QPACK) and packet loss behave differently than with H2. Not every edge device speaks H3 reliably, so plan dual paths with a clean downgrade and no latency jumps.
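
A hedged sketch of the Alt-Svc advertising and dual-path setup described here, assuming nginx 1.25+ with QUIC support built in:

```nginx
listen 443 ssl;                # TCP path: H1/H2 fallback
listen 443 quic reuseport;     # UDP path: QUIC on the same port number
http2 on;
http3 on;

# Advertise H3; clients that can't use it stay on H2 transparently
add_header Alt-Svc 'h3=":443"; ma=86400' always;
```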

Caching strategies in detail

The key lies in the correct cache key and in intelligent invalidation. Normalize query strings (utm_*, fbclid) and keep Vary headers minimal (e.g. only Accept-Encoding and language). Use stale-while-revalidate and stale-if-error to keep TTFB stable even when the backend misbehaves. Microcaching (0.5-5 s) is ideal for very dynamic pages; for CMS and shop front pages, full-page caching delivers the biggest jumps. Cookie bypasses: accept only truly necessary cookies as cache breakers. Automate purge strategies (invalidation on product updates and price changes). Deliver files compressed (Brotli/Gzip) and with Early Hints (103) so the browser starts loading early. This yields measurable TTFB gains and relieves the PHP/DB layers.
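
As an illustration, the stale-serving and minimal-Vary strategies above boil down to response headers like these (values are examples):

```http
Cache-Control: public, max-age=60, stale-while-revalidate=30, stale-if-error=300
Vary: Accept-Encoding
```

Read as: serve from cache for 60 s, serve stale for up to 30 s while revalidating in the background, and keep serving stale for up to 5 minutes if the backend errors out.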

PHP runtime: FPM vs. LSAPI fine-tuned

With PHP, the stability of the whole stack hinges on cleanly dimensioned workers. For FPM, choose pm strategies (ondemand/dynamic) and pm.max_children according to RAM and request profiles; a few fast workers without swap beat many that thrash. Check pm.max_requests, slowlog and timeout settings so that hanging requests don't clog the system. Socket-based communication is often faster than TCP as long as locality is right. LSAPI shines with tight integration, efficient backpressure and faster error recovery, which lowers P95/P99 at peak load. Whichever interface you use: opcode cache (memory size, interned strings), realpath cache and autoloading improve warm starts dramatically. Avoid per-request IO (sessions, transients) and push "heavy" tasks onto asynchronous queues.
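
The pm.max_children sizing rule can be sketched as a back-of-the-envelope calculation; the RAM figures and the 20% headroom are illustrative assumptions, not measured values:

```python
def max_children(pool_ram_mb, avg_worker_mb, headroom=0.8):
    """Workers that fit in RAM with headroom left over;
    a few fast workers without swap beat many that thrash."""
    return int(pool_ram_mb * headroom // avg_worker_mb)

# e.g. 4 GB reserved for PHP, ~64 MB resident per worker (hypothetical)
print(max_children(4096, 64))
```

Measure the real per-worker RSS under load first; the formula only prevents the classic mistake of setting pm.max_children so high that the host starts swapping.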

Multi-tenancy and isolation

Shared and multi-tenant environments need clear boundaries. Limits defined per vHost/PHP pool (CPU, RAM, file descriptors) prevent noisy neighbors. Cgroups v2 and systemd slices help allocate resources consistently. Rate limits (requests per second, concurrent connections) per zone protect all tenants. Chroot/container isolation, restrictive capabilities and a minimal module footprint reduce the attack surface. LiteSpeed scores with deeply integrated per-site controls, NGINX with transparent limit_req/limit_conn mechanisms, Apache with granular auth/WAF modules. Important: separate logs and metrics per tenant, otherwise troubleshooting stays blind.
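
The cgroups v2/systemd approach can be sketched as a per-tenant slice; the unit name and limits are hypothetical examples:

```ini
# /etc/systemd/system/tenant-acme.slice (hypothetical per-tenant cap)
[Slice]
CPUQuota=150%      # at most 1.5 cores for this tenant's services
MemoryMax=2G       # hard RAM ceiling enforced via cgroups v2
TasksMax=512       # bounds processes/threads, including PHP workers
```

Services for the tenant then declare `Slice=tenant-acme.slice` and inherit these limits, keeping one runaway site from starving its neighbors.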

License, support and operating costs

The choice has financial implications. OpenLiteSpeed and NGINX are license-free in their community versions; LiteSpeed Enterprise offers extra features and support, but pricing depends on the number of cores. In compute-intensive PHP stacks, LSAPI performance can offset the license price by reducing the number of servers needed. NGINX scores with a broad community and predictable operating models, Apache with a comprehensive module ecosystem at no extra cost. Calculate the total cost of ownership: license, operating effort (tuning, monitoring), support and hardware. The goal is not "cheap" but "consistently fast at the lowest opex".

Typical error patterns and quick troubleshooting

Recognize patterns before users feel them:

  • Many 499/408: TTFB too long or aggressive timeouts (client aborts)
  • 502/504: exhausted PHP workers or upstream timeouts
  • EMFILE/ENFILE in logs: file descriptor limits too low
  • H2 stream resets and lost prioritization: downstream proxy/CDN misbehavior
  • High CPU on TLS handshakes: no session resumption or unsuitable certificate curves
  • Accept queue drops: backlog too small; check SYN cookies

Procedure: temporarily tighten rate limits, increase backpressure, widen caches, reduce worker load. Always look at P95/P99 and the error rate together - they tell the truth about load edges.

CI/CD and risk-free migration

Changes at the edge need safety nets. Use blue-green deployments or canary routing with header- or path-based splits. Shadow traffic allows functional tests without affecting users. Health checks must distinguish liveness from readiness so autoscalers don't scale at the wrong moment. Version your configurations and test them both synthetically (H1/H2/H3) and with real browsers. Rollbacks must be one keystroke away; configuration diffs belong in code review. This way even large migrations (Apache → NGINX/LiteSpeed/OLS) can be done without downtime and with measurable gains.

Short verdict: the best choice depending on the destination

For raw file delivery and API gateways, I use NGINX or OpenLiteSpeed because they need few resources and stay consistently fast. For PHP-heavy systems, I choose LiteSpeed for low TTFB and smooth scaling with LSAPI. If a project needs maximum .htaccess compatibility, Apache remains convenient, even at higher resource cost. Those who modernize combine reverse proxy, caching and clean TLS settings, then measure under real load. That way the web server matches the app - and latency falls where it really counts.
