A PHP handler comparison clearly shows how CGI, PHP-FPM and LSAPI control the execution of PHP scripts and thus shape latency, isolation and RAM requirements in hosting. I explain the differences in practical terms, categorize them by workload and give recommendations for selection and configuration in daily operation.
Key points
- Performance: LSAPI leads in tail latencies, PHP-FPM delivers very constant response times.
- Security: CGI strictly separates, PHP-FPM isolates with pools, LSAPI encapsulates per user.
- Resources: LSAPI saves RAM, PHP-FPM remains efficient, CGI generates overhead.
- Compatibility: PHP-FPM fits Apache/Nginx, LSAPI shines with LiteSpeed.
- Practice: For CMS and stores I mostly use PHP-FPM; high-traffic sites often benefit from LSAPI.
Basics of PHP handlers
A PHP handler connects the web server with the PHP interpreter. CGI starts a new process for each request and thus achieves a very clean separation between accounts. This separation costs time because each request reloads extensions and configuration. PHP-FPM keeps workers persistent and distributes requests to pools, which reduces startup costs and keeps latency low. LSAPI integrates deeply with LiteSpeed and uses very lightweight, long-lived processes for high efficiency.
mod_php integrates PHP directly into the web server, but its isolation is weak. I prefer modern handlers because they isolate error sources and keep the platform more stable under load. If you host many users on one system, you clearly benefit from separate user contexts. This is precisely where FPM pools and LSAPI play to their strengths. CGI remains a secure but sluggish option for very small sites and special test scenarios.
Comparison table: Strengths and application scenarios
The following table summarizes the core features and assigns them to typical workloads. I use it as a quick decision-making aid for PHP hosting setups with CMS, stores or APIs. Note that actual performance also depends on caching, storage and network profile. Nevertheless, the overview provides a solid starting point for an initial selection. I then fine-tune the configuration based on specific load profiles and measured values.
| Handler | Performance | Security | RAM consumption | Scalability | Suitable for |
|---|---|---|---|---|---|
| CGI | Low | Very high | High | Low | Tests, static or rarely accessed pages |
| PHP-FPM | Very high | High | Low | High | Shared hosting, CMS, APIs |
| LSAPI | Highest | Medium to high (per user) | Very low | Very high | High-traffic, e-commerce, concurrency |
CGI scores with separation, but suffers from process startup costs. PHP-FPM offers the best ratio of latency, throughput and isolation on systems with Apache or Nginx. LSAPI delivers very low tail latencies under high concurrency on LiteSpeed stacks. If you don't use a LiteSpeed server, FPM offers the broadest support. For very small sites, I stick with simple setups; for growing projects, I switch to FPM or LSAPI.
Performance under load: latencies and throughput
Under increasing concurrency, what counts are P95/P99 latencies and the stability of throughput. LSAPI sustains the highest loads with surprisingly consistent response times. PHP-FPM follows close behind and responds very well to pool tuning, for example with a dynamic process count. CGI noticeably loses speed as soon as many short requests arrive. For more detailed measurements, please refer to my performance comparison, which covers typical CMS and store workloads.
I consistently combine FPM or LSAPI with OPcache, so that bytecode is not constantly regenerated. In addition, reverse proxy caches reduce the number of PHP hits for recurring content. A job queue is worthwhile for compute-intensive tasks so that frontend requests remain fast. If you want to absorb sporadic peaks, use short-lived burst scaling via additional FPM workers. This keeps tail latencies within limits and response times consistent.
Security and isolation in shared hosting
In multi-user environments, isolation counts at least as much as speed. CGI achieves very clean separation through per-request processes, but with a lot of overhead. PHP-FPM isolates per pool and allows hard limits for memory, execution time and process count. LSAPI also assigns processes to accounts, but the details are tied to the LiteSpeed stack. If you want to categorize the risks, read my article on pool risks with FPM and set clear limits.
I give each pool its own account with its own UID/GID and restrictive permissions. This limits the blast radius of possible attacks and prevents faulty scripts from reading other accounts' data. Added to this are limits for memory, maximum requests per worker and timeouts. Regular updates and secure file permissions round off the concept. I minimize admin scripts that are openly reachable on the network or protect them with authentication.
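As a sketch, a per-account FPM pool along these lines enforces that separation; the user names, paths and limit values here are illustrative assumptions, not fixed recommendations:

```ini
; /etc/php/8.2/fpm/pool.d/example-site.conf -- hypothetical per-account pool
[example-site]
user = site1                       ; dedicated UID/GID per account
group = site1
listen = /run/php/example-site.sock
listen.owner = www-data            ; only the web server may connect
listen.group = www-data
listen.mode = 0660

; hard per-pool limits so one site cannot exhaust the host
php_admin_value[memory_limit] = 256M
php_admin_value[max_execution_time] = 60
php_admin_value[open_basedir] = /var/www/example-site:/tmp
pm.max_requests = 500
```

The `php_admin_value` form is deliberate: unlike `php_value`, it cannot be overridden from user code or `.user.ini` files.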
Resource consumption and RAM management
RAM often determines costs and density per server. LSAPI scores here with a very small footprint per process and economical context switches. PHP-FPM also remains efficient if I create pools dynamically and size limits properly. CGI wastes memory through frequent reloading of extensions and is therefore hardly suitable for dynamic projects. If you host many accounts, FPM or LSAPI gives you significantly more headroom per node and keeps total costs plannable.
I regularly measure peak RAM and observe the distribution throughout the day. Peaks indicate worker counts that are too low or unfavorable caching strategies. I reduce the demand with finer pool sizing and targeted OPcache tuning. This reduces swap risks and prevents unpredictable latency outliers. On overcrowded hosts, I move individual sites to their own nodes before overall performance suffers.
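The sizing logic behind this can be sketched as a small calculation; all numbers below are illustrative assumptions, not measurements:

```python
# Rough worker budget: how many FPM children fit into the RAM left for PHP.
# Numbers are illustrative assumptions -- measure real per-worker RSS first.

def max_children(total_ram_mb: int, reserved_mb: int, avg_worker_mb: int) -> int:
    """RAM left after OS/services, divided by the average resident size per worker."""
    usable = total_ram_mb - reserved_mb
    return max(usable // avg_worker_mb, 0)

# Example: 8 GB host, 2 GB reserved for OS, DB and caches, ~60 MB per worker.
print(max_children(8192, 2048, 60))  # → 102
```

I treat the result as an upper bound for pm.max_children, not a target: peaks in per-worker memory shrink the real budget.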
Compatibility with Apache, Nginx and LiteSpeed
The choice of web server guides the handler decision. PHP-FPM works excellently behind Nginx and can be cleanly connected to Apache via proxy. In Apache environments, I recommend mpm_event, keep-alive tuning and a stable proxy configuration. LSAPI unfolds its full potential with LiteSpeed and reads .htaccess files efficiently. Those who already run LiteSpeed often squeeze out the last bit of performance with LSAPI.
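A minimal Apache sketch for that FPM connection via mod_proxy_fcgi could look as follows; the socket path is an assumption:

```apache
# Requires mod_proxy and mod_proxy_fcgi; pair with mpm_event, not prefork.
# Hand every *.php request to the local FPM socket of this site's pool.
<FilesMatch "\.php$">
    SetHandler "proxy:unix:/run/php/example-site.sock|fcgi://localhost"
</FilesMatch>
```

Compared to mod_php, this keeps the Apache workers small and lets each vhost talk to its own isolated pool.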
For static content, I use Nginx or LiteSpeed directly from the web server cache. PHP only processes what needs to remain dynamic. This separation reduces the load on the handler and saves CPU time. As a side effect, TTFB consistency increases with recurring page requests. This keeps frontends responsive, even when backends are under pressure.
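In Nginx, that static/dynamic split can be sketched roughly like this; the extensions, cache lifetimes and socket path are illustrative:

```nginx
# Static assets bypass PHP entirely and get long-lived cache headers.
location ~* \.(css|js|png|jpg|svg|woff2)$ {
    expires 30d;
    add_header Cache-Control "public, immutable";
    access_log off;
}

# Only requests for PHP scripts reach the FPM pool.
location ~ \.php$ {
    include fastcgi_params;
    fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
    fastcgi_pass unix:/run/php/example-site.sock;
}
```

Every request that never touches `fastcgi_pass` is a request the handler does not have to budget for.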
Best practices for PHP-FPM pools
I start with a conservative pool layout per site and measure real peaks. Then I adjust pm, pm.max_children, pm.start_servers and pm.max_requests. Pools that are too small make requests wait; pools that are too large eat RAM and generate context switches. For WordPress, WooCommerce or TYPO3, I usually choose dynamic or ondemand and keep the limits tight. Details on pm.max_children are summarized in my guide.
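A starting point for such a pool might look like this; every value is an example to be replaced by measured peaks:

```ini
; dynamic pool sized from measured peaks -- values are examples, not defaults
pm = dynamic
pm.max_children = 40        ; upper bound, derived from RAM per worker
pm.start_servers = 8
pm.min_spare_servers = 4
pm.max_spare_servers = 12
pm.max_requests = 500       ; recycle workers to cap leak/fragmentation effects
```

For rarely visited sites I swap `pm = dynamic` for `pm = ondemand` with `pm.process_idle_timeout`, so idle pools release their RAM.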
I set limits such as memory_limit and max_execution_time per pool. This prevents individual scripts from blocking resources or getting out of hand. request_terminate_timeout protects against hanging processes that would otherwise pile up. max_input_vars and upload_max_filesize are secured sensibly, depending on the project. This keeps pools controllable and the host stable.
Caching and OPcache in practice
For me, OPcache is part of every PHP installation. I activate it, check the size and monitor the hit rate. For most deployments, I tune validate_timestamps and revalidate_freq so that releases take effect quickly. I also use reverse proxy caches and page cache plugins in CMS to reduce the PHP hit rate. The fewer requests that actually end up in PHP, the better everything scales.
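My usual starting point is a baseline along these lines; the sizes are assumptions to be checked against the actual codebase and hit rate:

```ini
; php.ini -- illustrative OPcache baseline, to be sized against the codebase
opcache.enable = 1
opcache.memory_consumption = 256      ; MB of shared bytecode cache
opcache.interned_strings_buffer = 16  ; MB for deduplicated strings
opcache.max_accelerated_files = 20000 ; must exceed the number of PHP files
opcache.validate_timestamps = 1
opcache.revalidate_freq = 60          ; lower for fast-moving deployments
```

If `opcache_get_status()` shows a hit rate well below 99% or frequent restarts, the cache is too small for the codebase.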
Those who use server-side sessions intensively often benefit from Redis. I regulate TTLs and manage memory limits strictly. For full-page caches, I think through cache keys and invalidation strategies so that stores deliver correct content after price or stock changes. A clear cache plan saves CPU, RAM and time. The interplay of OPcache, proxy cache and application cache ultimately determines the perceived speed.
Decision matrix: Which handler suits which project?
Small sites with little traffic run safely on PHP-FPM with conservative limits. Pure test environments or special compliance requirements can make CGI useful, despite the loss of speed. High-traffic stores and highly concurrent APIs often benefit from LSAPI on LiteSpeed. If you need maximum compatibility and flexibility, rely on FPM. For PHP hosting with WordPress or WooCommerce, I prefer FPM as a versatile all-rounder.
I never make a decision based solely on a benchmark. Instead, I measure the real mix of static hits, dynamic pages and API calls. The average script time and the proportion of cache hits also influence the choice. I also take admin habits into account, such as frequent deployments or build processes. The best solution remains the one that runs stable and fast under real conditions.
Costs, license and operation - what pays off?
On a pure cost view, FPM is attractive because it requires no additional licenses. LSAPI can reduce operating costs per site through better density and lower latencies, but requires paid LiteSpeed licenses. This often pays off with many paying customers, but usually not for hobby projects. CGI causes indirect costs through inefficient resource use and longer response times. I therefore calculate the overall operation and save where it does not put quality at risk.
Plannability remains important. A host that is overbooked too heavily saves money in the short term, but pays for it with downtime and dissatisfied users. Modern observability tools help identify bottlenecks early. Those who add capacity regularly keep latencies stable and relieve support. In the end, the solution that conserves resources and keeps uptime high wins.
Multi-PHP versions, rollouts and zero downtime
In everyday life, I often operate several PHP versions in parallel. With FPM, this works cleanly via separate pools and separate sockets for each version. This allows me to migrate sites step by step without disrupting the overall system. I plan rolling updates: first staging, then a small production group, then the rest. Graceful reloads (FPM: reload instead of restart) avoid abrupt connection drops and keep existing connections open. With LSAPI, I use analogous mechanisms in the LiteSpeed stack to preheat workers and minimize cold-start effects.
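Mapping sites to versions can be sketched like this in Nginx, assuming one FPM socket per version (socket paths are illustrative):

```nginx
# One upstream per PHP version; each vhost picks its socket.
upstream php82 { server unix:/run/php/php8.2-fpm.sock; }
upstream php83 { server unix:/run/php/php8.3-fpm.sock; }

server {
    server_name legacy.example.com;
    location ~ \.php$ {
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_pass php82;   # flip this vhost to php83 once it is migrated
    }
}
```

Migrating then means changing one `fastcgi_pass` line per vhost and reloading Nginx, while both FPM services keep running.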
For zero-downtime deployments, I pay attention to atomic release strategies with symlinks and OPcache validation. After switching, I selectively clear caches without discarding everything. This keeps tail latencies stable and new deployments quickly land in a warm state. Important: File permissions and owners must be correct, otherwise FPM or LSAPI workers will block new releases.
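An atomic switch can be sketched in shell like this; the paths and release name are illustrative assumptions, and `mv -T` requires GNU coreutils:

```shell
#!/bin/sh
# Atomic release switch via symlink rename -- illustrative paths.
set -eu

APP=/tmp/demo-app
RELEASE="$APP/releases/2024-01-01"
mkdir -p "$RELEASE"

# Build the new symlink beside the old one, then rename it into place.
# Renaming over an existing symlink is atomic, so workers always see
# either the old release or the new one, never a half-switched state.
ln -sfn "$RELEASE" "$APP/current.new"
mv -T "$APP/current.new" "$APP/current"

readlink "$APP/current"   # prints /tmp/demo-app/releases/2024-01-01
```

After the rename, a targeted `opcache_reset()` or a timestamp revalidation brings the workers onto the new code paths.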
Sockets vs. TCP: architectural decisions with consequences
The handler is connected either via a Unix socket or via TCP. Sockets save overhead and usually deliver slightly better latencies on a single host. TCP is worthwhile if the web server and handler run separately, or if I want to distribute pools across several nodes to scale. For TCP, I define timeouts, keep-alive and backlog cleanly so that no 502/504 errors occur during load peaks. In Apache setups, I watch the number of active proxy workers; in Nginx, the limits for open connections. With LSAPI, LiteSpeed handles a lot internally, but I still check backlog and queues regularly under load.
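A TCP connection with explicit timeouts could be sketched in Nginx like this; the backend address and timeout values are assumptions to be tuned per setup:

```nginx
# FPM over TCP on a separate node -- timeouts sized to avoid 502/504 bursts.
upstream php_backend {
    server 10.0.0.12:9000;
    keepalive 16;                 # reuse connections to the handler
}

location ~ \.php$ {
    include fastcgi_params;
    fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
    fastcgi_pass php_backend;
    fastcgi_keep_conn on;         # required for upstream keepalive with FastCGI
    fastcgi_connect_timeout 3s;
    fastcgi_send_timeout 10s;
    fastcgi_read_timeout 30s;     # must exceed realistic script runtimes
}
```

The read timeout deserves the most care: set below real script runtimes it manufactures 504s, set far above them it hides hanging workers.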
Via the FPM status page, I monitor queue length, worker utilization and CPU saturation. A long queue with low utilization often points to bottlenecks in the frontend (e.g. too few Nginx workers) or to I/O stalls. Only when I know the bottleneck do I increase child processes or adjust network parameters.
Monitoring, metrics, and troubleshooting
For observation, I rely on holistic monitoring: web server logs, FPM status, system metrics (CPU, RAM, I/O), application logs and synthetic checks. Particularly valuable is the FPM slowlog for detecting outliers. I correlate P95/P99 latencies with CPU spikes, OPcache hit rate, number of running processes and database latencies. If P99 latency increases, I first check queues and timeouts between proxy and handler.
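The corresponding per-pool diagnostics can be switched on roughly like this; paths and thresholds are illustrative:

```ini
; per-pool diagnostics -- paths and thresholds are examples
pm.status_path = /fpm-status            ; protect behind auth or an IP allowlist
slowlog = /var/log/php-fpm/site.slow.log
request_slowlog_timeout = 5s            ; dump a stack trace for slow requests
request_terminate_timeout = 60s         ; hard kill for truly hanging workers
```

The status page exposes exactly the metrics mentioned above: listen queue, active/idle processes and max children reached.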
In an incident, I work from the outside in: 1) HTTP error codes and timing, 2) proxy/web server errors, 3) handler queues and worker states, 4) application logs, 5) backend systems (DB, cache, file system). Frequent causes of 502/504 are timeouts that are too strict, blocking upstreams or exhausted pool capacities. Simple countermeasures: realistic timeouts, clear limits and alerting that fires before exhaustion.
File systems, realpath and OPcache details
File accesses have a greater impact on latency than many expect. I pay attention to fast storage paths for code and templates. On network file systems (e.g. NFS), realpath and OPcache parameters are critical. A sufficiently large realpath_cache_size and a suitable TTL prevent constant path resolutions. In OPcache, I dimension memory_consumption, interned_strings_buffer and the number of cacheable scripts (max_accelerated_files) so that the hit rate stays high and rehashing is rare. I set validate_timestamps and revalidate_freq to match the deployment workflow so that changes take effect quickly but do not trigger checks every second.
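The realpath side of this might be sketched as follows; both values are assumptions to be validated against the codebase and file system:

```ini
; php.ini -- keep path resolution cheap, especially on network file systems
realpath_cache_size = 4M      ; caches resolved symlinks and paths per worker
realpath_cache_ttl = 600      ; seconds; weigh against deploy cadence on NFS
```

On symlink-based deployments a long TTL can pin workers to an old release path, so I shorten it or reset the cache as part of the rollout.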
For large codebases, preloading central classes and functions is worthwhile. This saves FPM or LSAPI workers CPU time in the hot path. I only test JIT where there are real CPU bottlenecks (lots of numerical logic). For classic CMS, JIT rarely brings advantages; a clean OPcache configuration and a fast I/O path matter more.
Database and cache connection: avoid latency
Many performance problems do not stem from the handler, but from databases and caches. I monitor query runtimes, connection pools and locks. Persistent connections can help, but they bind RAM in the workers. Therefore, I size pm.max_children in line with the database's connection limits and control timeouts. For Redis/Memcached access, low network latency and sensible timeouts are likewise crucial. I use tracing in the application to detect and reduce N+1 queries - this relieves both handler and backend.
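Sizing pm.max_children against the database limit can be sketched as a small sanity check; all numbers are illustrative assumptions, apart from 151 being MySQL's default max_connections:

```python
# Sanity check: worst-case PHP workers must fit the database's connection
# limit, with headroom for admin and replication connections.

def fits_db_limit(pools: list[int], conns_per_worker: int,
                  db_max_connections: int, reserved: int = 10) -> bool:
    """True if worst-case concurrent connections stay under the DB limit."""
    worst_case = sum(pools) * conns_per_worker
    return worst_case <= db_max_connections - reserved

# Three pools with pm.max_children of 80, 40 and 40, one persistent
# connection per worker, against MySQL's default max_connections of 151.
print(fits_db_limit([80, 40, 40], 1, 151))  # → False
print(fits_db_limit([40, 20, 20], 1, 151))  # → True
```

When the check fails, I either shrink the pools, drop persistent connections, or raise the database limit deliberately rather than by accident.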
Under high concurrency, it often makes sense to decouple write operations (queues, async jobs) and to cache read access. This keeps frontend requests short and reduces the variability of response times.
Container, chroot and OS aspects
Anyone who runs FPM or LSAPI in containers gains flexibility with versions and limits. Correct ulimits, an efficient process scheduler and suitable CPU/memory quotas are important. Quotas that are too strict cause stuttering in P99 latencies. In classic setups, chroot/jail or user isolation via namespaces helps to strictly separate file access. I keep images lean so that cold-start times stay short (e.g. after a rollout) and preheat pools before the traffic switches.
Log rotation and backpressure strategies are mandatory: full disks or blocking log writers have a direct effect on response times. On hosts with many cores, I also calibrate swappiness, HugePages (where appropriate) and NUMA strategies so that workers are not slowed down by cross-node memory access.
LSAPI and FPM units in operation
LSAPI benefits from stable, long-lasting processes and efficient request dispatch. I regulate the maximum number of requests per worker to limit memory leak effects and monitor restarts in live operation. With FPM I choose ondemand for sites with irregular traffic, dynamic for a constant load. I define pm.max_requests so that sporadic leaks or fragmentation do not play a role. I set request_slowlog_timeout close enough to recognize real hangs early, but not so close that complex admin operations constantly sound the alarm.
For both worlds, I verify the signal paths for reloads and define escalation paths in case workers do not restart cleanly. This prevents a midday deployment from disrupting the platform.
Checklist: Selection and tuning in practice
- Define the target: maximum compatibility (FPM) vs. minimum tail latency (LSAPI) vs. very hard separation (CGI).
- Clarify the server role: one-host setup (Unix socket) or separate tiers (TCP) - set timeouts/backlog accordingly.
- Pools per account/site: own UID/GID, tight limits for memory, requests and time; activate slowlog.
- OPcache: sufficient size, high hit rate, revalidate strategy suitable for deployment; preloading if necessary.
- Storage: fast path for code/cache, dimension realpath cache, observe NFS special features.
- DB/Cache: Connections and timeouts consistent with pm.max_children; eliminate N+1 queries.
- Caching layer: Combine reverse proxy, page cache and application cache; invalidate instead of emptying blindly.
- Observability: P95/P99, queue length, worker states, OPcache hit rate, I/O and backend latencies at a glance.
- Rollouts: graceful reloads, warmup, atomic deployments, selective cache invalidation.
- Capacity planning: burst reserves, no overbooking; realistically assess the costs/benefits of LSAPI licenses.
Briefly summarized: my classification
For mixed hosting environments, PHP-FPM offers the best balance of performance, isolation and compatibility. On LiteSpeed stacks, LSAPI brings measurable advantages in tail latencies and RAM consumption. CGI suits strict separation in niche cases, but falls behind in dynamic projects. I initially rely on FPM with clear pool limits, activated OPcache and a clean web server setup. If you expect a lot of concurrency, test LSAPI on LiteSpeed and then make a cost-benefit decision.


