...

Understanding PHP Workers: What they are and when they become a bottleneck

PHP workers are independent processes that execute PHP code and therefore handle every dynamic request to a website. If too many uncached requests reach the server at the same time, all available worker slots are occupied, a queue forms, and the bottleneck leads to long response times, TTFB spikes and errors.

Key points

I summarize the key messages compactly below so that you can quickly make the right decisions for performance and capacity.

  • Definition: PHP workers process requests serially, only one request per worker.
  • Bottleneck: Too few workers create queues, TTFB rises and timeouts loom.
  • Resources: Workers require CPU cores; a wrong ratio causes context switching.
  • Caching: The more hits come from the cache, the lower the worker load during traffic peaks.
  • Scaling: Adapt the number of workers to the page profile, plugins and interactions.

What are PHP Workers in the hosting context?

I think of PHP workers as digital waiters that serve each dynamic request individually. A worker reads the PHP script, triggers database queries and builds the HTML for the browser from the results. While a task is running, the worker remains bound until completion and only then becomes available again; it cannot work in parallel. In WordPress, workers also handle recurring tasks such as cron jobs, sending emails and security checks. This is precisely why the number and quality of workers massively influence the perceived speed of a website.

When and why does the worker bottleneck occur?

A bottleneck occurs as soon as more uncached requests arrive at the same time than workers are available. Each additional request then ends up in a queue and waits for a free slot. This increases the time to first byte, extends loading times and can cause checkout processes to be aborted. In stores or member areas, personalized content exacerbates the situation because the cache cannot answer many of the requests, which shifts the load directly onto the workers. In this situation, I achieve the greatest effect with sensible caching, optimized plugins and a coherent worker-to-CPU ratio.

Recognize symptoms: Reading metrics and logs correctly

I look first at the TTFB, because rising values indicate queues. Errors such as 504 Gateway Timeout occur when requests remain blocked for too long and abort hard. In the hosting panel, I recognize queues by high process counts combined with low network utilization, which is typical for blocked workers. Access logs then show many simultaneous requests to non-cached paths such as the shopping cart, checkout or personal dashboards. If response times in the backend increase at the same time, heavy admin actions are usually blocking individual workers for longer than necessary.

Important distinction: Web server vs. PHP-FPM

I make a clear distinction between web server workers (e.g. NGINX/Apache) and PHP-FPM workers. Thanks to keep-alive and HTTP/2, the web server can multiplex many connections and serve static assets with a high degree of parallelism. The real bottleneck, however, arises in PHP-FPM, where each child process handles exactly one request. Even if the browser opens dozens of requests in parallel, the number of PHP processes limits the simultaneous processing of dynamic paths. This distinction explains why pages with many static files appear fast while individual dynamic endpoints (checkout, login, REST API) still jam.

Optimal number of workers: CPU cores, RAM and app profile

The sensible number of workers depends on the proportion of dynamic pages, the plugin landscape and the available CPU cores. I never plan for significantly more workers than CPU cores, because permanent context switching increases latency. Two to four workers are usually sufficient for small blogs, while active stores and LMS platforms require significantly more. The decisive factor remains the interplay: more workers without CPU reserves bring no acceleration. That is why I test under load, measure the TTFB and check whether the queue disappears before I upgrade further.

Scenario | Uncached share | Workers | CPU cores | Effect | Action
Blog with cache | Very low | 2-4 | 2-4 | Fast delivery | Maintain cache, keep plugins lean
WooCommerce with peaks | Medium-high | 6-12 | 4-8 | Short waiting times | Relieve checkout, increase workers
Members/LMS | High | 8-16 | 8-16 | Fewer timeouts | Cache personalization, add CPU
API-heavy app | High | 8-20 | 8-20 | More even TTFB | Optimize queries, set limits

Rules of thumb for dimensioning

For a first feeling, I calculate with the simple approximation: required workers ≈ simultaneous uncached requests. This concurrency is obtained by multiplying the request rate by the average processing time. Example: 10 requests/s with 300 ms service time result in around 3 simultaneous PHP requests. To allow for safety reserves and short peaks, I double this value. Important: this figure must fit the available CPU cores and RAM; a worker without CPU time is just another waiting worker.
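
To make the arithmetic tangible, here is a minimal PHP sketch of this rule of thumb; the request rate and service time are assumed example values, not measurements from a real site.

```php
<?php
// Rule-of-thumb sketch (Little's law): concurrent uncached requests ≈ rate × service time.
// The numbers below are assumptions; replace them with your own measurements.

$requestsPerSecond  = 10;   // uncached requests per second at peak
$serviceTimeSeconds = 0.3;  // average PHP processing time per request

$concurrent = $requestsPerSecond * $serviceTimeSeconds;  // ≈ 3 simultaneous PHP requests
$workersWithReserve = (int) ceil($concurrent * 2);       // doubled for short peaks and safety

echo "Estimated worker count: {$workersWithReserve}\n";  // 6 in this example
```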

Calculate your memory budget correctly

Each PHP-FPM process consumes RAM, depending on the PHP version, active opcache and the loaded plugins. I measure the real footprint under load (ps/top), multiply it by pm.max_children and add the web server, database and cache services. This is how I prevent swapping and the OOM killer. As a rule, I keep a buffer of 20-30% free RAM. If consumption per process rises significantly, I interpret this as a signal for a plugin diet, fewer extensions or a more restrictive memory_limit per pool.
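
As a quick plausibility check, the following PHP sketch turns that calculation into code; all values (total RAM, reserved services, per-process footprint) are assumptions that have to be replaced with measured numbers.

```php
<?php
// Memory budget sketch: derive a safe upper bound for pm.max_children.
// All values are assumptions; measure the real per-process RSS under load (ps/top).

$totalRamMb     = 8192;                             // total RAM of the server
$reservedMb     = 2560;                             // web server, database, object cache, OS
$bufferMb       = (int) round($totalRamMb * 0.25);  // keep 20-30% free against swapping/OOM
$perWorkerRssMb = 80;                               // measured footprint of one php-fpm child

$availableMb = $totalRamMb - $reservedMb - $bufferMb;
$maxChildren = (int) floor($availableMb / $perWorkerRssMb);

echo "pm.max_children should stay at or below roughly {$maxChildren}\n";
```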

Caching as a relief layer

The more I can serve from the cache, the less work remains for the workers. Page cache, object cache and edge cache drastically reduce PHP executions. I encapsulate dynamic parts such as the shopping cart or personalized blocks with ESI or Ajax so that the rest remains cacheable. If you want to go deeper, the server-side caching guide offers helpful strategies for NGINX and Apache that genuinely relieve workers. This is how I noticeably reduce the TTFB and keep response times low under load.

I also take cache invalidation and warm-up strategies into account: after deployments or major product changes, I warm up critical pages and API routes. In stores, I preload category pages, bestsellers, the start page and the checkout to cushion cold-start peaks. For object caches, I pay attention to clean key strategies so that I do not discard hot sets unnecessarily.
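
A warm-up can be as simple as the following PHP sketch: it requests a list of critical URLs once so that the page cache is filled before real visitors hit cold workers. The URLs are placeholders for your own pages.

```php
<?php
// Warm-up sketch: fill the page cache for critical routes after a deployment.
// The URLs are placeholders; adjust them to your own shop or site.

$urls = [
    'https://example.com/',
    'https://example.com/bestsellers/',
    'https://example.com/category/featured/',
];

foreach ($urls as $url) {
    $ch = curl_init($url);
    curl_setopt_array($ch, [
        CURLOPT_RETURNTRANSFER => true,   // fetch quietly, the response body itself is irrelevant
        CURLOPT_TIMEOUT        => 10,
        CURLOPT_USERAGENT      => 'cache-warmup-script',
    ]);
    curl_exec($ch);
    printf("%s -> HTTP %d\n", $url, curl_getinfo($ch, CURLINFO_HTTP_CODE));
    curl_close($ch);
}
```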

Typical mistakes and expensive traps

Many initially suspect a lack of RAM or CPU as the main problem, although the workers' queue is the actual bottleneck. I therefore check whether cached pages remain fast and only dynamic paths get out of hand. Another misconception is that more workers solve everything; without additional cores this only produces heavy context switching and worse latency. Likewise, poorly written plugins tie up a worker for far too long, which increases perceived latency and degrades performance. I therefore reduce add-ons, optimize database queries and scale resources in unison.

WordPress-specific hotspots

  • admin-ajax.php and wp-json: Many small calls add up and block workers; I bundle requests and set sensible caches.
  • Heartbeat API: I limit its frequency in the backend so that there are not unnecessarily many simultaneous requests (see the sketch after this list).
  • WooCommerce wc-ajax: Shopping cart, shipping and coupon checks are often uncached; I reduce external API calls and optimize hooks.
  • Transients and options: Overfilled autoload options or expensive transient regeneration extends the PHP runtime and thus the time a slot stays occupied.
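
For the Heartbeat point, a small must-use plugin is usually enough. The sketch below uses the standard heartbeat_settings filter; the interval of 60 seconds is an assumption you should adapt to how your editors actually work.

```php
<?php
// Sketch: slow down the WordPress Heartbeat API so that open admin tabs do not
// generate unnecessarily many simultaneous admin-ajax.php requests.

add_filter('heartbeat_settings', function (array $settings) {
    $settings['interval'] = 60;   // one pulse per minute (assumed value) instead of every 15-60 s
    return $settings;
});

// Optional: remove the heartbeat on the frontend, where it is rarely needed.
add_action('init', function () {
    if (!is_admin()) {
        wp_deregister_script('heartbeat');
    }
});
```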

Practice: From three to eight workers - without congestion

Suppose a store runs only three workers and experiences checkout congestion in the evening. I first analyze the paths that do not come from the cache and measure the TTFB under load. Then I activate clean page and object caching and exclude only personalized areas. Next I increase the workers to eight and at the same time free up two additional CPU cores. In the next load test, the queues shrink and the error rate drops significantly.

Optionally, I also smooth out peaks by setting conservative limits for expensive endpoints in the web server (e.g. a low number of simultaneous upstream connections for the checkout), while delivering static and cached content without limits. This takes pressure off the FPM pool and stabilizes the TTFB across the board, even if individual user actions become briefly slower.
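
What such a limit can look like in NGINX is sketched below; the zone size, rate and path are assumptions, and the snippet has to be merged with the site's existing PHP handling.

```nginx
# Sketch: rate-limit an expensive dynamic endpoint so bursts queue in NGINX
# instead of flooding the FPM pool. Zone size, rate and path are assumptions.
# limit_req_zone belongs in the http {} context, limit_req in the matching location.

limit_req_zone $binary_remote_addr zone=checkout:10m rate=5r/s;

server {
    location ~ ^/checkout/ {
        limit_req zone=checkout burst=10;   # allow short bursts, delay the rest
        # ... combine with the site's existing PHP/FastCGI handling ...
    }
    # Static and cached locations remain without limits.
}
```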

Monitoring and load testing: tools that I use

I track TTFB, response time and error rate at short intervals to detect congestion early. For synthetic load, I use tools like k6 or Loader because they generate realistic peaks. Application logs help to identify slow queries and external API calls that tie up workers. I also check the PHP-FPM status page to keep an eye on occupied, waiting and free slots. If slots stay permanently full, I increase workers and CPU step by step and verify each step with a test load.

Interpret metrics reliably

  • max children reached: The upper limit has been hit and requests are waiting; time for more workers or faster caching (the pool setting in the sketch below exposes this counter).
  • listen queue: A growing queue confirms congestion in front of FPM; I check web server and upstream settings.
  • request_slowlog_timeout and slowlog: Pinpoint exactly where in a request workers get stuck.
  • upstream_response_time in web server logs: Shows how long PHP takes to respond; I correlate it with request_time and status codes (502/504).
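
Most of these values come from the PHP-FPM status page, which has to be enabled explicitly; a minimal pool snippet is shown below. The path is an assumption, and access to it should be restricted in the web server.

```ini
; Sketch: expose the PHP-FPM status page in the pool configuration.
; The counters mentioned above (active/idle processes, listen queue,
; max children reached) then become visible, e.g. via /fpm-status?full.
pm.status_path = /fpm-status
```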

Correctly interpreting specific upgrade signals

If the TTFB rises noticeably under traffic despite active caching, worker capacity is usually lacking. If 504 errors accumulate during actions such as checkout or login, there are real traffic jams. More simultaneous orders, spontaneous campaigns or launches justify additional workers so that transactions run smoothly. If the 503 error status occurs, it is worth taking a look at this guide to WordPress 503 errors, because faulty processes and limits produce similar effects. I then decide whether to adjust workers, CPU or timeouts.

Configuration: PHP-FPM and sensible limits

With PHP-FPM, pm.max_children determines the maximum number of simultaneous processes and thus the upper limit of workers. I use pm.start_servers and pm.min/max_spare_servers to control how quickly capacity is available. pm.max_requests protects against memory leaks by restarting processes after X requests. request_terminate_timeout reins in long runners so that a worker does not hang forever and block slots; I set it carefully for checkout paths. These knobs have a direct effect on queues, so I only change them together with tests.
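
Put together, a pool configuration along these lines could look as follows; the numbers are assumptions for a mid-sized store and only make sense after testing against your own load profile.

```ini
; Sketch of a PHP-FPM pool (e.g. www.conf) using the directives discussed above.
; All numbers are assumptions; validate them with load tests before adopting them.
pm = dynamic
pm.max_children = 8               ; hard upper limit of simultaneous PHP workers
pm.start_servers = 4
pm.min_spare_servers = 2
pm.max_spare_servers = 6
pm.max_requests = 500             ; recycle processes regularly to contain memory leaks
request_terminate_timeout = 60s   ; stop long runners so they cannot hold a slot forever
```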

I choose the pm mode consciously: dynamic for fluctuating load, ondemand for very sporadic load on small instances, and static for constantly high peaks when CPU and RAM are clearly reserved. I also activate opcache with sufficient memory and revalidate scripts efficiently so that workers need less CPU per request. With request_slowlog_timeout and slowlog I find hotspots in the code without enlarging the pool. Finally, I check whether PHP-FPM is connected via a Unix socket or TCP; locally I prefer sockets, while containers or separate hosts often call for TCP.
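
The slowlog and opcache settings mentioned here map to a handful of directives; the following values are assumptions meant as a starting point, not a recommendation for every site.

```ini
; Pool configuration (sketch): log requests that run longer than the threshold.
request_slowlog_timeout = 5s
slowlog = /var/log/php-fpm/www-slow.log   ; path is an assumption and must be writable

; php.ini (sketch): give opcache enough memory so workers need less CPU per request.
opcache.enable = 1
opcache.memory_consumption = 192
opcache.validate_timestamps = 1
opcache.revalidate_freq = 60              ; re-check changed scripts at most once per minute
```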

Checklist for stores, memberships and LMS

For stores, I keep an eye on dynamic pages such as the shopping cart, checkout and "My account" and reduce external calls. In member areas, I check every profile and dashboard view for superfluous queries. In LMS platforms, I rely on the object cache for course lists while rendering progress indicators efficiently. In all cases, I aim for few, short requests per action so that workers are quickly free again. Only when this homework is done do I scale workers and CPU in parallel.

Sessions, locking and concurrency traps

I pay attention to session locks: by default, PHP processes requests of the same user session serially. If expensive actions (e.g. payment callbacks) run within the same session, further requests from this user are blocked, resulting in TTFB spikes and perceived hang-ups. I minimize the use of sessions, store only the essentials in them and switch to high-performance handlers (e.g. in-memory). In WooCommerce, I watch out for session and transient storms around the shopping cart.
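
In plain PHP, releasing the lock early looks like the sketch below; it assumes the data read from the session is only needed at the start of the request.

```php
<?php
// Sketch: release the session lock as early as possible so that one slow request
// does not serialize all further requests of the same user.

session_start();
$userId = $_SESSION['user_id'] ?? null;   // read what the request needs ...

session_write_close();                    // ... then release the lock right away

// Expensive work (payment callbacks, reports, external APIs) now runs without
// blocking other requests that share this session.
```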

Database and external services as multipliers

Slow SQL queries or rate limits of external APIs often tie up workers. I optimize indices, reduce N+1 queries, use query caches (object cache) and limit external calls with timeouts and retry logic. If payment, shipping or license servers become sluggish, I deliberately limit parallelism on these routes so that the entire pool does not end up waiting. This leaves free slots for other user actions.
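
In WordPress, strict timeouts for outbound calls are a one-line change; the sketch below uses wp_remote_get with an assumed endpoint, timeout and fallback transient.

```php
<?php
// Sketch: fail fast on external calls so a slow third-party API cannot pin a worker.
// Endpoint, timeout and the fallback transient key are assumptions.

$response = wp_remote_get('https://api.example.com/rates', [
    'timeout'     => 3,   // seconds; better a quick fallback than an occupied slot
    'redirection' => 2,
]);

if (is_wp_error($response)) {
    // Fall back to a cached value instead of blocking the checkout.
    $rates = get_transient('shipping_rates_fallback');
} else {
    $rates = json_decode(wp_remote_retrieve_body($response), true);
}
```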

Provider selection and hosting tuning with a view to workers

I prefer hosting plans where I can flexibly scale PHP workers and expand CPU cores in parallel. High-performance providers deliver clean caching layers, fast NVMe storage and clear metrics in the panel. As an introduction to the technical evaluation, the PHP hosting guide makes the central criteria and options tangible. It is important to me that operation is not interrupted during traffic peaks and that additional capacity becomes available without a reboot. This is how I keep TTFB, error rate and throughput in balance.

Plan for peaks and protection against bot load

I agree on an escalation path in advance: how quickly workers and CPU may grow temporarily, who monitors the metrics, and which timeouts apply. At the same time, I minimize bot and spam load via sensible rate limits on dynamic endpoints. Every unnecessary request that is blocked means a free worker slot for real customers.

Key takeaways

PHP Workers decide how quickly dynamic pages react under load, because each process only handles one request at a time. I minimize the load with consistent caching, clear up blocking plugins and establish a sensible worker-to-CPU ratio. During peaks, I carefully increase workers and test whether the queue disappears and the TTFB drops. Logs, FPM status and load tests provide me with evidence as to whether I am scaling correctly or need to tighten timeouts. If you have these levers under control, you avoid bottlenecks, protect transactions and ensure a noticeably faster User experience.
