PHP's session garbage collection can block requests: when it has to clean up tens of thousands of session files, it ties up a PHP process for a long time and other requests have to wait. I'll show how probabilistic cleanup, file locks, and slow I/O lead to noticeable lag, and how I avoid these delays with explicit settings, cron jobs, and RAM-based storage so the website stays responsive.
Key points
- Root cause: Probabilistic GC, file I/O, and locks cause delays.
- Risk factor: Many sessions (e.g., 170,000 files) prolong each GC run.
- WordPress: Admin pages and the Heartbeat API exacerbate delays.
- Hosting: RAM, SSDs, and isolation reduce GC costs.
- Solution: Cron-based cleanup and Redis speed up requests.
PHP session garbage collection explained briefly
Sessions store state between requests, usually as files in the file system. Garbage collection removes stale files whose modification time is older than session.gc_maxlifetime, often 1440 seconds. By default, PHP starts this cleanup probabilistically via session.gc_probability and session.gc_divisor, often on 1 in 1,000 requests. That sounds harmless, but under heavy traffic it constantly hits some visitor who then has to sit through the entire run. The more files the session directory contains, the longer the cleanup takes.
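The trigger condition is simple arithmetic, which makes the impact easy to estimate; the request rate below is an example value:

```shell
# Chance that a single request triggers GC, for the common
# 1-in-1,000 setting (gc_probability / gc_divisor):
awk 'BEGIN { printf "GC chance per request: %.1f%%\n", (1/1000)*100 }'
# At an assumed 10,000 requests/hour, expected GC runs per hour:
awk 'BEGIN { printf "expected GC runs/hour: %d\n", 10000*(1/1000) }'
```

So at that traffic level, roughly ten requests per hour stall behind a full cleanup run.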
Why does cleanup block requests?
A GC run must list the session directory, stat each file, and delete old entries, which on slow I/O can quickly cost whole seconds. With 170,000 files, a long chain of system calls runs back to back, straining CPU, RAM, and storage. PHP processes started in parallel sometimes try to delete the same files simultaneously, causing additional file locks. That drives up waiting times because processes slow down or block one another. If you dig deeper into session locking, you'll see how strongly locking shapes the response-time profile and drives up time to first byte, especially during load peaks, which is exactly what I avoid by decoupling GC from requests.
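To see what a file-based GC run actually does, the directory walk can be simulated in the shell; the temp directory and file count are stand-ins for a real session path:

```shell
# Simulate a session directory and the stat-per-file sweep GC performs.
dir=$(mktemp -d)                       # stand-in for session.save_path
for i in $(seq 1 1000); do : > "$dir/sess_$i"; done
# GC must stat every file just to compare mtimes against the lifetime:
find "$dir" -type f -mmin +30 | wc -l  # all files are fresh, so prints 0
rm -rf "$dir"
```

Even when nothing is deleted, every file costs at least one stat() call, which is why the run time grows linearly with the file count.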
WordPress: slow admin pages due to sessions
The admin area needs more CPU and database access than the front end, which makes any extra delay noticeable. If garbage collection starts at exactly that moment, the time until the HTML output is complete increases significantly. The Heartbeat API additionally polls the server and, with bad luck, collides with a GC run. The backend then feels sluggish and clicks take longer, even though the actual logic isn't doing much. I mitigate this by setting the GC probability in requests to zero and starting the cleanup work outside of request handling.
Hosting services and infrastructure
On shared systems, many projects share I/O capacity, so a single GC run can slow down other websites. Better hardware with fast NVMe storage and sufficient RAM reduces the cost per file access. Clean isolation per customer or container prevents external load peaks from affecting your project. I also check process limits and I/O schedulers to ensure that many simultaneous PHP workers do not stall. If you want to plan in more detail, focused hosting optimization offers concrete starting points for decoupling GC runs and stabilizing latency.
Sessions in the file system vs. RAM stores
File-based sessions are simple but cause a lot of overhead when searching, checking, and deleting. RAM-based stores such as Redis or Memcached manage keys efficiently, respond quickly, and have built-in expiration mechanisms. This saves system calls, reduces latency, and lowers the likelihood of errors from file locks. I prefer RAM storage as soon as visitor numbers grow or the admin area becomes sluggish. The switch can be made quickly, and a guide to session handling with Redis helps keep the configuration clear and make better use of resources.
Useful PHP settings for sessions
I set up garbage collection so that no request accidentally triggers it. To do this, I set the probability to zero, schedule the cleanup via cron, and match the lifetime to the risk profile. I also enable strict mode so that PHP only accepts valid IDs. I check the save path to ensure that no slow NFS mounts or overfilled directories slow things down. The following overview shows common defaults and proven values that I pick depending on the use case to measurably improve performance.
| Setting | Typical standard | Recommendation | Effect |
|---|---|---|---|
| session.gc_maxlifetime | 1,440 seconds | 900–3,600 seconds | A shorter lifetime means fewer old files and less I/O. |
| session.gc_probability / session.gc_divisor | 1 in 1,000 (common) | 0 / 1 | No cleanup in requests; cron takes over. |
| session.save_handler | files | redis or memcached | RAM storage avoids file locking and shortens latencies. |
| session.use_strict_mode | 0 | 1 | Only valid IDs; fewer collisions and risks. |
| session.save_path | system path | own fast path | Shallow directory depth, local SSD, fewer stat() calls. |
In addition, I note other switches that improve stability and security without creating overhead:
- session.use_only_cookies=1, session.use_cookies=1 for clear cookie usage without URL IDs.
- session.cookie_httponly=1, session.cookie_secure=1 (for HTTPS) and a suitable session.cookie_samesite (usually Lax) to prevent leaks.
- session.lazy_write=1 to save unnecessary write accesses when the content does not change.
- session.serialize_handler=php_serialize for modern serialization and interoperability.
- Increase session.sid_length and session.sid_bits_per_character to make IDs more robust.
Specific configuration: php.ini, .user.ini, and FPM
I anchor the settings where they can be reliably applied: globally in php.ini, per pool in PHP‑FPM, or locally via .user.ini in projects that have separate requirements. A pragmatic set looks like this:
```
; php.ini or FPM pool
session.gc_probability = 0
session.gc_divisor = 1
session.gc_maxlifetime = 1800
session.use_strict_mode = 1
session.use_only_cookies = 1
session.cookie_httponly = 1
session.cookie_secure = 1
session.cookie_samesite = Lax
session.lazy_write = 1
; fast, local path or RAM store
; session.save_handler = files
; session.save_path = "2;/var/lib/php/sessions"
```
In the FPM pool, I can hard-set values so that individual apps cannot overwrite them:
```
; /etc/php/*/fpm/pool.d/www.conf
php_admin_value[session.gc_probability] = 0
php_admin_value[session.gc_divisor] = 1
php_admin_value[session.gc_maxlifetime] = 1800
php_admin_value[session.save_path] = "2;/var/lib/php/sessions"
```
File system and save path layout
Large directories with hundreds of thousands of files are slow. I therefore split the session directory into subfolders so that directory lookups stay short:

```
session.save_path = "2;/var/lib/php/sessions"
```

The leading 2 tells PHP to spread session files across two levels of subfolders derived from hash parts of the session ID. Note that PHP's request-time GC does not clean these subdirectories, which is one more reason to hand cleanup over to cron. In addition, mount options such as noatime and a file system with good directory indexes help. I avoid NFS for sessions where possible, or enforce sticky sessions on the load balancer until a RAM store is in production.
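One caveat: PHP does not create this subdirectory tree itself (the PHP source ships a helper script, ext/session/mod_files.sh, for exactly this). A minimal sketch for the hex session-ID alphabet (session.sid_bits_per_character = 4), using a temp directory in place of the real path:

```shell
# Pre-create the 2-level tree a "2;..." save_path expects.
base=$(mktemp -d)                         # stand-in for /var/lib/php/sessions
chars="0 1 2 3 4 5 6 7 8 9 a b c d e f"
for a in $chars; do
  for b in $chars; do
    mkdir -p "$base/$a/$b"
  done
done
find "$base" -mindepth 2 -type d | wc -l  # 16 * 16 = 256 leaf directories
rm -rf "$base"
```

If session.sid_bits_per_character is set to 5 or 6, the alphabet is larger and the loop must cover the additional characters.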
Reducing lock times in the code
Many lags are caused not only by GC but also by unnecessarily long locks. I keep the session open as briefly as possible:
```php
<?php
session_start();
// read
$data = $_SESSION['key'] ?? null;
session_write_close(); // release the lock early

// expensive work without holding the lock
$result = heavy_operation($data);

// reopen and write only if necessary
session_start();
$_SESSION['result'] = $result;
session_write_close();
```
If I'm only reading, I start the session with read_and_close so that PHP releases the lock immediately and never enters write mode:

```php
<?php
// Read-only access: the lock is released right after reading
session_start(['read_and_close' => true]);
$data = $_SESSION['key'] ?? null;
```
This reduces the likelihood that parallel requests will have to wait for each other. In WordPress plugins, I check whether session_start() is necessary at all and move the call to late hooks so that the core flow is not blocked.
RAM store configuration for sessions
With Redis or Memcached, I pay attention to timeouts, databases, and storage policy. A robust example for Redis looks like this:
```
session.save_handler = redis
session.save_path = "tcp://127.0.0.1:6379?database=2&timeout=2&read_timeout=2&persistent=1"
session.gc_maxlifetime = 1800
session.gc_probability = 0
session.gc_divisor = 1
```
Because RAM stores manage expiration times themselves, I don't need to worry about file GC. I run sessions separately from caches (different database or key prefix) so that evictions of cache keys don't accidentally discard sessions. I set the eviction policy to volatile-lru so that only keys carrying a TTL are evicted when memory runs low.
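On the Redis side, the eviction behavior is set in redis.conf; the memory limit below is an example value:

```
# redis.conf (example values)
maxmemory 256mb
maxmemory-policy volatile-lru   # evict only keys that carry a TTL
```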
External cleanup via cron: how I equalize requests
The safest decoupling comes from starting GC outside the request flow. I set the probability to 0 in php.ini or via .user.ini and have cron regularly call a small script that triggers the cleanup. Ideally, cron runs every minute or every five minutes, depending on traffic and the desired level of hygiene. It is important that the cron job runs as the same user as the web server so that permissions match. I also check logs and metrics to make sure the scheduled routine runs reliably.
For file-based sessions, I use two tried-and-tested variants:
- A one-liner that calls the internal GC (as of PHP 7.1); note that session_gc() requires an active session, hence the session_start():

```
*/5 * * * * php -d session.gc_probability=1 -d session.gc_divisor=1 -r 'session_start(); session_gc();' >/dev/null 2>&1
```
- A find cleanup that compares the mtime against the desired lifetime:
```
*/5 * * * * find /var/lib/php/sessions -type f -mmin +30 -delete
```
I keep an eye on the cron runtime. If five minutes isn't enough, I increase the frequency or reduce the lifetime. In high-traffic setups, cron runs every minute to keep the directory small.
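To watch the runtime without extra tooling, the find variant can be wrapped with GNU time so each run appends its duration to a log; the paths are examples:

```
# Log how long each cleanup takes (assumes GNU time at /usr/bin/time)
*/5 * * * * /usr/bin/time -a -o /var/log/session-gc.log find /var/lib/php/sessions -type f -mmin +30 -delete
```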
Diagnosis and monitoring
I recognize GC peaks by increased response times and noticeable I/O spikes in monitoring. Tools in the WordPress context such as Query Monitor help identify slow hooks, plugins, and admin calls. A look at access and error logs shows when requests take significantly longer. Many small 200 ms peaks are normal, but multi-second outliers indicate locking or GC. If you also track the number of files and the directory size, you can see how the session directory fills up and why a scheduled cleanup is necessary.
Practical tools for troubleshooting:
- Enable php-fpm slowlog and request_slowlog_timeout to see blocking points.
- iotop, iostat, pidstat, and vmstat to detect I/O pressure and context switching.
- strace -p PID temporarily to observe open files and locks.
- find | wc -l on the session path to measure the file volume.
- TTFB and p95/p99 latencies in APM to quantify improvements after migration.
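The file-count and size measurements above can be bundled into a quick health check; the default path is an example and can be overridden:

```shell
# Session-path health check; pass a different path as first argument.
d=${1:-/var/lib/php/sessions}
echo "session files: $(find "$d" -type f 2>/dev/null | wc -l)"
echo "disk usage:    $(du -sh "$d" 2>/dev/null | cut -f1)"
```

Run from cron and logged, these two numbers show whether the scheduled cleanup is keeping up.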
WordPress-specific checks
I check plugins that call session_start() early and replace candidates with unnecessary session usage with alternatives. In the admin area, I reduce the Heartbeat frequency or limit it to editor pages. Caches must not be bypassed by sessions, otherwise the effect is lost; that's why I review cache exceptions carefully. Also important: no sessions for guests if there is no reason for them. This noticeably reduces the data volume per day, and the scheduled GC has less to do.
In WooCommerce environments, I pay particular attention to shopping cart and fragment features that create sessions for anonymous users. It is often sufficient to start sessions only when there are genuine interactions (login, checkout). In addition, I ensure that WP-Cron does not cause a lot of load in parallel: I have WP-Cron triggered by a system cron and deactivate execution per request. This prevents cron jobs from conflicting with session operations.
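Moving WP-Cron to a system cron takes two steps: disable the per-request trigger in wp-config.php, then call wp-cron.php on a fixed schedule; the domain below is a placeholder:

```
# wp-config.php: define('DISABLE_WP_CRON', true);
# System crontab, replacing the per-request trigger:
*/5 * * * * curl -fsS "https://example.com/wp-cron.php?doing_wp_cron" >/dev/null 2>&1
```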
Security, lifetime, and user experience
Longer lifetimes keep users logged in but increase the number of old sessions. Shorter values reduce the load but can end logins earlier. I therefore choose periods that balance risk and convenience, for example 30–60 minutes in the admin area and shorter for anonymous users. For particularly sensitive content, I enforce strict mode and secure cookies against XSS and transport weaknesses. This keeps data protected while performance remains reliable.
After login, I rotate session IDs (session_regenerate_id(true)) to prevent fixation and consistently use the httponly, secure, and samesite cookie flags. In single sign-on scenarios, I choose the SameSite value (Lax vs. None) deliberately to keep the user experience stable.
Clusters, load balancers, and sticky sessions
If you operate multiple app servers, you should only use file-based sessions with sticky sessions, otherwise users will lose their status. A central RAM store is better. I check the latency between the app and the store, set timeouts to be short but not aggressive, and plan for failover (e.g., Sentinel/Cluster for Redis). During maintenance, it is important to choose TTLs so that a short outage does not immediately lead to mass logouts.
Economic considerations and migration path
Switching to Redis or Memcached adds operating cost, but it saves time on every request and reduces support cases. Anyone who works in the admin area frequently will notice the difference immediately. I measure the savings in faster deployments, less frustration, and fewer interruptions. When load grows, I plan the migration early rather than late to avoid bottlenecks. A clear roadmap includes testing, rollout, and monitoring until latency is stable and the GC runs stay unremarkable.
I proceed step by step with the switch: in staging, I activate the RAM store, run synthetic load, and check p95/p99 latencies. Then I roll out in small percentages using feature flags and monitor error rates and timeouts. A rollback stays easy if I vary session.name between backends so that sessions in the old and new stores do not collide. Important metrics are: session files per hour (should decrease), median TTFB (should decrease), 5xx rate (should remain stable), and the percentage of requests holding a session lock for over 100 ms (should decrease significantly).
Briefly summarized
The PHP session GC causes lag because randomly triggered cleanup runs set off long file operations and locks. I mitigate this by setting the probability to zero in requests, scheduling cleanup via cron, and moving sessions to RAM stores. Hosting resources with fast NVMe and sufficient RAM further reduce blockages. WordPress benefits noticeably when Heartbeat is curbed, plugins are reviewed, and unnecessary sessions are avoided. Following these steps reduces response times, prevents blockages, and keeps the admin interface responsive even under high traffic.


