The opcode cache decides whether your WordPress site recompiles PHP on every request or serves it straight from RAM. I'll show you why a missing OPcache burdens the CPU, drives up TTFB and severely limits scaling.
Key points
Before I go into detail, I'll summarize the most important findings briefly and clearly so that you know the performance levers right away. Without OPcache, PHP recompiles on every request, which wastes time and resources and makes pages unresponsive. With OPcache enabled, bytecode and code paths run from memory, so requests return faster and load spikes escalate less often. In combination with page and object caching, OPcache increases efficiency and brings the necessary calm to the foundation. Properly configured, OPcache noticeably increases the sustainable number of users per server core and reduces the error rate during peaks. These points make the difference between a sluggish system and a fast installation with reliable performance.
- OPcache saves compilation time and stabilizes TTFB.
- CPU load decreases, capacity per core increases.
- Scaling succeeds, peaks remain controllable.
- PHP 8+ brings additional performance.
- Monitoring keeps hit rate and memory in view.
Why WordPress slows down without an opcode cache
WordPress loads many PHP files with each page request. Without OPcache, each of them is parsed every time, converted into a syntax tree and recompiled into bytecode, which extends the computing time unnecessarily. In audits I regularly see double to triple execution times because the same routines start from scratch for every request and generate avoidable load on the CPU. This repetition blocks FPM workers, delays responses and causes TTFB to rise sharply. Throughput drops under concurrent access, while the error rate (502/504) rises during peaks. The more plugins and heavy themes are involved, the more the cost of each uncached compilation is felt.
How OPcache works in detail
OPcache stores the compiled PHP bytecode in shared memory and serves the same code directly from RAM as long as the timestamps are unchanged, so disk accesses and recompiling are no longer necessary. I benefit from the fact that the parser and compiler steps are eliminated and the engine only has to execute what is already available as bytecode. This behaviour significantly reduces the overhead per request and stabilizes response times even under load. With WordPress, I therefore set up OPcache as the first measure before I start caching objects or pages. The savings accumulate across many small files and make the difference between a strained and a relaxed server load.
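The cache-or-compile decision can be made visible with OPcache's own API; a minimal sketch, assuming the extension is loaded and using a placeholder path (`opcache_is_script_cached()` and `opcache_compile_file()` are part of the extension):

```php
<?php
// Sketch: check whether a script already sits in shared memory and
// compile it proactively if not. The path is a placeholder.
$file = '/var/www/html/wp-load.php';

if (function_exists('opcache_is_script_cached') && is_file($file)) {
    if (opcache_is_script_cached($file)) {
        echo "Served from shared memory - no parse/compile step.\n";
    } else {
        opcache_compile_file($file); // compile once, reuse on later requests
        echo "Compiled into OPcache.\n";
    }
}
```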
Measurable effects: TTFB, CPU and capacity
With OPcache enabled, I often see up to three times shorter execution times for repeated requests, which shortens the TTFB and increases the time budget for rendering. At the same time, CPU usage in typical WordPress workloads drops by 50-80 % because compilation work is eliminated and workers are freed up more quickly. The result is a higher number of parallel users served on identical hardware and fewer outliers in the P95/P99 range. For marketing campaigns or seasonal peaks, this means fewer abandonments, more completed shopping baskets and more stable rankings. These effects add up as soon as page and object caching are also integrated, but without OPcache the foundation remains inefficient and the layers above it start to falter sooner.
OPcache and other caches in interaction
So that you can clearly separate the roles, I will contrast the layers and show how they complement but do not replace each other: OPcache accelerates code execution, while page/object caches mitigate content and data access; only together do sites reach their full speed. I start with OPcache because it speeds up every PHP path and takes the pressure off the CPU. I then use page caching to deliver recurring pages directly and object caching to reduce queries against the database. If the lower layer is missing, the upper layers cannot sufficiently compensate for load jumps. The following table provides a quick orientation for selection and expectation.
| Caching type | Where stored | Benefits for WordPress | Typical gain |
|---|---|---|---|
| OPcache | Server RAM | Stores PHP bytecode, eliminates parsing/compiling | Up to 3× shorter execution time |
| Object cache | Redis/Memcached | Holds result sets of DB queries | Noticeably less DB load |
| Page cache | Disk/Proxy/CDN | Serves ready-made HTML to guests | Almost immediate responses |
Optimal OPcache settings for WordPress
I always set opcache.enable=1, size the memory generously (128-512 MB depending on the plugin landscape) and increase max_accelerated_files so that the index remains complete and the hit rate does not deteriorate. In production, I deactivate automatic timestamp checks or lower their frequency so that the cache does not invalidate unnecessarily, and schedule controlled clears instead. For large sites, a dedicated memory pool pays off because it prevents out-of-memory events and therefore does not impair JIT performance. I regularly check the hit rate (>95 %), the free shared memory and orphaned entries to keep the cache healthy. For details on a systematic setup, it's worth taking a look at my OPcache configuration guide, which leads to stable response times in just a few steps.
Preloading and JIT: benefits and limitations
PHP has supported preloading since 7.4, in which selected files are already loaded in the master process and placed in memory. In classic WordPress setups, however, this only brings manageable added value because core and many plugins load very dynamically and the code paths vary depending on the route. Preloading is particularly useful in homogeneous, framework-heavy projects with clear hot paths. If you want to test it, keep the preload list small, stable and version-proof and note that an FPM reload rebuilds the preload set.
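A curated preload file could look like the following sketch; the paths are placeholders, and `opcache_compile_file()` is the documented way to load scripts into the master process when `opcache.preload` points at this file:

```php
<?php
// /var/www/preload.php - hypothetical curated preload list.
// Executed once by the FPM master when opcache.preload is set.
$files = [
    '/var/www/html/wp-includes/load.php',
    '/var/www/html/wp-includes/plugin.php',
];

foreach ($files as $file) {
    if (is_file($file)) {
        opcache_compile_file($file); // bytecode lands in the master's SHM
    }
}
```

Keep this list small and stable; every FPM reload re-executes it.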
I don't see any noticeable advantage with JIT in content workloads. Many WordPress requests are I/O and template-driven, not numerically heavy. An aggressive JIT mode consumes shared memory, which the OPcache lacks. I use a conservative approach in production: JIT off or at a moderate level so that the bytecode cache has maximum space.
; Excerpt php.ini - conservative, WP-compatible settings
opcache.enable=1
opcache.memory_consumption=256
opcache.max_accelerated_files=100000
opcache.validate_timestamps=0
; revalidate_freq only takes effect when validate_timestamps=1
opcache.revalidate_freq=60
opcache.save_comments=1
; JIT reduced or deactivated
opcache.jit=0
; Alternatively moderate:
; opcache.jit=1205
; Optional preloading (only if curated)
; opcache.preload=/var/www/preload.php
; opcache.preload_user=www-data
Recognize and correct misconfigurations
Many installations suffer from a memory pool that is too small, too few accelerated files or aggressive timestamp validation, which significantly weakens the effect of OPcache. I analyze phpinfo(), observe the caching engine's statistics and compare them with real deployments to find leaks and thrashing behavior. If plugin sets or themes grow, the cache has to keep up, otherwise the hit rate will drop and execution times will drift upwards. I use clear limits: no OOM during the course of the day, a hit rate close to 100 %, revalidate_freq in seconds instead of milliseconds. You can find a structured checklist in my guide to fixing misconfigurations, which defuses the typical stumbling blocks and secures stability.
Invalidations and deployments without loss of performance
A common error is completely emptying the cache after every small update, which makes loading times explode in the short term and lets users feel the delay. I therefore plan controlled invalidations at file level, roll out releases at off-peak times and run warm-up processes. For CI/CD, I use preloading scripts that execute critical routes in advance and load bytecode into memory before traffic arrives. In this way, I avoid performance dips and keep page speed metrics stable across deployments. I summarize the most important tactics in my article on OPcache validation, so that releases land softly and without collateral damage.
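File-level invalidation can be sketched with OPcache's own functions; the file list is hypothetical (in practice it would come from the release diff), and `opcache_invalidate()`/`opcache_compile_file()` are the documented API calls:

```php
<?php
// Sketch: targeted invalidation plus warm-up after a deploy,
// instead of a full cache reset. $changed is a placeholder list.
$changed = [
    '/var/www/html/wp-content/themes/mytheme/functions.php',
    '/var/www/html/wp-content/plugins/myplugin/myplugin.php',
];

foreach ($changed as $file) {
    if (function_exists('opcache_invalidate')) {
        opcache_invalidate($file, true); // force=true: drop stale bytecode
    }
}

// Warm-up: recompile the changed files before traffic arrives.
foreach ($changed as $file) {
    if (function_exists('opcache_compile_file') && is_file($file)) {
        opcache_compile_file($file);
    }
}
```

Run such a script from the deploy pipeline in the FPM context (e.g. via an internal HTTP endpoint), since the FPM cache is not reachable from a plain CLI call.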
File system, paths and real path cache
Many problems do not arise in OPcache itself, but in its interaction with the file system. Different paths to the same file (e.g. via symlinks, chroots or multiple mount points) can create duplicates and bloat the index. I therefore pay attention to consistent include paths and use the defaults opcache.use_cwd=1 and opcache.revalidate_path=0 so that files remain unique. In multi-tenant environments, I additionally secure isolation with opcache.validate_permission=1 and opcache.validate_root=1 so that no tenant can see foreign paths. On NFS shares, I reduce the check frequency and deploy atomically (release symlink) so that timestamp drift does not trigger thrashing invalidations.
An often forgotten tuning knob is PHP's realpath cache. It stores resolved paths and reduces expensive stat calls per request. For larger WP installations, I set it higher so that frequently used paths are not constantly resolved again.
; Accelerate path resolution
realpath_cache_size=1M
realpath_cache_ttl=600
Multisite, MU plugins and Composer structures
WordPress multisite, extensive MU plugins and Composer-based setups bring many small files into play. To keep the index complete, I increase max_accelerated_files early on (80,000-200,000, depending on the size) and give the shared memory reserves. Make sure that identical files are not included via different paths (e.g. changing symlink bases), otherwise the same bytecode will end up in the cache several times. I avoid dynamically generated PHP files in production; if they are unavoidable, I shield them with stable timestamps or blacklists so that no permanent recompilation is triggered. Composer autoloaders are harmless but numerous - a generous index has a direct impact on the hit rate here.
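If dynamically generated files cannot be avoided, they can be excluded from caching via a blacklist file; `opcache.blacklist_filename` is the directive for this, while the paths shown are assumptions:

```ini
; php.ini: point OPcache to a blacklist file
opcache.blacklist_filename=/etc/php/opcache-blacklist.txt
```

The referenced file lists one path prefix per line, e.g. a cache directory that plugins write PHP into:

```ini
; /etc/php/opcache-blacklist.txt
/var/www/html/wp-content/cache/
```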
Hosting influence: PHP version, FPM worker and RAM
With PHP 8.0+ I already get a noticeable boost compared to 7.4, and newer 8.x releases bring further significant gains, which raises the baseline that OPcache builds on. I activate enough FPM workers, but no more than the server can actually handle, so that context switches and swap risks remain low. The shared memory for OPcache needs reserves that cushion growth and do not generate constant eviction pressure. WordPress often runs more smoothly on shared plans with good basic settings than on untuned VPS instances, because the bytecode cache is properly dimensioned there. The decisive factor is a harmonious combination of version, number of processes and RAM that fits the actual load.
CLI, WP-Cron and background jobs
In addition to FPM, many WordPress tasks run via the CLI: WP-Cron, indexers, image processing, imports or WP-CLI commands. By default, OPcache is disabled for the CLI, which means that recurring jobs recompile every time. On servers with frequent CLI runs, I enable OPcache for the CLI and add a file cache. This allows bytecode artifacts to be reused between CLI calls and noticeably speeds up repeated tasks.
; Use OPcache for CLI jobs as well
opcache.enable_cli=1
opcache.file_cache=/var/cache/php/opcache
opcache.file_cache_only=0
opcache.file_cache_consistency_checks=1
Important: the CLI cache is separate from the FPM cache - it relieves background jobs, but does not replace a warm-up of the FPM pool. For busy cron windows, I also plan short warm-up scripts so that FPM workers start their shift with hot bytecode and peaks don't hit a cold cache.
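Such a warm-up can be as simple as requesting the hot routes once after a reload; a minimal sketch, where the base URL and paths are placeholders for your own hot paths:

```shell
#!/bin/sh
# Warm up FPM workers after a reload/deploy by hitting critical routes once.
# BASE and the path list are placeholders - adjust to your site.
BASE="https://example.com"
for path in / /shop/ /wp-login.php /wp-json/wp/v2/posts; do
  curl -s -o /dev/null -w "%{http_code} %{time_starttransfer}s ${path}\n" "${BASE}${path}"
done
```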
Containers, orchestration and rolling deploys
In Docker and Kubernetes environments, pods are often restarted or scaled horizontally. Every new FPM master starts with an empty SHM segment - without a warm-up, the first live requests perform a cold start. I therefore use init containers or postStart hooks that request critical routes and admin flows once. I only let readiness probes pass once the hot paths are in the OPcache. For rolling deploys with symlink releases, I invalidate selectively, let the old pool drain in a controlled manner and only direct traffic to the new revision when warm-up and health checks are green. In short-lived containers, an opcache.file_cache can further reduce cold-start times.
Practical examples and healthy guidelines
On a medium-sized WooCommerce site with lots of shortcodes, OPcache halved CPU spikes and doubled the sustainable number of concurrent sessions, resulting in noticeably more revenue in peak phases. A content portal with a page cache but without OPcache continued to show high TTFB until the bytecode cache eliminated the parse load. Blogs with block editors benefit similarly, as many small PHP files are involved and the in-memory index eliminates the repetitive work. Realistically, I plan 128-192 MB of shared memory for small sites and 256-512 MB for large setups, depending on the number of files. If you follow these guidelines and check the statistics, you will keep response times reliably low and reduce risk and cost.
Monitoring and verification in everyday life
I don't rely on gut feeling, but regularly check the OPcache metrics and relate them to real latencies. In addition to the hit rate, I am interested in used_memory, free_memory, wasted_memory and the use of interned_strings. If free_memory and the number of free hash slots remain constantly high, the setup is healthy. If wasted_memory increases permanently, I clean up (planned resets) or increase the pool.
<?php
$status = opcache_get_status(false);
if ($status === false) {
    exit("OPcache is not active.\n");
}
$mem   = $status['memory_usage'];
$stats = $status['opcache_statistics'];
printf(
    "Hit-Rate: %.2f%%\nUsed: %.1f MB, Free: %.1f MB, Wasted: %.1f MB\nCached Scripts: %d\n",
    $stats['opcache_hit_rate'],
    $mem['used_memory'] / 1048576,
    $mem['free_memory'] / 1048576,
    $mem['wasted_memory'] / 1048576,
    $stats['num_cached_scripts']
);
At the same time, I measure TTFB, P95/P99 and Apdex separately for guests and logged-in users. If OPcache is working correctly, the curves stabilize after a warm-up, while peaks are much flatter. If metrics and OPcache status differ (e.g. high hit rate but poor TTFB), I next look at DB queries, network, storage or blocking external services.
Step-by-step to a fast WP instance
I start with an upgrade to PHP 8.x, activate OPcache and make sure that memory_consumption and max_accelerated_files match the project and that no OOM entries appear. I then calibrate validate_timestamps and revalidate_freq to match the deployment practice in order to avoid unnecessary invalidations and secure throughput. Next, I measure TTFB, Apdex and P95 latencies in the logged-in and guest context in order to document real progress. Only then do I add an object cache (e.g. Redis) and a page cache to relieve the database and speed up HTML delivery. With this roadmap, I set a solid baseline and build the remaining performance layers on top of it.
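To document that progress, TTFB can be sampled quickly with curl's write-out variables; a small sketch where the URL is a placeholder:

```shell
#!/bin/sh
# Sample TTFB a few times for one URL (placeholder) to compare
# before/after OPcache changes. Repeated calls show the warm-cache effect.
URL="https://example.com/"
for i in 1 2 3; do
  curl -s -o /dev/null -w "TTFB: %{time_starttransfer}s\n" "$URL"
done
```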
Briefly summarized
Without OPcache, WordPress forces every request to re-parse and recompile the code, causing TTFB to rise, workers to block and capacity to shrink. With an active bytecode cache, I save exactly this work, significantly reduce CPU load and gain reserves for peaks. In tests, OPcache accelerates repeated calls by up to a factor of three, while PHP 8.x provides additional speed and reduces the base load. With a clean configuration, careful invalidation and monitoring, the hit rate remains high and the shared memory free of bottlenecks. If you follow these steps consistently, you will run WordPress noticeably faster, more stably and more economically.


