WordPress OPcache is often enabled but rarely configured correctly: too little memory, file limits that are too tight and careless timestamp checking lead directly to cache misses and noticeable load times. In this guide, I show you typical misconfigurations, provide reliable guideline values and explain how to tell whether your cache is working or is currently keeping your CPU busy.
Key points
The following key aspects will help you to quickly identify and rectify misconfigurations.
- Memory: size opcache.memory_consumption realistically
- Files: match opcache.max_accelerated_files to the code base
- Strings: increase opcache.interned_strings_buffer for WordPress
- Timestamps: choose validate_timestamps and revalidate_freq sensibly
- Monitoring: check hit rate, restarts and cached keys regularly
Why faulty Opcache settings slow down WordPress
With OPcache, PHP compiles your code once and then serves bytecode directly from memory, but incorrect values make this advantage evaporate. If the cache is too small, it constantly evicts entries, which leads to frequent recompilations and load spikes. Too low a limit on accelerated files likewise prevents all required PHP files from ending up in the cache, resulting in avoidable cache misses. If the interned strings buffer is too small, WordPress loses efficiency on recurring strings, which is particularly noticeable with many plugins. I check such effects via the hit rate, the number of cached keys and the restart count; these three metrics reveal very quickly whether the configuration is working.
Sizing memory correctly: opcache.memory_consumption
I never set opcache.memory_consumption blindly to 32 or 64 MB, because modern WordPress installations quickly exceed that. For smaller blogs I start with 128 MB; for larger sites I plan 256-512 MB so that entries are not continuously evicted. As the site grows, I check the free OPcache memory and the restart counters; if restarts increase or the hit rate drops, I raise the value step by step. A short load test after plugin updates shows whether the cache has enough headroom or is already working at its limit. If you are setting up a new system, a compact OPcache baseline configuration provides additional orientation values, which I then adjust to the actual file volume.
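Whether the configured memory actually suffices can be checked with a small status readout. A minimal sketch for an admin route where OPcache is active; the 10 percent threshold is my own rule of thumb, not an official limit:
<?php
// opcache-memory-check.php - a sketch; run behind an admin route where OPcache is active
$status = opcache_get_status(false);
if ($status === false) {
    exit("OPcache is not active\n");
}
$mem   = $status['memory_usage'];
$total = $mem['used_memory'] + $mem['free_memory'] + $mem['wasted_memory'];

printf("used:   %6.1f MB\n", $mem['used_memory'] / 1048576);
printf("free:   %6.1f MB\n", $mem['free_memory'] / 1048576);
printf("wasted: %6.1f MB\n", $mem['wasted_memory'] / 1048576);

// rule of thumb from this guide: raise memory_consumption when free falls below ~10 %
if ($mem['free_memory'] / $total < 0.10) {
    echo "WARNING: less than 10 % free - consider raising opcache.memory_consumption\n";
}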
Set file index correctly: opcache.max_accelerated_files
With opcache.max_accelerated_files I define how many PHP files the cache may manage, and I always set the value above the actual number of files. I determine the count on the server, for example via find . -iname "*.php" | wc -l, and add a 20-30 percent buffer so that WordPress does not run into the limit after updates. If the value stays at a low default of a few thousand files, I give away caching potential and get unstable performance under load. With large installations I often end up in the 10,000 to 32,500 range, depending on plugins, themes and must-use modules. I verify the result by comparing the number of cached keys with the limit and observing the hit rate under real traffic.
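On the shell, the value including buffer can be derived in two lines; the document root is an assumption, adjust it to your installation:
# count PHP files in the docroot and print a limit with ~25 % buffer
FILES=$(find /var/www/html -iname '*.php' | wc -l)
echo "PHP files: $FILES, suggested opcache.max_accelerated_files: $((FILES * 125 / 100))"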
The interned string buffer as a hidden bottleneck
Many overlook opcache.interned_strings_buffer, although WordPress in particular benefits greatly from interned strings. Values of 16-32 MB work well in practice because themes and plugins use numerous recurring strings, which are then held efficiently in memory. For particularly large setups I go up to 64 MB in stages if memory usage and the string statistics indicate it. A buffer that is too small gives away deduplication that would otherwise merge many identical strings into one memory location. After the adjustment, I check whether restarts decrease and response times remain more stable under identical traffic.
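Whether the buffer suffices can be read from the status; a one-liner in the style of the helpers further below. Note that on the CLI this shows the fresh CLI cache only, so for FPM numbers run the same call behind an admin route:
php -d opcache.enable_cli=1 -r '$s = opcache_get_status(false); $i = $s["interned_strings_usage"]; printf("interned strings: %.1f of %.1f MB used\n", $i["used_memory"] / 1048576, $i["buffer_size"] / 1048576);'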
Understanding timestamps: validate_timestamps and revalidate_freq
With opcache.validate_timestamps I control whether OPcache automatically detects file changes, which remains important in production environments that receive updates. I leave validate_timestamps at 1 and usually set revalidate_freq to 60 seconds so that changed plugins go live promptly without the disk being checked on every request. In deployment scripts, I plan a targeted PHP-FPM reload when I want critical changes active immediately, to avoid ambiguity. If you switch off timestamp validation on actively edited sites, you risk stale artifacts and frontend errors that are hard to attribute. For deeper practical questions, a clean cache invalidation routine that I apply on every release helps.
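In deployment scripts this boils down to two ini lines plus a reload step; a sketch, with the service name being an assumption that depends on your PHP version and distribution:
; production: pick up file changes within a minute
opcache.validate_timestamps=1
opcache.revalidate_freq=60
For the immediate activation after a deploy: sudo systemctl reload php8.2-fpm. A reload re-initializes the workers and thus empties the cache in a controlled way, unlike a hard restart that drops connections.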
Monitoring that counts: Hit rate, keys, restarts
I measure the success of OPcache with opcache_get_status(), because numbers immediately expose false assumptions. A hit rate of at least 99 percent shows that most requests hit cached bytecode instead of recompiling. If restarts increase or the number of cached keys is at the limit, I adjust the memory or the accelerated-files value. I also monitor memory fragmentation, as a fragmented cache can cause sudden drops in performance. After plugin updates, I check the metrics again to ensure the cache remains consistently fast and does not fall over under load.
opcache_get_status in practice: Reading key figures
To quickly get a feel for the configuration, I read out the most important fields and compare them with my goals:
- opcache_statistics.hits/misses: the ratio determines the hit rate. Target: ≥ 99 % under real traffic.
- opcache_statistics.num_cached_scripts: must stay clearly below opcache.max_accelerated_files.
- memory_usage.used_memory/free_memory/wasted_memory: shows whether memory is scarce or fragmented.
- opcache_statistics.oom_restarts and hash_restarts: if these increase, I scale up memory or files.
- interned_strings_usage.buffer_size/used_memory: indicates whether the string buffer is dimensioned sufficiently.
Small helpers that I run on the shell or in an admin route are useful:
php -r 'var_export(opcache_get_status(false));'
php -i | grep -i opcache
php -r 'echo count(array_filter(get_included_files(), fn($f) => substr($f, -4) === ".php"));'
Based on these figures, I decide whether to increase the memory, extend the file index or retune the revalidate frequency.
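For recurring checks, I find it useful to condense these fields into a small script. A sketch for an admin route behind opcache.restrict_api; the thresholds are my working values from this guide, not official limits:
<?php
// opcache-health.php - place under the path configured in opcache.restrict_api
$s = opcache_get_status(false);
if ($s === false) {
    exit("OPcache is not active\n");
}

$stats   = $s['opcache_statistics'];
$hitRate = $stats['opcache_hit_rate'];   // already a percentage
$keys    = $stats['num_cached_keys'];
$maxKeys = $stats['max_cached_keys'];

printf("hit rate: %.2f %%\n", $hitRate);
printf("keys:     %d / %d\n", $keys, $maxKeys);
printf("restarts: oom=%d hash=%d manual=%d\n",
    $stats['oom_restarts'], $stats['hash_restarts'], $stats['manual_restarts']);

// working thresholds from this guide
if ($hitRate < 99.0)            echo "-> hit rate below 99 %: check memory/file limits\n";
if ($keys > 0.9 * $maxKeys)     echo "-> key count near limit: raise max_accelerated_files\n";
if ($stats['oom_restarts'] > 0) echo "-> OOM restarts: raise memory_consumption\n";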
Recommended opcache values by scenario
Instead of making blanket recommendations, I adapt standard values to the code base and keep the variants comparable. Small to medium-sized sites need noticeably fewer resources than shops with many extensions. I configure development environments so that changes are visible without delay, while in production I throttle the file checks. The following table summarizes my usual starting values, which I then fine-tune through monitoring. If you are planning for growth, calculate with a buffer so that releases do not immediately force a resizing.
| Scenario | opcache.memory_consumption | opcache.max_accelerated_files | opcache.interned_strings_buffer | opcache.validate_timestamps | opcache.revalidate_freq | opcache.enable_cli |
|---|---|---|---|---|---|---|
| Small/medium | 128 MB | 10000 | 16 MB | 1 | 60 | 0 |
| Large | 256–512 MB | 32500 | 64 MB | 1 | 60 | 0 |
| Development | 128–256 MB | 10000–20000 | 16–32 MB | 1 | 0 | 0 |
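Translated into a php.ini fragment, the "Large" row reads as follows; these are starting values to refine via monitoring, not a final configuration:
; starting values for a large WordPress site (row "Large" above)
opcache.memory_consumption=256
opcache.max_accelerated_files=32500
opcache.interned_strings_buffer=64
opcache.validate_timestamps=1
opcache.revalidate_freq=60
opcache.enable_cli=0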
OPcache in the context of CLI, FPM and WP-CLI
Not every environment uses OPcache the same way, so I pay attention to the differences between FPM, Apache mod_php and the CLI. For WP-CLI tasks, OPcache often brings no advantage, which is why I typically leave enable_cli at 0. In production stacks I use PHP-FPM and schedule reloads deliberately so that hot deployments do not empty the cache uncontrollably. Cron jobs that start PHP scripts via the CLI benefit more from optimized PHP code and I/O than from OPcache itself. I document these paths so that admins know where OPcache takes effect and where it does not.
Warm-up after deployments: avoid cold starts
After a release the cache is cold, and this is exactly when many setups briefly buckle. I therefore plan a targeted warm-up:
- After the FPM reload, I retrieve critical routes (home, product/contribution pages, search/shop flows) automatically.
- I use sitemaps or predefined URL lists to prime 100-500 pages in waves instead of flooding everything at once.
- I distribute warmup requests over 1-2 minutes so that there are no CPU peaks and the bytecode is loaded consistently.
This prevents real users from paying for the compilation work. For stores in particular, this step reduces response time peaks immediately after deployments.
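A minimal warm-up sketch under these assumptions: the sitemap URL is hypothetical and must point to a URL-set sitemap (not the sitemap index), and the wave size and pacing are my rule of thumb:
<?php
// warmup.php - prime pages after an FPM reload so users do not pay for compilation
$sitemap = 'https://example.com/wp-sitemap-posts-post-1.xml'; // assumption: a URL-set sitemap
$xml = simplexml_load_string(file_get_contents($sitemap));
if ($xml === false) {
    exit("could not load sitemap\n");
}

$urls = [];
foreach ($xml->url as $entry) {
    $urls[] = (string) $entry->loc;
}

// prime up to 500 pages in small waves instead of flooding everything at once
foreach (array_slice($urls, 0, 500) as $i => $url) {
    @file_get_contents($url);   // the request compiles and caches the bytecode
    if ($i % 10 === 9) {
        sleep(1);               // spreads the warm-up over 1-2 minutes, no CPU spikes
    }
}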
JIT, preloading and file cache: classification for WordPress
Because these terms often come up together, here is how I classify them for WordPress:
- JIT (opcache.jit): for typical WP workloads (lots of I/O, few numeric hot loops), JIT usually delivers no measurable gain. I usually skip JIT in production with WordPress.
- Preloading (opcache.preload): works well with clear, stable frameworks. WordPress loads plugins and themes dynamically, so preloading is error-prone and maintenance-heavy. I only use it if I have precise control over the autoload chains.
- File cache (opcache.file_cache): can mitigate CLI jobs or short-lived restarts because bytecode is persisted to disk. For FPM-first stacks I prioritize the shared-memory cache; the file cache is more of a supplement for tools and cron jobs.
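If you want the file cache as a supplement for CLI and cron jobs, two ini lines suffice; the directory is an assumption and must exist and be writable by the PHP user:
; optional second-level cache on disk, mainly for CLI/cron jobs
opcache.file_cache=/var/cache/php/opcache
opcache.file_cache_only=0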
Blacklist, security and control
I also keep my OPcache configuration clean for security and stability reasons:
- opcache.restrict_api: limits who may call OPcache functions (e.g. reset). I set a path here under which only admin scripts live.
- opcache.blacklist_filename: excludes files/directories that are frequently rewritten (e.g. code generators) to prevent thrashing.
- opcache.save_comments=1: must stay active because WP and many plugins rely on docblocks/annotations. Without comments, that metadata is lost.
- opcache.consistency_checks: only enable in staging to detect checksum mismatches or inconsistencies; in production it costs noticeable performance.
; Example
opcache.restrict_api=/var/www/html/opcache-admin
opcache.blacklist_filename=/etc/php/opcache-blacklist.txt
opcache.save_comments=1
Multi-site, multiple projects and PHP-FPM pools
If several sites share a PHP-FPM instance, they compete for the same OPcache; the shared memory segment belongs to the FPM master process, so truly separate caches require separate FPM instances, not merely separate pools under one master. I therefore give resource-intensive projects their own instance:
- Separate INI values per instance; this is how I size memory_consumption exactly to the site.
- No mutual eviction of bytecode; updates to one site do not flush the cache of the other.
- Better fault localization: restarts and hit rate can be interpreted per application.
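A sketch of what this separation can look like; the file paths and the instance layout are assumptions and vary by distribution:
; /etc/php/8.2/fpm-shop/php.ini - dedicated FPM instance for the large shop
opcache.memory_consumption=512
opcache.max_accelerated_files=32500
; /etc/php/8.2/fpm-blog/php.ini - small blog instance
opcache.memory_consumption=128
opcache.max_accelerated_files=10000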
In multisite setups, I also monitor whether certain subsites pull in an extremely large number of files (page builders, WooCommerce, must-use plugins). I adjust the file index accordingly and plan in extra buffer.
Keeping memory fragmentation under control
Even with enough total memory, a fragmented cache can cause sudden performance drops. I therefore watch:
- wasted_memory and opcache.max_wasted_percentage: if the threshold is exceeded, OPcache restarts. If such restarts accumulate, I increase memory and check whether certain deploys change many small files.
- Code layout: large plugins that are updated frequently cause more fragmentation. A bundled release window instead of constant micro-updates helps.
- Huge code pages (opcache.huge_code_pages): if the system supports huge pages, this can reduce fragmentation and TLB misses. I only set it if the platform is properly configured for it.
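The wasted share can be read in one line, in the style of the helpers above; on the CLI this only shows the CLI cache, so for FPM numbers run it behind an admin route:
php -d opcache.enable_cli=1 -r '$m = opcache_get_status(false)["memory_usage"]; printf("wasted: %.1f %%\n", 100 * $m["wasted_memory"] / ($m["used_memory"] + $m["free_memory"] + $m["wasted_memory"]));'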
Development and staging workflows
In development, visibility of changes beats maximum performance. I therefore work with:
- validate_timestamps=1 and revalidate_freq=0, so that changes are immediately visible.
- Separate INI files per environment (dev/stage/prod) to prevent settings from accidentally carrying over.
- Disabled JIT and disabled enable_cli to keep WP-CLI fast and deterministic.
- Consistently deactivated debug extensions in production (e.g. Xdebug) because they strongly change caching and runtime behavior.
In containers, I pay attention to the type of mount (e.g. network/bind mounts), because otherwise frequent timestamp changes trigger unnecessary revalidations.
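As an ini fragment, this development profile looks like the following; the values match the table above, and opcache.jit=disable is one way to switch JIT off in PHP 8:
; development: changes visible immediately, no JIT
opcache.validate_timestamps=1
opcache.revalidate_freq=0
opcache.jit=disable
opcache.enable_cli=0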
Clearly classify error patterns
Typical symptoms often have clear causes:
- Sudden 500s after updates: check restarts, fragmentation and whether the FPM reload was triggered exactly after the code swap.
- Inconsistent frontends: validate_timestamps set incorrectly or the revalidation window chosen too large.
- Permanently low hit rate: file index or memory too small; frequent misses occasionally also point to constantly changing build artifacts.
- Slow CLI jobs: enable_cli=0 is usually correct; optimized code or the file cache helps here, not the SHM OPcache.
Quick checklist for the first 30 minutes
- Count PHP files and set max_accelerated_files with a 20-30 % buffer.
- memory_consumption to 128-512 MB depending on the site size; string buffer to 16-64 MB.
- validate_timestamps=1 and revalidate_freq to 60 in production.
- After deploy: FPM reload, trigger warmup routes, then check opcache_get_status().
- Monitor restarts, hit rate and wasted_memory; make targeted adjustments in the event of anomalies.
- Security: set restrict_api, keep save_comments=1 and blacklist problematic paths if necessary.
- Optional: separate FPM instances for large sites so that caches do not evict each other.
Systematic troubleshooting: from symptoms to causes
I always start the analysis with the key figures: if the hit rate drops, restarts increase or keys are at the limit, I derive specific steps. If the cache is full, I increase memory_consumption; if I hit the file limit, I raise max_accelerated_files. If I see contradictory frontend states after deployments, I check validate_timestamps and the timing of the FPM reload. If sporadic 500s occur, I examine cache fragmentation and read the error logs before tweaking the configuration. After each change, I measure again until the metrics and load times line up consistently.
Concise summary
Strong WordPress performance starts with a sufficiently large OPcache, suitable limits for accelerated files and a sensibly sized interned strings buffer. In production I leave timestamp validation active, throttle the check interval and schedule controlled reloads for releases so that changes go live on time. I rely on metrics such as hit rate, restarts and cached keys, because they show objectively which knob to turn. Table values are starting points, but monitoring decides how I adjust them per site. If you maintain this discipline, you reliably get short response times out of PHP and keep the CPU relaxed even during traffic peaks.


