The WordPress Heartbeat API sends AJAX requests to admin-ajax.php at short intervals, saves drafts in real time and prevents editing conflicts. At the same time, it can drive up the WP backend load significantly. In this article, I show the benefits, the risks and the specific adjustments that noticeably reduce the load without losing important functions.
Key points
- Benefits: Autosave, post-locking, live information in the dashboard
- Risks: CPU spikes, load on admin-ajax.php, sluggish backend
- Frequencies: 15/60/120 seconds depending on context
- Optimization: Increase intervals, throttle the frontend, plugins
- Hosting: Enough PHP workers and good caching
What the Heartbeat API does and why it is important
The Heartbeat API keeps the editor and dashboard synchronized in near real time with frequent requests. I benefit from automatic backups, collision protection while writing and notifications without having to reload pages. Especially in a team, post-locking ensures that no one accidentally overwrites other people's work, which noticeably reduces stress in editorial processes. For these advantages to take effect, a constant data exchange via admin-ajax.php runs in the background. This feels convenient, but quickly consumes resources on weak hosts.
Standard intervals and typical load peaks
In the editor, Heartbeat typically fires every 15 seconds, in the dashboard every 60 seconds and in the frontend roughly every 120 seconds, so the frequency is clearly defined. If several admin windows remain open, the calls add up and occupy PHP workers. As soon as memory or CPU become scarce, the backend reacts sluggishly and inputs appear with a delay. In day-to-day business this often goes unnoticed: one tab per post, plus media, forms and plugin pages, and a dense cloud of requests builds up. If you stretch these intervals in a targeted manner, you take load off the server without losing the most important convenience functions.
Risks: wp backend load, CPU and admin-ajax.php
Each Heartbeat call starts PHP, loads WordPress and processes tasks. That sounds small, but it scales quickly with multiple editors and drives up the WP backend load. Shared hosting in particular then shows CPU spikes, busy workers and delays in the editor. I often recognize such patterns by sluggish typing and a slow autosave indicator. I have explained the background to this silent load source in detail here: Silent load source. If you ignore these effects, you risk slow admin response times, delayed publishing processes and, indirectly, poorer Core Web Vitals.
Influence on WordPress performance in editorial workflows
The biggest brake is not the amount of data, but the number of requests and their simultaneity. Several open editors generate Heartbeat requests in parallel, which often bypass caches because they require dynamic data. The result is waiting time even though the page itself loads quickly, which editors perceive as a "slow backend". It helps to prioritize HTTP requests and Heartbeat intervals so that fewer PHP instances run at the same time. I therefore keep editor tabs lean and consistently close inactive tabs, which noticeably stabilizes response times.
Best practice: throttling back instead of switching off
I first increase the interval instead of rigorously switching Heartbeat off, in order to preserve autosave and post-locking. An interval of 60 to 120 seconds often reduces the load significantly without disturbing the editorial team. For quick relief on the frontend, I remove Heartbeat there completely, as visitors rarely need it. If you want to go even further, you can throttle the editor moderately and limit the dashboard more strongly. This keeps the editing experience fluid while the server gets more breathing room.
add_filter('heartbeat_settings', function($settings) {
    $settings['interval'] = 60; // seconds in the editor/dashboard
    return $settings;
});

add_action('init', function() {
    if ( ! is_admin() ) {
        wp_deregister_script('heartbeat'); // remove Heartbeat entirely on the frontend
    }
}, 1);
Context-specific rules in the Admin
The more precisely I control, the fewer side effects there are. I differentiate between the editor, dashboard and other admin pages and assign different intervals there. The editor remains relatively fast, the dashboard is slowed down more.
add_action('admin_init', function () {
    add_filter('heartbeat_settings', function ($settings) {
        if ( ! is_admin() ) return $settings;

        if ( function_exists('get_current_screen') ) {
            $screen = get_current_screen();

            // Editor (posts/pages): moderate interval
            if ( $screen && in_array($screen->base, ['post', 'post-new']) ) {
                $settings['interval'] = 45;
                return $settings;
            }

            // Dashboard: rather slow
            if ( $screen && $screen->base === 'dashboard' ) {
                $settings['interval'] = 120;
                return $settings;
            }
        }

        // All other admin pages
        $settings['interval'] = 60;
        return $settings;
    }, 10);
});
Post-locking and autosave in the editor remain reliable, while live widgets in the dashboard poll less frequently and protect the server.
Limit load peaks per tab (JavaScript)
Many load peaks are caused by inactive browser tabs. I use a small script in the admin that automatically slows down Heartbeat when the tab is in the background and speeds it up again when I return.
add_action('admin_enqueue_scripts', function () {
    wp_add_inline_script(
        'heartbeat',
        "document.addEventListener('visibilitychange', function () {
            if (window.wp && wp.heartbeat) {
                if (document.hidden) {
                    wp.heartbeat.interval(120); // slow down to 120s while the tab is in the background
                } else {
                    wp.heartbeat.interval(60); // back to 60s when the tab is active again
                }
            }
        });"
    );
});
This allows me to noticeably reduce parallel heartbeats without losing functions. Important: I then specifically test autosave and simultaneous editing.
Targeted payload control instead of data ballast
In addition to the frequency, it is the content that counts. Some plugins attach large data packets to Heartbeat. I keep the payload lean by only sending values that are really needed and removing unnecessary keys on the server.
// Client: register specific data
jQuery(function ($) {
    if (window.wp && wp.heartbeat) {
        // Send only a small payload under our own key
        wp.heartbeat.enqueue('my_app', { thin: true }, true);

        $(document).on('heartbeat-tick', function (event, data) {
            if (data && data.my_app_response) {
                // Process the response efficiently
            }
        });
    }
});

// Server: streamline the response
add_filter('heartbeat_send', function ($response, $screen_id) {
    // Remove heavy/unnecessary keys from the outgoing response
    unset($response['unnecessary_data']);
    return $response;
}, 10, 2);

add_filter('heartbeat_received', function ($response, $data) {
    // Validate incoming data and only answer when our key was actually sent
    if ( ! empty($data['my_app']) ) {
        $response['my_app_response'] = array('ok' => true);
    }
    return $response;
}, 10, 2);
This fine control avoids data ballast per request and reduces CPU and I/O pressure, especially when many editors are active at the same time.
Block editor (Gutenberg): Special features at a glance
Additional processes such as regular draft backups and status checks run in the block editor. I avoid unnecessary parallelism: no mass editing in many tabs, media uploads one after the other, and I plan long sessions with clear saving rhythms. I throttle the dashboard more strongly than the editor so that saving does not stutter. I also monitor whether individual block plugins trigger Heartbeat or live checks disproportionately often and reduce their live features if in doubt.
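If the periodic draft backups themselves feel too frequent, their cadence can be stretched independently of the Heartbeat interval via the AUTOSAVE_INTERVAL constant, which the block editor also uses as the default rhythm for its draft backups. A minimal sketch for wp-config.php; 120 seconds is only an example value, the WordPress default is 60:
// In wp-config.php: interval for autosaves in seconds (example value; WordPress default is 60).
define('AUTOSAVE_INTERVAL', 120);
Post-locking continues to run via Heartbeat, so this only slows down the backup rhythm, not the collision protection.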
Edge Cases: WooCommerce, Forms, Page Builder
- WooCommerce admin uses live components. I throttle Heartbeat on shop-relevant screens, but never switch it off completely, so as not to disrupt inventory or session effects (see the sketch after this list).
- Form builders with live previews often use Heartbeat or their own polling mechanisms. I test preview, spam protection and upload, and regulate their refresh rates separately.
- For page builders with live statistics in the dashboard, I reduce the load by hiding the widgets or letting them refresh less frequently.
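A minimal sketch of how such an exemption could look, kept separate from the context rules above for clarity. The screen IDs listed here are assumptions and should be verified with get_current_screen() on the target installation; 90 seconds is just an example value:
// Sketch: skip my own Heartbeat throttling on (assumed) WooCommerce admin screens.
add_filter('heartbeat_settings', function ($settings) {
    if ( ! is_admin() || ! function_exists('get_current_screen') ) {
        return $settings;
    }
    $screen       = get_current_screen();
    $shop_screens = ['product', 'edit-product', 'shop_order', 'edit-shop_order']; // assumed screen IDs
    if ( $screen && in_array($screen->id, $shop_screens, true) ) {
        return $settings; // leave WooCommerce defaults untouched
    }
    $settings['interval'] = 90; // example throttle for all other admin screens
    return $settings;
}, 20);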
Server and hosting factors
Heartbeat puts a strain on PHP workers, so I make sure I have sufficient capacity and fast I/O. An object cache (Redis/Memcached) relieves PHP calls, although Heartbeat itself remains dynamic. When choosing hosting, I look at the number of workers, CPU reserves and per-process limits so that editor sessions do not come to a standstill. Good providers deliver clear metrics so that I can recognize load and bottlenecks. The following overview shows typical differences and what they mean for performance.
| Hosting provider | PHP-Worker | Heartbeat optimization | Suitable for editorial offices |
|---|---|---|---|
| webhoster.de | Unlimited | Excellent | Yes |
| Other | Limited | Medium | Partial |
PHP/FPM parameters that I check
- PHP-FPM: Sufficient pm.max_children, suitable pm.max_requests (e.g. 300-1000) and sensible pm.process_idle_timeout.
- OPcache: Enough memory (e.g. 128-256 MB), a high opcache.max_accelerated_files, validate_timestamps active with a practicable interval.
- request_terminate_timeout not too short, so that long edit requests are not aborted.
- Activate the PHP-FPM slowlog to find outliers in admin-ajax.php (a hypothetical pool sketch follows after this list).
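To make these parameters concrete, here is a hypothetical PHP-FPM pool sketch with example values from the ranges above; the actual numbers depend on RAM, CPU and traffic, and the log path is a placeholder:
; Pool configuration (e.g. www.conf), example values only
pm = ondemand
pm.max_children = 20              ; enough workers for parallel editor tabs
pm.max_requests = 500             ; recycle workers regularly to limit memory creep
pm.process_idle_timeout = 10s     ; free idle workers again (relevant with pm = ondemand)
request_terminate_timeout = 120s  ; do not abort long edit requests too early
slowlog = /var/log/php-fpm/www-slow.log
request_slowlog_timeout = 5s      ; log requests slower than 5 seconds, e.g. admin-ajax.php outliers
The OPcache values from the list (e.g. opcache.memory_consumption, opcache.max_accelerated_files) belong in php.ini rather than in the pool file.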
CDN/Firewall: typical pitfalls
In the admin area, I consistently exclude admin URLs from CDN caching, set no aggressive rate limits on admin-ajax.php and make sure that bot protection measures do not block Heartbeat. Otherwise there is a risk of failed autosaves, sessions that expire without notification or flickering post locks. A clean exception rule for the admin area ensures stable working conditions.
Monitoring and diagnosis
For diagnostics, I check request flows, response times and how many instances of admin-ajax.php run in parallel in order to make bottlenecks visible. Tools such as debug plugins and server logs show me when the backend stumbles. I also pay attention to sessions, because blocking sessions artificially prolong edit requests. If you want to understand more, take a look at PHP session locking, because it can collide with Heartbeat effects. After each change, I test the editor, media upload and login so that no side effect goes unnoticed.
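For a quick, temporary look at how often Heartbeat actually fires and from which admin screens, a small logging helper can be useful. This is my own diagnostic sketch, not a core feature, and it should be removed again after measuring:
// Temporary diagnostic: log every Heartbeat request with its screen ID and sent keys.
add_filter('heartbeat_received', function ($response, $data, $screen_id) {
    error_log(sprintf(
        '[heartbeat] screen=%s keys=%s',
        $screen_id,
        implode(',', array_keys((array) $data))
    ));
    return $response;
}, 10, 3);
Counting these log lines per minute before and after the adjustments gives a realistic picture of the parallel Heartbeat traffic.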
Step-by-step tuning plan
- Measure the actual status: Number of parallel admin-ajax.php calls, response times, CPU/worker utilization, tabs/users at peak times.
- Relieve the frontend: Deactivate Heartbeat in the frontend and check critical frontend functions.
- Set context rules: Editor moderate (45-60 s), dashboard slow (90-120 s), everything else 60 s; slow down inactive tabs.
- Keep the payload lean: Remove superfluous keys, reduce or deactivate large live widgets in the dashboard.
- Follow up on the server side: Check PHP-FPM/OPcache, activate an object cache, plan sensible worker reserves.
Practical checklist for different scenarios
For solo creators with occasional updates, I leave Heartbeat at 60-90 seconds in the editor and deactivate it in the frontend. In small editorial teams with several tabs, I set 60-120 seconds and train the team to close inactive windows. On high-traffic sites with many editors, I increase the workers, activate an object cache and throttle the dashboard Heartbeat more than the editor. If the dashboard remains sluggish despite throttling, I check plugins with live widgets and reduce their updates. Only if all of these adjustments fail do I temporarily switch Heartbeat off and secure workflows with disciplined manual saving.
Conclusion: How to keep Heartbeat in check
I use the strengths of the WordPress Heartbeat API, namely autosave, post-locking and live information, and reduce the load at the same time. The first lever remains the interval: stretch, measure, readjust. Then I relieve the frontend, set rules per context and keep tabs lean. On the server side, I make sure I have enough workers, solid caching layers and transparent metrics. This keeps my backend responsive while all the convenience functions are retained.


