HTTP/2 accelerates WordPress requests over a single connection, but outdated optimizations and misconfigured servers often blunt the effect. I will show you where multiplexing, header compression and server push make a difference - and why noticeable performance gains only come with suitable WordPress and server settings.
Key points
- Multiplexing replaces many connections and loads files in parallel.
- Concatenation and excessive minification are often a hindrance under HTTP/2.
- Server Push only helps when specifically configured and measured.
- The TLS handshake costs time; good server settings compensate for this.
- CDN and clean assets clearly beat pure protocol changes.
What HTTP/2 actually changes in WordPress
I use the advantages of multiplexing to load many small CSS, JS and image files in parallel over a single TCP connection. HTTP/2 reduces overhead through HPACK header compression and transmits data in binary frames, which makes parsing more reliable and packets more efficient. This eliminates the need to open many connections, which reduces latency and CPU load in the browser. If you want to understand the differences to HTTP/1.1, take a look at the comparison of multiplexing vs. HTTP/1.1 and plan your asset strategy accordingly. Server push can also deliver initial resources proactively, but I use it in a targeted manner and measure the effect.
Why HTTP/2 is not automatically faster
Old HTTP/1.x tricks such as aggressive file merging often worsen First Paint under HTTP/2. Many themes bundle everything into one large file, which causes the browser to start rendering later. Tests show some drastic gains of up to 85 %, but only when server, assets and cache work together. On lean sites or with weak servers, the effect is smaller; sometimes I only see a 0.5 second gain in Time to Interactive. And if you load the wrong plugins, use uncompressed images or have slow database queries, HTTP/2 will not save you.
Typical HTTP/1.x optimizations that are now slowing things down
I avoid exaggerated concatenation, because one large JS file blocks parsing and prevents fine-grained caching. It's better to deliver modules separately: only what the page really needs. Excessive minification is of little use because HPACK header compression and Brotli already save many bytes; I minify moderately to keep debugging and caching friendly. Domain sharding should be put to the test, because a single connection with multiplexing provides the best utilization. I also recheck old CSS sprites, as modern formats such as WebP together with HTTP/2 handle requests and bytes more efficiently and improve the cache hit rate.
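A minimal sketch of this splitting in practice, with hypothetical handles and file paths: shared code is registered once, while page-specific modules are enqueued only where they are actually used.
// Register components separately instead of shipping one monolithic bundle
add_action('wp_enqueue_scripts', function () {
    // shared vendor code: cached once, reused across all pages
    wp_register_script('theme-vendor', get_template_directory_uri() . '/js/vendor.js', [], '1.0', true);
    // page-specific module, loaded only where it is needed
    if (is_singular('product')) {
        wp_enqueue_script('theme-gallery', get_template_directory_uri() . '/js/gallery.js', ['theme-vendor'], '1.0', true);
    }
});
Because 'theme-gallery' declares 'theme-vendor' as a dependency, WordPress enqueues both in the correct order and the browser fetches them in parallel over the shared connection.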
Server configuration and TLS: how to get a head start
In browsers, HTTP/2 requires HTTPS, so I optimize TLS 1.3, activate ALPN and shorten handshakes with OCSP stapling. I use Brotli instead of just Gzip, tune keep-alive and set up Nginx or Apache cleanly with h2 parameters. A weak PHP-FPM configuration or too few workers costs time before the first byte flows. Caching at server level - FastCGI cache, object cache, OPcache - noticeably reduces backend load. These steps often do more than any protocol option and stabilize the perceived response time.
Asset strategy for WordPress under HTTP/2
I load styles and scripts modularly via wp_enqueue and set defer or async for non-critical JS files. Critical CSS for above-the-fold content shortens First Contentful Paint, while the remaining CSS loads later. Instead of monster bundles, I split components sensibly so that caching and parallelization take effect. I optimize images with modern formats and suitable quality; lazy loading keeps the start page lean. For less overhead, I use tried-and-tested tips to reduce HTTP requests without giving away the strengths of HTTP/2, keeping the payload small.
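Since WordPress 6.3, the enqueue API accepts the loading strategy directly as an argument, which makes separate filter hacks unnecessary for simple cases. A minimal sketch with a hypothetical handle and path:
// Enqueue a script with a native defer strategy (WordPress 6.3+)
add_action('wp_enqueue_scripts', function () {
    wp_enqueue_script(
        'theme-main',                                 // hypothetical handle
        get_template_directory_uri() . '/js/main.js', // hypothetical path
        [],
        '1.0.0',
        ['strategy' => 'defer', 'in_footer' => true]
    );
});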
Targeted use of server push
I only push files that every page really needs immediately, for example a small critical CSS file or an important preload script. I don't push large images or rarely used modules because they can tie up bandwidth and disrupt caching. In WordPress, I trigger push via Link headers or suitable plugins, but I check whether the browser would load the resource fast enough anyway. I use web tools to measure whether push improves LCP or whether a preload header is sufficient. If the key figures stagnate, I switch push off again and keep the pipeline free.
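How I emit such a Link header from WordPress, as a minimal sketch: push-capable servers treat it as a push candidate, all others as a plain preload hint. The CSS path is hypothetical.
// Send a preload Link header before the HTML body goes out
add_action('send_headers', function () {
    if (!is_admin()) {
        $css = get_template_directory_uri() . '/css/critical.css'; // hypothetical file
        header('Link: <' . esc_url_raw($css) . '>; rel=preload; as=style', false);
    }
});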
CDN, caching and latency - what really counts
I place static assets on a CDN with HTTP/2 support and a good presence close to the users. Shorter paths reduce RTT, while the edge cache takes load off the origin. With sensible Cache-Control headers, ETags and hashed filenames, I make clean revalidation decisions. I minimize DNS lookups and prevent unnecessary CORS preflights. Together with a clean object cache for WordPress, the effect of HTTP/2 grows noticeably and loading times improve.
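For the hashed-filename effect without a build step, a sketch that derives the asset version from the file's modification time, so the URL changes exactly when the content does:
// Version the stylesheet by mtime so long cache lifetimes stay safe
add_action('wp_enqueue_scripts', function () {
    $path = get_stylesheet_directory() . '/style.css';
    $ver  = file_exists($path) ? (string) filemtime($path) : '1.0';
    wp_enqueue_style('theme-style', get_stylesheet_uri(), [], $ver);
});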
Targeted use of prioritization and resource hints
With HTTP/2, the server has the final say over the order in which streams flow, but I give the browser clear signals. I use preload for critical CSS and the LCP image, preconnect for unavoidable third-party domains and dns-prefetch only sparingly. For fonts I use font-display: swap and deliver WOFF2; a preload helps here to avoid a flash of invisible text. Since WordPress 6.3, I can also use the fetchpriority attribute to upgrade important resources and downgrade unimportant ones.
I follow two rules here: I only preload what is rendered immediately, and I measure whether the prioritization actually takes effect. Preloading too broadly clogs the pipeline, especially on mobile networks.
// Prioritize the LCP image: load it eagerly with high fetch priority
add_filter('wp_get_attachment_image_attributes', function ($attr, $attachment, $size) {
    if (is_front_page() && !empty($attr['class']) && strpos($attr['class'], 'hero') !== false) {
        $attr['fetchpriority'] = 'high';
        $attr['decoding']      = 'async';
        $attr['loading']       = 'eager';
    }
    return $attr;
}, 10, 3);
// Set preconnect hints selectively for unavoidable third-party origins
add_filter('wp_resource_hints', function ($hints, $relation_type) {
    if ('preconnect' === $relation_type) {
        $hints[] = 'https://cdn.example.com';
    }
    return array_unique($hints);
}, 10, 2);
For styles, I use small, external critical CSS files (easily cacheable) instead of large inline blocks that are transferred anew with every HTML response. I preload the file and load the remaining CSS chunk asynchronously - that keeps FCP and LCP low and respects the strengths of HTTP/2.
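A sketch of this pattern with a hypothetical critical CSS file and a hypothetical handle 'theme-rest' for the remaining styles; the media="print" swap is a common technique for loading CSS without render blocking:
// Preload the critical CSS file early in the head
add_action('wp_head', function () {
    $css = get_template_directory_uri() . '/css/critical.css'; // hypothetical file
    printf('<link rel="preload" href="%s" as="style">' . "\n", esc_url($css));
}, 1);
// Load the non-critical stylesheet asynchronously via the print-media swap
add_filter('style_loader_tag', function ($tag, $handle) {
    if ('theme-rest' === $handle) {
        // WordPress emits media='all' by default; swap it so the CSS applies after load
        $tag = str_replace("media='all'", 'media="print" onload="this.media=\'all\'"', $tag);
    }
    return $tag;
}, 10, 2);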
WordPress assets in practice: clean splitting, smart loading
I register scripts modularly with dependencies and control the execution via defer/async. Third-party scripts, analytics and maps run asynchronously; render-critical elements remain lean.
// Set defer/async depending on the handle
add_filter('script_loader_tag', function ($tag, $handle, $src) {
    $async = ['analytics', 'maps'];
    $defer = ['theme-vendor', 'theme-main'];
    if (in_array($handle, $async, true)) {
        return str_replace('<script ', '<script async ', $tag);
    }
    if (in_array($handle, $defer, true)) {
        return str_replace('<script ', '<script defer ', $tag);
    }
    return $tag;
}, 10, 3);
// Dequeue superfluous plugin assets on non-target pages
add_action('wp_enqueue_scripts', function () {
    if (!is_page('contact')) {
        wp_dequeue_script('contact-form-7');
        wp_dequeue_style('contact-form-7');
    }
}, 100);
I split large JS bundles into meaningful chunks - headers, footers, components - and use tree-shaking-capable builds. Under HTTP/2, it's okay to deliver several small files as long as dependencies are clear and caching works. For CSS, I rely on modular files per template/component; this makes debugging easier and reuse more efficient.
I keep fonts to a minimum: few weights and styles, variable fonts only when really needed. I pay attention to defined width/height attributes for images so that CLS stays low, and let WordPress generate responsive images with suitable srcset entries so that devices do not load more bytes than necessary.
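A short sketch for preloading a self-hosted WOFF2 file; note that font preloads need the crossorigin attribute even on the same origin. The path is hypothetical:
// Preload the primary webfont so text renders without a long swap period
add_action('wp_head', function () {
    $font = get_template_directory_uri() . '/fonts/inter-var.woff2'; // hypothetical file
    printf(
        '<link rel="preload" href="%s" as="font" type="font/woff2" crossorigin>' . "\n",
        esc_url($font)
    );
}, 1);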
Server Push today: Preload and Early Hints
I note that many browsers have since dismantled or deactivated HTTP/2 server push. In practice, I therefore consistently deliver preload headers and, where available, use 103 Early Hints to announce critical resources before the final response. This works more stably and collides less with caches.
# Nginx excerpt: HTTP/2, TLS 1.3, Brotli, preload Link header
server {
    listen 443 ssl http2;
    ssl_protocols TLSv1.3;
    ssl_early_data on;
    # example path for the critical CSS file
    add_header Link "</css/critical.css>; rel=preload; as=style" always;
    brotli on;
    brotli_comp_level 5;
    brotli_types text/css application/javascript application/json image/svg+xml;
}
I don't push anything that the browser would fetch early anyway or that renders late. If a targeted push (or early hint) does not result in a measurable LCP gain, I remove it again and leave prioritization to the browser.
Backend performance: PHP-FPM, object cache and database
HTTP/2 does not hide slow backends. I configure PHP-FPM cleanly (pm = dynamic, a sensible pm.max_children, no swapping) and activate OPcache with sufficient memory. A persistent object cache (Redis/Memcached) ensures that recurring requests rarely hit the database. For the database, I pay attention to indexes on wp_postmeta and wp_options, reduce autoload ballast and tidy up cron jobs.
; PHP-FPM (excerpt)
pm = dynamic
pm.max_children = 20
pm.max_requests = 500
request_terminate_timeout = 60s
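To quantify the autoload ballast mentioned above, a quick sketch using $wpdb (run e.g. via wp eval-file); it assumes the classic autoload = 'yes' value, while newer WordPress versions add variants such as 'on':
// Sum the bytes of options WordPress loads on every single request
global $wpdb;
$bytes = (int) $wpdb->get_var(
    "SELECT SUM(LENGTH(option_value)) FROM {$wpdb->options} WHERE autoload = 'yes'"
);
printf('Autoloaded options: %.2f MB', $bytes / 1024 / 1024);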
I regularly check TTFB under load. If the first byte takes too long, too few PHP workers, missing cache hits or slow WP queries are often to blame. Query analysis reveals the culprits: autoloaded options above 1-2 MB and unthrottled REST/admin-ajax calls are typical brakes. I cache API responses aggressively if they rarely change.
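A minimal sketch for such aggressive API caching with a transient; the endpoint and helper name are hypothetical:
// Cache a rarely changing API response so page loads skip the HTTP round trip
function my_cached_rates() {
    $rates = get_transient('my_rates');
    if (false === $rates) {
        $response = wp_remote_get('https://api.example.com/rates'); // hypothetical endpoint
        if (!is_wp_error($response) && 200 === wp_remote_retrieve_response_code($response)) {
            $rates = json_decode(wp_remote_retrieve_body($response), true);
            set_transient('my_rates', $rates, 15 * MINUTE_IN_SECONDS);
        }
    }
    return $rates;
}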
E-commerce and dynamic pages: Caching without pitfalls
For stores (e.g. WooCommerce) I work with a full-page cache plus a Vary strategy on relevant cookies. I exclude shopping cart and checkout pages from the cache and deactivate cart fragments where they are not needed. Product lists and CMS pages, on the other hand, cache very well at the edge - HTTP/2 then delivers the many small assets in parallel, while the HTML comes straight from the cache.
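A sketch for deactivating WooCommerce's cart fragments outside the shop context; 'wc-cart-fragments' is the handle WooCommerce registers for this AJAX script:
// Drop the cart-fragments script where no live cart widget is shown
add_action('wp_enqueue_scripts', function () {
    if (function_exists('is_woocommerce') && !is_woocommerce() && !is_cart() && !is_checkout()) {
        wp_dequeue_script('wc-cart-fragments');
    }
}, 100);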
I use fragment caching (ESI or server-side partials) to embed dynamic blocks into otherwise static pages. This keeps TTFB low without losing personalization. For country/currency variants, I use short cache keys and compact HTML so that the number of cached variants does not explode.
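A server-side partial sketch using the object cache (assuming a persistent backend such as Redis); the template part and cache group are hypothetical:
// Cache an expensive template partial for five minutes in the object cache
function my_render_bestsellers() {
    $key  = 'bestsellers_' . get_locale(); // short key per language variant
    $html = wp_cache_get($key, 'fragments');
    if (false === $html) {
        ob_start();
        get_template_part('parts/bestsellers'); // hypothetical partial with expensive queries
        $html = ob_get_clean();
        wp_cache_set($key, $html, 'fragments', 5 * MINUTE_IN_SECONDS);
    }
    echo $html;
}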
CDN fine-tuning: coalescing, hostnames and headers
I avoid additional hostnames if they serve no real purpose. Under HTTP/2, the browser can merge connections (connection coalescing) if certificate, IP and TLS parameters match - this keeps the number of TCP and TLS setups to a minimum. I use immutable and stale-while-revalidate in Cache-Control so that clients keep assets in the cache longer and refresh them in the background.
I pay attention to consistent Brotli compression on edge and origin so that no double encoding occurs. A missing Vary: Accept-Encoding header leads to wrong cache variants, and overly broad CORS policies trigger preflight requests - both counteract the strengths of HTTP/2, so I clear them up.
Measurement strategy: Lab and field, reading key figures correctly
Besides TTFB, FCP and LCP, I watch CLS (layout shifts) and INP (interaction latency). HTTP/2 primarily improves transport and parallelization; poor CLS/INP values usually point to assets, fonts and JS load, not to the protocol. I always measure mobile with throttling, compare cold vs. warm caches and keep test conditions reproducible.
- I read waterfalls: does CSS start early, does a large JS file block, when does the LCP image arrive?
- I check priorities in DevTools: are preloads respected, is fetchpriority active?
- I differentiate between origin and edge hit rate: lean HTML responses plus many small assets are fine under HTTP/2 - as long as both caches are in place.
Typical measured values and what they mean
I monitor TTFB, FCP and LCP because these figures reflect real user perception. A pure "requests down" target distorts results, because HTTP/2 handles many small files well. I also evaluate the distribution: which resource blocks rendering, which loads late? Without a reproducible measurement environment (cold vs. warm cache, mobile vs. desktop), numbers quickly point in the wrong direction. These sample values show typical effects and serve me as a starting point for finer tuning:
| Key figure | Before changeover | After HTTP/2 + tuning |
|---|---|---|
| TTFB | 450 ms | 280 ms |
| FCP | 1.8 s | 1.2 s |
| LCP | 3.2 s | 2.3 s |
| Requests | 92 | 104 (better parallelized) |
| Transferred data | 1.9 MB | 1.6 MB |
Limits of HTTP/2 and a look at HTTP/3
I do not forget that HTTP/2 does not completely prevent head-of-line blocking at the TCP level. On difficult networks with packet loss, this can slow things down even though the protocol parallelizes streams. HTTP/3 with QUIC avoids this problem because it relies on UDP and treats streams independently. If you want a deeper comparison, read my overview of HTTP/3 vs. HTTP/2 and then check whether an upgrade makes sense. For many sites, HTTP/2 already delivers strong gains, but I keep an eye on QUIC.
Hosting selection: what I look out for
For WordPress hosting, I look for a clean HTTP/2 implementation, TLS 1.3, Brotli and fast NVMe storage. Good providers supply optimized PHP workers, an object cache and edge functions. In comparisons, platforms with WordPress optimization often lead clearly because they keep latency and TTFB low. Test-winner overviews show webhoster.de with strong HTTP/2 support and good results for WordPress protocol speed. This short table summarizes the core and makes a clear choice easier:
| Hosting provider | HTTP/2 support | WordPress optimization | Rank |
|---|---|---|---|
| webhoster.de | Complete | Excellent | 1 |
Briefly summarized
I see HTTP/2 as a strong foundation, but speed only comes from clever priorities: modular assets, good caching, clean TLS and server settings. I sort out old HTTP/1.x tricks and replace them with splitting, preload and deliberate push. With a suitable CDN, optimized images and reliable cache headers, key figures such as FCP and LCP improve significantly. Solid hosts with HTTP/2, TLS 1.3 and Brotli provide the leverage for shorter TTFB and stable response times. If you set up WordPress in this way, you get real wordpress http2 performance advantages instead of just a new protocol line.