Gzip vs. Brotli determines loading time, file size and CPU budget in hosting. In this comparison, I show in practical terms when I enable which HTTP compression method, which level I use, and how this directly affects Core Web Vitals and costs.
Key points
- Compression ratio: Brotli saves 15-25 % more bytes than Gzip, especially for static assets.
- Speed: Gzip compresses faster on the fly; Brotli often decompresses faster in the browser.
- Static/dynamic: Brotli for pre-compressed files, Gzip for dynamic responses.
- Fallback: prioritize Brotli, keep Gzip as a compatible fallback.
- SEO/UX: smaller files reduce latency, strengthen Core Web Vitals and rankings.
Why HTTP compression drives hosting success
I rely on HTTP compression because it makes every response smaller and therefore faster over the network. Shorter transfers improve interactivity, reduce the perceived TTFB and stabilize the loading sequence. Every kilobyte counts, especially on mobile connections, and compression noticeably shrinks this footprint. In addition, I save bandwidth on the server, a real benefit under heavy traffic, and costs drop accordingly. Those who prioritize performance therefore consistently activate the right compression method everywhere: server, CDN and edge.
Gzip: strengths, levels and fields of application
Gzip is based on DEFLATE and in practice delivers 50-70 % smaller files with very short compression times. For dynamic HTML responses I often choose level 6, because it offers a good ratio of speed to savings. Under high throughput this is easy on the CPU and keeps latency low. For highly dynamic content I also use levels 4-5, depending on load, to further reduce on-the-fly time. Gzip remains indispensable as a fallback, as it works practically everywhere.
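The level trade-off is easy to observe with a quick stdlib experiment. A minimal sketch; the repetitive payload below is made up, and real pages compress somewhat differently:

```python
import time
import gzip

# Hypothetical payload: repetitive HTML-like markup standing in for a
# dynamic response; real HTML compresses somewhat differently.
payload = b"<li class='item'><a href='/product'>Product name</a></li>\n" * 500

for level in (1, 4, 6, 9):
    start = time.perf_counter()
    size = len(gzip.compress(payload, compresslevel=level))
    elapsed_ms = (time.perf_counter() - start) * 1000
    # Higher levels shave off bytes but cost more CPU time;
    # level 6 is the usual sweet spot for on-the-fly HTML.
    print(f"level {level}: {size:6d} bytes, {elapsed_ms:.2f} ms")
```

Running this on your own response bodies is the quickest way to see whether dropping from level 6 to level 4 under load costs you meaningful bytes.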
Brotli: advantages, levels and limits
Brotli uses LZ77, Huffman coding and a roughly 120 KB built-in dictionary of common web patterns. On average this shrinks HTML, CSS and JavaScript significantly more than Gzip, especially at high levels. I typically see 15-25 % fewer bytes compared to Gzip, which clearly reduces transfer time. Decompression in the browser is very fast, which relieves the render pipeline. For on-the-fly compression I use moderate levels (e.g. 4-6); for pre-compressed assets I prefer levels 8-11 in build processes.
Gzip vs Brotli in everyday hosting
I decide based on content and load profile: dynamic content tends toward Gzip, static toward Brotli. For CSS/JS, fonts and large HTML templates, pre-compression with Brotli pays off noticeably. For content that varies per request, compression time counts, so Gzip wins. Modern stacks run both in parallel: Brotli prioritized, Gzip as a fallback. If you want to dig deeper, a detailed comparison provides further key figures and specific use cases.
Comparison table: Key figures and support
The following table classifies the most important criteria for hosting setups and shows when which method fits best. It helps me make decisions based on file type, load and compatibility. I evaluate compression ratio, server overhead, browser support and impact on perceived speed. This is how I determine whether to compress on the fly or as a build step. Pre-compression with Brotli scales particularly well for large static bundles.
| Criterion | Gzip | Brotli | Effect in practice |
|---|---|---|---|
| compression ratio | approx. 50-70 % smaller | typically 15-25 % smaller than Gzip | Fewer bytes, faster transmission |
| Compression speed | Fast, especially at levels 1-6 | Slower at high levels (8-11) | Gzip advantageous for dynamic responses |
| Decompression | Fast | Often even faster | Render start looks smoother |
| Browser support | Almost complete | Very broad among modern browsers | Gzip as a compatible fallback |
| CPU usage | Low at low levels | Higher at high levels | Clearly weigh up build time vs. runtime |
I add TTFB and bandwidth to these key figures as decision factors. If CPU reserves are tight, I choose lower levels for live compression. In CI/CD pipelines, I pre-pack static files at high Brotli levels. In this way, I combine short response times with very small assets. The mix delivers consistently better loading experiences.
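Such a pre-compression build step can be sketched with the stdlib alone. The suffix list and the 1 KB threshold are assumptions taken from this article, and a real pipeline would also emit .br variants via a Brotli tool:

```python
import gzip
from pathlib import Path

# Text-based asset types worth pre-compressing; the set is illustrative.
TEXT_SUFFIXES = {".html", ".css", ".js", ".json", ".svg", ".xml"}
MIN_SIZE = 1024  # skip tiny files; header overhead eats the gain

def precompress(root: Path) -> list[Path]:
    """Write a .gz file next to each compressible asset under root."""
    written = []
    for path in sorted(root.rglob("*")):
        if path.suffix in TEXT_SUFFIXES and path.stat().st_size >= MIN_SIZE:
            target = path.with_name(path.name + ".gz")
            # mtime=0 keeps the .gz output byte-identical across builds,
            # so ETags derived from the file stay stable.
            target.write_bytes(gzip.compress(path.read_bytes(),
                                             compresslevel=9, mtime=0))
            written.append(target)
    return written
```

The server then only has to pick the pre-built variant instead of packing on every request.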
Configuration practice with Nginx and Apache
I activate Brotli and Gzip via modules, set sensible MIME types and adjust levels depending on server load. For Nginx, I use separate settings for on-the-fly compression and for pre-compressed files with .br/.gz extensions. In Apache, I configure modules such as mod_brotli and mod_deflate as well as .htaccess rules for caching and Vary headers. Pre-compression in the build remains important so that the server only delivers and does not have to pack constantly. If you are looking for a step-by-step guide, start with a dedicated HTTP compression configuration tutorial.
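A minimal Nginx sketch of this split might look as follows. The levels and MIME list are examples, and the brotli directives assume the third-party ngx_brotli module is built in:

```nginx
# On-the-fly compression for dynamic responses (moderate level).
# text/html is compressed by default and must not be listed in gzip_types.
gzip            on;
gzip_comp_level 5;
gzip_min_length 1024;
gzip_vary       on;   # emit "Vary: Accept-Encoding"
gzip_types      text/css application/javascript application/json image/svg+xml;

# Brotli via the ngx_brotli module (not part of stock Nginx)
brotli            on;
brotli_comp_level 5;
brotli_types      text/css application/javascript application/json image/svg+xml;

# Serve pre-compressed build artifacts instead of re-packing
gzip_static   on;     # serves file.gz when the client accepts gzip
brotli_static on;     # serves file.br when the client accepts br
```

The `*_static` directives are what make the build-time pre-compression pay off: the server only delivers the existing variant.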
Strategies: Dynamic vs. static
For dynamic responses, I compress on the fly at moderate levels. For static resources, I use Brotli at high levels and store the artifacts in the file system or in the CDN. This strategy relieves the CPU at runtime and reduces bytes to a minimum. I make sure the server selects the appropriate variant based on Accept-Encoding. This is how I serve modern browsers with Brotli and older clients reliably with Gzip.
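The variant selection can be sketched as a small helper. This is a simplification: real negotiation honors q-values in Accept-Encoding, and the function name is illustrative:

```python
def pick_variant(accept_encoding: str, has_br: bool, has_gz: bool) -> str:
    """Pick the served representation based on the Accept-Encoding header.

    Simplified sketch: ignores q-values; has_br/has_gz say whether a
    pre-compressed .br/.gz file exists next to the raw asset.
    """
    offered = {token.split(";")[0].strip().lower()
               for token in accept_encoding.split(",")}
    if "br" in offered and has_br:
        return "br"        # modern browsers over HTTPS
    if "gzip" in offered and has_gz:
        return "gzip"      # near-universal fallback
    return "identity"      # serve the raw file

print(pick_variant("gzip, deflate, br", has_br=True, has_gz=True))
print(pick_variant("gzip, deflate", has_br=True, has_gz=True))
```

Note the deliberate ordering: Brotli is checked first, so capable clients never fall through to the larger Gzip variant.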
SEO effects and core web vitals
Smaller files reduce latency and bring content on screen faster. I often see a better First Contentful Paint and a more stable Largest Contentful Paint. This is clearly noticeable on mobile devices with weak connections. I also save on data transfer, which measurably lowers costs at high traffic. These advantages pay off in visibility, conversion and user satisfaction.
Monitoring and tuning: measurably faster
I verify the effect of compression with lab and field measurements. Tools such as PageSpeed or RUM data show me FCP, LCP, TTFB and transfer sizes before and after adjustments. If CPU load is high, I lower levels; if files are too large, I raise the levels in build steps. Caching headers such as Cache-Control and ETag prevent unnecessary repacking and strengthen efficiency. Regular testing remains important because traffic patterns and asset sizes change.
Practical setup: Hybrid approach for WordPress & Co.
For WordPress I often choose Brotli for CSS/JS/fonts and Gzip for PHP-generated HTML pages. CDNs deliver the pre-compressed files, while the origin quickly packs dynamic responses. I pay attention to Vary headers to keep caches cleanly separated and to distinct ETags for the .br/.gz variants. If you want to fine-tune, look into the interplay of compression level and CPU load. This keeps the render chain light, the server load predictable and compatibility high.
Which files I do not compress
Not everything benefits from HTTP compression. Some formats are already optimally packed internally or require byte-range requests, where additional compression tends to interfere. I therefore generally leave them uncompressed:
- Images: JPEG/JPG, PNG, GIF, WebP, AVIF (already highly compressed)
- Video/audio: MP4, WebM, MOV, MP3, OGG, AAC
- Archives/containers: ZIP, 7z, RAR, ISO, PDF (often compressed), DMG
- Font formats: WOFF2 (uses Brotli internally), WOFF partly compressible, pack TTF/OTF in advance depending on setup
- Binary downloads that are frequently loaded via Range
By contrast, the following text formats should definitely be compressed: HTML, CSS, JavaScript, JSON, XML, SVG, web manifests and sitemaps. SVG, being XML, benefits greatly; WOFF2, on the other hand, does not, so I skip Content-Encoding there.
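The allow list above can be captured in a small helper. The MIME set is an assumption drawn from this article's lists, not an exhaustive standard:

```python
# Text-based types that benefit from Content-Encoding; already-packed
# formats (images, video, WOFF2, ZIP, ...) are deliberately excluded.
COMPRESSIBLE = {
    "text/html", "text/css", "text/plain",
    "application/javascript", "application/json",
    "application/xml", "text/xml", "image/svg+xml",
    "application/manifest+json",
}

def is_compressible(mime: str) -> bool:
    """Decide from the Content-Type whether to apply Content-Encoding."""
    base = mime.split(";")[0].strip().lower()
    # +json/+xml suffixes (e.g. atom, sitemap variants) are text-based too
    return base in COMPRESSIBLE or base.endswith(("+json", "+xml"))

print(is_compressible("image/svg+xml"))   # SVG is XML and compresses well
print(is_compressible("font/woff2"))      # WOFF2 already uses Brotli internally
```

A check like this placed in front of the compressor prevents wasting CPU on JPEGs and ZIP downloads.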
HTTP/2/HTTP/3 and TLS: Interaction with compression
HTTP/2 and HTTP/3 accelerate transport and multiplexing, but they do not replace payload compression. Header compression (HPACK/QPACK) only covers headers, not the body. Fewer bytes in the body therefore remain a clear advantage. Important: in practice, browsers only accept Brotli over HTTPS. Those still running plain HTTP usually see only Gzip as an option. In TLS termination chains, I make sure compression happens at the edge, close to the client, to minimize latency and egress.
Variant handling: Accept encoding, caches and ETags
Clean content negotiation determines cache hit rates. I consistently set the Vary header to Accept-Encoding so that proxies and CDNs separate variants correctly. For pre-packaged assets, I track Last-Modified and assign separate ETags per representation (.br/.gz/identity). CDNs should add Accept-Encoding to the cache key. It is important to rule out double compression: if a file already exists as .br, the server must not gzip it again. For byte ranges (e.g. video), I serve the uncompressed variant, since ranges refer to the encoded representation and caches can otherwise become inconsistent.
Fine-tuning: thresholds, levels and CPU budget
I work with minimum sizes so that very small files are not packed unnecessarily (typically a 1-2 KB threshold). For dynamic responses, I choose Gzip levels 4-6 or Brotli 4-6. For build artifacts, I prefer Brotli 9-11, as long as build time stays reasonable. Rules of thumb that have proven themselves:
- Small HTML snippets and API responses: Gzip 4-5 or Brotli 4-5
- Large bundles (JS/CSS > 50 KB): Brotli 8-11 in advance
- Very high live traffic volume: reduce levels to avoid queues and TTFB peaks
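These rules of thumb can be expressed as a tiny decision helper. The thresholds and levels are the ones from this section, not universal constants:

```python
def pick_compression(size_bytes: int, dynamic: bool,
                     high_load: bool) -> tuple[str, int]:
    """Map the rules of thumb to a (method, level) pair."""
    if size_bytes < 1024:            # below the minimum-size threshold
        return ("identity", 0)       # header overhead would eat the gain
    if dynamic:
        # on-the-fly: drop the level under heavy traffic to protect TTFB
        return ("gzip", 4 if high_load else 5)
    if size_bytes > 50 * 1024:
        return ("brotli", 11)        # large static bundle: pack in the build
    return ("brotli", 8)

print(pick_compression(300, dynamic=True, high_load=False))
print(pick_compression(200_000, dynamic=False, high_load=False))
```

In practice I would make these numbers configurable and revisit them whenever traffic patterns change, as the monitoring section below argues.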
It is important to keep an eye on CPU peaks. If the compression pipeline jams, the perceived TTFB deteriorates. I then lower the live levels and shift savings to the build.
Safety: compression without risk
Transport encryption via TLS is secure, but side-channel attacks on content compression have been known for years (keyword: BREACH). In practical terms this means: pages that contain secret tokens and at the same time reflect user input I compress carefully or not at all. For example, I separate form pages with CSRF tokens from reflective parameters, minimize echoed content or deactivate compression on those endpoints. Static assets are not affected, so I continue to compress them aggressively.
CDN, serverless and object storage: clarifying responsibilities
In CDN setups, I leave edge compression active and additionally upload pre-compressed artifacts. Correct metadata is important: Content-Type and Content-Encoding must be right, otherwise CDNs serve wrong variants or compress twice. In serverless functions, I keep live levels conservative (Gzip 4-5 or Brotli 4) to avoid cold starts and CPU spikes. For object storage (e.g. as origin), I store .br/.gz next to the raw file; the CDN selects based on Accept-Encoding. The build pipeline generates all variants deterministically so that ETags remain stable.
Checking and debugging: How to check the effect
I regularly validate delivery with browser DevTools: in the network view I check Content-Encoding, bytes sent and whether the server responds from cache. I also verify that the Vary header is set and that Brotli is really delivered to HTTPS clients. For API responses, I compare compressed vs. uncompressed sizes and observe TTFB under load. The error patterns I notice usually are: missing Vary header (cache poisoning), double compression (br+gz), mismatched Content-Type/Content-Encoding pairs, or unnecessary compression of tiny files. I fix these cases first before raising levels further.
Cost effect briefly calculated
Compression not only saves time, but also Egress volume. For example, if you deliver 1 TB of text traffic per month and use Brotli to save an additional 20 % on average compared to Gzip, you will reduce your outgoing traffic by around 200 GB. Depending on the tariff, these savings add up significantly. On the compute side, higher live levels cost CPU time. I therefore balance egress costs against CPU budget and move expensive levels to the build, where they only occur once.
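The back-of-the-envelope calculation from above looks like this. The €/GB egress price is a made-up placeholder; plug in your provider's actual tariff:

```python
monthly_text_traffic_gb = 1000   # ~1 TB of compressible text traffic
extra_brotli_savings = 0.20      # Brotli saves ~20 % on top of Gzip
egress_price_per_gb = 0.08       # hypothetical tariff in EUR/GB

saved_gb = monthly_text_traffic_gb * extra_brotli_savings
saved_eur = saved_gb * egress_price_per_gb
print(f"saved: {saved_gb:.0f} GB/month, about {saved_eur:.2f} EUR/month")
```

The same three lines, fed with your real traffic split, tell you quickly whether moving expensive levels into the build is worth the pipeline work.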
Edge cases: streaming, proxies and small files
For Server-Sent Events or streamed responses, I prefer Gzip at low levels or disable compression entirely so that chunks flow without delay. Behind older proxies that mangle Accept-Encoding, I keep Gzip active as a robust fallback. And for files under ~1 KB I skip compression altogether, as header overhead and latency often cancel out the gain.
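For streamed responses, the key detail is flushing the compressor per chunk so events leave immediately instead of waiting in its internal buffer. A stdlib sketch with illustrative SSE payloads:

```python
import zlib

# Low level + per-chunk sync flush: each event is emitted immediately
# instead of sitting in the compressor's buffer. wbits=31 selects gzip
# framing so browsers can decode it as Content-Encoding: gzip.
compressor = zlib.compressobj(level=3, wbits=31)

def compress_chunk(chunk: bytes) -> bytes:
    return compressor.compress(chunk) + compressor.flush(zlib.Z_SYNC_FLUSH)

events = [b"data: tick 1\n\n", b"data: tick 2\n\n"]
wire = b"".join(compress_chunk(event) for event in events)
wire += compressor.flush(zlib.Z_FINISH)   # terminate the gzip stream
print(len(wire), "bytes on the wire")
```

The trade-off is real: sync flushes cost compression ratio, which is exactly why low levels or no compression at all are the sane defaults for SSE.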
Summary: The smart mix pays off
I use Brotli preferably for static files and keep Gzip ready as a reliable fallback. I aim for fast levels for dynamic responses and maximum savings in builds. In this way, I combine short TTFB with very small transfers and sustainably strengthen Core Web Vitals. With clean configuration, pre-compression and monitoring, the stack remains fast and stable. If you use this mix consistently, you will notice the loading-time benefits immediately.


