
Why hosting tariffs rarely reflect realistic user numbers

Hosting tariffs often promise thousands of simultaneous users, but in practice shared resources and fair-use rules drag performance down significantly. I show why these advertised user numbers are inflated and how limits on CPU, RAM and I/O slow down real visitor flows.

Key points

  • Shared limits: shared servers throttle peak loads and cause long loading times.
  • Fair use: "unlimited" tips over into hard limits with above-average use.
  • Performance myth: modern hardware is no substitute for optimization and isolation.
  • Cost traps: low entry prices lead to expensive upgrades as the company grows.
  • Transparency: clear information on CPU shares, I/O and burst behavior is crucial.

Why user figures in tariffs are rarely correct

Marketing promises big numbers, but shared servers also share their performance. All it takes is one neighboring account with faulty code and your response time jumps from under 500 milliseconds to over 1000. I have seen a fair-use clause suddenly halve the speed even though the site itself was properly optimized. Providers calculate with average values, not with real traffic peaks from campaigns, media mentions or seasonality. If you want to know how such promises come about, read up on overselling in web hosting and critically examine the assumptions behind "unlimited".

Fair use policy and shared resources

A tariff with a "traffic flat rate" and plenty of storage sounds great, but fair use throttles above-average usage. In measurements, conversion drops by 64 percent at 5 seconds of loading time compared to 1 second, and sales are painfully lost. Run the example: 1,000 visitors, a €100 shopping cart, a few seconds more waiting time, and at the end of the month €19,700 can quickly be missing. A generous 52 GB of storage is of little help if CPU shares, entry processes or I/O limits throttle you under load. That is why I always plan upper limits for simultaneous processes and look at the limits first, not at bold marketing figures.
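The shortfall in this example can be sketched as a back-of-the-envelope calculation. Only the €100 cart value and the 64 % conversion drop come from the text; treating the 1,000 visitors as a daily figure and assuming a 1 % baseline conversion rate are my own illustrative assumptions:

```python
# Only the cart value and the conversion drop come from the text above;
# daily visitors and the baseline conversion rate are assumptions.

def monthly_revenue_loss(daily_visitors, cart_value_eur,
                         baseline_conversion, conversion_drop):
    """Estimate revenue lost over a 30-day month due to slow loading."""
    baseline_orders = daily_visitors * 30 * baseline_conversion
    lost_orders = baseline_orders * conversion_drop
    return lost_orders * cart_value_eur

loss = monthly_revenue_loss(1000, 100, 0.01, 0.64)
print(f"~€{loss:,.0f} missing per month")  # ~€19,200
```

With these assumed inputs the estimate lands close to the €19,700 cited above; plug in your own traffic and conversion rate to see how quickly waiting time turns into lost revenue.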

The performance myth in shared hosting

Modern CPUs and NVMe SSDs sound powerful, but without isolation a website gets no reliable throughput. Good providers set limits for CPU, RAM and I/O, but under peak load these do not always kick in fast enough. I therefore also check entry processes and max_execution_time, because they mark the bottleneck precisely at peak times. Tools such as OPcache, Redis and server-side caching help noticeably, but neighbor load remains a risk. If you want to understand throttling, first read up on how hosting throttling works and observe real response times under load, not just synthetic benchmarks.

Reality check on the promise of „unlimited“

"Unlimited" rarely means limitless resources; instead, a "practical limit" takes effect as soon as an account uses more than the average. CPU and RAM are the scarcest commodities in shared environments, and a single container can strain the whole host system. Once the limit is exceeded, throttling, short blocks or automatic process kills follow, often without clear feedback. Additional costs for SSL variants, email add-ons or extended PHP options quickly render entry-level prices obsolete. I therefore evaluate usage data monthly and weight limits more heavily than marketing slogans about bandwidth.

Marketing vs. reality in shared hosting
| Advertising claim | Hidden limit | Impact | Typical way out |
|---|---|---|---|
| Unlimited traffic | Fair use + I/O cap | Throttling at peaks | Cache + CDN + VPS |
| Thousands of simultaneous users | Entry processes | 503s/timeouts | Increase process limit |
| Unlimited storage | Inode/backup quota | Upload errors | Declutter or upgrade |
| Fast thanks to NVMe | CPU shares | Slow PHP jobs | OPcache/isolation |

If you read the figures correctly, you plan buffers for peak loads and keep exit options ready in case limits take effect earlier than expected. I rely on measurable limit values such as IOPS, RAM per process and CPU time instead of show terms such as "Power" or "Turbo". The key question is how many simultaneous requests the tariff can handle without throttling. Without clear information, I calculate conservatively and test in parallel on a separate staging system. This keeps costs in check while real visitors continue to be served smoothly.

What do statements such as „10,000 visitors/month“ mean?

Monthly figures conceal peaks, because visitors do not arrive evenly but in waves. A short spike generates more simultaneous requests than half a day of normal operation. If entry processes or CPU shares are then too small, the site times out within seconds. Outages quickly cost five-figure sums per minute, and lost trust lasts far longer. If you want to reduce such risks, check load profiles and avoid miscalculating traffic before campaigns go live.
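Little's Law (concurrency = arrival rate × time in system) makes the wave effect concrete. The pages per visit, the "viral hour" share and the 300 ms server time below are illustrative assumptions, not measured values:

```python
# Little's Law: concurrency = arrival rate x time in system.
# Pages per visit, the viral-hour share and the 300 ms server time
# are illustrative assumptions.

def concurrency(visitors, window_seconds, requests_per_visit, service_seconds):
    """Average number of simultaneous in-flight requests."""
    arrival_rate = visitors * requests_per_visit / window_seconds
    return arrival_rate * service_seconds

# The same 10,000 monthly visitors, dynamic requests taking 300 ms each:
even_spread = concurrency(10_000, 30 * 24 * 3600, 5, 0.3)
viral_hour = concurrency(3_000, 3600, 5, 0.3)  # assume 30 % arrive in one hour
print(f"evenly spread: ~{even_spread:.3f} concurrent requests")
print(f"viral hour:    ~{viral_hour:.2f} concurrent requests")
```

The spike needs over 200 times the concurrency of the even spread, which is exactly what a monthly visitor figure hides.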

WordPress: Technology versus tariff

HTTP/3, server-side caching and image compression noticeably reduce loading times, but hard limits still cap peak load. A full-page cache reduces PHP calls, while OPcache keeps compiled scripts in memory. Redis takes load off database queries, but only if CPU shares are not already exhausted. I activate technical optimizations first, then measure real concurrency before switching to a larger plan. This makes it clear whether the bottleneck lies in the code, the database or the tariff.

When an upgrade really makes sense

A switch to a VPS or dedicated server is worthwhile if simultaneous users regularly bump against entry-process limits. If 503 errors pile up despite caching and a lean theme, compute capacity is lacking, not "traffic". I monitor CPU time per request, IOPS and memory per PHP process over several days. If the curve stays high even at night, I scale horizontally via cache/CDN or vertically via isolated resources. Only when isolation is guaranteed does a more expensive package really pay off.

Understanding and checking practical key figures

Transparent providers state CPU shares, I/O throughput, RAM per process and burst handling as hard figures. Without this information, load capacity can only be estimated, which makes planning harder. I request concrete entry-process figures and ask how many simultaneous requests the stack can really handle. Time windows also matter: does the hoster throttle immediately or only after a 60-second peak? These details determine whether campaigns run smoothly or get stuck in bottlenecks.

How I realistically calculate capacity

Instead of vague user numbers, I calculate with concurrency and response times. A simple rule of thumb: maximum dynamic requests per second ≈ (concurrent processes) / (average server time per request). If a tariff allows 20 entry processes and a dynamic request needs 300 ms of server time, roughly 66 RPS are theoretically possible, but only as long as CPU, RAM and I/O are not the limiting factor. Realistically, I deduct a 30-50 percent safety margin because cache misses, slow queries and PHP startup costs vary.
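The rule of thumb and the safety margin fit in a few lines; the figures match the 20-process/300 ms example above:

```python
# Rule of thumb: max dynamic RPS = entry processes / server time,
# minus an optional deduction for cache misses, slow queries and
# PHP startup costs. Figures match the example in the text.

def max_rps(entry_processes, service_seconds, safety_margin=0.0):
    """Theoretical dynamic requests/second with an optional margin."""
    return entry_processes / service_seconds * (1 - safety_margin)

print(f"theoretical:      ~{max_rps(20, 0.3):.0f} RPS")
print(f"with 30 % margin: ~{max_rps(20, 0.3, 0.30):.0f} RPS")
print(f"with 50 % margin: ~{max_rps(20, 0.3, 0.50):.0f} RPS")
```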

  • Worst case: calculate without cache and with p95 latency, not the mean.
  • Best case: high cache hit rate, static delivery, CDN active; then I/O and network matter more.
  • Mixed: an 80/20 rule (80 % cached, 20 % dynamic) models many shops and blogs well.

The decisive factor is the dwell time of a request in the stack: a checkout with 1.2 s of server time displaces six faster blog requests. That is why I test scenarios separately (catalog, search, shopping cart, checkout) instead of averaging everything. Only then can I see where the bottleneck breaks first.
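The displacement effect is simple arithmetic. The 1.2 s checkout time is as stated in the text; the 0.2 s blog request time is an assumption chosen to match the factor of six:

```python
# Capacity is occupied process-seconds, so one slow request displaces
# several fast ones. The 0.2 s blog request time is an assumption.

checkout_s, blog_s = 1.2, 0.2
print(f"one checkout occupies a worker as long as "
      f"{checkout_s / blog_s:.0f} blog requests")  # 6 blog requests
```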

Load tests: How to measure real load-bearing capacity

I plan structured load tests because synthetic "peak measurements" are often misleading. A procedure that has proven itself:

  • Warm-up: fill caches, bring OPcache up to temperature, 5-10 minutes of traffic at a low rate.
  • Ramps: increase in 1-2 minute steps, e.g. from 10 to 200 virtual users, not in jumps.
  • Mix: include a realistic share of login-sensitive pages (not cached), e.g. 20-40 %.
  • Measure: p50/p95/p99, error rate (5xx/timeouts), queue length/backlog, CPU steal, iowait.
  • Stability: hold the plateau for 10-15 minutes so that throttling mechanisms (fair use) are actually triggered.
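For the measurement step, a minimal sketch of how p50/p95/p99 and the error rate fall out of raw samples. The latency data here is randomly generated stand-in data, not a real measurement:

```python
import math
import random

# Stand-in sample data: 980 "normal" responses plus 20 throttled ones.
random.seed(7)
latencies_ms = [random.gauss(250, 60) for _ in range(980)] + [3000.0] * 20
statuses = [200] * 980 + [503] * 20

def percentile(samples, p):
    """Nearest-rank percentile, sufficient for load-test reporting."""
    ordered = sorted(samples)
    rank = math.ceil(p / 100 * len(ordered)) - 1
    return ordered[rank]

p50, p95, p99 = (percentile(latencies_ms, p) for p in (50, 95, 99))
error_rate = sum(s >= 500 for s in statuses) / len(statuses)
print(f"p50={p50:.0f} ms  p95={p95:.0f} ms  p99={p99:.0f} ms  "
      f"errors={error_rate:.1%}")
```

With 2 % throttled requests, p99 already lands in the throttling plateau while p50 still looks healthy, which is exactly why the mean alone misleads.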

Important: tools report different figures. I compare synthetic load tests (artificial load) with RUM data (real user behavior). If p95 values spike only for real users, the database or an external API is usually the culprit, not the web server front end.

Cache hit rate and logged-in users

Shared tariffs thrive on a high cache hit rate. WordPress bypasses the page cache for logged-in users, in the shopping cart and often for WooCommerce elements. Target values I aim for:

  • Public blog/magazine: 90-98 % cache hit rate is achievable.
  • Shop: 70-90 % depending on the share of logged-in users and personalization.
  • Community/SaaS: 30-70 %; the focus shifts to object cache and database optimization.

Fragment caching (regenerating only individual blocks), preloading/cache warming after deployments and short but meaningful TTLs all help. I monitor whether cookies or query parameters unintentionally bypass the cache. Even small rules (no cache for certain parameters, normalized URLs) increase the hit rate and massively relieve CPU and I/O.
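Why those few percentage points of hit rate matter so much can be shown in a few lines; the 50 requests/second of total traffic is an assumed figure:

```python
# Every cache miss becomes a dynamic request that occupies an entry
# process. Total traffic of 50 requests/second is an assumed figure.

def dynamic_rps(total_rps, cache_hit_rate):
    """Requests per second that actually reach PHP and the database."""
    return total_rps * (1 - cache_hit_rate)

for hit_rate in (0.98, 0.90, 0.70):
    print(f"hit rate {hit_rate:.0%}: {dynamic_rps(50, hit_rate):.1f} dynamic RPS")
```

Dropping from 98 % to 70 % multiplies the PHP load by fifteen even though total traffic is unchanged.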

Typical hidden brakes in everyday life

In addition to obvious limits, many small brakes have a cumulative effect in shared operation:

  • Cron jobs and backups: server-wide virus scans or snapshot windows increase I/O latency; schedule your own media or feed generation outside these windows.
  • Image and PDF processing: on-the-fly generation eats RAM and CPU. Better to pre-generate (build process, queue) and decouple the load.
  • External APIs: slow third-party providers drag out response times. Decouple them with timeouts, circuit breakers and asynchronous queues.
  • Database bottlenecks: missing indices, LIKE '%...%' searches and N+1 queries hit I/O limits earlier than expected.
  • Bot traffic: crawlers increase load without generating revenue. Rate limiting and aggressive caching rules reduce the damage.

I regularly check slow logs, identify recurring peaks (e.g. hourly exports) and shift them to off-peak times. Many "mysterious" dips turn out to be colliding background jobs.

Monitoring and alerting in practice

Performance needs the same protection as availability: clear thresholds and alerts. I set SLOs for TTFB p95 (e.g. < 600 ms for cache hits, < 1200 ms for dynamic pages), error rate (≤ 1 % 5xx) and resources (CPU steal < 5 %, iowait < 10 %). Alerts must fire early, before fair-use throttling takes effect.

  • Server metrics: CPU (user/system/steal), RAM/swap, I/O (IOPS, MB/s, iowait), open files/processes.
  • PHP-FPM: active/waiting workers, max_children hit rate, request duration distribution.
  • Database: slow queries, connection count, buffer pool hit rate, locks.
  • Application metrics: cache hit rate, queue length, 95th/99th percentile per endpoint.

Without this view, you are flying blind. Shared environments rarely forgive that, because headroom is small and throttling kicks in abruptly.

Migration paths and cost planning

I plan an exit strategy from the start so that growth does not end in chaos. Three typical paths:

  • Better isolated shared plan: higher entry-process limits, dedicated CPU shares, prioritized I/O; suitable for moderate peaks.
  • Managed WordPress/stack: specific optimizations (object cache, image processing, CDN integration). Beware of feature limits and extra costs.
  • VPS/dedicated: full isolation, but more maintenance effort or a management surcharge. Worthwhile if p95 latencies remain high despite optimization.

Costs often tip over because of side issues: additional staging environments, email delivery with reputation management, extended backups, more PHP workers. I reserve 20-30 % of the budget as a buffer for growth and unavoidable load fluctuations. That way a migration can be planned instead of ending in an emergency move.

Checklist before signing the contract

I clarify these questions with providers before I sign:

  • CPU: how many vCores or percentage shares are guaranteed? How is "burst" defined?
  • Processes: concrete figures for entry processes, PHP-FPM workers and NPROC limits?
  • I/O: IOPS and MB/s caps, separately for read and write? How are large files handled?
  • Database: max_user_connections, query limits, memory for temporary tables?
  • Throttling window: does fair use take effect immediately or after a defined period? How long does the throttle last?
  • Backups: frequency, storage, restore duration, and in which time window do system backups run?
  • Isolation: containers/limits per account? Protection from "noisy neighbors"?
  • Transparency: access to logs, metrics, PHP-FPM status and error logs without a support ticket?
  • Staging/deploy: are there staging copies, rollbacks, secure deploy options?

If you have clarified these points properly, you are less likely to experience unpleasant surprises - and can make reliable commitments to performance targets.

Bots, crawlers and the difference between „traffic“ and „users“

In shared environments, it is not only the quantity of requests that matters, but their quality. Aggressive crawlers, price bots or monitoring agents generate a lot of load without any value. My approach:

  • Rate-limit automated access on the server side instead of blocking it at the application level.
  • Cache static assets generously, reduce variants and set consistent cache keys.
  • Prioritize human visitors by protecting particularly expensive endpoints (search, reports).

Many "10,000 visitors" turn out to be 60 % bots. If you separate out the real users, you free up resources for paying customers instead of crawlers.

Database and PHP: small adjustments, big impact

Shared hosting does not forgive inefficient access. Two measures are disproportionately effective:

  • Index hygiene: index frequently filtered fields, simplify JOINs, check EXPLAIN regularly. A single index quickly saves 10-100 ms per request.
  • PHP memory: set realistic memory_limit values per process and an appropriate OPcache size. Too small means constant recompiles; too large risks early out-of-memory kills.

I look at p95 memory per PHP process and extrapolate to the maximum number of workers. If the result comes close to the RAM limit, OOM kills or hard throttling loom, regardless of "unlimited" traffic.
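That extrapolation can be written down directly; all three input figures below are assumptions for illustration:

```python
# All three inputs are assumed figures for illustration: 96 MB p95
# per PHP process, 20 workers, a 2048 MB plan limit.

def worker_memory_headroom(p95_mb_per_process, max_workers, ram_limit_mb):
    """RAM left when every worker hits its p95 footprint at once;
    a negative result means OOM kills loom under full load."""
    worst_case_mb = p95_mb_per_process * max_workers
    return ram_limit_mb - worst_case_mb

headroom = worker_memory_headroom(96, 20, 2048)
print(f"headroom at full load: {headroom} MB")  # 128 MB
```

128 MB of headroom on a 2 GB plan is uncomfortably tight: one memory-hungry request or a PHP upgrade can tip it negative.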

Short case studies from practice

A blog article went viral, but the tariff with its "traffic flat rate" hit its limits within minutes because entry processes were scarce. A small shop saw slow checkouts during flash sales even though the page cache was active; the database choked on I/O caps. A portfolio site stayed fast until a neighboring account started on-the-fly backups and response times doubled. A SaaS form tipped into timeouts because max_execution_time was set too strictly and aborted requests. A switch to isolated resources plus careful optimization solved all four cases without complicating the architecture.

Summary and clear steps

Inflated user numbers in tariffs ignore shared resources, fair-use rules and hard limits. If you want to scale reliably, check entry processes, CPU shares, I/O and RAM per process before signing a contract. I rely first on caching, OPcache, image optimization and, if necessary, Redis, then measure load peaks with realistic scenarios. After that I decide between a better isolated shared plan, a VPS or a dedicated server, depending on simultaneous requests and error rate. That way hosting tariffs deliver real value for money instead of expensive surprises when growth arrives.
