
Why cheap web hosts oversell hosting – technical background explained

Low rates starting at one euro usually only work economically with overselling: hosting providers sell more CPU, RAM, and I/O than the hardware can deliver at the same time. I will show why this calculation works, which limits apply, and how I recognize risky offers, including sensible alternatives without constant bottlenecks.

Key points

The following bullet points provide a quick overview before I go into more detail.

  • Economics: Low prices demand utilization beyond the comfort level.
  • Technology: Strict CPU, RAM, and I/O limits enforce throttling.
  • Risks: Overcrowding exacerbates security and neighborhood problems.
  • Performance: Fluctuating response times reduce SEO and conversion rates.
  • Alternatives: Transparent resources, VPS, and managed offerings.

What exactly does overselling mean in hosting?

By overselling I mean the sale of more resources than a server can provide in parallel. Advertising promises "unlimited visitors," many domains, and "up to" storage, but the machine can never deliver these amounts to everyone at the same time because physics and operating-system limits stand in the way. In shared environments, hundreds of projects share CPU cores, RAM, storage, and network interfaces. The calculation works as long as the majority of customers remain well below the booked values and only cause isolated peaks. If the load distribution is disrupted by growth, bots, cron jobs, or unoptimized plugins, I notice it as jerky loading times, timeouts, and sporadic 500 errors, i.e., clearly measurable bottlenecks.

Why cheap web hosting "needs" overselling

One euro per month barely covers hardware, electricity, cooling, licenses, and support, so the calculation spreads the costs over quantity. The provider stacks many accounts on the same hosts and increases occupancy until the economic mark is reached. I rarely pay for dedicated resources, intensive monitoring, or complex security in these tariffs, which is why the platform is highly automated and tends to throttle rather than scale during peaks. "Unlimited traffic" often just means that there is no fixed volume limit, while the usable bandwidth per customer decreases under load. The tighter the margins, the tighter the limits become and the more frequently throttling mechanisms are triggered during the course of the day.
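
To make that cost pressure tangible, here is a deliberately rough back-of-envelope calculation in Python; all figures (server cost, overhead, margin target) are illustrative assumptions, not real provider numbers.

```python
# Back-of-envelope: how many 1-euro accounts does one shared host need
# just to cover its running costs? All figures are illustrative assumptions.

server_cost = 150.0      # EUR/month: hardware amortization, power, cooling, rack space
overhead = 100.0         # EUR/month: licenses, network, support share, billing
price_per_account = 1.0  # EUR/month headline price
margin_target = 0.30     # desired gross margin (30 %)

break_even = (server_cost + overhead) / price_per_account
with_margin = break_even / (1 - margin_target)

print(f"Accounts to break even: {break_even:.0f}")
print(f"Accounts for a {margin_target:.0%} margin: {with_margin:.0f}")
# -> roughly 250 and 357 accounts on a single machine under these assumptions
```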

Technical basics and limits on shared servers

On a shared host, many accounts run as separate users, but they share cores, RAM pools, SSDs, and the network interface. Control is exercised via CPU time, memory consumption, number of processes, and I/O speed per account; anyone who exceeds the limits is automatically throttled so that the host as a whole remains responsive. In everyday life I see this as sudden drops in PHP-FPM throughput or a hard cap on simultaneous processes, which has a direct impact during traffic peaks. It becomes even clearer in multi-tenant setups with virtualization or containerization, which define behavior using cgroups, quotas, and schedulers. If you want to better understand the isolation levels, click through the compact multi-tenant guide, which correctly classifies terms such as bare metal, hypervisor, and shared hosting.
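
For a concrete look at how such limits are enforced, the following sketch reads the cgroup v2 files the Linux kernel uses to cap CPU, memory, processes, and I/O for the current process. It assumes a unified cgroup v2 hierarchy; many shared hosts use proprietary layers such as CloudLinux LVE with different interfaces, so treat this as an orientation, not a universal probe.

```python
# Minimal sketch: inspect the cgroup v2 limits enforced for the current
# process. Paths assume a unified cgroup v2 hierarchy on Linux.
from pathlib import Path

def read_cgroup_file(name: str) -> str:
    # /proc/self/cgroup holds the cgroup path of the current process,
    # e.g. "0::/user.slice/user-1000.slice/session-1.scope"
    rel = Path("/proc/self/cgroup").read_text().strip().split("::")[-1]
    f = Path("/sys/fs/cgroup") / rel.lstrip("/") / name
    return f.read_text().strip() if f.exists() else "n/a"

print("cpu.max    :", read_cgroup_file("cpu.max"))     # e.g. "20000 100000" = 20 % of one core
print("memory.max :", read_cgroup_file("memory.max"))  # hard RAM limit in bytes, or "max"
print("pids.max   :", read_cgroup_file("pids.max"))    # cap on simultaneous processes
print("io.max     :", read_cgroup_file("io.max"))      # per-device IOPS/throughput caps
```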

The business logic behind 1-euro tariffs

The margin in low-price models does not come about through magic, but through economies of scale and statistical utilization. A greatly simplified example: A host with 32 vCPUs, 128 GB RAM, and fast NVMe can, if planned properly, easily support 80–120 average WordPress sites. In the cheapest segments, however, 200–400 accounts end up on it. If 90% of these projects only see a few visitors per day, the measured load throughout the day is within the green range, even if more resources were "sold" in total than are physically available. Cost blocks such as data center space, hardware depreciation, licenses, and support are allocated to as many accounts as possible. The result is not "evil," but a calculated trade-off: low monthly fees in exchange for a higher probability of bottlenecks in peaks and less individually tailored performance optimization.
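
The same bet can be written down as a probability. A minimal sketch, assuming 300 accounts on a host that comfortably serves about 30 concurrently active sites, with each site independently active 5% of the time; all numbers are illustrative:

```python
# The statistical bet behind overselling, as a binomial tail probability.
from math import comb

accounts, capacity, p_active = 300, 30, 0.05

# P(more than `capacity` accounts active at the same instant), assuming
# independent activity — the assumption overselling quietly relies on.
p_overload = sum(
    comb(accounts, k) * p_active**k * (1 - p_active)**(accounts - k)
    for k in range(capacity + 1, accounts + 1)
)
print(f"Expected concurrently active: {accounts * p_active:.0f}")
print(f"P(overload) per sampled instant: {p_overload:.4%}")
# The bet collapses when activity correlates (campaigns, bot waves):
# push p_active toward 0.12 and the same formula makes overload the norm.
```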

The calculation changes when the assumptions no longer apply: several "loud" neighbors, bot waves, security incidents, or seasonal peaks overlap. Then limits kick in—and I pay the difference in the form of longer response times, limited processes, and temporary unavailability.

How overselling leads to bottlenecks in everyday life

Simultaneously active sites compete for CPU, so even simple spikes—newsletters, social pushes, campaigns—cause latency and timeouts. When RAM becomes scarce, the system pushes data into swap and processes wait for free pages, which noticeably slows down dynamic applications such as shops. The SSD is not a bottomless pit: many parallel read and write operations increase the queue length, and database and cache accesses begin to stumble. If network congestion is added to the mix, the effective throughput per account drops precisely when there is additional traffic. Another risk remains bad neighbors: spam apps, compromised instances, or faulty scripts burden the machine and drag down the IP reputation for outgoing emails.
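
On Linux kernels with pressure stall information (PSI, kernel 4.20 and later), these contention symptoms are directly measurable, at least where the host exposes /proc/pressure. A quick check, runnable wherever those files are readable:

```python
# Quick contention check: PSI files report what share of time tasks were
# stalled waiting for CPU, memory, or I/O — the "bad neighbor" symptoms.
from pathlib import Path

for resource in ("cpu", "memory", "io"):
    path = Path("/proc/pressure") / resource
    if not path.exists():
        print(f"{resource}: PSI not available on this kernel")
        continue
    # first line looks like: "some avg10=1.23 avg60=0.98 avg300=0.40 total=12345678"
    some = path.read_text().splitlines()[0]
    avg10 = dict(field.split("=") for field in some.split()[1:])["avg10"]
    print(f"{resource:6s} stall share (last 10 s): {avg10} %")
```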

Typical hidden limits in detail

Marketing likes to use the term "unlimited," but the fine print contains strict limits that are crucial in day-to-day business:

  • Entry processes/simultaneous processes: Limit the number of parallel PHP handlers or CGI instances. Reaching the limit results in 508/503 errors.
  • CPU time: It is not only the number of cores that matters, but also the allocated CPU time over an interval. Exceeding it triggers throttling.
  • RAM/memory limit: Applies per process and per account. If set too low, PHP scripts collapse or caches "forget" entries.
  • I/O throughput and IOPS: Low values make databases appear sluggish, even though "SSD/NVMe" is advertised.
  • Inodes: Number of files/directories. Many small files (e.g., image variants, cache fragments) can quickly exceed the limit.
  • Mail rate limits: Sending per hour/day. Newsletters or shop transaction emails come under pressure.
  • Cron frequencies: Intervals that are too long prevent tasks from being processed in a timely manner (e.g., order imports, feeds).

I therefore do not evaluate tariffs based on "unlimited," but rather on the specific figures behind these levers. One of them, the inode quota, is easy to check yourself, as the sketch below shows.
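
A quick inode self-check: a small Python sketch that counts files and directories under a web root so the result can be compared against the plan's inode quota; the path is a placeholder to adjust.

```python
# Count inodes (files + directories) under a web root.
import os

def count_inodes(root: str) -> int:
    total = 0
    for _dirpath, dirnames, filenames in os.walk(root):
        total += len(dirnames) + len(filenames)
    return total + 1  # +1 for the root directory itself

webroot = "/var/www/example"  # hypothetical path — adjust to your account
print(f"{count_inodes(webroot)} inodes used under {webroot}")
# Image variants and cache fragments often dominate; pruning them is the
# usual fix when an inode quota fills up.
```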

Security risks due to overloaded servers

The denser the occupancy, the greater the attack surface, because many outdated applications, weak passwords, or insecure themes collectively open more gateways. In low-cost setups, monitoring often runs automatically and reacts quickly, but rarely holistically, meaning that subtle anomalies remain undetected for longer. Backups are sometimes only performed weekly or as an add-on package, which worsens recovery and RPO/RTO when I need them least. In addition, the quality of account isolation determines whether a compromise remains local or has side effects on neighboring projects. I reduce this risk by ensuring clear update policies, malware scanning, restrictive file permissions, and tested restore paths, i.e., real hygiene.
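
One of these hygiene steps is easy to automate. The following sketch flags world-writable files and directories under a (placeholder) web root, a common foothold after a neighboring compromise:

```python
# Minimal permission audit in the spirit of "restrictive file permissions":
# report anything under the web root that any local user may write to.
import os
import stat

def world_writable(root: str):
    for dirpath, dirnames, filenames in os.walk(root):
        for name in dirnames + filenames:
            path = os.path.join(dirpath, name)
            try:
                mode = os.lstat(path).st_mode  # lstat: don't follow symlinks
            except OSError:
                continue
            if mode & stat.S_IWOTH:            # the o+w bit is set
                yield path

for offending in world_writable("/var/www/example"):  # hypothetical path
    print("world-writable:", offending)
```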

Email deliverability and IP reputation

Overbooked platforms bundle many accounts on a few IP addresses. Even one neighbor with spam scripts is enough to damage your reputation—the result is bounces, delays, and delivery to spam folders. I recognize this by increasing soft bounces, unusual queue times, and a rising number of "mail not received" support cases. Reputable providers isolate sending paths better, set strict rates, and react proactively. With the cheapest plans, the only option is often to throttle sending or switch to dedicated sending paths on a different plan. Anyone who generates revenue through newsletters, transactional emails, or notifications should factor this risk into their choice of plan.
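
A simple way to spot-check the shared IP's reputation is a DNS blocklist query, shown below as a hedged sketch; the zone and IP are examples, and real monitoring should respect each blocklist's usage policy:

```python
# DNSBL spot check: a listed IP resolves to an A record in the blocklist zone.
import socket

def dnsbl_listed(ip: str, zone: str = "zen.spamhaus.org") -> bool:
    reversed_ip = ".".join(reversed(ip.split(".")))  # 1.2.3.4 -> 4.3.2.1
    try:
        socket.gethostbyname(f"{reversed_ip}.{zone}")
        return True           # an A record means the IP is listed
    except socket.gaierror:
        return False          # NXDOMAIN -> not listed (or query blocked)

shared_ip = "203.0.113.25"    # placeholder: your host's outgoing mail IP
print(f"{shared_ip} listed:", dnsbl_listed(shared_ip))
```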

SEO and conversion effects of fluctuating performance

Search engines continuously measure loading times, downtime, and responsiveness, so spiky latencies can cause direct ranking losses. Timing is particularly critical: when campaigns are running and users are arriving, peak loads collide with throttling, driving up bounce rates, shopping cart abandonment, and support tickets. That's why I don't plan capacity to the limit, but with reserves for known peaks and unpredictable bot spikes. An often underestimated factor remains the platform's ability to cleanly handle a short burst of requests; it is precisely this short-term burst performance that determines the impression made on first-time visitors. Those who deliver consistent TTFB, FCP, and INP values build trust, which translates into better conversion rates and repeat visits.

Measurement instead of estimation: Methodology for load testing and monitoring

I evaluate a platform from two perspectives: synthetic tests (controlled requests) and real-user measurements. It is important not to celebrate the fastest individual value, but to look at distribution and stability: P50, P95, and P99 for TTFB and response time. This shows me whether there are "outliers" that affect real users. Short, targeted load tests with realistic concurrency values show when entry processes, CPU time, or I/O start to take their toll. I repeat the tests during the day and in the evening, test cold and warm caches, and monitor dynamic pages such as shopping cart, search, or checkout separately. I correlate the results with host metrics (CPU load, iowait, steal time, queue lengths) to separate real platform bottlenecks from app bugs.
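
As a starting point for such measurements, here is a minimal load-test sketch that reports TTFB percentiles instead of a single best value. URL, request count, and concurrency are placeholders; keep the load polite on shared infrastructure:

```python
# Fire concurrent requests and report TTFB percentiles (P50/P95/P99).
import time
from concurrent.futures import ThreadPoolExecutor
from urllib.request import urlopen

URL = "https://example.com/"   # hypothetical target — use your own site
REQUESTS, CONCURRENCY = 50, 5  # small, polite values for a first look

def ttfb_ms(url: str) -> float:
    start = time.perf_counter()
    with urlopen(url) as resp:
        resp.read(1)           # headers plus first body byte received
    return (time.perf_counter() - start) * 1000

with ThreadPoolExecutor(max_workers=CONCURRENCY) as pool:
    samples = sorted(pool.map(ttfb_ms, [URL] * REQUESTS))

p50, p95, p99 = (samples[int(len(samples) * q)] for q in (0.50, 0.95, 0.99))
print(f"TTFB ms  P50={p50:.0f}  P95={p95:.0f}  P99={p99:.0f}  max={samples[-1]:.0f}")
```

Running this once at noon and once in the evening, with cold and warm caches, already reveals whether the platform's tail latencies drift with host utilization.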

Resource and tariff comparison in practice

Before booking, I look for clear commitments on CPU, RAM, I/O, and processes instead of marketing superlatives. Transparent providers name real upper limits, show measurements, and explain which project sizes run sensibly in which package. In the €1–2 price range, no one can provide dedicated cores, plenty of memory, and consistent monitoring in parallel, which is why I read the "fair use" footnotes twice. Those who need more control should go for a vServer or managed instance, because there the resources are guaranteed and scalable. The following table classifies common models operationally and helps to set realistic expectations.

Model                                Resource commitment    CPU share        RAM per project   I/O limit         Neighborhood risk   Typical price/month
Cheap shared hosting (overselling)   vague, fair use        fluctuating      low to moderate   tight             high                €1–3
Transparent shared hosting           clear, documented      quoted           medium            moderate limits   medium              €5–10
VPS / vServer                        guaranteed             dedicated vCPUs  defined           high              low                 €8–25
Managed cloud                        guaranteed + scaling   elastic          elastic           high              low                 €20–60

How to recognize oversold offers

Extremely low prices coupled with "unlimited" features are my first warning signal, especially if CPU, RAM, and I/O are missing from the details. I also avoid providers who only describe limits as "fair use" and do not provide examples of typical load profiles. When I check independent user reviews, I often see complaints about outages, slow admin panels, and sluggish support at mass hosts. Reputable providers honestly state process limits, bandwidth windows, and rough project sizes, which allows me to plan realistically. As soon as communication consists mainly of slogans instead of concrete data, I keep my distance.

Resellers and agencies: responsibility and selection

Anyone who, as a reseller or agency, bundles many customer sites finds overselling particularly painful: a host-wide bottleneck is multiplied across dozens of projects. I therefore plan conservatively, separate critical customers into their own plans or instances, and keep emergency capacity available. This includes clear SLAs with customers, transparent expectations (e.g., P95 TTFB), and a commitment to scale or relocate at short notice if necessary. It is advisable to separate staging/testing from production and to define a process for security and performance rollouts so that not all sites generate peaks at the same time.

Alternatives without permanent overcrowding

If you want to escape the overselling trap, you should focus on transparent resource specifications and modern hardware with NVMe SSDs. Good shared hosting can be sufficient for blogs, small shops, or landing pages, provided that the provider clearly states the limits and plans the platform sensibly. For growing projects, a VPS is worthwhile because guaranteed vCPUs, fixed RAM, and controllable I/O make behavior reliably predictable. Managed variants relieve me of maintenance, monitoring, and security tasks, which saves a lot of time, especially for business-critical sites. It is important not to cut corners here, because consistent performance contributes directly to sales and brand perception.

Why webhoster.de comes out on top in comparisons

Many current comparisons name webhoster.de as the test winner because the platform consistently focuses on performance, availability, and fast support. NVMe storage, good connectivity, and clear resource models deliver measurably shorter response times, even under higher loads. Responsive support in German helps me immediately when problems arise, instead of sending me through ticket loops. GDPR-compliant data centers in Germany ensure short distances and traceable data storage, which simplifies audits. Scalable pricing gives me room for growth without short-term migration constraints.

Practical check: How to check my current hosting

I measure loading times repeatedly during the day and in the evening, compare TTFB and full response times, and watch out for significant fluctuations. I detect short outages lasting a few minutes using external monitoring and simultaneously read the server logs for 500 errors, timeouts, and "resource limit reached" messages. The admin panel often reveals process and memory limits; if limit hits occur frequently during peak times, this confirms overbooking. If performance is sluggish or "too many processes" errors occur frequently, I also take a look at CPU throttling and process queues; the guide on detecting CPU throttling helps me with this. The support test is also part of this: when I ask a specific technical question, I evaluate the response time, depth, and willingness to clarify genuine causes.
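
To back the log reading with numbers, this companion sketch counts 5xx responses per hour in a common-format access log; the log path and the field positions are assumptions to adapt to your host:

```python
# Count 5xx responses per hour in an Apache/Nginx common-format access log.
from collections import Counter

LOG = "/var/log/apache2/access.log"  # hypothetical path — adjust to your host
errors_by_hour = Counter()

with open(LOG, encoding="utf-8", errors="replace") as fh:
    for line in fh:
        parts = line.split()
        # common log format: host ident user [date] "request" status bytes
        if len(parts) > 8 and parts[8].startswith("5"):
            hour = parts[3][1:15]    # "[10/Oct/2025:13..." -> "10/Oct/2025:13"
            errors_by_hour[hour] += 1

for hour, count in sorted(errors_by_hour.items()):
    print(f"{hour}h  {count} x 5xx")
# Clusters at known peak times are the classic signature of limit hits.
```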

Migration without surprises: a short checklist

When a change is imminent, I follow a compact procedure:

  1. Inventory: Record domains, DNS zones, certificates, cron jobs, workers, mail accounts, and forwarders.
  2. Staging: Set up the target environment, match PHP version and extensions, import test data.
  3. Lower TTL: Reduce the DNS TTL 24–48 hours before the move so that the cutover takes effect quickly.
  4. Data transfer: Migrate files and databases consistently; schedule a read-only phase for highly active shops.
  5. Validation: Run functionality tests including checkout, login, search, API integrations, and webhooks.
  6. Cutover: Change DNS settings, reconfigure monitoring, closely monitor error logs.
  7. Clean-up: Securely shut down the old instance, rotate secret keys, remove duplicate cron jobs.

This allows me to minimize downtime and prevent data inconsistencies—which is crucial, especially for peak-relevant projects.
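
The validation step (point 5) can be backed with data rather than gut feeling. A minimal sketch, assuming both document roots are mounted locally under placeholder paths; databases need their own check, for example row counts per table:

```python
# Hash every file below the old and new document roots and diff the results.
import hashlib
import os

def tree_hashes(root: str) -> dict[str, str]:
    hashes = {}
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            path = os.path.join(dirpath, name)
            rel = os.path.relpath(path, root)
            with open(path, "rb") as fh:
                hashes[rel] = hashlib.sha256(fh.read()).hexdigest()
    return hashes

# hypothetical mount points for the old and new hosts
old = tree_hashes("/mnt/old_host/htdocs")
new = tree_hashes("/mnt/new_host/htdocs")

missing = old.keys() - new.keys()
changed = {p for p in old.keys() & new.keys() if old[p] != new[p]}
print(f"missing on target: {len(missing)}, content differs: {len(changed)}")
```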

Tuning that really helps—and what doesn't

Optimization can alleviate bottlenecks, but it cannot cancel out overselling. What helps:

  • Caching strategy: Use page cache and object cache consistently; keep dynamic exceptions narrow (see the sketch after this list).
  • Query hygiene: Eliminate N+1 queries and expensive joins, set meaningful indexes.
  • Slim down assets: Deliver images, CSS/JS, and fonts efficiently; prioritize critical paths.
  • Decouple tasks: Place heavy jobs (image generation, exports, webhooks) in queues.
  • Declutter plugins/themes: Fewer moving parts means less CPU and memory pressure.
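
To illustrate the caching bullet, here is a tiny TTL object cache in Python. It merely stands in for whatever object cache the stack actually provides (APCu, Redis, Memcached); the product query is invented:

```python
# A minimal TTL object cache: keep an expensive query out of the hot path.
import time
from functools import wraps

def ttl_cache(seconds: float):
    def decorator(fn):
        store = {}  # key -> (expires_at, value)
        @wraps(fn)
        def wrapper(*args):
            now = time.monotonic()
            hit = store.get(args)
            if hit and hit[0] > now:
                return hit[1]                    # cache hit: no recomputation
            value = fn(*args)
            store[args] = (now + seconds, value)
            return value
        return wrapper
    return decorator

@ttl_cache(seconds=60)
def popular_products(shop_id: int) -> list[str]:
    time.sleep(0.2)  # stands in for an expensive database query
    return [f"product-{shop_id}-{n}" for n in range(3)]

popular_products(1)  # slow: fills the cache
popular_products(1)  # fast: served from memory for the next 60 s
```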

What doesn't help: hoping for "unlimited" resources, blindly ramping up PHP workers without regard for I/O limits, or expecting caching to mask every database weakness. If limits are the bottleneck, you need larger or more transparent plans – not just fine-tuning.

Final thoughts: Better to plan than migrate later

Overselling saves on the monthly fee, but I pay with time, downtime, and lost revenue. Those who require reliable performance avoid marketing superlatives and focus on measurable resource specifications. I plan capacity with reserves, back up regularly, and keep software lean so that peaks don't hit unprepared systems. Switching to transparent shared hosting, a VPS, or managed cloud costs a little more, but it delivers consistent user experiences and fewer firefighting missions. This turns hosting from a blocker into a lever that supports projects instead of hindering them.
