I analyze hosting tariff structures against their technical limits and show how advertised resources translate into real usability. In doing so, I focus on CPU, RAM, I/O, connections and the limit values that determine loading times, peak-load behavior and reliability.
Key points
I will summarize the following key points before explaining the technology in detail.
- CPU/RAM: Computing time and working memory define requests per second and response time.
- Database: Connection and query limits control how CMS and stores react under load.
- I/O/Inodes: Disk access and file-entry quotas determine caching, media handling and updates.
- Network: Uplink, simultaneous connections and web server architecture determine parallelism.
- Scaling: Upgrade paths, throttling rules and automation prevent bottlenecks.
I evaluate these points technically and demonstrate how they affect real projects. Each limit has direct effects on loading time and revenue. I identify bottlenecks early on instead of firefighting later. To do this, I combine measurements with clear questions to the support team. The result is a picture that compares marketing promises with reality.
Reading hosting tariff structures technically
I separate advertising messages from hard limits and look first at CPU, RAM, I/O and the database. Many packages advertise web space and traffic but conceal limits on processes, connections and throughput. I read the terms and conditions, status pages and cPanel/panel displays because they often contain the real caps. A good starting point is a Resource limits in practice overview that combines CPU time, RAM and I/O. That quickly shows whether the tariff can withstand load peaks or whether it breaks down even at small spikes.
Understanding CPU, RAM and throttling
CPU is often advertised as "cores" or "processes", but in practice the hoster limits seconds of CPU time per period. I therefore check how many PHP workers may run simultaneously and how long scripts compute. RAM quotas determine whether PHP-FPM processes for image processing, caching and cron jobs can run in parallel. Good providers set fair caps and throttle briefly instead of terminating requests hard. Webhoster.de combines NVMe SSDs with a modern stack and thus delivers consistent response times even under peak traffic.
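To make a CPU-seconds quota tangible, the arithmetic above can be sketched in a few lines. The quota and per-request CPU cost here are hypothetical numbers you would get from the hoster and from your own profiling:

```python
def sustainable_rps(cpu_seconds_per_minute: float, cpu_per_request: float) -> float:
    """Requests per second the CPU quota sustains before throttling kicks in."""
    # Convert the per-minute budget to CPU-seconds per wall-clock second,
    # then divide by the CPU cost of one request.
    return cpu_seconds_per_minute / 60.0 / cpu_per_request

# Assumed example: 30 CPU-seconds/minute quota, 0.05 s CPU per cached request.
print(round(sustainable_rps(30, 0.05), 2))  # 10.0 requests/second sustained
```

Anything above that rate gets throttled or queued, which is exactly the moment response times collapse.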
Database and connection limits
WordPress, store systems and headless setups generate many queries per page request. I therefore check the maximum number of simultaneous DB connections and the timeout for queries. A hard limit of ten connections immediately causes queues under checkout load. Tight packet sizes and slow temporary tables considerably lengthen dynamic pages. I therefore plan caching, indexes and query reduction so that the DB holds up even at peak times.
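A quick way to see whether a connection cap will bite is to add up the worst case: every PHP worker may hold a connection, plus background jobs. The numbers below are illustrative, not from any specific tariff:

```python
def db_connections_needed(php_workers: int, conns_per_worker: int = 1,
                          background_jobs: int = 0) -> int:
    """Worst-case simultaneous DB connections: all PHP workers plus cron/queue jobs."""
    return php_workers * conns_per_worker + background_jobs

cap = 10  # hypothetical hard limit from the tariff
need = db_connections_needed(php_workers=8, background_jobs=3)
print(need, "needed vs cap", cap, "->", "queueing!" if need > cap else "ok")
```

Eleven needed against a cap of ten means the eleventh request waits, and under checkout load that wait compounds into visible queueing.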
I/O and inodes in practice
I/O limits specify how quickly the tariff can read from and write to the SSD. If the provider cuts throughput too hard, every request stalls: cache files load slowly, PHP writes sessions sluggishly, thumbnail generation jams. I therefore test media jobs, backups and cron runs because they create I/O hotspots. Inode limits restrict the number of files and folders; a bloated uploads directory with thousands of thumbnails eats up the quota. With tidy caches, a good media workflow and sensible retention rules, I keep the inode count healthy.
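Before asking support about inode caps, it helps to know your own consumption. A rough proxy is simply counting files and directories, since each typically costs one inode; a minimal sketch:

```python
import os

def count_inodes(root: str) -> int:
    """Count directories plus files under root -- a rough proxy for inode usage."""
    total = 0
    for dirpath, dirnames, filenames in os.walk(root):
        total += 1 + len(filenames)  # the directory itself plus its files
    return total

# Example: run this against the uploads directory of a site, e.g.
# count_inodes("/var/www/html/wp-content/uploads")  (path is an assumption)
```

Pointing this at `wp-content/uploads` often reveals that thumbnail variants, not originals, dominate the quota.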
Network and simultaneous connections
"Unlimited" does not exist; the real limits are uplink and parallelism. I pay attention to dedicated bandwidth per server and to how many simultaneous connections the web server can handle. NGINX or LiteSpeed handle thousands of sockets more efficiently than old Apache setups with too few max clients. I qualify marketing promises with load tests and by looking at overselling rates. The widespread server-flat-rate myth I demystify by measuring actual requests per second and comparing them with the limits.
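How many simultaneous connections a given traffic level actually produces follows from Little's law: concurrency equals arrival rate times average time in the system. The inputs below are assumed example values:

```python
def concurrent_connections(rps: float, avg_latency_s: float) -> float:
    """Little's law (L = lambda * W): average simultaneous open connections."""
    return rps * avg_latency_s

# Assumed: 200 requests/second with 0.4 s average response time.
print(round(concurrent_connections(200, 0.4)))  # 80 open connections on average
```

If latency doubles under load, so does the connection count, which is how a modest RPS figure can still exhaust a low max-clients setting.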
WordPress and eCommerce under load
I calibrate WordPress instances so that they stay clear of their limits. Object cache, full-page cache and optimized image paths relieve the database and the I/O layer. WooCommerce needs more DB connections and CPU, so I specifically scale up PHP workers and set cache bypasses for the shopping cart and checkout. I plan reserves for campaigns, otherwise customers run into timeouts and aborted sessions. This is how I absorb sales peaks instead of failing at the limit.
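The cache-bypass logic for cart and checkout can be expressed as a simple predicate. The path prefixes and cookie names below are typical WooCommerce conventions used here as assumptions; real setups configure this in the cache layer itself:

```python
# Hypothetical bypass rules mirroring common WooCommerce setups.
BYPASS_PREFIXES = ("/cart", "/checkout", "/my-account")
BYPASS_COOKIES = ("woocommerce_items_in_cart", "wordpress_logged_in")

def should_bypass_cache(path: str, cookies: dict) -> bool:
    """True if the full-page cache must be skipped for this request."""
    if path.startswith(BYPASS_PREFIXES):
        return True
    # Logged-in users and filled carts get uncached, personalized responses.
    return any(name.startswith(c) for name in cookies for c in BYPASS_COOKIES)

print(should_bypass_cache("/checkout/payment", {}))  # True
print(should_bypass_cache("/blog/post", {}))         # False
```

Every bypassed request hits PHP workers and the DB directly, so these rules define how much of the traffic the tariff's hard limits must really carry.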
Sensible planning of mail and API limits
I check how many mails per hour the tariff technically permits. Stores with many transactional emails quickly hit caps, which is why I split sending channels or switch to API-based providers. API rate limits of payment, CRM and marketing gateways require clean queueing. I build retries and backoffs into integrations so that hard limits do not cause a standstill. This keeps communication channels active even when traffic curves steepen.
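Splitting sending channels is simple capacity arithmetic: divide the expected hourly volume by the per-channel cap and round up. The volumes and caps here are assumed example figures:

```python
import math

def channels_needed(mails_per_hour: int, cap_per_channel: int) -> int:
    """Sending channels required to stay under a per-channel hourly cap."""
    return math.ceil(mails_per_hour / cap_per_channel)

# Assumed: a campaign peak of 1200 transactional mails/hour, cap of 500/channel.
print(channels_needed(1200, 500))  # 3
```

Anything queued beyond the caps should be delayed rather than dropped, which is why the hourly limit feeds directly into queue sizing.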
Choice of tariff: The right questions
I put clear, technical questions to the support team: How many PHP workers run in parallel? How many CPU seconds per minute? What is the I/O limit in MB/s? How many DB connections are allowed per account, and are there bursts? Only with reliable answers can I decide whether the tariff will support growth or stall at the first peaks.
Performance tests that show the truth
I do not rely on assumptions; I measure. Lighthouse and GTmetrix provide initial indications, but it becomes more meaningful with simultaneous requests via tools such as ab (Apache Bench) or k6. I check cold start, warm start and cache hits to understand how the stack really reacts. Long-term uptime monitoring over weeks shows whether nightly cron jobs displace requests. For background on throttling in practice, it is worth reading up on Throttling with low-cost hosters to classify symptoms faster and eliminate them.
Scalability without relocation
I question what upgrade paths look like technically. Can RAM, CPU and I/O be increased at short notice, or does the jump require downtime? Good packages allow live upgrades so that campaigns run without migration stress. I also look for automatic vertical scaling during load peaks and clear escalation paths. This lets me grow in a controlled manner without unnecessary migrations slowing projects down.
Typical limits in comparison
The following overview shows common limit values, their effects and my control questions for the support team. I use it as a checklist for selection and subsequent optimization. This shows me immediately where things pinch and which adjustment provides the greatest leverage. The figures serve as a guide for shared and managed environments. For large projects, I raise limits accordingly and plan in reserves.
| Parameter | Shared: lower end | Good tariffs | Critical effect | Test question |
|---|---|---|---|---|
| PHP workers | 2–4 | 8–16 | Waiting times at peaks | How many workers per account? |
| CPU time | 20–40% of a core | 1 core equivalent+ | Throttling and timeouts | How do you measure CPU seconds? |
| RAM (PHP) | 512–1024 MB | 2–4 GB | Aborted image jobs | Max memory per process? |
| I/O throughput | 5–20 MB/s | 50–200 MB/s | Slow caches/backups | I/O limit in MB/s? |
| DB connections | 10–20 | 50–100 | Locking, queueing | Max connections per account? |
| Inodes | 100k–200k | 500k–1M | Uploads/updates fail | Inode cap and exceptions? |
| Mails/hour | 100–300 | 500–2000 | Delayed transactional mails | Throttling and whitelists? |
| Uplink per server | Shared 1 Gbit/s | 1–10 Gbit/s dedicated | Congestion at peaks | Dedicated or shared? |
I use this table actively: first I check the hard figures, then I match them against project goals. A small blog runs fine with the lower values; a store with campaigns needs reserves in every layer. Low prices of around €3–7 per month usually come with tight caps and little burst. Investing €10–25 per month buys buffers that prevent failures and aborts. This pays off because traffic peaks do not tip into errors.
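The table-against-requirements check can be automated as a small gap report. The parameter names and numbers below are hypothetical, chosen to mirror the table columns:

```python
def tariff_gaps(required: dict, offered: dict) -> list:
    """Return the parameters where the offered tariff falls short of the project's needs."""
    return [k for k, v in required.items() if offered.get(k, 0) < v]

# Assumed store requirements vs. an assumed budget tariff's figures.
required = {"php_workers": 8, "db_connections": 50, "io_mb_s": 50}
offered  = {"php_workers": 4, "db_connections": 100, "io_mb_s": 20}
print(tariff_gaps(required, offered))  # ['php_workers', 'io_mb_s']
```

An empty list means the tariff covers the project on paper; each listed key is a question for support or an argument for the next tier.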
Fine-tune the web server and PHP stack
I check how the provider configures PHP-FPM: process manager (dynamic vs. ondemand), max children, request termination and OpCache size. An OpCache that is too small produces cold compiles with every deploy and costs CPU seconds. For the web server, I consciously choose between NGINX (efficient event loop) and LiteSpeed (strong WordPress integration, QUIC/HTTP/3). I only use Apache when .htaccess rules are mandatory; otherwise prefork/worker models block parallelism. I demand clarity on keep-alive timeouts, max requests per FPM worker and upload limits so that large media and import jobs do not run into the void.
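A common rule of thumb for `pm.max_children` is to divide the RAM left after OS and services by the average PHP-FPM process footprint. The reserve and footprint values below are assumptions; measure the real footprint with `ps` on the target system:

```python
def fpm_max_children(ram_mb: int, avg_process_mb: int, reserve_mb: int = 256) -> int:
    """Rule-of-thumb pm.max_children: (total RAM - reserve) / avg process size."""
    # reserve_mb covers OS, web server and DB overhead (assumed value).
    return max(1, (ram_mb - reserve_mb) // avg_process_mb)

# Assumed: 2 GB RAM quota, ~96 MB per PHP-FPM worker.
print(fpm_max_children(2048, 96))  # 18
```

Setting max children higher than this invites the OOM killer; setting it far lower wastes the RAM quota you are paying for.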
Protocols: HTTP/2, HTTP/3 and TLS overhead
I evaluate the influence of modern protocols on parallelism. HTTP/2 reduces the number of connections but increases stream parallelism per socket, which matters for web server limits. HTTP/3 (QUIC) reduces latency for mobile access but shifts CPU costs toward more encryption work. I ask about supported ciphers (ECDSA vs. RSA), ALPN and session resumption. An incorrectly configured TLS setup can unexpectedly consume CPU even though PHP looks inconspicuous.
CDN, edge caching and origin offloading
I use a CDN specifically to shield the origin from load peaks. The decisive factor is the cache strategy: sensible TTLs, stale-while-revalidate and precise cache bypasses for shopping cart, checkout and personalized content. I measure the hit rate and calculate backwards: an 80% hit rate at 1000 RPS means the origin only has to serve 200 RPS, which fundamentally changes tariff selection. I check whether the host handles edge IPs properly (correct X-Forwarded-For) and whether origin-level rate limits are adjusted for CDN bursts.
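The backward calculation from the paragraph above, as a one-liner worth keeping in a capacity sheet:

```python
def origin_rps(edge_rps: float, hit_rate: float) -> float:
    """Requests that still reach the origin after CDN caching (cache misses)."""
    return edge_rps * (1.0 - hit_rate)

# The example from the text: 1000 RPS at the edge, 80% hit rate.
print(round(origin_rps(1000, 0.80)))  # 200
```

A hit rate of 95% instead of 80% cuts the origin load by another factor of four, which is why tuning TTLs is often cheaper than upgrading the tariff.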
Queues, cron and background work
I decouple heavy tasks from web requests. Instead of WP-Cron on request, I activate a real system cron that starts jobs at fixed intervals and off-peak times. Mail dispatch, image generation, webhooks and imports run in queues with workers whose parallelism I coordinate with PHP workers and DB connections. I watch for memory leaks in long-running workers and set max-execution and max-jobs parameters so that workers restart regularly and stay stable under tight RAM caps.
Backups, restore times and disaster recovery
I don't treat backups as a checkbox but as a performance limit. Important questions: how often are snapshots created, how long are they kept, and what does a restore cost in I/O and time? mysqldump-based backups block I/O on weak tariffs, while snapshot or PITR methods are more efficient. I regularly test a restore including search/replace in the database and measure RTO/RPO. I schedule backups outside peak windows to avoid CPU and I/O throttling.
Observability: logs, metrics and alarms
I don't rely on gut feeling. I collect metrics for CPU seconds, I/O throughput, PHP response times, DB locks and 4xx/5xx rates. Important indicators are "steal time" on overbooked hosts, queue lengths and the proportion of 429/503 responses. I set alarms with meaningful thresholds (e.g. 95th percentile > 800 ms, 5xx > 1%) and evaluate trends over weeks, not snapshots. This lets me spot creeping bottlenecks, such as cron jobs eating up CPU seconds at night.
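The p95 alarm from the text is easy to compute with a nearest-rank percentile; the latency samples below are invented for illustration:

```python
import math

def percentile(samples, p: float) -> float:
    """Nearest-rank percentile -- coarse, but sufficient for alert thresholds."""
    ordered = sorted(samples)
    k = max(0, math.ceil(p / 100.0 * len(ordered)) - 1)
    return ordered[k]

# Assumed response-time samples in milliseconds from one scrape window.
latencies_ms = [120, 180, 200, 240, 300, 350, 420, 500, 760, 900]
p95 = percentile(latencies_ms, 95)
print(p95, "ms ->", "ALERT" if p95 > 800 else "ok")  # 900 ms -> ALERT
```

Averaging the same samples gives roughly 397 ms and would look healthy, which is exactly why percentile-based thresholds catch what means hide.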
Security and security limits
I ask about WAF rules and their cost. An overly aggressive ModSecurity configuration generates false positives and CPU load. Rate limits protect against bots but must not slow down legitimate crawlers and mobile apps. I also check how the provider handles brute force on login endpoints and whether Fail2ban/conntrack is active server-side. For email, I rely on a clean sender reputation: SPF, DKIM and DMARC are mandatory, otherwise mail caps bite twice, in quantity and in deliverability.
Isolation: cgroups, LVE and neighborhood effects
I want to know how my account is isolated. CloudLinux LVE or cgroups separate CPU, RAM, I/O and processes per customer. Without proper isolation, projects suffer from "noisy neighbors". I explicitly ask about nproc limits, open files (nofile) and inotify watchers. Anyone who calculates too tightly here gets cryptic errors during deploys, image processing or large plugin updates.
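On POSIX systems you can query some of these per-process limits yourself rather than relying on the data sheet; a minimal sketch using the standard-library `resource` module (Linux/Unix only):

```python
import resource  # POSIX-only standard-library module

# Soft limit = enforced now; hard limit = ceiling the account may raise to.
nofile_soft, nofile_hard = resource.getrlimit(resource.RLIMIT_NOFILE)
nproc_soft, nproc_hard = resource.getrlimit(resource.RLIMIT_NPROC)

print("open files (nofile):", nofile_soft, "/", nofile_hard)
print("processes  (nproc): ", nproc_soft, "/", nproc_hard)
```

Running this from a cron job or an admin script on the tariff itself shows the values the isolation layer actually enforces, which sometimes differ from what support quotes.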
Staging, deployments and rollbacks
I demand staging environments with their own DB and their own object cache. Deployments must run without downtime: health checks, avoided maintenance windows and cache warming directly after the rollout. I separate configurations (keys, secrets, endpoints) cleanly per stage and use atomic deploys so that partial versions do not go live. A fast rollback is mandatory, ideally as a fixed part of the pipeline.
Costs, fair use and overages
I read fair-use clauses technically. Many hosters promise "unlimited" but throttle at thresholds or charge overage fees for excessive resource peaks. I clarify whether bursts are allowed, how long they may last and whether CPU seconds are smoothed over a time window. A transparent provider names hard caps, explains its throttling logic and offers plannable upgrade steps instead of surprises on the bill.
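Whether a short burst trips a cap depends entirely on that smoothing window. A minimal sketch, with hypothetical CPU-seconds-per-minute samples, shows why the window length matters:

```python
def violates_cap(samples, cap: float, window: int = 5) -> bool:
    """True if any moving average over `window` samples exceeds the cap."""
    for i in range(window - 1, len(samples)):
        if sum(samples[i - window + 1:i + 1]) / window > cap:
            return True
    return False

usage = [10, 12, 45, 11, 9, 10]  # one spike in otherwise calm minutes
print(violates_cap(usage, cap=20))            # False: spike averaged away
print(violates_cap(usage, cap=20, window=1))  # True: instantaneous cap trips
```

The same workload passes a 5-minute smoothed cap and fails an instantaneous one, which is exactly the clause worth pinning down in writing.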
Headless, APIs and microservices
Headless front ends and microservices shift the limits. Many small API calls increase RPS and competition for PHP workers; I consolidate requests (batching), activate aggressive edge caches for static JSON and limit preloading. For webhooks, I use retry strategies with exponential backoff and dead-letter queues so that short-term throttling does not cause data loss.
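The backoff-plus-dead-letter pattern can be sketched generically; the `send` callable, delays and retry counts are placeholders you would replace with your gateway client and its documented limits:

```python
import time

def deliver_with_backoff(send, payload, retries: int = 4,
                         base_delay: float = 0.01, dead_letter=None) -> bool:
    """Retry delivery with exponential backoff; park failures in a dead-letter list."""
    for attempt in range(retries):
        if send(payload):
            return True
        time.sleep(base_delay * (2 ** attempt))  # e.g. 0.01, 0.02, 0.04, 0.08 s
    if dead_letter is not None:
        dead_letter.append(payload)  # preserved for later replay, not lost
    return False

# Assumed usage: an endpoint that recovers on the third attempt.
attempts = {"n": 0}
def flaky(payload):
    attempts["n"] += 1
    return attempts["n"] >= 3

print(deliver_with_backoff(flaky, "order.created"))  # True
```

Production versions add jitter to the delay and persist the dead-letter queue, but the shape stays the same: failures slow down, then get parked instead of dropped.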
Optimize image and media paths
Images are the I/O killer. I reduce variants, optimize formats (WebP/AVIF) and use on-demand generation with a cache instead of generating thousands of thumbnails in advance. I split large uploads with chunking to avoid PHP and proxy timeouts. For archive content, I consider offloading to object storage with a CDN front so that the web tariff's inodes and I/O do not overflow.
Team and rights management
I check how granularly roles and accesses are controlled. Separate SSH/SFTP logins, restrictive authorizations and audit logs prevent maintenance work from leading to inadvertent load peaks or data leaks. A clean release process with a dual control principle reduces the risk of incorrect configurations breaking limits unnoticed.
Summary: How to make the right choice
I rate tariffs by hard limit values, not by web space and traffic flat rates. The decisive factors are CPU seconds, parallel PHP workers, DB connections, I/O throughput, inodes, uplink and server architecture. I load-test realistically, observe behavior over time and clarify upgrade paths that can be escalated. For WordPress and stores, I plan caching, clean media flows and reserves for campaigns. This is how I choose hosting tariff structures that support projects, protect conversion and enable growth.


