{"id":16149,"date":"2025-12-23T11:53:06","date_gmt":"2025-12-23T10:53:06","guid":{"rendered":"https:\/\/webhosting.de\/io-scheduler-linux-noop-mq-deadline-bfq-serverboost\/"},"modified":"2025-12-23T11:53:06","modified_gmt":"2025-12-23T10:53:06","slug":"io-scheduler-linux-noop-mq-deadline-bfq-serverboost","status":"publish","type":"post","link":"https:\/\/webhosting.de\/en\/io-scheduler-linux-noop-mq-deadline-bfq-serverboost\/","title":{"rendered":"I\/O Scheduler Linux: Noop, mq-deadline &amp; BFQ explained in hosting"},"content":{"rendered":"<p>The Linux I\/O scheduler decides how the system sorts and prioritizes read and write accesses to SSD, NVMe, and HDD and sends them to the device. In this guide, I explain in practical terms when <strong>Noop<\/strong>, <strong>mq-deadline<\/strong> and <strong>BFQ<\/strong> are the best choice for hosting \u2013 including tuning, testing, and clear steps for action.<\/p>\n\n<h2>Key points<\/h2>\n\n<ul>\n  <li><strong>Noop<\/strong>: Minimal overhead on SSD\/NVMe and in VMs<\/li>\n  <li><strong>mq-deadline<\/strong>: Balanced latency and throughput for servers<\/li>\n  <li><strong>BFQ<\/strong>: Fairness and quick response in multi-user environments<\/li>\n  <li><strong>blk-mq<\/strong>: Multi-queue design for modern hardware<\/li>\n  <li><strong>Tuning<\/strong>: Tests per workload instead of fixed rules<\/li>\n<\/ul>\n\n<h2>How the I\/O scheduler works in Linux hosting<\/h2>\n\n<p>A Linux I\/O scheduler sorts I\/O requests into queues, performs merging, and decides on delivery to the device in order to reduce <strong>latency<\/strong> and increase throughput. Modern kernels use blk-mq, i.e., multi-queue, so that multiple CPU cores can initiate I\/O in parallel. This is ideal for NVMe SSDs, which offer many queues and high parallelism, thereby reducing queueing delays. In hosting, broadly mixed loads often collide: web servers deliver many small reads, databases generate sync writes, and backups generate streams. 
The right scheduler reduces congestion, keeps response times stable, and protects the <strong>server<\/strong> experience under load.<\/p>\n\n\n<figure class=\"wp-block-image size-full is-resized\">\n  <img fetchpriority=\"high\" decoding=\"async\" src=\"https:\/\/webhosting.de\/wp-content\/uploads\/2025\/12\/linux-io-scheduler-hosting-8391.png\" alt=\"\" width=\"1536\" height=\"1024\"\/>\n<\/figure>\n\n\n<h2>blk-mq in practice: none vs. noop and kernel defaults<\/h2>\n\n<p>Since kernel 5.x, the multi-queue design has been the standard path. This makes <strong>none<\/strong> the \u201cNoop\u201d equivalent for blk-mq, while <strong>noop<\/strong> historically comes from the single-queue path. On NVMe devices, usually only <code>none<\/code> is available; on SATA\/SAS, you often see <code>mq-deadline<\/code>, optionally <code>bfq<\/code> and, depending on the distribution, also <code>kyber<\/code>. The defaults vary: NVMe usually starts with <code>none<\/code>, SCSI\/SATA often with <code>mq-deadline<\/code>. I therefore always check the available options via <code>cat \/sys\/block\/sdX\/queue\/scheduler<\/code> and decide per device. Where only <code>none<\/code> is selectable, this is intentional\u2014additional sorting adds practically no value there.<\/p>\n\n<h2>Noop in server use: When minimalism wins<\/h2>\n\n<p>Noop primarily merges adjacent blocks but does not sort them, which keeps CPU overhead <strong>low<\/strong>. On SSDs and NVMe, controllers and firmware take care of the clever sequencing, so additional sorting in the kernel is of little use. In VMs and containers, I often choose Noop because the hypervisor already schedules comprehensively. I don't use Noop on rotating disks because the lack of sorting increases seek times there. 
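<\/p>

<p>The per-device check above can be scripted. A minimal sketch, assuming the usual sysfs format in which the kernel marks the active scheduler with square brackets; the helper name is my own:<\/p>

```shell
#!/bin/sh
# /sys/block/<device>/queue/scheduler lists all selectable schedulers on one
# line and marks the active one with brackets, e.g. "none [mq-deadline] bfq".
active_scheduler() {
  # $1: contents of the scheduler file
  printf '%s\n' "$1" | sed -n 's/.*\[\(.*\)\].*/\1/p'
}

# On a live system: active_scheduler "$(cat /sys/block/sdX/queue/scheduler)"
active_scheduler 'none [mq-deadline] bfq kyber'   # prints: mq-deadline
```

<p>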
If you want to reliably delimit the hardware context, first look at the storage type\u2014here, it helps to take a look at <a href=\"https:\/\/webhosting.de\/en\/nvme-ssd-hdd-web-hosting-comparison-performance-costs-tips-server-pro\/\">NVMe, SSD, and HDD<\/a> before I <strong>determine<\/strong> the scheduler.<\/p>\n\n<h2>mq-deadline: Deadlines, sequences, and clear priorities<\/h2>\n\n<p>mq-deadline gives read accesses short deadlines and makes write accesses wait a little longer in order to protect <strong>response time<\/strong>. The scheduler also sorts by block addresses, reducing seek times, which is particularly helpful for HDDs and RAID arrays. In web and database hosts, mq-deadline provides a good balance between latency and throughput. I like to use it when workloads are mixed and both reads and writes are constantly queued. For fine-tuning, I check request depth, writeback behavior, and controller cache to ensure that the deadline logic actually <strong>takes effect<\/strong>.<\/p>\n\n\n<figure class=\"wp-block-image size-full is-resized\">\n  <img decoding=\"async\" src=\"https:\/\/webhosting.de\/wp-content\/uploads\/2025\/12\/linux_io_scheduler_meeting_4273.png\" alt=\"\" width=\"1536\" height=\"1024\"\/>\n<\/figure>\n\n\n<h2>BFQ: Fairness and responsiveness for many simultaneous users<\/h2>\n\n<p>BFQ distributes bandwidth proportionally and allocates budgets per process, which works noticeably <strong>fairly<\/strong> when many users generate I\/O in parallel. Interactive tasks such as admin shells, editors, or API calls remain fast even though backups are running in the background. BFQ often achieves high efficiency on HDDs because it takes advantage of sequential phases and makes clever use of short idle windows. On very fast SSDs, there is a little extra overhead, which I weigh against the noticeable responsiveness. Those who use cgroups and ioprio can give clear guarantees with BFQ and thus avoid annoyance from noisy neighbors. 
<\/p>\n\n<h2>QoS in everyday life: ioprio, ionice, and Cgroups v2 with BFQ<\/h2>\n\n<p>For clean <strong>prioritization<\/strong>, I combine BFQ with process and cgroup rules. At the process level, I set <code>ionice<\/code> classes and priorities: <code>ionice -c1<\/code> (real-time) for latency-critical reads, <code>ionice -c2 -n7<\/code> (best effort, low priority) for backups or index runs, <code>ionice -c3<\/code> (idle) for everything that should only run during idle times. In Cgroups v2, I use <code>io.weight<\/code> for relative proportions (e.g., 100 vs. 1000) and <code>io.max<\/code> for hard limits, such as <code>echo \"259:0 rbps=50M wbps=20M\" &gt; \/sys\/fs\/cgroup\/&lt;cgroup&gt;\/io.max<\/code>. With BFQ, weights are converted very precisely into bandwidth shares\u2014ideal for shared hosting and container hosts on which <strong>fairness<\/strong> is more important than maximum raw throughput.<\/p>\n\n<h2>Practical comparison: Which choice suits the hardware?<\/h2>\n\n<p>The choice depends heavily on the storage type and queue architecture, so I first check the <strong>device<\/strong> and controller. SSDs and NVMe usually benefit from Noop\/none, while HDDs run more smoothly with mq-deadline or BFQ. In RAID setups, SANs, and all-round hosts, I often prefer mq-deadline because deadline logic and sorting work well together. Multi-user environments with many interactive sessions often benefit from BFQ. 
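<\/p>

<p>The ionice classes from the QoS section can be wrapped for batch jobs. A minimal sketch (the wrapper name is my own); it falls back to running the command unchanged where <code>ionice<\/code> is not installed:<\/p>

```shell
#!/bin/sh
# Run a command in the best-effort class at the lowest priority (-c2 -n7)
# so it cannot displace latency-critical I/O, e.g. backups or index runs.
run_lowprio() {
  if command -v ionice >/dev/null 2>&1; then
    ionice -c2 -n7 "$@"
  else
    "$@"   # no ionice available on this system: run unchanged
  fi
}

run_lowprio echo "backup started"   # prints: backup started
```

<p>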
The following table <strong>summarizes<\/strong> the strengths and suitable areas of application:<\/p>\n\n<table>\n  <thead>\n    <tr>\n      <th>Scheduler<\/th>\n      <th>Hardware<\/th>\n      <th>Strengths<\/th>\n      <th>Weaknesses<\/th>\n      <th>Hosting scenarios<\/th>\n    <\/tr>\n  <\/thead>\n  <tbody>\n    <tr>\n      <td>Noop\/none<\/td>\n      <td>SSD, NVMe, VMs<\/td>\n      <td>Minimal overhead, clean merging<\/td>\n      <td>Disadvantageous without sorting on HDDs<\/td>\n      <td>Flash server, container, hypervisor-controlled<\/td>\n    <\/tr>\n    <tr>\n      <td>mq-deadline<\/td>\n      <td>HDD, RAID, all-round server<\/td>\n      <td>Strict read priority, sorting, solid latency<\/td>\n      <td>More logic than Noop<\/td>\n      <td>Databases, web backends, mixed workloads<\/td>\n    <\/tr>\n    <tr>\n      <td>BFQ<\/td>\n      <td>HDD, multi-user, desktop-like hosts<\/td>\n      <td>Fairness, responsiveness, good sequences<\/td>\n      <td>Slightly more overhead on very fast SSDs<\/td>\n      <td>Interactive services, shared hosting, dev servers<\/td>\n    <\/tr>\n  <\/tbody>\n<\/table>\n\n\n<figure class=\"wp-block-image size-full is-resized\">\n  <img decoding=\"async\" src=\"https:\/\/webhosting.de\/wp-content\/uploads\/2025\/12\/linux-io-scheduler-hosting-4397.png\" alt=\"\" width=\"1536\" height=\"1024\"\/>\n<\/figure>\n\n\n<h2>Configuration: Check scheduler and set permanently<\/h2>\n\n<p>First, I check which scheduler is active, for example with <code>cat \/sys\/block\/sdX\/queue\/scheduler<\/code>, and note the <strong>option<\/strong> shown in square brackets. To switch temporarily, I write, for example, <code>echo mq-deadline | sudo tee \/sys\/block\/sdX\/queue\/scheduler<\/code>. For persistent settings, I use udev rules or kernel parameters such as <code>scsi_mod.use_blk_mq=1<\/code> together with a scheduler choice such as <code>mq-deadline<\/code> on the command line. 
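<\/p>

<p>Before deciding per device, it helps to survey all block devices at once. A small sketch over sysfs (the function name and output format are my own):<\/p>

```shell
#!/bin/sh
# Print the scheduler line (active one in brackets) for every block device.
list_schedulers() {
  for f in /sys/block/*/queue/scheduler; do
    [ -r "$f" ] || continue              # skip unreadable/unmatched entries
    dev=${f#/sys/block/}
    dev=${dev%/queue/scheduler}
    printf '%-12s %s\n' "$dev" "$(cat "$f")"
  done
}

list_schedulers
```

<p>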
For NVMe devices, I check paths under <code>\/sys\/block\/nvme0n1\/queue\/<\/code> and set the selection per device. Important: I document changes so that maintenance and rollbacks <strong>succeed<\/strong> without guesswork.<\/p>\n\n<h2>Persistence and automation in operation<\/h2>\n\n<p>In day-to-day operation, I rely on repeatability and automation. Three approaches have proven effective:<\/p>\n<ul>\n  <li><strong>udev rules<\/strong>: Example for all HDDs (rotational=1): <code>echo 'ACTION==\"add|change\", KERNEL==\"sd*\", ATTR{queue\/rotational}==\"1\", ATTR{queue\/scheduler}=\"mq-deadline\"' &gt; \/etc\/udev\/rules.d\/60-io-scheduler.rules<\/code>, then <code>udevadm control --reload-rules &amp;&amp; udevadm trigger<\/code>.<\/li>\n  <li><strong>systemd-tmpfiles<\/strong>: For specific devices, I define <code>\/etc\/tmpfiles.d\/blk.conf<\/code> with lines such as <code>w \/sys\/block\/sdX\/queue\/scheduler - - - - mq-deadline<\/code>, which are written at boot.<\/li>\n  <li><strong>Configuration management<\/strong>: In Ansible\/Salt, I create device classes (NVMe, HDD) and distribute consistent defaults along with documentation and rollback.<\/li>\n<\/ul>\n<p>Note: <code>elevator=<\/code> was the kernel parameter for the old single-queue path. In blk-mq, I determine the choice <strong>per device<\/strong>. For stacks (dm-crypt, LVM, MD), I set the default on the top device; more on this below.<\/p>\n\n<h2>Workloads in hosting: Recognizing patterns and taking the right action<\/h2>\n\n<p>First, I analyze the load: Many small reads indicate web front ends, sync-heavy writes indicate databases and log pipelines, and large sequential streams indicate backups or <strong>archives<\/strong>. Tools such as <code>iostat<\/code>, <code>vmstat<\/code> and <code>blktrace<\/code> show queues, latencies, and merge effects. 
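<\/p>

<p>The pattern analysis above can be roughened into a quick classifier. A sketch with illustrative thresholds of my own choosing, fed with reads\/s, writes\/s, and average request size in KiB (e.g. taken from <code>iostat -x<\/code>):<\/p>

```shell
#!/bin/sh
# Rough workload classifier: large requests suggest streams (backup/archive),
# write-dominated mixes suggest databases/log pipelines, otherwise assume
# web-like traffic with many small reads. Thresholds are illustrative only.
classify_io() {
  # $1: reads/s  $2: writes/s  $3: average request size in KiB
  if [ "$3" -ge 256 ]; then
    echo "stream-like (backup/archive)"
  elif [ "$2" -gt "$1" ]; then
    echo "db-like (write/sync heavy)"
  else
    echo "web-like (many small reads)"
  fi
}

classify_io 900 50 8      # prints: web-like (many small reads)
classify_io 100 400 16    # prints: db-like (write/sync heavy)
classify_io 20 10 1024    # prints: stream-like (backup/archive)
```

<p>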
If there is noticeable CPU idle time due to I\/O, I refer to <a href=\"https:\/\/webhosting.de\/en\/io-wait-understand-memory-bottleneck-fix-optimization\/\">Understanding I\/O Wait<\/a> to resolve bottlenecks in a structured manner. I then test 1\u20132 scheduler candidates in identical time windows. Only measurement results are decisive, not gut feeling or <strong>myths<\/strong>.<\/p>\n\n\n<figure class=\"wp-block-image size-full is-resized\">\n  <img loading=\"lazy\" decoding=\"async\" src=\"https:\/\/webhosting.de\/wp-content\/uploads\/2025\/12\/linux_scheduler_hosting_4827.png\" alt=\"\" width=\"1536\" height=\"1024\"\/>\n<\/figure>\n\n\n<h2>Deepening measurement practice: reproducible benchmarks<\/h2>\n\n<p>For reliable decisions, I use controlled <strong>fio<\/strong> profiles and confirm with real application tests:<\/p>\n<ul>\n  <li><strong>Random reads<\/strong> (Web\/Cache): <code>fio --name=rr --rw=randread --bs=4k --iodepth=32 --numjobs=4 --runtime=120 --time_based --filename=\/mnt\/testfile --direct=1<\/code><\/li>\n  <li><strong>Random mix<\/strong> (DB): <code>fio --name=randmix --rw=randrw --rwmixread=70 --bs=8k --iodepth=64 --numjobs=8 --runtime=180 --time_based --direct=1<\/code><\/li>\n  <li><strong>Sequential<\/strong> (Backup): <code>fio --name=seqw --rw=write --bs=1m --iodepth=128 --numjobs=2 --runtime=120 --time_based --direct=1<\/code><\/li>\n<\/ul>\n<p>In parallel, I log <code>iostat -x 1<\/code> and <code>pidstat -d 1<\/code> and note the P95\/P99 latencies from <code>fio<\/code>. For in-depth diagnoses, I use <code>blktrace<\/code> or eBPF tools such as <code>biolatency<\/code>. Important: I measure at the same times of day, with the same load windows and the same file sizes. 
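<\/p>

<p>To get the P95\/P99 values mentioned above from raw samples (one latency value per line, e.g. exported from fio latency logs), a nearest-rank percentile helper is enough. A sketch; the helper is my own, not part of fio:<\/p>

```shell
#!/bin/sh
# Nearest-rank percentile: sort the samples, then pick element ceil(p/100 * N).
percentile() {
  # $1: percentile (0-100); samples on stdin, one numeric value per line
  sort -n | awk -v p="$1" '
    { v[NR] = $1 }
    END {
      if (NR == 0) exit 1
      idx = int((p / 100) * NR + 0.999999)   # cheap ceil()
      if (idx < 1) idx = 1
      print v[idx]
    }'
}

printf '%s\n' 10 20 30 40 50 60 70 80 90 100 | percentile 95   # prints: 100
```

<p>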
I minimize cache effects with <code>direct=1<\/code> and clean preconditions (e.g., pre-filling the volume).<\/p>\n\n<h2>File systems and I\/O schedulers: Interaction matters<\/h2>\n\n<p>The file system affects I\/O characteristics, so I check its journal mode, queue depth, and sync behavior very <strong>carefully<\/strong>. EXT4 and XFS work efficiently with mq-deadline, while ZFS buffers and aggregates a lot itself. On hosts with ZFS, I often observe a smaller scheduler effect because ZFS already shapes the output. For comparisons, I use identical mount options and workloads. If you are weighing up options, you will find <a href=\"https:\/\/webhosting.de\/en\/ext4-xfs-zfs-hosting-performance-comparison-storage\/\">EXT4, XFS, or ZFS<\/a> helpful perspectives on <strong>storage<\/strong> tuning.<\/p>\n\n<h2>Writeback, cache, and barriers: the often overlooked half<\/h2>\n\n<p>Schedulers can only work as well as the writeback subsystem allows. I therefore always check:<\/p>\n<ul>\n  <li><strong>Dirty parameters<\/strong>: <code>sysctl vm.dirty_background_bytes<\/code>, <code>vm.dirty_bytes<\/code>, <code>vm.dirty_expire_centisecs<\/code> control when and how aggressively the kernel writes. For databases, I often lower these values to cap burst peaks and keep P99 stable.<\/li>\n  <li><strong>Barriers\/flushes<\/strong>: I only relax options such as EXT4's <code>barrier<\/code> or XFS's default flushes if the hardware (e.g., BBWC) provides protection. 
\u201cnobarrier\u201d without power protection is <strong>risky<\/strong>.<\/li>\n  <li><strong>Device write cache<\/strong>: I verify the controller's write cache settings so that <code>fsync<\/code> data actually ends up on the medium and not just in a cache.<\/li>\n<\/ul>\n<p>Smoothing writeback reduces the load on the scheduler\u2014deadlines remain reliable, and BFQ has less work to do to counter sudden flush waves.<\/p>\n\n\n<figure class=\"wp-block-image size-full is-resized\">\n  <img loading=\"lazy\" decoding=\"async\" src=\"https:\/\/webhosting.de\/wp-content\/uploads\/2025\/12\/linux_io_scheduler_4813.png\" alt=\"\" width=\"1536\" height=\"1024\"\/>\n<\/figure>\n\n\n<h2>Virtualization, containers, and the cloud: Who is really planning?<\/h2>\n\n<p>In VMs, the hypervisor controls the physical I\/O flow, which is why I often choose Noop\/none in the guest to avoid duplicated <strong>logic<\/strong>. On the host itself, I use mq-deadline or BFQ depending on the device and task. For cloud volumes (e.g., network block storage), parts of the planning happen in the backend, so I measure real latencies instead of relying on assumptions. For container hosts with highly mixed loads, BFQ often provides better interactivity. In homogeneous flash-only batch clusters, Noop prevails because every bit of CPU time counts and the controllers <strong>work<\/strong> efficiently.<\/p>\n\n<h2>RAID, LVM, MD, and multipath: where the scheduler comes into play<\/h2>\n\n<p>In stacked block stacks, I set the scheduler at the <strong>top device<\/strong> because that's where the relevant queues are located:<\/p>\n<ul>\n  <li><strong>LVM\/dm-crypt<\/strong>: Set the scheduler at <code>\/dev\/dm-*<\/code> or <code>\/dev\/mapper\/<\/code>. I usually leave the physical PVs at <code>none<\/code> so that merging\/sorting does not happen twice.<\/li>\n  <li><strong>MD RAID<\/strong>: Decide on <code>\/dev\/mdX<\/code>; underlying <code>sdX<\/code> devices stay on <code>none<\/code>. 
Hardware RAID is treated as a single block device.<\/li>\n  <li><strong>multipath<\/strong>: Set the scheduler on the multipath mapper (<code>\/dev\/mapper\/mpatha<\/code>); path devices below stay on <code>none<\/code>.<\/li>\n<\/ul>\n<p>Important: I separate tests per <strong>pool<\/strong> and redundancy level (RAID1\/10 vs. RAID5\/6). Parity RAIDs are more sensitive to random writes; here, mq-deadline often wins out thanks to consistent read deadlines and ordered output.<\/p>\n\n<h2>Tuning strategies: Step by step to reliable performance<\/h2>\n\n<p>I start with a baseline measurement: current response times, throughput, 95th\/99th percentiles, and CPU <strong>load<\/strong>. After that, I change only one factor, typically the scheduler, and repeat the same load. Tools such as <code>fio<\/code> help to control this, but I confirm every hypothesis with real application tests. Databases require their own benchmarks that map transactions and fsync behavior. Only when the measurement is stable do I finalize my choice and document the <strong>why<\/strong>.<\/p>\n\n<h2>Queue depth, read ahead, and CPU affinity<\/h2>\n\n<p>In addition to the scheduler, queue parameters also have a significant impact in practice:<\/p>\n<ul>\n  <li><strong>Queue depth<\/strong>: <code>\/sys\/block\/sdX\/queue\/nr_requests<\/code> limits pending requests per hardware queue. NVMe can handle high depth (high throughput), while HDDs benefit from moderate depth (more stable latency).<\/li>\n  <li><strong>Readahead<\/strong>: <code>\/sys\/block\/sdX\/queue\/read_ahead_kb<\/code> or <code>blockdev --getra\/--setra<\/code>. 
Set it slightly higher for sequential workloads; keep it low for random workloads.<\/li>\n  <li><strong>rq_affinity<\/strong>: With <code>\/sys\/block\/sdX\/queue\/rq_affinity<\/code> set to 2, I ensure that I\/O completions are preferentially handled on the issuing CPU core, which reduces cross-CPU costs.<\/li>\n  <li><strong>rotational<\/strong>: I verify that SSDs report <code>rotational=0<\/code> so that the kernel does not apply HDD heuristics.<\/li>\n  <li><strong>Merges<\/strong>: <code>\/sys\/block\/sdX\/queue\/nomerges<\/code> can reduce merging (2 = off). Useful for NVMe micro-latency, but usually disadvantageous for HDDs.<\/li>\n  <li><strong>io_poll<\/strong> (NVMe): Polling can reduce latency but requires CPU power. I activate it specifically for <strong>low-latency<\/strong> requirements.<\/li>\n<\/ul>\n\n<h2>Scheduler tunables in detail<\/h2>\n\n<p>Depending on the scheduler, useful fine-tuning options are available:<\/p>\n<ul>\n  <li><strong>mq-deadline<\/strong>: <code>\/sys\/block\/sdX\/queue\/iosched\/read_expire<\/code> (ms, typically small), <code>write_expire<\/code> (larger), <code>fifo_batch<\/code> (batch size), <code>front_merges<\/code> (0\/1). I keep <code>read_expire<\/code> short to protect P95 reads and adjust <code>fifo_batch<\/code> per device.<\/li>\n  <li><strong>BFQ<\/strong>: <code>slice_idle<\/code> (idle time to exploit sequential phases), <code>low_latency<\/code> (0\/1) for responsive interactivity. 
With <code>bfq.weight<\/code> in cgroups, I control relative shares very precisely.<\/li>\n  <li><strong>none\/noop<\/strong>: Hardly any knobs of their own; the <strong>environment<\/strong> (queue depth, readahead) determines the results.<\/li>\n<\/ul>\n<p>I only change one parameter at a time and keep strict track of the changes\u2014that way, it's clear what effect each change had.<\/p>\n\n<h2>Common pitfalls and how I avoid them<\/h2>\n\n<p>Mixed pools of HDD and SSD behind a RAID controller distort tests, so I separate measurements per <strong>group<\/strong>. I don't forget that the scheduler applies per block device \u2013 I consider LVM mappers and MD devices separately. Persistence is easy to forget: without a udev rule or kernel parameter, the default returns after a reboot. Cgroups and I\/O priorities often remain unused, even though they significantly improve fairness. And I always check queue depth, writeback, and file system options so that the chosen logic <strong>shows<\/strong> its potential.<\/p>\n\n<h2>Troubleshooting: Read symptoms carefully<\/h2>\n\n<p>When the readings change, I interpret patterns and derive concrete steps:<\/p>\n<ul>\n  <li><strong>High P99 latency with many reads<\/strong>: Check whether writes are displacing reads. Test mq-deadline, lower <code>read_expire<\/code>, and smooth writeback (adjust <code>vm.dirty_*<\/code>).<\/li>\n  <li><strong>100% util on HDD, low throughput<\/strong>: Seeks dominate. Try BFQ or mq-deadline, reduce readahead, and moderate the queue depth.<\/li>\n  <li><strong>Good throughput values, but the UI stutters<\/strong>: Interactivity suffers. Activate BFQ and prioritize critical services via <code>ionice -c1<\/code> or cgroup weights.<\/li>\n  <li><strong>Significant variation depending on the time of day<\/strong>: Shared resources. 
Isolate with cgroups, choose the scheduler per pool, and move backups to off-peak times.<\/li>\n  <li><strong>NVMe timeouts in dmesg<\/strong>: Backend or firmware issue. Deactivate <code>io_poll<\/code> on a trial basis, check firmware\/driver, and verify path redundancy (multipath).<\/li>\n<\/ul>\n\n\n<figure class=\"wp-block-image size-full is-resized\">\n  <img loading=\"lazy\" decoding=\"async\" src=\"https:\/\/webhosting.de\/wp-content\/uploads\/2025\/12\/linux-io-hosting-9481.png\" alt=\"\" width=\"1536\" height=\"1024\"\/>\n<\/figure>\n\n\n<h2>In summary: Clear decisions for everyday hosting<\/h2>\n\n<p>For flash storage and guests, I often opt for <strong>Noop<\/strong> to save overhead and let the controllers do their job. In all-round servers with HDD or RAID, mq-deadline delivers reliable latency and high usability. With many active users and interactive loads, BFQ ensures fair sharing and noticeable responsiveness. Before each commitment, I measure with real workloads and observe the effects on P95\/P99. This allows me to make traceable decisions, keep systems running smoothly, and stabilize <strong>server<\/strong> performance in day-to-day business.<\/p>","protected":false},"excerpt":{"rendered":"<p>I\/O Scheduler Linux explained: noop, mq-deadline &amp; BFQ for optimal hosting. 
Storage tuning tips for server performance.<\/p>","protected":false},"author":1,"featured_media":16142,"comment_status":"","ping_status":"","sticky":false,"template":"","format":"standard","meta":{"_acf_changed":false,"_crdt_document":"","inline_featured_image":false,"footnotes":""},"categories":[676],"tags":[],"class_list":["post-16149","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-server_vm"],"acf":[],"_wp_attached_file":null,"_wp_attachment_metadata":null,"litespeed-optimize-size":null,"litespeed-optimize-set":null,"_elementor_source_image_hash":null,"_wp_attachment_image_alt":null,"stockpack_author_name":null,"stockpack_author_url":null,"stockpack_provider":null,"stockpack_image_url":null,"stockpack_license":null,"stockpack_license_url":null,"stockpack_modification":null,"color":null,"original_id":null,"original_url":null,"original_link":null,"unsplash_location":null,"unsplash_sponsor":null,"unsplash_exif":null,"unsplash_attachment_metadata":null,"_elementor_is_screenshot":null,"surfer_file_name":null,"surfer_file_original_url":null,"envato_tk_source_kit":null,"envato_tk_source_index":null,"envato_tk_manifest":null,"envato_tk_folder_name":null,"envato_tk_builder":null,"envato_elements_download_event":null,"_menu_item_type":null,"_menu_item_menu_item_parent":null,"_menu_item_object_id":null,"_menu_item_object":null,"_menu_item_target":null,"_menu_item_classes":null,"_menu_item_xfn":null,"_menu_item_url":null,"_trp_menu_languages":null,"rank_math_primary_category":null,"rank_math_title":null,"inline_featured_image":null,"_yoast_wpseo_primary_category":null,"rank_math_schema_blogposting":null,"rank_math_schema_videoobject":null,"_oembed_049c719bc4a9f89deaead66a7da9fddc":null,"_oembed_time_049c719bc4a9f89deaead66a7da9fddc":null,"_yoast_wpseo_focuskw":null,"_yoast_wpseo_linkdex":null,"_oembed_27e3473bf8bec795fbeb3a9d38489348":null,"_oembed_c3b0f6959478faf92a1f343d8f96b19e":null,"_trp_translated_slug_en_us":null,"_wp_des
ired_post_slug":null,"_yoast_wpseo_title":null,"tldname":null,"tldpreis":null,"tldrubrik":null,"tldpolicylink":null,"tldsize":null,"tldregistrierungsdauer":null,"tldtransfer":null,"tldwhoisprivacy":null,"tldregistrarchange":null,"tldregistrantchange":null,"tldwhoisupdate":null,"tldnameserverupdate":null,"tlddeletesofort":null,"tlddeleteexpire":null,"tldumlaute":null,"tldrestore":null,"tldsubcategory":null,"tldbildname":null,"tldbildurl":null,"tldclean":null,"tldcategory":null,"tldpolicy":null,"tldbesonderheiten":null,"tld_bedeutung":null,"_oembed_d167040d816d8f94c072940c8009f5f8":null,"_oembed_b0a0fa59ef14f8870da2c63f2027d064":null,"_oembed_4792fa4dfb2a8f09ab950a73b7f313ba":null,"_oembed_33ceb1fe54a8ab775d9410abf699878d":null,"_oembed_fd7014d14d919b45ec004937c0db9335":null,"_oembed_21a029d076783ec3e8042698c351bd7e":null,"_oembed_be5ea8a0c7b18e658f08cc571a909452":null,"_oembed_a9ca7a298b19f9b48ec5914e010294d2":null,"_oembed_f8db6b27d08a2bb1f920e7647808899a":null,"_oembed_168ebde5096e77d8a89326519af9e022":null,"_oembed_cdb76f1b345b42743edfe25481b6f98f":null,"_oembed_87b0613611ae54e86e8864265404b0a1":null,"_oembed_27aa0e5cf3f1bb4bc416a4641a5ac273":null,"_oembed_time_27aa0e5cf3f1bb4bc416a4641a5ac273":null,"_tldname":null,"_tldclean":null,"_tldpreis":null,"_tldcategory":null,"_tldsubcategory":null,"_tldpolicy":null,"_tldpolicylink":null,"_tldsize":null,"_tldregistrierungsdauer":null,"_tldtransfer":null,"_tldwhoisprivacy":null,"_tldregistrarchange":null,"_tldregistrantchange":null,"_tldwhoisupdate":null,"_tldnameserverupdate":null,"_tlddeletesofort":null,"_tlddeleteexpire":null,"_tldumlaute":null,"_tldrestore":null,"_tldbildname":null,"_tldbildurl":null,"_tld_bedeutung":null,"_tldbesonderheiten":null,"_oembed_ad96e4112edb9f8ffa35731d4098bc6b":null,"_oembed_8357e2b8a2575c74ed5978f262a10126":null,"_oembed_3d5fea5103dd0d22ec5d6a33eff7f863":null,"_eael_widget_elements":null,"_oembed_0d8a206f09633e3d62b95a15a4dd0487":null,"_oembed_time_0d8a206f09633e3d62b95a15a4dd0487":null,"_
aioseo_description":null,"_eb_attr":null,"_eb_data_table":null,"_oembed_819a879e7da16dd629cfd15a97334c8a":null,"_oembed_time_819a879e7da16dd629cfd15a97334c8a":null,"_acf_changed":null,"_wpcode_auto_insert":null,"_edit_last":null,"_edit_lock":null,"_oembed_e7b913c6c84084ed9702cb4feb012ddd":null,"_oembed_bfde9e10f59a17b85fc8917fa7edf782":null,"_oembed_time_bfde9e10f59a17b85fc8917fa7edf782":null,"_oembed_03514b67990db061d7c4672de26dc514":null,"_oembed_time_03514b67990db061d7c4672de26dc514":null,"rank_math_news_sitemap_robots":null,"rank_math_robots":null,"_eael_post_view_count":"1826","_trp_automatically_translated_slug_ru_ru":null,"_trp_automatically_translated_slug_et":null,"_trp_automatically_translated_slug_lv":null,"_trp_automatically_translated_slug_fr_fr":null,"_trp_automatically_translated_slug_en_us":null,"_wp_old_slug":null,"_trp_automatically_translated_slug_da_dk":null,"_trp_automatically_translated_slug_pl_pl":null,"_trp_automatically_translated_slug_es_es":null,"_trp_automatically_translated_slug_hu_hu":null,"_trp_automatically_translated_slug_fi":null,"_trp_automatically_translated_slug_ja":null,"_trp_automatically_translated_slug_lt_lt":null,"_elementor_edit_mode":null,"_elementor_template_type":null,"_elementor_version":null,"_elementor_pro_version":null,"_wp_page_template":null,"_elementor_page_settings":null,"_elementor_data":null,"_elementor_css":null,"_elementor_conditions":null,"_happyaddons_elements_cache":null,"_oembed_75446120c39305f0da0ccd147f6de9cb":null,"_oembed_time_75446120c39305f0da0ccd147f6de9cb":null,"_oembed_3efb2c3e76a18143e7207993a2a6939a":null,"_oembed_time_3efb2c3e76a18143e7207993a2a6939a":null,"_oembed_59808117857ddf57e478a31d79f76e4d":null,"_oembed_time_59808117857ddf57e478a31d79f76e4d":null,"_oembed_965c5b49aa8d22ce37dfb3bde0268600":null,"_oembed_time_965c5b49aa8d22ce37dfb3bde0268600":null,"_oembed_81002f7ee3604f645db4ebcfd1912acf":null,"_oembed_time_81002f7ee3604f645db4ebcfd1912acf":null,"_elementor_screenshot":null,"_oembed_7e
a3429961cf98fa85da9747683af827":null,"_oembed_time_7ea3429961cf98fa85da9747683af827":null,"_elementor_controls_usage":null,"_elementor_page_assets":[],"_elementor_screenshot_failed":null,"theplus_transient_widgets":null,"_eael_custom_js":null,"_wp_old_date":null,"_trp_automatically_translated_slug_it_it":null,"_trp_automatically_translated_slug_pt_pt":null,"_trp_automatically_translated_slug_zh_cn":null,"_trp_automatically_translated_slug_nl_nl":null,"_trp_automatically_translated_slug_pt_br":null,"_trp_automatically_translated_slug_sv_se":null,"rank_math_analytic_object_id":null,"rank_math_internal_links_processed":null,"_trp_automatically_translated_slug_ro_ro":null,"_trp_automatically_translated_slug_sk_sk":null,"_trp_automatically_translated_slug_bg_bg":null,"_trp_automatically_translated_slug_sl_si":null,"litespeed_vpi_list":null,"litespeed_vpi_list_mobile":null,"rank_math_seo_score":null,"rank_math_contentai_score":null,"ilj_limitincominglinks":null,"ilj_maxincominglinks":null,"ilj_limitoutgoinglinks":null,"ilj_maxoutgoinglinks":null,"ilj_limitlinksperparagraph":null,"ilj_linksperparagraph":null,"ilj_blacklistdefinition":null,"ilj_linkdefinition":null,"_eb_reusable_block_ids":null,"rank_math_focus_keyword":"I\/O Scheduler 
Linux","rank_math_og_content_image":null,"_yoast_wpseo_metadesc":null,"_yoast_wpseo_content_score":null,"_yoast_wpseo_focuskeywords":null,"_yoast_wpseo_keywordsynonyms":null,"_yoast_wpseo_estimated-reading-time-minutes":null,"rank_math_description":null,"surfer_last_post_update":null,"surfer_last_post_update_direction":null,"surfer_keywords":null,"surfer_location":null,"surfer_draft_id":null,"surfer_permalink_hash":null,"surfer_scrape_ready":null,"_thumbnail_id":"16142","footnotes":null,"_links":{"self":[{"href":"https:\/\/webhosting.de\/en\/wp-json\/wp\/v2\/posts\/16149","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/webhosting.de\/en\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/webhosting.de\/en\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/webhosting.de\/en\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/webhosting.de\/en\/wp-json\/wp\/v2\/comments?post=16149"}],"version-history":[{"count":0,"href":"https:\/\/webhosting.de\/en\/wp-json\/wp\/v2\/posts\/16149\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/webhosting.de\/en\/wp-json\/wp\/v2\/media\/16142"}],"wp:attachment":[{"href":"https:\/\/webhosting.de\/en\/wp-json\/wp\/v2\/media?parent=16149"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/webhosting.de\/en\/wp-json\/wp\/v2\/categories?post=16149"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/webhosting.de\/en\/wp-json\/wp\/v2\/tags?post=16149"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}