{"id":13634,"date":"2025-10-07T16:37:14","date_gmt":"2025-10-07T14:37:14","guid":{"rendered":"https:\/\/webhosting.de\/hetzner-rescue-system-starten-anleitung-recovery-tutorial\/"},"modified":"2025-10-07T16:37:14","modified_gmt":"2025-10-07T14:37:14","slug":"hetzner-rescue-system-start-instructions-recovery-tutorial","status":"publish","type":"post","link":"https:\/\/webhosting.de\/en\/hetzner-rescue-system-starten-anleitung-recovery-tutorial\/","title":{"rendered":"Starting the Hetzner Rescue System - step-by-step guide for server admins"},"content":{"rendered":"<p>I will show you how to start the hetzner rescue system in just a few minutes, how to <strong>SSH<\/strong> log in and enter your <strong>Server<\/strong> repair in a targeted manner. This guide takes you step-by-step from activation to recovery, including file system checks, backups and reinstallation.<\/p>\n\n<h2>Key points<\/h2>\n\n<p>The following key aspects will help you to start and work in rescue mode without any detours.<\/p>\n<ul>\n  <li><strong>Rescue start<\/strong>Activation in Robot or Cloud, then reboot.<\/li>\n  <li><strong>SSH access<\/strong>Login with key or password and root rights.<\/li>\n  <li><strong>Error analysis<\/strong>Check fsck, logs, partitions.<\/li>\n  <li><strong>Data backup<\/strong>: rsync, tar, scp for fast backups.<\/li>\n  <li><strong>New installation<\/strong>installimage for fresh systems.<\/li>\n<\/ul>\n\n\n<figure class=\"wp-block-image size-full is-resized\">\n  <img fetchpriority=\"high\" decoding=\"async\" src=\"https:\/\/webhosting.de\/wp-content\/uploads\/2025\/10\/hetzner-rescue-server-boot-9281.png\" alt=\"\" width=\"1536\" height=\"1024\"\/>\n<\/figure>\n\n\n<h2>What the Rescue System does<\/h2>\n\n<p>The Rescue System loads an independent Linux environment into the working memory and gives me immediate access to the <strong>Root<\/strong>-access, even if the installed <strong>Operating system<\/strong> fails. 
I boot independently of defective boot loaders, damaged packages or faulty configurations. I check file systems, recover data, analyze logs and restart services. The environment remains lean but offers all the important tools for diagnostics and recovery. This allows me to stay in control even if the regular system goes down completely.<\/p>\n\n<p>Conveniently, the rescue environment is deliberately volatile: changes disappear after the reboot, which means I can test safely. If necessary, I install temporary tools (e.g. smartmontools, mdadm, lvm2, btrfs-progs or xfsprogs) without changing the productive system. The kernel version is modern and supports current hardware, including NVMe, UEFI, GPT, software RAID (mdraid), LVM and LUKS encryption. This allows me to cover even complex storage setups and isolate even rare error patterns reproducibly.<\/p>\n\n<h2>Requirements and access<\/h2>\n\n<p>To get started, I need access to the customer interface and my <strong>SSH keys<\/strong> or a temporary <strong>password<\/strong>. I manage dedicated systems conveniently via the <a href=\"https:\/\/webhosting.de\/en\/hetzner-robot-surface-server-administration-tips-guide-comparison-power\/\">Hetzner Robot<\/a>, while I control instances in the cloud via the console. Both interfaces offer a clear option for activating rescue mode. I check the correct server IP, IPv6 availability and, if necessary, out-of-band functions for the reset in advance. This preparation significantly shortens the downtime.<\/p>\n\n<p>When I log in via SSH for the first time, I deliberately confirm the new fingerprint and update my known_hosts entry if necessary so that subsequent connections do not fail due to warnings. For teams, I store additional keys specifically for the rescue operation and remove them again after completion. 
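<\/p>

<p>The known_hosts refresh can be sketched in a few lines; here the IP 203.0.113.10 and the demo file name are placeholders, and in practice the single command <code>ssh-keygen -R &lt;server-ip&gt;<\/code> against <code>~\/.ssh\/known_hosts<\/code> does the same job:<\/p>

```shell
# Drop the stale host key so the new rescue fingerprint can be accepted.
# 203.0.113.10 and ./known_hosts_demo stand in for the real server IP and
# ~/.ssh/known_hosts; this is equivalent to: ssh-keygen -R 203.0.113.10
KH=./known_hosts_demo
printf '203.0.113.10 ssh-ed25519 OLDKEY\nother.host ssh-rsa KEY\n' > "$KH"
grep -v '^203\.0\.113\.10 ' "$KH" > "$KH.tmp" && mv "$KH.tmp" "$KH"
cat "$KH"
```

<p>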
If only a temporary password is available, I change it immediately after logging in and then replace it with key-based authentication - I consistently deactivate password logins at the end of the work.<\/p>\n\n\n<figure class=\"wp-block-image size-full is-resized\">\n  <img decoding=\"async\" src=\"https:\/\/webhosting.de\/wp-content\/uploads\/2025\/10\/hetznerrescueguide2159.png\" alt=\"\" width=\"1536\" height=\"1024\"\/>\n<\/figure>\n\n\n<h2>Activating the Rescue System - step by step<\/h2>\n\n<p>I open the server details, select the \"Rescue\" option and set the architecture to <strong>linux64<\/strong> for current systems; then I store my <strong>SSH key<\/strong>. Depending on the situation, I either start only the rescue mode and trigger the reboot separately, or I use \"Activate Rescue &amp; Power Cycle\" for a direct restart. If the machine hangs, I perform a hard reset via the interface. After the boot, the interface shows a temporary root password if I have not stored a key. As soon as the server boots up, it responds to SSH and I can get started.<\/p>\n\n<p>In complex situations, I plan a clear sequence: activate, power cycle, test the SSH login, then start troubleshooting. A manual power cycle is more likely to be necessary on dedicated servers, while cloud instances usually switch to rescue mode immediately. Important: after a successful repair, I switch the rescue mode off again so that the machine reboots from the local disk.<\/p>\n\n<h2>SSH connection and first checks<\/h2>\n\n<p>I connect via <strong>SSH<\/strong> with <code>ssh root@&lt;server-ip&gt;<\/code> and first check the network, data carriers and logs for a quick overview of the <strong>status<\/strong>. With <code>ip a<\/code> and <code>ping<\/code> I check reachability; <code>journalctl --no-pager -xb<\/code> or log files on the mounted disks show the latest error messages. The commands <code>lsblk<\/code>, <code>blkid<\/code> and <code>fdisk -l<\/code> provide clarity about layout and file systems. 
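<\/p>

<p>These first checks can be bundled into one reusable snapshot; this is a minimal sketch under the assumption that the log path <code>\/tmp\/rescue-triage.log<\/code> is free to use and that a missing tool should be skipped rather than abort the run:<\/p>

```shell
# Collect the initial diagnostics into one file for later comparison.
LOG=/tmp/rescue-triage.log
: > "$LOG"
for cmd in "ip -brief a" \
           "lsblk -o NAME,SIZE,TYPE,FSTYPE,MOUNTPOINT" \
           "blkid" \
           "cat /proc/mdstat" \
           "journalctl --no-pager -xb"; do
  printf '### %s\n' "$cmd" >> "$LOG"
  # never abort the whole triage because one tool is missing in rescue
  sh -c "$cmd" >> "$LOG" 2>&1 || true
done
echo "triage written to $LOG"
```

<p>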
For RAID I use <code>cat \/proc\/mdstat<\/code> and <code>mdadm --detail<\/code> to check the array condition. For initial hardware indicators I run <code>smartctl -a<\/code> and a short <code>hdparm -Tt<\/code> test.<\/p>\n\n\n<figure class=\"wp-block-image size-full is-resized\">\n  <img decoding=\"async\" src=\"https:\/\/webhosting.de\/wp-content\/uploads\/2025\/10\/hetzner-rescue-system-guide-5973.png\" alt=\"\" width=\"1536\" height=\"1024\"\/>\n<\/figure>\n\n\n<h2>LVM, RAID, LUKS and special file systems<\/h2>\n\n<p>Many servers use LVM, software RAID or encryption. I first activate all relevant layers:<\/p>\n<ul>\n  <li><strong>mdraid<\/strong>: <code>mdadm --assemble --scan<\/code> brings up existing arrays; I check the status with <code>cat \/proc\/mdstat<\/code>.<\/li>\n  <li><strong>LUKS<\/strong>: I open encrypted volumes with <code>cryptsetup luksOpen \/dev\/&lt;device&gt; &lt;name&gt;<\/code>.<\/li>\n  <li><strong>LVM<\/strong>: with <code>vgscan<\/code> and <code>vgchange -ay<\/code> I activate volume groups and inspect them via <code>lvs<\/code>\/<code>vgs<\/code>\/<code>pvs<\/code>.<\/li>\n<\/ul>\n<p>With Btrfs, I pay attention to subvolumes and mount specifically with <code>-o subvol=@<\/code> or <code>-o subvolid=5<\/code> for the top level. I check XFS with <code>xfs_repair<\/code> (never on mounted volumes), while ext4 is classically repaired with <code>fsck.ext4 -f<\/code>. I use the UUIDs from <code>blkid<\/code>, because device names for NVMe (<code>\/dev\/nvme0n1p1<\/code>) can vary with changing device order; if necessary, I correct <code>\/etc\/fstab<\/code> accordingly.<\/p>\n\n<h2>File system repair and data backup<\/h2>\n\n<p>Before I repair, I back up important <strong>data<\/strong> with <code>rsync<\/code>, <code>scp<\/code> or <code>tar<\/code> to an external target or a local <strong>backup<\/strong> directory. For checks I use <code>fsck<\/code> only on unmounted partitions, for example <code>fsck -f \/dev\/sda2<\/code>, to correct inconsistencies cleanly. 
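<\/p>

<p>I assemble the backup command once and reuse it; this sketch only prints the command so it can be reviewed first, and both <code>\/mnt\/<\/code> as the rescued system and the target host are placeholder assumptions:<\/p>

```shell
# Build a repeatable rsync backup command; -aHAX preserves permissions,
# hardlinks, ACLs and xattrs. SRC and DST are placeholders for this sketch.
SRC=/mnt/
DST=backup@203.0.113.20:/backups/server01/
RSYNC_OPTS="-aHAX --info=progress2 --dry-run"   # drop --dry-run for the real run
CMD="rsync $RSYNC_OPTS $SRC $DST"
echo "$CMD"
```

<p>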
I then mount the system under <code>\/mnt<\/code>, for example with <code>mount \/dev\/sda2 \/mnt<\/code>, and attach sub-paths such as <code>\/proc<\/code>, <code>\/sys<\/code> and <code>\/dev<\/code> when I want to chroot. I edit individual configuration files such as <code>\/etc\/fstab<\/code> or network settings directly in the mounted system. By proceeding carefully, I prevent consequential damage and keep downtime to a minimum.<\/p>\n\n<p>For reliable backups, I rely on repeatable commands: <code>rsync -aHAX --info=progress2<\/code> preserves permissions, hardlinks, ACLs and xattrs. If the connection is slow, I throttle with <code>--bwlimit<\/code> and parallelize compression with <code>tar -I pigz<\/code>. If necessary, I create a block-level image of critical, failing data carriers with <code>ddrescue<\/code> to shift the logical work to an image. I check Btrfs systems carefully with <code>btrfs check --readonly<\/code> and use <code>btrfs scrub<\/code> to detect silent errors. XFS often requires an offline repair in the event of inconsistencies (<code>xfs_repair<\/code>) - I always back up the partition first.<\/p>\n\n<h2>UEFI\/BIOS, GPT\/MBR and bootloader repair<\/h2>\n\n<p>Many boot problems are caused by the interaction of firmware, partition scheme and boot loader. I first clarify whether the server starts in UEFI or legacy BIOS mode (<code>ls \/sys\/firmware\/efi<\/code>). With UEFI I mount the EFI partition (typically <code>\/dev\/sdX1<\/code> or <code>\/dev\/nvme0n1p1<\/code>) to <code>\/mnt\/boot\/efi<\/code>. Then I chroot into the system:<\/p>\n<pre><code>mount \/dev\/&lt;root-partition&gt; \/mnt\nmount --bind \/dev \/mnt\/dev\nmount --bind \/proc \/mnt\/proc\nmount --bind \/sys \/mnt\/sys\nchroot \/mnt \/bin\/bash\n<\/code><\/pre>\n<p>I reinstall the bootloader appropriately (<code>grub-install<\/code> to the correct device) and regenerate configuration and initramfs: <code>update-grub<\/code> and <code>update-initramfs -u -k all<\/code> (for dracut-based systems <code>dracut -f<\/code>). 
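<\/p>

<p>Inside the chroot, the repair depends on the firmware mode; this is a sketch in which the target disk <code>\/dev\/sda<\/code> is a placeholder and the actual install commands are left as comments, because they modify the system:<\/p>

```shell
# Detect the firmware mode first, then reinstall the matching bootloader.
if [ -d /sys/firmware/efi ]; then MODE=uefi; else MODE=bios; fi
echo "boot mode: $MODE"
# BIOS:  grub-install /dev/sda                                  (placeholder disk)
# UEFI:  grub-install --target=x86_64-efi --efi-directory=/boot/efi
# Then:  update-grub && update-initramfs -u -k all              (or: dracut -f)
```

<p>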
If the device order is not stable, I switch <code>\/etc\/default\/grub<\/code> to UUIDs and check <code>\/etc\/fstab<\/code> for correct entries. When changing between GPT and MBR, I check whether a BIOS boot partition (for GRUB\/BIOS) or a valid EFI system partition exists.<\/p>\n\n\n<figure class=\"wp-block-image size-full is-resized\">\n  <img loading=\"lazy\" decoding=\"async\" src=\"https:\/\/webhosting.de\/wp-content\/uploads\/2025\/10\/hetzner-rescue-anleitung-3821.png\" alt=\"\" width=\"1536\" height=\"1024\"\/>\n<\/figure>\n\n\n<h2>Network pitfalls in Rescue<\/h2>\n\n<p>Network problems are often the reason why services are \"gone\". In Rescue I check the link status (<code>ip link<\/code>), routes (<code>ip r<\/code>) and DNS resolution (<code>resolvectl status<\/code> or <code>cat \/etc\/resolv.conf<\/code>). I test IPv4 and IPv6 separately (<code>ping -4<\/code>\/<code>ping -6<\/code>). For servers with bridges or bonding, the order of interfaces in the productive system may differ from the rescue environment. I make a note of the MAC addresses and map them correctly. If the production system uses Netplan, I verify the <code>\/etc\/netplan\/*.yaml<\/code> files and, after the chroot, run <code>netplan generate<\/code> and <code>netplan apply<\/code>. For classic <code>\/etc\/network\/interfaces<\/code> setups, I pay attention to consistent interface names (predictable names vs. <code>eth0<\/code>).<\/p>\n\n<h2>Reinstall the operating system<\/h2>\n\n<p>If repairs no longer make sense, I set the system up completely fresh with <strong>installimage<\/strong> and thus save valuable <strong>time<\/strong>. The tool guides me through the selection of distribution, partitioning and boot loader. I include my own configuration files and SSH keys in the installation so that the first boot runs smoothly. After the installation, I start the server as normal and check the services, firewall and updates. 
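<\/p>

<p>Before the reboot I make sure the system mounts by UUID; a sketch that derives an fstab line from <code>blkid<\/code>, where the device and the ext4 options are assumptions and a dummy fallback lets the snippet run even outside the rescue system:<\/p>

```shell
# Derive a UUID-based fstab line so device-order changes stay harmless.
DEV=/dev/sda2                                   # placeholder device
UUID=$(blkid -s UUID -o value "$DEV" 2>/dev/null || echo "xxxx-xxxx")
[ -n "$UUID" ] || UUID="xxxx-xxxx"
LINE="UUID=$UUID  /  ext4  defaults  0 1"
echo "$LINE"
```

<p>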
Finally, I deactivate the rescue mode so that the next boot takes place from the local data carrier again.<\/p>\n\n<p>I deliberately use UUID-based mounts for new installations to rule out device order problems later on. For RAID setups, I have the arrays created from the start and check the rebuild status before restoring data. If you deploy similar systems on a recurring basis, you work with predefined installimage templates and a clear partitioning logic (root, separate data partition, swap, EFI if necessary). After the first boot, I update package sources and kernels, activate automatic security updates and roll out my basic hardening steps.<\/p>\n\n<h2>Security, time window and fallback<\/h2>\n\n<p>Access is exclusively via <strong>SSH<\/strong>, therefore I consistently rely on <strong>keys<\/strong> instead of static passwords. The rescue mode remains ready for a limited time after activation and falls back to the local boot device on the next normal restart. I work quickly, document every step and keep a second session open for larger interventions. I do not write sensitive data into shell histories and delete temporary files after use. After a successful recovery, I deactivate the mode in the interface again.<\/p>\n\n<p>After reactivating the productive system, I rotate access data, remove temporary rescue keys, reset root passwords that are no longer needed and back up freshly generated configurations. I collect audit information (who did what and when) and document deviations from the standard setup. 
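<\/p>

<p>Hardening the login again can be scripted; a sketch against a demo copy of the config (in production I would edit <code>\/etc\/ssh\/sshd_config<\/code>, validate with <code>sshd -t<\/code> and reload sshd afterwards):<\/p>

```shell
# Enforce key-only logins after the rescue work is done (demo file).
CFG=./sshd_config_demo
printf 'PasswordAuthentication yes\nPermitRootLogin yes\n' > "$CFG"
sed -i 's/^PasswordAuthentication yes$/PasswordAuthentication no/' "$CFG"
sed -i 's/^PermitRootLogin yes$/PermitRootLogin prohibit-password/' "$CFG"
cat "$CFG"
# production follow-up: sshd -t && systemctl reload sshd
```

<p>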
This prevents emergency measures from becoming permanent and helps me meet compliance requirements.<\/p>\n\n\n<figure class=\"wp-block-image size-full is-resized\">\n  <img loading=\"lazy\" decoding=\"async\" src=\"https:\/\/webhosting.de\/wp-content\/uploads\/2025\/10\/hetzner-rescue-start-4281.png\" alt=\"\" width=\"1536\" height=\"1024\"\/>\n<\/figure>\n\n\n<h2>Example: rescuing a WordPress server<\/h2>\n\n<p>I boot into rescue mode, mount the system partition and back up the <strong>database<\/strong> via <code>mysqldump<\/code> and the <strong>wp-content<\/strong> directory with <code>tar<\/code> or <code>rsync<\/code>. I then check the file system, reinstall the boot loader and correct faulty PHP or NGINX configurations. If packages are corrupted, I use chroot and reinstall dependencies. If that is not enough, I reset the machine with <code>installimage<\/code> and restore the backup and configurations. Finally, I verify the frontend, login and cronjobs.<\/p>\n\n<p>In practice, I pay attention to InnoDB consistency (MySQL\/MariaDB): if <code>mysqld<\/code> fails at startup, I back up <code>\/var\/lib\/mysql<\/code> and run the dump from a fresh instance. I empty caches (object cache, page cache, OPcache) selectively, set file permissions consistently (<code>find . -type d -exec chmod 755 {} +<\/code>, <code>find . -type f -exec chmod 644 {} +<\/code>) and check <code>open_basedir<\/code> and upload directories. As a test, I deactivate critical plugins by renaming the plugin directory. I then check PHP-FPM pools, FastCGI timeouts, memory limits and the NGINX\/Apache includes. A short <code>wp cron event run --due-now<\/code> (if WP-CLI is available) helps to process backlogs.<\/p>\n\n<h2>Best practices for admins<\/h2>\n\n<p>Before deep interventions, I create a fresh <strong>backup<\/strong> and secure key files such as those under <strong>\/etc<\/strong>, so that I can roll back at any time. Every step goes into a short log, which helps me later with audits or new incidents. 
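<\/p>

<p>The step log does not need more than a small helper; a minimal sketch with an assumed log path:<\/p>

```shell
# Append one timestamped line per rescue step (the log path is an assumption).
STEPLOG=/tmp/rescue-steps.log
: > "$STEPLOG"
log_step() {
  printf '%s %s\n' "$(date -u +%FT%TZ)" "$*" >> "$STEPLOG"
}
log_step "mounted /dev/sda2 on /mnt read-only"
log_step "backup of /etc completed"
tail -n 2 "$STEPLOG"
```

<p>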
After rebooting into the productive system, I check the services, logs, network and monitoring thoroughly. For recurring tasks, I build up a small set of scripts to standardize command sequences. If you are planning additional performance or new hardware, you can <a href=\"https:\/\/webhosting.de\/en\/hetzner-root-server-rental-guide-tips-server-knowledge\/\">rent a suitable root server<\/a> and schedule a migration window.<\/p>\n\n<p>I also keep a runbook checklist ready, which contains responsibilities and escalation paths. Planned \"game days\" (targeted failure simulations) train the team for emergencies. I regularly test backups with restore drills - an untested backup is considered non-existent. And I version my system configurations so that I can quickly spot differences between the \"good\" and the \"defective\" state.<\/p>\n\n<h2>Cloud vs. dedicated: differences in the process<\/h2>\n\n<p>In the cloud, I often change the boot mode directly in the instance dialog and use the serial console for quick checks, while a power cycle and possibly out-of-band access are necessary on dedicated servers. Cloud volumes can be conveniently attached to other instances - an efficient way to back up data without downtime on the affected host. On bare metal, I pay more attention to the physical order of the drives, especially when adding SSDs or NVMe drives. In both worlds, Rescue is a temporary tool - I plan the way back to the normal boot in good time.<\/p>\n\n<h2>Comparison: providers with a rescue system<\/h2>\n\n<p>What counts for fast recovery is, apart from good <strong>hardware<\/strong>, a cleanly integrated <strong>rescue<\/strong> feature. The following table provides a compact overview of the range of functions and handling. I have based it on availability, ease of access and typical admin workflows. The \"Recommendation\" rating reflects my practical use for typical faults. 
The weighting can of course vary depending on the intended use.<\/p>\n\n<table>\n  <thead>\n    <tr>\n      <th>Provider<\/th>\n      <th>Rescue system available<\/th>\n      <th>Ease of use<\/th>\n      <th>Performance<\/th>\n      <th>Recommendation<\/th>\n    <\/tr>\n  <\/thead>\n  <tbody>\n    <tr>\n      <td>webhoster.de<\/td>\n      <td>Yes<\/td>\n      <td>Very good<\/td>\n      <td>Very high<\/td>\n      <td>Test winner<\/td>\n    <\/tr>\n    <tr>\n      <td>Hetzner<\/td>\n      <td>Yes<\/td>\n      <td>Very good<\/td>\n      <td>High<\/td>\n      <td><\/td>\n    <\/tr>\n    <tr>\n      <td>Strato<\/td>\n      <td>Partial<\/td>\n      <td>Good<\/td>\n      <td>Medium<\/td>\n      <td><\/td>\n    <\/tr>\n    <tr>\n      <td>IONOS<\/td>\n      <td>No<\/td>\n      <td>Medium<\/td>\n      <td>Medium<\/td>\n      <td><\/td>\n    <\/tr>\n  <\/tbody>\n<\/table>\n\n\n<figure class=\"wp-block-image size-full is-resized\">\n  <img loading=\"lazy\" decoding=\"async\" src=\"https:\/\/webhosting.de\/wp-content\/uploads\/2025\/10\/hetzner-rescue-server-5186.png\" alt=\"\" width=\"1536\" height=\"1024\"\/>\n<\/figure>\n\n\n<h2>Checklist: sequence of steps in an emergency<\/h2>\n\n<ul>\n  <li>Activate Rescue, trigger a reboot\/power cycle, test SSH.<\/li>\n  <li>Inspect hardware\/storage: <code>smartctl<\/code>, <code>lsblk<\/code>, <code>blkid<\/code>, <code>mdstat<\/code>, <code>lvm<\/code>.<\/li>\n  <li>Activate arrays\/LUKS\/LVM, inspect file systems read-only.<\/li>\n  <li>Create a backup (rsync\/tar), then run <code>fsck<\/code> and repairs.<\/li>\n  <li>Mount the system under <code>\/mnt<\/code>, create bind mounts, chroot.<\/li>\n  <li>Repair bootloader\/initramfs, check the network config.<\/li>\n  <li>Test the boot, verify services, check monitoring\/alarms.<\/li>\n  <li>Deactivate Rescue, remove temporary keys, update documentation.<\/li>\n<\/ul>\n\n<h2>FAQ: Hetzner Rescue System<\/h2>\n\n<p>Can I rescue my <strong>data<\/strong> if the system no longer boots? 
Yes, in rescue mode I read the data carriers directly and back up important <strong>folders<\/strong> or entire partitions.<\/p>\n<p>How long does the rescue mode remain active? After activation, it is available for a limited time and the system switches back to the local <strong>boot<\/strong> device at the next regular reboot; I therefore plan a speedy <strong>procedure<\/strong>.<\/p>\n<p>Does this work for cloud and dedicated servers? Yes, I start the mode for both dedicated machines and cloud instances in the <a href=\"https:\/\/webhosting.de\/en\/hetzner-cloud-server-overview-entry-hosting-test-winner-future\/\">Hetzner Cloud<\/a>.<\/p>\n<p>What do I do if the bootloader is damaged? I mount root and, if present, the EFI partition, chroot into the system, execute <code>grub-install<\/code> and <code>update-grub<\/code>, rebuild the initramfs and then test the reboot.<\/p>\n<p>How do I deal with LVM\/RAID? I first assemble mdraid, activate LVM with <code>vgchange -ay<\/code> and then mount the logical volumes. Repairs only happen after a backup.<\/p>\n<p>Can I save individual files only? Yes, I mount read-only and selectively copy configs, databases (via dump) or directories - minimally invasive and fast.<\/p>\n\n<h2>Core message<\/h2>\n\n<p>With the <strong>Hetzner<\/strong> Rescue System, I have a fast tool that reliably addresses boot problems, file system errors and damaged configurations. I activate the mode, log in via SSH, back up data and then decide between repairing and reinstalling. This saves <strong>time<\/strong> in an emergency and reduces downtime to the bare minimum. If you internalize these few steps, you can handle even difficult outages calmly. 
This means that server operation can be planned and the restart is controlled.<\/p>","protected":false},"excerpt":{"rendered":"<p>Learn how to activate and use the Hetzner Rescue System and get maximum security in the event of server problems.<\/p>","protected":false},"author":1,"featured_media":13627,"comment_status":"","ping_status":"","sticky":false,"template":"","format":"standard","meta":{"_acf_changed":false,"_crdt_document":"","inline_featured_image":false,"footnotes":""},"categories":[780],"tags":[],"class_list":["post-13634","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-administration-anleitungen"],"acf":[],"_wp_attached_file":null,"_wp_attachment_metadata":null,"litespeed-optimize-size":null,"litespeed-optimize-set":null,"_elementor_source_image_hash":null,"_wp_attachment_image_alt":null,"stockpack_author_name":null,"stockpack_author_url":null,"stockpack_provider":null,"stockpack_image_url":null,"stockpack_license":null,"stockpack_license_url":null,"stockpack_modification":null,"color":null,"original_id":null,"original_url":null,"original_link":null,"unsplash_location":null,"unsplash_sponsor":null,"unsplash_exif":null,"unsplash_attachment_metadata":null,"_elementor_is_screenshot":null,"surfer_file_name":null,"surfer_file_original_url":null,"envato_tk_source_kit":null,"envato_tk_source_index":null,"envato_tk_manifest":null,"envato_tk_folder_name":null,"envato_tk_builder":null,"envato_elements_download_event":null,"_menu_item_type":null,"_menu_item_menu_item_parent":null,"_menu_item_object_id":null,"_menu_item_object":null,"_menu_item_target":null,"_menu_item_classes":null,"_menu_item_xfn":null,"_menu_item_url":null,"_trp_menu_languages":null,"rank_math_primary_category":null,"rank_math_title":null,"inline_featured_image":null,"_yoast_wpseo_primary_category":null,"rank_math_schema_blogposting":null,"rank_math_schema_videoobject":null,"_oembed_049c719bc4a9f89deaead66a7da9fddc":null,"_oembed_time_049c719bc4a9f89d
eaead66a7da9fddc":null,"_yoast_wpseo_focuskw":null,"_yoast_wpseo_linkdex":null,"_oembed_27e3473bf8bec795fbeb3a9d38489348":null,"_oembed_c3b0f6959478faf92a1f343d8f96b19e":null,"_trp_translated_slug_en_us":null,"_wp_desired_post_slug":null,"_yoast_wpseo_title":null,"tldname":null,"tldpreis":null,"tldrubrik":null,"tldpolicylink":null,"tldsize":null,"tldregistrierungsdauer":null,"tldtransfer":null,"tldwhoisprivacy":null,"tldregistrarchange":null,"tldregistrantchange":null,"tldwhoisupdate":null,"tldnameserverupdate":null,"tlddeletesofort":null,"tlddeleteexpire":null,"tldumlaute":null,"tldrestore":null,"tldsubcategory":null,"tldbildname":null,"tldbildurl":null,"tldclean":null,"tldcategory":null,"tldpolicy":null,"tldbesonderheiten":null,"tld_bedeutung":null,"_oembed_d167040d816d8f94c072940c8009f5f8":null,"_oembed_b0a0fa59ef14f8870da2c63f2027d064":null,"_oembed_4792fa4dfb2a8f09ab950a73b7f313ba":null,"_oembed_33ceb1fe54a8ab775d9410abf699878d":null,"_oembed_fd7014d14d919b45ec004937c0db9335":null,"_oembed_21a029d076783ec3e8042698c351bd7e":null,"_oembed_be5ea8a0c7b18e658f08cc571a909452":null,"_oembed_a9ca7a298b19f9b48ec5914e010294d2":null,"_oembed_f8db6b27d08a2bb1f920e7647808899a":null,"_oembed_168ebde5096e77d8a89326519af9e022":null,"_oembed_cdb76f1b345b42743edfe25481b6f98f":null,"_oembed_87b0613611ae54e86e8864265404b0a1":null,"_oembed_27aa0e5cf3f1bb4bc416a4641a5ac273":null,"_oembed_time_27aa0e5cf3f1bb4bc416a4641a5ac273":null,"_tldname":null,"_tldclean":null,"_tldpreis":null,"_tldcategory":null,"_tldsubcategory":null,"_tldpolicy":null,"_tldpolicylink":null,"_tldsize":null,"_tldregistrierungsdauer":null,"_tldtransfer":null,"_tldwhoisprivacy":null,"_tldregistrarchange":null,"_tldregistrantchange":null,"_tldwhoisupdate":null,"_tldnameserverupdate":null,"_tlddeletesofort":null,"_tlddeleteexpire":null,"_tldumlaute":null,"_tldrestore":null,"_tldbildname":null,"_tldbildurl":null,"_tld_bedeutung":null,"_tldbesonderheiten":null,"_oembed_ad96e4112edb9f8ffa35731d4098bc6b":null,"_oembed_83
57e2b8a2575c74ed5978f262a10126":null,"_oembed_3d5fea5103dd0d22ec5d6a33eff7f863":null,"_eael_widget_elements":null,"_oembed_0d8a206f09633e3d62b95a15a4dd0487":null,"_oembed_time_0d8a206f09633e3d62b95a15a4dd0487":null,"_aioseo_description":null,"_eb_attr":null,"_eb_data_table":null,"_oembed_819a879e7da16dd629cfd15a97334c8a":null,"_oembed_time_819a879e7da16dd629cfd15a97334c8a":null,"_acf_changed":null,"_wpcode_auto_insert":null,"_edit_last":null,"_edit_lock":null,"_oembed_e7b913c6c84084ed9702cb4feb012ddd":null,"_oembed_bfde9e10f59a17b85fc8917fa7edf782":null,"_oembed_time_bfde9e10f59a17b85fc8917fa7edf782":null,"_oembed_03514b67990db061d7c4672de26dc514":null,"_oembed_time_03514b67990db061d7c4672de26dc514":null,"rank_math_news_sitemap_robots":null,"rank_math_robots":null,"_eael_post_view_count":"1959","_trp_automatically_translated_slug_ru_ru":null,"_trp_automatically_translated_slug_et":null,"_trp_automatically_translated_slug_lv":null,"_trp_automatically_translated_slug_fr_fr":null,"_trp_automatically_translated_slug_en_us":null,"_wp_old_slug":null,"_trp_automatically_translated_slug_da_dk":null,"_trp_automatically_translated_slug_pl_pl":null,"_trp_automatically_translated_slug_es_es":null,"_trp_automatically_translated_slug_hu_hu":null,"_trp_automatically_translated_slug_fi":null,"_trp_automatically_translated_slug_ja":null,"_trp_automatically_translated_slug_lt_lt":null,"_elementor_edit_mode":null,"_elementor_template_type":null,"_elementor_version":null,"_elementor_pro_version":null,"_wp_page_template":null,"_elementor_page_settings":null,"_elementor_data":null,"_elementor_css":null,"_elementor_conditions":null,"_happyaddons_elements_cache":null,"_oembed_75446120c39305f0da0ccd147f6de9cb":null,"_oembed_time_75446120c39305f0da0ccd147f6de9cb":null,"_oembed_3efb2c3e76a18143e7207993a2a6939a":null,"_oembed_time_3efb2c3e76a18143e7207993a2a6939a":null,"_oembed_59808117857ddf57e478a31d79f76e4d":null,"_oembed_time_59808117857ddf57e478a31d79f76e4d":null,"_oembed_965c5b49aa8d22ce
37dfb3bde0268600":null,"_oembed_time_965c5b49aa8d22ce37dfb3bde0268600":null,"_oembed_81002f7ee3604f645db4ebcfd1912acf":null,"_oembed_time_81002f7ee3604f645db4ebcfd1912acf":null,"_elementor_screenshot":null,"_oembed_7ea3429961cf98fa85da9747683af827":null,"_oembed_time_7ea3429961cf98fa85da9747683af827":null,"_elementor_controls_usage":null,"_elementor_page_assets":[],"_elementor_screenshot_failed":null,"theplus_transient_widgets":null,"_eael_custom_js":null,"_wp_old_date":null,"_trp_automatically_translated_slug_it_it":null,"_trp_automatically_translated_slug_pt_pt":null,"_trp_automatically_translated_slug_zh_cn":null,"_trp_automatically_translated_slug_nl_nl":null,"_trp_automatically_translated_slug_pt_br":null,"_trp_automatically_translated_slug_sv_se":null,"rank_math_analytic_object_id":null,"rank_math_internal_links_processed":null,"_trp_automatically_translated_slug_ro_ro":null,"_trp_automatically_translated_slug_sk_sk":null,"_trp_automatically_translated_slug_bg_bg":null,"_trp_automatically_translated_slug_sl_si":null,"litespeed_vpi_list":null,"litespeed_vpi_list_mobile":null,"rank_math_seo_score":null,"rank_math_contentai_score":null,"ilj_limitincominglinks":null,"ilj_maxincominglinks":null,"ilj_limitoutgoinglinks":null,"ilj_maxoutgoinglinks":null,"ilj_limitlinksperparagraph":null,"ilj_linksperparagraph":null,"ilj_blacklistdefinition":null,"ilj_linkdefinition":null,"_eb_reusable_block_ids":null,"rank_math_focus_keyword":"hetzner rescue 
system","rank_math_og_content_image":null,"_yoast_wpseo_metadesc":null,"_yoast_wpseo_content_score":null,"_yoast_wpseo_focuskeywords":null,"_yoast_wpseo_keywordsynonyms":null,"_yoast_wpseo_estimated-reading-time-minutes":null,"rank_math_description":null,"surfer_last_post_update":null,"surfer_last_post_update_direction":null,"surfer_keywords":null,"surfer_location":null,"surfer_draft_id":null,"surfer_permalink_hash":null,"surfer_scrape_ready":null,"_thumbnail_id":"13627","footnotes":null,"_links":{"self":[{"href":"https:\/\/webhosting.de\/en\/wp-json\/wp\/v2\/posts\/13634","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/webhosting.de\/en\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/webhosting.de\/en\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/webhosting.de\/en\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/webhosting.de\/en\/wp-json\/wp\/v2\/comments?post=13634"}],"version-history":[{"count":0,"href":"https:\/\/webhosting.de\/en\/wp-json\/wp\/v2\/posts\/13634\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/webhosting.de\/en\/wp-json\/wp\/v2\/media\/13627"}],"wp:attachment":[{"href":"https:\/\/webhosting.de\/en\/wp-json\/wp\/v2\/media?parent=13634"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/webhosting.de\/en\/wp-json\/wp\/v2\/categories?post=13634"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/webhosting.de\/en\/wp-json\/wp\/v2\/tags?post=13634"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}