{"id":18833,"date":"2026-04-08T11:49:31","date_gmt":"2026-04-08T09:49:31","guid":{"rendered":"https:\/\/webhosting.de\/tcp-keepalive-einstellungen-hosting-optimierung-serverboost\/"},"modified":"2026-04-08T11:49:31","modified_gmt":"2026-04-08T09:49:31","slug":"tcp-keepalive-settings-hosting-optimization-serverboost","status":"publish","type":"post","link":"https:\/\/webhosting.de\/en\/tcp-keepalive-einstellungen-hosting-optimierung-serverboost\/","title":{"rendered":"TCP Keepalive settings: Optimization in the hosting context"},"content":{"rendered":"<p><strong>TCP Keepalive<\/strong> determines how quickly a server detects and terminates inactive TCP sessions - a control lever with a direct impact on resource consumption, latency and failure behavior in hosting. With suitable idle, interval and probe values, I reduce connection dead spots, prevent NAT drops and keep web applications in <strong>hosting setups<\/strong> reliably accessible.<\/p>\n\n<h2>Key points<\/h2>\n<ul>\n  <li><strong>Parameters<\/strong>: set idle, interval and probes deliberately<\/li>\n  <li><strong>Distinction<\/strong>: TCP Keepalive vs. HTTP Keep-Alive<\/li>\n  <li><strong>Per socket<\/strong>: overrides per service\/Kubernetes pod<\/li>\n  <li><strong>Firewall\/NAT<\/strong>: actively account for idle timeouts<\/li>\n  <li><strong>Monitoring<\/strong>: measurement, load testing, iterative fine-tuning<\/li>\n<\/ul>\n\n\n<figure class=\"wp-block-image size-full is-resized\">\n  <img fetchpriority=\"high\" decoding=\"async\" src=\"https:\/\/webhosting.de\/wp-content\/uploads\/2026\/04\/server-tcp-settings-1283.png\" alt=\"\" width=\"1536\" height=\"1024\"\/>\n<\/figure>\n\n\n<h2>How TCP Keepalive works<\/h2>\n\n<p>I activate <strong>Keepalive<\/strong> at socket or system level so that the stack sends small probes at defined intervals during inactivity. 
After an adjustable waiting time (idle), the system sends the first probe; further probes then follow at the defined interval until the maximum number of attempts is reached. If the remote peer stays silent, I terminate the connection and free file descriptors and buffers in the <strong>kernel<\/strong>. The logic differs clearly from retransmissions, because Keepalive checks the liveness of an otherwise dormant flow. Especially in hosting environments with many simultaneous sessions, this behavior prevents creeping leaks that I would otherwise often only notice under high <strong>load<\/strong>.<\/p>\n\n<h2>Why Keepalive counts in hosting<\/h2>\n\n<p>Faulty clients, mobile networks and aggressive NAT gateways often leave behind <strong>zombie connections<\/strong> that stay open for a long time without keepalive. This costs open sockets, RAM and CPU in accept, worker and proxy processes, which stretches response times. With suitable values I remove these corpses early and keep listeners, backends and upstreams <strong>responsive<\/strong>. The effect is particularly noticeable during peak loads because fewer dead connections fill the queues. I therefore plan Keepalive together with HTTP and TLS timeouts and ensure <strong>coherent<\/strong> interaction across all layers.<\/p>\n\n\n<figure class=\"wp-block-image size-full is-resized\">\n  <img decoding=\"async\" src=\"https:\/\/webhosting.de\/wp-content\/uploads\/2026\/04\/tcp_keepalive_optimierung_3746.png\" alt=\"\" width=\"1536\" height=\"1024\"\/>\n<\/figure>\n\n\n<h2>Sysctl parameters: practical values<\/h2>\n\n<p>Linux ships with very long default values that rarely fit productive <strong>hosting environments<\/strong>. For web servers, I usually set the idle time much shorter in order to clear hanging sessions in good time. I keep the interval between probes moderate so that I detect failures quickly but don't flood the network with checks. 
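<\/p>

<p>For intuition, the worst-case time to declare a peer dead follows directly from the three parameters: the idle wait plus one interval per failed probe. A minimal sketch in plain Python (the function name and values are purely illustrative):<\/p>

```python
def worst_case_detection(idle_s, intvl_s, probes):
    """Seconds from last activity until the kernel gives up on a dead peer:
    idle wait before the first probe, plus one interval per failed probe."""
    return idle_s + intvl_s * probes

# Linux defaults: over two hours until a dead peer is detected
print(worst_case_detection(7200, 75, 9))  # 7875
# tuned hosting values: 15 minutes
print(worst_case_detection(600, 60, 5))   # 900
```

<p>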
I balance the number of probes between false alarms and detection time; fewer probes shorten the time until the <strong>resources<\/strong> are released. For IPv6, I pay attention to the corresponding net.ipv6 variables and keep both protocols consistent.<\/p>\n\n<table>\n  <thead>\n    <tr>\n      <th><strong>Parameter<\/strong><\/th>\n      <th>Default (Linux)<\/th>\n      <th>Hosting recommendation<\/th>\n      <th>Meaning<\/th>\n    <\/tr>\n  <\/thead>\n  <tbody>\n    <tr>\n      <td><strong>tcp_keepalive_time<\/strong><\/td>\n      <td>7200s<\/td>\n      <td>600-1800s<\/td>\n      <td>Idle time before the first probe is sent<\/td>\n    <\/tr>\n    <tr>\n      <td><strong>tcp_keepalive_intvl<\/strong><\/td>\n      <td>75s<\/td>\n      <td>10-60s<\/td>\n      <td>Interval between individual probes<\/td>\n    <\/tr>\n    <tr>\n      <td><strong>tcp_keepalive_probes<\/strong><\/td>\n      <td>9<\/td>\n      <td>3-6<\/td>\n      <td>Maximum failed probes before I close the connection<\/td>\n    <\/tr>\n  <\/tbody>\n<\/table>\n\n<p>I set the base values system-wide and persist them via sysctl so that reboots do not discard the tuning work. In addition, I document the initial values and measure the effects on <strong>error rates<\/strong> and latencies. This is how I keep the balance between fast detection and additional network traffic. I often use the following lines as a starting point and adjust them per workload later:<\/p>\n\n<pre><code># \/etc\/sysctl.d\/99-keepalive.conf\nnet.ipv4.tcp_keepalive_time = 600\nnet.ipv4.tcp_keepalive_intvl = 60\nnet.ipv4.tcp_keepalive_probes = 5\n\n# apply without reboot\nsysctl -p \/etc\/sysctl.d\/99-keepalive.conf\n<\/code><\/pre>\n\n<h2>Per-socket and platform tuning<\/h2>\n\n<p>Global defaults are rarely enough for me; I set <strong>per-socket<\/strong> values per service so that sensitive backends live longer while frontends clean up quickly. In Python, Go or Java, I set SO_KEEPALIVE and the specific TCP options directly on the socket. 
On Linux, I control this via TCP_KEEPIDLE, TCP_KEEPINTVL and TCP_KEEPCNT, while Windows works via registry keys (KeepAliveTime, KeepAliveInterval). In Kubernetes, I override settings per pod or deployment to treat short-lived API gateways differently from long-lived <strong>database<\/strong> proxies. For container setups, I also check the host NAT tables and CNI plugins, because inactive flows are often removed earlier than I would like.<\/p>\n\n<pre><code># Example (Python, Linux)\nimport socket\nsock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)\nsock.setsockopt(socket.SOL_SOCKET, socket.SO_KEEPALIVE, 1)\nsock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPIDLE, 60)\nsock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPINTVL, 30)\nsock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPCNT, 5)\n<\/code><\/pre>\n\n<h2>HTTP Keep-Alive vs. TCP Keepalive<\/h2>\n\n<p>HTTP Keep-Alive keeps connections open for multiple requests, while <strong>TCP<\/strong> Keepalive provides pure liveness checks at the transport level. Both mechanisms complement each other but work with different targets and timers. In HTTP\/2 and HTTP\/3, PING frames partly take over the role of Keepalive, but I still secure the TCP layer additionally. I set HTTP timeouts from the application's point of view, while I align TCP values with the economical release of <strong>resources<\/strong>. 
If you want to dig deeper into the HTTP side, you can find a helpful guide on the <a href=\"https:\/\/webhosting.de\/en\/http-keepalive-timeout-server-performance-configuration\/\">HTTP Keep-Alive Timeout<\/a>.<\/p>\n\n\n<figure class=\"wp-block-image size-full is-resized\">\n  <img decoding=\"async\" src=\"https:\/\/webhosting.de\/wp-content\/uploads\/2026\/04\/tcp-keepalive-optimization-6738.png\" alt=\"\" width=\"1536\" height=\"1024\"\/>\n<\/figure>\n\n\n<h2>Network timeout tuning: practical<\/h2>\n\n<p>For classic web hosting front-ends, I often work with 300s idle, 30-45s interval and 4-6 probes to end inactive sessions quickly and keep <strong>queues<\/strong> lean. Database connections get more patience so that short busy phases do not trigger unnecessary disconnections. In edge or API gateways, I shorten the timeouts further because many connections there are short-lived. I coordinate the values with TLS handshake timeouts, read\/write timeouts and upstream time limits so that there are no contradictions at the layer boundaries. For step-by-step optimization, I follow a compact <a href=\"https:\/\/webhosting.de\/en\/http-keep-alive-tuning-server-load-performance-optimization-flow\/\">Tuning flow<\/a> during maintenance windows.<\/p>\n\n<h2>Firewall, NAT and cloud idle timeouts<\/h2>\n\n<p>Many firewalls and NAT gateways cut inactive flows after 300-900 seconds, which is why I configure <strong>Keepalive<\/strong> so that the first probe arrives before this limit. Otherwise, the application only notices the teardown on the next request and triggers unnecessary retries. In cloud load balancers, I check the TCP or connection idle parameters and compare them with sysctl and proxy values. In anycast or multi-AZ setups, I check whether path changes lead to seemingly dead remote peers and specifically increase the number of probes for these zones. 
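<\/p>

<p>One sanity check I can automate: the first keepalive probe, which fires after the idle time, must land before the shortest NAT or firewall idle limit in the path. A small illustrative helper in Python (function name, margin and the 300s NAT limit are my assumptions):<\/p>

```python
def first_probe_beats_nat(keepalive_idle_s, nat_idle_s, margin_s=30):
    """True if the first keepalive probe fires at least margin_s seconds
    before a NAT/firewall with the given idle timeout drops the flow."""
    return keepalive_idle_s + margin_s <= nat_idle_s

print(first_probe_beats_nat(600, 300))  # False - the first probe comes far too late
print(first_probe_beats_nat(240, 300))  # True
```

<p>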
I document the chain of client, proxy, firewall and backend so that I can pinpoint the <strong>causes<\/strong> of drops quickly.<\/p>\n\n\n<figure class=\"wp-block-image size-full is-resized\">\n  <img loading=\"lazy\" decoding=\"async\" src=\"https:\/\/webhosting.de\/wp-content\/uploads\/2026\/04\/tcp_keepalive_optimierung_4893.png\" alt=\"\" width=\"1536\" height=\"1024\"\/>\n<\/figure>\n\n\n<h2>Integration in web server configuration<\/h2>\n\n<p>Apache, Nginx and HAProxy organize HTTP persistence at the application level, while the operating system delivers <strong>TCP<\/strong> Keepalive. In Apache, I activate KeepAlive, limit KeepAliveRequests and keep the KeepAliveTimeout short so that workers are released promptly. In Nginx, I use a short keepalive_timeout and moderate keepalive_requests for efficient reuse. In HAProxy, I use socket options such as tcpka or system-side defaults so that transport timeouts match the proxy policy. For more in-depth web server aspects, I recommend the <a href=\"https:\/\/webhosting.de\/en\/keep-alive-web-server-performance-tuning-guide\/\">Web Server Tuning Guide<\/a>, which I combine with my TCP adjustments.<\/p>\n\n<h2>Monitoring, tests and metrics<\/h2>\n\n<p>I measure the effect of each adjustment and do not rely on <strong>gut feeling<\/strong>. ss, netstat and lsof show me how many ESTABLISHED, FIN_WAIT and TIME_WAIT connections are present and whether leaks are growing. In the metrics, I monitor aborts, RSTs, retransmissions, P95\/P99 latency and queue lengths; if a value hits its limits, I specifically adjust idle, interval or probes. I use synthetic load tests (e.g. ab, wrk, Locust) to simulate real usage patterns and verify whether the tuning meets the target metrics. 
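<\/p>

<p>The keepalive timer that <code>ss -o<\/code> prints per connection can also be watched programmatically. A sketch in Python (the sample line is hypothetical, and ss output formatting varies between versions):<\/p>

```python
import re

# hypothetical single line of `ss -tno` output with timer info
line = "ESTAB 0 0 10.0.0.5:443 203.0.113.7:52110 timer:(keepalive,29sec,0)"

# timer:(NAME,EXPIRES,RETRIES) - extract the three fields
m = re.search(r"timer:\((\w+),([^,]+),(\d+)\)", line)
name, expires_in, retries = m.group(1), m.group(2), int(m.group(3))
print(name, expires_in, retries)  # keepalive 29sec 0
```

<p>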
I roll out changes incrementally and compare time series before distributing <strong>global<\/strong> defaults across all hosts.<\/p>\n\n\n<figure class=\"wp-block-image size-full is-resized\">\n  <img loading=\"lazy\" decoding=\"async\" src=\"https:\/\/webhosting.de\/wp-content\/uploads\/2026\/04\/tcp_keepalive_0815.png\" alt=\"\" width=\"1536\" height=\"1024\"\/>\n<\/figure>\n\n\n<h2>Error patterns and troubleshooting<\/h2>\n\n<p>If I set intervals too short, I inflate the <strong>network traffic<\/strong> and increase the risk of interpreting temporary faults as failures. With too few probes, I close live connections in slow networks, which users see as sporadic error messages. Idle times that are too long, on the other hand, lead to socket congestion and growing accept backlogs. I check logs for RSTs from client\/server, ECONNRESET and ETIMEDOUT to identify the direction of the failure. If mainly mobile users are affected, I adjust probes and intervals, because <strong>dead spots<\/strong> and sleep states occur more frequently there.<\/p>\n\n<h2>Secure defaults for different workloads<\/h2>\n\n<p>I start with conservative but production-suitable values and refine them after measuring the <strong>workload<\/strong>. Web APIs usually need short idle times, databases significantly longer ones. Proxies between zones or providers benefit from slightly more probes to cope with path flutter. For interactive applications, I reduce the interval and increase the number of probes so that I notice errors more quickly without closing connections prematurely. 
The table gives me compact guidance, which I adjust during operation.<\/p>\n\n<table>\n  <thead>\n    <tr>\n      <th><strong>Server type<\/strong><\/th>\n      <th>Idle<\/th>\n      <th>Interval<\/th>\n      <th>Probes<\/th>\n      <th>Note<\/th>\n    <\/tr>\n  <\/thead>\n  <tbody>\n    <tr>\n      <td><strong>Web hosting front end<\/strong><\/td>\n      <td>300-600s<\/td>\n      <td>30-45s<\/td>\n      <td>4-6<\/td>\n      <td>Short sessions, high volume<\/td>\n    <\/tr>\n    <tr>\n      <td><strong>API gateway<\/strong><\/td>\n      <td>180-300s<\/td>\n      <td>20-30s<\/td>\n      <td>5-6<\/td>\n      <td>Many idle phases, clear quickly<\/td>\n    <\/tr>\n    <tr>\n      <td><strong>Database proxy<\/strong><\/td>\n      <td>900-1800s<\/td>\n      <td>45-60s<\/td>\n      <td>3-5<\/td>\n      <td>Connection setup is expensive; be patient<\/td>\n    <\/tr>\n    <tr>\n      <td><strong>Kubernetes Pod<\/strong><\/td>\n      <td>600-900s<\/td>\n      <td>30-45s<\/td>\n      <td>4-5<\/td>\n      <td>Synchronize with CNI\/LB timeouts<\/td>\n    <\/tr>\n  <\/tbody>\n<\/table>\n\n\n<figure class=\"wp-block-image size-full is-resized\">\n  <img loading=\"lazy\" decoding=\"async\" src=\"https:\/\/webhosting.de\/wp-content\/uploads\/2026\/04\/tcp-setup-9182.png\" alt=\"\" width=\"1536\" height=\"1024\"\/>\n<\/figure>\n\n\n<h2>TCP_USER_TIMEOUT and retransmission backoff<\/h2>\n<p>In addition to Keepalive, for data-carrying connections I specifically use <strong>TCP_USER_TIMEOUT<\/strong> to control how long unacknowledged data may remain in the socket before the connection is actively terminated. This is particularly important for proxies and APIs, which should not hang for minutes on a stalled peer. In contrast to Keepalive (which checks liveness during inactivity), TCP_USER_TIMEOUT takes effect when data is flowing but no ACKs are returned - for example in the event of asymmetric faults. 
I set it <em>per socket<\/em> slightly below the application read\/write timeouts so that the transport level does not wait longer than the app logic in the event of an error.<\/p>\n\n<pre><code>\/\/ Example (Go, Linux) - Keepalive and TCP_USER_TIMEOUT\nd := net.Dialer{\n    Timeout:   5 * time.Second,\n    KeepAlive: 30 * time.Second,\n    Control: func(network, address string, c syscall.RawConn) error {\n        var sockErr error\n        err := c.Control(func(fd uintptr) {\n            \/\/ allow at most 20s (in ms) of unacknowledged data\n            sockErr = syscall.SetsockoptInt(int(fd), syscall.IPPROTO_TCP, 0x12, 20000) \/\/ 0x12 = TCP_USER_TIMEOUT\n        })\n        if err != nil {\n            return err\n        }\n        return sockErr\n    },\n}\nconn, err := d.Dial(\"tcp\", \"example:443\")\n<\/code><\/pre>\n\n<p>I also keep in mind that TCP backoff (RTO growth) and retries (<strong>tcp_retries2<\/strong>) influence behavior under packet loss. User timeouts that are too short can cause aborts in rough networks even though the remote peer is reachable. I therefore only set them tightly where I deliberately aim for fast error detection (e.g. in the edge proxy).<\/p>\n\n<h2>IPv6 and operating system peculiarities<\/h2>\n<p>The same per-socket options (TCP_KEEPIDLE, TCP_KEEPINTVL, TCP_KEEPCNT) apply to IPv6. Depending on the kernel version, the global defaults apply to v4 and v6 together; I check this with <code>ss -o<\/code> against real connections. Under Windows, I adjust the defaults via the registry (KeepAliveTime, KeepAliveInterval) and use SIO_KEEPALIVE_VALS for individual sockets. On BSD derivatives the options are sometimes named differently, but the semantics remain the same. It is important to verify per platform whether application overrides actually beat the system defaults and whether container runtimes inherit them correctly across namespaces.<\/p>\n\n<h2>WebSockets, gRPC and streaming<\/h2>\n<p>Long-lived streams (WebSocket, gRPC, server-sent events) benefit particularly from well-dosed keepalives. 
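<\/p>

<p>To coordinate the application and transport levels, I can derive the application-level ping period from the tightest idle limit in the path. A hypothetical helper in Python (the function name and the 0.5 safety factor are my assumptions, not fixed rules):<\/p>

```python
def app_ping_interval_s(nat_idle_s, tcp_keepalive_idle_s, safety=0.5):
    """Application-level ping period (WebSocket/gRPC) that fires well before
    both the NAT idle timeout and the TCP keepalive idle timer."""
    return max(1, int(min(nat_idle_s, tcp_keepalive_idle_s) * safety))

# 300s NAT limit, 600s TCP keepalive idle -> ping every 150s
print(app_ping_interval_s(300, 600))  # 150
```

<p>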
I secure them at two levels: the application sends periodic pings\/pongs (e.g. at the WebSocket level), while the TCP layer backs this up with moderate intervals. This prevents NATs from silently removing flows. For mobile clients, I increase the number of probes and choose longer intervals to accommodate energy-saving modes. For gRPC and HTTP\/2, I coordinate HTTP\/2 PINGs with TCP keepalives so that I don't probe twice too aggressively and drain batteries.<\/p>\n\n<h2>Conntrack, kernel and NAT tables<\/h2>\n<p>On Linux hosts with active connection tracking, an overly short <strong>nf_conntrack<\/strong> timeout can drop flows early - even if the application expects them to live longer. I therefore synchronize the relevant timers (e.g. <em>nf_conntrack_tcp_timeout_established<\/em>) with my keepalive intervals so that a probe reliably arrives before the conntrack deadline. On nodes with heavy NAT (NodePort, egress NAT), I plan the size of the conntrack table and the hash buckets to avoid global pressure under load. Clean keepalive settings measurably relieve these tables.<\/p>\n\n<h2>Example: Proxy and web server units<\/h2>\n<p>In HAProxy, I specifically activate transport-side keepalive and keep the HTTP timeouts consistent:<\/p>\n<pre><code># Excerpt (HAProxy)\ndefaults\n  timeout client 60s\n  timeout server 60s\n  timeout connect 5s\n  option http-keep-alive\n  option tcpka # Enable TCP keepalive (use OS defaults)\n\nbackend app\n  server s1 10.0.0.10:8080 check inter 2s fall 3 rise 2\n<\/code><\/pre>\n<p>In Nginx, I keep reuse efficient without tying up workers:<\/p>\n<pre><code># Excerpt (Nginx)\nkeepalive_timeout 30s;\nkeepalive_requests 1000;\nproxy_read_timeout 60s;\nproxy_send_timeout 60s;\n<\/code><\/pre>\n<p>I make sure that transport and application timeouts fit together logically: preventing \u201cdead lines\u201d is the job of TCP Keepalive, while application timeouts map business logic and user expectations.<\/p>\n\n<h2>Observability in practice<\/h2>\n<p>I verify the work of 
Keepalive live on the host:<\/p>\n<ul>\n  <li><strong>ss<\/strong>: <code>ss -tino 'sport = :443'<\/code> shows with <code>-o<\/code> the timer (e.g. <em>timer:(keepalive,30sec,0)<\/em>), the retry count and the send\/recv queues.<\/li>\n  <li><strong>tcpdump<\/strong>: I filter on a dormant connection and watch for periodic small packets\/ACKs during idle phases. This tells me whether the probes refresh the NAT in time.<\/li>\n  <li><strong>Logs\/Metrics<\/strong>: I correlate RST\/timeout spikes with changes to idle\/interval\/probes. A drop in open sockets at constant load indicates successful cleanup.<\/li>\n<\/ul>\n<p>For reproducible tests, I simulate connection failures (e.g. interface down, iptables DROP) and observe how quickly workers\/processes release resources and whether retries work properly.<\/p>\n\n<h2>Resource and capacity planning<\/h2>\n<p>Keepalive is only part of the equilibrium. I make sure that ulimit\/nofile, <strong>fs.file-max<\/strong>, <strong>net.core.somaxconn<\/strong> and <strong>tcp_max_syn_backlog<\/strong> match my connection counts. Idle times that are too long conceal deficits here, while values that are too short bring apparent stability but hit users hard. I plan buffers (Recv-\/Send-Q) and FD reserves against load scenarios and measure how many simultaneous idle connections my nodes can really sustain before GC, workers and accept queues suffer.<\/p>\n\n<h2>When I do not (only) rely on TCP Keepalive<\/h2>\n<p>For purely internal traffic without NAT, a low number of connections and clear application timeouts, I sometimes dispense with aggressive keepalives and leave detection to the application (e.g. heartbeats at the protocol level). Conversely, in edge and mobile scenarios, I prioritize short intervals and few probes and add HTTP\/2 PINGs or WebSocket pings. 
It is important that I never tune in isolation: Keepalive values must harmonize with retries, circuit breakers and backoff strategies so that I detect errors quickly without making the system flutter.<\/p>\n\n<h2>Rollout strategy and validation<\/h2>\n<p>I roll out new defaults step by step: canary hosts first, then one AZ\/zone, then the entire fleet. Before\/after comparisons include open connections, CPU in kernel mode, P95\/P99 latency, error rates and retransmissions. In Kubernetes, I test via pod annotations or init containers that set sysctls in the pod namespace before changing anything node-wide. This way I minimize risk and ensure reproducible results - not just perceived improvements.<\/p>\n\n<h2>Briefly summarized<\/h2>\n\n<p>With well-thought-out <strong>TCP<\/strong> Keepalive settings, I remove inactive connections early, reduce resource pressure and stabilize response times. I choose short idle times for frontends, longer values for stateful backends, and secure myself with moderate intervals and a low-to-medium probe count. I coordinate the values with HTTP, TLS and proxy timeouts and keep them below firewall and NAT idle limits. After each adjustment, I measure the actual effects on latency, errors and CPU instead of relying on gut feeling. 
This is how I achieve a <strong>reliable<\/strong> platform that copes better with peak loads and serves user flows evenly.<\/p>","protected":false},"excerpt":{"rendered":"<p>TCP Keepalive settings optimize hosting network behavior and network timeout tuning for better performance in webhosting.<\/p>","protected":false},"author":1,"featured_media":18826,"comment_status":"","ping_status":"","sticky":false,"template":"","format":"standard","meta":{"_acf_changed":false,"_crdt_document":"","inline_featured_image":false,"footnotes":""},"categories":[676],"tags":[],"class_list":["post-18833","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-server_vm"],"acf":[],"_wp_attached_file":null,"_wp_attachment_metadata":null,"litespeed-optimize-size":null,"litespeed-optimize-set":null,"_elementor_source_image_hash":null,"_wp_attachment_image_alt":null,"stockpack_author_name":null,"stockpack_author_url":null,"stockpack_provider":null,"stockpack_image_url":null,"stockpack_license":null,"stockpack_license_url":null,"stockpack_modification":null,"color":null,"original_id":null,"original_url":null,"original_link":null,"unsplash_location":null,"unsplash_sponsor":null,"unsplash_exif":null,"unsplash_attachment_metadata":null,"_elementor_is_screenshot":null,"surfer_file_name":null,"surfer_file_original_url":null,"envato_tk_source_kit":null,"envato_tk_source_index":null,"envato_tk_manifest":null,"envato_tk_folder_name":null,"envato_tk_builder":null,"envato_elements_download_event":null,"_menu_item_type":null,"_menu_item_menu_item_parent":null,"_menu_item_object_id":null,"_menu_item_object":null,"_menu_item_target":null,"_menu_item_classes":null,"_menu_item_xfn":null,"_menu_item_url":null,"_trp_menu_languages":null,"rank_math_primary_category":null,"rank_math_title":null,"inline_featured_image":null,"_yoast_wpseo_primary_category":null,"rank_math_schema_blogposting":null,"rank_math_schema_videoobject":null,"_oembed_049c719bc4a9f89deaead66a7da9fdd
c":null,"_oembed_time_049c719bc4a9f89deaead66a7da9fddc":null,"_yoast_wpseo_focuskw":null,"_yoast_wpseo_linkdex":null,"_oembed_27e3473bf8bec795fbeb3a9d38489348":null,"_oembed_c3b0f6959478faf92a1f343d8f96b19e":null,"_trp_translated_slug_en_us":null,"_wp_desired_post_slug":null,"_yoast_wpseo_title":null,"tldname":null,"tldpreis":null,"tldrubrik":null,"tldpolicylink":null,"tldsize":null,"tldregistrierungsdauer":null,"tldtransfer":null,"tldwhoisprivacy":null,"tldregistrarchange":null,"tldregistrantchange":null,"tldwhoisupdate":null,"tldnameserverupdate":null,"tlddeletesofort":null,"tlddeleteexpire":null,"tldumlaute":null,"tldrestore":null,"tldsubcategory":null,"tldbildname":null,"tldbildurl":null,"tldclean":null,"tldcategory":null,"tldpolicy":null,"tldbesonderheiten":null,"tld_bedeutung":null,"_oembed_d167040d816d8f94c072940c8009f5f8":null,"_oembed_b0a0fa59ef14f8870da2c63f2027d064":null,"_oembed_4792fa4dfb2a8f09ab950a73b7f313ba":null,"_oembed_33ceb1fe54a8ab775d9410abf699878d":null,"_oembed_fd7014d14d919b45ec004937c0db9335":null,"_oembed_21a029d076783ec3e8042698c351bd7e":null,"_oembed_be5ea8a0c7b18e658f08cc571a909452":null,"_oembed_a9ca7a298b19f9b48ec5914e010294d2":null,"_oembed_f8db6b27d08a2bb1f920e7647808899a":null,"_oembed_168ebde5096e77d8a89326519af9e022":null,"_oembed_cdb76f1b345b42743edfe25481b6f98f":null,"_oembed_87b0613611ae54e86e8864265404b0a1":null,"_oembed_27aa0e5cf3f1bb4bc416a4641a5ac273":null,"_oembed_time_27aa0e5cf3f1bb4bc416a4641a5ac273":null,"_tldname":null,"_tldclean":null,"_tldpreis":null,"_tldcategory":null,"_tldsubcategory":null,"_tldpolicy":null,"_tldpolicylink":null,"_tldsize":null,"_tldregistrierungsdauer":null,"_tldtransfer":null,"_tldwhoisprivacy":null,"_tldregistrarchange":null,"_tldregistrantchange":null,"_tldwhoisupdate":null,"_tldnameserverupdate":null,"_tlddeletesofort":null,"_tlddeleteexpire":null,"_tldumlaute":null,"_tldrestore":null,"_tldbildname":null,"_tldbildurl":null,"_tld_bedeutung":null,"_tldbesonderheiten":null,"_oembed_ad96e4112edb
9f8ffa35731d4098bc6b":null,"_oembed_8357e2b8a2575c74ed5978f262a10126":null,"_oembed_3d5fea5103dd0d22ec5d6a33eff7f863":null,"_eael_widget_elements":null,"_oembed_0d8a206f09633e3d62b95a15a4dd0487":null,"_oembed_time_0d8a206f09633e3d62b95a15a4dd0487":null,"_aioseo_description":null,"_eb_attr":null,"_eb_data_table":null,"_oembed_819a879e7da16dd629cfd15a97334c8a":null,"_oembed_time_819a879e7da16dd629cfd15a97334c8a":null,"_acf_changed":null,"_wpcode_auto_insert":null,"_edit_last":null,"_edit_lock":null,"_oembed_e7b913c6c84084ed9702cb4feb012ddd":null,"_oembed_bfde9e10f59a17b85fc8917fa7edf782":null,"_oembed_time_bfde9e10f59a17b85fc8917fa7edf782":null,"_oembed_03514b67990db061d7c4672de26dc514":null,"_oembed_time_03514b67990db061d7c4672de26dc514":null,"rank_math_news_sitemap_robots":null,"rank_math_robots":null,"_eael_post_view_count":"515","_trp_automatically_translated_slug_ru_ru":null,"_trp_automatically_translated_slug_et":null,"_trp_automatically_translated_slug_lv":null,"_trp_automatically_translated_slug_fr_fr":null,"_trp_automatically_translated_slug_en_us":null,"_wp_old_slug":null,"_trp_automatically_translated_slug_da_dk":null,"_trp_automatically_translated_slug_pl_pl":null,"_trp_automatically_translated_slug_es_es":null,"_trp_automatically_translated_slug_hu_hu":null,"_trp_automatically_translated_slug_fi":null,"_trp_automatically_translated_slug_ja":null,"_trp_automatically_translated_slug_lt_lt":null,"_elementor_edit_mode":null,"_elementor_template_type":null,"_elementor_version":null,"_elementor_pro_version":null,"_wp_page_template":null,"_elementor_page_settings":null,"_elementor_data":null,"_elementor_css":null,"_elementor_conditions":null,"_happyaddons_elements_cache":null,"_oembed_75446120c39305f0da0ccd147f6de9cb":null,"_oembed_time_75446120c39305f0da0ccd147f6de9cb":null,"_oembed_3efb2c3e76a18143e7207993a2a6939a":null,"_oembed_time_3efb2c3e76a18143e7207993a2a6939a":null,"_oembed_59808117857ddf57e478a31d79f76e4d":null,"_oembed_time_59808117857ddf57e478a31d79f
76e4d":null,"_oembed_965c5b49aa8d22ce37dfb3bde0268600":null,"_oembed_time_965c5b49aa8d22ce37dfb3bde0268600":null,"_oembed_81002f7ee3604f645db4ebcfd1912acf":null,"_oembed_time_81002f7ee3604f645db4ebcfd1912acf":null,"_elementor_screenshot":null,"_oembed_7ea3429961cf98fa85da9747683af827":null,"_oembed_time_7ea3429961cf98fa85da9747683af827":null,"_elementor_controls_usage":null,"_elementor_page_assets":[],"_elementor_screenshot_failed":null,"theplus_transient_widgets":null,"_eael_custom_js":null,"_wp_old_date":null,"_trp_automatically_translated_slug_it_it":null,"_trp_automatically_translated_slug_pt_pt":null,"_trp_automatically_translated_slug_zh_cn":null,"_trp_automatically_translated_slug_nl_nl":null,"_trp_automatically_translated_slug_pt_br":null,"_trp_automatically_translated_slug_sv_se":null,"rank_math_analytic_object_id":null,"rank_math_internal_links_processed":"1","_trp_automatically_translated_slug_ro_ro":null,"_trp_automatically_translated_slug_sk_sk":null,"_trp_automatically_translated_slug_bg_bg":null,"_trp_automatically_translated_slug_sl_si":null,"litespeed_vpi_list":null,"litespeed_vpi_list_mobile":null,"rank_math_seo_score":null,"rank_math_contentai_score":null,"ilj_limitincominglinks":null,"ilj_maxincominglinks":null,"ilj_limitoutgoinglinks":null,"ilj_maxoutgoinglinks":null,"ilj_limitlinksperparagraph":null,"ilj_linksperparagraph":null,"ilj_blacklistdefinition":null,"ilj_linkdefinition":null,"_eb_reusable_block_ids":null,"rank_math_focus_keyword":"TCP 
Keepalive","rank_math_og_content_image":null,"_yoast_wpseo_metadesc":null,"_yoast_wpseo_content_score":null,"_yoast_wpseo_focuskeywords":null,"_yoast_wpseo_keywordsynonyms":null,"_yoast_wpseo_estimated-reading-time-minutes":null,"rank_math_description":null,"surfer_last_post_update":null,"surfer_last_post_update_direction":null,"surfer_keywords":null,"surfer_location":null,"surfer_draft_id":null,"surfer_permalink_hash":null,"surfer_scrape_ready":null,"_thumbnail_id":"18826","footnotes":null,"_links":{"self":[{"href":"https:\/\/webhosting.de\/en\/wp-json\/wp\/v2\/posts\/18833","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/webhosting.de\/en\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/webhosting.de\/en\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/webhosting.de\/en\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/webhosting.de\/en\/wp-json\/wp\/v2\/comments?post=18833"}],"version-history":[{"count":0,"href":"https:\/\/webhosting.de\/en\/wp-json\/wp\/v2\/posts\/18833\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/webhosting.de\/en\/wp-json\/wp\/v2\/media\/18826"}],"wp:attachment":[{"href":"https:\/\/webhosting.de\/en\/wp-json\/wp\/v2\/media?parent=18833"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/webhosting.de\/en\/wp-json\/wp\/v2\/categories?post=18833"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/webhosting.de\/en\/wp-json\/wp\/v2\/tags?post=18833"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}