{"id":15929,"date":"2025-12-09T12:10:03","date_gmt":"2025-12-09T11:10:03","guid":{"rendered":"https:\/\/webhosting.de\/asynchrone-php-tasks-mit-worker-queues-cronjobs-skalierung-smartrun\/"},"modified":"2025-12-09T12:10:03","modified_gmt":"2025-12-09T11:10:03","slug":"asynchronous-php-tasks-with-worker-queues-cron-jobs-scaling-smartrun","status":"publish","type":"post","link":"https:\/\/webhosting.de\/en\/asynchrone-php-tasks-mit-worker-queues-cronjobs-skalierung-smartrun\/","title":{"rendered":"Asynchronous PHP tasks with worker queues: When cron jobs are no longer sufficient"},"content":{"rendered":"<p>Asynchronous PHP tasks solve typical bottlenecks when cron jobs cause peak loads, long runtimes, and a lack of transparency. I'll show you how, with <strong>asynchronous PHP<\/strong>, queues, and workers, you can offload web requests, scale workloads, and absorb outages without frustration.<\/p>\n\n<h2>Key points<\/h2>\n\n<p>To begin with, I will summarize the key <strong>basics<\/strong> on which I base this article and which I apply in my daily practice.<\/p>\n<ul>\n  <li><strong>Decoupling<\/strong> of requests and jobs: web requests remain fast, jobs run in the background.<\/li>\n  <li><strong>Scaling<\/strong> via worker pools: more instances, less waiting time.<\/li>\n  <li><strong>Reliability<\/strong> through retries: failed tasks are restarted.<\/li>\n  <li><strong>Transparency<\/strong> through monitoring: queue lengths, runtimes, and error rates at a glance.<\/li>\n  <li><strong>Separation<\/strong> by workload: short, default, and long, each with appropriate limits.<\/li>\n<\/ul>\n\n\n<figure class=\"wp-block-image size-full is-resized\">\n  <img fetchpriority=\"high\" decoding=\"async\" src=\"https:\/\/webhosting.de\/wp-content\/uploads\/2025\/12\/php-workerqueues-2874.png\" alt=\"\" width=\"1536\" height=\"1024\"\/>\n<\/figure>\n\n\n<h2>Why cron jobs are no longer sufficient<\/h2>\n\n<p>A cron job starts strictly on schedule, not on a real 
<strong>event<\/strong>. As soon as users trigger something, I want to respond immediately instead of waiting until the next full minute. When many cron jobs run simultaneously, a load peak occurs that briefly overloads the database, CPU, and I\/O. Parallelism remains limited, and it is difficult to map fine-grained priorities. With queues, I push tasks into a queue immediately, let multiple workers run in parallel, and keep the web interface <strong>responsive<\/strong>. Those who use WordPress also benefit from <a href=\"https:\/\/webhosting.de\/en\/wp-cron-understand-optimize-wordpress-task-management-expert\/\">understanding WP-Cron<\/a> and configuring it cleanly so that time-based schedules reliably move into the queue.<\/p>\n\n<h2>Asynchronous processing: job, queue, worker explained briefly<\/h2>\n\n<p>I wrap expensive tasks in a clearly defined <strong>job<\/strong> that describes what needs to be done, including data references. This job ends up in a queue, which I use as a buffer against peak loads and which serves multiple consumers. A worker is a persistent process that reads jobs from the queue, executes them, and confirms the result. If a worker fails, the job remains in the queue and can be processed later by another instance. This loose coupling makes the application as a whole <strong>fault-tolerant<\/strong> and ensures consistent response times in the front end.<\/p>\n\n\n<figure class=\"wp-block-image size-full is-resized\">\n  <img decoding=\"async\" src=\"https:\/\/webhosting.de\/wp-content\/uploads\/2025\/12\/phpworkerqueuesmeeting5623.png\" alt=\"\" width=\"1536\" height=\"1024\"\/>\n<\/figure>\n\n\n<h2>How queues and workers function in the PHP environment<\/h2>\n\n<p>In PHP, I define a job as a simple class or as a serializable <strong>payload<\/strong> with a handler. The queue can be a database table, Redis, RabbitMQ, SQS, or Kafka, depending on size and latency requirements. 
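<\/p>

<p>As a minimal sketch of that idea (class and function names here are illustrative, not from any particular framework), a job can be a small serializable payload plus a matching handler:<\/p>

```php
<?php
// Sketch: a job as a serializable payload plus a handler.
// All names are illustrative; real frameworks define their own interfaces.

interface JobHandler
{
    public function handle(array $payload): void;
}

final class SendWelcomeEmailHandler implements JobHandler
{
    public function handle(array $payload): void
    {
        // In a real system: load the user by ID here and send the email.
        echo 'sending welcome email to user ' . $payload['user_id'] . PHP_EOL;
    }
}

// Producer side: only references (IDs), a type, and a version cross the queue.
$payload = ['type' => 'send_welcome_email', 'version' => 1, 'user_id' => 42];
$encoded = json_encode($payload);           // what actually sits in the queue

// Worker side: decode the payload and dispatch it to the matching handler.
$job = json_decode($encoded, true);
(new SendWelcomeEmailHandler())->handle($job);
```

<p>The queue backend only ever sees the encoded payload; the handler class lives in the application and can evolve independently.<\/p>

<p>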
Worker processes run independently, often managed by supervisord, systemd, or as container services, and continuously fetch jobs. I use ACK\/NACK mechanisms to clearly signal successful and failed processing. It remains important that I adapt the workers' <strong>throughput<\/strong> to the expected job volume; otherwise the queue grows unchecked.<\/p>\n\n<h2>PHP workers in hosting environments: balance instead of bottlenecks<\/h2>\n\n<p>Too few PHP workers cause backlogs; too many strain CPU and RAM and slow everything down, including <strong>web requests<\/strong>. I plan worker counts and concurrency separately for each queue so that short tasks don't get stuck behind long reports. I also set memory limits and regular restarts to catch leaks. If you're unsure about limits, CPU cores, and concurrency, read my concise <a href=\"https:\/\/webhosting.de\/en\/php-workers-hosting-bottleneck-guide-balance\/\">guide to PHP workers<\/a> with typical balance strategies. This balance ultimately creates the necessary <strong>predictability<\/strong> for growth and consistent response times.<\/p>\n\n\n<figure class=\"wp-block-image size-full is-resized\">\n  <img decoding=\"async\" src=\"https:\/\/webhosting.de\/wp-content\/uploads\/2025\/12\/asynchrone-php-tasks-workerqueue-4287.png\" alt=\"\" width=\"1536\" height=\"1024\"\/>\n<\/figure>\n\n\n<h2>Timeouts, retries, and idempotence: ensuring reliable processing<\/h2>\n\n<p>I assign each job a <strong>timeout<\/strong> so that no worker gets stuck on a defective task indefinitely. The broker receives a visibility timeout that is slightly longer than the maximum job duration so that a task does not appear twice by mistake. Since many systems use \u201cat least once\u201d delivery, I implement idempotent handlers: duplicate calls do not result in duplicate emails or payments. I use backoff for retries to avoid overloading external APIs. 
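<\/p>

<p>The backoff just mentioned can be computed with a small pure function; this is a sketch (the base delay and cap are arbitrary example values) that doubles the wait per attempt up to a ceiling:<\/p>

```php
<?php
// Exponential backoff: the delay doubles with each attempt, capped at $maxSeconds.
// The base of 2 s and cap of 300 s are example values, not recommendations.
// Production setups usually add random jitter to avoid synchronized retries.
function backoffDelay(int $attempt, int $baseSeconds = 2, int $maxSeconds = 300): int
{
    $delay = $baseSeconds * (2 ** max(0, $attempt - 1));
    return min($delay, $maxSeconds);
}

// Attempts 1..5 wait 2, 4, 8, 16, 32 seconds; attempt 10 hits the 300 s cap.
foreach ([1, 2, 3, 4, 5, 10] as $attempt) {
    echo 'attempt ' . $attempt . ' => ' . backoffDelay($attempt) . 's' . PHP_EOL;
}
```

<p>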
This is how I keep the <strong>error rate<\/strong> low and can diagnose problems accurately.<\/p>\n\n<h2>Separate workloads: short, default, and long<\/h2>\n\n<p>I create separate queues for short, medium, and long jobs so that an export doesn't block ten notifications and leave <strong>users<\/strong> waiting. Each queue gets its own worker pool with appropriate limits for runtime, concurrency, and memory. Short tasks benefit from higher parallelism and strict timeouts, while long processes get more CPU and longer runtimes. I control priorities by distributing workers across the queues. This clear separation ensures predictable <strong>latencies<\/strong> throughout the entire system.<\/p>\n\n\n<figure class=\"wp-block-image size-full is-resized\">\n  <img loading=\"lazy\" decoding=\"async\" src=\"https:\/\/webhosting.de\/wp-content\/uploads\/2025\/12\/phpworkerqueuesnacht4327.png\" alt=\"\" width=\"1536\" height=\"1024\"\/>\n<\/figure>\n\n\n<h2>Queue options compared: when which system is suitable<\/h2>\n\n<p>I deliberately choose the queue based on latency, persistence, operations, and growth path so that I don't have to migrate later at great expense and <strong>scaling<\/strong> remains under control.<\/p>\n\n<table>\n  <thead>\n    <tr>\n      <th>Queue system<\/th>\n      <th>Use case<\/th>\n      <th>Latency<\/th>\n      <th>Features<\/th>\n    <\/tr>\n  <\/thead>\n  <tbody>\n    <tr>\n      <td>Database (MySQL\/PostgreSQL)<\/td>\n      <td>Small setups, easy start<\/td>\n      <td>Medium<\/td>\n      <td>Easy to use, but quickly becomes a <strong>bottleneck<\/strong> under high load<\/td>\n    <\/tr>\n    <tr>\n      <td>Redis<\/td>\n      <td>Small to medium load<\/td>\n      <td>Low<\/td>\n      <td>Very fast in RAM, needs careful <strong>configuration<\/strong> for reliability<\/td>\n    <\/tr>\n    <tr>\n      <td>RabbitMQ \/ Amazon SQS \/ Kafka<\/td>\n      <td>Large, distributed systems<\/td>\n      <td>Low to medium<\/td>\n      <td>Extensive features, good 
<strong>scaling<\/strong>, higher operational overhead<\/td>\n    <\/tr>\n  <\/tbody>\n<\/table>\n\n<h2>Using Redis correctly \u2013 avoiding common pitfalls<\/h2>\n\n<p>Redis feels lightning fast, but incorrect settings or unsuitable data structures lead to surprising <strong>wait times<\/strong>. I pay attention to AOF\/RDB strategies, network latency, oversized payloads, and blocking commands. I also separate caching and queue workloads so that cache spikes don't slow down job retrieval. For a compact checklist, this guide to <a href=\"https:\/\/webhosting.de\/en\/why-redis-is-slower-than-expected-typical-misconfigurations-cacheopt\/\">Redis misconfigurations<\/a> is helpful. If set up correctly, Redis provides a fast and reliable <strong>queue<\/strong> for many applications.<\/p>\n\n\n<figure class=\"wp-block-image size-full is-resized\">\n  <img loading=\"lazy\" decoding=\"async\" src=\"https:\/\/webhosting.de\/wp-content\/uploads\/2025\/12\/phpworkerqueue6543.png\" alt=\"\" width=\"1536\" height=\"1024\"\/>\n<\/figure>\n\n\n<h2>Monitoring and scaling in practice<\/h2>\n\n<p>I measure the queue length over time, because growing <strong>backlogs<\/strong> signal a lack of worker resources. The average job duration helps to set realistic timeouts and plan capacity. Error rates and the number of retries show me when external dependencies or code paths are unstable. In containers, I scale workers automatically based on CPU and queue metrics, while smaller setups can manage with simple scripts. Visibility remains crucial, because only numbers provide a sound basis for <strong>decisions<\/strong>.<\/p>\n\n<h2>Cron plus queue: clear division of roles instead of competition<\/h2>\n\n<p>I use cron as a clock that schedules time-controlled jobs, while workers do the real <strong>work<\/strong>. This prevents massive load peaks at the top of each minute, and spontaneous events are handled immediately via enqueued jobs. 
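<\/p>

<p>Sketched in code (the plain array stands in for Redis or a broker, and all names are my own), the division of roles looks like this: the cron-triggered script only enqueues, while a worker drains the queue:<\/p>

```php
<?php
// Sketch: cron acts as a clock that only enqueues; workers do the real work.
// The plain array stands in for Redis, RabbitMQ, or SQS.

$queue = [];

// Invoked by cron (e.g. every 5 minutes): cheap and fast, no heavy lifting.
function scheduleReports(array &$queue): void
{
    foreach ([101, 102, 103] as $reportId) {       // report IDs are illustrative
        $queue[] = ['type' => 'build_report', 'report_id' => $reportId];
    }
}

// Runs as a long-lived worker process and drains the queue job by job.
function runWorker(array &$queue): int
{
    $done = 0;
    while (($job = array_shift($queue)) !== null) {
        // The expensive part (rendering the report) would happen here.
        $done++;
    }
    return $done;
}

scheduleReports($queue);
echo runWorker($queue) . ' jobs processed' . PHP_EOL;
```

<p>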
I schedule recurring collective reports using cron, but each individual report is processed by a worker. For WordPress setups, I adhere to guidelines such as those in \u201c<a href=\"https:\/\/webhosting.de\/en\/wp-cron-understand-optimize-wordpress-task-management-expert\/\">Understanding WP-Cron<\/a>\u201d so that planning remains consistent. This keeps the timing organized while preserving <strong>flexibility<\/strong> in execution.<\/p>\n\n\n<figure class=\"wp-block-image size-full is-resized\">\n  <img loading=\"lazy\" decoding=\"async\" src=\"https:\/\/webhosting.de\/wp-content\/uploads\/2025\/12\/php-workerqueue-setup-7482.png\" alt=\"\" width=\"1536\" height=\"1024\"\/>\n<\/figure>\n\n\n<h2>Modern PHP runtimes: RoadRunner and FrankenPHP in combination with queues<\/h2>\n\n<p>Persistent worker processes save startup overhead, keep connections open, and reduce <strong>latency<\/strong>. RoadRunner and FrankenPHP rely on long-running processes, worker pools, and shared memory, which significantly increases efficiency under load. In combination with queues, I maintain a consistent throughput rate and benefit from reused resources. I often separate HTTP handling and queue consumers into separate pools so that web traffic and background jobs do not interfere with each other. Working this way creates consistent <strong>performance<\/strong> even with highly fluctuating demand.<\/p>\n\n<h2>Security: treat data sparingly and encrypt it<\/h2>\n\n<p>I never put personal data directly into the payload, only IDs, which the worker resolves later; this serves <strong>data protection<\/strong>. All connections to the broker are encrypted, and I use at-rest encryption if the service offers it. Producers and consumers receive separate permissions with minimal rights. I rotate access credentials regularly and keep secrets out of logs and metrics. 
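<\/p>

<p>A data-sparing payload under these rules can be as small as this sketch (the field names are hypothetical):<\/p>

```php
<?php
// Sketch: the payload carries only references and routing metadata.
// The worker reloads personal data from the primary store at execution time,
// so names and email addresses never pass through the broker or its logs.
function buildInvoicePayload(int $orderId, int $customerId): array
{
    return [
        'type'        => 'send_invoice',
        'version'     => 1,
        'order_id'    => $orderId,     // reference only
        'customer_id' => $customerId,  // reference only, no email or name
    ];
}

$payload = buildInvoicePayload(555, 77);
echo json_encode($payload) . PHP_EOL;
```

<p>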
This approach reduces the attack surface and protects the <strong>confidentiality<\/strong> of sensitive information.<\/p>\n\n<h2>Practical application scenarios for async PHP<\/h2>\n\n<p>I no longer send emails in the web request, but queue them as jobs so that users don't have to wait for <strong>delivery<\/strong>. For media processing, I upload images, provide an immediate response, and generate thumbnails later, which makes the upload experience noticeably smoother. I start reports with large amounts of data asynchronously and make the results available for download as soon as the worker is finished. For integrations with payment, CRM, or marketing systems, I decouple API calls to calmly absorb timeouts and sporadic failures. I move cache warm-up and search index updates behind the scenes so that the <strong>UI<\/strong> remains fast.<\/p>\n\n<h2>Job design and data flow: payloads, versioning, and idempotency keys<\/h2>\n\n<p>I keep payloads as lean as possible and only store references: an <strong>ID<\/strong>, a type, a version, and a correlation or idempotency key. I use the version to identify the payload schema and can continue to develop handlers at my leisure while old jobs are still processed cleanly. An idempotency key prevents duplicate side effects: it is recorded in the data store when processing starts and checked on retries to ensure that no second email or booking is created. For complex tasks, I break jobs down into small, clearly defined steps (commands) instead of packing entire workflows into a single task; this allows retries and error handling to be applied in a <strong>targeted<\/strong> way.<\/p>\n\n<p>For updates, I use the <strong>outbox pattern<\/strong>: changes are written to an outbox table within a database transaction and then published to the real queue by a worker. 
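<\/p>

<p>The flow can be sketched with plain arrays standing in for the database tables and the broker (in production, the domain write and the outbox insert share one database transaction):<\/p>

```php
<?php
// Sketch of the outbox pattern. Arrays stand in for DB tables and the queue.

$orders = [];
$outbox = [];
$queue  = [];

// 1) Application write: domain change and outbox row are committed together.
function placeOrder(array &$orders, array &$outbox, int $orderId): void
{
    $orders[$orderId] = ['status' => 'placed'];
    $outbox[] = ['type' => 'order_placed', 'order_id' => $orderId, 'published' => false];
}

// 2) Relay worker: publishes pending outbox rows to the real queue.
//    Marking 'published' only after the publish gives at-least-once semantics.
function relayOutbox(array &$outbox, array &$queue): int
{
    $count = 0;
    foreach ($outbox as &$entry) {
        if (!$entry['published']) {
            $queue[] = ['type' => $entry['type'], 'order_id' => $entry['order_id']];
            $entry['published'] = true;
            $count++;
        }
    }
    return $count;
}

placeOrder($orders, $outbox, 9001);
echo relayOutbox($outbox, $queue) . ' event(s) published' . PHP_EOL;
```

<p>A second relay run finds nothing unpublished, so restarts of the relay do not re-emit already published events.<\/p>

<p>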
This allows me to avoid inconsistencies between application data and sent jobs and obtain a robust \u201c<em>at least once<\/em>\u201d delivery with precisely defined side effects.<\/p>\n\n<h2>Error patterns, DLQs, and \u201cpoison messages\u201d<\/h2>\n\n<p>Not every error is transient. I distinguish clearly between problems that <strong>retries<\/strong> can resolve (network issues, rate limits) and permanent errors (missing data, failed validations). For the latter, I set up a <strong>dead letter queue<\/strong> (DLQ): after a limited number of retries, the job ends up there. In the DLQ, I store the reason, a stack trace excerpt, the number of retries, and a link to relevant entities. This allows me to make a targeted decision: manually restart, correct data, or fix the handler. I recognize \u201cpoison messages\u201d (jobs that crash reproducibly) by their immediate, repeated failures and block them early so that they don't slow down the entire pool.<\/p>\n\n<h2>Graceful shutdown, deployments, and rolling restarts<\/h2>\n\n<p>When deploying, I rely on <strong>graceful shutdown<\/strong>: the process completes ongoing jobs but does not accept new ones. To do this, I intercept SIGTERM, set a \u201cdraining\u201d status, and extend the visibility timeout if necessary so that the broker does not assign the job to another worker. In container setups, I plan the termination grace period generously, tailored to the maximum job duration. I reduce rolling restarts to small batches so that <strong>capacity<\/strong> does not collapse. In addition, I set up heartbeats\/health checks to ensure that only healthy workers pull jobs.<\/p>\n\n<h2>Batching, rate limits, and backpressure<\/h2>\n\n<p>Where appropriate, I combine many small operations into <strong>batches<\/strong>: a worker loads 100 IDs, processes them in one go, and thus reduces overhead from network latency and connection establishment. 
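<\/p>

<p>The batching idea can be sketched with <code>array_chunk<\/code> (the batch size and the callback body are placeholders):<\/p>

```php
<?php
// Sketch: process IDs in batches instead of one job per ID,
// amortizing connection setup and network round trips.
function processInBatches(array $ids, int $batchSize, callable $handleBatch): int
{
    $batches = 0;
    foreach (array_chunk($ids, $batchSize) as $batch) {
        $handleBatch($batch);   // e.g. one bulk UPDATE or one bulk API call
        $batches++;
    }
    return $batches;
}

$rounds = processInBatches(range(1, 250), 100, function (array $batch): void {
    // Real work would cover the whole batch with a single query or request.
});
echo $rounds . ' batches' . PHP_EOL;   // 250 IDs in batches of 100 -> 3 batches
```

<p>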
For external APIs, I respect rate limits and control the <strong>polling rate<\/strong>. If the error rate increases or latencies grow, the worker automatically reduces parallelism (<em>adaptive concurrency<\/em>) until the situation stabilizes. Backpressure means that producers throttle their job production when the queue length exceeds certain thresholds; this way, I avoid avalanches that overwhelm the system.<\/p>\n\n<h2>Priorities, fairness, and client separation<\/h2>\n\n<p>I prioritize not only via individual priority queues, but also via <strong>weighted<\/strong> worker allocation: a pool works 70% \u201cshort\u201d, 20% \u201cdefault\u201d, and 10% \u201clong\u201d so that no category is completely starved. In multi-tenant setups, I isolate critical tenants with their own queues or dedicated worker pools to avoid <strong>noisy neighbors<\/strong>. For reports, I avoid rigid priorities that endlessly push long-running jobs to the back of the queue; instead, I schedule time slots (e.g., at night) and limit the number of parallel heavy jobs so that the platform remains <strong>snappy<\/strong> during the day.<\/p>\n\n<h2>Observability: structured logs, correlation, and SLOs<\/h2>\n\n<p>I log in a structured manner: job ID, correlation ID, duration, status, retry count, and important parameters. I use this to correlate front-end requests, enqueued jobs, and worker history. From this data, I define <strong>SLOs<\/strong>: approximately 95% of all \u201cshort\u201d jobs within 2 seconds, \u201cdefault\u201d within 30 seconds, \u201clong\u201d within 10 minutes. Alerts are triggered when the backlog grows, error rates increase, runtimes are unusual, or DLQs grow. Runbooks describe specific steps: scale, throttle, restart, analyze the DLQ. Only with clear metrics can I make good 
<strong>capacity decisions<\/strong>.<\/p>\n\n<h2>Development and testing: local, reproducible, resilient<\/h2>\n\n<p>For local development, I use a <strong>fake queue<\/strong> or a real instance in dev mode and start workers in the foreground so that logs are immediately visible. I write integration tests that enqueue a job, execute the worker, and check the expected side effect (e.g., a database change). I simulate load with generated jobs and measure throughput, 95th\/99th percentiles, and error rates. Reproducible data seeding and deterministic handlers are important to keep tests stable. Memory leaks show up in endurance tests; I plan periodic restarts and monitor the <strong>memory curve<\/strong>.<\/p>\n\n<h2>Resource management: CPU vs. I\/O, memory, and parallelism<\/h2>\n\n<p>I distinguish between CPU-intensive and I\/O-intensive jobs. I clearly limit the parallelism of CPU-intensive tasks (e.g., image transformations) and reserve cores. I\/O-intensive tasks (network, database) benefit from more concurrency as long as latency and errors remain stable. For PHP, I rely on opcache, pay attention to reusable (persistent) connections in long-lived workers, and explicitly release objects at the end of a job in order to limit <strong>fragmentation<\/strong>. A hard limit per job (memory\/runtime) prevents outliers from affecting the entire pool.<\/p>\n\n<h2>Step-by-step migration: from cron jobs to a queue-first approach<\/h2>\n\n<p>I migrate incrementally: first, I move non-critical email and notification tasks to the queue. Then I move media processing and integration calls, which often cause timeouts. Existing cron jobs remain the clock, but push their work to the queue. In the next step, I separate workloads into short\/default\/long and measure consistently. 
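<\/p>

<p>The short\/default\/long split from that step can be captured in a small configuration sketch (every number here is an illustrative starting point, not a recommendation):<\/p>

```php
<?php
// Sketch: per-queue limits for the short/default/long workload split.
// All values are illustrative starting points to be tuned with real metrics.
$queues = [
    'short'   => ['workers' => 8, 'timeout_s' => 10,  'memory_mb' => 128],
    'default' => ['workers' => 4, 'timeout_s' => 60,  'memory_mb' => 256],
    'long'    => ['workers' => 2, 'timeout_s' => 900, 'memory_mb' => 512],
];

// A dispatcher routes each job by its declared workload class,
// falling back to 'default' for anything unknown.
function queueFor(string $workload, array $queues): string
{
    return array_key_exists($workload, $queues) ? $workload : 'default';
}

echo queueFor('long', $queues) . PHP_EOL;
echo queueFor('unknown', $queues) . PHP_EOL;
```

<p>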
Finally, I remove heavy cron logic as soon as workers are running stably and switch to <strong>event-driven<\/strong> enqueuing points (e.g., \u201cuser registered\u201d \u2192 \u201csend welcome email\u201d). This reduces risk and allows the team and infrastructure to grow into the new pattern in a controlled manner.<\/p>\n\n<h2>Governance and operation: policies, quotas, and cost control<\/h2>\n\n<p>I define clear policies: maximum payload size, permissible runtime, permitted external targets, quotas per client, and daily time slots for expensive jobs. I keep an eye on costs by scaling worker pools at night, bundling batch jobs during off-peak hours, and setting limits for cloud services that prevent cost <strong>outliers<\/strong>. I have an escalation path ready for incidents: DLQ alarm \u2192 analysis \u2192 hotfix or data correction \u2192 controlled reprocessing. With this discipline, the system remains manageable, even as it grows.<\/p>\n\n<h2>Final thoughts: from cron jobs to scalable asynchronous architecture<\/h2>\n\n<p>I solve performance issues by decoupling slow tasks from the web response and processing them via <strong>workers<\/strong>. Queues buffer load, prioritize tasks, and bring order to retries and error patterns. With separate workloads, clean timeouts, and idempotent handlers, the system remains predictable. I decide on hosting, worker limits, and the choice of broker based on real metrics, not gut feelings. 
Those who adopt this architecture early on will get faster responses, better <strong>Scaling<\/strong> and significantly more composure in day-to-day business.<\/p>","protected":false},"excerpt":{"rendered":"<p>Learn how asynchronous PHP tasks with worker queues and PHP workers can make your application more scalable and what role hosting plays in this.<\/p>","protected":false},"author":1,"featured_media":15922,"comment_status":"","ping_status":"","sticky":false,"template":"","format":"standard","meta":{"_acf_changed":false,"_crdt_document":"","inline_featured_image":false,"footnotes":""},"categories":[922],"tags":[],"class_list":["post-15929","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-technologie"],"acf":[],"_wp_attached_file":null,"_wp_attachment_metadata":null,"litespeed-optimize-size":null,"litespeed-optimize-set":null,"_elementor_source_image_hash":null,"_wp_attachment_image_alt":null,"stockpack_author_name":null,"stockpack_author_url":null,"stockpack_provider":null,"stockpack_image_url":null,"stockpack_license":null,"stockpack_license_url":null,"stockpack_modification":null,"color":null,"original_id":null,"original_url":null,"original_link":null,"unsplash_location":null,"unsplash_sponsor":null,"unsplash_exif":null,"unsplash_attachment_metadata":null,"_elementor_is_screenshot":null,"surfer_file_name":null,"surfer_file_original_url":null,"envato_tk_source_kit":null,"envato_tk_source_index":null,"envato_tk_manifest":null,"envato_tk_folder_name":null,"envato_tk_builder":null,"envato_elements_download_event":null,"_menu_item_type":null,"_menu_item_menu_item_parent":null,"_menu_item_object_id":null,"_menu_item_object":null,"_menu_item_target":null,"_menu_item_classes":null,"_menu_item_xfn":null,"_menu_item_url":null,"_trp_menu_languages":null,"rank_math_primary_category":null,"rank_math_title":null,"inline_featured_image":null,"_yoast_wpseo_primary_category":null,"rank_math_schema_blogposting":null,"rank_math_schema
_videoobject":null,"_oembed_049c719bc4a9f89deaead66a7da9fddc":null,"_oembed_time_049c719bc4a9f89deaead66a7da9fddc":null,"_yoast_wpseo_focuskw":null,"_yoast_wpseo_linkdex":null,"_oembed_27e3473bf8bec795fbeb3a9d38489348":null,"_oembed_c3b0f6959478faf92a1f343d8f96b19e":null,"_trp_translated_slug_en_us":null,"_wp_desired_post_slug":null,"_yoast_wpseo_title":null,"tldname":null,"tldpreis":null,"tldrubrik":null,"tldpolicylink":null,"tldsize":null,"tldregistrierungsdauer":null,"tldtransfer":null,"tldwhoisprivacy":null,"tldregistrarchange":null,"tldregistrantchange":null,"tldwhoisupdate":null,"tldnameserverupdate":null,"tlddeletesofort":null,"tlddeleteexpire":null,"tldumlaute":null,"tldrestore":null,"tldsubcategory":null,"tldbildname":null,"tldbildurl":null,"tldclean":null,"tldcategory":null,"tldpolicy":null,"tldbesonderheiten":null,"tld_bedeutung":null,"_oembed_d167040d816d8f94c072940c8009f5f8":null,"_oembed_b0a0fa59ef14f8870da2c63f2027d064":null,"_oembed_4792fa4dfb2a8f09ab950a73b7f313ba":null,"_oembed_33ceb1fe54a8ab775d9410abf699878d":null,"_oembed_fd7014d14d919b45ec004937c0db9335":null,"_oembed_21a029d076783ec3e8042698c351bd7e":null,"_oembed_be5ea8a0c7b18e658f08cc571a909452":null,"_oembed_a9ca7a298b19f9b48ec5914e010294d2":null,"_oembed_f8db6b27d08a2bb1f920e7647808899a":null,"_oembed_168ebde5096e77d8a89326519af9e022":null,"_oembed_cdb76f1b345b42743edfe25481b6f98f":null,"_oembed_87b0613611ae54e86e8864265404b0a1":null,"_oembed_27aa0e5cf3f1bb4bc416a4641a5ac273":null,"_oembed_time_27aa0e5cf3f1bb4bc416a4641a5ac273":null,"_tldname":null,"_tldclean":null,"_tldpreis":null,"_tldcategory":null,"_tldsubcategory":null,"_tldpolicy":null,"_tldpolicylink":null,"_tldsize":null,"_tldregistrierungsdauer":null,"_tldtransfer":null,"_tldwhoisprivacy":null,"_tldregistrarchange":null,"_tldregistrantchange":null,"_tldwhoisupdate":null,"_tldnameserverupdate":null,"_tlddeletesofort":null,"_tlddeleteexpire":null,"_tldumlaute":null,"_tldrestore":null,"_tldbildname":null,"_tldbildurl":null,"_tld_bede
utung":null,"_tldbesonderheiten":null,"_oembed_ad96e4112edb9f8ffa35731d4098bc6b":null,"_oembed_8357e2b8a2575c74ed5978f262a10126":null,"_oembed_3d5fea5103dd0d22ec5d6a33eff7f863":null,"_eael_widget_elements":null,"_oembed_0d8a206f09633e3d62b95a15a4dd0487":null,"_oembed_time_0d8a206f09633e3d62b95a15a4dd0487":null,"_aioseo_description":null,"_eb_attr":null,"_eb_data_table":null,"_oembed_819a879e7da16dd629cfd15a97334c8a":null,"_oembed_time_819a879e7da16dd629cfd15a97334c8a":null,"_acf_changed":null,"_wpcode_auto_insert":null,"_edit_last":null,"_edit_lock":null,"_oembed_e7b913c6c84084ed9702cb4feb012ddd":null,"_oembed_bfde9e10f59a17b85fc8917fa7edf782":null,"_oembed_time_bfde9e10f59a17b85fc8917fa7edf782":null,"_oembed_03514b67990db061d7c4672de26dc514":null,"_oembed_time_03514b67990db061d7c4672de26dc514":null,"rank_math_news_sitemap_robots":null,"rank_math_robots":null,"_eael_post_view_count":"2256","_trp_automatically_translated_slug_ru_ru":null,"_trp_automatically_translated_slug_et":null,"_trp_automatically_translated_slug_lv":null,"_trp_automatically_translated_slug_fr_fr":null,"_trp_automatically_translated_slug_en_us":null,"_wp_old_slug":null,"_trp_automatically_translated_slug_da_dk":null,"_trp_automatically_translated_slug_pl_pl":null,"_trp_automatically_translated_slug_es_es":null,"_trp_automatically_translated_slug_hu_hu":null,"_trp_automatically_translated_slug_fi":null,"_trp_automatically_translated_slug_ja":null,"_trp_automatically_translated_slug_lt_lt":null,"_elementor_edit_mode":null,"_elementor_template_type":null,"_elementor_version":null,"_elementor_pro_version":null,"_wp_page_template":null,"_elementor_page_settings":null,"_elementor_data":null,"_elementor_css":null,"_elementor_conditions":null,"_happyaddons_elements_cache":null,"_oembed_75446120c39305f0da0ccd147f6de9cb":null,"_oembed_time_75446120c39305f0da0ccd147f6de9cb":null,"_oembed_3efb2c3e76a18143e7207993a2a6939a":null,"_oembed_time_3efb2c3e76a18143e7207993a2a6939a":null,"_oembed_59808117857ddf57e478
a31d79f76e4d":null,"_oembed_time_59808117857ddf57e478a31d79f76e4d":null,"_oembed_965c5b49aa8d22ce37dfb3bde0268600":null,"_oembed_time_965c5b49aa8d22ce37dfb3bde0268600":null,"_oembed_81002f7ee3604f645db4ebcfd1912acf":null,"_oembed_time_81002f7ee3604f645db4ebcfd1912acf":null,"_elementor_screenshot":null,"_oembed_7ea3429961cf98fa85da9747683af827":null,"_oembed_time_7ea3429961cf98fa85da9747683af827":null,"_elementor_controls_usage":null,"_elementor_page_assets":[],"_elementor_screenshot_failed":null,"theplus_transient_widgets":null,"_eael_custom_js":null,"_wp_old_date":null,"_trp_automatically_translated_slug_it_it":null,"_trp_automatically_translated_slug_pt_pt":null,"_trp_automatically_translated_slug_zh_cn":null,"_trp_automatically_translated_slug_nl_nl":null,"_trp_automatically_translated_slug_pt_br":null,"_trp_automatically_translated_slug_sv_se":null,"rank_math_analytic_object_id":null,"rank_math_internal_links_processed":null,"_trp_automatically_translated_slug_ro_ro":null,"_trp_automatically_translated_slug_sk_sk":null,"_trp_automatically_translated_slug_bg_bg":null,"_trp_automatically_translated_slug_sl_si":null,"litespeed_vpi_list":null,"litespeed_vpi_list_mobile":null,"rank_math_seo_score":null,"rank_math_contentai_score":null,"ilj_limitincominglinks":null,"ilj_maxincominglinks":null,"ilj_limitoutgoinglinks":null,"ilj_maxoutgoinglinks":null,"ilj_limitlinksperparagraph":null,"ilj_linksperparagraph":null,"ilj_blacklistdefinition":null,"ilj_linkdefinition":null,"_eb_reusable_block_ids":null,"rank_math_focus_keyword":"asynchrone 
PHP","rank_math_og_content_image":null,"_yoast_wpseo_metadesc":null,"_yoast_wpseo_content_score":null,"_yoast_wpseo_focuskeywords":null,"_yoast_wpseo_keywordsynonyms":null,"_yoast_wpseo_estimated-reading-time-minutes":null,"rank_math_description":null,"surfer_last_post_update":null,"surfer_last_post_update_direction":null,"surfer_keywords":null,"surfer_location":null,"surfer_draft_id":null,"surfer_permalink_hash":null,"surfer_scrape_ready":null,"_thumbnail_id":"15922","footnotes":null,"_links":{"self":[{"href":"https:\/\/webhosting.de\/en\/wp-json\/wp\/v2\/posts\/15929","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/webhosting.de\/en\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/webhosting.de\/en\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/webhosting.de\/en\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/webhosting.de\/en\/wp-json\/wp\/v2\/comments?post=15929"}],"version-history":[{"count":0,"href":"https:\/\/webhosting.de\/en\/wp-json\/wp\/v2\/posts\/15929\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/webhosting.de\/en\/wp-json\/wp\/v2\/media\/15922"}],"wp:attachment":[{"href":"https:\/\/webhosting.de\/en\/wp-json\/wp\/v2\/media?parent=15929"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/webhosting.de\/en\/wp-json\/wp\/v2\/categories?post=15929"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/webhosting.de\/en\/wp-json\/wp\/v2\/tags?post=15929"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}