With the Query Monitor plugin, I make slow SQL queries, faulty hooks and expensive HTTP requests immediately visible and attribute them to specific components. This is how I find the real cause of slow loading times in WordPress and make targeted optimizations to code, plugins, themes and hosting.
Key points
- Installation and first steps: read the admin bar, understand the panels
- Analyze queries: slow SQL queries, callers, components
- Assets and requests: streamline JS/CSS and external APIs
- Evaluate hosting: memory, PHP version, database latency
- Establish a workflow: measure, optimize, measure again
What is Query Monitor and why it helps me
I use Query Monitor to make the internal processes of a page transparent: database queries, hooks, PHP errors, HTTP calls, scripts and styles. The tool shows me in real time how page generation time, number of queries and memory consumption break down. With just a few clicks, I can see which plugin or theme eats up which share of the time and how much external services contribute. In this way, I avoid guesswork and base decisions on hard metrics. This saves time when troubleshooting and gives me a clear plan for the next steps.
Installation and first steps
I install Query Monitor via the plugin search in the backend and activate it like any other plugin. A compact display with loading time, query count and memory usage then appears in the admin bar. One click opens the panel for the currently loaded page, so I never have to leave the context. Only logged-in administrators see this data; visitors remain unaffected and notice nothing of the overlay. After a few page views, I get a feel for the typical values of my installation and can spot outliers more quickly.
Read the most important views correctly
In the Overview tab, I can see the page generation time, the number of database queries and the peak memory usage. The Queries tab lists each SQL statement with duration, caller and component, which allows direct conclusions about the responsible code. In the HTTP requests area, I notice expensive, blocking calls to APIs or license servers that I would otherwise easily overlook. The tabs for scripts and styles show which files are registered and loaded, so I can deactivate unused assets. For a comprehensive diagnosis, I often combine these insights with a short performance audit to set priorities in a targeted manner.
Other panels and functions at a glance
In addition to queries and HTTP calls, Query Monitor provides valuable insights into further areas that I make targeted use of:
- Template & Conditionals: I can see which template file actually renders, which template parts are included and which conditional tags (e.g. is_singular, is_archive) apply. This helps me move performance-critical logic to the right place in the theme (see the sketch after this list).
- PHP errors and deprecated notices: Notices, warnings and outdated functions subtly slow down pages. I fix these messages because any unnecessary output and error handling costs time and makes later updates more difficult.
- Hooks & Actions: I can see which functions are attached to which hooks and thus find overloaded actions, such as database queries on init or wp that are executed too late.
- Languages and text domains: Loaded .mo files and text domains show me whether translations cause unnecessary I/O or are loaded multiple times.
- Environment: Details on PHP version, database driver, server and active extensions let me discover bottlenecks such as missing OPcache optimizations or unfavorable PHP settings.
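To illustrate the first point, here is a minimal sketch of gating expensive theme logic behind a conditional tag; the hook usage is standard WordPress, while my_prefix_prepare_related_posts is a hypothetical helper:

```php
<?php
// Minimal sketch: run expensive work only where the template needs it.
// my_prefix_prepare_related_posts() is a hypothetical helper function.
add_action( 'wp', function () {
    // Conditional tags are reliable from the 'wp' hook onwards.
    if ( ! is_singular( 'post' ) ) {
        return; // skip archives, pages, feeds etc.
    }
    // Expensive logic (e.g. building related-post data) runs only here.
    my_prefix_prepare_related_posts();
} );
```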
Systematic analysis: from symptoms to cause
I start with the page that is perceived as slow and load it with the panel open. I then compare the loading time and number of queries with faster pages to see the relative differences. If the time differs greatly, I open the Queries tab and sort by duration to check the worst queries. If the query count looks normal, I look at external requests and the loaded assets. From these pieces of the puzzle, a clear picture emerges that leads me step by step to the actual cause.
Targeted filtering: components, callers, duplicates
I use the filter functions consistently so that I don't get lost in the details. In the Queries panel, I first hide everything that is fast and focus on queries above my internal threshold. I then filter by component (e.g. a specific plugin) or caller (function/file) to isolate the source. I mark repeated, identical queries as duplicates and consolidate them into a single, cached query. For debugging in themes, I look at the query variants of WP_Query (orderby, meta_query, tax_query), because small parameter changes have a big effect here.
Finding and defusing slow queries
I recognize slow queries by their long duration, many returned rows or conspicuous callers in the code. Typical patterns are SELECT * without WHERE, N+1 accesses in loops, or meta queries on unindexed fields. I reduce the amount of data, tighten the WHERE conditions and set a LIMIT if the output only needs a few records. For recurring data, I store results via transients or in the object cache to avoid unnecessary round trips to the database. If the query comes from a plugin, I check its settings or report the behavior to the author if configuration is not enough.
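As a concrete example of reducing the amount of data, here is a hedged sketch of how I would rewrite an unbounded query with WP_Query; the parameter values are illustrative:

```php
<?php
// Sketch: request only IDs and cap the result set instead of
// fetching full rows without a limit. Values are illustrative.
$query = new WP_Query( array(
    'post_type'              => 'post',
    'post_status'            => 'publish',
    'posts_per_page'         => 10,    // explicit LIMIT instead of -1
    'fields'                 => 'ids', // avoid pulling full rows from wp_posts
    'no_found_rows'          => true,  // skip SQL_CALC_FOUND_ROWS if no pagination is needed
    'update_post_meta_cache' => false, // skip meta preloading if meta is not used
    'update_post_term_cache' => false, // same for terms
) );

foreach ( $query->posts as $post_id ) {
    // Work with the ID only, e.g. get_permalink( $post_id ).
}
```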
Using object cache and transients correctly
I specifically cache recurring queries and expensive calculations:
- Transients: For periodically changing data (e.g. teaser lists), I use transients with a sensible interval. I choose runtimes that are technically appropriate (minutes to hours) and invalidate at suitable events, e.g. after publishing a post (see the sketch after this list).
- Persistent object cache: If a Redis or Memcached cache is active, Query Monitor shows me the hit rate. Low hit rates indicate inconsistent keys, TTLs that are too short or frequent invalidations. I consolidate keys and reduce unnecessary flushes.
- Serialized data: Large, serialized arrays in the options table get stripped down. I scrutinize autoload options because they load on every page. Where possible, I split large options into smaller, specifically loaded units.
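A minimal sketch of the transient pattern described above; the key name, TTL and query are assumptions for illustration:

```php
<?php
// Sketch: cache a teaser list in a transient and invalidate it
// when a post is saved. Names and TTL are illustrative.
function my_prefix_get_teaser_ids() {
    $ids = get_transient( 'my_prefix_teaser_ids' );
    if ( false === $ids ) {
        $ids = get_posts( array(
            'post_type'      => 'post',
            'posts_per_page' => 5,
            'fields'         => 'ids',
        ) );
        set_transient( 'my_prefix_teaser_ids', $ids, HOUR_IN_SECONDS );
    }
    return $ids;
}

// Invalidate at a suitable event: whenever a post is saved.
add_action( 'save_post_post', function () {
    delete_transient( 'my_prefix_teaser_ids' );
} );
```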
Only when caching strategies are in place is it worth increasing server resources. Otherwise, I'm just masking symptoms and losing control over side effects.
SQL tuning in practice: Table of limit values
For decisions, I need tangible thresholds. The following table shows practical values that I use to classify anomalies more quickly. It does not replace an individual analysis, but gives me a solid orientation for prioritization. I always assess the combination of duration, frequency and impact on the template. On this basis, I take targeted measures instead of experimenting at random.
| Signal | Threshold | Typical cause | Immediate action |
|---|---|---|---|
| Single query duration | > 100 ms | Missing WHERE/LIMIT, unsuitable index | Restrict columns, check index, cache result |
| Total number of queries | > 200 per page | N+1 pattern, plugins with many meta queries | Preload data, adjust loops, streamline plugin settings |
| Returned rows | > 1,000 rows | Unfiltered lists, missing pagination | Set WHERE/LIMIT, activate pagination |
| Memory peak | > 80% of memory limit | Large arrays, unused data, image processing | Reduce data volume, release objects, raise the limit if required |
| Long total time | > 1.5 s server time | External APIs, I/O latency, weak server | Cache requests, check services, increase hosting performance |
I treat thresholds as a starting point, not as a rigid rule. A query with 80 ms that runs a hundred times weighs more heavily than a single query with 200 ms. The position in the template also counts: blocking queries in the header have a stronger effect than infrequent calls in the footer. With Query Monitor, I evaluate these correlations calmly and start with the measures that offer the greatest leverage.
Measure REST API, AJAX and admin requests
Many bottlenecks are not in classic page views, but in background processes. I therefore carry out targeted checks:
- REST endpoints: For highly frequented endpoints (e.g. search suggestions), I measure queries, memory and response times separately. Conspicuous endpoints benefit from tighter WHERE conditions, leaner response objects and response caching (see the sketch after this list).
- AJAX calls: N+1 queries often creep into admin or frontend AJAX routines. I bundle data accesses, cache results and set strict limits on pagination.
- Admin pages: Overloaded plugin settings pages often generate hundreds of queries. I identify duplicates there and discuss adjustments with the team or the plugin provider.
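As an example of response caching on a REST endpoint, here is a hedged sketch; the route, namespace and cache key are assumptions:

```php
<?php
// Sketch: a cached search-suggestion endpoint. Route name, namespace
// and cache TTL are illustrative assumptions.
add_action( 'rest_api_init', function () {
    register_rest_route( 'my-plugin/v1', '/suggest', array(
        'methods'             => 'GET',
        'permission_callback' => '__return_true', // public endpoint in this example
        'args'                => array(
            'q' => array(
                'required'          => true,
                'sanitize_callback' => 'sanitize_text_field',
            ),
        ),
        'callback'            => function ( WP_REST_Request $request ) {
            $term = $request->get_param( 'q' );
            $key  = 'my_prefix_suggest_' . md5( $term );
            $hits = get_transient( $key );
            if ( false === $hits ) {
                // Narrow query: specific post type, few fields, strict limit.
                $hits = get_posts( array(
                    's'              => $term,
                    'post_type'      => 'post',
                    'posts_per_page' => 5,
                    'fields'         => 'ids',
                ) );
                set_transient( $key, $hits, 5 * MINUTE_IN_SECONDS );
            }
            return rest_ensure_response( $hits );
        },
    ) );
} );
```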
It is important to repeat these measurements under similar conditions, because caches and concurrent processes can distort the figures.
Optimize HTTP requests, scripts and styles
I recognize external requests in the corresponding panel by their duration, endpoint and response. If a service stands out, I check whether a cache makes sense or whether I can decouple the call in time. For scripts and styles, I check which assets pages really need and block unnecessary ones on specific templates, as the sketch below shows. It is often enough to consolidate libraries and remove duplicate variants. For interface issues, I get additional pointers from my analysis of REST API performance, because backend latency strongly influences the impression in the frontend.
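A minimal sketch of template-specific unloading; the handles 'my-slider' and 'my-slider-style' are assumed names, not from a real plugin:

```php
<?php
// Sketch: unload slider assets everywhere except the front page.
// The handle names are assumptions for illustration.
add_action( 'wp_enqueue_scripts', function () {
    if ( is_front_page() ) {
        return; // the slider is only used here
    }
    wp_dequeue_script( 'my-slider' );
    wp_dequeue_style( 'my-slider-style' );
}, 100 ); // late priority, so the plugin has already enqueued its assets
```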
Correctly classify caching traps and CDN influence
If Query Monitor measures high server times despite an active page cache, the problem almost always lies before the cache (PHP, database, external service). I make sure I am looking at an uncached view (logged in, cache busting). Conversely, CDNs and browser caches distort the perception in the frontend: fast delivery conceals slow server generation and backfires when caches are empty. That's why I test both situations: warm and cold.
- Warm cache: I expect a very low server time. If it is still high, expensive HTTP calls or large, dynamic blocks point to the cause.
- Cold cache: I watch for initial peaks, e.g. when images are transformed or large menus are built on the first call. Where possible, I move such work to background jobs (a sketch follows below).
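One hedged way to move cold-cache work into the background is WP-Cron; the hook name and the cached menu identifier are illustrative assumptions:

```php
<?php
// Sketch: defer expensive cold-cache work to a one-off background event.
// The hook name and menu identifier are assumptions.
if ( ! wp_next_scheduled( 'my_prefix_warm_menu_cache' ) ) {
    wp_schedule_single_event( time() + 60, 'my_prefix_warm_menu_cache' );
}

add_action( 'my_prefix_warm_menu_cache', function () {
    // Rebuild the expensive structure once, then cache it.
    $items = wp_get_nav_menu_items( 'primary' );
    set_transient( 'my_prefix_menu_items', $items, 12 * HOUR_IN_SECONDS );
} );
```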
Evaluate hosting and server level wisely
Even clean code reaches its limits when the environment slows it down. If Query Monitor shows few queries but long times, I look at CPU performance, database latency and the PHP version. A memory limit that is too low leads to frequent peaks and page failures under peak load. The database engine and caching layers also determine whether optimizations take effect. If the foundation is weak, I factor in an upgrade, because better response times reinforce every other measure.
Measurement methodology: Comparable figures instead of outliers
To make valid decisions, I minimize measurement noise. I load problematic pages several times in succession, observe the spread of the times and compare identical states (logged in vs. logged out, empty vs. warm cache). I also document before/after values consistently: time, page, server load, special events (deploy, cron, traffic peaks). This is how I distinguish real improvements from random hits.
Best practices in dealing with Query Monitor
I leave the plugin active as long as I am measuring and deactivate it when the analysis is complete. Before making changes, I work in a staging environment and record actual values for a clear before/after comparison. After every plugin update, I briefly check the admin bar to detect new load peaks early. I document the results and regularly check them against real visitor numbers. For constant monitoring, I also rely on "Measure WordPress speed" so that measurements outside the backend confirm the trend.
Multisite, roles and security
In multisite setups, I use Query Monitor per site, because plugins and themes can produce different load profiles there. I check suspicious sites individually and compare their admin bar values to quickly isolate outliers. Security remains guaranteed: the output is only visible to administrators by default. If a colleague without admin rights needs to measure, I work via a separate staging instance or temporary access that I remove again afterwards. This keeps debug output under control and prevents it from reaching the public.
Practical workflow: How I work with measured values
I start with a measurement round on the most important templates, such as the start page, archive and checkout. I then assign each outlier to a cause: query, external request, asset or server. I define a single measure per cause, implement it and measure again. As soon as the effect becomes visible, I tackle the next issue so that I keep my focus. This rhythm prevents me from making too many adjustments at the same time.
Recognizing and eliminating anti-patterns
I see some patterns so often that I proactively look for them:
- N+1 for post meta: Instead of loading metadata separately for each post in a loop, I collect the required metadata in advance (e.g. via get_posts with fields and a meta_query) and map it up front; a sketch follows after this list.
- orderby=meta_value without index: Sorting on unindexed meta fields is expensive. I check whether I can switch to a tax_query, reduce the scope or add a suitable index.
- Unnecessary autoload options: I defuse large options with autoload='yes' by setting them to 'no' and only loading them when needed (also sketched below).
- Fuzzy search queries: Broad LIKE filters on wp_posts strain the database. I use tighter WHERE conditions, specific post types and narrow down the fields.
- Double-loaded assets: Libraries registered multiple times (e.g. sliders, icons) are removed or consolidated so that only one current version loads per page.
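Two hedged sketches for the first and third anti-pattern; all names are illustrative:

```php
<?php
// 1) N+1 for post meta: preload meta for all posts in a single query,
//    so get_post_meta() is served from the cache inside the loop.
$ids = get_posts( array(
    'post_type'      => 'post',
    'posts_per_page' => 20,
    'fields'         => 'ids',
) );
update_postmeta_cache( $ids ); // one query instead of one per post

foreach ( $ids as $id ) {
    $subtitle = get_post_meta( $id, 'subtitle', true ); // cache hit, no extra query
}

// 2) Unnecessary autoload: re-create a large option without autoload.
//    'my_big_option' is an assumed option name. Delete and re-add,
//    because updating with an unchanged value would not touch autoload.
$value = get_option( 'my_big_option' );
delete_option( 'my_big_option' );
add_option( 'my_big_option', $value, '', 'no' ); // fourth argument disables autoload
```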
Failure patterns from everyday practice
A classic case: a slider add-on loads three scripts and two styles on every page, although the function is only used on the start page. After unloading it on subpages, the render time drops noticeably. I also frequently see N+1 queries when loading post meta in loops, which I solve by preloading. External license servers with long response times sometimes block the page load; a cache with a reasonable interval alleviates the situation, as the sketch below shows. In all of these examples, Query Monitor clearly shows me where to start first.
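A hedged sketch of the license-server mitigation; the URL, cache key and interval are assumptions:

```php
<?php
// Sketch: cache a slow external license check so it no longer blocks
// every page build. URL, key and TTL are illustrative.
function my_prefix_get_license_status() {
    $status = get_transient( 'my_prefix_license_status' );
    if ( false === $status ) {
        $response = wp_remote_get( 'https://licenses.example.com/check', array(
            'timeout' => 3, // fail fast instead of blocking the page build
        ) );
        $status = is_wp_error( $response )
            ? 'unknown'
            : wp_remote_retrieve_body( $response );
        set_transient( 'my_prefix_license_status', $status, 6 * HOUR_IN_SECONDS );
    }
    return $status;
}
```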
Briefly summarized
With Query Monitor, I get an X-ray image of my WordPress installation and spot the brakes without detours. I evaluate queries, HTTP calls, scripts and memory consumption in the context of the site and make decisions with a view to impact and effort. A clear workflow of measuring, adapting and monitoring ensures that results last. In addition, a fast foundation and tidy plugins help ensure that optimizations take effect consistently. Regular use of the tool keeps performance problems small and noticeably improves the user experience.


