Best Website Speed Test Tools: Honest Picks
The honest list of the best website speed test tools, with the dead ones cut and the new ones named. For builders who want to find the real bottleneck, not collect screenshots of grades.
- Top pick: Google PageSpeed Insights for Core Web Vitals signal, the only field-data source most sites have access to
- Cleaner waterfall: GTmetrix for visual rendering and a less crowded report than Lighthouse
- Power users: WebPageTest for filmstrips, network throttling, and the deepest waterfall on the public web
- Free, in your browser: Chrome DevTools’ Lighthouse panel, identical engine to PageSpeed Insights
- For CI/CD: Lighthouse CI runs the same audits on every deploy, free and self-hostable
- Skip YSlow, Varvy, Page Scoring, and the standalone Dareboost. They are dead, retired, or now part of an enterprise stack you don’t need.
Most “best website speed test tools” lists still recommend YSlow. YSlow’s last meaningful update was in 2018. Its Firefox extension hasn’t worked on modern browsers in years. Half the other tools on those lists are also gone, acquired, or pivoted into something different. We pulled the dead ones out of this list and replaced them with what builders use.
01 What changed for the best website speed test tools: INP replaced FID
Before recommending any tool, note the most important shift since the last useful version of this list. Interaction to Next Paint (INP) replaced First Input Delay (FID) as a Core Web Vital on March 12, 2024. Google’s Search Central announced it; web.dev documented the rollout. Any tool still reporting FID as a primary metric is showing you a deprecated signal.
The current thresholds for the three Core Web Vitals:
- LCP (Largest Contentful Paint): under 2.5 seconds is good, over 4 seconds is poor. Unchanged recently.
- INP (Interaction to Next Paint): under 200ms is good, over 500ms is poor. Replaces FID. Measures the worst interaction latency across the page session, not just the first one.
- CLS (Cumulative Layout Shift): under 0.1 is good, over 0.25 is poor. Unchanged.
INP is the harder one to optimize. FID measured a single interaction. INP measures the slowest one across the whole session. Long-running JavaScript event handlers that used to fly under FID now show up as red flags. Every tool below was checked for INP support.
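The thresholds above are easy to script against when you export metric values from any of the tools below. A minimal sketch: the names are our own, and the boundary handling (at-threshold counts as good) is an assumption, since the official wording says “under”:

```python
# Illustrative Core Web Vitals classifier. Threshold numbers come from
# the list above; the dict and function names are made up, not from
# any library.

THRESHOLDS = {
    # metric: (good_max, poor_min); values between are "needs improvement"
    "LCP": (2.5, 4.0),    # seconds
    "INP": (200, 500),    # milliseconds
    "CLS": (0.1, 0.25),   # unitless score
}

def rate(metric: str, value: float) -> str:
    """Return 'good', 'needs improvement', or 'poor' for a metric value."""
    good_max, poor_min = THRESHOLDS[metric]
    if value <= good_max:
        return "good"
    if value > poor_min:
        return "poor"
    return "needs improvement"

print(rate("LCP", 2.1))   # -> good
print(rate("INP", 350))   # -> needs improvement
print(rate("CLS", 0.3))   # -> poor
```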
02 At a glance: The best website speed test tools
| Tool | Field data? | INP support | Best for | Cost |
|---|---|---|---|---|
| Google PageSpeed Insights | Yes (CrUX) | Yes | Core Web Vitals signal, mobile vs desktop | Free |
| GTmetrix | No (lab only) | Yes | Cleaner waterfall and visual report | Free tier + paid |
| WebPageTest | No (lab only) | Yes | Power-user testing, filmstrips, throttling | Free + paid |
| Chrome DevTools (Lighthouse) | No (lab only) | Yes | In-browser testing without leaving your tab | Free |
| Lighthouse CI | No (lab only) | Yes | Running audits on every deploy | Free + self-hosted |
| DebugBear | Yes (RUM) | Yes | Continuous monitoring with real-user data | Paid (14-day trial) |
03 Google PageSpeed Insights
The first place to check. The only public tool that shows real Chrome User Experience field data alongside lab metrics.
Buy if: you want one canonical source for Core Web Vitals signal. Skip if: you need waterfall depth; pair this with GTmetrix or WebPageTest.
PageSpeed Insights at a glance
- Engine: Lighthouse 11+ for lab data, CrUX dataset for field data
- Field data source: Chrome User Experience Report (28-day rolling window)
- INP support: Yes, since the March 2024 rollout
- Mobile / desktop split: Default mobile-first, desktop tab present
- Best for: The single most-cited Core Web Vitals score in the SEO industry
- URL: pagespeed.web.dev
PageSpeed Insights is the canonical Core Web Vitals tool. It is the only one that shows real user data from the Chrome User Experience Report alongside the lab simulation. For sites with enough traffic, the field data tells the truer story. The lab score tells you what changed; the field data tells you whether real users felt it.
It is a deeply opinionated tool. It will recommend rendering paths the team you work for cannot ship. It will flag fonts you can’t change. We treat its score as directional, not gospel.
- Best: any site where Core Web Vitals is in the SEO conversation
- Skip if: you need a less Lighthouse-flavored opinion or deeper waterfall
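For scripted checks, the PageSpeed Insights v5 API returns the same lab-plus-field data as the web UI. A hedged Python sketch: the `runPagespeed` endpoint is real, but the exact `loadingExperience` metric keys should be verified against a live response, and the sample payload below is made up:

```python
# Sketch: build a PageSpeed Insights v5 request URL and pull the CrUX
# field metrics out of a response. Field names reflect the v5 API as
# we understand it; verify against a live response before relying on them.
from urllib.parse import urlencode

PSI_ENDPOINT = "https://www.googleapis.com/pagespeedonline/v5/runPagespeed"

def build_request_url(page_url: str, strategy: str = "mobile") -> str:
    """Build the GET URL for a PSI run (append &key=... for an API key)."""
    return PSI_ENDPOINT + "?" + urlencode({"url": page_url, "strategy": strategy})

def field_metrics(response: dict) -> dict:
    """Map each CrUX field metric to (75th-percentile value, category)."""
    metrics = response.get("loadingExperience", {}).get("metrics", {})
    return {name: (m.get("percentile"), m.get("category"))
            for name, m in metrics.items()}

# Illustrative response fragment, not real data:
sample = {"loadingExperience": {"metrics": {
    "LARGEST_CONTENTFUL_PAINT_MS": {"percentile": 2300, "category": "FAST"},
    "INTERACTION_TO_NEXT_PAINT": {"percentile": 450, "category": "AVERAGE"},
}}}
print(field_metrics(sample))
```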
04 GTmetrix
Cleaner waterfall and visual report than Lighthouse, with the same underlying Lighthouse engine for the score.
Buy if: you want to send a single PDF to a client or collaborator. Skip if: you only test from one location and don’t need the waterfall.
GTmetrix uses Lighthouse under the hood. It presents the data as a less crowded report. The waterfall view is the visualization most performance engineers learned on. The free tier tests from one location at desktop resolution. Paid tiers add multi-location testing, mobile devices, scheduled monitoring, and video capture of the load.
Owner note: GTmetrix is owned by Carbon60. Development is steady, the roadmap is public, and the product hasn’t materially changed direction in years. That is rare in this category.
- Best: visual reports, waterfall analysis, sharable PDFs for stakeholders
- Skip if: you want field data; GTmetrix is lab-only
05 WebPageTest
The deepest free testing tool. Filmstrip view, network throttling profiles, and connection-type emulation that the others can’t match.
Buy if: you’ve outgrown PageSpeed Insights and need to see exactly when each render frame happened. Skip if: the dashboard density makes you bounce.
WebPageTest is owned by Catchpoint. The free version gives you 10+ test locations and throttling presets that simulate 3G or slow 4G. You also get multi-run comparisons and a filmstrip that shows the page rendering frame by frame. Nothing else free does the filmstrip view as well. It’s also the tool whose output gets cited the most in technical writeups by performance engineers.
The interface is dense. New users get overwhelmed. Once you know what you’re looking at, it’s the tool you keep coming back to. Time to First Byte, Start Render, Speed Index, and Largest Contentful Paint, all plotted on the filmstrip timeline.
- Best: diagnosing exactly which asset blocked rendering or which third-party script tanked INP
- Skip if: you only want a number to share. Go to PageSpeed Insights or GTmetrix instead.
06 Chrome DevTools (Lighthouse panel)
Same Lighthouse engine as PageSpeed Insights. Runs locally on whatever device you’re testing from. No URL submission, no rate limits.
Buy if: you’re already in DevTools debugging. Skip if: you need shareable reports or field data.
Chrome DevTools includes Lighthouse natively. Open DevTools, click the Lighthouse panel, hit Analyze. Same Lighthouse engine version that ships in PageSpeed Insights. Same audits, same scores. The big difference: The test runs from your machine, on your local network, with your DevTools throttling profile. That can be a feature (test against a staging environment behind auth) or a bug (your home connection isn’t representative).
The Performance panel is also worth knowing. Where Lighthouse gives you a score, the Performance panel shows you the millisecond-by-millisecond flame chart of what the browser was actually doing. INP regressions almost always show up here as long-running event handlers in the call tree.
- Best: debugging what Lighthouse flagged, testing against staging behind auth
- Skip if: you need an external test from a different location or device
07 Lighthouse CI
Open-source Lighthouse runner that fits into GitHub Actions, GitLab CI, or any pipeline. Catches performance regressions before they ship.
Buy if: you ship to production weekly or faster. Skip if: the team won’t actually look at the report.
Lighthouse CI runs the same audits as PageSpeed Insights, but inside your CI pipeline. Set thresholds (LCP under 2.5s, INP under 200ms), fail the build if a PR regresses them. The reports get stored on the LHCI server, comparable across commits.
Setup takes an afternoon if you’ve never used it. The official getting-started uses Docker and a YAML config. Teams running on Vercel or Netlify get simpler integration paths. We’ve seen too many teams ship Lighthouse CI as a green badge nobody reads. That’s the failure mode.
- Best: teams shipping multiple times a week who need a budget enforcement layer
- Skip if: nobody on the team will own performance-budget review
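A hedged sketch of what the config can look like. The assertion keys follow Lighthouse CI’s documented format as we understand it, `staging.example.com` is a placeholder, and note that Lighthouse lab runs can’t measure INP directly, so total-blocking-time stands in as the usual lab proxy for interaction latency:

```yaml
# Hypothetical .lighthouserc.yml -- check the current Lighthouse CI
# docs before copying key names.
ci:
  collect:
    url:
      - https://staging.example.com/   # placeholder URL
    numberOfRuns: 3                    # take the median of three runs
  assert:
    assertions:
      largest-contentful-paint:
        - error
        - maxNumericValue: 2500        # LCP budget in ms
      # Lab runs can't measure INP directly; total-blocking-time is
      # the usual lab proxy for interaction latency.
      total-blocking-time:
        - error
        - maxNumericValue: 300
  upload:
    target: temporary-public-storage
```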
08 DebugBear (the worth-knowing paid option)
Continuous monitoring done right. RUM data plus Lighthouse audits in one dashboard, with alerting that doesn’t drown you.
Buy if: you’ve gone past one-off testing and need monitoring. Skip if: you’re a single site running quarterly checks.
DebugBear sits in the same category as SpeedCurve and Calibre. The free tier is a 14-day trial. Entry pricing starts in the low double digits per month for small teams. What sets it apart for our reader: The RUM (real-user monitoring) data maps cleanly onto Core Web Vitals thresholds. The alerts are scoped tightly enough that you don’t get noise.
Worth flagging: this is paid software. WikiWalls hasn’t been paid to recommend it. The reason it’s on the list: The 2016 version of this article spent space on free tools that have since gone away. There’s value in naming one paid tool that fills the resulting gap honestly.
- Best: small teams or agencies that need to monitor 5-50 sites continuously
- Skip if: you have one site and don’t need a dashboard you check weekly
09 What we cut from the list and why
- YSlow. Yahoo’s classic browser extension. Last meaningful repository activity was around 2018. It does not measure Core Web Vitals. Modern Chrome and Firefox versions don’t run the extension reliably. Replace with Chrome DevTools’ Lighthouse panel or PageSpeed Insights.
- Varvy Speed Test. Went offline several years ago. The domain is no longer hosting an active testing tool.
- Page Scoring. Minor utility, no longer maintained as a public testing tool.
- Dareboost (standalone). Acquired into Splunk Synthetic Monitoring, which moved into Cisco’s portfolio after the Splunk acquisition closed. The original Dareboost product as a standalone freemium speed checker is gone. The functionality lives on inside an enterprise observability stack.
- Pingdom Tools. Still works as a URL you can paste into. SolarWinds owns it. Active development has slowed compared to the era when it competed head-on with GTmetrix. We don’t recommend it as a primary tool anymore but include it because it’s still part of the public free testing layer.
- Sucuri Load Time Tester. Still functional but bare-bones compared to the alternatives. Useful as a quick “is the site responding from this region” check. Not the tool you optimize against.
- Uptrends. Pivoted into enterprise synthetic monitoring. The free quick scan still exists, but the company’s center of gravity is now SaaS uptime monitoring rather than developer-facing speed tests.
10 Common mistakes when running these tools
- Running once and trusting the number. Run three times minimum, take the median. Network conditions vary. CDN caches warm. Ad scripts load differently across runs.
- Testing only the homepage. The homepage is usually the most-cached page. Test the slowest page (a heavy product page, a long-form blog post, a search results page) to find the real ceiling.
- Optimizing the score, not the experience. A site that scores 95 on Lighthouse but feels janky to users is failing where it counts. INP is the metric that catches this. LCP isn’t.
- Ignoring mobile. 60-70% of traffic is mobile for most sites in our reader’s space. Test mobile first; desktop is secondary.
- Treating tool scores as directly comparable. PageSpeed Insights, GTmetrix, and WebPageTest can return scores 30+ points apart for the same URL. They use different tests and different thresholds. Pick one as your reference. Cross-check with another.
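The run-three-times advice is easy to script when you collect metric values per run. A minimal sketch with made-up numbers; `median_of_runs` is our own helper, not part of any tool’s API:

```python
# Collapse repeated test runs into one median value per metric,
# instead of trusting a single noisy run.
from statistics import median

def median_of_runs(runs: list[dict]) -> dict:
    """Return the per-metric median across repeated runs."""
    metrics = runs[0].keys()
    return {m: median(r[m] for r in runs) for m in metrics}

# Three illustrative runs of the same URL (LCP/INP in ms, CLS unitless):
runs = [
    {"LCP": 2400, "INP": 180, "CLS": 0.08},
    {"LCP": 3100, "INP": 260, "CLS": 0.08},
    {"LCP": 2550, "INP": 210, "CLS": 0.12},
]
print(median_of_runs(runs))   # -> {'LCP': 2550, 'INP': 210, 'CLS': 0.08}
```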
11 Your decision tree
- Just want a number? Go to Google PageSpeed Insights.
- Want a cleaner waterfall? GTmetrix free tier.
- Need to debug a specific render block? WebPageTest filmstrip view, or Chrome DevTools Performance panel.
- Shipping production weekly? Add Lighthouse CI to your pipeline this sprint.
- Monitoring 5+ sites continuously? Trial DebugBear or one of its enterprise peers.
WikiWalls verdict. Start with PageSpeed Insights for the Core Web Vitals signal. Pair it with GTmetrix for a cleaner waterfall when something looks wrong. Add WebPageTest when those two haven’t told you what’s blocking render. Bring in Lighthouse CI when the team starts shipping faster than you can manually re-test. Skip everything that hasn’t been actively maintained recently.
12 FAQ
Q: Is PageSpeed Insights still the most accurate speed test tool?
It’s the most authoritative for Core Web Vitals because it’s the only public tool that includes Chrome User Experience Report field data. It is not the deepest tool. WebPageTest gives you more diagnostic surface. But for the score Google’s algorithm cares about, PageSpeed Insights is the source.
Q: What replaced YSlow?
Lighthouse, which ships inside Chrome DevTools, PageSpeed Insights, GTmetrix, and Lighthouse CI. The original YSlow rules from Yahoo are largely subsumed by Lighthouse’s audits. There is no maintained successor to YSlow as a standalone extension.
Q: What is INP and how do I test for it?
Interaction to Next Paint, the Core Web Vital that replaced First Input Delay on March 12, 2024. It measures the longest interaction latency the user experiences across the whole page session. PageSpeed Insights, Chrome DevTools’ Performance panel, GTmetrix, and WebPageTest all support INP testing. Field data via the Chrome User Experience Report is available for sites with enough traffic.
Q: Do I need a paid tool like DebugBear or SpeedCurve?
Only when one-off testing isn’t enough. If you’re running speed checks weekly or quarterly, the free tools above cover it. The case for paid monitoring is when performance regressions ship without you knowing about them, or when you’re tracking 10+ pages per site over time. DebugBear and SpeedCurve both publish their pricing publicly.
Originally published 2018. Substantially updated recently with INP-replaces-FID Core Web Vitals coverage, Lighthouse CI and DebugBear additions, and the dead-tools graveyard (YSlow, Varvy, Page Scoring, standalone Dareboost).