Synthetic Lighthouse runs lie. Real users on real devices on real networks tell the truth — and that's what Google ranks you on. Nevision captures LCP, INP, CLS, TTFB, and FCP from every visitor, segmented by page, device, country, and connection.
LCP, INP, CLS, TTFB, FCP, DOMContentLoaded, full Page Load. Reported via the official web-vitals library, faithful to Google's exact thresholds.
Daily and 7-day rolling baselines per page. We flag when any metric crosses the 'good' threshold or worsens by more than 20% versus baseline.
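That alert rule reduces to a simple check per metric. A minimal sketch, assuming p75 values in milliseconds for time-based metrics and a unitless score for CLS (the function and constant names are illustrative, not Nevision's API):

```typescript
// 'Good' thresholds per Google's Core Web Vitals guidance.
const GOOD: Record<string, number> = {
  LCP: 2500, // ms
  INP: 200,  // ms
  CLS: 0.1,  // unitless
  TTFB: 800, // ms
  FCP: 1800, // ms
};

// Flag a page when its p75 crosses the 'good' threshold, or worsens
// by more than 20% versus the rolling baseline.
function shouldAlert(metric: string, p75: number, baselineP75: number): boolean {
  return p75 > GOOD[metric] || p75 > baselineP75 * 1.2;
}
```

For example, a p75 LCP of 2.0s is still "good" in absolute terms, but alerts if the 7-day baseline was 1.5s, because it regressed by more than 20%.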
Aggregate is meaningless when one slow page pulls down the whole site average. We show you exactly which URL has the worst INP and on what device.
Google ranks pages on Core Web Vitals. Specifically: 75th-percentile LCP under 2.5s, 75th-percentile INP under 200ms, and 75th-percentile CLS under 0.1. If your p75 misses any of those thresholds, your pages can be downgraded in search results. The catch: Google measures real users (the CrUX dataset), not your local Lighthouse score. Synthetic tests on a fast laptop tell you nothing about how your site performs on a 4-year-old Android in São Paulo.
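Those thresholds band each metric into the same good / needs-improvement / poor ratings the web-vitals library reports. A minimal sketch of the banding (the "poor" boundaries are Google's published ones, added here for completeness):

```typescript
type Rating = "good" | "needs-improvement" | "poor";

// [good, poor] boundaries per Google's Core Web Vitals thresholds:
// at or below the first value is 'good'; above the second is 'poor'.
const BOUNDS: Record<string, [number, number]> = {
  LCP: [2500, 4000], // milliseconds
  INP: [200, 500],   // milliseconds
  CLS: [0.1, 0.25],  // unitless layout-shift score
};

function rate(metric: string, value: number): Rating {
  const [good, poor] = BOUNDS[metric];
  return value <= good ? "good" : value <= poor ? "needs-improvement" : "poor";
}
```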
Real User Monitoring (RUM) closes that gap. The recorder captures the same metrics Google uses, from the same devices Google measures, and shows you the same percentile. If your RUM p75 INP is 300ms, your CrUX p75 INP is also 300ms (give or take noise) — and your search ranking is suffering for it.
Your homepage might be fast. Your /products page might be slow. Site-wide averages hide this. Nevision groups RUM data by URL pathname and shows you the per-page p75 for every page that gets meaningful traffic. Sort by worst INP, jump straight to the page hurting your rankings, fix it, watch the metric recover.
Beyond Core Web Vitals, you can record domain-specific timings: nevision.recordMetric("checkout.time_to_pay", durationMs). Useful for tracking real-user latency of business-critical interactions like search, checkout, or signup completion.
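A sketch of how such a timing might be captured. The `timed` helper and its `record` callback are illustrative, not part of Nevision's API; only `nevision.recordMetric(name, durationMs)` appears in the docs above:

```typescript
// Illustrative helper (not Nevision's API): time an async interaction
// and report its real-user duration via a recording callback such as
// (name, ms) => nevision.recordMetric(name, ms).
async function timed<T>(
  name: string,
  fn: () => Promise<T>,
  record: (name: string, ms: number) => void,
): Promise<T> {
  const start = performance.now();
  try {
    return await fn(); // run the business-critical interaction
  } finally {
    record(name, performance.now() - start); // reports even if fn throws
  }
}
```

In the browser you would pass `nevision.recordMetric` as the `record` argument, wrapping something like the checkout submit handler.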
We use Google's official web-vitals library. LCP, INP, CLS are reported when measurable; TTFB and FCP fire on every page load.
Metrics are bucketed by URL pathname (with query strings stripped) and device class. p75 is computed daily — the same percentile Google uses.
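The p75 itself is a small computation over each day's bucket of samples. A sketch using the nearest-rank method (bucketing and storage are simplified away):

```typescript
// 75th percentile of one day's samples for a single
// (pathname, device class) bucket, via nearest-rank.
function p75(samples: number[]): number {
  const sorted = [...samples].sort((a, b) => a - b);
  // Nearest-rank: the value at position ceil(0.75 * n), 1-indexed.
  return sorted[Math.max(0, Math.ceil(0.75 * sorted.length) - 1)];
}
```

p75 rather than an average means one pathological outlier can't dominate, but a quarter of your users seeing worse than the reported number is impossible.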
View 30-day trends per page. Get an email if any page's p75 INP regresses past 200ms, p75 LCP past 2.5s, or p75 CLS past 0.1.
| Feature | Nevision | SpeedCurve | Calibre | Datadog RUM |
|---|---|---|---|---|
| Free RUM views/month | 50,000 | — | — | — |
| Paid plan starts at | $12/mo | $114/mo | $71/mo | $15/mo + usage |
| Core Web Vitals (LCP/INP/CLS) | ✓ | ✓ | ✓ | ✓ |
| Per-page breakdowns | ✓ | ✓ | ✓ | ✓ |
| Regression alerts | ✓ | ✓ | ✓ | ✓ |
| Includes session replay | ✓ | — | — | Add-on |
| Includes error tracking | ✓ | — | — | Separate product |
| Synthetic monitoring (Lighthouse) | — | ✓ | ✓ | Separate product |
Comparison based on publicly listed pricing and features as of April 2026.
Functionally yes — we use the official web-vitals library that Google publishes, with the same thresholds and the same percentile (p75). Sample sizes will differ since we sample your visitors and CrUX samples Chrome users globally, but trends move together.
No. The web-vitals library is ~3KB, loaded with the recorder, and metrics are queued and sent in beacons after page load. Zero impact on LCP, INP, or CLS itself.
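A sketch of that queue-and-beacon pattern. The payload shape and flush triggers are assumptions; in the browser, `send` would wrap `navigator.sendBeacon(url, body)` and `flush` would be wired to `pagehide`/`visibilitychange`:

```typescript
type Metric = { name: string; value: number };

// Buffer metrics in memory and ship them in batches, so reporting
// never competes with the page's own rendering work.
function createQueue(send: (body: string) => void, max = 20) {
  const buf: Metric[] = [];
  return {
    push(m: Metric) {
      buf.push(m);
      if (buf.length >= max) this.flush(); // avoid unbounded growth
    },
    flush() {
      if (buf.length) send(JSON.stringify(buf.splice(0))); // drain the buffer
    },
  };
}
```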
We strip query strings by default so /products?id=42 and /products?id=99 aggregate together. You can configure path normalization rules in the dashboard for dynamic routes like /users/[id].
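Conceptually, normalization is query-string stripping plus an ordered list of pattern rules. A sketch — the rule syntax here is illustrative, not the dashboard's actual configuration format:

```typescript
// Illustrative normalization rules; the dashboard's real rule
// syntax may differ. First matching rule wins.
const rules: Array<{ pattern: RegExp; replacement: string }> = [
  { pattern: /^\/users\/[^/]+$/, replacement: "/users/[id]" },
  { pattern: /^\/orders\/\d+$/, replacement: "/orders/[id]" },
];

function normalizePath(url: string): string {
  const path = url.split("?")[0].split("#")[0]; // strip query string and hash
  const rule = rules.find((r) => r.pattern.test(path));
  return rule ? rule.replacement : path;
}
```

Without a rule, every user ID would become its own bucket and no single page would accumulate enough samples for a meaningful p75.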
Yes — daily summary email when any page's p75 LCP/INP/CLS crosses the 'good' threshold or worsens by 20% versus the previous 7-day baseline.
Not yet. Synthetic Lighthouse is on the roadmap for late 2026. We focus on RUM first because it's what Google actually ranks on.