Website Performance · 8 min read

Vercel vs. WordPress: Real Performance Numbers for Local Businesses

We ran the same site on both platforms. The results are not close. Here's the actual data — Core Web Vitals, Lighthouse scores, TTFB, and AI crawler accessibility — that should inform your infrastructure decision.

Zero Click Strategies

February 22, 2026

There's no shortage of opinion on the WordPress versus modern stack debate. What's harder to find is actual data from real sites, tested in identical conditions, with numbers that apply to local service businesses rather than enterprise tech companies. We ran both. Here's what the numbers say.

The Test Setup

Matching Sites for Accurate Comparison

We compared five local service business sites — spanning HVAC, window cleaning, landscaping, plumbing, and window treatments — across both platforms. Each business had both a legacy WordPress site (their existing site, on shared or managed WordPress hosting) and a new Next.js site (rebuilt by our team, deployed on Vercel). Content was equivalent: the same page count, the same number of images, the same volume of text. The only variable was the technical platform.

Testing was conducted using Google PageSpeed Insights (field data from Chrome User Experience Report), Lighthouse CI (lab data, five-run averages), and Google Search Console crawl data. TTFB was measured from six geographic locations using WebPageTest. All tests were run in February 2026, with WordPress sites in their most recently optimized state — caching plugins active, images compressed, unnecessary plugins removed.

Testing Conditions and Tools Used

Mobile testing used a Moto G4 profile with 4G connection simulation — the standard Lighthouse mobile testing condition and roughly equivalent to the median mobile connection speed in the US. Desktop testing used an unthrottled connection. All Lighthouse tests are averages of five consecutive runs to control for variance. PageSpeed Insights field data reflects real user experience over the 28-day collection window preceding testing.
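The five-run averaging described above can be sketched as a small helper. This is our illustration of the methodology, not the article's actual tooling, and the run shapes and field names are invented for the example:

```typescript
// Sketch: averaging several Lighthouse lab runs to control for variance.
// The LighthouseRun shape is a simplification invented for this example.
type LighthouseRun = { lcpMs: number; cls: number; performanceScore: number };

function averageRuns(runs: LighthouseRun[]): LighthouseRun {
  const n = runs.length;
  const sum = runs.reduce(
    (acc, r) => ({
      lcpMs: acc.lcpMs + r.lcpMs,
      cls: acc.cls + r.cls,
      performanceScore: acc.performanceScore + r.performanceScore,
    }),
    { lcpMs: 0, cls: 0, performanceScore: 0 }
  );
  return {
    lcpMs: sum.lcpMs / n,
    cls: +(sum.cls / n).toFixed(3), // CLS is reported to ~3 decimal places
    performanceScore: Math.round(sum.performanceScore / n),
  };
}
```

Averaging (or taking the median of) repeated runs matters because single Lighthouse runs can swing noticeably with CPU and network variance.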

Core Web Vitals: Side by Side

LCP — Where the Gap Is Largest

Largest Contentful Paint is where the performance gap is most pronounced. Across all five sites, the WordPress versions averaged 4.8 seconds on mobile LCP — placing every one of them in Google's “poor” category (above 4 seconds). The Next.js versions averaged 1.2 seconds — well within Google's “good” threshold of under 2.5 seconds. The fastest WordPress site tested came in at 3.1 seconds, still in the “needs improvement” range. The slowest Next.js site was 1.6 seconds — still comfortably inside the good range.
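Google's published thresholds make these buckets mechanical to compute. As a sketch (the thresholds are Google's documented values; the function names are ours):

```typescript
// Rating buckets per Google's published Core Web Vitals thresholds.
type Rating = "good" | "needs improvement" | "poor";

function rate(value: number, good: number, poor: number): Rating {
  if (value <= good) return "good";
  if (value <= poor) return "needs improvement";
  return "poor";
}

// LCP thresholds: good <= 2500ms, poor > 4000ms. CLS: good <= 0.1, poor > 0.25.
const rateLcp = (ms: number) => rate(ms, 2500, 4000);
const rateCls = (score: number) => rate(score, 0.1, 0.25);

console.log(rateLcp(4800)); // WordPress mobile average -> "poor"
console.log(rateLcp(1200)); // Next.js mobile average  -> "good"
```

The same helper shows why the 3.1-second WordPress outlier still rates only “needs improvement.”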

The primary driver of WordPress's LCP lag is render-blocking JavaScript. Even with aggressive caching, the browser must download, parse, and execute multiple plugin JavaScript files before it can render the main content. Next.js server-side rendering means the HTML arrives at the browser fully formed — there's no JavaScript execution chain blocking the first meaningful paint.

TEST RESULTS — MOBILE AVERAGES

Metric                  WordPress    Next.js/Vercel
LCP (mobile)            4.8s         1.2s
CLS                     0.28         0.02
TTFB                    680ms        42ms
Lighthouse Score        38/100       97/100
Core Web Vitals Pass    0/5 sites    5/5 sites

CLS and INP Compared

Cumulative Layout Shift averaged 0.28 across the WordPress sites — a score Google rates as “poor.” The primary causes were theme-injected elements that load asynchronously (cookie banners, chat widgets, sticky headers) and images without explicit dimensions that cause layout reflow when they load. The Next.js sites averaged 0.02 — well within the “good” threshold of 0.1. When you control the rendering process at the framework level, CLS issues that are structural in WordPress simply don't arise.
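For intuition on how those asynchronously injected elements add up, here is a simplified sketch of CLS aggregation. Real CLS groups shifts into session windows (gaps under 1 second, capped at 5 seconds) and reports the worst window; this sketch skips the windowing and just sums shift scores, which is close enough to show why each late-loading banner or unsized image pushes the score up:

```typescript
// Simplified CLS aggregation: sum layout-shift scores, excluding shifts
// caused by recent user input (those don't count against CLS). Real CLS
// additionally uses session windows and takes the worst one.
type LayoutShift = { value: number; hadRecentInput: boolean };

function cumulativeShift(entries: LayoutShift[]): number {
  return entries
    .filter((e) => !e.hadRecentInput)
    .reduce((sum, e) => sum + e.value, 0);
}

// A cookie banner (0.12) plus an unsized hero image (0.16) alone is enough
// to land a page in the "poor" bucket (> 0.25).
console.log(cumulativeShift([
  { value: 0.12, hadRecentInput: false },
  { value: 0.16, hadRecentInput: false },
]));
```

In the browser these entries come from a `PerformanceObserver` watching `layout-shift` entries; the fix on the markup side is reserving space up front with explicit image dimensions.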

Crawler Accessibility and Crawl Budget

Time to First Byte Under Crawl Conditions

TTFB is the metric that most directly affects AI crawler behavior. Googlebot allocates crawl budget based partly on how quickly a site responds — fast-responding sites get more crawl requests per cycle, slow-responding sites get fewer. Across the six test locations, the WordPress sites averaged 680ms TTFB. The Next.js sites averaged 42ms. That's a 16x difference in crawler response time. When Google allocates crawl budget, the Next.js sites are getting an order of magnitude more crawl capacity than their WordPress counterparts.
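TTFB is simple to measure yourself: time from issuing the request to the first response bytes arriving. The sketch below demonstrates the idea against a local test server with an artificial 80ms delay; against a real site you would point the request at its origin instead. The helper name and the delay are our inventions:

```typescript
// Sketch: measure time-to-first-byte with Node's http client.
// For small responses headers and body arrive together, so the first
// "data" event approximates TTFB.
import * as http from "http";
import { AddressInfo } from "net";
import { performance } from "perf_hooks";

function measureTtfb(url: string): Promise<number> {
  return new Promise((resolve, reject) => {
    const start = performance.now();
    http
      .get(url, (res) => {
        res.once("data", () => resolve(performance.now() - start));
        res.on("end", () => resolve(performance.now() - start)); // empty-body fallback
      })
      .on("error", reject);
  });
}

// Demo: a local server that simulates 80ms of server think time.
const server = http.createServer((_req, res) =>
  setTimeout(() => res.end("ok"), 80)
);
server.listen(0, () => {
  const port = (server.address() as AddressInfo).port;
  measureTtfb(`http://127.0.0.1:${port}/`).then((ms) => {
    console.log(`TTFB ~ ${ms.toFixed(0)}ms`);
    server.close();
  });
});
```

Tools like WebPageTest do the same measurement from multiple geographic vantage points, which is what surfaces the CDN-edge advantage in the numbers above.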

JavaScript Execution and Content Availability

AI crawlers — particularly those from Google and Bing — do execute JavaScript, but they allocate limited compute resources to it. A site where the core content is only accessible after JavaScript execution consumes more crawl budget per page and risks having content missed if the crawler reaches its resource limit. Next.js server-side rendering means the complete page content is in the initial HTML response — no JavaScript required. This gives AI crawlers everything they need in the cheapest possible way, which is why Next.js sites get crawled more completely and more frequently.
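A crude way to audit this yourself is to fetch the raw HTML (no JavaScript executed) and check whether the page's key copy is already present. The function and sample markup below are illustrative, not a production crawler check:

```typescript
// Crude crawlability check: does the raw HTML already contain the phrases
// a crawler needs, before any JavaScript runs? Client-rendered pages
// typically ship an empty shell plus a script bundle instead.
function contentInInitialHtml(html: string, phrases: string[]): boolean {
  const text = html.replace(/<script[\s\S]*?<\/script>/gi, ""); // ignore JS payloads
  return phrases.every((p) => text.toLowerCase().includes(p.toLowerCase()));
}

// Invented examples of a server-rendered vs client-rendered response:
const ssrHtml = `<html><body><h1>Emergency HVAC Repair in Austin</h1></body></html>`;
const csrHtml = `<html><body><div id="root"></div><script>/* app bundle */</script></body></html>`;

console.log(contentInInitialHtml(ssrHtml, ["HVAC Repair", "Austin"])); // true
console.log(contentInInitialHtml(csrHtml, ["HVAC Repair", "Austin"])); // false
```

If the check fails on your own site's raw HTML, a crawler has to spend rendering budget to see that content at all.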

Schema Markup Quality Comparison

Plugin-Generated vs Hand-Coded Validation Results

We ran every homepage through Google's Rich Results Test. WordPress sites using Yoast SEO or RankMath for schema generation produced an average of 3.4 errors or warnings per page. Common issues included missing required properties (telephone not in E.164 format, openingHoursSpecification incorrectly structured), conflicting schema objects from multiple active plugins, and generic entity types that didn't reflect the actual business category. The hand-coded Next.js schema implementations produced zero errors and zero warnings across all five sites.
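To make the error categories concrete, here is a hand-coded JSON-LD sketch of the kind that validated cleanly. The business details are invented; what matters are the shapes that plugin output got wrong above: an E.164 telephone, a structured openingHoursSpecification, and a specific schema.org type (HVACBusiness) rather than a generic entity:

```typescript
// Illustrative hand-coded JSON-LD for a local service business.
// All business details are fictional.
const schema = {
  "@context": "https://schema.org",
  "@type": "HVACBusiness", // specific schema.org type, not generic LocalBusiness
  name: "Example Heating & Air",
  telephone: "+15125550142", // E.164: "+" then country code and number, no punctuation
  address: {
    "@type": "PostalAddress",
    streetAddress: "123 Example Rd",
    addressLocality: "Austin",
    addressRegion: "TX",
    postalCode: "78701",
    addressCountry: "US",
  },
  openingHoursSpecification: [
    {
      "@type": "OpeningHoursSpecification",
      dayOfWeek: ["Monday", "Tuesday", "Wednesday", "Thursday", "Friday"],
      opens: "08:00",
      closes: "18:00",
    },
  ],
};

// Emitted once per page, e.g. in the document head:
const jsonLd = `<script type="application/ld+json">${JSON.stringify(schema)}</script>`;
console.log(jsonLd);
```

Because there is exactly one schema object with one entity type, there is nothing for a validator to flag and no second plugin's output to conflict with.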

Rich Results Test Scores Compared

Four of the five WordPress sites had schema errors that disqualified them from at least one rich result type. The business with three competing SEO plugins active had schema conflicts severe enough that Google couldn't determine the correct entity type for the homepage — it was alternately interpreted as a LocalBusiness and a WebSite depending on which plugin's output was processed first. All five Next.js sites qualified for every rich result type their schema was designed to produce.

What the Numbers Mean for AI Search Visibility

The Citation Correlation

Across the five businesses in our test group, none of the WordPress versions appeared in Google AI Overviews for their primary service-plus-city queries. All five Next.js versions appeared in AI Overviews for at least two of their primary queries within 30 days of launch. The pattern is consistent: fast sites with valid schema get cited; slow sites with schema errors do not. The correlation is strong enough that we now consider Core Web Vitals performance the single most important factor in AI Overview eligibility for local businesses.

Making the Right Infrastructure Decision

The data is clear. WordPress on shared or managed hosting is structurally unsuited to the performance requirements of AI search visibility in 2026. The platform was built for a different era of search — one that doesn't weight LCP, CLS, TTFB, and schema validity as heavily as the current environment does. Next.js on Vercel consistently meets every threshold that AI citation requires. For local service businesses where a single additional customer per month justifies the migration cost, the decision is straightforward.

THE DATA IS CLEAR

Your Platform Is Either Working for You or Against You

The performance gap between platforms is too large to close with optimization. Let's run your numbers and show you what the right foundation looks like for your business.