
Headless browsers in 2026: Playwright vs Puppeteer vs Selenium, and where they should run


tl;dr: Playwright wins new projects. Puppeteer wins Chromium-only tooling. Selenium wins legacy QA matrices. And somewhere around 20 parallel sessions, none of them should be running on your laptop.


The question most teams get wrong

Teams obsess over which automation library to pick. Playwright, Puppeteer, or Selenium. They argue about it in design docs.

The library is the easy part. The hard part is where the browser actually runs.

Local Chromium works for 1 developer, 5 scripts, and a weekend project. It falls over the moment you add CI, parallel workers, anti-bot targets, or an AI agent loop. That's the decision worth spending time on.

This article settles the library question fast, then gets to the part that matters.


Playwright vs Puppeteer vs Selenium

Same primitive, three takes on it. Short verdicts first, details after.

  • Playwright. Default for new projects. Cross-browser, auto-waits, best DX.

  • Puppeteer. Pick this if you are Chromium-only and want a thin dependency.

  • Selenium. Pick this if you have Java or C# teams, or Safari in your test matrix.

Selenium (2004)

Selenium website screenshot

The standard. W3C WebDriver. Every browser, every language.

Slow. Verbose. Flaky waits. Still dominant in enterprise QA because it has 20 years of tooling around it.

Use Selenium when your org is already on it. Do not start a greenfield project with it in 2026.

Puppeteer (2017)

Puppeteer website screenshot

Google's Node library. Talks Chrome DevTools Protocol directly, so it is fast. Network interception, coverage, tracing, runtime eval. All first class.

Chromium only. Firefox is experimental. WebKit is not happening.

Use Puppeteer for Node-only Chromium scraping or internal tooling where you want a small dependency and direct CDP access.

Playwright (2020)

Playwright website screenshot

Microsoft's answer to "Puppeteer but for everything." Ships Chromium, Firefox, and WebKit in one install. Bindings for JS, TS, Python, .NET, Java.

Auto-waiting is the killer feature. You stop writing explicit waitFor calls and your tests stop being flaky.
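Under the hood, auto-waiting is just retrying a readiness check until it passes or a deadline expires. A minimal sketch of the pattern in plain JavaScript (this `waitFor` helper is illustrative, not Playwright's API):

```javascript
// Conceptual sketch of auto-waiting: poll a condition until it holds or times out.
// Playwright runs checks like this (visible, stable, enabled) before every action.
async function waitFor(predicate, { timeout = 5000, interval = 100 } = {}) {
  const deadline = Date.now() + timeout;
  do {
    if (await predicate()) return;
    await new Promise((resolve) => setTimeout(resolve, interval));
  } while (Date.now() < deadline);
  throw new Error(`waitFor: condition not met within ${timeout} ms`);
}
```

Playwright runs the equivalent loop inside every locator action, which is why the explicit waits disappear from your test code rather than moving somewhere else.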

Use Playwright for new projects. Almost always.

Side-by-side

| | Playwright | Puppeteer | Selenium |
|---|---|---|---|
| First release | 2020 | 2017 | 2004 |
| Protocol | CDP + custom bridges | CDP | W3C WebDriver |
| Browsers | Chromium, Firefox, WebKit | Chromium | All major |
| Languages | JS, TS, Python, .NET, Java | JS, TS | JS, Python, Java, C#, Ruby, Kotlin |
| Auto-wait | Yes | No | No |

Local Chromium has a ceiling

Every library in that table launches Chromium the same way: chromium.launch() or equivalent. On your machine.

That works until it doesn't. Here is where it stops.

  • Parallelism. Chromium eats 300 to 800 MB per process. At the high end, 20 parallel sessions need 16 GB of RAM. CI runners do not give you that.

  • CI cost. Provisioning Chromium on every GitHub Actions run adds 5 to 15 seconds. Across 200 runs a day, that is up to an hour of billed CI time doing nothing.

  • IP reputation. Scraping from your office IP gets you flagged in a week. From a datacenter IP, faster.

  • Detection. Cloudflare, DataDome, and PerimeterX fingerprint the environment. Headless mode has a detectable signature. Headed Chromium on a framebuffer does not.

  • Agent loops. Every AI agent run wants a fresh, isolated browser. Spinning that up on your infra is a platform project, not a weekend.
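The parallelism ceiling is usually the first one you hit. If you do stay local for a while, capping concurrency is the cheap mitigation. A minimal sketch (`runLimited` is a hypothetical helper, not part of any of these libraries):

```javascript
// Run async tasks with at most `limit` in flight at once, e.g. to cap
// how many Chromium processes a scraping job launches in parallel.
async function runLimited(tasks, limit) {
  const results = new Array(tasks.length);
  let next = 0;
  async function worker() {
    while (next < tasks.length) {
      const i = next++; // single-threaded JS: safe between awaits
      results[i] = await tasks[i]();
    }
  }
  await Promise.all(
    Array.from({ length: Math.min(limit, tasks.length) }, worker)
  );
  return results;
}
```

With `limit` set to 4 or 5 you stay under a laptop's memory budget, at the cost of wall-clock time. It buys you months, not years.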

At that point you stop running browsers and start renting them.


What a cloud browser actually is

A cloud browser is real Chromium running on someone else's box, exposed as a CDP URL over WebSocket.

Your code does not change. You swap chromium.launch() for chromium.connectOverCDP(url). Every Playwright or Puppeteer API you already use keeps working. Network interception, tracing, page.evaluate, downloads. All of it.

```javascript
import { chromium } from "playwright";

// Ask the provider for a session; the response carries a CDP WebSocket URL.
const res = await fetch("https://<provider>/api/v1/sessions", {
  method: "POST",
  headers: { Authorization: "Bearer YOUR_KEY" },
});
const { cdpUrl } = await res.json();

// Connect to the remote browser instead of launching a local one.
const browser = await chromium.connectOverCDP(cdpUrl);

// Providers usually open a default context and page; create them if not.
const context = browser.contexts()[0] ?? (await browser.newContext());
const page = context.pages()[0] ?? (await context.newPage());

await page.goto("https://example.com");
console.log(await page.title());
await browser.close();
```

That is the whole integration. One line changes.
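One thing local launches never taught you: the connect step now crosses a network, so it can fail transiently. A small retry wrapper is worth having (`connectWithRetry` is a hypothetical helper; pass it `chromium.connectOverCDP` or `puppeteer.connect` wrapped in a function):

```javascript
// Retry a flaky connect call with exponential backoff before giving up.
async function connectWithRetry(connect, url, { attempts = 3, baseDelayMs = 500 } = {}) {
  let lastError;
  for (let attempt = 0; attempt < attempts; attempt++) {
    try {
      return await connect(url);
    } catch (err) {
      lastError = err;
      // Back off: 500 ms, 1000 ms, 2000 ms, ...
      await new Promise((r) => setTimeout(r, baseDelayMs * 2 ** attempt));
    }
  }
  throw lastError;
}
```

Usage would look like `const browser = await connectWithRetry((u) => chromium.connectOverCDP(u), cdpUrl);` with the rest of the script unchanged.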

Local Chromium vs cloud browser diagram


How the three main providers stack up

Three names come up in every 2026 evaluation: Bug0 Browsers, Browserbase, and Browserless. They all solve the same primitive. The differences are pricing, observability, and whether they push you into a proprietary SDK.

| | Bug0 Browsers | Browserbase | Browserless |
|---|---|---|---|
| Entry price | $0.15/hour, per-minute billing | $39/mo (Hobby plan) | $200/mo (Cloud plan) |
| Free tier | 10 browser-minutes, no card | Limited trial | 7-day trial |
| Live session preview | Yes, noVNC URL on every session, free tier included | Yes | Paid tier only |
| Proprietary SDK required | No, standard CDP | Stagehand available | No |
| Integration paths | SDK, CLI, raw HTTP | SDK, REST | SDK, REST |
| Idle billing when session closes | None | Subscription applies | Subscription applies |
| Works with vanilla Playwright | Yes | Yes | Yes |
| Works with vanilla Puppeteer | Yes | Yes | Yes |

Three things to actually care about.

Pricing model. Browserbase and Browserless sell monthly tiers. Good if your load is steady. Bad if it is spiky or you are still figuring out what you need. Per-minute beats tiers when usage is not predictable yet.
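You can put a number on "spiky vs steady." Using the listed prices, the break-even is just the flat tier divided by the hourly rate (a back-of-envelope sketch, not any provider's official calculator):

```javascript
// Hours of browser time per month at which a flat monthly tier
// starts beating per-hour billing.
function breakEvenHours(flatMonthlyUSD, perHourUSD) {
  return flatMonthlyUSD / perHourUSD;
}

console.log(breakEvenHours(39, 0.15));  // ≈ 260 hours/month vs the $39 tier
console.log(breakEvenHours(200, 0.15)); // ≈ 1333 hours/month vs the $200 tier
```

Below roughly 260 browser-hours a month, $0.15/hour billed per minute is cheaper than a $39 tier; above it, the tier wins. Most teams evaluating cloud browsers have no idea which side of that line they are on yet, which is the argument for starting on per-minute billing.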

Lock-in. Browserbase pushes Stagehand, an AI-automation layer on top of Playwright. It is well built. It also couples your code to one vendor. Bug0 Browsers and Browserless stay on standard CDP. If you want out, you change one URL.

Observability. AI agents misbehave. Scraping jobs get blocked. You want to watch the session live when it happens. Browserless gates live preview behind paid tiers. Bug0 Browsers ships a noVNC preview URL on every session response including the free tier, which is the right default for agent work.


Picking one without overthinking it

Start with Bug0 Browsers. Per-minute billing, live preview on every session including the free tier, standard CDP with no proprietary SDK, and three integration paths (SDK, CLI, raw HTTP). That covers the shape of most browser automation and AI agent workloads in 2026.

Pick Browserbase if you want Stagehand's AI primitives layered on top and you can commit to a monthly plan.

Pick Browserless if you are already on it and the tiers match your load.

For any of them, the integration is the same three lines.


What MCP and AI agents change

The 2026 shift is agents. Claude, Cursor, and ChatGPT now run multi-step tool calls, and one of the most requested tools is "browse the web." Model Context Protocol (MCP) became the wire format for that.

Playwright MCP exposes 25+ browser tools to any MCP-capable client. It uses accessibility tree snapshots instead of screenshots: 2-5 KB per interaction instead of 500 KB-2 MB, which works out to orders of magnitude fewer tokens per step.

AI agent cloud browser MCP loop diagram

Pair it with a cloud browser and the agent loop gets clean. Agent calls create_session, gets a CDP URL, drives the browser through Playwright MCP, tears the session down. Every run is isolated. No cookie leakage between agents. No local Chromium eating your laptop.
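That loop is worth wrapping so teardown happens even when the agent crashes mid-run, since an orphaned session can otherwise sit open until the provider's idle timeout. A sketch against a generic session API (`createSession` and `deleteSession` are placeholders for your provider's endpoints):

```javascript
// Guarantee session teardown even if the agent's work function throws,
// so a crashed run does not leave a remote browser session dangling.
async function withSession(createSession, deleteSession, work) {
  const session = await createSession(); // -> { id, cdpUrl, ... }
  try {
    return await work(session);
  } finally {
    await deleteSession(session.id);
  }
}
```

An agent run then becomes `await withSession(create, del, async (s) => { /* connect to s.cdpUrl, drive the browser */ })`, and cleanup is structural instead of something each agent has to remember.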

One feature that matters here and gets overlooked: live preview. When an agent misbehaves you want to watch it, not reconstruct it from logs. Check the provider's free tier before you commit. Bug0 Browsers returns the preview URL in the create-session response by default, and ships a copy-ready MCP prompt for Cursor, Claude, and ChatGPT that wires the agent to a cloud browser in one paste.


FAQs

What is a headless browser?

A headless browser is real Chromium or Firefox running without a visible window. It parses HTML, runs JavaScript, and fires events the same way your desktop browser does. You control it programmatically over the Chrome DevTools Protocol or WebDriver.

Is Playwright better than Puppeteer in 2026?

For new projects, yes. Playwright does cross-browser, auto-waits, and has wider language bindings. Puppeteer stays relevant when you are Chromium-only and want a thin Node dependency.

Is Selenium dead?

No. It has the largest installed base in enterprise QA and the broadest browser coverage, including Safari. Greenfield projects should start with Playwright. Existing Selenium suites do not need to migrate.

Can I run Playwright against a cloud browser without rewriting tests?

Yes. Swap chromium.launch() for chromium.connectOverCDP(cdpUrl). The rest of your test code is untouched.

When should I stop running browsers locally?

Three signals. You are running more than 5 parallel sessions in CI. You are getting IP-blocked on scraping targets. You are building AI agents that need isolated sessions. Any one of those, move to a cloud browser.

Is web scraping with a headless browser legal?

Courts in hiQ v. LinkedIn held that scraping publicly accessible data does not violate the US Computer Fraud and Abuse Act, but the case law is still unsettled, and the answer varies by country, by site terms, and by what you do with the data. Check terms of service and talk to a lawyer for anything commercial.

Can I use Playwright MCP with a cloud browser?

Yes. Playwright MCP accepts a CDP endpoint. Point it at a cloud browser session URL and the MCP server drives the remote Chromium instead of launching a local one. Use this when you want AI agents running browser automation in isolated, disposable sessions.

How much does a cloud browser cost?

Browserbase starts at $39/month. Browserless starts at $200/month. Bug0 Browsers is $0.15/hour billed per minute, with a free 10-minute tier and no card required.


Comments (2)


the local chromium ceiling section is exactly what we hit last year. started with 3-4 parallel workers on a self-hosted runner, everything fine. scaled to 15 and tests started timing out randomly. spent two weeks debugging "flaky selectors" before realizing the runner was just OOMing silently.

the one-line swap from launch to connectOverCDP is the part i wish someone had told me earlier. in my head "moving to cloud browsers" sounded like a weekend migration project. ended up being a 30 minute change.

one question though - for long-running agent loops, how are folks handling session cleanup when the agent crashes mid-run? curious if providers auto-terminate or if you eat the idle time until timeout.

Good one. the stagehand lock-in point is underrated. teams pick it because the AI layer looks magical in a demo, then six months later they're stuck because half their agent logic is written against a proprietary API. plain CDP ages better.