
Browser persistence pools: eliminating MFA fatigue

Keeping authenticated sessions warm across scraping runs

If your scrapers hit sites protected by MFA, you already know the problem: every run either burns a one-time code, irritates a human, or both. The fix isn’t better automation around MFA — it’s making sure you authenticate as rarely as possible.

The answer we landed on is a browser persistence pool: a fixed-size pool of long-lived browser contexts, each already authenticated to one or more target sites, that scrapers borrow from and return to. A well-warmed pool can run a month of daily scrapes from a single MFA event.
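The full lease protocol is beyond this teaser, but the borrow/return cycle itself is simple. Here is a minimal in-memory sketch; the names (`PersistencePool`, `acquire`, `release`) are illustrative, not the production API:

```typescript
interface Entry {
  id: string;
  leasedBy: string | null; // scraper currently holding the lease, if any
}

class PersistencePool {
  constructor(private entries: Entry[]) {}

  // Hand out the first entry nobody holds; null if the pool is exhausted.
  acquire(scraperId: string): Entry | null {
    const free = this.entries.find((e) => e.leasedBy === null);
    if (!free) return null;
    free.leasedBy = scraperId;
    return free;
  }

  // Return an entry so the next scraper can reuse its warm session.
  release(entryId: string): void {
    const entry = this.entries.find((e) => e.id === entryId);
    if (entry) entry.leasedBy = null;
  }
}

const pool = new PersistencePool([
  { id: "ctx-1", leasedBy: null },
  { id: "ctx-2", leasedBy: null },
]);
const lease = pool.acquire("nightly-prices");
```

The point of the fixed size is back-pressure: when every context is leased, new scrape jobs wait instead of triggering a fresh login (and a fresh MFA prompt).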

What’s in a pool

A pool entry wraps a persistent Playwright BrowserContext and tracks:

type SiteId = string;
type ScraperId = string;
type SessionStatus = 'valid' | 'stale' | 'expired'; // illustrative states

interface PoolEntry {
  id: string;
  userDataDir: string;               // persistent profile backing the context
  siteSessions: Record<SiteId, SessionStatus>;
  leasedBy: ScraperId | null;        // scraper currently holding the lease
  leasedAt: number | null;           // epoch ms when the lease started
  lastHealthCheck: number;           // epoch ms of the last probe
}
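One way these fields get used before checkout: decide whether the entry's sessions need re-probing at all. A sketch, with a hypothetical helper name and threshold (only `lastHealthCheck` comes from the interface above):

```typescript
// An entry's probe is considered fresh for maxAgeMs; the 5-minute default
// is illustrative and would be tuned per site.
function needsProbe(
  entry: { lastHealthCheck: number },
  now: number,
  maxAgeMs: number = 5 * 60_000
): boolean {
  return now - entry.lastHealthCheck > maxAgeMs;
}
```

Skipping the probe when the last check is recent keeps checkout latency low without trusting sessions indefinitely.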

The health probe is the thing most teams skip. You can’t trust a session just because the cookie hasn’t expired — sites invalidate sessions for all kinds of reasons (IP change, device fingerprint drift, concurrent login). Probe before you hand out.
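A concrete way to probe, under assumptions of ours (the shape of `ProbeResult` and the `/login` redirect convention are illustrative): navigate the context to a page that requires authentication, then inspect where you landed and what status came back. The decision logic is a pure function:

```typescript
// Hypothetical result of navigating the pooled context to an
// authenticated-only page: the URL we ended up on, and the HTTP status.
interface ProbeResult {
  finalUrl: string;
  status: number;
}

// The session looks live only if the probe neither came back
// unauthorized nor bounced us to the site's login page.
function sessionLooksValid(r: ProbeResult, loginPath: string = "/login"): boolean {
  if (r.status === 401 || r.status === 403) return false;
  return !new URL(r.finalUrl).pathname.startsWith(loginPath);
}
```

Keeping the verdict separate from the navigation makes it easy to unit-test per site, since the tell for an invalidated session (redirect target, status code, a marker element) varies from site to site.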

Full post coming soon. Upcoming sections: the lease protocol that prevents double-checkout, how we rotate pool entries without dropping any in-flight scrapers, and the observability story — how we know when the pool is getting stale before it actually fails a run.