Why browser zero-days are not going away, and what Bromure does about it
Apple and Google now spend tens of millions of dollars a year finding and fixing browser bugs. There are still eight to ten actively exploited browser zero-days every year. This post lays out why that math does not change, how Claude Mythos and the "Vulnpocalypse" are about to make it worse, and why a browser built to assume it will be breached is a different kind of product.
You cannot spend your way out of a problem with thirty-five million lines of code. The browser is not safe because it was carefully written; it is the single largest parser of untrusted content running on your computer. Designing around that assumption is the only thing that scales.
A browser, at rest, looks like a well-funded engineering miracle. Google and Apple each employ hundreds of people whose full-time job is to make Chrome and Safari less exploitable. Both companies ship patches in continuous-integration pipelines that measure in minutes. Both run bug-bounty programs with six- and seven-figure payouts. Both publish post-mortems. Both have world-class fuzzing infrastructure. And yet, with perfect regularity, every year, users of the most-patched, most-hardened, most-audited consumer software on the planet learn that a page they opened three days ago had silently taken over their machine.
This post is about why. It is also about why that math is about to get much worse, and what we actually think is the only way out.
The bounty money is real. The bugs are, too.
Apple and Google now pay record amounts for browser bugs.
Google's 2024 Vulnerability Reward Program paid out $12 million across all its products, with Chrome alone accounting for $3.4 million paid to 137 researchers. 2025 blew that away: the VRP paid $17.1 million to 747 researchers, a 45% year-over-year jump and the largest bounty year in Google's history. A single Chrome sandbox escape (a Mojo bug) paid $250,000 to one researcher.
Apple is in the same neighborhood. The company's security-bounty page now lists a $2 million top payout for its highest-severity chains (up from $1 million), with additional multipliers that can reach $5 million; cumulative bounty payments since the program opened to the public have crossed $35 million.
Those are large numbers. They are the correct numbers; both companies have been transparent about them. But here is the uncomfortable thing about spending $17 million to find bugs: what you are really doing is buying evidence that there are more bugs.
The receipts, by year.
A few things are happening on this chart at once, and each of them is worse than the last.
The line from 2023 to 2025 looks like progress. Fewer Chrome CVEs per month, year after year, during the same period Google's bounty spending rose by half. In-the-wild zero-days, the only category that really matters to a user, held roughly steady at eight or so per year for Chrome alone — plus a steady few each year for Safari and WebKit, including CVE-2023-42916 / 42917 (both marked "actively exploited" by Apple in late 2023), CVE-2024-23222 (Apple's first zero-day of 2024), and CVE-2025-24201 (exploited in "extremely sophisticated attacks").
The interpretation was tempting: this was working, just slowly.
And then 2026 happened. The first three months of this year alone have produced roughly 200 Chrome CVEs — a rate that, if it continues, will blow past every previous year on record. The number of in-the-wild zero-days is already at four, with more than eight months to go. Something changed.
The something that changed has a name.
On April 7, 2026, Anthropic announced a preview of a frontier model, internally code-named "Copybara" and marketed as Claude Mythos, optimized for one task: autonomously finding and exploiting security vulnerabilities. The public red-team report describes a model that:
- Generated a working zero-day exploit 72.4% of the time on internal benchmarks, versus less than 1% for the prior-generation Opus 4.6.
- Chained four separate vulnerabilities into a single JIT heap spray that escaped both the renderer and the OS sandbox on a major browser.
- Found "thousands" of high- and critical-severity zero-days across every major operating system and every major web browser, in the course of an evaluation.
Anthropic is, to its credit, not releasing Mythos publicly. Instead the capability is being distributed through a controlled program called Project Glasswing, which includes Apple, Google, Microsoft, AWS, Cisco, CrowdStrike, NVIDIA, Palo Alto Networks, JPMorgan Chase, Broadcom, and the Linux Foundation. Those organizations now have a pipe into a model that finds zero-days at a pace measured not in months but in hours.
That is, relatively speaking, the good news. The bad news is that Mythos is an announcement, not an invention. If a frontier lab has reached this capability inside a constrained pilot, somebody else — an unfriendly government, a commercial spyware vendor, a well-funded criminal group — will have something equivalent within a few quarters. That is the broader phenomenon the early press coverage has already started calling the Vulnpocalypse.
The Vulnpocalypse is not rhetoric. It is a measurable change in the window.
The term has been used, most visibly, in NBC News' April 9 report on Mythos and in the Cloud Security Alliance's "Mythos and the Vulnpocalypse" piece. It is not a branding exercise. It describes a specific, quantifiable change: the mean time from public disclosure of a vulnerability to active exploitation has fallen from roughly 2.3 years in 2019 to under 24 hours in 2026. When AISLE, an AI vulnerability-research firm, was given OpenSSL's January 2026 advisory on the day of release, it independently rediscovered all twelve patched vulnerabilities within hours.
This is not a hypothetical. The NVD, the US government's canonical vulnerability database, has been quietly conceding for more than a year that it cannot keep up: as of early 2026, its backlog of unenriched CVEs with working proof-of-concept exploits dominates its operations. New CVE submissions are up more than 260% since 2020 while analyst headcount is flat, and NIST's public messaging has moved from "we will catch up" to "we will prioritize the highest-risk ones."
If you are waiting for the vendor to patch, for the patch to ship, for your device to take the update, and for you to restart your browser, that is a security plan that, in 2026, must complete inside a 24-hour window. And that window is the mean, not the worst case.
Why this problem is architectural, not budgetary.
The reason Google and Apple cannot fix this with more money is simple, and worth stating plainly: the browser is the largest untrusted-content parser running on your computer, and there is no such thing as a safe parser of that much attacker-controlled data.
Chromium is roughly 35 million lines of C++, plus an enormous //third_party directory that bundles over 200 external libraries — the WebP decoder (CVE-2023-4863, famously chained to NSO's Pegasus in the BLASTPASS incident), the VP8/VP9 decoder, libxml2, ICU, Skia, ANGLE, Dawn, BoringSSL. Every one of those libraries is a parser, handling input controlled by whoever owns the page you opened. Any bug in any one of them is a bug in the browser.
Safari and WebKit are smaller — the core runs about 2.5 million lines of C++, per Igalia's audit — but the attack-surface shape is identical: a monumental pile of C and C++ parsers, handling adversarial input, with every decoded byte running inside the same address space as your browser profile.
You cannot write that much C++ handling that much untrusted input without bugs. You cannot buy your way out of bugs when the attacker side of the market now has access to a system that generates exploits in hours. The rate at which the defender can find and ship fixes and the rate at which the attacker can find and weaponize new bugs are, as of April 2026, diverging.
What recent zero-days actually did.
Before the Mythos announcement, the most consequential browser zero-days of the last three years were already sobering. A short list:
BLASTPASS / libwebp, 2023
CVE-2023-4863 was a heap overflow in the WebP image decoder used by both Chrome and Safari. The exploit, deployed by NSO Group's Pegasus spyware, required no interaction: rendering a malicious image was enough. It remains one of the clearest real-world demonstrations of a third-party parser compromising the whole browser.
Operation ForumTroll, 2025
CVE-2025-2783, discovered by Kaspersky GReAT, was a Mojo sandbox escape in Chrome. It was chained with a renderer compromise to deliver Memento Labs' commercial spyware to Russian and Belarusian targets. Google fixed it within days of disclosure. The attackers had been using it for months before that.
North Korean V8 exploitation, 2024
CVE-2024-7971 was a V8 type confusion used by North-Korean-aligned actors to deploy a cryptocurrency theft toolkit. The same year saw multiple further V8 bugs tied to commercial spyware vendors. The point of commonality: every one of them needed just one web page.
Safari, late 2025
CVE-2025-24201 was exploited in what Apple described as "extremely sophisticated attacks" targeting specific individuals. A further pair in December (CVE-2025-43529, CVE-2025-14174) exploited WebKit and ANGLE and were marked actively exploited in targeted attacks. Both WebKit and Chromium depend on ANGLE, so a bug found through one browser sometimes has to be patched in both.
These bugs were not found because engineers were sloppy. They were found because modern adversaries are paid to find them, and because there are always more. What Mythos changes is who can afford a zero-day: researcher-grade bug-finding becomes something you rent by the hour rather than someone you hire by the year. Anthropic has restricted its tool for now. There will be others.
The architectural argument.
Everything above points to one conclusion: the right design assumption is that the browser has a zero-day in it right now, one you do not know about, one that will be used against someone this year, and one that might be used against you.
Given that assumption, the only question that matters is: what happens after the bug fires?
On a traditional browser, the answer is brutal. The browser runs inside your user account. Its renderer has access to your files, your keychain, your cookies, your webcam, your microphone, and your local network — because those are things the user account has access to, and the browser is a program running as that user. A renderer compromise that bypasses the browser's sandbox — exactly what Mythos demonstrated on a major browser during its evaluation — is a compromise of your computer. Period.
That is the outcome the rest of the industry implicitly accepts and quietly plans around, with layers of EDR, MDM, telemetry, incident response, and, for the unlucky, ransomware negotiation lawyers on retainer.
Bromure does something different.
Bromure does not try to outrun Mythos. That race cannot be won. What Bromure does is change what a zero-day actually reaches.
In Bromure, the browser runs inside a sealed guest VM. When a renderer compromise fires — the same Mojo sandbox escape that ran in Operation ForumTroll, the same chained V8 type-confusion that Mythos generated on internal tests — the attacker now has code execution inside a disposable Linux VM. That VM does not contain your files. It does not contain your keychain. It does not contain your webcam or your microphone or your local network. It contains a browser, a profile, and whatever state that profile has accumulated.
When the session ends and you close the window, the VM is destroyed. The exploit is destroyed with it. Persistence, the single most valuable move an attacker makes after initial access, cannot be established in something that is about to stop existing.
What we are not claiming.
Bromure's architecture is not a silver bullet. Two limits worth stating:
The exploit still compromises the browser session
Passwords typed into that session, cookies stored in that profile, data entered on forms during the exploit window — all of that lives inside the VM and is reachable by a compromise of that VM. Isolation keeps the damage scoped to the session; it does not retroactively protect inputs already handed to the session. Per-profile separation and disposable sessions reduce the value of that blast radius; they do not make it zero.
A VM escape is still in play
A sufficiently severe chain could, in principle, escape the VM itself. That is a much harder problem than escaping a browser sandbox — several orders of magnitude fewer bugs, and the hypervisor attack surface is narrower than a browser's by design — but it is not zero. Bromure reduces the probability that a browser zero-day translates into a host compromise; it does not eliminate it.
The useful number is not "zero risk." The useful number is "how many zero-days per year actually end up compromising your actual computer." Under a traditional browser, that number has been climbing, and is about to climb faster. Under Bromure, it is gated by a separate class of bug that costs about a hundred times more to find.
Plan for the world you are about to live in.
Browser zero-days are not a bug in the ecosystem. They are a structural feature of running tens of millions of lines of C++ in the path of adversarial bytes, for which there is no credible engineering fix, and for which the economic deterrent (you are too expensive to attack) is about to be automated away. Apple and Google are not slacking. The problem is genuinely harder than their budgets.
What we can do, and what Bromure does, is move the blast radius. Isolate the browser. Make every session disposable. Put the host operating system one wall further from every page. That is a different product shape, and it is the only one we have found that still makes sense when the adversary is an always-on AI exploit factory with a three-figure API bill.
Install Bromure. Make it your default for anything you do not fully trust. And when the next headline reads "Actively exploited zero-day in Chrome patched in emergency release" — and it will, within weeks — keep reading, because the outcome does not have to be yours.