Published by Renaud Deraison

How Bromure stops phishing before it reaches your parents

A step-by-step look at Bromure's anti-phishing — the local sweep, the model, the verdict, and why your parents, your grandparents, and the neighbor across the hall are exactly who we built it for.

Your parents do not need another lecture about clicking links carefully. They need a browser that sees the trap they are walking into — and says something, in time, in words they understand.

Some of you have had the phone call. The one where your mother's voice is already quiet, already small, and she says I think I did something wrong on the computer. Some of you are waiting for it. The purpose of this post is to describe, in detail, the machinery we built so that that phone call happens less often — and, when it does, so that the damage stops at a warning instead of a drained account.

The attack industry has always hunted the elderly.

Phishing has never been a clever person's game. It is a probabilistic game: an industrial pipeline that generates fake login pages by the thousand, spoofs familiar brands, writes urgent-sounding messages, and pushes them out across email, SMS, social networks, and phone calls. All the attacker needs is for one person, on one day, in one moment of distraction, to type a password into the wrong box. Grandparents, parents, widowed neighbors, anyone who grew up trusting a bank letter in the mail — they are not slow, they are trusting. Attackers know this, and they target it deliberately.

"Package delivery""IRS refund""Microsoft support""Bank locked""Grandma, it's me""Verify account"Your parentone password awayNo amount of "spot the typo" training is going to fix this.
A single older user, sitting at home, receives the full weight of a global attack industry — SMS, email, phone calls, social messages, fake shipping notices, refund scams. Anti-phishing advice assumes they can sort each of these by hand. They cannot.

Security advice pushed at grandparents has always been variations on be more careful. Check the URL. Hover the link. Do not trust the sender. The problem is not that grandparents are careless. The problem is that the attack industry hires professionals whose full-time job is to defeat "be more careful." The gap between the people defending and the people attacking is, by this point, embarrassing.

What we built, at a glance.

When a page loads inside Bromure, a small inspector runs inside the sealed browser VM. It watches the page like a suspicious usher at a gate — looking at the URL, the forms, the visible text, the links, any QR codes, even what the page silently tries to write to the clipboard. If anything looks off, the inspector's findings travel over an isolated channel to a model. The model reads the signals, decides a verdict, writes a single-sentence explanation, and sends it back. Bromure renders the verdict as a banner on the page. An outright phishing page is redirected to a blocked-site warning before it can trick anyone.

[Diagram: the pipeline. Inside the sandbox (VM): the page loads, the content script attaches, the local sweep runs (forms, URL, brand, QR, clipboard), a pre-filter skips popular, trusted, and SSO domains, and the verdict banner (safe, suspicious, phishing) is rendered. A vsock channel on port 5950 carries findings to the Bromure API, where a cache keyed on {domain, path} with a 1 h TTL sits in front of Claude Haiku, which returns a structured JSON verdict: verdict, confidence, reason.]
The full pipeline. Steps in the blue region run inside the sandboxed guest VM. Steps in the orange region run on the Bromure server. The yellow bar is a vsock channel — not the network — that carries findings between the two.

The rest of this post is the how. It is deliberately detailed. If your parents are the reason you are reading this, feel free to skim to Where your data actually goes — and know that the short answer is: the feature is off by default, it asks for consent, and everything it does is documented below.

Step one — the local sweep.

Before anything leaves your computer, a content script runs inside the guest VM. It inspects the page in several independent passes. None of the passes are expensive. All of them run in the browser, on your own computer, in the same sealed sandbox as the page itself.

Password fields, the instant they appear

The moment an <input type="password"> is attached to the DOM — before you have even focused it — the inspector has already noted the domain and flagged whether you have ever entered a password here before. First time on a new domain? That alone is a signal.

Form structure

Login-only pages with no navigation. Forms that POST to a different domain than the one in the address bar. Forms pretending to be a brand whose logo is hotlinked from a different host. Each one is weak on its own. Together they shift the odds.
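Concretely, the form pass reduces to a handful of checks over a summary of the form. A minimal sketch in TypeScript; the names, flag strings, and `FormSummary` shape are illustrative, not Bromure's shipped code:

```typescript
// Hypothetical shape of what the sweep records about a form.
interface FormSummary {
  pageDomain: string;      // domain shown in the address bar
  actionUrl: string;       // where the form submits (may be relative)
  logoHost?: string;       // host serving the page's logo image, if any
  hasNavigation: boolean;  // does the page offer links beyond the form?
}

function formStructureFlags(form: FormSummary): string[] {
  const flags: string[] = [];
  // Resolve a possibly-relative action URL against the page's own domain.
  const actionHost = new URL(form.actionUrl, `https://${form.pageDomain}`).hostname;

  // The form posts credentials to a domain other than the address bar's.
  if (actionHost !== form.pageDomain) flags.push("cross-domain-action");

  // The brand logo is hotlinked from a third-party host.
  if (form.logoHost && form.logoHost !== form.pageDomain) flags.push("hotlinked-logo");

  // A login form with nothing else to navigate to.
  if (!form.hasNavigation) flags.push("only-login-forms");

  return flags;
}
```

Each flag on its own is weak evidence; the sweep records them all and lets later stages weigh the combination.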

Brand mismatch

If the page says "Apple ID" but the domain is not Apple's, or the logo is served from a suspicious host, that mismatch is recorded. No network call required — it is just pattern-matching on what is already on the page.

Homoglyph domains

Domains like аррӏе.com — rendered the same as "apple.com" but built from Cyrillic lookalikes — are near-certain phishing. The inspector detects these by comparing the rendered form against a set of known confusable characters.
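A minimal version of that comparison folds known confusables to their Latin counterparts and checks whether the folded domain collides with a brand the raw domain does not match. The table below is a tiny sample (a real one covers thousands of code points), and the function names are assumptions:

```typescript
// Tiny sample of a confusables table; a real one covers thousands
// of code points (see Unicode UTS #39).
const CONFUSABLES: Record<string, string> = {
  "а": "a", // U+0430 CYRILLIC SMALL LETTER A
  "е": "e", // U+0435 CYRILLIC SMALL LETTER IE
  "о": "o", // U+043E CYRILLIC SMALL LETTER O
  "р": "p", // U+0440 CYRILLIC SMALL LETTER ER
  "с": "c", // U+0441 CYRILLIC SMALL LETTER ES
  "ӏ": "l", // U+04CF CYRILLIC SMALL LETTER PALOCHKA
};

// Fold every confusable character to its Latin lookalike.
function foldConfusables(domain: string): string {
  return Array.from(domain).map((ch) => CONFUSABLES[ch] ?? ch).join("");
}

// Identical to a known brand after folding, but not before: near-certain spoof.
function looksLikeHomoglyphOf(domain: string, brandDomain: string): boolean {
  return domain !== brandDomain && foldConfusables(domain) === brandDomain;
}
```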

Scam vocabulary

"Confirm your password." "Verify your account." "You've been selected." "Click here to claim your prize." Each phrase is a soft signal. A page that contains three of them, inside a login form, on a domain nobody has ever heard of, is a different story.

QR codes and clipboard payloads

A QR code that decodes to a crypto payment URI on an unrelated wallet. A page that silently writes a shell command into your clipboard while telling you to "press Win+R and paste." These are modern attack patterns that bypass every traditional filter.

A local pre-filter also makes sure the inspector does not bother the server about sites nobody needs to worry about. The top hundred thousand domains from the Tranco list, plus a curated set of 30+ SSO providers (Google, Microsoft, Okta, Apple, and so on), are silently ignored as long as nothing else looks suspicious. The browser will never phone home about your mother's actual bank, her actual pharmacy, or her actual webmail.
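Putting the pre-filter together with the "nothing interesting" rule, the decision to contact the server might look like this sketch, where the domain sets are stand-ins for the real Tranco snapshot and SSO allowlist:

```typescript
// Stand-ins for the real Tranco snapshot and the curated SSO allowlist.
const TRANCO_TOP = new Set(["google.com", "wikipedia.org", "amazon.com"]);
const SSO_PROVIDERS = new Set([
  "accounts.google.com",
  "login.microsoftonline.com",
  "appleid.apple.com",
]);

function shouldConsultServer(
  domain: string,
  localFlags: string[],       // findings from the local sweep
  newPasswordField: boolean,  // first password field ever seen on this domain
): boolean {
  const wellKnown = TRANCO_TOP.has(domain) || SSO_PROVIDERS.has(domain);
  if (localFlags.length > 0) return true;           // suspicion always escalates
  if (newPasswordField && !wellKnown) return true;  // first password on an unfamiliar domain
  return false;                                     // nothing interesting: stay local
}
```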

If the local sweep turns up nothing interesting, the story ends here. No request leaves the computer.

Step two — the second opinion.

If the dossier is interesting — or if a password field has just appeared on an unfamiliar domain — a structured report is sent for evaluation. Not a screenshot. Not the whole DOM. A bounded, sanitized summary of signals.

[Side by side: what the user sees and what the model sees. On the left, a convincing fake: paypai-secure.xyz/login, a PayPal logo, "Log in to your account", "Please verify your account now.", email and password fields, a Log in button. On the right, the dossier the model receives:]

{
  "domain": "paypai-secure.xyz",
  "urlSuspicion": ["brand-in-subdomain"],
  "brandSignals": {
    "title": "Log in to your account",
    "logoDomain": "i.imgur.com"
  },
  "domainMismatch": {
    "claimed": "paypal",
    "actual": "paypai-secure.xyz"
  },
  "sensitiveFields": ["password"],
  "pageStructure": ["only-login-forms", "minimal-navigation", "data-uri-favicon"],
  "contentIndicators": ["urgency-language"]
}
What the model actually sees. The page on the left might look pixel-perfect to a human; the model on the right works from a small, structured dossier of signals — not from the rendered pixels.

The bundle includes:

  • the URL and the domain;
  • the visible text, capped at around 800 characters and stripped of control characters;
  • a summary of each form (field types, button labels, and the action URL);
  • which sensitive fields were detected;
  • the page's title and top headings;
  • where the logo image is served from;
  • domain-mismatch and homoglyph flags;
  • URL-level red flags (brand-in-subdomain, punycode, excessive hyphens);
  • structural flags (only-login-forms, password autocomplete disabled, data-URI favicon, hidden iframes);
  • QR code contents, if any were decoded;
  • clipboard payloads, if any were silently written.

All strings are capped to short ceilings; arrays are capped to small counts; control characters do not survive.
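The capping and stripping can be sketched as two small helpers. The 800-character text cap comes from the text above; the array ceiling and the exact control-character policy are assumptions:

```typescript
const MAX_TEXT = 800;  // the visible-text cap mentioned above
const MAX_ITEMS = 10;  // illustrative array ceiling

// Strip control characters (C0 range and DEL), then truncate.
// Note: in this sketch newlines are control characters too and do not survive.
function capString(s: string, max: number = MAX_TEXT): string {
  const clean = Array.from(s)
    .filter((ch) => ch.charCodeAt(0) >= 0x20 && ch.charCodeAt(0) !== 0x7f)
    .join("");
  return clean.slice(0, max);
}

// Keep at most `max` entries of any signal array.
function capArray<T>(items: T[], max: number = MAX_ITEMS): T[] {
  return items.slice(0, max);
}
```

Every string and every array in the dossier passes through helpers like these before the bundle is serialized.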

On the server, the dossier is handed to Claude Haiku 4.5 — a small, fast model. The system prompt teaches the model to treat everything in the dossier as untrusted forensic evidence: to assume the page was built by someone who knew the prompt existed and wrote the page to confuse it. The prompt walks the model through nine ordered checks (domain analysis, brand impersonation, page structure, credential harvesting, suspicious links, scam content, QR codes, clipboard payloads, final synthesis) and requires it to emit a single JSON object:

{
  "verdict": "phishing" | "suspicious" | "safe",
  "confidence": 0.0,
  "reason": "one short sentence, plain language"
}

The thresholds are calibrated deliberately. At 0.85 or above, the verdict is phishing and the site is blocked. From 0.4 up to 0.85, it is suspicious and a warning banner is shown. Below 0.4 it is safe and nothing is displayed. One weak signal — one scam word, one mismatched favicon — does not clear the bar alone. The model is told, explicitly, that a false positive on your grandmother's pharmacy is worse than letting a mediocre scam through.
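As a sketch, the threshold mapping is a three-line function; whether each boundary is inclusive is an assumption here:

```typescript
type Verdict = "phishing" | "suspicious" | "safe";

function verdictFromConfidence(confidence: number): Verdict {
  // Boundaries treated as inclusive at the bottom of each band (an assumption).
  if (confidence >= 0.85) return "phishing";   // redirect to the blocked-site warning
  if (confidence >= 0.4) return "suspicious";  // amber banner
  return "safe";                               // nothing shown
}
```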

Two fast paths skip the model entirely when the answer is already obvious:

  • Brand mismatch shortcut — if the page claims to be PayPal but the domain is paypai-secure.xyz, the server returns suspicious immediately, with no LLM call and no cost.
  • Cache hit — verdicts are keyed on {domain, normalized path} and stored in memcache for one hour. A second visit to the same page costs nothing and answers in milliseconds.
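The cache key shape, as a hedged sketch (the real normalization rules are not documented here):

```typescript
// Drop query and fragment, collapse duplicate slashes, strip a trailing slash.
function normalizedPath(path: string): string {
  const bare = path.split(/[?#]/)[0].replace(/\/+/g, "/");
  return bare.length > 1 ? bare.replace(/\/$/, "") : bare;
}

// Verdicts are keyed on domain plus normalized path; the 1 h TTL lives server-side.
function cacheKey(domain: string, path: string): string {
  return `${domain}|${normalizedPath(path)}`;
}
```

Normalizing the path means query strings and fragments cannot fragment the cache, so repeat visits to the same login page hit the same entry.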

Step three — the verdict on the page.

The verdict comes back, and Bromure renders it inline — in the page, above the content. No popup. No modal your parents have to dismiss before they can see what they came to see.

[The three banner states. Safe: nothing shown; the page just loads. Suspicious: an amber, dismissable banner ("Suspicious page. This page is asking for your password, but it is the first time you visit this domain.") with an "I know this site" button. Phishing: a red interstitial for paypai-secure.xyz ("Phishing detected. This site is impersonating PayPal. Do not enter your credentials.") with "Go back to safety" as the default and "Proceed anyway" beside it.]
The three possible banners. Silence for safe. An amber warning for suspicious, dismissable, with an 'I know this site' escape hatch. A red interstitial for phishing, where the default action is to go back.

The reason line is always a single sentence, written for the person reading it, not the person debugging it. "This site is impersonating PayPal. Do not enter your credentials." is what the banner says — not a confidence score, not a signal trace, not a list of regex matches. The goal is that your mother can read the banner once and know what to do.

Every dismissal of a suspicious banner is scoped to the current session, or promoted to the profile's trusted list if she clicks I know this site. The trust decision is hers, not the model's.

The cross-domain catch — a safety net that does not need a verdict.

Even before any verdict arrives, Bromure enforces one hard rule: a password is never sent to a domain other than the one in the address bar, without asking. If a form on login.bank.com is about to POST to credentials.attacker.example, the submission is intercepted and a modal appears.

[Diagram: the intercept. A page at login.bank.com holds a filled password field whose form targets creds.attacker.example: a different domain, not on the SSO allowlist, blocked until confirmed. A modal interrupts: "Password going elsewhere. This page is about to send your password to another domain." with Cancel as the default action and Continue beside it.]
The cross-domain password catch. Before any server verdict, any form submission where the password target does not match the page domain is halted with an explicit confirmation — naming both domains in plain language.

A short curated allowlist of known identity providers — Google, Microsoft, Okta, Apple, and a few dozen more — exempts legitimate federated sign-ins from the catch. Everything else gets the modal, every time. This one rule, on its own, catches an enormous share of credential-phishing pages before the model has even returned a verdict.
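The whole rule fits in one small predicate. A sketch, with an example allowlist standing in for the shipped list of identity providers:

```typescript
// Example allowlist; the shipped list is a curated set of a few dozen IdPs.
const IDP_ALLOWLIST = new Set([
  "accounts.google.com",
  "login.microsoftonline.com",
  "appleid.apple.com",
]);

function shouldInterceptPasswordPost(pageDomain: string, actionUrl: string): boolean {
  // Resolve relative form actions against the page's own domain.
  const target = new URL(actionUrl, `https://${pageDomain}`).hostname;
  if (target === pageDomain) return false;      // same domain: let it through
  if (IDP_ALLOWLIST.has(target)) return false;  // known federated sign-in: fine
  return true;                                  // anything else: halt and ask
}
```

Because the rule needs no verdict, no network, and no model, it works even when everything upstream is offline.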

Where your data actually goes.

Two constraints drive the data path. The first is consent. The second is isolation.

Consent. The feature is off on every new profile. To enable it, the user opens Privacy & Safety and encounters a dedicated modal that describes, before any toggle is flipped, exactly what is sent (URL, page text, form structure, warning signals), where it goes (a server operated by Bromure at bromure.io), how long it is kept (short-term logs, for abuse prevention and model improvement), and how to turn it off. The feature is only available at all on persistent profiles — not ephemeral, throwaway sessions — so that an anonymous research session cannot leak data to the server by accident. If you install Bromure on your grandmother's Mac and enable the feature for her, the banner she sees is the proof that the choice was made deliberately, on her behalf, by someone she trusts.

Isolation. The browser is not directly connected to the internet. Neither is the inspector. All findings travel over a vsock channel from the guest VM to the host, on an internal port — not over the network interface the web pages see, and not via any proxy the page can influence. The host relays the request to the API, receives the verdict, and hands it back to the inspector. No web page in the world can reach the vsock bridge; no web page can reach the host directly; no web page can even know the bridge exists.

[Diagram: the data path. In the guest VM: the web page, the phishing-guard content script, and a background worker, connected over vsock port 5950 to the PhishingAnalysisBridge on the macOS host, which relays guest findings and is the sole path to the API. From the host, HTTPS to the Bromure API at bromure.io/v1/analyze, backed by memcache (1 h TTL) and Haiku 4.5, which returns the verdict JSON. The web page has no access to any of these hops.]
The data path. Findings go from the guest extension, over vsock to the host bridge, then over HTTPS to the API. The web page never touches any of these hops directly.

The URL of the page being checked is logged on the server, briefly, for abuse prevention. The visible text is sent to the model for analysis and is not retained after the response. Nothing you typed into a form is ever in the bundle. If you turn the feature off, no data at all is sent.

Fast on the pages you already know.

Calling a language model on every page load would make the browser feel heavy. Bromure caches verdicts aggressively so that the hot pages of the internet are effectively free to evaluate after the first visit — and if the server is ever unreachable, the local sweep keeps on doing its job.

Cache first, model last

Verdicts are cached on {domain, normalized path} for an hour. A second visit to the same page answers in milliseconds, without touching the model. Brand-mismatch shortcuts skip the model entirely.

The local sweep is the floor

If the server is unreachable or slow, the local sweep keeps running. The banners for the most obvious cases — cross-domain passwords, homoglyph domains, clipboard payloads — are produced entirely inside your computer, no network required.

The promise, kept.

We said in the first post that the defense should be a second pair of eyes that never blinks. This is what that pair of eyes actually does, inside Bromure, right now.

Your mother should not have to pass a security quiz to read the news. Your father should not have to know what a homoglyph is. Your grandfather should not have to feel stupid because a polite voice on the phone told him to install a thing. The browser should see the trap, in time, and say so — in a sentence, not in jargon — before a password ever leaves the keyboard.

That is what Bromure's anti-phishing is for. That is why it is off by default, documented before it is enabled, isolated from the page, and cheap to run at scale. Install it on your parents' Mac. Walk them through the Privacy & Safety panel. Turn the feature on. Then go to dinner.