
The arms race for your trust: Mythos, Cyber and the security hype

Anthropic and OpenAI are fighting for market share with AI security tools they call "too dangerous" to release. But the facts tell a different story than the press releases.

Last month, Anthropic launched their AI model Mythos with a claim so spectacular that the entire tech world paused: the model was so dangerously good at finding security vulnerabilities that they didn't dare release it publicly. Within two weeks, OpenAI was at the door with their own answer: GPT-5.5-Cyber, same pitch, same dramatics.

The world collectively lost its mind. But the question nobody seemed to ask was: is it actually true?

The facts behind the headlines

Let's start with what actually happened, rather than what the press releases wanted you to believe.

Daniel Stenberg, the man behind curl, one of the most tested, fuzzed, and audited C codebases on earth, eventually received a Mythos scan of his project. Anthropic had promised him access, but after weeks of waiting it never materialized, so someone else ran the analysis for him.

The result? Five "confirmed" vulnerabilities, according to Mythos.

The reality? One vulnerability. Severity: low. The other four were false positives, and three of those flagged behavior that was described right there in the API documentation. The model had "confirmed" its own findings, as models do.

Stenberg's conclusion is dryly devastating: "The big hype around this model has so far been primarily marketing." He sees no evidence that Mythos significantly outperforms the AI tools curl has been using for months: AISLE, Zeropath and OpenAI's own Codex Security. Those tools had already found hundreds of bugs and produced more than a dozen CVEs.

The marketing machine

And this is where it gets interesting. Look at the timing.

Anthropic launched Mythos in April 2026 with the claim that the model was "too dangerous" to release. Too dangerous. Not "good," not "better than existing tools," but dangerous. That's not a technical conclusion. That's a marketing decision.

The message is brilliant: by refusing to sell your product, you make it irresistible. Everyone wants access to the thing they're not allowed to have. It's the same trick OpenAI pulled in 2019 with GPT-2, which was dubbed "too dangerous to release." That same model now runs on your phone.

Anthropic is steering toward a potential IPO at an estimated valuation of $900 billion, and the timing is no coincidence. Nothing sells better than fear. And nothing inflates a valuation like the idea that your technology is so powerful that the world isn't ready for it yet.

OpenAI saw the attention bleeding away and responded within two weeks with GPT-5.5-Cyber. Same approach: limited access, only for "trusted parties," same dramatic framing. Today, dozens of European organizations, from Deutsche Telekom to the European Commission, are getting access to Cyber. The arms race is official.

The numbers nobody verifies

Mozilla reported that Mythos found "a whopping 271 vulnerabilities" in Firefox. That number flew around the world. But how many of those were actually confirmed? How many were false positives, like the 80% at curl? How many were documentation issues dressed up as vulnerabilities?

Those questions aren't being asked, because the big number is the press release. The small number — the actual, verified impact — that's the footnote nobody reads.
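
To make the gap concrete, here is a back-of-the-envelope sketch. The 80% false-positive rate is the one observed on curl's five reports; carrying it over to Firefox is an assumption, and the fact that it has to be an assumption is the point: nobody has published the verified number.

```python
# Back-of-the-envelope: what a headline bug count implies once an assumed
# false-positive rate is applied. The 80% rate is what the curl scan showed
# (1 real finding out of 5 claimed); carrying it over to Firefox is an
# assumption, not a measurement.

def estimate_confirmed(claimed: int, false_positive_rate: float) -> float:
    """Expected number of real findings among `claimed` reports."""
    return claimed * (1.0 - false_positive_rate)

fp_rate = 1 - 1 / 5      # 0.8, observed on curl's five reports
firefox_claimed = 271    # Mozilla's headline number

expected = estimate_confirmed(firefox_claimed, fp_rate)
print(f"{firefox_claimed} claimed -> ~{expected:.0f} expected real findings")
# prints: 271 claimed -> ~54 expected real findings
```

Even under that rough assumption, the headline shrinks by a factor of five, and the result still says nothing about severity.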

We know this pattern. It's the same mechanism as with every AI announcement: the headline claim is spectacular, the fine print is nuanced, and by the time the nuance surfaces, the news cycle has already moved three claims ahead.

The irony of the arms race

There's a dark irony in this whole circus. A year ago, curl had to shut down its bug bounty program, not because of real security problems, but because it was being flooded with AI-generated fake vulnerability reports. "AI slop," Stenberg called it: reports that sounded technically plausible but were complete nonsense, generated by the same models now being touted as the saviors of cybersecurity.

The same technology that made it impossible to distinguish real bug reports from automated garbage is now being sold as the ultimate solution for code security. The companies that caused the problem are now selling the fix.

Follow the money

The real question isn't whether these tools work. They do. AI code analysis is genuinely better than traditional static analysis. Stenberg says it himself: "Not using AI code analyzers means you leave adversaries time and opportunity." That's true.

But "better than what existed" is not the same as "dangerously good." And the difference between those two is exactly where the marketing department lives.

What the marketing says | What the data shows
"Too dangerous to release" | One low-severity vulnerability in curl
"271 bugs in Firefox" | Not independently verified
"Confirmed vulnerabilities" | 80% false positives at curl, "confirmed" only by the model itself
"Exclusive access" | Standard enterprise sales strategy

Anthropic is chasing an IPO. OpenAI is fighting to claw back enterprise market share after watching its slice of that market drop from 50% to 27%. Both companies have an existential interest in the idea that their models aren't just good, but dangerously good. The kind of good that makes governments nervous and companies reach for their wallets.

What this means for you

As a developer, you need to see through the marketing. AI security tools are useful. Use them. But treat them like you'd treat any other tool: with healthy skepticism.

  • Verify everything. If an AI tool claims five vulnerabilities, assume four are noise until proven otherwise. (A minimal triage sketch follows this list.)
  • Don't let FOMO drive your decisions. The exclusivity marketing is designed to make you feel like you're falling behind. Existing tools largely do the same thing.
  • Check the sources. When a press release says "271 bugs found," ask: how many were real? How many got fixed? What was the severity?
  • Remember who's paying. The companies building these tools have a direct financial interest in maximizing fear and minimizing nuance.
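
To make the first point concrete, here is a minimal triage sketch in Python. The Finding shape, its fields, and the example reports are hypothetical, not any scanner's real output; the rule it encodes is just the curl lesson: a finding only counts once a human has reproduced it and it isn't already-documented behavior.

```python
# Minimal triage sketch for AI-reported findings: nothing counts until a
# human has reproduced it and it isn't already-documented behavior.
# The Finding shape and these example reports are hypothetical, not any
# scanner's real output format.
from dataclasses import dataclass

@dataclass
class Finding:
    title: str
    severity: str              # "low", "medium", "high"
    human_reproduced: bool     # independently verified, not model-"confirmed"
    documented_behavior: bool  # already described in the API docs

def triage(findings: list[Finding]) -> list[Finding]:
    """Keep only findings a human reproduced that aren't documented behavior."""
    return [f for f in findings
            if f.human_reproduced and not f.documented_behavior]

reports = [
    Finding("use-after-free in reconnect path", "low", True, False),
    Finding("'missing' timeout check", "medium", False, True),
    Finding("header parsing 'flaw'", "high", False, False),
]
for f in triage(reports):
    print(f"kept: {f.title} ({f.severity})")
print(f"{len(triage(reports))} of {len(reports)} survived triage")
# prints: kept: use-after-free in reconnect path (low)
#         1 of 3 survived triage
```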

In closing

AI security tools are a genuine improvement. That's not the point. The point is that two companies chasing valuations in the hundreds of billions of dollars are running a calculated fear campaign to inflate those valuations, wrapped in the language of "responsible disclosure."

The parrot has learned a new trick: it screams "danger!" and waits for you to reach for your wallet.

Listen to what Stenberg says — the man who actually maintains the code: "Maybe a little bit better." That's reality. The rest is theater.

// series: The AI Skeptic (8 of 8)