Sam Altman's Very Public U-Turn
There's something almost poetic about this. Just weeks after OpenAI CEO Sam Altman went on a podcast and accused Anthropic of "fear-based marketing" for limiting access to its advanced cybersecurity AI, he announced OpenAI would be doing... exactly the same thing.
On April 29, Altman posted on X that OpenAI was beginning a restricted rollout of GPT-5.5-Cyber — a frontier cybersecurity model — to "critical cyber defenders" only. Sound familiar? It should. That's essentially what Anthropic did with Claude Mythos Preview, its own advanced cybersecurity model, which launched on April 7 through a program called Project Glasswing.
And here's the kicker: on April 21 — barely a week before his own announcement — Altman publicly argued that Anthropic was exaggerating the dangers of Mythos to justify keeping powerful AI "in the hands of a small and exclusive elite."
What OpenAI Actually Said — And What It's Doing Now
Altman's X post read: "We're starting rollout of GPT-5.5-Cyber, a frontier cybersecurity model, to critical cyber defenders in the next few days. We will work with the entire ecosystem and the government to figure out trusted access for cyber."
No list of who gets access. No technical deep-dive on capabilities. Just... a restricted rollout to unspecified vetted parties, with vague promises to figure it out collaboratively. Which is — and I really can't stress this enough — exactly the kind of approach he criticized Anthropic for.
OpenAI's restricted rollout does build on existing infrastructure. The company had already launched a Trusted Access for Cyber program back in February and expanded it in mid-April when GPT-5.4-Cyber was released to vetted defenders. So this isn't coming out of nowhere. But the timing of the criticism followed by an identical move is... a lot.
Why Both Companies Are Taking This Route
Okay, so why are two of the biggest AI labs in the world suddenly being cagey about cybersecurity models? Here's what the data actually shows.
A UK AI Security Institute evaluation published on April 30 found that GPT-5.5 is "one of the strongest models we have tested on our cyber tasks." More specifically, it's only the second model — after Anthropic's Mythos — to complete a 32-step corporate network attack simulation end-to-end. The institute estimates that kind of attack would take a skilled human roughly 20 hours to pull off.
On expert-level tasks, the numbers broke down like this:
| Model | Expert Task Pass Rate |
| --- | --- |
| GPT-5.5 | 71.4% |
| Mythos Preview | 68.6% |
| GPT-5.4 | 52.4% |
| Opus 4.7 | 48.6% |
The institute's framing is important here — they described these results not as a one-off breakthrough but as evidence of a broad trend across frontier models. In other words, this isn't about one company building something uniquely dangerous. Multiple labs are crossing similar capability thresholds around the same time.
And when you can simulate a full corporate network attack in an automated way? Yeah, you probably want to think carefully about who gets the keys.
The Hypocrisy Narrative Takes Hold
Look, the internet wasn't going to let this slide quietly. India Today called out Altman directly, noting he had "slammed Anthropic for keeping Mythos out of reach of most people" before adopting an essentially identical strategy himself. The Verge framed the move as part of "a growing trend within the AI sector, where companies are labeling their premier models as too risky for widespread public usage."
Whether or not you think restricted access is the right call — and there are reasonable arguments on both sides — the specific sequence of events here is hard to defend. Publicly mocking a competitor's safety decision and then making the exact same decision weeks later isn't a great look.
What's Happening on the Anthropic Side
Meanwhile, Anthropic's Mythos rollout has its own complications. The Trump administration has reportedly pushed back on Anthropic's plans to expand Mythos access to around 70 additional companies, citing both security concerns and compute capacity constraints, according to Bloomberg.
So both companies are navigating real pressure — from governments, from security communities, and from each other — around how to handle AI systems capable of sophisticated cyberattacks.