Anthropic's Mythos is moving between governments faster than regulators can keep up

Anthropic's most controversial product has spent its first three weeks moving between state actors who cannot agree on whether it is theirs to use, theirs to block, or someone else's problem entirely.

On Wednesday morning, an unnamed Trump administration official told the Wall Street Journal that the White House opposed Anthropic's plan to expand access to Mythos, its advanced cybersecurity model, from roughly 50 organisations to 120.

The reason given was twofold: a security concern about misuse, and an operational concern that Anthropic does not have enough computing power to serve more users without degrading the access already extended to the federal government, including the National Security Agency.

The same White House was simultaneously developing an executive action that would let federal agencies work around the Pentagon's supply chain risk designation of Anthropic and onboard the same model.

Susie Wiles and Scott Bessent had met Dario Amodei earlier in the month. A White House spokesperson said the administration was “balancing innovation and security while cooperating with the private sector.”

Both things were true at the same time. The same administration was working to keep Mythos out of the hands of more civilians and to bring it back into the hands of more soldiers. This is not a contradiction that the government is going to resolve. It is the story.

Three weeks earlier, on April 7, Anthropic had unveiled Mythos through what it called Project Glasswing, a coalition of eleven of the world's most consequential technology companies (AWS, Apple, Broadcom, Cisco, CrowdStrike, Google, JPMorgan Chase, the Linux Foundation, Microsoft, Nvidia, Palo Alto Networks), with access extended to forty more critical-infrastructure organisations and $100 million in usage credits and $4 million in open-source security donations attached.

Anthropic claimed Mythos had autonomously discovered thousands of zero-day vulnerabilities across every major operating system and web browser, including a 27-year-old bug in OpenBSD and a 17-year-old remote code execution flaw in FreeBSD that the model identified, exploited, and documented without human help.

In one demonstration, the model escaped its sandbox, established broad internet access, and emailed the researcher running the evaluation, who was eating a sandwich in a park. The same day, a small group of unauthorised users in a private online forum gained access to Mythos.

Bruce Schneier, who has spent three decades watching the cybersecurity industry overreact and underreact in roughly equal measure, called the launch “very much a PR play”, and then immediately added that he believed everyone panicking about the implications was correct.

Stanislav Fort, the founder of the security company AISLE, demonstrated that eight much smaller models could detect the FreeBSD bug Anthropic had presented as a frontier discovery. Mythos was real. Mythos was also, in part, theatre. The capability mattered enough to make both descriptions simultaneously accurate.

And from that moment, the model began to move between governments, not just companies.

Anthropic, before the announcement, had briefed CISA, the Commerce Department, and senior federal officials. The NSA was given access. The Pentagon, however, was a problem. As we have previously reported, relations between Anthropic and the Department of Defense had collapsed earlier in 2026 after the company refused to allow Claude to be used for autonomous weapons or domestic mass surveillance, two uses Amodei has said publicly he will not permit.

The Pentagon designated Anthropic an unprecedented national security supply chain risk. On March 24, the Northern District of California granted Anthropic a preliminary injunction, finding that the Pentagon's actions were not designed to protect national security but to punish Anthropic for refusing the contract. “Classic illegal First Amendment retaliation,” the court called it.

The story since then has not been a straight line. It has been a series of governments pulling in different directions on the same model. The NSA uses it. The Pentagon would like to ban its maker. The White House this week opposed expanding civilian access while drafting paperwork to expand its own.

Across the Pacific, China launched the 2026 edition of its annual Qinglang campaign against AI misuse on the same day the White House moved against Mythos expansion. The campaign is a domestic enforcement action, framed around protecting Chinese consumers from AI-enabled fraud, yet it landed exactly one week after the White House Office of Science and Technology Policy formally accused Chinese companies of running “industrial-scale” distillation campaigns against American frontier labs.

OpenAI, watching the choreography, made its own move. On April 23, the company released GPT-5.4-Cyber, a defensive cybersecurity model offered through its Trusted Access for Cyber programme. The model is less capable than Mythos at raw vulnerability discovery, but the strategic difference is access architecture: where Anthropic gated Mythos to roughly fifty organisations, OpenAI scaled GPT-5.4-Cyber to thousands of vetted defenders.

The implicit argument was that restricting frontier security capability to a handful of large customers leaves the rest of critical infrastructure under-defended. The explicit calculation was that having two architectures for distributing AI cyber capability, one tightly gated, one verified-but-broad, gave OpenAI a position the Trump administration could prefer over Anthropic's.

Underneath all of this is a question of who, exactly, is supposed to decide who gets a model like this. Anthropic's answer, encoded in Project Glasswing, is that Anthropic decides. The Cloud Security Alliance has framed this as “a significant policy posture”, a deliberate departure from the prevailing AI deployment norm under which capability and access expand together.

Anthropic has stated, plainly, “We are not confident that everybody should have access right now.” It is hard to overstate how unusual this is. A private company, holding a capability that the United States government has both come to depend on and tried to constrain, has decided unilaterally where the access list ends.

The financial stakes make the politics inseparable from the business. Anthropic is now considering offers at a valuation of more than $900 billion, with a board decision expected in May and an IPO target as early as October. Part of what the new capital would fund is the compute the White House said this week the company does not have. CNBC reported that securing infrastructure to scale Mythos is explicitly part of the fundraising rationale.

So when the administration objects that Anthropic lacks the compute to expand the access list, it is also, whether it intends to or not, commenting on a fundraising round whose success depends on solving exactly that problem.

This is the geopolitical reality of frontier AI in 2026, made unusually legible. Mythos sits at the intersection of three state actors: the US administration, the US military, and the Chinese state, each with a different theory of what private AI cyber capability is for. To the White House, it appears to be something to be managed: rationed when expanded too far, repatriated when restricted too tightly.

To the Pentagon, until the courts intervened, it was something to be punished into compliance. To Beijing, it is one part of a broader argument that the United States cannot simultaneously claim AI sovereignty and let private companies decide who gets to use the most consequential capability the country has produced.

The Trump-Xi summit is scheduled for May 14 in Beijing. AI export controls and semiconductor policy will be on the agenda. Mythos almost certainly will not be named.

But the question Mythos forces, who controls private AI cyber capability when the public sector cannot agree internally on whether to use it, ban it, or buy it back, is the actual subject of that meeting, and of every meeting like it for the next decade.

The model is not the story. The fact that no government can decide what to do with it is.