
75 Terrorism Cases Later, NSA Is Running Anthropic’s Mythos Anyway

📖 4 min read • 755 words • Updated Apr 21, 2026

75. That’s how many terrorism defendants have been notified that some form of FISA surveillance was used against them, according to a nationwide review of federal court records by The Intercept. Keep that number in mind as you read what comes next: the NSA is reportedly using Anthropic’s Mythos AI model — and doing so without any official acknowledgment, without any denial, and apparently without Pentagon blessing.

As a backend engineer, I spend a lot of time thinking about what happens when systems get deployed before the governance layer catches up. This story is that problem, scaled to the intelligence community.

What We Actually Know

The verified facts here are deliberately thin, which is itself a signal. As of 2026, NSA personnel are reportedly using Anthropic’s Mythos — described as Anthropic’s most powerful model — despite active opposition from top Pentagon officials. No one has officially confirmed it. No one has officially denied it. The story broke through reporting from TechCrunch and The Intercept, among others, and the silence from both Anthropic and the government has been total.

That silence is doing a lot of work. In the absence of a denial, the reporting stands. And the reporting says spies are using a commercial AI model that their own department’s leadership apparently doesn’t want them using.

The Infrastructure Problem Nobody Is Talking About

Here’s what jumps out at me from an engineering angle: this isn’t a policy story, it’s a deployment story. Someone stood up access to Mythos inside an intelligence workflow. That means API keys, or a procurement channel, or some kind of shadow IT path that bypassed the normal acquisition process. However it happened, there’s infrastructure behind this.

When engineers inside an org start routing around official tooling, it usually means one of two things. Either the approved tools are genuinely inadequate for the job, or the approval process is so slow and bureaucratic that people stop waiting for it. In the NSA’s case, both are probably true simultaneously.

The NSA’s own research chief has reportedly said that US spies should be using private AI models. That’s not a fringe opinion inside the building — it’s coming from leadership. So you have a situation where the research arm is pushing for commercial AI adoption, the Pentagon is pushing back, and field-level analysts are apparently just… using Mythos anyway. Classic org-level deadlock resolved by individual action at the edges.

What Anthropic’s Position Actually Means

Anthropic has built its public identity around safety-first AI development. Mythos being used inside intelligence operations — with or without Anthropic’s explicit sign-off — creates a real tension with that positioning. If the company knew and said nothing, that’s a choice. If the company didn’t know, that’s a different kind of problem: one about how commercial API access gets used downstream once it leaves your control.

From a backend perspective, this is the API key problem at civilizational scale. You ship an API. You set terms of service. You have no reliable way to audit every query that hits your endpoint, especially if access is being routed through intermediaries or procurement vehicles that obscure the end user. The model doesn’t know it’s answering questions for an intelligence analyst. It just processes tokens.
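To make the point concrete, here is a minimal sketch of what an API provider's audit surface actually looks like per request. Everything here is my own illustrative modeling — the field names and the `InferenceRequest` type are invented for this post, not Anthropic's actual schema — but the structural problem is real: once traffic is relayed through an intermediary, the provider's logs attribute it to the intermediary's key, and the end user disappears.

```python
from dataclasses import dataclass

@dataclass
class InferenceRequest:
    """Illustrative model of what a provider sees per API call."""
    api_key_owner: str   # the account the key is billed to
    source_ip: str       # network origin of the request
    prompt_tokens: int   # opaque content, metered by volume

def audit_view(req: InferenceRequest) -> dict:
    """The provider's entire attribution surface: key owner and
    network path. Nothing identifies who sits behind a relay."""
    return {"billed_to": req.api_key_owner, "seen_from": req.source_ip}

# A direct customer is attributable from the provider's side...
direct = InferenceRequest("acme-corp", "203.0.113.7", 512)
# ...but the identical request routed through a reseller or
# procurement vehicle looks like the reseller's own traffic.
relayed = InferenceRequest("some-reseller-llc", "198.51.100.9", 512)

print(audit_view(direct))   # attributes to acme-corp
print(audit_view(relayed))  # end user invisible behind the reseller
```

The terms of service can prohibit whatever they like; the audit log simply has no column for "actual end user."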

This is why the “Big Tech and national security are getting closer than ever” framing, which has appeared in coverage of this story, undersells the actual dynamic. It’s not closeness — it’s entanglement. And entanglement is much harder to unwind than a formal partnership you can terminate.

The Pentagon Feud Is a Distraction

The inter-agency conflict angle makes for good headlines, but from where I sit, the more interesting question is about precedent. If the NSA can use a commercial frontier model without official acknowledgment, what does that mean for how AI deployment inside government actually works going forward?

Procurement rules exist for reasons — security review, data handling agreements, liability chains. When those get bypassed, even for genuinely useful tools, you create gaps that are very hard to audit later. The 75 FISA cases The Intercept identified represent years of legal and procedural infrastructure built around surveillance accountability. AI inference pipelines have none of that infrastructure yet.
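For contrast, none of that missing infrastructure would be exotic engineering. Here is a sketch, under my own assumptions, of the minimal accountability record an inference pipeline could emit per call: a content hash rather than the prompt itself, a provenance field, and an explicit retention policy. The schema is invented for illustration — no real pipeline I know of uses these field names.

```python
import hashlib
import json
import time

def audit_record(prompt: str, model: str, origin: str,
                 retention_days: int) -> str:
    """Emit one accountability record for an inference call.
    Stores a hash of the prompt (not the prompt), who made the
    call, and how long the record itself may be retained."""
    rec = {
        "ts": time.time(),
        "model": model,
        "origin": origin,  # which network or tenant originated the call
        "prompt_sha256": hashlib.sha256(prompt.encode()).hexdigest(),
        "retention_days": retention_days,
    }
    return json.dumps(rec, sort_keys=True)

line = audit_record("what is the capital of France?",
                    "mythos", "tenant-42", retention_days=90)
print(line)
```

The hard part was never the logging code. It's deciding who is required to write these records, who can read them, and who audits that the retention field is honored — exactly the procedural layer the FISA notification regime took years to build.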

That’s the part that should concern engineers more than the political drama. Not whether the Pentagon and NSA are feuding, but whether anyone has actually mapped what data is flowing through Mythos, where it’s going, and what the retention policy looks like on Anthropic’s end for queries originating from government networks.

Nobody has answered those questions publicly. And given the silence so far, nobody seems to be in a hurry to ask them out loud either.

Written by Jake Chen

Full-stack developer specializing in bot frameworks and APIs. Open-source contributor with 2000+ GitHub stars.
