In a massive geopolitical and technological contradiction, the United States military reportedly relied heavily on Anthropic's Claude AI during its recent major airstrikes against Iran. Shockingly, this battlefield deployment occurred just hours after President Donald Trump publicly banned the San Francisco-based AI company from all federal government use.

As we track the rapid intersection of artificial intelligence and global security here at IN4 GRAMS, this extraordinary development highlights how deeply embedded AI tools have become in modern military operations, even in defiance of direct presidential orders. The controversy began when Anthropic CEO Dario Amodei firmly rejected the Pentagon's demands for unrestricted access to its Claude AI models. Sticking to its core safety principles, Anthropic refused to allow its technology to be used for mass domestic surveillance or for fully autonomous weapons systems that can strike without human oversight.
In response, President Trump launched a fierce public attack, labeling Anthropic a "woke" company and designating it a "supply chain risk." He issued a sweeping directive ordering all federal agencies to immediately stop using Anthropic's technology, effectively shutting the company out of a lucrative $200 million defense contract.

Claude AI Actively Used in Iran Strikes

Despite the high-profile federal ban, the realities of the battlefield painted a completely different picture. According to insider reports, the US Central Command (CENTCOM) heavily utilized Claude AI during its subsequent operations and bombing campaign against Iran.

Military officials reportedly used the advanced chatbot for high-stakes operational tasks, including:

  • Intelligence Assessments: Rapidly analyzing vast amounts of classified data to find actionable insights.
  • Target Identification: Mapping out strategic strike zones in real-time.
  • Battle Simulations: Running complex combat scenarios before the missiles were ever fired.

This is not the first time Claude has reportedly been used in high-risk missions; the AI was previously said to have supported the US operation that led to the capture of Venezuelan President Nicolas Maduro. Its use in Iran just hours after the President's ban, however, shows that military commanders view the tool as essential, regardless of the political fallout in Washington.

OpenAI Steps In

While Anthropic stood its ethical ground and accepted the federal ban, its biggest rival quickly capitalized on the chaos. Just hours after the Trump administration blacklisted Anthropic, OpenAI CEO Sam Altman announced that his company had signed a new deal to deploy its own AI models inside the Pentagon's classified networks.

For everyday tech users and defense analysts alike, this saga represents a massive turning point. It proves that the future of global warfare will not just be fought with fighter jets and aircraft carriers, but with highly advanced artificial intelligence.