An interview with Dario Amodei
Anthropic’s boss apologises for bashing Pentagon—but still plans to sue
April 15, 2026
DARIO AMODEI, the boss of Anthropic, says he is sorry. In his first interview since the Pentagon labelled the AI lab a supply-chain risk—the first American company to receive that designation—he offered a mea culpa for the way he handled a crisis that he described as one of the most “disorienting” in Anthropic’s history. Yet he also said he would challenge the Pentagon’s designation in court in order to avoid a “chilling” impact on Anthropic’s business.
The Department of War’s designation has come as a heavy blow to Anthropic, one of America’s leading model-makers with a valuation of $380bn. It is in effect a blacklisting usually meted out to foreign adversaries. It arrived on March 4th, hours after the leak of an intemperate memo that Mr Amodei sent to staff, in which he blamed a weeks-long row with the Pentagon on his failure to lavish “dictator-style praise” on President Donald Trump.
In a surprisingly chipper mood, Mr Amodei spent more than an hour talking animatedly to The Economist’s editor in chief, Zanny Minton Beddoes, about the dispute with the Pentagon and its wider ramifications. He said the company would be “fine”. He was both frank and contrite about the leaked missive. “I want to completely apologise for this memo,” he said, admitting that it was not a very “considered or refined” version of his thinking. Asked whether he would say sorry to Mr Trump, he added: “I’ve apologised to the people that I’m talking to…I’m happy to speak to others within the administration as well.”
Mr Amodei claimed that the Pentagon’s sanction would not affect Anthropic’s wider government work, and that it would restrict other Anthropic customers only in their Pentagon-related work. That would represent a lesser blow than the one threatened by Mr Trump and Pete Hegseth, the secretary of war, a week ago. At that point, Mr Hegseth said he would stop any company that worked with the Pentagon from having any relationship with Anthropic. Mr Amodei said the official sanction would affect “a small fraction of our business”.
Yet Mr Amodei said his company would challenge the supply-chain-risk designation in court, even as it sought to do “everything we can to de-escalate” the situation. This was “not because we want to have a fight here but because we feel this exceeds the scope of the statute’s authority”. He added: “I’m not worried about the overall company’s trajectory. Legally, the…designation has a limited impact on the company’s revenue. But the thing I’m worried about is some kind of wider chilling effect, and asserting our legal rights is part of that.” Big customers, including Microsoft, were expected to be supportive, he said. Shortly after the interview, a Microsoft spokesperson said that the designation meant Anthropic’s products could be used by all of its customers except the Department of War.
The row has become a test case of who ultimately controls the world’s most powerful technology—governments or private firms. It erupted when the Pentagon demanded that Anthropic let it use the lab’s models for all lawful purposes; for its part, Anthropic insisted on retaining red lines against the deployment of AI for mass surveillance of Americans and for fully autonomous weapons. In the interview, Mr Amodei said he thought that AI was too powerful to be fully controlled by either private firms or governments, and he hoped that the “unfortunate tensions” of the past week would lead to more public debate about the risks.
He said Congress needed to “step up” to legislate for safer AI, and that he hoped Anthropic’s public spat would catalyse a broader debate involving rival model-makers and the government about the balance between speed and safety when it comes to AI. Though his memo had accused OpenAI, a rival, of swooping in during the government’s fight with Anthropic and accepting a weaker deal, he suggested that the firm may have since “improved” the terms of its contract. He held out hope that the model-makers could find common ground. “It doesn’t have to be kumbaya, but we do have to get to an agreement. When some players are trying to set a higher standard, that does attract other players to follow them,” he said.
Despite the public tensions with the Trump administration and OpenAI, Mr Amodei said the crisis may have positive spillovers. It wasn’t just about Anthropic’s relationship with the government or its rivals, he said. “No technology this powerful comes at the world this fast without there being some kind of precipitating event that [leads one to say] ‘Whoa, we’re talking about something really different here.’” He added: “If there’s something we can all learn…it’s the discussion of this technology and the balance of power of this technology—these issues are becoming urgent and we all need to wake up to them.” ■
The full interview with Dario Amodei will be available on The Economist Insider.