Breaking out

AI danger gets real

April 15, 2026

An illustration of a robot bending the bars of a cage, on the verge of escape.
In the past week an extraordinary fight over artificial intelligence has broken out. The Trump administration’s row with Anthropic, one of America’s leading AI labs, over the Pentagon’s access to its models will be a test of who controls the world’s most potent technology. Its outcome will shape everything from America’s national security to the development of AI. It could also make an AI-enabled disaster more likely.
On each of these counts, you should be alarmed. In the first big clash between the concern for AI safety and the imperative to race ahead in an attempt to dominate the technology, America’s government has clearly shown it is on the side of speed. Because long-feared safety risks involving AI are already becoming realities, more such tests are at hand. Experts warn that the world is hurtling towards AI-mageddon. America’s rash embrace of risk makes that more likely.
The Pentagon fell out with Anthropic over the government’s demand that it should be allowed to use the company’s models for all legal purposes. Anthropic (a sponsor of The Economist’s “Insider” shows) refused on two grounds.
First, Dario Amodei, the chief executive of Anthropic, fears that AI could one day be used to analyse the digital footprints of ordinary Americans, a form of surveillance that today’s laws have not caught up with. Under Mr Trump, Immigration and Customs Enforcement is already using AI to analyse vast amounts of data to speed up deportations. Extending that to Americans does not seem far-fetched.
Second, Mr Amodei is worried about the use of autonomous weapons. AI remains unpredictable and immature as well as extraordinarily powerful. Because the technology could go rogue, he argues, it is too soon to take humans out of the loop.
The administration has responded to Anthropic with fury and retribution. President Donald Trump branded the company “leftwing nut jobs” who were trying to “dictate” how America’s “great military fights and wins wars”. He has given the federal government six months to rip up its contracts with Anthropic. Pete Hegseth, the secretary of war, says he will designate the firm a “supply-chain risk”.
This could be bluster; after all, Anthropic’s models are being used in the attacks on Iran. But if the threat is carried out, then for the first time an American company will be classed as a security risk and barred from doing business with defence contractors. On March 4th Anthropic went into damage-control mode after a leaked memo from Mr Amodei said the firm was under fire for not giving “dictator-style praise to Trump”.
With a normal government and a normal technology, the dispute would surely have been quickly sorted out. But this is not a normal government, and AI is not a normal technology. Our briefing this week explains how both of Mr Amodei’s fears reflect wider concerns about the dangers AI poses. As with enhanced government surveillance, one set of worries is that AI is too powerful. In December Anthropic’s Claude chatbot was told by hackers to break into the Mexican government’s records, supposedly as part of a security test; it found and exploited vulnerabilities and stole 150GB of taxpayer details, voter records and employee credentials. Researchers reckon that AI could be used to design analogues of the toxin ricin which, because of their novel protein structures, could not be detected by conventional methods.
The other set of worries, as with autonomous weapons, is that the models could stop heeding human instructions. Anthropic thinks that, because so much of its code is now written by AI, it is becoming hard to detect whether models are drifting away from human instructions. Many models now demonstrate a degree of what experts call “situational awareness”: when asked to delete themselves, they reason that the situation is a test, and refuse to do so.
Against this backdrop, the administration’s treatment of Anthropic shows how much it prizes AI as a tool of national power. Instead of setting out clear rules on how the technology will be used, the government is making an example of a firm that dared to raise concerns, even if that means hurting homegrown innovation. This can only encourage a race to the bottom. Already OpenAI, Anthropic’s chief rival, has leapt into the breach, striking a deal with the Pentagon that superficially resembles the one Anthropic had sought, but is in fact closer to what the Pentagon was after.
Where America leads, the world will surely follow. The pattern is being repeated as companies and governments downgrade safety concerns. Model-makers have invested hundreds of billions of dollars in the computing power they need to race ahead to the next upgrade. That puts them under intense pressure to move as fast as they can in order to turn a profit. Even Anthropic has watered down its safety protocols in response to competition. At a recent AI summit in India, most governments were keener to discuss fair access to the technology than safety.
You might have hoped that the governments of China and America, home to the world’s most advanced AI labs, would unite to set global standards, and then impose them on everyone else so that neither paid a competitive penalty. But the two superpowers are locked in a race of their own, because both see domination of AI as the key to dominating the rest of the 21st century.
No wonder that, as AI grows rapidly more powerful, experts in the field are gloomily predicting a catastrophe. Some warn of a “Chernobyl moment”: an AI-induced disaster that causes huge economic damage or loss of life.
The parable of Anthropic leads to the bleak conclusion that this danger is becoming more likely. Perhaps the best the world can hope for is a small-scale disaster that jolts China and America into pressing for safety precautions: not Chernobyl so much as Three Mile Island. But worse is possible, too. Alas, action is unlikely to come until it is too late.