Release tension
Why Anthropic and OpenAI are locking up their latest models
April 16, 2026
Exclusivity is a powerful marketing tool. Just ask Anthropic. When on April 7th the artificial-intelligence lab announced that the preview version of its latest model, called “Mythos”, would be available only to an exclusive group of companies, envy quickly spread. Only one bank, JPMorgan Chase, made the initial list of invitees. The board of an Asian lender called in its chief executive a few days later to explain how it, too, could swiftly gain access to “Project Glasswing”, as Anthropic’s new club is called.
That response is unsurprising, given the lab’s claims about Mythos. It says that its new model is particularly adept at finding cyber-security weaknesses—which is why it is being rolled out in stages, lest those with nefarious aims get their hands on it before companies have had a chance to use the tool to patch their defences.
When Mythos is eventually released to the public, Anthropic says its hacking powers will be curtailed. Even so, the model’s purported capabilities have alarmed not just businesses but governments, too. After Mythos was unveiled, Scott Bessent, America’s treasury secretary, and Jerome Powell, chairman of the Federal Reserve, summoned America’s biggest banks to discuss the cyber-security risks posed by AI.
Not to be outshone, on April 14th OpenAI, maker of ChatGPT, announced that it would release its own system with supercharged hacking capabilities—a tailored version of the GPT-5.4 model it launched last month—to vetted users. This staggered approach to releasing models is more than just an act of public service, and may soon become the norm for frontier systems. That is because the labs stand to gain in three ways.
First, that which is locked up is harder to steal. In February Anthropic complained about “industrial-scale” campaigns by three Chinese labs to “distill” its models, and OpenAI has made similar protests in the past. Distillation entails using the outputs of one AI model to improve a system that is less capable. Most labs use the technique in one way or another. For example, a big model that requires lots of computing power may be distilled into a small but mighty one.
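The mechanics of distillation are simple enough to sketch. In one common formulation, the student model is trained not on hard labels but on the teacher's output distribution, softened with a "temperature" so that the teacher's relative preferences among wrong answers also carry signal. The sketch below is illustrative only; the logit values and temperature are made up, and real distillation pipelines operate over billions of such examples.

```python
import math

def softmax(logits, temperature=1.0):
    """Turn logits into probabilities; higher temperature spreads the mass."""
    scaled = [z / temperature for z in logits]
    m = max(scaled)  # subtract max for numerical stability
    exps = [math.exp(z - m) for z in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(teacher_logits, student_logits, temperature=4.0):
    """Cross-entropy of the student's predictions against the teacher's
    softened distribution -- the core training signal in distillation."""
    soft_targets = softmax(teacher_logits, temperature)
    student_probs = softmax(student_logits, temperature)
    return -sum(t * math.log(s) for t, s in zip(soft_targets, student_probs))

# Hypothetical logits over three candidate answer tokens.
teacher = [5.0, 1.0, 0.5]
student = [2.0, 1.5, 1.0]

print(softmax(teacher))                     # sharp distribution at T=1
print(softmax(teacher, temperature=4.0))    # softened targets for the student
print(distillation_loss(teacher, student))  # loss the student would minimise
```

This is why access matters so much: anyone who can query a model at scale can harvest exactly these output distributions, whatever intellectual-property law says.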
Distilling a rival’s model, however, is closer to industrial espionage. America’s labs are eager to use intellectual-property laws to prevent that from happening. But the surest way to stop a model from being plagiarised is to prevent those who would distill it from gaining access.
Second, labs are grappling with a shortfall in computing power, forcing them to triage. Despite the vast sums being spent on data centres, demand for AI continues to soar and each new frontier model gobbles up more power than the last. Anthropic has recently had to introduce limits on how much customers can use Claude, its chatbot, particularly at certain times of the day, and has altered its enterprise pricing to charge based on consumption.
Mythos appears to have a particularly voracious appetite for computing power. Anthropic’s published pricing for the service is high, at five times what it charges to use Opus 4.6, its most powerful model available to the public. That suggests Mythos is far more of a burden on its infrastructure. Keeping it behind closed doors will allow Anthropic to induct new customers only when capacity allows.
Third, restricting access to the most advanced AI systems conveniently shifts power back to the model-makers. Applications like Cursor, an AI coding tool, are popular among enterprises partly because they avoid vendor lock-in: IT can swap models in and out of the back end depending on cost and performance without requiring staff to learn a new interface. But it is difficult for developers to create applications that are compatible with models they cannot access. That may lead some customers to opt for a model-maker’s own tools, such as Anthropic’s Claude Code or OpenAI’s Codex.
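The lock-in point can be made concrete. Tools like Cursor sit behind a thin abstraction layer, so the back-end model can be swapped on cost or performance without the user noticing. The sketch below assumes a hypothetical registry of two made-up backends ("model-a", "model-b") and invented per-token prices; it shows the pattern, not any vendor's actual API.

```python
from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class ModelBackend:
    """A hypothetical wrapper around one vendor's completion endpoint."""
    name: str
    cost_per_1k_tokens: float
    complete: Callable[[str], str]

# Invented backends standing in for interchangeable frontier models.
backends = {
    "model-a": ModelBackend("model-a", 0.010, lambda p: f"[model-a] {p}"),
    "model-b": ModelBackend("model-b", 0.002, lambda p: f"[model-b] {p}"),
}

def cheapest_backend() -> ModelBackend:
    """IT's routing policy: pick the cheapest model currently registered."""
    return min(backends.values(), key=lambda b: b.cost_per_1k_tokens)

def answer(prompt: str, backend_name: Optional[str] = None) -> str:
    """The interface the user sees never changes, whichever model serves it."""
    backend = backends[backend_name] if backend_name else cheapest_backend()
    return backend.complete(prompt)

print(answer("ping"))  # routed to whichever backend the policy selects
```

A model that application developers cannot reach never makes it into such a registry, which is precisely how restricted access steers customers toward the model-maker's own tools instead.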
The upshot of all this is that leading AI labs are likely to continue exerting greater control over who gets access to their technology, and not only because they are concerned about safety. Those left out will be none too pleased. ■