Technological trade-offs

Why China’s government worries about AI

April 17, 2026

ON A MARCH morning in the southern city of Shenzhen, hundreds of people queued outside the headquarters of Tencent, a Chinese tech giant. They were waiting to install a new artificial-intelligence agent called OpenClaw. Most were pensioners, toying with a newly powerful technology, or students facing a grim jobs market. Scenes of such “lobster farming”, named after OpenClaw’s mascot, were replicated in cities across the country, with Chinese user numbers quickly outpacing those in America or anywhere else.
The excitement didn’t last. Some users had sensitive data deleted. Local media reported on agents burning through computing power at the expense of oblivious users. Unusually, Chinese authorities appeared to want to slow the spread of an AI application. They worried that such agents made easy targets for hackers, and could leak personal data or take over devices entirely. Regulators banned the software in banks and other sensitive sectors. Almost as soon as the OpenClaw craze came, it dissipated. A small industry emerged for paid “uninstallations”.
Chinese policymakers, led by President Xi Jinping, have come to see AI as an “epoch-defining technology”. It is central to China’s competition with America, and Mr Xi’s plan to make the ailing economy grow. OpenClaw promised much of what officials say they want from AI: a productivity boost enabled by free, open-source software and rapid adoption by its tech-savvy population. Yet the agent also brought risks, heightening worries that AI will destroy jobs and that it could expose vulnerabilities in the country’s cyber defences. The episode marked a growing recognition inside the government that as AI models get more powerful, it may face difficult trade-offs.
Until recently, China focused on the more mundane risks of AI, paying only glancing attention to science-fiction-style worries of a robot takeover (see chart). In 2017 Mr Xi said that the technology must always be “controllable”—meaning produced domestically under the authority of the government. After the release of ChatGPT in 2022, officials focused on the risk that chatbots might say unflattering things about China’s leaders. Firms had to register their algorithms with the government and test them against long lists of banned words and phrases. After an apparent rise in “AI boyfriends”, a new edict prohibits systems from encouraging self-harm or emotional dependency.
As in America, authorities are in the uncomfortable position of relying on private firms to develop the technology on which their national power may one day depend. OpenClaw mania was boosted by some of China’s biggest tech firms, including Tencent, Alibaba and Bytedance. The firms saw an opportunity at last to make money from consumers using AI models (which are usually given away in China), by selling computing power for hungry agents. Recognising that the interests of such firms may not align with its own, the Communist Party depends on a coterie of trusted academics at leading universities for advice on how to regulate them.
The movers and shakers of Chinese AI keep lower profiles than their Western counterparts, in part because of the expectations the government places on the sector. Liang Wenfeng, the founder of DeepSeek, is an exception. After his firm’s model shocked the world last year, state media broadcast a handshake between him and Mr Xi. Yan Junjie, the founder of Minimax and a newly minted billionaire, and Yang Zhilin, the founder of Moonshot AI, are not household names but have been invited in recent months to brief the prime minister, Li Qiang, on economic issues. Z.ai, a lab founded by Jie Tang of Tsinghua University in Beijing, also has deep connections with the government.
The state wants them to succeed, but their success may have costs for China’s ruling party. It is difficult to know whether elites fret about the technology itself or the public anxiety that appears to be steadily emerging, says Matt Sheehan of the Carnegie Endowment, an American think-tank. Although people in China are consistently more optimistic in surveys about AI than those in other countries, a report prepared by a state think-tank shows that the share of workers who worry that AI might replace their jobs has risen, from 49% in 2024 to 59% last year. And in a speech early this year Mr Xi raised “security problems” such as data theft and, perhaps for the first time, a potential technical loss of control over frontier AI.
Already AI deployment is fuelling fears of lay-offs. In the central city of Wuhan, where autonomous taxis are operating, cab drivers blame robotaxis for taking jobs and have petitioned the government to slow the roll-out. Tools like OpenClaw are landing in a period of 16% youth unemployment (roughly double the level in America) and dire wage growth. Media chatter suggests becoming skilled with AI can boost job prospects, says Poe Zhao, a Beijing-based tech analyst. “What looks like grassroots tech enthusiasm is closer to grassroots career anxiety,” he says of the queues outside Tencent.
In December Beijing’s government published a ruling that firms could not fire employees replaced by AI. Doing so, according to a commentary published in state media, was akin to “offloading” risks from technological change onto workers, whom firms “enjoying the benefits of AI” had a duty to protect. State media project confidence that China can both deploy AI and protect jobs. A government white paper is said to be in the works on the impact of the technology on employment.
Concerns about rogue AI have also been heightened by recent incidents. In the weeks after the OpenClaw craze, the government told a group of experts to propose “safety standards” and rules governing agent behaviour. Public safety was more immediately put at risk late last month when more than a hundred robotaxis in Wuhan suddenly stopped, leaving customers stranded on highways for up to two hours, local media reported. Senior officials, including from the Ministry of Public Security, met to discuss such incidents soon afterwards.
News on April 7th that Anthropic, an American lab, has developed an unreleased model capable of finding vulnerabilities in cyber defences has so far drawn a muted reaction in China, but will surely be picked over by national-security authorities. As the capabilities of AI models bound ahead, so do dreams of the riches and power they could bring China in its rivalry with America. Authorities insist that they take public concern seriously, while generally urging faster adoption. If forced to choose, the party will prioritise its technological goals over all else, says a government adviser in Beijing. For it, winning the AI race is “life or death”.