Inside the Summit Where China Pitched Its AI Agenda to the World

Behind closed doors, Chinese researchers are laying the groundwork for a new global AI agenda—without input from the US.

Three days after the Trump administration published its much-anticipated AI action plan, the Chinese government put out its own AI policy blueprint. Was the timing a coincidence? I doubt it.

China’s “Global AI Governance Action Plan” was released on July 26, the first day of the World Artificial Intelligence Conference (WAIC), the largest annual AI event in China. Geoffrey Hinton and Eric Schmidt were among the many Western tech industry figures who attended the festivities in Shanghai. Our WIRED colleague Will Knight was also on the scene.

The vibe at WAIC was the polar opposite of Trump’s America-first, regulation-light vision for AI, Will tells me. In his opening speech, Chinese Premier Li Qiang made a sobering case for the importance of global cooperation on AI. He was followed by a series of prominent Chinese AI researchers, who gave technical talks highlighting urgent questions the Trump administration appears to be largely brushing off.

Zhou Bowen, leader of the Shanghai AI Lab, one of China’s top AI research institutions, touted his team’s work on AI safety at WAIC. He also suggested the government could play a role in monitoring commercial AI models for vulnerabilities.

In an interview with WIRED, Yi Zeng, a professor at the Chinese Academy of Sciences and one of the country’s leading voices on AI, said that he hopes AI safety organizations from around the world find ways to collaborate. “It would be best if the UK, US, China, Singapore, and other institutes come together,” he said.

The conference also included closed-door meetings about AI safety policy issues. Speaking after he attended one such confab, Paul Triolo, a partner at the advisory firm DGA-Albright Stonebridge Group, told WIRED that the discussions had been productive, despite the noticeable absence of American leadership. With the US out of the picture, “a coalition of major AI safety players, co-led by China, Singapore, the UK, and the EU, will now drive efforts to construct guardrails around frontier AI model development,” Triolo told WIRED. He added that it wasn’t just the US government that was missing: Of all the major US AI labs, only Elon Musk’s xAI sent employees to attend the WAIC forum.

Many Western visitors were surprised to learn how much of the conversation about AI in China revolves around safety regulations. “You could literally attend AI safety events nonstop in the last seven days. And that was not the case with some of the other global AI summits,” Brian Tse, founder of the Beijing-based AI safety research institute Concordia AI, told me. Earlier this week, Concordia AI hosted a day-long safety forum in Shanghai with prominent AI researchers like Stuart Russell and Yoshua Bengio.

Switching Positions

Comparing China’s AI blueprint with Trump’s action plan, it appears the two countries have switched positions. When Chinese companies first began developing advanced AI models, many observers thought they would be held back by censorship requirements imposed by the government. Now, US leaders say they want to ensure homegrown AI models “pursue objective truth,” an endeavor that, as my colleague Steven Levy wrote in last week’s Backchannel newsletter, is “a blatant exercise in top-down ideological bias.” China’s AI action plan, meanwhile, reads like a globalist manifesto: It recommends that the United Nations help lead international AI efforts and suggests governments have an important role to play in regulating the technology.

Although their governments are very different, when it comes to AI safety, people in China and the US are worried about many of the same things: model hallucinations, discrimination, existential risks, cybersecurity vulnerabilities, etc. Because the US and China are developing frontier AI models “trained on the same architecture and using the same methods of scaling laws, the types of societal impact and the risks they pose are very, very similar,” says Tse. That also means academic research on AI safety is converging in the two countries, including in areas like scalable oversight (how humans can monitor AI models with other AI models) and the development of interoperable safety testing standards.

But Chinese and American leaders have demonstrated very different attitudes toward these issues. On one hand, the Trump administration recently tried and failed to impose a 10-year moratorium on new state-level AI regulations. On the other hand, Chinese officials, including Xi Jinping himself, are increasingly speaking out about the importance of putting guardrails on AI. Beijing has also been busy drafting domestic standards and rules for the technology, some of which are already in effect.

As Trump goes rogue with unorthodox and inconsistent policies, the Chinese government increasingly looks like the adult in the room. With its new AI action plan, Beijing is trying to seize the moment and send the world a message: If you want leadership on this world-changing innovation, look here.

Charm Offensive

I don’t know how effective China’s charm offensive will be in the end, but the global retreat of the US does feel like a once-in-a-century opportunity for Beijing to spread its influence, especially at a moment when every country is looking for role models to help them make sense of AI risks and the best ways to manage them.

But there’s one thing I’m not sure about: How eager will China’s domestic AI industry be to embrace this heightened focus on safety? While the Chinese government and academic circles have significantly ramped up their AI safety efforts, industry has so far seemed less enthusiastic—just like in the West.

Chinese AI labs disclose less information about their AI safety efforts than their Western counterparts do, according to a recent report published by Concordia AI. Of the 13 frontier AI developers in China the report analyzed, only three provided details about safety assessments in their research publications.

Will told me that several tech entrepreneurs he spoke to at WAIC said they were worried about AI risks such as hallucination, model bias, and criminal misuse. But when it came to AGI, many seemed optimistic that the technology would have positive impacts on their lives, and they were less concerned about things like job loss or existential risks. Privately, Will says, some entrepreneurs admitted that addressing existential risks isn’t as important to them as figuring out how to scale, make money, and beat the competition.

But the clear signal from the Chinese government is that companies should be encouraged to tackle AI safety risks, and I wouldn’t be surprised if many startups in the country change their tune. Triolo, of DGA-Albright Stonebridge Group, said he expected Chinese frontier research labs to begin publishing more cutting-edge safety work.

Some WAIC attendees see China’s focus on open-source AI as a key part of the picture. “As Chinese AI companies increasingly open-source powerful AIs, their American counterparts are pressured to do the same,” Bo Peng, a researcher who created the open-source large language model RWKV, told WIRED.

Peng envisions a future where different nations—including ones that do not always agree—work together on AI. “A competitive landscape of multiple powerful open-source AIs is in the best interest of AI safety and humanity's future,” he explained. “Because different AIs naturally embody different values and will keep each other in check.”


This is an edition of Zeyi Yang and Louise Matsakis’s Made in China newsletter. Read previous newsletters here.