BEIJING: In a move reflecting the fast pace of breakthroughs in artificial intelligence, China released its upgraded AI Safety Governance Framework 2.0 on Sept 15.
The latest framework signals a significant strategic evolution from its predecessor, shifting from a static list of risks to a full life-cycle governance methodology.
It comes just a year after the first framework was released by the National Technical Committee 260 on Cybersecurity, China’s key body responsible for cybersecurity standardization.
In its preface, the new iteration notes that the update was driven by breakthroughs in AI technology that had been “beyond expectation”.
These breakthroughs include the emergence of high-performance reasoning models that drastically increase AI's intellectual capabilities, and the open-sourcing of high-efficacy, lightweight models, which has sharply lowered the barrier to deploying AI systems.
At the same time, the manifestations and magnitude of AI security risks — and people’s understanding of them — are evolving rapidly.
The core objective has evolved from simply preventing risks to ensuring technology remains under human control, according to Wang Yingchun, a researcher at the Shanghai Artificial Intelligence Laboratory, who called the move a “major leap” in governance logic.
In a commentary published on the official website of the Cyberspace Administration of China, he emphasized that the framework aims to guard the bottom line of national security, social stability and the long-term survival of humanity.
The most significant shift from version 1.0 is the introduction of a new governance principle focused on trustworthy applications and the prevention of loss of control, Wang said.
This principle is supported by the framework’s new addendum listing the fundamental principles for trustworthy AI, which mandates ultimate human control and value alignment.
Hong Yanqing, a professor specializing in cybersecurity at the Beijing Institute of Technology, said in a commentary that the newly added principle is intended to ensure that the evolution of AI remains safe, reliable and controllable. It must guard against runaway risks that could threaten human survival and development, and keep AI firmly under human control, he said.
Reflecting this high-stakes focus, the new framework lists real-world threats that directly affect human security and scientific integrity. These include the loss of control over knowledge and capabilities related to nuclear, biological, chemical and missile weapons.
It explains that AI models are often trained on broad, content-rich datasets that may include foundational knowledge related to nuclear, biological and chemical weapons, and that some systems are paired with retrieval-augmented generation tools.
“If not effectively governed, such capabilities could be exploited by extremists or terrorists to acquire relevant know-how and even to design, manufacture, synthesize and use nuclear, biological and chemical weapons — undermining existing control regimes and heightening peace and security risks across all regions,” the framework said. –The Daily Mail-China Daily news exchange item



