China’s Hierarchical and Classified AI Risk Governance Framework: Insights from Foreign Comparisons and Localized Construction
Sun Na, Qu Zhi
Science & Technology Progress and Policy
2026, 43(1): 114-123.
DOI: 10.6049/kjjbydc.D9N202507023
Artificial intelligence (AI) is rapidly reshaping national governance, social operations, and economic structures, while simultaneously generating multifaceted risks relating to safety, privacy, discrimination, and systemic uncertainty. In an era characterized by rapid technological iteration and the wide proliferation of AI application scenarios, constructing a scientific, systematic, and forward-looking risk-based regulatory framework has become an essential legislative task for China. This article conducts a comparative examination of AI regulatory regimes in the European Union, South Korea, Canada, and the United States, analyzing their respective approaches to risk identification, classification models, and the allocation of regulatory obligations, with the aim of informing China's future AI legislation. At the comparative level, the European Union adopts a four-tier framework (unacceptable, high, limited, and minimal risk) and imposes stringent, lifecycle-wide compliance obligations on high-risk systems. Although comprehensive, this framework produces excessively heavy compliance burdens and lacks flexibility in responding to technological change. South Korea employs a horizontal regulatory model centered on high-impact AI, characterized by concise provisions and a streamlined structure, yet its taxonomy does not adequately differentiate among distinct categories of risk. Canada distinguishes between biased-output systems and harm-based systems but ultimately applies uniform obligations to both categories, resulting in a disconnect between risk classification and regulatory practice. The United States adopts the most flexible structure: a dual-track model distinguishing safety-impacting AI from rights-impacting AI. On the basis of a unified minimum compliance baseline, the U.S. model adds differentiated requirements for rights-impacting systems and incorporates deferral, exemption, and dynamic adjustment mechanisms that enhance regulatory adaptability.
Despite differences in legislative traditions and policy objectives, the four jurisdictions share a common governance logic: risk identification as the regulatory starting point, risk classification as the core organizing principle, and differentiated obligations as the primary regulatory tool. The proportionality principle underlies all of these systems. The U.S. dual-track model distinguishing safety and rights impacts offers notable advantages in proportionality, precision, and institutional flexibility, thereby providing a valuable template for China in building a multi-level governance structure. In light of these comparative insights, this article proposes that China optimize its AI risk-governance framework along three dimensions.

First, China should establish a layered legislative structure combining common rules with sector-specific rules. A national-level Artificial Intelligence Basic Law should articulate overarching governance principles, risk-classification methods, and baseline regulatory obligations. Sectoral regulatory authorities should then develop technical standards and regulatory rules tailored to specific application scenarios, thereby balancing systemic coherence with operational flexibility and avoiding fragmented governance.

Second, China should adopt a dual-track classification framework that distinguishes between safety-impacting AI and rights-impacting AI. Systems involving life safety, critical infrastructure, or public security should be categorized as safety-impacting AI, whereas systems that affect fairness, fundamental rights, or vulnerable groups should be regulated as rights-impacting AI. For systems that present hybrid or overlapping risks, a primary-risk identification mechanism should be introduced to classify them according to their dominant risk attributes. In addition, a combined-obligations mechanism should be implemented to allow both sets of obligations to apply where necessary, thereby enhancing the precision of risk identification and strengthening the applicability of regulatory tools.

Third, China should develop a comprehensive system of dynamic adjustment and flexible exemptions. Through presumed-strict classification, application-based exemptions, periodic review, and cross-departmental feedback mechanisms, regulatory measures can be dynamically aligned with technological evolution and sector-specific characteristics. Such mechanisms help prevent regulatory rigidity and excessive compliance burdens, ensuring that governance tools remain adaptive to emerging risks and evolving industrial practices.

In sum, through comparative analysis of foreign regulatory models and the construction of a localized governance pathway, this article argues that the core of China's AI risk-governance framework lies in risk-based classification as its organizing principle, a layered legislative structure as its institutional foundation, a dual-path classification model as its methodological approach, and dynamic adjustment mechanisms as its regulatory toolset. A governance system guided by unified national principles, supported by differentiated regulatory rules, and coordinated between central and sectoral authorities can achieve a dynamic balance between safeguarding safety and fostering innovation, thereby forming a Chinese model of AI risk governance capable of addressing the complexities of the digital era.