Sci-tech Policy and Management

China’s Hierarchical and Classified AI Risk Governance Framework: Insights from Foreign Comparisons and Localized Construction

  • Sun Na,
  • Qu Zhi
  • (School of Law, Xi'an Jiaotong University, Xi'an 710049, China)

Received date: 2025-07-08

Revised date: 2025-11-25

Online published: 2026-01-10

Abstract

Artificial intelligence (AI) is rapidly reshaping national governance, social operations, and economic structures, while simultaneously generating multifaceted risks relating to safety, privacy, discrimination, and systemic uncertainty. In an era characterized by rapid technological iteration and the wide proliferation of AI application scenarios, constructing a scientific, systematic, and forward-looking risk-based regulatory framework has become an essential legislative task for China. This article conducts a comparative examination of AI regulatory regimes in the European Union, South Korea, Canada, and the United States, analyzing their respective approaches to risk identification, classification models, and the allocation of regulatory obligations, with the aim of informing China's future AI legislation.
At the comparative level, the European Union adopts a four-tier framework—unacceptable, high, limited, and minimal risk—and imposes stringent, lifecycle-wide compliance obligations on high-risk systems. Although comprehensive, this framework produces excessively heavy compliance burdens and lacks flexibility in responding to technological dynamics. South Korea employs a horizontal regulatory model centered on high-impact AI, characterized by concise provisions and a streamlined structure, yet its taxonomy does not adequately differentiate among distinct categories of risks. Canada distinguishes between biased-output systems and harm-based systems, but ultimately applies uniform obligations to both categories, resulting in a disconnect between risk classification and regulatory practice. The United States adopts the most flexible structure: a dual-track model distinguishing safety-impacting AI from rights-impacting AI. On the basis of a unified minimum compliance baseline, the U.S. model adds differentiated requirements for rights-impacting systems and incorporates deferral, exemption, and dynamic adjustment mechanisms that enhance regulatory adaptability.
Despite differences in legislative traditions and policy objectives, the four jurisdictions share a common governance logic: risk identification as the regulatory starting point, risk classification as the core organizing principle, and differentiated obligations as the primary regulatory tool. The proportionality principle underlies these systems. The U.S. dual-track model distinguishing safety and rights impacts offers notable advantages in proportionality, precision, and institutional flexibility, thereby providing a valuable template for China in building a multi-level governance structure.
In light of these comparative insights, this article proposes that China optimize its AI risk-governance framework along three dimensions. First, China should establish a layered legislative structure combining common rules with sector-specific rules. A national-level Artificial Intelligence Basic Law should articulate overarching governance principles, risk-classification methods, and baseline regulatory obligations. Sectoral regulatory authorities should then develop technical standards and regulatory rules tailored to specific application scenarios, thereby balancing systemic coherence with operational flexibility and avoiding fragmented governance. Second, China should adopt a dual-track classification framework that distinguishes between safety-impacting AI and rights-impacting AI. Systems involving life safety, critical infrastructure, or public security should be categorized as safety-impacting AI, whereas systems that affect fairness, fundamental rights, or vulnerable groups should be regulated as rights-impacting AI. For systems that present hybrid or overlapping risks, a primary-risk identification mechanism should be introduced to classify them according to their dominant risk attributes. In addition, a combined-obligations mechanism should be implemented to allow both sets of obligations to apply where necessary, thereby enhancing the precision of risk identification and strengthening the applicability of regulatory tools. Third, China should develop a comprehensive system of dynamic adjustment and flexible exemptions. Through presumed-strict classification, application-based exemptions, periodic review, and cross-departmental feedback mechanisms, regulatory measures can be dynamically aligned with technological evolution and sector-specific characteristics. Such mechanisms help prevent regulatory rigidity and excessive compliance burdens, ensuring that governance tools remain adaptive to emerging risks and evolving industrial practices.
In sum, through comparative analysis of foreign regulatory models and the construction of a localized governance pathway, this article argues that the core of China's AI risk-governance framework lies in risk-based classification as its organizing principle, a layered legislative structure as its institutional foundation, a dual-path classification model as its methodological approach, and dynamic adjustment mechanisms as its regulatory toolset. A governance system guided by unified national principles, supported by differentiated regulatory rules, and coordinated between central and sectoral authorities can achieve a dynamic balance between safeguarding safety and fostering innovation, thereby forming a Chinese model of AI risk governance capable of addressing the complexities of the digital era.

Cite this article

Sun Na, Qu Zhi. China's Hierarchical and Classified AI Risk Governance Framework: Insights from Foreign Comparisons and Localized Construction[J]. Science & Technology Progress and Policy, 2026, 43(1): 114-123. DOI: 10.6049/kjjbydc.D9N202507023
