Against the backdrop of "AI+", generative artificial intelligence applications (AIGC) such as ChatGPT, Sora, and ERNIE Bot have burst onto the scene, markedly enhancing human productivity and innovative potential while also giving rise to risks such as the imbalance of ethical values, the loss of control over ethical norms, and the disorder of ethical relationships. Analyzed from the dimension of technological risk, the core causes of AIGC's ethical risks are the opacity of algorithms and the unpredictability of intelligent technology, the inherent tension between value rationality and instrumental rationality, and the limitations of human risk perception and response mechanisms. Accordingly, the governance of AIGC ethical risks is converging on a paradigm of "trusted governance", whose essential aim is to ensure that the technology is controllable, accountable, fair, reliable, interpretable, and secure. Guided by trusted governance, the data risk governance framework should be improved, the mechanisms for preventing infringement risks and attributing liability optimized, and the ethical governance system strengthened, so as to empower the development of new quality productive forces.
As AI technology permeates every sphere of social life, the rise of AIGC (Artificial Intelligence Generated Content) applications represented by ChatGPT, Sora, and ERNIE Bot marks the leap of AI to a new stage of development and signals the approach of the era of artificial general intelligence. Yet while AIGC greatly amplifies human innovative potential, it also poses considerable challenges to the existing ethical governance system. Against this backdrop, this article examines the current state and future trends of AIGC in China and analyzes its ethical risks and their generative logic. Drawing on global trends in the governance of AIGC's ethical risks, it proposes a governance framework that is both attuned to China's national conditions and forward-looking, with the aim of providing robust theoretical guidance and practical pathways for the standardized development of AIGC.
This study begins by analyzing the potential ethical risks of AIGC in light of its technical attributes. First, it addresses the risk of ethical value imbalance, evident above all in the intensification of algorithmic discrimination. Second, it examines the risk of ethical norms slipping beyond control, manifested chiefly in the challenge of assigning accountability. Third, it explores the risk of disordered ethical relationships, characterized primarily by a diminution of human agency. From the perspective of technological risk, the root cause of AIGC's ethical risks lies in the complexity and uncertainty of the technology: under the "black box" phenomenon of algorithms in particular, AIGC's behavior becomes difficult to anticipate and manage, amplifying ethical risks. At the same time, the tension between instrumental rationality and value rationality, together with the limitations of human risk perception and response, further deepens the complexity of these risks.
Considering the trends in AIGC governance, this article advocates "trusted governance" as the core strategy for mitigating AIGC's ethical risks. It underscores that the technology must be controllable, accountable, fair, reliable, interpretable, and secure, so that technological advances remain transparent and equitable and contribute positively to society. In parallel, the Artificial Intelligence Law (Scholars' Proposal Draft) provides an important basis for governance and charts the legal path toward trusted governance of AIGC.
Under the principle of "trusted governance", this article proposes three core strategies. First, it calls for establishing a robust data risk governance framework. By integrating cross-border collaborative governance with fine-grained internal governance, it emphasizes adherence to the standard of "reliability": strengthening regulation of cross-border data flows and promoting data ethics and compliance within enterprises so that data use is reliable and secure. Second, it advocates an optimized mechanism for attributing liability for AIGC infringement. This involves assessing subjects' obligations, product defects, and infringement liability in light of AIGC's technical characteristics so as to enforce the standard of "accountability". Third, it recommends embedding a "people-oriented" approach in the AIGC ethical governance system, seeking technological solutions tailored to China's context and capable of addressing its distinctive challenges. In practice this involves two aspects: at the organizational level, creating a dedicated AIGC ethical governance body to oversee ethical governance and supervision, thereby enhancing AIGC's controllability; and at the normative level, harnessing the complementary advantages of policy, law, and technology to uphold the standards of "fairness", "safety", and "interpretability", guiding AIGC toward positive development.
Looking ahead, it is essential to further integrate market development with China's national conditions, taking "trusted governance" as the core and the Scholars' Proposal Draft as the foundation for constructing a comprehensive system for governing AIGC's ethical risks. Continued exploration is also needed in ethical governance, focusing on the organic integration of technological ethics with legal governance, so as to devise distinctively Chinese strategies for AI ethical governance, thereby equipping China to adapt to, and potentially lead, the high-quality development of the new round of the global digital economy.