ChatGPT and Academic Misconduct Governance: Challenges and Responses
Wang Shao
(School of Marxism, Tongji University, Shanghai 200092, China)
Abstract ChatGPT is capable of reinforcement learning from human feedback, and its intervention in scientific research has brought new problems of academic misconduct. The connotation of academic misconduct includes "violating academic ethics", "impeding scientific communication", and "misleading the research community". ChatGPT affects this connotation in three ways. First, scientific papers produced by ChatGPT are highly integrated but "non-original" products. Using "non-original" work for academic publication certainly violates academic ethics, but ChatGPT lacks moral subjectivity and thus cannot itself bear the responsibility of "violating academic ethics". Second, ChatGPT cannot reliably distinguish credible from untrustworthy sources, and the biases that lead human beings astray may be further amplified in its output, thus "impeding scientific communication". Third, once ChatGPT's capabilities reach the average level of researchers, those who choose not to use it are placed on an uneven playing field, and the phenomenon of "misleading the research community" will become serious.

The extension of academic misconduct includes plagiarism, improper attribution, and problematic peer review. ChatGPT's impact on this extension is manifested at three levels. First, the creativity generated by ChatGPT is a qualitative change grounded in data analysis, and it may arise both in users' minds and in ChatGPT's responses to users' feedback; whether ChatGPT should be credited when such results are published becomes an issue for the governance of improper attribution. Second, articles that ChatGPT generates from large web datasets may appear unique while actually being "reworked", and such plagiarism is difficult to detect.
Third, ChatGPT misinterprets questions and tasks that require a deep understanding of the literature and produces biased texts, which can trigger problematic peer review: if the reviewers themselves are biased, the biased texts are "right on target".

From the perspective of governance, ChatGPT has implications for the subject, object, process, and standards of academic misconduct governance. First, ChatGPT expands the scope of governance subjects: when academic misconduct can be attributed to ChatGPT to some extent, or ChatGPT is considered to have exacerbated the risk of misconduct, the institutions and personnel who develop and technically supervise the technology must assume a share of responsibility for governance. Second, ChatGPT restructures object responsibility: the quality of ChatGPT-generated text depends on the quality of users' questions and feedback, and the decision of how to use the text also rests with users, which increases users' responsibility as governance objects, while the expanded scope of objects also changes the original structure of object responsibility. Third, ChatGPT adds procedures to the governance process, mainly covering disablement after initiation, detection, and feedback after processing. Finally, ChatGPT reshapes the connotation of governance standards: these standards comprise the general requirements of academic ethics and the basic characteristics of academic misconduct, and ChatGPT's impact on the concept of academic misconduct touches both aspects.

When part of academic research moves into ChatGPT's dialog box, governance strategies should fully consider this network effect and adjust accordingly.
The first step is to build a network of subject-object cooperation, in which governance subjects and governance objects work together to address the new risks posed by ChatGPT. Second, governance procedures and standards should be updated by consensus; the new standards should then be incorporated into the learning models of AI content-production software such as ChatGPT, so that the software itself helps prevent users from engaging in academic misconduct, and the standards must be kept up to date because biases or errors may persist in the AI's implementation of them. Finally, a fluid governance framework should be established. This framework is centered on ethical principles for governing academic misconduct, and the parties that constitute it are movable: when the technology is controllable, the framework ensures its use under controllable measures so as to obtain technical effectiveness; otherwise, the framework hedges the risk through temporary controls until control is regained.
Received: 20 February 2023