Science of Science and Management of S.&T. ›› 2024, Vol. 45 ›› Issue (04): 98-117.

• Innovation Strategy and Management •

Risks, Principles and Responsibilities: Research on the Construction of the Ethical Norm Framework of Artificial Intelligence Social Experiment Based on the Experimental Path

  1. School of Public Policy and Management, Tsinghua University, Beijing 100084, China; 2. Think Tank Center, Tsinghua University, Beijing 100084, China
  • Received: 2023-03-11; Online: 2024-04-10; Published: 2024-04-28
  • Corresponding author: QIN Xiaoyang, qlamb@mail.tsinghua.edu.cn
  • About the authors: RU Peng (1978– ), male, from Jinan, Shandong, Han ethnicity, is an associate professor at the School of Public Policy and Management, Tsinghua University; Ph.D.; research interests: intelligent society governance, science, technology and society, and science and technology policy. QIN Xiaoyang (1991– ), female, from Qingdao, Shandong, Han ethnicity, is a postdoctoral fellow at the Think Tank Center, Tsinghua University; Ph.D.; research interest: ethics of science and technology. SU Jun (1965– ), male, from Huxian, Shaanxi, Han ethnicity, is a professor at the School of Public Policy and Management, Tsinghua University, and dean of the Institute for Intelligent Society Governance; Ph.D.; research interests: intelligent society governance and public science and technology policy.
  • Funding:
    National Natural Science Foundation of China General Program (72074134); Science and Technology Innovation 2030 Major Project of the Ministry of Science and Technology (2020AAA0105405); China Postdoctoral Science Foundation General Program (2023M732011)




Abstract: The Artificial Intelligence Social Experiment (AISE) investigates the broader social impact of AI technology. It employs social experimentation as a scientific research method, with the goal of anticipating AI hazards, seeking solutions in advance, and promoting the benign growth of AI technology. AISEs must be carried out in real-life social settings and will inevitably have profound ethical consequences for both the participants and the social environment within the experimental field. As a result, there is a pressing need to build a practical and effective ethical norm framework. Although some studies have investigated the relevant ethical risks and principles, scholars and practitioners remain divided over how to make these abstract concepts empirically tractable and useful, that is, how to "translate" them into practices that guide the actual conduct of AISEs. Meanwhile, the unpredictability of AI technology makes it harder to forecast and prevent ethical problems, and to assign responsibility within AISEs. In this context, a more detailed account of how to identify ethical risks, apply ethical principles, and define ethical obligations in practice is required. Fortunately, the systematic setting of an AISE equips researchers with the tools and approaches to tackle these difficult questions. By sorting out the basic protocol of a standard AISE, the typical process can be summarized in seven stages: building experimental scenarios, clarifying experimental methods, confirming experimental objects, setting observed variables, organizing experimental implementation, analyzing experimental data, and providing feedback on experimental results.
Based on these specific stages, ethical risks, principles, and responsibilities can be appropriately assessed and integrated, resulting in an AISE ethical norm system that not only conveys a general value consensus but also has strong practical operability. With this approach, the ethical risks of AI technology and of social experimentation can be merged at each experimental stage, to be enriched and further interpreted for diverse application scenarios and various AI features. Once the ethical risks are identified, the ethical principles established along the experimental path can directly target the risks within each stage, maximizing their normative efficacy. By organizing the most prevalent ethical principles in AI ethics and social experiment ethics, five overlapping principles emerge: beneficence, autonomy, non-maleficence, justice, and transparency. Each principle informs different normative criteria and exerts varied normative effects at different phases; this comprehensive refinement provides detailed operating guidance for all AISE participants. Furthermore, the key participants in an AISE can be classified as application subjects, research subjects, technical subjects, and review subjects. Each participant contributes distinct tasks, which establishes their allocated obligations and their accountability for specific elements of the experiments. In this way, a complete comprehension of the AISE ethical norm framework can be acquired. An ethical norm system based on the experimental path is not only easy to grasp, reflecting the ethical qualities of AISE, but also easy to apply, which aids in resolving practical ethical concerns, avoiding ethical harms, implementing ethical principles, and defining ethical obligations.

Key words: artificial intelligence social experiment, experimental path, ethical risks, ethical principles, ethical responsibility
