Scientific American | When AI Kills, Who Is Responsible?


Source: Scientific American, March 2023 issue


Who is responsible when artificial intelligence harms someone? A California jury may soon have to decide. In December 2019 a person driving a Tesla with an AI navigation system killed two people in an accident. The driver faces up to 12 years in prison. Several federal agencies are investigating Tesla crashes, and the U.S. Department of Justice has opened a criminal probe into how Tesla markets its self-driving system. And California's Department of Motor Vehicles is examining Tesla's use of AI-guided driving features.


Our current liability system—used to determine responsibility and payment for injuries—is unprepared for AI. Liability rules were designed for a time when humans caused most injuries. But with AI, errors may occur without any direct human input. The liability system needs to adjust accordingly. Bad liability policy won’t just stifle AI innovation. It will also harm patients and consumers.


The time to think about liability is now—as AI becomes ubiquitous but remains underregulated. AI-based systems have already contributed to injuries. In 2019 an AI algorithm misidentified a suspect in an aggravated assault case, leading to a mistaken arrest. In 2020 an AI-based mental health chatbot encouraged a simulated suicidal patient to take her own life.


Getting the liability landscape right is essential to unlocking AI's potential. Uncertain rules and the prospect of costly litigation will discourage investment in, and the development and adoption of, AI in industries ranging from health care to autonomous vehicles.


Currently liability inquiries usually start—and stop—with the person who uses the algorithm. Granted, if someone misuses an AI system or ignores its warnings, that person should be liable. But AI errors are often not the fault of the user.


AI is constantly self-learning, meaning it takes information and looks for patterns in it. It is a “black box,” which makes it challenging to know what variables contribute to its output. This further complicates the liability question. Shifting the blame solely to AI engineers does not solve the issue. Of course, the engineers created the algorithm in question. But could every Tesla Autopilot accident be prevented by more testing before product launch?


The key is to ensure that all stakeholders—users, developers and everyone else along the chain—bear enough liability to ensure AI safety and effectiveness.


Industries ranging from finance to cybersecurity are on the cusp of AI revolutions that could benefit billions worldwide. But these benefits shouldn’t be undercut by poorly developed algorithms: 21st-century AI demands a 21st-century liability system.



