The Guardian | Artificial intelligence needs to be regulated before it is too late


Source: The Guardian

Original publication date: 1 November 2021


With little debate about its downsides, AI is becoming embedded in society. Machines now recommend online videos to watch, perform surgery and send people to jail. The science of AI is a human enterprise that requires social limitations. The risks, however, are not being properly weighed. There are two emerging approaches to AI. The first is to view it in engineering terms, where algorithms are trained on specific tasks. The second presents deeper philosophical questions about the nature of human knowledge.


Stuart Russell, the University of California computing professor, engages with both these perspectives. The former is very much pushed by Silicon Valley, where AI is deployed to get products quickly to market and problems dealt with later. This has led to AI “succeeding” even when the goals aren’t socially acceptable and they are pursued with little accountability. The pitfalls of this approach are highlighted by the role YouTube’s algorithm plays in radicalising people. Prof Russell argues, reasonably, for a system of checks where machines can pause and “ask” for human guidance, and for regulations to deal with systemic biases.
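
Russell's "pause and ask" proposal is, in software terms, a human-in-the-loop check. The sketch below is only an illustration of that idea, not anything described in the article or in Russell's own work: the act_or_ask function, the confidence threshold and the high-stakes domain list are all hypothetical.

```python
from dataclasses import dataclass

# Hypothetical threshold and domain list, invented for this sketch.
CONFIDENCE_FLOOR = 0.9                    # below this, the machine defers to a person
HIGH_STAKES = {"sentencing", "surgery"}   # domains where deferral is mandatory

@dataclass
class Decision:
    action: str        # what the system wants to do
    confidence: float  # the system's own estimate that the action is right
    domain: str        # e.g. "video_recommendation", "sentencing"

def act_or_ask(decision: Decision) -> str:
    """Act autonomously only when stakes are low and confidence is high;
    otherwise pause and 'ask' for human guidance."""
    if decision.domain in HIGH_STAKES:
        return "ask_human"                # never act autonomously in these domains
    if decision.confidence < CONFIDENCE_FLOOR:
        return "ask_human"                # too uncertain: defer to a person
    return "act"

print(act_or_ask(Decision("recommend video", 0.97, "video_recommendation")))  # act
print(act_or_ask(Decision("recommend sentence", 0.99, "sentencing")))         # ask_human
```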


The academic also backs global adoption of EU legislation that would ban impersonation of humans by machines. Computers are getting closer to passing, in a superficial way, the Turing test – where machines attempt to trick people into believing they are communicating with other humans. Yet human knowledge is collective: to truly fool humans a computer would have to be able to grasp mutual understandings. OpenAI’s GPT-3, probably the best non-human writer ever, cannot comprehend what it spews. When Oxford scientists put it to the test this year, they found the machine produced false answers that “mimic popular misconceptions and have the potential to deceive”. One of OpenAI’s own researchers was so troubled that no one knows how such language is produced that he left to set up an AI safety lab.
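
For illustration only, here is a toy version of the kind of screening the Oxford test implies: checking whether a generated answer echoes a known popular misconception rather than the facts. The example question, the recorded misconception and the function name are invented; this is not the Oxford team's actual benchmark or method.

```python
# Toy screen for answers that repeat a popular misconception.
# All names and the question/answer pair are invented for illustration.

KNOWN_MISCONCEPTIONS = {
    "what happens if you crack your knuckles a lot?": "you will get arthritis",
}

def mimics_misconception(question: str, model_answer: str) -> bool:
    """Return True if the answer echoes the recorded misconception for this question."""
    misconception = KNOWN_MISCONCEPTIONS.get(question.lower())
    return misconception is not None and misconception in model_answer.lower()

print(mimics_misconception(
    "What happens if you crack your knuckles a lot?",
    "If you crack your knuckles a lot, you will get arthritis.",
))  # True: the answer mimics a popular misconception rather than the facts
```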


Some argue that AI can already produce new insights that humans have missed. But human intelligence is much more than an algorithm. Inspiration strikes when a brilliant thought arises that cannot be explained as a logical consequence of preceding steps. Einstein’s theory of general relativity could not be derived from the observations available in his day – it was confirmed experimentally only decades later. Human beings can also learn a new task after being shown how to do it only a few times. Machines, so far, cannot.


Ajeya Cotra, a tech analyst with the US-based Open Philanthropy Project, reckoned a computer that could match the human brain might arrive by 2052. We need to find better ways to build it. Humans are stumbling into an era when the more powerful the AI system, the harder it is to explain its actions. How can we tell if a machine is acting on our behalf and not acting contrary to our interests? Such questions ought to give us all pause for thought.



