Source: The Wall Street Journal
Original publication date: November 5, 2020
To train an artificial intelligence using “machine learning,” you give it a goal, such as picking out all the photos in a set that have a cat in them. But you don’t actually tell the AI how to achieve the goal. You just give it lots of examples of success and failure, and it figures out how to solve the problem itself.
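The learn-from-examples idea described above can be sketched with a toy classifier. This is purely illustrative: the "photos" below are invented two-number feature vectors (a real system would learn from raw pixels with a neural network), and the perceptron update rule stands in for whatever learning algorithm an actual system uses. The point is only that the programmer supplies labeled examples of success and failure, never an explicit recipe for recognizing a cat.

```python
def train_perceptron(examples, epochs=20, lr=0.1):
    """Learn weights from (features, label) pairs; label is 1 (cat) or 0."""
    n = len(examples[0][0])
    w = [0.0] * n
    b = 0.0
    for _ in range(epochs):
        for x, y in examples:
            pred = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
            err = y - pred  # nonzero only when the current guess was wrong
            # Nudge the weights toward the correct answer for this example.
            w = [wi + lr * err * xi for wi, xi in zip(w, x)]
            b += lr * err
    return w, b

def predict(w, b, x):
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0

# Toy "photos": [fur_texture, whisker_score] — hypothetical features,
# labeled 1 for cat, 0 for not-a-cat. The machine is told only whether
# each example is a success or a failure, not how to decide.
labeled = [
    ([0.9, 0.8], 1),
    ([0.8, 0.9], 1),
    ([0.1, 0.2], 0),
    ([0.2, 0.1], 0),
]
w, b = train_perceptron(labeled)
```

After training, `predict(w, b, [0.85, 0.85])` classifies a cat-like input as 1 and `predict(w, b, [0.1, 0.1])` classifies a cat-free one as 0, even though no rule for "cat" was ever written down.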
But imagine the scenario proposed by the philosopher Nick Bostrom. One day in the future, someone builds an advanced AI far smarter than any current system and gives it the goal of making paper clips. The AI, faithfully following its instructions, takes over the world's machines and starts to demolish everything from pots and pans to cars and skyscrapers, so that it can melt down the raw material and turn it into paper clips. The AI is doing what it thinks its creator wanted, but it gets things disastrously wrong.
On social media, we may face a version of this apocalypse already. Instead of maximizing paper clips, Facebook and Twitter maximize clicks, by showing us things that their algorithms think we will be interested in. It seems like an innocent goal, but the problem is that outrage and fear are always more interesting, or at least more clickable, than sober information.
The gap between what we actually want and what an AI thinks we want is called the alignment problem, since we have to align the machine’s function with our own goals. A great deal of research in AI safety and ethics is devoted to trying to solve it. In his fascinating new book “The Alignment Problem,” writer and programmer Brian Christian describes a lot of this research, but he also suggests an interesting and unexpected place to look for solutions: parenting.
After all, parents know a lot about dealing with super-intelligent systems and trying to give them the right values and goals. Often, that means making children’s priorities align with ours, whether that means convincing a toddler to take a nap or teaching a teenager to stay away from drugs. A lot of the work of being a parent, or a caregiver or teacher more generally, is about solving the alignment problem.
But when it comes to children, there’s an added twist. Computer programmers hope to make an AI that will do exactly what they want. But as parents, we don’t want our children to have exactly the same preferences and accomplishments that we do. We want them to become autonomous, with their own goals and values, which may turn out to be better than our own.
One possible solution to the alignment problem is to design AIs that are more skilled at divining what humans really want, even when we don’t quite know ourselves.
But it might be better to think of creating AIs as more like parenting, as the science fiction writer Ted Chiang does in his story “The Lifecycle of Software Objects.” The story imagines a future where people adopt and train childlike AIs called “digients” as a kind of game. Soon, though, the “parents” come to love the artificial children they care for, and ultimately they face the same dilemmas of independence and care that parents of human children do. If we ever do create truly human-level intelligence in machines, we may need to give them mothers.