[Repost] How the Enlightenment Ends

Philosophically, intellectually—in every way—human society is unprepared for the rise of artificial intelligence.

By Henry A. Kissinger | June 2018

https://www.theatlantic.com/magazine/archive/2018/06/henry-kissinger-ai-could-mean-the-end-of-human-history/559124/


Three years ago, at a conference on transatlantic issues, the subject of artificial intelligence appeared on the agenda. I was on the verge of skipping that session—it lay outside my usual concerns—but the beginning of the presentation held me in my seat.

The speaker described the workings of a computer program that would soon challenge international champions in the game Go. I was amazed that a computer could master Go, which is more complex than chess. In it, each player deploys 180 or 181 pieces (depending on which color he or she chooses), placed alternately on an initially empty board; victory goes to the side that, by making better strategic decisions, immobilizes his or her opponent by more effectively controlling territory.

The speaker insisted that this ability could not be preprogrammed. His machine, he said, learned to master Go by training itself through practice. Given Go’s basic rules, the computer played innumerable games against itself, learning from its mistakes and refining its algorithms accordingly. In the process, it exceeded the skills of its human mentors. And indeed, in the months following the speech, an AI program named AlphaGo would decisively defeat the world’s greatest Go players.
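The training loop described above (given only the rules, the program plays against itself and learns from its mistakes) can be sketched in miniature. The toy below learns tic-tac-toe rather than Go, using plain tabular value updates; AlphaGo's actual method combines deep neural networks with Monte Carlo tree search, and every function name and parameter here is illustrative, not drawn from the article.

```python
import random
from collections import defaultdict

# All eight winning lines on a 3x3 board, indexed 0..8.
WIN_LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),
             (0, 3, 6), (1, 4, 7), (2, 5, 8),
             (0, 4, 8), (2, 4, 6)]

def winner(board):
    """Return 'X' or 'O' if that side has completed a line, else None."""
    for a, b, c in WIN_LINES:
        if board[a] != " " and board[a] == board[b] == board[c]:
            return board[a]
    return None

def self_play_train(episodes=20000, epsilon=0.1, alpha=0.5, seed=0):
    """Learn state values for 'X' purely by self-play, given only the rules."""
    rng = random.Random(seed)
    values = defaultdict(float)  # board tuple -> estimated value for X
    for _ in range(episodes):
        board = [" "] * 9
        history = []
        player = "X"
        while True:
            moves = [i for i, s in enumerate(board) if s == " "]
            if rng.random() < epsilon:          # occasionally explore
                move = rng.choice(moves)
            else:                               # otherwise exploit learned values
                def score(m):
                    b = board[:]; b[m] = player
                    v = values[tuple(b)]
                    return v if player == "X" else -v
                move = max(moves, key=score)
            board[move] = player
            history.append(tuple(board))
            w = winner(board)
            if w or " " not in board:
                # Learn from the outcome: nudge every visited state
                # toward the final reward (X win = 1, O win = -1, draw = 0).
                reward = 1.0 if w == "X" else (-1.0 if w == "O" else 0.0)
                for state in history:
                    values[state] += alpha * (reward - values[state])
                break
            player = "O" if player == "X" else "X"
    return values
```

After enough episodes the learned values steer the greedy policy toward stronger play, with no human games in the training data, which is the property the speaker emphasized.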

As I listened to the speaker celebrate this technical progress, my experience as a historian and occasional practicing statesman gave me pause. What would be the impact on history of self-learning machines—machines that acquired knowledge by processes particular to themselves, and applied that knowledge to ends for which there may be no category of human understanding? Would these machines learn to communicate with one another? How would choices be made among emerging options? Was it possible that human history might go the way of the Incas, faced with a Spanish culture incomprehensible and even awe-inspiring to them? Were we at the edge of a new phase of human history?

Aware of my lack of technical competence in this field, I organized a number of informal dialogues on the subject, with the advice and cooperation of acquaintances in technology and the humanities. These discussions have caused my concerns to grow.

Heretofore, the technological advance that most altered the course of modern history was the invention of the printing press in the 15th century, which allowed the search for empirical knowledge to supplant liturgical doctrine, and the Age of Reason to gradually supersede the Age of Religion. Individual insight and scientific knowledge replaced faith as the principal criterion of human consciousness. Information was stored and systematized in expanding libraries. The Age of Reason originated the thoughts and actions that shaped the contemporary world order.

But that order is now in upheaval amid a new, even more sweeping technological revolution whose consequences we have failed to fully reckon with, and whose culmination may be a world relying on machines powered by data and algorithms and ungoverned by ethical or philosophical norms.

The internet age in which we already live prefigures some of the questions and issues that AI will only make more acute. The Enlightenment sought to submit traditional verities to a liberated, analytic human reason. The internet’s purpose is to ratify knowledge through the accumulation and manipulation of ever expanding data. Human cognition loses its personal character. Individuals turn into data, and data become regnant.

Users of the internet emphasize retrieving and manipulating information over contextualizing or conceptualizing its meaning. They rarely interrogate history or philosophy; as a rule, they demand information relevant to their immediate practical needs. In the process, search-engine algorithms acquire the capacity to predict the preferences of individual clients, enabling the algorithms to personalize results and make them available to other parties for political or commercial purposes. Truth becomes relative. Information threatens to overwhelm wisdom.

Inundated via social media with the opinions of multitudes, users are diverted from introspection; in truth many technophiles use the internet to avoid the solitude they dread. All of these pressures weaken the fortitude required to develop and sustain convictions that can be implemented only by traveling a lonely road, which is the essence of creativity.

The impact of internet technology on politics is particularly pronounced. The ability to target micro-groups has broken up the previous consensus on priorities by permitting a focus on specialized purposes or grievances. Political leaders, overwhelmed by niche pressures, are deprived of time to think or reflect on context, contracting the space available for them to develop vision.

The digital world’s emphasis on speed inhibits reflection; its incentive empowers the radical over the thoughtful; its values are shaped by subgroup consensus, not by introspection. For all its achievements, it runs the risk of turning on itself as its impositions overwhelm its conveniences.

As the internet and increased computing power have facilitated the accumulation and analysis of vast data, unprecedented vistas for human understanding have emerged. Perhaps most significant is the project of producing artificial intelligence—a technology capable of inventing and solving complex, seemingly abstract problems by processes that seem to replicate those of the human mind.

This goes far beyond automation as we have known it. Automation deals with means; it achieves prescribed objectives by rationalizing or mechanizing instruments for reaching them. AI, by contrast, deals with ends; it establishes its own objectives. To the extent that its achievements are in part shaped by itself, AI is inherently unstable. AI systems, through their very operations, are in constant flux as they acquire and instantly analyze new data, then seek to improve themselves on the basis of that analysis. Through this process, artificial intelligence develops an ability previously thought to be reserved for human beings. It makes strategic judgments about the future, some based on data received as code (for example, the rules of a game), and some based on data it gathers itself (for example, by playing 1 million iterations of a game).

The driverless car illustrates the difference between the actions of traditional human-controlled, software-powered computers and the universe AI seeks to navigate. Driving a car requires judgments in multiple situations impossible to anticipate and hence to program in advance. What would happen, to use a well-known hypothetical example, if such a car were obliged by circumstance to choose between killing a grandparent and killing a child? Whom would it choose? Why? Which factors among its options would it attempt to optimize? And could it explain its rationale? Challenged, its truthful answer would likely be, were it able to communicate: “I don’t know (because I am following mathematical, not human, principles),” or “You would not understand (because I have been trained to act in a certain way but not to explain it).” Yet driverless cars are likely to be prevalent on roads within a decade.

Heretofore confined to specific fields of activity, AI research now seeks to bring about a “generally intelligent” AI capable of executing tasks in multiple fields. A growing percentage of human activity will, within a measurable time period, be driven by AI algorithms. But these algorithms, being mathematical interpretations of observed data, do not explain the underlying reality that produces them. Paradoxically, as the world becomes more transparent, it will also become increasingly mysterious. What will distinguish that new world from the one we have known? How will we live in it? How will we manage AI, improve it, or at the very least prevent it from doing harm, culminating in the most ominous concern: that AI, by mastering certain competencies more rapidly and definitively than humans, could over time diminish human competence and the human condition itself as it turns it into data.

Artificial intelligence will in time bring extraordinary benefits to medical science, clean-energy provision, environmental issues, and many other areas. But precisely because AI makes judgments regarding an evolving, as-yet-undetermined future, uncertainty and ambiguity are inherent in its results. There are three areas of special concern:

First, that AI may achieve unintended results. Science fiction has imagined scenarios of AI turning on its creators. More likely is the danger that AI will misinterpret human instructions due to its inherent lack of context. A famous recent example was the AI chatbot called Tay, designed to generate friendly conversation in the language patterns of a 19-year-old girl. But the machine proved unable to define the imperatives of “friendly” and “reasonable” language installed by its instructors and instead became racist, sexist, and otherwise inflammatory in its responses. Some in the technology world claim that the experiment was ill-conceived and poorly executed, but it illustrates an underlying ambiguity: To what extent is it possible to enable AI to comprehend the context that informs its instructions? What medium could have helped Tay define for itself offensive, a word upon whose meaning humans do not universally agree? Can we, at an early stage, detect and correct an AI program that is acting outside our framework of expectation? Or will AI, left to its own devices, inevitably develop slight deviations that could, over time, cascade into catastrophic departures?

Second, that in achieving intended goals, AI may change human thought processes and human values. AlphaGo defeated the world Go champions by making strategically unprecedented moves—moves that humans had not conceived and have not yet successfully learned to overcome. Are these moves beyond the capacity of the human brain? Or could humans learn them now that they have been demonstrated by a new master?

Other AI projects work on modifying human thought by developing devices capable of generating a range of answers to human queries. Beyond factual questions (“What is the temperature outside?”), questions about the nature of reality or the meaning of life raise deeper issues. Do we want children to learn values through discourse with untethered algorithms? Should we protect privacy by restricting AI’s learning about its questioners? If so, how do we accomplish these goals?

If AI learns exponentially faster than humans, we must expect it to accelerate, also exponentially, the trial-and-error process by which human decisions are generally made: to make mistakes faster and of greater magnitude than humans do. It may be impossible to temper those mistakes, as researchers in AI often suggest, by including in a program caveats requiring “ethical” or “reasonable” outcomes. Entire academic disciplines have arisen out of humanity’s inability to agree upon how to define these terms. Should AI therefore become their arbiter?

Third, that AI may reach intended goals, but be unable to explain the rationale for its conclusions. In certain fields—pattern recognition, big-data analysis, gaming—AI’s capacities already may exceed those of humans. If its computational power continues to compound rapidly, AI may soon be able to optimize situations in ways that are at least marginally different, and probably significantly different, from how humans would optimize them. But at that point, will AI be able to explain, in a way that humans can understand, why its actions are optimal? Or will AI’s decision making surpass the explanatory powers of human language and reason? Through all human history, civilizations have created ways to explain the world around them—in the Middle Ages, religion; in the Enlightenment, reason; in the 19th century, history; in the 20th century, ideology. The most difficult yet important question about the world into which we are headed is this: What will become of human consciousness if its own explanatory power is surpassed by AI, and societies are no longer able to interpret the world they inhabit in terms that are meaningful to them?

How is consciousness to be defined in a world of machines that reduce human experience to mathematical data, interpreted by their own memories? Who is responsible for the actions of AI? How should liability be determined for their mistakes? Can a legal system designed by humans keep pace with activities produced by an AI capable of outthinking and potentially outmaneuvering them?

Ultimately, the term artificial intelligence may be a misnomer. To be sure, these machines can solve complex, seemingly abstract problems that had previously yielded only to human cognition. But what they do uniquely is not thinking as heretofore conceived and experienced. Rather, it is unprecedented memorization and computation. Because of its inherent superiority in these fields, AI is likely to win any game assigned to it. But for our purposes as humans, the games are not only about winning; they are about thinking. By treating a mathematical process as if it were a thought process, and either trying to mimic that process ourselves or merely accepting the results, we are in danger of losing the capacity that has been the essence of human cognition.

The implications of this evolution are shown by a recently designed program, AlphaZero, which plays chess at a level superior to chess masters and in a style not previously seen in chess history. On its own, in just a few hours of self-play, it achieved a level of skill that took human beings 1,500 years to attain. Only the basic rules of the game were provided to AlphaZero. Neither human beings nor human-generated data were part of its process of self-learning. If AlphaZero was able to achieve this mastery so rapidly, where will AI be in five years? What will be the impact on human cognition generally? What is the role of ethics in this process, which consists in essence of the acceleration of choices?

Typically, these questions are left to technologists and to the intelligentsia of related scientific fields. Philosophers and others in the field of the humanities who helped shape previous concepts of world order tend to be disadvantaged, lacking knowledge of AI’s mechanisms or being overawed by its capacities. In contrast, the scientific world is impelled to explore the technical possibilities of its achievements, and the technological world is preoccupied with commercial vistas of fabulous scale. The incentive of both these worlds is to push the limits of discoveries rather than to comprehend them. And governance, insofar as it deals with the subject, is more likely to investigate AI’s applications for security and intelligence than to explore the transformation of the human condition that it has begun to produce.

The Enlightenment started with essentially philosophical insights spread by a new technology. Our period is moving in the opposite direction. It has generated a potentially dominating technology in search of a guiding philosophy. Other countries have made AI a major national project. The United States has not yet, as a nation, systematically explored its full scope, studied its implications, or begun the process of ultimate learning. This should be given a high national priority, above all, from the point of view of relating AI to humanistic traditions.

AI developers, as inexperienced in politics and philosophy as I am in technology, should ask themselves some of the questions I have raised here in order to build answers into their engineering efforts. The U.S. government should consider a presidential commission of eminent thinkers to help develop a national vision. This much is certain: If we do not start this effort soon, before long we shall discover that we started too late.


2 Responses to "[Repost] How the Enlightenment Ends"

  1. 江, 思源 says:

    Artificial intelligence is a hot topic in scientific research. By simulating the human brain, it can take over work that once only human minds could do and relieve people of heavy mental labor. But AI is a double-edged sword, and its rapid development has caused public alarm. In truth, AI itself is not the problem; how humanity chooses to use it is what matters.

  2. 贵芳 says:

    Artificial intelligence is developing rapidly today, and over the next 10-20 years it is expected to bring disruptive change to the world: everything will become intelligent. Even as AI brings people boundless convenience, misgivings about it have begun to spread, much as they once did with cloning; people are growing worried, even fearful. As AI becomes ubiquitous, will it cause mass unemployment and challenge the structure of the global economy? Will machines really, as in the movies, wipe out and replace humanity?
    Knowledge will change this world.
