
杀手机器人将是人类噩梦(Killer robots will be mankind's nightmare)(双语)

cocotang 于2015-08-06发布

Mankind is a bloodthirsty species. According to Steven Pinker, the academic, for much of history being murdered by a fellow human was the leading cause of death. Civilisation is largely a tale of man’s violent instincts being progressively muffled. A part of this is the steady withdrawal of actual human flesh from the battle zone, with front lines gradually pulled apart by the advent of long-range artillery and air power, and the decline in the public’s tolerance for casualties.  

人类是一个嗜血的物种。根据学者史蒂文·平克(Steven Pinker)的说法,在历史的大部分时间里,被同类所杀是人类的头号死因。文明基本上是人的暴力本能被逐渐束缚住的故事。其中一个部分是有血有肉的人持续从战场撤出,前线逐渐被远程武器和空中军事力量拉远,公众对于伤亡的容忍程度也下降了。

Arguably, America’s principal offensive weapon is the drone, firing on targets thousands of miles from where its controller safely sits. Given the pace of advance, it takes no imaginative leap to foresee machines displacing human agency altogether from the act of killing. Artificial brains already perform well in tasks hitherto regarded as the province of humans. Computers will be trusted with driving a car or diagnosing an illness. Algorithmic intelligence could therefore surpass the human sort for making the decision to kill. 

可以说,美国的主要进攻武器是无人机,操纵者安坐于千里之外对目标进行打击。考虑到技术进步之快,无需脑洞大开,我们就能预见到机器将可完全代替人类进行杀戮。在迄今仍被视为人类专属的活动领域里,人工大脑已有良好表现。电脑将被交托驾驶汽车或者诊断疾病的任务。因此,在做出杀戮的决策上,算法智能或许也将超越人类智能。

This prospect has prompted more than 1,000 artificial intelligence experts to write an open letter calling for the development of “lethal, autonomous weapons systems” to cease forthwith. Act now, they urge, or what they inevitably dub “killer robots” will be as widespread, and as deadly, as the Kalashnikov rifle.

这种可能性,促使1000多名人工智能专家在一封公开信中呼吁立即停止发展“致命自动武器系统”。他们敦促称,现在就行动,否则被他们不可避免地称为“杀手机器人”的武器将和卡拉什尼科夫步枪(Kalashnikov,即AK-47)一样广为流传并造成同样致命的危害。

It is easy to understand military enthusiasm for robotic warfare. Soldiers are precious, expensive and fallible. Every conflict exacts a heavy toll from avoidable human error. Machines in contrast neither grow weary nor lose patience. They can be sent into places unsafe or even impossible for ordinary soldiers. Rapid improvements in computational power are giving machines “softer” skills, such as the ability to identify an individual, flesh-and-blood target. Robots could eventually prove safer than even the most experienced soldier, for example by being capable of picking out a gunman from a crowd of children — then shooting him. 

军方对机器人战争的热衷很容易理解。士兵是宝贵的、成本高昂的,也是会犯错误的。本可避免的人为失误在每一场战斗中都造成了严重伤亡。相较之下,机器既不知疲倦,也不会失去耐心。它们可以被送往不安全甚至普通士兵无法到达的地方。计算能力的迅速提升正赋予机器“更柔软”的技能,比如识别一个有血有肉的单独目标。最终,事实可能将证明机器人会比最有经验的士兵更安全,比如能够从一群孩子中挑出枪手——然后射杀他。

The case against robotic warfare is the same as that against all advances in weaponry: the avoidance of unforeseeable consequences that cause unlimited damage to the innocent. Whatever precautions are taken, there is no foolproof way to stop weapons falling into the wrong hands. For a glimpse into what could go wrong, recall how Chrysler, the US carmaker, needed to debug 1.4m vehicles after finding the car could be remotely hacked. Now imagine it came equipped with guns.

反对机器人战争的理由与反对所有武器进步的理由相同——避免大量无辜受到伤害这种不可预知的后果。无论采取了什么预防措施,都没有万无一失的方法来阻止武器落入不法之徒的手中。要想一窥那种情况下会有什么后果,可以回忆一下美国汽车制造商克莱斯勒(Chrysler)在发现汽车可以被远程入侵后,需要检测和排除140万辆汽车隐患的事情。现在,想象一下这些车装备了枪支。

Technological futurists also fret about the exponential nature of advances in artificial intelligence. The scientist Stephen Hawking recently warned of the “technological catastrophe” that would follow artificial intelligence vastly exceeding the human sort. Whether this is a plumb inevitability or fantasy, science itself cannot decide: but in light of the risk, how sensible can it be to arm such super-intelligences? 

技术未来主义者也担忧人工智能异常快速的发展。科学家斯蒂芬·霍金(Stephen Hawking)最近提醒人们警惕人工智能远超人类智能后可能发生的“科技大灾难”。这到底是绝对无法避免的事情,还是只是幻想,科学本身无法确定:但考虑到其中的风险,给超级智能装备武器能有多明智呢?

The moral argument is more straightforward. The abhorrence of killing has been as important to its decline as any technological breakthrough. Inserting artificial intelligence into the causal chain would muddle the responsibility that must underpin any decision to kill. Without clear responsibility, not only might the means to wage war be enhanced, but so too might the appetite for doing so. 

道德方面的理由更为直接。在减少杀人方面,对杀戮的厌恶是个重要因素,其作用不亚于任何技术突破。将人工智能插入这条因果链,将弄混杀人决定背后的责任。没有明确的责任,不仅发动战争的手段得到加强,发动战争的意愿也可能上升。

Uninventing weapons is impossible: consider anti-personnel landmines — autonomous weapons in their way — which are still killing 15,000-20,000 people annually. The nature of artificial intelligence renders it impossible to foresee where the development of autonomous weapons would end. No amount of careful programming could limit the consequences. Far better not to embark on such a journey.

让武器消失是不可能的:想一想杀伤性地雷——一种自动运行的武器——现在依然每年造成1.5万到2万人丧生。人工智能的性质使人们无法预见自动武器发展的终点在哪里。不管进行多少精密的编程,也无法限制其后果。最好不要踏上这样的旅程。
