

Published by zlxxm on 2019-12-25

Artificial Intelligence Learns to Talk Back to Bigots

人工智能学会了反驳偏执者

Social media platforms like Facebook use a combination of artificial intelligence and human moderators to scout out and eliminate hate speech. But now researchers have developed a new AI tool that wouldn't just scrub hate speech, but would actually craft responses to it, like: 'The language used is highly offensive. All ethnicities and social groups deserve tolerance.'

像Facebook这样的社交媒体平台结合了人工智能和人工版主来侦查和消除仇恨言论。但现在,研究人员开发了一种新的人工智能工具,它不仅能清除仇恨言论,还能对其做出反驳,比如:“使用的语言非常无礼。所有种族和社会群体都应该得到宽容。”

"And this type of intervention response can hopefully short-circuit the hate cycles that we often get in these types of forums." Anna Bethke, a data scientist at Intel. The idea, she says, is to fight hate speech with more speech, an approach advocated by the ACLU and the UN High Commissioner for Human Rights.

英特尔的数据科学家安娜·贝斯克表示:“这种干预式回应有望阻断我们在这类论坛中经常遇到的仇恨循环。”她说,其理念是用更多的言论来对抗仇恨言论,这也是美国公民自由联盟和联合国人权事务高级专员所倡导的方法。

So, with her colleagues at UC Santa Barbara, Bethke got access to more than 5,000 conversations from the site Reddit, and nearly 12,000 more from Gab, a social media site where many users banned by Twitter tend to resurface.

因此,贝斯克和她在加州大学圣巴巴拉分校的同事们从Reddit网站上获取了5000多条对话,并从社交媒体网站Gab上获取了近12000条对话——许多被推特封禁的用户往往会转移到这个网站重新现身。

The researchers had real people craft sample responses to the hate speech in those Reddit and Gab conversations. Then, they let natural language processing algorithms learn from the real human responses, and craft their own. Such as: 'I don't think using words that are sexist in nature contribute to a productive conversation.'

研究人员让真人针对Reddit和Gab对话中的仇恨言论撰写示例回应。然后,他们让自然语言处理算法从这些真实的人类回应中学习,并生成自己的回应。比如:“我认为使用带有性别歧视色彩的词语无助于进行富有成效的对话。”
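The setup described above — annotators write counterspeech responses to hateful posts, and algorithms then learn to map posts to responses — can be illustrated with a toy sketch. Here a crude word-overlap retriever stands in for the study's actual natural-language-processing models; all function names and data below are illustrative, not from the paper.

```python
# Toy sketch of the pipeline the article describes (illustrative only):
# (hateful post, human-written counterspeech) pairs serve as training
# data, and a "model" picks a response for a new post. A real system
# would train a text-generation model; here a simple word-overlap
# retriever stands in for it.

TRAINING_PAIRS = [
    ("post with a slur about an ethnicity",
     "The language used is highly offensive. "
     "All ethnicities and social groups deserve tolerance."),
    ("post with sexist wording",
     "I don't think using words that are sexist in nature "
     "contribute to a productive conversation."),
]

def respond(post: str) -> str:
    """Return the stored counterspeech whose source post shares the
    most words with the incoming post (a stand-in for a learned model)."""
    words = set(post.lower().split())
    best_pair = max(TRAINING_PAIRS,
                    key=lambda pair: len(words & set(pair[0].split())))
    return best_pair[1]

print(respond("another post with sexist wording"))
```

The real study replaces the retrieval step with generation: the model composes a new response rather than reusing a stored one, which is why it can also produce the garbled outputs quoted below.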

Which sounds pretty good. But the machines also spit out slightly head-scratching responses like this one: 'This is not allowed and un time to treat people by their skin color.' And when the scientists asked human reviewers to blindly choose between human responses and machine responses... well, most of the time, the humans won. The team published the results on the site arXiv, and will present them next month in Hong Kong at the Conference on Empirical Methods in Natural Language Processing. [Jing Qian et al, A Benchmark Dataset for Learning to Intervene in Online Hate Speech]

听起来不错。但这些机器也会给出一些让人摸不着头脑的回答,比如:“这是不允许的,也没有时间根据肤色来对待人。”当科学家们让人类评审在不知道来源的情况下在人类回应和机器回应之间做出选择时……嗯,大多数时候是人类获胜。研究小组将结果发表在arXiv网站上,并将于下月在香港举行的自然语言处理实证方法会议(EMNLP)上进行报告。

Ultimately, Bethke says, the idea is to spark more conversation. "Not just to have this discussion between a person and a bot but to start to elicit the conversations within the communities themselves between the people that might be being harmful, and those they're potentially harming."

贝斯克说,最终目的是激发更多的对话。“不只是让一个人和一个机器人之间展开讨论,而是要开始在社区内部,引发那些可能正在造成伤害的人与可能被伤害的人之间的对话。”

In other words: to bring back good ol' civil discourse? "Oh! I don't know if I'd go that far, but it sort of sounds like that's what I just proposed, huh?"

换句话说:让昔日那种文明的讨论回归?“哦!我不确定我会把话说到那个程度,不过听起来好像我刚才提的就是这个意思,对吧?”

- Christopher Intagliata
