Published by zlxxm on 2019-12-25

Artificial Intelligence Learns to Talk Back to Bigots

Social media platforms like Facebook use a combination of artificial intelligence and human moderators to scout out and eliminate hate speech. But now researchers have developed a new AI tool that wouldn't just scrub hate speech, but would actually craft responses to it, like: 'The language used is highly offensive. All ethnicities and social groups deserve tolerance.'

"And this type of intervention response can hopefully short-circuit the hate cycles that we often get in these types of forums," says Anna Bethke, a data scientist at Intel. The idea, she says, is to fight hate speech with more speech, an approach advocated by the ACLU and the UN High Commissioner for Human Rights.

So, with her colleagues at UC Santa Barbara, Bethke got access to more than 5,000 conversations from the site Reddit, and nearly 12,000 more from Gab - a social media site where many users banned by Twitter tend to resurface.

The researchers had real people craft sample responses to the hate speech in those Reddit and Gab conversations. Then, they let natural language processing algorithms learn from the real human responses, and craft their own. Such as: 'I don't think using words that are sexist in nature contribute to a productive conversation.'

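The pipeline described above — humans write intervention responses to real hate speech, then a model learns to produce its own — can be sketched in miniature. The toy example below is not the authors' actual system (they trained neural natural-language-generation models on the Reddit and Gab corpora); it is a hypothetical retrieval baseline that pairs a few invented offending comments with human-written responses, and answers a new comment with the stored response whose paired example is most similar under bag-of-words cosine similarity:

```python
# Toy sketch of "learning from human responses" (NOT the paper's
# neural model): retrieve the human-written intervention whose
# paired example best matches a new comment.
from collections import Counter
import math

# Hypothetical mini-dataset: (offending comment, human-written response)
PAIRED_DATA = [
    ("that sexist slur is typical of women",
     "I don't think using words that are sexist in nature "
     "contributes to a productive conversation."),
    ("people of that ethnic group are all criminals",
     "The language used is highly offensive. "
     "All ethnicities and social groups deserve tolerance."),
]

def bow(text):
    """Bag-of-words vector: lowercase token -> count."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two token-count vectors."""
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def respond(comment):
    """Return the stored response for the most similar example."""
    vec = bow(comment)
    example, response = max(PAIRED_DATA, key=lambda p: cosine(vec, bow(p[0])))
    return response
```

A generative model, as in the study, can compose novel responses rather than reusing stored ones — which is also why it can emit the garbled output quoted below.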
Which sounds pretty good. But the machines also spit out slightly head-scratching responses like this one: 'This is not allowed and un time to treat people by their skin color.' And when the scientists asked human reviewers to blindly choose between human responses and machine responses... well, most of the time, the humans won. The team published the results on the site arXiv, and will present them next month in Hong Kong at the Conference on Empirical Methods in Natural Language Processing. [Jing Qian et al, A Benchmark Dataset for Learning to Intervene in Online Hate Speech]

Ultimately, Bethke says, the idea is to spark more conversation. "Not just to have this discussion between a person and a bot but to start to elicit the conversations within the communities themselves between the people that might be being harmful, and those they're potentially harming."

In other words: to bring back good ol' civil discourse? "Oh! I don't know if I'd go that far, but it sort of sounds like that's what I just proposed, huh?"

- Christopher Intagliata
