Artificial Intelligence Tools and Online Hate Speech

01.02.2019

This Issue Paper provides an overview of the opportunities and challenges brought by the use of Artificial Intelligence (AI) to counter online hate speech. It first documents the problem of online hate speech and the shortcomings of current human-based content moderation before introducing the potential of machine and deep learning, highlighting that AI may bring important efficiency gains in this area.

AI, and in particular machine and deep learning, are widely perceived as desirable innovations in this domain. The automated detection of hate speech would be a scalable solution to manage ever-growing amounts of online content, reduce costs and decrease human discretion in the process.
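As a rough indication of what such automated detection involves, the sketch below trains a toy supervised text classifier of the kind the paper alludes to. It is a minimal illustration only: the inline example texts, the labels and the choice of a scikit-learn pipeline are assumptions made for demonstration, not drawn from the paper or from any real moderation system.

```python
# Minimal sketch of a supervised hate speech classifier.
# The tiny inline dataset is hypothetical; real systems are trained
# on large annotated corpora.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical training data: texts labelled 0 = acceptable, 1 = hateful.
texts = [
    "I strongly disagree with this policy",   # acceptable
    "Great match last night",                 # acceptable
    "<hateful slur targeting a group>",       # hateful (placeholder text)
    "<threat directed at a minority group>",  # hateful (placeholder text)
]
labels = [0, 0, 1, 1]

# Bag-of-words features plus a linear classifier: the simplest
# machine-learning baseline for text classification.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(texts, labels)

# At moderation time, new posts receive a hatefulness score automatically.
score = model.predict_proba(["I strongly disagree with this policy"])[0][1]
print(f"probability of being hateful: {score:.2f}")
```

The scalability argument rests on the last two lines: once a model is trained, scoring each new post is cheap, so the approach can keep pace with content volumes that no human workforce could review.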

There are also considerable weaknesses associated with current forms of AI, most importantly their over-inclusiveness: automated classifiers tend to flag lawful speech alongside unlawful content, which raises serious problems from a freedom of expression perspective. The paper considers if and how future developments in artificial intelligence may address some of these issues, and it closes with suggested themes for future discussion.
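To make the over-inclusiveness point concrete, the toy calculation below uses hypothetical classifier scores (not figures from the paper) to show the trade-off a platform faces when choosing a removal threshold: catching more genuinely hateful posts also sweeps in more lawful speech.

```python
# Illustration of over-inclusiveness: lowering the removal threshold
# takes down more hateful posts but also more lawful ones.
# All scores below are hypothetical classifier outputs.
scores_lawful  = [0.05, 0.20, 0.35, 0.55, 0.70]  # lawful posts (should stay up)
scores_hateful = [0.45, 0.60, 0.80, 0.90, 0.95]  # hateful posts (should come down)

for threshold in (0.9, 0.7, 0.5, 0.3):
    false_positives = sum(s >= threshold for s in scores_lawful)
    true_positives = sum(s >= threshold for s in scores_hateful)
    print(f"threshold {threshold}: removes {true_positives}/5 hateful posts "
          f"but also {false_positives}/5 lawful ones")
```

Every lawful post removed in this way is a potential interference with freedom of expression, which is why the paper treats over-inclusiveness as the central weakness of current AI tools.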

Online hate speech is widely recognised as a societal problem, yet defining what exactly amounts to hate speech is no easy task. There are no clear legal criteria for distinguishing between speech that may be offensive or hurtful but is protected under freedom of expression, and speech that is unlawful because it qualifies as hate speech.

To date, the identification and removal of online hate speech by human content moderators has been burdensome. Given the sheer amount of data to be reviewed, the required investment in human resources is significant.


This paper was presented and discussed at an event on 31 January 2019, with an opening address by Michael O'Flaherty, Director of the EU Agency for Fundamental Rights.

Author:
Dr. Michèle Finck