When he testified before Congress last month, Facebook CEO Mark Zuckerberg discussed the problem of using artificial intelligence to identify online hate speech. He said he was optimistic that in five to 10 years, “We will have AI tools that can get into some of the linguistic nuances of different types of content to be more accurate in flagging content for our systems, but today we’re just not there on that.”
As an expert on hate speech who recently developed an AI-based system to study online hate, I can confidently say that Zuckerberg is both right and wrong. He is right that AI is not a panacea, since hate speech relies on nuances that algorithms cannot fully detect. At the same time, just because AI does not solve the problem entirely doesn’t mean it’s useless.
In fact, it’s just the opposite. Instead of relying on AI to eliminate the need for human review of hate speech, Facebook and other social media platforms should invest in intelligent systems that assist human discretion. The technology is already here. It can not only help tech companies deal with the scale of this challenge but also make platforms more transparent and accountable.
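To make that idea concrete, here is a minimal sketch of what a system that assists human discretion, rather than replacing it, might look like: a model scores each post, and the score decides only how urgently a person reviews it. The function name, thresholds and example posts are illustrative assumptions, not any platform’s actual pipeline.

```python
# Hypothetical triage sketch: the AI never removes content on its own;
# its score only prioritizes the human review queue.
# Thresholds, names and examples are invented for illustration.

def triage(post_text: str, score: float) -> str:
    """Route a post based on a model's hate-speech probability score."""
    if score >= 0.9:
        return "urgent human review"    # likely hateful: a person looks first
    if score >= 0.5:
        return "standard human review"  # uncertain: a person must decide
    return "no action"                  # likely benign: spot-checked only

queue = [("Found my old toys in the attic!", 0.12),
         ("coded harassment example", 0.78)]
for text, score in queue:
    print(f"{score:.2f} -> {triage(text, score)}: {text!r}")
```

The key design choice is that no branch deletes a post outright; the model’s only job is to order the work of human reviewers, which is also what makes the pipeline easier to audit.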
At its core, AI identifies patterns in data sets. In his testimony, Zuckerberg may have been trying to say that AI, by itself, is not a good mechanism for removing hate speech. That’s true. Even the best filters will not replace human reviewers.
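As a concrete illustration of pattern-finding, here is a toy text classifier in Python. The four-example dataset and scikit-learn pipeline are my own sketch, offered only to show the mechanism; a real system would train on a large, carefully labeled corpus.

```python
# Minimal illustration: a classifier learns word patterns from labeled
# examples. The toy data is invented; real systems need far more.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

train_texts = [
    "you people are vermin and should disappear",  # labeled hateful
    "go back to where you came from",              # labeled hateful
    "what a lovely afternoon at the park",         # labeled benign
    "I baked cookies for the school fundraiser",   # labeled benign
]
train_labels = [1, 1, 0, 0]  # 1 = hate speech, 0 = benign

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(train_texts, train_labels)

# The model can only recognize patterns present in its training data,
# which is exactly why novel or coded language slips past it.
print(model.predict_proba(["those people are vermin"])[:, 1])
```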
This is because hate speech evolves. For example, Shrinky Dinks are plastic toys from the 1980s that shrink when baked in an oven. The toys themselves certainly aren’t hate speech. But when the same words are used to describe Jews, as some white supremacists use them today, the name of a child’s plaything becomes an offensive Holocaust metaphor. Another example came in 2016, when white supremacists began putting triple parentheses around Jewish people’s names on Twitter in an effort to harass and intimidate them.
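A small sketch makes the problem visible. A naive filter can catch a fixed marker like the triple parentheses with a regular expression, but it treats every occurrence of a phrase like “Shrinky Dinks” identically; the regex, keyword list and example posts below are invented for illustration.

```python
# Why surface patterns fall short: the filter flags both posts, but
# only context, which it cannot see, separates nostalgia from hate.
import re

TRIPLE_PARENS = re.compile(r"\(\(\([^)]+\)\)\)")  # e.g. (((name)))
KEYWORDS = {"shrinky dinks"}                       # coded-term watchlist

def naive_flag(post: str) -> bool:
    """Flag a post if it matches any known surface pattern."""
    lowered = post.lower()
    return bool(TRIPLE_PARENS.search(post)) or any(k in lowered for k in KEYWORDS)

posts = [
    "Found my old Shrinky Dinks in the attic, the kids love them!",  # benign
    "(((name))) strikes again",                                      # harassment
]
for p in posts:
    print(naive_flag(p), "->", p)  # both print True
```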
Imagine trying to build an artificial intelligence that could capture this subtlety. The technology simply doesn’t exist yet. Because hate speech is nuanced, even the best AI can’t replace human beings. Computation will not solve the hate speech dilemma.
The clearest proof that AI alone can’t solve hate speech is the false-positive problem. As Zuckerberg explained in his testimony, “Until we get it more automated, there’s a higher error rate than I’m happy with.” However, even if AI were 99 percent effective at removing controversial content like hate speech, there would still be real consequences, made worse by the immense scale and reach of online platforms.
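A back-of-the-envelope calculation shows what that means at platform scale; the daily volume below is an assumed round number for illustration, not a Facebook statistic.

```python
# Even a 99-percent-accurate filter misfires constantly at scale.
daily_posts = 1_000_000_000  # assumption: one billion posts per day
error_rate = 0.01            # the 1 percent the filter gets wrong

daily_errors = daily_posts * error_rate
print(f"{daily_errors:,.0f} misjudged posts per day")  # 10,000,000
```

Under this assumption, roughly 10 million posts would be misjudged every day, some of them legitimate speech wrongly removed, others hate speech wrongly left up.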