Facebook claims AI has reduced hate speech prevalence by 50%

A man working at a laptop that displays messages of an abusive nature
(Image credit: Shutterstock)

Facebook has hit back at reports that its artificial intelligence (AI) fails to detect hate speech, countering that the technology has cut the prevalence of such content by 50%.

On Sunday, the Wall Street Journal (WSJ) published a report, based on internal documents and employee accounts, which suggests the social media platform removes only “a low-single-digit percentage” of posts that violate its rules of conduct.

According to the report, the AI used to identify harmful content struggles to detect first-person shooting videos and racist rants, and to tell the difference between cockfighting and car crashes. It is, however, cheaper than human reviewers, who in 2019 were costing the company “$2 million a week, or $104 million a year”, according to the WSJ.

Facebook’s VP of Integrity, Guy Rosen, issued a response to the article hours after it was published, stating that the prevalence of hate speech on the platform has fallen by almost 50% over the last three quarters.

According to the company, “prevalence is the most important metric to use because it shows how much hate speech is actually seen on Facebook”.

“Recent reporting suggests that our approach to addressing hate speech is much narrower than it actually is, ignoring the fact that hate speech prevalence has dropped to 0.05%, or 5 views per every 10,000 on Facebook,” said Rosen.

When it is uncertain whether a post violates Facebook’s terms, its visibility is reduced: its distribution is limited and it is not recommended to users. This is done to protect those who post “content that looks like hate speech but isn’t”, such as posts “describing experiences with hate speech or condemning it”. The company also stated that 97% of the content it removes is identified by its algorithm, up from 23.6% in 2016.

Facebook didn’t address the WSJ’s claims that the decision to use AI to monitor hate speech was motivated by costs.

These latest allegations come amid a difficult month for the social media platform, which was recently accused by former product manager turned whistleblower Frances Haugen of repeatedly prioritising profits over user safety. On 4 October, Facebook, as well as its subsidiaries WhatsApp and Instagram, also suffered a six-hour outage.

Sabina Weston

Having only graduated from City University in 2019, Sabina has already demonstrated her abilities as a keen writer and effective journalist. Currently a content writer for Drapers, Sabina spent a number of years writing for ITPro, specialising in networking and telecommunications, as well as charting the efforts of technology companies to improve their inclusion and diversity strategies, a topic close to her heart.

Sabina has also held a number of editorial roles at Harper's Bazaar, Cube Collective, and HighClouds.