Google to combat online toxicity with AI

In an attempt to tackle online trolling, Google has launched new software that uses machine learning to filter out comments deemed too toxic for civilised discussions.

In a partnership with technology incubator Jigsaw, Google has launched Perspective, a program designed to help publishers and website admins with comment moderation by reviewing their content and providing ratings based on how unpleasant the language is.

An online demo of the technology is available on the Perspective website. Type in something like "You're a stupid idiot" and the software will return a score reflecting how likely other readers are to view the comment as toxic; in this case, 98%.
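For developers, the same scoring is exposed through the Perspective API. A minimal Python sketch of a request is shown below; it assumes the `v1alpha1` `comments:analyze` endpoint and a Google Cloud API key ("YOUR_API_KEY" is a placeholder, not a working credential).

```python
import json
import urllib.request

# Perspective's Comment Analyzer endpoint (v1alpha1 at launch).
# "YOUR_API_KEY" is a placeholder; a real key comes from the
# Google Cloud console.
API_URL = ("https://commentanalyzer.googleapis.com/v1alpha1/"
           "comments:analyze?key=YOUR_API_KEY")


def build_request(text):
    """Build the JSON body asking Perspective to score TOXICITY."""
    return {
        "comment": {"text": text},
        "requestedAttributes": {"TOXICITY": {}},
    }


def extract_score(response):
    """Pull the 0-1 summary toxicity score out of an API response."""
    return response["attributeScores"]["TOXICITY"]["summaryScore"]["value"]


def score_comment(text):
    """POST the comment to the API and return its toxicity score."""
    data = json.dumps(build_request(text)).encode("utf-8")
    req = urllib.request.Request(
        API_URL, data=data, headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req) as resp:
        return extract_score(json.load(resp))


# Example usage (requires a valid key; commented out here):
# score_comment("You're a stupid idiot")  # returned roughly 0.98 at launch
```

The score is a probability between 0 and 1, so a moderation pipeline would typically compare it against a site-specific threshold rather than treat it as a hard verdict.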

According to a recent report by the US-based Center for Innovative Public Health Research, 72% of American internet users have witnessed online harassment, with almost half (47%) experiencing toxicity first hand.

"News organisations want to encourage engagement and discussion around their content, but find that sorting through millions of comments to find those that are trolling or abusive takes a lot of money, labor, and time," said Jigsaw president Jared Cohen, speaking in a blog post.

"As a result, many sites have shut down comments altogether. But they tell us that isn't the solution they want. We think technology can help."

The technology has already been applied at The New York Times, where a dedicated team traditionally trawls through "an average of 11,000 comments every day". The sheer volume of moderation required means only 10% of its articles have comments enabled.

Perspective has so far examined "hundreds of thousands of comments that had been labeled by human reviewers". Each time Perspective reads a comment, or receives corrections from users, the technology becomes more accurate at scoring for toxicity, according to Google.

The API has been made available on the Cloud Machine Learning Platform, Google's hub for developer-facing machine learning services, and is built on TensorFlow, the company's open source machine learning library.

While in its current form it only sifts out toxic comments, Google has said it plans to improve the technology to support other languages and to spot comments that may be unsubstantiated or off-topic.

Contributor

Dale Walker is a contributor specializing in cybersecurity, data protection, and IT regulations. He is a former managing editor at ITPro, as well as its sibling sites CloudPro and ChannelPro. He spent a number of years reporting for ITPro from numerous domestic and international events, including those hosted by IBM, Red Hat, and Google, and has been a regular reporter at Microsoft's annual showcases, including Ignite.