Facebook launches UK initiative to tackle online hate speech

Facebook is stepping up the fight against online extremism by launching a UK programme that will help local organisations tackle hate speech and "promote the civil courage displayed by organisations and grassroots activists".

Facebook's Online Civil Courage Initiative (OCCI) will train non-governmental organisations to help them identify extremist content, and provide a dedicated support desk for those wishing to contact Facebook directly.

The OCCI, which has partnered with a number of anti-extremist groups such as the Jo Cox Foundation, Community Security Trust, Tell MAMA, and Imams Online, will provide financial backing to NGOs working to counter hate speech.

"No one should have to live in fear of terrorism - and we all have a part to play in stopping violent extremism from spreading," said Sheryl Sandberg, COO of Facebook. "Partnerships with others - including tech companies, civil society, researchers and governments - are also a crucial piece of the puzzle. Some of our most important partnerships are focused on counterspeech, which means encouraging people to speak out against violence and extremism."

Technology companies were criticised in the wake of the recent London Bridge terror attack, when Theresa May claimed social media sites provided terrorists with "safe places" online. Sites like Twitter, Google, and Facebook were said to be relying too heavily on users reporting hate speech, rather than working actively to fight it themselves.

Facebook's initial reaction was to defend its practices; however, it has since introduced a number of measures to help counter extremist content. Facebook-owned WhatsApp and Instagram were both updated last week, with user data now shared more widely across the platforms to help keep terrorist content off them.

However, the government's approach to tackling extremist content has put it increasingly at odds with the technology community. Calls for a ban on end-to-end encryption and the regulation of web providers through "international agreements" have been criticised as "intellectually lazy". Facebook's latest efforts appear to represent an alternative approach, which seeks to provide meaningful support to those trying to eradicate hatred online.

"We have called on industry to take more action on the issue and welcome this new initiative from Facebook to provide support to other organisations in tackling terrorist and extremist material," read a statement from the Home Office.

"Technology companies still need to go further and faster in moving towards preventing this type of toxic output being disseminated in the first place. We look forward to seeing how the industry-led forum, which will combat terrorist use of the internet, will build on this collective response to the threat."

The UK is the latest country to see the launch of an OCCI project, with a programme already running in Germany since last year, and another in France since March.

02/05/2017: Social networks 'should be fined millions' for failing to tackle extremism

A new report by the Home Affairs Committee suggests social media companies are prioritising profit over their users' safety.

For taking such an approach, the committee suggests, they should face fines running into the millions.

The House of Commons Home Affairs Committee's "Hate crime: abuse, hate and extremism online" report, published yesterday, is the result of an inquiry into how social media companies are dealing with hate crimes, launched following the murder of MP Jo Cox in the lead-up to the EU referendum.

The inquiry looked specifically at Google-owned YouTube, Twitter and Facebook to see how they are used to spread messages of hate and extremism. Although it noted that the platforms are also used to drum up support for positive movements against hatred, racism and misogyny, it found that hateful messages are communicated through them as well.

It found that although social media companies are very good at identifying and removing content that infringes copyright, they are slow to react to messages of hate, taking a more laissez-faire attitude. The report pointed specifically to Google's handling of YouTube: copyright-infringing videos are removed swiftly, but the company was much slower to take down content flagged as extremist, acting only after some advertisers pulled their advertising because it was being displayed alongside such material.

"We note that Google can act quickly to remove videos from YouTube when they are found to infringe copyright rules, but that the same prompt action is not taken when the material involves hateful or illegal content," the report said.

"There may be some lasting financial implications for Google's advertising division from this episode; however the most salient fact is that one of the world's largest companies has profited from hatred and has allowed itself to be a platform from which extremists have generated revenue."

The inquiry acknowledged that social media companies have assessed the impact such hateful content has on individuals, and praised the efforts they have made, such as publishing new community guidelines and building new tools to help people report content. It concluded, however, that this is not enough and that the companies must be held accountable.

"Social media companies currently face almost no penalties for failing to remove illegal content," the MPs said in the conclusion to their report.

"We recommend that the government consult on a system of escalating sanctions, to include meaningful fines for social media companies which fail to remove illegal content within a strict timeframe."

Dale Walker

Dale Walker is the Managing Editor of ITPro and its sibling sites CloudPro and ChannelPro. Dale has a keen interest in IT regulations, data protection, and cyber security. He has spent a number of years reporting for ITPro from domestic and international events hosted by companies including IBM, Red Hat, and Google, and has been a regular reporter at Microsoft's annual showcases, including Ignite.