UK’s digital regulator will fine tech giants for spreading ‘harmful content’ online


The government will establish a statutory regulator with powers to punish internet giants as part of new measures to crack down on the spread of harmful and extremist content online.

Tech companies and social media platforms will be compelled to abide by a 'duty of care', enforced by a digital regulator to rival Ofcom and the Information Commissioner's Office (ICO), according to draft proposals.

The government's long-awaited 'Online Harms' white paper, released today, also sets out a regulatory framework that hands this proposed body the power to levy fines against companies for breaching standards. Senior managers will also be subject to these fines and could face criminal liability.

In extreme cases, the regulator will also be able to disrupt offending companies' business operations and, as a last resort, order internet service providers (ISPs) to block non-compliant platforms or apps entirely from UK markets.

"The era of self-regulation for online companies is over. Voluntary actions from industry to tackle online harms have not been applied consistently or gone far enough," said the government's digital secretary Jeremy Wright.

"Tech can be an incredible force for good and we want the sector to be part of the solution in protecting their users. However, those that fail to do this will face tough action."

Home Secretary Sajid Javid also reinforced the notion that large tech firms have a "moral duty" to protect their younger users from harmful and extremist content.

"Despite our repeated calls to action, harmful and illegal content - including child abuse and terrorism - is still too readily available online," he said. "That is why we are forcing these firms to clean up their act once and for all."

Putting big tech firmly in the government's sights

As to which firms will be subject to these rules, the government cites companies that "allow users to share or discover user-generated content or interact with each other online". This translates to social media platforms, file hosting sites, messaging services, public discussion forums, and search engines.

The government has also hinted it will target the biggest companies first, with the regulator's initial 'risk-based' focus falling on companies that pose the clearest risk of harm to users, whether because of the scale of their platforms or because issues with their online content have already been identified.

Powers will range from issuing notices to companies for breaching standards to imposing "substantial" fines for failures to remove harmful or extremist content. The scale of these fines, however, has not been defined. Senior managers, too, will be individually subject to regulatory action if found to have led their companies to breach standards.

Underpinning the regulations will be a duty of care and several codes of practice. The latter documents are particularly relevant to minimising the spread of disinformation and extremist content online. A 'safety by design' framework will also guide developers when they create new digital products or services from scratch.

The white paper insists that any action taken will be proportionate and designed not to stifle innovation in the digital economy. Legislative action, therefore, will be coupled with measures to promote 'tech safety', for example, as well as digital literacy to empower users to manage their own online activity and the risks this involves.

The overall package of measures, however, has been branded by critics as too "vague". Although the trade organisation techUK, for instance, said the measures were a "significant step forward", it warned that this vision will not be achieved if difficult problems and trade-offs are ignored.

"A regulator, whether new or existing, will not thank anyone for being handed a vague remit," said techUK's head of policy Vinous Ali.

"It is vital that the Government is clear about what it wants to achieve and the trade-offs necessary to do it. That means providing clear definitions of online harms and setting out how difficult boundary issues should be addressed."

The information commissioner Elizabeth Denham, meanwhile, said the need for these proposals reflected users' growing mistrust of online services.

"People want to use these services, they appreciate the value of them, but they're increasingly questioning how much control they have of what they see, and how their information is used," said Denham.

"That relationship needs repairing, and regulation can help that. If we get this right, we can protect people online while embracing the opportunities of digital innovation."

How these plans compare against other ideas

These proposals, jointly devised by the Home Office and the Department for Digital, Culture, Media and Sport (DCMS), have been several months in the making. Although the plans are subject to change, they set out for the first time how the government intends to rein in the worst tendencies of internet giants like Facebook and Google.

Ideas such as an internet regulator and a statutory 'duty of care' have been explored extensively by a chorus of policymakers, including the Labour opposition, the House of Lords' communications committee, and the DCMS select committee.

But the government's white paper sets out how such measures could translate into legislation, and what this means for the many companies subject to these rules. However, there are a handful of differences between what organisations such as the DCMS select committee are proposing and what the government has put forward.

"Disinformation is clearly harmful to democracy and society as a whole. The social media companies must have a responsibility to act against accounts and groups that are consistently and maliciously sharing known sources of disinformation," said chair of the committee Damian Collins MP.

"The white paper does not address the concerns raised by the select committee into the need for transparency for political advertising and campaigning on social media. Again, it is vital that our electoral law is brought up to date as soon as possible so that social media users know who is contacting them with political messages, and why."

The House of Lords' communications committee, meanwhile, wouldn't have gone as far as the government in establishing a fully-fledged regulator, instead suggesting an interim Digital Authority. This body would oversee the work of existing regulators, eliminate regulatory overlaps and suggest changes to future processes so the raft of problems can be tackled now.

"If you make a new regulatory body then you will waste a long time before anything happens," its chair Lord Gilbert told IT Pro in March. "In the meantime, all of these bodies need to be brought together, they need to be forward-thinking, and they need to be de-duplicated.

"In time the new powerful Digital Authority may say the existing regulatory structure is not quite right and we need to either merge, change or set up some new bodies. But we don't want that to get in the way of doing stuff."

As for the Labour Party, these plans just don't go far enough. Its shadow digital secretary Tom Watson MP would also establish an independent regulator, in the same mould as this government's, but give this organisation the power to break up monopolies, among a suite of tougher powers.

"The public and politicians of all parties agree these platforms must be made to take responsibility for the harms, hate speech and fake news they host," he said. "The concern with these plans is that they could take years to implement. We need action immediately to protect children and others vulnerable to harm.

"These plans also seem to stop short of tackling the overriding data monopolies causing this market failure and do nothing to protect our democracy from dark digital advertising campaigners and fake news."

Yet another punt into the long grass?

The government is spinning these proposals as decisively tough, and a world first in regulating some of the biggest companies on the planet. This is indeed, as the government claims, the prelude to the world's first 'online safety' laws. And it's remarkable to chart the journey that ministers have been on.

They'll be glad, therefore, to hear the plans have largely been welcomed by the industry, in principle at least. NCC Group, for example, says these proposals strike "the right balance". Specifically, the security firm's global CTO Ollie Whitehouse says that appointing an independent regulator will ensure regulation will keep pace with an evolving online environment.

But organisations on all sides of the debate have quibbles with these plans, ranging from techUK's charge that they are too vague to the Labour Party's accusation that they do nothing to resolve deep-rooted and systemic issues in the tech sector. Even NCC Group's Whitehouse believes the regulations won't be effective unless they are underpinned by online safety awareness and education in schools.

This, of course, also comes in the wake of the fallout from a mass shooting in New Zealand, and of voices critical of tech companies for not removing a live stream of the tragedy quickly enough. Just in the last few hours, the country's privacy commissioner John Edwards branded Mark Zuckerberg and his company Facebook "morally bankrupt pathological liars who enable genocide (Myanmar), facilitate foreign undermining of democratic institutions".

His comments, posted to Twitter and since deleted, are indicative of an incredibly emotionally charged debate. They also bring into perspective what the phrase 'online harms', something that has not up to now been adequately defined, means to people in reality.

The government has now, at least, released a chart outlining three categories of online harms as it sees them: harms with a clear legal definition, harms with a less clear legal definition, and underage exposure to legal content. But it's still uncertain how the government, and its new regulator, will effectively enforce these laws.

The chorus of voices reacting critically to these proposals shouldn't be seen as disparaging; rather, it underlines just how complex an issue online regulation is. There's perhaps a good reason that no country has yet attempted to rein in the ever-ballooning tech sector. So it's just as well that the government has opened its plans up to an extensive period of public consultation, immediately after putting its cards on the table.

But given the white paper's release was already delayed, having been touted for 'winter 2018/19', some may see the 12-week consultation as kicking the issue into the long grass, as the government has arguably done with Brexit.

Such concerns are justified, as we probably won't see any finalised proposals until the end of the year. But if the government uses this time to fine-tune the measures and clearly establish how they will be implemented, rather than just playing for time, the sector might well be in a healthier place for it.

Keumars Afifi-Sabet
Features Editor

Keumars Afifi-Sabet is a writer and editor who specialises in the public sector, cyber security, and cloud computing. He first joined ITPro as a staff writer in April 2018 and eventually became its Features Editor. Although a regular contributor to other tech sites in the past, these days you will find Keumars on LiveScience, where he runs its Technology section.