Government clampdown bolsters tech giants


The government's attempt to rid social media of harmful content is likely to benefit the very companies it's targeting, according to experts.

The Online Harms white paper lays the groundwork for laws designed to rein in the power of companies such as Facebook and Google. But critics believe the firms that sparked the clampdown are the ones that will benefit most.

"The reality is that it isn't tech companies, broadly, that the government is concerned about it's a very small number of specific big platforms," Richard Wingfield, legal officer for Global Partners Digital, a civic advocacy group, explained to us. "But if you look at the scope of the white paper, it's about any company or platform that allows users to share or generate content that could be your local online newspaper that has a comments section, or Mumsnet, or blogging platforms."

Those smaller companies, the argument goes, will be faced with increased costs and stiff penalties, including potential prosecution, making it easier for the companies that created the current environment to dominate.

"One of the unintended consequences is that you risk entrenching the dominance of the big platforms that are causing the government the most problems," Wingfield explained.

"If you impose these requirements on all of these smaller platforms, it makes them less likely to start up in the first place particularly if you have things like criminal liability for management of these companies."

One aspect of the Online Harms bill would see firms forced to take down illegal and merely "harmful" content within a set period of time. As the spread of fake news and terror-related material has shown, this isn't easy even for the giants, and it could be impossible for smaller operators.

The bill could force firms to employ fact checkers, with the as-yet-unknown regulator creating codes of conduct that would make company officials liable for breaches. "Facebook has pointed to the fact it has hired 30,000 moderators for the platform," said Dom Hallas, executive director of the Coalition for a Digital Economy, a UK trade group for small and medium-sized digital businesses.

"If the point is that everyone has to hire 30,000 moderators, nobody else can exist only Facebook, Google and probably not even Twitter have that capability.

"There's a practical reality about the way that this has been put together the impact, while significant on the big platforms, will virtually put small platforms out of business," said Hallas.

The government has given little information on how the industry-funded regulator would be formed, but there's a fear it won't target the biggest players with the deepest pockets. "The rules will be enforced selectively and, actually, in practice that sort of regulation entrenches the biggest platforms in the world because they have the best resources and lawyers to deal with that," Hallas explained.

Big firms see the benefit

While Facebook, Google and others have historically pushed back against laws controlling online activity and argued for self-regulation, this appears to be changing.

Chastened by months of revelations over data usage and fake news that officials believe threatens democracy, Facebook CEO Mark Zuckerberg wrote a Washington Post opinion piece calling for greater government action.

"I believe we need a more active role for governments and regulators," he said. "By updating the rules for the internet, we can preserve what's best about it while also protecting society from broader harms."

Zuckerberg believes that "we need new regulation in four areas: harmful content, election integrity, privacy and data portability".

But critics claim the belated embrace of regulation follows a well-worn path trodden by dominant early-to-market companies. "I honestly see it as classic business protection and a traditional way of operation," said Hallas. "People build a strong market position and then they will, bit by bit, acknowledge the concerns of regulators, [and] gradually move their position whilst dragging their feet in order to protect their position in the market.

"You can see that with the likes of Facebook," Hallas continued. "It sees the perception tide turning among the public and policy makers and the response is to say that it wants regulation because Facebook knows that... they're best placed to adapt."

What constitutes a "harm"?

The white paper, which is out for consultation before moving towards a bill, covers terrorism and child abuse, as well as less clear-cut "harms" such as anti-vaccination content and fake news. The bill would impose a duty of care that could see firms fined or blocked, and their directors prosecuted, for failing to comply.

While terror and child abuse content, for example, is already illegal, the proposed law falls into subjective territory across a range of online areas. "The duty of care would apply not only to a range of types of content, from illegal content like child abuse material to legal but harmful material like disinformation, but also harmful activities, from cyber bullying, to immigration crime, to intimidation," said Mozilla fellow Owen Bennett in a company blog post.

"This conflation of content/activities and legal/harmful is concerning, given that many content-related 'activities' are almost impossible to proactively identify, and there is rarely a shared understanding of what 'harmful' means in different contexts," Bennett added.

The scope of the white paper has also created fears that companies will remove anything reported to them as "harmful" and use AI to take material down, rather than spend time and money reviewing controversial content.

"The concern here is that if you impose very high risks of sanctions, it creates a very strong incentive to just take down anything that's vaguely problematic or controversial," said Wingfield. "Especially if there are time limits on removal."

A step in the right direction?

Not everyone is against the attempt to hold the social networks accountable for the content they host. "As a concept, it's a step forward, something we've been pushing for in response to evidence on the spread of illegal material," said Sonia Livingstone, professor of social psychology in the Department of Media and Communications at LSE and a vociferous proponent of child safety online.

According to Livingstone, once a regulator is created, the bill's impact would evolve, with easy targets such as terrorist material likely to be first on the block. "It does need some tweaking. People are trying to imagine the entire legislation coming in on a certain date and everything is regulated," she said. "It would, of course, be too crude and heavy-handed and I would not welcome that.

"I feel a smart regulator is going to find a way that we can protect the public without intervening in free speech. It might sound implausible, but my analogy would be town or transport planning -- no one complains that people aren't free to drive their cars at top speeds in a neighbourhood with schools."

According to Livingstone, legislation would gradually lead to a shift in the way the regulator, companies and consumers behave. "Child abuse is a better place to start than fuzzier things like misinformation [regulation] that blocks, say, what Fox News puts out, which isn't going to get universal approval straight away, or ever," she said.

Critics would argue there are already powers to handle child abuse material, and the regulator will face opposition over other forms of "harmful" content, with the reference to Fox News highlighting how one person's respected news broadcaster is another's harmful content.