How will the Online Safety Bill change the tech industry?


Waiting in the wings of the government’s policy programme for the past few years has been the widely anticipated Online Safety Bill.

This major piece of legislation will be among the first laws directly regulating tech companies and the way they operate, with an estimated 25,000 businesses falling under the scope of this complex framework. The overarching ambition is to make the internet a safer place to be – given it’s reportedly rife with terrorism, hate speech and exploitative content – although critics say it infringes on free speech.

Nevertheless, it still represents a marked change in the way business is conducted online, with those affected expected to adapt, or face huge financial penalties, when it eventually comes into force.

What is the Online Safety Bill?

First proposed formally in March 2021, the Online Safety Bill effectively regulates the content that any “user-to-user service” makes available online. It seeks to make big tech companies more responsible for the material they host on their platforms in order to protect their users.

The Bill applies to search engines, internet services that host user-generated content, and those that publish or display pornographic content. It’s been designed to make the UK “the safest place in the world to be online” while defending free expression. It also aims to improve law enforcement’s capacity to tackle harmful content online, improve users’ ability to keep themselves safe, and improve society’s understanding of the harms landscape.

The legislation proposes a shift away from self-regulation, which has arguably failed, towards a model that promotes accountability and a safety-first mindset. Under this model, tech companies will need to demonstrate they have evaluated key risks, including misinformation, predatory behaviour, and cyberbullying, and prove there are suitable protections and safeguarding mechanisms in place on their platforms. Those falling short of these expectations face massive fines of up to £18 million or 10% of annual turnover, whichever is greater – a structure similar to that of fines under the General Data Protection Regulation (GDPR).

How has the Online Safety Bill evolved?

In the many months since this legislation was introduced, the government has amended its terms to add further regulatory requirements. Initially, Ofcom was appointed to enforce a statutory duty of care set out in the Online Harms white paper, which was produced as a result of a two-year consultation.

In February this year, for example, the government introduced an amendment compelling websites containing pornographic content to use secure age verification technology on their platforms. The following month, the scope of the legislation expanded beyond ‘online harms’ in the strictest sense to also cover fraudulent and misleading adverts, which social media sites and search engines would have to do more to protect UK users from.

The government then changed a key provision in the Online Safety Bill in July, with an amendment forcing companies to identify child sexual exploitation and abuse (CSEA) content and take it down. Previous iterations of the law only required companies to use “accredited technology” to detect CSEA and terrorism content, but the amendment goes further, stating companies should use “best endeavours to develop or source technology” to automatically detect such material and take it offline.

Then, less than a week later, the government inadvertently put the Online Safety Bill on ice after failing to include its third reading in the parliamentary schedule before the summer recess. This means the legislation will be delayed, and possibly subject to change when a new prime minister is appointed by the Conservative Party in September.

Why do some feel the Online Safety Bill doesn’t go far enough?

Charlotte Aynsley, safeguarding advisor at Impero Software, says the Bill needs clearer directives around reporting and referrals.

“There is currently no centralised system to make referrals, nor is there clear guidance or clarification on who these referrals go to and how they will be progressed,” she tells IT Pro. “If they do go to the police, how will this then be managed? It is vital that victims feel reassured by knowing their incident will be addressed – if people don’t think any action will be taken, or there isn’t a timely response, there is a risk that harmful incidents will continue to go unreported,” she says.

Dr Bill Mitchell, director of policy at BCS, the Chartered Institute for IT, says the Bill leaves a lot of abstract definitions, and much of the concrete expectations for what platforms will be asked to do will be set out in secondary legislation and codes of practice. He adds that this means “it’s currently very difficult to assess what exactly platforms will be asked to do to reduce harms and protect rights, and whether it will be sufficient”.

“For instance, platforms will need to take into account the importance of ‘democratically important content’ – the definition of which is extremely unclear,” he explains.

For Robin Wilton, meanwhile, a director at the Internet Society, one omission in the legislation is highly significant – encryption. “The Bill only mentions encryption twice, and not in a meaningful way: it says that if a service provider gets a law enforcement request for access to data and, in providing that data, encrypts it so that it is unusable by law enforcement, it’s committing an offence. Fine. I wonder how many times that has ever happened in the past. I suspect it’s none.”

How will the Online Safety Bill affect big tech?

Hand in hand with the raft of new obligations are new costs that companies falling in scope will have to absorb, according to Luke Jackson, a director at Yorkshire-based law firm Walker Morris LLP.

“As well as the mooted Ofcom regulator fee, many will need to commission specialist support to ensure compliance – be that with subscriptions for policing and age-gating software or legal fees to interpret the act and understand risk exposure,” he says.

Worryingly, critics allege there is also scope in the legislation to compel companies to build flawed security into their products. Wilton says that, until recently, the Bill stopped short of allowing the government to compel the tech industry to design products in ways that might not be technically feasible, but the latest amendments removed any doubt. The July update, instructing tech companies to implement technology that automatically scans messages, is a case in point.

“This is ‘scope creep’ of the worst kind, and in all probability, it won’t work,” he adds. “Consider this: today, Apple announced a $10m research fund to ‘harden’ devices against spyware probes like Pegasus. By contrast, the UK Government’s ‘Safety Tech Challenge Fund’, to develop ‘safe’ tools for backdoor access, has distributed £85,000 each to a handful of start-ups. The Online Safety Bill will not prevent secure communication technology from reaching the mass consumer market, but it could prevent UK users from being lawfully allowed to use it. That represents significant harm for no visible benefit.”

How will the Online Safety Bill change the wider industry?

The Bill sets out to take on the big tech companies, but it has the potential to affect any business operating in the digital space. Jackson says the legislation could well require a mindset shift for all enterprises with an online presence, making the prevention of online harm a driving principle in the way their web presences are delivered.

Jackson adds that, for consumers, the idea of an Online Safety Bill should, in theory, be a good thing. “A reduction in the spread of misinformation, online abuse and inappropriate content for children would undoubtedly make the Internet a better place,” he says. “However, as some have pointed out, there is a very delicate balance to be struck between shielding users from harmful content whilst protecting freedom of speech.”

When, or if, it is enacted, the Online Safety Bill is likely to cause controversy over the prospect of weakened security in products and services. Wilton says the Bill enables not only technological scope creep but societal scope creep too, as this is built into the legislation itself: it allows the secretary of state for Digital, Culture, Media and Sport (DCMS) to add new categories to the list of banned content through secondary legislation.

“That means no parliamentary scrutiny, and it’s a licence for the minister to ban whatever he or she chooses,” he explains. “That could be ‘communicating about an anti-Government protest’; it could be ‘saying rude things about the minister’; it could be ‘demanding information about secret donations to political parties’; it could be ‘publishing information about access to abortion’,” he says. “We live in an information society, with a data-driven economy, and a population of digital natives. We cannot allow the government to weaponise digital technology against its citizens.”

Rene Millman

Rene Millman is a freelance writer and broadcaster who covers cybersecurity, AI, IoT, and the cloud. He also works as a contributing analyst at GigaOm and has previously worked as an analyst for Gartner covering the infrastructure market. He has made numerous television appearances to give his views and expertise on technology trends and companies that affect and shape our lives. You can follow Rene Millman on Twitter.