Massive fines headed for tech giants that host harmful online content
New powers will allow Ofcom to fine companies up to 5% of their revenues
Video sharing platforms such as Facebook, Instagram and YouTube could soon face fines of millions of pounds for hosting harmful videos, as part of the government's ongoing commitment to enforce EU law.
The government will hand communications regulator Ofcom new policing and sanctioning powers to protect children from violent, abusive and pornographic content, according to the Telegraph.
The tech giants could face fines of up to 5% of their revenues, and potentially a ban on their platforms in the UK, if they fail to comply with Ofcom's rulings.
The handover of powers to Ofcom is being made to comply with the UK's current obligations to the EU, specifically the Audiovisual Media Services Directive (AVMSD), which aims to deliver greater protections for children, preserve cultural diversity and guarantee the independence of national media regulators.
However, the regulator may never get to enjoy these powers, as the proposed start date for Ofcom's new role is 19 September 2020, beyond the current date for the UK's withdrawal from the EU. Once it leaves the bloc, the UK will no longer be legally obliged to enforce the AVMSD.
"The implementation of the AVMSD is required as part of the United Kingdom's obligations arising from its membership of the European Union and until the UK formally leaves the European Union all of its obligations remain in force," a spokesman for the Department for Digital, Culture, Media and Sport told the BBC.
"If the UK leaves the European Union without a deal, we will not be bound to transpose the AVMSD into UK law."
Under the same rules, the platforms in question will also face fines for failing to implement robust age verification systems and parental controls on videos.
Social media platforms have faced heightened scrutiny this year after a number of incidents in which terrorist attacks were broadcast over online platforms. Operators, including Facebook, have been accused of failing to remove such videos quickly enough.
"[The fact that] 1.5 million copies of the video had to be removed by Facebook - and could still be found on YouTube for as long as eight hours after it was first posted - is a stark reminder that we need to do more both to remove this content, and stop it going online in the first place," said former Prime Minister Theresa May at the Online Extremism Summit in Paris.
Facebook, Twitter and YouTube all faced harsh criticism after the New Zealand shooter's video evaded all three sites' harmful content algorithms.
Google, which owns YouTube, has previously boasted impressive figures concerning the accuracy of its machine learning algorithms, which were first deployed on YouTube's platform in 2017.
Within a year of their implementation, most violent or extremist content was being removed from the site with fewer than 10 views.