Twitter has received its largest-ever number of legal demands to remove content, as well as a record number of government legal demands targeting journalists and news organisations, the company has revealed.
In its 20th transparency report, Twitter released information for the period running from July 1 2021 to December 31 2021. Across the period, a record 47,572 legal demands were made regarding 198,931 accounts, comprising a mixture of court orders and other formal demands from government entities and individuals' lawyers for content to be taken down.
This is the largest number of such demands Twitter has ever recorded, representing a 10% increase on the previous six-month period. The rise reflects both the platform's growing user base and the greater role Twitter is asked to play in disputes over disinformation and illegal content.
The report also revealed that only 2.6% of Twitter users have enabled any form of two-factor authentication (2FA) on their account, a low figure given the threat that phishing and account hijacking pose on any social media platform. Businesses on Twitter could face particularly severe consequences for lacklustre security, with threat actors already targeting Facebook business accounts.
The company's advocacy for greater transparency, including the misinformation labels it has introduced in recent years, has drawn the attention of governments and other public bodies, and could set a precedent for other social media giants. Just last month, TikTok announced a new API aimed at providing greater insight into its platform, and user demand for transparency around data use has already cost companies like Facebook billions of dollars.
Legislation such as the UK government's proposed Online Safety Bill would require platforms to moderate content far more actively, compelling them to use or develop technology to detect and remove child sexual exploitation and abuse (CSEA) content. It would also require 'Category 1' companies to transparently identify 'legal but harmful' content on their platforms, such as cyberbullying or disinformation, and prevent users from seeing it.
The efficacy of Twitter’s automated system in dealing with spam, bots and fake accounts has been the subject of scrutiny after Elon Musk cited his reservations about the system as his reason to pull out of his $44 billion acquisition of the company. In response, Twitter has sued Musk, and a trial date has now been scheduled for October 17.
The company also revealed that 349 accounts belonging to journalists and news outlets were subject to government legal demands, a 103% increase on the previous period.
Government information requests as a whole, however, fell 7% over the period, to 11,460 in total. Japan accounted for the largest share of these (24%), followed by the United States (20%) and India (11%). Despite the lower number of individual requests, the aggregate number of accounts specified by government entities rose by 9%, indicating that each request covered a larger range of accounts.
Due to US legislation, Twitter does not report on information requests related to national security processes. It is contesting this in court in the hope of providing greater transparency in future reports.
When faced with a request, Twitter can 'narrow' it, selectively disclosing some but not all of the information demanded. It did this in response to 60% of global government information requests.
“This update comes at a time when government requests for account information and content removal continually hit new records, including demands to reveal the identity of anonymous account owners,” said the company in a blog post.
“This is why we continue to advocate for greater transparency from governments themselves about how these powers are used.”
Rory Bathgate is Features and Multimedia Editor at ITPro, overseeing all in-depth content and case studies. He can also be found co-hosting the ITPro Podcast with Jane McCallion, swapping a keyboard for a microphone to discuss the latest learnings with thought leaders from across the tech sector.