More than 90 global privacy groups urge Apple to abandon CSAM surveillance
Groups warn the tech could be exploited by “abusive adults”
Apple is being urged to abandon plans to scan images and iMessages for child sexual abuse material (CSAM) over fears that the tech could threaten citizens' privacy and wellbeing, and could inadvertently flag ‘innocent’ content.
This is according to an open letter signed by more than 90 civil society organisations, including the UK’s Big Brother Watch and Liberty.
Although the signatories “support efforts to protect children and stand firmly against the proliferation of CSAM”, they argue that the “algorithms designed to detect sexually explicit material are notoriously unreliable” and are known to “mistakenly flag art, health information, educational resources, advocacy messages, and other imagery”.
Moreover, the letter criticises Apple for assuming that the accounts using its iMessage surveillance technology, which aims to protect children from explicit content, will “actually belong to an adult who is the parent of a child, and that those individuals have a healthy relationship”.
According to the signatories, the tech could be exploited by “abusive adults”, providing them with even more power to control their victims. It could also lead to non-heteronormative children being outed against their will:
“LGBTQ+ youths on family accounts with unsympathetic parents are particularly at risk,” the letter reads. “As a result of this change, iMessages will no longer provide confidentiality and privacy to those users through an end-to-end encrypted messaging system in which only the sender and intended recipients have access to the information sent.”
The letter, which is addressed to Apple CEO Tim Cook and is signed by privacy groups from across the US, Africa, Europe, South America, and East Asia, also echoes earlier concerns about government interference in the surveillance technology. Governments could pressure Apple to “extend notification to other accounts, and to detect images that are objectionable for reasons other than being sexually explicit”, such as “human rights abuses, political protests, images companies have tagged as ‘terrorist’ or violent extremist content, or even unflattering images of the very politicians who will pressure the company to scan for them”.
“And that pressure could extend to all images stored on the device, not just those uploaded to iCloud. Thus, Apple will have laid the foundation for censorship, surveillance and persecution on a global basis,” the letter states.
Apple had previously addressed these fears, maintaining that the technology would not scan users' iCloud uploads for anything other than CSAM, and that it would reject governmental requests to “add non-CSAM images to the hash list”.
However, earlier this week the tech giant appeared to bow to some of these demands, announcing that it would only flag images supplied by clearinghouses in multiple countries, rather than solely by the US National Center for Missing and Exploited Children (NCMEC) as originally planned.
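To make that change concrete, the snippet below is a minimal, hypothetical sketch of such a multi-clearinghouse rule; the organisation names, hash strings and two-list threshold are illustrative assumptions rather than Apple's actual implementation.

```python
# Hypothetical sketch: a hash only becomes eligible for on-device matching
# if it is supplied independently by more than one clearinghouse.
# All names and hash strings below are illustrative, not real data.

ncmec = {"hash-1", "hash-2", "hash-3"}   # US clearinghouse list (illustrative)
iwf = {"hash-2", "hash-3", "hash-4"}     # UK clearinghouse list (illustrative)
other = {"hash-3", "hash-5"}             # a third jurisdiction's list

all_lists = [ncmec, iwf, other]

# Keep only hashes reported by at least two separate organisations.
flaggable = {
    h
    for current in all_lists
    for h in current
    if sum(h in lst for lst in all_lists) >= 2
}

print(sorted(flaggable))  # ['hash-2', 'hash-3']
```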
The open letter comes after security researchers found a flaw in NeuralHash, the perceptual hashing algorithm Apple uses to scan for known CSAM imagery.
GitHub user Asuhariet Ygvar warned that NeuralHash “can tolerate image resizing and compression, but not cropping or rotations”, potentially reducing the success rate of the tech.
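For readers unfamiliar with perceptual hashing, the sketch below uses a simple 8x8 average hash rather than Apple's proprietary NeuralHash model to illustrate why such hashes tolerate resizing and compression but break under cropping or rotation; the file names are placeholders and the Pillow library is assumed to be available.

```python
# Minimal average-hash sketch (NOT NeuralHash) illustrating why perceptual
# hashes tolerate resizing and compression but not cropping or rotation.
from PIL import Image

def average_hash(path: str, hash_size: int = 8) -> int:
    """Downscale to a tiny greyscale grid, then set one bit per pixel
    depending on whether it is brighter than the grid's average."""
    img = Image.open(path).convert("L").resize((hash_size, hash_size))
    pixels = list(img.getdata())
    mean = sum(pixels) / len(pixels)
    bits = 0
    for p in pixels:
        bits = (bits << 1) | (1 if p > mean else 0)
    return bits

def hamming_distance(a: int, b: int) -> int:
    """Count differing bits; a small distance means the images likely match."""
    return bin(a ^ b).count("1")

# Placeholder file names, for illustration only:
# h_orig = average_hash("photo.jpg")
# h_small = average_hash("photo_resized.jpg")     # distance to h_orig stays near 0
# h_rotated = average_hash("photo_rotated90.jpg") # distance jumps, so the match is lost
```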
Apple has said the flaw exists only in a previous build of the technology and will not be present in the final product.