More than 90 global privacy groups urge Apple to abandon CSAM surveillance
Groups warn the tech could be exploited by “abusive adults”
Apple is being urged to abandon plans to scan images and iMessages for child sexual abuse material (CSAM) over fears that the tech could threaten users' privacy and wellbeing, and inadvertently flag ‘innocent’ content.
Although the signatories “support efforts to protect children and stand firmly against the proliferation of CSAM”, they argue that the “algorithms designed to detect sexually explicit material are notoriously unreliable” and are known to “mistakenly flag art, health information, educational resources, advocacy messages, and other imagery”.
Moreover, the letter criticises Apple for assuming that the users of its iMessage surveillance technology, which aims to protect children from explicit content, will “actually belong to an adult who is the parent of a child, and that those individuals have a healthy relationship”.
According to the signatories, the tech could be exploited by “abusive adults”, providing them with even more power to control their victims. It could also lead to non-heteronormative children being outed against their will:
“LGBTQ+ youths on family accounts with unsympathetic parents are particularly at risk,” the letter reads. “As a result of this change, iMessages will no longer provide confidentiality and privacy to those users through an end-to-end encrypted messaging system in which only the sender and intended recipients have access to the information sent.”
The letter, addressed to Apple CEO Tim Cook and signed by privacy groups across the US, Africa, Europe, South America, and East Asia, also echoes earlier concerns about government interference in the surveillance technology. The signatories warn that Apple could be pressured to “extend notification to other accounts, and to detect images that are objectionable for reasons other than being sexually explicit”, such as “human rights abuses, political protests, images companies have tagged as ‘terrorist’ or violent extremist content, or even unflattering images of the very politicians who will pressure the company to scan for them”.
“And that pressure could extend to all images stored on the device, not just those uploaded to iCloud. Thus, Apple will have laid the foundation for censorship, surveillance and persecution on a global basis,” the letter states.
Apple had previously addressed these fears, maintaining that the technology would not scan users' iCloud uploads for anything other than CSAM, and that it would reject governmental requests to "add non-CSAM images to the hash list".
However, earlier this week the tech giant appeared to bow to some of these demands, announcing that it would only flag images supplied by clearinghouses in multiple countries, rather than solely by the US National Center for Missing and Exploited Children (NCMEC) as originally planned.
The open letter comes as security researchers found a flaw in Apple’s NeuralHash hashing algorithm, which is used to scan for known CSAM imagery.
Apple has said the flaw only exists in a previous build of the technology and would not be present in the final product.
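The flaw researchers found involves hash collisions: two different images producing the same perceptual hash. NeuralHash itself is proprietary, but the general risk can be sketched with a deliberately simple toy “average hash”, where a collision causes an unrelated image to match an entry on a hypothetical hash list. All names and values below are illustrative, not Apple's actual system:

```python
# Toy perceptual hash, for illustration only. NeuralHash is far more
# sophisticated, but any perceptual hash maps many images to one value,
# which is what makes collisions possible.

def average_hash(pixels):
    """Hash a grayscale image (flat list of ints 0-255) to a bit string:
    each pixel contributes '1' if it exceeds the image mean, else '0'."""
    mean = sum(pixels) / len(pixels)
    return ''.join('1' if p > mean else '0' for p in pixels)

# Two 2x2 "images" with different pixel values...
img_a = [200, 10, 10, 200]
img_b = [180, 40, 40, 180]

h_a = average_hash(img_a)  # '1001'
h_b = average_hash(img_b)  # '1001' -- same hash despite different pixels

# A hypothetical clearinghouse hash list containing img_a's hash:
known_hashes = {h_a}
print(h_b in known_hashes)  # True: the second image is flagged as a match
```

In practice, researchers demonstrated that collisions against NeuralHash could be constructed deliberately, which is why Apple stressed that the tested build was not the final one.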