Apple announced late Thursday evening that, with the launch of iOS 15 and its companion operating system updates, it will begin scanning iCloud Photos in the US for known Child Sexual Abuse Material (CSAM) and plans to report the results to the National Center for Missing and Exploited Children (NCMEC).
Even before Apple had presented its plans in detail, news of the CSAM initiative leaked out. As a result, security researchers have begun to raise concerns about how Apple's new image-scanning protocol could be used in the future. The Financial Times reports that Apple uses a "NeuralHash" system to compare photos on a user's iPhone against known CSAM images before they are uploaded to iCloud.
Matthew Green on Apple's plan: “A really bad idea”
If there is a match, the photo is uploaded along with a cryptographic safety voucher, and once a certain threshold of matches is reached, a review is triggered to confirm that the person actually has CSAM on their devices. Currently, Apple uses this technology only to scan for and match known child abuse imagery. But security researchers fear it could be adapted in the future to look for other types of images, such as anti-government symbols at protests. In a series of tweets, Johns Hopkins cryptography researcher Matthew Green called CSAM scanning a "really bad idea" because it could later be expanded to scan end-to-end encrypted photos rather than just content uploaded to iCloud. Side note: for children, Apple is implementing a separate scanning feature that looks for sexually explicit content directly in end-to-end encrypted messages.
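To make that reported flow easier to picture, here is a minimal, purely illustrative sketch in Swift of a client-side match-and-threshold scheme. Every name in it (PerceptualHash, SafetyVoucher, reviewThreshold, and so on) is an assumption made for illustration; Apple's actual system reportedly relies on cryptographic techniques that keep matches hidden from the server until the threshold is crossed, which this sketch does not attempt to model.

```swift
import Foundation

// Illustrative stand-in for a perceptual image hash such as "NeuralHash".
// Here it is just an opaque string value.
typealias PerceptualHash = String

// Hypothetical record attached to an upload whose hash matched the known
// database. (Name and fields are assumptions, not Apple's format.)
struct SafetyVoucher {
    let assetID: UUID
    let matchedHash: PerceptualHash
}

struct UploadScanner {
    let knownHashes: Set<PerceptualHash>    // database of known image hashes
    let reviewThreshold: Int                // matches required before review
    var vouchers: [SafetyVoucher] = []

    // Called for each photo before it is uploaded to the cloud library.
    mutating func scan(assetID: UUID, hash: PerceptualHash) {
        guard knownHashes.contains(hash) else { return }   // no match: do nothing
        vouchers.append(SafetyVoucher(assetID: assetID, matchedHash: hash))
    }

    // A human review is only triggered once enough matches accumulate.
    var reviewTriggered: Bool { vouchers.count >= reviewThreshold }
}

// Usage sketch with made-up hashes and a made-up threshold.
var scanner = UploadScanner(knownHashes: ["hashA", "hashB"], reviewThreshold: 30)
scanner.scan(assetID: UUID(), hash: "hashZ")    // harmless photo, ignored
scanner.scan(assetID: UUID(), hash: "hashA")    // match recorded as a voucher
print(scanner.reviewTriggered)                  // false until 30 matches exist
```

In the design as reported, the server cannot read the vouchers at all until the threshold is exceeded; that cryptographic blinding is the part this simplified local counter leaves out.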
Scanning technology is said to be very accurate
Green also raised concerns about the hashes Apple plans to use, citing the potential for "collisions," where someone sends a harmless file that shares a hash with known CSAM, potentially leading to a false positive. As a reminder, Apple says its scanning technology has an "extremely high level of accuracy" to ensure accounts aren't falsely flagged. Flagged matches are manually reviewed before a person's iCloud account is deactivated and a report is sent to the NCMEC. Green believes Apple's implementation will encourage other tech companies to adopt similar techniques.
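One way to see why a threshold plus manual review matters here is to treat accidental collisions as independent events. Under that simplifying assumption, the chance that an innocent library of n photos racks up at least t false matches is the upper tail of a Binomial(n, p) distribution. The sketch below uses made-up numbers; neither the collision rate nor the threshold is a figure published by Apple.

```swift
import Foundation

// Probability that an innocent account with `n` photos accumulates at least
// `t` accidental hash collisions, assuming each photo collides independently
// with probability `p` (a simplification; all numbers here are hypothetical).
func falseFlagProbability(photos n: Int, threshold t: Int, collisionRate p: Double) -> Double {
    // Walk the Binomial(n, p) pmf in log-space to avoid underflow, using
    // the recurrence P(k+1) = P(k) * (n-k)/(k+1) * p/(1-p).
    var logPmf = Double(n) * log(1 - p)    // log P(X = 0)
    var tail = 0.0
    for k in 0...n {
        if k >= t { tail += exp(logPmf) }
        if k < n {
            logPmf += log(Double(n - k)) - log(Double(k + 1)) + log(p) - log(1 - p)
        }
    }
    return tail
}

// Example: 100,000 photos, a hypothetical one-in-a-million collision rate,
// and a hypothetical review threshold of 30 matches.
let risk = falseFlagProbability(photos: 100_000, threshold: 30, collisionRate: 1e-6)
print(risk)   // astronomically small: the threshold pushes the account-level
              // false-positive rate far below the per-image collision rate
```

Of course, if collisions can be crafted deliberately rather than occurring at random, the independence assumption breaks down, which is part of Green's concern.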
That will break the dam. Governments will demand it from everyone.
Security researcher Alec Muffett, who formerly worked at Facebook, said Apple's decision to implement this type of image scanning is a "huge and regressive step for individual privacy."
Apple has been using screening technology for some time
As many noted on Twitter, several tech companies already scan images for CSAM. Google, Twitter, Microsoft, Facebook, and others use image-hashing methods to search for and report known child abuse images. It's also worth noting that Apple was already scanning some content for child abuse imagery before the new CSAM initiative was announced. In 2020, Apple's Chief Privacy Officer Jane Horvath confirmed that Apple uses screening technology to search for illegal images and deactivates accounts if evidence of CSAM is discovered. Apple had also already updated its privacy policy in 2019 to state that it scans uploaded content for "potentially illegal content, including child sexual exploitation material." In that sense, yesterday's announcement isn't entirely new. (Photo by weyo / Bigstockphoto)