Apple Defends Its Anti-Child Abuse Imagery Tech After Claims of ‘Hash Collisions’

Apple said the version of NeuralHash analyzed by researchers is not the final version that will be used for iCloud Photos CSAM detection.

Image: Japanexperterna.se/Flickr

Researchers claim they have probed a particular part of Apple's new system for detecting and flagging child sexual abuse material, or CSAM, and were able to trick it into saying that two clearly different images share the same cryptographic fingerprint. But Apple says this part of its system is not supposed to be secret, that the overall system is designed to account for collisions like this, and that the code analyzed is a generic version, not the final implementation that will be used in the CSAM system itself.

On Wednesday, GitHub user AsuharietYgvar published details of what they claim is an implementation of NeuralHash, a hashing technology in the anti-CSAM system announced by Apple at the beginning of August. Hours later, someone else claimed to have been able to create a collision, meaning they tricked the system into giving two different images the same hash. Ordinarily, a hash collision means that one file can appear to a system to be another. For example, if a piece of malware shares a hash with an innocuous file, an anti-virus product may flag the harmless file, thinking it poses a threat to the user.
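
To see why collisions matter for any blocklist-style scanner, consider the rough sketch below, which uses an ordinary cryptographic hash and a hypothetical blocklist value rather than anything from Apple's system: a scanner that only ever compares hashes cannot tell two colliding files apart.

```python
import hashlib

def file_digest(path: str) -> str:
    """Return the SHA-256 hex digest of a file's contents."""
    with open(path, "rb") as f:
        return hashlib.sha256(f.read()).hexdigest()

# Digests of files the scanner is meant to flag (hypothetical values).
BLOCKLIST = {
    "2c26b46b68ffc68ff99b453c1d30413413422d706483bfa0f98a5e886266e7ae",
}

def is_flagged(path: str) -> bool:
    """Flag a file whose digest appears in the blocklist. If two different
    files ever produce the same digest (a collision), the harmless one is
    flagged exactly like the harmful one, because the scanner only ever
    compares hashes, never the content itself."""
    return file_digest(path) in BLOCKLIST
```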

In a whitepaper, Apple explained that its CSAM detection technology will work on a user's device, as opposed to in the company's cloud, which is how other companies like Google and Microsoft do it. The system relies on a database of hashes—cryptographic representations of images—of known CSAM photos provided by the National Center for Missing & Exploited Children (NCMEC) and other child protection organizations. Apple's system will scan photos a user uploads to iCloud to see if any match those hashes, and if there are more than 30 matches, it will flag the user to an Apple team that will review the images. If Apple finds they are CSAM, it will report the user to law enforcement.
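
In outline, the matching and thresholding described above amounts to something like the sketch below. The function names and plain set lookups are illustrative assumptions: per Apple's documentation, the real comparison happens against an encrypted on-device database rather than a readable hash list.

```python
from typing import Iterable

MATCH_THRESHOLD = 30  # the threshold described in Apple's documentation

def count_matches(photo_hashes: Iterable[str], known_csam_hashes: set[str]) -> int:
    """Count how many of the uploaded photos' hashes appear in the known set."""
    return sum(1 for h in photo_hashes if h in known_csam_hashes)

def should_flag_for_review(photo_hashes: Iterable[str], known_csam_hashes: set[str]) -> bool:
    """Escalate to Apple's human review team only once more than
    MATCH_THRESHOLD uploads match; isolated matches are not escalated."""
    return count_matches(photo_hashes, known_csam_hashes) > MATCH_THRESHOLD
```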

In a document describing the new system, Apple says "The hashing technology, called NeuralHash, analyzes an image and converts it to a unique number specific to that image."
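
Apple has not published NeuralHash's internals beyond what researchers extracted, but the general idea of turning an image into a compact number that stays stable under small changes can be illustrated with a classic "average hash," sketched below with Pillow. This is only a stand-in for the concept, not Apple's algorithm, and the file path is a placeholder.

```python
from PIL import Image

def average_hash(path: str, size: int = 8) -> int:
    """Toy perceptual hash: shrink to an 8x8 grayscale grid, then set one bit
    per pixel depending on whether it is brighter than the grid's mean.
    Visually similar images tend to yield identical or near-identical values."""
    img = Image.open(path).convert("L").resize((size, size))
    pixels = list(img.getdata())
    mean = sum(pixels) / len(pixels)
    bits = 0
    for p in pixels:
        bits = (bits << 1) | (p > mean)
    return bits  # a 64-bit integer for the default 8x8 grid

# print(hex(average_hash("photo.jpg")))  # "photo.jpg" is a placeholder path
```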

Apple, however, told Motherboard in an email that the version analyzed by users on GitHub is a generic version, not the final one that will be used for iCloud Photos CSAM detection. Apple added that it has also made the algorithm public.

"The NeuralHash algorithm [... is] included as part of the code of the signed operating system [and] security researchers can verify that it behaves as described," one of Apple's pieces of documentation reads. Apple also said that after a user passes the 30 match threshold, a second non-public algorithm that runs on Apple's servers will check the results.

"This independent hash is chosen to reject the unlikely possibility that the match threshold was exceeded due to non-CSAM images that were adversarially perturbed to cause false NeuralHash matches against the on-device encrypted CSAM database," the documentation reads.

"If collisions exist for this function I expect they’ll exist in the system Apple eventually activates," Matthew Green, who teaches cryptography at Johns Hopkins University, told Motherboard in an online chat. "Of course it’s possible that they will re-spin the hash function before they deploy. But as a proof of concept this is definitely valid," he added, referring to the research on GitHub.

Apple's new system is not just a technical one, though. Humans will also review images once the system marks a device as suspicious after a certain threshold of offending pictures is reached. These reviewers will verify that the images do actually contain CSAM.

"Apple actually designed this system so the hash function doesn't need to remain secret, as the only thing you can do with 'non-CSAM that hashes as CSAM' is annoy Apple's response team with some garbage images until they implement a filter to eliminate those garbage false positives in their analysis pipeline," Nicholas Weaver, senior researcher at the International Computer Science Institute at UC Berkeley, told Motherboard in an online chat.

Ryan Duff, director of cyber products at SIXGEN and a researcher who has focused on the iPhone for years, said that it looks like Apple's algorithm "is pretty susceptible to preimage attacks."

"You could argue how risky that is. It means that the odds of any of your images matching CSAM are essentially nil," Duff said in an online chat. "But someone may be able to send you an image that registers as CSAM according to the NeuralHash algorithm."

Since Apple announced its new anti-CSAM system, privacy and security experts, as well as the general public, have raised concerns about how the system could be abused. The company has tried to address these concerns by publishing several technical whitepapers and organizing calls with journalists, but the attention researchers got today shows there's still a lot of interest in understanding how the system will work. 
