Apple walks a privacy tightrope to detect child abuse in iCloud


One child safety organization immediately praised Apple's measures, saying they strike the necessary balance, bringing us "closer to justice for survivors whose most painful moments have spread online," Julie Cordua, CEO of the child safety advocacy group Thorn, wrote in a statement to Wired.

Other cloud storage providers, from Microsoft to Dropbox, already scan images uploaded to their servers. But some privacy critics argue that by adding any kind of image analysis to user devices, Apple has taken a step toward a troubling new form of surveillance and weakened its historically strong privacy stance in the face of law enforcement pressure.

"I'm not defending child abuse. But this whole idea that your personal device is constantly locally scanning and monitoring you based on some criteria for objectionable content, and conditionally reporting it to the authorities, is a very, very slippery slope," said Nadim Kobeissi, a Paris-based cryptographer and founder of the cryptographic software firm Symbolic Software. "If this continues, I'll definitely switch to an Android phone."

Apple's new system doesn't simply scan user images, either on the company's devices or on its iCloud servers. Instead, it's a clever, and complex, new form of image analysis designed to keep Apple from ever seeing the photos unless they're determined to be part of a collection of multiple CSAM images uploaded by a user. The system "hashes" all images a user sends to iCloud, converting the files into strings of characters uniquely derived from those images. Then, like older CSAM detection systems such as PhotoDNA, it compares them against a large set of known CSAM image hashes provided by NCMEC to find any matches.
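The basic hash-then-match step can be sketched as below. This is a deliberate simplification: Apple's real system uses a perceptual hash (NeuralHash) rather than a cryptographic one, and the matching happens under layers of encryption described later. The image contents and hash list here are illustrative placeholders, not real data.

```python
import hashlib

# Hypothetical stand-in: a cryptographic hash in place of Apple's
# perceptual NeuralHash, just to illustrate the match step.
def image_hash(image_bytes: bytes) -> str:
    return hashlib.sha256(image_bytes).hexdigest()

# Known-CSAM hash set as provided by a clearinghouse like NCMEC
# (placeholder values for illustration only).
known_hashes = {image_hash(b"known-image-1"), image_hash(b"known-image-2")}

def matches_known(image_bytes: bytes) -> bool:
    """Hash an uploaded image and check it against the known set."""
    return image_hash(image_bytes) in known_hashes

print(matches_known(b"known-image-1"))  # True: a known image matches
print(matches_known(b"holiday-photo"))  # False: an innocuous image does not
```

The lookup itself is trivial; everything interesting in Apple's design is about where this comparison runs and who is allowed to learn its result.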

Apple is also using a new form of hashing it calls NeuralHash, which the company says can match images despite alterations like cropping or recoloring. Just as crucially for preventing evasion, its system never actually downloads those NCMEC hashes to a user's device. Instead, it uses cryptographic techniques to convert them into a so-called blinded database that's downloaded to the user's phone or PC, containing seemingly meaningless strings of characters derived from those hashes. That blinding prevents any user from obtaining the hashes and using them to skirt the system's detection.
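NeuralHash itself is proprietary, but the core idea of a hash that survives benign edits is illustrated by classic perceptual hashes such as the "difference hash" (dHash), which encodes the relative brightness of neighboring pixels rather than their raw values. The toy version below, operating on a tiny grayscale array instead of a resized real image, shows why a uniform brightness change leaves the hash unchanged:

```python
def dhash(pixels):
    """Difference hash of a grayscale image given as rows of ints.

    Emits one bit per horizontally adjacent pixel pair:
    1 if the left pixel is brighter than the right one.
    (Real dHash first resizes the image to a fixed small grid;
    that step is omitted here for brevity.)
    """
    bits = []
    for row in pixels:
        for left, right in zip(row, row[1:]):
            bits.append(1 if left > right else 0)
    return "".join(map(str, bits))

original = [
    [10, 20, 30],
    [90, 40, 50],
]
# Uniformly brightened copy: every pixel +15. The relative order of
# neighboring pixels is unchanged, so the hash is identical.
brightened = [[p + 15 for p in row] for row in original]

print(dhash(original) == dhash(brightened))  # True
```

A cryptographic hash like SHA-256 would change completely under the same edit; that robustness-to-alteration is exactly what distinguishes a perceptual hash.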

The system then compares that blinded database of hashes against the hashed images on the user's device. The results of those comparisons are uploaded to Apple's servers in what the company calls "safety vouchers," which are encrypted in two layers. The first layer of encryption uses a technique called private set intersection, so that a voucher can be decrypted only if the hash comparison produced a match. No information is revealed about hashes that don't match.
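The match-gated decryption property can be sketched with a toy construction: the device encrypts each voucher under a key derived from the image's hash, so the server, which knows the CSAM hash list, can re-derive that key only for images whose hash appears in the list. This is an assumption-laden simplification (Apple's actual private set intersection protocol uses elliptic-curve cryptography and the blinded database, and the keystream cipher below is a toy, not a secure primitive), but it captures why non-matching vouchers stay opaque.

```python
import hashlib

def keystream(key: bytes, n: int) -> bytes:
    # Toy hash-counter keystream; illustrative only, not a real cipher.
    out = b""
    counter = 0
    while len(out) < n:
        out += hashlib.sha256(key + counter.to_bytes(4, "big")).digest()
        counter += 1
    return out[:n]

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def make_voucher(image_hash: bytes, payload: bytes) -> bytes:
    """Device side: encrypt the voucher under a key derived from the
    image's hash. Only a party that knows this hash can open it."""
    key = hashlib.sha256(b"voucher-key|" + image_hash).digest()
    return xor(payload, keystream(key, len(payload)))

def try_open(voucher, known_hashes):
    """Server side: try the key for every known hash; a voucher for a
    non-matching image decrypts to unrecognizable bytes."""
    for h in known_hashes:
        key = hashlib.sha256(b"voucher-key|" + h).digest()
        payload = xor(voucher, keystream(key, len(voucher)))
        if payload.startswith(b"OK|"):  # simple integrity marker
            return payload
    return None

known = {b"hash-of-known-image"}  # placeholder hash value
v1 = make_voucher(b"hash-of-known-image", b"OK|visual derivative")
v2 = make_voucher(b"hash-of-vacation-pic", b"OK|visual derivative")
print(try_open(v1, known) is not None)  # True: a match decrypts
print(try_open(v2, known) is None)      # True: a mismatch stays opaque
```

The server learns the payload of matching vouchers and nothing about the rest, which is the essential promise of the first encryption layer.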

The second layer of encryption is designed so that matches can be decrypted only once there are a certain number of them. Apple says this is meant to avoid false positives and to ensure that it's detecting entire collections of CSAM, not single images. The company declined to reveal the threshold number of CSAM images it's looking for; it may in fact adjust that threshold over time to tune its system and keep its false positive rate below one in a trillion. Apple contends that these safeguards will prevent abuse of its iCloud CSAM detection mechanism, letting it identify collections of child exploitation images without seeing any of the other images users upload to iCloud.
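A standard way to build such a threshold, and the one Apple has described using, is secret sharing: each matching voucher carries one share of a decryption key, and the key can be reconstructed only once the number of shares reaches the threshold. The minimal Shamir secret sharing sketch below (field size, seed, and key value are arbitrary choices for illustration) shows the mechanic, though it omits everything else about Apple's "threshold secret sharing" construction:

```python
import random

PRIME = 2**61 - 1  # Mersenne prime used as the field modulus (arbitrary choice)

def make_shares(secret: int, threshold: int, n: int, seed: int = 0):
    """Split `secret` into n shares; any `threshold` of them recover it."""
    rng = random.Random(seed)  # fixed seed for a reproducible demo
    coeffs = [secret] + [rng.randrange(PRIME) for _ in range(threshold - 1)]
    def f(x):
        return sum(c * pow(x, i, PRIME) for i, c in enumerate(coeffs)) % PRIME
    return [(x, f(x)) for x in range(1, n + 1)]

def recover(shares):
    """Lagrange interpolation at x=0 recovers the polynomial's constant term."""
    total = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % PRIME
                den = den * (xi - xj) % PRIME
        total = (total + yi * num * pow(den, PRIME - 2, PRIME)) % PRIME
    return total

secret_key = 123456789  # stands in for the key protecting voucher payloads
shares = make_shares(secret_key, threshold=3, n=5)
print(recover(shares[:3]) == secret_key)  # True: 3 matches recover the key
print(recover(shares[:2]) == secret_key)  # False: 2 shares reveal nothing
```

Below the threshold, the shares are information-theoretically useless, which is what lets Apple claim it learns nothing from an account with only a handful of matches.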

Given that Apple doesn't currently encrypt iCloud Photos, and could simply run CSAM checks on images hosted on its servers as many other cloud storage providers do, this highly technical process represents a strange series of hoops to jump through. Apple argues that the process it's introducing, which splits the check between the device and the server, is less privacy-invasive than a simple bulk scan of server-side images.

But critics like Johns Hopkins University cryptographer Matthew Green suspect more complex motives in Apple's approach. He points out that, despite the process's privacy protections, the great technical lengths Apple has gone to in order to check images on a user's device only really make sense if images are encrypted before they leave the user's phone or computer, making server-side detection impossible. And he worries that this means Apple will extend the detection system to photos on users' devices that are never uploaded to iCloud, a kind of on-device image scanning that would represent a new form of intrusion into users' offline storage.

