Another worrying announcement from Apple (or not?)

Apple’s recent announcement about scanning iCloud Photos for child sexual abuse material (CSAM) and about notifications in Messages has generated a fair bit of pushback and mixed reactions from privacy experts.

Is this a backdoor?

Edward Snowden voiced his concerns on Twitter:

No matter how well-intentioned, Apple is rolling out mass surveillance to the entire world with this. Make no mistake: if they can scan for kiddie porn today, they can scan for anything tomorrow. They turned a trillion dollars of devices into iNarcs—without asking.

My initial reaction was that this might be the first step in installing a backdoor in iOS and macOS. Imagine what authoritarian governments could do with such tools… e.g. Saudi Arabia clamping down on LGBTQ+ images, or others flagging dissident or anti-government messages. Nightmare.

Kendra Albert, a lawyer at Harvard Law School’s Cyberlaw Clinic, raises concerns about the danger these features pose to queer kids:

These “child protection” features are going to get queer kids kicked out of their homes, beaten, or worse.

The EFF posted an insightful article on the subject entitled “Apple’s Plan to “Think Different” About Encryption Opens a Backdoor to Your Private Life”.

That’s not a slippery slope; that’s a fully built system just waiting for external pressure to make the slightest change.

A large number of security and privacy experts, legal scholars, and others have signed an open letter to Apple.

The open letter contains tons of arguments, scenarios, and examples from experts about just how bad this technology is, and just how easily it can be abused.

Updates

Aral Balkan voices his very serious concerns on his blog in a post titled “Apple is trying to redefine what it means to violate your privacy. We must not let it.”

This isn’t the first time Apple’s decisions have concerned me. I wrote a post back in 2019 outlining a few concerns I had about Apple’s stances on several topics.

Since then, Apple has sued companies that enable security research (and sort of lost [1], [2], [3]), so perhaps this latest announcement should be added to the list of odd (or worrying) decisions they have made over the years.

This led me to try to better understand what is going on here. Apple’s recent positions on privacy-related technologies didn’t really fit with what I was reading.

So, what is really going on?

Mike Peterson and Mike Wuerthele of AppleInsider go over the details of how the scanning works in both Photos and Messages, as does John Gruber on Daring Fireball:

There are three distinct initiatives here:

  • The iCloud Photos scanner compares, on device, mathematical hashes (fingerprints, not images) of known CSAM against the photos you store in iCloud Photos (see the sketch after this list). If a match is detected, the account is flagged and a report is sent to the National Center for Missing & Exploited Children (NCMEC). This only applies to photos stored in iCloud Photos, and the feature is not optional: to opt out of the fingerprint matching, you need to disable iCloud Photo Library. The database will ship as part of iOS 15, and it is a database of fingerprints, not images.
  • The Messages scanning only applies to accounts belonging to children, and is opt-in, not opt-out. Furthermore, it doesn’t generate any reports to external entities; only the parents of the children are notified that an inappropriate message has been received or sent. This feature applies specifically to children in a shared iCloud family account, and it is a feature of the Messages app, not the iMessage service. From my understanding, it changes nothing about the end-to-end encryption inherent to the iMessage protocol, and the analysis is performed on device too.
  • Siri and Search interventions: when a user tries to search for CSAM-related terms through Siri or Search, the feature informs the user of the intervention and offers resources.
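
To make the first initiative more concrete, here is a minimal sketch of on-device fingerprint matching. It is not Apple’s implementation: the fingerprint below is a plain SHA-256 digest, whereas Apple’s system uses a perceptual hash (NeuralHash) so that visually similar photos also match, and the database contents and the `shouldFlag` helper are invented for illustration.

```swift
import Foundation
import CryptoKit

/// Hex-encoded SHA-256 of a photo's bytes, standing in for a fingerprint.
/// (Apple's real fingerprint is a perceptual NeuralHash, not SHA-256.)
func fingerprint(of photoData: Data) -> String {
    SHA256.hash(data: photoData)
        .map { String(format: "%02x", $0) }
        .joined()
}

/// Hypothetical fingerprint database shipped with the OS: hashes of known
/// CSAM, never the images themselves.
let knownFingerprints: Set<String> = [
    // entries would be bundled with iOS 15, not hard-coded like this
]

/// On-device check performed before a photo is uploaded to iCloud Photos.
/// Only the fingerprint is compared; nobody looks at the photo at this stage.
func shouldFlag(_ photoData: Data) -> Bool {
    knownFingerprints.contains(fingerprint(of: photoData))
}
```

The point of the sketch is simply that the comparison happens against fingerprints on the device, not against the images themselves on a server.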

And all this will only apply to the USA for the time being.

As usual (sic), Apple’s lack of transparency leads one to wonder whether Apple is doing what it says it’s doing, the way it says it’s doing it. Apple’s system is designed to ensure that false positives are ridiculously rare, but it still raises valid privacy concerns.

As AppleInsider reports, Apple is neither the first nor the most aggressive at scanning for CSAM content. Google has been doing it in Gmail since 2014. Microsoft, Facebook, and Dropbox do it too, and they report much more, as their communications aren’t all (yet) end-to-end encrypted.

This is good news – in principle

In principle, what Apple announced is good news; nobody will disagree with the goal. But once something is built in, it’s built in, and there is no way of knowing for sure how it will be used in the future. Let’s hope Apple has found a reasonable middle ground, one in which privacy is preserved and these tools are as effective as possible without being abused.

Apple published a technical summary along with a detailed explanation of the cryptographic technology that determines whether a photo matches the CSAM database without revealing the result.
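
To illustrate the threshold idea from that summary, here is a toy sketch: no individual match triggers anything, and only an account that crosses a threshold of matching photos becomes eligible for review. Note that this only simulates the behaviour with plain counting; the real design relies on private set intersection and threshold secret sharing, so the server cannot even count matches below the threshold. The type names and the threshold value are invented for illustration.

```swift
/// One voucher per photo whose fingerprint matched on device.
struct SafetyVoucher {
    let photoID: String
}

struct AccountMatchState {
    /// Illustrative threshold; the real value is Apple's to choose.
    static let reviewThreshold = 30

    private(set) var vouchers: [SafetyVoucher] = []

    /// Record a voucher for a matching photo.
    mutating func record(_ voucher: SafetyVoucher) {
        vouchers.append(voucher)
    }

    /// Below the threshold, nothing is reviewable; at or above it, the
    /// matched photos become eligible for human review and, if confirmed,
    /// a report to NCMEC.
    var isEligibleForReview: Bool {
        vouchers.count >= Self.reviewThreshold
    }
}
```

In Apple’s actual scheme the cryptography enforces this property, rather than a counter the server is trusted to respect.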

Ars Technica and The Verge have good write-ups with links to technical assessments of the system from three independent cryptographers, who found it to be mathematically robust.

There’s always a thin line between privacy and freedom […] While privacy is a fundamental human right, any technology that curbs the spread of CSAM is also inherently good. Apple has managed to develop a system that works toward that latter goal without significantly jeopardizing the average person’s privacy. – AppleInsider
