
Apple is temporarily hitting the pause button on its controversial plans to screen users' devices for child sexual abuse material (CSAM) after sustained blowback over concerns that the tool could be weaponized for mass surveillance and erode user privacy.

"Based on feedback from customers, advocacy groups, researchers, and others, we have decided to take additional time over the coming months to collect input and make improvements before releasing these critically important child safety features," the iPhone maker said in a statement on its website.


The announcement, however, doesn't make clear what kind of input the company will gather, what changes it aims to make, or how it intends to implement the system in a way that mitigates the privacy and security concerns that could arise once it's deployed.

The changes were originally slated to go live with iOS 15 and macOS Monterey later this year, starting with the U.S.

In August, Apple detailed several new features intended to help limit the spread of CSAM on its platform, including scanning users' iCloud Photos libraries for illicit content, a Communication Safety option in the Messages app to warn children and their parents when receiving or sending sexually explicit photos, and expanded guidance in Siri and Search when users attempt to search for CSAM-related topics.

The so-called NeuralHash technology would have worked by matching photos on users' iPhones, iPads, and Macs against a database of known child sexual abuse imagery maintained by the National Center for Missing and Exploited Children (NCMEC) just before they are uploaded to iCloud Photos, without Apple having to possess the images or glean their contents. iCloud accounts that crossed a set threshold of 30 matching hashes would then be subject to manual review, after which they would be disabled and reported to law enforcement.
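
To make the threshold mechanism concrete, here is a minimal Swift sketch of hash matching against a known-hash database with a 30-match review threshold. All names (PerceptualHash, knownHashes, shouldFlagForReview) and the sample hashes are hypothetical, and the sketch deliberately omits the cryptographic machinery Apple described for keeping match results hidden until the threshold is crossed.

```swift
import Foundation

// Hypothetical sketch of threshold-based hash matching; names and values
// are illustrative, not Apple's actual implementation.
typealias PerceptualHash = String

// Stand-in for the database of known CSAM hashes derived from NCMEC.
let knownHashes: Set<PerceptualHash> = ["a1b2c3", "d4e5f6", "0f9e8d"]

// Threshold Apple described: an account is flagged for human review
// only once at least 30 uploads match known hashes.
let reviewThreshold = 30

// Count how many pending uploads match the known-hash database.
func matchCount(of uploads: [PerceptualHash]) -> Int {
    uploads.filter { knownHashes.contains($0) }.count
}

// Decide whether the account crosses the review threshold.
func shouldFlagForReview(_ uploads: [PerceptualHash]) -> Bool {
    matchCount(of: uploads) >= reviewThreshold
}

// Example: two matches out of three uploads stays well below the threshold.
let pendingUploads: [PerceptualHash] = ["a1b2c3", "123456", "d4e5f6"]
print(shouldFlagForReview(pendingUploads))  // false
```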

The measures aimed to strike a compromise between protecting customers' privacy and meeting growing demands from government agencies in investigations pertaining to terrorism and child pornography — and by extension, offer a solution to the so-called "going dark" problem of criminals taking advantage of encryption protections to cloak their contraband activities.

However, the proposals were met with near-instantaneous backlash, with the Electronic Frontier Foundation (EFF) calling out the tech giant for attempting to create an on-device surveillance system, adding "a thoroughly documented, carefully thought-out, and narrowly-scoped backdoor is still a backdoor."

"Once this capability is built into Apple products, the company and its competitors will face enormous pressure — and potentially legal requirements — from governments around the world to scan photos not just for CSAM, but also for other images a government finds objectionable," the Center for Democracy & Technology (CDT) said in an open letter.

"Those images may be of human rights abuses, political protests, images companies have tagged as "terrorist" or violent extremist content, or even unflattering images of the very politicians who will pressure the company to scan for them. And that pressure could extend to all images stored on the device, not just those uploaded to iCloud. Thus, Apple will have laid the foundation for censorship, surveillance, and persecution on a global basis," the letter read.

But in an email circulated internally at Apple, child safety campaigners dismissed the complaints of privacy activists and security researchers as the "screeching voice of the minority."


Apple has since sought to assuage concerns about unintended consequences, pushing back against the possibility that the system could be used to detect other kinds of photos at the request of authoritarian governments. "Let us be clear, this technology is limited to detecting CSAM stored in iCloud and we will not accede to any government's request to expand it," the company said.

Still, the assurance did little to allay fears that client-side scanning could amount to a troubling invasion of privacy, that it could be expanded to further abuses, and that it could provide a blueprint for breaking end-to-end encryption. It also didn't help that researchers were able to create "hash collisions" (i.e., false positives) by reverse-engineering the algorithm: two completely different images generated the same hash value, effectively tricking the system into treating them as identical when they're not.
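
To illustrate why such collisions undermine hash-based matching, here is a toy Swift example in which a deliberately weak hash (a wrapping byte sum) maps two different inputs to the same value. This is not NeuralHash, only a demonstration of the failure mode the researchers exposed: distinct images producing identical hashes.

```swift
import Foundation

// Toy hash collision: a deliberately weak "perceptual" hash assigns the
// same value to two entirely different inputs. NeuralHash is far more
// sophisticated, but researchers demonstrated the same failure mode.
func toyHash(_ data: [UInt8]) -> UInt8 {
    data.reduce(0) { $0 &+ $1 }  // sum of bytes modulo 256
}

let imageA: [UInt8] = [10, 20, 30, 40]  // toyHash == 100
let imageB: [UInt8] = [25, 25, 25, 25]  // toyHash == 100 as well

print(imageA == imageB)                    // false: different "images"
print(toyHash(imageA) == toyHash(imageB))  // true: identical hashes, a false positive
```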

"My suggestions to Apple: (1) talk to the technical and policy communities before you do whatever you're going to do. Talk to the general public as well. This isn't a fancy new Touch Bar: it's a privacy compromise that affects 1 billion users," Johns Hopkins professor and security researcher Matthew D. Green tweeted.

"Be clear about why you're scanning and what you're scanning. Going from scanning nothing (but email attachments) to scanning everyone's private photo library was an enormous delta. You need to justify escalations like this," Green added.
