The EFF wrote a really shitty hit piece that deliberately conflated the parental management feature with the matching against hashes of known illegal images. Two different things. From there, a bazillion hot takes followed.
The EFF article refers to a "classifier", not just matching hashes.
So, three different things.
I don't know how much you know about them, but this is exactly the EFF's role: curtailments of privacy can't go uncriticized or unchecked. We have no way to guarantee that Apple won't change how this works in the future, that it will never be compromised domestically or internationally, or that children and families won't be harmed by it.
It's an unauditable black box that stakes one of the highest, most damaging penalties in the US legal system on a bet that the system is perfect. Working backwards from that, it's easy to see how anything that assumes its own perfection becomes an impossible barrier for individuals, akin to YouTube's incontestable automated bans. Best case, maybe you lose access to all of your Apple services for life. Worst case, what, your life?
When you take a picture of your penis to send to your doctor and it accidentally syncs to iCloud and trips the CSAM alarms, will you get a warning before police appear? Will there be a whitelist to allow certain people to "opt-out for (national) security reasons" that regular people won't have access to or be able to confirm? How can we know this won't be used against journalists and opponents of those in power, like every other invasive system that purports to provide "authorized governments with technology that helps them combat terror and crime"[1]?
Someone's being dumb here, and it's probably the ones who believe that fruit can only be good for them.
> When you take a picture of your penis to send to your doctor and it accidentally syncs to iCloud and trips the CSAM alarms, will you get a warning before police appear?
You would have to have not one, but N perceptual hash collisions with existing CSAM (where N is chosen such that the overall probability of that happening is vanishingly small). Then, there'd be human review. But no, presumably there won't be a warning.
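To get a feel for the threshold math, here's a back-of-the-envelope sketch. It assumes a binomial model with an independent per-image false-match probability p, which is a strong assumption for perceptual hashes, and the numbers (p, k, n) are illustrative, not Apple's actual parameters:

```python
from math import comb

def p_at_least_n(p: float, k: int, n: int, terms: int = 50) -> float:
    """P(at least n false matches in a library of k photos), binomial model.
    Terms decay fast when n >> k*p, so a short partial sum is enough."""
    return sum(comb(k, i) * p**i * (1 - p) ** (k - i) for i in range(n, n + terms))

# Illustrative numbers only: a 10,000-photo library, a one-in-a-million
# per-image false match rate, and a threshold of 30 matches.
print(p_at_least_n(p=1e-6, k=10_000, n=30))  # ~4e-93, vanishingly small
```

The point is that even a mediocre per-image false-match rate collapses to near zero once you require many matches; the weak link is the independence assumption, not the arithmetic.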
> Will there be a whitelist to allow certain people to "opt-out for (national) security reasons" that regular people won't have access to or be able to confirm?
Everyone can opt out (for now, at least) by disabling iCloud syncing. (You could sync to another cloud service instead, but chances are your photos would be scanned there too.)
Beyond that, it would be good if Apple built it verifiably identically across jurisdictions. (If you think that Apple creates malicious iOS updates targeting specific people, then you have more to worry about than this new feature.)
> How can we know this won't be used against journalists and opponents of those in power, like every other invasive system that purports to provide "authorized governments with technology that helps them combat terror and crime"[1]?
By ensuring that a) the hash database used is verifiably identical across jurisdictions, and b) notifications go only to that US NGO. It would be nice if Apple could open-source that part of iOS, but unless one could somehow verify that that's what's actually running on the device, I don't see how it would alleviate the concerns.
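For what it's worth, (a) is at least mechanically checkable. A minimal sketch of what "verifiably identical" could mean in practice: publish a digest of each database version and let anyone compare the blob shipped on-device against it. The file names and the published digest here are hypothetical, not real Apple artifacts:

```python
import hashlib

def db_digest(path: str) -> str:
    """SHA-256 of a database blob, streamed so large files are fine."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

# Hypothetical: the digest Apple would publish for the current database version.
published = "0000000000000000000000000000000000000000000000000000000000000000"
for region_db in ["csam_db_us.bin", "csam_db_eu.bin", "csam_db_cn.bin"]:
    assert db_digest(region_db) == published, f"{region_db} diverges!"
```

Of course this only proves the databases are the same bytes, not that the bytes contain only CSAM hashes; that part still comes down to trusting the NGO.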
That isn't an offer of legal protections or guarantees that the trustworthiness and accuracy of their methods can be verified in court.
It really doesn't matter how they do it, now that we know iOS has vulnerabilities that allow remote monitoring and control of someone's device, to the extent that they created a market for at least one espionage tool that has led to the deaths of innocent people.
I remember when the popular way to shut down a small forum or business competitor, or to get embarrassing information taken off the web, was to anonymously upload CP to the site and then report it, repeatedly. With this, what's to stop virtual "SWATing" of Apple customers? Not necessarily just those whose Apple products or iCloud accounts have been compromised, or who are the victims of hash collisions (see any group of non-CSAM images that CSAM detection flags).
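For anyone unfamiliar with why perceptual hash collisions are even possible: unlike cryptographic hashes, perceptual hashes are lossy on purpose, so that recompressed or resized copies of an image still match. Here's a toy "average hash" in Python, a generic perceptual hash that is emphatically not whatever Apple actually uses, just to show how little information survives the reduction:

```python
from PIL import Image

def average_hash(path: str) -> int:
    """Reduce an image to 64 bits: 1 bit per pixel of an 8x8 grayscale thumbnail."""
    img = Image.open(path).convert("L").resize((8, 8))
    pixels = list(img.getdata())
    avg = sum(pixels) / len(pixels)
    bits = 0
    for px in pixels:
        bits = (bits << 1) | (px > avg)  # above/below the mean brightness
    return bits

def hamming(a: int, b: int) -> int:
    return bin(a ^ b).count("1")

# Two images "match" if their hashes are within some small Hamming distance.
# With the entire image squeezed into 64 bits, unrelated images can land close
# together, and an attacker can deliberately craft an innocuous image that does.
```

Apple's scheme is far more sophisticated than this, but the structural tradeoff is the same: tolerance to benign edits necessarily buys some tolerance to adversarial ones.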
Will Apple analyze all of its hardware to ensure no innocent person is framed through an undisclosed vulnerability? What checks are on offer for a process that is notoriously burdensome for the accused?
>If you think that Apple creates malicious iOS updates targeting specific people, then you have more to worry about than this new feature
You’re making my point. Nobody cares about your penis pictures.
The EFF is an advocacy group, and you need to read between the lines of what they say, because they have a specific set of principles that may or may not align with reality. They published a bad article and took an extreme stance about what could be, as opposed to what is.
Parents care about children sending or receiving explicit material, for behavioral, moral, and liability reasons.
When your 12-year-old boy sends a dick pic to his girlfriend, that may be a felony. When your 16-year-old daughter sends an explicit picture to her 18-year-old crush, he may be committing a felony by possessing it.
So why doesn't Apple release an iPod Touch or iPhone without a camera? Or child-safe versions of its apps that can't send or receive images from parties the parents haven't approved?
There seem to be an endless number of ways to achieve what they claim without invasively scanning people's private data.
The EFF today is really not the organisation it was just a few years ago. I don't know what bad hires they made, but the reasoned takedowns have been replaced with hysterical screaming.