Apple uploads all photos to iCloud Photos (just as Google does). This includes both CSAM and non-CSAM.
The system keeps a running count of how many photos get flagged as possible CSAM. Apple doesn't even get an alert until an account crosses a threshold, and at that point no human has reviewed anything; the system is flagging content the way other moderation systems do.
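As a rough mental model, that gating looks like the counter below. This is only an illustrative sketch: the real design uses threshold secret sharing, so sub-threshold matches are cryptographically invisible even to Apple, and the names here are made up.

    # Illustrative model only: the real system uses threshold secret sharing,
    # so matches below the threshold are cryptographically hidden from Apple.
    # Class and method names are assumptions for illustration.
    MATCH_THRESHOLD = 30  # Apple's stated figure was "around 30" matches

    class AccountMatchCounter:
        def __init__(self):
            self.match_count = 0

        def record_upload(self, is_hash_match: bool) -> bool:
            """Return True only once the account crosses the review threshold."""
            if is_hash_match:
                self.match_count += 1
            return self.match_count >= MATCH_THRESHOLD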
Are you sure that, legally, one can't review flags from a moderation system? That is routine today. No one is knowingly doing anything; their system's design describes being alerted to counts of possible CSAM matches.
Is your goal that Apple goes straight to automated child-porn reports with no human involvement at all? At the scale of billions of photos, that will produce a fair number of false positives with potentially very serious consequences.
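To put rough numbers on that (these figures are assumptions, not published rates): even a tiny per-image false-match rate, multiplied across billions of daily uploads, means a steady stream of wrong flags.

    # Back-of-the-envelope arithmetic with assumed numbers, not published figures.
    photos_per_day = 4_000_000_000   # assumed daily iCloud photo uploads
    false_match_rate = 1e-6          # assumed per-image hash false-match rate

    expected_false_flags = photos_per_day * false_match_rate
    print(expected_false_flags)      # 4000.0 wrongly flagged photos per day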
The current approach is that images get flagged in various ways, people review them (and yes, a large share of flags turn out to be problematic), and then next steps are taken. Until then, the flags are treated only as possible CSAM.
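In pipeline terms the flow is something like the sketch below, a simplified model of a generic moderation pipeline, not Apple's actual internals.

    # Simplified sketch of a generic moderation pipeline, not Apple's internals.
    from enum import Enum, auto

    class FlagStatus(Enum):
        FLAGGED = auto()          # automated match; no human has looked yet
        CONFIRMED = auto()        # a human reviewer agrees it is CSAM
        FALSE_POSITIVE = auto()   # a human reviewer clears it

    def review(looks_like_csam: bool) -> FlagStatus:
        """Human review step: until this runs, a flag is only *possible* CSAM."""
        return FlagStatus.CONFIRMED if looks_like_csam else FlagStatus.FALSE_POSITIVE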
Before you jump from "flagged" to "sure thing", please look into all the false positives in YouTube's screening. These databases and systems are not perfect.
I'm not a lawyer, and I want Apple to do nothing here, especially not scan my device.
I'm saying the linked article under discussion says you can't transmit content you KNOW (or suspect) is CSAM. You don't assume that all your customers' content is CSAM, but once your own scan has flagged something, you arguably have to.
The only legal way to transmit it (according to the article) is to the government authorities.
I don't know the legal view on "false positive" suspicion versus the legality of transmitting; that seems like a gamble. I don't have a further opinion on it, since IANAL and this is a very grey legal area.
Apple is very clear that they know nothing when photos are uploaded. The system doesn't even begin to tell them that an account may have CSAM until it has accumulated something like 30 matches. The jump from this type of system (everyone else uses variations of it) to some kind of child-porn charge is such a reach it's mind-boggling, especially since the very administrative entities involved support it.
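For a sense of why that threshold matters (again, assumed numbers, not Apple's published math): the probability that an innocent account racks up ~30 independent false matches collapses toward zero, which is the basis for Apple's stated "one in a trillion per account per year" figure.

    # Poisson approximation of the chance an innocent account hits the threshold.
    # Per-photo false-match rate and library size are assumed, not Apple's figures.
    from math import exp, factorial

    p = 1e-6        # assumed per-photo false-match probability
    n = 100_000     # assumed photos in one account's library
    threshold = 30

    lam = n * p     # expected false matches per account (0.1 here)
    # P(exactly `threshold` false matches); the full tail P(>= threshold)
    # is the same order of magnitude because the terms decay so fast.
    p_hit = exp(-lam) * lam**threshold / factorial(threshold)
    print(p_hit)    # on the order of 1e-63 with these assumptions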
A strong claim (that Apple is committing CSAM felonies) should be backed by reasonably strong evidence.
Here we have a blog post whose author talked to someone ELSE who (anonymously) reached some sort of legal conclusion. The QAnon claims in this area (there are many) follow a somewhat similar pattern: someone heard from someone that something someone did is crime X. That's a weak basis for legal conclusions like these.