
There will only be an alert if that photo is extremely similar to an image in the NCMEC database, AND there are numerous other photos on the account that also match. The threshold number of matches needed to trigger an alert is tuned for a one-in-a-trillion chance of a false positive.
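As a toy illustration of the thresholding idea: Apple's actual system uses NeuralHash with private set intersection and threshold secret sharing, so the server never sees individual match results in the clear. This sketch ignores all of that cryptography and just shows why a single match is not enough to trigger an alert; the names `KNOWN_HASHES` and `MATCH_THRESHOLD` are hypothetical placeholders, not Apple's.

```python
# Hypothetical stand-in for the NCMEC perceptual-hash set.
KNOWN_HASHES = {"a1b2", "c3d4", "e5f6"}

# Illustrative threshold; the real value is chosen so that the
# combined false-positive probability is about 1 in a trillion.
MATCH_THRESHOLD = 30

def count_matches(photo_hashes, known=KNOWN_HASHES):
    """Count how many of an account's photo hashes hit the known set."""
    return sum(1 for h in photo_hashes if h in known)

def should_alert(photo_hashes, threshold=MATCH_THRESHOLD):
    """Alert only when the number of matching photos crosses the threshold."""
    return count_matches(photo_hashes) >= threshold
```

One stray near-collision stays far below the threshold, which is the point of tuning the alert on the *count* of matches rather than on any single photo.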

Furthermore, if you were using, say, Google Photos to store your images, then you were already subject to this kind of scanning.



So what if a small circle of people produce CSAM material themselves and share it only among themselves? None of those pictures is in the NCMEC database, so either the algorithms are really, really good at recognizing new material, or catching them would require human intervention: scanning every phone one by one, deciding which pictures match the criteria, and writing down the names of the people involved. I can't think of a version of that scenario that doesn't imply a total loss of privacy for anyone even remotely linked to one of these people.


If a small circle of people share only previously unknown CSAM content among themselves and never share known CSAM material, then Apple's scheme has no way to catch them.

However, if at any point one of them screws up and shares a known CSAM image, the entire ring can be caught. This is not actually an implausible scenario. See for example: https://twitter.com/alexstamos/status/1424037132201431045?s=...




