Ad-blocking company AdGuard is the latest to offer commentary on Apple’s controversial decision to detect CSAM in iCloud Photos. The team ponders ways to block it using its AdGuard DNS technology.
We are considering preventing the upload of the safety voucher to iCloud and blocking CSAM detection within AdGuard DNS. How could it be done? It depends on how CSAM detection is implemented, and until we understand it in detail, we can’t promise anything specific.
Who knows what this database can turn into if Apple starts cooperating with third parties? The database goes in, the voucher goes out. Either of these processes could be obstructed, but right now we are not ready to say which approach is better, or whether it can be easily incorporated into AdGuard DNS. Research and testing are required.
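To make the idea concrete, here is a minimal sketch of the mechanism AdGuard DNS would have to lean on: refusing to resolve whichever hostname carries the voucher upload. Everything here is hypothetical, the endpoint name above all, since Apple hasn’t documented how (or over which hosts) vouchers travel, which is exactly the uncertainty AdGuard flags.

```python
import socket
from typing import Optional

# Hypothetical blocklist. Apple has not documented a distinct hostname for
# safety-voucher uploads; this name is an illustration, not a real endpoint.
BLOCKLIST = {
    "voucher-upload.icloud.example.com",
}

def resolve(hostname: str) -> Optional[str]:
    """DNS-filter sketch: return an IP, or None (an NXDOMAIN) for blocked names."""
    if hostname in BLOCKLIST:
        return None  # the client never learns the address, so the upload can't start
    return socket.gethostbyname(hostname)  # stand-in for a real upstream DNS query
```

The catch, and likely why AdGuard won’t commit to anything yet, is that if vouchers ride along on the same hostnames as ordinary iCloud Photos traffic, a DNS filter can only block everything or nothing.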
Check It Out: AdGuard: ‘People Should be Worried About Apple CSAM Detection’
Jeff and Andrew:
Viewed differently, the fact that your AI cannot tell the difference between a chihuahua and a muffin should instil confidence in AI privacy preservation. (NB: this is not about Apple’s decision to screen for CSAM or its potential exploits.) Bear with me.
Think back to your trigonometry course in school (although I’m not sure that trig is required in the US school system – my kids didn’t have to take it – but humour me). Suppose your teacher gave you a problem to solve for a geometric shape. Once you solved it and stared at the formula, despite being a sentient being with an imagination, you do not see a shape or a figure. You see a mathematical formula. Full stop. Your AI is neither sentient nor imaginative, and therefore has no capacity for curiosity, no urge to figure out what the formula even describes.
In fact, if you had a particularly sadistic teacher (mine wasn’t sadistic; he just didn’t care whether you liked it or not – no, seriously), he or she could give you a series of geometric shapes to solve for, each describing a different aspect of one thing. These might describe the outline of a trilobite and the gentle undulations along its dorsal surface; the sharper curves of a pill bug and the serried lines along its back when closed up; or the fan-shaped curves of an exotic seashell following a gentle arc. As you solve for each of these shapes, you are not likely to organise any of them (independent of having an image in front of you) into a coherent picture. Swap any one of these images for something pornographic (I’m not offering any suggestions here) and, once you solve for it and stare at the various formulae you’ve worked out, you’re not going to be aroused any more than you were by the solutions for the pill bug or the trilobite. Our brains are simply not wired that way; and AI is not even a close approximation.
Not only can AI not distinguish a dog from a muffin, it doesn’t care. At all. About anything. The only thing it is trained to do, like a good student (or at least one who wants a passing grade and to move up out of that class), is to derive the correct solution and match it with the teacher’s.
As for what it describes? Seriously, your AI doesn’t know and doesn’t care. And that’s a good thing.
Of course, this is separate and apart from any concerns about how Apple’s CSAM technology might be susceptible to exploits, or any pros and cons of its use. I’m not even trying to address that here. That’s a larger discussion, and for it to be engaged intelligently, we need more specifics about which of these technologies Apple have actually used. Another time.
This might be my favorite argument against CSAM hash scanning: “your AI can’t tell a chihuahua from a muffin.”
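Jeff’s “solve and match” framing maps neatly onto how perceptual-hash scanning works in principle. Below is a minimal sketch using the classic average hash; Apple’s NeuralHash is a far more sophisticated neural perceptual hash, and its matching happens inside a cryptographic protocol, so treat this as an analogy for the argument, not a description of Apple’s system. The threshold match and the sample hash value are illustrative inventions.

```python
from typing import List

def average_hash(pixels: List[List[int]]) -> int:
    """Derive a 64-bit perceptual hash from an 8x8 grayscale image.

    Each bit records whether a pixel is brighter than the image's mean.
    The output is just a number; it carries no semantic content.
    """
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    bits = 0
    for p in flat:
        bits = (bits << 1) | (1 if p > mean else 0)
    return bits

def hamming_distance(a: int, b: int) -> int:
    """Count the bits on which two hashes differ."""
    return bin(a ^ b).count("1")

# A hypothetical database of hashes of known images -- the "teacher's answers".
KNOWN_HASHES = {0x8F3C_07E0_1FF8_0000}

def matches_database(image_hash: int, threshold: int = 5) -> bool:
    """Flag an image only if its hash lands near a known hash.

    The matcher never inspects the image itself; like the student staring
    at a formula, it only checks whether its answer lines up.
    """
    return any(hamming_distance(image_hash, h) <= threshold for h in KNOWN_HASHES)
```

The pipeline only ever compares numbers against numbers; nothing in it “looks at” the photo in any semantic sense, which is the whole of Jeff’s point.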