Could you explain your posture on the CSAM scanning thing? I'm not an Apple user, but the way I see it, it's not some sort of nudity / age detection mechanism where a very incompetent system could land you in jail for an old digitized photo of yourself after a shower from 30 years ago, when parents actually took those kinds of pics.
The way I see it, it just hashes them with whatever mechanism they came up with, and there are additional mechanisms to verify it if by some fringe coincidence your cat picture's hash matches some CSAM hash, which would be annoying but not the end of the world.
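Roughly, the mental model I have is something like this (a minimal sketch in Python, not Apple's actual pipeline; they describe a perceptual hash called NeuralHash plus blinding and a match threshold, so the hash function and database below are just placeholders):

    import hashlib

    # Placeholder set of known-bad hashes; the real database is curated by
    # NCMEC-type organizations and is never shipped to the device in the clear.
    KNOWN_HASHES = {
        "d2a84f4b8b650937ec8f73cd8be2c74add5a911ba64df27458ed8229da804a26",
    }

    def image_hash(image_bytes: bytes) -> str:
        # Stand-in for a perceptual hash. A real perceptual hash maps visually
        # similar images to the same value, which is also why collisions can happen.
        return hashlib.sha256(image_bytes).hexdigest()

    def is_flagged(image_bytes: bytes) -> bool:
        # A single match alone wouldn't be reported; the described system needs
        # a threshold of matches plus verification before anything is escalated.
        return image_hash(image_bytes) in KNOWN_HASHES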
Now, on the other hand, let's say the snitch detects actual CSAM on someone's phone. What's the problem? If it was sent without their consent, an investigation can lead to who sent it, and if it wasn't, well, tough shit...
I know it sounds very 1984-ish, but honestly I don't think it's any worse than the kind of surveillance power Google has with all their platforms combined (Chrome, Android, Search, any web thing they haven't killed already).
I guess what I'm asking for is real arguments on why and how this truly violates privacy, and to what extent it is problematic for a legit, non-CSAM-consuming person.
I'm not trying to argue with you, but to understand this point of view, since I've read so many comments against it but nothing that seriously made sense to me.
I am not fond of my device burning battery on this, and of being one bitflip away, at a jmp-if-zero/jmp-if-not-zero, from calling an API whose only purpose is to inform the authorities that I am a suspect. If it happened server-side I would have no problem with it.
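To spell out what I mean by "one bitflip away": everything hangs on a single conditional in front of the reporting call. A hypothetical sketch (the names and the threshold here are mine, not Apple's API):

    MATCH_THRESHOLD = 30  # illustrative number; the point is the branch, not the value

    def report_to_reviewers(vouchers):
        # Stand-in for the API whose only purpose is to flag my account as suspect.
        print(f"escalating {len(vouchers)} vouchers for review")

    def on_scan_complete(matched_vouchers):
        # The whole safeguard sits behind this one comparison,
        # the jmp-if-zero / jmp-if-not-zero I'm talking about.
        if len(matched_vouchers) >= MATCH_THRESHOLD:
            report_to_reviewers(matched_vouchers)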
Thanks for your response, I'll answer just to continue the conversation, not to try to invalidate your points or anything like that.
I read in other posts that there's some sort of review process and a way to verify whether the image is a match or a collision (I don't know much about the details), but I also read the latest posts about the attacks that were being worked on.
I mean, with the complexity these attacks have, I think it'd be easier for a ransomware gang to just infect you, plant the CSAM, find reliable contact info, verify it, lock your phone, and extort you through an untraceable side channel (if this system didn't exist), or something a bit more elaborate / targeted (not even at NSO level).
IDK, I think the phone burns battery for dumber reasons at a higher rate. This should only be activated when there's a new picture written to disk, and it's probably less expensive resource-wise than WhatsApp / Telegram / iMessage checking for new messages periodically, don't you think?
I think if they don't royally fuck the process up, or turn it into some idiotic fake way of getting the cops whatever paperwork they need to force you to give them access to your files, it's a good thing.
The answer to why people don't like this is simple: if a government like China says "Apple, you're going to add these image hashes to the database and report any device that has them in the next update, or you're going to leave China," what do you think Apple is going to do?
I have read their papers, I understand the system and the safeguards they put in place, but none of them are good enough to have scanning on my device. There is nothing that is good enough. On-device scanning for "illicit" content is a box that cannot be closed.
They have the whole system at their disposal for that, they don't have to do this. As an example (I know I could be out of date with this one), do you know why there aren't any iMessage bridges that don't require a Mac?
IIRC it's deeply entrenched in the system, and no one has reversed their way deep enough to be able to replicate it. Now this might sound silly, but it's just an example, a contrast maybe with the hard work the people behind Asahi are putting in, or the huge jailbreak community, but the idea I'm trying to convey is that the playing field is HUGE and they just don't need this.
The one thing I would be 100% concerned about is the investigation process for matches, because that's mainly where human interaction and decision making come into play, and we humans SUCK; we've put people behind bars for years for no reason, and with all this AI crap there have been a lot of news articles about that kind of stuff, and that's something we should definitely be worried about. But I guess it's less about the tech and more about the people in charge, right?
I don't understand the idea you are trying to convey. iMessage is not impenetrably complex, it is just an ugly API that uses an Apple-provided certificate and a valid serial number as part of the authentication factor.
I also don't agree that the human-in-the-loop part of the process is a/the problem. Are you suggesting that it should just send the findings straight to the FBI... where a human would review it? Or maybe skip all of the messy middle part and if the model detects enough CSAM just send an APB to the local police to pick you up and take you straight to prison with no trial?
I was using iMessage as an example of something tedious and not overly complex, but not exactly low-hanging fruit, that's yet to be completely reversed. The S/N or certificate parts don't even matter; if people had reversed their way through it, there would at least be an option to extract the required parameters out of your hardware and plug them into a standalone server (in fact, IIRC there was a valid S/N generator some time ago that was used to deploy OS X in KVM?).
So, the idea is that even though it's not an impenetrable fortress, there are still plenty of dark places to introduce surreptitious changes.
As for the human-in-the-loop part, I don't know why you got so snarky. What I was talking about was that this is the layer that should be scrutinized the most, all of those components, because even without the technology those are the people who will put someone in jail with no verifiable evidence.
So your argument is "iOS is complex, they could have just hidden it in there, but they didn't, they told us about it." I'm still not sure why this matters. From the standpoint of interacting with a government, Apple could say "we cannot do that and maintain the security of the OS." Now, post announcement, they have to say "we will not do that."
That is a huge difference.
I got snarky because the human-in-the-loop for decision making is the least concerning part of the process, and the alternatives are as ridiculous as I laid out. There will always be a human-in-the-loop in this process - I'd rather it start with Apple's human, then law enforcement, then a prosecutor, then a judge, etc.
You really should go read Apple's papers, FAQ, etc. on the feature. Not saying that has happened here, but there are a lot of knee-jerk, uninformed opinions and information floating around. Also take a look at PhotoDNA, which is an older hash system already in use by other providers.
In your example about planting CSAM, why would on/off device matter since the new feature only checks for items going to iCloud anyway? The planting CSAM attack vector is available right now for any device connected to FB, OneDrive, or Gmail, and I don't think planting material has been an issue.
Well, in the planting scenario I didn't mention the attacker uploading it to iCloud directly because that's exponentially harder nowadays.
If you're hit by an NSO client and they have an agent running on your phone checking in with their C2, which do you think would be easier:
1 - Run a reverse proxy on your phone, steal your credentials (or session data), and use that connection to upload the material
2 - Write it to disk and wait for the media scanner service to pick it up and act on it?
I mean, in the end it's not about the technology but the people operating it. If Apple is really incompetent and law enforcement is shitty as usual, then yeah, people might end up behind bars for no reason, which sucks, but in that case I think the focus shouldn't be the technology itself but how shitty and unfair the system is.
There is also the other thing with iMessage scanning. It seems ripe for abuse, for example a husband forcing it on every family member, including the spouse (and forcing a fake DOB).
Using Apple devices used to be all about how they serve the user to bring joy. Knowing they now spend even a single CPU instruction on trying to frame the user turns the device from something I loved into something I fear.
All the talk about human review and multiple failsafes does not smooth things over. App Store review is a prime example of how their review process can be seriously flawed - scam apps and subscriptions sometimes even being FEATURED in editorials on the App Store.
It does not matter that you will be found innocent in the end. Just being put under investigation for CSAM can make anyone's life a living hell. Getting your Apple ID blocked, even if temporary, can cause severe problems.
When a company advertises that "what happens on your phone stays on your phone", and then proceeds to build snitchware into the phone that reports on received iMessages to the "family head of household", and reports and UPLOADS photo roll items that were never intended to be shared to human review, well... that company does not appear to be honest anymore.
> There is also the other thing with iMessage scanning. It seems ripe for abuse, for example a husband forcing it on every family member, including the spouse (and forcing a fake DOB).
In addition, this only works for <18 accounts. If the abusive figure goes as far as making other family members recreate Apple IDs and lie about their age every 5 years to keep getting access to iMessage (and other parental controls like screen time) then there's not much Apple can do.
In that iMessage scenario, the family member has to explicitly approve sharing anything with the adult on the iCloud account. Nothing happens automatically.
No, I didn't. Even if someone is underage and gets nudity messaged to them with this feature enabled, they have to explicitly opt in to sending it to their parent. Otherwise, nobody sees it.
But it ultimately does happen server side. The client does the hash creation, but the server runs it through an elliptic curve check to see if there's a real match. And if so, it performs an additional hash on the server side.
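A rough sketch of that split, with stand-in functions (the real protocol is NeuralHash on the device plus private set intersection over elliptic curves on the server; the names below are illustrative, not Apple's API):

    import hashlib

    # Placeholder for the blinded database the server holds; the device never
    # sees it in the clear and never learns whether a given photo matched.
    BLINDED_DB = {hashlib.sha256(b"known-image-1").hexdigest()}

    def client_create_voucher(image_bytes: bytes) -> str:
        # On-device step: compute the hash and attach it to the iCloud upload.
        return hashlib.sha256(image_bytes).hexdigest()

    def secondary_hash_check(voucher: str) -> bool:
        # Stand-in for the additional, independent server-side hash mentioned above.
        return True

    def server_check(voucher: str) -> bool:
        # Server-side step: the actual match decision happens here, and only on
        # a real match does the second check run before any human review.
        if voucher in BLINDED_DB:  # stand-in for the elliptic-curve PSI match
            return secondary_hash_check(voucher)
        return False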
One question I have on CSAM is: so it only detects known pics that law enforcement already has? If so, is that the major issue with child porn, i.e. the same pics getting passed around? It just seems like this won't prevent abuse from occurring with their own new pics and videos.
FB, Google, MS, etc. already use a similar hash-based system called PhotoDNA on any photos in their clouds. They reported ~20M+ instances last year, so yeah, it seems like the same pics do get passed around.
I'm not defending the FBI or the Apple feature here, but in a lot of cases where abusers get caught, it's because they were sharing pictures of their victims in groups where they exchange other pictures with other people who are into that. Sometimes those other pictures are in these databases. So this database matching generates leads.
The last case I heard about was a cleaner at a body expression workshop for kids with learning disabilities who was sharing new pictures he took of girls in a Telegram group. The group was infiltrated by an FBI agent who allegedly got the link from a Facebook group that they found because of Facebook scanning for known hashes and reporting them.
I can't remember the actual title of the article, but I clearly remember an instance (I think it was a few years ago) where there was a CP ring that operated (at least partly) through a WhatsApp group, and they got busted when they accidentally added someone with the wrong phone number. So yeah, I think there's a lot of "low hanging fruit" that could lead to putting some of these assholes behind bars.
I don't think it would prevent "new" abuse, but just like you might have some material (books, music, whatever), these people have their stuff, and it's not like every single one of them is a producer. But they might be part of communities, and catching some of them might lead law enforcement to bigger fish, and hopefully producers. Or at least that's how I think the people behind this might be thinking.
> I know it sounds very 1984-ish, but honestly I don't think it's any worse than the kind of surveillance power Google has with all their platforms combined (Chrome, Android, Search, any web thing they haven't killed already).
This should give you pause.
Why do you think something that sounds 1984ish should be acceptable to anyone? Why is it acceptable to you?
The fact that other companies also have advanced surveillance power should be reason to push back on that as well, not to cede more ground to surveillance.
I guess your logic doesn't make any kind of sense to me.