
If you read their Security Threat Model Review [1] they're only using the "intersection of hashes provided by at least two child safety organizations operating in separate sovereign jurisdictions".

So you'd have to pressure NCMEC and another org under a different government to both add the non-CSAM hash, plus Apple would need to be pressured to verify a non-CSAM derivative image, plus you'd need other hash matches on-device to exceed the threshold before they could even do the review in the first place (they can't even tell if there was a match unless the threshold is exceeded).
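The two safeguards described above (the two-jurisdiction intersection and the match threshold) can be sketched roughly like this. This is a hypothetical illustration, not Apple's actual code; the function names are made up, and the threshold value of 30 is the initial figure mentioned in the threat model review:

```python
# Hypothetical sketch of the safeguards described in the threat model
# review. All names are illustrative; hashes are stood in for by strings.

def buildable_database(org_a_hashes: set, org_b_hashes: set) -> set:
    """Only hashes supplied by BOTH child-safety orgs (operating in
    separate sovereign jurisdictions) enter the on-device database."""
    return org_a_hashes & org_b_hashes

def review_allowed(device_matches: set, threshold: int = 30) -> bool:
    """Apple can only decrypt and human-review match vouchers once the
    number of matches on a device meets the threshold; below it, Apple
    cannot even tell whether any match occurred."""
    return len(device_matches) >= threshold

db = buildable_database({"h1", "h2", "h3"}, {"h2", "h3", "h4"})
# "h1" and "h4" were each submitted by only one org, so neither is
# ever scanned for on-device.
assert db == {"h2", "h3"}
assert not review_allowed({"h2"})  # a single match reveals nothing
```

A single government leaning on one organization gets nothing into the database; it takes collusion across jurisdictions plus enough on-device matches before Apple's human review step is even possible.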

I get why people are concerned, but between this thread and the other thread yesterday it's clear that pretty much everyone discussing this has no idea how it works.

1: https://www.apple.com/child-safety/pdf/Security_Threat_Model...




I think you're missing what concerns people; what guarantee do we have that Apple will, always and forever, only use the "intersection of hashes provided by at least two child safety organizations operating in separate sovereign jurisdictions"?


We have the same guarantee about that today as we did a month ago before this was announced. Apple writes the software that runs their devices, so they’ll always have the option to ship something like that.

Apple could have quietly implemented CSAM scanning server-side, and left the door open to it being quietly exploited in who knows what way. But they didn’t: instead they put a whole bunch of infrastructure in place that all but guarantees that they’ll be immediately caught (and publicly excoriated) if they try to use this CSAM mechanism for anything other than CSAM. (See the PDF that GP linked for technical details on why.)

Of course, they could still do it with some other mechanism. But in that case none of these CSAM changes are at all relevant to the concern, as that risk is unchanged from a month ago.


> Apple could have quietly implemented CSAM scanning server-side

Aren't iCloud Photos already scanned for CSAM though?


No


Their privacy page[1] certainly suggests they have been since 2019 - "pre-screening or scanning uploaded content for potentially illegal content, including child sexual exploitation material".

Do you have some source to indicate definitively that they have not been scanning iCloud Photos for CSAM?

[1] https://web.archive.org/web/20190701000647/https://www.apple...


Craig Federighi said they don’t in a recent interview.


It’s almost like you didn’t read the paper or my whole comment.


I've actually read your comment, along with the linked PDF, in full multiple times in an effort to make sure I wasn't missing a point you were trying to make. Before I continue, I want to make it clear that I fully understand that adding non-CSAM content into the existing NCMEC/CSAM/IWF environment would more than likely raise a lot of red flags. You and I have an understanding there.

As it happens, I still think you're missing the point that myself and others are trying to make. Perhaps the following is a better way of phrasing my inquiry:

What guarantee do we have that Apple will, always and forever, only use the NCMEC/CSAM/IWF system to scan images on phones? By that I mean, isn't it perfectly plausible that Apple can leave the NCMEC/CSAM/IWF system in place, as it is described in your post and the linked PDF, and at the same time partner with [random body X/Y/Z], who has their own database of [whatever] that they scan separately from the NCMEC/CSAM/IWF system? These two different scans wouldn't ever need to communicate with each other, and could run at the same time.


Well yeah it's plausible they would just turn the contents of everyone's entire iCloud backups over to the FBI, which they can totally do right now at any time since they already have the keys!

So what? What you're describing is not what this system does. You could argue against literally any possibility anyone could dream up, because they're the platform vendor and they can make any change they want. If that's a problem when you're evaluating your threat model then this new system is the least of your worries.

And as far as I can see this system is the least intrusive version of what any of the other cloud vendors are doing. Don't want it? Turn off iCloud Photos and sync to something else like your own Nextcloud instance.


>What you're describing is not what this system does.

I said that the hypothetical system could be set up, and function, in the same way as the NCMEC/CSAM/IWF system, just with different entities behind it. Despite chastising me for what you thought was a failure to comprehend your post, this is the third time you seem to have been unable to grasp that concept.

That's part of what people are concerned about - an identical system running in tandem to this, but with less savory characters behind it. Given how many comments on HN have addressed that this week, including all of mine right here, I am at a loss for how to make that any clearer to you.


Hypothetical system. Yes, a lot of comments have addressed something that isn't happening.


>Hypothetical system.

Right!

>Yes, a lot of comments have addressed something that isn't happening.

No, and the confusion lies in the tense, "isn't" vs. "won't". A lot of comments have addressed a lack of trust that such a scenario won't happen.


Sure but you are ignoring that such a scenario would need a different system to be deployed.

Yes they could do that, but this doesn’t help them.


>Sure but you are ignoring that such a scenario would need a different system to be deployed.

Right! I clearly stated multiple times that this would be a second system working in tandem, of course it would need to be deployed on its own. I am entirely unsure how you think I am ignoring that when it's the premise of my argument and of the concern of others.


If you are talking about a second system that needs to be deployed on its own in the future, then you are not identifying a problem with this system.


Forgive me - I typically try to adhere to the ideal that posts on HN should be constructive and refrain from snark and sarcasm, but this particular comment chain has become so circular and repetitive that the only response I can think of at this point is, “No shit!”.

Edit: Removing some paragraphs, because if that's not perfectly clear to you by reading this entire comment thread, even when I basically said as much in the post you just responded to, then I really don't know what to tell you.


Right, but the point is that you are talking about this fictional system as though it is causally connected to what Apple is doing, when it really isn’t.

You being able to imagine a fictional evil parallel system isn’t a valid criticism of what Apple is actually doing.


>You being able to imagine a fictional evil parallel system isn’t a valid criticism of what Apple is actually doing.

Right! And where did I criticize them? All I did was ask, “What guarantee do we have that Apple will, always and forever, only scan for data that comes from NCMEC/CSAM/IWF?” and then gave an example of what such a scenario might look like when another user responded to me.

I expressed a concern (I even used that word, and not "critique" in my last post that you responded to) that this type of scanning could be expanded in the future, but I have not criticized them broadly nor have I criticized this specific program. How has that not been clear?

If you think I’ve done otherwise, I’d appreciate it if you could quote what you interpret as a critique, so that I may work to avoid such confusion in future conversations similar to this.


The implication of expressing a concern is that it is related to what is being done.

Otherwise why post it here? I doubt you’d claim that your ‘concern’ is irrelevant to this topic.

It’s an innuendo intended to imply a likelihood of future wrongdoing.

It works like this: ‘What guarantee do we have that he won’t hit his wife after we leave?’

Obviously this is a way of suggesting that the man will hit his wife, phrased as a question to facilitate the same kind of denial that you are using here.


>The implication of expressing a concern is that it is related to what is being done.

… duh? I never said it wasn’t related, I said that it’s not a criticism of the current program.

>Otherwise why post it here? I doubt you’d claim that your ‘concern’ is irrelevant to this topic.

Correct!

>It’s an innuendo intended to imply a likelihood of future wrongdoing.

No, it’s intended to state clearly - not imply - a concern around whether or not Apple will be pressured by other bodies to begin scanning on-device content for hashes pertaining to subjects beyond child abuse.

The question is simple - what guarantee do we have that Apple won’t expand what kind of content they scan for? What is so difficult to grasp about that, and why do you, it seems, feel that that is an entirely irrelevant question that isn’t worth discussing, answering or being concerned about?

>It works like this: ‘What guarantee do we have that he won’t hit his wife after we leave?’

>Obviously this is a way of suggesting that the man will hit his wife, phrased as a question to facilitate the same kind of denial that you are using here.

You leave and hope that he doesn’t, but would you never follow up with your friends to see how they’re doing after a domestic altercation, to make sure it hasn’t happened again and that they’re safe from harm? What is so wrong about being aware of possibilities and remaining vigilant?


> You leave and hope that he doesn’t

You’re both proving the point.

Nobody said the man had done anything wrong, and yet you are assuming he did simply because someone asked a question and recommending that people act on this false impression.

That is exactly the goal of innuendo.


Your scenario around leaving and whether or not the man will hit his wife was flawed from the start. Given the context of this broader conversation - Apple implementing a new photo-scanning system - and the way you left the beginning of your scenario open to interpretation, one assumes that you are leaving after he's already hit his wife (in this case, Apple implementing the CSAM scanning) and hoping he doesn't do it again (Apple scanning for different content in the future). You would do well to set up your hypothetical scenario more carefully next time, perhaps clarifying that you haven't actually witnessed any violence firsthand prior to your departure.

But again, since you continue to avoid what I'm asking:

>The question is simple - what guarantee do we have that Apple won’t expand what kind of content they scan for? What is so difficult to grasp about that, and why do you, it seems, feel that that is an entirely irrelevant question that isn’t worth discussing, answering or being concerned about?

and perhaps more curiously

>What is so wrong about being aware of possibilities and remaining vigilant?


> Your scenario around leaving and whether or not the man will hit his wife was flawed from the start

My scenario never said anything about ‘leaving’. You continue to confirm the point. You read that into the scenario.

That’s the point. You completely failed to understand the scenario as written and made up your own story to suit your prejudices. The only violence was in your imagination.

What guarantee do we have that you aren’t doing the same thing with Apple?


>My scenario never said anything about ‘leaving’.

LOL, surely you jest:

>It works like this: ‘What guarantee do we have that he won’t hit his wife after we leave?’

At this point, after continuously misinterpreting my posts (almost intentionally, as though you're arguing in bad faith), you've effectively moved on to gaslighting, so I'm just going to continue to reiterate the same questions I have been posting, which you continue to avoid answering. Carry on as you have been if you want, but this is about all you'll get from me moving forward until you answer them directly.

- Apple has decided to scan for a specific kind of content on their phones, what guarantee do we have that they won't scan for other content in the future?

- For someone who commented, "stand up for civil liberties now", why are you so opposed to people being aware of possibilities and remaining vigilant?


QED


> So you'd have to pressure NCMEC and another org under a different government

To be fair, if the other organisation is IWF[1] under the UK government, I don't think there'd be much pressure needed to get them to comply - just offer to bung them and their mates a few million in contracts and you'd be golden.

It's a sensible plan, it just might not be as strong as it seems.

[1] https://www.iwf.org.uk


Looks like you forgot to take into account the entire rest of that sentence you quoted.


No, just pointing out that the first part isn't as strong as Apple seem to think it might be. The rest of it might well catch this, yes, (and that's good!) but it doesn't change the fact that "only considering images from two or more providers" can be subverted, possibly easily, given the pressure USGOV can bring to bear on their partners.



