One Bad Apple (hackerfactor.com)
1484 points by cwmartin on Aug 8, 2021 | 524 comments



NCMEC has essentially shown that they have zero regard for privacy and called all privacy activists "screeching voices of the minority". At the same time, they're at the center of a highly opaque, entrenched (often legally mandated) censorship infrastructure that can and will get accounts shut down irrecoverably, and possibly people's homes raided, on questionable data:

In one of the previous discussions, I've seen claims about the NCMEC database containing a lot of harmless pictures misclassified as CSAM. This post confirms this again (ctrl-f "macaque").

It also seems like the PhotoDNA hash algorithm is problematic (to the point where it may be possible to trigger false matches).
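
To make the failure mode concrete: PhotoDNA itself is proprietary, but any perceptual hash works by fuzzy matching, declaring a hit whenever two fingerprints fall within some distance threshold. Below is a toy average-hash sketch (an illustrative stand-in I'm assuming here, not PhotoDNA) showing where that slack lives: it is what lets a hash survive re-encoding and edits, and it is exactly the room an attacker has to craft an image whose fingerprint lands inside the threshold.

    # Toy perceptual-hash matching (NOT PhotoDNA, which is proprietary).
    import numpy as np

    def average_hash(img, hash_size=8):
        """Downscale to hash_size x hash_size block means, threshold at the mean."""
        bh, bw = img.shape[0] // hash_size, img.shape[1] // hash_size
        blocks = img[:bh * hash_size, :bw * hash_size].reshape(hash_size, bh, hash_size, bw)
        small = blocks.mean(axis=(1, 3))
        return (small > small.mean()).flatten()          # 64-bit fingerprint

    def hamming(a, b):
        return int(np.count_nonzero(a != b))

    # Synthetic stand-ins: a "known" image and a heavily edited copy of it.
    rng = np.random.default_rng(0)
    original = np.linspace(0, 255, 256 * 256).reshape(256, 256)
    edited = np.clip(original + rng.normal(0, 40, original.shape) + 15, 0, 255)

    THRESHOLD = 10   # bits (out of 64) allowed to differ and still count as a match
    d = hamming(average_hash(original), average_hash(edited))
    print(f"distance={d} bits -> match={d <= THRESHOLD}")

Everything hangs on that threshold: tighten it and trivially re-encoded copies slip through, loosen it and unrelated or adversarially crafted images start to collide.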

Now NCMEC seems to be pushing for the development of a technology that would implant an informant in every single one of our devices (mandating the inclusion of this technology is the logical next step that seems inevitable if Apple launches this).

I'm surprised, and honestly disappointed, that the author seems to still play nice, instead of releasing the whitepaper. The NCMEC seems to have decided to position itself directly alongside other Enemies of the Internet, and while I can imagine that they're also doing a lot of important and good work, at this point, I don't think they're salvageable and would like to see them disbanded.

Really curious how this will play out. I expect attacks either sabotaging these scanning systems by flooding them with false positives, or exploiting them to get the accounts of your enemies shut down permanently by sending them a picture of a macaque.


> I'm surprised, and honestly disappointed, that the author seems to still play nice, instead of releasing the whitepaper.

I'm the author.

I've worked with different parts of NCMEC for years. (I built the initial FotoForensics service in a few days. Before I wrote the first line of code, I was in phone calls with NCMEC about my reporting requirements.) Over time, this relationship grew. Some years, I was in face-to-face development discussions; other times it has been remote communications. To me, there are different independent parts working inside NCMEC.

The CyberTipline and their internal case staff are absolutely incredible. They see the worst of people in the media and reports that they process. They deal with victims and families. And they remain the kindest and most sincere people I've ever encountered. When possible, I will do anything needed to make their job easier.

The IT group has gone through different iterations, but they are always friendly and responsive. When I can help them, I help them.

When I interact with their legal staff, they are very polite. But I rarely interact with them directly. On occasion, they have also given me some very bad advice. (It might be good for them. But, as my own attorney pointed out, it is generally over-reaching in the requested scope.)

The upper management that I have interacted with are a pain in the ass. If it wasn't for the CyberTipline, related investigators, and the IT staff, I would have walked away (or minimized my interactions) long ago.

Why haven't I made my whitepaper about PhotoDNA public? In my view, who would it help? It would help bad guys avoid detection and it would help malcontents manufacture false positives. The paper won't help NCMEC, ICACs, or related law enforcement. It won't help victims.

About this time, someone usually mocks "it's always about the kids, think about the kids." To those critics: They have not seen the scope of this problem or the long term impact. There is nearly a 1-to-1 relationship between people who deal in CP and people who abuse children. And they rarely victimize just one child. Nearly 1 in 10 children in the US will be sexually abused before the age of 18.


> About this time, someone usually mocks "it's always about the kids, think about the kids." To those critics: They have not seen the scope of this problem or the long term impact.

The problem is people use this perfectly legitimate problem to justify anything. They think it's okay to surveil the entire world because children are suffering. There are no limits they won't exceed, no lines they won't cross in the name of protecting children. If you take issue, you're a "screeching minority" that's in their way and should be silenced.

It's extremely tiresome seeing "children, terrorists, drug dealers" mentioned every single time the government wants to erode some fundamental human right. They are the bogeymen of the 21st century. Children in particular are the perfect political weapon to let you get away with anything. Anyone questions you, just destroy their reputation by calling them a pedophile.


This quote sums it up perfectly:

“Of all tyrannies, a tyranny sincerely exercised for the good of its victims may be the most oppressive. It would be better to live under robber barons than under omnipotent moral busybodies.

The robber baron's cruelty may sometimes sleep, his cupidity may at some point be satiated; but those who torment us for our own good will torment us without end for they do so with the approval of their own conscience.

They may be more likely to go to Heaven yet at the same time likelier to make a Hell of earth. This very kindness stings with intolerable insult. To be "cured" against one's will and cured of states which we may not regard as disease is to be put on a level of those who have not yet reached the age of reason or those who never will; to be classed with infants, imbeciles, and domestic animals.” C.S. Lewis

There are so many problems with this feature:

- it can easily be used to identify other types of material a government considers dangerous, such as warnings about government activity or posters for organized protests

- some malicious actor can send you this kind of content in order to get you in trouble

- false positives

- moral busybodying (your employer not agreeing with you going drinking on a Sunday night)

Honestly, it feels like we're about to enter a new age of surveillance. Client-side surveillance.


The consequences seem very simple - once this system is in place, it's only a matter of time until a state actor, be it China or US or Russia or pretty much anywhere goes to Apple and says "hey this hash matching algorithm you have there? If you want to keep operating in our country, you need to match <those> hashes too, no, we won't tell you what they are and what they represent, but you have to report to our state agency who had those present on the device".

Once the technology exists it will be abused.


Completely agree. I have absolutely no doubt this technology will be abused. I bet it will be used to silence dissent, opposition and wrongthink more often than to protect children.


Why doesn't anyone ever question their intent to protect children? There is no such intent. When they do protect children, these are the sorts of observations of what happens when the state has 100% control over children:

https://www.kansascity.com/news/special-reports/article23820...

Why does the state, when it treats children like this, get to proceed with "more" child protection? Are we totally mad? Any effort to protect children is always a de facto effort to deliver more children into this system, and the state system is much worse than being abused in society. (Never mind that someone found there's actually more abuse in foster care than with abusive parents! So even if you assume social services is always right ... they never protect kids against social services itself. You can therefore reasonably assume that a kid who is getting abused will be forced into a worse situation by state "help".)

I mean if you want to protect children, obviously the first step is to fix the child protection system that everybody knows is badly broken (you constantly hear stories about schools sabotaging investigations into abusive parents in order to protect the child from social services, covering for the child, lying about attendance or even wounds, ..., because they know social services will be much worse for the child).

There are states where the corrections department provides better care (and definitely better educational instruction) for children than social services does. And yes: that is most definitely NOT because the corrections department provides quality instruction. It's just MUCH worse with social services.

They cover themselves by showing the worst possible situations in society "demonstrating the need to intervene". And you won't hear them talk about how they treat children ... because frankly everyone knows.

Reality is that research keeps finding the same thing: a child that gets abused at home ... is treated better (gets a better future, better schools, better treatment, yes really, more to read, ...) than a child "protected" from abuse by the state:

https://www.aeaweb.org/articles?id=10.1257/aer.97.5.1583

(and that's ignoring that reasons for placement are almost never that the child is unsafe. The number 1 reason is non-cooperation with mental health and/or social care by either one of the parents or the child themselves. There is not even an allegation that the child is unsafe and needs protection. Proof is never provided, because for child protection there is no required standard of proof in law)

And we're to believe that people who refuse to fix child services ... want extra power "to protect children"? How is this reasonable at all? They have proven they have no interest whatsoever in protecting children, as the cost cutting proves time and time again.


To add on this: Nothing else is to be expected from a country that (under protection of its legal system!) tortures mentally ill and disabled children.

https://en.wikipedia.org/wiki/Judge_Rotenberg_Educational_Ce...

https://www.independent.co.uk/news/world/americas/massachuse...


I am supportive of Apple’s new child protection policies. I spent two years helping to build some of the tools used for scanning and reporting content (EDIT: not at Apple). There are a few sentiments in this thread that I'd like to address.

> Yeah, and none of these assholes (pardon the language) is willing to spend a penny to provide for millions of children suffering from poverty: free education, food, health, parental support. Nothing, zero, zilch. And these millions are suffering right now, this second, with life-long physical and psychological traumas that will perpetuate this poverty spiral forever.

Many people in child safety and I, personally, strongly support policies that enhance the lives of children, including improved education, food, access to health services, and support for parents. To your point, though, it's also true that many political leaders who vocally address the issue of child sexual abuse on the internet also happen to be opposed to the policies I would support. Like most political alliances, it is an uncomfortable one.

> Anyone questions you, just destroy their reputation by calling them a pedophile.

I see this sentiment a lot. I have worked with people across law enforcement, survivors, survivor advocates, NGOs, social workers, private companies, etc. and I don't know anyone who responds this way to people raising privacy concerns. At worst, they might, rightly or wrongly, consider you alarmist or uninformed or privileged (in that you don't have images of your abuse being actively traded on the internet). But a pedophile? I just can't imagine anyone I've worked with in this space accusing you or even thinking of you as a potential pedophile just because you're opposed to content scanning or want E2EE enabled, etc. I suppose maybe someone far, far removed from actual front line workers would say something so ridiculous.

---

Separately, I want to suggest that there are paths forward here that include risk controls to reduce the chance that this technology gets extended beyond its initial purpose. Maybe NCMEC could provide verifiable digests of the lists used on your device, so you could verify that additional things haven't been added. Or there could be public, concrete, transparent criteria for how and when a lead is referred for law enforcement action. By designing the system so that matching occurs on device against a list that can be made available to the user, Apple has made content scanning far more privacy-preserving and also created avenues by which it could be further regulated and made transparent. I'm very excited about it and, honestly, I think even the staunchest privacy advocates should be cautiously optimistic because it is, in my opinion, a step in the direction of less visibility into user data.
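
To sketch what such a verifiable digest could look like (a hypothetical scheme I'm assuming for illustration, not anything Apple or NCMEC have announced): the list provider publishes a single digest over the canonicalized list, and the device refuses to use any list that doesn't reproduce it, so entries can't be added silently.

    import hashlib

    def list_digest(entries):
        """Deterministic digest over a hash list: sorted, length-prefixed entries."""
        h = hashlib.sha256()
        for e in sorted(entries):
            h.update(len(e).to_bytes(4, "big"))   # length prefix avoids ambiguity
            h.update(e)
        return h.hexdigest()

    # Hypothetical entries; sorting makes the digest independent of list order.
    published = list_digest([b"entry-aaaa", b"entry-bbbb"])
    on_device = list_digest([b"entry-bbbb", b"entry-aaaa"])
    assert on_device == published, "on-device list differs from the published digest"

Auditing would still require someone outside the provider to vouch for what the entries actually are, but at least additions would leave a visible trace.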

I think that privacy advocates are arguing in good faith for protecting society from a surveillance state. I think that advocates of scanning are arguing in good faith for protecting children. I also think that both sides are using language (screeching on the NCMEC side, comparisons to hostile regimes on the other) that makes it very hard to move forward. This isn't pedophile privacy advocates vs. surveillance state NCMEC. Neither of those groups even exists. It's concerned citizens wanting freedom for all people vs. concerned citizens wanting freedom for all people.


HN is the only place I feel safe enough to post these pro-privacy opinions. In many other communities, I've seen people being accused of serious crimes for daring to take issue with whatever surveillance the authorities are trying to implement. I've seen people openly speculating about the CSAM stashes of encryption users. What else could they possibly be hiding, right?

I don't think I trust actual authorities either.


Given NCMEC's continuing attacks against consenting adult sex workers, that they never seem to regard adult sexual identities as valid under any circumstances, that they repeatedly retweet groups seeking to abolish adult sexual expression, their actions during the Backpage affair, their lying to Congress about the level of abuse that occurs, and the recent statements by their staff, I find it kind of baffling that anyone would defend their leadership at this point.

NCMEC are willing to completely compromise their mission in order to chase other moral crusades that are unrelated to children, and seem to never care about the consequences of any of the things they call for.

I don't trust them, and they've earned that.


> Many people in child safety and I, personally, strongly support policies that enhance the lives of children, including improved education, food, access to health services, and support for parents. To your point, though, it's also true that ...

You work with people who, almost always against the will of children and parents, kidnap children (sorry, but search YouTube for footage of their actions: kidnap is the only correct term) ... and then proceed to NOT care for those children and to destroy their lives. Obviously the only correct conclusion is that this is entirely immoral until the care system is fixed, because they are not improving the lives of children, and their complete lack of caring about this tells you what their goals aren't.

Watch "Short Term 12" for what facilities these children are provided with. Stop pretending that helping these people acquire children (because that's exactly what you're doing) helps a single child. It doesn't. Terrible, abusive, violent, parents take better care of children than these places do. The moral reaction should be to do what most of society, thankfully, does: sabotage the efforts of social workers.

And if you're unwilling to accept this, as soon as this whole covid thing dies down a bit, find the nearest children's facility and visit. Make sure to talk to the children. You will find you're making a big mistake.

Whatever you tell yourself, please don't believe that you're helping children. You're not. You're helping to destroy their lives.

> I see this sentiment a lot. I have worked with people across law enforcement, survivors, survivor advocates, NGOs, social workers, private companies, etc. and I don't know anyone who responds this way to people raising privacy concerns.

I'm adding this response to the other reply to your comment ... which makes 2 people who disagree with your assessment: you are likely to be threatened in many places. I feel like one might even say it's likely we're 2 people who have been threatened.

I would like to add that, whether as a child or an adult, child protection authorities, the people you help, will threaten you, and I've never known a single exception if they think (correctly or otherwise) that you're hiding something from them. That's if you're at their mercy, which is why every child in child services makes sure to commit a despicable/violent/minor criminal act or two every month and keep it secret: if you don't have something to confess that won't get you sent to a "secure facility", you will be horribly punished at some unexpected random time. That's how these people work.

And over time you learn that, for a kid in the system, a whole host of places are traps: the hospital, child services itself, the police, homeless shelters, school. Every person in one of those places will be shown your "history" as soon as you mention your name, and if they report you, very bad things will happen. Some children explicitly make bad things happen (e.g. commit violent theft) because they'll get sent to "juvie" and the stress of suddenly getting punished out of nowhere finally disappears. Some also believe you get better schooling in juvie. So you hide, even if you're hurt or sick. This is why some of those kids respond angrily or even violently if someone suggests they should see a doctor for whatever reason, sometimes literally to keep the option of suicide open (which is difficult in "secure" care). And why does this happen? NOT to protect children: to protect themselves, "their reputation", and their peace of mind against these children.

You will, of course, never hear your new friends mention that this needs fixing. They are in fact fighting with this side of the system over money, so that EVEN LESS money goes to the kids the system takes care of. That, too, you will never hear from them. They are doing the opposite of what someone who means well with disadvantaged children would do.

I have zero doubts they will use your work, not to convict offenders, because that's hard, years of very difficult work, but to bring more kids into a system that destroys children's lives, and MORE so than the worst of parents do. Because throwing children into this system (and destroying their lives) is very easy. I'm sure occasionally they will convict an offender (which, incidentally, doesn't help the kids) or get a good outcome for a child. It happens. It is absolutely not common.

And don't worry, they will put great emphasis on this statement: "I've never seen anyone in the system who didn't mean well". Most are idiots, by the way; if you keep digging at their motivations, they will reveal themselves very clearly quite quickly.

These "people in child safety" DO NOT mean well with children. They merely hate a portion of society (which includes those victimized children) and want to see them punished HARD. If you are a moral person, volunteer at a nearby children's shelter and show understanding when the inevitable happens and you get taken advantage of (do not worry, you will learn). DO NOT HELP THESE PEOPLE GET MORE CHILDREN.


Meta-question: Is there a way to get rid of such "hot air" comments from throwaway accounts?

This comment does not add any argument in favor of its point, despite its length.


And I think if you're working on these systems, then it's all too easy to only think about the happy scenario.

But time has shown that an initial good concept will transform into something worse.

We have a very recent example with Corona contact-tracing apps, which law enforcement in multiple democratic countries is now using for other purposes.

So no, we should not allow this client-side scanning to go through, it will not end well.


This is what will happen, it's obvious to anyone not emotionally invested in the propaganda theater driving all social interaction.


This is such a stupid fallacy and it comes up every time something like this is discussed. You don't know if anything is or will be abused; you expect it because you expect your elected government to do so. The problem here is with you or your electors not with the technology itself. You want to fix the symptoms but not the problem. As usual.


How so? The UK's snooper's charter that compels ISPs to save your entire browsing history for a year was only meant to be used to catch pedophiles and terrorists, and now 17 different agencies have been given completely warrantless access to the database, including the UK's Food Standards Agency(??!?!?!?). People have already been fired for abusing access, so that definitely happens too.

>>The problem here is with you or your electors not with the technology itself

No offense, but this is such a shitty answer, and it's always made by apologists for these incredibly invasive technologies. Like, please explain to me, in simple terms, how can I, an immigrant who doesn't even have the ability to vote, vote to make an American corporation do or not do something? I'm super curious.


It's the exact same shit I was talking about: instead of fixing your governing body, you want to fix it with technology. All the crypto apologists are the same way: if the government YOU are electing does something stupid, you want to fix the symptom and not the governing body. You are just moving goalposts instead of working on the real problem.


>government YOU are electing does something stupid

Is America the entire world to you? What should a Chinese citizen do? What should a Saudi citizen do? What should a Russian citizen do? Even if you ignore the fact that the chance of fascism in America is not zero, why should Apple make it easier for totalitarian regimes to spy on their citizens? Or do you expect a Saudi person to "just move to America"?


I am talking to people on this website, which is banned in 3 of the 4 countries you are talking about. Stop moving your shitty goalpost; we are talking about the US, Europe and other democracies. You know what would help people in regimes? Governing bodies that stand up for them in other countries... guess who could change that.


I'm not moving the goalposts. I, one, have empathy for people other than myself and two, I am not deluded enough to think that fascism will never return to the West. If you don't care about citizens in other countries, that's on you.

>You know what would help people in regimes? Governing bodies that stand up for them in other countries... guess who could change that.

What would also help is if Apple didn't build tools for those regimes to suppress opposing political bodies.


>>if the government YOU are electing

Except like I clearly said above, I'm an immigrant without the right to vote, so I'm not electing shit. Again, how exactly am I supposed to vote my way out of this?


[flagged]


>> you can be politically active in other ways than direct votes.

Which is what we're doing here, by participating in protests, complaining to agencies and governing bodies as well as Apple itself against this technology. Or is this not up to your definition of "politically active"?

>>You can leave the country

I'm envious of your position in life where this is the first thing that springs to your mind, well done you.

>> Don't wiggle your way out of your extremist position that you can't do anything except building technology which would be totally obsolete if you fixed your political problem.

Uhm....are you sure you have the right argument there? Or maybe replying to the wrong person?


> you can be politically active in other ways than direct votes

Absolutely. That's why we're posting our thoughts here and attempting to convince others.


Civil disobedience. If we think a law is unjust, it is our duty to disobey and undermine it. Technology is a tool that allows us to do exactly that.


> You don't know if anything is or will be abused

Actually I do. Governments abuse their powers all the time. They have done it before, are doing it right now and will continue to do it in the future. This is not fallacy, it is fact.

Here's an example:

https://en.wikipedia.org/wiki/LOVEINT

The only solution is to take their power away. No way to abuse power they don't have. We must make it technologically impossible for them to spy on us.


That's the exact same fallacy: you are proposing to fix a symptom and not the problem. You want to rein in the government YOU ARE ELECTING TO GOVERN you with technology instead of real political work. Either vote differently or get into politics and fix it yourself.

Your wikipedia link doesn't show anything regarding abuse of the governing body; ALL of the examples are from private persons.


>>Your wikipedia link doesn't show anything regarding abuse of the governing body; ALL of the examples are from private persons.

Have you even like....read the page they linked?

"Siobhan Gorman (2013-08-23). "NSA Officers Spy on Love Interests". Washington Wire. The Wallstreet Journal. "

Are NSA Officers "private persons" now? They are government employees, they were abusing the power they were given while employed by the government. It doesn't matter in the slightest if they were abusing the power for private or state gain, it's a state agency and its employees abusing the access, that implicitly makes it the state abusing the power they have.


Wow if you can't distinguish between rogue agents and an institutional abuse of power there is nothing left to argue.


If you really think that an NSA employee abusing their access is just a "private person" then yeah, I guess there is nothing left to argue. I guess it must be nice sleeping well at night not worrying about this stuff, right?


I didn't elect anyone. There is not a single politician in power right now that represents me. I'm not even american to begin with so it's not like I have any influence over american administration.

In any case, there's no reason why politics and democracy ought to be the only way to bring about change. We have a far more powerful tool: technology.

Governments make their laws. People make technology that neutralizes their laws. They make new laws. People make new technology. And so on. With every iteration, the government must become ever more tyrannical in order to maintain the same level of control over the population it previously enjoyed. If this loop continues long enough, we'll either end up with an uncontrollable population or with an oppressive totalitarian state. Hopefully limits will be found along the way.

> Your wikipedia link doesn't show anything regarding abuse of the governing body; ALL of the examples are from private persons.

A government employee abused his access to the USA's warrantless surveillance apparatus in order to spy on his loved ones. If this isn't abuse of power, I don't know what is.

Honestly, it's just human nature. No person should ever be given such powers to begin with. I wouldn't trust myself with such power. It should be impossible to spy on everyone.


Any modern state implements separation of powers, trias politica in most cases. Your argument ignores that. You also haven't made clear what fallacy you mean.

Want to fix child abuse? Fund teachers and child care. Apple cannot help those kids and I don't mean that as an indignation towards the company.


The thing is tools like this will only make it harder to fix the problem.


It's not just idle speculation- history speaks for itself.


Yeah, and none of these assholes (pardon the language) is willing to spend a penny to provide for millions of children suffering from poverty: free education, food, health, parental support. Nothing, zero, zilch. And these millions are suffering right now, this second, with life-long physical and psychological traumas that will perpetuate this poverty spiral forever.


Thank you! This is a really important observation about anything controversial that is "child"-related. Be it abortion or abuse, you can tell what people's real priorities are by looking at how much they are willing to spend on the welfare of kids outside their immediate point of concern.

All of these types of problems are likely better solved in some other ways. Why not have general mental health coaching in schools freely available to increase the chance of early detection of abusive behaviors? Why not improve the financial situation of parents so that the child is not perceived to be another burden in a brutal life? Why not offer low-friction free mental health coaching to all individuals?

The way the world is organized now is ABSURD to the highest degrees! We pretend to care about a thing but only look at issues narrowly without considering the big picture.

In the end, politics and the way power is distributed are broken. There are too many entrenched interests looking out for their narrow self-interest. There doesn't seem to be enough vision to unite enough power for substantial change. Even the SDGs don't seem to inspire the developed nations as something to mobilize for. We need the fervor of war efforts for positive change.

Doing the right thing and doing things right needs to become the very essence of what we as individuals stand for. Not consuming goods or furthering the ideological agenda of a select few. Each and every one of us should look into themselves and look into the world and work to build a monument of this collective existence. We were here. Let us explain to all following us how we tried our best to do the right things in the right way and where we fell short of our ambitions, and hope that others might improve upon our work.

Sorry, I think this turned into a rant but it comes from the heart.


> Be it abortion or abuse, you can tell what people's real priorities are.

There is also (almost always) an unhealthy matching support for an enormous military with nuclear weapons whose sole purpose - when push comes to shove - is to indiscriminately murder the same babies once they have grown up.

It's a symptom of a deeper mental pathology.


The thing with war, and especially with nuclear weapons, is that it does not wait for the babies to grow up. It might have gotten more precise at avoiding direct hits on babies, as they make for bad PR, but that still happens a lot, and mothers with their babies on the run in a burning city are not much better off either.


That would require demurrage, which is never going to happen. I'll be honest: it's not demurrage itself that we need, it's just much easier to implement. What we need is a linkage between the lifetime of debt and money. Defaulting on debt should destroy money. I.e. money is only valid as long as the contract (the debt) that created it is valid. Given a sufficiently advanced electronic currency, you could track the expiry of every single dollar. The lender would then decide whether he wants to extend the expiry of the dollar and thereby extend the due date of the debt.

The fundamental problem with short term thinking (positive time preference) is that people want to "flee" into fictional wealth. Rather than build a long lasting monument (companies like Blue Origin count as monuments) they prefer to increase a number on a balance sheet in the form of a bank account. What makes fictional wealth so attractive? As mentioned above, money is just the other side of a debt contract. By withholding your money you extend the debt. In other words, a lot of people promise to work for you. Having idle servants is the entire reason behind accumulating money. If you truly wanted to achieve full employment, then all money earned must be spent eventually; to be more specific, all debts must be fulfilled. It would mean that your wealth cannot exist in the form of forced coercion of other people. Employees would still come and work in your companies but they would leave each day with a fair share of the wealth they helped create. All your wealth would have to be long lasting, and the environment, which is the longest lasting form of prosperity, would be considered part of your wealth.

Ancient Egypt had something closer to a "grain standard", meaning that farmers deposited grain in a storehouse and received something akin to a coupon which they could trade in for grain. Their money represented a claim on a spoiling good! The horror! The complete antithesis of the gold standard. Just imagine what a backwards society that must have been! Of course the truth is everyone remembers ancient Egypt as an advanced civilization for its time.


No rant at all, you make very good points. Thanks for sharing


Thank you for this argument, I will put it in my canon of responses to people who are pro spyware.


this so much this...


OK, I'll forfeit 100% of my rights to eradicate CP from my country. Then let's say these measures are effective and CP is removed in all but name from the country. Now what do we do with the thousands of people and billions of dollars being funneled into the effort? Well, history shows us the answer. None of those people or authorities are suddenly going to say "well, that's a wrap, lay us off and take the budget to a more meaningful use". Nope, instead they double down and start looking for other "infringements" to justify their funding and their jobs, ever increasing the pettiness of the trespass until a powerful sociopath realizes they can repurpose this agency and its resources for their own power gain, to remove opponents and silence opposition.

Perhaps even worse, they move on to some ML-based detection that uploads suspected CP that is not in the database. Most people have no idea how badly a false accusation can destroy their lives. Even if you are 100% innocent, there is already a formal government case against you. At the very least you will be compelled to hire a very expensive attorney to handle the matter for you. After all, you can't risk a budget attorney with a huge caseload handling this potential life-sentence case. Not to mention that during this time you will possibly be locked out of accounts and finances, put on leave from employment, barred from being hired, banned from assistance programs. Perhaps even named in the media as a sex offender.


> someone usually mocks "it's always about the kids, think about the kids." To those critics: They have not seen the scope of this problem or the long term impact.

What you are saying here kind of sidetracks the actual problem those critics have though, doesn't it? The problem is not the acknowledgement of how bad child abuse is. The problem is whether we can trust the people who claim that everything they do, they do it for the children. The problem is damaged trust.

And I think your last paragraph illustrates why child abuse is such an effective excuse: if someone criticizes your plan, just deflect and go into how bad child abuse really is.

I'm not accusing you of anything here by the way, I like the article and your insight. I just see a huge danger in the critics and those who actually want to help with this problem being pitted against each other by people who see the topic of child abuse as nothing but a convenient carrier for their goals, the ones that have actually heavily damaged a lot of trust.


What are the main instances where these claims were made fraudulently?


One example I quote from EFF's post Apple's Plan to "Think Different" About Encryption Opens a Backdoor to Your Private Life [1] on the topic:

We’ve already seen this mission creep in action. One of the technologies originally built to scan and hash child sexual abuse imagery has been repurposed to create a database of “terrorist” content [2] that companies can contribute to and access for the purpose of banning such content. The database, managed by the Global Internet Forum to Counter Terrorism (GIFCT) [3], is troublingly without external oversight, despite calls from civil society [4]. While it’s therefore impossible to know whether the database has overreached, we do know that platforms regularly flag critical content [5] as “terrorism,” including documentation of violence and repression, counterspeech, art, and satire.

[1]: https://www.eff.org/deeplinks/2021/08/apples-plan-think-diff...

[2]: https://www.eff.org/deeplinks/2020/08/one-database-rule-them...

[3]: https://gifct.org/

[4]: https://cdt.org/insights/human-rights-ngos-in-coalition-lett...

[5]: https://www.eff.org/wp/caught-net-impact-extremist-speech-re...


Thanks, I think I see what you mean. Essentially another organisation has used file hashes to scan for extremist material, by their definition of extreme.

I agree that has potential for abuse but it doesn't seem to explain what the actual link is to NCMEC. It just says "One of the technologies originally built to scan and hash child sexual abuse imagery has been repurposed..." but doesn't name this "technology". Is it talking about PhotoDNA?


I'm not the one you replied to, but I have some relevant thoughts on the process. I value my fundamental right to privacy. I understand that technology can be used to stop bad things from happening if we accept a little less privacy.

I am okay with giving up some privacy under certain conditions, and that is directly related to democracy. In essence, for a database and tech of any kind that systematically violates privacy, we need the power distributed and "fair" trials, i.e. legislative branch, executive branch and judiciary branch.

* Only the legislative branch, the politicians, should be able to enable violation of privacy, i.e. by law. Then and only then can companies be allowed to do it.

* The executive branch has oversight of the process and tech; they would in essence be responsible for saying go/no-go to a specific implementation. They would also be responsible for performing a yearly review. All according to the law. This also includes the police.

* The judiciary branch, the justice system, is responsible for looking at "positive" hits and granting the executive branch case-by-case powers to use the information.

If we miss any of those three, I am not okay with systematic violation of privacy.


Here in France, law enforcement & probably some intelligence agencies used to monitor P2P networks for child pornography & content terrorists like to share with one another.

Now we have the expensive and pretty much useless "HADOPI" paying a private company to do the same for copyright infringement.

Ironically enough, it seems the interior and defence ministries cried out when our lawmakers decided to expand it to copyright infringement at the request of copyright holders. They were afraid some geeks, either out of principle or simply to keep torrenting movies, would democratise already existing means to hide oneself online, and create new ones.

Today, everyone knows to look for a VPN or a seedbox. Some even accept payments in crypto or gift cards.

¯\_(ツ)_/¯


Thanks for pointing that out, I have updated my comment to provide the links from the quote, which were hyperlinks in the EFF post.

> it doesn't seem to explain what the actual link is to NCMEC

The problem I see is the focus that is put onto the CSAM database. I quote from Apples FAQ on the topic [1]:

Apple only learns about accounts that are storing collections of known CSAM images, and only the images that match to known CSAM

and

Our process is designed to prevent that from happening. CSAM detection for iCloud Photos is built so that the system only works with CSAM image hashes provided by NCMEC and other child safety organizations. This set of image hashes is based on images acquired and validated to be CSAM by child safety organizations.

Which is already dishonest in my opinion. Here are the main problems, and my reasons for finding the implementation highly problematic, derived from how I personally understand things:

- Nothing would actually prevent Apple from adding different database sources. The only thing the "it's only CSAM" part hinges on is Apple choosing to only use image hashes provided by NCMEC. It's not a system built "specifically for CSAM images provided by NCMEC". It's ultimately a system to scan for arbitrary image hashes, and Apple chooses to limit those to one specific source with the promise to keep the usage limited to that.

- The second large attack vector comes from outside: what if countries decide to fold additional uses into their official CSAM databases? Let's say Apple does actually mean it and they uphold their promise: there isn't anything Apple can do to guarantee that the scope stays limited to child abuse material, since they don't have control over the sources. I find it hard to believe that certain figures are not already rubbing their hands about this in a "just think about all the possibilities" kind of way.

In short: The limited scope only rests on two promises: That Apple won't expand it and that the source won't be merged with other topics (like terrorism) in the future.

The red flag for me here is that Apple acts as if there was some technological factor that ties this system only to CSAM material.
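
The point is easy to see if you strip away the cryptographic layers Apple describes (private set intersection, a match threshold): the core matcher is just membership testing against whatever set it is handed. A deliberately simplified sketch with made-up hash values, not Apple's implementation:

    # Nothing in the matching code knows *what* the hashes represent.
    def scan(device_hashes, blocklist):
        """Return every on-device hash that appears in the supplied blocklist."""
        return [h for h in device_hashes if h in blocklist]

    csam_list = {"hash-001", "hash-002"}     # hypothetical NCMEC-provided set
    other_list = {"hash-777"}                # could just as easily be protest imagery

    photos = ["hash-002", "hash-777", "hash-xyz"]
    print(scan(photos, csam_list))           # same code path,
    print(scan(photos, other_list))          # different list, different scope

Which set gets loaded is policy, not technology.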

Oh, and of course there is the fact that the fine people at Apple think (or at least agree) that the Electronic Frontier Foundation, the Center for Democracy and Technology, the Open Privacy Research Center, Johns Hopkins, Harvard's Cyberlaw Clinic and more are "the screeching voices of the minority". Doesn't quite inspire confidence.

[1]: https://www.apple.com/child-safety/pdf/Expanded_Protections_...



Snoopers Charter in the UK springs to mind.


The snoopers charter was always fairly broad though. It was never about child abuse; it covers collecting phone records etc., which are used in most police investigations. Unless you are referring to some particular aspect of it I'm not aware of?

I understand very broad stuff might be sold as "helping to fight child abuse" while avoiding talking about the full scope, but that isn't what Apple/NCMEC are doing here. They are making quite explicit claims about the scope.

That's why I'm wondering if there is precedent for claims about "we will ONLY use this for CSAM" which were later backtracked.


>About this time, someone usually mocks "it's always about the kids, think about the kids." To those critics: They have not seen the scope of this problem or the long term impact. There is nearly a 1-to-1 relationship between people who deal in CP and people who abuse children. And they rarely victimize just one child. Nearly 1 in 10 children in the US will be sexually abused before the age of 18.

I think we have seen "think of the kids" used as an excuse for so many things over the years that the pendulum has now swung so far that some of the tech community has begun to think we should do absolutely nothing about this problem. I have even seen people on HN in the last week who are so upset by the privacy implications of this that they start arguing that these images of abuse should be legal, since trying to crack down on them is used as a motive to invade people's privacy.

I don't know what the way forward is here, but we really shouldn't lose sight of the fact that there are real kids being hurt in all this. That is incredibly motivating for a lot of people. Too often the tech community's response is that the intangible concept of privacy is more important than the tangible issue of child abuse. That isn't going to be a winning argument among mainstream audiences. We need something better or it is only a matter of time until these types of systems are implemented everywhere.


> Too often the tech community's response is that the intangible concept of privacy is more important than the tangible issue of child abuse.

I think the actual idea, whose nuance is often lost (and let's be honest, some people aren't really aware of it and are just jumping on the bandwagon), is that privacy is a right, and erosion of rights is extremely important because it has been shown many times in the past to have far-reaching and poorly understood future effects.

It's unfortunate that some would choose to link their opposition to the idea with making the thing it's attempting to combat legal when it's something so abhorrent, but in general I think a push back on any erosion of our rights is a good and useful way for people to dedicate time and effort.

While we shouldn't lose sight that there are real children in danger and that this should help, we also shouldn't lose sight that there is plenty of precedent that this could eventually lead to danger and problems for a great many other people, children included.


>I think the actual idea, whose nuance is often lost (and let's be honest, some people aren't really aware of it and are just jumping on the bandwagon), is that privacy is a right, and erosion of rights is extremely important because it has been shown many times in the past to have far-reaching and poorly understood future effects.

The encryption everywhere mindset of a lot of the tech community is changing the nature of the right to privacy. 50 years ago all these images would have been physical objects. They could have been found by the person developing the photos or the person making copies. They could have been found with a warrant. Now they are all hidden behind E2EE and a secure enclave. To a certain extent the proliferation of technology means people can have an even stronger degree of privacy today than was practical in the past. It is only natural for people to wonder if that shift has changed where the line should be.


In the past these people would have been developing the pictures themselves, and handing them between each other either personally or in some hidden manner using public systems.

Not only has communication become easier, so has surveillance. The only difference now is that it's easier for people not to be aware when their very personal privacy is invaded, and that it can be done to the whole populace at once.


>Not only has communication become easier, so has surveillance.

It is a devil's advocate response, but can you explain why this is a bad thing? If communication is scaling and becoming easier, why shouldn't surveillance?


1. We judge these programs under the wrong assumption that it is run by the good guys. Any system that depends on the goodness of the people running it is dangerous, because it can be taken over by the bad guys.

2. I am a law-abiding, innocent citizen. Why should I have to face the same compromised privacy as a criminal? It used to be that only people under suspicion were surveilled, and a judge had to grant this based on preliminary evidence. Now everyone is surveilled. How long until people who are sidestepping surveillance (i.e. using open source systems that don't implement this stuff) fall under suspicion just because they are not surveilled? How long until it's "guilty until proven otherwise"?

In my opinion it is absolutely fair and necessary to scan images that are uploaded to the cloud in a way that makes them shareable. But never the stuff I have on my device. And the scanning system has to be transparent.


>We judge these programs under the wrong assumption that it is run by the good guys

Speaking as someone who as a boy was tortured and trafficked by operatives of the Central Intelligence Agency of the United States of America, I am surprised how little appreciation there is for the standard role of misdirection in the espionage playbook.

It is inevitable that all these ostensibly well-intended investigators will have their ranks and their leadership infiltrated by intelligence agencies of major powers, all of whom invest in child-trafficking. What better cover?


"Wont somebody think of the (white, wealthy, documented) chiiiiilren!"

:sigh:


> We judge these programs under the wrong assumption that it is run by the good guys. Any system that depends on the goodness of the people running it is dangerous, because it can be taken over by the bad guys.

And sometimes, the "good guys" in their attempts to be as good as they can imagine being, turn into the sort of person who'll look a supreme court judge or Congresspeople or oversight committees in the eye and claim "It's not 'surveillance' until a human looks at it." after having built PRISM "to gather and store enormous quantities of users’ communications held by internet companies such as Google, Apple, Microsoft, and Facebook."

https://www.hrw.org/news/2017/09/14/q-us-warrantless-surveil...

Sure, we copied and stored your email archive and contact lists and instant messenger history. But we weren't "surveilling you" unless we _looked_ at it!

Who are these "good guys"? The intelligence community? The executive branch? The judicial branch? Because they're _all_ complicit in that piece of deceit about your privacy and rights.


Surveillance does have its place, but a large part of the problem with the new technology of surveillance is that the people passing the laws on surveillance don't understand the technology that surrounds it.

Take, for instance, the collection of metadata that is now so freely swept up by the American government without a warrant. This includes the people involved in communication, the method of communication, the time and duration of communication, and the locations of those involved with said communication. All of that is metadata that can be collected without a warrant by national agencies for "national security", done on a massive scale under PRISM.

Now, this is non-targeted, national surveillance, fishing for a "bad guy" with enhanced surveillance capabilities. This doesn't necessarily seem like a good thing. It seems like a lazy thing, and a thing which was ruled constitutional by people who chose not to understand the technology because it was easier than thinking downstream about the implications.


> is that the people passing the laws on surveillance don't understand the technology that surrounds it.

And that the people running the surveillance have a rich track record of lying to the people who are considering whether to pass the laws they're proposing.

"Oh no, we would _NEVER_ surveil American citizens using these capabilities!"

"Oh yeah, except for all the mistakes we make."

"No - that's not 'surveillance', it's only metadata, not data. All we did was bulk collect call records of every American, we didn't 'surveil" them."

"Yeah, PRISM collects well over 80% of all email sent by Americans, but we only _read_ it if it matches a search we do across it. It's not 'surveillance'."

But they've stopped doing all that, right? And they totally haven't just shared that same work out amongst their five eyes counterparts so that what each of them is doing is legal in their jurisdiction even though there are strong laws preventing each of them from doing it domestically.

And how would we even know? Without Snowden we wouldn't know most of what we know about what they've been doing. And look at the thanks he got for that...


> It is a devil's advocate response, but can you explain why this is a bad thing?

I didn't state it as a bad thing, I stated it as a counter to your argument that encrypted communication is more common, and therefore maybe we should assess whether additional capabilities are warranted. Those capabilities already exist, and expanded before the increased encrypted communication (they happened during the analog phone line era). I would hazard that increasingly encrypted communication is really just people responding to that change in the status quo (or overreach, depending on how you view it) brought about by the major powers (whether they be governmental or corporate).


Why shouldn't every Neuralink come with a mandated FBI module preinstalled? Where, if anywhere, is private life to remain, where citizens think and communicate freely? Is Xinjiang just the start for the whole world?


Surely there is communication surrounding the child sexual abuse, making plans with other pedophiles and discussing strategies. Some of this may occur over text message. Maybe Apple’s next iteration of this technology can use my $1000 phone that I bought and that I own to surveil my E2EE private encrypted messages on my phone and generate a safety voucher for anything I say? The sky’s the limit!


Good point: neural nets are getting better at natural language every year.


Do you see how this underlines my original point? I brought up real child abuse that is happening today and your response is about brain implants being mandated by the government. Most people are going prioritize the tangible problem over worrying about your sci-fi dystopia.


A plausible story of how our rights are in the way is always ready to hand. If we can't at some point draw a clear line and say no, that's it, stop -- then we have no rights. It's chisel, chisel, chisel, year after year, decade after decade.

In America one of those lines is that your personal papers are private. Get a warrant. I don't have to justify this stand. I might choose to explain why it's a good stand, or I might not; it's on you to persuade us.


>In America one of those lines is that your personal papers are private. Get a warrant. I don't have to justify this stand. I might choose to explain why it's a good stand, or I might not; it's on you to persuade us.

Part of the problem is that these devices are encrypted so a warrant doesn't work on them. That is a big enough change that maybe people need to debate if the line is still in the right place.


That is a change worth considering, though it must be treated at the level of rights, not just a case-by-case utility calculus. At the same time, most other changes have been towards more surveillance and control: cameras everywhere, even in the sky; ubiquitous location tracking; rapidly improving AI to scale up these capabilities beyond what teams of humans could monitor; tracking of most payments; mass warrantless surveillance by spy agencies; God knows what else now, many years after the Snowden leaks. This talk you hear about the population "going dark" is... selective.

I think my vehemence last night might've obscured the point I wanted to make: what a right is supposed to be is a principle that overrides case-by-case utility analysis. I would agree that everything is open to questioning, including the right to privacy -- but as I see it, if you ask what's the object-level balance of utilities with respect to this particular proposal, explicitly dropping that larger context of privacy as a right (which was not arrived at for no reason) and denigrating that concern as science fiction, as a slippery-slope fallacy -- then that's a debate that should be rejected on its premise.


My memories and thoughts are also warrant-proof. We just accept it as a limitation.


20 years ago, a device inspecting the personal data of 60% of citizens 24/7, against samples remotely provided by the government, was sci-fi dystopia.


To whom do you give the keys?


Post your passwords.


That's too short a thought, it detracts from the point.

It is bad to post passwords, not just because you lose privacy, but because you'd lose control of important accounts. Asking people to post their passwords is not reasonable.

I think you might have a point you're trying to make, but please spell it out fully.


No, actually I kind of like it. It boils down the essence of the problem to its core point: trust.


Post your home address and phone number.


Right, 50 years ago you needed a warrant. Now with Apple’s change you don’t. That’s not progress.


> Too often the tech community's response is that the intangible concept of privacy is more important than the tangible issue of child abuse.

Is it intangible? 18% of the world lives in China alone. That's more people than the "1/10 who are victims of child abuse*", and I'm sure that 18% will only grow as other authoritarian countries get more technologically advanced.

I think "Think of the kids" applies very well to the CREATORS of pornography. Per wikipedia, there isn't any conclusive causal relationship between viewing CP and assaulting children.

* Per a google search "A Bureau of Justice Statistics report shows 1.6 % (sixteen out of one thousand) of children between the ages of 12-17 were victims of rape/sexual assault" which is a lot less than 10% figure you're citing. Non-sexual abuse wouldn't really have any bearing here, right?


>Is it intangible? 18% of the world lives in China alone. That's more people than the "1/10 who are victims of child abuse*", and I'm sure that 18% will only grow as other authoritarian countries get more technologically advanced.

You didn't mention any tangible results here. How would this system by Apple make my life worse? Can you answer that without a slippery slope argument?

>I think "Think of the kids" applies very well to the CREATORS of pornography. Per wikipedia, there isn't any conclusive causal relationship between viewing CP and assaulting children.

Why does the causality matter? A correlation is enough that cracking down on this content will result in fewer abusers on the streets.

>* Per a google search "A Bureau of Justice Statistics report shows 1.6 % (sixteen out of one thousand) of children between the ages of 12-17 were victims of rape/sexual assault" which is a lot less than 10% figure you're citing. Non-sexual abuse wouldn't really have any bearing here, right?

I wasn't the one citing that, but you are also citing an incomplete number since it excludes younger children.


> Can you answer that without a slippery slope argument?

So far, any defense of this whole fiasco can be boiled down in part to what you are trying to imply: when you say "the possibility of abusing this system is a slippery slope argument", it's as if identifying a (possible) slippery-slope element in an argument would somehow automatically make it invalid.

The other way around: if all that can be said in defense is that the dangers are part of slippery-slope thinking, then you are effectively saying that the only defense is "trust them, let's wait and see, they might not do something bad with it" or "it sure doesn't affect me" (which sounds pretty similar to "I've got nothing to hide"). This might work for other areas, not so much when it comes to backdooring your device/backups for arbitrary database checks.

And since "oh the children" or "but the terrorists" has become the vanilla excuse for many, many things, I'm unsure why we are supposed to believe in a truly noble intent down the road here. "No no, this time it's REALLY about the kids, the people at work here mean it" just doesn't cut it anymore. So no, I'm not convinced the people at Apple working on this actually do it because they care.

When "but the children" becomes a favourite excuse to push whatever, the problem is very much the people abusing this very excuse to this level, not the ones becoming wary of it.

> some of the tech community has begun to think we should do absolutely nothing about this problem

I don't believe that people think that, I believe that people rather think that the ones in power simply aren't actually mainly interested in this problem. The trust is (rightfully) heavily damaged.


>> You didn't mention any tangible results here. How would this system by Apple make my life worse? Can you answer that without a slippery slope argument?

That's a weird goal-post.

>> Why does the causality matter? A correlation is enough that cracking down on this content will result in fewer abusers on the streets.

Obviously, of the people who look at CP, a higher percentage will be actual child abusers. The question for everybody is: does giving those people a fantasy outlet increase or actually reduce the number of kids who get assaulted? At the end of the day that's what matters.

>> I wasn't the one citing that, but you are also citing an incomplete number since it excludes younger children.

[EDIT: Mistake] Well you didn't cite anything at all, and were off by a shockingly large number. Please cite something or explain why you made up a number.


>Well you didn't cite anything at all, and were off by a shockingly large number.

Sorry, I am just baffled by your last point here. How can I be off by a shockingly large number when I didn't even cite a number?


My B, I see now that was in a section where you were quoting hackerfactor. I guess I should direct that question to him.


> How would this system by Apple make my life worse? Can you answer that without a slippery slope argument?

“With the first link, the chain is forged. The first speech censured, the first thought forbidden, the first freedom denied, chains us all irrevocably. The first time any man's freedom is trodden on we're all damaged...."


This is melodramatic high school Hamlet-ism. It’s also silly - there has obviously been a case where the first speech was censured. It happened before civilizations. Are we all still damaged and in bondage because of that?

Look, speech is important. So is protecting the public good. But if one believes in absolutes, rather than trade-offs, they are IMO getting too high on their own supply.

Let’s talk about the trade-offs that we have already made.


Does the fact that the NSA can comb your personal files and look at people's nude photographs not concern you? That's a present day reality brought to light by Snowden. Showing a colleague a 'good find' was considered a perk of the job.

We're lying to ourselves if we think this couldn't be abused and can implicitly be trusted. We should generally be sceptical of closed source at the best of times, let alone when it's inherently designed to report on us.

To your point of 'as a layman end user what is the cost to me?': more code running on your computer doing potentially anything which you have no way to audit -> compromising the security of whatever is on your computer, and an uptick in cpu/disk/network utilisation (although it remains to be seen if it's anything other than negligible).

My defeated mentality is partly - 'well they're already spying on us anyway'...


> This is melodramatic high school Hamlet-ism. It’s also silly - there has obviously been a case where the first speech was censured. It happened before civilizations. Are we all still damaged and in bondage because of that?

Frankly, yes.


That seems like a ‘no’ then.


You can’t seriously be suggesting that Apple not implementing these measures will somehow be good for privacy in China?


The implication -- and I think it's a valid one -- is that this client-side mechanism will be very quickly co-opted to also alert on non-CSAM material. For example, Winnie the Pooh memes in China.


I think it’s not valid to claim that it will be quickly used for that purpose.

However I absolutely agree that it could be used to detect non-CSAM images if Apple colludes with that use case.

My point is that this is immaterial to what is going on in China. China is already an authoritarian surveillance state. Even without this, the state has access to the iCloud photo servers in China, so who knows what they are doing, with or without Apple’s cooperation.


You can label it collusion, but when Apple does it, it's going to call it _complying with local regulation_.


It doesn’t matter what you label it. Hand wringing over making things worse in China is not a valid concern.


I'm old enough to remember when the revelation that NSA is spying on everyone was a shock.

Now people are seriously arguing that continuous searching through your entire life without any warrant or justification by opaque algorithms is fine.

It only took what, 10 years?


> Now people are seriously arguing that continuous searching through your entire life without any warrant or justification by opaque algorithms is fine.

Where is anyone arguing that?


Fine, then let’s talk about other countries.

Does anyone seriously doubt that Germany will use this mechanism to ban Nazi imagery?

Then from there, it’s not a big leap to talk about controlling far right (or far left) memes in France or the UK.

More insidiously, suppose some politician in a western liberal democracy was caught with underage kids, and there were blackmail photos that leaked. Do you think those hashes wouldn’t instantly make their way onto this ban list?


> Fine, then let’s talk about other countries.

I’ll let you change the subject, but let’s note that every time someone raises privacy in China as a concern, it’s just bullshit.

> Does anyone seriously doubt that Germany will use this mechanism to ban Nazi imagery?

Yes.

> Then from there, it’s not a big leap to talk about controlling far right (or far left) memes in France or the UK.

This one is harder for me to argue against. Those countries could order such a mechanism, whether Apple had built this or not. Because those countries have hate speech laws and no constitutional mechanism protecting freedom of speech.

This is a real problem, but banning certain kinds of speech is popular in these societies. It is disturbingly popular in the US too. That is not Apple’s doing.


It’s just sad how things have shifted.

During the Cold War, the West in general and the US in particular were proud of spreading freedom and democracy. Rock & roll and Levi’s played a big role in bringing down the USSR.

Then in the 90s, the internet had this same ethos. People fought against filters on computers in libraries and schools.

Now that rich westerners have their porn and their video games, apparently many are happy to let the rest of the world rot.

I guess I just expected more.


I actually feel the same way. I miss those earlier eras.

I just think that exaggerating scares is part of the problem, not the solution, regardless of which side of a debate is doing it.


Agreed in principle. But in this particular case, I think it's difficult to exaggerate the badness of this scare. This strikes me as one of the "Those who forget their history are doomed to repeat it" kind of things.

Like with the TSA and the no-fly list. Civil liberties groups said it was going to be abused, and they said so well before any actual abuse had occurred. But they weren't overreacting, and they weren't exaggerating. They were right. Senator Ted Kennedy even wound up on that list at one point.


I don’t think the scare is warranted.

This really is a narrowly targeted solution that only works with image collections, and requires two factors to verify, and two organizations to cooperate, one of which is Apple who has unequivocally staked their reputation on it not being used for other purposes, and the other is NCMEC which is a non-profit staffed with people dedicated to preventing child abuse.

People who are equating this with a general purpose hashing or file scanning mechanism are just wrong at best.

It’s not like the no-fly list at all.


What tangible impact on the life of an everyday Chinese citizen are you expecting if Apple offered an E2EE messaging and cloud backup service in China? And why do you think the Chinese government would not just ban it and criminalise anyone found using a device which connected to Apple's servers, rendering any benefits moot?

(And why do you think it's morally right, or the responsibility of a foreign private company to try and force anything into China against their laws? Another commenter in a previous thread said the idea was for Apple to refuse to do business there - but that still leads to the question, how would that help anyone?)


> I think "Think of the kids" applies very well to the CREATORS of pornography. Per wikipedia, there isn't any conclusive causal relationship between viewing CP and assaulting children.

“Think of the Kids” damn well applies to the consumers of this content - by definition, there is a kid (or baby in some instances) involved in the CP. As a society, the United States draws the line at age 18 as the age of consent [the line has to be drawn somewhere and this is a fairly settled argument]. So by definition, in the United States, the people in these pictures are victims who could not consent.

Demand drives creation. Getting rid of it on one of the largest potential viewing and sharing platforms is a move in the right direction in addressing the problem.

What I haven’t seen from the tech community is the idea that this will be shut down if it goes too far or beyond this limited scope. Which I think it would be - people would get rid of iPhones if some of the other cases privacy advocates are talking about occur. And at that point they would have to scrap the program - so Apple is motivated to keep it limited in scope to something everyone can agree is abhorrent.


>Demand drives creation. Getting rid of it on one of the largest potential viewing and sharing platforms is a move in the right direction in addressing the problem.

Yeah, that focus has worked really well in the "war on some drugs," hasn't it?

I don't pretend to have all the answers (or any good ones, for that matter), but we know interdiction doesn't work.

Those who are going to engage in non-consensual behavior (with anyone, not just children) are going to do so whether or not they can view and share records of their abuse.

The current legal regime (in the US at least) creates a gaping hole where even if you don't know what you have (e.g., if someone sends you a child abuse photo without your knowledge or consent) you are guilty, as possession of child abuse images is a felony.

That's wrong. I don't know what the right way is, but adding software to millions of devices searching locally for such stuff creates an environment where literally anyone can be thrown in jail for receiving an unsolicited email or text message. That's not the kind of world in which I want to live.

Many years ago, I was visiting my brother and was taking photos of his two sons, at that time aged ~4.5 and ~2.

I took an entire roll of my brother, his wife and their kids. In one photo, the two boys are sitting on a staircase, and the younger one (none of us noticed, as he wasn't potty trained and hated pants) wasn't wearing any pants.

I took the film to a processor and got my prints in a couple of days. We all had a good laugh looking at the photos and realizing that my nephew wasn't wearing any pants.

There wasn't, when the photos were taken, nor when they were viewed, any abuse or sexual motives involved.

Were that to happen today, I would be sitting in a jail cell, looking at a lengthy prison sentence. And when done "repaying my debt to society" I'd be forced to register as a sex offender for the rest of my life.

Which is ridiculous on its face.

Unless and until we reform these insane and inane laws, I can't support such programs.

N.B.: I strongly believe that consent is never optional and those under the age of consent cannot do so. As such, there should absolutely be accountability and consequences for those who abuse others, including children.


> Were that to happen today, I would be sitting in a jail cell, looking at a lengthy prison sentence.

No, you would not. I was ready to somewhat agree with you, but this is just false and has nothing to do with what you were talking about before. The law does not say that naked photos of (your or anyone else's) kids are inherently illegal, they have to actually be sexual in nature. And while the line is certainly not all that clear cut, a simple picture like you're describing would never meet that line.

I mean let's be clear here, do you believe the law considers too much stuff to be CSAM, and if so why? How would you prefer we redefine it?


> The law does not say that naked photos of (your or anyone else's) kids are inherently illegal, they have to actually be sexual in nature.

But that depends on who looks at it.

People have been arrested and (at least temporarily) lost custody of their children because someone called the police over perfectly normal family photos. I remember one case a few years ago where someone had gotten into trouble because one photo included a toddler wearing nothing (even facing away from the camera, if my memory serves me correctly) playing at the beach. When the police realized this wasn't an offense, instead of apologizing they got hung up on another photo where kids were playing with an empty beer can.

Recently this was also linked https://jonathanturley.org/2009/09/18/arizona-couple-sues-wa...

which further links to a couple of previous cases.

I'd say we should get police or health care to talk to the people who think perfectly normal images are sexual in nature; but until we get the laws changed, at least keep us safe.

> I mean let's be clear here, do you believe the law considers too much stuff to be CSAM, and if so why? How would you prefer we redefine it?

Another thing that comes up is that a lot of things that are legal might be in that database, because criminals might have a somewhat varied history.

Personally I am a practicing conservative Christian so this doesn't bother me personally at the moment since for obvious reasons I don't collect these kinds of images.

The reason I care is that every such capability will be abused, and below I present, in two easy steps, how it will go from today's well-intentioned system to a powerful tool for oppression:

1. Today it is pictures, but if getting around it is as simple as putting them in a PDF, then obviously PDFs must be screened too. Same with zip files. Because otherwise this is so simple to circumvent that it is worthless.

2. Once you have such a system in place, it would be a shame not to use it for every other evil thing. Depending on where you live this might be anything: Muslim scriptures, Atheist books or videos, Christian scriptures, Winnie the Pooh drawings - you name it and someone wants to censor it.


As soon as it is used in a negative way beyond CSAM scanning, it will cause people to sell their phones and stop using Apple products.

If Apple starts using the tech to scan for religious material, there will be significant market and legal backlash. I think the fact that CSAM scanning will stop if they push it too far will keep them in check to only do CSAM scanning.

Everyone can agree on using the tech for CSAM, but beyond that I don’t see Apple doing it. The tech community is reacting as if they already have.


Problem one is Apple doesn't know what they are scanning for.

This is by design and actually a good thing.

It becomes a problem because of problem number 2:

No one is accountable if someone gets their life ruined over a mistake in this database.

I'd actually be somewhat less hostile to this idea if there was more regulatory oversight:

- laws that punish police/officials if innocent people are harmed in any way

- mandatory technical audits, as well as verification of what it is used for: Apple keeps logs of all signatures that "matched"/triggered, as well as the raw files, and these are provided to the court as part of any case that comes up. This way we could hopefully prevent most fishing expeditions - both wide and personalized ones - and also avoid any follow-up parallel reconstructions.

I'm not saying I'd necessarily be OK with it but at that point there would be something to discuss.


It may be worth taking very seriously that you might be overestimating both how quickly regular people become aware of such events and how emphatically people will react.


> I'd say we should get police or health care to talk to the people who think perfectly normal images are sexual in nature; but until we get the laws changed, at least keep us safe.

Personally I don't find anecdotes convincing compared to the very real amount of CSAM (and actual child abuse) we already know exists and circulates in the wild, but I do get your point. That said personally I don't think changing the laws would really achieve what you want anyway - I don't think a random Walmart employee is up-to-date on the legal definitions of CSAM, they're going to potentially report it regardless of what the law is (and the question of whether this is a wider trend is debatable, again this is an anecdote).

With that, they were eventually found innocent, so the law already agrees what they did was perfectly fine, which was my original point. No it should not have taken that long, but then again we don't know much about the background of those who took them, so I'm not entirely sure we can easily determine how appropriate the response was. I'm certainly not trying to claim our system is perfect, but I'm also not convinced rolling back protections for abused children is a great idea without some solid evidence that it really isn't working.

> Another thing that comes up is that a lot of things that are legal might be in that database, because criminals might have a somewhat varied history.

That didn't really answer my question :P

I agree the database is suspect but I don't see how that has anything to do with the definition of CSAM. The legal definition of CSAM is not "anything in that database", and if we're already suggesting that there's stuff in there that's known to not be CSAM then how would changing the definition of CSAM help?


> Personally I don't find anecdotes convincing compared to the very real amount of CSAM (and actual child abuse) we already know exists

First: This is not hearsay or anecdotal evidence; this is multiple innocent real people getting their lives trashed to some degree before getting acquitted.

> I don't think a random Walmart employee is up-to-date on the legal definitions of CSAM, they're going to potentially report it regardless of what the law is (and the question of whether this is a wider trend is debatable, again this is an anecdote).

Fine, I too report a number of things to the police that might or might not be crimes. (Eastern European car towing a Norwegian luxury car towards the border is one. Perfectly legal in one way but definitely something the police was happy to get told about so they could verify.)

> With that, they were eventually found innocent, so the law already agrees what they did was perfectly fine, which was my original point.

Remember the job of the police is more to keep law abiding citizens safe than to lock up offenders. If we could magically keep kids safe forever without catching would-be offenders I'd be happy with that.

Making innocent people's lives less safe for a marginally bigger chance to catch small fry (i.e. not producers), does it matter?

The problem here and elsewhere is that police in many places don't have a good track record of throwing it out. Once you've been dragged through court for the most heinous crimes you don't get your life completely back.

If we knew police would always throw out such cases I'd still be against this but then it wouldn't be so obviously bad.


> First: This is not hearsay or anecdotal evidence; this is multiple innocent real people getting their lives trashed to some degree before getting acquitted.

"multiple" is still anecdotal, unless we have actual numbers on the issue. The question is how many of these cases actually happen vs. the number of times these types of investigations actually reveal something bad. Unless you never want kids saved from abuse there has to be some acceptable number of investigations that eventually get dropped.

> Remember the job of the police is more to keep law abiding citizens safe than to lock up offenders.

Maybe that should be their purpose, but in reality they're law enforcement; their job has nothing to do with keeping people safe. SCOTUS has confirmed as much: the police have no duty to protect people, only to enforce the law. However, I think we agree that's pretty problematic...

>> Making innocent people's lives less safe for a marginally bigger chance to catch small fry (i.e. not producers), does it matter?

I would point out that the children in this situation are law abiding citizens as well, and they also deserve protection. Whether their lives were made more or less safe in this situation is debatable, but the decision was made with their safety in mind. For the few cases of a mistake being made like the one you presented I could easily find similar cases where the kids were taken away and then it was found they were actually being abused. That's also why I pointed out your examples are only anecdotes, the big question is whether this is a one-off or a wider trend.

If reducing the police's ability to investigate these potential crimes would actually result in harm to more children, then you're really not achieving your goal of keeping people safer.

> The problem here and elsewhere is that police in many places don't have a good track record of throwing it out. Once you've been dragged through court for the most heinous crimes you don't get your life completely back.

Now this I agree with. The "not having a good record of throwing it out" I'm a little iffy on but generally agree, and I definitely agree that public knowledge of being investigated for such a thing is damaging even if it turns out you're innocent, which isn't right. I can't really say I have much of a solution for that in a situation like this though; I don't think there's much of a way to not-publicly take the kids away - and maybe that should have a higher threshold, but I really don't know, as I mentioned earlier we'd really need to look at the numbers to know that. For cases that don't involve a public component like that, though, I think there should be a lot more anonymity involved.


A large? portion of “sexual predators” are men peeing on the side of the interstate[1]. So it’s not far-fetched to think that a random pic would also land you in jail.

[1]: I looked up the cases of sex offenders living around me several years ago.


Random pictures aren’t going to be in the CSAM database and trigger review. And to have multiple CSAM hash matches on your phone is incredibly unlikely.

An unsolicited email / having photos planted on your phone or equipment is a problem today as much as it will be then, but I think people over-estimate the probability of this and ignore the fact it could easily happen today, with an “anonymous tip” called into the authorities.

If they are scanning it through iMessage they will have logs of when it arrived, and where it came from as well - so in that case it might protect a victim who is being framed.


That is such a tortured, backwards argument, but the only one that has even a semblance of logic so it gets repeated ad nauseam. Why be so coy about the real reasons?


Any reach into privacy, even while "thinking of the kids" is an overreach. Police can't open your mail or search your house without a warrant. The same should apply to your packets and your devices.

Why not police these groups with.. you know.. regular policing? Infiltrate, gather evidence, arrest. This strategy has worked for centuries to combat all manner of organized crime. I don't see why it's any different here.


Devil's advocate: It may not be mostly big organized crime. It may be hard to fight, because it's not groups. It mostly comes from people close to the families, or the families themselves.

Here's a relevant extract sourced from Wikipedia:

"Most sexual abuse offenders are acquainted with their victims; approximately 30% are relatives of the child, most often brothers, sisters, fathers, mothers, uncles or cousins; around 60% are other acquaintances such as friends of the family, babysitters, or neighbours; strangers are the offenders in approximately 10% of child sexual abuse cases.[53] In over one-third of cases, the perpetrator is also a minor.[56]"

Content warning: The WP article contains pictures of abused children.


Sure, but the people sharing these images do so in organized groups, often referred to as "rings". I agree it would be very hard to catch a solitary perpetrator abusing children and not sharing the images. However since they would be creating novel images with new hashes, Apple's system wouldn't do much to help catch them would it?


The laws regarding CSAM/CSA are not the problem, they are fine. The problem is that we are expected to give up our rights in the vague notion of 'protecting children' while the same authorities actively ignore ongoing CSA. The Sophie Long case is an excellent example where the police has no interest in investigating allegations of CSA. Why is it that resources are spent policing CSAM but not CSA? It is because it is about control and eroding our right to privacy.


I agree that our current legal and law enforcement system isn't up to speed with the 21st-century internet. And it has to be updated, because this kind of makes the internet a de-facto lawless space, covering everything from stalking and harassment, through fraud, to contraband trafficking and CSAM.

I don't think full-scale surveillance is the way to go in free, democratic societies. It is the easiest way, though. Even more so if surveillance can be outsourced to private, global tech companies. It saves the pain of passing laws.

Talking about laws, those, along with legal proceedings, should be brought up to speed. If investigators, based on existing and potential new laws, convince a court to issue a warrant to surveil or search any of my devices, fine. Because then I have legal options to react to that. Having that surveillance incorporated into some opaque EULA from a company in the US, a company that now can enforce its standards on stuff I do, is nothing I would have thought would be acceptable. Not that I am shocked it is, I just wonder why that stuff still surprises me when it happens.

Going one step further, if FANG manages to block right-to-repair laws it would be easy to block running a non-approved OS on their devices. Which would then be possible to enforce by law. Welcome to the STASI's wet dream.


All of the 'bad things' you mention are already very illegal. Changing the laws in this case will only lead to tyranny. I cannot emphasize this enough, you cannot fix societal ills by simply making bad things illegal. Right to repair is of course crucial to free society.


By laws I mean the laws governing police work. And bringing police up to speed. Obviously CP and other things are already very much illegal, these laws are just hardly enforced online. That has to change.


The internet isn't the wild west, laws are very much enforced.


Stalking and harassment aren't, at least over here. Victims are constantly left out in the cold. Same goes for fraud: most cases are not prosecuted, especially if they cross state and, in the EU, national borders. Because it becomes inconvenient, the police don't really bother. And if they do, the fraud is done. The stalking went on for years. And nothing really improved.

Hell, do I miss the old internet. The one without social media.


International fraud is a really interesting problem. I've proposed a mandatory international money transfer insurance that would pay out in case of court-decided fraud. It would push countries that look the other way on fraud within their borders to crack down, in order to preserve their international market access.


I have hands-on experience with what I'd call at least attempted fraud, with crypto. Back when Sweden thought about state-backed crypto, a lot of ads showed up where you could invest in that. I almost did; the call centers used Austrian numbers. Not sure if there was even any coin behind that. I reported it to the police and got a letter after a couple of months that the investigation led nowhere and was dropped; apparently the Austrian authorities didn't find anything on the reported number.

A couple of hours online found

- the company behind that operated out of the UK

- the call center was not in Austria but used local numbers for a while

- the company was known for that shady business but never even got a slap on the wrist

I decided to never count on authorities for that kind of fraud. Or report it, because that's just a waste of time, unless you lost a shitload of money.


There's a lot of dumb fraud. Being international of course makes everything harder. Usually the amount of investigation is related to how many people were scammed. People need to learn how to do due diligence because the definition of fraud can get vague at times. I think your example is a good case of due diligence, it couldn't hurt to blog about their fraud though.


People did write about it, that's how I found out so much so quickly. I didn't invest, but came reasonably close. One could call it due diligence, but I can see how easy it is to fall for things like that. I got a lot more sceptical of these ads, even more so than before.


It’s not just thinking of the victims when they are kids - if they aren’t killed after the material is made, then they have issues for the rest of their life with a real cost to society.

We’re talking a life-long impact from being abused in that stuff…


> Why haven't I made my whitepaper about PhotoDNA public? In my view, who would it help? It would help bad guys avoid detection and it will help malcontents manufacture false-positives. The paper won't help NCMEC, ICACs, or related law enforcement. It won't help victims.

Making it public would allow the public to scrutinize it - attack it, if you will - so that we can get to the bottom of how bad this technology is. Ultimately my sincere guess is that we’d end up with better technology to do this, not some crap system that essentially matches on blurry images. Our government is supposed to be open source; there’s really no reason we can’t, as a society, figure this out better than some anti-CP cabal with outdated, crufty image tech.


PhotoDNA is a very simple algorithm. Reproducing what OP did to "reverse engineer" it is not hard. If you really have the guts to try this, go ahead and you'll be surprised. I'm living in the US with a green card, so I'm not touching the problem even with a laser pointer.

The techno-legal framework is worse than just "reversing the hashes is possible". You could brute force creation of matching images using a cloud service or online chat as an "oracle".
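
To give a concrete sense of how simple perceptual hashing can be: PhotoDNA itself is proprietary, so the snippet below is not its algorithm, just a minimal average-hash ("aHash") sketch in Python. Downscale, greyscale, threshold against the mean, compare by Hamming distance. Blurred or resized copies of an image tend to land within a few bits of the original, which is exactly why both near-duplicate matching and the kind of oracle-assisted collision hunting described above are plausible.

    from PIL import Image

    def average_hash(path, hash_size=8):
        # Downscale to hash_size x hash_size greyscale and threshold on the mean brightness.
        img = Image.open(path).convert("L").resize((hash_size, hash_size))
        pixels = list(img.getdata())
        avg = sum(pixels) / len(pixels)
        return int("".join("1" if p > avg else "0" for p in pixels), 2)

    def hamming(a, b):
        # Number of differing bits between two hashes.
        return bin(a ^ b).count("1")

    # Hypothetical file names; a blurred or resized copy usually stays within a few bits.
    # print(hamming(average_hash("photo.jpg"), average_hash("photo_blurred.jpg")))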


> About this time, someone usually mocks "it's always about the kids, think about the kids." To those critics: They have not seen the scope of this problem or the long term impact.

The exploitation of children is a real and heartbreaking ongoing tragedy that forever harms millions of those we most owe protection.

It's because this is so tragic that we in the world of privacy have grown skeptical. The Four Horsemen, invoked consistently around the world to mount assaults against privacy, are child abuse, terrorism, drugs, and money laundering.

Once privacy is defeated, none of those real, compelling, pressing problems get better (well, except money laundering). Lives are not improved. People are not rescued. Well-meaning advocates just move on to the next thing the foes of privacy point them at. Surely it will work this time!

Your heart is in the right place. You have devoted no small part of your time and energy to a deep and genuine blooming of compassion and empathy. It's just perhaps worth considering that others of us might have good reason to be skeptical.


> Why haven't I made my whitepaper about PhotoDNA public? In my view, who would it help? It would help bad guys avoid detection and it will help malcontents manufacture false-positives. The paper won't help NCMEC, ICACs, or related law enforcement. It won't help victims.

I have to respectfully disagree with that statement and the train of thought unfortunately. However, you should seek legal counsel before proceeding with anything related: the advice from a stranger on the web.

a) You are not in possession of a mystical secret for some magical curve or lattice. The "bad guys", if they have enough incentive to reverse a compression algorithm (effectively what this is), will easily do that if the money is good enough.

b) If we followed the same mentality in the cryptography community we would still be using DES or have a broken AES. It is clear from your post that the area requires some serious boost from the community in terms of algorithms and implementations and architectural solutions. By hiding the laundry we are never going to advance.

Right now this area is not taken as seriously as it should be by the research community -- one of the reasons being the huge privacy and illegal-search-and-seizure concerns, and the disregard of other areas of law, that most of my peers have. Material such as yours can help attract the necessary attention to the problem and showcase how, without the help of the community, we end up with problematic and harmful measures such as the ones you imply.

c) I guess you already have, but just in case: from what I have read so far and from the implications of the marketing material, I have to advise you to seek legal counsel if you come into possession of the PhotoDNA material they promised, and about your written work. [https://www.justice.gov/criminal-ceos/citizens-guide-us-fede...] Similarly for any trained ML model -- although there it is still a disaster in progress.


> Right now this area is not taken seriously enough as it should by the research community

Another reason is that the basic material needed to conduct this research (child porn) is legally toxic to possess. Sure, there are ways around that for sufficiently motivated researchers. But if you are a CS researcher, you have a lot of research opportunities that do not involve any of that risk or overhead.


I'd lament that using 'thinking of the children' as an excuse for genuine overreach, when that's what it is, puts us in a worse situation.

There is incredible second-order harm in overreach, because the reaction to it hurts the original cause, too.

If you try to overcorrect, people will overcorrect in response.

The sort of zeal that leads to thoughts like "screeching minority", I think shows carelessness and shortsightedness in the face of very important decisions.

I have no informed opinion on Apple's CSAM tool, beyond deferring to the FotoForensics expert.


Thanks for the write-up and putting all the work into this important issue.

> "There is nearly a 1-to-1 relationship between people who deal in CP and people who abuse children. And they rarely victimize just one child. Nearly 1 in 10 children in the US will be sexually abused before the age of 18."

One thing I wondered and have not seen brought up in the discussion so far is this: As far as I understand, the perceptual hash solutions work off existing corpora of abuse material. So, if we improve our detection ability of this existing content, doesn't that increase the pressure on the abusers to produce new content and in consequence hurt even more children? If so, an AI solution that also flags previously unknown abuse material and a lot of human review are probably our only chance. What is your take on this?


> doesn't that increase the pressure on the abusers to produce new content and in consequence hurt even more children?

Not really. Two points:

1) Many / all CP boards these days ask applicants to provide CSAM (as police in almost all jurisdictions except for IIRC US and Australia are banned from bringing CSAM into circulation). And to keep police where allowed from simply re-uploading stuff, they (as well as the "client base") demand new, yet-unseen stuff, and so no matter what the police is doing there will always be pressure for new content.

2) The CSAM detection on popular sites only hits people dumb enough to upload CSAM to Instagram, Facebook and the likes. Granted, the consumer masses are dumb and incompetent at basic data security, but ... uhh, for lack of a better word, experienced CSAM consumers know after decades of busts that they need to keep their stashes secure - aka encrypted disks.

> If so, an AI solution that also flags previously unknown abuse material and a lot of human review are probably our only chance. What is your take on this?

There are other ways to prevent CSA that don't risk damaging privacy, and all of them aim at the early stages - preventing abuse from happening in the first place:

1) teach children already in early school years about their body and about consent. This one is crucial - children who haven't learned that it is not normal that Uncle Billy touches their willy won't report it! - but unfortunately, conservatives and religious fundamentalists tend to blast any such efforts as "early sexualization", "gay propaganda" and whatnot.

2) provide resources for (potential) committers of abuse. In Germany, we have the "Kein Täter werden" network that provides help, but many other countries don't have anything even remotely similar.

3) Screening of staff and volunteers in trust / authority positions dealing with children (priests and other clergy, school and pre-school/kindergarten teachers, sports club trainers) against CSA convictions. Unfortunately, this is ... not followed thoroughly very often and in some cases (cough Catholic Church) the institutions actively attempt to cover up CSA cases, protect perps and simply shuffle staff around the country or in some cases across the world.


Thanks, that was very insightful.


> About this time, someone usually mocks "it's always about the kids, think about the kids." To those critics: They have not seen the scope of this problem or the long term impact. There is nearly a 1-to-1 relationship between people who deal in CP and people who abuse children. And they rarely victimize just one child. Nearly 1 in 10 children in the US will be sexually abused before the age of 18.

Maybe it's the fact that I don't have kids, or that I spend most of my life online with various devices and services. But I would much rather drop the NCMEC, drop any requirement to monitor private messages or photos, reinstate strong privacy guarantees, and instead massively step up monitoring requirements for families. This argument seems like we're using CSAM as a crutch to get at child abusers. If the relationship is really nearly 1:1, it seems more efficient to more closely monitor the groups most likely to be abusers instead.

It even seems to me that going after a database of existing CSAM is counterproductive. With that material, the damage is already done. In a perverse sense, we want as many pedos as possible to buy old CSAM, since this reduces the market for new abuse. It seems to me that initiatives like this do the opposite.

I am not defending CSAM here. But CSAM and child abuse are connected problems, and istm child abuse is the immensely greater one. We should confront child abuse as the first priority, even at the expense of CSAM enforcement, even at the expense of familial privacy. With a rate of 1 in 10, I don't see how not doing so can be ethically defended.


"and instead massively step up monitoring requirements for families"

Pls have a family first and then see if you still ask for more state governance in your life?

It is true, most abuse happens in the family, but there are also schools, churches, sports clubs, ...

But the thing is, if the schools for example had the staff, with enough time (and empathy) to care about the actual children and talk with them and interact with them (and not mainly handle the paperwork about them) - then you could easily spot abuse everywhere and take action.

But they usually don't, so you have traumatized children coming into another hostile and cold environment called public school, where they just stonewall again and learn to hide their wounds and scars.

Child abuse is a complex problem, with no simple solution. But I prefer a more human-focused solution and not just another surveillance step-up.

Child abusers also do not fall from the sky. If you pay more attention to them while they are young, you can spot it and help them while they need help, before they turn into creepy monsters.


Honestly, if we take the ratio of 1 in 10 seriously, I think the time for human focus and caution has passed. That's an epidemic. To be clear, I'm not letting schools, churches, sports clubs etc off here; all those places clearly need massively increased external oversight as well. But at a 10% rate, we cannot exclude the family; it must be considered an institution in a state of failure.


Well, there are still lots of people who take the Bible literally:

"Whoever spares the rod hates their children, but the one who loves their children is careful to discipline them."

https://www.bibleref.com/Proverbs/13/Proverbs-13-24.html

This is the ideological base for it, in my opinion: unchecked authoritative power together with physical violence. The thing is, the state institutions don't really have a clean record on abuse either.

And I did not say anything about excluding the family from monitoring of abuse. I said I see no reason to increase the monitoring. With the measures in place right now, you could already spot plenty of abuse everywhere - if it were really about the children.

No easy problem, no easy solution.


> talk with them and interact with them (and not mainly the paperwork about them) - then you could easily spot abuse everywhere and take action

My impression is that the abuse is easily spotted, and paperwork done, but that often not much comes of it. We (USA) don't actually seem to have very good systems for handling things once they're discovered, partly (largely?) due to lack of resources.


I mean, there is improvement in some regards - for example, priest or teacher molesters no longer silently get moved to a different place in the same job - but yeah, we can easily spot the actual problems on the ground now. One more reason to reject dystopian technological solutions that also do not solve the real problems: children in need of a real, protected home.


> someone usually mocks "it's always about the kids, think about the kids."

Yes, let's think about the kids, please.

I certainly don't want my children to grow up in an authoritarian surveillance state...


Exactly, we can’t save the children by giving them a dystopia to grow up in.


Information is power and you can dethrone anyone with access to their communication. Just a quote out of context is enough, especially with many people out for blood to handle their doubts and fears.

A lot of people allegedly knew about Epstein and he was completely untouched while connected to high-ranking politicians. You wouldn't have needed surveillance to identify child abuse there, and if it had turned up anything, I doubt anything would have happened. Even with that surveillance implemented, evidence would only be used if politically convenient.

If you are against child abuse, you should add funding to child care. People there will notice more abuse cases when they have the support and funding they need because that means more eyes on a potential problem. An image algorithm is no help.


> The paper won't help NCMEC, ICACs, or related law enforcement. It won't help victims.

But it will help all of society by preventing a backdoor from gaining a foothold. If the technology is shown to be ineffective it will help pressure Apple to remove this super dangerous tool.

Once a technical capability is there, governments will force Apple to use it for compliance. It won’t be long before pictures of Winnie the Pooh will be flagged in China.


OP knows that, and is acting accordingly.

They enjoy being “unquestionable”/above being audited.


> Why haven't I made my whitepaper about PhotoDNA public? In my view, who would it help?

Would it help activists push for more accurate technology and better management at NCMEC? Would it help technologists come up with better algorithms? I see all kinds of benefits to more openness and accountability here.


>> There is nearly a 1-to-1 relationship between people who deal in CP and people who abuse children

Can you give a source for this please? Also by "deal in" do you mean create or view?


Hey, first of all kudos to you for all your hard work and having the heart in the right place.

I don’t really want to take any specific position on this issue as I don’t have enough context to make a fair assessment of the situation. However, I do want to point out one thing:

By supporting a specific approach to solve a problem, you generally remove some incentives to solve the problem in some other way.

Applied to this situation, I think it would be interesting to ask what other potential solutions to the problem of child abuse there are, and how effective they may be compared to things like PhotoDNA. Is working on this the biggest net benefit you could have, or maybe even a net cost, when it comes to solving the problem of child abuse?

I don’t have the answer but I think it’s important to look at the really big picture once in a while. What is it that you want to achieve, and is what you are doing really the most effective way of getting there, or just something that is “convenient” or “familiar” given a set of predefined talking points you didn’t really question?

All the best to you :)


> Nearly 1 in 10 children in the US will be sexually abused before the age of 18.

Please let's not invent distorted statistics or quote mouthpieces who have an interest in scaring people as much as possible into rash actions, just like Apple has done.


Thank you for those insights.

I've long held a grudge against Microsoft and NCMEC for not providing this technology, because I live in a country where reporting CSAM is ill-advised if you're not a commercial entity and law enforcement seizes first and asks questions later (_months_ later), so you end up just closing down a service if it turns out to be a problem.

This puts it into perspective. PhotoDNA seems fundamentally broken as a hashing technology, but it works just well enough with a huge NDA to keep people from looking too closely at it.

NCMEC needs a new technology partner. It's a shame they picked Apple, who are likely not going to open up this tech.

Without it, it's only a matter of time until small indie web services (think of the Fediverse) just can't exist in a lot of places anymore.


> it's only a matter of time until small indie web services (think of the Fediverse) just can't exist in a lot of places anymore

I expect they will simply become P2P E2EE darknets eventually, meaning there won't really be a "service" anymore. Matrix already has E2EE and is actively working towards P2P.


> someone usually mocks "it's always about the kids, think about the kids." To those critics: They have not seen the scope of this problem or the long term impact.

This is making it worse: because people with an agenda use CP as an excuse to force immoral behavior onto us, child suffering is now always associated with bullshit. Those actions are hurting the children by suppressing the social will to take it seriously.

Stop hurting the children!


>Why haven't I made my whitepaper about PhotoDNA public? In my view, who would it help? It would help bad guys avoid detection and it will help malcontents manufacture false-positives.

I would suggest that the people NCMEC are most enthusiastic to catch know better than to post CSAM in places using PhotoDNA, particularly in a manner that may implicate them. Perhaps I overestimate them.


1. Your claims about CP and child abuse need substantial evidence.

2. Assuming all that as true, opaque surveillance and the destruction of free, general computing is much worse than child abuse.


> About this time, someone usually mocks "it's always about the kids, think about the kids." To those critics: They have not seen the scope of this problem or the long term impact.

I'd say it's the other way around. If freedom dies this time, it might die for good, followed by an "eternity" of totalitarianism. All the violence that has occurred in human history so far, combined, is nothing compared to what is at stake.

> Free thought requires free media. Free media requires free technology. We require ethical treatment when we go to read, to write, to listen and to watch. Those are the hallmarks of our politics. We need to keep those politics until we die. Because if we don’t, something else will die. Something so precious that many, many of our fathers and mothers gave their life for it. Something so precious, that we understood it to define what it meant to be human; it will die.

-- Eben Moglen


> About this time, someone usually mocks "it's always about the kids, think about the kids." To those critics: They have not seen the scope of this problem or the long term impact. There is nearly a 1-to-1 relationship between people who deal in CP and people who abuse children. And they rarely victimize just one child. Nearly 1 in 10 children in the US will be sexually abused before the age of 18.

Does CP being available create victims? I'd say that virtually everybody who suddenly saw CP would not have the inclination to abuse a child. I don't believe that availability of CP is the causal factor to child abuse.

But putting aside that and other extremely important slippery slope arguments for a minute about this issue: have you considered that this project may create economic incentives that are inverse of the ostensible goal of protecting more children from becoming victims?

Consider the following. If it becomes en vogue for cloud data operators to scan their customers' photos for known illegal CP images, then the economic incentives created heavily promote the creation of new, custom CP that isn't present in any database. Like many well-intentioned activists, there's a possibility that you may be contributing more to the problem you care about than actually solving it.


I have no idea whether this statistic is true or not, but "nearly 1 in 10 children in the US will be sexually abused" is an incredibly high number.

I have no doubt that some cases of sexually abused children must have also existed in the environment where I have grown up as a child, in Europe, but I am quite certain that such cases could not have been significantly more than 1 per thousand children.

If the numbers really are so high in the USA, then something should definitely be done to change this, but spying on all people is certainly not the right solution.


What makes you quite certain about your 1-1000 number? It’s really easy to mistake our experience for something generalizable.


It's hard to say without data, but the number is so astonishingly high that, really, the first reaction is to call it into question.

It also seems to be the kind of thing that is hard to measure.


I dunno, seems reasonable to me. Crimes of this nature are underreported, so I tend to assume I’m only aware of a very small percentage of these situations. I’m not saying you’re wrong - just that I had the reaction that it seemed within the general ballpark. For example, most of the apparently normal women I have known well have shared stories of abuse and trauma.

What makes you say it’s hard to measure? The definitions are pretty clear cut.


Of my female friends, I know at least 5 who are victims. Victims make up the majority of my closer female circle.

It is way more commonplace than you would like to think.


I'm glad that you at least were transparent about the shortcomings of these types of systems. Maybe some academics in forensic image analysis can take up the torch and shed light on how much of a failure this whole system is, especially when they're putting spyware on our personal devices. Anyway, I like your writings, keep up the good fight.


It's somewhat off-topic for the article, but I'm curious as to why you retain images uploaded for analysis after the analysis is completed?

After all, surely 99% of uploaders don't hold the copyright for the image they're uploading, so retaining them is on shaky legal grounds to begin with, if you're the kind of person who wants to be in strict compliance with the law - and at the same time, it forces you and your moderators to handle child porn several times a day (not a job I'd envy) and you say you risk a felony conviction.

Wouldn't it be far simpler, and less legally risky, if you didn't retain the images?


> Why haven't I made my whitepaper about PhotoDNA public? In my view, who would it help? It would help bad guys avoid detection and it will help malcontents manufacture false-positives. The paper won't help NCMEC, ICACs, or related law enforcement. It won't help victims.

It would help those victims who are falsely accused of having child porn on their phone because of a bug.

It would also help the people who are going to be detained because PhotoDNA will be used by dictatorial states to find incriminating material on their phones, just as they used Pegasus to spy on journalists, political opponents, etc.


Hey - thanks for writing the OP. I may be missing something obvious, but how does fotoforensics actually identify all the CSAM content it reports to NCMEC?


You ask how they can know that their false positive rate is 1 in a trillion without testing trillions of images. Simple: they require more than one hit. If each hit has a false positive rate of 1 in 100, which is very testable, you can simply require six hits.
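
As a rough back-of-the-envelope sketch (the 1-in-100 per-image rate is illustrative, not a published figure, and it assumes matches are statistically independent):

    # Illustrative numbers only: assume each single hash match is a false
    # positive with probability 1/100, and that matches are independent.
    p_single = 1 / 100
    required_hits = 6
    print(p_single ** required_hits)  # 1e-12, i.e. roughly one in a trillion

The weak point of this arithmetic is the independence assumption: near-duplicate photos of the same scene are exactly the case where false positives would be correlated, so they would not multiply this cleanly.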


It doesn't matter how "nice" the people on the lower levels are.

An organization's behavior is determined by what its management wants. If management consists of jerks, the organization they lead will always behave accordingly.

A fish rots from the head down…


> There is nearly a 1-to-1 relationship between people who deal in CP and people who abuse children. And they rarely victimize just one child. Nearly 1 in 10 children in the US will be sexually abused before the age of 18.

These are clearly propaganda statistics. There is absolutely no way 10% of the US child population is molested.

The fact that you not only believe this, but repeat it publicly, calls your judgement into question and suggests gullibility.

You’ve been recruited to undermine the privacy and security of hundreds of millions (billions?) of people and indoctrinated with utterly ridiculous statistics. Your argument is essentially “the ends justify the means”, and it could not be any more ethically hollow.


Not that I particularly care one way or the other, but doesn’t writing a ‘whitepaper’ (or calling your notes such) indicate an intention to release it?


How feasible is it to change the algorithm to flag police uniforms? The Tank Man?


No need to change anything. It's supported by design.


> I built the initial FotoForensics service in a few days.

Why did you specify "In a few days"?


My guess is to establish the approximate amount of collaboration. He didn't work there full time for years; the collaboration has been small and limited.


It does make one wonder: careless work, trivial problem, reuse of existing projects, writer portrays self as extremely productive, or some combination?


I think you're assuming bad faith from someone who has proven very competent, technical, and coherent.

It might simply be truthful. And if the author is proud of that, it is both irrelevant and perfectly okay. Let's focus on the facts and arguments relevant to the article, not personal attacks.


None of the options I proffered are in the "bad faith" category.


Ya, it's hard to interpret it differently, so was curious. Don't know why I'm being downvoted for a question about rhetorical intention.


I didn’t downvote, but it comes across as a nitpick on a thoughtful and reasonable piece.


[flagged]


No - but this comment is abhorrent.


I think there is a trend across the world of power centers becoming more authoritarian and punitive. It is en vogue to see it in China and point them out for it.

It is harder to look in the mirror and see something similar happening in the US. In the US, we superficially see government doing it (Snowden disclosures) and then we see corporations doing it separately (Facebook, Google ad tracking, etc...). However, I think government and tech giants are working closely together to act like China. This feels like another bud of this trend.

- Social credit system

- Loss of financial services (PayPal)

- Loss of access to social media (Facebook, Google, Apple, YouTube)

- Loss of access to travel related services (AirBnB, travel ban)

- Banning of material (Amazon)

- Control of media (owned by just a few major corporations who don't have to make money from the media... Comcast basically has a monopoly on land-line Internet, AT&T has a very strong position in mobile, Disney has a strong position in films... between them they own CNN, NBC, and ABC)... EDIT: And let's not forget Fox

You can say all you want about freedom of association, but the effect is similar in China and the US. You are ostracized from the system.

Tech has lost its neutral carrier status and now is connected into a system that enforces consent. I wonder why? Are we being prepared for an economic war? Is this the natural evolution of power seeking power? Is this just a cycle of authoritarianism and liberalism?

P.S. I don't think a groundswell outcry leading to cancellation means much. I am much more concerned when corporations do it. I just think they are much, much more powerful.


I agree. NCMEC is the root cause. It's very much like the war on drugs. There are a lot of horrendous crimes out there and this level of overreach isn't deemed necessary for any of them.

Child sex abuse seems to have become the excuse for creating these heavy-handed policies: accounts terminated with no recourse by Google, and locally stored photos scanned by Apple. Even receiving a cartoon depicting child sex abuse can get you in trouble.

Hopefully this will be a catalyst to reform the law.


> I'm surprised, and honestly disappointed, that the author seems to still play nice

Stooping to that level is what they want. CNN: “‘privacy activists’ have released a tool to allow the spread of CP”


If that’s how they play then the only winning move is to respond in kind.

>shadowy government affiliated agency abuses its role of protecting children to install malware on a billion devices


That's a PR fight you will lose.


I'm not so sure, I think it may be a lose-lose scenario, with both the government and the companies whose products have become compromised taking a big hit.

The people merely avoiding the tainted products aren't on blast here.


The people on blast are the ones publicly advocating for privacy. They're the ones who would be damaged in that scenario, and I for one want a society that takes them seriously.


Well, it doesn't look that way so far. I haven't seen any credible mainstream positions ridiculing the people advocating for privacy. It seems like everyone's pretty well aware that technology sucks and systems like this are not to be trusted, even if they're implemented with good intentions.

We will see.


That's not even news at this point. Privacy invading laws have been pushed for years under the guise of either CP or terrorism.


Scary to admit this, but whatever. In my younger years, which coincidentally were also the younger years of the internet, I used to come across... let's say...bad things all the time.

Being a decent enough human, I made many reports to NCMEC. Human-verified (by me) reports.

Never once did I hear back. Not even some weird auto-reply. Needless to say, I have zero faith in that agency. Fitting they'd get others to do the work.


>I'm surprised, and honestly disappointed, that the author seems to still play nice, instead of releasing the whitepaper.

Would it help anything? Apple isn't using PhotoDNA, so proving PhotoDNA is bad would just be met with "we don't use that".


I believe it would, because other image systems will probably have to make implementation choices and trade-offs similar to the ones PhotoDNA did.


>> Would it help anything? Apple isn't using PhotoDNA, so proving PhotoDNA is bad would just be met with "we don't use that".

> I believe it would because other image systems will probably have to make similar implementations and trade offs that PhotoDNA did.

Even if that's the case, that's too subtle of a point to be very effective, and you'll have all kinds of PR people (Apple and otherwise) selling the "we don't use that [so those criticisms don't apply to us]" narrative.


Why? It would be interesting to compare the two systems, but there is no reason to assume the trade-offs are the same.


There are trade offs that can be made that are inherent to the space.

There are a handful of hashing methods in papers, and each can have its parameters tuned, again making trade offs for things like efficiency or accuracy.

Then when it comes to efficient searching through hashes for matches and fuzzy matches, there are common algorithms and data structures used across perceptual hashing systems, each with their own trade-offs, implementation details, and parameters that can be tuned.

If there's an issue with PhotoDNA that doesn't come down to a poor implementation, then there's a good chance that other systems might have met the same pitfalls they did. And if it comes down to a poor implementation, it would be prudent for operators of other systems to make sure their own systems don't make the same mistakes.


NCMEC is a private nonprofit. No government oversight. How are they allowed this much power over every photo on everyone's devices?


> It also seems like the PhotoDNA hash algorithm is problematic (to the point where it may be possible to trigger false matches

A post here some days ago (since removed) linked to a Google Drive containing generated images (which displayed nonsense), the hashes of which matched those of genuine problem images.


Isn’t that an alleged memo from the NCMEC? I would sincerely doubt an official memo from any agency would use the term “shrieking minority”, it sounds like imaginative fiction of what a government agency would actually say.


9to5Mac published the full message that NCMEC sent to Apple:

https://9to5mac.com/2021/08/06/apple-internal-memo-icloud-ph...

It does say "We know that the days to come will be filled with the screeching voices of the minority."


Wow. Thank you, looks like I was wrong.


The fact that they're willing to openly make such a brazen statement is part of why I think they're unsalvageable.


One way to interpret the macaque photo is that the database has already been subverted for uses other than catching CP.


Or it’s one of many test images, like EICAR, so people can test and validate the system without using abhorrent images.


Sure. Though, it’s not of much use if it is not documented.


Good article, however-

"Due to how Apple handles cryptography (for your privacy), it is very hard (if not impossible) for them to access content in your iCloud account. Your content is encrypted in their cloud, and they don't have access. If Apple wants to crack down on CSAM, then they have to do it on your Apple device"

I do not believe this is true. Maybe one day it will be true and Apple is planning for it, but right now iCloud service data is encrypted in the sense that it is stored encrypted at rest and in transit; however, Apple holds the keys. We know this given that iCloud backups have been surrendered to authorities, and of course you can log into the web variants to view your photos, calendar, etc. Not to mention that Apple has purportedly been doing the same hash checking on their side for a couple of years.

Thus far there has been no compelling answer as to why Apple needs to do this on device.


> why Apple needs to do this on device

Presumably to implement E2E encryption, while at the same time helping the NCMEC to push for legislation to make it illegal to offer E2E encryption without this backdoor.

Apple users would be slightly better off than the status quo, but worse off than if Apple simply implemented real E2E without backdoors, and everyone else's privacy will be impacted by the backdoors that the NCMEC will likely push.


> make it illegal to offer E2E encryption without this backdoor.

It isn’t a back door to E2E encryption. It can’t even be used to search for a specific image on a person’s device.

It could possibly be used to find a collection of images that are not CSAM but are disliked by the state, assuming Apple is willing to enter into a conspiracy with NCMEC.


It seems that Apple does not need to be a co-conspirator, and that it would be sufficient if someone added ‘malicious’ hashes to the NCMEC database.


Not correct. When enough images to trigger a match are detected, Apple employees verify the visual derivative to make sure it matches before an alert is generated. They would need to collude.


You’re right, I was thinking about a breach of privacy in general instead of actual legal consequences. (Though the possibility of governments backdooring Apple’s servers to access decrypted files stands, that shouldn’t make a difference with this iCloud-Photos-only spyware)


Whether users are worse off or not entirely depends on the rate of false positives. That's the biggest issue IMO and OP's article points out how there is zero published and trustworthy information on that.


I think that section was rewritten, it currently reads:

> [Revised; thanks CW!] Apple's iCloud service encrypts all data, but Apple has the decryption keys and can use them if there is a warrant. However, nothing in the iCloud terms of service grants Apple access to your pictures for use in research projects, such as developing a CSAM scanner. (Apple can deploy new beta features, but Apple cannot arbitrarily use your data.) In effect, they don't have access to your content for testing their CSAM system.

> If Apple wants to crack down on CSAM, then they have to do it on your Apple device.

(which also doesn't really make sense: if the iCloud ToS doesn't grant Apple the rights needed for CSAM scanning, they could just revise it; in any case, I think they probably already have the rights they need)


Indeed, for three years or so Apple's privacy policy has specifically had this in it-

"Security and Fraud Prevention. To protect individuals, employees, and Apple and for loss prevention and to prevent fraud, including to protect individuals, employees, and Apple for the benefit of all our users, and prescreening or scanning uploaded content for potentially illegal content, including child sexual exploitation material."

https://www.apple.com/legal/privacy/en-ww/

Under "Apple's Use of Personal Data". They had that in there since at least 2019.

On top of that, an Apple executive told Congress two years ago that Apple scans iCloud data for CSAM.


You are correct — most of the iCloud data is not end-to-end encrypted. Apple discusses which data is end-to-end encrypted at https://support.apple.com/en-us/HT202303


They want to move to e2e for photos so they don't have to keep those keys. That's what this is part of — a way to prevent their service from being used for CSAM, yet still provide e2e encryption. I feel very ambivalent about this.


That was never mentioned by Apple. If that was their intention then I suspect they would have mentioned it alongside this announcement to provide a justification and quell the (justified) outrage.

I also question the value of e2e when there’s an arbitrary scanner that can send back the unencrypted files if it finds a match. If Apple’s servers control the db with “hashes” to match, then is it all that different from Apple’s servers holding the decryption keys?

Sure e2e still prevents routine large scale surveillance but at the end of the day if apple (or someone that forced apple) wants your data, they’ll get it.


They don’t send unencrypted full-res files; they send a low-res “visual representation” and can only decode it if they get > x “hits”. Assuming it works as described, I do think it’s better than just having full keys as they do now. And why else would they go to all this trouble? They can scan images now on their servers if that’s what they want.


Low-res I suppose is better but...If it's enough for a human to tell whether it's CSAM or not, it's probably high-res enough to be a significant invasion of privacy in case of a mistake.

Also the > x "hits" part is a good feature assuming that the database only looks for CSAM. Otherwise it's useless (not to mention totally unauditable).

My guess is that they're doing it on device because they've had several years of marketing and proclaiming that "everything is done on-device" so to implement CSAM scanning server side would go against that. Maybe they thought this would somehow look better to the average consumer who thinks "on-device" is automatically better?


Do you actually think they didn't have CSAM scanning implemented server-side before this?


> If that was their intention then I suspect they would have mentioned it alongside this announcement to provide a justification and quell the (justified) outrage.

It’s August. New iPhones and iOS / macOS are released in September. If they want to introduce E2E encryption for photos and need this in place to do it, then it makes sense to announce this ahead of time and get the backlash out of the way so that they can announce the headline feature during the main event without it being overshadowed.


Software features are announced at WWDC, which happens early summer and already happened this year. It's only the hardware that gets announced (and the software released) in September.


Some software features are announced at WWDC, mostly ones that affect developers. Consumer-facing services have also been announced during the September event; it’s not just hardware. Apple One, Apple TV+, and Apple Arcade were all announced during the last couple of September events.


> They want to move to e2e for photos so they don't have to keep those keys.

That's my suspicion too, but has it actually been confirmed?


Well why else would they bother? This is WAY more complicated than just scanning on their servers.


We have no information to make a decision there. It could be (as you say) because they want to implement E2E cloud, but it could just as well be because they want to start scanning offline content for folks that opted out of iCloud storage (and even if that is not their immediate intent, you can't argue implementing this system doesn't take them 95% of the way to making that possible)


Probably because Apple wants to stay away from any suspicions that they sometimes actually use their keys to access private information.


> Thus far there has been no compelling answer as to why Apple needs to do this on device.

"You scratch my back and I scratch yours". Apple doesn't want to go through an antitrust lawsuit that will kill their money printer so they kowtow with these favors.


That's obviously completely untrue.

It doesn't matter anyway, end to end cryptography is meaningless if someone you don't trust owns one of the ends (and in this case, Apple owns both.)


I don't see many people pushing back on the child pornography laws themselves that are the cause of this. I'm stepping into a hornets' nest by even bringing this up, because any criticism of the laws on the books makes one look like they're a pedo, so I'll preface by saying: child pornography (filmed with actual kids) is vile and disgusting, but it is the production of it that is evil to be fought and suppressed, not the possession of it. Remember in the 90s, we had to deal with the Communications Decency Act I & II, because every time they want to crack down on the internet, the excuse is always "it's for the children!" And pedophilia is the go-to excuse in a lot of circumstances.

The CPPA had bans on virtual child porn (e.g. using look-alike adult actresses or CGI), that was overturned by SCOTUS, and then Congress responded with the PROTECT act which tightened up those provisions. These laws on possession are practically unenforceable with modern technology, peer to peer file sharing, onion routing, and encrypted hard drives.

Thus, in order to make them enforceable, the government has to put surveillance at all egress and ingress points of our private/secure enclaves, whether it's at the point of storing it locally, or the point of uploading it to the cloud.

While I agree with the goal of eliminating child porn, should it come at the cost of an omnipresent government surveillance system everywhere? One that could be used for future laws that restrict other forms of content? How about anti-vax imagery? Anti-Semitic imagery? And with other governments of the world watching, especially authoritarian governments, how long until China, which had a similar system with Jingwang Weishi (https://en.wikipedia.org/wiki/Jingwang_Weishi) starts asking: hey, can you extend this to Falun Gong, Islamic, Hong Kong resistance, and Tiananmen square imagery? What if Thailand passes a law that requires Apple to scan for images insulting to the Thai Monarch, does Apple comply?

This is a very bad precedent. I liked the Apple that said no to the FBI instead of installing backdoors. I'd prefer if Apple get fined, and battle all the way to the Supreme Court to resist this.


There's a reason they always promote the most extreme cases to restrict liberty:

"The trouble with fighting for human freedom is that one spends most of one's time defending scoundrels. For it is against scoundrels that oppressive laws are first aimed, and oppression must be stopped at the beginning if it is to be stopped at all." — HL Mencken


Relatedly, I don't see much discussion around the nuances of what counts as cp. In my experience, the vast majority of cp people are likely to come into contact with is "consensual" (the definition of consent gets weird here) images taken by teenagers of themselves, not old men raping little kids. So what happens if a 17 year old girl takes pictures of herself, sends them to her partner, they break up, he posts them online as revenge porn, and a site moderator reports it as cp and its hash ends up in the db? Now police come after her because she has 25 similar images in her icloud and she gets treated like a criminal? She definitely did break the law I suppose, but there's so much more nuance there. I really feel like these are the kinds of people who are most likely to get flagged. I (hopefully) don't know anyone who keeps swaths of abusive cp on their iPhones, but I certainly have known a lot of underage kids with intimate pictures of themselves and their friends/partners.


My understanding of the problem is that you are entirely wrong as to the ratio of "accidental" underage photos vs. "evil" photos. CSAM is "Child Sexual Abuse Material" and the abuse part is apparently more widespread than the public knows or wants to know.

I have objections to this tool and have many of the same concerns expressed in these comments, but your example seems like a bit of a corner case compared to the broader "government panopticon" problem.


Sure, there’s probably orders of magnitude more of that content. But that’s not what most people are targeting, I think, with these laws and tech. I believe when I read some of the NYT's coverage (Gabriel J.X. Dance?) on this a while back, it was explicitly about child abuse, not “oops, older teens”.


The reason possession is illegal is to try to prevent further production of it and to eliminate continuing abuse to children. The theory is that allowing even mere possession perpetuates a market of “buyers” that would further stimulate production. This is well covered in the Congressional findings associated with 18 U.S.C. 2251. See e.g. https://www.law.cornell.edu/uscode/text/18/2251 (click the notes tab).


This reasoning is working well in the War on Drugs, so let's not only employ it, but embrace and extend it, as well.


The police already have an incentive to not investigate rape because it makes their numbers look bad, which affects funding. Now extend that to children who can't file a police report and you can get an idea of how little the 'think of the children' crowd actually cares about stopping CSA. Pure hypocrisy. They expect us to surrender our fundamental rights while they look the other way (Sophie Long) on CSA.


Something else I haven’t heard anyone bring up: when child abusers are caught, sentences are often a joke. They often spend less time in prison than low to mid level drug offenders.

CP is rape porn. People who produce or knowingly distribute actual rape porn (of children or adults) should be going to jail for 25+ years. I would not rule out life for extreme cases that also involve other forms of abuse.

Yet as far as I know this isn’t what is being pushed for. Instead we get a dragnet to try to catch low level offenders who will probably barely see prison because sex crimes are just not punished much in our society.

Secondly… look into some of the people who have set up fake CP honey pots. These have included both vigilantes and security researchers trying to get a sense of how much CP is on Tor. Turns out most of these people are imbeciles and catching them is easy. Why don’t police do this more? Because… well… as I said sex crimes are just not a priority.

I really feel like the whole argument is in bad faith. If they really cared about the children there are so many other things that would be both easier and more effective.


Indeed. A case in particular was Benjamin Levin who was the Minister of Education in Ontario

> Levin was ultimately nabbed conversing with police officers posing as single moms. He encouraged them to sexually abuse their kids and in some cases shared photos. [1]

> In one case, Levin sent photographs to a New Zealand police officer, one showing a “close-up of the face of a crying child, her face smeared with black makeup.” Levin suggested to her the image was “hot,” according to parole board documents. Another photo he sent showed a young female bound and leashed, with a gag in her mouth and Levin commented, “Mmm, so hot to imagine a mother doing that to her girl to please her lover.” [1]

> On May 29, 2015, he was sentenced to three years in prison. He only spent 3 months of his sentence in jail before being paroled [2]

[1] https://torontosun.com/2017/10/07/ex-deputy-education-minist... [2] https://en.wikipedia.org/wiki/Benjamin_Levin_(academic)


There are loads and loads of cases like that. If they have good lawyers they don’t even see prison, at least in the US. But we must backdoor everything to catch them so we can slap them on the wrist…?


I think you’re mistaken.

Lots of rape porn is produced by women, for women. Typically novels, and sometimes graphic novels.

It’s legal, and definitely not something that takes with it a 25+ year sentence.

You can literally find rape porn on Amazon bookshelves. Literally.


Clearly they were talking about non-fictitious rape.


This. Everything that is happening now is downstream of some terrible laws.


There are a lot of articles about Apple's hash algorithm, and for me they are mostly irrelevant to the main problem.

The main problem is that Apple has backdoored my device.

More types of bad images or other files will be scanned since now Apple does not have plausible deniability to defend against any of the government's requests.

In the future, a 'false' positive that happens to be of a political file that crept into the list could pinpoint people for a future dictator wannabe.

It’s always about the children or terrorism.


They could have done all that without telling you. And as long as the traffic was combined with normal traffic, no one would ever notice (and in this case it would end up mixed with normal traffic since it only applies to images being uploaded to iCloud, so communication with Apple's servers would be expected).

What it looks like to me is that Apple is planning on releasing end-to-end encryption for iCloud. But they know that whenever E2EE comes up, people get mad that terrorists, child molesters, and mass shooters can hide their data and communications. Hell, they've been painted as the villain when they say they can't unlock iPhones for the FBI. This heads off those concerns for the most common out of those crimes.


Sticking to the apt analogy from the article:

> To reiterate: scanning your device is not a privacy risk, but copying files from your device without any notice is definitely a privacy issue.

> Think of it this way: Your landlord owns your property, but in the United States, he cannot enter any time he wants. In order to enter, the landlord must have permission, give prior notice, or have cause. Any other reason is trespassing. Moreover, if the landlord takes anything, then it's theft. Apple's license agreement says that they own the operating system, but that doesn't give them permission to search whenever they want or to take content.

This viewpoint is like thanking your landlord for warning you that they are going to enter your home and root through your private items, all in the name of some greater good. Let's not spin it as if the landlord is doing us a favor in this scenario.


That first line of the quote is misrepresenting what Apple is doing. They are not copying files from your device. You are sending them the files. As it stands the only images that will be scanned are the ones you are uploading to iCloud. And I'd be shocked if they weren't already analyzing those images on the server side.

When it comes to governments being able to pressure them into being more invasive, nothing has changed with this update. If a government wanted to poison the CSAM database, they could have already. You'd end up reported when the server does the scanning. If the government wanted to expand scanning to include things that you're not uploading, they already could have asked Apple to do that. It would have been possible to silently add a much simpler scanning mechanism or data exfiltration into an update.

This isn't a spin to say anyone is doing us a favor. iCloud should be end-to-end encrypted and there shouldn't be any scanning at all. But why should that opinion on how we should treat privacy be taken as the only valid opinion? The people who do want the scanning are not simply asking for it because they are stupid or uninformed. Instead they put different weights into what they value.


> What it looks like to me is that Apple is planning on releasing end-to-end encryption for iCloud

This is Gruber's optimistic take on it as well. If so, why not make both changes at once? Given that they've walked back E2EE on iCloud before, I'm not holding my breath.


> They could have done all that without telling you.

But in that case it would much more likely be a crime, and it would certainly cost them a tremendous amount of good will.

Your personal computing device is a trusted agent. You cannot use the internet without it, and esp. in lockdown you likely can't realistically live your life without use of the internet. You share with it your most private information, more so even than you do with your other trusted agents like your doctor or lawyers (whom you likely communicate with using the device). Its operation is opaque to you: you're just forced to trust it. As such, your device ethically owes you a duty to act in your best interest, to the greatest extent allowed by the law, not unlike your lawyer's obligation to act in your interest.

Apple is reprogramming customer devices, against the will of many users (presumably at the cost of receiving necessary fixes and security updates if you decline) to make it betray that trust and compromise the confidentiality of the device's user/owner.

The fact that Apple is doing it openly makes it worse in the sense that it undermines your legal recourse for the betrayal. The only recourse people have is the one you see them exercising in this thread: Complaining about it in public and encouraging people to abandon apple products.

E2EE should have been standard a decade ago, certainly since the Snowden revelations. No doubt Apple seeks to gain a commercial advantage by simultaneously improving their service while providing some pretextual dismissal of child abuse concerns. But this gain comes at the cost of deploying and normalizing an automated surveillance infrastructure, one which undermines their product's ethical duty to their customers, and one that could be undetectably retasked to enable genocide by being switched to match on images associated with various religions, ethnicities, or political ideologies.


Eh, I think it is simply the fact that Apple doesn’t want to be associated with individuals violating their terms of service in an unlawful way.

This is a way to root them out and report them to law enforcement.


If this is a prelude to E2E encryption for iCloud they are going to be under TREMENDOUS pressure from law enforcement to expand the list of bad material way beyond just CSAM.


Not if it's only E2EE for photos


They might be able to do it without anyone ever knowing, but at a business level, can't they do it better with everyone knowing and child safety as the excuse?


This is mainly where I come down on this. I care about what they’re doing not why they’re doing it. What they are doing is examining my private files, and that is simply unacceptable.


Apple has front-doored your device from Day 1.


Exactly. The issue is that the iPhone is now a snitch AI for whatever purpose Apple deems fit.


Drug dogs have a 50~80% false positive rate, because after they've been certified as capable of detecting drugs, they get that trained out of them in the field to instead indicate whatever the handler wants indicated.


Drug dogs are "probable cause on four legs".

https://www.npr.org/2017/11/20/563889510/preventing-police-b...


> More types of bad images or other files will be scanned since now Apple does not have plausible deniability to defend against any of the government's requests.

The mechanism doesn’t scan anything except images, and won’t trigger on a single bad image - only a set.

Yes, that set could be something other than child porn, assuming Apple and NCMEC conspire, but this is not a general purpose backdoor.


Literally Apple could just add a different set to the set of hashes they push? That seems very naive.


What seems naive? Adding hashes will only match images, not other files, and even then only if a group is matched, not just one.


Because people take and store images of huge numbers of different things? Like the things mentioned upthread: memes and documents, along with screenshots of a huge number of other things.

And the details around matching (and groups, etc) are trivial to change in a later update.


> or other files will be scanned since now

Ok, so not other files at all then. Just photos of documents.

> And the details around matching (and groups, etc) are trivial to change in a later update.

Ok, so this mechanism can’t do more than is claimed, but they could add a new mechanism later?


As I said, incredibly naive.


You said it, but then you failed to back it up with anything other than your fears.


What reason do we have to trust that the NSA won't knock on the door of apple and ask for a small expansion, as a matter of national security + here is your NDA outlining that any canary tampering will result in jail time? It's a closed system so we would have no way of knowing.


Sure, but that has nothing to do with this mechanism or this discussion.

They could have done that at any time in the past, and could do so in future.

‘NSA could force Apple to do something in secret’ is an evergreen fear just like ‘think of the children’. Such comments get added to every thread about Apple and privacy or security.


It’s one thing to have a company build and keep secret such a system from everyone, from the ground up.

It’s a different thing entirely to do a minor extension to a system that they rolled out publicly that already essentially does this!

The first one will almost certainly be noticed, and will be clearly illegal/violate contracts, and can therefore be identified and rooted out.

The second one you could do trivially for groups of people or targeted individuals; it would be under the radar and probably never noticed - and could be denied unless truly rock-solid evidence existed, which would be easy to prevent from existing if you used the same mechanisms (but different types of matches) you were public about - in a closed ecosystem with a Secure Enclave, for instance. It’s not like anyone is going to be able to do step-by-step instruction debugging on the code running on their iPhone!

There is a long history of this happening. Not everyone is as blatant as the stasi - and even then, no one knew who was working for them or what was tapped until the whole system collapsed and their records became public. It still took a long time to unravel.


> The first one will almost certainly be noticed, and will be clearly illegal/violate contracts, and can therefore be identified and rooted out.

Well since they have made very detailed public statements about the limits of this system and not letting it be misused, they would certainly be in violation of contracts if they did start misusing it.

> if you used the same mechanisms (but different types of matches)

What kinds of ‘different matches’ do you think this mechanism can be used to make?


It also only scans images uploaded to iCloud


It does that on the device. All it takes is enabling it by default, instead of triggering the scan prior to upload, and adding more hashes to flag. The result would be a total loss of privacy; basically, Apple is privatising mass surveillance with that. Without oversight, no matter how small, or accountability.


How would this let them scan the text of your signal messages or even your iMessages?

How would this let them search for a subversive PDF?

It wouldn’t. This is just fearmongering.

> The result would be a total loss of privacy, basically Apple is privatising mass surveillance with that. Without oversight, no matter how small, or accountability.

You are talking about an imagined system someone could build in the future, not the system Apple has put in place.


"Don't worry this malware is only triggered if you double click it. -- signed the author"


> The main problem is that Apple has backdoored my device.

Isn't that the shtick with Apple though? That they own the devices you rent and you don't have to worry too much about it. They always had the backdoor in place, they used it for software updates. Now they will also use it for another thing.


> That they own the devices you rent and you don't have to worry too much about it.

You didn't need to worry about it because they did a sufficiently good job at making the choices for you. This is a sign that they stopped doing so.

An appropriate metaphor might be a secretary. They can handle a lot of busy work for you so you don't have to worry about it, but they need access to your calendar, mails etc. to do so. This is not an intrusion as long as they work on your favor. If you suddenly find your mails on the desk of your competitor, though, you might reconsider. That, however, does not mean that the whole idea of a secretary is flawed.


I agree. I am just saying that Apple has always had massive backdoors in place. People were always pointing it out as a huge plus for iDevices. So no, the parent's main problem is not the backdoor. Otherwise they would have complained long ago.


As a fan of this blog for longer than I can remember, it's refreshing to hear this particular author's take on this issue, especially considering their background.

I'm glad these issues were addressed in a much more elegant way than I would have put them:

> Apple's technical whitepaper is overly technical -- and yet doesn't give enough information for someone to confirm the implementation. (I cover this type of paper in my blog entry, "Oh Baby, Talk Technical To Me" under "Over-Talk".) In effect, it is a proof by cumbersome notation. This plays to a common fallacy: if it looks really technical, then it must be really good. Similarly, one of Apple's reviewers wrote an entire paper full of mathematical symbols and complex variables. (But the paper looks impressive. Remember kids: a mathematical proof is not the same as a code review.)

> Apple claims that there is a "one in one trillion chance per year of incorrectly flagging a given account". I'm calling bullshit on this.


Reading carefully through the paper, an important part of their calculation for the "one in a trillion" claim seems to rest on the cryptographic threshold approach they are using. In particular, it seems likely to me that the number of matches required for your account to be flagged is relatively high (perhaps a dozen). If that is the case, their hash collision likelihood could be "only" 1 in a million, but it would still be vanishingly unlikely for a typical iCloud user to get a dozen false positives. 1e-6 is _much_ more testable than 1e-12 for the perceptual hashing, and the cryptographic parts of the secret sharing are easy to analyze mathematically.

As a disclaimer, I haven't done the actual math here. This also implies that the risk of your account getting flagged falsely is tightly related to how many images you upload.
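For concreteness, here is a minimal sketch of that calculation with made-up numbers (the per-image rate, library size, and threshold below are all assumptions, not Apple's published parameters):

    from math import comb

    p = 1e-6          # assumed per-image false-positive rate for the perceptual hash
    n = 100_000       # assumed photo count for a (large) iCloud library
    threshold = 12    # assumed number of matches before the account is flagged

    # Leading term of the binomial tail P(X >= threshold); with n*p = 0.1 the
    # higher-order terms are negligible, so this is a good approximation.
    p_flagged = comb(n, threshold) * p**threshold * (1 - p)**(n - threshold)
    print(p_flagged)  # ~2e-21: far below 'one in a trillion' under these assumptions

So even a fairly mediocre per-image collision rate can be turned into an impressive per-account number by the threshold, which is exactly why the per-image rate is the figure that actually needs independent testing.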


You're assuming that perceptual hashes are uniformly distributed, but that's not the case. If I post a picture of my kid at the beach I'm far, far more likely to generate perceptual hashes closer to the threshold. Not to mention intimate photos of/with my partner.


Good point about the possibility of capturing a bunch of distinct photos with the same perceptual hash, either by taking a burst of photos or by editing one photo a bunch of times. I guess a better implementation would never upload two different encryption keys for the same perceptual hash and just send dummy data instead, but I haven't seen any indication that they actually do that.


Yep. What if I take a burst of 12 photos that all incorrectly come up as false positives in NeuralHash (which is an ML black box), and an Apple reviewer is now invading my privacy by looking at my photo library?


The technical paper Apple put out that is linked to in the post talks about the risk, but isn’t very helpful:

“Several solutions to this were considered, but ultimately, this issue is addressed by a mechanism outside of the cryptographic protocol.”


Not acceptable for a technology being deployed to hundreds of millions of people.


There was a huge brawl of sorts about "a mathematical proof is not the same as a code review" between Neal Koblitz, Alfred Menezes, etc on the one hand and theoretical crypto community on the other hand wrt "provable security". Here is a site: http://anotherlook.ca/


Regarding that rate, I’m no expert but my guess is that it’s the result of math, not actual testing of 1+ trillion images. This sounds like calling bullshit on “You have a one in a trillion chance of winning the lottery.”


The author addresses this point:

> Perhaps Apple is basing their "1 in 1 trillion" estimate on the number of bits in their hash? With cryptographic hashes (MD5, SHA1, etc.), we can use the number of bits to identify the likelihood of a collision. If the odds are "1 in 1 trillion", then it means the algorithm has about 40 bits for the hash. However, counting the bit size for a hash does not work with perceptual hashes.

> With perceptual hashes, the real question is how often do those specific attributes appear in a photo. This isn't the same as looking at the number of bits in the hash. (Two different pictures of cars will have different perceptual hashes. Two different pictures of similar dogs taken at similar angles will have similar hashes. And two different pictures of white walls will be almost identical.)

> With AI-driven perceptual hashes, including algorithms like Apple's NeuralHash, you don't even know the attributes so you cannot directly test the likelihood. The only real solution is to test by passing through a large number of visually different images. But as I mentioned, I don't think Apple has access to 1 trillion pictures.

> What is the real error rate? We don't know. Apple doesn't seem to know. And since they don't know, they appear to have just thrown out a really big number. As far as I can tell, Apple's claim of "1 in 1 trillion" is a baseless estimate. In this regard, Apple has provided misleading support for their algorithm and misleading accuracy rates.
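A toy way to see that last point, using a simple block-average hash on synthetic images (this is not PhotoDNA or NeuralHash, whose internals aren't public; it only shows that collisions in a perceptual hash are driven by visual content, not by the number of bits):

    import numpy as np

    def average_hash(img, size=8):
        # Downscale by block-averaging to size x size, then set each bit by
        # comparing the block to the global mean -> a 64-bit perceptual hash.
        h, w = img.shape
        img = img[:h - h % size, :w - w % size]
        blocks = img.reshape(size, h // size, size, w // size).mean(axis=(1, 3))
        return (blocks > blocks.mean()).flatten()

    def hamming(a, b):
        return int(np.count_nonzero(a != b))

    rng = np.random.default_rng(0)
    x = np.linspace(0, 255, 256)
    scene = np.tile(x, (256, 1))                          # a horizontal-gradient "photo"
    re_shot = scene + 10 + rng.normal(0, 5, scene.shape)  # brighter, noisier copy: a different file
    other = np.tile(x.reshape(-1, 1), (1, 256))           # a visually different scene

    print(hamming(average_hash(scene), average_hash(re_shot)))  # 0: a "collision" by design
    print(hamming(average_hash(scene), average_hash(other)))    # 32: content actually differs

A naive "64 bits, so 1 in 2^64" argument would call the first result essentially impossible; with a perceptual hash it is the intended behaviour, which is why the collision rate has to be measured empirically on realistic photo sets.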


>18 U.S.C. § 2258A is specific: the data can only be sent to NCMEC. (With 2258A, it is illegal for a service provider to turn over CP photos to the police or the FBI; you can only send it to NCMEC. Then NCMEC will contact the police or FBI.) What Apple has detailed is the intentional distribution (to Apple), collection (at Apple), and access (viewing at Apple) of material that they strongly have reason to believe is CSAM. As it was explained to me by my attorney, that is a felony.

I'm not sure, after reading the article, which of Apple or NCMEC has the more insane system.


This is the part that also caught my eye.

Surely Apple's lawyers have also reviewed the same law, and if it's that clearly defined, how did they justify/explain their approach?


Because Apple (its employees) aren't actually viewing the images, nor transmitting them. They mention somewhere that it's a low res proxy of the image, or something similar.


But wouldn't a low-res image of CP still be classified as CP? I guess the manual verification will be done as a joint venture by Apple and the authorities.


> I guess the manual verification will ve done as a joint venture by apple and the authorities

Do we know this for sure? And even if it is, this is still apparently illegal under current law.


> They mention somewhere that it's a low res proxy of the image, or something similar.

Perceptual hashes are just integer/byte encodings of images that were scaled down and had some transformations applied to them.

If you convert a hash into an array of pixels and reverse the transformations, you'll get some of the original image that was scaled down and hashed.
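As a toy illustration of that idea (a made-up block-mean "hash", not PhotoDNA, whose actual transform isn't public), the "reverse" of such an encoding is just a blurry low-resolution version of the picture:

    import numpy as np

    def toy_hash(img, grid=16):
        # Store each block's mean brightness, quantized to 4 bits (0-15).
        # 16x16 blocks over a 256x256 image -> 256 nibbles: the "hash".
        h, w = img.shape
        img = img[:h - h % grid, :w - w % grid]
        blocks = img.reshape(grid, h // grid, grid, w // grid).mean(axis=(1, 3))
        return np.round(blocks / 255 * 15).astype(np.uint8)

    def reverse(hash_grid, out=256):
        # Undo the quantization and upscale: a recognisable low-res thumbnail.
        thumb = hash_grid.astype(float) / 15 * 255
        scale = out // hash_grid.shape[0]
        return np.kron(thumb, np.ones((scale, scale)))

    x = np.linspace(0, 255, 256)
    photo = np.tile(x, (256, 1)) * np.tile(x.reshape(-1, 1), (1, 256)) / 255
    approx = reverse(toy_hash(photo))
    print(np.abs(photo - approx).mean())   # modest error: the gist of the scene survives

How much of the original a real hash retains depends entirely on how much information the algorithm keeps, which is exactly the detail that isn't published.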


And if you then apply DLSS, you may even get the original image.


Why is it insane that the law places extremely tight controls on the legitimate distribution of CSAM?


Well the trouble is that it makes developing detection nearly impossible.

And the line where the law is crossed is fuzzy. Say you use an AI classifier, at what accuracy is validating the results of that AI a crime? 50.000001%?


What part of criminalizing the obvious course of action that everyone is taught to do (find evidence of illegal activity, give it to the police) makes any iota of sense to you?


There were parents arrested over bath-time and yard-sprinkler photos being processed at photo mats; will the same thing happen with Apple mistakenly reporting parents?

When I worked in telecom, we had an md5sum database to check for this type of content. If you emailed/SMSed/uploaded a file with the same md5sum, your account was flagged and sent to legal to confirm it.
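(For illustration, an exact-hash check of that sort is basically just a set lookup; the digest below is a placeholder, the MD5 of the string "hello", not anything from a real database:)

    import hashlib

    # Placeholder set of "known bad" MD5 digests; in the real system this
    # came from the database that legal maintained.
    known_bad = {"5d41402abc4b2a76b9719d911017c592"}  # md5("hello"), placeholder only

    def md5_of(path):
        h = hashlib.md5()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(1 << 20), b""):
                h.update(chunk)
        return h.hexdigest()

    def should_flag(path):
        # Exact matching only: changing a single byte or re-encoding the file
        # evades it, which is the gap perceptual hashes like PhotoDNA try to close.
        return md5_of(path) in known_bad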

Also, if police were involved, the account was burned to DVD in the datacenter, and only a police officer would touch the DVD; no engineer touched or saw the evidence. (Chain of evidence maintained.)

It's probably changed since I haven't worked in telecom in 15 years, but one thing I've read for years is that the feds know who these people are, where they hang out online, and even ran some of the honeypots. The problem is they leave these sites up to catch the ringleaders; the feds are aware, and they have busts of criminal rings almost every month. Twitter has had accounts reported, and they stay up for years.

I don't think finding the criminals is the problem; it seems like every time this happens, there have been people of interest for years, just not enough law enforcement dedicated to investigating this.

For all the 'defund the police' talk, I think moving some police from traffic duty to Internet crimes would have more of an impact on actual cases being closed. Those crimes lead to racketeering and other organized crime anyway.


> There was parents arrested over bath time and playing in the yard sprinklers, photos being processed at photo mats, will the same thing happen by apple mistakenly reporting parents?

No, because they’re not identifying content, they’re matching it against a set of already-known CSAM that NCMEC maintains. As you go on to say, telecoms and other companies already do this. Apple just advanced the state of the art when it comes to the security and privacy guarantees involved.


A set of unverified hashes that you hope only came from NCMEC. Telecoms do this on their own devices - not yours.

Apple just opened the door for constant searches of your digital devices. If you think it will stop at CSAM you have never read a history book - the single biggest user of UKs camera system originally intended for serious crimes are housing councils checking to see who didn't clean up after their dog.


> A set of unverified hashes that you hope only came from NCMEC. Telecoms do this on their own devices - not yours.

Yes, those are the main things we’re concerned about.

> Apple just opened the door for constant searches of your digital devices.

Specifically, it opens the door to them scanning content which is then end-to-end encrypted, which is the main problem.

I think the jury is out on whether this capability will be abused. Apple has said they will reject requests to use it for other purposes, but who really knows whether they will end up being forced to add hashes that aren’t CSAM?

I agree that both of these are potential problems.


Apple gave up in the face of China. Why not the US too?


Because they have legal avenues in the US that they don’t have in China? Easy.


They had a legal avenue in China called leaving. That was not profitable though, so they compromised their ethics in exchange for money.


> the single biggest user of UKs camera system originally intended for serious crimes are housing councils checking to see who didn't clean up after their dog.

Which camera system? Do you have a citation for that?


That's absolute nonsense but it's one of those things where I'd be interested to try and unpick the provenance of how someone could believe something so ridiculous.


You guys are so dismissive. How dare you look down on some random person without even taking 10 seconds to look it up.

https://www.thewestmorlandgazette.co.uk/news/17327294.cctv-c...

https://www.bbc.com/news/uk-northern-ireland-22792013

Anecdotally my Mum works for Coventry City Council (though she is in events planning) but has noted complaints from colleagues about “busy work” from “fussy old people who keep asking for camera footage” — though Coventry often declines.

Information on those cameras: https://www.coventry.gov.uk/cctv


One or two news reports of local councils maybe using CCTV don't back up your claim.

The UK doesn’t have a super camera system used for minor crimes like you insinuate.

The high camera counts in the UK come from including private CCTV cameras in the data, which are privately owned and not linked together; hence the government is not using a network of cameras to monitor dog poo clean-up as you claim.


Don't know about the UK, but in France they are now using CCTV to fine badly parked delivery guys for a 2-minute stop. While I agree a vehicle parked anywhere can be a big inconvenience and deserves a fine, I don't think that's a big crime justifying deployment of such a surveillance system.

This totally makes sense: having one cop checking 100+ CCTV feeds is far more efficient than a full team walking the streets. Once you've justified the privacy cost and managed to deploy such a system, it's so easy and convenient to use it for something else.


The UK government explicitly lays out a strategy for private cameras to be bought and operated with mandatory rules for police access to footage. [1]

This is on top of the cameras that ARE owned by government entities - 18+ city councils [2]. And it's expanding [3].

Why do you think it matters if they are linked together? Retaining footage and handing it over to police on request (not warrant) is a requirement. The IPA allows collecting this information in bulk (eg from cctv providers) with warrants. [4]

[1] - PDF Warning https://assets.publishing.service.gov.uk/government/uploads/...

[2] - https://www.bbc.com/news/uk-england-17116526

[3] - https://abcnews.go.com/International/wireStory/uk-funds-stre...

[4] - https://www.legislation.gov.uk/ukpga/2016/25/contents/enacte...


I don't think any of your links remotely substantiate what you claimed ("the single biggest user of UKs camera system originally intended for serious crimes are housing councils checking to see who didn't clean up after their dog.").

What even is the "camera system originally intended for serious crimes"?


It's a lossy hash match though. If it wasn't, then subtly re-encoding the image would hide it. So they're definitely going to be mistakenly matching images.


And any mistakes would be caught by the manual review.


Which will have its own rate of false positives. Manual review is not a panacea, especially if the original net is cast widely.


So now Apple gets to paw through your images without a warrant or consent?


You consent in the terms of service for iCloud Photos.


Try that on your next date and let me know how it goes for you.


The date will probably leave, so why would people "on a date" with iCloud Photos stay?


I don’t tend to sign contracts with people I date.


You can read a bit more about how it would work here:

https://daringfireball.net/2021/08/apple_child_safety_initia...

(No, not arresting parents over bath time.)


I want to draw attention to one point in this

> Will Apple actually flatly refuse any and all such demands? If they do, it’s all good. If they don’t, and these features creep into surveillance for things like political dissent, copyright infringement, LGBT imagery, or adult pornography — anything at all beyond irrefutable CSAM — it’ll prove disastrous to Apple’s reputation for privacy protection. The EFF seems to see such slipping down the slope as inevitable.

What seems to be missing from this discussion is that Apple is already doing these scans on the iCloud photos they store. Therefore, the slippery slope scenario is already a threat today. What’s stopping Apple from acquiescing to a government request to scan for political content right now, or in any of the past years iCloud photos has existed? The answer is they claim not to and their customers believe them. Nothing changes when the scanning moves on device, though, as the blog mentions, I suspect this is a precursor to allowing more private data in iCloud backups that Apple cannot decrypt even when ordered to.


The slippery slope has already been slid down for every iCloud user in China.


> To reiterate: scanning your device is not a privacy risk, but copying files from your device without any notice is definitely a privacy issue.

Not a lawyer, but I believe this part about legality is inaccurate, because they aren’t copying your photos without notice. The feature is not harvesting suspect photos from a device, it is attaching data to all photos before they are uploaded to Apple’s servers. If you’re not using iCloud Photos, the feature will not be activated. Furthermore, they’re not knowingly transferring CSAM, because the system is designed only to notify them when a certain “threshold” of suspect images has been crossed.

In this way it’s identical in practice to what Google and Facebook are already doing with photos that end up on their servers, they just run the check before the upload instead of after. I certainly have reservations about their technique here, but this argument doesn’t add up to me.


This basic implementation fact has been misrepresented over and over and over again. Does anyone read anymore? I’m starting to get really concerned. The hacker community is where I’ve turned to be more informed, away from the clickbait. But I’m being let down.


Agreed - so disappointing.

The idea that standard moderation steps are a felony is such a stretch. Almost all the major players have folks doing content screening and management - and yes, this may involve the provider transmitting/copying etc. images that are then flagged and moderated away.

The idea that this is a felony is ridiculous.

The other piece is that folks are making a lot of assumptions about how this works, then claiming things are felonies.

Does it not strain credibility slightly that Apple, with its team of lawyers, has decided, instead of blocking CSAM, to commit CSAM felonies? And the govt is going to bust them for this? Really? They are doing what the govt wants and using automation to drive down the number of images someone will look at and what might even get transferred to Apple's servers in the first place.


Does the law have a moderation carve out? There are plenty of laws that have what's called 'strict liability' where your intent doesn't matter.

I'm not suggesting that this is absolutely positively a situation where strict liability exists and that moderation isn't allowed. But the idea that "hey we're trying to do the right thing here" will be honored in court is....not obvious.


If we investigated this author (they do in fact run a photo service), we would inevitably find that, unless they are incompetent, they have to moderate content, either blind or based on flags.

So if Apple is going to jail for child porn because they moderate/report content after flagging (reporting is normally actually required), then this article's writer should be going to jail as well - I guarantee his service stores, forwards, and otherwise handles CSAM content.

My complaint is just that HN used to focus on stuff where folks didn't always jump to worst-case arguments (i.e., Apple is guilty of child porn and is committing felonies) without at least allowing that Apple MAY have given this a tiny bit of thought.

It's just tiresome to wade through. It's a mashup of everything from 'they are blocking too much' and 'they are the evil govt henchperson' to 'they are breaking the law and going to jail on felony child porn charges.'

I get that it generates interaction (here I am), but it's annoying after a while. Clickbait sells though no question so things like "One Bad Apple" are probably going to keep on coming at us.


Well, there is a difference: as stated in the article, they don't expect to see CP or CSAM, and it says "We are not 'knowingly' seeing it since it makes up less than 0.06% of the uploads ... We do not intentionally look for CP."

Whereas Apple is moderating the suspected images so they intentionally look for CP (which, according to the author and his lawyer, is a crime).


This is such a pathetic interpretation. All flagging systems (which is how moderation works - Facebook does not manually review every photo posted) alert the company that there may be a problem. Moderators do their thing. They EXPECT to see bad content based on these flags. Smaller places may act on complaints.

The idea that this makes them guilty of felony child porn charges is so ridiculous and offensive.

Facebook (with Insta) alone is dealing with 20 million photos a year -

https://www.businessinsider.com/facebook-instagram-report-20...

This lawyer is an absolute idiot.

How about we ask the actual folks involved in this (NCMEC) what they think about Apple's "felonies". Maybe they have some experts?

Oh wait, the folks actually dealing with this, the people who have to handle all this crap - are writing letters THANKING apple for helping reduce the spread of this crap.

So - we have a big company like Apple (with a ton of folks looking at this sort of thing). We have the National Center for Missing and Exploited Children looking at this. And we are being told - by some guy who will not even name the attorney and law firm reaching this opinion, that apple is committing child porn felonies.

Does no one see how making these types of horribly supported explosive claims just trashes discourse? Apple are child pornographers! So and so is horrible for X.

Can folks dial it back a TINY bit - or is the outrage factory the only thing running these days?


Yeah, as usual I'm worried that the people who claim others don't read are the ones not reading (or not comprehending) what the author is trying to say. To me it seems like moderation in general is fine. What Apple is doing here is that after they receive a flag that a certain threshold has been crossed, they manually review the material. The author states that no one should do that, i.e., the law explicitly prohibits anyone from even trying to verify. If you suspect CP, you've got to forward it to NCMEC and be done with it.

I 100% understand why Apple doesn't want to do that - automatic forwarding - they're clearly worried about false positives. I also think Apple has competent lawyers. It's entirely possible that the author's and their lawyer's interpretation is wrong.

Point is - the author isn't trying to say moderation is illegal.


The whole thing rests on whether Apple knows that the content is CSAM or not. And they don’t. The author gets this fundamentally wrong. They do not know whether it is a match or not when the voucher is created. The process does, but they don’t. They know when the system detects a threshold number of matches in the account, and they can then verify the matches.

Additionally, we already know they consulted with NCMEC on this because of the internal memos that leaked the other day, both from Apple leadership and a letter NCMEC sent congratulating them on their new system. If you think they haven’t evaluated the legality of what they’re doing, you’re just wrong.


What does "manual review" mean then and how are those images reported?


Before: you would upload images to iCloud Photos. Apple can access your images in iCloud Photos, but it does not.

Now: You upload images to iCloud Photos. When doing so, your device also uploads a separate safety voucher for the image. If there are enough vouchers for CSAM matched images in your library, Apple gains the ability to access the data in the vouchers for images matching CSAM. One of the data elements in the voucher is an “image derivative” (probably a thumbnail) which is manually reviewed. If the image derivative also looks like CSAM, Apple files a report with NCMEC’s CyberTip line. Apple can (for now) access the image you stored in iCloud, but it does not. All the data it needs is in the safety voucher.

Lot of words spilled on this topic, yet I’d be surprised if a majority of people are even aware of these basic facts about the system operation.
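
To make the flow described above concrete, here is a deliberately oversimplified Python sketch. The real design hides the match bit behind private set intersection and threshold secret sharing, so neither the device nor the server learns anything below the threshold; this sketch replaces all of that cryptography with plain bookkeeping, and the threshold value, function names, and "image derivative" format are assumptions, not Apple's published parameters.

    import hashlib

    THRESHOLD = 30  # assumed value; Apple has not published the real number

    def neural_hash(photo_bytes):
        # stand-in for the perceptual NeuralHash; a real perceptual hash is
        # robust to resizing/recompression, which this placeholder is not
        return hashlib.sha256(photo_bytes).hexdigest()

    class SafetyVoucher:
        def __init__(self, matched, image_derivative):
            self.matched = matched                    # hidden by crypto in the real design
            self.image_derivative = image_derivative  # e.g. a low-res thumbnail

    def make_voucher(photo_bytes, known_hashes):
        thumbnail = photo_bytes[:64]                  # placeholder "image derivative"
        return SafetyVoucher(neural_hash(photo_bytes) in known_hashes, thumbnail)

    def server_review(vouchers):
        matches = [v for v in vouchers if v.matched]
        if len(matches) < THRESHOLD:
            return []                                 # nothing is decryptable yet
        # only past the threshold can the derivatives be opened for human review
        return [v.image_derivative for v in matches]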


Thank you for this explanation. Much more helpful than any of the lengthy articles I've read to date.

I think Apple has botched the rollout of this change by failing to explain clearly how it works. As a result, rumors and misunderstandings have proliferated instead.


Not sure your before is entirely correct. Apple has admitted to scanning iCloud photos, so they are already accessing them at some point.

https://digit.fyi/apple-admits-scanning-photos-uploaded-to-i...


The before is entirely correct. Only iCloud Mail was previously scanned for CSAM. As a sanity check: it's not plausible that Apple only generated O(100) referrals to CyberTip annually if it were scanning all iCloud Photos. Other services of similar scale generate O(1M) referrals.


> access the data in the vouchers for images matching CSAM. One of the data elements in the voucher is an “image derivative” (probably a thumbnail)

So the author of the article is technically correct: Apple intentionally uploads CP to their servers for manual review which is explicitly forbidden by law.

He even describes the issue with thumbnails


It is exceedingly unlikely that a system developed with NCMEC’s support and a Fortune 5 legal team somehow fails to comply with the most obviously relevant laws.


I'd say: A trillion-dollar company and a government agency can do whatever they feel like, and laws be damned :)


As I understand it:

When you choose to upload your images to iCloud (which currently happens without end-to-end encryption), your phone generates some form of encrypted ticket. In the future, the images will be encrypted, with a backdoor key encoded in the tickets.

If Apple receives enough images that were considered a match, the tickets become decryptable (I think I saw Shamir's Secret Sharing mentioned for this step). Right now, Apple doesn't need that because they have unencrypted images, in a future scheme, decrypting these tickets will allow them to decrypt your images.

(I've simplified a bit, I believe there's a second layer that they claim will only give them access to the offending images. I have not studied their approach deeply.)
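The threshold step can be illustrated with textbook Shamir secret sharing: the key that unlocks the vouchers is split into shares, one share is released per matching image, and the key only becomes recoverable once the threshold number of shares is present. This is a generic sketch of the idea, not Apple's actual construction (which layers private set intersection on top), and the threshold and share counts here are arbitrary.

    import random

    P = 2**127 - 1  # prime modulus for the finite field

    def split(secret, threshold, num_shares):
        # random polynomial of degree threshold-1 with constant term = secret
        coeffs = [secret] + [random.randrange(P) for _ in range(threshold - 1)]
        f = lambda x: sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P
        return [(x, f(x)) for x in range(1, num_shares + 1)]

    def reconstruct(shares):
        # Lagrange interpolation at x = 0 recovers the constant term (the secret)
        secret = 0
        for xi, yi in shares:
            num, den = 1, 1
            for xj, _ in shares:
                if xj != xi:
                    num = num * (-xj) % P
                    den = den * (xi - xj) % P
            secret = (secret + yi * num * pow(den, -1, P)) % P
        return secret

    key = random.randrange(P)
    shares = split(key, threshold=10, num_shares=30)  # e.g. one share per matched image
    assert reconstruct(shares[:10]) == key            # 10 matches: key recoverable
    print(reconstruct(shares[:9]) == key)             # 9 matches: False (with overwhelming probability)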


These are not “claims.” The process by which they get access to only the safety vouchers for images matching CSAM is private set intersection and comes with a cryptographic proof.

In no step of the proposal does Apple access the images you store in iCloud. All access is through the associated data in the safety voucher. This design allows Apple to switch iCloud storage to end to end encrypted with no protocol changes.


The private set intersection is part of the protocol to shield Apple (and their database providers) from accountability, not to protect the users privacy.

They could instead send the list of hashes to the device (which they already must trust is faithfully computing the local hash) and just let the device report when there are hits. It would be much more CPU and bandwidth efficient, too.

The PSI serves the purpose that if Apple starts sending out hashes for popular lawful images connected to particular religions, ethnicity, or political ideologies that it is information theoretically impossible for anyone to detect the abuse. It also makes it impossible to tell if different users are being tested against different lists, e.g. if Thai users were being tested against political cartoons that insult the king.
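
For comparison, here is a sketch of the simpler non-PSI design described above: ship the hash list to the device and have the device report hits. Everything here is hypothetical (and a cryptographic hash stands in for a perceptual one); the point is only that this variant is cheaper but exposes the list and the match results to the client, which is exactly what the PSI construction avoids.

    import hashlib

    def scan_library(photos, blocklist_hashes):
        # 'photos' maps photo_id -> raw bytes; 'blocklist_hashes' ships with the OS
        hits = []
        for photo_id, photo_bytes in photos.items():
            digest = hashlib.sha256(photo_bytes).hexdigest()  # stand-in for a perceptual hash
            if digest in blocklist_hashes:
                hits.append(photo_id)
        return hits  # the device would report only this list, not the photos themselves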


The list of hashes is confidential. Good luck getting NCMEC to sign off on an implementation which lets clients infer which photos are matching their database.

The database is embedded into iOS. There are at least three primary sources which say that users will not receive different databases, and it should be easily confirmed.


I am well aware but that is exactly the point. If Apple can't provide an accountable implementation they should not implement this at all. This should be table stakes that all users should demand, at a minimum.

Otherwise there is no way to detect if the system is abused to target lawful activities.

The fancy crypto in the system isn't there to protect the user, it's to guard the system's implementer(s) against accountability. It protects Apple's privacy, not yours.


What good is end to end encryption if the OS is prebuilt with a method of breaking that encryption? This is definitional backdooring, and you’re back to trusting Apple’s goodwill (and/or willingness to resist governments) to keep your data safe (I.e., not add new decryptable triggers).

Not having backdoors is a hard requirement for end to end encryption offering privacy guarantees.


This is taking the discussion into the realm of hypothetical. If we end up in a world where there are reliable public cloud providers that offer end to end encryption with no content scanning whatsoever, I'll be glad to give them my money.


There's a weird meme in the hacker community that law enforcement has no right to use any means to enforce the law, and can only bust people who turn themselves in.


Legality aside – how is this not a privacy risk? Privileged users of the infrastructure can gain information about users (whether they possess CSAM that's in the hash-database... for now).


Presumably the reviewers would not know the identity of the user whose photos are under review, as they have no need to.


Unless you're a public figure or celebrity. If I were, I wouldn't use iCloud photos, but that's not exactly how Apple markets their photo service.


... but the link between user and photo obviously exists somewhere in Apple's system.


> they’re not knowingly transferring CSAM, because the system is designed only to notify them when a certain “threshold” of suspect images has been crossed

And when they’re notified, Apple manually checks (a modified but legible version of) the images.


Wow, 20% of images being misclassified as CP would be terrifying. I never thought I'd say it but my next phone will definitely not be an iPhone or any device that is uploading my information to another server without my permission for any reason.

Is there even any evidence that arresting people with the wrong bit pattern on the computer helps stop child rape/trafficking? If so, why aren't we also going after people who go to gore websites? There's tons of awful material out there easily accessible of people getting stabbed, shot, murdered, butchered, etc. Do we not want to find people who are keeping collections of this material on their computers? And if so, what about people who really like graphic horror movies like Saw or Hostel? Obviously it's not real violence, but it's definitely close enough, and if you like watching that stuff, maybe you should be on a list? If your neighbor to the left of you has videos of naked children, and your neighbor to the right has videos of people getting stabbed and tortured to death, only one should be arrested and put on a list?

This is all not even taking into account that someone might not even realize they are in possession of CP because someone else put it on their device. I've heard there are tons of services marketed on the dark net where you pay someone $X00 in bitcoin and they remotely upload CP to any target's computer.

It seems like we are going down a very scary and dangerous path.


> If your neighbor to the left of you has videos of naked children, and your neighbor to the right has videos of people getting stabbed and tortured to death, only one should be arrested and put on a list?

Why is it like this? Why are we not jailing people who enjoy watching gore?


For the same reason why rape is a taboo in American movies and murder is not - US still is a deeply Puritan country [1][2][3].

[1]http://www.princeton.edu/~sociolog/pdf/asmith1.pdf

[2]https://davetannenbaum.github.io/documents/Implicit%20Purita...

[3]https://lib.ugent.be/fulltxt/RUG01/002/478/832/RUG01-0024788...


My only initial guess is that the financial incentives are not the same. The "problem" of people getting murdered or non-consensually tortured specifically for entertainment and distribution is much lower than the problem of the economics of child sex abuse.


You did agree to upload your information to another server when you turned on iCloud Photo Library.


I don't have iCloud Photo Library


Then you don’t have to worry about it uploading anything.


...yet.


I'm honestly shocked that Apple is buying into this because it's one of those well-intentioned ideas that is just incredibly bad. It also goes to show you can justify pretty much anything by saying it fights terrorism or child exploitation.

We went through this 20+ years ago when US companies then couldn't export "strong" encryption (being stronger than 40 bits if you can believe that). Even at the time that was ridiculously low.

We then moved onto cryptographic back doors, which seem like a good idea but aren't for the obvious reason that if a backdoor exists, it will be exploited by someone you didn't intend or used by an authorized party in an unintended way (parallel construction anyone?).

So these photos exist on Apple servers but what they're proposing, if I understand it correctly, is that that data will no longer be protected on their servers. That is, human review will be required in some cases. By definition that means the data can be decrypted. Of course it'll be by (or intended to be by) authorized individuals using a secured, audited system.

But a backdoor now exists.

Also, what controls exist on those who have to review the material? What if it's a nude photo of an adult celebrity? How confident are we that someone can't take a snap of that on their own phone and sell it or distribute it online? It doesn't have to be a celebrity either of course.

Here's another issue: in some jurisdictions it's technically a case of distributing CSAM to have a naked photo of yourself (if you're underage) on your own phone. It's just another overly broad, badly written statute thrown together in the hysteria of "won't anybody think of the children?" but it's still a problem.

Will Apple's system identify such photos and lead to people getting prosecuted for their own photos?

What's next after this? Uploading your browsing history to see if you visit any known CSAM trafficking sites or view any such material?

This needs to be killed.


I don't know how I feel about all of this yet (still trying to understand better), but your post implies that you've made a lot of incorrect assumptions about how this system works.

For example, the main system in discussion never sends the image to Apple, only a "visual proxy", and furthermore, it only aims to identify known (previously cataloged) CSAM.

There's a [good primer of this on Daring Fireball](https://daringfireball.net/2021/08/apple_child_safety_initia...)


If the visual proxy is enough to determine CSAM from non-CSAM, it's a significant invasion of privacy. Sure a thumbnail is less information than full-res but not that much less.


FWIW I'm not defending this, but it's important to get the facts correct.

1) Someone can't just randomly review one of your images. The implementation is built on threshold secret sharing, so the visual derivative can't be reviewed (is cryptographically secure) unless you hit the threshold of matched content.

2) You're uploading these files to iCloud, which is currently not end-to-end encrypted. So these photos can be reviewed in the current iCloud regime.


Yeah aware of this.

1) Still, I'm unable to audit this protocol which has a threshold I'm not allowed to know. It also always comes back to control over the "hash" DB. If you can add anything to it (as apple could), then the threshold part becomes more trivial.

2) My understanding was that they currently don't but perhaps I'm incorrect. I know for a fact that they give access to law enforcement if there's a subpoena however. Also, there is a difference in terms of building in local scanning functionality. When it's done on their server, they can only ever access what I have sent. Otherwise, the line is much fuzzier (even if the feature promises to only scan iCloud photos).


Legally, a visual proxy of CP is CP


And?

My point about visual proxies was in reference to the OP's point:

> Also, what controls exist on those who have to review the material? What if it's a nude photo of an adult celebrity? How confident are we that someone can't take a snap of that on their own phone and sell it or distribute it online? It doesn't have to be a celebrity either of course.

I never said that a visual proxy/derivitive wasn't CSAM.

I assume your point had something to do with the legality of sending this data to Apple for review?

I'm not a lawyer, and I have read that NCMEC is the only entity with a legal carve out for possessing CSAM, but if FB and Google already have teams of reviewers for this type of material and other abuse images, I imagine there must be a legal way for this type of review to take place. I mean, these were all images that were being uploaded to iCloud anyway.


>So these photos exist on Apple servers but what they're proposing, if I understand it correctly, is that that data will no longer be protected on their servers. That is, human review will be required in some cases. By definition that means the data can be decrypted. Of course it'll be by (or intended to be by) authorized individuals using a secured, audited system.

Apple has always had the decryption keys for encrypted photos stored in iCloud, so this isn't new. They never claimed that your photos were end-to-end encrypted. I'm not sure how this is a "backdoor" unless you think there's a risk of either something like AES getting broken or Apple storing the keys in a way that's insecure, both of which seem unlikely to me.

>Also, what controls exist on those who have to review the material? What if it's a nude photo of an adult celebrity? How confident are we that someone can't take a snap of that on their own phone and sell it or distribute it online? It doesn't have to be a celebrity either of course.

I'm equally interested in the review process. But while perceptual hash collisions are possible, it seems unlikely that multiple random nude photos on the same device would almost exactly match known CSAM content, which is the threshold for Apple reviewing the content.


OP's article cites the number of NCMEC reports from Apple vs. other tech giants (200-something vs. 20-million-something at Facebook). It is all a bit confusing, and I expect most of us are learning more about iCloud than we ever planned on; Apple has been able to decrypt our iCloud photos all along, but those reporting figures make it pretty clear that they haven't been doing so en masse. This is a big shift.


> Will Apple's system identify such photos and lead to people getting prosecuted for their own photos?

No, no, no. As has been said a billion times by now, this system matches copies of specific CSAM photographs in the NCMEC’s database.


This feels like missing the forest for the trees — Steve Jobs said many times, to the effect of: ‘it doesn’t matter how any of this stuff happens, gigahertz, RAM, speeds; it only matters that the user gets what they want.’

Right now Apple’s biggest unhappy user is the DOJ. As it stands with the legislation coming down the pipe and both previous administrations building on a keenness to ‘get something done’ about big tech, Apple will do as they’ve done in China and ‘obey the laws in each jurisdiction.’

Right now there are a lot of unwritten laws that say Apple better play right or lose quite a bit more —

So, how it’s getting done is a side show.

That said, it wasn’t long ago that they stood toe to toe with the FBI — but there also weren’t wonderfully strong ‘sanctions’ on the horizon.


Why do elected officials act as fake representatives to the people that elected them in the first place? Has it always been this way? It doesn’t matter left or right. The governing bodies should obey the people not the other way around.


If the people had their way, those suspected of child sex crimes wouldn’t even get trials. Things like privacy and due process only exist to the extent that a ruling class has the power to impose their own values, contrary to popular will.


These cases of people actually claiming to want others locked up without due process are pushed by a vocal minority (same with us - even Hacker News posts with maybe a million views mean less than 1% of U.S. adults read them). Almost everyone wants due process, since we want the truth to come out in the most verifiable way, and the court system provides that, even if it involves humans that can be bribed, persuaded, and manipulated by rich & powerful people.


I’m one of the people and that’s not my way.

I wonder where you got your data from.


This made me come up with a thought experiment. If you sampled a thousand parents and asked them if they think sex offenders deserve a fair and just trial, what do you think the result would be?


That question is framed wrong. What you should be asking is "Should people accused of such crimes get a fair and just trial?". This makes it clear that it takes me one sentence to put you in that very position. And that's exactly the problem. Your formulation erases any question of guilt, which makes the discussion nearly impossible to win for the privacy side.


No, what they’re saying is that most people already presume guilt, even if it’s just an accusation. Further, most would be willing to engage in mob justice. I feel like you’re doing a lot of work to miss that point.


The thread parent stated:

> If the people had their way, those suspected of child sex crimes wouldn’t even get trials.

Emphasis on suspected.

I agree that the question, as phrased, would be interesting and you'd probably be right that a lot of people would condone or even engage in mob justice. Given this thread, however, it seems that the actual question posed is the one I formulated.

As a side note, my formulation would also be the one to find out how likely people are to presume guilt, as the original formulation confirms guilt directly in the prompt.



> The governing bodies should obey the people not the other way around.

Then they wouldn't be the governing body. By definition the governing body does not obey the people; they govern the people.


Isn't a democratic governing body representing the will of the people and society?


No, they represent themselves.


I can't say this for certain, but I suspect most people are actually happy about this sort of thing; their elected representatives are doing exactly what they want.


That is interesting. I haven't met anyone who thinks privacy erosion is a good thing. We don't need to frame it as digital vs physical anymore. There is no distinction as it relates to personal privacy.


You may underestimate how much popular democratic support there is for stopping child rapists, at the cost of theoretical or even actual but low-probability privacy risks.


I appreciate just about everything about this post, but this part keeps getting lost in everything I see written about it:

>As noted, Apple says that they will scan your Apple device for CSAM material. If they find something that they think matches, then they will send it to Apple. The problem is that you don't know which pictures will be sent to Apple.

It's iCloud Photos. Apple has explicitly said it's iCloud photos. If it's being synced to iCloud Photos, you know it's getting scanned one way or another (server side, currently, or client side, going forward).

It notes privacy issues, but... iCloud syncs by default. You wouldn't do the kind of work they're talking about (e.g, investigation) and store that kind of material where it could be synced to a server to begin with.

Everyone keeps proclaiming that Apple is scanning your entire device, but that's not what's happening with this change. It's not even comparable to A/V in this respect - it would be a very different story if that was the case. The wording and explanation matters.


The use of the detection algorithm is for iCloud only today.

Now that the technology is on board the device, how many lines of code do you think it will take to scan the full photo roll?

Do you think that this ability will not tempt LEAs, lawmakers, and governments to push for that ever-so-small change to the code, either for blanket monitoring (see if China is not tempted, using their own database of "illegal" content) or for targeted monitoring (of some specific users, with or without valid court orders)?

The main issue is that the wall has been breached: monitoring data that was otherwise only on-device is now possible with little to no change as the feature is now embedded in the OS.

You can argue we're not there yet, that we can trust Apple to do the right thing, and that the Rule of Law will protect citizens against abuse, but this is a big step in a worrying trend, and not all countries follow the Rule of Law or have checks and balances to avoid misuse. Don't forget that Apple abides by the laws of the countries where it sells its devices. That means they will - forced or not - do what they are told.


Based on this article (and my first rebuttal[0]) I actually think the whole "only photos syncing to icloud photos" part of it is the defining factor making it legal - if Apple specifically only sent themselves photos that were detected as CSAM, it would likely be a felony, while the planned system can use the reasoning I laid out to likely not be charged with such felony.

0: https://news.ycombinator.com/item?id=28112312


Oooooh. That makes sense.


>The use of the detection algorithm is for iCloud only today.

Yes, which is what I was saying in my comment. If or when it comes to Apple changing this then I would agree it's a battle worth fighting, but that is not what is happening here and that is not what I was correcting in this article itself.

>The main issue is that the wall has been breached:

The wall was breached when we opted to run proprietary OS systems. You have zero clue what is going on in that OS and whether it's reporting; you have to trust the vendor on some level and Apple is being fairly transparent here. I would be far more worried if they did this without saying anything at all.


> Yes, which is what I was saying in my comment. If or when it comes to Apple changing this then I would agree it's a battle worth fighting, but that is not what is happening here and that is not what I was correcting in this article itself.

Isn't it too late to fight the battle then? They've already built the infrastructure to make it trivial to scan any file on your device.


It's their operating system! Files are already "scanned" by their processes, how do you think they get from the camera to the hard drive?


I'd imagine that they do that by writing the bits to the hard drive without creating a fuzzy hash used to match your picture with those in some organization's list?


Sorry, perhaps I misunderstood what you meant by "infrastructure".

Apple have always had the ability to do great evil to a lot of people, this update doesn't change that. They haven't gained any power they didn't already have.

The government, for example, do not currently have the infrastructure to push updates to iPhone. If they passed laws, built servers etc to allow this then that would be a meaningful change that would be worth all this chatter.


> The government, for example, do not currently have the infrastructure to push updates to iPhone

That's the point; this is a slippery slope -- without this system, governments had no way to compel Apple to scan for objectionable photos. Apple could claim, rightly so, that due to encryption technology and privacy, they had no way to do it. But now they've removed both the technological and privacy hurdle, and it's just a matter of logistics.

Now governments know that all they need is a database of banned photos and they can go to Apple and say "In our country, it's illegal to share photos that put our government in a bad light. Here's a database of banned photos. If you don't comply, you can't sell your phones to our 1.4 billion citizens".


I only disagree that Apple could have previously "rightly" said it was impossible.

If China had previously demanded they scan users photos for certain material, then Apple could always have done so.

Sure, it would involve pushing an update to all phones, but so would the change you are talking about (to check more hashes).


Is the problem here, as so many commenters complain, that Apple won't resist the corrupt government, or is that that society as whole won't overthrow the corrupt government?


The entire technical infrastructure to scan your entire device for arbitrary content is being built and deployed.

The only change necessary to scan other files is changing a path and that's configuration that could even be silently done per-device.


I love that this is the (correct) response here. /r/Apple is full of morons that read the press release and honestly believe making this system scan the whole file system is impossible.


>The entire technical infrastructure to scan your entire device for arbitrary content is being built and deployed.

It's a proprietary OS. This literally could exist already and you would have zero clue.


The difference between open and closed source is not lost on anyone on this site. There is a grand-canyon-sized gap between 'could already exist and you have no clue' and 'we have built and are deploying this system'.


This reads like a failure of the NCMEC, and the legal system surrounding it.

It is insane that using perceptual hashes is likely illegal: the hashes are actually somewhat reversible, so possession of the hash is itself arguably a criminal offence. It just shows how twisted up in itself the law is in this area.

One independent image analysis service should not be beating reporting rates of major service providers. And NCMEC should not be acting like detection is a trade secret. Wider detection and reporting is the goal.

And the law as setup prevents developing detection methods. You cannot legally check the results of your detection (which Apple are doing), as that involves transmitting the content to someone other than the NCMEC!


> This reads like a failure of the NCMEC

Yes, it's a "failure" of a "private" Non(lol, technically, wink wink) Governmental Organization who works extremely closely with the FBI to put their camel-shaped nose under the very tent that the FBI happens to have been trying to breach for 20+ years.

Come on. It's beyond gullibility, at this point, to believe that NCMEC isn't an arm of the Feeb. Specifically, it's an arm that isn't required to comply with FOIA requests, which is particularly convenient.

Two years. At the current rate, you have approximately two years until the Feeb have full access to your iDevice. Though, I will admit, Apple's development of the SEP, their high-priced bug bounties, and their convincing play-acting at defying the FBI after the San Bernardino case definitely had me fooled.

We probably should have been more keen after they failed to close the bugs that GreyKey et al. exploited.

But now we know. Everything they gave to China, they will give doubly so to their own corporate domicile.


> And NCMEC should not be acting like detection is a trade secret.

It feels to me like they want to hide their detection algorithms so people don't find out how bad they are.


Doesn’t seem like it was an accident, rather protectionism written long ago to make sure they were the only “game” in town.


There is so much focus on the technical aspects such as probability of mismatch etc.

For me the risk is much more that, through some mechanism outside my control, real CSAM material becomes present on my device. Whether it's a dodgy web site, a spam email, a successful hack attempt or something else like that, I feel like there's a significant chance some day I'll end up with this stuff injected onto my phone without me knowing. So I'm not at all concerned about the technical capacity to accurately match CP etc. In fact I'm even more worried if it's really accurate, because then I know that when this unfortunate event happens I face a huge risk of being immediately flagged before I even know about the content, and then spending years extricating myself from a ruined reputation and a legal system that treats evidence like this with far more trust than it deserves.


The reason everyone is focusing on the technical aspects is that most people will evaluate it as it is planned to be, not as what it could become. This is probably because, at any time, users can switch from Apple to Android in an upgrade cycle. So while perhaps 99% of users are fine with this CSAM scanner in its current state, if Apple expands its usage to something bad, those 99% that stayed can once again evaluate whether they still value the hardware enough to keep it under the new status quo. Therefore, evaluating the system as it is in its current state will help readers make the best personal decisions, and the details will provide context if/when Apple does expand its usage.


I still have WhatsApp on my phone and I have had issues in the past with backups so I have it set to save the media directly to my photos app.

I also have notifications off on it and check it when I need to.

All this needs is someone forwarding me something that’s in the DB.


right? Phones are not at all secure. Dozens (hundreds?) of click-less exploits exist today that can do anything they want with “your” phone at the push of a button.

My phone is not mine. nor is the data on it. nor is yours. That’s the real state of computer security today.

all of this is ill conceived.

The day someone chooses to mass release their worms on iPhone will be a wake up call.


Really nice explanation from someone who knows a thing or two about images/photos (Dr. Neal Krawetz is the creator of https://fotoforensics.com and specializes in computer forensics).


He wrongly interpreted the CSAM scanning. He said that Apple will scan your photos and, if it finds something, it will send the photo to Apple. Which is absolutely not how it works. Photos are only scanned right before uploading to iCloud Photos. Apple already confirmed this to iMore and it's clearly stated in Apple's papers from the press release.


Please stop spreading misinformation.

Apple has built the system to scan your entire phone, their claims about its limited scope are suspect.



You're very clearly missing the forest for the trees. Right before uploading to icloud, "Apple will scan your photos and if finds something, it will send photo to Apple."

This process is automated and turned on on most iPhones. Most iPhones will have automatic photo upload to icloud enabled, and that's when this scanning takes place.


> Most iPhones will have automatic photo upload to icloud enabled,

I highly doubt that, given that each photo is 10+MB and Apple only gives an abysmal 5GB of free iCloud storage, which includes everything else. My iPhone 6s backups got to 4.5GB on their own years ago; turning off photo storage was the only way to avoid paying for iCloud storage outright.


"The problem with AI is that you don't know what attributes it finds important.... It determined that ... a guy with long hair is female."

Lots of people had this problem back in the 60s. Funny - except that some guys were jailed or worse because of it.


I've been a FOSS dev for 25 years and I remember when everyone else I worked with were avid linux/freeBSD users because 'we didn't trust the big end of town'.. over the years I've watched the vast majority of devs move to apple devices for all sorts of 'just works', 'shinier' reasons that just boil down to 'convenience is more important than privacy'.

Perhaps this is just the benefit of longevity but from my POV it was engineer early adoption and advocacy that made Apple, Google Search etc what they are, and it will be engineer early adoption and advocacy that dethrones these problematic companies from controlling the ecosystem..

Back 20 years ago, before the community filled with $_$ dollars-struck startup founders, software was built by people who wanted to use it.. rather than sell it. There are still some people doing this now, Look at Matrix network for instance.

What will it take for a grass-roots software industry to start building privacy-first apps and systems that don't suck, based on decentralised, distributed principles? We have the skills to build highly polished alternatives to these things, but it takes a determination to step away from convenience for a period of time for the sake of privacy.

How bad does it have to get before the dev community realise this? or are we in a frog boiling slowly scenario and it's hopeless?


The author of this article purports to have done a ton of research into this system, but appears to have missed basic information that I’ve acquired from a few podcasts.

Namely the “1-in-a-trillion” false positives per account per year is based on the likelihood of multiple photos matching the database (Apple doesn’t say how many are required to trip their manual screening threshold).


So they are assuming that the photos are independent. Common error with probabilistic reasoning. What if the same photo (or photos of the same scene taken seconds apart) gets uploaded more than once? The likelihood no longer multiplies. I don't believe the "one in a trillion" claim.


They say that they address this issue through a mechanism outside of the cryptographic protocol, but don't say specifically how. The quote from the paper:

A user might store multiple variants or near-duplicates of the same image on their client. In our language, this means that a single client could hold two triples (y,id,ad) and (y,id′,ad) that have same hash y, but different identifiers. This causes an issue that is addressed outside of the cryptographic protocol. Suppose a user copies a single image from a USB drive onto his or her device. The image will be assigned an identifier id. Later the user copies the same image from the USB drive onto a different client device. The new copy of the image will be assigned a new identifier id′ which is likely to be different from id. Because the two copies have different identifiers they will count twice towards the tPSI-AD threshold. In particular, the two triples will cause two distinct Shamir shares to be sent to the server, even though they correspond to the same semantic image. Several solutions to this were considered, but ultimately, this issue is addressed by a mechanism outside of the cryptographic protocol. [0]

[0]: https://www.apple.com/child-safety/pdf/Apple_PSI_System_Secu...
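
One obvious device-side mechanism (a guess on my part; the paper deliberately leaves this outside the protocol) would be to derive the per-image identifier from the perceptual hash itself, so two copies of the same image collapse into one triple and count once toward the threshold:

    import hashlib

    def image_identifier(perceptual_hash: bytes) -> str:
        # Two imports of the same image share a perceptual hash, so they get the
        # same identifier and contribute only one share toward the threshold.
        return hashlib.sha256(perceptual_hash).hexdigest()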


Well, possessing 3 or more images is what puts you past the affirmative defense against CSAM possession[0], so the threshold is probably above that.

0: https://www.law.cornell.edu/uscode/text/18/2252A#:~:text=(d)...


Kim Dotcom did this, and it led to the RIAA saying it obligated the file-sharing service to look for all copyrighted materials. This opens Pandora's box for Apple.


No, it didn't, at least not legally. There is no federal requirement to scan for copyrighted content (which is an impossible task without $MM in capital and a media library of copyrighted content) - YouTube only does it with Content ID to appease the rights holders and not have them remove all their content from YT outright. The only requirement is to respond to DMCA takedown notices, which they were obligated to do before Kim started scanning for CSAM (which I can't find a story for).


>Think of it this way: Your landlord owns your property, but in the United States, he cannot enter any time he wants. In order to enter, the landlord must have permission, give prior notice, or have cause. Any other reason is trespassing. Moreover, if the landlord takes anything, then it's theft. Apple's license agreement says that they own the operating system, but that doesn't give them permission to search whenever they want or to take content.

Yes. That analogy only covers Apple's software, not the hardware. In Apple's view they are selling you the experience. So they are more like hotels: you don't own the hotel room, the bed, the TV or anything inside that room. And in a hotel, they can do room cleaning anytime they want.


> Apple then manually reviews each report to confirm there is a match,

This is always the terrifying part for me. They will access your personal photos or data without telling you. I'm surprised that's even legal given all the laws that are already on the books. Are they immune to the laws cited in the blog?

Also what happens when they launch this in EU, AU, etc with different privacy laws?


I think the article got that wrong. Apple does manually review tagged images, but does not access the original image, but the security voucher containing metadata, including the NeuralHash and a "visual derivative" of it (see Apple's spec [1]).

Also, this only applies to pictures you upload to iCloud. So, it's not like they're accessing your personal photos without telling you.

[1] https://www.apple.com/child-safety/pdf/CSAM_Detection_Techni...


Most iPhones have icloud backups enabled, which will trigger this detection automatically. Most iPhones are set up to upload every photo to icloud in case you lose your phone.


They do tell you. If your account is flagged, you are notified and have the opportunity to challenge it.


The "legal" section talks about local scanning, and possible transmission of CSAM from devices to Apple, in pursuit of verification, however Apple have made clear that the scanning happens only for files that have been uploaded to iCloud Photo Library -- in which case they are not deliberately transmitting the CSAM but rather flagging something which the user already sent them.

Likewise the copyright issue; The user has already sent these files to Apple themselves by enabling iCloud photo library, and Apple are not making any additional copies that I am aware of.

It also says "The problem is that you don't know which pictures will be sent to Apple." - but we do know exactly which pictures will and will not be sent to apple; the ones that are already sent by iCloud Photo library.

[To be clear, I don't like the precedent/slippery slope that this kind of technique might lead towards in the future, but it doesn't seem like all the criticisms of it today are valid]


> However, nothing in the iCloud terms of service grants Apple access to your pictures for use in research projects, such as developing a CSAM scanner. (Apple can deploy new beta features, but Apple cannot arbitrarily use your data.) In effect, they don't have access to your content for testing their CSAM system.

> If Apple wants to crack down on CSAM, then they have to do it on your Apple device.

I don’t understand… Apple can’t change their TOS but they can install this scanning service on your device?


Apple said it is coming in iOS 15 and when you install it, you will need to accept TOS. It might change at that point.


> Apple can’t change their TOS

Why not... has anyone actually successfully sued a company for changing their ToS from under them?


Doesn't every TOS include that clause about the customer automatically accepting any change the company makes to the TOS at any time and without notification?


German banks just fell on their face with this one. They tried to implement fee increases this way (often turning free into paid accounts), but the courts nixed this.

Unfortunately not fast enough, so some banks got away with the first year or two worth of loot due to statutes of limitations, but it's now clear that companies can't just change material parts of the ToS without explicit, active consent (it's not enough to notify customers and consider it agreement if they don't do anything).


Yes, see Douglas v. Talk America.


Thanks for the link.

... but I'm not sure that would apply here, especially if Apple has a pop-up that the user dismissed a long time ago.


It’s unclear what the legal limits are, but I’ve seen some sites do a bold print summary of the changes. Certainly if one wants to be sure their TOS change is enforceable they’ll make sure a judge and jury will agree the changes were prominently advertised.


That’s what I don’t understand about that paragraph.


There is one particular thing I don’t understand about this Apple policy:

You can buy a SIM card and send images to your enemies/competitors through WhatsApp, and these images automatically get downloaded to the iPhone and potentially uploaded to iCloud.

What precautions is Apple taking against such actions? Or will it be some kind of exploitable implementation where you can easily swat any person you want and let them go to court to prove their innocence?


Funnily enough, iMessage is also adopting this as a (default on!) option in iOS 15. Which is also where the anti-CP "feature" drops...


I know of no messaging app that “automatically” uploads to iCloud. And what sort of idiot would try to send CP using a messenger app (almost all of which have CSAM detection)? That’s like trying to smuggle drugs past the TSA so you can put them in someone’s house to frame them.


WhatsApp automatically saves images to the phone, which automatically get uploaded.

So yes, it does work this way. It does on my iPhone.


The app doesn’t. The iOS camera roll uploads newly captured or saved images when it connects to Wi-Fi.


If you choose to save it, then you’re no longer a “victim”. But someone else can’t force a save to iCloud on your device.


Sure, but some apps allow you to choose once and henceforth save all photos sent to you. That provides a vector for someone to inject things into your iCloud.

However, I find this risk exaggerated because if someone actually does this, you can just block the app’s ability to access your camera roll in the privacy settings or stop using the app. And report the user to the service, of course.


The engineers that worked on this should honestly be ashamed of themselves. We need some sort of oath of ethics in computer science.


> To reiterate: scanning your device is not a privacy risk, but copying files from your device without any notice is definitely a privacy issue.

I think the article is wrong about this. Or, right-but-situationally-irrelevant. As far as I can tell from Apple's statements, they're doing this only to photos which are being uploaded to iCloud Photos. So, any photo this is happening to is one that you've already asked Apple to copy to their servers.

> In this case, Apple has a very strong reason to believe they are transferring CSAM material, and they are sending it to Apple -- not NCMEC.

I also suspect this is a fuzzy area, and anything legal would depend on when they can actually be said to be certain there's illegal material involved.

Apple's process seems to be: someone has uploaded photos to iCloud and enough of their photos have tripped this system that they get a human review; if the human agrees it's CSAM, they forward it on to law enforcement. There is a chance of false positives, so the human review step seems necessary...

After all, "Apple has hooked up machine learning to automatically report you to the police for child pornography with no human review" would have been a much worse news week for Apple. :D


> There is a chance of false positives, so the human review step seems necessary...

You misunderstand the purpose of the human review by Apple.

The human review is not due to false positives: The system is designed to have an extremely low rate of hits where the entry isn't in the database and the review invades your privacy regardless of who does it.

The human review exists to legitimize an otherwise unlawful search via a loophole.

The US Government (directly or through effective agencies like NCMEC) is barred from searching or inspecting your private communications without a warrant.

Apple, by virtue of your contractual relationship with them, is free to do so-- so long as they are not coerced to do so by the government. When Apple reviews your communications and finds what they believe to be child porn they're then required to report it and because the government is merely repeating a search that apple already (legally) performed, no warrant is required.

So, Apple "reviews" the hits because, per the courts, if they just sent automated matches without review that wouldn't be sufficient to avoid the need for a warrant.

The extra review step does not exist to protect your privacy: The review itself deprives you of your privacy. The review step exists to suppress your fourth amendment rights.


This is the part I am very concerned about. This is definitely a violation of 4th Amendment rights because images are viewed by humans not on the device. What happened to just on device scanning for them?


It is one thing to scan on the device for individuals violating the terms of service, and to identify and refer them to law enforcement. It’s a WHOLE other thing to have humans somehow review images that are not on the device.


So.. how well does "human review" work with copyright on youtube?

This is basically fearmongering, and saying "if you're not a pedo, you have nothing to fear", installing the tech on all phones, and then using that tech to find the next wikileaks leaker (who was the first person with this photo), trump supporters (just add the trump-beats-cnn-gif to the hashes), anti-china protesters (winnie the pooh photos), etc.

This is basically like forcing everyone to "voluntarily" record inside of their houses, AI on that camera would then recognise drugs, and only those recordings will be sent to the police.


I was pointing out an inaccuracy in the article, not commenting about whether or not this tech is a good idea. I think if we're opposed to it, we should avoid misrepresenting it in arguments.

On which note... it really does seem to be voluntary, as there's an easy opt-out of the process in the form of "not using iCloud Photos". Or Google Photos, or assorted other services, which apparently all do similar scanning.

Yes, there's a slippery-slope argument here, but the actual service-as-it-exists-today really does seem scoped to a cautious examination of possible child porn that's being uploaded to Apple's servers.


> just add the trump-beats-cnn-gif to the hashes

They could literally do this right now server-side and nobody would ever know.


But no one uploads that to iCloud. That's why they must implement this feature client-side ("because if you're not a pedo, you have nothing to worry about"), and then enable it for on-phone-only photos too ("because if you're not a pedo, you have nothing to worry about"). Then use the same feature on OSX ("because if you're not a pedo, you have nothing to worry about").


> Apple claims that there is a "one in one trillion chance per year of incorrectly flagging a given account". I'm calling bullshit on this.

Why? You can get any false positive rate you want if you don't care about the false negative rate.

It seems likely that that was a design criterion and they just tweaked the thresholds and number of hits required until they got it.

The last analysis on HN about this made the exact same mistake, and it's a pretty obvious one so I'm skeptical about the rest of their analyses.

It is nice to have some actual numbers from this article though about how much CP they report, the usefulness of MD5 hashes, etc.

Edit: reading on, it seems like he just misread - it sounds like he thinks they're saying there's a 1 in a trillion chance of a false positive on a photo but Apple are talking about an account which requires multiple photo hits. The false positive rate per photo might be 1 in 1000 but if you need 10 hits then it's fine.
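
A quick back-of-the-envelope with made-up numbers shows why a per-account threshold changes the picture (the real per-photo rate and threshold are not public, and this assumes matches are independent, which duplicate photos would break):

    from math import comb

    p = 1e-3   # assumed per-photo false positive rate (illustrative only)
    n = 1000   # photos uploaded by an account in a year
    t = 10     # assumed threshold of matches before human review

    # probability that at least t of n independent photos falsely match
    p_account = sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(t, n + 1))
    print(p_account)  # on the order of 1e-8 with these made-up numbers

Scale n up to a 100,000-photo library at the same per-photo rate and the expected number of false matches blows past the threshold, so the "one in a trillion" claim only holds for whatever per-photo rate and threshold Apple actually chose.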


This made me think of a good point regarding Apple's error rate of "one in a trillion". If they are so confident that there won't be false positives, why bother sending it to Apple for manual review? Clearly they think the error rate will be high enough that a human has to double-check everything. But that is not what they are saying.


Apple doesn’t have access to the original CSAM images — only the hashes generated by the agency that produced them, so there could be hashes that don’t match anything illegal that trigger a false positive.


If this is true and the content of pictures can be unhashed into a 26x26 picture, this is ethically a total nightmare. Nobody wants to carry a phone with child porn on it; disgusting if you think about it.

I don't even want to imagine what very religious people and countries will think about the iPhone then.


> As it was explained to me by my attorney, that is a felony.

Apple could argue they were already going to receive the photo (since this algorithm only affects Photos destined for iCloud Photos) and thus "when in the upload process it was classified" is simply technological semantics.

> This was followed by a memo leak, allegedly from NCMEC to Apple:

Well, we certainly are the minority. If a majority of people knew and were mad we'd have protests in major cities (this of course doesn't invalidate the concerns).


That PhotoDNA is reversible is ridiculously shocking to me...


Why? It's ancient, and hasn't been subject to open scrutiny from academics and researchers like open-source systems.


I'm just wondering how long it will take before average people, jokingly or otherwise, imply that buying an android phone makes you a criminal.


Sadly, I'm not worried about that particular possibility. I'm fully expecting Google to roll out some similar BS within a year or two. "For the children" of course.


Maybe you can answer this. What proportion of the CP pictures are actual CP? Like, are teens posting selfies of themselves in swimsuits or revealing clothes being submitted? People with their kids in the bathtub/pool? Are most of these pictures real, give-you-nightmares CP?

It seems insane to me that anyone would knowingly upload CP to a forensics site on purpose. Much less several times a day.


> If someone were to release code that reverses NCMEC hashes into pictures, then everyone in possession of NCMEC's PhotoDNA hashes would be in possession of child pornography.

Please correct me if I'm wrong, but wouldn't it be more correct to say they "would be in possession of images recognized by PhotoDNA as child pornography" rather than actual CP?


Technically not possible without severely grasping assumptions. It would be more likely you would create a collision and have calculated an image of a duck (south african whitenoise duck).

The problem with all this is that images of naked children made by their parents are CP in the eyes of its consumers.

Perceptual AI is the best approach, but produces a certainty < 1.

In my cryptography course I had a project about invisible watermarks and secret messages in imaging. The first hurdle was beating partial images and compression, so most early algorithms worked in the frequency domain. At that time it was basically an arms race between protection or deletion of said messages and I think that hasn't changed.

Conventional file hashes can be beaten by randomizing metadata, since a quality hash function would immediately create a completely different hash. Never mind just flipping or simply re-saving the image.

If you create a polynomial approximation of frequencies or color histograms of an image, you have a relatively short key indicator. But you need a lot of those to even approach certainty. Could always be an image of a duck.


https://web.archive.org/web/20210808233609/https://www.hacke... (I can't seem to access archive.is, so archive.org)


Bad Apple it is. Also hypocrite Apple, with their fake (at the very least misleading) privacy stance against internet advertising.


1 in 1 trillion of what?

https://i.imgur.com/E1YRsXQ.jpg


> Apple's license agreement says that they own the operating system, but that doesn't give them permission to search whenever they want or to take content.

If this is true, how has this gotten past Apple's legal team? Are they not aware it would be a flagrant violation of the law?


> So where else could they get 1 trillion pictures?

That's a real kicker in my opinion. Unless they get training data from NCMEC I struggle to understand how they're training their model? Unless it's entirely algorithmic and not based on ML?


What assurances do we have that this system will not be used to flag "extremist content" i.e. memes and to send that information to law-enforcement in the future?


Somewhat tangential, but people like this author amaze me in their deep knowledge AND ability to communicate it well.

Also, none of this topic is something I would want to deal with.


You believe CP is evil. You believe privacy is sacred. You are Tim Cook. What do you do?

If this is a sincere effort, Apple has clearly failed to thread the needle. This announcement could also be a fumbled attempt to reframe what is already common practice at the company, to get ahead of a leak. I heard from a friend who's related to an Apple employee that Apple already scans the mountains of data on its servers for "market research". The claims to the contrary strike me as marketing gambits.


Here's the gist of a post I made a couple of days ago (I removed one sentence that someone considered inflammatory):

Having known many victims of sexual violence and trafficking (Seriously. I deal with them, several times a week, and have, for decades), I feel for the folks that honestly want that particular kind of crime to stop. Humans can be complete scum. Most folks in this community may think they know how low we can go, but you are likely being optimistic.

That said, law enforcement has a nasty habit of having a rather "binary" worldview. People are either cops, or uncaught criminals.

With that worldview, it can be quite easy to "blur the line" between child sex traffickers and traffic ticket violators. I remember reading an article in The Register about how anti-terrorism tools were being abused by local town councils to do things like find zoning violations (for example, pools with no CO).

Misapplied laws can be much worse than letting some criminals go. This could easily become a nightmare, if we cede too much to AI.

And that isn't even talking about totalitarian regimes, run by people of the same ilk as child sex traffickers (only wearing Gucci, and living in palaces).

”Any proposal must be viewed as follows. Do not pay overly much attention to the benefits that might be delivered were the law in question to be properly enforced, rather one needs to consider the harm done by the improper enforcement of this particular piece of legislation, whatever it might be.” -Lyndon B. Johnson

[EDITED TO ADD]: And I 100% agree that, if we really want to help children, and victims of other crimes, then we need to start working on the root causes of the issues.

Poverty is, arguably, the #1 human problem on Earth, today. It causes levels of desperation that ignore things like climate change, resource shortages, and pollution. People are so desperate to get out of the living hell that 90% of the world experiences, daily, that they will do anything (like sell children for sex), or are angry enough to cause great harm.

If we really want to solve a significant number of world problems, we need to deal with poverty; and that is not simple at all. I have members of my family that have been working on that, for decades. I have heard all the "yeah...but" arguments that expose supposedly simple solutions as...not simple.

Of course, the biggest issue, is that the folks in the 0.0001% need to loosen their hands on their stashes, and that ain't happening, anytime soon. I don't know if the demographic represented by the Tech scene is up for that, since the 0.0001% are our heroes.


Given that the weights for the NeuralHash algorithm are being shipped to every iOS device, and neural network adversarial attacks are pretty well studied in the literature, it should be trivial to make a website or Android camera app that tweaks a few pixels of an image so that it collides with an entry in Apple's CSAM database. If anyone seriously wants to kill this initiative, widely distributing a few memes with hash collisions could go a long way.
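
Purely to illustrate the shape of that attack (heavily hedged: the real NeuralHash weights and architecture aren't used here, so the "hash network" below is a random stand-in I made up, not Apple's model), the optimization loop would look something like this:

    # Sketch of an adversarial collision against a *stand-in* embedding network.
    # Assumption: the attacker has a differentiable copy of the hash model; here
    # `toy_hash_net` is a made-up random CNN, not NeuralHash.
    import torch
    import torch.nn as nn

    torch.manual_seed(0)

    toy_hash_net = nn.Sequential(
        nn.Conv2d(3, 8, kernel_size=5, stride=2, padding=2),
        nn.ReLU(),
        nn.AdaptiveAvgPool2d(4),
        nn.Flatten(),
        nn.Linear(8 * 4 * 4, 96),   # 96-dim embedding, binarized below
    )

    def toy_hash(x):
        # Binarize the embedding by sign, like a locality-sensitive hash.
        return (toy_hash_net(x) > 0).float()

    source = torch.rand(1, 3, 64, 64)   # the meme we want to keep looking normal
    target = torch.rand(1, 3, 64, 64)   # an image whose hash we want to collide with
    target_bits = toy_hash(target)
    sign_target = target_bits * 2 - 1   # {0,1} -> {-1,+1}

    delta = torch.zeros_like(source, requires_grad=True)
    opt = torch.optim.Adam([delta], lr=0.01)

    for _ in range(300):
        adv = (source + delta).clamp(0, 1)
        emb = toy_hash_net(adv)
        # Hinge loss pushing each embedding coordinate to the target bit's side of 0,
        # plus an L1 penalty to keep the perturbation visually small.
        loss = torch.relu(0.1 - sign_target * emb).mean() + 10 * delta.abs().mean()
        opt.zero_grad()
        loss.backward()
        opt.step()

    adv_bits = toy_hash((source + delta).clamp(0, 1))
    print("matching bits:", int((adv_bits == target_bits).sum().item()), "of", target_bits.numel())

Whether a perturbation like this stays imperceptible against the real model is an open question, but this is the general recipe from the adversarial-examples literature.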


I hear a siren outside. Someone has a serious problem. Don't just shine it on.


If they file even one false claim, then it ruins lives.


To help fight back against false positives, why not just repeatedly trigger the code that sends the data to NCMEC (per the article's claimed legal requirements) and create a DoS attack?


That code is not accessible to us. The report to NCMEC would be generated by Apple after they have manually reviewed the derived images included in the security vouchers your device submitted after it analysed your photos.


> That code is not accessible to us.

It is to tools like IDA and Ghidra.


No, the tool that reports to NCMEC is on Apple employee workstations (or a private server). The stuff running in the iPhone essentially just flags things as possibly-CSAM, after which someone at Apple verifies it.

Now, I suppose you could DDoS Apple's verification process.

Either way, though, it's not like anyone who would do either of these things would win any points in the court of popular opinion. I can see the headlines now: "Hackers Disable Apple's CSAM Reporting Tools; Legitimate CSAM Reports Get Lost".


> "Hackers Disable Apple's CSAM Reporting Tools; Legitimate CSAM Reports Get Lost"

What website are we on?


Imagine being a victim of CP, knowing that all over the world people hold in their hands devices storing artifacts of your terror.


Great title.


awesome product!


cool


so maybe i'm confused, but i thought it worked like this:

pre this thing:

* before syncing photos to icloud, the device encrypts them with a device local key, so they sit on apple's servers encrypted at rest and apple cannot look at them unless they push an update to your device that sends them your key or uploads your photos unencrypted somewhere else

after this thing:

* before syncing photos to icloud, the device encrypts them, but there are essentially two keys. one on your device, and one that can be derived on their servers under special circumstances. the device also hashes the image, but using one of these fancy hashes that are invariant to crops, rotations, translations and noise (like shazam, but for pictures)

* the encrypted photo is uploaded along with the hash (in a special crypto container)

* their service scans all the hashes, but uses crypto magic that does the following:

1) it does some homomorphic encryption thing where they don't actually see the hash, but they get something like a zero knowledge proof of whether the image's hash (uploaded along with the image in the "special crypto container") is in their list of bad stuff (a toy sketch of this kind of blinded matching follows point 4 below)

2) if enough of these hit, then there's a key that pops out of this process that lets them decrypt the images that were hits

3) the images get added to a list where a room full of unfortunate human beings look at them and confirm that there's nothing good going on in those photos

4) they alert law enforcement
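
for the "crypto magic" in 1), here's a toy DDH-style private set intersection sketch (my own illustration with made-up parameters, not apple's actual protocol; theirs is arranged so that the server, not the device, learns about matches, and only past a threshold):

    # toy private set intersection via commutative blinding (illustration only;
    # the prime, generator, and "hashes" are invented, and this is not apple's PSI).
    import hashlib
    import random

    P = 2**127 - 1   # made-up prime modulus for the toy group
    G = 5            # made-up generator

    def blind(item, secret):
        h = int(hashlib.sha256(item.encode()).hexdigest(), 16)
        return pow(pow(G, h, P), secret, P)

    server_set = {"hash_of_known_image_1", "hash_of_known_image_2"}
    client_set = {"hash_of_known_image_2", "hash_of_vacation_photo"}

    a = random.randrange(2, P - 1)   # client secret exponent
    b = random.randrange(2, P - 1)   # server secret exponent

    # client sends its blinded items; server blinds them a second time and returns them.
    client_blinded = {item: blind(item, a) for item in client_set}
    double_blinded_client = {item: pow(v, b, P) for item, v in client_blinded.items()}

    # server sends its own blinded items; client blinds those a second time.
    double_blinded_server = {pow(blind(item, b), a, P) for item in server_set}

    # because the blinding commutes, matches show up without revealing either raw set.
    matches = [item for item, v in double_blinded_client.items() if v in double_blinded_server]
    print(matches)   # ['hash_of_known_image_2']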

a couple points of confusion to me:

1) i'm assuming they get their "one in a trillion" thing based on two factors: the known false positive rate of their perceptual hashing method, and the tunable number of hits necessary to trigger decryption. so they regularly benchmark their perceptual hash thing and compute a false positive rate, and then adjust the threshold to keep their overall system false positive probability where they want it? (rough arithmetic for this is sketched after point 2)

2) all the user's photos are stored encrypted at rest; it seems that this thing isn't really client side scanning, but rather assistance for server side scanning of end to end encrypted content put on their servers.
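
back-of-envelope on point 1 (all three numbers below, the per-image false match rate, the library size, and the threshold, are made up; apple hasn't published them): the threshold is what turns a so-so per-image rate into a tiny per-account rate.

    # toy account-level false-positive estimate: if each photo independently
    # false-matches with probability p, what's the chance an account with n
    # photos crosses a threshold of t matches? (p, n, t are invented numbers.)
    from math import comb

    def flag_probability(n_photos, p_per_image, threshold, terms=40):
        # upper-tail binomial probability; the terms fall off so fast that
        # summing the first `terms` of them is effectively exact here.
        return sum(
            comb(n_photos, k) * p_per_image**k * (1 - p_per_image)**(n_photos - k)
            for k in range(threshold, threshold + terms)
        )

    p = 1e-6    # assumed per-image false match rate (made up)
    n = 20_000  # assumed photo library size (made up)
    print(flag_probability(n, p, threshold=1))   # ~0.02: roughly 1 account in 50 trips
    print(flag_probability(n, p, threshold=10))  # ~3e-24: effectively never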

first off i think it's actually pretty cool that a consumer products company offers end to end encrypted cloud backup of your photos. i don't think google does this, or anyone else. they can just scan images on the server. second off, this is some pretty cool engineering (if i understand it correctly). they're providing more privacy than their competition aaand they've given themselves a way to ensure that they're not in violation of the law by hosting CSAM for their customers.

but i guess the big question is, if people don't like this, can't they just disable icloud?


They do not provide e2e for iCloud photos (or iCloud backups, etc.) and did not specify that they were planning to.

They do not provide more privacy than the competition (in this regard at least), and yes, people can just disable iCloud services. The argument that "you can just not use it" is a pretty weak one for defending poor privacy practices. The same could be said for any website or server with poor privacy practices, and given the prevalence of iCloud Photos, this change affects a lot of people.


i dunno, reading this thing sure does make it sound like it's only for photos stored in icloud. and the existence of such an elaborate system sure does seem to indicate that photos are end to end encrypted on their servers, otherwise they could just hash them there.

it does the matching client side, so the hashes never leave the client. what it does send with the photos is this threshold thing where if enough of them are hits, it reveals them, otherwise they see nothing.

it seems pretty unequivocal that this is all a system to support scanning of e2e encrypted images on icloud. there would be no need for such an elaborate system if they could scan the images on the server. as far as i know, google does not e2e encrypt images, and for all we know, they actually do engage in server side scanning.

to quote their statement https://www.apple.com/child-safety/:

> To help address this, new technology in iOS and iPadOS* will allow Apple to detect known CSAM images stored in iCloud Photos. This will enable Apple to report these instances to the National Center for Missing and Exploited Children (NCMEC). NCMEC acts as a comprehensive reporting center for CSAM and works in collaboration with law enforcement agencies across the United States.

> Apple’s method of detecting known CSAM is designed with user privacy in mind. Instead of scanning images in the cloud, the system performs on-device matching using a database of known CSAM image hashes provided by NCMEC and other child safety organizations. Apple further transforms this database into an unreadable set of hashes that is securely stored on users’ devices.

> Before an image is stored in iCloud Photos, an on-device matching process is performed for that image against the known CSAM hashes. This matching process is powered by a cryptographic technology called private set intersection, which determines if there is a match without revealing the result. The device creates a cryptographic safety voucher that encodes the match result along with additional encrypted data about the image. This voucher is uploaded to iCloud Photos along with the image.

> Using another technology called threshold secret sharing, the system ensures the contents of the safety vouchers cannot be interpreted by Apple unless the iCloud Photos account crosses a threshold of known CSAM content. The threshold is set to provide an extremely high level of accuracy and ensures less than a one in one trillion chance per year of incorrectly flagging a given account.

> Only when the threshold is exceeded does the cryptographic technology allow Apple to interpret the contents of the safety vouchers associated with the matching CSAM images. Apple then manually reviews each report to confirm there is a match, disables the user’s account, and sends a report to NCMEC. If a user feels their account has been mistakenly flagged they can file an appeal to have their account reinstated.
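
to make the "threshold secret sharing" part concrete, here's a toy shamir-style sketch (my own illustration with made-up parameters, not apple's actual construction). presumably each matching voucher contributes something like a share, and only past the threshold do the vouchers become readable:

    # toy shamir secret sharing: split a key into n shares so that any t of them
    # reconstruct it and any t-1 reveal nothing. (illustration only; the prime,
    # threshold, and share count are invented.)
    import random

    PRIME = 2**61 - 1   # a mersenne prime, big enough for a toy secret

    def make_shares(secret, t, n):
        # random polynomial of degree t-1 with the secret as the constant term
        coeffs = [secret] + [random.randrange(PRIME) for _ in range(t - 1)]
        def f(x):
            return sum(c * pow(x, i, PRIME) for i, c in enumerate(coeffs)) % PRIME
        return [(x, f(x)) for x in range(1, n + 1)]

    def reconstruct(shares):
        # lagrange interpolation at x = 0 recovers the constant term
        secret = 0
        for i, (xi, yi) in enumerate(shares):
            num, den = 1, 1
            for j, (xj, _) in enumerate(shares):
                if i != j:
                    num = num * (-xj) % PRIME
                    den = den * (xi - xj) % PRIME
            secret = (secret + yi * num * pow(den, PRIME - 2, PRIME)) % PRIME
        return secret

    inner_key = 123456789                        # stands in for the decryption key
    shares = make_shares(inner_key, t=10, n=30)  # say, one share per matching voucher
    print(reconstruct(shares[:10]))              # any 10 shares: key recovered
    print(reconstruct(shares[:9]))               # only 9: a useless random-looking number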


okay, so according to this [1], icloud photos are not e2e encrypted. point still stands, this is a system that is designed to flag the images without looking at them server side. assuming that it is successful, it paves the way for when they could turn on e2e encryption for icloud photos.

[1] https://support.apple.com/en-us/HT202303


I don't buy this argument one bit. If you're going to release a feature with client-side data scanning in order to turn on e2e, you would specify that to provide the reasoning and prevent backlash. Apple would be smart enough to say that (if that was their plan).

Furthermore, e2e isn't all that meaningful with this feature. A client-side scanner that allows someone to view a thumbnail of a photo whenever it matches some other photo they have kinda takes away most of the protection e2e is supposed to provide...


i don't think they would necessarily advertise a plan to attempt e2e this early. what if it proved to be infeasible? reversing course after an announcement like that would be a massive black eye.

have you ever seen a system like this that would be capable of flagging accounts for hosting bad material with a tiny false positive rate while being capable of e2e encrypting the material at rest like this one? i haven't, and it's an awful lot of engineering going to waste if that's not the goal.


...this early?

no e2e on iCloud has been a major issue for years. It's not like they're beta testing it (the underlying principle/structure, that is).

and again, I haven't seen a system that flags bad material that is also e2e because the whole premise is flawed.


and obviously a contentious issue...

they probably already do server side scanning, and probably regularly find stuff.

assuming that was true (which it very well could be) it would be insanely irresponsible to roll out e2e at scale without something like this...

there are probably hundreds of people or more at apple who can legitimately access the contents of a user account. e2e with inbound scanning would largely eliminate that.


I really HATE the usage of "CP"; it's abuse and has nothing to do with pornography.


I'm concerned about personal security risks of this move by Apple (e.g., Swatting-like attacks).

Just wasted the weekend on this, and now asking for options: https://news.ycombinator.com/item?id=28111995


I think Apple may have really screwed the pooch with this move. They're going to catch hell for being able to take images from your device without warning or consent, and they'll catch hell if they remove it.

Worse, they've also opened the door to government censorship of images and content and propped that door wide open.


Unfortunately, we're in a bit of an echo chamber here. The average person is either unaware of these types of issues, doesn't care, or can be easily swayed with "think of the children"-type arguments.

There's going to be absolutely no oversight or transparency into occasions when an image is removed from a device. Nobody will ever know when a non-CSAM image is accidentally pulled back from a device. All the public will ever see are headlines to the tune of "CSAM scanning system catches bad person once again".

This is a really awful path that we're going down, and this is absolutely going to get abused by regimes around the world.


Yes, they screwed the pooch with us, representing:

1. The few who care about privacy.

2. That's about it. Maybe 1% if you're being generous. ( And subtract the portion of that 1% on this site who DO care about privacy but are being compensated $400k/yr by Apple and with Bay Area rents being what they are, they shouldn't rock the boat, and they really need to make good on their Tesla Roadster reservation. )

But go ask your mother-in-law, your dad, or any random normie friend. They're fighting PedoBear! And if they expand this further, well, they're fighting terrorists! And if they expand even further, well, I have nothing to hide! Do you?

feh ~/memes/dennis_nedry_see_nobody_cares.gif


I disagree. Ask anyone you listed if they'd be comfortable with Apple reviewing their photos and possibly sending them to another organization without any warning or recourse and I highly doubt any of them would be okay with it.


Question: who is or will be making money on this deal? Answer that ("follow the money") and then I think we'll have a handle on what's really going on.


My guess: Apple is doing this to get regulators off its back while implementing end-to-end encryption.


Apple gets to maintain its 30% commission and monopoly abuse of the app store (which is becoming the de-facto place where all consumer software downloads and payments occur), in return for feeding the iCloud data of all users worldwide to US intelligence agencies.

Presumably the quid pro quo outcome is that Apple is allowed to win the Epic vs Apple lawsuit.


Ah! This and the other comment suggest a method: Apple implements something for "THINK OF THE CHILDREN" in exchange for government calling off the regulation dogs. Cynical.


I haven’t read all the details, articles, and comments. My personal thoughts on the whole situation are the following.

If you are a parent and lose a child then you would want every possible avenue taken to find your child. You would be going mad wanting to find them. If there is a way to match photos to known missing children then I say it should be at least tried.

I equate this to Ring cameras. They are everywhere. You cannot go for a walk without showing up on dozens of cameras, and we know Amazon ("god mode") and law enforcement abuse their access privileges. However, if a crime happened to you and a Ring camera captured it, then I know almost everyone would certainly want that footage reviewed. Would you ignore the Ring footage possibility just because you despise Ring cameras? Probably not.

It’s all an invasion of privacy until you’re sitting on the other side of the table where you have a vested interest in getting access to the information.


If I were a parent and my child went missing, something much higher on my priority list than matching photos in some national database would be having the police search every home in a 2 mile radius of my home.

It seems just as rational to argue that the people who live in those homes should be willing to give up their right to not have their home searched if it means potentially finding a missing child in a potential criminal's home.

We can come up with all kinds of hypothetical situations. I am empathetic to parents going through the hell of a missing child, and to the children themselves. But the protection of children and victims must be balanced with the preservation of rights and freedoms considered deeply sacrosanct.


> If you are a parent and lose a child then you would want every possible avenue taken to find your child. You would be going mad wanting to find them. If there is a way to match photos to known missing children then I say it should be at least tried.

the problem with this argument is that you can use it to justify basically anything.


> It’s all an invasion of privacy until you’re sitting on the other side of the table where you have a vested interest in getting access to the information.

That's why it's so insidious. People keep making it about children, which are a very worthy cause. Nobody wants child abuse. But the problem is that the technology can be used for anything. Wouldn't it be nice to read all of the internal Slack messages at a company you're about to acquire? Wouldn't it be nice for Apple to have a copy of Google's source code? Wouldn't it be nice for a political candidate to cause their opponent some legal trouble? Spying on other people is always very valuable and governments (and corporations, probably) spend billions of dollars a year on it.

If we want to make this just about detecting abuse of children, I'm totally on board. Pass a law that says using this technology to steal corporate secrets or to gain a political advantage is punishable by death. Then maybe it can be taken seriously as something very narrow in scope. That, of course, would never happen. (Death penalty arguments aside; I'm exaggerating for effect.)

I can't believe that once this technology is out, it's only going to be used for good. (I imagine politicians will love this. Look at the recent Andrew Cuomo abuse allegations -- he abuses women, and then his staff try to cover it up by leaking documents, to discredit the abused. Sure would be nice to see what sort of things they have on their phone, right? Will someone like Cuomo NEVER have a friend that can add detection hashes to the set and review the results? I would say it's a certainty that a powerful politician will have loyal insiders in the government, and that if this system is rolled out, we're going to see colossal abuse -- flat-out facilitation of crime, lives ruined -- over the next 20 years.)

I have to wonder what Apple's angle is here. It would cost them less money to do nothing. They could be paying these researchers to work on something that makes Apple money, and banning users from their platform (or having them hauled off to prison) doesn't help them make money. I really don't want to be the conspiracy theory guy, but doesn't it seem weird that the Department of Justice wants to investigate Apple's 30% app store cut, and right about that time, Apple comes up with a new way to surveil the population for the Department of Justice? Maybe I read too much HN.


> I equate this to Ring cameras.

But it's just a totally different situation isn't it? I'm not personally opposed to the general concept of security cameras in public spaces. If you choose to install one on your property that doesn't seem unreasonable. If you choose to share that footage with the police that doesn't seem unreasonable.

The problem with Apple's system isn't even the current implementation (which is on device, but iCloud only). It's that the system can be trivially expanded to all on device content, and report the user of the phone for any unacceptable behaviour.

It's more like a camera in your home that reports you to the police if you do anything unacceptable. Would anyone choose to have that?

But the bigger problem is that Apple have been selling products based on "privacy first" marketing. By reporting on their users off-line content they break that trust. And there are really few viable options in smartphones so you can't really just move to another provider.


And this is exactly why we have things like the fourth amendment in the US.


If I lost my child, I would do anything and everything to get them back. I would shoot anyone who got in the way. This is fine because I am not the police and they won't give me guns.

Kinda unrelated to that is the 4th Amendment, which protects the right of the people to be secure in their persons, houses, papers, and effects against unreasonable searches and seizures by the government.


> It’s all an invasion of privacy until you’re sitting on the other side of the table where you have a vested interest in getting access to the information.

It is still an invasion of privacy regardless of the side on which you sit. This situation is unlike a Ring camera capturing footage in a public space, where individuals already have no expectation of privacy.


> It’s all an invasion of privacy until you’re sitting on the other side of the table where you have a vested interest in getting access to the information.

That’s the thing about privacy that so many don’t want to admit. “Everyone deserves it, until they don’t”. All of society’s privacy is more important than your single child, sorry.

Meanwhile I created this great new technology. It runs in the background super efficiently on your phone. It immediately detects housefires and alerts you. It uses a combination of sensors to literally detect the spark of flame and there’s only one in 1 trillion false positives.

Simply download the app and give it permission to sample your microphones, cameras, accelerometers and historical gps data to build a profile, then flick a lighter anywhere in your house and sure enough your phone alarm goes off.

How it works is incredible: an algorithm listens for ultrasonic sound waves created by the chemical reaction during combustion. Video and sound samples are then reviewed by one of our technical representatives.

House fires and house fire deaths will be a thing of the past. The only compromise is that our algorithms listen to all of your video and audio, occasionally monitored by a human technician, which might include audio of you making love to your wife.


> "All of society’s privacy is more important than your single child, sorry." "which might include audio of you making love to your wife."

You and your wife bumping uglies just isn't that special; your embarrassment about it is way out of proportion. The fact that you would rather people die in house fires than accept a small chance that a stranger hears you breathing heavily and talking dirty should cause you a lot more concern than it seems to.


Just to be clear, you're saying that it's unreasonable to prefer everyone having private bedroom conversations over the elimination of spontaneous house fires?

That it's worth having a random human being able to listen to all said conversations?

To me at least it's incredibly reasonable to make that tradeoff. As a society we make tradeoffs all the time and people die as a result. Some of those tradeoffs are arguably unreasonable but this hypothetical seems very clear...


Just to be clear, you’re saying “I would not undergo temporary embarrassment to avert mass agonising death” and you think that’s incredibly reasonable?


"It’s all an invasion of privacy until you’re sitting on the other side of the table"

This is very insightful. It's also depressing, because it's a great point for those who oppose privacy. I even find myself swayed by this reasoning.


The government believes you might have information about a missing child and are withholding it. They are about to restrain you against your will and search your rectum without your consent.

The reason this violation was able to take place is because…

> It’s all an invasion of privacy until you’re sitting on the other side of the table

The thing about privacy and the government is you must always envision yourself on the wrong side of the table, because you are. Right up to the point they are inside you. [1] Your right to privacy is your strength and without it you are a slave to forces you don’t understand, can’t see, hear, taste, or smell.

Privacy, or the lack thereof, changes who you are and how you act. It changes what you say and why.

Be very careful how you submit to surveillance. The privacy you forfeit can never be recaptured.

[1] https://news.yahoo.com/mexico-man-sues-over-repeated-anal-pr...



