> Children can be protected without companies combing through personal data

Darn right they can.



Okay, that's a nice soundbite. Now let's steelman it.

I believe the following to be true:

1. Children are sexually abused. This abuse is far more widespread than we as a society want.

2. Images of this abuse are called CSAM (child sexual abuse material). CSAM is easily distributed over the Internet.

3. People who intentionally download CSAM are criminals who deserve to be punished.

4. Services that scan for CSAM (using hashes provided by NCMEC, the National Center for Missing and Exploited Children, and similar bodies) identify these criminals, who are in turn prosecuted by the police; a sketch of what such a scan looks like mechanically follows this argument. In fact, hundreds of thousands of pieces of CSAM are reported by cloud services every year. This, along with the fear of being caught, reduces the demand for CSAM.

5. Some of those caught are found to be creating as well as consuming CSAM. Catching them reduces the creation of CSAM as well as the demand for it.

6. Reducing demand and creation of CSAM protects children.

7. Scanning files for CSAM catches criminals that would not otherwise be caught.

Therefore, "combing through personal data" protects children that would not otherwise be protected.

When you agreed with "children can be protected without companies combing through personal data," did you mean that my sequence of reasoning is wrong? If so, how?

Or were you expressing a personal philosophy that's closer to "I think the benefits to privacy (from not scanning) are more important than the benefits to children (from scanning)?" If so, why do you believe it?


I think the strongest way to convince people on this is Point 1 in your argument. This activity is far more widespread than I have seen publicly acknowledged anywhere.

I worked at a large cloud company (smaller than iCloud) that scanned for CSAM and in some cases notified law enforcement - the number of yearly occurrences ended all internal arguments about the "morality" of scanning immediately.

As an engineer, knowing that the code I wrote was handling this kind of data every day, thousands of times per day, involving tens of thousands of individuals per year changed my opinion from "we should protect the privacy of the user" to "F** YOU stop using my product".

Short of products that offer full end-to-end encryption, any provider that does not scan for this kind of thing should spend an afternoon looking at what its servers are actually handling, then see how it feels. The marketing department will absolutely not resist the temptation to poke through unencrypted data.


The practice of "steelmanning" is presenting the strongest possible argument. So please answer this question.

--

The Fourth Amendment of the United States Constitution reads:

The right of the people to be secure in their persons, houses, papers, and effects, against unreasonable searches and seizures, shall not be violated, and no Warrants shall issue, but upon probable cause, supported by Oath or affirmation, and particularly describing the place to be searched, and the persons or things to be seized.

--

Before you take step 4 above, please explain to me how you established probable cause for each and every scan that takes place. Both those scans that were positive and found CSAM, and those that were negative and found nothing.

In the US, there is a long history of Dragnets being deemed unconstitutional.

--

And I personally understand that this Amendment protects us from actions by the US Government. Not Apple. But here Apple is conducting the scans and forwarding the results on to the Government for the purposes of law enforcement.

In my personal opinion, when Apple began these scans, they effectively became an agent of the state.

--

In any event, I am glad the scans have stopped.


> In my personal opinion, when Apple began these scans, they effectively became an agent of the state.

I think reading United States v. Miller (982 F.3d 412) will probably convince you otherwise. Here, the state actor doctrine is defined to include activities that are always the province of the government. While arresting offenders and prosecuting them is definitely in this zone, scanning hashes and comparing them to a list - even one provided by NCMEC/the government - is not. In Miller, the court says, “Only when a party has been ‘endowed with law enforcement powers beyond those enjoyed by’ everyone else have courts treated the party’s actions as government actions.”

Don’t get me wrong, I think it’s a terrible product decision to violate the user’s privacy like this, especially in this golden age of the state’s capacity for surveillance. But it is certainly not state action.


I think "state actor doctrine" is important here. The above poster wrote: "here Apple is conducting the scans and forwarding the results onto the Government for the purposes of law enforcement." If that were true, Apple would probably lose whatever class action suit was brought by the pedos. But in this case, Apple has every right to ensure that its servers and platforms are not used to disseminate unlawful material. They could always be held liable by victims or the state for failing to curb the kind of activity that is certainly happening today whereby people make icloud folders partially or publicly accessible. How is it different to charge for a subscription to an icloud folder vs a paid substack? Both of these companies share in the responsibility not to allow csam on their platforms.

A private company can always publish terms that grant it on-device or in-cloud access to whatever data it is manipulating or storing. One could argue that clicking OK on "we can manipulate your data" is legal cover for "running a hash search and forwarding material to the FBI".

Is "post fib.gov" the new "Sunset Filter"? and can they run that filter automatically?


A lot to unpack here.

First, I don’t think a civil suit by people caught by the hash-checking would be the remedy any of them would seek. More than likely, they would be trying to convince a court that Apple, acting on behalf of the government, violated their rights by searching their device without a warrant. The remedy they’d likely be seeking would be to have the “fruit of the poisonous tree” (Apple pun intended) excluded in their criminal prosecution.

I guess maybe they could file a civil suit afterwards but I’m having a little trouble imagining the assemblage of a class around which to file a class action since (allegedly) this hash matching system should “catch” innocent people extremely rarely.

And what duty does Apple owe to users? The duty not to inform the police that they might be committing a crime? This seems pretty shaky.

The problem with all this is that the liability they’d be seeking to impose on Apple is specifically relieved by 18 U.S.C. 2258B (barring recklessness, malice, or a disconnect between Apple’s actions and 2258A).

The other thing is that “forwarding the results on[ ]to the government for the purposes of law enforcement” explicitly, under Miller, is not government action. It’s the law enforcement function itself that would be interpreted as state action.

> How is it different to charge for a subscription to an iCloud folder vs a paid Substack?

I think the answer is that liability for failure to moderate Substack might be relieved under Section 230 of the CDA[0], while liability for the act of moderating Apple’s cloud by scanning hashes and reporting hits to the FBI would be relieved under 18 U.S.C. 2258B.

[0]: assuming, of course, Substack didn’t/shouldn’t have known about the offense. Once they know, they have a 2258A duty to report.


Thank you for these citations. I'm not a legal expert, but I appreciate the response. I don't know what the correct thing for Apple or others to do is, but it seems like Apple has far more authority than most people suspect to do things like hash checks and law enforcement notification. The general tenor of most complaints is that this is a violation of First Amendment or constitutionally protected privacy rights. But I don't know if that's true, and from your response, it would seem like there is some precedent for corporations being protected in their right to act on crimes they are aware of.


Protection from liability when aware of a crime and responding accordingly; that’s exactly what 18 U.S.C. 2258B (alongside the 2258A reporting scheme) gives corporations, in the context of child sexual exploitation materials.

Sadly, I think you will find we don’t have any constitutionally protected privacy rights. We used to have some, and that’s where the protection for abortion rights came from, but the current court (at least in my interpretation) believes that interpreting the Constitution that way was a huge mistake. We still have some remnant of these rights, but as they come up for review by the Roberts court, they will be abolished one by one until they run out of time or we run out of patience with them and enshrine these rights in statutes.

But I think when you say privacy rights you mean “fourth amendment rights against unreasonable search and seizure” and insofar as that’s the case, I agree with you: Apple has a lot more power to violate these rights than most people think, because they have smart lawyers who have advised them exactly how much leeway they have before they become “state actors”; their software stops just shy of that point.

I think one thing that’s important to remember is that Apple has a lot of power but only because we give it to them. All you have to do to avoid this hash scanning nonsense forever is just throw away your iPhone. (The problem is, at least for me, I’m not ready to do that).


It took me a moment to parse your argument here. To summarize, I think you're saying that my reasoning is wrong because it's illegal in the US for private services to conduct mass scanning for CSAM and notify the government.

That's an... interesting take. It doesn't pass the sniff test, sorry. First, if it were illegal, I wouldn't have expected it to make it past the corporate counsel at Google, Facebook, and Microsoft (all of which do CSAM scanning in the cloud). Second, I would have expected it to have been brought up by the defense in a trial and stopped as a result. That it still happens is a strong indicator that it is, in fact, not illegal.

Frankly, it sounds like the same nonsense "sovereign citizens" get up to.


The argument rests on the comparison of a user's local device to a private residence. Google, Facebook, and Microsoft do scanning in their cloud, and that's fine because it happens on their own servers. Apple's original suggestion was to do client-side scanning on the user's device, and this is compared to a search of a residence.


> > In my personal opinion, when Apple began these scans, they effectively became an agent of the state.

> [I think you mean] it's illegal in the US for private services to conduct mass scanning for CSAM and notify the government

Nearly. If the government is requiring this then it becomes improper.

Not that Apple would be breaking the law, but that the search would be by a government proxy and thus inadmissible.

imho, with the FBI's past pressure on Apple, it doesn't seem unlikely that this is somewhat coerced.

> I would have expected it to have been brought up by the defense in a trial and stopped as a result. That it still happens is a strong indicator that it is, in fact, not illegal.

If their lawyer doesn't feel they could win with this argument they won't waste time bringing it up. Their job is defense not legal correctness.

> Frankly, it sounds like the same nonsense "sovereign citizens" get up to.

SovCits argue about stuff like claiming they aren't subject to the government because their name is spelled in all caps, not about actual constitutional principles.


> Nearly. If the government is requiring this then it becomes improper.

Even if the government does not require it, if it acquiesces, and the action is the exclusive province of the government, it’s state action (i.e., improper).

But yes, if they require it, it’s state action too. 18 U.S.C. 2258A allows but does not require scanning hashes, and scanning hashes is not the exclusive realm of the state, so it’s fair game.

The idea that the FBI is somehow twisting Apple’s arm doesn’t really ring true to me. The FBI has lawyers, they understand state action doctrine, and they know if they were coercing Apple, one leak could ruin a lot of prosecutions. I guess I couldn’t say it’s impossible though.

I’m kind of interested in this idea elsewhere in the thread that executing this hash-matching code on your device as opposed to in the cloud is somehow more deserving of 4A protection. One part of me says a textualist-originalist SCOTUS is pretty unlikely to read “mobile handset” as “house,” but who really knows what those characters would say these days.


These days, your handset is de facto your personal ID (via eSIM) and contains your personal digital possessions. A person taking over your handset could easily impersonate you, and there's a good chance they could access your medical and banking records.

I am far from being a legal expert, but I see there's a 9-0 ruling in Riley v. California that a smartphone has 4A protections.


Your point is valid and I think it makes me change my argument somewhat.

On the other hand, Riley (and Wurie, from the companion case) were arrested prior to their phones being searched without a warrant.

Here, though, there is no arrest and no government search, just a software provider selling you a phone running software that does something that doesn’t amount to state action.

With the benefit of your post, I think I can say that what I meant was that I doubt today’s Supreme Court is going to tell us that Apple can sell a service in the cloud that scans hashes, but they can’t sell a phone running software that does the same thing in your phone, under the logic of “your phone is your house.”

Put another way, if you invite me into your house (like you welcome Apple software in to your phone) and I want to search your house, that’s not a violation of the Fourth Amendment (it’s just very rude and maybe trespassing).


>just a software provider selling you a phone running software that does something that doesn’t amount to state action.

>Put another way, if you invite me into your house... and I want to search your house

Say Apple did implement client-side scanning. I think you're right here that Apple's would-be actions would not amount to state action. However, we would be allowing a broad-based search of users' personal files whose results may be auto-forwarded to government agencies based on a private actor's whims.

Say there was a private militia searching people's homes, and forwarding anything too suspicious to the police. Say that the only way to buy a house (in Cupertino, or in the entire country) would be signing a deal with an HOA to allow the militia in. Or less than that, that not signing a deal would merely be arduous and subject you to penalties. At what point of annoyance would you be de facto cancelling 4A?

So there needs to be some sort of a line, and it could be debated where to put it. In this case it seems that multiple separate people have come around to separating the local device from the cloud as a line, and this does have the advantage of being a clear line.

> they can’t sell a phone running software that does the same thing in your phone

Note that this equivalence also works against allowing this search. If Apple (and others) have a widely accepted equivalent alternative, there isn't a need to allow running the scan locally. [EDIT: They could have chosen a different method, like scanning all the files. Or at least implementing it while or after doing E2E. Doing it the way they did made the approach appear like crossing a line without any benefit to anyone but Apple, leading to increased backlash.]

IANAL and all.


YMNBALBYTLY: you may not be a lawyer but you’re thinking like one.

I want to again make clear that I think this kind of scanning sucks. I like the idea of a line. I like the idea of the line being that I decide which software functionality runs on my device. Your hypo about the militia is a great one. I would not live in a country like that, but I have an iPhone…


Thanks for repeating what you got out of my post. There is a little miscommunication. I'll see if I can clear that up.

I'm a "spirit of the law" kind of guy.

And I think that when the bill of rights was passed, the idea was that Americans should be shielded from searches unless there is probable cause.

The founding fathers didn't anticipate the technical innovations of telecommunication and the computer revolution. Nor the power this would grant private companies over the lives of US citizens.

Let's be blunt: if Apple had to pay human staff to manually evaluate hard-copy documents for CSAM violations, they simply wouldn't shoulder that expense. Arguably the only entity that could "afford" it would be the US Government.

Yet here we are in the modern age and these technologies exist. And these searches really do happen - without probable cause. There are at least two known cases of fathers being criminally investigated by the police based on photos of their children that they sent to a medical doctor, after those photos were automatically scanned and flagged by automated systems[1].

The only point I'm making here is this: I feel that the bill of rights has been diminished in the digital age, and companies can and do wield automated algorithms that end in criminal investigations in ways that detrimentally impact the lives of innocent people. And these events run contrary to the spirit of the bill of rights.

That opinion may differ from your own. Or it may differ from how case law and legal precedent have evolved since the bill of rights was initially ratified. But I don't think dismissing that opinion as "nonsense" is either appropriate or a sign that you are debating in good faith.

[1]: https://www.nytimes.com/2022/08/21/technology/google-surveil...


Thanks for responding. You're right, I misunderstood, and I apologize for being dismissive.

I thought you were making a legal argument, but instead, it sounds like you were making the moral argument: mass algorithmic searches by private entities are obviously bad if they eventually result in criminal prosecution (assuming no warrant). Presumably you don't like it when e.g., Google and Facebook conduct mass searches of your data for advertising reasons, either, but that's a separate conversation.

I'm sympathetic to that argument in the abstract.

When you get to the specific case of CSAM, though, that argument results in this position: mass automated searches for known CSAM hashes cause more harm than allowing that CSAM to be shared unchecked.

And that I don't agree with.

My logic is that Facebook, Microsoft, and Google have already been scanning for NCMEC hashes for years, and I'm not aware of any injustices as a result. Please note that I'm specifically talking about hash scanning, not the ML-based classification systems that presumably caused your [1]. I'm not an absolutist; a few cases where people were referred to police as a result of fraud (e.g., a jealous ex-lover planting evidence) are not necessarily a deal breaker for me, especially since the real source of harm is the fraud, which could have been conducted in any number of other ways. I'm also not sympathetic to the slippery slope fallacy.
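
To illustrate why I treat the two differently, here's a toy example (using a plain cryptographic digest; deployed systems like PhotoDNA use perceptual hashes that survive resizing and re-encoding, but the distinction holds): a hash-list scan can only flag material already in the catalogue, whereas an ML classifier makes a judgment call about a brand-new photo, which is where cases like your [1] come from.

    import hashlib

    # Hypothetical stand-ins for an image's raw bytes.
    original = b"\xff\xd8\xff\xe0" + b"image payload" * 100
    tweaked = original[:-1] + bytes([original[-1] ^ 0x01])  # flip one bit

    print(hashlib.sha256(original).hexdigest())
    print(hashlib.sha256(tweaked).hexdigest())  # a completely different digest

    # A hash-list scan flags only files (near-)identical to ones already
    # catalogued; it has no opinion about a novel photo of your own child.
    # That failure mode belongs to the ML classifiers in [1], not to hash scanning.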

On the other side, I believe that there are mass pedophile rings and that these scans have helped detect them and take them down.

So for me, the harm of mass CSAM hash scanning is low and the benefit is high. The balance is in favor of CSAM hash scanning, but not in favor of ML-based CSAM classification.

That's a "from specific consequences" argument, not a "from abstract principles" argument—there's probably philosophy terms for those positions that I'm unaware of—and I respect that other people could see it differently.

PS: I've actually been thinking about Google/Facebook/Microsoft in this thread, not Apple—since they never rolled out their system—but, in my mind, Apple's proposed system threaded the needle perfectly. Combined with their recently-announced e2e encryption, they provided just the right balance of privacy, hash scanning, and protection against abuse and false positives. I'm sad they've shut it down.
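
For what it's worth, the "protection against abuse and false positives" in Apple's published design was a match threshold: as I recall their technical summary, the server learned nothing about an account until it accumulated on the order of 30 matches, enforced with threshold secret sharing rather than server-side policy. Stripped of the cryptography, the gating idea is roughly this toy sketch:

    # Toy model of threshold-gated review. In Apple's published design this was
    # enforced cryptographically (threshold secret sharing), not by an
    # if-statement the server could choose to skip.
    MATCH_THRESHOLD = 30  # figure from Apple's technical summary, as I recall

    def review_needed(per_image_match_flags: list[bool]) -> bool:
        """Escalate an account to human review only past the threshold."""
        return sum(per_image_match_flags) >= MATCH_THRESHOLD

    # A stray false positive or two on an account never surfaces to anyone.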


5a. Some of those caught will likely be non-technological false positives: images of their own children[1] or CSAM images uploaded by hostile actors attempting to frame the person[2].

[1] https://www.nytimes.com/2022/08/21/technology/google-surveil...

[2] https://www.huffpost.com/entry/wife-framed-husband-child-sex...

The second one is particularly worrisome. In that case the wife was utterly and totally stupid and was therefore caught. How many times has that happened already and the person who did it wasn't caught?


> Reducing demand and creation of CSAM protects children.

You're thinking of this as if CSAM production follows a rational supply/demand relationship (such as with food, or furniture). But is there really evidence to back up the idea that CSAM production follows the same rational relationship?

Child abuse is not rational, and an abuser provides their own demand for the CSAM they create; a child will not be saved from abuse even if there's nobody for their abuser to share the photos with.


The existence of international networks trading CSAM material argues otherwise, as does the frequency with which participants who trade but don't actively produce turn out to be the ingress point for compromising those networks.

Even if the currency is barter - i.e., "more unique content" - that means people operating unchecked have both the incentive and a market for creating new material. Temporarily cornering the market with content no one has seen before means you can trade for huge amounts of other content.

After all, everyone involved is taking a huge risk by even being a participant.


It scared me when this story first broke that there was almost no focus on the real harms being addressed. It was all about calling out the flawed methodology of the scanning, not the overarching reason anyone needed to talk about scanning images at all.

I think the litmus test for whether a service perfectly respects privacy is whether a pedophile can conduct their entire operation using the service without their private data being exposed to prying eyes. By that metric, there ought to be no perfectly private services on the clearnet. To many, it is a matter of compromise and "thinking of the children."

Those who really want a higher level of privacy don't need to be beholden to a publicly traded company, subject to legal requirements, to provide it to them.


What is an acceptable murder rate for society?

I think quite obviously, no one actually wants murders to occur. We all want there to be zero murders, but that's not realistic. In a modern society, murder will happen, so we have to decide what the trade-offs are, societally. We can make and enforce laws, then make and enforce more, and so on. Ultimately, a society with a 0% murder rate isn't utopia; it's a police state where no one is free. On the balance, I think that society is worse off than one with an otherwise "acceptable" murder rate.

This is the calculus of those arguing against CSAM scanning. We all agree that we don't want CSAM, but is the cure worse than the disease?


Of course I believe it's important to protect children, and people consuming and producing CSAM deserve pure unadulterated justice. I think we should do everything reasonable to catch and punish such loathsome persons.

However, we have other means at our disposal to catch makers and consumers of CSAM that don't require blanket trawling of personal data. I am afraid to surrender privacy in the name of even something very good because surveillance is easy to abuse.

I doubt you'd advocate for the extreme of e.g. the police knocking on everyone's door and performing a full search of their house/flat/whatever every so often to search for CSAM on computers or in magazines etc. That would be sure to catch more criminals and protect more children, but the societal cost would be astronomical. No one wants to live in a police state.

And I hope you don't understand me as advocating for the complete opposite of that, where no searches are ever performed in the name of personal privacy. That's too dangerous.

There's some room between the two extremes. I get nervous about mechanisms that can easily be used to compromise privacy of people who never were under any sort of suspicion. Maybe I'm wrong about where to draw the line. Some of my family lived in East Germany under the Stasi, and what a dark time that was. Perhaps that's part of the source of my aversion to mechanisms like mass digital surveillance.


#1 is 99% of the problem, and #2-#7 is 1%.


I always thought starting with CSAM was just a convenient pretext for slowly introducing surveillance tech in the U.S.

#1-#7 are all bad. I don't think saying "some aspects of [bad thing] aren't so bad" is an effective argument against surveillance tech. Better just to call bullshit on the surveillance tech itself.


[flagged]


[flagged]


Well, that certainly worked for WhatsApp, when they delayed the new privacy policy and then just rolled it out when the media coverage died down.



