Apple kills plans to scan for CSAM in iCloud (wired.com)
736 points by ashton314 on Dec 7, 2022 | 408 comments


I'm really surprised that they tried it in the first place. I'm sure governments want it (China would love to see your Winnie the Pooh meme stash), but Apple is pretty good at fighting the US government and should have felt 100% free to say "in the absence of a law that compels us to write software, which is unconstitutional btw, we're not doing it". They have done it many times, so it felt really out of character. There must have been some contract / favor they were going after, and the opportunity must have expired. (I'm sure some large department of the federal government has some shiny new Android phones today.)

It would be interesting to figure out the real story.

My favorite part of the whole saga is that leaked letter that said "the screeching voice of the minority" will kill the project. We did indeed, and I'm happy to screech again the next time the government wants a tool they can use to scan my phone without a search warrant.


> in the absence of a law that compels us to write software, which is unconstitutional btw

In their 2016 dispute with the FBI, the gist of Apple's 1A and 5A arguments was:

Writing software is a form of speech within the meaning of the First Amendment. Forcing Apple to create software would therefore be compelled speech, and so the order to do so must be narrowly tailored to serve a compelling state interest (see: "strict scrutiny").

The FBI has a legitimate interest in investigating and prosecuting terrorists, but their request does not pass strict scrutiny:

1. The government was only vaguely speculating that there might be something useful on the iPhone, but what was requested would have far-reaching adverse consequences beyond just that single device.

2. Apple publicly values privacy and data security. Forcing Apple to create software (i.e. compel speech) that runs contrary to their values is a form of viewpoint discrimination.

3. Apple is a private party in the matter, far removed from the crime itself and the request is a lot of work. So conscripting Apple to assist the government in the matter would constitute an undue burden and therefore be a violation of Apple's substantive due process rights.


I would really like the defence to be based on our rights as human beings, rather than 'placing an undue burden on a corporation'.

Suppose next time the NSA writes the code for Apple and Apple just has to sign it; will the defence stand up in court?


Due process and speech are among our rights so that’s literally what this is

If the NSA had a way to break into a device without violating any party's rights, they would do so.

Even if the NSA wrote the code, you might argue the [Apple just has to sign it] step violates Apple's rights.


However, it's repeatedly been shown that they either 1) violate our rights with impunity or 2) go to rubber-stamp courts that practically never deny a request to invade privacy.


Wasn't that how FISA warrants were enforced?


Presumably yes, for both of the same reasons, or at least the first one.


Frankly, the FBI should have just requested the bootloader signing keys and bootloader documentation and written it themselves. The only protection then is the 5A, and Apple can't take it because it's them on the stand.


My belief is that the Feds already had the tools they needed to get the information off the phone, but wanted to use the situation to create a new standard of practice, as it was too juicy to pass up: never let a horrific event go to waste.

Luckily, it didn’t work the way they hoped, or at least not to my knowledge.


> I'm really surprised that they tried it in the first place.

Compare the approach Apple floated in their CSAM white paper to what Google is already doing today.

Google:

Is scanning for images of any naked child (including images of your own children that Android automatically backed up) and reporting parents to the police for a single false positive.

https://www.nytimes.com/2022/08/21/technology/google-surveil...

Apple's proposed system:

>Before an image is stored in iCloud Photos, an on-device matching process is performed for that image against the database of known CSAM hashes.

The device creates a cryptographic safety voucher that encodes the match result. It also encrypts the image’s NeuralHash and a visual derivative. This voucher is uploaded to iCloud Photos along with the image.

Using another technology called threshold secret sharing, the system ensures that the contents of the safety vouchers cannot be interpreted by Apple unless the iCloud Photos account crosses a threshold of known CSAM content.

Only when the threshold is exceeded does the cryptographic technology allow Apple to interpret the contents of the safety vouchers associated with the matching CSAM images.

https://www.apple.com/child-safety/pdf/CSAM_Detection_Techni...

Apple designed a system where they don't know anything about the data you upload to their server, and where they don't do anything at all about positive matches, until you cross the threshold (later revealed to be 99 images) that match against the database of known kiddie porn.

Even then they have a human review the images you shared before taking further action to protect against the possibility that there might be 99 false positives on a single account.

vs.

Google's system where taking a picture of your own child at the request of your doctor can result in Google reporting you to the police for a single false positive and deleting your account with no human in the loop.
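
For anyone who hasn't seen how the "threshold" piece can work, here is a minimal sketch of Shamir-style threshold secret sharing in Python. This is not Apple's actual protocol (which layered private set intersection on top of threshold sharing, per the white paper above); the prime, function names and parameters are purely illustrative:

    import random

    PRIME = 2**127 - 1  # field large enough for a small demo secret

    def make_shares(secret, threshold, n_shares):
        # Random polynomial of degree threshold-1 whose constant term is the secret.
        coeffs = [secret] + [random.randrange(PRIME) for _ in range(threshold - 1)]
        def f(x):
            return sum(c * pow(x, i, PRIME) for i, c in enumerate(coeffs)) % PRIME
        return [(x, f(x)) for x in range(1, n_shares + 1)]

    def reconstruct(shares):
        # Lagrange interpolation at x = 0 recovers the constant term (the secret).
        secret = 0
        for i, (xi, yi) in enumerate(shares):
            num, den = 1, 1
            for j, (xj, _) in enumerate(shares):
                if i != j:
                    num = (num * -xj) % PRIME
                    den = (den * (xi - xj)) % PRIME
            # pow(den, -1, PRIME) is the modular inverse (Python 3.8+)
            secret = (secret + yi * num * pow(den, -1, PRIME)) % PRIME
        return secret

    secret = 1234567890
    shares = make_shares(secret, threshold=3, n_shares=5)
    print(reconstruct(shares[:3]) == secret)  # True: three shares suffice
    print(reconstruct(shares[:2]) == secret)  # False: two shares are useless

The structural point is the same as in Apple's design: below the threshold the server is holding shares that tell it nothing, and only at or above the threshold can the material be reconstructed at all.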


The problem is that Google’s system “works better” from the point of view of law enforcement. By that I mean it’s much less restrictive, and will find “novel” CSAM.

The problem with Apple’s privacy-conscious approach is that it conceded the fundamental principle and agreed that scanning private (unshared) photo repositories was reasonable. Having conceded that point, everything is just a confusing technical argument about effectiveness-versus-restrictions.

Apple would have been under continuous law enforcement pressure to enhance their limited technology so that it was “as effective” as Google’s ML-based scanners. They might have resisted the pressure for a while, but once you’ve conceded the easily-understood principle then all you have to respond with is dull technical arguments that society at large won’t understand. Law enforcement would have (correctly) argued that Apple had agreed that CSAM scanning of private files was justified, so why are they using a creaky old technology that can’t find novel CSAM and is letting bad guys get away with child abuse? Eventually law enforcement would have won that battle and Apple would have been forced to deploy an ML-based scanner, which would have undermined the thoughtful privacy protections deployed in their first version.


> The problem is that Google’s system “works better” from the point of view of law enforcement.

Reporting parents to the police for a single false positive is not better in any way.

Law enforcement does not have unlimited resources that can be wasted every single time Google's algorithm screws up.


> Law enforcement does not have unlimited resources that can be wasted every single time Google's algorithm screws up.

Are you arguing from the LE perspective? Or from the taxpayer perspective? LE is happy to enforce anything that gives them more power (vide civil forfeiture laws). And the signal-to-noise problem is not really a problem if the real signal is political expediency. Selective enforcement is the way.


All you're saying is that Google needs to do a more careful human review before reporting to law enforcement. This is not a strong argument for privacy-preserving tech.


No. I'm saying that Google's system of using machine learning to look for images of naked children and reporting parents to the police when there is a single false positive (instead of only looking for known examples of kiddie porn) is exceptionally problematic.

The fact that Google refuses to put humans in the loop when they know the decisions made by their algorithms on this and other subjects are highly unreliable simply adds insult to injury.

Losing access to your account because Google's algorithm screwed up is bad enough. Being accused of child abuse because you took a picture of your child's first bath is a bridge too far.


Apple's proposed system did scanning on your device, whereas google's system does scanning in the cloud. That is the distinction. A person's own hardware shouldn't be used to report them to the police. You cannot interpret technology like this in terms of what it is applied to today, because its use will be broadened in the future.


> Apple's proposed system did scanning on your device, whereas google's system does scanning in the cloud. That is the distinction.

No, the distinction is that Apple's proposed system only scanned photos you uploaded to publicly accessible iCloud albums and did so in a way that not even they had access to the results until you crossed the "99 images that match known kiddie porn" threshold. Even then, they had an actual human being review the situation to make sure there weren't 99 false positives on a single account before calling the police.

There was absolutely no possibility of calling the police for a single false positive, which is what Google is already doing today.


False positives are inherent to the justice system. While unfortunate, they do not cross any lines or send us down any slippery slopes. They are priced in. Thus, "what about google" is a mere "what-about-ism". People are occasionally falsely accused, charged, and even convicted of crimes. That is not what is under discussion here.


> False positives are inherent to the justice system

This is in no way an excuse for Google calling the police to report child abuse because their algorithm made a bad call, and Google doesn't want to hire human beings who can intervene when their algorithms very frequently screw up.


Google is also bad. We are agreed on that.


I am against CSAM as much as anyone but NCMEC and their database is a total farce. They produce mountains of unactionable reports, false positives, and their (or was it Microsoft’s) weird hashing thing was visually identifiable as CSAM itself and reversible. That letter was just a whole mask off moment for them.

I wouldn’t be surprised if NCMEC had a hand in starting Pizzagate tbqh. I have no proof and I don’t think that they did. But if it broke the news tomorrow I would be like, “Nah that’s totally believable.”


> They produce mountains of unactionable reports, false positives, and their (or was it Microsoft’s) weird hashing thing was visually identifiable as CSAM itself and reversible.

Are there examples or reports of this? I hadn’t heard this before. (It’s not hard to believe though, eg TSA airport security)


This paper [1] cites some stats from the UK over one year (I think 2019) - from about 29.4m reports to NCMEC, 102,842 were referred to the NCA, of which 20,038 were referred to policing agencies, which led to 6,500 suspects, and about 750 of those suspects were prosecuted, which it estimates makes up about 3% of all prosecutions for indecent image offences. That comes out to about 0.7% of NCMEC reports to the UK's NCA leading to prosecution.

I read another article that linked to stats from Irish and Swiss federal police, and both of them reported discarding about 80% of NCMEC's reports to them in the first stage as not being at all criminally relevant, but I can't find the link right now.

1. https://www.cl.cam.ac.uk/~rja14/Papers/chatcontrol.pdf
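
Quick sanity check on that 0.7% figure, using the numbers quoted above (rounded, my arithmetic):

    referred_to_nca = 102_842
    referred_to_police = 20_038
    suspects = 6_500
    prosecuted = 750

    print(prosecuted / referred_to_nca)  # ~0.0073, i.e. roughly 0.7%
    print(prosecuted / suspects)         # ~0.115, about 1 in 9 suspects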


> That comes out to about 0.7% of NCMEC reports to the UK's NCA leading to prosecution

That doesn’t imply the rest was unactionable. Certainly, the reductions from 20,038 to 6,500 and from there to 750 aren’t due to this tech. The first could be because it produced multiple hits for one person, the second because of policies of the police (minors might have been warned rather than prosecuted, untraceable phone numbers ignored, and persons known to have died removed, for example)

NCMEC also would (rightfully, AFAIK) argue that they never claimed the code to be perfect, and that the earlier reductions are by design.

The real problems, IMO, are (from that pdf):

“the data do not support claims of large-scale growing harm that is initiated online and that is preventable by image scanning.”

and

“The first wave of prosecutions for illegal abuse images, Operation Ore, swept up many innocent men who were simply victims of credit card fraud, their cards having been used to pay for illegal material. A number were wrongly convicted, and at least one innocent man killed himself”


Wow, that's a Kafkaesque nightmare.


Back when the news first came out there were several blog posts posted to HN by people who host user-uploaded photos at scale who had NCMEC come knocking at their door to strong-arm them into participating in their programs.

They are not even a proper government agency yet they walk around with the balls to pretend they are equivalent to the NSA.


It’s even worse than you can imagine. Even when you find an image… you can’t delete it right away, so as not to tip your hand that the detection was automated.


> I wouldn’t be surprised if NCMEC had a hand in starting Pizzagate tbqh. I have no proof and I don’t think that they did. But if it broke the news tomorrow I would be like, “Nah that’s totally believable.”

If I’m not mistaken it was this kind of speculation that actually precipitated a bunch of people thinking a random pizza joint was involved in sex trafficking and ultimately led to someone opening fire with actual guns.

Maybe we could do with less unfounded speculation.


I can see how it’d be read like that. Please allow me to be clear, they did not do that. I do not think they did that. There is absolutely ZERO proof they did that.

However if they were to do something outlandish and insane like that, it would not be surprising. NCMEC is drunk on their own power and status. They are on a holy crusade and even constitutional rights mean absolutely nothing to them in their quest. The people at the top are crusaders on a holy war and there are no means that they would not use to these ends.

There are many people in the trenches who are fighting a good fight. They deserve our respect and empathy. However the people at the top…see above. NCMEC needs to be reined in, but of course the second anyone in Congress even looks at them sideways you get a screeching voice of the minority going on about the children and then you’re in the news as a child predator.

In a way, they are responsible for shaping the discourse such that Democrats got painted as “for” CSAM, which is what led to Pizzagate taking hold. But I don’t believe for a second they intentionally, directly created it. It’s not a big stretch from where they are now, though.


It was not “this kind of speculation” and the comparison is weak at best


Ha! If you think that Apple does not work with governments all around the world and that you can take their word at face value, you're going to have a bad time.


I'm sure Apple will give me up to China in a heartbeat, but I actually don't think they'll give me up to the US. It's all about incentives; China can say "sorry, you can't buy our cheap cheap capacitors anymore" and they go out of business. The US can write them a strongly-worded letter about how they don't love freedom, and it's their constitutional right to throw that letter directly into the trash while donating to the campaign of whoever is running against the letter-writer. Pesky constitution!!

I always worry about how many FBI / NSA employees work at Apple/Google/etc. though. If you're doing a lot of illegal stuff, I'd avoid uploading the evidence to the Internet. Even someone with the best intentions writes software with bugs or hires compromised employees.


Apple is a key member of PRISM [1] and got on board just about the time their phone business started to skyrocket. Just because the media stopped talking about PRISM doesn't mean it went away. If anything I'd expect it's only grown more emboldened given the relatively tepid public response to it. I'd argue this is likely the real reason that Huawei was banned. The US 3-letter agencies had the choice of bringing a Chinese company into the surveillance-state web, having a phone on the market they have no control over, or simply banning those phones.

As for incentive, big tech regularly flaunts nearly every single behavior that our antitrust laws were meant to prohibit. 'Back in the day' Microsoft was targeted with, and lost, an antitrust lawsuit for bundling Internet Explorer with Windows. Nowadays, bundling everything in your own packages, locking other companies into an inescapable market where you impose a 30% tribute, buying out competitors to prevent competition, and more - that's all perfectly cool, somehow.

If Apple, Microsoft, Google, et al stopped playing ball - they'd be out of the game before nightfall.

[1] - https://en.wikipedia.org/wiki/PRISM


The entire organization that investigates leaks and supply chain tampering at one of those companies is made up of former members and retirees of the U.S. and allied intelligence and law enforcement communities. It is a large organization. It is itself a very small part of the organization containing it.

This is not uncommon, of course. Cops work at Kohl’s and shadow the Zuckerbergs and bounce at clubs and set security policy at multinationals. I don’t get why this is weird unless you’re implying that working in LE or intelligence automatically makes someone untrustworthy. Which is odd, since half of the peace officer or clearance processes are establishing trust and creating an enormous hill to climb to successfully violate it. The threat model is obvious, and it only takes a moment of thought to realize that “ex-CIA person” you’re looking at with a cocked eyebrow was deemed by said organizations to be trustworthy enough to represent the interests of the United States or wherever they come from. Do they get it wrong? Sure. Is it as often as you think? No. You don’t hear the successes and the ratio is way lower than you think it is.

There are hundreds of thousands of people who would disagree with your premise. Many of them have a quiet, nonzero involvement in ensuring you can safely share that opinion and eat, a majority are former military and had direct involvement in the same, and at the end of that they’d like to put the gun away and provide for their family. Why is that automatically suspect? It’s not like they’re walking out of government with an armful of implants.

Put it this way: would you rather someone that the government spent millions of dollars training in, say, cybersecurity and active threat assessment end their career by buying and operating a movie theater or by making sure the Internet and power grid keep working? I’m about as liberal as it gets and even I can get there while acknowledging that occasionally those powers are used for malevolence. I’d counter that Sand Hill is just as capable of capitalizing that malevolence as Fort Meade and arguably more successful on some axes (no pesky laws). And sure, leak investigation is a bit stupid and mildly malevolent, but supply chain isn’t, and it’s also their prerogative to run their house as they wish.

We’d all benefit from debating the policy instead of the person a bit more, I think.


Oh, I'm not making a value judgement. I think that these government agencies should absolutely try to get people into Apple adding secret undetectable backdoors that help them catch criminals. I also think Apple is completely within their rights to make them work for it.

Basically, I totally agree with pretty much everyone that people who abuse children and upload pictures of it to iCloud should burn in hell. But I don't think that Apple should be compelled to add hell to iOS 17 if that makes any sense. If the NSA wants to hire double agents that pass the Apple interview process and add sneaky undetectable code that reports those people to them, I think that's great.

I think I just like the chaos of it all. That's why nobody votes for me when I run for Congress or whatever.


You are making a value judgment, though, even in your followup. You’ve implied that employment of one of those people is prima facie surreptitious, despite a carefully-drawn picture of how nearly all of the cases you’re gesturing toward are benign and not worth your ongoing concern. Foreign governments doing exactly what you’re alleging, on the other hand…

To be honest, that opinion makes you unelectable because you’ve alienated a huge group of people (way, way bigger than you think, and across the political spectrum), partially by imagining the beltway and back rooms of FAANGs as a le Carré novel. Reality is boring. Do horror shows happen? Duh. But take PRISM, for example. PRISM is an efficient, automated legal warrant process to streamline subpoena and provenance of user data for national security purposes, nothing more, but everybody screamed oh my holy hell! because the Snowden slides didn’t contextualize that and could be taken to imply something far worse. It’s toil reduction. Ask anybody in compliance at major companies. Subpoenas are a huge bitch at scale and nearly all major companies have a PRISM equivalent facing the other direction precisely for the reason PRISM exists. It’s not cigar smoke and port mirroring but for some people that’s more fun to imagine, I guess.

Seriously, reality in all of it is far less interesting than you think, and that’s one example of many. And often the fetishization of the secrecy and imaginative scenarios takes away from the real issue, which is market forces incentivizing the erosion of privacy and civil liberties. So ironically, by worrying you’re weakening the worry.

I know this because I do.


> PRISM is an efficient, automated legal warrant process to streamline subpoena and provenance of user data for national security purposes

Dropping the euphemisms, it's a way for cops to do their jobs more easily. That's cool when they're doing it transparently, with sufficient oversight and protections against abuse, and with a high success rate. The extent to which any of those three is true is pretty dubious.

Even if you truly do believe in this, which I totally understand–I have plenty of friends who work in this area, they are really behind the whole "we protect America" thing and I'm guessing you're probably in a similar camp–the issue is even more fundamental: it's one of attitude. Just because you're doing something good doesn't mean you have the right to have your job made easier. That's just not how it works. You don't get to take nuclear secrets and work on them from your personal laptop on the beach, no matter how pure your intentions are. When you work on things that are "dangerous" you don't get to just do whatever you want. The PRISM leaks were a scandal because they made the American people feel like the government was not accountable to them, and that is the price you have to pay to do anything in this country. End of story.


Look, you make lots of great points.

But the real question is “why do so many people have these suspicious attitudes towards law enforcement and CIA and so on”.

Might it be because of the Hoovers and the Nixons and the countless CIA overthrows and the fake WMDs and the support for Israel and the military industrial complex and 2008 and ..

The government has most definitely not earned our trust. And yeah, that will lead to scenarios where, when we (the people) see violations of constitutional rights, we don’t rush to nuanced investigations; we rush to assuming it’s another instance of a corrupt government.


This type of shit is why I always refuse to get a clearance. I wouldn’t be able to strut around proudly about how I’m destroying the country like this guy.


Not just this country — but countries all around the world.


I think you two might be talking about different groups of people? I read your comment as "current employees of three letter agencies act as spies by embedding themselves into tech companies" while the comment you're replying to seems to be talking about "people who used to work for these places should be able to get jobs as civilians".


To be fair, I recall seeing an article here a few days ago stating that Apple was moving a significant chunk of their manufacturing out of China. I don't think Apple is that dependent on China. I'm sure Apple has lines they will not cross. Maybe giving you up to China isn't one of them, but I'm sure they have them.


Absent significant changes in the world order, no major company will ever be successful in severing relations with China. No nation today is positioned to both bring up the manufacturing AND compel their people to work. Without the two, no nation can match China’s production and thus produce the alternative “cheap capacitors”.


Where do you think Apple is headquartered? There’s a lot the US government can do to make them heel.


Apple publishes the volume of government information requests across the world: https://www.apple.com/legal/transparency/choose-country-regi...

They try to reduce their exposure to these government information requests by keeping personal data on the device and storing anonymized records on their servers when possible.


I remember hearing about these types of requests that also have a gag order placed on them so companies can't explicitly inform the public about them. I haven't heard much about that sort of thing lately though. Either very good or very bad.


Keep in mind that warrants and gag orders are very much behind the state of the art. Apple might receive a request for all the information they have on you, and are absolutely compelled by law to give that to them. But unfortunately, the information is all encrypted with a key that only you know. So the government has no choice but to come to your house and hit you with a rubber hose until you tell them the key, but bad news! it turns out it's illegal to do that. They can't hit you with a rubber hose until a pesky jury has heard the evidence against you and returned a guilty verdict, which if they had, they wouldn't be probing your iCloud account.

What is more of a grey area is whether or not a court can compel Apple to hit you with some malware. The government can ask "give jrockway a special firmware so that whenever he tries to shitpost on HN, the browser locks up until he types his iCloud encryption key into nsa.gov". What's not clear is whether or not Apple has to obey that request. It seems pretty likely that the government doesn't have that power, so Apple can tell them to go away. But who knows, it's perfectly possible to pass a constitutional amendment that literally says "Apple, Inc. has to dedicate 100% of their resources to spying on readers of the dark website Hacker News", and it's all moot. I've seen Congress (or rather, the states in this case) trying to agree on less controversial things before, though, and I'm not too worried about anything changing. (What's the opposite of pro? Con. What's the opposite of progress? Congress.)

Encryption is literal kryptonite for democratic governments. Apple is putting themselves at the forefront of the debate. It will be interesting to watch!


There's no such thing as literal kryptonite.


> "in the absence of a law that compels us to write software, which is unconstitutional btw"

This isn't really settled. Writing software is not always constitutionally protected speech, and Apple being compelled to write software would probably not constitute a violation of the First Amendment. Federal wiretap law can compel companies to make it easy for the government to get data via a warrant (which necessarily entails writing code to produce that data) and has been upheld in the past. Also companies are often liable for the code they write. Both of those are examples of when code is not considered speech.


The government compelling Apple to cause people's phones to search themselves (with no probable cause that the suspects of the searches committed a crime) would be facially unconstitutional under the Fourth Amendment, not the First.

Compelled speech would be an interesting argument against compelled writing of software, but is definitely the weaker one here.

Edit: Oh, I see in one of the other replies GP raised the first amendment. Just take this as a reply to the idea in general...


The government need not care how Apple complies with the law, which could merely state that cloud storage providers are liable for illegal material stored there by their customers, regardless of the cloud provider's knowledge. This would be catastrophic to cloud storage in general, of course, but given that strict liability is a thing, I don't see how such a law could be ruled unconstitutional.


Trying to work around the 4th like that might manage to make the law facially constitutional, but I'd be surprised if it made the searches conducted as a result of it valid.

By my understanding you have to avoid a "warrantless search by a government agent" to avoid violating the constitution. The "warrantless search" part is really beyond dispute, so it's the "government agent" part that is in question. In general "government agent" is a term of art that means "acting at the behest of the government", but I don't know exactly where the boundary lies. I'd be fairly surprised if any law that allowed for accidentally storing CSAM after a failed search, but didn't allow for accidentally storing CSAM content without a search, didn't make the party doing the search a government agent. If you make the former illegal, cloud storage at all (scanning or not) is an impossible business to be in.


You already have the FBI partnering with computer repair shops to do dragnet searches of customer's hard drives for CSAM when they bring their computers in for repairs.


I'd argue the Fifth Amendment should apply to mobile phone handsets, but law enforcement would pitch a fit to lose those.


Section I of the Thirteenth Amendment reads: “Neither slavery nor involuntary servitude, except as a punishment for crime whereof the party shall have been duly convicted, shall exist within the United States, or any place subject to their jurisdiction.”


You heard it here first: all regulations are slavery!


There is a difference between regulations and compelled action. The government can make your ability to do X conditional on you also doing Y, but it generally can't just make you do Y.

The exceptions are actually quite few outside of conscription, eminent domain sales, and wartime powers.


That sounds right to me. The government has tremendous powers. Forcing people to write computer programs isn't one of them. (They could have saved a lot of money on healthcare.gov if they had that power!)


How does conscription fit into that picture?


It fits in if 5 out of 9 supreme court justices want there to be a draft.


> This isn't really settled.

I agree with that. Especially in recent years, it's nearly impossible to tell what is and isn't settled case law.

It would be an expensive battle for the taxpayers to compel Apple to write software, I think. They've tried and failed. It will just be more expensive next time.


> Federal wiretap law can compel companies to make it easy for the government to get data via a warrant

Not the same thing. The wiretap law cannot compel any company, only licensed telcos. These companies, in exchange for the license to provide such services to the general public, get some advantages but have to agree to a set of limitations.

I have my doubts that Apple would be a holder of such a license that would make it subject to such regulations.


> There must have been some contract / favor they were going after, and the opportunity must have expired.

Gratis dragnet scanning in return for the DOJ looking the other way on their monopolistic behavior. There's been an administration change and the new guy may be ready to work on behalf of the people.


> *"in the absence of a law that compels us to write software, which is unconstitutional btw"

I'm not sure that's true. Consider telcos, for example. They're required to write software to make it easy for law enforcement to tap into phone calls, given an appropriate warrant. Telcos aren't just allowed to throw up their hands and say, "sorry, we don't have the capability to let you do that".

And regardless, the law need not compel them specifically to write software, it need only make them liable for any illegal material stored in iCloud, with or without their knowledge. So if someone is caught with CSAM on their iPhone, for example, and it's discovered that person had iCloud backups enabled (or whatever it is), then the government would have pretty clear pretense for a warrant to search that user's iCloud files.

Granted, Apple could then implement end-to-end encryption so even they would not be able to access the files, but that might not even be a good enough defense, if it can be proved that the phone itself had uploaded CSAM to iCloud.


> They're required to write software to make it easy for law enforcement to tap into phone calls, given an appropriate warrant.

Not exactly right. They are required to make it technically possible to tap a phone line, which can be as simple as giving access to cables. It's not necessary to give investigators a tap-from-home capability and free foot massages.

> it need only make them liable for any illegal material stored in iCloud, with or without their knowledge.

I can see a mall lawyer ripping up that requirement three ways - Apple has an army of way better educated legal representation.


There is also the issue that once the government has access, the telco can't create a fake software layer just to use this excuse. And if they create a real and required software layer, then there is an expectation of similar access going forward.

This is a great reason to never grant it in the first place. It forever locks access and security at that point in time.


Just to restate the solution... it didn't scan your phone. For photos that were going to be synced to iCloud it would use the handset to create a hash and send that to Apple. It was the most private solution out there for online photos.

Google just outright scans your photos, as does Microsoft. I guess Apple probably does too, or will now.

So your "win" here was "Apple can't use your handset to help ensure the privacy of your photos while also trying to meet local laws."

Meanwhile you're fine to register your face on that exact same handset, let it sync to your iPad, and have it store and share your voice print for Siri. People are happy for Apple to scan their heartbeat and send out alerts if they fall over.

But don't take a hash of a photo you're about to upload to Apple's servers! That is too far.

Ridiculous.


> I'm really surprised that they tried it in the first place

Might be because they saw https://ec.europa.eu/commission/presscorner/detail/en/ip_22_... coming. If that becomes law it will introduce “an obligation for providers to detect, report, block and remove child sexual abuse material from their services”

Might also be because their lawyers said banning apps containing even mild sexual content, while not even trying to do something like this, wouldn't hold up in court.

Being as large as they are, they can even afford to do this with the goal of seeing public outrage, ‘forcing’ them to not go forward with the idea.


I am not an Apple user (won't be, because they cost an arm and a leg in my place, so why bother) but I WAS and AM an active opponent and part of the said "minority".

The problem is not CSAM scanning for finding pedos. This opens doors for, as you said, China, because "constitutional guarantees" and other legal stuff only exist on paper when it comes to fighting and oppressing dissenters. Sure, there are rules in place, but the best thing is that NO one should have this function in the first place. If you open the door a crack, you have to open it fully.

I don't even care about "search warrant" because, tell me: do you think that in China there is some sanctity in a "police search warrant" issued by a judge who will consider the rights of end users, with the government breathing down their necks for "results"? They have institutionalized and criminalized dissent, like India has, for example, and they see dissenters the same as pedos, maybe more so; this tech should not exist in the first place.

Court orders don't mean shit when the entire system is adversarial against you. If there were no means, then nothing would happen, but if there is a way, they will use it.

I am a dissenter, so I understand how not just Apple but other manufacturers would be forced to allow this, because "well, if Apple can do it, you can too; if you don't, you won't be given a license to sell here", and they might not be as honest about their actions. So yeah.


Do you have a source for the "not constitutional" bit? I referenced it in a post below and was looking for a source.


I'm not a lawyer, but my current theory is that code is speech, and the government cannot compel you to say something you don't want to say.

There is some case law on the saying something you don't want to say part: https://www.mtsu.edu/first-amendment/article/933/compelled-s...

And there is some case law on the whole "code is speech" thing: https://en.wikipedia.org/wiki/Bernstein_v._United_States

(Apple has cited Bernstein v. US as the reason they didn't have to write an app to unlock some mass shooter's phone for a fishing expedition, so that's why I think they agree with my didn't-go-to-law-school opinion there.)

(BTW, an aside to the aside: DJB represented himself in that case. Just some random number theory lover / C programmer who increased freedom for everyone in the country in his spare time. Super cool guy.)


From the Wikipedia article you linked, in the 1999 case that is cited as precedent (and is the one that matters), Bernstein was not self-represented.

> Bernstein was represented by the Electronic Frontier Foundation, who hired outside lawyer Cindy Cohn and also obtained pro bono publico assistance from Lee Tien of Berkeley; M. Edward Ross of the San Francisco law firm of Steefel, Levitt & Weiss; James Wheaton and Elizabeth Pritzker of the First Amendment Project in Oakland; and Robert Corn-Revere, Julia Kogan, and Jeremy Miller of the Washington, DC, law firm of Hogan & Hartson

As for the original question, the framework for this kind of legislation is usually “we ban the hosting of CSAM, you either implement something that eliminates it or you risk being fined for breaking the law”. That may not sound different to you but it is an extremely clear distinction in first amendment terms from “the state department may deny you an export license to publish your code”. Bernstein v US was saying that the burdens to publishing were too high and so he was unable to speak. The burdens did include submitting code and ideas to the government. With CSAM scanning, you are not forced to publish your code (speak), just to do something that satisfies the ban on hosting the content. There are thousands of completely constitutional laws that require you to do stuff a certain way that may involve writing code. This would be one of those.

The San Bernardino thing is a bit more like Bernstein — the government wanted Apple to give them a software tool to unlock a phone. A bit like “give us your code and ideas” but still not quite “give us your code and ideas or we silence you”.


The irritating part of encryption is that it's hard to determine what the underlying data is. With one key, it could be a word document that says "paying my taxes is the one joy i have in life" and with another, it could be the most horrifying image you can imagine.

I understand the government's interests in this particular issue, people that abuse children are the biggest pieces of shit that I can imagine, but unfortunately math is a pain in the ass and their laws are not possible to administer.

If you think someone is abusing children, you can always send a cop to their house and have them check. That is well within the rights of the government, and I'd even go so far as to say I support that right.
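
To make the "one key gives a tax document, another gives something horrifying" point concrete, here's a toy one-time-pad example (pure XOR, not what any real cloud service uses): for any ciphertext you can construct a second key that "decrypts" it to an arbitrary message of the same length, so the ciphertext alone pins down nothing.

    import os

    def xor(a: bytes, b: bytes) -> bytes:
        return bytes(x ^ y for x, y in zip(a, b))

    plaintext = b"paying my taxes is the one joy i have in life"
    key1 = os.urandom(len(plaintext))      # the real key
    ciphertext = xor(plaintext, key1)

    # Forge a key that makes the same ciphertext "decrypt" to a decoy.
    decoy = b"completely innocuous shopping list goes here!"
    decoy = decoy.ljust(len(plaintext))[:len(plaintext)]  # pad/trim to match length
    key2 = xor(ciphertext, decoy)

    print(xor(ciphertext, key1))  # the original document
    print(xor(ciphertext, key2))  # the decoy

Real ciphers like AES-GCM don't have this exact property (the authentication tag rejects a forged key), but the broader point stands: without the key, the stored bytes don't tell the provider what the underlying data is.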


Not difficult, just limited in power. Any time you find decrypted material on a device, figure out where it was stored, and send that company a penalty notice for failing to report it.


As far as I know, that's not a federal law. I can walk past someone violating every human right and constitutional amendment in broad daylight and it's my right to tell nobody about it ever. Apple has the same right, it seems. Amend the Constitution if you don't like it.


Uh, I wasn't saying it was. I was sketching out how you would implement such a law without difficulty. The "you" is the FBI. This is all a hypothetical, hence my use of the subjunctive throughout.


I don't believe that's a constitutional right at all.

Consider "mandatory reporters". People like schoolteachers are legally required to report certain things like parental abuse or neglect of their students. If the government wanted to write a law that said you are required to report a crime if you witness one, I'm not sure that would be unconstitutional.


I don't think they are legally required to do anything. Them having a license to work in their field is dependent on it, though.

As an aside, if you are legally required to report something you have seen, and do not, then you are violating the law, which means legally requiring you to report yourself is unconstitutional under the 5th amendment, so we enter into paradox territory...


Yeah, I think that if someone lost their medical license for, say, endorsing a particular candidate up for election, that would be an EZ 9-0 supreme court victory for them.

As they said on slashdot 100 years ago, child porn is the root password to the constitution. Configure sshd to only accept keys!


I can’t speak for the USA but in the UK, professions are required to act in certain cases where a layperson isn’t.

For example, a layperson can ignore someone having a heart attack but a doctor is required to help (even if it’s just calling for an ambulance).


I must be missing something. The government compels people to speak all the time. In courtrooms every day, witnesses are brought to testify in a manner they can only refuse if it criminally implicates them. A witness simply not wanting to answer a question can and will be held in contempt of court and subject to imprisonment.


From [0]:

...though all compelled speech derives from the negative speech right, that right lends itself to two distinct models representing two distinct approaches to compelled speech: compelled speech production and compelled speech restriction.

A. Compelled Speech Production

Intuitively, the right to free speech necessarily implicates the right to choose what not to say. The characteristic element of this negative speech right model is a compelled movement from silence to speech. A prohibition occurs as a function of the government regulation, but it is a prohibition on silence.

...The original compelled speech cases follow the speech production model of the negative speech right. West Virginia State Board of Education v. Barnette, 319 U.S. 624 (1943), the original compelled speech case, followed this model: school children had no capacity to opt out of reciting the Pledge of Allegiance and saluting the flag (id. at 626). If they could, they would have remained silent at their desks. Instead, the West Virginia regulation required them to enter public discourse, to engage in speech where they otherwise would not have done so.

...It is the right to be able to say what one wishes to say and nothing else. But since every law implicates autonomy to some degree, the Court has been more lenient unless the infringement on speaker autonomy raises additional concerns under the circumstances. The government does have some capacity to compel production of speech expressing a particular viewpoint given its need to take positions on political issues.

...In other circumstances, though, compelled speech production need not trigger maximal constitutional suspicion if the law does not meaningfully infringe on speaker autonomy.

B. Compelled Speech Restriction

The second model of the negative speech right involves compelled speech that restricts speech. The amount of possible speech supported by any given speech medium is often limited. Forcing someone to speak thereby forces the speaker to occupy a portion of a limited speech medium with expression that she would not otherwise have engaged in. The result is that she no longer has the room to say what she otherwise would have used the limited speech medium to say.

...In Tornillo, the Court applied strict scrutiny and invalidated a Florida right-of-reply statute that required newspapers to publish the response of a public figure about whom the newspapers had previously published criticism (Tornillo, 418 U.S. at 244, 258). In so doing, the Court relied on the notion that the limited nature of the newspaper medium meant that newspapers could publish only so much speech (id. at 258). By compelling some speech, the law stopped the newspapers from fully expressing what they wanted to say.

[0]: https://harvardlawreview.org/2020/05/two-models-of-the-right...


But they don’t have to compel code, they just have to create a law that says “you must regularly review images for CSAM”.


Or even just, "you are liable for any CSAM your users upload to your platform".


I was personally quite disappointed at the time, but I don’t care anymore. I completely stopped using iCloud at the time, and I’m not going back. Migrating off it was such a pita, and I don’t want to have to do it a second time.


>Apple is pretty good at fighting the US government

To me, thinking that any US-based entity can fight the US government seems a bit absurd.

They can regulate and inspect it into oblivion.


>that leaked letter that said

What letter is this in reference to? Not sure I remember this from 2021. Would love to read it if anyone has a link.



Yeah. This reminds me of how Airbnb rolled out "show total" pricing (inclusive of cleaning fees, tax, etc.) right before Biden mandated such a practice into law.


> "screeching … minority"

Could someone put that on a shirt with a nice design and take my money? :D


They also appear to have killed the crazy part: scanning on device. That’s the crazy part, not scanning in iCloud.


Scanning in the cloud can be just as problematic.

Google is not just scanning for known examples of kiddie porn, they are scanning for images of naked children, and reporting parents to the police.

For instance, they tried to get a parent arrested for using their Android phone to get medical assistance for their child during lockdown.

>With help from the photos, the doctor diagnosed the issue and prescribed antibiotics, which quickly cleared it up. But the episode left Mark with a much larger problem, one that would cost him more than a decade of contacts, emails and photos, and make him the target of a police investigation.

“I knew that these companies were watching and that privacy is not what we would hope it to be,” Mark said. “But I haven’t done anything wrong.”

The police agreed. Google did not.

https://www.nytimes.com/2022/08/21/technology/google-surveil...


I wouldn't be surprised if Microsoft were doing the same thing with photos viewed in their default Photo app (the names of the images viewed in that app are already sent as telemetry, along with the times they were viewed and how long each image was left open).

Windows Defender may even be checking the files it scans for whatever content MS considers objectionable. We likely won't know unless it becomes a news story or a whistleblower tells us.


> I wouldn't be surprised if Microsoft wasn't doing the same thing with photos viewed in their default Photo app

Google has apparently shared their algorithm with others, but only Facebook was mentioned by name.

>in 2018, Google developed an artificially intelligent tool that could recognize never-before-seen exploitative images of children. That meant finding not just known images of abused children but images of unknown victims who could potentially be rescued by the authorities. Google made its technology available to other companies, including Facebook.

https://www.nytimes.com/2022/08/21/technology/google-surveil...


I had no idea the photos app was doing this. This seems beyond excessive and I’m glad I happen to use the old photo viewer. The new one always takes a second to load the photo, and I suppose that’s because it’s doing all this other stuff in the background.


I often hear the argument that the average user does not care about privacy, but I don't think that this is true.

Instead I think the average user is just not aware of the insane amount of data that is exfiltrated and what can be done with that data, because the software does not make it clear what it actually is doing.


Honestly, why on Earth does Microsoft want to know the filenames of the photos being opened? And, how often each one is viewed? What possible enhancement to the app can be justified because of this information?


Cynically, I'm sure it could be "justified" in all kinds of ways like "knowing more about the photos viewed most often with our Photos app helps us to determine what new features would best enhance the user experience. Only metadata is being collected, so no employee is ever able to view your pictures. We value your privacy blah blah blah"

Even more cynically, it just gives them more data about you and your interests (with viewing frequency/duration being a measure of interest) which lets them push more narrowly targeted ads at you, which gives them more money, which lets them invest more in improving Windows as a whole or whatever else they tell themselves so they can face their own reflections.


I checked out this claim and was able to find this:

* https://www.reddit.com/r/Windows10/comments/8zk1yy/a_simple_...

It stores that information in a local index, and also stores telemetry data which counts interactions with the app, but there is no evidence it is uploading indexes to MS.

Do you have any evidence of this?


I believe that info was in one of these links:

https://learn.microsoft.com/en-us/windows/configure/windows-...

https://learn.microsoft.com/en-us/windows/configure/basic-le...

They both 404 now. I had saved a copy of both at the time, but those files must be on another drive. I'll keep looking though!

There were several articles about it at the time https://www.extremetech.com/computing/247311-microsoft-final...

https://www.ghacks.net/2017/04/06/windows-10-full-and-basic-...

These documents were massive and filled with lots of information buried in places that seemed odd and scattered across different sections so that you couldn't get all the information about what an app collected in one section/table of the document.

Another thing I remember from those pages was that in some circumstances short clips (less than 1 minute) of the videos you played using Windows' default player would be sent to MS as well, but they said that would only happen in the event of crashes.


Even if it isn't currently checking against CSAM databases, it does by default share file metadata with Microsoft.


If you're still using Microsoft or Google services at this point, you deserve what you get.


Victim blaming is very easy, yes?


Yes, pretty easy. Keyboard-warrioring in general is.

But that's not the point. Victim blaming doesn't do any good for today's victim, but maybe it will help show the next person that their fate is under their control.


If the photos are hosted on Google's servers, they have every right to scan whatever they want. If you are privacy conscious, you can upload files you encrypt yourself at the cost of not being able to share easily. I recognize edge cases like the one in the article you linked, but I don't see an alternative. Not scanning for CSAM on your own servers isn't a realistic expectation.
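
For what it's worth, the "encrypt it yourself before uploading" option is not much code. A minimal sketch using the Python cryptography library's Fernet (the filename and the upload step are placeholders; the important part is that the key never goes to the provider):

    from cryptography.fernet import Fernet

    # Generate once; store somewhere the cloud provider never sees
    # (password manager, hardware token, printed backup, ...).
    key = Fernet.generate_key()
    f = Fernet(key)

    with open("family_photo.jpg", "rb") as fh:
        token = f.encrypt(fh.read())   # this ciphertext is what gets uploaded

    with open("family_photo.jpg.enc", "wb") as fh:
        fh.write(token)

    # Later, after downloading the blob back:
    original = f.decrypt(token)

The cost is exactly the one mentioned above: no server-side thumbnails, search, or easy sharing, because the provider only ever sees opaque blobs.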


> If the photos are hosted on Google's servers, they have every right to scan whatever they want.

Google is trying to get people arrested because Android backed up their personal photo libraries to Google Photos.

> The day after Mark’s troubles started, the same scenario was playing out in Texas. A toddler in Houston had an infection in his “intimal parts,” wrote his father in an online post that I stumbled upon while reporting out Mark’s story. At the pediatrician’s request, Cassio, who also asked to be identified only by his first name, used an Android to take photos, which were backed up automatically to Google Photos. He then sent them to his wife via Google’s chat service.

https://www.nytimes.com/2022/08/21/technology/google-surveil...

> Not scanning for CSAM on your own servers isn't a realistic expectation.

Google isn't just scanning for known examples of kiddie porn, they have developed an algorithm that scans for any image of naked children and, as usual, there doesn't seem to be a human in the loop when their algorithm makes a bad call.


It's Google's servers. I imagine every content hosting company has a heuristics-based approach to detecting CSAM. Google has a habit of tuning moderation algorithms toward false positives rather than false negatives.

Not having a human in the loop is terrible, and it's frequently a problem when it comes to getting support from Google companies. That's a separate issue from Google or Dropbox being able to scan unencrypted files for CSAM. Google's policy of automating as much as it can has tons of downsides. But it's understandable when you look at the scale at which Google functions.

It's important to separate the policy of scanning from Google's terrible appeal process and the algorithm false positives.

I would feel differently if the story was that Android scanned an outgoing SMS or a photo saved locally. I am not sure where the balance point is to identify and report CSAM while also respecting users' rights to privacy.


> It's Google's servers. I imagine every content hosting company has a heuristics based approach to detecting CSAM.

The slippery slope claim made in the past was that they were only looking for files that match known examples of kiddie porn.

For instance, this tech from Microsoft:

>PhotoDNA creates a unique digital signature (known as a “hash”) of an image which is then compared against signatures (hashes) of other photos to find copies of the same image. When matched with a database containing hashes of previously identified illegal images, PhotoDNA is an incredible tool to help detect, disrupt and report the distribution of child exploitation material.

https://www.microsoft.com/en-us/photodna

Reporting parents to police because they took pictures of their child's first bath and those photos were automatically backed up to Google servers is much, much worse.
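
For anyone wondering what "hash matching" means in practice: PhotoDNA itself is proprietary, but the general shape, a perceptual hash compared against a database by Hamming distance rather than an exact cryptographic hash, fits in a few lines. This is a toy difference hash, not PhotoDNA or NeuralHash; the filenames and the distance threshold are made up:

    from PIL import Image  # pip install Pillow

    def dhash(path, size=8):
        # Shrink to (size+1) x size grayscale, then record whether each pixel
        # is brighter than its right-hand neighbour; those bits are the hash.
        img = Image.open(path).convert("L").resize((size + 1, size))
        px = list(img.getdata())
        bits = 0
        for row in range(size):
            for col in range(size):
                left = px[row * (size + 1) + col]
                right = px[row * (size + 1) + col + 1]
                bits = (bits << 1) | (1 if left > right else 0)
        return bits

    def hamming(a, b):
        return bin(a ^ b).count("1")

    database = {dhash("known_flagged_image.png")}   # hypothetical database entry
    candidate = dhash("uploaded_photo.jpg")         # hypothetical upload
    is_match = any(hamming(candidate, h) <= 5 for h in database)

Unlike a cryptographic hash, near-duplicates (resized, recompressed, lightly edited copies) land within a small Hamming distance of each other, which is what lets these systems match altered copies of known images, and also what creates the collision and false-positive risk discussed in this thread.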


Please stop using 'child's first bath' as an indicator of overreaction by the algorithm. It is not helpful to argue something which will make you look hyperbolic or histrionic once your argument is looked into.

1. The picture(s) in question mentioned above which got an account shut down and police notified was not a 'child in a bath', it was close ups of a child's genitals

2. You are probably not a parent since a child's 'first bath' is not a thing. They are bathed as infants starting immediately. If you are referring to a child's first independent bath, then no one should be taking pictures since private bathing is not something that should be intruded upon for picture taking

All that said, I agree with you.


> 2. You are probably not a parent since a child's 'first bath' is not a thing. They are bathed as infants starting immediately. If you are referring to a child's first independent bath, then no one should be taking pictures since private bathing is not something that should be intruded upon for picture taking

Are you a parent? A prudish parent perhaps?

I have pictures of me as a kid in a bath (genitals obscured underwater). It's a common thing among parents who AREN'T DRIED UP PRUDES AND FRIGHTENED OF NUDITY.

The bottom line is - another case of our ability to use our technology as we desire being interrupted and interfered with. I hate to even remotely invoke RMS, but... He wasn't wrong!


Please re-read what I wrote. Was it really unclear? I said that if a child is bathing independently that it is not appropriate to barge in and take pictures of them. They would be at least 5 or 6 years old at that point. Is that prudish? Why is everyone so emotional about this? All I said was that it was a terrible and inaccurate argument to talk about pictures of a child in a bath because that is not what flagged the algorithm and there is no such thing as a child's first bath.

Please try to parse a response before yelling at someone for something they didn't state.


> 1. The picture(s) in question mentioned above which got an account shut down and police notified was not a 'child in a bath', it was close ups of a child's genitals

No, the example of Google calling the police on a parent given in the NYT article was because the parent sought medical assistance for their child during lockdown and the doctor requested pictures of the issue the child was having.

This is no more an acceptable reason for Google to report parents to the police for child abuse than taking pictures of your child's first bath, and taking pictures of your child's milestones has long been something that people have done.


>> 1. The picture(s) in question mentioned above which got an account shut down and police notified was not a 'child in a bath', it was close ups of a child's genitals

> No, the example of Google calling the police on a parent given in the NYT article was because the parent sought medical assistance for their child during lockdown and the doctor requested pictures of the issue the child was having.

These things are not mutually exclusive.

Stop being defensive. It doesn't help you. I am trying to help you by telling you that you are making a shitty argument.


> I am trying to help you by telling you that you are making a shitty argument.

OK. Allow me to tell you that, in my opinion, you are the person making the shitty argument.

Your argument that "no one should be taking pictures" of their child's first bath is just cringe-worthy.

Google is to blame for the situation when they call the police over a single false positive from their faulty algorithm, not parents who take pictures of their own children.


Are you even reading what I am writing? I never said that no one should take pictures of their children's first bath -- I said that there is no such thing as a child's first bath, just like there is no such thing as a child's first meal, or a child's first nap, or a child's first bowel movement. They are given baths as infants.

You are obviously either unable or unwilling to parse what I am trying to tell you, and fixated on this notion that you have to be 'right' no matter what even though I am not telling you that you are wrong!

I am at a loss as to what to reply, besides urging you to read things before continuing to respond; otherwise you end up talking around people instead of with them.

Good day.


How do you take a photo on Android so that it is only stored locally?

The whole photos experience on Android has been simplified to break that distinction and there is no way to even really know or realize what is happening - BY DESIGN.

The real answer here is, again, Android is not privacy first by default.

BTW, parents taking medical pictures of their children != CSAM, which is an acronym for "Child Sexual Abuse Material"... Taking care of your child's medical problem isn't "sexual abuse", despite the conflation of the two that people are glossing over in this thread.


I installed an offline gallery and disabled Google Photos. For years and years, Samsung Android phones didn't have the option of cloud backups.

There's a very clear distinction between local and cloud, every single photo taken using any camera app is local only. Galleries, like Google Photos, and backup services like Dropbox have an explicit setting to enable cloud backup. Google Photo backup is very distinctly different from Google Drive and phone backups in general.

I use FolderSync to sync it with a self-hosted NextCloud instance. If I am travelling, I prefer using Syncthing, a wifi file server running off the phone, or a physical wire to transfer photos. I know exactly what pictures are sent where and backed up with what method. I am not sure why you believe Android treats photo and file management like a black box; I don't believe Apple does either. Apple iOS devices are much more difficult to manually back up, as they only allow background photo sync to iCloud; if the screen is turned off, syncing to a third party will most likely fail. But there is a straightforward toggle to disable Photos backup to iCloud.

Android file management is superior to any other mobile OS. Android is file privacy first by design, as long as you are comfortable managing backup and syncing with self-hosted and self-managed solutions. I have gone the additional step of rooting my phone as well to get around any Android limitations on what folders certain apps have access to.

No one is arguing against the last point you made. No one was conflating the two.


> There's a very clear distinction between local and cloud, every single photo taken using any camera app is local only.

>Android is file privacy first by design

Yet we have the NYTimes article stating that the only ways Google could have access to the photos they tried to have a parent arrested for would be to scan content on the device, scan automatic backups, or spy on the text messages sent from the device.

The examples in the story were parents sharing images directly with their doctor, not with the general public.


I wonder how algorithms like Stable Diffusion will impact CSAM detection policies.


If a child isn't exploited to make it, it's not technically CSAM.

But you could maybe argue that if the ML model is based on CSAM then it's a product of it and would need to be banned.


Google scans your data for anything it finds objectionable. Here's another example of Google flagging and deleting a video from a user's Drive storage: https://twitter.com/nicholasdeorio/status/158386625133969818...


There are many examples of Google's automated systems making egregious mistakes while scanning user data with no human in the loop to review the decision.

>Ed Francis studies the evolution of military technology over at his YouTube channel, Armoured Archives. But this week, Francis says five years’ worth of research stored on Google Drive has become inaccessible thanks to Google’s automated error.

>Francis says the file in question was simply a collection of data on various tanks for a coming video on how military vehicles have evolved across historical conflicts. But Google's automated systems deemed the file a terrorist threat, resulting in a complete lockdown of his YouTube, GMail, and Google Drive accounts.

https://www.vice.com/en/article/qj8yj7/google-locks-historia...

Having a shitty algorithm kill your whole Google account with no way to reach a human to fix the problem is one thing.

Having a shitty algorithm report you to the police for taking pictures of your child's first bath is a bridge too far.


I remember that story. I came across a post on the r/datahoarder subreddit on the 17th, posted by a fan, asking people to upvote the support ticket. I cross-posted a link on Hacker News on the evening of the 22nd, and by the following morning the account was restored and taken care of. I can't say for sure how much posting it here helped, but based on the timeline I think it made a difference. I believe it was also one of the highest-upvoted stories of that day.

What stood out to me was that one of the first couple of comments in the support thread was the contact information for the Museum Director of the Swedish Tank Museum.

My unconfirmed theory is that Google's OCR PDF service flagged specific text and pages in the PDF of tank plans and repair manuals, as they are considered classified and shouldn't be public unredacted. It's historically significant and absolutely worth preserving, but I can see why it might get flagged.

This was also interesting in that it got the entire account taken down. Usually Google flags a file and just disables sharing. Google disabled sharing of a backed-up recent Kanye interview filled with hate speech, a file called "Kanye champs removed video.mp4". I have not watched it, but it was my first time seeing something removed because it "may violate Google Drive's Hate Speech Policy. Some features related to this file were restricted."

I am not sure if it was the post to HN or the various videos others made that helped, but it made me realize that one of the best ways to get in touch with a human at Google is posting here on HN, and that Google will continue to become more aggressive with scanning files.

Section 230 was modified by FOSTA-SESTA with respect to hosting sexual solicitation, and EARN IT, which would effectively make E2EE illegal and make hosts liable for illegal content uploaded by their users, is back under review by Congress.

Would scanning be okay if there were a government entity with a human you could appeal to who would override any flags made by automated systems? No corporation wants more government oversight, and the only way to avoid it is to do good-faith self-policing. EARN IT would receive a lot more support if public sentiment shifts toward believing that Google, Apple, and other hosts intentionally choose to ignore problematic material. Reddit is criticized often because it allowed problematic subreddits to grow. I am not sure where things will end up.


I'm not defending this at all, but one of the reasons why there are no (or few) humans that can be contacted is that they* said that it was tried before and it caused a lot more issues with mistakes/takeovers due to social engineering.

* Can't remember who said it but it was at a town hall this year


> one of the reasons why there are no (or few) humans that can be contacted is that they* said that it was tried before

This just sounds like yet another excuse for holding payroll down as much as possible.

If I am a customer of Amazon, Apple, Netflix, Walmart or any number of the other companies with a similar market cap, I can get access to real live human beings who provide customer support.


> If the photos are hosted on Google's servers, they have every right to scan whatever they want.

I don't get why that should be the case. When you rent an apartment the owner is generally no longer able to just walk in and do whatever he likes, including installing cameras in the shower. So I would expect that when Google tries to sign you up for its online storage (repeatedly) you should get the same protections, especially with smart phones being such an important part of modern life.


It's more about shareable media. Relative to your analogy, I feel storing files in encrypted containers is renting an apartment and feeling confident about the landlord not walking in. On the other hand, if you hang an obscene image in the window of your apartment, I imagine it's understandable that a landlord would use a key and take it down. If hanging up obscene images becomes a pattern, I think the landlord would kick you out.

From one perspective, I understand that cloud storage should include certain rights to privacy. If I buy some compute time on AWS, I manage and control all flow of data. It would be a complete violation if AWS policed the kind of content I could store. I think self-managed vs. company-managed is what dictates my expectations. Google Photos and Drive, especially since most people don't pay for them, fall into the company-managed category in my opinion.

In the last few years Google Drive became stricter about sharing copyrighted work and started flagging files, which limits the ability to "share to anyone with a link". For people who use it as a backup, the advice I started hearing was to encrypt the data before uploading. Opsec guidance for nation-state-level threats to privacy has recommended using only encrypted containers since the beginning of cloud storage.

The Communications Decency Act is an interesting law that dictates the liability of online content and service providers. There is an ongoing battle to increase ISP and host liability as a way for the entertainment industry to combat piracy.

FOSTA-SESTA and the ruling on Backpage [1] made things more difficult for hosts like Google and changed the liability surrounding sexual solicitation. If a host like Google knows what's being hosted and does nothing, they get in trouble. They don't have the protections they did in the past.

https://en.wikipedia.org/wiki/Section_230


> It's more about shareable media... if you hang up an obscene image in your window of your apartment

Again, the parents Google is reporting to the police are not trying to share photos with anyone except their own doctor.

This isn't scanning information because you made it publicly available. This is scanning information on your device because it was automatically backed up by Android.


> [...] I imagine it's understandable that a landlord would use a key and take it down.

Just to give context to perspectives people might have on this analogy, the above would be illegal in e.g. Germany.


Maybe monitor companies should put chips inside that scan the current screen every second to check for CSAM matches or whatever else they don’t want the customer using their monitors for and diligently report violations to the police?


You’re joking but smart TV screens already do this to identify and report content back to the mothership.


Yes! The Creating Sustainable American Mployment (CSAM) Act will create thousands of new jobs too.


> Not scanning for CSAM on your own servers isn't a realistic expectation

It is if the false positives are overwhelming, given their impact on innocent people's lives.


> Not scanning for CSAM on your own servers isn't a realistic expectation.

What makes it not a realistic expectation? According to other references, the US government cannot compel companies to run scans on their own customers.


They wanted to implement end-to-end encryption for Photos and iCloud backups (both announced today). Scanning uploaded data in the cloud for CSAM wouldn't work then. Hence on-device scanning.


But they backtracked on on-device scanning. So that's not happening unless I missed something. There was a huge outcry when they proposed that.


That was just for scanning photos in general. Doing it on device is probably preferable to on cloud and at worst no different.


> Doing it on device is probably preferable to on cloud and at worst no different.

I own my device, I don't own their cloud. That's a big difference. Don't co-opt my property to do work you want done. Data stored on your servers is your business, so doing the checking there is fine, as long as using them isn't mandatory.


They can't do the check there because they had clearly already been working on encrypting the data end to end, making that impossible. So the middle ground was end-to-end encryption with on-device scanning, which is a step up from no encryption. Somehow we ended up with the best option of no scanning at all, which is nice.


He knows they can't do it server-side with E2E. That's his point. Apple can scan my encrypted files all they want.


> That was just for scanning photos in general.

Are you trolling? What Federighi proposed before was scanning "for CSAM" on device [1]. Same angle.

> Doing it on device is probably preferable to on cloud and at worst no different.

Please elaborate. How is it better to force users to run software they don't want than to let them decide whether or not to have their photos scanned when they choose to upload them to the cloud?

Anyway it's a false dichotomy. Apple isn't doing on-device scanning, and now they've announced they won't do it in the cloud either.

[1] https://youtu.be/OQUO1DSwYN0?t=426


Well, the scanning was allegedly supposed to take place only when uploading. If a user chose not to opt in to the cloud photo library, the device scanning was allegedly supposed to be turned off.

So yeah, allegedly no worse.

You might notice I used the word "allegedly" a lot; it's because we are speculating about a feature that was never actually rolled out and that nobody audited externally, to my knowledge. If you don't trust Apple, then this argument doesn't apply and you are probably better off not using an iPhone.

Nonetheless, it's still no worse than the CSAM scanning feature Google Cloud actually rolled out, which has already had major adverse effects on users. So you should trust Google even less, and you definitely shouldn't use a stock Android device.


iOS is closed source. Literally every part is unverifiable and "forced". I have no way of proving that my iphone isn't and hasn't always been scanning my photos. But I don't have the time or energy to care about that, I've decided that using an Apple product is safer than an Android I didn't audit (which is essentially impossible). By scanning on device it enables the possibility of end to end encryption which reduces the risk of a hack or bug exposing my photos.


Yes, this whole thing is just another public relations exercise of "Apple cares about your privacy" bullshit when they are actually saying that they still plan to scan your device for CSAM. "End-to-end" encryption of backups on iCloud is also a joke when they are going to store the encryption keys on the iDevices, on which you can run no other system software apart from the closed-source software provided by Apple.


Nope. This is a step forward for privacy.


How is having a hash-based scanner a step forward for privacy compared to not having one?


I meant that end-to-end encryption, as mentioned by the poster, is a step forward. Even if the devices aren't running OSS.


Ah sorry, yes I agree.


My understanding was: it was only ever scanning things on the client that were uploaded to iCloud Photos, in the same exact way they are scanned server-side today, but arguably a step more “transparent” for the user (at least if the user is a reverse engineer, heh).

What was crazy about this? The media outrage around the PR was incoherent to me at the time, and it seemed no one took any time to understand the present realities or details.


Your apartment complex decides they don’t want to be party to anything illegal. Just in case, they set up a police precinct in the lobby. They set up hidden cameras in every room of your apartment, and if their AI model detects anything suspicious, they send the video to the detective. Because you aren’t doing anything illegal, you have nothing to worry about, right?

What’s crazy about this?

Another way of expressing the concern: does your iPhone work for you, with the help of Apple’s services, or does your iPhone work for Apple? Working for me means not having software designed to report me to the police for how I use my device. The hash database for now includes CSAM hashes; there’s no reason it couldn’t be extended to include similarly heinous material, like Winnie the Pooh memes, Hong Kong freedom posters, rainbow flags, hormone therapy instructions, or anything else offensive to the regime.

People that are comfortable with their device working for a company with the assistance of the user choose Android.


The apartment analogy doesn't quite work, as the scanning only happened with photos that were going to be uploaded to iCloud.

It wasn't like constantly monitoring your private residence, more like setting up a breathalyzer checkpoint on the way to the freeway.


Fixed:

Your apartment complex decides they don’t want to be party to anything illegal. Just in case, they set up a police precinct in the lobby. They set up hidden cameras in the lobby, the hallway leading to your apartment, and the balcony of your apartment. "All public spaces of course" the crowd consents. And if their AI model detects anything suspicious, they send the video to the detective. Because you aren’t doing anything illegal, you have nothing to worry about, right?


You’ve just described every photo upload service in America, although my understanding was Apple would use a list of hashes of known bad content, not an “AI” as Google does.

Everyone scans for CSAM. I am not conjecturing on the ethics of scanning photos here, I am suggesting that moving from server- to client-side scanning had no effect on any of the things you are ranting about. Hence why I do not understand the outrage.

> Working for me means not having software designed to report me to the police for how I use my device.

Let me fix it: “how I use iCloud Photos, a hosted service on Apple’s servers”.


Isn’t this literally done in most office buildings in the US and probably a lot of apartment buildings already? There are CCTV cameras everywhere in the US (and other countries).


Closer to scanning your vehicle for meth on the way to the freeway. Rather a fraught analogy either way, perhaps.


These analogies are wrong.

You agree _in your rental contract_ not to dump explosives in the apartment complex trash compactor, and that the apartment complex has the right to process your trash.

The apartment complex stops looking inside everyone's bags that land in the compactor (a bit late to prevent it from getting into the trash), instead sniffing bags as you put them into the trash chute on your floor, and tagging the bag with a red sticker if its chemical composition matches a particular pre-registered explosive from a list.

At the other end of the chute, a counter checks if you've dumped a dozen bags with red stickers, and if so, a human may open one to see WTF you're up to. If you're indeed dumping explosives in bulk, they let the bomb squad know.

EITHER WAY, that bag would be headed to the compactor, you are affirmatively putting it there, and any explosives are a breach of agreement. It isn't sniffing anything anywhere else; the only trigger is you opening the chute to send a bag to the compactor, a bag whose contents you decide.


You'd have a hard time verifying that said on-device scanning would only have been run only on iCloud-uploaded content.

Once the feature exists your local data might become accessible to a government warrant, which would make the iPhone the opposite of a privacy oriented device.

If it's only for iCloud uploaded data they can simply do the scanning there. There's no reason to use customer's CPU/battery against them.


This necessitates a workflow where both the photos and the decryption keys are accessible by the same server, and a security workflow to request the user's decryption keys without the user being involved.

This is specifically what Apple is trying to avoid - they are intentionally pushing an environment where the user must be involved to get the keys, by way of their account password and/or other enforcement mechanisms designed to ensure only the real user can access such keys.

This is discussed in detail in their Apple Platform Security guide: https://help.apple.com/pdf/security/en_US/apple-platform-sec...

All of the facial recognition, object identification, etc. is done on-device for the same reason. By contrast, Google can and does do this in the cloud - and there are Google servers that intentionally have access to decrypt your photos.

iCloud backup was previously a vector that bypassed this, however they also today announced they are fixing that: https://news.ycombinator.com/item?id=33897793 "Apple introduces end-to-end encryption for backups"


Sure, from a technical perspective, it is a nice solution to a set of problems.

But there are many serious problems with it. I'll isolate one: an on-device content surveillance mechanism is a slippery slope to a bad, bad place.

As the saying from the 90s goes, "child porn [CSAM] is the root password to the US Constitution." It is bad enough that people are willing to suspend their better judgement to do something about it.

But after you already have the mechanism accepted and in place, it is far easier to add to it. Pick your boogeyman: you're giving all of them a lovely tool to address their desires. Dissent suppression and Winnie-the-Pooh detection for Xi, Erdogan gets to sniff out Gollum memes, and choose-your-own horror for the coming dictator of the US.

It is far harder to tell a sovereign, "we could easily do that, but will not" than "we don't have a mechanism to do that." And pretending that it won't happen doesn't pass the laugh test - we have seen this show many, many times. But if you want to argue, start by explaining how Apple's jumping to implement the 10-minute-max sharing limit shows how they'd stand up to China about this.


I agree there is a slippery slope concern, and Apple has themselves made related arguments such as the FBI case and not wanting to create a software update to decrypt the contents of a phone. However that is contrasted with a very real and much more practical concern of malicious parties getting access to your cloud stored photos.

It would help to note Apple also today announced "Advanced Data Protection" in iCloud, which closes the hole where iCloud Backups, iCloud Photos and various other bits of data were technically decryptable by Apple. They've closed that (but it's opt-in, to balance the average user losing all their photos against other users' desire to be secure even if it means losing their data). Details: https://support.apple.com/en-us/HT202303#advanced

However, even without "Advanced Data Protection", what I said about there being no workflow for any Apple server to "normally" request both the keys and the photo data still stands as good security.


To detect Winnie-the-Pooh, it required a code push of a new database to all clients. If that's the bar, then a corrupt Apple could also push a software update tomorrow that enabled such scanning, whether this scheme was implemented or not.


> If it's only for iCloud uploaded data they can simply do the scanning there.

This is what Apple was trying to avoid. Scanning on iCloud also requires that Apple can see your photos.

If the scanning is done on device, Apple could encrypt the photos in the cloud so that they can't decrypt them at all. Neither could the authorities.

> There's no reason to use customer's CPU/battery against them.

The amount of processing ALL phones do on photos is crazy; adding a simple checksum calculation to the workflow does fuck-all to your battery life. Item detection alone is orders of magnitude more CPU/battery heavy.


> Once the feature exists your local data might become accessible to a government warrant, which would make the iPhone the opposite of a privacy oriented device

Why does that dystopia require on-device scanning? Why couldn't they just do it with an OS update today? It's not a particularly slippery slope, given the actual mechanics of how the CSAM system was designed (perceptual image hashes, not access to arbitrary files).

> There's no reason to use customer's CPU/battery against them.

That's the better argument, but still not super strong. On-device scanning means you know and can verify what hashes are being scanned for, who is providing them, and when they change. Cloud scanning is a complete black box. None of us would know if Google was doing ad-hoc scans of particular users' photos at the behest of random LEOs.


> Why couldn't they just do it with an OS update today?

See my comment here:

https://news.ycombinator.com/item?id=33903825

> None of us would know if Google was doing ad-hoc scans of particular users' photos at the behest of random LEOs.

Not your device, not your software. You should assume anything you upload unencrypted is scanned. This distinction was clearly voiced by the majority during the debacle of Apple's on-device scanning proposal. They basically said, "Scanning in the cloud is [choose one: fine, skeezy], but we draw the line at doing on-device scans. I don't want that software on my device."


> You'd have a hard time verifying that said on-device scanning would only have been run only on iCloud-uploaded content.

That would be precisely as difficult to verify as verifying whether your Apple device is currently scanning your content.


The point of the feature is to use the data in court cases, which are public record. So word would get out, via a journalist, a whistleblower, etc. They had to make the proposal public before implementing it.


The crazy part is forcing a user to run software that undermines their own interests, on a device they purportedly own that is likely a huge part of their life. Software running on a computer should represent that computer owner's interest, period.

Apple can run software that represents Apple's interests on their own servers.


What part of comparing checksums of images on your device to KNOWN child pornography checksums is "undermining your interests"?

This was not some Google-style hotdog or not algorithm that was using "AI" to guess if a photo had a nude person on it or not.


First, the huge possibility of false positives. They're not "checksums", but rather "perceptual hashes", which are nowhere near as solid as a cryptographic hash. When you multiply a small chance of a false positive across hundreds of millions of deployed devices, you end up with many false positives - false positives that result in people being characterized as "child abusers".
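Rough arithmetic behind "small rate times huge N", with numbers that are purely illustrative assumptions (the real per-image rate isn't public):

    per_image_fp_rate = 1e-9      # assumed chance a benign photo collides with the list
    devices = 1_000_000_000       # rough order of iPhones in use
    photos_per_device = 5_000     # assumed average library size

    expected = per_image_fp_rate * devices * photos_per_device
    print(f"{expected:,.0f}")     # ~5,000 raw false matches fleet-wide, before any
                                  # threshold or human review is applied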

Second, the overall aim of the technology also targets legitimate or inadvertent activities that the law has unjustly criminalized - say taking family pictures of your kids, 17-year-olds sexting, or someone trolling you. Once again unjustly branding people as "child abusers", this time with "evidence" to back up the narrative.

But even in the case of a proper match to the NCMEC list for its bona fide purpose, that still is undermining the phone's owner's interests. That's the philosophical contour, even if you personally wish to brush it aside with the desire to catch people looking at evil images. Our society respects similar longstanding privacy boundaries, even if it does end up helping some "bad people".

It's really not in the NCMEC's interest to respect any of this, as their foundational dynamic is that there is a horrible thing happening in the world, and they must do everything possible to stop it. That kind of advocacy is certainly needed, but we shouldn't just accept their desires as if they're an unbiased neutral party.


Yes, the internet went on a "generate false perceptual hash positives" binge.

Everyone just forgot that in the Apple system you don't get automatically banned and reported to the authorities if you get a match (or five). The matches get manually checked by an actual human, who will see in 0.2 seconds that it's not actual CP, but a highly distorted picture of a cat or some gibberish. Zero action will be taken.

Also: the scanning was only done if iCloud is enabled. If iCloud is enabled, the EXACT SAME scan could be done in the cloud. Why are people not objecting to this with the same fervour?


1. What's to guarantee that the checksums are of child pornography?

2. What's going on with the "fuzzy comparison"?


The database is generated by the NCMEC[0]. I trust them to have a rigorous process for adding stuff to it.

[0] https://en.wikipedia.org/wiki/National_Center_for_Missing_%2...


A lot of the concern is that others may not trust the NCMEC, or that they don't trust that other images won't be added in whatever ends up on your device.


It took me a while of ruminating but I think I like this answer the best. Fundamentally here some people are upset that something they own would betray them.

Which is weird, because it implies anything you use but didn’t buy can and should betray you, and it’s your own fault. You should have been smarter. But it makes a bit of sense, for some odd reason I can’t quite make tangible. When the Samsung TV I bought starts showing me ads, when my pricey cable or Hulu package interrupts for commercials, I feel a tinge of this.

So now I think I somewhat get the feeling. Thank you.


> because it implies anything you used but didn’t buy can and should betray you, and it’s your own fault

The inverse isn't automatically true, though. You can be upset when a cloud host or ISP violates your expectations, even though you don't own them. You can expect Apple to basically work in your interests even on their own servers, since you're paying them.

The ownership thing is more about the structure of society. If computing and communications technology is going to move us forward as a distributed democratic society, then those capabilities must remain distributed throughout the population rather than being controlled by a handful of centralized gatekeepers.

And yes, we've strayed quite far from this ideal. Most people have little control over what their mobile phone does. Still, there is a distinction between a device not having your desired functionality because the manufacturer didn't create it (or even disallowed others from creating it), and the manufacturer deliberately creating functionality that harms users.


Meh. Running it on device means that it is narrowly scoped and transparent. There is no way to secretly add a new hash or to vary what you're searching for depending on a user's location or identity.

Running it on their servers means nobody knows how expansive the scans are, and there is nothing preventing ad-hoc scans for arbitrary content for individual users.


The software uploaded an encrypted fingerprint as metadata with every photo.

The actual scanning, and Apple's self-interest, all happened server-side.


Truthfully generating that fingerprint is not in the user's interest. User representing software would skip doing that. If the storage service required it, it would generate plausible dummy values.


Users haven't owned their devices in quite a while in this space.


Pretty much every "file" you create on your iphone is uploaded to iCloud by default. For example if you take a screenshot, it saves to your photos library which defaults to uploading new photos to iCloud immediately.


The same files would’ve been scanned anyway. What’s the problem?


It's my device, working against me; and it's also not going to catch anyone remotely intelligent, since everyone and their mother will know about this feature.


What’s always so interesting about this then is what’s the point? Apple catches the least sophisticated of criminals who don’t know how to turn off iCloud (likely the least harmful nodes in this network of horror)?

And to accomplish that all we have to put up with is a huge lack of privacy and tons of false positives?


The point was to encrypt all iCloud data so that Apple could not give access to it to authorities.

Now the authorities can go "there might be CP in there, think of the children!" and a judge will give them the warrant.

On-device checks against KNOWN CP checksums + E2EE-encrypted iCloud -> authorities need to get the actual phone; they can't just go looking around in people's iCloud accounts to see if they might find something.


The point of encryption is obviously privacy.


The question was obviously “What’s the point of only scanning the cloud?”


I don't understand your question.

Are you arguing in favor of on-device scanning? Or arguing against all scanning?

Perhaps easier, what do you think they should be doing?


Yea?

Either commit and say “scanning for CSAM is the top priority” and scan on device or say “privacy is the top priority” and don’t scan at all.

I’m literally saying this halfway path is the worst of both worlds, since people still lose privacy, but very little meaningful progress against CSAM will be made.


> Additionally, the core of the protection is Communication Safety for Messages, which caregivers can set up to provide a warning and resources to children if they receive or attempt to send photos that contain nudity.

No, it’s definitely still there.


That was announced at the same time, but is different from what the parent is talking about.

One program scans photos that are about to be messaged and, if they look like nudity, asks "are you sure you want to do that" (it would also have optionally notified the parents, although it looks like they aren't going through with that part). The scanning happens on device, and the results of the scanning are never sent to Apple or any other third party.

The other program scanned all images that were being uploaded to iCloud and if it found any that matched a government maintained image fingerprint database would notify Apple who would forward that information to the government. Big difference in scope and impact.


Messages sent through Apple's systems are not "on-device" scanning. Different topic.


They haven't really - I logged in specifically to post about that. Like you pointed out, the real outrage was never about scanning for CSAM in the "cloud" but on your device. And this clever fluff of a public relations exercise by Apple is just to cover up the fact that device scanning for CSAM is still in the picture.

    Communication Safety for Messages is opt-in and analyzes image attachments users send and receive on their devices to determine whether a photo contains nudity ... The company told WIRED that while it is not ready to announce a specific timeline for expanding its Communication Safety features, the company is working on adding the ability to detect nudity in videos sent through Messages when the protection is enabled.

   .... “Additionally, because the minor is typically sending newly or recently created images, it is unlikely that such images would be detected by other technology, such as Photo DNA. While the vast majority of online CSAM is created by someone in the victim’s circle of trust, which may not be captured by the type of scanning mentioned, combatting the online sexual abuse and exploitation of children requires technology companies to innovate and create new tools. Scanning for CSAM before the material is sent by a child’s device is one of these such tools and can help limit the scope of the problem.” 
As for the newly announced "end-to-end encryption" on iCloud, note that the keys will be stored on the iDevices, and as such, available to Apple through its software anytime. This is exactly why the US government and BigTech have been pushing for "passwordless" authentication so strongly. It's no coincidence that Microsoft suddenly started asking for TPM to run Windows. With "passwordless" authentication, we will even lose control over the digital keys that allow us to encrypt and access various services, and BigTech becomes in charge.

Yes, I am being very cynical here for 2 major reasons: (1) Governments around the world want control of and access to our devices, and BigTech can deliver. This is not an Apple or Google problem; this is a social and democratic problem that we need to fight politically, by demanding stringent data protection and privacy laws from the government. (2) Apple is a corporation that needs to be profitable. Collecting data and selling it (either to the government through programs like PRISM or to advertisers through a platform) is a very lucrative source of revenue. Anybody who thinks Apple will let go of billions of dollars from that is delusional (and that is exactly why Apple is pivoting to become a services company). Apple's history when it comes to invading its users' privacy is just as bad as Google's or Facebook's.


Scanning in the cloud is far more problematic. There is no transparency at all about what's being scanned for or when new things are added to the scan.


Scanning on iCloud is infinitely worse.


Arguably, scanning on iCloud is precisely the right answer, but in specific circumstances. I figure that cloud storage accessible only to yourself should be considered as private as the storage on your own physical device. But when you share content with other people, especially if you share via a publicly reachable URL (not sure if iCloud can even do that, though), then scanning for illegal content is fair game.


You understand the person to whom you're replying was nonsensically arguing that cloud-scanning is worse than on-device scanning, right?


It wasn't device scanning. A fingerprint of every photo was uploaded to the cloud as metadata for each photo. It'd be like being spooked if the title of the photo were uploaded to the cloud and examined for known CSAM titles.

And to make things more secure, the only way Apple could read those fingerprints was if you uploaded 5 CSAM photos.

How is that worse than Google where engineers can look at every photo you upload regardless of the content as part of their server side scanning?


I took their argument to be that scanning at all was the problem. I may have misunderstood. I've made way too many comments today on HN, and I should most probably step away from the keyboard. In fact, I'm going to do that for real, my daughter just got home from school and I need to remember priorities.


On what basis? I'd really like to hear your justification. Please include as many details as possible.


Scanning on iCloud means that Apple can see the content of all your scannable data in iCloud. Scanning on-device is compatible with Apple never having access to your data in an unencrypted format. If Apple has a legal obligation to ensure that iCloud does not store CSAM/etc. then either you have to scan on device before upload _or_ you have to store iCloud data without E2E encryption. From a privacy perspective, on-device scanning before upload is obviously better.


> Scanning on-device is compatible with Apple never having access to your data in an unencrypted format.

Only if you exclude the subsequent network transmission, which is the easy part that needs no special code. The privacy concern comes from those two things together. So yes, if you take away one half of a bad thing such that the bad thing is no longer possible, it's not bad anymore. The concern is the whole process, with on-device scanning being the key not-yet-implemented component.

> If Apple has a legal obligation to ensure that iCloud does not store CSAM/etc. then either you have to scan on device before upload _or_ you have to store iCloud data without E2E encryption.

Apple does not have that legal obligation. If they can't decrypt the content on their servers, then their only response to a government-issued warrant would be to hand over encrypted data.

Also, CSAM is not the concern. The concern is this would be used against dissidents in authoritarian countries. On-device scanning takes us a step towards becoming one and further empowering the existing ones.


This doesn't necessarily follow. Law enforcement having near-realtime access to everything you ever photographed is a worse situation than them having to know in advance which things you might be storing, so they can add a fingerprint of it to a database and then wait until your device matches and uploads it.

CSAM is a serious problem and you're ignoring Apple's moral or even reputational obligation to try to address it. However poorly.

We shouldn't be hyperbolic and just flatten levels of badness, or we can never find a compromise (which is, I suspect, just how you want things).


" If Apple has a legal obligation to ensure that iCloud does not store CSAM/etc"

My understanding is in the USA companies like Apple cannot be legally obligated to ensure that iCloud does not store CSAM. Something about the US Constitution, but I can't remember what. Apple is legally obligated to report CSAM if they come across it themselves though.

This appears to be the case in Europe as well. But may not always be that way. The EU appears to be working on legislation that can compel cloud providers to scan for CSAM: https://9to5mac.com/2022/05/11/apples-csam-troubles-may-be-b...

Welcome sources on this from others. Last time I dug into this was a year ago.


> Something about the US Constitution, but I can't remember what.

The First Amendment comes into play. The government cannot compel Apple to write software in a certain way, such as "write your encryption so you have keys that access all of your users' data". That would be "compelled speech". So if the government provides Apple with a warrant, Apple can only provide encrypted data or whatever meta information they have, not decrypted content.


EDIT: Whether or not Apple should be scanning for CSAM in some way is an entirely separate argument from whether it should be done "on device" or "in the cloud". That's not obvious, so my explanation below covers why that is the case.

I disagree. On-device scanning is desirable here and scanning it in iCloud though the industry-norm, is not desirable.

Apple specifically stated they would only scan the photos being uploaded to iCloud [1] and not local photos which were not being uploaded.

Most cloud storage providers scan in the cloud; this necessitates a workflow where both the photos and the decryption keys are accessible by the same server, and a security workflow to request the user's decryption keys without the user being involved (by way of their account password, or other enforcement mechanisms designed to ensure only the real user can access such keys; Apple has previously detailed the hardware-HSM-enforced details of that for iCloud in their Apple Platform Security guide [2]).

By extension that opens up the potential for a malicious employee or third-party to either intercept those photos while being scanned or otherwise allows a workflow that could be exploited to request the keys and photos. This is a major loss of security and privacy which Apple is specifically and intentionally designing to avoid. Until now iCloud Backups were a major weakness in that story however they also today announced they are fixing that [3].

Hence: the on-device scanning was just an entirely technical solution to scanning "only photos uploaded to icloud" (like everyone else) while also "ensuring some server in icloud can't decrypt your photos".

You could of course argue that it makes a later change of policy to scan local-only photos easier. While this is true, Apple could decide to add code to iOS to do anything they want at any time, so they could just decide to do that anyway, even if they were scanning iCloud photos in the cloud. At the same time, by not scanning them in the cloud, they have a technical solution to ensure it's much harder for a malicious actor to get unauthorised access to iCloud user photos. Because there is no place/server that is "supposed" to be able to access both the photos and the keys.

[1] https://www.apple.com/child-safety/pdf/Expanded_Protections_...

[2] https://help.apple.com/pdf/security/en_US/apple-platform-sec...

[3] https://news.ycombinator.com/item?id=33897793 "Apple introduces end-to-end encryption for backups"


> On-device scanning is desirable

What are the benefits? It would tell Apple (and hence law enforcement) who had copies of existing, known CSAM. I guess that would help catch such people. But such individuals would also have a strong incentive to simply turn off iCloud to completely avoid detection.

What are the risks? Apple employees rummaging through your photos (after 30 NeuralHash matches, a human reviewer would go in and check your photos, and as some analysts pointed out, hash matches can be spurious, that is, completely different photos can have the same hash by coincidence).

Over time, would the program expand beyond CSAM? Consider the intersection of countries in which Apple sells iPhones and countries that criminalise being LGBTQ [1]

[1] https://www.humandignitytrust.org/lgbt-the-law/map-of-crimin...


All of your concerns are valid for "should Apple do CSAM scanning at all" and I share those.

However my argument is specifically about the OP's point that they should just scan the photos "in the cloud" and not "on your device". But the reality is they were only scanning photos that were being uploaded to the cloud; they were just doing it on your device so that Apple doesn't need a server that can decrypt your photos in order to do the scan in the cloud, which is what most others (including Google) do.


> as some analysts pointed out, hash matches can be spurious, that is, completely different photos can have the same hash by coincidence

I think you might have misinterpreted whatever those analysts wrote. The chances of thirty images all coincidentally matching both perceptual hashes are essentially nil.
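Back-of-envelope, with an assumed (and deliberately generous) per-image collision rate, just to show how fast the tail dies off:

    from math import comb

    p = 1e-6          # assumed per-image false-match probability
    n = 20_000        # assumed photos uploaded by one account
    threshold = 30

    # terms past the threshold shrink by roughly a factor of n*p/k < 1e-3 each,
    # so a short window captures essentially the whole binomial tail
    tail = sum(comb(n, k) * p**k * (1 - p)**(n - k)
               for k in range(threshold, threshold + 50))
    print(tail)       # ~4e-84 under these assumptions -- effectively zero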


The human reviewer can't "go in and check your photos".

The human reviewer would've gotten "reduced quality images" of the matches and would make a decision based on those.

And yes, people could've just turned off iCloud. That's how encryption works.


I can't agree. Normalizing scanning of files on people's devices is a very, very bad precedent. It turns Apple into a low-level government policeman rifling through your device, and it would spread to your PC and Android devices. Normalizing this behavior would be truly awful: CSAM would come first, then tax audits, anything that looks "suspicious", etc. That is a very, very bad precedent; police should only be able to scan your device if they have a warrant.


The normalisation aspect of this would have been the biggest horror. It would mean owning a device that doesn't do this becomes associated with criminality. "Don't you know that only pedophiles run Linux?"


It wasn't scanning photos looking for CSAM; it was computing a fingerprint of every photo that could only be read on the server if 5 of those fingerprints matched known CSAM.

I can’t see how that is worse than unencrypted photos in the cloud that can be scanned at any time server side.


It's a proxy for scanning on your phone and reporting to the government. Toe-mae-toe toe-mah-toe


If Apple were to calculate the perceptual hash client-side but only do the actual hash comparison server-side following an upload, it would preserve the 'no keys on iCloud' policy while also preserving the 'no local scanning' policy most of us want.

That the server may know the hash is IMHO much less of a problem (it shouldn't be reversible anyway) than the legitimization of on-device scanning.
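Concretely, the split I mean looks something like this toy sketch (obviously not Apple's protocol; sha256 and the reversed bytes are stand-ins for a real perceptual hash and real E2E ciphertext):

    import hashlib

    FLAGGED_HASHES = {hashlib.sha256(b"known-bad-image-bytes").hexdigest()}  # server-side only

    def client_prepare_upload(photo: bytes) -> dict:
        return {
            "blob": photo[::-1],                                # pretend E2E ciphertext
            "fingerprint": hashlib.sha256(photo).hexdigest(),   # computed only at upload time
        }

    def server_check(upload: dict) -> bool:
        # the comparison against the flagged list lives entirely server-side
        return upload["fingerprint"] in FLAGGED_HASHES

    print(server_check(client_prepare_upload(b"holiday photo")))   # False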


I would think calculating the perceptual hash client side is precisely the "local scanning" part most are concerned about. Pushing a perceptual hash of every photo to Apple would actually decrease your privacy as now Apple can find any known photo in your library - not just CSAM but on any topic - memes, political protests, porn, whatever.


IMHO, it's the comparison to a stored hash value which is the scanning part. In my scenario, the hash calculation is only done during upload and the hash is not saved on device.

Let's say tomorrow China tells Apple to do client-side scanning, and let's say Apple decides to comply.

Scenario A (Apple already did on-device scanning):

Apple flips a few bits. iOS no longer cares whether images were uploaded or not. The code change can easily be hidden assuming no reverse engineering. There's little measurable difference unless the user has a matching image, in which case the user is already done for [EDIT: in retrospect, the device should be running a scan on the local images on every blacklist fetch. I still think that scanning more images is much less of a difference than the other scenario, where the device has to do stuff it just did not do before]

Scenario B (Hash Comparison is server-side):

The code change can be hidden of course. But otherwise there are significant changes:

The device needs to calculate hashes for every image upon creation, which it did not do before. The device needs to fetch the blacklist every X days, which it did not do before. The device needs to run the local scan on every blacklist fetch, which can be slow with many images.

All these things are far more likely to be noticeable and far more likely to generate network traffic.

>Pushing a perceptual hash of every photo to Apple would actually decrease your privacy as now Apple can find any known photo in your library - not just CSAM but on any topic - memes, political protests, porn, whatever.

The Hash shouldn't be reversible, so the only way for Apple to do that is to have an existing dataset. Which is possible, but preferable to the no E2E situation and (IMHO) the on-device scanning situation.


That's not how their encryption worked. The only way for Apple to know a CSAM photo matched was if you had 5 photos with CSAM fingerprints from their database uploaded to iCloud. This matching only happened server-side for photos uploaded to the cloud.

This seems far better than having every photo scanned server side and have those photos sit there unencrypted.


I think people objected to having the flagged hash database and comparison locally. I suppose the split of perceptual hash client side, comparison server-side might be objectionable on the "it's draining my battery" angle, but I suspect those who prefer cloud-side scanning would generally like hash creation client-side if it were the price of E2EE.

Of course that's a lot less secure and leaks more privacy info, but the objections seem more about the morality of having a device you own doing the scanning, not so much about any privacy concerns.


The comparison didn't happen locally in Apple's scheme. Along with each photo an encrypted fingerprint was uploaded, and all scanning happened server-side.


That wasn't my read from https://www.apple.com/child-safety/pdf/CSAM_Detection_Techni... :

Lots to look at there but the key bit is:

>If the user image hash matches the entry in the known CSAM hash list, then the NeuralHash of the user image exactly transforms to the blinded hash if it went through the series of transformations done at database setup time. Based on this property, the server will be able to use the cryptographic header (derived from the NeuralHash) and using the server-side secret, can compute the derived encryption key and successfully decrypt the associated payload data

So my understanding is that the server won't even know the hash of the image unless the client-side operations determine that it matches one of the hashes in the CSAM database, which is stored on the client.
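To make that concrete, here is a drastically simplified toy of the match-gated property. It is not Apple's actual construction (which uses blinded hashes and a threshold of matches); it only shows how a voucher can be encrypted so the server can recover it solely when the image's hash is in the database:

    import hashlib, hmac, os

    DATABASE_HASHES = {b"\x01" * 32, b"\x02" * 32}   # stand-in "known" hash entries

    def make_voucher(image_hash, payload):
        # client: derive the voucher key from the image's hash; the server
        # cannot guess this key for images that aren't in its database
        key = hashlib.sha256(b"voucher-key" + image_hash).digest()
        stream = hashlib.sha256(key + b"stream").digest()
        ciphertext = bytes(a ^ b for a, b in zip(payload.ljust(32, b"\0"), stream))
        tag = hmac.new(key, ciphertext, hashlib.sha256).digest()
        return ciphertext, tag

    def server_try_decrypt(ciphertext, tag):
        # server: it only knows the database hashes, so it can only re-derive
        # keys (and decrypt) for uploads whose hash matches a database entry
        for h in DATABASE_HASHES:
            key = hashlib.sha256(b"voucher-key" + h).digest()
            if hmac.compare_digest(hmac.new(key, ciphertext, hashlib.sha256).digest(), tag):
                stream = hashlib.sha256(key + b"stream").digest()
                return bytes(a ^ b for a, b in zip(ciphertext, stream))
        return None   # no match: the voucher stays opaque to the server

    ct, tag = make_voucher(os.urandom(32), b"match metadata")
    print(server_try_decrypt(ct, tag))   # None -- the random hash isn't in the database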

Are you seeing it differently?


I’m seeing the same. There are a lot of people saying Apple’s scheme was somehow worse than Google’s. I was arguing that Apple’s was at least attempting to be privacy-preserving.


Perceptual hashes are quite reversible. They are not cryptographic hashes that are hard to reverse.


I think the state of the art has progressed past that? A non-trivial reversal of a perceptual hash would mean that every cloud provider maintaining a CSAM scan list violates CSAM laws - if they could reverse the hash to get too close to the original image, the hash is just a lossy storage format.


Not the way Apple implemented it. The hash was encrypted and could only be decrypted if 5 hashes matched known CSAM hashes.


> On-device scanning is desirable

My own devices should not snitch on me to the government. Obviously this mandatory snooping of all private data is not going to stop at the single most politically salable application; that's just the starting point.


iOS doesn't have local-only photos. You either put all photos in iCloud, or none (bar iCloud being full).


You are right, but by local-only photos I meant "not using iCloud photos" - e.g. "none".

The point is, I think if Apple decided to scan your photos for CSAM even if you weren't using iCloud, that this would be a distinctly different situation to scanning only the ones uploaded to iCloud.


I think this really demonstrates how companies aren't monolithic, even ones that take pains to present a unified front like Apple.

One group wanted to implement a tool to improve child safety; another wanted to enforce user privacy. At one point, the former was about to complete their project, but was stymied by the latter. Of course, it's surely more complicated than "one team" vs "another team", but the concept of conflicting goals within the same entity applies. It's not some top-down diktat.


Their solution of sending encrypted fingerprints as photo metadata was extremely convoluted precisely because the first group wanted to enforce user privacy.

Otherwise, they could have pulled a Google and just scanned server side.



Good riddance, but, just to make sure, they should:

1) Stuff garlic down its throat.

2) Decapitate it.

3) Draw and quarter it.

4) Bury each piece under a crossroad, so it cannot find its way out.

5) Take off and nuke the site from orbit. It's the only way to be sure.

Like all of these efforts, the driving goal is laudable, but that's not actually how it will be used. I remember reading in The Register about British housing councils using antiterrorism tools to find unauthorized swimming pools.

"Any proposal must be viewed as follows. Do not pay overly much attention to the benefits that might be delivered were the law in question to be properly enforced, rather one needs to consider the harm done by the improper enforcement of this particular piece of legislation, whatever it might be."

-Lyndon B. Johnson


I’m a nudist. This kind of technology has crucified us as a community. I’m glad to see any sign of it being walked back.


Any tool that can scan for X can easily be modified to scan for Y. The “for the kids” excuse has been used to systematically invade our privacy before. Was anyone under the impression that this would never have been used for evil?

Today it’s CSAM, tomorrow it’s political memes used to identify dissenters. If you think you’re not living in the worst timeline, you’re wrong.


Not to mention there have been multiple occasions of agents charged and convicted of such crimes within the 3-letter organizations (or receiving no punishment at all) [1]. There is no telling how many/how long those sick individuals got away with it too.

All of this leads me to believe that these organizations do not care about the children, but merely want more power/control over the populace and use "think of the children" as a way to convince people to give up their freedoms to privacy.

However, I do think some of the individuals that work for these organizations actually do care and actually do want to help people, but they themselves are not representative of the organizations as a whole.

[1] https://edition.cnn.com/2012/05/15/justice/ex-fbi-agent-porn...

https://www.justice.gov/usao-mdtn/pr/fbi-electronics-technic...

https://sarahwestall.com/cia-caught-covering-up-rampant-chil...

etc..


I’m a nudist. We’re going to be crucified by this kind of technology because it’s coarse, context-unaware, and enforced with draconian consequences by automated systems and corporate functionaries with absolutely no understanding of other people’s lifestyles. I’m as horrified by child abuse as anybody else but I’ve seen members of my community put through endless grief and even families broken up because of it. There’s so much more to identifying child abuse than spotting a naked child, and the tech giants that rolled out this tech have demonstrated they really don’t understand (and aren’t interested in understanding) other’s lifestyles and “err on the side of caution” dictated by their mainstream guidelines. Without a proper superstructure this infrastructure is just police-state level crap.


This is great news for individuals who don’t want companies snooping on their private images and don’t want to get ensnared with legal issues due to false positives. It also ensures a barrier against governments demanding scanning for dissident material and the like.

But let’s not pretend it’s a move that doesn’t come with a cost. Scanning for CSAM by companies does catch a lot of child abusers and result in children being rescued from abuse, and it’s likely that Apple implementing such scanning would have done the same.


Why stop at CSAM? Why not scan for all kinds of crimes?

IMHO, the answer is that the police work should be done by the police and not by consumer products.

For some strange reason the “think of the children” argument is alive and well and can be used for all kind of privacy busting.

Seriously, why don’t we think of the domestic abuse victims and mandate always-on listening microphones then?


The slippery slope argument is no doubt a valid one. But Apple specifically (almost) implemented the CSAM scanning after being threatened by multiple politicians - notably Lindsey Graham - that if they didn’t find a solution, they would legislate a backdoor into the OS for law enforcement. I think Apple was doing everything they could to appease the demand without actually violating users’ privacy. While it’s clear they failed to please end users, the solution itself was pretty ingenious.

Edit: https://www.eff.org/deeplinks/2019/12/senate-judiciary-commi...


The scanning for known fingerprints in Apple’s scheme only happened server side.

Unlike Google, Apple didn’t have access to all photos in an unencrypted form to do whatever they pleased with them. If and only if 5 photo fingerprints matched known CSAM photos could Apple view a thumbnail of those photos.

How could that possibly be thought of as worse than Google’s “we can see all your data all the time” approach?


The fingerprinting doesn't happen by magic: your device analyses your photos and sends the results to Apple for checking, and once enough matches happen your photos are decrypted for human inspection and Apple alerts the authorities.

Inconsequential implementation details. Your device still acts as a snitch for whatever the authorities are looking for. Today looking for CSAM match tomorrow can be environmentalism, abortion, meat products etc - depending on how the politics evolve.


Apple’s servers are the snitches, and they can only snitch if you upload five matching fingerprints.

Fine to think that scanning for CSAM is wrong, but the fact that it’s based on fingerprints sent to the cloud doesn’t make their scheme worse than Google’s, where Google has access to unencrypted photo data to do whatever they want with it (as argued by many in this thread).


Trauma experienced in childhood leaves an individual’s psychology messed up in various ways for the rest of their life. This messed-uppedness is contagious: it spreads to descendants, peers (school bullying), subordinate people (teacher to student) and so on.

Since I believe that a lot of issues humanity has to deal with come from mental health issues (I like to make this point so I won’t belabor it), I also find “think of the children”, while often misused, on its actual substance not an argument to be ridiculed.

Today I am reminded about The Ones who Walk Away from Omelas by Ursula Le Guin a little.


I think the issue here is that CSAM is and often was used as an excuse for invasive privacy breaking measures. Nobody can "not" think of the children, so whenever someone would argue against a measure, all they can say back is "so you don't care about children then?"

And people start getting tired of this serious issue being used, not because the people who want these measures passed care, but because of whatever agenda those people are actually interested in.

It is precisely because it is so severe that it is difficult to say no to measures in favor of it. But that same fact is abused by people with bad intentions.


I have no desire to promote privacy invasive measures, but like the comment that started the thread: we have to acknowledge the horrible epidemic of this abuse and that the measures proposed would have been effective against it. We should be mindful that detecting and, upon human review (can’t believe how much disinformation was spread on this front), reporting CSAM would have been a good thing if it was entirely side-effect free.

However, we (as a crowd) have decided that those side-effects would be even worse than CSA itself. And perhaps we were right, but it should weigh on our consciences, as we were the utilitarian arbiters who decided individual suffering was not significant enough, and we should not rest as if we’ve “won” while a big problem (to solve which Apple seemed even willing to put its reputation on the line) remains unsolved.


No one is arguing that child porn is not a heinous thing. The argument is that there are also many other crimes that are in the same ballpark of severity, so why only police this one?


My comment explained why I think it is in a different ballpark of badness, maybe you could respond to that.


I never said that it's not an important issue and people shouldn't bother for the wellbeing of children.


I was actually responding to your lamenting how this argument is alive and well. Do you notice how you are self-contradicting?


> Scanning for CSAM by companies does catch a lot of child abusers and result in children being rescued from abuse, and it’s likely that Apple implementing such scanning would have done the same.

A strong claim that still lacks any strong proof. Especially once you start defining the term "child abuser".

According to actual child protection organisations [1], encrypted communications play a very minor role in distribution and are barely relevant; the biggest factor is not detection, but the lack of proper resources to actually investigate cases and leads. There is no shortage of hard leads to investigate.

It is also rarely acknowledged that a very large percentage of abusers come from the victim's social circle and are often family members [2]. Meanwhile the surveillance fetishists very much try to keep up the false image of children being kidnapped by strangers and such.

The whole idea that child abusers are incredibly hard to catch and are oh so well hidden is a fairy tale. It is also important to differentiate between first- and second-degree child abusers: those who actively abuse the child, and those who "only" encourage it by sharing/purchasing the results of the abuse (I'm sure there is a significant overlap, but I do not know).

So is there a cost? Sure. But let's not pretend that it is even the lowest hanging fruit, or that the people building these services and asking for them are even interested in actually improving the situation, otherwise they would throw their weight behind one of the dozens of other more effective ways to actually improve child protection instead of screaming for a perpetual noise machine that even further drains the already very scarce resources of child protection services.

[1]: https://netzpolitik.org/2022/massenueberwachung-das-sagen-ki...

[2]: https://en.wikipedia.org/wiki/Child_abuse


True; but as mainstream platforms keep fighting CSAM, the only "reliable" place to get it is specialized sites on the dark web, and governments have been quite competent at busting those and identifying their users. I much prefer them turning darkweb CSAM sites into honeypots, than them dragnetting huge portions of the population.

At least the harm is limited to those who deserve it.


This is true in general, but not for Apple's now-abandoned CSAM detection. It only scanned for images that matched a known hash of existing CSAM. This did nothing to prevent new CSAM from being created, only possession of existing material.


Detecting known CSAM does help fight its creation.

There's a Venn diagram of people who possess a library of CSAM and are also active child abusers themselves. Detecting possession is one way to try to get people who fit into that overlap.

It can also help reveal criminal networks. Prosecuting possession of CSAM works like it does for other networked crimes like drugs. CSAM is material that is illegal and hard to get. People who possess quantities of it almost certainly acquired it from suppliers, who themselves acquired from suppliers, etc. Law enforcement's goal is to find and flip possessors to walk this network back to the source: people who are abusing children and documenting it to create new material.

They also hope to discourage its creation by suppressing demand and raising risks and costs for abusers who create it (in both cases, by the threat of prosecution).


That still catches abusers, as abusers are often consumers of material.


> But let’s not pretend it’s a move that doesn’t come with a cost. Scanning for CSAM by companies does catch a lot of child abusers and result in children being rescued from abuse, and it’s likely that Apple implementing such scanning would have done the same.

Let's not pretend that scanning everyone's photos is the only (or even a good) way to catch child abusers. Child abusers have been convicted without CSAM scanning before, so I am sure it will continue to be possible without. Just scanning everyone's photos may or may not be a convenient way to catch many, but the social cost for that is too damn high.

There is no guarantee that fewer children would get abused by scanning for CSAM, since abusers would probably adapt and avoid scanned services, while the innocent majority of people would have lost their privacy.


I think it's good to acknowledge that, yes, this is worth the cost: even though the system would have opened up massive potential for abuse and meant massive privacy loss for law-abiding citizens, it's true it might have caught a few abusers. Just like how the physical-world equivalent - governments putting cameras in everybody's homes, watched by AI and accessible to police any time something is flagged - would be far more open to abuse than it would be useful for stopping crime.

We must always remember that mass surveillance is never the most effective way to stop crime, and is rarely even that useful. You make the haystack far larger and finding the needles gets more difficult, and AIs, while impressive, generate huge numbers of false positives.


Freedom is rarely free of costs.


I’m a nudist. Scanning for CSAM has caused no end of grief to the community and has even led to families losing their kids. It’s police-state type crap and I’m glad to see it go.


It's a relief to see the end of this anti-privacy wedge.

It's also a relief not having to listen to wailing paranoiacs whinge about how new Apple features, such as communication safety in Messages, are a cynical, nefarious or reckless step down the road to a police state.

I still hope something can be done to address child abuse images being shared. I guess if anti-virus profiles can stop certain files being spread, and if we are prepared to accept the risk of that not being abused to cover up leaks or politically embarrassing information, then it could be used to stop known CP from being passed around. As for unknown material it looks like there may not be a way to prevent it that doesn't risk everyone's privacy.


Apple wins points for being less evil than Google but I'm not going to abandon my Graphene OS phone for an iPhone anytime soon.


Why do you think you can trust Graphene OS devs?


GrapheneOS is a project rather than just a few devs. They have exhibited very high focus on security and peer review before commits. The code is FOSS and heavily audited by the most paranoid people like me. It is one of many AOSP projects with a relatively small user base. Therefore it is not a valuable target for compromise or coercion from threat actors.

Apple on the other hand is the biggest corporation in the world beholden to governments, closed source, and with a massive user base.


If you enable automatic updates, you still have to trust that the GrapheneOS devs won’t push a malicious update to your phone, or sell that ability to someone else.


Yes some trust of the project is required. Consider what would motivate them to do something malicious. They do not know who is using the OS since these phones are anonymous and the user base is relatively small. Neither targeted surveillance nor bulk collection is really possible because of this.

The code is completely open source. With the most paranoid of us using and reviewing the code, any malicious code inserted would immediately kill the entire project. We can install and switch to another OS anytime we need to.


Why do you think you can trust the hardware? What about E911, etc.?


Yes, we are using the same closed-source hardware as 99% of people. Open source HW is ideal, but the options are currently very limited. I use rotating burner SIMs on occasions when I need cellular and never make calls with the SIM. The OS provides other SW mitigations for the HW, such as no SW access to the device IDs, a fine-grained app firewall, and the ability to turn off all cellular but enable Wi-Fi.


Can you set my mom up with a Graphene phone? And deal with the inevitable support?

I think I’ll keep recommending Apple to 99% of people.


I think the better argument is the sibling question about why would you believe you can trust the Graphene developers more than you can trust Apple. Certainly Apple has far more to lose by being shady.


Apple also has infinitely more eyes looking at it.

Nobody is going to get a huge PR boost by finding a major security flaw in Graphene OS. But if you find a remote zero day from iOS, you're going to be rich and/or famous instantly.


> Nobody is going to get a huge PR boost by finding a major security flaw in Graphene OS

Among the public, no. Among the security community, yes.


That’s fair.

I also know which of the two could resist the FBI more than the other.


Can you not do it, why do you need some random person on HN to do it?


It was a rhetorical question on their part. But the point is that a use case for one person doesn't mean a use case for everyone (their mother). They were saying they want a plug-and-play option for family members, since it seems they're the tech person in their lives.


GrapheneOS has like step-by-step instructions that are pretty easy for almost anyone with maybe like a 6th grade comprehension to follow.


You expect the same people who don't even read the error popups they get to follow step-by-step instructions to install an alternative OS on their phone?

I love your optimism :)


You mean a person with early onset Alzheimer's? Yeah, even with an iPhone they are going to get scammed by robocallers, family, and friends.


Can you describe your phone and software setup? I’ve installed Lineage before but lately have been a little stuck deciding on my next open phone. I’d like to be able to install microg. I don’t necessarily plan on using this as a daily phone, but I have some medical automations that rely on Bluetooth, and in a perfect world it could have some water resistance or waterproofing.


I use grapheneos.org on a Pixel phone. I have tried many AOSP variants. Graphene is hands down the most secure, private, and functional. Couple this with Nextcloud and you have a superior setup.


I like GrapheneOS and it's streets ahead of any other effort in the area. The problem I have with it is that it's hard/impossible for your average Joe to install it. A small thing, but big when it comes to privacy in general.


Compared to installing the other AOSP versions, I find Graphene install the easiest. Yes it does require following the instructions and setting some things right. If anyone is hesitant about doing that themselves, they could buy one preinstalled on Ebay or from nitrophone.


Do you know whether it’s compatible with microg? If so, what version of pixel are you running.

I’m familiar with adb.


I am not sure it is, but Graphene has an optional sandboxed Google Play Services that does not require signature spoofing and does not require a Google account. I have it installed on a work profile for some proprietary work apps that require it. My main profile contains all FOSS apps that use web sockets or Unified Push(Nextpush on Nextcloud) to deliver notifications.


There is the Google Play compatibility layer on Graphene, so you probably don't need microg - https://grapheneos.org/features#sandboxed-google-play


Yeah. The point is that you need to be a somewhat technical person to manage to do that.


Yes you need to know how to open a web browser and place an order online. ;) The self install is really slick. You basically set the phone into developer mode, connect it via USB to your laptop or another phone, and just follow the web installer steps.


I daily drive /e/ OS, I'm happy about it.


Very interesting!


They needed to release pro-privacy news to balance out the bad press re: AirDrop in China.


[flagged]


I'd love if there was someplace we could go to test this assertion instead of relying on unbacked statements.

I'm not saying it's untrue, but there is a lot of ambiguity in this claim as things stand. Are you referring to abuse of their monopoly power to censor speech? Abuse of same to compromise human rights? Nefarious trickery to subvert their own security model? Use of forced labor to assemble devices? Each is its own form of evil; the evidence for each ranges from beyond doubt to (afaik) non-existent; typing them out every time one wishes to discuss apple is daunting.


I'm as cynical as the next person - probably even more so. But, I have come to recognize that there is value in "pretending". It establishes what our societal values are, even if we don't live up to them. This allows us to critique the failures, and work towards a better world with fewer failings.

So, is Apple actually going to back down from this and respect that computing devices should fully serve the interests of the owner? Perhaps not, and likely not to the point of allowing end user customization, software modification, etc. But I'd much rather live in the world where we say that Apple should do so, and they nominally agree. Where we keep holding their feet to the fire, by pointing to the hypocrisy when they reference lofty values. The alternative is to continue down the path where we just accept Surveillance Valley's expedient panopticons, and they move on to destroying the next societal value they find inconvenient.


With their plan to encrypt iCloud contents by the end of the year, they would not be able to scan anyway if a user has that enabled, right?


The proposed scanning was on-device. They could have done it regardless of iCloud encryption.


But it only did so if the user had iCloud photos enabled.


We don't actually know because they never shipped the feature. But having iCloud enabled was not a required part of their scanning system, it was simply an if-else check, and we just had to take their word for it that they wouldn't scan anyway.

Still, even if the scanning was only if iCloud was enabled, that's still a step too far in my book because Apple makes it very difficult for alternate photo backup apps to work well. iCloud uses background syncing/upload in ways that third-party apps are prevented from using.


Which could be easily changed without users' knowledge any time.


It wasn’t scanning. They took a fingerprint of every photo and encrypted it; only if that fingerprint matched on the server side, and only if 5 of those matches occurred, could they view a photo thumbnail.
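
The "only after 5 matches" property comes from threshold secret sharing: each matching photo's voucher contributes one share of an account-level key, and fewer than the threshold reveals nothing. Below is a bare-bones Shamir sketch over a toy prime field; it only illustrates the threshold idea, not Apple's actual parameters or construction.

```python
# Bare-bones Shamir secret sharing: a key split so that any 5 shares
# reconstruct it and any 4 reveal nothing useful. Field size and share
# counts here are illustrative, not Apple's.

import secrets

PRIME = 2**127 - 1   # toy field modulus

def make_shares(secret, threshold, count):
    coeffs = [secret] + [secrets.randbelow(PRIME) for _ in range(threshold - 1)]
    def f(x):
        return sum(c * pow(x, i, PRIME) for i, c in enumerate(coeffs)) % PRIME
    return [(x, f(x)) for x in range(1, count + 1)]

def recover(shares):
    # Lagrange interpolation at x = 0
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = (num * -xj) % PRIME
                den = (den * (xi - xj)) % PRIME
        secret = (secret + yi * num * pow(den, -1, PRIME)) % PRIME
    return secret

account_key = secrets.randbelow(PRIME)
shares = make_shares(account_key, threshold=5, count=30)   # one share per photo voucher

assert recover(shares[:5]) == account_key   # 5 matches: key reconstructable
assert recover(shares[:4]) != account_key   # 4 matches: reveals nothing useful
```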


Have they actually killed it? Or will they resurrect it once the media frenzy dies down?


The media frenzy died down a year and a half ago. They announced it's dead for good.


I'm fairly certain, based on things I've seen while under NDA at other vendors with cloud offerings for media files, that other companies are running CSAM scanners, at the Government's request, on their servers. (The ones I've seen searched for known CSAM based on signatures and didn't try to use AI to guess if an image contained it.)

I think this Wired article is referring to scanning on the User's device. Apple may have wanted to do this so they could enable end-to-end encryption and still be able to comply with "secret" government requests.

I'm not sure why companies agree to do this scanning in the first place. Somehow our government applies pressure to the companies to add these scanners. And it's hard for me to believe that "CSAM" is all they're looking for.


I've followed this for quite some time now, not just Apple's implementation of CSAM detection tools but those of the many other large cloud players, including Cloudflare and Google, who have had this in place for a number of years.

Whilst Apple's initial implementation might have been more aggressive than some of the other players', I do think that CSAM detection technology is a good way of detecting this type of material. However, as with all things, there is an opportunity to use it for good and bad.

We love using technology to improve our lives, but seem to actively despise using technology to plug the dangerous gaps in these cloud services. The public don't mind using cloud services despite terrible CSAM and terrorist content being circulated throughout them undetected, but resist when other technology might try to detect it.


> The public don't mind using cloud services despite terrible CSAM and terrorist content being circulated throughout them undetected, but resist when other technology might try detect it.

Isn't it obvious, though? CSAM and terrorism are tragedies, but for most of us, they're out of sight, out of mind (the second-order consequences we all suffer from are less apparent).

The calculus changes once someone introduces technology that has a non-zero chance of a false positive landing you in prison, or on a sex offender list, or even making you go through a trial that is bound to destroy your life through the rumors alone - despite you being completely innocent. Technology that reaches into files you consider private (whether on-device or in the cloud) and never intended to share with a wide audience. Technology that fundamentally can't distinguish between what's legal private data and what's a crime. Technology that's not just a potential threat, but whose very existence makes you realize the vendors lied to you, or at least misled you, with respect to who owns what on the phone and in the cloud.

This is less about CSAM per se, and more about Apple scaring countless tech-aware parents shitless, all while reminding everyone that their smartphones and their cloud storage aren't really theirs, and never were. If Google and Dropbox and Cloudflare got a free pass here, it's because they weren't this blatant with it.


I get the concern, but is there any evidence to suggest technology-based solutions by themselves lead to any of those obviously bad false-positive outcomes? Has anybody ever been sent to prison, placed on a sex offender list, or even been charged based purely on the output of an automated evaluation that you possess CSAM or some other illegal material? One can certainly imagine the dystopian horror of such a scenario, but whatever the specific technology in question, it's difficult to imagine it playing out in the current legal system of any country that isn't already based around just semi-randomly throwing people in jail.


>I get the concern, but is there any evidence to suggest technology based solutions by themselves lead to any of those obviously bad false positive outcomes?

Yes. In fact, it's mathematically provable that these false positives must happen, unless you never label anything as CSAM.

Welcome to signal detection theory; it's a really cool, and useful field. I recommend this book: https://www.taylorfrancis.com/books/mono/10.4324/97814106119...


> it's mathematically provable that these false positives must happen

This is not true. It’s mathematically provable that false positives are possible, not that they must happen. In fact, the chances of collisions happening are astronomically low and basically nil when you use them in the way Apple was proposing (two separate hashes; several matches required).


> This is not true. It’s mathematically provable that false positives are possible, not that they must happen. In fact, the chances of collisions happening are astronomically low and basically nil when you use them in the way Apple was proposing @JimDabell

WTF are you on about?!

That IS NOT how math works. If millions of images are uploaded per day, there are GUARANTEED false positives.

It's not a /maybe/ but an absolute fucking certainty.

This guy's a shill...


I agree his overconfident tone isn't helping, but let's not assume malevolence.

I think his argument is that the FP rate can be set to a vanishingly small value, and so we can in theory proceed as if it were effectively zero, as we do with hash collisions. This reasoning is false for two reasons: one theoretical, and one practical.

The theoretical reason is that a nonzero FP rate means that as the number of samples approaches infinity, a false-positive is guaranteed to occur (as you rightly point out).

The practical reason is that complex image classifiers do not exhibit what any sane person would consider a "vanishingly small" FP rate, especially given the volume of samples being processed in this case.


I'm sorry, you are mistaken. You can set the FP rate arbitrarily low, but they will eventually happen. "Basically nil" is not the same as "actually nil".

I also suspect you are grossly exaggerating the d', and grossly understating the FP rate of Apple's solution, as I am unaware of any image processing system whose FP rate is "basically nil". Your assertion warrants hard numbers.


> You can set the FP rate arbitrarily low, but they will eventually happen. "Basically nil" is not the same as "actually nil".

I know “basically nil” is not the same as “actually nil”, that’s why I said “basically nil” and not just “nil”. It is “nil for all practical purposes”. This is a good enough standard for software the whole world uses all day every day to depend upon – otherwise hashes would be pretty useless.

“Eventually” needs clarification here. Of course, if you spend from now until the heat death of the universe iterating through the search space you could get there. But that doesn’t mean that in practical use, such a collision is guaranteed as you claim. This system could have run its entire lifetime without a collision. You are mistaking “if you cover the entire search space” for what happens in the real world.

Do you understand that there is not one, but two hashes, and that they both need to collide simultaneously for each image? And do you understand that this has to happen not just a single time, but for several images on your system? That’s why I specifically said “basically nil when you use them in the way Apple was proposing” and described how it worked.

Everybody with even a basic grasp of what hashes are understands that collisions are certain when considering the entire search space. But that doesn’t correspond with what happens in practical terms, which is why hashing is actually useful in general, and I think people are getting too fixated on “hashes can have collisions” to notice the other properties of the system.

Apple can tune the false positive rate by varying the number of matches necessary to flag an account. They say they chose a threshold that would result in a false positive rate of one in a trillion. Are you saying they got their maths wrong or were lying, and that’s actually impossible? Because there’s only eight billion people on the planet and most of them don’t have Apple accounts, so if one in a trillion is accurate, then it seems entirely possible for this system to have run indefinitely without a single false positive.

Remember – I said “basically nil when you use them in the way Apple was proposing (two separate hashes; several matches required)”, and not that a single hash function wouldn’t ever produce a single collision for a single image. Do you really disagree with that?
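
To put rough numbers on this exchange: the account-level false-positive rate depends on a per-image false-match rate, the number of photos per account, and the match threshold, none of which are public, so the inputs below are made up purely to show how sharply the threshold drives the result.

```python
# Rough numbers only. Assumed inputs: a per-image false-match rate, photos
# per account, and total accounts -- none of these are Apple's real figures.
from math import comb

def p_account_flagged(p_img, n_photos, threshold):
    """P(at least `threshold` false matches among n_photos photos)."""
    p_below = sum(comb(n_photos, k) * p_img**k * (1 - p_img)**(n_photos - k)
                  for k in range(threshold))
    return 1.0 - p_below

def p_any_account_flagged(p_account, n_accounts):
    """P(at least one falsely flagged account across n_accounts accounts)."""
    return 1.0 - (1.0 - p_account) ** n_accounts

p_img = 1e-6              # assumed per-image false-match rate (illustrative)
photos = 10_000           # assumed photos per account (illustrative)
accounts = 1_000_000_000  # illustrative account count

for threshold in (1, 5, 30):
    per_account = p_account_flagged(p_img, photos, threshold)
    print(threshold, per_account, p_any_account_flagged(per_account, accounts))
# Raising the threshold collapses the per-account rate (and hence the chance
# of any false flag anywhere), which is the crux of both sides' arguments.
```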


>“Eventually” needs clarification here.

We're discussing mathematics, so it has a precise meaning: as the number of samples approaches infinity, the probability of observing a false-positive approaches 1.

As I mentioned in another comment, your claim is both formally and practically incorrect. It is formally incorrect for the reason above. Given enough samples, a false-positive must occur.

It is practically incorrect because there exists no image-classification system whose FP rate is small enough that multiple FPs won't be observed daily, given the number of samples at play [0].

[0] Except, of course, for the trivial case in which you allow an exorbitant number of misses, but surely this isn't what you're arguing...


> We're discussing mathematics

No, we are discussing a real-world system.

> as the number of samples approaches infinity

There are not an infinite number of people with Apple accounts.


Actually, we're talking about both, and you are wrong on both counts.

If you weren't, you would be able to point to an existing system that has demonstrated the capability of classifying an image set on the order of iCloud's, without producing a false-positive. You can't, because such a system doesn't exist.

Even the "solved" problem of OCR isn't capable of such a feat: https://youtu.be/XxCha4Kez9c


> you would be able to point to an existing system that has demonstrated the capability of classifying an image set on the order of iCloud's, without producing a false-positive.

This is not relevant to what I was saying; it sounds like you might be mixing up collisions with false positives.

As I said before:

> Remember – I said “basically nil when you use them in the way Apple was proposing (two separate hashes; several matches required)”, and not that a single hash function wouldn’t ever produce a single collision for a single image. Do you really disagree with that?

The relevant false positive is when an account gets flagged by Apple. That’s when it actually matters; that is what we don’t want to happen; that is what Apple are describing as a one in a trillion chance. The rate of hash collisions is only important as an input to that larger system.

That “number of samples”? It’s not the number of images in the whole of iCloud. It’s the number of Apple accounts. It doesn’t matter what happens “approaching infinity”, it matters what happens in the range zero to eight billion. We have a known upper bound, and it is substantially less than infinity. It doesn’t matter if a false positive is guaranteed “approaching infinity”, what matters is the likelihood of a false positive for any Apple account. That is not guaranteed, and actually extremely unlikely.

Also, just checking – you do understand that iCloud is using perceptual hashes and that it’s iMessage that uses an image classifier, right? Perceptual hashes aren’t normally referred to as image classifiers and aren’t really doing the same sort of thing as OCR; with OCR you need to output a token even if the shape is uncertain, but that failure case doesn’t exist for perceptual hashes because the result is just that there isn’t a match. What would be a false positive in the OCR case would be a false negative in the perceptual hash case, which we don’t care about for the purpose of this discussion.


> is there any evidence to suggest technology based solutions by themselves lead to any of those obviously bad false positive outcomes?

Specific to this type of technology? YouTube and their Content ID comes to mind - stories abound about people getting random videos flagged or demonetized because of a match to something in a background, or false positive match to their own original rendition, or to their own original rendition of their own work they own copyright to, or even against noise. Some of those stories end up harming those YouTubers financially. It's happening often enough that pretty much every channel I've watched has a video commenting or complaining about this at this point.

Then, related, random bans of Google accounts, or people having their apps kicked off Apple's App store or Google's Play Store for seemingly no reason (following some of those stories over the years, I estimate it's 3:1 false positive to the person actually violating ToS). Google account bans, in particular, can easily make your life very difficult for some time, or even kill your company, and they have a nasty feature of being transitive (if you have multiple Google Accounts connected in an obvious way, e.g. by recovery number, having one banned tends to be followed by having the rest banned too).

Point being, those big cloud companies don't have particularly good reputation when it comes to algorithmic moderation. So when one of those companies wants to deploy another set of automated scans, with a twist that a false positive now has a chance of quickly and irreversibly derailing your whole life, it's hopefully understandable why people are apprehensive.

> whatever the specific technology in question, it's difficult to imagine it playing out in the current legal system of any of country that isn't already based around just semi-randomly throwing people in jail.

With this particular crime, you don't have to go to jail to have your life destroyed. You don't even have to go to court. All it takes is for a rumor to leak out that you're accused of consuming or producing CSAM - it'll do irreparable damage to your relationships with others, as they'll always be wondering whether there wasn't something to those rumors.

In a way, this technology being criticized so loudly and broadly is a form of mitigating the damage it could cause: the more people are aware it's liable to spurious false positives, the more chance the victims of such false positives have that others will believe them.


The public also doesn't mind using postal services despite terrible CSAM and terrorist content being circulated throughout them undetected, but rightfully resist when other technology might try detect it.


You know that it's a distinction WITH a big difference, right? Physical items take up physical space, have to physically be in transit, and can be physically on a person or at a person's property. Hiding one phone is much easier than hiding many boxes of pictures of children being raped or abused in various different ways. The phone can also contain much more material than an entire room or house could.

Personally I don't care about the argument that privacy is oh-so-special, you can have that all day long, as long as you also fight equally for the prevention and containment of CSAM, which most self-proclaimed privacy advocates tend to not do.


What's the difference? The only significance of the pictures is that they serve to prove that you were party to the rape/abuse, either directly or indirectly (i.e. gave reason for someone else to produce the content). The quantity of content makes little difference. The child wasn't doubly abused by someone behind the camera being shutter happy. Stashing away one physical photo or one phone with hundreds of pictures of the act doesn't really change anything.


The effort and timelines are different. Deleting a few pictures or wiping a phone is a lot easier and faster than trying to hide many kg of material taking up many m^3 of space. Sharing something physical is also harder: you have to copy/reproduce it and then distribute it, which also means that at many points in time multiple copies are close/near to the owner. Getting caught via 1000 leads pointing to the owner is a lot more likely than via 1 lead (in the shape of a phone).


Interesting - their approach to hash checking prior to iCloud upload (only when upload was enabled) seemed intended to let them support the e2ee they announced today (without the tradeoff of e2ee shielding CSAM).

I guess they decided it wasn’t worth it and it’d be good to ship e2ee anyway despite that meaning it may protect CSAM. I had figured they’d enable e2ee first and then reannounce the hash check if you enable it but I guess not.

I can’t see the rest of the article due to the paywall, but I wonder what changed. Maybe the risks with imperfect hash matches or the recent news of Google reporting that guy to police made it not worth it?

It’s in some ways to Apple’s benefit to do this: they can’t see the data and are no longer responsible for policing its contents. They can’t accidentally leak it or get socially engineered into restoring access for a fraudster (how some celebrities got their images leaked).

Whatever it is, it’s nice when the incentives are aligned around user privacy. It’s cool (and a little surprising) to see real consumer device facing e2ee for cloud storage, even if behind an advanced flag.


There's no point in marring good PR from the news they announced today with the CSAM tech. When governments pass legislation targeted at E2E they will likely land on Apple's proposed solution as a good compromise, at which point it will be out of Apple's hands and they can just go forward with it.


I suspect they were related and the earlier CSAM tech was announced because of plans to enable e2ee like they announced today (otherwise the tech didn't make sense since they could just scan the unencrypted photos on their servers).


>I guess they decided it wasn’t worth it and it’d be good to ship e2ee anyway despite that meaning it may protect CSAM. I had figured they’d enable e2ee first and then reannounce the hash check if you enable it but I guess not.

I read the e2ee message as allowing the device to upload a hash to the server (it depends on what a 'raw byte checksum' actually is), and then Apple can do a check using that checksum server-side. IMHO, so long as the actual check is server-side it's fine privacy-wise.


They didn’t do hash checking on device. They uploaded an encrypted fingerprint of every photo and it was up to the server to determine if that matched known CSAM.


You are wrong.

They downloaded the CSAM hash database from iCloud (iCloud _HAD_ to be enabled). After that every photo was matched with the DB using their algorithm.

If enough confirmed matches were found, "reduced quality images" would be sent to a human to double-check if the matches are correct and not false positives.

Only after that the authorities would've been contacted.

Compare this with Google's approach of "our AI just scans everything and if it sees something, it'll directly contact the police and shut down your account with no way to recover it". (This actually happened.)

Which is worse?


That’s not how apple’s scheme worked. All matching happened server side based on encrypted fingerprints sent along with the photos.


> After that every photo was matched

Only photos destined for iCloud were checked as part of the upload process.


This article[0] on Apple's new iCloud encryption system seems to be related to the CSAM project. I wonder if this is a sign that Apple are moving in the right direction.

[0] https://www.wsj.com/articles/apple-plans-new-encryption-syst...


Nice but too late. Tomorrow, I’ll be wiping (and returning) my last Mac after 22 years of using Apple products.

My iPhone will follow suit once I receive my Librem 5.


Apple still scans all sent or received messages for explicit images if the phone is registered to a minor and if a specific setting is turned on by a parent. It blurs received NSFW pictures and gives a warning if sending one. This client side scanning is already implemented. Parents are no longer notified, thankfully.

I am glad the more blatantly privacy-violating policy was canceled, but I can't help but be cynical. Especially as Apple has "followed local laws" when strong-armed by certain governments, and a recent class action was filed alleging Apple lies about whether it tracks you, ignoring explicit settings not to.

https://www.apple.com/child-safety/

https://9to5mac.com/2022/11/21/ios-privacy-concerns-deepen/

https://twitter.com/mysk_co/status/1594515229915979776


But what are your opinions on child sexual abuse? Because that's the true topic at hand. It's very easy to complain about stuff, but it doesn't really help with anything does it... so what would be your method?


There is another issue. If they can do it, they may have to comply … better not to touch it unless it is shared via some sharing service that explicitly says it will be shared with the authorities. For individual devices and backups, if I were them I would not touch it. Too much liability for both ends.


Is it possible to scan when user has E2EE enabled?

(The other announcement today)


In the past they've announced local scanning on the iPhone. They postponed it indefinitely after the public feedback, and now completely cancelled it.


The last paragraph of the story literally says that they're using it as an alternative to the icloud scanning plan.


The last paragraph of the story is a concluding statement by the author on the difficulty of countering CSAM, and it says no such thing.

The announced opt-in feature for iCloud family accounts (Communication Safety for Messages) will scan content that is sent and received by the Messages app, and alert the associated parent or caregiver directly, without informing Apple.


This was the last paragraph before WIRED edited the article to add commentary from RAINN as the last paragraph:

>"Technology that detects CSAM before it is sent from a child’s device can prevent that child from being a victim of sextortion or other sexual abuse, and can help identify children who are currently being exploited,” says Erin Earp, interim vice president of public policy at the anti-sexual violence organization RAINN. “Additionally, because the minor is typically sending newly or recently created images, it is unlikely that such images would be detected by other technology, such as Photo DNA. While the vast majority of online CSAM is created by someone in the victim’s circle of trust, which may not be captured by the type of scanning mentioned, combatting the online sexual abuse and exploitation of children requires technology companies to innovate and create new tools. Scanning for CSAM before the material is sent by a child’s device is one of these such tools and can help limit the scope of the problem.”

Those quotes are a continuation of the statement from "The Company" ie Apple.


Ah, I think you're confused by the way the preceding paragraph ends ("Apple told WIRED that it also plans to continue working with child safety experts [...]").

The paragraph you're quoting ("'Technology that detects...scope of the problem.'") is entirely commentary from Erin Earp at RAINN, and is what was added by WIRED with the edit.

And, sorry to nitpick, but "Countering CSAM is a complicated and nuanced endeavor [...]" has always been the last paragraph (both before and after the edit).


Well, now they're storing hashes of every E2EE file on their own servers.

Did they specify what kind of hash? Could it be the perceptual hashing?


If they're embracing convergent encryption then it'll still be trivial to check if specific blocks (or files if at the file level) exist in user backups.
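
For readers unfamiliar with the term, here is a minimal sketch of convergent encryption (assuming that is what's meant by content-derived keys): because the key is derived from the plaintext itself, identical files always produce identical ciphertext, which is what makes "does this user store this known file?" checkable without any user keys. The cipher below is a toy XOR keystream just to keep the sketch dependency-free; it is not a real cipher.

```python
import hashlib

def convergent_encrypt(plaintext):
    """Toy convergent encryption: key = H(content), XOR keystream as the 'cipher'."""
    key = hashlib.sha256(plaintext).digest()
    stream = hashlib.sha256(key).digest()
    keystream = (stream * (len(plaintext) // len(stream) + 1))[:len(plaintext)]
    ciphertext = bytes(a ^ b for a, b in zip(plaintext, keystream))
    return key, ciphertext

known_file = b"some widely shared file the provider already knows about"
_, known_ct = convergent_encrypt(known_file)

# Provider side: no user keys needed -- if a stored block's ciphertext equals
# the ciphertext of a known file, the user must have stored that exact file.
_, user_ct = convergent_encrypt(known_file)
print(user_ct == known_ct)  # True: presence of the known block is detectable
```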


So is it going to pick up innocent naked baby photos? It’s quite concerning an algorithm could ruin someone’s life over incorrectly flagged material.


Remember that saying: It's always darkest under the lamp. And it's not about Apple in this context.



We, Apple users, won the battle. Nothing more to add.


So I guess that this: https://news.ycombinator.com/item?id=33897793

is part of the reason why? Or will there be a backdoor in their supposed "end-to-end" encryption of iCloud data?


For now.


Good for the people who send nude pictures of their kids to doctors for medical examination, as well as for false-positive victims who got their lives ruined.


The headline really does not read well.


I guess with the advent of image-generating AI, that endeavour was hopeless from the start.


> Children can be protected without companies combing through personal data

Darn right they can.


Okay, that's a nice soundbite. Now let's steelman it.

I believe the following to be true:

1. Children are sexually abused. This abuse is far more widespread than we as a society want.

2. Images of this abuse are called CSAM (child sexual abuse material). CSAM is easily distributed over the Internet.

3. People who intentionally download CSAM are criminals who deserve to be punished.

4. Services that scan for CSAM (using hashes provided by the NCMEC, the National Center for Missing and Exploited Children, and similar bodies) identify these criminals, who are in turn prosecuted by the police. In fact, hundreds of thousands of CSAM images are reported by cloud services every year. This, along with the fear of being caught, reduces the demand for CSAM.

5. Some of those caught are found to be creating as well as consuming CSAM. Catching them reduces the creation of CSAM as well as the demand for it.

6. Reducing demand and creation of CSAM protects children.

7. Scanning files for CSAM catches criminals that would not otherwise be caught.

Therefore, "combing through personal data" protects children that would not otherwise be protected.

When you agreed with "children can be protected without companies combing through personal data," did you mean that my sequence of reasoning is wrong? If so, how?

Or were you expressing a personal philosophy that's closer to "I think the benefits to privacy (from not scanning) are more important than the benefits to children (from scanning)?" If so, why do you believe it?


I think the biggest way to convince people on this is Point 1 in your argument. The prevalence of this activity is much more widespread than I have seen publicly acknowledged anywhere.

I worked at a large cloud company (smaller than iCloud) that scanned for CSAM and in some cases notified law enforcement - the number of yearly occurrences ended all internal arguments about the "morality" of scanning immediately.

As an engineer, knowing that the code I wrote was handling this kind of data every day, thousands of times per day, involving tens of thousands of individuals per year changed my opinion from "we should protect the privacy of the user" to "F** YOU stop using my product".

Short of products that offer full end to end encryption, any that does not scan for this kind of thing should look for an afternoon at what their servers are doing, then see how they feel. The marketing department will absolutely not resist the temptation to poke through unencrypted data.


The practice of "Steelman"ing is presenting the strongest possible argument. So please answer me this question.

--

The Fourth Amendment of the United States Constitution reads:

The right of the people to be secure in their persons, houses, papers, and effects, against unreasonable searches and seizures, shall not be violated, and no Warrants shall issue, but upon probable cause, supported by Oath or affirmation, and particularly describing the place to be searched, and the persons or things to be seized.

--

Before you take step 4 above, please explain to me how you established probable cause for each and every scan that takes place. Both those scans that were positive and found CSAM, and those that were negative and found nothing.

In the US, there is a long history of Dragnets being deemed unconstitutional.

--

And I personally understand that this Amendment protects us from actions by the US Government. Not Apple. But here Apple is conducting the scans and forwarding the results onto the Government for the purposes of law enforcement.

In my personal opinion, when apple began these scans, they effectively became an agent of the state.

--

In any event, I am glad the scans have stopped.


> In my personal opinion, when Apple began these scans, they effectively became an agent of the state.

I think reading United States v. Miller (982 F.3D 412) will probably convince you otherwise. Here, the state actor doctrine is defined to include activities that are always the province of the government. While arresting offenders and prosecuting them is definitely in this zone, scanning hashes and comparing them to a list - even one provided by NCMEC/the government - is not. In Miller, the court says, “Only when a party has been ‘endowed with law enforcement powers beyond those enjoyed by’ everyone else have courts treated the party’s actions as government actions.”

Don’t get me wrong, I think it’s a terrible product decision to violate the user’s privacy like this, especially in this golden age of state surveillance capability. But it is certainly not state action.


I think "state actor doctrine" is important here. The above poster wrote: "here Apple is conducting the scans and forwarding the results onto the Government for the purposes of law enforcement." If that were true, Apple would probably lose whatever class action suit was brought by the pedos. But in this case, Apple has every right to ensure that its servers and platforms are not used to disseminate unlawful material. They could always be held liable by victims or the state for failing to curb the kind of activity that is certainly happening today whereby people make icloud folders partially or publicly accessible. How is it different to charge for a subscription to an icloud folder vs a paid substack? Both of these companies share in the responsibility not to allow csam on their platforms.

a private company can always publish terms that grant them on-device or in-cloud access to whatever data they are manipulating or storing or whatever. One could argue that clicking OK on "we can manipulate your data" is legal cover for "running a hash search and forwarding material to the FBI".

Is "post fib.gov" the new "Sunset Filter"? and can they run that filter automatically?


A lot to unpack here.

First, I don’t think a civil suit by people caught by the hash-checking would be the remedy any of them would seek. More than likely, they would be trying to convince a court that Apple, acting on behalf of the government, violated their rights by searching their device without a warrant. The remedy they’d likely be seeking would be to have the “fruit of the poisonous tree” (Apple pun intended) excluded in their criminal prosecution.

I guess maybe they could file a civil suit afterwards but I’m having a little trouble imagining the assemblage of a class around which to file a class action since (allegedly) this hash matching system should “catch” innocent people extremely rarely.

And what duty does Apple owe to users? The duty not to inform the police that they might be committing a crime? This seems pretty shaky.

The problem with all this is that the liability they’d be seeking to impose on Apple is specifically relieved by 18 U.S.C. 2259 (barring recklessness, malice, or a disconnect between Apple’s action and 2258A).

The other thing is that “forwarding the results on[ ]to the government for the purposes of law enforcement” explicitly, under Miller, is not government action. It’s the law enforcement function itself that would be interpreted as state action.

> How is it different to charge for a subscription to an icloud folder vs a paid substack?

I think the answer is that liability for failure to moderate substack might be relieved under section 230 of the CDA[0], while liability for the act of moderating Apple’s cloud by scanning hashes and reporting hits to the FBI would be relieved under 18 U.S.C. 2259.

[0]: assuming, of course, Substack didn’t/shouldn’t have known about the offense. Once they know, they have a 2259A duty to report.


Thank you for these citations. I'm not a legal expert, but I appreciate the response. I don't know what the correct thing to do for apple or others is, but it seems like apple has far more authority than most people suspect to do things like hash checks and law enforcement notification. The general tenor of most complaints is that this is a violation of first amendment or constitutionally protected privacy rights. But I don't know if that's true, and from your response, it would seem like there is some precedent for corporations being protected in their right to act on crimes they are aware of.


Protection from liability when aware of a crime and responding accordingly; that’s exactly what 18 U.S.C. 2258A gives corporations, in the context of child sexual exploitation materials.

Sadly, I think you will find we don’t have any constitutionally protected privacy rights. We used to have some, and that’s where the protection for abortion rights came from, but the current court (at least in my interpretation) believes that interpreting the Constitution that way was a huge mistake. We still have some remnant of these rights, but as they come up for review by the Roberts court, they will be abolished one by one until they run out of time or we run out of patience with them and enshrine these rights in statutes.

But I think when you say privacy rights you mean “fourth amendment rights against unreasonable search and seizure” and insofar as that’s the case, I agree with you: Apple has a lot more power to violate these rights than most people think, because they have smart lawyers who have advised them exactly how much leeway they have before they become “state actors”; their software stops just shy of that point.

I think one thing that’s important to remember is that Apple has a lot of power but only because we give it to them. All you have to do to avoid this hash scanning nonsense forever is just throw away your iPhone. (The problem is, at least for me, I’m not ready to do that).


It took me a moment to parse your argument here. To summarize, I think you're saying that my reasoning is wrong because it's illegal in the US for private services to conduct mass scanning for CSAM and notify the government.

That's an... interesting take. It doesn't pass the sniff test, sorry. First, if it were illegal, I wouldn't have expected it to make it past the corporate counsel at Google, Facebook, and Microsoft (all of whom do CSAM scanning in the cloud). Second, I would have expected it to have been brought up by the defense in a trial and stopped as a result. That it still happens is a strong indicator that it is, in fact, not illegal.

Frankly, it sounds like the same nonsense "sovereign citizens" get up to.


The argument rests on the comparison of a user's local device to a private residence. Google, Facebook and Microsoft do scanning in their cloud and it's fine because it's in their servers. Apple's original suggestion was to do client-side scanning in the user's device and this is compared to a search of a residence.


> > In my personal opinion, when apple began these scans, they effectively became an agent of the state.

> [I think you mean] it's illegal in the US for private services to conduct mass scanning for CSAM and notify the government

Nearly. If the government is requiring this then it becomes improper.

Not that Apple would be breaking the law, but that the search would be by a government proxy and thus inadmissible.

imho with the FBI's past pressure on Apple it doesn't seem unlikely that this is somewhat coerced.

> I would have expected it to have been brought up by the defense in a trial and stopped as a result. That it still happens is a strong indicator that it is, in fact, not illegal.

If their lawyer doesn't feel they could win with this argument they won't waste time bringing it up. Their job is defense not legal correctness.

> Frankly, it sounds like the same nonsense "sovereign citizens" get up to.

SovCits argue about stuff like not being subject to the government because their name is spelled in all caps, not about actual constitutional principles.


> Nearly. If the government is requiring this then it becomes improper.

Even if the government does not require it, if it acquiesces, and the action is the exclusive province of the government, it’s state action (i.e., improper).

But yes, if they require it, it’s state action too. 18 U.S.C. 2258A allows but does not require scanning hashes, and scanning hashes is not the exclusive realm of the state, so it’s fair game.

The idea that the FBI is somehow twisting Apple’s arm doesn’t really ring true to me. The FBI has lawyers, they understand state action doctrine, and they know if they were coercing Apple, one leak could ruin a lot of prosecutions. I guess I couldn’t say it’s impossible though.

I’m kind of interested in this idea elsewhere in the thread that executing this hash-matching code on your device as opposed to in the cloud is somehow more deserving of 4A protection. One part of me says a textualist/originalist SCOTUS is pretty unlikely to read “mobile handset” as “house,” but who really knows what those characters would say these days.


These days, your handset is de facto your personal id (via eSim) and contains your personal digital possessions. A person taking over your handset could easily impersonate you and there's a good chance they could access your medical and banking records.

I am far from being a legal expert, but I see there's a 9-0 ruling in Riley v. California that a smartphone has 4A protections.


Your point is valid and I think it makes me change my argument somewhat.

On the other hand, Riley (and Wurie, from the companion case) were arrested prior to their phones being searched without a warrant.

Here, though, there is no arrest and no government search, just a software provider selling you a phone running software that does something that doesn’t amount to state action.

With the benefit of your post, I think I can say that what I meant was that I doubt today’s Supreme Court is going to tell us that Apple can sell a service in the cloud that scans hashes, but they can’t sell a phone running software that does the same thing in your phone in the logic of “your phone is your house.”

Put another way, if you invite me into your house (like you welcome Apple software in to your phone) and I want to search your house, that’s not a violation of the Fourth Amendment (it’s just very rude and maybe trespassing).


>just a software provider selling you a phone running software that does something that doesn't amount to state action.

>Put another way, if you invite me into your house... and I want to search your house

Say Apple did implement client-side scanning. I think you're right here that Apple's would-be actions would not amount to state action. However, we would be allowing a broad-based search of users' personal files whose results may be auto-forwarded to government agencies based on a private actor's whims.

Say there was a private militia searching people's homes, and forwarding anything too suspicious to the police. Say that the only way to buy a house (in Cupertino, or in the entire country) would be signing a deal with an HOA to allow the militia in. Or less than that, that not signing a deal would merely be arduous and subject you to penalties. At which point of annoyance would you be de facto cancelling 4A?

So there needs to be some sort of a line, and it could be debated where to put it. In this case it seems that multiple separate people have come around to separating the local device from the cloud as a line, and this does have the advantage of being a clear line.

> they can’t sell a phone running software that does the same thing in your phone

Note that this equivalence also works against allowing this search. If Apple (and others) have a widely-accepted equivalent alternative, there isn't a need to allow running the scan locally. [EDIT: They could have chosen a different method, like scanning all the files. Or at least implementing it while/after rolling out E2E. Doing it the way they did made the approach appear like crossing a line without any benefit to anyone but Apple, leading to increased backlash.]

IANAL and all.


YMNBALBYTLY: you may not be a lawyer but you’re thinking like one.

I want to again make clear that I think this kind of scanning sucks. I like the idea of a line. I like the idea of the line being that I decide which software functionality runs on my device. Your hypo about the militia is a great one. I would not live in a country like that, but I have an iPhone…


Thanks for repeating what you got out of my post. There is a little miscommunication. I'll see if I can clear that up.

I'm a "spirit of the law" kind of guy.

And I think that when the bill of rights was passed, the idea was that Americans should be shielded from searches unless there is probable cause.

The founding fathers didn't anticipate the technical innovations of telecommunication and the computer revolution. Nor the power this would grant private companies over the lives of US citizens.

Let's be blunt: if Apple had to pay human staff to manually evaluate hard-copy documents for CSAM violations, they simply wouldn't shoulder that expense. Arguably the only entity that could "afford" it would be the US Government.

Yet here we are in the modern age and these technologies exist. And these searches really do happen - without probable cause. There are at least two known cases of fathers being criminally investigated by the police based on photos of their children that they sent to a medical doctor, after those photos were automatically scanned and flagged by automated systems[1].

The only point I'm making here is this: I feel that the bill of rights has been diminished in the digital age, and companies can and do wield automated algorithms that end in criminal investigations in ways that detrimentally impact the lives of innocent people. And that these events taking place run contrary to the spirit of the bill of rights.

That opinion may differ from your own. Or it may differ from how case law and legal precedent have evolved since the bill of rights was initially ratified. But I don't think dismissing that opinion as "nonsense" is either appropriate or a sign that you are debating in good faith.

[1]: https://www.nytimes.com/2022/08/21/technology/google-surveil...


Thanks for responding. You're right, I misunderstood, and I apologize for being dismissive.

I thought you were making a legal argument, but instead, it sounds like you were making the moral argument: mass algorithmic searches by private entities are obviously bad if they eventually result in criminal prosecution (assuming no warrant). Presumably you don't like it when e.g., Google and Facebook conduct mass searches of your data for advertising reasons, either, but that's a separate conversation.

I'm sympathetic to that argument in the abstract.

When you get to the specific case of CSAM, though, that argument results in this position: mass automated searches for known CSAM hashes cause more harm than allowing that CSAM to be shared unchecked.

And that I don't agree with.

My logic is that Facebook, Microsoft, and Google have already been scanning for NCMEC hashes for years, and I'm not aware of any injustices as a result. Please note that I'm specifically talking about hash scanning, not the ML-based classification systems that presumably caused your [1]. I'm not an absolutist; a few cases where people were referred to police as a result of fraud (e.g., a jealous ex-lover planting evidence) are not necessarily a deal breaker for me, especially since the real source of harm is the fraud, which could have been conducted in any number of other ways. I'm also not sympathetic to the slippery slope fallacy.

On the other side, I believe that there are mass pedophile rings and that these scans have helped detect them and take them down.

So for me, the harm of mass CSAM hash scanning is low and the benefit is high. The balance is in favor of CSAM hash scanning, but not in favor of ML-based CSAM classification.

That's a "from specific consequences" argument, not a "from abstract principles" argument—there's probably philosophy terms for those positions that I'm unaware of—and I respect that other people could see it differently.

PS: I've actually been thinking about Google/Facebook/Microsoft in this thread, not Apple—since they never rolled out their system—but, in my mind, Apple's proposed system threaded the needle perfectly. Combined with their recently-announced e2e encryption, they provided just the right balance of privacy, hash scanning, and protection against abuse and false positives. I'm sad they've shut it down.


5a. Some of those caught will likely be non-technological false positives: images of their own children[1] or CSAM images uploaded by hostile actors attempting to frame the person[2].

[1] https://www.nytimes.com/2022/08/21/technology/google-surveil...

[2] https://www.huffpost.com/entry/wife-framed-husband-child-sex...

The second one is particularly worrisome. In that case the wife was utterly and totally stupid and was therefore caught. How many times has that happened already and the person who did it wasn't caught?


> Reducing demand and creation of CSAM protects children.

You're thinking of this as if CSAM production follows a rational supply/demand relationship (such as with food, or furniture). But is there really evidence to back up the idea that CSAM production follows the same rational relationship?

Child abuse is not rational, and an abuser provides their own demand for the CSAM they create; a child will not be saved from abuse even if there's nobody for their abuser to share the photos with.


The existence of international CSAM trading networks suggests otherwise, as does the frequency with which participants who trade but don't actively produce end up being an ingress point for compromising those networks.

Even if the currency is barter - i.e., "more unique content" - that means the incentive, for people operating unchecked to feed their thirst for it, is to have a market for the creation of it. Temporarily cornering the market with new content no one's seen before means you can trade for huge amounts of other content successfully.

After all, everyone involved is taking a huge risk by even being a participant.


It scared me when this story first broke that there was almost no focus on the real harms being addressed. It was all about calling out the flawed methodology of the scanning, not the overarching reason why anyone needed to talk about scanning images at all.

I think the litmus test for if a service perfectly respects privacy is if a pedophile can conduct their entire operation using the service without their private data being exposed to prying eyes. By that metric, there ought to be no perfectly private services on the clearnet. To many it is a matter of compromise and "thinking of the children."

Those who really want a higher level of privacy don't need to be beholden to a publicly traded company, subject to legal requirements, to provide it to them.


What is an acceptable murder rate for society?

I think quite obviously, no one actually wants murders to occur. We all want there to be zero murders, but that's not realistic. In a modern society, murder will happen, so we have to decide what are the trade-offs societally. We can make and enforce laws, then make and enforce more, and so on. Ultimately, a society with a 0% murder rate isn't utopia; it's a police state where no one is free. On the balance, I think that society is worse off than one with an otherwise "acceptable" murder rate.

This is the calculus of those arguing against CSAM scanning. We all agree that we don't want CSAM, but is the cure worse than the disease?


Of course I believe it's important to protect children, and people consuming and producing CSAM deserve pure unadulterated justice. I think we should do everything reasonable to catch and punish such loathsome persons.

However, we have other means at our disposal to catch makers and consumers of CSAM that don't require blanket trawling of personal data. I am afraid to surrender privacy in the name of even something very good because surveillance is easy to abuse.

I doubt you'd advocate for the extreme of e.g. the police knocking on everyone's door and performing a full search of their house/flat/whatever every so often to search for CSAM on computers or in magazines etc. That would be sure to catch more criminals and protect more children, but the societal cost would be astronomical. No one wants to live in a police state.

And I hope you don't understand me as advocating for the complete opposite of that, where no searches are ever performed in the name of personal privacy. That's too dangerous.

There's some room between the two extremes. I get nervous about mechanisms that can easily be used to compromise privacy of people who never were under any sort of suspicion. Maybe I'm wrong about where to draw the line. Some of my family lived in East Germany under the Stasi, and what a dark time that was. Perhaps that's part of the source of my aversion to mechanisms like mass digital surveillance.


#1 is 99% of the problem, and #2-#7 is 1%.


I always thought starting with CSAM was just a convenient pretext for slowly introducing surveillance tech in the U.S.

#1-#7 are all bad. I don't think saying "some aspects of [bad thing] aren't so bad" is an effective argument against surveillance tech. Better just to call bullshit on the surveillance tech itself.


[flagged]


[flagged]


Well that certainly worked for WhatsApp, when they delayed the new privacy policy and then just rolled it out when the media coverage died down.


I have a young child. For me, this news over a year ago has significantly curbed sharing of baby pictures/videos through iMessage/WhatsApp etc.

I don’t even record as much video of our son since everything is backed up to iCloud and Google Photos. I have an auto-adding album where any picture taken of our son goes directly into a shared album with my parents and my wife’s parents.

My wife wanted to record us giving our son a bath because it is cute how he plays with the water. We didn’t do that because of this worry that it might be construed as CSAM.

I can’t have my Google account be suspended because I use the same account for most of my freelance work.

The whole situation seems to be a huge violation of the principles of privacy.

I understand this is intended to protect children from terrible evils. But still, I don’t feel this is the best way to do it.


You were smart not to risk it. Even if you can get your hands on a digital camera that lets you record those moments privately, you'll have to be careful about what OS is on the computer where you store/view those images, because now or at some point in the future it could also decide to do the same thing.

No matter how terrible or illegal something is, even if it's abuse or terrorism, our own devices shouldn't be treating us as an adversary, watching everything we do to make sure we don't step out of line, and reporting violations to the police.


The Apple one was only hash matches with known NCMEC images over a certain threshold. Even then there was a human in the loop, IIRC. So it wouldn’t have been an issue.

Google was doing novel image detection, though, and mistakes there are more possible and more serious, as evidenced by the recent NYT article about that guy.
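
To make the distinction concrete, here is a rough sketch (in Swift, with made-up names like KnownImageMatcher) of threshold-gated matching against a list of known-image hashes, as opposed to ML classification of novel images. It's illustrative only: real systems use perceptual hashes such as PhotoDNA or NeuralHash rather than plain cryptographic hashes, and Apple's proposal additionally wrapped the matching in private set intersection.

    import Foundation
    import CryptoKit

    // Illustrative only: threshold-gated matching against a known-image hash
    // list, with a human-review gate before anything is reported. Real systems
    // use perceptual hashes (PhotoDNA, NeuralHash), not SHA-256.
    struct KnownImageMatcher {
        let knownHashes: Set<Data>   // e.g. hashes derived from the NCMEC list
        let reviewThreshold: Int     // matches required before human review

        func fingerprint(_ imageData: Data) -> Data {
            Data(SHA256.hash(data: imageData)) // stand-in for a perceptual hash
        }

        // Nothing is surfaced at all unless the threshold is crossed; anything
        // that is surfaced goes to a human reviewer, not straight to a report.
        func itemsForHumanReview(in library: [Data]) -> [Data] {
            let matches = library.filter { knownHashes.contains(fingerprint($0)) }
            return matches.count >= reviewThreshold ? matches : []
        }
    }

The property being argued here is that a stray false match never surfaces anything to anyone, which is a very different failure mode from a classifier guessing about a never-before-seen photo.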


Because of this I moved my email to my own domain hosted by Proton mail (many providers out there). Can't risk your livelihood on an error on their part or yours. I can just switch my DNS to another provider in minutes if they ban me.


Signal + your own photo backup.




[flagged]


Can you point me to the Google Drive/Photos/etc source code? Android itself might be open source, but the parts that most people care about are not.


And let's not even get started on the LTE radio baseband firmware (Qualcomm etc), which is a totally opaque thing. The radio itself has its own operating system, firmware, it's a magic black box as far as anyone is concerned.


And this differs from Apple hardware how?


The parts that people care about, like the cloud, are not in the user's control. If you want your own cloud file sharing, then there are apps for Nextcloud, and even an open source app store called F-Droid that has the open source apps. Google Drive offers E2E as well, and these announcements are nothing but lip service to something that is already being offered elsewhere, to give Apple cultists something to feel good about.


I'd love a source on Google Drive offering actual E2E, all I've seen is that they encrypt everything but still have access themselves.


https://support.google.com/docs/answer/10519333?hl=en#:~:tex....

Also, Google Drive has an open API to use things like rclone to add third-party encryption. As usual, Apple does not provide a similar API:

https://github.com/rclone/rclone/issues/1778


Developing in the open is not equal to an open ecosystem. Android is as walled as Apple in most aspects; you can't even unload bloatware without breaking stuff.


Sure you can.. Load up GrapheneOS. Let's see you do that with Apple, or install pretty much any non-app store app on your iPhone using a console tool like adb.


[flagged]


iCloud is not e2ee. Not sure why that’s relevant.


They announced today it will be if you opt in.


Oh that’s awesome


Why did they start this PR disaster in the first place? Who was/is supposed to pay the bills for development, operation and storage of this service?

Because they really want to prevent child abuse? Because they want to offer technology to governments like China that can then look for anti-Chinese-government content?


> Because they really want to prevent child abuse?

It's pretty likely that the vast majority of people involved with anti-CSAM efforts are in it for precisely that reason. Apple is just an organization of people, after all. For every free speech absolutist that believes child porn is a necessary evil we must accept in order to preserve our freedom of expression, there are probably a thousand people who are comfortable with scanning hashes if it stands a good chance of catching the assholes making it and spreading it around.


I’m sure the CSAM detection disaster gives Apple some leeway/plausible deniability with the governments who would have liked them to scan for illegal data being uploaded to iCloud before encryption, since they can say “well we tried!”


A related question is why did they wait to cancel it until today?

Announcing end-to-end encryption the same day suggests this program was blocking end-to-end encryption from moving forward, too.


Has no one read the article?

>"Technology that detects CSAM before it is sent from a child’s device can prevent that child from being a victim of sextortion or other sexual abuse, and can help identify children who are currently being exploited,” says Erin Earp, interim vice president of public policy at the anti-sexual violence organization RAINN. “Additionally, because the minor is typically sending newly or recently created images, it is unlikely that such images would be detected by other technology, such as Photo DNA. While the vast majority of online CSAM is created by someone in the victim’s circle of trust, which may not be captured by the type of scanning mentioned, combatting the online sexual abuse and exploitation of children requires technology companies to innovate and create new tools. Scanning for CSAM before the material is sent by a child’s device is one of these such tools and can help limit the scope of the problem.”

I see so many top level comments about how they're stopping on-device scanning of any photo, which is more intrusive. This is not true. It appears the opt-in is limited to receiving alerts and blocking the sending of harmful images, but the implication here is that Apple is moving iCloud scanning of media onto the device.

If anyone is going to vote on this comment or respond, please at least go to your iPhone settings -> Accessibility -> Hearing -> Sound Recognition. Those are on-device ML models working on an audio buffer that is ALWAYS ON. The idea is to do the same for visual media. Any images you take, or potentially anything that even hits the image buffer of your device, could be classified by increasingly powerful ML models.

It doesn't matter if the actual content is then end to end encrypted, because the knowledge of the content is now available to Apple. This is DANGEROUS.
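
To give a sense of how accessible this machinery already is, here is a rough sketch of on-device streaming sound classification using the public SoundAnalysis framework (class name SoundWatcher is made up; error handling and microphone-permission prompts are omitted). I'm not claiming this is how Apple wires up the Accessibility feature internally, only that always-on, on-device classification is an ordinary building block today:

    import AVFoundation
    import SoundAnalysis

    // Sketch: classify audio from the microphone on-device as it streams.
    final class SoundWatcher: NSObject, SNResultsObserving {
        private let engine = AVAudioEngine()
        private var analyzer: SNAudioStreamAnalyzer!

        func start() throws {
            let input = engine.inputNode
            let format = input.outputFormat(forBus: 0)
            analyzer = SNAudioStreamAnalyzer(format: format)

            // Built-in sound classifier shipped with the OS (iOS 15+).
            let request = try SNClassifySoundRequest(classifierIdentifier: .version1)
            try analyzer.add(request, withObserver: self)

            input.installTap(onBus: 0, bufferSize: 8192, format: format) { buffer, when in
                self.analyzer.analyze(buffer, atAudioFramePosition: when.sampleTime)
            }
            try engine.start()
        }

        // Called with classification results as audio flows through the buffer.
        func request(_ request: SNRequest, didProduce result: SNResult) {
            guard let result = result as? SNClassificationResult,
                  let top = result.classifications.first else { return }
            print("\(top.identifier): \(top.confidence)")
        }
    }

Swap the audio buffer for the camera/photo pipeline and the classifier for whatever the vendor ships, and you have exactly the framework described above.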


I think you confuse two different things: The optional nudity detection and the mandatory iCloud CSAM scanning.

They abandoned the latter and that’s what the article is about.

The first one is a child safety feature that scans content and warns the user, if enabled.

The second one scans all of everyone’s content, and if it detects more items than some number, your data is decrypted and you are reported to the police. A deeply disturbing, dystopian feature where your personal devices become the police and snitch on you. Today they say it’s CSAM, but there’s no technical reason not to expand it to anything.


I'm not confusing anything.

iCloud CSAM scanning does not do what you described in your last paragraph; you're describing the proposed on-device CSAM scanning. There are many "product features" with a finite set of technical building blocks. Let's break it down:

----------------------- Encryption, Storage and Transport:

-Presently, this is considered "strong" on-device where Apple claims they cannot access content on your device and it is encrypted.

-Data sent to their servers as part of the automatic (opt in) photo backup service (icloud photos) is considered fair game to scan and they do that routinely.

-This data is encrypted "End to end". What that means is up for debate since it's taken on a commercial brand shape and isn't a technical guarantee of anything.

------------------

Explicit consent for access and storage:

-This is the point. Your device belongs to you. No one should be snooping around there.

-Apple is presently retiring the aforementioned iCloud photo scanning and has simultaneously released a statement that's very vague about what precisely they mean.

-They indicate that scanning on the server is waayyy too late in preventing CSAM and indicate that they think the best way to prevent CSAM is at the point of origin.

-Of course, what that means is : "Scan everything, when something matches, do an action"

-The "Opt-in" here is for the warnings on a child device, but it is an OS LEVEL FEATURE.

-If some media is being scanned, ALL media is potentially being scanned.

-Presently, for audio, one easy way to reduce your bandwidth costs of backing up, scanning on a server etc is to move everything onto the device itself.

-This has been demonstrated by the Accessibility menu feature I have already called out in my parent comment. You have an audio buffer on your iPhone TODAY that is ALWAYS ACTIVE. When the accessibility toggle is turned on, the contents of this buffer are regularly classified against a trained model.

-When a match occurs, the OS responds with the configured response.

THIS IS A DANGEROUS FRAMEWORK. Swap the media type to any generic type and swap what you're looking for from CSAM to a mention of a political phrase such as abortion. You're asking us to TRUST that the company will never be compelled to do that by authoritarian governments or any hostile entity for that matter? No fucking thank you.


That’s not how Apple’s system worked. The matching only happened server side. Your device had no knowledge of whether a photo was CSAM or not.


Nope. In the proposed system the scan was performed by the device; the entity that evaluated the result of the scan is an implementation detail. Technically it was also Apple that would call the police, but that’s also an implementation detail.

It’s just your device scanning your files for content deemed illegal by authorities and snitching on you to the authorities, but with extra steps.


Your device attached an encrypted fingerprint of each photo as you uploaded it. The scanning and matching happened server side.
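
Roughly, the division of labor in the published design looked like this (heavily simplified sketch with made-up names like SafetyVoucher; the real system used NeuralHash plus private set intersection and threshold secret sharing, so the device never learned match results and the server learned nothing until an account crossed the threshold):

    import Foundation
    import CryptoKit

    // Heavily simplified, made-up names. Device: derive a fingerprint and attach
    // it to the upload; no verdict is reached on the device. Server: the only
    // place where set membership is checked and matches are counted.
    struct SafetyVoucher {
        let fingerprint: Data       // stand-in for a blinded NeuralHash
        let encryptedPayload: Data  // only meant to be openable past the threshold
    }

    // Device side: fingerprint goes along with the upload, nothing is decided here.
    func voucher(for photo: Data) -> SafetyVoucher {
        SafetyVoucher(fingerprint: Data(SHA256.hash(data: photo)),
                      encryptedPayload: Data()) // placeholder for the real voucher contents
    }

    // Server side: compare fingerprints against the known-image table and count matches;
    // only accounts crossing the threshold would ever be surfaced for review.
    func matchCount(for vouchers: [SafetyVoucher], against knownFingerprints: Set<Data>) -> Int {
        vouchers.filter { knownFingerprints.contains($0.fingerprint) }.count
    }

The design choice being defended is that the only thing the device ever emits is the fingerprint; the verdict lives server side.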


And how do you think that fingerprint is attached exactly? By scanning your files on your device.

Anyway what is the point of this conversation? Are we going to argue over what scanning means?

I’m sorry but I find this intellectually dampening. Okay, not scanning but analyzing and sending the results of the analysis to Apple where Apple can scan. So?


Because you said this: “personal devices become the police and snitch on you”

Which makes it seem like Apple’s approach is worse than what Google does, where they actually do scan through everyone’s full resolution photos on the server, doing what they please with the results.

My point in making this distinction is that if a company is going to do server-side scanning, Apple’s approach is far more privacy-preserving than a company like Google’s, and that point is being lost.


> all content

It was only content going to iCloud.


Correct, but this means all content if iCloud is enabled; Photos doesn't give you an option to create a folder or album where you can store your photos on-device only.

You also can't have a drop-in replacement alternative cloud provider if you are not OK with being scanned and reported to the authorities for government-disallowed content (there's no technical reason for the reported content to be CSAM only; it could be anything), because alternative apps can't have the same level of iPhone integration as iCloud.


> Correct but this means all content if iCloud is enabled

That's a pretty big distinction between all content and all content on its way to iCloud.

> (There's no technical reason for the reported content being CSAM only, can be anything)

Now we're back to all sorts of assumptions. There's also no technical reason Apple can't scan phones right now.

Apple did say at the time the hash db would be checked against different sources to prevent random content checks. And now with E2E, Apple's proposed method was more secure than what happens in the various clouds today where LEO could show up and ask them to search for anything.

Obviously the most secure is not to check at all, but the law may force the issue at some point.


There are only two important points about CSAM scanning.

The first is that the business model of implementing dystopia is always rolled out as "to save the children". Rest assured once the infrastructure is in place the oppression will have nothing to do with "saving children" and everything to do with oppressing people for wrongthink or "the fifth amendment doesn't apply to IP / digital property" and so on.

The second is that the point of dystopia is to stabilize a society by enforcing classism, for better or worse. The point of this system is to make everyone understand perfectly that the purpose of the system is to ensure that, for example, if the president's son were caught he would be unpunished, in fact the story would be aggressively censored, but if a poor person were falsely accused by some kind of mistake they would go to prison because our legal system is entirely monetary based. It's a loud message of "everyone get on that hamster wheel and run faster because we say so and we need more concentration of money and power at the top". The powers that be think there's too much class mobility, too much social instability, too much equality; they need to lock down to preserve their position at the top, and if they just lock down harder and harder they'll never fall. Historically this strategy has always been implemented because it works in the short term yet usually fails in the long term.


This is conjectural, bordering on paranoid. It could happen, but you present no evidence that it will, or even that the effort to make it so is deliberate and coordinated.



