This CSAM Prevention initiative by Apple is a 180-degree change from their general message around privacy. Imagine investing hundreds of millions of dollars in pro-privacy programs, privacy features, privacy marketing, etc... just to pull this reverse card.
Of course this is going to spark concern within their own ranks. It's like working for a food company that claims to use organic, non-processed, fair-trade ingredients, and then one day deciding to switch to industrial farming and ultra-processed ingredients.
It's a complete change of narrative, and there's no easy way to explain it and still defend Apple's privacy narrative without doing extreme mental gymnastics.
I actually think something else happened, and to be honest I think many at Apple behind this decision are likely pretty surprised by the blowback.
That is, it seems like Apple really wanted to preserve "end-to-end" encryption, but they needed to do something to address the CSAM issue lest governments come down on them hard. Thus, my guess is, at least at the beginning, they saw this as a strong win for privacy. As the project progressed, however, it's like they missed "the forest for the trees" and didn't quite realize that their plan to protect end-to-end encryption doesn't mean that much if one "end" is now forced to run tech that can be abused by governments.
This kind of thing isn't that uncommon with large companies. To me it seems very similar to the Netflix Qwikster debacle of a decade ago. Netflix could very clearly see the writing on the wall, that the DVD-by-mail business needed to morph into a streaming business. But they fumbled the implementation, and what largely came across to the public was "you're raising prices by 40%."
I give Netflix huge props for actually reversing the decision pretty quickly - I find that exceptionally rare among large corporations. Remains to be seen if Apple will do the same.
> That is, it seems like Apple really wanted to preserve "end-to-end" encryption…
Except they still have not mentioned anything about E2E encryption, and they currently don't encrypt iCloud backups.
You would think Apple would get ahead of this story and mention it… or maybe they don't have any E2E plans at all.
iMessages are already E2E encrypted, but you are correct, iCloud backups are decryptable with a warrant (and that was reportedly added at the FBI's request). But I agree with Ben Thompson's point in that blog post, that it's OK to not have strong, unbreakable encryption be the default, and that it's still possible to use an iPhone without iCloud and get full E2E.
But with Apple's CSAM proposal, it is NOT possible to have an iPhone that Apple isn't continuously scanning.
> But I agree with Ben Thompson's point in that blog post, that it's OK to not have strong, unbreakable encryption be the default, and that it's still possible to use an iPhone without iCloud and get full E2E.
I disagree completely on this. For one, users aren't aware that using iCloud means that Apple has your decryption key and can thereby read and share all of your phone's data.
And two, opt-out is a dark pattern. Particularly if you surround it with other hurdles, like long click-wrap and confusing UI, it's no longer a fair choice, but psychological manipulation to have users concede and accept.
Third, as hinted above, smarter criminals, the ones the FBI should actually worry about, will know to opt out. So instead the vast majority of innocent users have their privacy violated in order to help authorities catch "dumb" criminals who could have very well been caught through countless other avenues.
disclaimer: I just read Bruce Schneier's "Click Here To Kill Everybody" after the pipeline ransomware attack, and these points come straight from it.
I agree. If Apple wants to go this route, they should abstain from pushing the user towards using iCloud as they currently do, and instead just present a clear opt-in choice.
"Do you want to enable iCloud photos? Your private photos will be uploaded encrypted to Apple's servers, so that only you can access them. Before uploading, your photos will be locally scanned on your device for any illegal content. Apple will only be notified once a sufficient volume of illegal content is detected."
They moved away from the TimeCapsule model. If they were really committed to privacy there would be a small box you can put in your closet that provides iCloud storage functionality.
> Your private photos will be uploaded encrypted to Apple's servers, so that only you can access them.
This would be a large change from the way it works today, where Apple can view your iCloud photos. They’ve not made any statements indicating that is going to change.
I don't see any reason to implement CSAM scanning on-device rather than on the server, unless the plan is to switch to a model where the server doesn't have access to the data.
Indeed, up until reading these comments I had no idea that iCloud wasn’t encrypted.
Everything about Apple's messaging makes you believe otherwise. That seems pretty disingenuous.
Then again, 99% of consumers have very little choice that doesn't involve hurdles and complicated setup. We all get the same moon goo, under different brands.
> Indeed, up until reading these comments I had no idea that iCloud wasn’t encrypted.
iCloud data is encrypted at rest (edit: except for Mail apparently). The type of encryption (service or end-to-end [E2E]) is specified here: https://support.apple.com/en-us/HT202303
It can be argued that from a user's viewpoint not having E2E encryption is tantamount to not having encryption at all, but from a technical standpoint the data is encrypted.
It's encrypted at rest, but Apple has the decryption keys, and will give up your customer data when asked to by the government[1]. Also, iCloud photos are not encrypted[2].
They are encrypted at rest. This protects from someone hacking in to Apple, getting the data on disk but somehow not getting the keys, which Apple also possesses.
Apple (and thus, LEO) absolutely can look at your photos on iCloud. What you are missing is that "encrypted at rest" is essentially "not encrypted in any meaningful way".
This is more from Schneier's book, but I would say the most important reason E2E encryption should be the default is that in the event of a data breach, nothing would be lost. If a company's servers are hacked, the attackers would have access to the symmetric encryption keys, and therefore all of the data. It also ensures that the company can't be selling/sharing your data, as they don't have access to it in the first place.
Edit: I also meant iCloud backups in my original post and how Apple can decrypt your E2E encrypted iMessages with the key the backups contain. But I posted it last night and couldn't edit it once I caught the error. It would be amazing for other iCloud services to have E2E encryption, so long as the implications of iCloud backups holding your encryption keys are stated front and center when choosing to opt in.
What are the advantages of having unencrypted cloud backups? The only advantage is that authoritarian governments can better control their citizens. Apple is playing on the side of dictators instead of protecting their users.
As someone who frequently has to help older family members recover from lost passwords and broken phones, I sincerely hope they leave strong encryption of backups to be "opt out".
It's fine if you know you want to be the sole person in the world who can get to your data and have the discipline to store master passwords, and you accept Apple support cannot help you even if you have a purchase receipt for the device. But for the average older person that's not a good default.
> help older family members recover from lost passwords and broken phones,
Apple just needs to design good UX to help these people.
Perhaps a large cardboard 'key' comes with every phone. When first setting up the phone, you scan a QR code on it to encrypt your data with the key, and then you tell the owner that if they ever lose the key to their phone, they won't be able to get in again.
People understand that - it's like losing the only key to a safe.
From time to time you require them to rescan the key to verify they haven't lost it, and if they have, let them generate a new one.
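For what it's worth, the mechanics of such a recovery-key scheme are simple. Here is a minimal sketch in Python (assuming the third-party `cryptography` package; the key names and the `"backup-dek-v1"` label are made up for illustration, not anything Apple actually ships):

```python
import os, base64
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

# A random data-encryption key (DEK) is what actually encrypts the backup.
dek = AESGCM.generate_key(bit_length=256)
# A second, independent recovery key is what the user keeps on paper
# (or on an NFC smartcard, as suggested below).
recovery_key = AESGCM.generate_key(bit_length=256)

# Wrap the DEK under the recovery key. The wrapped blob can live on the
# server, which never sees either key in the clear.
nonce = os.urandom(12)
wrapped_dek = AESGCM(recovery_key).encrypt(nonce, dek, b"backup-dek-v1")

# This string is what would be rendered as the QR code on the cardboard key.
qr_payload = base64.b32encode(recovery_key).decode()

# Recovery path: scan the QR code, unwrap the DEK, then decrypt the backup.
scanned = base64.b32decode(qr_payload)
assert AESGCM(scanned).decrypt(nonce, wrapped_dek, b"backup-dek-v1") == dek
```

Rescanning the key periodically is then just proving the user can still produce `qr_payload`; losing it really does mean the data is gone, which is the whole point.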
Well if the alternative is the current situation where Apple has access anyway, a key that Apple may know is still a better idea.
For even better security they can instead provide an NFC smartcard or hybrid NFC + USB (like a Yubikey) where the key is stored in hardware and can't be extracted.
Far easier to develop a culture of using something physical like YubiKeys than flimsy cardboard QR codes. The key metaphor makes a lot more sense that way, too.
So you are suggesting making everyone's life more cumbersome, with a very real risk of losing all the data on your phone because you lost a piece of cardboard or forgot a key, for which benefit exactly? Being hypothetically safe from government scrutiny over information they could get a dozen other ways?
I really like E2E encryption of messages with proper backups on my end. It's good to know the carrier of my messages can't scan them for advertising purposes. But for my phone, knowing there is no easy way to get the data back if something goes wrong seems like a major hassle with very little upside.
I dunno, when I read the choices in iTunes about backups, I didn't feel particularly manipulated. It seemed straightforward. The trade-offs were clear to me.
I guess some people just see dark patterns where I don’t.
The least of the biases at play here is change-aversion bias.
Making things "opt-out", without even making opting out hard, dramatically increases the number of people who end up opted in. That can be exploited for many purposes.
For example, the UK switched its organ donation laws from opt-in to opt-out.
Another example: my government has decided that selling its citizens' personal information to garages, insurance companies, etc. when we buy cars should also be opt-out.
So they freely sell the car make, acquisition date, buyer's name,… [1][2]
Unless it has changed with GDPR. I’ve never bought a car.
While we're on the topic of GDPR, that is exactly why it insists on making cookies, tracking, and the "sharing of information with partners" opt-in.
And that is without hiding what you’re doing, making it unclear or confusing, or requiring multiple actions to change it.
Because you only have to express it for it to be respected.
Which is the bare minimum if you want others to be able to even consider respecting it.
The default choice being, in the absence of any information, the one that increases chances of survival of actually living humans doesn’t seem misguided to me.
——
If you want compulsory and overreaching (although not necessarily bad; I've got no opinion on that aspect) rules, over here we:
- can’t disinherit a descendant. The state decides the minimum each one of your genetic or legal relatives can get from your assets
- have to financially support our adult children until an arbitrary time decided by the law
- have to support and assist our parents and grandparents if they can't provide for themselves
- basically can be legally compelled to financially assist any ascendant or descendant
- may end up having to pay up after a divorce, even if you had an iron-clad contract, to compensate for the loss of QoL of the spouse with a lesser income
Because you only have to express it for it to be respected.
... and if you do not know that an opt-out is required to not have your organs removed, then how can it be said that you have willingly contributed them? And if your organs are removed by default, without you actually willing that this be the case, then how can this practice be called "donation"?
So instead, shall we call it "organ retrieval", "organ harvesting" and so on? All rather ugly words. But when the words that accurately describe what you are doing appear ugly, instead of looking for words which inaccurately describe what you are doing, perhaps you should ask yourself if you should be doing what you are doing.
so, basically your only problem is that the possibility of opt-out may not be adequately communicated? and, i think everyone would agree that changing the law but not telling anyone would be bad and wrong. but, spoiler alert, that didn't happen. and if we found someone who could not or did not understand, the default would be to not use their organs.
They're dead, so it's often a bit tricky to find out. What you could do is have some way of recording that people have been told about the opt-out, then they could carry a card around with them that says "I have been informed of the opt-out and choose not to, please take my organs". Only take the organs if they are carrying such a card.
And I'd be interested to know of the percentage of UK adults who are aware of the opt-out. I had a search around but couldn't find any surveys, are you aware of any?
good question. every household got a big blue envelope with large print writing in it saying something like 'important information about organ donation changes' (or similar, can't find my copy now) and a pamphlet inside. like i said, this was sent to every household, so it does miss the homeless, illiterate and non english speakers (although the pamphlet had instructions in other languages explaining how to access more information) and there were press, radio, tv, internet and outdoor adverts, as i recall. i mean, only covid-19 health information dissemination has been more extensive recently. but, you're right, would be interesting to know how effective the campaign was, and im sure they checked, since it cost a fair bit of public money, so the information should be available somewhere online...
There’s a reason why they strongly encourage people to make their preference clearly known to family & friends.
Because most often, when there's even a sliver of doubt about the deceased's wishes, the corpse will keep all its organs to the grave / incinerator.
> But with Apple's CSAM proposal, it is NOT possible to have an iPhone that Apple isn't continuously scanning.
As currently implemented, iOS will only scan photos to be uploaded to iCloud Photos. If iCloud Photos is not enabled, then Apple isn't scanning the phone.
I think the issue is that, as Ben Thompson pointed out, the only thing preventing them from scanning other stuff now is just policy, not capability, and now that Pandora's box is open it's going to be much more difficult to resist when a government comes to them and says "we want you to scan messages for subversive content".
Given that they obviously already have the capability to send software updates that add new on-device capability, this seems like a meaningless distinction. It’s already “just policy” preventing them from sending any conceivable software update to their phones.
I disagree, strongly. Let's say you're authoritarian government EvilGov. Before this announcement, if you went to Apple and said "we want you to push this spyware to your iPhones", Apple would and could have easily pushed back both in the court of public opinion and the court of law.
Now though, Apple is already saying "We'll take this database of illegal image hashes provided by the government and use it to scan your phone." It's now quite trivial for a government to say "We don't have a special database of just CSAM, and a different database of just Winnie the Pooh memes. We just have one big DB of 'illegal images', and that's what we want you to use to scan the phones."
Could they though? An authoritarian government could just as easily say "if you do not implement this feature, we will not allow you to operate in this country" and refuse to entertain legal challenges.
Authoritarian countries are the least of the worries. There's a large class of illegal imagery that is policed in democratic countries: copyrighted media. We are only two steps away from having your phone rat you to the MPAA because you downloaded a movie and stored it on your phone.
I can guarantee that some industry bigwigs are salivating at the prospect of this tech. Imagine YouTube copyright strikes, but local to your device.
This would have been a huge fear of mine if this was 10 or 15 years ago and legitimate publishers were still engaging in copyright litigation for individual noncommercial pirates. The RIAA stopped doing individual lawsuits because it was surprisingly expensive to ruin people's lives that way over a handful of songs. This largely also applies to the MPAA as well. The only people still doing this in bulk are copyright trolling outfits - companies like Prenda Law and Malibu Media. They make fake porn, covertly share it and lie to the judge about it, sue you, and then offer a quick and easy settlement route.
(Yes, I know, it sounds like I'm splitting hairs distinguishing between copyright maximalism and copyright trolling. Please, humor me.)
What copyright maximalists really want is an end to the legal protections for online platforms (DMCA 512 safe-harbor) so they can go after notorious markets directly. They already partially accomplished this in the EU. It's way more efficient from their end to have one throat to choke. Once the maximalists get what they want, platforms that want to be legitimate would be legally compelled to implement filters, ala YouTube Content ID. But those filters would apply to content that's published on the platform, not just things on your device. Remember: they're not interested in your local storage. They want to keep their movies and music off of YouTube, Vimeo, Twitter, and Facebook.
Furthermore, they largely already have the scanning regime they want. Public video sites make absolutely no sense to E2E encrypt; and most sites that deal in video already have a content fingerprinting solution. It turns out it was already easy enough to withhold content licenses to platforms that accept user uploads, unless they agreed to also start scanning those uploads.
(Side note: I genuinely think that the entire concept of safe harbors for online platforms is going to go away, just as soon as the proposals to do so stop being transparently partisan end-runs around the 1st Amendment. Pre-Internet, the idea that an entity could broadcast speech without having been considered to be the publisher of it didn't really exist. Publishers were publishers and that was that.)
One reason I haven't used iTunes in years, and don't use iOS, is that I have a huge collection of MP3s. It's almost all music that I bought on CD and ripped over the decades, and it's become just stupidly difficult to organize your own MP3 collection via Apple's players. Even back in the iPod days, you could put your MP3s on an iPod, but you needed piracy-ware to rip them back off. I can easily see this leading to people like me getting piracy notices and having to explain that no, I bought this music, since we're now in a world where not that many people have their own unlocked music collection anymore.
The only authoritarian country that holds any sway over Apple is China and China can easily just ban iPhones if they feel like it, especially with Xi Jinping in charge.
Democratic countries are much more of a danger because Apple has offices in them and cannot easily pull out of the market if they feel the laws are unconscionable.
All these years I have heard of people getting in trouble for sharing copyrighted materials with others, but never for possessing or downloading them (BitTorrent somewhat blurs the line here). And there's nothing illegal about ripping your own CDs. I find this scenario far-fetched. iTunes Match would seem to be a long-running honeypot if you take this seriously.
Should this fact be reassuring or even more frightening?
Because saying "I can't" is something to push back against governments with. But it's a societal issue if a corporation can say "I don't want to" to a government.
It's really not the same thing, and Apple just burned their "I can't" card and are implying they can simply say "no" to governments. Which is an even more dystopian thing.
Of course they won't. The idea of a company refusing to comply with the laws of the countries they operate in purely out of some grand sense of principle just seems naive. Especially if it's the country where HQ is.
Yeah I'm pretty sure that this wasn't an easy decision for the Chinese government. Even in dictatorships you can't afford to piss off the population too much.
Google is pretty big too, but they implemented the filtering in China as requested. Then when they decided they didn't want to do that anymore, there went their site.
I meant in the context of U.S. governments. What Democrat administration wants to ban Apple? What Republican administration wants to ban Apple?
Sounds like political suicide on either side: government interference with the country's largest company, an iconic brand, for no reason other than to hopefully spy more on your citizens.
Analogy: the prospect of “vaccine passports” for jurisdictional movement and access is getting serious consideration at all levels of U.S. governance. Not long ago, anything so resembling “papiere, bitte” commonly inspired a sense of violent resistance against totalitarianism. Just takes one engaging “but this is different” to reframe the worst of humanity to popular compulsion.
> Apple is already saying "We'll take this database of illegal image hashes provided by the government and use it to scan your phone."
This is incorrect.
Apple has been saying—since 2019![0]—that they can scan for any potentially illegal content, not just images, not just CSAM, and not even strictly illegal.
That's what should be opposed. CSAM is a red herring.
The difference is that's scanning in the cloud, on their servers, of things people choose to upload. It's not scanning on the user's private device, and currently they have no way to scan what's on a private device.
The phrase is “pre-screening” of uploaded content, which is what is happening. I'm pretty sure this change in ToS was made to enable this CSAM feature.
Why do you think essentially no one is complaining about using ML to understand the content of photos, then (especially in comparison to this rather targeted CSAM feature)? My impression is that both Apple and Google have already been doing that since what? 2016? Earlier? There's been no need for a database of photos, either company could silently update those algorithms to ping on guns, drugs, Winnie the Pooh memes, etc.
Why do you think essentially no one is complaining about using ML to understand the content of photos…
At least in Apple's case, that ML is used only when a minor child (less than 13 years old) is on a Family account where the parent/guardian has opted in to being alerted if potentially bad content is either sent or received using the Messages app.
Google and Apple both use ML to understand the content of photos already. Go into your app and you can search for things like "dog", "beer", "birdhouse", and "revolver". Given that it can do all that (and already knows a difference between type of guns), it doesn't seem like a stretch to think it could, if Apple or Google wanted, understand "cocaine", "Winnie the Pooh memes", or whatever repressive-government style worries are had. And it's existed for years, without slippery sloping into anything other than this Messages feature for children.
Bingo. All Apple has done is add another category for “nude child” with a simple notification when the AI finds one. The tech has arrived; consequences happen.
There is so much disinformation about this with people not being informed. Apple have put out clear documentation, it would be a good idea to read it before fear-mongering.
1) The list of CSAM hashes is that provided by the relevant US government agency, that is it.
2) The project is not targeted to roll out anywhere other than the USA.
3) The hashes are baked into the OS, there is no capability to update them other than on signed and delivered Apple OS updates.
4) Apple has exactly the same leverage with EvilGov as at any other time. They can refuse to do as asked, and risk being ejected from the country. Their stated claim is that this is what will happen. We will see.
Regarding 3: it’s very easy to make a mistake in the protocol that would allow apple to detect hashes outside the CSAM list. Without knowing exactly how their protocol works it’s difficult to know whether it is correct.
For example here is a broken PSI protocol in terms of point 3. I don’t think normally in PSI this is considered broken because the server knows the value so it is part of its private set.
Server computes M_s = g . H(m) . S_s
where g is a generator of an elliptic curve, H(m) is the neural hash of the image and S_s is the server blinding secret.
The client computes M_sc = M_s . S_c where S_c is the client ephemeral secret. This M_sc value is the shared key.
The client also computes M_c = g . H(m) . S_c
and sends the M_c value to the server.
The server can now compute M_cs = M_c . S_s = M_sc since they both used the same H(m) values. This allows the server and client to share a key based on the shared image.
However, what happens if the client does its step using the 'wrong' image? If 3) is to hold, it should not be possible for the server to compute the key.
Client computes:
M_sc = M_s . S_c
M_c = g. H(m’) . S_c
The client's final key share is: M_sc = g . H(m) . S_c . S_s
Now server computes: M_cs = M_c . S_s = g . H(m’) . S_c . S_s
The secret shares don’t match. But if the server knows H(m’) it can compute:
M_cs’ = M_cs . inv(H(m’)) . H(m)
and this secret share will match
Normally this client side list in PSI is just used to speed up the protocol so the server does not have to do a crypto operation for every element in its set. It is not a pre-commitment from the server.
Also, maybe the way I’m doing it here is just normally broken because it is not robust against low entropy inputs to the hash function.
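To make the concern concrete, here is a toy model of that algebra in Python. It stands in for the elliptic curve with plain multiplication modulo a prime, which preserves the algebra (though obviously not the security); the names follow the comment above, and this is an illustration of the argument, not Apple's actual protocol:

```python
import hashlib, secrets

q = 2**61 - 1                      # prime modulus standing in for the group order
g = 5                              # "generator" stand-in

def H(image: bytes) -> int:
    # stand-in for the neural hash, reduced into the group (never zero)
    return int.from_bytes(hashlib.sha256(image).digest(), "big") % q or 1

S_s = secrets.randbelow(q - 1) + 1     # server blinding secret
S_c = secrets.randbelow(q - 1) + 1     # client ephemeral secret

m_server = b"image in the server's hash list"     # m
m_client = b"the client's 'wrong' image"          # m'

M_s  = g * H(m_server) * S_s % q       # published by the server
M_sc = M_s * S_c % q                   # client's key share
M_c  = g * H(m_client) * S_c % q       # value the client sends back

# Honest recombination fails, since the two hashes differ...
assert M_c * S_s % q != M_sc

# ...but a server that also knows H(m') can rescale its result and recover the
# client's key, i.e. test the photo against a hash never baked into the list.
fixup = pow(H(m_client), -1, q) * H(m_server) % q
assert (M_c * S_s % q) * fixup % q == M_sc
```

Which is exactly the point: nothing in this toy exchange commits the server to a fixed set of hashes.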
I've also reverse-engineered some of Apple's non-public crypto used in some of its services, and they have made dubious design decisions in the past that created weird weaknesses. Without knowing exactly what they are doing, I would not try to infer properties that might not exist or trust their implementation.
Authoritarian governments have never needed preexisting technical gizmos in order to abuse their citizens. Look at what Russia and the Nazis did with no magic tech at all. Privacy stems from valuing human rights, not easily disproven lies about it being difficult to actually implement the spying.
True. You can have an authoritarian regime with little technology.
But what people are afraid of is how little it takes when the technology is there. And how Western governments seem to morph towards that. And how some bastions of freedom are falling.
Even a Jew stuck in a ghetto in Nazi Germany enjoyed significantly better privacy at home from both the state and capital than a wealthy citizen in a typical western liberal nation today.
Soviet Russia and Nazi Germany in fact prove the exact opposite of what point you think you're making. They were foremost in using the "magic tech" of their day - telecommunications and motorised vehicles.
Even then, they had several rebellions and uprisings that were able to develop thanks to lacking the degree of surveillance tech found today.
On the contrary, the reason Stalin and Hitler could enact repressive measures unprecedented in human history was precisely new technological developments like the telephone, the battle tank, and IBM cards. Bad people will never value human rights; getting privacy from bad people requires you to make it actually difficult to implement the spying, rather than just lie about it. That's why the slogan says, "cypherpunks write code."
No, they could enact repressive measures because they had almost absolute power. The form that oppression took would obviously rely on the then current technology, but the fundamental enabler of oppression was power, not technology.
Power which was extended through the use of said technology. I'm fine with focusing on regressive hierarchical systems as the fundamental root of oppression but pretending like the information and mechanical technology introduced in the late 19th century to early 20th century did not massively increase the reach of the state is just being ridiculous.
Prior to motorised vehicles and the telegraph for example, borders were almost impossible to enforce on the general population without significant leakage.
Ironically in spite of powered flight, freedom of movement is in a sense significantly less than even 200 years ago when it comes to movement between countries that do not already have diplomatic ties to one another allowing for state authorized travel.
Incidentally, this is part of why many punishments in the medieval age were so extreme - the actual ability of the state to enforce the law was so pathetically limited by today's standards that they needed a strong element of fear to attempt to control the population.
There's a flipside to this, too: technology doesn't just improve the ability of the state to enforce law (or social mores), it also improves the ability of its subjects to violate it with impunity.
Borders in medieval Europe weren't nearly as necessary because the physical ability to travel was limited. Forget traveling from Germany to France - just going to the next town over was a life-threatening affair without assistance and wealth. Legally speaking, most peasant farmers were property of their lord's manor. But practically speaking, leaving the manor would be quite difficult on your own.
Many churches railed against motor vehicles because the extra freedom of movement they made possible also broke sexual mores - someone might use that car to engage in prostitution! You see similar arguments made today about birth control or abortion.
Prior to the Internet, American mass communication was effectively censored by the government under a series of legally odd excuses about public interest in efficiently-allocated spectrum. In other words, this was a technical limitation of radio, weaponized to make an end-run around the 1st Amendment. Getting rid of that technical limitation increased freedom. Even today, getting banned from Facebook for spouting too much right-wing nonsense isn't as censorious as, say, the FCC fining you millions of dollars for an accidental swear word.
Whether or not a technology actually winds up increasing or reducing freedom depends more on how it's distributed than on just how much of it there is. Technology doesn't care one way or the other about your freedom. However, freedom is intimately tied to another thing: equity. Systems that economically franchise their subjects, have low inequality, and keep their hierarchies in check, will see the "technology dividend" of increased freedom go to their subjects. Systems that impoverish people, have high inequality, and let their hierarchies grow without bound, will instead squander that dividend for themselves.
This is a feedback effect, too. Most technology is invented in systems with freedom and equality, and then that technology goes on to reinforce those freedoms. Unequal systems squander their subjects' ability to invent new technologies. We didn't see this with Nazi Germany, because the Allies wiped them off the map too quickly; but the Soviet Union lost their technological edge over time. The political hierarchy they had established to replace the prior capitalist one made technological innovation impractical. So, the more you use technology to restrict, censor, or oppress people, the more likely that your country falls behind economically and stagnates. The elites at the top of any hierarchical system - aside from the harshest dictatorships - absolutely do not want that happening.
A lot of excellent points, though it's completely inaccurate to frame the fall of the Soviet Union's scientific progress as due to their hierarchy replacing the "prior capitalist one", when the former led to the mass industrialisation and rapid economic growth that made the USSR the first nation to conquer space flight, whereas under the latter the USSR was an agrarian society suffering from food shortages and mass poverty. The decline of Soviet science and economy had more to do with NATO and American restrictions on trade and movement than with something self-imposed by the Soviets.
Capitalism is completely compatible with hierarchies and censorship - indeed one could argue that capitalism is completely incompatible with a true flattening of hierarchies, since it rests on an owner class holding a monopoly over the means of production. The majority of dictatorships around the world use capitalism as the basis of their economy. Following the dissolution of the USSR, Russia continues to be authoritarian, and arguably more so than the USSR was past the Stalin era.
Aside from that, I think it's a little premature to frame the internet's ultimate effect on the world as reducing the ability of the government to censor the population when it's only been around for less than 30 years and we are already seeing mass adoption and development of both censorship and surveillance tech that goes beyond even the wildest dreams of 20th century era dictators.
I don't really buy the argument that most technology is invented in systems with freedom and equality either. It just sounds more like something we want to believe than something borne out by data. The internet and rocket ships were the product of the military - an institution that has more to do with using force to enact the will of its host nation on others and limiting their freedoms than with preserving the freedoms of its own, especially for superpowers like the USA and the CCP, which are effectively immune to conquest by military force.
This is in fact the same for Silicon Valley and most private industry; you only think all this tech is the product of your freedom and equality because all of the actual extreme inequality and lack of freedom is kept compartmentalised to the global south through long and convoluted supply chains. It's not really freedom if only 10% of the actual people involved in sustaining an economy have any semblance of it - leaving aside the observation that even for the 10% that represents the average US citizen, their actual democratic agency in the state or in their job is vanishingly low.
While there's some good food for thought in here, I want to quibble with a couple of things.
First, the Soviet Union didn't have a prior capitalist hierarchy; the whole reason for Leninism as such was that Russia was a feudal, agrarian society, which in Marx's theory had not progressed to the stage of capitalism and therefore could not progress to communism.
Second, food shortages and mass poverty did not end with the establishment of the Soviet Union; in fact, they became enormously worse. Even before being invaded by Germany in WWII, the Soviet system caused the Holodomor, a famine unprecedented in the history of the Ukraine.
Third, the internet has been around for 52 years, not less than 30. I've personally been using it for 29 years, and I can tell you that it already had a long history when I started using it. One of the founders of Y Combinator first became well-known as a result of breaking significant parts of the internet 33 years ago, an event which resulted in a criminal trial.
Fourth, the internet was the product of universities, although the universities were funded by ARPA. Rocket ships have a long evolution including not only militaries but also recreational fireworks, Tipu Sultan of Mysore, Tsiolkovsky, Goddard, the peaceful space agencies, H. G. Wells, and possibly Lagâri Hasan Çelebi.
Fifth, I don't think the argument is that the technology is necessarily produced by systems with freedom and equality, but that it is invented by them. This is somewhat dubious, but not as open-and-shut wrong as your misunderstanding of it. Goddard's rockets were put into mass production as the V2 in Nazi Germany using slave labor, but he invented them at WPI and Princeton. Tsiolkovsky lived in the Czar's Russia and then the USSR, and his daughter was arrested by the Czar's secret police, but he himself seems to have had considerable freedom to experiment with dirigibles and publish his research (but no slaves to build rockets for him), and indeed he was elected to the Socialist Academy.
I think we can make an excellent case that certain kinds of intellectual repression, whether grassroots or hierarchical, fall very heavily on the kinds of people who tend to invent things. William Kamkwamba's family thought he was insane, Galileo spent the last years of his life under house arrest, Newton was doing alchemical research that could have gotten him burned at the stake in Spain, Qin Shi Huang buried the Mohists alive, Giordano Bruno was in fact burned at the stake, and the repression of Lysenko's intellectual opposition was a major reason for the USSR's and PRC's economic troubles in the 01950s and 01960s.
Living in the so-called "global south" (a term which I have come to regard as useless for understanding the world system, if not actively counterproductive) I have to tell you that there's very little production of advanced technology going on here. But I live in Argentina, and the situation is different in different countries; Indonesia, Thailand, Vietnam, etc., have all done significant technical production for the world economy while in the grip of dictatorships, even though Argentina never has. But most countries don't. If tantalum cost ten times as much, we'd still have cellphones, and you probably wouldn't even be able to detect the price difference.
There's a difference -- often a big one -- between being theoretically able to build some feature into a phone (plus the server-side infra and staffing to support it), and actually having built it. That's the capability angle.
However, the difference between having a feature enabled based on the value of a seemingly-unrelated user-controlled setting, versus having that feature enabled all the time... is basically zero. Additionally, extending this feature to encompass other kinds of content is a matter of data entry, not engineering. That's the policy angle.
When you don't yet have a capability, it might take a lot of work and commitment to develop it. Policy, by contrast, can be changed with the flip of a switch.
Yes, of course that is true. I use iCloud Photos and find this terribly creepy. If Apple must scan my photos, I'd rather they do it on their servers.
I could maybe understand the new implementation if Apple had announced they'd also be enabling E2E encryption of everything in iCloud, and explained it as "this is the only way we can prevent CSAM from being stored on our servers."
> "this is the only way we can prevent CSAM from being stored on our servers."
Why is that supposed to be their responsibility?
Given the scale of their business, there is effectively a certainty that people are transmitting CSAM via Comcast's network. That's no excuse for them to ban end to end encryption or start pushing out code to scan client devices.
When you hail a taxi, they don't search you for drugs before letting you inside.
Because they're not the police. It isn't their role to enforce the law.
They (and Google, Microsoft, Facebook, etc.) are essentially mandated reporters; if CSAM is on their servers, they're required to report it.
It's like how a doctor is a mandated reporter regarding physical abuse and other issues.
Because they're not the police. It isn't their role to enforce the law.
They're not enforcing the law; if the CSAM reaches a certain threshold, they check it out and if it's the real deal, they report to National Center for Missing and Exploited Children (NCMEC); they get law enforcement involved if necessary.
> They (and Google, Microsoft, Facebook, etc.) are essentially mandated reporters; if CSAM is on their servers, they're required to report it.
I don't think that's correct. My understanding is that if they find CSAM, they're obligated to report it (just like anyone is). I don't believe they are legally obligated to proactively look for it. (It would be a PR nightmare for them to have an unchecked CSAM problem on their services, so they do look for it and report it.)
Consider that Apple likely does have CSAM on their servers, because they apparently don't scan iCloud backups right now. I don't believe they're breaking any laws, at least until and unless they find any of it and (hypothetically) don't report it.
Last year Congress mulled over a bill (EARN IT Act) that would explicitly require online services to proactively search for CSAM, by allowing services to be sued by governments for failing to do so. It would also allow services to be sued for failing to provide law-enforcement back doors to encrypted data. There's also SESTA/FOSTA, already in law, that rescinded CDA 230 protection for cases involving sex trafficking.
Quite honestly, my opinion is that any service not scanning for CSAM is living on borrowed time.
Not necessarily. The scanning implementation can be similar to what they plan on doing on your device. I don't want them to do the scanning on my phone. If shit hits the fan and I become a dissident, I would prefer to have the option to stop using iCloud, Dropbox or any other service that might enable my government or the secret police to suppress me.
One can encrypt a picture before uploading it to the aforementioned services, and as such nothing would really happen. That’s not a possibility on pictures taken on iPhones that get uploaded to iCloud.
>As currently implemented, iOS will only scan photos to be uploaded to iCloud Photos. If iCloud Photos is not enabled, then Apple isn't scanning the phone.
Except for the "oops, due to an unexpected bug in our code, every image, document, and message on your device was being continuously scanned" mea culpa we will see a few months after this goes live.
After they introduced AirTags I went and disabled item detection on all my devices. Yesterday I went into settings and guess what, that feature is enabled again. On all my devices! I'm not sure when that happened, but my guess would be that the latest iOS update caused it…
My point is, you will only have things working to the extent you have testing for them, and test coverage is likely less robust for less popular features, or where perception tells developers that "no one is going to turn that mega-feature off".
Except for the "oops, due to an unexpected bug in our code, every image, document, and message on your device was being continuously scanned" mea culpa we will see a few months after this goes live.
Only images that match the hashes of the database of CSAM held by the National Center for Missing and Exploited Children (NCMEC) that are uploaded to iCloud Photos are checked.
Based on their technical documents, it's not even possible for anything else to be checked; even if they could, they learn nothing from documents that aren't CSAM.
>>> Only images that match the hashes of the database of CSAM held by the National Center for Missing and Exploited Children (NCMEC) that are uploaded to iCloud Photos are checked.
Incorrect. All files on the phone will be checked against a hash database before being uploaded to iCloud. Any time before, which means all the time, if you have iCloud enabled.
All files on the phone will be checked against a hash database before being uploaded to iCloud.
A cryptographic safety voucher is created for each photo as it's uploaded. The iPhone doesn't know whether or not anything matched. Nothing happens unless the user reaches a threshold of 30 CSAM images that have been uploaded to iCloud Photos.
The phone also doesn't know whether something is about to be uploaded or not. Therefore, "all files on the phone will be checked" is the relevant portion of that sentence.
Nothing limits Apple to shipping just the hashes provided by NCMEC. And people who worked with NCMEC's database say it contains documents that aren't CSAM.
The potential for this is overblown and I think a lot of people haven’t taken the time to understand how the system is set up. iOS is not scanning for hash matches and phoning home every time it sees one, it’s attaching a cryptographic voucher to every photo uploaded to iCloud that, if some number of photos represent hash matches (the fact of the match is not revealed until the threshold number is reached), then allow Apple to decrypt the photos that match. This is not something where “oops bug got non-iCloud photos too” or “court told us to flip the switch to scan the other photos” would make any sense, some significant modification would be required. Which I agree is a risk especially with governments like China/EU that like to set conditions for market access, just not a very immediate one.
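For anyone wondering how a threshold like that can be enforced by cryptography rather than by policy, here is a toy sketch of the general idea in Python: plain Shamir secret sharing, where each uploaded photo's voucher carries one share of a key that only becomes reconstructable once 30 matches exist. The names and parameters are made up for illustration, and this is the concept only, not Apple's actual construction (which reportedly adds synthetic vouchers and an inner/outer encryption layer on top):

```python
import secrets

P = 2**127 - 1        # prime field for the toy
T = 30                # threshold of matching vouchers before anything opens

def make_shares(secret: int, t: int, n: int):
    """Split `secret` into n shares; any t of them reconstruct it."""
    coeffs = [secret] + [secrets.randbelow(P) for _ in range(t - 1)]
    def f(x):  # evaluate the random degree-(t-1) polynomial at x
        return sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P
    return [(x, f(x)) for x in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x = 0 recovers the secret."""
    total = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * -xj % P
                den = den * (xi - xj) % P
        total = (total + yi * num * pow(den, -1, P)) % P
    return total

outer_key = secrets.randbelow(P)             # stand-in for the account's key
vouchers = make_shares(outer_key, T, 100)    # one share rides along per photo

assert reconstruct(vouchers[:T]) == outer_key        # 30 matches: recoverable
assert reconstruct(vouchers[:T - 1]) != outer_key    # 29 matches: still opaque
```

Note that the threshold is just a parameter here; the debate above is largely about how hard it would be to change it in the real system.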
I hear you, 100%, but as a longtime Apple and iCloud user I have been through the absolute wringer when it comes to iCloud and have little faith in it operating correctly.
1. About 3 years ago, there were empty files taking up storage on my iCloud account, and they weren't visible on the website so I couldn't delete them. All it showed was that an excessive amount of storage was taken up. Apple advisors had no idea what was going on and my entire account had to be sent to iCloud engineers to resolve the issue. They never followed up, but it took months for these random ghost files to stop taking up my iCloud storage.
2. Sometimes flipping them on and off a lot causes delays in the OS, and then you have to kill the Settings app and reopen it to see if they're actually enabled.
Back when ATT was just being implemented, I noticed it was grayed out on my phone every time I was signed in to iCloud, but was completely operational when I was signed out. Many others had this issue and there was literally no setting to change within iCloud; it was a bug that engineering literally acknowledged to me in an email (they acknowledged that it happened for some users and fixed it in a software update).
Screwups happen in an increasingly complex OS and I just feel that there will be a day when this type of bug surfaces in addition to everything that already happens to us end users.
> it’s attaching a cryptographic voucher to every photo uploaded to iCloud that, if some number of photos represent hash matches
OK, but what if Apple silently pushes out an update (or has existing code that gets activated) that "accidentally" sets that number to zero? Or otherwise targets a cryptographic weakness that they know about because they engineered it? That wouldn't require "significant modification".
Fundamentally, it's closed software and hardware. You can't and shouldn't trust it. Even if they did make some "significant modification", how are you going to notice or prove it?
> it’s attaching a cryptographic voucher to every photo uploaded to iCloud that, if some number of photos represent hash matches
I see this number very quickly getting set to '1', because the spin in the opposite direction is "What, so you're saying, people get X freebies of CSAM that they can store in iCloud that Apple will never tell anybody about?"
Communication safety in Messages is a different feature than CSAM detection in iCloud Photos. The two features are not the same and do not use the same technology. In addition:
Communication safety in Messages is only available for accounts set up as families in iCloud. Parent/guardian accounts must opt in to turn on the feature for their family group. Parental notifications can only be enabled by parents/guardians for child accounts age 12 or younger.
Yeah, for now this is what is stated. It should be alarming how much they pivoted and changed without any background statement. Something more is at play.
That's the thing: with an unauditable black box like iOS, you have to trust the vendor that it actually does what they say.
If they suddenly start to break your trust, who knows what they will push to that black box next time? You have no way to tell or prevent that, other than leaving the whole platform altogether.
>iMessages are already E2E encrypted, but you are correct, iCloud backups are decryptable with a warrant (and that was reportedly added at the FBI's request).
Not to be nit-picky but IIRC, this isn't exactly how it went down. iCloud backups have always been decryptable; Apple had announced they were planning on changing this and making them fully encrypted, but when pressured by the FBI, they scrapped those plans and left them the way they are.
iMessage isn't true E2E encryption. Apple could easily decode messages for a third party if they wanted to. This has been widely discussed on HN previously. All they have to do is insert an additional private key into the two-party chat, as the connection is ultimately handled centrally.
There's a very good reason Signal (and previously WhatsApp) were recommended over iMessage.
Not to mention the inability to swap iMessage off for SMS which has made it a favourite exploit target for hacking firms like NSO.
iPhones are already continuously scanning your photos. Ever notice how it recognizes faces and organizes your photos by them?
You can’t turn that off!!
There’s no way to tell the iPhone, “stop with the AI recognition on my photos.”
Since Apple has decided it’s going to categorize all your photos based on AI, it was just a matter of time before they started categorizing some bad stuff.
iCloud Backup is a backdoor in the iMessage e2e, and Apple can decrypt all the iMessages. No warrant is required for Apple to do so.
Even with iCloud off, the other endpoint (the phones you're iMessaging with) will be leaking your conversations to Apple because they will still have iCloud Backup defaulted to on.
iMessage is no longer an e2e messenger, and Apple intentionally preserves this backdoor for the FBI and IC.
They access 30,000+ users' data per year without warrants.
CSAM scanning is probably the easiest thing they could do to satisfy this demand, and if you don't use iCloud Photos, you're not affected by it at all.
As far as encrypted backups go, it's an open question whether they want to deal with the legal and support headaches that such a change would bring. If they continued to do nothing, Congress might force their hand by legislatively outlawing stronger encryption - they had to shit or get off the pot.
For users, if you enable this feature, but then lose your password, you are entirely screwed and Apple can't help you. Encrypting "In-transit" as a middle ground is likely good enough for most people, until researchers manage to come up with a better solution.
First, while the "Erase after X unsuccessful attempts" feature exists, I've never used it and really don't recommend anyone else do so unless you are exceptionally careful.
Second, my comment was meant in the context of iCloud - if you don't enable the feature as described, Apple may be able to decrypt and recover something, but if you do go all the way to the logical end, yes, you're out of options.
My point is, Apple has already crossed the Rubicon of allowing users to lock themselves out of their data. On an iPhone, regardless of auto-erase settings, if you lock yourself out, Apple cannot help you.
Especially on Apple devices. You cannot just reset the device if it was ever associated with an account and you don't know the password anymore. You must contact Apple support. Some people still want to employ Apple devices in a business environment.
The amount of money that is spent on their devices is just insane.
> …to be honest I think many at Apple behind this decision are likely pretty surprised by the blowback.
If Apple is surprised by the blowback, it’s entirely Apple’s fault because it didn’t engage with many outside experts on the privacy front.
I recall reading either in Alex Stamos’s tweets or Matthew Green’s tweets that Apple was invited before for discussions related to handling CSAM, but it didn’t participate.
Secretiveness is helpful to keep an advantage, but I can’t believe that Apple is incapable of engaging with knowledgeable people outside with legal agreements to keep things under wraps.
This blowback is well deserved for the way Apple operates. And I’m writing this as a disappointed Apple ecosystem user.
Just because Apple didn’t attend Alex’s mini conference for his contacts does not mean they didn’t engage any privacy experts. Furthermore, Alex is a security expert and not a privacy expert. Finally, if that conference was really pushing the envelope on privacy, where are the innovations from those who did attend? The status quo of CSAM scanning is at least as dangerous as an announcement of an alternative.
> but they needed to do something to address the CSAM issue lest governments come down on them hard.
If Apple does not have decryption keys for iCloud content, they can't decrypt it, only the user can. For Apple, it's not technically feasible, and the law supports that.
Apple has now demonstrated that it is technically feasible to overcome this challenge by doing things "pre-encryption" on the user's device.
From a user's point-of-view, Apple's privacy is worth nothing. It makes no sense for them to encrypt anything, if they leak all data before encryption. Children today, terrorism tomorrow. All your messages, photos, and audio, can be leaked.
There are thousands of people at Apple involved in the release of any single feature, from devs and testers, to managers all the way up to VPs.
The fact that this feature made it to production proves that Apple is an organization that I can't trust with my privacy.
Their privacy team must be pulling their hair out. They had better start looking for new jobs.
> Apple has now demonstrated that it is technically feasible to overcome this challenge by doing things "pre-encryption" on the user's device.
> From a legal stand point, they are now screwed, because now they _must_ do it.
That's not the right takeaway. They're only required to turn over data to which they have access. If they continue writing software that makes it so they don't have access to the data, then they do not need to turn it over because they can't. The fact that you can write software to store or transmit unencrypted data is irrelevant.
Right, but there is a big difference between them having the software in place already to steal files off a phone for a government, versus a government telling them "you must deploy this new software." In the past Apple has said no to writing any new spyware for the government. They would not be able to say no very easily if the software is already on the devices.
Something probably did happen. Apple no doubt became increasingly aware of just how legally culpable they are for photos uploaded to iCloud.
For content that is pirated Apple would receive a slap on the wrist, but for content that shows the exploitation of children? And that Apple is hosting on its own servers? There is no doubt every government would throw its full legal weight behind that case.
Now they are stuck between a rock and a hard place. Do they or do they not look at iCloud Photos? Well, not in the cloud I guess, but right before you upload them. Is that any better? I don't know? Probably not. They really have no choice though. They either scan in the cloud or scan on the device. Scanning in the cloud would at least let people know where their content truly stands, but then they can't advertise end-to-end encryption, and on paper it may have felt like a bigger reversal.
I think the long and short of it is: Apple cannot really legally protect your privacy when it comes to data stored on their servers. The end.
I doubt a judge can find Apple liable for hosting encrypted data if it's illegal. That would mean every E2E storage solution, or even just encrypting files and storing them on an S3 bucket host, would be liable.
Apple should use its gigantic legal arm to set good precedents for privacy in court.
First, there is nothing stopping a full legislative assault on end to end encryption, various such proposals have had traction recently in both the US and EU.
The lawyerly phrase is “knew or should have known.” It could be argued that now that this technology is feasible, failure to apply it is complicity.
Think about GDPR. Eventually homomorphic encryption is going to be at a point where “we need your data in the clear so we can do computations with it on the server” will cross from “failure to be on the bleeding edge of privacy research” to “careless and backwards.”
> First, there is nothing stopping a full legislative assault on end to end encryption, various such proposals have had traction recently in both the US and EU.
And nothing stops Apple from fighting against the legislative assault. They have made themselves a champion of user privacy and have deep pockets to fight the fight, yet they just surrender at first attempt.
Pull all Apple products from the shelves for a month and launch a billion-dollar ad campaign about what the hell our government is up to.
At least try to stick to one government. Tech companies are entering tricky waters by trying to follow the laws of many nations (which conflict) simply because their customers are there.
Then grow a pair and point at government influence. Leak DOJ demand letters. Do something to indicate this was forced on them. Or is a gag order in place? That’s my bet right now.
Yea, I agree. I see so many comments where people view the issue as black and white and think Apple is clearly in the wrong. Yes, privacy is a great ideal. But there are absolute monsters in this world, and there is zero chance that no Apple employees have brushed up against those monsters. I would ask every single developer to step back and imagine a situation where software you wrote is being used by monsters to horribly scar children around the world, and ask yourself what you would do. Saying the moral and right thing to do is to protect the privacy of all of your users, including those monsters, is just not something you can do once you have glimpsed the horror that is out there, unless you are a sociopath that only cares about profits. Fuck if I would write any code that would make it easier for those monsters to exist or turn a blind eye towards it. It may be a slippery privacy slope, but I don't see how people that have glimpsed the horror would be able to sleep at night without stepping out on that slope.
When I was young, I was not effectively spied on. My activity and digital trail were ephemeral, and hard to pin down. I wasn't the subject of constant analysis and scrutiny. I had the privilege of being invited into the engine of an idling train to share an engineer's drinks along with my parents, and no one batted an eye.
There were pedophiles then too. Guess what? We didn't sacrifice everyone's right to privacy. We didn't lock more and more rights out of the reach of minors. You didn't need a damn college degree and an insane amount of idealistic commitment to opt out of the surveillance machine. I could fill most prescriptions, even out of state.
We didn't fear the monster. They were hunted with everything we could justify mustering, but you weren't treated as one by default.
I imagine that maybe, somehow, tomorrow's youth may get to experience that. Not live in fear, but in appreciation of the fact that we, too, looked into things absolutely worth being afraid of, but resisted the temptation to surrender yet more liberties for the promise of a safety that will never manifest, only change its stripes.
I think there will always be a small number of bad people in the world, who will always find a way to be bad.
I know they’re out there in the world, but I leave my home anyways, because living my entire life curled up in my bed in fear would be a sad and short life.
I’m more afraid that people will lose the ability to be happy because they’re so full of fear and skepticism that they don’t form meaningful relationships, relax, experience life.
I’m also more afraid of what can happen when a small group of people are allowed to dictate and control the lives of many, justifying it by pushing fear. Then it’ll be more than a small group of abusers harming a small group of people (in this case, child abusers). It becomes a different group of abusers (dictators) harming the entire population through their dystopian politics.
How many people, including children, died when a group of dictators used fear & skepticism in the 1930s to push an agenda that ultimately led to WWII and concentration camps?
We need to be extraordinarily careful that we don’t try to save a very small number of lives at the expense of many lives in the future, as a result of policies that we put in place today.
> I think many at Apple behind this decision are likely pretty surprised by the blowback.
Maybe they would have been surprised where it ended up, but someone in charge there is definitely on board with a reversal on the previous privacy stance, given the doubling-down with an email signal-boosting the NCMEC calling privacy concerns the "screeching of the minority".
They announced CSAM scanning, but would you trust that there is no scanning if they said they reversed their decision and eventually gave you E2E? Considering how big the firmware is, that all images are processed by their APIs, and that they control both the software and the hardware, they could hide such scanning in the firmware or even the hardware if they wanted to or were forced to. They could add a separate microcontroller similar to the Secure Enclave which would (in addition to "AI-enhancing" the picture) compute and compare the CSAM hash. On a positive match, the main CPU would decrypt and run hidden reporting code which would report the image over a TLS-secured channel with Secure Enclave-backed certificate pinning to prevent any MITM. No one would notice, because the code would be decrypted only after a positive match, and no one could see the image being sent out because of TLS. It is not just Apple, though. We currently don't have enough control over our devices, especially smartphones. There are vendor binary blobs on Android too. Now or in the future, these proprietary, very capable devices are ideal for many types of spying. Stallman warned about this many years ago.
My (layperson) estimation is that Apple knows some rewording of it is liable to pass both the House and Senate in either the 2022 or 2023 session, and they are attempting to get ahead of the issue (most likely as a "rider" bill at this point).
That's my thought process on what may have precipitated Apple's move toward "CSAM"; and I welcome it in some ways (unequivocally, children should not be subjected to sexual abuse) and am wary of it in others (ML is in its infancy in too many respects, and if a false-positive arises who's to say what's liable to happen on the Law Enforcement level).
There are infinitely many concerns to be discussed on both sides of this argument, and I want both to succeed in their way. CSAM, however, feels like an attempt to throw a blanket over Apple's domains-of-future-interests while playing an Altruism Card.
One thing is certain, though: they've opened the floodgates to these discussions on an international level...
> It's no secret that Apple has lobbyists which by turns have a "pulse" on the trajectory of Bills in the House and Senate.
Great, let the bills pass and then they get to publicly blame congress for being "forced to legally comply" instead of jumping the gun and doing an about face on 80% of your reputation built over the last 20+ years.
There is 0 reason to do this before being legally forced to. It's not a gamble with a possible upside somewhere. It's directly shooting yourself in the foot.
Yes, 100% agree, make congress go through with passing such a law. Make it a public discussion. Make them consider if they would like their own devices to be built insecurely. They're among the biggest targets of people who would like to subvert US policy.
Adding to the confusion, they announced both the AI powered parental control feature for iMessage and hash-based scanning of iCloud pictures.
The iMessage thing is actual spyware, by design, it is a way for parents to spy on their children, which is fine as long as it is a choice and the parents are in control. And that's the case here.
The iCloud thing is much more limited in scope, comparing hashes of pictures before you send them to Apple's servers. For some reason, they do it client-side and not server-side. AFAIK, iCloud photos are not E2E encrypted, so I don't really know why they are doing it that way when everyone else does it server-side.
But combine the two pieces of news and it sounds like Apple is using AI to scan all your pictures. Which is not true. Unless you are a child still under your parents' authority, it will only compare hashes of pictures you send to the cloud; no fancy AI here, and it is kind of understandable for Apple not to want to host child porn.
They're doing it clientside so that they can do it for images and files that aren't ever sent off the device.
I am increasingly convinced this is a backroom deal, just like the iCloud Backup e2e feature (designed and, as I understand it, partially implemented) being nixed on demand (but not legal compulsion) from the FBI.
If you get big enough, the 1A doesn't apply to you anymore.
Apple know what they are doing. Don't fool yourself by giving plausible excuses. The further away you are from your phone the more privacy you will have.
Almost everyone at the Jan 6th insurrection was traced, because unlike antifa they took their phones with them, making it an easy trace for the government. Now they are in solitary confinement in prison while the world laughs at their expectations of privacy from their phones.
If they get a covid passport implemented, you won't be able to go out in public without your phone to prove you've been vaccinated. So leaving a phone behind will not be much of an option if this is to pass. Metadata and contact tracing will keep track of everyone you're near. This dystopian future is possibly just a few weeks away.
So Apple has rightfully lost the trust of its users then. We submit to their walled garden because we believe they are benevolent. But when they make this kind of a mistake how can we trust them?
I don't really see how they can recover from this. Will they go out of business? no. Will they even lose market share? No. But there will always be that example of the time Apple got it wrong.
If I was an Apple employee I would certainly be concerned about how something like this could happen.
> Netflix could very clearly see the writing on the wall, that the DVD-by-mail business needed to morph into a streaming business.
I don't argue with your point, but actually Reed Hastings originally created Netflix, the DVD rental company, so that he could turn it into a movie streaming company. He had to wait for the internet to be fast enough. His interview with Reid Hoffman is pretty cool:
I also think that some kind of negotiation between Apple and the government happened in the background. Probably something like "don't break up the company and we give you a backdoor". Maybe we are throwing stones at the wrong window? I wish we could know more about this, but it is probable that the inner workings of state surveillance are not meant to be public. What do we do?
I agree with your analysis -- I also think it was probably meant to mollify law enforcement without sacrificing encryption. The same thought occurred to me independently right when it was announced.
> didn't quite realize that their plan to protect end-to-end encryption doesn't mean that much if one "end" is now forced to run tech that can be abused by governments
Not just one end, but specifically the end that their users thought they had control over.
I'm not a techie, but I think iOS already has similar technology: Camera detects faces, Photos can recognize people and objects (or cats) — all with help of AI (or ML). I think they are just expanding this and combining with "share".
They would not talk about this in their public messaging because it would require them to admit that their privacy policies are influenced not just by their own company morals but also by the government.
They would have to say something like "We know some users want to fully encrypt their cloud data. We also want to encrypt their data. But the government won't allow us to do it until we protect their interests in stopping child abuse and terrorism". That would contradict their marketing image of having their interests being aligned with their users'.
> That is, it seems like Apple really wanted to preserve "end-to-end" encryption, but they needed to do something to address the CSAM issue lest governments come down on them hard.
Why? It's already been well-reported that Apple makes a teeny fraction of the NCMEC reports that other large cloud providers make, and that the reason they hold the keys to iCloud backups was from some requests/pressure from the FBI.
Because the government doesn't just care about CSAM, they care about terrorism, drug and human trafficking, gangs etc. Scanning for CSAM won't change the government's opposition to E2EE, and Apple knows this because, according to their transparency reports, they respond with customers' data to NSL and FISA data requests about 60,000 times a year.
Occam's razor also says they aren't playing 11th dimensional chess, and that they just fucked up.
The point is that with this technology, Apple can now please both the government and Apple users that want their data in the cloud to be fully encrypted.
First they enable this technology to prove to the government that they can still scan for bad stuff, then they enable true E2E. I don't know if this is really their motivation but it sounds plausible to me.
If they scan on device unencrypted and report what they find (even if it's child porn now, we know that is not going to last), they have made the E2E effectively useless. The point of end-to-end encryption is to make the data accessible to just one or two people, and that is not fulfilled here.
EDIT: though you are right - they can please governments by making encryption useless while lying to people they still have it. Not sure that was your message though.
The difference between Apple being able to scan data on your phone vs on their servers comes down to who is able to access your data. In both cases, Apple has access. But by storing both your data and your keys somewhere in their cloud servers, that data becomes available to governments with the appropriate subpoena, rogue Apple employees and hackers. None of that would be possible with true E2E encryption.
It seems plausible to me that Apple might believe some of their privacy conscious users would prefer this situation and hence "privacy" can be considered a plausible explanation for their actions here.
I'm suggesting that the client-side CSAM scanning technology could allow Apple to turn on true E2E encryption and still satisfy the government's requirement to report on CSAM which as you point out is not something that Apple currently implements.
The "client side" CSAM detection involves a literal person in the middle examining suspected matches. The combination of that and whatever you think true E2E means isn't E2E by definition.
What other "large cloud providers"? Ones that are similar to Apple? Or general cloud companies?
And is "pressure from the FBI" really the only reason why Apple might hold backup keys? Please sketch out your "lost device, need to restore from backup" user journey. For a real person, who has only an iPhone.
> Please sketch out your "lost device, need to restore from backup" user journey. For a real person, who has only an iPhone.
How about "enter your master passphrase to access your photos". Like literally all syncing password managers work, or things like Authy work to back up your one-time password seeds. It's not complicated.
There are methods for dealing with CP that would maintain privacy, not subject humans to child pornography, and would jibe with their end-to-end encryption goals. But we'll never know what they're doing under the hood, and I think that's the concerning part.
They're not magical. They're just smart use of cryptography.
Ashton Kutcher's foundation Thorn https://en.wikipedia.org/wiki/Thorn_(organization) does a lot of work in this space. They have hashed content databases, they have suspected and watchlisted IP addresses, etc...
I don't know why I'm being downvoted. This is an active area of cryptography research, and I've applied these types of content checks in real life. You can maintain good privacy and still help combat child pornography.
Sometimes HN takes digital cynicism a step too far.
Checking the images you upload to iCloud against a database of CSAM image hashes is what Apple is updating people's phones to do. I actually don't really see the issue, and I don't think the average consumer will care about this one bit. Apple has always been able to see your iCloud backups in plain text anyway.
It's actually easier to do the scanning on the backend than to scan users' phones.
My opinion, maybe entirely wrong, is that it's related to how Apple operates in China. In China, Apple does not actually operate their backend; it is operated by a government-owned business which is "leased" by Apple.
Maybe they don't want to enable scanning on the backend for that reason, and so over-engineered local device scanning, which really doesn't make much sense if you think about it twice.
> Thus, my guess is, at least at the beginning, they saw this as a strong win for privacy. As the project progressed, however, it's like they missed "the forest for the trees" and didn't quite realize that their plan to protect end-to-end encryption doesn't mean that much if one "end" is now forced to run tech that can be abused by governments.
Sincere or not, the "the only way we could do true X is by making X utterly useless and meaningless" thing certainly seems like bureaucracy speaking. It's a testament to "the reality distortion field" or "drinking the Koolaid" but it shows that the "fall-off" of such fields can be steep.
What has changed, again? The practical state today is that photos are uploaded in the clear to iCloud, Apple can scan them if they want, and, more importantly, with any valid subpoena the US Government can compel Apple to hand them over.
Consider this new hypothetical future state:
1) photos are encrypted on iCloud
2) photos are hash scanned before being encrypted and uploaded.
In this new state, no valid subpoena can get access to the encrypted photos on iCloud without also getting the key on your device. By what metric is this not a better state than the current one, where every photo is available in the clear to any government that asks and must be preserved when given a valid order to do so?
If you don’t upload photos to iCloud there is no change.
In all scenarios you have to trust Apple to write software that does what they say and that they won’t write software in response to a subpoena to target you specifically.
The attack on the hash scanning is that someone gets a hash for something specifically in your photos (one that doesn't match a ton of people) and through some means gets it into the database (plausible), but they can't get anything without going through this route. In particular, subpoenas can't just get all your photos without going through this route.
My conclusion is that you are more protected in the hash scan and encrypt state than in current state.
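To make the comparison concrete, here is a rough sketch of the hypothetical state (2) above. The names, the plain hash-set lookup, and the XOR "cipher" are all stand-ins; the announced design actually uses NeuralHash plus a threshold/private-set-intersection scheme, which this does not attempt to model:

```python
import hashlib

known_hashes: set[bytes] = set()   # opaque list supplied by a third party

def client_upload(photo: bytes, key: bytes):
    """Hash on-device before encryption, then upload only ciphertext plus the match flag."""
    digest = hashlib.sha256(photo).digest()                 # stand-in for a perceptual hash
    matched = digest in known_hashes                        # computed before encryption
    ciphertext = bytes(b ^ k for b, k in zip(photo, key))   # toy cipher; key as long as photo
    return ciphertext, matched                              # server stores data it cannot read
```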
Apple privacy focus was always opportunistic: they couldn't get a foothold into online advertising so they decided to make that weakness a selling point instead.
They'll ditch privacy in a second if they can sell you enough ads, just as they ditched privacy for Chinese customers when the Chinese government asked them to.
> Apple privacy focus was always opportunistic: they couldn't get a foothold into online advertising so they decided to make that weakness a selling point instead.
You could spin that the other way: Apple's foray into advertising was opportunistic, but didn't sit well with their focus on privacy. You'd need internal memos to prove it went either way.
You could, but it seems more likely that a mega corporation gave up on a huge market because it failed to succeed there rather than out of moral principles. It's not like Apple has shown a lot of moral standing when dealing with the CCP, or isn't ruthlessly crushing competition when it gets the chance.
I have a friend who knows Apple's head of user privacy. He vouches that the guy is deeply serious about security/privacy.
Between that and Apple's principled stance against the FBI, I'm inclined to believe that at that time, they were making principled choices out of a real concern for user privacy.
I suspect this time around they have a real concern for dealing with the growing child exploitation/abuse problem, but failed to prioritize system design principles in their zeal to pursue positive change in the world.
I don't envy them: in their position, they have an incredible amount of pressure and the stakes are incredibly high.
May cool heads and rational, principled thinking prevail.
To me the issue is that it makes no sense: if it's just scanning iCloud photos, _why does this feature need to exist at all_?
Apple is saying this: "we need to scan your phone because we need to look for this material, and by the way, we're only scanning things _we've been scanning for years already_"
Basically, everything else aside, in the current state of things this feature makes no sense. They are not scanning anything they weren't scanning already (literally everything on iCloud), so why did they bother?
A bunch of people are making speculative claims that Apple is doing this to enable encryption of data in iCloud, but there's no reason to believe that's true. Apple certainly hasn't mentioned it.
> A bunch of people are making speculative claims that Apple is doing this to enable encryption of data in iCloud, but there's no reason to believe that's true. Apple certainly hasn't mentioned it.
There is nothing surprising here. Apple never mentions things unless it’s ready for release or is just about to be released. It’s just how Apple works.
Here's my hope: E2E encryption. Maybe the government is throwing a fit about Apple harboring potential CP on its servers if it's E2E encrypted, so Apple says, "fine, we'll review it prior to upload, and then E2E encrypt the upload itself."
> "we need to scan your phone because we need to look for this material, and by the way, we're only scanning things _we've been scanning for years already_"
As far as we know, Apple is not already using their decryption keys to decrypt images uploaded to iCloud and compare them with known images. That's according to this blog post, which claims Apple makes very few reports to the NCMEC:
If I were to speculate, I’ve seen lots of references as to how Apple reports way less of this to the appropriate authority than the rest of their competition. It does kinda sound to me like this is the consequence of external pressure. I cannot prove this but speculate, of course.
I just don’t think the messaging they had around this is reassuring at all. While they had all sorts of technical explanations as to how trustworthy this will all be because they are in charge, the whole thing quickly went from “this is only for iCloud photos” and only in the US to “3rd party integration can happen” and “we’re expanding it to other countries”. Which I guess is a logical next step but it is a hint at expansion.
The walled garden becomes much less tempting after all of this, especially right after a scandal like the Pegasus one.
Apple was never pro-privacy. Apple has always been pro-Apple. They will do whatever benefits them the most at any time. It just so happens that with the failure of their own attempts to start an advertising network, they saw an opportunity to spin themselves as the privacy company and take digs at their competitors. Nothing lasts forever. Maybe it's time for a new spin.
If a grandmother takes a photo of her grandchild in a bathtub, this is technically against federal law, 18 U.S.C. § 2252. There is no "I am grandma" exception. This is a serious felony with serious consequences. If you use an algorithm without any context, I think you are looking at a ton of new 18 U.S.C. § 2252 (child pornography) charges through these scans. These charges are notoriously difficult to defend against, and even the charge itself will ruin lives and careers [1]. Getting custody of your child back will also be impossible [found guilty or not]. I am getting my popcorn ready. Apple has no idea what they have unleashed.
You seem to be missing the fact that they're only scanning for existing images in a database, not detecting brand new images. Grandma's photo won't be in the database unless she uploads it somewhere that reports it.
For me the only thing stopping me was the lack of USB-C and a headphone jack, which at the time was an annoyance but is now turning out to be a blessing in disguise.
They’re certainly trying. But the video of Craig trying to use a novel definition of “law enforcement backdoor” and accusing people of being confused is contemptibly dishonest.
That is not an accurate analogy at all. Apple DID design this with privacy in mind. What would work to fit your inaccurate analogy is if Apple suddenly started reading everything of yours in the plain, directly violating your privacy.
As it stands, the system is designed so that the phone snitches you out if you have CSAM on it matching some perceptual hashes (and plenty of it), but if you do not have CSAM above threshold amounts then your phone never even phones home about it. It is in fact a privacy-preserving mechanism in the sense that Apple never sees your photos.
To be honest you lost the privacy battle when you bought apple hardware. Can't run Linux, can't run windows, you're stuck with zero competition in the OS space.
> It's like working for a food company that claims to use organic, non-processed, fair-trade ingredients and in just a day deciding that you're going to switch to industrial farming sourcing and ultra-processed ingredients.
No it's not, it's more like in just a day they started including a chemical preservative amongst all the organic ingredients. All the good stuff is still there, it's just pointless now for people who want to eat pure organic food.
I don't think this is a good metaphor. You still get the same privacy as you did before. Only your iCloud images are now being scanned for visual matches. If you are already trusting Apple with the rest of your private life, this is hardly a complete 180-degree turn.
I don't quite understand why people are so against this. What if some algorithm scans your images? If you are using any cloud service to share your images, then your images are already being scanned, and far more invasively. Is it the storing of hashes that concerns you? Your passwords are already stored as hashes and we trust that they cannot be reversed back to plain text; why wouldn't we trust that these image hashes can't be reversed back into images?
I've seen some notion that this is a backdoor and that governments and other Lovecraftian entities could use it to suppress freedom of speech by censoring certain images, but that would require them to first have the image in question, then create a hash of it, and then insert it into Apple's database, and even then it would only be flagged for human verification. Again, assuming you trust Apple to handle the rest of your digital life (such as iMessages), why wouldn't you trust Apple with your images? If some entity has its hooks so deep in Apple that they start to censor images, why wouldn't they just stop all of your messages from being sent?
Please correct me if I'm missing something obvious.
> why wouldn't we trust that these image hashes can't be reversed back into images?
You should avoid making political or moral arguments when your technical knowledge is 10 feet below your ego. An image is orders of magnitude larger than a password, meaning that many more collisions. Also, because it's a perceptual hash, even more collisions. One does not simply reverse a perceptual hash. Keep your hat on ;-)
>You should avoid making political or moral arguments
I am not being political or moral. Why would you think that? Is this one of those "since you don't immediately agree with me it means you are the enemy and disagree on everything" type things?
>An image is orders of magnitude larger than a password, meaning that many more collisions.
The whole point of hashing algorithms is that they produce unique hashes for different inputs. Wouldn't a larger input value (the bits of the image) make it harder to have collisions?
>One does not simply reverse a perceptual hash
So what actually is the privacy concern here? There is no way to get the original image from the hash at best you can get an approximation of it.
>Keep your hat on ;-)
I guess this is supposed to be some kind of dig at me. No need to be an asshole even if you don't agree with someone.
> Whole point of hashing algorithms is that they produce unique hashes for different inputs. Wouldn't a larger input value (bits of the image) make it harder to have collisions?
However, a perceptual hash wants the opposite. Due to the pigeonhole principle, a larger space mapping to a smaller space involves collisions. In fact, all hashes have infinitely many collisions over unbounded inputs; one just tries to design hashes that don't collide much on shorter inputs. Ultimately, if inputs are m bits long and the output is n bits (with m > n), each output value has on average 2^(m-n) preimages, which you can calculate directly from the excess of the input space size over the output space size.
Perceptual hashes are essentially designed to collide on similar data. The fine details are lost. An ideal perceptual hash algorithm would quantize as many alterable properties of an image as possible. Contrast, brightness, edges, hue, fine details, etc. In the end, you have a bunch of splotches in a certain composition that form the low dimensional eigenbasis of the hash.
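As a toy example of that design goal (a crude "average hash" over an 8-pixel thumbnail, nowhere near NeuralHash in sophistication), two slightly different versions of the same image land on the same hash, which is exactly the collision behaviour a perceptual hash aims for:

```python
def average_hash(pixels):
    """pixels: flat list of grayscale values (0-255), e.g. a tiny thumbnail."""
    avg = sum(pixels) / len(pixels)
    # Each pixel contributes one bit: brighter than the average or not.
    return int("".join("1" if p > avg else "0" for p in pixels), 2)

original = [200, 30, 190, 40, 210, 25, 205, 35]
tweaked  = [198, 33, 192, 38, 207, 28, 203, 37]   # re-encoded / brightness-shifted copy

print(average_hash(original) == average_hash(tweaked))  # True: collides by design
```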
You're welcome. I think the crux is that people are worried about the police knocking at their door and/or inspecting their photos due to false matches from this purportedly unreliable hash method.
That feels like purposefully misunderstanding the situation. The hash match only escalates the matter to a human inspection. In no sane world is that inspection done physically. It means that if your picture approximately matches a known abuse picture, someone gets to view it to verify whether it contains abuse or not, and then potential legal action can be taken.
Police knocking down your door will only happen if you are found to possess multiple abuse pictures and they suspect you might be endangering people.
For one I don't think you should be taking photos of your kids in the tub. No one is going to want to watch those. Secondly if you manage to capture same position on your kid and same angle as someone in an abuse picture there might be something wrong with your bathing routine.
This just feels like people constructing an imaginary boogeyman out of this.
I both understand and acknowledge your position. I agree that this technology can do a lot of good if used for its intended purposes. However, it does strike me as a bit overarching.
The question really is, will it be used for intended purposes or will it become like roadside drug sniffing dogs? A common euphemism is "probable cause on 4 legs."
If it is overly sensitive, or if Apple or the government expands the list to include much more than what it has initially been slated for, it's possible that the faux detections will be used for escalation and ultimately result in parallel constructions for crimes like drug possession.
To clarify - here's an arcane scenario. Authorities receive an alert and are given permission by Apple to look at all photos taken the same day as the photo in question. The photo in question is a false match but the rest of the photos that day are still inspected. The agents observe a photo of a bag of white powder, and use it to procure further investigations/warrants. The target is later arrested for possession once physical evidence is found.
I find the laws in this nation, particularly drug laws, to be quite obtuse. I don't think it is wise to give any additional enforcement capabilities to those that are continuing to deprive citizens of life and liberty over archaic drug laws.
I hate pedophiles and child abusers as much as anyone else should. I'd execute the abusers if we are being totally honest. It's just that this technology strikes me as a potentially Orwellian device.
I don't get why they would get all of the images taken that day if the flagged image doesn't actually contain any abuse material.
This might not be true, but I've always assumed that if I am suspected of something the police can already access all my iCloud (or any other cloud storage) images with court order.
Has it been confirmed that police get all of your media in case of a collision, and not just the offending image? Do they even get the offending image?
I just do not want Apple scanning my phone for the purpose of finding something they can send to the police.
I’m not even talking about any “slippery slope” scenarios and I’ll never have any of the material they are looking for. And right or wrong, I don’t really fear a false identification, so this isn’t about a worry that they will actually turn me in. I just don’t want them scanning my phone for the purpose of turning me in. I definitely do not want to take an OS update or new phone that does this.
(False positives are the huge flaw in this system, though. There are human judgements by anonymous people using opaque processes without appeal throughout this. A lot of damage can be done before anything goes to court, not to mention courts themselves are susceptible to the illusion of infallible technology. Others have well laid out the scenarios that will result in serious consequences for innocent people.)
Who wants their printer to scan every document they print, and if it sees something the printer manufacturer thinks is illegal it's reported to the police?
Or their TV scans every frame displayed and every word said if it sees anything the TV manufacturer thinks is illegal it's reported to the police?
Or their blender has an array of chemical sensors in it, and if it detects a substance the manufacturer thinks is illegal it's reported to the police?
How about smart floorboards that detect illegal dancing by women in Iran? Shall we do that too?
Apple where are you on that one? I can point you to a country filled with eager customers, seems right up your alley.
> Who wants their printer to scan every document they print, and if it sees something the printer manufacturer thinks is illegal it's reported to the police?
FYI most (all?) modern printers will refuse to print anything they think is counterfeit money, and will include a unique code traceable back to you in every printed document. - https://en.wikipedia.org/wiki/Machine_Identification_Code
I'm not a fan of this either, but at least it doesn't call the police on me. It places a marker so that a human could track it back to me, sure, but that still requires me to actually try to do something illegal with something I've printed.
I still think it's BS that you can't scan money, though, and it's very much an overreach. Your home printer isn't gonna make a convincing fake anyway.
I like this take. It shortcuts around all the "but people misunderstand the technology" back and forth and gets to the root of it. "Don't scan my phone for stuff you can send to the police."
This is the most honest take on this that I’ve seen. The slippery slope arguments don’t make sense, nor does the risk of false positives.
But the idea of being suspected even in this abstract way, because of something other people do, is at the very least distasteful, bordering on offensive.
This is the word that most captures my feeling not just around the feature but how it was rolled out as well. Reading the PR announcement and then the leaked memo just felt - quite distasteful.
There are a number of things Apple has done and continues to do that fit into that category for me personally - the fact that iOS is so locked down for example.
But this really takes the cake. It's like someone at Apple dialed up the distasteful to 11 and let loose.
The whole industry does distasteful things but this is a harbinger of our inability to trust them at a deeper level. Not just Apple.
> The slippery slope arguments don’t make sense, nor does the risk of false positives.
Yeah no, there are about 1B iPhones in use; even a ridiculously low false-positive rate would be a major pain in the ass for a lot of people, regularly.
The slippery slope argument is totally valid, whatever legal safeguards you have today might not exist tomorrow, but the tech will stay here forever.
I actually fail to see how your second sentence supports the first. Treating everyone as a potential suspect of a crime (which isn't even widespread in the first place) is a major slippery slope. Today it's "for the children, and only in the US"; tomorrow it might be "find me pro-Palestinian pictures of people living in Israel". Nobody can predict what Apple's stance will be in 25+ years.
I agree with this take, but the slippery slope arguments do make sense, as exemplified by any spying technology ever. Once a capability exists, it's too enticing not to continue expanding the scope.
Not everyone in the world is protected by the US Constitution and although Apple is initially only rolling this out in the US there is no doubt it will be rolled out in other countries sooner or later.
I would not say the slippery slope take doesn't make sense. It is perfectly possible that had no one cried out about this change then the next change would have been:
Large government to Apple: "Please now also create a hash on each photo metadata field including date/time and location. Let us query these. AND BTW, this change must be kept secret. Thanks."
> it is possible that had no-one cried about this change,
The “UK porn filter” has already been extended to all sorts of “« extremism online »” (to no effect in the capital of knife attacks, it seems — as usual invasive police rights do not equal a reduction of criminality) and it’s already being proposed to be extended to:
> Large government to Apple: "Please now also create a hash on each photo metadata field including date/time and location. Let us query these. AND BTW, this change must be kept secret. Thanks."
How do you know this hasn’t already happened? What does it have to do with the CSAM technology?
How do you know the invisible pink unicorn doesn't exist? You don't, but going after the known bad things is the most effective action you can take. You certainly don't ignore known bad things just because there may be even worse unknown things.
Neither 1 nor 2 are in fact true. Spotlight indexes all kinds of metadata, as does photos search. Adding an agent to upload data from these is easier than extending the CSAM mechanism, and the CSAM mechanism as is is not all that plausible to abuse either technically or socially given how clear Apple’s promises are.
> the CSAM mechanism as is is not all that plausible to abuse either technically or socially given how clear Apple’s promises are.
That’s the problem: Apple’s promise means nothing exactly because it’s so easy to abuse.
Apple says they will refuse when asked to scan for anything that is not CSAM. That’s one of those nice ‘technically true’ statements lawyers like to include.
Apple will not have to refuse anything, because they won’t be asked.
Apple doesn’t get a huge pile of CSAM images (obviously), they get a list of hashes from a 3rd party. They have no way of ensuring that these hashes are only for CSAM content. And when ‘manually’ checking, they aren’t actually looking at potential CSAM, they are checking the ‘visual derivative’ (whatever that means exactly), basically a manual check if the hashes match.
So yes, they would technically refuse if asked to scan for other material, but they will just accept any list of hashes provided to them by NCMEC (an organization created by the US government and funded by the US DoJ) no questions asked. The US government could include any hash they wish into this list without Apple ever noticing.
> And when ‘manually’ checking, they aren’t actually looking at potential CSAM, they are checking the ‘visual derivative’ (whatever that means exactly), basically a manual check if the hashes match.
The visual derivative is enough to tell the difference between a collection of CSAM and say documents, or pictures of protests.
> The visual derivative is enough to tell the difference between a collection of CSAM and say documents, or pictures of protests.
How would you know? They have never given details on what this 'visual derivative' actually is. If it's detailed enough to recognise as CSAM, then Apple isn't allowed to look at it.
Apple very recently promised that “what happens on iPhone stays on iPhone”. At literally the same time, they were building a system that was designed to break that promise. Why should we believe their new promises when their last promise was disingenuous?
Why should Apple's promises be taken at face value? Doubly so in a world where they can be compelled legally to break those promises and say nothing (especially in the more totalitarian countries they willingly operate in and willingly subvert some of their other privacy guarantees)?
What stops Apple from altering the deal further?
And if you have an answer for that, what makes you believe that 10 years in the future, with someone else at the helm, that they won't?
Your device is now proactively acting as a stool pigeon for the government where it wasn't prior. This is a new capacity.
And it's for CSAM. For now. The technological leap from locally scanning just the stuff you upload, to scanning everything, is a very simple change. The leap from scanning for CSAM to scanning for some other perceptual hashes the government wants matched is very simple.
What stops them from altering the deal is that they have said they won’t, and there really is no evidence to believe otherwise.
If they do, then they do. There are never any guarantees about anyone’s future behavior. However just saying ‘what stops you from becoming a thief?’, does not imply that you will become a thief.
There is no new capability other than what they have said. They already had far more general purpose mechanisms for analysing and searching your files already in the operating system.
The leap from doing spotlight indexing to searching for the keywords ‘falun gong’ or ‘proud boys’ in your files is also simple. So is the leap from searching your photos locally for ‘dogs’, to searching locally for ‘swastikas’ and reporting back when they are found.
If they decide to build some spyware, there is no need for it to be based on this. It’s a red herring.
That article is very obviously not about what is being searched for. It's about who it is being reported to. For example, if Apple one day decided to detect unauthorized copyrighted media, they could report it to the RIAA. The RIAA is independent of the government, thus the 4th amendment is not violated.
So then you must agree that the fears of government overreach are unwarranted. Given that's what most of the complaints here are about, that's a big deal.
> For example, if Apple one day decided to detect unauthorized copyrighted media, they could report it to the RIAA. The RIAA is independent of the government, thus the 4th amendment is not violated.
It’s true that Apple could decide to implement any search or scanning mechanism at any time, and report anything they like to anyone they want to. So what? This is true of anyone who writes software that handles user data.
What does that have to do with the CSAM mechanism? It seems like an unrelated fear.
But only because Apple is doing the searches on device. If the searching is done in the cloud, the third party doctrine means that the US Government could demand anything and the 4th Amendment wouldn't apply.
How difficult would it be to transform it into looking for images from political dissenters? Now, you have a magical list to find targets to probe further.
Maybe you want to identify behaviors you disagree with.
Images tell a story and maybe not to the people who are looking out for your best interests.
Not to mention how this kind of stuff can be abused.
Think about the digital version of the cop that drops a small bag containing a white powder in the back of your car.
Good luck proving that it wasn't you.
Wait until you hear about judges who are actively on the take to send people to jail - like the judge who was taking money from for-profit juvenile detention centers to send kids to their facilities on ridiculous charges.
Yeah - this is an architecture designed to be abused.
This is exactly how I feel, to a T. It's hard to put into words, but I'm not a criminal and I'm not on probation. Apple has zero business scanning my phone for anything, and I don't want it done.
From what I understand, Apple does not want to scan your phone.
They want to have end-to-end encryption and store your photos encrypted. This is why they scan before sending them encrypted.
They won't scan anything which you would not put in the cloud anyway.
This is pretty clever way to preserve end-to-end encryption and satisfy requirement not to store anything CP related on their servers. Probably too clever for journalists to understand :(
> This is pretty clever way to preserve end-to-end encryption and satisfy requirement not to store anything CP related on their servers.
Correction: Anything in the shady government-supplied database, pushed through a private corporation, not CP. The official purpose today is CP (well, CSAM, which is already more vague than CP), but the distinction is important. Also, not only to prevent storing that content on their servers, but to report you to the authorities.
What? Scanning your phone is exactly what Apple is doing. They say they won't scan anything not put on the cloud but you have no control over that and it is subject to change at anytime. They've already weakened their systems at the request of the government so it wouldn't be at all unexpected for them to scan photos not uploaded to the cloud. End-to-end encryption is completely useless if you have spyware installed on one of those ends.
>This is pretty clever way to preserve end-to-end encryption and satisfy requirement not to store anything CP related on their servers.
It's a pretty clever way to normalize adversarial devices, there's no preserving of any kind of privacy going on here.
I contend that if they are storing encrypted data that they cannot decode, then they are not knowingly storing CP at all.
If someone then downloads that, decrypts it with the key that only they know, and it turns out to be CP, then they are the ones storing CP, which as we all know is already illegal.
Obviously I'm not a lawyer and this isn't how icloud works, but lots of companies legally store encrypted data that they can't decode.
There's no legal requirement that the storage provider needs to ensure there's no CP, only that they don't knowingly store it.
Apple just doesn't want to allow people to have real encryption that they control, almost definitely so they can make more money by forcing people to use expensive iCloud instead of local encrypted backups, and retain the ability to scan everyone's data. All under the flimsy excuse that 0.1% of people might forget their password and be unable to unlock their data. Never mind that you could allow this only for power users at their own request; instead it's "it's for your own good because you're too dumb to manage a password", lol fuck you Apple.
So it's just yet another self-serving, penny pinching, thinly veiled customer privacy violation, this time wrapped in a big fat "think of the children", which as we know is corporate-speak for "turn brain off now - we want to do $thing_you_wont_like".
I'm yet to think of how Apple plans to make money out of this one, but I'm sure there's an angle in there somewhere and we'll find out eventually.
Maybe simply that either NSA or China said "give us this new scanning system or we shut you down" - and I'm not American but I do wonder about people who still cling to the belief that the fourth amendment would be respected post Snowden revelations.
Hell, I'm Australian - they're probably just planning to push that hash-matched data through servers in Australia so they can see everything, because Australia has no bill of rights, and getting more authoritarian with every uncontested bill that passes. Or some other kind of "legal" fuckery to move even closer to the dystopia they are so hell-bent on creating.
Please do not spread misinformation. The system DOES scan your local device for photos that match CSAM hashes that are downloaded to the device. What you probably meant is that currently it is only enabled if you have iCloud Photos enabled (which almost everyone has), but now that the mechanism for client-side scanning is in place there’s nothing preventing them to turn it on later regardless of your user settings.
I don’t like that system, but according to their summary document, CSAM detection system is only (as described today) processing images that are uploaded to iCloud.
Please look more carefully at that document. It says, on page 5: "Instead of scanning images in the cloud, the system performs on-device matching using a database of known CSAM image hashes provided by NCMEC and other child safety organizations."
And look at the diagram. The matching of image hashes with user photos occurs "on device."
I just think that there are better arguments against it without unnecessary exaggeration. So I wanted to be more specific on which images on your device are scanned according to that paper.
Obviously I should have phrased it better, like “about to be uploaded to iCloud”…
On device != scanning your device. It scans your iCloud uploads and, if configured, iMessages (which you didn't mean, because that doesn't work with hashes).
The general worry is that the fact that it happens on the device means in the future it could conceivably scan your device, even if there are some software checks in place to only scan things being synced to iCloud.
I'll need to see their source before I believe that. Apple ("privacy is a human right") lost their presumption of good faith when they announced that they will automatically notify law enforcement if their algorithms and processes suspect you're doing something naughty on your device.
In their attempt to make this extra private by scanning 'on device', I think they've managed to make it feel worse.
If they scan my iCloud photos in iCloud, well lots of companies scan stuff when you upload it. It's on their servers, they're responsible for it. They don't want to be hosting CSAM.
It feels much worse them turning your own, trusty iPhone against you.
I know that isn't how you should look at it, but that's still how it feels.
The practical difference is that with on-device scanning, the system is just a few bit flips away from scanning every photo on your device, instead of just the ones that are about to be uploaded. With server-side scanning, the separation is clear—what's on Apple's server can be scanned, and what's only on your phone cannot.
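A minimal sketch of that point, with every name invented for illustration (this is not Apple's code): the "about to be uploaded" scope is a single guard condition wrapped around the same on-device matching routine, so widening it is a policy change rather than a new capability.

```python
from dataclasses import dataclass

@dataclass
class Photo:
    perceptual_hash: str
    queued_for_icloud: bool

def scan(photos: list[Photo], flagged_hashes: set[str]) -> list[Photo]:
    matches = []
    for p in photos:
        if not p.queued_for_icloud:   # remove this one check and the entire library is scanned
            continue
        if p.perceptual_hash in flagged_hashes:
            matches.append(p)
    return matches
```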
This all plays so perfectly into my long-time fears about the locked down nature of the iPhone. Sure, it's more secure if Apple is a good steward, but no one outside of Apple can inspect or audit what it's doing!
> With server-side scanning, the separation is clear
In all these threads everyone is coming close to the crux of the issue, but I want to restate it in clearer terms:
There is a sacrosanct line between "public" and "private," "mine" and "yours." That line cannot be crossed by Western governments without a warrant. Cloud computing has deliberately blurred this line over time. This on-device scanning implementation blows right past that line.
Our tools before the computing revolution, and our devices after, become a part of us. Our proprioception extends to include them as part of "self." A personal device -- a tool that should be wholly owned and wholly dependable, like pen and paper -- that betrays its user, is a self that betrays itself.
> There is a sacrosanct line between "public" and "private," "mine" and "yours." That line cannot be crossed by Western governments without a warrant.
This is a self-delusion, I am afraid. The line has been crossed more than once, and it will be crossed again. The UK and Australian governments are just two prime examples of waving terrorism and pedobear banners as a pretext to get more invasive with each new piece of legislation, and the Oz government already has a draft law to make it a crime to refuse cooperation with law enforcement services when they request access to encrypted content (think Signal messages). Also, refer to the Witness K case to see how cases that are unfavourable to the standing government completely bypass a «trustworthy» Western judicial system, including the Minister of Justice.
CSAM scanning is guaranteed to be abused under whatever new pretext politicians can come up with, and we won't even know that Apple has been quietly compelled to comply in those jurisdictions. There will be even less transparency and even more abuse of CSAM scanning in «non-Western» countries. That is the actual worry.
The line between private and public has been crossed many times but we're still better off having that line officially there.
Even if the cops and private security along with TLAs are actively spying on people on an extremely regular basis, we're better off if officially they aren't supposed to be, if some court cases can get tossed for this, that they have to retreat sometimes, etc.
This is why having Apple overtly spying on the "private side" is still a big thing even with all the other revelations.
In the US however this division remains, even if it has been lost elsewhere. (This loss is one of the reasons I have had to never want to live in the UK nor Australia.)
Sir James Wilson Vincent Savile OBE KCSG (/ˈsævɪl/; 31 October 1926 – 29 October 2011) was an English DJ, television and radio personality who hosted BBC shows including Top of the Pops and Jim'll Fix It. He raised an estimated £40 million for charities and, during his lifetime, was widely praised for his personal qualities and as a fund-raiser. After his death, hundreds of allegations of sexual abuse were made against him, leading the police to conclude that Savile had been a predatory sex offender—possibly one of Britain's most prolific. There had been allegations during his lifetime, but they were dismissed and accusers ignored or disbelieved; Savile took legal action against some accusers.
There's never been any indication that there has been any more difficulty in catching these guys with better smartphone privacy features. It's easier than ever largely because the police have become better at investigations involving the internet.
Asking the police to put some investigative effort in rather than a dragnet sweep of every photo on every persons device is not too much to ask.
Importantly this should be the hard-capped precedent for every sort of crime. There's always a ton of other ways these people get caught without handing over preemptive access to every innocent persons phone. Same with terrorism, murder, etc.
Maybe I should add more text in support of the parent comment's point. When I read "UK and Australian governments are just two prime examples of waving terrorism and pedobear banners," my first thought was: all the while _protecting_ monsters like Savile.
I see push for CSAM as an alignment of powers in the form of "The Baptist and Bootlegger." The well meaning moral campaigners are a tool for folks who seek to surveil and control.
Respectfully, I don't understand where you're going with this at all. I could point to it and say, "Wow, we need to make sure nothing like that ever happens again, no matter what the cost to personal liberty!"
I can see that interpretation for sure. My impression is that folks like Savile will always be protected from CSAM by the same powers that be calling for CSAM. I'm not certain if that viewpoint is realism or cynicism, or if there is a difference.
> that betrays its user, is a self that betrays itself.
And cloud computing are corporations that take your self away from you.
And that was always going to happen. And people have been fighting that idea in the shadows for a while.
This issue is just bringing it to the foreground in a most intense way. "Think of the children."
They are coming for your self. They will take it. There's no doubt. There never was. Your self is too valuable and you've just been squandering it anyway.
Eventually you become Apple. Unless you start resisting, but that's hard. Just slooooowly warm the water. This froggy is fine. Nice and warm. Just don't touch the bottom of the pot.
>the system is just a few bit flips away from scanning every photo on your device
I prefer to think of it as being just one national security letter away from that happening.
Which is of course a Schrödinger's cat kind of thing. It's already been sent. Or it hasn't. But why would the national security apparatus not take advantage of this obvious opportunity?
Anyone giving benefit of the doubt to this kind of stuff nowadays is IMO very naive or just poorly informed of recent digital history.
How would the agency issuing an NSL be able to generate a hash of a photo they’re looking for? Presumably if they already had a photo to derive the hash they’d already have whatever it is they’re searching for.
Because you're searching for people, not documents.
For a normal law enforcement context, say you bust 1 drug dealer, and are trying to find more. Maybe they have an image of a "contract" (document outlining terms of a drug deal) on their phone. You could find the other parties to that contract by searching everyone's phones for that image.
For a national security context, you could imagine equivalents to the above; you could also imagine searching for classified documents that are believed to have been leaked, or searching for documents you stole from an adversary that you believe their spies are likely to be storing on their phones.
I'm saying documents here instead of images, but plenty of documents are just images, and I have little doubt that they could get this program to expand to include "scan PDFs, because you could embed CSAM in a PDF" (if it doesn't already?).
I think you already got one great reply, I have just one thing to add to it: your post literally presupposes a 1.0 version of this software that never has its feature set expanded. I don't think that's a reasonable assumption. After all, with 1.0 the goal of catching this class of person is barely achieved. They'll likely arrest people, 99% of whom are just CSAM jpeg collectors who get an extra kick out of viewing a class of image that is super illegal. And nothing more.
Then for version 2.0, they'll realize that INTERPOL, the FBI, whoever, can provide them a combo of a hash plus one string of text that can nail a known active producer. The holy grail. This small feature addition, getting so much closer to the original "moral goal," will prove too appealing to pass up. Now all the code is in place for the govt to pass over requests for data pinpointing mere common criminals.
The program has the capability to upload questionable photos for review.
Just make it do the equivalent of .* over all photos on the device. It would be hard to argue that's more difficult than scanning for a specific hash.
And there is nothing image-specific about it. Extending this to scan arbitrary data is probably not that much code, depending on how the program is configured.
I agree. Our phones tend to hold all of our most intimate data. Violating them is akin to violating our homes. It would be nice if the laws saw it this way.
I feel like this is a false sense of security. Even before this change, they could easily access and scan photos on your device. If they do any on-device post-processing of images, they already do.
It’s not a false sense of security, it’s a clear delimitation between theirs and mine; Debian package maintainers can also slip a scanner on your machine but that is a big line to cross on purpose and without notifying the user.
That is technically true, but in a very real, practical sense everyone here using OSS absolutely is trusting a third party, because they are not auditing every bit of code they run. For less technical people there is effectively zero difference between open and closed software.
It’s really disingenuous to suggest that open source isn’t dependent on trust, you just change who you are trusting. Even if the case is someone else is auditing that code, you’re trusting that person instead of the repository owners.
I’ll concede that at least that possibility to audit exists but personally I do have to trust to a certain extent that third parties aren’t trying to fuck me over.
> Even if the case is someone else is auditing that code, you’re trusting that person instead of the repository owners.
Suppose Debian's dev process happened at monthly in-person meetings where minutes were taken and a new snapshot of the OS (without any specific attribution) released.
If that were the case, I'd rankly speculate that Debian devs would have misrepresented what happened in the openssl debacle. A claim would have been made that some openssl dev was present and signed off on the change. That dev would have then made a counterclaim that regular procedure wasn't followed, to which another dev would claim it was the openssl representative's responsibility to call for a review of relevant changes in the breakout session of day three before the second vote for the fourth day's schedule of changes to be finalized.
Instead, there is a very public history of events that led up to the debacle that anyone can consult. That distinction is important-- it means that once trust is in question, anyone-- including me-- can pile on and view a museum of the debacle to determine exactly how awful Debian's policy was wrt security-related changes.
There is no such museum for proprietary software, and that is a big deal.
That's certainly true, and it is a strong 'selling point,' so to speak, for open software. But openness is just one feature of many that people use for making considerations about the sort of software they run and frankly, for an average consumer, it probably weighs extremely low on their scale, because in either case it's effectively a black box, where having access to that information doesn't actually make them more informed, nor do they necessarily care to be informed.
Most people don't care to follow the controversies of tech unless it becomes a tremendously big issue, but even then, as we've seen here, there are plenty of people that simply don't have the technical acumen to really do any meaningful analysis of what's being presented to them and are depending on others to form their opinion, whether that be a friend/family member or some tech pundit writing an article on a major news organization's website.
Trusting Apple presents a risk to consumers but I'd argue that for many consumers, this has been a reasonable risk to take to date. This recent announcement is changing that risk factor significantly, though in the end it may still end up being a worthwhile one for a lot of people. Open Source isn't the be all end all solution to this, as great as that'd be.
Thinking about this.. I guess my trust is that someone smarter than I am will notice it, cause a fuss, and the community will raise pitchforks and.. git forks. My trust is in the community; I hope it can stay healthy and diverse for all time.
Trusting in a group of people like you to cover areas you might not is the benefit of open source and a healthy community.
With Apple you have to trust them and trust they don't get national security order.
I trust that if everyone who had the ability to audit OSS got a national security order, it would leak, and serving such orders would be impossible for the many auditors who live in other nations.
Maybe if you drink from the NPM/PyPI firehose without checking (as too many do, unfortunately).
For a regular Linux distribution there are maintainers updating packages from upstream source who can spot malicious changes slipped in upstream. And if maintainers in one distro don't notice, it is likely some in another distro will.
And there are LTS/enterprise distros where upstream changes take much longer to get in and the distro does not change much after release. Making it even less likely a sudden malicious change will get in unnoticed.
Somewhere along the line someone is producing and signing the binaries that find their way onto my computer; they could produce those binaries from different source code and I would be none the wiser.
Debian tries to be reproducible, so to avoid being caught they might need to control the mirror so that they could serve the tampered build only to me. I.e. if I'm lucky it would take a total of 2 people to put malicious binaries on my computer (1 with a signing key, 1 with access to the mirror I download things from).
The only way what you said is not true for any networked device is to just go down to the river and throw it in and never use a digital device again. It's not a false sense of security, it's a calculated position on security and what you will accept, moving spying from the server to the phone was the last straw for a lot of people.
Apple already scans your photos for faces and syncs found faces through iCloud. I’d imagine updating that machine learning model is at least as straightforward as this one.
They're searching for different things though. To my knowledge, before now iOS has never scanned for fingerprints of specific photographs. It would be so darn easy to replace the CSAM database with fingerprints of known Tiananmen Square photos...
That is a distinction without a difference. I’m sure you could put together quite a good tank man classifier (proof: Google Reverse Image Search works quite well), and it’d catch variations which a perceptual hash wouldn’t.
The only difference is intent. The technical risk has not changed at all.
The technical risk to user privacy - if your threat model is a coerced Apple building surveillance features for nation state actors - is exactly the same between CSAM detection and Photos intelligence which sync results through iCloud. In fact, the latter is more generalizable, has no threshold protections, and so is likely worse.
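For anyone skimming past the terminology: a perceptual hash just boils an image down to a tiny fingerprint that survives resizing and recompression, which is exactly why a trained classifier generalizes further than a hash list. A toy sketch of the idea (a plain "average hash" in Python, assuming Pillow is installed; this is emphatically not NeuralHash, just the general concept):

    # Toy "average hash" (aHash) -- NOT Apple's NeuralHash, just an
    # illustration of what a perceptual hash does: reduce an image to a
    # small fingerprint that survives resizing/recompression, but not
    # cropping or heavy edits (which is where a classifier wins).
    from PIL import Image  # assumes Pillow is installed

    def average_hash(path, hash_size=8):
        img = Image.open(path).convert("L").resize((hash_size, hash_size))
        pixels = list(img.getdata())
        avg = sum(pixels) / len(pixels)
        bits = 0
        for p in pixels:
            bits = (bits << 1) | (1 if p > avg else 0)
        return bits

    def hamming_distance(a, b):
        return bin(a ^ b).count("1")

    # Two images "match" if their fingerprints differ in only a few bits:
    # hamming_distance(average_hash("a.jpg"), average_hash("b.jpg")) <= 5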
It's the legal risk that is the biggest problem here. Now that every politician out there knows that this can be done for child porn, there'll be plenty demanding the same for other stuff. And this puts Apple in a rather difficult position, since, with every such demand, they have to either accede, or explain why it's not "important enough" - which then is easily weaponized to bash them.
And not just Apple. Once technical feasibility is proven, I can easily see governments mandating this scheme for all devices sold. At that point, it can get even more ugly, since e.g. custom ROMs and such could be seen as a loophole, and cracked down upon.
This hypothetical lacks an explanation for why every politician has not already demanded Apple (or, say, Google) do this scope creep for photos stored in the cloud, where the technical feasibility and legal precedent have already been established by existing CSAM scanning solutions deployed at scale.
I have to note that one of those solutions deployed at scale is Google's. But the big difference is that when those were originally rolled out, they didn't make quite that big of a splash, especially outside of tech circles.
I will also note that, while it may be a hypothetical in this particular instance as yet, EU already went from passing a law that allows companies to do something similar voluntarily (previously, they'd be running afoul of privacy regulations), to a proposed bill making it mandatory - in less than a year's time. I don't see why US would be any different in that regard.
Ok, but now you've said that the precedent established by Google and others already moved legislation requiring terrible invasions of privacy far along. You started by saying Apple's technology (and, in particular, its framing of the technology) has brought new legal risk. What I'm instead hearing is that the risk would be present in a counterfactual world where nothing was announced last week.
At this point of the discussion, people usually pivot to scope creep: the on-device scanning could scan all your device data, instead of just the data you put on the cloud. This claim assumes that legislators are too dumb to connect the fact that if their phone can search for dogs with “on-device processing,” then it could also search for contraband. I doubt it. And even if they are, the national security apparatus will surely discover this argument for them, aided by the Andurils and NSOs of the world.
As I have repeatedly said: the reaction to this announcement sounds more like a collective reckoning of where we are as humans and not any particular new risk introduced by Apple. In the Apple vs. FBI letter, Tim urged us to have a discussion about encryption, when we want it, why we want it, and to what extent we should protect it. Instead, we elected Trump.
The precedent established by Google et al is that it's okay to scan things that are physically in their data centers. It's far from ideal, but at least it's somewhat common sense in that if you give your data to strangers, they can do unsavory things with it.
The precedent now established by Apple is that it's okay to scan things that are physically in possession of the user. Furthermore, they claim that they can do it without actually violating privacy (which is false, given that there's a manual verification step).
The precedent established by Apple, narrowly read, is it’s ok to scan data that the user is choosing to store in your data center. As you pointed out, this is at least partly a legal matter, and I’m sure their lawyers - the same ones who wrote their response in Apple vs. FBI I’d imagine - enumerated the scope or lack thereof.
Apple’s claim, further, is that this approach is more privacy-preserving than one which requires your cloud provider to run undisclosed algorithms on your plaintext photo library. They don’t say this is not “violating privacy,” nor would that be a well-defined claim without a lot of additional nuance.
Nonsense. Building an entire system, as opposed to adding a single image to a database, is a substantially different level of effort. In the US at least, this was used successfully as a defense. The US cannot coerce companies to build new things on their behalf, because it would effectively create "forced speech," which is forbidden by the US Constitution. However, they can be coerced if there is minimal effort involved, like adding a single hash to a database.
Photos intelligence already exists, and if people are really going to cite the legal arguments in Apple vs. FBI, then it’s important to remember the “forced speech” Apple argued it could not be compelled to make was changing a rate limit constant on passcode retries.
Exactly this. The whole thing is a red herring. If Apple wanted to go evil, they can easily do so, and this very complex CSAM mechanism is the last thing that will help them.
I’ve read your comments, and they are a glass of cold water in the hell of this discourse. This announcement should force people to think about how they are governed - to the extent they can influence it - and double down on Free Software alternatives to the vendor locked reality we live in.
Instead, a forum of presumably technically savvy people are reduced to hysterics over implausible futures and a letter to ask Apple to roll back a change that is barely different from, and arguably better than, the status quo.
We need both - develop free software alternatives (which means to stop pretending the alternatives are good enough), and to get real about supporting legal and governance principles that would protect against abuses.
If people want to do something about this, these are the only protections.
A false positive in matching faces results in a click to fix it or a wrongly categorized photo. A false positive in this new thing may land you in jail or have your life destroyed. Even an allegation of something so heinous is enough to ruin a life.
The "one in trillion" chance of false positives is Apple's invention. They haven't scanned trillions of photos and it's a guess. And you need multiple false positives, yet no one says how many, so it could be a low number. Either way, even with how small the chance of it being wrong is, the consequences for the individual are catastrophic. No one sane should accept that kind of risk/reward ratio.
"Oh, and one more thing, and we think you'll love it. You can back up your entire camera roll for just $10 a month and a really infinitesimally minuscule chance that you and your family will be completely disgraced in the public eye, and you'll get raped and murdered in prison for nothing."
I literally do not take that risk in 2021. I do, currently, make the reasoned assurance that the computational overhead of pushing changes down to my phone, and the general international security community, are keeping me approximately abreast of whether my private device is actively spying on me (short answer: it definitely is, longer answer: but to what specific intent?)
Apple's new policy is: "of course your phone is scanning and flagging your private files to our server - that's normal behavior! Don't worry about it".
There is also an argument that the cloud scanning is a few bit flips away from allowing random Apple employees to look through all my photos, while on-device scanning means they have to be specifically looking for certain content. On-device scanning is probably preferred if one's fear is someone stealing their nudes, which is one of the more common fears about photo privacy and one that Apple has already shown is a problem for them.
When you choose to use a cloud (aka someone else's computer), you trust that someone else, because you're no longer in control of what happens to your data once it leaves your device. That's a sane expectation, anyway. But you don't expect your own device to behave against your interests.
Sure, but I don't particularly care because I don't upload sensitive photos to iCloud. As I'd recommend to everyone else. If it's on someone else's computer, it's not yours.
But, I suppose my iPhone was never really mine either. I knew that, I just never quite put two and two together...
>Sure, but I don't particularly care because I don't upload sensitive photos to iCloud
So you are saying you have nothing to fear because you aren't hiding anything?
Is this a tacit admission that you don't want them scanning your phone for CSAM because they will find something?
I'm obviously not seriously accusing you of anything, just pointing out how your line of argument applies equally to privacy whether on the cloud or on your device.
"I'm not giving my photos to anyone. I am just taking photos on my phone and Apple is automatically backing them up to iCloud. I have no idea how that feature was turned on or how to turn it off." ~ most normal people.
People here too often only think of these issues from the perspective of the type of person who browses HN. Apple is thinking of this from the perspective of an average iPhone users.
Is this issue getting much attention outside of tech circles? I have seen a few news stories here or there, but nothing major. I have not heard a word about it from any of my non-tech friends or from the non-tech people I follow on social media. Meanwhile people on HN are acting as if this is one of the biggest stories of the year with multiple posts on it every day.
There is a huge separation in the importance that the tech community puts on the general concept of privacy while the average person rarely worries about it.
I mean, absent a Gallup poll there's really no way to know, but we were discussing this at my office today, and I work at a graphic design studio, not a software development firm.
You have to configure iCloud backups when setting up a phone for the first time. Someone who is non-technical, but privacy conscious isn’t going to do this.
You could enable scanning of all files using a few bit-flips, but that doesn't have any effect - the results need to be evaluated by someone. Afaik, the system is designed such that the client side does not know whether a match occurred. So to actually report anyone based on the results of the proposed CSAM scanner, Apple also needs to upload all of the scanned files, including the client-side scan result, to figure out whether there's a match.
Obviously, this is stupidly oversimplified, I have no idea how Apple has structured their code. But the fact of the matter is, if the scanning routine is already on the phone, and the photos are on the phone, all anyone has to do is change which photos get scanned by the routine...
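To make that "change which photos get scanned" point concrete, here's a purely hypothetical sketch (made-up names, nothing from Apple's actual code) of how small the gap between "scan what's queued for iCloud" and "scan everything" could plausibly be:

    # Hypothetical illustration only -- invented names, not Apple's code.
    # The point: the scope of "what gets scanned" can live in a single
    # predicate, which is why people call it a policy change, not a rebuild.
    def photos_to_scan(all_photos, scan_everything=False):
        if scan_everything:  # the feared "bit flip"
            return list(all_photos)
        # otherwise only photos queued for iCloud upload (hypothetical attribute)
        return [p for p in all_photos if p.pending_icloud_upload]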
By the way, they already scan photos that aren't uploaded to iCloud. I've never used iCloud and I can go into the Photos app and search for "food," for example.
Right, the difference is that now, you can be reported to the authorities for a photo that the CSAM algorithm mistook for child pornography, whereas before the image classifying was purely for your own use.
Right, but neither is the CSAM code being used to detect anything but CSAM.
If a bit can be flipped to make the CSAM detector go evil, surely it can be flipped to make photo search go evil, or spotlight start reporting files that match keywords for that matter.
There is nothing special about this CSAM detector except that it’s narrower and harder to repurpose than most of the rest of the system.
It has already been added. That's the point here. The phone already has the code to report its own owner. People are rightfully pissed about that.
Also, adding a one-liner of code is a lot more than a bit-flip. That's the addition of a feature, which is a major business decision. Code that already exists is a lot closer to "misfiring" than code that simply does not. Flipping a policy config, OTOH, could be "explained" away a lot easier. In fact, Big Tech does it all the time! (Remember Facebook's "Oh. Sorry. It was a bug."?)
----
Please stop this poor attempt at moving the goalposts by saying, "But, similar code could be added at any time in the future! Why're you guys complaining now?!"
It's also not about where the scanning is happening, or whether that's new or old. The old scanning is not a problem, precisely because it does no reporting. The new scanning IS a problem, precisely because it does the reporting.
The former is not a problem on its own. The latter is. That's the point you keep missing.
Right, but the CSAM scanning routine is absurdly narrow and difficult to use for detecting anything but CSAM, whereas a general purpose hash matching algorithm really is just a few lines of code.
This whole ‘now that they have a scanner it’s easier’ reasoning doesn’t make sense. iOS already had numerous better matching algorithms if your goal is to do general spying.
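Just to put weight behind the "few lines of code" claim: a blunt, general-purpose watchlist scan over a directory tree really is this small. Everything here is a placeholder (made-up hash, made-up path), but it shows why the generic capability was never the hard part:

    # Generic exact-hash watchlist scan -- a placeholder sketch, not any
    # real product's code. The hash and the directory below are made up.
    import hashlib
    from pathlib import Path

    WATCHLIST = {
        "3a7bd3e2360a3d29eea436fcfb7e44c735d117c42d1c1835420b6b9942dd4f1b",
    }

    def scan(root):
        hits = []
        for f in Path(root).rglob("*"):
            if f.is_file():
                digest = hashlib.sha256(f.read_bytes()).hexdigest()
                if digest in WATCHLIST:
                    hits.append(f)
        return hits

    # print(scan("/path/to/photos"))  # hypothetical directory

The CSAM pipeline as described (perceptual hashing, thresholds, human review) is a very different amount of machinery than that.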
It could be 1 or 2, but the claim is that it is set to only detect people who are actually building a collection of CSAM images because Apple doesn’t want this system to flag people who are unlikely to be actual criminals. I.e. they don’t just want no false positives, they don’t want positives that aren’t highly likely to be criminal. I’m guessing it’s a lot more than 2.
> but the claim is that it is set to only detect people who are actually building a collection of CSAM
From what I read, the threshold is for a different purpose: the false positives would otherwise be too many, so the threshold is introduced to reduce the number of reports. It is a hack to work around the false positives, not a threshold meant to catch people with big collections.
I have seen both in comments from Apple. Certainly spokespeople have talked about wanting any reports to NCMEC to be actionable.
I think a lot hinges on what you call a ‘false positive’. It could mean ‘an image that is not CSAM’ or it could mean ‘an account that has some apparent CSAM, but not enough to be evidence of anything’.
I think it is clear that Apple will have no choice: if they find even one confirmed CSAM image, they can't say "it is just one image, so it is not a big collection"; they have to send it to whoever handles it.
What would help me with this kind of thing would be more transparency: knowing the algorithm, the threshold, the real false-positive numbers, whether the hashes can be set per country or per individual, and whether independent people can look inside iOS and confirm that the code works as described...
What would also help is knowing for sure how many children were actually saved by this kind of system.
> I think it is clear that Apple will have no choice: if they find even one confirmed CSAM image, they can't say "it is just one image, so it is not a big collection"; they have to send it to whoever handles it.
This is a misunderstanding of how it works. The threshold has to be met before a detection occurs. Unless the threshold is met Apple doesn’t get to know about any images.
As for independent auditing and transparent statistics, I fully agree.
Let's say the threshold is 10: you trigger the threshold, so Apple checks all 10 images, but 9 are false positives and one appears to be a correct match. Apple will report you for that 1 image... so my point is that it is not designed to catch people with big collections of CP; it is designed not to trigger too often.
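For what it's worth, some rough back-of-the-envelope arithmetic (every number below is made up, not Apple's published figures) shows how strongly a per-account threshold suppresses account-level false positives, whatever its motivation:

    # Back-of-the-envelope only: all numbers are assumptions, not Apple's.
    # With per-image false-match probability p over n photos, the chance of
    # reaching a reporting threshold t is a binomial tail probability.
    from math import comb

    def prob_at_least(n, p, t):
        # P(X >= t) for X ~ Binomial(n, p), computed via the complement
        return max(0.0, 1.0 - sum(comb(n, k) * p**k * (1 - p)**(n - k)
                                   for k in range(t)))

    # e.g. 20,000 photos and a (made-up) one-in-a-million per-image rate:
    for t in (1, 2, 5):
        print(t, prob_at_least(20_000, 1e-6, t))

With those made-up inputs, a threshold of 1 flags roughly 2% of accounts, while a threshold of 5 is already down around one in tens of billions. So whether you read the threshold as "catching collectors" or as "papering over false positives," the arithmetic is the same.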
Isn't the scanning routine searching for fingerprints? What happens if someone adds a fingerprint to the database, which matches something other than CSAM?
It’s way more complex than that, and has been engineered to prevent exactly that scenario.
I recommend you actually check out some of Apple’s material on this.
The idea that it's just a list of fingerprints or hashes that can easily be repurposed is simply wrong, but it is the root of almost all of the complaints.
Yes, I did. You do realise that CSAM is just a policy control, and there is literally nothing technical preventing Apple from adding non-CSAM content to the same policy and systems?
I feel like you may not have actually read the paper in depth. The paper clearly shows this is a generalised solution for detecting content that perceptually matches a database. The "what" is CSAM today, but nothing about the technicals require it to be CSAM.
To answer your specific response of safety voucher and visual derivative:
- The 'safety voucher' is an encrypted packet of information containing a part of the user's decryption keys, as well as a low resolution, grayscale version of the image (visual derivative). It is better described as "backdoor voucher".
- The 'visual derivative' is just the image but smaller and grayscale.
Neither of those two technologies has anything to do with CSAM specifically. Apple's statement that they will only add CSAM to the database is a policy statement. Apple could literally change it overnight to detect copyrighted content, or Snowden documents, or whatever.
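If it helps to see how content-agnostic the described pieces are, here's the comment above restated as a data shape (hypothetical names and fields, not Apple's actual voucher format):

    # Conceptual restatement of the description above -- hypothetical names,
    # not Apple's actual format.
    from dataclasses import dataclass

    @dataclass
    class SafetyVoucher:
        encrypted_key_share: bytes   # fragment of the account's decryption key material
        visual_derivative: bytes     # low-resolution, grayscale copy of the image

Nothing in that shape knows or cares what the matched database contains, which is exactly the policy-versus-technology point being made.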
It doesn’t matter where the fingerprints are coming from, they’re all just fingerprints. Today they come from NCMEC. Tomorrow they could be from the Chinese Communist Party.
Yes. This is it. When I wrote the above I couldn’t quite put my finger on why I felt like this but this is the reason. It’s the slippery slope of mine / yours.
Also if you remotely think the scanning is going to stop at CSAM I have a bridge in brooklyn to sell you. Copyrighted material, national security, thought crime, are already inside this trojan horse waiting to jump out.
In a few years, all the pedophiles will stop using iPhone, and then only innocent people will be scanned in perpetuity. Remember, if someone makes new CSAM, it won't match any hashes so even "new" pedophiles won't get caught by this.
So really, the steady state is just us regular folks getting scanned. As the years go on, what is defined as CSAM will morph into "objectionable material", and will then include "misinformation" like today. They say they won't right now, but where will things be in a few years?
When the Nazis came for the communists, I remained silent;
I was not a communist.
When they locked up the social democrats, I remained silent;
I was not a social democrat.
When they came for the trade unionists, I remained silent;
I was not a trade unionist.
When they came for the Jews, I remained silent;
I was not a Jew.
When they came for me,
there was no one left who could protest.
Yes, that's exactly how I feel. I'd still hate it if my iCloud uploads are scanned, but I'm already assuming that anyway.
But the fact that my iOS device can potentially report me to any authorities, for whatever reason, is crossing a line that makes it impossible to ever own an iPhone again.
I bought my first one in 2007 so I'm not saying this lightly..
Does anybody know if this policy will extend to macOS too?
Thankfully flip phones aren't necessary! Startups like Purism are at the cutting edge of building privacy-respecting devices, check it out here: https://puri.sm
I plan to get one as my next phone, although it should be said that these offerings are still a bit rough around the edges currently, but getting there.
Great. I recently invested $3800 in an excellent, new iMac. Now I'm starting to wonder if I should have spent a couple thousand less for a barebones PC and installed my favorite Linux distro. It would have done 75% of what I needed, and the other 25% (music and video work)... well, that's the tradeoff.
If anyone in my circle of family, friends, and social network asks my advice, the formerly easy answer "Get a Mac, get an iPhone; you won't regret it!" is probably going to be replaced with something more nuanced ("Buy Apple, but know what you're getting into; here's a few articles on privacy....").
The CSAM detection technical summary [1] only mentions iOS and iPadOS.
If it does come to macOS it will be part of Photos.app, as that's the only way to interact with iCloud Photos. I would recommend you to avoid that app and cloud in general if you care about privacy.
Yes, it will come with the next "+1" version due out in late fall (winter?). It will be integral to the iCloud Photos platform. Supposedly if you turn off iCloud Photos backup it gets turned off as well.
>Does anybody know if this policy will extend to macOS too?
I don't see how it could be in the same way, or at least it could only be on brand new ARM Macs, right? The thing is that at least on Intel Macs, Apple simply doesn't control the whole stack. I mean, obviously, while it takes more effort, it's perfectly possible to get x86 macOS running virtualized or directly on non-Apple hardware entirely. So any shenanigans they try to pull there can get subverted or outright patched out. Without the whole hardware trust chain and heavy lockdown they have on iOS, this sort of effort, even if attempted, becomes less threatening. I guess we'll see what they go for though :(. They could certainly make it pretty nasty anyway, so as much as anything it's more of a forlorn hope that they'll mainly focus on low-hanging fruit.
I think what you describe, and what is shown by Apple, is "the" concrete piece of evidence for something I have long suspected: there is no longer any product person running the company. It used to be Steve Jobs.
You have people arguing over the technical definition of privacy, or on-device versus off-device terminology. It doesn't matter. The user doesn't understand any of it. And they just feel exactly what you wrote: an intrusion into their private space by a company that has been telling them how much it values their privacy for the better part of a decade.
This sort of technical explanation used to be a tech-company thing, a Google / Microsoft thing. It absolutely makes sense to nerds, but means nothing much to a layman. Now Apple is doing the same. If you watch their keynotes, you'll notice the increased use of technical jargon and performance numbers over the past 5 years, something Steve Jobs's keynotes used to keep to a minimum.
There are plenty of other very real concerns with this. For example, say some activist intern at Apple decides they don't like your politics and hits the report button; next thing you know, the police show up at your door and are confiscating all your devices. Just one of many potential abuses.
I wonder how much of this is shifting the Overton window. I'd really rather not have people scanning my data at all.
"If you give me six lines written by the hand of the most honest of men, I will find something in them which will hang him." said Cardinal Richelieu in the 16th century.
I'm pretty sure my phone and online accounts hold a lot more data than just six lines. How do I know there won't be accidents, or 'accidents'?
Why is this not how you should look at it? This is exactly how I look at it. It's your device; no company should be able to scan shit on a device that they have sold to someone. That's like keeping the key to a house after you have sold it and then sneakily checking every week whether the new owners are doing illegal stuff.
The iPhone runs software that is under Apple's full control. They take active measures to stop the general public from auditing its behavior (i.e. no source code, no API specifications, etc).
In my opinion, such a device can never be considered as belonging to their users or "trusted" in any reasonable way. The reason is that its root functionality always remains in control of a different entity.
I believe that such a relationship can more accurately be described as a lease.
Instead of providing specifications and source code, which would help establish the fundamental trust in their products and their behavior, Apple seems to increase their reliance on the "trust me bro" promises, which their current user base has accepted.
> I know that isn't how you should look at it, but that's still how it feels.
Why do you think you know that? You didn't say. It seems like that is how you should see it since they are using your processing power on your device potentially against you.
Oh hey.. Roomba scanning for illicit drugs on your coffee table. It sounds ridiculous, but that thing is a literal camera roaming around your entire house, with machine learning.. so..
They didn’t do themselves any favors by blurring the line between apps and the OS. If it’s just Apple’s Photos app doing this, you can install a different app and not use that one.
> I know that isn't how you should look at it, but that's still how it feels
It's definitely how you should look at it because it's right. Once a device is compromised in this way, there's no going back and the erosion of privacy and expansion of scope will become neverending. If a capability exists, it will just be too hard for spooks to keep their fingers out of the cookie jar.
Don’t second-guess yourself. The viewpoint you express is completely valid. Other child comments have pointed this out, but corporate messaging does not get to tell you how _you_ choose to look at things.
I think it's just more visible. It's easy to upload to the internet and think "I'm uploading to the internet"; you don't think "I am sharing this with law enforcement and Apple's third parties."
The detailed specifics of this would be interesting to know. If I accidentally flip iCloud photos on and then turn it off within a few minutes, would that give the system license to scan all my photos once? What if my phone is in airplane mode and I turn iCloud photos on? Will the system scan photos and create vouchers offline for later upload?
Edit: On a related note, will turning on iCloud photos now come with a modal prompt and privacy warning so you're fully informed and can't do it "by accident"? Pardon my ignorance if this happens already. I've never turned mine on and am not about to start.
Turning your own device against you is better? There is no way that is better for you as a user. It's great for governments and police forces but it doesn't do anything for you as a user but spy on you without getting a warrant.
Often, despite all its disadvantages, on-device scanning would at least let you see what was being scanned for.
But Apple have circumvented that with NeuralHash. There's no way for the user to verify that only CSAM is being detected - which could be partially accomplished by having iOS only accept hashes signed by multiple independent child protection organisations (with some outside of FVEY).
Wild thought: could on-device scanning be considered a violation of the 3rd Amendment (quartering soldiers [enforcing agents of the government] in peoples homes [property]) ?
"...You can't have a backdoor that's only for the good guys. That any back door is something that bad guys can exploit."
"No one should have to choose between privacy and security. We should be smart enough to do both."
"You're assuming they're all good guys. You're saying they're good, and it's okay for them to look. But that's not the reality of today."
"If someone can get into data, it's subject to great abuse".
"If there was a way to expose only bad people.. that would be a great thing. But this is not the world. .... It's in everyone's best interest that everybody is blacked out."
"You're making the assumption that the only way to security is to have a back door.....to be able to view this data. And I wouldn't be so quick to make that judgement."
No matter the mental gymnastics now, it's still a pretty drastic departure from what it used to be.
"The ark of the covenant is open and now all your faces are going to melt" - to paraphrase "The Thick of It".
Prior to this, when a government tapped Apple on the shoulder asking them to scan for what they (the government) deem illicit material, Apple could have feasibly said "we cannot do this, the technology does not exist".
Now the box is open, the technology does exist, publicly and on the record - Now we are but a national security letter, or a court order with an attached super-injunction, away from our computers and phones ratting us out for thought crime, government critique and embarrassing the establishment. At first it's the photos, but what about messages and browsing history or the legion other things that people use their devices for, when do they follow?
CSAM is beyond abhorrent, morally reprehensible and absolutely should be stopped. This is not in question. And I have no reason to doubt that there is sincerity in their goal to reduce and eliminate it from Apple's perspective.
But they have root; this system and its implementation are a fundamental betrayal of the implicit trust that their users have placed in them (however foolish that was, in hindsight). It is just not possible to regain that trust now that it is gone, when the level of control is so one-sided and the systems involved are so opaque.
> the technology does exist, publicly and on the record
And not only that, but with a little footnote saying "* Features available in the U.S." - in other words, publicly and on the record: this can be switched on and off on per jurisdiction. If another jurisdiction comes and says "here's my list of hashes of illegal images", how can Apple say "sorry, can't do..."?
Well stated. I am one of those who trusted Apple, and though it seems totally obvious now (the problems that come with not having full control of one's device), for whatever reason it took all this for me to realize that privacy truly requires that control.
You are also right that it would take a lot for me to trust Apple again, and they are unlikely to do the things required for that, since transparency and openness are not typically how they operate.
I wasn't implying they were equivalent, but that doesn't make the second one any less awful.
In fact, it is the more insidious danger of the two, since it can easily strip you from all your freedoms while not evoking as visceral a reaction as the first one.
Good. I would be furious if I worked there. After the San Bernardino case, I viewed them as the paragon of privacy and security, to the extent I ignored most criticisms, including their lack of support for right-to-repair and concerns over App Store rejections. All of that is back on the table for me after this decision.
It is out-of-step with everything they've been doing up to this point, and it makes me wonder who has something over the head of Apple that we aren't hearing about. The stated benefits of this tech are far outweighed by the potential for harm, in my view.
If Apple pushes forward on this, I want to hear a lot more from them, and I want it to be a continuing conversation that they bear until we all understand what's going on.
What's fishy to me is the fact that they are letting this be known. Like, if you're trying to capture people who share CSAM or have it on their phone, why would you go around saying that you can now scan for it?
I think this announcement would have been well known to cause a major outrage so why say anything? I'm wondering if this is a cover up for something else. Get people talking about this in the news and getting outraged about it while something else goes unnoticed.
Announcing it isn't fishy. If they didn't announce it, they'd risk a whistleblower telling the story for them. In fact, the story did break a day before Apple could officially announce it. That may already have hurt them more because they weren't even part of the discussion at that point. What you want is to tell the story on your terms. If someone beats you to it you can't do that.
Unrelated to Apple, but that's one of the big problems with misinformation. You can't get ahead of it because it's all made up.
> I'm wondering if this is a cover up for something else. Get people talking about this in the news and getting outraged about it while something else goes unnoticed.
Apple turned over all of the San Bernardino iPhone data, as iCloud Backup information is not e2e encrypted and is readable in full by Apple without ever touching or decrypting the phone.
The Apple vs FBI narrative was a coordinated marketing move (coordinated between the FBI and Apple) to push that "paragon of privacy and security" brand message. It's false.
Apple explicitly preserves a non-e2e backdoor in their cryptography for the FBI:
They didn't unlock that phone for the FBI. The FBI got some foreign contractor to do it. Yes, Apple did turn over data to which they had access. Their stance back then was not a farce.
Can you imagine the beautiful New World we are building under the wise leadership of corporations and politicians motivated by unstoppable greed, control and power?
Can you imagine "the screeching voices of minorities with critical thinking"?
The big picture of hyperconnected future in which automated systems will decide your fate is in place and it's running well. It is not perfect. Yet. But it will be. Soon.
You will find a way to cope. As usual. As a collective. As an "exemplary citizen with nothing to hide".
"A blood black nothingness began to spin. Began to spin. Let's move on to system. System."
The system will grow. Powered by emotional reasoning created by the social engineers, paid well to package and sell "The New Faith" for the masses.
You are not consumers anymore. You are not the product anymore. You are the fuel.
Your thoughts and dreams, emotions and work are the blood of the system. You will be consumed.
"Do you get pleasure out of being a part of the system? System. Have they created you to be a part of the system? System."
Yes. The pleasure of collective truism. In the name of the "common good" - the big, the bold and the brave brush, built to remove any form of suspicion or oversight.
"A blood black nothingness. A system of cells. Within cells interlinked. Interlinked"
"Apple says it will scan only in the United States and other countries to be added one by one, only when images are set to be uploaded to iCloud, and only for images that have been identified by the National Center for Exploited and Missing Children and a small number of other groups."
First reference I read about adding other countries as a done deal, and especially about an incredibly opaque "small number of other groups" being involved as well.
Man, after all they've said about being obsessed with privacy they're really not doing themselves any favors here. What a tragedy.
Can someone explain how Apple being coaxed or coerced into searching all of our personal devices for illegal files by federal law enforcement is not an unconstitutional warrantless search?
Law Enforcement can also get access to that metadata -- it's called a mail cover. See U.S. v. Choate.
"A mail cover is a surveillance of an addressee's mail conducted by postal employees at the request of law enforcement officials. While not expressly permitted by federal statute, a mail cover is authorized by postal regulations in the interest of national security and crime prevention, and permits the recording of all information appearing on the outside cover of all classes of mail."
This cuts both ways. Our laws not evolving with technology is also the reason why tapping and intercepting secured digital communication is orders of magnitude more difficult than tapping someone's analog phone line, or why decrypting a hard drive is far more difficult for the police than sawing apart a safe.
Agreed. Technology has allowed abusive images to thrive and reach new consumers. Any pressure to "evolve laws with technology" will surely lead to more draconian anti-privacy measures.
Probably depends more on the kind of old lawyer. I've actually noticed that in general older people tend to be more privacy conscious than younger ones. Old people at least still know what secrecy of correspondence even is. It's not even a concept that exists any more for a lot of Gen Z.
In particular American supreme court arguments often stand out to me in how clearly the judges manage to connect old principles to newer technologies.
Meanwhile, it's predominantly young people who freely hand their information to mega-corps to be disseminated and distributed. Outside your (our) bubble young people don't give two craps about privacy.
It has nothing to do with them being "old". Saying that is a very ageist thing to say. I know a bunch of old techies who are completely against this. It's about lack of technical knowledge and how this is being twisted to seem okay. Your average millennial doesn't give a damn about it either.
They aren't being coaxed or coerced. If they were forced-- or even incentivized-- by the government to perform these searches, then they'd be subject to the Fourth Amendment's protection against warrantless searches.
Apple is engaging in this activity of their own free will for sake of their own commercial gain and are not being incentivized or coerced by the government in any way.
As such, the warrantless search of the customer data is lawful by virtue of the customer's agreements with Apple.
> Apple is engaging in this activity of their own free will for sake of their own commercial gain and are not being incentivized or coerced by the government in any way.
Honest question: how do you know this, for sure? Or is your comment supposed to be read as an allegation phrased as fact?
I'm assuming you're kidding here, but if you look at the list of countries already selling/supporting iPhones, what's not on the US export ban list that's a totalitarian regime?
My money is on this secretly being about nuclear, bio, chemical, or other WMD or "dangerous" material. I can't imagine that Apple would be so stupid as to open such a Pandora's box to an Orwellian state, and commit crimes against all children, in the name of making no statistical difference in the porn problem. There has got to be a hidden story here and severe pressure on top executives; otherwise I can't see how the math makes sense. Please keep digging, journalists!
It's most likely a demand by China so that they can create infrastructure to locate political dissidents. Oh look, a Winnie the Pooh Xi meme ended up in your gallery/inbox. Why is there a knock at the door? I'm pretty sure that's the real reason.
My ears perk up whenever I hear a "just think of the children" argument, because after Sandy Hook I'm pretty certain the US couldn't care less about children. There's a real reason behind this.
Whenever someone makes a "think of the children" argument it has absolutely nothing to do with whether they actually care about children. They just want to make it extremely difficult to counter-argue without being labelled as pedophile adjacent. It is a completely disingenuous argument 99% of the time it is used.
I feel like there's a fallacy for this but I'm not sure. Either way, it doesn't matter; the logic here isn't that the US couldn't care less about children, it's that the US cares more about gun rights than it does about children. But that doesn't say anything about the minimum level of care they have, only the maximum.
I don't know about the suggestion that it's the government of China pushing for the feature itself, but the fact the feature now exists and WILL be used by authoritarian regimes to scan for political content is clearly understood by Apple employees. From the article:
> Apple employees have flooded an Apple internal Slack channel with more than 800 messages on the plan announced a week ago, workers who asked not to be identified told Reuters. Many expressed worries that the feature could be exploited by repressive governments looking to find other material for censorship or arrests, according to workers who saw the days-long thread.
Apple isn't being pressured by China, but criticizing the CCP as the brutal dictatorship it is is not racist (and that idea is verbatim CCP propaganda).
Randomly suggesting that every authoritarian decision taken unilaterally by an American company was made to please the evil Chinese government, while offering no substantial proof, is not "criticizing the CCP"; it is just shoehorning the US far right's talking points into a thread that has nothing to do with them.
The wording of the CSAM law is that content should be scanned when uploaded. That condition "upon upload" triggers the 3rd party doctrine.
Apple has gone above and beyond here, not the actual US gov. So, the bill of rights doesn't apply to Apple's decision to scan content on device, just before it is uploaded.
Any chance you have a good link explaining this? I saw the parent comment earlier and wanted to add this. But I only recently learned that the US federal government cannot require or incentivize providers to scan user's private documents, so didn't want to post without clear sources.
Apple hasn't gone above and beyond anything here. The only way they can save face is laugh at the government and not give into their demands to set up a backdoor tool to scan every iphone user.
"The third-party doctrine is a United States legal doctrine that holds that people who voluntarily give information to third parties—such as banks, phone companies, internet service providers (ISPs), and e-mail servers—have "no reasonable expectation of privacy.""
Because it's a contract between you and Apple. You don't have to use their phones, and they're a corp, not the government. Personally I see it as an underhanded attempt by the government to use Apple as a police force in everything but actually deputizing them. They're using a loophole in the 4th Amendment to spy on you without a warrant.
As I understand it, they're acting as an agent of the government, but it's a private company. So the 4th Amendment, protecting against unreasonable searches _by the government_, does not apply.
Acting as an agent used to, anyway, mean the 4th Amendment attached to the company;
if they WERE NOT acting as an agent then the 4th would not apply. I think the way they get around that is to not report to the FBI but instead to NCMEC, a "non-profit".
> I think the way they get around that is to not report to the FBI but instead to NCMEC, a "non-profit"
No. The courts have already explicitly rejected the notion that NCMEC is a private entity. NCMEC can only handle child porn via special legislative permission and is 99% funded by the government.
The searches here are lawful because Apple searches out of their own free will and commercial interest and when there is something detected their employees search your private communications (which the EULA permits).
This is also why Apple must review the matches before reporting them-- if they just matched an NCMEC database and blindly forwarded the matches to NCMEC, then it would be the NCMEC conducting the search and a warrant would be required.
Reporting to the government doesn't make you a government agent, doing the search at the direction of the government does.
If a private citizen, on their own initiative, searches your house, finds contraband, and reports it to the government, they may be guilty of a variety of torts and crimes (both civil and criminal trespass, among others, are possibilities), there is no fourth amendment violation.
If a police officer asks them to do it, though, there is a different story.
> No, the Fourth Amendment applies to private actors acting on behalf of the government.
It applies to private actors acting as agents of the government, it doesn't apply to private actors who for private reasons not directed by the government conduct searches and report suspicious results to the government (other property and privacy laws might, though.)
Why stop there? Instead of getting pesky warrants to search apartments, the government could just contract the landlord to do the search for them. After all, the landlord owns the property and ownership trumps every other consideration in libertarian fantasy-land.
Because for at least the last 80 years the Constitution has not been seen as a document limiting government power, but instead as a document limiting the people's rights.
It has literally been inverted from its original purpose.
> Constitutional rights can be voluntarily waived.
That...varies. The right to a speedy trial can be waived. The right against enslavement cannot. You can consent to a warrantless or otherwise unreasonable search given adequate specificity, but outside of a condition for release from an otherwise-constitutional, more severe deprivation of liberty (e.g., parole from prison), I don't think it can be generally waived in advance, only for searches specifically and immediately consented to when being executed. But I don't have any particular cases in mind, and that understanding could be wrong. Either way, "constitutional rights can be waived" is definitely way too broad to be a useful guide for resolving specific issues.
You can definitely consent to giving up the right to not be subject to warrantless search and seizure. Everyone who ever joins the military under UCMJ allows their commander to conduct random inspections of personal property at any unannounced time.
There are limitations, of course. If you live off-post in private housing, your commander has no legal authority to inspect it. They can only go through your stuff if you live in government-owned housing.
That's not a waiver, that's a space where warrantless searches are reasonable under the Fourth Amendment and Congress's Art. I, Sec. 8 powers with regard to the military, etc. Otherwise, when we had conscription, conscripts, who do not freely consent, would have been immune.
It's not, but it's one of the few that doesn't have some verbiage along the lines of UDHR:
"In the exercise of his rights and freedoms, everyone shall be subject only to such limitations as are determined by law solely for the purpose of securing due recognition and respect for the rights and freedoms of others and of meeting the just requirements of morality, public order and the general welfare in a democratic society."
You can bet they were by the TLA intel agencies, but you will not find that documented anywhere other than "Met with FBI/NSA/XXX for liaison purposes". I'm sure they were given an ultimatum and they folded.
I stopped my Apple Music subscription and downgraded iCloud to free, then donated a year of those fees to the EFF.
$13/month is nothing to Apple, and I really regret it since photos in iCloud is really convenient, however it feels unethical to contribute to the Apple services system when it will inevitably be used to hurt people like dissidents and whistle blowers.
It is arrogant to blow off security and human rights experts outside of Apple as simply confused or mistaken, when there are sufficiently informed and imaginative people within Apple to understand the practical implications of preemptively investigating users. Such contempt for users is also a sign of worrisome complacency.
On the plus side, Apple has seemed to crowd out interest in third party OS user interfaces. Knowing that Apple believes you're a pedophile until they've run image analysis software on your stuff is one way to motivate people. Maybe 2021 will be the year of Linux on the desktop.
Anyway I hope that more users sacrifice the convenience of Apple services now that its hostility is apparent, and I hope that more people tell their political representatives that it is time for meaningful updates to anti-trust legislation. I know that I was complacent when I thought that having a choice, where at least one of the companies didn't place ads in the start menu, was a tenable state of the market.
I totally agree that those actions individually are not much. However, with enough people doing those small actions, it will add up in the long run. Plus we’re going to be better off with open alternatives.
They just lost a few thousand dollars from me alone, since I won't be upgrading any of my iDevices for the next model year. The savings go to privacy advocacy organizations.
> This project is probably going to cost millions, they won't catch any pedophiles, they will only scare them so they don't use Apple devices.
Mission accomplished? They can argue that encrypted iPhones aren’t being abused by pedophiles, and they can argue that alternative App Stores will be avenues for illegal material.
Unfortunately, what with all the negative publicity this is causing, Apple now has a huge incentive to catch somebody, anybody, just to justify the project. The thing about pedophiles is, all it really takes is an accusation; the public will presume guilt, the target's job and home and life are taken away, and Apple + NCMEC can say, "See? It works." Even if the target is later exonerated, the damage is done. The teams that vet the candidate images might even have quotas to fill. "How many did you catch last month?" Your innocent baby bath pictures might sweep you up in a net that destroys your life, just so some low-wage clown can claim they made quota.
End game (being charitable to them here) is they can now start encrypting iCloud photos, and iCloud backups for that matter, with a key that they do not have any way to access, while getting the FBI off their backs on this one hot button issue.
That by itself makes it look ok.
Until you consider a couple of counterarguments.
One, Apple has not actually enabled such encryption, with the keys out of Apple's reach, for iCloud backups.
Two, child protection in other countries will sometimes be defined in such repugnant terms that it will compromise Apple fully to scan for the hashes provided by “child protection” organizations in those countries.
“Child protection” is in quotes not because I think countries will get away with shoehorning, say, terrorist content hashes in as purported child pornography hashes. It’s in quotes because the concept of child protection can be so wildly, bizarrely corrupted in some countries for religious or ideological reasons. Who decides what is off limits for children, from having gay parents, to having friends of the opposite sex, to wearing an un-Islamic head covering? Well, each random government, of course, with Apple as the enabler, and individual and human rights be damned.
So while the most charitably viewed end game may be good, they seem to be papering over the real impacts this could have.
Big tech companies are under a lot of US govt pressure right now to crack down on CSAM, most especially Apple because of their very low number of reports compared to most other tech giants. I think Apple saw this as a way to ease some of that government pressure while not jeopardizing their ability to use end-to-end encryption, which something like the EARN IT Act could effectively make illegal by requiring a government backdoor for all encrypted cloud services that operate in the US.
Apple probably saw the on-device CSAM scanning as a small, widely-acceptable concession to make that could prevent much bigger crackdowns, but maybe didn't anticipate the level of blowback from people seeing the CSAM scanning itself as an unacceptable government backdoor on their own device.
> Secondly, it seems rife for abuse: dont like someone who uses an ios device, msg them some cp and destroy their entire life.
I see this a lot. First of all, if you have CP to send in the first place, that is already quite bad, and you've made yourself vulnerable. Messaging someone also reveals you as the sender. But my bigger question about this hypothetical is that Facebook, Google, Dropbox, Microsoft, etc. have been scanning for CP on their servers all this time, so it's not like this is a new proposition. Wouldn't it already be a big issue across the many, many more people who use those companies' cloud products?
I think a lot of people overestimate how hard it is to stay hidden for small jobs.
Furthermore there is a lack of imagination here:
- work from a hacked account (I've seen Gmail accounts that have been "shared" with an attacker long enough to let the attacker hold conversations with the victim's bank, i.e. the attacker used the account but stayed hidden until the bank called to verify details.)
- VPNs exist.
- Russia exists.
- nasty images can be embedded into details of innocent high resolution photos.
- very soon they'll have to scan PDFs, PDF attachments and zip files, otherwise this is just a joke.
- and at that point it becomes even easier to frame someone
- etc
Remember, the goal is probably not to get someone convicted, just to mess up their lives by getting business contacts, family and neighbors to wonder if they were really innocent or if they just got off the hook.
That said, the bigger problem with having a secret database of objects to scan for is that it is secret: anything can go into it, allowing all kinds of fishing expeditions.
I mean: if you have such a wonderful system, it would be a shame not to use it to stop copyright infringement in the West, blasphemy (Christian or atheist literature) in Muslim countries, or lack of respect for the dear leader (Winnie the Pooh memes) in China.
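To make that concrete, here's a minimal, hypothetical sketch in Swift (none of these names are Apple's, and the hash function is a placeholder): the matching code is content-agnostic, so the only thing separating a "CSAM scanner" from an "anything scanner" is which digests get loaded into the database.

    import Foundation

    // Hypothetical sketch only, not Apple's implementation: the on-device
    // matcher sees nothing but opaque digests, so it cannot tell whether an
    // entry represents CSAM, a leaked document, or a banned meme. Whoever
    // controls the database controls what gets flagged.
    struct HashDatabase {
        private let entries: Set<Data>   // opaque digests supplied by a third party

        init(entries: Set<Data>) {
            self.entries = entries
        }

        func contains(_ digest: Data) -> Bool {
            return entries.contains(digest)
        }
    }

    // Stand-in for a perceptual hash such as NeuralHash; a real one is robust
    // to resizing and re-encoding. This placeholder just truncates the bytes.
    func perceptualHash(of image: Data) -> Data {
        return Data(image.prefix(32))
    }

    // The scan itself is content-agnostic: it reports only how many matched.
    func countMatches(images: [Data], against db: HashDatabase) -> Int {
        var matches = 0
        for image in images where db.contains(perceptualHash(of: image)) {
            matches += 1
        }
        return matches
    }

Which is the whole point of the fishing-expedition worry: swap in a different digest set and the exact same machinery scans for whatever a government cares about.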
Because it's scanning the content of the device, not what you did on the device.
An iframe full of CP from 4chan that fills the browser cache just by visiting a seemingly harmless site would not go anywhere near Google Photos, Facebook Messenger, etc.
It would trigger a scanner checking the browser cache for CP images.
Those are just the obvious ways. By "rife" I mean there are any number of ways to get CP onto someone's device without them knowing; unless the device itself is being scanned, nothing would be achieved by this.
Good thing this system doesn’t scan browser cache, then. Unless you’re programmatically extracting photos from your browser cache and uploading them to iCloud, which would be a pretty impressive way to make sure you’re using up all your storage.
Ok, here is something that happened that could have triggered this:
Back when I used WhatsApp I got a lot of nice photos from friends and family there.
WhatsApp doesn't (at least didn't) have a way to export them but I could find them on disk and I had an app that copied that folder to a cloud service. Problem is this folder contained all images from all chats.
Now I'm not very edgy so most of my pictures are completely non-problematic (in fact I don't know of any problematic ones), but once in a while someone will join one of your groups and post shock pr0n.
I've long since stopped backing up everything since I use Telegram now and it allows me to selectively backup the chats I want.
But I wanted to mention just one very simple way where doing something completely reasonable could result in an investigation of an innocent person.
But that’s not the proposed design. Browser cache isn’t getting scanned. It’s photos uploaded to iCloud.
I’m not asking whether people do bad things. I’m saying that, over and over again as this gets coverage on HN, people point to this hypothetical issue, yet it has been possible for years on many more devices.
"Does this mean Apple is going to scan all the photos stored on my iPhone?
No. By design, this feature only applies to photos that the user chooses to upload to iCloud Photos, and even then Apple only learns about accounts that are storing collections of known CSAM images, and only the images that match to known CSAM. The system does not work for users who have iCloud Photos disabled. This feature does not work on your private iPhone photo library on the device."
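Taking that FAQ at face value, the gating it describes amounts to something like this rough sketch (the names and the threshold value are hypothetical stand-ins, not Apple's actual API): local-only photos are never checked, and an account is only surfaced after some threshold number of matches.

    import Foundation

    // Rough sketch of the gating the FAQ describes; names and the threshold
    // are hypothetical stand-ins, not Apple's real API. In the actual design
    // the match results are cryptographically hidden from the device and the
    // server until the threshold is crossed; this shows only the gating logic.
    struct Photo {
        let digest: Data                 // perceptual-hash digest of the image
        let queuedForICloudUpload: Bool  // false whenever iCloud Photos is disabled
    }

    func matchCount(photos: [Photo], knownDigests: Set<Data>) -> Int {
        return photos
            .filter { $0.queuedForICloudUpload }          // local-only photos are never checked
            .filter { knownDigests.contains($0.digest) }  // compare against known digests only
            .count
    }

    func accountFlagged(photos: [Photo], knownDigests: Set<Data>, threshold: Int = 30) -> Bool {
        return matchCount(photos: photos, knownDigests: knownDigests) >= threshold
    }

On that reading, the argument isn't really about the matching logic; it's about whether the iCloud-upload gate and the contents of the digest set stay fixed once the machinery ships.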
Totally tangential, but I didn't realize that there were well-advertised "anonymous helplines for at-risk thoughts". I'm kind of curious about it as a pathology (what does "help" look like?), but I'm uneasy about even getting that in my search history.
Currently it would not catch that 4chan thing, as they only scan things that are uploaded to iCloud Photos. So you'd have to click on it and save it to Photos. This is the current implementation. I would guess that once they have their hooks in, though, all files with an image extension/header will be scanned and reported.
Yeah, I'm not sure. I do know it's a separate step to move a photo from WhatsApp into "general" photo access, so I would guess it is roped off, or they wouldn't need that step.
That wasn’t the claim though, the claim was that if I text someone a photo via WhatsApp it will do all of this automatically without them choosing to save it.
If WhatsApp does this by default then, ho boy is that a terrible implementation.
It seems easy as hell for them to just change the system to do server-side scanning instead of client-side scanning and it would probably be enough to calm the horde.
I think people can understand that Apple can't have certain content on its servers. People have a much harder time understanding that Apple needs to make sure you don't have certain content on your phone.
If iCloud content was encrypted (as it should be given what Apple says about themselves and privacy) they wouldn’t and couldn’t care any less what the data on their servers contains since, ideally, they’d have no way of decrypting that data. No need for any privacy invasion!
> Apple needs to make sure you don’t have certain content on your phone.
Excuse me, what? What does Apple care what I have on my device?
> they wouldn’t and couldn’t care any less what the data on their servers
Except they're legally forced to care. That's why this is happening at all. They have a legal obligation to care, encryption be damned. They chose to preserve encryption instead of preserving trust.
No, they are not. I linked to the text of the law itself [1]. In particular note the section titled "protection of privacy" that states:
(f) Protection of Privacy.
—Nothing in this section shall be construed to require a provider to—
(1) monitor any user, subscriber, or customer of that provider;
(2) monitor the content of any communication of any person described in paragraph (1); or
(3) affirmatively search, screen, or scan for facts or circumstances described in sections (a) and (b).
They really don't, though. No one can force them to scan a user for anything; however, if they do scan and spot CP, then obviously they have a duty to report it, just like you have a duty to report child abuse if you're a teacher. No scans, no CP, no issues. This has been forced on them by the government by some means, I think, either the US government or the CCP.