Here's the thing I keep circling around: assume that bad actors, government or otherwise, want to target political dissidents using internet-enabled smartphones. The more we learn about the way Apple actually implemented this technology, the less likely it seems that it would make it radically easier for those bad actors to do so. For instance, the "it only scans photos uploaded to iCloud" element isn't just an arbitrary limitation that can be flipped with one line of code, as some folks seem to think; as Erik Neuenschwander, head of Privacy Engineering at Apple, explained in an interview on TechCrunch:
> Our system involves both an on-device component where the voucher is created, but nothing is learned, and a server-side component, which is where that voucher is sent along with data coming to Apple service and processed across the account to learn if there are collections of illegal CSAM. That means that it is a service feature.
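A minimal sketch of the two-part flow Neuenschwander describes, with SHA-256 standing in for NeuralHash and plain set membership standing in for the blinded PSI protocol (all names and structure here are my own illustration, not Apple's actual protocol):

```python
import hashlib

# Stand-in for the CSAM hash database (illustrative only)
BLOCKLIST = {hashlib.sha256(b"known-bad-image").hexdigest()}
THRESHOLD = 3  # the server acts only once this many matches accumulate per account

def make_voucher(image_bytes):
    """On-device component: attach a voucher to each upload; the device learns nothing."""
    h = hashlib.sha256(image_bytes).hexdigest()  # stand-in for NeuralHash
    return {"payload": image_bytes, "voucher": h}

def server_check(account_uploads):
    """Server-side component: evaluate vouchers across the whole account."""
    matches = sum(1 for u in account_uploads if u["voucher"] in BLOCKLIST)
    return matches >= THRESHOLD
```

In the actual design the voucher is cryptographically blinded, so even the server learns nothing below the threshold; the sketch only shows the division of labor between device and server that makes it "a service feature."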
Will this stop those bad actors if they're determined? No, of course not, but there are so many ways they can do it already. They'll get cloud storage providers to give them access. If they can't, they'll get network providers to give them access. And if those bad actors are, as many people fear, government actors, then they have tools far more potent than code: they have laws. It was illegal to export "strong encryption" for many years, remember? I've seen multiple reports that European lawmakers are planning to require some kind of scanning for CSAM. If this goes into effect, technology isn't going to block those laws for you. Your Purism phone will either be forced to comply or be illegal.
I wrote in a previous comment on this that one of Silicon Valley's original sins is that we tend to treat all problems as if they're engineering problems. Apple is treating CSAM as an engineering problem. Most of the discussion on HN about how horrible and wrong Apple is here still treats it as an engineering problem, though: well, you can get around this by just turning off iCloud Photos or never using Apple software or throwing your iPhone in the nearest lake and switching to Android, but only the right kind of Android, or maybe just never doing anything with computers again, which admittedly will probably be effective.
Yet at the end of the day, this isn't an engineering problem. It's a policy problem. It's a governance problem. In the long run, we solve this, at least in liberal democracies, by voting people into office who understand technology, understand the value of personal encryption, and last but certainly not least, understand the value of, well, liberal democracy. I know that's easy to dismiss as Pollyannaism, but "we need to protect ourselves from our own government" has a pretty dismal track record historically. The entire point of having a liberal democracy is that we are the government, and we can pull it back from authoritarianism.
The one thing that Apple is absolutely right about is that expanding what those hashes check for is a policy decision. Maybe where those hashes get checked isn't really what we need to be arguing about.
The slippery slope argument is the only useful argument here.
The fundamental issue with their PSI/CSAM system is that they already were scanning iCloud content and that they're seemingly not removing the ability to do that. If the PSI/CSAM system had been announced alongside E2E encryption for iCloud backups, it would be clear that they were attempting to act in their users' best interests.
But it wasn't, so as it stands there's no obvious user benefit. It then becomes a question of trust. Apple are clearly willing to add functionality at the request of governments, on device. By doing this they lose user trust (in my opinion).
> In the long run, we solve this, at least in liberal democracies, by voting people into office who understand technology, understand the value of personal encryption
But this is clearly not the way it happens in practice. At least in part, we vote with our wallets and give money to companies willing to push back on governmental over-reach. Until now, Apple was one such company.
I realize that Apple likely don't "care" about privacy (it's a company, not an individual human). But in a purely cynical sense, positioning themselves as caring about privacy, and pushing back against governmental over-reach on users' behalf, was useful. And while it's "just marketing", it benefits users.
By implementing this functionality, they've lost this "marketing benefit". Users can't buy devices believing they're supporting a company willing to defend their privacy.
Absolutely. I don't know whether there's a reason for this timing (that is, if they are planning E2E encryption, why they announced this first), but this is probably the biggest PR bungle Apple has had since "you're holding it wrong," if not ever.
> Apple are clearly willing to add functionality at the request of governments, on device.
Maybe? I'm not as willing to state that quite as definitively, given the pushback Apple gave in the San Bernardino shooter case. Some of what their Privacy Engineering head said in that TechCrunch article suggests that Apple has engineered this to be strategically awkward, e.g., generating the hashes by using ML trained on the CSAM data set (so the hashing system isn't as effective on other data sets) and making the on-device hashing component part of the operating system itself rather than a separately updatable data set. That in turn suggests to me Apple is still looking for an engineering way to say "no" if they're asked "hey, can you just add these other images to your data set." (Of course, my contention that this is not ultimately an engineering problem applies here, too: even if I'm right about Apple playing an engineering shell game here, I'm not convinced it's enough if a government is sufficiently insistent.)
A minor interesting tidbit: your linked Sophos story is based on a Telegraph UK story that has this disclaimer at the bottom:
> This story originally said Apple screens photos when they are uploaded to iCloud, Apple's cloud storage service. Ms Horvath and Apple's disclaimer did not mention iCloud, and the company has not specified how it screens material, saying this information could help criminals.
It's hard to say what they were actually doing, but it's reasonable to suspect it's an earlier, perhaps entirely cloud-based rather than partially cloud-based, version of NeuralHash.
> The voucher generation is actually exactly what enables us not to have to begin processing all users’ content on our servers, which we’ve never done for iCloud Photos.
But they do appear to do "something" server-side. It's possible that all data is scanned as it is ingested, for example. I dislike this statement, because it's probably technically correct but doesn't actually clarify the situation. It makes me trust Apple less.
The qualifier is "Photos" - different services have different security properties.
Email transport is not E2E encrypted because there are no interoperable technologies for that.
Other systems are encrypted but apple has a separate key escrow system outside the cloud hosting for law enforcement requests and other court orders (such as a heir/estate wanting access).
Some like iCloud Keychain use more E2E approach where access can't be restored if you lose all your devices and paper recovery key.
iCloud Photo Sharing normally only works between AppleID accounts, with the album keys being encrypted to the account. However, you can choose to publicly share an album, at which point it becomes accessible via a browser on icloud.com. I have not heard Apple talking about whether they scan photos today once they are marked public (going forward, there would be no need).
FWIW this is all publicly documented, as well as what information Apple can and can't provide to law enforcement.
> This moment calls for public discussion, and we want our customers and people around the country to understand what is at stake.
- Tim Cook, Apple
Half a decade has passed since those words were written, and the hacker community has made little contribution to that discourse about the importance of privacy. At what point does that start implicating us in the collective failure to act?
That’s not my area of expertise and I don’t know how to fix it, but it should be an important consideration.
I really dislike this statement. It's likely designed to be "technically true". But it's been reported elsewhere that they do scan iCloud content:
Perhaps they scan as the data is being ingested. Perhaps it's scanned on a third party server. But it seems clear that it is being scanned.
My interpretation is Sophos got it wrong (they don't give a quote from the Apple Officer involved and manage to have a typo in the headline).
Apple does scanning of data which is not encrypted, such as received and sent email over SMTP. They presumably at that time were using PhotoDNA to scan attachments by hash. This is likely what Apple was actually talking about back at CES 2020.
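A stand-in for what that kind of attachment scanning might look like, with SHA-256 in place of PhotoDNA's perceptual hash (the names and the exact-match logic are my assumptions, not Apple's implementation):

```python
import hashlib

# Hypothetical list of known hashes (PhotoDNA values in reality; SHA-256 here)
KNOWN_HASHES = {hashlib.sha256(b"flagged-attachment").hexdigest()}

def scan_attachments(attachments):
    """Return the subset of attachments whose hash appears on the known list."""
    return [a for a in attachments
            if hashlib.sha256(a).hexdigest() in KNOWN_HASHES]
```

Note that PhotoDNA is perceptual, so it survives re-encoding and resizing; an exact cryptographic hash like this would not, and serves only to show the matching step.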
They may have been also scanning public iCloud photo albums, but I haven't seen anyone discuss that one way or another.
According to  it does seem like Apple didn't do any wide scale scanning of iCloud Data.
iCloud has perhaps 25% of the users of Facebook. Of that 25%, it's not clear how many actively use the platform for backups/photos. iCloud is not a platform for sharing content like Facebook is. So how many reports should we expect to see from Apple? It's unclear to me.
So, I'm not saying the number isn't suspiciously low. But it doesn't really clarify what's going on to me...
This scanning was of email attachments being sent through an iCloud-hosted account, not of other iCloud-hosted data (which is encrypted during operation).
As I understand it, this was more "leaked" than "announced". That is, it wasn't part of Apple's planned rollout strategy.
My thought was that they'll announce that soon, but then again, I'm shocked "soon" wasn't last week. I have no idea. Maybe there is some other (legal) limitation on the iCloud E2E backups they needed to solve first?
Most likely because the only way this announcement makes sense is that they were trying to respond to misleading leaks. We'll probably see some E2EE announcements in September, since the iOS 15 beta supports tokens for backup recovery. At least, let's hope so.
What Neuenschwander said doesn't establish that it isn't just an arbitrary limitation.
Where the hashes get checked is relevant to the policy problem of what they get checked against.
Forcing companies to create back doors in their own products is legally a very different situation. As to why iCloud is accessible by Apple, the point is to back up a phone someone lost. Forcing people to keep some sort of key fob with a private key safe in order to actually have access to their backups simply isn’t tenable.
Apple legal killed the feature, because of pressure from the US government.
They also run iCloud (mostly not e2e) on CCP-controlled servers for users in China.
They can decrypt ~100% of iMessages in real-time due to the way iCloud Backup (on by default, not e2e) escrows iMessage sync keys.
Apple does not protect your privacy from Apple, or, by extension, the governments that ultimately exert control over Apple: China and the USA.
“However, a former Apple employee said it was possible the encryption project was dropped for other reasons, such as concern that more customers would find themselves locked out of their data more often.”
The idea that Apple would fight this is a farce, as they regularly give up customers' data without a fight when the government requests it.
There are laws regarding this, so they don't have a choice. If they get a subpoena from a FISA court, there's not much they can do, but that goes for every US-based company.
Whatever fighting is going on is behind the scenes, so we wouldn't know about it.
If we trust US courts to stop law enforcement agencies from demanding everything they want from companies, then they can stop law enforcement agencies from demanding Apple add non-CSAM data to the NeuralHash set. If we don't trust the courts to do that, then we're kind of back at square one, right?
Therefore if the Government forces Apple to change the search parameters contained within private devices, I cannot see how this would work around the 4th Amendment.
If this is correct, it might be possible to argue that Apple's approach has (for Americans) constitutional safeguards which do not exist for on-cloud scanning performed by Google or Microsoft.
I did find someone in the media making this point, so it turns out I'm not being completely original: https://9to5mac.com/2021/08/10/misusing-csam-scanning-in-us-...
The government can't pay someone to break into your house and steal evidence they want without a warrant. I mean, they can, but the evidence wouldn't be admissible in court.
In Carpenter v. United States (2018), the Supreme Court ruled warrants are needed for gathering cell phone tracking information, remarking that cell phones are almost a “feature of human anatomy”, “when the Government tracks the location of a cell phone it achieves near perfect surveillance, as if it had attached an ankle monitor to the phone’s user”.
...[cell-site location information] provides officers with “an all-encompassing record of the holder’s whereabouts” and “provides an intimate window into a person’s life, revealing not only [an individual’s] particular movements, but through them [their] familial, political, professional, religious, and sexual associations.”
Why do you think they are doing it then?
The point is that with a Purism phone or custom ROM on my Android phone, I could disable these "legally required" features, because the law is fucking dumb, and my rights matter more.
The law can ban E2EE, cryptocurrencies, and privacy, but so long as we have some degree of technical freedom we can and will give it the middle finger.
Apple does not offer this freedom. Here we see the walled garden of iOS getting worse and more freedom-restricting by the year. When the governments of the world demand that Apple become an arm of the dystopia, Apple will comply, and its users will have no choice but to go along with it.
Apple, knowing that it is a private company completely and utterly incapable of resisting serious government demands (ie GCBD in China) should never have developed this capability to begin with.
If Apple is going to open this Pandora's box, they ought to open up their devices too.
If they aren't _everybody's_ rights then they aren't really your rights either. They are at best a privilege, and at worst something you have just been able to get away with (so far).
> The law can ban E2EE, cryptocurrencies, and privacy, but so long as we have some degree of technical freedom we can and will give it the middle finger.
Sure, in that scenario techies can secretly give it the middle finger right up until the authoritarian government they idly watched grow notices them.
If someone is seriously concerned about that happening, they could always consider trying to divert the government away from such disaster by participating.
> When the governments of the world demand that Apple become an arm of the dystopia, Apple will comply, and its users will have no choice but to go along with it.
Government demands of this sort are normally referred to as legal and regulatory compliance. Corporations, which are a legal concept allowed by the government, generally have to conform to continue to exist.
> Apple, knowing that it is a private company completely and utterly incapable of resisting serious government demands (ie GCBD in China) should never have developed this capability to begin with.
IMHO, having some portion of a pre-existing capability doesn't matter when you aren't legally allowed to challenge the request or answer "no".
But if we assume a government is determined to do this, can't they find other ways to do it? If you were using Google Photos with your Purism phone, it doesn't matter what you do on your device. And you can say "well, I wouldn't use that," but maybe your ISP is convinced (or required) to do packet inspection. And then you can say, "But I'm using encryption," and the government mandates that they have a back door into all encrypted traffic that goes through their borders.
And I would submit that if we really assume a government is going to extreme lengths, then they'll make it as hard as possible to use an open phone in the first place. They'll make custom ROMs illegal. They'll go after people hosting it. They'll mandate that the phones comply with some kind of decryption standard to connect to cellular data networks. If we assume an authoritarian government bound and determined to spy on you, the assumption that we can be saved by just applying enough open source just seems pretty shaky to me.
So, I certainly don't think that a purely technological solution is enough, in the long run. This is a policy issue. I think hackers and engineers really, really want to believe that math trumps policy, but it doesn't. By all means, let's fight for strong encryption -- but let's also fight for government policy that supports it, rather than assuming encryption and open source is a guarantee we can circumvent bad policy.
And then one can compromise and infect millions of such backdoored devices and start feeding spoofed data into these systems at scale (much more cheaply than the government's enforcement implementation), making the backdoored devices act like "swatting as a service" and completely nullifying any meaning they could get from doing this.
I'm personally really interested in router level malware + 0days on devices as distribution vectors rather than the typical c&c setup.
> They'll go after people hosting it.
Not too hard to imagine one being able to distribute such things across millions of ephemeral devices that are networked and incentivized to host it, all across the world, regardless of illegality in any particular jurisdiction. Technology enables this; without it, this won't be possible.
> I think hackers and engineers really, really want to believe that math trumps policy, but it doesn't
I don't think that at all; I think it comes down to incentives. I was listening to a talk the other day where someone mentioned that for the longest time (since at least WWII), governments pretty much had a monopoly on cryptographers, and now there are lots of places/systems that are willing to pay more to apply cutting-edge research.
> but let's also fight for government policy that supports it, rather than assuming encryption and open source is a guarantee we can circumvent bad policy.
It's much cheaper for an individual, with more immediate feedback, to do one versus the other. One can also scale a lot faster than the other, especially since one is very much divorced from implementation.
You are basically arguing that an iPhone should be incapable of doing any function at all.
Don't want the government demanding the sent/received data? No data sending and receiving functions.
Don't want the government demanding phone call intercepts? No phone call functionality.
Don't want the government demanding the contents of the screen? No screen.
Pandora's box was opened the day someone inserted a radio chip and microphone into a device. It has been open for a very long time; this is not the moment it suddenly opened.
I would argue that Apple is creating systems so they can't become an arm of the dystopia.
For example, even if a government somehow forced Apple to include non-CSAM hashes to the database, the system only uses hashes from multiple child protection agencies in different jurisdictions where the CSAM is the same.
So Apple only uses the hashes that are the same between org A, B and C and ignores the rest.
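If that's accurate, the effective database reduces to a set intersection, so no single jurisdiction can add a hash unilaterally. A hypothetical sketch (the org names and hash values are invented for illustration):

```python
def effective_database(*agency_hash_sets):
    """Keep only hashes that every participating agency independently vouches for."""
    db = set(agency_hash_sets[0])
    for s in agency_hash_sets[1:]:
        db &= set(s)  # intersect away anything not confirmed by all agencies
    return db

# A government that slips an extra hash into one agency's list gains nothing:
org_a = {"h1", "h2", "h3"}
org_b = {"h2", "h3", "h4"}  # "h4" added by only one jurisdiction
org_c = {"h3", "h5"}
# effective_database(org_a, org_b, org_c) keeps only "h3"
```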
This, along with the auditability Apple recently announced and the other features, makes it so there's literally nothing countries can do to force Apple to comply with some dystopian nightmare…
Of course, with potentially more open operating systems, it would be trivial by comparison for state actors to create a popular/custom ROM for Android that's backdoored.
For the life of me I can't see how catching child molesters is part of a dystopia.
This is not a reason to let your guard down on security. Keeping up with securing things against bad actors is a constant battle. Tim Cook put it best  and I want to hear how this is not exactly what he described 5 years ago.
Neuenschwander seems to be, maybe deliberately, conflating "Apple's servers have to be in the loop" and "the code can only look at photos on iCloud".
You are right, the problem is a "slippery slope," but Apple just built roller skates and there are governments trying to push us down it. Apple is in a far better position to resist those efforts if they say "we don't have this code, we will not build it, and there's no way for it to be safe for our users."
I'd say that's a little different than the slippery slope. Something more like (in)defense in depth.
The sad truth is that once lawmakers decide they want access to all data all the time, there's nothing Apple, Google or Microsoft can do to stop that, except going out of business.
That being said, the big players in cloud storage have scanned your files for a decade. Google, Microsoft, Amazon, they all scan files being uploaded. Apple may also be doing this for iCloud Drive.
The only way to circumvent this is to use E2E encryption, either by using a service that has E2E encryption built in, or by "rolling your own", e.g. with Cryptomator. Again, this is just one law away from being outlawed.
The thing that bothers me most is that hashes are somewhat trivial to spoof, so if someone was to get a hold of the list of hashes, they could start sending spoof messages on a large scale, causing a false positive.
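To illustrate why perceptual hashes are spoofable at all: they deliberately throw away detail, so distinct inputs can be engineered to collide. This toy "average hash" (nothing like NeuralHash's ML-derived hash, just the general idea) collides on two visibly different inputs:

```python
def average_hash(pixels):
    """Toy perceptual hash: one bit per pixel, set if the pixel is above the mean."""
    mean = sum(pixels) / len(pixels)
    return "".join("1" if p > mean else "0" for p in pixels)

a = [10, 200, 15, 180]   # one "image" (flattened grayscale values)
b = [90, 250, 60, 255]   # a different "image" with the same bright/dark pattern
# Both reduce to the bit string "0101": different content, identical hash.
```

An attacker who knows the hash list can work backwards the same way, crafting innocuous-looking content that matches a flagged hash and triggers false positives at scale.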
I remember when Echelon was making the rounds on the internet a couple of decades ago, and lots of people started adding X headers to their emails to cause false positives.
The counterpoint: we (as in, the Western-aligned countries) don't have a true liberal democracy and likely never had. Not with the amount of open and veiled influence that religion (and for what it's worth, money) has in our societies - ranging from openly Christian centrist/center-right parties in Europe to entire communities in the US dominated by religious sects of all denominations.
And all of these tend to run on a "think about the children" mindset, especially regarding anything LGBT.
The result? It is very hard if not outright impossible to prevent or roll back authoritarian measures that were sold as "protect the children", since dominant religious-affiliated people and institutions (not just churches, but also thinktanks, parties and media) will put anyone in their crosshairs. Just look at how almost all blog pieces and many comments on the CSAM scanner debacle have an "I don't like pedophilia" disclaimer...
I thought this was the implementation for the EU. If it was - it was fast?
So Apple's move can hardly have been for the EU, but had the European Parliament not passed it, Apple would have had to disable it in the EU.
No they're treating it as a political and legal problem (with the UK and the EU being the furthest along on passing legislation). Their implementation is the compromise that preserves end-to-end encryption, given those political winds.
We've seen this time after time where the 3 letter agencies give companies an ultimatum: either comply or get shut down.
The worst part is that nobody can do anything about it.
Under the cloak of secrecy and threats, a faceless government is shaping major policy, breaking laws, etc.
My only hope is that the biggest companies can speak up, since they have a bit of leverage. They can rally people behind them if the requests are borderline unethical, like spying on everybody.
If Apple can't deal with this, NOBODY else can.
I think if a three-letter agency 'shuts down' Apple, the political blowback will be nuclear. The agency will get put on a leash if they pull some shit like that.
If they don't speak up then nobody else can.
We technically can. We also can technically elect people who understand technology. But practically, it's 100x more likely that I, a person who works with technology for a living, magically find a way to provide my family with a decent lifestyle doing something that doesn't use computers at all. And I view the chances of that happening as almost non-existent.
Increasingly I believe this is the right and probably only answer. How do we collectively accomplish this, that’s the real problem. It has to be financially and economically solved. Political, social and asymmetric solutions won’t work.
This is not very hard to do. For example, WhatsApp has the "Save pictures to camera roll" option, on by default.
 "GrapheneOS vs CalyxOS ULTIMATE COMPARISON (Battery & Speed Ft. Stock Android & iPhone)", https://www.youtube.com/watch?v=7iS4leau088
It now has https://grapheneos.org/usage#sandboxed-play-services providing broader app compatibility.
That video is quite misleading and it's not the best source for accurate information about GrapheneOS.
Depends on implementation. I can easily see a possibility that the check is gated by a server side logic that can be changed at any moment without anybody knowing.
Then why isn’t Apple pushing this angle?
Yes, it is ultimately policy problem. But the way we get people in office who understand technology is to get technological capabilities in the hands of people before they get into office, as well as the hands of those who will vote for them. Just like "bad facts make bad law", bad engineering makes bad law.
Twenty years ago we forcefully scoffed at the Clipper Chip proposal and the export ban on crypto, because they were so at odds with the actual reality of the digital environment. These days, most people's communications are mediated by large corporations operating on plaintext, and are thus straightforward to monitor and censor. And especially when companies lead the charge, governments expect to have the same ability.
If I could hole up and rely on Free software to preserve my rights indefinitely, I wouldn't particularly care what the Surveillance Valley crowd was doing with their MITM scheme. But I can't, because Surveillance Valley is teaching governments that communications can be controlled while also fanning the flames and creating glaring examples of why they need to be controlled (cf social media "engagement" dumpster fire). And once governments expect that technology can be generally controlled, they will rule any software that does not do their bidding as some exceptional circumvention device rather than a natural capability that has always existed. This entire "trust us" cloud culture has been one big vaccination for governments versus the liberating power of technology that we were excited for two decades ago. This end result has been foreseeable since the rise of webapps, but it's hard to get software developers to understand something when their salary relies upon not understanding it.
Apart from my ][gs (and later my secondhand NeXT), I've never been a huge Apple fan. But I had hoped that by taking this recent privacy tack, they would put workable digital rights into the hands of the masses. Design their system to be solidly secure against everyone but Apple, control the app store to prevent trojans, but then stay out of users' business as software developers should. But brazenly modifying their OS, which should be working for the interests of the user, to do scanning against the interests of the user is a disappointing repudiation of the entire concept of digital rights. And so once again we're back to Free software or bust. At least the Free mobile ecosystem seems to be progressing.
It might be a happy incident if their architecture limits this feature to CSAM now, but their ToS are clearly much more general-purpose than that, strategically allowing Apple to pre-screen for any potentially illegal content. If ToS remain phrased this way, surely the implementation will catch up.
> > Our system involves both an on-device component where the voucher is created, but nothing is learned, and a server-side component, which is where that voucher is sent along with data coming to Apple service and processed across the account to learn if there are collections of illegal CSAM. That means that it is a service feature.
I don't see how that proves what Neuenschwander says it proves. Sending the voucher to the server is currently gated on whether or not the associated photo is set to be uploaded to iCloud. There's no reason why that restriction couldn't be removed.
The hard part about building this feature was doing the scanning and detection, and building out the mechanism by which Apple gets notified if anything falls afoul of the scanner. Changing what triggers (or does not trigger) the scanning to happen, or results being uploaded to Apple, is trivial.
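To make the triviality concrete, the scope restriction plausibly amounts to a single guard clause (a hypothetical structure with invented names, not Apple's actual code):

```python
def process_photo(photo, icloud_photos_enabled, scanner, reporter):
    """Scan-and-report pipeline gated on a single boolean flag."""
    if not icloud_photos_enabled:  # remove this check and every photo gets scanned
        return
    voucher = scanner(photo)   # the hard-to-build part: scanning and detection
    reporter(voucher)          # the hard-to-build part: notifying Apple
```

The `scanner` and `reporter` machinery is the engineering investment; widening what flows into it is a one-line change to the guard.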
The only thing that I think is meaningfully interesting here is that it's possible that getting hold of a phone and looking at the vouchers might not tell you anything; only after the vouchers are sent to Apple and the algorithm magic happens will you learn anything. But I don't think that really matters in practice; if you can get the vouchers off a phone, you can almost certainly get the photos associated with them as well.
I read through the article you linked, and it feels pretty hand-wavy to me. Honestly I don't think I'd believe what Apple is saying here unless they release a detailed whitepaper on how the system works, one that can be vetted by experts in the field. (And then we still have to trust that what they've detailed in the paper is what they've actually implemented.)
The bottom line is that Apple has developed and deployed a method for locally scanning phones for contraband. Even if they've designed things so that the result is obfuscated unless there are many matches and/or photos are actually uploaded, that seems to be a self-imposed limitation that could be removed without too much trouble. The capability is there; not using it is merely a matter of policy.
> I know that's easy to dismiss as Pollyannaism, but "we need to protect ourselves from our own government" has a pretty dismal track record historically. The entire point of having a liberal democracy is that we are the government, and we can pull it back from authoritarianism.
Absolutely agree, but I fear we have been losing this battle for many decades now, and I'm a bit pessimistic for our future. It seems most people are very willing to let their leaders scare them into believing that they must give up liberty in exchange for safety and security.
What Snowden exposed is that these changes are happening in secret, with no democratic support or oversight.
Laws are being circumvented so people aren't being given the choice to support or not.
The problem is there's no reason Apple couldn't do anything else with this. All the ML face data gathered on device would be way more valuable than these hash vouchers.
E.g. a simple one: anyone that may have saved a "winnie the pooh" meme image?
And it's not like Apple's going to publish a list of images that the system searched for. Though it'd be easy to differentiate between child abuse content and other types of images, would it ever get out of Apple if someone went rogue or was "experimenting", say, to "gather stats" on how many people have image X saved?
>> Our system involves both an on-device component where the voucher is created, but nothing is learned, and a server-side component, which is where that voucher is sent along with data coming to Apple service and processed across the account to learn if there are collections of illegal CSAM. That means that it is a service feature.
The first paragraph does not follow from the detail in the second at all. Setting aside how abstract the language is, what about adding more complexity to the system is preventing Apple from scanning other content?
This is all a misdirect: they're saying "look at how complex this system is!" and pretending that any of that makes changing how it operates "difficult", particularly when they built and control the entire system, including its existence.
I think what you're arguing is that Apple could still change what's being scanned for, and, well, yes: but that doesn't really affect my original point, which is that this is a policy/legal issue. If you assume governments are bad actors, then yes, they could pressure Apple to change this technology to scan for other things -- but if this technology didn't exist, they could just as easily pressure Apple to do it all on the servers. I think a lot of the anger comes from the feeling that they shouldn't be able to do any part of this work on my device, and emotionally, I get that -- but technologically, it's hard for me to shake the impression that "what amount happens on device vs. what amount happens on server" is a form of bikeshedding.
There's a massive difference between a search carried out on a device someone "owns" and carries with them at all times, and a server owned by someone else.
To claim otherwise is absurd. Hence all this backlash.
This is just security theater: they already sign the operating system images where the database resides. And there is no way to audit that the database is what they claim it is, that it doesn't contain multiple databases that can be activated under certain conditions, etc.
> This feature runs exclusively as part of the cloud storage pipeline for images being uploaded to iCloud Photos and cannot act on any other image content on the device
Until a 1-line code change happens that hooks it into UIImage.
Although this is true, the same argument already applies to "your phone might be scanning all your photos and stealthily uploading them" -- Apple having announced this program doesn't seem to have changed the odds of that.
At some point you have to trust your OS vendor.
The best covert exfiltration is when you can hit individual devices in a crowd, so people already have no reason to be suspicious. But you're still leaving tracks (connections, packet sizes, etc.) if you actually want to do anything, and you only need to get caught once for the game to be up.
This, on the other hand, is essentially the perfect channel for its type of surveillance, precisely because it isn't covert at all! Everyone is being told to expect it to exist, that it's normal, and that it will receive frequent updates. No longer is there a danger that a security researcher will discover it; it's "meant" to be there.
Letting Apple scan for government-related material on your device is a slippery slope to government surveillance and a hard line should be drawn here. Today, Apple may only be scanning your device for CSAM material for iCloud. Tomorrow, Apple may implement similar scanning elsewhere, gradually expand the scope of scanning to other types of content and across the entire device, implement similar government-enforcing processes on the device, and so on. It's not a good direction for Apple to be taking, regardless of how it works in this particular case. A user's device is their own personal device and anything that inches toward government surveillance on that device, should be stopped.
Another point made was that government surveillance never happens overnight. It is always very gradual. People don't mean to let government surveillance happen and yet it does because little things like this evolve. It's better to stop potential government surveillance in its tracks right now.
I understand the emissions cheating was not supposed to be open, but a relatively open system allowed said student to take a peek and see what was going on.
Read evidence from past trials; it is obvious. See also the successful and failed attempts to subpoena this info from VPN services.
Only people with iCloud will be using the relay.
It is true that, on the surface, the photo scanning is disconnected from the relay. However, Apple only needs a solid answer that handles the bad optics of what you can do with the Tor-like anonymity of iCloud Privacy Relay.
However, if you look more closely, the CSAM service and its implementation are crafted exactly around the introduction of the relay.
It's better to be forthright, or you risk your valuation on the whims of a single employee.
On the other hand, once Apple has written a backdoor enthusiastically themselves, it's a lot easier to force someone to change how it can be used. The changes are small, and compliance can be immediately verified and refusal punished. To take it to its logical extreme: you cannot really fire or execute people who delay something (especially if you lack the expertise to tell how long it should take). But you can fire or execute people who refuse to flip a switch.
This technology deeply erodes Apple and its engineers' ability to resist future pressure. And the important bit here is that the adversary isn't all-powerful. It can coerce you to do things in secret, but its power isn't unlimited. See what happened with Yahoo.
And if your phone has the capability to upload to the cloud, then you have to trust your OS vendor to respect your wish if you disable it, etc.
It's curious that this is the particular breaking point on the slope for people.
The "on device" aspect just makes it more immediate feeling, I guess?
In other words, extracting the code and analysing it to determine that it does do what you expect is, although not easy, still legal. But the source, the CSAM itself, is illegal to possess, so you can't do that verification much less publish the results. It is this effective legal moat around those questioning the ultimate targets of this system which people are worried about.
So in a world where Apple pushes you to set up iCloud Photos by default, and can do whatever they want there, and other platforms have been doing this sort of thing for years, it's a bit startling that "on device before you upload" vs. "on uploaded content" triggers far more discontent?
Maybe it's that Apple announced it at all, vs doing it relatively silently like the others? Apple has always had access to every photo on your device, after all.
There's certainly a slippery-slope argument, where some future update might change that scanning behavior. But the system-as-currently-presented seems similarly trustable.
And that's assuming they weren't actively hiding anything by e.g. splitting them up into chunks that could be slipped into legitimate traffic with Apple's servers.
The most optimistic take on this I can see is that this program could be the prelude to needing to trust fewer people. If Apple can turn on e2e encryption for photos, using this program as the PR shield from law enforcement to be able to do it, that'd leave us having to trust only the OS vendor.
What I find interesting is that so many people find it worse to do it on device, because of the risk that they do it to photos you don't intend to upload. This is clearly where Apple got caught off-guard, because to them, on-device = private.
It seems like the issue is really the mixing of on-device and off. People seem to be fine with on-device data that stays on-device, and relatively fine with the idea that Apple gets your content if you upload it to them. But when they analyze the data on-device, and then upload the results to the cloud, that really gets people.
But that ship has long sailed, right?
Every packet that leaves a device potentially incriminates its owner. Every access point and router is a potential capture point.
A device I own should not be allowed to collect and scan my data without my permission.
It's not scanning; it's creating a cryptographic safety voucher for each photo you upload to iCloud Photos. And unless you reach a threshold of 30 CSAM images, Apple knows nothing about any of your photos.
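A rough sketch of that threshold behavior, in toy form (assumption: the real system uses NeuralHash, private set intersection, and threshold secret sharing so the server cryptographically learns nothing below the threshold; none of that machinery is modeled here, only the counting logic, and the names are illustrative):

```python
# Toy model of the match-threshold idea only. In the real design the
# vouchers are undecryptable until the threshold is crossed; here we
# just count matches against a database of known hashes.
THRESHOLD = 30  # the threshold Apple has publicly described

def match_count(photo_hashes, csam_database):
    """Count how many uploaded photo hashes appear in the database."""
    return sum(1 for h in photo_hashes if h in csam_database)

def account_flagged(photo_hashes, csam_database):
    # Below the threshold, nothing about any individual photo is learned.
    return match_count(photo_hashes, csam_database) >= THRESHOLD
```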
I don't think anyone's necessarily being too upset or paranoid about THIS, but maybe everyone should also be a little less trusting of every closed OS - macOS, Windows, Android as provided by Google - that has root access too.
Apple has built iOS off user trust & goodwill, unlike most other OSes.
Where does that fit in your set intersection?
...except you can't? Not sure where these assumptions come from.
Well, they could just not do it.
I gave a pretty obvious and clear answer to that, and apparently you didn't care about the question in the first place, and have now misdirected to something else.
I am also not sure what possible definition of "privacy" you could be using that would not include things such as on-device photo scanning for the purpose of reporting people to the police.
Like, let's say it wasn't Apple doing this. Let's say it was the government. As in, the government required every computer that you own to be monitored for certain photos, at which point the info would be sent to them, and they would arrest you.
Without a warrant.
Surely, you'd agree that this violates people's privacy? The only difference in this case, is that the government now gets to side step 4th amendment protections, by having a company do it instead.
And the answer is that they shouldn't implement things that violate people's privacy, such as things that would be illegal for the government to do without a warrant.
That is the answer. If it is something that the government would need a warrant for, then they shouldn't do it, and doing it would violate people's privacy.
If the OS was open source and supported reproducible builds, you would not have to trust them, you could verify what it actually does & make sure the signed binaries they ship you actually correspond to the source code.
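As a minimal sketch of what that verification could look like (the file paths are hypothetical, and a real reproducible-build check also requires a deterministic toolchain; this only shows the final byte-for-byte comparison):

```python
import hashlib

def sha256_of(path):
    """Stream a file through SHA-256 and return the hex digest."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 16), b""):
            h.update(chunk)
    return h.hexdigest()

def verify_build(local_build, shipped_binary):
    # With reproducible builds, compiling the audited source yourself
    # should yield an artifact byte-identical to what the vendor ships.
    return sha256_of(local_build) == sha256_of(shipped_binary)
```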
One kinda wonders what they want to hide if they talk so much about user privacy yet don't provide any means for users to verify their claims.
Changes to binding contractual terms that allow broad readings and provide legal justification for future overreach are dangerous. If they really are serious that they are going to use these new features in a highly limited way, then they can put their money where their mouth is and add legally binding contractual terms that limit what they can do, with serious consequences if they are found to be in breach. Non-binding marketing PR assurances that they will not abuse their contractually justified powers are no substitute for the iron fist of a legal penalty clause.
I guess it comes down to that I don't trust an OS vendor that ships an A.I. based snitch program that they promise will be dormant.
This being an area people are paying attention to makes it less likely they'll do unpopular things involving it, from a pure "we like good PR and profits" standpoint. They might sneak these things in elsewhere, but this specific on-device-scanning program has been shown to be a risk even at its current anodyne level.
Yes, and we were trusting Apple. And now this trust is going away.
Is it really? There are some very loud voices making their discontent felt. But what does the Venn diagram look like between 'people who are loudly condemning Apple for this' and 'people who were vehemently anti-Apple to begin with'?
My trust was shaken a bit, but the more I hear about the technology they've implemented, the more comfortable I am with it. And frankly, I'm far more worried about gov't policy than I am about the technical details. We can't fix policy with tech.
Yeah. I don't really understand the tech utopia feeling that Apple could simply turn on e2ee and ignore any future legislation to ban e2ee. The policy winds are clearly blowing towards limiting encryption in some fashion. Maybe this whole event will get people to pay more attention to policy...maybe.
I really don't understand this view. You are using proprietary software, you are always an N-line change away from someone doing something you don't like. This situation doesn't change this.
If you only use open source software and advocate for others to do the same, I would understand it more.
There is always a chain of trust that you end up depending on. OSS is not a panacea here.
(And besides, it's far more likely that this nefarious Government agency will just conceal a camera in your room to capture your fingers entering your passwords.)
And I don't understand why it has to be black and white, I think the N is very important in this formula and if it is low that is a cause for concern. Like an enemy building a missile silo on an island just off your coast but promising it's just for defense.
All arguments I see is along the lines of "Apple can technically do anything they want anyways so this doesn't matter". But maybe you're right and moving to FOSS is the only solution long-term, that's what I'm doing if Apple goes through with this.
They describe a process for third parties to audit that the database was produced correctly.
The story here is that there is a black box of pictures. Apple will then use their own black box of undeclared rules to pass things along to the feds, without having shared what would be considered offending in any way, shape, or form other than "we will know it when we see it". Part of the issue here is that Apple is taking on the role of a moral authority. Traditionally Apple has been incredibly anti-pornography, and I suspect that anything that managed to get into the database is something Apple will just pass along.
But if your problem is with NCMEC, you’ve got a problem with Facebook and Google who are already doing this too. And you can’t go to jail for possessing adult pornography. So even if you assume adult porn images are in the database, and Apple’s reviewers decide to forward them to NCMEC, you would still not be able to be prosecuted, at least in the US. Ditto for pictures of Winnie the Pooh. But for the rest of what you describe, simulated child pornography is already legally dicey as far as I know, so you can’t really blame Apple or NCMEC for that.
In any case it is likely the government would try to negotiate a plea to get you into some predator database to help fill the law enforcement coffers even if they have no lawful case to take it to court once they have your name in their hands.
References to Winnie the Pooh in these discussions are about China, where images of Winnie are deemed to be coded political messages and are censored.
The concern is that Apple are building a system that is ostensibly about CSAM, and that some countries such as China will then leverage their power to force Apple to include whatever political imagery in the database as well. Giving the government there the ability to home in on who is passing around those kinds of images in quantity.
If that seems a long way indeed from CSAM, consider something more likely to fit under that heading by local government standards. There's a country today, you may have heard of, one the USA is busy evacuating its personnel from to leave the population to an awful fate, where "female teenagers in a secret school not wearing a burqa" may be deemed by the new authorities to be sexually titillating, inappropriate and illegal, and if they find out who is sharing those images, punishments are much worse than mere prison. Sadly there are a plethora of countries that are very controlling of females of all ages.
If this system didn't exist, nobody would have to trust Apple.
> you would still not be able to be prosecuted
But I wouldn't want to deal with a frivolous lawsuit, or have a record on social media of being brought up on CSA charges.
An analogy I can come up with is: the government hires people to visit your house every day, and while they’re there they need your resources (say, food, water, and electricity). In other words, they use up some of the stuff you would otherwise be able to use only for yourself — constantly — and the only reason they’re there is to see if you’re doing anything wrong. Why would you put up with this?
When a cop is checking your car for parking violations, they're using some of your taxpayer dollars to see if you're doing something wrong. You don't benefit at all from your car getting checked. But you probably benefit from everyone else's cars being checked, which is why we have this system.
Or similarly, you don't benefit from airport security searching your luggage. It's a waste of your time and invasion of your privacy if you don't have anything forbidden and even worse for you if you do have something forbidden. But you help pay for airport security anyway because it might benefit you when they search other people's luggage.
You don't benefit at all from having your photos scanned. But you (or someone else) might benefit from everyone else's photos being scanned (if it helps catch child abusers or rescue abused children). It's just a question of whether you think the benefit is worth the cost.
When they open and look into the package while in your house for the pickup, they are using your light to see and your heating/cooling is keeping the driver comfortable while they inspect the package.
In the past, Apple has done what they can to stand their ground against the first, but they (like any other company) will have to comply with any laws passed.
Whether the 'if a warrant is obtained the gov. should be granted access' law is a violation of the 4th amendment remains to be seen.
The FBI is unable to get a warrant to search data on everyone's phones, regardless of what laws are passed. They might be able to get a warrant to search all the data on Apple's servers (I would consider this unlikely, but I don't know of precedent in either direction), but that data is fundamentally not on everyone's phones. This isn't a novel legal question: you cannot search everyone's devices without probable cause that "everyone" committed a crime.
My immediate thought is that this could still be poisoned by Five Eyes participants, and that it does not preclude state actors forcing Apple to replicate this functionality for other purposes (which would leave the integrity of the CSAM database alone, thus not triggering the tripwire).
The thing is, if this is your threat model you're already screwed. Apple has said they comply with laws in jurisdictions where they operate. The state can pass whatever surveillance laws they want, and I do believe Apple has shown they'll fight them to an extent, but at the end of the day they're not going to shut down the company to protect you. This all seems orthogonal to the CSAM scanning.
Additionally, as laid out in the report, the human review process means even if somehow there is a match that isn't CSAM, they don't report it until it has been verified.
Monetization, I guess, being to hope for sponsors on GitHub, as this could likely just be a nested dependency that many apps import. A convenient app for this specific purpose might not last long in app stores.
Any perceptual hash apps to test that theory on?
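For intuition, here is a toy perceptual hash (the classic "average hash"; real systems like NeuralHash and PhotoDNA are far more sophisticated, so this is only a sketch of the core property that small changes leave the hash nearly unchanged):

```python
def average_hash(pixels):
    """Toy perceptual hash: pixels is a 2D grid of grayscale values,
    e.g. an image already downscaled to 8x8. Each bit records whether
    a pixel is at or above the mean brightness."""
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    return [1 if p >= mean else 0 for p in flat]

def hamming(a, b):
    """Number of differing bits between two hashes."""
    return sum(x != y for x, y in zip(a, b))

# A half-black, half-white 8x8 "image"...
img = [[0] * 4 + [255] * 4 for _ in range(8)]
# ...and a slightly perturbed copy (one pixel brightened).
img2 = [row[:] for row in img]
img2[0][0] = 30
# Unlike a cryptographic hash, the perceptual hash barely moves.
print(hamming(average_hash(img), average_hash(img2)))  # prints 0
```

Libraries like ImageHash implement this and better variants (pHash, dHash) if you want to experiment with real images.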
It might well be a genetic heritage. Being trusted in a tribe is crucial to survival, and so is likely wired deep into our social psychology.
Apple is making a mistake by ignoring that. This isn’t about people not trusting Apple. It’s about people not feeling trusted by Apple.
Because of this, it doesn’t matter how trustworthy the system is or what they do to make it less abusable. It will still represent distrust of the end user, and people will still feel that in their bones.
People argue about the App Store and not being trusted to install their own Apps etc. That isn’t the same. We all know we are fallible and a lot of people like the protection of the store, and having someone to ‘look after’ them.
This is different and deeper than that. Nobody wants to be a suspect for something they know they aren't doing. It feels dirty.
It really only does what they say it does, and it really is hard to abuse.
But that doesn’t matter. The point is that even so, it makes everyone into a suspect, and that feels wrong.
It is. It's a simple matter for two foreign governments to decide they don't want their people to criticize the head of state with memes, and then insert such images into the database Apple uses for scanning.
Apple's previous position on privacy was to make such snooping impossible because they don't have access to the data. Now they are handing over access.
What I and thousands of others online are describing ought to be understandable by anyone at Apple. The fact that an Apple exec can sit there in a prepared interview and look perplexed about how people could see this as a back door is something I don't understand at all. This "closing his ears" attitude may indicate he is full of it.
That really doesn’t sound like anything I’d describe as a “back door”. A back door implies general purpose access. A system which required the collision of multiple governments and Apple and their child abuse agencies simply is not that.
One of the casualties of this debate is that people are using terms that make things sound worse than they are. If you can’t get at my filesystem, you don’t have a back door. I understand the motive for stating the case as harshly as possible, but I think it’s misguided.
Having said this, I would find it interesting to hear what Federighi would say about this potential abuse case.
Agree to disagree.
> I would find it interesting to hear what Federighi would say about this potential abuse case.
Personally I would not. That's a political consideration and not something I want to hear a technologist weigh in on while defending their technology. Apple's previous stance, with which I agree, was to not give humans any chance to abuse people's personal data.
It’s not. The abuse case flows from their architecture. Perhaps it isn’t as ‘easy’ as getting multiple countries to collude with Apple. If the architecture can be abused the way you think it can, that is a technical problem as well as a political one.
Apple’s model has always involved trusting them. This model involves trusting other people in narrow ways. The architecture determines what those ways are.
Now Apple will scan the device and we must trust that 3rd parties will not abuse the technology by checking for other kinds of imagery such as memes critical of heads of state.
The proposed change is so much worse than the previous state of things.
Do you live in a country where the head of state wants to check for such memes?
If the government can force Apple to also turn over these reports then they could have just made Apple add their political meme database directly and it's already game over.
Apple is run by humans who are subject to influence and bias. Who knows what policy changes will come in Apple's future. Apple's previous stance was to not hand over data because they don't have access to it. This change completely reverses that.
Backdoor is defined by the Oxford Dictionary as "a feature or defect of a computer system that allows surreptitious unauthorized access to data."
The system in question requires you to upload the data to iCloud Photos for the tickets to be meaningful and actionable. Both your phone and iCloud services have EULAs which call out and allow for such scanning to take place, and Apple has publicly described how the system works as far as its capabilities and limitations. To the extent people see this as a change in policy (IIRC the actual license agreement language changed over a year ago), Apple has also described how to stop using the iCloud Photos service.
One less standard usage refers not to unauthorized access but specifically to covert surveillance (e.g., the "Clipper chip") - but I would argue that the Clipper chip was a case where the surveillance features were specifically not being talked about, hence it still counting as "unauthorized access".
But with a definition that covers broad surveillance instead of unauthorized access, it would still be difficult to classify this as a back door. Such surveillance arguments would only pertain to the person's phone and not to information the user chose to release to external services like iCloud Photos.
To your original point, it would still work if iCloud Photos did not have the key escrow, albeit with less data capable of being turned over to law enforcement. However, iCloud Photos being an external system would still mean this is an intentional and desired feature (presumably) by the actual system owners (Apple).
I bet the authorities would be happy with a surveillance mechanism disclosed in the EULA, though. Even if such a system is not technically a back door, I am opposed to it and would prefer Apple to oppose it.
Edit: I just noticed that you had already clarified your argument in other replies. I am sorry to make you repeat it.
Note that while the EFF's mission statement is about defending civil liberties, they posted two detailed articles about Apple's system without talking about the questionable parts of the underlying CSAM laws. There was nothing about how the laws negatively impact civil liberties and what the EFF might champion there.
The problem is that the laws themselves are somewhat uniquely abusable and overreaching, but they are meant to help reduce a really grotesque problem - and reducing aspects like detection and reporting is not going to be as effective against the underlying societal issue.
Apple has basically been fighting this for the 10 years since the introduction of iCloud Photo, saying they didn't have a way to balance the needs to detect CSAM material without impacting the privacy of the rest of users. PhotoDNA was already deployed at Microsoft and being deployed by third parties like Facebook when iCloud Photo launched.
Now it appears that Apple was working a significant portion of that time toward trying to build a system that _did_ attempt to accomplish a balance between social/regulatory responsibility and privacy.
But such a system has to prop technical systems and legal policies against one another to make up for the shortcomings of each, which makes it a very complex and nuanced system.
What I personally don’t understand is why Apple didn’t come out with a different message: we’ve made your iPhone so secure that we’ll let it vouch on your behalf when it sends us data to store. We don’t want to see the data, and we won’t see any of it unless we find that lots of the photos you send us are fishy. Android could never contemplate this, because they can’t trust the OS to send vouchers for the same photos it uploads, so instead they snoop on everything you send them.
It seems like a much more win-win framing that emphasizes their strengths.
That is PR speak that would have landed worse in tech forums. I respect them more for not doing this.
The core issue is this performs scans, without your approval, on your local device. Viruses already do that and it's something techies have always feared governments might impose. That private companies are apparently being strong-armed into doing it is concerning because it means the government is trying to circumvent the process of public discussion that is typically facilitated by proposing legislation.
This isn’t true. You can always turn off iCloud Photo Library and just store the photos locally or use a different cloud provider.
There’s no disable button or even clear indication that it’s going on.
iCloud Photos is an on-off switch.
In terms of clearly indicating everything that is going on within the service, that is just not possible for most non-tech users. It appears to have been pretty difficult even for those familiar with things like cryptography and E2E systems to understand the nuances and protections in place.
Instead, the expectation should be that using _any_ company's cloud hosting or cloud synchronization features is fundamentally sharing your data with them. It should then be any vendor's responsibility to give customers guarantees on which to evaluate trust.
I can imagine many people doing that in response to this news.
I don't think I ever said you did.
People who haven't heard what Apple is doing are not presented with a choice. Not everyone follows tech news so closely.
However, I don’t think any framing would have improved things. I think it was always going to feel wrong.
I would prefer they don’t do this because it feels bad to be a suspect even in this abstract and in-practice harmless way.
Having said that, having heard from people who have investigated how bad pedophile activity actually is, I can imagine being easily persuaded back in the other direction.
I think this is about the logic of evolutionary psychology, not the logic of cryptography.
My guess is that between now and the October iPhone release we are going to see more media about the extent of the problem they are trying to solve.
That is how Apple wins this.
There are terrible things out there that we should seek to solve. They should not be solved by creating 1984 in the literal sense, and certainly not by the company that became famous for an advertisement based on that book.
Apple, take your own advice and Think Different.
I take it you haven’t read 1984.
When Craig Federighi straps a cage full of starving rats to my face, I’ll concede this point.
In case you missed it, I think this is probably a bad move.
I just don’t think these arguments about back doors and creeping totalitarianism are either accurate or likely to persuade everyday users when they weigh them up against the grotesque nature of child exploitation.
Agree to disagree. This opens the door to something much worse in my opinion.
Absolutely not. I don’t think they need to persuade people who are convinced of their iniquity.
What they need is an environment in which those people look like they are arguing over how many dictatorships need to collude to detect anti-government photos and how this constitutes literal 1984, while Apple is announcing a way to deter horrific crimes against American children that they have seen on TV.
A lot of people like strong border controls for instance, even if it means when they return to their country from abroad they have to go through more checks or present more documents to get in. Or consider large gated communities where you have to be checked by a guard to get in.
Many people seem fine with being distrusted, as long as the distrust is part of a mechanism to weed out those who they feel really are not trustworthy, and it is not too annoying for them to prove that they are not one of those people when they encounter a trust check.
However, it may certainly be the case that in the end people in general do accept this as a price worth paying to fight pedophiles.
Now apple are telling me that my trusted companion is scanning my photos as they are uploaded to iCloud, looking for evidence that apple can pass to the authorities? They made my phone a snitch?
This isn't about security or privacy, I don't care about encryption or hashes here, this is about trust. If I can't trust my phone, I can't use it.
I think this is a nudge for folks to wake up and see the reality of what it means to use the cloud. We are leasing storage space from Apple in this case.
Technically, it’s no different than a landlord checking up on their tenants to make sure “everything is okay.”
And technically, if you do not like iCloud, don’t use it and roll your own custom cloud storage! After all, it’s redundancy and access the cloud provides. And Apple provides APIs to build them.
Hell, with the new Files app on iOS i just use a Samba share with my NAS server (just a custom built desktop with RAID 5 and Ubuntu.)
By this logic, you don't own your phone but rent it from Apple?
We probably also sign something that allows Apple to do what they will with our images—albeit we retain the copyrights to them.
Just to reiterate: the landlord metaphor is referring to software which we lease from Apple—not the device itself.
This is yet another case where I want to emphasize the importance of OPEN hardware and software designs. If we truly want ownership, we have to take ownership into our hands (which means a lot of hard work). Most commercial software is licensed so no we don’t own anything in whole that runs commercial software.
Not so now: they are in the clear either way, because Apple themselves cannot look into the hash database. So the backdoor is there, but responsibility for the crime of spying is shifted onto other actors, who cannot even be monitored by Apple.
This is truly a devilish device they thought up.
Make misuse possible, shift all responsibility onto outside actors, and act naive, as if their hands were clean.
They have betrayed us.
I have purchased my last iPhone.
> It is my partner in crime.
The threats people identify have minimal risk. If a total stranger offers you a bottle of water, you may worry about it being spiked, but his having offered the bottle doesn't make it more, or less, likely that he'll stab you after you accept it. They're separate events; there is no "slippery slope". It's very plausible that any alteration that would "scan" your whole phone would be caught eventually, and even if it's not, the announcement of this feature has no bearing on it. If Apple has evil intent, or is being coerced by NSLs, they would (be forced to) implement the dangerous mechanism whether this Child Safety feature existed or not. The concept of backdoors is hardly foreign to governments, and Apple didn't let any cats out of any bags. This document shows that Apple did go to great lengths to preserve privacy, including the requirement that hashes be verified in two jurisdictions; the fact that neither the phone nor Apple knows whether any images have been matched below the threshold; the lack of remote updates to this mechanism; the use of vouchers instead of sending the full image; the use of synthetic vouchers; and on and on.
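The threshold property mentioned above can be made concrete with a toy model: imagine each matching image contributes one Shamir secret share of the account's decryption key, so the server can reconstruct the key only once it holds at least `threshold` shares. Everything below is a deliberately simplified illustration of that one idea, not Apple's actual protocol (which layers PSI and blinded hashes on top).

```python
# Toy t-of-n threshold secret sharing (Shamir) over a prime field.
# Illustrates why nothing is learned below the match threshold:
# fewer than `threshold` shares reveal nothing about the secret.
import random

PRIME = 2**61 - 1  # a Mersenne prime; large enough for a toy field

def make_shares(secret: int, threshold: int, count: int):
    """Split `secret` into `count` shares; any `threshold` of them recover it."""
    coeffs = [secret] + [random.randrange(PRIME) for _ in range(threshold - 1)]
    def f(x):
        acc = 0
        for c in reversed(coeffs):  # Horner evaluation of the polynomial
            acc = (acc * x + c) % PRIME
        return acc
    return [(x, f(x)) for x in range(1, count + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x=0 recovers the secret (the constant term)."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % PRIME
                den = den * (xi - xj) % PRIME
        secret = (secret + yi * num * pow(den, PRIME - 2, PRIME)) % PRIME
    return secret
```

With any `threshold` shares, `reconstruct` returns the exact secret; with fewer, it returns a field element that is statistically independent of it, which is the whole point of the design.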
Furthermore, the risks of this mechanism are lower than those of the existing PhotoDNA scanning used by practically every competing service. Those systems have no thresholds; their human review process is obscure; and there is no claim that other pictures won't be looked at.
The controversy centers on the fact that it uses an on-device component. But PhotoDNA-style solutions fail many of Apple's design criteria, which require an on-device solution:
- database update transparency
- matching software correctness
- matching software transparency
- database and software universality
- data access restrictions.
What about the concern that the hash databases could be altered to contain political images? PhotoDNA's could be, too, but there the alteration would be undetectable, unlike with Apple's solution. Worse: with server-side solutions, the server admin could use an altered hash DB only for specific target users, causing harm without arousing suspicion. Apple's design prevents this, since the DB is verifiably universal to all users.
A rational look at every argument I've seen against Apple's solution indicates that it is strictly superior to current solutions, and less of a threat to users. Cryptographically generating vouchers and comparing hashes is categorically not "scanning people's phones" or a "backdoor." I think the more people understand its technical underpinnings, the less they'll see similarities with sinister dystopian cyberpunk aesthetics.
The problem is that this "hard" technological core (the crypto) is subject to an awful lot of "soft" policy issues around the edge - and there's nothing but "Well, we won't do that!" in there.
Plus, the whole Threat Model document feels like it was thrown together in a 3AM brainstorming session. "Oh, uh... we'll just use the intersection of multiple governments' hashes, and, besides, they can always audit the code!" Seriously, search the threat model document for the phrase "subject to code inspection by security researchers": it appears five times. How, exactly, does one go about getting said code?
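For what it's worth, the "intersection" idea itself is trivially simple: only hashes present in every participating child-safety organization's database make it into the on-device set, so no single government can unilaterally insert a target hash. A sketch, with entirely made-up hash values and organization names:

```python
# Toy sketch of the cross-jurisdiction intersection described in the
# threat model. All hash values and database names below are invented.
def build_ondevice_db(*jurisdiction_dbs: set) -> set:
    """Keep only hashes vouched for by every participating jurisdiction."""
    if not jurisdiction_dbs:
        return set()
    common = set(jurisdiction_dbs[0])
    for db in jurisdiction_dbs[1:]:
        common &= db
    return common

ncmec_us = {"a1f3", "b7c2", "d9e0"}        # hypothetical US database
org_eu   = {"b7c2", "d9e0", "ffee"}        # hypothetical EU database
print(build_ondevice_db(ncmec_us, org_eu)) # only the overlap survives
```

Of course, the hard part is not the set intersection; it's verifying that the shipped database really is that intersection, which is exactly where the "audit the code" hand-waving comes in.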
Remember, national security letters with gag orders attached exist.
Also, remember, when China and Apple came to a head over iCloud server access, Apple backed down and gave China what they wanted.
Even if this alone isn't enough to convince you to move off Apple, are you comfortable with the trends now clearly visible?
Still much better than all but the most esoteric, inconvenient alternatives.
Because sacrificing privacy for convenience is why we got to this point.
I'm rapidly heading there. I'm pretty sure I won't run Win11, given the hardware requirements (I prefer keeping older hardware running when it still fits my needs) and the requirement for an online Microsoft account for Win11 Home (NO, and it's pretty stupid that I now have to literally disconnect the network cable to make an offline account on Win10, and then disable the damned nag screens).
If Apple is going all in on this whole "your device is going to work against you" thing they're trying for, well... I'm not OK with that either. That leaves Linux and the BSDs. Unfortunately, Intel isn't really OK in my book either, because they can't reason about their chips anymore (long rant, but L1TF and Plundervolt allowing pillage of SGX's guarantees tell me as much). So, hrm. AMD or ARM it is, and probably not with a very good phone either.
At this point, I'm going down that road quite quickly, far sooner than I'd hoped, because I do want to live out what I talk about with regards to computers, and if the whole world goes a direction I'm not OK with, well, OK. I'll find alternatives. I accept that unless things change, I'm probably no more than 5-10 years away from simply abandoning the internet entirely outside work and very basic communications. It'll suck, but if that's what I need to do to live with what I claim I want to live by, that's what I'll do.
"I think this is a terrible idea and I wish Apple wouldn't do it, but I don't care enough about it to stop using Apple products" is a perfectly reasonable stance, but it does mean that Apple now knows they can do more of this sort of thing and get away with it. Good luck with the long term results of allowing this.
There's always the potential to work on the underlying laws. Not every problem can be solved by tech alone.
Apple used to fight implementing dangerous mechanisms. And succeeded.
In this case it seems only a matter of policy whether Apple submits CSAM hashes to devices for scanning. No new software needs to be written, no new vulnerability found. There’s no difference between a hash database of CSAM, of FBI terrorist photos, or of Winnie the Pooh memes. All that’s left is Apple’s word that its policy can stand up to the legal and political pressures of the countries it operates in. That seems like a far lower bar.
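That "no new software" point is worth making concrete: the matching code itself is content-agnostic, so swapping databases changes policy, not code. A toy sketch (using a cryptographic digest as a stand-in for a perceptual hash like NeuralHash, which it is emphatically not; all image bytes and databases below are invented):

```python
# Toy demonstration that a hash matcher is agnostic to what the
# database contains; only the database, i.e. policy, differs.
# SHA-256 stands in for a perceptual hash here purely for illustration.
import hashlib

def toy_hash(image_bytes: bytes) -> str:
    return hashlib.sha256(image_bytes).hexdigest()[:16]

def matches(image_bytes: bytes, hash_db: frozenset) -> bool:
    return toy_hash(image_bytes) in hash_db

csam_db = frozenset({toy_hash(b"known-bad-image")})      # hypothetical
meme_db = frozenset({toy_hash(b"winnie-the-pooh.png")})  # hypothetical

# Identical code path, regardless of which database is loaded:
assert matches(b"known-bad-image", csam_db)
assert matches(b"winnie-the-pooh.png", meme_db)
```

The only thing standing between the first database and the second is whoever controls what gets shipped to the device, which is the entire argument above.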