Security Threat Model Review of the Apple Child Safety Features [pdf] (apple.com)
325 points by sylens 35 days ago | 374 comments



In other HN comments on this subject I've (hopefully) made it clear that I'm not really in favor of this project of Apple's, and that there's a legitimate "slippery slope" argument to be made here. So I hope people will entertain a contrarian question without downvoting me into oblivion. :)

Here's the thing I keep circling around: assume that bad actors, government or otherwise, want to target political dissidents using internet-enabled smartphones. The more we learn about the way Apple actually implemented this technology, the less likely it seems that it would make it radically easier for those bad actors to do so. For instance, the "it only scans photos uploaded to iCloud" element isn't just an arbitrary limitation that can be flipped with one line of code, as some folks seem to think; as Erik Neuenschwander, head of Privacy Engineering at Apple, explained in an interview on TechCrunch[1]:

> Our system involves both an on-device component where the voucher is created, but nothing is learned, and a server-side component, which is where that voucher is sent along with data coming to Apple service and processed across the account to learn if there are collections of illegal CSAM. That means that it is a service feature.
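As a very rough illustration of the split that quote describes (this is NOT Apple's actual protocol, which uses private set intersection and threshold secret sharing so the server learns nothing about non-matches; all names and hash values below are invented):

```python
# Toy sketch of a device/server split: the device emits a "voucher"
# derived from the image's perceptual hash, and only the server --
# which holds the known-bad hash list -- performs any matching.
# This is an illustration of the architecture, not Apple's protocol.
import hashlib

SERVER_DB = {b"known-bad-hash-1", b"known-bad-hash-2"}  # hypothetical

def make_voucher(image_hash: bytes) -> bytes:
    # Device side: the device learns nothing; it just commits to
    # the hash and ships the commitment upward.
    return hashlib.sha256(b"voucher|" + image_hash).digest()

def server_check(voucher: bytes) -> bool:
    # Server side: matching happens only here, against the server's set.
    return any(make_voucher(h) == voucher for h in SERVER_DB)

assert server_check(make_voucher(b"known-bad-hash-1"))
assert not server_check(make_voucher(b"cat-photo-hash"))
```

In the real design the server additionally cannot decrypt anything until a threshold number of matches accumulates; this sketch only shows why "the matching is a service feature" and not purely an on-device one.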

Will this stop those bad actors if they're determined? No, of course not, but there are so many ways they can do it already. They'll get cloud storage providers to give them access. If they can't, they'll get network providers to give them access. And if those bad actors are, as many people fear, government actors, then they have tools far more potent than code: they have laws. It was illegal to export "strong encryption" for many years, remember? I've seen multiple reports that European lawmakers are planning to require some kind of scanning for CSAM. If this goes into effect, technology isn't going to block those laws for you. Your Purism phone will either be forced to comply or be illegal.

I wrote in a previous comment on this that one of Silicon Valley's original sins is that we tend to treat all problems as if they're engineering problems. Apple is treating CSAM as an engineering problem. Most of the discussion on HN about how horrible and wrong Apple is here still treats it as an engineering problem, though: well, you can get around this by just turning off iCloud Photos or never using Apple software or throwing your iPhone in the nearest lake and switching to Android, but only the right kind of Android, or maybe just never doing anything with computers again, which admittedly will probably be effective.

Yet at the end of the day, this isn't an engineering problem. It's a policy problem. It's a governance problem. In the long run, we solve this, at least in liberal democracies, by voting people into office who understand technology, understand the value of personal encryption, and last but certainly not least, understand the value of, well, liberal democracy. I know that's easy to dismiss as Pollyannaism, but "we need to protect ourselves from our own government" has a pretty dismal track record historically. The entire point of having a liberal democracy is that we are the government, and we can pull it back from authoritarianism.

The one thing that Apple is absolutely right about is that expanding what those hashes check for is a policy decision. Maybe where those hashes get checked isn't really what we need to be arguing about.

[1]: https://techcrunch.com/2021/08/10/interview-apples-head-of-p...


> there's a legitimate "slippery slope" argument

The slippery slope argument is the only useful argument here.

The fundamental issue with their PSI/CSAM system is that they already were scanning iCloud content [1] and that they're seemingly not removing the ability to do that. If the PSI/CSAM system had been announced alongside E2E encryption for iCloud backups, it would be clear that they were attempting to act in their users' best interests.

But it wasn't, so as it stands there's no obvious user benefit. It then becomes a question of trust. Apple are clearly willing to add functionality at the request of governments, on device. By doing this they lose user trust (in my opinion).

> In the long run, we solve this, at least in liberal democracies, by voting people into office who understand technology, understand the value of personal encryption

But this is clearly not the way it happens in practice. At least in part, we vote with our wallets and give money to companies willing to push back on governmental over-reach. Until now, Apple was one such company [2].

I realize that Apple likely don't "care" about privacy (it's a company, not an individual human). But in a purely cynical sense, positioning themselves as caring about privacy and pushing back against governmental over-reach on users' behalf was useful. And while it's "just marketing", it benefits users.

By implementing this functionality, they've lost this "marketing benefit". Users can't buy devices believing they're supporting a company willing to defend their privacy.

[1] https://nakedsecurity.sophos.com/2020/01/09/apples-scanning-...

[2] https://epic.org/amicus/crypto/apple/


> If the PSI/CSAM system had been announced alongside E2E encryption for iCloud backups, it would be clear that they were attempting to act in their users' best interests.

Absolutely. I don't know whether there's a reason for this timing (that is, if they are planning E2E encryption, why they announced this first), but this is probably the biggest PR bungle Apple has had since "you're holding it wrong," if not ever.

> Apple are clearly willing to add functionality at the request of governments, on device.

Maybe? I'm not as willing to state that quite as definitively, given the pushback Apple gave in the San Bernardino shooter case. Some of what their Privacy Engineering head said in that TechCrunch article suggests that Apple has engineered this to be strategically awkward, e.g., generating the hashes by using ML trained on the CSAM data set (so the hashing system isn't as effective on other data sets) and making the on-device hashing component part of the operating system itself rather than a separately updatable data set. That in turn suggests to me Apple is still looking for an engineering way to say "no" if they're asked "hey, can you just add these other images to your data set." (Of course, my contention that this is not ultimately an engineering problem applies here, too: even if I'm right about Apple playing an engineering shell game here, I'm not convinced it's enough if a government is sufficiently insistent.)

A minor interesting tidbit: your linked Sophos story is based on a Telegraph UK story that has this disclaimer at the bottom:

> This story originally said Apple screens photos when they are uploaded to iCloud, Apple's cloud storage service. Ms Horvath and Apple's disclaimer did not mention iCloud, and the company has not specified how it screens material, saying this information could help criminals.

It's hard to say what they were actually doing, but it's reasonable to suspect it's an earlier, perhaps entirely cloud-based rather than partially cloud-based, version of NeuralHash.


Right, and in the interview linked [1] above they state:

> The voucher generation is actually exactly what enables us not to have to begin processing all users’ content on our servers, which we’ve never done for iCloud Photos.

But they do appear to do "something" server-side. It's possible that all data is scanned as it is ingested, for example. I dislike this statement, because it's probably technically correct but doesn't clarify the situation in a helpful way. It makes me trust Apple less.

[1] https://techcrunch.com/2021/08/10/interview-apples-head-of-p...


What I don’t understand is that if they had announced they would do that scanning server-side, the only eyebrows raised would be those of people who thought they were doing it already. It’s not as if those pictures were E2E encrypted. I still haven’t seen any convincing argument for why searching client-side provides any benefit to end users, while being a massive step in the direction of privacy invasion.


> But they do appear to do "something" server-side. It's possible that all data is scanned as it is ingested, for example. I dislike this statement, because it's probably technically correct but doesn't clarify the situation in a helpful way. It makes me trust Apple less.

The qualifier is "Photos" - different services have different security properties.

Email transport is not E2E encrypted because there are no interoperable technologies for that.

Other systems are encrypted, but Apple has a separate key escrow system outside the cloud hosting for law enforcement requests and other court orders (such as an heir/estate wanting access).

Some, like iCloud Keychain, use a more E2E approach where access can't be restored if you lose all your devices and the paper recovery key.

iCloud Photo Sharing normally only works between AppleID accounts, with the album keys being encrypted to the account. However, you can choose to publicly share an album, at which point it becomes accessible via a browser on icloud.com. I have not heard Apple talking about whether they scan photos today once they are marked public (going forward, there would be no need).

FWIW this is all publicly documented, as well as what information Apple can and can't provide to law enforcement.


Even without publicly sharing your iCloud photos, they are accessible on iCloud.com (e.g. you can see your camera roll there).


> This is an area we’ve been looking at for some time, including current state of the art techniques which mostly involves scanning through entire contents of users’ libraries on cloud services that — as you point out — isn’t something that we’ve ever done; to look through users’ iCloud Photos.

https://techcrunch.com/2021/08/10/interview-apples-head-of-p...

> This moment calls for public discussion, and we want our customers and people around the country to understand what is at stake.

- Tim Cook, Apple

Half a decade has passed since those words were written, yet the hacker community has contributed little to that discourse about the importance of privacy. At what point does that start implicating us in a collective failure to act?


And yet we can go on, e.g., Twitter and observe comments from relevant security researchers that describe a chilling atmosphere surrounding privacy research, including resistance to even considering the white paper.

That’s not my area of expertise and I don’t know how to fix it, but it should be an important consideration.


> The voucher generation is actually exactly what enables us not to have to begin processing all users’ content on our servers, which we’ve never done for iCloud Photos.

I really dislike this statement. It's likely designed to be "technically true". But it's been reported elsewhere that they do scan iCloud content:

https://nakedsecurity.sophos.com/2020/01/09/apples-scanning-...

Perhaps they scan as the data is being ingested. Perhaps it's scanned on a third party server. But it seems clear that it is being scanned.


https://www.forbes.com/sites/thomasbrewster/2020/02/11/how-a...

My interpretation is that Sophos got it wrong (they don't give a quote from the Apple officer involved, and they manage to have a typo in the headline).

Apple does scanning of data which is not encrypted, such as received and sent email over SMTP. They presumably at that time were using PhotoDNA to scan attachments by hash. This is likely what Apple was actually talking about back at CES 2020.

They may have been also scanning public iCloud photo albums, but I haven't seen anyone discuss that one way or another.


That link doesn’t confirm that they have already been doing it. Just that they changed an EULA.


My understanding based on piecing together the various poorly cited news stories is that Apple used to scan iCloud Mail for this material, and that’s it.


If you have references to also help me piece this together I'd find that really helpful.


> Last year, for instance, Apple reported 265 cases to the National Center for Missing & Exploited Children, while Facebook reported 20.3 million

According to [1] it does seem like Apple didn't do any wide scale scanning of iCloud Data.

[1] https://www.nytimes.com/2021/08/05/technology/apple-iphones-...


If they weren't doing any scanning, why would they find anything to report? The data is encrypted at rest. This clearly doesn't include search requests [1].

iCloud has perhaps 25% of the users of Facebook. Of that 25%, it's not clear how many actively use the platform for backups/photos. iCloud is not a platform for sharing content like Facebook. So how many reports should we expect to see from Apple? It's unclear to me.

So, I'm not saying the number isn't suspiciously low. But it doesn't really clarify what's going on to me...

[1] https://www.apple.com/legal/transparency/pdf/requests-2018-H...


Forbes had the only evidence based reporting where they cited a court case where Apple automatically detected known CSAM in attachments to iCloud Mail: https://www.forbes.com/sites/thomasbrewster/2020/02/11/how-a...

Everyone else is making inferences from a change to their privacy policy and a statement made by a company lawyer.


> The fundamental issue with their PSI/CSAM system is that they already were scanning iCloud content

This scanning was of email attachments being sent through an iCloud-hosted account, not of other iCloud hosted data (which is encrypted during operation.)


I don’t think that photos are encrypted, as you can view them from the iCloud website.


If it’s encrypted but Apple has the key, it’s not encrypted to them.


Do you have a public reference for this?


> If the PSI/CSAM system had been announced alongside E2E encryption for iCloud backups, it would be clear that they were attempting to act in their users' best interests.

As I understand it, this was more "leaked" than "announced". That is, it wasn't part of Apple's planned rollout strategy.

My thought was that they'll announce that soon, but then again, I'm shocked soon wasn't last week. I have no idea. Maybe there is some other limitation (legal) on the iCloud E2E backups they needed to solve first?


> If the PSI/CSAM system had been announced alongside E2E encryption for iCloud backups, it would be clear that they were attempting to act in their users' best interests.

Most likely because the only way this announcement makes sense is that they were trying to respond to misleading leaks. We'll probably see some E2EE announcements in September, since the iOS 15 beta supports tokens for backup recovery. At least, let's hope so.


Lots of people have made policy arguments. No US law requires client side scanning. No US law forbids E2E encryption. US courts don't let law enforcement agencies just demand everything they want from companies. Apple relied on that 5 years ago successfully.[1] And capitulating preemptively is bad strategy usually.

What Neuenschwander said doesn't establish it isn't just an arbitrary limitation.

[1] https://en.wikipedia.org/wiki/FBI–Apple_encryption_dispute

Where the hashes get checked is relevant to the policy problem of what gets checked.


That’s different. The FBI can legally require Apple or any other US company to search for specific files it has access to on its own servers, because nothing currently shields backup providers. They could and did force Apple to aid in unlocking iPhones when Apple had that capacity. What they couldn’t do was obtain orders that “would compel Apple to write new software that would let the government bypass these devices' security and unlock the phones.”

Forcing companies to create back doors in their own products is legally a very different situation. As to why iCloud is accessible by Apple, the point is to back up a phone someone lost. Forcing people to keep some sort of key fob with a secure private key safe in order to actually have access to their backups simply isn’t tenable.


Apple is a trillion-dollar company with a lot of smart people. You could probably get them to design an N-of-M scheme for recovery, or an Apple-branded key holder that you can store in your bank vault and friends' houses. If they wanted to, they'd do it.
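For what it's worth, the N-of-M recovery idea is standard cryptography (Shamir secret sharing). A minimal toy sketch in Python, purely illustrative and nowhere near production-grade:

```python
# Toy Shamir secret sharing: split a recovery key into M shares such
# that any N of them reconstruct it, while N-1 reveal nothing.
# Real systems use audited libraries; this is a demo only.
import random

P = 2**127 - 1  # a Mersenne prime, large enough for a demo field

def split(secret, n_needed, m_total):
    # Random polynomial of degree n_needed-1 with secret as constant term.
    coeffs = [secret] + [random.randrange(P) for _ in range(n_needed - 1)]
    def f(x):
        return sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P
    return [(x, f(x)) for x in range(1, m_total + 1)]

def recover(shares):
    # Lagrange interpolation at x = 0 recovers the constant term.
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, -1, P)) % P
    return secret

key = 123456789
shares = split(key, n_needed=3, m_total=5)  # e.g. bank vault + friends
assert recover(shares[:3]) == key           # any 3 shares suffice
```

The usability objection in the sibling comment still stands: the hard part isn't the math, it's getting ordinary users to keep M physical shares safe.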


More than that: Apple already designed and partially implemented such a "trust circles" system.

Apple legal killed the feature, because of pressure from the US government.

https://www.reuters.com/article/us-apple-fbi-icloud-exclusiv...

They also run iCloud (mostly not e2e) on CCP-controlled servers for users in China.

They can decrypt ~100% of iMessages in real-time due to the way iCloud Backup (on by default, not e2e) escrows iMessage sync keys.

Apple does not protect your privacy from Apple, or, by extension, the governments that ultimately exert control over Apple: China and the USA.


That’s not what the article says, they simply don’t know.

“However, a former Apple employee said it was possible the encryption project was dropped for other reasons, such as concern that more customers would find themselves locked out of their data more often.”


Each year, Apple gives up customer data on over 150,000 users based on US government data requests, including NSL and FISA requests [1].

The idea that Apple would fight this is a farce, as they regularly give up customers' data without a fight when the government requests it.

[1] https://www.apple.com/legal/transparency/us.html


> The idea that Apple would fight this is a farce, as they regularly give up customers' data without a fight when the government requests it.

There are laws regarding this, so they don't have a choice. If they get a subpoena from a FISA court, there's not much they can do, but that goes for every US-based company.

Whatever fighting is going on is behind the scenes, so we wouldn't know about it.


None of the laws do yet. My observation isn't about the laws as they necessarily exist now, just as the worry about how this could be abused isn't about Apple's policy as it exists now.

If we trust US courts to stop law enforcement agencies from demanding everything they want from companies, then they can stop law enforcement agencies from demanding Apple add non-CSAM data to the NeuralHash set. If we don't trust the courts to do that, then we're kind of back at square one, right?


I'm not American, but my understanding is that as soon as Government is forcing Apple to search our devices for something, 4th Amendment protections apply. (Unless they hold a search warrant for that specific person, of course.) Is this not correct?


No. The 4A protections don't apply to third parties. This is part of why the US has nearly nonexistent data protection laws.


If Apple was performing scans on their cloud servers, you'd be absolutely right. But if the scanning is being done on the individual's device, I'm not sure it's that straightforward. The third party doctrine surely cannot apply if the scanning is performed prior to the material being in third party hands.

Therefore if the Government forces Apple to change the search parameters contained within private devices, I cannot see how this would work around the 4th Amendment.

If this is correct, it might be possible to argue that Apple's approach has (for Americans) constitutional safeguards which do not exist for on-cloud scanning performed by Google or Microsoft.


This point was made in the economist today.

https://www.economist.com/united-states/2021/08/12/a-38-year...


I read the article; I don't think they highlighted this specific point that on-device scanning has a potential, hypothetical constitutional advantage in comparison to Google, Microsoft and Facebook who scan exclusively in the cloud.


They don’t draw out the comparison, but they do mention the protection.


You are right. But I think the distinction is an important one to make. People might not be aware about this critical distinction.

I did find someone in the media making this point, so it turns out I'm not being completely original: https://9to5mac.com/2021/08/10/misusing-csam-scanning-in-us-...


> The 4A protections don't apply to third parties.

The government can't pay someone to break into your house and steal evidence they want without a warrant. I mean, they can, but the evidence wouldn't be admissible in court.


They don't need a warrant. You gave data to someone else. That someone isn't bound to keep it secret. They can demand a warrant if they are motivated by ethical principles but that is optional and potentially overruled by other laws.


But if they're looking for incriminating evidence on your private property (i.e. on-device scanning) then they do need a warrant. It doesn't matter if a copy of it was also given to a third party (i.e. uploaded to iCloud) what matters is where the actual search takes place.


The authorities aren't doing the scanning. You will be made to agree in the fine print to let Apple do it when iCloud sync is enabled. If they run across evidence of a crime then c'est la vie.


The sort of questions about 4A protections here haven't really been tested. Third party doctrine might not apply in this circumstance and the court is slowly evolving with the times.


e.g. https://en.wikipedia.org/wiki/Carpenter_v._United_States

In Carpenter v. United States (2018), the Supreme Court ruled warrants are needed for gathering cell phone tracking information, remarking that cell phones are almost a “feature of human anatomy”, “when the Government tracks the location of a cell phone it achieves near perfect surveillance, as if it had attached an ankle monitor to the phone’s user”.

...[cell-site location information] provides officers with “an all-encompassing record of the holder’s whereabouts” and “provides an intimate window into a person’s life, revealing not only [an individual’s] particular movements, but through them [their] familial, political, professional, religious, and sexual associations.”


> And capitulating preemptively is bad strategy usually.

Why do you think they are doing it then?


> It was illegal to export "strong encryption" for many years, remember? I've seen multiple reports that European lawmakers are planning to require some kind of scanning for CSAM. If this goes into effect, technology isn't going to block those laws for you. Your Purism phone will either be forced to comply or be illegal.

The point is that with a Purism phone or custom ROM on my Android phone, I could disable these "legally required" features, because the law is fucking dumb, and my rights matter more.

The law can ban E2EE, cryptocurrencies, and privacy, but so long as we have some degree of technical freedom we can and will give it the middle finger.

Apple does not offer this freedom. Here we see the walled garden of iOS getting worse and more freedom-restricting by the year. When the governments of the world demand that Apple become an arm of the dystopia, Apple will comply, and its users will have no choice but to go along with it.

Apple, knowing that it is a private company completely and utterly incapable of resisting serious government demands (ie GCBD in China) should never have developed this capability to begin with.

If Apple is going to open this Pandora's box, they ought to open up their devices too.


> ... because the law is fucking dumb, and my rights matter more.

If they aren't _everybody's_ rights then they aren't really your rights either. They are at best a privilege, and at worst something you have just been able to get away with (so far).

> The law can ban E2EE, cryptocurrencies, and privacy, but so long as we have some degree of technical freedom we can and will give it the middle finger.

Sure, in that scenario techies can secretly give it the middle finger right up until the authoritarian government they idly watched grow notices them.

If someone is seriously concerned about that happening, they could always consider trying to divert the government away from such disaster by participating.

> When the governments of the world demand that Apple become an arm of the dystopia, Apple will comply, and its users will have no choice but to go along with it.

Government demands of this sort are normally referred to as legal and regulatory compliance. Corporations, which are a legal concept allowed by the government, generally have to conform to continue to exist.

> Apple, knowing that it is a private company completely and utterly incapable of resisting serious government demands (ie GCBD in China) should never have developed this capability to begin with.

IMHO, having some portion of a pre-existing capability doesn't matter when you aren't legally allowed to challenge the request or answer "no".


> The point is that with a Purism phone or custom ROM on my Android phone, I could disable these "legally required" features, because the law is fucking dumb, and my rights matter more.

But if we assume a government is determined to do this, can't they find other ways to do it? If you were using Google Photos with your Purism phone, it doesn't matter what you do on your device. And you can say "well, I wouldn't use that," but maybe your ISP is convinced (or required) to do packet inspection. And then you can say, "But I'm using encryption," and the government mandates that they have a back door into all encrypted traffic that goes through their borders.

And I would submit that if we really assume a government is going to extreme lengths, then they'll make it as hard as possible to use an open phone in the first place. They'll make custom ROMs illegal. They'll go after people hosting it. They'll mandate that the phones comply with some kind of decryption standard to connect to cellular data networks. If we assume an authoritarian government bound and determined to spy on you, the assumption that we can be saved by just applying enough open source just seems pretty shaky to me.

So, I certainly don't think that a purely technological solution is enough, in the long run. This is a policy issue. I think hackers and engineers really, really want to believe that math trumps policy, but it doesn't. By all means, let's fight for strong encryption -- but let's also fight for government policy that supports it, rather than assuming encryption and open source is a guarantee we can circumvent bad policy.


> And then you can say, "But I'm using encryption," and the government mandates that they have a back door into all encrypted traffic that goes through their borders.

And then one can compromise and infect millions of such backdoored devices and start feeding spoofed data into these systems at scale (far more cheaply than the government's enforcement implementation). That acts like "swatting as a service" and completely nullifies any meaning they could get from doing this.

I'm personally really interested in router level malware + 0days on devices as distribution vectors rather than the typical c&c setup.

> They'll go after people hosting it.

Not too hard to imagine one being able to distribute such things across millions of ephemeral devices that are networked and incentivized to host it, all across the world, regardless of illegality in any particular jurisdiction. Technology enables this; without it, that won't be possible.

> I think hackers and engineers really, really want to believe that math trumps policy, but it doesn't

I don't think that at all, I think it comes down to incentives. I was listening to a talk the other day where someone mentioned that for the longest time (since at least wwII), governments pretty much had a monopoly on cryptographers and now there are lots of places/systems that are willing to pay more to apply cutting edge research.

> but let's also fight for government policy that supports it, rather than assuming encryption and open source is a guarantee we can circumvent bad policy.

Much cheaper for an individual, with more immediate feedback mechanisms, to do one versus the other. One can also scale a lot faster than the other, especially since one is very much divorced from implementation.


> Apple, knowing that it is a private company completely and utterly incapable of resisting serious government demands (ie GCBD in China) should never have developed this capability to begin with.

You are basically arguing that an iPhone should be incapable of doing any function at all.

Don't want the government demanding the sent/received data - no data sending and receiving functions.

Don't want the government demanding phone call intercepts - no phone call functionality.

Don't want the government demanding the contents of the screen - no screen.

The Pandora's box was opened the day someone inserted a radio chip and microphone into a device. It has been open for a very long time; this is not the moment it suddenly opened.


> When the governments of the world demand that Apple become an arm of the dystopia, Apple will comply, and its users will have no choice but to go along with it.

I would argue that Apple is creating systems so they can't become an arm of the dystopia.

For example, even if a government somehow forced Apple to include non-CSAM hashes in the database, the system only uses hashes supplied by multiple child protection agencies in different jurisdictions whose CSAM entries are the same.

So Apple only uses the hashes that are the same between orgs A, B, and C, and ignores the rest.
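A toy sketch of that intersection rule in Python, with invented agency names and hash values:

```python
# Toy illustration of the "multiple jurisdictions" rule described above:
# only hashes reported independently by every agency make it into the
# on-device database, so a single government can't unilaterally slip in
# its own entries. Agency names and hash values are invented.

def intersect_databases(*agency_hashes):
    """Return only the hashes present in every agency's database."""
    sets = [set(h) for h in agency_hashes]
    common = sets[0]
    for s in sets[1:]:
        common &= s
    return common

ncmec = {"aa11", "bb22", "cc33"}   # hypothetical US agency
org_b = {"bb22", "cc33", "dd44"}   # hypothetical EU agency
org_c = {"bb22", "cc33", "ee55"}   # hypothetical third agency

shipped = intersect_databases(ncmec, org_b, org_c)
# Only "bb22" and "cc33" appear in all three; the jurisdiction-specific
# entries "dd44" and "ee55" are excluded from the shipped database.
```

Of course, this only helps if the agencies are genuinely independent, which is the crux of the disagreement in this thread.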

This, along with the auditability Apple recently announced and the other features, makes it so there's literally nothing countries can do to force Apple to comply with some dystopian nightmare…

Of course, with potentially more open operating systems, it would be trivial by comparison for state actors to create a popular/custom ROM for Android that's backdoored.


>I would argue that Apple is creating systems so they can't become an arm of the dystopia.

For the life of me I can't see how catching child molesters is part of a dystopia.


One way to catch child molesters would be to have everyone wear a body cam at all times. Would that be dystopian enough for you?


> Will this stop those bad actors if they're determined? No, of course not, but there are so many ways they can do it already.

This is not a reason to let your guard down on security. Keeping up with securing things against bad actors is a constant battle. Tim Cook put it best [1] and I want to hear how this is not exactly what he described 5 years ago.

[1] https://youtu.be/rQebmygKq7A?t=57


This is the refreshing analysis that I hope to see when I come to HN. Thank you for being reasonable.


> Our system involves both an on-device component where the voucher is created, but nothing is learned, and a server-side component, which is where that voucher is sent along with data coming to Apple service and processed across the account to learn if there are collections of illegal CSAM. That means that it is a service feature.

Neuenschwander seems to be conflating, maybe deliberately, "Apple's servers have to be in the loop" with "the code can only look at photos on iCloud".

You are right, the problem is a "slippery slope," but Apple just built roller skates and there are governments trying to push us down it. Apple is in a far better position to resist those efforts if they say "we don't have this code, we will not build it, and there's no way for it to be safe for our users."

I'd say that's a little different than the slippery slope. Something more like (in)defense in depth.


I have no doubt that Apple doesn't really want to implement this, and has tried to find a "non-intrusive" way of doing it.

The sad truth is that once lawmakers decide they want access to all data all the time, there's nothing Apple, Google or Microsoft can do to stop that, except going out of business.

That being said, the big players in cloud storage have scanned your files for a decade. Google, Microsoft, Amazon, they all scan files being uploaded. Apple may also be doing this for iCloud Drive.

The only way to circumvent this is to use E2E encryption, either by using a service that has E2E encryption built in, or by "rolling your own", e.g. using Cryptomator. Again, this is just one law away from being outlawed.

The thing that bothers me most is that these hashes are somewhat trivial to spoof[1], so if someone were to get hold of the list of hashes, they could start sending spoofed messages on a large scale, causing false positives.

I remember when Echelon was making the rounds on the internet a couple of decades ago, and lots of people started adding X headers to their emails to cause false positives.

[1] https://towardsdatascience.com/black-box-attacks-on-perceptu...
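To make the spoofing concern concrete, here's a toy "average hash" in Python. It's purely illustrative (Apple's NeuralHash is a learned embedding, not this scheme), but it shows the defining property attackers exploit: perceptually similar images collapse to the same hash, so an adversary has a lot of freedom to craft a benign-looking image matching a target hash.

```python
def average_hash(pixels):
    """pixels: an 8x8 grid of grayscale values (0-255). Returns a 64-bit int."""
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    bits = 0
    for p in flat:
        bits = (bits << 1) | (1 if p > mean else 0)
    return bits

# A simple gradient "image"...
img = [[16 * r + 2 * c for c in range(8)] for r in range(8)]
# ...and a copy with a uniform brightness shift applied to every pixel.
brighter = [[p + 3 for p in row] for row in img]

# Same hash despite every pixel changing: the robustness that makes the
# hash useful is exactly what gives attackers room to forge collisions.
assert average_hash(img) == average_hash(brighter)
```

A cryptographic hash like SHA-256 would change completely under that brightness shift; a perceptual hash must not, and that tolerance is the attack surface the linked article describes.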


> The entire point of having a liberal democracy is that we are the government, and we can pull it back from authoritarianism.

The counterpoint: we (as in, the Western-aligned countries) don't have a true liberal democracy and likely never had. Not with the amount of open and veiled influence that religion (and for what it's worth, money) has in our societies - ranging from openly Christian centrist/center-right parties in Europe to entire communities in the US dominated by religious sects of all denominations.

And all of these tend to run on a "think about the children" mindset, especially regarding anything LGBT.

The result? It is very hard if not outright impossible to prevent or roll back authoritarian measures that were sold as "protect the children", since dominant religious-affiliated people and institutions (not just churches, but also thinktanks, parties and media) will put anyone in their crosshairs. Just look at how almost all blog pieces and many comments on the CSAM scanner debacle have an "I don't like pedophilia" disclaimer...


I agree that this is a policy issue. The EU passed a new law just last month regarding this [0].

I thought this was the implementation for the EU. If it was - it was fast?

[0] https://news.ycombinator.com/item?id=27753727


I still see a lot of people thinking the EU law demands the scanning of images for CSAM, but really it permits the scanning of images again. Apparently no one noticed that EU privacy laws actually prohibited the scanning of user images until last year, when companies like Facebook stopped scanning images to avoid fines.

So Apple's move can hardly have been for the EU, but had the European Parliament not passed it, Apple would have had to disable it in the EU.


>Apple is treating CSAM as an engineering problem.

No they're treating it as a political and legal problem (with the UK and the EU being the furthest along on passing legislation). Their implementation is the compromise that preserves end-to-end encryption, given those political winds.


Agree. I think a lot of HN is seeing it as strictly an engineering problem while completely ignoring the blowing political winds.


Absolutely.

We've seen this time after time where the 3 letter agencies give companies an ultimatum: either comply or get shut down.

The worst part is that nobody can do anything about it. Under the cloak of secrecy and threats, a faceless government is shaping major policy, breaking laws, etc.

My only hope is that the biggest companies can speak up, since they have a bit of leverage. They can rally people behind them if the requests are borderline unethical, like spying on everybody.

If Apple can't deal with this, NOBODY else can.


"either comply or get shut down"

I think if a three-letter agency 'shuts down' Apple, the political blowback will be nuclear. The agency will get put on a leash if they pull some shit like that.


That's why I said big companies have some leverage. People will notice.

If they don't speak up then nobody else can.


>The entire point of having a liberal democracy is that we are the government, and we can pull it back from authoritarianism.

We technically can. We also can technically elect people that understand technology. But practically, it's 100x more likely that I, a person who works with technology for a living, magically find a way to provide my family with a decent lifestyle doing something that doesn't use computers at all. And I view the chances of that happening as almost non-existent.


What's the contrarian question?


Hmm. I should perhaps have said "contrarian view," although I'm not sure it's actually even super contrarian in retrospect. Maybe more "maybe we're not asking the right questions."


“never doing anything with computers again, which admittedly will probably be effective.”

Increasingly I believe this is the right and probably only answer. How do we collectively accomplish this, that’s the real problem. It has to be financially and economically solved. Political, social and asymmetric solutions won’t work.


In this case I still think there is an engineering problem too: A bad actor only needs to get a bunch of CSAM pictures in your iCloud library to get you into big trouble.

This is not very hard to do. For example, WhatsApp has the "Save pictures to camera roll" option, on by default.


Speaking of mobile OSes: I am a bit of a newbie myself in this area. I am an Android user but I want to decouple from Google as much as possible. Is there a mobile OS out there that offers a similar experience to, say, Android in terms of functionality, apps, etc. without the drawback of privacy concerns?


The problem is that many Android apps require Google services to function properly. You can try two Android derivatives: CalyxOS[1] that implements a privacy-conscious subset of Google services allowing many Android apps to work properly, and GrapheneOS[2] that excludes Google services altogether at the cost of lower app compatibility[3]. Both require using Google Pixel hardware.

[1] https://calyxos.org/ [2] https://grapheneos.org/ [3] "GrapheneOS vs CalyxOS ULTIMATE COMPARISON (Battery & Speed Ft. Stock Android & iPhone)", https://www.youtube.com/watch?v=7iS4leau088


> GrapheneOS[2] that excludes Google services altogether at the cost of lower app compatibility[3].

It now has https://grapheneos.org/usage#sandboxed-play-services providing broader app compatibility.

That video is quite misleading and it's not the best source for accurate information about GrapheneOS.


Thank you for pointing this out, I will start linking to the updated material.


thanks!


Your best bets are GrapheneOS or CalyxOS. In both cases be prepared to sacrifice a lot in terms of convenience (more with GrapheneOS).


thanks!


> … the less likely it seems that it would make it radically easier for those bad actors to do so

Depends on the implementation. I can easily see a possibility that the check is gated by server-side logic that can be changed at any moment without anybody knowing.


> The more we learn about the way Apple actually implemented this technology, the less likely it seems that it would make it radically easier for those bad actors to do so.

Then why isn’t Apple pushing this angle?


> It's a policy problem. It's a governance problem. In the long run, we solve this, at least in liberal democracies, by voting people into office who understand technology, understand the value of personal encryption

Yes, it is ultimately a policy problem. But the way we get people in office who understand technology is to get technological capabilities into the hands of people before they get into office, as well as the hands of those who will vote for them. Just like "bad facts make bad law", bad engineering makes bad law.

Twenty years ago we forcefully scoffed at the Clipper Chip proposal and the export ban on crypto, because they were so at odds with the actual reality of the digital environment. These days, most people's communications are mediated by large corporations operating on plaintext, and are thus straightforward to monitor and censor. And especially when companies lead the charge, governments expect to have the same ability.

If I could hole up and rely on Free software to preserve my rights indefinitely, I wouldn't particularly care what the Surveillance Valley crowd was doing with their MITM scheme. But I can't, because Surveillance Valley is teaching governments that communications can be controlled while also fanning the flames and creating glaring examples of why they need to be controlled (cf social media "engagement" dumpster fire). And once governments expect that technology can be generally controlled, they will rule any software that does not do their bidding as some exceptional circumvention device rather than a natural capability that has always existed. This entire "trust us" cloud culture has been one big vaccination for governments versus the liberating power of technology that we were excited for two decades ago. This end result has been foreseeable since the rise of webapps, but it's hard to get software developers to understand something when their salary relies upon not understanding it.

Apart from my ][gs (and later my secondhand NeXT), I've never been a huge Apple fan. But I had hoped that by taking this recent privacy tack, they would put workable digital rights into the hands of the masses. Design their system to be solidly secure against everyone but Apple, control the app store to prevent trojans, but then stay out of users' business as software developers should. But brazenly modifying their OS, which should be working for the interests of the user, to do scanning against the interests of the user is a disappointing repudiation of the entire concept of digital rights. And so once again we're back to Free software or bust. At least the Free mobile ecosystem seems to be progressing.


> For instance, the "it only scans photos uploaded to iCloud" element isn't just an arbitrary limitation that can be flipped with one line of code, as some folks seem to think

It might be a happy accident if their architecture limits this feature to CSAM for now, but their ToS are clearly much more general-purpose than that, strategically allowing Apple to pre-screen for any potentially illegal content. If the ToS remain phrased this way, surely the implementation will catch up.


> For instance, the "it only scans photos uploaded to iCloud" element isn't just an arbitrary limitation that can be flipped with one line of code, as some folks seem to think; as Erik Neuenschwander, head of Privacy Engineering at Apple, explained in an interview on TechCrunch[1]:

> > Our system involves both an on-device component where the voucher is created, but nothing is learned, and a server-side component, which is where that voucher is sent along with data coming to Apple service and processed across the account to learn if there are collections of illegal CSAM. That means that it is a service feature.

I don't see how that proves what Neuenschwander says it proves. Sending the voucher to the server is currently gated on whether or not the associated photo is set to be uploaded to iCloud. There's no reason why that restriction couldn't be removed.

The hard part about building this feature was doing the scanning and detection, and building out the mechanism by which Apple gets notified if anything falls afoul of the scanner. Changing what triggers (or does not trigger) the scanning to happen, or results being uploaded to Apple, is trivial.

The only thing that I think is meaningfully interesting here is that it's possible that getting hold of a phone and looking at the vouchers might not tell you anything; only after the vouchers are sent to Apple and algorithm magic happens will you learn anything. But I don't think that really matters in practice; if you can get the vouchers off a phone, you can almost certainly get the photos associated with them as well.

I read through the article you linked, and it feels pretty hand-wavy to me. Honestly I don't think I'd believe what Apple is saying here unless they release a detailed whitepaper on how the system works, one that can be vetted by experts in the field. (And then we still have to trust that what they've detailed in the paper is what they've actually implemented.)

The bottom line is that Apple has developed and deployed a method for locally scanning phones for contraband. Even if they've designed things so that the result is obfuscated unless there are many matches and/or photos are actually uploaded, that seems to be a self-imposed limitation that could be removed without too much trouble. The capability is there; not using it is merely a matter of policy.

> I know that's easy to dismiss as Pollyannaism, but "we need to protect ourselves from our own government" has a pretty dismal track record historically. The entire point of having a liberal democracy is that we are the government, and we can pull it back from authoritarianism.

Absolutely agree, but I fear we have been losing this battle for many decades now, and I'm a bit pessimistic for our future. It seems most people are very willing to let their leaders scare them into believing that they must give up liberty in exchange for safety and security.


>It seems most people are very willing to let their leaders scare them into believing that they must give up liberty in exchange for safety and security.

What Snowden exposed is that these changes are happening in secret, with no democratic support or oversight.

Laws are being circumvented so people aren't being given the choice to support or not.


> There's no reason why that restriction couldn't be removed.

The problem is there's no reason Apple couldn't do anything. All the ML face data gathered on device would be way more valuable than these hash vouchers.


What apple is doing is fucked up. Full stop. Mental gymnastics are required to go beyond this premise.


The issue is, it is all software settings. The iMessage filter is only active for accounts held by minors; that can be changed by Apple, and might not even be a client-side setting. Also, the code that scans uploaded images could be used anywhere, to scan every picture stored on the device. It is all software, and all that's needed is one update.


Could you not potentially infer a lot about a person if they have X photo content saved, via gathering that data elsewhere?

E.g. a simple one: anyone that may have saved a "winnie the pooh" meme image?

And it's not like Apple is going to publish a list of images that the system searched for. It would presumably be easy to differentiate between child abuse content and other types of images, but would it ever get out of Apple if someone went rogue, or was "experimenting" to, say, "gather stats" on how many people have X image saved?


> For instance, the "it only scans photos uploaded to iCloud" element isn't just an arbitrary limitation that can be flipped with one line of code, as some folks seem to think; as Erik Neuenschwander, head of Privacy Engineering at Apple, explained in an interview on TechCrunch[1]:

>> Our system involves both an on-device component where the voucher is created, but nothing is learned, and a server-side component, which is where that voucher is sent along with data coming to Apple service and processed across the account to learn if there are collections of illegal CSAM. That means that it is a service feature.

The first paragraph does not follow from the detail in the second at all. Setting aside how abstract the language is, what about adding more complexity to the system is preventing Apple from scanning other content?

This is all a misdirect: they're saying "look at how complex this system is!" and pretending that any of it makes changing how it operates "difficult", when they built and control the entire system, including its very existence.


The system isn't entirely on-device, but relies on uploading data to Apple's servers. Hence why I said that it's not an arbitrary limitation that it only scans photos uploaded to iCloud. The system literally has to upload enough images whose "safety vouchers" trigger a match to pass the reporting threshold, and critical parts of that calculation are happening on the server side.

I think what you're arguing is that Apple could still change what's being scanned for, and, well, yes: but that doesn't really affect my original point, which is that this is a policy/legal issue. If you assume governments are bad actors, then yes, they could pressure Apple to change this technology to scan for other things -- but if this technology didn't exist, they could just as easily pressure Apple to do it all on the servers. I think a lot of the anger comes from "they shouldn't be able to do any part of this work on my device," and emotionally, I get that -- but technologically, it's hard for me to shake the impression that "what amount happens on device vs. what amount happens on server" is a form of bikeshedding.


>"what amount happens on device vs. what amount happens on server" is a form of bikeshedding

There's a massive difference between a search carried out on a device someone "owns" and carries with them at all times, and a server owned by someone else.

To claim otherwise is absurd. Hence all this backlash.


> Apple will publish a Knowledge Base article containing a root hash of the encrypted CSAM hash database included with each version of every Apple operating system that supports the feature. Additionally, users will be able to inspect the root hash of the encrypted database present on their device, and compare it to the expected root hash in the Knowledge Base article.

This is just security theater; they already sign the operating system images where the database resides. And there is no way to audit that the database is what they claim it is, that it doesn't contain multiple databases that can be activated under certain conditions, etc.

> This feature runs exclusively as part of the cloud storage pipeline for images being uploaded to iCloud Photos and cannot act on any other image content on the device

Until a 1-line code change happens that hooks it into UIImage.
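For what it's worth, the kind of root hash described there is easy to sketch. The toy Merkle root below (my own illustration, not Apple's actual construction) shows why publishing it proves so little to an outside auditor: the root commits to some opaque byte strings, but says nothing about what those entries are or whether a second database ships alongside.

```python
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(leaves):
    """Hash each leaf, then pairwise-hash levels up to a single root."""
    level = [h(leaf) for leaf in leaves]
    while len(level) > 1:
        if len(level) % 2:                 # duplicate the last node if odd
            level.append(level[-1])
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

# Opaque entries standing in for the shipped hash database.
entries = [b"opaque-entry-%d" % i for i in range(4)]
root = merkle_root(entries)

# Swapping in different entries yields a different, equally opaque root;
# comparing roots tells you the bytes match, not what the bytes mean.
other_root = merkle_root([b"something-else"] + entries[1:])
assert root != other_root and len(root) == 32
```

Checking your device's root against the Knowledge Base article only confirms you got the same blob as everyone else, which is the grandparent's point.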


> And there is no way to audit that the database is what they claim it is, doesn't contain multiple databases that can be activated under certain conditions, etc.

Although this is true, the same argument already applies to "your phone might be scanning all your photos and stealthily uploading them" -- Apple having announced this program doesn't seem to have changed the odds of that.

At some point you have to trust your OS vendor.


Which is why I am confused by a lot of this backlash. Apple already controls the hardware, software, and services. I don't see why it really matters where in that chain the scanning is done when they control the entire system. If Apple can't be trusted with this control today, why did people trust them with this control a week ago?


Because people (HN especially) overestimate how easy it is to develop and deploy "stealth" software that is never detected in a broad way.

The best covert exfiltration is when you can hit individual devices in a crowd, so people already have no reason to be suspicious. But you're still leaving tracks - connections, packet sizes etc. if you actually want to do anything and you only need to get caught once for the game to be up.

This, on the other hand, is essentially the perfect channel for this type of surveillance, because it no longer needs to be covert! Everyone is being told to expect it to exist, that it's normal, and that it will receive frequent updates. No longer is there a danger that a security researcher will discover it; it's "meant" to be there.


This is the conclusion that I personally arrived at. When I confronted some friends of mine with this, they gave me some good points to the contrary.

Letting Apple scan for government-related material on your device is a slippery slope to government surveillance and a hard line should be drawn here. Today, Apple may only be scanning your device for CSAM material for iCloud. Tomorrow, Apple may implement similar scanning elsewhere, gradually expand the scope of scanning to other types of content and across the entire device, implement similar government-enforcing processes on the device, and so on. It's not a good direction for Apple to be taking, regardless of how it works in this particular case. A user's device is their own personal device and anything that inches toward government surveillance on that device, should be stopped.

Another point made was that government surveillance never happens overnight. It is always very gradual. People don't mean to let government surveillance happen and yet it does because little things like this evolve. It's better to stop potential government surveillance in its tracks right now.


Yeah. I will say, though, I am happy that people are having the uncomfortable realization that they have very little control over what their iPhone does.


Now we just need everyone to have that same realization about almost all the software we use on almost all the devices we own. As a practical matter 99.99% of us operate on trust.


Remember that emissions cheating scandal? We supposedly had a system in place to detect bad actors, and yet it was only detected in a rare case by some curious student exploring how things work.


Which is why open(-ish[0]) things are good, because people can get curious and see how they work.

[0] I understand the emissions cheating was not supposed to be open, but a relatively open system allowed said student to take a peek and see what was going on.


That’s where I’m at. They could have just started doing this without even saying anything at all.


But in that case they would eventually be caught red-handed and won't get to do the "for the children" spiel and get it swept under the rug like it's about to be.


The goal is not for it to be swept under the rug. The goal is for it to deflect concerns over the coming Privacy Relay service.


The government cares far more about other things than CSAM, like terrorism, human and drug trafficking, organized crime, and fraud. Unless the CSAM detection system is going to start detecting those other things and report them to authorities, as well, it won't deflect any concerns over encryption or VPNs.


Their private relay service appears orthogonal to CSAM… it won’t make criminals and child abusers easier or harder to catch, and it doesn’t affect how people use their iCloud Photos storage.


These people are commonly prosecuted using evidence that includes server logs showing their static IP address.

Read the evidence from past trials and it is obvious. See also successful and failed attempts to subpoena this info from VPN services.

Only people with iCloud will be using the relay.

It is true that, on the surface, the photo scanning is disconnected from the relay. However, Apple only needs a solid answer that handles the bad optics of what you can do with the Tor-like anonymity of iCloud Privacy Relay.

However, if you look more closely, the CSAM service and its implementation are crafted exactly around the introduction of the relay.


Agreed. I think this just shined a spotlight for a lot of people who didn’t really think about how much they had to trust Apple.


They risk a whistleblower if they implement this without announcing it, in a country with a free press.

It's better to be forthright, or you risk your valuation on the whims of a single employee.


What happens if someone tries to coerce Apple into writing backdoor code? Engineers at Apple could resist, resign, slow roll the design and engineering process. They could leak it and it would get killed. Things would have to get very very bad for that kind of pressure to work.

On the other hand, once Apple has written a backdoor enthusiastically themselves, it's a lot easier to force someone to change how it can be used. The changes are small, and compliance can be immediately verified and refusal punished. To take it to its logical extreme: you cannot really fire or execute people who delay something (especially if you lack the expertise to tell how long it should take). But you can fire or execute people who refuse to flip a switch.

This technology deeply erodes Apple and its engineers' ability to resist future pressure. And the important bit here is that the adversary isn't all-powerful. It can coerce you to do things in secret, but its power isn't unlimited. See what happened with Yahoo.[0]

[0] https://www.reuters.com/article/us-yahoo-nsa-exclusive/exclu...


If you're uploading to the cloud, you have to trust a lot more than just your OS vendor (well, in the default case, your OS vendor often == your cloud vendor, but the access is a lot greater once the data is on the cloud).

And if your phone has the capability to upload to the cloud, then you have to trust your OS vendor to respect your wish if you disable it, etc.

It's curious that this is the particular breaking point on the slope for people.

The "on device" aspect just makes it more immediate feeling, I guess?


Yes, you had to trust Apple, but the huge difference with this new thing is that hiding behind CSAM gives them far more plausible deniability (legally obligated, in fact, because showing you the images those hashes came from would be illegal) and makes their claims far harder to verify.

In other words, extracting the code and analysing it to determine that it does do what you expect is, although not easy, still legal. But the source, the CSAM itself, is illegal to possess, so you can't do that verification much less publish the results. It is this effective legal moat around those questioning the ultimate targets of this system which people are worried about.


Surely they could do their image matching against all photos in iCloud without telling you in advance, and then you'd be in exactly the same boat? Google was doing this for email as early as 2014, for instance, with the same concerns about its extensibility raised by the ACLU: https://www.theguardian.com/technology/2014/aug/04/google-ch...

So in a world where Apple pushes you to set up iCloud Photos by default, and can do whatever they want there, and other platforms have been doing this sort of thing for years, it's a bit startling that "on device before you upload" vs "on uploaded content" triggers far more discontent?

Maybe it's that Apple announced it at all, vs doing it relatively silently like the others? Apple has always had access to every photo on your device, after all.


It isn't startling; people trust they can opt out of iCloud Photos.


If you trust that you can opt out of iCloud Photos to avoid server-side scanning, trusting that this on-device scanning only happens as part of the iCloud Photos upload process (with the only way it submits the reports being as metadata attached to the photo-upload, as far as I can tell) seems equivalent.

There's certainly a slippery-slope argument, where some future update might change that scanning behavior. But the system-as-currently-presented seems similarly trustable.


I trust Apple doesn't upload everyone's photos despite opting out because it would be hard to hide.


I bet it'd take a while. The initial sync for someone with a large library is big, but just turning on upload for new pictures is only a few megabytes a day. Depending on how many pictures you take, of course. And if you're caught, an anodyne "a bug in iCloud Photo sync was causing increased data usage" statement and note in the next iOS patch notes would have you covered.

And that's assuming they weren't actively hiding anything by e.g. splitting them up into chunks that could be slipped into legitimate traffic with Apple's servers.


Yeah, it's weird. Speaking purely personally, whether the scanning happens immediately-before-upload on my phone or immediately-after-upload in the cloud doesn't really make a difference to me. But this is clearly not a universal opinion.

The most-optimistic take on this I can see is that this program could be the prelude to needing to trust fewer people. If Apple can turn on e2e encryption for photos, using this program as the PR shield from law enforcement to be able to do it, that'd leave us having to trust only the OS vendor.


> Speaking purely personally, whether the scanning happens immediately-before-upload on my phone or immediately-after-upload in the cloud doesn't really make a difference to me.

What I find interesting is that so many people find it worse to do it on device, because of the risk that they do it to photos you don't intend to upload. This is clearly where Apple got caught off-guard, because to them, on-device = private.

It seems like the issue is really the mixing of on-device and off. People seem to be fine with on-device data that stays on-device, and relatively fine with the idea that Apple gets your content if you upload it to them. But when they analyze the data on-device, and then upload the results to the cloud, that really gets people.


Is this really surprising to you? I'm not trying to be rude, but this is an enormous distinction. In today's world, smartphones are basically an appendage of your body. They should not work to potentially incriminate its owner.


They should not work to potentially incriminate its owner.

But that ship has long sailed, right?

Every packet that leaves a device potentially incriminates its owner. Every access point and router is a potential capture point.


When I use a web service, I expect my data to be collected by the service, especially if it is free of charge.

A device I own should not be allowed to collect and scan my data without my permission.


A device I own should not be allowed to collect and scan my data without my permission.

It's not scanning; it's creating a cryptographic safety voucher for each photo you upload to iCloud Photos. And unless you reach a threshold of 30 CSAM images, Apple knows nothing about any of your photos.
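The "threshold of 30" claim rests on threshold secret sharing. As a hedged sketch of just that primitive, here's textbook Shamir sharing in Python; Apple's real design layers it with private set intersection and per-image encryption, so treat this as an illustration of why fewer than t matching vouchers reveal nothing about the account-level key, not as their implementation.

```python
import random

P = 2**61 - 1  # a Mersenne prime; a toy field, large enough for the demo

def make_shares(secret, t, n):
    """Split `secret` into n shares such that any t of them reconstruct it."""
    coeffs = [secret] + [random.randrange(P) for _ in range(t - 1)]
    def f(x):
        return sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P
    return [(x, f(x)) for x in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x = 0 recovers the constant term (the secret)."""
    total = 0
    for xi, yi in shares:
        num = den = 1
        for xj, _ in shares:
            if xj != xi:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        total = (total + yi * num * pow(den, P - 2, P)) % P
    return total

secret = 123456789                         # stands in for the account-level key
shares = make_shares(secret, t=30, n=100)  # one share per matching voucher

assert reconstruct(shares[:30]) == secret  # 30 matches: key recoverable
assert reconstruct(shares[:29]) != secret  # 29 matches: (almost surely) nothing
```

With a degree-29 polynomial, any 29 points are consistent with every possible secret, which is the information-theoretic sense in which "Apple knows nothing" below the threshold.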


From the point of view of how image processing works, what is happening can indeed be called “scanning”.


This seems like a necessary discussion to have in preparation for widespread, default end to end encryption.


Them adding encrypted hashes to photos you don't intend to upload would be pointless and not much of a threat, given the photos themselves are there. They don't do it, but it doesn't feel like a huge risk.


No, the threat model differs entirely. Local scanning introduces a whole host of single points of failure, including the 'independent auditor' & involuntary scans, that risk the privacy & security of all local files on a device. Cloud scanning largely precludes these potential vulnerabilities.


Your phone threat model should already include "the OS author has full access to do whatever they want to whatever data is on my phone, and can change what they do any time they push out an update."

I don't think anyone's necessarily being too upset or paranoid about THIS, but maybe everyone should also be a little less trusting of every closed OS - macOS, Windows, Android as provided by Google - that has root access too.


Sure, but that doesn't change the fact that the vulnerabilities with local scanning remain a significant superset of cloud scanning's.

Apple has built iOS off user trust & goodwill, unlike most other OSes.


Cloud Scanning vulnerability: no transparency over data use. On the phone, you can always confirm the contents of what’s added to the safety voucher’s associated data. On the cloud, anything about your photos is fair game.

Where does that fit in your set intersection?


> On the phone, you can always confirm the contents of what’s added to the safety voucher’s associated data.

...except you can't? Not sure where these assumptions come from.


It’s code running your device is the point, so while “you” doesn’t include everyone, it does include people who will verify this to a greater extent than if done on cloud.


It differs, but iOS already scans images locally, and we really don't know what they do with the metadata or what "hidden" categories there are.


Yes, exactly why Apple breaching user trust matters.


And how is telling you in great detail about what they’re planning to do months before they do it and giving you a way to opt out in advance a breach of trust? What more did you expect from them?


> What more did you expect from them?

Well they could not do it.


You might prefer that, but it doesn’t violate your privacy for them to prefer a different strategy.


why even ask the question " What more did you expect from them?" if you didn't care about the answer?

I gave a pretty obvious and clear answer to that, and apparently you didn't care about the question in the first place, and have now misdirected to something else.

I am also not sure what possible definition of "privacy" that you could be using, that would not include things such as on device photo scanning, for the purpose of reporting people to the police.

Like, let's say it wasn't Apple doing this. Let's say it was the government. As in, the government required every computer that you own to be monitored for certain photos, at which point the info would be sent to them and they would arrest you.

Without a warrant.

Surely, you'd agree that this violates people's privacy? The only difference in this case, is that the government now gets to side step 4th amendment protections, by having a company do it instead.


My question was directed at someone who claimed their privacy was violated, and I asked them to explain how they would’ve liked their service provider to handle a difference in opinion about what to build in the future. I don’t think your comment clarifies that.


> how they would’ve liked their service provider to handle a difference in opinion about what to build in the future

And the answer is that they shouldn't implement things that violate people's privacy, such as things that would be illegal for the government to do without a warrant.

That is the answer. If it is something that the government would need a warrant for, then they shouldn't do it, and doing it would violate people's privacy.


You forgot, 'after it leaked'


It's almost certain the "leak" was from someone they had pre-briefed prior to a launch. You don't put together 80+ pages of technical documentation with testimony from multiple experts in 16 hours.


'Almost certain'? Have you heard of contingency planning?


What’s the difference between hybrid cloud/local scanning “due to a bug” checking all your files and uploading too many safety vouchers and cloud scanning “due to a bug” uploading all your files and checking them there?


...because cloud uploads require explicit user consent, practically speaking? Apple's system requires none.


Wouldn't both of those scenarios imply that the "bug" is bypassing any normal user consent? They're only practically different in that the "upload them all for cloud-scanning" one would take longer and use more bandwidth, but I suspect very few people would notice.


I think the difference lies in the visibility of each system in typical use. Apple's local scanning remains invisible to the user, in contrast to cloud uploading.


[flagged]


Ditto, too bad you got flagged earlier


What about trust-but-verify?

If the OS was open source and supported reproducible builds, you would not have to trust them; you could verify what it actually does and make sure the signed binaries they ship you actually correspond to the source code.

One kinda wonders what they have to hide if they talk so much about user privacy yet don't provide any means for users to verify their claims.
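Reproducible builds reduce that verification to a hash comparison: if your local build from the audited source is bit-for-bit identical to what the vendor signed and shipped, the digests match. A minimal illustration (the byte strings below are placeholders, not real binaries):

```python
import hashlib

def digest(data: bytes) -> str:
    # SHA-256 hex digest of a blob, e.g. a compiled binary.
    return hashlib.sha256(data).hexdigest()

# Pretend these are the vendor's shipped binary, your own rebuild from the
# audited source tree, and a build with something quietly added.
shipped  = b"\x7fELF...bytes produced by the documented build"
rebuilt  = b"\x7fELF...bytes produced by the documented build"
tampered = b"\x7fELF...bytes produced by the documented build, plus a snitch"

assert digest(shipped) == digest(rebuilt)    # reproducible: claim verified
assert digest(shipped) != digest(tampered)   # any hidden change is visible
```

In practice the hard part is making the build deterministic (timestamps, paths, compiler versions); the comparison itself is this trivial.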


Yes, they can technically already do so, but that is not the question. The question is what can they legally do and justify with high confidence in the event of a legal challenge.

Changes to binding contractual terms that allow broad readings and provide legal justification for future overreach are dangerous. If they really are serious that they are going to use these new features in a highly limited way then they can put their money where their mouth is and add legally binding contractual terms that limit what they can do with serious consequences if they are found to be in breach. Non-binding marketing PR assurances that they will not abuse their contractually justified powers are no substitute for the iron fist of legal penalty clause.


Yeah, that's true, although to do some sort of mass scanning stealthily they would need a system exactly like what they built here; if they tried to upload everything for scanning, the data use would be enormous and give it away.

I guess it comes down to that I don't trust an OS vendor that ships an A.I. based snitch program that they promise will be dormant.


Speaking cynically, I think that them having announced this program like they did makes it less likely that they have any sort of nefarious plans for it. There's a lot of attention being paid to it now, and it's on everyone's radar going forwards. If they actually wanted to be sneaky, we wouldn't have known about this for ages.


They'd have to be transparent about it, as someone would easily figure it out. You have no way of verifying the contents of that hash database. Once the infrastructure is in place (i.e. on your phone), it's a lot easier to expand on it. People have short memories and are easily desensitized; after a year or two of this, everyone will forget, and then we'll be in an uproar about it expanding to include this or that...


You're making the mistake of anthropomorphizing a corporation. Past a certain size, corporations start behaving less like people and more like computers, or maybe profit-maximizing sociopaths. The intent doesn't matter, because 5 or 10 years down the line, it'll likely be a totally different set of people making the decision. If you want to predict a corporation's behavior, you need to look at the constants (or at least, slower-changing things), like incentives, legal/technical limitations, and internal culture/structure of decision-making (e.g. How much agency do individual humans have?).


I feel that I was stating the incentives, though.

This being an area people are paying attention to makes it less likely they'll do unpopular things involving it, from a pure "we like good PR and profits" standpoint. They might sneak these things in elsewhere, but this specific on-device-scanning program has been shown to be a risk even at its current anodyne level.


No they wouldn’t need a system like this. They already escrow all your iCloud Backups, and doing the scanning server side allows you to avoid any scrutiny through code or network monitoring.


> At some point you have to trust your OS vendor.

Yes, and we were trusting Apple. And now this trust is going away.


> now this trust is going away

Is it really? There are some very loud voices making their discontent felt. But what does the Venn diagram look like between 'people who are loudly condemning Apple for this' and 'people who were vehemently anti-Apple to begin with'?

My trust was shaken a bit, but the more I hear about the technology they've implemented, the more comfortable I am with it. And frankly, I'm far more worried about gov't policy than I am about the technical details. We can't fix policy with tech.


> I'm far more worried about gov't policy than I am about the technical details. We can't fix policy with tech.

Yeah. I don't really understand the tech utopia feeling that Apple could simply turn on e2ee and ignore any future legislation to ban e2ee. The policy winds are clearly blowing towards limiting encryption in some fashion. Maybe this whole event will get people to pay more attention to policy...maybe.


> Until a 1-line code change happens that hooks it into UIImage.

I really don't understand this view. You are using proprietary software, you are always an N-line change away from someone doing something you don't like. This situation doesn't change this.

If you only use open source software and advocate for others to do the same, I would understand it more.


Did you verify all the binaries that you run are from compiled source code that you audited? Your BIOS? What about your CPU and GPU firmware?

There is always a chain of trust that you end up depending on. OSS is not a panacea here.


It's not a panacea, but the more implausible the mechanism, the less likely it's going to be used on anyone but the most high-value targets.

(And besides, it's far more likely that this nefarious Government agency will just conceal a camera in your room to capture your fingers entering your passwords.)



> I really don't understand this view. You are using proprietary software, you are always an N-line change away from someone doing something you don't like. This situation doesn't change this.

And I don't understand why it has to be black and white. I think the N is very important in this formula, and if it is low that is a cause for concern. Like an enemy building a missile silo on an island just off your coast while promising it's just for defense.

All the arguments I see are along the lines of "Apple can technically do anything they want anyways, so this doesn't matter". But maybe you're right and moving to FOSS is the only long-term solution; that's what I'm doing if Apple goes through with this.


I'd leave this one to the lawyers. I'm not one, but I don't think the court will evaluate the number of lines of code required to help.


The size of N doesn't really matter. I'm sure Apple ships large PRs in every release, as any software company does.


Maybe not if you assume Apple is evil, but in the case of Apple being well-intentioned but having its hand forced, they will have a much harder time resisting a 1-line change than a mandate to spend years developing a surveillance system.


Apple shipped iCloud Private Relay which is a “1-line code change that hooks into CFNetwork” away from MITMing all your network connections, by this standard.


For me the standard is that I don't want any 1-line code change between me and near-perfect Orwellian surveillance.


Since your one-liners seem to be immensely dense with functional changes, I can’t understand how you trust any software.


Any connection worth its salt should be TLS protected.


Also in CFNetwork. Probably a one line change to replace all session keys with an Apple generated symmetric key.


> And there is no way to audit that the database is what they claim it is, doesn't contain multiple databases that can be activated under certain conditions, etc.

They describe a process for third parties to audit that the database was produced correctly.


Do we have any idea how the NCMEC database is curated? Are there cartoons from Hustler depicting underage girls in distress? Green text stories stating they are true about illegal sexual acts? CGI images of pre-pubescent looking mythical creatures? Manga/Anime images which are sold on the Apple Store? Legitimate artistic images from books currently sold? Images of Winnie the Pooh the government has declared pornographic? From the amount of material the Feds claim is being generated every year I would have to guess all of this is included. The multi-government clause is completely pointless with the five-eyes cooperation.

The story here is that there is a black box of pictures. Apple will then use their own black box of undeclared rules to pass things along to the feds which they have not shared what would be considered offending in any way shape or form other than "we will know it when we see it". Part of the issue here is that Apple is taking the role of a moral authority. Traditionally Apple has been incredibly anti-pornography and I suspect that anything that managed to get into the database will be something Apple will just pass along.


Apple is manually reviewing every case to ensure it’s CSAM. You do have to trust them on that.

But if your problem is with NCMEC, you’ve got a problem with Facebook and Google who are already doing this too. And you can’t go to jail for possessing adult pornography. So even if you assume adult porn images are in the database, and Apple’s reviewers decide to forward them to NCMEC, you would still not be able to be prosecuted, at least in the US. Ditto for pictures of Winnie the Pooh. But for the rest of what you describe, simulated child pornography is already legally dicey as far as I know, so you can’t really blame Apple or NCMEC for that.


Facebook I completely approve of; you are trafficking data at that point if you are posting it. I just recall the days of Usenet and Napster, when I would download at random and sometimes the evil would mislabel things to cause trauma. I do not download things at random any more, but when I was that age it would have been far more appropriate to notify my parents than to notify the government.

In any case it is likely the government would try to negotiate a plea to get you into some predator database to help fill the law enforcement coffers even if they have no lawful case to take it to court once they have your name in their hands.


> Ditto for pictures of Winnie the Pooh.

References to Winnie the Pooh in these discussions are about China, where images of Winnie are deemed to be coded political messages and are censored.

The concern is that Apple are building a system that is ostensibly about CSAM, and that some countries such as China will then leverage their power to force Apple to include whatever political imagery in the database as well. Giving the government there the ability to home in on who is passing around those kinds of images in quantity.

If that seems a long way indeed from CSAM, consider something more likely to fit under that heading by local government standards. There's a country today, you may have heard of it, one the USA is busy evacuating its personnel from, leaving the population to an awful fate, where "female teenagers in a secret school not wearing a burqa" may be deemed by the new authorities to be sexually titillating, inappropriate, and illegal, and if they find out who is sharing those images, the punishments are much worse than mere prison. Sadly there are a plethora of countries that are very controlling of females of all ages.


Drawings are prosecutable in many countries including Canada, the UK, and Australia. Also, iCloud sync is enabled by default when you set up your device, whereas the Facebook app at least is sandboxed and you have to choose to upload your photos.


> You do have to trust them on that.

If this system didn't exist, nobody would have to trust Apple.

> you would still not be able to be prosecuted

But I wouldn't want to deal with a frivolous lawsuit, or have a record on social media of being brought up on CSA charges.


I don't like the idea of stuff running on my device, consuming my battery and data, when the only point is to see if I am doing something wrong.

An analogy I can come up with is: the government hires people to visit your house every day, and while they’re there they need your resources (say, food, water, and electricity). In other words, they use up some of the stuff you would otherwise be able to use only for yourself — constantly — and the only reason they’re there is to see if you’re doing anything wrong. Why would you put up with this?


A lot of things work this way. The government is quite literally taxing some of your income and using it to check if you're doing anything wrong.

When a cop is checking your car for parking violations, they're using some of your taxpayer dollars to see if you're doing something wrong. You don't benefit at all from your car getting checked. But you probably benefit from everyone else's cars being checked, which is why we have this system.

Or similarly, you don't benefit from airport security searching your luggage. It's a waste of your time and invasion of your privacy if you don't have anything forbidden and even worse for you if you do have something forbidden. But you help pay for airport security anyway because it might benefit you when they search other people's luggage.

You don't benefit at all from having your photos scanned. But you (or someone else) might benefit from everyone else's photos being scanned (if it helps catch child abusers or rescue abused children). It's just a question of whether you think the benefit is worth the cost.


So 3rd amendment defense? These are digital soldiers being quartered in our digital house.


Probably a clarification of the 4th is in order; there are multiple levels of government and private institutions at play to disguise the fact that the government is pressuring private institutions to do search and seizure on its behalf.


A better fitting analogy would be you ask FedEx to pick up a package at your house for delivery, and they have a new rule that they won't pick up packages unless they can look inside to make sure that they are packed correctly and do not contain anything that FedEx does not allow.

When they open and look into the package while in your house for the pickup, they are using your light to see and your heating/cooling is keeping the driver comfortable while they inspect the package.


I'd say the analogy is closer to, they'll x-ray your package to make sure there aren't any prohibited/dangerous items.


The point of it is to make sure iCloud Photos remains a viable service in light of real and perceived regulatory threats, and possibly leave the door open to end to end encryption in the future.


This is a false dichotomy that is being pushed constantly online. E2EE is possible without this tech and in fact is only useful without tech like this to act as a MITM.


There is no regulatory threat within the US that could require this happen, if this was demanded by the government it would be a blatant violation of the 4th amendment. Apple should have stood their ground if this was in response to perceived government pressure.


There are two gov. issues. One is if the FBI comes knocking and the other is new laws. The government could absolutely write a law banning e2ee so that when the FBI does knock with a warrant they can get access.

In the past, Apple has done what they can to stand their ground against the first, but they (like any other company) will have to comply with any laws passed.

Whether the 'if a warrant is obtained the gov. should be granted access' law is a violation of the 4th amendment remains to be seen.


When the FBI does knock with a warrant they already get access to data on iCloud. iCloud is not meaningfully encrypted to prevent this.

The FBI is unable to get a warrant to search data on everyone's phones, regardless of what laws are passed. They might be able to get a warrant to search all the data on Apple's servers (I would consider this unlikely, but I don't know of precedent in either direction), but that data is fundamentally not on everyone's phones. This isn't a novel legal question; you cannot search everyone's devices without probable cause that "everyone" committed a crime.


IMHO, the relevant regulations are pretty carefully worded to make 4th amendment defenses harder. In this case, Apple has some liability for criminal activity on their platform whether it was E2E encrypted or not.


So they open a huge back door instead of waiting for a threat and fighting that?


I think this is the first time they have mentioned that you will be able to compare the hash of the database on your device with a hash published in their KB article. They also detailed that the database is only the intersection of hash lists from two child safety organizations under separate governmental jurisdictions.
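Both mechanisms are easy to sketch: ship only the intersection of the two organizations' lists, then publish a digest of the shipped database for device owners to compare against. (The hash strings and digest format below are invented for illustration; Apple's actual on-device format is different.)

```python
import hashlib

# Hypothetical perceptual-hash lists from two child-safety organizations
# operating under separate governmental jurisdictions.
org_a = {"a1b2", "c3d4", "e5f6"}
org_b = {"c3d4", "e5f6", "0a0b"}

# Only entries vetted by BOTH organizations make it into the on-device DB,
# so a single government poisoning one list accomplishes nothing.
shipped_db = sorted(org_a & org_b)

# A root digest Apple could publish in a KB article; users compare it to
# the digest their device reports for its local copy.
db_digest = hashlib.sha256("\n".join(shipped_db).encode()).hexdigest()
print(shipped_db)   # ['c3d4', 'e5f6'] -- the unilateral entries are dropped
```

The intersection defends against one rogue jurisdiction; the published digest defends against a targeted device receiving a different database than everyone else.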

My immediate thought is that this could still be poisoned by Five Eyes participants, and that it does not preclude state actors forcing Apple to replicate this functionality for other purposes (which would leave the integrity of the CSAM database alone, thus not triggering the tripwire).


> this could still be poisoned by Five Eyes participants, and that it does not preclude state actors forcing Apple to replicate this functionality for other purposes

The thing is, if this is your threat model you're already screwed. Apple has said they comply with laws in jurisdictions where they operate. The state can pass whatever surveillance laws they want, and I do believe Apple has shown they'll fight them to an extent, but at the end of the day they're not going to shut down the company to protect you. This all seems orthogonal to the CSAM scanning.

Additionally, as laid out in the report, the human review process means even if somehow there is a match that isn't CSAM, they don't report it until it has been verified.


The opportunity being to add general functions to photo-viewing apps that add a little entropy to every image (for this specific purpose), rotating the hashes and rendering the dual databases useless.

Monetization, I guess, being to hope for subscribers on GitHub, as this could likely just be a nested dependency that many apps import. A convenient app for this specific purpose might not last long in app stores.


Perceptual hashes are specifically designed to be resistant to minor visual alterations.
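To make that concrete, here is a toy average-hash over an 8x8 grayscale grid (a far simpler scheme than Apple's NeuralHash, used only to show why small perturbations tend not to move a perceptual hash):

```python
def average_hash(pixels):
    # pixels: 8x8 grid of grayscale values (0-255) -> 64-bit integer hash.
    # Each bit records whether a pixel is above the image's mean brightness.
    flat = [p for row in pixels for p in row]
    avg = sum(flat) / len(flat)
    bits = 0
    for p in flat:
        bits = (bits << 1) | int(p > avg)
    return bits

def hamming(a, b):
    # Number of differing bits between two 64-bit hashes.
    return bin(a ^ b).count("1")

original = [[(r * 8 + c) * 4 for c in range(8)] for r in range(8)]
noisy    = [[p + 3 for p in row] for row in original]    # "a little entropy"
inverted = [[252 - p for p in row] for row in original]  # different image

print(hamming(average_hash(original), average_hash(noisy)))     # 0: same hash
print(hamming(average_hash(original), average_hash(inverted)))  # 64: far apart
```

Uniform noise shifts the mean along with every pixel, so no bit flips; defeating a well-designed perceptual hash takes far more distortion than "a little entropy" per image.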


Let the cat and mouse race begin

Any perceptual hash apps to test that theory on?


A key point that needs to be mentioned: we strongly dislike being distrusted.

It might well be a genetic heritage. Being trusted in a tribe is crucial to survival, and so is likely wired deep into our social psychology.

Apple is making a mistake by ignoring that. This isn’t about people not trusting Apple. It’s about people not feeling trusted by Apple.

Because of this, it doesn’t matter how trustworthy the system is or what they do to make it less abusable. It will still represent distrust of the end user, and people will still feel that in their bones.

People argue about the App Store and not being trusted to install their own Apps etc. That isn’t the same. We all know we are fallible and a lot of people like the protection of the store, and having someone to ‘look after’ them.

This is different and deeper than that. Nobody wants to be a suspect for something they know they aren't doing. It feels dirty.


The part of Federighi's "interview" where he can't understand how people perceive this as a back door [1] seems incredibly out of touch. The back door is what everyone is talking about. Someone at Apple should at least be able to put themselves in their critics' shoes for a moment. I guess we need to wait to hear Tim Cook explain how this is not what he described 5 years ago [2].

[1] https://youtu.be/OQUO1DSwYN0?t=425

[2] https://youtu.be/rQebmygKq7A?t=57


It’s not a back door in any sense of the word. That’s why he is surprised people see it as one.

It really only does what they say it does, and it really is hard to abuse.

But that doesn’t matter. The point is that even so, it makes everyone into a suspect, and that feels wrong.


> It’s not a back door in any sense of the word.

It is. It's a simple matter for two foreign governments to decide they don't want their people to criticize the head of state with memes, and then insert such images into the database Apple uses for scanning.

Apple's previous position on privacy was to make such snooping impossible because they don't have access to the data. Now they are handing over access.

What I and thousands of others online are describing ought to be understandable by anyone at Apple. The fact that an Apple exec can sit there in a prepared interview and look perplexed about how people could see this as a back door is something I don't understand at all. This "closing his ears" attitude may indicate he is full of it.


Edit: I reconsidered my previous reply.

That really doesn't sound like anything I'd describe as a "back door". A back door implies general purpose access. A system which required the collusion of multiple governments and Apple and their child abuse agencies simply is not that.

One of the casualties of this debate is that people are using terms that make things sound worse than they are. If you can’t get at my filesystem, you don’t have a back door. I understand the motive for stating the case as harshly as possible, but I think it’s misguided.

Having said this, I would find it interesting to hear what Federighi would say about this potential abuse case.


> A system which required the collusion of multiple governments and Apple and their child abuse agencies simply is not that.

Agree to disagree.

> I would find it interesting to hear what Federighi would say about this potential abuse case.

Personally I would not. That's a political consideration and not something I want to hear a technologist weigh in on while defending their technology. Apple's previous stance, with which I agree, was to not give humans any chance to abuse people's personal data.

https://youtu.be/rQebmygKq7A


> Personally I would not. That's a political consideration and not something I want to hear a technologist weigh in on while defending their technology.

It’s not. The abuse case flows from their architecture. Perhaps it isn’t as ‘easy’ as getting multiple countries to collude with Apple. If the architecture can be abused the way you think it can, that is a technical problem as well as a political one.


You can't solve the human-bias problem with technology. That's the whole reason Apple didn't want to build in a back door in the first place.


You may not be able to solve the bias problem altogether, but you can definitely change the threat model and who you have to trust.

Apple’s model has always involved trusting them. This model involves trusting other people in narrow ways. The architecture determines what those ways are.


Trusting Apple to not scan my device in the past was easy because as an engineer I know I would speak up if I saw that kind of thing secretly happening, and I know security researchers would speak up if they detected it.

Now Apple will scan the device and we must trust that 3rd parties will not abuse the technology by checking for other kinds of imagery such as memes critical of heads of state.

The proposed change is so much worse than the previous state of things.


> checking for other kinds of imagery such as memes critical of heads of state.

Do you live in a country where the head of state wants to check for such memes?


Probably. You underestimate humans if you don't think any of us will try to squash things that make us look bad.


Does your state not have protections against such actions?


In theory it does. Laws put in place by previous generations do require maintenance to uphold.


They do indeed.


How does this attack even work? So some government poisons the database with political dissident memes and suddenly Apple starts getting a bunch of new reports which when reviewed are obviously not CSAM.

If the government can force Apple to also turn over these reports then they could have just made Apple add their political meme database directly and it's already game over.


More like, the government says Apple can't operate there unless they include what they say is illegal.

Apple is run by humans who are subject to influence and bias. Who knows what policy changes will come in Apple's future. Apple's previous stance was to not hand over data because they don't have access to it. This change completely reverses that.


I would say that it is a back door, because it would work even if iCloud Photos were E2E encrypted. It may not be a generic one, but one that is specific to a purpose Apple decided is rightful. And there is no guarantee that Apple (or authorities) won't decide that there are other rightful purposes.


> I would say that it is a back door, because it would work even if iCloud Photos were E2E encrypted.

Backdoor is defined by the Oxford Dictionary as "a feature or defect of a computer system that allows surreptitious unauthorized access to data."

The system in question requires you to upload the data to iCloud Photos for the tickets to be meaningful and actionable. Both your phone and iCloud services have EULAs which call out and allow for such scanning to take place, and Apple has publicly described how the system works as far as its capabilities and limitations. To the extent that people see this as a change in policy (IIRC the actual license agreement language changed over a year ago), Apple has also described how to stop using the iCloud Photos service.

One less standard usage of the term is not about unauthorized access but about private surveillance (e.g. the "Clipper chip") - but I would argue that the Clipper chip was a case where the surveillance features were specifically not being talked about, hence it still counting as "unauthorized access".

But with a definition that covers broad surveillance instead of unauthorized access, it would still be difficult to classify this as a back door. Such surveillance arguments would only pertain to the person's phone and not to information the user chose to release to external services like iCloud Photos.

To your original point, it would still work even if iCloud Photos did not have the key escrow, albeit with less data capable of being turned over to law enforcement. However, iCloud Photos being an external system would still mean this is an intentional and desired feature (presumably) of the actual system owner (Apple).


I see. My interpretation doesn't hold up given your definitions of back door.

I bet the authorities would be happy with a surveillance mechanism disclosed in the EULA, though. Even if such a system is not technically a back door, I am opposed to it and would prefer Apple to oppose it.

Edit: I just noticed that you had already clarified your argument in other replies. I am sorry to make you repeat it.


It has proven very difficult to oppose laws meant to deter child abuse and exploitation.

Note that while the EFF's mission statement is about defending civil liberties, they posted two detailed articles about Apple's system without talking about the questionable parts of the underlying CSAM laws. There was nothing about how the laws negatively impact civil liberties and what the EFF might champion there.

The problem is that the laws themselves are somewhat uniquely abusable and overreaching, but they are meant to help reduce a really grotesque problem - and reducing aspects like detection and reporting is not going to be as effective against the underlying societal issue.

Apple has basically been fighting this for the 10 years since the introduction of iCloud Photo, saying they didn't have a way to balance the needs to detect CSAM material without impacting the privacy of the rest of users. PhotoDNA was already deployed at Microsoft and being deployed by third parties like Facebook when iCloud Photo launched.

Now it appears that Apple was working for a significant portion of that time toward building a system that _did_ attempt to strike a balance between social/regulatory responsibility and privacy.

But such a system has to prop technical systems and legal policies against one another to make up for the shortcomings of each, which makes it a very complex and nuanced system.


But the catch is: all the incumbents already treated your data as if you were guilty until proven innocent. Apple’s transparency about that change may have led people to internalize that, but it’s been the de facto terms of most cloud relationships.

What I personally don't understand is why Apple didn't come out with a different message: we've made your iPhone so secure that we'll let it vouch on your behalf when it sends us data to store. We don't want to see the data, and we won't see any of it unless we find that lots of the photos you send us are fishy. Android could never contemplate this because they can't trust the OS to send vouchers for the same photos it uploads, so instead they snoop on everything you send them.

It seems like a much more win-win framing that emphasizes their strengths.


> What I personally don’t understand is why Apple didn’t come out with a different message: we’ve made your iPhone so secure that we’ll let it vouch on your behalf when it sends us data to store. We don’t want to see the data, and we won’t see any of it unless we find that lots of the photos you send us are fishy.

That is PR speak that would have landed worse in tech forums. I respect them more for not doing this.

The core issue is this performs scans, without your approval, on your local device. Viruses already do that and it's something techies have always feared governments might impose. That private companies are apparently being strong-armed into doing it is concerning because it means the government is trying to circumvent the process of public discussion that is typically facilitated by proposing legislation.


> without your approval

This isn’t true. You can always turn off iCloud Photo Library and just store the photos locally or use a different cloud provider.


I hate this argument. Pressing yes to the T&C once when you set up an Apple account doesn’t exactly constitute my approval imo (even if it does legally).

There’s no disable button or even clear indication that it’s going on.


> There’s no disable button or even clear indication that it’s going on.

iCloud Photos is an on-off switch.

In terms of clearly indicating everything that is going on within the service, that is just not possible for most non-tech users. It appears to have been pretty difficult even for those familiar with things like cryptography and E2E systems to understand the nuances and protections in place.

Instead, the expectation should be that using _any_ company's cloud hosting or cloud synchronization features is fundamentally sharing your data with them. It should then be every vendor's responsibility to give customers guarantees on which to evaluate trust.


No, the comment above meant there's no explicitly marked disable button. People who don't follow tech news won't hear about this.


Let's not pretend anyone is really "opting in" on this feature.


Anyone who doesn’t like it can switch off iCloud Photo Library.

I can imagine many people doing that in response to this news.


The fact that you can opt out for now does not set me at ease.


Nor me. I will not opt out because I think there is no threat to me and I like iCloud photos. That doesn’t mean I like the presence of this mechanism.


> That doesn’t mean I like the presence of this mechanism.

I don't think I ever said you did.


No - but you did suggest there was no opting in. I’m pointing out that just because I’m not entirely happy with the choice doesn’t mean it isn’t a choice.


> doesn’t mean it isn’t a choice.

People who haven't heard what Apple is doing are not presented with a choice. Not everyone follows tech news so closely.


I agree this is vastly better than what anyone else is doing, and you know I understand the technology.

However, I don’t think any framing would have improved things. I think it was always going to feel wrong.

I would prefer they don’t do this because it feels bad to be a suspect even in this abstract and in-practice harmless way.

Having said that, having heard from people who have investigated how bad pedophile activity actually is, I can imagine being easily persuaded back in the other direction.

I think this is about the logic of evolutionary psychology, not the logic of cryptography.

My guess is that between now and the October iPhone release we are going to see more media about the extent of the problem they are trying to solve.

That is how Apple wins this.


> Having said that, having heard from people who have investigated how bad pedophile activity actually is, I can imagine being easily persuaded back in the other direction.

There are terrible things out there that we should seek to solve. They should not be solved by creating 1984 in the literal sense, and certainly not by the company that became famous for an advertisement based on that book [1].

Apple, take your own advice and Think Different [2].

[1] https://www.youtube.com/watch?v=VtvjbmoDx-I

[2] https://www.youtube.com/watch?v=5sMBhDv4sik


> creating 1984 in the literal sense

I take it you haven’t read 1984.

When Craig Federighi straps a cage full of starving rats to my face, I’ll concede this point.


China has tiger chairs. Should we move closer to their big brother systems?


No but this has nothing to do with that.

In case you missed it, I think this is probably a bad move.

I just don’t think these arguments about back doors and creeping totalitarianism are either accurate or likely to persuade everyday users when they weigh them up against the grotesque nature of child exploitation.


> I just don’t think these arguments about back doors and creeping totalitarianism are either accurate or likely to persuade everyday users when they weigh them up against the grotesque nature of child exploitation.

Agree to disagree. This opens the door to something much worse in my opinion.


I don’t think I agree. If you think this boils down to instinct, do you think a story about coming together to save the next generation will work well on people cynical enough to see TLAs around every corner? At the very least, I feel like Apple should probably make a concession to the conspiracy minded so that they can bleach any offensive bits from their device and use them as offline-first DEFCON map viewing devices, or some such.


> do you think a story about coming together to save the next generation will work well on people cynical enough to see TLAs around every corner?

Absolutely not. I don’t think they need to persuade people who are convinced of their iniquity.

What they need is an environment in which those people look like they are arguing over how many dictatorships need to collude to detect anti-government photos and how this constitutes literal 1984, while Apple is announcing a way to deter horrific crimes against American children that they have seen on TV.


That's not at all clear.

A lot of people like strong border controls for instance, even if it means when they return to their country from abroad they have to go through more checks or present more documents to get in. Or consider large gated communities where you have to be checked by a guard to get in.

Many people seem fine with being distrusted as long as the distrust is part of a mechanism to weed out those who they feel really are not trustworthy, and it is not too annoying for them to prove that they are not one of those people when they encounter a trust check.


Your examples are not a good analogy. The distrust there is transient and then you are cleared. This is a state of permanently being a suspect.

However, it may certainly be the case that in the end people in general do accept this as a price worth paying to fight pedophiles.


My phone is an extension of my brain, it is my most trusted companion. It holds my passwords, my mail, my messages, my photos, my plans and notes, it holds the keys to my bank accounts and my investments. I sleep with it by my bed and I carry it around every day. It is my partner in crime.

Now Apple are telling me that my trusted companion is scanning my photos as they are uploaded to iCloud, looking for evidence that Apple can pass to the authorities? They made my phone a snitch?

This isn't about security or privacy, I don't care about encryption or hashes here, this is about trust. If I can't trust my phone, I can't use it.


Technically they could have been scanning the photos already to power some AI algorithms or whatever else.

I think this is a nudge for folks to wake up and see the reality of what it means to use the cloud. We are leasing storage space from Apple in this case.

Technically, it’s no different than a landlord checking up on their tenants to make sure “everything is okay.”

And technically, if you do not like iCloud, don’t use it and roll your own custom cloud storage! After all, it’s redundancy and access the cloud provides. And Apple provides APIs to build them.

Hell, with the new Files app on iOS I just use a Samba share with my NAS server (just a custom-built desktop with RAID 5 and Ubuntu.)


They do scan the photos on my device, to provide useful functionality to me. All good. If they scan photos in iCloud, that's up to Apple; I can use it or not. No problem. With this new setup, my device can betray me. I think that is different and crosses a line.


> Technically, it’s no different than a landlord checking up on their tenants to make sure “everything is okay.”

By this logic, you don't own your phone but rent it from Apple?


Technically I own my iPhone but the software is licensed to me (not a lawyer but that’s my understanding).

We probably also sign something that allows Apple to do what they will with our images—albeit we retain the copyrights to them.

Just to reiterate: the landlord metaphor is referring to software which we lease from Apple—not the device itself.

This is yet another case where I want to emphasize the importance of OPEN hardware and software designs. If we truly want ownership, we have to take ownership into our hands (which means a lot of hard work). Most commercial software is licensed so no we don’t own anything in whole that runs commercial software.


No, they really couldn't, because someone investigating, monitoring and reverse engineering the device traffic might have noticed; it could have been leaked by a whistleblower; there are plenty of ways this could have ruined Apple.

Not so now: they are in the clear either way, because Apple itself cannot look into the hash database. So the backdoor is there, but responsibility for the crime of spying is shifted to other actors, who cannot even be monitored by Apple.

This is truly a devilish device they thought up. Make misuse possible, shift responsibility to outside actors, act naive as if your hands are clean.


This is the thing Apple doesn't realize: our phone's OS vendor has one job: not to betray us.

They have betrayed us.

I have purchased my last iPhone.


good luck finding a replacement; you'll end up using a custom ROM if you are serious about tracking and privacy, and that custom ROM will be... fully dependent on donations.


I'd pay for it as a subscription service TBH. It would be good for security minded corps and nerds.


of all the items in the list, this is what the govt is interested in :

> It is my partner in crime.


I strongly considered switching away from Apple products last weekend; but this document has convinced me otherwise.

The threats people identify have minimal risk. If a total stranger offers you a bottle of water, you may worry about it being spiked, but his having offered the bottle doesn't make it more, or less, likely that he'll stab you after you accept it. They're separate events, no "slippery slope". It's very convincing that any alteration that would "scan" your whole phone would be caught eventually, and even if it's not, the announcement of this feature has no bearing on it. If Apple has evil intent, or is being coerced by NSLs, they would (be forced to) implement the dangerous mechanism whether this Child Safety feature existed or not. The concept of backdoors is hardly foreign to governments, and Apple didn't let any cats out of any bags. This document shows that Apple did go to great lengths to preserve privacy, including the necessity of hashes being verified in two jurisdictions; the fact that neither the phone nor Apple knows if any images have been matched below the threshold; the lack of remote updates of this mechanism; the use of vouchers instead of sending the full image; the use of synthetic vouchers; and on and on.
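To make the threshold property concrete: a toy sketch of how "neither side learns anything below the threshold" can work, using Shamir secret sharing. This is an illustration of the general technique, not Apple's published implementation; the field size, threshold value, and function names here are all illustrative.

```python
# Toy sketch of threshold secret sharing (NOT Apple's actual code).
# Each matching voucher carries one share of a decryption key; the server
# can only reconstruct the key once it holds >= THRESHOLD shares, so
# below the threshold it learns nothing about the matched content.
import random

PRIME = 2**127 - 1      # a Mersenne prime; field for the toy demo
THRESHOLD = 3           # shares needed to reconstruct (illustrative value)

def make_shares(secret, n):
    """Split `secret` into n shares; any THRESHOLD of them reconstruct it."""
    coeffs = [secret] + [random.randrange(PRIME) for _ in range(THRESHOLD - 1)]
    shares = []
    for x in range(1, n + 1):
        y = sum(c * pow(x, i, PRIME) for i, c in enumerate(coeffs)) % PRIME
        shares.append((x, y))
    return shares

def reconstruct(shares):
    """Lagrange interpolation at x=0 recovers the secret."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % PRIME
                den = den * (xi - xj) % PRIME
        secret = (secret + yi * num * pow(den, PRIME - 2, PRIME)) % PRIME
    return secret

key = 123456789
shares = make_shares(key, n=10)                   # one share per matched image
assert reconstruct(shares[:THRESHOLD]) == key     # enough matches: key recovered
assert reconstruct(shares[:THRESHOLD - 1]) != key # below threshold: nothing learned
```

The point of the sketch is the asymmetry: with THRESHOLD-1 shares the interpolated value is statistically independent of the key, which is the mathematical form of "Apple doesn't know if an account is below the threshold."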

Furthermore, the risks of this mechanism are lower than those of the existing PhotoDNA used by practically every competing service. Those solutions have no thresholds; their human review process is obscure; and there is no claim that other pictures won't be looked at.

The controversy stems from the fact that it uses an on-device component. But PhotoDNA-style solutions fail many of Apple's design criteria, which require an on-device solution:

- database update transparency

- matching software correctness

- matching software transparency

- database and software universality

- data access restrictions.

What about the concerns that the hash databases could be altered to contain political images? PhotoDNA's could be, too, but that would be undetectable, unlike with Apple's solution. Worse: with server-side solutions, the server admin could use an altered hash DB only for specific target users, causing harm without arousing suspicion. Apple's design prevents this since the DB is verifiably universal to all users.
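A minimal sketch of why "verifiably universal" defeats per-user targeting: if the shipped database is committed to by a single root hash (here a toy Merkle root; the data and helper names are illustrative, not Apple's code), any two users or an outside auditor can compare roots, so a database altered only for one target would be immediately visible.

```python
# Toy sketch (assumed design, not Apple's published code): commit to the
# whole hash database with one Merkle root so users can verify they all
# received the same database.
import hashlib

def merkle_root(leaves):
    """Compute a Merkle root over a list of byte-string entries."""
    level = [hashlib.sha256(leaf).digest() for leaf in leaves]
    if not level:
        return hashlib.sha256(b"").digest()
    while len(level) > 1:
        if len(level) % 2:                 # duplicate last node on odd levels
            level.append(level[-1])
        level = [hashlib.sha256(level[i] + level[i + 1]).digest()
                 for i in range(0, len(level), 2)]
    return level[0]

db_user_a = [b"hash-entry-1", b"hash-entry-2", b"hash-entry-3"]
db_user_b = [b"hash-entry-1", b"hash-entry-2", b"hash-entry-3"]
db_targeted = db_user_b + [b"extra-per-user-hash"]   # a tampered, targeted DB

assert merkle_root(db_user_a) == merkle_root(db_user_b)    # same DB: roots match
assert merkle_root(db_user_a) != merkle_root(db_targeted)  # tampering is visible
```

A server-side scanner offers no analogous check: the operator can swap the database per account and no one outside can tell.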

A rational look at every argument I've seen against Apple's solution indicates that it is strictly superior to current solutions, and less of a threat to users. Cryptographically generating vouchers and comparing hashes is categorically not "scanning people's phones" or a "backdoor." I think the more people understand its technical underpinnings, the less they'll see similarities with sinister dystopian cyberpunk aesthetics.


I understand the technologies they're proposing deploying at a decent level (I couldn't implement the crypto with my current skills, but what they're doing in the PSI paper makes a reasonable amount of sense).

The problem is that this "hard" technological core (the crypto) is subject to an awful lot of "soft" policy issues around the edge - and there's nothing but "Well, we won't do that!" in there.

Plus, the whole Threat Model document feels like it was thrown together in a 3AM brainstorming session. "Oh, uh... we'll just use the intersection of multiple governments' hashes, and, besides, they can always audit the code!" Seriously, search the threat model document for the phrase "subject to code inspection by security researchers" - it's in there 5x. How, exactly, does one go about getting said code?

Remember, national security letters with gag orders attached exist.

Also, remember, when China and Apple came to a head over iCloud server access, Apple backed down and gave China what they wanted.

Even if this, alone isn't enough to convince you to move off Apple, are you comfortable with the trends now clearly visible?


> Even if this, alone isn't enough to convince you to move off Apple, are you comfortable with the trends now clearly visible?

Still much better than all but the most esoteric, inconvenient alternatives.


Then people need to start asking themselves if privacy is a value they really hold or just empty, bandwagon idealism.

Because sacrificing privacy for convenience is why we got to this point.


And if those are all that's left that meet your criteria for a non-abusive platform, then... well, that's what you've got to work with. Maybe try to improve those non-abusive platforms.

I'm rapidly heading there. I'm pretty sure I won't run Win11 given the hardware requirements (I prefer keeping older hardware running when it still fits my needs) and the requirement for an online Microsoft account for Win11 Home (NO, and it's pretty well stupid that I have to literally disconnect the network cable to make an offline account on Win10 now, and then disable the damned nag screens).

If Apple is going all in on this whole "your device is going to work against you" thing they're trying for, well... I'm not OK with that either. That leaves Linux and the BSDs. Unfortunately, Intel isn't really OK in my book either, given that they can't reason about their chips anymore (long rant, but L1TF and Plundervolt allowing pillage of the SGX guarantees tells me Intel can't reason about their chips)... well. Hrm. AMD or ARM it is, and probably not with a very good phone either.

At this point, I'm going down that road quite quickly, far sooner than I'd hoped, because I do want to live out what I talk about with regards to computers, and if the whole world goes a direction I'm not OK with, well, OK. I'll find alternatives. I accept that unless things change, I'm probably no more than 5-10 years away from simply abandoning the internet entirely outside work and very basic communications. It'll suck, but if that's what I need to do to live with what I claim I want to live by, that's what I'll do.

"I think this is a terrible idea and I wish Apple wouldn't do it, but I don't care enough about it to stop using Apple products" is a perfectly reasonable stance, but it does mean that Apple now knows they can do more of this sort of thing and get away with it. Good luck with the long term results of allowing this.


> And if those are all that's left that meet your criteria for a non-abusive platform, then... well, that's what you've got to work with. Maybe try to improve those non-abusive platforms.

There's always the potential to work on the underlying laws. Not every problem can be solved by tech alone.


> If Apple has evil intent, or is being coerced by NSLs, they would (be forced to) implement the dangerous mechanism whether this Child Safety feature existed or not.

Apple used to fight implementing dangerous mechanisms. And succeeded.[1]

[1] https://en.wikipedia.org/wiki/FBI–Apple_encryption_dispute


Their “you can’t compel us to build something” argument was for building a change to the passcode retry logic, which is presumably as simple as a constant change. Certainly building a back door in this system is at least as difficult, so the argument still stands.


In that specific case at least, the FBI was asking Apple to produce new firmware that would bypass existing protection on an existing device - basically asking Apple to root a locked-down phone, which would likely require them breaking their own encryption or finding vulnerabilities in their own firmware. This is not exactly technically trivial, since anything of this sort that's already known would be patched out.

In this case it seems only a matter of policy that Apple submits CSAM hashes to devices for scanning. No new software needs to be written, no new vulnerability found. There’s no difference between a hash database of CSAM, FBI terrorist photos, or Winnie the Pooh memes. All that’s left is Apple’s word that its policy is capable of standing up to the legal and political pressures of the countries it operates in. That seems like a far lower bar.


No, they were asked to sign a software update which would bypass the application processor enforced passcode retry limit on an iPhone 5C which had no secure element.

