An Open Letter Against Apple's Privacy-Invasive Content Scanning Technology (appleprivacyletter.com)
1187 points by nomoretime on Aug 6, 2021 | 670 comments



I know the privacy approach Apple has been pushing was just marketing, but I didn't care too much because I enjoyed my iPhone and M1 MacBook.

Because of this decision (and the fact that my iPhone more-or-less facilitates poor habits throughout my life), I'm considering moving completely out of the Apple ecosystem.

Does anyone have any recommendations for replacements?

* On laptops: I have an X1 Carbon Extreme running Linux, but it's entirely impractical to take outside of the house: it has terrible battery life and runs quite hot. I also cannot stand its trackpad. Is there any Linux-capable device with an okay trackpad?

* On phones: are there any capable phones out there that aren't privacy nightmares? All I want is a phone with a long lasting battery that I can use with maps, a web browser, and texting.

* On cloud: I've considered executing on a NAS forever, but I'm unsure where to start.


Laptop | Desktop: https://www.dell.com/en-us/work/shop/overview/cp/linuxsystem...

Router: https://www.turris.com/en/omnia/overview/

Media Center: https://osmc.tv/vero/

Cloud (Use TrueNAS Scale): https://www.truenas.com/systems-overview/

Phone: https://www.pine64.org/pinephone/

Watch: https://pine64.com/product/pinetime-smartwatch-sealed/

Smart Thermostat: https://hestiapi.com/product/hestiapi-touch-one-free-shippin...

If you go this route all devices will be running Linux. The one-OS route is kind of nice, hence the PinePhone over open Android alternatives (like GrapheneOS).

I sorted from least to most technical. I also tried to pick the least technically challenging option in each category. The Dell stuff should just work. The phone will require some tinkering for the moment.


I'm sympathetic to removing Apple and Google from my life, but this list looks so sad. You know the experience, integration, and upkeep of all this is going to be a horrible headache.


Depends what you're looking to do. If you really value your privacy, yes, you're going to have to give up some convenience, but I can assure you it's really not that bad if you put in a little work on an occasional basis. It's not the constant maintenance nightmare that some people seem to expect.

I fired up Syncthing on my phone and NAS, and restic performs encrypted backups nightly to B2. Sure, you're not going to get some magical cloud image recognition stuff, but do you really need or even want it?
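
If anyone wants to replicate the restic half of this, here's a minimal sketch (Python is just wrapping the restic CLI; the bucket name, paths, and credentials are placeholders, and restic reads the B2 keys and repo password from environment variables):

    import os
    import subprocess

    # Placeholder credentials; restic picks these up from the environment.
    env = dict(os.environ,
               B2_ACCOUNT_ID="your-key-id",
               B2_ACCOUNT_KEY="your-application-key",
               RESTIC_PASSWORD="your-passphrase")  # encrypts the repository

    repo = "b2:my-backup-bucket:nas"  # hypothetical bucket and path

    # One-time: initialise the encrypted repository.
    subprocess.run(["restic", "-r", repo, "init"], env=env, check=True)

    # Nightly (cron or a systemd timer): back up the Syncthing data dir.
    subprocess.run(["restic", "-r", repo, "backup", "/srv/syncthing"],
                   env=env, check=True)

    # Trim old snapshots so the repo doesn't grow without bound.
    subprocess.run(["restic", "-r", repo, "forget",
                    "--keep-daily", "7", "--keep-weekly", "4", "--prune"],
                   env=env, check=True)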

Everything can integrate quite nicely, but plan to do the connecting work yourself as a one-off with some occasional maintenance a few times a year.


I used to think that privacy arguments are ok in principle, but that we aren't really losing much by sharing some personal data with Apple or Google's algorithms. In return we got so much convenience: software and products that worked well with each other, and even shopping recommendations that were great.

But with this move, they have just crossed a huge threshold. Now it’s clear that we are hurtling into a truly dystopian future where our personal thoughts and experiences are simply not our own. America at least seems to be heading towards some form of corporate fascism that may not be dominated by an individual or a group, but will eventually lead to social ossification and societal decay by outlawing dissent.

To introduce this tech under the guise of fighting child abuse is just amazing. This will eventually lead to the identification of individuals who, for example, may hold sympathies for political views that are currently out of favour.

At some point you have to swallow the inconveniences outside the garden and recognise what this is leading to.


If you had the option of buying this hardware stack through an integrated third-party brand/storefront that offered support for the products and their integration would that make you feel differently?


That's an interesting idea. The amount of work, even for a techie, to maintain all this is considerable. Could it be set up for a "user"? Tech support would be... interesting.


I made an example storefront a few months ago and jotted down a plan in a bout of procrastination. I have a few thoughts on a pragmatic, lazy (in the CS sense) approach.

If you or anyone one else is interested email me: ipodopt@pm.me


This has the potential to be a great laptop, if execution is good:

https://frame.work/


Laptop: Dell XPS 13, and I'm very happy. Maxed-out specs and clearly a higher price range.

Or: Lenovo Yoga convertible, my second device. I just don't do games or bigger data stuff on this machine. Some design work. Some photo and smaller video stuff. I love the flexibility of the convertible when working with PDFs and doing annotations by hand.


The battery on my XPS seems to be swelling and messing up the trackpad. Apparently it's a pretty common issue.

Edit: seems to affect the Precision line too

> The same problem is happening with the Precision 1510 line with the same batteries. I purchased 10 of these laptops for my department around the same time you did. We've had four of these failures so far in three laptops.

reddit.com/r/Dell/comments/6bzhtw/dell_xps_15_9550_battery_swelling_causing/


Good to know. Will watch for it. Currently not an issue.



I have their laptop and do not recommend them. Honestly, I sort of regret the purchase.

- They lie about specifications like battery life (and other things). I get about 1 hour out of my advertised ~5-hour battery when web browsing.

- My laptop had a race condition at boot that would prevent boot 50% of the time. There was a workaround.

- Wifi had a range of maybe ten feet (not joking).

I am sure their new laptop is better; however, I do not really trust them after my interactions, especially for something more novel like a Linux phone.

On the other hand, Pine64 is very focused on their hardware stack. All their products run very similar hardware, unlike Purism's. They are moving way more product than Purism and are better liked, hence they have a stronger community. They are also much cheaper phone-wise for a similar feature set. And you can actually buy and receive the phone.

In terms of alternatives, I think System76 is pretty good desktop-wise right now. Laptops are alright. Waiting for their upcoming in-house laptop.


This is quite interesting. I'm writing this on their Librem 15 and can't recommend it enough. No problems with booting or anything. Battery life got shorter over time (but I never cared about it).

> Wifi had a range of maybe ten feet (not joking).

Purism is using the only existing WiFi card that works with free firmware. It is less performant than typical proprietary ones. If you don't care about your freedom strongly, you can replace the card (they are very cheap on ebay). Also, check Purism forums for such questions. It works better than "ten feet" for me.

> On the other hand, Pine64 is very focused on their hardware stack.

And the software is provided by Purism (Phosh, for most Pinephone users). The Pinephone is great, I'm using it, too. But the Librem 5 is much more performant. Many videos show it working fine, except for the unfinished software (same as for the Pinephone).


> Purism is using the only existing WiFi card that works with free firmware. It is less performant than typical proprietary ones. If you don't care about your freedom strongly, you can replace the card (they are very cheap on ebay). Also, check Purism forums for such questions. It works better than "ten feet" for me.

Not the WiFi card; the surrounding material, like the chassis, attenuates the signal (the Librem 14 should have fixed this issue). I swapped mine out for a well-supported and performant Intel card and only got marginal improvements to the signal.

My "ten feet" was using it at coffee shops. I was traveling with the laptop. It was hard. The card switch did get it over the hump for this use case. Not an issue most of the time but still will have get a spot closer the router for video calls (like standup).

So the constraints for work were getting a seat close to the router AND a power plug. I ended up USB tethering with my phone a lot.

I do appreciate their contributions to the ecosystem but was wronged as a consumer. They need to be truthful.

I take it you have the v3 with the standard boot drive?


> I take it you have the v3 with the standard boot drive?

Yes, v3. What do you mean by "standard boot drive"? I'm using an SSD, if that's what you mean.


The v4 updated the screen, which burned more power. I remember telling them my real-life results and them proceeding not to update their product marketing page, while admitting it was based on the v3. I feel like they were already stretching it to begin with on the v3, but you would know.

I also got the fastest SSD they had at checkout, which I think contributed to the boot race condition. I never got a link to an upstream ticket, so I do not know if it is fixed.

When I emailed them they said they do not have a laptop in that configuration to test, haha.


Sad the high DPI displays didn't go well. My eyes can't take the 1080 vertical pixel displays that are still so common on open laptops nowadays. But I really want to like the Librems; there aren't many trustworthy laptops with kill switches out there.

I have an X1 Carbon Gen 9 with a high DPI 16:10 display, 32GB RAM, and anywhere from 4 to 12 hours of battery depending on workload. It's worth a look for people who can tolerate Lenovo's history (BIOS rootkits targeting Windows in the non-ThinkPad lines).


This looks like a similar problem to yours. They fixed that, even though it affected SSDs that they did not themselves sell: https://forums.puri.sm/t/support-with-booting-from-nvme-ssd/....


I think it's about time to say: "Thank you, Apple!" Finally these awesome projects will get the funding and the support they deserve.

Also, has anyone tried the FXtec phones? https://www.fxtec.com I am thinking about getting the FXtec Pro1, which promises decent Ubuntu Touch support as well as LineageOS.

I feel that with the comeback of Vim, there might be a sufficient user base for devices that use the keyboard for most tasks. I miss the days when I could send a text message without taking the phone out of my pocket.

Edit: Just found a relevant HN discussion: https://news.ycombinator.com/item?id=26659150


I just ordered the PinePhone! Looking forward to trying it; hope it can keep up with most things I use my 6S for.

Guess I'll dig out the old digital camera again, since that is a weak point for the PinePhone. :-D


Is there a Linux laptop with a 2560x1600 resolution like MacBooks? System76 still runs at 1920x1080. It really makes a difference wrt crisp font rendering and less eye strain.


I believe most of them should allow you to get HiDPI (aka Retina) displays.

I've been looking at replacing my now 9-year-old MacBook Pro (primarily running Manjaro Gnome as my daily driver) with a dedicated Linux laptop, and I've narrowed my selection down to the Lenovo ThinkPad P series or the Framework laptop. For the ThinkPads, the 4K display (3840 x 2160) is recommended, I believe (over the WQHD ones). The Framework laptop comes with a standard 2256 x 1504 display.

- https://www.lenovo.com/us/en/laptops/thinkpad/thinkpad-p/c/t...

- https://frame.work/products/laptop


I hadn't come across that Omnia router before - it looks great! Bit of a shame it doesn't support 802.11ax, and it is more expensive than I'd like, but still...


Might be a good idea to wait. It's coming soonish I think:

- https://github.com/openwrt/luci/pull/4994

- https://forum.turris.cz/t/wifi-6-ax-adapter/10390/77

I think the router might be my favorite open hardware piece:

- It was easier to set up than my old Asus router.

- Schematics and source are easy to access.

- It has never not worked.

- It is made by a company that is a domain registrar with a good track record on open source projects (click the more button in the top right and you might recognize a few projects): https://www.nic.cz/

- And if you need to do something advanced with it, you can. Mine has Bird running BGP.


The Omnia was quite expensive, but I've gotten frequent updates for the last four years. It's a nice mix of hackable and "just works".

I've turned off the WiFi at this point, and just use it as a router now. The UniFi access point that I installed provides better coverage in my house since it's easier to place in a central location.

Overall, I'd rate it as a good value.


The Dell and Lenovo laptops are nowhere near the quality of the Apple machines, sadly.

I have a maxed-out XPS and it is a downgrade in all respects but privacy. :/


Yeah, I figured. When waiting for LSP autocompletes in Emacs, my entry-level M1 MacBook without gccjit is orders of magnitude faster than my almost maxed-out Lenovo running Emacs with gccjit.

The difference is so stark that I cannot bear to autocomplete on type on the Lenovo machine, it lags too much and frequently locks up.


My ThinkBook 14 G2 ARE is almost the same speed as an M1 MacBook and runs Linux without any issue. It has "only" 9 hours of battery in my use case, but that's completely fine by me.


I can't complain, but I can't compare it to the M1. I had a 2020 MBPro and am currently using an XPS 13 with maxed-out specs.

The camera on the XPS is leagues below. The microphone had driver issues from the start, and it cost me two days to find a software workaround.

Other than that I am in every way happier: keyboard, trackpad, and resolution. Performance, even with crappy corporate spy- and crapware, is definitely way better.

I thought I would miss the Mac more. Not looking back once the mic was fixed.


Speakers. Apple finally fixed laptop speakers. Nobody else has figured out how to copy it yet. :/


OK. Not my use case. I have never really listened that intensely.

But I totally understand when the mileage varies.


I saw your blog post [1] mentioning how the iPhone 12 is likely your last. Have you given any thought since then to what your next smartphone would be? Or whether you still use a smartphone at all?

[1]https://sneak.berlin/20210202/macos-11.2-network-privacy/


I already don't put a SIM in my phone; I use a dumbphone for GSM.

I will likely begin using Lineage OS or Graphene. I'll begin testing soon and will probably post what I end up doing, either on my blog or the bbs.


Have you considered Purism and Fairphone or are their specs too underwhelming to consider?

With regards to your laptop, does not being able to use Mac-specific development tools (Xcode, etc.) interfere with your work in any way, or do you just limit the work you take on to projects that are friendlier to Linux?


I run macOS on some of my laptops, connected to the internet only via a VPN router on which I have root and can filter traffic.

Rarely do I need to do macOS-specific stuff though.


Laptops with an "s"? How many do you have and how many do you carry on your person? When you say you run macOS on your laptops, do you mean as a VM or on Apple hardware? Did you keep the M1 laptop you blogged about or did you send it back?


Nothing quite like apple. The step down is short enough not to stumble though :)


> On laptops: I have an X1 Carbon Extreme running Linux, but it's entirely impractical to take outside of the house: it has terrible battery life and runs quite hot.

I have an X1C (not Extreme) and it has excellent battery life with Linux. Consider using TLP (https://linrunner.de/tlp/) if you want to significantly improve your laptop's power consumption.
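
For reference, these are the /etc/tlp.conf knobs I'd check first on battery (an excerpt with starting-point values, not gospel; the right settings depend on your hardware, and tlp-stat -b will show what's actually in effect):

    # /etc/tlp.conf (TLP 1.3+) -- excerpt, values are starting points
    CPU_SCALING_GOVERNOR_ON_BAT=powersave
    CPU_ENERGY_PERF_POLICY_ON_BAT=balance_power
    RUNTIME_PM_ON_BAT=auto
    WIFI_PWR_ON_BAT=on

Then run sudo tlp start to apply.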


It's possible he's got an old X1C and the battery is just on its last legs as well. If that's the case, he'd probably be better served using his M1 laptop sparingly until Linux support for it is better.


I understand the impulse to turn away from Apple. However, it’s not a practical solution. How long until this tech or worse is mandated in all products? I think the real answer is for the people to stop bickering about primary colors and work together toward passing legislation that limits the pervasive invasion of privacy by governments and corporations.


You can do both. Boycott Apple because they're the ones who committed this violation, whilst also protesting government efforts to spy.


As someone else mentioned, for a NAS using TrueNAS (used to be FreeNAS) is easy enough and quite satisfying. You can find plenty of guides.

In general, you need to balance budget, capacity requirements, and form factor. Old servers are often great. Big disks seem like a good idea, but for rebuild times going over 4TB is horrible.

However, unfortunately HDD prices right now are horrible...


Define horrible? I have an 8x18TB RAID-6 array and it rebuilds in 2-3 days.
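
Back-of-envelope, 2-3 days is not far off the physical floor for disks this size (rough Python, with an assumed sustained write speed):

    # Lower bound on a rebuild: the replacement disk must be written
    # end to end at, at best, sustained sequential speed.
    disk_tb = 18
    write_mb_s = 200  # optimistic sustained write for a large HDD

    hours = disk_tb * 1e12 / (write_mb_s * 1e6) / 3600
    print(f"{hours:.0f} hours minimum")  # ~25 h before parity reads and
                                         # normal array I/O slow it down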


I guess it depends. Personally I don't like having rebuilds longer than a day.


I have a Lemur Pro from System76 and it has insane battery life. I feel like I never charge it b/c it's always got juice. I run Pop!_OS on it b/c at my age I just want to get work done and not tinker with a distro.


The Acer Nitro 5 is my choice of laptop for running Ubuntu. Good battery life; I get about 5+ hours.


Laptop: Framework Laptop

Phone: Pixel with GrapheneOS or CalyxOS. In the future a Linux phone when the software improves.


> Phone: Pixel with GrapheneOS or CalyxOS

I've seen this recommendation very often lately. As I am shopping for a new phone: Why is it that hardware directly from Google is recommended for putting another OS onto it (I've seen recommendations for LineageOS as well). What makes it better than any stock phone supported by LineageOS?


Consistency. Google hardware has always allowed easy factory unlocking without a fuss and easy ways to restore the standard OS images without jumping through hoops, and the devices are widely available. Plus they allow re-locking the bootloader, and the phone equivalent of enrolling your own custom secure boot keys. They also provide firmware updates for a long time, so you can get platform/hardware patches too. CalyxOS does provide these in their images.

The 3a/4a are cheap and have headphone jacks and good cameras. What's not to love? Until they change their policy on unlocking bootloaders and installing custom OSes, they're great devices. I still have a Nexus 5 that runs postmarketOS and Ubuntu Touch, and if it completely breaks I can always use ADB/Fastboot to flash the Android 6 images that are still on Google's website. I don't even have to log in to get them.
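
The whole restore flow is just a few platform-tools commands; a rough sketch follows (the zip name follows Google's image-<device>-<build>.zip factory image convention, so treat it as a placeholder, and older Nexus devices use "fastboot oem unlock" where newer Pixels use "fastboot flashing unlock"):

    import subprocess

    def run(*cmd):
        subprocess.run(cmd, check=True)

    run("adb", "reboot", "bootloader")   # drop into fastboot mode
    run("fastboot", "oem", "unlock")     # Nexus 5; wipes the device
    run("fastboot", "update", "image-hammerhead-m4b30z.zip")  # factory zip
    run("fastboot", "reboot")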


Devices supported by the Sony Open Device Program should also be a good target:

https://developer.sony.com/develop/open-devices/

There are projects such as Sailfish OS that make use of this to run on what was originally Android hardware.


See the GrapheneOS or CalyxOS websites for more details, they are significantly hardened for security compared to LineageOS.

Currently those two projects only support Pixels, mainly because they're all bootloader unlockable. If these projects had as many volunteers as LOS then more devices could be officially supported.

LOS on a supported Android phone is still a better option than a stock Android or iPhone at least.


My https://divestos.org project, while not as secure as GrapheneOS, provides lots of security to many older devices.


Interesting project. Thanks for sharing! Any reason why at least some of those patches couldn't be upstreamed to LOS?


Most things simply aren't in their scope.

I do send occasional patches to Lineage if they are in-scope and am in contact with some of them reasonably frequently.

The big blocker is that their Gerrit instance requires a Google account to log in.

Example of a recent fix I emailed them: https://review.lineageos.org/c/LineageOS/android_device_htc_...


CalyxOS is fantastic. You can get a like-new Pixel on swappa.com for cheap and have a virtually Google-free, Apple-free phone that supports most or all of the apps you would want via MicroG. Can't recommend it enough. GrapheneOS is similar, except without MicroG, if you don't need that.


If I were to buy a laptop, it would be the Framework laptop that just started shipping. It doesn't have a Ryzen chip, though, so that's a deal breaker for me. Otherwise, that laptop ticks all of my boxes.

Phone-wise, there are many options to choose from. I like the idea behind the Teracube 2E[1], as they take some of the principles behind Framework and apply them to phones.

> On cloud: I've considered executing on a NAS forever, but I'm unsure where to start.

Depends on how much you want to tinker. You can't go wrong with a Raspberry Pi and some external hard drives, but there is also dedicated NAS equipment that requires less setup and maintenance; some of it is Linux-based as well.

[1] https://myteracube.com/pages/teracube-2e


I'm surprised nobody has mentioned GrapheneOS: https://grapheneos.org/


> On laptops

My XPS 15 does about 6 hours (it's a maxed out i7) on Linux and 3 on Windows.


On phones: I'm considering a Pixel device with https://calyxos.org/ The main limitations will be the feature set of MicroG (no ~maps~/Wear OS/Android Auto). Possible issues with important apps like those for banking.

On cloud: Synology Diskstation is amazing. Only use it through a local VPN though.

edit: maps work (see reply)


Are the only phone options iOS or Android? I’m also considering leaving my iPhone behind because of this privacy violation but I’m definitely not moving to Google. That seems like two steps back.


Calyx is a degoogled Android ROM. It's probably the best choice until mobile Linux improves.


What are the benefits over GrapheneOS?


microG support, mainly; it's needed for some Play Services APIs like push notifications and maps. The choice all depends on what apps you use, though. GrapheneOS is great also, and even better for those with very high security and privacy requirements.


Sailfish OS (built on the Maemo/MeeGo legacy, for those who remember those) is what I have been using for years (and what this is typed from).

Also there are a lot of Linux distros targeting the PinePhone maturing by the day.


I’m considering the Nokia 8110 (banana phone)

KaiOS: somewhere between a smartphone and a dumb phone. It still has Google Maps; not sure if you need a Google account though.


> no maps

I don't experience any issues using mapping apps on Android with microG instead of Google Play Services. Closed source navigation apps including Google Maps, HERE WeGo, and Magic Earth work just fine. Open source navigation apps like Organic Maps and OsmAnd also work with no problems.


I have found that my Pixel with CalyxOS and microG doesn't have any text to speech engine. So the maps apps work, but with no turn by turn audio.


RHVoice is a free and open source text-to-speech output engine for Android. Try it out:

- F-Droid: https://f-droid.org/en/packages/com.github.olga_yakovleva.rh...

- GitHub: https://github.com/RHVoice/RHVoice

Alternatively, if you prefer Google's closed source text-to-speech engine (that is preinstalled on most commercial Android devices), you can download Speech Services by Google through Aurora Store:

- https://play.google.com/store/apps/details?id=com.google.and...

Instructions for configuring the default text-to-speech engine on Android are here:

- https://support.google.com/accessibility/android/answer/6006...

Most navigation apps come with their own voice banks for turn-by-turn navigation audio, and installing a text-to-speech engine isn't necessary for these apps. However, Organic Maps (which is a recommended app in the CalyxOS setup wizard) doesn't, and it relies on the default text-to-speech engine.


Thanks! I didn't know about RHVoice. I tried installing a couple others I found through searching F-Droid, but nothing worked. I just installed RHVoice.


Thank you for that insight! My comment was based on some (older) Reddit comments. I'm glad to hear there are working apps.


If you're not sure whether an app is compatible with microG or a flavor of Android without Google Play Services, Plexus is a helpful database of crowdsourced compatibility ratings:

- Plexus: https://plexus.techlore.tech

- GitHub: https://github.com/techlore/plexus


Alternatively, if you don’t use iMessage or iCloud then I don’t think these changes affect you.


lol. nope. trust is binary. you either trust apple or not. after peddling their privacy marketing for so long and doing this, they literally lost my trust.


For most people, trust isn't binary though.

For example, just because you hire someone for one purpose doesn't mean they should have the keys to your house.


If I hire someone to do X I trust them to do X. If X involves them getting in my house to do something while I'm not there I give them the keys. If they screw up I no longer trust them for X and I no longer trust them in general.

I'm not saying you trust someone for all the possible things under the sun.

I'm saying if I trust you for X and you mess it up I no longer trust you for X.


TBH I've been considering just dropping the smartphone altogether. I don't really get much value out of it, since I'm on my laptop most of the time when I want to internet anyway.


For laptops I'd recommend the framework laptop (https://frame.work/). It's thin, powerful, and modular.


I use an HP Spectre x360. It's not the most powerful thing in the world and has occasional issues knowing when to charge, but otherwise I love it with Ubuntu/i3wm.


You know Google scans against the CP hash database too, right?


From what I understand, you can run Linux on the M1 now, so there's at least a stopgap.

For NAS, Synology is always a good brand.


Linux on M1 isn't ready yet for daily use, but we're working on it. Expect things to move pretty quickly in the next few months.


Link?


This person is the creator of Asahi Linux and the one who did the early work of Linux on M1 Macs. [0]

When it is ready, it will be ready and they will say so.

Until then it is not ready for general use and is still under active heavy development.

[0] https://asahilinux.org/


Why do people still think they have _any_ power over user-hostile corporations and their proprietary code?

Apple can (and likely will) say they won't do it and then do it anyway. It's a proprietary platform. They already have it, now. All they are claiming is an excuse not to be caught in a certain way they wouldn't like at a later point.


This seems fatalist and goes against what we have learned from internet society: making a ruckus is exactly how you sway large companies.


I'm with you bud. Fatalism is how bad things survive.

"Oh, nothing will happen to this [corrupt politician|naughty rich guy|corporation doing a thing I don't like], because they're all [bad thing]. It's all over already." :facepalm:

(Also, I couldn't read the original article because my work thinks it's spreading malware or something so I'm really only referring to the fatalism thing here, not Apple.)


There can always be leaks, anonymous sources that confirm something is still ongoing, despite what was publicly announced. Brand image is important.


Are you sure?

Here is a true story for you:

Company X claimed users' voice commands never left their devices. Then someone leaked recordings of people having sex, committing crimes, etc., recorded from X devices. This was brought up by the media. For every concerned user there were ten apologists trying to justify this behaviour, and a week later everyone forgot this ever happened.


If you’re talking about Amazon, those stories did affect Echo sales, and the trajectory of the category overall. When was the last time you read a breathless article about how smart speakers will change everything?


I was thinking of another company, but I am very happy that Amazon's bottom line was affected (?)


Post a link so everyone can remember. If it's something you care about, you need to keep reminding people. I'll post stuff about police brutality on threads that are relevant. It helps remind everyone that they are susceptible to the whims of any rogue policeman at any given time, regardless of social status. Most likely with no recourse.


Hilarious. I knew what they meant. Fanboys and their short memories. https://www.forbes.com/sites/jeanbaptiste/2019/07/30/confirm...


I recently listened to a Darknet Diaries episode on the messaging app Kik. This app is apparently being used by many people to trade child pornography. In this episode, there was some criticism expressed about how Kik doesn't scan all the images on their platform for child pornography.

I would really like to hear from people who sign this open letter how they think about this. Should the internet be a free-for-all place without moderation? Where are the boundaries for moderation (if it has to exist): one-on-one versus group chat, public versus private chat?

To quote this open letter: “Apple is opening the door to broader abuses”. Wouldn't not doing anything open a different door to broader abuse?

Edit: I would really love an actual answer, since responses until now have been "but muh privacy". You can't plead for unlimited privacy and ignore the impact of said privacy. If you want unlimited privacy, at least have the balls to admit that this will allow free trade of CSAM, but also revenge porn, snuff-films and other things like it.


My personal opinion: if we could spy on all citizens all the time, we could stop all or most crime. Do we want this? If you say yes, then stop reading here. Else, if you say 100% surveillance is too much, then what do you have against people discussing where the line should be drawn?

Some person who would sign that letter might be fine with video cameras in, say, a bank or some company building entrance, but he is probably not fine with his own phone/laptop recording him and sending data to some company or government without his consent.

So let's discuss where the line should be drawn. Also, if competent people in this domain are present, let's discuss better ideas for preventing or catching criminals, or even, for this method, let's discuss whether it can be done better to fix all the concerns.

What I am sure of is that clever criminals will not be affected by this, so I would like to know if any future victim would be saved (I admit I might be wrong, so I would like to see more data from different countries that would show the impact of this surveillance).


This only finds known pictures of child abuse, not new ones, and crucially it doesn't find the perpetrators or prevent the abuse.

But it creates an infrastructure for all other kinds of "criminal" data. I bet sooner or later governments will want to include other hashes to find the owners of specific files. Could be bomb-making manuals, could be flyers against a corrupt regime. The sky is the limit, and the road to hell is paved with good intentions.


> This only finds known pictures of child abuse, not new ones

No, it finds images that have similar perceptual hashes, typically using the Hamming distance or another metric to calculate differences between hashes.

I've built products that do similar things. False positives abound with perceptual hashes, especially when you start fuzzy matching them.
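
To make that concrete, here's the general flavor using an open source perceptual hash (pHash via the ImageHash library; Apple's NeuralHash is a different function, but the matching step works the same way, and the threshold below is made up):

    from PIL import Image
    import imagehash  # pip install ImageHash

    h1 = imagehash.phash(Image.open("original.jpg"))
    h2 = imagehash.phash(Image.open("cropped_or_recompressed.jpg"))

    # Subtracting two ImageHash objects gives the Hamming distance
    # between their bit strings: 0 means identical hashes.
    distance = h1 - h2

    THRESHOLD = 8  # made-up cutoff: "close enough" counts as a match
    print(distance, "match" if distance <= THRESHOLD else "no match")

Unrelated images can still land within the threshold, which is exactly where the false positives come from.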


But these are just variations of known pictures, to recognize them if they are cropped or scaled. The hash of a genuinely new picture isn't similar to the hash of a known picture.


Perceptual hashes have collisions, and it is entirely possible for two pictures that have nothing to do with each other to have similar perceptual hashes. It's actually quite common.


Sure, it is possible to have collisions; that is the main thing about hashes. In general collisions are rare, but these are a different type of hash, and Apple is probably tweaking the parameters so the collisions are manageable.

But if the database of hashes is big and the total number of unique photos across all iPhone users is also a giant number, you will certainly find collisions.


It certainly does help find the perpetrators and prevent abuse. Unlike videos of many other kinks, CSAM distribution is not one-way from a small number of producers to a large number of consumers, but is often based on sharing "their own" material.

When we arrest people for "consuming" known old CSAM, we often find new CSAM produced by themselves; the big part of busting online CSAM sharing rings is not the prevention of CSAM sharing but the fact that you do get a lot of actual abusers that way.


I would like to read more about your claims. The problem is that this subject is very dangerous; the only things I know about it are from articles that got popular, and some of those articles are about innocent people who had their lives destroyed because someone made a mistake (like reading an IP address wrong).

A nightmare scenario would be something like:

- giant tech company creates a secret algorithm, with secret params and a threshold that they can tweak at will

- bad guys reverse it or find a way to make an innocent-looking picture that triggers the algorithm

- the previous is used by idiots in chat groups or even DMs to troll you, like SWATing in the US, or the DDoS and other shit some "gamers" do to get revenge for losing some multiplayer game

- consequences: an innocent person loses his job, family, friends, health, etc.

I don't trust the giants' moderators either; they make mistakes or just don't do their job and randomly click stuff.


Many people share photos of child pornography via the mail. There has been criticism that the USPS does not open all mail and scan it for child pornography.

I would really like to hear from people who do not sign this letter how they think about that.


They already scan for bombs and hazardous materials, so yes the line is drawn somewhere between ‘let anything get sent’ and ‘track everything’


Not sure I'm going to go all-in on unqualified support, but it seems like comparing an image to a series of known hashes is qualitatively somewhat different from the postal inspector opening all your mail. Though those convenient little scans of your mail they'll send you if you sign up suggest they already have some interest in who sends you mail.


A closer comparison would be a machine opening your mail, scanning the contents, then using the same perceptual "hash" with an unknown database of "hashes".

If whoever controls that database wants to prohibit say a meme making fun of a politician, they need only to add the picture to the DB.


I think it would be horrible if they opened all mail... However, if they had a system that could detect CSAM in unopened mail with an extremely low false negative rate, then I'd be fine with it.

With many packages this already happens in the search for drugs and weapons and I have no problem with that either.


I believe millions more nudes are sent via Snap/WhatsApp/etc. every day than are sent in the mail in a year.. with "extremely low false negative rate" - if that means there are a bunch of geeks and agents staring at your wife/gf.. your daughter/ etc.. is that okay? Some people will say yes - but I don't think those people should get to decide that all the other people in the world have to hand over all the nudes and dick pics everyone else has taken.

when they open mail with drugs and weapons it's most likely not a sculpture of your wife's genitals / daughter's tits, pics of your hemorrhoids / erectile implant - whatever other private pic stuff that no one else should be peering at.

If they were opening and reading every written word sent and received - I can guarantee you people would be up in arms about it.


> I believe millions more nudes are sent via Snap/WhatsApp/etc. every day than are sent in the mail in a year.

I know for sure this is true, but this BS scenario was still used as a rebuttal against my comment, so I figured I'd give them an actual answer.

> when they open mail with drugs and weapons it's most likely not ... other private pic stuff that no one else should be peering at.

People order a lot of stuff that is very private online. From self-help books, to dildos (which show up clearly on x-ray), to buttless chaps. If anything, packages are probably more private than mail these days, since people only mail postcards and bills.

> If they were opening and reading every written word sent and received - I can guarantee you people would be up in arms about it.

They would. But this is completely incomparable to what's happening on iOS devices now.


Maybe some crossed wires in the discussion - I thought your initial reply was in regard to what GP wrote about letters - and then you were talking about x-rays and weapons and drugs.. which generally are not an issue with letters..

I think I brought in too many examples while basically trying to say flat paper.. yeah, I think most people assume boxes/packages are x-rayed/scanned by USPS - and people do still send stuff they would not want public that way.

we are in agreement in that "They would. But this is completely incomparable to what's happening on iOS devices now."


I suppose one important distinction here, if this makes any difference, is that drugs and weapons (to use your examples) are physical items, and these could arguably cause harm to people downstream, including to the postal system itself and to those working in it. In contrast, photos, text, written letters, and the transmissions of such are merely information and arguably not dangerous in this context (unless one is pro-censorship).


The proliferation of CSAM is extremely harmful to the victims in the photos and more people seeing them might encourage more CSAM production in general.


I suppose I should clarify my point that I was referring to those dangerous items in the mail in the sense of their capacity to directly cause physical harm to those handling or receiving them, rather than in the more general sense of the societal effects of their proliferation, which is something else altogether (to your point).

To be clear, I'm not disagreeing with you in regards to the harm caused in the specific case of CSAM, but I can't help but see this as a slippery slope into having the ability in the future to label and act on other kinds of information detected as objectionable (and by whom one must also ask), which is itself incredibly harmful to privacy and to a functioning free society.


The big reason I don't want someone else scanning my phone (and a year later, my laptop) is that last I checked, software is not perfect, and I don't need the FBI getting called on me cause my browser cache smelled funny to some Apple PhD's machine-learning decision.

It's the same reason I don't want police having any opportunistic information about me - both of these have an agenda, and so innocent people get pulled into scenarios they shouldn't be when that information becomes the unlucky best-fit of the day.

That Apple has even suggested this is disturbing because I have to fully expect this is their vision for every device I am using.

And then I expect from there, it's a race to the bottom until we treat our laptops like they are all live microphones.


The chance of you getting enough false positives to reach the flagging threshold is one in a trillion. And then the manual review would catch the error easily. The FBI won’t be called on you.
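
Back-of-envelope, with made-up but conservative numbers since Apple hasn't published the per-image rate or the exact threshold:

    from math import exp, factorial

    # Poisson approximation to the chance an innocent account trips the
    # threshold. p and t are assumptions, not Apple's real parameters.
    p = 1e-6      # assumed per-photo false-positive rate
    n = 10_000    # photos in the library
    t = 30        # assumed matches required before flagging

    lam = n * p   # expected false matches: 0.01
    tail = sum(exp(-lam) * lam**k / factorial(k) for k in range(t, t + 60))
    print(f"P(account flagged) ~ {tail:.1e}")  # astronomically small

Even with ten thousand photos you'd expect ~0.01 false matches, so stacking up 30 of them by accident just doesn't happen.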


Manual review. So in other words they're going to go through your private photos unbeknownst to you to decide you're innocent. And you're never even going to know it happened. Wonderful.

What if it really was a very private photo?


That's a claim by Apple. Due to the opaque process this is completely unverifiable.

As well, this statement relies on the assumption that no malicious actor out there is trying to thwart the system. The hashes used are deliberately designed so that similar images produce collisions; otherwise the system wouldn't work. This almost certainly will be abused.


The concept of checking images against a hash list is not unique to Apple or even new.

https://en.m.wikipedia.org/wiki/PhotoDNA

What Apple announced is a way to do it on the client device instead of the server. That has some security implications, but they’re more specific than just “hashes might collide.”


They're not just checking against a hash list; they're using perceptual hashing, which is inexact, unlike cryptographic hashing or checksumming. Then they use a fuzzy metric like Hamming distance to determine if one image is a derivative of an illegal image.

The problem is that the space for false positives is huge when you use perceptual hashing, and it gets even larger when you start looking for derivative images, which they have to do otherwise criminals would just crop or shift color channels in order to bypass filters.


“This almost certainly will be abused.” is a claim by you and completely unverifiable. Apple has published white papers explaining how their hashing, matching, and cryptography work. How do you see someone thwarting that process specifically?


Apple's claims on how it will work are also completely unverifiable. What's stopping a government from providing Apple with hashes of any other sort of content they dislike?


And then Apple reviews the account and sees that what was flagged was not CSAM. And again, the hashes aren’t of arbitrary subject matter, they’re of specific images. Using that to police subject matter would be ludicrous.


How would Apple know what the content was that was flagged if all they are provided with is a list of hashes? I completely agree it's ludicrous, but there are plenty of countries that want that exact functionality.


If they have the hash/derivative, they don't need to look on the device or even decrypt anything; they'll know that data with this hash is on the device, and presumably hundreds of other matching hashes from the same device.


The matched image’s decrypted security voucher includes a “visual derivative.” I’m sure in other words they can do some level of human comparison to verify that it is or is not a valid match.


Easy -- by having the authorities expand what is being searched for.


> and then the manual review would catch the error easily.

How do we know this? It's not obvious to me that this system would work well for the edge cases that it seems intended for.


I won't use a service that performs dragnet surveillance on my communication. The US Postal service does not open every single letter to check if it contains illegal material. If I rent a storage unit, the storage company does not open up my boxes to see if I have albums of child pornography. This move by Apple is equivalent.

I'm going to turn this around and say that those in favor of Apple's move: "Should the internet be a place where your every move is surveilled?" given that we have some expectation of privacy in real life.


> The US Postal service does not open every single letter to check if it contains illegal material.

You'd be surprised at the number of packages that get x-rayed in order to find drugs. But yes, you're 100% right that it's not all of them.


It is illegal for the USPS to look at the contents of first class mail except with a court order.

Other classes of mail may have less stringent protections. For instance, third class (and maybe second class, I forget) mail can be opened to make sure that the things it contains are allowed to be mailed in that class.


An x-ray is nothing like reading the contents of letters or randomly checking shipped hard drives and USB sticks for content. I don't know how to clarify it legally or conceptually, but I feel confident there is a very clear difference.


Perhaps I'm being too cynical but I think the difference is they haven't figured out a convenient, automated way to do the latter


I forgot about that. I think it's nearly all stamped mail over a certain weight? I don't remember if labeled mail is similarly scanned and my google skills are failing me today.


I think there is a very deep and troublesome philosophical issue here. Ceci n'est pas une pipe ("this is not a pipe"). A picture of child abuse /is not/ child abuse.

Let me ask you a counter-question. If I am able to draw child pornography so realistically that you couldn't easily tell, am I committing a crime by drawing?


> am I committing a crime by drawing

That really depends on the law in the country where you're doing that. According to the law in my country, yes, you are.

None of this actually answers my question though, it's just a separate discussion. I would appreciate an actual answer.


Right, my question is of course a rhetorical one. If I similarly draw other crimes being committed, the same does not apply. Why is that?

And to address your explicit question, it is far too complex and specific to a given community to answer in any meaningful way here. I can tell you what isn't the answer though: deploying spyware on phones worldwide.


> If I similarly draw other crimes being committed, the same does not apply. Why is that?

The answer is much more simple than you seem to think it is. Because the law doesn't say it is. I can only go by the law in my country, which is that (loosely translated) 'ownership of any image of someone looking like a minor in a sexually suggestive position' is a crime. Since a drawing is an image, it's a crime. Having an image of someone breaking into a home is not a crime according to law in my country. That's why you can have a drawing of that.

Your question is like asking "why don't I get a fine for driving 60 on the highway, but I do get a fine for driving 60 through a school zone?" Well, because one is allowed and the other isn't.


Yeah, but he's using reductio ad absurdum to illustrate that the law as written is absurd in some cases. So yes, you can pedantically cite the law and have an answer to the question, but you're missing the larger discussion.

At some point somebody is actually going to have to tackle the idea that looking at images, especially fake ones, might not only be harmless, it might be safer to let people get release from their fantasies (I'd be curious to see what the research says).

Some day, people will say "Obviously nobody cares if you possess it or look at it, obviously, it's just about MAKING it."

I think this will follow the trajectory of weed (unspeakable horror, "drugs will kill you" in DARE, one-strike expulsion from high school or college when I was growing up), yet it's now legal to even grow.


It seems just as plausible that viewing such depictions could feed into someone's urges. Without some evidence I'd be hesitant to go with someone's conjectures here. There is also, frankly, the fact that a lot of people find viewing such stuff so shockingly depraved that they don't care if someone is "actually" harmed in the process, a claim that is hard to evaluate in the first place.


> So yes, you can pedantically cite the law and have an answer to the question, but you're missing the larger discussion.

While I do agree that drawings being illegal might not be in anyone's best interest, that doesn't have anything to do with the questions I asked.

> At some point somebody is actually going to have to tackle the idea that looking at images, especially fake ones, might not only be harmless, it might be safer

I thought I agreed with you for a second, but "especially fake ones" implies you think that child pornography is actually a good thing. I hope I'm wrong about that.

> Some day, people will say "Obviously nobody cares if you possess it or look at it, obviously, it's just about MAKING it."

Guess I'm not and you really think possession of child pornography is harmless. My god that's depressing.

If you are reasoning from the standpoint that the ownership and trade of child pornography is harmless, yes, then I understand that giving up privacy in order to reduce this is a bad thing. Because in your eyes, you're giving up privacy, but you gain nothing.


>> implies you think that child pornography is actually a good thing.

Holy cow, that's the worst phrase anybody has ever put into my mouth. That's too offensively disingenuous to warrant any further discussion. Shame on you.


It is a pretty dumb law IMHO for the mere fact that two teenagers sexting on Snapchat when they are 17 years and 364 days old are committing a fairly serious crime that can completely mess up their lives if caught, but doing that the next day is totally ok all of a sudden and they can talk and laugh about it.


> but doing that the next day is totally ok all of a sudden and they can talk and laugh about it.

Unless a leap year is positioned unfortunately (probability ~1/4), in which case it's back to being a serious crime.


What is this absurd counter?

Who’s talking about surrealistic drawings? We’re talking about actual material in real life, being shared by users and abusers.

To be clear, I’m not supporting surveillance, just stating facts.


You should maybe read about the work [0], or just read the rest of what I said. Surrealism has nothing to do with the argument I made, so why do you bring it up?

CSAM databases by necessity cannot contain novel abuses, because those are novel. In fact, filtering by CSAM databases even indirectly /encourages/ novel abuses, because these would not be caught by said filter.

Catching CP hoarders does little to help the children being exploited in the first place, and does a lot to harm our integrity and privacy.

[0] https://en.wikipedia.org/wiki/The_Treachery_of_Images


> Catching CP hoarders does little to help the children being exploited in the first place

This is completely untrue, since hoarders tend to often also be part of groups where images of still unknown abuse circulate. Those images help identify both victims and suspects and in the end help stop abuse.


I sometimes wonder where logic is, if it isn't on HN.

Surely if one takes measures against only existing material, and not the production of new material, this only encourages the appearance of new material.

For evidence you can look up what happened with synthetic cannabinoids, and how much harm they brought.


Sorry for the late reply, but for evidence you can look up Robert Miķelsons, who was an active child predator in the Netherlands. He was identified due to images found on a computer in the US. It ended up leading police to him, which stopped him. According to further investigations he abused an estimated 83 children, one of whom was abused close to 100 times. Many of the victims were less than 5 years old; some were babies.

Without finding the child porn, he would not have been identified and would've victimized many more children.


> Catching CP hoarders does little to help the children being exploited in the first place

If there is no market for it, there might be less incentive to produce any more of it.

Not that I believe we should all be continuously spied on by people who merely pinky swore to do it right and not abuse it.


I don't think market forces are what drive the production of CSAM. Rather, it's some weird fetish of child abusers to share their conquests. I'm sure you're aware, but when CP collectors are busted, they often have terabytes upon terabytes of CP.

But that's, I think, tangential - I don't understand pedophilia well enough to say something meaningful here.


Quite funny: looking for some figures, it seems I wasn’t the first one to do so, and that finding anything even remotely reliable isn't easy at all. See

- https://www.wsj.com/articles/SB114485422875624000

- https://thehill.com/blogs/congress-blog/economy-a-budget/260...

- http://www.radosh.net/archive/001481.html

> I'm sure you're aware, but when CP collectors are busted, they often have terabytes upon terabytes of CP.

Always appalled me that there is so much of it out there. FFS.

For some reason, it reminds me of when the French government decided to spy on BitTorrent networks for copyright infringement (HADOPI).

Defense & Interior asked them not to, saying that it would only make their work harder.

That some geeks would create and democratise new tools not to be caught downloading some random blockbuster, or even just out of principle, and both child pornographers & terrorists would gladly adopt them in a heartbeat, because while they weren’t necessarily the type capable of creating such tools, they had historically been shown to be proficient enough to use them.

Quite hilarious, when you think of it. Some kind of reverse "Think of the Children".

We still got HADOPI however, and now any moron has access to and knows how to get a VPN.


> We’re talking about actual material in real life, being shared by users and abusers.

And, more importantly, material that has been and will be produced through the actual abuse of real children. The spreading of which encourages the production of more of it.

GP's counter is absurd.


Or if you make a film about child abuse in which a child actor pretends to be abused, can you arrest the actor who abuses? If any depiction of the act is a crime, then you can.

This issue came before the US Supreme Court about a decade ago and they ruled that the depiction of a crime is not a crime so long as the depiction can in any way be called "art". In effect, any synthetic depiction of a crime is permitted.

However that ruling predated the rise of deep fakes. Would the SC reverse that decision now that fakes are essentially indistinguishable from the real thing? Frankly I think the current SC would flip since it's 67% conservative and has shown a willingness to reconsider limits on the first Amendment (esp. speech and religion).

But how would we re-draw the line between art and crime? Will all depictions of nudes have to be reassessed for whether the subject even might be perceived as underage? What about films like "Pan's Labyrinth" in which a child is tortured and murdered off-screen?

Do we really want to go there? This enters the realm of thought-crime, since the infraction was solely virtual and no one in the real world was harmed. If we choose this path, the freedom to share ideas will be changed forever.


Yes. People have been convicted for possession of hand-drawn and computer-generated images. Editing a child's image to make it pornographic is also illegal. So some "deepfake" videos using faces from young celebs are in all likelihood very illegal in many jurisdictions.

Images can be illegal even if all the people in them are of age but are portrayed as underage. Many historical dramas take legal advice about this when they have adult actors portraying people who historically married while underage by modern standards (i.e. the rape scene in BBC's The White Princess series). This is why American porn companies shy away from cheerleader outfits, or any other suggestion of high schools.


This topic is really fascinating to me: how we deal with images of bad things. Clearly, murder and assault are fine to reproduce at 100% realism - but even plain consensual sex is forbidden in the US for a wider audience.

This reminds me of a classic Swedish movie I watched; I forget its name. It was made in the 70s and contains upskirt panty shots of an actress who is playing a 14-year-old, along with her exploring sexuality with a same-age male peer. I think the actual actress was around 14 years old too. It made me feel really uneasy, maybe because my parents-in-law were around, but also because I thought "wait, this must be illegal to watch". In the end, the movie was really just a coming-of-age story from a time when we were more relaxed about these things.



> People have been convicted for possession of hand-drawn and computer-generated images.

If that's indeed the law in some countries, it is a stupid law that nobody should help enforce.

In Shakespeare's take on the story of Romeo and Juliet, Juliet is thirteen. So this play should probably be illegal in the countries that have such laws.


Not sure where you get your advice from. The Supreme Court has ruled loli legal. You can find it on 4chan in about 30 seconds.


"Supreme court has ruled loli as legal." - proof of this?

Had not heard that change.

The US arrested some diplomat-type guy some years ago for having sexualized bathing-suit shots of underage models on a device. All non-nude, I believe.

I think most of what GP is saying is mostly true - in some places cartoons can be / have been prosecuted.

Just cuz it's on 4chan every day doesn't mean it's legal - I thought 4chan deletes it all within 24 hours (?) - so that if they got a warrant to do something about it, it would already be gone before the warrant was printed, much less signed and served.

However, charges are going to vary by jurisdiction - I don't think many prosecutors are trying to get cartoon porn convictions as a top priority these days, but that doesn't mean it couldn't happen.

I don't think GP is accurate in saying that American porn avoids cheerleader and high-school themes for these reasons. Market forces have changed many times over the years, and depending on which porn portals one peruses, they will see more or less of cheerleader stuff and such. With some portals the whole incest titling is used for clickbait - it just depends on the portal, and much of what is not seen on portals is absent because of DMCA issues / lack of rights.. not because things aren't being produced and made available elsewhere.


And companies are terrified of any sort of liability here, which can lead to over-broad policies.

On reddit, you can't link to written content like stories or fanfiction (words only, without any image files) if any character in your fictional story is under 18 and does anything sexual in the story.


Maybe in Saudi Arabia? That’s not the case in the US.


I think it's very easy to get people to hate things that gross them out. It's always interesting to me how casually people joke about Epstein's death while simultaneously never talking about CP without a look of disgust on their face.

I'm not sure I fully understand how society can be more relaxed about actual pedophiles than it is about CP material.

Personally it bothers me to see the focus around things that gross people out rather than the actual child abuse.


Yikes, do not engage in a defense of child pornography in these types of arguments. The opposition is gunning for that.


I'm not defending any specific point of view. I'm merely pointing out a problem of map-territory conflation.


> I'm merely pointing out a problem of map-territory conflation.

Using complicated language does not make you smart... It just makes you hard to understand. Maybe if you expressed yourself in common language, you'd understand that the points you're trying to make are bogus.


I think OP is saying that it can become less straightforward to apply judgement in some real-world situations (which is encapsulated in the old adage "the map is not the territory", which I guess I thought _was_ common language, but we all live in our own bubbles, I suppose). Anyway, I believe the reason often cited for why child pornography is so bad is that it involves the coercion/exploitation of a human being who is unable to consent. I believe OP's position is that if this harm is removed, there would need to be some other grounds on which to prosecute someone. That is, some identifiable harm would need to be demonstrated.

There may be a reasonable counter to this argument, but I would not say that OP's position is "bogus" on its face.


A picture of child abuse is child abuse in that the abuse of a child was necessary to take the picture in the first place.

If the picture had no way to spread, it would likely not have been made - no incentive. As such, the fact that the picture is able to spread indirectly incentivises further physical abuse of children.


That simply does not follow. Child abuse existed before cameras.

Edit: People are unhappy with this refutation, so a second one then. The claim, specifically, is that CP is CA because CP requires CA to happen. So a photo of theft is theft. A recording of genocide is genocide. Clearly absurd. Never mind the context that the pornography in my question is drawn, thus no actual child was hurt.

Edit 2: The point was made somewhere that CP is hurtful to the child that it depicts, and this is obviously true - but only if spread. Therefore, distributing CP should be illegal, but that does not mean that it's justified to scan every phone in the world for such images.


Sure, some child abuse isn't perpetrated for the purpose of being filmed and sold/distributed. But a large percentage is.

That large percentage is disincentivized when technology that makes it more difficult to spread and/or makes it easier for LE to apprehend owners is implemented.

I never said that there would no longer be any child abuse with these steps, just less of it.


I think you'll be hard pressed to show this with any kind of evidence. What happens when you effectively ban some expression? People hide that expression. Likewise, if you are successful in this CSAM pursuit, you'll mostly drive pedophiles to reduce evidence sharing. I bet you dollars to donuts that the people who fuck kids will still fuck kids.


> Child abuse existed before cameras.

Therefore we shouldn't do anything about it.

While your comment is true, it does nothing to refute anything the previous commenter said. They are completely right. Also the spread of child porn increases the impact on the victim and should just for that reason be minimized.


> A picture of child abuse is child abuse in that the abuse of a child was necessary to take the picture in the first place.

People have made all of these points before, but I still wonder what the legal and/or practical basis of the law is when the pictures or videos are entirely synthetic.

On another note, I wonder how much of this kind of Apple(tm) auto-surveillance magic has made it into macOS.


> A picture of child abuse /is not/ child abuse.

Yes, exactly.

It may even help prevent child abuse, because it may help pedophiles overcome their urges and not act upon them.

I'm not aware of any data regarding this issue exactly, but there are studies that show that general pornography use and sexual abuse are inversely correlated.


I see the argument, but the counterargument is that by you doing that, you could possibly be nurturing and perpetuating abuse.

In effect, possessing such material may incentivize further production (and abuse), "macroeconomically" speaking. And I hate that evidently, yes, there is an economy of such content.


In many jurisdictions you are.


There's a difference a mile wide between moderating a service and moderating people's personal computers.

Maybe the same thing that makes this hard to understand about iOS will be what finally kills these absolutely wretched mobile OSes.


Except that the boundary between device and service is steadily eroding. A screen is a device and Netflix is a service, but what about a smart TV that won't function without an internet connection? A Nokia was a device you owned, but any Apple device is more like a service than a device, really, since you don't control what OS can run on it, which unsigned software you run, etc. So if you are OK with moderation on services and your device turns into a service... Perhaps we need laws to fully separate hardware and software, but in fact the integration between hardware and software is becoming stronger every year.


> moderating people's personal computers

This is done client side because your data is encrypted in their cloud. It won't be done if you disable cloud sync. If you just keep your CP out of the cloud, you're fine.


Apple can decrypt data stored on iCloud, and they scan it already it seems:

https://nakedsecurity.sophos.com/2020/01/09/apples-scanning-...

Which is what makes adding the client-side functionality even more problematic. They can easily extend this to scanning all offline content.


TBH it doesn't matter at all why they're doing it, they've crossed a line here.


The boundary should be that user generated personal data that is on a user device should stay personal on a user device unless explicit consent is otherwise given. That's it. The argument is always well "don't you want to save the children" to which I give this example.

In the USA guns are pretty freely available, relative to other countries. There is a huge amount of gun violence (including shooting at schools targeting children) yet every time major gun restriction legislation is introduced it fails, with one major reason being the 2nd amendment of the US constitution. This could again be amended, but sufficient support is not there for this to occur. What does this say about the US?

They have determined, as a society, that the value of their rights is worth more than all the deaths, injuries and pains caused by gun violence. A similar argument can be made regarding surveillance and child porn, or really any other type of criminal activity.


>> The boundary should be what is on a user device should stay personal on a user device.

But how many apps only store data locally on the device, versus sending data to the cloud, getting data from the cloud, or calling cloud APIs to get functionality that cannot be provided on device?

Having the power of a personal computing device is huge, but having a networked personal computing device is far greater.

Keeping everything on-device is a good ideal for privacy, but not very practical for networked computing.


Ah, I was talking specifically about user generated personal information. I have edited my post to make it clearer.


The conversation today is not really about PhotoDNA (checking image hashes against known bad list). That ship has sailed and most large tech companies already do it. Apple will do it one way or another. It’s a good way to fight the sharing of child porn, which is why it is so widely adopted.

The question is whether it is worse to do it on-device, than on the server. That’s what Apple actually announced.

I suspect Apple thought doing it on-device would be better for privacy. But it feels like a loss of control. If it’s on the server, then I can choose to not upload and avoid the technology (and its potential for political abuse). If it’s on my locked and managed mobile device, I have basically no control over when or which images are getting checked, for what.
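
For those unfamiliar with the mechanism: PhotoDNA-style matching computes a perceptual hash of each image and checks it against a list of hashes of already-known material; it is not classifying image content. PhotoDNA itself is proprietary, so here is a minimal Python sketch using a simple average hash as a stand-in (real systems also tolerate small hash distances rather than requiring exact set membership, and KNOWN_BAD_HASHES is a hypothetical database):

    # Toy stand-in for perceptual-hash matching; not PhotoDNA.
    from PIL import Image

    KNOWN_BAD_HASHES = set()  # hypothetical database of known hashes

    def average_hash(path, size=8):
        # Downscale to an 8x8 grayscale thumbnail and threshold each
        # pixel against the mean, so recompression and resizing
        # usually map to the same 64-bit fingerprint.
        img = Image.open(path).convert("L").resize((size, size))
        pixels = list(img.getdata())
        mean = sum(pixels) / len(pixels)
        return sum(1 << i for i, p in enumerate(pixels) if p > mean)

    def is_known_bad(path):
        return average_hash(path) in KNOWN_BAD_HASHES

Note that the hash list, not the matching code, is where the power sits - whoever controls the list controls what gets flagged.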


You have the exact same level of privacy as before. Before, the images were scanned in the cloud. Now they are being scanned as they exit your phone. In that way, the avoidance protocol is the same, namely

> I can choose to not upload and avoid the technology

They are not scanning your entire photo library at rest.

Could they? Sure. But they’ve had the hardware to do this for years, so they could have done so silently at any time.

This entire saga has been one of poor messaging and rampant speculation.


> If you want unlimited privacy, at least have the balls to admit that this will allow free trade of CSAM, but also revenge porn, snuff-films and other things like it.

Sure, in exactly the same way that the postal system, parcel delivery services, etc. allow it. But that's not to say that such things are unrestricted -- there are many ways that investigation and enforcement can be done no matter what. It's just a matter of how convenient that is.

It would also restrict CSAM a lot if authorities could engage in unrestricted secret proactive searches of everybody's home, too. I don't see this as being any different.


By that logic, can we send in someone into your house every day to look through every corner for child porn in digital or print form?

In fact there are a lot of heinous crimes out there some much worse than child porn IMHO. Singling out child porn as the reason seems like it's meant only to elicit an emotional response.


Fundamentally there is a difference between scanning files on your servers and scanning files stored on someone else's device, without their consent.


Not that I wholly agree with Apple's move due to other implications, but it has to be said that only photos that you have chosen to upload to iCloud will be scanned.

So, in a sense, you do consent to having those photos scanned by using iCloud. Maybe it's time for Apple to drop the marketing charade on privacy with iCloud, since iCloud backups aren't even encrypted anyway.


Should all personal computers be scanned all the time by Windows/Apple/... in case they contain CP?


Are you unable to answer any of the questions I asked? I'm seriously interested in hearing, from people who think Apple is in the wrong, where the border of acceptability is.

Would you prefer iOS submitted the photos to their cloud unencrypted and Apple scanned them there? Because that's what the others are doing.


iCloud photos are not end-to-end encrypted, Apple has full access to them. https://support.apple.com/en-us/HT202303


Evidently the episode you listened to was loaded with false information, because Kik has used PhotoDNA to scan for CSAM for 7 years. [1]

[1] https://www.kik.com/blog/using-microsofts-photodna-to-protec...


> Doc: From what I can tell, it only starts looking in the rooms and looking at individual people if they are reported for something.

https://darknetdiaries.com/transcript/93/

If I were Kik, I would also write a blog post about using something like this. Many, many things point at Kik only doing the bare minimum though. (If you're the type who supports moderation, apparently they're already doing too much according to much of HN.)


I don't want to be arrested by the FBI and my life ruined because my daughter, who is 6 and doesn't know any better, takes a selfie with my phone while she happens to be undressed/changing clothes and the photo automatically syncs to the cloud.


Chances are your daughter would be arrested or at least investigated for having child pornography.

https://www.theguardian.com/society/2019/dec/30/thousands-of...


You won’t. Understand what this is instead of falling for the hysteria.


It starts with comparing file hashes. I’m worried about where this goes next.


By that logic, how do you know it hasn’t already gone there, years ago?

And this scan is currently happening on the server. They’re just making it so that the scan will now happen at the time of upload as opposed to N seconds later.


I think the best way to stop child porn is to install government-provided monitoring software + require an always-online chip to be present in any video capture system, and any media player must submit screen hashes + geolocation and owner ID to a government service. If the device fails to submit hashes for more than 24 hours it should block further operation. Failing this we will never be safe from child porn in the digital world.

Not doing anything opens the door to further abuse.


If a company explicitly states in a service's terms and conditions that content stored or shared through that service will be scanned, I think that's acceptable because the user then can decide to use such a service on a case-by-case basis.

However, making this a legal requirement or deliberately manipulating a device to scan the entire content stored on that device without the user's consent or knowledge even, is extremely problematic, not just from a privacy point of view.

Such power can and will be abused and misused, sometimes purposefully, sometimes accidentally or erroneously. The devastating outcome to innocent people who have been wrongfully accused remains the same in either case (see https://areoform.wordpress.com/2021/08/06/on-apples-expanded... for a realistic scenario, for example).

The very least I'd expect if such a blanket surveillance system were implemented is that there were hefty, potentially crippling fines and penalties attached to abusing that system in order to avoid frivolous prosecution.

Otherwise, innocent people's lives could be ruined with little to no repercussions for those responsible.

Do strict privacy requirements allow crimes to be committed? Yes, they do. So do other civil liberties. However, we don't just casually do away with those.

If the police suspect a crime to have been committed they have to procure a warrant. That's the way it should work in these cases, too.


There are reasonable measures to take against child abuse, and there are unreasonable ones. If we can’t agree on that, there’s no discussion to be had.


> Should the internet be a free for all place without moderation?

The better question would be: do you want an arbitrary person (like me) to decide whether you have a right to send an arbitrary pack of bytes?

Neither "society" nor "voters" nor "corporations" make these decisions. It is always an arbitrary person who does. Should one person surrender his agency into the hands of another?


>Neither "society" nor "voters" nor "corporations" make these decisions. It is always an arbitrary person who does.

Except in this case, a corporation (Apple) is making the decision relative to the sexual mores of modern Western society and the child pornography laws of the United States. It's unlikely this decision was made and implemented randomly by a single "arbitrary" individual. Contrary to your claim, it's never an arbitrary person.

And yes, I believe Apple has the right to decide how you use their product, including what bytes can and cannot be sent on it.

>Should one person surrender his agency into the hands of another?

We do that all the time, that's a fundamental aspect of living in a society.

But in this specific case, no one is forcing you to use an Apple phone, so you're not surrendering your agency, you're trading it in exchange for whatever convenience or features lead you to prefer an Apple product over competitors. That's still your choice to make.


"Apple has the right to decide how you use their product"

I hate this argument. If Apple wants to claim ownership of _their_ products then they shouldn't sell them. They should lease them.


Should the internet be a free for all place without moderation?

Yes!!!


I think different opinions are great. Just wanted to point out that saying the internet should be moderated, on the Hacker News forums, is surprising and funny.


“Apple is opening the door to broader abuses”. Wouldn't not doing anything open a different door to broader abuse?

Actually, I believe that in regards to "not doing anything open a different door to broader abuse" - no. Starting these scans will lead to broader ones, and no tinfoil hat is needed to see it.

If we compare: Apple starts scanning for known hashes, not "looking at all your naked pics and seeing if you've got something questionable" - this is just looking for known / already-created-by-others / historical things. By doing a scan for one thing, they open the Pandora's box of scanning for other things - and then they will be compelled by agents to scan for other things - and I believe that is much broader.

Next month it will be a scan for any image with a hash that matches a meme with Fauci - the current admin has already stated that, in their desire to stop 'disinformation', they want to censor SMS text messages and Facebook posts (presumably also Facebook DMs and more).

There is a new consortium of tech cos sharing a list of bad people who share certain PDFs and 'manifestos' or something like that now, right? Might as well scan for those docs too and add all of them to the list.

What power this could lead to - soon the different agencies will want scans for pics of drugs, hookers... how about a scan for those images people on WhatsApp are sharing of the black guy killing a cop with a hood on?

What happens when a new admin takes the House and White House and demands scans for all the 'Trump's a traitor' pics, to make a list?

See, this is where the encryption backdoors go... and where is that line drawn? Is it only federal-level agencies that get to scan? Can a local Arizona county sheriff demand a scan of all phones that have traveled through their land/airspace?

Frankly, with public chats and public forums - if you post kids or drugs or whatever is not legal there, then it's going to get the men with guns to take note. What you do in private chats / DMs, etc., I think should stay there and not go anywhere else.

I don't like the idea that Microsoft employees look at naked pics of my gf that get added to a pics folder because someone set up a Windows system without disabling OneDrive. So I don't use OneDrive, and I tell others not to use it - and not to put pics of me there, or on Facebook or TikTok.

For all those people that have nothing to hide - I feel sorry for you - but I wonder whether your kids/grandkids should have thousands of agents looking into their private pics just to make sure there is nothing illegal there.

So would these scans catch nudes sent through WhatsApp? That would rather undermine the encryption there.

Would this scan get a match if someone was using a chat room and some asshat posted something they shouldn't, and every person in the chat got the pic delivered to their screen? So many questions.

I also question what the Apple scans would cover as far as folders and whatnot - would it scan for things in a browser cache, i.e. not purposefully downloaded? If someone hit a page that had a planted image on it, would that person now be flagged for inspection / seizure?

If they are working with nice guys in Cali who just want to tap people on the shoulder and have a talk - will they send flagged notices to agents in other places who may go in with guns drawn and end up killing people?

I'm sure many people are fine with either outcome - but I think there is a difference between someone surfing the web and someone who causes real-world harm, and not everyone who surfs the web deserves bullets to the chest, and there is no way to control that... well, maybe in the UK, where most cops aren't armed.


Maybe I'm an overly-sensitive person but I really can't get comfortable with a neural network looking over my shoulder, spying on me 24/7.

My parents gave me privacy and treated me with respect when I was a child. Now I'm an adult, and in a way it's like I have less privacy than when I was a kid. And the entities violating my privacy have way more power than my parents.

I want to continue working with technology, but how can I make mass consumer goods (i.e. apps) without being a user myself? These moves are going to slowly force me out of technology, which is sad, because creating with programming is my favorite activity. But life without some semblance of privacy is hardly life.

Here's to a slow farewell, Apple! It was a good run.


I could have written this comment myself, albeit less eloquently.

That is a very accurate representation of how I feel about this, too. I enjoy building apps, but I don't know that I can keep using these devices.

I was looking forward to upgrading to the new hardware in the fall, but now I'm not sure I can stomach the implications of buying a new device that may at any point start policing what I do far beyond what I'd accepted at the time of purchase.


Why is CP punished more harshly than CA [1]? Is it because it is easier to do so, or is it because it gives an illusion of protecting children?

This, of course, ignores how a lot of child abusers are underage themselves and know the victim,[2] and that the prosecutors are committing the same crime as the prosecuted in the case of CP, and that, in too many cases, the content in question is impossible to call malicious if it is seen in context.[3]

[1] https://columbiachronicle.com/discrepancy-in-sex-offender-se...

[2] https://web.archive.org/web/20130327054759/http://columbiach...

[3] https://news.ycombinator.com/item?id=5825087


Your [1] 404s.


My bad, got [1] and [2] backwards. [1] is supposed to be the archive link, and [2] is https://www.d2l.org/wp-content/uploads/2017/01/all_statistic....


If Apple doesn't change course on this, my next phone will not be an iPhone. I will dump my iPad and Apple Watch as well. This is complete bullshit. I'm angry. I never thought I'd see "privacy" Apple come out and say we're going to scan your photos on your device, scan your iMessages, etc. This is insane.


I gave up on Macs almost 20 years ago, but I have an iPhone 7 that I need to upgrade. This whole shitshow has put that purchase on hold indefinitely.


I just got an iPhone 12 a couple months ago because “Hey it’s definitely more private than google”, better control of app permissions, and nice hardware.

Seems unclear what the alternative is now. The Linux phones I’ve looked at have a lot of catching up to do.


Linux on the desktop has a lot of catching up to do when it comes to Mac/Win. The question is whether it is good enough.

Personally, Linux desktop has been good enough for decades, for my work as a software engineer. I gave up many conveniences when I switched fully from Macs (e.g. BBEdit, Finale for music, etc.).

I think phones are reaching the same point, where I am ready to sacrifice a decade or more of software “progress” to regain control of my hardware.

In the very long term, I will always bet on open source when it comes to commodity devices. I believe phone development will progress much like the Linux desktop experience, which is now relatively rock solid. It may take a decade, but each of us that joins their movement will speed these alternatives on to their inevitable market domination.


> To help address this, new technology in iOS and iPadOS* will allow Apple to detect known CSAM images stored in iCloud Photos.

After reading OP, my understanding had been that this update will cause _all_ photos on Apple devices to be scanned. However, the above quote from Apple's statement seems to indicate that only photos uploaded to iCloud Photos are scanned (even though they are technically scanned offline).

This doesn't invalidate any of the points people are making, but it does seem the update does not directly affect those of us who never stored our photos “in the cloud”.


And this has always been Apple’s stance because of the government: we lock down the phone and don’t let the government in, but if you use a cloud service, that data doesn’t have the same protections because it’s not stored on your property.

https://morningstaronline.co.uk/article/w/apple-drops-plans-...

This sounds like them trying to find a way to encrypt your data and remove the fbi from having a “what about the children excuse”. They scan the image on upload and then store it on iCloud and encrypt it.

It’s of course a slippery slope but the FBI has been trying to have back doors everywhere since forever.


In a somewhat judo move, it could end up protecting user security going forward: if the FBI has no leverage from pedophile content in iCloud, there’s no argument for a backdoor. I won’t play total corporate shill here, but it seems people are jumping to this being the end times, vs. a) a way to catch severe abusers of child pornography and b) removing a trump card from future security-org strawman arguments.


Exactly this. And this further opens the opportunity for E2EE iCloud Photos where Apple can’t look at your photos on their servers at all. This gives them cover that they’re not hosting illegal content.


How long until those of us not storing data in the cloud are the guilty ones? At the very least, the suspect ones?


https://www.apple.com/customer-letter/

A stark change since Apple/FBI.


And what happened since then, I wonder?


They were probably strong-armed into this. Perhaps the CCP and the FBI talked, and Apple was told they'd be cut off from their suppliers if they didn't introduce this.

It doesn't matter. This is the wrong choice, and everyone should rebuke and abandon Apple for this.


US Government: We suspect the person in this photo of committing a crime. Here is your subpoena, Apple. You are directed to scan all iPhone and iCloud storage for any pictures matching this NeuralHash and report to us where you find them.

Chinese Government: Here is the NeuralHash for Tienanmen square. Delete all photos you find matching this or we will bar you from China.

Apple has at this point already admitted this is within its capability. So regardless of what they do now, the battle is already lost. Glad I don't use iThings.


Yeah as I mentioned down the thread, once you act against the users, you're dead. I'm done. They are going from my life.

Good job it's eBay's 80%-off-fees weekend here in the UK.


Pretty sure I read this would only apply to US-based users.


Yes so far. Once the mechanism is in place it will be rolled out. I'm in the UK and we have a fairly nasty set of state legislation on censorship already and an increasingly authoritarian government so this is a big worry.


Absolutely agree, this will no doubt reach Apple devices worldwide, just a matter of time.

But there is still time for Apple to halt these changes.


Apple will try again with something that's wrapped up to be more palatable to consumers. A change of heart in the next 90 days does not mean that Apple is a good actor.


Well, it’s now common knowledge that they can and will do this. So oppressive governments will almost certainly mandate that it’s required to do business in those countries. Thus it’s a no-win game for the end user. Unless you choose not to play.


I mean eBay isn’t exactly free of bad behavior…


I think they're just selling their iDevices on there


Yes that. I'd rather take them back to the Apple store and have a large fire out the front while holding up a doom saying sign about their laughable privacy stance, but that'd detract from the point.


Oh I misunderstood


Guess I'll just keep my spyPhone then. /s


Selfies are suddenly going to be used as surveillance tools by the govt, or maybe a bad actor at Apple wants to find/track down their ex.

This system has so much potential for abuse with very little upside.


Suddenly?


Nailed it. It was the intention all along. Funny how everyone just pretends that it's not what it is.


The Chinese government already has direct access to everyone's data and messages. There is no encryption. This is mandated.

[0]: https://www.nytimes.com/2021/05/17/technology/apple-china-ce...


"Apple CEO Tim Cook Slams Indiana’s ‘Religious Freedom’ Law"

https://www.rollingstone.com/politics/politics-news/apple-ce...

The hypocrisy levels are vomit inducing...


What hypocrisy are you seeing?


You must be trolling... provoking... joking... or you are Tim Cook :-) Are personal ethics and values disconnected from daily business, to be changed based on the place of business, or must Apple do it because they are just complying with local laws in China?

Because IBM was also just following the local laws in 1939.

https://en.wikipedia.org/wiki/IBM_and_the_Holocaust


So this is just about China?

When Apple started working in China, the prevailing western belief was that economic liberalization would cause political liberalization in China, so even though China was still relatively repressive, doing business there would help move things in a better direction.

At the time, this was a very reasonable and widespread belief, which turned out to be wrong.

Betting on other people and turning out to be wrong doesn’t make you a hypocrite. It makes you naive.


Naive is not something I would associate with Apple, but I guess they have seen it now. I guess they must be ready to pull out any time... Or is their CEO too busy lecturing others on the righteous path in other jurisdictions?

"Apple's Good Intentions Often Stop at China's Borders"

https://www.wired.com/story/apple-china-censorship-apps-flag...


> Naive is not something I would associate with Apple, but I guess they have seen it now.

It wasn’t just Apple who was naïve, it was the entirety of US and European foreign policy too. Do you want to claim that the West didn’t expect political liberalization in China?

That article doesn’t change anything. Apple didn’t go into China thinking they were helping to strengthen an authoritarian regime. They went in thinking they were helping to liberalize it.

We all, Apple included, got played.


> direct access to everyone's data and messages.

From what I understand, only to Chinese customers’ data and messages (bad enough, sure, but not as bad as you say).


Depends on the company. If it’s a Chinese company like TikTok, data is stored in China and is therefore property of the state. Things are about to get worse as they crack down on private companies in China.

And it’s not just Americans who worry about this. When word got out that Line was storing user data in China servers, it blew up in the news in Japan and Taiwan and went into immediate investigation. Line ended up moving Japanese user data to South Korea. Chinese aggression and Xi’s obsession with power is not just something Americans spout off in Reddit and YouTube comments. They’re a legit threat to Taiwan, Japan, Australia and India.

https://asia.nikkei.com/Business/Technology/Line-cuts-off-ac...


TikTok is run by a US-based subsidiary of ByteDance

They have insisted “that TikTok U.S. user data is stored in Virginia, with a back-up in Singapore and strict controls on employee access.”

https://techcrunch.com/2020/08/17/tiktok-launches-a-new-info...


China's state security laws trump all "strict controls" for employees.


Which employees? The ones who aren't Chinese citizens and aren't located in China? What does China state security have to do with them?


ByteDance's Douyin product has Chinese employees that are based in China. TikTok employees are also ByteDance employees, which means a ByteDance employee who passes through their "strict controls" can access whatever TikTok data they want. Even if that's only a dozen Chinese nationals who can get access, that's a dozen people required by Chinese law to help the state security apparatus.

I don't see any reason to give them the benefit of the doubt considering they already moderate content the Chinese government doesn't like [0] as a matter of company policy.

[0] https://www.theguardian.com/technology/2019/sep/25/revealed-...


> Depends on the company

I was referring specifically to Apple & iCloud (and I thought GP was as well).


That is what I meant. The point is that everyone talking about how this policy introduces a backdoor which can be exploited by totalitarian states are wrong.

There's no need. Apple willfully obliges local laws.


What will Apple do when Iran or Qatar (or any of the other 71 countries where homosexuality is illegal) upload photos to the CSAM database that they consider illegal acts?

In some of those countries same-sex sex is punishable by death.


Nothing is stopping these countries from doing this already. China, Saudi Arabia, and Iran have already considered forcing tech companies to track user activity. At the end of the day these companies are subject to the laws of the countries they do business in, and this has already screwed over HK, Uyghur, Iranian, and Egyptian citizens. Laws forcing data to be stored in given regions alongside encryption keys have already made it dangerous to be homosexual in the countries you’ve mentioned (except Iran, which most businesses cannot do business in).


> You are directed to scan all iPhone and iCloud storage for any pictures matching this NeuralHash and report to us where you find them.

I think the US Govt (and foreign) would actually send Apple tens of thousands of NeuralHashes a week. Why would they limit themselves? False positives are "free" to them.


> False positives are "free" to them.

Correct, just look at the incentives when it comes to handling sniffer dogs.


Hasn’t Google been parsing Google Photos images, email content, and pretty much everything else since forever? Do you just stay off of smartphones and cloud platforms entirely?


Microsoft and Google have been doing that in their online services for ages, but not on your personal devices.


So if I don’t back up with Google Photos or Google Drive, would my files be safe for now?


Yes, theoretically.


They scan things I send them.

They don't (publicly announce that they) scan things on my device.


The Apple feature discussed here is for photos being synced to iCloud Photos. It does not scan arbitrary local content.


> It does not scan arbitrary local content.

Yet.

Before it was "only content uploaded to iCloud is scanned" and now it's "photos are scanned on-device". That's frog boiling that tomorrow easily becomes "arbitrary files are scanned anywhere on the device".


Only photos being uploaded to iCloud are scanned on-device for CE imagery. This is the alternative to giving cloud storage broad decryption ability to do scanning in-service (as, say, Microsoft, Google, Twitter, and Facebook do).


They can already decrypt iCloud photos, so why else perform an on-device scan, if not with the intention to scan all local content?


And the matching photo is uploaded upon match. So regardless the photo is uploaded. What's the point again of taking this further step?


That is an EXCELLENT question, fwiw.

They could have just had a local failure. I suspect there were a lot of arguments around this point - should they be making an attempt merely to prevent such content from their servers, or to detect/report behaviors which may be illegal and harmful.


It also scans every photo that an iMessage user sends/receives.


I get this sentiment but the known-bad-image-scanning technology is not new. That’s not what Apple announced. Many tech services already do that, which already enables the slippery slope you’re illustrating here.

I’m not trying to minimize the danger of that slope. But as someone who is interested in the particulars of what Apple announced specifically, it is getting tiresome to wade through all the comments from people just discovering that PhotoDNA exists in general.


But now it's on your device.

"Its just when you upload to icloud or recieve an iMessage" they say. BUT the software's on your device forever. What about next year? What about after an authoritarian goverment approaches them? By 2023 it might be searching non-uploaded photos.

You can always not use cloud tech like PhotoDNA but you can't not use software BUILT IN to your device. Especially when its not transparent.


It’s also not inconceivable to access the camera live and perform recognition live on feeds. It’s not even that expensive to do that anymore.


It would probably kill the battery to do that constantly, as the processor would not be able to sleep ever. Although relying on energy efficiency to not improve is not a good long term situation.


You can be selective about it. There is GPS to tell you where you are, microphone to tell you how many people are around, motion sensor, light sensor to sense if you’re in a pocket, etc.

It doesn’t have to be constant, it could even be on demand.

Personally, I am resigned to the fact that this kind of surveillance will happen. Technology cannot be slowed down or stopped artificially. There are just too many actors (not just Apple) to expect all of them to act in good faith. On top of that, the governments all over the world are salivating over this technology too. I don’t trust governments will curtail technology - and thus themselves - via policy. And even if one does, there is always another. This is almost nothing new in the UK, for example.

I’m not optimistic about the direction we have been heading for the last 15 years or so.


Facial recognition is already part of all camera software today.


Nothing is stopping countries from demanding every tech organization do this anyway; it didn't just become a possibility now that Apple is running code on-device. Also, this code can and probably will be able to be activated/deactivated/removed remotely (for better or worse!).


Have they? The whitepaper made it sound like the encrypted safety voucher is created from the database of known CP on the device at the time the image is uploaded, and the only way for Apple to decrypt something is if it matched something in that database.

It did not sound like they could retroactively add additional hashes and decrypt something that already was uploaded. They could theoretically add something to the list of hashes and catch future uploads of it but my understanding was they cannot do this for stuff that has already been stored.


Directly from the link:

> ...the system performs on-device matching using a database of known CSAM image hashes provided by NCMEC and other child safety organizations. Apple further transforms this database into an unreadable set of hashes that is securely stored on users’ devices.

>... The device creates a cryptographic safety voucher that encodes the match result along with additional encrypted data about the image. This voucher is uploaded to iCloud Photos along with the image.
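
To make the quoted flow concrete, here is a heavily simplified sketch (every name is hypothetical). The real system uses a perceptual NeuralHash plus private set intersection and threshold secret sharing, so the device itself never learns whether a match occurred; the plain boolean below is purely illustrative:

    import hashlib, json, os

    BLINDED_DB = set()  # stand-in for the on-device hash database

    def neural_hash(image_bytes):
        # Placeholder: the real NeuralHash is a perceptual model,
        # not a cryptographic digest like this one.
        return hashlib.sha256(image_bytes).hexdigest()

    def make_safety_voucher(image_bytes):
        # Encodes the match result plus encrypted data about the
        # image; uploaded alongside the photo, per the quote above.
        return json.dumps({
            "match": neural_hash(image_bytes) in BLINDED_DB,
            "encrypted_payload": os.urandom(16).hex(),
        })

Under that scheme, retroactively testing already-uploaded photos against new hashes would indeed require more than a database update, which is the parent's point.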


> they cannot do this for stuff that has already been stored.

That's a very simple software update.

Every year iOS adds more image categories that automatic tagging supports (dogs, shoes, bridges, etc). And if you add a friend's face to known faces, it'll go and search every other photo for that face.

It sounds absolutely trivial to re-scan when the hash DB gets updated.
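
Something like this, assuming some on-device photo index and a placeholder digest standing in for a perceptual hash (all names here are hypothetical):

    import hashlib

    def rescan_library(photo_paths, updated_hash_db):
        # Re-hash every photo and flag anything matching the
        # freshly updated database.
        flagged = []
        for path in photo_paths:
            with open(path, "rb") as f:
                digest = hashlib.sha256(f.read()).hexdigest()
            if digest in updated_hash_db:
                flagged.append(path)
        return flagged

If the indexing infrastructure already exists, as it does for faces and objects, this loop is the easy part; the barrier is policy, not engineering.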


As far as I can see:

1. This is a serious attempt to build a privacy preserving solution to child pornography.

2. The complaints are all slippery slope arguments that governments will force Apple to abuse the mechanism. These are clearly real concerns and even Tim Cook admits that if you build a back door, bad people will use it.

However:

Child pornography and the related abuse is widely thought of as a massive problem, that is facilitated by encrypted communication and digital photography. People do care about this issue.

‘Think of the children’ is a great pretext for increasing surveillance, because it isn’t an irrational fear.

So: where are the proposals for a better solution?

I see here people who themselves are afraid of the real consequences of government/corporate surveillance, and whose fear prevents them from empathizing with the people who are afraid of the equally real consequences of organized child sexual exploitation.

‘My fear is more important than your fear’, is the root of ordinary political polarization.

What would be a hacker alternative would be to come up with a technical solution that solves for both fears at the same time.

This is what Apple has attempted, but the wisdom here is that they have failed.

Can anyone propose anything better, or are we stuck with just politics as usual?

Edit: added ‘widely thought of as’ to make it clear that I am referring to a widely held position, not that I am arguing for it.


>Child pornography and the related abuse is a massive problem

>proposals for better solution

How do you define "massive"? What makes you think that the current approach is not working well enough?

Of course, even the very first case is really bad.

But we need to go beyond the sensationalist articles which talk about millions of pictures out there. Even though this is, yes, a problem.

In the US, it looks like the number of child pornography cases is going down each year, from 1,593 in 2016 to 1,023 cases in 2020:

https://www.ussc.gov/sites/default/files/pdf/research-and-pu...

So yes, there is a problem, but I would love to see some justification that the problem is actually as massive as people imply, or getting worse. Or whether they have a political goal in mind.

And it feels to me like we talk about it a lot more, and there is less taboo around it and less social protection for perpetrators, which must help fight this more efficiently than in the past.


> In the US, it looks like the number of child pornography cases is going down each year

Prosecuting fewer cases doesn't mean less abuse is occurring.

It doesn't even mean that the number of instances of abuse for which cases are prosecuted are going down, just that the number of offenders prosecuted is going down.


Agree. It may well be a bad proxy. But that does not mean the problem is getting worse either; one would need to back that up with some arguments/estimates. The number of pictures on the internet does not seem a good one to me (the cat population has not grown that big).

I am thinking that people don't seem to know if the problem is really getting bigger. They just say it. If we don't know that, we cannot use growing child pornography as an argument to reducing civil liberties.


> They just say it. If we don't know that, we cannot use growing child pornography as an argument to reducing civil liberties.

https://www.fbi.gov/news/stories/child-predators

The FBI says it. People can and do use that argument whether we like it or not.


Yes, they said it in 2011. With just one data point and no trend. Nothing shows here it exploded since. So the FBI and the whole society may have found somewhat effective ways without scanning phones.


I don’t expect to convince you the FBI is right. I am pointing out that other people will be convinced.


> child pornography cases is going down each year, from 1,593 in 2016 to 1,023 cases in 2020

Sam Harris (and the FBI) would likely say this is because detection is getting harder thanks to encryption.

The only authority on this is actually the FBI, but I presume you wouldn’t trust them.

That distrust is reasonable, but irrelevant. What matters is that many people will trust them and see the fear as legitimate.

Arguing over numbers with me is irrelevant. I’m saying both positions are reasonable fears.

You cannot win by denying that.

One winning move would be to take the problem seriously and work out a better technical solution. There may be others.


>One winning move would be to take the problem seriously

What makes you think that the problem is not taken seriously? I am no specialist, but it looks like cases are investigated and people are going to jail. You say that it is not the case, so I am curious what leads you to this statement.

Yes, policies can be based on fears and opinion, but some numbers and rationality usually help make better decisions. I'd love to hear something more precise than fears and personal opinion, including the FBI of course.


> Yes, policies can be based on fears and opinion, but some numbers and rationality usually help make better decisions.

Sure, but you aren’t the decision maker. This is a political decision, and so will be based on the balance of fears and forces at play, not a rational analysis.

It doesn’t matter how right or rational you are. It only matters how many people think you understand what they care about.

If the Hacker position is ‘people shouldn’t care about child pornography because the solutions are all too invasive’, so be it. I just don’t think that’s an effective position.


>you aren’t the decision maker.

That is only partially true. If we both write, other people may read; then you and I possibly influence people. We are social beings, influenced by the people around us. So we're both a tiny part of the decision process. And you talk about political decisions: politicians keep listening to people through polls, phone calls received, and other channels. Even dictators can be influenced, when the numbers are high enough.

So it does matter how rational we are. Over time and on average, people are more often convinced by rational discourse than irrationality. Probably because rationality has a higher likelihood of being right. But yes, we'll go through periods where irrationality wins... Still, it's very hard to be on the opposite side of logic and facts for a long time.

>If the Hacker position is "people shouldn’t care about child pornography"

I have not read that here. If you believe that child pornography is a massive issue, I respect that. I just would have hoped you could better describe the actual size of the problem and its evolution. You could have influenced me more effectively.


> If you believe that child pornography is a massive issue, I respect that. I just would have hoped you could better describe the actual size of the problem and its evolution.

I don’t mind whether you are convinced. I’m not trying to convince you about the size of the problem.

My position isn’t about how large the threat is. I don’t have any information that you don’t have access to. My position is that if we care about privacy we have to accept that people other than ourselves think the problem is big enough to warrant these measures.

You have already lost this battle because enough people are convinced about the problem that tech companies are already doing what you don’t want them to do.

https://www.fbi.gov/news/stories/child-predators


>if we care about privacy we have to accept that people other than ourselves think the problem is big enough to warrant these measures.

Not sure what you mean here by "accept". Accept the fact those people and opinions exist? Sure! Accept their opinion without challenging, without asking questions? No. Accept this opinion is big and rational enough for the majority to follow and make laws? No.

>You have already lost this battle

What makes you think that? That's just your opinion. You know, even when battles are lost, wars can still be won later. "Right to Repair", "Net Neutrality", "Global Warming", and here, "open source hardware" - all those battles have been fluid. Telling people it is over, that it is too late, is a very common trick to try to influence/convince people to accept the current state. That certainly does not make it true.

I understand you may try to convince readers that it is over, because it may be your opinion. If that's the case, just be frank about it, and speak proudly for yourself of what you wish. Don't hide behind "politicians", "tech companies" and "other people".


> What makes you think that? That's just your opinion.

It’s not just my opinion that Apple has implemented a hash based mechanism to scan for child pornography that runs on people’s phones. People complaining about it have definitely lost the battle already. It is already here.

> I understand you may try to convince readers that it is over, because it may be your opinion.

That is not an accurate understanding of my argument.

My position is to agree with those who see this as a slippery slope of increasingly invasive surveillance technology, and to point out that simply arguing against it has been consistently failing over time.

I am also pointing out that one reason it’s failing is that even if the measures are invasive and we think that is bad, the problems they are intended to solve are real and widely perceived as justifying the measures.

What I advocate is that we accept that this is the environment, and if we don’t like Apple’s solution, we develop, or at least propose alternative ways of addressing the problem.

That way we would have a better alternative to argue in favor of rather than just complaining about the solution which Apple has produced and which is the only proposal on the table.


Are you falling into their trap knowingly or not?

There is a child molestation problem everywhere in the world, including online. I have seen nothing explaining that it is getting bigger / worse. I have read that most of the cases involve family members, in the real world.

So when I hear Apple and the government explain that "because of the children" they want to monitor our phones more, in the context of growing assumed dictatorships, Pegasus, and the Snowden revelations, do you really think that solving the child pornography issue will restrain them, or slow them down? Open source hardware, political pressure, consumer pressure, and regulation, possibly monopoly break-ups. In the US, it starts with the people.

But doing better with child pornography won't change anything there, it just moves the discussion to some other topic. Distraction. That is my point all along. There is no data that shows that all of a sudden child pornography has progressed leaps and bounds. So people suddenly concerned by that are most likely not truthful, and they have a very strong agenda. That's what we need to focus on, not their "look at the children" distraction.


> Are you falling into their trap knowingly or not?

This is a false dichotomy and a false assumption.

> There is a child molestation problem everywhere in the world, including online.

Agreed.

> I have seen nothing explaining that it is getting bigger / worse. I have read that most of the cases involve family members, in the real world.

Have you listened to Sam Harris, or heard the FBI? They have a very different view.

It could be that both are true: there is a child porn problem and governments are using it as an excuse.

The only thing you seem to be going on is a story you once heard, that may have been true at the time, but may not be now.

> So when I hear Apple and the government explain that "because of the children" they want to monitor our phones more, in the context of growing assumed dictatorships, Pegasus, and the Snowden revelations, do you really think that solving the child pornography issue will restrain them, or slow them down?

That reasoning would be misleading, given that you are assuming child porn is not a growing problem.

Porn in general is growing hugely; why wouldn’t child porn also be growing?

Generally Apple has resisted overreach, but I agree that they are slowly moving in the wrong direction.

Apple is not the government.

> Open source hardware,

> political pressure, consumer pressure, and regulation, possibly monopoly break-ups. In the US, it starts with the people.

You contradict yourself here. You seem to think the government can’t be slowed and yet political pressure will work. Which is it?

> But doing better with child pornography won't change anything there,

I agree - it won’t eliminate the forces that want to weaken encryption etc.

But a more privacy respecting solution would still help.

> it just moves the discussion to some other topic. Distraction. That is my point all along.

> There is no data that shows that all of a sudden child pornography has progressed leaps and bounds. So people suddenly concerned by that are

Isn’t there? The FBI claims it is growing.

> most likely not truthful,

Ok, we know you don’t trust the FBI.

But enough people do that we can’t ignore them. Even if the problem isn’t growing as Sam Harris claims it is, trying to persuade people that the problem doesn’t need to be solved seems like a good way to undermine the causes you support.

> and they have a very strong agenda. That's what we need to focus on, not their "look at the children"

As I say, I agree there are people trying to exploit ‘look at the children’ in support of their own agenda.

I just don’t think that means there isn’t a real problem with child porn. Denying that there is a problem seems equally agenda driven.


There is no end to the loss of privacy in the name of safety if your bogeyman is increasingly sophisticated actors.

The more sophisticated child molesters out there will find out about what Apple is doing and quickly avoid it. Isn’t that what the FBI is complaining about, that they have grown increasingly sophisticated?

The more sophisticated privacy adherents will also avoid Apple and resort to end to encryption and offline tools.

What is the actual outcome? You won’t get more arrests of child molesters. Instead, you get a security apparatus that can be weaponized against the masses. Furthermore, you will have the FBI complaining that child molesters are increasingly out of their reach and demanding greater powers. They will then try to mandate child porn detectors built into every phone.

This creep has been occurring for years. Go read the Snowden disclosures.

First your cell phone companies worked with the government for mass harvesting of data. No need for any suspicion because they promise not to look unless there is one. That wasn’t enough because the data was encrypted.

Second they had the companies holding the data snoop in on data that was shared. That wasn’t enough.

Third they had the companies holding the data snoop in on data EVEN when it wasn’t shared, just when it was uploaded to them. Not enough for them!

Now they will have it done on device prior to uploading. Does this mean that if it fails to upload, it gets scanned anyway? Why yes!

Next they will have it done on device even if it never is held by the company and never shared and never even intended to be uploaded.

The obvious goal is that the government gets access to ALL data in the name of safety. No need for warrants. Don’t worry about it. They won’t look unless there is suspicion. Oops, never mind that, we will just have tools look.

There is no end to the loss of privacy in the name of safety if your bogeyman is increasingly sophisticated actors.

Why isn’t this obvious to everyone?

Anyone old enough to remember the Clipper chips?


> The more sophisticated child molesters out there will find out about what Apple is doing and quickly avoid it.

I think this is exactly what Apple wants to be the result of their iMessage scanning.

They are not in this to make arrests. They just want parents to feel safe letting children use their products. Driving predators to use different tools is fine with them.

As far as the FBI goes, this is presumably not their preference, but it’s still good for them if it makes predation a little harder.


My point is that the FBI will just use that as a pretext for greater intrusion into privacy. Why stop at users who have iCloud photos turned on? Why not scan all photos?

Why limit it to child predators? Why not use this too for terrorists and anyone the FBI or any other government deems as subversive?

In fact, if you just look at what the FBI has been saying over the years, that is exactly what they intend to do.

People who say that this is a slippery slope argument don’t even notice that they have been sliding down that slippery slope for decades.


> Why stop at users who have iCloud photos turned on? Why not scan all photos?

If they wanted to, they could have done this silently, years ago when the Neural Engine was first released. This is at least an attempt at a transparent, privacy-oriented approach. And it opens the door to more E2EE content in iCloud without the government openly accusing them of enabling distribution of abusive content.


Where is anyone arguing against you? The point is that the demands for solutions may be neverending, but that doesn’t mean the problems are non-existent.

If we want to limit the damage caused by the dynamic, we need to offer better solutions, not just complain about the FBI.


The problem of child molesters is a fixed quantity. The loss of privacy is ever increasing. When you see this dynamic, you know that a con is afoot.

The solution for child porn isn’t found in snooping on everyone. It is in infiltrating the networks. Go have the FBI infiltrate the rings. Here is an example of why the FBI is disingenuous. This child porn ring wasn’t using phones. Guess what they were using?

https://abc7.com/archive/7844451/

Computers in public libraries

Like I said, sophisticated actors aren’t the targets.

Another example. Osama bin Laden. He was air gapped. No cell phones or computers for him. No one even calling on a phone nearby. Osama bin Laden was found via an informant.

The next actor will be even more sophisticated. Probably single-use dumb paired cell phones with call locations randomized. Probably plastic surgery. Locations that spy satellites cannot see.

Did snooping on cell phone data help find Osama? When that wasn’t enough, did grabbing all online data help? How about grabbing data straight from smart phones? Nope. Nope. Nope. Yet governments want more, more, more. Why do you think snooping helps against people who don’t even use phones for their most private communications?


You were saying that this is a slippery slope argument, which implies that it is a fallacious argument. I am saying that isn’t the case. We have been sliding on this slope for decades, which means the argument is valid, not fallacious.

From Wikipedia’s entry for slippery slope: “The fallacious sense of "slippery slope" is often used synonymously with continuum fallacy, in that it ignores the possibility of middle ground”

This isn’t a middle ground. Every year the supposed middle ground shifts toward less and less privacy. Notice the slippery slope? The very existence of this proposal means that the supposed middle ground has just slipped further.

Isn’t that obvious?


You’re ignoring the sentence after I mention the slippery slope, where I say:

> These are clearly real concerns and even Tim Cook admits that if you build a back door, bad people will use it.

I’m not the one ignoring the middle ground.


[flagged]


Can you describe the middle ground that I have ignored?

What disingenuous argument have I made?

The only thing I have argued for is for hackers to attempt technical solutions that are more to their liking than Apple’s, because arguments are not preventing the slide.


I am saying that you are falsely promoting a slippery slope as middle ground.

Basically the argument I hear from you is “If you build a back door, then people will use it. So let’s build it anyway because it is a middle ground.” The problem I have with it is the “let’s build it anyway”.

That seems as clear as day. Why do I have to keep repeating myself? Don’t be an apologist.


> If you build a back door, then people will use it. So let’s build it anyway because it is a middle ground.

This looks like a completely made-up position that has nothing to do with anything I have said. If you can find a comment where I am advocating building back doors, I invite you to quote it.

> That seems as clear as day. Why do I have to keep repeating myself?

If it was clear you’d be able to support it with a quote. I’m pretty sure you can’t.

> Don’t be an apologist.

It doesn’t seem like you have been following my argument, so it’s unclear why you’d stoop to a personal attack.


Sure, I am quite willing to hang you with your own words.

zepto: “As far as I can see: 1. This is a serious attempt to build a privacy preserving solution to child pornography.”

“ > The more sophisticated child molesters out there will find out about what Apple is doing and quickly avoid it. <- this is you quoting me

I think this is exactly what Apple wants to be the result of their iMessage scanning. They are not in this to make arrests. They just want parents to feel safe letting children use their products. Driving predators to use different tools is fine with them.”

So according to your very own words, this ISN’T a serious attempt to build a privacy supporting solution to child pornography. First, it isn’t a solution because as you stated, it won’t actually catch anyone. Second, it isn’t serious because it was never intended to catch anyone.

“2. The complaints are all slippery slope arguments that governments will force Apple to abuse the mechanism. These are clearly real concerns and even Tim Cook admits that if you build a back door, bad people will use it.”

So according to your words these are not slippery slope arguments (the invalid argument sense) since, as you state, if you build a back door, bad people will use it. Don’t subtly use negative connotations to try to advance your argument.

Next you disingenuously frame the problem as a conflict between privacy and child pornography. That is an unsupported dichotomy.

“ However: Child pornography and the related abuse is widely thought of as a massive problem, that is facilitated by encrypted communication and digital photography. People do care about this issue. ‘Think of the children’ is a great pretext for increasing surveillance, because it isn’t an irrational fear.”

Lastly you call for better solutions for a “solution” that actually isn’t a solution.

“So: where are the proposals for a better solution?”

“ Apple’s solution is the best on offer. ”

Another unsupported dichotomy and a false assignment of responsibility.

If this solution is bad, then toss it out. You don’t need another proposal in its place. You don’t need to deploy this backdoor in the meantime.

It is NOT our responsibility to do the FBI’s job. It is THEIR responsibility to come up with better proposals.

If you do actually want a solution, my recommendation is to concentrate on real harm like child molestation and child trafficking. Trace how children have been trafficked historically. See how you can shut that down.

I feel dirty analyzing all the dirty tricks that you employed. Are you a politician or do you work for one? Work on policy?


> Sure, I am quite willing to hang you with your own words.

That isn’t what you’ve done.

> “As far as I can see: 1. This is a serious attempt to build a privacy preserving solution to child pornography.”

> - False as you even argued

I don’t argue that.

>> They are not in this to make arrests. They just want parents to feel safe letting children use their products. Driving predators to use different tools is fine with them.”

> So according to your very own words,

Erm..

> this ISN’T a serious attempt to build a privacy supporting solution to child pornography.

These are your words, not what you quoted of mine.

It’s absolutely a solution to the problem of child porn on their platform. They care about making it safe for their users. Who is expecting Apple to solve the problem beyond that?

> It is at best something to keep the FBI at bay

These are your words, not something I have said.

> even though as you say it also introduces a back door.

Where do I say it introduces a back door?

> “2. The complaints are all slippery slope arguments that governments will force Apple to abuse the mechanism. These are clearly real concerns and even Tim Cook admits that if you build a back door, bad people will use it.”

> False, these are not slippery slope arguments

It is your opinion that they are not slippery slope arguments. I think they are, and as you have quoted I think they are reasonable. Slippery slope arguments are fallacies in the sense that the conclusions don’t logically follow, but that doesn’t mean that they are always wrong.

You haven’t hung me with anything - you’ve just voiced some misrepresentations of your own interleaved with quotes of me.


Resplendent. I won’t even add my own words. I will let you speak for yourself.

“‘As far as I can see: 1. This is a serious attempt to build a privacy preserving solution to child pornography.’ I don’t argue that.”

“Apple’s solution is the best on offer.”

“I think this is exactly what Apple wants to be the result of their iMessage scanning. They are not in this to make arrests. They just want parents to feel safe letting children use their products. Driving predators to use different tools is fine with them.”

“2. The complaints are all slippery slope arguments that governments will force Apple to abuse the mechanism. These are clearly real concerns and even Tim Cook admits that if you build a back door, bad people will use it.”

“So: where are the proposals for a better solution?”

Let the viewer decide. Are you going to argue that I have misrepresented your words? Feel free to argue with yourself.


Zepto complained that it was out of context. Here are the entire comments.

“ As far as I can see: 1. This is a serious attempt to build a privacy preserving solution to child pornography. 2. The complaints are all slippery slope arguments that governments will force Apple to abuse the mechanism. These are clearly real concerns and even Tim Cook admits that if you build a back door, bad people will use it. However: Child pornography and the related abuse is widely thought of as a massive problem, that is facilitated by encrypted communication and digital photography. People do care about this issue. ‘Think of the children’ is a great pretext for increasing surveillance, because it isn’t an irrational fear. So: where are the proposals for a better solution? I see here people who themselves are afraid of the real consequences of government/corporate surveillance, and whose fear prevents them from empathizing with the people who are afraid of the equally real consequences of organized child sexual exploitation. ‘My fear is more important than your fear’, is the root of ordinary political polarization. What would be a hacker alternative would be to come up with a technical solution that solves for both fears at the same time. This is what Apple has attempted, but the wisdom here is that they have failed. Can anyone propose anything better, or are we stuck with just politics as usual? Edit: added ‘widely thought of as’ to make it clear that I am referring to a widely held position, not that I am arguing for it.”

> The more sophisticated child molesters out there will find out about what Apple is doing and quickly avoid it. I think this is exactly what Apple wants to be the result of their iMessage scanning. They are not in this to make arrests. They just want parents to feel safe letting children use their products. Driving predators to use different tools is fine with them. As far as the FBI goes, this is presumably not their preference, but it’s still good for them if it makes predation a little harder.


Still out of context - they are replies to things you wrote. You have left out your side of the conversation for some reason. How does that help?

Did you know they can still be seen exactly as I wrote them in the thread, where anyone can make sense of them, and see what I was replying to?

Remind us what this copy and paste behavior is meant to prove?


Also in contrast to zepto, here are words from Edward Snowden: “No matter how well-intentioned, @Apple is rolling out mass surveillance to the entire world with this. Make no mistake: if they can scan for kiddie porn today, they can scan for anything tomorrow.

They turned a trillion dollars of devices into iNarcs—without asking.”


So you’ve just given up on responding to my points and are just quoting someone else who you agree with now?

I am sure this will appeal to people who already agree with you.


It looks like you’ve copied and pasted some quotes from my comments into a jumble.

There is nothing to ‘hang’ anyone with. Just a series of quotes taken out of context which even then don’t look particularly nefarious.

Remind us again what you’re trying to prove with this?


> Can anyone propose anything better

Widespread education about how child abuse usually works in the real world outside of movies, public examination of what tools we have available and whether they're being used effectively, very public stats about how prevalent the problem is and what direction it's moving in.

Better media that more accurately reflects how these problems play out for real people and that doesn't use them as a cheap gimmick to prey on people's fears to raise narrative stakes.

Better education about the benefits of encryption and the risks of abuse, better education about how back doors play out in the real world and how they are abused both by governments and by individual abusers. Better education about the fail-rate of these systems.

> Child pornography and the related abuse is widely thought of as a massive problem, that is facilitated by encrypted communication and digital photography.

Most people don't have a basis for this instinct, they believe it because they were taught it, because it's what the movies tell them, because it fits their cultural perception of technology, because it's what the CIA and FBI tell them. Unless they're basing their fear on something other than emotion and instinct, there is no technical or political solution that will reduce their fear. It's all useless. Only education and cultural shifts will make them less afraid.

If you survey the US, over 50% of adults will tell you that crime today is at the same level or worse than it was in the 1950s, an absurd position that has no basis in reality. So what, should we form a police state? At some point, caring about the real problems we have in the real world means learning to tell people that being scared of technology is not a good enough justification on its own to ban it.

Nothing else will work. People's irrational fears by and large will not be alleviated by any technical solutions. They won't even be alleviated by what Apple is doing, parents who have an irrational fear of privacy are not going to have that fear lessened because Apple introduced this scanning feature. They will still be afraid.


Yes. “We must do something”, “It’s for the children”, “Even if it saves just one person”. No one has ever used this triple fallacy before.

How big do you really think the market for kiddie porn is that there are people storing it plainly on their iPhones and are “safe” because they aren’t uploading it to iCloud?

This is bullshit through and through.

The best case is that this is the step Apple needs in order to get E2E iCloud storage, to prove they’re doing enough while maintaining privacy. The worst case is that if there is potential for the list and reporting system to be abused, it will be.

There seems to be no scenario for the best case to exist without the worst case.


Your position is to simply deny that child porn or predation is a serious problem.

I outlined that this was one of the sides in the debate, i.e. you take position 2 and have no empathy for position 1.

The problem is that this is a political trap. You can’t win position 2 by painting technologists as uncaring or dismissive.

‘Think of the children’ works for a reason.


I don’t believe people sharing child porn on iPhones is a serious problem, no. Laptops, desktops, sure.

My position is that this system has far more likely harm than potential good. It’s not worth it.

It has nothing to do with not recognizing another side.


Do you believe that sexual predators sharing images with children on iPhones is a problem?

> It has nothing to do with not recognizing another side.

It seems like you are simply stating that the other side’s priorities are wrong.

That seems like a reasonable belief that won’t help anyone with the problem of creeping surveillance.


Do you believe you can stop people from sharing images with children (images pre-determined to be exploitative and illegal by a central authority that has already logged and recorded their hashes) by deploying a universal scanning system to all phones, when phones aren't a primary tool for trafficking illegal content?

> It seems like you are simply stating that the other side’s priorities are wrong.

This is a juvenile attempt at a nobility argument. That you can do anything as long as the goal is noble. Just as I wrote first, anything if it’s “for the children”. This is how all sorts of abuses are carried out under the four horsemen.


> when phones aren't a primary tool for trafficking illegal content?

Are they not? This is just an assumption you have made. It doesn’t matter what I think about it.

I asked:

> Do you believe that sexual predators sharing images with children on iPhones is a problem?

You haven’t answered.

>> It seems like you are simply stating that the other side’s priorities are wrong.

> This is a juvenile attempt at a nobility argument.

No it isn’t. It is an honest attempt to characterize your approach. It seems clear that you think the other side’s priorities are wrong.

That’s fine. I have to assume everyone thinks that. The point is that it doesn’t matter that you think they are wrong. You have already lost. The things you don’t want are already happening.

My argument is that since that debate is lost, any attempt to restore privacy must accept that other people’s priorities are different. Simply trying to get the other side to change priorities when you are already losing doesn’t seem like a good approach.


> ‘Think of the children’ works for a reason.

"Think of the children" will always work, no matter what the context is, no matter what the stats are, and no matter what we do. That does not mean that we should not care about the children, and it does not mean that we shouldn't care about blocking CSAM. We should care about these issues purely because we care about protecting children. If there are ways for us to reduce the problem without breaking infrastructure or taking away freedoms, we should take those steps. Similarly, we should also think about the children by protecting them from having their sexual/gender identities outed against their wishes, and by guaranteeing they grow up in a society that values privacy and freedom where they don't need to constantly feel like they're being watched.

But while those moral concerns remain, the evergreen effectiveness of "think of the children" also means that compromising on this issue is not a political strategy. It's nothing, it will not ease up on any pressure on technologists, it will change nothing about the political debates that are currently happening. Because it hasn't: we've been having the same debates about encryption since encryption was invented, and I would challenge you to point at any advancement or compromise from encryption advocates as having lessened those debates or having appeased encryption critics.

Your mistake here is assuming that anything that technologists can build will ever change those people's minds or make them ease up on calls to ban encryption. It won't.

Reducing the real-world occurrences for irrational fears doesn't make those fears go away. If we reduce shark attacks on a beach by 90%, that won't make people with a phobia less frightened at the beach, because their fear is not based on real risk analysis or statistics or practical tradeoffs. Their fear is real, but it's also irrational. They're scared because they see the deep ocean and because Jaws traumatized them, and you can't fix that irrational fear by validating it.

So in the real world we know that the majority of child abuse comes from people that children already know. We know the risks of outing minors to parents if they're on an LGBTQ+ spectrum. We know the broader privacy risks. We know that abusers (particularly close abusers) often try to hijack systems to monitor and spy on their victims. We would also in general like to see more stats about how serious the problem of CSAM actually is, and we'd like to know whether or not our existing tools are being used effectively so we can balance the potential benefits and risks of each proposal against each other.

If somebody's not willing to engage with those points, then what makes you think that compromising on any other front will change what's going on in their head? You're saying it yourself, these people aren't motivated by statistics about abuse, they're frightened of the idea of abuse. They have an image in their head of predators using encryption, and that image is never going to go away no matter what the real-world stats do and no matter what solutions we propose.

The central fear that encryption critics have is a fear of private communication. How can technologists compromise to address that fear? It doesn't matter what solutions we come up with or what the rate of CSAM drops to, those people are still going to be scared of the idea of privacy itself.

Nobody in any political sphere has ever responded to "think of the children" with "we already thought of them enough." So the idea that compromising now will change anything about how that line is used in the future -- it just seems naive to me. Really, the problem here can't be solved by either technology or policy. It's cultural. As long as people are frightened of the idea of privacy and encryption, the problem will remain.


> So in the real world we know that the majority of child abuse comes from people that children already know.

Historically this has been regarded as true, but according to the FBI, online predation is a growing problem.

https://www.fbi.gov/news/stories/child-predators

> Your mistake here is assuming that anything that technologists can build will ever change those people's minds or make them ease up on calls to ban encryption. It won't.

What makes you think I think that? You have misrepresented me here (effectively straw-manning), but I will assume an honest mistake.

You are right that there are people who will always seek to ban or undermine encryption no matter what, and who use ‘think of the children’ as an excuse regardless of the actual threat. ‘Those people’ as you put it, by definition will never have their minds changed by technologists. Indeed there is no point in technologists trying to do that.

However I don’t think that group includes Apple, nor does it include most of Apple’s customers. Apple’s customers do include many people who are worried about sexual predators reaching their children via their phones though. These people are not ideologues or anti-encryption fanatics.

Arguing that concerns about children are overblown or being exploited for nefarious means may be ‘true’, but it does nothing to provide an alternative that Apple could use, nor does it do anything to assuage the legitimate fears of Apple’s customers.

Perhaps you believe that there is no way to build a more privacy preserving solution than the one Apple has.

I would simply point out in that case, that the strategy of arguing against ‘think of the children’, has already lost, and commiserate with you.

I’m not convinced that there is no better solution. Betting against technologists to solve problems usually seems like a bad bet, but even if you don’t think it’s likely, it seems irrational not to hedge, because the outcome of solving the problem would have such a high upside.

It’s worth pointing out that Public Key cryptography is a solution to a problem that at one time seemed insoluble to many.
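To see why that example is apt, here is a minimal, toy-sized Diffie-Hellman exchange in Python. The once-'insoluble' problem was two parties agreeing on a secret over a channel an eavesdropper can read in full, with no prior shared key. The parameters below are textbook values for illustration only; real deployments use 2048-bit groups or elliptic curves.

    import secrets

    P, G = 23, 5  # toy textbook parameters, nowhere near secure at this size

    a = secrets.randbelow(P - 2) + 1   # Alice's private exponent (never sent)
    b = secrets.randbelow(P - 2) + 1   # Bob's private exponent (never sent)
    A = pow(G, a, P)                   # Alice transmits this in the clear
    B = pow(G, b, P)                   # Bob transmits this in the clear

    # Each side combines its own secret with the other's public value and
    # arrives at the same shared key, which itself was never transmitted.
    assert pow(B, a, P) == pow(A, b, P)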


> Arguing that concerns about children are overblown or being exploited for nefarious means may be ‘true’, but it does nothing to provide an alternative that Apple could use

- If the stats don't justify their fears

- And I come up with a technological solution that will make the stats even lower

- Their fears will not be reduced

- Because their fears are not based on the stats

----

> Apple’s customers do include many people who are worried about sexual predators reaching their children via their phones though

Are they worried about this because of a rational fear based on real-world data? If so, then I want to talk to them about that data and I want to see what basis their fears have. I'm totally willing to try and come up with solutions that reduce the real world problem as long as we're all considering the benefits and tradeoffs of each approach. We definitely should try to reduce the problem of CSAM even further.

But if they're not basing their fear on data, then I can't help them using technology and I can't have that conversation with them, because their fear isn't based on the real world: it's based on either their cultural upbringing, or their preconceptions about technology, or what media they consume, or their past traumas, or whatever phobias that might be causing that fear.

Their fear is real, but it can not be solved by any technological invention or policy change, including Apple's current system. Because you're telling me that they're scared regardless of what the reality of the situation is, you're telling me they're scared regardless of what the stats are.

That problem can't be solved with technology, it can only be solved with education, or emotional support, or cultural norms. If they're scared right now without knowing anything about how bad the problem actually is, then attacking the problem itself will do nothing to help them -- because that's not the source of their fear.


> Their fear is real, but it can not be solved by any technological invention or policy change, including Apple's current system. Because you're telling me that they're scared regardless of what the reality of the situation is, you're telling me they're scared regardless of what the stats are.

Not really.

I’m agreeing that parents will be afraid for their children regardless of the stats, and are unlikely to believe anyone who claimed they shouldn’t be. The ‘stats’ as you put it won’t change this.

Not because the stats are wrong, but because they are insufficient, and in fact predation will likely continue in a different form even if we can show a particular form to not be very prevalent. The claim to have access to ‘the reality of the situation’ is not going to be accepted.

You won’t be able to solve the problem through education or emotional support because you can’t actually prove that the problem isn’t real.

You actually don’t know the size of the problem yourself, which is why you are not able to address it conclusively here.

What I am saying is that we need to accept that this is the environment, and if we want less invasive technical solutions to problems people think are real, and which you cannot prove are not, then we need to create them.


> What I am saying is that we need to accept that this is the environment, and if we want less invasive technical solutions to problems people think are real, and which you cannot prove are not, then we need to create them.

And what I'm saying is that this is a giant waste of time because if someone has a phobia about their kid getting abducted, that phobia will not go away just because Apple started scanning photos.

You want people to come up with a technical solution, but you don't even know how to define what a "solution" is. How will we measure that solution absent statistics? How will we know if it's working or not? Okay, Apple starts scanning photos. Are we done? Has that solved the problem?

We don't know if that's enough, because people's fears here aren't based on the real world, they're based on Hollywood abduction movies, and those movies are still going to get made after Apple starts scanning photos.

You are completely correct that the stats are insufficient to convince these people. But you're also completely wrong in assuming that there is some kind of escape hatch or technological miracle that anyone can pull off to make those fears go away, because in your own words: "parents will be afraid for their children regardless of the stats."

If Apple's policy reduces abuse by 90%, they'll still be afraid. If it reduces it by 10%, they'll still be afraid. There is no technological solution that will ease their fear, because it's not about the stats.

----

I'm open to being proven wrong and shown that predation is a serious problem that needs drastic intervention. I'm open to evidence that suggests that encryption is a big enough problem that we need to come up with a technological solution. I just want to see some actual evidence. People being scared of things is not evidence, that's not something we can have a productive conversation about.

If we're going to create a "solution", then we need to know what the problem is, what the weak points are, and what metrics we're using to figure out whether or not we're making progress.

If that's not on the table, then also in your words, we need to "accept that this is the environment" and stop trying to pretend that coming up with technical solutions will do anything to reduce calls to weaken encryption or insert back doors.


> But you're also completely wrong in assuming that there is some kind of escape hatch or technological miracle that anyone can pull off to make those fears go away,

I can’t be wrong about that since I’m not claiming that anywhere or assuming it.

> because in your own words: "parents will be afraid for their children regardless of the stats."

Given that I wrote this, why would you claim that I think otherwise?

> There is no technological solution that will ease their fear, because it's not about the stats.

Agreed, except that I go further and claim that the stats are not sufficient, so making it about the stats can’t solve the problem.

> People being scared of things is not evidence,

It’s evidence of fear. Fear is real, but it’s not a measure of severity or probability.

> that's not something we can have a productive conversation about.

I don’t see why we can’t take into account people’s fears.

> If we're going to create a "solution", then we need to know what the problem is, what the weak points are, and what metrics we're using to figure out whether or not we're making progress.

Yes. One of those metrics could be ‘in what ways does this compromise privacy’, and another could be ‘in what ways does this impede child abuse use cases’. I suspect Apple is trying to solve for those metrics.

Perhaps someone else can do better.

> If that's not on the table, then also in your words, we need to "accept that this is the environment"

This part is unclear.

> stop trying to pretend that coming up with technical solutions will do anything to reduce calls to weaken encryption or insert back doors.

It’s unclear why you would say anyone is pretending this, least of all me. I have wholeheartedly agreed with you that these calls are ‘evergreen’.

I want solutions to problems like the child abuse use cases, such that when calls to weaken encryption or insert back doors are made as they always will be, we don’t have to.


> except that I go further and claim that the stats are not sufficient, so making it about the stats can’t solve the problem.

Statistics are a reflection of reality. When you say that the stats don't matter, you are saying that the reality doesn't matter. Just that people are scared.

You need to go another step further than you are currently going, and realize that any technological "solution" will only be affecting the reality, and by extension will only be affecting the stats. And we both agree that the stats can't solve the problem.

It's not that making this about the stats will solve the problem. It won't. But neither will any technological change. You can not solve an irrational fear by making reality safer.

----

Let's say we abandon this fight and roll over and accept Apple moving forward with scanning. Do you honestly believe that even one parent is going to look at that and say, "okay, that's enough, I'm not scared of child predators anymore."? Can you truthfully tell me that you think the political landscape and the hostility towards encryption would change at all?

And if not, how can you float compromise as a political solution? What does a "solution" to an irrational fear even look like? How will we tell that the solution is working?

You say the stats don't matter; then we might as well give concerned parents fake "magic" bracelets and tell them that they make kids impossible to kidnap. Placebo bracelets won't reduce actual child abuse of course, but as you keep reiterating, actual child abuse numbers are not why these people are afraid. Heck, placebo bracelets might help reduce parents' fear more than Apple's system, since placebo bracelets would be a constantly visible reminder to the parents that they don't need to be afraid, and all of Apple's scanning happens invisibly behind the scenes where it's easy to forget.

----

> I want solutions to problems like the child abuse use cases, such that when calls to weaken encryption or insert back doors are made as they always will be, we don’t have to.

Out of curiosity, how will you prove to these people that your solutions are sufficient and that they work as substitutes for weakening encryption? How will you prove to these people that your solutions are enough?

Will you use stats? Appeal to logic?

You almost completely understand the entire situation right now, you just haven't connected the dots that all of your technological "solutions" are subject to the same problems as the current debate.


> Statistics are a reflection of reality.

No, they are the output of a process. Whether a process reflects ‘reality’ is dependent on the process and how people understand it. This is essential to science.

Even when statistics are the result of the best scientific processes available, they are typically narrow and reflect only a small portion of reality.

This is why they are insufficient.

> When you say that the stats don't matter,

I never said they don’t matter. I just said they were insufficient to convince people who are afraid.

> you are saying that the reality doesn't matter.

Since I’m not saying they don’t matter, this is irrelevant.

> It's not that making this about the stats will solve the problem. It won't. But neither will any technological change. You can not solve an irrational fear by making reality safer.

Can you find a place where this contradicts something I’ve said? I haven’t argued to the contrary anywhere. I don’t expect to get the fears to go away.

As to whether they are rational or not, some are, and some aren’t. We don’t know which are which because you don’t have the stats, so we have to accept that there is a mix.

> Will you use stats? Appeal to logic?

Probably a mix of both, maybe some demos, who knows. I won’t expect them to be sufficient to silence the people who are arguing in favor of weakening encryption, nor to make parents feel secure about their children being protected against predation forever.

> You almost completely understand the entire situation right now, you just haven't connected the dots that all of your technological "solutions" are subject to the same problems as the current debate.

Again you misrepresent me. Can you find a place where I argue that technological solutions are not subject to the same problems as the current debate?

I don’t think you can find such a place.

I have fully agreed that you can’t escape the vicissitudes of the current debate. Nonetheless, you can still produce better technological solutions. This isn’t about prevailing over unquantifiable fears and dark forces. It’s about making better technologies in their presence.


> This is essential to science.

Okay, fine. Are you claiming that people who are calling to ban encryption are doing so on a scientific basis?

Come on, be serious here. People call to ban encryption because it scares them, not because they have a model of the world based on real data or real science that they're using to reinforce that belief.

If they did, we could argue with them. But we can't, because they don't.

> Can you find a place where this contradicts something I’ve said?

Yes, see below:

> such that when calls to weaken encryption or insert back doors are made as they always will be, we don’t have to

I'm open to some kind of clarification that makes this comment make sense. How are your "solutions" going to make people less afraid? On what basis are you going to argue with these people that your solution is better than banning encryption?

Pretend that I'm a concerned parent right now. I want to ban encryption. What can you tell me now to convince me that any other solution will be better?


> This is essential to science.

>> Okay, fine. Are you claiming that people who are calling to ban encryption are doing so on a scientific basis?

No. Did I say something to that effect?

> Come on, be serious here. People call to ban encryption because it scares them, not because they have a model of the world based on real data or real science that they're using to reinforce that belief.

You say this as if you are arguing against something I have said. Why?

> If they did, we could argue with them. But we can't, because they don't.

We can still argue with them, just not with science.

> Can you find a place where this contradicts something I’ve said?

> Yes, see below:

You’ll need to explain what the contradiction is. You have said you don’t understand it, but you not understanding doesn’t make it a contradiction.

>> such that when calls to weaken encryption or insert back doors are made as they always will be, we don’t have to

> I'm open to some kind of clarification that makes this comment make sense.

It makes sense to have solutions that don’t weaken privacy. Wouldn’t you agree?

> How are your "solutions" going to make people less afraid?

They won’t.

> On what basis are you going to argue with these people that your solution is better than banning encryption?

Which people? The parents, the nefarious actors, Apple’s customers?

> Pretend that I'm a concerned parent right now. I want to ban encryption. What can you tell me now to convince me that any other solution will be better?

Of course not, because you are going to play the role of an irrational parent who cannot be convinced.

Neither of us disagree that such people exist. Indeed we both believe that they do.

Why does changing such a person’s mind matter?


> Neither of us disagree that such people exist. Indeed we both believe that they do.

> Why does changing such a person’s mind matter?

Okay, finally! I think I understand why we're disagreeing. Please tell me if I'm misunderstanding your views below.

> You’ll need to explain what the contradiction is.

I kept getting confused because you would agree with me right up to your conclusion, and then suddenly we'd both go in opposite directions. But here's why I think that's happening:

You agree with me that there are irrational actors that will not be convinced by any kind of reason or debate that their fears are irrational. You agree with me that those people will never stop calling to ban encryption, and that they will not be satisfied by any alternative you or I propose. But you also believe there's another category of people who are "semi-rational" about child abuse. They're scared of it, maybe not for any rational reason. But they would be willing to compromise, they would be willing to accept a "solution" that targeted some of their fears, and they might be convinced that an alternative to banning encryption is better.

Where we disagree is that I don't believe those people exist -- or at least if they do exist, I don't believe they are a large enough or engaged enough demographic to have any political clout, and I don't think it's worth trying to court them.

My belief is that by definition, a fear that is not based on any kind of rational basis is an irrational fear. I don't believe there is a separate category of people who are irrationally scared of child predators, but fully willing to listen to alternative solutions instead of banning encryption.

So when you and I both say that we can't convince the irrational people with alternative solutions, my immediate thought is, "okay, so the alternative solutions are useless." But of course you think the alternative solutions are a good idea, because you think those people will listen to your alternatives, and you think they'll sway the encryption debate if they're given an alternative. I don't believe those people exist, so the idea of trying to sway the encryption debate by appealing to them is nonsense to me.

In my mind, anyone who is rational enough to listen to your arguments about why an alternative to breaking encryption is a good idea, is also rational enough to just be taught why banning encryption is bad. So for people who are on the fence or uninformed, but who are not fundamentally irrationally afraid of encryption, I would much rather try gently reaching out to them using education and traditional advocacy techniques.

----

Maybe you're right and I'm wrong, and maybe there is a political group of "semi-rational" people who are

A) scared about child abuse

B) unwilling to be educated about child abuse or to back up their beliefs

C) but willing to consider alternatives to breaking encryption and compromising devices.

If that group does exist, then yeah, I get where you're coming from. BUT personally, I believe the history of encryption/privacy/freedom debates on the Internet backs up my view.

Let's start with SESTA/FOSTA:

First, Backpage did work with the FBI, to the point that the FBI even commented that Backpage was going beyond any legal requirement to try and help identify child traffickers and victims. Second, both sex worker advocates and sex workers themselves openly argued that not only would SESTA/FOSTA be problematic for freedom on the Internet, but that the bills would also make trafficking worse and make their jobs even more dangerous.

Did Backpage's 'compromise' sway anyone? Was there a group of semi-reasonable people who opposed sites like Backpage but were willing to listen to arguments that the bills would actively make sex trafficking worse? No, those people never showed up. The bills passed with broad bipartisan support. Later, several Senators called to reexamine the bills not because alternatives were proposed to them, but because they put in the work to educate themselves about the stats, and realized the bills were harmful.

Okay, now let's look at the San Bernardino case with Apple. Apple gave the FBI access to the suspect's iCloud account, literally everything they asked for except access to decrypt the phone itself. Advocates argued that the phone was unlikely to aid in the investigation, and also suggested using an exploit to get into the phone, rather than requiring Apple to break encryption. Note that in this case the alternative solution worked, the FBI was able to get into the phone using an exploit rather than by compelling Apple to break encryption. The best case scenario.

Did any of that help? Was there a group of semi-reasonable people who were willing to listen to the alternative solution? Did the debate cool because of it? No, it changed nothing about the FBI's demands or about the political debate. What did help was Apple very publicly and forcefully telling the FBI that any demand at all to force them to install any code for any reason would be a violation of the 1st Amendment. So minus another point from compromise as an effective political strategy in encryption debates, and plus one point to obstinance.

Okay, now let's jump back to early debates about encryption: the clipper chip. Was that solved by presenting the government and concerned citizens with an alternative that would better solve the problem? No, it wasn't -- even though there were plenty of people who argued at the time for encryption experts to work with the government instead of against it. Instead the clipper chip problem was solved both by encryption experts breaking the chip so publicly and thoroughly that it destroyed any credibility the government had in claiming it was secure, and by the wide dissemination of strong encryption techniques that made the government's demands impossible, over the objections of people who called for compromise or understanding of the government's position.

----

I do not see any strong evidence for a group of people who can't be educated about encryption/abuse, but who can be convinced to support alternative strategies to reduce child abuse. If that group does exist, it does a very good job of hiding, and a very bad job of intervening during policy debates.

I do think that people exist who are skeptical about encryption but who are not so irrational that they would fall into our category of "impossible to convince." However, I believe they can be educated, and that it is better to try and educate them than it is to reinforce their fears.

Because of that, I see no political value in trying to come up with alternative solutions to assuage people's fears. I think those people should either be educated, or ignored.

It is possible I'm wrong, and maybe you could come up with an alternative solution that reduced CSAM without violating human rights to privacy and communication. If so, I would happily support it, I have no reason to oppose a solution that reduces CSAM if it doesn't have negative effects for the Internet and free culture overall, a solution like that would be great. However, I very much doubt that you can come up with a solution like that, and if you can, I very much doubt that outside of technical communities anyone will be very interested in what you propose. I personally think you would be very disappointed by how few people arguing for weakening encryption right now are actually interested in any of the alternative solutions you can come up with.

And it's my opinion, based on the history of privacy/encryption, that traditional advocacy and education techniques will be more politically effective than what you propose.


> My belief is that by definition, a fear that is not based on any kind of rational basis is an irrational fear. I don't believe there is a separate category of people who are irrationally scared of child predators, but fully willing to listen to alternative solutions instead of banning encryption.

We disagree here, indeed. My view is not that there are ‘semi-rational’ people. My view is that there are hard to quantify risks that it is rational to have some fear about and see as problems to be solved. I think this describes most of us, most of the time.

The idea that there is a clear distinction between ‘rationally’ understanding a complex social problem through science, and being ‘irrational and unconvincable’ seems inaccurate to me. Both of these positions seem equally extreme, and neither qualify as reasonable in my view, nor are they how most people act.

I think there are a lot of people who are reasonably afraid of things they don’t fully understand and which nobody fully understands. These people reasonably want solutions, but don’t expect them to be perfect or to assuage everyone’s fear.

These are the people who can easily be persuaded to sacrifice a little privacy if it means making children safer from horrific crimes.

They are also people who would prefer a solution that didn’t sacrifice so much if it was an option.

My argument is that the best way to make things better is to make better options available. Irrationally paranoid parents, and irrationally paranoid governments exist, but are the minority.

Most people just want reasonable solutions and aren’t going to be persuaded by either extreme. If you make an argument about creeping authoritarianism they’ll say ‘child porn is a real problem, and that risk is distant’.

If you offer them a more privacy preserving solution to choose as well as a less privacy preserving option, they’ll likely choose the more privacy preserving option.

Apple is offering a much more privacy preserving option than just disabling encryption. People will accept it because it seems like a reasonable trade-off in the absence of anything better.

If we think it’s a bad trade-off that is taking us in the direction of worse and worse privacy compromises, we aren’t likely to be able to persuade people to ignore the real trade-offs, but we stand a chance of getting them to accept a better solution to the same problem.

If we don’t offer an alternative solution we aren’t offering them anything at all.

> I see no political value in trying to come up with alternative solutions to assuage people's fears.

Why do you mention this again? Nobody is arguing for a solution designed to assuage people’s fears.

> I do not see any strong evidence for a group of people who can't be educated about encryption/abuse, but who can be convinced to support alternative strategies to reduce child abuse.

Why do you assume education about encryption/abuse is relevant? Even people who deeply understand the issue still have to choose between the options that are available and practical.

> If that group does exist, it does a very good job of hiding,

It’s not a meaningful group definition.

> and a very bad job of intervening during policy debates.

Almost nobody intervenes during policy debates unless they have a strong position. Most people just choose the best solution from what is available and get on with their lives, which are not centered on these issues.

> maybe you could come up with an alternative solution that reduced CSAM without violating human rights to privacy and communication. If so, I would happily support it, I have no reason to oppose a solution that reduces CSAM if it doesn't have negative effects for the Internet and free culture overall, a solution like that would be great.

Indeed. Isn’t that what we really want here? The only reason people are engaged in all this ideological battle is that they assume there isn’t a technical solution.

> However, I very much doubt that you can come up with a solution like that.

You could have just said you are someone who doesn’t believe a technical solution is possible.

> I personally think you would be very disappointed by how few people arguing for weakening encryption right now are actually interested in any of the alternative solutions you can come up with.

Why would you think I would be disappointed? We have already discussed how I don’t expect those people to change their minds.

Fortunately that is irrelevant to whether a solution would help, since it is not aimed at them.


> My view is that there are hard to quantify risks that it is rational to have some fear about and see as problems to be solved.

Heavily agreed. But those are not irrational fears.

They become irrational fears when learning more about the risks and learning more about the benefits and downsides of different mitigation techniques doesn't change anything about those fears one way or another.

We all form beliefs based on incomplete information. That's not irrational. It is irrational for someone to refuse to look at or engage with new information. If someone is scared of the potential for encryption to facilitate CSAM because they're working with incomplete information, that's not irrational.

If someone is scared of encryption because they have incomplete information, and they refuse to engage with the issue or to learn more about the benefits of encryption, or the risks of banning it, or what the stats on child predators actually are -- at that point, it's an irrational belief. What makes them irrational is the fact that they are no longer being adjusted based on new information.

A rational person is not someone who knows everything. A rational person is someone who is willing to learn about things when given the opportunity.

> Why do you mention this again? Nobody is arguing for a solution designed to assuage people’s fears.

I guess I don't understand what you are arguing for then.

Let's look at your "reasonable people who are reasonably afraid" camp. We'll consider that these people have doubts about encryption, but don't hate it. They are scared of the potential for abusers to run rampant, but are having trouble figuring out what that looks like or what the weak points are in a complicated system. They are confused, but not bad-faith, and they have fears about something that is legitimately horrific. We will say that these people are not irrational, they recognize a real problem and earnestly want to do something about it.

There are 2 things we can do with these types of people:

1) We can educate them about the dangers of banning encryption and encourage them to research more about the problem. We can remain open to other proposals that they have, while making it clear that each proposal's social benefits have to be weighed against their social costs.

or

2) We can offer them some kind of compromise solution that may or may not actually address their problem, but will make them feel like it does, and which will in theory make them less likely to try and ban encryption.

You seem to be suggesting that we try #2? And this apparently isn't designed to assuage their fears? But I'm not sure what it does then. Presumably the reason they'll accept your proposal is because it addresses the fears they have.

My preference is to try #1. I believe that if someone is actually in the camp you describe, if they have reasonable fears but they're looking at a complex social problem, openly talking to those people about the complex downsides of banning encryption is OK. They'll listen. They might come up with other ideas, they might bring up their own alternative solutions. All of that is fine, none of us are against reducing CSAM, we just want people to understand the risks behind the systems being proposed.

But importantly, if someone is genuinely reasonable, if they aren't irrational and they're just trying to grapple with a complex system -- then talking about the downsides should be enough, because those people are reasonable and once they understand the downsides then they'll understand why weakening encryption isn't a feasible plan. From there we can look at alternatives, but the alternatives are not a bargaining chip. Even if there were no alternatives, that wouldn't change anything about the downsides of making software more vulnerable. First, people must understand why a proposed solution won't work, and then we can propose alternatives.

To me, if someone comes to me and says, "I'm not interested in hearing about the downsides of banning encryption, come up with a solution or we'll ban it anyway" -- I don't think that person is reasonable, I don't think they're acting rationally, and certainly I'm not interested in working with that person or coming up with solutions with that person.

> If we don’t offer an alternative solution we aren’t offering them anything at all.

Where I fall on this is that I am totally willing to look for alternative solutions; but encryption, device ownership, privacy, and secure software -- these are not prizes to be won, conditional on me finding a solution.

We can look for a solution together once we've taken those things off the table.

Because if someone comes to me asking to find a good solution, I want to know that they're coming in good faith, that they genuinely are looking for the best solution with the fewest downsides. If they're not, if they're using encryption as some kind of threat, then they're not really acting in good faith about honestly looking at the upsides and downsides. I have a hard time figuring out how I would describe that kind of a person as "reasonable".

> I personally think you would be very disappointed

> Why would you think this? Did I say anything anywhere about convincing people who are arguing for weakening encryption?

Let me be even more blunt. I think that you could come up with a brilliant solution today with zero downsides that reduced CSAM by 90%. And I think you would be praised if you did come up with that solution, and it would be great, and everyone including tech people like me would love you for it. And I also think it would change literally nothing about the current debates we're having. I think we would be in the exact same place, I think all of the people who are vaguely worried about CSAM and encryption (even the good faith people you mention above) would still be just as worried tomorrow. You could come up with the most innovative amazing reduction strategy for CSAM ever conceived, and it would not change any of those people's opinions on encryption.

I'm not just talking about the irrational people. It would not change the opinions of the reasonable people you're describing above. Because why would it? However good your solution is, if encryption is genuinely not worth preserving, then it would always be better to implement your solution and ban encryption. I don't say that derisively; if the benefits of banning encryption really did outweigh the downsides, then it would genuinely be good to get rid of encryption.

The only reason we don't get rid of encryption is because its benefits do heavily outweigh its downsides. Not because this is some kind of side in a debate, but because when you examine the issue rationally and reasonably, it turns out that weakening encryption is a really bad idea.

> My argument is that the best way to make things better is to make better options available.

This is another point where we differ then.

As far as I can tell, any reasonable person who is convinced that encryption is a net negative is always going to be interested in getting rid of encryption unless they understand what the downsides are. Any reasonable person who is on the fence about encryption is going to stay on the fence until they get more information. I don't see how proposing alternative solutions is going to change that.

So I believe that the only way these reasonable people you describe are going to change their minds is if they're properly educated about the downsides of making software vulnerable, if they're properly educated about the upsides of privacy, and if they're properly educated about the importance of device ownership.

And maybe I'm overly optimistic here, but I also do believe that reasonable people are willing to engage in good faith about their proposed solutions and to learn more about the world. I don't think that a reasonable person is going to clam up and get mad and stop engaging just because someone tells them that their idea to backdoor software has negative unintended side effects. I think that education works when offered to reasonable people.


> There are 2 things we can do with these types of people:

> 1) We can educate them about the dangers of banning encryption and encourage them to research more about the problem. We can remain open to other proposals that they have, while making it clear that each proposal's social benefits have to be weighed against their social costs.

> or

> 2) We can offer them some kind of compromise solution that may or may not actually address their problem, but will make them feel like it does, and which will in theory make them less likely to try and ban encryption.

Why are those the only two solutions? That seems like a false dichotomy.

Again, why would you think I’m suggesting #2?

Can I ask you straight up, are you trolling?

There is a pattern where you say “I think you are saying X” where X is unrelated to anything I have actually said. I ask “why do you think I think X”, and you don’t answer, but just move on to repeat the process.

I have been assuming there is good faith misunderstanding going on, but the fact that you keep not explaining where the misunderstandings have arisen from when asked is starting to make me question that.

Most of what you’ve written in this reply is frankly incoherent, or at least seems to be based on assumptions about my position that are neither valid nor obvious, so much so that it seems unconnected from our previous discussion.

For example this:

> To me, if someone comes to me and says, "I'm not interested in hearing about the downsides of banning encryption, come up with a solution or we'll ban it anyway" -- I don't think that person is reasonable, I don't think they're acting rationally, and certainly I'm not interested in working with that person or coming up with solutions with that person.

Just seems like a gibberish hypothetical that doesn’t have much to do with what we are talking about.

And this:

> You could come up with the most innovative amazing reduction strategy for CSAM ever conceived, and it would not change any of those people's opinions on encryption.

What does it even mean to ‘reduce CSAM’? Why do we care about changing people’s minds here about encryption?

Let’s take another part:

> Where I fall on this is that I am totally willing to look for alternative solutions; but encryption, device ownership, privacy, and secure software -- these are not prizes to be won, conditional on me finding a solution.

Ok, but those are all in fact fluid concepts whose status is changing as time goes by, and mostly not in the directions it sounds like you would prefer. Nobody is thinking of them as prizes. The status quo is that they are in jeopardy.

> We can look for a solution together once we've taken those things off the table.

Ok, but this just means you aren’t willing to participate with people who don’t agree to a set of terms, which in fact don’t represent anything anyone has so far developed.

That’s a comment about your personal boundaries, not about whether a better solution than what Apple is proposing could be built.

That’s fine by me; in fact, I’d be happy if a solution did incorporate all of the concepts you require. I agree we need that. I argue for it quite often.

I don’t think such a thing has been built yet, and if it were built, I suspect parents would want some mechanism to control whether it could become a vector of child exploitation before they let their kids use it.

So what would that solution look like?


> Why are those the only two solutions? That seems like a false dichotomy.

It's not? It's a real dichotomy. What other solution could there be?

I mean, OK, I guess there are other solutions we could try like ignoring them or attacking them or putting them in prison or some garbage, but to me those kinds of solutions are off the table. So we either figure out some way to satisfy them, or convince them that we're right. That's not a false dichotomy, those are the only 2 options.

I assume you're suggesting #2 because you're sure as heck not suggesting #1, and I can't figure out what else you could be suggesting.

----

> why do you think I think X

Frankly, if this isn't what you think, then I don't understand what you're thinking.

You keep on saying that we need to offer solutions, we can't just criticize Apple's proposal, we have to offer an alternative if we're going to criticize. But why?

- I thought the point was to get rid of people's fears: no, you're saying that's not what you mean.

- I thought the point was to compromise with critics: no, you're saying that's not what you mean.

- I thought the point was to try and get people to stop attacking encryption: no, you're saying that's not what you mean.

- Heck, I thought the point was to reduce CSAM, and you're telling me now that even that's not what you mean either?

> What does it even mean to ‘reduce CSAM’? Why do we care about changing people’s minds here about encryption?

What? We're on the same thread, right? We're commenting under an article about Apple instituting policies to reduce CSAM, ie, to make it so there is less CSAM floating around in the wild. When you talk about a "solution", what problem are you even trying to solve? Because all of us here are talking about CSAM, that's what Apple's system is designed to detect.

I don't understand. How can you possibly not be talking about CSAM right now? That's literally what this entire controversy is about, that's the only reason this thread exists.

----

Honest to God, hand over my heart, I am not trolling you right now. I understand that this is frustrating to you, but my experience throughout this conversation has been:

- You say something

- I try to interpret and build on it

- You tell me that's not what you meant and ask me why I thought that

- Okay, I try to reinterpret and explain

- The cycle repeats

- The only information I can get out of you is that I apparently don't understand you. I'm not getting any clarification. You just tell me that I'm misunderstanding your position and then you move on.

What are you trying to accomplish by proposing "alternative" solutions to Apple's proposal? You seem to think this will help keep people from attacking encryption, but I'm wrong to say that it will help by reducing their fears, or by distracting them, or by teaching them, or by solving the problems that they think they have, or... anything.

You tell me that "if we think it’s a bad trade-off that is taking us in the direction of worse and worse privacy compromises, we aren’t likely to be able to persuade people to ignore the real trade-offs, but we stand a chance of getting them to accept a better solution to the same problem." But then you tell me that "encryption is not a prize" and the goal is not to convince them of anything, which to me completely contradicts the previous sentence.

If encryption isn't a prize, if "nobody is thinking of them as prizes", then why does it sound like you're telling me that preserving encryption is conditional on me coming up with some kind of alternative? If encryption isn't a prize, then great, let's take it off the table.

But then I'm told that taking encryption off the table means that "you aren’t willing to participate with people who don’t agree to a set of terms". So apparently encryption is on the table, and I am coming up with alternative solutions in order to convince people to attack something else? But that's not what you mean either, because you tell me that people will always attack encryption, so I don't even know.

You're jumping back and forth between positions that seem completely contradictory to me. I thought that you had a different view than me about how reasonable privacy-critics actually were, but apparently you also have different views than me about what the problem is that Apple is trying to solve, what privacy-critics even want in the first place, what the end goal of all of this public debate actually is. Maybe you even disagree with me about what privacy and human rights are, since "those are all in fact fluid concepts whose status is changing as time goes by".

So I need you to either lay out your views very plainly without any flowery language or expansion in a way that I can understand, or I need to stop having this conversation because I don't know what else I can say other than that I find your views incomprehensible. If you can't do that, then fine, we can mutually call each others' views gibberish and incoherent, and we can go off and do something more productive with our evenings. But I'll give this exactly one last try:

----

> Most of what you’ve written in this reply is frankly incoherent

Okay, plain language, no elaboration. Maybe this isn't what you're arguing about, maybe it is. I don't care. Here's my position:

A) it is desirable to reduce CSAM without violating privacy.

B) the downsides of violating privacy are greater than the upsides of reducing CSAM.

C) most of the people arguing in favor of violating privacy to stop CSAM are either arguing in bad faith or ignorance.

D) the ones that aren't should be gently educated about the downsides of breaking encryption and violating human rights.

E) the ones that refuse to be educated are never going to change their views.

F) compromising with them is a waste of time, and calls to "work with the critics" instead of educating them are a waste of time.

G) working with critics who refuse to be educated about the downsides of violating privacy will not help accomplish point A (it is desirable to reduce CSAM without violating privacy).

H) thus, we should refuse to engage with people about reducing CSAM unless they take encryption/privacy/human rights off of the table (on this point, you understood my views completely, people who view CSAM as a bigger deal than human rights shouldn't be engaged with)

I) a technical solution that reduces CSAM without violating privacy may or may not be possible. But it doesn't matter. Even if a technical solution that doesn't violate privacy is impossible, violating privacy is still off the table, because the downsides of removing people's privacy rights would still be larger than the upsides of removing CSAM.

Can you give me a straightforward, bullet-point list of what statements above you disagree with, if any?


>> 2) We can offer them some kind of compromise solution that may or may not actually address their problem, but will make them feel like it does, and which will in theory make them less likely to try and ban encryption.

>> Why are those the only two solutions? That seems like a false dichotomy.

> It's not? It's a real dichotomy. What other solution could there be?

3) Offer a better technical solution that is less of a compromise than what Apple is offering, or indeed is not a compromise at all.

> I mean, OK, I guess there are other solutions we could try like ignoring them or attacking them or putting them in prison or some garbage, but to me those kinds of solutions are off the table. So we either figure out some way to satisfy them, or convince them that we're right. That's not a false dichotomy, those are the only 2 options. I assume you're suggesting #2 because you're sure as heck not suggesting #1, and I can't figure out what else you could be suggesting.

I’m not suggesting #2 because #2 is a straw man.

----

>> why do you think I think X

> Frankly, if this isn't what you think, then I don't understand what you're thinking.

Ok - that seems like a straightforward response: you don’t understand. But I am clearly not saying the things you are attributing to me.

I have now repeatedly asked where I said anything that leads you to think these are my views. It’s rare that you answer. From my point of view, that means you aren’t actually responding to what I have written. You read what I write, don’t understand it, make something up that isn’t what I’ve said (or is even directly contradicted by what I’ve said), and then tell me that’s what I’m saying.

If this was a one time thing, it would be fine, but at this point it doesn’t seem to matter what I say - you’ll just respond as if I said something else, and you won’t explain why when asked. From here it looks like you are having a discussion with your own imagination, rather than with what I write.

Here’s an example:

>> You keep on saying that we need to offer solutions, we can't just criticize Apple's proposal,

Where do I ‘keep saying’ that we can’t just criticize Apple’s proposal? If that is something I have said more than once, you should be able to quote me. If not, then it isn’t actually something I keep saying; it’s only in your imagination that I am saying it.

> we have to offer an alternative if we're going to criticize. But why?

Another example of something you are imagining me to be saying, but that I am not.

> - I thought the point was to get rid of people's fears: no, you're saying that's not what you mean.

I have now said it is not what I mean, multiple times with explanation, and yet you keep saying it is. Why is that?

> - I thought the point was to compromise with critics:

Why do you think that? I have never said it. Again it’s something you are imagining. What is the text that made you imagine it? If we knew that, we could uncover where you haven’t understood.

> no, you're saying that's not what you mean.

Of course, because I didn’t say it.

> - I thought the point was to try and get people to stop attacking encryption:

Again I have never said this was the point, not only that I have said we can never do so.

But you have not explained why you thought this was the point.

> no, you're saying that's not what you mean.

> - Heck, I thought the point was to reduce CSAM, and you're telling me now that even that's not what you mean either?

In this case the misunderstanding is mine. I misunderstood ‘reduce CSAM’ as ‘reduce CSAM detection’, i.e. I read it as asking Apple to reduce their detection efforts.

> What does it even mean to ‘reduce CSAM’?

This is what I do when I don’t understand what someone has written - I ask them. You answered, and we have uncovered where I misunderstood.

If you answered my questions, we might have understood why you haven’t been understanding me.

> Why do we care about changing people’s minds here about encryption?

It seems to me that you have an agenda to change people’s minds about encryption. What isn’t clear is why you attribute that to me.

> What? We're on the same thread, right? We're commenting under an article about Apple instituting policies to reduce CSAM, ie, to make it so there is less CSAM floating around in the wild. When you talk about a "solution", what problem are you even trying to solve? Because all of us here are talking about CSAM, that's what Apple's system is designed to detect.

Agreed - like I say, I just misread the phrase.

> I don't understand. How can you possibly not be talking about CSAM right now? That's literally what this entire controversy is about, that's the only reason this thread exists.

Agreed - like I say, I just misread the phrase.

----

> Honest to God, hand over my heart, I am not trolling you right now.

The reason it looks like trolling is that when you say ‘you are saying X’, and X doesn’t appear to be supported by my words, X seems like a straw man. I have assumed this is not intentional, and I believe you, but by not answering the question ‘why would you think I think that?’ you created ambiguity about your intentions.

> I understand that this is frustrating to you,

It’s not so much ‘frustrating’ as not functional as a discussion. If you misunderstand me and don’t answer questions aimed at getting to the root of the misunderstanding, then you’ll likely just talk past me. I am just trying to evaluate whether an alternative is possible.

> but my experience throughout this conversation has been: (1) You say something, (2) I try to interpret and build on it, (3) You tell me that's not what you meant and ask me why I thought that, (4) Okay, I try to reinterpret and explain, (5) The cycle repeats

This seems close to a description of what I am seeing, but not quite. Let’s examine the steps:

1. You say something

2. I try to interpret and build on it

3. You tell me that's not what you meant and ask me why I thought that

4. Okay, I try to reinterpret and explain

5. The cycle repeats

In #3 you say ‘You tell me that's not what you meant and ask me why I thought that’. This isn’t quite true. I don’t ask ‘why you thought that’ in a vague way; I ask ‘what did I say that made you think that’. I admit there may be a few lapses, but most of the time I ask what I said that led to your understanding.

In #4 you said “I try to reinterpret and explain”. What you don’t do is answer the question - what is it I said that led to your understanding?

By not answering this question, we don’t get to the root cause of the misunderstanding.

> The only information I can get out of you is that I apparently don't understand you.

You don’t.

> I'm not getting any clarification. You just tell me that I'm misunderstanding your position and then you move on.

This is false. I ask what I said that led to the misunderstanding. I do not move on.

> What are you trying to accomplish by proposing "alternative" solutions to Apple's proposal?

> You seem to think this will help keep people from attacking encryption,

What have I said that makes you think that?

[there are a few paragraphs that I can’t respond to because they don’t make sense]

> But then I'm told that taking encryption off the table means that "you aren’t willing to participate with people who don’t agree to a set of terms".

Did I misunderstand you? Did you mean something else by ‘taking encryption off the table’?

> So apparently encryption is on the table, and I am coming up with alternative solutions in order to convince people to attack something else? But that's not what you mean either, because you tell me that people will always attack encryption, so I don't even know.

I thought you agreed that there are some people who will always attack encryption. I didn’t think it was just me ‘telling you that’. Did I misunderstand you - do you think you can get people to stop attacking encryption?

> You're jumping back and forth between positions that seem completely contradictory to me.

That’s possible, but I don’t think so. Can you quote where you think I have contradicted myself?

> I thought that you had a different view than me about how reasonable privacy-critics actually were, but apparently you also have different views than me about what the problem is that Apple is trying to solve, what privacy-critics even want in the first place, what the end goal of all of this public debate actually is. Maybe you even disagree with me about what privacy and human rights are, since "those are all in fact fluid concepts whose status is changing as time goes by".

This seems like sarcasm and bad faith. You are misrepresenting me. For example, I have never mentioned human rights.

Privacy on the other hand, is definitely a fluid concept.

What we consider it to mean has changed over time as both technology and society have developed.

> So I need you to either lay out your views very plainly without any flowery language or expansion in a way that I can understand,

What do you mean by flowery language?

> or I need to stop having this conversation because I don't know what else I can say other than that I find your views incomprehensible.

I know you do.

> If you can't do that, then fine, we can mutually call each others' views gibberish and incoherent,

Your views to the extent that I know them, don’t seem gibberish or incoherent. It’s when you incorporate interpretations of my views that don’t relate to what I have said, that what you write appears incoherent to me.

> and we can go off and do something more productive with our evenings. But I'll give this exactly one last try:

> ----

>> Most of what you’ve written in this reply is frankly incoherent

> Okay, plain language, no elaboration. Maybe this isn't what you're arguing about, maybe it is. I don't care. Here's my position:

> A) it is desirable to reduce CSAM without violating privacy.

> B) the downsides of violating privacy are greater than the upsides of reducing CSAM.

> C) most of the people arguing in favor of violating privacy to stop CSAM are either arguing in bad faith or ignorance.

> D) the ones that aren't should be gently educated about the downsides of breaking encryption and violating human rights.

> E) the ones that refuse to be educated are never going to change their views.

> F) compromising with them is a waste of time, and calls to "work with the critics" instead of educating them are a waste of time.

> G) working with critics who refuse to be educated about the downsides of violating privacy will not help accomplish point A (it is desirable to reduce CSAM without violating privacy).

> H) thus, we should refuse to engage with people about reducing CSAM unless they take encryption/privacy/human rights off of the table (on this point, you understood my views completely, people who view CSAM as a bigger deal than human rights shouldn't be engaged with)

> I) a technical solution that reduces CSAM without violating privacy may or may not be possible. But it doesn't matter. Even if a technical solution that doesn't violate privacy is impossible, violating privacy is still off the table, because the downsides of removing people's privacy rights would still be larger than the upsides of removing CSAM.

> Can you give me a straightforward, bullet-point list of what statements above you disagree with, if any?

Honestly, no. This just looks like a blunt attempt to win some argument of your own, with me playing a role that has nothing to do with the conversation so far. You are also asking me to do a lot of work to answer your questions when you have been unwilling to answer mine. That doesn’t seem like good faith.

Remember, you came to this subthread by replying to me. But you have consistently ignored clarifying questions.

Was it your goal all along to simply ignore what I have been saying and find a spot to make your own case? I am genuinely unsure.

How about we start somewhere simpler? When I ask ‘what did I say that made you think that’, can you explain why you rarely answer?


> When I ask ‘what did I say that made you think that’, can you explain why you rarely answer?

Okay, sure. When you ask me to try and justify why I think you hold your position, I interpret that as a distraction (hopefully a good faith one). I don't want to argue on a meta-level about why I got confused about your comments, I want to know what you believe. I'm frustrated that you keep trying to dig into "why are you confused" instead of just clarifying your position.

My feeling is we could have skipped this entire debate if you had sat down and made an extremely straightforward checklist of your main points, consisting of maybe 5-10 bullet points, each one to two sentences max. This is a thing I've done multiple times now about my beliefs/positions during this discussion. If we get mixed up about what the other person is saying, the best thing to do is not to dive into that, it's to take a step back and try to clarify from the start in extremely clear language.

You looked at the final checklist and said "this looks like just a blunt attempt to win some argument of your own". I looked at it as a charitable invitation to step back, write 10-20 sentences instead of 15 paragraphs, and to just cut through the noise and figure out where we disagree. If your checklist doesn't overlap with mine, fine. It's not bad for us to discover that we're arguing past each other. What's bad is if we spend X paragraphs getting frustrated about meta-arguments that have nothing to do with Apple.

I don't want to debate language or start cross indexing each other's comments, I want to debate ideas.

So when you tell me that I'm wrong about what you believe, I look over your statements and try to reinterpret, and I move on. Very rarely is my instinct to sit down and try to catalog a list of statements to try and prove to you that you do believe what I think, because I take it as a given that if you tell me that I misinterpreted you... I did.

So I accept it and move on.

----

Yes, we could get into a giant debate about "what makes you think I think that". That might go something like:

> You seem to think this will help keep people from attacking encryption,

> What have I said that makes you think that?

And I could reply by linking back to one of your previous comments:

> "Most people just want reasonable solutions and aren’t going to be persuaded by either extreme. If you make an argument about creeping authoritarianism they’ll say ‘child porn is a real problem, and that risk is distant’.

> If you offer them a more privacy preserving solution to choose as well as a less privacy preserving option, they’ll likely choose the more privacy preserving option.

> Apple is offering a much more privacy preserving option than just disabling encryption. People will accept it because it seems like a reasonable trade-off in the absence of anything better."

Which to me sounds quite a bit like: "offer a solution that doesn't target encryption, and then these people won't target encryption because 'most people just want reasonable solutions'".

----

But what's the point of the above conversation? I already know that you don't interpret those 3 paragraphs about a "privacy preserving option" as meaning "a proposal that will stop reasonable people from attacking encryption." Because you told me that's not what you believe.

So how weird and petty would I need to be to start arguing with you, "actually you did mean that, and I have proof!"? Is there any value to either of us in trying to trip each other up over "well, technically you said"? I'm not here trying to trap you, I want to understand you.

Honestly, the short answer to why I rarely reply back with quotes about "why I think you said that" is that I kind of interpreted "what makes you think I think that" as a vaguely rude attempt to derail the conversation and debate language instead of ideas, and I've been trying to graciously sidestep it and move on.

- I'm happy to debate privacy with someone

- I'm happy to listen to them so I can understand their views better

- I'm not happy to debate whether or not someone believes something. I think that's a giant meaningless waste of time.

I don't think that means you're operating in bad faith, but I can't think of anything I would rather do less than spend all day going back over all of your statements to cross-reference them so I can prove... what? That I misunderstood your actual position? I believe you, you don't need to prove to me that I misunderstood you! Let's just skip that part and move on to explaining what the actual position really is.

It doesn't matter "why I think you said what you said", it just matters that I understand you. So why get into that meaningless debate instead of just asking you to clarify or trying to reinterpret? I don't care about technicalities and I don't care about "winning" against you, and I interpret "justify why you thought I thought that" as a meaningless distraction that only has value in Internet points, not in getting me any closer to understanding what your views are.


> "Reducing the real-world occurrences for irrational fears doesn't make those fears go away." "You're saying it yourself, these people aren't motivated by statistics about abuse, they're frightened of the idea of abuse"

We could say the same thing the other way - people up in arms are not frightened by statistics of abuse of a surveillance system, but frightened of the idea of a company or government abusing it. This thread is full of people misrepresenting how it works, claims of slippery slopes straight to tyranny, a comparison to IBM and the Holocaust - all based on no real data, and not even on the understanding gained from simply reading the press release. This thread is not full of statistics and data about existing content filtering and surveillance systems and how often they are actually being abused. For example, Skype has intercepted your comms since Microsoft bought it and routed all traffic through their servers, and Chrome, Firefox, and Edge block malware websites via Safe Browsing and SmartScreen - what are the stats on those systems being abused to block politically inconvenient memes or similar? Nothing Apple could do would in any way reassure these people because the fears are not based on information. For example, your comment:

> "We know the risks of outing minors to parents if they're on an LGBTQ+ spectrum."

Minors will see the prompt "if you do this, your parents will find out" and can choose not to, and the parents don't find out. There's an example of the message in the Apple announcement[1]. This comment from you is reacting to a fear that is disconnected from the facts of what's been announced, where that fear is guarded against as part of the design.
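To make that flow concrete, here is a minimal sketch of the consent-gated design as described. The function names are invented for illustration and are not Apple's actual API:

    # Hypothetical sketch of the consent-gated flow described above;
    # these names are invented, not Apple's actual API.
    def handle_flagged_image(child_confirms_viewing: bool) -> str:
        # The child is warned up front about the consequence of viewing.
        if not child_confirms_viewing:
            return "image stays blurred; parents are NOT notified"
        notify_parents()
        return "image shown; parents notified"

    def notify_parents() -> None:
        print("parent notification sent")  # stand-in for the real mechanism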

You could say that the hash database is from a 3rd party so that it's not Apple acting unilaterally, but that's not taken as reassurance, because the government could abuse it. OK, guard against that with Apple reviewing the alerts before doing anything with them; that's not reassuring either, because Apple reviews are incompetent (where do you hear of groups that are both incompetent and capable of implementing world-scale surveillance systems? conspiracy theories, mostly). People say it scans all photos, and when they learn that it scans only photos about to be uploaded to iCloud, their opinion doesn't seem to change - because it's not reasoned from facts, perhaps?

People say it will be used by abusive partners who will set their partner to be a minor to watch their chats; people explain that you can't change an adult AppleID to a minor one just like that, demonstrating the argument was fear-based, not fact-based. People say it is a new ability for Apple to install spyware in future, but it's obviously not - Apple have been able to "install spyware in future" since they introduced auto-installing iOS updates many years ago. People say it's a slippery slope - companies have changed direction, regulations can change - yet no change in opinion; nobody has any data or facts about how often systems do slide down slippery slopes, or get dragged back up them. People say it could be used by bad actors at Apple to track their exes; from the design, it couldn't. But why facts when there's fearmongering to be done? The open letter itself has multiple inaccurate descriptions of how the thing works by the second paragraph, to present it as maximally scary.
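For readers who want that design in concrete terms, here is a toy sketch of the threshold-and-review shape being described. It is emphatically not Apple's NeuralHash or its private set intersection protocol; the hash values, match distance, and threshold below are all invented for illustration:

    from typing import Iterable

    # Hypothetical database of 64-bit perceptual hashes supplied by a
    # third party (NCMEC, in Apple's design). These values are made up.
    KNOWN_HASHES = {0x9F3A2C11D4E5B607, 0x0B77E0A9F2C41D38}

    # Hypothetical: nothing is surfaced for human review until this many
    # distinct matches accumulate, so one false positive does nothing.
    MATCH_THRESHOLD = 30

    def hamming_distance(a: int, b: int) -> int:
        """Number of differing bits between two 64-bit hashes."""
        return bin(a ^ b).count("1")

    def is_match(photo_hash: int, max_distance: int = 4) -> bool:
        """Perceptual hashes match on nearness, not equality, which is
        why adversarial collisions are conceivable at all."""
        return any(hamming_distance(photo_hash, known) <= max_distance
                   for known in KNOWN_HASHES)

    def should_flag_for_review(upload_queue: Iterable[int]) -> bool:
        """Scan only photos queued for cloud upload; flag the account for
        human review only once the match count crosses the threshold."""
        return sum(1 for h in upload_queue if is_match(h)) >= MATCH_THRESHOLD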

> "We would also in general like to see more stats about how serious the problem of CSAM actually is"

We know[2] that over 12 million reports of child abuse material to NCMEC were related to Facebook Messenger, and NCMEC alone gets over 18 million tips in a year. Does that change your opinion either way? Maybe we could find out more after this system goes live - how many alerts Apple receives and how many they send on. A less panicky "Open Letter to Apple" might encourage them to make that data public - how many times it triggered in a quarter - and ask Apple to commit to removing it if it's not proving effective, and to state what they intend to do if asked to make the system detect more things in future.

> "their fear is not based on real risk analysis or statistics or practical tradeoffs"

Look what would have to happen for this system to ruin your life in the way people here are scaremongering about:

- You would have to sync to iCloud, such that this system scans your photos. That's optional.

- Someone would have to get a malicious hash into the whole system and a photo matching it onto your device. That's nontrivial, to say the least.

- There would have to be enough of those pictures to trigger the alarm.

- The Apple reviewers would have to not notice that the false-alarm photo is a distorted version of a normal thing.

- NCMEC and the authorities would have to not dismiss the photo.
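As a back-of-envelope sketch of how those safeguards compound, multiply the probability of each step failing. Every number below is invented purely for illustration, not a measured rate:

    # Each step in the chain above must fail for a false alarm to reach
    # the authorities. All probabilities here are invented assumptions.
    p_icloud_sync    = 0.75   # target has iCloud Photos enabled
    p_planted_match  = 1e-6   # adversarial photo planted AND hash in the DB
    p_over_threshold = 1e-3   # enough such photos to trip the threshold
    p_review_miss    = 0.01   # Apple's reviewer misses the false alarm
    p_ncmec_miss     = 0.01   # NCMEC/authorities also fail to dismiss it

    p_life_ruined = (p_icloud_sync * p_planted_match * p_over_threshold
                     * p_review_miss * p_ncmec_miss)
    print(f"{p_life_ruined:.1e}")  # 7.5e-14 under these assumptions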

It's not impossible, but it's in the realms of the XKCD "rubber hose cryptography" comic. Sir Cliff Richard: his house was raided by the police, the media alerted, his name dragged through the mud; then the Crown Prosecution Service decided there was nothing to prosecute. The police apologised. He sued the police and they settled out of court. The BBC apologised. He sued them and won. The Crown Prosecution Service reviewed their decision and reaffirmed that there was nothing to prosecute. His name is tarnished; forever people will be suspicious that he paid someone off or otherwise pulled strings to get away with something - a name-damaging false alarm, which is what many people in this thread fear happening. Did anyone need to use a generative-adversarial network to create a clashing perceptual hash uploaded into a global analysis platform to trigger a false alarm convincing enough to pass two or three human reviews? No, two men decided they'd try to extort money and made a false rape allegation.

People aren't interested in how it works, why it works the way it does, whether it will be an effective crime fighting tool (and how that's decided) or whether it will realistically become a tyrannical system, people aren't interested in whether Apple's size and influence could be an independent oversight on the photoDNA and NCMEC databases to push back against any attempts of them being misused to track other-political topics, people are jumping straight to "horrible governments will be able to disappear critics" and ignoring that horrible governments already do that and have many much easier ways of doing that.

> "So in the real world we know that the majority of child abuse comes from people that children already know."

Those 12 million reports of child abuse material related to Facebook Messenger: does it make any difference if they involved people the child knew? If so, what difference do you think that makes? And Apple's system is to block the spread of abuse material, not (directly) to reduce abuse itself - which seems like an important distinction that you're glossing over in your position of "it won't reduce abuse so it shouldn't be built" when the builders are not claiming it will reduce abuse.

> "Nobody in any political sphere has ever responded to "think of the children" with "we already thought of them enough.""

Are the EFF not in the political sphere? Are the groups quoted in the letter not? Here[3] is a UK government vote from 2014 on communication interception, where it was introduced with "interception, which provides the legal power to acquire the content of a communication, are crucial to fighting crime, protecting children". 31 MPs voted against it. Here[4] is a UK government vote from 2016 on mass retention of UK citizen internet traffic; many MPs voted against it. It's not the case that "think of the children" leads to universal political agreement on any system, as you're stating. Which could be an example of you taking your position by fear instead of fact.

> "It doesn't matter what solutions we come up with or what the rate of CSAM drops to, those people are still going to be scared of the idea of privacy itself."

The UK government statement linked earlier[2] disagrees when it says "On 8 October 2019, the Council of the EU adopted its conclusions on combating child sexual abuse, stating: “The Council urges the industry to ensure lawful access for law enforcement and other competent authorities to digital evidence, including when encrypted or hosted on IT servers located abroad, without prohibiting or weakening encryption and in full respect of privacy". The people whose views you claim to describe explicitly say the opposite of how you're presenting them. Which, I predict, you're going to dismiss with something that amounts to "I won't change my opinion when presented with this fact", yes?

There are real things to criticise about this system, the chilling effect of surveillance, the chance of slippery slope progression, the nature of proprietary systems, the chance of mistakes and bugs in code or human interception, the blurred line between "things you own" and "things you own which are closely tied to the manufacturer's storage and messaging systems" - but most of the criticisms made in this thread are silly.

[1] on the right, here: https://www.apple.com/v/child-safety/a/images/child-safety__...

[2] https://www.gov.uk/government/publications/international-sta...

[3] https://www.theyworkforyou.com/debates/?id=2014-07-15a.704.0...

[4] https://www.theyworkforyou.com/debates/?id=2016-06-07a.1142....


> This thread is not full of statistics and data about existing content filtering and surveillance systems and how often they are actually being abused.

It is filled with explanations about why the systems you mention are tangibly different from what Apple is proposing. There is a huge difference between scanning content on-device and scanning content in a cloud. That doesn't mean that scanning content in the cloud can't be dangerous, but it is still tangibly different. There is also a huge difference between a user-inspectable list of malware being blocked in a website and an opaque list of content matches that users can not inspect or debate. There is also a huge difference between a user-inspectable static list and an AI system with questionable accuracy guarantees. And there is a huge difference between a user-controlled malware list that is blocked locally without informing anyone, and a required content list that sends notifications to other people/governments/companies when it is bypassed.
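To sketch that architectural difference in code (all names here are hypothetical stand-ins, not any real API): a Safe Browsing-style local blocklist decides on the device and tells no one, while the kind of matcher being criticized compares against a list the user cannot audit and reports positives off the device:

    # A user-inspectable local blocklist: the user can read and edit it,
    # and a block informs no one. (Hypothetical stand-in, not a real API.)
    LOCAL_BLOCKLIST = {"malware.example", "phishing.example"}

    def local_only_block(host: str) -> bool:
        return host in LOCAL_BLOCKLIST  # decision never leaves the device

    # An opaque reporting matcher: the list is hashes the user cannot
    # audit, and a positive match is sent onward, not merely blocked.
    OPAQUE_MATCH_LIST = frozenset({0xDEADBEEFCAFEF00D})

    def send_report(content_hash: int) -> None:
        print(f"reported match: {content_hash:#x}")  # stand-in for a network call

    def scan_and_report(content_hash: int) -> None:
        if content_hash in OPAQUE_MATCH_LIST:
            send_report(content_hash)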

That being said, if you want to look at stats about how accurate AI filters are for explicit material in the examples you mention, there are a ton of stats online about that, and they're mostly all quite bad.

> nobody has any data or facts about how often systems do slide down slippery slopes, or get dragged back up them

There's a lot to unpack in this one sentence, and it would take more time than I'm willing to give, but are you really implying that government surveillance doesn't count as a real slippery slope because sometimes activists reverse the trend?

> Minors will see the prompt "if you do this, your parents will find out" and can choose not to and the parents don't find out. There's an example of the message in the Apple announcement[1]

You misunderstand the concern. The risk is not the child themselves clicking through to the photo (although it would be easy for them to accidentally do so), it's the risk of that data being leaked from other phones because a friend thoughtlessly clicks through the prompt.

> The open letter itself has multiple inaccurate descriptions of how the thing works by the second paragraph to present it as maximally-scary.

Where? Here's the second paragraph:

> Apple's proposed technology works by continuously monitoring photos saved or shared on the user's iPhone, iPad, or Mac. One system detects if a certain number of objectionable photos is detected in iCloud storage and alerts the authorities. Another notifies a child's parents if iMessage is used to send or receive photos that a machine learning algorithm considers to contain nudity.

The only thing I can think of is the word "continuously" which is not strictly inaccurate but could be misinterpreted as saying that the scanning will be constantly running on the same set of photos, and the subtle implication that this scanning will happen to photos "saved", which might be misinterpreted as implying that this scan will happen to photos that aren't uploaded to iCloud. But given that the second sentence immediately clarifies that this is referring to photos uploaded to iCloud, it seems like a bit of a stretch to me to call this misinformation.

> people are jumping straight to "horrible governments will be able to disappear critics" and ignoring that horrible governments already do that and have many much easier ways of doing that.

Hang on a sec. A little while ago you were calling my fears theoretical, now you're admitting that governments routinely abuse this kind of power. You really think it's unreasonable to be cautious about giving them more of this power?

> Does that change your opinion either way?

It does not change my opinion, but at least it's real data, so more of that in these debates please. I'm not denying or rejecting the numbers that the UK lists.

> A less panicky "Open Letter to Apple" might encourage them to make that data public

Holy crud, I would hope this is the bare minimum. Are we really having a debate over whether or not Apple will make that data public? I thought that was just assumed that they would. If that's up in the air right now, then we've sunk really low in the overall conversation about public accountability and human rights.

> which seems an important distinction that you're glossing over in your position "it won't reduce abuse so it shouldn't be built" when the builders are not claiming it will reduce abuse.

I realize this is branching out in a different direction, but it sure as heck better reduce abuse or it's not worth building. CSAM is disgusting, but the primary reason to target it is to reduce abuse. If reducing CSAM doesn't reduce abuse, it's not worth doing and we should focus our efforts elsewhere.

I know this is something that might sound abhorrent to people, but we are having this debate because we care about children. We have to center the debate on the reduction of the creation of CSAM, the reduction of child abuse, and the reduction of gateways into child abuse. Reducing child abuse is the point. We absolutely should demand evidence that these measures reduce child abuse, because reducing child abuse and reducing the creation of CSAM is a really stinking important thing to do.

Which leads back to your other note:

> does it make any difference if they involved people the child knew? If so, what difference do you think that makes?

Yes, it makes a massive difference, because knowing more about where child abusers are coming from and how they interact with their victims makes it easier to target them and will make our efforts more effective. We should care about this stuff.

> It's not the case that "think of the children" leads to political universal agreement of any system, as you're stating.

I think you misunderstand. Nobody who's willing to bring out "think of the children" as a debate killer has ever dropped the argument because they got a concession. That there are some entities (like the EFF) who are willing to reject the argument as a debate killer and look at it through a risk analysis lens does not mean the unquestioned argument of "one child is too many" is any less toxic to real substantive political debate.

> without prohibiting or weakening encryption and in full respect of privacy

I don't want to bash on the EU too hard here, but it has this habit of just kind of tacking onto the end of its laws "but make sure no unintended bad things happen" and then acting like that solves all of their problems. It doesn't mean anything when they add these clauses. This is the same EU that argued for copyright filters and then put on the end of their laws, "but also this shouldn't harm free expression or fair use."

It means very little to me that the EU says they care about privacy. What real, tangible measures did they include to make sure that in practice encryption would not be weakened?

Look, I can do the same thing. Apple should not implement this system, but they should also reduce CSAM. See, I just proved I care about both, exactly as convincingly as the EU! So you know I'm serious about CSAM now, I said that Apple should reduce it. But I predict that you'll "dismiss with something that amounts to 'I won't change my opinion when presented with this fact', yes?"

> There are real things to criticise about this system, the chilling effect of surveillance, the chance of slippery slope progression, the nature of proprietary systems, the chance of mistakes and bugs in code or human interception, the blurred line between "things you own" and "things you own which are closely tied to the manufacturer's storage and messaging systems"

Wait, hold on. Forget literally everything that we were talking about above. This is, like, 90% of what people are criticizing! What else are people criticizing? These are really big concerns! You got to the end of your comment, then suddenly listed out 6 extremely good reasons to oppose this system, and then finished by saying, "but other than those, what's the problem?"


> "Wait, hold on. Forget literally everything that we were talking about above. This is, like, 90% of what people are criticizing! These are really big concerns!"

My point is your original point - where is the data to support these criticisms, the facts, the statistics? Merely saying "I can imagine some hypothetical future where this could be terrible and misused" should not be enough to conclude that it is, in fact, terrible, and will more likely than not be misused.

We've had years of leaks showing that three-letter agencies and governments simply don't need to misuse things like this. The USA didn't slide down a slope of banning asbestos for health reasons and end up, "oops", banning recreational marijuana. The USA didn't slide down a slippery slope into the Transportation Security Administration after 9/11; it appeared almost overnight, and then it didn't slide down a slippery slope into checking for other things - it stayed much the same ever since.

The fact that one can imagine a bad future is not the same as the bad future being inevitable; the fact that one can imagine a system being put to different uses doesn't mean it either will be, or that those uses will necessarily be worse, or that they will certainly be maximum-bad. It's your comment about "fear based reasoning" turned to this system instead of to encryption.

You ask "are you really implying that government surveillance doesn't count as a real slippery slope because sometimes activists reverse the trend?" and I'm saying the position "because slippery slopes exist, this system will slide down it and that's the same as being at the bottom of it" and then expecting the reader to accept that without any data, facts, evidence, stats, etc. is low quality unconvincing commenting, but is what makes up most of the comments in this thread.

> "Where? Here's the second paragraph:"

The paragraph which implies it happens to all photos (not just iCloud ones) and immediately alerts the authorities with no review and no appeal process. There are people in this thread saying "I don't need the FBI getting called on me cause my browser cache smelled funny to some Apple PhD's machine-learning decision" about a system which does not look at the browser cache, does not call the FBI, has a review process, and does have an appeal process.

> "Holy crud, I would hope this is the bare minimum."

Why would you hope "the bare minimum" the letter could ask for is something the letter is clearly not asking for? Or that the bare minimum from a company known for its secrecy is openness and transparency? It would be nice if it was, yes. I expect it won't be, because we would all have very different legal systems and companies if laws and company policies were created with metrics to track their effectiveness and specified expiry dates, and by default only got renewed if they proved effective.

> "What else are people criticizing?"

My main complaint is that people are asking us to accept criticism such as "Iraq will use this to murder homosexuals" unquestioningly. But still, to quote from people in this thread: "Apple can (and likely will) say they won't do it and then do it anyway." - despite Apple announcing this in public, the claim is that they're going to lie about it, and you should just believe me without me supporting this position in any way. "This will lead to blackmailing of future presidents in the U.S." - and you should believe that because reasons. "Made for China" - and you should agree because China is the boogeyman. (Maybe it is; if so, justify why the reader should agree). "It's not Apple. It's the government" - because government bad. "Scan phones for porn, then sell it for profit and use it for blackmail. Epstein on steroids" - because QAnon or something, who even knows? "the obvious conclusion is Apple will start to scan photos kept on device, even where iCloud is not used." - because they said 'obviously' you have to agree or you're clueless, I guess. "I never thought I'd see 'privacy' Apple come out and say we're going to [..] scan you imessages, etc." - and they didn't say that; unless the commenter is a minor, which is against the HN guidelines.

It's very largely unreasoned, unjustified, unsupported, panicky worst-case fearmongering even when the concerns could be serious - if justified.

> "Nobody who's willing to bring out "think of the children" as a debate killer has ever dropped the argument because they got a concession."

That is probably true, but probably self-supporting. Someone who honestly uses "think of the children" likely thinks the children's safety is not being thought of enough, and is, by self-selection, less likely to immediately turn around and agree with the opposite.

> "It means very little to me that the EU says they care about privacy."

Well, the witch is being drowned despite her protests.

> "What real, tangible measures did they include to make sure that in practice encryption would not be weakened?"

Well, they didn't /ban/ it for a start, which they could have done (as exemplified by Saudi Arabia and FaceTime, discussed in this thread), and they didn't explicitly weaken it like the USA did with its strong-encryption export regulations of the 1990s. Shouldn't those count for something in defense of their stated position?


I'm not going to push too hard on this, but I do want to quickly point out:

> Well they didn't /ban/ it for a start [...] and they didn't explicitly weaken it

Does not match up with:

> urges the industry to ensure lawful access for law enforcement and other competent authorities to digital evidence, including when encrypted

If you're pushing a company to ensure access to encrypted content based on a warrant, you are banning/weakening E2E encryption. It doesn't matter what they say their intention is/was, or whether they call that an outright ban, I don't view that as a credible defense.

----

My feeling is that we have a lot of evidence from the past and present, particularly in the EU, about how filtering/reporting laws evolve over time (the EU's CSAM filters within the ISP industry are a particularly relevant example here; you can find statements online where EU leaders argue that expanding the system to copyright is a good idea specifically because the system already exists and would be inexpensive to expand). I also look at US programs like the TSA and ICE and I do think their scope, authority, and restrictions have expanded quite a bit over the years. I don't agree that those programs came out of nowhere or that they're currently static.

If you don't see future abuse of this system as credible, or if you don't see a danger of this turning into a general reporting requirement for encrypted content, or if you don't think that it's credible that Apple would be willing to adapt this system for other governments -- if you see that stuff as fearmongering, then fine I guess. We're looking at the same data and the same history of government abuses and we're coming to different conclusions, so our disagreement/worldview differences are probably more fundamental than just the data.

To complain about some of the more extreme claims happening online (and under this article) is valid, but I feel you're extrapolating a bit here and taking some uncharitable readings of what people are saying (you criticize the article for "implying" things about the FBI, and the article doesn't even contain the word FBI). Regardless, the basic concerns (the "chilling effect of surveillance, the chance of slippery slope progression, the nature of proprietary systems, the chance of mistakes and bugs in code or human interception, the blurred line between 'things you own' and 'things you own which are closely tied to the manufacturer's storage and messaging systems'") are enough of a problem on their own. We really don't need to debate whether or not Apple will be willing to expand this system for additional filtering in China.

We can get mad about people who believe that Apple is about to start blackmailing politicians, but the existence of those arguments shouldn't be taken as evidence that the system doesn't still have serious issues.


> 2. The complaints are all slippery slope arguments that governments will force Apple to abuse the mechanism. These are clearly real concerns and even Tim Cook admits that if you build a back door, bad people will use it.

I don't know how old you are but for anyone over 30 the slippery slope isn't a logical fallacy, it's a law of nature. I've seen it happen again and again. The only way to win is to be so unreasonable that you defend literal child porn so you don't need to defend your right to own curtains.


The intention to prevent child sexual abuse material is indeed laudable. But the "solution" is not. Especially when we all know that this "solution" gives legal backing to corporations to do even more data mining on their users, and can easily be extended (to scan even more "illegal" content) and abused into the pervasive surveillance network desired by a lot of governments around the world.

For those wondering what's wrong with this, two hard-earned rights in a democracy go for a toss here:

    1. The law presumes we are all innocent until proven guilty.
    2. We have the right against self-incrimination.
Pervasive surveillance like this starts with the presumption that we are all guilty of something ("if you are innocent, why are you scared of such surveillance?" or "what do you have to hide if you are innocent?"). The right against self-incrimination is linked to the first doctrine because compelling an accused to testify transfers the burden of proving innocence to the accused, instead of requiring the government to prove his guilt.


> But the "solution" is not.

Agreed, that’s my point, and I would say that your comment outlines some good principles a better solution might respect, although it’s worth noting that those principles have only ever constrained the legal system.

Your employer, teacher, customers, vendors, parents etc, have never been constrained in these ways.


I appreciate this well thought out and logical approach.

The issue I have is with the way this is presented.

I actually really like the local-only features related to making underage users second-guess sending/viewing potentially harmful content. IMO this is a huge problem, especially since most minors do not have the context to understand how things live forever on the internet, and it is likely the source of a large percentage of the "known CSAM material" in circulation. I think it's a great step in the right direction.

But the scan-and-report content on your device approach is where it becomes problematic. It's sold as protecting privacy because a magic box, "the cryptographic technology", somehow prevents abuse or use outside the scope it is intended for. But just like this entire function can be added with an update, it can also be modified with an update. Apple pretends the true reality of this feature is somehow solved by technology, and glosses over the technical details with catchphrases that 99.9% of the people who read them won't understand. "It's 100% safe and effective and CANNOT IN ANY WAY BE ABUSED because, cryptographic technology." They're forcing their users to swallow a sugar-coated pill with very deep ramifications that only a very small percentage will fully comprehend.

I'm 100% for doing anything reasonably possible in the advancement of this cause. But you're correct in that this is "one fear is more important than another fear." And I don't think anyone can say how slippery this slope is or where it can go. I also don't really feel like you can even quantify in a way that they can be compared the harm caused by CSAM or mass surveillance. So a judgement call on "which is worse" really isn't possible because the possibilities for new and continued atrocities in both cases are infinite.

But at least in the US, a line is crossed when something inside your private life is subject to review and search by anyone else even when you have not committed a crime. If you want to search my house, you must, at least on paper, establish probable cause before a judge will grant you a search warrant.

"It's okay, these specially trained agents are only here to look for just drugs. They've promised they won't look at anything else and aren't allowed to arrest you unless they find a large quantity of drugs" -- Would you be okay with these agents searching every house in the nation in the name of preventing drug abuse(which also is extremely harmful to many people including children even when they are not direct users)?

The argument "well just don't use apple" doesn't stand either. A landlord can't just deputize someone to rummage through my house looking for known illegal things to report. Even though it's technically not "my house" and you could argue that if I don't like it, well I should just move somewhere else. But I don't think that can be argued as reasonable. Our phones are quickly becoming like our houses in that many people have large portions of their private lives inside them.

I can't quite put my finger on the exact argument, so I apologize for not being able to articulate this more clearly, but there is something wrong with removing the rights of large groups of people to protect a much smaller subset of those people from an act perpetrated by bad actors. You are somehow allowing bad actors to inflict further damage, in excess of the direct results of their actions, on huge populations of people here. I know there is a more eloquent way to describe this concept, but removing those rights just doesn't seem like the right course of action.

And I'm sorry but I am not smarter than all of the people that worked on this, so I don't have a better solution that would accomplish the same goal, but I know that this solution has a huge potential to enable massive amounts of abuse by nations who do not respect what much of the world considers basic human rights.


> But just like this entire function can be added with an update, it can also be modified with an update.

That argument is a valid argument that by using an Apple device you are trusting them not to issue an abusive update in the future. It applies regardless of whether they release this feature or not - at any time they could issue spyware to any or all devices.

I actually fully agree that this is a problem.

My position is that Apple isn’t going to solve this problem. If we want it solved, we need to solve it.

The value of using Apple devices today, and even the sense that they are going to protect children who use their devices, far outweighs relatively vague and unproven assertions about future abuses that haven’t yet materialized in most people’s minds even if they turn out to be right in the end.


> So: where are the proposals for a better solution?

Better policing and child protective services to catch child abuse at the root, instead of panicking about the digital files it produces? If you'd been paying attention, you'd have noticed that real-world surveillance has massively increased, which should enable the police to catch predators more easily. Why count only privacy-betraying technology as a "solution", while ignoring the rise of police and surveillance capabilities?

Edit as reply because two downvotes means I am "posting too fast, please slow down" (thank you for respecting me enough to tell me when I can resume posting /s):

> How do you police people reaching out to children via messaging with sexual content?

First, this is one small element of child abuse - you want to prevent child rape, merely being exposed to sexual content is nowhere near severe enough to merit such serious privacy invasion. To prevent the actual abuse, one could use the near-omnipresent facial recognition cameras, license plate readers, messaging metadata, to find when a stranger is messaging or stalking a child, DNA evidence after the fact that is a deterrent to other offenders, phone location data, etc. etc. At first I thought I didn't have to spell this out.

Second, to answer your question: very easily. With parental controls, a decades old technology that is compatible with privacy and open-source. The parent can be the admin of the child's devices, and have access to their otherwise encrypted messages. There is no need to delegate surveillance (of everyone, not just children) to governments and corporations, when we have such a simple, obvious, already existing solution. It frankly boggles the mind how one could overlook it, especially compared to how technically complex Apple's approach is. Does the mention of child abuse simply cause one's thinking to shut down, and accept as gospel anything Apple or the government says, without applying the smallest bit of scrutiny?


You have overlooked that the system for protecting minors from sexual messaging /is/ the parental controls you wish it was, and is /not/ the serious privacy invasion you think it is. It is on-device only, it only alerts the parents, in the condition that the parents enable it and the child continues past a warning informing them that their parent will find out if they continue.

> "The parent can be the admin of the child's devices, and have access to their otherwise encrypted messages."

Apple have done even better - the parent doesn't get access to all their messages (at least not as part of this system).

This is not the same system as the photo-scanning and authority-alerting one.


> You have overlooked

No, the comment I replied to overlooked it.

> This is not the same system as the photo-scanning and authority-alerting one.

Which makes fighting child-grooming an even worse argument in favor of the authority-alerting system.


How do you police people reaching out to children via messaging with sexual content?

> Why count only privacy-betraying technology as a "solution", while ignoring the rise of police and surveillance capabilities?

I don’t. But if your solution is ‘just police more’, you need to explain how the police should detect people who are grooming children by sending images to them.


As if this will be limited to "iThings". If you don't think this tech is coming to all devices, you're not paying attention to the state of the world.


Technically true, though I don't know why you think that capability is limited to Apple products...

https://transparencyreport.google.com/youtube-policy/feature...


Was it ever actually in doubt that Apple could read stuff you put in their cloud if they so desired? It seems obvious.


Even if you don’t use iOS, seems likely Google could do/is doing this also.


It's worse:

Apple: hey govt, here's some probable cause.

Govt: warrant, search, arrest

User: prosecuted


Please talk to non-tech people around you about this. This overreach simply cannot stand, and from a company that sees success from touting itself as a privacy-centric alternative? These really are some dark times. I imagine that if the world had the Stasi fresher in its mind, this would never have happened.


Anyone who engages in such a discussion with a non-technical person is going to de facto seem like an advocate for child pornography, similar to how advocating for encryption can easily be twisted into being pro-crime.

Having said that, there is an enormous amount of misinformation and fear-mongering about a pretty tame change. This seems like so much ado about very close to nothing.

a) They optionally scan messaged photos for nudity using a NN if the participants are children and in a family (the account grouping), and the group adult(s) have opted in, giving children warnings and information if they send or receive such material. A+++ thumbs up.

b) They scan photos that you've uploaded to iCloud (available at photos.icloud.com, unencrypted -- in the E2E sense, effectively "plaintext" from Apple's perspective -- etc) for known CP hashes. Bizarrely, Apple decided to scan these on device as well, causing 99% of the outrage and confusion, yet every major cloud photo service in the world does such checks for the same reason, whether you have the photo set to private or not. Presumably Apple decided to do it on device simply as free distributed computing, taking advantage of hundreds of millions of high performance chips, but most importantly as a PR move demonstrating that "Apple Silicon helps with Child Safety", etc.

That's it. Various "this is a harbinger of doom and tomorrow they're going to..." arguments are unconvincing. This does absolutely nothing to break or subvert E2E encryption or on device privacy.
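
For the curious, the core of (b) is just a set-membership test. A minimal sketch in Python (using an exact SHA-256 digest for clarity -- the real systems use perceptual hashes so that resized or re-encoded copies still match -- with KNOWN_DB as a hypothetical stand-in for the NCMEC-derived database):

  import hashlib

  # Hypothetical database of flagged-image digests; devices and servers
  # only ever hold these opaque hashes, never the source images.
  KNOWN_DB = set()

  def flag_if_known(image_bytes):
      # Exact-match lookup; a perceptual hash would tolerate small edits.
      return hashlib.sha256(image_bytes).digest() in KNOWN_DB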

EDIT: The moderation of this comment has been fascinating, going to double digits, down to negatives, back up again, etc.


They were already doing it on the cloud:

https://nakedsecurity.sophos.com/2020/01/09/apples-scanning-...

So now they’re doing it on device too. This feels like it’s putting in place the foundation to scan all offline content.


Thanks for the link, I had assumed that Apple was already doing it on servers (like all other online services providers), which makes the announcement even more terrible.

Moving it on device will show 0 improvement to the original goal, while opening a door that quite frankly I never expected Apple to be the one to open (I would have bet on Microsoft).


> Moving it on device will show 0 improvement to the original goal, while opening a door that quite frankly I never expected Apple to be the one to open (I would have bet on Microsoft).

The CSAM scan is only for photos that are to be uploaded to iCloud Photos. Turning off iCloud Photos will disable this.


Sorry if my point wasn't clear, I do understand this yes.

My point is that to my knowledge, this is the first time that an on device "content check" is being done (even if it's just for photos that will end up in iCloud). This is the precedent (the on device check) that makes me and some others uneasy, as pointed out in the linked letter. The fact that it applies only to photos going to the cloud is an implementation detail of the demonstrated technology.

Legislators around the world now have a precedent and may (legitimately) want it extended to comply with their existing or upcoming laws. This is not a particularly far fetched scenario if you consider that Apple has already accommodated how they run their services locally (as they should, they have to comply with local laws around the world in order to be able to operate).

That's the crux of the issue most of the people quoted in the letter have. One can argue it's just a slippery slope argument; I personally think one can be legitimately concerned about the precedent being set.

Keeping it on the server, in my opinion, was a much better option for users (with the same compliance with local laws and the same effectiveness toward the stated goal as far as we know; there's no improvement on that front, or none that couldn't have been brought to the existing server-side check), and ultimately also a safer option in the long run for Apple.

They've opened themselves, for little reason, to a large amount of trouble on an international scale and at this point rolling it back (to server checks) might not make a difference anyway.


Scanning on device (albeit only of photos shared off device) seems like an ill-considered PR move for a whole child safety push (perhaps with a "look at how powerful our iPhone chips are" angle). As you mentioned, they've already been doing these checks for some time on their servers, and people concerned about false positives should realize that Microsoft, Google, Facebook, Amazon et al are doing identical checks with a very similar process.

I imagine there are some frantic meetings at Apple today. However the grossly misleading claims people have been making to fear-monger aren't helpful.


> a) They scan photos for nudity using a NN if the participants are children and in a family (the account grouping), giving children warnings and information if they send or receive such material. A+++ thumbs up.

Leave my kids alone.

If they want to share photos of themselves naked, it's none of anyone's business except them and maybe me (maybe), certainly not a huge American corporation.

Neither I nor my kids have iPhones, but as others have observed, I have no illusions that Google will follow suit. Our options are becoming pretty limited at this point.


Hopefully this is another configurable option that falls under the already very extensive family screen time feature. I understand where you're coming from and respect your position, but I fall on the opposite side. This is something I do want for my kid.


On the Child Safety page one of the dialogs is the opt in (or out) configuration, so it seems, as one would expect, that the adult(s) in the family sharing group get to configure this.

And it's a useful, valuable option that many (I would wager the overwhelming majority) parents will enable.

Apple made a huge PR mistake announcing both of these systems together (the CP hashing system and the NN message warning system), because as seen throughout the comments it has led to lots of people conflating and mixing and matching elements of both into a fearsome frankensystem.


> it has led to lots of people conflating and mixing and matching elements of both into a fearsome frankensystem.

I agree they could have slowly walked people through the components one by one, but so much of what is in the comments is pure bad faith that I am not sure that it would have helped.

By “bad faith”, I mean strongly asserting as true claims that people know they haven’t checked, and which later turn out to be false.

In a cynical way this might actually be better PR for Apple. They know they can’t prevent people who dislike them from jumping to the most negative conclusions possible. By presenting these features together and letting the crazy interpretations abound, they make their opponents seem unhinged, and this obscures the real but less apocalyptic concerns.


The NN is the system I thought they were making, and I applaud it. The hashing one feels really dangerous, though; I don't think that's people just exaggerating. Apple hasn't done enough to limit their own power, so they might (read: will) be made to use it to hurt people.


You want Apple employees going through naked photos of your kid and deciding if it's child porn or not because the algo flagged it? Because that is what this means.


No, it doesn't. You have conflated two entirely different systems.


What would the two different systems be then?

I am certainly willing to agree I conflated them if you clarify.


One system optionally checks iMessages (to be sent or received) against one or more "nudity" neural networks if the user is identified as a child in the iCloud family sharing group. If it triggers, the child is given information and an opt-out, optionally choosing to go ahead with sending or viewing, with the caveat that their parent(s) will be notified (optionally, I presume). Nothing about this event is sent off device. Apple employees aren't receiving or evaluating the photos. No one other than the identified parent(s) is party to this happening.
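
In rough pseudologic (my own sketch of the flow as described, not Apple's code, and with an assumed classifier cutoff), the whole thing reduces to a local gate whose only side effect is a parent notification:

  THRESHOLD = 0.9  # assumed classifier cutoff, purely illustrative

  def handle_photo(nudity_score, is_child, family_opted_in, child_continues):
      """Return (delivered, parents_notified); nothing leaves the device."""
      if not (is_child and family_opted_in) or nudity_score < THRESHOLD:
          return True, False   # delivered normally, no one notified
      if child_continues:      # child saw the warning and chose to proceed
          return True, True    # delivered, parents notified
      return False, False     # child backed out; nothing is sent anywhere

  # A flagged photo where the child backs out after the warning:
  assert handle_photo(0.95, True, True, child_continues=False) == (False, False)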

Neural networks are far from perfect, and invariably parents and children are going to have a laugh when it triggers on a picture of random things.

The other, completely separate system checks files stored in iCloud photos against a hash list of known, identified child abuse photos. This is already in place on all major cloud photo sites.

Apple clearly wanted to roll out the first one but likely felt they needed more oomph in the child safety push so they made a big deal about expanding the second system to include on-device image hash functionality. I would not be surprised at all if they back down from the latter (though 100% of that functionality will still happen on the servers).


There are two systems, and I will comment first on the second one. The second system you were referring to, the "other" one, is the CSAM detection. And by the functionality description alone, it already sounds terrifying enough.

You will be flagged and reported to the authorities on the technical argument that the algorithm has a "one in a trillion" chance of failure. Account blocked, and you start your "guilty until proven innocent" process from there.

Given the scale at which Apple operates, it's also clear, if you think about it, that the additional human review will happen on a sampling basis. The volume would be too high for a human chain to validate each flagged account.

In any case, on multiple occasions it has become clear that the current model of privacy with Apple is that there is no privacy. It is a three-way arrangement between you, the other person you interact with, and Apple's algorithms and human reviewers. Is there any difference between this and having a permanent video stream of what is happening inside each room of your house, analyzed by a "one in a trillion" neural net algorithm and additionally reviewed by a human on an as-needed basis?

CSAM

https://www.apple.com/child-safety/pdf/CSAM_Detection_Techni...

"Using another technology called threshold secret sharing, the system ensures that the contents of the safety vouchers cannot be interpreted by Apple unless the iCloud Photos account crosses a threshold of known CSAM content. Only when the threshold is exceeded does the cryptographic technology allow Apple to interpret the contents of the safety vouchers associated with the matching CSAM images."

"The threshold is selected to provide an extremely low (1 in 1 trillion) probability of incorrectly flagging a given account. This is further mitigated by a manual review process wherein Apple reviews each report to confirm there is a match, disables the user’s account, and sends a report to NCMEC. If a user feels their account has been mistakenly flagged they can file an appeal to have their account reinstated."

So in other words, there is no doubt that for this one there will be intervention by Apple employees when required. For training purposes, for system testing, for law enforcement purposes, etc...
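
To be fair to the cryptography itself, "threshold secret sharing" is a standard primitive, and the math does what the quote says. A toy Shamir sketch (my own illustration, not Apple's construction) shows why nothing is readable below the threshold; my complaint is with the human process around it, not the math:

  import random

  P = 2**127 - 1  # a Mersenne prime, used as the field modulus

  def make_shares(secret, threshold, count):
      # Random polynomial of degree threshold-1 with constant term = secret.
      # (random.randrange is fine for a toy; real systems need a CSPRNG.)
      coeffs = [secret] + [random.randrange(P) for _ in range(threshold - 1)]
      return [(x, sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P)
              for x in range(1, count + 1)]

  def recover(shares):
      # Lagrange interpolation at x = 0 recovers the constant term.
      secret = 0
      for xi, yi in shares:
          num = den = 1
          for xj, _ in shares:
              if xj != xi:
                  num = num * -xj % P
                  den = den * (xi - xj) % P
          secret = (secret + yi * num * pow(den, P - 2, P)) % P
      return secret

  key = 123456789
  shares = make_shares(key, threshold=3, count=10)
  assert recover(shares[:3]) == key  # 3 shares: the key pops out
  assert recover(shares[:2]) != key  # 2 shares: garbage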

I guess this and other functionality was the reason they decided not to encrypt the iCloud backups:

"Apple dropped plan for encrypting backups after FBI complained" https://www.reuters.com/article/us-apple-fbi-icloud-exclusiv...

Now concerning the other feature/system, called "Expanded Protections for Children":

"https://www.apple.com/child-safety/pdf/Expanded_Protections_..."

After reviewing what I believe to be every single document published so far by Apple on this, I found only one phrase, and nothing more, detailing how it works. Compared with the amount of detail published for CSAM detection, the lack of info on this one already looks like a red flag to me. So I am not really sure how you can be so certain of the implementation details.

The phrase is only this: "Messages uses on-device machine learning to analyze image attachments and determine if a photo is sexually explicit. The feature is designed so that Apple does not get access to the messages."

Nothing more I could find. If you have additional details please let me know.

This is a feature that will be added in a future update, so here are some conclusions on my part. If we just stay with the phrase "The feature is designed so that Apple does not get access to the messages.", I have to note it is missing the same level of detail that was published for CSAM detection.

I can only conclude that:

- They do not get access to the messages currently, due to the current platform configuration. Note they did not say the images will stay on the phone, currently or in the future. Just that it uses local neural net technology and that the feature is designed so that Apple does not get access to the messages.

- They did not say they will not update the feature in the future to leverage iCloud compute capabilities.

- They did not say if there is any opt-in or opt-out for using the data for their training purposes.

- They do not say if they can update the functionality locally at the request of law enforcement.

I agree with you that it looks like it stays locally by the description, but the phrase as written looks like weasel words.

They also mention one of the scenarios, where the child agrees to send the photo to the parent before viewing: would that be device-to-device communication or via their iCloud/CSAM system? Unclear, at least from what I could gather so far.


> Leave my kids alone.

> If they want to share photos of themselves naked, it's none of anyone's business

What about if adult sexual predators want to share sexually explicit photos with your kids?

My understanding is that this feature isn’t about stopping kids from messaging each other - it’s about giving parents a warning if strangers are trying to groom them.

Also:

“ The Messages feature is specifically only for children in a shared iCloud family account. If you’re an adult, nothing is changing with regard to any photos you send or receive through Messages. And if you’re a parent with children whom the feature could apply to, you’ll need to explicitly opt in to enable the feature. It will not turn on automatically when your devices are updated to iOS 15.”

So basically the furore is based on a misunderstanding.


"They're going to scan your phone and probably have someone review any photos with a lot of human flesh in them" would be enough to get a lot of non-technical users to take notice.

That would get a lot of people nervous. Let alone anyone smart who thinks through the implications here of how far the line is being pushed on how public your phone is.


Simply turning off iCloud Photos will ensure that photos on your iPhone are never scanned. Why are you trying to make this thing about photos stored on-device? Photos in iCloud have always been available to Apple through iCloud backups. If you are concerned about privacy, turn it off.

"And it does so while providing significant privacy benefits over existing techniques since Apple only learns about users’ photos if they have a collection of known CSAM in their iCloud Photos account."


So if an offender turns off iCloud then this move will absolutely be useless??

How would that help catch them if they can simply flip the switch?


> So if an offender turns off iCloud then this move will absolutely be useless??

No on-device photo scanning happens unless iCloud Photos is enabled. Isn't it funny when you get the most important aspect wrong?

> How would that help catch them if they can simply flip the switch?

They never claimed that this would help "catch them" if they are not using iCloud Photos.


It would, but luckily that's not what's happening


Go read the announcement, the "CSAM detection" heading [0]. It is exactly what they are doing.

Although they're assuring us that they don't make mistakes. The technical term for that is either going to be "blatant deception" or "delusion". Apple are impressive but they haven't developed a tech that can't make mistakes.

[0] https://www.apple.com/child-safety/


That isn't what they're doing at all. You have significantly misunderstood or conflated different sections.

Though I don't blame you at all: Read through the various hysterical posts about this and there are a lot of extraordinary misrepresentations.


Ah, I see what you're getting at. They're currently hashing for specific photos.

I don't care. There is no way on this good earth that law enforcement is going to let them get away with that. They're effectively claiming that they will scan right past things that are obviously child porn and ignore them. That isn't a long-term stable thing to be doing - if they think scanning for anything is ok there is no logical reason to stop here. So they probably aren't going to stop, and they certainly aren't going to announce every step they take to increase the net.

And their 1:1,000,000,000,000 number is still delusional. The system is going to produce false positives. There are more sources of error here than the cryptographic hash algorithm.


Apple does a lot of the ML and personalization stuff on user devices for privacy reasons as well, keeping your data out of the cloud, and that is a good thing.


Why does everyone mention they already did it on cloud like that has any relevance whatsoever?

I have never once in my life thought about activating an automatic back-up-to-cloud feature on any phone I have ever owned, for a single second. So yes, it is hella different. This is for all the same reasons I back up personal data only to my NAS and use cloud accounts for generic shit like purchased music backups and nothing more.

I prefer losing all my photos if my phone is pickpocketed in between backups to having a public record of everything I ever photographed. Am I the 0.00000001% or something? I didn't even realize I was the odd man out, honestly.


"Why does everyone mention they already did it on cloud like that has any relevance whatsoever?"

Given that this only applies to photos that are stored to the cloud, it seems like it has total relevance given that for users literally nothing has changed. To argue that there is some fearsome new development requires one to extrapolate into "Well what if..." arguments.


The problem with this is that they can now scan images on phones regardless of upload, and it will render any future promises of e2e iCloud deceptive.

I don't see a reason to do this, other than to either scan all images, or claim that iCloud is e2e in the future.

And of course they went with the protection of children argument, which is bullshit. Apple gets paid by its users and should have no other interests than to get paid as much money as possible, regardless of who pays them.


> Apple gets paid by its users and should have no other interests than to get paid as much money as possible, regardless of who pays them.

That’s exactly why they are doing this. Providing a safe haven for child sex predators is bad for their brand.


As a followup, to be clear on the intentions in my post: while I do think that a lot of the reactions have been over the top (there are numerous comments claiming outright falsehoods about this system, out of either ignorance or prejudice), Apple's decision to do the CSAM stuff on device is incredibly ill-considered.

Do it in the cloud just like every other service does. None of this anger would have happened if they had just kept it in the cloud, and I truly cannot fathom how this made it this far. I would peg overwhelming odds that they abandon the on-device idea, as it makes no sense and has brought incredible ill will.


I guess the upside might be that this could be a compromise for them to start doing end to end encryption on iCloud backups and iCloud photo libraries. They might be able to argue that if they’re scanning for illegal content on the client side, then they don’t need to be able to decrypt on the cloud side… We’ll have to see, it’s still creepy but potentially a small net gain overall if that becomes an option…


If they claim e2e but then scan on the client side, we should sue them for deceptive marketing.

e2e means something specific and should be an absolute requirement in a post-Snowden age, not something that is optional or a compromise.


This on-device scanning is even worse. They can't even tell what was violating, or what hash, or what image. Just that the computer said you are violating.

We have no idea about the hash collisions. When talking about the whole world, 2^256 isn't a big enough space... even if they're using 256 bits.

And how dare anybody criticize this - criticism is tantamount to being for child porn. (Then again, that's why it was chosen. We'll soon see other things 'forbidden'.)


There is a shared (among all of the major tech companies, and presumably law enforcement) hash database of child abuse material. Going from a photo to the hash is deterministic: how it gets hashed on your phone is surely the same way it gets hashed on the cloud, running the exact same algorithm, whether iCloud, Amazon Photos, etc.

Such a collision would trigger a human validation.

This applies to cloud-shared files. It actually has applied to cloud shared photos for years. It applies to literally every major cloud photo service.


1. how many false positives have there been?

2. is it really reviewed by a human? I see how YT works, and automated failure at scale is the name of the game

3. apple has said that the on-device scanning only provides a binary yes/no on CP detection. How do you defend against a "yes" accusation? (when stored on someone else's server, the evidence is there)


This is part of iCloud photo sync, and on hitting some threshold of matching pictures it would trigger human review.

There would also presumably be human review involved in the legal process, e.g. law enforcement getting a subpoena based on Apple notifying them, and then using gathered evidence for a warrant.

The system is based on known image hashes, not arbitrary ML detection.

As this system is used only for iCloud photo uploads, the evidence gathering should be similar to that done by LE with other cloud hosting providers for years.


2*256 is a huge space. If everyone in the whole world had 50,000 images stored in iCloud, the probability of a collision would still be infinitesimally, unimaginably small.
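
A quick back-of-the-envelope check, assuming uniformly random 256-bit hashes (which a perceptual hash is not, so this bounds only random collisions, not deliberate near-collisions):

  n = 8e9 * 50_000              # everyone on Earth with 50,000 photos each
  p = n * (n - 1) / 2 / 2**256  # standard birthday bound on any collision
  print(f"{p:.1e}")             # ~6.9e-49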


2*256 is 512. you should probably type 2^256.

lol picky I know


I think you are massively underestimating what this change means if you think it is a "pretty tame change".

Clearly you do not understand the full ramifications of what is happening here.


What are the ramifications that I "do not understand"?

I will repeat: It is a very tame change. Were you frantic and delirious when a neural network first identified a dog in your photos? Isn't that the slippery slope to it reporting you to the authorities for something bong shaped?

Speaking of which, every bit of fear mongering relies upon a slippery slope fallacy. What is clearly a PR move is somehow actually the machinations of a massive surveillance network. Why? Why would Apple do that?


> Why would Apple do that?

Because it gets required to by the laws of countries responsible for most of their market? And because authoritarian regimes intentionally blur the lines between criminal and political surveillance over time, making it harder for companies to draw hard policy lines? Concern about this doesn't require any Bond villains; it just requires well-intentioned pragmatists on one side and ideological politicos on the other.

FWIW, I think you have a point about the doom-saying. Countries with good judicial protections around privacy already use it as a backstop against dirty tricks where folks use one type of surveillance to require another. But it makes sense to wonder how those barriers will erode over time, and to worry about places where they don't exist.


In which case, how is this a slippery slope? They didn’t do something, and there was no legal mandate. Now they are required to do something to be able to operate in said country. Is the slippery slope that they can be in compliance faster?


That’s not a slippery slope; that’s a fully built system just waiting for external pressure to make the slightest change.

Apple’s changes would enable such screening, takedown, and reporting in its end-to-end messaging. The abuse cases are easy to imagine: governments that outlaw homosexuality might require the classifier to be trained to restrict apparent LGBTQ+ content, or an authoritarian regime might demand the classifier be able to spot popular satirical images or protest flyers.

[1] https://www.eff.org/deeplinks/2021/08/apples-plan-think-diff...


We can carry every single element back to its inception and make identical arguments. That's the problem with slippery slopes.

Apple - creates messaging platform.

Slippery slope - governments can force Apple to send them all messages.

Apple - creates encrypted messaging platform.

Slippery slope - governments can force Apple to send them the keys and all messages

Apple - Adds camera to device (GPS, microphone, accelerometer, step detection, etc)

Slippery slope - governments can force them to record whenever the government demands and send a live stream to the government. Or send locations, or walking patterns, or conversations.

Apple - makes operating system

Slippery slope - Basically anything. There are so many ways I could go with this. Government can force them to make a messaging and photo system, add cameras to their devices, entice users to take and accumulate pictures, and do image hashing and report suspect photos.

That's the problem with slippery slope arguments. They become meaningless rhetoric.

Image perceptual hashing is literally a single person's work for two hours. If you really think the barrier between an oppressive government stomping on groups or not is whether Apple did a (poorly communicated) Child Safety PR move and implemented this trivial mechanism...the world is a lot more perilous and scary than you think.
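
The two-hour estimate is no exaggeration, for what it's worth. Here's a bare-bones perceptual "average hash" in Python, the crudest member of the family (nowhere near PhotoDNA or NeuralHash in robustness; uses Pillow):

  from PIL import Image

  def ahash(path, size=8):
      # Shrink to 8x8 grayscale, then set one bit per pixel above the mean.
      img = Image.open(path).convert("L").resize((size, size))
      pixels = list(img.getdata())
      avg = sum(pixels) / len(pixels)
      return sum(1 << i for i, p in enumerate(pixels) if p > avg)

  def hamming(a, b):
      # Two images "match" if their 64-bit hashes are within a few bits.
      return bin(a ^ b).count("1")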


Do keep in mind that at least one govt has successfully pressured apple to give up on its privacy

Also, the difference here compared to the scenarios you've mentioned above is that Apple has already walked pretty far. All it would take is to make the verification happen on all local files, irrespective of whether they are uploaded to iCloud or not. Then any government can provide their own hashes for Apple to keep track of. Apple won't have any idea what those hashes actually mean.


"Do keep in mind that at least one govt has successfully pressured apple to give up on its privacy"

No company can defend you from your government.

"All it would take"...

That is the slippery slope. If a government is going to say "that's a nice looking hashing system you have there, now we need you to..." they could as easily -- more easily -- have said "that's a nice filesystem you have there, we need you to...".

Hashing files and comparing them against a list is literally a college grad afternoon project. There is absolutely nothing in Apple's announcement that empowers any government anywhere in any meaningful way at all. It is only fear-mongering (or simply raw factual errors as seen throughout this discussion) that makes it seem like it does.


Sure, no company can completely defend me from my govt, but at least they can choose not to build tools that make it easier. Also, while it could be easy for a college graduate to build such a system, only Apple has the capability of rolling out this system to all of their phones. Otherwise we would have already seen such a system implemented in other countries


"only Apple has the capability of rolling out this system"

Right. Exactly. Any country in the world can mandate that Apple do anything they want (any of the slippery slope mandates), and Apple can comply or withdraw from the market. If any country wanted to demand that Apple compare all files against a ban list, they could have done that in 2007, or any year since. There is zero technical barrier and it would be a trivial task.

The point is that this development moves the bar infinitesimally. I would argue not at all. Fearmongering that depends upon "Well what if..." hasn't actually been thought through very well.


> "only Apple has the capability of rolling out this system"

I guess this is where I differ with you. No government was able to pressure Apple into implementing complete client-side verification till now, but who knows how things will be now that they have a system in place that can be easily modified.

Anyways I hope your prediction turns out right. I live in a place where privacy laws don't really exist. The last thing I want is any system that can be easily exploited by authorities to crush dissent.


That is because you are incorrectly using the charge of the slippery-slope fallacy to hand-wave away legitimate concerns by reducing them to an absurdity, which is itself a logical fallacy.

The legitimate concerns here are not a slippery slope; they are real and self-evident, born from countless examples throughout history of these types of surveillance systems being abused. The EFF points to a couple; other articles point to other examples.

It takes an extremely dense (or intellectually dishonest) person to simply hand-wave all of these concerns away with "well, that is just a slippery slope so I can ignore you".


I guess I'm just extremely dense.

I see Apple implement a trivial (laughably trivial) system -- a tiny little pimple on a massive platform of hardware and software -- and I don't think "this is it! This is what puts the surveillance state over the top!", I think "Hey look, trivial, constrained system".

But if you believe that this was what was between a surveillance state or not...well you must be much more intellectually capable than I and I am honored to be in your presence.

It is the very definition of a slippery slope. A journey of a thousand miles begins with a single step, but usually you're just walking to the bathroom.


> What is clearly a PR move

I don't know what I think about the move, but I know this: intentions don't matter. Capabilities matter. If the current intention is benign, that means little.


The privacy implications are fucking horrible.

As an Android user I'd love to gloat; after all, Apple has really had the upper hand on privacy so far, and their users have not been shy about telling us. Again and again.

However any pleasure would be as short lived as peeing your pants to keep warm in winter, because if Apple proceeds, Google is bound to follow.

A user monitoring system like this is just ripe for abuse of the most horrific kinds. It must die.


Google Photos runs PhotoDNA on all photos there, so it’s no different from Apple scanning photos destined for iCloud photos with similar tech. I guess the only pushback is that apple has decided to do it on-device (still only to photos going to iCloud Photos) instead of server-side, where they have to disable E2EE to do so?


There's a huge difference between Apple scanning files that are on Apple's servers and Apple putting spyware on your encrypted device.


iCloud Photos are not end-to-end encrypted in the first place.


This opens the door to doing so. https://news.ycombinator.com/item?id=28081863


> I guess the only pushback is that apple has decided to do it on-device.

This is exactly what my problem is. They are using my CPU, battery time and network bandwidth to do this when they should be using their own resources. I know I can turn off iCloud but I am paying for that and we already have an agreement in place.

Honestly, it's the only concrete thing to complain about. Every other complaint is based on a what-if, slippery slope concern.

Of course, as per usual, nobody agrees with me here on HN... but that's fine with me because they simply don't reply which lets me know that they don't have any good arguments against my PoV.


Consider it’s not very interesting to argue with someone who is set in their ways, and the lack of counter argument is a lack of interest, not a superiority of viewpoint.

Anyway I agree with you, and have been curious what the neural net hardware on the new iPhone will be used for. Turns out it’s for checking my photo library for child abuse, how futuristic!


> Consider it’s not very interesting to argue with someone who is set in their ways, and the lack of counter argument is a lack of interest, not a superiority of viewpoint.

I have yet to hear a good counter argument though. If I had heard one, I would have changed my mind. So, I don't think I'm set in my ways. (I actually change my mind often when presented with new evidence!)

I will always consider downvotes without argumentation to be a lazy way of disagreeing and the people who do that I think are set in their ways. Until I hear otherwise, I will continue to consider my argument as superior.

Does anyone really change their mind until they hear a counter to what they already think? I don't think so... I already argued with myself and came up with my opinion on this, so really - I need external arguments to change my mind just like everyone else.


You're being downvoted for eschewing website guidelines.

https://news.ycombinator.com/newsguidelines.html


Oh yeah? Which one?

I kind of doubt it because I've been censored for simply asking very straightforward questions with zero adjectives.


"Please don't comment about the voting on comments. It never does any good, and it makes boring reading."


Nah. Maybe that was one from today, but it certainly doesn't apply to the comments I've been referring to.

No, what I gather is that many people cannot change their minds when presented with good evidence. Here's one: they complain about "monocultures" as if they're always bad, but when I point out that the Linux kernel created a monoculture and the world hasn't imploded, they have no comeback. So they do the only thing that they can do.

It's fine with me, I wear it as a badge of pride because I know I'm right whenever that happens.


Well, I mean if Gmail has already done this kind of scanning for years (as is reported) then you’d assume Google Photos probably already does too if you sync it to their cloud (as does iCloud on the server side). But yeah, client side is a whole different thing…


Out of curiosity, how does Samsung Knox handle privacy?


What if you choose not to upload photos to iCloud?


According to Apple, photos are only checked in the process of being uploaded. So if you don’t upload, no check either.


Privacy is not something Apple, or any company, gives us; it's something they extract from us, by tracking us and harvesting data. If Apple were serious, as opposed to positioning themselves as less extractive than GFAM and others, they would not be taking this road. Yes, they say it's on iCloud for now, but the signs are that it will be done client-side in future. Whether you believe that will happen or not, the risks are too high; so it's prudent to assume the worst.

This is a first step to "Detective in your Pocket", cloaked, intentionally or not, by a well-meaning objective. An objective we can all support. If you wanted to put in a thin wedge on distributed surveillance, where better to start? As pointed out, CSAM filtering/scanning is already done, so that's not the issue here. There's a big debate to be had, and being had, on the upsides/downsides and benefits/dangers of AI and false positives. That's a huge issue in itself, but it's not the biggest concern with this move. If Apple pushes on with this, it's a clear signal that they wish to march us all to a new world order of Distributed Surveillance as a Service. A march in step with the drumbeat of your increasingly authoritarian (or, if you are lucky or more generous, safety-conscious) government.

I have signed the letter to express my very strong personal concerns, but also as the CEO of a company that takes the CSAM problem seriously and seeks to provide solutions, in search, without surveillance.


It still boggles my mind how naive people in the tech industry are; they fall for the same trick again and again over the course of decades. The same people who fell for Google's "don't be evil" and Google pretending to be better than the Evil Empire Microsoft are now shocked that Apple was really only in business for money from the start and the privacy stuff was just a marketing gimmick.


That's a little cynical though, don't you think? Couldn't it be that they were sincere in the beginning, but over time company culture changes, the original founders move on/retire/die/etc., and pressure from governments, lobbyists, etc. adds up? I guess it's the same result in the end.


Talking about this with non-tech people might make the distrust in anything they don’t understand worse. Think vaccines.


Refusing to acknowledge egregious top-down decisions which threaten or violate either one's well-being or some perceived fundamental right has a much worse effect on public trust.

"Why didn't you tell me this was going on?" "You're simply too stupid to handle the idea that your betters might occasionally be untrustworthy"


How are you going to argue that it’s bad for parents to be informed when strangers send sexual imagery to their children?


You argue the applications of the technology further down the line. Remind them how Facebook's safety features turned into tools for censorship.


Can you think of a time when that kind of argument has ever worked?

I remind you that people made that argument about Facebook and it didn’t.


Raising awareness won't stop any of this. It's inevitable. We have the technological capacity and institutional interest required to implement it, it will be done, and it will be endemic.

Raising awareness is about letting people know so that they might take the necessary precautions if they consider themselves to be at risk of its abuse, and it degrades their faith in the institutions that support it.


It doesn’t matter whether they are aware. They can’t take precautions. There are no technical solutions that people can use, and technologists seem to be uninterested in working on them. Apple’s solution is the best on offer. Raising awareness about Facebook’s problems hasn’t harmed Facebook.


They can take precautions by not using iOS devices. The goal isn't to harm Apple, it's to make the knowledge available to those who need it, and have the will to avoid it.

That could mean anything from watching what they put on their iPhone, to putting them down the track of using degoogled android variants on select phones, to ditching cell phones all together depending on their needs.

Knowing that your phone is watching, reporting, and potentially moderating the content in it is information which is valuable in and of itself. Even if it only has utility to a fraction of the population.

We find ourselves in the tightening noose of a latent authoritarian technocratic surveillance society. Few people have anything to fear from it, or the resources to escape it. But some do, and should be given every piece of information that might help them moderate or escape it.


Android has been doing this all along, so not using iOS devices will not help.

As I said, there are no good solutions.


* drop your phone

* don't store images on your phone

* use a rom like Graphene, don't install a cloud storage or Google Play Services

Those aren't great solutions, but they're options.


They are options that almost nobody can or will use, so they won’t have any impact.

If your goal is to inform a small minority of expert users that they should protect themselves against corporate/government encroachment, by the looks of the comments here, I’d say you’ve already succeeded.


I think you're missing my point. You do your best to inform the majority not because the majority will enact change based on it, but so that the minute and disparate slices of the population for whom that information is relevant, but who might not otherwise have been exposed to it, can access it and take whatever actions they deem necessary and economical to mediate the potential threat.

This forum is too niche to fulfill that purpose on its own.

The mass distribution of the information, and arguments against apple's behavior should be intended to incidentally target relevant niches beyond technically expert circles of which the communicator is unaware.

That the argument and resolution of these issues is irrelevant to most of those who will be exposed to it is immaterial.


Zepto, the thread won't let me respond directly to you, but I haven't touched any of my comments since they first received a reply. I often touch my comments within a couple minutes of posting in order to correct typos or clarify my intentions, but I don't do so without remark once they've become canonical in a thread.

Your replies to my threads are as they were written.


Some of what you said wasn’t there when I replied. It could be because most of my replies were composed quite slowly from a mobile device, giving you plenty of time to make edits while I was writing. Whether this was intentional or not, it means I wasn’t replying to what you wrote.


I notice you edited your top level comment, and it seems like several others to change the meaning of our thread.

I’m not missing your point if you are changing your point after I have replied.


For some reason I couldn't reply to your comment initially, so I put this in a sibling (typos and grammar excepted):

I haven't touched any of my comments since they first received a reply. I often touch my comments within a couple minutes of posting in order to correct typos or clarify my intentions, usually by appending a paragraph. But I don't do so without remark once they've become canonical in a thread.

Your replies to my comments are as they were written.


Leave the phone at home. That's a technical solution to the police in your pocket.


Intentionally deceiving people is probably not a good strategy for building their trust.


Yes, most people are not good at persuasion, and a generic mention of the fact may be harmful as you say.


[flagged]


When did they take away my guns? I must have my missed that town hall.


"See what Apple is doing is basically vaccinating your iPhone against child porn. That's bad because there's the possibility of side effects, though rare, and you should have the decision yourself as to whether you vaccinate your phone."

Wait, no, that's not the argument ...


>touting itself as a privacy-centric alternative

There is a difference between selling your data to the highest bidder through a stalker capitalism ecosystem and giving governments carte blanche access.

I am not at all advocating for the latter, but if you are fighting that battle, CP is not the hill to die on.


This is why it's a problem - this is the most defensible usecase for it, and any upscaling later will just get seen as a natural progression.


My point was that battle was already lost a long time ago when terrorism was the use case.


> Please talk to non-tech people around you about this.

They were already sold into the Apple ecosystem with the iPhone 12s, M1 Macs and iPad Pros. They are going to have a hard time moving to another platform.

Best part? Apple is a huge customer of Google Cloud for storage, so you can now tell them that Google holds all the files in their iCloud account. [0]

'With privacy in mind.' /s

[0] https://appleinsider.com/articles/21/06/29/apple-is-now-goog...


In all fairness, in theory they could have all that data encrypted with keys that Google doesn't have.


Disappointing.

While moderating CP distribution and storage is obviously the right thing to do, I do not think this approach will be a worthwhile endeavor for apple and governments.

Let me explain: imagine you are a bad guy with bad stuff on your iPhone, and you hear Apple will be scanning your phone. What is the next logical thing you would do? Migrate your stuff to something else, obviously. Encrypting a USB stick does not require a high degree of technical skill [1]; neither does running Tor Browser.
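
To underline how low that bar is, here is file encryption in its entirety with Python's cryptography package (a stand-in for the VeraCrypt workflow in [1]; the filenames are hypothetical and key management is left to the reader):

  from cryptography.fernet import Fernet

  key = Fernet.generate_key()  # keep this somewhere safe, off the device
  with open("photos.tar", "rb") as src:
      token = Fernet(key).encrypt(src.read())
  with open("photos.tar.enc", "wb") as dst:
      dst.write(token)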

So I am thinking 3 things:

1. This is not about CP or doing the right thing, but rather apple bowing to government/s pressure to create a half-ass backdoor to monitor and squash dissidents.

2. Apple and government/s are incompetent and do not realize that criminals are always one step ahead.

3. Most likely, some combination of the above - government/s have demonstrated they are willing to go hard on dissidents on the personal electronics front [2], not realizing that they will only mitigate, not exterminate, the dissent they fear so much.

For the average joe, I would say this - your privacy is effectively gone when you use an electronic device that comes with closed-source code pre-installed.

For the benevolent dissidents - there are many tools at your disposal [3, 4].

Likely next iteration - governments / courts / cops compel suspects to hand over passwords and encryption keys over to prosecutors. It looks like it already started [5, 6], so act accordingly.

[1] https://www.veracrypt.fr/code/VeraCrypt/

[2] https://www.washingtonpost.com/investigations/interactive/20...

[3] https://files.gendo.ch/Books/InfoSec_for_Journalists_V1.1.pd...

[4] https://www.privacytools.io/

[5] https://www.bbc.com/news/uk-england-11479831

[6] https://en.wikipedia.org/wiki/Key_disclosure_law


It is not the role of infrastructure providers to reach into private, on device content.

That is like the USPS opening every letter and scanning what is inside. Even if it's done with machine learning, that is absolutely not okay, no matter what it is for.


Though we were okay with Gmail doing the same because "hey, it's free!".

But I understand your concern here. All our lives are now digitalised, and maybe stored forever somewhere, and anything you do, even if completely benign, could be considered a crime in some country, or even in your own country a few years from now.


Your gmail messages live on Google's servers. It's not ok for them to scan everything, but it's completely different from Apple scanning your phone's content.


The hashes are computed from images that live in iCloud - how is this different?


Suppose Apple flags some image in iCloud using a hash that was probably sent from the device. Do they currently have the capability to decrypt the image to check if the content is actually illegal before reporting it to the authorities without needing access to the physical device?

(Not trying to make a counter-argument to your comment. I genuinely don't get this part).


Hmm, not sure what you mean. The hashes are shipped with iOS, and then (if parental controls are enabled) compared with hashes computed against the unencrypted data on-device, and a warning is displayed to the user and their parents.

I'm not sure how the image is displayed on the parents' device - it could be sent from the child's device, or the image used to compute the hash that the child's image matched could be displayed.


Are you claiming GMail looks through the emails to match them with crime keywords and reports you to the authorities? citation needed


I'm claiming they used the content of your messages to profile you and better target their ads, until they stopped this practice in 2017 in favour of reading your email "for better product personalisation".


gmail (and any other web based email) needs to "read" the email content to provide a search function. If the same program also uses the content to pick some ads from the inventory then I'm personally ok with that.

That is different from searching them for illegal material based on what illegal implies at a given time and location.


nope. i’m not okay with Gmail doing that. in fact I’ve completely removed all google products from my life. i don’t need someone to constantly spy on me.


But the USPS does scan every letter (and package for that matter) for things like weapons and anthrax.

Don't think the US is taking a hard-line stance here. If it were possible to non-destructively scan mail for CP they absolutely would.


The scanning for those things doesn’t allow them to scan the written or printed content of letters sent through the post.

If i recall correctly isn’t there a legal distinction involved in mail privacy, I recall an explanation involving the difference between postcards and letters with respect to lawful interception…


Shine a bright light through the envelope and you should be able to make out some letters. Maybe make a neural network for reading these letters and putting them together into sentences.


UPS and Fedex can do this as private companies, my understanding is the USPS actually can't due to the 4th Amendment:

"The Fourth Amendment protects the privacy of U.S. mail as if it had been kept by the sender in his home. Ex parte Jackson, 96 U.S. 727, 732-733, 735 (1878); U.S. v. Van Leeuwen, 397 U.S. 249, 252-52 (1970). It cannot be searched without a search warrant based on probable cause."


Agreed, but not like we have a choice with that matter.


You have a choice about whether to keep or discard your iPhone.


I'm not even interested in the outcome of this now. Apple has chosen to act against the users without consultation. They should be punished for this by their customers walking away.


It’s astounding how many very smart people are getting this wrong.

Perhaps I missed it, but does anywhere in this letter mention that both of these features are optional?

CSAM detection depends on using iCloud Photos. Don't rent someone else's computer if you don't want them to decide what you can put on it.

Content filtering for iMessage is for kids' accounts only, and can be turned off. Or, even better: skip iMessage for Signal.


The new system is overkill for iCloud, which Apple already scans. The obvious conclusion is Apple will start to scan photos kept on device, even where iCloud is not used.

Very smart people are not getting this wrong.


>> The obvious conclusion is Apple will start to scan photos kept on device, even where iCloud is not used.

Wrong [1]. It's even in the first line of the document which you apparently didn't even read:

  CSAM Detection enables Apple to accurately identify and report iCloud users who store
  known Child Sexual Abuse Material (CSAM) in their iCloud Photos accounts
This doesn't mean I'm supporting their new "feature".

1. https://www.apple.com/child-safety/pdf/CSAM_Detection_Techni...


Nah. It means that they don't scan it yet.

Also, reading that doc and pointing to it means you trust Apple.

I used to trust Apple when they were peddling their privacy marketing stuff. Not anymore.


OK so let's all lose our shit over things that haven't happened.


> The obvious conclusion is Apple will start to scan photos kept on device, even where iCloud is not used.

Why is this the obvious conclusion?


Partly it falls under the supposition of "if they can, they will". It's also suggested when they tout that they are going to start blurring "naked" pictures, of any kind, sent to children under 14. Which means they need some kind of tech to detect "naked" pictures, locally, across encrypted channels, in order to block them.

In theory, this is different tech from CSAM detection, which is supposed to check hashes against a global database, vs determining the "nakedness" score of an arbitrary picture.

But, scan them once, scan them all, on your phone. The details start to matter less and less.

Also, since they're already scanning all of the photos up on iCloud, why would they need to put it locally on the phone?

Finally, I know that Apple scans photos on my device, because that's how I get those "memories" of Furry Friends with vignettes of my cats. I don't even use iCloud. (To be clear, I love this feature. Send me movies of my cats once a week.)
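
To make the distinction concrete, here is a hedged sketch of the second mechanism, the "nakedness" scoring; the model interface and threshold are hypothetical stand-ins, not Apple's API:

    # Hedged sketch: `model` is any callable returning a nudity probability.
    # Nothing here is Apple's actual API or model.
    def should_blur(image_bytes, model, threshold=0.9):
        # Unlike database matching, a classifier can flag images it has
        # never seen before, which is exactly why it needs no hash list.
        return model(image_bytes) >= threshold

    # Usage with a dummy model:
    flagged = should_blur(b"...", model=lambda img: 0.95)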


> Partly it falls under the supposition of "if they can, they will".

Hm. But there are many millions of things Apple could do, but haven’t, because it would hurt their business model. So how would what you are proposing they will do help their business model?

> Which means they need some kind of tech to detect "naked" pictures, locally, across encrypted channels, in order to block them.

I know you know this is the case, but to make it clear for anyone reading: Apple is not blocking nude pictures in the kids filter. It’s blurring and giving a message. Again I ask: why would using this technology on non-nude stuff benefit Apple?

Are we worried about Apple or are we worried about the government forcing Apple to do things that this technology enables?


>They already scanning all of the photos up on iCloud already

I can't find a source for this. Do you happen to have one?

It seems to me that Apple doesn't want to host CSAM on their servers, so they're scanning your device so that if it does get uploaded, they can remove it and then ban you.

They're not scanning all photos on iCloud, as far as I can tell.


Apple doesn't "scan" iCloud. Not sure what you're talking about. Generally everything in iCloud is E2E encrypted, with the exception of iCloud Backups, where Apple holds onto a decryption key and will use it to comply with subpoenas. But nothing is "scanned," and if you don't use iCloud backup, Apple can't see your data.


iCloud Photos aren’t E2E encrypted, but it’s unlikely they’re scanned for CSAM today because Apple generates effectively 0 references to NCMEC annually.


I also believe Apple doesn't really want to scan your photos on their servers. I believe their competitors do, and Apple considers this compromise (scanning on device with hashes) to be their way of complying with CSAM demands while still maintaining their privacy story.


Yes. Scope creep is much easier when implemented as scanning on plaintext data.


It's too late to edit my post, but you're right. iCloud Photos are not E2E encrypted, my misunderstanding.


Apple does not already scan iCloud.


Not sure why this got downvoted. This is correct and very thoroughly documented.


The meme tides have turned against logic on this topic.


This is how it always starts. Apple went from "no, it's not possible to unlock the shooter's phone" to "yeah, you can give us the fingerprint of any image (maybe documents too) and we'll check which of our users has it".


Then why do the "CSAM" perceptual hashes live on the device and the checks themselves run on the device? Those hashes could be anything. Your phone is turning into a snitch against you, and the targeted content might be CCP Winnie the Pooh memes or content the people in charge do not like.

We are not getting this wrong. Apple is taking an egregious step to satisfy the CCP and FBI.

Future US politicians could easily be blackmailed over the non-illegal content on their phones. This puts our democracy in jeopardy.

The only reason this was announced yesterday is because it was leaked on Twitter and to the press. Apple is in damage control mode.

This isn't about protecting children. It's about control.

Stop defending Apple.


This boils down to two separate arguments against Apple: 1) what Apple has already implemented, and 2) what Apple might implement in the future. It's fine to be worried about the second one, but it's wrong to conflate the two.


>It's fine to be worried about the second one, but it's wrong to conflate the two.

Agreed, and just to be clear, I'm worried about that too. It just appears that we (myself and the objectors) have different lines. If Apple were to scan devices in the US and prevent them from sharing memes over iMessage, that would cross a line for me and I'd jump ship. But preventing CSAM stuff from getting on their servers seems fine to me.


> "preventing CSAM stuff from getting on their servers seems fine to me"

You're either naive or sticking your fingers in your ears if you think this is the objective.

Let me repeat this again: this is a tool for the CCP, FBI, intelligence, and regimes.


I think the situation is clear when we think of this development from a threat modelling perspective.

Consider a back door (subdivided into code back doors and data back doors) placed either on-device or on-cloud: four possibilities.

- Scanning for CP is available to Apple on-cloud (in most countries).
- Scanning for CP is available to other countries on-cloud (e.g. Chinese users have iCloud run by an on-shore Chinese provider).
- Scanning for CP was not available to Apple on-device (until now).

This is where the threat model comes in. Intelligence agencies would like a back door (ideally both code and data).

This development creates an on-device data back door, because scanning for CP is done via a neural network algorithm plus a database of hashes supplied by a third party.

If the intelligence service poisons the hash database, the attack won't work in general, because the neural network scans for human flesh and things like that, not other kinds of content. So the attack works for other sexual content, but not for political memes. It is a scope-limited back door.

For it to be a general back door, the intelligence agency would need the neural network (part of Apple's on-device code) as well as the hash database to be modified. That requires both a new code back door (which Apple has resisted) and a data back door, both on-device.

Currently Apple has resisted:

- Code back doors on device
- Data back doors on device (until now)

And Apple has allowed:

- Data back doors in cloud (in certain countries)
- Code back doors in cloud (in certain countries)

In reality, the option to not place your photos in iCloud is a euphemism for "don't allow any data back door". That is because iCloud is itself a data back door, since it can be scanned (either by Apple or by an on-shore data provider).

My analysis is that the on-device scanning does not improve Apple's ability to identify CP since it does so on iCloud anyway. But if my analysis is incorrect, I'd be genuinely interested if anyone can correct me on this point.
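
To summarise the 2x2 taxonomy above as data (this dict purely restates the comment's own claims, not independently verified facts):

    # The four back-door possibilities, per the analysis above.
    backdoor_status = {
        ("on-device", "code"): "resisted by Apple",
        ("on-device", "data"): "resisted until now (this change)",
        ("on-cloud", "code"): "allowed in certain countries",
        ("on-cloud", "data"): "allowed in certain countries (on-shore iCloud)",
    }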


iCloud Photos aren't currently E2E encrypted, but this system provides a clear path to doing that, while staving off accusations that E2E encryption of iCloud will allow people to host CP there with impunity.

When the device uploads an image, it's also required to upload a cryptographic blob derived from the CSAM database, which can then be used by iCloud to identify photos that might match.

As built at the moment, your phone only “snitches” on you when it uploads a photo to iCloud. No uploads, no snitching.

We know that every other cloud provider scans uploads for CSAM, they just do it server side because their systems aren’t E2E.

This doesn’t change the fact that having such a scanning capability built into iOS is scary, or can be misused. But in its original conception, it’s not unreasonable for Apple to say that your device must provide a cryptographic attestation that data uploaded isn’t CP.

I think Apple is in a very hard place here. They’re almost certainly under significant pressure to prove their systems can’t be abused for storing or distributing CP, and coming out and saying they’ll do nothing to prevent CP is suicide. But equally the alternative is a horrific violation of privacy.

Unfortunately, all this just points to a larger societal issue, where CP has been weaponised and authorities are more interested in preventing the distribution of CP than its creation. Presumably because one of those is much easier to solve, and creates better headlines, than the other.
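
As a rough illustration of the voucher idea described above: the sketch below only simulates the threshold logic. In the real design the per-photo match bit is cryptographically hidden from the server (Apple's technical summary describes private set intersection plus threshold secret sharing), and MATCH_THRESHOLD is a made-up value:

    # Toy simulation, NOT the real PSI/threshold secret-sharing protocol:
    # the match bit is visible here purely for illustration; in Apple's
    # design the server learns nothing per-photo below the threshold.
    import os

    MATCH_THRESHOLD = 10  # illustrative value only

    def make_voucher(image_hash, known_db):
        # On-device: every upload carries a voucher, match or not.
        return {"matched": image_hash in known_db, "payload": os.urandom(16)}

    def server_can_review(vouchers):
        # Server-side: payloads become reviewable only past the threshold.
        return sum(v["matched"] for v in vouchers) >= MATCH_THRESHOLD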


>iCloud photos are encrypted, so scanning has to happen on device.

Is this true? I feel like Apple benefits from the confusion about "Encrypted at rest" + "Encrypted in transit" and "E2E Encrypted". It's my understanding that Apple could scan the photos in iCloud, since they have the decryption keys, but they choose not to, as a compromise.

I'm keying into this because this document: https://support.apple.com/en-us/HT202303 doesn't show Photos as part of the category of data that "Apple doesn't have access to." That's mentioned only in the context of the E2E stuff.


You're right, iCloud Photos currently isn't E2E. I've updated my comment.


> Future US politicians could easily be blackmailed by the non-illegal content on their phones. This is a jeopardy to our democracy.

US politicians should not be using normie clouds full stop. This is a risk and always has been.


Do you know which of your kids is going to be a politician? Better keep them all off the internet to keep their future career safe.

This is why it's important to stop now.


It's baffling. It seems like nearly everyone losing their shit over this doesn't understand how it works. Most of the commentary I see here and elsewhere is based on a misunderstanding of the implementation that blends the CSAM scanner with the child messaging scanner.


This reminds me that a two OS market isn’t a healthy one for consumers. We could use more diversity.


Okay, but in an n-OS market the companies behind them would still be pressured to implement CSAM detection. There's still the government monopoly.


postmarketos.org


A quick look at that website and you can see it's not for normal or typical consumers. It's for a technical crowd who are into technical things and tinkering.

It's like telling an average person to use GNU/Linux for their desktop OS and then watching them struggle to get printing to work well (I have been through this).


This letter assumes Apple is too stupid to see the risks of the tool they've built, or has overlooked them. I guarantee this isn't the case.

We have to assume, given the pressure Apple has been under to build a surveillance tool like this, that any "abuse cases", such as identifying outlawed LGBTQ content, are in fact exactly what this tool was built for.

Apple have already proven their willingness to bend the knee to the CCP with actions such as App Store removals. I almost can't blame Apple for this, because in all likelihood, if Apple refuses to cooperate they will eventually lose market access. Couple this with the fact that Apple today is seeing increasing pressure from Western governments to surveil and censor "hate speech" and "misinformation", and they've probably reluctantly accepted that sooner or later they will have no choice but to spy on their users.

What I'm trying to say is that Apple isn't the problem here. My guess is Apple's engineers are smart enough to know this technology can and likely will be abused. They're also probably not that interested in spying on their customers as a private company in the business of convincing people to buy their communication devices. Apple did this because governments have been demanding this. And in recent years these demands have not only been coming from the CCP and other authoritarian regimes, but also from governments in their primary markets in the West.

The only way we can fight this is by demanding our politicians respect our right to privacy and fight for technologies like e2e encryption on all of our devices.

I don't want to be overly negative, but realistically this isn't going to happen. Most people who use Apple's devices don't understand encryption or the risks content scanning technology present. And even if they did, no one is going to vote for a representative because of their stance on encryption technology. It seems almost inevitable that the luxury of private communication for all will eventually be a thing of the past.

I'm just glad I know how to secure my own devices.


I'm confused. If iCloud backups are not encrypted [1], and this only scans iCloud photos, why can't they just do this server-side? I'm not saying server-side is OK, but it's at least not device-side (which is obviously a slippery slope).

[1] https://www.reuters.com/article/us-apple-fbi-icloud-exclusiv...


There has been some speculation that they will turn on encryption after pushing this out. If this turns out to be true, Apple sucked at controlling the PR for this whole thing.


Both of the announced features sound fine on their own — Apple has always been able to scan iCloud and send your data to police [0], and PhotoDNA is a decade old; whereas on-device machine learning on kids' phones does not (according to Apple) involve sending anything to Apple reviewers or the govt.

But announcing both features in the same post makes it inevitable that they get conflated, e.g. “Apple’s machine learning will send suspected child porn-like images from your iMessages to iCloud, where it will be reviewed and possibly sent to the police”. Apple obviously knows this — so them being ok with that raises some serious eyebrows, as if they’re quietly setting the groundwork to make that a reality.

[0] https://www.businessinsider.com/apple-fbi-icloud-investigati...


> Apple has always been able to scan iCloud and send your data to police

I don’t see how the article you linked to explains how Apple can “scan iCloud” for the police. What do you mean? It seems like they just hand data over for a specific warrant related to individual users.


Sorry, are you insinuating that Apple has the ability to retrieve, read, and provide the unencrypted data from users' iCloud accounts, but not the ability to search across that data?


Where is Richard Stallman when you need him?

This is the War on General Computation you have been warned about, and it's good to reiterate: "you (and I mean you as an individual and you as a company) are either with us, or against us".


> Where is Richard Stallman when you need him?

He was canceled, but he still regularly updates his personal website: https://stallman.org/


I mean, he did warn everyone decades ago that this was going to happen.

Chiming in on this would just give the cancel mob more ammunition.


If someone has gone as far as Apple already has, I doubt an open letter will change their thinking. Maybe they'll step back a little and bring out the next similar thing, one that looks less invasive, some time later?

Things like this must be stopped! Customers should not buy anything from companies that are hostile, and laws against this kind of usage need to be passed. As far as it looks, the laws in Europe are in place and currently prevent this. But only currently; we've seen how companies like Apple push the boundaries. We as humans behave totally irrationally: we complain about inhumane working conditions at Amazon and then order the next item. We complain about a golden prison from Apple and buy the next iPhone. Will we change?


It won’t matter. Most of the public will applaud the move, many will call for it to be rolled out across every platform, including desktops. As this gathers momentum, it will eventually become illegal not to have pervasive scanning apps running on your device. Naturally, the technology will almost immediately be expanded to other kinds of content, and will be used as a wide open backdoor into everything in everyone’s life.

While this is not inevitable, I see a strong, perhaps overwhelming public response as being the only thing that will prevent it, and I do not see that response happening.


You guys can get outraged all you want. It's pretty clear that Apple debated this internally and decided that the backlash was worth whatever they got from this deal...

By now, it's pretty clear that privacy is not the top priority of most people. See how much data people are giving FB/Google every second...


This defeatist attitude is not helpful.

The louder people are about this, the more it hurts and the more likely the policy is reversed.

If the policy is not reversed, then at least the community has been loud enough that people took notice and understood that this is happening, giving them a chance to vote with their wallets; for most people, what happens on a computer or a phone is a complete mystery.


What I am saying is that people have been told pretty clearly that FB would sell all your data to anyone, use it to target people with misinformation that clearly influenced election outcomes, and lie about it everywhere. The dent that these revelations made in FB's userbase is peanuts.

Same thing with whatsapp.

Apple saying that they'll use a robot to check that their users aren't paedophiles is clearly not going to change anything. Use your energy elsewhere...


> This defeatist attitude is not helpful.

Maybe, but it's the truth. It's impossible Apple didn't think this through. They did, and they made a decision, and Google will do the exact same thing in 6 months if not earlier.


A protest outside of Apple's main offices or Tim Cook's house would draw a good amount of attention to this issue.


Disagree. I think this is a boon to E2E encryption. However you might feel about it, content-filter legislation is unstoppable; otherwise it will just take a few high-profile media cases for E2E encryption to be killed. This is the right (privacy-conscious) way to implement it: on device. Google has been doing this for years, but server-side. I recommend reading the actual docs: https://www.apple.com/child-safety/


More horrifying is the number of people defending or apologizing for this move. Truly mind boggling!


Anything for the children!!

I think that earnest acceptance of “well, it’s for a good cause” style arguments indicates a severely stunted ability to generalize: either to generalize applications of the technology (it works on any proscribed content, not just stuff people generally agree is bad) or to generalize outcomes of this kind of corporate behavior (Apple will continue to actively spy on their customers as long as they can come up with a tenuous justification for it).


yes, think about the children!

nobody thinks about the children that are literally starving every day or that are in social settings where they simply cannot succeed.

Let’s just treat everyone like disgusting criminals until THEY prove they are innocent by giving us access to everything they do.

The more I see, the more I want to literally dump all technology and go live off the grid.


Translation: The child porn industry needs to wait until we solve world hunger. Priorities, people!


The real translation is that nobody gives a fuck about the children but they use this as an excuse to peddle their bs backdoors.


You got that right—this is a bullshit back door. That's my favourite kind of back door. Because it does one thing incredibly well: it starves of oxygen the one remotely tractable argument politicians have found to break real E2E encryption with real, non-bullshit back doors.

With this technology in place, never again will the government be able to wail "think of the children" when claiming E2E needs a back door in order to protect the children.

Give me the bullshit back door any day.


This is such a massive story now there will be damage control.


Beyond the obvious here, I'm just shocked Apple thought they could do this without causing a shitstorm of biblical proportions. To market yourself as the "privacy-centric" ecosystem and then do a complete about-face requires either disarray and tone-deafness at the management level... or, alternatively, extremely nefarious intentions. I'm honestly not sure which one it is at this point, and their reaction in the coming days will probably reveal that.


While I applaud the goal, this alone is unlikely to achieve it. Those that should be affected are probably not using Messages to begin with, and if they are, they can easily switch to other communication apps.

It is also not quite clear whether Apple is taking a moral or a legal stand here. If it is legal, then this could in the future open doors to:

- Scanning for other types of illegal content

- Scanning for copyrighted content (music, images, books, ...)

- Scanning your iCloud files and documents

- Scanning emails to make sure you are not doing anything illegal

If it is morally driven and Apple really wants to take a stand against any CSAM on its devices, they would have to do it at a system level, monitoring all data being transferred (including all communications, all browsing, etc.), so this could just be the first step.

A moral-based agenda would be much easier for the broader public to accept, while a legal-based agenda could lead to other kinds of privacy-intruding consequences. And even a moral-based agenda would still set a precedent, as ultimately we do not know what Apple's "moral values" are, and what it would be ready to intrude on users' privacy over in the future.

Seems like a slippery slope for a company to take, any way you turn it, especially if privacy is one of your main selling points.

Another thought: if we as a society agree that CSAM is unacceptable, why not globally prevent it at the internet router level? Edit: jetlagged... we can't, because the data is encrypted. It has to be done at the client level, pre-encryption.


> While I applaud the goal

Really? What is the goal?

It's not to prevent child abuse, since passively looking at images is not, per se, abuse.

It's also not to limit the making of child pornography, since this will only search for known images that already exist in government databases.

If you make new images that are not yet in said databases, you're fine.

I'm not sure what the actual goal is (project a virtuous company image, maybe?), but the result could very well be the opposite of what people think.


> It's not to prevent child abuse, since passively looking at images is not, per se, abuse.

Consumption feeds production.


Or the opposite: consumption prevents people from acting out.


Apple is a legal entity, not a moral one. It may have moral employees, but at best Apple itself is amoral. It will do what the law allows (or compels) it to do.

This feature absolutely will be used for human rights abuse by countries like China, just like they have asked Apple to abuse their platform in the past. Why? Because those abuses are legal there, and capitulation will be the only way those governments will allow them continue to sell in their lucrative marketplace.


I'm glad people are sticking their necks out and opposing this. Typically people keep quiet because they don't want to be branded as "pro child-pornography" and that opens the door for this sort of surveillance.


Apparently "It will also scan messages sent using Apple’s iMessage service for text and photos that are inappropriate for minors."

https://www.washingtonpost.com/technology/2021/08/05/apple-c...

That seems even worse. In the US we have this terrible situation where it might be perfectly legal for two 17 year olds or a 17 and an 18 year old to have sex with each other but if they sext then they're engaging in child pornography which is a huge federal crime. But it hasn't been a problem until now because it's very hard to enforce. But it looks like Apple is now going to take part in enforcing that law. It'll be tattling to the parents rather than law enforcement but I still think that's terrible.


The intention to prevent child sexual abuse material is indeed laudable. But the "solution" is not. For those wondering what's wrong with this, two hard-earned rights in a democracy go for a toss here:

    1. The law presumes we are all innocent until proven guilty.
    2. We have the right against self-incrimination.
Pervasive surveillance like this starts with the presumption that we are all guilty of something ("if you are innocent, why are you scared of such surveillance?"). The right against self-incrimination is linked to the first doctrine because compelling an accused to testify transfers the burden of proving innocence to the accused, instead of requiring the government to prove his guilt.


How is this not a violation of the 4th Amendment if Apple is performing this search on behalf of a government entity?


There is quite a bit of case law precedent over the past fifty years, notably US vs. Miller, that upholds the “Third Party Doctrine”: if you give your records to a third party, then the government doesn’t need a warrant to get those records. Personally I think that ruling is awful and needs to be overturned, but that’s not likely to happen anytime soon.


Because you are submitting to it via their EULA. Probably time you read that contract again, eh? You have no rights once you give them up.


My take is that by simply announcing this sort of thing the Rubicon has been crossed. If you see one ant, there's 100 ants somewhere.

The point of bitching to Apple isn't to make them change, but to bring privacy issues to the attention of the hoi polloi. Cleaning up your own privacy act is the main lesson.

Maybe the only logical place to end up is a dedicated Chromebook in guest mode for financial transactions, an air-gapped workstation for artistic or other useful things, a rarely-used dumbphone, and getting out more among physical people.


Now I wonder how long it will take for this tech to reach macOS.


Later this year in the Monterey update.

“These features are coming later this year in updates to iOS 15, iPadOS 15, watchOS 8, and macOS Monterey.”

https://www.apple.com/child-safety/