Is macOS Look Up Destined for CSAM? (eclecticlight.co)
126 points by ingve on March 20, 2022 | 133 comments



I have been using Little Snitch since version 2. Practically all connections from macOS to the mothership are blocked by default; I turn the filter off only for updates. I generally avoid Apple software (Preview, Photos, etc.). Macs are used only for work; my personal information and browsing stay on Linux, with OpenSnitch and a minimal install. In the new information landscape, blind trust can be harmful.


One small suggestion: consider using tcpdump on your router to make sure LS is really stopping everything. Apple and Microsoft wised up to this some time ago, and some things hook into the network stack beneath the application firewalls. If you spot something, please report it to the Little Snitch developers.

The amount of data captured by tcpdump can be minimized by capturing only SYN/FIN/RST packets, assuming IP protocol 6 (TCP):

  'tcp[tcpflags] & (tcp-syn|tcp-fin|tcp-rst) != 0'
And if capturing to a file one can also limit the total number of packets captured with

  -c 1000
This may be useful if you are also capturing IP protocol 17 (UDP).
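
Putting the pieces together, a full invocation might look something like this (the interface and output file names are assumptions to adapt):

  tcpdump -i eth0 -n -c 1000 -w lookup.pcap \
    'tcp[tcpflags] & (tcp-syn|tcp-fin|tcp-rst) != 0'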


Thanks.


I couldn’t find a way to black-hole CIDR blocks in Little Snitch, which is necessary to completely silence macOS. However, if you do this at the router, you break things like iMessage on phones on the same Wi-Fi, and so forth.
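
For reference, on a Linux-based router the blunt version is a single blackhole route; 17.0.0.0/8 is Apple's block, and this is exactly the sledgehammer that breaks iMessage for everything behind the router:

  ip route add blackhole 17.0.0.0/8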


No, this vaguely related technology has nothing to do with whatever you are associating it with. If Apple wants to surreptitiously spy on your porn collection, they will do so, and won’t need cover.


Does macOS 12.3 and beyond phone home with details of images and documents you open in Preview?


As far as I know it's purely local search. I'm guessing this is part of the development arc towards AR applications but in the near term solves the problem of being able to search Photos for "birthday party" and hopefully get something sensible out.


No. Why would you think that? It goes against everything they have stated and the designs of the software. They are heavily focused on keeping all of that on device.


I don't think it's far-fetched at all that they'd do that without mentioning it. It certainly phones home when you open Preview [1]; what's another little ping when you're looking at CSAM or whatever else they've been instructed to look for? The recent debacle made it clear enough to me that their privacy reputation is little more than carefully curated marketing, and they're likely under tremendous pressure from lawless alphabet agencies to ramp up surveillance. I wouldn't put anything past a $3 trillion company; that's quite an empire to protect.

[1] https://mspoweruser.com/macos-big-sur-has-its-own-telemetry-...


> The recent debacle [...]

It's hard to argue that this "debacle" was not materially driven by the media, which did not accurately report the system's privacy protections, either because they did not understand them or because they did not care to.


Maybe. My awareness of it primarily came from this and other tech-minded sites, I'm not exactly sure what the media was saying about it. I understood the privacy protections but still found the features positively horrifying. I stopped using an iPhone and I stopped using iCloud on all devices because of that announcement.


There is some reprieve in using a good firewall, or even an off-device firewall.

However, with many of these services, if you try to kill them they come back, and if you delete them it will sometimes literally break your OS.

For example, if you remove the OCSP daemon, you can’t start any program on your computer.


Also, Apple is making it virtually impossible to make changes to the OS now with the sealed system volume. We as the operator have no workable way of making changes there. You can do the whole bless thing, but you have to boot into several modes several times, for every update. Otherwise it won't even boot.

I think this is a worrying development. Until now our computers were actually ours. Now they're controlled by the vendor, and looking over our shoulder.

I think using the user's own device to spy on them (whatever the reason!) is a big red line to cross, and it puts this stricter control over the OS in a different perspective than just "security". Another thing that plays into it is that Apple is now a content provider (Apple Music and TV+), so they have one more reason to keep us out: protecting their DRM.

But anyway, good security should not have to imply trusting the vendor implicitly. Give us the ability to add our own signing key for files we want to modify, just like Secure Boot on Windows allows adding custom keys.


I did some testing yesterday. It used to be that if you blocked 17.0.0.0/8 (or a more restrictive subset for OCSP) on your router, it would take ~30 seconds to open a program. It's a lot faster now. The only thing that really leaps out at me is that some programs I haven't launched in a long time go through that "Verifying..." step at launch, where checksums are calculated.


Well other than the proposal specifically outlined in this post, that is. (Pardon if I missed the sarcasm)


I tested OCR with Wi-Fi disabled and it still functions. Is Visual Look Up (and Live Text) purely offline, phoning home with CSAM reports, or an offline preview of future technology which phones home with CSAM reports?


Everything is done on device, which is different from others, who choose to do the scanning on their servers.


And if the filter thinks the image is positive for CSAM it sends it to Apple, correct? Otherwise there's literally no point.


> And if the filter thinks the image is positive for CSAM it sends it to Apple, correct?

No. The logic was never “if there’s a match the image is uploaded”. The device never knows if there’s a match. Under the process Apple described, an extra packet of data is attached to every iCloud upload. If there are enough matches, Apple can decode those packets to get a low-res thumbnail, which they then check against a second perceptual hash. The process doesn’t work on arbitrary images on your device, it’s specifically designed for iCloud uploads.
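
The threshold property can be sketched with plain Shamir secret sharing (a toy model of the idea, not Apple's actual construction; the field prime, the threshold, and the one-share-per-voucher framing are all assumptions):

  import secrets

  P = 2**127 - 1   # Mersenne prime; a toy field, not Apple's parameters
  THRESHOLD = 30   # roughly the match threshold Apple described

  def make_shares(secret, n, t=THRESHOLD):
      # random degree-(t-1) polynomial with f(0) = secret
      coeffs = [secret] + [secrets.randbelow(P) for _ in range(t - 1)]
      def f(x):
          acc = 0
          for c in reversed(coeffs):
              acc = (acc * x + c) % P
          return acc
      return [(x, f(x)) for x in range(1, n + 1)]

  def reconstruct(shares):
      # Lagrange interpolation at x = 0; needs at least t shares
      secret = 0
      for xi, yi in shares:
          num, den = 1, 1
          for xj, _ in shares:
              if xj != xi:
                  num = num * -xj % P
                  den = den * (xi - xj) % P
          secret = (secret + yi * num * pow(den, P - 2, P)) % P
      return secret

  account_key = secrets.randbelow(P)
  shares = make_shares(account_key, n=100)  # one share per matching voucher
  assert reconstruct(shares[:THRESHOLD]) == account_key
  # one share below the threshold reconstructs only garbage:
  assert reconstruct(shares[:THRESHOLD - 1]) != account_key

Each matching voucher effectively contributes one share of a per-account key; with fewer than the threshold, the server can decrypt none of the thumbnails.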


So in reply to your parent, the answer is yes. It sends a low resolution copy of the image to Apple with extra steps.


No. Look at the context of this discussion. Somebody starts off by asking:

> Does macOS 12.3 and beyond phone home with details of images and documents you open in Preview?

There is no similar context with Apple’s previous CSAM scheme. The device is unable to check for a match and then upload the photo if there’s a match. The scheme only works because it operates on iCloud uploads.


That's a fair point. In my mind I was immediately transported to the CSAM/NeuralHash debate from last year. I will slow down.


Why can't they just scan iCloud uploads on-server then? Why does anything need to be done on-device?


They almost certainly do. Every major provider does, and even some enterprises.

The whole point of the CSAM stuff was that it would allow for end-to-end encryption while not turning Apple's ecosystem into the preferred tool of child pornographers.

Apple communicated the feature poorly, then the EFF put out a deliberately misleading hitpiece that conflated the parental controls with the CSAM detection, and an online freakout started. The "privacy activists" won, and your data is sitting on Apple's servers today, under Apple-managed encryption keys outside of your control.


I think that's a pretty bad summary of the concerns that were raised. Sure, they are scanning your files on iCloud, but there is a 100% reliable way to prevent that: just don't upload them.

In their proposal they would scan your files on device, which is fundamentally different. Initially they would not run the scanning when iCloud upload was disabled, but how long would that last?


No. The proposal was to generate the perceptual hash on device at the time of uploading to iCloud. It would not be doing any scanning on-device. The comparison to the CSAM database would still happen on Apple's servers.


The device would download an encrypted database. It would compute a hash on the device. It would compute a value (“voucher”) from this hash and the database and upload that value to iCloud, which could decrypt it iff there were a sufficient number of matches. The voucher is independent of the plaintext file uploaded to iCloud: hence you could upload only the voucher and not the file and the system would still alert.

Phrases like “it would not be doing any scanning on-device” don’t have any precise meaning. Scanning is a series of operations including hashing and cryptographic calculations plus a voucher upload and decryption. All of the former operations are happening on the device, only the latter happens on the server. So in fact a significant fraction of the scanning is indeed happening on your device. And this two-computer design isn’t being used to preserve your privacy: it’s designed this way solely to prevent you (the device owner) from knowing whether your files match the database. Without that requirement, the system would be much simpler: it would download a hash database and simply send a notification to iCloud whenever (a sufficient number of) local files hash to values matching the database.
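
That simpler hypothetical system is only a few lines (a sketch: SHA-256 stands in for the perceptual hash, and the database, threshold, and notification call are all assumptions):

  import hashlib
  from pathlib import Path

  BANNED = set()   # digests from the hypothetically downloaded database
  THRESHOLD = 30   # assumed match threshold

  def scan(root="~/Pictures"):
      hits = sum(
          1 for p in Path(root).expanduser().rglob("*.jpg")
          if hashlib.sha256(p.read_bytes()).hexdigest() in BANNED
      )
      if hits >= THRESHOLD:
          print(f"notify server: {hits} matches")  # stand-in for the upload

The voucher machinery exists precisely so that the device, unlike this sketch, never learns the value of `hits`.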


This scheme is obviously designed to work when Apple has no direct access to the photos. Because of this, there is a lot of speculation that Apple plans on making iCloud photos encrypted. This scheme would continue to work in that situation, whereas the on-server approach would fail. However that’s just speculation, Apple haven’t announced anything.


That speculation is pretty out there considering that there have been no barriers to them doing true end-to-end encryption of iMessage backups, but they have chosen not to for many years despite marketing iMessage as "end-to-end" encrypted. Reportedly at the direct request of the FBI. https://www.reuters.com/article/us-apple-fbi-icloud-exclusiv...


The theory is they were going to use this tech to finally enable E2EE iCloud Photos and reactionary privacy absolutist psychopaths who didn't understand how it works -- such as myself -- made a big ruckus and spoiled everything.

Also at the point that the image hash matches, Apple thinks it is CSAM (and it probably is). Doing it locally lets them avoid storing it, which they definitely do not want to do.


By definition the information must phone home somehow; how else is Apple's spy department going to know if there's a "visual match"? Local scanning doesn't mean anything: it's an excuse to implement a feature that will be changed later anyway. That's how the slippery slope of precedent works.


This article could help by defining what Visual Look Up is... I have never heard of it.


Indeed - here is a nice little article, posted the other day, that describes one aspect of it: https://eclecticlight.co/2022/03/16/how-good-is-montereys-vi...


What's the right number of lives to destroy over false positives from an algorithm? Is it some number other than zero? Why or why not?


I'm more concerned with the fact that framing someone for real CSAM is nearly the perfect crime.

Contract killers have to be up close and personal in meatspace and leave an actual homicide investigation in their wake.

Someone halfway around the world could be contracted to spearphish you, take over your phone/laptop, then cause it to download actual CSAM.

The first thing you'd know about it is when you were arrested, and then I don't think there's any kind of defense. If the hackers were very sloppy, you might be able to hire a very talented forensics team to find traces of the attack. But beyond that you're pretty screwed, and nobody will ever believe you.

It is a button that someone (without any morals and willing to take a certain degree of risk) can push to ruin your life remotely, leaving virtually no trace, and allowing virtually no defense. Nobody is ever going to believe a pedophile claiming that they got hacked.

And the intersection of this with politics and national intelligence agencies (who certainly have the skill and lack the morals) is probably a bit disconcerting in the larger stage.


> It is a button that someone (without any morals and willing to take a certain degree of risk) can push to ruin your life remotely, leaving virtually no trace, and allowing virtually no defense

There are many such buttons. So many it’s horrifying.


You can ask the same question about self-driving cars.


This is ignorant. There is lots of information about the safety of human-driven cars; it is not a high bar to improve upon, and can be verified. The technology in question is introducing a new form of potentially ruinous statistical surveillance that didn't exist before.


>There is lots of information about the safety of human-driven cars; it is not a high bar to improve upon, and can be verified.

That doesn't make sense. 'Average' drivers collectively get into millions of accidents. I don't want a slightly above-average driver driving my family around, and I don't want slightly above-average drivers around me, especially ones I can't communicate with by honking or shouting to get their attention, and algorithms that have no fear for their own lives. All software has bugs, and I don't want my safety contingent on developers not making mistakes. A self-driving car must be orders of magnitude better than the BEST human driver, and there must be punitive damages in place as a deterrent, to compensate aggrieved parties, etc. At present, self-driving cars are unproven, dangerous technology that is being rightfully scrutinized.


What measure are you using for "slightly above average" that also equals "unproven dangerous"? If the above average technology is dangerous, surely the average driver is even more dangerous. Otherwise it's not really above average, no? I.e., how can you make fewer errors and be a measurably better driver but still be worse than the thing that you're measurably better than?

Either you're making this argument in bad faith because you just don't like self driving cars, or you don't believe in statistics as a concept.


Where are the self-driving cars that respond to honking, respond to someone yelling the driver to watch out, where you can make eye contact and they wave you over so you feel safer while crossing, etc, etc?


Currently you are surrounded by average drivers. Why are you worried about them being replaced with slightly above average drivers?


No, I am not. I haven't gotten into an accident, nor have I witnessed one that resulted in a fatality or any major injury.

The distribution of skill amongst drivers isn't geographically even, nor is it static.


What makes you think it’s not geographically even (at least in a country like the US)?


Because I haven't seen it proven in any domain; it sounds too naive to believe that people have equal abilities and skills by geography (or any other random metric). Population centers will have greater access to training; in this example, more cars on the road will correlate with more opportunity to practice driving skills, but also more opportunity for accidents, etc. In any case, if you look at outcomes in terms of car crashes and fatalities, the data also shows what you would expect:

https://worldpopulationreview.com/state-rankings/fatal-car-a...


Do you mean the terribly done statistics that collect data from uneventful motorway miles, while people automatically switch back to manual driving when a problematic situation arises, effectively filtering out any interesting data?


You’re ignoring the Trolley Problem of it all: is it moral to knowingly let uninvolved person X die if it saves the lives of Y and Z?

Fortunately, the policy choice at issue here isn’t one where there is definitive harm on the track, just risks that can be compared.


IMO it is more: is it immoral to let person X die, justifying it with an unreasonable fetish for tech and its unrealized potential?

I am only half kidding :)


Self-driving cars substitute for human-driven cars, which currently kill over a million people a year. If the first mass-adopted self-driving cars have half the fatality rate per mile of human-driven cars, then slowing their rollout by a day causes roughly 1,800 deaths (over a million deaths a year works out to ~3,600 a day, half of which is ~1,800). Current prototype self-driving vehicles already have a lower fatality rate per mile than human-driven vehicles. Obviously this isn't an apples-to-apples comparison, since current self-driving cars are constrained to certain locations and weather conditions, but if the goal is to minimize deaths, then we should be more gung-ho about this technology than we currently are.

In contrast, CSAM scanning substitutes for... I'm not sure what. In addition to the risk of false positives, there's also the risk that the scanning technology will be used for other purposes in the future. I could easily see governments forcing Apple to scan everyone's hard drives for hate speech, 3d models of prohibited objects (such as gun parts), or communications sympathetic to certain groups. Once that door is cracked open, there is no closing it.


>Obviously this isn't an apples-to-apples comparison since current self-driving cars are constrained to certain locations...

It is much worse: current so-called self-driving cars have human drivers who intervene and save the AI from crashes most of the time, but Elon will not count those near-crashes as actual crashes.

>but if the goal is to minimize deaths, then we should be more gung-ho about this technology than we currently are.

Maybe we should also try the obvious quick fixes at the same time?

It would be much cheaper, instead of forcing AI cars on people, to mandate, say, a drunk/tired/talking-on-the-phone detector, enforce better driving tests before licensing drivers, tax vehicle mass to promote lighter cars, and enforce speed limits with tech. Do you think Bob would prefer being forced to buy an expensive self-driving car to improve the crash stats, or buying a safety device (black box) that he must install in his car?


If the AI and the Human both try to correct each other's mistakes wouldn't that make a significantly better system?


Right, but this is not self-driving; it's a co-pilot: human and AI both drive the car.


Self-driving cars have the potential to reduce serious harm by a significant margin. Are you saying the same is true with Apple's CSAM-detection measures?

If so, how is curbing CSAM consumption going to prevent children from being raped, exactly? And I do mean exactly. The only arguments I have heard thus far appeal to some vague link between producers and consumers, predicated on the idea that CSAM producers are doing it for the celebrity/notoriety, or for financial profit. Both of these claims are highly suspect, and seem to rest on a confusion between trading CSAM online and paying traffickers for sex with children.

You may be correct, but it's going to take more than a superficial comparison to convince anyone.


>You may be correct, but it's going to take more than a superficial comparison to convince anyone.

Well, you just hand-waved the idea that self-driving cars can reduce harm by a "significant" margin - based on what, exactly?


If that isn't true, then yes, we should seriously consider abandoning the idea of self-driving cars.

I don't understand, though. Are you saying that this is a reason to accept Apple's CSAM-detection?


> If that isn't true, then yes, we should seriously consider abandoning the idea of self-driving cars.

I wanted to share a thought I find controversial: self-driving cars are the US response to trains. No taxes for railways are palatable, but private vehicles on special roads, with profit to be made, are great for the car industry. That, IMO, is what drives (pun intended) self-driving cars.


Oh, I'm with you on the CSAM. I just wanted clarity about the self-driving cars; it's a pet peeve of mine.


1.35 million deaths per year are caused by car accidents. That doesn't even account for the people who survive but are maimed.


Self-driving cars are unproven tech. Just pointing to the harm that humans contribute to doesn't mean much. Soldiers also accidentally kill innocent civilians - are you in favor of AI-drones and AI-soldiers?


Not the guy you are talking to, but I would 100% be in favour of AI-drones if it was shown they made fewer mistakes than human operators.


Sure, we all want fewer people harmed in this world. With regards to drones and soldiers: it's a complicated topic that needs a lot of discussion, so I don't mean to be as flippant about it as I might have come across. I was merely making the point that eliminating humans just to reduce the mistakes they cause ignores the benefits that humans bring - e.g., in this case, refusing immoral orders, refusing to harm children or non-combatants, exercising judgement during war, etc.


I’d be in favor of self-driving cars by the same criteria, but sadly a lot of people are ready to skip that step.

Let’s be clear, there is ZERO empirical evidence that self-driving car technologies available to humanity today are safer than human drivers. Zero.

There are lots of theoretical advantages like “computers don’t get sleepy or drunk,” which elide the many other ways that computers still dramatically underperform people (like recognizing objects).


What’s the right number of lives to destroy from CSAM? There’s a middle ground between doing nothing and totalitarianism.

“Destroyed lives” from false positives are at this point hypothetical. Child abuse is not. It’s fair to be concerned about false positives and ensure the system handles such failures appropriately. It’s also fair to directly intervene in the widespread circulation of CSAM.


From this mechanism? Zero. And even if it were zero anyway, these measures are not justified. There are plenty of other ways to catch criminals that don't involve dubious phoning home. Devices that treat their owners like potential criminals are nothing more than government informants.

Also… the goal of law enforcement is not to vengefully destroy the lives of people who commit crimes, but that's a whole different can of tuna. Still worth noting, because it hints at a larger problem with how we approach heinous crimes.


>What’s the right number of lives to destroy from CSAM?

Is there any evidence such things actually reduce the production of CSAM? Or is it like the war on drugs, where drug production is as high as it's ever been?


You're right, there is a middle ground. Law enforcement can request a warrant from a judge and demonstrate probable cause to search the machine they believe is harboring CSAM. Any attempt to bypass this process is totalitarianism.


There's not really a middle ground; the right number is 0.

It is better that ten guilty persons escape than that one innocent suffer. -- Blackstone's ratio


There’s a reason he said ten and not a million. This argument is absurd when taken to its maximalist conclusion.

Whether a rational person would accept chance X of wrongfully being convicted of a crime to decrease the chance of being a victim of crime by Y obviously depends on the values of X and Y.


Or in this situation more like:

It is better that ten children get sexually abused than one innocent person comes under suspicion

/s


This isn't combatting abuse because producing CSAM != consuming CSAM. It's far better that 10 people beat off to children than have one innocent person come under suspicion.


I disagree. Suspicion is not the same as prosecution, and every time someone "beats off" to an image of child sexual abuse, that child is re-victimised; every time.

You'd rather 10 children be victimised than 1 person falls under suspicion? Ok...


> You'd rather 10 children be [re-]victimised than 1 person falls under suspicion? Ok...

Well yeah, mostly because I don't understand what you mean by "revictimization", but I can easily imagine what it's like to have such suspicions leaked to the angry mob.


I'm not sure what's difficult to understand about it.

How about we pivot to intimate-image abuse (revenge porn)? The argument is the same. The victims are re-victimised every time their intimate images are shared; for the victims it's always the same stress, shame, embarrassment, etc.

However, according to the other user the damage is already done, so there are actually no issues; so what's all the fuss about...


> I'm not sure what's difficult to understand about it.

I guess the difficult bit for me is that you talk about throwing victims under the bus, and letting them be "victimized" all over again, but I just don't understand how this solution helps victims at all. Here's the example in my head: imagine you're a victim of revenge porn. The police, using technology like the stuff Apple has developed, catch one random guy (out of hundreds or thousands more) in possession of this porn. The police probably won't tell you that they caught this guy. How does this event help you?

> there's actually no issues

It's not that there are no issues: the distribution of CSAM (and of revenge porn) are terrible crimes, and iirc they're increasing in prevalence. I just think that Apple's solution doesn't help victims, and seriously discomforts millions of people.

As you've seen, many people do see this as privacy invasion. The feeling that folks somewhere far away might be able to see your photos (even low resolution thumbnails) without your knowledge isn't a good feeling to inflict on the world. And the feeling that your phone could report you to the police at any time isn't a good feeling either, even if you know you've done nothing wrong.


Maybe I should have been clearer: I'm not in support of this "solution", but rather put off by the counterarguments that CSAM is going to happen anyway so there's no point in even trying to stop it, and by the ignoring of the victims of this disgusting crime. I'm simply trying to voice their side of the story, to put their perspective. I obviously can't comment on what the police will or won't do on victims' behalf.

I value privacy above most things; for example, you'll not find me on most social media platforms, and the ones you will don't link together. If these people valued their privacy that much, they'd not be using Apple devices and services in the first place, is my thinking. I'm pretty sure Apple has full access to anything in iCloud, which for most users is where all their stuff sits; where's the expectation of privacy?


>you’d rather 10 children be victimized

Now you’re just arguing in bad faith.


> Now you’re just arguing in bad faith.

In what way? I'm just characterising your argument from the point of view of the victims of the crime, whom you appear quite happy to throw under the bus.


> “Destroyed lives” from false positives are at this point hypothetical. Child abuse is not.

The idea that all of this effort (and all of the direct discomfort inflicted on Apple users) will do anything to stop child abuse is just as hypothetical.


CSAM is, by definition, photography or videography of something that has already happened. Therefore, quite literally, doing absolutely nothing about CSAM itself would result in no harm to any child, as the harm has already occurred.

Now you're probably going to cry about incentivising or normalising CSAM - but that's a different argument. And if you then try to argue that the normalisation of CSAM would somehow encourage people to abuse children, well then you're really off into the zero-evidence weeds. Go look at porn research (the actual Google Scholar/JSTOR/Elsevier kind), and you'll see that almost everybody who looks at porn neither wants to nor would actually do what they see in porn, if they were given the opportunity. Surprise, surprise, most people wouldn't actually get gangbanged/bukkake'd/fucked by their sibling, mom, dad, grandpa/pooped on by their next door bespectacled red-headed neighbour, etc.

Nor is there any evidence that inadvertently coming across CSAM turns people into pedophiles (news flash: pedos were turned on by kids long before they were ever exposed to CSAM on the internet), and porn itself is almost invariably used for fantasy or as something wholly unrealistic that people get off to precisely because it's unrealistic. Even though it might be unsavoury to do so, we could follow this reasoning to its extreme but undoubtedly true conclusion and state that there are individuals who get off to CSAM notwithstanding that they would never themselves abuse children.

So to recap, 0 children would be saved from harm because the CSAM itself is ex post facto; it wouldn't de-incentivise CSAM because the demand and markets for CSAM and pedophilia existed long before the internet was a thing, and pedophiles will find avenues around dumbass implementations like Apple's scanning (TOR, anyone? not using an iPhone/Mac?); and, finally, just because somebody looks at CSAM doesn't mean they're an actual pedo or would ever harm children themselves. The fact that possession of such material is illegal is not to the point - Apple is not the police, and the police and other executive agencies need warrants for this kind of thing (in common law countries this notion is more than 400 years old).

Meanwhile, we know the false positive risks are not insignificant - look at the white papers yourself, or just look at the numbers that smart people have crunched. The best part is that even though Apple says the false positive rate is 1 in a trillion accounts, people's photo libraries are exactly the sort of thing you can't extrapolate statistically. Maybe your Aunt June really likes photos of her nephews in the pool, and she's got a library with 300 photos of her nephews half naked and swimming when they visited her last year. Apple has no fucking clue whether they would or would not trigger its scanner, because it currently does not have access to Aunt June's unique photos to test vs its database. Apple quite literally doesn't know what the fuck will happen. I and many others find that abhorrent when you consider the effect that even the mere accusation of pedophilia has on a person's life. And that isn't even to start the discussion of what kind of precedent on-device scanning would set for other subjects (political and religious dissent, for example - if not in America then in places like China).

Apple and everybody else can fuck right off with this Orwellian shit.


> the harm has already occurred

Circulating images of a minor child engaged in sexual abuse do not constitute an ongoing harm to that child? That's a fascinating viewpoint.

> Apple and everybody else can fuck right off with this Orwellian shit.

Right along with people who think child abuse images should be okay to keep as long as you aren't the one who made them.


First, no, I don't think continued circulation ex post is comparable to the harm that occurs at the time the abuse physically occurs. My own view is that whatever feelings flow from knowing that the images are 'circulating' aren't harm at all. Less personally, lingering negative effects from some event, in the form of flashbacks or unpleasant memories, are not new instances of harm as a matter of law (for whatever that's worth), and I think it goes beyond straining common sense to use the term 'harm' in that way.

But let's assume you're right.

You think that pedophiles won't find ways to share content even if every tech company in the world implemented this? You think pedophiles don't and won't have terabytes of child porn backed up on hard disks around the world that will be distributed and circulate for the next millennium and beyond, even if it has to be carried around on USBs or burned to CDs (which aren't exactly amenable to CSAM scanning), and then saved to offline computers? Put another way, even if the internet shut down tomorrow, plenty of pedophiles around the world would continue jacking off to those images and sharing them with their buddies - you don't need the internet for that.

Further, even if you could convince me that it's harmful in the sense that I understand the word, I'm not sure I'd ever be persuaded that the amount of harm could be sufficient to outweigh the harms that would result from the scanning itself.

Happy to listen, though.


The problem of having porn posted of yourself is not something that applies only to minors. It can happen with any age.


> Circulating images of a minor child engaged in sexual abuse do not constitute an ongoing harm to that child?

When you do it, it does. When law enforcement does it, apparently not...


I’ve worked jobs in which I was exposed to this content regularly. It’s disturbing, sometimes extremely so. Just because someone does not abuse a child after viewing this content does not mean the content causes no harm to either individuals or our society at large. I don’t want to live in a world where CSAM is tolerated in order to keep pedophiles satiated.


First you couched your position as being about the children and harm to children; now you're talking about - as best I can tell - psychological harm to society writ large which you're asserting would occur based on your own experience. The part about CSAM being tolerated in order to keep pedos satiated seems like a non sequitur but honestly I don't really understand what you're trying to say, so... it doesn't sound like a very good faith discussion to me.

I do hope you get whatever support you need for something that's apparently affected you. Take care.


[Redacted because avazhi said it better.]


There are levels of harm. It’s not boolean.


> Therefore, quite literally, doing absolutely nothing about CSAM itself would result in no harm to any child, as the harm has already occurred.

By this "moral" reasoning you're also fully supportive of revenge porn.


I don't think possession of revenge porn by third parties should be illegal. However, I do think a victim of revenge porn should have causes of action against the ex or whoever recorded and distributed it. In some countries, like Australia at least, recording and distributing revenge porn has given rise to claims in equity for breach of confidence - so something like that already works.

But prosecuting third parties for having it is asinine, IMO.


Is this opinion, or is it backed by something?


The middle ground is 0. The idea of jailing innocents in equal measure to jailing criminals is a mind-numbing leap of logic that goes against the very ethos the United States was founded on.


I understand the sentiment and even share it to some extent, but we should really be arguing against the strongest possible version of the claim, which is as follows: signal-detection theory tells us that there is a direct relationship between the number of correct detections and false alarms, and the only way to achieve 0 false alarms is not to label anything a hit. In the present case, this means not prosecuting anyone, ever. That's probably not a solution you're happy with.

Therefore, the argument is one of degree. I agree with you that Apple's CSAM detection goes too far, and that is what we should be articulating. Chanting "not even one" is not particularly convincing, nor sensible.
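
To put rough numbers on the degree problem, a toy base-rate calculation (every figure here is an assumption for illustration, not a measured rate):

  # expected innocent accounts flagged = population x false-alarm rate
  accounts = 1_000_000_000            # assumed iCloud-scale population
  for fpr in (1e-12, 1e-9, 1e-6):     # assumed per-account false-alarm rates
      print(fpr, accounts * fpr)      # 0.001, then 1, then 1000 accounts

Even tiny per-account false-alarm rates, multiplied across a huge population, flag real people; driving that number to exactly zero means flagging no one.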


This is certainly a matter of opinion, but mine is that I would rather let an arbitrary number of criminals go free than jail (or ruin the life of) even one innocent person.


Okay… iOS has had on-device image recognition since 2016.


Recognizing "this is a cat" and "this is a specific painting of a cat" are different challenges though.


I'm fine with child (prepubescent) rapists who are adults being sentenced to death. If you were an accomplice to that, as a videographer or similar, I'm also OK with death.

(There's a really weird social area from 13-18, with the weirdness and illegality going away at 18. Stuff in this realm, especially between two similar ages, gets very stupid. This is where you can get two 16-year-olds sexting and being charged with CSAM of their own bodies. I'm avoiding this in this post.)

But what does this CSAM scanner do? It only catches already-produced pictures of CSAM. In other words, it's evidence of said crime. In no other area of criminal law is there a law against the evidence itself. And yes, given the statutory nature of these images (possession is criminal, even if you didn't put them there), I'm not at all comfortable charging people for simple possession.

Even if they have urges toward age-inappropriate pornography and "CSAM", as long as they're not physically harming any humans, I'd much rather they do so in their own bedroom, alone.

Nor do I buy into the gateway theory that consuming CSAM leads to producing CSAM by raping children. This smacks of the DARE drug propaganda and its gateway theory (which is complete bullshit).

And we already have harder situations that have been deemed legal: SCOTUS held that Japanese manga hentai featuring schoolgirls (obviously under 18, sometimes by quite a lot) is completely, 100% legal. Again, SCOTUS focused on the First Amendment and the fact that no children were harmed in its production.

And that leads to what just happened a few days ago. After the (badly done) Zelenskyy deepfake, when can we expect 18-year-old women with very petite bodies being deepfaked into 10-13 year olds? In those cases we could attest that everyone in the production is of legal age and provided ongoing consent. Will this fall under the same rule as hentai?

Tl;dr: I'm for decriminalizing possession of CSAM. I'm for the death penalty for child rapists/child sexual assault. But this I can see going very, very bad, easily overscoping CSAM to the "cause of the day".


> But what does this CSAM scanner do? It only catches already-produced pictures of CSAM.

You say this as if it's bad to identify people who are distributing or collecting known child pornography. Are you recommending that companies implement technologies which go beyond this by not depending on a corpus of existing materials?


> I'm fine with child (prepubescent) rapists whom are adults to be sentenced to death. If you were an accomplice to that, as a videographer or similar, I'm also OK with death.

The reliability of the criminal justice system, particularly in the US, is abhorrent. There's a long history of false convictions, particularly affecting people in minority outgroups and mentally disabled; we've executed adults who were so mentally incapacitated they were below a 10 year old in terms of mental capacity. The death penalty is highly immoral.

There are tens of thousands of black men still in jail because they were basically the most convenient way for a police department to "solve" the murder or rape of a white woman and help their case clearance rates. Police, prosecutors, and forensic "experts" were complicit. "Hair analysis" is just one example of the pseudo-science nonsense.

In Boston, a forensic chemist falsified thousands of test results and somehow this escaped notice despite her having a productivity level that was far and above virtually any other forensic chemist.

Or, if you're not exceedingly gullible: her supervisors obviously knew what she was doing and didn't care, because she made their lab look great and prosecutors got lots of open-and-shut cases.


Are you a fan of giving said rapists the incentive to murder too, making prosecuting them that much harder?


Rambling post, hard to follow or understand


What’s with the obsession with CP? I agree that it’s morally wrong and should be penalized, but why is it perceived as the Ultimate Crime? Why is this tool supposedly for detecting CP only, and not stolen bikes for sale, bullying through SMS, etc., which are also criminal offenses?


> why is it perceived as the Ultimate Crime?

Because you can apparently justify any move, no matter how authoritarian, by saying "think of the kids"!

It's politicians and governments exploiting psychology to get away with problematic crap.

It's not the ultimate crime, it's the ultimate justification.


Just like how "security" is often used in the same manner, but I agree that CP is a much more persuasive and emotional argument.


Because it's a good political tool that leverages parental and other human instincts to protect children. It puts most people into such a thought-terminating blind panic that you can use it as cover for your true intentions, giving token enforcement funding to the stated cause while directing the majority of enforcement funding toward politically controlling your enemies. It's as old as politics itself.

It's been known for a while that this is a political technique. It has been one of the Four Horsemen of the Infocalypse [0] since 1988, after all. Or see that "How would you like this wrapped?" comic by John Jonik from the year 2000 [1]. It's the next round of the crypto wars.

[0] https://en.wikipedia.org/wiki/Four_Horsemen_of_the_Infocalyp...

[1] https://www.reddit.com/r/PropagandaPosters/comments/5re9s1/h...


Two thoughts.

1) Good, simple politics. Protecting kids from predators is about as cut-and-dried an issue as you will ever find. Harry Potter vs. Voldemort might be a more complicated moral issue.

2) I suspect that a few very well connected activists in the Bay Area have made it their life's work to get CSAM tools on sites.

Ashton Kutcher and his organization Thorn [0] are probably the best example of this. Thorn is an interesting example because it has been VERY good at making its case in the non-tech media, e.g. [1], [2], [3], and in front of Congress [4]. It should be said, Thorn makes technology that helps track down child exploitation and has had some great results, which deserve plaudits.

[0] https://en.wikipedia.org/wiki/Thorn_(organization)

[1] https://www.npr.org/sections/goatsandsoda/2019/04/15/7126530...

[2] https://www.nytimes.com/2021/09/02/opinion/sway-kara-swisher...

[3] https://www.washingtonpost.com/news/reliable-source/wp/2017/...

[4] https://www.youtube.com/watch?v=HsgAq72bAoU


> What’s with the obsession with CP?

It's been the go-to outrage generator for federal law enforcement and spy agencies to attack strong device and end-to-end encryption, by means of legislation that requires backdoors or outlaws encryption that is too strong.

To see why, scroll down to see the guy advocating for the death penalty for people involved in child porn production.

If only law enforcement showed equal vigor in addressing child abuse in religion, whether it's raping altar boys or using the mouth to clean blood off a baby that has been circumcised (often causing syphilis outbreaks in the process).

It's almost like it's not actually about fighting child abuse, but about being able to snoop in your devices and communications.


CP is fairly easy to recognize if you see it I'd imagine. I'm sure there are some instances where an adult just looks very young, but there is probably a lot of CP out there with no potential for that.

How exactly does one recognize a bike as stolen from a photograph?


This is a misunderstanding. The goal here is not to identify new child pornography with ML-trained classifiers; it is to identify known child pornography by a hash value. The hash value is generated by an ML model.


So is the proposal a system for identifying known images of stolen bikes? That doesn't make sense because it isn't against the law to have an image of a stolen bike.


No. This system would only work for material that is illegal to possess. If we want to go paranoid, we could see a similar system deployed to prevent computers from having unauthorized copyrighted materials (DRM-free ebooks, academic papers, movies, etc), that’d be a horrible reality.


You recognize bikes, and compare them to a database of known stolen bikes.


CSAM detection only works by checking a hash. Photos of a stolen bike, especially ones sold online, would probably be unique images taken by the thief.


Well it is a special category of human depravity. In prison the other prisoners don't go out of their way to beat and shank the bike thieves and cyber bullies, or even the run-of-the-mill murderers.


> I agree that it’s morally wrong and should be penalized but why is it perceived as the Ultimate Crime?

Because children are trafficked and abused to create it SMH...


Child abuse is a real problem and should have considerable resources dedicated to combating it, but focussing on banning images depicting child abuse does nothing to prevent a child from being abused. We're close to a situation where it's safer to abuse children than to try and find images depicting child abuse. I'm pretty sure that focussing on preventing abuse and supporting children to report abuse will do a lot more than sweeping the evidence it ever happened under the rug. Of course that would also require you to go after some pretty high ranking people, so it's not very good for one's career.


Police arrest child abusers and traffickers all the time... But as long as there's demand for CP, people will create it (like many illicit activities); hence the attempt to reduce demand (by creating consequences for possessing it).

It's funny: comments on this site regularly demonize all sorts of non-consensual images (revenge porn, for example), and rightly so, but CP is the ultimate non-consensual image - a child doesn't even understand sexuality, isn't sexually mature, never mind able to consent... And there are comments here downplaying it, borderline condoning it...


There is a vast ocean of difference between "borderline condoning" and "this data must not appear on any storage medium ever". People who possess images of abuse, be it of children or adults, should be forced to delete them and fined appropriately. Additionally there should be an investigation to identify the abusers and punish them accordingly. Beyond that I see no use in sending people who simply possess an image to prison. Yes, the person's rights are being infringed, which should result in some reparations, but I see no use in depriving that person of their freedom.

You cannot remove demand, but you can discourage supply enough that demand would have to be fulfilled by drawings or CGI renderings of nonexistent people.


How does someone truly test how this feature is being used without possessing illegal content? This is a nearly-impossible area to research. Frightening.

(edit: I'm of course referring to possessing anti-Putin memes) (sarcasm)


Come up with something like:

X51!P%@AP[4\PZX54(P^)7CC)7}$EICAR-STANDARD-CHILD-ABUSE-CONTENT-TEST-FILE!$H+H*

https://en.m.wikipedia.org/wiki/EICAR_test_file


I thought about this, but you're still stuck trusting the implementation unless you test with actual illegal data, which is often criminal and immoral to obtain.

Example: How does a researcher test whether algorithmically-classified illegal imagery stored on user devices is being scanned and reported home to Apple's servers, and what those bounds of AI-classified criminality are? (presumably with respect to what is illegal in the user's jurisdiction)

Testing by using a test phrase, like in a spam context, is inadequate here as a scanning system can trivially be architected to pass those publicly-known tests, while still overreaching into people's personal files and likely miscategorizing content and intent.

If a user connects via a VPN to Russia for whatever reasons, does their personal content start getting reported to that country's law enforcement by their notion of what is illegal?

Parents often have all sorts of sensitive photos around which are not held with exploitative intent. "Computer says arrest."


I always imagine aliens hearing about something like this and being stunned.

"How can data be illegal?"

"There are bad things, but how can you decide what is bad and show how it's bad without examining and discussing it?"

You can only go so far merely alluding to things. Somewhere the rubber has to meet the road and you have to have concrete data and examples of anything you need to study or make any sort of tools or policy about.

It's like parents not talking to kids about sex. You can avoid it most of the time because of decorum, but if you take that to its extreme you have just made your child both helpless and dangerous through ignorance.

Somewhere along the way, you have to explicitly wallow directly in the mess of stuff you seek to avoid most of the time. That "seek to avoid" can only ever be "most of the time". It's insane and counter-productive to try to see that "most of the time" as an incomplete job and improve that to 100%.

I guess in this case there will eventually be some sort of approved, certified group. A child-porn researcher or investigator license. Cool. Cops with special powers never abuse them, restricting study to a select few has always yielded the best results for any subject, and a dozen approved good guys can easily stay ahead of the world of bad guys.


> Example: How does a researcher test whether algorithmically-classified illegal imagery stored on user devices is being scanned and reported home to Apple's servers, and what those bounds of AI-classified criminality are? (presumably with respect to what is illegal in the user's jurisdiction)

I'm not an expert in AI, so this might be totally off base, but I feel like you could use an "intersection" of sorts for this type of detection. You detect children and pornography: the children portion trains it for age recognition, and the porn portion trains it to see sexual acts. Slap those two together and you've got CSAM detection.


Possessing CSAM for abuse-prevention purposes is not immoral, regardless of what the law says. Saying otherwise is a slippery slope to saying judges, jurors, and evidence custodians are also immoral. In fact, if possession is so immoral, CSAM trials shouldn't even have visual evidence; we should just trust the prosecutor.


While I vaguely agree with the idea behind this comment, the explanation is particularly poor. By that logic, it should be allowed to kill people to prevent murder. It is, in fact, allowed to kill people to prevent murder, but only in specific legally-prescribed circumstances: typically, only specific people are allowed to kill, and only to prevent an immediate murder. The same applies to child porn: it is allowed to possess child porn to prevent child porn, but only under certain legally-prescribed circumstances.


Obviously, definitionally, it's impossible to verify that server-side logic isn't doing something evil. (Local homomorphic protocols count as server-side when the secret logic is imported from remote servers.)

This is one reason FOSS is actually-important and actually-relevant. Isn't it valid to know exactly what your personal computer is doing, to be able to trust your own possessions? Richard Stallman was *never* crazy; his understanding of these issues is so cynical as to be shrill and off-putting, but that's well-calibrated to the severity of the issues at stake.

You joke about anti-Putin memes. Here's a thought for well-calibrated cynics: Apple solemnly swears its hashes are attested by at least two independent countries. Russia and Belarus are two independent countries.


You mean one of the countries device sales just stopped in? And the other was already announced to be a US org? And you need the intersection of both?


- "And the other already was announced to be a US org?"

Then one rogue employee in a US org could be sufficient to get selective root on every Apple device everywhere? That's easy for a nation-state adversary. Here are demonstrated examples: MBS had US-based moles inside Twitter corporate spying on Khashoggi [0], and Xi had China-based Zoom employees spying on dissidents in America [1].

[0] https://www.npr.org/2019/11/06/777098293/2-former-twitter-em...

[1] https://www.justice.gov/opa/pr/china-based-executive-us-tele...

That second example is topical: the Chinese state used their Zoom assets to attempt to frame Americans for CSAM possession.

- "As detailed in the complaint, Jin’s co-conspirators created fake email accounts and Company-1 accounts in the names of others, including PRC political dissidents, to fabricate evidence that the hosts of and participants in the meetings to commemorate the Tiananmen Square massacre were supporting terrorist organizations, inciting violence or distributing child pornography. The fabricated evidence falsely asserted that the meetings included discussions of child abuse or exploitation, terrorism, racism or incitements to violence, and sometimes included screenshots of the purported participants’ user profiles featuring, for example, a masked person holding a flag resembling that of the Islamic State terrorist group."


You need the intersection of both, which this hypothetical doesn’t account for.

In terms of planting, it's already much easier to do that across the many, many cloud services that secretly scan on the backend. Going a whole weird route just to get images into a hash database, and then the matching images onto the device, which then get independent human verification, seems totally unnecessary if you're a state agent. Why do something so complicated when there are easier routes?


The FBI also has a way of "discovering" CSAM on the computers of uncooperative informants/suspects.

https://www.pilotonline.com/nation-world/article_b02c37d2-ca...


By test it, do you mean see if the police show up at your door? If you know how it works, you just need a list of hashes and a way to find a collision, which I believe exists.

Otherwise, you're really just highlighting the problem with all closed-source software: you don't really have a way to check what it does, so you have to trust the vendor.


We already know that a hash collision doesn't get far enough to involve police showing up at your door, so a full test would take something more substantial.


Research has to be done in partnership with the NCMEC, which in turn partners with the Department of Justice to run the database of known CSAM material.


You don't have to test it against anti-Putin memes to see if it would work for anti-Putin memes. The algorithm would be something like:

1. Have image

2. Get hash of image

3. Get another hash from another similar image

4. Compare hashes

The images themselves can be of whatever subject, to see if it works as expected; they don't have to contain anti-Putin memes. See the sketch below.
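
As a toy version of steps 2-4 (a simple average hash with Hamming distance stands in for an actual perceptual hash such as Apple's NeuralHash; requires Pillow, and the file names are placeholders):

  from PIL import Image

  def ahash(path, size=8):
      # 64-bit hash: 1 where a pixel is brighter than the image mean
      img = Image.open(path).convert("L").resize((size, size))
      pixels = list(img.getdata())
      mean = sum(pixels) / len(pixels)
      bits = 0
      for p in pixels:
          bits = (bits << 1) | (p > mean)
      return bits

  def hamming(a, b):
      # number of differing bits; a small distance means a likely match
      return bin(a ^ b).count("1")

  # two re-encodings of the same image should land within a few bits of
  # each other; unrelated images should not:
  # print(hamming(ahash("meme1.jpg"), ahash("meme2.jpg")))

Visually similar images produce nearby hashes, so "compare hashes" means thresholding the distance, not testing equality.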



