
I recently listened to a Darknet Diaries episode on the messaging app Kik. This app is apparently being used by many people to trade child pornography. In the episode, some criticism was expressed about how Kik doesn't scan all the images on its platform for child pornography.

I would really like to hear from people who sign this open letter how they think about this. Should the internet be a free-for-all place without moderation? Where are the boundaries for moderation (if it has to exist): one-on-one versus group chat, public versus private chat?

To quote this open letter: “Apple is opening the door to broader abuses”. Wouldn't not doing anything open a different door to broader abuse?

Edit: I would really love an actual answer, since responses until now have been "but muh privacy". You can't plead for unlimited privacy and ignore the impact of said privacy. If you want unlimited privacy, at least have the balls to admit that this will allow free trade of CSAM, but also revenge porn, snuff-films and other things like it.




My personal opinion: if we could spy on all citizens all the time, we could stop all or most crimes. Do we want this? If you say yes, then stop reading here. If you say 100% surveillance is too much, then what do you have against people discussing where the line should be drawn?

A person who would sign that letter might be fine with video cameras in, say, a bank or a company building entrance, but he is probably not fine with his own phone/laptop recording him and sending data to some company or government without his consent.

So let's discuss where the line should be drawn. And if competent people in this domain are present, let's discuss better ideas for preventing or catching criminals, or, even for this method, whether it can be done better so as to address all the concerns.

What I am sure of is that clever criminals will not be affected by this, so I would like to know whether any future victim would actually be saved. (I admit I might be wrong, so I would like to see data from different countries indicating the impact of this surveillance.)


This only finds known pictures of child abuse, not new ones, and in particular it doesn't find the perpetrators or prevent the abuse.

But it creates an infrastructure for all other kinds of "criminal" data. I bet sooner or later governments will want to include other hashes to find the owners of specific files. Could be bomb-making manuals, could be flyers against a corrupt regime. The sky is the limit, and the road to hell is paved with good intentions.


> This only finds known pictures of child abuse not new ones

No, it finds images that have similar perceptual hashes, typically using Hamming distance or another metric to measure the difference between hashes.

I've built products that do similar things. False positives abound with perceptual hashes, especially when you start fuzzy-matching them.
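
For intuition, here's a toy sketch of the general technique - a simple "average hash" plus Hamming distance, not Apple's NeuralHash; the file names and the match threshold are made-up assumptions:

    # Toy perceptual hash (aHash) with Hamming-distance matching.
    # Not Apple's NeuralHash; file names and threshold are illustrative.
    from PIL import Image  # pip install Pillow

    def average_hash(path, size=8):
        # Downscale to size x size grayscale; each bit = pixel above the mean.
        img = Image.open(path).convert("L").resize((size, size))
        pixels = list(img.getdata())
        mean = sum(pixels) / len(pixels)
        bits = 0
        for p in pixels:
            bits = (bits << 1) | (1 if p > mean else 0)
        return bits  # a 64-bit hash for size=8

    def hamming_distance(a, b):
        # Number of differing bits between two hashes.
        return bin(a ^ b).count("1")

    # A photo and a slightly cropped/rescaled copy usually land within a few
    # bits of each other; unrelated images typically differ in ~32 of 64 bits,
    # but nothing prevents an accidental near-match (a false positive).
    h1 = average_hash("original.jpg")      # hypothetical file
    h2 = average_hash("cropped_copy.jpg")  # hypothetical file
    print("fuzzy match" if hamming_distance(h1, h2) <= 10 else "no match")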


But these are just variations of known pictures, to recognize them if they are cropped or scaled. The hash of a genuinely new picture isn't similar to the hash of any known picture.


Perceptual hashes have collisions, and it is entirely possible for two pictures that have nothing to do with each other to have similar perceptual hashes. It's actually quite common.


Sure, it is possible to have collisions; that's inherent to hashes. In general collisions are rare, but these are a different type of hash, and Apple is probably tweaking the parameters so that collisions stay manageable.

But if the database of hashes is big, and the total number of unique photos across all iPhone users is also a giant number, you will certainly find collisions.
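
To put rough numbers on that, a back-of-envelope sketch, assuming (unrealistically) that perceptual hashes behave like uniform random 64-bit strings; all the counts are invented:

    # Expected accidental near-matches at scale, assuming uniform random
    # 64-bit hashes. The photo and database counts are invented.
    from math import comb

    BITS = 64

    def p_within(d):
        # P(two random 64-bit hashes differ in at most d bits)
        return sum(comb(BITS, i) for i in range(d + 1)) / 2**BITS

    n_photos = 1e12  # assumed photos across all users
    n_db = 1e6       # assumed size of the hash database
    for d in (0, 4, 10):
        print(f"<= {d} bits: ~{p_within(d) * n_photos * n_db:.1e} matches")

Under these made-up numbers, exact matching (d = 0) stays below one expected collision, but the expected count explodes as the fuzzy-match radius grows.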


It certainly does help find the perpetrators and prevent abuse. Unlike videos of many other kinks, CSAM distribution is not one-way from a small number of producers to a large number of consumers but is often based on sharing "their own" material.

When we arrest people for "consuming" known old CSAM, we often find new CSAM produced by themselves; the big part of busting online CSAM sharing rings is not the prevention of CSAM sharing but the fact that you do get a lot of actual abusers that way.


I would like to read more about your claims. The problem is that this subject is very dangerous; the only things I know about it are from articles that got popular, and some of those articles are about innocent people who had their lives destroyed because someone made a mistake (like reading an IP address wrong).

A nightmare scenario would be something like:

- a tech giant creates a secret algorithm, with secret parameters and a threshold that it can tweak at will

- bad guys reverse-engineer it, or find a way to make an innocent-looking picture trigger the algorithm

- the previous step is used by idiots in chat groups or even DMs to troll you, like SWATing in the US, or the DDoSing and other shit some "gamers" do to get revenge for losing a multiplayer game

- consequences: an innocent person loses his job, family, friends, health, etc.

I don't trust the giants' moderators either; they make mistakes, or just don't do their job and randomly click stuff.


Many people share photos of child pornography via the mail. There has been criticism that the USPS does not open all mail and scan it for child pornography.

I would really like to hear from people who do not sign this letter how they think about that.


They already scan for bombs and hazardous materials, so yes, the line is drawn somewhere between 'let anything get sent' and 'track everything'.


Not sure I'm going to go all in on unqualified support, but it seems like comparing an image to a series of known hashes is qualitatively somewhat different than the postal inspector opening all your mail. Though those convenient little scans of your mail they'll send you if you sign up suggest they already have some interest in who sends you mail.


A closer comparison would be a machine opening your mail, scanning the contents, and then matching the same kind of perceptual "hash" against an unknown database of "hashes".

If whoever controls that database wants to prohibit say a meme making fun of a politician, they need only to add the picture to the DB.


I think it would be horrible if they opened all mail... However, if they had a system that could detect CSAM in unopened mail with an extremely low false negative rate, then I'd be fine with it.

With many packages this already happens in the search for drugs and weapons and I have no problem with that either.


I believe millions more nudes are sent via Snap/WhatsApp/etc. every day than are sent in the mail in a year.. With "extremely low false negative rate" - if that means there are a bunch of geeks and agents staring at your wife/gf, your daughter, etc., is that okay? Some people will say yes - but I don't think those people should get to decide that all the other people in the world have to hand over all the nudes and dick pics everyone else has taken.

When they open mail with drugs and weapons, it's most likely not a sculpture of your wife's genitals / daughter's tits, pics of your hemorrhoids / erectile implant - whatever other private pic stuff that no one else should be peering at.

If they were opening and reading every written word sent and received - I can guarantee you people would be up in arms about it.


> I believe millions more nudes are sent via Snap/WhatsApp/etc. every day than are sent in the mail in a year.

I know for sure this is true, but this BS scenario was still used as a rebuttal against my comment, so I figured I'd give them an actual answer.

> when they open mail with drugs and weapons it's most likely not ... other private pic stuff that no one else should be peering at.

People order a lot of stuff that is very private online: from self-help books, to dildos (which show up clearly on x-ray), to buttless chaps. If anything, packages are probably more private than letters these days, since people only mail postcards and bills.

> If they were opening and reading every written word sent and received - I can guarantee you people would be up in arms about it.

They would. But this is completely incomparable to what's happening on iOS devices now.


Maybe some crossed wires in the discussion - I thought your initial reply was in regard to what gp wrote about letters - and then you were talking about x-rays and weapons and drugs.. which generally are not an issue with letters..

I think I brought in too many examples while basically trying to say "flat paper".. Yeah, I think most people assume boxes/packages are x-rayed/scanned by USPS - and people do still send stuff they would not want public that way.

we are in agreement in that "They would. But this is completely incomparable to what's happening on iOS devices now."


I suppose one important distinction here, if this makes any difference, is that drugs and weapons (to use your examples) are physical items, and these could arguably cause harm to people downstream, including to the postal system itself and to those working in it. In contrast, photos, text, written letters, and the transmissions of such are merely information and arguably not dangerous in this context (unless one is pro-censorship).


The proliferation of CSAM is extremely harmful to the victims in the photos and more people seeing them might encourage more CSAM production in general.


I suppose I should clarify my point that I was referring to those dangerous items in the mail in the sense of their capacity to directly cause physical harm to those handling or receiving them, rather than in the more general sense of the societal effects of their proliferation, which is something else altogether (to your point).

To be clear, I'm not disagreeing with you in regards to the harm caused in the specific case of CSAM, but I can't help but see this as a slippery slope into having the ability in the future to label and act on other kinds of information detected as objectionable (and by whom one must also ask), which is itself incredibly harmful to privacy and to a functioning free society.


The big reason I don't want someone else scanning my phone (and a year later, my laptop) is that last I checked, software is not perfect, and I don't need the FBI getting called on me cause my browser cache smelled funny to some Apple PhD's machine-learning decision.

It's the same reason I don't want police having any opportunistic information about me - both of these have an agenda, and so innocent people get pulled into scenarios they shouldn't be when that information becomes the unlucky best-fit of the day.

That Apple has even suggested this is disturbing because I have to fully expect this is their vision for every device I am using.

And then I expect from there, it's a race to the bottom until we treat our laptops like they are all live microphones.


The chance of you getting enough false positives to reach the flagging threshold is one in a trillion. And then the manual review would catch the error easily. The FBI won’t be called on you.
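
For what it's worth, a hedged sketch of why a multi-image threshold drives the account-level number so low even if individual collisions occur; the per-image rate, library size, and threshold below are invented, since Apple's exact parameters aren't public in this form:

    # Account-level false-flag probability as a binomial tail. All numbers
    # are invented; they only illustrate the threshold mechanism.
    from math import exp, lgamma, log

    def log_binom_pmf(k, n, p):
        # log P(exactly k matches out of n), via log-gamma to avoid overflow
        log_comb = lgamma(n + 1) - lgamma(k + 1) - lgamma(n - k + 1)
        return log_comb + k * log(p) + (n - k) * log(1 - p)

    p = 1e-6        # assumed per-image false-match probability
    n = 100_000     # assumed photos in a heavy user's library
    threshold = 30  # assumed matches needed before an account is flagged

    # The tail P(X >= threshold) is dominated by its first terms here.
    tail = sum(exp(log_binom_pmf(k, n, p))
               for k in range(threshold, threshold + 60))
    print(f"account false-flag probability ~ {tail:.1e}")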


Manual review. So in other words they're going to go through your private photos unbeknownst to you to decide you're innocent. And you're never even going to know it happened. Wonderful.

What if it really was a very private photo?


That's a claim by Apple. Due to the opaque process this is completely unverifiable.

As well, this statement relies on the assumption that no malicious actor out there is trying to thwart the system. The hashes used are deliberately designed to produce collisions for similar-looking images; otherwise the system wouldn't work. This almost certainly will be abused.


The concept of checking images against a hash list is not unique to Apple or even new.

https://en.m.wikipedia.org/wiki/PhotoDNA

What Apple announced is a way to do it on the client device instead of the server. That has some security implications, but they’re more specific than just “hashes might collide.”


They're not just checking against a hash list; they're using perceptual hashing, which is inexact, unlike cryptographic hashing or checksumming. Then they use a fuzzy metric like Hamming distance to determine whether one image is a derivative of an illegal image.

The problem is that the space for false positives is huge when you use perceptual hashing, and it gets even larger when you start looking for derivative images - which they have to do, otherwise criminals would just crop or shift color channels to bypass the filters.


“This almost certainly will be abused.” is a claim by you and completely unverifiable. Apple has published white papers explaining how their hashing, matching, and cryptography work. How do you see someone thwarting that process specifically?


Apple's claims on how it will work are also completely unverifiable. What's stopping a government from providing Apple with hashes of any other sort of content they dislike?


And then Apple reviews the account and sees that what was flagged was not CSAM. And again, the hashes aren’t of arbitrary subject matter, they’re of specific images. Using that to police subject matter would be ludicrous.


How would Apple know what the content was that was flagged if all they are provided with is a list of hashes? I completely agree it's ludicrous, but there are plenty of countries that want that exact functionality.


If they have the hash/derivative, they don't need to look on the device or even decrypt anything; they'll know that data with this hash is on the device, and presumably hundreds of other matching hashes from the same device.


The matched image's decrypted security voucher includes a "visual derivative." In other words, I'm sure they can do some level of human comparison to verify whether or not it is a valid match.


Easy -- by having the authorities expand what is being searched for.


> and then the manual review would catch the error easily.

How do we know this? It's not obvious to me that this system would work well for the edge cases that it seems intended for.


I won't use a service that performs dragnet surveillance on my communication. The US Postal service does not open every single letter to check if it contains illegal material. If I rent a storage unit, the storage company does not open up my boxes to see if I have albums of child pornography. This move by Apple is equivalent.

I'm going to turn this around and ask those in favor of Apple's move: "Should the internet be a place where your every move is surveilled?" - given that we have some expectation of privacy in real life.


> The US Postal service does not open every single letter to check if it contains illegal material.

You'd be surprised at the number of packages that get x-rayed in order to find drugs. But yes, you're 100% right that it's not all of them.


It is illegal for the USPS to look at the contents of first class mail except with a court order.

Other classes of mail may have less stringent protections. For instance, third class (and maybe second class, I forget) mail can be opened to make sure that the things it contains are allowed to be mailed in that class.


An x-ray is nothing like reading the contents of letters or randomly checking shipped hard drives and USB sticks for content. I don't know how to articulate it legally or conceptually, but I feel confident there is a very clear difference.


Perhaps I'm being too cynical, but I think the difference is they haven't figured out a convenient, automated way to do the latter.


I forgot about that. I think it's nearly all stamped mail over a certain weight? I don't remember if labeled mail is similarly scanned and my google skills are failing me today.


I think there is a very deep and troublesome philosophical issue here. Ceci n'est pas une pipe ("this is not a pipe"). A picture of child abuse /is not/ child abuse.

Let me ask you a counter-question. If I am able to draw child pornography so realistically that you couldn't easily tell, am I committing a crime by drawing?


> am I committing a crime by drawing

That really depends on the law in the country where you're doing that. According to the law in my country, yes, you are.

None of this actually answers my question though, it's just a separate discussion. I would appreciate an actual answer.


Right, my question is of course a rhetorical one. If I similarly draw other crimes being committed, the same does not apply. Why is that?

And to address your explicit question, it is far too complex and specific to a given community to answer in any meaningful way here. I can tell you what isn't the answer though: deploying spyware on phones worldwide.


> If I similarly draw other crimes being committed, the same does not apply. Why is that?

The answer is much simpler than you seem to think it is: because the law doesn't say it is. I can only go by the law in my country, which is that (loosely translated) 'ownership of any image of someone looking like a minor in a sexually suggestive position' is a crime. Since a drawing is an image, it's a crime. Having an image of someone breaking into a home is not a crime according to the law in my country. That's why you can have a drawing of that.

Your question is like asking "why don't I get a fine for driving 60 on the highway, but I do get a fine for driving 60 through a school zone?" Well, because one is allowed and the other isn't.


Yeah, but he's using reductio ad absurdum to illustrate that the law as written is absurd in some cases. So yes, you can pedantically cite the law and have an answer to the question, but you're missing the larger discussion.

At some point, somebody is actually going to have to tackle the idea that looking at images, especially fake ones, might not only be harmless - it might be safer to let people get release from their fantasies (I'd be curious to see what the research says).

Some day, people will say "Obviously nobody cares if you possess it or look at it, obviously, it's just about MAKING it."

I think this will follow the path of weed (unspeakable horror, "drugs will kill you" in DARE, one-strike expulsion from high school or college when I was growing up), yet it's now legal to even grow.


It seems just as plausible that viewing such depictions could feed into someone's urges. Without some evidence I'd be hesitant to go with someone's conjectures here. There is also, frankly, the fact that a lot of people find viewing such stuff so shockingly depraved that they don't care if someone is "actually" harmed in the process, a claim that is hard to evaluate in the first place.


> So yes you can pedanticly cite the law and have an answer to the question, but you're missing the larger discussion.

While I do agree that drawings being illegal might not be in anyone's best interest, that doesn't have anything to do with the questions I asked.

> At somepoint somebody is gonna actually have to tackle that looking at images, especially fake ones, might not only be harmless, it might be safer

I thought I agreed with you for a second, but "especially fake ones" implies you think that child pornography is actually a good thing. I hope I'm wrong about that.

> Some day, people will say "Obviously nobody cares if you possess it or look at it, obviously, it's just about MAKING it."

Guess I'm not and you really think possession of child pornography is harmless. My god that's depressing.

If you are reasoning from the standpoint that the ownership and trade of child pornography is harmless, yes, then I understand that giving up privacy in order to reduce this is a bad thing. Because in your eyes, you're giving up privacy, but you gain nothing.


>> implies you think that child pornography is actually a good thing.

Holy cow, that's the worst phrase anybody has ever put into my mouth. That's too offensively disingenuous to warrant any further discussion. Shame on you.


It is a pretty dumb law IMHO for the mere fact that two teenagers sexting on Snapchat when they are 17 years and 364 days old are committing a fairly serious crime that can completely mess up their lives if caught, but doing that the next day is totally ok all of a sudden and they can talk and laugh about it.


> but doing that the next day is totally ok all of a sudden and they can talk and laugh about it.

Unless a leap year is positioned unfortunately (probability ~1/4), in which case it's back to being a serious crime.


What is this absurd counter?

Who’s talking about surrealistic drawings? We’re talking about actual material in real life, being shared by users and abusers.

To be clear, I’m not supporting surveillance, just stating facts.


You should maybe read about the work [0], or just read the rest of what I said. Surrealism has nothing to do with the argument I made, so why do you bring it up?

CSAM databases by necessity cannot contain novel abuse material, because it is novel. In fact, filtering by CSAM databases even indirectly /encourages/ novel abuse, because it would not be caught by said filter.

Catching CP hoarders does little to help the children being exploited in the first place, and does a lot to harm our integrity and privacy.

[0] https://en.wikipedia.org/wiki/The_Treachery_of_Images


> Catching CP hoarders does little to help the children being exploited in the first place

This is completely untrue, since hoarders often also tend to be part of groups where images of still-unknown abuse circulate. Those images help identify both victims and suspects, and in the end help stop abuse.


I sometimes wonder where logic is, if it isn't on HN.

Surely, if one takes measures only against existing material, and not against the production of new material, this only encourages the appearance of new material.

For evidence you can look up what happened with synthetic cannabinoids, and how much harm they brought.


Sorry for the late reply, but for evidence you can look up Robert Miķelsons, who was an active child predator in the Netherlands. He was identified due to images found on a computer in the US. It ended up leading police to him, which caused him to stop. According to further investigations he abused an estimated 83 children, one of which was abused close to 100 times. Many of the victims were less than 5 years old, some were babies.

Without finding the child porn, he would not have been identified and would've victimized many more children.


> Catching CP hoarders does little to help the children being exploited in the first place

If there is no market for it, there might be less incentive to produce any more of it.

Not that I believe we should all be continuously spied on by people who merely pinky-swore to do it right and not abuse it.


I don't think market forces are what drive the production of CSAM. Rather, it's some weird fetish of child abusers to share their conquests. I'm sure you're aware, but when CP collectors are busted, they often have terabytes upon terabytes of CP.

But that's, I think, tangential - I don't understand pedophilia well enough to say something meaningful here.


Quite funny: looking for some figures, it seems I wasn’t the first one to do so, and that finding anything even remotely reliable isn't easy at all. See

- https://www.wsj.com/articles/SB114485422875624000

- https://thehill.com/blogs/congress-blog/economy-a-budget/260...

- http://www.radosh.net/archive/001481.html

> I'm sure you're aware, but when CP collectors are busted, they often have terabytes upon terabytes of CP.

Always appalled me that there is so much of it out there. FFS.

For some reason, it reminds me of when the French government decided to spy on BitTorrent networks for copyright infringement (HADOPI).

Defense & Interior asked them not to, saying that it would only make their work harder.

That some geeks would create and democratise new tools to avoid being caught downloading some random blockbuster, or even just out of principle, and that both child pornographers & terrorists would gladly adopt them in a heartbeat - because while they weren't necessarily the type capable of creating such tools, they had historically been shown to be proficient enough to use them.

Quite hilarious, when you think of it. Some kind of reverse "Think of the Children".

We still got HADOPI however, and now any moron has access to and knows how to get a VPN.


> We’re talking about actual material in real life, being shared by users and abusers.

And, more importantly, material that has been and will be produced through the actual abuse of real children. The spreading of which encourages the production of more of it.

GP's counter is absurd.


Or if you make a film about child abuse in which a child actor pretends to be abused, can you arrest the actor who plays the abuser? If any depiction of the act is a crime, then you can.

This issue came before the US Supreme Court about a decade ago and they ruled that the depiction of a crime is not a crime so long as the depiction can in any way be called "art". In effect, any synthetic depiction of a crime is permitted.

However, that ruling predated the rise of deepfakes. Would the SC reverse that decision now that fakes are essentially indistinguishable from the real thing? Frankly, I think the current SC would flip, since it's 67% conservative and has shown a willingness to reconsider limits on the First Amendment (esp. speech and religion).

But how would we re-draw the line between art and crime? Will all depictions of nudes have to be reassessed for whether the subject even might be perceived as underage? What about films like "Pan's Labyrinth" in which a child is tortured and murdered off-screen?

Do we really want to go there? This enters the realm of thought-crime, since the infraction was solely virtual and no one in the real world was harmed. If we choose this path, the freedom to share ideas will be changed forever.


Yes. People have been convicted for possession of hand-drawn and computer-generated images. Editing a child's image to make it pornographic is also illegal. So some "deepfake" videos using faces from young celebs are in all likelihood very illegal in many jurisdictions.

Images can be illegal even if all the people are of age but are portrayed as underage. Many historical dramas take legal advice about this when they have adult actors portraying people who historically married while underage by modern standards (e.g. the rape scene in BBC's The White Princess). This is why American porn companies shy away from cheerleader outfits, or any other suggestion of high schools.


This topic is really fascinating to me: how we deal with images of bad things. Clearly, murder and assault are fine to reproduce at 100% realism - but even plain consensual sex is forbidden in the US for a wider audience.

This reminds me of a classic Swedish movie I watched; I forget its name. It's from the 70s and contains upskirt panty shots of an actress who is playing a 14-year-old, along with her exploring sexuality with a same-age male peer. I think the actual actress was around 14 years old too. It made me feel really uneasy, maybe because my parents-in-law were around, but also because I thought "wait, this must be illegal to watch". In the end, the movie was really just a coming-of-age story from a time when we were more relaxed about these things.



> People have been convicted for possession of hand-drawn and computer-generated images.

If that's indeed the law in some countries, it is a stupid law that nobody should help enforce.

In Shakespeare's take on the story of Romeo and Juliet, Juliet is thirteen. So this play should probably be illegal in the countries that have such laws.


Not sure where you get your advice from. Supreme court has ruled loli as legal. You can find it on 4chan in about 30 seconds.


"Supreme court has ruled loli as legal." - proof of this?

Had not heard that change.

The US arrested some diplomat-like guy some years ago for having sexualized underage bathing-suit models on a device - all non-nude, I believe.

I think most of what gp is saying is mostly true - in some places cartoons can be / have been prosecuted.

Just cuz it's on 4chan every day doesn't mean it's legal - I thought 4chan deletes it all within 24 hours(?), so that if they got a warrant to do something about it, it would already be gone before the warrant was printed, much less signed and served.

However, charges are going to vary by jurisdiction - I don't think many prosecutors are trying to get cartoon-porn convictions as a top priority these days, but that doesn't mean it couldn't happen.

I don't think gp is accurate in saying that American porn avoids cheerleader and high-school themes for these reasons. The market forces have changed many times over the years, and depending on which porn portals one peruses, they will see more or less of cheerleaders and such. On some portals the whole incest titling is used as clickbait - it just depends on the portal, and much of what is not seen on portals is absent because of DMCA / non-rights issues, not because things aren't being produced and made available elsewhere.


And companies are terrified of any sort of liability here, which can lead to over-broad policies.

On reddit, you can't link to written content like stories or fanfiction (words only, without any image files) if any character in your fictional story is under 18 and does anything sexual in the story.


Maybe in Saudi Arabia? That’s not the case in the US.


I think it's very easy to get people to hate things that gross them out. It's always interesting to me how casually people joke about Epstein's death while simultaneously never talking about cp without a grossed-out look on their face.

I'm not sure I fully understand how society can be more relaxed with actual pedophiles than they are with cp material.

Personally it bothers me to see the focus around things that gross people out rather than the actual child abuse.


Yikes, do not engage in a defense of child pornography in these types of arguments. The opposition is gunning for that.


I'm not defending any specific point of view. I'm merely pointing out a problem of map-territory conflation.


> I'm merely pointing out a problem of map-territory conflation.

Using complicated language does not make you smart... It just makes you hard to understand. Maybe if you expressed yourself in common language, you'd understand that the points you're trying to make are bogus.


I think OP is saying that it can become less straightforward to apply judgement in some real-world situations (which is encapsulated in the old adage "the map is not the territory", which I guess I thought _was_ common language, but we all live in our own bubbles, I suppose). Anyway, I believe the reason often cited for why child pornography is so bad is that it involves the coercion/exploitation of a human being who is unable to consent. I believe OP's position is that if this harm is removed, there would need to be some other grounds on which to prosecute someone. That is, some identifiable harm would need to be demonstrated.

There may be a reasonable counter to this argument, but I would not say that OP's position is "bogus" on its face.


A picture of child abuse is child abuse in that the abuse of a child was necessary to take the picture in the first place.

If the picture had no way to spread, it would likely not have been made - no incentive. As such, the fact that the picture is able to spread indirectly incentivises further physical abuse of children.


That simply does not follow. Child abuse existed before cameras.

Edit: People are unhappy with this refutation, so a second one then. The claim, specifically, is that CP is CA because CP requires CA to happen. So a photo of theft is theft. A recording of genocide is genocide. Clearly absurd. Never mind the context that the pornography in my question is drawn, thus no actual child was hurt.

Edit 2: The point was made somewhere that CP is hurtful to the child that it depicts, and this is obviously true - but only if spread. Therefore, distributing CP should be illegal, but that does not mean that it's justified to scan every phone in the world for such images.


Sure, some child abuse isn't perpetrated for the purpose of being filmed and sold/distributed. But a large percentage is.

That large percentage is disincentivized when technology that makes it more difficult to spread and/or makes it easier for LE to apprehend owners is implemented.

I never said that there would no longer be any child abuse with these steps, just less of it.


I think you'll be hard-pressed to show this with any kind of evidence. What happens when you effectively ban some expression? People hide that expression. Likewise, if you are successful in this CSAM pursuit, you'll mostly drive pedophiles to reduce evidence sharing. I bet you dollars to donuts the people who fuck kids will still fuck kids.


> Child abuse existed before cameras.

Therefore we shouldn't do anything about it.

While your comment is true, it does nothing to refute anything the previous commenter said. They are completely right. Also the spread of child porn increases the impact on the victim and should just for that reason be minimized.


> A picture of child abuse is child abuse in that the abuse of a child was necessary to take the picture in the first place.

People have made all of these points before, but I still wonder what the legal and/or practical basis of the law is when the pictures or videos are entirely synthetic.

On another note, I wonder how much of this kind of Apple(tm) auto-surveillance magic has made it into macOS.


> A picture of child abuse /is not/ child abuse.

Yes, exactly.

It may even help prevent child abuse, because it may help pedophiles overcome their urges and not act on them.

I'm not aware of any data regarding this issue exactly, but there are studies that show that general pornography use and sexual abuse are inversely correlated.


I see the argument, but the counterargument is that by you doing that, you could possibly be nurturing and perpetuating abuse.

In effect, possessing such material may incentivize further production (and abuse), "macroeconomically" speaking. And I hate that evidently, yes, there is an economy of such content.


In many jurisdictions you are.


There's a difference a mile wide between moderating a service and moderating people's personal computers.

Maybe the same thing that makes this hard to understand about iOS will be what finally kills these absolutely wretched mobile OSes.


Except that the boundary between device and service is steadily eroding. A screen is a device and Netflix is a service, but what about a smart TV that won't function without an internet connection? A Nokia was a device you owned, but any Apple device is really more like a service than a device, as you don't control what OS can run on it, which unsigned software you can run, etc. So if you are OK with moderation on services, and your device turns into a service... Perhaps we need laws to fully separate hardware and software, but in reality the integration between hardware and software is becoming stronger every year.


> moderating people's personal computers

This is done client-side because your data is encrypted in their cloud. It won't be done if you disable cloud sync. If you just keep your cp out of the cloud, you're fine.


Apple can decrypt data stored on iCloud, and it seems they already scan it:

https://nakedsecurity.sophos.com/2020/01/09/apples-scanning-...

Which is what makes adding the client-side functionality even more problematic. They can easily extend this to scanning all offline content.


TBH it doesn't matter at all why they're doing it, they've crossed a line here.


The boundary should be that user-generated personal data on a user's device stays on that device unless explicit consent is otherwise given. That's it. The argument is always "well, don't you want to save the children?", to which I give this example.

In the USA guns are pretty freely available, relative to other countries. There is a huge amount of gun violence (including shooting at schools targeting children) yet every time major gun restriction legislation is introduced it fails, with one major reason being the 2nd amendment of the US constitution. This could again be amended, but sufficient support is not there for this to occur. What does this say about the US?

They have determined, as a society, that the value of their rights is worth more than all the deaths, injuries and pains caused by gun violence. A similar argument can be made regarding surveillance and child porn, or really any other type of criminal activity.


>> The boundary should be what is on a user device should stay personal on a user device.

But how many apps only store data locally on the device, versus sending data to the cloud, getting data from the cloud, or calling cloud APIs to get functionality that cannot be provided on-device?

Having the power of a personal computing device is huge, but having a networked personal computing device is far greater.

Keeping everything on-device is a good ideal for privacy, but not very practical for networked computing.


Ah, I was talking specifically about user generated personal information. I have edited my post to make it clearer.


The conversation today is not really about PhotoDNA (checking image hashes against known bad list). That ship has sailed and most large tech companies already do it. Apple will do it one way or another. It’s a good way to fight the sharing of child porn, which is why it is so widely adopted.

The question is whether it is worse to do it on-device, than on the server. That’s what Apple actually announced.

I suspect Apple thought doing it on-device would be better for privacy. But it feels like a loss of control. If it’s on the server, then I can choose to not upload and avoid the technology (and its potential for political abuse). If it’s on my locked and managed mobile device, I have basically no control over when or which images are getting checked, for what.


You have the exact same level of privacy as before. Before, the images were scanned in the cloud. Now they are being scanned as they exit your phone. In that way, the avoidance protocol is the same, namely

> I can choose to not upload and avoid the technology

They are not scanning your entire photo library at rest.

Could they? Sure. But they’ve had the hardware to do this for years, so they could have done so silently at any time.

This entire saga has been one of poor messaging and rampant speculation.


> If you want unlimited privacy, at least have the balls to admit that this will allow free trade of CSAM, but also revenge porn, snuff-films and other things like it.

Sure, in exactly the same way that the postal system, parcel delivery services, etc. allow it. But that's not to say that such things are unrestricted -- there are many ways that investigation and enforcement can be done no matter what. It's just a matter of how convenient that is.

It would also restrict CSAM a lot if authorities could engage in unrestricted secret proactive searches of everybody's home, too. I don't see this as being any different.


By that logic, can we send in someone into your house every day to look through every corner for child porn in digital or print form?

In fact, there are a lot of heinous crimes out there, some much worse than child porn IMHO. Singling out child porn as the reason seems like it's meant only to elicit an emotional response.


Fundamentally there is a difference between scanning files on your servers and scanning files stored on someone else's device, without their consent.


Not that I wholly agree with Apple's move due to other implications, but it has to be said that only photos that you have chosen to upload to iCloud will be scanned.

So, in a sense, you do consent to having those photos scanned by using iCloud. Maybe it's time for Apple to drop the marketing charade on privacy with iCloud, since iCloud backups aren't even end-to-end encrypted anyway.


Should all personal computers be scanned all the time by windows/apple/... in case it contains cp?


Are you unable to answer any of the questions I asked? I'm seriously interested in hearing, from people who think Apple is in the wrong, where the border of acceptability is.

Would you prefer iOS submitted the photos to their cloud unencrypted and Apple scanned them there? Because that's what the others are doing.


iCloud photos are not end-to-end encrypted, Apple has full access to them. https://support.apple.com/en-us/HT202303


Evidently the episode you listened to was loaded with false information, because Kik has used PhotoDNA to scan for CSAM for 7 years. [1]

[1] https://www.kik.com/blog/using-microsofts-photodna-to-protec...


> Doc: From what I can tell, it only starts looking in the rooms and looking at individual people if they are reported for something.

https://darknetdiaries.com/transcript/93/

If I were Kik, I would also write a blog post about using something like this. Many, many things point at Kik only doing the bare minimum though. (If you're the type who supports moderation, apparently they're already doing too much according to much of HN.)


I don't want to be arrested by the FBI and my life ruined because my daughter, who is 6 and doesn't know any better, takes a selfie with my phone while she happens to be undressed/changing clothes and the photo automatically syncs to the cloud.


Chances are your daughter would be arrested or at least investigated for having child pornography.

https://www.theguardian.com/society/2019/dec/30/thousands-of...


You won’t. Understand what this is instead of falling for the hysteria.


It starts with comparing file hashes. I’m worried about where this goes next.


By that logic, how do you know it hasn’t already gone there, years ago?

And this scan is currently happening on the server. They’re just making it so that the scan will now happen at the time of upload as opposed to N seconds later.


I think the best way to stop child porn is to install government-provided monitoring software, plus require an always-online chip to be present in any video capture system, and have any media player submit screen hashes + geolocation and owner ID to a government service. If the device fails to submit hashes for more than 24 hours, it should block further operation. Failing this, we will never be safe from child porn in the digital world.

Not doing anything opens the door to further abuse.


If a company explicitly states in a service's terms and conditions that content stored or shared through that service will be scanned, I think that's acceptable because the user then can decide to use such a service on a case-by-case basis.

However, making this a legal requirement, or deliberately manipulating a device to scan the entire content stored on that device without the user's consent or even knowledge, is extremely problematic, and not just from a privacy point of view.

Such power can and will be abused and misused, sometimes purposefully, sometimes accidentally or erroneously. The devastating outcome to innocent people who have been wrongfully accused remains the same in either case (see https://areoform.wordpress.com/2021/08/06/on-apples-expanded... for a realistic scenario, for example).

The very least I'd expect if such a blanket surveillance system were implemented is that there were hefty, potentially crippling fines and penalties attached to abusing that system in order to avoid frivolous prosecution.

Otherwise, innocent people's lives could be ruined with little to no repercussions for those responsible.

Do strict privacy requirements allow crimes to be committed? Yes, they do. So do other civil liberties. However, we don't just casually do away with those.

If the police suspect a crime to have been committed they have to procure a warrant. That's the way it should work in these cases, too.


There are reasonable measures to take against child abuse, and there are unreasonable ones. If we can’t agree on that, there’s no discussion to be had.


> Should the internet be a free for all place without moderation?

The better question would be: do you want an arbitrary person (like me) to decide whether you have a right to send an arbitrary pack of bytes?

Neither "society" nor "voters" nor "corporations" make these decisions. It is always an arbitrary person who does. Should one person surrender his agency into the hands of another?


>Neither "society" nor "voters" nor "corporations" make these decisions. It is always an arbitrary person who does.

Except in this case, a corporation (Apple) is making the decision relative to the sexual mores of modern Western society and the child pornography laws of the United States. It's unlikely this decision was made and implemented randomly by a single "arbitrary" individual. Contrary to your claim, it's never an arbitrary person.

And yes, I believe Apple has the right to decide how you use their product, including what bytes can and cannot be sent on it.

>Should one person surrender his agency into the hands of another?

We do that all the time, that's a fundamental aspect of living in a society.

But in this specific case, no one is forcing you to use an Apple phone, so you're not surrendering your agency, you're trading it in exchange for whatever convenience or features lead you to prefer an Apple product over competitors. That's still your choice to make.


"Apple has the right to decide how you use their product"

I hate this argument. If Apple wants to claim ownership of _their_ products then they shouldn't sell them. They should lease them.


Should the internet be a free for all place without moderation?

Yes!!!


I think different opinions are great. Just wanted to point out saying the internet should be moderated on the hacker news forums is surprising and funny.


“Apple is opening the door to broader abuses”. Wouldn't not doing anything open a different door to broader abuse?

Actually, I believe that in regards to "not doing anything open a different door to broader abuse" - no. Starting scans will lead to broader ones, and no tinfoil hat is needed.

If we compare: Apple starts scanning for known hashes, not "looking at all your naked pics and seeing if you've got something questionable"... this is just looking for known / already-created-by-others / historical things. But by doing a scan for one thing, they open Pandora's box to start scanning for other things - and then they will be compelled by agents to scan for other things - and I believe that is much broader.

Next month it will be a scan for any images with a hash that matches a meme with Fauci - the current admin has already stated that, in their desire to stop 'disinformation', they want to censor SMS text messages and Facebook posts (presumably also Facebook DMs and more).

There is a new consortium of tech companies sharing a list of bad people who share certain PDFs and 'manifestos' or something like that now, right? Might as well scan for those docs too and add them all to the list.

What power this could lead to - soon the different agencies will want scans for pics of drugs, hookers.. How about a scan for those images people on WhatsApp are sharing with the black guy killing a cop with a hood on?

What happens when a new admin takes the House and the White House and demands scans for all the 'Trump's a traitor' pics, and makes a list?

See, this is where the encryption backdoors go.. And where is that line drawn? Is it federal-level agencies that get to scan? Can a local Arizona county sheriff demand a scan of all phones that have traveled through their land/airspace?

Frankly, public chats, public forums.. if you post kids or drugs or whatever else is not legal there - then it's gonna get the men with guns to take note. What you do in private chats / DMs, etc., I think should stay there and not go anywhere else.

I don't like the idea that Microsoft employees look at naked pics of my gf that get added to a pics folder because someone set up a Windows system without disabling OneDrive. So I don't use OneDrive and tell others not to use it - and not to put pics of me there, or on Facebook or TikTok.

For all those people who have nothing to hide - I feel sorry for you - but I wonder if your kids/grandkids should have thousands of agents looking into their private pics just to make sure there is nothing illegal there.

So would these scans catch nudes sent through WhatsApp? That would kind of kill the encryption thing there.

Would this scan catch a cache if someone was using a chat room and some asshat posted something they shouldn't - and every person in the chat got the pic delivered to their screen? So many questions.

I also question what the Apple scans would cover as far as folders and whatnot.. Would it scan for things in a browser cache? Like things not purposefully downloaded.. If someone hit a page that had a set-up image on it - would that person now be flagged for inspection / seizure?

If they are working with nice guys in Cali who just want to tap people on the shoulder and have a talk - will they send flagged notices to agents in other places who may go in with guns drawn and end up killing people?

I'm sure many people are fine with either outcome - but I think there is a difference between someone surfing the web and someone who causes real-world harm, and not all those who surf the web deserve bullets to the chest, and there is no way to control that.. Well, maybe in the UK, where most cops aren't allowed to have guns.



