Any anti-encryption push by politicians where they mention child exploitation as a reason is easily argued against by merely asking them what their funding plans are for actual, real world, child protection services.
There is a thing called "mandatory reporting" where teachers have to report suspected cases of even low levels of abuse. The organisations that do the investigations are so underfunded and understaffed that the only issues they are able to investigate are those where the child's life is in immediate danger. Anything less just falls off the radar.
That's how much governments really, actually care about protecting children.
When they want to scan electronic communications, it ain't for reasons of protecting children from harm.
Exhibit B is Jeffrey Epstein and his friends, real life incarnations of the child exploiting boogeymen that we supposedly must sacrifice our rights to catch. Except they were committing these crimes in the open, no encryption needed, and were let off the hook at every level of law enforcement, up to and including the FBI, DOJ, and court system. His friends continue to be let off the hook.
> "The organisations that do the investigations are so underfunded and understaffed that the only issues they are able to investigate are those where the child's life is in immediate danger. Anything less just falls off the radar."
This is complete nonsense
-----
Ridiculous that this was downvoted. You understand OP is saying crimes like rape and kidnapping are not being investigated?
I'm up-voting in order to support dissenting opinion. My comments are based on what I know of teachers performing their mandatory reporting and frustration at the impotence of the system from their viewpoint.
There is room to argue against my point just based on the fact that if a kid is going to school, that implies a certain amount of 'care' by the parent / guardian already, so maybe my argument's subset is already flawed.
However, rape and kidnap are actual crimes whether against children or not, so these aren't really under the auspices of Child Services - that's precisely where police and detectives do come in.
It's not an easy area to work with, which is why politics probably tries to ignore it. Certain scales of abuse may actually be preferable, in the long term, to removing the child from the situation and putting them into the hands of the state. Removal from parents is psychologically damaging in itself. It's a very fine line, and one that pretty much all humans should be squeamish about.
"When disagreeing, please reply to the argument instead of calling names. "That is idiotic; 1 + 1 is 2, not 3" can be shortened to "1 + 1 is 2, not 3.""
> Ridiculous that this was downvoted
"Please don't comment about the voting on comments. It never does any good, and it makes boring reading."
> "That is idiotic; 1 + 1 is 2, not 3" can be shortened to "1 + 1 is 2, not 3.""
It's even worse than that imo. It basically says "That is idiotic", skipping over the "1 + 1 is 2, not 3" part. The amusing part is not that this was downvoted, but rather that it gains a positive number of votes every few hours.
It's hard to understand exactly what you mean when you don't explain what you disagree with or give reasons why. If this is a field you have experience with, you should know of information sources that can help dispel the incorrect information being spread.
Without any of this, your comment comes across as low value and will attract downvotes.
Mandatory reporting, I think, is intended to flag the potential that this is happening or may be likely to happen in the future.
It doesn't become a crime until it happens though.
Maybe I've answered my own question / outrage? As a rule, I don't support "pre-crime" ideology. What are the definitions of 'abuse'? What household situations point towards the likelihood of future 'abuse' and is this just 'pre-crime'...?
When you said “This is complete nonsense”, I presume you intended to convey “My goodness!! That means even rape and kidnapping may not be investigated!!”
The vast majority of contact offences are committed by people abusing a position of trust, e.g. a relative, teacher, coach, etc. I don't think there are grounds to intervene until they have committed at least some offending, or at the very least indicated a desire to offend, e.g. in forum messages. You wouldn't even necessarily lose access to your own children after being convicted of a CSAE offence; e.g., if you were convicted of downloading IIOC and not suspected of anything else, you would probably not lose access to your own child.
Surveillance in the general sense definitely stops offending from happening because it allows on-going offenders to be identified. This article is about client-side prevention of distributing known IIOC, which is more a method of preventing a crime before it happens (the crime being Distribution of IIOC, not the sexual assault/rape the imagery depicts).
Loads of things "worthy of investigation" are closed off due to resourcing issues but what you said was that only cases where there is an _immediate threat to life_ are investigated which is completely false.
Maybe OP was living in a different country than yours? I don't think you can dismiss their statement as 'complete nonsense' before confirming what exactly they are talking about.
I don't understand all the fuss. If I want to send encrypted email, I will send it: by pasting in encrypted data, adding it as an attachment, using steganography... whatever. The "terrorists", "pedophiles", "drug lords", "whoever is the latest excuse for breaking privacy" could have communicated like that since forever: on IRC networks, over email, in whatever chat program or in-game chat. There is literally nothing you can do against that which you couldn't do regardless of end-to-end encryption. And if those are high-profile targets, they have $$$ to pay a security expert for consulting.
This war against end-to-end encryption is complete nonsense and is meant as a means of controlling the general public, as anyone who doesn't want to be spied on can and will take action against it.
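The steganography route mentioned above is trivial to implement. As an illustration only (a toy least-significant-bit scheme, not any real tool; real steganography embeds in image pixel data with far more care), hiding bytes inside an innocuous-looking carrier might look like:

```python
def hide(carrier: bytes, message: bytes) -> bytes:
    """Hide `message` in the least significant bits of `carrier` bytes."""
    bits = [(b >> i) & 1 for b in message for i in range(8)]
    assert len(bits) <= len(carrier), "carrier too small"
    out = bytearray(carrier)
    for i, bit in enumerate(bits):
        out[i] = (out[i] & 0xFE) | bit  # overwrite only the lowest bit
    return bytes(out)

def reveal(carrier: bytes, length: int) -> bytes:
    """Recover `length` hidden bytes from the carrier's LSBs."""
    out = bytearray()
    for j in range(length):
        byte = 0
        for i in range(8):
            byte |= (carrier[j * 8 + i] & 1) << i
        out.append(byte)
    return bytes(out)

carrier = bytes(range(256)) * 2      # stand-in for image/audio data
stego = hide(carrier, b"hi")
assert reveal(stego, 2) == b"hi"     # message survives, carrier looks ~unchanged
```

The point is that any scanner looking at the visible payload sees only the carrier; nothing about banning encryption prevents this.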
The big difference is ease of use. You can basically round the number of people using email encryption to zero. Additionally, email encryption is fraught with operational issues making it easy to screw up. On the other hand, a billion people use WhatsApp and don’t think about it.
That’s a big shift in who uses encryption and how easy it is to passively surveil them.
Yes, that is exactly my point. The war on e2e has nothing to do with the actual "threats" being sold to us, as they are and will (even more so after all the public debates) encrypt their data. Even local potheads that I know are using PGP. The whole charade has nothing to do with threats; it is about monitoring everyone (the HK scenario comes to mind). It is not a "war against terror", it is a war against the public.
I'd argue that despite encryption, the big shift in surveillance is how easy it's becoming. Message content may be somewhat safe (assuming neither endpoint was compromised, which has gotten easier than ever to do. Just plant a bug, or point a camera at your target while they type in their password), but metadata is as vulnerable and valuable as ever. As is tracking massive numbers of people through cell-phones, facial recognition, omnipresent cameras, surveillance drones that can remain airborne for days and cover whole cities... I could go on.
Any half-serious operation would likely just order a custom encryption app, anonymously, paid for by monero, or something.
What really is important is reliability. An open source app, checked by experts, buildable from scratch in a controlled environment, is much less likely to have a bug planted by a three-letter agency.
So yes, good and widespread end-to-end encryption is a large nuisance for said agencies, even if a successful ban on it does not prevent criminals from encrypted communication in principle.
Any intervention by government that picks our locks only works with platforms that choose or can be forced to participate. People with something to hide will always be able to find a place to communicate beyond the reaches of such surveillance.
This makes government backdoors not only an unwelcome intrusion, but also entirely pointless.
I agree completely. These efforts to "break" end-to-end encryption seem entirely ineffectual so long as open source alternatives exist - and they are plentiful and widespread. Banning the use of unapproved software is impractical, like asking everyone to turn in guns. So what's really their end game?
Controlling the popular platforms, that the vast majority of people use, while ignoring or attempting to restrict over time the little-used alternatives?
That was my point - it’s a nonsense concept invented by an overreaching “law enforcement” agency that is more intent on spying on the general populace than it is in obeying the law itself.
If they're compromised by a zero-day, that may come out. And so you may read about it. For example, we only learned about the FBI's NIT after the PlayPen etc busts. So Firefox got patched.
Maybe it would happen that way, but if it doesn't, it doesn't show you're safe.
Testing can only prove the presence of bugs, not their absence. Reading about other people getting hacked can warn you that you're vulnerable to the same bug, but if they didn't get hacked it doesn't prove anything.
You did say "if they're safe, you're safe." But that's not true. Your security setup might (and probably does) have different vulnerabilities. Maybe you're using better or worse encryption than them? You can't really conclude anything, in general.
I get your point. And I did overstate the argument.
Still, if available software and systems don't let them be safe, those software and systems won't let you be safe either. And arguably, the assholes are better at staying safe than you are. Or at least, the ones who aren't will go down fast.
And when they do get pwned, it's often a public matter. Because criminal matters are public in sane countries, and they're newsworthy. So, for example, busts have alerted us to file snooping by anti-malware apps, retention and disclosure of VPN service logs, Firefox bugs, and leakage of Apache error messages around Tor. Also the risks of using unusual slang, although that's a human failure.
Yes, I agree we can learn from other people's experiences using certain technology. Like, nobody believes Bitcoin is anonymous anymore, right? And certainly we learn something from watching people playing for higher stakes than us.
On the other hand, for criminals it's a little different since they rely entirely on technology. That's also an ideal for some people of a libertarian mindset who are not criminals, but it's not the only way to do things.
A combination of legal, political, and technical safeguards may work better than purely technical rules for most people? We don't have to live outside the law if we make the law work for us. Anyone who talks about legal rights is implicitly putting some faith in the legal system to put things right, as an ideal, anyway.
1.) Simple alteration (change a pixel in MS paint) or encryption of content bypasses the filter
2.) Patching out the filtering routine bypasses the filter
3.) Blocking the phone-home address (pihole, router firewall, etc) bypasses any reporting
4.) Any vulnerability in the future that allows an attacker to report arbitrary clients (disclosure of client IDs, weakness in app, weakness in server) renders evidence gathered by the system unreliable.
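Point 1 is easy to demonstrate if the filter relies on exact (cryptographic) hashing: flip a single byte and the hash bears no resemblance to the original. (Perceptual hashes like PhotoDNA are designed to tolerate small edits, but an exact-match database is not.) A minimal sketch:

```python
import hashlib

original = bytes([0] * 100)          # stand-in for image bytes
altered = bytes([1] + [0] * 99)      # "one pixel" changed in MS Paint

h1 = hashlib.sha256(original).hexdigest()
h2 = hashlib.sha256(altered).hexdigest()
assert h1 != h2   # exact-hash matching misses the altered copy entirely
```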
At best clientside filtering allows you to draw relationship maps of technically incompetent perverts who might possibly be sharing CP. What harm reduction are they trying to get out of that?? Why not just refocus efforts on catching the small minority of individuals who are actually producing this content??
But hey, if this garbage client-side filtering of image uploads is enough security theatre to keep governments satisfied, I say let them have it.
> But hey, if this garbage client-side filtering of image uploads is enough security theatre to keep governments satisfied, I say let them have it.
The thing to be wary of is that they may be intended to be useless. Their purpose is not to work, but to establish the precedent / principle that invasion of privacy is warranted / justified / accepted / needed. This then sets the stage for later saying "we now want to outlaw encryption completely because the previous methods that are already [accepted / justified / needed] are not working". So for the ultimate aims of their proponents, it's better if they don't work than if they do.
If you want to see it in action, you can look to Australia, where exactly this argument is being employed: i.e., police have always had surveillance capability for telephone calls, so new powers that inject interception capability into the OS layer of phones are just re-establishing something already accepted, not introducing something new.
I recently learned about Microsoft PhotoDNA[1]. Very interesting (and cool) technology. My understanding is that a decade or so ago a Microsoft engineer stumbled upon a law enforcement guy giving a talk about the challenges of combating child pornography with the rise of the internet, etc. The Microsoft engineer and the LEO started talking and came up with the concept of a platform where known abuse material is hashed, and automated scanning tools can be deployed in the field when suspects are detained. The net result was that it saved law enforcement officers from having to view the same material again and again; instead, they could determine with the certainty of a SHA-1/2 hash that it is indeed abuse-related material, justifying further review/inspection.
That said, I'm not sure from a privacy perspective that I like communication apps playing referee. Sure, it's terrorism or child porn now. What about when it is political content regarding 'X' that is prohibited?
I am rather skeptical about PhotoDNA. If it were an effective method for video filtering, then why does YouTube use very expensive machine learning, with its high maintenance and operating costs, instead of simply hashing the video frames?
There is a similar problem with spam, where spammers send emails with images in order to fool the spam filter. If the algorithms in PhotoDNA were effective, the problem of spam images would be fairly well solved, but what I keep hearing is that the only effective tool is machine learning.
I think you can divide it (among many other ways) into two categories: known content, and unknown content. PhotoDNA solves the known content (hashed) problem. The other stuff you mentioned I believe is being leveraged to combat the unknown [abuse] content problem. Eg, identifying the "0-day" content.
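PhotoDNA's actual algorithm isn't public, but the general idea of a perceptual hash for known content can be illustrated with the much simpler "average hash": bits record which pixels are brighter than the mean, so small tweaks usually leave the hash unchanged, unlike a cryptographic hash. (A hypothetical toy sketch, not PhotoDNA itself.)

```python
def average_hash(pixels):
    """Perceptual 'average hash' of a tiny grayscale image (list of rows).

    Each bit is 1 where the pixel is above the mean brightness, so minor
    pixel changes usually leave the hash unchanged -- unlike SHA-256.
    Illustrative only; PhotoDNA's real algorithm is more sophisticated.
    """
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    return sum(1 << i for i, p in enumerate(flat) if p > mean)

img = [[10, 200], [220, 30]]
tweaked = [[12, 198], [220, 30]]   # slightly altered "pixels"
assert average_hash(img) == average_hash(tweaked)  # robust to small edits
```

This also hints at why such hashes only cover the *known*-content half of the problem: a genuinely new image hashes to something no database contains.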
>The simplest possible way to implement this: local hash matching. In this situation, there’s a full CEI hash database inside every client device. The image that’s about to be sent is hashed using the same algorithm that hashed the known CEI images, then the client checks to see if that hash is inside this database. If the hash is in the database, the client will refuse to send the message (or forward it to law enforcement authorities).
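The local-matching scheme quoted above can be sketched in a few lines (using an exact SHA-256 hash and a made-up database for illustration; a real deployment would use a perceptual hash like PhotoDNA):

```python
import hashlib

# Hypothetical local CEI hash database shipped inside the client.
# The single entry here is just the SHA-256 of the empty byte string,
# used as a stand-in for a real blocked hash.
BLOCKED_HASHES = {
    "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",
}

def may_send(image_bytes: bytes) -> bool:
    """Return False if the image's hash appears in the local database."""
    digest = hashlib.sha256(image_bytes).hexdigest()
    return digest not in BLOCKED_HASHES

assert may_send(b"some ordinary image bytes")   # not in the database: allowed
assert not may_send(b"")                        # matches the example entry: refused
```

Note the obvious tension: the full database has to live on every device, which is itself a distribution and reverse-engineering problem.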
The image could be scanned when it's received, and not when it's sent. That way you can't use hacked clients to send forbidden images.
But then you just use a modified client to receive them. I’ve no idea how often the recipient isn’t wanting to receive the message in this context, but I’d expect it’s not often.