
Adding Client-Side Scanning Breaks End-to-End Encryption - prostoalex
https://www.eff.org/deeplinks/2019/11/why-adding-client-side-scanning-breaks-end-end-encryption
======
BLKNSLVR
Any anti-encryption push by politicians where they mention child exploitation
as a reason is easily argued against by merely asking them what their funding
plans are for actual, real world, child protection services.

There is a thing called "mandatory reporting", where teachers have to report
suspected cases of even low levels of abuse. The organisations that do the
investigations are so underfunded and understaffed that the only issues they
are able to investigate are those where the child's life is in immediate
danger. Anything less just falls off the radar.

That's how much governments really, actually care about protecting children.

When they want to scan electronic communications, it ain't for reasons of
protecting children from harm.

~~~
JoeSmithson
> "The organisations that do the investigations are so underfunded and
> understaffed that the only issues they are able to investigate are those
> where the child's life is in immediate danger. Anything less just falls off
> the radar."

This is complete nonsense

-----

Ridiculous that this was downvoted. You understand OP is saying crimes like
rape and kidnapping are not being investigated?

~~~
tlrobinson
> This is complete nonsense

"When disagreeing, please reply to the argument instead of calling names.
"That is idiotic; 1 + 1 is 2, not 3" can be shortened to "1 + 1 is 2, not 3.""

> Ridiculous that this was downvoted

"Please don't comment about the voting on comments. It never does any good,
and it makes boring reading."

[https://news.ycombinator.com/newsguidelines.html](https://news.ycombinator.com/newsguidelines.html)

~~~
dependenttypes
> "That is idiotic; 1 + 1 is 2, not 3" can be shortened to "1 + 1 is 2, not
> 3.""

It's even worse than that imo. It basically says "That is idiotic", skipping
over the "1 + 1 is 2, not 3" part. The amusing part is not that this was
downvoted, but rather that it gains a positive number of votes every few
hours.

------
stiray
I don't understand all the fuss. If I want to send encrypted email, I will
send it: by pasting encrypted data, adding it as an attachment, using
steganography, whatever. The "terrorists", "pedophiles", "drug lords", or
whoever is the latest excuse for breaking privacy could always communicate
like that, on IRC networks, email, any chat program, or in-game chat. There is
literally nothing you can do against that which you couldn't do regardless of
end-to-end encryption. And if those are high-profile targets, they have the
money to pay a security expert for consulting.

This war against end-to-end encryption is complete nonsense and is really
meant as a means of controlling the general public, since anyone who doesn't
want to be spied on can and will take countermeasures.

~~~
likpok
The big difference is ease of use. You can basically round the number of
people using email encryption to zero. Additionally, email encryption is
fraught with operational issues making it easy to screw up. On the other hand,
a billion people use WhatsApp and don’t think about it.

That’s a big shift in who uses encryption and how easy it is to passively
surveil them.

~~~
nine_k
Email may be a particularly poor example.

Any half-serious operation would likely just order a custom encryption app,
anonymously, paid for in Monero or something similar.

What really is important is reliability. An open source app, checked by
experts, buildable from scratch in a controlled environment, is much less
likely to have a bug planted by a three-letter agency.

So yes, good and widespread end-to-end encryption is a large nuisance for said
agencies, even if a successful ban on it does not prevent criminals from
encrypted communication in principle.

------
ttul
Any intervention by government that picks our locks only works with platforms
that choose or can be forced to participate. People with something to hide
will always be able to find a place to communicate beyond the reaches of such
surveillance.

This makes government backdoors not only an unwelcome intrusion, but also
entirely pointless.

~~~
oil25
I agree completely. These efforts to "break" end-to-end encryption seem
entirely ineffectual so long as open source alternatives exist, and they are
plentiful and widely available. Banning the use of unapproved software is
impractical, like asking everyone to turn in their guns. So what's really
their end game?

~~~
incompatible
Controlling the popular platforms, that the vast majority of people use, while
ignoring or attempting to restrict over time the little-used alternatives?

------
hectorr1
Matthew Green had a good thread on this:
[https://twitter.com/matthew_d_green/status/11906745637740093...](https://twitter.com/matthew_d_green/status/1190674563774009349)

~~~
dredmorbius
Threadreader:
[https://threadreaderapp.com/thread/1190674563774009349.html](https://threadreaderapp.com/thread/1190674563774009349.html)

------
mirimir
It's a truism that any approach which lets some friendly adversary pwn those
whom you consider evil will also let your adversaries pwn you.

Given that, it's reassuring when the evil don't get pwned. Because they're
canaries. If they're safe, you're safe.

~~~
skybrian
Uh, since your argument proves zero-day attacks don't exist, you might want to
go back and figure out how you got it wrong?

~~~
mirimir
If they're compromised by a zero-day, that may come out. And so you may read
about it. For example, we only learned about the FBI's NIT after the PlayPen
etc busts. So Firefox got patched.

~~~
skybrian
Maybe it would happen that way, but if it doesn't, it doesn't show you're
safe.

Testing can only prove the presence of bugs, not their absence. Reading about
other people getting hacked can warn you that you're vulnerable to the same
bug, but if they didn't get hacked it doesn't prove anything.

~~~
mirimir
Hey, I didn't claim that they were safe. Just that you can't be safe if they
aren't.

~~~
skybrian
You did say "if they're safe, you're safe." But that's not true. Your security
setup might (and probably does) have different vulnerabilities. Maybe you're
using better or worse encryption than them? You can't really conclude
anything, in general.

~~~
mirimir
I get your point. And I did overstate the argument.

Still, if available software and systems don't let them be safe, those
software and systems won't let you be safe either. And arguably, the assholes
are better at staying safe than you are. Or at least, the ones who aren't will
go down fast.

And when they do get pwned, it's often a public matter. Because criminal
matters are public in sane countries, and they're newsworthy. So, for example,
busts have alerted us to file snooping by anti-malware apps, retention and
disclosure of VPN service logs, Firefox bugs, and leakage of Apache error
messages around Tor. Also the risks of using unusual slang, although that's a
human failure.

~~~
skybrian
Yes, I agree we can learn from other people's experiences using certain
technology. Like, nobody believes Bitcoin is anonymous anymore, right? And
certainly we learn something from watching people playing for higher stakes
than us.

On the other hand, for criminals it's a little different since they rely
entirely on technology. That's also an ideal for some people of a libertarian
mindset who are not criminals, but it's not the only way to do things.

A combination of legal, political, and technical safeguards may work better
than purely technical rules for most people? We don't have to live outside the
law if we make the law work for us. Anyone who talks about legal rights is
implicitly putting some faith in the legal system to put things right, as an
ideal, anyway.

------
nyxxie
These systems are useless. Of the many flaws:

1.) Simple alteration (change a pixel in MS Paint) or encryption of content
bypasses the filter

2.) Patching out the filtering routine bypasses the filter

3.) Blocking the phone-home address (pihole, router firewall, etc.) bypasses
any reporting

4.) Any vulnerability in the future that allows an attacker to report
arbitrary clients (disclosure of client IDs, weakness in the app, weakness in
the server) renders evidence gathered by the system unreliable.

At best, client-side filtering allows you to draw relationship maps of
technically incompetent perverts who might possibly be sharing CP. What harm
reduction are they trying to get out of that? Why not just refocus efforts on
catching the small minority of individuals who are actually producing this
content?

But hey, if this garbage client-side filtering of image uploads is enough
security theatre to keep governments satisfied, I say let them have it.
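
The single-pixel bypass in point 1 holds whenever the filter compares exact
cryptographic hashes: the avalanche effect means a one-byte change yields a
completely unrelated digest. A minimal demonstration (the byte strings here
are just stand-ins for image files):

```python
# With an exact cryptographic hash, a one-byte ("one pixel") change
# produces an entirely different digest, so naive hash matching is
# trivially evaded.
import hashlib

original = bytes(1024)              # stand-in for an image file
altered = bytes([1]) + bytes(1023)  # the same file with one byte changed

h1 = hashlib.sha256(original).hexdigest()
h2 = hashlib.sha256(altered).hexdigest()

print(h1 == h2)  # False
# Roughly half of the 256 digest bits differ:
print(bin(int(h1, 16) ^ int(h2, 16)).count("1"))
```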

~~~
bonoboTP
The first is not true. These are robust hashes of the image content, not the
exact pixel colors. Look up PhotoDNA for an example.

~~~
PostOnce
"Perceptual hashing" it's sometimes called.
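
PhotoDNA's actual algorithm is proprietary, but a toy perceptual hash such as
dHash shows the general idea: hash the brightness _structure_ of the image
rather than its exact bytes, so small pixel edits leave the hash unchanged.
A sketch over a plain grayscale grid (an illustration only, nothing like the
real PhotoDNA):

```python
# Toy "perceptual hash" (dHash) over a bare grayscale grid: one bit per
# horizontal neighbour pair, 1 if the left pixel is brighter. Small pixel
# edits don't flip the comparisons, so the hash survives them.

def dhash(pixels):
    """pixels: 2D list of rows of grayscale values 0-255."""
    return ''.join(
        '1' if left > right else '0'
        for row in pixels
        for left, right in zip(row, row[1:])
    )

def hamming(a, b):
    """Number of differing bits between two equal-length hashes."""
    return sum(x != y for x, y in zip(a, b))

# A tiny 4x5 "image" and a copy with one pixel nudged: the kind of edit
# that would completely change an exact cryptographic hash.
img = [[50, 200, 30, 160, 90] for _ in range(4)]
tweaked = [row[:] for row in img]
tweaked[0][0] += 3

print(dhash(img))                           # '0101' repeated per row
print(hamming(dhash(img), dhash(tweaked)))  # 0: the hash survives the edit
```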

------
dependenttypes
I wonder how they are planning to force free software to add client-side
scanning.

~~~
chii
By using the rubber hose method: beat up the person who uses "unauthorised"
software to make an example of them.

------
GhettoMaestro
I recently learned about Microsoft PhotoDNA[1]. Very interesting (and cool)
technology. My understanding is that a decade or so ago a Microsoft engineer
stumbled upon a law enforcement guy giving a talk about the challenges of
combating child pornography with the rise of the internet, etc. The Microsoft
engineer and the LEO started talking and came up with a concept of a platform
where known abuse material is hashed, and automated scanning tools can be
deployed in the field when suspects are detained. The net result was it saved
law enforcement officers from having to view the same material again and
again, and instead could determine with the certainty of a SHA1/2 hash that it
is indeed abuse-related material, justifying further review/inspection.

That said, I'm not sure from a privacy perspective that I like communication
apps playing referee. Sure, it's terrorism or child porn now. What about when
it's political content regarding 'X' that is prohibited?

[1] [https://www.microsoft.com/en-us/photodna](https://www.microsoft.com/en-us/photodna)

~~~
belorn
I am rather skeptical about PhotoDNA. If it is an effective method for video
filtering, then why is YouTube using very expensive machine learning, which
has high maintenance and operation costs, compared to simply hashing the
video frames?

There is also a similar problem with spam, where spammers send email with
images in order to fool the spam filter. If the algorithms in PhotoDNA were
effective, then the problem of spam images would be a fairly solved problem,
but what I keep hearing is that the only effective tool is machine learning.

~~~
GhettoMaestro
I think you can divide it (among many other ways) into two categories: known
content, and unknown content. PhotoDNA solves the known content (hashed)
problem. The other stuff you mentioned I believe is being leveraged to combat
the unknown [abuse] content problem. Eg, identifying the "0-day" content.

------
vwuon
>The simplest possible way to implement this: local hash matching. In this
situation, there’s a full CEI hash database inside every client device. The
image that’s about to be sent is hashed using the same algorithm that hashed
the known CEI images, then the client checks to see if that hash is inside
this database. If the hash is in the database, the client will refuse to send
the message (or forward it to law enforcement authorities).

The image could be scanned when it's received, and not when it's sent. That
way you can't use hacked clients to send forbidden images.
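
The quoted "local hash matching" scheme can be sketched in a few lines. The
hash function and the blocklist entries below are placeholders (a real
deployment would use a perceptual hash and a much larger on-device database,
and could run the same check on receive rather than on send):

```python
# Sketch of local hash matching: the client fingerprints an image and
# refuses to handle it if the fingerprint is in a locally stored database.
# SHA-256 and the example entries are stand-ins, not what any real client
# would ship.
import hashlib

def fingerprint(image_bytes: bytes) -> str:
    return hashlib.sha256(image_bytes).hexdigest()

# The on-device "CEI hash database" from the quoted scheme.
BLOCKLIST = {fingerprint(b"known-bad-image-bytes")}

def allow(image_bytes: bytes) -> bool:
    """False if the image matches the local database; the check can run
    on the sending client, the receiving client, or both."""
    return fingerprint(image_bytes) not in BLOCKLIST

print(allow(b"holiday photo"))           # True
print(allow(b"known-bad-image-bytes"))   # False
```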

~~~
amarshall
But then you just use a modified client to receive them. I’ve no idea how
often the recipient doesn’t want to receive the message in this context, but
I’d expect it’s not often.

