
Telling the Truth About Defects in Technology Should Never Be Illegal - severine
https://www.eff.org/deeplinks/2018/08/telling-truth-about-defects-technology-should-never-ever-ever-be-illegal-ever
======
tptacek
You read Cory Doctorow talking about vulnerability research and you get the
impression that there's a war out there on security researchers. But of
course, everything else in Doctorow's article aside, there isn't: the field of
vulnerability research has never been healthier, and there have never been
more companies _explicitly authorizing_ testing of their servers than there
are now.

There isn't an epidemic of prosecutions of vulnerability researchers --- in
fact, there are virtually no such prosecutions, despite 8-10 conferences' worth
of well-publicized independent security teardowns of everything from payroll
systems to automotive ECUs. There are so many random real-world things getting
torn down by researchers that Black Hat USA (the industry's biggest vuln
research conference) had to make a whole separate track to capture all the
stunt hacking. I can't remember the last time someone was even C&D'd off of
giving a talk.

I'm a vulnerability researcher (I've been doing that work professionally since
the mid-1990s). I've been threatened legally several times, but all of them
occurred more than 8 years ago. It has never been better or easier to be a
vulnerability researcher.

Telling the truth about defects in technology isn't illegal.

Doctorow has no actual connection to the field, just a sort of EFF-style
rooting interest in it. I'm glad he approves of the work I do, but he's not
someone who I'd look to for information about what's threatening us. I'm
trying to think of something that might be a threat... credentialism, maybe?
That's the best I can come up with. Everything is easier today, more tools are
available, things are cheaper, more technical knowledge is public; there are
challenges in other parts of the tech industry, but vuln research, not so
much.

~~~
icc97
> the field of vulnerability research has never been healthier, and there have
> never been more companies explicitly authorizing testing of their servers
> than there are now.

> in fact, there are virtually no such prosecutions

> Telling the truth about defects in technology isn't illegal.

These statements don't seem to add up. If it's legal to tell the truth about
defects, then all of the EFF's work (on this case) is a waste of time and
they're spreading misinformation.

If it's legal to do what they're doing, then there should be zero
prosecutions.

Companies wouldn't need to authorize anyone, because what they are doing is
legal.

Edit: added some clarification

~~~
tptacek
I kind of love this comment. "This doesn't add up... if what you're saying is
true, EFF is misleading people".

Yes. I think EFF often does mislead people. They do some important work, and
some less well-intentioned advocacy stuff.

I supported my argument with evidence. To wit: if what researchers do is
"illegal", consider this year's Black Hat schedule, and ask why none of these
presentations generated so much as a C&D, let alone a threatened criminal
prosecution: MDM attacks, attacks on self-driving cars (including Tesla’s
ECU), breaks in “16 desktop applications, 29 websites, and 34 mobile
applications” in the fintech space, attacks on ATMs, industrial control
gateways, VPNs, ICS firewall products, antivirus software, “Akamai, Varnish
Cache, Squid Proxy, Fastly, IBM WebSphere, Oracle WebLogic, F5”, a smartphone
baseband, every macOS firewall product, mobile point-of-sale systems
including “Square, SumUp, iZettle, and Paypal”, SAML SSOs, SATCOM, WinVote
voting machines, LTE base stations, z/OS for Christ's sake, streaming video
watermarking schemes, “a cross-section of smart city devices currently in use
today", Toshiba FlashAir SD cards, warning siren systems from ATI Systems,
cellular devices manufactured by Sierra Wireless and many others, implanted
medical devices, text-to-speech systems, and a hardware backdoor in an x86
clone.

What prosecutions would you put on the other side of this scale to balance it
out? It sounds like the best argument the thread has seen so far is
Auernheimer, who dumped a database and bragged to the media about his "theft".
I don't support that prosecution and didn't at the time, but if that's the
best evidence you've got, I'm not going to be up nights worrying about this.

I work in this field. Cory Doctorow's relation to it is that of a gadfly. I'm
saying outright: I think this article is misleading.

You'll have no trouble finding vulnerability researchers to take the other
side of my argument, that we're all a threatened species ready to be pushed
underground at any moment by overzealous prosecutors. We've been saying that
for literally decades, and I think it has more to do with us wanting to feel
exceptional and daring than it does with any real risks we face.

~~~
dcole2929
Companies giving researchers permission to do something is inherently
different from it being legal to do something. In one case you are able to
operate only at the whims of the companies you're researching. The companies
can, for any reason and at any time, withdraw their consent and take legal
action against a researcher. You can debate the likelihood of that ad nauseam;
however, it would still seem to be a potential risk. In the other case, you
are free to operate as you see fit because you have every legal right to
investigate and go about your business. As far as I can tell, you're arguing
that the former circumstance is good enough, whereas the author, and
apparently the EFF and ACLU, are trying to obtain the latter.

~~~
tptacek
Almost none of those were vendors giving permission; I left out dozens of
talks where it was clear permission had been given (for instance: no Apple or
Microsoft talks on that list).

~~~
dcole2929
Again, I'm not disagreeing that the current state of affairs might be good
enough for people to work in this field with some degree of confidence they
won't suddenly be facing crippling consequences for work that everyone
generally agrees is important. I'm simply pointing out that a handshake
agreement that companies won't sue or press charges against researchers is a
far cry from the legal protection that the EFF and ACLU hope to gain. Imagine
a scenario where some vindictive CEO/Founder (I can think of a few) takes
offense to a researcher publicizing vuln/flaws in a new flagship product that
embarrasses them or the company. Imagine that CEO decides to make a point by
violating the "understanding" researchers and companies have so far enjoyed,
and files suit or, even worse, presses charges. Currently that researcher has
no protection. Even if that's a rare and unlikely occurrence, that researcher
probably isn't going to care when facing ruinous legal fees, fines, or even
jail time. And those are real consequences.

~~~
tptacek
What handshake agreement are you referring to? There are usually no agreements
with them at all. A lot of the companies in that list are notoriously
litigious. If this was such a risk, why is nobody getting C&D'd? I helped
manage one of the larger software security firms in the country, staffed with
people using their bench time to do independent assessments of random things
that interested them. I can count on zero fingers the number of people who
were ever threatened with a suit. What's the evidence that this is a real
problem? To me, the evidence loudly suggests that it is not.

There are people who have legit legal issues with the research that they're
doing. They fall generally into two buckets:

* The rapidly shrinking bucket of people doing content protection security research for vendors that don't want copy protection stuff broken (and related stuff, like jailbreaking consoles).

* People who are testing _other people's computers, not their own_.

Most of the legal drama you see in our field comes from that second group. I
fully agree: it is legally dangerous to test software running on a stranger's
computer without authorization. But most important vuln research doesn't set
off that particular tripwire.
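
To make that tripwire concrete, here's a minimal sketch (the heuristic is
deliberately crude, the helper name is mine, and the addresses are just
illustrative): the line tracks whose computer receives your packets, not what
you publish afterward.

    # Crude sketch of the "tripwire": whose computer receives your packets?
    # RFC 1918 private addresses are (usually) devices on your own network;
    # anything else is someone else's computer unless you have authorization.
    import ipaddress

    def looks_like_your_own_network(host_ip: str) -> bool:
        """True if the target appears to be a device on your own LAN."""
        return ipaddress.ip_address(host_ip).is_private

    print(looks_like_your_own_network("192.168.1.50"))  # True: the device you bought
    print(looks_like_your_own_network("8.8.8.8"))       # False: somebody else's computer

Crude, of course: plenty of devices you own insist on phoning home to a cloud
backend you don't own, which is exactly where the trouble starts.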

~~~
leereeves
Often, the drama involves unauthorized testing of websites. Hard to test those
on your own computer.

~~~
tptacek
There was a thread a couple months back from a story where a tech reporter
went out of his way to find (and solicit) stories of legal threats to
vulnerability researchers. He ran his best examples in the story, and they
were all flimsy. Here's a comment I wrote for the thread:

[https://news.ycombinator.com/item?id=16642155](https://news.ycombinator.com/item?id=16642155)

(This is about the Zack Whittaker story referenced on the thread, not about
Dropbox's VDP, which is the ostensible subject of the thread).

Again: these are the _good_ examples.

In the 5 months since, I haven't seen more legal drama for vuln research
(again, despite multiple huge conferences with people dropping zero-day on
litigious vendors on stage). Where is the evidence that this is a real
problem?

~~~
leereeves
What are your thoughts on the prosecution of Eric McCarty?

~~~
tptacek
I don't have any; that case is 12 years old. I'd have to log into PACER to get
the indictment and read it.

(I couldn't help it, I got nerdsniped, but the case is also under seal and
none of the documents available are interesting).

~~~
leereeves
Has the law changed since then? As far as I know, it hasn't.

I think you're right that companies and prosecutors are generally tolerant of
security researchers doing their thing.

But I also think the EFF is right that dangerous tools exist that can be
misused by overzealous or malicious prosecutors.

~~~
tptacek
That's not my claim. I'm saying that lots of companies _aren't_ tolerant of
security research and, despite their best efforts, still can't do much about
it: as long as you're not reaching out and probing their website, there's
nothing for them to act on.

~~~
leereeves
> as long as you're not reaching out and probing their website

I think that's the part the EFF is concerned about. At least the CFAA part.

The EFF also discussed the DMCA, but I'm not familiar with any cases of
security researchers prosecuted under that law.

------
rotrux
Ok so I do appreciate the EFF's zeal in preemptively quashing a bug-hunter-
hunt, but here's a scenario that throws their thesis into question:

Bank of 'MURICA (Bo'M) gets a phone call from some random guy (Jack) who
identifies a bug in the interface between PoS systems at gas stations across
the US, and whatever Bo'M internal software-mega-structure manages checking
acct balances for Bo'M customers. Now, Jack is a good Samaritan; he would
never use this information to steal from millions of Bo'M customers...but
like...he totally could. Jack's sister Jill overhears her brother's
conversation with Bo'M's security team, & decides everyone needs to know
immediately about Bo'M's negligence.

Let's pretend it's going to take a ~month to fix the interface.

Is it cool for Jill to get on Reddit and post the code necessary to exploit
the bug before Bo'M has a chance to protect their customers?

Is this a victimless act? Was she just being responsible? Should she have
waited to be responsible?

~~~
Chardok
Okay, but what about potential customers of BoA who have no idea that the
bank they want to entrust their money to has a huge known exploit that
increases their chances of identity theft? Does the customer not have a right
to know what they are signing up for?

~~~
tptacek
I have bad news for you. I don't know what bank you use, but I know this: your
bank has huge exploits that increase your chance for identity theft.

~~~
Chardok
I'm sure it does. Even if I were already a BoA customer in this theoretical
scenario, I would prefer to be in the know, so I could avoid that specific
POS/gas station company until the issue was resolved, instead of wondering why
my identity was stolen and dealing with the fallout while my bank was
unknowingly to blame.

~~~
tptacek
You'd be in the know about one random thing, but there will be dozens of
others, many of them also known to different subsets of people.

I'm not making an argument about the public policy of disclosure. My view is:
if you come by the information lawfully, publish whenever you're ready.

------
ikeboy
This is obfuscating the law. It’s not illegal to disclose anything; it’s
illegal to hack (in certain circumstances). Nobody is getting prosecuted for
disclosure; they’re getting prosecuted for hacking.

Now, maybe those forms of hacking should be legal. I’m sympathetic to their
cause. But it’s disingenuous to frame this as disclosure being illegal.

As a side note, the post implies companies are using criminal sanctions as a
threat to prevent disclosure - this is itself illegal extortion.

~~~
Drakim
If I buy a Bluetooth SmartLock, is it illegal for me to attempt to open my
own lock without the key to see if the product actually works?

~~~
tptacek
Almost certainly not (if the lock somehow phones home to a central server and
your research involves subverting the server, you could have problems, but no
lock I know of works that way).

While it might not be _illegal_ to mess with a lock you've purchased, it could
be _unlawful_, in the sense that you might be violating a contract that you
agreed to when you purchased the lock. So while I would stake money on you not
getting _arrested_ for doing that work, I wouldn't bet against you getting
_sued_, which is a far more common occurrence. Researchers getting _arrested_:
rare. Researchers getting _sued_: a little less rare (still pretty rare).

~~~
jackhack
>>...but no lock I know of works that way).

And if the law is followed, you never will know of one.

That is the problem.

~~~
tptacek
I don't think this is the case. Not only are there probably no locks that work
like this on the market, but it is even less likely that, were one to exist,
it would be impossible to discover the remote server connection without
probing the remote server. Remember also that the law we're talking about
isn't strict liability; you have to knowingly exceed authorization.

I've spent the last 13 years doing this kind of work, and bumped into a lot of
remote servers I don't own on assessments of devices. There've been a lot of
times where I wished I could go further, but never a time where I accidentally
got sucked into hacking someone else's computer by dint of probing my own
device.

------
crankylinuxuser
That's fine. If Congress outlaws speech of this nature, then public
disclosures of exploits and other code errors won't happen nearly as often.

So what would happen then? There are many, many organizations that are willing
to pay for exploits, because they plan to use them. There's one such company
in DC that is likely to sell to the defense industry. For a 0-day, no-click
exploit: $500k.

So sure, make it illegal. You'll only push this into the illegal side of
operations and exploits. And I would much rather have exploits well known, so
I can decide whether to do nothing, take the service down until it's fixed, or
patch if a patch is available.

~~~
rixrax
>> For a 0-day, no-click exploit: $500k.

Which company is that again? You hang around with them cybersec folks too long
and you keep hearing about these 6- or 7-figure payouts for expl0its by some
mysterious companies.

But it's always someone's friend, or a friend of a friend, or some other
'reliable' source.

Can someone here actually say that they have received, or more importantly,
are consistently getting, 6-figure payouts for their exploits?

For reference, Microsoft offers bug bounties of up to $250k [0]. In light of
this, I can see that it's not entirely unreasonable that some taxpayer money
has been wasted on buying an exploit or two for large amounts of $$$. But what
I am calling into question is the myth of consistent 6/7-figure payouts for
'exploits' sold to 'shady' companies, or that such payments are commonplace,
or that such markets are generally available to security 'researchers' (read:
exploit developers).

Intrigued to be proven wrong here! :)

[0] [https://www.microsoft.com/en-us/msrc/bounty](https://www.microsoft.com/en-us/msrc/bounty)

~~~
wepple
[https://zerodium.com/program.html](https://zerodium.com/program.html)

This is fairly vague (“up to $X”), but it possibly provides a touch more
concreteness than you’ve seen so far.

You’re right though; you won’t find anyone who is currently selling 0day
exploits to brokers or govt who will be willing to go on record. Or even admit
that they sell them.

~~~
rixrax
Thanks for this link. It looks like my skepticism may not have been well
founded. Are there [many] other similar companies out there? Now that my
curiosity is piqued, I see they are the only one mentioned in Wikipedia [0] in
relation to this activity.

And as an add-on question to satisfy my curiosity: how would one know, when
approaching a company such as the one linked here, that the exploit sold will
be used by good-guys<tm> and won't come back to haunt you later when it ends
up being used by bad-guys<tm>?

[0] [https://en.wikipedia.org/wiki/Market_for_zero-day_exploits](https://en.wikipedia.org/wiki/Market_for_zero-day_exploits)

~~~
wepple
Disclaimer: I haven’t sold a single exploit, so I am taking a few guesses
here. Roughly five of my relatively close friends are exploit devs, and most
of them don’t talk about anything beyond the name of the company they work
for.

There aren’t a ton of companies, but once you’ve worked in the general offense
industry, names start to reappear regularly.

I’m not sure what your definition of good/bad guy is. If you don’t like the
idea of 0day used for “cyber warfare”, then it’s easy: only sell directly to
the vendor. If your idea of a good guy is your own nation, you can usually
tell a company's alignment. They’ll have an ex-NSA CEO, or a close partnership
with a defense contractor. Ultimately, you’re never sure, though.

Edit: I’ll also add that in your earlier post you mention the high cost and
the wasting of taxpayer dollars. I don’t disagree, but an interesting way to
look at it is value per dollar. If you look up the cost of running a
state-of-the-art attack helicopter, or even just a couple of Humvees loaded up
with Navy SEALs and all their kit, you find that dropping a $1M piece of
remote jailbreak malware on a bad actor is actually amazing ROI in terms of
finding out what your enemy is up to and disrupting their plans.
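
A back-of-the-envelope version of that comparison (every number below is an
invented placeholder for illustration, not a real procurement or
exploit-market figure):

    # Back-of-the-envelope ROI comparison. All numbers are invented
    # placeholders for illustration; none are real figures.
    helicopter_cost_per_hour = 5_000   # assumed operating cost, USD per flight-hour
    mission_hours = 200                # assumed flight hours for one campaign
    exploit_price = 1_000_000          # assumed one-time price of a remote jailbreak

    kinetic_cost = helicopter_cost_per_hour * mission_hours
    print(f"kinetic option: ${kinetic_cost:,}")   # kinetic option: $1,000,000
    print(f"exploit option: ${exploit_price:,}")  # exploit option: $1,000,000

    # Similar sticker price, but the exploit can be reused against many
    # targets until it is burned, which is the ROI argument in a nutshell.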

------
LinuxBender
There are a large number of penetration testers that operate in the open
because it is legal and there are frameworks for disclosure. What happens when
we make this illegal? Will those people simply cease and desist?

------
elronster
This is easy to get around. Just release the information anonymously on an
anonymous platform.

~~~
JamesUtah07
Yea, but most platforms that garner any attention are not actually anonymous
in ways that would protect the person releasing the information.

~~~
wepple
The full-disclosure mailing list has been around for 20 years and has no
problem garnering attention. Sending an anonymous email in that direction,
with no way to track it back, isn’t particularly difficult.

------
mdsareckcfsck
Attempting to have a reasonable conversation in this thread is impossible
because mods have allowed _certain_ accounts to post as much as they want and
brigade counter-opinions.

~~~
dang
Most accounts are allowed to post as much as they want! We do put rate limits
on some if they have a history of posting too many low-quality comments too
quickly, or of getting involved in flamewars. Also, new accounts are rate
limited for a while, to cut back on trolling.

If you or anyone else has a rate limited account and would like us to lift the
rate limit, you're welcome to email us at hn@ycombinator.com. We're happy to
do that if we believe there's an intention to use HN as intended in the
future.

