There isn't an epidemic of prosecutions of vulnerability researchers --- in fact, there are virtually no such prosecutions, despite 8-10 conferences' worth of well-publicized independent security teardowns of everything from payroll systems to automotive ECUs. There are so many random real-world things getting torn down by researchers that Black Hat USA (the industry's biggest vuln research conference) had to make a whole separate track to capture all the stunt hacking. I can't remember the last time someone was even C&D'd off of giving a talk.
I'm a vulnerability researcher (I've been doing that work professionally since the mid-1990s). I've been threatened legally several times, but all of them occurred more than 8 years ago. It has never been better or easier to be a vulnerability researcher.
Telling the truth about defects in technology isn't illegal.
Doctorow has no actual connection to the field, just a sort of EFF-style rooting interest in it. I'm glad he approves of the work I do, but he's not someone who I'd look to for information about what's threatening us. I'm trying to think of something that might be a threat... credentialism, maybe? That's the best I can come up with. Everything is easier today, more tools are available, things are cheaper, more technical knowledge is public; there are challenges in other parts of the tech industry, but vuln research, not so much.
> in fact, there are virtually no such prosecutions
> Telling the truth about defects in technology isn't illegal.
These statements don't seem to add up. If it's legal to tell about defects, then all of EFF's work (on this case) is a waste of time and they're spreading misinformation.
If it's legal to do what they're doing then there should be zero prosecutions.
Companies wouldn't need to authorize anyone because what they are doing is legal.
Edit: added some clarification
Yes. I think EFF often does mislead people. They do some important work, and some less well-intentioned advocacy stuff.
I supported my argument with evidence. To wit: if what researchers do is "illegal", consider this year's Black Hat schedule, and ask why none of these presentations generated so much as a C&D, let alone a threatened criminal prosecution: MDM attacks, attacks on self-driving cars (including Tesla’s ECU), breaks in “16 desktop applications, 29 websites, and 34 mobile applications” in the fintech space, attacks on ATMs, industrial control gateways, VPNs, ICS firewall products, antivirus software, “Akamai, Varnish Cache, Squid Proxy, Fastly, IBM WebSphere, Oracle WebLogic, F5”, a smartphone baseband, every macOS firewall product, mobile point-of-sale systems including “Square, SumUp, iZettle, and Paypal”, SAML SSOs, SATCOM, WinVote voting machines, LTE base stations, z/OS for Christ's sake, streaming video watermarking schemes, “a cross-section of smart city devices currently in use today”, Toshiba FlashAir SD cards, warning siren systems from ATI Systems, cellular devices manufactured by Sierra Wireless and many others, implanted medical devices, text-to-speech systems, and a hardware backdoor in an x86 clone.
What prosecutions would you put on the other side of this scale to balance it out? It sounds like the best argument the thread has seen so far is Auernheimer, who dumped a database and bragged to the media about his "theft". I don't support that prosecution and didn't at the time, but if that's the best evidence you've got, I'm not going to be up nights worrying about this.
I work in this field. Cory Doctorow's relation to it is that of a gadfly. I'm saying outright: I think this article is misleading.
You'll have no trouble finding vulnerability researchers to take the other side of my argument, that we're all a threatened species ready to be pushed underground at any moment by overzealous prosecutors. We've been saying that for literally decades, and I think it has more to do with us wanting to feel exceptional and daring than it does with any real risks we face.
There are people who have legit legal issues with the research that they're doing. They fall generally into two buckets:
* The rapidly shrinking bucket of people doing content protection security research for vendors that don't want copy protection stuff broken (and related stuff, like jailbreaking consoles).
* People who are testing other people's computers, not their own.
Most of the legal drama you see in our field comes from that second group. I fully agree: it is legally dangerous to test software running on a stranger's computer without authorization. But most important vuln research doesn't set off that particular tripwire.
(This is about the Zack Whittaker story referenced on the thread, not about Dropbox's VDP, which is the ostensible subject of the thread).
Again: these are the good examples.
In the 5 months since, I haven't seen more legal drama for vuln research (again, despite multiple huge conferences with people dropping zero-day on litigious vendors on stage). Where is the evidence that this is a real problem?
(I couldn't help it, I got nerdsniped, but the case is also under seal and none of the documents available are interesting).
I think you're right that companies and prosecutors are generally tolerant of security researchers doing their thing.
But I also think the EFF is right that dangerous tools exist that can be misused by overzealous or malicious prosecutors.
I think that's the part the EFF is concerned about. At least the CFAA part.
The EFF also discussed the DMCA, but I'm not familiar with any cases of security researchers prosecuted under that law.
Can you expand on why you think some of EFF's advocacy is less well-intentioned?
Bank of 'MURICA (Bo'M) gets a phone call from some random guy (Jack) who identifies a bug in the interface between PoS systems at gas stations across the US, and whatever Bo'M internal software-mega-structure manages checking acct balances for Bo'M customers. Now, Jack is a good Samaritan; he would never use this information to steal from millions of Bo'M customers...but like...he totally could. Jack's sister Jill overhears her brother's conversation with Bo'M's security team, & decides everyone needs to know immediately about Bo'M's negligence.
Let's pretend it's going to take a ~month to fix the interface.
Is it cool for Jill to get on Reddit and post the code necessary to exploit the bug before Bo'M has a chance to protect their customers?
Is this a victimless act? Was she just being responsible? Should she have waited to be responsible?
I'm not making an argument about the public policy of disclosure. My view is: if you come about the information lawfully, publish whenever you're ready.
The point is just that collateral damage can happen when people run their mouths about important/sensitive info. Sometimes, not always or even often: just sometimes, that's not cool & should maybe be prevented if possible. Should American citizens all be given access to the launch-codes because we pay taxes?
This is a gray issue. I love the EFF but this article misses important nuance.
It's a trick question. Jill cannot post the exploit before Bo'M has a chance to protect their customers, because Bo'M already had a chance to do so. They just chose to leave their customers exposed by hiring cheap programmers who didn't know their computer science (aka math).
Think that's sophistry? Well, look at the alternative:
In a world where security researchers wait before they publish exploits, it's economically beneficial to cheaply write insecure software, wait for the White Hats to report the flaws (hopefully before the Black Hats notice), then patch them. I'd argue this strategy amounts to outsourcing security engineering to government-funded researchers. Do you want to live in such a world?
Besides, why would Bo'M need to protect their customers?! The customers did nothing wrong, Bo'M did. Therefore, Bo'M is liable for damages. So... let me rephrase your question:
> Is it cool for Jill to get on Reddit and post the code necessary to exploit the bug before Bo'M has a chance to cover their exposed backside?
I think it totally is.
> Jill cannot post the exploit before Bo'M has a chance to protect their customers, because Bo'M already had a chance to do so.
Software has bugs... I can stick a file server in a PDF by poking at the file headers. This doesn't necessarily mean we should stop using PDFs, nor that poor hiring practices at Adobe are to blame.
My argument was merely that the language used in the article seems too categorical for a topic as complex as this one. I understand that big corporations often rush to market instead of doing due diligence, but in certain realistic situations, this article would be advocating anarchy.
I personally like electricity & running water, and so I disagree with the black & white take presented.
Now, maybe those forms of hacking should be legal. I’m sympathetic to their cause. But it’s disingenuous to frame this as disclosure being illegal.
As a side note, the post implies companies are using criminal sanctions as a threat to prevent disclosure - this is itself illegal extortion.
Also, while not approving of 'weev' or his methods, I think most on HN and elsewhere would agree that merely visiting a website should not be considered a hack - https://www.wired.com/2013/03/att-hacker-gets-3-years/ - adding an additional layer to this mess.
For that reason, I don't think the MBTA hack is a great counterexample to the parent's (well-reasoned) argument.
You'll have a difficult time building a case against the CFAA based on Auernheimer's actions, since the prosecution was able to demonstrate with his own words that he knew he wasn't authorized to take the records. In fact, his appeal against the CFAA conviction failed with a citation to his pre-arrest appeal to the media to have him explain "the method of [his] theft".
Is there a standard service that announces available services? If not, how do I know if I'm 'allowed' to use Google.com:80 ? Where is that fact delineated?
The CFAA requires knowingly exceeding authorized access or doing so intentionally. If, in fact, you don’t know access is unauthorized, you’re not liable for a CFAA violation. So you can go blissfully through life doing what you, in good faith, may reasonably guess the web site owner intends you to do. You don’t need the rules “delineated”, just like for any other kind of property.
However, there is what you say about what you knew, and what a jury may reasonably infer about what you knew. If you say you didn’t know accessing port 80 was unauthorized, the jury will probably buy it, because port 80 is expected to host publicly available web services. However, if you take a URL and start manually fiddling with numbers, and you see that it yields credit card information, the jury probably won’t believe you if you say you didn’t know that access was unauthorized. Similarly if you access a resource, and the web site owner tells you not to do it, or bans your IP, now you know. CFAA prosecutions are almost invariably based on a scenario where it is abundantly clear that someone knew she wasn’t supposed to be doing what she was doing.
But, say you’re accessing a page programmatically and there is a bug in your code and you access some URL that yields credit card information, which you promptly delete because you are acting in good faith. There is no CFAA violation because you had neither knowledge nor intent. And to my knowledge, nobody has been convicted of violating the CFAA in such a situation.
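To make the good-faith handling described above concrete, here's a toy sketch of a scraper that discards any response that unexpectedly looks like it contains card data rather than storing it. The heuristic regex and function names are my own illustration, not a legal safe harbor or any real library's API:

```python
import re
import urllib.request

# Crude heuristic for payment-card-like numbers (13-16 digit runs);
# purely illustrative -- a real check would be more careful.
CARD_PATTERN = re.compile(r"\b\d{13,16}\b")

def looks_sensitive(body: str) -> bool:
    """Does this response body appear to contain card numbers?"""
    return CARD_PATTERN.search(body) is not None

def fetch_page(url: str):
    """Fetch a page; if the body unexpectedly looks sensitive,
    discard it immediately instead of storing or inspecting it."""
    with urllib.request.urlopen(url) as resp:
        body = resp.read().decode("utf-8", errors="replace")
    return None if looks_sensitive(body) else body
```

The point of the sketch is the discard-on-surprise behavior: the data is never retained, which is the "good faith" the comment describes.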
Have I committed a crime? There were no doors, no locks, no signs to tell me that this area was restricted. Just because it wasn't on the map and may have not had a clearly marked entrance, doesn't mean that I should have known it was a restricted area.
Most criminal trespass laws require at least clear signage or a previous communication, and even then, the area has to be clearly defined. You can't just wave your hand and say 'if you go over there, you're trespassing'.
There are some large (20,000+ acre) areas near me that are used as ATV playgrounds. The ATV riders are not authorized to be there, and the landowners have taken some steps to put up signs. They are not consistent or ubiquitous, and the property lines are not clearly marked or identified. Being that the area is not surveyed, law enforcement is very reluctant to charge anyone with criminal trespass because they can't even tell themselves where the legal property lines are.
Imagine trying to convince a jury beyond a reasonable doubt?
People on here talk all the time about how their digital possessions are just as important as their physical possessions (if not more so). Given that, it seems perfectly reasonable to have the same cultural norms about exploring digital spaces as physical ones.
If I broadcast a request for an IPv4 address and your DHCP server proffers an IPv4 address that I can use for the next 15min, the address of a nameserver, and the address of a router that will forward packets to the global internet, I can reasonably conclude that you intended to allow me to exchange packets on your LAN and at least attempt to use your gateway to interact with the internet. On the other hand, depending on the situation, a "403 Forbidden" could reasonably be interpreted as a request to not send that type of HTTP request anymore.
The protocol isn't the only place to look for intent, but it absolutely does express intent in some situations.
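As a sketch of that reading, a polite client would treat a 403 Forbidden as a standing "stop" signal and never retry that request, rather than hammering away. The helper name and the blacklisting behavior here are my own illustration of the norm being described:

```python
import urllib.request
import urllib.error

# URLs where the server has already answered 403; a polite client stops asking.
_forbidden = set()

def polite_get(url: str):
    """GET a URL, but interpret 403 Forbidden as a request to stop
    sending that kind of request, and never send it again."""
    if url in _forbidden:
        return None  # we were told no; don't even knock
    try:
        with urllib.request.urlopen(url) as resp:
            return resp.read()
    except urllib.error.HTTPError as e:
        if e.code == 403:
            _forbidden.add(url)  # remember the refusal
        return None
```

The design choice mirrors the DHCP example: the protocol response is taken as the owner expressing intent, and the client's behavior changes accordingly.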
If the owners don't want something public, it's trivial to lock it down--they might as well freak out when somebody uses the wrong door to enter the front of their shop.
The protocol is relevant, because it conveys information. Just as unlocked doors generally indicate permission to access, unsecured HTTP generally indicates the same. But the protocol is only one piece of the puzzle. It is not dispositive. It does not conclusively decide rights and responsibilities. If a reasonable human would discern that the protocol allowing access was probably the result of a mistake rather than intent on the part of the property owner, that is what matters.
Making a mistake doesn't relieve one of legal protection from trespass (physical or digital).
So this creates a bad situation where responsibly reporting security holes provides evidence that you knew the access was not authorized, regardless of your intent.
I don't think you actually believe what you're trying to argue here.
I can go to google.com or HN because I type the URL into a browser and the page is displayed -- I didn't need to go through an approval process or present a security token or anything else to gain access.
Now, what if I type in a URL into my browser that happens to look like company.com/users/12345/purchases and it displays a page of information. Did I "circumvent" anything in this case? Did I "exceed" my authorized access? If so, what is the fundamental difference between accessing google.com and this hypothetical URL?
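On the wire, there is no technical difference at all between those two requests, which is the crux of the question. This sketch (hypothetical hostnames and path) just builds the raw request lines to show that neither one presents a credential or bypasses anything:

```python
# At the protocol level, both requests are the same kind of thing:
# an unauthenticated GET for a path the server chooses to answer.
def build_get(host: str, path: str) -> str:
    """Render a minimal HTTP/1.1 GET request as raw text."""
    return f"GET {path} HTTP/1.1\r\nHost: {host}\r\n\r\n"

ordinary = build_get("google.com", "/")
suspect = build_get("company.com", "/users/12345/purchases")
# Only the host and path differ; no token, password, or circumvention
# step appears in either request.
```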
Your intention. It matters.
Yes, it is reductio ad absurdum to question hitting port 80 on a public site. Work a little harder, though - what about port 8080? 6734? What if you monkey with the URL?
Note that there are people here claiming that the last bit is 'hacking' under the law, and in some cases they are right.
So now we can't change URLs without hacking, but only when doing so knowingly exceeds authorization. Well, what does that mean?
That's why we have trials and juries and so on. 'Working a little harder' and trying to find trivial logical gotchas in law is a pointless thing to do and discuss because it's very much not how the legal system works. The law is not a program interpreted once by some literalist, implacable CPU.
The law has dealt with these kinds of things for hundreds of years. Intent matters.
The point I am making is about uncertainty before you get to a formal legal setting. Or is everything else you do in business do-first, hope you don't get busted later?
Legal uncertainty about edge cases is real, and a thing that dissuades people from doing things. Saying the law will figure it out is how we got here with the CFAA, not a solution to it.
I don't give a shit about the bug bounty program for the bounty per-se, I want them to exist because I want it to be legal to attempt to hack things (without breaking working systems on purpose) because blackhats do it anyway.
Further, how does one discover weaknesses without doing things? Are any and all discussions of security to be confined to the purely hypothetical?
When it comes to the DMCA, that's debatable. In a court of law. Unless, of course, you don't have the money to match their lawyers.
A computer is protected (18 USC §1030(e)(2)) in generally one of three circumstances - i) Used exclusively by a financial institution; ii) Used exclusively by the US Gov't; or iii) Affects foreign commerce or communication [usually interpreted to mean connected to the Internet]
Therefore, if you are using a non-internet connected device (i.e. Bluetooth or NFC only), it's unlikely to be a violation of the CFAA.
NOT LEGAL ADVICE, SEEK YOUR OWN LICENSED ATTORNEY
While it might not be illegal to mess with a lock you've purchased, it could be unlawful, in the sense that you might be violating a contract that you agreed to when you purchased the lock. So while I would stake money on you not getting arrested for doing that work, I wouldn't bet against you getting sued, which is a far more common occurrence. Researchers getting arrested: rare. Researchers getting sued: a little less rare (still pretty rare).
Locks are generally a let the buyer beware kinda proposition. It's up to me to decide if this lock is good enough to protect the tools in my shed.
If I'm explicitly forbidden from investigating that, I think the manufacturer should be required to say how much of your losses they'll cover when the lock fails. Obviously the current state of affairs is _none_, and I can't lawfully tell if the lock sucks or not.
We'd all be better off if lock makers were forced to choose: either I can pick my own lock (evaluate my own risk), or advertise how much insurance comes with the purchase of the lock (evaluate risk for me).
(I fully appreciate the world does not work this way. I'm just asserting it would be better for everyone if it did.)
You can agree to some punishment (like in an NDA). But if it's at a level that would destroy your life, it ought to be illegal again. And every time you agree to some punishment for exercising a fundamental right, you must gain something proportional to the right taken, otherwise it ought to be illegal again.
It's impossible to have a democracy if most of the population is enslaved by some contractual clause.
A quick intuitive verification for this claim: when you buy E&O insurance as a technologist they'll ask a lot of stupid questions (in the sense that you'll be offended they have to even ask), but they don't ask "Do you do security research?" because the underwriters believe the risk and hazard to be actuarially immaterial.
> This Policy does not cover any loss, damage, cost, claim or expenses, whether preventative, remedial or otherwise, directly or indirectly arising out of or relating to:
> (a) the calculation, comparison, differentiation, sequencing or processing of data involving the date change to the year 2000 or any other date change, including leap year calculations, by any computer system, hardware, programme or software and/or any microchip, integrated circuit or similar device in computer equipment or non-computer equipment whether the property of the Insured or not; or
> (b) any change, alteration or modification involving the date change to the year 2000 or any other date change, including leap year calculations, to any such computer system, hardware, programme or software and/or any microchip, integrated circuit or similar device in computer equipment or non-computer equipment, whether the property of the Insured or not.
> This Clause applies regardless of any other cause or event that contributes concurrently or in any sequence to the loss, damage, cost, claim or expense.
And if the law is followed, you never will know of one.
That is the problem.
I've spent the last 13 years doing this kind of work, and bumped into a lot of remote servers I don't own on assessments of devices. There've been a lot of times where I wished I could go further, but never a time where I accidentally got sucked into hacking someone else's computer by dint of probing my own device.
In either case, these are probably not CFAA violations.
So it's not illegal to disclose, just illegal to discover?
No offense intended, but this sort of wordsmithing doesn't reassure me at all.
So what would happen then? There are many, many organizations that are willing to pay for exploits, because they plan to use them. There's one such company in DC that is likely to sell to the defense industry. For a 0-day no-click exploit, $500k.
So sure, make it illegal. You only push this into the illegal side of operations and exploits. And I would much rather have exploits well known, so I can make the determination to: do nothing, take service down until fixed, or try to patch if available.
Which company is that again? You hang around with them cybersec folks too long and you keep hearing about these 6 or 7-figure payouts for expl0its by some mysterious companies.
But it's always someones friend or friend of friend or some other 'reliable' source.
Can someone here actually say that they have received, or more importantly, are on a consistent basis getting 6-figure payouts for their exploits?
For reference, Microsoft offers bug bounties up to $250k. I can see that in light of this, it's not entirely unreasonable that some taxpayer money has been wasted on buying an exploit or two for large amounts of $$$. But what I am calling into question is the myth of consistent 6/7-figure payouts for 'exploits' when sold to 'shady' companies, or that such payments would be commonplace, or such markets being generally available to security 'researchers' (read: exploit developers).
Intrigued to be proven wrong here! :)
 URL: https://www.microsoft.com/en-us/msrc/bounty
I am not going to make any claim on whether people are or are not receiving payments, but anyone who actually is would clearly not disclose it directly, because that would most likely break the terms of the payment.
The fact that no one is saying they are getting paid directly for exploits is not evidence that it is not happening.
This is fairly vague, and “up to $X”, but possibly provides a touch more concreteness than you’ve seen so far.
You’re right though; you won’t find anyone who is currently selling 0day exploits to brokers or govt who will be willing to go on record. Or even admit that they sell them.
And as an add-on question to satisfy my curiosity: how would one know, when approaching a company such as the one linked here, that the exploit sold will be used by good-guys<tm> and doesn't come back to haunt you later when it ends up being used by bad-guys<tm>?
There aren’t a ton of companies, but once you’ve worked in the general offense industry, names start to reappear regularly.
I’m not sure what your definition of good/bad guy is. If you don’t like the idea of 0day used for “cyber warfare” then it’s easy: only sell directly to the vendor. If your idea of good guy is your own nation, you can usually tell a company's alignment. They’ll have an ex-NSA CEO, or close partnership with a defense contractor. Ultimately you’re never sure though.
Edit: I’ll also add that in your earlier post you mention the high cost, and wasting of taxpayer dollars. I don’t disagree, but an interesting way to look at it is value per dollar. If you look up the cost of running a state-of-the-art attack helicopter, or even just a couple of hummers loaded up with navy seals and all their kit, you find that dropping a $1m piece of remote jailbreak malware on a bad actor is actually really amazing ROI in terms of finding out what your enemy is up to, and disrupting their plans.
Is the law the actual legal code, or should it be conjoined with the spirit and justification for the law?
If the law is the law, devoid of underlying reason, then it is no exploit to discuss edge cases in the law. It is indeed the law, regardless the intentions upon initial discussion and passing.
It also doesn't hurt that most of these edge cases can only be triggered with large amounts of money. Us normal people can look at the edge cases, but never touch.
If you or anyone else has a rate limited account and would like us to lift the rate limit, you're welcome to email us at firstname.lastname@example.org. We're happy to do that if we believe there's an intention to use HN as intended in the future.