You read Cory Doctorow talking about vulnerability research and you get the impression that there's a war out there on security researchers. But of course, everything else in Doctorow's article aside, there isn't: the field of vulnerability research has never been healthier, and there have never been more companies explicitly authorizing testing of their servers than there are now.
There isn't an epidemic of prosecutions of vulnerability researchers --- in fact, there are virtually no such prosecutions, despite 8-10 conferences worth of well-publicized independent security teardowns of everything from payroll systems to automotive ECUs. There are so many random real-world things getting torn down by researchers that Black Hat USA (the industry's biggest vuln research conference) had to make a whole separate track to capture all the stunt hacking. I can't remember the last time someone was even C&D'd off of giving a talk.
I'm a vulnerability researcher (I've been doing that work professionally since the mid-1990s). I've been threatened legally several times, but all of those threats came more than 8 years ago. It has never been better or easier to be a vulnerability researcher.
Telling the truth about defects in technology isn't illegal.
Doctorow has no actual connection to the field, just a sort of EFF-style rooting interest in it. I'm glad he approves of the work I do, but he's not someone who I'd look to for information about what's threatening us. I'm trying to think of something that might be a threat... credentialism, maybe? That's the best I can come up with. Everything is easier today, more tools are available, things are cheaper, more technical knowledge is public; there are challenges in other parts of the tech industry, but vuln research, not so much.
> the field of vulnerability research has never been healthier, and there have never been more companies explicitly authorizing testing of their servers than there are now.
> in fact, there are virtually no such prosecutions
> Telling the truth about defects in technology isn't illegal.
These statements don't seem to add up. If it's legal to tell the truth about defects, then all of the EFF's work on this case is a waste of time and they're spreading misinformation.
If it's legal to do what they're doing then there should be zero prosecutions.
Companies wouldn't need to authorize anyone because what they are doing is legal.
I kind of love this comment. "This doesn't add up... if what you're saying is true, EFF is misleading people".
Yes. I think EFF often does mislead people. They do some important work, and some less well-intentioned advocacy stuff.
I supported my argument with evidence. To wit: if what researchers do is "illegal", consider this year's Black Hat schedule, and ask why none of these presentations generated so much as a C&D, let alone a threatened criminal prosecution: MDM attacks, attacks on self-driving cars (including Tesla’s ECU), breaks in “16 desktop applications, 29 websites, and 34 mobile applications” in the fintech space, attacks on ATMs, industrial control gateways, VPNs, ICS firewall products, antivirus software, “Akamai, Varnish Cache, Squid Proxy, Fastly, IBM WebSphere, Oracle WebLogic, F5”, a smartphone baseband, every macOS firewall product, mobile point of sale systems including “Square, SumUp, iZettle, and Paypal”, SAML SSOs, SATCOM, WinVote voting machines, LTE base stations, z/OS for Christ's sake, streaming video watermarking schemes, “a cross-section of smart city devices currently in use today”, Toshiba FlashAir SD cards, warning siren systems from ATI Systems, cellular devices manufactured by Sierra Wireless and many others, implanted medical devices, text-to-speech systems, and a hardware backdoor in an x86 clone.
What prosecutions would you put on the other side of this scale to balance it out? It sounds like the best argument the thread has seen so far is Auernheimer, who dumped a database and bragged to the media about his "theft". I don't support that prosecution and didn't at the time, but if that's the best evidence you've got, I'm not going to be up nights worrying about this.
I work in this field. Cory Doctorow's relation to it is that of a gadfly. I'm saying outright: I think this article is misleading.
You'll have no trouble finding vulnerability researchers to take the other side of my argument, that we're all a threatened species ready to be pushed underground at any moment by overzealous prosecutors. We've been saying that for literally decades, and I think it has more to do with us wanting to feel exceptional and daring than it does with any real risks we face.
Companies giving researchers permission to do something is inherently different from it being legal to do that thing. In one case you are able to operate only at the whims of the companies you're researching. The companies can, for any reason and at any time, withdraw their consent and take legal action against a researcher. You can debate the likelihood of that ad nauseam, but it would still seem to be a potential risk. In the other case, you are free to operate as you see fit because you have every legal right to investigate and go about your business. As far as I can tell, you're arguing that the former circumstances are good enough, whereas the author and apparently the EFF and ACLU are trying to obtain the latter.
Almost none of those were vendors giving permission; I left out dozens of talks where it was clear permission had been given (for instance: no Apple or Microsoft talks on that list).
Again, I'm not disagreeing that the current state of affairs might be good enough for people to work in this field with some degree of confidence they won't suddenly be facing crippling consequences for work that everyone generally agrees is important. I'm simply pointing out that a handshake agreement that companies won't sue or press charges against researchers is a far cry from the legal protection that the EFF and ACLU hope to gain. Imagine a scenario where some vindictive CEO/founder (I can think of a few) takes offense at a researcher publicizing vulns/flaws in a new flagship product that embarrasses them or the company. Imagine that CEO decides to make a point by violating the "understanding" researchers and companies have so far enjoyed, and files suit or, even worse, presses charges. Currently that researcher has no protection. Even if that's a rare and unlikely occurrence, that researcher probably isn't going to care when facing ruinous legal fees, fines, or even jail time. And those are the consequences.
What handshake agreement are you referring to? There's usually no agreements with them at all. A lot of the companies in that list are notoriously litigious. If this was such a risk, why is nobody getting C&D'd? I helped manage one of the larger software security firms in the country, staffed with people using their bench time to do independent assessments of random things that interested them. I can count on zero fingers the number of people who were ever threatened with a suit. What's the evidence that this is a real problem? To me, the evidence loudly suggests that it is not.
There are people who have legit legal issues with the research that they're doing. They fall generally into two buckets:
* The rapidly shrinking bucket of people doing content protection security research for vendors that don't want copy protection stuff broken (and related stuff, like jailbreaking consoles).
* People who are testing other people's computers, not their own.
Most of the legal drama you see in our field comes from that second group. I fully agree: it is legally dangerous to test software running on a stranger's computer without authorization. But most important vuln research doesn't set off that particular tripwire.
There was a thread a couple months back from a story where a tech reporter went out of his way to find (and solicit) stories of legal threats to vulnerability researchers. He ran his best examples in the story, and they were all flimsy. Here's a comment I wrote for the thread:
(This is about the Zack Whittaker story referenced on the thread, not about Dropbox's VDP, which is the ostensible subject of the thread).
Again: these are the good examples.
In the 5 months since, I haven't seen more legal drama for vuln research (again, despite multiple huge conferences with people dropping zero-day on litigious vendors on stage). Where is the evidence that this is a real problem?
That's not my claim. I'm saying that lots of companies aren't tolerant of security research, and, despite their best efforts, still can't do much about it, because as long as you're not reaching out and probing their website, they can't do anything about it.
Ok so I do appreciate the EFF's zeal in preemptively quashing a bug-hunter-hunt, but here's a scenario that throws their thesis into question:
Bank of 'MURICA (Bo'M) gets a phone call from some random guy (Jack) who identifies a bug in the interface between PoS systems at gas stations across the US, and whatever Bo'M internal software-mega-structure manages checking acct balances for Bo'M customers. Now, Jack is a good Samaritan; he would never use this information to steal from millions of Bo'M customers...but like...he totally could. Jack's sister Jill overhears her brother's conversation with Bo'M's security team, & decides everyone needs to know immediately about Bo'M's negligence.
Let's pretend it's going to take about a month to fix the interface.
Is it cool for Jill to get on Reddit and post the code necessary to exploit the bug before Bo'M has a chance to protect their customers?
Is this a victimless act? Was she just being responsible? Should she have waited to be responsible?
Okay, but what about potential customers of BoA that have no idea that the bank they want to entrust their money to has a huge known exploit that increases their chances of identity theft? Does the customer not have a right to know what they are signing up for?
I'm sure they do. Even if I were already a BoA customer in this theoretical scenario, I would also prefer to be in the know so I could avoid that specific POS/gas station company until the issue was resolved, instead of wondering why my identity was stolen and dealing with the fallout while unknowingly blaming my bank.
You'd be in the know about one random thing, but there will be dozens of others, many of them also known to different subsets of people.
I'm not making an argument about the public policy of disclosure. My view is: if you come about the information lawfully, publish whenever you're ready.
I mean, that's a decent point, but for the sake of my argument: let's pretend we're talking about a nuclear missile silo instead of a bank.
The point is just that collateral damage can happen when people run their mouths about important/sensitive info. Sometimes, not always or even often: just sometimes, that's not cool & should maybe be prevented if possible. Should American citizens all be given access to the launch-codes because we pay taxes?
This is a gray issue. I love the EFF but this article misses important nuance.
> Is it cool for Jill to get on Reddit and post the code necessary to exploit the bug before Bo'M has a chance to protect their customers?
It's a trick question. Jill cannot post the exploit before Bo'M has a chance to protect their customers, because Bo'M already had a chance to do so. They just chose to leave their customers exposed by hiring cheap programmers who didn't know their computer science (aka math).
Think that's sophistry? Well, look at the alternative:
In a world where security researchers wait before they publish exploits, it's economically beneficial to cheaply write insecure software, wait for the White Hats to report the flaws (hopefully before the Black Hats notice), then patch them. I'd argue this strategy amounts to outsourcing security engineering to government-funded researchers. Do you want to live in such a world?
Besides, why would Bo'M need to protect their customers?! The customers did nothing wrong, Bo'M did. Therefore, Bo'M is liable for damages. So... let me rephrase your question:
> Is it cool for Jill to get on Reddit and post the code necessary to exploit the bug before Bo'M has a chance to cover their exposed backside?
I wouldn't call this sophistry, but I do disagree with your logic here:
> Jill cannot post the exploit before Bo'M has a chance to protect their customers, because Bo'M already had a chance to do so.
Software has bugs...I can stick a file server in a pdf by poking at the file-headers. This doesn't necessarily mean we should stop using pdfs, nor that poor hiring practices at Adobe are to blame.
My argument was merely that the language used in the article seems too categorical for a topic as complex as this one. I understand that big corporations often rush to market instead of doing due-diligence, but in certain realistic situations, this article would be advocating anarchy.
I personally like electricity & running water, and so I disagree with the black & white take presented.
This is obfuscating the law. It’s not illegal to disclose anything, it’s illegal to hack (in certain circumstances). Nobody is getting prosecuted for disclosure, they’re getting prosecuted for hacking.
Now, maybe those forms of hacking should be legal. I’m sympathetic to their cause. But it’s disingenuous to frame this as disclosure being illegal.
As a side note, the post implies companies are using criminal sanctions as a threat to prevent disclosure - this is itself illegal extortion.
As far as the CFAA, you may have a point. However, the EFF is spot on when it comes to the DMCA, which does in fact allow for speech to be restricted even when no identifiable 'hack' under the CFAA is alleged. The MBTA case is probably the most famous - https://www.wired.com/2008/08/eff-to-appeal-r/
Also, while not approving of 'weev' or his methods, I think most on HN and elsewhere would agree that merely visiting a website should not be considered a hack - https://www.wired.com/2013/03/att-hacker-gets-3-years/ - adding an additional layer to this mess.
The MBTA case is in a bunch of ways analogous to cases where people have gotten in trouble for probing websites. The real target of the MBTA hack isn't the cards themselves, but the centralized MBTA fare collection system. You can probably do whatever you want to a card you own (in particular: you can conduct whatever research you like involving reading the cards), but once you're interacting with the fare collection system, you're communicating with a computer system you do not own or operate, and are back in CFAA country.
For that reason, I don't think the MBTA hack is a great counterexample to the parent's (well-reasoned) argument.
You'll have a difficult time building a case against the CFAA based on Auernheimer's actions, since the prosecution was able to demonstrate with his own words that he knew he wasn't authorized to take the records. In fact, his appeal against the CFAA conviction failed with a citation to his pre-arrest appeal to the media to let him explain “the method of [his] theft”.
I've always wanted to know: using CFAA logic, how does one know if someone is offering a service to be used online?
Is there a standard service that announces available services? If not, how do I know if I'm 'allowed' to use Google.com:80 ? Where is that fact delineated?
> knowingly accessed a computer without authorization or exceeding authorized access
The CFAA requires knowingly exceeding authorized access, or doing so intentionally. If, in fact, you don’t know access is unauthorized, you’re not liable for a CFAA violation. So you can go blissfully through life doing what you, in good faith, may reasonably guess the web site owner intends you to do. You don’t need the rules “delineated”, just like for any other kind of property.
However, there is what you say about what you knew, and what a jury may reasonably infer about what you knew. If you say you didn’t know accessing port 80 was unauthorized, the jury will probably buy it, because port 80 is expected to host publicly available web services. However, if you take a URL and start manually fiddling with numbers, and you see that it yields credit card information, the jury probably won’t believe you if you say you didn’t know that access was unauthorized.[1] Similarly if you access a resource, and the web site owner tells you not to do it, or bans your IP, now you know. CFAA prosecutions are almost invariably based on a scenario where it is abundantly clear that someone knew she wasn’t supposed to be doing what she was doing.
[1] But, say you’re accessing a page programmatically and there is a bug in your code and you access some URL that yields credit card information, which you promptly delete because you are acting in good faith. There is no CFAA violation because you had neither knowledge nor intent. And to my knowledge, nobody has been convicted of violating the CFAA in such a situation.
Suppose I go to Disney World and, being a curious person, see something interesting and wander into an unmarked restricted area while checking it out.
Have I committed a crime? There were no doors, no locks, no signs to tell me that this area was restricted. Just because it wasn't on the map and may not have had a clearly marked entrance doesn't mean that I should have known it was a restricted area.
Most criminal trespass laws require at least clear signage or a previous communication, and even then, the area has to be clearly defined. You can't just wave your hand and say 'if you go over there, you're trespassing.'
There are some large (20,000+ acre) areas near me that are used as ATV playgrounds. The ATV riders are not authorized to be there, and the landowners have taken some steps to put up signs. The signs are not consistent or ubiquitous, and the property lines are not clearly marked or identified. Because the area is not surveyed, law enforcement is very reluctant to charge anyone with criminal trespass: they can't even tell themselves where the legal property lines are.
Imagine trying to convince a jury beyond a reasonable doubt?
The problem comes when we've normalized the idea of exploring buildings with unlocked doors as "yeah, this is probably exceeding authorized access".
People on here talk all the time about how their digital possessions are just as important as their physical possessions (if not more so). Given that, it seems perfectly reasonable to have the same cultural norms about exploring digital spaces as physical ones.
The internet is not a building, but to follow the analogy, HTTP by design has no unlocked closed doors — only open doors and locked doors, with an explicit and clear distinction between them.
This is the kind of argument you'd expect to see from a writer at Slate, not from technologists who actually understand applications. Practically by definition, almost all application-layer vulnerabilities, from remote code execution through SQL injection through remote file access, involve requests that HTTP "allows" and processes. In fact, one of the most lethal bug classes --- SSRF --- simply involves getting an HTTP server to accept and pass on a request somewhere else! The premise that a request is authorized so long as it doesn't generate a 403 implies that virtually all modern application vulnerabilities can be exploited lawfully. And that's a ridiculous proposition.
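To make that SSRF point concrete, here is a minimal sketch of the pattern, not a claim about any specific product; the endpoint and parameter names are hypothetical, and it assumes Flask and the requests library are installed:

    # A hypothetical "URL preview" endpoint that fetches whatever URL the client
    # supplies. Every request involved is one that HTTP happily "allows" and
    # answers with 200 -- no 403, no locked door -- yet a caller can use it to
    # reach hosts that only the server can see.
    import requests
    from flask import Flask, request

    app = Flask(__name__)

    @app.route("/preview")
    def preview():
        url = request.args.get("url", "")
        # No allow-list, no scheme or host validation: this is the SSRF bug.
        upstream = requests.get(url, timeout=5)
        return upstream.text, upstream.status_code

    if __name__ == "__main__":
        app.run(port=8000)

A request like /preview?url=http://10.0.0.5/admin respects HTTP's "open door" semantics at every hop, which is exactly why "the protocol allowed it" can't be the test for authorization.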
The Internet is composed of physical servers, owned by humans. Those servers are accessed by other humans, who are perfectly capable of predicting how the human owners of the servers would want those servers to be used. The protocol isn’t what defines those human interactions and expectations.
A protocol - originally the diplomatic customs, procedures, conventions, and etiquette for relations between states - is by definition one of the ways of expressing intent.
If I broadcast a request for an IPv4 address and your DHCP server proffers an IPv4 address that I can use for the next 15min, the address of a nameserver, and the address of a router that will forward packets to the global internet, I can reasonably conclude that you intended to allow me to exchange packets on your LAN and at least attempt to use your gateway to interact with the internet. On the other hand, depending on the situation, a "403 Forbidden" could reasonably be interpreted as a request to not send that type of HTTP request anymore.
The protocol isn't the only place to look for intent, but it absolutely does express intent in some situations.
The protocol explicitly has mechanisms for protected and non-protected resources.
If the owners don't want something public, it's trivial to lock it down--they might as well freak out when somebody uses the wrong door to enter the front of their shop.
At the end of the day, laws govern the interactions between humans. The law imposes on everyone an obligation to think about the intentions and expectations of other humans. (This is what separates us from animals--the ability to reason about the mental states of others!)
The protocol is relevant, because it conveys information. Just as unlocked doors generally indicate permission to access, unsecured HTTP generally indicates the same. But the protocol is only one piece of the puzzle. It is not dispositive. It does not conclusively decide rights and responsibilities. If a reasonable human would discern that the protocol allowing access was probably the result of a mistake rather than intent on the part of the property owner, that is what matters.
Turning the question around: if you did discover that you were able to access a site, and it concerned you enough that you felt obliged to report it as a security problem, then by your own judgement you have decided you shouldn't have had access.
So this creates a bad situation where responsibly reporting security holes provides evidence that you knew the access was not authorized, regardless of your intent.
I think you're missing the (subtle) point that is trying to be made.
I can go to google.com or HN because I type the URL into a browser and the page is displayed -- I didn't need to go through an approval process or present a security token or anything else to gain access.
Now, what if I type in a URL into my browser that happens to look like company.com/users/12345/purchases and it displays a page of information. Did I "circumvent" anything in this case? Did I "exceed" my authorized access? If so, what is the fundamental difference between accessing google.com and this hypothetical URL?
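As a sketch of just how thin that line is (reusing the hypothetical company.com URL from the comment above, with Python's requests library purely for illustration):

    # Two ordinary GET requests. Nothing at the protocol level marks the second
    # one as "circumvention"; the only difference is the URL that was typed.
    # The company.com endpoint is the hypothetical from the comment above.
    import requests

    public = requests.get("https://www.google.com/")
    guessed = requests.get("https://company.com/users/12345/purchases")

    print(public.status_code, guessed.status_code)
    # If the second call comes back 200 with someone else's data, whether that
    # "exceeded authorized access" is a legal question, not a technical one.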
The point isn't what people believe about thought experiments, the point is the uncertainty in the law.
Yes, it is reductio ad absurdum to question hitting port 80 on a public site. Work a little harder, though - what about port 8080? 6734? What if you monkey with the URL?
Note that there are people here claiming that the last bit is 'hacking' under the law, and in some cases they are right.
So now we can't change URLs without hacking, but only when doing so knowingly exceeds authorization. Well, what does that mean?
Let me offer an example: Someone uses the dumb disable-right-click Javascript to stop people from downloading their images. I disable JS and save an image. Put aside copyright and focus on the CFAA: Have I broken it? If so, does this seem to be the sort of thing that should be a felony to you?
> Work a little harder, though - what about port 8080? 6734? What if you monkey with the URL?
That's why we have trials and juries and so on. 'Working a little harder' and trying to find trivial logical gotchas in law is a pointless thing to do and discuss because it's very much not how the legal system works. The law is not a program interpreted once by some literalist, implacable CPU.
Yes, the law is great at reaching decisions. Not always good ones, but one is sure to be reached. If that is all that matters to you, you're thinking like a legal academic, not a security researcher.
The point I am making is about uncertainty before you get to a formal legal setting. Or is everything else you do in business do-first, hope you don't get busted later?
Legal uncertainty about edge cases is real, and a thing that dissuades people from doing things. Saying the law will figure it out is how we got here with the CFAA, not a solution to it.
It's like everyone has forgotten the principle of first sale... and it really annoys me how much HN users allow the Overton window to be moved on topics of freedom like this.
The problem is that we're in a society where increasingly important things are behind APIs and we're supposed to just trust that they won't fuck it up unless they have a bug bounty program.
I don't give a shit about the bug bounty program for the bounty per-se, I want them to exist because I want it to be legal to attempt to hack things (without breaking working systems on purpose) because blackhats do it anyway.
In a world where "illegal hacking" can mean "making a document you legally own on a device you legally own usable with other software", there is most definitely a problem.
Further, how does one discover weaknesses without doing things? Are any and all discussions of security to be confined to the purely hypothetical?
When it comes to the DMCA, that's debatable. In a court of law. Unless, of course, you don't have the money to match their lawyers.
http://www.cs.cmu.edu/~dst/GeoHot/
Depends on the lock and the method of attack you are using - CFAA violations are usually tied to 18 USC §1030 (4) (https://www.law.cornell.edu/uscode/text/18/1030), which only apply to "protected computer[s]"
A computer is protected (18 USC §1030(e)(2)) in generally one of three circumstances - i) Used exclusively by a financial institution; ii) Used exclusively by the US Gov't; or iii) Affects foreign commerce or communication [usually interpreted to mean connected to the Internet]
Therefore, if you are using a non-internet connected device (i.e. Bluetooth or NFC only), it's unlikely to be a violation of the CFAA.
Almost certainly not (if the lock somehow phones home to a central server and your research involves subverting the server, you could have problems, but no lock I know of works that way).
While it might not be illegal to mess with a lock you've purchased, it could be unlawful, in the sense that you might be violating a contract that you agreed to when you purchased the lock. So while I would stake money on you not getting arrested for doing that work, I wouldn't bet against you getting sued, which is a far more common occurrence. Researchers getting arrested: rare. Researchers getting sued: a little less rare (still pretty rare).
There's a kind of funny warranty/liability split when you can't test the quality of your own lock.
Locks are generally a let-the-buyer-beware kind of proposition. It's up to me to decide if this lock is good enough to protect the tools in my shed.
If I'm explicitly forbidden from investigating that, I think the manufacturer should be required to say how much of my losses they'll cover when the lock fails. Obviously the current state of affairs is _none_, and I can't lawfully tell if the lock sucks or not.
We'd all be better off if lock makers were forced to choose: either I can pick my own lock (evaluate my own risk), or they advertise how much insurance comes with the purchase of the lock (evaluate the risk for me).
(I fully appreciate the world does not work this way. I'm just asserting it would be better for everyone if it did.)
Such one-sided contracts, where there was never a possibility of negotiation, need to be tossed out. This applies to everything from the EULA of a smart lock to a lease agreement with some super-corporation located in a different state. Such contracts can only be consensual when they involve two entities of similar power.
In a democracy, one cannot sign away fundamental rights in a contract. Not in one you get to read before signing, and definitely not in one you can only read after signing.
You can agree to some punishment (like in an NDA). But if it's at a level that would destroy your life, it ought to be illegal again. And every time you agree to some punishment for exercising a fundamental right, you must gain something proportional to the right given up; otherwise it ought to be illegal again.
It's impossible to have a democracy if most of the population is bound into servitude by some contractual clause.
> Researchers getting sued: a little less rare (still pretty rare).
A quick intuitive verification for this claim: when you buy E&O insurance as a technologist they'll ask a lot of stupid questions (in the sense that you'll be offended they have to even ask), but they don't ask "Do you do security research?", because the underwriters believe the risk and hazard to be actuarially immaterial.
I mean, my business insurance still has a Y2K rider:
> This Policy does not cover any loss, damage, cost, claim or expenses, whether preventative, remedial or otherwise, directly or indirectly arising out of or relating to:
> (a) the calculation, comparison, differentiation, sequencing or processing of data involving the date change to the year 2000 or any other date change, including leap year calculations, by any computer system, hardware, programme or software and/or any microchip, integrated circuit or similar device in computer equipment or non-computer equipment whether the property of the Insured or not; or
> (b) any change, alteration or modification involving the date change to the year 2000 or any other date change, including leap year calculations, to any such computer system, hardware, programme or software and/or any microchip, integrated circuit or similar device in computer equipment or non-computer equipment, whether the property of the Insured or not.
> This Clause applies regardless of any other cause or event that contributes concurrently or in any sequence to the loss, damage, cost, claim or expense.
And further, if you're doing this on your own and never publish anything, it's not like Bluetooth Locks, Inc knows what you're up to and just magically serves you with a lawsuit.
I don't think this is the case. Not only are there probably no locks that work like this on the market, but it is even less likely that, were one to exist, it would be impossible to discover the remote server connection without probing the remote server. Remember also that the law we're talking about isn't strict liability; you have to knowingly exceed authorization.
I've spent the last 13 years doing this kind of work, and bumped into a lot of remote servers I don't own on assessments of devices. There've been a lot of times where I wished I could go further, but never a time where I accidentally got sucked into hacking someone else's computer by dint of probing my own device.
But you're not trying to access the software, you're trying to unlock the lock. DMCA concerns itself with copyrighted works behind some protection mechanism: technically, if you're accessing the lock's firmware, discover it's encrypted, then attempt to decrypt it, you're violating the DMCA. In addition, if you publish the decrypted software, you're infringing copyright.
In either case, these are probably not CFAA violations.
You don't really need to "hack" to find vulnerabilities. As an extreme point, spotting your own username and password in a URL would not be considered hacking.
You may know that, but does the average person who might find themselves in a jury pool know that? Would they understand the distinction? To the average lay person, this would be like saying alizarin crimson is more than just a red paint.
That's fine. If Congress illegalizes speech of this nature, then public disclosures of exploits and other code errors won't happen nearly as often.
So what would happen then? There are many, many organizations that are willing to pay for exploits, because they plan to use them. There's one such company in DC that is likely to sell to the defense industry. For a 0-day no-click exploit: $500k.
So sure, make it illegal. You'd only push this into the illegal side of operations and exploits. And I would much rather have exploits be well known, so I can decide whether to do nothing, take the service down until it's fixed, or try to patch if a patch is available.
Which company is that again? You hang around with them cybersec folks too long and you keep hearing about these 6 or 7-figure payouts for expl0its by some mysterious companies.
But it's always someone's friend, or a friend of a friend, or some other 'reliable' source.
Can someone here actually say that they have received, or more importantly, are on a consistent basis getting 6-figure payouts for their exploits?
For reference, Microsoft offers bug bounties up to $250k [0]. I can see that, in light of this, it's not entirely unreasonable that some taxpayer money has been wasted on buying an exploit or two for large amounts of $$$. But what I am calling into question is the myth of consistent 6/7-figure payouts for 'exploits' sold to 'shady' companies, or that such payments are commonplace, or that such markets are generally available to security 'researchers' (read: exploit developers).
> Can someone here actually say that they have received, or more importantly, are on a consistent basis getting 6-figure payouts for their exploits?
I am not going to make any claim on whether people are or are not receiving payments, but anyone who actually is would clearly not disclose it directly, because that would most likely break the terms of the payment.
The fact that no one is saying they are getting paid directly for exploits is not evidence that it isn't happening.
This is fairly vague, and “up to $X”, but it possibly provides a touch more concreteness than you’ve seen so far.
You’re right though; you won’t find anyone who is currently selling 0day exploits to brokers or govt who will be willing to go on record. Or even admit that they sell them.
Thanks for this link. It looks like my skepticism may not have been well founded. Are there [many] other similar companies out there? Now that my curiosity is piqued: they are the only one mentioned on Wikipedia [0] in relation to this activity.
And as an add-on question, to satisfy my curiosity: when approaching a company such as the one linked here, how would one know that the exploit sold will be used by good-guys<tm> and won't come back to haunt you later when it ends up being used by bad-guys<tm>?
disclaimer: I haven’t sold a single exploit, so am taking a few guesses here. Approx ~5 of my relatively close friends are exploit devs, and most of them don’t talk about anything beyond the name of the company they work for.
There aren’t a ton of companies, but once you’ve worked in the general offense industry, names start to reappear regularly.
I’m not sure what your definition of good/bad guy is. If you don’t like the idea of 0day being used for “cyber warfare”, then it’s easy: only sell directly to the vendor. If your idea of a good guy is your own nation, you can usually tell a company’s alignment. They’ll have an ex-NSA CEO, or a close partnership with a defense contractor. Ultimately you’re never sure, though.
Edit: I’ll also add that in your earlier post you mention the high cost and the wasting of taxpayer dollars. I don’t disagree, but an interesting way to look at it is value per dollar. If you look up the cost of running a state-of-the-art attack helicopter, or even just a couple of Humvees loaded up with Navy SEALs and all their kit, you find that dropping a $1m piece of remote jailbreak malware on a bad actor is actually really amazing ROI in terms of finding out what your enemy is up to and disrupting their plans.
It should then also be illegal to disclose exploits of the tax system, etc. Hacking is not limited to electronics, but can also happen to any of our social/legal constructs.
Ugh. Your comment reminds me of the Ethereum DAO debacle.
Is the law the actual legal code, or should it be conjoined with the spirit and justification for the law?
If the law is the law, devoid of underlying reason, then it is no exploit to discuss edge cases in the law. It is indeed the law, regardless the intentions upon initial discussion and passing.
It also doesn't hurt that most of these edge cases can only be triggered with large amounts of money. Us normal people can look at the edge cases, but never touch.
There are a large number of penetration testers that operate in the open because it is legal and there are frameworks for disclosure. What happens when we make this illegal? Will those people simply cease and desist?
The full-disclosure mailing list has been around for 20 years and has no problem garnering attention. Sending an anonymous email in that direction, with no way to track it back, isn’t particularly difficult.
Attempting to have a reasonable conversation in this thread is impossible because mods have allowed certain accounts to post as much as they want and brigade counter-opinions.
Most accounts are allowed to post as much as they want! We do put rate limits on some if they have a history of posting too many low-quality comments too quickly, or of getting involved in flamewars. Also, new accounts are rate limited for a while, to cut back on trolling.
If you or anyone else has a rate limited account and would like us to lift the rate limit, you're welcome to email us at hn@ycombinator.com. We're happy to do that if we believe there's an intention to use HN as intended in the future.