Protecting Security Researchers (dropbox.com)
396 points by dsr12 on Mar 21, 2018 | 40 comments



"A pledge to not initiate legal action for security research conducted pursuant to the policy, including good faith, accidental violations."

This is the major takeaway. In the past few years, more and more shady companies have been filing lawsuits or taking other legal action against researchers who they claim went out of scope, or outright broke the law, by doing research and disclosure on their software/device/product. These companies are trying to stifle legitimate security research because they are too lazy or ignorant to fix their problems. It's nice to see such a large entity take a public stance on how bug bounties and security research in general should operate.


This is the first I’ve heard of such a widespread problem. Do you have any examples?


I know of one case where a bug bounty researcher found massive vulnerabilities in a Chinese company's drones (link below). The company then claimed he went out of scope (he didn't) and threatened to sue him. A few other cases were reported on ZDNet a few months back, mostly website/product owners leaving their systems vulnerable after being contacted. When researchers publicly disclosed months or years later, the companies tried to take (or did take) legal action.

https://news.ycombinator.com/item?id=15721268


I summarized this here: https://news.ycombinator.com/item?id=16642155

Long story short: they didn't sue him. Their legal team demanded that he delete DJI IP and secrets. It wasn't a friendly demand, but that's all it looks like it was.


The earliest instance I remember was the Dmitry Sklyarov (ElcomSoft) case. In 2001 Sklyarov presented flaws he had found in the then-popular Adobe eBook "copy protection". When he gave his talk at DEF CON in Las Vegas, he was arrested.

https://en.wikipedia.org/wiki/United_States_v._Elcom_Ltd.

The irony in this story couldn't be bigger:

- He is a Russian who exercised free speech in the US, and got arrested right there. Imagine that happening with the roles of US and Russia reversed!

- Charges against him were only dropped in exchange for his agreeing to testify and to leave the US. Again, imagine that happening with the roles of US and Russia reversed!

- He essentially provided free research, sharing his findings with the public instead of abusing them in private.

- It was not even a serious security issue. It was just a flaw in a system which nobody expected to work for long anyway. (Really, how could copy protection ever work without exercising full control over all audio and video hardware? And even those could be reverse engineered over time.)

- Plain copyright law was sufficient to cause all that trouble for him. No computer security laws or homeland security laws were needed.



Probably the most egregious example would be Weev/Andrew Auernheimer and AT&T.

https://www.wired.com/2013/03/att-hacker-gets-3-years/

https://www.wired.com/2014/04/att-hacker-conviction-vacated/


weev attempted to extort and blackmail AT&T. Painting him as an innocent, well-intentioned security researcher is a slap in the face to everyone who actually is one.

Charging him under the CFAA was a ridiculous abuse of that law




These are mostly bad examples, and pretty much none of them are examples of the phenomenon being talked about on this particular subthread:

1. Keeper is suing Dan Goodin, a reporter, for (I think?) defamation. (Keeper is evil and you should never use them, but they're not pursuing the researcher under CFAA or DMCA).

2. Chris Vickery found a database backup of a whole company, analyzed it and found that they were shady, and published directly from the database backup. That's not really vulnerability research, and is a bit akin to finding a vulnerability and then using it to dump an account table to Pastebin.

3. PwC C&D'd (but didn't sue) a firm called ESNC. The software ESNC was testing was available only under an NDA license; I assume ESNC got access transitively through a client. This happens a lot in enterprise pentesting. ESNC published anyway, and nothing happened.

4. DJI rescinded KF's authorization to continue testing when he refused to accept the terms of a bounty (which included both disclosure limitations [which may or may not have been reasonable] and a promise not to do post-compromise pivoting [which is entirely DJI's prerogative]). KF rejects the bounty terms, and DJI legal gets involved and demands that he delete any DJI IP or secrets he's taken. This is unfriendly, but not a lawsuit.


We're talking about the threat of lawsuits, right? Can we be so sure that the word "lawsuit" was never mentioned in any of those discussions?


The predicate at the root of this thread is "starting to file lawsuits or take legal action", against researchers.

Maybe a better way to put it: it's hard to see how any of the examples in this article would be addressed by Dropbox's VDP.


That's a big leap.

Given the rather asymmetric power in these interactions, even something as simple as being responded to on legal letterhead rather than with an email from the security department has a stifling effect, I'd argue.


Which example from that article would be addressed by Dropbox's VDP?


This is a wonderful thing to see and I hope that more vendors will follow suit. Amit Elazari [1] has been doing some amazing work in this field, advocating for legal safe harbours for security researchers. She posts regular reviews of security policies encouraging vendors to help protect security researchers, using #legalbugbounty on Twitter. [2] In fact, it appears that Amit was responsible for some of the changes to Dropbox's security policy: https://twitter.com/d0nutptr/status/973322158351921152. Well done, Dropbox and Amit!

[1]: https://twitter.com/AmitElazari

[2]: https://twitter.com/hashtag/legalbugbounty


> 3. A clear statement that we consider actions consistent with the policy as constituting “authorized” conduct under the Computer Fraud and Abuse Act (CFAA).

That's hugely important if you want bug bounty programs to appeal to people who are distrustful of federal prosecutors and the FBI.

Without it, there's a lot of anxiety and uncertainty with testing live systems.


This is an extremely researcher-friendly VDP, and, powerfully, includes essentially a demand that Dropbox partners adopt comparable VDPs; Facebook did something similar (but less formally) a few years ago when BlueCoat started threatening researchers.


At first glance, this seems awesome. I have to applaud Dropbox for their forward stance here.


Considering their terrible track record for maintaining users’ privacy, I’m glad they’re taking a step in the right direction.


This concerns the bug bounty itself: is there an internal Dropbox bug bounty program as well? Since the data is not end-to-end encrypted, I assume the biggest threat to customer data is Dropbox employees.


I don't know of any company that publicly advertises the bonuses they give (if any) to employees for finding vulnerabilities in their own software. That seems more like part of someone's job description.


Yeah the incentives get very murky very quickly there. One employee might ignore a vuln now so as to get a bounty for it later. A manager might give a known vuln-maker code access. Any group of employees might conspire to do either of those or something else at some remove. An actual attacker might manipulate any such conspiracy... It might be interesting as a study of Gambit Roulette, but not as any way to run a firm.

It's possible that GP meant giving independent researchers access to internal tools. That would be interesting but also very difficult to pull off safely.


Yes, but it would be even more important. Employees have access to system architecture details which makes it easier for them to harm users.


I appreciate the transparency.

A few years ago I was testing a service acquired by Dropbox. While I was in the middle of testing, they updated the scope of the Dropbox acquisitions program on HackerOne to exclude that service, and I didn't notice (I checked later with the "last updated" diff). Unfortunately the vulnerability(ies) I discovered didn't count, and their reply was all "no harm, no foul, thanks anyway."


On the dropbox security issue, does anyone have a way to get a proper changelog for new versions? It does not seem to exist.


New versions of.....?

The Google Play Store has a change log.

Websites virtually never have change logs.


Dropbox also provides a desktop application: https://www.dropbox.com/help/desktop-web/desktop-application...

I've never used it, but I know that it gets updated quite regularly without much information given: https://www.dropboxforum.com/t5/Desktop-client-builds/Stable...

And as you can see, I'm not the only one asking for a changelog for this product: > thomas l.14: CHANGE LOOOOOOOOGS!

edit: It's a shame if this announcement only concerns the website and not the full environment :(


archive.org might be the only way to see what changed.


"A pledge that we won’t bring a Digital Millennium Copyright Act (DCMA) action against a researcher for research consistent with the policy."

Minor issue, but it's DMCA-- if someone reading this has edit rights on the page you may want to fix this.


Thank you for posting this. This is what most companies should be pushing for!


It seems like all of these issues are tied to identity. Is there not some anonymous security-reporting site à la SecureDrop?


It seems like the issue would be for people who want to publish their results, get paid, and otherwise be able to publicize their accomplishments.


This is excellent


Justin Shafer


The best way to get a company to do anything is public disclosure. However, it's reasonable to reach out to them first, anonymously (so that they don't sue you or kill you), and hope that they are decent people too.


Did you read the link at all? Or did you just see the title and decide you needed to vomit your opinion?


Personal attacks will get you banned here. Please don't do this again, regardless of how empty another comment is.

You also broke the guideline asking people not to do the "did you read" thing. It would be good if you'd (re-)read https://news.ycombinator.com/newsguidelines.html and follow them when commenting here.


Did you just assume someone hadn't read the posting guidelines?

...

Turtles all the way down.


I see your point, but the purpose of the awkward "(re-)" bit is to indicate the opposite.



