

What to do when a company refuses to fix a vulnerability I disclosed to them? - moooooky
http://www.reddit.com/r/hacking/comments/ruf9w/what_to_do_when_a_company_refuses_to_fix_a/

======
zrgiu_
There are security companies that buy this kind of information from you
(antivirus companies, for example), so that they can patch the breaches
themselves and proudly announce that they discovered a breach and that only
their software can protect you.

I don't know how legal it is, and I understand that the breach finder wants to
publish his findings himself (for "reputation points", maybe?), and he might
lose that right by selling the information, but at least he's getting something
out of this. IANAL, but I'm pretty sure you could get in trouble for publicly
posting information on how to hack a public service (or pretty much anything,
for that matter).

~~~
tptacek
Nobody is going to buy a rate limiting bug in some random mobile application.
Actually: nobody is going to buy a rate limiting bug at all.
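For context, a "rate limiting bug" just means the service fails to throttle repeated requests from the same caller. A minimal sketch of the fixed-window throttling such a service is missing (hypothetical code, not from the thread; the class and names are illustrative):

```python
import time


class FixedWindowRateLimiter:
    """Reject requests once a caller exceeds `limit` within `window` seconds."""

    def __init__(self, limit, window=60.0):
        self.limit = limit
        self.window = window
        self.counts = {}  # caller id -> (window start time, request count)

    def allow(self, caller, now=None):
        now = time.monotonic() if now is None else now
        start, count = self.counts.get(caller, (now, 0))
        if now - start >= self.window:  # window expired: start a fresh one
            start, count = now, 0
        if count >= self.limit:         # over the limit: reject
            self.counts[caller] = (start, count)
            return False
        self.counts[caller] = (start, count + 1)
        return True
```

A service without anything like this lets a single client hammer an endpoint indefinitely, which is what makes the bug a (low-value) denial-of-service issue rather than a data exposure.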

~~~
zrgiu_
When I posted my comment, the reddit post hadn't yet been edited to say that
it's a rate limiting bug. Indeed, nobody is going to buy such a thing. Pretty
useless for any purpose, black-hat or not.

------
eslachance
If only the company is put in danger and they stubbornly refuse to resolve the
issue, I'm not exactly sure why anyone would work so hard to convince them.
The job of reporting the issue is done; a corporate decision has been made. If
that decision is to remain vulnerable, as long as it does not affect users
directly, why bother?

Unless, as others suggested, you can legally make a profit out of it, then by
all means! Otherwise, just let it go...

~~~
naner
It appears he wants to publish the vulnerability (might be a novice security
researcher) without getting sued.

~~~
tptacek
He is very, very unlikely to be sued provided that (i) he didn't explicitly
agree to a contract forbidding security research when he acquired the
application, (ii) he acquired the application lawfully, (iii) he at no point
solicited business from the vendor of the application, (iv) he didn't exploit
the vulnerability in any way that could be construed as having caused direct
damages to the vendor, and (v) he is scrupulously honest and careful about how
he writes the finding up.

Contrary to popular opinion on HN, finding vulnerabilities in software _you
yourself run_ on _your own computer_ is rarely fraught. We hear about the
exceptions in the news because they're exceptional. In reality, people publish
vulnerabilities all the time.

The same thing obviously CANNOT BE SAID about finding vulnerabilities in other
people's web applications. Finding web vulnerabilities without permission is
_highly_ fraught. You can easily find yourself both civilly _and criminally_
liable for doing so.

~~~
lawnchair_larry
I would adjust "other people's web applications" to be "in other people's
deployments."

For example, it is fine to take someone else's commercial web app, install it
on your own server, and beat it up.

~~~
tptacek
That is a good point, thanks for amending.

------
alan_cx
From an ignorance and slightly tongue in cheek POV...

...is there a difference between discovering a new exploit and discovering a
company is open to an old or well known exploit? This sounds like the latter.

I'm all for disclosure of a newly found exploit, because by doing so you are
informing everyone who might have the problem, which allows them to take
action, etc. But if this is just one business that refuses to fix a known
problem then, well, that's their stupidity, no?

See, the bit that bothers me is that publishing the "news" that one company is
vulnerable has to be a bit iffy. It's like publishing a list of buildings that
don't have good door locks or something. We don't see that in the real world,
so why would it be reasonable for the IT world? I mean, there is no legitimate
list of vulnerable buildings created by white hat burglars, is there? It's
never been legit for such burglars to gain access to a building and leave a
note describing the poor security on the CEO's desk.

~~~
Chirono

      It's never been legit for such burglars to gain access to a building and leave a note describing the poor security on the CEO's desk.
    

Unless, of course, you happen to be Richard Feynman. Which most of us aren't.

[http://www.silvertrading.net/articles_lagniappe_01_richard_f...](http://www.silvertrading.net/articles_lagniappe_01_richard_feynman.html)

~~~
thornofmight
I've had "Surely You're Joking" on my Kindle for almost a year now and have
never read it, but every time I see anything written about Feynman I realize
that I'm almost certainly missing out. He sounds like the most interesting
man.

~~~
msg
You are missing out on a readable book divided into short chapters. It's
basically all anecdotes. Easy to intersperse with your other reading.

------
noonespecial
I don't know how big the company is, but after a certain bigness, all of the
people who could fix problems like this have moved on. The only people left
are managers who fix "problems" with lawyers. A classic "when all you've got's
a hammer" situation.

They might not be refusing to fix the problem. They might actually be unable
with the tech talent they've got left.

My advice? Don't look like a nail.

------
glogla
If you contacted them non-anonymously first, you made a mistake, because they
can and will sue you if you disclose it. Judges don't understand computers and
US courts are all about draining money from someone, so they might still ruin
you out of spite, even if you disclose it in a way that leaves no proof it was
you, or even if someone else discovered and released it on their own.

The correct way would be: 1) discover the vulnerability, 2) contact them
anonymously, 3) if they don't fix it, anonymously release it to the general
public.

That way, you can still help them while protecting yourself. The third step is
optional of course.

------
fishercs
You almost sound like you're laying down an ultimatum to the company. You've
done your job by notifying them, so let sleeping giants rest. If it's a known
exploit, I don't see any reason to publish your findings; if it's something
you've come across that hasn't been published, then by all means publish away.

------
ajross
The linked post is talking about a DoS vulnerability in the service. It
doesn't impact entities other than the service provider (beyond the obvious
potential for a service outage affecting its users). I think telling them
about it is all that's required. Either they fix it or they don't; that's
between them and their users.

------
homakov
Public disclosure won't help, btw. Half of the sites here didn't fix anything:
[http://homakov.blogspot.com/2012/03/hacking-skrillformer-
mon...](http://homakov.blogspot.com/2012/03/hacking-skrillformer-
moneybookers.html)

~~~
tptacek
Did you reach out to each company and tell them, or did you assume that by
creating a public blog post about them and submitting it to Hacker News they
were bound to find out?

~~~
TheCowboy
Did you read his blog post and see that he did report the vulnerabilities and
noted which companies fixed it?

~~~
tptacek
I read the comment he wrote on HN where he said he didn't. But if Egor Homakov
says he did, my next question is "who did he report it to?"

I've been doing this for a while; maybe there's useful advice I can offer him.

------
j_baker
I'm curious what we could change legally to make this less of an issue. There's a
clear conflict of interest between doing a public good by disclosing a
vulnerability and not wanting to risk (at worst) the FBI coming after you or
(at best) losing clients. I would certainly consider it unethical to know of a
vulnerability and not disclose that information publicly, but there are so
many hurdles to doing so that I don't blame some people (especially those who
are less established) for not doing so.

It almost makes me feel that there should be a law _requiring_ disclosure of
vulnerabilities.

~~~
tptacek
The FBI is not going to come after you for publishing a DOS vulnerability in a
mobile app; in fact, you could find and publish remote code execution in an
extremely popular application (say Instagram or Twitter) without even telling
the vendor and still not be in any trouble. People do it all the time.

Most of the stories you hear about people getting in actual trouble over
vulnerability research involve web vulnerabilities. You cannot hack someone
else's web site to make a point, even if the underlying point is unimpeachable
("this application is insecure and people should know about it").

------
grannyg00se
He could just leave them alone and do nothing. It's their service; if they
don't want to respond, let them leave the vulnerability open. It doesn't
affect user privacy, so there is no duty to fellow users, as there is in other
cases where an open vulnerability means people could be losing private
information on an ongoing basis.

------
gouranga
Pastebin, then on a disclosure list.

------
ScottBurson
Seems pretty straightforward to me -- since it's a DoS that doesn't put users'
information at risk, just publish it without naming the company.

~~~
huhtenberg
That's the first sensible and ethical suggestion in this thread.

------
arien
Couldn't this vulnerability simply be published without mentioning who is
directly affected? E.g. "under x and y circumstances, it is possible to do z,
and everyone is advised to check for and correct this".

If that is not an option, it means the issue is very specific to that company,
and then what would be the purpose of releasing the vulnerability to the
public?

------
outside1234
You should contact them. If that fails, make a commit to the Rails project.

------
DannoHung
There should be a scientific journal for this sort of thing.

------
functionform
Take your business elsewhere is step 1. Your information/service is not
guaranteed if they aren't willing to protect it.

------
JoeAltmaier
Everything takes time and money. It may get fixed, eventually. What does
'refusal' amount to?

------
un1xl0ser
Move on, work on something new.

I recommend two shots of wheatgrass and a smoothie.

------
michaelcampbell
I'm sure Randal Schwartz can offer some advice on this.

------
lawnchair_larry
Name and shame.

------
cinquemb
If you really want them to fix it, whatever you decide, be anon about it. If
you want your name attached to it, move on.

------
homakov
Use It.

~~~
mcherm
Illegal. And juvenile.

~~~
ktizo
Illegal maybe. Juvenile? If illegally making use of hacks is juvenile, someone
better inform the Mexican Zetas. Very politely.

------
aneth
I think you're supposed to exploit the vulnerability in relatively innocuous
but deeply disturbing ways, get banned, then complain about how you only meant
well, then be lauded on Hacker News as a martyr who should have been embraced
by the hacked company.

~~~
rdtsc
Or rather you contact them. Then they ban you and possibly send the FBI after
you for "illegally accessing a remote computer system" or other such crime and
then you are punished for all your work. If you tell them you will disclose
your research on a certain date they'll go after you for extortion.

I wrote this before and I'll say it again: I don't believe in "White Hacker"
as a label. Corporations do not do well when their vulnerabilities are
exposed. They don't have a way to handle "White Hackers" unless they are the
ones hiring them. Most will strike back and punch you in the face no matter
how good your intentions are. So if you already spent the time researching and
finding the vulnerability, just disclose it on a security forum or, if you
want to profit, sell it on the black market.

~~~
ericgearhart
I believe you mean "White Hat Hacker"... I think everyone gets the gist of
what you mean but just wanted to clarify in case someone's thinking you're a
racist hating on "Whitie" or something :)

~~~
dclowd9901
Are there really people in this community who don't know this?

~~~
sciurus
I've heard the phrase "white hat" used frequently to describe hackers. I've
never heard the phrase "white hacker".

    
    
      About 526,000 results
      http://www.google.com/#hl=en&q=%22white+hat%22+hacker
    
      About 65,000 results
      http://www.google.com/search?hl=en&q=%22white%20hacker%22

~~~
dclowd9901
You know what? I totally mentally replaced the word "white hacker" with "white
hat", and only realized it after you pointed it out.

------
madaxe
Nothing. If they're unwilling to fix it, they'll end up facing the
consequences when someone less scrupulous than yourself discovers it. If you
do publish it, odds are they'll issue a DMCA takedown and try to sue.

Speaking from experience...

~~~
cperciva
_If you do publish it, odds are they'll issue a DMCA takedown and try to sue._

My experience is quite to the contrary. Even Intel, as poor as their security
response was, didn't try to take legal action against me. (I was lucky that I
was unemployed at the time, though...)

~~~
rdtsc
> didn't try to take legal action against me

But that is an interesting attitude. Instead of being indignant that they
didn't offer to pay you for doing their security research for them (or at
least publicly thank you), you just seem glad that they didn't sue you.

It is like volunteering to help someone and then just being glad they didn't
beat you up in the end.

So it seems like there is not much benefit to doing this (there is a benefit
if you prevent other people's information from being stolen), but immediately
there is no upside. You either get ignored or you get sued. And anyone who
gets sued by a company with a full department of lawyers on retainer is pretty
much guaranteed to have a bad time.

~~~
cperciva
_It is like volunteering to help someone and then just being glad they didn't
beat you up in the end._

I didn't publish the hyperthreading vulnerability to help Intel. I published
it to help Intel's customers.

------
xxiao
do nothing, it's none of your business, why bother?

