
“Why are you releasing a full exploit just minutes after the patch is released?” - jameshart
http://www.openwall.com/lists/oss-security/2015/07/23/17
======
Coding_Cat
I agree with Leif here. Sure, releasing a PoC serves a purpose even in these
cases, but this is just idiotic. One cannot expect outsiders to patch their
systems this quickly. I would release without a PoC for a few weeks, maybe a
month depending on how widespread the exploit is believed to be, to give
everyone enough time to update. Humans need sleep, and some patches need
downtime after all. Perhaps a tiered release would be best? Release a PoC to a
few trusted partners to verify and pen-test the new patches before giving a
full-disclosure. If there's something I'm missing here I'd love to hear it,
cause frankly I feel like I'm just stating the obvious here...

~~~
nkantar
> If there's something I'm missing here I'd love to hear it, cause frankly I
> feel like I'm just stating the obvious here...

Likewise — I'm very much a n00b sysadmin, but I'm having a tough time
understanding how giving people a buffer _isn't_ desirable.

~~~
Titanous
1) It helps admins verify that the patch was successfully applied.

2) It gives security researchers more surface area to evaluate when looking
for similar bugs or bugs in the patch.

3) A PoC not existing provides a false sense of security. Just because one is
not published does not prevent attackers from creating and using an exploit
(in many cases before the vulnerability is disclosed).

~~~
dimman
Again, delaying the PoC exploit code release gives people some time to patch
their systems. How much time depends on the skill of the attacker and
complexity of the exploit, but some time is better than no time.

~~~
Joky
And again, the point of Titanous is that delaying the PoC provides a _false_
sense of security. Saying "some time is better than no time" _is_ the problem:
you get a false impression that you _have_ time, when the fact that the PoC
is not published does not mean it is not already being used by some attackers.
Delaying does not give people an incentive to update. I agree that there
is a tradeoff, but it is not as simple as you present it (delaying means giving
some time).

~~~
EmanueleAina
It's not a matter of "impressions" but rather a matter of chance: when a
vulnerability is published you may have the chance to fix it before someone
figures out how to use it in an exploit.

If the exploit is released immediately, that chance becomes effectively nil.

Of course this doesn't change the fact that everybody should apply the patch
ASAP, but delaying the exploit simply adds some delay to the bad guys.

------
chucky_z
For those curious... click through the entire thread with thread-next>.

You'll be rewarded with some really great discussion!

------
tptacek
This is a local privilege escalation bug. If you're a SAAS company protecting
user data and you were relying in some way on RCE not being a gameover flaw
because it wouldn't give attackers root, you were probably already boned.

It's also a 90s style Unix file management bug. It's not like patches and
vague advisories were going to keep the exploit under wraps.

That doesn't make it unreasonable to complain about exploits being released in
advisories (though: this is a complaint that is approximately as old as
security advisories), but it should dampen the outrage a bit.

------
Titanous
Working exploits are useful to test and iterate on to ensure that everything
is patched correctly (both for admins and researchers).

/r/netsec has some good discussion:
[https://www.reddit.com/r/netsec/comments/3ed4fu/cve20153245_...](https://www.reddit.com/r/netsec/comments/3ed4fu/cve20153245_and_cve20153245_local_exploit_that/)

~~~
dimman
They sure are valuable; he never said otherwise. His point was that the PoC
exploit was released before or right at the time that patches became available,
and delaying the exploit PoC would give people a bit of time to patch their
systems.

~~~
emidln
Delaying PoC doesn't preclude others from analyzing the patch and writing
exploits. The only thing delaying PoC does is provide a false sense of
security.

~~~
danielweber
I wish people would actually think about issues before saying "false sense of
security." It's as useless as shouting "that's security through obscurity!"
without thinking about things.

If you want to get rid of the False Sense Of Security, announce that there are
working exploits out there.

~~~
emidln
The rationale for not releasing the PoC is that it gives legit users a chance
to patch. This ignores the possibility of an attacker quickly developing an
exploit, as well as the scenario where bad actors already know about and are
exploiting the vulnerability. Not releasing a PoC allows users to think they
have more time to patch than they actually do. This is a sense of security
that is not actually derived from being secure in any way. What else should I
call it?

This also prevents users from being able to test their machine's vulnerability
after patching to verify that the patch remedies the problem in their
installation. Again, "apply the patch and hope that it works in my
installation (with my config options)" is a piss-poor security policy.
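The contrast drawn here can be sketched: a version check only trusts package
metadata, while a PoC-style probe tests actual behavior. A minimal toy
illustration (all version numbers are hypothetical, and the probe is a
placeholder, not a real exploit):

```python
# Toy contrast between two ways of deciding a host is "safe".
# All version strings are made up; probe is a stand-in for a harmless PoC.

def version_check(installed, first_fixed):
    """'Trust the metadata' check: passes as soon as the package version
    claims to include the fix, whether or not it works in this config."""
    def to_tuple(v):
        return tuple(int(part) for part in v.split("."))
    return to_tuple(installed) >= to_tuple(first_fixed)

def poc_check(probe):
    """'Test the behavior' check: run a harmless proof-of-concept probe
    and pass only if the exploit path no longer works."""
    return not probe()

# A patched box whose local config re-enables the vulnerable path would
# pass the first check but fail the second:
print(version_check("0.61.0", "0.60.7"))  # True: metadata says fixed
print(poc_check(lambda: True))            # False: probe still succeeds
```

This is of course a caricature, but it is why admins in the thread want a
working PoC: only the second kind of check reflects their actual installation.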

~~~
danielweber
> Not releasing a PoC allows users to think they have more time to patch than
> they actually do

This assumes a bunch of stuff about the psychology of everyone else that you
cannot possibly know.

You are making decisions for other people because you think you know their
business better than they actually do.

> This is a sense of security

You are not a telepath who can read people's minds.

------
fieryscribe
That's what an embargo is and why it exists. And when it's up, it's up.

This is essentially a rehashing of the Microsoft/Google squabble a few months
ago.

~~~
danielweber
No, Google gave Microsoft X days to fix and release, and MS missed the
deadline. (Assuming I'm thinking of the same issue as you.)

Google didn't release PoC exploit code the same instant as Microsoft released
the fix.

------
Nomentatus
Please pardon what may be a naive suggestion. This isn't my area and illness
had me "benched" for a long while. Is there an inverted solution?

Maybe it's worth releasing not a simple PoC, but an unnecessarily large ball
of executable or testing procedure that incorporates the PoC for testing
purposes, but also does many harmless, exploit-like but nonsensical things,
performing only one action that matters (the test). A hairball of mostly trash
code or procedure that is not trivial to untangle, in other words.

The idea is to create enough of a mess to stop the script kiddies from quickly
knowing what to modify for their own purposes, not forever; but for a time.

Just maybe it would be possible to create a hairball that takes longer to
untangle or trace than just having a pro tear apart the patch and roll their
own attack, ignoring the hairball. If so, providing the test/PoC/hairball is
not giving the pros in the black hats any extra time. (But in any case, at
least you are not amusing all the script kiddies.)

If you can make a worthy hairball, you might want to release the hairball
just a little ahead of the patch; enough to whet admins' appetites for the
patch - inverting the current order of release. (Or not.)

Again, this may not be practical in all cases or any; and my apologies if
that's so.

PS There may be an argument in here for providing an "unnecessarily" large and
complex patch, too, to make it harder to reverse engineer - perhaps worth
considering.

------
eyeareque
They do it for personal marketing reasons. Someone else could have posted a
PoC soon after and stolen some of their spotlight.

I agree, it would be nice to wait a little while for people to get their
patches installed. But also, people like to have a way to verify that the
patch worked.

------
wcummings
Responsible disclosure is a courtesy, not an obligation; criticizing it reeks
of entitlement.

~~~
mpdehaan2
I strongly disagree, and don't follow this logic.

Responsible disclosure is intended to be ethical, the opposite is ... well,
irresponsible.

I've worked with irresponsible security disclosurists before, including some
that agree to a window only to dump _another_ small variation after the window
is up - rather than working to make sure the product is as good as it could
be.

Giving end users time to patch so they are not owned is quite an ethical thing
to do. The end goal is fewer compromised systems.

Releasing the exploit at the same time as the patch achieves the opposite
effect and increases the number of compromised systems.

It's never been "we're giving you two weeks, get it done fast because the
exploit is going out then", it should be "we'll give you enough time to fix
this properly, and then give the customers time to update once announced".

I personally would always release when something got done early, just because
the person on the other end is usually unpredictable. However, the person on
the other end shouldn't assume development of the fix didn't take the full
interval.

Let people pen-test after they have had a chance to update and secure their
systems.
Having the ability to prove a fix is not worth systems being owned prior to
being able to fix them. The damage is already done at that point.

I've worked on a lot of CVE reports, and about half the time the person on the
other end just wants credit; they aren't really going to hang around and try
to help, and often they supply terse information (i.e. no exploit PoC to the
vendor) as some form of game. I'd much rather this always be more
user-focused: the ultimate desire should always be to help the end user, and
often users are less educated and don't have dedicated security teams.

~~~
tptacek
"Responsible disclosure" is a term of art coined by a group of vendors and
consultants with close ties to vendors. Baked into the term is the assumption
that vendors have some kind of proprietary claim on research done by unrelated
third parties --- that, having done work of their own volition and at their
own expense, vuln researchers have an obligation to share it with vendors.

Many researchers _do_ share and coordinate, as a courtesy to the whole
community. But the idea that they're obliged to is a little disquieting.

If vendors want to ensure that they get some control over the release
schedules on their flaws, they can do what Google does and pay a shitload of
money to build internal teams that can outcompete commercial research teams.
Large companies that haven't come close to doing that shouldn't get to throw
terms like "responsible disclosure" around too freely.

~~~
CHY872
Although I take your point of principle, I'm not really sure I see any
incentive to not share vulns. From what I can think, the options are:

1) Actively exploit - I think we can agree that providing it to someone who
will actively exploit _is_ ethically dubious, or at least can be classified as
irresponsible?

2) Share with the vendor. Ethically, this is fine.

3) Do nothing with it. Ethically, this is fine, although pointless.

4) If you make software that's supposed to detect and/or block the use of such
vulnerabilities, add it to the detection system. I don't think this is a 'good
guy' thing to do, although I suppose it could in principle be what's best for
the company.

Having said that, my guess is that this last one has the smallest amount of
commercial value, since it turns what might be a few months of work into a
tiny part of a bigger piece of software; it's an investment that probably
ranges (when overheads are considered) from $5-50k that has no value at all
unless someone else finds the same bug.

But if someone else finds the bug, they could as easily be another security
researcher as a bad guy, and then you've kinda lost out.

Have I missed anything here?

~~~
tptacek
In this case we're not even talking about a vendor not getting access to a bug
found by a third party (though that does happen). We're talking about a series
of vendors notified ahead of time who now expect to have a say in how the
researcher notifies others.

The general response a lot of vuln researchers have to this kind of sniping is
that it should be directed to the person who wrote code after, say, 1995 that
directly edited /etc/passwd instead of rebuilding and linking it.

------
davidgerard
The actual answer is in the thread:

[http://www.openwall.com/lists/oss-security/2015/07/23/19](http://www.openwall.com/lists/oss-security/2015/07/23/19)

"That's how coordinated release dates work. Instead of trying to shame Qualys
for not following your arbitrary views on what is and isn't "Responsible
Disclosure", perhaps you should make sure Red Hat releases patches hours
before the CRD, like Ubuntu does?"

The rest is the usual rehashed discussion on this topic.

~~~
danielweber
> perhaps you should make sure Red Hat releases patches hours before the CRD,
> like Ubuntu does?"

We could end up repasting the entire thread, but the very next comment is from
Ubuntu denying that behavior. Releasing early is basically violating the
embargo.

~~~
cmurf
"There was absolutely nothing wrong with Qualys' timing. When the embargo
ends, it ends." And then later some comments about people's panties being
twisted, etc. which ends the thread.

Reading the entire thing is worthwhile.

------
bakhy
there are a lot of arguments here that delaying the exploit release gives a
false sense of security, because we cannot prove that bad guys do not already
have the exploit, or because they will develop one within hours.

if to have real security we must prove that bad guys do not have an exploit,
would that not lead to a conclusion that there is no real security at all? it
is safe to assume that exploits for undisclosed vulnerabilities exist. and,
regarding the "several hours" argument, didn't the release of the exploit
reduce those several hours to zero? instead of not being sure if the bad guys
have the exploit, or if they will develop it in 1 or 3 hours, we are now
certain that they have it. instead of a false sense of security, we now have
no security.

it seems obvious that little to nothing can be done about those who already
have the exploit. and given the wide agreement that developing it will take a
couple of hours, wouldn't an optimal exploit release delay be 1-2 hours?

and as far as i could notice, only one very downvoted comment mentions that a
motivating factor for a quick release could be to be the first to break the
news. certainly sounds like good PR for a security company. why is this
unspeakable? :)

------
tzs
For a particular system to actually be exploited, you need someone who (1)
knows an exploit, (2) wants to use it on that particular system and (3) has
access to that particular system.

A lot of the analysis in the comments here does not seem to be taking all of
these requirements into account. They are particularly relevant in this case
because, I believe, the exploit is a local privilege escalation for people
with shell access.
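The three requirements tzs lists form a simple conjunction: remove any one of
them and the system is not actually exploited. A toy sketch of that point
(names are illustrative, not a real risk model):

```python
# Toy model of tzs's argument: a particular system is exploited only when
# all three conditions hold for some attacker at the same time.
def actually_exploited(knows_exploit, targets_this_system, has_access):
    return knows_exploit and targets_this_system and has_access

# For a local privilege escalation, has_access means shell access, so a
# public PoC flips only the first condition:
print(actually_exploited(True, True, False))  # False: no shell, no exploit
```

The relevance to this bug is the third term: publishing the PoC changes
nothing for hosts where untrusted users have no shell in the first place.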

------
tlo
I recently reported a security issue to a quite popular open source project
(which has at least some company support). The fix is ready, but because other
security fixes are being coordinated into one big release, it is - after
almost two months - still not released. I wonder if this is normal? What else
can you do? Full disclosure?

~~~
Titanous
It is normal, but not desirable. You should feel free to exercise full
disclosure if the vendor is not being responsible.

> [W]ithout the threat of full disclosure, responsible disclosure would not
> work, and vendors would go back to ignoring security vulnerabilities.

[https://www.schneier.com/blog/archives/2012/06/on_securing_p...](https://www.schneier.com/blog/archives/2012/06/on_securing_pot.html)

------
seorphates
To ensure that test kits, catches and patches prevail. But that's only a
guess.

------
Nomentatus
Obfuscate the hell out of the PoC, then release it simultaneously.

------
nobody_nowhere
Increases the need for Qualys security tools, no?

~~~
Sanddancer
Just the opposite. Releasing a PoC means that /other/ security companies can
add the exploit to their behavioral databases that much quicker.

------
agounaris
self advertisement??

