“We reported this vulnerability to Oracle, the latest update from them is that they are still looking into it, while in fact the latest version of Oracle VirtualBox version 5.2.18 has silently introduced a patch without giving credit or mentioning of the vulnerability report.”
The part that grandparent is quoting starts at about 38 minutes in.
This kind of attitude goes all the way back to 2002 https://www.theregister.co.uk/2002/01/16/oracle_security_cla...
But that's about it.
> While the crashing bug was reported to the VirtualBox tracker (https://www.virtualbox.org/ticket/16444), it was never considered a security vulnerability, and is not marked as one. This ticket is 15 months old at the time of writing this post and still marked as unresolved.
They might not be The Worst, but at 15 months, they're not great.
Very, very rarely will a Sev1 crash in production software be instantly reproducible. When it is, it’s an utter embarrassment. More often the full set of conditions which lead to the crash are unknown, rare, and hard to fully quantify all at once.
And yet, we know with certainty that the crash did happen and by definition it is a valid bug in any sane world where “don’t crash” is an absolute requirement.
Any crashing bug in a hypervisor must be assumed to be security critical until proven otherwise. Even after “proving” a crash can’t possibly be exploited, you should still assume that a clever attacker will figure out a way. The overlap between “crashing bugs” and “security exploits” is far too large for the two to be treated as disjoint.
If you absolutely can't crash, you need to build an OS that caters to that - modern OSes explicitly prioritize other goals, like performance.
Because at least they plainly don't hide it and try to be nice.
I also think that they're the worst in the industry.
My personal bias would also tell me Oracle is because of their business practices, but we were arguing about support practices.
My perceived worst definition was not about only business practices BTW. I meant in every category.
Nonetheless, supporting the products you sold is also a business. :)
I'm paraphrasing his 3 reasons for disclosing immediately:
1. It's unacceptable to wait half a year until a vulnerability is patched.
2. Bug bounties are riddled with tricks to delay you, shenanigans as to whether they'll pay you or not, and games to low ball the price.
3. It's arrogant to wait months while meanwhile preparing for a big boastful disclosure.
I'm thinking that we could add to this list as follows:
4. The vulnerability might already be well-known and being used by black hats and secret government agencies. Releasing the details immediately puts all of us on an equal footing.
5. Punish the vendors for having such awful security. Just the other day, HN had an article about Crucial SSDs that encrypt the drive with a key that doesn't depend on the password; you can decrypt the drive without the password just by patching the firmware. Is Crucial going to get punished for such an outrageous lie about their encryption? The marketplace mechanisms that force vendors to pay proper attention to security are trust in the brand, independent security reviews (thank you security researchers!), government regulation (no thank you), and immediately naming and shaming them without giving them time to cover it up or downplay it.
6. Punish the users for not demanding higher security. Of course, we could suffer as result, but it's not the security researcher who caused the bug and it's not the security researcher who would be exploiting it. There needs to be demand from users for higher security in their purchases.
7. We get to see all the details if the vulnerability is disclosed immediately. If the vulnerability is disclosed after "responsible disclosure", plenty of times you don't get the raw details. I wouldn't be surprised if many security researchers are signed to NDAs (or simply threatened by lawyers), so we don't hear about the bugs at all. Having the full details published immediately advances our knowledge in the security field more than a partial disclosure months later (or a non-disclosure).
The marketplace gives very few shits about security vulnerabilities, despite the constant stream of outrageous defects. The real market force is selling consulting and security gizmos imo. Nobody is ditching PeopleSoft because it’s a security disaster — they double down and buy accessories to secure it.
Big companies like Microsoft got security religion when the prospect of liability started becoming real, particularly when Blaster/Welchia paralyzed big networks in the 2000s.
As to your own more general arguments, they lead me to believe you're somewhat manichaean, and possibly a bit too eager to assign guilt and follow up with punishment.
Specifically, punishing users for "not demanding higher security" is a suggestion that can only be made when the metric ("high security") has become an objective by itself, completely divorced from real-world outcome that spawned it, namely to avoid people being harmed.
It's unclear whether punishing users could reliably work as a proxy to get vendors to increase security. So it is already assured that some users will be harmed by your idea, with no positive effect that could possibly justify that harm.
If, however, the mechanism works as envisioned by you, it would work just as well without helping it along: the very same effect will occur organically, whenever insecurity results in harm. And where it doesn't, nothing is lost. Quite the opposite: insecurity that does not result in harm is quite obviously something good.
It's a well-known failure mode of the human mind to so fully engage with some intermediate task that one forgets the initial motivation. Hence police officers arresting 6-year olds and soldiers "just following orders". Here, your suggestion is the result of fetishising one specific quality of software, namely security. Secure software, created in a process so beautiful it is a piece of art by itself, does not only shed its initial justification: Its pursuit suddenly legitimises actions specifically intended to injure those you once set out to protect.
Yeah, plus, you know: that mechanism assumes that fast disclosure harms users. So you're kinda contradicting all those other points about fast disclosure being helpful.
As to (5): You're not giving any reasons why early disclosure would result in more harm to the vendor, except for the very idea of responsible disclosure itself, namely lowering real-world harm by having a patch or other mitigation ready at the moment of disclosure. Here, you are falling into the same trap of being more interested in vengeance than justice, much like the first officer arriving at the crime scene putting another bullet in the victim to ensure that horrible criminal will face trial for murder, and not just assault.
>SSDs that encrypt the drive with a key that doesn't depend on the password
AFAIK, this is the standard practice in software hard drive/filesystem encryption - LUKS, TrueCrypt, etc. - for practical reasons, and with no negative impact on security if properly implemented. AFAIK again, this is also how Firefox handles encryption of stored website login/password credentials: they are always encrypted using a locally generated & held key; if the user supplies her own password, this key is encrypted with that password, with no change to the actual login/password store.
Contrast the following two scenarios.

Scenario 1 - derive the key directly from the password:

- derive the encryption key from the user-supplied password [potentially weak key]
- encrypt the whole device / filesystem with the key [very long process, prone to interruption]
- user wishes to change the password
- re-encrypt the whole device / filesystem with the new key [very long process, prone to interruption]

Scenario 2 - wrap a random key with the password:

- generate a cryptographically secure random or pseudorandom key
- store the key on the device
- encrypt the device with that key from the first write onward
- when the user sets a password, encrypt the stored key with it [no change to the content of the device necessary]
- from that point the user password is necessary to use the decryption/encryption key
- when the user changes the password, re-encrypt the stored key only [no change to the content of the device necessary]
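The key-wrapping idea in the second scenario can be sketched as follows. This is a toy Python illustration, not a real implementation: it uses the stdlib's PBKDF2 plus XOR in place of a proper authenticated key-wrap cipher such as AES-KW, and all names are made up.

```python
import hashlib
import secrets

def wrap_key(master_key: bytes, password: str, salt: bytes) -> bytes:
    """XOR the master key with a password-derived key (toy stand-in for AES-KW)."""
    kek = hashlib.pbkdf2_hmac("sha256", password.encode(), salt,
                              200_000, dklen=len(master_key))
    return bytes(a ^ b for a, b in zip(master_key, kek))

unwrap_key = wrap_key  # XOR wrapping is its own inverse

# The device generates a random master key once; all data is encrypted with it.
master = secrets.token_bytes(32)
salt = secrets.token_bytes(16)

wrapped = wrap_key(master, "hunter2", salt)            # user sets a password
assert unwrap_key(wrapped, "hunter2", salt) == master

# Password change: re-wrap the 32-byte key only; the bulk data is untouched.
rewrapped = wrap_key(master, "correct horse", salt)
assert unwrap_key(rewrapped, "correct horse", salt) == master
```

This is why a password change is effectively instant in the second scenario: only a few dozen bytes of wrapped key get rewritten, never the terabytes of encrypted content.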
The second scenario is also more amenable to implementing various multi-password schemes, password-recovery techniques, and encryption secured with both passwords and hardware tokens.
If the key were properly encrypted (as you describe) the password check could not have been bypassed.
The critical point you're missing is that the Crucial SSD was not doing that step. If the key doesn't depend on the password, then the raw key has to exist somewhere on the Crucial hard drive or in its firmware, so the task becomes an easy hunt for the key with a debugger. No password cracking required.
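To make that "hunt for the key" concrete: a common first pass over a firmware dump is an entropy scan, since raw random key material looks very different from code and strings. A minimal sketch, with an illustrative window size and threshold (not from the article):

```python
import math
from collections import Counter

def shannon_entropy(window: bytes) -> float:
    """Bits per byte of a window; 32 distinct random bytes score close to 5.0."""
    n = len(window)
    return -sum(c / n * math.log2(c / n) for c in Counter(window).values())

def key_candidates(blob: bytes, key_len: int = 32, threshold: float = 4.5):
    """Offsets of high-entropy windows -- plausible raw key material."""
    return [i for i in range(len(blob) - key_len + 1)
            if shannon_entropy(blob[i:i + key_len]) > threshold]

# Toy firmware image: low-entropy filler around an embedded "key".
blob = bytes(200) + bytes(range(32)) + bytes(200)
print(200 in key_candidates(blob))  # → True: the embedded key is flagged
```

In practice you would confirm candidates with a debugger, as the parent comment says, by watching which bytes actually reach the crypto engine.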
If the issue actually lies with VirtualBox, VirtualBox is owned/maintained by Oracle, and based on other interactions I've seen with Oracle I wouldn't be surprised if others have submitted exploits to them before and they were ignored.
> There is no driver code involved.
Yea I see that step #1 in the 'exploit algo' is to remove the e1000, I missed that earlier:
> An attacker unloads e1000.ko loaded by default in Linux guests and loads the exploit's LKM.
I'm no security expert, but the feeling I get from other discussions is that big players have acted dishonestly with regards to proper compensation of bug bounties. It seems that sad state of affairs is being protested.
Putting aside the ethics of publishing this 0day, I feel like it's important to critique the more nuanced point the author is making, rather than critique a caricature of it.
Hard lessons are needed. Having attempted to disclose serious vulnerabilities in T-Mobile USA's APIs by reaching out repeatedly, I've found that most vendors will not patch in an urgent manner, and some (like T-Mobile) are content to leak customer info indefinitely.
It is a culture problem, and it will take (financial and reputational) pain to alter the existing corporate cultures.
"There’s no regulations or liability. The market largely doesn’t care. They outsource security work, even stuff as easy as 15 min of AFL, to unpaid labor that has to beg for payouts that are often below market rate for a paid security professional. If they refuse to do secure development, I say just publish the vulnerabilities or sell them to Zerodium. Also, keep recommending secure alternatives to common, vulnerable software.
I did have another idea when looking at the fact that high-security software is always too expensive for most to buy or sold/free at a loss. Companies like Zerodium pay a fortune for vulnerabilities in software. It’s always the same software, too, whose developers keep adding preventable vulnerabilities. Usually a company making piles of money off it, too. So, sell vulnerabilities in those apps which already have red flags for security-conscious users, make a bit of money for yourself out of that, and spend the rest (eg majority) on developing secure alternatives. For example, selling vulnerabilities in Nginx to carefully extend and tool-check lwan, in consumer routers to fund an OpenBSD-based router, and/or iOS to fund HardenedAndroid (or new mobile OS). Stuff like that.
Hardly anyone will pay for real security. They’ll pay for vulnerabilities, though. Sad it comes to ideas like that but it’s one of the only ones that easily generates the required revenue."
An “Uber for vulnerabilities”.
Having said that, I do tend to think “slap a market on it” can often lead to perverse outcomes.
The main trouble is that for it to fully work, you need to have the big corps bidding against black hats in this market. I can't see that happening. It'd have big corps dirtying their hands too openly.
The other trouble is that a lot of the damages (or value) of software bugs aren't readily fungible into dollars. For example, to actually profit from the Ashley Madison hack you'd need to blackmail a whole lot of people, which is incredibly time consuming. So software firms would be able to underpay for most exploits: the amount they'd have to pay is at most what a black hat can profit from the bug, which is necessarily less than the real damages because the black hat bears the cost of making the exploit fungible.
Is this really a thing? Who are these firms and what is their take?
So basically, their value.
It's not a slice of cake that is exchanging hands. The attacker might only be interested in the cherry on the top but he could also destroy the rest of the cake in the process.
Like somehow being able to have a third party negotiate payout rates for bug bounties.
I have no idea what that might look like.
Sequence looks something like: Security researcher submits exploit to the union. Union verifies it and decides it's worth $x. They inform software firm of the exploit and a deadline for payment. If payment is received before deadline, they get full, private disclosure. If not, then exploit is made public. Union takes a cut.
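That sequence could be sketched like this. Everything here is hypothetical: the `Submission` shape, the 20% cut, and the vendor name are all made up to illustrate the flow described above.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta
from typing import Optional

UNION_CUT = 0.20  # hypothetical cut the union keeps

@dataclass
class Submission:
    vendor: str
    price: float           # the $x the union decided the exploit is worth
    deadline: datetime     # payment deadline given to the vendor
    paid_at: Optional[datetime] = None  # set when (if) the vendor pays

def resolve(sub: Submission) -> str:
    """Private disclosure if the vendor paid in time, public otherwise."""
    if sub.paid_at is not None and sub.paid_at <= sub.deadline:
        payout = sub.price * (1 - UNION_CUT)
        return f"private disclosure to {sub.vendor}; researcher receives {payout:.2f}"
    return "public disclosure"

# Vendor pays before the deadline -> private disclosure, union takes its cut.
now = datetime(2018, 11, 21)
sub = Submission("ExampleSoft", 10_000, deadline=now + timedelta(days=30), paid_at=now)
print(resolve(sub))  # → private disclosure to ExampleSoft; researcher receives 8000.00
```

The interesting design question is exactly the one raised below: whether the "pay by the deadline or it goes public" step can be framed so it doesn't read as blackmail.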
Security researchers don't really need a market maker. (It's not a real market: Actually converting exploits into money is typically antisocial and illegal.) They need someone to negotiate for them.
"I have remote code execution in your product, pay me XXX or I'll tell everyone".
I don't actually know if it is blackmail, but if it is, hiding behind a union isn't enough to make it not-blackmail.
Yeah. Just look at where we are today. But calling it "perverse outcome" is sugar coating it.
If it weren't for profit being the king of all (and being the current common sense), then we would've put more time into making software more secure.
The problem is not profit, it's that these security issues don't really lower it enough. I've seen the profit motive working very well in a couple of companies when the threat of lowering them (in the form of large fines) appeared thanks to the GDPR.
From the article: "Elevated privileges are required to load a driver in both OSs. It's common and isn't considered an insurmountable obstacle. Look at Pwn2Own contest where researcher use exploit chains: a browser opened a malicious website in the guest OS is exploited, a browser sandbox escape is made to gain full ring 3 access, an operating system vulnerability is exploited to pave a way to ring 0 from where there are anything you need to attack a hypervisor from the guest OS."
Defense in depth means caring about every link of the chain.
Everything tends to be undervalued when there is only one buyer.
Also purchase processes tend to be slow when there is only one buyer.
It's almost as if there needs to be competition for the sale of the disclosure ... although that would have its own issues of course.
Another idea is a public timed/buy/disclosure board that offers security bugs to the vendor at a certain price but if the vendor does not want to pay within the time then it's released publicly. Effectively this is a concept in which the researcher who found the bug decides on the price and the timeframe in which it will be paid, rather than placing the decision in the hands of the vendor. This too of course has issues. It's likely that things will go this way though because security vulnerabilities are worth a great deal to companies.
There's an opportunity right here for the startup who wishes to publicly list vulnerabilities for sale to the vendor (only) at a given price within a given time with escrow, else the researcher releases to the public. You'd have to find a way to present it such that it is not extortion, which might be hard, but in a way that's sort of what bug bounty programs are - reverse extortion. Bug bounties are sort of saying "we'll pay you if you can exploit us but promise not to", whereas extortion is saying "I know how to exploit you, pay me so I won't". The big money goes to whichever startup finds how to present this in an acceptable way.
Problem with that is that it might be taken to be extortion / blackmailing / racketeering. (Though, not a lawyer and this isn't legal advice)
The next Critical Patch Update from Oracle is scheduled for 15 January 2019.
I see this bug exists at least in part in the open-source parts of VirtualBox.  Could it be fixed there?
Could that (that part being open source) also be the reason for the researcher not being granted the bug's bounty?
The author must be referring to third-party, vendor-agnostic programs such as the Zero Day Initiative (ZDI), SecuriTeam Secure Disclosure (SSD), or the Accenture iDefense VCP (unless he was planning to sell to a non-disclosing entity). Keep in mind that the main purpose of these programs is PR (in the case of SSD) or a specific part of a security product (TippingPoint). I am not surprised that the value of a vulnerability in these scenarios varies heavily over time. For example, if your purpose is to write a blog post and highlight your security research, maybe it's not as interesting if you have written another post about the same software recently.
The real question is, what is a business model by which a third party can make money off of vulnerabilities despite reporting to the vendor. This is a tough problem to solve. Maybe it's time to have the users of software contribute financially in some way?
1. How do vulnerability researchers and RE engineers narrow down which part of the code base to test? The VirtualBox code base is huge.
2. If their research leads to a dead end, which I guess may happen most of the time, how do they keep themselves going/motivated?
3. Clearly, this work needs lots of time. How do they fund themselves to do this?
4. I believe a certain mindset is required to continue doing this work because most of this is 'altruistic' in nature. The monetary reward is a pittance. Would love to read some books on such topics.
>>> a browser opened a malicious website in the guest OS is exploited, a browser sandbox escape is made to gain full ring 3 access, an operating system vulnerability is exploited to pave a way to ring 0 from where there are anything you need to attack a hypervisor from the guest OS.
I cracked several games at the end of the '80s, but that was nowhere near as hard as this seems to be. How do researchers find the time to go so deep in their analysis? Where do they learn?
Anyway, the code analysis shown by the author is really good. That's so much cleverer than the old-school "replace this check with NOPs" :-) Kudos
There are many reasons for this. One is that with a game, you already have full access to the program on your disc and can modify it at will, run it infinitely many times, have full access to how it's loaded and run, and analyze it separately. Plus "hacking a game" is not a security vulnerability, and the only person who loses if you add an RCE to your game is you, and possibly the publisher if you crack it.
The rest is just to show a scenario where this is actually a problem.
Specifically, the write primitive exploits the way the E1000's EEPROM is emulated, and you can see the read primitive exploits VirtualBox's ACPI implementation.
By definition, he absolutely is a “security researcher”, and as of today, I would say he is also a known security researcher. This work is excellent.
The engineers in charge quickly patching this up aren't the ones who came up with the bounty program. Making them pay for it seems like a pretty shitty move.
Well really, Oracle threw the hot shit in customers’ laps. The author just had the gall to point it out.
> making hundreds of people's life a living hell for a week
> Making them pay for it seems like a pretty shitty move.
It sounds like you assume Oracle is going to abuse its staff in the process of getting this fixed. I don’t know why you assume that, nor why the author ought to be blamed if Oracle does.
Right, Oracle as an organization did, not necessarily the engineers who will be tasked with fixing this.
> Oracle is going to abuse its staff
Not necessarily abuse, but obviously, having 30 days to fix something is a much saner experience than when every minute counts.
Sounds pretty first hand to me.
So that's the on-the-ground mess that's been made. It's annoying, but not the end of the world, so concentrating only on the lowest level is arguably a distraction from the bigger picture - which is impacted by all such events as this.
I would never trust any Oracle product in any form in production environment.
What VM were you using?
Host OS crashes like what you describe are no-trust-me-it's-really-not-the-compiler unlikely.
I did have X crash on me once while figuring out the X11 protocol specification, but that's because X is widely known to be less than perfectly stable :D and also because the actual graphics driver I was using was a little flaky.
So I misremembered (it's been 6+ years ago) but it was the AMD PCNET driver I wrote that triggered the crash, not E1000. Excuse the code quality, but the driver I wrote is located here: https://github.com/blanham/ChickenOS/blob/master/src/device/... (Note the comment about it not working in Virtualbox) Attempting to boot my kernel with that driver activated would cause a hard freeze of the host OS, requiring a hard reset of the host machine. I replicated this multiple times, in Virtualbox for both OSX and Windows.
Thanks for actually replying instead of downvoting, I guess I shouldn't make off-the-cuff comments and then forget to follow-up until the next day.
Huh, a shell and Vim. (Alongside multiple architecture support and two filesystems.) That's reasonably developed-out. Not bad!
Can totally understand the problem of running out of steam. (I think that might be why I've been so hesitant about diving in myself - I want to pace things so they stay interesting for long enough, and don't want to make too many discouraging mistakes. A couple of kernel architecture arguments don't produce a good filesystem, adequately future-proof UI model, good vertical "little detail" semantics, etc etc.)
A hard crash on multiple platforms? That's almost definitely a bug. Heh, I think all the downvoters might've made a bunch of incorrect presumptions there :)
It might be mildly interesting to see if you can still trigger the crash with the latest version of VirtualBox. (I'd have a go but of course I have no repro details, or info on how to actually build everything for that matter.)
Precedent has just been set on 0daying VB networking hardware, for what that's worth :P but there's also Project Zero (which will accept security vulnerabilities in any software, and imposes a 90-day deadline) if VB's own bug bounty thing proves unappealing. This is of course getting a bit ahead of testing/re-verification. I mention it because I'm now very curious to know if the bug is still there, but of course HN comments are not the right place for a [Y] :)
It's impossible to say if recreating the test environment (host OS version(s), host VirtualBox version) would prove fruitful if the latest VB versions seem (...seem...) immune. Vulnerability research seems to consist of a lot of "hmm, that seems like it might run a tiny bit slower on months with a Q in them if the computer is leaning 30° to the left at 2:14PM in the afternoon" and then staring at Hex-Rays for 3 months to ultimately prove that your crazy theory is in fact valid (literal example https://ramtin-amin.fr/#nvmedma; very similar example https://bugs.chromium.org/p/chromium/issues/detail?id=648971 - "one byte overflow"!).
In any case, I wouldn't mind getting some more details and seeing the CLUNK in action. It's interesting at face value.
NB. About that nvmedma link - it took me about 4-5 rereads of that article and its prequel (https://ramtin-amin.fr/#nvmepcie) before the bigger picture started to click.
(I also need to check my comments somewhat frequently - and then remember to also actually follow-up after seeing I have replies! Woops. Thanks for the reply!)