VirtualBox E1000 Guest-to-Host Escape (github.com/mortenoir1)
577 points by kpcyrd on Nov 6, 2018 | 108 comments



This [1] gives a little more background. The same security researcher found another vulnerability in VirtualBox earlier this year, and didn't have a great experience with Oracle:

“We reported this vulnerability to Oracle, the latest update from them is that they are still looking into it, while in fact the latest version of Oracle VirtualBox version 5.2.18 has silently introduced a patch without giving credit or mentioning of the vulnerability report.”

[1] https://blogs.securiteam.com/index.php/archives/3736


Does Oracle have a track record of being The Worst about this, or should I have assumed as much given my preconceived notions of them being the classic villain of the tech world?


"You need to think of Larry Ellison the way you think of a lawnmower. You don't anthropomorphize your lawnmower, the lawnmower just mows the lawn, you stick your hand in there and it'll chop it off, the end. You don't think 'oh, the lawnmower hates me' -- lawnmower doesn't give a shit about you, lawnmower can't hate you. Don't anthropomorphize the lawnmower. Don't fall into that trap about Oracle."



Entertaining talk.

The part that grandparent is quoting starts at about 38 minutes in.


It's even better when he slips and says: "Don't anthropomorphize Larry Ellison"


That wasn't a slip


Oracle are actively hostile to the security researcher community, with their CISO having told customers and researchers off for attempting to find issues in their products

https://arstechnica.com/information-technology/2015/08/oracl...

This kind of attitude goes all the way back to 2002 https://www.theregister.co.uk/2002/01/16/oracle_security_cla...


Oracle has a track record of being The Worst about anything.


They seem to have a knack for selling their product to C-levels who don't fully understand the business they represent.

But that's about it.


CIOs at my last 3 companies have each had "get rid of Oracle" as a top-5 strategic goal for the year.


Let's all meet back at this comment next year and see if IBM took the title from them.


This sounds interesting, would you mind going into detail about this? Maybe a quick list?


Relevant discussion in another Oracle related post[0].

[0]: https://news.ycombinator.com/item?id=18389481


I do like that post because it summarizes how I feel about Oracle rather succinctly. For your typical FAANG company, the money is the means to the end. But for Oracle, the money is the end to the means.


From GP's link:

> While the crashing bug was reported to the VirtualBox tracker (https://www.virtualbox.org/ticket/16444), it was never considered a security vulnerability, and is not marked as one. This ticket is 15 months old at the time of writing this post and still marked as unresolved.

They might not be The Worst, but at 15 months, they're not great.


The text of the bug doesn't have any suggestion of it being a vulnerability and, as the closing comment indicates, there wasn't enough information to reproduce it.


“Unreproducible” can also be read as “Wasn’t worth the effort to reproduce”.

Very, very rarely will a Sev1 crash in production software be instantly reproducible. When it is, it’s an utter embarrassment. More often the full set of conditions which lead to the crash are unknown, rare, and hard to fully quantify all at once.

And yet, we know with certainty that the crash did happen and by definition it is a valid bug in any sane world where “don’t crash” is an absolute requirement.

Any crashing bug in a hypervisor must be assumed to be security critical until proven otherwise. Even after “proving” a crash can’t possibly be exploited, you should still assume that a clever attacker will figure out a way. In the Venn diagram of “crashing bugs” and “security exploits”, the overlap is enormous and the disjoint part vanishingly small.


"don't crash" can't be an absolute requirement, because operating systems. I've had programs crash when running out of RAM, I think sometimes there's no way for the OS to recover without killing some programs.

If you absolutely can't crash, you need to build an OS that caters to that - modern OSes explicitly prioritize other goals, like performance


The usual definition of "crash" does not include "getting killed by the OS while behaving normally".


If you're in a scenario where a crash can mean a critical security vulnerability, then being killed by the OS can be one too, because some parts might continue running in an unexpected state


There are plenty of other big companies behaving similarly. HP also comes to mind, but I cannot find the statistics that quantified the worst maintenance practices.


Other companies may behave similarly, but Oracle is perceived as the worst, and that says something.

Because at least they plainly don't hide it or try to be nice.

I also think that they're the worst in the industry.


One of my uncles moved to another state and was telling me all about how Oracle pays a lot there and I should move there. I told him I absolutely never would. He just wouldn't understand it and kept nudging, so I told him my wife would not want me to move to another state; thankfully, that settled it.


I said there was a study which did quantify the worst maintenance practices: time to fix bugs, time to reply, and several other support metrics. Interestingly, Oracle was not the worst; HP was.

My personal bias would also tell me Oracle is because of their business practices, but we were arguing about support practices.


Sorry, I missed the statistics part. I've never come across the one you mentioned either. I'd love to read that.

My "perceived worst" definition wasn't only about business practices, BTW. I meant in every category.

Nonetheless, supporting the products you sold is also a business. :)


By that time the researcher had already been paid by the company running the SecuriTeam program. Admittedly Oracle did release the patch too early, but this was clearly by accident, since they almost never release security patches outside a Critical Patch Update. Also, I believe they did assign CVE-2018-3294 and give credit in the following October CPU.


I think the author brings up a good point about so-called "responsible disclosure" (a self-serving term by the vendors).

I'm paraphrasing his 3 reasons for disclosing immediately:

1. It's unacceptable to wait half a year until a vulnerability is patched.

2. Bug bounties are riddled with tricks to delay you, shenanigans as to whether they'll pay you or not, and games to low ball the price.

3. It's arrogant to wait months while meanwhile preparing for a big boastful disclosure.

I'm thinking that we could add to this list as follows:

4. The vulnerability might already be well-known and being used by black hats and secret government agencies. Releasing the details immediately puts all of us on an equal footing.

5. Punish the vendors for having such awful security. Just the other day, HN had an article[1] about Crucial SSDs that encrypt the drive with a key that doesn't depend on the password; you can decrypt the drive without the password just by patching the firmware. Is Crucial going to get punished for such an outrageous lie about their encryption? The marketplace mechanisms that force vendors to pay proper attention to security are trust in the brand, independent security reviews (thank you security researchers!), government regulation (no thank you), and immediately naming and shaming them without giving them time to cover it up or downplay it.

6. Punish the users for not demanding higher security. Of course, we could suffer as result, but it's not the security researcher who caused the bug and it's not the security researcher who would be exploiting it. There needs to be demand from users for higher security in their purchases.

7. We get to see all the details if the vulnerability is disclosed immediately. If the vulnerability is disclosed after "responsible disclosure", plenty of times you don't get the raw details. I wouldn't be surprised if many security researchers are signed to NDAs (or simply threatened by lawyers), so we don't hear about the bugs at all. Having the full details published immediately advances our knowledge in the security field more than a partial disclosure months later (or a non-disclosure).

[1] https://news.ycombinator.com/item?id=18382975


I agree, except for the notion of security “research” being a significant market factor.

The marketplace gives very few shits about security vulnerabilities, despite the constant stream of outrageous defects. The real market force is selling consulting and security gizmos imo. Nobody is ditching peoplesoft because it’s a security disaster — they double down and buy accessories to secure it.

Big companies like Microsoft got security religion when the prospect of liability started becoming real, particularly when Blaster/Welchia paralyzed big networks in the 2000s.


The author's reasons are specific to Oracle. I believe other vendors have shown themselves to be worthy of the trust required for such cooperative disclosure schemes to work.

As to your own more general arguments, they lead me to believe you're somewhat Manichaean, and possibly a bit too eager to assign guilt and follow up with punishment.

Specifically, punishing users for "not demanding higher security" is a suggestion that can only be made when the metric ("high security") has become an objective by itself, completely divorced from real-world outcome that spawned it, namely to avoid people being harmed.

It's unclear if punishing users could reliably work as a proxy to get vendors to increase security. Thus, it is already assured that there will be some users that will be harmed by your idea with no positive effect possibly justifying that harm.

If, however, the mechanism works as envisioned by you, it would work just as well without helping it along: the very same effect will occur organically, whenever insecurity results in harm. And where it doesn't, nothing is lost. Quite the opposite: insecurity that does not result in harm is quite obviously something good.

It's a well-known failure mode of the human mind to so fully engage with some intermediate task that one forgets the initial motivation. Hence police officers arresting 6-year olds and soldiers "just following orders". Here, your suggestion is the result of fetishising one specific quality of software, namely security. Secure software, created in a process so beautiful it is a piece of art by itself, does not only shed its initial justification: Its pursuit suddenly legitimises actions specifically intended to injure those you once set out to protect.

Yeah, plus, you know: that mechanism assumes that fast disclosure harms users. So you're kinda contradicting all those other points about fast disclosure being helpful.

As to (5): You're not giving any reasons why early disclosure would result in more harm to the vendor, except for the very idea of responsible disclosure itself, namely lowering real-world harm by having a patch or other mitigation ready at the moment of disclosure. Here, you are falling into the same trap of being more interested in vengeance than justice, much like the first officer arriving at the crime scene putting another bullet in the victim to ensure that horrible criminal will face trial for murder, and not just assault.


yep, all of that is equivalent to "security by obscurity". Hiding vulnerabilities longer is of no use; it just helps companies be ready to answer customers, not fix things in a better way.


Also it's really unfair to most vendors that don't have close ties with the creator of the security problem. E.g. in the case of Intel this turned out really badly for all OSs except Windows and Linux. On the other hand, it makes using products/software from entities who continue with this policy even more unattractive.


In this case there is an easy mitigation for people aware of the vulnerability (change virtual network adapter), and of course they have to run a malicious guest to start with. I'm not sure this argument holds up as well if this were an RCE in an unauthenticated network service that can't be mitigated.


While you raise very good points overall, this one I have beef with:

>SSDs that encrypt the drive with a key that doesn't depend on the password

AFAIK, this is the standard practice in software hard-drive/filesystem encryption - LUKS, TrueCrypt, etc. - for practical reasons, and with no negative impact on security if properly implemented. AFAIK again, this is also how Firefox handles encryption of stored website login/password credentials - they are always encrypted using a locally generated and held key; in case the user supplies her own password, this key is encrypted with that password, with no change to the actual login/password store.

Contrast the following two scenarios:

- derive encryption key from user-supplied password [potentially weak key]

- encrypt the whole device / filesystem with the key [very long process, prone to interruption]

- user wishes to change the password

- re-encrypt the whole device / filesystem with the new key [very long process, prone to interruption]

vs.

- generate a cryptographically secure random or pseudorandom key

- store the key in the device

- encrypt device with the key from the first write onward

- when user sets password, encrypt the stored key [no change to content of device necessary]

- from that point the user password is necessary to use the decryption/encryption key

- when the user changes password, re-encrypt the stored key only [no change to content of device necessary]

The second scenario is also more amenable to implementing various multi-password schemes, password-recovery techniques, and encryption secured with both passwords and hardware tokens.
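To make the second scenario concrete, here's a minimal sketch of that key-wrapping idea (toy code: the XOR wrap and PBKDF2 parameters are purely illustrative; a real device or LUKS-style implementation uses an authenticated key-wrap and carefully tuned KDF settings):

    import os
    import hashlib

    # Toy sketch of the wrapped-key scheme above (illustration only, not
    # production crypto): the bulk data key never changes; only its wrapped
    # form depends on the user's password. A real implementation would use an
    # authenticated key-wrap (e.g. AES-KW or AES-GCM), not XOR.

    def wrap_key(data_key: bytes, password: str, salt: bytes) -> bytes:
        # Derive a key-encryption key (KEK) from the user's password.
        kek = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 200_000,
                                  dklen=len(data_key))
        # XOR-wrap the data key with the KEK (XOR is its own inverse).
        return bytes(a ^ b for a, b in zip(data_key, kek))

    unwrap_key = wrap_key

    salt = os.urandom(16)
    data_key = os.urandom(32)        # random key that encrypts the whole drive
    wrapped = wrap_key(data_key, "hunter2", salt)

    # Changing the password re-wraps only these 32 bytes; the drive contents
    # encrypted under data_key are never rewritten.
    rewrapped = wrap_key(unwrap_key(wrapped, "hunter2", salt), "new password", salt)
    assert unwrap_key(rewrapped, "new password", salt) == data_key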


The difference is that the key was not encrypted with the password.

If the key were properly encrypted (as you describe) the password check could not have been bypassed.


> You wrote: user supplies her own password, this key is encrypted with that password

The critical point you're missing is that the Crucial SSD was not doing that step. If the key doesn't depend on the password, then the raw key has to exist somewhere on the Crucial hard drive or in its firmware, so the task becomes an easy hunt for the key with a debugger. No password cracking required.


I enjoyed reading the author's motivation for posting as a 0day vs Bug bounty.

https://github.com/MorteNoir1/virtualbox_e1000_0day#why


I find that part really weird. These are two extremes - you can easily notify the vendor and give them a month (or whatever period you think is reasonable) to fix the issue if you're not interested in the bounty. Google was pretty successful with enforcing 3 months. VirtualBox may not be a production service where it really matters, but publishing a 0day makes for some stressful days for many ops.


After reading the description of the exploit, it's not clear to me who is at fault. It almost seems like there are several bugs in the Intel E1000 driver, and not VirtualBox. But then again, the hypervisor should probably never allow a guest kernel to escape the VM.

If the issue actually lies with VirtualBox, VirtualBox is owned/maintained by Oracle, and based on other interactions I've seen with Oracle I wouldn't be surprised if others have submitted exploits to them before and they were ignored.


The bug is in the VirtualBox code that emulates an Intel E1000 device. There is no driver code involved.


Thanks.

> There is no driver code involved.

Yeah, I see that step #1 in the 'exploit algo' is to remove the e1000; I missed that earlier:

> An attacker unloads e1000.ko loaded by default in Linux guests and loads the exploit's LKM.


The author has a number of great points, with an overarching theme that how we handle security bugs is ridiculous.


You don't think there is a place for giving vendors time to fix the exploit before handing it over to everyone who can use it maliciously?


The security of the product is the responsibility of the vendors. If they want to control how exploits are handled, then they should compensate security researchers for that service, just like anything else. The poster of the exploit outlined some reasonable steps to that end.

I'm no security expert, but the feeling I get from other discussions is that big players have acted dishonestly with regards to proper compensation of bug bounties. It seems that sad state of affairs is being protested.


Most companies and organizations react terribly to being made aware of security issues, sometimes landing the messenger in prison. Prevailing practices are to sweep vulnerabilities under the rug, or quietly acknowledge them and hope no one notices.


Even the author publishing this 0day clearly believes there is a place for that; their protest calls attention to the fact that this time-to-fix period is, they believe, in practice abused and stretched a great deal longer than it should be.

Putting aside the ethics of publishing this 0day, I feel like it's important to critique the more nuanced point the author is making, rather than critique a caricature of it.


Not at the cost of leaving end users vulnerable and in the dark, vendors can deal with the consequences of their choices.

Hard lessons are needed. Having attempted to disclose serious vulnerabilities in T-Mobile USA's APIs by reaching out repeatedly, I've found that most vendors will not patch in an urgent manner, and some (like T-Mobile) are content to leak customer info indefinitely.

It is a culture problem, and it will take (financial and reputational) pain to alter the existing corporate cultures.


So... It's okay to harm third parties, i.e. VirtualBox users, not just as an unfortunate but unavoidable side effect, but as your means to punish the vendor for their (negligent? wilful? morally depraved?) failure to follow the idealised processes you envision, and for not honouring your genius with whatever your ego believes it is owed?


The harm is already there, you're just shooting the messenger.


Considering https://blogs.securiteam.com/index.php/archives/3736 I'm not too surprised they just went "fuck you Oracle" this time.


IMO the vendor should employ security researchers so their customers don't have to rely on outside volunteers to expose the bugs in the software they bought. The responsible disclosure talk often is just blame shifting.


I think the market should demand, and suppliers should offer, a better baseline of security, both in prevention and in turnaround time for fixes. The market mostly doesn't pay for better security. Suppliers mostly don't care either. So, most software is intentionally vulnerable, with users/buyers accepting and rewarding that. That means 0-days are coming no matter what, especially if development processes don't improve. Publishing a 0-day seems at least no more unethical than the demand and supply side intentionally putting them there in the first place. I've actually started thinking security people should cash in on the practice in a way that funds secure alternatives, like I said on the Lobste.rs version of this article:

"There’s no regulations or liability. The market largely doesn’t care. They outsource security work, even stuff as easy as 15 min of AFL, to unpaid labor that has to beg for payouts that are often below market rate for paid, security professional. If they refuse to do secure development, I say just publish the vulnerabilities or sell them to Zerodium. Also, keep recommending secure alternatives to common, vulnerable software.

I did have another idea when looking at the fact that high-security software is always too expensive for most to buy or sold/free at a loss. Companies like Zerodium pay a fortune for vulnerabilities in software. It’s always the same software, too, whose developers keep adding preventable vulnerabilities. Usually a company making piles of money off it, too. So, sell vulnerabilities in those apps which already have red flags for security-conscious users, make a bit of money for yourself out of that, and spend the rest (eg majority) on developing secure alternatives. For example, selling vulnerabilities in Nginx to carefully extend and tool-check lwan, in consumer routers to fund an OpenBSD-based router, and/or iOS to fund HardenedAndroid (or new mobile OS). Stuff like that.

Hardly anyone will pay for real security. They’ll pay for vulnerabilities, though. Sad it comes to ideas like that but it’s one of the only ones that easily generates the required revenue."

https://lobste.rs/s/kjvb2i/virtualbox_e1000_guest_host_escap...


Especially the websites/branding of bugs, like Heartbleed, SHAttered, etc. It seems like researchers do this to propel their own fame, probably for financial motives. I imagine it's pretty lucrative to have been the "co-founder" of Heartbleed, just like it is lucrative to be the co-founder of a well-known startup.


There is truth in that, but I do think it makes them easier to refer to when you talk about them 5 years later; I still remember Heartbleed. If someone said to me “Hey, remember when CVE-2014-0160 happened?” I would be like, what was that? So yeah, the awareness side of it helps


There are better outcomes from this kind of bug branding: it creates awareness of sometimes serious vulnerabilities and gives us something friendlier to reference a bug by than its CVE #. Who remembers the CVE # for Blueborne or Heartbleed?


With regard to this, maybe there’s an opportunity for a market maker to step in.

An “Uber for vulnerabilities”.

Having said that, I do tend to think “slap a market on it” can often lead to perverse outcomes.


These already exist. There are darknet firms that independently verify that an exploit is real (staking their reputation that they won't take it and run once shown), and then open it to the market.

The main trouble is that for it to fully work, you need to have the big corps bidding against black hats in this market. I can't see that happening. It'd have big corps dirtying their hands too openly.

The other trouble is that a lot of the damage (or value) of software bugs isn't readily fungible into dollars. For example, to actually profit from the Ashley Madison hack you'd need to blackmail a whole lot of people, which is incredibly time consuming. So software firms would be able to underpay for most exploits: the amount they'd have to pay is at most what a black hat can profit from the bug, which is necessarily less than the real damages because the black hat has to eat the cost of making it fungible.


> There are darknet firms that independently verify that an exploit is real (staking their reputation that they won't take it and run once shown), and then open it to the market.

Is this really a thing? Who are these firms and what is their take?



"The amount they'd be have to pay is at most what a black hat can profit from the bug, which is necessarily less than the real damages because the black hat has to cost for fungibility."

So basically, their value.


I wouldn't put it that way. The potential value lost (reputation, downtime, etc) for the vendor could be more than the value an attacker might gain.

It's not a slice of cake that is changing hands. The attacker might only be interested in the cherry on top, but he could also destroy the rest of the cake in the process.


I think the author is referring to third parties which buy and disclose vulnerabilities. Very hard to monetize. There is already a flourishing market for undisclosed vulnerabilities, for obvious reasons.


It is called darknet :)


Sorry, I should have been clearer. I meant a non-darknet market.

Like somehow being able to have a third party negotiate payout rates for bug bounties.

I have no idea what that might look like.


Probably would look like a union (a dirty word in software), where their collective bargaining allows them to have leverage on software firms and have policies to punish non-payment in an ethical way.

Sequence looks something like: Security researcher submits exploit to the union. Union verifies it and decides it's worth $x. They inform software firm of the exploit and a deadline for payment. If payment is received before deadline, they get full, private disclosure. If not, then exploit is made public. Union takes a cut.

Security researchers don't really need a market maker. (It's not a real market: Actually converting exploits into money is typically antisocial and illegal.) They need someone to negotiate for them.


That's starting to sound quite close to blackmail.

"I have remote code execution in your product, pay me XXX or I'll tell everyone".

I don't actually know if it is blackmail, but if it is, hiding behind a union isn't enough to make it not-blackmail.


> Having said that, I do tend to think “slap a market on it” can often lead to perverse outcome.

Yeah. Just look at where we are today. But calling it "perverse outcome" is sugar coating it.

If it weren't for profit being the king of all (and being the current common sense), then we would've put more time into making software more secure.


Software not made for profit is very often no more secure.

The problem is not profit, it's that these security issues don't really lower profits enough. I've seen the profit motive work very well in a couple of companies when the threat of lower profits (in the form of large fines) appeared thanks to the GDPR.


This is why local root exploits matter even if compromised userspace already gives attackers control of everything in the VM. Don't update your kernel, and maybe they'll get everything outside the VM too.

From the article: "Elevated privileges are required to load a driver in both OSs. It's common and isn't considered an insurmountable obstacle. Look at Pwn2Own contest where researcher use exploit chains: a browser opened a malicious website in the guest OS is exploited, a browser sandbox escape is made to gain full ring 3 access, an operating system vulnerability is exploited to pave a way to ring 0 from where there are anything you need to attack a hypervisor from the guest OS."

Defense in depth means caring about every link of the chain.


The pricing of / evaluation of bug bounties seems to be a problem. Going begging to the vendor of course results in reduced value.

Everything tends to be undervalued when there is only one buyer.

Also purchase processes tend to be slow when there is only one buyer.

It's almost as if there needs to be competition for the sale of the disclosure ... although that would have its own issues of course.

Another idea is a public timed/buy/disclosure board that offers security bugs to the vendor at a certain price but if the vendor does not want to pay within the time then it's released publicly. Effectively this is a concept in which the researcher who found the bug decides on the price and the timeframe in which it will be paid, rather than placing the decision in the hands of the vendor. This too of course has issues. It's likely that things will go this way though because security vulnerabilities are worth a great deal to companies.

There's an opportunity right here for the startup that wishes to publicly list vulnerabilities for sale to the vendor (only) at a given price within a given time, with escrow, else the researcher releases to the public. You'd have to find a way to present it such that it is not extortion, which might be hard, but in a way that's sort of what bug bounty programs are - reverse extortion. Bug bounties are sort of saying "we'll pay you if you can exploit us but promise not to", whereas extortion is saying "I know how to exploit you, pay me so I won't". The big money goes to whichever startup finds how to present this in an acceptable way.
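For illustration only, a toy model of the timed buy-or-disclose flow described above (all names, the flat price, and the deadline handling are hypothetical; a real service would also need escrow, verification, and legal review, per the extortion concern):

    from datetime import datetime, timedelta

    # Toy model of the timed buy-or-disclose listing: the researcher sets the
    # price and the clock; the vendor either pays before the deadline or the
    # details go public. All names and numbers here are made up.

    class ListedVulnerability:
        def __init__(self, vendor: str, price_usd: int, days_until_public: int):
            self.vendor = vendor
            self.price_usd = price_usd
            self.deadline = datetime.utcnow() + timedelta(days=days_until_public)
            self.paid = False

        def record_payment(self, amount_usd: int) -> None:
            # Vendor buys private disclosure before the clock runs out.
            if amount_usd >= self.price_usd and datetime.utcnow() < self.deadline:
                self.paid = True

        def status(self) -> str:
            if self.paid:
                return "privately disclosed to vendor"
            if datetime.utcnow() >= self.deadline:
                return "published publicly"
            return "listed, awaiting vendor payment"

    listing = ListedVulnerability(vendor="ExampleSoft", price_usd=50_000,
                                  days_until_public=90)
    print(listing.status())  # "listed, awaiting vendor payment"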


> Another idea is a public timed/buy/disclosure board that offers security bugs to the vendor at a certain price but if the vendor does not want to pay within the time then it's released publicly.

Problem with that is that it might be taken to be extortion / blackmailing / racketeering. (Though, not a lawyer and this isn't legal advice)


Great and very thorough writeup; the one biggest question I have left is what happens if you do this with the real hardware. Will it crash the hardware and put it in a weird state, or will it do something natural and benign like wrap around an internal counter modulo 16K?


What's the CVE? [1]

The next Critical Patch Update by Oracle is scheduled for 15 January 2019. [2]

I see this bug exists at least in part in the open-source parts of VirtualBox. [3] Could it be fixed there?

Could that (that part being open source) also be a reason for the researcher not being granted the bug bounty?

[1] https://cve.mitre.org/cve/

[2] https://www.oracle.com/technetwork/topics/security/alerts-08...

[3] https://www.virtualbox.org/browser/vbox/trunk/src/VBox/Devic...


To give a bit of context to this discussion: Oracle does not have, and has never had, a bug bounty program. Honestly Oracle does not even play a big role in this. From my experience with reporting VirtualBox bugs to them, they usually handle them rather professionally.

The author must be referring to third-party, vendor-agnostic programs such as the Zero Day Initiative (ZDI), SecuriTeam Secure Disclosure (SSD), or the Accenture iDefense VCP (unless he was planning to sell to a non-disclosing entity). Keep in mind that the main purpose of these programs is PR (in the case of SSD) or feeding a specific part of a security product (TippingPoint). I am not surprised that the value of a vulnerability in these scenarios varies heavily over time. For example, if your purpose is to write a blog post and highlight your security research, maybe it's not as interesting if you have written another post about the same software recently.

The real question is, what is a business model by which a third party can make money off of vulnerabilities despite reporting to the vendor. This is a tough problem to solve. Maybe it's time to have the users of software contribute financially in some way?


Number 3, oh my. It's ridiculous what people have done with the bugs they've found since Heartbleed. I don't remember anything like that before, but ever since, every security issue needs a logo and a catchphrase for some reason.


It makes vulnerabilities easier to remember and talk about. Is that bad?


Yeah, this is all too true, and unfortunately it gives the security industry a bad rap


Given the prevalence of security flaws and the seriousness of the consequences of a breach, I'm still surprised that people are so quick to dismiss high-security systems like OpenBSD. Just a couple of days ago someone was incredulous that I would consider vmm from OpenBSD for virtual machines, but I'd rather have a secure, open-source virtual machine than a bug-infested virtual machine from Oracle. But it is hard to argue with someone who has made a billion dollars from bug-infested software... I guess it is really a balance between having cool new features people will buy and good-enough security for your purpose (or good enough for your customers' purposes, anyway). Maybe I should go work for Visa, they probably care about security.


I have a slightly off-topic observation and a couple of questions; hoping to get input from the community. I'm halfway through the write-up and am amazed by the amount of research, skill, and grit that went into finding this vulnerability. A few questions I have:

1. How do vulnerability researchers and RE engineers narrow down which code base to test? The VirtualBox code base is huge.

2. If their research leads to a dead end, which I guess may happen most of the time, how do they keep themselves going/motivated?

3. Clearly, this work needs lots of time. How do they fund themselves to do this?

4. I believe a certain mindset is required to continue doing this work because most of this is 'altruistic' in nature. The monetary reward is a pittance. Would love to read some books on such topics.


This bug was already reported to Oracle:

https://github.com/MorteNoir1/virtualbox_e1000_0day/issues/8


FTA:

>>> a browser opened a malicious website in the guest OS is exploited, a browser sandbox escape is made to gain full ring 3 access, an operating system vulnerability is exploited to pave a way to ring 0 from where there are anything you need to attack a hypervisor from the guest OS.

I cracked several games at the end of the '80s, but that was nowhere near as hard as this seems to be. How do researchers find the time to go so deep in their analysis? Where do they learn?

Anyway, the code analysis shown by the author is really good. That's so much cleverer than the old-school "replace this check with NOPs" :-) Kudos


> I cracked several games in the end of the 80's but that was nowhere as hard as this seems to be.

There are many reasons for this. One is that with a game, you already have full access to the program on your disc and can modify it at will, run it infinitely many times, have full access to how it's loaded and run, and analyze it separately. Plus "hacking a game" is not a security vulnerability, and the only person who loses if you add an RCE to your game is you, and possibly the publisher if you crack it.


Of course they started with exploiting the emulated device using a VM where they have full control.

The rest is just to show a scenario where this is actually a problem.



Very interesting read. I browsed the author's website and it says he is an independent, self-employed security researcher. A real question: I am interested in knowing how he survives just by reporting bugs, when there is no certainty that a bug can be found every month to make a living...


As soon as I read the phrase "ASLR bypass", I knew this would be devastating.


I guess SELinux/SVirt would mitigate this


Is qemu affected?


The exploit half impacts the E1000 driver, but the other half impacts VirtualBox's implementation of the virtualized system; so no.

Specifically, the write primitive exploits the way the E1000's EEPROM is emulated, and the read primitive exploits VirtualBox's ACPI implementation.


huh. is the author a known “security researcher”? i agree more or less with his 3 points.


> is the author a known “security researcher”?

By definition, he absolutely is a “security researcher”, and as of today, I would say he is also a known security researcher. This work is excellent.


The author here just blindly assumes a lot of things about VirtualBox's bug bounty program. Many vendors, like Google, put a very strict limit on the time to fix, and they will release the details when that time is over. I think it's more respectful to at least give them a chance, rather than throwing a hot shit on their lap and making hundreds of people's lives a living hell for a week.

The engineers in charge quickly patching this up aren't the ones who came up with the bounty program. Making them pay for it seems like a pretty shitty move.


> rather than throwing a hot shit on their lap

Well really, Oracle threw the hot shit in customers’ laps. The author just had the gall to point it out.

> making hundreds of people's life a living hell for a week

> Making them pay for it seems like a pretty shitty move.

It sounds like you assume Oracle is going to abuse its staff in the process of getting this fixed. I don’t know why you assume that, nor why the author ought to be blamed if Oracle does.


> Oracle threw the hot shit

Right, Oracle as an organization did, not necessarily the engineers who will be tasked with fixing this.

> Oracle is going to abuse its staff

Not necessarily abuse, but obviously, having 30 days to fix something is a much saner experience than when every minute counts.


Blindly assuming?

https://blogs.securiteam.com/index.php/archives/3736

Sounds pretty first hand to me.


Specifically:

> While the crashing bug was reported to the VirtualBox tracker (https://www.virtualbox.org/ticket/16444), it was never considered a security vulnerability, and is not marked as one. This ticket is 15 months old at the time of writing this post and still marked as unresolved.


OK, so a bunch of engineers have been told to drop everything to patch this. Maybe work weekends, (hopefully not) provide hourly updates on their work, etc. I don't see fixing this as entirely tricky; the patch looks to be a few lines of code. Then it's over to testing for a short run, and the release team will be running around informing vendors in order of importance and then updating https://www.virtualbox.org/wiki/Changelog, which incidentally hasn't changed yet (it will be interesting to see when this happens / how long it takes).

So that's the on-the-ground mess that's been made. It's annoying, but not the end of the world, so concentrating only on the lowest level is arguably a distraction from the bigger picture - which is impacted by all such events as this.


Not to discount the work done here. Big high five. But I am surprised hundreds more of these bugs haven't been found every week. It's Oracle. Their mission statement might as well be "we make security vulnerabilities and charge you a shitload"

I would never trust any Oracle product in any form in a production environment.


I hate Oracle and consider them among the most evil of companies out there. However, I don't think that's a very fair characterization of their mission statement.


I think [1] shows that it is far from being an unfair synopsis of their mission statement.

[1]: https://news.ycombinator.com/item?id=10039202


True, I think it's actually Adobe's.


It's called exaggerated sarcasm. But it is an absolutely fair characterization of the products they build. They are complete garbage. It's like Adobe and Oracle have a side bet on who can introduce the most vulnerabilities. That is how terrible a track record they have. I will be extreme here and say all of their products, especially Java, Flash, and Acrobat, should outright be banned from a corporate network. They should be considered a liability, and insurance companies should actually build it into their models.


To be fair, this is almost certainly Sun (or probably Innotek) code. Sun I generally trusted in production.


The commodity x86 virtualization gold rush happened some years before writing systems code in memory-safe languages came back into vogue, and people only started doing real vulnerability research when the hypervisors were already established. So most of the hypervisors are prone to this stuff.


Oh, this might explain how I managed to cause the host OS to crash while running my homebrew OS when I was in college...


Homebrew OS? Interesting. What did it do? Can I see it anywhere?

What VM were you using?

Host OS crashes like what you describe are no-trust-me-it's-really-not-the-compiler unlikely.

I did have X crash on me once while figuring out the X11 protocol specification, but that's because X is widely known to be less than perfectly stable :D and also because the actual graphics driver I was using was a little flaky.


I took an OS class and wanted to learn more afterwards so I hacked together a small kernel mostly using info from the osdev wiki. I got decently far (managed to have a shell running, had VIM running), but ran out of steam years ago (that and the code needing a massive amount of cleanup and auditing).

So I misremembered (it's been 6+ years), but it was the AMD PCNET driver I wrote that triggered the crash, not the E1000. Excuse the code quality, but the driver I wrote is located here: https://github.com/blanham/ChickenOS/blob/master/src/device/... (Note the comment about it not working in VirtualBox.) Attempting to boot my kernel with that driver activated would cause a hard freeze of the host OS, requiring a hard reset of the host machine. I replicated this multiple times, in VirtualBox on both OS X and Windows.

Thanks for actually replying instead of downvoting, I guess I shouldn't make off-the-cuff comments and then forget to follow-up until the next day.


Interesting. I've been eyeing OS development for a while myself. I have a few high-level ideas about kernel design, and a few years' worth of musings about good UX. I badly need to get on with spending a few years getting my feet wet with actual implementation details and such so I'm not just a pile of unorientated hot air though, haha. Armchair pontificating doesn't count for much. But anyway.

Huh, a shell and Vim. (Alongside multiple architecture support and two filesystems.) That's reasonably developed-out. Not bad!

Can totally understand the problem of running out of steam. (I think that might be why I've been so hesitant about diving in myself - I want to pace things so they stay interesting for long enough, and don't want to make too many discouraging mistakes. A couple of kernel architecture arguments doesn't produce a good filesystem, adequately future-proof UI model, good vertical "little detail" semantics, etc etc.)

A hard crash on multiple platforms? That's almost definitely a bug. Heh, I think all the downvoters might've made a bunch of incorrect presumptions there :)

It might be mildly interesting to see if you can still trigger the crash with the latest version of VirtualBox. (I'd have a go but of course I have no repro details, or info on how to actually build everything for that matter.)

Precedent has just been set on 0daying VB networking hardware, for what that's worth :P but there's also Project Zero (which will accept security vulnerabilities in any software, and imposes a 90-day deadline) if VB's own bug bounty thing proves unappealing. This is of course getting a bit ahead of testing/re-verification. I mention it because I'm now very curious to know if the bug is still there, but of course HN comments are not the right place for a [Y] :)

It's impossible to say if recreating the test environment (host OS version(s), host VirtualBox version) would prove fruitful if the latest VB versions seem (...seem...) immune. Vulnerability research seems to consist of a lot of "hmm, that seems like it might run a tiny bit slower on months with a Q in them if the computer is leaning 30° to the left at 2:14PM in the afternoon" and then staring at Hex-Rays for 3 months to ultimately prove that your crazy theory is in fact valid (literal example https://ramtin-amin.fr/#nvmedma; very similar example https://bugs.chromium.org/p/chromium/issues/detail?id=648971 - "one byte overflow"!).

In any case, I wouldn't mind getting some more details and seeing the CLUNK in action. It's interesting at face value.

NB. About that nvmedma link - it took me about 4-5 rereads of that article and its prequel (https://ramtin-amin.fr/#nvmepcie) before the bigger picture started to click.

(I also need to check my comments somewhat frequently - and then remember to also actually follow-up after seeing I have replies! Woops. Thanks for the reply!)



