An out-of-bounds memory access flaw was found in the way QEMU's virtual Floppy Disk Controller (FDC) handled FIFO buffer access while processing certain FDC commands. A privileged guest user could use this flaw to crash the guest or, potentially, execute arbitrary code on the host with the privileges of the hosting QEMU process.
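The general shape of the bug class can be sketched in a few lines of C. This is a minimal illustration, not QEMU's actual code (the identifiers are hypothetical), though the upstream fix does take the same index-wrapping shape: the guest advances a FIFO index via repeated I/O port accesses, and nothing clamps it to the buffer size.

```c
#include <assert.h>
#include <stdint.h>

#define FIFO_LEN 512  /* QEMU's FD_SECTOR_LEN is 512 */

/* Hypothetical stand-in for the emulated controller state: the FIFO
 * sits next to other fields, so an overflowing write corrupts
 * adjacent process memory. */
typedef struct {
    uint8_t fifo[FIFO_LEN];
    uint32_t data_pos;   /* advanced by guest-triggered port writes */
} fdc_state;

/* Vulnerable pattern: the guest-controlled index is used unchecked,
 * so once data_pos reaches FIFO_LEN the write lands out of bounds. */
void fifo_write_unchecked(fdc_state *s, uint8_t val)
{
    s->fifo[s->data_pos++] = val;
}

/* Patched pattern: wrap the index back into bounds before every use. */
void fifo_write_wrapped(fdc_state *s, uint8_t val)
{
    uint32_t pos = s->data_pos++ % FIFO_LEN;
    s->fifo[pos] = val;
}
```

The point is that `data_pos` is guest-controlled state: a privileged guest user can drive it past the end of the buffer with ordinary port I/O, which is why this is reachable without any special host configuration.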
If you're using RHEL, SELinux further confines the qemu process, so although you can run arbitrary code in there (which is very bad), you cannot access any files on the host filesystem except the ones qemu already has open. Also, libvirt runs qemu as a separate user and further confines it with some cgroups rules. Depending on the version of RHEL, seccomp may be involved too, which limits the types of syscalls qemu can make.
Applying the qemu fix is still highly recommended of course.
I haven't even especially tried Rust, but I think it's time for a safer language/compiler.
That is the purpose of the cXXX,cYYY part of the label as seen in the example here:
Why isn't legacy junk like this disabled by default? How many people need a floppy disk controller?
There's something scary about how legacy compatible a lot of FOSS projects are. It just raises the attack surface and leads to stuff like this.
Anyhow, security bugs happen (in all sorts of devices, old and new), and the important thing is that we fix them quickly and that SELinux mitigates the immediate effects.
Unfortunately I'm always working with established systems, so I can't push this feedback during the sales process.
That's the only way I've found to remove it from the list of drives that WMI's win32_physicaldisk class presents.
Arguing about defaults requires a step back to policy level, which is something for which many projects have trouble finding time and attention.
Incompetence on top of incompetence doesn't invalidate my argument. Minimizing your attack surface should be the norm; unfortunately, here on HN it just leads to downvotes.
In the physical world, I still maintain quite a few old computers (and I mean really old) that do things around the house or someone else's house. Many of these lack working CD-ROM drives and USB ports (let alone bootable USB ports), so the most surefire ways to transfer data to and from them are over a network (which depends on them having a NIC, and that isn't always the case) or via floppies (which pretty much all of them have). Floppy drives are also almost universally needed on these machines to boot OS installers, and in some cases even the main OS itself; I have at least one machine that boots off a floppy with GRUB in order to load an OS of choice from a USB thumbstick - one of the lucky few I have with USB ports but no USB boot support. Here, Linux having a floppy controller is incredibly useful (whether in virtualized or physical environments), since it makes it easier to create boot floppies and the like with `dd`.
So, what you're saying is you don't expect attackers who can write exploits to escape a VM to be able to write a linux privilege escalation exploit when SELinux is enabled?
That said, it's really hard to market security companies in ways that represent the hard work that they do, in ways that are not all snake oil and spin. So it's hard to blame folks for trying to turn excellent security investigative work into self-promotional opportunities.
(Edits: clarity and trying not to sound judgmental of the parent comment)
Except that Crowdstrike is heavily involved in 'threat intelligence', so this isn't really about patching vulnerabilities at the technical level but about educating non-technical executives on threats and 'threat actors' - so corporate execs can be handed a dossier of recent events, as if they were the US President evaluating national security policy.
The only problem is that threat intelligence has marginal value: infosec changes so rapidly and is so diverse that, at the end of the day, it is very much just emotional gratification - which Crowdstrike delivers at a very high price.
In terms of resource utilization, it doesn't seem like a good use of time or money to obsess over each bug as if it were an atypical event in a slow-moving environment. But hey, if it gets a few people at the top to start caring about security, maybe there is some value... I just hope it doesn't result in execs nagging the infosec team for updates on 'venom' and disrupting their work on real security measures by chasing the latest hot topic.
Please don't underestimate the human work needed to be done along with our tech jobs.
So many times I have tried to push things forward (internal system upgrades, new security policies, etc.) that had no immediate impact - and then something happens and we have to scramble to catch up.
Being able to show this to a non-technical person and have them at least somewhat understand that there is a problem that needs to be addressed is invaluable.
You can play, or you can abstain, but you can't abstain from the effects of abstaining (to paraphrase Rand).
I find it much easier to talk about Heartbleed or Shellshock (which is really ~7 different bugs). But when googling a bug to find out which versions/patches fix it, I'll still need the CVE number.
What "the actual issue" is depends on your POV. One might argue the big-picture view given in the infographic is closer to providing a workable description of the problem for most people than the bit-twiddly details.
Also, the FAQ explains a lot, including the details of how the vulnerability works.
I personally don't like the trend for the same reason why I dislike terminology like "ninja" or "rockstar" or "badass" or "devops". It cheapens computer science/engineering into resembling something a bunch of hip middle schoolers yammer on about alongside their video games and their skateboards instead of the multi-billion-dollar professional field it actually is.
The added design here seems a waste of effort and not really suited to the target audience.
EDIT: The Xen advisory gives you all the info you need in a couple of blocks of text: http://xenbits.xen.org/xsa/advisory-133.html
"x86 virtualization is about basically placing another nearly full kernel, full of new bugs, on top of a nasty x86 architecture which barely has correct page protection. Then running your operating system on the other side of this brand new pile of shit."
The rest of the rant is here:
> Theo de Raadt's problem is that he views security the way cryptography experts view cyphers: as an absolute. But security isn't like math. It's not absolute. There are right and wrong ways of doing security.
Not an absolute, but there is a right and wrong way to "do security"?
To be fair, Boender is attacking the naive maxim that "virtualization is secure." Isolating processes inside virtualized run-time environments is just another layer. Makes sense... it's not snake oil.
However, there is no need for "right" and "wrong" in these discussions. The security of any given system exists on a continuum, and it is only a matter of time before the next vulnerability is discovered. I get the sense that all we can do is limit the damage that can be done by any particular system.
It seems that virtualization is just one path towards providing those limits just as chroot and other attempts have been.
I'm most interested in seeing how jitsu and unikernels can turn the tables... not only can a process be wrapped in a virtualization layer but it is short lived and only runs when it is requested. It puts the onus on us to set up the summoner properly and provide safe-guards... but it's just another layer of complexity for attackers to manage.
The useful aspect about the MirageOS unikernels is that they use the pure Xen PV interface, which has almost no dependency on qemu. No floppy or block/net emulation, timers through the direct Xen shared_info page, and generally as "native" to x86 as Xen permits.
HVM has supported PV timers (and interrupt controllers, and spinlocks) for quite some time now.
>generally as "native" to x86 as Xen permits.
I really don't know if I agree with this. With hardware extensions basic CPU performance is going to be significantly better on HVM instances (No longer having to bounce to the hypervisor every time you make a system call since you once again have three CPU protection rings with ring -1), and that's before getting into SR-IOV, etc.
Unikernels are sweet and all, but without PVH, modern "PVHVM" implementations will outperform PV, and with PVH you're still running in a partial HVM shell.
fdctrl_stop_transfer() might be one of the incriminated functions.
If you have a buffer with odd semantics (such as wrapping out-of-bounds addresses back into bounds), it should probably be wrapped in something that enforces that. In C++, it could be made to look like a normal buffer, except that operator[] is overloaded to wrap for you, and you can make the compiler scream at you if you try to escape that safety net - for an inline class, very likely without any performance cost over adding the wrapping computation to each access by hand.
In C, your options are more limited. The safest is an opaque handle that has to be passed to an accessor function. You're more likely to take a performance hit (unless LTO fixes it for you), but in a floppy drive emulator, I doubt it matters.
If the overhead does matter, a macro or static inline function still makes the access convention easier to memorize, which should make it less likely for someone to forget, and should make code that does forget more suspicious on code review.
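The static-inline convention described above can be sketched in a few lines of C. All names here are hypothetical, not QEMU's actual API; the point is just that the wrapping semantics live in exactly one place instead of at every call site.

```c
#include <assert.h>
#include <stdint.h>

#define BUF_LEN 512

/* A buffer whose indices are defined to wrap back into bounds. */
typedef struct {
    uint8_t data[BUF_LEN];
} wrap_buf;

/* The one sanctioned access path: every read and write goes through
 * this function, which enforces the wrapping in a single place. Being
 * static inline, a decent compiler reduces it to the same code as
 * writing the modulo by hand at each site. */
static inline uint8_t *wrap_buf_at(wrap_buf *b, uint32_t idx)
{
    return &b->data[idx % BUF_LEN];
}
```

Note that in C this is only a convention: `b->data[idx]` still compiles. Hiding the struct definition behind an opaque handle in a single .c file is what actually forces callers through the accessor, at the cost of losing the inline unless LTO recovers it.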
"Q: How is this different from previous VM escape vulnerabilities?
A: Most VM escape vulnerabilities discovered in the past were only exploitable in non-default configurations or in configurations that wouldn’t be used in secured environments. Other VM escape vulnerabilities only applied to a single virtualization platform, or didn’t directly allow for arbitrary code execution."
Edit: Xen advisory: http://xenbits.xen.org/xsa/advisory-133.html
Not because it's not important, but because it desensitizes people to the impact of vulnerabilities of the caliber of Heartbleed or Shellshock, which did affect a large chunk of the servers and machines connected to the internet at the time.
Now they claim it's bigger than Heartbleed, but with no exploit and no clear statement on which in-use implementations are actually affected. Amazon has already come out saying that VENOM never affected their implementation of Xen; if Digital Ocean and Rackspace come out with the same statement, the whole "bigger than HB" stance just looks silly.
And as far as the corporate/enterprise world goes, VMware, Cisco, and MSFT hypervisors have a much bigger share out there, and they are not affected, so again not much of a bite there.
Is your argument that awareness of this issue would have been better if there wasn't a cool name and website?
VIRTUALIZED ENVIRONMENT NEGLECTED OPERATIONS MANIPULATION
VENOM? Seriously? Kinda reminds me of:
Ward: Strategic Homeland Intervention, Enforcement and Logistics Division.
Hill: And what does that mean to you?
Ward: It means someone really wanted our initials to spell out "SHIELD."
At least heartbleed and shellshock were kinda properly named, this one is a hell of a stretch.
Inb4 people start registering silly domains that can be used to spell out vulnerabilities.
CITRUS: Channel Insecure Transport Releases User Sessions
TBONE: Transmission Buffer Overflow Network Exploitation
BUTTER: Buffer Underrun Transactional Execution Relay
Let the parking wars begin!
BONECRUSH: Buffer Overrun Neatly Exceeds Current Reusable User Storage Hindrances
FACENEEDLE: Foreign Actors Can Execute Nefarious Executables, Even During Lockdown... Exploit
HELLSCREAM: Hack Exploiting Lazy Loading Standard Core Runtime Executables And Modules
The vulnerability is not mentioned explicitly in the change log. It only shows up as one of 32 bullet points "Floppy: several fixes". The actual changes are recorded only as "2015-05-08 12:58 Changeset in vbox  by vboxsync: FDC: Fixed DRIVE SPECIFICATION command".
The fixed file from the QEMU project is:
VirtualBox's equivalent file is:
There were some changes related to command buffers five days ago by Frank, but they only address FD_CMD_DRIVE_SPECIFICATION_COMMAND (in a slightly different way than QEMU's developers did it). The VirtualBox source code diffs are at:
Compare to the QEMU diffs at:
The vulnerability does not affect the current VirtualBox FD_CMD_READ_ID, or the versions of the file going way back. Maybe because it might have been forked as far back as 2003? Crowdstrike did point out that the vulnerability was present from 2004. But the vulnerability manifests in two bugs, one of which appears to affect VirtualBox and the other not.
The exploit is in the floppy disk controller, of a virtual machine, in an era when almost no physical machine includes a floppy disk drive, and those entering the field might never have seen a floppy disk other than as the "File -> Save" icon; plus the exploit can be triggered even when the FDC is disabled.
Certainly a sobering thought for those using large, feature-filled applications 'just in case' some feature might be needed in the future.
For managing virtual machines, it's more surprising that we give VMs VGA devices rather than just using serial: when using VGA emulation, you cannot trivially write code that reads text on the VM's screen, but if you configure the VM to use a serial console, you can trivially write a program which controls the VM. In a libvirt-managed qemu environment when the OS has its serial console enabled, you can run "virsh console MyVM" and instantly start executing commands and parsing their output. You can also have the OS write its log to serial so that if the OS crashes you can still read the full log. When all else fails, serial still works. Additionally, a virtual VGA device has an infinitely larger attack surface than a serial device.
When doing unattended windows installs, a lot of people use floppy drives to store the Autounattend.xml file. Floppy disk images are the most trivial and smallest images for automation tools to create. They're additionally useful for placing a linux bootloader on to boot a linux install CD with command line parameters.
Most people still use CD-ROM images to install operating systems, and it's basically required for windows. Virtual machine management software also tends to use the cd drive to install guest tools since it's the easiest way to let the guest see large files from outside the VM - nearly every OS can read CDs.
To be sure! But it's not always necessary.
> When doing unattended windows installs, a lot of people use floppy drives to store the Autounattend.xml file.
> Most people still use CD-ROM images to install operating systems,
Also both true, but they shouldn't be available unless you need them.
For extra-security-conscious deployments, most hypervisors let you remove most hardware, and qemu gives you enough flexibility to define nearly every device on the VM's motherboard at the command line rather than taking a pre-configured motherboard setup. The default settings in most hypervisors give you lots of unneeded hardware, but this hardware is really convenient for any user who is just trying to get a VM up.
I realize that, from a "secure defaults" perspective, the CD-ROM and unused serial port increase your attack surface, but I also think the trade is worth it in most scenarios; it's a tough line to draw.
This is the endgame of your so-called "responsible disclosure". Those with profit loss exposure win, and the peasants get it whenever the PR company is done making the logo and infographics.
Public hosting providers;
Large-scale organisational users of Xen;
Vendors of Xen-based systems;
Distributors of operating systems with Xen support.
Here "provider", "vendor", and "distributor" are meant to include anyone who is providing a genuine service, available to the public, whether for a fee or gratis. For projects providing a service for a fee, the rule of thumb for "genuine" is that you are offering services which people are purchasing. For gratis projects, the rule of thumb for "genuine" is measured in terms of the amount of time committed to providing the service. For instance, a software project which has 2-3 active developers, each of whom spend 3-4 hours per week doing development, is very likely to be accepted; whereas a project with a single developer who spends a few hours a month will most likely be rejected.
Basically, if you provide a Xen-based service to the public (with no size restriction), or use Xen at large scale internally, you can get on the list. There are several small hosting providers that utilize Xen on that list.
Presumably if you use Xen at small scale internally you're less worried about security vulnerabilities as it is only your employees with root access to the machines - if external users have root access, you probably fall under one of the other definitions.
On the Rackspace public cloud I've been through three full-fleet reboot cycles so far. Only one of those affected AWS customers, and AWS handled it in such a way that only a portion of their fleet was affected.
How could AWS do this when Rackspace and others couldn't?
For one, they could stratify guest placement based on instance type and guest OS. (Which I hear they do.) Most recent XSAs have only affected PV or HVM guests, not both. If you keep PV and HVM guests separate ...
AWS seems to be an example of good engineering, not an example of the perils of capitalism.
Maybe next you'll insist that everyone's prevented from patching for a week after disclosure so that smaller companies that don't have the resources to react immediately are not unfairly left behind?
It is fundamentally unfair, and sets up a non-level playing field.
(inb4 "critical infrastructure")
It is fundamentally unfair, and is perfectly reasonable.
Profit and PR are hardly the goal here -- community awareness and public safety are paramount. Vulnerabilities need to be made plain to the general public.
If the latter, perhaps Amazon was never vulnerable anyway?
"For many of the affected virtualization products, a virtual floppy drive is added to new virtual machines by default. And on Xen and QEMU, even if the administrator explicitly disables the virtual floppy drive, an unrelated bug causes the vulnerable FDC code to remain active and exploitable by attackers."
Edit: This comment seems to indicate that even lacking a virtual floppy drive, the floppy drive controller is still present and thus the system is vulnerable: https://news.ycombinator.com/item?id=9539191
One would expect, though, that there is no issue if a floppy drive is not attached, and hope that there is not a separate security hole that lets sandboxed code mount a floppy (unlikely).
Why does it need to be an acronym? Just name it.
DigitalOcean (DO): Being Patched. (The DO staff are busy in rolling out security updates. The patch will automatically be applied on DO servers inside Cloudways Platform.)
Amazon Web Service: Officially confirmed to be Safe.
Google Compute Engine: Officially confirmed to be Safe. (A Google representative informed Cloudways, “Google Cloud Platform was never vulnerable to this flaw. We do not use the vulnerable software.”)
Oracle, which develops VirtualBox, said in an emailed statement that the company was "aware" of the problem, and fixed the code, adding that it will release a maintenance update soon.
"We will release a VirtualBox 4.3 maintenance release very soon. Apart from this, only a limited amount of users should be affected as the floppy device emulation is disabled for most of the standard virtual machine configurations," said software lead Frank Mehnert.
> For many of the affected virtualization products, a virtual floppy drive is added to new virtual machines by default. And on Xen and QEMU, even if the administrator explicitly disables the virtual floppy drive, an unrelated bug causes the vulnerable FDC code to remain active and exploitable by attackers.
So I guess for KVM you're safe if you don't have a virtual floppy drive, though it's unclear whether that's the KVM default. For the others, you're still vulnerable through an unrelated bug.
If there's a solution to this problem, I don't know what it is. Trying to replace an underfunded monoculture with severely underfunded diverse implementations may not even work and may actually reduce security.
Couldn't you just disable the floppy device in the vm?
That is a pretty simple mitigation. Make sure there are no (unnecessary) virtual floppy devices defined in your VMs.
I checked my VMs (Ubuntu/KVM) and, as expected, none of them have a virtual floppy - they are not added by default on that platform.
> If you administer a system running Xen, KVM, or the native QEMU client, review and apply the latest patches developed to address this vulnerability.
> If you have a vendor service or device using one of the affected hypervisors, contact the vendor’s support team to see if their staff has applied the latest VENOM patches.
Or you could, you know, search for "qemu disable floppy" in google, read a bit and apply this flag to the VM:
qemu -global isa-fdc.driveA=
or use -nodefaults to enable only the devices you actually want...