Linux kernel: multiple vulnerabilities in the USB subsystem (openwall.com)
180 points by stablemap 10 months ago | 69 comments



I wonder how many of these bugs remain hidden away in modern kernels. USB and Thunderbolt (which also carries PCIe in some modes), and for that matter USB-C, which can ALSO carry PCIe in addition to DisplayPort, HDMI and whatnot else, have grown enormously complex.

To make matters worse, I guess there are ways for a USB slave device to fingerprint the OS (or: hardware!) running on the host... something like a "pwn me" USB stick working on all three major platforms (OS X, Linux, Windows) or on game consoles is certainly possible. And I can well imagine that there are ways to exploit the actual USB controller hardware, which would give unfettered memory access.


> USB and Thunderbolt (which also carries PCIe in some modes), and for that matter USB-C, which can ALSO carry PCIe in addition to DisplayPort, HDMI and whatnot else, have grown enormously complex.

Well, sure, the hardware standards required to achieve this are complex. But looked at from another angle, the software required to handle this is simpler than before.

Before, a kernel had to have, effectively, fifteen drivers for the various IO controller-hub chips in the system, and then fifteen matching subsystems to manage their respective protocol stacks (i.e. physical, link, and [for several of the standards] network layers in the OSI model.)

With consolidation, now you have maybe half as many drivers for controller-hubs (because both USB-C and Thunderbolt are managed by the same hub, etc.) and far fewer stacks, because there's only one stack per common protocol standard. Every port being "unified into" a {USB, PCIe, DisplayPort, HDMI, ...} port means that this set of protocols, taken together, is now where a lot of the eyes in kernel development end up. So each of these protocol stacks is being picked over far more thoroughly than before.

It's the same thing that happened when USB obsoleted all the previously proprietary standards for OSI layers 1 and 2 over serial or parallel ports: suddenly, instead of 1500 individual implementations of those layers [either as proprietary drivers from the manufacturer, or as reverse-engineered parts of Linux] you had one, much better done, OSI layer-1/2 driver for USB. There were still 1500 drivers for whatever those devices did on top of OSI 1+2 (though that number gradually decreased as common HAL driver standards for things like HID peripherals emerged), but those drivers now each had much less code and therefore much less attack surface. (And, as well, a lot of those drivers no longer "did" anything much at the hardware level, instead just packing and unpacking packets from the bus into data for a system service to hold onto, so these drivers could now be more heavily sandboxed by the OS. This is ~half of why Windows XP crashed less than Windows 9x: a USB device-driver crash was, almost always, a non-fatal event.)


I'm really not sure this is right; it just doesn't ring true to me.

Yes, in the bad old days, we had dozens or hundreds of different drivers for specific network cards, serial cards, parallel port chips, audio cards, floppy controllers, disk drives, etc., plus the tangled mess of scsi and ata/sata/pata storage stacks, tcp/ip/tokenring/ethernet/netbios/smb/appletalk network stacks, tty/vtty/serial terminals, and so on. Bugs in the higher-level stacks, like sata, would have a very broad impact. But a bug in one specific network card driver would, generally, only affect the subset of people who actually had that specific network card. And many of those old drivers were not really all that complicated, because they were specific and narrowly tailored. Buggy as anything too, of course, and they didn't get much scrutiny due to the huge variation in software.

Now everything is subsumed under a massive IO controller-hub chip driver. At least the last time I checked, all those tangled upper-level stacks are still there, but they are now entwined with a hugely complex usb/thunderbolt/usb-c mess. So you'd still have all the complexity of a sata stack, but that's nested on top of a scsi stack, sitting on a PCI stack, all nested inside a USB-C stack.


I guess the question to ask is:

Medium sized company. It's your job to pick a stack. Your performance will be evaluated on whether a team of security consultants can hack you. Your stock grant will either multiply 100x or divide 10x depending on whether you get hacked. You get 1 month of unfettered access to all employee workstations. The consultants get one week to try to break in. Do you want a modern OS with its USB stack and whatever high level drivers, or do you want to put everyone on old school machines with bare metal drivers?

Assume the attackers get to know which choice you made.



Fingerprinting has been done: https://cise.ufl.edu/~butler/pubs/sadfe11.pdf

And given that you can easily reprogram thumb drives and such, it's easy to do evil stuff with them. You can plug in a thumb drive with reprogrammed firmware which detects the OS based on IO patterns, then tells the OS it's also a keyboard and runs commands for you.

https://srlabs.de/bites/usb-peripherals-turn/ https://srlabs.de/wp-content/uploads/2014/11/SRLabs-BadUSB-P... https://github.com/brandonlw/Psychson

https://www.computerworld.com/article/2690790/tools-for-crea...

From the article: "During their Derbycon demonstration, which is available on YouTube, the two researchers replicated the emulated keyboard attack, but also showed how to create a hidden partition on thumb drives to defeat forensic tools and how to bypass the password for protected partitions on some USB drives that provide such a feature."
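
For a feel of how this looks from the host side: the kernel exposes each USB interface's class code under sysfs, so a rough (and easily evaded) heuristic is to flag devices that enumerate both a mass-storage interface and a HID interface at the same time. Here's a minimal C sketch, assuming the usual sysfs layout (/sys/bus/usb/devices/*/bInterfaceClass, with 08 = mass storage, 03 = HID); it's illustration rather than protection, since a malicious device controls what it reports:

    /*
     * Rough heuristic sketch: flag USB devices that expose both a
     * mass-storage interface (class 0x08) and a HID interface (class 0x03),
     * i.e. what a "storage that is also a keyboard" device looks like to
     * the host. Assumes the usual Linux sysfs layout.
     */
    #include <glob.h>
    #include <stdio.h>
    #include <string.h>

    #define MAX_DEVS 128

    struct devinfo {
        char name[64];        /* "1-4" from an interface dir like "1-4:1.0" */
        int has_hid;
        int has_storage;
    };

    static struct devinfo devs[MAX_DEVS];
    static int ndevs;

    static struct devinfo *lookup(const char *name)
    {
        for (int i = 0; i < ndevs; i++)
            if (strcmp(devs[i].name, name) == 0)
                return &devs[i];
        if (ndevs == MAX_DEVS)
            return NULL;
        snprintf(devs[ndevs].name, sizeof(devs[ndevs].name), "%s", name);
        return &devs[ndevs++];
    }

    int main(void)
    {
        glob_t g;

        /* interface directories contain a ':' (e.g. 1-4:1.0), devices don't */
        if (glob("/sys/bus/usb/devices/*:*/bInterfaceClass", 0, NULL, &g) != 0)
            return 0;

        for (size_t i = 0; i < g.gl_pathc; i++) {
            char path[512], name[64];
            unsigned int cls;
            FILE *f;

            snprintf(path, sizeof(path), "%s", g.gl_pathv[i]);
            f = fopen(path, "r");
            if (!f || fscanf(f, "%x", &cls) != 1) {
                if (f) fclose(f);
                continue;
            }
            fclose(f);

            /* path = ".../devices/1-4:1.0/bInterfaceClass" -> name = "1-4" */
            *strrchr(path, '/') = '\0';
            snprintf(name, sizeof(name), "%s", strrchr(path, '/') + 1);
            *strchr(name, ':') = '\0';

            struct devinfo *d = lookup(name);
            if (!d)
                continue;
            if (cls == 0x03) d->has_hid = 1;
            if (cls == 0x08) d->has_storage = 1;
        }
        globfree(&g);

        for (int i = 0; i < ndevs; i++)
            if (devs[i].has_hid && devs[i].has_storage)
                printf("device %s exposes both storage and a keyboard/HID\n",
                       devs[i].name);
        return 0;
    }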


Wow, the situation is even worse than I imagined - I thought fingerprinting was theoretical, not that there are actual practical examples in the wild. Jeez.


Considering the published statistics, which say there are 0.5 to 25 bugs per KLoC in delivered software, we are really lucky that stuff still works... :D
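
To put a rough number on that: the kernel around the time of these CVEs was somewhere in the ballpark of 20 million lines (a round figure; the exact count doesn't change the point), so the quoted defect densities work out to

    \[
      0.5\ \tfrac{\text{bugs}}{\text{KLoC}} \times 20{,}000\ \text{KLoC} = 10{,}000\ \text{bugs}
      \qquad\text{to}\qquad
      25\ \tfrac{\text{bugs}}{\text{KLoC}} \times 20{,}000\ \text{KLoC} = 500{,}000\ \text{bugs}
    \]

Even if only a small fraction of those sit in code a USB device can reach, that's a lot of latent surprises.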


I use the following metaphor: software is like a narrow path high up on the arete of a mountain, with thousand-foot drops on either side. Your goal is to walk down that path.

If everything goes well (all the input is as expected), you'll walk the path straight and reach your goal.

Bad software (most of it, honestly) has no guardrails: do something wrong, take one step to the side, and you fall crashing off the narrow path. Robust software has guardrails, handles all the error conditions, to make sure you stay on the path (or at least return to the start).

Most software is tested against expected input. It's already a lot of work to get things working for the expected case, so it's usually enough to stop there. Make the path, expect smart hikers, omit guardrails. This is why stuff still works: most of the time, things go as expected.

It is a huge investment to test against all error conditions. The Space Shuttle software is the only project I know of whose robustness requirements justified shouldering the cost of doing that. As I recall, their conventional productivity metrics were abysmal (<1 LoC/day/engineer?), but it was robust.

I've worked on High-Availability equipment. The software quality wasn't better than elsewhere, but the recovery mechanisms were much better. For example, all mission-critical data was mirrored to redundant HW/SW, and there was a standby system ready to take over whenever the active copy crashed... which wasn't too often, but it happened. This is similar to Netflix's "Chaos Monkey" approach to software design.
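
To make the guardrail metaphor concrete: in code, the missing guardrail is usually just a missing bounds check on data that came from outside. A toy C sketch (not from the kernel, names made up), in the spirit of the out-of-bounds-read advisories in TFA:

    /* The "guardrail" in code form: same parsing job, with and without the
     * checks. The trusting version walks straight off the path when a
     * device (or fuzzer) hands it a length field larger than the buffer. */
    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    struct record { uint8_t len; uint8_t data[255]; };

    /* No guardrails: trusts the length byte embedded in the input. */
    void parse_trusting(const uint8_t *buf, size_t bufsz, struct record *out)
    {
        (void)bufsz;                            /* never even looks at it */
        out->len = buf[0];
        memcpy(out->data, buf + 1, out->len);   /* OOB read if len > bufsz-1 */
    }

    /* Guardrails: never read past what we were actually given. */
    int parse_checked(const uint8_t *buf, size_t bufsz, struct record *out)
    {
        if (bufsz < 1)
            return -1;
        out->len = buf[0];
        if (out->len > bufsz - 1)
            return -1;                          /* malformed input: reject */
        memcpy(out->data, buf + 1, out->len);
        return 0;
    }

    int main(void)
    {
        uint8_t evil[2] = { 200, 'x' };         /* claims 200 bytes, has 1 */
        struct record r;

        if (parse_checked(evil, sizeof(evil), &r) != 0)
            printf("rejected malformed input\n");
        /* parse_trusting(evil, sizeof(evil), &r);  would read out of bounds */
        return 0;
    }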


Assuming 0.5 rounds down, that seems like an essentially useless statistic. If you deliver a randomised binary that just doesn't run, does that count as more than one bug?


0.5 rounds up though...


Only for 2/5 IEEE 754 rounding methods!

What's 0.5 of a bug, anyway?


A full bug, but it takes two thousand lines of code to surface.


A feature!


Corner cases. Relatively rare until exploited.


I swear over-engineering is the cancer of all modern software and systems. It's become my #1 complaint about almost everything.


Welcome to the club. The problem is that nobody really cares, which makes it super hard to do something about it. In fact, there are plenty of people riding the over-engineering gravy train, which means there are substantial forces to keep things as they are or make them even worse.

I can't remember the last job I was hired for where my major contribution was to add something; usually I'm just removing cruft and simplifying things, and that's what makes things work again.


My current job is mostly about eliminating code. We've got a massive code base that suffered severely from "not invented here" syndrome, where nearly everything was implemented as a custom wrapper around some other library to avoid "vendor lock-in", never mind that the wrapper exposed essentially the exact same API, which was so library-specific that any attempt to replace it would basically amount to a total rewrite. I and my fellow devs have spent the last few months ripping out all the unnecessary wrappers all over the code base, updating library versions, and in many cases eliminating entire trees of dependencies that are simply no longer needed. We're still in the middle of the effort, but by the time we're done I'm estimating we'll have cut our startup times to 1/10th of what they are now, as well as eliminated 75% of the dependencies across our projects.


That's not exactly the same as NIH, which is where you would have reimplemented the other library instead of using it. Wrapping a dependency can make sense, because it gives you a stable thing to test against (and depending on the language may make unit testing actually possible), or to handle some version updates (especially if you're testing multiple versions -- e.g. we have a wrapper for Jetty because we needed to transition from Jetty 8 to Jetty 9 but had a lot of code to gradually update) though yeah there's less value when it's just a 1-1 and no one ever updates the underlying library...


Well, it's kind of a mix of NIH and this weird cargo-cultish idea that every API needs some kind of wrapper around it. In the past, if anyone wanted to use basically any library it was a major battle to get permission to do so, so a lot of code ended up being written to do things that were already provided by fairly standard libraries, which is where the NIH comes in. On top of that, they then insisted on using wrapper libraries that exposed essentially the same API as the libraries they did bring in (and then never updated them again before finally cutting support for the wrappers). The infuriating thing is that the wrappers were tightly coupled to the underlying library in a way that made them useless for abstracting it away, but also differed ever so slightly, so that ripping them out and using the underlying library directly is still a major refactoring. The major impetus for the refactoring we're currently doing is that all these wrappers formed an interlocked set of dependencies on very specific versions of the underlying libraries, and all of it was starting to suffer from bit-rot, with many of the libraries stuck on versions that had been EOLed or were otherwise many years behind on updates. The worst one I can think of off the top of my head was a library pinned to a version from 2006, even though it was still actively maintained with current releases this year; it wasn't a deprecated library at all, the wrapper was just preventing it from being updated.


A good day is when I delete more code than I add.


But then your LoC is negative, and you’re not being productive! /s


That's why I don't delete code, I just comment it out with line comments, and add a note before each line describing why it was removed. I'm so productive! /s :)


You joke, but I have the sneaking suspicion that many of my co-workers don't squash commits simply because they are gaming GitHub metrics.

It didn't help that my company went so far as to say they used # of commits as a guide for annual reviews.


This particular feature bloat seems to be driven by the mobile OEMs, and the wish for something to match Apple's "universal" iPhone port...


Looks like these were all found by Google doing fuzzing, cool!


Very regularly, fuzzing uncovers a few bugs. Impressive.


These are all DoS attacks that require a crafted USB device and physical access. At which point I guess you could just power down. Only if chained with another vuln (e.g. access to iLO) could you actually do anything.


> or possibly have unspecified other impact

> use-after-free and system crash

> general protection fault and system crash

> out-of-bounds read and system crash

> NULL pointer dereference and system crash

I don't know about you, but I think a smart person could figure out how to abuse these to get data out of my machine.


How? They have non-privileged logical access as well as physical access to the host?


Not sure where you're coming up with non privileged logical access. The vulnerabilities are in the kernel. The kernel is very much privileged.


TLDR: severity is denial of service (as of today) for all of them


But if you actually read the descriptions there are plenty that look like they'd have great potential for code execution.


Are there any situations where this would be a vulnerability but using a USB device as a HID would not?


Note that "crafted USB device" is a bigger attack vector than it sounds like; if paired with vulnerabilities in USB devices themselves, you can have hosts that infect USB devices and USB devices to make them infect other hosts.


FireWire used to be a similar basket case. I don't think the firmware/drivers ever got fixed; instead it became an obsolete hardware port and was replaced by TB/USB/etc...


Apparently, these are all denial of service attacks using a specially crafted USB device that you have to physically insert into the Linux machine.

Pulling the plug sounds easier :)


It's unclear whether or not these are all strictly DoS. Some seem like they may be exploitable.


I thought attacks like this would work even if the USB stack _were_ secure, since any USB device has direct access to the motherboard/PCI?


It sounds like you're thinking that any device connected to the motherboard or PCI bus has access to read/write arbitrary memory on the host through Direct Memory Access or some other mechanism. This isn't true for a couple reasons:

* The PCI bus access is given to the _USB host controller_, since that's what's connected to PCI, not to the USB device you plugged in. The host controller can talk to the USB device using whatever restrictive protocol it wants to. The USB protocol doesn't give DMA access to devices -- it polls the devices for commands and data.

* Even if it were a hostile device connected to the PCI bus, modern machines, both desktop-class (e.g. x64) and mobile (e.g. modern ARM cores in cellphones) use IOMMUs to give a virtual view of just the device's own addressable memory to the device, not the full system memory map.

(I might be mistaken about some of this, but that's my understanding.)


Most machines have IOMMUs, but they are often not enabled. A lot of them are fairly buggy. Linux generally does not enable them; not sure if OS X or Windows does by default.
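
If you want to check a particular box, the kernel exposes the IOMMUs it is actually using under /sys/class/iommu (on Intel VT-d systems they show up as dmar0, dmar1, ...). A quick C sketch, assuming that sysfs layout; an empty directory generally means the kernel isn't using an IOMMU even if the hardware has one:

    /* Quick check: list the IOMMUs the kernel is actively using, as
     * exposed under /sys/class/iommu. Sketch only. */
    #include <dirent.h>
    #include <stdio.h>
    #include <string.h>

    int main(void)
    {
        DIR *d = opendir("/sys/class/iommu");
        struct dirent *e;
        int found = 0;

        if (!d) {
            perror("/sys/class/iommu");
            return 1;
        }
        while ((e = readdir(d)) != NULL) {
            if (strcmp(e->d_name, ".") == 0 || strcmp(e->d_name, "..") == 0)
                continue;
            printf("active iommu: %s\n", e->d_name);
            found = 1;
        }
        closedir(d);
        if (!found)
            printf("no active IOMMU reported by the kernel\n");
        return 0;
    }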


A specially-crafted USB device doesn't have to look specially-crafted - a malicious cheap USB charger or cable from Amazon or from your local convenience store could be specially-crafted, too.


Sure, but if the attacker has physical access to the machine, I don't see many scenarios in which launching a USB denial of service attack would be the most effective thing to do.

Perhaps if the goal was to sabotage a presentation or some 007 style attack...

In any event, I didn't mean to say that the vulnerability is not worth fixing. It definitely is. It just seems like a relatively benign issue compared to the daily barrage of hair raising security flaws.


> I don't see many scenarios in which launching a USB denial of service attack would be the most effective thing to do.

High-security servers that don't have typical interfaces are super common in government facilities. Sometimes they'll shove glue into the USB ports, and sometimes USB is the only method of accessing the system.


As 'staticassertion replied above, causing a kernel crash due to memory mismanagement is usually a sign that with some extra work you could cause execution of arbitrary code in the kernel instead.
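
To spell out why a "crash" bug class gets treated this seriously: a use-after-free only crashes when the freed memory happens to be unmapped or full of garbage. If an attacker can get that memory reallocated with contents they control before the stale pointer is used, the same bug hands them a pointer the code will happily follow. A contrived userspace C sketch of the pattern (not kernel code, not an exploit, just the shape of the bug):

    /* Contrived use-after-free: the shape of the bug class behind
     * "use-after-free and system crash" advisories. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    struct session {
        void (*on_close)(void);   /* function pointer called later */
        char name[56];
    };

    static void legit_handler(void) { puts("legit handler"); }

    int main(void)
    {
        struct session *s = malloc(sizeof(*s));
        s->on_close = legit_handler;
        snprintf(s->name, sizeof(s->name), "demo");

        free(s);                   /* object freed on some error path ... */

        /* ... but another code path still holds the stale pointer. If the
         * allocator reuses this chunk for attacker-influenced data before
         * the call below, the attacker chooses where execution goes;
         * otherwise you "only" get garbage or a crash. */
        char *reused = malloc(sizeof(struct session));
        memset(reused, 0x41, sizeof(struct session));

        s->on_close();             /* use after free: undefined behavior */
        return 0;
    }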


Consider the bugs a workaround for laptops with a fixed battery. ;-)



I don't think that's relevant - WebUSB exposes physical USB devices on the local machine to code running inside the browser. It does not allow code running inside the browser to access the host USB stack as if it were a physical device (in order to emulate a USB device, forward one across the web, etc.).


Care to elaborate?


These fixes will be backported to the longterm kernel versions, right?


AFAIK grsecurity allows you to disable USB completely after boot, which should avoid many of these problems. Is there another similar solution?


Blacklist the various USB modules? Or disable USB in BIOS?
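
Besides blacklisting the modules, mainline has had a USB "authorization" knob for a while: write 0 to each host controller's authorized_default and the kernel will still enumerate new devices but refuse to configure them or bind drivers until you authorize them by hand. A C sketch, assuming the usual sysfs paths; note that descriptors still get parsed during enumeration, so core-level parsing bugs could still be reachable. It narrows the attack surface rather than eliminating it.

    /* Sketch: de-authorize newly plugged USB devices on every host
     * controller via the kernel's USB authorization interface
     * (/sys/bus/usb/devices/usb*/authorized_default). Needs root. */
    #include <glob.h>
    #include <stdio.h>

    static int write_zero(const char *path)
    {
        FILE *f = fopen(path, "w");
        if (!f) {
            perror(path);
            return -1;
        }
        fputs("0\n", f);
        fclose(f);
        printf("de-authorized new devices on %s\n", path);
        return 0;
    }

    int main(void)
    {
        glob_t g;

        if (glob("/sys/bus/usb/devices/usb*/authorized_default", 0, NULL, &g) != 0) {
            fprintf(stderr, "no USB host controllers found?\n");
            return 1;
        }
        for (size_t i = 0; i < g.gl_pathc; i++)
            write_zero(g.gl_pathv[i]);
        globfree(&g);
        return 0;
    }

Already-connected devices have their own per-device authorized files under the same directory if you want to cut those off too.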


You can do anything with physical access anyway.


True, but with these bugs it doesn't have to be the attacker who personally has physical access to the target machine. For secure sites the attacker may not be able to get past security, but if the device looks innocent enough it might.

If you are a secure site you need to oversee the manufacture of all your USB cables, otherwise you don't know that someone hasn't put an attack into the cables you ordered.


This is making a ton of assumptions about your target. Is it a desktop user? Then possibly. Is it a server? What type? Does it have a keyboard/monitor? Maybe USB is how you access this device, so a USB device that can get control over the kernel is suddenly your most viable attack method.


This is one of the many reasons why the kernel should have been written in Rust.


Linux, which first appeared in 1991, should have been written in a language that didn't exist until 19 years later?

When will your rewritten-in-Rust version of the kernel be released?


A wholesale rewrite is fantasy, but piecewise replacement might be plausible.

Is it technically possible to write a driver in Rust and add it to the kernel in a reasonable way? That is, without gobs of fragile shims, etc.


> Is it technically possible to write a driver in Rust and add it to the kernel in a reasonable way? That is, without gobs of fragile shims, etc.

I think this is highly unlikely. The Linux kernel lacks a stable internal ABI [0], which makes writing drivers in a different language and targeting them at multiple kernel releases essentially impossible.

Many, many people and organizations writing drivers have complained over the years about the lack of an internal ABI. [1] [2] I would go so far as to say it's why Linux on ARM is such a dumpster fire. SoC vendors provide an SDK based on a certain kernel, and OEMs forever ship this kernel (with occasional backports for really severe bugs) because porting the device-specific changes to a newer kernel is just too much effort.

[0] https://en.wikipedia.org/wiki/Linux_kernel_interfaces

[1] https://stackoverflow.com/questions/827862/why-i-need-to-re-...

[2] https://news.ycombinator.com/item?id=9220973


Google created their own Linux driver ABI with Project Treble.

Basically, it transforms Linux into a kind of microkernel, with drivers implemented in their own processes, using shared memory APIs to talk to the kernel, based on an IDL (Interface Description Language).

https://source.android.com/devices/architecture/hidl/


Why do you think ABI compatibility is needed for writing drivers in Rust? ABI compatibility is only needed for binary modules. Source compatibility should be enough for drivers shipped in source form.


Yes, Mozilla is doing just that for Firefox, and there are lots of demo projects. (There's even a minimal kernel example I have bookmarked somewhere...) The problem is that once you accept that, you open the door for many other languages that are arguably better than C and can be introduced piecemeal just as well. Like Nim, but others too. Now you have to decide which one you want to use, evaluate their various merits, it's not just a "use this hip one that pops up on HN every week". It's easier to just keep everything in C and propose out-of-band ways of making things better. (e.g. I don't think there are any unit tests in vanilla Linux but it wouldn't surprise me if there's a bank of unit tests some other groups have written to occasionally run the latest vanilla against. People have found bugs with fuzzers. Etc.)


Probably never.

But we have other UNIXes that moved away from a pure C model: Inferno, NeXTSTEP and its descendants.

Regarding Linux, the kernel might never move away from C, but Android, Android Things and ChromeOS have very little of it exposed to userspace.

And then there is Fuchsia and Redox.


Joking aside, is there any hope of Linux doing anything other than just continuing to use C forever, with verbal abuse of patch submitters as the only safety layer?


There's hope we move from Linux some day, into something with access controls between parts of the OS.

But Linux will be in C forever.


Not at all; UNIX is married to C.


Not really, no.


I'm almost certain this is a joke, in case anyone gets confused.


:^)



