Remotely Compromising Android and iOS via a bug in Broadcom's Wi-Fi Chipsets (exodusintel.com)
387 points by pedro84 on July 26, 2017 | hide | past | favorite | 161 comments

Why does Broadcom insist on proprietary drivers?

How could it possibly be detrimental for Broadcom to have free software drivers?

This article is a poignant example that it is detrimental for them to continue to keep their drivers proprietary.

There is no benefit. They probably have an embarrassing code base that is full of garbage and a bunch of lawyers paranoid about IP. Why would any manager suggest to take that risk that has very little potential upside.

It isn't that it would be realistically detrimental; it's just that opening the drivers has no value to the individual attempting to change the established course of the ship.

There are tons of benefits, to the end users. But most of them result in the user having less reason to buy a new device in the hopes of faster or more reliable WiFi.

End users don't really matter to Broadcom. Chip makers don't sell product to end users, they sell to device manufacturers.

Absolutely true. But manufacturers are not going to use a wifi chip maker known to be repeatedly exploited, because of end-user perception.

> But manufacturers are not going to use a wifi chip maker known to be repeatedly exploited, because of end user perception.

How many of your non-professionally-technical friends could tell you the manufacturer of their WiFi chip? Is it on the box? Could they even tell you who Broadcom is?

"Intel NIC" only recently became a very minor selling point in enthusiast desktop motherboards. I'm not holding out hope this is going to follow a more informed curve.

The gatekeepers (manufacturers) are the only ones informed enough to make the decision en masse. And they're not going to do so without a market reason. So barring something like "Broadcom stops providing security updates" or "a new law holds device manufacturers liable for security bugs," they're going to save the few cents on BoM and continue using them.

I agree, but these things go up the chain. Sure, people won't say 'I won't buy Broadcom' or 'I won't buy Snapdragon.' But they will say 'I won't buy PhoneCoXYZ because they were in the news for being hacked.' Not after this necessarily, but after enough times, I think we'll see it.

I'd like to move more towards this, and branding and identification are probably a big part of it: pushing news outlets to report the culpable parties by name. "Samsung / Google / Apple recently had a flaw in their {model} phone" vs. the desensitizing "Another phone vulnerability."

I've seen some manufacturers start to advertise which WiFi chipset a laptop has. For example, Qualcomm Atheros cards are now marketed as "Killer," and Intel Wifi cards are often referenced by model number in laptop specs.

I hope that becomes true one day, but it's certainly not the case now.

Manufacturers currently have no choice but to ship devices running vulnerable application software that communicates with remote devices using vulnerable protocols. This is facilitated by vulnerable operating systems running many vulnerable device drivers that communicate with devices that themselves have embedded processors running yet more vulnerable software.

All manufacturers can do is keep patching the morass of code their offering depends on, but most device manufacturers can't even manage that; or if they can, they are unable to do it fast enough and unwilling to do it for long enough.

Atheros has been open and Broadcom closed for well over a decade now. That has not driven substantial adoption of Qualcomm's wifi platform.

Judging from what I've seen of their drivers, yeah, their firmware/microcode is probably an embarrassing bug-ridden code base. But it also represents a lot of investment in low-level/low-layer features that they would prefer to hide from competitors (much more sensitive than the higher-layer logic in the driver proper).

While I can understand that sentiment and perspective, those people are wrong: they are inconveniencing their competitors to a tiny degree and overestimating the value of those features due to the awful implementation.

OTOH anybody can reverse engineer it. Just a tad bit more work.

The bugs in question here are not in the drivers (the bits that run in the OS kernel on the CPU). They are in the firmware (code that runs on a little ARM core on the WiFi chip itself - also called the microcode in the biz).

The driver is indeed "protected" for IP-lawyer reasons; they'll hand it out under license to every Tom, Dick, and Jane looking to build a device with their chipset. The firmware, on the other hand, is very closely held, because that's where the chip's functionality lives. A WiFi chipset implements a fantastically complicated protocol, and no one wants to bake that into hardware that can't be updated as bugs are found; so they build relatively simple hardware and slap a microcontroller right on the die that runs all the complicated logic.

This means that the microcode is as sensitive as the hardware specs on earlier generations of hardware; a competitor with a copy of that source can make a (perhaps better and improved) knockoff if they're not too worried about legal implications like, say, several dozen Chinese knockoff shops.

    echo $original_comment | sed 's/driver/firmware/g'

In which case this is your answer; they're worried about knockoffs, because without the firmware logic their devices are simple commodities, ie don't really have strong differentiators from the competition.

The other hugely important side of this is that a lot of these devices have the ability to transmit on frequencies which they may not be licensed for, or may not be licensed for in all markets.

The easiest example being b/g channel 13. You're permitted to use it for WiFi in most of the world, but not North America. Keeping the firmware proprietary and "secure" is likely an important part of their FCC/IC certifications.

It is required [1]. The FCC requires locking down the ability to transmit on regulated frequencies.

1. https://apps.fcc.gov/eas/comments/GetPublishedDocument.html?...

Is their firmware really noticeably better or have more features than competing chipsets?

Yes. 802.11ac, for example, takes both a faster transceiver and a bunch of firmware support, and time to market with these features was a big driver of sales. And that was an ongoing process: ac is actually a set of features, and it's taking years for all of them to be implemented.

Similarly, cheaper chips often don't support optional performance-enhancing features at layers 2 and 3 (link and MAC) that boost performance without any hardware investment.

One theory I've seen bandied about related to GPU drivers is that it's harder for your competitors to notice you're infringing on their patents if you don't ship your source code.

This seems unlikely. (Or maybe that's the reason given, but it seems implausible to be true.) Competitors have more than enough know how to determine if you're infringing a patent, source or no source.

The US legal system has discovery for patent cases. You can sue and then subpoena their code base to confirm whether they are in violation before going to court (and really racking up the fees). Since these are US companies I think that it's more likely fear that others would see the horrible hacks or clever trade secrets.

In other countries (most of Asia), where there is no discovery, it's almost impossible to prove hardware or software patent violations, so your case is kicked out of court immediately, even if your patent claims are valid and their product reads on your claims. That's why most patent suits end up in the US or Europe (or in the even faster ITC import injunction).

> Competitors have more than enough know how to determine if you're infringing a patent

How do you figure? The detectability of infringement is a key factor in deciding whether to file a patent.

Regardless, copyright infringement might also be an issue.

Well, it's hard to be specific since nobody ever mentions which patents they're talking about. But I would assume that somebody at AMD has the skills to determine whether Nvidia uses the "good matrix" technique, or whatever it is people are assuming AMD patented that Nvidia is trying to hide.

In court, there's a big difference between your assumption vs. published code.

Drivers are published code, freely available. If the driver does something and you can't find an expert witness to testify to what it does, maybe it doesn't do what you think?

Reading assembly with no comments is a different skill than reading C or C++, especially since it's not always clear what's an instruction and what's data, but it's still reading code, and there are tools to help you trace through it. And most of the drivers aren't writing code like it's a 64k demo (where the code is the data, and the data is the code, and they both modify each other)

But they need evidence.

Reverse engineering silicon to figure out if you used a specific type of patented algorithm is super hard.

Looking at open source code is waaaay easier.

>How could it possibly be detrimental for Broadcom to have free software drivers?

Because they are used as a front/vector for US intelligence agencies. Opening this firmware would not allow for the distribution of these implants.

While I acknowledge that this is a very real possibility, I don't think it has anything to do with the motivation to keep the source closed. I think more to do with an old world mentality that all intellectual property is a trade secret. Why share, when there's no real penalty to not sharing?

The question is, what value is secrecy, and alas the answer is that for those who want to exploit their technology prowess over others, secrets are a dire necessity.

An open ideology is one where everyone wins, even those who aren't on your team. The closed one is mostly to protect ones own team. I don't consider any of the reasons to be good reasons, personally. Commercially necessary, perhaps, as a conforming act as part of the "super-state", also perhaps.. but nevertheless, the best conclusion is that this situation is rotten enough to motivate someone to fix it.

Me personally .. I'd love to have the sources for every sub-processor/component in my system. It would be of immense value - commercially and otherwise - to me as an end-user. I hope I don't sit alone in this market...

Well based on the number of NSA and CIA "implants" in Linux and other open source software it isn't clear that makes a lot of difference.

It is indeed a battle.

The drivers are probably pretty complicated and thus valuable IP.

I don't buy that.

Every wifi chipset has working drivers; therefore there is little to no value in Broadcom's driver as "IP".

Contrast that to the value of having a free driver that can receive security patches from anyone at any time.

I'd argue most wifi chipsets do not have working drivers, but I have an apparently high bar for "working": It has to stay authenticated to a given access point indefinitely, can't kernel panic or require reboots to switch to a new network, and needs to have competitive throughput and tail latencies under load.

Most wifi adapters fail at least one of these requirements under windows or linux.

> Every wifi chipset has working drivers

Every existing Wi-Fi chipset has working drivers. A startup begins from scratch, which is one more barrier to entry.

True, but there are many open-source wi-fi drivers out there already. Unless broadcom's implementation is something out of the ordinary, releasing their driver doesn't really change the game.

And experience with Broadcom's wireless drivers (and NFC chips) tells me that the only thing extraordinary about them is how awful and limited they are.

Every GPU has working drivers, but optimizations within them can make huge differences in performance on the same hardware.

The driver 'optimizations' in GPUs tend to boil down to hand-written replacements for suboptimal/broken shaders and API call sequences on a per-application basis.

I expect a network card to not 'interpret' my traffic in a similar way.

As someone who spent a ton of time in this space (working with most mobile GPU vendors from Android E to L), you're talking about things just at the surface.

The way the drivers + backing architecture are built contains tons of IP and things that they would very much not want their competitors seeing.

Our team built our reputation and relationship on keeping that data separate and confidential despite working with just about every vendor out there.

Adding in the double-blind optimizations leading to absurdly complex drivers that try to guess how user code tries to guess driver logic... maybe Vulkan will help open source GPU drivers in the end.

Comparing a Wifi chipset to a video card is like comparing a bicycle to a sports car.

GPU drivers are very bug ridden and 3D apps can easily cause OS crashes. Apps are tested so they don't trigger those bugs. Witness all the complaint comments on webgl posts about machine crashes - and that's with a thick driver bug workaround layer in browsers.

Many of the "features" of a wifi chipset are implemented in drivers or firmware. QoS? Smarter packet scheduling? Better interference protection? All of those are likely in the drivers.

Competitors could review the code and "copy" how great features work without actually copying the code.

Tell that to the OpenBSD guys who have been dying for wifi drivers for years. They are definitely not all open. Furthermore, not nearly as many people are running alternative OSes on their phones as they are on other types of computers.

It's not that other chipsets have working drivers, but that an open driver would reveal technical secrets of both their software and hardware design. Not saying I agree with proprietary drivers though.

Having worked on enterprise APs using BCM and Atheros chipsets:

The driver is not deeply protected IP. They will hand out full copies to any device manufacturer that uses their chipset, so that they can integrate it with their systems. The secret sauce, the bit that they won't share with anyone even if it's necessary for debugging, is the on-chip firmware. That's the stuff they take seriously. The driver itself is just a lawyer thing.

1. Why doesn't Apple open source iOS?

2. These weren't drivers. Did you read the article?

3. The larger share of the blame belongs to Apple. Why does Apple trust devices like this that are essentially independent computers? Why should anything this chip does be able to take over the phone and install software on it, in privileged mode, that replicates itself?

4. Why can't you see things rationally?

C's lack of array size info strikes again:

    memcpy(current_wmm_ie, ie->data, ie->len);
where "ie" points to data obtained from the net.
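For illustration, the missing bounds check is a one-liner. A hedged sketch only: the names, types, and the 0x20 capacity are invented to mirror the snippet above, not Broadcom's actual code.

```c
#include <stdint.h>
#include <string.h>

#define WMM_IE_MAX 0x20  /* assumed capacity of the destination buffer */

/* Illustrative layout of a received information element */
struct ie { uint8_t len; uint8_t data[255]; };

/* Validate the attacker-controlled length before copying. */
static int copy_wmm_ie(uint8_t *current_wmm_ie, const struct ie *ie)
{
    if (ie->len > WMM_IE_MAX)   /* reject oversized input from the air */
        return -1;
    memcpy(current_wmm_ie, ie->data, ie->len);
    return 0;
}
```

Without that comparison, ie->len (read straight from a frame) is the copy length, which is the classic stack/heap overflow pattern.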

Programmer's mistake for not validating data, not the fault of C language mechanics. Yes it would be easier if <hll features>, still gotta be careful. I've made plenty of these mistakes but never blamed the language.

If a manufacturer makes and sells a gun that keeps going off in people's holsters and shooting people in the foot, the answer is not to say "it's the user's fault for not using it properly. I've shot myself in the foot hundreds of times and I don't blame the manufacturer".

Or something. That analogy sounded better in my head than written down. The point is that IMO the blame lies squarely with the C language: it's a language that's used in a lot of complex parsing code and provides pretty much nothing to help with this, and if anything actually puts roadblocks in the way.

Tools exist that find this error in C code; more blame to the company that didn't use those tools and released code this important. Blaming the language solely... not sure it's productive. If the gun went off in my holster WITH the safety on, then I'd blame the manufacturer. Same goes with using the right tools with C: if all steps were taken to do the right thing, then I can say we should blame C. But there's no information here on how Broadcom develops its code internally.

It's really hard to find an objective line in these things. Your 'tools exist to find this so they should buy them' is someone else's 'good developers should know this so hire good developers' or 'customers should know to be careful so it's their fault' or 'other languages are better about array length checking so they should use them'.

> that keeps going off

If the C language (not even a compiler, the language) somehow magically inserted bad code or rewrote your good code to be buggy on its own, you'd have a point.

> That analogy sounded better in my head than written down.

Probably, because it's really bad.

I shot myself in the foot yet I don't blame the manufacturer for not putting a safety on the gun since I'm the one that bought it with full knowledge of the caveats

My experience is that most C programmers don't know about many of the caveats about the C programming language.

Are you aware that atoi("a"); is undefined behavior? It can crash, it can launch nethack, it can return 0.

yes I'm aware that parsing a letter as an integer is undefined behavior, it's in the manual

> yes I'm aware that parsing a letter as an integer is undefined behavior


> it's in the manual

It's not in MSDN: https://msdn.microsoft.com/en-us/library/yd5xkb5c.aspx

It's not in the manpages: https://linux.die.net/man/3/atoi

Cppreference understates it as having an undefined return value, rather than undefined behavior outright: http://en.cppreference.com/w/cpp/string/byte/atoi

Tutorialspoint defines the behavior as returning 0, and fresh2refresh makes no mention of undefined behavior.

My eighth google hit for atoi finally, finally, gets it right: http://pubs.opengroup.org/onlinepubs/9699919799/functions/at...

If you buy or pirate a copy of e.g. the C89 standard, or refer to one of the free draft versions, it's of course properly documented there too. Neither shows up in the first 50 google results, naturally.

And, of course, by google result 9, we're back to square one - incorrectly defining the behavior as being "returning 0": https://en.wikibooks.org/wiki/C_Programming/stdlib.h/atoi
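For what it's worth, the usual way to sidestep the whole question is strtol, which has fully defined error reporting. A minimal sketch (the parse_int name is mine, not a stdlib function):

```c
#include <errno.h>
#include <limits.h>
#include <stdlib.h>

/* Parse an int with explicit error reporting, instead of relying on
 * atoi's murky error semantics. Returns 0 on success, -1 on failure. */
static int parse_int(const char *s, int *out)
{
    char *end;
    errno = 0;
    long v = strtol(s, &end, 10);
    if (end == s)                        /* no digits consumed, e.g. "a" */
        return -1;
    if (errno == ERANGE || v < INT_MIN || v > INT_MAX)
        return -1;                       /* out of range for int */
    *out = (int)v;
    return 0;
}
```

The key difference from atoi: strtol is required to tell you, via the end pointer and errno, both "no conversion happened" and "value out of range".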

If thousands of people were repeatedly making the same mistake, and frequently shooting other innocent bystanders' feet, I would definitely put some blame on the manufacturer.

the parent's analogy is quite bad since C doesn't shoot you in the foot unless you pull the trigger

closer to a gun without a safety, which plenty of manufacturers sell

That doesn't mean that gun is well designed or worth buying and using. Criticising the lack of such a valuable or even essential feature and advising people not to use it would be fair comment. Right?

never said C was a great language, merely indicated that it's important to distinguish between a mistake made by a programmer and a failure of the language

pretty sure trusting user provided data without validation is the programmers fault regardless of language

You consented to it, sure. This bug affected millions of phones. It's more like a car manufacturer decided to put a particular explode-y gas tank in a car, because it provided greater performance.

and they put a note in the manual that says "if you drive over 70mph on a hot day it will definitely explode so don't do that"

so if you buy that car and you do that, it's your fault regardless of how poor the car design is

There's likely to be a government mandated recall if your car "spontaneously" explodes - some designs are so poor they fail to meet entirely reasonably regulatory standards. Burying the lede in the fine print is not a get out of jail free card for obvious reasons.

> Programmer's mistake for not validating data, not the fault of C language mechanics.

If you outfitted a police force with guns without safeties and with hair triggers, and you have any sanity, you're not going to be surprised if the rate of accidental discharges goes up. Programmers use their programming languages a lot more frequently and with a lot less care.

So, sure, let's tell the programmers to be more careful. But the most careful of programmers know they'll still make mistakes, and seek out ways to aid themselves in catching those mistakes instead of hoping they can will them away. One such aid can be choosing another programming language that doesn't share C's language mechanics.

Put another way: programmer's mistake for choosing to use C's language mechanics. You could argue that's technically not saying it's the fault of C language mechanics, but I'd say that's splitting hairs at best.

You could share the blame with a failure to properly fuzz, and insufficient use of static analysis. I'd be okay with that too.

It's 2017. It's about time we had a better, well-designed language (i.e. Rust, but I'm not sure if it's well designed enough; it depends on what you want it for, too). There have been many advances in language research since the 70s that could be very useful for the kind of work C is used for. Saying we should stick to C because it's well established is like saying we should have stuck to assembly because it was well established. Except assembly isn't portable, but close enough.

I've noticed that C and C++ have a pretty fanatical and very narrow-minded fanbase. And I am talking about people aged 50+ as well -- I was acquainted with several of them (in the real physical world).

So IMO it's absolutely pointless trying to argue with them in the first place. They are set in their ways and while a good chunk of them are pretty strict and excellent in what they do, they are not open to any changes.

I would be the first to agree that Go and Rust aren't ready to start replacing drivers but IMO people should start trying! (Or invest in LLVM some more?)

C/C++'s faults aren't ever going away. They're too convenient in their target area. I am against the overly-used "disruption" term -- I happen to believe the USA tech blogosphere bastardized the term long ago -- but IMO the systems programming area is very, VERY overdue for disruption.

It's time.

You prefer to blame something that can't realistically be changed instead of something that could be changed. How is it useful?

There's a reason "THIS SIDE TOWARDS ENEMY" is a thing.

It's actually "FRONT TOWARD ENEMY" [1].

[1] https://en.wikipedia.org/wiki/M18_Claymore_mine

Fair enough. Thanks for the correction and additional information.

C's lack of arrays strikes again. They are essentially syntactic sugar.

What do you mean by C not having arrays?

It's syntactic sugar in the sense that arr[i] is just shorthand for *(arr+i)

There's no abstraction or 'concept' of arrays there. You are literally just telling the compiler to take a certain pointer and move i steps ahead.

Isn't that the definition of an array? Chunk of contiguous memory plus a notion of how to subdivide it into equal parts?

Yes, exactly. A chunk of memory. A chunk has an end.

But "arrays" in C aren't a chunk of memory, just the info where it starts and how large elements are.

The problem is that a frightening number of people don't bother to write the half-dozen obvious wrappers around this, and stdlib doesn't provide them either:

struct buf { uint8_t *base; size_t size; };
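Filled out a bit, one possible shape of such a wrapper; the names here (buf, buf_copy) are invented for illustration, not any standard API:

```c
#include <stddef.h>
#include <stdint.h>
#include <string.h>

/* A sized buffer: pointer plus capacity travel together. */
struct buf { uint8_t *base; size_t size; };

/* Copy n bytes into dst, refusing to overflow its capacity.
 * Returns 0 on success, -1 if n doesn't fit. */
static int buf_copy(struct buf *dst, const uint8_t *src, size_t n)
{
    if (n > dst->size)
        return -1;          /* caller handles the error, not the stack frame */
    memcpy(dst->base, src, n);
    return 0;
}
```

The point being that once length checks live in one place, callers can't forget them, which is exactly the class of mistake in the memcpy above.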

> The problem is that a frightening number of people don't bother to write the half-dozen obvious wrappers around this

That's likely because having to pass it in and out of functions and libs that don't expect your special structure might cause it to have an invalid length, and then all your special wrappers become a liability and not an advantage, through either assuming your bufs are valid or defensively checking more than is necessary because they can't know whether it was altered or not.

> stdlib doesn't provide them either

Which is the real problem. That would make them a de facto standard, and a lot (but probably not all) of the problems would be mitigated by people accepting the performance trade-offs needed to make them safe.

That doesn't solve the problem because you still have to set 'size' manually.

I assume they mean that C's support for "arrays" is essentially just C's support for pointer arithmetic plus a mapping of a[b] to *(a+b).

You can even swap it: a[b] is equivalent to b[a].

shouldn't it be:

* (a + b * sizeof(T))

where T is array elements' type?

That's implied in + on a pointer. uint32_t* + 1 actually adds 4.
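Both points in one tiny demo; the helper names are made up, but the semantics are standard C:

```c
#include <stddef.h>
#include <stdint.h>

/* a[2] and 2[a] are both defined as *(a + 2), so they're interchangeable. */
static uint32_t nth_forward(const uint32_t *a)  { return a[2]; }
static uint32_t nth_backward(const uint32_t *a) { return 2[a]; }

/* Pointer arithmetic scales by the pointed-to type: p + 1 on a uint32_t*
 * advances sizeof(uint32_t) bytes, not 1. */
static ptrdiff_t step_bytes(const uint32_t *p)
{
    return (const char *)(p + 1) - (const char *)p;
}
```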

Probably referring to the fact that they are simply pointers into contiguous memory.

C doesn't have "memory" in the standard. They're pointers into a contiguous object, but anything before a[-1] or after a[sizeof(a)-1] is undefined aka it actually doesn't exist.

`sizeof(a)` only gives the size of the array when the size is specified at compile time. Either you accept e.g. `int[16]` as a type, or you pass a pointer (for which `sizeof` just returns `sizeof(intptr_t)`)

That's not quite right. Arrays always have a knowable length, and sizeof will give a correct result for variable length arrays as well.

However, arrays that are passed as arguments to functions decay into raw pointers, at which point you lose information about its length.

> sizeof will give a correct result for variable length arrays as well.

In C99, with rather spotty support. And never with malloc and similar, which is how the vast majority of arrays are (and can be) created. And you can't return or store those dynamic arrays somewhere else without losing the size info, nor can they be declared static.

In other words, you're right but for very limited situations.
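A quick illustration of the decay behavior being discussed; the function names are invented for the demo:

```c
#include <stddef.h>

/* Where the array type is visible, sizeof sees the whole array. */
static size_t size_in_caller(void)
{
    int a[16];
    return sizeof(a);           /* 16 * sizeof(int) */
}

/* A parameter declared as an array decays to a pointer, so sizeof
 * here measures the pointer, not the 16 ints. */
static size_t size_in_callee(int a[16])     /* really: int *a */
{
    return sizeof(a);           /* sizeof(int *) */
}
```

This is why functions taking arrays in C conventionally take a separate length parameter: the callee has no other way to know the size.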

They are a pointer and an offset with no validation of bounds. But that's OK: C is little more than a high level assembly.

I hate the term "high level assembly" when applied to C, since it implies C is a superset of assembly, which is obviously not true.

C is full of undefined behavior, assembly is not.

High level != Superset

Actually we often give up features and specialize with high level languages. That's why there are things easier to do in Ruby than in C++ and vice versa.

They are all (usually much more convenient) subsets of assembly.

What? Assembly has undefined behavior, for example wrt unaligned access on some processors.

The article mentions

> Broadpwn is a fully remote attack against Broadcom’s BCM43xx family of WiFi chipsets, which allows for code execution on the main application processor in both Android and iOS.

But it doesn't go into any detail on how this privilege escalation actually works for iOS, and more specifically whether it requires additional exploits. Can anyone explain this in more detail? If this actually allows code execution on the iOS application processor, that means we have a jailbreak, right?

The block diagram shows a PCIE connection to the application processor, which enables DMA. Most modern systems have a MMU to prevent the peripheral from DMAing to memory areas not specifically reserved for it, but given (certainly Android) systems run oldschool kernels hacked together by the last kind of crowd you want working on them it's probably not enabled or setup correctly.

The other more obvious privilege escalation is that there is still a kernel driver on the application processor talking to the chipset. There is per se no reason to distrust data coming from the chipset, so these often aren't written as defensively as they should be and could contain trivially exploitable assumptions about what the chipset will send and do.

> Most modern systems have a MMU to prevent the peripheral from DMAing to memory areas not specifically reserved for it, but given (certainly Android) systems run oldschool kernels hacked together by the last kind of crowd you want working on them it's probably not enabled or setup correctly.

I'm not sure it's fair to assume iOS IOMMU isn't set up properly just because that's the case on many (most?) android phones. According to the author, most android phones don't even have KASLR which iOS had since iOS6. I would assume IOMMU exists and is working properly unless someone has evidence otherwise (quick google shows very little information on iOS + IOMMU). If a DMA attack is indeed successful on iOS devices, I think that would be substantial enough to write about.

> The other more obvious privilege escalation is that there is still a kernel driver on the application processor talking to the chipset.

I would consider that a separate exploit--but even then you still need a KASLR bypass (another exploit?) at the very least to gain control.

> so these often aren't written as defensive as they should be

On the contrary: the market rate for an iOS jailbreak chain is upwards of $1 million USD, so I'd be surprised if a single exploit gives you full system control.

I didn't want to suggest that iOS is insecure because Android systems are. For Android we know most of them are hopeless, on iOS it's security by obscurity all the way with just a generally good "track record".

Well, they patched something. Maybe they just patched the firmware image that is loaded onto the chipset on boot and there was no privilege escalation onto the iOS application processor. But if there was, the obscurity means criminals can easily look at the patch to see what it was and exploit that while the public knows nothing.

How, specifically, is iOS "security by obscurity all the way"?

Right, I'm guessing they just patched the actual wifi chip vulnerability. That's why I would like someone (preferably the author) to disclose if there is indeed a privilege escalation or not and to provide more details if there is.

> criminals can easily look at the patch to see what it was and exploit that while the public knows nothing.

Huh? If criminals can “easily” inspect the patch, why can’t the public?

They can, but ease is a term that is conditional on expertise.

> hacked together by the last kind of crowd you want working on them

That was neither informative nor necessary, really... (And likely not true)

The leading Android smartphone producer, Samsung, deliberately shipped on at least two of their flagship phones a world-RW device node that allowed full access to the entire system memory, including the entirety of the kernel:


They needed it to get the camera working. This is not an exaggeration, the people they have writing software for their phones will take every shortcut they can find or imagine to deliver another bullet point on the feature list. Security be damned.

That's shocking, but it happened in 2012. How have they been doing for the past 5 years?

They only hijacked the radio chip, they didn't escalate to the main processor.

One way of attacking would be to intercept someone's internet traffic, and redirect them to a different site (ie, instead of going to Google, you go to get-hacked which looks just like Google).

> Broadpwn is a fully remote attack against Broadcom’s BCM43xx family of WiFi chipsets, which allows for code execution on the main application processor in both Android and iOS.

This implies they have code execution on the application processor just from broadpwn (and not additional safari/ios exploits). Hijacking internet traffic is indeed serious but tech blog sites are already picking up on this and blowing it up. Example: https://9to5mac.com/2017/07/20/broadpwn-wifi-vulnerability-i...

> PSA: Update to iOS 10.3.3 to fix serious wifi vulnerability allowing attacker complete control

Probably because a device on the system itself has tremendous privileges; aside from interacting with non-hardened kernel code (drivers generally trust the device not to be malicious), they generally have DMA access, can impersonate the CPU when speaking to other devices on PCI, and can generally cause all kinds of havoc.

They intercept and modify packets to redirect to a web site they control. It exploits the main processor through the browser.

Assuming one has additional browser vulnerabilities, sandbox escapes and privilege escalation bugs. "Broadpwn" doesn't exploit main processor __anything__ by itself.

Does the broadcom processor normally have access to main memory? If so, I would have to consider that a complete attack absent any mitigation technology.

No. It's a PCI Express device (or SDIO/USB for older chips). All modern platforms have IOMMUs for those, and every decent OS will enable it.
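If you want to check whether your Linux box actually enabled its IOMMU, the kernel log is the usual place to look. A minimal sketch (the exact log messages vary by kernel version and vendor; the substrings below are common markers for Intel VT-d and AMD-Vi, and are an assumption, not an exhaustive list):

```python
import subprocess

# Common kernel-log markers indicating an active IOMMU (assumed, not exhaustive):
# Intel VT-d, AMD-Vi, and the generic iommu core message on newer kernels.
IOMMU_MARKERS = (
    "DMAR: IOMMU enabled",
    "AMD-Vi:",
    "iommu: Default domain type:",
)

def iommu_active(dmesg_lines):
    """Return True if any line in the kernel log suggests an IOMMU is enabled."""
    return any(marker in line for line in dmesg_lines for marker in IOMMU_MARKERS)

# On a live system, feed it the actual kernel log (requires permission to read dmesg):
# lines = subprocess.run(["dmesg"], capture_output=True, text=True).stdout.splitlines()
sample = [
    "[    0.000000] DMAR: IOMMU enabled",
    "[    0.120000] PCI: Using ACPI for IRQ routing",
]
print(iommu_active(sample))  # True
```

Note this only tells you an IOMMU is on, not that the Wi-Fi device is actually behind a restrictive DMA domain.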

Fortunately, this is being addressed in software updates. Unfortunately, people who own older devices are left with the vulnerability forever. The iPhone 4S alone sold ~60 million units (according to Wikipedia) and did not (and most likely will not) receive any updates.

Where can you learn if your device has been patched? I have an iPhone 5S.

EDIT: From another comment [0], unfortunately if you've been holding out on updates like I have you'll have to upgrade to 10.3.3.

[0] https://support.apple.com/en-us/HT207923

Why are you waiting to update?

From this it seems to have been applied in last week's iOS patch, 10.3.3.


This is kind of scary :(. How does one ensure that they aren't vulnerable to this bug?

Apple released fixes for both macOS and iOS last week:

https://support.apple.com/en-us/HT207923 https://support.apple.com/en-us/HT207922

If your Android OEM has pushed the July 2017 security update to your device, you're patched.
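On Android you can read the device's security patch level with `adb shell getprop ro.build.version.security_patch` and compare it against the July 2017 bulletin. A rough sketch (assumption: any patch level of 2017-07-05 or later includes the Broadpwn fix, CVE-2017-9417):

```python
from datetime import date

# Broadpwn (CVE-2017-9417) was addressed in the July 2017 Android security
# bulletin; "2017-07-05" is the corresponding patch level string (assumption:
# any level at or after this date includes the fix).
BROADPWN_FIX = date(2017, 7, 5)

def is_patched(security_patch_level: str) -> bool:
    """security_patch_level is the value of ro.build.version.security_patch,
    e.g. "2017-07-05", as reported by:
        adb shell getprop ro.build.version.security_patch
    """
    y, m, d = (int(x) for x in security_patch_level.split("-"))
    return date(y, m, d) >= BROADPWN_FIX

print(is_patched("2017-06-01"))  # False: pre-July level, still vulnerable
print(is_patched("2017-07-05"))  # True
```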


Out of curiosity, what fraction of Android OEMs push these security updates promptly (or equivalently what fraction of Android phones receive these kind of updates regularly)?

This page has a table of OEMs/devices that, as of the end of May, were fewer than 60 days behind on patches.


To me, the takeaway from this is that unless you are using a "flagship" device, or one sold directly by Google, you're probably not getting updates in a timely manner.

Manufacturer: Device(S)

BlackBerry: PRIV

Fujitsu: F-01J

General Mobile: GM5 Plus d, GM5 Plus, General Mobile 4G Dual, General Mobile 4G

Gionee: A1

Google: Pixel XL, Pixel, Nexus 6P, Nexus 6, Nexus 5X, Nexus 9

LGE: LG G6, V20, Stylo 2 V, GPAD 7.0 LTE

Motorola: Moto Z, Moto Z Droid

Oppo: CPH1613, CPH1605

Samsung: Galaxy S8+, Galaxy S8, Galaxy S7, Galaxy S7 Edge, Galaxy S7 Active, Galaxy S6 Active, Galaxy S5 Dual SIM, Galaxy C9 Pro, Galaxy C7, Galaxy J7, Galaxy On7 Pro, Galaxy J2, Galaxy A8, Galaxy Tab S2 9.7

Sharp: Android One S1, 507SH

Sony: Xperia XA1, Xperia X

Vivo: Vivo 1609, Vivo 1601, Vivo Y55

Just a disclaimer, this isn't the complete list of devices that received the July 2017 update. I, for one, received it for my Moto G4 Play in Brazil.

This list shows the models with a MAJORITY OF DEPLOYED DEVICES running a security update from the last two months.

Yeah I just cut and pasted from the link above

And yet another time we learn why it is better to use Lineage OS. Five year old Samsung S3:

  Android: Version 7.1.2
  Security Patch Level: 5th July 2017

Note that not all vulnerabilities are/can be patched by LineageOS, regardless of what the security patch level claims. Your device maintainer needs to actively merge patches into the kernel/device (see [0], note that this list relies on maintainers to update it). In addition, binary blob firmware needs to be patched by the manufacturer (e.g. Broadcom wi-fi exploits), which won't happen for devices that are out of support.

[0] https://cve.lineageos.org/kernels

My five-year-old Samsung S3 for Verizon stopped receiving updates less than 2 years after its release. The bootloader is locked tight, so I am unable to install any custom ROMs such as Lineage OS.

My phone screen was eaten alive by fungi last week, so I had a look at the field to pick a new device. Discovered Lineage OS, super keen. Unfortunately, its device support is crap.

Guess I'll have to look at upgrading my diehard old Moto G. It's still on Android 5.1.1.

Meanwhile I guess disabling WiFi is a mitigation?

> Meanwhile I guess disabling WiFi is a mitigation?

That's a good question. If it's disabled in firmware and not actually powered down, it might still be susceptible.

My TV came with no on-screen menus but a tablet you use to interact with most features and settings: it's on the June... 2016 patch set. It was over half a year out of date before I took it out of the box.

The supported Pixel and Nexus phone lines get things quickly.

There isn't any third party customization to re-validate.

If you have an iPhone make sure you're on iOS 10.3.3. For Macs, you want macOS 10.12.6.
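A quick way to check is a dotted-version comparison against the first fixed releases (iOS 10.3.3 and macOS 10.12.6, per Apple's advisories linked above). A minimal sketch; on a Mac the installed version can be read with `sw_vers -productVersion`:

```python
# First releases containing the fix, per Apple's security advisories.
FIXED = {"ios": (10, 3, 3), "macos": (10, 12, 6)}

def is_fixed(platform: str, version: str) -> bool:
    """Return True if the given version string is at or past the fixed release."""
    parts = tuple(int(p) for p in version.split("."))
    parts += (0,) * (3 - len(parts))  # pad so "10.3" compares like (10, 3, 0)
    return parts >= FIXED[platform]

print(is_fixed("ios", "10.3.2"))     # False: still vulnerable
print(is_fixed("macos", "10.12.6"))  # True
```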

Why is El Capitan not getting an update? It's still on support.

This is the relevant security update for El Cap: https://support.apple.com/kb/DL1932?viewlocale=en_US&locale=...

Turn off your wifi

Don't use any devices with a Broadcom Wi-Fi chipset.

which basically means don't use any Wi-Fi. I think at least 60% of all Wi-Fi chipsets are Broadcom ones.

And who's to say another chipset won't have a similar issue?

That seems pretty doable. On phones there are Qualcomm chipsets, and on desktops/laptops there are Atheros, Intel, and Realtek (of the vendors I know of).

So your considered recommendation is for 60% of recently bought device owners globally to all replace their phones, tablets and laptops with devices containing Qualcomm chips, immediately.

And this is preferable to a software fix.

Well, on servers/laptops I was always happy to have an Intel chip, though I once bought a USB stick with one that was a complete piece of garbage. The BCM chips were "mostly" stable, so I didn't have too many problems with them; some chips had issues under Linux, but besides that they were OK.

It is pretty hard to ensure you aren't vulnerable.

I've been hearing people complain about the seriousness of this attack vector for years. I'd be surprised if there weren't intelligence agencies that have utilized it already.

Could please someone explain, 1) if firmware is stored on a Wifi chip or rather loaded during the boot process?

2) Do apple/google have binary image from Broadcom or rather source code?

It is quite interesting how this patch production/delivery process works.

> 1) if firmware is stored on a Wifi chip or rather loaded during the boot process?

Typically it's loaded during the boot process. On Linux, see the binary blobs in /usr/lib/firmware or: https://git.kernel.org/pub/scm/linux/kernel/git/firmware/lin...
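As a quick illustration, you can enumerate the Broadcom firmware blobs your Linux system would load at boot. A minimal sketch; `/usr/lib/firmware` with `brcm/` and `b43/` subdirectories are the usual locations, but your distro's path may differ:

```python
import os

def brcm_firmware(root="/usr/lib/firmware"):
    """Yield relative paths of Broadcom firmware blobs under the given
    firmware directory ("brcm" and "b43" are the usual subdirectories)."""
    for sub in ("brcm", "b43"):
        d = os.path.join(root, sub)
        if os.path.isdir(d):
            for name in sorted(os.listdir(d)):
                yield os.path.join(sub, name)

for blob in brcm_firmware():
    print(blob)
```

On a system with a Broadcom chip this typically prints files like `brcm/brcmfmac43430-sdio.bin`; the kernel requests those by name when the driver initializes the device.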

Source: I did some work with some broadcom (wired) firmware years ago and found it to be pretty unpleasant.

http://ipxe.org/gsoc/bnx2 and https://github.com/dewyatt/bnx2-fw-utils

I worked with both QCA and Broadcom wireless, and can confirm. Generally a version of the driver is compatible with only a specific version of the firmware; the system manufacturer gets the driver source and a firmware binary as a package.

How long until someone unleashes this? There are going to be millions of vulnerable Android phones for at least a couple of years to come. Surely it will happen.

I do wonder why most mobile chips are broadcom. There's decent competition from Qualcomm atheros and mediatek.

Dunno if it's still true, but Qualcomm/atheros was more expensive and mediatek was cheap crap.

If anyone wonders, this was patched in iOS 10.3.3 https://threatpost.com/apple-patches-broadpwn-bug-in-ios-10-...

http://boosterok.com/blog/broadpwn/ shows a simple check using hostapd to see if a device is vulnerable

I already updated my phone. Is the iOS update that patches this available over a cell network? If not, as is usually the case, isn't that Not Good?

10.3.3? "This update requires a Wi-Fi network connection to download." Frustrating. I read this update may be 80-100 MB. Apple, please let me use my mobile data as I see fit. (And a security patch is certainly a worthy use!)

Agree, this is irresponsible to not let us update over cellular

I'm currently on vacation, which means that most of the Wi-Fi hotspots I'm connecting to are public. That's a big security risk in itself.

iOS security update files are crypto signed, so it's safe to download them on public WiFi.

Not in this case.

If your wifi chipset is known to be compromised, and there is a remote exploit available, as in this case, then merely connecting with wifi to an evil hotspot is enough to compromise a device.

Once a device's been compromised, crypto signatures on update files won't protect you, as the signature checking itself can no longer be trusted.

Proprietary drivers, firmware blobs and ASICs are a national security threat. Without open code reviews, auditing and functional verification it's impossible to trust there are both a minimum of exploitable bugs and/or backdoors in a given software-hardware stack. This may require some sort of confidentiality rubric but there's no shortcut to getting around this vital need.
