There is no benefit. They probably have an embarrassing code base full of garbage and a bunch of lawyers paranoid about IP. Why would any manager suggest taking a risk that has very little potential upside?
It isn't that it would be realistically detrimental; it just has no value to the individual attempting to change the established course of the ship.
There are tons of benefits, to the end users. But most of them result in the user having less reason to buy a new device in the hopes of faster or more reliable WiFi.
> But manufacturers are not going to use a wifi chip maker known to be repeatedly exploited, because of end user perception.
How many of your non-professionally-technical friends could tell you the manufacturer of their WiFi chip? Is it on the box? Could they even tell you who Broadcom is?
"Intel NIC" only recently became a very minor selling point in enthusiast desktop motherboards. I'm not holding out hope this is going to follow a more informed curve.
The gatekeepers (manufacturers) are the only ones informed enough to make the decision en masse. And they're not going to do so without a market reason. So barring something like "Broadcom stops providing security updates" or "New law holds device manufacturers liable for security bugs," they're going to save the few cents on BoM and continue using them.
I agree, but these things go up the chain. Sure, people won't say 'I won't buy Broadcom' or 'I won't buy Snapdragon.' But they will say 'I won't buy PhoneCoXYZ because they were in the news for being hacked.' Not after this necessarily, but after enough times, I think we'll see it.
I'd like to move more towards this. And the branding and identification is probably a big part of this. Pushing news to report the culpable parties by name. "Samsung / Google / Apple recently had a flaw in their {model} phone" vs "Another phone vulnerability" desensitization.
I've seen some manufacturers start to advertise which WiFi chipset a laptop has. For example, Qualcomm Atheros cards are now marketed as "Killer," and Intel Wifi cards are often referenced by model number in laptop specs.
I hope that becomes true one day, but it's certainly not the case now.
Manufacturers currently have no choice but to ship devices running vulnerable application software that communicates with remote devices using vulnerable protocols. This is facilitated by vulnerable operating systems running many vulnerable device drivers that communicate with devices that themselves have embedded processors running yet more vulnerable software.
All manufacturers can do is keep patching the morass of code that their offering depends on, but most device manufacturers can't even manage that, or if they can, they are unable to do it fast enough and unwilling to do it for long enough.
Judging from what I've seen of their drivers, yeah, their firmware/microcode is probably an embarrassing bug-ridden code base. But it also represents a lot of investment in low-level/low-layer features that they would prefer to hide from competitors (much more sensitive than the higher-layer logic in the driver proper).
While I can understand that sentiment and perspective, those people are wrong: they are inconveniencing their competitors to a tiny degree and overestimating the value of those features due to the awful implementation.
The bugs in question here are not in the drivers (the bits that run in the OS kernel on the CPU). They are in the firmware (code that runs on a little ARM core on the WiFi chip itself - also called the microcode in the biz).
The driver is indeed "protected" for IP-lawyer reasons; they'll have it out under license to every Tom, Dick, and Jane looking to build a device with their chipset. The firmware, on the other hand, is very closely held, because that's where the chip's functionality lives. A WiFi chipset implements a fantastically complicated protocol, and no one wants to bake that into hardware that can't be updated as bugs are found; so they build relatively simple hardware, and slap a microcontroller right on the die that runs all the complicated logic.
This means that the microcode is as sensitive as the hardware specs were on earlier generations of hardware; a competitor with a copy of that source can make a (perhaps better and improved) knockoff if they're not too worried about legal implications, like, say, several dozen Chinese knockoff shops.
In which case this is your answer: they're worried about knockoffs, because without the firmware logic their devices are simple commodities, i.e. they don't really have strong differentiators from the competition.
The other hugely important side of this is that a lot of these devices have the ability to transmit on frequencies which they may not be licensed for, or may not be licensed for in all markets.
The easiest example being b/g channel 13. You're permitted to use it for WiFi in most of the world, but not North America. Keeping the firmware proprietary and "secure" is likely an important part of their FCC/IC certifications.
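To make that concrete, the per-market channel limits usually boil down to a small regulatory table the firmware consults before transmitting. Here is a minimal sketch (hypothetical structure and names, not Broadcom's actual firmware code):

```c
#include <stdbool.h>
#include <stdio.h>

enum reg_domain { REG_FCC, REG_ETSI, REG_JP };

struct chan_range {
    int min_chan;
    int max_chan;
};

/* 2.4 GHz channel limits per domain: FCC stops at 11, ETSI allows up to 13,
 * Japan allows 14 (802.11b only). Values are illustrative. */
static const struct chan_range rules_2g[] = {
    [REG_FCC]  = { 1, 11 },
    [REG_ETSI] = { 1, 13 },
    [REG_JP]   = { 1, 14 },
};

static bool tx_allowed(enum reg_domain dom, int chan)
{
    return chan >= rules_2g[dom].min_chan && chan <= rules_2g[dom].max_chan;
}

int main(void)
{
    printf("channel 13 under FCC: %s\n",
           tx_allowed(REG_FCC, 13) ? "allowed" : "blocked");
    return 0;
}
```

If a table like that lived in openly modifiable firmware, patching channel 13 (or 14) back on for a North American device would be trivial, which is part of why regulators lean on vendors to lock the radio firmware down.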
Yes. 802.11ac, for example, takes both a faster transceiver and a bunch of firmware support, and time to market with these features was a big driver of sales. And that was an ongoing process: ac is actually a set of features, and it's taking years for all of them to be implemented.
Similarly, cheaper chips often don't support optional performance-enhancing features at layers 2 and 3 (link and MAC) that boost performance without any hardware investment.
One theory I've seen bandied about related to GPU drivers is that it's harder for your competitors to notice you're infringing on their patents if you don't ship your source code.
This seems unlikely. (Or maybe that's the reason given, but it seems implausible to be true.) Competitors have more than enough know-how to determine if you're infringing a patent, source or no source.
The US legal system has discovery for patent cases. You can sue and then subpoena their code base to confirm whether they are in violation before going to court (and really racking up the fees). Since these are US companies I think that it's more likely fear that others would see the horrible hacks or clever trade secrets.
In other countries (most of Asia), where there is no discovery, it's almost impossible to prove hardware or software patent violations, so your case is kicked out of court immediately, even if your patent claims are valid and their product reads on your claims. That's why most patent suits end up in the US or Europe (or in the even faster ITC import injunction).
Well, it's hard to be specific since nobody ever mentions which patents they're talking about. But I would assume that somebody at AMD has the skills to determine whether Nvidia uses the "good matrix" technique, or whatever it is people are assuming AMD patented that Nvidia is trying to hide.
Drivers are published code, freely available. If the driver does something, and you can't find an expert witness to testify it does what it does, maybe it doesn't do what you think?
Reading assembly with no comments is a different skill than reading C or C++, especially since it's not always clear what's an instruction and what's data, but it's still reading code, and there are tools to help you trace through it. And most drivers aren't written like a 64k demo (where the code is the data, and the data is the code, and they both modify each other).
While I acknowledge that this is a very real possibility, I don't think it has anything to do with the motivation to keep the source closed. I think it has more to do with an old-world mentality that all intellectual property is a trade secret. Why share, when there's no real penalty to not sharing?
The question is, what value is secrecy, and alas the answer is that for those who want to exploit their technology prowess over others, secrets are a dire necessity.
An open ideology is one where everyone wins, even those who aren't on your team. The closed one is mostly to protect one's own team. I don't consider any of the reasons to be good reasons, personally. Commercially necessary, perhaps, as a conforming act as part of the "super-state", also perhaps... but nevertheless, the best conclusion is that this situation is rotten enough to motivate someone to fix it.
Me personally .. I'd love to have the sources for every sub-processor/component in my system. It would be of immense value - commercially and otherwise - to me as an end-user. I hope I don't sit alone in this market...
I'd argue most wifi chipsets do not have working drivers, but I have an apparently high bar for "working": it has to stay authenticated to a given access point indefinitely, can't kernel panic or require reboots to switch to a new network, and needs to have competitive throughput and tail latencies under load.
Most wifi adapters fail at least one of these requirements under windows or linux.
True, but there are many open-source wi-fi drivers out there already. Unless Broadcom's implementation is something out of the ordinary, releasing their driver doesn't really change the game.
And experience with Broadcom's wireless drivers (and NFC chips) tells me that the only thing extraordinary about them is how awful and limited they are.
The driver 'optimizations' in GPUs tend to boil down to hand-written replacements for suboptimal/broken shaders and API call sequences on a per-application basis.
I expect a network card to not 'interpret' my traffic in a similar way.
As someone who spent a ton of time in this space (working with most mobile GPU vendors from Android E to L), you're talking about things just at the surface.
The way the drivers + backing architecture are built contains tons of IP and things that they would very much not want their competitors seeing.
Our team built our reputation and relationship on keeping that data separate and confidential despite working with just about every vendor out there.
Add in the double-blind optimizations, leading to absurdly complex drivers that try to guess how user code tries to guess driver logic... maybe Vulkan will help open-source GPU drivers in the end.
GPU drivers are very bug ridden and 3D apps can easily cause OS crashes. Apps are tested so they don't trigger those bugs. Witness all the complaint comments on webgl posts about machine crashes - and that's with a thick driver bug workaround layer in browsers.
Many of the "features" of a wifi chipset are implemented in drivers or firmware. QoS? Smarter packet scheduling? Better interference protection? All of those are likley in the drivers.
Competitors could review the code and "copy" how great features work without actually copying the code.
Tell that to the open-source OS folks who have been dying for wifi drivers for years. They are definitely not all open. Furthermore, not nearly as many people are running alternative OSes on their phones as they are on other types of computers.
It's not that other chipsets have working drivers, but that an open driver would reveal technical secrets of both their software and hardware design. Not saying I agree with proprietary drivers, though.
Having worked on enterprise APs using BCM and Atheros chipsets:
1. The driver is not deeply protected IP. They will hand out full copies to any device manufacturer that uses their chipset, so that they can integrate it with their systems. The secret sauce, the bit that they won't share with anyone even if it's necessary for debugging, is the on-chip firmware. That's the stuff they take seriously. The driver itself is just a lawyer thing.
2. These weren't drivers. Did you read the article?
3. The larger share of the blame belongs to Apple. Why does Apple trust devices like this that are essentially independent computers? Why should anything this chip does be able to take over the phone and install software on it, in privileged mode, that replicates itself?
Programmer's mistake for not validating data, not the fault of C language mechanics. Yes it would be easier if <hll features>, still gotta be careful. I've made plenty of these mistakes but never blamed the language.
If a manufacturer makes and sells a gun that keeps going off in people's holsters and shooting people in the foot, the answer is not to say "it's the user's fault for not using it properly. I've shot myself in the foot hundreds of times and I don't blame the manufacturer".
Or something. That analogy sounded better in my head than written down. The point is that IMO the blame lies squarely with the C language: it's a language that's used in a lot of complex parsing code and provides pretty much nothing to help with this, and if anything actually puts roadblocks in the way.
Tools exist that find this error in C code, so the blame falls more on the company that didn't use those tools and released code this important. Blaming the language solely... not sure it's productive. If the gun went off in my holster WITH the safety on, then I'd blame the manufacturer. Same goes with using the right tools with C: if all steps were taken to do the right thing, then I can say we should blame C. But there's no information here on how Broadcom develops their code internally.
It's really hard to find an objective line in these things. Your 'tools exist to find this so they should buy them' is someone else's 'good developers should know this so hire good developers' or 'customers should know to be careful so it's their fault' or 'other languages are better about array length checking so they should use them'.
If the C language (not even a compiler, the language) somehow magically inserted bad code or rewrote your good code to be buggy on its own, you'd have a point.
I shot myself in the foot yet I don't blame the manufacturer for not putting a safety on the gun since I'm the one that bought it with full knowledge of the caveats
If you buy or pirate a copy of e.g. the C89 standard, or refer to one of the free draft versions, it's of course properly documented there too. Neither shows up in the first 50 google results, naturally.
If thousands of people were repeatedly making the same mistake, and frequently shooting other innocent bystanders' feet, I would definitely put some blame on the manufacturer.
That doesn't mean that gun is well designed or worth buying and using. Criticising the lack of such a valuable or even essential feature and advising people not to use it would be fair comment. Right?
never said C was a great language, merely indicated that it's important to distinguish between a mistake made by a programmer and a failure of the language
pretty sure trusting user-provided data without validation is the programmer's fault regardless of language
You consented to it, sure. This bug affected millions of phones. It's more like a car manufacturer decided to put a particular explode-y gas tank in a car, because it provided greater performance.
There's likely to be a government mandated recall if your car "spontaneously" explodes - some designs are so poor they fail to meet entirely reasonably regulatory standards. Burying the lede in the fine print is not a get out of jail free card for obvious reasons.
> Programmer's mistake for not validating data, not the fault of C language mechanics.
If you outfitted a police force with guns without safeties and with hair triggers, and you have any sanity, you're not going to be surprised if the rate of accidental discharges goes up. Programmers use their programming languages a lot more frequently and with a lot less care.
So, sure, let's tell the programmers to be more careful. But the most careful of programmers know they'll still make mistakes, and seek out ways to aid themselves in catching those mistakes instead of hoping they can will them away. One such aid can be choosing another programming language that doesn't share C's language mechanics.
Put another way - programmer's mistake for choosing to use C's language mechanics. You could argue that's technically not saying it's the fault of C language mechanics, but I'd say that's splitting hairs at the best.
You could share the blame with a failure to properly fuzz, and insufficient use of static analysis. I'd be okay with that too.
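For what it's worth, the bug class being argued about here is usually a short, unchecked copy of an attacker-controlled, length-prefixed field. A simplified, hypothetical example (not Broadcom's actual code) of the kind of thing a fuzzer under AddressSanitizer or a static analyzer tends to flag:

```c
#include <stddef.h>
#include <stdint.h>
#include <string.h>

#define SSID_MAX 32

struct scan_result {
    char ssid[SSID_MAX];
};

/* 802.11-style TLV: ie[0] = element id, ie[1] = payload length, ie[2..] = payload.
 * Both functions assume the caller already verified the IE fits inside the
 * received frame. */

void parse_ssid_bad(struct scan_result *r, const uint8_t *ie)
{
    /* BUG: ie[1] is attacker-controlled and may exceed SSID_MAX; ASan or a
     * static analyzer flags this overflow almost immediately. */
    memcpy(r->ssid, ie + 2, ie[1]);
}

void parse_ssid_ok(struct scan_result *r, const uint8_t *ie)
{
    size_t len = ie[1] < SSID_MAX ? ie[1] : SSID_MAX;  /* clamp to the buffer */
    memcpy(r->ssid, ie + 2, len);
}
```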
It's 2017. It's about time we had a better, well-designed language (i.e. Rust, but I'm not sure if it's well designed enough; it depends on what you want it for, too). There have been many advances in language research since the '70s that could be very useful for the kind of work C is used for. Saying we should stick to C because it's well established is like saying we should stick to assembly if it were well established. Except it's not portable, but close enough.
I've noticed that C and C++ have a pretty fanatical and very narrow-minded fanbase. And I am talking about people aged 50+ as well; I was acquainted with several of them in the real physical world.
So IMO it's absolutely pointless trying to argue with them in the first place. They are set in their ways and while a good chunk of them are pretty strict and excellent in what they do, they are not open to any changes.
I would be the first to agree that Go and Rust aren't ready to start replacing drivers but IMO people should start trying! (Or invest in LLVM some more?)
C/C++'s faults aren't ever going away. They're too convenient in their target area. I am against the overly-used "disruption" term -- I happen to believe the USA tech blogosphere bastardized the term long ago -- but IMO the systems programming area is very, VERY overdue for disruption.
The problem is that a frightening number of people don't bother to write the half-dozen obvious wrappers around this, and stdlib doesn't provide them either:
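Something along these lines is presumably what's meant by a wrapper (a minimal sketch with hypothetical names, not the commenter's original snippet):

```c
#include <stdbool.h>
#include <stddef.h>
#include <string.h>

/* A length-carrying buffer: every write goes through a checked helper. */
struct buf {
    unsigned char *data;
    size_t len;   /* bytes in use */
    size_t cap;   /* bytes allocated */
};

/* Append n bytes only if they fit; the caller must handle the failure case. */
bool buf_append(struct buf *b, const void *src, size_t n)
{
    if (n > b->cap - b->len)
        return false;
    memcpy(b->data + b->len, src, n);
    b->len += n;
    return true;
}
```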
> The problem is that a frightening number of people don't bother to write the half-dozen obvious wrappers around this
That's likely because having to pass it in and out of functions and libs that don't expect your special structure might cause it to have an invalid length, and then all your special wrappers can become a liability rather than an advantage, either by assuming your bufs are valid or by defensively checking more than is necessary because they can't know whether it was altered or not.
> stdlib doesn't provide them either
Which is the real problem. That would make them a de facto standard, and a lot (but probably not all) of the problems would be mitigated by people accepting the performance trade offs needed to make them safe.
C doesn't have "memory" in the standard. They're pointers into a contiguous object, but anything before a[-1] or after a[sizeof(a)-1] is undefined aka it actually doesn't exist.
`sizeof(a)` only gives the size of the array when the size is specified at compile time. Either you accept e.g. `int[16]` as a type, or you pass a pointer (for which `sizeof` just returns `sizeof(intptr_t)`)
> sizeof will give a correct result for variable length arrays as well.
In C99, with rather spotty support. And never with malloc and similar, which is how the vast majority of arrays are (and can be) created. And you can't return or store those dynamic arrays somewhere else without losing the size info, nor can they be declared static.
In other words, you're right but for very limited situations.
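A quick illustration of the distinction being discussed (standard C, nothing vendor-specific):

```c
#include <stdio.h>
#include <stdlib.h>

static void takes_pointer(int *p)
{
    /* The array "decayed" to a pointer at the call site: the length is gone. */
    printf("in callee:    %zu\n", sizeof p);       /* size of an int*, e.g. 8 */
}

int main(void)
{
    int fixed[16];
    printf("fixed array:  %zu\n", sizeof fixed);   /* 16 * sizeof(int), e.g. 64 */
    takes_pointer(fixed);

    int *heap = malloc(16 * sizeof *heap);
    printf("heap pointer: %zu\n", sizeof heap);    /* just the pointer size again */
    free(heap);
    return 0;
}
```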
I hate the term "high level assembly" when applied to C, since it frames C as a superset of assembly, which is obviously not true.
Actually we often give up features and specialize with high level languages. That's why there are things easier to do in Ruby than in C++ and vice versa.
They are all (usually much more convenient) subsets of assembly.
> Broadpwn is a fully remote attack against Broadcom’s BCM43xx family of WiFi chipsets, which allows for code execution on the main application processor in both Android and iOS.
But it doesn't go into any details on how this privilege escalation actually works for iOS, and more specifically on the claim that it doesn't require additional exploits. Can anyone explain this in more detail? If this actually allows code execution on the iOS application processor, that means we have a jailbreak, right?
The block diagram shows a PCIe connection to the application processor, which enables DMA. Most modern systems have an IOMMU to prevent the peripheral from DMAing to memory areas not specifically reserved for it, but given that (certainly Android) systems run oldschool kernels hacked together by the last kind of crowd you want working on them, it's probably not enabled or set up correctly.
The other, more obvious privilege escalation is that there is still a kernel driver on the application processor talking to the chipset. There is per se no reason to distrust data coming from the chipset, so these often aren't written as defensively as they should be and could contain trivially exploitable assumptions about what the chipset will send and do.
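As a sketch of what "written defensively" means here, the host driver should treat every length the chip reports as untrusted. The field names and layout below are invented for illustration and not taken from any real Broadcom driver:

```c
#include <stddef.h>
#include <stdint.h>
#include <string.h>

#define EVT_BUF_MAX 256

/* Event descriptor as the chip hands it to the host (layout invented here). */
struct fw_event {
    uint16_t type;
    uint16_t payload_len;   /* claimed by the firmware, therefore untrusted */
    uint8_t  payload[];
};

/* Returns 0 on success, -1 if the firmware's claimed length exceeds either
 * what was actually received or what the host-side buffer can hold. */
int copy_event_payload(uint8_t dst[EVT_BUF_MAX],
                       const struct fw_event *evt, size_t received_len)
{
    if (received_len < sizeof *evt)
        return -1;
    if (evt->payload_len > received_len - sizeof *evt)
        return -1;          /* chip claims more bytes than it actually sent */
    if (evt->payload_len > EVT_BUF_MAX)
        return -1;          /* would overflow the host-side buffer */
    memcpy(dst, evt->payload, evt->payload_len);
    return 0;
}
```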
> Most modern systems have an IOMMU to prevent the peripheral from DMAing to memory areas not specifically reserved for it, but given that (certainly Android) systems run oldschool kernels hacked together by the last kind of crowd you want working on them, it's probably not enabled or set up correctly.
I'm not sure it's fair to assume the iOS IOMMU isn't set up properly just because that's the case on many (most?) Android phones. According to the author, most Android phones don't even have KASLR, which iOS has had since iOS 6. I would assume the IOMMU exists and is working properly unless someone has evidence otherwise (a quick Google shows very little information on iOS + IOMMU). If a DMA attack is indeed successful on iOS devices, I think that would be substantial enough to write about.
> The other more obvious privilege escalation is that there is still a kernel driver on the application processor talking to the chipset.
I would consider that a separate exploit--but even then you still need a KASLR bypass (another exploit?) at the very least to gain control.
> so these often aren't written as defensive as they should be
On the contrary, the market rate for an iOS jailbreak chain is upwards of $1 million USD, so I'd be surprised if a single exploit gives you full system control.
I didn't want to suggest that iOS is insecure because Android systems are. For Android we know most of them are hopeless, on iOS it's security by obscurity all the way with just a generally good "track record".
Well, they patched something. Maybe they just patched the firmware image that is loaded onto the chipset on boot and there was no privilege escalation onto the iOS application processor. But if there was, the obscurity means criminals can easily look at the patch to see what it was and exploit that while the public knows nothing.
Right, I'm guessing they just patched the actual wifi chip vulnerability. That's why I would like someone (preferably the author) to disclose if there is indeed a privilege escalation or not and to provide more details if there is.
The leading Android smartphone producer Samsung had, on at least two of their flagship phones, a world-readable/writable device node that allowed full access to the entire system memory, including the entirety of the kernel, deliberately:
They needed it to get the camera working. This is not an exaggeration, the people they have writing software for their phones will take every shortcut they can find or imagine to deliver another bullet point on the feature list. Security be damned.
They only hijacked the radio chip, they didn't escalate to the main processor.
One way of attacking would be to intercept someone's internet traffic, and redirect them to a different site (ie, instead of going to Google, you go to get-hacked which looks just like Google).
> Broadpwn is a fully remote attack against Broadcom’s BCM43xx family of WiFi chipsets, which allows for code execution on the main application processor in both Android and iOS.
This implies they have code execution on the application processor just from broadpwn (and not additional safari/ios exploits). Hijacking internet traffic is indeed serious but tech blog sites are already picking up on this and blowing it up. Example: https://9to5mac.com/2017/07/20/broadpwn-wifi-vulnerability-i...
> PSA: Update to iOS 10.3.3 to fix serious wifi vulnerability allowing attacker complete control
Probably because a device on the system itself has tremendous privileges; aside from interacting with non-hardened kernel code (drivers generally trust the device not to be malicious), devices generally have DMA access, can impersonate the CPU when speaking to other devices on PCI, and can generally cause all kinds of havoc.
Assuming one has additional browser vulnerabilities, sandbox escapes and privilege escalation bugs. "Broadpwn" doesn't exploit main processor __anything__ by itself.
Does the broadcom processor normally have access to main memory? If so, I would have to consider that a complete attack absent any mitigation technology.
Fortunately, this is being addressed in software updates.
Unfortunately, people who own older devices are left with the vulnerability forever. The iPhone 4S alone sold ~60 million units (according to Wikipedia) and did not (and most likely will not) receive any updates.
Out of curiosity, what fraction of Android OEMs push these security updates promptly (or equivalently what fraction of Android phones receive these kind of updates regularly)?
To me, the takeaway from this is that unless you are using a "flagship" device, or one sold directly by Google, you're probably not getting updates in a timely manner.
Note that not all vulnerabilities are/can be patched by LineageOS, regardless of what the security patch level claims. Your device maintainer needs to actively merge patches into the kernel/device (see [0], note that this list relies on maintainers to update it). In addition, binary blob firmware needs to be patched by the manufacturer (e.g. Broadcom wi-fi exploits), which won't happen for devices that are out of support.
My five-year-old Samsung S3 for Verizon stopped receiving updates less than 2 years after its release. The bootloader is locked tight, so I am unable to install any custom ROMs such as Lineage OS.
My phone screen was eaten alive by fungi last week, so I had a look at the field to pick a new device. Discovered Lineage OS, super keen. Unfortunately, its device support is crap.
My TV came with no on-screen menus but a tablet you use to interact with most features and settings: it's on the June... 2016 patch set. It was over half a year out of date before I took it out of the box.
So your considered recommendation is for 60% of recently bought device owners globally to all replace their phones, tablets and laptops with devices containing Qualcomm chips, immediately.
Well, on servers/laptops I was always happy to have an Intel chip.
I once bought a USB stick with an Intel chip, which was a complete chunk of garbage.
The BCM chips were "mostly" stable, so I didn't have too many problems with them; some chips had problems under Linux, but besides that they were OK.
I've been hearing people complain about the seriousness of this attack vector for years. I'd be surprised if there weren't intelligence agencies that have utilized it already.
I worked with both QCA and Broadcom wireless, and can confirm. Generally a version of the driver is compatible with only a specific version of the firmware; the system manufacturer gets the driver source and a firmware binary as a package.
How long until someone unleashes this? There are going to be millions of vulnerable Android phones for at least a couple of years to come. Surely it will happen.
10.3.3? "This update requires a Wi-Fi network connection to download." Frustrating. I read this update may be 80-100 MB. Apple, please let me use my mobile data as I see fit. (And a security patch is certainly a worthy use!)
If your wifi chipset is known to be compromised, and there is a remote exploit available, as in this case, then merely connecting with wifi to an evil hotspot is enough to compromise a device.
Once a device's been compromised, crypto signatures on update files won't protect you, as the signature checking itself can no longer be trusted.
Proprietary drivers, firmware blobs and ASICs are a national security threat. Without open code reviews, auditing and functional verification, it's impossible to trust that a given software-hardware stack has a minimum of exploitable bugs and/or backdoors. This may require some sort of confidentiality rubric, but there's no shortcut around this vital need.
How could it possibly be detrimental for Broadcom to have free software drivers?
This article is a poignant example that it is detrimental for them to continue to keep their drivers proprietary.