Poor security hygiene is by no means unique to the firmware updates for Asus motherboards. You can find bad practices in all sorts of embedded systems' firmware updates. Manual downloads of router firmware are an excellent example, and that includes third-party OSS firmware such as DD-WRT. The Obihai ATAs will auto-update over HTTP, although I have not checked whether they do any sort of code signing. I have seen OSS live media distributed without PGP signatures, or even HTTPS+checksums. There are plenty of other examples out there.
In the case of routers, things are beginning to change because the FCC is requiring that manufacturers prevent users from modifying radio parameters to the FCC's satisfaction, and the easiest way to do that is to prevent users from using OSS firmware:
In the case of Linksys routers, the router firmware appears to also auto-update and until recent firmware versions, it lacked verification. I do not know if it auto-updates over HTTP. If it does, the ones running older firmware would definitely be vulnerable to the same kind of attack as the Asus motherboards.
I recently purchased a Linksys EA8500-RB to use as an access point and wanted to flash OSS firmware that I built myself, for a reasonable level of confidence in its trustworthiness. It turned out that DD-WRT is the only third-party project that supports it at this time. There is no documentation on how to get the precise sources used by the ddwrt developer to build the images he distributes, and those downloading them are vulnerable to MITM attacks due to the absence of HTTPS+checksums and/or PGP signatures:
The DD-WRT project does have a Subversion repository that could be used, but anyone doing a checkout is vulnerable to a MITM attack due to the absence of HTTPS. A mirror is available on GitHub, although there is no assurance that whatever is replicating the repository from Subversion to Git is not itself vulnerable to a MITM attack. Furthermore, the build instructions for the image are missing, and while generic instructions exist, they are incomplete. They also specify the use of a binary cross-compiler toolchain, which similarly has no obvious source code and no protection against MITM attacks.
I built my own toolchain with Gentoo's crossdev, but the incomplete instructions mean I have to figure out how to use a custom toolchain, the dd-wrt config parameters, the kernel config parameters, how to go from a build to a factory to ddwrt image, etcetera. It is a huge pain, but it is one that I must endure if I want an access point running OSS firmware that I built myself. Building it myself gives me a high level of assurance that the binaries correspond to the source code and that the source code can be audited by either myself or people in the community.
It really should not be that difficult to get trustworthy firmware and Asus' goof is just the tip of the iceberg.
> In the case of routers, things are beginning to change because the FCC is requiring that manufacturers prevent users from modifying radio parameters to the FCC's satisfaction, and the easiest way to do that is to prevent users from using OSS firmware
The flip-side is that this isn't the sort of security most of us want, and the fact that router firmware is "insecure" from this perspective is what enables things like DD-WRT to exist in the first place. See also: iOS jailbreaking, Android rooting, console homebrew, etc.
> The flip-side is that this isn't the sort of security most of us want, and the fact that router firmware is "insecure" from this perspective is what enables things like DD-WRT to exist in the first place. See also: iOS jailbreaking, Android rooting, console homebrew, etc.
There is a difference between making updates available in a way that enables custom firmware and doing them in a way that permits MITM attacks. Locking down automatic downloads is a good thing. Locking down manual updates is not.
Perhaps it was just a case of misunderstanding, but your sentence read like you were implying that preventing users from updating the firmware was a good thing.
They did accomplish a good thing (preventing MITM attacks on firmware updates) as a side effect of doing something stupid (preventing users from flashing OSS). The right way to prevent MITM attacks would have been HTTPS plus PGP signatures, which users are expected to verify when doing manual updates and which auto-update mechanisms verify automatically. Sadly, what was done was motivated by reasons that have nothing to do with security and, consequently, is making equipment less secure.
> See also: iOS jailbreaking, Android rooting, console homebrew, etc.
Let's go back a step: Why do all of those things exist?
It seems like it would be fairly easy. Use ARM TrustZone or Intel TXT trusted environments to host a non-writable firmware that, using a key held in one-time-programmable storage, verifies that the contents of the boot memory are correctly signed by the key. If they are, boot. If not, don't (copy in and verify a last-known-good backup image or something).
If the manufacturer wants to create an update, they take the code to the CTO's office safe, and compile and sign the image with the air-gapped machine that holds the private key.
If the private key is not leaked, jailbreaking is impossible. Doesn't seem that hard.
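The verify-then-boot flow described above can be sketched in a few lines. This is a toy illustration using textbook RSA with a deliberately tiny, hypothetical key (p=61, q=53) so the arithmetic is visible; a real implementation would use a proper signature scheme (RSA-PSS, ECDSA) from a vetted library:

```python
import hashlib

# Toy textbook-RSA key pair (p=61, q=53) -- hypothetical and insecure,
# used only to illustrate the control flow.  On a real device (N, E)
# would live in one-time-programmable storage; D never leaves the
# vendor's air-gapped signing machine.
N, E = 3233, 17   # public modulus and exponent, burned into the device
D = 2753          # private exponent, kept in the CTO's office safe

def digest(image: bytes) -> int:
    # Hash the image, reduced into the toy key's range.
    return int.from_bytes(hashlib.sha256(image).digest(), "big") % N

def sign(image: bytes) -> int:
    # Done offline by the vendor when preparing an update.
    return pow(digest(image), D, N)

def verify(image: bytes, sig: int) -> bool:
    # Done by the immutable boot firmware on every boot.
    return pow(sig, E, N) == digest(image)

def boot(image: bytes, sig: int, backup: bytes, backup_sig: int) -> bytes:
    # Boot only a correctly signed image; otherwise fall back to a
    # last-known-good backup copy, as described above.
    if verify(image, sig):
        return image
    if verify(backup, backup_sig):
        return backup
    raise RuntimeError("no bootable image")
```

The point of the sketch is how little machinery the boot-time check needs: a hash, one modular exponentiation, and a comparison.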
I am just a junior dev with minimal knowledge of cryptography. This cannot possibly be a unique or new idea. It's literally the purpose of those modules and the purpose of code signing. So what are the business reasons why it isn't done this way? Some ideas:
0. They didn't think of it, and I just gave them the idea. Unlikely.
1. Resources required to implement this (hardware read-only keystore, crypto primitives in the bootloader, reboot scan time, backup boot image storage space plus incoming image storage space, etc) are too expensive.
2. The possibility of losing the key and having devices they mathematically can't modify without a complete recall and replacement is too terrifying.
3. They don't care, relative to the effort to implement it. They talk and litigate like they care, so why doesn't this message get carried down to the new product department?
4. Decision makers don't understand the difference between the "security" obfuscation measures they're being sold right now and the actual, mathematically secure models proposed to them.
5. They are not competent enough to actually build this. They have some pretty smart people, and accomplish other impressive projects, so this seems unlikely.
6. There's a flaw in my scheme that makes it no better than existing methods that can be jailbroken.
I've been trying to convince <Kong>, the owner/(brother of the owner) of that desire.de site (btw, not the official repo), to implement HTTPS and caching using Let's Encrypt and Cloudflare and not just rely on signed binaries, but he's insistent that his method of just signing the binaries is sufficiently secure.
Maybe he'll change his mind if a sufficient number of people pester him about it.
EDIT: On a related note, I've been trying to get the ddwrt guys to improve their HTTPS setup (ciphers etc.) without much success. To me, testing with testssl.sh and fixing the errors that pop up is easy and not that much work.
Transferring signed packages over TLS only prevents the attacker from observing which particular packages are being updated, and that’s assuming the padding alone is sufficient to obscure identification by size.
Otherwise signing packages is actually preferred, because you can do it offline, so that hacking the server is not enough to push malicious code.
Not exactly true. If you, as a malicious third party, can MITM the HTTP connection for updates, you can push an older signed package with known vulnerabilities.
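Plain signature checking alone does not stop this rollback attack. A common mitigation is to put a monotonically increasing version number inside the signed data, so the device can refuse downgrades. A minimal sketch, using a hypothetical textbook-RSA toy key (p=61, q=53) for illustration only:

```python
import hashlib
import struct

# Toy textbook-RSA key -- hypothetical and insecure, used only to
# illustrate the anti-rollback check.
N, E = 3233, 17   # public half, held by the device
D = 2753          # private half, held by the vendor

def _digest(data: bytes) -> int:
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % N

def sign_manifest(version, payload):
    # The vendor signs the version and payload together, so the version
    # number cannot be stripped or altered without breaking the signature.
    manifest = struct.pack(">I", version) + payload
    return manifest, pow(_digest(manifest), D, N)

def accept_update(manifest, sig, installed_version):
    if pow(sig, E, N) != _digest(manifest):
        return False                       # bad or forged signature
    (version,) = struct.unpack(">I", manifest[:4])
    return version > installed_version     # reject signed-but-old packages
```

An attacker replaying last year's validly signed image now fails the version comparison, not the signature check.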
Is it really that hard? Assuming you generate a public/private key pair, there are OpenSSL's RSA_sign and RSA_verify, whose interfaces are pretty simple, clear, and well documented. There's a difference between "I'm going to write a new way to do asymmetric encryption", which is really hard, and "I'm going to use RSA to verify/sign a binary", which is really easy. I'm seeing a tendency to say that crypto is hard for anything that has anything to do with crypto, which is not the case. While erring on the side of caution is commendable, let's not push vigilance to the point where taking the hash of a file is considered hard crypto to get right.
You sign at the source with a private key, and you verify on the target with the public key. The trick is that only someone with the private key can create a signature that the public key can verify.
The other problem is many people won't have pre-existing copies of the required public key available. If your attacker is in a position to MitM your download of a signed binary, they're probably also in a position to MitM your retrieval of the public key. SSL/TLS certainly helps there (at least the attacker then also needs to be capable of acquiring root-CA-signed TLS certs for the download site and any readily available PGP key sites. It won't slow the NSA down much - but it will help against the guy with the WiFi Pineapple in your local Starbucks…)
(Though with my overly-cynical hat on, I now just suspect you've only moved the problem to the previous update's authentication - and recursively back to the initial download. How do you protect against the initial download being MitMed and having an attacker's public key inserted - this is functionally the same as HSTS - if you can MitM the first visit you win...)
You need to trust something at some point, be it the TLS session and the server you’re talking with, or a SHA checksum you verify with a friend (or using PGP’s WoT), and further still the process(es) and person(s) responsible for actually signing the releases.
As for “moving the problem,” it is worth it, because it’s easier to verify the origin of the software once than for every update. If there’s a new vulnerability in TLS, this will only affect new installations. Verifying (& signing) packages offline is much more anti-fragile.
The parent explained they might not trust verification on the target system. This might not necessarily be because the target is assumed malicious, but it may only have e.g. md5sum and not sha1sum available etc. (I'm obviously no expert... the hashing examples are just examples here)
I intended to suggest you could download it on a separate system which you do trust and verify it there. Then transfer it to the target system.
He's not totally wrong. If the package is signed and the public key is trusted (the bigger 'if' actually) and the package signature passes - then it doesn't matter how it was transported. Even email is fine.
With that said, security needs to be in layers and defense in depth is critical, especially for this type of core infrastructure.
He should upgrade all infrastructure points for better security where feasible.
There's also the political reason not to use Cloudflare ... we're quickly moving to a darker Internet where traffic goes into networks like Cloudflare and nobody knows what happens inside the black box. I wonder if that's where the resistance lies.
The problem with that is that, if you as a malicious third party can MITM the HTTP connection for updates, you can push an older signed package with known vulnerabilities.
A signature is not a hash. As long as you know the right public key (which only needs to be transferred once), you can verify the legitimacy of the signature regardless of how it was transmitted.
My home network has two of them...and a WRT54G with dd-wrt on the Xbox 360. Buffalo has been around at least since the early 1990s, when they sold printer buffers.
Very nice find. What are the business unit motivations behind critical suppliers like ASUS repeatedly violating customer trust in this manner? At what point in the management chain is the decision reached to sacrifice reputation for - whatever cost savings there are from not implementing TLS/blob signing?
edit: This is not rhetorical. Actually curious if someone on HN familiar with this class of companies (ASUS is not unique among OEMs) can educate.
I find there's a pervasive hardware culture that's at odds with both software and security cultures.
Hardware culture involves designing it once, testing it once, setting up the supply chain and production line once, and from then on it's just quality control and marketing: totally a fire-and-forget weapon.
That means that in a hardware-dominated organization, where you sell hardware, revenue is in terms of units sold. Anything else is fixed overhead which detracts from the R. Everyone who did all those once-s above has moved on to other projects, or they were short-term contractors anyway.
That culture comes from the Darwinian selection of the marketplace.
If you make chips, after you tape out, the design organization must absolutely turn their focus to creating the chip that will obsolete it or someone else will eat your lunch. In a hardware organization, long attention span is a liability, not an asset.
With software, the road to success is incrementally increasing your value to the customer with each successive release. You are in it for the long game, and if you do not have the attention span to achieve a long-term vision, coupled with the ability to deliver incrementally on that vision, you will lose.
Having worked on the software side of a predominantly hardware company, I can tell you first hand how hard it is to get someone whose background is physical chemistry to grok anything to do with user-level software. They have been selected for a short attention span, and are good at their job because of the short attention span. This is at odds with the needs of software product management.
Considering the number of bugs that continue to show up in every new hardware iteration Intel has pushed since Haswell, I'm rather surprised they haven't lengthened their cycle. I have a Haswell i5 in both my desktop and my 2014 XPS 13, and I shudder at the thought of upgrading (which thankfully I have no reason to do; like you said, advances are much slower these days anyway).
Vehicle manufacturers also have this same mindset issue with their in-vehicle entertainment systems. For them things are either a "recall" or they don't exist, they have no concept of software updates or how to deliver them.
Yep. I find it baffling that people in the tech industry (who should know better) are enthusiastic about vehicle infotainment systems, Apple CarPlay, Android Auto, etc.
Keep it simple. Power, a 3.5mm audio connector, and a windshield suction mount. Something they can't screw up too badly. I can easily buy a new smartphone every 2 years, but a car I'm going to hold on to for more like 10. Why would I want to be stuck with a 5-generation-old navigation/music player system when the car is still fine?
> Why would I want to be stuck with a 5-generation-old navigation/music player system when the car is still fine?
Why would you be? Isn't this a solved problem?
When you want to upgrade your entertainment system, you head to somewhere like Crutchfield or Sonic or Best Buy and buy a new one to plug in. In many cars, it takes less time to install a new car infotainment unit than it takes to buy a new iPhone from a carrier store. (It's often cheaper too).
For people afraid of wires, bored teens at Best Buy will install it all for you for an extra $70.
I think this was true for a while in the mid 2000s when radio units were more or less commoditized, but these days the big center console touch screens are deeply integrated with the car. Even settings like engine timings (sport/eco modes), suspension, steering feedback, etc. are a page or two away from the FM radio. Some have HVAC on the same system. There's also integration with steering wheel buttons, and sometimes the instrument cluster is actually a second screen rendered by the radio unit and displays the radio station or driving directions, etc. The FM radio and GPS may be part of the same transceiver package as the GSM "connected car" radio, and the Bluetooth and OnStar/automatic 911 dialing systems may share the same microphone, all running through the radio unit.
You can't just replace it and get identical/better functionality.
> I think this was true for a while in the mid 2000s when radio units were more or less commoditized, but these days the big center console touch screens are deeply integrated with the car
There are still dozens of brand-new cars that sell without a locked-down center console.
For example, you could buy a 100% electric brand-new 2016 Nissan Leaf. It ships today with an infotainment center you can "just replace and get identical/better functionality" anytime you'd like. Despite being upgradable, it still supports many modern features (such as GPS, Apple CarPlay, a rear-view backup camera, etc.)
At some point, people are making a choice. If you don't want to be stuck with an unmaintained computer in your car, buy any of the dozens of brand-new modern vehicles that let you freely upgrade whenever you like.
You're not wrong, but most people (even the tech-minded) choose their new car by something other than the center console features; there are many more important criteria to consider. In the 90s many people ended up with cassette players in their car when they would have preferred CD players.
On my car I had to open up the radio and modify it by hand to accept aux input. Still, I wouldn't have chosen a different car. The other parts of it are far more important, even if it does irk me that the center console isn't DIN compatible.
What cars are we talking about? Modern cars integrate their entertainment system with a bunch of car-specific features, so it's no longer possible to swap "just the player" unit, like it was possible earlier.
I'm using a Toyota Yaris as an example. All recent models (including the current 2016 model) can be swapped out easily.
With Electric cars / Hybrids, I can see issues since they integrate environmental controls and other stuff in to their radios. I suspect as electric cars become more common, entertainment companies will start building units with support for those features.
Interesting... Wouldn't think anybody still does it. It's already impossible to do in 5th-gen Camaros, for example (2010-2015). I wouldn't be surprised that other Chevy cars are the same way.
> It's already impossible to do in 5th-gen Camaros, for example (2010-2015)
Is that actually true?
I'm no expert, but Crutchfield claims a 2010-2015+ Camaro can take almost any new infotainment box you'd like from their site. (Both Single or Dual DIN).
They claim you'll also retain all OnStar, audible safety alerts, and climate control functionality with your upgrade, regardless of the upgrade you choose.
CarPlay and Android Auto are solving this problem by running all the logic on the phone. Android Auto is literally an H.264 stream of content generated on the phone and a reverse stream of input events from the car display to the phone. All the updating will happen on the phone.
> Why would I want to be stuck with a 5-generation-old navigation/music player system when the car is still fine?
This is EXACTLY why CarPlay and Android Auto make us enthusiastic - the user interface is run and rendered on your phone, not the infotainment unit. The unit just shows a video stream of the UI and sends back keystrokes, GPS, and other data. Hence updating functionality and applications is in the domain of your smartphone, not the car or head-unit manufacturer.
It's certainly better than car specific apps, but I have a hard time believing Android and iOS will be the platforms, and stay similar enough to maintain backwards compatibility, near the end of life of a vehicle rolling off the line today.
Actually, that's not quite true. I know of at least one manufacturer whose internal commitment is 15 years. HOWEVER, the problem is all the third party interfaces that these systems increasingly leverage.
I can pretty much guarantee that Google Maps is not going to be the same 15 years down the road, and I very much doubt Google has the interest in supporting something for that period of time. So somewhere down the road, all these cars from the early stages of connected cars will end up having completely unusable systems.
And that's why Android Auto (and possibly CarPlay) are just dumb streams from the device to the car infotainment system, and input streams back. The car doesn't need to know what's on screen, and the phone doesn't need to know the capabilities of the infotainment system.
My car getting hacked could potentially cause loss of life - my router or laptop can't so easily. However, some security can be had by enforcing physical-only updates. My router requires updates by ethernet cable (physical) so I would prefer my car to be the same - even if I have to bring it into a certified shop.
Or, honestly, how to do software. I can crash my Subaru's audio system just by handing it a sufficiently large MP3 file to play. Let's not even go to how bad Bluetooth is. And they will never, ever fix it.
I think in many cases there's no conscious decision not to implement TLS or code signing. It could just be that no one who cares enough about security is in a position to drive that change. There are many organizations that quite simply lack any kind of security culture.
My AC66U runs the Linux 2.6.22.19 kernel, which has a ton [0] of vulnerabilities in it. Hopefully they backport fixes for vulnerabilities without updating the kernel version, but I doubt it. I would never trust this or any other consumer piece of hardware as a border device considering the sad state they are all in. Yet, millions of homes have this or worse sitting as their only gatekeeper into their networks.
> Yet, millions of homes have this or worse sitting as their only gatekeeper into their networks.
Which is why computer OSs should start treating these as the potential hostile devices they are. None of this "trusted network" nonsense, no unencrypted or unauthenticated connections between devices on the same LAN, etc.
Now that IPv6 deployment is getting up there it'd be really nice to see increased use of IPSec. No need to rely on the network that I'm on or the potential that some actor has attempted to MITM a TLS connection (because a lot of people are just going to ignore TLS warnings, unfortunately).
The Asus version of the 2.6.22.19 Linux kernel has many, many backports, so a simple vuln search for vanilla Linux 2.6.22.19 will not yield pertinent results.
Fine, but many router vendors do not push updates for deployed routers. The majority of ordinary users barely know how to plug in an ethernet cable, let alone download and update firmware.
>It could just be that no one who cares enough about security is in a position to drive that change
Bingo, I would also add that it isn't even a matter of caring. I suspect some of these people don't even know that they don't know. Which feeds exactly into your not caring statement.
"Never attribute to malice that which is adequately explained by stupidity"
I will be honest, I run a small web community of about 20,000 users, so it's different than hardware/firmware updates for potentially mission-critical systems... That said, the reason I haven't implemented tighter security practices isn't so much a response to cost-benefit analysis. My users simply haven't made a lot of noise demanding stricter password requirements, identity verification, or SSL. I have a limited amount of time I can apply to the demands of the community, and 'more security' rarely if ever comes up.
Odds are that I'm not from your community, but I follow a personal policy of not visiting plain http websites. I have strict HTTPS enabled at all times, and for the few sites that I visit that exclusively serve insecure content, I visit through archive.org or google cache.
Needless to say, I can't interact with these sites through proxies. It could also be the case that those who care about their security and privacy simply left your community or limited their presence to just being spectators.
Parent's in a slightly different position, but I'd expect a lot of similar things can be chalked up to the prevalence of technical consultants working for non-technical, business client management.
Without authority, you can say "This is a good idea and you're taking a huge security risk without fixing it" as many times as you want and still be told "No new feature, not a development priority."
You can do it in certificate-only mode. I think the DNS challenge is even enabled now, so you don't need to let it touch your running server at all. Of course it's a bit of admin every ~90 days, but 4 times a year for free SSL isn't bad.
You're missing the point. The OP isn't claiming that Lets Encrypt is impossible to configure, so telling them that there are ways to make it work is a bit pointless. The key is that Lets Encrypt can be very difficult to configure and get things working. In a way, the numerous methods available to set up LE add to the confusion, you can search around and find lots of conflicting 'best' ways to do it.
Just for clarification those are two distinct issues.
One: the LE client's apache magic fix up code can fail to handle fairly standard configs and mess things up.
I addressed this directly, you don't need to use the apache magic configuration.
Two: ACME offers numerous ways to convince it that you own a domain name.
My intent was to follow this up by saying you don't even need to be running an ACME client/mess with your apache config at all by using the dns-01 challenge variant over the http-01 variant.
> The key is that Lets Encrypt can be very difficult to configure and get things working.
This is definitely a problem, it would be great to see a divorce between the apache config munger and the certificate fetcher. SSL available to the uninitiated is great but if they're on the command line then I'd like to believe they're capable of pointing their server at a specific location for certificates on disk.
Only if your DNS system supports it, and you host it yourself.
If you have DNS managed by your domain registrar, this helps you very little.
So, for the average small site that’s neither made with a kit where the hoster has a one-click SSL solution, nor large enough to have their own nameservers, this is a real issue.
And yet, these sites are the target demographic for LE.
It's hard to make a case for long-term support of commodity hardware sold into the consumer market, because the shiniest thing at the lowest price tends to drive purchases. It's as true for laptops as it is for Android phones.
BestBuy doesn't care if it stocks ASUS or not. It cares about sales and margins. If there's an extra dollar putting Gateway on the shelf instead of ASUS they will. And their customers won't care. "BIOS updates with TLS!" stickers aren't going to improve sales.
Buying a laptop creates a consumer not a customer relationship. I want to pay the least, the manufacturer wants to deliver the least. A few years out, shiny-low-cost will drive my next purchase more than brand loyalty.
Except if that were the case, it's even cheaper to not develop a live update capability at all.
They went through the process of specifying and developing an automated utility that downloads files, parses manifests, and then acts accordingly to install BIOS or other updates, tested it, and bundled it with their retail system build, and after all that effort didn't take the one tiny step to sign their files or at least put a $10 TLS certificate on their servers?
To me the more likely answer is that (as observed) this dates back to XP days or even earlier, when securing this sort of stuff was not front-of-mind. Back then, finding downloadable software served over https was rare. ftp or http was far more likely. And I think ASUS just never thought about updating their tools. They worked, so never really got a thorough second look at any point up until today.
My observation is that consumer grade laptops have upgradable BIOS to handle bugs, but there's only a brief lifecycle in which the manufacturer performs updates of about a year to coincide with a typical [US] manufacturer's warranty. My suspicion is that providing the upgrade service is cheaper than the combination of resulting warranty claims and potential lawsuits which could result from not fixing the BIOS automatically.
I don't deny the possibility that adding TLS to the update mechanisms might make the world better. On the other hand, I haven't seen a business case that shows a clear benefit for companies like ASUS to change all the moving parts in their logistics chain. Admittedly, I haven't looked very hard.
Field firmware updates for hardware exist to greatly reduce the risk that a product needs to be recalled -- a worst-case scenario with great cost for the manufacturer. To some extent they also reduce the pre-ship software verification costs -- the potential damage from a bug that slips through is greatly reduced.
Security risks are harder to quantify for the bottom line.
> To some extent it also reduces the pre-ship software verification costs -- potential damage from a bug slipped is greatly reduced.
It also encourages reducing effort spent in fixing bugs before release, which is why I absolutely loathe this "update culture": the "we can always fix it sometime later" mentality is like procrastination, and leads to barely-working products being released.
It is true that before easy field-updateability, products did ship with unfixable bugs, but I feel like it has only gotten worse from there.
I'm typing this on a ThinkPad...though it was used and runs Linux and was a Buy it Now $135 including shipping on eBay. And last time I had the new laptop fever (about six months ago) ThinkPads were all I looked at...odds are though I'll buy used again.
"youre" a small enough amount that they dont care, but other niche companies like system76 will go through the trouble of testing and white labeling a machine
I think you overestimate the actual security impact of this issue. Think about it: practically everyone who is even remotely technical is going to put a fresh copy of his favorite OS on his computer anyway. No one likes dealing with all the bloatware that is installed on those machines by default.
And the rest? They're infecting themselves by opening dubious email attachments already. For someone looking to make money through viruses, MitM attacks are just not economical.
ASUS really should fix this problem by implementing TLS, but it's just very unlikely that they will lose any amount of customers whatsoever if they don't. And as long as that's the case, nothing will change in that regard.
Don't assume that there is any decision process. Those hardware makers are famous for not getting software. Probably the intern who wrote the downloader did not know who to talk to to get the certificate purchased.
Why do you think they have an agency in and understand their decision? It seems a lot more probable that it is simple incompetence that is the underlying reason.
Most likely: The person who implemented it either did not understand the implications or did not have the knowledge or time to implement HTTPS or other types of signing. And all the other people in the company (including management) don't have any idea of how it should work. There's only a "we can do software updates at all" checkbox ticked somewhere.
Consumer laptop manufacturers which are not Apple have exactly one game to play: who has the most GHz and GB on the Best Buy shelf for the lowest price?
Other dimensions of quality (screen, keyboard, trackpad, hinge, case, battery, fans, overheating issues, preinstalled rootkits and adware, and certainly BIOS) are not relevant to their customers (who, even if they are frustrated with these things, don't necessarily know that something better exists).
So they go in whichever direction is cheapest (or, in the case of spyware, brings in the most 3rd party revenue).
It also bothers me a lot that most manufacturers provide OS-dependent tools for updating UEFI. So if you want to use something as exotic as Linux, you need to keep a bootable Windows partition / USB drive around somewhere...
I thought that too, and then was terrified by the idea of someone trusting an HTTP connection to write (parts of) a damn BIOS. That would be worse than anything.
Actually, I also don't like BIOSes that allow themselves to be flashed from within the OS for convenience. Your computer gets owned, and then you can't trust your motherboard anymore.
They do that so people with hundreds of servers do not have to spend days in the server room with thumb drives, individually booting servers to flash the BIOS. There is HUGE demand for the ability to do remote BIOS updates over a management network. Now how is the BMC supposed to know whether the network is appropriately secure before accepting those updates?
The network shouldn't ever be considered appropriately secure - always treat the network as hostile, and install the updates iff you can be sure that you have received the proper data (with appropriate signatures) over the untrusted network.
You should never trust the network; firmware updates should have to be cryptographically signed and validated against a pinned certificate to be accepted.
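A minimal sketch of that verify-then-install rule, using the third-party `cryptography` package. The vendor key, image bytes, and `accept_update` helper are all hypothetical (a real BMC would have the public key burned into its firmware, not generated on the fly):

```python
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# Throwaway keypair so the sketch is self-contained; in a real device
# only the PUBLIC half would exist on the BMC, pinned at manufacture.
vendor_key = Ed25519PrivateKey.generate()
vendor_pub = vendor_key.public_key()

def accept_update(image: bytes, signature: bytes) -> bool:
    """Install iff the signature over the image verifies against the pinned key."""
    try:
        vendor_pub.verify(signature, image)
        return True
    except InvalidSignature:
        return False

image = b"firmware v2.0 image bytes"
sig = vendor_key.sign(image)

assert accept_update(image, sig)                 # genuine update: accepted
assert not accept_update(b"tampered" + image, sig)  # modified in transit: rejected
```

With this in place it genuinely does not matter whether the management network is hostile; an attacker can only replay images the vendor actually signed.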
The only "good" answer to whether the BMC is supposed to know whether the management network is "secure" would be a song and dance like 802.1x and any authentication configured on top of that, before even allowing you to submit an image for flashing, let alone trying to verify it.
(You could argue that the existence of good signature verification on the images is often sufficient, but if you have an older BIOS that's signed but has an exploit vector, you could still make use of that if you had unfettered access other than the signature checking.)
Yes it does! Although it doesn't seem to work behind a firewall. LOL. In case direct-from-BIOS flash updates aren't silly enough, imagine having to put your server on the naked internet to boot. No thanks. Even TFTP would be better than that.
Yeah, that's what the title said to me too. Not the case for me, since I don't use any of those Windows tools from Asus (I run Linux on all of them).
The fun thing is that this is not about the UEFI BIOS binaries themselves, which normally must be signed, with the signature checked during a UEFI capsule update. The HN title is not exactly correct.
I don't remember the brand(s) exactly --- I don't think it was ASUS, however --- but I do remember, a few years ago, laptops which would automatically and silently download and install BIOS updates, and inevitably some of them would fail, leading to bricked machines.
IMHO the BIOS is not something that should ever change unless there's a very important reason to, and even then it should be on the explicit action and consent of the user, because of the risks of ending up with a completely non-working machine. UEFI is a whole new mess, but I think the same principle applies.
ASUS has a tendency to have very important reasons to update the BIOS. The last few Intel chipsets, they've had serious BIOS issues at launch that can cause tremendous headaches (random freezes, blue screen, etc.).
The last time I updated an ASUS BIOS (about a month ago), it was because the board couldn't use an M.2 NVMe SSD at the same time as the Intel SATA controller in RAID mode. Who would have thought... Even after the update, they only work in CSM mode; forget UEFI.
> Asus appears to be one of the worst OEMs we looked at, providing attackers with functionality that can only be referred to as remote code execution as a service.
This means I can basically go to a Starbucks and pwn the Asus there, right?
I have an Asus now, but installed Ubuntu first thing when I got it. If I couldn't use Ubuntu, I'd use a Mac. I see the problem as having two sides: Windows, for allowing malware to be preinstalled, and the OEM, for installing it.
Which is a good reason to always do a fresh OS reinstall before you even boot the system for the first time. That's what I've done the past couple of times I've bought a new PC: the very first boot is off a USB drive to do a clean OS install. Completely wipe the existing disk partitions too.
I'm certain that if you are really honest with yourself, you'll agree that the "malicious" practices you point out are entirely different - not even comparable - to the security negligence pointed out in the OP.
When I buy an Apple computer, I'm aware of everything you listed - and in fact, I enjoy it. I don't want to worry about a thing when I buy my computer. I'm not interested in self repairing, upgrading, or tinkering.
If I buy an Asus however, I would have had no idea that my entire system was at risk. That kind of negligence is malicious in my eyes - not creating closed off hardware.
Apple's devices tend to be more difficult to repair or upgrade yourself, yes, but this is not malicious, even if it is somewhat hostile.
Also, I should point out that while Apple's phones don't let you install unapproved software, this isn't true of the Mac, which, unlike Microsoft-approved PCs[0], lets you install alternative operating systems (you can even boot to DOS!), disable its security features, etc.
[0] I know that MS do allow OEMs to allow disabling Secure Boot, but it's not required as of Windows 10. Meanwhile, Apple's computers don't have it in the first place!
> Apple's devices tend to be more difficult to repair or upgrade yourself, yes, but this is not malicious, even if it is somewhat hostile.
If they consciously take a hostile action (and let's face it, you don't accidentally design a new screw) then yes, I'd call that malicious. If you do that repeatedly then I'd consider you evil.
> Also, I should point out that while Apple's phones don't let you install unapproved software, this isn't true of the Mac, which, unlike Microsoft-approved PCs[0], lets you install alternative operating systems (you can even boot to DOS!), disable its security features, etc.
Gatekeeper seems like a step towards it. And regardless, Microsoft isn't exactly a paragon to compare yourself against.[0]
Right. These can also be interpreted as decisions made to build slimmer devices. People buying these products would presumably know what they're getting into.
Custom screws, unless they're customized beyond the driver required to remove them, almost certainly don't qualify as a tool for making slimmer devices.
Four out of the six complaints here are about things that help reduce the size of an electronics product.
In fact, some of those decisions led me to choose an Apple product over a non-Apple one.
Locked down software ended up being a business decision for Apple's App store. As for the screws, I have no idea, but you can easily find appropriate screwdrivers online so it doesn't seem like that big of a deal to me.
I fear that, as a bonus, there is a race condition where a local attacker can replace any update with their own 'update' between download and installation.
Worst-case, they might have implemented this like this:
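A purely hypothetical sketch of that worst case (nothing here is from the actual updater): the download lands at a fixed, predictable path in a world-writable directory, and whatever sits there at install time gets used. The simulation below stands in for the network download and the privileged install step with plain file reads/writes:

```python
import os
import tempfile

def download(path: str, data: bytes) -> None:
    # Step 1: the updater fetches the vendor image to a predictable location.
    with open(path, "wb") as f:
        f.write(data)

def install(path: str) -> bytes:
    # Step 3: later, it installs whatever is at that path NOW,
    # with no signature check tying it back to step 1.
    with open(path, "rb") as f:
        return f.read()

# Predictable name in a shared, world-writable directory = race window.
staging = os.path.join(tempfile.gettempdir(), "liveupdate.bin")
download(staging, b"genuine vendor update")

# Step 2: any local user who can write to the temp dir wins the race.
with open(staging, "wb") as f:
    f.write(b"attacker payload")

assert install(staging) == b"attacker payload"
```

The fix is the same as for the network case: verify a signature over the bytes at install time, not merely trust where they came from.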
"Local" to the computer, not the network. If a 'normal' user you can create a file that will get run with administrator privileges, that's equivalent to a UNIX root exploit.
Local to the network. No administrator permissions required, you just have to use your computer in a public place or have a hostile actor on the network (e.g. hotel, cafe)
Having a domain that can serve HTTPS is all well and good, but it's no use if the client doesn't actually bother checking whether the cert is valid or not. The phenomenon is so prevalent that Google needed to add a warning to developers in Android's SSLSocket documentation.
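For a concrete picture of what "actually bother checking" means, here is the distinction in Python's stdlib `ssl` module (a sketch, not tied to any particular updater): the secure default context verifies both the certificate chain and the hostname, and the broken clients are the ones that explicitly switch those checks off.

```python
import ssl

# Secure default: full chain verification plus hostname matching.
ctx = ssl.create_default_context()
assert ctx.check_hostname is True
assert ctx.verify_mode == ssl.CERT_REQUIRED

# The anti-pattern the Android docs warn about, transliterated to Python:
# a context that will happily accept any certificate from anyone.
broken = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
broken.check_hostname = False          # must be disabled first...
broken.verify_mode = ssl.CERT_NONE     # ...never do this outside a test rig
```

An HTTPS download through the `broken` context is barely better than plain HTTP: the traffic is encrypted, but to whoever is sitting in the middle.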
Not really. I was thinking along the lines of a localized instance so it would be more akin to things being captured at the perimeter and getting converted there.
So an outbound request to an insecure HTTP URL gets rewritten to pull from an HTTPS URL instead, etc.
Yup. The standard procedure that applies to pretty much any prebuilt computer. If you need to run the stock OS, reformat with the OS vendor's image ASAP, and make sure to only install the minimal driver packages from the hw vendor afterwards.
Thankfully it's trivial to uninstall. My UX390UA didn't have it installed (can't remember if I manually uninstalled it before), and it took two seconds to uninstall from the second one.
LiveUpdate runs on any OS the purchaser might install, or just Windows?
I have never had to perform a BIOS update with any off-the-shelf computer from ASUS.
I have looked at the BIOS updates on offer. I cannot recall that they were always hosted on a server using HTTPS, or that MD5 checksums were provided.
But my understanding was that only users that knew what they were doing applied BIOS updates after purchase.
Is flashing the BIOS really common with ordinary users?
It's a major change and not something I would want to be done automatically by a third party.
This auto-updating craze is becoming a bit farcical.
Running programs that let third parties open ports and run downloaded executables.
But the concern is whether someone can MITM or tamper with the download?
If "unauthorized access" to computer systems without actual damage to said systems were not outlawed, there might be people who would make harmless exploits against such irresponsible vendors and publicly shame them (in the eyes of the general non-techy public) into action.
Alas, it's illegal to "exploit" but not illegal for system vendors to enable exploits through negligence. Thus, because of how the law is written, vendors have little incentive to care about security and well-meaning white hats are disincentivized from demonstrating the vendors' irresponsibility.
Out of curiosity, how would an attacker exploit this to run code of their choosing?
Would they redirect the DNS requests to their own server instead of the LiveUpdate one? If so, how?
Also, what would be a better design? Hard-coding IP addresses to prevent the DNS trick? Using HTTPS and hardcoding the server's public key on every machine?
(Only asking out of curiosity, clearly... This seems like a good case study for designing things right.)
Redirecting DNS is one way; if the attacker can MITM the connection -- e.g. they control the wireless AP, or the router, or the ISP -- they can also just replace the server response with a modified image.
Hardcoding the IP is not a good idea, and it doesn't help against MITMing anyway. HTTPS with certificate pinning would be the standard way to secure the connection. Verifying the BIOS image against a certificate before installing it is also "a good idea" (i.e. pretty much mandatory); that way users can provide a binary downloaded on another computer.
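Pinning usually boils down to comparing a fingerprint of the certificate (or its public key) the server presents against one shipped with the client. A minimal sketch of just that comparison; in real code the DER bytes would come from `SSLSocket.getpeercert(binary_form=True)` on a live connection, while here they are stand-in bytes so the example is self-contained:

```python
import hashlib
import hmac

def fingerprint(der_cert: bytes) -> str:
    """SHA-256 fingerprint of a DER-encoded certificate."""
    return hashlib.sha256(der_cert).hexdigest()

# The fingerprint the client ships with (derived here from stand-in
# bytes purely so the sketch runs; a real client would hardcode the hex).
PINNED = fingerprint(b"stand-in DER bytes of the vendor's certificate")

def pin_ok(der_cert: bytes) -> bool:
    # Constant-time comparison, to be pedantic about side channels.
    return hmac.compare_digest(fingerprint(der_cert), PINNED)

assert pin_ok(b"stand-in DER bytes of the vendor's certificate")
assert not pin_ok(b"DER bytes of a MITM proxy's certificate")
```

With the pin in place, even an attacker who obtains a "valid" certificate for the update domain from some CA still can't impersonate the server.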
Even if the "last mile" is secure, unless the whole process is secure, doesn't matter if the download is secure; case in point, find a oss project that documents their build process to see if security is baked in; hint, of often it is not.
The first thing I did when I got my ASUS UX303UA was re-partition and install Arch Linux. Works flawlessly and I avoid issues with this sort of half baked bloatware. I haven't done any BIOS updates because ASUS doesn't publish detailed changelogs and I don't know if they are required or might cause problems. Excellent hardware company that probably should just stick to hardware.
Just an aside, but I have been out of the loop for a while with Windows, and I couldn't believe how ugly Windows 10 looked when I booted it. I currently float between Mac, ChromeOS and GNOME, and my favourite at the moment is still the material design look. Windows seems to be getting uglier, which is a shame because, although different, I am not sure it is functionally all that much worse these days.
While I agree this BIOS update situation is poor, is there any proof that a large number of 'common users' are affected by MITM attacks? The average Joe isn't harbouring any state secrets. Is this similar to 'Stagefright'? Can anyone give numbers on who got affected?
On the other hand, I find people are very relaxed about enabling remote desktop or TeamViewer (as it promises them access to their files anywhere) while reusing the same password.
But no problems have happened and if there are problems the market will work it out, people will just stop buying Asus, no need to have laws for network security!
At least one model of ASUS iKVM (server remote management daughter-board) I've seen has an embedded linux OS that doesn't allow changing of the admin password from the default. Doesn't use shadow passwd files either.
In the case of routers, things are beginning to change because the FCC is requiring that manufacturers prevent users from modifying radio parameters, and the easiest way to satisfy that requirement is to prevent users from installing OSS firmware:
http://hackaday.com/2016/02/26/fcc-locks-down-router-firmwar...
In the case of Linksys routers, the router firmware appears to also auto-update, and until recent firmware versions it lacked verification. I do not know if it auto-updates over HTTP. If it does, the ones running older firmware would definitely be vulnerable to the same kind of attack as the Asus motherboards.
I recently purchased a Linksys EA8500-RB to use as an access point and wanted to flash OSS firmware that I built myself, for a reasonable level of confidence in its trustworthiness. It turned out that DD-WRT is the only third-party project that supports it at this time. There is no documentation on how to get the precise sources used by the DD-WRT developer to build the images he distributes, and those downloading them are vulnerable to MITM attacks due to the absence of HTTPS+checksums and/or PGP signatures:
http://desipro.de/ddwrt/K3-AC-IPQ806X/
The DD-WRT project does have a Subversion repository that could be used, but anyone doing a checkout is vulnerable to a MITM attack due to the absence of HTTPS. A mirror is available on GitHub, although there is no assurance that whatever replicates the repository from Subversion to Git is not itself vulnerable to a MITM attack. Furthermore, the build instructions for the image are missing, and while generic instructions exist, they are incomplete. They also specify the use of a binary cross-compiler toolchain, which similarly has no obvious source code and no protection against MITM attacks.
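For what it's worth, the "HTTPS+checksums" half of this is cheap on both ends: the project publishes a SHA-256 digest over HTTPS next to the image, and downloaders compare before flashing. A sketch with stand-in bytes (these are not DD-WRT's actual files or digests):

```python
import hashlib

def sha256_hex(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

# What the project would publish alongside the image, over HTTPS.
image = b"stand-in firmware image bytes"
published_digest = sha256_hex(image)

# What a downloader checks before flashing.
assert sha256_hex(image) == published_digest
# Any tampering in transit changes the digest and fails the check.
assert sha256_hex(image + b"tampered") != published_digest
```

This only authenticates the download against the published digest, of course; it says nothing about whether the published binaries match the sources, which is the deeper problem described above.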
I built my own toolchain with Gentoo's crossdev, but the incomplete instructions mean I must figure out how to use a custom toolchain, the DD-WRT config parameters, the kernel config parameters, how to go from a build to a factory-to-ddwrt image, etcetera. It is a huge pain, but it is one that I must endure if I want an access point running OSS firmware that I built myself. Building it myself gives me a high level of assurance that the binaries correspond to the source code, and that the source code can be audited by either myself or people in the community.
It really should not be that difficult to get trustworthy firmware and Asus' goof is just the tip of the iceberg.