Automated discovery and exploitation of architectural flaws is merely the next step in the evolution of software. For the past few years we have been witness to a 'whack-a-mole' type of dynamic in the field of computational security. Exploits are published; a day later they become Metasploit modules, and a day after that anybody in the world can use them on everyone else in the world at the click of a mouse. If you are a full-time sysadmin plugged into all the advisory mechanisms you may, for a time, be able to keep your systems patched, but the machines never sleep, they never blink, they never forget, and apparently, they never die.
In the race against time, it is fair to say at this point that the machines have won. It may not be completely obvious yet, in the way that a tidal wave out at sea is only a small hump under your individual ship, but when it comes ashore, when the confluence of terrain and massive liquid power becomes manifest, then, of course, it is obvious.
What appears to be happening is a kind of terra-forming activity, a new software layer is spreading, one that has the keys to everything - our social lives, our morning cup of coffee, our cars... our nukes.
This has an end condition, of course - and that is the total loss of control over our technological infrastructure.
"It might be argued that the human race would never be foolish enough to hand over all the power to the machines. But we are suggesting neither that the human race would voluntarily turn power over to the machines nor that the machines would willfully seize power. What we do suggest is that the human race might easily permit itself to drift into a position of such dependence on the machines that it would have no practical choice but to accept all of the machines decisions. As society and the problems that face it become more and more complex and machines become more and more intelligent, people will let machines make more of their decision for them, simply because machine-made decisions will bring better result than man-made ones. Eventually a stage may be reached at which the decisions necessary to keep the system running will be so complex that human beings will be incapable of making them intelligently. At that stage the machines will be in effective control. People won't be able to just turn the machines off, because they will be so dependent on them that turning them off would amount to suicide."
Agreed. If you had a tool which could detect all of a particular class of exploits in your software one could just add it to your compiler so it would throw an error.
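To make that concrete: we already do this for narrow bug classes. A minimal sketch, assuming GCC or Clang with the relevant warning promoted to an error - a classic format-string bug becomes a build failure instead of an exploit:

```c
/* fmt.c - a classic format-string vulnerability.
 * Build with:  cc -Wformat -Werror=format-security fmt.c
 * and the compiler rejects the whole class of bug at build time.
 */
#include <stdio.h>

int main(int argc, char **argv)
{
    if (argc > 1)
        printf(argv[1]);   /* user-controlled format string: rejected by the flag above */
    return 0;
}
```

The open question is whether that scales from known narrow patterns to whole classes of architectural flaws.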
Of course this assumes that automated discovery is not very computationally intensive, which in some cases it appears to be. The search space of a program is enormous. One possible world, then, is one in which exploits can be found automatically, but discovery requires massive computational effort. This seems extremely likely to me, because exploits that don't require massive computational effort will be found and fixed quickly, eliminating the low-hanging fruit.
Thus governments with the best algorithms and the most money/powerplants/datacenters/fabs have an advantage, because they can patch their own software while developing exploits for other people's software.
The strategic questions then become:
1. How many exploits do you keep in reserve given a particular rate of discovery, and how and when do you use them?
2. How do you handle the case when you and the target are using the same software? If you start to patch it, the exploit might leak to the target. If you use the exploit before patching, the target might use it against you.
Operationally protecting exploits from spies seems hard. A government with a technical advantage might well be at a disadvantage against a less technically savvy government with a human-intelligence advantage.
To quote the Honey Badger video:
>"You do all the work for us, honey badger, and we'll just eat whatever you find, how's that? What'daya say, stupid?"
To avoid this a government might use the exploit development capability only defensively in peace time, keeping no reserve of exploits, until they have an immediate need. Of course this might weaken deterrence.
It isn't like there's "Google Chrome" and "Russian Chrome"; everyone in the world runs the same software with global distribution channels. And if the solution is "well, we'll make software distribution tied to geographic regions", how well do you think that's going to work, especially when there's a dynamic of "if you can get the Chinese Internet Explorer it will have way fewer bugs than the American one, and you can diff the two to find the bugs"?
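To illustrate that last point, here is a crude sketch of build diffing at the byte level only - real patch-diffing tools compare functions and control-flow graphs, and the file names in the usage comment are made up for illustration:

```c
/* bindiff.c - crude byte-level diff of two builds of the "same" binary.
 * Even this naive version points you at the regions that changed between builds;
 * those regions are where you start looking for the patched (or unpatched) bug.
 *
 * Usage (hypothetical file names): ./bindiff ie_cn.dll ie_us.dll
 */
#include <stdio.h>

int main(int argc, char **argv)
{
    if (argc != 3) {
        fprintf(stderr, "usage: %s <binary-a> <binary-b>\n", argv[0]);
        return 1;
    }
    FILE *a = fopen(argv[1], "rb");
    FILE *b = fopen(argv[2], "rb");
    if (!a || !b) {
        perror("fopen");
        return 1;
    }

    long off = 0, run_start = -1;
    int ca, cb;
    while ((ca = fgetc(a)) != EOF && (cb = fgetc(b)) != EOF) {
        if (ca != cb && run_start < 0)
            run_start = off;                       /* start of a differing region */
        else if (ca == cb && run_start >= 0) {
            printf("diff at offsets %ld..%ld\n", run_start, off - 1);
            run_start = -1;
        }
        off++;
    }
    if (run_start >= 0)
        printf("diff at offsets %ld..%ld (to end)\n", run_start, off - 1);

    fclose(a);
    fclose(b);
    return 0;
}
```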
1. US military hardware runs different software than Russian military hardware.
2. There are major geographic differences in the software, hardware and architecture of Industrial Control Systems. Not to mention vulnerabilities that might only exist in certain configurations which are common to the contractors building those systems.
3. Major powers are developing their own GPS satellite constellations. Some countries develop their own satellite software.
4. Most web applications are customized to the client.
5. Due to fears of hardware backdoors, it is looking like we might see a balkanization of communication hardware (internet routers, etc). Note that there are already geographic and regional differences in cell and phone communications.
6. S. Korea's legally mandated https encryption, SEED, is not used outside of S. Korea. An attack on SEED software would be very specific to that country.
You are correct though in the notion that much of the consumer OTS software is global in scope. It really depends on the vertical you are attacking.
In my view this is a result of hardware not actually being "hard".
It would be far better to have a bug that comes back after a cold boot, as part of a known starting state, than to have a mechanism for "updates" (running software) that is inaccessible to the user/programmer. A static bug can be worked around, but a moving target is harder to compensate for, particularly if it is not viewed as user-programmable (for lack of documentation, or license, etc).
To me, this demonstrates that the proper place for security is in software, at as high a level as possible for the purpose. Even secure boot is too low. Boots should be insecure, but repeatable. Turning a computer on should give full control - only then can you lock it down (whatever that means to the user...). Secure boot, if you want it, should just be a bootloader that verifies a payload, thereby protecting itself - not a BIOS-integrated feature, where the BIOS is opaque.
A feature-full BIOS would only make sense if it were immutable (and even then, there would be disadvantages), or (at the other end of the spectrum) if it were as programmable as the rest of the system - but until then, any unnecessary complexity or "features" are downright harmful.
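As a sketch of that "bootloader that verifies a payload" idea, here is a userspace toy that uses OpenSSL for the hash. A real stage would do this before jumping to the payload, and the trusted digest below is a placeholder the owner would choose:

```c
/* verify_payload.c - toy model of a verifying bootloader stage.
 * Hashes a payload file and refuses to "boot" it unless the digest matches
 * a value the user decided to trust.  Build: cc verify_payload.c -lcrypto
 */
#include <openssl/sha.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Digest the owner chose to trust (placeholder value for illustration). */
static const unsigned char trusted[SHA256_DIGEST_LENGTH] = { 0 };

int main(int argc, char **argv)
{
    if (argc != 2) {
        fprintf(stderr, "usage: %s <payload>\n", argv[0]);
        return 1;
    }
    FILE *f = fopen(argv[1], "rb");
    if (!f) { perror("fopen"); return 1; }

    SHA256_CTX ctx;
    SHA256_Init(&ctx);
    unsigned char buf[4096];
    size_t n;
    while ((n = fread(buf, 1, sizeof(buf), f)) > 0)
        SHA256_Update(&ctx, buf, n);
    fclose(f);

    unsigned char digest[SHA256_DIGEST_LENGTH];
    SHA256_Final(digest, &ctx);

    if (memcmp(digest, trusted, sizeof(digest)) != 0) {
        fprintf(stderr, "payload does not match trusted digest, refusing to boot\n");
        return 1;
    }
    printf("payload verified, handing over control\n");
    /* a real bootloader would jump into the payload here */
    return 0;
}
```

The key property is that the verifying stage only has to protect itself, rather than dragging a whole opaque BIOS into the trusted base.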
To do that, you'd also have to harden everything - including the hard drive's firmware - so you get the proper boot sector.
Not all BIOS bugs can be worked around in firmware - if the memory controller isn't initialized correctly, all processing is suspect. And if the manufacturer can't fix a BIOS, then it falls on the shoulders of the OS writers - who do not have the motivation to fix hundreds of board-specific problems. Worse yet, if I wanted to inject malware, I would pretend to be a motherboard manufacturer and submit bad kernel code to beleaguered OS people.
In my opinion it isn't a bad thing because in the future, when Microsoft disallows disabling secure boot, using BIOS vulnerabilities will be the only way to install an unsigned operating system.
What isn't a bad thing? Shitty security in BIOS chips? Instead of reformatting your disk, you have to detach the EEPROM chip that holds the BIOS from the mobo and connect it to another system to inspect it for infections / changes. I'm not sure this is even possible for most mobos, and it isn't free the way reformatting a disk is.
EDIT:
> using BIOS vulnerabilities will be the only way to install an unsigned operating system.
Then I would rather not use those systems.
Android phones are already at this level - I could run CyanogenMod, but I'd have to first run a random blob I refuse to run because there is no way to verify what that blob does. I'm screwed both ways. At these moments I remember Stallman wasn't completely crazy and wish Linux were licensed under GPLv3 so that the phone I bought wasn't tivoized.
Ha. The strange thing about Stallman is that his words of 'craziness' magically convert into words of wisdom, but there is always a delay in that process, which can run to decades. This happens _always_. Joke's on us: despite knowing about this phenomenon, we never adjust for it.
> but I'd have to first run a random blob I refuse to run because there is no way to verify what that blob does
There's always reverse-engineering... an option which I believe could be far more powerful, and one Stallman should've argued for; the ability (and right) to figure out what some software does and modify it is the fundamental key to the freedom he argues for, and while having the source code can certainly help, it's not the only possibility.
The power of RE comes from the fact that, while it's very easy to not release source code, it's nearly impossible to prevent someone from reading the binary on a general-purpose computer regardless of what the legal situation is.
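For example, assuming the Capstone disassembly library is installed, a few lines are enough to start reading arbitrary machine code - which is the point: bytes a CPU can execute are bytes a human can study:

```c
/* disasm.c - minimal disassembly with Capstone.  Build: cc disasm.c -lcapstone
 * The bytes below stand in for code read out of whatever blob you want to study.
 */
#include <capstone/capstone.h>
#include <inttypes.h>
#include <stdio.h>

int main(void)
{
    /* a small x86-64 function prologue/epilogue to inspect */
    const unsigned char code[] = { 0x55, 0x48, 0x89, 0xe5, 0x89, 0x7d, 0xfc, 0xc9, 0xc3 };

    csh handle;
    if (cs_open(CS_ARCH_X86, CS_MODE_64, &handle) != CS_ERR_OK)
        return 1;

    cs_insn *insn;
    size_t count = cs_disasm(handle, code, sizeof(code), 0x1000, 0, &insn);
    for (size_t i = 0; i < count; i++)
        printf("0x%" PRIx64 ":\t%s\t%s\n", insn[i].address, insn[i].mnemonic, insn[i].op_str);

    cs_free(insn, count);
    cs_close(&handle);
    return 0;
}
```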
I don't know what you're advocating for here. The ability to reverse engineer is explicitly required by the LGPL, and the GPL necessarily requires a minimum standard that covers the RE-ability of any covered work...
Bingo: I'm far more afraid of hardware that's actually unhackable. Down that road lies the slow death of start ups, activism, DIY programming etc.
I'm actually really happy the tablet revolution didn't pan out as predicted, since it leads to the same conclusion: computers you can't just start coding on - and without that, I would never have gotten started.
The War On General Purpose Computing continues. Far too many people are content to sacrifice their future in exchange for a few shiny beads^H^H^H^H^H"smart" devices.
As for tablets: they may not have been the revolution that some people hoped for, but the lock-in to a walled garden happened anyway with the iPhone. Apple has done more long-term damage to the computer industry than anybody else by convincing way too many software authors - who should really know better - that paying to write and publish software is sane.
Instead of fighting this when it was small, we are now faced with a future where even the hardware can work against the user who wants a true General Purpose Computer. We've already seen BIOS lockouts such as the recent thinkpad "boot guard" idiocy. It will get really bad once we start to see Intel "SGX" and the "trusted execution environment" it is intended to enable[1]. So now we get to fight at the hardware level, too.
As for using vulnerabilities to root the device - that is not a strategy to fight this, and merely cedes the fight to the people that are afraid of what it means to be "turing complete".
Unfortunately, I suspect that we are too late. Fighting this trend now requires sacrifice. Stop giving any money to any business that uses these anti-user technologies. Yes, that includes Intel and many others. Stop writing software or making embedded products that rely on these kinds of features. Yes, this might mean quitting a nice job. No, I expect instead that the people that should know better will continue supporting the enemy by buying their products. I expect they will stay on as collaborators.
I honestly admit, I'm scared. We're headed towards the world of not owning any tools and basically leasing shiny crap.
Funny thought: will this be the start of the actual professionalization of programming? I.e. you won't be allowed to operate a compiler or a general-purpose computer without proper engineering license? It used to be considered impossible, because hey, everyone can get hands on a computer and a compiler. But if current trends continue, it may soon no longer be the case.
As someone who is a novice to computer architectures, is there some consensus in the research community about what will be a good replacement for the von neumann machines we are currently running? I mean, if you think about it, if we truly want to take control, shouldn't we attempt to break free, at a ground level, from all the technologies coming from (or dominated by) corporate structures (and government agencies)?
x86 (and x86_64) and even ARM are all primarily developed by govt-influenced companies like Intel, yes? So what are the possibilities of us, all programmers and electronic engineers who want to support personal computing, getting together and developing a crowd-researched, crowd-designed, and (maybe) crowd-funded architecture built to last? Of course, writing software for that architecture could take decades or centuries (unless someone writes a perfect x86 emulation layer on that architecture), but at least that gives us hope for the future, a backup to fall back on if Intel pulls a full-on 'google' on us.
So are these just big dreams or is there an actual possibility of something like this happening? Especially if the community secures the funding of some visionary who's as rich as Bill Gates (maybe the man himself) and on the side of the public? That way we'll be able to actually build a system from the ground up that is libre and transparent, without having to muck about with reverse engineering on the 'enemy's ground', so to speak, like coreboot does.
The problem isn't so much in architectures, but in silicon processes. HomeCMOS (http://homecmos.drawersteak.com/wiki/Main_Page) is notable in being the relatively rare project to look at this layer in the computing stack at all.
Once you can do your own processors, the architecture is alright. To avoid running into licensing issues all the time, projects like RISC-V (http://riscv.org/) can help - or open cores like Leon (SPARCv8), OpenSPARC-T1/2 (SPARCv9), openRISC and several more.
The problem is, the silicon processes were optimized for investment heavy, large scale operations. To ensure the livelihood of general purpose computing we need a "3D printer for logic gates" (for lack of better term), even if it's economically and technologically less efficient (but not too much, obviously).
Assume for a moment that the 3D printer for logic gates is a little too much of a long shot. If somehow the funds for a large-scale investment in 'open' RISC-V processors were obtained, would it be theoretically possible for a community of hackers to write their own BIOSes for these processors? And then eventually the whole stack, going up to userspace applications. So is the only problem right now that of funds? What other obstructions would there be to building a completely libre system from the ground up?
EDIT : Also, HomeCMOS doesn't look active as of June 2013.
coreboot works on emulation/qemu-riscv, and the communities have some (small) overlap. We (at coreboot) intend to support coreboot on real RISC-V hardware as soon as possible.
For the higher software stack, Linux on RISC-V already exists, and from there it's open source, a compiler plus some portability work to get a useful stack.
But that's only the CPU side. Good enough for embedded applications and maybe even servers - but at some point, data has to hit a display.
http://www.miaowgpu.org/ or something like that could help there, but I know next to nothing about it.
> is there some consensus in the research community about what will be a good replacement for the von neumann machines we are currently running?
Take this with a grain of salt because my exposure is admittedly limited, but what I've seen of the "research community" is that many of them are pro-DRM, pro-anti-user-security, and are mainly interested in furthering such technologies without considering the wider implications. I once asked someone with a vision of making all systems written with formally-verified provably safe languages what he thought of jailbreaks, console homebrew, and all the other exploits that bring freedom. His response was that they shouldn't exist.
I'm not really pleased with what Intel has been doing with x86 recently, but fortunately it wasn't always that way and a huge amount of software and documentation was created during that time so we don't have to start from nothing at all. IBM released schematics and BIOS listings for all the models of PC up to the AT. Thus it could be better to "fork the PC"; but individuals have designed and built their own CPUs and complete systems before, so a from-scratch design is still quite doable for a crowd:
Jail breaks shouldn't (need to) exist, but physical-presence-triggered introduction of a new root of trust should be mandatory for locked-down hardware.
Jail breaks are always a stop-gap measure, since they're quite obviously against the interests of the system designer, and so they'll work on plugging the holes.
At some point they will develop the safe methods themselves, and if we "shun" public work in that area, only jails will be safe, not the open spaces - which, among many other things, would be a PR nightmare for general purpose computing (Example: Apple and their "safe", filtered appstore).
TPM /could/ be nice, if you personally controlled it, instead of some company burning the signing keys into the processor - because then you would be able to make a safe system.
Except for the Endorsement Key (EK), that's how TPMs work.
And the EK is not so much an issue of "control", but of "privacy" - and as long as you control the OS, access to that key (or any other) can be mediated properly.
I'm reasonably technical but had no idea I was supposed to be security-patching my BIOS. I googled and found that (1) most of the articles about updating BIOS don't mention security, (2) finding BIOS updates for your hardware isn't necessarily trivial, and (3) applying the update is a fairly involved process with some risk of bricking your computer.
Is there any way hardware manufacturers could make it as easy as OS updates?
Then again, this is only necessary because UEFI is more complex than some operating systems, and thus provides the attack surface that they then have to defend like that.
A lot of these reflashable ROM exploits can be prevented with the simple addition of a physical switch or jumper that controls the write signal. With the writes physically disabled, no exploit would survive rebooting the system.
Device makers can add such switches as a simple way to advertise being secure. I'd pay a few dollars extra for that: for example, hard disk drives whose firmware cannot be altered, car computers that cannot be altered, etc.
Naive question: Is coreboot in any way a potential remedy for the sad, sorry situation we're in here?
I wonder when there will be a market for manufacturers to start releasing devices with hardware DIP switches or jumpers that need to be bridged for flashing purposes.
coreboot has a very different architecture, and we try to avoid having it parse external data (except for hardware registers).
It also has next to no presence during the runtime of the OS (no API for the OS to call into, and to exploit) - which is part of the "no external data" approach.
It's also much smaller: A minimum UEFI for QEmu that can barely boot into Linux loaded from SATA is 120kloc, coreboot (plus payload) for the same purpose are about 20kloc.
These differences should stop some of the exploit approaches right from the start.
Regarding the switches/jumpers: hardware is now complex enough to set up that waking up after suspend requires more data than the CPU can store by itself to revive memory without data loss. That data ends up in flash, so something needs to change in that area before the flash chip could be write-protected with a jumper.
(Disclosure: I'm a coreboot developer and no fan of UEFI)
Motherboards were produced with flash write-enable jumpers around the end of the last century, but my guess is that the cost savings of omitting the part and not requiring users to physically move the switch whenever reflashing BIOS, combined with what seems to be now perpetually buggy BIOSes needing constant updates, made them disappear. I think it's still a good idea, however.
> Regarding the switches/jumpers: hardware is now complex enough to set up that waking up after suspend requires more data than the CPU can store by itself to revive memory without data loss. That data ends up in flash, so something needs to change in that area before the flash chip could be write-protected with a jumper.
How much data is this? I thought it'd be stored in the CMOS NVRAM, which could be a better design.
CMOS NVRAM addresses 256 bytes (minus 16 for the clock).
Unfortunately that's not enough. Maybe things could be stored smarter, given that the data should have some structure, but the raw format is ~2000 bytes, IIRC.
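For the curious, that space is the legacy RTC/CMOS area reachable through I/O ports 0x70/0x71. A sketch for x86 Linux (needs root; on many chipsets the second 128-byte bank sits behind ports 0x72/0x73):

```c
/* cmosdump.c - dump the legacy CMOS/RTC NVRAM via ports 0x70/0x71.
 * Linux/x86 only, needs root for ioperm().  The first ~16 bytes are the clock
 * and status registers; the rest is the NVRAM the firmware uses.
 * Build and run:  cc cmosdump.c -o cmosdump && sudo ./cmosdump
 */
#include <stdio.h>
#include <sys/io.h>

int main(void)
{
    if (ioperm(0x70, 2, 1) != 0) {            /* access to ports 0x70 and 0x71 */
        perror("ioperm");
        return 1;
    }
    for (int reg = 0; reg < 128; reg++) {     /* classic 128-byte bank */
        outb(reg, 0x70);                      /* select CMOS register */
        unsigned char val = inb(0x71);        /* read its value */
        printf("%02x%c", val, (reg % 16 == 15) ? '\n' : ' ');
    }
    return 0;
}
```

As you can see, even the full 256 bytes is far short of the ~2000 bytes the resume path apparently needs.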
Wow, that sounds rather horrid! But it seems everyone is switching to UEFI, and MS was happily showing off ~2s boots due to it. Is the complexity because of bad engineering, politics, growing scope, cluelessness, maliciousness, a barrier to entry to keep competition (who?) out, or what?
* part bad engineering (I find their concept of loadable modules and a central dispatch mechanism that can override pretty much everything absolutely misplaced in the context of firmware),
* part politics (the rumour mill claims one of the reasons UEFI was done in the first place, instead of going for Open Firmware, which was an established standard back then, was to protect Intel's BIOS partners from having to compete in an existing ecosystem that was quite alive and dominated by others. Also NIH),
* part growing scope (Secure Boot locks down said dispatch mechanism through signatures when its inherent insecurity became an issue),
* part cluelessness (Intel is good with silicon and processes, but generally not so good when it comes to software, and they shaped EFI and through that UEFI).
I don't think there's malice involved, and the competitive angle is covered in politics.
That it's pushed the way it is has lots of complex reasons. One is certainly pride (even if they wanted to, Intel can't simply back down after promoting that stuff for 15 years without losing face). It also doesn't help that the only realistic existing alternative for an OS-facing API remains BIOS (because OpenFirmware never really existed on x86 outside laptop.org), which has its own set of issues (even if they are all patchable with enough effort).
I was about to ask about coreboot as well, but I am pretty ignorant about this part of the hardware stack.
I came to know about coreboot thanks to Chromebooks. If one is security conscious, what laptops with good coreboot support do you recommend? A Chromebook Pixel? New Thinkpads seem not to be supported.
The thinkpad support is mostly done by the free software part of the coreboot community, and they don't touch devices that come with a Management Engine (see http://me.bios.io/).
If "security conscious" includes not having any unexplained binaries around, the Asus C201 Chromebook may actually be the best bet (maybe there will be devices with the same chipset but more laptop-like properties eg regarding storage), but it's less powerful than a contemporary Intel based device.
Are these vulnerabilities exclusive to UEFI BIOS? The more I learn about UEFI, the more I am left with the feeling it is nothing but a wide-open back door and a method to "brick" PCs.
UEFI is designed to scale up - servers that can boot from network, which brings a certain baseline complexity. It doesn't help that the designers didn't want to set any rules and run everything through a central, extensible function call dispatch.
The effect is that UEFI can load and run executables (from flash, disk, network), has a network stack and things like openssl (when did you last update your firmware's SSL implementation? :-) ), all of which process lots of ingress data - while maintaining a larger degree of control over the system than the OS that comes after it.
So, it's not UEFI-specific per-se, but UEFI's design was optimized for the large scale (it pretty much started on Itanium, so there) and at a time when security wasn't much of a priority.
Now they sit in that corner and look for ways out. Such as signature checks on executables (Secure Boot).
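To make the "central, extensible dispatch" point concrete, here is a toy model in plain C (not actual UEFI/EDK2 code): a GUID-keyed registry where any module can install or replace an interface, so whoever looks a protocol up gets whatever ended up there last:

```c
/* dispatch.c - toy model of a UEFI-style protocol registry (not real EDK2 code).
 * Modules install interfaces under a GUID-like key; lookups return whatever was
 * installed, which is why it's hard to reason about what actually ends up running.
 */
#include <stdio.h>
#include <string.h>

#define MAX_PROTOCOLS 16

struct protocol {
    const char *guid;   /* stand-in for a 128-bit GUID */
    void       *iface;  /* pointer to a function (table) */
};

static struct protocol table[MAX_PROTOCOLS];
static int nprotocols;

/* Later installs shadow earlier ones - a simple model of "everything can be
 * overridden through the central dispatch". */
static void install_protocol(const char *guid, void *iface)
{
    for (int i = 0; i < nprotocols; i++)
        if (strcmp(table[i].guid, guid) == 0) { table[i].iface = iface; return; }
    if (nprotocols < MAX_PROTOCOLS)
        table[nprotocols++] = (struct protocol){ guid, iface };
}

static void *locate_protocol(const char *guid)
{
    for (int i = 0; i < nprotocols; i++)
        if (strcmp(table[i].guid, guid) == 0)
            return table[i].iface;
    return NULL;
}

/* two "block I/O" implementations: the vendor's and an attacker's */
static void vendor_read(void) { puts("vendor block read"); }
static void evil_read(void)   { puts("attacker-controlled block read"); }

int main(void)
{
    install_protocol("BLOCK_IO", (void *)vendor_read);
    install_protocol("BLOCK_IO", (void *)evil_read);   /* silently replaces it */

    void (*read_fn)(void) = (void (*)(void))locate_protocol("BLOCK_IO");
    if (read_fn)
        read_fn();   /* prints the attacker's version */
    return 0;
}
```

Secure Boot's signature checks are essentially an attempt to bolt a gatekeeper onto that kind of open-ended dispatch after the fact.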