In the race against time, it is fair to say at this point that the machines have won. It may not be completely obvious yet, in the way that a tidal wave out at sea is only a small hump under your individual ship, but when it comes ashore, when the confluence of terrain and massive liquid power becomes manifest, then, of course, it is obvious.
What appears to be happening is a kind of terraforming: a new software layer is spreading, one that has the keys to everything - our social lives, our morning cup of coffee, our cars... our nukes.
This has an end condition, of course - and that is the total loss of control over our technological infrastructure.
> This has an end condition, of course - and that is the total loss of control over our technological infrastructure.
Why can't you use the same technology to defend your software?
Of course this assumes that automated discovery is not very computationally intensive, which in some cases it appears to be - the search space of a program is enormous. One possible world, then, is one in which exploits can be found automatically, but discovery requires massive computational effort. This seems extremely likely to me, because exploits that don't require massive computational effort will be found and fixed quickly, eliminating the low-hanging fruit.
Thus governments with the best algorithms and the most money/powerplants/datacenters/fabs have an advantage, because they can patch their own software while developing exploits for other people's software.
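To make the search-space point concrete, here is a minimal random-mutation fuzzer sketch in Python. Everything in it is made up for illustration - the `parse` target and its planted bug stand in for any real program - but it shows why brute-force discovery burns compute: most mutated inputs find nothing.

```python
import random

# Hypothetical target standing in for any real parser; the planted
# "bug" fires only on a specific two-byte prefix.
def parse(data: bytes) -> None:
    if len(data) > 3 and data[0] == 0x7F and data[1] == ord("E"):
        raise MemoryError("simulated memory-safety bug")

def fuzz(seed: bytes, iterations: int) -> int:
    """Randomly mutate `seed` and count inputs that crash the target."""
    crashes = 0
    for _ in range(iterations):
        data = bytearray(seed)
        for _ in range(random.randint(1, 4)):
            data[random.randrange(len(data))] = random.randrange(256)
        try:
            parse(bytes(data))
        except MemoryError:
            crashes += 1
    return crashes

# Even against this toy 16-byte input, most runs find nothing; a real
# program's input space is astronomically larger.
print(fuzz(b"\x00" * 16, 100_000))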
The strategic questions then become:
1. How many exploits do you keep in reserve, given a particular rate of discovery, and how and when do you use them?
2. How do you handle the case when you and the target are using the same software? If you start to patch it, the exploit might leak to the target. If you use the exploit before patching, the target might use it against you.
Operationally, protecting exploits from spies seems hard. A government with a technical advantage might well be at a disadvantage against a less technically savvy government with a human-intelligence advantage.
To quote the Honey Badger video:
>"You do all the work for us, honey badger, and we'll just eat whatever you find, how's that? What'daya say, stupid?"
To avoid this, a government might use its exploit-development capability only defensively in peacetime, keeping no reserve of exploits until it has an immediate need. Of course this might weaken deterrence.
What planet do you live on where this distinction can be made?
2. There are major geographic differences in the software, hardware and architecture of Industrial Control Systems. Not to mention vulnerabilities that might only exist in certain configurations which are common to the contractors building those systems.
3. Major powers are developing their own GPS satellite constellations. Some countries develop their own satellite software.
4. Most web applications are customized to the client.
5. Due to fears of hardware backdoors, it looks like we might see a balkanization of communication hardware (internet routers, etc.). Note that there are already geographic and regional differences in cell and phone communications.
6. S. Korea's legally mandated HTTPS encryption, SEED, is not used outside of S. Korea. An attack on SEED software would be very specific to that country.
You are correct, though, that much of the consumer off-the-shelf (OTS) software is global in scope. It really depends on the vertical you are attacking.
It would be far better to have a bug that comes back after a cold boot, as part of a known starting state, than to have a mechanism for "updates" (running software) that is inaccessible to the user/programmer. A static bug can be worked around, but a moving target is harder to compensate for, particularly if it is not viewed as user-programmable (for lack of documentation, or license, etc).
To me, this demonstrates that the proper place for security is in software, at as high a level as possible for the purpose. Even secure boot is too low. Boots should be insecure, but repeatable. Turning a computer on should give full control - only then can you lock it down (whatever that means to the user...). Secure boot, if you want it, should just be a bootloader that verifies a payload, thereby protecting itself - not a BIOS-integrated feature, where the BIOS is opaque.
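A minimal sketch of that verify-then-boot idea, in Python for readability (a real bootloader would do this in C or assembly before handing over control; the file name and the pinned digest below are placeholders, not from any real system):

```python
import hashlib
import sys

# Hypothetical pinned digest of a known-good payload; in a real
# bootloader this constant would live inside the small, auditable
# loader image itself.
KNOWN_GOOD_SHA256 = "0" * 64  # placeholder value

def verify_payload(path: str) -> bool:
    """Hash the payload and compare against the pinned digest."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest() == KNOWN_GOOD_SHA256

if __name__ == "__main__":
    if verify_payload("payload.img"):  # hypothetical file name
        print("payload verified, chainloading...")
    else:
        sys.exit("refusing to boot unverified payload")
```

The point is that the verifier protects only itself and is small enough to audit; everything above it stays user-programmable.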
A feature-full BIOS would only make sense if it were immutable (and even then, there would be disadvantages), or (at the other end of the spectrum) if it were as programmable as the rest of the system - but until then, any unnecessary complexity or "features" are downright harmful.
Not all BIOS bugs can be worked around in software -- if the memory controller isn't initialized correctly, all processing is suspect. And if the manufacturer can't fix a BIOS, it falls on the shoulders of the OS writers -- who do not have the motivation to fix hundreds of board-specific problems. Worse yet, if I wanted to inject malware, I would pretend to be a motherboard manufacturer and submit bad kernel code to beleaguered OS people.
What isn't a bad thing? Shitty security in BIOS chips? Instead of reformatting your disk, you have to detach the EEPROM chip that holds the BIOS from the mobo and connect it to another system to inspect it for infections/changes. I'm not sure that's even possible for most mobos, and unlike reformatting a disk, which costs nothing, it is far from free.
> using BIOS vulnerabilities will be the only way to install an unsigned operating system.
Then I would rather not use those systems.
Android phones are already at this level - I could run CyanogenMod, but I'd first have to run a random blob I refuse to run because there is no way to verify what that blob does. I'm screwed both ways. At these moments I remember Stallman wasn't completely crazy and wish Linux were licensed under GPLv3, so that the phone I bought wasn't tivoized.
In which case Google would most likely simply have used a BSD variant as Apple did.
There's always reverse-engineering... an option which I believe could be far more powerful, and one Stallman should have argued for; the ability (and right) to figure out what some software does and to modify it is the fundamental key to the freedom he argues for, and while having the source code can certainly help, it's not the only possibility.
The power of RE comes from the fact that, while it's very easy to not release source code, it's nearly impossible to prevent someone from reading the binary on a general-purpose computer regardless of what the legal situation is.
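As a tiny illustration of how low that barrier is, a few lines of Python with the capstone bindings will disassemble any bytes you hand them (the byte string here is just a hand-picked x86-64 function prologue, not taken from any particular binary):

```python
# pip install capstone -- third-party disassembly library
from capstone import Cs, CS_ARCH_X86, CS_MODE_64

# A few hand-picked bytes: the classic x86-64 function prologue
# plus a return. Any binary on disk can be read the same way.
code = b"\x55\x48\x89\xe5\x5d\xc3"  # push rbp; mov rbp,rsp; pop rbp; ret

md = Cs(CS_ARCH_X86, CS_MODE_64)
for insn in md.disasm(code, 0x1000):
    print(f"0x{insn.address:x}:\t{insn.mnemonic}\t{insn.op_str}")
```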
I'm actually really happy the tablet revolution didn't pan out as predicted, since it leads to the same conclusion: computers you can't just start coding on - and without that ability, I would never have gotten started.
As for tablets: they may not have been the revolution that some people hoped for, but the lock-in to a walled garden happened anyway with the iPhone. Apple has done more long-term damage to the computer industry than anybody else by convincing way too many software authors - who should really know better - that paying to write and publish software is sane.
Instead of fighting this when it was small, we are now faced with a future where even the hardware can work against the user who wants a true General Purpose Computer. We've already seen BIOS lockouts such as the recent ThinkPad "Boot Guard" idiocy. It will get really bad once we start to see Intel "SGX" and the "trusted execution environment" it is intended to enable. So now we get to fight at the hardware level, too.
As for using vulnerabilities to root the device - that is not a strategy to fight this; it merely cedes the fight to the people who are afraid of what it means to be "Turing complete".
Unfortunately, I suspect that we are too late. Fighting this trend now requires sacrifice. Stop giving any money to any business that uses these anti-user technologies. Yes, that includes Intel and many others. Stop writing software or making embedded products that rely on these kinds of features. Yes, this might mean quitting a nice job. No, I expect instead that the people that should know better will continue supporting the enemy by buying their products. I expect they will stay on as collaborators.
This should scare anybody who wants a future that still has general purpose computers: http://www.arm.com/images/GP_Standardization_500.png
I honestly admit, I'm scared. We're headed towards the world of not owning any tools and basically leasing shiny crap.
Funny thought: will this be the start of the actual professionalization of programming? I.e. you won't be allowed to operate a compiler or a general-purpose computer without proper engineering license? It used to be considered impossible, because hey, everyone can get hands on a computer and a compiler. But if current trends continue, it may soon no longer be the case.
x86 (and x86_64) and even ARM are all primarily developed by govt-influenced companies like Intel, yes? So what are the possibilities of us - all the programmers and electronic engineers who want to support personal computing - getting together and developing a crowd-researched, crowd-designed, and (maybe) crowd-funded architecture to last into the future? Of course, writing software for that architecture could take decades, if not centuries (unless someone writes a perfect x86 emulation layer on top of it), but at least it gives us hope for the future, a backup to fall back on if Intel pulls a full-on 'google' on us.
So are these just big dreams, or is there an actual possibility of something like this happening? Especially if the community secures the funding of some visionary who's as rich as Bill Gates (maybe the man himself) and on the side of the public? That way we'd be able to actually build a system from the ground up that is libre and transparent, without having to muck about with reverse engineering on the 'enemy's ground', so to speak, like coreboot does.
Once you can make your own processors, the architecture side is already in decent shape. To avoid running into licensing issues all the time, projects like RISC-V (http://riscv.org/) can help - or open cores like Leon (SPARCv8), OpenSPARC-T1/2 (SPARCv9), OpenRISC and several more.
The problem is that silicon processes were optimized for investment-heavy, large-scale operations. To ensure the livelihood of general purpose computing we need a "3D printer for logic gates" (for lack of a better term), even if it's economically and technologically less efficient (but not too much so, obviously).
EDIT: Also, HomeCMOS doesn't look active as of June 2013.
For the higher software stack, Linux on RISC-V already exists, and from there it's open source, a compiler plus some portability work to get a useful stack.
But that's only the CPU side. Good enough for embedded applications and maybe even servers - but at some point, data has to hit a display.
http://www.miaowgpu.org/ or something like that could help there, but I know next to nothing about it.
Take this with a grain of salt because my exposure is admittedly limited, but what I've seen of the "research community" is that many of them are pro-DRM, pro-anti-user-security, and are mainly interested in furthering such technologies without considering the wider implications. I once asked someone with a vision of making all systems written with formally-verified provably safe languages what he thought of jailbreaks, console homebrew, and all the other exploits that bring freedom. His response was that they shouldn't exist.
I'm not really pleased with what Intel has been doing with x86 recently, but fortunately it wasn't always that way, and a huge amount of software and documentation was created during that time, so we don't have to start from nothing at all. IBM released schematics and BIOS listings for all the models of PC up to the AT. Thus it could be better to "fork the PC"; but individuals have designed and built their own CPUs and complete systems before, so a from-scratch design is still quite doable for a crowd.
Jailbreaks are always a stop-gap measure, since they're quite obviously against the interest of the system designer, who will work on plugging the holes.
At some point they will develop the safe methods themselves, and if we "shun" public work in that area, only the jails will be safe, not the open spaces - which, among many other things, would be a PR nightmare for general purpose computing (example: Apple and their "safe", filtered App Store).
And the EK (endorsement key) is not so much an issue of "control" as of "privacy" - and as long as you control the OS, access to that key (or any other) can be mediated properly.
Is there any way hardware manufacturers could make it as easy as OS updates?
UEFI also allows updating from within the OS, and it looks like they formalized things a bit more, so that can be streamlined. (see https://blogs.gnome.org/hughsie/2015/03/03/updating-firmware...)
Then again, this is only necessary because UEFI is more complex than some operating systems, and thus provides the attack surface that they then have to defend like that.
I wonder when a market will emerge for manufacturers to release devices with hardware DIP switches or jumpers that need to be bridged for flashing purposes.
Device makers could add such switches as a simple way to advertise being secure. I'd pay a few dollars extra for, say, hard disk drives that cannot have their firmware altered, car computers that cannot be altered, etc.
It's also much smaller: a minimal UEFI for QEMU that can barely boot Linux loaded from SATA is 120kloc; coreboot (plus payload) for the same purpose is about 20kloc.
These differences should stop some of the exploit approaches right from the start.
Regarding the switches/jumpers: hardware is now complex enough to set up that waking from suspend requires more data than the CPU can store by itself to revive memory without data loss. That data ends up in flash, so something needs to change in that area before the flash chip can be write-protected with a jumper.
(Disclosure: I'm a coreboot developer and no fan of UEFI)
How much data is this? I thought it'd be stored in the CMOS NVRAM, which could be a better design.
Unfortunately that's not enough. Maybe things could be stored smarter, given that the data should have some structure, but the raw format is ~2000 bytes, IIRC.
* part bad engineering (I find their concept of loadable modules and a central dispatch mechanism that can override pretty much everything absolutely misplaced in the context of firmware),
* part politics (the rumour mill claims that one of the reasons UEFI was done in the first place, instead of going with Open Firmware - an established standard back then - was to protect Intel's BIOS partners from having to compete in an existing ecosystem that was quite alive and dominated by others; also NIH),
* part growing scope (Secure Boot locks down said dispatch mechanism through signatures when its inherent insecurity became an issue),
* part cluelessness (Intel is good with silicon and processes, but generally not so good when it comes to software, and they shaped EFI and through that UEFI).
I don't think there's malice involved, and the competitive angle is covered in politics.
That it's pushed the way it is has lots of complex reasons. One is certainly pride (even if they wanted to, Intel can't simply back down after promoting that stuff for 15 years without losing face). It also doesn't help that the only realistic existing alternative for an OS-facing API remains BIOS (because OpenFirmware never really existed on x86 outside laptop.org), which has its own set of issues (even if they are all patchable with enough effort).
I came to know about coreboot thanks to Chromebooks. If one is security conscious, what laptops with good coreboot support do you recommend? A Chromebook Pixel? New Thinkpads seem not to be supported.
If "security conscious" includes not having any unexplained binaries around, the Asus C201 Chromebook may actually be the best bet (maybe there will be devices with the same chipset but more laptop-like properties eg regarding storage), but it's less powerful than a contemporary Intel based device.
Since Sandy Bridge it's mandatory.
The effect is that UEFI can load and run executables (from flash, disk, network), has a network stack and things like openssl (when did you last update your firmware's SSL implementation? :-) ), all of which process lots of ingress data - while maintaining a larger degree of control over the system than the OS that comes after it.
So, it's not UEFI-specific per-se, but UEFI's design was optimized for the large scale (it pretty much started on Itanium, so there) and at a time when security wasn't much of a priority.
Now they sit in that corner and look for ways out. Such as signature checks on executables (Secure Boot).
Guess who wins when you fight God?