I think realistically state is hard to avoid unless you are using disposable read-only memory.
Example, found through Qubes website:
Seems like she'd have more effect designing hardware.
Qubes' design means hardware and software are all separated so a vulnerability in one doesn't mean exposing another.
I like that in their docs they mention an approach they take and when it isn't secure.
That being said the main point of security contention is the admin (dom0).
Between the twilight of Moore's law, and the success of open-source software, I just don't see that much long-term value left in x86+PC.
She is working on hardware but that is not as easy to bring into the wider world.
A: Stinkier without, therefore Qubes.
I'd consider betting on one of those things being solid on its own, but not all of them together.
New Thing solves X and Y but not Z.
Therefore, criticize New Thing for not solving Z.
I run into so many people at local interest groups who do less than advisable things on the computer, yet don't even give a second thought to it because "I'm using Tails!" Or
"I'm using Qubes!"
At the same time, I have friends who do security for the military who show and tell so many different (and simple) ways to exfiltrate data that bypass most of the hypervisor/os/software stack.
This is a better condom. That is an accomplishment, and I tip my hat to them. At the same time, if you really don't want the diseases, it's safest to just stay off Tinder.
If you've spent any time with Intel's phone-book-sized opcode manual, or following the history of the PC, you get real skeptical when the words "secure" and "PC" are mentioned together.
Why are you pointing the finger at Qubes for not solving every problem there is? It's doing a much better job than ~every other Linux distro.
In my own space, the approach has typically been to minimize attack surface by using the least amount of the simplest possible hardware we can get away with, then verifying the hell out of it. 8/16-bit micros, RS-232, no BIOS, aggressive shielding, and an extreme approach to the actor model. For things that need more horsepower, super-simple 32-bit micros, a real-time microkernel, and loads of QA. It's not perfect, and we leave a lot of performance on the table, but as far as security-per-man-hour-expended goes, I'd put it up against anything on the PC any day of the week.
nickpsecurity made a very good comment on designs circulating in the assurance/defense sectors: https://news.ycombinator.com/item?id=15571546
The best part of his comment was the quote from Brian Snow:
"The problem is innately difficult because from the beginning (ENIAC, 1944), due to the high cost of components, computers were built to share resources (memory, processors, buses, etc.). If you look for a one-word synopsis of computer design philosophy, it was and is sharing. In the security realm, the one word synopsis is separation: keeping the bad guys away from the good guys' stuff!
So today, making a computer secure requires imposing a "separation paradigm" on top of an architecture built to share. That is tough! Even when partially successful, the residual problem is going to be covert channels (i.e. side channels). We really need to focus on making a secure computer, not on making a computer secure -- the point of view changes your beginning assumptions and requirements."
I agree Qubes (or other similar systems) are imperfect - partly due to software bugs, partly due to hardware vulnerabilities. But it clearly is an improvement, if only thanks to the compartmentalization. I'm sure there are potential adversaries that have access to BIOS backdoors, Xen 0-days etc. But well ...
Never said "Qubes sucks because it's not perfect." I have argued that the PC is too damn crufty and complicated to ever be "reasonably secure".
If I ever felt as though I had to protect myself from FBI or ex-Mossad, I'd feel safer with an iPad and Signal than a PC running anything, and I say that as someone who doesn't particularly trust or care for Apple. You could also go full-Stallman, but that would probably be fairly error-prone if you didn't know as much about computers as RMS.
FWIW I don't think you've answered the "What to use instead, then?" question. I agree there are platforms that are much tighter on security compared to x86 (say, iPhones seem to fare quite well), but I don't see how I could use that for my "regular" work. For that, I think Qubes is "reasonably secure" but hopefully it'll get better.
Of course, if your threat model includes guys from NSA/FBI/Mossad, then perhaps it's not enough. But then again, an iPhone may not be enough either.
If you need a workstation that is hardened against the big boys, I doubt such a thing exists, and it never will if people keep putting all of their hope in the next band-aid. It is also a damn shame, since it's not like this is a problem that needs two more generations of pure science to solve.
Hell, the B5000 was safer than the things we run today, and people didn't stop having better ideas about computing in 1961.
The closest you can get to Qubes on Windows would be to follow Microsoft's Privileged Access Workstation (PAW) guide, but it requires a lot of additional infrastructure. That infrastructure allows you to do remote attestation of the virtual machines, but makes it costly to deploy in an SMB or homelab environment.
I don't expect it'll be very long before PAW and WDAG are usable at the same time, with colored window borders indicating the origin virtual machine. I hope this is on Microsoft's roadmap.
Video on privileged access workstation use, starting at a demo: https://youtu.be/3v8yQz2GWZw?t=41m48s
Video on privileged access workstation setup: https://www.youtube.com/watch?v=aPhfRTLXk_k
It's unmaintained now, but it is basically the same idea as WDAG. Essentially similar to firejail but the container gets its own lightweight kernel and runs in a stripped down VM, so the attack surface is KVM, not all parts of the kernel that aren't firewalled off by SECCOMP.
1) Support anything other than Edge/its own apps
2) Allow the feature to be accessed by users of all Windows editions
I understand for now it's still experimental and whatnot, but I'm not getting my hopes up.
lol. the whole point of an airgap is that you can very easily - at a glance - verify that the system is secure because there are no inputs/outputs to/from it (air gapped). trying to implement it using a hypervisor turns it into a buzzword.
Did you know if you force remove Edge from Windows 10 it will forever after ignore the "always use this" checkbox and prompt you to choose your default browser every time the browser is called from a link in an application?
NixOS has containers that show how this could work, but they're only via systemd-nspawn, so not as jailed as Qubes' domUs.
This means you can run Genode on NOVA with VirtualBox 5 fully integrated as the VMM, all with the improved Noux/POSIX interop components in place, and have a decent package management solution (that handles API compatibilities, multiple version installs, src vs binary deps, packages, and more). There's also Xen support with the most recent release (for cloud appliance work with Genode).
What's more, based on the roadmap and challenges, they should be bringing VirtualBox5 support to the seL4 kernel, and they even have a goal for being the virtualization foundation of QubesOS. https://genode.org/about/challenges
With the recent toolchain update and new package management system, it's easier than ever to cook up your own Genode-based systems.
a) tweak compilation flags of libraries & apps
b) describe full set of runtime config files of an app
and thus build a single full configuration of a whole system, like in NixOS.
Hm; or can this maybe somehow be solved with the "run scripts" mentioned at the end of the article? I'm even less than a noob with regards to Genode, so I'm not sure about that.
Or does the package manager only provide Nix-like functionality, with no way for NixOS-like features?
I've also been looking at how projects like Hypercontainer and Clear Containers achieve minimal VM overhead to expand the model to a per-application-instance VM.
Another interesting enabling technology is VirtFS, which can be used for filesystem-level storage virtualization to gain the many benefits of COW and shared caching.
The principal question then is how to allow interaction between different application instances without the user having to manually ferry files between them, as it currently happens with AppVMs on Qubes.
Is Chrome's process-per-tab model restricted? Forking and piping in general, perhaps?
Currently all applications assume they get access to everything by default, so even if one was to be able to implement a confirmation dialog, the user would be victim to a battery of requests.
This is not to mention that isolation excludes discoverability, so users would have to manually make files visible to other applications beforehand.
The problem won't solve itself by adoption if it never gets to that point, that's almost a perfect catch-22.
Some exciting changes are coming:
EDIT: Downvotes for providing relevant sources, really?
Sometimes the downvotes on HN make no sense. Looking through your comment history there are a number of recent comments that were unfairly downvoted. Just a guess, but I wouldn't be surprised if it was the same people doing it.
I only just now downvoted you.
> Please don't comment about the voting on comments. It never does any good, and it makes boring reading.
I have separate vms for media and browsing, for music (spotify), development (python, rust), skype, personal email, work email and password manager.
It needs 16 GB of RAM to be able to run all of these at once and about 150 GB of disk if you actually create separate template vms.
My only real pain was copying and pasting between all of these vms (you need Ctrl+C then Ctrl+Shift+C to copy, and Ctrl+Shift+V then Ctrl+V to paste).
I solved that with a custom solution that automatically distributes the clipboard contents (for text only) to multiple vms (depending on the source of the clipboard change). I know it defeats the purpose of isolation for the clipboard but it's ok for my use case.
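The core of such a distributor can be sketched in a few lines of Python. This is a hedged illustration, not the poster's actual script: the qube names are placeholders, and it assumes dom0's `qvm-run --pass-io` for pushing text into each VM's clipboard via `xclip`. The runner is injectable so the logic can be exercised without a Qubes install.

```python
import subprocess

def distribute_clipboard(text, targets, run=subprocess.run):
    """Push `text` into the clipboard of each target qube.

    Shells out to `qvm-run --pass-io <vm> 'xclip ...'` from dom0.
    `run` is injectable so the logic can be tested without Qubes.
    """
    commands = []
    for vm in targets:
        cmd = ["qvm-run", "--pass-io", vm,
               "xclip -selection clipboard -i"]
        commands.append(cmd)
        # Feed the clipboard text to the remote xclip via stdin.
        run(cmd, input=text.encode(), check=True)
    return commands

# Example (qube names are hypothetical):
# distribute_clipboard("some text", ["work", "personal"])
```

Wiring this up to fire on clipboard-change events (and restricting which source vms may trigger it) is the part that actually decides how much isolation you give up.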
Thankfully, better designs such as seL4's VMM do exist, although it might need a little more work until it's usable for the purpose.
> It seems one major residing problem with KVM is the Linux kernel (which is large and vulnerable). A port of KVM to a thinner base layer would obviate those issues.
Complexity was still too high. Most in high-assurance security were trying stuff like the NOVA microhypervisor as a result. KVM on separation kernels might be worth further investigation for those platforms that will stay on KVM regardless.
(disclaimer: working at Google on virtualization security)
I mean, the US government no doubt had influence on the Trusted Computing Group (too bad the EFF totally shunned it), and through the magic of product binning and chip fab costs, we all have trusted platform modules.
ASLR currently seems wimpy.
I'm certain you are in a position to accomplish a great deal, no matter where you are in the hierarchy. Maybe the future is x86 hardware emulation for user mode processes.
The project would cost money that Google has. There's not much new to invent, though. They just have to apply what's there. The performance penalties and ASIC costs are even much lower than they were in the past. Google refuses to do these things because either (a) they don't know about them or (b) more likely their management doesn't want to commit that much money to secure hardware. Typical of the big companies, with the smartcard market the only exception as far as stuff non-enterprises could afford.
For a quick example, they did retool software to support the OpenPOWER architecture but could've also funded the Raptor Workstation in a desktop or especially server form themselves. It would've been to their budget like pennies are to ours. Not even that. At least they did the Chromebooks, though, which are good for a lot of non-technical folks.
Intel should be considered to be totally unreliable and incompetent.
I mean, no one buys office store safes and expects their things to be secure in them. But a processor is a little more expensive than a cheap safe and holds more valuable things.
Edit: and besides, Fortezza is an SSL protocol option.
Perhaps if we add one more thing, x86 will finally be secure. You are right, Intel should be left to their own devices.
SGX is at least a middle ground - it integrates the memory access checks very deep into the memory access circuitry, sufficiently deep to block all other privilege levels on the CPU. Whilst there may well be implementation flaws in SGX itself so far most attacks have been mounted via side channels, not directly exploiting CPU bugs.
In this sense my original statement was correct. Intel is pushing secure CPUs forward more than any other vendor.
It is the wrong sense. Intel is playing catchup more than any other vendor and is selling a product that is nothing more than a bunch of cobbled-together features, in my opinion, in view of their own statement that AMD is "glued together."
A more complex one is below that was also designed by one person for his dissertation. Knocks out all kinds of issues without modifying the processor. It has stuff to improve for sure, but I think it proves the point pretty well. The stuff corporate teams were designing comes nowhere near this because they don't know much about high-security design. A critical part of that isn't features so much as a balancing act between what protection mechanisms do and don't do, one that tries to minimize complexity to as low as possible.
And one open-source one on MIPS for capability-based security that runs FreeBSD:
A company or group of hardware volunteers could develop this into something at least as usable as a multi-core ARM CPU on RISC-V or OpenSPARC. It wouldn't take tons of money, especially if they worked their way up in complexity. The hard stuff is already done. People just need to apply it. They could even pay these academics to do it for them, with open-sourced results. They even get a huge discount on the EDA tools, which can be six digits a seat.
You're right that Intel is screwing up and playing catchup cobbling together features. There was stuff in the available literature better than most of what they're doing. They even have a separation kernel from Wind River they're not employing. Managers without security expertise must be pushing a lot of this stuff.
It is easy to make a secure coprocessor, since the formal logic proofs aren't for such a long set of code.
The fact that rootkits are even possible, and without malware that involves an elaborate rewrite of the kernel at that, shows how terrible everything is.
If I didn't know any better, I'd say that Intel is hiring the designers who thought Internet Explorer should be in the kernel.
I haven't used Xen for a while, but I seem to recall that Amazon forked it way back in the 3.x days and has been doing its own incompatible thing with it since then.
Corrections welcome of course :)
150 kLoC is quite a bit for a hypervisor.
Unlike the nitty-gritty of how the sandbox works, the IPC changes often with new releases. And quite frankly it isn't as fun, cool, or interesting as VMMs or other sandboxing techniques, so a lot of the time it isn't given the close eye that it should be.
I do not think so.
It's a pretty deep rabbit hole if you really want to go down it. You can make a case for not trusting any CPU that you didn't design and fab yourself, and even then you have to watch out for your own mistakes and bugs that can be used against you.
I've been using KVM/Xen/VMware for some time and always enjoyed it. And since Amazon and especially Google are going all in on KVM, I'm surprised to hear that Xen is a better choice.
KVM is, like VMware, a Type 2 hypervisor. 
Xen is a proper Type 1 hypervisor.
I guess it's your use of 'proper' that bugged me.
Sorry for not googling before asking...
Tiny hypervisors like NOVA (http://hypervisor.org) or seL4-based ones are the ideal solution, but sadly no one seems to be pushing to make them usable and production-ready :(
I encounter an equal amount of complexity in my KVM workstation as I did in my Qubes workstation, and more problems.
For example, lack of a secure copy/paste mechanism, meaning I must type passwords by hand to avoid every VM being exposed to the clipboard.
This means that, if you use this OS on a laptop, you'll be vulnerable to cold-boot attacks, even after you close your lid, unless you configure it to shutdown on lid close. (I.e., if a highly skilled adversary steals your laptop then, even if your laptop lid is closed, they will be able to read your RAM and therefore decrypt your entire hard drive.)
Despite the major security implications, it doesn't sound like a fix will be implemented any time soon. 
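For what it's worth, on a systemd-based distro you can make a closed lid trigger a full power-off instead of suspend via logind. A sketch of the relevant config (this is a general systemd setting, not something Qubes ships by default):

```ini
# /etc/systemd/logind.conf (or a drop-in under logind.conf.d/)
[Login]
HandleLidSwitch=poweroff
HandleLidSwitchExternalPower=poweroff
```

Reload logind for it to take effect (e.g. `systemctl kill -s HUP systemd-logind`). Hibernation to encrypted swap is the middle ground: your session state survives, but RAM contents are gone.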
To defend hibernation/shutdown: if I lose my laptop or it is stolen, and I realize I will never see it again, then at least I will have peace of mind that no one can ever recover the data, assuming I had a strong password.
An evil maid attack assumes I will have the laptop in my possession again. This is a different problem, and requires different measures to defend against. I'm interested in hearing why you think leaving a laptop in sleep mode protects it from an evil maid attack.
I don't have any bars on my windows to prevent that.
You need to draw the line somewhere.
I do. And, if I have a choice, I'd rather not have to wonder if this data is in the hands of a stranger after my laptop is stolen.
I'm quite sure if you look at your average thief and multiply these two chances together, that's less than a one-in-a-million chance. Assuming you're not some high-profile person where the right person is out to get you, knows which OS you use, and knows how to steal from you.
My Linux OS can hibernate, and I've not found it to be noticeably inelegant or inconvenient. I suppose others' opinions may differ.
 https://puri.sm/shop/librem-13/ - see the Operating System choice
You can have a separate "qube" that is not connected to the network where you would store your passwords, etc.
I'm still not sure that there's a market for this stuff. It must be free, and it's hard to build a business model around that.
Is anyone running this on a laptop? I get the feeling after reading that page that this is really strictly desktop only. Maybe the page has not been updated in a bit?
The certification requirements are higher, but that's basically if people want to stick the Qubes-certified label on their devices, signaling to customers that it measures up to the highest standards of security.
They're not necessary to run Qubes, they're just ideal.
EDIT: See https://news.ycombinator.com/item?id=15735911
Alternatively, consider not running a full-blown desktop or using Windows, which has grown a lot more secure since the Windows XP pre-SP2 days.
OpenBSD -> Secure & minimal Server
OpenBSD doesn't have the isolation and hardening of desktop apps that Qubes has.
For this article's target audience, Qubes OS is the better choice.
Considering that the researchers who actually disabled IME required physical access to the machine, Purism's claim that they can do it to previously sold devices with only a software update stinks of BS to me.
Also the post you linked to directly gives credit to me_cleaner and Positive Technologies.
The reason the researchers required physical access:
> Although some systems do allow the full contents of the BIOS flash chip to be reprogrammed using software tools only (so called 'internal flashing'), on most PCs this facility is either completely unavailable, or can only write to the unprotected areas of the flash filesystem (excluding the ME area), or will only write vendor-signed images. Accordingly, we will describe the approach of using 'external' flashing in this guide, as that is the most reliable.
Purism being, uhhhh, the vendor, allowed full write access.
If that was the case they could have shipped IME-free machines from the start. They are selling whitebox machines for an exorbitant markup with their own spin on a Linux distro.
Disabling IME can have other impacts, and Purism even has a blog post explaining what the issues were and how they resolved them -- once they figured out what IME modules were needed for their laptop to work properly they could disable IME with a software update.
I don't know if that's how they did it, but you're misunderstanding the difference between disabling IME and enabling internal flashing.
God knows, there are several other embedded CPUs in the thing that have who knows what kinds of vulnerabilities... but better is better.
Wouldn't you want the keys on the computer that's going to use them? And then, wouldn't you want to make it hard to copy the unencrypted private keys?
(I'm assuming we're talking about SSH keys.)
OTOH, it could be neat to run an ssh agent in a key-holding qube and forward that to whatever qubes need to use your SSH keys, using `ssh-add -c` so that key use must be confirmed in the key-holding qube.
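Concretely, the confirming-agent part looks something like this (a sketch only: the key here is a throwaway demo key, and forwarding $SSH_AUTH_SOCK from the key-holding qube into the client qubes is left out since it depends on your qrexec setup):

```shell
# Run in the key-holding qube. Demo key only -- substitute your real key.
key="$(mktemp -u /tmp/demo_key.XXXXXX)"      # throwaway path for the demo
eval "$(ssh-agent -s)"                       # start an agent in this qube
ssh-keygen -t ed25519 -N "" -f "$key" -q     # generate the demo key
ssh-add -c "$key"   # -c: the agent asks for confirmation on every key use
ssh-add -l          # list loaded keys to verify
```

With `-c`, each signing request pops a confirmation in the key-holding qube, so a compromised client qube can't silently drain signatures from your agent.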
If you are the victim of such an attack, the encryption of the file is broken as well.