Hacker News
A native hypervisor is coming to OpenBSD (marc.info)
276 points by mariusz79 on Aug 31, 2015 | 110 comments

Good to see OpenBSD in the process of getting one. I previously thought that OpenBSD, or at least Theo, hated virtualization, based on this rant:


Or just x86's virtualization. Nothing different popped up in my news feed until now. Has OpenBSD's position changed on the subject of virtualization? And is there a more recent post that explains it?

Theo's more recent talks (like that one for ruBSD) seem to indicate that OpenBSD's looking to prioritize virtualization in order to stay competitive with other operating systems.

It's also worth noting that OpenBSD has supported virtualization on some other platforms, too; for example, it can be used to manage SPARC logical domains (LDOMs).

Appreciate it. Makes sense on competitiveness. Didn't know about the LDOM support. Interesting.

Theo is objecting to the view that VMs are a security necessity.

Makes sense. I disagree with him in that they can definitely have security benefits over a regular OS. The reason is relative simplicity and the ability to easily integrate security tech, versus a whole OS with legacy compatibility issues. This was proven in KVM/370, then the VAX Security Kernel, then MILS, then the Nizza Security Architecture, and then people started leveraging Xen for similar reasons, albeit without the same assurance. The NSA's pentesters repeatedly failed to breach some of these despite years of effort, while for the OSes... see the Snowden leaks and the TAO catalog.

Of course, I agree that VMs aren't necessary: they're one of many approaches one can use and don't get the job done by themselves. They are beneficial, though, as running security-critical components on a 4-12Kloc kernel with a mediated, simple interface should have way less impact than running them on 1Mloc+ with a POSIX-style interface.

Aren't they, though? With 0-days it makes sense to be able to snapshot non-mission-critical systems and take them offline until a patch is available.

Snapshotting alone is a security necessity when patching, too. Very easy to roll back to a "good" state.

I haven't read it, but I assume the discussion was about sandboxing clients. From that perspective, any additional security would be defeated as soon as a client is able to affect the hypervisor or the host OS. So (according to Theo) if you can't write a secure host and/or client, the VM doesn't improve security.

While I agree with him on a lot of things, this isn't one of them; security is all about layers, and it's not like OpenBSD has steered clear of other forms of sandboxing (like chroots and systrace). This doesn't mean that virtualization should be relied on exclusively or even near-exclusively as a defense (which is what I suspect Theo was more objecting to, along with the point of "well, if you can't write a secure operating system, what makes you think you can write a secure hypervisor?"), but rather that it should be used as an additional layer on top of (or rather, underneath) a bunch of others.

It's like bulkheads on modern ships. Yeah, if you get a hole in your hull, you're gonna be in some (literally) deep water, but that bulkhead (so long as it's built right) could mean the difference between limping to the nearest harbor or sinking to the nearest seafloor.

Somehow back when Amazon had to reboot every EC2 server to fix a Xen bug, my "insecure" hypervisor-less server didn't require such action. I think I'll prefer to keep sailing without that particular bulkhead.

The reboots were because Amazon cuts a lot of corners in their Xen setup, because instances are supposed to be "disposable". A proper VM cluster would use dedicated storage nodes that export over something like iSCSI, which would require transferring just a memory snapshot, or would use the native, slower, disk-snapshotting migration. But that's just one of many ways AWS is rather broken from an operations standpoint.

Thing is, the AWS philosophy of considering everything short of data sources "disposable" leads to much more robust engineering.

Yes, it makes Ops work more difficult, but as someone smart said regarding software: if something hurts, you're not doing it often enough :)

I'm pretty sure "if something hurts, you're not doing it often enough" doesn't apply to, say, arm-breakage or self-immolation :)

What you call a proper VM cluster has been pretty terrible in my experience as both a developer on an early cloud and a consumer of large government-facing clouds.

Typically the network or SAN becomes oversaturated and the VMs shit the bed. AWS, on the other hand, was considerably more reliable, and I'd argue they've made better decisions rather than cut corners.

The SAN becoming oversaturated isn't something that just "happens". Between establishing limits ahead of time and monitoring, that shouldn't happen without someone knowing well in advance.

Just to be clear, I'm defending Amazon against an accusation that they cut corners in an install that was up in 2007 or earlier. Now, we could have focused on Xen guests shitting the bed for no apparent reason, or a flaky switch port, but we decided to focus on storage.

OK, so I'm not proclaiming to be an expert, but as someone working in the area in 2007, buying something off the shelf like it ain't no thing, you're getting something like a NetApp with a limit of 512 iSCSI initiators, or a Sun Amber Road where your only form of automation is an SSH console with a big warning stating it's unsupported.

From memory, there was no such thing as setting quotas on the number of IOPS an iSCSI initiator could do; in fact, I'm fairly sure IOPS quotas just didn't exist, period, as the vendors weren't really up to speed with this new selling-VMs thing. So basically, we're suggesting that it's a good idea to just buy a SAN to run an undetermined number of VMs that are going to do an undetermined amount of IOPS.

OK, cool, you're now indebted to storage vendors selling you new shelves at £80,000 a pop for those extra IOPS you so desperately need. Now, to be fair, Amazon could probably afford it, but your VMs would still be a lot more expensive and would probably have still been totally disposable when your switch port decides to blip traffic to the SAN or, as previously stated, Xen shits the bed.

None of these things might be a problem today, I don't know; I'm more a consumer than a producer of clouds these days. But I'd suggest these criticisms are bullshit. They come from some obviously smart people, but bullshit nonetheless.

Well, true; even the RMS Titanic, with its fifteen bulkheads, was no match for an iceberg. Let's just hope that this new hypervisor for OpenBSD is built with better-quality steel :)

Also, it's worth mentioning that EC2 instances are meant to be ephemeral; Amazon doesn't provide any semblance of a guarantee that your instances won't reboot, and assumes that you intend for all your "machines" to be arbitrarily rebootable. Not saying that any hypervisor implementation right now is particularly good; only that Amazon isn't exactly the best representation here.

How many bugs has the OpenBSD team found vs. those found in Xen? That would be a relevant comparison. From there, an assessment of the exploitability of each, given OpenBSD's attention to mitigation.

What you said, on the other hand, was meaningless given that OpenBSD has had bugs that could lead to a crash. The real question is, "Do Xen or security-focused virtualization schemes (a) reduce the number of vulnerabilities with the impact of kernel-mode 0-days, and/or (b) prevent, contain, or facilitate easy recovery from OS- and app-level 0-days?" Prior experience in security-focused efforts shows yes to both questions. Xen isn't one of them, as the existence of the Xenon project shows. However, its small size and improvements over time make it substantially less risky than an arbitrary OS + software combination, especially if the layer above is also addressed (eg MirageOS). Even Galois Inc.'s conservative teams are using it in some work.

Well, the point is my security would not have been improved, in any way, by running on top of Xen and sharing my server with some rando.

I agree with that. It's why I still recommend bare-metal hosting and physical separation where possible. ;)

Why would you be sharing your server with some rando? You don't have to share your Xen deployments with other people if you don't want to, you know :)

You've never had to reboot your server?

And why can't you do that with containers or jails? Or filesystem snapshots? Or instant re-deploy to a known good state from ansible/puppet/salt/chef?

Just because there's a nice mouse-navigable GUI for entry-level Windows admins doesn't mean it's a good solution.

If you need snapshots for doing updates, it's because your software is fragile, undocumented, and you don't have a deployment procedure. Fix that, and upgrades will be easy and not scary.

Or you could have a consistent repeatable way of doing it across multiple apps and operating systems. I am not averse to your argument that doing it at the app level has benefits, but being able to do it at the VM level in a consistent way is going to be simpler when you expand beyond a handful of applications.

I agree that there is convenience to the execution controls VMs provide. But if you can't repeatably and easily stand up a replacement/duplicates of a system from configuration management and backups, then you don't actually have configuration management or backups.

Sure, to be clear I'm not arguing against those things, I'm just arguing that VM level snapshots and clones are also a tool in the operational arsenal and is in fact a very popular one that works well in practice.

It looks like I agree entirely with you, just not the guy that started this whole thread by saying VMs are a security necessity.

Snapshotting is sorta orthogonal to virtualization though, isn't it? Just snapshot at the storage level, no?

KVM, VMware, and Xen all allow you to take a snapshot of memory as well as of disk. It's one of the tools used to migrate Xen instances between nodes, for example.

x86 hardware support for virtualization has changed quite a bit since 2007, though. https://en.wikipedia.org/wiki/X86_virtualization#Intel_virtu...

True. Looks to have gotten a lot better, too.

That E-Mail is from 2007...

This is from 2013: https://youtu.be/OXS8ljif9b8?t=395

"That E-Mail is from 2007..."

Yes, people have repeated that line since 2008. Readers certainly saw the timestamp. It was the main Q&A and search result, though. Hence asking for a more recent post.

"This is from 2013"

And that's a start. Appreciate the link. I don't see many of his interviews, but that was the first time I saw him admit to being behind. Good that they're changing their attitude a bit.

The expected and somewhat disappointing part is where he has no answer to what can be done to raise the bar past a few exploit mitigations. There's something like four decades of work (and worked examples) showing how to increase assurance of security in hardware, software, and systems, especially in capability systems, microkernels, static analysis, covert channel analysis, and so on. He could... idk... apply some of that instead of mocking and ignoring it like most of the mainstream does. FreeBSD is ahead here with the SEBSD and Capsicum work.

One project did port OpenBSD to the L4 kernel to isolate it in a protection domain. The idea, as in the Nizza Security Architecture, is to be able to split the system into legacy, untrusted stuff in a VM and trusted, highly-assured components running directly on the microkernel. A proven model that would benefit OpenBSD by dramatically reducing attack surface. This is done in the embedded space (eg INTEGRITY, PikeOS Hypervisor), for up to 8 ISAs each, for those wondering about portability.

Just one of dozens of techs to draw on to increase assurance. Will be interesting to see if they draw on any of this or get left behind [again] by those that do.

The OpenBSD covert channel analysis team is understaffed at the moment.

Both EROS and Genode managed to do a lot of what I mentioned with less staff and time. Plus, with a solid architecture that components above can leverage without redoing all the highly-assured stuff over and over again: what I call security ROI. Your huge TCB, drivers probably in the kernel, and language-related issues are where I predicted most of the trouble would come from. (Did it?) Something like the Nizza Security Architecture would benefit you, along with the interface checks you're already good at and a better language that translates to C with checks automatically inserted.

As far as covert channels go, Google Kemmerer's Shared Resource Matrix for a method that has worked for amateurs and pros alike on a budget. Any person handling a subsystem can apply it. Even using it at the function-call level can tell you a lot. If understaffed, just use it on any component handling secrets, for storage and timing channels. Best bang for the buck.
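For the curious, the method is mechanical enough to sketch in a few lines of Python. This is a toy illustration of the idea, not Kemmerer's notation; the resource names and operations below are made up:

```python
# Toy Shared Resource Matrix (after Kemmerer): rows are shared resource
# attributes, columns are operations; each cell records whether the
# operation can Reference (R) or Modify (M) the attribute.  Any attribute
# that one operation can modify and another can reference is a candidate
# storage channel worth manual review.

MATRIX = {
    # attribute:          {operation: marks}
    "file_exists":        {"create": {"M"}, "delete": {"M"}, "open": {"R"}},
    "disk_blocks_free":   {"write": {"M"}, "statfs": {"R"}},
    "file_lock_held":     {"lock": {"M", "R"}, "try_lock": {"R"}},
    "process_priority":   {"renice": {"M"}},  # modified but never read: no channel
}

def candidate_channels(matrix):
    """Return (attribute, modifiers, readers) triples that could leak."""
    out = []
    for attr, ops in matrix.items():
        modifiers = sorted(op for op, marks in ops.items() if "M" in marks)
        readers = sorted(op for op, marks in ops.items() if "R" in marks)
        if modifiers and readers:
            out.append((attr, modifiers, readers))
    return out

for attr, mods, reads in candidate_channels(MATRIX):
    print(f"{attr}: modified by {mods}, referenced by {reads}")
```

The real method then asks, for each flagged attribute, whether the modifier and the reader can actually be driven by different security domains and whether the leak is exploitable in practice; the matrix just narrows the search.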

So go run EROS. Why aren't you already?

Nah, I'm just not running OpenBSD. Gotta stay where the innovation is at, both in productivity and security. ;)

Ok, to be more serious for a bit: a goal of the OpenBSD project is to produce a Unix-like operating system. To the extent "innovation" is "don't be Unix", it's somewhat counter to project goals.

Well, there we go. That's unfortunate, but understandable. However, it still allows you to build on decades of work in security engineering (incl old secure UNIXes). The easiest route at this point is putting an OpenBSD API on top of a microkernel, pulling security-critical functionality out of the main system onto the microkernel, and bulletproofing your middleware for these. Additionally, writing the code in a way that lets tools such as the Astrée analyzer work on as much of it as possible will knock out many bugs. Compiler tools that automatically transform kernel or user-mode code to make it safer might also help; SoftBound + CETS comes to mind.

Much to draw on or improve while remaining a UNIX. The microkernel + user-mode virtualization approach has already been done in academia and commercial products. So, it could be done here. Will they? Another matter entirely. I doubt it.

Truth be told, though, I voted for the Xen Dom0 to use OpenBSD because 0-days would be its main concern. And we know which team is the best at removing them from a UNIX codebase. ;)

But it's so fun to blast out buzzwords

It's fun to do what the sheep or crowds are doing. That's Windows, Mac, BSD, or UNIX at any given time. Throw in C/C++ everywhere, browsers, Flash, HTTP as universal transport, language runtimes too bloated to understand, the Cloud... the more vulnerabilities and maintenance horrors, the more they'll like you and the more fun you'll have with them.

Our security needs to improve dramatically. Some ways are proven to work, some are proven not to. Pushing the second category is extremely fun, you might get famous in Silicon Valley companies, maybe invited to DEFCON, and everyone will make excuses for its problems later. Lots of buzzwords there, I agree: can't even mentally track all of them. Then there's a tinier group pushing methods that work because they're necessary, even if not all fun. Staying with that group on principle if not profit.

In addition to a new VMM, why not enable OpenBSD to run as a guest VM on AWS/Xen?

Ravello claims to do that:


Haven't used them: just found them looking for OpenBSD on AWS.

Indeed. Why not?

This March 2015 post lists some known work items: http://www.joelroberts.org/openbsd/

  - Kernel hangs when bringing additional CPUs online--no working SMP
  - PV drivers for net and disk needed. Probably easy to take from NetBSD
  - System needs testing for stability
An old (2012?) comparison of 4 BSDs on Xen and KVM: https://gmplib.org/~tege/virt.html

From the link above comparing BSDs on Xen:

"OpenBSD probably works poorly by design, since its lead developer despises virtualisation."

I have seen this before. Can someone summarise the reasons why Theo de Raadt hates virtualisation?

Basically he thinks any hypervisor that isn't OpenBSD is not secure.

"x86 virtualization is about basically placing another nearly full kernel, full of new bugs, on top of a nasty x86 architecture which barely has correct page protection. Then running your operating system on the other side of this brand new pile of shit." http://marc.info/?l=openbsd-misc&m=119318909016582

(I see justinsaccount already posted this below.)

That's quite true, except that even OpenBSD is not _necessarily_ secure as a hypervisor.

I believe that Xen SMP hang was fixed in 5.8:


The mailing list post discusses the use of virtio for drivers.

That gets you a lot of the way to KVM support.

OpenBSD has had VirtIO guest support (supported by KVM, VMware, and now VirtualBox too) for a while now.

AWS uses Xen and domU support is a lot more invasive - OpenBSD had supported it in the past but I believe it was dropped?

Well, PV domU support is more invasive, but HVM is not (it is just like KVM), and Amazon supports both. The middle ground is the PVHVM mode, which requires some Xen drivers for additional support, rather than the virtio drivers that KVM/bhyve use; Xen has a different driver model.

"For example, I've been baking in support for things that the other implementations don't care about (namely i386 support, shadow paging, nested virtualization, support for legacy peripherals, etc) and trying to backfit support for those things into another hypervisor would probably have been just as hard as building it from the ground up."

I don't get it. qemu does all of that already. All he needs to do is implement the same kernel ioctls for OpenBSD that KVM implements on Linux, and he gets all of that for free.
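For a sense of how small that userspace contract is, here's a rough sketch that probes the real KVM ioctl interface on Linux (KVM_GET_API_VERSION is the actual Linux ioctl number, and the stable API version has been 12 for years; the probe simply degrades gracefully on machines without /dev/kvm):

```python
import fcntl
import os

# On Linux, KVM's ioctl "magic" byte is 0xAE and KVM_GET_API_VERSION is
# request number 0, so _IO(0xAE, 0x00) works out to 0xAE00.
KVM_GET_API_VERSION = 0xAE00

def kvm_api_version(path="/dev/kvm"):
    """Return the KVM API version, or None if KVM is unavailable."""
    try:
        fd = os.open(path, os.O_RDWR)
    except OSError:
        # No /dev/kvm (not Linux, no hardware virt, or no permission).
        return None
    try:
        return fcntl.ioctl(fd, KVM_GET_API_VERSION)
    finally:
        os.close(fd)

ver = kvm_api_version()
print("KVM API version:", ver if ver is not None else "unavailable")
```

A KVM-compatible vmm would need to answer this same handful of ioctls (create VM, create vCPU, run, get/set registers) on its own device node; the device emulation above that layer stays in userspace, which is where qemu lives.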

qemu runs as a userspace process so it is relatively slow.

Essentially, vmm is to OpenBSD what KVM is to Linux. And yes, a KVM compatible interface can be built, as Mike mentioned.

So, is this thing going to run all the IO emulation in the kernel? That sounds like a horrible architecture security-wise, very much not what I would have expected from OpenBSD, which always seemed to pick security over performance.

By far most security bugs in modern hypervisors come from bugs in the emulation of legacy devices, because it's complicated and messy. This is why for example Red Hat's hypervisor solution has had a lot of work put into isolating the qemu process with SELinux.

Great to see more competition for kvm & FreeBSD's bhyve

And Xen :) NetBSD runs it well

Arguable. Xen has become more and more Linux-centric, and there seemingly has been less and less interest in the NetBSD developer community in keeping abreast; see stub domains.

I wonder how reusable this will be on the SPARC64 port.

There was never enough interest in Linux/SPARC to make a Xen or KVM port feasible. But OpenBSD's community is a different animal.

OpenBSD/sun4v already has some support for virtualisation via logical domains. See: http://www.openbsd.org/papers/eurobsdcon2011-kettenis.pdf and the ldomctl manpage here: http://www.openbsd.org/cgi-bin/man.cgi/OpenBSD-current/man8/...

If you don't mind me asking, I've always wondered - what do you / people use SPARC for?

Testing - per the desired hardware page: http://www.openbsd.org/want.html

"It is important to spread sparc64 around the development community, since it is the most strict platform for detecting non-portable or buggy code."

for me, personally, it's perverse historical interest.

in a larger sense, it's because Oracle's support policies make the market-clearing price for used SPARC hardware very, very reasonable. The value of 3-year-old SPARC gear is essentially $1.

As a real Solaris/SPARC customer, it's cheaper to buy new than try to get a piece of used equipment into Oracle's good graces.

If you don't mind me asking, where are you finding such cheap SPARC hardware? Even 5-year old hardware still goes for thousands of dollars on the second-hand market.

He's full of crap. Any of them that are obviously in good condition go for thousands of dollars with T1 processors. Some of those without HDs, used, and in unknown condition go for several hundred. Not apples to apples, though, as no business will depend on such unknowns for the mission-critical stuff that SPARC is best used for.

Interestingly enough, even AlphaServers still sell for good money on eBay. Certainly they were great servers in their day, but their performance is way behind. Meanwhile, faster Itaniums from SGI can be had for $100-200. The old-hardware market isn't linear or predictable. Personally, though, I think those AlphaServers are still worth it if one understands OpenVMS clusters or PALcode. ;)

Unclear what this means. Is this a hypervisor that runs under OpenBSD? Or a hypervisor under which OpenBSD runs? Or an attempt to kludge OpenBSD into a hypervisor? Or some kind of Docker-like container system for OpenBSD?

A hypervisor that runs under OpenBSD, and which can run OpenBSD.

Was doing some digging into the research. Perhaps someone happens to have a copy of Bill Broadley's 2007 IT Security Symposium presentation/paper?

It was hosted at http://shell.cse.ucdavis.edu/~bill/virt/virt.pdf

http://www.lugod.org/presentations/virt-lugod.pdf and http://taviso.decsystem.org/virtsec.pdf

These seem to be the most popular ones that show up.


  From:       Theo de Raadt

  > Virtualization seems to have a lot of security benefits.

  You've been smoking something really mind altering, and I think you
  should share it.

  x86 virtualization is about basically placing another nearly full
  kernel, full of new bugs, on top of a nasty x86 architecture which
  barely has correct page protection.  Then running your operating
  system on the other side of this brand new pile of shit.

  You are absolutely deluded, if not stupid, if you think that a
  worldwide collection of software engineers who can't write operating
  systems or applications without security holes, can then turn around
  and suddenly write virtualization layers without security holes.

  You've seen something on the shelf, and it has all sorts of pretty
  colours, and you've bought it.

  That's all x86 virtualization is.

The past several years have brought many additions to the x86 architecture that make it more than shit atop shit. When he wrote that, things like Xen required you to build a kernel with special options for it to run, with all kinds of new and exciting bugs to discover. Things like page table virtualization, which provides much better page protection; the ability to shadow the VM control structures, which allows for nested VMs; and IOMMUs, which allow VMs to access specific pieces of hardware, just didn't exist. x86 is in better condition for virtualization now, and layouts can finally be done in a semi-sane way.
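As a quick illustration, several of these features show up as CPU flags you can inspect from userspace. A hedged sketch: the flag names here are the ones Linux exposes in /proc/cpuinfo, the sample string is made up, and newer features like VMCS shadowing may only appear on recent kernels, so they're omitted:

```python
# Well-known /proc/cpuinfo flags for the hardware-virtualization
# features discussed above.
VIRT_FLAGS = {
    "vmx": "Intel VT-x (hardware virtualization)",
    "svm": "AMD-V (hardware virtualization)",
    "ept": "Intel Extended Page Tables (page table virtualization)",
    "npt": "AMD Nested Page Tables (page table virtualization)",
    "vpid": "Virtual Processor IDs (cheaper VM context switches)",
}

def virt_features(cpuinfo_text):
    """Return the recognized virtualization flags found in a cpuinfo dump."""
    present = set()
    for line in cpuinfo_text.splitlines():
        if line.startswith("flags"):
            present.update(line.split(":", 1)[1].split())
    return {f: desc for f, desc in VIRT_FLAGS.items() if f in present}

# A made-up sample line in /proc/cpuinfo's format:
SAMPLE = "flags\t\t: fpu vme msr pae vmx ept vpid sse2"
for flag, desc in sorted(virt_features(SAMPLE).items()):
    print(f"{flag}: {desc}")
```

(IOMMU support, by contrast, is a chipset feature rather than a CPU flag, so it doesn't appear in the flags line at all.)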

Yeah, but on the other hand one might argue that there is no reason to believe Intel's hardware implementation of such features is more correct and less prone to errors than Xen's implementation in software.

A mailing list post from 8 years ago. And a lot of virtualization implementations have had security issues. I don't think that precludes an attempt at getting it right in OpenBSD.

> [...] an attempt at getting it right in OpenBSD.

So... The others got it wrong, and you're gonna get it right! :-)

How do you justify the 1.3% share on servers[1]? There must be something GNU/Linux got right! There's not even a mention of OpenBSD[2], just Free and Net.

People must be stupid, right? :-)

[1] http://w3techs.com/technologies/details/os-unix/all/all

[2] http://w3techs.com/technologies/details/os-bsd/all/all

OpenBSD (collectively) has made a career out of getting it "right" where other people have not re: security. They've had their own missteps but that doesn't mean they shouldn't give it a try.

> How do you justify the 1.3% share on servers[1]?

OpenBSD is a research operating system.

A lot of their development and deployment methods do not align with the needs/wants of large infrastructure deployments (e.g. biannual releases, supported for 1 year).

Happy to cull/reinvent legacy to suit modern systems and practices (e.g. utf8, doas, opensmtpd/ntpd/bgpd/sshd etc...)

Refusal to support hardware without documentation or binary kernel blobs.

Focus on simplicity and correctness, rather than legacy and kludges, which often gets in the way of sysadmins wanting to Get Stuff Working.

Take your pick?

Truth is, I know ... But it gives me a small but substantial amount of pleasure teasing OpenBSD fans :-P

That said, I am grateful to OpenBSD developers because I use their software daily: OpenSMTPd, PF, SPAMd and SSH.

Appreciate the market share update, that confirms it. BSD is dying.

We're right back to Slashdot golden-era Netcraft copypastas.

(Also, when will people learn that market share is not a proxy for technical superiority?)

People will learn that market share is not a proxy for technical superiority the precise moment they find themselves advocating something with a small market share, and not a moment before.

and it will be heavily compartmentalized to that one thing.

You know, I haven't seen that in forever. I feel nostalgic.


It is now official. Netcraft has confirmed: BSD is dying

One more crippling bombshell hit the already beleaguered BSD community when IDC confirmed that BSD market share has dropped yet again, now down to less than a fraction of 1 percent of all servers. Coming on the heels of a recent Netcraft survey which plainly states that BSD has lost more market share, this news serves to reinforce what we've known all along. BSD is collapsing in complete disarray, as fittingly exemplified by failing dead last [samag.com] in the recent Sys Admin comprehensive networking test.

You don't need to be the Amazing Kreskin [amazingkreskin.com] to predict BSD's future. The hand writing is on the wall: BSD faces a bleak future. In fact there won't be any future at all for BSD because BSD is dying. Things are looking very bad for BSD. As many of us are already aware, BSD continues to lose market share. Red ink flows like a river of blood.

FreeBSD is the most endangered of them all, having lost 93% of its core developers. The sudden and unpleasant departures of long time FreeBSD developers Jordan Hubbard and Mike Smith only serve to underscore the point more clearly. There can no longer be any doubt: FreeBSD is dying.

Let's keep to the facts and look at the numbers.

OpenBSD leader Theo states that there are 7000 users of OpenBSD. How many users of NetBSD are there? Let's see. The number of OpenBSD versus NetBSD posts on Usenet is roughly in ratio of 5 to 1. Therefore there are about 7000/5 = 1400 NetBSD users. BSD/OS posts on Usenet are about half of the volume of NetBSD posts. Therefore there are about 700 users of BSD/OS. A recent article put FreeBSD at about 80 percent of the BSD market. Therefore there are (7000+1400+700)*4 = 36400 FreeBSD users. This is consistent with the number of FreeBSD Usenet posts.

Due to the troubles of Walnut Creek, abysmal sales and so on, FreeBSD went out of business and was taken over by BSDI who sell another troubled OS. Now BSDI is also dead, its corpse turned over to yet another charnel house.

All major surveys show that BSD has steadily declined in market share. BSD is very sick and its long term survival prospects are very dim. If BSD is to survive at all it will be among OS dilettante dabblers. BSD continues to decay. Nothing short of a miracle could save it at this point in time. For all practical purposes, BSD is dead.

> Also, when will people learn that market share is not a proxy for technical superiority?

When they argue their point honestly, and not reach for ex post facto rationalizations?

That's not nearly as contradictory as it seems at first. Mike didn't say running OpenBSD on vmm would be more secure than running on hardware. Without commenting directly on the security of vmm, it can be simultaneously less secure and still useful.

Ted, can I ask a question? What's your opinion on network sockets and 9P?

Without gainsaying what de Raadt says there, it's important to keep in mind that virtualization opens up the possibility to do things other than running a giant pile of Unix slop on top of a different giant pile of Unix slop -- see, for instance, projects like Mirage [1]

[1] https://mirage.io

To be fair that post is from 2007 which is like 32 years in tech years.

Virtualisation technologies both hardware and software have improved a lot since then.

But virtualization also does more nowadays. We have a lot more surface area to get hit.

Your link is from 2007-10-24. More recently, from ruBSD in 2013: https://www.youtube.com/watch?v=OXS8ljif9b8 (6:36 mark)

People do change their minds as they learn new things or the world changes.

It is one of my favorite posts of Theo's, but it is a bit dated. Still, what is advocated is not really practical: building a secure computer system (ISA, system architecture, OS). Something security folks have wanted to do for years and years; I even got to bid on one for some Maryland agency when I was at Sun.

That said, security-wise you could do a lot worse than just running OpenBSD out of the box on your servers connected to the Internet, and you'd be hard pressed to do better without a lot of training.

Theo has been softening on x86 virt for a long time.

Additionally, he's still right. Don't rely on it to enforce security boundaries (e.g. host untrusted systems and trusted systems on different tin), and his rant is totally congruent with virtualisation having a place in OpenBSD.

Just another iteration in the constant cycle of adding abstractions. What's another layer?

The benefit was explained in an early, secure-virtualization project:


It allows easier enforcement of isolation-based policies, easier verification of the TCB, compatibility with existing OSes/applications/tools, and optional debugging of the OS above. Attempts to build secure UNIXes totally failed because UNIX was inherently insecure in so many ways, and the work took many API modifications. Many VMs, separation kernels, etc. were built with strong security. Abstraction helps if applied correctly.

Exactly! We should all just give up and go home!


Qubes OS [1] must've shown him the way. Many already consider it a more secure alternative to OpenBSD, and I don't think Theo likes his OS being called "second best" in security.

[1] https://www.qubes-os.org/

Unlikely: there were a ton of server, desktop, and embedded virtualization projects and products before QubesOS. I listed them on their mailing list, asking why they were reinventing the wheel instead of building on established work or improving the hardware after they showed it was so flawed. Joanna's counter-arguments were disturbing and easily countered. Then she just started censoring any of my critiques on her blog while allowing positive comments, lol. Promotes confidence in its security and her comment section's accuracy...

Anyway, if OpenBSD copies anyone, I'd say copy the Nizza Security or Genode Architectures. They leverage the kind of good components (eg Nitpicker/NOVA) and tactics (eg resource management) I mentioned to QubesOS. I see Qubes since adopted a similar tactic in their graphics system. Has a few novelties but mostly a rehash of virtualization and CMW stuff good against common malware. Need stronger TCB and methods to stop nation-states. Other work at least goes in that direction although not there yet either.

Nizza Security Architecture https://os.inf.tu-dresden.de/papers_ps/nizza.pdf

Genode http://genode.org/

EROS (example of one of best approaches) http://www.eros-os.org/papers/IEEE-Software-Jan-2002.pdf

They did one thing really well. They actually provide a running system. You linked to Nizza and EROS papers, not software. Genode provides a "framework" - and basically runs complete systems in separate windows.

QubesOS provides a familiar environment where applications can be labeled with different security contexts and run completely isolated from each other. Basically, they win on execution right now. I'm sure there are loads of interesting approaches and papers describing how to make things better, but QubesOS is a product and it works. It also provides user documentation, while Genode's docs give you C function signatures.

Oh yeah, they did build a running system. It was in similar status to the others (eg Dresden's TUDOS) when I made my recommendations. All needed work, with QubesOS getting plenty from its dedicated team. Put enough work into something and it will certainly run: see Windows 3.1 on MS-DOS. ;) QubesOS is way better, naturally, but this shows the point that "it runs" doesn't validate any design or security claim made about it. Nor is it an excuse for using a bad approach or failing to adapt. Reminds me of programmers with defective code who argue, "But mine ran faster!"

That said, it's usable enough that even I recommend it as an option when strong attackers aren't the opponents. If they are, then it's unlikely to save you, and work should be put into the inherently stronger options to get them in better shape. Plus, there are some commercial separation kernels one can use that already run on desktops/laptops, etc. They all use security-focused kernels, user-mode drivers, user-mode networking stacks w/ hardening, trusted boot, I/O MMU, and so on. Not cheap, though, plus a risk of subversion or someone cutting corners. Decisions, decisions. ;)

Truth be told, the whole market (proprietary + FOSS) sucks one way or another. Enemies are probably going to get in if it's a desktop, due to the need to be compatible with so much risky garbage. Only the console approaches can be made strong enough with FOSS components right now. Tough trade-offs in ease-of-use, too.

My current recommendation for defending against strong attackers is my old approach: several cheap, hardened machines for physical separation with a KVM switch; a guard for sharing between them; sharing done over non-DMA, simple interfaces with simple, easy-to-parse protocols. Additionally, no wireless functionality (even disabled) in them at all. Worked before and still works for less than $1,000, but it ain't pretty or easy to set up.

"Genode provides a "framework" - and basically runs complete systems in separate windows."

Actually, it's a [barely-]usable system differentiated by a resource-management scheme, pluggable microkernels, minimal-TCB native apps, and/or running "complete systems in separate windows." Splitting between microkernel apps and VM's is a proven method that resisted NSA hackers in prior evaluations (eg XTS-400, INTEGRITY-178B). The part that really needs review for risks is their resource-management scheme. However, if proven, it will be advantageous in the benefits it offers, especially if the microkernel/microhypervisor is enhanced with INTEGRITY RTOS-style resource controls. Malicious apps mostly wouldn't be able to do shit unless there were hardware flaws or problems in the few, trusted components.

Meanwhile, QubesOS runs. So do my Linux LiveCD's and KVM boxes. Malware doesn't hit any of them because the best aren't trying: most targets use Windows or predictable Linux builds. We'll keep using such obfuscation until the [FOSS] strong stuff is ready.

> My current recommendation for defending against strong attackers is my old approach: several cheap, hardened machines for physical separation with KVM switch; a guard for sharing between them; sharing done over non-DMA, simple interfaces with simple, easy-to-parse protocols.

Could you share more about the physical transport for sharing, the data guard (is that a separate box with a Live CD) and wire protocols?

How do you protect against physical threats to unattended devices/data, e.g. do you have any form of trusted boot to verify the integrity of the BIOS, bootloader and OS?

There is no protection for unattended devices lol. That's a huge cat-and-mouse game. It's why I used to use little embedded boxes like ARTIGO's, which were easy to stash, along with tamper-evidence tricks. If there was tampering, the box can't be trusted any more. The few times it happened, it turned out to be a roommate bumbling around for some ridiculous reason.

There are many physical transports to use. My original hack was IDE in a non-DMA mode to get past serial's speed limits. Then I/O offloading onto dedicated, cheap computers to pre-process the data and force it into correct spot. Next step was synthesis of the same onto cheap, I/O-focused FPGA's or microcontrollers before I had to put a pause on those developments.

The guard [1] is the strongest part. It used simple hardware, a security-focused microkernel, carefully written drivers, optional middleware for internal flow control, and separate partitions for each logical function. Anything incoming is fully scrutinized before moving on. Certain protections, such as encryption, might be applied automatically. The modular, layered, often FSM-based implementation of each component allows the highest amounts of analysis and verification, with many errors provably absent. You can also gradually add advanced security technology as it comes online, such as SecureCore, Cambridge's CHERI processor, DIFT, SoftBound + CETS, etc.
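To make the FSM point concrete, here's a toy Python sketch of that style of scrutiny (my own illustration, not code from any real guard). The "KEY=VALUE" grammar is hypothetical; the point is that each byte drives a small state machine and anything outside the whitelisted grammar gets dropped:

```python
# Toy illustration of FSM-style input scrutiny in a guard partition.
# The "KEY=VALUE\n" grammar here is hypothetical: KEY is uppercase
# ASCII, VALUE is printable ASCII, whole message at most 256 bytes.
KEY, VALUE = 0, 1

def scrutinize(data: bytes):
    """Return the messages that match the grammar; drop the rest."""
    accepted, buf, state = [], bytearray(), KEY
    for byte in data:
        buf.append(byte)
        if len(buf) > 256:                      # oversize: drop, resync
            buf.clear(); state = KEY
        elif state == KEY:
            if 65 <= byte <= 90:                # A-Z: stay in KEY
                pass
            elif byte == ord('=') and len(buf) > 1:
                state = VALUE                   # non-empty key seen
            else:                               # bad byte: reject message
                buf.clear(); state = KEY
        else:                                   # VALUE state
            if byte == ord('\n'):
                accepted.append(bytes(buf))     # complete, valid message
                buf.clear(); state = KEY
            elif not (32 <= byte <= 126):       # non-printable: reject
                buf.clear(); state = KEY
    return accepted
```

The particular grammar doesn't matter: the point is that a whitelist FSM with a handful of states is something you can exhaustively analyze, unlike a full protocol parser.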

So, the concept is physical separation into different domains. The computers use what they need to use. The Internet-facing ones typically did use LiveCD's and BIOS's I could protect to a degree (eg the oldest boxes had jumpers). If it wasn't a LiveCD, it was regularly restored from clean backups. Virtualization, hardening, and mandatory controls are used as appropriate, but I assume they will be toast. Simpler formats like text, HTML 3.2, BMP, and so on allow easy analysis by the guard. If complex stuff is allowed, it goes over a data diode [2] so any malware isn't leaking things back.
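For anyone unfamiliar with data diodes, the software side looks roughly like this (a hedged sketch of the general idea, not my actual setup): with no return channel there are no ACKs, so you rely on sequence numbers, checksums, and blind repetition and let the receiver sort it out.

```python
import struct, zlib

REPEATS = 3   # blind repetition, since a one-way link has no ACKs

def frame(seq: int, payload: bytes) -> bytes:
    """Header: 4-byte sequence number + 4-byte CRC32 of the payload."""
    return struct.pack("!II", seq, zlib.crc32(payload)) + payload

def diode_send(sock, addr, messages):
    """sock is anything with sendto(), e.g. a UDP socket."""
    for seq, payload in enumerate(messages):
        f = frame(seq, payload)
        for _ in range(REPEATS):
            sock.sendto(f, addr)

def diode_check(data: bytes):
    """Receiver side: verify the CRC; return (seq, payload) or None."""
    seq, crc = struct.unpack("!II", data[:8])
    payload = data[8:]
    return (seq, payload) if zlib.crc32(payload) == crc else None
```

The receiver dedupes on sequence number and drops anything failing the CRC; a gap in the sequence means a frame was lost with no way to ask for it again, which is the price of the one-way guarantee.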

For a similar approach at the network/host level, see Boeing's OASIS architecture [3], which builds on their high-assurance Embedded Firewall (a PCI card), SNS Server (highest rating/field-use ever), and a bunch of custom components/strategies. Post-police-state, I'm basically just swapping out Linux distros as I can't afford to build my old setups any more. My current R&D is on tools such as crash-safe.org, CHERI (w/ CheriBSD), and the cryptographic methods that all protect system confidentiality and integrity from the hardware up. I've been working on a verified ASIC development flow to implement them, with that done up to the RTL level. Current explorations are high-level synthesis, analog synthesis, and my medium-high-assurance RAD methods for software. I post most of my results on Schneier.com, etc. instead of my own blog for impact; we've seen some companies copy it without credit. I can email you those if I haven't.

[1] https://en.wikipedia.org/wiki/Guard_%28information_security%...

[2] https://en.wikipedia.org/wiki/Unidirectional_network

[3] http://www.dtic.mil/get-tr-doc/pdf?AD=ADA425566

It would be very useful if you could post some links to relevant threads about Qubes.

Couldn't find it last I Google'd for whatever reason. Could've been lost, moved, or another thing she censored. Started with a conversation on one forum where I mentioned prior work and separation kernels. A reader brought it up on QubesOS mailing list with Joanna dismissing it and cutting our comments down. So, if you're wondering about my tone, that's why. She exploded on me with ranting nonsense mixed with some useful points about her position. Went back and forth a few rounds.

Anyway, my cross-domain system saved a copy of my end of the conversation. My style is to fairly quote the other person before each reply so the context is obvious to readers. Anything else was ranting filler that was unimportant. You'll be able to clearly see what she was saying. Here's a Pastebin of the two logs.


Thanks very much for that.

For what it's worth, I've found the Qubes team to be combative and defensive at times. I especially recall not managing to get straight answers about their VPN and Tor networking modules. But then, both were contributions from users, so their apparently dismissive attitude wasn't totally outrageous.

Good to know it wasn't just my own anti-charismatic personality. ;) That she didn't see the value of user-mode drivers for robustness and thought Darwin was representative of microkernel design were both disturbing in terms of "Should I trust this?" It's like they were smart on the things they published but didn't have a clue about security engineering outside of that.

So, I have no intention of ever depending on it for strong security: just maybe regular malware or containing effects of spyware, bloat, etc.

The best path to strong security has always been hardware isolation, right? So now that we have mass-market microcomputers, why bother with VMs? What do you think of Tinfoil Chat[0]? The notebook form factor could contain several microcomputers, with optical isolation, or even outright air gapping. But closed-source hardware and firmware is still problematic :(

[0] https://github.com/maqp/tfc-otp

Closed-source hardware is indeed a problem: if the HW of the TxM is pre-compromised, the device/malware running on it might spit out what it thinks is the key via serial or an alternative covert channel.

If you start developing on top of TFC, please create a Github fork at some point and submit pull requests to any typos / issues you might find.

Not really. I started out thinking that, but I'm not really sure. There's a number of models. The thing the preventative ones all have in common is that they impose control on the flow of information in such a way as to prevent attacks. Separation, like address spaces, is a recurring concept and technique, but not the only one. So, I use the term "information flow control," albeit it might be used differently in academia. The other model, covered by diversity and obfuscation, is to create a disconnect between what the attacker envisions and what they can accomplish, creating probabilistic security. The first is great against "known knowns" and "known unknowns": specific attacks or non-specific worries in known risk areas. The second is great against straight unknowns, esp the tricks nation states devise. Combining the two is most powerful, hence my recommendation. There are many models of each, which further muddles things for attackers.

Far as Tinfoil Chat, I've recommended it heartily as a project to use and improve. Markus Ottela took what he learned from prior work and our comments at Schneier's blog (esp on data diodes & physical separation) to create a unique, solid design. He's been posting on the blog for feedback for months, we've suggested many risk mitigations (eg polyciphers, continuous transmission), and he's integrated about every one into his system. Most just ignore such things or make excuses: Markus is 1 in 1,000 in (a) applying what's proven and (b) not letting problems become legacy "features."
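For readers wondering what "continuous transmission" buys you: frames go out at a fixed size and rate whether or not there's a real message, so an observer can't tell when you're actually talking. A toy sketch of that idea (my own illustration with a made-up frame format, not TFC's actual code):

```python
import os, queue, struct

FRAME_SIZE = 256   # every frame on the wire is exactly this size

def pad(msg: bytes) -> bytes:
    """Real frame: 0x01 marker, 2-byte length, message, zero padding."""
    assert len(msg) <= FRAME_SIZE - 3
    body = b"\x01" + struct.pack("!H", len(msg)) + msg
    return body + b"\x00" * (FRAME_SIZE - len(body))

def next_frame(outbox: queue.Queue) -> bytes:
    """Called on a fixed timer: emit a real message if one is queued,
    otherwise a dummy frame, identical in size and timing."""
    try:
        return pad(outbox.get_nowait())
    except queue.Empty:
        return b"\x00" + os.urandom(FRAME_SIZE - 1)
```

In a real system each frame would then be encrypted before transmission, so real and dummy frames are indistinguishable by content as well as by size and timing.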

So, yeah, I recommend it. Once my personal situation stabilizes, I plan to reimplement it with a tiny TCB on appropriate devices. I'm probably going to do a portable implementation of Send for microcontroller-style systems. Receive will be a Linux box hardened with virtualization or obfuscation security methods. Genode if it's up to it by then. The transport will be a more hardened, cheap box with just that functionality. I'm going to use CHERIBSD, if possible, just to experiment with it. Might replace the raw, serial links with MCU's or FPGA's for higher-speed, one-way I/O. Optical is highly likely (good guess). Eventually, I'm going to put it in an appliance with several, cheap boards so it's all integrated.

On my extensive backlog for now. But, yes, it's one of the best and practically has no TCB. Great design. Can be reused for email, audio, video, and maybe filesharing. Will be my interim framework until my next high-assurance system is ready.


It's gotten backlogged for me too. I started obsessing about potential EM coupling across optoisolators. But to test, I need a Faraday cage and gear. ...

Anyway, I'll check out the discussion on Schneier's blog.

It's kind of spread out all over the place lol. Would be difficult to even integrate. For Tinfoil, it's best to just grab the code of the Poly variety and start using/improving it. Far as the 100+ other topics, I can give you a list of links to my designs and essays there if you want to dig through for something worth building on. All I ask is credit as Nick P for whatever part my work contributes.

Thanks for the warning :)

I'd appreciate whatever links you can share. And I would be glad to credit you.

Forgot to tell you that I emailed it to your RiseUp address.
