Hacker News
Docker Acquires Unikernel Systems as It Looks Beyond Containers (techcrunch.com)
279 points by amirmc on Jan 21, 2016 | 102 comments



I was thinking: Docker. Hmm. Containers. Hmmmm. Xen developers. Hmmmmm. Seemed really boring until I saw "Anil Madhavapeddy, the CTO of Unikernel Systems." Oh... I know that name: it's on quite a few IT/INFOSEC papers I stashed and shared over the years. A smart researcher with a practical focus. Didn't know he was CTOing at a startup.

Yeah, Docker is about to get some enhancements for sure. Maybe some real security improvements, too. You can count on it.


I think he also has a book on OCaml. I've been following MirageOS; hopefully something great comes of this.


http://www.amazon.com/Real-World-OCaml-Functional-programmin...

I had heard of it and that it was good. Didn't know he was one of the authors. Even more props. :)


It is good! I find myself on the online version frequently. I'll likely be purchasing a hardcopy soon.


Here's a blog post to get you up to speed on OCaml jargon, http://hyegar.com/blog/2015/10/19/so-you're-learning-ocaml/


Thanks for the link. I had actually found that page just after writing my own blogpost with a similar theme. Great resource.



Great video. I follow the Mirage project already, but I thought Anil explained it very well, and some good questions from the hosts too.


Appreciate it. I'll check it out later after work.


I used to be in the unikernel camp of "this is the next step in virtualization tech", but having played around with both containers and unikernels, and now developing with containers, I think unikernels are going to occupy only a very niche space.

There are two touted benefits of unikernels: performance and security. Performance turns out to be a red herring, as the overhead of an OS vs. a hypervisor is roughly equivalent (with the OS actually winning in some use cases).

Security is definitely an issue, but it's so abstract. My company is a compliance cloud provider (for a very specific industry's compliance), and we have gone with Docker because we get to use the OS as our hypervisor. That makes it much more extensible and, in our use case, more secure, as we are able to auto-encrypt all network traffic coming out of the hosts with a tap/tun virtual device.

Two things need to happen to make unikernels attractive. First, a new hypervisor needs to be built, one that is just as extensible as an OS around the isolation primitives, and that offers something extra as well (like the ability to fine-tune resource management better than an OS can). Secondly, a user-friendly mechanism like Docker needs to happen.


There are many other benefits to unikernels, especially depending on which implementation you choose. For example, MirageOS and Rumprun are good examples of clean-slate vs. current-systems approaches. I'd recommend reading some of the articles at http://unikernel.org/resources to get a better view of this.

As for 'a user friendly mechanism like Docker' ... well, I hope today's news convinces you that it's in progress.


I came down harder than I wanted to. An acquisition like this definitely moves the ball forward. I'm excited to see what comes of it!


> "a user friendly mechanism like Docker needs to happen"

Have you looked at OSv and Capstan[1]? It is just like Docker in that respect.

[1] https://github.com/cloudius-systems/capstan/blob/master/READ...
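For anyone curious, Capstan's workflow does resemble Docker's: a Capstanfile in the repo describes the image, and the CLI builds and boots it. A rough sketch (commands follow the README linked above, but treat the details, and the base image name, as illustrative):

```shell
# Describe the image in a Capstanfile, much as you would in a Dockerfile.
# The base image name below is illustrative.
cat > Capstanfile <<'EOF'
base: cloudius/osv-openjdk
cmdline: /java.so -jar /app.jar
files:
  /app.jar: app.jar
EOF

# Build an OSv unikernel image from it, then boot it locally
# under QEMU/KVM or VirtualBox:
capstan build my-app
capstan run my-app
```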


Very interesting. I will give this a look.


> Performance turns out to be a red herring, as the overhead of an OS vs a Hypervisor turns out to be roughly equivalent

That presumes you get to choose. When you have a hypervisor either way (as on IaaS), the only "choice" is between a "bare" hypervisor, or a hypervisor plus an OS doing containerization.


On AWS, Linux is the dom0 OS on Xen (I don't know of any other cloud providers that differ, unless you get a dedicated host), so the same is true of unikernels right now as well. It's not true of Docker, though, as they now have a containers product. So...


I'd hope that AWS' container product runs at least each customer's containers in a separate VM per customer, if not one VM per app. It is too easy for a malicious customer to break out of a Docker container (far easier than breaking out of Xen, and they take those security vulnerabilities seriously).

The advantage here would be that Amazon could change their container product to allow only one layer of isolation (user code in a unikernel, inside Amazon-run Xen) instead of two (user code in Docker, inside Amazon-run Linux per customer, inside Amazon-run Xen).


I think this is a bit of an overstatement; breaking out of a Docker container is not "easy", especially if you add other security products on top. Then there is the extensibility of the OS, which can be used to make things even more secure.

If you look at security through the narrow lens of "being able to break out of your environment" then yes, Hypervisors are more secure, but you have to look at more than just that. The OS allows you to make more than just the host secure, it allows you to make the network secure, ensure that all customers get encrypted disks, etc...

Full disclosure, I work for a company doing this right now (Catalyze Inc.)


Yeah, I don't mean to say breaking out of Docker confinement is easy by any objective measure, just that it is much less hard than breaking out of Xen confinement, and even that seems to be a fair bit of concern for AWS already.


Yep, they do run each customer's containers in separate VMs. It's actually a bring-your-own-VM kind of design :-) You configure your own EC2 instances, install ECS agents...


The dom0 is not the hypervisor. Guests are not running on top of a dom0 in Xen - the dom0 is a management domain that is a VM running on top of the hypervisor just like any other, just with some additional privileges and management functionality.


But on Xen, it's not like the dom0 is required in the data plane. It's mainly there for config and for emulation of devices that don't have SR-IOV.


From https://cloud.google.com/compute/docs/faq#whatis ...

""" Google Compute Engine includes Linux and Windows based virtual machines running on KVM, local and durable storage options, and a simple REST based API for configuration and control. """

No Xen there


Wouldn't the unikernel security benefit be the same for a lightweight OS where the libraries/services had been carefully pared down to the bare minimum for the application?

It sounds like the main benefit to the Unikernel security is the implicit audit of all the services you want to roll into the kernel down to the bare minimum. I imagine that could be done with a more conventional architecture, it's just that nobody ever does.


It depends on which lightweight OS and which unikernel. But e.g. a stripped down Linux will still have a huge amount of C. If you're going to write your kernel in something safer, then you might as well make a unikernel, rather than creating a kernel/userspace split.

It's hard to see how you'd get a traditional OS stripped down anywhere close to e.g. the mirage-firewall unikernel (http://roscidus.com/blog/blog/2016/01/01/a-unikernel-firewal...)
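For a sense of scale: a Mirage unikernel like that firewall is built by retargeting the same OCaml source at different backends. A sketch of the workflow as of the Mirage 2.x era (flags and the output binary name are illustrative and vary between releases):

```shell
# Build the same MirageOS project as a Unix process for local testing,
# or as a standalone Xen guest. CLI shown is the Mirage 2.x style;
# later releases changed the flags.

mirage configure --unix   # target: an ordinary Unix binary
make
./mir-firewall            # runs as a normal process (binary name is illustrative)

mirage configure --xen    # same source, now targeting a Xen guest
make                      # produces a bootable .xen kernel image
```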


Not necessarily.

NodeOS cut out everything but the absolutely essential parts of the Linux kernel. No C libraries, no C compiler, no POSIX utilities, no user space, etc.

Instead, everything runs on V8 (which also takes care of sandboxing), and minimal tools were rewritten in pure JavaScript, including a git clone tool.

It's amazing how much you can cut out and still have a decent platform to build servers on.


What's next, PythonOS, RubyOS? Just learn OCaml; it's really not that hard.


In itself, besides being a ridiculous amount of work, it would be a good thing.

If every app ran on its own hypervised runtime instead of a "native" runtime, the world would be a better place.

The overhead of starting a VM is actually in the double-digit milliseconds, and it can be done while the initial I/O is performed.

The XDG-App project is trying to make it easier to sandbox Linux desktop and server applications by defining all the interfaces between the sandboxes and the OS. This work, while initially designed for containers, will eventually apply to VMs too.


You're still running the Linux scheduler in C, and you're still context switching for system calls. And can you run your whole stack as a user-mode program by changing one line in the build file, the way you can with Mirage?


This is a red herring. Presumably you'll be running unikernels in multi-tenant environments. There will still be scheduling and context switching overhead from the hypervisor. The Hypervisor isn't going to allow a VM full access to the hardware. Also, if you're the only process running, the Linux scheduler shouldn't actually have any overhead.


NodeOS, like most projects, is likely just a duct-taped jenga tower of the usual suspects. The attack surface will be huge. For example, OpenSSL will be in there with all of its gotos and malloc-reinventions. From a security point of view, I don't see a contest. The Mirage guys even clean-room implemented SSL in OCaml. This is no ordinary OSS project.


We do. We generate ultra-minimal Linux-based images tailor-made for your JVM app and provide rapid local testing on VirtualBox and full zero-downtime blue/green deployment orchestration on AWS https://boxfuse.com


There are certainly ways to drastically improve some sorts of performance, but not without a lot more work than just running a standard unikernel on Xen.

https://arrakis.cs.washington.edu/


Security and performance of hypervisors will never match physical hardware. A PDI that is storage-agnostic can deliver both better, and with full compliance. See http://jentu-networks.com


Hopefully Docker's paternalistic attitude doesn't infect Unikernel systems.

I'm not very hopeful given that their CTO is quite open about wanting to embrace-extend-extinguish competing technologies. This move embraces unikernels, and now they are perfectly positioned to go the rest of the way.

The discussion at https://news.ycombinator.com/item?id=10904452 may shed light on my complaint.


We're in a new weird technology landscape where everything is driven by pride and ego and money instead of "what's best for everyone" progress or even stable technological advancement.

Things like Docker and systemd are exactly EEE land grab lock-in plays to redirect "open source" value back towards private entities and unilateral control structures (instead of cross-vendor standards bodies, because standards bodies can't make individuals into billionaires).

How do we as lowly developers with effectively $0 net worth fight companies with billion dollar war chests? Over time, better does win (see: the collapse and accelerating irrelevance of Microsoft), but in the short term it's likely we'll get stuck in another 10 years of platform "dark ages" until we see the light again.


With systems like Docker we're just making sure that we don't lock ourselves in. If you decouple deployment and your code (which you shouldn't have coupled to a container anyway) from how you run it (containers, a VM, etc.), then there's no problem moving to or away from systems like Docker. This is how developers can make sure they're not getting the raw end of the deal, and it makes for a much cleaner setup anyway.


You may want to revisit your assumptions about the "accelerating irrelevance" of Microsoft. I'm no fanboy, but the direction they've taken in the last few years is very promising, and we've only begun to see what it will mean.


Microsoft is not irrelevant. Maybe in your corner of the world but they're still very much a big player and actually turning the ship (slowly) in the right direction.


> I'm not very hopeful given that their CTO is quite open about wanting to embrace-extend-extinguish competing technologies.

I don't think that's true, and I'm not sure what made you say it. Maybe you're referring to Google here, but I don't think it's true of either company.


OCaml and MirageOS FTW!

I hope this brings Unikernels adoption forward, and boosts a little more OCaml's adoption as well.


Exactly. They've been doing exemplary work. I especially like that, upon hearing of language-related obstacles, they try to actually fix or work around them instead of throwing up hands and defaulting to C or C++.


Docker has some really smart people leading it to success. But make no mistake, the economic thesis of Docker depends on a massive landgrab of vendor lock-in. This acquisition is a hedge against any Unikernel company looking to make the same landgrab.

There's a reason Docker is so heavily funded by the biggest cloud companies. They're the ones who stand to benefit from specialized Docker container optimized for their own platform. It's a great way to package open source services and leverage the effort of the developer community into centralized profit.

It seems blatantly obvious that Docker is looking to build the app store of devops. I wish them the best of luck, but they are going to face some heavy resistance from open source initiatives. There is nothing about Docker that makes it fundamentally superior to the systems it's based on, specifically the LXC project. When developers finally wake up to the fact that they are sleepwalking into a massive walled garden, Docker will lose some of its clout.


Docker doesn't build on LXC, they have their own implementation of a container runtime known as libcontainer, which has been used for over 2 years. I also want it noted that the OCI and runC mean that vendor lock-in is very hard. There is a tonne of work going into making Docker a wrapper for runC so you can replace Docker usage with runC and containerd without having to rewrite your configuration.

I've been involved in the Docker community as a free software developer for a long time. I may have squabbles with some of the people who work at Docker Inc. but many of them are very passionate about software freedom, so I don't think it's fair to say the project is bad because the CTO (who hasn't committed code for almost a year) has views that are unfriendly to free software.


I have to say, after having contributed thousands of commits to free software, recruited hundreds of people to do the same, and directed tens of millions of dollars to open-source contributions... this stings a little.

For the record I don't have views unfriendly to free software. But I have upset many people by disagreeing with their design opinions, and as a result not merging their pull requests. Sometimes those people imagine a conspiracy of evil VCs smoking cigars, instructing me to reject their patch for an obscure capitalistic motive that they can't quite express. When in reality the explanation is simpler: I disagree on their opinion, and I have commit rights.

If you point out an example of my supposed hostility to free software, I will be happy to refute it with objective facts. But then everyone is free to believe what they want to believe.


With the recent defection by Kubernetes against the Docker networking model, and earlier friction with CoreOS and the app container spec, this is already happening.

Unfortunately, I think the voices of reason are still getting drowned out in the Docker hype.


> Unfortunately, I think the voices of reason are still getting drowned out in the Docker hype.

I don't know about that. There has been considerable Docker backlash. One example from just a few days ago:

https://news.ycombinator.com/item?id=10920370

And this comment in particular:

https://news.ycombinator.com/item?id=10921076


OCaml is a fine language that most people don't use. If I want a unikernel in my own language, do I need to build one myself? I wonder if someone is building a unikernel that has external language bindings, which would allow one to create "high-level" unikernels. That would open up the possibility of completely bypassing the installation of a language runtime. For example, I could just type some Python code into a browser editor, and the backend could take the source code and fork a Python unikernel to run it. Docker can currently do this, but one still has to rely on an underlying OS to manage all the packages etc. Wouldn't it be nice if you could simply write "import xyz" and the unikernel took care of fetching them automatically?


It will probably be a while before you could build a clean-slate unikernel with something like Python, but doing what you want in a rump kernel looks to be just around the corner:

https://github.com/rumpkernel/rumprun-packages/tree/master/p...


It's getting really close! I have a post detailing how you would run Flask in a rumpkernel[0]. Right now any pure Python modules will run just fine.

We get into tricky situations when you need C extensions, e.g. MySQL or PostgreSQL. Since extensions must be statically linked, you have to decide upfront what you want. Either extension can be compiled in, but including both in the package by default bloats the image. So a production Python build is fairly custom right now.

Nothing insurmountable, just haven't gotten the workflow perfect.

[0] http://projects.curiousllc.com/flask-in-a-rump-kernel.html#f...
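For reference, the Rumprun workflow that post walks through looks roughly like this (the bake target and the launcher's network flags are from the rumprun docs of the time; treat the exact syntax as approximate):

```shell
# "Bake" a cross-compiled binary into a bootable unikernel image.
# hw_generic is one of the hardware targets; others include xen_pv.
rumprun-bake hw_generic python.bin python-unbaked.bin

# Boot it under QEMU with DHCP networking and run the app.
# Flag syntax is approximate; see the rumprun wiki for exact usage.
rumprun qemu -i -M 256 -I if,vioif,'-net user' -W if,inet,dhcp -- python.bin app.py
```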


Maybe this is a good time to learn OCaml then? There's also this I saw, literally what you're talking about: https://stackhut.com/


Thanks :) - we're super early in this space, but are incredibly excited by how we can use both containers and unikernels to help ease the development process and integrate with your current stack.

ps - shameless plug but we're hiring talented OCaml devs (http://stackhut.com/#/careers)

pps - Congrats to @amirmc and the Mirage OS group!


Mirage uses OCaml's strong types to generate secure, fast and highly specialised code. It's also very succinct and with LWT, almost as good as Haskell for lightweight concurrency and nonblocking IO. Why would you want to use Python? This is a golden opportunity to learn and use a superior language.


There are a bunch. NodeOS and runtime.js are the ones I've looked into because I like JS, but there are unikernel implementations for other languages too.

The cool part is, if the OS is trimmed down enough (e.g. < 10 MB), it's small enough to fit in version control.

Much like we automate build tools to concatenate/minify web assets, it'll be possible to create a build step that takes a webapp as input and spits out a fully functional VM ready to deploy as output.

It completely inverts the deployment process. Instead of building an environment and deploying an app to it, you focus on building the app and deploy it as a VM when it's ready.


Very true. It changes the perspective a little bit: instead of worrying about incompatible/incomplete libraries/packages, the application dictates everything all the way down to the VM level, and in a single language.


> The application dictates everything all the way down to the VM level, and in a single language.

As much as physically possible, yes.


Rump kernels exist in large part for that use case.


Yes, here's Go on a rump kernel: https://github.com/deferpanic/gorump


We also put a note on the company website [1]. I'm looking forward to working with the Docker folks on this. :)

[1] http://unikernel.com/#notice (not to be confused with the community website at http://unikernel.org :)


So far Docker seems to be a good citizen when it comes to FOSS, hopefully that will continue.

I've been following the Mirage and rumpkernel lists for a while and it's nice to see these hackers getting traction (and money!) for their efforts.

Not too long ago unikernel.org was started, which IIRC was billed as a community-driven "one stop shop" for information on the subject, and which I assume is independent of the company Unikernel Systems. Hopefully Docker won't go rogue and start attacking others that use the term "unikernel" by claiming that it's trademarked or something like that.

Congratulations Amir et al!


Unikernel Systems was supporting the community site (i.e. paying for stuff) and that will continue with Docker. It's important that the open-source work continues to grow and that all the projects benefit from each other. unikernel.org just got started so we need help to grow it and make it the place to go.

Also, the word 'unikernel' is used like a generic term and as far as I know it's not trademarked.


I don't believe unikernel is a trademark.


Congratulations to the all the OCaml homies up in Cambridge! Well-deserved.


I have been following unikernel development for some time. The work done by Atti Kantee and others on rump kernels [1] is most promising and has the right abstractions (a POSIX userspace using the NetBSD stack). Also, in the demo video, the unikernel folks should acknowledge the rump kernel work, as they are using it :)

[1] http://rumpkernel.org


I sort of agree with you and I sort of don't. For example, I think an interesting future path for the Rumprun unikernel (which, I always stress, is not the same thing as a rump kernel) is running without the POSIX-y userspace interfaces. In fact, that's pretty much where I think e.g. Golang support for Rumprun should go, i.e. remove the userspace abstractions from the stack, since they, conceptually, do exactly nothing. Parts of the implementation of Rumprun are wrong, because I didn't previously see the importance of no-userspace, but I'm slowly converting those. (ironically, Rumprun -- before the codebase was even called Rumprun -- started out as no-userspace, and then grew too much userspace ... but that's really another story)

(edit added later so that we get credit going where it's due: It occurred to me that it was Sebastian Wicki who was campaigning for the no-userspace mode last summer as part of his lowRISC project, and did the initial work to be able to again use Rumprun without "userspace". Apparently my memory of events goes only a few weeks back if I don't think about things carefully ...)

Now, if I'm allowed to summarize rump kernels, I'd say the goal is to build a framework which incorporates enough of the past to allow things to work, but tries to be as flexible as possible so as to enable the future. I'm a firm believer in "there's no such thing as #1", which means you shouldn't produce software components which work only in one type of tool, because it's easy to foresee that right around the corner you'll have to build components for the next tool.

p.s. "Atti"? That one was new, usually it's "Antii" or something like that ;) ;)


Appreciate your take on this "Antii" ;)


From the comment:

Is having everything in-kernel (single memory space) with a POSIX-y API for applications the right direction?

Or did I just brain fart here?


Assuming that was directed at me, what's "the comment"?

Anyway, that's the right direction if that's what you want to accomplish. Otherwise it's the wrong direction.

sincerely, anti


Why should I care at all about POSIX when not using C or porting UNIX software?

POSIX is just the part of UNIX that should have been part of the C runtime, but instead they made it into an optional standard.

Almost every other programming language with a richer runtime has no need to depend on POSIX.

All their APIs for creating processes, threads, accessing file systems, communicating over the network aren't POSIX dependent.



Holee shit. I totally called this. Even got made fun of on twitter (via @ShitHNSays) for mentioning it.

I didn't expect unikernels to gain mainstream notice for at least 6 months to a year.


I didn't quite get what a unikernel was. Reading up on approaches that were a bit different than MirageOS was helpful:

http://osv.io/

https://github.com/rumpkernel/rumprun


There are more resources at http://unikernel.org/resources


I thought OSv was dead?


There are recent commits: https://github.com/cloudius-systems/osv/commits/master

I posted the links because (at least for me) it was hard to get my arms around what a unikernel was by looking at MirageOS. Largely because I have little knowledge of functional programming, OCaml, etc.


Our latest distributed database uses a mono kernel too. We use Pure64[0] to boot the system and then the "kernel" is derived from QK[1], but it's also just our database software.

Other than reducing complexity, our distributed database uses the virtual memory hardware in a unique way, so a mono kernel was essential.

Having said that, the easiest way to develop such a system is not on the bare metal; it's by running Linux in such a way that it only uses the first 1 or 2 cores, and then running your "custom kernel" on the other cores in the system. Then you can use a normal debugger and utilities during development. It's only when you actually want to put it into production that you can consider not using Linux at all.

[0] https://github.com/ReturnInfinity/Pure64

[1] http://www.state-machine.com/qpcpp/struct.html#comp_qk
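The spare-cores trick described above can be set up on a stock distro: keep the Linux scheduler off a set of cores via boot parameters, then pin the payload there explicitly. The core numbers and binary name below are illustrative:

```shell
# In the bootloader, append kernel parameters so Linux's scheduler
# leaves cores 2-7 alone (edit /etc/default/grub or equivalent):
#   isolcpus=2-7 nohz_full=2-7

# After rebooting, confirm which CPUs are isolated:
cat /sys/devices/system/cpu/isolated

# Pin the harness that loads the custom "kernel" onto core 2:
taskset -c 2 ./run-custom-kernel
```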


Considering the way docker tends to feature-creep, they will eventually just be re-implementing a full kernel. :)


This ^

Docker is trying to be the next systemd


Why don't I see Antii Kantee's[0] name all over this?

[0] https://archive.fosdem.org/2013/interviews/2013-antii-kantee...


Antti isn't part of Unikernel Systems. List of people involved can be found at http://unikernel.com/#notice


That, and my name not being Antii (as already mentioned in the discussion).

Pick whichever reason suits your agenda ;-)


:)

My 'editability' timed out on my initial comment before I had a chance to fix that, Antti.

That said, there's a footnote about rumprun, and if the Justin Cormack of Unikernel is https://twitter.com/justincormack, it's all so close to Antti (Justin and Antti are both of NetBSD). For -me- Antti has been such a big part of my unikernel experience that I figured there'd be more involvement in this project, or at least more recognition of his work.


I'm a bit surprised. Isn't that a different approach than containerization?

I hope this wasn't acquisition to simply kill unikernel approach.


No, it may look a bit different but it has the same aims. And they are not intending to kill us. Disclaimer: I work for Unikernel Systems, now Docker.


The problem I have with it is that these are two different philosophies addressing the same problem. I'm concerned about whether Docker can pursue both of them at the same time. I suspect that sooner or later they'll have to decide which direction to pursue.

If they decide to drop containers and go the unikernel route, then a lot of what they've already done to this point is no longer relevant. If they go the containers direction, then unikernel technology won't be too useful, unless they use it to create host OSes that run the containers; but that is a huge challenge, and at that point it would no longer be a simple kernel.


Hit me up if you'd be into collaboration (we contribute back) with real customer partners of these things in production. I'm at the world's largest hedge fund with a massive use case that disqualifies Docker.


It's a very different technical approach but the use cases are the same: the customers who are interested in Docker will also be interested in unikernels as the technology matures. And customers who are sort of interested in Docker but unwilling to use it (either for security, or for performance, or because Docker doesn't isolate enough) may be willing to use unikernels.

A bad analogy could be made to Mac OS 9 and earlier multitasking, and Mac OS X multitasking: the implementations are wildly different but you keep (and gain) customers.


They seem to have sold very early in the lifecycle of the Unikernel technology. Possible they've left billions on the table?

Were the terms of the deal disclosed?

Perhaps Unikernel Systems ran out of money and it was an "acqui-hire"?


I wonder what the planned business model of Unikernel Systems was pre-acquisition.

Even if the acquisition was a result of Unikernel Systems running out of money, I have no doubt that these folks will continue to work on unikernels, rather than, say, more tools for managing containers.


Maybe Unikernel Systems was a supernova from the early unikernel universe.

Its people will be the elements that go to make up a new generation of unikernel companies.


Interesting demo materials (`docker-unikernel`): https://github.com/Unikernel-Systems/DockerConEU2015-demo


I'm curious about how support for building unikernels will be integrated into Docker. The current Dockerfile-based build process doesn't support separate build-time and run-time environments, but when building a unikernel, the build-time environment is completely different from the run-time artifact. Support for separate build-time and run-time environments is also useful when building container images, so the image doesn't include things that are only necessary at build time. So I hope that problem is solved first; I think the addition of unikernel support will be more natural that way.
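In the meantime, the separation can be approximated with the "builder" pattern: one fat image compiles the artifact, a second slim image packages it. A sketch using standard docker commands (image and file names are made up for illustration):

```shell
# Stage 1: build inside an image that carries the full toolchain.
docker build -t myapp-build -f Dockerfile.build .
docker run --name myapp-build-run myapp-build   # its CMD runs the compile
docker cp myapp-build-run:/src/myapp ./myapp    # extract only the artifact
docker rm myapp-build-run

# Stage 2: package the artifact into a minimal run-time image
# (Dockerfile.run just COPYs ./myapp onto a slim base).
docker build -t myapp -f Dockerfile.run .
```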


So Docker will now have some OCaml openings? :)


This feels specious:

> The result of this is a very small and fast machine that has fewer security issues than traditional operating systems (because you strip out so much from the operating system, the attack surface becomes very small, too).

Obviously traditional operating systems provide a lot of interfaces that represent attack surface, but they're generally able to be secured. On the other hand, much of the operating system actually _implements_ security, so if you throw it out, you're losing that.


In the way that unikernels are typically used, each unikernel instance runs as a VM, and security is implemented by the hypervisor. If the VM is just running one application, it doesn't need its own mechanism for allowing mutually distrusting users to run their own code.


If the hypervisor is providing the security, I don't see how the unikernel can be said to be any more secure than anything else run under a hypervisor. Moreover, I don't see why the hypervisor should be in a better position to be secure than a kernel.


Implements security theatre. If the OS is immutable then there's nowhere to persist an exploit.

A server could be killed and restarted at the first sign of compromise. If you aren't loading GBs worth of modules that'll never be used, the OS will be able to boot up in a few seconds max.


What do you mean by immutable?

If you're talking about the on-disk files, that's possible to do without throwing away almost everything -- Solaris does this using read-only zones (ROZR).

If you're talking about the kernel itself, that can never be truly immutable. An OS has to read and write memory.

I also don't understand the claim about "GBs worth of modules"; if you're talking about kernel modules, there shouldn't be "GBs" to start with.

As for security theatre: I hardly think crypto support and auditing frameworks qualify. So you need to define that as well.


Immutable as in filesystem.

Meaning every restart brings the system back to a fresh state. Most trojans/backdoors persist because they're able to modify the underlying OS to load them on boot. Now, immutable OSes are by no means a new idea; this just uses the approach by default.

Not talking about the kernel specifically. Although there's no reason the bootup time should be slowed down by loading unnecessary HAL, drivers, protocols, utilities that will never be used in a server setup. GBs of modules was referring to the rest of the OS made up mostly of features that will never be used by the server.

Crypto support is one big issue. I'd assume if the OS uses V8 at the base level, it should be able to be packaged with OpenSSL.

Auditing for what exactly? File access? The filesystem is immutable. Network requests? Those can be handled at the network level, or a tool can be added to log network requests via an offline logging service.

Security auditing at the OS level isn't required to the same degree it is on other OSes because the attack surface is so small. For instance, you don't have to check for SSH vulnerabilities when there's no SSH. The userspace sandbox isn't necessary because everything that's network-facing already has to run through V8's sandbox.

Application-level vulnerabilities will always be an issue but that's nothing new. This doesn't solve all problems. Just reduces the potential for issues.
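Worth noting that containers can approximate the immutable-filesystem behaviour discussed here: a read-only root with only explicit scratch space writable, so every restart starts from a fresh state. The flags below are standard docker run options; the image and volume names are illustrative:

```shell
# Root filesystem mounted read-only; /tmp is a tmpfs that vanishes on
# restart; a named volume is the one deliberate persistence point.
docker run --read-only --tmpfs /tmp -v applogs:/var/log myimage
```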


> Meaning every restart brings the system back to a fresh state. Most trojans/backdoors persist because they're able to modify the underlying OS to load them on boot. No, immutable OSes are -- by no means -- a new idea. This just uses the approach by default.

Ok, then yes, Solaris read-only zones provide that level of immutability.

> Not talking about the kernel specifically. Although there's no reason the bootup time should be slowed down by loading unnecessary HAL, drivers, protocols, utilities that will never be used in a server setup. GBs of modules was referring to the rest of the OS made up mostly of features that will never be used by the server.

A good operating system will be smart enough to only load what it needs; there will be some inefficiency, but protocols, drivers, HAL are actually used in server setups and are not usually the source of significantly increased boot times unless they're misconfigured. Also, with a proper service management facility and packaging, it should be possible to simply omit unnecessary items; services are usually what contribute the most to startup time.

> Crypto support is one big issue. I'd assume if the OS uses V8 at the base level, it should be able to be packaged with OpenSSL.

Right, but you generally need more crypto administrative capability than OpenSSL alone provides to manage a sufficiently complex application. (e.g. gnupg, etc.)

> Security auditing at the OS level isn't required to the same degree it is on other OSes because the attack surface is so small.

Sorry, I'll have to disagree on that point. Auditing is always important because it allows you to determine what happened and why when privileged operations are involved. The overhead of auditing is almost completely determined by what an administrator chooses to audit and how detailed that auditing is.

> For instance, you don't have to check for SSH vulnerabilities when there's no SSH.

Perhaps you misunderstood what I referred to when I spoke of auditing; I don't speak of checking for vulnerabilities -- packaging should generally handle that in combination with metadata and reporting. I was referring to the recording of the assumption of privileged operations, system authentication, and related details.

> The userspace sandbox isn't necessary because everything that's network-facing already has to run through V8's sandbox.

Disagree again; the point of having a userspace sandbox is in case something escapes V8's sandbox, which has been done in the past. It's an extra line of defense.

> Application-level vulnerabilities will always be an issue but that's nothing new. This doesn't solve all problems. Just reduces the potential for issues.

While it reduces the number of problems in the existing space, I personally believe it creates new problems. I'm not sure the tradeoffs are worth it.

I would rather see an immutable container with a minimised set of packages with proper auditing enabled. That's far easier to test and deploy and doesn't create new problems.


Persistence is not required for an exploit. Repeatability is just as bad. Restarting in response to compromise means every compromise is a DoS.


Yes! More OCaml adoption.




