
I asked a similar question yesterday. [0] The problem is that containers share the kernel of the host OS, so you cannot host a unikernel without some kind of hardware virtualization, since the unikernel is obviously a different kernel from that of the host OS. However, you can run qemu inside docker if you want to inherit docker's sandboxing and namespace configuration. The problem comes when you have to isolate resources, like network devices, at both the namespace level on the host OS and the virtualization level inside qemu.
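
A minimal sketch of that setup (the image name "my-qemu-image" and the "unikernel.img" kernel are hypothetical placeholders; assumes /dev/kvm exists on the host):

    # Run qemu inside a container so the VM inherits the container's
    # namespaces and cgroup limits. Image and kernel names are made up.
    docker run --rm -it \
        --device /dev/kvm \
        --cap-add NET_ADMIN \
        my-qemu-image \
        qemu-system-x86_64 -enable-kvm -m 128 \
            -kernel unikernel.img -nographic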

Intel's Clear Containers project tries to solve this problem, but it's still limited by some virtualization overhead: qemu requires a tap device, which then connects to eth0 in a netns, which in turn is one half of a veth pair with the host. So you end up creating 3 or 4 virtual Ethernet links just to route packets down to the guest.
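
Concretely, the chain looks something like this (a sketch with made-up interface and namespace names, not Clear Containers' actual scripts):

    # veth pair: "veth-host" stays on the host, its peer becomes
    # eth0 inside the namespace "my-netns"
    ip link add veth-host type veth peer name eth0 netns my-netns
    ip link set veth-host up

    # inside the namespace: a bridge joining eth0 and a tap device
    ip netns exec my-netns ip tuntap add dev tap0 mode tap
    ip netns exec my-netns ip link add br0 type bridge
    ip netns exec my-netns ip link set eth0 master br0
    ip netns exec my-netns ip link set tap0 master br0
    ip netns exec my-netns ip link set br0 up
    ip netns exec my-netns ip link set eth0 up
    ip netns exec my-netns ip link set tap0 up

    # qemu runs inside the namespace and attaches the guest NIC to tap0
    ip netns exec my-netns qemu-system-x86_64 -enable-kvm -m 128 \
        -kernel unikernel.img -nographic \
        -netdev tap,id=net0,ifname=tap0,script=no,downscript=no \
        -device virtio-net-pci,netdev=net0

That's already four virtual interfaces (veth-host, eth0, br0, tap0) before a packet even reaches the guest.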

[0] https://news.ycombinator.com/item?id=13976125




Yeah, it seems to me that there's a sort of duality between a server OS + containers and a hypervisor OS + unikernels. They're both attempting to minimize the overhead of process isolation while preserving deployment flexibility.


Meh, what's the big difference between providing PID 1 and providing the kernel? You neither have nor want direct hardware access (bus, MMU), so what would be the principal advantage?



