Just an FYI: the signup form breaks in very obscure ways if one is using an ad-blocker. It's generally a good idea never to let exceptions from analytics code break your own logic, since not everybody will (or can) disable their ad-blocker.
The site is using Segment to assign registrations a userId. Segment is used for analytics, yes, but it's really a canonical tracker for everything from identity to logging.
Assigning the userId client-side, even before you hit the actual user database, enables data to be synced anonymously across services by userId rather than by email address. If you block this code from running, then yes, you block signup from working.
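The advice in the first comment can be sketched as follows. This is a hypothetical illustration, not the site's actual code: `trackSignup`, `signUp`, and the `plan` trait are made-up names, though Segment's real client does expose an `identify(userId, traits)` call. The point is simply that the tracker may be absent (blocked) or broken, and neither case should abort registration.

```typescript
// Minimal shape of the tracker we care about (Segment-style identify call).
type Analytics = { identify(userId: string, traits?: object): void };

function trackSignup(userId: string, analytics?: Analytics): void {
  // Ad-blockers typically prevent the analytics script from loading at all,
  // so the tracker object may simply not exist.
  if (!analytics) return;
  try {
    analytics.identify(userId, { plan: "free" }); // hypothetical trait
  } catch {
    // A tracker bug must never surface into our own signup logic.
  }
}

function signUp(email: string, analytics?: Analytics): string {
  const userId = `u_${Date.now()}`; // stand-in for a real client-side ID
  trackSignup(userId, analytics);
  return userId; // registration proceeds whether or not tracking ran
}
```

With this shape, a blocked script (tracker undefined) and a crashing tracker both degrade to "no analytics" rather than "no signup".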
Basically, it uses a hypervisor to launch your Docker image, but not a full-blown VM, since there is no guest OS (like Ubuntu or CoreOS) in the VM. There is only one tiny kernel in the HyperContainer to run your Docker app.
"it uses a hypervisor to launch your Docker image, but not a full-blown VM"
I don't understand this. Isn't using a hypervisor the same as using a VM? How is this different from using VMware, Xen, etc. to run a stripped down VM with a tiny kernel inside to handle Docker?
It depends on how one defines "OS". If kernel = OS, then yes, it is a stripped-down one. But HyperContainer differs from a container OS like CoreOS in that it is just a kernel, with no rootfs.
Yes, even if it's "just a kernel", it's still an operating system. It's a piece of software that runs in the supervisor mode of the CPU, providing system call services to user applications.
The marketing copy appears, on this basis, to be somewhat misleading. It says on the site that this is a cloud where "multi-tenant containers can inherently be run safely side by side on bare metal, instead of being nested in VMs".
I think that, as an industry, we've established that "bare metal" execution of workloads means _without_ the multi-layered approach of a hypervisor (itself an OS) running a separate guest OS kernel.
Thanks. I remember a blog post that coined the term "virtualized container", but I cannot find the link right now.
I wouldn't use the word VM, as it reminds people of heavy, full-blown images. It would probably be even more misleading to say that HyperContainer launches your Docker image in VMs. But you are right: the concept is still new, and it will take effort to get people to understand it.
I don't know if I would describe VMs as heavy. But to the extent that they are heavy, it's because they are an application and OS running on virtualized (i.e. partially or fully emulated) hardware rather than physical hardware.
[Edit: Removed rest of comment to limit negativity. Thank you for responding to clarify.]
Let me be more clear. The problem with VMs is not virtualization (or the hypervisor); the problem is the "Machine". For a long time, we have tried to emulate a complete machine with a complete "OS" running atop it. What Docker really changed is making us realize that all we need is the app, not a full OS. A Docker image is therefore an app-centric package, with nothing specific to Linux containers about it. Yes, it usually runs in a Linux container, but you can also launch it with a hypervisor in 100ms.
Shall we call hypervisor + kernel + Docker image a VM? I don't think so. It never tries to give you a complete machine, nor a full OS. Personally I like "virtualized container", but the combination of those two words might be more confusing, given that whenever you see the word "container", you think of a Linux container.
OS size is just one problem with Virtual Machines. (Much of the OS is idle unless explicitly used anyway.) Other significant problems include:
- inefficient use of resources resulting from functionality duplicated in both the hypervisor and the guest (e.g., two TCP stacks, two filesystem implementations, two schedulers, and so on). (That's why I asked about this in a separate comment.)
- inefficient use of resources resulting from the inability to dynamically share resources between VMs (e.g., memory and disk typically have to be statically allocated to VMs, instead of sharing a common pool the way processes within a single Unix-like system typically do).
- poor visibility from the hypervisor into the application (i.e., observability tools cannot typically cross the hardware-virtualization boundary).
That's just a partial list.
I think that's why you're seeing pushback from people insisting that these be called VMs rather than containers: they have all of the above downsides of VMs, even if the operating system surface area isn't that large.
So, yes, an OS. This isn't like a software-isolated process in Singularity, or a little component in a user-mode runtime leveraging an L4-like microkernel. It's virtualization + a kernel + (maybe something here) + my code.
That's a stripped-down guest OS on a virtualized platform. Basically the same sort of thing that's already common, but with some differentiator.
I meant a filesystem implementation (i.e., kernel code actually implementing the filesystem system calls from the application, either by reading and writing blocks from a block-device provided by the hypervisor or using a network file protocol).
Please tell us more: how does Hyper_ compare to Clear Containers [0], funded by Intel? And to VMware Photon Platform (whose hypervisor is closed source)? Both of these claim very fast startup times and support for a large number of virtual containers per host. Those are the three hypervisor containers I'm aware of, and I would like to understand the differences in implementation. Benchmarking the three would be even better, but I guess we won't see that for some time.
I understand boot-up completes in under a second, but if one needs to set up an environment to do something useful, that can take several seconds. I guess what I'm getting at is: why does fast boot matter when one still needs to set up a basic environment to complete a task?
This is analogous to achieving an artificially low TTFB (time to first byte) while actual rendering doesn't start until several seconds later.
This is exactly what I've been waiting for. We're currently on Tutum, and have 60 days to decide whether to migrate to Docker Cloud or a different platform. This one, on the face of it, would be much better for us.
1) Are there any competitors?
2) Can we be confident in building our business on this platform? What is the funding and stability situation of Hyper?
Joyent Triton also exposes a Docker API, which you use with an unmodified copy of the Docker CLI. In that sense, working with Triton is also the same as on your laptop.
1) Hyper_ offers EBS-like volumes and snapshots for your containers. You can fail a volume over across containers.
2) Given the instance startup time, you can program against the APIs to do scaling, HA, etc.
3) Yes, think of Hyper_ as a virtual container host with unlimited capacity; under the hood, it is still a cluster of machines.
I meant that most of them didn't get any downvotes in the first place. Two or three did, but only barely.
We appreciate your vigilance in watching out for a fellow user. It's better to send these to hn@ycombinator.com though. That way we're sure to see them, and the threads don't go as off topic.