It's now possible to boot operating systems in milliseconds, have them carry out a task (for example, respond to a web request), and disappear again. The trouble is that the clouds (AWS, Google, Azure, DigitalOcean) can't support such fast OS boot times. Per-second billing is a step in the right direction, but it needs to go further to millisecond billing, and clouds need to support millisecond boot times.
But if I'm reading things correctly, it still took over two orders of magnitude longer to boot than it did to reply. So what sort of use case does millisecond boot help with? Very sporadic requests?
Not to mention all the big names missing from that list. For some reason Dimension Data makes the list (and it's woeful, from experience), but there's no Digital Ocean, OVH, Hetzner, etc...
One thing I noticed though is the pricing seems a bit biased; for example, for AWS it recommends an m1.small with 1 GB RAM and 20 GB of storage at $35 a month. However, if you used a t2.micro, that would give you the same specs for $10.79.
Moving the goalposts here. 'Not owning the whole core' is the default in the cloud.
> "A CPU Credit provides the performance of a full CPU core for one minute. Traditional Amazon EC2 instance types provide fixed performance, while T2 instances provide a baseline level of CPU performance with the ability to burst above that baseline level. The baseline performance and ability to burst are governed by CPU credits."
A t2.micro's baseline is only 10% of one vCPU. Anything above that needs to be "earned" at a rate of 6 credits per hour. The t2.micro can accumulate a maximum of 144 CPU credits (plus the 30 initial launch credits, which do not renew), each good for 1 minute of 100% use.
So in other words, you can on average only use 100% of the CPU for 6 minutes per hour.
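The arithmetic above can be sketched in a few lines. This is a simplified model of the credit mechanics quoted from the AWS docs, using the t2.micro numbers from this thread (6 credits/hour earn rate, 144-credit cap); the function names are just for illustration, not any AWS API:

```python
# Simplified model of T2 CPU credits for a t2.micro
# (numbers from the AWS docs quoted above; actual accrual is fractional per-hour).
EARN_RATE = 6        # credits earned per hour
MAX_CREDITS = 144    # maximum banked credits (24 hours of accrual)

def credits_after_idle(hours_idle, starting_balance=0):
    """Credit balance after idling at/below baseline for `hours_idle` hours."""
    return min(MAX_CREDITS, starting_balance + EARN_RATE * hours_idle)

def full_cpu_minutes(balance):
    """Minutes the instance can run at 100% of a vCPU before hitting baseline."""
    return balance  # one credit == one minute of a full core

# The earn rate is what sets the 10% baseline:
# 6 full-CPU minutes per 60 minutes = 10% of one vCPU, sustained.
assert EARN_RATE / 60 == 0.10
```

So a freshly-drained t2.micro takes a full 24 hours of idling to bank its 144-credit maximum, which then buys only 2.4 hours of flat-out CPU.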
Thanks for pointing it out!
The lifetime of a web request, for example, can be measured in milliseconds.
It is now possible, technically anyway, for operating systems to boot, service the request and disappear.
There need to be pricing models that reflect computing models like this.
This was already possible even with a 2 second boot time. The problem is that it's a stupid use case because (unless the OS boots up in <10ms) the latency of waiting for the bootup is intolerable in any use case where a 2 second boot time was intolerable.
The applications are "whatever you can imagine", but yes, one application is building FaaS (Function as a Service), in which the operating system carries out a single function.
Put another way, Docker is complex, overweight, and requires re-implementation of much computing infrastructure. You can meet many of the same goals as Docker in a much simpler way, not by building containers but by building tiny operating systems.
1) a complete and utter nightmare to debug.
2) A huge waste of computing resources. Even with a unikernel, you're wasting time initialising resources and getting into a ready state to be able to process a request. Why bother when you can already be ready and respond effectively instantly?
Starting in milliseconds is not the hard problem. Starting + warming caches in that time is -- that will get you a bunch of awards when you solve it.