
Really welcome, although per-millisecond billing would be better.

It's now possible to boot operating systems in milliseconds, have them carry out a task (for example, respond to a web request), and disappear again. Trouble is, the clouds (AWS, Google, Azure, Digital Ocean) don't support such fast OS boot times. Per-second billing is a step in the right direction, but it needs to go further to per-millisecond billing, and clouds need to support millisecond boot times.
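
To put rough numbers on the difference, here's a back-of-the-envelope sketch of what per-second vs. per-millisecond rounding charges for a short-lived instance (the hourly rate is illustrative, not a quoted price):

    # Rough sketch: cost of a short-lived task under different billing
    # granularities. HOURLY_RATE is illustrative, not a quoted price.
    HOURLY_RATE = 0.0116  # $/hour, hypothetical small-instance rate

    def billed_cost(task_ms, granularity_ms):
        """Bill the task duration rounded up to the billing granularity."""
        billed_ms = -(-task_ms // granularity_ms) * granularity_ms  # ceiling
        return billed_ms / 3_600_000 * HOURLY_RATE

    print(billed_cost(50, 1000))  # per-second billing: charged for 1000 ms
    print(billed_cost(50, 1))     # per-millisecond: charged for just 50 ms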




Just curious here, what OS can do millisecond boot times? How many milliseconds are you talking? And is the OS's boot time really so much less than the time spent responding to the web request that this is actually worth it?



Interesting. There's a link to this on the Wikipedia page: http://zerg.erlangonxen.org/

But if I'm reading things correctly, it still took over two orders of magnitude longer to boot than it did to reply. So what sort of use case does millisecond boot help with? Very sporadic requests?


Cold EBS boot will still be super slow...


If you're concerned about cost, AWS is almost never the right place to host to begin with.


Agreed - just check a cloud comparison; AWS is rarely at the top: https://www.cloudorado.com/cloud_server_comparison.jsp


Well, if you're going to use bad figures, then sure, AWS won't win. The default size there is 768MB RAM, 1 CPU, and 50GB disk... which it says AWS will provide for $54, whereas in actuality a t2.micro with those specs costs only $14 - lower than all the listed prices (which are all clearly out of date).
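
That figure roughly checks out; here's a quick sanity check (rates are 2017-era us-east-1 on-demand prices quoted from memory, so treat them as assumptions):

    # Back-of-the-envelope check of the ~$14/month figure. Rates are
    # 2017-era us-east-1 on-demand prices quoted from memory.
    T2_MICRO_HOURLY = 0.0116  # $/hour
    EBS_GP2_PER_GB = 0.10     # $/GB-month, general-purpose SSD
    HOURS_PER_MONTH = 730

    instance = T2_MICRO_HOURLY * HOURS_PER_MONTH  # ~$8.47
    storage = EBS_GP2_PER_GB * 50                 # $5.00 for the 50GB disk
    print(round(instance + storage, 2))           # ~13.47, i.e. roughly $14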

Not to mention all the big names missing from that list. For some reason Dimension Data makes the list (and it's woeful, from experience), but there's no Digital Ocean, OVH, Hetzner, etc...


As per my other reply: A t2.micro does not allow you to use more than 10% of the vCPU on a sustained basis. Any use over that needs to be earned, and you only earn 6 credits (for one minute each) per hour.


Wow, thanks for sharing this link. Didn't know about this.

One thing I noticed though is that the pricing seems a bit biased; for example, for AWS it recommends an m1.small with 1GB RAM and 20GB of storage at $35 a month... However, if you used a t2.micro, that would give you the same specs for $10.79.


Not quite the same: you don't own the whole core on the t2 and will get CPU throttled.


> you don't own the whole core

Moving the goalposts here. 'Not owning the whole core' is the default in the cloud.


For the other instances you get a specific number of units of processing capacity that you can use 100% of continuously if you like. For the micro instances, you get a base level and build up credits towards bursts, and cannot maintain 100% utilization continuously. It's very much different and not the default. To quote Amazon:

> "A CPU Credit provides the performance of a full CPU core for one minute. Traditional Amazon EC2 instance types provide fixed performance, while T2 instances provide a baseline level of CPU performance with the ability to burst above that baseline level. The baseline performance and ability to burst are governed by CPU credits."

A t2.micro's baseline performance is only 10% of the vCPU. Anything above that needs to be "earned" at a rate of 6 credits per hour. The t2.micro can accumulate a maximum of 144 CPU credits (plus the 30 initial credits, which do not renew), each good for 1 minute of 100% use.

So in other words, you can on average only use 100% of the CPU for 6 minutes per hour.
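
To make that concrete, here's the same credit math as a toy model (the numbers are taken straight from the figures above):

    # Toy model of t2.micro CPU credits as described above: 6 credits
    # earned per hour, each worth one minute of a full core, 144 cap.
    EARN_PER_HOUR = 6  # credits earned per hour
    MAX_BANKED = 144   # accrual cap for a t2.micro

    def burst_minutes(hours_idle, banked=0):
        """Full-core minutes available after idling `hours_idle` hours."""
        return min(MAX_BANKED, banked + EARN_PER_HOUR * hours_idle)

    print(burst_minutes(1))   # 6   -> ~6 min of 100% CPU per idle hour
    print(burst_minutes(48))  # 144 -> two idle days still cap at 144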


m1.smalls are also ancient, given that the current-generation m4 is more than a year old at this point.

Odd site.


That's a very handy site; previously I had mostly been using http://www.ec2instances.info/ and http://www.gceinstances.info/

Thanks for pointing it out!


You miss the point.

The lifetime of a web request, for example, can be measured in milliseconds.

It is now possible, technically anyway, for operating systems to boot, service the request and disappear.

There need to be pricing models that reflect computing models like this.


There is likely a point of diminishing returns in this type of scenario: the OS boot time, the web service setup time, the actual request, and then the shutdown time. Also consider that unless you have an external caching layer, you may be processing some requests that could have been cached by an always-on server. If your site has predictable traffic patterns, then I suspect the math would be in favor of always-on provisioned servers that scale up/down based on traffic. If you have a very high-traffic site, the extra OS boot time (even in milliseconds) is going to add up quickly. You'd have to be very sure the spin-up/down time is less than the idle time of the always-on server.
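
As a very rough sketch of that break-even (all the numbers below are illustrative assumptions, not measurements):

    # Hypothetical break-even: boot-per-request vs. an always-on server.
    # BOOT_MS and SERVICE_MS are assumed values, not measurements.
    BOOT_MS = 30    # assumed per-request boot time
    SERVICE_MS = 5  # assumed time to serve one request

    def billable_ms_per_hour(requests_per_hour):
        """(boot-per-request, always-on) billable milliseconds per hour."""
        on_demand = requests_per_hour * (BOOT_MS + SERVICE_MS)
        always_on = 3_600_000  # an always-on instance bills the whole hour
        return on_demand, always_on

    # Below ~103k requests/hour the per-request model bills less time;
    # above that, boot overhead alone outweighs the always-on hour.
    print(billable_ms_per_hour(1_000))    # (35000, 3600000)
    print(billable_ms_per_hour(200_000))  # (7000000, 3600000)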


>It is now possible, technically anyway, for operating systems to boot, service the request and disappear.

This was already possible even with a 2-second boot time. The problem is that it's a stupid use case because (unless the OS boots up in <10ms) the latency of waiting for the bootup is intolerable in any use case where a 2-second boot time was intolerable.


If you're doing that, why bother loading an entire operating system? Just use something like AWS lambda instead.


And that's great. But it's meaningless when the base cost per unit of computing capacity is so high that it is in most cases cheaper to have a whole farm of servers running idle elsewhere.


Isn't that what lambda is all about? Sub-second billing?


Sounds like you're describing AWS Lambda / serverless architecture. But maybe I'm not understanding your use case?


There are a wide range of tiny operating systems that can boot in a matter of milliseconds.

The applications are "whatever you can imagine", but yes, one application is building FaaS (Function as a Service), in which the operating system carries out a single function.

Put another way, Docker is complex, overweight, and requires re-implementation of much computing infrastructure. You can meet many of the same goals as Docker in a much simpler way: not by building containers but by building tiny operating systems.


I'm somewhat amused by the idea of booting an operating system from scratch to service a single request being described as "much simpler" than the alternative of, y'know, having a single instance serve many requests.


From an ops perspective, spinning up an instance to process a single request seems like

1) A complete and utter nightmare to debug.

2) A huge waste of computing resources. Even with a unikernel you're wasting time initialising resources and getting into a ready state to be able to process a request. Why bother when you can be ready and respond effectively instantly?


An OS does not serve requests -- applications do. While it may be possible to demo a toy OS+app, real-world applications take seconds if not minutes to start and warm up. Throughput on a cold cache is a fraction of that on a warm cache.

Starting in milliseconds is not the hard problem. Starting + warming caches in that time is -- that will get you a bunch of awards when you solve it.


Docker is just a manager of Linux namespaces. You'll need one to manage your operating systems anyway - start/stop them, copy them to the machine, delete them, etc.



