
Our packing method and how it translates to savings for infrastructure - elsalgo
https://supergiant.io/blog/supergiant-packing-algorithm-unique-save-money
======
avitzurel
A couple of weeks ago I commented about Marathon. The part I said was missing
is scaling the underlying cluster of machines in a smart way.

It seems they're solving this issue by auto-scaling in a more flexible way.

Amazon will autoscale to a "launch configuration", which means you will
auto-scale with the same instance type and same image-id. That might be
overkill for what you need in a lot of cases.
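To make the constraint concrete: a launch configuration pins both the AMI and the instance type for every node the auto-scaling group creates. A minimal sketch of the parameters involved (the names and IDs below are placeholders, not real resources):

```python
# Hypothetical sketch: an AWS launch configuration fixes the image and
# instance type for every node the auto-scaling group creates.
launch_config = {
    "LaunchConfigurationName": "workers-lc",    # placeholder name
    "ImageId": "ami-0123456789abcdef0",         # every scaled node uses this AMI
    "InstanceType": "m4.large",                 # ...and this single instance type
}

# With boto3 this would be applied via something like:
#   boto3.client("autoscaling").create_launch_configuration(**launch_config)
# after which scaling out can only ever add more identical m4.large nodes.
```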

The reason everyone "bakes" their own solution to these things is that it's
very hard to generalize around simple rules (like CPU, memory, etc.).

Let's take queue workers (Celery or any other) as an example: CPU is
irrelevant in this case; you would want to scale up based on queue size
instead.
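A queue-based scaling rule can be sketched in a few lines. All the numbers here (tasks per worker, min/max bounds) are illustrative assumptions, not values from any particular system:

```python
import math

def desired_workers(queue_depth, tasks_per_worker=100,
                    min_workers=1, max_workers=20):
    """Scale on queue depth rather than CPU: aim for one worker per
    `tasks_per_worker` pending tasks, clamped to [min_workers, max_workers].
    All constants are illustrative assumptions."""
    needed = math.ceil(queue_depth / tasks_per_worker)
    return max(min_workers, min(max_workers, needed))
```

For example, `desired_workers(550)` asks for 6 workers, while an empty queue still keeps the minimum of 1 running.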

All that being said, I LOVE seeing solutions in this space. I think we are
marching towards a brighter future for container-based application stacks.

------
gtirloni
I wonder if it'd make sense to have a low-level API so Kubernetes can talk to
the hypervisor and suggest these parameters for new nodes.

~~~
qboxio
So, in essence, that is one of the functions of the Supergiant API. We just
released a few weeks ago, so AWS was our first target, but we are working to
add support for other top cloud and on-premise providers as fast as we can. I
think we are striving for OpenStack, Digital Ocean, and GCE as our next
targets. If you have other target solutions you think we should look at,
let us know :-)

