
I've been amazed that more people don't make use of Google's preemptibles. Not only are they great for background batch compute, you can also use them to cut your stateless webserver compute costs. I've seen some people use k8s with a cluster of preemptibles and non-preemptibles.
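Rough sketch of that mixed setup on GKE (cluster and pool names are made up):

  # Standard (non-preemptible) pool for critical pods
  gcloud container clusters create my-cluster \
      --zone us-central1-a --num-nodes 2

  # Add a preemptible pool for cheap, interruptible capacity
  gcloud container node-pools create burst-pool \
      --cluster my-cluster --zone us-central1-a \
      --preemptible --num-nodes 3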



Something I've always been curious about (and if a Google Cloud engineer could clear it up, that would be great) is why we should not (as in, why everyone does not) use preemptible nodes for everything, apart from maybe the 3 or 5 master nodes.

My question specifically being: if I configure a k8s cluster to have all my slaves be preemptible nodes, would GCP automatically add new nodes as my old ones are deleted? (From what I understand, preemptible nodes are assigned to you for a max of 24 hrs.)

Considering the pricing of preemptible nodes plus the discounts GCP gives you for sustained use, it makes the cloud insanely cheap for an early-stage startup.


Google Cloud Developer Advocate here.

Go for it as long as you understand the downside. It's possible that all instances get preempted at once (especially at the 24hr mark), that there won't be capacity to spin up new preemptible nodes in the selected zone once the old ones are deleted, etc. New VMs also take time to boot and join the cluster.

If you are just doing dev/test stuff, I'd recommend using a namespace in your production cluster or spinning up and down test clusters on demand (which can be preemptible).
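For example (a sketch; the namespace and cluster names are made up):

  # Dev/test namespace inside the prod cluster
  kubectl create namespace dev
  kubectl run test-app --image=nginx --namespace=dev

  # ...or a throwaway all-preemptible test cluster
  gcloud container clusters create scratch \
      --zone us-central1-a --preemptible --num-nodes 1
  gcloud container clusters delete scratch --zone us-central1-a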

If you have long running tasks (like a database) or are serving production traffic, using 100% preemptible nodes is not a good idea.

Preemptibles can be great for burst traffic and batch jobs, or you can run a mix of preemptible and standard nodes to get the right balance of stability and cost.
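The usual trick for the mixed setup (a sketch; the taint key/value are made up, but cloud.google.com/gke-preemptible is the label GKE puts on preemptible nodes) is to taint the preemptible nodes so only pods that explicitly tolerate interruption get scheduled there:

  # Only pods with a matching toleration will land on the
  # preemptible nodes; everything else stays on standard nodes
  kubectl taint nodes \
      -l cloud.google.com/gke-preemptible=true \
      workload=preemptible:NoSchedule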


If you don't mind me asking, what exactly is the role of a developer advocate?



Cheers!


What about spreading your k8s load across multiple instance types (given it's unlikely Google runs out of all types at the same time)? That, plus historical modeling, was the trick of a startup Amazon acquired that promised to dramatically reduce compute costs by using mostly spot instances.

Would those kinds of mitigations work similarly with Google's preemptible VMs?


Compute Engine doesn't really have "instance types" or "instance families" per se, just core/memory combinations. Larger machines do have a higher chance of preemption, though (according to the PM of PVMs, who said as much elsewhere in this thread).

There are a few interesting projects out there that do the kind of automation you're describing, like these:

https://github.com/binary-com/gce-manager
https://github.com/skelterjohn/prevmtable

Spreading many smaller machines over a multi-zone k8s deployment might help mitigate preemptions, but it will never solve all the issues.
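Something like this (zone names are examples; --node-locations is the current gcloud flag for this, older releases called it --additional-zones):

  # Spread small preemptible nodes across three zones; a
  # preemption wave in one zone leaves the others serving.
  # Note --num-nodes is per zone, so this is 6 nodes total.
  gcloud container clusters create spread-cluster \
      --zone us-central1-a \
      --node-locations us-central1-a,us-central1-b,us-central1-c \
      --preemptible --machine-type n1-standard-1 --num-nodes 2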


Preemptibles are not available with GPUs.



