
What we hear from our customers is mostly what has been said here: cost and mental overhead. There's a bit of a paradox - companies that plan to grow aggressively are wary of AWS bills chopping their runway in half. They're very aware of _why_ cloud providers give most startups a year for free: the providers recoup that loss very fast once the cash faucet opens up.

What really gets me is that most cloud providers promise scalability but offer no guard-rails - take diagnosing performance issues in RDS, for example. The goal for most cloud providers is to ride the line between your time cost and their service charges. Sure, you can reduce RDS spend, but you'll have to spend a week to do it - so bust out the calculator or just sign the checks. No one will stop you from creating a single point of failure - but they'd happily charge consulting fees to fix it. There is a conflict of interest - they profit from poor design.

In my opinion, the internet is missing a platform that encourages developers to build things in a reproducible way. Develop and host at home until you get your first customers, then move to a hosting provider down the line. Today, this most appeals to AI/ML startups - they're painfully aware of their idle GPUs in their gaming desktops and their insane bill from Major Cloud Provider. It also appeals to engineers who just want to host a blog or a wedding website, etc.

This is a tooling problem that I'm convinced can be solved. We need a ubiquitous, open-source, cloud-like platform that developers can use to get started on day 1, hosting from home if desired. That software platform should not have to change when the company needs increased reliability or better air conditioning for their servers. Whether it's a WordPress blog, a Minecraft server, or a petabyte SQL database - the vendor should be a secondary concern to building the thing.




I've found that Kubernetes mostly solves this problem. I say mostly because for AI/ML workloads that require GPUs, we still rely on running things on bare metal locally, and deploying with GKE's magic annotations and Deep Learning images. But for anything else, I haven't had an issue going all in on k8s at the beginning, even with very small teams.
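
For a rough picture, scheduling a GPU pod on GKE ends up looking something like the sketch below (written with the official `kubernetes` Python client purely for illustration - in practice it's usually plain YAML, and the image name and accelerator type here are placeholders):

    # Minimal sketch: run a GPU container on a GKE accelerator node pool.
    # Placeholders: the container image and the accelerator type.
    from kubernetes import client, config

    config.load_kube_config()  # or load_incluster_config() when running inside the cluster

    pod = client.V1Pod(
        metadata=client.V1ObjectMeta(name="gpu-train"),
        spec=client.V1PodSpec(
            restart_policy="Never",
            # GKE labels GPU node pools with cloud.google.com/gke-accelerator=<type>
            node_selector={"cloud.google.com/gke-accelerator": "nvidia-tesla-t4"},
            containers=[
                client.V1Container(
                    name="train",
                    image="gcr.io/my-project/train:latest",  # placeholder image
                    resources=client.V1ResourceRequirements(
                        # ask the NVIDIA device plugin for one GPU
                        limits={"nvidia.com/gpu": "1"}
                    ),
                )
            ],
        ),
    )

    client.CoreV1Api().create_namespaced_pod(namespace="default", body=pod)

The hard part is everything around this - node pools, drivers, images - which is exactly where the cloud-specific magic creeps in.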


Yep! My startup is https://kubesail.com, so I agree :)

As for ML on Kube, I agree, there have been and still are some rough edges. The kernel drivers alone make a lot of out-of-the-box Kubernetes solutions unusable. That said, we've had a lot of success helping people move entirely onto kube - the mental gain from ditching the bash scripts or ansible playbooks (etc.) alone is pretty freeing.


> This is a tooling problem that I'm convinced can be solved. We need a ubiquitous, open-source, cloud-like platform that developers can use to get started on day 1, hosting from home if desired. That software platform should not have to change when the company needs increased reliability or better air conditioning for their servers. Whether it's a WordPress blog, a Minecraft server, or a petabyte SQL database - the vendor should be a secondary concern to building the thing.

This is the intent of Kubernetes. Not that I particularly like it.


K8s might eventually get there - technically it fits the bill - but it's got so many moving parts that it's rarely worthwhile in a domestic setup.


I don't disagree with you (certainly I'm bullish on Kubernetes, hence the startup) but I have to say many people felt similarly about running web servers in general back in the day. Init script management, maintaining RAID arrays, etc. was a project for nerds, not a mainstream activity. Today though, we have replicating databases, we have cloud storage available for backups, we have intelligent disk-replication systems... In my view, it's just a matter of time until Kubernetes can be run by beginners at home. In the late 90s I probably re-installed Linux on my home server at least a hundred times after messing up some component I didn't understand. Kube is no different. Eventually (I think we're incredibly close), you'll just install it and forget about it! `microk8s`, among others, is really close to one-click these days.


a problem with these 'easy' kubernetes deployments and quickstart guides is that they have people churning out insecure, nowhere-near-production-grade clusters that often use a $cloud-provider managed k8s control plane and cloud-specific features & integrations for storage, load-balancing, databases, and more. That prevents easy replication of the environment to another cloud provider, let alone a home/lab environment - and it's just as difficult to port a non-cloud k8s setup over to a cloud provider, no matter how many sidecars are tacked on.

this problem extends even further down: the cloud infrastructure substrate that k8s runs on needs to be configured too, and even when using a tool that supports multiple cloud providers, such as terraform, you end up with lots of cloud-specific code.


My main problem with docker/Kubernetes for small projects is automated patching.

With Ubuntu, I can set up automated upgrades/restarts pretty easily, but with docker/k8s I end up having to write my own script to handle it - or at least that was the case last time I looked.
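
The script usually ends up being some variant of the sketch below (Python with the `docker` SDK; the tag parsing is a simplified assumption on my part). It just flags running containers whose tag now points at a newer image - actually recreating the container with its original config is the fiddly part that tools like Watchtower take care of:

    # Rough sketch of the roll-your-own update check for plain Docker:
    # pull the tag each running container was started from and flag stale ones.
    import docker

    client = docker.from_env()

    for container in client.containers.list():
        ref = container.attrs["Config"]["Image"]    # e.g. "nginx:1.25"
        repo, _, tag = ref.rpartition(":")
        if not repo:                                # no explicit tag -> assume :latest
            repo, tag = ref, "latest"
        # naive parsing: doesn't handle a registry with a port and no tag
        latest = client.images.pull(repo, tag=tag)  # fetch the newest image for that tag
        if latest.id != container.image.id:
            print(f"{container.name}: {ref} has a newer image, needs a recreate")

On k8s the equivalent tends to be pinning image digests and letting CI bump them, or running an operator that does it for you - still more moving parts than `unattended-upgrades`.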



