Managed Kubernetes Price Comparison (devopsdirective.com)
168 points by spalas on March 7, 2020 | 121 comments



Egress costs at those major clouds are ridiculous. Once you start having some traffic it can easily make up 50% of all costs or more. That is not justifiable! Meanwhile hardware costs are going down. We need more k8s providers with more reasonable pricing. Unfortunately, both Digital Ocean and Oracle Cloud don't have proper network load balancer implementations, which is a must for elastic regional clusters and for forwarding TCP in a way that preserves the client IP and lets you add nodes without downtime or TCP resets. OVH cloud doesn't implement the LoadBalancer service type at all. So the choice in 2020 is really just Google, Amazon, Azure with their Rolls-Royce pricing. The cost difference between them is negligible. And then there are confidential free credits for startups. So sad...


All of AWS's VPC features are free. You get Security Groups, Subnets, Route Tables, all kinds of shenanigans in a very stable way, with basically no incidents that I can remember.

Except those are not free, they're just charged for separately under "egress transfer costs". At the end of the day, the engineers on the networking teams at AWS still have to get paid, and that service probably also needs to run at a profit. Seen in this light, the costs make more sense to me than the usual simple view of "but bandwidth transfer costs are cheaper everywhere else!!!"


The amount of work required from AWS engineers to set up security groups, routing, etc is not proportional to the amount of traffic your server produces, but is charged as if it were.

This requires every egress gigabyte to generate a lot of money to be worth it. If you serve high-traffic applications, like video, you'd have to be Netflix to make the economics work.
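
To put a rough number on that (the price here is an assumption, roughly the early-2020 first-tier internet egress list rate of ~$0.09/GB; adjust for your region and tier):

  # Back-of-the-envelope: what serving traffic out of a major cloud costs,
  # assuming ~$0.09/GB first-tier internet egress (early 2020, hypothetical).
  EGRESS_USD_PER_GB = 0.09

  def monthly_egress_cost(gb_served: float) -> float:
      return gb_served * EGRESS_USD_PER_GB

  # A modest video/download service pushing 50 TB/month:
  print(f"${monthly_egress_cost(50_000):,.0f}/month in egress alone")  # ~$4,500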


NAT gateways (a required component if you want real security) cost like $30/month each and you need at least 2 for HA.
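
For a rough sense of scale (rates below are assumptions based on early-2020 us-east-1 list pricing: about $0.045 per gateway-hour plus $0.045 per GB processed):

  # Estimate monthly NAT Gateway spend: an hourly charge per gateway plus a
  # data-processing charge on every GB that flows through them (assumed rates).
  HOURLY_RATE = 0.045       # USD per NAT Gateway hour (assumption)
  PER_GB_PROCESSED = 0.045  # USD per GB processed (assumption)
  HOURS_PER_MONTH = 730

  def nat_monthly_cost(gateways: int, gb_processed: float) -> float:
      return gateways * HOURLY_RATE * HOURS_PER_MONTH + gb_processed * PER_GB_PROCESSED

  # Two gateways for HA, 500 GB/month flowing through them:
  print(f"${nat_monthly_cost(2, 500):.2f}")  # ~$88.20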


Well, I’ve got bad news for you, if you think NAT Gateways are required for “real security”, just wait until you enable IPv6 on AWS.....


Could you elaborate on this?


When you enable IPv6, there is no NATing. All of the IP addresses are public. You can use an egress-only internet gateway.

https://docs.aws.amazon.com/vpc/latest/userguide/vpc-nat.htm...

But technically, your EC2 instance never has a public IP address. You can see for yourself by assigning a public IP address to an ENI attached to your instance and doing an ifconfig.

An Internet Gateway is a one-to-one NAT.
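
For anyone curious what that looks like in practice, here's a hypothetical boto3 sketch (the VPC and route table IDs are placeholders): create an egress-only internet gateway and route outbound IPv6 through it, so instances can reach out but nothing can initiate connections in.

  import boto3

  ec2 = boto3.client("ec2")

  vpc_id = "vpc-0123456789abcdef0"          # placeholder
  route_table_id = "rtb-0123456789abcdef0"  # placeholder

  # Create the egress-only internet gateway for the VPC.
  resp = ec2.create_egress_only_internet_gateway(VpcId=vpc_id)
  eigw_id = resp["EgressOnlyInternetGateway"]["EgressOnlyInternetGatewayId"]

  # Send all outbound IPv6 traffic through it; inbound connections are blocked.
  ec2.create_route(
      RouteTableId=route_table_id,
      DestinationIpv6CidrBlock="::/0",
      EgressOnlyInternetGatewayId=eigw_id,
  )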


Interesting, I did not know this is how IPv6 worked. Need to investigate this more.


I've been a Linode customer for the better part of a decade. Primary reason is that their egress is quite generous compared to any other provider I've looked at. Also, their customer service is exceptional and over the years the hardware has been upgraded without being charged more money.

They're now rolling out a Kubernetes service which I'm in the beta of.

Been working on migrating a high egress service over to k8s for the last month or so. So far so good.

I'd recommend you take a look at Linode's offerings.


Not everyone cares about egress costs.

Our product is a log processing/analytics tool. Customers pay for a certain period of time for which we'll hold onto their logs for search and analytics, after which point the logs are expired and then deleted. So our ingress-to-egress ratio is very, very high, and egress is practically a rounding error on our AWS bill.

I've seen some stupid shit. Like security architects requiring employees to always but always be tunneled in with a VPN, without considering what the egress cost is of an employee surfing YouTube through the VPN.

At the end of the day, you are responsible for your architecture choices. As part of your architecture planning, you should design an architecture with maintainable costs, in particular, one where the cost of adding additional users doesn't outweigh the revenue those users bring.


> I've seen some stupid shit. Like security architects requiring employees to always but always be tunneled in with a VPN, without considering what the egress cost is of an employee surfing YouTube through the VPN.

Actually the cost of this is pretty small. I tend to spend a big part of my day tunneling through a VPN hosted on GCP and almost always have a mix of some sort playing in the background on YouTube, i.e. fairly high use. Total cost is ~£5/month. It's not free, but I suspect most companies have bigger fish to fry.
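
A quick sanity check on that (all numbers below are assumptions, not measurements; GCP first-tier internet egress is taken as roughly $0.12/GB):

  # Background audio/video streamed through a GCP-hosted VPN during work hours.
  HOURS_PER_DAY = 6
  WORK_DAYS_PER_MONTH = 22
  AVG_MBIT_PER_SEC = 1.0      # assumed average stream bitrate
  EGRESS_USD_PER_GB = 0.12    # assumed GCP first-tier internet egress price

  gb_per_month = HOURS_PER_DAY * WORK_DAYS_PER_MONTH * 3600 * AVG_MBIT_PER_SEC / 8 / 1000
  print(f"{gb_per_month:.0f} GB/month -> ${gb_per_month * EGRESS_USD_PER_GB:.2f}")
  # ~59 GB/month -> ~$7.13, the same ballpark as the ~£5/month above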


I have a feeling (no evidence) that YouTube's systems are smart enough to figure out when a given video clip is a static background for the entire clip, or when playback is in a background tab, and drastically cut down on the amount of video data sent. After all, YouTube has high enough egress costs of its own, and anything to optimize down the egress would, in aggregate, save YouTube a bundle of money.


Fun fact: a major reason Google survived in the early days was that they negotiated much cheaper ingress than egress, and then ingressed the whole web while serving "10 blue links", at a time when data center clients were expected to run and pay for high egress:ingress ratio servers.


EDIT: As almostdigital mentioned here, Scaleway supports Kubernetes, their load balancing works the way it should, they're based in the EU (France), and the pricing is very nice!


For HTTP, an external load balancer/CDN such as CloudFlare or Fastly could easily fulfill that role, though you obviously have to jump through a few more hoops to get there, since it's not built in. But with some of them you also get things like TLS termination, DNS hosting, advanced rewriting, caching, and DDOS protection.


You seem to think there's some kind of disconnect between egress pricing and whether the network has a functional backbone with load-balancing infrastructure.

If anything, egress pricing is too low in the vast majority of cases, and it really shows when customers start using it.


It doesn't seem to include Digital Ocean in the comparison.


Author here -- I can add DO to the comparison today, I'll ping here once I have done so!

---

EDIT: Done!

I have updated the notebook with the Digital Ocean offering using their General Purpose (dedicated CPU) droplets.

The major takeaways for DO are that they:

  - Also do not charge for the control plane resources
  - $/vCPU is less expensive than the other providers
  - $/GB memory is more expensive than the other providers
  - No preemptible or committed use discounts available
For smaller clusters and/or clusters running CPU bound workloads, DO looks like the most affordable option!
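
For anyone who wants to reproduce the per-resource comparison without the notebook, here is a minimal sketch of the idea: back out implied $/vCPU and $/GB prices from two machine sizes of the same provider (the specs and prices below are made up, not real list prices):

  import numpy as np

  # Rows: two hypothetical machine types; columns: vCPU count, GB of RAM.
  specs = np.array([[2.0, 8.0],
                    [4.0, 8.0]])
  prices = np.array([40.0, 60.0])  # hypothetical USD/month

  per_vcpu, per_gb = np.linalg.solve(specs, prices)
  print(f"implied $/vCPU: {per_vcpu:.2f}, implied $/GB RAM: {per_gb:.2f}")
  # implied $/vCPU: 10.00, implied $/GB RAM: 2.50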



Hmm... It looks like for smaller machine types the Packet bare metal machines are quite a bit more expensive than the equivalent cloud provider VMs, but as you move to larger machine types Packet's pricing doesn't continue to grow linearly (making them more competitive).

Also, I would need to do some further research to understand how to fairly compare the physical CPU cores on the bare metal systems w/ the vCPUs offered across the major cloud VMs.
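
One crude way to normalize (an assumption, not a benchmark: on the big clouds a vCPU is typically a single hardware thread, so a bare-metal core with SMT enabled is roughly two vCPU-equivalents):

  def cost_per_vcpu_equivalent(monthly_price: float, physical_cores: int, smt: int = 2) -> float:
      # Treat each SMT thread as one vCPU-equivalent (rough assumption).
      return monthly_price / (physical_cores * smt)

  # Hypothetical 16-core bare-metal box at $350/month:
  print(f"${cost_per_vcpu_equivalent(350.0, 16):.2f} per vCPU-equivalent")  # ~$10.94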


Digital Ocean is still significantly cheaper (unsurprisingly). They don't charge for the control plane, so you just pay the normal prices for the droplets and resources you use. It's well integrated, allowing K8s to provision load balancers and volumes, and the Terraform provider for it works well.

My (admittedly small) cluster of 3x 4GB droplets, an external load balancer, and enough volumes for logs, databases and filesystems costs about 70 USD/month. It's been absolutely rock solid too. I have very few minor gripes and a lot of positive things to say about it.
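
For reference, a rough reconstruction with assumed early-2020 DO list prices (these are assumptions; check current pricing):

  droplets = 3 * 20        # 3x 4 GB / 2 vCPU standard droplets at ~$20/month each (assumed)
  load_balancer = 10       # one small DO load balancer, ~$10/month (assumed)
  volumes = 50 * 0.10      # ~50 GB of block storage at ~$0.10/GB-month (assumed)
  control_plane = 0        # DO does not charge for the control plane
  print(droplets + load_balancer + volumes + control_plane)  # ~75 USD/month

Which lands in the same ballpark as the ~70 USD/month quoted.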


Isn't it more limited though, e.g. with auto-scaling not available for nodes, but only for pods?


DigitalOcean now has node auto-scaling as well [1]. It was released very recently; it was not available in the first general release.

[1] https://www.digitalocean.com/docs/kubernetes/how-to/autoscal...


Yes, it is absolutely more limited. That, single-IP load balancers, and no direct equivalent of VPCs spring to mind as the biggest differences. AWS still makes a lot of sense in a lot of cases. It is worth noting DO has a decent API, so it wouldn't be _that_ hard to implement autoscaling yourself if you needed it.


What’s the story been like for automatically provisioning TLS certificates for your load balancer?


I don't know about terminating at the load balancer level, but it works fine at the ingress level (HTTP router) with cert-manager, nginx-ingress-controller and the Ingress definition.


That's exactly how I manage it too. It means there only needs to be one load balancer per cluster, and adding a new SSL cert is just a matter of adding a couple of lines to the ingress config.


Load balancer certs via annotations are supported, but they're a bit iffy when pairing with controllers like ambassador, since ambassador expects to own TLS termination (although the ambassador docs do say this is configurable). https://www.digitalocean.com/docs/kubernetes/how-to/configur...


aside: ambassador definitely supports external TLS termination (tested with AWS ELB).


Ah good to know, thank you!


DO is my absolute favorite. I really think they could be a long term winner. Their interface is so much nicer than the competitors, in my opinion. I'm not even currently a customer, let alone a shill.


I agree that DO is awesome. I'd argue though that they can make a better UI because they offer less. Everything is a little simpler. It would be hard to condense AWS into a similar type of interface.

Having said that, DO is enough for virtually everything I've ever worked on, and the user experience and price are so much better. They're a clear winner for almost everything I do these days.


I agree, but that's part of the charm to me. I only use what, 4 or 5 things in AWS, but each login is information overload. Having to Ctrl-F for what you are looking for is not an ideal experience.

Whether a conscious decision or not, I think offering what the 80ish percent (just a guess) actually use, and streamlining it, is the right decision.


AWS could really use a dashboard where you can pick the components you want to see and only those show up. If I only use S3 and EC2 I shouldn’t have to search for those two products every time I log in.


This is what I don't understand. AMZ is one of the richest companies ever, but everyone agrees their AWS interface is terrible. It looks like an engineer wrote it 20 years ago and nobody ever bothered to refactor it. Just showing what you're actually using by default would be a 1000% improvement. I don't even have admin access at my current company, quit showing me things I won't and can't use.


Look at Amazon.com. Same deal. Nobody wants to be the person that redesigns the UI and sees sales drop off by 0.001%. I think they're afraid to touch what made them #1, even if it's objectively terrible.


The console already shows the five most recently used components when you log in.


Yet somehow it always seems to forget.



Yeah I was very disappointed to see that this is limited to the "regular" three...


I went back and added in Digital Ocean per the parent comment's request!

The article and Jupyter Notebook now reflect that change.


Because DO is crap and they treat their customers like crap. Constantly breaking SLAs and revoking enterprise accounts.


Scaleway gives you a k8s cluster starting from 40 EUR per month


Thanks for this! I've looked at Scaleway and so far I like it a lot, this looks like the end of my months of suffering. It seems to fit my needs perfectly. Finally a smaller cloud provider doing it right!


They allow the DEV1-M boxes now, which are 8 EUR a month.


Only Azure doesn’t charge for the k8s control plane; that is the most surprising thing for me.


We don't know how long that's going to last. Google didn't charge for control plane either, until recently. So Azure couldn't charge, in order to be competitive. Now that Google has started charging Azure may start too.


Azure also doesn't have an SLA, so that's why they don't charge.

Google started charging after they added an SLA.


Makes sense, the point of an SLA is that you agree to pay back the customer's money if you don't meet it. What does it mean to have an SLA for a free product?


I'm not familiar with the space so my question might not be that relevant - where does OpenShift fit in all of this (I still struggle to differentiate it from Kubernetes) and is there any merit to IBM trying to sell it so hard?


A few cool specific things that OpenShift does that others don't:

1. ships a built-in Ingress controller that has smarts like blocking subdomain/path hijacking across namespaces. Someone's dev namespace can't start adding its Pods to a production API's ingress path

2. the oc CLI has first-class support for switching namespaces without bash aliases

3. RBAC logic is implemented such that, as an unprivileged user, `kubectl get ns` lists only the namespaces you have access to instead of failing with a "not allowed to list all namespaces" error

4. first-class representation of an "image stream", which, given a container image location, will cache it locally, emit events like "do a rolling deployment when this changes", and provide a few other very simple but logical helpers

Plus all the other top-level features like Operators and over the air updates. I think seeing some very specific wins can help folks understand that the little things matter just as much as the big ones.

disclosure: work at Red Hat


OpenShift wraps around Kubernetes, with some of their own special offering stuff on it. Generally, plain K8s is a building block - Red Hat made OpenShift with a bunch of opinionated choices, geared towards enterprise deployments. Some of those choices later migrated into Kubernetes itself (OpenShift's Route inspired K8s' Ingress), and some things OpenShift cribs from K8s (Istio becoming part of OpenShift by default in OS 4).

Generally OpenShift heavily targets enterprises as "All-in-One" package. Some of that works, some doesn't, but honestly it's often more a case of the IT dept that manages the install ;)

Except installing OpenShift. That's horrific. Someone should repent for the install process, seriously.


Even with OpenShift 4? I thought it was pretty nice and straightforward, to be honest...


I have yet to touch OpenShift 4 - every environment that used OpenShift that I worked with professionally, except for some testing runs, was air-gapped from the internet to some extent, something that is not supported on OpenShift 4 and which was treated as a crucial requirement by the customer deploying OpenShift.


Air-gapped install is now available in 4.3 (although it has some rough edges that will be addressed in 4.4).


Oh, that's good news. Just this Friday, when I first looked into the OpenShift install, it looked like it wasn't even in the plans, so I might have hit older docs than I intended.

Makes for higher possibility that $DAYJOB upgrades to OpenShift 4.x, but then, we would rather get rid of our (intra-group) provider and their openshift environments...


The OpenShift 3 way of using Ansible playbooks was bad, and is now gone with OpenShift 4. It's a binary installer that you run. Much easier IMHO.


It's the expensive k8s distribution for enterprises with large IT budgets. It adds enterprise-focused add-ons and is very opinionated. Its primary target is people who have a large existing IT staff dedicated to running on-prem / self-hosted, but lack the expertise to run a "community supported" k8s distribution on their own, and maybe have no existing CI/CD workflows yet.

If your problem is "I have a BigCo with my own infra running RHEL" then it's almost definitely what you want. It follows the old saying of "nobody ever got fired for buying IBM." It gives your boss a vendor to yell at when things go wrong and its add-ons can provide an onramp for enterprise-type ops and devs who are still unfamiliar with CI/CD and containers.

Where it's not very good is if you're not a BigCo. If your org has one or more of the following characteristics consider other distros / managed cloud hosting first: a nimble ops team, existing cloud buy-in, a limited budget, a heavy "dev-opsy" culture (i.e. existing CI/CD/container processes), a desire to avoid vendor lock in, or less conservative management.


I work for a company that recently reconsidered our stance on OpenShift. Long story short, we looked at vanilla k8s rather fondly because of nearly all of the counter-points. The factor that brought us back to OpenShift was the secure-by-default aspect and that some of the training wheels (OCP console) couldn't be replaced by vanilla in a secure way.

Here's what we do:

- We don't use "develop on OpenShift" features like Eclipse Che or on-demand Jenkins nodes to build a project

- We don't use OpenShift-specific resources if there's _any_ alternative in the vanilla kubernetes resource definitions (e.g., use Ingress instead of Route)

- Use outside CI/CD to handle building and packaging of the applications, then deploy them with Helm like any other Kubernetes cluster

- Use the OCP console like a crutch of last resort, preferring `kubectl` whenever possible

All of this helps avoid vendor lock-in as much as possible while still taking advantage of the secure-by-default approach to a kubernetes cluster.

Talking with Red Hat engineers, it sounds like the OpenShift-specific things are contributed upstream and, while they may not become available by exactly that name and syntax, essentially the same functionality does come into vanilla Kubernetes. Routes inspired Ingress resources, for instance. The official stance is for OpenShift users to prefer the vanilla resources because the OpenShift-specific ones are intended to be shims.

(not a Red Hat employee, just work for a company that is a customer)


I don't feel like any of the responses really answered your question well, so I'll take a stab. Up front disclaimer tho: I work for Red Hat and work on OpenShift nearly every day.

OpenShift in general I don't think can be compared here, because most of the time OpenShift isn't a cloud product. There is OpenShift online, which I would love to see improved, but it's a minority of OpenShift uses. The pricing there isn't direct apples-to-apples either, since OpenShift adds a lot of value on top of "raw k8s" (there's really no such thing as "raw k8s" since k8s is a platform for platforms, and every cloud vendor adds stuff to it, but it's a useful simplification for comparison). OpenShift adds some non-trivial features to Kubernetes and they aren't just plucked from existing projects and bundled. Some major K8s features started out as OpenShift features and got merged upstream (Ingress, Deployments, etc). Red Hat innovates and improves the distribution rather than just repackaging it and slapping on a support contract. Red Hat also tests and certifies other products so you have confidence that they'll work together, which is important for decision makers that don't have the technical depth to evaluate everything themselves.

> is there any merit to IBM trying to sell it so hard?

Red Hat is trying to sell it hard, and technically Red Hat is a subsidiary of IBM, so what you say is not technically wrong. However, it's not really accurate either. People that work for "IBM" don't really care much about OpenShift; people that work for "Red Hat" do. It's an amazing product and is getting better every day, and we see it as a major contender in the Kubernetes and PaaS space. Internally we don't think of ourselves as "IBM." The two companies are quite separate and are being kept that way. Over time we will cross-pollinate more, but that's a ways down the road IMHO.

If we `s/IBM/Red Hat` tho, I probably shouldn't answer regarding if there's merit to selling it. I do, but then I wouldn't be working on it if I didn't think that. I am not a fan of our pricing model, and hope that changes, but nobody in pricing cares what I think :-)

If you are an enterprise customer, I think OpenShift is a great buy. If you're a startup, it probably isn't (at least not yet. I'm hoping to change that in the future).


OpenShift is Kubernetes, just like RHEL is a Linux distribution with support for enterprises. OpenShift makes an opinionated choice about what they bundle (distribute) with vanilla Kubernetes. For example, Istio was chosen as the service mesh distributed with OpenShift 4.


I agree with you here, but do think it's worth pointing out that OpenShift adds some non-trivial features to Kubernetes and they aren't just plucked from existing projects and bundled. Some major K8s features started out as OpenShift features and got merged upstream (Ingress, Deployments, etc). Red Hat innovates and improves the distribution rather than just repackaging it and slapping on a support contract.


Too bad AKS is just terrible.

Slow provisioning time, slow PVCs, slow LoadBalancer provisioning, slow node pool management, plus non-production ready node pool implementation.


Agreed, not below usable though.

Some more: rolling upgrades of k8s (said not to affect the uptime of the cluster) not actually being rolling, allowing upgrades when the service principal is expired, thus preventing the nodes from being added to the LB, and certain AKS versions not being upgradable, requiring you to recreate the cluster from scratch...


Does anyone have experience with OVH's managed k8s offering? I've had good experiences with them in the past on pricing/quality.


I tried it out briefly and it seemed to work well. I never went to prod with it tho. I also didn't try out the LoadBalancer so I can't say how easy that would be to use. I've heard that cost can unexpectedly jump, so read the docs before you get too deep into it[1]:

I now have an OpenShift cluster that I do testing with, but if I didn't I'd probably use OVH k8s in dev because it does seem by far the cheapest.

[1] https://docs.ovh.com/gb/en/kubernetes/using-lb/


As someone who manages a production cluster, I spend about 1% of my time worrying about the control plane. It’s trivial to get it running and keep it running now. It’s all the stuff you build on top of k8s that’s the hard part. I don’t see much value add to eks, personally.


Do you use something like kops for setting up and maintaining your cluster?

Most of my direct experience with Kubernetes has been on GKE, but I have been meaning to work through https://github.com/kelseyhightower/kubernetes-the-hard-way to gain more appreciation for what is going on behind the scenes.



At the low end it’s worth considering Fargate distinct from EKS. You don’t need to provision a whole cluster (generally 3 machines minimum) and can just run as little as a single Pod.


I tried Fargate and found it to be crappy. It is very hard to use. It is proprietary, so your app will not be portable, and your knowledge and experience will not be portable either. If you use Kubernetes there is tons of tutorials, your app becomes portable across clouds and your knowledge is portable from cloud to cloud too. GKE only costs around $60 per month for a single-machine "cluster".


I use Fargate and am pretty happy with it. I don't need big scale-out - it supports $1M/year in revenue, so not huge, but I LOVE the simplicity.

I just have the CLI commands in my Dockerfiles as comments, so once I get things sorted locally using Docker I update the task with some copy/paste. I only update occasionally when I need to make some changes (locally I do a lot more).

The one thing: I'd love to get my Docker image sizes down - they seem way too big for what they do, but it's just easier to start with full-fat images. I tried Alpine images and couldn't get stuff to install/compile etc.


You should look into multistage Docker builds; they let you still use a full-fat image for your build but then leave all the build tools out of your final image.

I liked jpetazzo's post on the subject but there are plenty to choose from https://www.ardanlabs.com/blog/2020/02/docker-images-part1-r...


Someone else suggested the same thing actually. Easy to get lazy when it "just works" and the internet is 1 gig at home and at the office - you can see how bloat just builds up.


What is proprietary about Fargate? It's containers. I did not find any experience/knowledge (other than the basic knowledge of navigating the AWS console) that wouldn't transfer to any other container service.


AWS console is the crappy part. Azure and Google have much better GUIs. And here's the proprietary part: https://docs.aws.amazon.com/AmazonECS/latest/developerguide/...

For contrast, you can manage a Kubernetes deployment using standardized yaml and kubectl commands, regardless of whether the application is running on localhost (minikube), on Azure or on GKE.

BTW, AWS Lightsail has decent GUI. Alas, it doesn't support containers out of the box. The best support for Docker image-based deployment is Azure App Service.


I'm still not seeing the difference. As pointed out, what you linked is for ECS. That has nothing to do with Kubernetes, so I'm not sure why you're comparing the things on that page to kubectl commands on GKE or Azure. Of course you cannot use kubectl on ECS, because ECS has nothing to do with kube.

When you are using actual EKS (with or without Fargate), you certainly can use standardized kubectl commands.

The only "proprietary" things I see in your link is the specific AWS CLI commands used to set up the cluster before you can use kubectl, but both Azure and GCP require using the Azure CLI and gcloud CLI for cluster deployment, too. There's also setting up AWS-specific security groups and IAM roles, but you have to do those same things on GCP or Azure, too, and both of those have their own "proprietary" ways of setting up networking and security, so I don't see the differentiating factor.


> here's the proprietary part: https://docs.aws.amazon.com/AmazonECS/latest/developerguide/....

That's ECS, not EKS. Two different products.

The EKS documentation is at https://docs.aws.amazon.com/eks/latest/userguide/fargate.htm...

> For contrast, you can manage a Kubernetes deployment using standardized yaml and kubectl commands, regardless of whether the application is running on localhost (minikube), on Azure or on GKE.

Likewise for EKS.


I was replying to this:

> At the low end it’s worth considering Fargate distinct from EKS.


Right, and you linked to the documentation for ECS on Fargate rather than the documentation for Kubernetes Fargate, which is what was being talked about. Again, two different products.


How is Fargate “proprietary”? It runs Docker containers.

But if you are at any type of scale, the last thing on your to do list is “cloud migration”.


Just curious - what's wrong with buying a large instance (24 cores) and running it for < 10,000 users? Kubernetes feels like an insane complexity that doesn't need to be taken on and managed. You're gonna spend more time managing Kubernetes than writing actual software. Also, it feels like if something goes wrong in prod with your cluster - you're gonna need external help to get you back on the feet.

If you're not going to build the next Facebook, why would you need so much complexity?


> Kubernetes feels like an insane complexity

There's two things there - the complexity of engineering the cluster itself, and the complexity of using it.

The former is where the pain is. If you can remove that pain by using a reliable managed offering, it changes the perspective a bit.

The complexity of using it is also non-trivial - you have to learn its terminology and primitives, understand the deployment and scheduling model, know how volumes are provisioned, aggregate logs, etc.

However, the ROI for learning that complexity can be pretty good. If you get comfortable with it (maybe a month or so of learning and hacking?) you get sane rolling deployment processes by default, process management and auto healing, secrets and config management, scheduled tasks, health checks, and much, much easier infrastructure as code. Which means if things do go really sideways, it's usually not that hard just to stand up a replacement cluster from scratch. With a bit more reading and some additional open source services, you get fully automatic SSL management with Let's Encrypt too.

All that said, I absolutely agree with you in principle. No one should overcomplicate their deployments for no benefit. It's just worth reflecting on what those benefits are. Kubernetes had a bunch of them - and whether they're worth the effort of getting to know it depends very much on the system(s) you're running.


I used to work for a company that had less than 130 employees and none of the software services were exposed to public - the total user count was 130 users. And we had a Kubernetes cluster.

Thanks for an insightful response - Don't you think that a lot of whizbang software developers just want to use the latest and greatest buzzword thing, whatever that might be, to get some cool points? As I grow older, I see an increasing lack of objectivity and decreasing attention to KISS principles.


> Don't you think that a lot of whizbang software developers just want to use the latest and greatest buzzword thing

Yes, absolutely, and it drives me up the wall. I've seen some incredibly unsuitable technology choices made, the complexity of which have absolutely sunk productivity on projects.

That said, it's easy to become cynical and associate anything that's become recently popular with hype-driven garbage. But that can blind you to some really great new stuff too. I tend to hang behind the very early adopters and wait to see how useful new tech becomes in the wild - the "a year with X" style blog posts tend to be really informative.


What's a good recent example of hype-driven software? The only good example I can think of is MongoDB. All the other hyped techs in recent memory I can think of were all objectively good: Nodejs, Reactjs, Docker, Kubernetes, etc. In the same vein, I laugh every time someone brings out "javascript framework flavor of the month". That hasn't been true for 5 years at least.


I think React would be a pretty good example of the kind of thing I'm talking about, actually. Just to be clear: I like React and I'm building a product with it right now of my own free choice. It does what it was built to do very well.

The point I'm trying to make is that people often think in absolute terms. The problem isn't thinking "React is good for SPAs" - it's thinking "React is good for all websites". I've come across a number of engineers now who genuinely think React is a good fit for e.g. totally static sites, "because it's fast".

I've come across similar persistent beliefs around NoSQL databases. Some teams wouldn't begin to consider an RDBMS for their primary data store, even while their data is heavily relational - because "SQL databases don't scale".

It's not that technologies are good or bad - it's the blind belief that a given technology is best for all situations. The hype drives the development. Not of the underlying tech itself, but in the teams using that tech.


Yeah, I agree web dev has settled, IMO, on React/npm/TypeScript/webpack. I like the stability. Hooks are there for some excitement though!


If you switch companies every two years for your first ten years, you make 50% more than someone who changed once. Companies have set it up so you must optimize your resume to get a competitive salary. Now, start giving a 10% yearly raise as standard and maybe employees can afford to work on simple solutions.


I don't know. I'm a pretty big fan of staying with a company that treats me well and pays me well enough. I don't need a 300k / year salary when I have a team that respects me and that I respect and we work on interesting problems together.


A company that respects you wouldn’t pay you under market value.

If the same company had to hire someone else with your same skillset as they expand or replace a coworker, they would pay them market value. They would have to offer competitive salary. When that happens, you find new employees making more than equally skilled existing employees. It’s called salary inversion.


Well, some companies don't make enough to pay you market value;

but the company I worked for was not paying newer employees higher wages than me, and I got pretty regular, pleasant raises.


Why would I sacrifice my finances because a company didn’t have a business plan that allowed them to pay their employees market value? If the owners sold the business, would they give you part of the proceeds?

I bet they talk about the company “being like a family”, don’t they?


Well, let’s see. We did hackathons and competitions with nearly all employees in the company and many of the hackathons resulted in products or internal changes that went on to live in production and help grow the business. I had stock in the company, as well, so my improvements to the company could impact my own money.

It remains the best company I’ve ever worked for. Small, and my opinion and actions actually had real impact on core features of the company, so :)

Worth it, I think.


Now if you have significant equity in the company, that’s an entirely different story....


I didn’t have significant equity. I had a few thousand dollars.

I did have direct influence into huge and important parts of that business, though and I really, actually was respected and given broad rein in what I could do. My coworkers and I also tended to have a great time working, and I had direct lines to the CTO. We talked often.


  If you're not going to build the next Facebook, why would you need so much complexity?
You don't. I think this is a point people have been trying to make recently. Kubernetes makes sense at a certain scale, but for smaller startups it maybe shouldn't be the go-to.


So if Kubernetes is too complex, then Terraform is a no-no too?

I don't find them complex at all. You just tell the tools to be in a specific state and the tool applies the necessary changes. Server templates. Provisioning. Orchestration. etc.


I don't think there's a comparison there (or I'm just unsure of the point you're making with that statement). I agree, they aren't conceptually complex, but Kubernetes is a large scheduler that _definitely_ benefits from having a dedicated team managing it.

That being said, I always recommend using a tool like Terraform to back up infrastructure and the likes.


Maybe I didn't do enough with Kubernetes to need a dedicated team hmm.

The point I wanted to make is that my opinion is a bit different. Being able to declare how state should be instead of doing it imperatively/with configuration management is just something I enjoy and which I think does not cost much more in comparison.

That is why I wondered: why not use it as a small startup?


You definitely still could if you feel the maintenance is manageable. This was just my experience :) I chose to go with something like Cloud Run.


Terraform and Ansible are not no-nos! I use them for my small product and they are reliably good! Kubernetes has much more complexity and a steeper learning curve compared to those tools! But it has its own benefits. Apply what is necessary rather than what is fancy!


Can you educate on at what point Kubernetes actually makes sense? Is there a rule of thumb?


When you need to run more than 1-2 applications, especially in HA, and you're cost-conscious so just throwing tons of machines in autoscaling groups doesn't work for you.

Said applications don't have to be applications that you're writing as part of your startup. It can be ElasticSearch cluster, Redis, tooling to run some kind of ETL that happens to be able to reuse your k8s cluster for task execution, CI/CD, etc. etc.


I don't think I'm in a position to answer this (heavily experienced outside the control plane, but not in), but if I had to answer I'd say:

Once you don't personally have your hand in every application your company is running (in addition to the points the other comments have brought up).


Because it removes both provisioning and deployment concerns when you can build a container and then just tell Kubernetes to deploy it onto the cluster. There's not much that goes into spinning up these managed Kubernetes clusters. Most of it is telling them what classes of instances you want created.

When you buy a large instance, you still need to set up the instance and tweak it to your application's needs. You then need to babysit this node.


>When you buy a large instance, you still need to set up the instance and tweak it to your application's needs. You then need to babysit this node.

It's called "use kickstart"


I don't know why you are being down-voted, you are not wrong - Kubernetes is not something I would roll out at a start of a project. I think people are just excited to try it out so they often overlook the operational side of things.


Considering that once you get through possibly high upfront cost, it greatly simplifies operational side, I often get the feeling that the "you are not Google" crowd misses the operational side completely, or looks at it through rose-tinted glasses.


Absolutely agree. The point of Kubernetes is to simplify operational side. I use it for my hobby projects. There is some learning investment needed, but after that it simplifies things so much. You can use Kubernetes for simple projects that don't need to scale to the size of Google.


In another recent thread, I mentioned running ~62 apps on Kubernetes, and people asked if it could be simplified to fewer.

They are mostly plain old PHP webapps, a non-trivial amount of them Wordpress (shudder), some done in random frameworks, some with ancient dependencies, some in node.js, one was Ruby, etc. They are the equivalent of good old shared hosting stuff.

With kubernetes, we generally manage them in much simplified form, and definitely much cheaper than hosting multiple VMs to work around issues in conflicting dependencies at distro level or keeping track of our own builds of PHP.

We also run CI/CD on the same cluster. Ingress mechanics mean we have vastly simplified routing. Yes, we cheat a lot by using GKE instead of our own deployment of k8s, but we can manage that too, it's just cheaper that way.

Pretty much smooth sailing with little operational worries except the fact that Google's stackdriver IMO sucks :)


It is a single point of failure. I want multiple smaller instances even if I’m running something trivial. It is not about scale but about reliability for us. And the moment I go with multiple instances I need something to manage that mess. Kube handles it well.


I've used DigitalOcean's load balancer + multiple instances that run app containers. It works fine and without Kubernetes.

I wasn't saying that just use 24 core single instance. Perhaps I should have worded it better.


Well a managed kubernetes cluster is not any more complex than the other solutions.

You can use the GCP control panel to launch clusters, add nodes, and launch containers, then expose and autoscale them, without touching a single line of config or a terminal if that's what you like. If you launch / manage your own cluster though... it's a pain.


How do you obtain the right docker images and launch them with the right settings?

How do you coordinate rolling deployments?

(I'm not saying you need Kubernetes to do these things. But if you've written something to handle them, it is very probably Kubernetes-shaped).


It all depends on how many applications you really have to run. When you have one application, can depend on vendored services for some of the infra, or have sufficiently small requirements for extra services, it all works pretty well.

Helps if you also don't do due diligence on various things (like actually caring about logging and monitoring), so you scratch out those concerns. This is not even half sarcastic, people went pretty far with that.

You might probably still spend more time than you think on actual deployment and management due to lack of automation & standardization, but if you're sufficiently small on the backend it works.

When the number of concerns you have to manage rises and you want to optimize the infrastructure costs, things get weirder. Kubernetes' main offering is NOT scaling. It's providing a standardized control plane you can use to reduce the operational burden of managing often wildly different things that are components in your total infrastructure build - things like infrastructural metrics and logging, your business metrics and logging, and various extra software you might run to take care of stuff. It makes it easier to do HA and abstracts away the underlying servers, so you don't have to remember which server hosted which files or how the volumes were mounted (it can be as easy as a classic old NFS/CIFS server with a rock-solid disk array; you just don't have to care about mounting the volumes on individual servers). It makes it easier to manage HTTPS routing in my experience (plop in an nginx-ingress-controller with a host port, do a basic routing setup to push external HTTP/HTTPS traffic to the nodes that run it, and enjoy the bliss of forgetting about configuring individual Nginx configs - or use your cloud's Ingress controller, with little configuration difference!).

In my experience, k8s was a force multiplier to any kind of infrastructure work, because what used to be much more complex, even with automation tools like Puppet or Chef, now had a way to be nicely forced into packages that even self-arranged themselves on the servers, without me having to care about which server and where. Except for setting up a node on-prem, or very rare debug tasks (that usually can be later rebuilt using k8s apis to get another force-multiplier), I don't have to SSH to servers except maybe for fun. Sometimes I login to specific containers, but that falls under debugging and tweaking the applications I run, not the underlying infrastructure.

That's the offering - whether it is right for you is another matter. Especially if you're in a cash-rich SV startup, things like Heroku might be better. For me, the costs of running on a PaaS are higher than getting another full-time engineer... and we manage to run k8s part-time.


Single point of failure. What if your machine needs a reboot to apply kernel patches? What if your network connection breaks?

Additionally, new crazy things are emerging like service mesh (Istio, Linkerd) which make observability and security easier.

Of course, it all is getting quite complex. But it brings value in the end.


While I see a lot of derision about Kubernetes these days, if I am starting a Greenfield platform/design/product, why wouldn't I use it?

There are tremendous benefits to K8S. It isn't just hype.

On the flip side, starting out as a monolithic (all-in-one-VM) app will take significant effort to transition to microservices / K8S.

If I think I might end up at microservices/K8S, I think I might as well plan for it (abstractly) initially.


I believe this is the right way to think about it. You can start off with a relatively monolithic architecture, and then break that out into smaller microservices as needed with a much easier transition.


It will be nice to include GPUs in v2 once there is more stable support for the operators.


Yeah -- Adding in GPUs and doing a deeper dive on how using some of the low-cost VM types (w/ small but burstable CPU, etc...) impacts both cost & performance are things I hope to take a look at in the future!



