‘AWS vs. K8s’ Is the New ‘Windows vs. Linux’ (zwischenzugs.com)
202 points by puzza007 24 days ago | 110 comments



> EKS (like all AWS services) is heavily integrated with AWS IAM. As most people know, IAM is the true source of AWS lock-in

This, one thousand times over. Lots of companies have multi-cloud in their strategy, but when it's time to actually implement it they find that the cost to adopt hyperscaler #2 is the same as, or even more than, what they spent integrating hyperscaler #1. While Kubernetes makes workloads portable between vendors, organizational processes for "boring" chores like IAM, tenant provisioning and billing/chargeback aren't. Integrating these processes can get really expensive if you need to re-implement basic "enterprise" controls, e.g. the four-eyes principle/separation of duties on role assignments, for each of your cloud vendors and technologies. The IAM systems differ ever so slightly across vendors, and that brings a ton of complexity.

The larger the company, the more likely it is that the real bottleneck to cloud adoption lies in these processes, not the technology. I know it's hard to imagine if you're coming from a startup background where you can just take the company credit card and get an account on AWS, GCP or Azure in a second. But for developers in a big company, acquiring a public cloud account may take months, form after form of paperwork, getting roles put into the central corporate directory, etc. The investment in these (often atrocious) processes for hyperscaler #1 creates a big lock-in.

Disclaimer: I'm a founder at Meshcloud, we build a multi-cloud platform that helps organizations consistently implement organizational processes like IAM, SSO, landing zone configurations etc. across cloud vendors and platforms.


I work at Pivotal, we have both Kubernetes (PKS) and Cloud Foundry (PAS) offerings. With Cloud Foundry we abstracted away the IaaS pretty much entirely, including IAM and SSO, for the reasons you outline (and were doing so before Kubernetes existed). We also went out of our way to make "give the developer an account" as easy as possible. I worked in Pivotal Labs before moving to R&D; the ease of just getting something running on CF vs the various homegrown platforms I encountered was just amazing and was part of why I switched divisions.

This kind of generalised, simplified capability is quickly emerging for Kubernetes too. Knative is such an effort (we contribute there too), there are many others. What has typically been missing, in my view, is an understanding that different roles need different things from a platform. Kubernetes kinda blurs the business of being an operator with being a developer.

That's fine at a small scale. It's also workable, including some automation, at large scale if you trust everyone. But there are lots of folks in the middle who are at large scale and can't just let anyone do anything they like. They need crisp boundaries between developers and operators which are safe and easy to traverse. They also need top-to-bottom multi-tenancy with no way to opt out, which is not yet a thing in vanilla Kubernetes.

In any case, I don't think AWS is going to be wiped out by Kubernetes. But taking a counterfactual view, it seems plausible that the introduction of Kubernetes has flattened their lock-in trajectory and hence reduced their long-term profits as an area under the curve.


> But for developers in a big company, acquiring a public cloud account may take months, form after form of paperwork, getting roles put into the central corporate directory, etc.

I help companies re-architect their solutions for the cloud, and I've worked with companies who had a multi-month process for requesting and provisioning new "cloud servers". If it takes you just as long with just as much overhead to deploy a VM in Azure as it does to buy and provision a physical server, why are you even moving to the cloud?


Because the services offered after getting onto the public cloud are better than the internal ones, more transparently priced, and cheaper.


*If they've bothered to hire more than a cloud specialist. Most of the time the in-house talent won't have time to train in the specifics of the cloud being used, and things will continue as before, albeit now with cloud branding.


There is nothing cheaper about cloud than bare metal if you’re doing a “lift and shift” and you’re not willing to change your processes.


In my experience AWS is the huge waste of time. I spent years, on and off, effectively building poor implementations of k8s to make AWS digestible to developers. If you work in "IT" rather than infrastructure for a large software company, you might not have experienced this, as internal IT is more static and easier to run on AWS/VMware/whatever.

For me, k8s finally signals moving beyond that problem and onto more interesting things, as it "solves" it. Devs understand it "well enough", or can copy/paste from another project and get going without hand-holding.

People who don't understand the value of the single unified API vs. the hodgepodge of AWS garbage (don't even mention CloudFormation) are completely missing the point. Sure, it's no better for running your "IT" workloads, but that isn't what it's for.


I think your prediction that k8s facilitates a future where even more random developers are copy-pasting whatever they find on old blog posts into their company's production infrastructure is probably 100% correct. Sounds good I guess :)


I've hardly looked at k8s; getting a docker-compose file running in it seemed like a pain the last time I tried. I think there was a tool to translate it into something Kubernetes could deal with.

That said, after the last four months I've had trying to get Route53, API Gateway, Lambdas, ECS, VPCs, Cognito, DynamoDB, S3, CloudFront, etc. running in CloudFormation, I really can't imagine it's worse under any slightly involved scenario.


Translating something from docker-compose into k8s isn't generally hard. But yeah, CFN is a nightmare. I wrote a bunch of tools to try to make it suck less and contributed to stuff like cfndsl, and I could still never get it to the point where I could have developers use it without an abstraction layer.

k8s definitely helps with having all my apps' stuff in one place where it's easy to see. There are some pitfalls to avoid, though. Don't use Helm; it's bad for your health. Avoid deploying your own k8s cluster unless you really need to; just use GKE. Avoid custom resource definitions unless they are well supported, since migrating off them can be hard. Prefer tools that work off annotations (like external-dns, cert-manager and friends); see the sketch below.
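As a sketch of that annotation-driven style (hostnames and issuer names here are made up, and the exact Ingress API version depends on your cluster):

```yaml
# Plain Ingress object; external-dns and cert-manager watch the annotations,
# so DNS records and TLS certs get managed without any custom resource types
# of your own.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: myapp
  annotations:
    external-dns.alpha.kubernetes.io/hostname: myapp.example.com  # external-dns creates this record
    cert-manager.io/cluster-issuer: letsencrypt-prod              # cert-manager issues the cert
spec:
  tls:
    - hosts: [myapp.example.com]
      secretName: myapp-tls
  rules:
    - host: myapp.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: myapp
                port:
                  number: 80
```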

Of course, things get difficult when you need StatefulSets to run things like Kafka/ZK and friends, but it's definitely possible, and they run well once set up.

In my mind k8s is the only option right now that doesn't result in man-years being wasted on pointless AWS bullshit.


Why not use helm? I'm looking into spinning up (and eventually productionizing) a k8s cluster at my job and I was leaning towards using helm since some pieces that I was thinking about using are installed via helm charts (https://github.com/kubernetes/ingress-nginx for example)


If you use third-party Helm charts, you eventually need to add onto the generated objects in a way the chart doesn't support, and then you're up a creek without a paddle. This is precisely the use case Kustomize tries to fix, and it was the only real strength Helm had to begin with (i.e. the ease of installing third-party software on your cluster).

In the meantime, because you must install Helm into every namespace into which you want to install charts, it's a massive resource hog and a security risk. Charts themselves also need to be hosted somewhere, so you end up needing to install ChartMuseum, Harbor, or Artifactory (if you didn't have Artifactory already), and those have their own operational costs.
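For what it's worth, the Kustomize approach to "add onto objects the chart doesn't support" looks roughly like this (file names and the memory limit are purely illustrative):

```yaml
# kustomization.yaml -- build on top of upstream manifests you don't control
resources:
  - vendor/ingress-nginx.yaml        # e.g. a downloaded or pre-rendered upstream manifest
patchesStrategicMerge:
  - controller-resources.yaml

# controller-resources.yaml -- the bit the upstream chart didn't let you set
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-ingress-controller     # must match the upstream object's name
spec:
  template:
    spec:
      containers:
        - name: nginx-ingress-controller
          resources:
            limits:
              memory: 512Mi
```

`kustomize build .` (or `kubectl apply -k .` on newer kubectl) then emits the merged manifests.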


I thought it would be useful to add that you can also generate k8s manifests from a helm chart using the `helm template` command.

I'm in the same boat where I avoid helm if at all possible.
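For anyone who hasn't seen it, the workflow looks something like this (chart and release names are just examples, Helm 2 syntax):

```sh
# Render a chart to plain manifests locally; no Tiller needed in the cluster,
# and the output can be reviewed and diffed like any other YAML.
helm fetch --untar stable/nginx-ingress
helm template ./nginx-ingress --name my-ingress --values my-values.yaml > rendered.yaml
kubectl apply -f rendered.yaml
```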


That only works for a locally checked-out copy and not a remote repo URL, unfortunately.


Something that's not pretty but works well for me: use yarn to manage dependencies on upstream Helm chart repositories, then use kustomize to override certain things if required.

That checks out a local copy locked at a specific version, which you can bump easily, and allows overriding the template definitions on your side.

No need for additional infrastructure.
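If I've understood that right, a rough sketch of the idea (repo names and paths are hypothetical, and yarn needs a package.json in the target repo, so the chart source may need a thin wrapper):

```sh
# Pin the chart source at an exact tag; `yarn install` vendors it under
# node_modules, and `yarn upgrade` bumps the version deliberately.
yarn add my-org/charts#v1.2.3          # hypothetical repo wrapping the upstream chart

# Render the vendored chart, then let kustomize layer local overrides on top.
helm template node_modules/charts/upstream-chart > base/upstream.yaml
kustomize build . | kubectl apply -f -
```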


They're really not as bad as you've made it sound. I've been working with CF for 1.5 years, and I've done everything you've listed above (and more), with deep integrations into each other. The most difficult was Cognito, but that's because of their naming choices. Second most was ECS, but that's because of the time needed to flesh out and reason about everything.

There are plenty of examples on GitHub/GitLab for most of the services you mentioned. Like most programming languages, you need to separate the wheat from the chaff; 99% of the CF publicly available sucks. Hit me up if you need help.


I used it heavily for over 5 years. I know how to use it but that isn't the problem.

The problem is I can't give it directly to developers, and I don't want to be constantly writing/maintaining abstraction layers over CF to make it half decent. You may as well use Terraform at that layer of abstraction, because at least it frees you of some lock-in and has more sane state tracking (which is also pretty abysmal, but we aren't talking about a high bar when comparing to CFN). But I digress.

You seem to be missing the point here: it's not that you can't do things with CFN; you can. The point is that k8s allows me to administer a system that puts all of the machinery behind a nice unified API that all of my developers can consume, without requiring an SRE to make every change and hand-hold them through setting up a new service.

It may be more complex, but that complexity quickly pays off with the amount of stuff now solved in a self-service manner. Sure, some stuff needs to get much better; there are poorly understood primitives like Service and Ingress that could do with better docs to help explain to devs when you should use one and why.

Overall, things are much better than the "AWS as an abstraction" way, where everyone is clamouring to use some half-baked AWS garbage service rather than just doing the old tried-and-true thing (screw Kinesis, for example).


Yes, you're thinking of https://github.com/kubernetes/kompose
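Basic usage is a one-liner, if anyone wants to try it (file and output paths are whatever you use):

```sh
# kompose converts an existing docker-compose file into k8s manifests
kompose convert -f docker-compose.yml -o k8s/
kubectl apply -f k8s/
```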


You do not need docker-compose. You can just use k3s/minikube or k3d, so you can run your whole dev environment directly in Kubernetes.
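E.g. with k3d (the subcommand layout has changed between releases, so check your version; the cluster name is arbitrary):

```sh
# k3d runs a throwaway k3s cluster inside Docker
k3d cluster create dev
kubectl config use-context k3d-dev
kubectl apply -f ./manifests/   # same manifests you'd use in any other cluster
```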


I'm not really following; perhaps it's too early here. AWS is a cloud provider, Kubernetes is container orchestration software. Here's a crazy thing: we do K8S on AWS.


K8S is the commoditization of Amazon's compute infrastructure. This standardizes it and makes it easier to switch and to build vendor-independent tooling around it. Of course, the strategy of AWS is now to follow at least the first two parts of the old Microsoft strategy with regard to open standards: https://en.wikipedia.org/wiki/Embrace,_extend,_and_extinguis... (Hopefully not the third part.)

There is still lots that Amazon does that is unique and proprietary. And it has tons of respect in the industry.

But as more becomes standardized on Kubernetes and more becomes automated via Service Providers, it will further commoditize the market.

Google needs to do the following:
- Build a better CDN that can work with any origin.
- Work hard on getting Service Providers/Service Catalog into Kubernetes with all the latest Google services (support is very patchy at the moment).
- Buy GitLab and support the shit out of it; that is one great product.


IDK about GitLab. Google has the worst reputation for keeping products alive. That reputation, along with its privacy reputation, is going to drive adopters away, especially the ones who are avoiding GitHub because of Microsoft.


I think the comparison that the author makes is that AWS and Kubernetes are both platforms that you code your apps to, just like Windows and Linux


Even more crazy, you can put Linux on Windows!

If you take a step away from the spiral that is AWS PaaS / SaaS services, k8s isn't that dissimilar to EC2, VPC, EBS, and IAM and thus a lot of the same arguments around lock-in apply.

This will become more obvious as people put Windows VMs on k8s because it's there and they can.


That's using AWS as a VM and VPC provider. AWS doesn't want you to do that, or at least, that's not how they'd like you to use their services (IMO).


I think it's pretty clear that AWS is happy for you to use their services in all manner of ways, whether using heavy AWS-proprietary services or not. There's nothing particularly wrong with "lift-and-shift", as they call it, when you migrate an on-prem app mostly unchanged.

At least this is my impression.


Of course they would love for you to do a lift and shift; it makes them more money than if you are "cloud native".


The article sounds like a comparison of buzzwords tailored for the technically challenged. I mean, I'm pretty sure that a Dilbert strip will be created based on the reaction of the pointy haired boss to this specific article.


Exactly! This looks like an apples vs. oranges comparison.


It's apple corer vs. fruit basket.


Inaccurate analogy. Container-based compute is a very small part of the AWS ecosystem. You are missing the bigger picture. Companies are migrating to AWS for end-to-end managed infrastructure, including directory services, databases (SQL and non-SQL), managed ETL pipeline infrastructure, batch processes, backups, data processing and analytics at scale, high-durability storage at scale, a host of mobile app services, even satellite ground station infrastructure of late. This list is a subset. And all this comes with compliance and security out of the box on the infrastructure side, and a very low support overhead on the IT side compared to doing it yourself.

And finally, these are not isolated services; they can be used as building blocks or layers. For example, your AWS Lambda compute might be triggered by S3 events.


I think you're actually agreeing with the author even more. Windows has everything you need for most of the uses you would have, all of it is integrated, and the out-of-the-box experience is functional. Linux at the time was a big mess of different software with different guarantees of working, maintained by duct-tape bash scripts and custom configuration files. If you wanted something, you had to work to have it.


This was only true for end-user desktops. For technical work and servers, Windows had a few built-in things and a ton of things which could, with significant work, be installed. There was nothing like a simple apt-get install for years, which is why so many places switched rather than paying people to spend hours clicking through installers and then troubleshooting why it failed.


What did Windows server have for you out of the box vs a CentOS box?


My thoughts exactly. Insofar as Kubernetes provides one abstraction (for container orchestration) it reduces lock-in and provides standardisation for just a small part of the IT landscape, and even then there are alternatives that may be more appropriate depending on the use-case (Heroku, Docker swarm mode, ECS, Fargate, serverless).


I disagree. The only reason for AWS's slow adoption of Kubernetes was that they couldn't find a strong use case within their customer base.

Customers knew they wanted to use it, but they didn't really know how, or why it would be better than other types of container architecture.

AWS doesn't build services for customers to toy around with. Every new service is a significant investment, so the data and the use cases need to be really strong.

AWS can almost always wait for any new technology to show strong signs of adoption before doing anything in that space. Their market leadership gives them that advantage. Of course, other vendors jump into new spaces faster, because that's their only chance to close the gap.

This happens with almost everything that is new. Another example is Blockchain.


Bollocks! AWS had a few horses in this race (i.e. AWS ECS), so they waited to see if they could win. Obviously K8s wins over ECS, so now they have no choice but to embrace K8s. The last thing Amazon wants is to be easily replaceable with a different provider. I guess this is the same with the underdogs (i.e. Google/Microsoft), and that's why they do all they can to lock you in with proprietary services (e.g. App Engine, Datastore, etc.).


AWS has no other choice? I am not sure you are following what AWS does (see, for example, the MongoDB story).

>> The last thing Amazon wants is to be easily replaceable with a different provider. I guess this is the same with the underdogs (i.e. Google/Microsoft), and that's why they do all they can to lock you in with proprietary services (e.g. App Engine, Datastore, etc.).

Datastore has an S3 API and it is trivial to replace it with other products.

https://cloud.google.com/storage/docs/interoperability


Datastore can't have an S3 API and is not easily replaceable. You linked to a different product (Cloud Storage).


There is some lock-in with Google Datastore, but not with App Engine, at least with the new "standard environment" based on gVisor. I run perfectly standard Go or Python HTTP services.


The thing with App Engine is that even if you run the standard env (which is more expensive and has some limitations), you are highly likely to use Datastore and other proprietary services that are tightly integrated into the App Engine environment. The whole thing feels like bait into lock-in, just like the initial pricing, which went off the rails.


The new version of the Standard Environment removed most limitations (and you can still use the Flex Environment when required):

https://cloud.google.com/appengine/docs/the-appengine-enviro...

And using the proprietary App Engine API is not recommended anymore: “We strongly recommend using the Google Cloud client library or third party libraries instead of the App Engine-specific APIs.” (source: https://cloud.google.com/appengine/docs/standard/go111/go-di...)

You can use App Engine with a standard MySQL or PostgreSQL database and S3-compatible object storage.

Like you, I used to avoid App Engine because of lock-in, but they radically improved things with gVisor.

Yet, I agree about the price, especially with the Flex Environment compared to standard Compute Engine :-(


I feel like this is quite a cynical view. Amazon and other cloud providers want to attract and keep users by providing the best product possible. This comes in different forms: some people want easy-to-set-up databases (AWS RDS), others want auth handled for them (IAM), others scalability (App Engine), or maybe email (SES). Creating these services and making life easier for those who want them may require proprietary technology that results in lock-in. Don't get me wrong. They would love some lock-in but it is not at the forefront of their minds when creating a service.


> They would love some lock-in but it is not at the forefront of their minds when creating a service.

I really find that assertion hard to believe. AWS is notoriously expensive, and oddly enough its added value comes in the form of proprietary technologies which require non-transferable tech skills. The lock-in goes to the extreme of leading technicians to specialize exclusively in AWS services, which gives us the infamous title of "AWS engineer". This doesn't happen by chance, but by design. It's like a very expensive mousetrap: designed to help victims get in, but practically very hard, if not impossible, to get out of.


The most expensive parts of AWS for us are:

RDS - hosted non proprietary databases.

EC2 - Standard VM hosting

Redshift - a proprietary OLAP database that uses standard Postgres drivers

S3 - object storage. But there are so many S3-API-compatible storage providers that there is little "lock-in".

But the fact is that lock-in is overrated. Your CTO is statistically about as likely to move their entire infrastructure just because a few engineers promised it would be "seamless" as your DBA is to move away from their Oracle installation because developers "used the Repository Pattern to abstract database access".


I feel like people often want to have their cake and eat it too: they want both the freedom to switch services easily and ease of use. What I am saying is that ease of use sometimes comes at a cost, and that cost is lock-in.

No one is forcing you to use AWS. You could build your own private cloud datacenter, implement OpenStack, configure k8s and deploy your Docker containers. If a server fails, replace it. Install your updates, etc. To me that sounds like a lot of work (assuming I don't have a large organisation behind me), and I would rather have some of that set up for me. It's a trade-off...


What stops Google from open sourcing Datastore, or AWS from open sourcing DynamoDB? These cloud providers with proprietary services are the new Oracle. They are actually worse than Oracle, because they can cut off your access to your database in seconds.

Google open sourced k8s so that they could get a piece of the AWS pie, and then lock you in with other proprietary services.


> What stops Google from open sourcing Datastore, or AWS from open sourcing DynamoDB?

They have invested millions into creating cloud software. If they open source it, what is stopping other firms from taking that software and creating their own cloud offering?

Would you work for free? Would you open source all the work you have done?


It's 2019 and people still confuse the different kinds of "free" when talking about open source software.

(Also, yes, I open source as much of my work as possible, and I'm paid to write it)

Also, AWS is BUILT on open source; they just don't open source the extra bits they added on top. Just like how Microsoft took an open source TCP/IP stack and an open source web browser, added some stuff, and called it their own.


They are also using millions of dollars' worth of open source software.

Why would they open source it? It would only provide a way out to customers. They don't even provide a Mongo-like license.

I can't forget a session at Google I/O where they were insisting that App Engine is not open source because we are too dumb to understand it... yeah, sure. What about k8s, then? I can run it on my Mac; I've even contributed a few commits.


DynamoDB is dependent on many of AWS’s services and infrastructure.


This post reminds me of a junior developer here. Very smart and driven, but not to the point in his career where he has a full picture of what we're doing. Kubernetes is his current obsession, and he wants to use it for everything.

AWS is implementing Kubernetes to make it easier for people who use it already to migrate to their service. It's the same reason all other cloud providers are doing so.

I can pretty much guarantee that the products Amazon introduces aren't designed to lock people in or make it hard to switch; they're designed to make it easy to start using AWS and never NEED to switch. They design products and their APIs to solve problems customers are facing. This is what everyone is doing.


Author here. This is absolutely not true. I worked for an AWS customer that wanted to use it and had a crystal clear strategy for why. The idea that AWS didn't have a business case for EKS is _laughable_. Also laughable is the idea that K8s didn't have enough traction to be worth the investment.

EKS was a bolt from the blue; we'd been quietly told by senior staff that ECS would be moved to a K8s API (which we might have taken). My feeling was that business trumped the engineering view that ECS was enough, but that's nothing more than my speculation.


That you “worked for an AWS customer” that wanted it is not a valid reason for AWS to invest in the service.


Why would you think that a single AWS customer anecdote (at your 1-to-1 scale) asking for K8s is validation for building a managed K8s service at the enormous AWS scale?

I think you didn't understand my argument. I'm not saying people didn't have use cases; they likely did. But nothing at an enterprise level. Nothing at the time hinted at someone making multi-million-dollar contracts contingent on having a managed K8s service.

A single customer asking for K8s doesn't prove anything in the larger picture, a picture where you have to invest millions of dollars if you move into that particular space.

Also, please pardon my delicacy here, but what's the need for using the word "laughable"? I don't understand what's laughable about my position. I really don't get why people think that the best way to prove their point is to undermine a challenging opinion with pejorative adjectives.


This sounds... implausible, given that every metric I've seen for k8s is pretty much hockey-sticking. For example, it's the 2nd largest open source project ever (after Linux) by contributors & commits.

The OP's hypothesis is that Amazon would much prefer ECS to succeed, because it exists already and provides nice lock-in, and EKS was only launched (after considerable internal shitfights, I'd imagine!) once it became clear they couldn't just ignore k8s.


Linux isn't the largest open source project ever! Look at Android - it incorporates Linux and then adds vastly more on top.


With K8s you are not spared having to understand the cloud architecture, or containers, or Linux. You need to know all that, and then how K8s works on top of it.

Whereas with something like Lambda, in theory you can know less, because it hides stuff from you. You don't need to understand Linux to use it.

I imagine this would have limited the number of users wanting k8s.


Let's not forget that Google wanted/wants a piece of the IaaS/PaaS pie and AWS has a huge first mover advantage. Someone smart at Google said:

"Hey we have this barely usable, rocket-science level complex, container orchestration tool. How about we make it available to everyone and make sure the message is that you can deploy anywhere? Get people to look at us as an alternative instead of AWS as the default, while using a tool built by us".

Of course AWS was not excited about supporting a tool that might make it easier to leave AWS.


This is a pretty good read on Google's strategy for dealing with AWS - https://stratechery.com/2016/how-google-cloud-platform-is-ch...


Borg may be rocket science but k8s is not. Especially now that they've put a couple years into the documentation and tooling.


It's an imperfect analogy so we're all going to take what we want from it. Here's my take:

Windows enabled businesses to start doing business without a ton of r&d. It was ubiquitous and supported and functional.

At the same time, with Linux you could invest all your capex into a datacenter and 1Us, build clusters for cheap, and transfer all the money you would have spent on a complete system into the opex of developing one. In some cases it's just plain necessary, and it has some great success stories.

Sadly, I'm seeing a lot of businesses throw away time and money on NIH, justifying it with restrictive budgets. They claim they have to build all their solutions from scratch rather than pay for managed ones, because it's cheaper. Those people often don't understand the true cost and are wasting both time and money, when they could be buying ready-made solutions and start churning out products.

But then again, there are businesses that literally do not need to move fast, improve their products, increase efficiency, etc. For them, spending money on tech is more a hedge against an uncertain future than a business decision. They'll invest in anything if they think it makes them relevant.

So, Windows vs. Linux is probably less important than the underlying motives of a given business, and whether it's managed well. Anyway, neither of them disappeared, and there are still good cases for both.


> Interestingly, the reasons AWS put forward for why private clouds fail will be just as true for themselves: enterprises can’t manage elastic demand properly, whether it’s in their own data centre or when they’re paying someone else.

This really rings true for me. I feel like Amazon isn't doing enough to tackle this problem, though... Does anyone else find the standard AWS tools for demand scaling to be like working against the grain compared to the rest of the platform?


No. Lambda, DynamoDB, autoscaling read replicas on Aurora, Serverless Aurora, and even the regular old autoscaling groups are easy to automate and maintain.


If it is the new 'Windows vs. Linux', then K8s will prevail.

Because it's the server-side market :P

But seriously, there will be a better K8s eventually. For example, make K8s work like Heroku or AWS Lambda. After that, there's no point using bare cloud for most players.


Drawing a comparison between OpenStack and K8s is also invalid IMO; they operate at different levels of abstraction. You can use K8s with OpenStack as a cloud provider, as with any other cloud provider. K8s is intended to abstract away the infrastructure that lies underneath and give you a consistent way to deploy services across different cloud providers.


In fact, my team does Kubernetes on OpenStack on Kubernetes. :)

It's OpenStack on Kubernetes because we deploy the OpenStack components on a bare-metal Kubernetes cluster (with the actual payloads running in VMware), and it's Kubernetes on OpenStack because we have an add-on service that allows customers to spin up Kubernetes on the VMware VMs created by OpenStack. We also dogfood this, because the bare-metal Kubernetes cluster is not large enough to hold all our internal services.


So you're running VIO / VMware Integrated OpenStack?


No. We started with our efforts before VIO was a thing. You can see what we're doing at https://github.com/sapcc, though admittedly that org is quite busy and I don't know every part of it myself.


The point made by OP is not so much about the technology but about drawing parallels between the history of and hopes held for both projects. Both OpenStack and K8s were/are hyped as solutions to reduce vendor lock-in, although they do so in different ways.

The parallels between the two are startling; e.g. at the beginning of OpenStack, vendors tried to differentiate themselves by solving distribution/installation and plug-ins/integrations. How many K8s distros/installers are out there right now? Countless... [0] provides an interesting angle on the discussion: that vendors trying their best to differentiate themselves is what ultimately led to the demise of OpenStack. So far I'm optimistic the same won't happen to K8s. Google is investing in K8s as a vehicle to gravitate workloads toward GCP, but in order to do so it has to maintain some level of compatibility with other cloud vendors.

0: https://aeva.online/2019/03/what-happened-to-openstack/


The strength of AWS does not come from any individual offering, but from the way the many services effortlessly tie together. Job queues, email/SMS services, databases, all available under the same API. Sure, I could install my own open source versions of these, but I'd rather spend time writing code than worrying about security patches and updates.

For me the biggest game changer was realizing the true power of Lambda, and I'm not talking about using it as an HTTP-responding webhost as most blogs describe, but as a way of reacting to events within your infrastructure. Any AWS service that can generate an event can have a Lambda function associated with it. Run code on file uploads, database updates, completion of backups... the possibilities are endless.
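For concreteness, the "run code on file uploads" case wires up roughly like this (function and bucket names are made up):

```sh
# Allow S3 to invoke the function...
aws lambda add-permission \
  --function-name process-upload \
  --statement-id s3-invoke \
  --action lambda:InvokeFunction \
  --principal s3.amazonaws.com \
  --source-arn arn:aws:s3:::my-upload-bucket

# ...then point the bucket's event notifications at it.
aws s3api put-bucket-notification-configuration \
  --bucket my-upload-bucket \
  --notification-configuration '{
    "LambdaFunctionConfigurations": [{
      "LambdaFunctionArn": "arn:aws:lambda:us-east-1:123456789012:function:process-upload",
      "Events": ["s3:ObjectCreated:*"]
    }]
  }'
```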


> effortlessly

Uh...have you ever actually used AWS? There's friction all over. It's certainly not a seamless experience going from API to API.


Your comment is unproductive and unnecessarily sarcastic. It would seem obvious the parent has used the services provided by AWS, and your focusing entirely on the perceived negatives and friction when combining services is not constructive.

Are you suggesting there is less friction in using multiple pieces of software, likely self-hosted, produced by separate developers, compared to using services hosted by AWS, which let you worry only about building your product and managing the links between services? The latter almost undeniably leads to greater productivity, and if you're interested in building and delivering a software product, that must be preferable.


Which of the two is Linux? It's more like Windows vs Java.


Software engineer's fallacy: if all you have is software/programming skills, you start to look for an open source, Linux-based solution to holding wooden boards in place, one that doesn't vendor-lock you into the iron ore.

With Windows vs. Linux, the premise was that you install absolutely free software on a computer you already own and end up with a superior computing and programming environment.

With k8s: you switched to it, now what? You have to buy server hardware, rent or build a room, run wires for electricity and broadband, build cooling, physical security, etc. etc. You will stop paying an hourly rate to Amazon, but all the upfront and recurring costs will chew into that saving, and you'll question whether it was worth the drill. Above all, it will all take days and weeks, whereas with AWS it takes an intolerable 20 minutes to spin up the k8s control plane (did I remember the term right?).

Even if you do all of that and blog about your great success, what's your plan for when it goes viral and visitors flow in to leave accolades on your visionary move?

With AWS, you could just spin up more servers to deal with extra traffic. With your own datacenter, you'd have to order new servers, perhaps from Amazon's same day delivery...


But many companies and workloads don't "go viral" because they aren't VC funded social networks. They're big, stable companies in mature markets where competition means everyone is always looking for cost savings.

If your workloads are predictable and you care about cost savings, AWS/Azure/GCP aren't great deals. They're expensive; why do you think these firms love them so much? AWS made Amazon profitable for the first time ever. These are the new big platform plays, but many firms don't need them. Modern hardware is reliable, and if you don't need thousands of machines or really complex proprietary services (AI accelerators etc.), the overhead of renting a bunch of dedicated machines from a cheap provider isn't that high.


> With k8s - you switched to it, now what? You have to buy server hardware, rent or build a room, run wires for electricity and broadband, build cooling, physical security, etc etc etc.

I'm not sure that's the choice. You can run Kubernetes in the cloud, can't you? E.g. EKS on Amazon.
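For instance, eksctl (a popular community CLI for EKS) will stand one up; the cluster name, region and size here are arbitrary:

```sh
# Create a managed control plane plus worker nodes, then use it like any other cluster.
eksctl create cluster --name demo --region eu-west-1 --nodes 3
kubectl get nodes
```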


So you also agree that there's no standoff between AWS and K8s, unlike what an article titled "'AWS vs. K8s' Is the New 'Win vs Linux'" would suggest?

It also talks about building a personal data center quite a few times (at least that's how I read it).

K8s is likely to replace EC2, ECS, and all the other unusable Amazon bloat to become the default AWS API. But it's not going to damage AWS; it's going to enhance it and become part of it.


AWS is an expensive cloud provider. If you commoditize it, then Amazon will either have to charge less (= less revenue/profit) or users will switch to cheaper cloud providers (= less AWS revenue/profit). That is the point the article is making.


But even in this reply you describe the standoff very well yourself: after k8s replaces all the Amazon bloat to become the default AWS API, k8s has effectively commoditised AWS. I think that's precisely the point the OP is also making.


Running k8s in AWS (and gcp, and on-prem) is exactly what we do and we're really happy with it and have avoided lock-in. We recently moved a bunch of k8s-in-AWS hosted stuff to gcp and it was seamless.


The article forgets about Azure and GCP. I know that AWS is the most popular, but you shouldn't forget about those two. From the article it seems like there is AWS and nothing else, which is obviously not true. And Azure is backed by Microsoft and GCP by Google, so if "Bezos loses his mind" there will be other alternatives besides Kubernetes.


> And Azure is backed by Microsoft and GCP by Google, so if "Bezos loses his mind" there will be other alternatives besides Kubernetes.

You missed the point that moving from AWS to GCP or Azure is going to be _extremely_ complex and expensive if you have used all those shiny AWS platform features. It is very easy if you are using k8s.


Sure, migrating a large, complex infrastructure is easy... says no one who has actually done it.


It's funny that you mention it. In the OS wars, OS X and *BSDs were also forgotten.


Is this some kind of DevOps joke I'm too serverless to understand?


I mean, seriously.

AWS has EKS and, as far as I know, it has added many K8s features to ECS in the last few years. So even if you are one of the unlucky people who are locked in by containers, there are plenty of options to run K8s, managed and unmanaged, on AWS.

This discussion sounds like some taxi driver who insists on only driving a taxi built by themselves instead of buying a car built in large-volume production.


This illustrates the confusion around infrastructure and apps by 'devops' in a rather telling way. It's a bit like comparing MySQL or PostgreSQL to Aurora. Both give you databases and even some built-in replication technology, but only one gives you databases plus availability and scalability out of the box.

Without that, you would have to engineer your own distributed architecture and actually have the infrastructure, in terms of bandwidth, regions, networking and data centers, to scale into.

AWS gives you availability and scalability out of the box. Usually only a cloud provider can give you that, with physical infrastructure in multiple regions, bandwidth, networking, things like floating IPs and replicated storage, and, in the case of things like Aurora, the architecture and engineering. Kubernetes is a container orchestration application.


I think this is the analogy folks who really want to hang their hat on K8s would like to force. It feels familiar, and it helps justify the tremendous personal investment of time it takes for running your own K8s cluster with high uptime. The usual adage that "free software is only free if your time is of no value" really does apply here.

Assuming you're writing code that lives happily inside of a container, and assuming that you're not taking advantage of anything else AWS has to offer (which is increasingly improbable), moving from ECS to EKS or GKE isn't a big deal if you need the additional configurability that real Kubernetes offers.

Personally, I like running things on Fargate-style containers so I can keep things as serverless as possible.


Because it's a waste of time?

In the same way I spent years of my life thinking about 'the desktop' when the desktop was about to get killed by mobile, arguing about MicroVMs vs. containers is useless: they're just somewhere to run functions.


K8s vs. AWS is not a comparison, at least not a proper one.

K8s is software, but AWS is software + hardware. Your K8s cluster likely would still be hosted on AWS/Azure/GCP.

K8s lives on top of the cloud providers; they are not competitors.


While I don't really understand how 'AWS vs. K8s' is related to 'Windows vs. Linux', I guess k8s is a threat to AWS primarily when it comes to EC2, just like it is a threat to commercial Linux distros, since it makes the underlying VM and host OS totally disposable and replaceable. But what can k8s do to AWS's SaaS offerings (e.g. RDS, SNS, etc.)? Almost nothing (even though it gives you the opportunity to migrate from one SaaS to another while still having availability, with the help of Istio for instance). You can't really escape the lock-in.


> what can k8s do to AWS SaaSes (e.g. RDS, SNS, etc...)?

k8s has allowed us to stop using RDS, because we now host PostgreSQL inside k8s instead. It's a bit more overhead, but we're now portable and run in GCP too, with the same k8s manifests and almost no porting effort.

So yeah, we escaped the lock-in.

And now that we're not just tied to the AWS stuff, we're experimenting with CockroachDB and other products that aren't there "natively" in AWS. So beyond escaping lock-in, we're more ready to experiment with newer stuff.
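For anyone curious what that looks like, here's a minimal sketch of a self-hosted Postgres (real setups add replication, backups, anti-affinity, etc.; names and sizes are illustrative):

```yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: postgres
spec:
  serviceName: postgres
  replicas: 1
  selector:
    matchLabels: {app: postgres}
  template:
    metadata:
      labels: {app: postgres}
    spec:
      containers:
        - name: postgres
          image: postgres:11
          ports: [{containerPort: 5432}]
          envFrom: [{secretRef: {name: postgres-credentials}}]
          volumeMounts:
            - name: data
              mountPath: /var/lib/postgresql/data
  volumeClaimTemplates:
    # each cloud resolves its own default StorageClass, so the same manifest
    # provisions an EBS volume on AWS and a persistent disk on GCP
    - metadata: {name: data}
      spec:
        accessModes: [ReadWriteOnce]
        resources: {requests: {storage: 50Gi}}
```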


So you “escaped lock-in” at what cost? Did you save any money up front? Did you save developer time, or do you think their time is free because they are willing to work 60-hour weeks and spend time managing infrastructure instead of adding business value?

What is the fault tolerance of your hosted Postgres solution compared to Aurora?


Everyone keeps talking about vendor lock-in. With RDS, you can still export your database and move somewhere else. But we use it because we don't want to manage the database; we don't want to worry about availability zones, syncing the data, and failing over for updates. I don't want to set up k8s instead and configure it and maintain it. That stuff is boring to me.


The vendor lock-in is not the data in RDS; it's the ops code that sets up and integrates RDS with everything else you do. K8s allows you to use the same code to set up a database, or most anything else, in any cloud host.


My only ops code is a thin wrapper over the Secrets Manager API that converts a secret into something the Django ORM or SQLAlchemy can use to connect.

My pain point is DB migrations, from version X to version Y. My bad for picking Django for our internal tooling web app and going with its ORM. I'm fairly confident I could change that to SQLAlchemy easily, if I get the time for it.
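For anyone wondering, that kind of wrapper can be as small as a shell sketch like this (the secret name and key layout are whatever you chose when creating it):

```sh
# Pull the secret and assemble a standard DATABASE_URL any ORM can consume.
SECRET=$(aws secretsmanager get-secret-value \
  --secret-id prod/myapp/db --query SecretString --output text)
export DATABASE_URL="postgres://$(echo "$SECRET" | jq -r .username):$(echo "$SECRET" | jq -r .password)@$(echo "$SECRET" | jq -r .host):5432/myapp"
```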


The article forgets to define the terms it's using. I think the author means Amazon's EKS service vs. running your own k8s on AWS VMs, but I'm not sure.

It could also be that they're comparing AWS vs. GCP, but I don't think so.


In terms of the comparison AWS vs. K8s... aren't we talking about a much larger stack to replace AWS? i.e. VMware/OpenStack + K8s + Docker... Maybe this is what the author was implying?


I have a container orchestration and management problem: use K8s.

I have the need to do business using computers: use AWS.


"wasting their lives compiling Linux in their spare time" stopped reading there


Sorry, not following the logic here either. Not even close. It's a blog with ads from an author who is big into Docker. That's not ad hominem; it's just cui bono.

The author put scare quotes around the "real workloads" AWS can handle/solve for. I'm not sure the author really has the experience to know what real big workloads look like. OpenBet? Barclays? That doesn't sound like a background in large-scale technical problems that gets to sneer at the workload sizes of others.

I'll admit my biases -- I used to work at AWS and I'm a happy customer. But if you don't understand it or don't have the appropriate technical needs for that level of infrastructure, don't try to draw such a poor analogy and make really weird claims.

> There are limits on all sorts of resources that help define what you can and can’t achieve.

Huh? Yeah, of course... Certain tools have certain purposes, built to solve specific problems. Kind of like the UNIX philosophy. Do one thing and do it well. You know about the UNIX philosophy, right? Limits? Everything has limits.

> In the same way, some orgs get to the point where they could see benefits from building their own data centres again. Think Facebook, or for a full switcher, Dropbox. (We’ll get back to this).

You should build your own data center if it is strategically advantageous for your company to be in the data center business. I'd argue that Facebook is in the data center business mostly for historical and timing reasons, and that for Dropbox, it actually might be a strategic advantage because they're basically, uh, an infrastructure company.

> Which brings us to AWS’s relationship with Kubernetes. It’s no secret that AWS doesn’t see the point of it.

Actually, I do think it's a secret. It must be your little secret to yourself, though, because I don't think anyone else thinks this. Could it just be that building a product takes time and people?

> IAM is the true source of AWS lock-in (and Lambda is the lock-in technology par excellence. You can’t move a server if there are none you can see).

Have you ever even built anything on cloud infrastructure? Doesn't seem like it from your writing. Lock-in is a really uncharitable synonym for "solves a difficult problem well."

Also, Lambda is about the easiest thing to migrate I can think of. Its surface area that your code touches is remarkably small, by design. If anything, it's fairly clear there were many opportunities for Lambda to make product decisions that could've really locked customers in -- which they elected not to take.

(Typed on a phone, forgive typos.)


The sticky part of Lambda isn't the API; it's the services and events you integrate with. The code you provide is small, so necessarily your surface-to-volume ratio is much higher.


It's probably more like Windows vs. Unix(tm), with something else eventually emerging as the go-to for pragmatic practitioners.


AWS will be around in 5 years. It has an established business model making billions of dollars per year.

K8s won’t be around in 5 years. It’s peaking towards the top of the hype cycle. If you’re wondering about cloud lock-in, then you don’t get the power of the cloud.


I disagree, I think k8s is entering the slope of enlightenment. The ecosystem is still early, but there are some amazing tools being built that can make you very productive.


And that in itself is the reason why k8s is hype. Too many tools, too much of an “ecosystem”. Too complex. Linux suffers from these issues as well.


I think ECS vs K8s would be a more proper title.


Anybody who thinks that Kubernetes can be compared to AWS has absolutely no idea what these things are.


Can you elaborate?

I disagree with you, and I have been using both (as well as GCP) in production for years.

They are both platforms that you write an application stack against. One is portable and can run in any cloud; the other only works in AWS.

Just because one is ALSO a cloud provider doesn't make them incomparable.



