This, one thousand times over. Lots of companies have multi-cloud in their strategy, but when it's time to actually implement it they find that the cost of adopting hyperscaler #2 is the same as, or even more than, what they spent integrating hyperscaler #1. While Kubernetes makes workloads portable between vendors, organizational processes for "boring" chores like IAM, tenant provisioning and billing/chargeback aren't. Integrating these processes can get really expensive if you need to re-implement basic "enterprise" processes, like the four-eyes principle/separation of duty on role assignments, for each of your cloud vendors and technologies. The IAM systems differ ever so slightly across vendors, and that brings a ton of complexity.
The larger the company, the more likely it is that the real bottleneck to cloud adoption is these processes, not the technology. I know it's hard to imagine if you're coming from a startup background, where you can just take the company credit card and get an account on AWS, GCP or Azure in a second. But for developers at a big company, acquiring a public cloud account may take months and form after form of paperwork, getting roles put into the central corporate directory, etc. The investment in these (often atrocious) processes for hyperscaler #1 creates a big lock-in.
Disclaimer: I'm a founder at Meshcloud, we build a multi-cloud platform that helps organizations consistently implement organizational processes like IAM, SSO, landing zone configurations etc. across cloud vendors and platforms.
This kind of generalised, simplified capability is quickly emerging for Kubernetes too. Knative is one such effort (we contribute there too); there are many others. What has typically been missing, in my view, is an understanding that different roles need different things from a platform. Kubernetes kinda blurs the business of being an operator with being a developer.
That's fine at a small scale. It's also workable, with some automation, at large scale if you trust everyone. But there are lots of folks in the middle, operating at large scale, who can't just let anyone do anything they like. They need crisp boundaries between developers and operators which are safe and easy to traverse. You also need top-to-bottom multi-tenancy with no way to opt out, which is not yet a thing in vanilla Kubernetes.
In any case, I don't think AWS is going to be wiped out by Kubernetes. But taking a counterfactual view, it seems plausible that the introduction of Kubernetes has flattened their lock-in trajectory and hence reduced their long-term profits as an area under the curve.
I help companies re-architect their solutions for the cloud, and I've worked with companies who had a multi-month process for requesting and provisioning new "cloud servers". If it takes you just as long with just as much overhead to deploy a VM in Azure as it does to buy and provision a physical server, why are you even moving to the cloud?
For me k8s finally signals moving beyond that problem and onto more interesting things as it "solves" that. Devs understand it "well enough" or can copy/paste from another project and get going without hand holding.
People who don't understand the value of the single unified API vs. the hodgepodge of AWS garbage (don't even mention CloudFormation) are completely missing the point.
Sure it's no better for running your "IT" workloads, but that isn't what it's for.
That said, after the last four months I've had of trying to get Route53, API Gateway, Lambdas, ECS, VPCs, Cognito, DynamoDB, S3, CloudFront, etc, etc running in CloudFormation I really can't imagine it's worse under any slightly involved scenario.
k8s definitely helps with the "all my apps stuff in one place that is easy to see". There are some pitfalls to avoid though. Don't use helm, it's bad for your health. Avoid deploying your own k8s cluster unless you really need to, just use GKE. Avoid custom resource definitions unless they are well supported, migrating off them can be hard - prefer tools that look at annotations etc (like external-dns, cert-manager and friends).
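To make the annotation-driven approach concrete, here's a minimal sketch (the Service name and hostname are hypothetical) of a manifest that a tool like external-dns would pick up. If you later drop the tool, you're left with a plain, portable Service rather than a custom resource to migrate off:

```yaml
# Hypothetical example: external-dns watches for this annotation and
# manages the DNS record; the Service itself stays vanilla Kubernetes.
apiVersion: v1
kind: Service
metadata:
  name: my-app
  annotations:
    external-dns.alpha.kubernetes.io/hostname: app.example.com
spec:
  type: LoadBalancer
  selector:
    app: my-app
  ports:
    - port: 80
      targetPort: 8080
```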
Of course things get difficult when you need StatefulSets to run things like Kafka/ZK and friends, but it's definitely possible and they run well once set up.
In my mind k8s is the only option right now that doesn't result in man-years being wasted on pointless AWS bullshit.
In the meantime, because you must install Helm into every namespace in your cluster into which you desire to install charts, it's a massive resource hog and security risk. Charts themselves also need to be hosted somewhere, so you end up needing to install Chartmuseum, Harbor, or Artifactory (if you didn't have Artifactory already), and they have their own operational costs.
I'm in the same boat where I avoid helm if at all possible.
That checks out a local copy locked at a specific version, which you can bump easily, and allows overriding the template definitions on your side.
No need for additional infrastructure.
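One way to get that workflow (a sketch assuming kustomize; the repo URL, tag, and patch file are hypothetical) is a kustomization that pins a remote base to a version and layers local overrides on top:

```yaml
# kustomization.yaml -- pins the upstream manifests to a tag you bump
# deliberately; local patches override the upstream definitions.
resources:
  - https://github.com/example-org/manifests//app?ref=v1.2.3
patches:
  - path: replica-count-patch.yaml  # local override kept in your repo
```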
There are plenty of examples on GitHub/GitLab for most of the services you mentioned. Like most programming languages, you need to separate the wheat from the chaff; 99% of the publicly available CF sucks. Hit me up if you need help.
The problem is I can't give it directly to developers.
I don't want to be constantly writing/maintaining abstraction layers over CF to make it half decent.
You may as well use Terraform at that layer of abstraction, because at least it frees you of some lock-in and has saner state tracking (which is also pretty abysmal, but we aren't talking about a high bar when comparing to CFN). But I digress.
You seem to be missing the point here: it's not that you can't do things with CFN; you can. The point is that k8s lets me administrate a system that puts all of the machinery behind a nice unified API that all of my developers can consume, without requiring an SRE to make every change and hand-hold them through setting up a new service.
It may be more complex but that complexity quickly pays off with the amount of stuff now solved in a self service manner.
Sure, some stuff needs to get much better; there are poorly understood primitives like Service and Ingress that could do with better docs explaining to devs when you should use one and why.
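As a rough illustration of the two primitives (all names and hosts here are hypothetical): a Service gives a set of pods a stable in-cluster address, while an Ingress routes external HTTP traffic to Services:

```yaml
# Service: a stable virtual IP + DNS name for a set of pods.
apiVersion: v1
kind: Service
metadata:
  name: web
spec:
  selector:
    app: web
  ports:
    - port: 80
      targetPort: 8080
---
# Ingress: HTTP routing from outside the cluster to Services.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: web
spec:
  rules:
    - host: web.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: web
                port:
                  number: 80
```

A useful rule of thumb: every set of pods that needs to be reached gets a Service; an Ingress only enters the picture when traffic originates outside the cluster.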
Overall things are much better than the "AWS as an abstraction" way where everyone is clamouring to use some half baked AWS garbage service rather than just doing the old tried and true thing (screw Kinesis for example).
There is still lots that Amazon does that is unique and proprietary. And it has tons of respect in the industry.
But as more becomes standardized on Kubernetes and more becomes automated via Service Providers, it will further commoditize the market.
Google needs to do the following:
- Better CDN that can work with any origin.
- Work hard on getting Service Providers/Service Catalog into Kubernetes with all the latest Google services (support is very patchy at the moment).
- Buy Gitlab and support the shit out of it -- that is one great product.
If you take a step away from the spiral that is AWS PaaS / SaaS services, k8s isn't that dissimilar to EC2, VPC, EBS, and IAM and thus a lot of the same arguments around lock-in apply.
This will become more obvious as people put windows VMs on k8s because it's there and they can.
At least this is my impression.
And finally, these are not isolated services. They can be used as building blocks or layers. For example, your AWS Lambda compute might be triggered by S3 events.
Customers knew they wanted to use it, but they really didn't know how, or why it would be better than other types of container architecture.
AWS doesn't build services for customers to toy around with. Every new service is a significant investment, so the data and the use cases need to be really strong.
AWS can almost always wait for any new technology to show strong signs of adoption before doing anything in that space. Their leadership gives them that advantage. Of course other vendors jump faster into new spaces because that's their only chance to close the gap.
This happens with almost everything that is new. Another example is Blockchain.
>> The last thing Amazon wants is to be easily replaceable with a different provider. I guess this is the same with the underdogs as well (i.e. Google/Microsoft), and that's why they do all they can to lock you in with proprietary services (e.g. App Engine, Datastore, etc.).
Cloud Storage has an S3-compatible API, and it is trivial to replace it with other products.
And using the proprietary App Engine API is not recommended anymore: “We strongly recommend using the Google Cloud client library or third party libraries instead of the App Engine-specific APIs.” (source: https://cloud.google.com/appengine/docs/standard/go111/go-di...)
You can use App Engine with a standard MySQL or PostgreSQL database and S3-compatible object storage.
Like you, I used to avoid App Engine because of lock-in, but they radically improved things with gVisor.
Yet, I agree about the price, especially with the Flex Environment compared to standard Compute Engine :-(
I really find that assertion hard to believe. AWS is notoriously expensive, and oddly enough its added value comes in the form of proprietary technologies which require non-transferable tech skills. The lock-in overload goes to the extreme of leading technicians to specialize exclusively in AWS services, which leads to the infamous title of AWS engineer. This doesn't happen by chance, but by design. It's like a very expensive mousetrap: designed to help victims get in, but practically very hard, if not impossible, to get out of.
RDS - hosted non-proprietary databases.
EC2 - Standard VM hosting
Redshift - a proprietary OLAP database that uses standard Postgres drivers
S3 - object storage. But there are so many S3 API compatible storage providers, there is little “lock-in”
But the fact is that lock-in is overrated. Your CTO is statistically as likely to move their entire infrastructure just because a few engineers promised it would be “seamless” as your DBA is going to move away from their Oracle installation because developers “used the Repository Pattern to abstract database access”.
No one is forcing you to use AWS. You could build your own private cloud datacenter, implement OpenStack, configure k8s and deploy your Docker containers. If a server fails, replace it. Install your updates, etc. To me that sounds like a lot of work (assuming I don't have a large organisation behind me) and I would rather have some of that set up for me. It's a trade-off...
Google open sourced k8s so that they can get a piece of AWS's pie, and then lock you in with other proprietary services.
They have invested millions into creating cloud software. If they open source it, what is stopping other firms from taking that software and creating their own cloud offering?
Would you work for free? Would you open source all the work you have done?
(Also, yes, I open source as much of my work as possible, and I'm paid to write it)
Also, AWS is BUILT on open source; they just don't open source the extra bits they added on top. Just like how Microsoft took an open source TCP/IP stack and an open source web browser, added some stuff, and called it their own.
Why would they open source it? It would only provide customers a way out. They don't even provide a Mongo-like license.
I can't forget a session at Google I/O where they insisted that App Engine is not open source because we are too dumb to understand it... yeah, sure. And what about k8s now? I can run it on my Mac; I even contributed a few commits.
AWS is implementing Kubernetes to make it easier for people who use it already to migrate to their service. It's the same reason all other cloud providers are doing so.
I can pretty much guarantee that the products Amazon introduces aren't designed to lock people in or make it hard to switch, it's to make it easy for them to start using AWS and not NEED to switch. They design products and their APIs to solve problems customers are facing. This is what everyone is doing.
EKS was a bolt from the blue - we'd been quietly told by senior staff that ECS would be moved to a K8s API (which we may have taken). My feeling was that business trumped the engineering view that ECS was enough, but that's nothing more than my speculation.
I think you didn't understand my argument. I'm not saying people didn't have use cases; they likely did. But nothing at an enterprise level. Nothing at the time hinted at someone signing multi-million-dollar contracts contingent on having a managed K8s service.
A single customer asking for K8s doesn't prove anything in the larger picture. A larger picture where you have to invest millions of dollars if you move into that particular space.
Also, please pardon my delicacy here, but what's the need for using the word "laughable"? I don't understand what's laughable about my position. I really don't get why people think that the best way to prove their point is to undermine a challenging opinion with pejorative adjectives.
The OP's hypothesis is that Amazon would much prefer ECS to succeed, because it exists already and provides nice lock-in, and EKS was only launched (after considerable internal shitfights, I'd imagine!) once it became clear they can't just ignore k8s.
Whereas with something like Lambda, in theory you can know less, because it hides stuff from you. You don't need to understand Linux to use it.
I imagine this would have limited the number of users wanting k8s.
"Hey we have this barely usable, rocket-science level complex, container orchestration tool. How about we make it available to everyone and make sure the message is that you can deploy anywhere? Get people to look at us as an alternative instead of AWS as the default, while using a tool built by us".
Of course AWS was not excited about supporting a tool that might make it easier to leave AWS.
Windows enabled businesses to start doing business without a ton of r&d. It was ubiquitous and supported and functional.
At the same time, with Linux you could invest all your capex into a datacenter and 1Us, build clusters for cheap, and move the money you would have spent on a complete system into opex to develop one. In some cases it's just plain necessary, and there are some great success stories.
Sadly, I'm seeing a lot of businesses throw away time and money on NIH, justifying it with restrictive budgets. They claim they have to build all their solutions from scratch, rather than pay for managed ones, because it's cheaper. Those people often don't understand the true cost, and are wasting both time and money, when they could be buying ready solutions to start churning out products.
But then again, there are businesses that literally do not need to move fast, improve their products, increase efficiency, etc. For them, spending money on tech is more a hedge against an uncertain future than a business decision. They'll invest in anything if they think it makes them relevant.
So, Windows vs Linux is probably less important than the underlying motives of a given business, and whether it's managed well. Anyway, neither of them disappeared, and there's still good cases for both.
This really rings true with me. I feel like Amazon aren't doing enough to tackle this problem though... Does anyone else find the standard AWS tools for demand scaling to be like working against the grain in comparison to the rest of the platform?
Because it's server-side market :P
But seriously there will be a better K8s eventually. For example, make K8s work as Heroku or AWS Lambda. After that, there's no point using bare cloud for most of the players.
It's OpenStack on Kubernetes because we deploy the OpenStack components on a baremetal Kubernetes cluster (with actual payloads running in VMware), and it's Kubernetes on OpenStack because we have an addon service that allows customers to spin up Kubernetes on the VMware VMs created by OpenStack. We also dogfood this because the baremetal Kubernetes cluster is not large enough to hold all our internal services.
The parallels between the two are startling, i.e. at the beginning of OpenStack, vendors tried to differentiate themselves by solving distribution/installation and plug-ins/integrations. How many K8s distros/installers are out there right now? Countless... It's an interesting angle on the discussion that vendors trying their best to differentiate themselves is what ultimately led to the demise of OpenStack. I'm so far optimistic the same won't happen to K8s. Google is investing in K8s as a vehicle to gravitate workloads to GCP, but in order to do so it has to maintain some level of compatibility with other cloud vendors.
For me the biggest game changer was realizing the true power of Lambda, and I'm not talking about using it as an HTTP-responding webhost as most blogs describe, but as a way of reacting to events within your infrastructure. Any AWS service that can generate an event can have a Lambda function associated with it. Run code on file uploads, database updates, completion of backups... the possibilities are endless.
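A minimal sketch of the pattern (the handler and bucket/key names are hypothetical; a real function would use boto3 to fetch and act on each object): a Lambda handler that walks the `Records` array of an S3 notification event:

```python
# Hypothetical sketch of a Lambda handler reacting to S3
# "ObjectCreated" events; the event shape follows the S3
# notification message format.

def handler(event, context):
    """Collect (bucket, key) pairs from an S3 notification event."""
    processed = []
    for record in event.get("Records", []):
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]
        # Real code would fetch the object via boto3 and process it here.
        processed.append((bucket, key))
    return {"processed": processed}
```

The same handler shape works for DynamoDB Streams or SNS triggers; only the record structure you unpack changes.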
Uh...have you ever actually used AWS? There's friction all over. It's certainly not a seamless experience going from API to API.
Are you suggesting there is less friction in using multiple pieces of software, likely self-hosted and produced by separate developers, compared to using services hosted by AWS which let you worry only about building your product and managing the links between services? The latter almost undeniably leads to greater productivity, and if you're interested mainly in building and delivering a software product, that must be preferable.
With Windows vs Linux the premise was you install absolutely free software on a computer you already own and you end up with a superior computing and programming environment.
With k8s - you switched to it, now what? You have to buy server hardware, rent or build a room, run wires for electricity and broadband, build cooling, physical security, etc etc etc.
You will stop paying hourly rate to Amazon, but all the upfront and recurring costs will chew into that saving and you'll question whether it was worth the drill.
Above all, it will all take days and weeks, whereas with AWS it takes an intolerable 20 minutes to spin up the k8s control plane (did I remember the term right?).
Even if you do all of that and blog about your great success, what's your plan for when it goes viral and visitors flow in to leave accolades on your visionary move?
With AWS, you could just spin up more servers to deal with extra traffic. With your own datacenter, you'd have to order new servers, perhaps from Amazon's same day delivery...
If your workloads are predictable and you care about cost savings, AWS/Azure/GCP aren't great deals. They're expensive; why do you think these firms love them so much? AWS made Amazon profitable for the first time ever. These are the new big platform plays, but many firms don't need them. Modern hardware is reliable, and if you don't need thousands of machines or really complex proprietary services (AI accelerators etc.), the overhead of renting a bunch of dedicated machines from a cheap provider isn't that high.
I'm not sure that's the choice. You can run kubernetes in the cloud, can't you? e.g. EKS on Amazon.
It also talks about building a personal data center quite a few times (at least that's how I read it).
K8s is likely to replace EC2, ECS, and all other unusable amazon bloat to become the default AWS API. But it's not going to damage AWS, it's going to enhance it and become part of it.
You missed the point that moving from AWS to GCP or Azure is going to be _extremely_ complex and expensive if you have used all those shiny AWS platform features. It is very easy if you are using k8s.
AWS has EKS and, as far as I know, it has added many K8s features to ECS over the last few years. So even if you are one of the unlucky people who is locked in by containers, there are plenty of options to run K8s, managed and unmanaged, on AWS.
This discussion sounds like a taxi driver who insists on only driving a taxi built by themselves instead of buying a car built in volume production.
Without that you would have to engineer your own distributed architecture and actually have the infrastructure in terms of bandwidth, regions, networking and data centers to scale to.
AWS gives you availability and scalability out of the box. Usually only a cloud provider can give you that with physical infrastructure in multiple regions, bandwidth, networking, things like floating ips, replicated storage and in the case of things like Aurora architecture and engineering. Kubernetes is a container orchestration application.
Assuming you're writing code that lives happily inside of a container, and assuming that you're not taking advantage of anything else AWS has to offer (which is increasingly improbable), moving from ECS to EKS or GKE isn't a big deal if you need the additional configurability that real Kubernetes offers.
Personally, I like running things on Fargate-style containers so I can keep things as serverless as possible.
In the same way I spent years of my life thinking about 'the desktop' when the desktop was about to get killed by mobile, arguing about MicroVMs vs containers is useless: they're just somewhere to run functions.
K8s is software, but AWS is software + hardware. Your K8s cluster likely would still be hosted on AWS/Azure/GCP.
K8s lives upon the cloud providers, they are not competitors.
k8s has allowed us to not use RDS any more because we now host PostgreSQL inside k8s instead. It's a bit more overhead, but we're now portable and run in gcp too with the same k8s manifests and almost no porting effort.
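For reference, a hedged sketch of what that looks like (the image tag, storage size, and names are illustrative; a production setup would add configuration, secrets, resource limits, and backups): a StatefulSet gives each replica a stable identity and its own persistent volume, and the same manifest applies unchanged on EKS or GKE:

```yaml
# Hypothetical minimal PostgreSQL-on-k8s sketch, portable across clouds.
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: postgres
spec:
  serviceName: postgres
  replicas: 1
  selector:
    matchLabels:
      app: postgres
  template:
    metadata:
      labels:
        app: postgres
    spec:
      containers:
        - name: postgres
          image: postgres:15
          ports:
            - containerPort: 5432
          volumeMounts:
            - name: data
              mountPath: /var/lib/postgresql/data
  volumeClaimTemplates:
    - metadata:
        name: data
      spec:
        accessModes: ["ReadWriteOnce"]
        resources:
          requests:
            storage: 10Gi
```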
So yeah, we escaped the lock-in.
And now we're not just tied to the AWS stuff and we're experimenting with cockroachdb and other products that aren't there "natively" in AWS, so beyond escaping lock-in, we're more ready to experiment with newer stuff.
What is the fault tolerance of your hosted Postgres solution compared to Aurora?
My pain point is DB migrations, from version X to version Y. My bad for picking Django for our internal tooling web app and going with its ORM. Fairly confident I can change that to SQLAlchemy easily, if I get the time for it.
Could also be that they're comparing AWS vs. GCP, but I don't think so.
I have the need to do business using computers: use AWS.
The author put scare quotes around the "real workloads" AWS can handle/solve for. I'm not sure the author really has the experience to know what real big workloads look like. OpenBet? Barclays? Doesn't sound like a background in large scale technical problems that gets to sneer at the workload sizes of others.
I'll admit my biases -- I used to work at AWS and I'm a happy customer. But if you don't understand it or don't have the appropriate technical needs for that level of infrastructure, don't try to draw such a poor analogy and make really weird claims.
> There are limits on all sorts of resources that help define what you can and can’t achieve.
Huh? Yeah, of course... Certain tools have certain purposes, built to solve specific problems. Kind of like the UNIX philosophy. Do one thing and do it well. You know about the UNIX philosophy, right? Limits? Everything has limits.
> In the same way, some orgs get to the point where they could see benefits from building their own data centres again. Think Facebook, or for a full switcher, Dropbox. (We’ll get back to this).
You should build your own data center if it is strategically advantageous for your company to be in the data center business. I'd argue that Facebook is in the data center business mostly for historical and timing reasons, and that for Dropbox, it actually might be a strategic advantage because they're basically, uh, an infrastructure company.
> Which brings us to AWS’s relationship with Kubernetes. It’s no secret that AWS doesn’t see the point of it.
Actually, I do think it's a secret. It must be your little secret to yourself, though, because I don't think anyone else thinks this. Could it just be that building a product takes time and people?
> IAM is the true source of AWS lock-in (and Lambda is the lock-in technology par excellence. You can’t move a server if there are none you can see).
Have you ever even built anything on cloud infrastructure? Doesn't seem like it from your writing. Lock-in is a really uncharitable synonym for "solves a difficult problem well."
Also, Lambda is about the easiest thing to migrate I can think of. Its surface area that your code touches is remarkably small, by design. If anything, it's fairly clear there were many opportunities for Lambda to make product decisions that could've really locked customers in -- which they elected not to take.
(Typed on a phone, forgive typos.)
K8s won’t be around in 5 years. It’s peaking towards the top of the hype cycle. If you’re wondering about cloud lock-in, then you don’t get the power of the cloud.
I disagree with you and I have been using both (as well as gcp) in production for years.
They are both platforms that you write an application stack against. One is portable and can run in any cloud, the other only works in AWS.
Just because one is ALSO a cloud provider doesn't make them incomparable.