Why OpenStack and Kata Containers are both seeing a resurgence of adoption (zdnet.com)
79 points by CrankyBear on Sept 8, 2024 | 64 comments


As my organisation migrates everything over to AWS I do find myself questioning what the costs will look like when it is all done and we are completely locked in. AWS/Google/Microsoft cloud services put you in the position of an almost completely captive customer. What large corporation would ever sign up to that?


> What large corporation would ever sign up to that?

Ones with the power to have some very nice contract terms put into place to mitigate downsides in the medium term. If it's a problem long term, then that'll be the next CEO's problem, and the current one will have cashed their $50m check already.

Less cynically, for many of them it's either being captive to their barely working IT department or being captive to a decently more competent AWS. So they choose the one that will give them better features for less money in the medium term. They can't make their internal IT work better because it's effectively a two-sided monopoly: the IT department has only one client, and that client is guaranteed not to leave (until AWS comes along, at least). That inherently and unavoidably creates horrible incentives.


> for many of them it's either being captive to their barely working IT department or being captive to a decently more competent AWS.

It's more just being prone to sales tactics.

For the IT department you've seen everything - all the issues. You overlook the good parts.

For an external vendor you only hear about the good parts from the salesperson. They often over-promise: you get told you'll save time and money, that they can engage a consultant to help you migrate, etc., until you realize no one has actually done the proper analysis and something doesn't work. There's going to be weeks or months of delay, but by then you're already signed.


Fortune 500 executives aren't the idiots you think they are. Some are but most aren't and most are moving to cloud.


Large organizations are very inefficient, so much so that the inefficiency of the cloud is more efficient than them doing the work themselves.


> so much so that the inefficiency of the cloud is more efficient than them doing the work themselves.

You jest. They just add so many "required" layers to the cloud to make it even worse. They create "frameworks", "controls", requirements, guidelines and this and that so that using the cloud is even more effort.

You used to request a server, get a login and be done. Now you have to deal with the cloud and a whole lot of confusion over what can and can't be done and request access to every small bit of detail.


That’s not new, it’s probably just shifting around who does it.

When I worked at a Fortune 500 company, that was me. I’d get a request for a server from a dev and have to fracture it into like 50 separate tickets. One to security to open up the firewall, one to storage to get disk space, one to the VM guys for CPU and RAM, a couple to the AD guys for a new group and user, etc, etc.


The biggest powers of cloud are:

1. The blast radius of an incompetent employee is much smaller.

2. It's much easier to tick boxes for management purposes when using a standard service with a standard list of checkboxes rather than your own solution.


> 1. The blast radius of an incompetent employee is much smaller.

Doubt. Never underestimate the impact of an incompetent person; a single credential leak could burn the company down overnight.


>> What large corporation would ever sign up to that?

Speaking for the banking sector: we are migrating our workloads to the public cloud as it allows us to be nimble and responsive to business needs. And we are getting a highly resilient and robust IT infrastructure that we could not implement on our own without a very high headcount and the associated bureaucracy.

As long as you are not using proprietary technologies, e.g. DynamoDB or GCP Firebase, your stack can be migrated from one cloud to another. It won't be easy or painless, but it won't be impossible either.


> as it allows us to be nimble and responsive to business needs

Speaking of the banking sector, I doubt the server is what's slowing things down.


Thus the second sentence:

> that we could not implement on our own without having a very high headcount and associated bureaucracy


> Thus the second sentence:

Then I'd rather they fix the real problem than outsource half of it. Without the servers, the problem is still there: in DevOps, development, etc.


>It won't be easy or painless, but it won't be impossible either.

Between data egress costs, all the security infrastructure built up being bespoke to one provider, and engineers having to relearn the new provider's APIs and all their warts, the cost would be HUGE. It would be something that would have to go to the board for approval.


$2-5M and 1.5 years of time is all it takes.

The biggest problem is getting everyone aligned, designing a plan with minimal rework, and being able to hire incredible talent that is going to cost far more than most companies are willing to accept.

Most companies will try with their existing talent, and it'll be 5+ years and an absolute failure.


And the very sizable opportunity cost of the engineering time spent on that project and not in anything more pressing. If your platform team has nothing better to do then I guess that's 1.5 years well spent, but if they have nothing better to do I have some questions about your company's engineering decisions.


So, it's not impossible to move clouds, it's just better to buy a smaller bank on the other cloud if you want the move to actually succeed.


Everyone says this, but very few know the true costs of keeping a private datacenter. Everyone looks at the final AWS number and says, oof, that's too high. Then they ignore things like the 10-year-old code base that is still being developed, causing outages and support requests, none of which is included in the on-prem cost.

When I did work for a VAR that was selling on-prem accounting software, that number was closer to a 20% difference in cost. Now add in the discounts large corporations get for buying large amounts of compute, and it suddenly becomes very palatable to use the cloud.

Small companies still benefit the most from on-prem.


Who said I want my own data centre if I don't want to use AWS/GCP/Azure?


At that scale it is cheaper. If you only need a few dozen servers, then webhostingtalk is all you need. When you need 100k servers, you will be managing the hardware yourself.


As a large org, you sign up for elasticity and mostly infinite rack space and power delivery. You basically turn capex into opex (though with reserved capacity you can get some capex back).

You can sidestep the worst parts of the compute lock-in with cloud-agnostic tools like k8s and Terraform. Storage will always hold you hostage.

Anyway - the grass will be greener on the other side regardless of what you do.


would there be a scenario where you'd prefer to mark it as capex? opex has the advantage of reducing taxable income immediately, so you get the tax benefit right away. I feel like that's always an advantage


In a large org you may have a quota for both and zero incentive to not spend.


most of the S&P 500?

I mean...they also signed up to get power from a private utility company

they also signed up to lease their own office from a commercial realtor

they also signed up to put their company intelligence into a closed-source ERP or CRM...

etc etc

indeed, let's turn this around... what companies believe it is a strategic advantage to reinvent S3?


Vouching for this, because if it's indeed a bad take, I’d like someone to explain why.

For the most part, it seems like a reasonable buy vs rent argument, except that if you try to build your own internal self-service cloud platform for the dev teams (or just have ops teams that are in charge of provisioning and running things), you also have a lot of complexity and employee time spent there, with it often being hard to get right.

I don't think orgs necessarily care that much about overpaying for some EC2 instances or load balancers when that lets them iterate reasonably quickly and have fewer compliance headaches and good SLAs.


I don't 100% agree with this: it isn't 2014 anymore. Many solutions ("on-prem $service") have already been developed, are fairly well known, and have backing companies ready to sell you support and solutions rather than just asking you to pay rent. Example: Cloudian or MinIO for S3-compatible storage.
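
To give a rough sense of how thin that compatibility layer is, here's a minimal sketch assuming a MinIO instance at http://localhost:9000 with placeholder credentials (any S3 client that lets you override the endpoint should behave the same):

    import boto3

    # Point a standard S3 client at an S3-compatible store by overriding the
    # endpoint URL. Endpoint and credentials are placeholders for illustration.
    s3 = boto3.client(
        "s3",
        endpoint_url="http://localhost:9000",   # MinIO / SeaweedFS / Garage / Cloudian
        aws_access_key_id="EXAMPLE_KEY",
        aws_secret_access_key="EXAMPLE_SECRET",
    )

    s3.create_bucket(Bucket="backups")
    s3.put_object(Bucket="backups", Key="hello.txt", Body=b"hello from on-prem S3")
    print(s3.get_object(Bucket="backups", Key="hello.txt")["Body"].read())

The application code stays the same; only the endpoint and credentials change.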


There's also SeaweedFS (https://github.com/seaweedfs/seaweedfs/wiki/Amazon-S3-API) and Garage (https://garagehq.deuxfleurs.fr/) that are promising, in addition to MinIO. There was also Zenko, but that one seems to be in a bit of an awkward place: https://github.com/scality/cloudserver/issues/5469

I'm all for using on-prem self-hosted options when available; personally I run my own mail server (though a pre-packaged version), Nextcloud, Gitea and many other services. However, that's mostly for my own personal needs and to explore the software out there.

In many of the orgs out there, especially the larger ones, telling people that they should provision their own hardware because I want to run a self-hosted piece of software, instead of pressing a few buttons in a web UI somewhere (or writing and running a few scripts) to provision things, would be a tough sell. Even if they could get me a dedicated box somewhere, I'd still be responsible for managing said software, instead of just taking the SaaS approach, where my career isn't on the line for not doing everything correctly.

In practice, of course, that basically means SaaSS: https://www.gnu.org/philosophy/who-does-that-server-really-s...


In any decent organization you don't run any of the services yourself except for the software you write; the infrastructure and platform teams do that for you.

Going back to on-prem doesn't mean developers need to manage their own MySQL database or stuff like that.


The power company is an almost entirely different situation; everywhere in the US that I'm aware of caps utility profits, so the "getting gouged" risk is literally illegal.

Commercial real estate leases are often for a long duration, also nullifying a lot of the gouging risk.

The downside of using S3 is that their billing model tends to create complexity around trying to use it as little as possible. Many projects will need to spend significant time on “how do we minimize S3 costs?”.

I would wager most consumers of S3 are not in the “has a high enough scale of data to have genuinely complex problems” crowd and would be basically fine with MinIO or any of the various on-premise storage vendors offering an S3 API.
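
As one example of where that "minimize S3 costs" time tends to go: lifecycle rules that shuffle data into cheaper storage classes. A minimal, illustrative sketch with boto3 (bucket name, prefix, and day counts are made up):

    import boto3

    s3 = boto3.client("s3")

    # Illustrative lifecycle policy: move old log objects to Glacier after 30
    # days and delete them after a year, to keep standard-storage costs down.
    s3.put_bucket_lifecycle_configuration(
        Bucket="example-bucket",
        LifecycleConfiguration={
            "Rules": [
                {
                    "ID": "archive-old-logs",
                    "Status": "Enabled",
                    "Filter": {"Prefix": "logs/"},
                    "Transitions": [{"Days": 30, "StorageClass": "GLACIER"}],
                    "Expiration": {"Days": 365},
                }
            ]
        },
    )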


The alternative to using S3 is to do it themselves?

Also, do you really specifically want S3? The value of S3 is working at scale, but very few besides Amazon need that scale. So many other solutions work just as well.


S3 is a bad example of vendor lock-in, since it's probably the easiest AWS service to migrate away from, with dozens of fully compatible solutions existing outside.

Even EC2 (AWS's VMs) requires more work to migrate from.


S3’s lock-in is financial rather than technical, especially if you’re using Glacier for backups that need to be moved. It gets really expensive to pay for egress on all that data.
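
(For a rough, illustrative sense of scale: at list egress rates on the order of $0.05-0.09 per GB, moving 1 PB out of AWS lands somewhere around $50k-90k before any Glacier retrieval fees. Exact pricing is tiered and changes over time, but the order of magnitude is the point.)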


No, this argument is not analogous. I worked for a bank that upped and moved offices on a commercial lease renewal. Let me know when you hear of a bank migrating cloud providers and I will reconsider. It is 100% vendor lock-in, no escape.


Personally I have seen a massive surge of interest in running on bare metal again, largely due to AI workloads and wanting to own GPUs and run them on InfiniBand or RoCE fabrics.

Hopefully this trend outlives the AI hype bubble. I much prefer real hardware; a whole bunch of problems are made easier by "Dell will sell me a box with 1TB of RAM for $reasonable". Same goes for effectively lossless data centre networking at 200Gbps+, actual BGP, etc.


I work for an on-prem cloud provider, and while AI is part of the reason we're seeing growing adoption, it comes as part of a wider move to data and compute sovereignty.


I know of a large company that is moving their whole VMware setup (thousands of VMs) to an OpenShift solution, now that VMware has raised prices significantly, and I'm sure they're not the only ones.


Many of my peers in the ISP community shuttered their hosting operations entirely in the wake of Broadcom's move, putting a mountain of effort and cost on themselves and their customers in the process. This is sad for many reasons, not least because it drives further consolidation of the Internet towards the hyperscalers.

I dread to think of the real economic cost of Broadcom's moves with VMware.


I'm honestly a bit shocked to hear that anyone has been using VMware to provide hosting. I thought it was for internally running stuff that is expensive anyway.


It's standard practice to run all your physical servers as an ESX cluster. It doesn't matter if it's for hosting, internal needs or whatever. Nobody sane would just install Ubuntu on a physical server and edit the nginx config. Running virtualized makes it much easier to have standard backups, restores, upgrades, failovers and so on.


What else would they be running? VMware abstracts the machine from the hardware in a very literal sense, with live migrations between hosts, etc., so you can work with your boxen without interrupting any customer services.

If that capability suddenly costs 10x-100x more due to licensing, the margins stop being there: you'd need to raise prices above whatever AWS/Azure/GCP is charging, and that's obviously not going to fly with customers. Shutting down is the only reasonable choice; unreasonable choices like developing your own VMware alternative or using an existing open source solution don't make sense for anything above micro and below hyperscale. The mid-market has been rug-pulled, gutted, and driven over by a Buick.


> What else would they be running?

Anything that doesn't cost licensing money is what I was thinking. Isn't there some FOSS and also gratis alternative?


No, not really. Proxmox is slowly improving, but not a replacement yet.


It has been incredibly common for the last maybe 15 years in the managed hosting industry. It wasn’t even especially expensive compared to the revenue density. Most would be on service provider licensing paying per GB of RAM.


OpenStack isn’t a competitor to VMware but to AWS and GCP. It enables you to create a fully API-driven data center that includes elastic compute, elastic network, and elastic storage resources. The key difference is that, unlike a public cloud where you consume APIs (OPEX), with OpenStack, you own and manage them (CAPEX).

OpenStack is not ideal for provisioning static workloads like you would with vCenter. It works best as an orchestrator for elastic, API-driven workloads. It also simplifies hybrid cloud integration: link your SDN networks, then use Heat to manage deployments based on predefined rules and affinities.
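
To make "API-driven" concrete, here's a minimal sketch using the openstacksdk cloud layer (the cloud name, image, flavor, and network are placeholders, and exact behaviour depends on your deployment and SDK version):

    import openstack

    # Connect using a named cloud from clouds.yaml (placeholder name).
    conn = openstack.connect(cloud="mycloud")

    # Ask the platform for a VM declaratively: it schedules a host, attaches
    # the network, and assigns an IP. No tickets to separate VM/storage teams.
    server = conn.create_server(
        name="demo-worker",
        image="ubuntu-22.04",     # placeholder image
        flavor="m1.small",        # placeholder flavor
        network="internal-net",   # placeholder network
        wait=True,
        auto_ip=True,
    )
    print(server.name, server.status)

Heat templates and autoscaling policies drive the same underlying compute API; that's the "elastic, API-driven" part.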

Companies looking for an open-source vCenter for static, single-tenant provisioning should look at Proxmox; it will meet their needs a lot better.


> OpenStack isn’t a competitor to VMware but to AWS and GCP.

It's a competitor for all three. Having a "fully API-driven data center" is just as useful on your own hardware as it is using someone else's hardware.

VMware recognizes this as they have OpenStack APIs:

* https://www.vmware.com/topics/openstack-api

> Companies looking for an open-source vCenter for static, single-tenant provisioning should look at Proxmox; it will meet their needs a lot better.

Or XCP-ng.


I don't follow the difference between static and elastic workloads. OpenStack can create a VM just like VMware, and has more features that would otherwise be implemented as a K8s cluster running on VMs.


In a cloud environment like OpenStack, virtual machines are a means to provide compute resources, not an end goal like in VMware.

Elastic workloads typically scale horizontally, unlike the vertical scaling or static provisioning seen in environments such as VMware. Elastic workloads are usually stateless, declaratively provisioned, have short uptimes, and are designed to fail gracefully rather than relying on high availability.

Kubernetes (K8S) can achieve similar capabilities, but is still distinct from what OpenStack offers: an open-source, private AWS providing multi-tenant, API-driven infrastructure for storage, networking, and compute.

If you don't care about working with elastic APIs, OpenStack/AWS is probably not for you; use VMware/Proxmox + Kubernetes.


Thank you!


Yes, they are moving to OpenShift, which is comparable to Proxmox (not OpenStack).


OpenStack is great if someone else runs it for you; otherwise it's a massive, painful kick in the bollocks. The only advantage is that what documentation exists is likely to still be up to date. At least they've finally ditched Ceph and Gluster as a storage back end.

It seems the permissions schema has improved in the last 15 years as well.

However, I'm pretty sure there are very few questions to which OpenStack is the answer.

Most people just want an orchestration layer and a storage control system. Everything else is just complexity that comes back to bite you later on. As soon as a system starts to have strong opinions about networking (I'm looking at you, K8s), things turn to shit pretty quickly.


What distributed storage backend does it use if not those two?


OpenStack doesn't really just use 'a' storage backend. You can configure a different backend per service, e.g. point Manila to some proprietary Hitachi technology, point Glance to Ceph, or point Cinder to any of these: https://docs.openstack.org/cinder/2024.1/drivers.html. Not sure what KaiserPro is going on about, as Ceph is very well supported as a storage backend for pretty much all OpenStack services. Cinder is mostly used for volume and snapshot management, interfacing between the front end/API of OpenStack and whatever backend is configured for Cinder, which hosts the actual volumes and snapshots.


Cinder, but it's more block storage allocation, rather than large files on a filesystem that's not really a file system but an object store with a badly designed DB on top, which is always under-resourced.


Cinder (from my time with OpenStack) is not a storage backend type, but rather a block storage provisioning API. Has that changed more recently?


> Most people just want an orchestration layer, and a storage control system

This is exactly the reason why I am still on docker swarm.


I still love the idea of running your own infra on your own metal.

It’s out of reach for most orgs because good operations people are hard to find. You need people with all the infra and networking knowledge, and the capability to automate processes end-to-end.

I still see operations people using Ansible as a better shell script, without proper structure, without understanding the fundamentals of such a tool. Without understanding the process they are trying to automate.

So in some sense what you really want is a proper software developer who is also willing to focus on boring infrastructure topics. Topics that are just an API call in the cloud. Who in their right mind wants that? :-)

The tools and technology aren't the issue. Can you find the right people who want to build the right operations platform for you, one that assures security and availability and is also awesome for developers to work with?


> I still see operations people using Ansible as a better shell script, without proper structure, without understanding the fundamentals of such a tool. Without understanding the process they are trying to automate.

It really doesn't help that Ansible invented its own terminology, hierarchy, and module system that bears no resemblance to anything else and has difficult, confusing answers to common questions like "what if I want a copy of this role that uses yum to install instead of apt?".

I liken it to Apple telling people they were holding the iPhone wrong; they've developed it in a way where people intuitively hold it the wrong way.


Devops tools are written by people who are great at programming and great at operations.

However, less than 1% of IT people match that. The reality is that most operations people are not great programmers, and most programmers are not good at operations.

E.g. you can make creating a subnet just a simple API call, but most programmers don't know what a subnet is (or if they do, they won't understand anything past the basics).

You have to build your tools with the above in mind.


And so it swings full circle.


The Edera product this PR-as-news article is pushing seems to have very naive marketing:

"zero risk"

"Container escapes? Impossible." <https://github.com/edera-dev/>.

"Hypervisors haven't been reimagined for nearly two decades" and apparently using Xen counts as reimagining.

What the hell even is a "single-host hypervisor"? As opposed to what?


Sometimes I wonder if OpenStack is secretly funded by the AWS marketing department

It's a world of pain that leaves you crying for AWS/GCP or even Azure


> First, companies are switching from public hyper-clouds, such as Amazon Web Services (AWS), Azure, and Google Cloud, to private clouds.

who?

> Michael Dell, Dell CEO, quoted Barclay's CIO Survey 2024: "83% of enterprises plan to move workloads back to private cloud from public cloud."

...and then later realize they don't have the time, skill, or money, and even if they did, it's just commodity infrastructure and no more a value proposition for the company than their office space or janitorial service

> Another reason companies are shifting to OpenStack

again, who????

these articles continue to just be wishful thinking. most people who have dealt with openstack in the last decade have likely been on a deprovisioning project. I've personally witnessed it purged from two tech companies because the ops people were low-skill button-pushers who couldn't and wouldn't deal with its complexity, so ultimately the companies just threw in the towel

maybe 5% of ops people I deal with have the experience and skills to build infra from the data center rack up to something useful. ops in general has been subject to a major skills decline in the last decade or so, AWS is in absolutely no danger

ask your ops team how many of them have ever set foot in a data center. fifteen years ago, probably 50%. now? around 0%


Agreed. OpenStack looks great on paper (or slides), but if you're rolling it yourself it'll likely be a nightmare. If you just grab OpenStack itself (as opposed to a vendor-provided value-add wrap-around product), you're in for a world of hurt. It takes a small army of highly skilled folks to run it competently. What you get from OpenStack is an infrastructure toolkit -- something very far from a polished, turnkey product. Most companies will not have the foresight and budget needed to invest in the experienced people required to set it up and run it well.


Air-conditioner repair seems to be a sticking point



