
What Happened to OpenStack? - kragniz
https://aeva.online/2019/03/what-happened-to-openstack/
======
djsumdog
I think the question is not "What happened to OpenStack" but rather, "Is
OpenStack still total garbage?"

I've been at two companies that attempted to go down the OpenStack route. One
wanted to start a cloud offering for their clients and hemorrhaged tons of
money just trying to keep OpenStack stable. We couldn't even run our basic
Logstash offerings on our OpenStack cluster without bizarre performance
issues.

We also had a really good manager who had accounts on every other provider
(Rackspace, Red Hat, Canonical .. all the big ones), and time and time again
he was like, "What is this? How are they doing this?" We just figured they
used a ton of specialized proprietary plugins they weren't open sourcing, or
a ton of special patch sets.

The second shop had tried moving onto an OpenStack cluster to save on AWS
prices. It could never run anything reliably, so they scrapped the entire
project and re-purposed all the servers for DC/OS, which was super nice and
reliable, and every team migrated hundreds of services onto it.

~~~
naikrovek
Interesting that you cite stability concerns. I am not sure that's the case
anymore.

My employer runs 5-6 complete OpenStack environments and those things have
never had an unplanned outage that I'm aware of. My stuff hasn't ever gone
down, I know that.

~~~
ps
Similar experience here.

Back in 2013 we had to evaluate existing cloud/VM platforms in order to
replace plain KVM/libvirt and support further growth. oVirt was garbage
(missing installation ISOs, randomly broken install process, cluster nodes
not communicating, etc.), OpenNebula was buggy, OpenStack seemed quite hard
to grasp, Hyper-V was Windows only, and VMware was expensive as hell (even
now the TCO Calculator gives us 4000+ EUR/VM - this must be a joke).

We run several VMs with Docker and our apps, manage dedicated servers and
their networks (VLANs as provider networks in OpenStack), provide IPsec VPNs
for tenants, and run Kubernetes clusters on OpenStack. We also manage several
dedicated servers that are not managed by OpenStack for historical reasons;
hopefully we will migrate them to the cloud.
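
For the curious, a minimal sketch of what the "VLANs as provider networks"
part looks like through the API, using openstacksdk; the cloud name, physnet
name, and VLAN ID are made-up illustrative values:

    # "mycloud" is a clouds.yaml entry; "physnet1" must match the ML2
    # config; VLAN 120 is arbitrary. Creating provider networks needs admin.
    import openstack

    conn = openstack.connect(cloud="mycloud")

    # Map an existing datacenter VLAN into OpenStack as a provider network.
    net = conn.network.create_network(
        name="dedicated-servers-vlan120",
        provider_network_type="vlan",
        provider_physical_network="physnet1",
        provider_segmentation_id=120,
    )
    conn.network.create_subnet(
        network_id=net.id,
        ip_version=4,
        cidr="10.20.120.0/24",
        gateway_ip="10.20.120.1",
    )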

If OpenStack makes our heads hurt, it is due to the lack of documented design
patterns. After all these years, the documentation is good for the initial
deployment and IMHO for developers (either API consumers or contributors),
but not so much for network engineers or system architects.

Some design choices are pretty crucial upfront and you will pay the price to
change the design. We ended up modifying database records several times and
then slowly rolled the changes out to the compute nodes. Recently the Open
vSwitch flow tables were populated non-deterministically after some network
changes; we had to inspect the sources, and even then we did not understand
why we were experiencing the issues.

But we never encountered stability issues that weren't caused by the wild
actions of an administrator.

~~~
user5994461
VMware is in the thousands of dollars per node. It's not a joke; it's very
useful software with no competition, and worth it.

~~~
ps
So I guess the typical node is the beefiest you could get, in order to
justify the license price?

I had this conversation with a colleague who offers VMware-managed Windows
VMs to his clients, and he told me a similar thing; but on the other hand, he
was shocked at the prices of our hardware (approx. $6k per server) and was
seriously considering migrating to OpenStack.

~~~
user5994461
The beefiest you can. Exactly.

A 2U or 4U server is $10k to $20k. You're going to fit all you can in the box,
including a minimum of 512GB of memory.

It's not just about license costs though; it's about hardware costs and
capacity management. You want as few servers as possible for a given
capacity; it's easier to manage and cheaper. You must have VMware to abstract
the hardware, a bit like AWS: you work with virtual machines and it packs
them onto the hosts.

The last time I bought it was a few years ago; VMware was $5000 per node for
the full package. There was a free edition limited to about 100 GB of memory,
but without cluster management or live migration of running VMs (vMotion).

------
ec109685
“Even then, if you’re using Kubernetes, you probably won’t succeed, because it
isn’t in Google’s best interest to let anyone else actually compete with GKE.”

Google succeeds if anything but AWS's proprietary solutions wins. If they can
groom a healthy ecosystem of open source and commercial solutions that target
Kubernetes, then the tremendous advantage of AWS being a one-stop shop for
any service you can imagine starts to dwindle. As of now, Amazon offers a
solid compute environment and services galore, which is hard to compete with.

The author didn't do much at all to tie what happened to OpenStack to
Kubernetes. K8s is deployed at scale by all the cloud providers. Both Google
and Microsoft run containers solely on it in their public clouds (while AWS
still has its own orchestrator). That never happened at scale with
OpenStack.

And regardless of how you feel about Google, Azure has a very strong vested
interest in K8s success.

~~~
tannhaeuser
Idk, isn't the value of cloud computing supposed to be reduced spend on ops?
Then k8s, in my opinion, doesn't meet that criterion because of its sheer
complexity - you very much need ops staff capable of diagnosing and fixing
problems, not to mention the lock-in. OTOH if you just buy RDBMS, MQ,
identity, and compute service tiers directly, then you might have success
with cost as an SME.

~~~
jsiepkes
I think it's more meant to keep spending / complexity controllable in the
long term (i.e. linear instead of exponential).

Howeverrrrrr, you will need to make sure you are actually going to need that
larger scale at some point in the future. Otherwise you are probably better
off with a simpler solution like Terraform.

I think the k8s madness is a bit like the NoSQL craze in that sense.

------
yingw787
I worked a little bit with OpenStack about four years ago, and my impression
was that it was very much designed by committee. Design by committee doesn't
work too well in software:
[https://sourcemaking.com/antipatterns/design-by-committee](https://sourcemaking.com/antipatterns/design-by-committee)

I think a lot of the enterprise companies supporting OpenStack, like Mirantis
([https://en.wikipedia.org/wiki/Mirantis](https://en.wikipedia.org/wiki/Mirantis)),
realized this one way or another, got themselves acquired, and then used the
new funding to pivot to Kubernetes or another open-source IaaS offering:
[https://www.mirantis.com/](https://www.mirantis.com/)

Without any promise of enterprise support, there's really no way for the large
companies targeted by OpenStack to adopt it and make that adoption sticky. So
that's how it died.

~~~
pas
Red Hat, Canonical, and Rackspace all offer(ed) enterprise support, no?

~~~
p_l
The practical end result was that you didn't buy "OpenStack", you bought
"Mirantis OpenStack" or "Juniper OpenStack" (... that one was so broken...)
and so on, and there wasn't much portability between them.

~~~
pas
The basics worked across all distributions, as far as I know. (The openstack
CLI was built on the HTTP API, which was shared.)
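
To illustrate: because the HTTP API was shared, the same client code could in
principle talk to any vendor's cloud, with only the credentials changing. A
rough sketch with the official openstacksdk; "vendor-x" is a placeholder
clouds.yaml entry:

    # "vendor-x" names a clouds.yaml entry; pointing it at a different
    # vendor's cloud should leave the code below unchanged.
    import openstack

    conn = openstack.connect(cloud="vendor-x")

    # These calls go through the shared Nova HTTP API, whichever
    # distribution packaged the control plane.
    for server in conn.compute.servers():
        print(server.name, server.status)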

Mostly I had problems with the classic deploy, debug, develop cycle.
Reporting bugs felt like throwing time out of the window, and debugging
through overlay networks, über-verbose Python daemon logs, and RabbitMQ
madness was more a surreally dark exercise in futility than a rewarding
experience.

Most of the problems I experienced were due to the fundamental trade-offs
taken during the development of OpenStack. These are slowly being addressed,
but ... it was too little, too late - at least in our case.

------
hodgesrm
> I am publishing this now in the hope that it can serve as a warning to
> everyone out there who is investing in Kubernetes. You’ll never be able to
> run it as effectively, at the same scale, as GKE does – unless you also
> invest in a holistic change to your organization.

This is a meaningless argument. I don't have to run Kubernetes at the same
scale as GKE to develop--I just run minikube, which runs very well on Linux
hosts. When I get ready to deploy, there is a wide pick of environments to
host on, because Kubernetes apps are largely portable.

OpenStack has never achieved this level of accessibility.

~~~
mfer
Two things occurred to me about this...

1. I had a local OpenStack environment. Most of what I needed for app dev I
could do there.

2. A lot of app devs aren't happy with Kubernetes and talk about it,
sometimes in the comments right here on Hacker News.

Kubernetes has a lot of parallels with OpenStack when you drill down and look
at it. Aeva isn't the only one talking about it.

~~~
013a
Kubernetes isn't an application platform, though, in the same vein as Heroku.
That's what application developers want, and they're simply not going to get
it. It was never supposed to be that.

Kubernetes competes with AWS, in a sense. It's a standard API for interacting
with any cloud resource. I can give an AWS ASG an AMI, tell it to create 10
instances, and it will do it. I can give Kubernetes a Docker image, tell it
to create 10 instances, and it will do it. You wouldn't expect application
developers to enjoy creating AMIs, or maintaining them, or worrying about the
global high availability of their 10 instances; they wouldn't enjoy that with
AWS, and they won't enjoy it with Kube. And they shouldn't.
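
To make the parallel concrete, here is a minimal sketch (mine, not the
commenter's) of the Kubernetes half, using the official Python client; the
"web" name and nginx image are arbitrary stand-ins:

    # Hand Kubernetes an image and a replica count, much like handing an
    # ASG an AMI and a desired capacity.
    from kubernetes import client, config

    config.load_kube_config()  # or load_incluster_config() inside a pod

    deployment = client.V1Deployment(
        api_version="apps/v1",
        kind="Deployment",
        metadata=client.V1ObjectMeta(name="web"),
        spec=client.V1DeploymentSpec(
            replicas=10,  # "create 10 instances"
            selector=client.V1LabelSelector(match_labels={"app": "web"}),
            template=client.V1PodTemplateSpec(
                metadata=client.V1ObjectMeta(labels={"app": "web"}),
                spec=client.V1PodSpec(
                    containers=[
                        client.V1Container(name="web", image="nginx:1.25")
                    ]
                ),
            ),
        ),
    )
    client.AppsV1Api().create_namespaced_deployment(
        namespace="default", body=deployment
    )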

The confusion comes from the fact that Kubernetes needs to be deployed
somewhere; well, let's deploy it on AWS. And now there's this expectation
that because we've created a layer on top of AWS, this layer should be closer
to the application development process. It is! But not as close as it could
be, or should be, in a productive shop. Kubernetes isn't the endgame; it's
just a better place to start.

There are two angles to this problem that I hope Kubernetes continues to see
improvement on in the coming years:

First, cloud providers need deeper integration. Kubernetes should replace the
AWS/GCloud/AZ API. If you want to access object storage, Kubernetes should
have a resource for that, which Ops can bind to an S3 bucket; applications
then go through the Kube resource. If you want a queue, there should be a
resource. This is HARD. But over time I hope, and do think, we'll get there.
You can already see some inklings of this in how LoadBalancer Services are
capable of auto-provisioning cloud-specific LBs.
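
That inkling is easy to see in miniature: declaring a Service of type
LoadBalancer is enough for the cloud controller to provision a real,
cloud-specific LB. A small sketch with the official Python client (the "web"
names are assumed labels):

    # A Service of type LoadBalancer; on AWS or GCP the cloud controller
    # auto-provisions the matching cloud load balancer behind it.
    from kubernetes import client, config

    config.load_kube_config()

    service = client.V1Service(
        api_version="v1",
        kind="Service",
        metadata=client.V1ObjectMeta(name="web"),
        spec=client.V1ServiceSpec(
            type="LoadBalancer",
            selector={"app": "web"},
            ports=[client.V1ServicePort(port=80, target_port=80)],
        ),
    )
    client.CoreV1Api().create_namespaced_service(
        namespace="default", body=service
    )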

Second, we need stronger abstractions on top of Kubernetes for application
development. Projects like Rancher are doing some work in this regard, as are
Knative and many others. Even, say, the Google Cloud console is an example of
this, as it does a great job of hiding the internals behind friendly forms
and dialog boxes.

We'll get there.

~~~
ec109685
AWS is working to express their infrastructure as Kubernetes objects in this
project, so things _are_ moving in this direction:
[https://github.com/awslabs/aws-service-operator](https://github.com/awslabs/aws-service-operator)

~~~
013a
I've got a very keen eye on that project as it develops.

One thing I do think is: it feels like we should be looking at this a bit
more generally, saying things like "I need a Queue", not "I need an SQS
Queue", allowing operators to bind the generic Queue to an SQS queue on the
backend, then using the application-facing spec to assert things like "it has
to be FIFO, it has to guarantee exactly-once delivery", etc. And if the
configured backend cloud resource provider can't meet the requested spec, we
get an error.
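
Nothing like this exists today, but as a purely hypothetical sketch, such a
generic Queue could be expressed as a custom resource; the API group, kind,
and spec fields below are all invented for illustration:

    # Purely hypothetical: no such CRD exists. "messaging.example.com" and
    # the spec fields are invented to illustrate the idea above.
    from kubernetes import client, config

    config.load_kube_config()

    queue = {
        "apiVersion": "messaging.example.com/v1alpha1",
        "kind": "Queue",
        "metadata": {"name": "orders"},
        "spec": {
            "fifo": True,                # requested qualities the bound
            "delivery": "exactly-once",  # backend must assert, or the
        },                               # operator reports an error
    }
    client.CustomObjectsApi().create_namespaced_custom_object(
        group="messaging.example.com",
        version="v1alpha1",
        namespace="default",
        plural="queues",
        body=queue,
    )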

I don't know for sure whether this would be better or worse. But for some
generic cloud resources, like object buckets, queues, pub/sub, or SQL
databases, we can arrive at a commonly accepted set of generic abstract
qualities that an exemplary implementation of a system which says it's a
"Thing" should assert (e.g. with object buckets, characteristics like
regional redundancy, consistency, lifetime policies, immutable object
locking, etc.).

The interesting thing there is that now you've got a common base API for,
well, common things that any application would need. Open source projects
could flourish around saying "Check out NewProjectX, it's a
Kubernetes-compliant Object Storage provider". The backend doesn't have to be
a cloud provider; it could run right on the cluster or on a different machine
you own, just like how load balancers can work (see: MetalLB).

Obviously I don't expect AWS to publish an API that divorces the
implementation from the spec, but I think we should think about it as a
community. Also, not every cloud resource the Big 3 provide would make sense
to be "more generic"; for example, a generic "NoSQL Database" provider would
be far too implementation-specific to account for all the differences
between, say, Dynamo and Firestore. So the work AWS is doing on that project
is ultimately valuable.

------
kodablah
My opinion: it grew into a gross, large set of Python and APIs that, when
combined with multiple implementations, extensions, and company-specific
customizations, made it an unmaintainable mess that was difficult to deploy
and code around.

So, although the author compares it with the growing k8s project out there
now, at least k8s is more clearly stewarded, more developer-oriented instead
of ops-only (with code quality to match), and doesn't feel as hamstrung by
environments and dependencies (just try to run a little OpenStack setup on
your laptop for development... very annoying for a project of such age, with
so many companies' hands in the pot).

~~~
m0zg
>> k8s is more clearly stewarded

Not clearly enough, IMO. It wasn't very "clean" to begin with, it's
accumulating cruft at an alarming pace, and it doesn't drop much legacy over
time (a painful but necessary step in fighting code entropy). It seems to
have inherited Google's internal modus operandi: launch new shit and then let
it rot.

~~~
p_l
It has a very clear API stability process, preventing the typical arguments
about Google dropping things willy-nilly.

I rarely have to change anything in my manifests to get them to run after
upgrading k8s.

------
privateSFacct
My own impression.

If you develop on AWS, you get a supported experience for a long, LONG time
(see SimpleDB, which I used and which still works even though they don't seem
to market it). Same thing with old instance types, S3, etc. etc.

With OpenStack, at least a year or two ago - who could seriously stay on top
of what was going on there? You could develop something 3-4 years ago, and
getting it going on the latest OpenStack = total pain. What exactly OpenStack
_was_ was also muddy - lots of ifs and buts; 5-year-old code that ran on
vendor X's OpenStack doesn't seem to run today on vendor Y's.

I didn't spend much time on OpenStack though - and I know the hype train was
/ is huge (AWS killer, etc.). My own sense: a lot of folks freaked out about
AWS and all WANTED OpenStack to work so they had some big gun to blow up AWS
with - but they didn't seem to spend much time talking to actual customers /
developers, while AWS certainly did.

~~~
013a
> see SimpleDB which I used and still works even though they don't seem to
> market it

They don't market it, and if you created an AWS account after it was
deprecated in favor of Dynamo, you'd basically never know it existed except
for some footnotes in the Dynamo documentation referencing its predecessor.

Which is fine; hats off to AWS for maintaining it for customers for so long.

------
justinsaccount
OpenStack got good enough for people who wanted to run their own EC2-like
internal cloud right about the time that people stopped wanting to run VMs.

------
peterwwillis
I've never seen an OpenStack implementation that wasn't horrible. The reason
more people use k8s is that it's actually less useful than OpenStack, which
makes it more opinionated, which makes its implementations more uniform.
Plus, most people deploy it on top of either an OpenStack or some other cloud
platform.

You should not build your own cloud platform; that much is obvious. It's less
obvious that you should not build your own k8s, because it seems simpler and
more useful.

------
vkat
I think the battle for running virtual machines in the enterprise was already
won by VMware, and so OpenStack became a niche for service providers who
wanted to run NFVi workloads and had the resources to afford dedicated teams
to run OpenStack.

Kubernetes, on the other hand, capitalized on the need for running and
orchestrating containers. Kubernetes also got a few things right, such as a
well-documented and prescriptive set of tools one could use to get dev and
production clusters up.

On a separate note, having worked on OpenStack I can also attest that the
code was gross; not so much in Kubernetes.

~~~
dfox
For me, the reason why VMware won is outlined in the article. Case in point:
running OpenStack with shared storage on a SAN is a major PITA, while for
VMware it is the recommended way to run things.

------
notatoad
> Enterprise customers. That's a nice way of saying well-known brands who
> spend a lot of money on each other every year

I'm definitely going to be stealing that line

------
musicale
When I used to use OpenStack, it frustrated me because the software seemed
badly designed, unreliable, poorly documented, and hard to use.

Perhaps having hundreds or thousands of contributors is more of a problem than
a solution. Or maybe it just needed better technical oversight.

~~~
mirceal
I believe that's a bit unfair. Different OpenStack components had different
levels of quality.

The biggest problem I saw was that little to no thought was put into what the
experience of an operator would be. It looked more like a playground / a
place to experiment and learn than something you would bet the farm on.

If someone had cared enough to drive this holistically across the whole
platform, I think this could have gone in a different direction.

------
timeu
We are working on a project where we plan to manage our new HPC system with
OpenStack. This will replace 3 legacy HPC systems that were manged with
propriatory management systems. Due to shifting requirements by our customers
(scientist) we decided to move to a cloud framework where probably still the
majority of resources is dedidcated to a batch scheduling system but it would
allow us to also provide more cloud like services (jupyerhub, rstudio,
databases, etc) It's quite an ambitious project and we (4 engineers) basically
spent the last year understanding the in and outs of OpenStack. We also went
full in with integrating all kinds of datacenter components into OS (NetApp,
SDN, DNS, etc)

Some lessons learned so far:

  - OpenStack is very complex
  - It's less of a product and more of a framework; you need a dedicated
    engineering team with a cross-cutting skillset
  - You definitely need a dev/staging environment to test upgrades and
    customizations
  - Some of the reference implementations of OpenStack services (SDN) are
    fine for small deployments, but if you can replace them with dedicated
    hardware/appliances, you should do that

------
FooHentai
> When you're looking at other cloud products, think about similar conflicts
> of interest that might be affecting your favorite spokespersons today…
> (I'm looking at you, kubernetes)

See also: Banks investing in cryptocurrency R&D

~~~
bdcravens
See also: Blockchain startups discussing database solutions.

------
mschuster91
Funny enough I'm deploying OpenStack right now for us as an internal
playground. It's decent - but hell, the learning curve is nasty and the
documentation is incomplete. Many things I could only get working after asking
on IRC and waiting hours for a reply.

But still, it's better than having to manage KVM by hand and cheaper than
buying VMware.

------
nwrk
Luckily there is also OpenStack running on top of Docker...
[https://github.com/openstack/kolla](https://github.com/openstack/kolla)

~~~
Sylamore
That's basically how AT&T is making OpenStack work:
[https://www.sdxcentral.com/articles/news/att-5g-airship-plan...](https://www.sdxcentral.com/articles/news/att-5g-airship-plans-powered-by-mirantis/2019/02/)

------
SteveNuts
OpenStack was an extremely ambitious project, requiring so much cooperation
and interoperability between so many companies that I believe it was doomed
from the start.

In 2015 my company purchased "FlexPod", a solution certified by VMware,
Cisco, and NetApp to work together. The result is nothing but a bunch of
back-and-forth finger pointing with support, and even a critical
vulnerability will take 6+ months to get patched and certified between all
the different vendors.

I personally like the Ansible approach, where each storage/compute/network
vendor provides APIs for managing their devices and Ansible is just the glue
between them.

TL;DR getting major tech vendors to play nice together is hard.

~~~
ec109685
K8s is going in that direction: providing an API for storage providers to
implement and letting them drive the implementation, versus trying to offer a
monolithic, batteries-included solution.
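
PersistentVolumeClaims and StorageClasses already work this way: the
application asks for storage by class, and whichever vendor's CSI driver
backs that class fulfils the claim. A small sketch with the official Python
client; the "fast-ssd" class is an assumed example:

    # The app requests 10Gi from the "fast-ssd" class (assumed to exist);
    # whichever storage provider implements that class provisions the
    # volume.
    from kubernetes import client, config

    config.load_kube_config()

    pvc = client.V1PersistentVolumeClaim(
        api_version="v1",
        kind="PersistentVolumeClaim",
        metadata=client.V1ObjectMeta(name="data"),
        spec=client.V1PersistentVolumeClaimSpec(
            access_modes=["ReadWriteOnce"],
            storage_class_name="fast-ssd",
            resources=client.V1ResourceRequirements(
                requests={"storage": "10Gi"}
            ),
        ),
    )
    client.CoreV1Api().create_namespaced_persistent_volume_claim(
        namespace="default", body=pvc
    )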

------
7ewis
I've been using AWS for years, and I had never heard of OpenStack until this
year.

The only reason I'm aware of it is that I'm studying part time for a degree,
and OpenStack is taught in one of the modules. It's a shame, really, that
they only mention AWS to advise against using it, in case you accidentally
spend money.

------
RyanShook
As an outsider, the concept of OpenStack was always really appealing, but it
felt like Rackspace's answer to AWS. I never really learned more about it
than what was on their website, but that was always the impression I got.

------
godzillabrennus
NephoScale was an OpenStack vendor I followed before they vanished.

Seems like it never really got easy enough for average developers to find it
accessible.

~~~
gtirloni
I remember getting to work and finding out the Nebula folks had disappeared
without warning [0]. On April 1st, of all days. Led by Chris Kemp (@kemp).

The next virtualization cluster was KVM/libvirt-based and automated with
Ansible. That particular company didn't want to hear about OpenStack anymore.

0 - [https://blogs.dxc.technology/2015/04/02/nebula-openstack-clo...](https://blogs.dxc.technology/2015/04/02/nebula-openstack-cloud-vendor-just-shut-down-with-no-notice/)

------
enriquto
This way of writing (like a journalist) is so annoying. I much prefer the
scientific style of writing, where the main idea is given in the first
sentence and suspense is avoided as much as possible.

~~~
zrail
Cool.

The author chose to write a first person narrative on their personal blog.
What you prefer doesn’t really matter, except in so far as you can choose not
read the piece.

~~~
yjftsjthsd-h
We're in a comment thread. Disliking the article, with reasons, is a valuable
contribution.

Besides... Someone chose to write a comment. What you prefer doesn’t really
matter, except in so far as you can choose not read the comment;)

~~~
indigodaddy
I see what you did there with the missing grammar

