
Intel Pulls Out of OpenStack Effort It Founded with Rackspace - kefka
http://fortune.com/2017/04/14/intel-openstack-project-rackspace/
======
foobiekr
OpenStack really has a bunch of problems.

The big one is that OpenStack is about building and operating your own
Rackspace; this is not something IT organizations can even hire for let alone
carry off. The idea that they could/would/should was purely aspirational.

The others are that it's basically a mess - no two OpenStack deployments will
look the same - and that it has neither an operational advantage nor much of a
pricing advantage (50% for RHAT) over VMware - which is better integrated and
much, much better from an admin experience and debuggability standpoint.

I looked hard at leveraging OpenStack for the service we were building, but the
underlying code was often cringetastic and somewhat naive. At some level, if
you run software as a service you can work around poor code quality -
something friends from AWS have emphasized - but you can't throw it over the
wall to people without large engineering staffs and expect them to run it; it
doesn't work in the small. VMware is the opposite approach - its entire model
and practices evolved out of arm's-length software sales and support. So
naturally it works better.

~~~
mattbee
As a small competitor to Rackspace in 2010, designing a replacement for our
(then) 8-year-old VM structure, Bytemark bet against Openstack from the first
announcement and spent years building something the-same-but-different.

It wasn't _just_ that Openstack was unreleased and talking about foundations
and making grand promises without any production-grade code. It was also that
we could never work out which version Rackspace was actually running for their
actual customers, and the details of how to actually _deploy_ it and make it
work seemed enormously fiddly compared to VMware, which was where they were
taking aim. I expected them to say "now running on Openstack, 100%!"

Also the design seemed to mirror AWS, as if the only answer to their dominance
was to ... copy every facility they were producing exactly?

We had a more definite vision of where VMs should go, and thought it was a bad
plan to aim at "Amazon, but smaller!". In particular we really really wanted
live migrations to be a part of our platform - where we really really cared
about uptime of individual VMs, and wanted people to be able to upgrade them
on the fly. Plus, y'know, for a hosting company, being in control of, and
having opinions of our hosting platform was what people paid us for!

So we designed our in-house platform BigV instead (now Bytemark Cloud Servers)
-> [https://blog.bytemark.co.uk/wp-content/uploads/2012/12/DesignAndImplementationOfBigV.pdf](https://blog.bytemark.co.uk/wp-content/uploads/2012/12/DesignAndImplementationOfBigV.pdf)
[pdf] And even
though we (re)invented an NBD server to make all the live migration stuff work
[https://github.com/BytemarkHosting/flexnbd-c](https://github.com/BytemarkHosting/flexnbd-c)
I believe we've ended up with something that does a small number of things far
better.
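For the curious, the essential trick an NBD-backed live migration relies on is iterative pre-copy: keep the VM running while its disk streams across, then pause only long enough to copy the last few dirtied blocks. A toy sketch of that loop (in-memory block lists stand in for real NBD-exported devices; this is not flexnbd's actual code):

```python
# Toy sketch of iterative pre-copy, the idea behind NBD-backed live
# migration: copy every block while the source keeps running, then keep
# re-copying blocks dirtied during the previous pass until the dirty set
# is small enough to finish during a brief pause.

def precopy_migrate(source, dest, get_dirty, pause_threshold=2, max_passes=10):
    """Copy `source` blocks into `dest`. `get_dirty(pass_no)` returns the
    set of block indices written on the source since the last pass."""
    # Pass 0: bulk copy of every block, VM still running.
    for i, block in enumerate(source):
        dest[i] = block
    dirty = get_dirty(0)
    passes = 1
    # Re-copy dirtied blocks until few enough remain to pause briefly.
    while len(dirty) > pause_threshold and passes < max_passes:
        for i in dirty:
            dest[i] = source[i]
        dirty = get_dirty(passes)
        passes += 1
    # Final pause: the source is frozen while the last blocks are copied.
    for i in dirty:
        dest[i] = source[i]
    return passes
```

A real implementation does this over the wire against block devices and has to bound the pause time; the convergence loop is the same shape.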

I see people asking whether Openstack is actually running anywhere very
public, and have read some pained war stories. So it still feels like the
right decision, even before this announcement. Plus it seems like live
migration in Openstack is still an exotic and difficult option, whereas that's
been our standard practice for years.

There are plenty of problems running your own hosting stack, but I like that
we can solve our own problems on our own terms, and tie software decisions to
hardware or data centre decisions that we make at the same time.

I eye up Kubernetes & Ganeti every few months, but they keep reminding me that
if you've got the expertise, and a specific purpose in mind, you can usually
build something that fits your purpose more closely.

(though the second half of that sentence will probably be written on my self-
carved tombstone)

~~~
foobiekr
The use case I described - Skyport's cloud managed secure servers - needs
super high quality, self-recovering embedded code that made no assumptions
about the network being high quality or reliable.

I think the thing people - and especially the OpenStack guys - don't
appreciate is how terrible it is to (a) lose a workload or (b) require someone
to visit the datacenter (which may actually be a colo in another state or
hours' drive away). Having to fall back to some sort of terrible insecure
management like IPMI, or a dedicated mandatory management network (of
questionable security), etc., is just not viable.

Systems and infrastructure architectures need to address a few things that
really matter - error handling, continuous self monitoring, state compression
and linear, systematic self-recovery - and while some OpenStack components
handle this (ceph is pretty great except for a few things around access
control and security) the whole doesn't handle them well at all. It's not
enough to log a message (or worse, just log an exception stack).
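By way of illustration (not any actual OpenStack code), "linear, systematic self-recovery" can be as simple as walking an ordered escalation ladder of recovery actions until a health check passes, instead of logging an exception and hoping:

```python
# Minimal sketch (names illustrative) of a linear recovery ladder: on
# failure, try increasingly drastic recovery actions in a fixed order,
# re-checking health after each one, rather than just logging.

def run_with_recovery(check, recovery_ladder):
    """`check()` returns True when the subsystem is healthy.
    `recovery_ladder` is an ordered list of zero-argument recovery
    actions, mildest first. Returns the number of rungs used, or raises
    once the whole ladder is exhausted."""
    if check():
        return 0
    for rung, action in enumerate(recovery_ladder, start=1):
        action()  # e.g. retry, restart worker, rebuild from checkpoint
        if check():
            return rung
    raise RuntimeError("recovery ladder exhausted; operator attention required")
```

The point of the fixed ordering is that recovery behaviour stays predictable and debuggable; you always know which rung the system is on.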

~~~
traf68
I find your expectations to be unreasonably high. The self correcting system
is a fiction and will always be a fiction. The indefatigable infrastructure
that corrects the errant member system with 5 9s is a fiction and will always
be a fiction.

~~~
foobiekr
It is a fiction but the answer is not to throw your hands up and ignore the
problem.

------
timeu
We (a scientific research institute) are currently operating a relatively
small, commercially supported OpenStack (Liberty release) installation next to
our HPC infrastructure, and so far it works quite well. Of course, because we
are using OpenStack from a commercial vendor, the installation/provisioning is
quite straightforward (although there is quite a bit of complexity if you
want to do a lot of adjustments). So far we haven't had any big troubles
during operations, and our users (researchers) have started to use it (creating
VMs for specific applications).

We will hopefully upgrade soon to the Newton release which will enable us to
provide SaaS (Murano) and CaaS (Magnum) to the users. We will probably also
evaluate Openshift on top of Openstack.

The biggest pain point is the integration with our existing HPC
infrastructure (specifically the parallel filesystem/storage). We haven't
really looked much at the code except for debugging some authentication issues
when we tried to integrate Keystone with our core IT's AD, so we can't comment
on that.
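For reference, the Keystone-to-AD wiring mentioned above usually comes down to a domain-specific LDAP config; a minimal sketch for a Liberty-era deployment (the file path, server name, DNs and credentials are placeholders, not this poster's actual setup):

```ini
# /etc/keystone/domains/keystone.MYDOMAIN.conf  (path and DNs are placeholders)
[identity]
driver = ldap

[ldap]
url            = ldaps://ad.example.com
user           = CN=keystone-svc,OU=Service Accounts,DC=example,DC=com
password       = <service-account-password>
suffix         = DC=example,DC=com
user_tree_dn   = OU=Users,DC=example,DC=com
user_objectclass    = person
user_id_attribute   = sAMAccountName
user_name_attribute = sAMAccountName
# Keep Keystone read-only against AD; writes belong to the AD admins.
user_allow_create = false
user_allow_update = false
user_allow_delete = false
```

Attribute mappings like `sAMAccountName` are where AD integrations typically need adjusting, which is often where the authentication debugging ends up.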

Additionally, the official OpenStack documentation (the commercially supported
one is quite superficial) is often very confusing, and it's very hard to find
proper changelogs for each of the OpenStack components. Also, we really don't
like (or rather despise) OpenStack's issue tracker (Launchpad). It is very
hard to find any existing issues when we encounter a problem.

------
spotinsun
At our facility, we have an OpenStack-based private cloud that we worked with
Sardina Systems to put in place. It is in production and we have not had any
problems with it.

We have also undergone an upgrade from Mitaka to Newton with zero downtime,
completed within a day (I understand that zero-downtime upgrade is a
capability specific to Sardina FishOS). We will be upgrading to Ocata in the
coming months.

We have found that in cloud, it is critical to separate the Operator role from
the Consumer role. OpenStack is primarily Operator-facing; the Operators are in
turn providing the capabilities to serve certain Consumers. OpenStack enables
the Operator to flexibly provide the virtualized compute environment requested
by the Consumer, while the Consumer does not care how the compute environment
is provided by the Operator. With this view, lots of things become clear.
Granted, not everyone in the OpenStack world has this view, but we are sure
glad that we did from early on!

Our deployment is not the largest, but we aren't small either (several hundred
hypervisors). We are an organization of several tens of thousands of people,
though the user base at the moment is still in the hundreds.

A number of comments here mentioned that they had live migration problems. We
have not had these problems. It worked from the very beginning for us.

Our view was that we would work with a vendor on OpenStack, rather than trying
to string together the parts, in the same way that we don't string together
the parts that make up a Linux distribution. FishOS makes it simple for us --
and that's before looking at operational tools in the product.

I have not been involved in the commercial terms, but my understanding is that
commercially we have TCO that would be hard to match otherwise.

~~~
kiallmacinnes
As someone who used to work on OpenStack as 100% of my day job for 3-4 years,
this comment makes so much sense.

People try to deploy OpenStack from source, or even from distro packaging
(Ubuntu's, etc.).

This isn't how you should do it. You need an opinionated vendor who meets your
needs.

------
iamthepieman
The article mentions other big players pulling out or cutting jobs/funding
related to OpenStack in recent months, in addition to this latest move by
Intel.

Is this related to some endemic problem with the technology, big egos within
the project, dominance by another outside player or what? Cursory searching
doesn't reveal any "big thing". Maybe it's just dying a death by a thousand
cuts.

~~~
jldugger
I figure Intel's participation hinged on supporting a growing fleet of
competitors to AWS, who would bid up chip prices. I guess that isn't
happening.

IMO, OpenStack was sort of a consortium effort to compete with VMWare, AWS and
Salesforce, but its operational model is closest to AWS. Nearly all the big
enterprise IT companies tied their rafts together hoping to stem the flow of
customers. It doesn't appear to have worked.

Rackspace is advertising AWS migration and consulting services on podcasts I
listen to. Not exactly a stellar testament to their own cloud product, but I
think they still participate.

HP's strategy is a bit haphazard given the recent corporate splitup. On the
plus side, the HP/HPE split seems to have allowed HPE to ditch all the weird
www# URLs. They retired their public cloud, but still sell hardware that can
run OpenStack at least.

Dell bought EMC, and proceeded to shut down their OpenStack offerings in favor
of VMWare, which they own a substantial fraction of.

IBM is in a continual process of downsizing its hardware divisions in pursuit
of higher earnings per share. It seems their big push is Cloud Foundry, which
seems to be more about containers and k8s.

So why are they bailing? I'm guessing:

- it's way harder to hire support staff for OpenStack than for VMWare
- AWS reduces capital costs and upfront investments
- Customers on existing solutions aren't prepared to take advantage of new opportunities
- Many of these companies have existing product lines they don't wish to disrupt

~~~
nul_byte
You missed Red Hat, who seem to be the only company having major success with
OpenStack [1]

IBM, Intel, HPE, etc. have mostly thrown in the towel and now offer their own
services on top of Red Hat OpenStack.

OpenStack has now found a footing beyond the enterprise, becoming the de facto
platform for NFV running mobile networks, and I guess Red Hat are becoming the
winners here, as they are so used to supporting an OpenStack 'type of'
infrastructure for large bodies in banking, telco, health, etc. When you
consider Red Hat are already large, well-established contributors to all of the
layers of the OpenStack 'stack', such as KVM/QEMU, libvirt, the kernel itself,
plus overlay networking tech such as OVS, and now DPDK, you can see why they
are well positioned to support and run OpenStack clouds.

[1]
[https://www.theregister.co.uk/2017/03/28/red_hat_cloud_quart...](https://www.theregister.co.uk/2017/03/28/red_hat_cloud_quarter/)

~~~
lathiat
Canonical also have a strongly growing OpenStack business, with no signs of
pulling out of it.

~~~
bonzini
(Disclaimer: I work at Red Hat on the virtualization team.)

Canonical seriously lacks expertise in KVM (+QEMU+libvirt), and you cannot
take that for granted when you have a customer with VMs crashing that is
asking for a fix.

~~~
kashyapc
Those who have down-voted the above almost certainly do not know the situation
on the ground.

FWIW, I recently had to explicitly inform the Canonical virt maintainers (of
which there seem to be very few) about which patches ought to be backported to
fix a bug in one of their libvirt packages, a bug that was seriously affecting
the upstream OpenStack CI environment (i.e. preventing patches from being
merged).

The said bug had fixes already available upstream, which were straight
backports with no conflicts. No one had bothered to do the "unsexy" work of
backporting and cutting a quick stable build.

Only after pointing out the commits (with help from one of the lead upstream
libvirt maintainers), and posting them on a Launchpad bug, did the Ubuntu
maintainers step in to backport the said fixes.

------
micah_chatt
Is there any chance that vendors (like Intel) will care less about the API for
accessing their products, as there seems to be more consolidation around APIs
for running applications, like Kubernetes? Intel is a member of the CNCF [1]
and seems to be putting money behind it. Am I reading too much into that, or
is Intel just a big company that puts money in lots of places?

[1]: [https://www.cncf.io/about/members/](https://www.cncf.io/about/members/)

~~~
detaro
They have an interest in a large server ecosystem in general, with options for
running your own infrastructure, because it's a lot more profitable for them
to sell server CPUs to many companies that don't have as many options to
pressure them on price as, say, Amazon can, and to get their special features
integrated so they can use them as arguments against other CPU vendors. They
don't care as much which option wins, so it makes sense to be involved with
anything that seems important.

------
driverdan
Sites with autoplay videos should be banned.

~~~
benley
I very much agree. Clicked link, read five words, site started screaming some
garbage advertising at me, closed tab immediately. It's not even the video
that bothers me so much - it's the autoplay _audio_ that causes me to
immediately close the tab and never return to a site. Ugh.

~~~
kefka
Yeah, I looked for another link that didn't have such obnoxious refuse...

I'd be game if you can find a less spammy site. But when I submitted this an
hour ago, there wasn't one.

~~~
benley
For what it's worth, it's fortune.com that annoyed me, not your link to it.
Thanks for looking for alternatives at least :-)

------
rectang
Consortiums are inherently unstable. Marketing interests drive wild swings in
participation and funding.

Open governance by individuals is much more stable. That stability benefits
_both_ individuals (whose stake is not vaporized when they change employers)
and businesses (who are more insulated from impulsive moves by big players).

------
turnip1979
As someone who was in the OpenStack community, I am sad to see these negative
events. But I think a lot of folks knew this was going to happen.

I think the mistakes made were:

1) the so-called big tent approach
2) too much complexity
3) core projects not listening to end-users, and focusing too much on the plug-in model

It is a shame because there were so many smart and hard working folks
involved. It really felt like a community experience.

------
twelvenmonkeys
Is there an alternative out there for OpenStack? Perhaps a minimalist version?

~~~
ymse
If you just want clustered virtual machines, check out Ganeti[0]. It's not
advertised much, but this piece of software hosts most of Google's internal
infrastructure (not the public-facing stuff).

Unlike OpenStack, it has a proper scheduler, and lets you rebalance VMs across
hypervisors efficiently. Also unlike OpenStack, it can restart a VM if it (or
its hypervisor) dies, if you've enabled that.

And _completely orthogonal_ to OpenStack, it has very strong consistency
guarantees. It's not made to start thousands of VMs in seconds, since each master
node has to agree on all decisions, and each operation typically "locks" the
involved hypervisor. On the other hand, I haven't been able to break it once
in over six years.

Note that it really just exposes an API and comes with a superb command-line
client. Some assembly required.

Source: deployed Ganeti with great success at a billion-euro company, moved on
to a promising "cloud" project which insisted on using OpenStack, and promptly
quit after a year of fighting obscure bugs (and naive colleagues who did not
want to try anything else :)).

(If you'd like help deploying it, I'm available!)

[0] [http://www.ganeti.org/](http://www.ganeti.org/)

------
atarian
The article just mentions that Intel is no longer funding OSIC, which from
what I read is essentially a data center for contributors to use (for free?).
I could totally see why Intel would have second thoughts about footing a
server bill; I don't think it means they're officially out of OpenStack.

~~~
ajdlinux
My understanding is that the OSIC program also included funding for Rackspace
and Intel OpenStack developers, some proportion of whom will probably be
losing their jobs as a result, I imagine.

------
xemdetia
From someone not very invested in the OpenStack process, it feels like all the
momentum of the project's recent push just withered away at once. I am not
sure if the container crowd ate its lunch, with containerized appliances
simply solving the problems OpenStack targets better.

~~~
yeukhon
OpenStack is a platform for creating an Infrastructure as a Service (IaaS). It
runs on some physical machines, and manages those physical machines, slicing
them into VMs. There you get to manage / create networks, storage, etc. -
basically running a private cloud in your own datacenter/basement/office.
OpenStack has grown into more than just compute. It has a supposedly
S3-compatible interface for its object storage system, for example. New
components are modeled after major cloud services like AWS, pretty much. To
manage containers or manage applications, you can use CloudFoundry/Kubernetes,
but to manage the "infrastructure as a platform" (I want more machines, I want
some bucket, I want to run some Lambda function, I want to upload new OS
images) you need something like OpenStack. So if you want to build AWS/Digital
Ocean, OpenStack is required.

~~~
WorldMaker
This seems to get back to an interesting discussion I had with someone early
in the OpenStack efforts: as a developer OpenStack doesn't directly interest
me because I don't care about infrastructure. Where OpenStack had a
possibility to win was to provide options for infrastructure agnosticism: if I
can build an app that runs "unmodified" on any OpenStack-based infrastructure,
that has a possibility to save me potential time and money from having to port
apps to/from/between AWS, Azure, Google Cloud, et al. (assuming of course that
enough clouds actually adopt it).

From that perspective, container solutions _are_ delivering a better developer
proposition than OpenStack has yet managed. There are ways now to build
container clusters that you can ship in parallel to AWS and Azure with very
little code difference.

In that earlier discussion I was skeptical of OpenStack precisely because of
its focus on infrastructure first. Without the buy-in of being a clone of a
specific cloud structure (AWS compatibility over anything else, for instance)
or the backing of traditional datacenter/server vendors (IBM who eventually
started into BlueMix; Microsoft whose "on premises Azure" is now firing on
most cylinders but was announced as a plan early in OpenStack's history),
OpenStack didn't seem to have an obvious niche in the infrastructure world.
The closest to a niche it might have had in its early life was the promise of
application portability between clouds and that never quite seemed to be
delivered.

I can tell it frustrates infrastructure folks to hear that containers have
been eating OpenStack's lunch, but that is the very real case from the
developer perspective. As a developer today, I go for containers and OpenStack
is no longer relevant on my radar. Sure I can run containers on OpenStack, but
containers abstract away more of the infrastructure, and I care less and less
what cloud(s) sit underneath the container cluster. When I asked OpenStack
people what OpenStack might deliver to me, that vision of application
portability was tantalizing but never seemed quite finished; container
technologies have actually delivered it.

~~~
yeukhon
> if I can build an app that runs "unmodified" on any OpenStack-based
> infrastructure, that has a possibility to save me potential time and money
> from having to port apps to/from/between AWS, Azure, Google Cloud, et al.
> (assuming of course that enough clouds actually adopt it).

> There are ways now to build container clusters that you can ship in parallel
> to AWS and Azure with very little code difference.

OpenStack is an IaaS, just like AWS. Focus on the context. Do you want your
own private cloud? Yes or no?

If no, then this discussion can end, because AWS and Azure run on their
proprietary IaaS code. As a customer, you request resources from the IaaS
layer, and you build your server/platform from that point.

So arguing that container clusters can ship to other clouds with very little
change (mostly just rewriting the API calls that create a container) is unfair
in the context of why one would choose containers over OpenStack. The purposes
are different.

If your answer is yes, I am building a private cloud, how are you going to do
that with Docker alone? Can you build a software-defined network with Docker?
Absolutely not, since Docker is a host-based deployment solution.

What you are looking for is the ability to create OpenStack the same way
CloudFoundry / Kubernetes are created. You write up a manifest, and the
necessary databases and services are deployed to some EC2 machines. In the
case of CloudFoundry, you write up a manifest file that describes the number
of instances, types, credentials and whatnot, then call BOSH to create Cloud
Foundry (it will create a pool of app machines, router servers, UAA, etcd,
etc.). Machines are created from a stemcell, basically an image. You want to
create your IaaS based on images. You want to be able to script up a manifest
and deploy your IaaS. You want a lift-and-drop IaaS infrastructure. Containers
can do that, but it cannot be done simply with Docker. You need that
infrastructure abstraction layer on top. That's probably what Docker
Enterprise Edition might do, but I have not really dug into it yet.
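To make the manifest idea concrete, here is a trimmed sketch of the shape of a BOSH-style deployment manifest (names, sizes and instance counts are illustrative, not a working deployment; the `releases` and `update` sections are omitted):

```yaml
# Trimmed sketch of a BOSH-style deployment manifest: it declares how
# many machines of which type to keep running, all built from a
# stemcell image; the deployer reconciles reality against this file.
name: my-platform

stemcells:
- alias: default
  os: ubuntu-trusty
  version: latest

instance_groups:
- name: router
  instances: 2          # how many router VMs to keep running
  vm_type: small
  stemcell: default
  networks: [{name: private}]
- name: app-cell
  instances: 10         # pool of application machines
  vm_type: large
  stemcell: default
  networks: [{name: private}]
```

The point being made above is that the same declarative, image-based approach is what you'd want for standing up the IaaS layer itself, not just the apps on top of it.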

~~~
WorldMaker
Yes, it is an unfair comparison. I was attempting to explain in my comment why
I consider it a _valid_ unfair comparison, because it is a matter of
perspective.

As a software developer, do I ever want a private cloud? No. Does my employer?
Maybe. Is it my job to tell them how to invest their infrastructure dollars?
Quite possibly no, because software development and infrastructure are
typically held at arm's length. But even when they are not, in a "proper"
DevOps shop, the question of which cloud becomes subservient to developer
convenience: how easy it is to deploy software to the cloud, and how
productive developers are writing software for it.

So yes, the purpose of OpenStack and Container technologies are very different
and I appreciate that technically. In terms of real world value to me as a
software developer, however, I have platform problems not infrastructure
problems. I don't care what the infrastructure is under the service so long as
it provides a stable, reliable platform for me to build upon. Containers
abstract that away for me in a way that solves real platform problems;
OpenStack was only ever relevant to me insofar as it once hinted at a possible
solution to them. That's not fair, and it was expecting too much from
OpenStack at the time, but that's life.

~~~
yeukhon
Okay, I was reading it as defending the claim that containers solve what
OpenStack set out to solve, which is the proposition I read in the comment I
was replying to.

Of course, I would advise against running a private cloud unless there is a
dedicated team of at least a dozen or so. I applaud Digital Ocean for being
able to survive and make good business from their private cloud. As a
developer, I totally agree that I just want my code to be deployed, and all
its dependencies deployed and configured.

------
kordless
I ran an OpenStack cluster in my house for a few years. The deployment was
managed by a bunch of scripts which I wrote and published to help others learn
the basics in deploying an OpenStack cluster, sans most of the more
complicated networking stuff. Last count, about 25K unique IPs downloaded the
images used in those scripts to launch test instances. At one time I held the
top 3rd or 4th link on Google for "openstack install".

A few years ago, I leveraged a bunch of methods in OpenStack to build a
_thing_ which would launch instances based on Bitcoin payments to certain
preloaded addresses. These addresses allowed "templates" to be associated with
cluster capabilities and code to be deployed. Payments to that address in
Bitcoin would immediately net you an instance, of a certain type, on someone's
infrastructure.

The general idea behind this creation was a way to abstract hardware
components into a system by which applications could launch themselves and
utilize the resources provided in a fair, secure and trustworthy way. It is my
belief this model was WAY ahead of its time, a precursor to the hybrid models
we see emerging today, and it led directly to my personal realization that
federation of all systems will be a basic requirement for implementing trusted
computing in the future. We're going to need it with AI. Not sure how I know
that, but there it is, irrationality and all.

Unfortunately, standardizing deployment methodologies doesn't net you
federation. Standardization itself doesn't net you trust, unless everyone can
agree on the standard, which is out of necessity done in a non-trustworthy
way. Votes of a board, for example, aren't arrived at with fair consensus,
unless there's an algorithm behind the votes. Otherwise, votes end up being
slightly irrational, because the people behind the votes are slightly
irrational. Or very irrational, depending on the company they work for.

And yes, federation can be implemented in a trustworthy way by a corporation,
such as Google or Amazon, but that requires all people in _that_ federation (a
discrete group) to agree they will use this _thing_ to implement trust, even
though the _thing_ may not actually have trust implemented in a rational way
(i.e. by algorithm).

At the end of the day, OpenStack was doomed not because of container tech, or
the structure of the board, or who ran the biggest cluster, or because it was
overly complicated for simple use cases.

It failed because it did not implement the basic requirement of delivering
trusted infrastructure in a scalable and trustworthy way across a broad range
of infrastructure, in a wide range of locations, and do so in a way that
separates the use of the infras from the irrationality of humans running the
infra.

Until something does that, and does it well, we're stuck with Google, Amazon
and other provider's solutions. This is also a good rationalization for the
continued increase of cloud services by companies and the continued emergence
of hybrid models in the future.

------
holydude
I never saw a company benefiting from OpenStack.

~~~
karlkatzke
I'm at one of the only companies I know of outside Rackspace that's running it
in production. We're doing a very poor job of it; we never hit production with
any effort to upgrade the cluster in the past three years, so we're still
running Havana with Nova Network.

The most likely upgrade path for us right now is VMWare 6.5.

~~~
cr0sh
I worked for a company (Nobis Technology Group) that was acquired by LeaseWeb
USA; I was a part of the "server automation" team and was somewhat reluctantly
"forced out" of the team after the transition (they basically offered to pay
me a smaller salary to work for them - do I look stupid?).

The team was essentially dissolved several months later, as I knew it would
be.

But for the short period I was there (about 2 years), it was a great place to
work, and a high point in my total career. Prior to that position, I had been
doing web development in PHP pretty much exclusively. Doing server automation
was a completely new space to me.

Soon after starting, our team was tasked with a migration to OpenStack. Since
our current infrastructure front-end was already based on PHP, I got tasked
with looking into how or if we could use PHP OpenCloud to work with OpenStack.
It seemed workable to me; I was able to extend the classes in such a way using
namespaces and other techniques so that we could add additional capabilities
to the interface that weren't already supported (and there were a lot of holes
to fill!), but wouldn't break things if/when we had to upgrade OpenCloud (this
was ultimately tested a couple of times while I was there, and the changes
proved flawless - at least on the PHP side).
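The PHP specifics differ, but the extend-don't-patch pattern described here translates to any SDK: subclass the vendor classes instead of modifying them, so SDK upgrades don't clobber your additions. A hypothetical sketch in Python, where `VendorServerClient` stands in for the upstream class:

```python
# Sketch of extending an SDK without patching it: our layer lives in a
# subclass, so upgrading the vendored library leaves our additions intact.

class VendorServerClient:
    """Stand-in for an upstream SDK class we must not modify."""
    def create_server(self, name):
        return {"name": name, "status": "BUILD"}

class ServerClient(VendorServerClient):
    """Our layer: adds capabilities the upstream SDK lacks."""
    def create_server(self, name, tags=None):
        server = super().create_server(name)  # reuse upstream behaviour
        server["tags"] = tags or []           # capability missing upstream
        return server

    def rebuild_server(self, server):
        # A wholly new operation the upstream client never supported.
        server["status"] = "REBUILD"
        return server
```

As long as the upstream class's public interface stays stable across upgrades, the subclass keeps working; only genuinely breaking upstream changes need attention.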

Ultimately, we had a nice stable front-end that worked well with both our
original system (some VM architecture that I forget) and the new "cloud"
infrastructure based around OpenStack. In effect, a user could deploy either
an actual server (if available), provision a VM (if a server was available),
or build a cloud server "system" from a myriad of parts (we tried to support
and provide as much access to the OpenStack stuff as we could). We also had a
RESTful API for clients to use (some of our clients resold our services
under their own names). Some of the backend stuff was a bit "messy" in how it
worked (I won't go into details, but I "authored" a fake "O'Reilly" "book"
(really just a front "cover" mainly) whose mascot was dickbutt) - but despite
the mess, overall it worked well, considering all the moving parts (where it
would tend to fall down - not always, but enough - was when an upgrade to
OpenStack was performed).

In short - we were also one of the few companies running OpenStack in
production. Our owners ended up selling to LeaseWeb, and I left - but the idea
was that LeaseWeb wanted to transition things to their API and system, and I
honestly don't know what happened with all the work I was involved in on the
PHP side of things (there was also a point where a coworker and I had to
quickly ramp up and learn Go to make an interface from Rancher/Docker over to
the Nobis API - that was a fun and interesting experience). I imagine that
some portion is still running, but who knows.

I personally think that in the right hands and with the right infrastructure
OpenStack can be a very workable and working technology. It seemed to work
well for the systems we used while I was at Nobis. I honestly don't know
whether you could use it to scale up to anything like AWS or Google's
offerings, but I think for medium-sized stuff like we were doing (or like
Digital Ocean does - who at the time was our direct competition), it can work
well - at least as I experienced things.

~~~
karlkatzke
I don't disagree with you at all. For our uses, it was a stupid amount of
overkill, specced by someone whose job was the same as his hobby. We have
nowhere near the scaling needs such a system was designed for, the
hardware that was specified was very poorly chosen, and no maintenance was
ever done on it except for daily care and feeding -- which quickly grew to
consume all of the previous engineering team's day.

When the previous engineering team was shown the door, the entire system had
no maintenance and no path forward. It's an epic management fail, but ...
let's just color me cynically and sarcastically surprised that I used
"management" and "fail" in the same sentence.

------
traf68
Worked for rs in the early 2000s. The culture was pretty cutthroat and it was
the chiselers and greasy guys who moved up.

I worked as a dc tech during the .com bust recovery period and the level of
incompetence and the personalities I encountered led me to leave < 6 months.

Any decent SA from my generation can code a one-off virtualization base
without too much trouble using kvm/qemu. LXC is a clean fit with that. Sounds
like this is standard RS procedure: fleece the idiots, use the willing, and
promote the owlshit.

