I don't have any inside info, just grapevine and rumors, but this isn't that unexpected, and it doesn't have much to do with the IBM acquisition either. CloudForms is complicated to use and not that popular. Its head was already on the chopping block.
I am not worried that IBM is going to start killing Red Hat products or "cannibalizing" them. IBM is going to adopt more Red Hat tech, and Red Hat may start using more IBM stuff internally, but I don't see big changes coming.
There's been lots of internal talk over the last 3 years about what to do with CloudForms. The customers who love it REALLY love it and do amazing things with it. And our internal training group had the world's largest CloudForms installation(s), so we really ate our own dog food. But you really do need to be of a certain size to grok CloudForms' value proposition.
All of these console-of-consoles products are way complicated and expensive. IBM needs to keep theirs because it is used for their Power platform and zLinux.
People who want the single-pane-of-glass thing are probably better off with VMware if they want to move workloads around, or Terraform if they want a recipe-type system.
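For what it's worth, "recipe-type" here means declaring the desired end state and letting the tool converge reality toward it, rather than clicking through a console. A minimal Terraform sketch (the AMI ID and names are placeholders, not real infrastructure) looks like:

```hcl
# Hypothetical recipe: declare one VM; "terraform apply" converges to it.
resource "aws_instance" "app_server" {
  ami           = "ami-0123456789abcdef0" # placeholder image ID
  instance_type = "t3.micro"

  tags = {
    Name = "recipe-style-example"
  }
}
```

Running the same recipe twice is a no-op; changing it produces only the delta, which is the appeal over console-of-consoles tooling.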
The new IBM Cloud Paks run on OpenShift (both for x86 and System z hardware).
Assurances from corporations and their representatives, barring hard contracts backed by penalties, are not worth a dime. Moot, hot air, non-actionable.
This is a common trick. All the transition promises come from people who won't be in the org chart in a couple years. Nothing changes for most of the lock-up period during the transition, and as soon as that person is gone or a lame duck, the rules begin to change.
And what are you going to do? Complain to the people who made promises they no longer have the political standing to enforce, or in fact have already moved on?
Now there is no person accountable for the promises the former employee/boss made. Therefore, the promises do not exist.
I’m not terribly concerned over CloudForms. To be frank, it’s an extremely complicated product that requires WAY too much work to use compared to alternatives like Terraform, Pulumi, etc. For all the effort it requires, you might as well do some heavy-handed customization of ServiceNow instead.
If I start seeing them fuck with more core offerings I’ll be more concerned.
I'd be interested if anyone can compare and contrast.
We don't consider large-scale, boring ol' enterprises much on HN, but I wouldn't be surprised (I have no data one way or the other) if boring RHEL instances in boring enterprise/government datacentres and clouds doing boring things outnumber shiny Linux instances in glossier usage.
What does it offer? Support. And support is important when you have something important: when you need to hold somebody accountable, when your core expertise or market value is not patching and fixing your back-office software yourself, or when you simply need to pass an audit, security policy, regulation, etc.
Also, stability. Not in the software/OS stability sense (although it's pretty good there); but in the "I'm going to put RHEL on my roadmap and invest in education and processes and procedures and documentation and ecosystem based around it, because I believe it'll still exist in 10+ years, which is the timespan some of my projects and certainly my company are based on".
(Edit - I mean, hell; Enterprises run AIX, which many of you kids will have to Google;). And for what it needs to do AIX is freakin' awesome. And for most other things it's wildly inappropriate and ineffective:)
There are distributions I prefer for personal projects that I won’t touch at work. An environment like mine is where Red Hat and Microsoft shine, although Microsoft has been a pain in the ass lately with their cumulative updates and 6 month rolling releases on the desktop.
I worked at a defense contractor that was wondering what the next platform was going to be, since PA-RISC/HP-UX was our platform.
We were working with Red Hat, and a bunch of us in the low-level code group had a week-long class at Red Hat on Linux kernel internals. It was excellent, and the higher-ups got a happy feeling about the support Red Hat offered.
The project was delayed and I ended up leaving that company.
They told us in the class the open secret that CentOS is basically Red Hat.
Do we integrate with/use cloud and container solutions for services that align with that? Yep. But you can’t just dismiss the foundation, no matter how recursively down the chain you need to go, that it all runs on just because you don’t have to deal with it (“you” in the general sense).
Some of these companies - I’m excluding the ones that are just hopelessly out of date and out of touch - simply have different needs. This is especially true when a significant portion of the tech - maybe the majority - faces inward. There’s not much value in elastic cloud computing when your workload is relatively stable over a period of years and it’s running a bunch of back office stuff that’s not part of your company’s product.
It is a simplification to the point of being wrong to say everything in tech is cyclical. Some things are in some ways, but for the most part we progress forward, not back.
You seem old and out of touch. No offense. But when people aren't able to keep up they sometimes invent fantasies that their skills will remain relevant forever. That's never really been true unless you actively search out confirmation bias and ignore the vast majority of what's happening in the field.
So you're not running a kernel anymore? Interesting.
The main reason why I've seen people at $WORK interact with Red Hat or Suse support is to figure out weird kernel behaviors (both actual kernel bugs and also what turned out to be suboptimal configuration).
Also, do you run your desktop Linux in containers? Desktop Linux is certainly a smaller market than server Linux, but it's large enough for Red Hat to service it.
I mean, which other distro out there is comparable for these use cases? Even Ubuntu LTS doesn't really fit the bill. The only other OS with a long-term update commitment is Windows.
This is absurd. Here's just one counterexample: SUSE, which I believe supports releases for up to thirteen years (https://www.suse.com/support/policy/). Is that longer than Red Hat at twelve years (https://access.redhat.com/support/policy/updates/errata)?
As an aside, it’s a pity SLES didn’t gain more traction because it’s a nice alternative to RHEL.
So we purchased Red Hat Enterprise Linux, the exact version NVIDIA mandated for support, and used that, just so we could get a warranty-supported machine. I did learn that a whole lot of "scientific computing" happens on Red Hat or CentOS rather than Debian-based distros, though I'm not sure why.
This was four years ago; perhaps things have changed now.
You don't want your proprietary medical/science software to break across your whole cluster just because you had to upgrade all of Debian to apply a security patch.
Red Hat is prepared to maintain their distribution for far longer than pretty much anyone else outside Microsoft.
Once IU got a site license for RHEL -- one of the first .edu's to do so, IIRC -- they pretty much standardized on that. In addition to all "official" University systems, their RHEL license also covers personal systems for faculty, staff, and even students (my first "real" RHEL boxes were registered to their newfangled Satellite server).
There were issues with cards dropping and becoming unavailable during use (it turned out to be a slight power imbalance from the power supply to the cards). We encountered this within a month of purchase from an NVIDIA-listed Preferred/Authorized Enterprise Reseller. They couldn't fix the issue and escalated it to NVIDIA, who promptly yelled at us for even trying to run Ubuntu.
They also noted that if there were overheat-related damage they would refuse to honor the warranty. On a side note, apparently they were very upset at the Authorized Reseller and soon that reseller was no longer on the NVIDIA list of Authorized Resellers.
On a side note, features like GPU-Direct were not even available on Ubuntu.
The core issue with random K80s dropping out of our cluster at random times was a power imbalance from the power supply to the cards, so technically no.
However: first, for NVIDIA to even diagnose the machine and figure out root causes, they required RHEL (Red Hat Enterprise Linux) before they would help. Second, NVIDIA diagnostic utilities such as GPU-Burn were not supported on Ubuntu, so NVIDIA would not accept results from them while we were still running Ubuntu.
Also, consider the cost. This cluster was $100k+ of CapEx for a fledgling startup. We were not going to risk losing support on that over several hundred dollars of RHEL license costs.
Finally, consider uptime. We purchased this complicated cluster because we were a healthcare ML startup with locality restrictions on the underlying medical data, and no cloud service provider had local nodes available in said country. If the server was down or, heaven forbid, needed to be sent back for servicing, we weren't training models, we were no longer making progress, and we were burning through valuable runway.
The RHEL license was a drop in the bucket.
Fedora on the desktop is also one of the top used distros (and Fedora is the upstream of RHEL).
What does it offer that other distros don't? Meticulous attention from some of the best linux engineers in the world. Tons of out-of-the-box stuff to make using it easy for non-linux experts. Also guarantees of extremely long-term support, so enterprises can rely on having a supported/patched OS for years and years.
To be fair, OpenShift should probably never have been used for these low traffic apps in the first place and I could see it being a good tool for a product that anticipates needing to scale up or down quickly. But for a relatively simple CRUD app with modest traffic I haven't seen any benefit to using OpenShift over a simple Linux server running Apache/Nginx, and it adds a lot of unnecessary complexity.
But, there's a lot of abstraction over a traditional server, so for simple cases where you only need one instance it is definitely overkill. If you don't know OpenShift/Kubernetes then that is a lot to learn.
Be sure to utilize your Support Cases, the OpenShift Users mailing list, your Solutions Architects/Sales Reps to get you connected to people that can help. You're paying for the Support, it's ok to use it. :D Also, if you're struggling with concepts, you can try (https://learn.openshift.com).
And the reason is the extensive (and expensive) paid support. It acts as an insurance for these businesses that are all about risk management.
Nothing else is supported.
IIRC, a place I interned during that time also had some RHEL machines.
These days, I have no idea what those places are running.
From this I take it that most of the folks in here complaining about what is happening to CloudForms don't actually know what it is.
I've had quite a bit of experience with Debian and Debian-derived distributions, a little experience with CentOS 8. I heavily disapprove of CentOS based on this experience and see it as not just restrictive, but also kind of flaky and shaky.
I work for a Fortune 50, and we're one of those companies that made that decision. Having someone to call when things go south is worth every penny. It's not that you can't hire completely competent people to deal with any issue; it's that outsourcing the staffing and expertise needed to do it globally is, IMHO, cheaper. Even if you're a large company, you're still probably only going to have a kernel guy, a filesystem guy, and some Ceph guys who are all familiar with the code base. Red Hat has a phalanx of people in their employ both contributing code to upstream projects and backporting features to their version of Linux. There's a whole room of kernel guys, filesystem guys, etc., etc. Hiring and keeping that bench of talent happy is someone else's problem when you buy Red Hat.
>I've had quite a bit of experience with Debian and Debian-derived distributions, a little experience with CentOS 8. I heavily disapprove of CentOS based on this experience and see it as not just restrictive, but also kind of flaky and shaky.
CentOS 8 may be "flaky and shaky", dunno... I haven't used it. But RHEL 5, 6, and 7 have been rock solid in my experience. If anything, Red Hat was too conservative, and I was delighted to see them adopt a different and much more aggressive patch-and-update lifecycle for RHEL 8. But moving fast does sometimes break things, as the HN mantra goes.
Is that not conceivable?
How much does a qualified person cost you? Maybe a quarter of a million dollars a year or so all-in with salary, stock, benefits, and recruitment costs? How many of them do you need for 24x7 support? Five of them? That's $1.25 million. You can buy a lot of 24-hour support from Red Hat for a lot less than that.
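The back-of-the-envelope math above can be sketched out; all figures here are the commenter's hypothetical numbers (and one illustrative per-system subscription price I made up for contrast), not real pricing:

```python
# Rough break-even sketch: in-house 24x7 expertise vs. a vendor
# support contract. Illustrative numbers only; real costs vary widely.

ALL_IN_COST_PER_ENGINEER = 250_000  # salary, stock, benefits, recruitment
ENGINEERS_FOR_24X7 = 5              # rough minimum to staff an on-call rotation

in_house_cost = ALL_IN_COST_PER_ENGINEER * ENGINEERS_FOR_24X7
print(f"In-house 24x7 team: ${in_house_cost:,}/year")  # $1,250,000/year

# A premium support subscription is typically priced per system; even a
# sizeable fleet can come in well under the in-house figure.
HYPOTHETICAL_SUBSCRIPTION = 5_000   # per system per year (made-up figure)
systems = 100
fleet_cost = HYPOTHETICAL_SUBSCRIPTION * systems
print(f"Support for {systems} systems: ${fleet_cost:,}/year")  # $500,000/year
```

The point isn't the exact numbers but the shape of the comparison: the in-house figure is dominated by headcount needed for coverage, not by how often you actually call for help.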
In a previous life, we paid perhaps $25k to a postgres consultancy that did support. They employed some of the core devs, and their promise was that you could escalate to a core dev within perhaps 24h. My employer hit a very weird performance regression when a database crossed a boundary size and having that deep expertise on call to diagnose, figure out a short-term fix, and help get a long-term fix deployed was invaluable and worth every penny.
The type of people who have millions of dollars of hardware under management pay for support so they can get super deep expertise on hand when they need it.
And when you consider that those 5 people have a bigger vested interest in preventing problems than Red Hat does, their cost to value ratio gets less dire.
Then you are unlikely to hire 5 people with deep expertise.
If you only need a deep expert two weeks out of the year (especially when you can't predict which two weeks), it makes a hell of a lot of sense to pay for a support contract rather than an in-house expert.
You probably drive a car every day, but you're not a certified auto mechanic. You outsource that expertise, for situations when you need it.
Yes I'm sure it sometimes pays off and sometimes doesn't.
But it doesn't deserve a mocking 'somehow.'
Hopefully IBM will open-source Spectrum Scale (GPFS, the General Parallel File System) and integrate it with Red Hat.
I would have thought that the opposite is the case; most engineers work on closed-source software.
Open source has lost. It's time for something new.
As best I can tell, their CloudForms updated Statement of Direction article (behind paywall, sorry) shows that Red Hat is killing off support for non-Red Hat platforms like VMware, AWS, Azure, etc. The justification is to focus on open platforms, which I think means CloudForms will ultimately disappear entirely with Red Hat focusing on OpenShift instead.
We made a strategic decision to focus our management strategy on the future — open, cloud-native environments that promote portability across on-premise, private and public clouds.
CloudForms updated Statement of Direction
However, to me this is still a big blow to users of the platform, since I’m sure most will have at least some VMware to manage. Indeed, when implementing CloudForms at work and talking to Red Hat, they said that their most mature integration in CloudForms is with VMware.
According to the Red Hat article, CloudForms with full platform support is being embedded into IBM Cloud Pak for Multicloud Management and users are encouraged to “migrate your Red Hat CloudForms subscriptions to IBM Cloud Pak for Multicloud Management licenses.” Red Hat’s CloudForms Statement of Direction FAQ article lays out the migration path, which does confirm Red Hat will continue to support existing clients for the remainder of their subscription.
So in short, CloudForms from Red Hat is being crippled and will only support Red Hat products, which really means that users are being forced to buy IBM instead. Of course Red Hat is entitled to change their own products, but this move does seem curious when execs on both sides said they would remain independent. Maybe it’s better than killing CloudForms outright?
We can publicly say that all our products will survive in their current form and continue to grow. We will continue to support all our products; we’re separate entities and we’re going to have separate contracts, and there is no intention to de-emphasise any of our products and we’ll continue to invest heavily in it.
Jim Whitehurst as Red Hat CEO
The Red Hat article linked in TFA is "paywalled" behind a free user registration; you just need a valid email. I've copied the text from that article:
As customers accelerate and scale their open hybrid cloud initiatives, management becomes increasingly essential. Given this requirement, we made a strategic decision to focus our management strategy on the future -- open, cloud-native environments that promote portability across on-premise, private and public clouds. With this strategy update, the roadmap for Red Hat CloudForms is summarized in three key points:

1. To date, Red Hat CloudForms has managed both open source and proprietary technologies.

2. Moving forward, Red Hat CloudForms will continue to directly support open Red Hat platforms like Red Hat OpenStack and Red Hat Virtualization. As of March 1, 2020, Red Hat will no longer offer NEW CloudForms subscriptions for non-Red Hat platforms. For renewal options, please see CloudForms Statement of Direction FAQ.

3. For non-Red Hat technologies (VMware, AWS, Azure, etc.), IBM is embedding CloudForms in their Cloud Pak for Multicloud Management and will be working in the ManageIQ community (upstream for CloudForms). Beginning March 1, 2020, IBM will also be offering an easy migration path from Red Hat CloudForms to this Cloud Pak.

Red Hat will continue to work in CloudForms’ upstream community, ManageIQ, to support our offerings. For management of non-Red Hat products, IBM is also working in the ManageIQ community and embedding CloudForms support in the IBM Cloud Pak for Multicloud Management. For Red Hat CloudForms customers interested in managing heterogeneous environments, IBM is offering an easy migration path from Red Hat CloudForms to this IBM Cloud Pak.
It obviously needs to be changed, but I'm not sure what to change it to.
Submitted title was "IBMs Cannibalization of RedHat Begins". That broke the HN guidelines by editorializing (https://news.ycombinator.com/newsguidelines.html). Accounts that do that eventually lose submission rights on HN, so please don't do that.
Edit: this is better.
edit: I’m not sure why I’m getting downvoted, I just stated a known fact. Did I do something wrong?
Using people's employment information to attack them is particularly bad for HN, since it disincentivizes users to show up and talk about their work, which is likely to be the thing they know the most about. That would make HN strictly worse, so it's a no-brainer not to allow comments like this.
We detached this comment from https://news.ycombinator.com/item?id=21830516 and marked it off-topic.
The most contentious and unfriendly projects in the Linux ecosystem are run by its employees, and the interests of the rest of the community run directly counter to those of Red Hat and, by extension, its employees: software that doesn't need a gaggle of engineers to run versus software you have to buy a subscription for.
If software is so easy to build, run, and debug that you don't need support for it, you won't buy support for it. Every enterprise I've been at in the last 10 years had CentOS for the engineers who knew their way around Linux and RHEL for those who didn't; it was completely up to you which one you chose. The only thing pushing anyone to buy Red Hat was how impossible it was to build your projects without someone holding your hand or without wasting days reading documentation.
At that point the confirmation bias started to settle in pretty quickly, and so it seemed like their choices just got weirder and weirder.
I like to think I understand how software businesses work, but then I think about Yahoo and Red Hat and realize I don't know shit, because I couldn't begin to understand why they have had legs when more seemingly valid concerns flamed out.
A lucky investment into AliBaba for the former, and a good sales team for the latter.
As a result, we finally have a desktop situation that's not Xorg's pile of workarounds on top of a decades-old crusty mess that nobody wants to touch. Finally there's no fucking screen tearing; finally there's proper multi-monitor HiDPI; finally it's possible to actually sandbox GUI apps (unlike X11, having a Wayland socket does not give you the power to mess with the desktop, overlay a fake password prompt, or keylog); finally there are no weird lags (because the protocol is fully async and does not involve the silly middleman broker that Xorg ended up as).
There will be yet another huge mess left for the distros to deal with then things will be back to normal in two or three years with everyone remembering why the Unix philosophy is important and that 'this time it isn't different'.
Fedora prior to 2008 used the classic Red Hat runlevel init (what you're calling Smoorenburg), from the first release in 2003 through Fedora 9. So we have 4.5 years of "SysV init" followed by 2.5 years of Upstart in "SysV init" compatibility mode.
That said, your comment isn't especially responsive to GP; both Upstart and Systemd in Fedora continued to use a bunch of shell scripts to define startup actions. It wasn't until the systemd unit file conversion efforts around 2012 that Fedora largely moved to services defined as data in "INI"-style configuration files.
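For readers who haven't seen the contrast: a pre-2012 Fedora service was a chmod +x shell script in /etc/init.d, while the converted form is pure data in an "INI"-style unit file. A minimal sketch (the service and binary names here are hypothetical, for illustration only):

```ini
# /etc/systemd/system/example.service -- hypothetical unit file
# The service is described declaratively; systemd itself handles
# forking, restarts, and ordering, with no shell logic involved.
[Unit]
Description=Example daemon formerly started by an rc shell script
After=network.target

[Service]
ExecStart=/usr/sbin/example-daemon --no-fork
Restart=on-failure

[Install]
WantedBy=multi-user.target
```

This is the "services defined as data" conversion the comment refers to: the same start/stop/status behavior that an init script implemented imperatively becomes a handful of declarative key=value directives.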
GP's "shell-script-based boot" is not an unfair way to characterize Fedora's boot process prior to (and including the early history of) systemd in Fedora.
Nor was there the sort of exclusive "compatibility mode" that you are envisaging in order to make your argument that mixmastamyk was referring to Upstart. Upstart could employ van Smoorenburg rc scripts, but that was not a mode. Upstart was still event-driven with job files. The van Smoorenburg rc scripts were run via what was just another job file. Were your argument valid for Upstart, it would equally be valid for systemd's mechanism for invoking van Smoorenburg rc scripts, and you would have to say that systemd is shell-script-based boot, which really makes a nonsense of mixmastamyk talking about "going back" to it from systemd.
Quite clearly, going back to Upstart is not what mixmastamyk was talking about, which not least can be seen from xyr comment written, next to yours, 4 days beforehand.