I'm constantly amazed that people still don't understand why Red Hat makes money, and this article won't enlighten you: it doesn't understand it either, and it's full of other incidental mistakes (their description of CentOS, for example, is way off the mark).
Red Hat takes Linux and certifies it against all kinds of government, safety, privacy etc. standards, such as PCI and FIPS and numerous local ones. As a result, if you are a government or one of many large companies, you simply cannot download Linux and YOLO it. You are legally required to use the certified software, so you take the path of least resistance and buy RHEL. That's it. It's why Red Hat makes piles of money, and Canonical or your SaaS does not. Understand what your big customers are required to have and provide it to them, even though it's boring and expensive to do.
This is partly true. It’s worth noting that Canonical/Ubuntu has the same certifications such as DISA/STIG and FIPS, and runs quite a bit in highly sensitive environments, often bundled by ISVs, such as VMware or Pivotal, where I’ve worked.
Red Hat makes more money because they’re an enterprise software company and everyone thinks they’re just an OSS services company. They’re that too, but they are far more sophisticated than these articles claim.
At the core is that Red Hat’s distro is perceived as more of a standard by the ISV community that certified their software against it. Red Hat does a masterful job of playing both sides of openness - open copyright, closed trademark. They’re pure open source… but they fork most of the upstream with their own branded project version, and convince companies to certify against the proprietary trademarked offering. Automated upgrades and most documentation are behind a paywall. But they also have some of the best security responsiveness and extended support (up to 10 years?) in the industry to make up for it.
Beyond Linux, they also meet customers where they're at by watching the competition and customers, and pivoting when they need to. Customers seem to like a competitor's product more? They find the next leapfrog angle and invest years in it, dumping the old. OpenShift was rewritten 3 times before hitting paydirt! That's good business sense. OpenShift trying to take on vSphere, Ceph evolving into an enterprise storage player, etc. These aren't simple moves.
This dance of subtle open/closed aspects is a great business model, and hard to replicate: open in code, but closed in the direction the money must flow. Most business folks don't have the patience to weave this narrative, which takes years of reputation building. And it's required alongside all the other "build a product people want" difficulty.
IBM also loves this model: “WebSphere” certification over “Java EE”, for example. Hence “OpenShift” certification over Kubernetes, or “RHEL” over Linux.
Red Hat "fork[ing] most of the upstream" is not telling the whole story. Yes, Red Hat works with upstream code, does additional QE/QA and certification as needed and make that available to customers on a subscription. Changes, improvements, bugfixes etc are all also sent back upstream. It is a two way street. Not a one way "take from upstream only" model. If that was the model, Red Hat would have not survived to thrive. Upstream First is the default for Red Hat.
Red Hat's code base is a mix of GPLv2/v3, MIT, BSD, Apache - ie, it spans the entire open source licensing spectrum from protective copyleft (GPL) to non-protective (BSD, Apache, MIT). Red Hat is only obligated to release code to GPL code that it ships to customers, but Red Hat goes above and beyond and ships all code that it worked on or otherwise to customers. There is ZERO value in holding back code/fixes etc. Pushing that upstream empowers the whole global community (yes, including competitors). Not doing so will result in fragmentation of the various open source code bases and downstream products.
Trademarks are not the same as copyright. Trademarks tell the receiver who it represents. It is the identity of the entity. So, that is protected and I think that is very fair.
> Red Hat makes more money because they’re an enterprise software
That was my immediate thought on reading the GP. Having been in a few Red Hat meetings recently to discuss possibly using some software of theirs, the meetings were indistinguishable from how a meeting with VMware or Veeam goes. They are a company providing enterprise solutions.
That's not meant negatively, that's just what they are, and if you're in the market for an enterprise solution then you appreciate that these companies exist. You get assurances that what you buy works as advertised, or that the provider will spend the time and effort to make it work as advertised, and you aren't spending your (limited) engineering talent on it when that talent can be spent on areas that aren't easily served by throwing money at them, or where the cost-benefit ratio is far more in your favor.
> OpenShift trying to take on vSphere, Ceph evolving into an enterprise storage player, etc. These aren’t simple moves.
Except it isn't. One of those meetings I had? It was for Red Hat to tell me that they don't really have a good competitor to vSphere anymore, since the RHV platform is being EOL'd and they're getting out of that game. Sure, you can run a VM in OpenShift now, as long as that VM lives in a container and is part of a k8s cluster. The path to success for virtualization with Red Hat is Kubernetes or bust, I guess.
> Sure, you can run a VM in OpenShift now, as long as that VM lives in a container and is part of a k8s cluster. The path to success for virtualization with Red Hat is Kubernetes or bust, I guess.
Yes, that's the plan as far as I can tell. The demand for "vSphere but cheaper" is omnipresent. OpenStack and RHV failed, but OpenShift (and ACM) represents a new opportunity and foundation to take a run at vSphere. It will be hard - folks usually vastly underestimate how much smart engineering has gone into that layer.
Canonical don't have the heft to compete against Red Hat, and especially not IBM + Red Hat. If Wikipedia's figures are right, Red Hat employs more people maintaining the kernel and RHEL packages than Canonical employs in total. This matters a lot for government and other large customers.
The companies that do have the scale needed to compete are the cloud vendors who are rolling out various certified cloud instances, and in the case of Microsoft/Azure, AMD-based confidential computing.
Probably oversimplifying, but I get the feeling that Red Hat's compliance-enabled versions are for enterprise companies who need Linux, and Canonical's compliance-enabled versions are for smaller tech companies (or once-smaller ones) who need compliance.
It’s not about Canonical’s employee count, and frankly it’s not about Linux - that’s a shrinking revenue base. You’re also ignoring that Ubuntu was the leader in cloud for Linux for many years, mostly due to the lagging kernel releases when things like Docker arose, but also smart marketing moves on their part.
It’s about the apps and the datacenters. The hyperscalers are formidable competitors, so it’s all about OpenShift becoming the new multi-cloud operating system across them.
The thing I don't understand about Red Hat is how the support is so bad. I (a customer at a medical/educational site) pay for Red Hat, and on the few occasions I've requested support, that support has never, not once, actually solved my problem. In every single case, I eventually solved it myself after weeks of back-and-forth with one support person after another, consisting solely of "solutions" like "reinstall the OS", "we won't support you unless you update [completely and utterly unrelated minor package]", and "please give us [piece of information that was included in the original ticket]". In several cases I finally found the right answer... in access.redhat.com/solutions... but the support person was never able to find it.
We purchase it because we're required to, but we get absolutely no value for the money.
I had that same experience. Sometimes there is a lot of back and forth with junior support people who don't seem to have a clue what's going on and only follow a script; it gets better when it gets escalated, but it's painfully slow to get there if you are not a marquee customer.
Interesting to hear this. My last job was in a shop that ran RHEL servers and I always felt bad for not using the support that we paid for. Like after working through a problem, thinking, "Oh, we should have called Red Hat." I guess we weren't missing much. We got support instead from upstream IT folks internal to our organization who packaged the applications together with RHEL (and that level of support was similar to what you describe).
but checklists work (you might have heard about surgeons leaving medical tools in patients, and checklists eliminating this problem, seemingly the dumbest simplest technology, yet it's very powerful compared to the default of nothing)
of course the quality of answers matter, but that's on the environment (auditors, regulators, industry best practices, trade groups, client expectations), see the absolute total shambles of the state of IT sec in South Korea, the 1000 year old COBOL systems in finance/insurance, the reservation system in air travel, the sorry state of TLS before LetsEncrypt, SMTP servers (how SPF, DMARC, DKIM and ARC and MTA-STS are all needed to signal that you really prefer TLS, really don't want to be impersonated, you really vouch for forwarded stuff, etc).
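A hedged sketch of that SMTP signaling pile-up, since the acronyms hide how mundane it is: these are all just DNS TXT records (the domain and the record values here are purely illustrative):

    dig +short TXT example.com
    # "v=spf1 mx -all"                                    <- SPF: only our MX may send as us
    dig +short TXT _dmarc.example.com
    # "v=DMARC1; p=reject; rua=mailto:dmarc@example.com"  <- DMARC: reject spoofing, send reports
    dig +short TXT selector1._domainkey.example.com
    # "v=DKIM1; k=rsa; p=MIIBIjANBg..."                   <- DKIM public key (truncated)
    dig +short TXT _mta-sts.example.com
    # "v=STSv1; id=20240101"                              <- MTA-STS: "we really do prefer TLS"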
also there's the BeyondCorp step-up. nobody bothered to really segment the internal network. nowadays the default is zero-trust.
>but checklists work (you might have heard about surgeons leaving medical tools in patients, and checklists eliminating this problem, seemingly the dumbest simplest technology, yet it's very powerful compared to the default of nothing)
These glorified checklists also backfire all the time. With the surgeon it's indeed simple. With software, what I see is that certification and process often lessen quality.
Why, you ask? Well, if you need certification, changes become expensive, because there's a huge tail of documentation and processes to be done. This leads to the perverse incentive to change as little as possible, even though you know it's broken.
Boeing knew that they needed to retrain pilots if they changed the 737 MAX airplane too much. So they hacked the hardware, which led to hacking the software, which led to omissions in the training. All because of the incentive not to change too much, lest they need to retrain all pilots.
The problem of course is that the retraining is an either/or: either your plane is sufficiently 737-like or it is not. If not, the costs are huge, especially since the competition's airplane would not need retraining.
So the risks of these hacks to make it 737-like enough were weighed against huge costs. The costs were basically not being able to do business at all, since these planes would be cost-prohibitive for the cheap domestic airlines that want them.
If the costs were more linear I'm sure this wouldn't have happened. E.g. if you can retrain pilots on only the MCAS system and still not require full recertification.
I work for a company that ships safety-certified software. We, or often our customers, have discovered bugs in that software. We do not fix the bugs because one single small bugfix means re-certifying the entire software, a process that takes months of producing proof of matching the safety case plus many more months of updating and approving accompanying documentation and going through an audit. Everything has to be re-touched.
We just issue an updated defect list to go with the software, and our customers need to work around the bugs. Known unfixed bugs are a fact of life in certified software. Update releases are not. Customers pay a premium for this because they, too, would have to go through the same pain and expense on their side.
This reminds me of how NASA would never have a Shuttle in flight during the transition from December 31 to January 1, because they were unsure as to whether the shuttle's computers could handle the rollover correctly [1]. Sure, they could have updated the software to make sure that the rollover was handled correctly, but that would have required them to recertify the entire OS running the Shuttle, and it was easier to just plan missions such that the Shuttle was never flying on New Year's Eve.
We discovered a critical bug in the QNX 6 kernel in a networking scenario. There was no workaround, since the bug was in the core of their message passing infrastructure - a kernel call that is non-blocking by design, SendPulse(), sometimes blocks.
It took me 9 months talking to them about this problem until I managed to reproduce it on just two nodes and half a page of code, and record kernel logs that clearly showed a race condition.
We received a patched kernel within a few days, and ran with it for a while. The fix was merged into the official release after almost two years.
After that - only Linux, where we can see and fix stuff. No proprietary code and bureaucracy, no "fast, robust and reliable" operating systems.
There is no safety-certified Linux. As far as I know there was no safety-certified QNX 6 either (QOS 1.0 was based on QNX 6.5 SP1, which is not the same as QNX 6 despite the numbers looking eerily similar).
With a safety-certified system, you do not receive a patch because it violates the safety certification. Of course, you can get a patch and use it but then you're responsible for safety-certifying the entire stack including the closed-source vendor code, and best of luck.
This is exactly my point. Also, even if there is a workaround, more often than not the complexity of the mountain of workarounds just creates the next set of certified bugs.
It's a valid point, but the solution is not obvious. It's a trade-off in a big design space. (Of course with software it seems "trivial" to make sure the certification can be done quickly and cheaply. Just automate it! Unfortunately we're not there yet. :/ )
The difference is that those bugs are certified in one and not the other. In many cases, bugs are known to the SW provider but not disclosed to customers unless they happen upon them. With certified SW, bugs are by default disclosed up front.
All true! The checklist is only as good as the system that produced it, and that was basically the largest sentence/paragraph in the comment.
Having a forward-looking industry working in symbiosis with its top-notch, high-functioning regulatory environment is the ideal state. It's rare. (I would say its appreciation is mostly academic today. State capability (or "state capacity") is becoming a buzzword among pundits[0][1][2][3]. But it's not that surprising that these problems seem to be cropping up now, in the Internet era, and not during the Cold War.)
My theory is that having a basic regulatory environment allows for increasing industry-wide quality effectively and quickly. For example, the CDC is bad at counting COVID cases[2], but catching blindness-causing eye drops with "just" ~70 cases countrywide[4] is a good example of the basic safety net.
Similarly, the whole aviation industry's safety process and context was what allowed the MCAS fuckup to come to light fast.
"Fun fact" regarding MCAS. If I recall correctly Boeing argued that the MCAS malfunction was covered by the runaway stabilizer procedure (checklist!), the astronomically bad UX of MCAS itself is what made pilots confused. (Because it activated for 10 sec every minute or something WTF like that, so they had no idea they need to get the runaway stabilizer checklist.) And that's exactly what you're saying. It wasn't sufficiently 737-like.
The workaround is to add UX to these type similarity checks done by the FAA and other regulatory bodies. And, again, exactly as you have mentioned, the cost-benefit discontinuity led to this bad trade off. And while we can't magically smooth over all of them, but at least (and that's my argument) we have good frameworks to start looking at them, detect, recognize, analyze and workaround them. (And checklists are the level 1 of these tools.)
I think there are checklists (good, safe operation) and certifications (paperwork).
I've seen verifications for ISO-something where incredible effort was invested to get the certificate. Looking at the actual technical side… I'm worried.
Certifications influenced by people pursuing their own targets are a way to block competition and innovation. Certifications influenced by people with the right mindset could help a lot (e.g. requiring source access, data access, compatibility).
What I wonder as a user:
What do Red Hat and Canonical actually test when ThinkPads are certified?
The name is very obvious to me; quite a 'unique' Dutch name, and he used to be the classmate of a colleague of mine. While he doesn't mention the name in the article, the Open Core references align with their own offering.
It's interesting to see this in play at different companies. Some of it is "real"...that is those requirements really do apply. In many places, though, it's just inertia. Like, back when the companies had SunOS or HPUX, they made internal standards about regulations, or support contracts, etc, to match. Then when Linux came around, RedHat fit their view of the world well.
It's also interesting to see what happens there in some places over time. The older group that manages on-prem is able to hold the "RedHat only" line. But, other groups decide what goes on it, so everything running there is running in non-Redhat containers or nested VMs. It turns into a bulky, extra hypervisor-like layer.
Particularly for systems that are installed as appliances in customer data-centers, where you as the vendor are both (a) responsible for the appliance as a whole to continue working but (b) have little / restricted access to continue maintaining it, RHEL helps you sign long multi-year support contracts for said appliance with the customer while keeping your maintenance costs to a minimum (since the cost of hiring more support engineers then flying them out is going to be higher than the RHEL license cost).
Long term support and ability to have someone on the line when it breaks.
Not everyone (probably not many, in fact) has experts who will track down a kernel bug or know how some obscure interaction works.
If you have knowledgeable people in-house, RHEL is essentially useless and WILL actively make your life harder compared to just installing Ubuntu or plain Debian, as they like to mess up and segment software that would normally be in the same repo... but if you don't, and "just need software that happens to run on Linux", it's cheaper to pay for a few servers than to hire a top-level Linux expert.
When we used it, many of the "wait, this doesn't work like in vanilla Linux" moments could be traced to some RHEL engineer changing stuff to work for their enterprise customers.
For example, we had an audit fail because too-old SSH encryption modes were enabled: modes that were disabled and deprecated in that version of SSH, but Red Hat brought them back with a patch, presumably to support some of their customers' legacy environments.
They also make an utter mess of OpenSSL, like removing a bunch of EC curves, various FIPS-related changes of dubious security impact, re-adding SHA-1 support, and a bunch of other stuff.
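If you're poking at one of these boxes, a minimal sketch (assuming RHEL 8 or later, where the system-wide crypto-policies mechanism controls this) of how to see what's actually enabled:

    update-crypto-policies --show                       # current policy, e.g. DEFAULT or LEGACY
    sshd -T | grep -Ei 'ciphers|macs|kexalgorithms'     # effective sshd algorithm lists (run as root)
    update-crypto-policies --set FUTURE                 # tighten the system-wide policy, then restart services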
Once you are the default choice that has other network effects, so yes.
For example if you are a software vendor who needs to ship a proprietary kernel module - think something like software to control a lab instrument - only Red Hat has a credible kernel ABI scheme with the required scale (because of the reasons I gave in GP). So every ISV in this situation specifies RHEL and requires that you run your software on a purchased copy of RHEL for support reasons. And usually the ISV software costs many multiples of the cost of a RHEL license, so it's lost in the noise.
> only Red Hat has a credible kernel ABI scheme with the required scale (because of the reasons I gave in GP).
Other distros also don't change the kernel version during the stable phase, so I don't even know what you mean by that. Hell, RHEL kernels have more changes than, say, a Debian kernel over the lifetime of a version.
What Red Hat uniquely does is backport features into older versions of the kernel, so statements like "you need kernel x.y.z for this to work" don't really apply; you need to find out whether they backported that feature or not.
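In practice that means probing for the feature rather than trusting the version string; a rough sketch (CONFIG_BPF_SYSCALL is just an arbitrary example option, not something from the parent):

    uname -r                                            # e.g. 4.18.0-513.el8 -- understates what's actually in the kernel
    grep -q 'CONFIG_BPF_SYSCALL=y' "/boot/config-$(uname -r)" \
      && echo "feature built in despite the 'old' version" \
      || echo "feature not built in"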
We've also experienced some "fun" with how they manage it, and versions in general. For example:
* A driver bug was backported from a newer kernel into RHEL 5 (they meant to backport "performance improvements", not bugs), so we lost VLANs on our upgrade canary
* The same bug was backported to RHEL 6, with the same disastrous result, 6 months later
* One of our servers stopped booting because they changed the version of LVM in a STABLE DISTRO (I think it was RHEL 6), and the new LVM version deprecated/renamed a flag that we used, which made it fail on boot.
So yeah, we're very happy we migrated off it. And in Debian, distro upgrades work too!
The thing is that, with RHEL and a good-enough contract, from a management perspective you can yell at them until it gets fixed. Everywhere else, you're on your own.
Sorry but paywalled content is really not a valid argument.
I agree with you that there is value in kABI stability, but this comment does nothing to convince others. Do you have a better freely accessible resource?
I have seen huge law firms use Red Hat's distribution. There are service level agreements and dedicated support engineers as part of the contract. It gives peace of mind and a sense of security, and large firms always prefer solutions that have enterprise support; you cannot cheap out when a data/security leak can impact the share price.
I think you've nailed it with this comment. A big organisation thinks completely differently to a small organisation or individual. A big organisation will happily spend a lot of money to reduce risk. They like having a number to ring 24 / 7 to support software that they depend on. The money they spend on a support subscription will be a tiny fraction of their profits, and so they don't blink at the cost, even though it seems a lot of money to you and me.
They also make Fedora which is the distro of choice for me and hundreds of thousands of others.
And I use RHEL-likes (Alma/Rocky) in my homelab just because they're similar to Fedora, and Red Hat's documentation is much better than reading random blogs on how to do stuff on Ubuntu/Debian.
One example is companies who don't need strict compliance themselves, but have some customers who do.
If 10% of your customers need this for compliance, then you have to test and certify that your stuff works with something like RHEL - and so that becomes an option, or even the default option for the other 90% of your customers.
Plenty. LTS is a big factor, and a lot of companies have now been burned by CentOS.
LTS is key because migrating OS and recalibrating everything is expensive as fuck for large organizations.
The amount of man-hours needed to migrate everything, retest and retool everything, and retain and update all your internal documentation is stupendous.
Also having real support for a product is worth its weight in gold in a corporate environment because it allows you to offload responsibilities when stuff breaks.
When was the last time you got a feature your business needed included in the upstream kernel? Red Hat customers pay Red Hat engineers to take their business needs and turn them into upstreamed functionality.
Your comment made me want to say "but COS is certified too!" however now I'm unsure - a quick Google certainly doesn't bring anything up. Why have Google not done this, is the demand not there? Or have they and they just don't document it prominently?
(Note we don't have any PCI requirements as we offload those to the likes of stripe etc)
Vanilla options on GCP would not be certified for specific regulatory requirements. You would need -- in the case of government regulations -- to be running Assured Workloads.
As to your question of why: maintaining compliance is expensive, especially if you're accustomed to pushing prod code changes hundreds of times per quarter.
I'm really surprised that the article does not highlight cloud services as a way to monetize open source. This seems like the most obvious way to get revenue from back office open source projects, especially databases. RedHat itself invested in relationships with cloud vendors from early on.
Yes and no, it very much depends on the country and regulation you're talking about. Not every regulation is uncompetitive red tape. I'm quite happy that airline and medical software is heavily regulated, or to give a more recent example that we've been working on, that in-car software is required to have minimum response times to user inputs.
Red Hat early on invested in understanding the needs of the different industry verticals they were trying to sell into and made sure their products could meet things such as compliance and audit requirements of differing sectors. They developed guides on how to deploy their products in a manner which met those requirements and their professional services engineers understood not just Red Hat products but the specific needs of differing sectors. This is how I saw them winning enterprise customers over larger established players in the early 2000s. Even today most opensource businesses don't do this to the extent Red Hat was in the early 2000s.
Moreover, I remember Red Hat sales reps back in the 2000s being incredibly technical and knowledgeable compared to every other vendor's sales reps. Every time I had a specific question regarding technical implementation or even compliance laws in my region, the sales rep knew the answer or forwarded me immediately to someone who did. They were very organized.
This is correct. They still do this. They are the only ones. They go beyond just "hey, I know of a database that could handle that size" to "why are you storing this here? this is PII, this is analytical data, this is business data, here's a proposal for doing this with open source at scale". It's not just "buy our product". They go above and beyond the needs of the product to the needs of the customer, like you said.
Other “open source” companies are always focused on modern startup stuff. Too much fake it till you make it.
They have a unique value proposition in that they are an OS, so they are everywhere by default. It makes sense to have .gov, banking, healthcare, etc. teams who grok whatever the issues are in those verticals.
I was at Red Hat very early on. Red Hat has been remarkably consistent throughout its journey once it graduated from selling boxes of software at book stores (which was a necessary step). Red Hat only took venture money once it was ready to go public and never before. The philosophy of Red Hat was to "make the pie bigger" and still largely is, therefore it supports open source first before anything else. This works because it provides product-market fit at scale and by nature. The thought that "all" Red Hat sells is software support, certifications, and subscriptions to updates is so short sighted and misses the point.
Red Hat created an industry and all of the successful enterprise Linux distros have followed its model to some extent or another (Canonical, SUSE, etc). You don't need "open core" (which is a misnomer if there ever was one) if you continue down a path that creates and maintains communities of software authors out there in the world that will constantly feed value back into the ecosystem.
A virtuous cycle of community-maintained software alongside a thriving and growing business is still achievable, but no one seems to have the guts to do it.
> Red Hat created an industry and all of the successful enterprise Linux distros have followed its model to some extent or another (Canonical, SUSE, etc)
IBM created that industry, by lifting Linux into their supported platforms portfolio.
And the first enterprise Linux ever was SUSE Linux Enterprise for S/390, followed by x86 releases at about the time RH released their first RHEL.
To this day every IBM customer I've ever contracted for has also been a Red Hat customer. That even applies for mainframe shops. Sales reps and support and account managers at IBM and Red Hat had each other on speed dial a decade ago, since it was so common to work a ticket that involved both companies' products, or sell a bundle that would include both companies' software and/or hardware, etc.
My only question is what the hell took IBM so long in making the decision to buy them out. I've never been less surprised by a big acquisition than I was with IBM and Red Hat. The only surprising thing was that nobody outside of that space saw it coming.
They deserve credit: IBM did spend lots of money on Linux around 2000, and that helped legitimize Linux. But I feel that RH has always been the leader in this industry, and RH Linux was well entrenched in the enterprise long before IBM's big push. So IBM helped, but they did not create it, in my view.
I would disagree with RHEL being entrenched by the time of IBM's big push. At the time, you were still more likely to see commercial Unix than Linux at large companies.
Red Hat becoming entrenched wasn't a guarantee in my mind until IBM made that big push. If you could point to a single event that made it likely Linux would take over, without a shadow of a doubt it would be the blessing it received from IBM. There were an enormous number of bigcorps that made initial investment in Linux after that.
IBM did not create that industry, though they certainly helped it along.
Red Hat didn't create the industry single-handed, either. They deserve a lot of credit but Linux succeeded because it was a group effort where there was a chance for a lot of players to make money (and opportunity to displace others).
IBM made a $1bn investment creating the market for Linux-based enterprise infrastructure, "vendor independent" across all their server products (PPC, s390, the Itanic, and, once available, x86-based servers), with all IBM software products (DB2, WebSphere, ...) available "certified" on SLES, RHEL and some Asian localized Linux variants.
This created immense pressure on HP, Dell, DEC, SGI, ... to follow suit,
and on Oracle, SAP, Software AG, anything serious.
This push and commitment by IBM was so relevant because it established Linux as a serious platform with enterprise-level SLAs and mutual commitments for the full stack.
The 4D-chess move was to establish a solid consulting business for "consolidation" on the "single platform",
and it paid off immensely for IBM --- and for most others.
Given IBM's market position around 2000, this was pivotal and prophetic.
I find Red Hat jumped the shark when it started selling its distribution with different licensing options that let you use different numbers of CPU cores.
Per-core licensing is more or less limited to JBoss stuff and the likes. Stuff like RHEL, OpenShift Enterprise, etc. is typically licensed by socket-pair.
Partly it is, but there wasn't much hope to convert those to RHEL. The plan was:
1) to offload the production of the releases (effectively what is now Alma Linux). Because there's so much use of CentOS, it was pretty much a given that somebody would pick up the slack. Most of those companies weren't running stock CentOS anymore, they had their own downstream repositories
2) to give more insight into the making of RHEL minor release to CentOS SIGs and to companies that run CentOS derivatives
Facebook for example does not use the RHEL kernel and was one of the early adopters of CentOS Stream even before CentOS Linux 8 was terminated.
1. Customer support
2. Certifications
3. Vendor support
I don't know another distribution of Linux that provides all these things to the extent Red Hat does. You don't use Red Hat because you want to, you use it because it's the only choice.
Any business could do exactly the same thing, and do it better than Red Hat. But there's really not much reason for a customer to move away from Red Hat if they're already on it, and it would take you a hell of a long time and a dumptruck full of capital to match them on the above.
Open Core is not a business model, just like Open Source isn't a business model. A business model is promising to fix a customer's problem, certify their software, and get vendors to support them. That alone will get you a yearly renewing contract for a couple million that you then upsell over time.
>A business model is promising to fix a customer's problem, certify their software, and get vendors to support them.
This can also be described as assuming liability.
Fact of the matter is people don't want to assume liability themselves. If they can pay to get someone else to assume that liability (which Red Hat is happy to do) they will.
The rest of the FOSS ecosystem on the other hand is all about assuming liability yourself. No warranties expressed nor implied, no support, no nothing. For a business that is simply not palatable.
This is also why businesses happily pay tons of monies to Microsoft, Adobe, IBM, etc. for licenses and support. They assume liability, liability the businesses don't want to assume.
"Assume" is a different word, it means implicitly accept. There is no such thing as an implicit clause in any contract.
Definitely willing to be shown otherwise, but I've never seen any vendor accept liability for there product beyond "you can have your money back, we'll take our software back".
If you're the DoD and you need to run above TS-SCI stuff and somehow Windows isn't a good fit and you don't have any spare fresh-faced kids stealing your docs to impress their virtual friends or leak to newspapers, then you run RHEL on Dell servers flown over UT and Dallas stadiums with all of the S&S fees.
4. We have all the in-house support staff we need and only want to use the self-support license, but since we're a service provider we were forced into the CCSP program where each license costs a lot more.
All service providers have to be in the CCSP program, nothing wrong with that. And we do use it for clients. But it kinda sucks that even our in-house stuff has to cost the same. Discourages use of RHEL in-house.
> I don't know another distribution of Linux that provides all these things to the extent Red Hat does
Oracle would technically be another one. They will even sell you Oracle Hardware (or Oracle cloud) to run your Oracle Linux. Of course there are many good reasons not to want to use Oracle for this.
When you run enterprise product X on Red Hat version Y, that vendor will do less fingerpointing. They may even work with Red Hat in the backend to deal with bugs, etc.
If you go put that on Debian or whatever, they’ll make you reproduce on RHEL.
This becomes a big deal when you have RHEL, VMware, and other vendors in the mix. If you ignore the certification, the app vendor or VMware will just shrug at you.
It's a shame your number one reason is customer support, when at least in my experience it has been woefully lacking.
Despite having a reasonably large contract with them at work, we had multiple support tickets go without response for months, with at least one that I am aware of being left open for almost 2 years before being closed as "won't fix"
Noting that it's only my personal experience and I am sure plenty of others have had the opposite experience, I have found Canonical support to be vastly superior, and as such I would never choose RH if support was the primary thing I was after.
That sounds like an account management issue. You need to know how to manage and work with vendors; more often than not it's not the size of the account that matters but how you engage with them.
To be clear, when I say no response, I mean the tickets got consistent updates that essentially said "we have no update on this issue", not that they simply never put anything on them.
Also, like it or not, by paying for support people have a right to expect that the company being paid will engage with you and the requests you put into their system, not that you will have to find a person to chase down because your tickets are being ignored.
SUSE are probably the closest company to following Red Hat's business model (with a more European/German focus). The other ones are the cloud vendors who are getting around to realising now that without certification there are many government orgs, large companies, the military etc who simply cannot use their cloud services.
Why would they? Managed services were never part of the Red Hat business model. They've added some cloud stuff as of late, but in general you don't go to Red Hat to buy or lease hardware. And I don't think Red Hat ever had any serious interest in selling you any either.
Red Hat worked because it was a time (the height of the dot-com bubble) when open source operating systems were something of a Wild West - rapidly becoming an industry standard, but not well understood by existing businesses. Red Hat provided guidance for that transition.
Confluent was arguably also able to do this for a similar reason - at the time they launched, the Kafka model was becoming vital, but was also novel and difficult to manage in production, and its enterprise services and support were critical for adoption. (They do have a managed cloud now, but their enterprise services were 73% of their revenue in 2021 and 61% in 2022: https://investors.confluent.io/node/8466/html ).
But it's hard to run what is essentially a professional services firm and popularize an open-source product at the same time... unless your open source product is already rapidly becoming paradigm-defining, and you're the people with the unique know-how to position yourself as the saviors for that space.
Side note on Confluent’s products. Thanks for that link - the “Disaggregation of Revenue” table where the 73% and 61% numbers came from shows that Confluent Cloud revenue increased from $94M in 2021 to $211M in 2022, which was more than Confluent Platform’s revenue growth. If those growth rates continue then Confluent Cloud will very soon be the majority of their revenue.
I always take those numbers with a big pinch of salt. There is a wave, in the management world, that wants every business to be "Cloud” (i.e. SaaS), because of the (largely correct) assumption that it enables better rent-seeking and long-term lock-in. So every vendor is officially busy "pivoting to a subscription model" and wants to show growth in that area to investors, so they file everything they can under "cloud" divisions.
Confluent also moved early to the Confluent Community License for the Kafka Connect connectors they developed, to prevent other service providers from being able to offer the service that Confluent did.
I remember back when Microsoft used to be anti open source too. They would release super straw-man-type articles about how proprietary software was actually cheaper with lower TCO, and other poppycock.
Weird to see a startup making such simplistic, wrong statements to promote their business rather than to protect an entrenched business.
You can have CIS, FIPS, and a whole host of other compliance without Red Hat, arguably just as easily.
The reality is containers and container-first OSs are killing RHEL, you no longer need a breadth of OS packages or a fully capable OS. Just docker or podman. The only place RHEL has a stranglehold is with COTS apps, but those are all moving to containers too. I can't think of any widely used container that uses RHEL/CentOS or Fedora for the base image, that was a huge miss on RH's part.
So containers run on air I take it? No, RHEL isn't being killed, but you're not going to see it as base image for your typical GitHub project. In the enterprise though, where security is tight, no one is going to build containers based on Alpine which is maintained by what, 10 people? Their customers are banks, governments, etc. which will happily pay for what they provide.
Alpine as a base image is perfectly fine; it is a tiny base system after all.
The RH base image is also relatively small (compared to a full RHEL system). It is also completely useless, as you cannot dnf install anything while building the image if you do not have access to RHEL repos. If you have to bring in your own packages, you might as well use Alpine.
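For reference, the "bring your own packages on a tiny base" approach described above looks roughly like this minimal sketch (the image tag, package and app path are just illustrative):

    FROM alpine:3.19
    RUN apk add --no-cache python3        # pull what the app needs from Alpine's own repos
    COPY app.py /app/app.py
    CMD ["python3", "/app/app.py"]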
I would also contest your claim about "happily pay". No, the payment comes after the sales folks do their job, which isn't easy either. There's nothing "happily" about it.
Eh, OpenShift is too opinionated to be valuable for the companies that can afford it, and way too expensive for those that might benefit from it vs a cloud based PaaS. OpenShift's biggest problem has always been that it continues to diverge just enough from vanilla k8s to require "OpenShift specific" shit for everything.
I don't see OpenShift getting the same COTS lift as RHEL, so in my mind it's still going to be a net loss for RH long term as that revenue leaves RHEL and doesn't go to OCP.
This isn't true at all; OpenShift is a conformant distro of k8s. You are not required to use any OpenShift-specific shit, but what it adds is very nice.
Also, that "everyone would switch" statement is kinda silly, very broad and sweeping.
That's way easier said than done to convince our clients that "oh yeah, we're using not-RHEL now". We'd have to guarantee some self support to them in that case, in order to alleviate their fears.
They might, but unless it also comes on a supported OS with all the certification checkboxes checked, it won't even get its foot in the door. It helps if that other software is built on as much as possible of the already supported and maintained set of base libraries.
The biggest reason why we don’t see support and services companies achieving hyper-growth is that they are uninvestable. Annual revenues are low and nonrecurring, and margins are tight. VCs are looking for 80%+ gross margins and hockey stick growth.
In other words, Red Hat's model doesn't work for Wall St and VCs looking to make quick easy money riding out the next tech 'unicorn'. Red Hat has survived and thrived for 30 years now, while most modern VC backed tech companies are lucky to make it past 3. The infiltration of Silicon Valley VC vultures was one of the worst things to happen to the tech industry and why we rarely see sustainable startups like Red Hat anymore.
It's almost like building a company slowly, within your means, and focusing on improving the product, is the wrong way to do it these days. Better go with some BS that's flashy, make big claims and promises, and try to get some of that sweet VC money!
I don't think this can be stressed enough. The incentives for VC-backed companies these days don't align with this model, hence the reason it's not more widely in use.
And a very good argument for taxing that money out of their hands and investing it into productive enterprises that spur actual innovation and gainful employment.
1. For me, until proven otherwise, Red Hat has an Open Core model; the CentOS shift is too recent to be meaningful. My company literally panicked about their CentOS servers because they didn't want a rolling distro, and so decided to switch to something else free for low-risk VMs (worse, I'm afraid) and pay for RHEL support for critical applications.
2. There are a shitload of successful support companies for real open source products; the author just chose to ignore them. Maybe because they are mostly smaller and "uninvestable". Just look at the support options advertised for PostgreSQL in North America alone... https://www.postgresql.org/support/professional_support/nort...
They switched to Oracle Linux... and for the love of humankind don't ask me why...
(At least they had the sense to switch to RHEL for critical servers)
Now on the app side we are left with some vendors that don't support installs on OracleLinux (because who would switch to that... again don't ask me why).
You know what works just like the Red Hat model, but yields way better returns? The AWS model. Just take all these tech-tested market-tested open source projects, slap on a new name and bill them by the hour.
I'd be curious to know the revenue breakdown of all their various products as a percentage of the total. I wonder if all those products you reference bring in much $$ compared to the bread and butter ones that they started off with - EC2, S3, etc
> Under Red Hat control, CentOS no longer modeled RHEL production code but was marketed as a beta—a preview of what was coming down the pipe. Eventually, the company sunsetted the project entirely, and as a result, new copycats emerged.
I thought CentOS was downstream of RHEL. Doesn't this describe what CentOS Stream became?
That very brief segment of the article pretty much collapsed a decade of time into two sentences, but yes it appears to be denoting the transition from CentOS [Linux] to CentOS Stream.
Right, but I assume the first sentence refers to CentOS and the second sentence of "sunsetting" referring to the end of CentOS and the start of CentOS Stream. After all, CentOS became a part of Red Hat back in 2014 and stopped in 2021.
In other words, how was downstream CentOS considered a beta or "a preview of what was coming down the pipe"?
I suspect that AmazonLinux (also originally downstream of RHEL) caused Red Hat to reconsider this. I’m not sure why exactly. Perhaps Red Hat/IBM compete with Amazon in some space, or perhaps they felt that Amazon should release their patches.
The CentOS Stream transition was a shift from the "throw it over the wall" model of open source like Android to a more collaborative approach.
Example: As a user of (original) CentOS, you discover a bug. Where do you report that bug? The Red Hat Enterprise Linux bug tracker, because CentOS is just a clone and the community has no real ability to fix things. Then what? You wait until a Red Hat employee gets around to looking at it, and then wait until the next patch release which might be up to 6 months away.
Now repeat with the new model.
As a user of either CentOS Stream or one of the RHEL clones such as Alma Linux, Rocky Linux, Oracle Linux, etc. you discover a bug. Where do you report that bug? The CentOS Stream bug tracker, upstream for RHEL and thus also upstream for all RHEL clones. CentOS Stream is a community project, so the community (including yourself, and including maintainers for Alma / Rocky / Oracle Linux) can actively contribute to and drive the fix onwards to completion and test it afterwards, whereas before it would sit in internal nightly builds for a few months.
Both the community and Red Hat benefit from aligning incentives that way. However, I understand why there was a lot of frustration about the way it was announced and implemented. It could have been handled better.
disclosure: I work for Red Hat, but not on anything related to RHEL or CentOS Stream.
People used CentOS because it was extremely stable. It was the safest distribution to pick for a long-term support version of something, e.g. if you are distributing hardware to customers and you know you will have to maintain support for that hardware for 5-10 years. Companies that want this don't really care about community bugfixing and all that. They're happy with security patches so that their products look good in security audits, and will apply their own patches for other problems if they hurt their own product.
CentOS Stream is the exact opposite of all of that, with no upside at all for this model. It will push numerous companies away from IBM RedHat entirely. Especially as it is widely seen as the start of IBM creating a bigger moat around RHEL, so there is little confidence that future clones of RHEL won't see a similar fate one way or another. And while finding a bug and not being sure who to report it to is bad, finding out that your Linux distro was buried by your maintainers half-way through the lifecycle and won't get any new updates would be disastrous.
It cannot change because of how copyright works. If you mean dropping CentOS Stream, that of course can happen but then all the Alma Linux folks have to do is go back to how CentOS was developed in 2010.
It's also not that Red Hat saw something wrong with CentOS Linux. To put it simply, between 2014 and 2019 Red Hat learnt that the company needed a free RHEL upstream more than they needed a free RHEL downstream. CentOS Stream is all about sharing participation to the CentOS community so that everybody does what they need and they don't have to ask mom Red Hat. Just read https://www.theregister.com/2021/07/09/centos_stream_greg_ku... ("Greg Kurtzer: Red Hat did the right thing and the new scenario is better than the old").
In fact, a lot of effort went towards making it possible to ship RHEL's upstream as a rolling release. Consider that ten years ago you had to ask specifically, months in advance, about updating a package in a RHEL minor release. In 2010, a quality engineer and I spent a few weeks working on grep just to make sure that the bugs were fixed in Fedora before RHEL 6 forked, because otherwise we'd be stuck with those bugs for years. These days I could just open a merge request on CentOS Stream and ask to update to a newer version of grep.
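Roughly what that flow looks like in practice - a sketch, with the package and branch just illustrative, using the public CentOS Stream dist-git under gitlab.com/redhat/centos-stream:

    git clone https://gitlab.com/redhat/centos-stream/rpms/grep.git
    cd grep && git checkout c9s          # CentOS Stream 9 branch
    # edit grep.spec, add patches, bump the release field...
    git commit -am "Update grep / backport fix"
    # push to a personal fork and open a merge request against c9s in the GitLab UI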
Disclaimer: I work for Red Hat, but not as a spokesperson
I may be off, but hey. As I understood it at the time, the reason wasn’t “OMG there are folks not paying us for the bits” but rather “we have this number of engineers, they can work either on CentOS Linux or on Stream” and Stream is more of a benefit to Red Hat as an upstream for RHEL.
What kind of Stockholm Syndrome is required for a user to believe that a transition from "bug-compatible with FIPS-certified stable releases" to "no-promises best-effort ABI stability" is desireable because now we get to do our own tech support?
I'll never in my life understand this. It's like Red Hat took the time they should have used to talk to their customers and spent it writing a Guide To Enjoying Eating Shit to lecture everyone about how great things are now.
I'd respect the company a lot more if they just said "we bought all this CentOS stuff and it turned out to be a money pit so we're throwing it out and making it a part of the beta testing process" instead of this relentless gaslighting that anyone on Earth wanted this.
If you are looking for "Investable Hypergrowth", Open Core is a great choice. If you want to maximize the value the project provides to the universe, sticking to Open Source is the better choice.
I think it would be a great benefit to our country if governments started phasing out the purchase of any proprietary software.
If you want to make software for the government, it must be public domain.
In the long run we'd all have lower taxes, more efficient government, and better software for all to use.
The downside would be some people in the 1% would no longer earn billions in fees from the rest of us for "licensing" software laden with technical debt to the government.
Proprietary software often has a lower total cost of ownership once you account for the comparable loss of productivity and loss of support that comes with open source software.
Do you have a good example of a company that provides support comparable to Microsoft's www.office.com (word, excel, powerpoint, outlook, teams, and one note). Note that this is a web instance which means that the cost of installing and managing software updates on various computers is reduced.
Lower taxes are not guaranteed at all when looking at the total cost of ownership.
With open source software, this often means increasing the staffing (because there's no corresponding org that will do the support with the necessary SLA) which also means that there isn't necessarily more efficient government.
The "better software for all to use" - switching to LibreOffice (or one of its forks) doesn't mean that the open source developers are going to prioritize the issues from Smith County, KS when there's an issue that impacts them.
To go down this path of "lets have government use open source" it is first necessary to get the open source developers to want to make their product competitive for government use compared to the proprietary ones that are out there and for companies to step up and provide similar SLAs that Microsoft and others offer.
That is something that can happen now, but isn't because there isn't enough interest from the FOSS community to do that for that sector. Trying to push the public sector to do it would be painful, expensive, and inefficient and not result in any better software.
> Proprietary often has a lower total cost of ownership
I think you are right in the short run, but I think in the long run TCO is far higher with proprietary software. Because you are sinking money into a lost cause and delaying the cost of a future expensive migration from the closed source to the open source.
People and organizations will always want to mold their software to their domain, and for a while and for the right price, that works. But proprietary companies sweep technical debt under the rug, talent leaves, they start to slow down, and eventually the software slows down the organization considerably. It becomes harder for the organization to hire and train people on the proprietary stack. I recently spent a week working alongside federal employees and almost without exception everyone hated their tech stack, attributed a double digit percentage of their productivity lost to it, and would often build things using open source on their own machines to get work done.
Eventually the org switches to a more open source stack, but the process to extract themselves is costly. I think that cost might be greater than the cost of just starting with open source in the first place.
It seems almost like a natural law that all proprietary software is eventually replaced by open source alternatives.
> Do you have a good example of a company that provides support comparable to Microsoft's www.office.com
Not as a package, no. For each individual item, there are alternatives, but I'll agree there is a missing offering for a fully self-hosted alternative to Google Workspace or Microsoft Office 365. If I'm not mistaken, Collabora Online is trying to be that, but the execution seems to leave something to be desired.
> The "better software for all to use" - switching to LibreOffice (or one of its forks) doesn't mean that the open source developers are going to prioritize the issues from Smith County, KS when there's an issue that impacts them.
I wonder if ChatGPT could influence things here. I wonder if support agents from ChatGPT trained on loads of open source content and code will turn out to be more effective than human support agents for proprietary stacks, and you might be able to build an open Office365 or Google Workspace competitor.
Anyway, just doing some back of the envelope math, I estimate governments in the USA spend $20B - $50B a year on proprietary software[1]. It certainly gives a lot to open source too, but I believe if more of that money was diverted to open source we'd all be better off in the long run. Not a slam dunk case, I know, I'm at the early stages of this line of thought. And of course, even if you could prove that open source would have a much lower TCO, it would be a protracted battle to go up against the folks with a lot of money on the line to deny that truth.
[1] According to this site (https://www.itdashboard.gov/itportfoliodashboard) it looks like total unclassified Federal Government IT spending is ~$84B a year. It looks like Palantir alone gets $1B a year from the government. The latest information I can find on Oracle is that 25% of their revenue comes from US Gov (so say ~$12B per year). Microsoft is perhaps $5 - $25B? Not sure.
Money isn't diverted to open source - it's spent on the staffing (both in-house and consultancies) needed to support the open source software.
Hiring Deloitte or Accenture or another consultancy to provide the support for an open source piece of software (be it directly or "we need to have people with these skills") is still extracting money from the org and I can assure you that the consultancy isn't contributing back to FOSS.
As it is, Microsoft is a known quantity with a support SLA that meets the necessary requirements both for hosting of data and for response time. The cost of Microsoft licenses and support is less than the cost of hiring additional staff or bringing in a consultancy.
A budget item with a fixed number in it is much preferable for budgetary allocation than open ended numbers typically found from consultancies.
The support of open source software is often more costly for the public sector than going with a big tech solution - even when including the cost of licensing.
Until the cost of support for open source solutions comes down to where the proprietary options are, it is often not something that is considered.
Red Hat is an example of a company that is doing it right - offering the support that the public sector desires with a consistent budget line item.
If you can stand up a PostgreSQL instance that is able to compete with the Oracle performance and support (e.g. backup restore failed - call support and have someone on the phone helping now) for less than the cost of the Oracle licensing you will have people beating down your door to sign up.
Same thing with Microsoft.
For the federal government to do this on their own is much more costly. It goes back to the consultancies to implement, migrate, and maintain. Trying to staff up to be competitive to be able to do it in house is also quite costly (note the pay difference between public sector and private and then multiply that difference as an ongoing expense by the number of employees needed).
And so this returns back to the FOSS community to address the needs of the public sector and create the needed companies that are competitive to be able to provide that ongoing support.
ChatGPT will not write an office.com competitor this decade... or next.
I think your assessment of the current situation is probably accurate.
My point is it's suboptimal, and the U.S. Federal Government could change things, as the biggest buyer of tech in the world. If they said "everything has to be public domain", it would be a seismic event. I think we would all be better off if they did this (probably with the exception of <1% who would lose monopoly profits).
Nit: I think you misread my ChatGPT comment. I said ChatGPT could be a competitor to office.com's _support_ agents, not that ChatGPT could build office.com.
> The first company to successfully commercialize open source software at scale
It depends on the definition of scale, or the company. Cygnus started in 1989 and was doing well, it merged with Red Hat around 1999.
> The company generates way more value than it captures and has a low rake out of necessity—none of the software it creates is proprietary.
There are expenses and revenues, and companies like Red Hat rake in revenues - which are more than expenses. So did MySQL and other companies.
> It’s under constant threat of being disrupted or undermined by other open source providers.
Back before cloud got big, I worked for companies that paid for HP/Dell service agreements and RHEL service agreements. If we were a smaller company we may have gone with CentOS or something, but we weren't.
The thing not mentioned is not only that people can and do use things like Debian instead, but that RHEL gained from Linux and Debian as well. So the benefits flow both ways.
> open core model...MongoDB...Elastic
This is another model which can be done. Three years ago, Red Hat sold for more than twice what MongoDB's current market cap is.
> The Red Hat model won’t be repeated.
It's worked for a number of companies in the past, some of whom were purchased by Red Hat. I don't see any reasons why it won't happen again, although it's not every day that a Red Hat is founded. No reasons are given why this will never happen again - all the reasons given were things Red Hat, MySQL, Cygnus etc. had to deal with.
> Support and services models don’t achieve hypergrowth
Red Hat sold for $34 billion. If that is not enough hypergrowth for some VC or founder, then they will go with another model.
> Support and services companies are uninvestable. Annual revenues are low and nonrecurring, and margins are tight. VCs are looking for 80%+ gross margins and hockey stick growth.
I don't know how revenues are nonrecurring - I worked for a number of companies that paid service contracts for RHEL and other Red Hat products alongside our HP/Dell service contracts.
Also - Intel, Netscape, Greylock and Benchmark did invest in Red Hat.
Also, companies like Red Hat and MySQL did have hockey stick-like growth.
For a company following the Red Hat model that needs to raise money from VCs, these thoughts might be applicable. But for companies following the Red Hat model that are not losing money, they don't really have to worry about what VCs think of this.
From my limited experience, one of my older employers paid Red Hat a lot of money for a fairly limited/short engagement with 1-2 of their software engineers and 1 of their solutions engineers, and then paid tens of thousands per year for, I think, 2 production support contracts for their software for years after.
At the same time, I could see how they could have gotten pinched recently if they were unwilling to pay more to retain talent and are now getting squeezed from the other side by companies moving away from their products to reduce costs.
>Getting contracts for development work with large customers can be difficult since most organizations already have preferred vendors for software development. Developing features under a support model gives you no entitlement to the technology. And, if you’re good at what you do, you’ll eventually put yourself out of a job since customers can use it without help
But there are a large number of outsourced software development companies, some of them publicly traded - the Cognizant/Infosys/Wipro types, plus Accenture, plus the new generation of Eastern European ones like EPAM and Grid Dynamics. Government-y ones like CGI. Old-school companies offering consultancies like EY/Deloitte, etc. Hovering in the background is Toptal, which is technically different, but I would imagine the bulk of their revenue comes from enterprise outsourced dev contracts. All of these companies are making this business model work somehow? And there are lots of them too. So it can't be that bad a way to make money (yes, I understand that these are not VC-investable businesses).
One might argue that Oracle also made this model work.
Not sure about their revenue split between divisions, but they finished open-sourcing even the last proprietary parts of their JDK, making OpenJDK the reference implementation, plus they finance 90+% of all its commits. They make some revenue from selling OracleJDK licenses (and even most of those are free if you stay on the latest version), which include support. Many of their customers are on older Java versions though - for example, their Java 8 support runs until something like 2030 - so mostly government and big enterprises.
(For some reason though, there is just endless bullshit around Java, even though the exact same model/license is true for the Linux kernel)
The funny thing about Oracle haters is how they ignore that Oracle and IBM have stood by Java since the early days: they collaborated with Sun on the Network Computer project, were among the first SQL vendors to include the JRE in the database engine, and provide a Swing-based GUI, a JSF framework, a JVM implementation, an IDE, and so on.
They also ignore the fact that Sun also charged money for support, that the JDK was only available as free beer with source on request, the whole Apache drama with the TCK, and that access to legacy versions required a Sun developer account with a license.
Or that they were the ones who introduced the separation between having UNIX for users and an additional license for a UNIX SDK, which is what made people rush to improve GCC, until then a mostly ignored project.
I think that most Oracle haters hate them for their obscenely expensive and often idiosyncratic database product, as well as their legal division being larger than their engineering division.
And speaking of databases, the idea of running custom Java code inside the database seems like an abuse of the database. How many people actually use this feature, and how many of them like it (as opposed to having made a bad technical decision 20 years ago)?
Plenty. This is no different from running C and Perl, which Oracle supported before, what PostgreSQL allows for (whose stable Rust support was being cheered over here just last week), or running .NET on SQL Server.
When performance matters, stored procedures are the way to go, rather than wasting network traffic and CPU cycles on the client. As it happens, these additional runtimes are a great and safer way to extend PL/SQL, PL/pgSQL, T-SQL, ... than writing C extensions.
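To make the network/CPU point concrete, here's a minimal sketch in Python (assuming psycopg2 and a hypothetical orders table; a stored procedure or in-database function just takes the same idea further):

    # Minimal sketch: client-side vs. server-side aggregation.
    # Assumes psycopg2 is installed; the connection string and "orders" table
    # are hypothetical.
    import psycopg2

    conn = psycopg2.connect("dbname=shop user=app password=secret host=db")

    # Client-side: every row crosses the network and the client burns CPU summing.
    with conn.cursor() as cur:
        cur.execute("SELECT amount FROM orders WHERE status = 'paid'")
        total = sum(amount for (amount,) in cur)   # potentially millions of rows

    # Server-side: one row comes back; the work happens where the data lives.
    with conn.cursor() as cur:
        cur.execute("SELECT sum(amount) FROM orders WHERE status = 'paid'")
        (total,) = cur.fetchone()

    conn.close()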
> When performance matters, stored procedures are the way to go, rather than wasting network traffic and CPU cycles on the client. As it happens, these additional runtimes are a great and safer way to extend PL/SQL, PL/pgSQL, T-SQL, ... than writing C extensions.
While I agree in principle, in practice to me it feels like the tooling just isn't there.
Testing and debugging are a bit more difficult than with other languages (e.g. even using breakpoints and stepping through code), things like logging typically don't have pleasant implementations (e.g. log shipping), and the discoverability and even version control of the code also tend to be worse, among other things. That's even before you get into building around the particular runtime that you're provided with, trying to get a grip on dependency/package management, automated CI deploys, rollbacks, monitoring/health checks, local development environments and so on.
My experiences might be the opposite of some folks, but I recall working on a system where most of the logic was implemented in the database packages and something like Java was used just as a glorified templating solution to serve a webapp. The performance was great, but actually working with the codebase was an utter nightmare, so it's not worth it in my opinion. That's like choosing to write a webapp in Assembly just because it's faster.
Do you debug stored procedures?
47% Never
44% Rarely
9% Frequently
Do you have tests in your database?
70% No
15% I don't know
14% Yes
Do you keep your database scripts in a version control system?
54% Yes
37% No
9% I don't know
Do you write comments for the database objects?
49% No
27% Yes, for many types of objects
24% Yes, only for tables
If half the people don't debug their code, three quarters don't test their code, almost half don't use version control, and about half don't bother writing comments of any sort, that's the kind of code that I don't want to be working with, and I would advise others against going for that approach. While we can talk about the fact that these things can be done, the fact that they're not is evidence enough that the community just isn't there yet.
Use databases for what they're good at (including some in-database processing, like reporting), but don't try to do everything in them.
Let's not blame databases for lack of skills or interest in how to use them properly.
Oracle, Microsoft and IBM provide the same kind of IDEs, graphical debugging and source control as any other programming language.
It is this lack of skills that comes up with fads like NoSQL.
By the way, there are similar results related to debugging in other languages, where most can't do better than printf debugging, don't know unit tests, profilers or static analysers.
The outcome of bootcamps or CS degrees without sound engineering practices, while people label themselves "engineer".
> Let's not blame databases for lack of skills or interest in how to use them properly.
We can (and should) explore the causes for it, sure, but that doesn't change the reality that if you join a project that uses a certain approach, it isn't guaranteed to be using the best possible practices, but rather whatever is popular and easy to do in the industry, unless you're very selective about where you work.
> By the way, there are similar results related to debugging in other languages, where most can't do better than printf debugging, don't know unit tests, profilers or static analysers.
I'd say that this is true to some degree and is also a reflection of either poor tooling, or lack of interest. For example, command line debuggers with arcane keybinds will be harder for the average developer to learn and use effectively, than just clicking on the line they want to stop at in a JetBrains (or similar) IDE and just clicking a custom run button that will launch their entire project in debug mode. The same goes for being able to run either your entire test suite or a particular test by just clicking a button in the source file, helpfully shown by a good IDE.
Things get worse when you want to test the integration with an actual data source (like an external API or a database), because in some cases you'll have to mock so much of it that you won't be testing anything remotely close to the real thing, or you'll have to deal with bootstrapping an instance of the API (if you can even self-host the full thing) or a real database, which is great for how faithful your tests are to real-world use cases, but needs a certain amount of configuration and resources to set it all up. Sometimes you can get away with something close enough, like using an in-memory database behind an ORM, but some of those abstractions end up leaky. Even worse if you want to test your integration against cloud services.
Static analysis tools are not without their issues either: something like SonarQube is good theoretically, but will have you struggling against setting up the actual scanner (including mundane stuff like source file encoding) on your CI server, setting up separate configurations if you ever want to run it against your local codebase and will be hard to configure in regards to what should or shouldn't actually trip up the analysis and throw warnings at you, because not all of the recommendations will be even viable for your framework and how it expects code to be written.
> It is this lack of skills that comes up with fads like NoSQL. ... The outcome of bootcamps or CS degrees without sound engineering practices, while people label themselves "engineer".
Does it mean that we shouldn't do these things? No, but it definitely means that we shouldn't just wave our hands around and suggest that it's just an issue of education, when actually trying to use the current technologies is often like banging your head against a wall. Use what works well and causes the least headaches, be open to eventually trying new things as the ecosystem and tooling improve, but don't stray too far from what others do successfully for now either.
From what I've seen, the things that have absolutely improved are schema versioning solutions (and thus, versioned DB migrations) and the ability to run database instances locally for development (in throwaway containers), so that you can test breaking migrations with believable seeded data before it ever needs to run on a shared environment. Codegen still could be better (e.g. generating Java/.NET/... entity code for an ORM in a schema-first approach), but some forwards/backwards engineering has been around for a decent amount of time at least, when dealing with models (for example, in MySQL Workbench, though pgAdmin is still lagging behind there). There are even tools for easier development of APIs, like Hasura, PostGraphile, PostgREST and so on, though the adoption there varies.
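As a concrete example of that throwaway-container workflow, here's a minimal sketch in Python (assuming the testcontainers and sqlalchemy packages with a psycopg2 driver available; the schema, seed data and migration are made up):

    # Minimal sketch: spin up a disposable PostgreSQL instance and check a
    # migration against seeded data. Assumes the "testcontainers" and
    # "sqlalchemy" packages (plus a psycopg2 driver); the schema, data and
    # migration below are hypothetical.
    from testcontainers.postgres import PostgresContainer
    from sqlalchemy import create_engine, text

    with PostgresContainer("postgres:16") as pg:       # container removed on exit
        engine = create_engine(pg.get_connection_url())
        with engine.begin() as conn:
            # the "old" schema plus believable seed data
            conn.execute(text("CREATE TABLE users (id serial PRIMARY KEY, name text)"))
            conn.execute(text("INSERT INTO users (name) VALUES ('alice'), ('bob')"))
            # the breaking migration under test
            conn.execute(text("ALTER TABLE users ADD COLUMN email text NOT NULL DEFAULT ''"))
            # existing rows survived the migration
            assert conn.execute(text("SELECT count(*) FROM users")).scalar_one() == 2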
Edit: oh, another thing that was really good was recent versions of Oracle letting you automatically generate indices for your schema based on how it's actually queried, in case the queries evolve with time but nobody reviews the indices. Except that the automatically generated ones couldn't be removed manually, which felt like bad design. Despite that, more RDBMSes should have that sort of functionality, or the equivalent of SQL Tuning Advisor, that gives you actionable advice. Oracle was a mess to work with for other reasons, though.
For doing in-database processing, even when you use good solutions like DataGrip, things still don't feel as good as when compared to what you can knock together using your typical Java + Spring + Hibernate setup, C# + ASP.NET + EF, or other equivalents. I'd personally use DB views for complex queries, or dynamically generate SQL (like in MyBatis XML) to not get too caught up with ORM idiosyncrasies, but would implement lots of logic in the apps still.
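And on the "dynamically generate SQL" point, here's a hedged sketch of doing that safely in plain Python with psycopg2's sql composition helpers rather than MyBatis XML (the view name and filter columns are made up):

    # Minimal sketch: compose SQL dynamically but safely - identifiers are
    # quoted, values go in as parameters. Assumes psycopg2; "report_view"
    # and the filter columns are hypothetical.
    from psycopg2 import sql

    def build_report_query(columns, filters):
        """Return (query, params) for SELECT <columns> FROM report_view [WHERE ...]."""
        query = sql.SQL("SELECT {cols} FROM {table}").format(
            cols=sql.SQL(", ").join(sql.Identifier(c) for c in columns),
            table=sql.Identifier("report_view"),
        )
        if filters:
            conditions = sql.SQL(" AND ").join(
                sql.SQL("{} = %s").format(sql.Identifier(col)) for col in filters
            )
            query = sql.SQL("{q} WHERE {cond}").format(q=query, cond=conditions)
        return query, list(filters.values())

    # Usage: cur.execute(*build_report_query(["id", "total"], {"region": "EU"}))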
I expected to see Canonical's model mentioned in the article: a combination of proprietary online services with open source products.
We all know the debate around it and the peculiarities of Canonical, but I feel it's worth mentioning.
Is Canonical making money now? Last I heard (though that was a good few years ago) they were still net negative after many years of making Ubuntu successful.
Red Hat employs many Linux developers, including maintainers of GNU software etc., with SuSE and Canonical distant followers (at least it used to be like that).
So I guess that makes them both a very significant contributor to, and a deserving recipient of, financial resources for F/OSS. OTOH, apart from adaptations to ever-changing hardware, you could say that the times when Linux innovations actually served end users are long, long behind us, and anything RH pushed - such as systemd, GNOME, and in particular podman (to take away from Docker, i.e. what RH sees as their market, and theirs alone) - is basically zero-sum software created out of a self-serving interest in selling support contracts by definition. That Linux isn't pro-consumer is also evident from its use in clouds and in a privacy-invading mobile OS.
Linux was nice and gave a direction to a developer community as long as it was chasing commercial Unix/POSIX with strong precedents and blueprints what to implement. Now it's just an IBM division.
People who weren't there in the beginning don't get that Linux only really took off around the UNIX offerings of the early 2000s, when Oracle, Compaq, Intel and IBM money started being pumped into Linux in some form.
The thing with UNIX/POSIX clones is that there is hardly any innovation beyond adding new drivers and kernel features, as POSIX doesn't have anything else to offer.
I find it hilarious that they are saying VCs won't invest in Red Hat-style businesses because their growth is not "hyper".
These same VCs are so fundamentally incompetent at the basics of capitalism - like risk management 101 on interest rates - that they had to get bailed out by the government, with the FDIC randomly deciding to rewrite the entire rule book that the rest of us normies have to live by.
It is no secret that VCs are not interested in productive enterprises which produce a reliable and fair return on investment through revenue-generating activities.
As you point out, they are obviously interested in the concentration of vast amounts of wealth, which requires a completely different set of tactics: cornering markets and establishing monopolies, regulatory capture, pump-and-dumps and other forms of grift and large-scale antisocial behaviour.
It should be completely obvious to any onlooker that they have no interest in innovation or technology :)
I don't see it that way. They were bribed with venture debt and mortgages to ignore the tail risk of uninsured deposits. Probably many VCs would have made the same decision in retrospect.
The model is quite simple and it will work for others as well: provide the software or core for free, and customers will come to you for customized solutions.
The article describes, almost perfectly, the condition of being a productive enterprise within a capitalist free-market in which you are subject to competition.
What is striking is how, for observers within the tech-industry, this is perceived as a peculiar and inexplicable state of affairs, almost as if an alien creature has materialized through a portal into an alternate universe.
Modern Red Hat - and this cannot be stressed enough - is nothing like old Red Hat, despite IBM's fervent insistence after the acquisition that they would remain hands-off.
The Hat killed CentOS because some IBM mid-level manager needed to make quota for a bonus, and they didn't stop to realize just how unsuccessful this tactic was for Oracle/MySQL.
Modern Red Hat is carved into about 40 separate repositories and channels for packages, designed to fleece you right down to the last RPM. Need Ansible? Sure, it comes with Red Hat 9. Need POSIX ACL support from Galaxy? That's another repo you didn't pay for. The same goes for about a hundred other packages.
Red Hat might have had great tech support in the past, but put in a ticket now and you're in IBM levels of "prove it" hell, with SOS report zips and Zoom screenshares, until your entry-level support guy gives up the ghost and calls a developer, who tells you the bug can't/won't be fixed because libraries are hard, or a rebase is impossible, or any number of other boilerplate excuses for why this ancient 4.x-kernel-based distro can't just source its solutions from Fedora like they say they do.
And the flagships? Forget it. Yeah, Quay exists, but try to implement it and you'll find cool stuff like an administrator account defined in configuration but never created. Once it's created in the admin panel, you'll have to go back and disable new user creation so you can control user creation at all, and the whole thing just feels rickety and poorly designed from the get-go. None of this is mentioned overtly in the docs, but is buried in Red Hat's bloated documentation pages. Just buy a Quay-hosted whatever from the Hat instead.
OpenShift is equally appalling, as it's a massive ecosystem Red Hat has done virtually everything to control and nothing to distribute or competently document. It's painfully obvious IBM wants you to hire a consultant to build and deploy it... There is an open-source offering, but it can't easily be rolled into an air-gapped installation, and it suffers the same byzantine and poorly written documentation as Quay. K3s/RKE2 easily beat it for nearly every use case.
Then there's the Docker fiasco: unsurprisingly, neither of these juggernauts wanted to deal with the other's bullshit, so now Docker is broken in all but the most monied instances of IBM hardware. Red Hat became so absolutely resolute about Podman being "everything you can do I can do too" that it has a fucking compose option to translate docker-compose into Podman, because people like compose, but rewriting your app stack to pods when you don't want to be arsed with k8s is sort of a non-starter.
I've worked at Red Hat for the last eight years, since long before IBM had anything to do with us, and I cannot stress enough how much IBM is *nowhere* to be seen, not just in our day-to-day engineering jobs, but pretty much not even periodically. I know nobody at IBM, I have no email addresses of anyone at IBM, nothing we get in our inbox (and we get a lot) ever has anything to do with "IBM" in it, and I have no links or access to any IBM-specific intranets or resources of any kind. They truly have stuck to their promise of not changing Red Hat; our support system is the same, we still respond to bugs, ask for SOS reports, and customer support people do collaborative screenshares, just as we were always doing - there's nothing different about any of that. There have been no top-down changes or decrees or anything I'm aware of that would change how we do any of that stuff.
as far as Podman it's all I use for all of my server environments and I like it a lot better than Docker. It's simpler without all the weird "Dockerish" things getting in the way. Not having a monolithic server hang the whole machine due to a misbehaving container is well worth it.
> as far as Podman it's all I use for all of my server environments and I like it a lot better than Docker. It's simpler without all the weird "Dockerish" things getting in the way. Not having a monolithic server hang the whole machine due to a misbehaving container is well worth it.
podman-compose was feature-lacking and beta even when Podman already had its 3rd major release, while the communication was that "Podman is a drop-in replacement for Docker". It just wasn't a drop-in at all for compose users. People either stopped considering Podman or enabled the socket daemon so that they could still use docker-compose - but that goes against the design ideas Podman had.
It feels as half-baked as Ignition, which also still feels very bare-bones, as if there's no financial interest in enhancing it further.
Well, there are two very different views of reality... and one of them appears to be from a very unhappy customer.
Is it possible that different IBM departments - or IBM regions - are selling Red Hat software/support without much actual involvement of the Red Hat business? Like they might have done in their traditional reseller/consulting/support role?
TBH I actually think it's likely - sales and account execs can probably just tick-box-add RH stuff to their package deals and copy & paste the slide deck - and post-sale, the customer support channel will be the general enterprise support one.
IBM are probably very deliberately channelling all first-level support through there anyway, same as for everything else - so core Red Hat support staff might only get looped in for 2nd- and 3rd-level issues for those accounts, if at all.
It's quite easy to see how that effectively creates two sales/support channels depending on whether you "bought from Red Hat" or "packaged from IBM" - but equally, it's unlikely to be presented or perceived that way until it's too late.
I believe you when you say they've been hands-off - it's surprising, but I can also see how it would make this kind of situation _worse_, because they're still effectively treating Red Hat as an "external" provider they resell.
Not that fully integrating the business would make it better, of course - probably just a different kind of worse (just based on my experience of corporate mergers, not IBM-specific...).
> The Hat killed CentOS because some IBM mid-level manager needed to make quota for a bonus, and they didn't stop to realize just how unsuccessful this tactic was for Oracle/MySQL.
RHEL clones were always going to exist - this is and was obvious, and there are more of them now than ever. That's not the point. The point is that now RHEL clones, including ones from competitors like Oracle, have some way of contributing back to RHEL, whereas before, if you were a CentOS user your only possible course of action was to file a bug in the RHEL bug tracker and wait for a Red Hat employee to prioritize it.
The business benefit is also a technical benefit - the ability to fix bugs and add features is no longer strictly limited by the amount of engineering capacity Red Hat can contribute. Companies like Facebook use CentOS Stream at scale internally and make contributions that ultimately end up in RHEL, users and vendors have a target they can integrate against early without needing gated betas, etc.