
We use Canonical tech on 3000 nodes across 4 different data centers. I would be afraid if LXD were locked into Juju for orchestration. Juju is not something the industry has been using, and it is the source of most of our problems, e.g. performance overhead, scalability issues, and Juju not being mixable with other orchestration tools.

I wish the LXD team would keep it open and stop entangling LXD with the rest of Canonical's tools.

We do not deploy LXD at scale using snap. We build it ourselves and have it as part of our Packer pipeline on Rocky Linux 8.


Is there a link to Rocky Linux's or AlmaLinux's take on this news? I just finalized choosing Rocky Linux for our next-generation HPC clusters!

[edit] this is the link: https://rockylinux.org/news/2023-06-22-press-release/



I think this is common practice nowadays, until regulation resolves it. Even Google Chrome now asks if you changed the default search engine by mistake.


There is a difference between asking if you want to change something, and going and changing that something without the user's consent.


There is a difference between asking, and constantly nagging until the user presses the wrong key by mistake one day.


Sure, but Google didn't nearly get broken up by antitrust regulators for unfair competition with browsers. Microsoft did... in a fair world they'd be facing serious fines or an actual breakup this time as repeat offenders.


Bad logic. If the government (e.g., TA) were not manipulating the market, e.g. by giving these sorts of subsidies in case of failure, then the market could find the right place. Instead of behind-the-scenes talks, a measurable study would be done, and then fabs would find a suitable place to establish themselves. I am not saying Samsung has not done studies on the location; I am just pointing out the fallacy in this dangerous logic.


That's a lot of what-ifs with the assumption of corruption. There is no fundamental reason why companies and the government can't cooperate for mutual benefit.


There’s plenty of precedent for corruption with respect to tax incentives, see, e.g., what Foxconn did in Wisconsin:

https://www.theverge.com/21507966/foxconn-empty-factories-wi...

https://en.wikipedia.org/wiki/Foxconn_in_Wisconsin?wprov=sft...


I'm sorry, am I missing some sort of evidence of corruption for the Samsung plant? Is that a serious accusation?


Evidence of corruption in Samsung itself is not hard to find. Absence of evidence of corruption in this specific case is not evidence of corruption’s absence. I remain skeptical of the terms of the deal either way.


Evidence of corruption in ERCOT is not hard to find.

It would be much harder to find evidence of fair dealing from ERCOT.


Even if it's corrupt, sometimes, although rarely, corruption does serve the public interest, even if by total accident of competence.


Corruption as a least-bad alternative is not much of an endorsement, and likely isn’t a good investment of public expenditures.


Not sure why one needs to give so much information to this app when you can view YouTube using Brave and hope for fewer data leaks.


I think we all know JRE is not just about two people shooting the shit. I listened to it starting in 2016. In the past couple of years, JRE has been all about "let's bring out whatever triggers the other side," no matter what the other side is actually about.

I have lost friends because of this, intelligent and educated ones, and I have no idea how to respond to their messages anymore.

The "it's just regular people talking" argument doesn't apply here.


I agree. He used to have interesting guests, and I felt the conversations were more organic. I quit watching a few years back because I did not enjoy the direction the podcast was going in.


Since the last administration, the Afghan government was not involved in the negotiations with the Taliban, and then came this sudden, uncalculated retreat. Can someone explain the mentality of the people who agree with this? I have not seen a single interview with a military person saying this is what they wanted.


Seems like no one knows what is going on. We gifted the Taliban 700 vehicles, Hummers, drones, and even a few Blackhawk helicopters. How could securing our valuable physical assets not be important enough for our military? It makes us look like incompetent fools. Furthermore, our estimates for when Kabul would be lost to the Taliban were wildly off, going from 90 days to 3 days overnight. It's a total disaster over there, and no matter what anyone says, these things are the responsibility of the current administration and military. A total embarrassment so far. The audacity of the Taliban to even dare try to steal 700 vehicles from the USA. This alone invites aggression from other enemies, and for that reason alone it is grounds for us to make a strong demand to get our stuff back or else risk our re-entry. The Taliban should have a choice between letting us leave with our stuff or having us come back in force.


Fear not, for the vehicles were left without keys, so they were useless to everyone including the Afghan army.


I just saw some videos of teenage girls being dragged off the street to be forcibly married. Not sure how I can unsee it, really.


I agree with these statements, although I do not understand why this is not making any noise internationally. NSO (and therefore the supporting state) seems to be stealing data from EU leaders as well. Does this mean there are many NSO-like companies out there that we do not know about, that each country has one, and that everyone knows about them? Does anyone know what the Swedish NSO is?


No evidence of that yet. Their numbers were on the list, but it hasn't been confirmed yet whether Pegasus was installed.


We are counting on this. Investment in data centers in Sweden has actually risen 20-40 percent above the normal rate.

I know that some purchases of cloud services are on hold for now until this is settled. Data movement is going to be an increasingly important matter. It should have been from the beginning, but here we are.


I am really interested to know why anyone should go with this when Debian or Ubuntu LTS exist. The latter two have not changed their policies in the last decade, and they have a clear upgrade path. CentOS was always a clear choice for device driver support, but I never understood the stability claims.


RHEL and its derivatives are the only Linux distributions that maintain binary compatibility over 10+ years while getting not only security updates but also feature additions when possible.

This is something I don't think the wider community understands, nor do they understand the incredible amount of work it takes to back-port major kernel (etc.) features while maintaining a stable kernel ABI as well as a userspace ABI. Every single other distribution stops providing feature updates within a year or two. So LTS really means "old with a few security updates," while RHEL means it will run efficiently on your hardware (including hardware newer than the distro) with the same binary drivers and third-party packages for the entire lifespan.

In other words, it's more of a Windows model than a traditional Linux distro, in that it allows hardware vendors to ship binary drivers and software vendors to ship binary packages. That is a huge part of why it's the most commonly supported distro for engineering toolchains and a long list of other commercial hardware and software.


That's the value pitch for RHEL, where it's understandable — whether or not you like the enterprise IT model of avoiding upgrades as long as possible, there's a ton of money in it.

I think the gap is the question of how many people there are who want enterprise-style lifetimes but don't actually want support. If you're running servers which don't need a paid support contract, upgrading Debian every 5 years is hardly a significant burden (and balanced by not having to routinely backport packages). There's some benefit to, say, being able to develop skills an employer is looking for but that's not a huge pool of users.

I think this is the reason behind the present situation: CentOS' main appeal was to people who don't want to pay for RHEL, and not enough of those people contribute to support a community. That led to the sale to Red Hat in the first place, and it's unclear to me that anyone else could be more successful with the same pitch.


>who want enterprise-style lifetimes but don't actually want support

But lifetimes are support. Support isn't just, or even primarily, about making a phone call and saying "Help, it's broken." After all, there's nothing keeping someone from taking a snapshot of a codebase and running it unchanged for 10 years. Probably not a good idea if you're connected to the network, but certainly possible.


I was thinking more about _why_ people want that. If you're changing the system regularly, upgrading is valuable because you don't want to spend your time dealing with old software or backporting newer versions. Most of the scenarios where you do want that are long-term commercial operations where you need to deal with requirements for software which isn't provided by the distribution, and in those cases they likely do want a support contract.

I'm not sure there are enough people left who have the “don't touch it for a decade” mindset, aren't working in a business environment where they're buying RHEL/SuSE/Amazon Linux/etc. anyway, and are actually going to contribute to the community. 100% of the people I know who used it were doing so because they needed to support RHEL systems but wanted to avoid paying for licenses on every server and they weren't exactly jumping to help the upstream.

Red Hat bought CentOS in the first place because they were having trouble attracting volunteer labor and I think that any successor needs to have a good story for why the same dynamic won't repeat a second time.


>I was thinking more about _why_ people want that.

I think there are two primary reasons.

1.) A developer wants to develop/test against an x.y release that only changes minimally (major bug and security fixes) for an extended period of time.

2.) The point release model where you can decide when/if to install upgrades is just "how we've always done things" and a lot of people just aren't comfortable with changing that (even if they effectively already have with any software in public clouds or SaaS).

I largely agree with your other points.


Re: point 2, I don't know how different that is for stable distributions — e.g. if you're running Debian stable you're in control of upgrades and you can go years without installing anything other than security updates if you want.

Re: point 1, I'm definitely aware of that need but the only cases I see it are commercial settings where people have contractual obligations for either software they're shipping or for supported software they've licensed. In those cases, I question whether saving the equivalent of one billable hour per year is worth not being able to say “We test on exactly the same OS it runs on”.


Have you worked in banking or aerospace? 10 years of needed support/stability/predictability is nothing unusual. The old if it ain't broke don't fix it mindset prevails.


If they need that for their business are they really not using Red Hat?


Debian offers around three years of support. Ubuntu LTS around five. Both pale in comparison with Red Hat and, by proxy, CentOS.


Debian eLTS offers 7 years: https://wiki.debian.org/LTS/Extended

That said, if you're really in the position of depending on a free project for over five years of security support, you probably will be totally fine with just ignoring the fact it's out of support. Just keep running Debian 6 for a decade, whatever. The code still runs. Pretend you've patched. Sure, there are probably some vulnerabilities, but you haven't actually looked to see if the project you're actually using right now has patched all the known vulnerabilities, have you?

(Spoiler, it hasn't: https://arxiv.org/abs/0904.4058)


Right, but the price one pays is outdated packages.

CentOS 8 was released a few days ago with kernel 4.18, which is not even an LTS kernel and is older than the current Debian stable kernel(!).

If you need to install anything besides the base distro you need elrepo, EPEL, etc., which I'm not sure can be counted as part of the support.


RHEL kernel versions are basically incomparable with vanilla kernel versions. They have hardware support and occasionally entire new features that have been backported from newer kernels in addition to the standard security & stability patches.

This means that RHEL 7 using a "kernel version" from 2014 will still work fine with modern hardware for which drivers didn't even exist in 2014.


That is not a good thing. RH frankenkernels can contain subtle breakage. E.g. the Go and Rust standard libraries needed to add workarounds because certain RH versions implemented copy_file_range in a manner that returns error codes inconsistent with the documented API, since patches were only backported for some filesystems but not for others. These issues never occurred on mainline.

And for the same reasons that the affected users chose a "stable" and "supported" distro they were also unable to upgrade to one where the issue was fixed.
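
To illustrate the workaround pattern (this is not the actual Go/Rust code, just a minimal Python sketch of the same idea): try the kernel fast path, and fall back to an ordinary userspace copy when the syscall reports it isn't, or isn't fully, supported. The helper name and the exact errno list are assumptions for illustration.

    import errno
    import os
    import shutil

    # Hypothetical helper showing the fallback pattern; the errno list is
    # illustrative, not the exact set the Go/Rust stdlibs check for.
    def copy_with_fallback(src_path, dst_path, chunk=8 * 1024 * 1024):
        with open(src_path, "rb") as src, open(dst_path, "wb") as dst:
            try:
                while True:
                    n = os.copy_file_range(src.fileno(), dst.fileno(), chunk)
                    if n == 0:  # reached end of file via the fast path
                        return
            except OSError as e:
                if e.errno not in (errno.ENOSYS, errno.EXDEV,
                                   errno.EOPNOTSUPP, errno.EINVAL):
                    raise
                # The kernel (or a partial backport) rejected the fast path:
                # redo the copy with plain reads and writes.
                src.seek(0)
                dst.seek(0)
                dst.truncate()
                shutil.copyfileobj(src, dst)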


True, but it is a matter of weighing risks. I can't find it now, but I remember a few years ago there was a news story about how an update to Ubuntu had caused hospitals to start rendering MRI scan results differently due to differences in the OpenGL libraries. For those sorts of use cases, stable is the only option.


I think this is a perfect use case for CentOS/RHEL as opposed to Ubuntu when the machine has only one job and nothing shall stand in its way, ie when you expect everything to be bug-for-bug compatible. But I fail to understand why a vendor of an MRI machine charging tens of thousands for installation/support cannot provide a supported RHEL OS which costs $180-350/yr in the cheapest config [1].

[1]: https://www.redhat.com/en/store/linux-platforms


No, it doesn't. I will tell you my experience.

They bought some fancy new computers at work. Our procedures say to use CentOS 7, so we tried it, it ran like shit. Then we reinstalled CentOS 8, same. It worked, but the desktop was extremely slow. After much hair pulling I found the solution: add the elrepo-kernel repository, and update to kernel 5.x

No amount of backporting magic will make an old kernel work like a new kernel.


That's not a price, that's a feature.

If your use case doesn't consider avoiding noncritical behaviour changes for a decade to be a feature, you have other options.


if you need epel, or quicker life cycles then CentOS Stream should be just fine for you as well

People that run CentOS in prod are normally running ERP systems, Databases, LoB Apps, etc, and the only thing we need is the base distro and the the vendor binaries for what ever is service/app that needs to be installed, and probably an old ass version is JDK...

We need every bit of that 10 year life cycle, and we glad that we will probably only have to rebuild these systems 2 or 3 times in our career before we pass the torch to the next unlucky SOB that has to support an application that was written before we were born...

that is CentOS Administration ;)


The patch delta for security fixes must get larger over time as these packages age further and further away from top of tree.

I always wonder how many major vulnerabilities are introduced into these super old distros due to backporting bugs.


It's the opposite. Plenty of subsystems in the RHEL 8.3 kernel are basically on par with upstream 5.5 or so, as almost all the patches are backported. The source code is really the same to a large extent, and therefore security fixes apply straightforwardly.


So, why is RHEL not using the upstream kernel? It would allow them to avoid those issues with rust&go (and probably other software): https://news.ycombinator.com/item?id=25447752


RHEL maintains a stable ABI for drivers.

Plus, there are changes (especially around memory management or scheduling) that are fiendishly hard to do regression testing on, so they are backported more selectively.


Security audit / certification would be my guess.


That's great but what about all the other packages?


The upstream for most other packages generally move much more slowly than the kernel. The fast ones (e.g. X11, systemd, QEMU) are typically rebased every other update or so (meaning, roughly once a year).

It also helps that Red Hat employs a lot of core developers for those fast moving packages. :)


Documented cases don't seem to be common, but what comes to mind is the Debian "weak keys" scandal (2008), and the VLC "libeml" vulnerability (2019)[1]

[1]: https://old.reddit.com/r/netsec/comments/ch86o6/vlc_security...


OpenSSL upstream was almost abandoned during those days.

Software are always gonna have bugs, it's written by humans after all. The important thing is to acknowledge and work towards an ideal outcome.


Xweak keys" didn't have anything to do with backporting fixes to older versions. It was introduced into the version in sid at the time.


The kind of people that are up in arms that CentOS 8 isn't going to be supported through to 2029 are using it because it has outdated packages.


Agreed, the packages in CentOS/RHEL are all super old. The RHEL license structure changes all the time, and depending on which one you get it may or may not include the extended repos.


Honestly, that support is meaningless for some areas I know. In our data center we have hit problems with old packages, and in the end you wind up with a lot of your own packages. I find Debian to be a good base, and you build the rest yourself. Even though I use Fedora on the desktop, I always have the feeling that Debian is the server choice, one that I can extend further.


This is false. Debian provides LTS with a 5-year timespan. [1]

And there is even commercial support for Extended LTS now [2]

Also, it's worth noting that Debian provides security backports for a significantly larger set of packages and CPU architectures than other distributions.

[1] https://wiki.debian.org/DebianReleases

[2] https://wiki.debian.org/LTS/Extended


Do you trust Debian LTS? As much as RHEL? The documentation about Debian LTS always made me think it is not a fully fledged thing. I've always felt like Debian releases reached EOL on their EOL date, not their LTS EOL date.

> Debian LTS is not handled by the Debian security team, but by a separate group of volunteers and companies interested in making it a success.

https://wiki.debian.org/LTS


Good catch, I stand corrected. But the point still holds: five (or even seven) years is still no match for Red Hat.


Do you know something I don't? A few years back, Debian changed their LTS policy to 5 years in response to Ubuntu.

> Debian Long Term Support (LTS) is a project to extend the lifetime of all Debian stable releases to (at least) 5 years. Debian LTS is not handled by the Debian security team, but by a separate group of volunteers and companies interested in making it a success.

> Thus the Debian LTS team takes over security maintenance of the various releases once the Debian Security team stops its work.

https://wiki.debian.org/LTS


Arguably, no one should be running a server that long in 2020.

I would say a better reason is that while both are Linux distributions, they are distinct dialects and ecosystems. It isn't impossible to switch, but for institutions that have complex infrastructure built around the RHEL world, it is a lot of work to convert.


It's not really about running servers for 10 years. It's about having a platform to build a product on that you can support for 10 years. RHEL software gets old over time, but it's still maintained and compatible with what you started on.

Consider an appliance that will be shipped to a literal cave for some mining operation. Do you want to build that on something that you would have to keep refreshing every year, so that every appliance you ship ends up running on a different foundation?


> Consider an appliance that will be shipped to a literal cave

This.

A decade ago I was technical co-founder of a company [0] that made interactive photo booths and I chose CentOS for the OS.

There are some out in the wild still working and powered on 24/7 and not a peep from any of them.

We only ever did a few manual updates early on - after determining that the spotty, expensive cellular wasn't worth wasting on non-security updates - so most of them are running whatever version was out ten years ago.

Rock solid.

[0] https://sooh.com


The "don't touch it if it's not broken" philosophy is fundamentally at odds with an internet-connected machine.

You either need to upgrade or unplug (from the internet).

There are still places out there running Windows NT or even DOS, because they have applications which simply won't run anywhere else or need to talk to ancient hardware that runs over a parallel port or some weird crap like that. These machines will literally run forever, but you wouldn't connect them to the internet. Your hypothetical cave device would be the same.

Upgrading your OS always carries risk. Whether it's a single yum command or copying your entire app to a new OS.

Besides, if you're on CentOS 8 then wouldn't you also be looking at Docker or something? Isn't this a solved problem?


The point is the amount of "touching". Applying security patches to RHEL is still a change, but it's significantly less risky than upgrading a faster-changing system where you might not even get security patches at all for the versions of software you're using unless you switch to a newer major version.


"don't touch it if it's not broken" is not a philosophy, it is a slogan. Some people say it, because it is preferable to them to run old unpatched vulnerable systems rather than spend resources on upgrades. That's just a reality. Some care about up-to-date, some don't. Most people don't really care about security, and some of those don't care even about CYA security theatre. If they did care about security, they wouldn't run unverified software downloaded from the Internet.

Why does Docker have anything to do with this discussion?


I think it is mostly about running servers (standard services that don't change much) for 10 years (and more). You don't need a 10-year LTS distribution for building a product. You take whatever version of an OS distribution you like, secure a local copy should the upstream disappear, vendor it into your product, and never deviate from it.


I'm assuming this involves upgrading the application often as well?


OK I'll bite..

How would I run our SAP ERP apps/databases without "running servers that long"?


Of course there are use cases, but _ideally_, most workloads are staged, deployed, and backed up in such a way that it is a documented, reproducible procedure to tear down an instance of a server, rebuild, and redeploy services.

And while it may be cumbersome or cause some downtime or headaches if that isn't the case, I find that the need to do it once every 1-3 years forces your hand to get your shit together, rather than a once-per-decade affair of praying you migrated all your scripts manually and that everything will work, as your OS admins threaten your life because audit is threatening theirs.


How many simultaneously running machines can you keep updating with this method? If you run non-trivial workloads for hundreds of customers, this becomes a high-maintenance system even with just two machines. It takes ages to upgrade all the applications, then validate that everything works, then actually migrate with no downtime.


Run them on Kubernetes in GCP with a containerised DB2!


> Arguably, no one should be running a server that long in 2020

The main reason people choose Linux is its stability.

It is totally ok to run servers in 2020.

Also, your statement sounds like the default cloud vendor lingo used to push people to adopt proprietary technology with high vendor lock-in.


I think they only meant "that long". In other words, not for 5+ years without an upgrade. Not that you shouldn't run them at all.

Not everything can be highly ephemeral or a managed service, so running servers yourself is totally okay like you said.


You are right, I misunderstood his message.


Honestly, 10 years is a long time for a server. I would be surprised if a server lasted 10 years.

But I agree, I also get the tone of "servers should be cattle and not pets; just kill them and build a new one," which can also be done on bare metal if you're using VMs/containers. It seems like most people forget these cloud servers need to run on bare metal.


Really? We've colocated our servers for the past 18 or so years.

We have about 40. The oldest is around 17 years old. Our newest server is 9 years old. Our average server age is probably around 13 years old.

The most common failure that completely takes them out of commission is a popped capacitor on the motherboard. Never had it happen before the 10 year mark.

Never had memory failure. Have had disk failures, but those are easy to replace. Had one power supply failure, but it was a faulty batch and happened within 2 years of the server's life.


The last time I worked with a ~8 year old server, it used to go through hard drives at a rate of 1 every 2 months. While we could replace them easily and it was RAID so there wasn't any data loss, I personally would've got fed up of replacing HDDs every couple of months.

Also, most of my experience is with rented dedicated servers and they just give me a new one completely so I never really see if they're fully scrapped.


We haven't had any hardware failures - hard drive or otherwise - in the last 5 months.


>Arguably, no one should be running a server that long in 2020.

That's mental.


My read on that is that you should be treating your servers as disposable and ephemeral as possible. Long uptimes mean configuration drift, general snowflakery, difficulties patching, patches getting delayed/not done, and so forth.

Ideally you'd never upgrade your software in the usual way. You'd simply deploy the new version with automated tooling and tear down the older.


I don't get this. If there are many servers, sure. But if it's something that runs on a single box without problem, why on earth should I tear it down?

Also, "running a server for ten years" does not need to mean that it has ten years of uptime. I don't think that's what was meant.


Ten years of uptime seems neither an unreasonable nor unattainable requirement. There's more to computers than mayfly web startups.


If it's not connected to the Internet, okay.

If it is connected to the Internet, then I guess kernel hot-patches need to be applied to avoid security issues.

Were hot kernel patches available ten years ago? I remember some company who did this (for Linux), and it was quite a while back, so it's possible. But I doubt it was mainstream.

I recall long ago that SunOS boxes had to be rebooted for kernel patches.

I don't remember about Solaris.

I'm not familiar with other Unices.


ksplice, and it's an Oracle product


"Ideally" - that's the problem. I have half a dozen long tail side projects running right now on Centos 7, and a few still on Centos 6.

Do you have any idea how much effort it is to change everything over to "treating your servers as disposable"?! It's going to eat up a third (to half) of my "fun time" budget for the foreseeable future!


Exactly, young devs here are completely out of touch with operations. Of course, ideally something like a standard 1TB HDD + 32GB RAM system would be upgraded to a newer OS and app versions by a central tool in 2 hours, but we don't have such FM technology yet.


> automated tooling

the 'usual way' is automated tooling


Rocky is going to be exactly what CentOS was: a free version of RHEL. The reason you would use this vs. Debian or Ubuntu is because you've got systems that need to mirror your production, but you don't want/need enterprise support on them.

When I worked for a hardware vendor we had customers who ran hundreds of CentOS boxes in dev/test alongside their production RHEL boxes. If there was an issue with a driver, we simply asked that they reproduce it on RHEL (which was easy to do). If they had been running Debian or Ubuntu LTS the answer would have been: I suggest you reach out to the development mailing list and seek support there.

Whether you like it or not, most hardware vendors want/require you to have an enterprise support contract on your OS in order to help with driver issues.


Because CentOS on enterprise hardware is way more stable than Debian. I've worked for 6 years as a sysadmin for 300+ servers and we migrated everything from Debian to CentOS and our hardware related issues just went away. Overall we had much less trouble in our systems.


That's probably because lots of enterprise hardware is only ever tested and certified to work with RHEL, and in many cases only provided drivers in an RPM format that's intended to be installed in a RHEL-like environment.


Which is a good reason someone might choose a RHELish distro over Debian, no?


Apart from the long support, RHEL-based distros also give you built-in SELinux support. AppArmor exists, but it's not comparable in features and existing policies.


selinux is provided by Debian as well and it's hardly a popular (or very useful) feature compared to daemon and application sandboxing.


The module itself is provided, yes. The policies are not really integrated into Debian systems. You can adjust them to work, but it's way more work than using ready ones on a RHEL-like system.


> I am really interested to know why should anyone go with this when Debian or Ubuntu LTS exist.

There is a large world of proprietary enterprise software that is tested, developed, and supported solely on RHEL. CentOS (and theoretically, Rocky Linux) can run these applications because they are essentially a reskin of RHEL. Debian and Ubuntu LTS cannot (or at least not in a supported state) because they are not RHEL.


> when Debian or Ubuntu LTS exist

I'm not familiar with Debian; do they have the same infrastructure and documentation quality as RHEL? For example, do they have anything like Koji [1] for easy automated package building?

[1] https://koji.fedoraproject.org/koji/


Not an expert, but I’ve understood CentOS was interesting for people who run RedHat for production, but want something free for non-prod hosts.


This.

We used CentOS as dev environments, and RHEL as production. It gave us the best of both worlds: an unsupported but compatible and stable dev environment we could bring up and throw away as much as we wanted, without licensing BS. And when the devs were happy with it, the move of a project to RHEL was easy and uneventful.

And don't even get me started on the 'free' dev version of RHEL. It's a PITA to use, we've tried. It's also why we've halted our RH purchasing for the moment. Sure, it's caused our RHEL reps no end of consternation and stress but too bad. I've been honest with them, and told them that they are probably lying through their teeth (without knowing it) when they parrot the line that RH will have some magic answer for "expanded" and/or "reduced cost" Streams usage in "1st half of 21". That trust died when RH management axed CentOS8 like they did.


For me it's always been about stability and the long-term support of a 'free' distribution. That has also historically been their bread and butter, which got them wide adoption.

The branding stuff was a plus for the sysadmins and Linux diehards.


If you're using cPanel, you have no choice but to use Red Hat, CentOS, or Fedora.


That is changing; they are going to support CloudLinux and Ubuntu now:

https://blog.cpanel.com/centos-8-end-of-life-announcement/


Glad to see it, but I worry that cPanel on Ubuntu coming in late 2021, coinciding with the end of CentOS 8, is cutting it close.


You may need to develop or test software for RHEL and CentOS/Rocky are bug-for-bug compatible.


I am simply not a fan of Debian/Ubuntu's utilities, with the big one being the package manager (I like yum/dnf way better), but also other things like ufw vs firewalld.


same here


Switching distributions is an uphill fight when the company has used a version of Red Hat for 20 years.

Most developers use Ubuntu on their laptops. Virtualized on Windows, but Ubuntu nonetheless.


RPM. If your systems are built around RPM already, that alone is a good enough reason.


Drivers for hardware that were only ever intended to work with RHEL (EMC's PowerPath drivers, for example).


Debian's package policies can be challenging for rapidly updating packages: https://lwn.net/Articles/835599/


For packages like Kubernetes or big-data packages, one should not use anyone else's builds. I have been finding problems in Cray's modules, and eventually we ended up using our own builds, which we can reproducibly support using Spack.
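
For context (my illustration, not the parent's setup): a Spack recipe is just a small Python class, which is what makes pinning and reproducing your own builds manageable. Below is a minimal hypothetical sketch; the package name, URL, version, checksum, and dependency are placeholders, and the directives assume a reasonably recent Spack.

    # Hypothetical Spack package sketch; everything below is a placeholder,
    # not a real recipe.
    from spack.package import *


    class ExampleTool(CMakePackage):
        """In-house build we can pin and rebuild reproducibly."""

        homepage = "https://example.com/example-tool"
        url = "https://example.com/example-tool-1.2.3.tar.gz"

        # Placeholder checksum; a real recipe pins the actual sha256.
        version("1.2.3", sha256="0000000000000000000000000000000000000000000000000000000000000000")

        depends_on("zlib")

        def cmake_args(self):
            # Pin the build options so every rebuild is identical.
            return ["-DBUILD_SHARED_LIBS=ON"]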


I would say for any piece of software, if the vendor themselves provide a package for your distro, use that, not the distro version.

In fact I’ll go a step further and say Windows and macOS got this right, in that third party developers should do the work to “package” their apps.

It would be insane for Microsoft to maintain packages for every piece of software that ships on Windows, but somehow that’s the situation we’re in with Linux.

Hopefully Snap or Flatpak changes this!


> It would be insane for Microsoft to maintain packages for every piece of software that ships on Windows, but somehow that’s the situation we’re in with Linux.

And this is why installing, e.g., FileZilla on Linux is safe and easy, and doing the same on Windows is neither.


Both Debian and CentOS have the same problem here: you're not going to install Kubernetes from a default repo on any traditional distro.


Debian/Ubuntu are great. However, there are people and companies that prefer the Red Hat way. That's also great.


By not understanding the stability claims, I mean that the other two have been just as stable, as far as I have experienced them, over the last decade.


To system administrators and people managing large fleets of servers "stability" usually means "doesn't change much" rather than "doesn't crash". In that sense, RHEL tended to be more stable than Debian / Ubuntu. Though that may change somewhat with Ubuntu's recent 10 year LTS plans.


Agreed. I've used and advocated for RHEL/CentOS at work since version 5 because it was stable and predictable. That's gone now, and many of my users would prefer Ubuntu anyway because it's what they use on their personal machines. So I'm making plans to move all our compute resources to Ubuntu LTS.


I'm wary of doing that, because in the near future Microsoft is likely to take over Canonical. You don't put all your eggs in one basket. Always plan for escape, always have a plan B. Preferably one not relying on crystal-balling the whims of a for-profit corporation. Rocky Linux, Alpine Linux, Debian, Gentoo, BSD, etc.


>Microsoft is likely to take over Canonical.

This is not the first time I've seen this prediction. What is its basis?


There have been some interesting observations on HN and elsewhere that Canonical for a long time didn't know what it was doing, starting and cancelling projects, but in recent years it has been lowering its interest in the desktop and is more focused on providing cloud server software and foisting its new products and methodologies on its users (snap). Some people see this as an indication that Canonical is positioning itself to be bought for the best price. It makes sense, as Canonical has a large Linux user base but can't make money from it. Microsoft, which is making inroads into the Linux world, is the most likely buyer.


There has been some collaboration between the two; however, MS collaborates with other small companies and that speculation never arises there. I wouldn't bet that they're uninterested in Canonical, but the desire to buy it has always been overstated; there is a much bigger chance that they buy some other specialized distro vendor instead (like Google buying Neverware). Ubuntu is too generic in that sense.


Moving a server from Ubuntu to Debian doesn't seem a very arduous task? I've got a box in a rack that came from the factory with Ubuntu installed, but there are Debian addresses in /etc/apt/sources.list.d

Besides isn't that pretty much the exemplar of FUD?


But it is still a task, isn't it? So, now you move to Ubuntu, and then if things go south, you move to Debian. Or you could move to Debian or another less risky distribution now and likely save some time and energy.

On the other hand, maybe Ubuntu is providing something special that Debian can't do - then it may make sense to go with Ubuntu and maybe even swallow Microsoft's fishing hook if it comes.


Lately it seems to me Debian and Ubuntu have made some strange package decisions. They have morphed into a desktop oriented build with snap packages and auto-updates enabled by default (among other strange decisions). There's a ton of stuff we always end up disabling in the new release because it's super buggy and doesn't work well (I work at a small MSP). I'm not sure who replaced Ian Jackson, but Debian seems rudderless.

Centos was the rational other free choice, not that Red Hat hasn't made other equally strange decisions.

Sometimes I think we'd be better off rolling our own, like Amazon does.


I'm using Debian and I don't use snap packages. I guess it's optional? I just did a minimal install and added the few packages I needed.


Snap is an Ubuntu thing. It's basically a client for the Canonical app store.


Snap is proprietary and has a fairly broken implementation. It seems impressively good at preventing machines from booting and at polluting the filesystem namespace (who wants 100 lines in every df?), and it doesn't seem to handle versioning or garbage collection well.

The server side isn't open, and Canonical repeatedly claims wide industry support... despite not having it.

I recommend that the first step on any Ubuntu system you use be to disable snap. Use something portable like Flatpak, which at least has some support, is open source, and seems to have a healthy ecosystem.


Even on Ubuntu, you don't have to use it. I don't.


That's the only sane way to use Ubuntu. Snap is an abomination.

