Still, that's what I call a server distro ;-)
Debian is a great, stable distro. Even Sid, the unstable rolling branch, sees little breakage overall. Sane defaults, and package maintainers who care about minor rendering bugs in obscure packages and make the effort to patch them. I'm continually impressed by the fit and finish of Debian, especially in the context of how deep the Debian Archive is, with some 68,000 software packages in total.
But the 'stable' release on servers is rather amazing. I (used to) get into RPM hell with CentOS/Red Hat in a matter of weeks. Even Ubuntu will break things more often than not when crossgrading between 'major' versions.
0 - https://etckeeper.branchable.com/
1) No need for EPEL and similar extra repositories. Debian is extremely comprehensive.
2) Rolling updates. We keep some servers in the stable release, some in the testing release. No hiccups on upgrade, yet.
Servers you keep on the stable channel will go through the same kind of upgrades you see between major CentOS versions. I never had a CentOS upgrade go smoothly; we'd usually just reinstall everything from config management. When Debian 8 came out, we opted for a distro upgrade and everything was perfectly smooth. It took longer to go through change management (cycle out of production, test, slowly reapply load) than it took to actually upgrade the servers.
The servers we keep on testing are non-critical and go through much more frequent distro-upgrade cycles. We hit a couple of bumps in the road during the SysV init to systemd transition, but overall I still classify the distro as extremely solid and much better than our previous experience with CentOS.
AFAIK CentOS does not support distro upgrades (e.g. upgrading from CentOS 6 to CentOS 7); we always end up doing a fresh install whenever we want to upgrade. I've tried the UpgradeTool a few times, but it never worked (there was always something blocking the upgrade).
I believe this would be a point for Debian; the upgrade between releases just works :)
However, this also comes with a perk: CentOS releases are generally the "current" release for at least three years, as opposed to two with Debian. Also, major releases are supported with critical security updates for years after they are no longer current. For example, CentOS 6 will still receive patches until November 2020.
Of course, whether this is important to you will vary based upon your needs. For some, the 2 year releases with an easy upgrade path between major releases will be the more important factor.
That being said, for RHEL 6 -> 7 I did a clean reinstall due to the large architectural changes and it was the right decision.
Yes, Debian is comprehensive, but it comes at a price. Red Hat/CentOS are much better at maintaining the (fewer) packages they support and providing timely patches.
Bugs in less critical Debian packages sometimes linger for years.
That being said, Debian is a great server OS and, besides CentOS, the only one I'd recommend to anyone.
I started out with Ubuntu but realised that Debian seems to have more control over its packages and a much more stable ecosystem, which I value. Also, .deb packages always seem to be available for basically any software distributed for Linux, so why make life hard?
I think we all just grok the way our distro does things.
Just nitpicking, nothing else to add.
Likewise a few of our custom system management packages need tweaking when new versions come out, but aside from that no problems with 1200+ servers.
Generally it takes us about 12-18 months to get round to moving things on, and an OS upgrade is a great opportunity to kick us into a 'do it now' state.
So many annoying bugs. Debian QA is much better, but Red Hat takes the cake. Their attention to even little details is remarkable.
Probably right on the basis of the above however.
It really does feel like Debian is one of the few distros around today that will still be here in a hundred years.
* The general tooling is far better. Lately, for example, I played with the tools used to build packages in clean chroots on each distribution: mock for CentOS, cowbuilder for Debian. cowbuilder is a lot more powerful: copy-on-write support, the ability to build several packages in parallel, more switches in the CLI arguments... mock is far more limited, even if it does the job.
Also, I love reportbug. Even if it seems antiquated at first (a bug tracker based on emails, wtf!?), it's really, really convenient: it automatically grabs the information about the buggy package (logs, version, dependency versions...) and lets you simply review it.
* Debian has more packages available, even compared to CentOS+EPEL
* Debian has backports if you really need newer versions of a package
* If you want/need to be on the bleeding edge, sid (or unstable) is a great rolling release.
* Debian does a better job of maintaining stable versions of a given piece of software (I remember, a few years ago, CentOS/RHEL updated the OpenLDAP server to the latest version, which broke the directory on my (fortunately dev) infrastructure due to a change in configuration format; this never happened to me on Debian).
* If you have a relatively clean upstream, packaging is nearly automatic on Debian thanks to all the dh_<helpers>
* At least until CentOS 6, yum was a slow and fragile beast IMHO; apt is more robust and less likely to corrupt its DB on a crash or interruption. It seems to have gotten better with CentOS 7, however, and I never played with dnf.
On the minus side compared to CentOS:
* kickstart is a dream to use compared to preseed, which is quite horrible, especially if you are trying to template it.
* The life span of a CentOS/RHEL release is really long, 10+ years for the latest releases, which can be great for long, complex, slow-moving projects; Debian, in contrast, has more of an EOL-after-~5-years policy.
Package selection and packaging quality. The Debian archive is about 10x the size of CentOS/RHEL's: 60k+ packages vs. about 7k or so, as of a few years ago. I've not done a recent RHEL count.
2) For major and minor releases, they are months behind RH. That may or may not be a problem for you, though.
3) Debian is less beholden to the interests of any one corporate entity (for better or worse), if that kind of thing appeals to you.
4) Debian has a huge number of packages, much more than CentOS + EPEL.
5) On-the-fly upgrades to the next release. Though in a professional setting where you have automated provisioning, a reinstall isn't that much of a burden anyway.
That being said, both CentOS and Debian are very solid distros, you're not going wrong if you choose one of them.
They don't seem to see how this negates the advantage of using a "well tested, enterprise grade" distro!
I much prefer Debian (for servers and development machines) as a good intersection between "up to date" and "stable".
The thing I find strange is that the people I'm talking about will happily run code from random third-party repos (providing PHP 7.0 or similar) and copy/paste repo key fingerprints from the web without blinking, while steadfastly refusing to use an OS that includes and tests such packages in its official repos "because security/quality/enterprise".
At my new job, we recently had that debate -- and I was quickly overruled for recommending CentOS/RHEL. My employer is a Java shop, and I can almost always find RPM and deb packages for most of the stuff we need.
For the record: I have no idea how the package selection compares between CentOS and Debian - I've been firmly in the Debian camp since forever, and stay there mostly for other reasons, of which laziness is the most important one.
And also, what a huge effort it is to translate pages to different languages while (I think) almost everyone wants to read the English version.
Not only do you have links to the different languages at the bottom of the site, you can even find a link to a document describing how the standard works and how to change the settings in a bunch of different browsers: https://www.debian.org/intro/cn
Wherever I travel, I get search results served in that language even though I have my browser consistently configured to want English.
If I set preferred language to Czech I get search form in Czech. After changing settings to English I get English search form (with link to the Czech version underneath the search bar).
Just for fun I have also tried Dutch and it seems to work as well (there are some foreign words, but I don't speak Dutch so it might as well be Klingon).
Tested in Firefox and in newly opened private Window, so logon / previous sessions wouldn't affect the results.
If I prefer to read English texts in their original form (because I understand them perfectly well), but also to read texts originally written in my native language in that language, there is no single setting that expresses this.
It feels weird saying this, but it reeks of a very anglo-american worldview to have a single language preference.
> It feels weird saying this, but it reeks of a very anglo-american worldview to have a single language preference.
This is a browser-side problem, so perhaps the browser authors have an anglo-american worldview? You could file a bug/feature request with your favourite open-source browser. Alternatively, an extension that selectively sends per-website "Accept-Language" headers would work - I haven't checked whether one exists, but it could be written in a weekend (I'm not sure whether Firefox's new extension framework allows messing with the headers).
1. Some sites work around this by letting you choose a language that overrides the one requested by the browser, persisting it in a cookie or your profile. Defaulting to the language requested by the browser is a sane default compared to the other alternatives (like Geo-IP look-ups: "Oh, I see you are visiting Germany. I will assume your browser requested English in error and will serve you the German version of the article instead").
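That "sane default" is simple to build server-side. Here is a minimal, hedged sketch (stdlib only; `pick_language` is a hypothetical helper name, not any framework's API) that parses Accept-Language with its q-values and falls back when nothing matches:

```python
def pick_language(accept_language, available, default="en"):
    """Pick the best available translation for an Accept-Language header.

    `available` is the set of language tags the site actually has.
    This is a simplified sketch, not a full RFC-grade parser.
    """
    prefs = []
    for part in accept_language.split(","):
        piece = part.strip()
        if not piece:
            continue
        if ";q=" in piece:
            lang, qval = piece.split(";q=", 1)
            try:
                q = float(qval)
            except ValueError:
                q = 0.0
        else:
            lang, q = piece, 1.0
        prefs.append((lang.strip().lower(), q))
    # Highest q-value wins; ties keep the client's original order (stable sort).
    prefs.sort(key=lambda p: -p[1])
    for lang, _ in prefs:
        if lang in available:
            return lang
        base = lang.split("-")[0]  # "de-DE" falls back to plain "de"
        if base in available:
            return base
    return default
```

For example, `pick_language("cs,en;q=0.7", {"en", "de"})` returns `"en"`: Czech is preferred but unavailable, so the lower-weighted English wins over the Geo-IP guess.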
No, it sounds like a protocol problem to me. How do I state in an HTTP request: "I speak English and German; if you can serve both and the article was written in one and translated to the other, then give me the original please".
That was a long sentence, but it wouldn't have been particularly hard to define well and put into the protocol when it was originally drafted. It's a bit late now, obviously.
For example, one could define provenance of a translation much like NTP defines strata. Accept-Language could then have taken this into account.
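As a toy illustration of that strata idea (entirely hypothetical; no such field exists in HTTP today): if each language variant advertised a "translation stratum", with 0 meaning the original text, a client could prefer originals among the languages it reads:

```python
def pick_original_first(accepted, variants):
    """Among the variants the client can read, prefer the lowest translation
    stratum: 0 = original text, 1 = translated from the original, and so on.

    `variants` maps language tag -> stratum. Returns None if no overlap.
    """
    candidates = sorted(
        (stratum, lang) for lang, stratum in variants.items() if lang in accepted
    )
    return candidates[0][1] if candidates else None
```

So a reader of both English and German asking for `{"de": 0, "en": 1}` (a German original with an English translation) would get the German original, while an English-only reader would still get the translation.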
That is a problem with the quality of the translation/translators - not the protocol. The protocol is neutral on languages and considers all versions as equivalent, it only expects that you would pick one. The specific behaviour you are expecting can be trivially built into the server side without modifying the current protocol.
While third-party extensions allow switching Accept-Language, it is not part of the standard UI of modern browsers. For people like me, an advanced setting for Accept-Language per top-level domain would reduce to a minimum the number of times I have to hunt for the language switcher on a web page.
There's many sites that force the local language down your throat and it's sooooo annoying...
Citation needed. I'm pretty certain there are many proponents for a rather tight mapping from URL to content. Just imagine how hard it is for a search engine to index websites that deliver content based on unpredictable (anything else than URL) input variables.
It's no different (arguably a lot better) than cookies.
In my opinion, there is very limited applicability of Accept-Language. Just because this header exists does not mean you need to make use of it at the slightest perceived opportunity (If all you have is a hammer...).
As an Italian I have to agree with you; for us old timers it wasn't easy as English didn't become the official 2nd language over here until some decades ago. Both my parents and most relatives as an example knew some French but next to no English, so I had to learn it for myself.
Luckily, newer generations are much better integrated with the rest of the world language-wise; should you visit Italy again and need directions, asking people in their 20s or 30s will probably be more effective.
Debian unstable is permanently named "sid" (the kid who breaks toys), and "stretch" (Debian 9) is the rubber octopus from Toy Story 3.
Another curious fact about Debian is that there are two logos: the open-use one and the restricted-use one. The restricted-use one is less known and includes a bottle.
see "old logo" at https://www.debian.org/vote/1999
It is indeed very rarely used at all these days. Even the "official" CD media (which used to be the only place it was really seen in the past 10 years) don't really exist anymore.
> Some packages/developers don't use experimental, they just put the new versions in unstable
An update to network-manager broke DNS for everyone; the broken package stayed in the unstable repository for about 5 hours, if I recall correctly. This broken package never got even close to reaching testing.
Of course, the same two-week wait time also applies to any regular bugfix, unless it's manually migrated to testing. Thus, one bug that affected unstable but not testing is not proof at all. Personally, I'm happier with unstable than testing on my laptop and work desktop.
That is a valid example of what can make testing unpleasant, but it still supports my point, as this is a lot less common than sid breakages; also, a package is removed from testing automatically a few days after any RC bug is filed, which prevents new testing installations of the broken package.
> Of course, the same two-week wait time also applies to any regular bugfix, unless it's manually migrated to testing. Thus, one bug that affected unstable but not testing is not proof at all. Personally, I'm happier with unstable than testing on my laptop and work desktop.
The default wait time is 5 days (medium urgency); high-urgency fixes (which fix RC bugs in testing) wait 2 days before being migrated to testing. But you don't need to believe me, see for yourself:
Release critical bugs affecting sid (excluding packages that are already removed from testing, so we don't see any obsolete package that is to be removed from unstable too): 539
Release critical bugs affecting testing: 453
Ubuntu developer here. No, the version number is not needed before the release. All our infrastructure only uses the codename until release, since the final version name is not known for certain in advance.
For example, Ubuntu 6.06 LTS (Dapper Drake) was released late.
If you look at what semantic -vs- date-based versioning communicates to end-users of normal user-facing apps:
- how recently it's been released
- not much else
The above is useful if you are interested in being up-to-date on a piece of software you know and trust, or in trying out new / cutting edge software.
Semantic versioning, on the other hand, gives:
- a (very) rough idea of how long it's been in development (the number of versions it's gone through)
- a (similarly, very) rough idea of how active the development effort is in terms of testing/maintenance/bug-fixes/patches (i.e. major version churn vs minor version churn vs patch version churn)
- the likely relative stability of the current release (e.g. 2.0.0 might be less stable than 2.0.4)
All of the above may be rough and may give false impressions sometimes, but it is still extremely useful as an indicator for any relatively technical user evaluating whether or not to use or upgrade a piece of software.
Less technical users are less likely to be as interested in version numbers full stop.
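The bullets above can be boiled down to a deliberately crude heuristic. This is a hypothetical sketch of how a technical user might "read" a MAJOR.MINOR.PATCH string, not a standard tool or rule:

```python
def rough_stability_hint(version):
    """Read a MAJOR.MINOR.PATCH string the way the bullets above suggest:
    a higher patch number hints at a more battle-tested release line.
    Rough by design; it will mislead for projects that misuse semver."""
    major, minor, patch = (int(x) for x in version.split("."))
    if major == 0:
        return "pre-1.0: expect breaking changes"
    if patch == 0:
        return "fresh major/minor: may be less battle-tested"
    return "patched release: likely more stable"
```

So "2.0.4" reads as a more settled pick than "2.0.0", which is exactly the signal a date-based version like "17.04" cannot carry.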
See here: https://insights.ubuntu.com/2017/04/05/ubuntu-on-aws-gets-se...
Meanwhile, I've been happy on CentOS 7 for many years now.
I ask myself that every day; it certainly seems like it's getting older ;-)
On a more serious note: it's great that Linux is now sufficiently mature that even a 3-year-old release is perfectly adequate and can do pretty much whatever you need. Maybe I'm misremembering, but it wasn't always like that.
Jessie will still be supported until June 2018 by the official security team, and until 2020 by the Debian LTS project.