Hacker News
Debian 9.2 released (debian.org)
234 points by rayascott on Oct 9, 2017 | 117 comments



When I run linux on servers, I always choose Debian. It is the best distro IMO. Glad to see that the project is still going as strong as ever!


As someone who has been choosing CentOS for the same purpose, what would be some reasons for me to switch off the top of your head?


I have a Debian install that I set up in 2001. It's clean, fully up to date, and has never been reinstalled. It's not necessary. It ran apache1 before, it ran the old exim, it had all kinds of old packages that were gradually updated over the years. True, something like 'etckeeper' is a great boon these days.

Still, that's what I call a server distro ;-)


And yet, here I am running it on my laptop! :D

Debian is a great, stable distro. Even Sid, the unstable, rolling branch, sees little breakage overall. Sane defaults, and package maintainers who care about minor rendering bugs in obscure packages and make the effort to patch said bugs. I'm continually impressed by the fit & finish of Debian, especially in the context of how deep the Debian Archive is, with 68 thousand software packages in total.


Oh sure I also run it everywhere else -- sid on my workstation, 'testing' on my home server and laptop etc. It does cover pretty much all the bases!

But the 'stable' server is rather amazing. I (used to) get into rpm hell with centos/redhat in a matter of weeks. Even ubuntu will break stuff more often than not when upgrading across 'major' versions.


I hadn't heard of etckeeper[0] before, thanks for the tip!

0 - https://etckeeper.branchable.com/


Same here! Since 2005/06, as I remember. And apt-get pushes new things to the machine every time, since I'm on the 'testing' branch.


That's impressive.


I switched from CentOS to Debian on servers three years ago. I'd give two reasons:

1) No need for EPEL and similar extra repositories. Debian is extremely comprehensive.

2) Rolling updates. We keep some servers in the stable release, some in the testing release. No hiccups on upgrade, yet.

Servers you keep on the stable channel will go through the same kind of upgrades you see on major CentOS versions. I never had a CentOS upgrade go smoothly; we'd usually just reinstall everything from config management. When Debian 8 came out, we opted for a distro-upgrade and everything was perfectly smooth. It took longer to go through change management (cycle out of production, test, slowly reapply load) than it took to actually upgrade the servers.

The servers we keep on testing are non-critical, and go through much more frequent distro-upgrade cycles. We had a couple of bumps in the road during the sysv init to systemd transition, but overall I still classify the distro as extremely solid and much better than our previous experience with CentOS.


> I never had a CentOS upgrade go smoothly; We'd usually just reinstall everything from config management

AFAIK CentOS does not support distro upgrades (e.g. upgrading from CentOS 6 to CentOS 7)[1]. We always end up making a fresh install whenever we want to upgrade; I've tried to use the UpgradeTool a few times, but it didn't work (there was always something blocking the upgrade).

I believe this would be a point for Debian, where the upgrade between releases just works :)

[1]https://wiki.centos.org/TipsAndTricks/CentOSUpgradeTool
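For comparison, the Debian release upgrade alluded to above is mostly a matter of pointing sources.list at the new codename and letting apt do the rest; a minimal sketch (codenames here are examples):

```shell
# /etc/apt/sources.list - switch the suite from the old codename to the new one
deb http://deb.debian.org/debian stretch main
deb http://security.debian.org/ stretch/updates main
# then: apt-get update && apt-get upgrade && apt-get dist-upgrade
```

Real upgrades involve reading the release notes first; this is only the shape of the mechanism.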


Yeah, in-place upgrades between CentOS releases are not really supported, just as they are not really supported with RHEL.

However, this also comes with a perk. CentOS releases are generally the "current" release for at least three years, as opposed to two with Debian. Also, major releases are supported with critical security updates for years after they are no longer current. For example, CentOS 6 will still receive patches until November 2020.

Of course, whether this is important to you will vary based upon your needs. For some, the 2 year releases with an easy upgrade path between major releases will be the more important factor.


Having dealt with this in the past: RHEL 6 to 7 is supported, but only for the server edition: https://access.redhat.com/solutions/637583. At a certain point I'm not sure what Debian does so much better, but that's just because I haven't needed to investigate it to the same depth as RHEL.


It will probably work for 7 -> 8, the upgrade tool is a recent addition. It already worked in RHEL for 6 -> 7.

That being said, for RHEL 6 -> 7 I did a clean reinstall due to the large architectural changes and it was the right decision.


> Debian is extremely comprehensive

Yes, Debian is comprehensive, but it comes at a price. Red Hat/CentOS are much better at maintaining the (fewer) packages they support and providing timely patches.

Bugs in less critical Debian packages sometimes linger for years.

That being said, Debian is a great server OS and the only one except CentOS that I'd recommend to anyone.


Can't really come up with any reason; if you're happy with CentOS, you should probably keep using CentOS.

I just started out with Ubuntu but realised that Debian seems to have more control over their packages and a much more stable ecosystem, which I value. Also, .deb packages always seem to be available for basically any software distributed for Linux, so why make life hard?


Good advice. I run OpenSUSE and couldn't be happier with it in regards to both desktop and server.

I think we all just grok the way our own distro does things.


Ubuntu is debian based, and works on .deb packages.

Just nitpicking, nothing else to add.


Yeah, if it didn't get wrecked every other upgrade, I'd probably still be on Ubuntu. But talking to the Ubuntu Washington people this past weekend, it's still an ongoing problem. Comparatively, here I am running Stretch when I started with Wheezy years ago!


On the server the only upgrade issues I've had are things like apache configs breaking when moving from 2.2 to 2.4, or php (from 5 to 7), which happened when upgrading 14.04->16.04. Not sure how redhat/centos/etc would have prevented that.

Likewise a few of our custom system management packages need tweaking when new versions come out, but aside from that no problems with 1200+ servers.


Uhh, I'm talking about desktop Ubuntu, not server installs where you don't have pulse, xorg, a DE and a shedload of other software. Getting a server upgrade right shouldn't be too hard, but when you add on another thousand packages or so, it makes the upgrade process much more likely to have breakage.


I've had relatively smooth upgrades on 14.04+. Anything 12 and before has always been a bit tricky. 14 -> 16 I'd rather just reinstall since it leaves all the old upstart stuff laying around and can be confusing to troubleshoot.


It prevents PHP upgrade issues by not forcing that change alongside the distro upgrade: it ships both php5 and php7 packages.


True, but it needed to happen (and I think apache 2.4 was forced).

Generally it takes us about 12-18 months to get round to moving things on, and an OS upgrade is a great opportunity to kick us into a 'do it now' state.


Yea, on my desktop and laptops, I've completely switched over from Ubuntu to plain vanilla Debian and have been extremely pleased.


This is true! I wish to add that in my experience (years of use of both), I have found that Ubuntu is significantly lower quality and less stable than Debian. In other words, they squander some of the reliability they could inherit from Debian.


As someone who used Debian, Ubuntu and CentOS in (serious) production I agree 100%.

So many annoying bugs. Debian QA is much better, but Red Hat takes the cake. Their attention to even little details is remarkable.


As a regular user of both and a prolific runner of CentOS in production, I would err towards Debian these days. It just feels better to me. Things like hanging updates applying SELinux policies, the whole Python mess and the lag between RHEL and CentOS patches (this has improved I will say) etc just sort of put me off CentOS a bit. It's also a lot lighter.


Python is nicer in CentOS though. Software Collections are great:

https://www.softwarecollections.org/en/scls/?search=python


We just use distro provided python versions on debian and virtualenvs. Don't touch anything else!

Probably right on the basis of the above however.
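A minimal sketch of that workflow (the path is an example; --without-pip is only there so the snippet works even where ensurepip is unavailable, normally you'd omit it and use the env's own pip):

```shell
# Create an isolated environment that leaves the distro's Python alone
python3 -m venv --without-pip /tmp/myapp-env

# Run code with the environment's interpreter instead of the system one
/tmp/myapp-env/bin/python --version
```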


Batteries are included with Debian. I seem to always have to install things from EPEL (https://access.redhat.com/solutions/3358), which has a lower standard for support. Debian seems to have everything I need in main, and I rarely have to go to non-free or contrib. This helps me trust my system more, both for security and stability.


When I switched, around Red Hat 6 (not RHEL 6), the difference in quality of uninstall scripts and dependency management was stark: you can install and uninstall Debian packages without lingering changes to your system. They test for this. If you need to tweak source, you can get build dependencies with one command, source with another, and build and install from your tweaked version with just one more.


The stable community...

It really does feel like Debian will be one of the only distros we have at the moment that will still be around in a hundred years.


* Debian pushes security fixes really fast (I remember a local root exploit being patched in less than a day in Debian, while I waited a few more days on CentOS)

* The general tooling is far better. Lately, for example, I played with the tools used to build packages in clean chroots on each distribution: mock for CentOS, cowbuilder for Debian. cowbuilder is a lot more powerful: copy-on-write support, the ability to build several packages in parallel, more switches in the CLI arguments... mock is far more limited, even if it does the job. I also love reportbug; even if it seems antiquated at first (a bug tracker based on emails, wtf!?), it's really, really convenient, as it automatically grabs the information regarding the buggy package (logs, version, dependency versions...) and lets you simply review it.

* Debian has more packages available, even compared to CentOS+EPEL

* Debian has backports if you really need newer versions of a package

* If you want/need to be on the bleeding edge, sid (or unstable) is a great rolling release.

* Debian does a better job of maintaining stable versions of a given piece of software (I remember a few years ago CentOS/RHEL updated the OpenLDAP server to the latest version, which broke the directory on my (dev, fortunately) infrastructure due to a change in configuration format; this has never happened to me on Debian).

* If you have a relatively clean upstream, packaging is nearly automatic on Debian thanks to all the dh_<helpers>

* At least until CentOS 6, yum was a slow and fragile beast IMHO; apt is more robust and less susceptible to corrupting its DB in case of crashes or interruptions. It seems to have gotten better with CentOS 7, however, and I never played with dnf.

On the minus side compared to CentOS:

* kickstart is a dream to use compared to preseed, which is quite horrible, especially if you are trying to template it.

* The life span of a CentOS/RHEL release is really long, 10+ years for the latest releases, which can be great for long, complex, slow-moving projects; Debian, in contrast, is more of an EOL-after-roughly-five-years policy.
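The backports flow mentioned in the list above boils down to one extra sources entry plus an explicit target release; a sketch (stretch-backports is an example suite):

```shell
# /etc/apt/sources.list.d/backports.list
deb http://deb.debian.org/debian stretch-backports main
# backports are never pulled in automatically; you opt in per package:
# apt-get update && apt-get -t stretch-backports install <package>
```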


Debian policy.

Package selection and packaging quality. The Debian archive is about 10x the size of CentOS/RHEL's: 60k+ packages vs. about 7k or so, as of a few years ago. I've not done a recent RHEL count.


1) One reason would be that increasingly often, when you search for some rhel/centos problem, the solution is behind a paywall at access.redhat.com. Not that I blame RH, they are a commercial company and need to make money somehow. Or similarly, you get a search hit for bugzilla.redhat.com, but it turns out that the bug has been marked customers only, so you can't access it unless you have a RHEL subscription.

2) For major and minor releases, they are months behind RH. That may or may not be a problem for you, though.

3) Debian is less beholden to the interests of any one corporate entity (for better or worse), if that kind of thing appeals to you.

4) Debian has a huge number of packages, much more than CentOS + EPEL.

5) On the fly upgrade to the next release. Though in a professional setting where you have automated provisioning, reinstall isn't that much of a burden anyway.

That being said, both CentOS and Debian are very solid distros, you're not going wrong if you choose one of them.


As someone who hasn't used CentOS much, when I've had to use it I've found the package selection (packages/versions) to be lacking, requiring me to use one of the countless 3rd party repos that may or may not contain what I need.


I've known people in authority positions who have insisted on CentOS because "red hat certification", "tested for enterprise" etc, whose first setup step is to enable EPEL and a bunch of 3rd party repos.

They don't seem to see how this negates the advantage of using a "well tested, enterprise grade" distro!

I much prefer Debian (for servers and development machines) as a good intersection between "up to date" and "stable".


Well, epel has extra packages. They are by definition not essential to the functioning of the OS. Why you wouldn't want those from a rapidly updating or rolling-release repo is something I don't understand.


EPEL not being enabled by default I find a little weird, but sure it's not the end of the world.

The thing I find strange is that the people I'm talking about will happily run code from random 3rd party repos (providing PHP 7.0 or similar), and copy/paste repo key fingerprints from the web without blinking, while steadfastly refusing to use an OS with such packages included and tested in the official repos "because security/quality/enterprise".


Agreed on the third party repos, but EPEL is fine. Fedora/EPEL, while unsupported, is a Red Hat project. It's basically the upstream release for RHEL, and many Red Hat employees contribute. They have a really good QA process too.


EPEL is a must. I'm also using the official nginx and postgres repositories for the latest releases. I've never used other repositories.


If you want to run the latest, greatest, bleeding edge version of $foo, CentOS may not be for you.


Well, I noticed that CentOS/RHEL distros have quite old versions of everything, but Debian stable doesn't seem all that cutting edge either. I find myself reaching out to backports and even sid to get what I need at times.

At my new job, we recently had that debate -- and I was quickly overruled for recommending CentOS/RHEL. My employer is a Java shop, and I can almost always find rpm and deb packages for most of the stuff we need.


RHEL works by backporting bug fixes and new features into the old versions. Don't go by the version numbers, check the availability of features instead.


Backporting features without bumping versions is ridiculous though, especially when that means v3.1 of upstream is different from v3.1 in your repo.


You're just not their target audience. Others are willing to throw money at Red Hat for this "ridiculousness", though.


RHT's market cap is over $20 billion today so that says it's providing real value to customers.


FWIW, you can find newer versions of some software packages via "Software Collections".


And neither is Debian (stable). But given that they're often used for the same purpose (servers), it's not that irrational to compare them. Having a larger, supported package selection (even if it's a bit old) may be a good thing, and relevant when you choose your distro.

For the record: I have no idea how the package selection compares between CentOS and Debian - I've been firmly in the Debian camp since forever, and stay there mostly for other reasons, of which laziness is the most important one.


It's not just the latest. Debian stable doesn't have the latest but it does have a larger selection of official packages.


On a tangential note, I don't like how I get served the German version of the site with no obvious way to get the "canonical" one. And then I hate realizing how hard it is to combine support of multiple languages and still making everything transparent and accessible.

And also, what a huge effort it is to translate pages to different languages while (I think) almost everyone wants to read the English version.


This is determined by the settings in your browser. If you don't like getting the German version by default, then change your browser config.

Not only do you have links to different languages at the bottom of the site, you can even find a link to a document describing how the standard works and how to change the setting in a bunch of different browsers: https://www.debian.org/intro/cn


And yet, a bunch of sites including Google don't give a rat's arse about the accept-language http header and do their own thing.

Wherever I travel, I get search results served in that language even though I have my browser consistently configured to want English.


I agree that in general the support might be better, but for me Google works as expected at least in my current location (Czech Republic).

If I set preferred language to Czech I get search form in Czech. After changing settings to English I get English search form (with link to the Czech version underneath the search bar). Just for fun I have also tried Dutch and it seems to work as well (there are some foreign words, but I don't speak Dutch so it might as well be Klingon).

Tested in Firefox in a newly opened private window, so logins / previous sessions wouldn't affect the results.



I actually looked for a while there (even Ctrl+F'ed) but somehow missed it. My bad. Still, it's more of a nuisance to most people, and I don't like that the standard URL does not deliver the same page to everyone.


That's how the web is supposed to work. If you prefer English versions of pages, go into your browser configuration and tell it to prefer English.


It is a bit more complicated, but I can see why people whose first language is English, and who are rarely exposed to other languages, might not understand the struggle.

If I prefer to read English texts in their original form (because I understand them perfectly well) but also prefer to read texts written in my native language in their original language, there is no single setting that expresses this.

It feels weird saying this, but it reeks of a very anglo-american worldview to have a single language preference.


I think gp was being descriptive, not prescriptive: one preferred language is how the web works today (and I speak as someone who is multilingual).

> It feels weird saying this, but it reeks of a very anglo-american worldview to have a single language preference.

This is a browser-side problem[1], so perhaps the browser authors have an anglo-american worldview? You could file a bug/feature request with your favourite open-source browser. Alternatively, an extension that selectively sends per-website "Accept-Language" headers would work - I haven't checked if one exists, but it could be written in a weekend (not sure if Firefox's new extension framework allows messing with the headers).

1. Some sites work around this by allowing you to choose a language that overrides the one requested by the browser, persisting it in a cookie or your profile. Defaulting to the language requested by the browser is a sane default compared to the alternatives (like Geo-IP look-ups: "Oh, I see you are visiting Germany. I will assume your browser requested English in error and will serve you the German version of the article instead")
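The negotiation described in the footnote hinges on the quality values in the Accept-Language header; a minimal sketch of how a server might rank them (the helper name is mine):

```python
def parse_accept_language(header: str) -> list[tuple[str, float]]:
    """Parse an Accept-Language header into (language, q) pairs,
    highest preference first. A missing or malformed q defaults to 1.0."""
    prefs = []
    for part in header.split(","):
        item = part.strip()
        if not item:
            continue
        lang, _, params = item.partition(";")
        q = 1.0
        for param in params.split(";"):
            name, _, value = param.strip().partition("=")
            if name == "q":
                try:
                    q = float(value)
                except ValueError:
                    pass  # keep the default of 1.0
        prefs.append((lang.strip(), q))
    # sorted() is stable, so equal-q entries keep their header order
    return sorted(prefs, key=lambda p: p[1], reverse=True)

print(parse_accept_language("de,en;q=0.9,fr;q=0.5"))
# [('de', 1.0), ('en', 0.9), ('fr', 0.5)]
```

A real implementation also has to handle wildcards (`*`) and language-range matching, which this sketch skips.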


> This is a browser-side problem...

No, it sounds like a protocol problem to me. How do I state in an HTTP request: "I speak English and German; if you can serve both and the article was written in one and translated to the other, then give me the original please".

That was a long sentence, but it wouldn't have been particularly hard to define well and put into the protocol when it was originally drafted. It's a bit late now, obviously.

For example, one could define provenance of a translation much like NTP defines strata. Accept-Language could then have taken this into account.


> No, it sounds like a protocol problem to me. How do I state in an HTTP request: "I speak English and German; if you can serve both and the article was written in one and translated to the other, then give me the original please".

That is a problem with the quality of the translation/translators - not the protocol. The protocol is neutral on languages and considers all versions as equivalent, it only expects that you would pick one. The specific behaviour you are expecting can be trivially built into the server side without modifying the current protocol.


It's even worse than that. My primary criterion for whether I want English or German is not the language itself, but my expectation of the content being well-maintained. Translated versions are usually lagging behind a bit. On the Arch Linux wiki, I'll always use the English version. On my (German) university's website, I'd rather want the German version.


I usually prefer English for anything related to development, software engineering, and computers in general. I also run my user interface in English, as it is much easier to search for error messages that way. However, when visiting websites of local places such as restaurants, I prefer the German version and not the one in English meant for tourists.

While third-party extensions allow switching Accept-Language, it is not in the standard UI of modern browsers. For people like me, an advanced setting for Accept-Language per top-level domain would reduce to a minimum the number of times I have to find the language switcher on a web page.


And it is even worse for expats.

There are many sites that force the local language down your throat, and it's sooooo annoying...


Google is terrible for this. They haven't yet worked out that a Thai IP !== the ability to read Thai.


> That's how the web is supposed to work.

Citation needed. I'm pretty certain there are many proponents for a rather tight mapping from URL to content. Just imagine how hard it is for a search engine to index websites that deliver content based on unpredictable (anything else than URL) input variables.


Eh, it really is how HTTP is intended to work:

https://www.w3.org/Protocols/rfc2616/rfc2616-sec14.html

It's no different (arguably a lot better) than cookies.


Sure, I know Accept-Language, as should be pretty clear from the discussion. Are you implying that the web is supposed to provide a shitty user experience?

In my opinion, there is very limited applicability of Accept-Language. Just because this header exists does not mean you need to make use of it at the slightest perceived opportunity (If all you have is a hammer...).


I don't prefer English, I prefer the original, untranslated version of the site I actually tried visiting.


A huge effort, yes -- but probably worthwhile. You might be surprised about the number of non-english speakers; I know I was when visiting Italy and Portugal this summer (especially Italy). I suspect that we tend to live in "language bubbles".


"You might be surprised about the number of non-english speakers; I know I was when visiting Italy and Portugal this summer (especially Italy)."

As an Italian I have to agree with you; for us old timers it wasn't easy, as English didn't become the official 2nd language over here until some decades ago. Both my parents and most relatives, as an example, knew some French but next to no English, so I had to learn it for myself. Luckily newer generations are getting much better integrated with the rest of the world language-wise; should you visit Italy again and need directions, asking people in their 20s or 30s will probably be more effective.


English is an official language in Italy?


Ha, sorry, that was a stretched interpretation of the term "official" on my part; I meant it's the second modern language kids study at most schools.


Yes, that will be true in general... In computer literate circles (like the ones browsing debian.org) there won't be many non English speakers, though.


I don't mind that so much. What really bothers me is when I am on vacation, using incognito or another computer, and all Google sites redirect me to the other country's version. Most of the time the only way to change to English is by logging in. Their captchas are even in the local language, making it really difficult to do what they want.


It is actually so frustrating that I decided to go with Fedora in my latest trial. It changes every time you click a link, so even if you manage to find the correct link, one click and it is back to whatever language is at the top of your browser settings.


Send the correct language headers and stop complaining.


What's the correct ones? The ones that the submitter of the thread had set? My native language? English? Does it depend on the topic?


A curious fact: Debian names its releases after Toy Story characters.

Debian experimental is permanently named "sid" (the kid that breaks toys) and "stretch" (Debian 9) is a rubber octopus from Toy Story 3.

https://www.debian.org/doc/manuals/project-history/ch-releas...

https://wiki.debian.org/DebianUnstable#Which_Toy_Story_chara...

Another curious fact about Debian is that there are two logos: the open use one and the restricted use one. The restricted use one is less known and includes a bottle.


I always like to point out the other two logos[1][2] that preceded the Buzz Lightyear's-chin logo[3].

[1]http://ianmurdock.debian.net/index.html%3Fp=1880.html

[2]see "old logo" at https://www.debian.org/vote/1999

[3]https://images-na.ssl-images-amazon.com/images/I/81x4CGvFvNL...


I get an error for image 3 - what is the logo?



Sorry, it's not really the logo; I was kidding about the fact that the current logo looks like Buzz Lightyear's chin.


Ahhhhhh I never noticed the swirl on Buzz' chin before!


I am glad they did not select the chicken logo.


> The restricted use one is less known and includes a bottle.

It is indeed very, very rarely used at all these days. Even the "official" CD media (which used to be the only place it was really seen in the past 10 years) don't really exist anymore.


sid == unstable


For years I thought that the unstable release was named like that as a reference to Sid Vicious.


sid (still in Development)


While true, I guess the point was that it's misleading to refer to sid as Debian's "experimental" release, since Debian actually has a "release" referred to as experimental and it is not sid.

https://wiki.debian.org/DebianExperimental


But sid isn't referred to as an experimental release. The experimental release is more or less a voluntary option for the devs to use or not.

> Some packages/developers don't use experimental, they just put the new versions in unstable


I began using Debian c. 1998 and I've never heard that.


"It is sometimes wrongly backronymed as "Still In Development""

https://wiki.debian.org/DebianUnstable#Which_Toy_Story_chara...


sid is said to be unstable, but I have experienced the opposite. In fact, testing is way more unstable than sid.


That is not true at all, and I can prove it to you with just one bug report:

An update to network-manager broke DNS for everyone; the broken package stayed about 5 hours in the unstable repository, if I recall correctly. This broken package never got even close to being in testing.

https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=784587


On the other hand, if a serious issue is detected after a specific version has already migrated to testing, the package will never (automatically) migrate to testing as long as new blocking issues keep being found. This can make testing rather unpleasant.

Of course, the same two-week wait time also applies to any regular bugfix, unless it's manually migrated to testing. Thus, one bug that affected unstable but not testing is not proof at all. Personally, I'm happier with unstable than testing on my laptop and work desktop.


> On the other hand, if a serious issue is detected after a specific version has already migrated to testing, the package will never (automatically) migrate to testing as long as new blocking issues keep being found. This can make testing rather unpleasant.

That is a valid example of what can make testing unpleasant, but it still supports my point, as this is a lot less common than sid breakage. Also, the package is removed from testing automatically a few days after any RC bug is filed, which prevents new testing installations of the broken package.

> Of course, the same two-week wait time also applies to any regular bugfix, unless it's manually migrated to testing. Thus, one bug that affected unstable but not testing is not proof at all. Personally, I'm happier with unstable than testing on my laptop and work desktop.

The default wait time is 5 days (medium urgency); high urgency fixes (which fix RC bugs in testing) wait 2 days before being migrated to testing. But you don't need to believe me, see for yourself:

Release critical bugs affecting sid (excluding packages that are already removed from testing, so we don't see any obsolete package that is to be removed from unstable too)[1]: 539

Release critical bugs affecting testing[2]: 453

[1]https://udd.debian.org/bugs/?release=sid&notbuster=ign&merge...

[2]https://udd.debian.org/bugs/?release=buster&notbuster=ign&me...


Little known: the "experimental" distribution is called "rc-buggy", a pun on "release critical".


I wish Debian would adopt Ubuntu-like version numbering: "yy.mm". So easy to understand. Another good example is TeXLive, which simply uses "yyyy". Much better.


That wouldn't work. Debian publishes minor updates to older release "branches"; naming those after the date they were released would make it impossible to distinguish the branches. For instance, Debian 9 was released in June 2017, followed by the release of 8.9 the next month as a minor update for users still running Debian 8.


Ubuntu has made the commitment for 6-month releases, so that makes sense for them. Debian releases "are ready when they're ready", so they can't predict the month (or even the year) ahead of time, and the version number is needed well before the release is made official.


...and the version number is needed well before the release is made official.

Ubuntu developer here. No, the version number is not needed before the release. All our infrastructure only uses the codename until release, since the final version name is not known for certain in advance.

For example, Ubuntu 6.06 LTS (Dapper Drake) was released late.


I wish most software would switch to this, I really don't like semantic versioning.


Semantic versioning makes sense for libraries, not so much for user-facing applications.


I used to agree with the point about year-based versioning, but I've changed my tune and I don't think this point about libraries is completely true either.

If you look at what semantic -vs- date-based versioning communicates to end-users of normal user-facing apps:

Date-based:

- how recently it's been released

- not much else

The above is useful if you are interested in being up-to-date on a piece of software you know and trust, or in trying out new / cutting edge software.

Semantic:

- a (very) rough idea of how long it's been in development for (the number of versions it's gone through)

- a (similarly, very) rough idea of how active the development effort is in terms of testing/maintenance/bug-fixes/patches (i.e. major version churn vs. minor version churn vs. patch version churn)

- the likely relative stability of the current release (e.g. 2.0.0 might be less stable than 2.0.4)

All of the above may be rough and may give false impressions sometimes, but it is still extremely useful as an indicator for any relatively technical user evaluating whether or not to use or upgrade a piece of software.

Less technical users are less likely to be as interested in version numbers full stop.


If a project actually follows semver, then you know that unless the first number changes, there won't be any (significant) backwards-incompatible changes - e.g., your config files will still work as expected.
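That guarantee can be checked mechanically; a small sketch of the comparison semver implies (helper names are mine, and this ignores pre-release tags):

```python
def parse(version: str) -> tuple[int, int, int]:
    """Split a 'MAJOR.MINOR.PATCH' string into a tuple of ints."""
    major, minor, patch = version.split(".")
    return int(major), int(minor), int(patch)

def compatible(installed: str, candidate: str) -> bool:
    """Under semver, an upgrade is backwards-compatible iff the major
    version is unchanged and the candidate is not a downgrade."""
    old, new = parse(installed), parse(candidate)
    return new[0] == old[0] and new >= old

print(compatible("2.0.0", "2.0.4"))  # True  - patch bump, same major
print(compatible("2.9.1", "3.0.0"))  # False - major bump may break configs
```

Tuple comparison does the heavy lifting here: Python compares `(2, 0, 4) >= (2, 0, 0)` element by element, exactly the ordering semver defines for the numeric components.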


This is the major obstacle with semver. People aren't terribly psyched to increase the major version whenever incompatibilities happen, because other people have different expectations of major version numbers.


Does Debian have the same support for AWS eccentricities that Ubuntu now does?


Can you elaborate?


Up until March of this year, the Ubuntu kernel was not tuned for AWS. This included not having support for Elastic Network Adapter, which limited network speeds when compared to Amazon Linux.

See here: https://insights.ubuntu.com/2017/04/05/ubuntu-on-aws-gets-se...


I have the impression that Jessie came out recently. Is time speeding up?

Meanwhile, I've been happy on CentOS 7 for many years now.


> Is time speeding up?

I ask myself that every day; it certainly seems like it, getting older ;-)

On a more serious note: it's great that Linux is now sufficiently mature that even a 3 year old release is perfectly adequate and can do pretty much whatever you need. Maybe I'm misremembering here, but it wasn't always like that.


Jessie was released in April 2015 - Debian Stable releases are usually two years apart. CentOS 7.0 was released a bit less than a year before it (July 2014).

Jessie will still be supported until June 2018 by the official security team, and until 2020 by the Debian LTS project.


Debian switched to a more formal 2 year cycle for releases (similar to the Ubuntu LTS releases) from their "when it's ready" cycle they had before.


Debian 9 Stretch came out recently (June 2017).


Will it work on AMD promontory chipsets?


Time to upgrade my laptop



