The way developers somehow think DevOps is (or should be) an abbreviation of "Developers doing/replacing Operations" is terrifying to me.
I'm also in the same boat as the author, in that I recommend and target Debian Stable + Backports (and some vendor/community repos when required).
What it doesn't include is all the higher-value work the people on the operational side tend to provide, like thinking about rollbacks, machine failure, network failure, provisioning, capacity planning, change/configuration management, security, monitoring etc. That's not to say that developers never think about these things, but their focus tends to be on developing product rather than on these non-functional requirements.
I'm presently in a world of Ubuntu LTS + a few manual backports, for similar reasons as the author's. My home systems are Ansible-managed, giving me things like wireshark installed everywhere if I need it, and new machines automatically hooked into Prometheus monitoring (which is much easier with my debs). I've seen what happens if you try to manage machines by hand, and know that a small bit of upfront work will pay dividends later.
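As a rough illustration, the kind of one-off Ansible run I mean looks something like this (a minimal sketch; the "all" inventory group and module args are just assumptions about my setup):

    # Push wireshark to every host in the inventory, via apt
    ansible all -m apt -a "name=wireshark state=present" --become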
Writing the core code is just one critical step in running software long term; let's not forget the rest of the critical steps that keep it running in a sane way in the future.
Yes, it's related to DevOps (in that it's often a beneficial result of it), but it's not the same thing, by any definition I've seen, good or bad.
I've been discussing this with others in the "DevOps" space, and in recruiting terms a "DevOps engineer" seems to mostly mean an "Ops engineer". True DevOps (in the sense of developers who also care about and perform all of their own operations) is a culture, not a job role, and one that seems quite rare in the wild.
Historically large companies had strictly separate development and ops teams. DevOps is about fusing them together so they talk to each other and influence each other's thinking for the benefit of everyone.
But of course sane definitions don't make good buzzwords so now we hire "DevOps" people just like we buy "private cloud" servers.
DevOps is as much about making your developers ops engineers as Agile is about making your stakeholders developers. It's a possible side-effect, but neither necessary nor sufficient.
Look up Jez Humble for a bit more insight into the origins of the term. The term is primarily a buzzword among larger companies, almost always attached to some engineering-culture change objective rather than to how code is deployed, or even developed and run. You know it's another management-focused trend when there are entire conferences where people say "devops" constantly without mentioning anything about code, and half the folks in attendance or speaking are consultants in suits who consider Excel formulas the extent of their coding skills.
So the common theme I see is "anything besides the way we used to do operations traditionally." It's mostly used for "operations with some idea of what is being deployed on the stack above them." Most start-ups don't have this problem at all with modern infrastructure (probably no more rack-and-stack at your 5-person start-up), almost by definition, because rigidly defined roles are a Big Company Problem.
"Modern" infrastructure (by which I assume you mean provisioned, destroyable VPS instances + associated services such as AWS, Azure, etc) is effectively just a new "how" - you call an API instead of deploying a config file or similar. You still need to know the "what" and the "why" to be effective.
I'm being very conservative with what "modern" means (within the past 20 years is about right). Traditional shops are still racking and stacking machines, and maybe deploying VMs by hand, using ITIL processes to desperately slow down system changes to deal with demand, rather than speeding things up like most shops have done. Where I am now, the "traditional" IT side of the house takes roughly 5 months to provision a new server (I lead operations on anything bleeding edge, which is now standard for most start-ups).
Going from, say, kickstart files to API calls is not as big a deal as the fact that you can get something on demand at all, instead of asking for a server, going off and finding another job, quitting that job, coming back in shame, and realizing that the server you asked for is finally up.
As I said, I don't treat "developers can push to an environment" as the defining factor or the definition of DevOps.
The Debian ecosystem is great, but I would never recommend a developer marry an app to it.
That way, you get static artifacts you can deploy (instead of deploying with pip, which requires compiling any non-pure-Python package on the server), you get to use apt's dependency handling for non-Python packages like system libraries, and you can easily roll back by simply pushing the previous version of the deb package - which, in my experience, is cleaner than rolling back with pip.
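For illustration, one way to build such a deb (a sketch using fpm, which is only one of several tools for this; the app name, version, and install path are hypothetical):

    # Install the app and its deps into an isolated virtualenv...
    virtualenv /opt/myapp
    /opt/myapp/bin/pip install myapp==1.2.3
    # ...then wrap the whole directory tree into a single .deb artifact
    fpm -s dir -t deb -n myapp -v 1.2.3 /opt/myapp

Deploying or rolling back is then just installing the appropriate .deb with dpkg -i.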
And what about the way management thinks DevOps means you don't have to pay Operations people (those guys who always try to stop progress in your company)?
No one says "we don't need someone to set up/maintain our servers".
DevOps is, in every interpretation I've seen, about who/how it's done.
Of course they do. And there are a lot of products which are marketed using that idea:
"AWS Elastic Beanstalk is the fastest and simplest way to get an application up and running on AWS. Developers can simply upload their application code and the service automatically handles all the details such as resource provisioning, load balancing, auto-scaling, and monitoring."
The configuration management/orchestration tools also help with that (you don't need anyone to provision the servers, take care of dependencies, etc.; you can just include these 15 Chef cookbooks, and everything will be done automatically). BTW, I'm not saying configuration management tools are bad - they're a must-have - I'm just saying nothing will replace a person who knows what they are doing.
In this situation, you've basically traded your own Ops staff for a combination of your developers and whatever support AWS provides you - so you're back to a managed service like every man and his dog was using in the 90s.
Were you targeting Wheezy?
If you look here https://github.com/erlang/otp/releases you'll see that - even on Jessie - 17.1 is unfortunately way behind.
17.3.4 was released on 4th of November 2014, according to the link you provided.
The Jessie "freeze" happened on the 5th of November ( https://lists.debian.org/debian-devel-announce/2014/11/msg00... ), so I'm actually surprised they are shipping that version at all.
In my opinion that is not "way behind".
If you need bleeding edge, you should move to the unstable or testing releases, switch to a rolling-release distribution like Arch, or build your own packages.
I figured I would just install the package and use a -v or --version arg to get the exact build version. I cannot find any way to get a patch version out of Erlang. The `OTP_VERSION` file simply says 17.3, and the most I can get from `erl` or `erlc` is "17".
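The one place the full version string does reliably show up is the packaging metadata itself - something like this (a sketch; erlang-base is an assumption about which binary package is installed):

    # Ask dpkg for the exact version of the installed package
    dpkg-query -W -f='${Version}\n' erlang-base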
EDIT: Well, that's not true, I see the reason -- Linux isn't an OS so much as a family of OSes, assembled from a lot of independently developed projects and loosely compatible with each other, so the reason you need a Debian-specific version of Erlang is that Debian isn't quite the same as Arch or Fedora or what have you. I just don't think that's such a great reason that users should simply accept it'll never get better.
You can always install the version you want; you just have to compile it, including its dependencies if needed (using a custom path for the libraries).
If you need to do that for multiple machines, build your own packages.
You have a choice how you want to get stuff on your system.
The people creating the software packages in the official repository make it easy for multiple projects to reuse stuff, and it works out for most people most of the time.
I can see the point of having a "stable" kernel, but these freezes are mostly arbitrary. Back in the day, Squeeze shipped with R14A (not B). That's not stable, it's just a random cutoff; it means that whoever did the packaging did not even know what A and B used to mean in Erlang releases.
That's not something you can put in production.
If you're a heavy Erlang user, you could also consider helping to support the Debian packages for it; I'm sure they (the Debian Erlang team) would appreciate any help offered.
The idea is to lean on stable for everything else that you don't care as much about, basically.
Actually, although I've been using Arch Linux for over two years now (not in production; I'm not brave enough), I've hardly had any issues at all with its rolling updates. The worst was just having to manually delete some Java binaries from /usr/bin when the way it handled Java was updated to allow Java 7 and 8 to exist side by side.
I stopped using Arch about 3 years ago, after my system became unbootable following an update. It was not the first time. In the past, I would have been fine reading the update notes and fixing the issue, but since I started troubleshooting servers at work, I have zero patience for doing it at home.
One other annoyance that comes with rolling releases is that you should update more often, to avoid making bigger (sometimes conflicting) changes to your system. You end up reading release notes more often. I could turn on automatic updates, but I've been bitten by that in the past.
Arch also encouraged me to tinker, and I was much more likely to make breaking changes to my system than I am now just running Ubuntu. If I had the time/energy to try out new distros at home, I would probably try Nix or something similar, that emphasizes rollback capability.
So even if the system boots, it doesn't mean much, because booting is only one of the 100+ use cases, and you will never be able (or willing) to test the rest after each pacman -Syu.
On wheezy, that would have put you at 17.1 in less than a minute.
The fact that erlang.org does not list minor releases at all probably doesn't help. One can go to erlang.org and think that 18.0 is the latest...
Here is a quick howto for down and dirty backporting:
Get the debian, orig, and dsc files (dget, from the devscripts package, can fetch all three given the URL of the .dsc):

cmd: "dget -d http://ftp.debian.org/debian/pool/main/e/erlang/erlang_<blah>.dsc"
cmd: "dpkg-source -x erlang<blah>.dsc"
cmd: "cd erlang<blah>"
cmd: "dpkg-buildpackage -us -uc"

Then you have 18.0.something source packages and installable binaries for whatever debian release you have. Check the build-depends in debian/control, or try apt-get build-dep erlang, before running dpkg-buildpackage.
If you want a newer release, and it's just a minor change like 18.0.2 surely is... then just download the current upstream source file and rename it like the orig file you downloaded from Debian. Or go update the entry in debian/changelog to properly represent it. (dch -v 1:18.0.2-1~olgeni+1 -m)
It only takes a few minutes of prep work and a couple of commands to complete. Most of the time is spent waiting for something like Erlang itself to compile before it pops out installable binaries for whatever version of Debian you're on. Plus, now you have a 1-for-1 installable runtime for all your other nodes or environments, without any extra work.
You can cheat a little. For example, I am using some system debs for Python packages, but with more than a handful of Python utility packages that can get unmanageable fast. That leads naturally to the fat-jar/virtualenv/static-binary approach with manually managed dependencies, which gives you isolation between applications, along with all the associated costs.
If we were in an ideal world we would do that, but unfortunately we're not, and in my experience it's way easier for the dev AND for the sysadmin to use Maven and build a fat jar.
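Concretely, once something like the maven-shade-plugin is wired into the pom, the whole cycle collapses to this (a sketch; the artifact name is hypothetical):

    # One command produces a single self-contained "fat" jar under target/
    mvn clean package
    # ...which runs anywhere with just a JRE, no dependency juggling on the server
    java -jar target/myapp-1.0.jar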
This is what tools like Autoconf and pkg-config take care of. Lots of people use them to discover executables and libraries and generate files with the right machine-specific variables in them. You should never assume that binaries are in /usr/bin or that libraries are in /usr/lib. A lot of packaging issues are caused by such assumptions.
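For example, instead of hardcoding paths, you ask pkg-config at build time (a sketch; openssl is just an example package name):

    # Emit the compiler and linker flags for wherever this system keeps OpenSSL
    pkg-config --cflags --libs openssl
    # e.g. cc app.c $(pkg-config --cflags --libs openssl)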
If you use the right tools, you don't have to bundle your dependencies. Bundling introduces a serious maintenance burden onto the developers and the packagers. It's easy to avoid bundling for C libraries and things, but with the prevalence of language-specific package managers, it's become a harder problem because everyone just assumes that you will fetch dependencies through it and never use the system package manager. It's a sad state.
If you're upset about that, learn how to instantiate containers on your chosen platform :-)
I am also perplexed... what are these other people doing?
No bug fixes that resolve problems, no modern functionality (that release is over a year old), and it will be EOLed in 1 month. That means a big delta of change that will need to be handled when Debian finally does get around to upgrading. Large deltas of change mean lots of risk.
It's much better to stay further up the crest of the wave and handle more regular updates, minimising the amount of risk I bring into my code at each release, than it is to stick to an old version and ignore the stream of new functionality as it arrives.
Security updates for php5 in Debian seem to have changed from backporting security fixes to staying up to date on any given upstream minor branch, including other fixes. https://security-tracker.debian.org/tracker/source-package/p...
Maybe next time I stand up a real VM I'll look at using Debian stable then.
Also, why does it look like the RHEL family is completely out of fashion? Is it considered too "enterprisy"?
On the desktop, Debian has quite a few more rough edges, and you wouldn't really recommend it for a newbie.
Regarding Red Hat being out of fashion: part of that is that Ubuntu was a big drawcard bringing devs to Linux, because it focused on polishing the desktop, so the Debian family got an influx of users. Personally, I don't like the Red Hat tooling, as I find it full of gotchas, and its output is usually full of chaff and hard to parse. That may just be a personal-taste issue; I'm sure plenty of Red Hat admins find the Debian tooling weird and odd.
On top of that, it was already a pretty popular distribution. The more people use it, the more documentation becomes available, and the more new people will pick it up.
Well, look at the website: https://www.redhat.com/en/technologies/linux-platforms/enter.... That won't appeal to anyone outside of an enterprise chair.
RHEL/CentOS is still around, but you don't hear about it too often.
It might be the worst solution, but it is the most well known.
Sure, it'd be easy to set it up yourself. But the point is that PPAs are managed by someone else and have already been set up.
If I want new software and there's a PPA for it, I can just add that PPA and be able to use it straight away.
The key thing here is not "Debian is bad" or "CentOS is bad"; it's that you need to keep up to date with security patches. For Debian, that usually means a combination of using the Security apt repo and, for things like OpenSSH, the Backports apt repo.
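For the record, pulling a newer OpenSSH from backports is roughly this (a sketch written for jessie; adjust the release name to taste):

    # Enable the backports repo, then opt in per package with -t
    echo "deb http://ftp.debian.org/debian jessie-backports main" | sudo tee /etc/apt/sources.list.d/backports.list
    sudo apt-get update
    sudo apt-get -t jessie-backports install openssh-server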
I do agree that Debian isn't a silver bullet, but in my experience it's much easier to work with from a setup/management point of view than CentOS, particularly for small shops that aren't heavily invested in a full-blown CM tool - shell scripts and/or Debian config packages can be used to fully provision one server or fifty.
The value of going home on time each night, because there is a wealth of Google searches to expose and work around known bugs, should never be underestimated.