
Debops (2014) - chei0aiV
http://enricozini.org/2014/debian/debops/
======
stephenr
I like this a lot, and I love the term DebOps.

The way developers somehow think DevOps is (or should be) an abbreviation of
"Developers doing/replacing Operations" is terrifying to me.

I'm also in the same boat as the author, in that I recommend and target Debian
Stable + Backports (and some vendor/community repos when required).

~~~
bbrazil
A bit over a year ago I researched what "Devops" means, and the answer seemed
to be developers being able to push code to production without having to
involve Operations. This sounds like a good goal to have (presuming you've got
good unit tests etc.) as it removes unnecessary friction for developers.

What it doesn't include is all the higher-value work that people on the
Operations side tend to provide, like thinking about rollbacks, machine
failure, network failure, provisioning, capacity planning,
change/configuration management, security, monitoring etc. Which is not to
say that developers don't think about these things, but their focus tends to
be on developing product rather than on these non-functional requirements.

I'm presently in a world of Ubuntu LTS+a few manual backports for similar
reasons as the authors. My home systems are ansible managed, giving me things
like wireshark installed everywhere if I need it and new machines
automatically hooked into Prometheus monitoring (which is much easier with my
debs). I've seen what happens if you try to manually manage machines, and know
that a small bit of upfront work will bring dividends later.
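A home setup like that can be sketched as a minimal Ansible playbook (a sketch only; the package names and layout are my illustrative assumptions, not bbrazil's actual config):

```yaml
# site.yml - hypothetical baseline applied to every machine
- hosts: all
  become: true
  tasks:
    - name: Install debugging tools everywhere
      apt:
        name: wireshark
        state: present

    - name: Hook new machines into Prometheus monitoring
      apt:
        name: prometheus-node-exporter
        state: present
```

Running `ansible-playbook -i inventory site.yml` then converges every listed host to the same baseline, which is the "small bit of upfront work" paying off.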

Writing the core code is just one critical step in running software long term;
let's not forget the rest of the critical steps to keep it running in a sane
way in the future.

~~~
stephenr
Many people have different interpretations of the term 'DevOps', but what
you're describing in that first paragraph is Continuous Deployment.

Yes, it's related to DevOps (in that it's often a beneficial result of it),
but it's not exactly the same thing, by any definition I've seen, good or bad.

~~~
bbrazil
Among all the definitions I found, that was the only common theme.

I've been discussing this with others in the "DevOps" space and in recruiting
terms a "DevOps engineer" seems to mostly mean an "Ops engineer". True Devops
(in the sense of developers who also care about and perform all of their own
operations) is a culture, not a job role, and one that seems quite rare in the
wild.

~~~
devonkim
Definitions of the term are almost as diluted as "cloud" is. It is easier to
define it as what it's not.

Look up Jez Humble for a bit more insight into the origins of the term. The
term is primarily a buzzword among larger companies, almost always followed up
by some engineering cultural change objective rather than anything about how
code is deployed or even developed and run. You know it's another management-
focused trend when there are entire conferences where people say devops
constantly without mentioning anything about code, and half the folks in
attendance or speaking are consultants in suits who consider Excel formulas
the extent of their coding skills.

So the common theme I see is "any way besides how we used to do operations
traditionally." It's mostly used to mean "operations with some idea of what is
being deployed on the stack above them." Most start-ups don't have this
problem at all, though, almost by definition: with modern infrastructure
there's no more rack-and-stack at your 5-man start-up, and rigidly defined
roles are a Big Company Problem.

~~~
stephenr
The problem modern startups have is that a developer sees he can make calls to
AWS or Azure APIs and assumes that makes him qualified to define system
architecture, security policies, deployment processes, etc.

"Modern" infrastructure (by which I assume you mean provisioned, destroyable
VPS instances + associated services such as AWS, Azure, etc) is effectively
just a new "how" - you call an API instead of deploying a config file or
similar. You still need to know the "what" and the "why" to be effective.

~~~
devonkim
I don't necessarily see the situation as inflated egos as much as lack of
resources to do it better. I have rarely met developers that are excited to do
operations work like defining and implementing system security policies,
change control, and orchestration. It's a chore that's as exciting for them as
doing their laundry.

I'm being very conservative with what "modern" means (within the past 20 years
is about right). Traditional shops are still racking and stacking machines and
maybe deploying VMs by hand, using ITIL processes in a desperate attempt to
slow down system changes in the face of demand, rather than speeding things up
like most shops have done. Where I am now, the "traditional" IT side of the
house takes roughly 5 months to provision a new server (I lead operations on
anything bleeding edge, which is now standard for most start-ups).

Going from using maybe kickstart files to API calls is not as big of a deal as
the fact that you can even get something on demand in any way instead of
going, finding another job, quitting that job, coming back in shame, and
realizing that the server you asked for is finally up.

~~~
stephenr
I've definitely met developers who made the choice to say "I can do this
myself, I don't need someone else to do it", and I've also met developers who
when tasked with "make our app run" simply say "ok, it's running on port 80, so
it's up right?" without any of the associated work to make it secure,
reliable, backed up, etc.

------
olgeni
I recently had to develop an Erlang application targeting Debian "stable". It
was humiliating at best. Countless hours were wasted trying to get around bugs
in prehistoric packages, or building new (and working) packages from scratch.

Never again.

~~~
logicchains
Live dangerously: deploy on Arch. Then you'll never have to work with outdated
anything again ;)

Actually, although I've been using Arch Linux for over two years now (not in
production; I'm not brave enough), I've hardly had any issues at all with its
rolling updates. The worst was just having to manually delete some Java
binaries from /usr/bin when the way it handled Java was updated to allow Java
7 and 8 to exist side by side.

~~~
sleepydog
I know a lot of people who use Arch. I myself used Arch for about 10 years.
Everyone I talk to has glowing reviews of it, and they've never had any
issues.

I stopped using Arch about 3 years ago after my system became unbootable after
an update. It was not the first time. In the past, I would be fine reading
update notes and fixing the issue. But since I started troubleshooting servers
at work, I have _zero_ patience for doing it at home.

One other annoyance that comes with rolling releases is that you should update
more often, to avoid making bigger (sometimes conflicting) changes to your
system. You end up reading release notes more often. I could turn on automatic
updates, but I've been bitten by that in the past.

Arch also encouraged me to tinker, and I was much more likely to make breaking
changes to my system than I am now just running Ubuntu. If I had the
time/energy to try out new distros at home, I would probably try Nix or
something similar, that emphasizes rollback capability.

------
zufallsheld
There's also DebOps[0], a collection of Ansible playbooks for a "Debian-based
data center in a box".

[0][https://github.com/debops/debops](https://github.com/debops/debops)

~~~
nickjj
I remember discussing with the author what we should name the domain[0]. I
figured it was pretty unique.

[0][http://debops.org](http://debops.org)

------
Sphax
How do you not bundle in dependencies when coding in, say, Java? The Java
packages on Debian are for the most part outdated. It's way more practical to
just use Maven, build a fat jar and be done with it.
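For reference, the fat-jar approach usually amounts to a maven-shade-plugin stanza in pom.xml (a minimal sketch; the plugin version shown is just an illustrative assumption):

```xml
<!-- pom.xml fragment: bundle all dependencies into one runnable jar -->
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-shade-plugin</artifactId>
  <version>2.4</version>
  <executions>
    <execution>
      <phase>package</phase>
      <goals><goal>shade</goal></goals>
    </execution>
  </executions>
</plugin>
```

With that in place, `mvn package` emits a single jar containing the application and every dependency, sidestepping whatever versions Debian ships.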

~~~
_yosefk
How do you not bundle dependencies programming in anything? If you want to be
portable across Linuxes, you can't even count on tcsh or env being installed
at the same path. Counting on the stuff installed with the system unleashes a
world of pain on the user.

~~~
davexunit
>you can't even count on tcsh or env being installed at the same path

This is what tools like Autoconf and pkg-config take care of. Lots of people
use them to discover executables and libraries and generate files with the
right machine-specific variables in them. You should _never_ assume that
binaries are in /usr/bin or that libraries are in /usr/lib. A lot of packaging
issues are caused by such assumptions.
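As a tiny sketch of that discovery approach, in plain POSIX shell (the zlib line is an illustrative example, not something the parent comment specified):

```shell
#!/bin/sh
# Discover tools at runtime instead of hard-coding /usr/bin paths.
env_path=$(command -v env)   # wherever `env` actually lives on this system
echo "env found at: $env_path"

# pkg-config does the same job for libraries, e.g.:
#   cc myprog.c $(pkg-config --cflags --libs zlib)
```

Autoconf's `AC_PATH_PROG` and `PKG_CHECK_MODULES` macros are essentially this, baked into the configure step.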

If you use the right tools, you don't have to bundle your dependencies.
Bundling introduces a serious maintenance burden onto the developers _and_ the
packagers. It's easy to avoid bundling for C libraries and things, but with
the prevalence of language-specific package managers, it's become a harder
problem because everyone just assumes that you will fetch dependencies through
it and never use the system package manager. It's a sad state.

------
saurik
> I build my software targeting Debian Stable + Backports. At FOSDEM I noticed
> that some people consider it uncool. I was perplexed.

I am also perplexed... what are these other people doing?

~~~
chei0aiV
They are doing stuff like this:

[https://news.ycombinator.com/item?id=9952356](https://news.ycombinator.com/item?id=9952356)

~~~
chei0aiV
To the downvoters: the comments on that post mention a lot of deployment
methods that people not doing Debops are using.

------
Kabacaru
So I develop PHP. The reason I don't use Debian stable is because the latest
version of PHP on there is 5.4.41, which is behind the "old stable version".
That means security fixes only.

No bug fixes that resolve problems, no modern functionality (that release is
over a year old), and it will be EOLed in 1 month. That means a big delta of
change that will need to be handled when Debian finally does get round to
upgrading. Large deltas of change mean lots of risk.

It's much better to stay further up the crest of the wave and handle more
regular updates, to minimise the size of the risk I'm bringing into my code at
each release, than it is to stick to an old version and not handle the stream
of new functionality that's coming in as it arrives.

~~~
0x0
Debian Stable has 5.6.9 at the moment,
[https://packages.debian.org/jessie/php5](https://packages.debian.org/jessie/php5)
- and shipped with 5.6.7 initially.

Security updates for php5 in Debian seem to have changed from backporting
security fixes to staying up to date on any given upstream minor branch,
including other fixes. [https://security-tracker.debian.org/tracker/source-
package/p...](https://security-tracker.debian.org/tracker/source-package/php5)

~~~
Kabacaru
Gah, well there's that argument out the window. We tend to use docker on
CoreOS here, so it's not much of an issue.

Maybe next time I stand up a real VM I'll look at using Debian stable then.

------
skarap
Never understood why Debian ended up as the distribution of choice of the
DevOps-related movements. Ubuntu is understandable - developers used it on
their desktops, so when they ordered their first VPS, they chose Ubuntu. But
why Debian?

Also, why does it look like the RHEL family is completely out of fashion? Is
it considered too "enterprisey"?

~~~
juliangregorian
Why Ubuntu? If you're not writing GUIs, Ubuntu is just a less-stable Debian.

~~~
jnbiche
> Why Ubuntu?

PPAs.

~~~
duggan
PPAs are a seriously killer feature. Easily one of the most important parts
of the Ubuntu ecosystem. Not enough people realize it.

~~~
chei0aiV
What is the killer part of PPAs? It is pretty easy to set up your own repo on a
server using reprepro or aptly. Is it the build-machines-as-a-service aspect?
Or the ease of adding a PPA to your system? Or the fact anyone can dump some
stuff into a repo and have Launchpad/Canonical/Ubuntu bless them?
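For context, rolling your own apt repo with reprepro is roughly this much work (a sketch; the codename and paths are illustrative assumptions):

```
# conf/distributions - minimal reprepro repository definition
Codename: jessie
Components: main
Architectures: amd64 source

# add a package and serve the resulting tree over HTTP:
# reprepro -b /srv/repo includedeb jessie mypackage_1.0_amd64.deb
```

What a PPA adds on top of this is the hosted build farm and the one-line `add-apt-repository` client experience.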

~~~
erikb
I think it's the ability to have bleeding-edge packages easily linked into
your system. Other options might be available, but people simply don't know
about them. And developers themselves often provide the PPAs, which means
there are many PPAs available.

It might be the worst solution, but it's the most well known.

------
nvarsj
Using Debian isn't a panacea. Neither is any particular distribution.
Anecdotally, the only personal server I've had hacked was running Debian - due
to it using an old openssh version (the issue was not present in later
versions). It was fixed quickly, but I got 0dayed.

~~~
stephenr
The only servers I've seen (and subsequently replaced, with Debian) breached
were old CentOS boxes.

The key thing here is not "Debian is bad" or "CentOS is bad" it's that you
need to keep up to date with security patches. For Debian that usually means a
combination of using the Security Apt repo, and for things like OpenSSH, using
the Backports Apt repo.

I do agree that Debian isn't a silver bullet, but in my experience it's much
easier to work with from a setup/management point of view than CentOS,
particularly for small shops that aren't heavily invested in a full-blown CM
tool - shell scripts and/or Debian config packages [1] can be used to fully
provision one server or fifty.

[1] [http://debathena.mit.edu/config-
packages/](http://debathena.mit.edu/config-packages/)
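Concretely, that security-plus-backports combination looks something like this on jessie (a sketch; adjust the release name and mirror to taste):

```
# /etc/apt/sources.list - security updates plus backports
deb http://security.debian.org/ jessie/updates main
deb http://http.debian.net/debian jessie-backports main

# then pull a newer OpenSSH explicitly from backports:
#   apt-get update && apt-get -t jessie-backports install openssh-server
```

Backports are opt-in per package (hence the `-t` flag), so the base system stays on stable while the few packages that need it track something newer.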

------
throwaway0189
Related to this: Outsourcing your webapp maintenance to Debian

[https://feeding.cloud.geek.nz/posts/outsourcing-webapp-
maint...](https://feeding.cloud.geek.nz/posts/outsourcing-webapp-maintenance-
to-debian/)

------
KaiserPro
Yes, working smart not hard.

The value of going home on time each night, because there is a wealth of
Google searches to expose and work around known bugs, should never be
underestimated.

------
ecesena
Please edit title with (2014)

------
HeXetic
Am I the only one who finds this style of usually-just-one-sentence-per-
paragraph almost completely unreadable on a visual level?

