
Agility Requires Safety - runesoerensen
http://themacro.com/articles/2016/03/agility-requires-safety/
======
mikekchar
Google is failing me right now, but I remember Kent Beck explaining why he
used the term "extreme" in XP. It refers to things like extreme skiing. You
can helicopter to the top of a mountain and do seemingly crazy things, but
done carelessly they will get you killed. Extreme skiers avoid getting killed
by reducing risk. You break down
all of your movements to their absolute basics and you perfect them. Then when
you are skiing, you stick to those basics, performing them almost perfectly
time after time. You never do anything extraordinary. By doing this you de-
risk each movement to the point that you can chain them together and do
something astonishing. In reality, though, extreme skiing (and extreme
programming) is simply monotonous repetition of basic skills executed to near
perfection.

To me, this is what "agile" means. It means boiling it all down to a small set
of skills that you can perfect and repeating them over and over again without
exception. Because you are never taking any risks, the overall project can
appear to be insane, but still be accomplished.

If you are negotiating what you are doing by saying, "maybe we can get away
with X", or "maybe this will be good enough", then you will not be able to
achieve this kind of agility.

And before someone asks: yes, the code will still not be perfect in the end.
Your goal is the perfection and de-risking of your _actions_ , not your
artifacts. It is a subtle but important point.

~~~
seivan
No risk, no experiments, no innovation and getting caught in a rut of
mediocrity and boredom.

I love experimenting and taking chances with code and ideas. Yeah you end up
spending more nights and it might take more time or it might blow up in your
face but what's the point of being a programmer if you can't push limits.

~~~
mikekchar
This is actually a good point. In XP this is one of the reasons you do spikes.
Often you will throw away all of your rigour when you do a spike and just see
what comes out the other side. Then you throw away the code and reimplement it
using your rigour. That way you get the best of both worlds with a small
penalty of having to do some rewrites. It takes a considerable amount of
discipline, though, to throw away working code when people are screaming at
you to deliver ;-)

~~~
seivan
Haha, I thought it needed more discipline to keep working code as I keep
renaming and rewriting.

------
junke
While I agree with the content, I dislike the straw-man pattern that is so
commonly found: "you can use method A stupidly or method B (my favorite)
intelligently. Hence method B is the best."

Take for example:

> Developers work in total isolation for weeks or months at a time on feature
> branches and then try to merge all their work together into a release branch
> at the very last minute.

You have also option 3: Come up with a _specification_ (not design) for all
the components up front in such a way that it minimizes integration risks.
Split tasks and define interfaces and/or protocols so that the work can be
developed in parallel. Still, try to follow up with the changes in other
branches and keep your work up-to-date with _major_ changes that may happen in
the main branch. Try to merge to see if there is any conflict, but don't push
half-finished work. However, if you have to modify a main component that is
used by others in order to implement your thing, and that change can be merged
back safely, do it as soon as possible.

~~~
brikis98
Author here. FWIW, that sentence "Developers work in total isolation for weeks
or months at a time on feature branches and then try to merge all their work
together into a release branch at the very last minute" is actually the exact
opposite of a straw man: it's a real occurrence I witnessed first hand at a
number of companies (if you want to hear all the gory details about one of
them, check out the book: [http://www.hello-startup.net/](http://www.hello-
startup.net/)). There certainly are companies that are able to make feature
branches work for them the way you described, but based on my personal
experience and interviews with several dozen other successful companies,
feature branches usually lead to disaster.

~~~
mistermann
Using one anecdotal experience as proof is not the exact opposite of a
strawman.

Similarly: "That one hour you “saved” by not writing tests will cost you five
hours of tracking down a nasty bug in production, and five hours more when
your “hotfix” causes a new bug."

These numbers don't line up with any experience I've ever had. In my opinion
the value of testing should stand on its own without having to exaggerate.

~~~
brikis98
> Using one anecdotal experience as proof is not the exact opposite of a
> strawman.

A strawman argument is an argument no one is actually making, but one that's
easy to debate against (to "knock down"). So even "one anecdotal experience"
implies there _is_ someone making that argument and it's not a strawman.
Moreover, as I said above, it's not a single anecdote, but experience with
many, many companies, including ones I worked for directly, those that are the
clients of my company ([http://atomic-squirrel.net/](http://atomic-
squirrel.net/)), and the many companies I interviewed while writing my book.
Of course, the plural of anecdote is not "data", but I'm pretty sure that you
don't need statistically significant data sets to show your argument isn't a
strawman.

> These numbers don't line up with any experience I've ever had. In my opinion
> the value of testing should stand on its own without having to exaggerate.

If anything, it's not an exaggeration, but an underestimate. I can't count how
many hours I've lost to debugging that could've easily been saved by a handful
of automated tests. But perhaps you're a better programmer than I am, and I
envy that your code works perfectly regardless of whether you write tests or
not.

~~~
mistermann
> I can't count how many hours I've lost to debugging that could've easily
> been saved by a handful of automated tests.

Undoubtedly, but be careful you're not suffering from confirmation bias: are
you properly accounting for all the tests you wrote that never detected an
issue, or that didn't save tons of time in a major refactor (because it never
happened)?

> But perhaps you're a better programmer than I am, and I envy that your code
> works perfectly regardless of whether you write tests or not.

[http://blog.dilbert.com/post/141657128476/the-sarcasm-
tell-w...](http://blog.dilbert.com/post/141657128476/the-sarcasm-tell-with-an-
absurd-absolute)

------
paulojreis
I've been having some training on agility and software development at a
European Big Corp™ (where I work).

Here at European Big Corp™, management is worried that our size is making us
slow, and they also want to be cool like the start-up _kids_ in the valley. As
such, we've been using/trying to use Agile, but with very limited success. Big
Corp™'s solution to this predictably low success is to buy thousands and
thousands of Euros' worth of training from Agile- and Scrum-certified trainers
and partners, who - every single time - repeat _ad nauseam_ the same
doctrine and dogmas, much like liturgy in a church. A few examples:

* "So your colleagues are over-estimating every single task - pardon me, user story! - to have time to browse reddit? Estimate with story points, they'll see that their velocity is slow.";

* "So your colleagues don't like resolving bugs? Put a bug chart in the office where everyone can see it, they'll feel guilty and solve them!";

* "So your colleagues don't like to create tests and write half-assed code, totally ignoring definitions of done and the like? Just wait until velocity drops because of technical debt, they'll understand and learn!".

Well, what's my point here? Agile may require "safety" and all the technical
_goodies_ described in this article, but - before that - it requires a team of
committed people. This is the basis of Agile and Scrum. And that's where my
company fails, and that's why Agile - or any other approach - won't work here
until people are responsible and committed. The build is now broken; will a
team of uncommitted people care? They'll push the responsibility around until
someone fixes it.

So, yeah, Agile requires safety; but, before that, it requires commitment. I
feel it's an engineering-type trait, to try to solve human issues with tools
(bug charts, code coverage, continuous integration/delivery), and many
engineering-driven companies seem to play the game that way. But all these
tools won't solve the real problems which explain why a team might be failing.
And if a team is committed, they'll eventually succeed, even without the shiny
tools and cool approaches.

~~~
brikis98
> So, yeah, Agile requires safety; but, before that, it requires commitment.

Completely agreed. There are certainly tools and processes that are more
effective than others (as I discussed in the post), but for a creative
discipline like programming, no process or tool will be effective unless the
creators (the programmers) buy into it. That reminds me of a quote from
_Peopleware_ :

> The maddening thing about most of our organizations is that they are only as
> good as the people who staff them. Wouldn't it be nice if we could get
> around that natural limit, and have good organizations even though they were
> staffed by mediocre or incompetent people? Nothing could be easier—all we
> need is (trumpet fanfare, please) a Methodology.

~~~
jacques_chester
Tellingly, the "high discipline methodologies"[1] page on c2 was kicked off by
listing XP and the Personal Software Process.

(You _can_ create systems for enabling median folk to accomplish things, even
if they are disinterested. We call it "bureaucracy", and it sucks, but a lot
of the time it kinda-sorta works. A bit.)

Anyway, as usual: you need good people, good process and good tools.

 _None of these are substitutable for the others_ , despite what
methodologists, tool vendors and various worthies might tell you.

[1]
[http://c2.com/cgi/wiki?HighDisciplineMethodology](http://c2.com/cgi/wiki?HighDisciplineMethodology)

~~~
paulojreis
> (You can create systems for enabling median folk to accomplish things, even
> if they are disinterested. We call it "bureaucracy", and it sucks, but a lot
> of the time it kinda-sorta works. A bit.)

Yeah, it's true. And the _kinda-sorta_ might be just enough for some companies
(which are too big to fail and have enough leverage to push mediocre stuff to
the market).

On a side note, I've been _feeling_ , lately (and within the context of all
this Agile BS my company tries to indoctrinate me with), that management is
really the _art_ of accomplishing _stuff_ without making large assumptions
about your resources (in software, without assuming any kind of talent,
commitment or responsibility from the team). And, although it seems horrible
to me, there's really a lot of knowledge and value in achieving things even
when you only have a bunch of uncaring, undedicated and uncommitted monkeys
who only care about collecting their paycheck.

------
partycoder
Agile enthusiasts really enjoy applying agile methodologies to functional
requirements such as user stories.

But non-functional requirements are rarely explicit: security, reliability,
redundancy, durability, concurrency, performance, scalability, configuration,
deployment, documentation, logging, monitoring, supervision, maintainability,
construction for verification...

You don't get a user story saying: "as a user i would like my information to
be private" or "as a user i would not like to experience a concurrency bug".

To neglect those non-functional requirements in favor of perceived progress is
not in the company's best interest. A solution that doesn't comply with those
requirements also has a name: a functional prototype. There's a difference
between developing production software and developing prototypes, even if you
consider yourself agile.

Now, as a software engineer, you should be able to identify these requirements
and include them in estimations. You can also ignore them, and project an
image of a highly productive engineer, but some day your code will crash and
you won't have 10 days to fix it. That day you will be miserably fired to the
sound of a trumpet.

~~~
jacques_chester
> _You don 't get a user story saying: "as a user i would like my information
> to be private" or "as a user i would not like to experience a concurrency
> bug"._

Uh, you do. You totally do.

"As a User, I want the site to appear on my browser quickly".

In acceptance criteria, "Quickly is defined as 98th percentile 400ms full
roundtrip to AWS US-east-1".

How do you do this? You write a test. What kind? A performance test. When does
it run? _Every time you check into CI_.

How is this different from standard agile practice?

It isn't.
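Sketched as a CI check, the acceptance criterion above might look something like this. This is a hedged Python sketch: `fetch_page`, the staging URL, and the sample count are all assumptions, and a real suite would measure actual HTTP round trips against a deployed environment.

```python
# A sketch of a performance test that runs on every CI check-in.
# fetch_page is a stand-in for a real request to a staging deployment.
import time

def fetch_page(url):
    # Placeholder: sleeps to simulate network latency, returns a status.
    time.sleep(0.01)
    return 200

def test_p98_latency_under_400ms():
    samples = []
    for _ in range(100):
        start = time.monotonic()
        fetch_page("https://staging.example.com/")
        samples.append((time.monotonic() - start) * 1000)  # milliseconds
    samples.sort()
    p98 = samples[int(len(samples) * 0.98) - 1]  # 98th percentile sample
    assert p98 < 400, f"98th percentile was {p98:.0f}ms, budget is 400ms"
```

Wired into CI, this fails the build the moment a change pushes the 98th percentile over budget, which is the point: the performance requirement becomes just another test.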

~~~
zby
The quote is about 'privacy' and your user story is about 'speed'; these are
different criteria, and 'privacy' is much harder to specify in a user story.

~~~
jacques_chester
That's what I get for reading too quickly :)

I focused on performance because that's one example I've had brought up
several times. And it's probably the easiest one.

I've seen privacy stories too. The style I like is to create a malicious user
and _deny_ them.

    
    
        As a Malicious User
        I want to steal credit card numbers
        So that I can sell them on the black market
    
        Given I have access to the web app
        When I supply malformed URLs
        I am ignored
    
        A/C
        Pentest tool with decent corpus
    

Another approach, also used, is exploratory testing. Security and privacy are
tricky because you're dealing with humans who can react creatively; so the
best test is humans who react creatively.
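The "Malicious User" story above can also be backed by an automated check. A minimal Python sketch, with a toy router standing in for the real web app and a hypothetical corpus of malformed URLs; a real suite would drive an actual deployment with a pentest tool, as the A/C says:

```python
# Toy sketch of automating the "Malicious User" story: only well-formed,
# known paths are served; everything else is ignored (rejected).
def handle_request(path):
    known_routes = {"/", "/checkout", "/account"}
    if path in known_routes:
        return 200
    return 400  # malformed or unknown URLs are rejected, never executed

# Hypothetical corpus; a pentest tool would supply a much larger one.
MALFORMED_URLS = [
    "/checkout?card=' OR '1'='1",
    "/../../etc/passwd",
    "/account%00.php",
]

def test_malformed_urls_are_ignored():
    for url in MALFORMED_URLS:
        assert handle_request(url) == 400, f"{url} should be rejected"
```

As the comment notes, this only covers the inputs you thought of; exploratory testing by creative humans covers the rest.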

------
hibikir
I have done agile development successfully, and yet, the article rings very
hollow for me, because most of its examples have very little to do with the
principles the author tries to explain.

For instance, he talks about working strategies, and puts Google as an
example. Google is a terrible example for almost every other company out
there. They have a gigantic monorepo, which is only manageable because they
have custom tooling from hell. For most of us, the organization doesn't have
said custom tooling from hell: The OSS version of bazel just doesn't work
quite as well out of the box. If you aren't running a highly modified version
control system like they do, check-in performance dies. You might be able to
pretend to be Google if you are tiny instead, but anyone in between will just
meet suffering. It's a bit like trying to match Apple in industrial design.

Then there's the talk about feature flags. They are great, useful things, but
there is also hidden suffering behind feature flagging all the things. There
is much extra gardening required, and a completely different set of headaches
when relatively large changes affecting the same parts of the code are being
hidden behind feature flags. They have weird interactions too! And don't
forget how much fun it is to have intermediate data structures changing for
features that aren't active: He is just glossing over big, big problems, that
happen to be different than the one he discusses.
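A toy illustration of that interaction problem, with hypothetical flag names: each flag looks harmless in isolation, but the combinations multiply.

```python
# Two hypothetical feature flags guarding overlapping pricing logic.
# Each is simple on its own, but with n flags there are 2**n
# combinations of code paths to reason about and test.
from itertools import product

def compute_total(price, flags):
    if flags["new_pricing"]:
        price = price * 0.9          # hidden new discount logic
    if flags["new_checkout"]:
        price = round(price) + 1     # hidden new rounding + fee logic
    return price

# Exercising every combination is the only way to catch interactions:
# 4 paths for 2 flags, 1024 for 10.
for new_pricing, new_checkout in product([False, True], repeat=2):
    flags = {"new_pricing": new_pricing, "new_checkout": new_checkout}
    compute_total(100, flags)
```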

Given that the author is consulting for tiny startups, chances are he hasn't
stared those problems in the face, but they exist, and they are especially
pernicious when your tiny company believes that imitating google will work,
and then reaches 60-80 programmers: Then all the advice above starts to crack,
and it only gets worse when you get into the 200 engineer range.

The difficult part is not being a small company, where any and all practices
will work. It's not that hard being huge either: Just invest heavily in your
own ecosystem. PHP too slow? Rewrite it! Git too slow? Write a new engine for
Mercurial! It is when a company is growing fast, but isn't really large, that
your technical practices can cost you your company, and the article's advice
is PRECISELY the way to get murdered.

~~~
brikis98
> For instance, he talks about working strategies, and puts Google as an
> example.

First, I also listed LinkedIn and Facebook. Second, many small companies use
the same strategies, but most people probably haven't heard of them, so they
don't serve as particularly useful examples.

> Given that the author is consulting for tiny startups, chances are he hasn't
> stared at those problems in the face

If you're going to make an ad hominem argument, you should at least do a more
thorough job of looking up my background :)

I've worked at and with small, medium, and large companies. Every single tool
and technique involves trade-offs, and no one approach will fit everyone,
which is _exactly_ the point I discuss at the end of the post.

------
nxzero
Agility requires prior knowledge, and any action that puts that knowledge at
risk of catastrophic loss is questionable. That said, agents taking on risks
that are expendable without major loss of knowledge or unreasonable
expenditures of resources will outperform competing systems that avoid loss at
all costs.

------
franciscop
> all the teams would use the metric system, except one

We all know which one. Side question: why don't tech companies help push the
USA public towards the metric system and the SI (International System of
Units) in general?

~~~
ktRolster
Because units haven't mattered since the _units_ command line utility was
created. I can use watts/hogshead or horsepower/liter just as easily as
kilometers or miles.

In practical terms, for the UI, it's best to use whatever customers are most
comfortable with, since you don't want to scare them away over irrelevant
issues.

~~~
franciscop
I'm sure a similar utility to _units_ existed in 1998; still, the Mars
Climate Orbiter disaster [1] and the Gimli Glider [2] happened, caused by
engineers and technical people, not the general public. I'm sure small
misunderstandings and errors happen every day because of the disparity.

[1]
[https://en.m.wikipedia.org/wiki/Mars_Climate_Orbiter](https://en.m.wikipedia.org/wiki/Mars_Climate_Orbiter)

[2]
[https://en.m.wikipedia.org/wiki/Gimli_Glider](https://en.m.wikipedia.org/wiki/Gimli_Glider)

~~~
garrettgrimsley
>I'm sure in 1998 there was a similar utility such as units; still the Mars
Climate Orbiter disaster

The utility did exist then, but a reading of the mishap report[0] for that
event makes it clear that NASA was not using it. It looks like someone
assigned a constant using the Imperial rather than the metric representation
of a value. The function that used this constant did not verify the unit of
measure, and the constant was likely just a floating point number, not a data
type that even stored the unit of measure.
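Assuming that description of the failure, a value type that carries its unit turns the silent mismatch into a loud one. A hedged Python sketch; the names and the consuming function are hypothetical, not taken from the mishap report:

```python
# Sketch of a value type that stores its unit of measure, so a bare
# float in the wrong unit is rejected instead of silently consumed.
from dataclasses import dataclass

LBF_S_TO_N_S = 4.44822  # pound-force-seconds to newton-seconds

@dataclass(frozen=True)
class Impulse:
    value: float
    unit: str  # "N*s" or "lbf*s"

    def to_newton_seconds(self):
        if self.unit == "N*s":
            return self.value
        if self.unit == "lbf*s":
            return self.value * LBF_S_TO_N_S
        raise ValueError(f"unknown unit: {self.unit}")

def apply_thruster_impulse(impulse):
    # Consumers always normalize; a unitless float fails loudly here.
    if not isinstance(impulse, Impulse):
        raise TypeError("impulse must carry a unit of measure")
    return impulse.to_newton_seconds()
```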

I haven't read the Gimli report yet, but it's probably the same story: human
error.

Edit: The Gimli Glider [1] was caused by human error.

[0] ftp://ftp.hq.nasa.gov/pub/pao/reports/1999/MCO_report.pdf see page 16

[1] [http://aviation-
safety.net/database/record.php?id=19830723-0](http://aviation-
safety.net/database/record.php?id=19830723-0)

~~~
mpweiher
> probably the same story: human error.

 _Right_. Humans are error-prone. That's why you want to remove places where
humans can make errors, such as unnecessary unit conversions.

~~~
garrettgrimsley
I agree, and I don't mean to argue against moving to the metric system. I only
intended to dispel the notion that software safeguards that would have
prevented these incidents were in place and failed.

Educating developers about this potential issue and ensuring safeguards are in
place to prevent disaster seems like a more readily achievable goal than
inducing a national migration to the metric system.

~~~
Ntrails
Sod national, I'm talking about corporate. If my company mandates that all
code must be written in Lisp, then that's what I'll do. If they mandate all
monetary values must be stored as USD then that's what I'll do. And if they
require that all code must use the metric system - with display options to
convert to imperial? That's. What. I'll. Do.

I don't care what the coding guidelines are - I only care that they exist.
[within reason]

------
jimjimjim
basically: write some tests, write some docs, add some automated tests,
check in often, be sensible and try not to cut corners.

how about: code like you are going to have to remove 2 features and add 3 new
features to your code in 6 months time without the place burning down.

~~~
insanebits
Yep, that is the reality most of the time. It would be easy if you had a
fixed list of features for a project, but they change constantly throughout
development. You get used to it and start planning accordingly.

------
smegel
So do more faster?

Sounds like something pointy-haired boss would say.

~~~
distrill
That's not quite what I got from it. More like: here's a more robust way to
go about writing software that may end up saving you time in the long run.

