
Uncle Bob and Silver Bullets - henrik_w
https://www.hillelwayne.com/post/uncle-bob/
======
Terretta
Here is the post this post is in response to:

[http://blog.cleancoder.com/uncle-bob/2017/10/04/CodeIsNotThe...](http://blog.cleancoder.com/uncle-bob/2017/10/04/CodeIsNotTheAnswer.html)

I don’t interpret the original piece as saying any of these tools and
techniques are not useful.

I interpret it as saying average developer mindset needs to shape up.

I tend to agree. I work with guys who believe in formal reasoning tools (we
collaborated with Amazon’s ARG, for example[1]) and use all these tools. These
guys feel almost compelled to write such tools and evangelize their use in an
effort to help, because the developers around them don’t carefully reason
about their code, and that drives the guys who care crazy.

So you have two camps. People who believe correctness is important, and people
who think continuous deployment is an answer to bugs. The first group is
trying to help the second group through tools that can become part of the
SDLC.

I read “Uncle Bob” as suggesting that’s not enough, it’s not the root cause —
the second group needs to care.

I see nothing in his post that suggests the tools shouldn’t be used. (“I think
that good software tools make it easier to write good software.”) But if the
crowds in his talks aren’t using unit tests, are they really going to be using
TLA+?

First, do no harm; then use tools like Light Table, Model-Driven Engineering,
and TLA+.

I take that as his point, and it seems painfully valid.

1\. [https://www.slideshare.net/mobile/AmazonWebServices/aws-rein...](https://www.slideshare.net/mobile/AmazonWebServices/aws-reinvent-2016-automated-formal-reasoning-about-aws-systems-sec401)

~~~
tom_mellior
> First, do no harm; then use tools like Light Table, Model-Driven
> Engineering, and TLA+.

Looking at some of Uncle Bob's other posts linked from this article, I don't
think that's what he's saying. He (Uncle Bob) goes on and on about discipline,
but at the same time dismisses anything that might actually force programmers
to be disciplined, like type systems that force you to check references for
null before dereferencing them.

For example, in [http://blog.cleancoder.com/uncle-bob/2017/01/11/TheDarkPath....](http://blog.cleancoder.com/uncle-bob/2017/01/11/TheDarkPath.html) he asks: "It is very risky to have nulls
rampaging around the system out of control. The question is: Whose job is it
to manage the nulls. The language? Or the programmer?" and goes on to answer:
"Defects are the fault of programmers. It is programmers who create defects –
not languages." Then he goes ranting about programmers who don't test their
code, and if only they tested, such type systems would not be needed.

Again: He dismisses the one thing that _actually_ forces programmers to be
disciplined and instead just _wishes_ (against all evidence) that they were
disciplined. To me, he seems to be saying "First, do no harm, then use tests,
then nothing."

~~~
falcolas
No tool will ever successfully enforce discipline. If anything, it makes
people more lax.

"I don't have to reason about whether this can be null, the type system takes
care of that for me." "I don't have to worry about leaking memory, valgrind
will let me know."

"I don't have to pay attention to the road, Tesla's autopilot takes care of it
for me."

All the tools do is enforce a loop of compile -> make compiler happy ->
compile.

~~~
adamlett
_No tool will ever successfully enforce discipline. If anything, it makes
people more lax._

But the end goal is not disciplined programmers! It is better, safer and more
timely software.

Your argument applies just as well to safety equipment in cars and on heavy
equipment, and you may be right that the reduced risk of being killed or
maimed makes people more lax. But you can’t argue with the results: Far fewer
people do in fact get killed or maimed!

~~~
mrleinad
Fewer errors get deployed to production code because tools help us catch them
more often? Yes.

Fewer errors get deployed to production code because programmers have a more
rigorous discipline when creating software? Also yes.

Which should a teacher like Uncle Bob (I consider him as such) care more
about? People using tools or processes, or people caring enough about the
discipline?

I'll go with the latter. And if he seems dismissive about the former, it's
because he's trying to teach discipline, which is far more difficult to teach
than using a tool or following some process.

~~~
adamlett
You’re creating a false dilemma. It’s entirely possible to be a disciplined
programmer _and_ use good tools.

And, yes, a teacher should care more about the skills than the tools, but
Uncle Bob is actively against the tools, claiming that they impede learning
the skills. Which is no more true than claiming that automatic transmissions
or anti-spin in modern race cars impede learning how to race.

~~~
mrleinad
I don't think he's actively against those tools. I think he's actively against
claims that those tools actually solve the problems developers deal with,
instead of uncovering the true behavior that lies beneath those problems.

~~~
adamlett
No, he really is against the tools. He is on the record as being opposed to
static typing, optionals etc.

And by the way, no one is making the argument that tools solve _all_ the
problems developers deal with. But to say that they actually solve none of
them is foolish. Optionals, to take one example of something Uncle Bob is
against, prevent a large proportion of what is probably the most common
runtime error, namely dereferencing a null pointer. And yes, Uncle Bob is
right that this comes at a small expense: having to think more carefully up
front about your data and your abstractions. There is also no doubt he is
right that some programmers will feel constrained and will find and use
whatever mechanisms exist to bypass the safety this feature offers. But in
much the same way, there are also some people who feel constrained by seat
belts, hard hats, helmets, dust masks etc. and choose not to wear them, yet
you don’t find many people who would argue that this makes them pointless and
generally a bad idea.
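
To make the optionals point concrete, here is a minimal Python sketch of the
idea; the `Maybe` class and `find_email` helper are hypothetical
illustrations, not any language's built-in optional. The point is structural:
code that consumes the value can never dereference the absent case by
accident, because the only way to get a plain value out is to supply a
fallback.

```python
from typing import Callable, Generic, Optional, TypeVar

T = TypeVar("T")
U = TypeVar("U")

class Maybe(Generic[T]):
    """A minimal optional: the absent case must be handled explicitly."""

    def __init__(self, value: Optional[T]) -> None:
        self._value = value

    def map(self, f: Callable[[T], U]) -> "Maybe[U]":
        # f only ever runs on a present value, so it can never
        # dereference an absent one by accident.
        return Maybe(None) if self._value is None else Maybe(f(self._value))

    def get_or_else(self, default: T) -> T:
        # The only way to get a plain value out: supply a fallback.
        return default if self._value is None else self._value

# A lookup that can fail returns a Maybe instead of a bare reference.
def find_email(users: dict, name: str) -> "Maybe[str]":
    return Maybe(users.get(name))

users = {"ada": "ada@example.com"}
found = find_email(users, "ada").map(str.upper).get_or_else("unknown")
absent = find_email(users, "bob").map(str.upper).get_or_else("unknown")
print(found)   # ADA@EXAMPLE.COM
print(absent)  # unknown
```

Languages like Kotlin and Swift bake this into the type system, so the
compiler (rather than a convention) rejects code that skips the absent case.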

~~~
mrleinad
No, he is not against the tools.

"I have nothing against tools like this. I’ve even contributed money to the
Light Table project. I think that good software tools make it easier to write
good software. However, tools are not the answer to the “Apocalypse”."

[http://blog.cleancoder.com/uncle-bob/2017/10/04/CodeIsNotThe...](http://blog.cleancoder.com/uncle-bob/2017/10/04/CodeIsNotTheAnswer.html)

------
bad_user
My problem with Uncle Bob's opinions is that they are pretty common within the
industry, even if "discipline" really isn't the issue and all those negative
feelings people allude to are warranted. In general, people are made to feel
guilty for not being able to use the shitty tools and techniques we are given.

Let's take for example ORMs, and Hibernate in particular. I can't tell you how
many times I've seen horrors related to ORMs / Hibernate, because of an actual
mismatch between relational data and object graphs, but then developers are
made to feel guilty because they "_aren't using it right_", made to think
that if only they switch this setting on, use this and that annotation, read a
book or two, everything will be alright going forward.

Most such projects fail from a technical perspective, of course, because once
the developers have invested so much time in those shitty techniques,
exploring alternatives becomes unfathomable to them: all of that effort,
which can span years, would end up being unjustified. And I suspect that
Uncle Bob is guilty of this as well.

Let's take TDD as another example. I remember a time when TDD was religious
dogma, which was unbelievable, because I couldn't see actual TDD in the
projects of companies advertising TDD in job descriptions. Much like Agile,
Scrum and all that crap, it usually doesn't work because, as the masters will
quickly tell you, you're "_not doing it right_". I've seen a single project
where TDD was implemented by the book, but the developers were wasting so much
time on testing stuff, including endless discussions, that they were
forgetting the tests weren't the actual artifact they had to deliver.

Now Uncle Bob tells me that we don't have the discipline required. That's not
surprising. I’ve been hearing this argument in some form or another for years.

The surprise is the attention that we are paying to what Uncle Bob has to say.

~~~
_wc0m
Completely agree with this. The post where Martin derides modern, type-safe
languages (Kotlin and Swift) [0] was just unbelievable to me.

 _"Ask yourself why we are trying to plug defects with language features. The
answer ought to be obvious. We are trying to plug these defects because these
defects happen too often.

Now, ask yourself why these defects happen too often. If your answer is that
our languages don’t prevent them, then I strongly suggest that you quit your
job and never think about being a programmer again; because defects are never
the fault of our languages. Defects are the fault of programmers. It is
programmers who create defects – not languages."_

Arrgh! Until we get out of this ridiculous, macho, victim-blaming mindset,
software development will be stuck in the relative dark ages. More, better,
safer languages, please!

[0] [http://blog.cleancoder.com/uncle-bob/2017/01/11/TheDarkPath....](http://blog.cleancoder.com/uncle-bob/2017/01/11/TheDarkPath.html)

~~~
oblio
Somebody should show Uncle Bob this:
[https://youtu.be/fPF4fBGNK0U](https://youtu.be/fPF4fBGNK0U)

My point being: yes, if we were perfect drivers at all times and in all
weather conditions, there would be no more accidents. Luckily for us, auto
designers haven't adopted this perspective :)

~~~
GuiA
The top comment on that video, in light of the points being made in this
thread, is quite ironic:

 _"And because of this 'improved safety' we have millions of drivers out
there who drive even faster and take even more risks, because they think the
new cars are so much safer."_

(of course this comment is silly, the number of motor vehicle deaths per
capita is about a third of what it was 50 years ago:
[https://en.wikipedia.org/wiki/List_of_motor_vehicle_deaths_i...](https://en.wikipedia.org/wiki/List_of_motor_vehicle_deaths_in_U.S._by_year#/media/File:U.S._traffic_deaths_as_fraction_of_total_population_1900-2010.png))

------
zbentley
I'm usually pretty skeptical of Uncle Bob's stuff, but in this case I think
that both Hillel and Bob are a little right. Tools _do_ make a difference, as
Hillel says, and can be used to mitigate risk. The key word there is
_mitigate_ , though. Bob is right in that the root cause of that risk is a
lack of discipline. Such risk will persist, mitigated by tools or not, so long
as that lack of discipline persists.

I think that "discipline" is a bit of a trigger word for some folks, though. It
can be seen to imply a personal failing, but as Bob's post explains
"undisciplined" programming is usually a combination of personal and systemic
problems. The programmers who created the hypothetical bad code shouldn't have
created it, but they also shouldn't have been asked to, or placed under
requirements that made bad code, over a long enough time, inevitable. You can
have undisciplined cultures in the same way as you can have undisciplined
programmers, PIs, requirements, or problem domains.

What's more, all of those are problems that it's important to be aware of. If
anything, Hillel's article pushes a bit far (understandably) in the other
direction, hinting that a failure of discipline is so inevitable as to be a
waste of time to fight against.

It's important to do both: mitigate the impact of failures caused by lack of
discipline, and understand and try to address the causes of that lack. Easier
said than done, of course.

~~~
adamlett
 _I think that "discipline" is a bit of a trigger word for some folks, though.
It can be seen to imply a personal failing, but as Bob's post explains,
"undisciplined" programming is usually a combination of personal and systemic
problems_

I think you’re on to something here, but you’re not all the way there. “Lack
of discipline” is an unsatisfactory explanation for anything, and almost never
the _root cause_ of any problem, contrary to what you claim in your first
paragraph. The reason it is unsatisfactory is that it leaves us with no other
remedy than to try harder. But if the root cause is systemic, then merely
trying harder is bound to fail. If we are to gain any insights, we have to
look closely at the systemic issues that manifest themselves as a lack of
discipline. Saying that the cause of undisciplined individuals is an
undisciplined culture doesn’t really get us anywhere. We have to understand
_why_ a culture can get that way. What it often comes down to in my
experience, are incentive structures inside or outside of that company that
have negative side effects or are downright dysfunctional. Sometimes you can
fix them, other times you just have to accept them and live with them.

------
dustingetz
Uncle Bob (2017)

 _I stood before a sea of programmers a few days ago. I asked them the
question I always ask: “How many of you write unit tests on a regular basis?”
Not one in twenty raised their hands._

Rich Hickey (2011)

 _It passed all the tests. Okay. So now what do you do? Right? I think we're
in this world I'd like to call guardrail programming. Right? It's really sad.
We're like: I can make change because I have tests. Who does that? Who drives
their car around banging against the guardrail saying, "Whoa! I'm glad I've
got these guardrails because I'd never make it to the show on time."_

[https://github.com/matthiasn/talk-transcripts/blob/master/Hi...](https://github.com/matthiasn/talk-transcripts/blob/master/Hickey_Rich/SimpleMadeEasy.md) incremental search "passed all" for the rest

~~~
agentultra
Disclosure: I _didn't_ like the talk.

In Uncle Bob's view, which I assume has some popularity, unit tests are a form
of software specification. It's an informal one, since the specification can
only test specific examples of behavior from a predetermined state, and the
people responsible for its maintenance are the programmers. And if you're not
disciplined about how you write these tests, you end up in a situation where
you're writing the rules for your own success and the specification itself
contains errors.

I look at specifications as blueprints. If you're planning a sky-scraper or
mass transit system you use blueprints. If you're building a shed in your
backyard you might just use a quick sketch on a napkin. I think the majority
of commercial software is somewhere between napkin and blueprint: building
houses, to lean on a well-trod analogy.

You need something more specific and robust than a sketch on a napkin but not
quite as formal as a blueprint for a rail switching system.

Regardless, unit tests are not a very good specification format. They're only
going to show you examples where your software meets your requirements.
However, we all know that software is complex and hard, and that systems are
more than the sum of their unit test suites: there are operators involved,
conditions and constraints on operation, failure modes. You need a tool to
help you design the system and check your work for you, because _these
systems are getting too complex to reason about without solid abstractions_.

I think Rich Hickey completely missed all of this when he said that in his
talk. If we were to modify the analogy to fit the view of testing described
above, the guardrails would have holes in them. All over the place. You'd
only be safe in the cases where the car "bumps" into the correct spots where
you thought to put a test.
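
To make the holes-in-the-guardrails point concrete, here is a toy Python
sketch (the `clamp` function and its tests are hypothetical): the
example-based suite is green, yet an input nobody thought of drives straight
through a gap.

```python
def clamp(value, low, high):
    """Clamp value into [low, high] -- with a deliberate bug."""
    if value < low:
        return low
    if value > high:
        return low  # bug: should be `high`
    return value

# The "guardrails": example-based tests that only bump the spots
# we thought to cover.
assert clamp(5, 0, 10) == 5    # in range: passes
assert clamp(-3, 0, 10) == 0   # below range: passes
# No test exercises value > high, so the suite stays green even
# though clamp(42, 0, 10) returns 0 instead of 10.
```

A property-based test ("the result is always within [low, high] and equals
the input when the input is in range") would have found the hole on the first
run.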

However, I do agree that unit tests alone cannot be relied upon as the only
way to write "correct" software, where "correctness" is determined by its
adherence to its specification and by the implementation following
state-of-the-art practices. One has to do more than just use unit tests to
achieve that.

~~~
bad_user
Building houses is not the right analogy for building software, because the
techniques used are known, and most new buildings are going to be exactly like
the millions of other buildings that came before them, even when speaking of
sky-scrapers. There's very little innovation in it.

Building a new building is usually akin to _copying/downloading software_,
only more expensive. Art is involved in designing buildings sometimes, and
there is some innovation for the engineers who want to break records, e.g. in
building the world's tallest buildings or in building them faster, but I think
those are a minority.

I don't think Rich Hickey missed the point at all. If I ever saw an actual
_software architect_ , then Rich Hickey is one.

I think you missed his point of view, because Rich Hickey is also the guy that
helped create Clojure's Spec and is a proponent of property-based testing,
which Clojure's Spec now enables.

In fact Rich Hickey, in "Simple Made Easy" and other presentations, likes to
talk about the need for upfront design, for caring about the delivered
artifact, for thinking deeply about architecture, for simplification. He's in
no way proposing a napkin-driven approach, quite the opposite ;-)

------
Tehnix
Well written! I’ve always cringed when reading anything Uncle Bob writes. It
almost seems like his entire existence depends on the acceptance of unit
testing. The unjustified dismissal of tools, especially type systems, seems to
me like something only a novice would do when evangelizing their favorite
language (we’ve all been there, right?...), before getting more experience
with other methods and approaches.

~~~
Delmania
I never understand why Martin gets the kind of attention he does. As far as I
can tell, his only significant contribution to the technical community is the
Agile manifesto. Outside of that, he's a typical consultant in that he's got
books, training courses, etc. However, I have yet to find a significant
publicly known project he's been involved in where he's had to put his
principles into practice.

~~~
chrisseaton
There are definitely two types of very well known speakers.

People who have worked on some really notable project. Linus Torvalds, David
Heinemeier Hansson, Joel Spolsky and so on. I know these people are worth
listening to because I can see that their code is a success.

Then there are people who I have no idea what they have ever written at all.
Robert Martin, Martin Fowler and so on. I don't know if I should listen to
what these people say at all. Does it work? What software projects have they
made a success that I can look at?

I suppose in the latter case it's usually closed-source enterprise code that
they've consulted on? Well, if I can't look at it, I'm not sure I'm prepared
to just take their word that it's successful.

Is this reasonable of me?

~~~
rootlocus
> People who have worked on some really notable project. Linus Torvalds, David
> Heinemeier Hansson, Joel Spolsky and so on. I know these people are worth
> listening to because I can see that their code is a success.

I agree with your examples, Linus Torvalds in particular. However, I would not
consider Mark Zuckerberg, Steve Jobs or Bill Gates worth listening to on a
technical level just because their products are a success.

~~~
jerf
Bill Gates was actually a pretty decent programmer. His technical insights
peter out somewhere in the 1980s or early 90s at most, but he's had some
interesting things to say about those eras. (Interesting in the sense that
it's someone speaking about the eras that was there in a big way, not
necessarily because he's got surprising insights into software engineering or
something.)

~~~
etblg
Do you have any links to his insights? They sound pretty interesting but I'm
not sure what to search for to find them.

~~~
Delmania
[https://www.joelonsoftware.com/2006/06/16/my-first-billg-rev...](https://www.joelonsoftware.com/2006/06/16/my-first-billg-review/)

~~~
rootlocus

      He didn’t meddle in software if he trusted the people who
      were working on it, but you couldn’t bullshit him for a
      minute because he was a programmer. A real, actual, 
      programmer.
    

I stand corrected. Thank you for the read.

------
AlexDenisov
Discipline is a good thing, but it does not scale.

I've been working on a tool for mutation testing for C/C++[1] for a few years
now. During this time I've thought a lot about tests, tools, people, etc. Here
is the summary of my musings:

Quality of software depends on several factors: the hardware, operating
system, programming language, the tooling used to build the software in the
first place, etc. For decades we have tried to improve each of those aspects.
But the most significant and most harmful factor is the human. Naturally, we
cannot improve developers, but we can decrease the destructive influence of a
human being by using better tooling.

[1] [https://github.com/mull-project/mull](https://github.com/mull-project/mull)
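
For readers unfamiliar with mutation testing, here is a deliberately toy-sized
Python sketch of the idea (Mull itself works on C/C++ at the LLVM level; the
source-string mutator and weak test suite below are hypothetical stand-ins):
mutate the code under test, re-run the tests, and any mutant that survives
exposes a hole in the test suite.

```python
# Toy mutation testing: swap an operator in the source of the function
# under test, re-exec it, and see whether the test suite notices.

SOURCE = """
def price_with_discount(price, rate):
    return price - price * rate
"""

def run_tests(namespace):
    """A deliberately weak test suite: it never checks a nonzero rate."""
    f = namespace["price_with_discount"]
    try:
        assert f(100, 0) == 100
        return True   # tests pass
    except AssertionError:
        return False  # tests fail (mutant killed)

# One crude mutation operator: '-' -> '+'
mutant_source = SOURCE.replace("price - price", "price + price")

original_ns, mutant_ns = {}, {}
exec(SOURCE, original_ns)
exec(mutant_source, mutant_ns)

original_passes = run_tests(original_ns)  # True, as expected
mutant_survives = run_tests(mutant_ns)    # True -> the mutant SURVIVED,
                                          # exposing the missing test case
```

Adding a test like `assert f(100, 0.5) == 50` would kill this mutant, which
is exactly the feedback loop a mutation testing tool automates at scale.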

~~~
smacktoward
Or alternately: we _can_ improve developers, but doing so is hugely expensive
and time-consuming, and while we can improve them we can never make them
_perfect_. So making tools that can be used safely by imperfect developers is
always going to be simpler and more cost-effective.

~~~
AlexDenisov
Exactly, this is what I mean by "does not scale." It takes many years, if not
decades, for one to become a decent developer. We simply cannot afford this.

~~~
dorian-graph
We can't afford it with the current pipeline, but perhaps with a better
pipeline we can. Perhaps something more formal, akin to the older engineering
disciplines? Or to doctors?

They're doing incredibly important jobs, and spent an appropriate amount of
time learning and proving themselves.

Software is now completely critical in so many aspects of the world, and
becoming more so. When people's lives are at risk, we can certainly afford
'many years'.

Yes, not everyone is working on safety-critical systems, but more and more
people are. It's not just safety in the physical sense either, but emotional
and mental well-being too. Facebook isn't potentially affecting lives the way
a doctor would (more 'physical'), but social media definitely has emotional
and mental effects.

~~~
AlexDenisov
I think this is a fallacy. I don't believe another pipeline would necessarily
produce better doctors. They also make mistakes; sometimes those mistakes cost
people their lives.

------
OmIsMyShield
Not directly on topic - but I was a hobbyist silversmith as a teenager. I
learnt many of my skills from a stern old German bloke at the silversmithing
club.

He liked to say "Gutes Werkzeug ist die halbe Arbeit" (or sometimes: "Gutes
Werkzeug, bessere Arbeit"). A good tool is half the work, or: Good tools,
better work.

This idea has informed my coding decisions too. Evaluating the worth of a tool
is more complex in software than in silversmithing, though.

------
nnq
I think the thing is that we have _a different level of problem_ in software,
one that can't be solved by "methodologies" or "patterns" or "discipline" or
languages...

We have a _learnability_ / _teachability_ / _knowledge transmission_ problem
that always disguises itself as a technical problem... If you could simply
"distill" and "transmit" the relevant design knowledge to the person requiring
it in a reasonable amount of time, most software quality problems would
become trivial.

We supposedly do "knowledge work"... but we learn mostly from experience, like
_cooks_ or _handcraft artisans_ or whatever, instead of how civil engineers or
auto engineers do. (And I suspect the problem is not just immaturity of the
field... it's more that "the space of possible solutions" in software
architecture is _so insanely big_ that there's no way to learn/label/teach
what the engineers need to know... we have the requirements of engineering,
but the infinite solution space of mathematics... and training successful
mathematicians is something _we know we have no clue how to do_... only here
we also expect the "untrainables" to have the professional conduct of
engineers coupled with the depth of thought of philosophers.)

------
neilwilson
The problem is that the more tools you add, the more mistakes you make in the
tools.

This is the same debate we had previously about formal methods. They don't
eliminate errors; they just move them into the testing systems and, where
there are several, into the gaps between them. Communication becomes more
difficult between the people involved, and you rapidly start looking for the
Chief Programmer Solomon who knows everything. Then you get communication
failures, because not everybody is Einstein.

So it is always a trade off.

And it would help if we stopped calling automated testing "testing" and
described it as what it is: a domain-specific type system checker.

If we realised we always build a system and a compiler for that system we
might stop reinventing the wheel so much.

~~~
taneq
I don't get why automated testing is a "domain specific type system checker".
Not saying it's not, I just don't get it. Maybe that's because my automated
tests are pretty free-form and tend to be more about throwing mud at an
algorithm and seeing if the results stick.

Your last line is bang on, though. We build a system, and a compiler (or
processor) for that system. It doesn't matter if everything is "data driven"
if your data is code and the thing it's driving is an interpreter.

~~~
marcosdumay
> I don't get why automated testing is a "domain specific type system
> checker".

That's because he was talking about formal verification. Testing, as the name
is normally used, is an "ad-hoc domain-specific heuristic type system
checker".

------
tenpoundhammer
I'm sure this comment will get buried because I'm so late to the thread but oh
well.

"Later, the guilty programmer thanked the lead developer for protecting him.
He said: “I knew I shouldn’t have reused that code, but we were in a rush.”
She smiled at him and told him not to worry about it."

This quote describes the real issue. Uncle Bob is treating software
development as some kind of unified profession and making apples-to-apples
comparisons across all software projects. But software development is not like
being a civil engineer or being a doctor. Writing software is totally
different from heart surgery or bridge building.

Software development is applied differently to all projects, and all projects
have varying needs regarding quality and cost. If you are building a bridge,
there is a high standard of quality every time. If you take a shortcut, you
may kill people, or, more likely, never pass inspection.

In software, sometimes you could kill someone, but far more often the quality
of the software may not matter. There are times when it makes sense to cut
corners, there are times when it makes sense to do a quick hack, there are
times your boss makes the decision to do something the worst way possible and
fix it later, and sometimes that's ok. There are other times when it's not ok.

Every person on earth has an ethical obligation to not bring physical harm to
others and should avoid doing so even at the cost of employment. But no one
has an ethical obligation to write "good" code just for the sake of it. There
are times when the best thing for everyone is writing software badly.

Maybe software engineers should have some widely disseminated ethical
principles, but they have to be realistic and take into account the entire
spectrum of software development. We have different ethical standards
depending on the type of project we are working on. If you are a developer
working on software for Airline Autopilot you have different ethical
obligations from a developer working on Angry Birds 10.

------
ant2017hotpotat
I think the author completely misses the point of Uncle Bob's article for two
reasons.

First, in "Tools are not the Answer", it is clearly stated that tools are
valuable.

Second, Uncle Bob's article is not telling programmers to be better. It is
telling programmers to be professional. It is not like telling drivers to
drive better. It is like telling drivers to stop texting while driving.

~~~
unclebobmartin
:-)

------
NumberSix
Both "Uncle Bob" and the author are selling extremely complicated, cumbersome,
expensive ways to develop software -- great for them as consultants if they
can pull it off, not necessarily great for their customers. Giant
near-monopolies such as Microsoft or Google can afford these methods, and may
even benefit from this sort of super-bureaucratic "gold plating" of software.
They certainly benefit from convincing small, non-monopoly potential
competitors to adopt slow, expensive, bureaucratic methods that smaller firms
cannot afford (they run out of money trying to create perfect software
instead of software that works and gets the job done).

~~~
timgebrally
Perhaps if the methods are sound (even if expensive), then someone can step up
and make tooling, or refine them, so the methods are more accessible to
smaller firms. Uncle Bob's opinions might be discouraging people from
exploring them and finding ways to drive down the cost of adopting these
better methods.

~~~
Jtsummers
> Perhaps if the methods are sound (even if expensive) then someone can step
> up and make tooling or refine them so the methods are more accessible to
> smaller firms.

To his credit, Hillel is trying to evangelize (in general, not directly
through the post linked) a particular tool (TLA+) whose creator (Leslie
Lamport) wants to do exactly this. I don't know how big Hillel's employer is,
but from what I understand it's nowhere near the biggest engineering firm in
number of technical employees. Lamport's current objective with TLA+ is to get
people to learn concepts for formal _description /specification_ of systems,
and he happens to provide an effective tool for testing those system specs.

Check out his TLA+ tutorial at [https://learntla.com](https://learntla.com).

If you want an example of solving concurrency issues, jump to here:
[https://learntla.com/concurrency/processes/](https://learntla.com/concurrency/processes/).

~~~
hwayne
We have about ten engineers, so nowhere near the size where people consider
formal methods "appropriate". Nonetheless it's still been incredibly useful
for our work.

I'm a pretty huge evangelist of TLA+, but I don't think it's the silver bullet
of software correctness. It just happens to be the tool I'm most familiar with
and the one I thought could benefit most from a free guide. If people start
widely using TLA+, I'll be ecstatic. If people ignore TLA+ but start widely
using Alloy, I'll still be ecstatic. Software correctness is a really huge
field and there's lots of really cool stuff in it!

Speaking of making methods more accessible, I'm working on a tutorial about
Stateful Testing. Hypothesis
([https://hypothesis.works](https://hypothesis.works)) is an absolutely
incredible property-based testing library for Python, and I think it could
potentially make PBT a mainstream technique. One of the more niche features is
that you can define a test state machine that runs by randomly selecting
transition rules and mutating your program state, then running assertions on
the new state. It's really neat!
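
Hypothesis's stateful API automates the generation and shrinking; purely as
an illustration of the underlying idea, here is a hand-rolled miniature, with
a hypothetical `BoundedStack` as the system under test: random transitions
mutate the system and a trusted model in lockstep, with an invariant checked
after every step.

```python
import random

class BoundedStack:
    """System under test: a stack that silently drops pushes at capacity."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.items = []
    def push(self, x):
        if len(self.items) < self.capacity:
            self.items.append(x)
    def pop(self):
        return self.items.pop() if self.items else None

def run_stateful_test(seed, steps=200, capacity=5):
    rng = random.Random(seed)
    sut, model = BoundedStack(capacity), []
    for _ in range(steps):
        if rng.random() < 0.6:           # transition: push
            x = rng.randint(0, 99)
            sut.push(x)
            if len(model) < capacity:    # the model applies the same rule
                model.append(x)
        else:                            # transition: pop
            expected = model.pop() if model else None
            assert sut.pop() == expected
        assert sut.items == model        # invariant after every step
    return True
```

Running `run_stateful_test(seed=0)` walks 200 random transitions; because
the model and the stack are updated under the same rules, the invariant
holds on every step. Hypothesis does this with many generated runs and then
shrinks any failing sequence to a minimal reproduction.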

------
endorphone
From the original post that is being replied to-

"The author did not interview software experts like Kent Beck, or Ward
Cunningham, or Martin Fowler."

I mean no slight, but are Beck and Fowler really the experts of the field?

This is akin to the relationship guru who has been through three divorces --
often just declaring yourself an expert fools enough people into thinking you
are.

~~~
PaulHoule
Fowler wrote some good books ten to twenty years ago. Today he writes as if he
is not sure if he gets microservices or not, which might prove he is honest,
but he is not giving the bold prescriptions which will make money.

Beck is up there with Tony Robbins so far as I am concerned. Cunningham is
famous for running a Wiki which was down for two years.

------
afpx
This is unfortunately a very defensive article. The author would do better if
he chilled out a bit.

I get why he's defensive: Martin calls his baby out in his article. And,
although I think that Martin himself comes across as defensive (he is alarmed
that the journalists didn't interview 'real' experts like himself), I also
don't think he is wrong. In fact, as a practitioner for over 20 years, I've
witnessed first-hand the gradual degradation of software discipline. I've seen
too many companies do exactly as Martin describes, and it causes many issues:
from critical bugs to developer burnout.

It's important to look at the root causes. I surely don't have all of the
answers, but I do note the rise of the 'startup culture' as a major issue.
Even at big organizations, employees are now expected to run projects using
extremely low resources. The companies encourage this behavior saying things
like, 'think of your program as a startup'. 'Startup' is the new executive
buzzword.

Startups are obviously great, but they take on extreme amounts of risk. And,
since people only see the survivors, they naturally draw the (erroneous)
conclusion that startups are silver bullets with guaranteed returns on
investment. Therefore, every project should be a startup! And, with that,
we see skeleton teams with huge responsibilities and expected return but a
deficit in resources. Today, an engineering team of 4 people is expected to do
the work of 16 people circa 1995. Team leaders accept this pain and manage it
by sweeping issues under the rug. As a side note, it tends to cause people to
be more unethical.

The examples that the author uses (Amazon, Microsoft) are not representative
of typical software organizations today. Amazon and Microsoft have huge
amounts of resources available. They have veterans and researchers able to
take things like TLA and apply them to real projects. They have the ability to
manage their risk. But, a typical team is strapped for time and they must rely
on tools to save them at the expense of software discipline.

Anyway, my point is that investors want huge returns on investment. This causes
sales and marketing teams to over-promise and under-price, which forces
delivery teams to drive their projects with extreme risk. Extra people are too
expensive, so you just wing it. If you survive, maybe you'll get to build it
right the second time. If you fail, well, rinse and repeat at the next place.

~~~
arwhatever
I can't imagine many other industries as rife with false economies as Software
Engineering. There are so incredibly many opportunities to appear to get
things done quickly today, only to find that you've hamstrung your future
self.

------
hergin
The article focuses on Uncle Bob asking about "unit tests" in "Tools are not
the answer". But I believe Bob was asking a rhetorical question there: if
someone isn't writing unit tests in the first place, there's no point asking
about more advanced kinds of testing; fewer and fewer hands would go up each
time.

------
bjpbakker
I stopped reading Uncle Bob's articles a few years back, after he started
making money by telling programmers they need to be more disciplined to
prevent bugs.

While he's not exactly wrong, he dismisses any solution other than testing
(such as strong type systems). One reason could be that he doesn't make any
money from those solutions.

All his blogs and talks seem to send the message: "pay for my content and
learn how to be a good programmer by writing tests". Again, not necessarily a
bad thing, but there might be other ways than only hand-crafted tests.

------
ryanmarsh
Humans are really, really bad at static analysis. A 486 with 8 MB of RAM could
outperform any human at this task.

The solution is more and better tools.

------
eric_b
I agree with the author of the article to an extent: there are lots of things
about Uncle Bob's advice I don't care for. But...

I agree with Uncle Bob's take that discipline is lacking in software
development today. It's not that people won't make mistakes if they are more
disciplined - of course they will - but it's the kind of mistakes people are
making that might change. I see code written every day that, when I give even
a cursory glance to the PR, I can tell there are issues. The developer didn't
even do the barest due diligence to make sure they didn't break things.

Formal verification and other software correctness tools can be great in the
right circumstance. I think anyone doing life-critical software should be
using something above and beyond the basics.

But, for most projects I see - formal verification isn't really a solution to
quality issues. Having developers who give a damn and have a little paranoia
and pessimism ("how is this thing going to fail on me?") would do 100 times
more for the outcomes. In my mind the best code is the kind you didn't write,
and the best infrastructure is the kind you didn't need to stand up, and the
best distributed system is no distributed system.

------
ducttape12
I've thought for a while the real solution to bad software is simple: hire
intelligent developers who care

Only problem is, they are few and far between... (in my 10+ years in the
industry, I've worked with only 3 people who I'd put into this camp)

~~~
Cthulhu_
Those intelligent developers who care need to make sure the less intelligent,
more indifferent developers up their game. Tools and processes are put in
place to mitigate risk from that second group, but it's the seniors that need
to be able to transfer their mindset to the others. Which may not always work,
mind you.

What should be avoided if possible is the hero developer; the risk there is
that the hero dies or moves on and leaves behind unmaintainable work.

~~~
mirceal
The risk? Hahaha. It’s the reality. The hero eventually moves on and you’re
left holding the bag.

------
jswizzy
I'm still going to read the hell out of all of Uncle Bob's books. Every
programmer has his faults, and programming is more of an art than a science at
this time in history.

------
ncds42
Uncle Bob's true value to me was to teach me how bad of shape the software
world really is in. Look at any engineering discipline. There are standards,
there's a rigorous design process, there's checks from more experienced
engineers, there are best practices, etc. Software is like the wild west, and
if people don't realize that and try to create some sort of standard, then
eventually someone with no knowledge of the field will. He's trying to
encourage people to understand the problem and think towards creating rules so
everyone can perform and collaborate better. Eventually software will become
like other more established fields but it's up to our generations to take us
there.

~~~
AlexCoventry
> if people don't realize that and try to create some sort of standard, then
> eventually someone with no knowledge of the field will.

I don't think so... No one's hunting down and calling to account the engineers
responsible for the Equifax breach, nor should they. The problem is blatantly
a leadership and resource-allocation issue, and it is being treated as such.

~~~
ncds42
Bad software has done worse things than release private info. Just a few:
failed space missions that cost NASA credibility, killed hospital patients,
bankrupted Knight Capital, almost caused World War III via a false-positive
nuclear-launch warning, caused widespread blackouts, and killed soldiers (look
up the Patriot Missile failure).

We as developers are just waiting for something so terrible to happen that
someone is forced to take notice.

------
shadowmint
nah.

I think you could quite easily quantify the effect various tools (adding unit
tests, using a REPL, static vs. dynamic typing, etc.) have on the various
aspects of your code base:

\- How quick/easy is it to add new features?

\- How often do you add new bugs?

\- How long does it take to fix bugs?

...but I think you could _also_ look at the impact individual bad actors (the
'dont care' programmers) have on the same code base.

Now, because the 'don't care' programmers hack out rubbish at the speed of
light, it may superficially appear that the net impact on the code base is
balanced: more features, more bugs... but that rubbish slowly spreads into the
code base and incrementally triggers a cascading bug-creation rate for _all
the programmers_.

Now, if you want to suggest that tools can add enough value to overcome the
negatives from having rubbish programmers, I'm all ears, because I've never
seen a tool that can do that. The bad programmers just abuse the tools or
ignore them.

So... in that context, I think it's fair to say more tools achieve nothing
much when bad programmers and bad programming practice result in such
catastrophically bad outcomes for software.

Maybe, if you can make your tools nice enough and add enough processes, you
can kind of force bad programmers up a notch. I appreciate that; but the
question _really is_, why are there so many programmers who are so rubbish?
Why do they never seem to get better? Why don't they care?

Do you really think you engineer a solution to that? ...because I think it's a
social problem, not a technical one.

Maybe some tools can help, and maybe not everything Uncle Bob says is
gospel... sure; but I think this is just stupid:

> Uncle Bob gives terrible advice. Following it will make your code worse.

I think history has objectively shown this is false; but of course, you're
welcome to have your own opinion without any substantive basis other than that
opinion itself if you like.

~~~
Ace17
> Now, if you want to suggest that tools can add enough value to overcome the
> negatives from having rubbish programmers, I'm all ears, because I've never
> seen a tool that can do that. The bad programmers just abuse the tools or
> ignore them.

So much this.

This perfectly matches my experience (and desperation). Bad programmers will
do their best to work around tool-imposed limitations that they don't
understand. And these workarounds will generally increase code complexity!

\- Force a minimum coverage ratio: they will create useless tests that don't
check anything (without assertions).

\- Force a maximum on function sizes: they will split their methods halfway
and pass the whole local state through parameters.

\- Force a "no-global variable" policy: they will create singleton classes.

\- Force a "no public member variable" policy: they will create set/get
methods for their private members.

I'm not advocating for any of the above restrictions; they do more harm than
good. Please, give me something I can put into my CI server that checks for
bad coding practices, something that checks that the code stays SIMPLE.

Fact: you can inject a new, disabled feature into a C codebase with zero risk
of introducing any bug ... by wrapping your 50 modifications of the codebase
in #if/#endif blocks. Guess what? This will make the code a lot more complex,
will slow down every developer, and will increase the likelihood that they
create bugs in other features.

Finding bugs is only the tip of the iceberg. What I want is to decrease the
probability of their appearance by keeping the code simple. Bugs and code
complexity are correlated. Only targeting bugs will break this correlation,
and we will be left with correct but _rigid_ code (very hard to modify). Even
if my compiler perfectly prevents me from breaking anything, I still want to
be able to add a feature in a reasonable time.

Let me summon Rich Hickey here: Simplicity Matters, and "nobody drives on the
highway by bumping into the guard rails".

------
dep_b
I think Uncle Bob is a great read just because he'll challenge some of your
opinions and at least forces you to reason against his arguments. You won't
have that if you only visit
[https://swiftisfreakingawesomeblog.com](https://swiftisfreakingawesomeblog.com)
or
[http://www.igetpaidtolikethislanguage.org](http://www.igetpaidtolikethislanguage.org)
where everybody repeats your opinion.

There's not a lot in Clean Code that I wouldn't recommend to most programmers.

~~~
halostatue
I strongly recommend against reading Clean Code, because it only applies to
Java-like languages, and it doesn't really make good advice anyway. It doesn’t
even properly apply to C++, and certainly doesn’t apply to languages with
different models (like Ruby or Python).

Uncle Bob has made a very successful business out of saying things that sound
profound, become popular, but are at best the technical equivalent of fortune
cookies, and at worst are profoundly bad advice.

Seriously, in TypeWars[1], he says “My own prediction is that TDD is the
deciding factor. You don’t need static type checking if you have 100% unit
test coverage. And, as we have repeatedly seen, unit test coverage close to
100% can, and is, being achieved. What’s more, the benefits of that
achievement are enormous.”

This is so profoundly wrong on so many levels that it isn’t funny. How do you
measure coverage? If you’re doing it by lines hit (which is how most dynamic
language coverage tools do it), then you’re not getting any sense of how good
your boundary conditions or branch coverage is. If you have a better way of
measuring coverage (typically branch coverage), then maybe _aiming_ toward
100% is good, but hitting 100% coverage is a complete waste of time. (I aim
for ~85% coverage, depending on the code base and how much is boilerplate
code.)

In Ruby, do you really need a unit test for:

    
    
        attr_accessor :foo
    

If you don’t at least _access_ `foo`, SimpleCov will say that line isn’t
covered. You know what? It doesn’t _need_ coverage, because when you write a
unit test for that you’re writing a unit test for the language.

Unless you’re actively adding “attack tests” to make sure you have all of your
boundary conditions covered, and at least know you can abort or recover from
an error condition properly, your 100% coverage means absolutely nothing.
Property Checks are better for this. Fuzzing and mutation tests matter to help
find this. Even static typing (of which I’m _not_ actually a fan, because 15
years with C++ makes it clear that static typing is not that useful in weakly
typed languages) can help find certain types of boundary conditions (look to
Ada, where if you declare a type as a range 1..100, _only_ values within that
range, or variables that can never exceed that range, can be used with that
range).

And people take Uncle Bob seriously, even when he posts things that are just
asinine attacks on people who have taken him to task when he is wrong (which,
IMO, is far more often than he’s right).

[1] [http://blog.cleancoder.com/uncle-
bob/2016/05/01/TypeWars.htm...](http://blog.cleancoder.com/uncle-
bob/2016/05/01/TypeWars.html)

~~~
lgunsch
> it only applies to Java-like languages, and it doesn't really make good
> advice anyway. It doesn’t even properly apply to C++, and certainly doesn’t
> apply to languages with different models (like Ruby or Python).

I disagree strongly. Sure, there are a very few parts that are more relevant
to Java. However, the vast majority of the book is easily applied to almost
any language. There aren't many languages that don't have variable names,
function/method names, modules, etc. Some of the OOP principles wouldn't
easily apply to C, or other languages like it, but a lot of the other advice
applies easily.

------
OliverJones
Defects in code do indeed come from sloppiness. Of course, real people are
sloppy sometimes. So real systems need resilience in depth to help resist
system effects from localized sloppiness.

The same set of arguments applies to security. The last few years of history
teach us that nobody, not even state actors with deep pockets, can keep
secrets forever. Don't we need to change our thinking about that, and make it
so security breaches aren't uniformly disastrous? It's all about resilient
SYSTEMS dealing with localized sloppiness.

------
projektfu
"Don't test the UI" is not equivalent to "No end-to-end testing." The XP
people, for example, prefer to make UI a thin shell over a programmable
interface that can be easily tested. Then, visually confirming the UI operates
the interface is enough.

I admit I don't read much Bob Martin, but does he say you shouldn't use
automated functional testing, e.g., Fit or Cucumber?

~~~
mdpopescu
As far as I know, he actually wrote FitNesse and recommends Cucumber.

------
newsreader
After reading the post by Hillel Wayne I decided to read Uncle Bob's article
that is mentioned in the post ([http://blog.cleancoder.com/uncle-
bob/2017/10/04/CodeIsNotThe...](http://blog.cleancoder.com/uncle-
bob/2017/10/04/CodeIsNotTheAnswer.html)). After reading both posts, and after
reading most of the comments on this thread, I have to admit that my
inclination is in favor of Uncle Bob. I agree with most of the points Uncle
Bob made in his blog post.

Tools are not the answer. TDD is not the answer. Agile is not the answer. More
programming languages are not the answer. There is no silver bullet. For me
the answer is to do the best I can, with the tools I have.

Never stop learning, always strive to improve my skills. And yes, it takes
discipline.

~~~
mbizzle88
I disagree. Neither of them is saying anything is "the answer". They are
merely putting emphasis on different things.

From that perspective, I would say that Uncle Bob is more wrong. It doesn't
make sense to emphasise discipline before best practices. Discipline is about
following through on things and avoiding shortcuts. That means in order to
have discipline, you first need a procedure you are meant to follow in the
first place.

Additionally, Uncle Bob seems to look at things from the perspective: "If
you're undisciplined, better tools can't make up for that." Sure.

But, IMO, Hillel's perspective is a lot more useful: "If you are disciplined,
better tools will help you do better work."

~~~
newsreader
I agree that neither of them is saying anything is "the answer". I read my
comment and don't believe I implied it either. What I'm stating is my opinion:
For me the answer is to do the best I can, with the tools I have.

Never stop learning, always strive to improve my skills. And yes, it takes
discipline.

------
arwhatever
"There is not now, nor has there ever been, nor will there ever be, any
programming language in which it is the least bit difficult to write bad
code." \- Unknown

------
hannofcart
If all we needed to write robust software was discipline, we wouldn't need
high level programming languages. All we'd need is the documentation of the
instruction set that we want to target.

Why are we still taking this geezer seriously?

------
mishkovski
I think that automated testing is superior to any other tool/technique that we
can use to avoid mistakes in programming. Can anyone argue with this? What is
the alternative?

~~~
bad_user
Static typing combined with functional programming is superior to automated
testing, although the three are actually complementary, because automatic,
property-based testing works better if you have type info (such that the
library knows what values to generate) and referential transparency, because
side effects can be awkward to specify as laws.

So it is not a coincidence that Haskell developers have innovated in testing
tools and are now sold on QuickCheck; the two work well together ;-)

Being the Scala developer that I am, I love ScalaCheck, but I’ve also used
jsverify for JavaScript ... not as good as ScalaCheck or QuickCheck, due to
not having static typing and type classes btw, but did the job.

~~~
mishkovski
Static typing or functional programming can protect you only from some types
of problems. With automated tests you have the flexibility to define the
problem you are verifying your code against.

~~~
lkitching
By the same token, tests can only verify certain properties of a function. How
could you test that a function is free from side-effects? How could you test
it never returns null?
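The point generalises: tests can only sample the input space, so they cannot establish a universal property like "never returns null". A hypothetical Python sketch (the function and the magic input are invented): a thousand random probes will almost certainly miss the one input that yields `None`, while a static return-type annotation (`-> str`, checked with a tool like mypy) would flag the `return None` branch without running anything.

```python
import random

def lookup_code(n: int) -> str:
    """Returns a string code for n... except for one rare input, where it
    returns None -- a latent bug a type checker would flag statically."""
    if n == 7919:  # the needle in the haystack
        return None
    return f"code-{n}"

# Random testing: a thousand probes over a billion inputs all "pass",
# yet prove nothing about the absence of None.
rng = random.Random(42)
for _ in range(1_000):
    assert lookup_code(rng.randrange(1, 10**9)) is not None

# The counterexample is still there, waiting for production:
assert lookup_code(7919) is None
```

This is why the two approaches are complementary: types rule out whole classes of values universally, while tests check concrete behavior the type system can't express.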

------
Ace17
Let's dismiss ideas, not people.

------
agentultra
I think Uncle Bob is way off the mark here.

In _Clean Code_ he hammered in the idea that unit tests form an automated
specification of some software system under development. This was a decent
idea at the time. So why on earth would he tell people not to use type-safe
languages or tools for defining better, formal specifications?

Probably because he has something to sell?

We need better tools, processes, and "disciplined" methods to writing software
if we're going to tackle complexity and produce systems that can be considered
robust, reliable, and _safe_. Unit tests alone are never going to cut it.
There's a reason Microsoft Research has been heavily investing in formal
methods: imagine if the only specification for CosmosDB existed as a unit test
suite. It'd have data-losing critical errors in it for years to come.

I liken Bob's post on discipline to the snake oil used to sell Christian
religions: _you were born sick, but if you believe in me, only I can make you
whole_. What a crock. There are plenty of smart, capable people out there
writing perfectly good software. The problem in the industry isn't that
developers aren't writing unit tests the way Bob Martin prescribes: it's that
a great deal of hand-waving substitutes for following the state of the art and
establishing processes for developing robust, reliable, and safe code.

The aerospace, space, and safety-critical disciplines within software
development have been making huge investments in tools and languages that make
the complexity of these systems' requirements manageable, correct, and
possible to reason about. TLA+ is only one such tool, and I think Hillel is
right to point it out: you need more than one tool. Your entire process has to
be built around avoiding errors.

There's a reason why engineers designing roads stopped calling vehicle
collisions "accidents." Once you start looking for the real root of the
problem, the system itself, you start to optimize for different goals in order
to reduce the negative outcomes in the design of the system. Vehicle
collisions still happen because of human error but a great deal more happen
because they were enabled by the system: the roads, the vehicles, the by-laws.

In a similar fashion errors in software systems are _enabled_ to happen by
different choices we make: stakeholders, deadlines, requirements, time to
market, etc. The ISO/IEC/IEEE 29148 guidelines on requirements engineering
encourage you to consider these factors as part of your specifications. If the
operation of your system will only cause nuisance or interference with the
business' goals then the risk is quite low and you might put more priority on
time to market. However if there's more risk to harm people like say, losing
their personal information and putting them and the insurance industry at
greater risk, then I'd say you need to put in the effort to avoid those risks
into your processes: use more formal specifications, use statically typed
languages with sound type systems, write property-based as well as declarative
tests, etc, etc.

If you start treating software errors like we started treating auto-accidents
then you'll start to see where you can reduce the negative outcomes. Stop
calling software errors as bugs just like we stopped calling collisions,
"accidents." See errors as risk and look for ways to reduce your risk. Know
where some risk is acceptable.

Uncle Bob is only calling programmers "undisciplined" because he has a few
more books to sell to help them get better.

~~~
unclebobmartin
Well, that's not the _only_ reason. ;-)

~~~
agentultra
Hah!

I have read your books and used some of that material to train several teams
over the years to practice test-driven development. So thank you for the
inspiration and drive.

------
floobyhoob
Just some cat fight.

One guy likes doing things one way, the other does not.

Ignore.

------
RickJWagner
<sigh>

More nihilism, this time in programming circles.

It's a turbulent age. I can't wait for it to pass.

------
mwj
Can we stop talking about Uncle Bob? He's like the old, weird, drunk, racist
uncle at a wedding. His ideas haven't really been relevant for about 10 years,
and he is increasingly posting ranty, politicised, attention-seeking
diatribes...

