Uncle Bob and Silver Bullets (hillelwayne.com)
277 points by henrik_w 16 days ago | 218 comments



My problem with Uncle Bob's opinions is that they are pretty common within the industry, even though "discipline" really isn't the issue and all those negative feelings people allude to are warranted. In general, people are made to feel guilty for not being able to use the shitty tools and techniques we are given.

Let's take, for example, ORMs, and Hibernate in particular. I can't tell you how many times I've seen horrors related to ORMs / Hibernate because of an actual mismatch between relational data and object graphs, but then developers are made to feel guilty because they "aren't using it right", made to think that if only they switch this setting on, use this and that annotation, read a book or two, everything will be alright going forward.

Most such projects fail from a technical perspective, of course. Having invested so much time in those shitty techniques, the developers find exploring alternatives unfathomable, because all of that effort, which can span years, would end up unjustified. And I suspect that Uncle Bob is guilty of this as well.

Let's take TDD as another example. I remember a time when TDD was religious dogma, which was unbelievable, because I couldn't see actual TDD in the projects of companies advertising TDD in job descriptions. Much like Agile, Scrum and all that crap, it usually doesn't work because, as the masters will quickly tell you, you're "not doing it right". I've seen a single project where TDD was implemented by the book, but the developers were wasting so much time on testing stuff, including endless discussions, that they were forgetting TDD wasn't the actual artifact they had to deliver.

Now Uncle Bob tells me that we don't have the discipline required. That's not surprising. I’ve been hearing this argument in some form or another for years.

The surprise is the attention that we are paying to what Uncle Bob has to say.


Completely agree with this. The post where Martin derides modern, type-safe languages (Kotlin and Swift) [0] was just unbelievable to me.

"Ask yourself why we are trying to plug defects with language features. The answer ought to be obvious. We are trying to plug these defects because these defects happen too often.

Now, ask yourself why these defects happen too often. If your answer is that our languages don’t prevent them, then I strongly suggest that you quit your job and never think about being a programmer again; because defects are never the fault of our languages. Defects are the fault of programmers. It is programmers who create defects – not languages."

Arrgh! Until we get out of this ridiculous, macho, victim-blaming mindset, software development will be stuck in the relative dark ages. More, better, safer languages, please!

[0] http://blog.cleancoder.com/uncle-bob/2017/01/11/TheDarkPath....


Somebody should show Uncle Bob this: https://youtu.be/fPF4fBGNK0U

My point being: yes, if we were perfect drivers at all times and in all weather conditions, there would be no more accidents. Luckily for us, auto designers haven't adopted this perspective :)


The top comment on that video, in light of the points being made in this thread, is quite ironic:

"And because of this 'improved safety' we have millions of drivers out there who drive even faster and take even more risks, because they think the new cars are so much safer. "

(of course this comment is silly, the number of motor vehicle deaths per capita is about a third of what it was 50 years ago: https://en.wikipedia.org/wiki/List_of_motor_vehicle_deaths_i...)


This is a video of a crash _test_. Personally, I'm not sure TDD is necessarily the answer, but even with typed code you should still have tests. You just don't need tests for things covered by the type system.
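
For instance, a minimal Kotlin sketch (the function is made up):

  // The parameter's non-null type makes a "what if name is null?" test
  // redundant; the behavior itself still deserves a test.
  fun greeting(name: String): String = "Hello, $name!"

  fun main() {
      // greeting(null)  // would not compile: the type system already covers this
      check(greeting("Ada") == "Hello, Ada!")  // the behavior still needs a real test
  }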

Of course, it also depends on what the software is being used for. If I create a script for automating something just for me, I don't see any reason to add repeatable tests.


You just reminded me of my old driving instructor.

"All accidents are your fault. Got rear-ended? Shouldn't have been driving so slowly, or shouldn't have stopped. Got t-boned? Should have looked more closely at the intersection. Skidded on ice? Shouldn't have been driving so quickly, or turned so sharply. Yes, you avoided the pedestrian, but that isn't an excuse. In extremis, you shouldn't have been driving that day at all, and you do always have that option, so it's your fault regardless."

The line between empowerment and victim blaming is surprisingly thin. ;)


That was cool!


I agree with both, really. Those defects are our fault, and they are there because we aren't careful enough with the code we write. We know about these things, and yet, we don't work to prevent them. We either handwave them away, or we give in to management pressures to just get the thing out the door.

Now, that said, one of the ways you can work to prevent these things is to use better tools. Like languages that don't let you make those kinds of mistakes as easily.


I can see how that would be popular but it's wrong.

You can take that same reasoning and say we should build web projects in C or assembly because any defects are our own fault.

Let's be honest. That's bollocks. It's just the same old "real programmers use X" line. I'm sure there's an XKCD that parodies it.

Sure we need more discipline. Who doesn't? Yet that discipline is far better spent on doing what the compiler can't. It makes no sense doing what a computer can do unless it's really necessary. That's why we're programmers after all, right?


Using C as an example is a straw man. There's been A LOT of progress since C. Can you say the same for Java or Python? Not as much.


No it's not. It's the same argument we've had about any kind of progress in programming languages since we started using higher-level languages (i.e., Fortran and up).


He didn't condemn Kotlin and Swift. He is saying that they aren't necessary and don't solve the original problem; that the constant search for a new framework or language that will fix everything is a futile attempt to avoid learning how to code more effectively with the tools you already have.


In one of the linked posts he says that 100% test coverage might not be truly achievable, but it's nonetheless a valid asymptote to aspire to.

I think that exact argument applies to designing ever-safer, ever-better languages. You might never have the perfect tools that ensure 100% program correctness, but it's sure as hell what a tool-maker should aspire to.


His argument mostly makes sense up to the point where he says tests are the only solution... Why not default to overriding the safeties, then use them where your reasoning says they ought to hold? Cheaper than writing a bunch of tests just to guard against null pointers.
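
A rough Kotlin sketch of that trade-off (findUser is a made-up lookup):

  fun findUser(id: Int): String? = if (id == 1) "alice" else null

  fun main() {
      val known = findUser(1)!!    // override the safety where your reasoning says it holds
      println(known.uppercase())

      val maybe = findUser(42)     // keep the safety where you are less sure
      println(maybe?.uppercase() ?: "no such user")
  }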


Personally I’d prefer my language to let me fuck up.


> Personally I’d prefer my language to let me fuck up.

And as a (completely hypothetical) competitor of yours, I would support you using a language that encourages you to fuck up frequently.


There is a difference between letting you fuck up and encouraging you to fuck up. I have given Rust a go, but it's tricky to understand, and that seems like a language designed not to let you fuck up.


y tho. you can have safety and expressivity...


It's often not about expressivity, it's about conciseness. It's much easier to say a correct statement than it is to prove the statement correct. This seems to be the main cost of strongly typed static languages: all the time spent proving to the compiler that this thing is actually compatible with that thing.


What strongly typed static languages are you talking about?

> proving to the compiler that this thing is actually compatible with that thing

I used to have this opinion, but nowadays I think that time is actually spent proving to myself that I'm right, which is always a good thing.

Having a compiler to do the hand holding is not such a bad thing, because the alternative is println-development.

What happens in a dynamic language is that you have to load into your head the shape of the data you're working with, the API, and everything really. Your head can't hold that much for anything bigger than a simple script, so you have to constantly execute and verify (in the REPL, in the browser, with a unit test, etc.) each line of code that you write before writing the next line of code.

This is why people complain about compilation speed in Scala, Haskell, etc., even if they miss the point, because even a slow, static compiler will improve efficiency, since you no longer have to compile and execute each line of code before you write the next one.

And yes, in my experience this is exactly what happens in big code bases built in dynamic languages, short of being really sloppy and introducing lots of bugs.


As opposed to the main cost to dynamically-typed languages, all the time spent proving to the test suite and proving to your teammates that this thing is actually compatible with that thing. And I say this as someone who adores Python! Dynamic typing does not magically make problems go away, it just makes them easier to ignore (whether deliberately or accidentally).


> proving to the test suite and proving to your teammates that this thing is actually compatible with that thing.

I really don't know where that canard comes from. My test suites rarely if ever explicitly test compatibility.

They test for specific values: if I do this and that, do I get the value that I expect. The type "test" gets covered implicitly, because if it's not the correct type, it will also not be the correct value.


The problem is that without knowing the type in question you don't know whether the values you're testing with cover all of the possible cases. When interfacing with an external dynamically-typed function this requires attempting to reverse-engineer the types from the definition (and recursively for all the functions it calls).


> Lets take for example ORMs and Hibernate in particular. I can't tell you how many times I've seen horrors related to ORMs / Hibernate ...

My teams don't use ORMs anymore. I've found that all of the time we "save" by not hand-writing SQL is instead lost fighting with the ORM, and problems with SQL are a lot easier to Google.

I've also found that the ability to swap databases quickly is almost totally unimportant. 100% of the time we're targeting SQLite and PostgreSQL. I don't need a few extra thousand lines of code so I can theoretically support SQL Server running on a toaster in some kid's dorm somewhere.
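
A minimal sketch of the hand-written-SQL approach over plain JDBC in Kotlin (the schema is invented, and the sqlite-jdbc driver on the classpath is an assumption):

  import java.sql.DriverManager

  fun main() {
      DriverManager.getConnection("jdbc:sqlite::memory:").use { conn ->
          conn.createStatement().use { st ->
              st.executeUpdate("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
              st.executeUpdate("INSERT INTO users (name) VALUES ('alice')")
          }
          // Hand-written, parameterized SQL: easy to read, easy to Google.
          conn.prepareStatement("SELECT id, name FROM users WHERE name = ?").use { ps ->
              ps.setString(1, "alice")
              ps.executeQuery().use { rs ->
                  while (rs.next()) println("${rs.getInt("id")}: ${rs.getString("name")}")
              }
          }
      }
  }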


I'm actually a big fan of myBatis [0] for this very reason. You still write all the SQL, including stored procedures. All it does is map objects to input parameters, and map result sets to objects. Which is awesome, because that's the part of database processing that is actually tedious and sucks. Writing SQL is not.

I think StackExchange's Dapper [1] works similarly in the .NET world.

[0] https://github.com/mybatis/mybatis-3

[1] https://github.com/StackExchange/Dapper
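
A hedged sketch of what that mapping style looks like in Kotlin (the User class and users table are hypothetical, and the SqlSessionFactory setup is omitted):

  import org.apache.ibatis.annotations.Param
  import org.apache.ibatis.annotations.Select

  // myBatis populates this via its no-arg constructor and setters.
  class User { var id: Int = 0; var name: String = "" }

  interface UserMapper {
      // You write the SQL; myBatis only binds #{id} and maps the result row.
      @Select("SELECT id, name FROM users WHERE id = #{id}")
      fun findById(@Param("id") id: Int): User?
  }

  // Usage, assuming a configured SqlSessionFactory named factory:
  // factory.openSession().use { session ->
  //     val user = session.getMapper(UserMapper::class.java).findById(1)
  // }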


This is pretty close to the sweet spot for me. Being able to map rows to objects easily, plus being able to compose queries, is what I want from my 'ORM'. If your SQL layer doesn't support chaining conditions you end up writing that support yourself anyway, so being able to do .and(condition).and(condition).and(condition), etc. is pretty useful. ActiveRecord in Rails comes close to this, but it also manages relationships, which makes things a bit more complicated, though not as complicated as what Hibernate or other ORMs do.
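
A toy Kotlin sketch of that kind of chaining (nothing here is a real library API):

  // A tiny composable condition, so call sites can chain .and(...) freely.
  class Condition(val sql: String, val params: List<Any>) {
      fun and(other: Condition) =
          Condition("($sql) AND (${other.sql})", params + other.params)
  }

  fun eq(column: String, value: Any) = Condition("$column = ?", listOf(value))

  fun main() {
      val where = eq("status", "active").and(eq("age", 30)).and(eq("city", "Oslo"))
      println("SELECT * FROM users WHERE ${where.sql}")  // bind where.params in order
  }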


Having worked on many systems like this, I generally don't consider them to be better. Most of them just introduced different edge-case behavior than Hibernate.

Though, I generally prefer a non-religious approach to ORM mapping anyway. Use Hibernate for the stuff that it is good at. If you find it awkward for some particular service, don't be afraid to grab a DB connection and do a bit of SQL (maybe even using Hibernate's powerful rowmapper functionality) when it is easier.

Any ORM is going to be a little rough in some cases (even ones you write yourself) and none fit all problems perfectly.


> Use Hibernate for the stuff that it is good at.

Sure, but that's a bit circular. I find that a sensible default is to write the code to match the data, not abstract the data to fit the code. Because, if nothing else, you'll want to understand that data with another language or framework. If it's a one-off Java project that will get thrown away in a few months, whatever is fine, but most Java applications (especially enterprise Java) aren't throwaway projects.

Do you know what works in pretty much every language? Do you know what will be supported in 30 years? SQL queries.

Hibernate is good at some things, but if we think in terms of product lifetimes and not sprints, I think SQL is a much lighter dependency.


I would actually argue that raw SQL in Java has worked pretty poorly overall ever since it was introduced. The APIs have gotten better, and will probably continue to.

But you know what has consistently provided a nice abstraction that works really well for 90% of cases (even if you start with DML, as you should)? Hibernate :)

I've used plenty of homegrown frameworks, and they just never are quite what their originators crack them up to be.


I think every commercial system that I have worked on has had a database abstraction layer of some form, ignoring the fact that an RDBMS with SQL was a better abstraction layer than what they came up with.


It doesn't help that the common advice to securing a database application is "don't, it's too hard to secure databases, just write another application to talk to the database application, and secure that."

A database is just another application. SQL is just another Turing-complete programming language.


How do you make sure that what you are writing is secure?


What difference does an ORM make?


ORMs by and large do protect you from SQL injection attacks.


By using prepared statements, which you can use as well.
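
A minimal Kotlin/JDBC sketch of that point (the users table is hypothetical):

  import java.sql.Connection

  fun findByNameUnsafe(conn: Connection, name: String) =
      conn.createStatement()
          .executeQuery("SELECT id FROM users WHERE name = '$name'")  // injectable: name becomes SQL

  fun findByNameSafe(conn: Connection, name: String) =
      conn.prepareStatement("SELECT id FROM users WHERE name = ?")
          .apply { setString(1, name) }  // the driver sends name as data, never as SQL
          .executeQuery()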


They don't. All decent ORMs have a layered architecture, and it's usually some "query builder/processor" or "DAL" or whatever that sits under/above the ORM your app uses (and can usually also be used directly, but has a horrible API since it's not optimized for this use case) that does the simple but tricky tasks of parameter sanitization and all...

The ORM is just a big fat thing that sits on top, and from which you only use 10% of the functionality it provides, but in theory each app uses a different 10%, so it justifies itself...

ORMs can be ok... as long as you don't build business logic on top of and inseparable from it!

(As an example of horrible use of ORMs: some frameworks have ORM systems with an events system on top... god have mercy on your soul if you make the mistake of using this events system to implement business logic, and end up with it accidentally depending on db connection and caching details, with no time to refactor when you realize this! But if you avoid the trap, you can reap the benefits of the functionality that the ORM provides, while moving your logic to a whole different layer of the app. Hint: just invent a new "XBizLogicServices" layer or whatever and don't import framework stuff into it if you can't find a better design; it will at least be easily testable.)


I wasn't defending ORMs.

I was just pointing out that if you use one, you do receive protection from injection attacks. Just because only part of the ORM is actually performing the protection doesn't change the fact that you have it as a consequence of using the ORM.

You can also get the same protection by using a SQL library that supports prepared statements. But ORMs still provide that benefit.


Martin has built up a large following of acolytes, some of whom are very active wherever software development is discussed, in a very dogmatic way (such as the guys for whom TDD is the one true solution to every problem.) He has encouraged this through his didactic style of writing, which focuses on a few of the most basic faults in current development practice, and elevates them to being the roots of all evil, without any regard to the problems that you will run into once you have these surmounted.


Agree with this. I like some of his ideas. But a problem I have (possibly of my own perception) with Uncle Bob is that he gets too aggressive. If you don't have X percent (100%) code coverage, then you are doing it wrong, you are not a craftsman, etc. Well, Dan North put things in perspective a bit: https://dannorth.net/2011/01/11/programming-is-not-a-craft/ The problem is dogma. And it keeps repeating. But probably this aggression helps them in their careers.

(Not a frequent contributor to comments/ here on HN. Mostly like to read.)


I have never told people that they must have 100% test coverage. Indeed, I tell them that 100% test coverage is impossible.

What you are referring to is a statement I frequently make -- 100% coverage is the goal. It's an asymptotic goal, but it's still the goal. No lesser goal makes any sense.


But then you can find people saying that type checking has no value because all errors will be caught by the testing you should be doing - I came across someone saying that just a few days ago.

I am not sure that 100% can be called an asymptotic goal, given that you cannot even get close for real systems. Once you recognize this, you can start having a realistic discussion about what you actually will be able to test for, how best to allocate your limited resources, and what other steps you might take to mitigate the consequences of this unfortunate fact. It means that methodologies based on extensive testing must justify that use of resources. An additional benefit is that we might be able to discuss what we can do to improve software development without the sort of dogmatic non-sequitur that I mentioned in my first paragraph.


> 100% coverage is the goal. It's an asymptotic goal, but it's still the goal

What is a goal then? A maximum? A desirable attribute?

I'd argue a fixed percentage target of any kind doesn't make sense, much like measuring productivity by lines of code.


I assume you've seen "Object-Relational Mapping is the Vietnam of Computer Science"?

http://blogs.tedneward.com/post/the-vietnam-of-computer-scie...


ORM is like C++. SQLAlchemy is the best ORM I know of. You can use it lightly, to map tables and handle migrations, and never go deep into the ORM "problem". It saves you a lot of effort doing SQL push-ups. Many projects doing OO end up building a worse ORM from scratch eventually.


> Many projects doing OO end up building a worse ORM from scratch eventually.

Sure, and lots of people end up implementing a lisp in C++. Doesn't necessarily mean they should just use a lisp instead. Usually it just means they should learn how to use their tools idiomatically.


Someone help me with the origin of this (paraphrased) quote, which I have not been able to re-find on my own:

"The sooner you admit that you can't fit the whole program's source code in your head, the sooner you can begin incorporating that into your design."


Notes on Structured Programming: http://www.cs.utexas.edu/users/EWD/ewd02xx/EWD249.PDF

Page 3:

  Summarizing: as a slow-witted human being I have a very
  small head and I had better learn to live with it and to
  respect my limitations and give them full credit, rather
  than to try to ignore them, for the latter vain effort
  will be punished by failure.
The Humble Programmer: http://www.cs.utexas.edu/users/EWD/ewd03xx/EWD340.PDF

Page 11:

  The competent programmer is fully aware of the strictly
  limited size of his own skull; therefore he approaches
  the programming task in full humility, and among other
  things he avoids clever tricks like the plague.
He expressed similar thoughts on other occasions. And there are others who express the same idea.


Thank you for this! It's nice to have a large repository of EWD's writings.


The most common tactic for pushing TDD is confusing "no TDD" with "no testing". So many "benefits of TDD" begin with sabotaged examples of a team that doesn't test at all before trying TDD - classic stone soup.


I agree with this comment. OTOH, it's very hard to have and enforce discipline in commercial software development in the US anyway. Managers just want solutions hacked out as quickly as possible, so in most cases there is no real time for properly crafting solutions. Scrum etc. also drives this type of mindset.


Agreed. Scrum, RAD, and DSDM are not suitable for all software projects.


There's nothing wrong with tools. I like tools. I use them all the time. I want better ones.

I also want programmers to be able to use those tools well.

Finally, I look at a lot of crap code written by programmers who throw all their disciplines away because of schedule pressure or other factors. For the sake of going fast they throw away everything that might actually allow them to go fast.



I think you don't see much TDD in companies because, when a snag is hit, the first thing out the window is testing.


Bob makes money selling TDD to chumps.


Here is the post this post is in response to:

http://blog.cleancoder.com/uncle-bob/2017/10/04/CodeIsNotThe...

I don’t interpret the original piece as saying any of these tools and techniques are not useful.

I interpret it as saying the average developer mindset needs to shape up.

I tend to agree. I work with guys who believe in formal reasoning tools (we collaborated with Amazon’s ARG for example[1]), and use all these tools. These guys are almost compelled to write these tools and evangelize using the tools in an effort to help, because the developers around them don’t carefully reason about their code and it makes the guys who care crazy.

So you have two camps. People who believe correctness is important, and people who think continuous deployment is an answer to bugs. The first group is trying to help the second group through tools that can become part of the SDLC.

I read “Uncle Bob” as suggesting that’s not enough, it’s not the root cause — the second group needs to care.

I see nothing in his post that suggests the tools shouldn’t be used. (“I think that good software tools make it easier to write good software.“) But if the crowds in his talks aren’t using unit tests, are they really going to be using TLA+?

First, do no harm; then use tools like Light Table, Model Driven Engineering, and TLA+.

I take that as his point, and it seems painfully valid.

1. https://www.slideshare.net/mobile/AmazonWebServices/aws-rein...


> First, do no harm; then use tools like Light Table, Model Driven Engineering, and TLA+.

Looking at some of Uncle Bob's other posts linked from this article, I don't think that's what he's saying. He (Uncle Bob) goes on and on about discipline, but at the same time dismisses anything that might actually force programmers to be disciplined, like type systems that force you to check references for null before dereferencing them.

For example, in http://blog.cleancoder.com/uncle-bob/2017/01/11/TheDarkPath.... he asks: "It is very risky to have nulls rampaging around the system out of control. The question is: Whose job is it to manage the nulls. The language? Or the programmer?" and goes on to answer: "Defects are the fault of programmers. It is programmers who create defects – not languages." Then he goes ranting about programmers who don't test their code, and if only they tested, such type systems would not be needed.

Again: He dismisses the one thing that actually forces programmers to be disciplined and instead just wishes (against all evidence) that they were disciplined. To me, he seems to be saying "First, do no harm, then use tests, then nothing."


That's quite a common issue when you're trying to emphasize a specific point. Uncle Bob's trying to make software developers care about their craft. He seems dismissive of any other techniques, but I think that's because he really wants people who read his posts and watch his videos to understand that specific point. If you already understand that point, then you're not the audience he's trying to talk to, and you're free to enhance your craft with any tools you see fit, because you've got the foundations right.

Unless you understand that he's trying to teach people something that's really difficult to fully understand and implement on a daily basis, I don't think you'll be able to understand why he's so dismissive about anything else.


From "The Dark Path":

> Now, ask yourself why these defects happen too often. If your answer is that our languages don’t prevent them, then I strongly suggest that you quit your job and never think about being a programmer again; because defects are never the fault of our languages. Defects are the fault of programmers. It is programmers who create defects – not languages.

http://blog.cleancoder.com/uncle-bob/2017/01/11/TheDarkPath....

I think that software defects happen too often because the most widely used programming languages don't do enough to prevent them. That makes me the audience of this statement. And he says that I should abandon programming. I think that's rather presumptuous of him.

(I also think that on average programmers aren't careful enough and don't spend enough time making sure their stuff doesn't break, but I see that as primarily an economic problem. Most software companies are rewarded more for delivering features than closing the security holes. We can attack that problem either by finding some way to convince software vendors to pay for the tremendously expensive effort of testing software, or we can reduce the effort required by using stronger type systems and formal methods. The latter seems more realistic and attainable.)


I don't agree with your interpretation, and neither does the user "unclebobmartin" here on HN: https://news.ycombinator.com/item?id=15420618

This is not about emphasizing a point about caring for your craft. It's about open hostility to certain things that are not his particular preferred thing.


Indeed, for any reliability tool not named "unit tests" he comes off as saying "Just write better code!"

It really feels like someone peddling their pet diet, and when confronted with other ways of managing weight they just say "You don't need to do that, just eat less food!"


No tool will ever successfully enforce discipline. If anything, it makes people more lax.

"I don't have to reason about whether this can be null, the type system takes care of that for me." "I don't have to worry about leaking memory, valgrind will let me know."

"I don't have to pay attention to the road, Tesla's autopilot takes care of it for me."

All the tools do is enforce a loop of compile -> make compiler happy -> compile.


> No tool will ever successfully enforce discipline. If anything, it makes people more lax.

But the end goal is not disciplined programmers! It is better, safer and more timely software.

Your argument applies just as well to safety equipment in cars and on heavy equipment, and you may be right that the reduced risk of being killed or maimed makes people more lax. But you can’t argue with the results: Far fewer people do in fact get killed or maimed!


Fewer errors get deployed to production code because tools help us catch them more often? Yes.

Fewer errors get deployed to production code because programmers have a more rigorous discipline when creating software? Also yes.

Which part should a teacher like Uncle Bob (I consider him as such) care more about? People using tools or processes, or having them care about the discipline enough?

I'll go with the latter. And if he seems dismissive about the former, it's because he's trying to teach about discipline, which is far more difficult to teach than using a tool or following some process.


You’re creating a false dilemma. It’s entirely possible to be a disciplined programmer and use good tools.

And, yes, a teacher should care more about the skills than the tools, but Uncle Bob is actively against the tools, claiming that they impede learning the skills. Which is no more true than claiming that automatic transmissions or anti-spin in modern race cars impedes learning how to race.


There's no dilemma. I never said it wasn't possible. I'm saying that it's far more difficult to be disciplined oneself, than it is to learn some tools. Thus, it's far more interesting for a teacher to try and instill discipline in the students, than to teach the latest set of tools. The former is far more stable and will affect a person's behavior much more deeply, the latter will keep changing and can be added to a skillset within a week by reading some tutorials.


I don't think he's actively against those tools. I think he's actively against claims that those tools actually solve the problems developers deal with, instead of uncovering the true behavior that lies beneath those problems.


No, he really is against the tools. He is on the record as being opposed to static typing, optionals etc.

And by the way, no one is making the argument that tools solve all the problems developers deal with. But to say that actually they solve none of them is foolish. Optionals, to take one example of something Uncle Bob is against, prevent a large proportion of what is probably the most common runtime error, namely dereferencing a null pointer. And yes, Uncle Bob is right that this comes at a small expense, which is having to think more carefully up front about your data and your abstractions, and there is no doubt he is right that some programmers will feel constrained and will find and use whatever mechanisms exist to bypass the safety that this feature offers. But in much the same way, there are also some people who feel constrained by seat belts, hard hats, helmets, dust masks etc. and choose not to wear them, yet you don't find many people who would argue that this makes them pointless and generally a bad idea.


No, he is not against the tools.

"I have nothing against tools like this. I’ve even contributed money to the Light Table project. I think that good software tools make it easier to write good software. However, tools are not the answer to the “Apocalypse”."

http://blog.cleancoder.com/uncle-bob/2017/10/04/CodeIsNotThe...


I complain about quality a lot. I might almost have this superhuman level of discipline on my best days, and only then if business allows me to indulge it. But think about the worst programmer you've ever worked with. In his hands the tool is a much bigger win.


A tool is short term, shallow, and goal-oriented. Building discipline and good practices permeates a person's life far more deeply than tools.


Banking on discipline in any human endeavour seems to be a fool's errand. You will get more reliable results by improving the environment, systems, and tools. This can be seen in how much checklists have improved quality and reliability in fields like medicine. We are in a field where we can literally codify the checklists to enforce them.

I want both: Better tools and more disciplined programmers. The tools will catch errors, and that's great. But they won't catch all the errors, whereas disciplined programmers will work to minimize the potential of them.


So, you're advocating removing the "make compiler happy" step and you think that will lead to greater discipline?

If the tool automatically fixed it, you'd have a point, but most if not all of the static analysis tools I've come across simply point to the error and leave it up to the developer to fix, which is the right thing to do, reinforcing good habits.


Not OP, but I interpreted that as implying your goal should not be "make compiler happy" but "think about and write reasonable code that also of course compiles".

If one's assumption is that because of a type system and other built-in protections a successful compile means no bugs, then that person is doomed to fail.


> a successful compile means no bugs

Isn't that a strawman? Shouldn't we verify to what extent this mentality actually develops (I doubt it) rather than rebuild our methodologies around it - or at least begin by simply educating developers directly about the pitfalls of this assumption?

Sometimes it feels like we treat developer flaws as unfixable constants that can only be handled by process change, as opposed to developer improvement. Perhaps a symptom of hiring the cheapest devs en masse?


Pretty much exactly this. You can't keep throwing tools at a job and expecting them to compensate for poor developers. You also need to keep an eye on currently good developers, to ensure that they don't become lazy and rely on the tooling instead of critical thinking.


> relying on the tooling instead of critical thinking

If the tool is "type checking" a computerised logic/type-checking will be far more efficient that mentally checking such a thing. Computer type-check logic is a superior form of a subset of "critical thinking".


I don’t see why we should make this a requirement of every programmer.


I don't see why we shouldn't.


This overgeneralizes. When I started using strict null checking (in TypeScript) I found it had an unexpected impact: I started writing code differently. Slowly, I also began _thinking_ about it differently. Ultimately you can't _make_ someone care if they don't want to. But if they already do care, some of these tools will make them better.


> All the tools do is enforce a loop of compile -> make compiler happy -> compile.

Sorry, this is nonsense. Tools can prevent whole classes of bugs from ever happening. I will never have a null pointer exception in a Haskell program, for example[1].

I'm not sure what's so difficult to understand about this.

[1] Alright, if we're being extremely nitpicky, it's possible to do really weird things with various language extensions which could conceivably cause similar things, but a) those are opt-in, and b) very rarely necessary, if ever except in the implementation of the runtime/language itself. They can also be localized per-module such that you know what code to be extra careful about.


I can attest: I wrote Ruby for many years, recently I've only been writing in languages with much stronger static analysis capabilities. I had to be much more disciplined when writing and reviewing Ruby code. Inevitably, I slipped up frequently in both disciplines, resulting in lots of bugs in the software and a chronic nagging worry in my psyche. I don't need to be as disciplined now, but I'm creating fewer bugs and sleeping better. I feel that these are good things and much more important than a theoretical concern about it making me "more lax". This is just my experience, your mileage may vary, etc...


The benefit of building checks into our languages (like static typing, design by contracts, etc.) is that they let you focus on higher level concerns.

If I have a language that will let me say that a Foo is one of {1,2,3} and will never allow any other integer variable's value to be put into a Foo variable (without explicit casting), I can now focus on my use of Foos and not have to add a type invariant test to every function that consumes or produces a Foo. This doesn't make me less disciplined. It shifts the discipline.

The disciplined programmer can spend all day putting type invariant checks into their code. Or the disciplined programmer can spend 5 minutes determining what types they need to use, and spend the rest of the day writing out the actual business logic instead of a shit ton of boilerplate code and/or unit tests (which are hopefully comprehensive enough to catch these errors).
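
A rough Kotlin stand-in for that Foo example:

  // The invariant lives in the type; no per-function range check needed.
  enum class Foo(val value: Int) { ONE(1), TWO(2), THREE(3) }

  fun consume(foo: Foo) = println(foo.value * 10)  // no "is it 1, 2 or 3?" test required

  fun main() = consume(Foo.TWO)  // consume(4) would not compile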


Some people may have those attitudes, but if so, then they are demonstrating a lack of discipline. The intended purpose of tools is not to enforce discipline, but to enable disciplined people to do a better job.


There are nuances. With this reasoning taken to the extreme, would you in the end get rid of the compiler just to force yourself to be more careful?

And I kinda get your point, because in the past I got to a point where I had passed all of the checks on my PR with code that didn't actually work.

I still think that checks I put in place for myself help me more than they make me more lax.


I don’t get why being lax is a bad thing lol. You’re not being lax randomly, you’re being lax because you learned that empirically you can afford it. Instead of wasting that energy on whatever discipline you used to have to apply, you can spend it on something of value. I don’t think it’s smart to get upset about life getting easier.


Because being lax leads to bugs.


Type systems do not force programmers to be disciplined. Type systems tempt programmers to cheat.


I cannot conceive of how type systems enable "cheating". Mostly, I see people cheating by reducing everything to a handful of types and not exercising the system as much as it might be.

An expressive type system allows me to formally specify parts of my system, and have the language itself enforce those constraints. It's much better for me to be able to say:

  type Byte is mod 256;
And now any instance of Byte cannot end up larger than 255. In this example, most languages can achieve the same thing, as they have an 8-bit unsigned int type. But in Ada I can do this for any arbitrary (positive, I assume, never tried negative) integer after the mod (up to the system's Integer'Last). I don't have to test every time I assign into a variable of this type. I don't have to test after every computation to make sure it's still within the range. It just happens for me. This isn't cheating, this is using my tools to enable me to focus on the overall design and not the menial boilerplate of range checking everything.


I don't think he means they enable cheating, I think he means they frustrate developers who then try to find ways around the type system. Types pissing you off? Just declare everything Object and cast when you need to.

You're talking about leveraging the type system to help you, but that requires fully liking and understanding the type system. Many devs don't understand the type systems they're forced to work with, don't understand how to leverage them, and don't like creating new types. Most devs have a primitive obsession and rarely create their own data types.


> You're talking about leveraging the type system to help you, but that requires fully liking and understanding the type system

To the extent this is true (it requires understanding, but not liking; tolerating it enough to actually put the effort into using it well, sure), TDD is no different in this regard. It's just that even then, the best-case guarantees it provides are weaker.


Like giving WW1 pilots parachutes would tempt them to bail out of perfectly good airplanes. Like putting enough lifeboats on the Titanic would tempt Captain Smith to race blindly through ice-filled seas.

What does 'cheating' mean in this context? If anything, type systems (especially modern ones) encourage programmers to think more carefully about the types in their programs.


What do you mean by "cheat" here? Is it "cheating" to no longer write the kind of tests that type systems render obsolete? Because that's the main difference between programming in a language with a strong static type system and one without.


> People who believe correctness is important, and people who think continuous deployment

I don't see why these exclude each other? You can easily have both, through e.g. powerful type systems (a la Haskell) and still use continuous deployment.

> I interpret it as saying average developer mindset needs to shape up.

I think you're giving Uncle Bob too much leeway. If you go through his other posts it should be a bit more clear, but he is very much focused on the programmer being at fault. Heck, in one of his posts (as the other commenter pointed out) he even says that it's never the language's fault (which is obviously wrong; you can eliminate wrong behaviour with good language design).

Another example is http://blog.cleancoder.com/uncle-bob/2017/01/13/TypesAndTest... where he states,

    So, no, type systems do not decrease the testing load. Not even the tiniest bit. But they can prevent some errors that unit tests might not see. (e.g. Double vs. Int)
Which is not just something taken out of context; it's more or less the thesis of his article. He really shows his inexperience here with what powerful type systems can do, and I can assure you that you can definitely cut down on a wealth of tests by modelling things in the type system instead - let the compiler do the job for you.
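
For a flavor of what that looks like, here is a hedged Kotlin sketch (the payment states are invented for illustration):

  sealed interface PaymentState
  object Pending : PaymentState
  data class Settled(val amountCents: Long) : PaymentState
  data class Failed(val reason: String) : PaymentState

  fun describe(state: PaymentState): String = when (state) {
      is Pending -> "waiting"
      is Settled -> "paid ${state.amountCents} cents"
      is Failed  -> "failed: ${state.reason}"
      // no else branch: the compiler proves every case is handled,
      // so there is no "unhandled state" unit test left to write
  }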


Plus, there are systems like quickcheck that can greatly simplify the act of writing tests.
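
The core idea, hand-rolled in a few lines of Kotlin (a sketch only; real property-testing libraries like QuickCheck add generators and shrinking):

  import kotlin.random.Random

  // Property: reversing a list twice gives back the original list.
  fun holds(xs: List<Int>) = xs.reversed().reversed() == xs

  fun main() {
      repeat(1_000) {
          val xs = List(Random.nextInt(0, 20)) { Random.nextInt() }
          check(holds(xs)) { "property failed for $xs" }
      }
      println("property held for 1000 random lists")
  }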


It's clear from that that it's been a while since Bob Martin worked in the real world with real bosses, budgets, and deadlines.

The problem is that the business tends to drive the "if it compiles, ship it" culture, but also wants to blame some lowly replaceable coder for any mistake, so the organization and its leaders can avoid any responsibility for damage done due to their decisions to ship first and test later.

And Bob Martin's post seems to cater perfectly to that sentiment: never blaming the CIO/CXO for demanding the impossible and overruling concerns, but laying the blame entirely on the lowest and least powerful link in the chain.

People tend to forget that shipping despite warnings from a formal test suite is baiting an audit failure in a way that overruling concerns voiced by actual people is not.


It's the CEO's job to demand the impossible. It's the engineer's job to deal with reality. When the engineer tells the CEO that the impossible is possible, the engineer has forsaken his/her responsibilities.


It's also the engineer's job to make rent and pay their health insurance despite unreasonable demands. But apparently the only part of this labor issue we want to discuss is the nobility of sacrifice.


Because Uncle Bob isn't speaking to the CXO. He's speaking to those first line developers. And he's saying that those people need to push back, they need to stand up and tell the bosses that what they're doing is unsustainable.


> I don’t interpret the original piece as saying any of these tools and techniques are not useful.

I don’t even know if that is up for interpretation,

> I have nothing against tools like this. I’ve even contributed money to the Light Table project. I think that good software tools make it easier to write good software. However, tools are not the answer to the “Apocalypse”.

If this is indeed the article the OP is responding to, I can't help but think he was missing the point. The point is that scheduling pressures lead to awful code because quality is cut.


I was lucky enough to work on some projects without time pressure. Some people still produced low-quality work.


> I don’t interpret the original piece as saying any of these tools and techniques are not useful.

You haven't been reading much of Uncle Bob's articles or tweets then, because that's exactly what he's saying and there's no point in second guessing.


This post is not in response to just that post. It's important to take that post in the larger context of what Bob has said. He has many times implicitly (and some times explicitly) made the claim that typed languages are uninteresting because the type checker merely duplicates the work that unit tests accomplish.

As an aside, I rarely write unit tests, but I often use static analysis tools. I find that for the majority of the software I write, I get more bang for the buck with functional and integration tests than I do with unit tests.


> So you have two camps. People who believe correctness is important, and people who think continuous deployment is an answer to bugs.

I am curious as to how the people of the second group (if they do, in fact, exist) think that the continuous deployment of serially faulty code is a solution to anything.


I don't think there is such a group (or at least it is very small). What continuous deployment does do is help you find bugs faster as they get shipped sooner. It also promotes more automated testing which can help as well since manual testing is less likely to fully regress everything.


> What continuous deployment does do is help you find bugs faster as they get shipped sooner.

Yep, that's the thinking that stresses me in the second group.


I'm usually pretty skeptical of Uncle Bob's stuff, but in this case I think that both Hillel and Bob are a little right. Tools do make a difference, as Hillel says, and can be used to mitigate risk. The key word there is mitigate, though. Bob is right in that the root cause of that risk is a lack of discipline. Such risk will persist, mitigated by tools or not, so long as that lack of discipline persists.

I think that "discipline" is a bit of trigger word for some folks, though. It can be seen to imply a personal failing, but as Bob's post explains "undisciplined" programming is usually a combination of personal and systemic problems. The programmers who created the hypothetical bad code shouldn't have created it, but they also shouldn't have been asked to, or placed under requirements that made bad code, over a long enough time, inevitable. You can have undisciplined cultures in the same way as you can have undisciplined programmers, PIs, requirements, or problem domains.

What's more, all of those are problems that it's important to be aware of. If anything, Hillel's article pushes a bit far (understandably) in the other direction, hinting that a failure of discipline is so inevitable as to be a waste of time to fight against.

It's important to do both: mitigate the impact of failures caused by lack of discipline, and understand and try to address the causes of that lack. Easier said than done, of course.


I think that "discipline" is a bit of trigger word for some folks, though. It can be seen to imply a personal failing, but as Bob's post explains "undisciplined" programming is usually a combination of personal and systemic problems

I think you're on to something here, but you're not all the way there. I think "lack of discipline" is such an unsatisfactory explanation for anything, and almost never the root cause of any problem, as you claim in your first paragraph. The reason it is unsatisfactory is that it leaves us with no other remedy than to try harder. But if the root cause is systemic, then merely trying harder is bound to fail. If we are to gain any insights, we have to look closely at the systemic issues that manifest themselves as a lack of discipline. Saying that the cause of undisciplined individuals is an undisciplined culture doesn't really get us anywhere. We have to understand why a culture can get that way. What it often comes down to, in my experience, is incentive structures inside or outside of that company that have negative side effects or are downright dysfunctional. Sometimes you can fix them, other times you just have to accept them and live with them.


> Such risk will persist, mitigated by tools or not, so long as that lack of discipline persists.

I think you could have stopped after "Such risk will persist".

An engineer working on a project has far more control over what tools are used in the project than over the discipline or attention level of whoever is working on this project in three years. I don't see Hillel as arguing that we should give up on trying to improve discipline, but that we must expect to sometimes fail. Changing tools is pretty easy in comparison to changing cultures or incentives, and as a bonus, you can do it immediately, rather than hoping to respond when necessary.


> If anything, Hillel's article pushes a bit far (understandably) in the other direction, hinting that a failure of discipline is so inevitable as to be a waste of time to fight against.

I think you're mischaracterizing Hillel and putting words into his mouth. Hillel isn't saying discipline is a waste of time to fight against. He's arguing that focusing exclusively on discipline at the expense of everything else is foolish and narrow minded.


Uncle Bob (2017)

I stood before a sea of programmers a few days ago. I asked them the question I always ask: “How many of you write unit tests on a regular basis?” Not one in twenty raised their hands.

Rich Hickey (2011)

It passed all the tests. Okay. So now what do you do? Right? I think we're in this world I'd like to call guardrail programming. Right? It's really sad. We're like: I can make change because I have tests. Who does that? Who drives their car around banging against the guardrail saying, "Whoa! I'm glad I've got these guardrails because I'd never make it to the show on time."

https://github.com/matthiasn/talk-transcripts/blob/master/Hi... (incremental search for "passed all" to find the rest)


Disclosure: I didn't like the talk.

In Uncle Bob's view, which I assume has some popularity, unit tests are a form of software specification. It's informal, as the specification can only test specific examples of behavior based on a determined state, and the people responsible for its maintenance are the programmers. And if you're not disciplined enough with how you write these tests, you end up in a situation where you're writing the rules for your own success and the specification itself contains errors.

I look at specifications as blueprints. If you're planning a sky-scraper or mass transit system you use blueprints. If you're building a shed in your backyard you might just use a quick sketch on a napkin. I think the majority of commercial software is somewhere between napkin and blueprint; building houses to lean on a well-trod analogy.

You need something more specific and robust than a sketch on a napkin but not quite as formal as a blueprint for a rail switching system.

Regardless, unit tests are not a very good specification format. They're only going to show you examples where your software meets your requirements. However, we all know that software is complex and hard, and systems are more than the sum of their unit test suites: there are operators involved, conditions and constraints on operation, failure modes. You need a tool to help you design the system and check your work for you, because these systems are getting too complex to reason about without solid abstractions.

I think Rich Hickey completely missed all of this when he said that in his talk. If we were to modify the analogy to fit the view of testing described above, the guardrails would have holes in them. All over the place. You'd only be safe in the cases where the car "bumps" into the correct spots you thought to put a test.

However, I do agree that unit tests alone cannot be relied upon as the only way to write "correct" software, where "correctness" is determined by its adherence to its specification and by the implementation following state-of-the-art practices. One has to do more than just use unit tests to achieve that.


Building houses is not the right analogy for building software, because the techniques used are known and most new buildings are going to be exactly like the millions of other buildings that came before them, even when speaking of sky-scrapers. There's very little innovation in it.

Building a new building is usually akin to copying/downloading software, only more expensive. Art is involved in designing buildings sometimes and there is some innovation for the engineers that want to break records, e.g. in building the world's tallest buildings or in building them faster, but I think those are a minority.

I don't think Rich Hickey missed the point at all. If I ever saw an actual software architect, then Rich Hickey is one.

I think you missed his point of view, because Rich Hickey is also the guy that helped create Clojure's Spec and is a proponent of property-based testing, which Clojure's Spec now enables.

In fact Rich Hickey, in "Simple Made Easy" and other presentations, likes to talk about the need for upfront design, for caring about the delivered artifact, for thinking deeply about architecture, for simplification. He's in no way proposing a napkin-driven approach, quite the opposite ;-)


While I'm not a major fan of Clojure, I /loved/ this talk :)


It's a fantastic talk. One of the best talks on programming that I'm aware of.


Who does that? Who uses guard rails like that? Accountants.


Well written! I've always cringed when I've read anything that Uncle Bob writes. It almost seems like his entire existence depends on the acceptance of unit testing, and the unjustified dismissal of tools, especially type systems, seems to me something only a novice would do when evangelizing their favorite lang (we've all been there, right?...), until they get more experience with other methods and approaches.


I never understand why Martin gets the kind of attention he does. As far as I can tell, the only significant contribution he's made to the technical community is the Agile manifesto. Outside of that, he's a typical consultant in that he's got books, training courses, etc. However, I have yet to find a significant publicly known project he's been involved in where he's had to put his principles into practice.


There's definitely two types of very well known speakers.

People who have worked on some really notable project. Linus Torvalds, David Heinemeier Hansson, Joel Spolsky and so on. I know these people are worth listening to because I can see that their code is a success.

Then there are people who I have no idea what they have ever written at all. Robert Martin, Martin Fowler and so on. I don't know if I should listen to what these people say at all. Does it work? What software projects have they made a success that I can look at?

I suppose in the latter case it's usually closed-source enterprise code that they've consulted on? Well, if I can't look at it, I'm not sure I'm prepared to just take their word that it's successful.

Is this reasonable of me?


To be fair to Martin Fowler, his writing is generally more journalistic in its approach. Unlike the others you refer to, who are very ideological in their writing, Fowler describes technologies and programming patterns/approaches, in much the same way a tech journalist would do so. And he writes with a level of detail that suggests he's done some significant coding.

Also, re: open source, he does sometimes write about particular open source projects that have successfully made use of the tools or technologies he advocates. Or simply about the open source projects themselves.

I happen to enjoy a lot of Fowler's blog writing. In particular, I've enjoyed reading his posts on the LMAX architecture. Fascinating stuff.


I'd say Fowler has more credibility because his books are a bit more technical than Martin's. His book on refactoring is evidence he's worked with a lot of code.


I wanted to echo this. Fowler should not be lumped in with the other guys, for his Refactoring book alone. Not only is it a really good guide that helps programmers understand how to rigorously make incremental, non-breaking changes; it almost reads like a specification for a refactoring tool.

Really good software engineers write more than code. They know how to clearly outline ideas that can be handed to other developers.


It is not reasonable of you to ask unless you've made a reasonable effort to look for their software and failed.

Robert Martin's best known open source software contributions are for FitNesse, which is reasonably well known in the testing world. https://en.wikipedia.org/wiki/FitNesse

Tell me, have you read the Space Shuttle control software? There are dozens if not hundreds of papers about NASA's development process. Are you saying that since you can't look at it, you're not prepared to take their word that the software is successful?

Because it seems to me that there are other ways to judge success than simply reviewing the code.


Sorry, I didn't really mean reviewing the code - I meant being able to see the successful project.

I listen to Torvalds when he talks about how to write his software, because his software is a massive success.

I'd listen to NASA because their missions were a success.

Why should I listen to other people like Robert Martin? I had seen FitNesse in the past, and while I'm sure it's a useful and well-written bit of software, I don't think anyone would remotely describe it as a particularly notable success. Why would I listen to his ideas more than those of anyone else who's written a testing framework that some people use?


Ahh, I apologize. I misunderstood your original comment.

I'm still a bit confused by your comment "I have no idea what they have ever written at all. Robert Martin, .. and so on." when you say that you've "seen FitNesse in the past". Did you not know that he is part of the project?

Have you looked to see what projects they've been on?

FWIW, I don't think Fitnesse is well-written. When I looked at it about 8 years back there were several security holes in it, including no protection against directory traversal attacks.

It was clear that the developers had no experience working in an untrusted environment. Which, to be fair, is understandable: a trusted environment is the assumed security model for a FitNesse deployment.

(FitNesse.org is served by FitNesse-v20160618, so that's not 100% true.)

I don't trust Martin's views. I think he's full of bluster, and makes assertions far beyond what is justifiable. For example, he did a Prime Numbers kata some years back where it was clear he was gaming the problem - unknowingly, I think - so that his cute but inefficient two-liner would be an acceptable answer, while pooh-poohing those who immediately thought to use a sieve variant.


I had forgotten about FitNesse but I remembered it when you mentioned it.


> People who have worked on some really notable project. Linus Torvalds, David Heinemeier Hansson, Joel Spolsky and so on. I know these people are worth listening to because I can see that their code is a success.

I agree with your examples, Linus Torvalds in particular. However, I would not consider Mark Zuckerberg, Steve Jobs or Bill Gates worth listening to on a technical level just because their products are a success.


Bill Gates was actually a pretty decent programmer. His technical insights peter out somewhere in the 1980s or early 90s at most, but he's had some interesting things to say about those eras. (Interesting in the sense that it's someone speaking about the eras that was there in a big way, not necessarily because he's got surprising insights into software engineering or something.)


Do you have any links to his insights? They sound pretty interesting but I'm not sure what to search for to find them.



  He didn’t meddle in software if he trusted the people who
  were working on it, but you couldn’t bullshit him for a
  minute because he was a programmer. A real, actual, 
  programmer.
I stand corrected. Thank you for the read.


They are worth listening to when they talk about the kinds of things they used to make their products a success.


In fairness, Joel Spolsky has done a lot of great work as a founder - founding Stack Exchange being the most notable - but is any of his personal work open source? We don't know how well written his code is. Twitter is successful, but their code has a reputation for being horrible.


It's not open source, but he led Excel development, didn't he? I can use that and appreciate that it's good quality software, so I want to hear about how he managed that.

I don't know if I'm just ignorant, but to be honest I have no idea if people like Martin and Fowler can program at all or not!


No, Spolsky didn't lead Excel development. He was a PM.

I happen to like Spolsky a lot but most of the career success we've actually seen from him is managerial, which is not unimportant, but it's not the same as being a dev lead on Excel.

On the other hand, Spolsky's programming advice is much more broad-spectrum and rooted in common sense. I'm not sure you've ever seen Spolsky dismiss a whole category of languages or correctness tools, for instance.


Isn't the PM the one who leads the project? (I don't work in a team that uses these terms, so I'm genuinely asking.)


A product manager generally owns the spec for a product, but doesn't lead development of it.

(Microsoft has a more complicated PM scheme with I think two different kinds of PMs? So I can't speak to exactly what Spolsky's role was there. It wasn't a small role, but no, I don't think he led Excel development.)


Spolsky's role is addressed in his blog post about The Time That Bill Gates Reviewed His Work. It's an interesting read itself, I recommend it.

https://www.joelonsoftware.com/2006/06/16/my-first-billg-rev...

tl;dr He wrote the spec for VBA, aka Excel Macros. It's not being in charge of all of Excel, but I would call that a pretty substantial contribution.


It is definitely not my point that Spolsky's role with Excel was unimportant. Hopefully, none of my comments can be taken as minimizing his role at all. I'm just saying he wasn't an Excel dev lead, which is a distinction that is relevant to this thread.


Fowler specifically knows a lot about the process of software development, though; I mean, he's written multiple books on the craftsmanship of software development. Practical advice which a lot of people can see the value of.

Bob is more of a theoretical, philosophical guy it seems.


There is a lot to it, but people still need to find ways to make corporate development work better. Torvalds et al, while great, can't offer a lot to help here. (Except indirectly by producing a good foundation to build upon, like Linux.)


Very.


I have the same question. Not, "what makes Martin interesting or worthwhile", which is a totally subjective question, but, "what are the most compelling projects whose success he enabled?".

That seems like a simple question we could actually answer here without having loud debates about unit testing or whatever.


It's hard to build your consultant image by telling people that multiple methods and tools might work. It's easier to market a simple cure.

Also, the point to "stop making shitty software" is not the same as "stop making mistakes". You can ship high quality software even while being human, it's just slower and more costly.

And while I agree with the article in general, I do believe the current balance has shifted towards too fast iteration and shoddy quality. Just look at the recent iOS/OSX releases. But even that might make business sense, because of the even shittier alternatives.


> It's hard to build your consultant image by telling people that multiple methods and tools might work. It's easier to market a simple cure.

The thing about these middlebrow dismissals is that they work the other way around:

"It's hard to build your consultant image by telling people that they need self-discipline and rules. It's easier to just sell them on tools that will fix the problem."


I just don't think it's likely that the real reason most software is crappy, is because nobody thought of extending unit test coverage yet. Selling tools here is not "the other way", it's the same problem.

What if the problem is bad management? Or low experience/expertise in general? All of these methods and tools are only optimizing locally, and cannot help the root causes.


Discipline is a good thing, but it does not scale.

I've been working on a tool for mutation testing for C/C++ [1] for a few years now. During this time I was thinking about tests, tools, people, etc. a lot. Here is the summary of my musings:

Quality of software depends on several factors, such as the hardware, operating system, programming language, and tooling used to build the software in the first place. For decades we have tried to improve each of those aspects. But the most significant and most harmful factor is the human. Naturally, we cannot improve developers, but we can decrease the destructive influence of a human being by using better tooling.
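To make the idea of mutation testing concrete, here is the concept hand-rolled in Python (just a sketch; Mull automates this for C/C++ at the compiled-code level):

    def can_vote(age):
        return age >= 18                # original code

    def can_vote_mutant(age):
        return age > 18                 # mutant: '>=' replaced with '>'

    def test_can_vote():
        assert can_vote(30) is True
        assert can_vote(10) is False
        # The mutant passes this suite too, so it "survives": nothing
        # checks the boundary. Adding `assert can_vote(18) is True`
        # kills it - exactly the kind of gap mutation testing exposes.

A surviving mutant is evidence that your tests would not have caught a plausible human mistake.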

[1] https://github.com/mull-project/mull


Or alternately: we can improve developers, but doing so is hugely expensive and time-consuming, and while we can improve them we can never make them perfect. So making tools that can be used safely by imperfect developers is always going to be simpler and more cost-effective.


Exactly, this is what I mean by "does not scale." It takes many years, if not decades, for someone to become a decent developer. We simply cannot afford this.


We can't afford it with the current pipeline, but perhaps with a better pipeline, we can. Perhaps something more proper and akin to the older engineering disciplines? Or to doctors?

They're doing incredibly important jobs, and spent an appropriate amount of time learning and proving themselves.

Software is now completely critical in so many aspects of the world, and becoming more so. When people's lives are at risk, we can certainly afford 'many years'.

Yes, not everyone is working on safety-critical systems, but more and more people are. It's not just safety in the physical sense either, but emotional and mental well-being too. Facebook isn't saving lives the way a doctor does (the more 'physical' kind of impact), but social media definitely has emotional and mental effects.


I think this is a fallacy. I believe another pipeline does not necessarily produce better doctors. They also make mistakes; sometimes those mistakes cost people lives.


Not directly on topic - but I was a hobbyist silversmith as a teenager. I learnt many of my skills from a stern old German bloke at the silversmithing club.

He liked to say "Gutes Werkzeug ist die halbe Arbeit" (or sometimes: "Gutes Werkzeug, bessere Arbeit"). A good tool is half the work, or: Good tools, better work.

This idea has informed my coding decisions too. Evaluating the worth of a tool is more complex in software than in silversmithing, though.


I think the thing is that we have a different-level problem in software, one that can't be solved by "methodologies" or "patterns" or "discipline" or languages...

We have a learnability / teachability / knowledge-transmission problem that always disguises itself as a technical problem... If you could simply "distill" and "transmit" the relevant design knowledge to the person requiring it in a reasonable amount of time, most software quality problems would become trivial.

We supposedly do "knowledge work"... but we learn mostly from experience, like cooks or handcraft artisans or whatever, instead of how civil engineers or auto engineers do. (And I suspect the problems is not just immaturity of the field... it's more that "the space of possible solutions" in software architecture is so insanely big that there's no way to learn/label/teach what the engineers need to know... we have the requirements of engineering, but the infinite solutions space of mathematics... and training successful mathematicians is something we know we have no clue how to do... only here we also expect the "untrainables" to have the professional conduct of engineers coupled with the depth of though of philosophers.)


The problem is that the more tools you add, the more mistakes you make in the tools.

This is the same debate we had previously about formal methods. They don't eliminate errors; they just move them into the testing systems and, where there are several, into the gaps between them. Communication becomes more difficult between the people involved, and you rapidly start looking for the Chief Programmer Solomon who knows everything. Then you get communication failures, because not everybody is Einstein.

So it is always a trade off.

And it would help if we stopped calling automated testing "testing" and described it as what it is: a domain-specific type system checker.

If we realised we always build a system and a compiler for that system we might stop reinventing the wheel so much.


I don't get why automated testing is a "domain specific type system checker". Not saying it's not, I just don't get it. Maybe that's because my automated tests are pretty free-form and tend to be more about throwing mud at an algorithm and seeing if the results stick.

Your last line is bang on, though. We build a system, and a compiler (or processor) for that system. It doesn't matter if everything is "data driven" if your data is code and the thing it's driving is an interpreter.


> I don't get why automated testing is a "domain specific type system checker".

That's because he was talking about formal verification. Testing, as the name is normally used, is an "ad hoc, domain-specific, heuristic type system checker".


I can see that label applying if you're using a language that has weak dynamic typing, so that you have to be checking all manner of crap that the compiler should flag for you. Like the Python and JS unit test suites I've seen too many of.


Formal methods, in theory, don't eliminate errors. In practice, though, they do, as dramatically demonstrated by fuzzing the CompCert C compiler.


IIRC they were actually able to get CompCert to generate incorrect code when fuzzing due to a bug in a system header file.

Formal methods do (both in theory and practice) eliminate errors that can occur within the scope of what they cover. If there is an error in your model for the underlying system, then formal methods don't save you, but it seems obvious that eliminating one source of bugs reduces overall bugs.

Furthermore, if there is an error in your formal model of the underlying system, there would probably also be an error in the informal model your programmers would have in their heads while developing software without formal methods, yielding similar bugs.

All software should be tested; that doesn't mean formal methods have no value.


Agreed, each tool - process, framework, ritual, management layer, etc. - adds more complexity to juggle, which hides the original problem and/or the source code that does the actual work. It's why I like the direction the Go language and its proponents point towards: less tooling, a simpler language with sane defaults and a complete standard library, and a mindset to go with it (things like "Nope, you don't need a web application framework, just use the standard library first").


I would agree that additional tools like static analyzers and linters add extra work in the beginning, sometimes significant extra work. But once they're up and running, they greatly decrease the amount of work I have to do, in my experience.

I do like Go's sane defaults and awesome standard library, although I find it wanting as a language (Rust, which ships w/Cargo, also has sane defaults and is more what I look for in a programming language).


I'm sure this comment will get buried because I'm so late to the thread but oh well.

"Later, the guilty programmer thanked the lead developer for protecting him. He said: “I knew I shouldn’t have reused that code, but we were in a rush.” She smiled at him and told him not to worry about it."

This quote describes the real issue. Uncle Bob is treating software development as some kind of unified profession and making apples-to-apples comparisons across all software projects. But software development is not like being a civil engineer or being a doctor. Writing software is totally different from heart surgery or bridge building.

Software development is applied differently to all projects, and all projects have varying needs around quality and cost. If you are building a bridge, there is a high standard of quality every time. If you take a shortcut you may kill people, or more likely never pass inspection.

In software, sometimes you could kill someone, but far more often the quality of the software may not matter. There are times when it makes sense to cut corners, times when it makes sense to do a quick hack, times when your boss makes the decision to do something the worst way possible and fix it later, and sometimes that's ok. There are other times when it's not ok.

Every person on earth has an ethical obligation to not bring physical harm to others and should avoid doing so even at the cost of employment. But no one has an ethical obligation to write "good" code just for the sake of it. There are times when the best thing for everyone is writing software badly.

Maybe software engineers should have some widely disseminated ethical principles, but they have to be realistic and take into account the entire spectrum of software development. We have different ethical standards depending on the type of project we are working on. If you are a developer working on software for Airline Autopilot you have different ethical obligations from a developer working on Angry Birds 10.


I think the author completely misses the point of Uncle Bob's article for two reasons.

First, in "Tools are not the Answer", it is clearly stated that tools are valuable.

Second, Uncle Bob's article is not telling programmers to be better. It is telling programmers to be professional. It is not like telling drivers to drive better. It is like telling drivers to stop texting while driving.


:-)


Both "Uncle Bob" and the author are selling extremely complicated, cumbersome, expensive ways to develop software -- great for them as consultants if they can pull it off, not necessarily great for their customers. Giant near monopolies such as Microsoft or Google can afford these methods, may even benefit from this sort of super-bureaucratic "gold plating" of software and certainly do benefit from convincing small potential non-monopoly competitors to adopt these slow, expensive, bureaucratic methods that smaller firms cannot afford (they run out of money trying to create perfect software instead of software that works and gets the job done).


Perhaps if the methods are sound (even if expensive) then someone can step up and make tooling or refine them so the methods are more accessible to smaller firms. Uncle Bob's opinions might be discouraging people from exploring them and finding ways to drive down the cost of adopting these better methods.


> Perhaps if the methods are sound (even if expensive) then someone can step up and make tooling or refine them so the methods are more accessible to smaller firms.

To his credit, Hillel is trying to evangelize (in general, not directly through the post linked) a particular tool (TLA+) whose creator (Leslie Lamport) wants to do exactly this. I don't know how big Hillel's employer is, but from what I understand it's nowhere near the biggest engineering firm in number of technical employees. Lamport's current objective with TLA+ is to get people to learn concepts for formal description/specification of systems, and he happens to provide an effective tool for testing those system specs.

Check out his TLA+ tutorial at https://learntla.com.

If you want an example of solving concurrency issues, jump to here: https://learntla.com/concurrency/processes/.


We have about ten engineers, so nowhere near the size where people consider formal methods "appropriate". Nonetheless it's still been incredibly useful for our work.

I'm a pretty huge evangelist of TLA+, but I don't think it's the silver bullet of software correctness. It just happens to be the tool I'm most familiar with and the one I thought could benefit most from a free guide. If people start widely using TLA+, I'll be ecstatic. If people ignore TLA+ but start widely using Alloy, I'll still be ecstatic. Software correctness is a really huge field and there's lots of really cool stuff in it!

Speaking of making methods more accessible, I'm working on a tutorial about Stateful Testing. Hypothesis (https://hypothesis.works) is an absolutely incredible property-based testing library for Python, and I think it could potentially make PBT a mainstream technique. One of the more niche features is that you can define a test state machine that runs by randomly selecting transition rules and mutating your program state, then running assertions on the new state. It's really neat!
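To give a flavor of what that looks like, here's a minimal sketch (the counter is just a stand-in for a real system under test):

    from hypothesis import strategies as st
    from hypothesis.stateful import RuleBasedStateMachine, invariant, rule

    class CounterMachine(RuleBasedStateMachine):
        def __init__(self):
            super().__init__()
            self.real = 0    # the "system under test" (a stand-in here)
            self.model = 0   # a trivially-correct reference model

        @rule(n=st.integers(min_value=1, max_value=100))
        def increment(self, n):
            self.real += n
            self.model += n

        @rule(n=st.integers(min_value=1, max_value=100))
        def decrement(self, n):
            self.real = max(0, self.real - n)    # spec: never go below zero
            self.model = max(0, self.model - n)

        @invariant()
        def matches_model(self):
            assert self.real == self.model and self.real >= 0

    # Hypothesis drives random sequences of rules, checking the invariant at each step:
    TestCounter = CounterMachine.TestCase

When the invariant fails, Hypothesis shrinks the sequence of rule calls down to a minimal reproducing example.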


Perhaps as people who call themselves "engineers" we should move away from the idea that "gets the job done" is good enough? Especially when we have security breaches left, right, and center.


There is no obvious connection between the methodologies peddled by "Uncle Bob" or the author of the piece and security breaches. They are not claiming to be computer security experts. How exactly are unit tests or code reviews or similar rituals supposed to have prevented the numerous breaches by Julian Assange and his WikiLeaks associates, especially when most of the information appears to have come from insiders with authorization to access the data? (for example). In the Bradley/Chelsea Manning case the government has consistently implied that Manning had authorization to access the vast State Department archive of diplomatic cables, as implausible as that seems. I don't see how unit tests or code reviews would have prevented John Podesta from clicking on an obvious phishing link. :-)


From the original post that is being replied to-

"The author did not interview software experts like Kent Beck, or Ward Cunningham, or Martin Fowler."

I mean no slight, but are Beck and Fowler really the experts of the field?

This is akin to the relationship guru who has been through three divorces -- often just declaring yourself an expert fools enough people into thinking you are.


Fowler wrote some good books ten to twenty years ago. Today he writes as if he is not sure if he gets microservices or not, which might prove he is honest, but he is not giving the bold prescriptions which will make money.

Beck is up there with Tony Robbins so far as I am concerned. Cunningham is famous for running a Wiki which was down for two years.


This is unfortunately a very defensive article. The author would do better if he chilled out a bit.

I get why he's defensive: Martin calls his baby out in his article. And, although I think that Martin himself comes across as defensive (he is alarmed that the journalists didn't interview 'real' experts like himself), I also don't think he is wrong. In fact, as a practitioner for over 20 years, I've witnessed first-hand the gradual degradation of software discipline. I've seen too many companies do exactly as Martin describes, and it causes many issues: from critical bugs to developer burnout.

It's important to look at the root causes. I surely don't have all of the answers, but I do note the rise of 'startup culture' as a major issue. Even at big organizations, employees are now expected to run projects with extremely scant resources. The companies encourage this behavior, saying things like 'think of your program as a startup'. Startup is the new executive buzzword.

Startups are obviously great, but they take on extreme amounts of risk. And, since people only see the survivors, they naturally draw the (erroneous) conclusion that startups are silver bullets with guaranteed return on investment. Therefore, every project should be a startup! And, with that, we see skeleton teams with huge responsibilities and expected returns but a deficit in resources. Today, an engineering team of 4 people is expected to do the work of 16 people circa 1995. Team leaders accept this pain and manage it by sweeping issues under the rug. As a side note, it tends to cause people to be more unethical.

The examples that the author uses (Amazon, Microsoft) are not representative of typical software organizations today. Amazon and Microsoft have huge amounts of resources available. They have veterans and researchers able to take things like TLA+ and apply them to real projects. They have the ability to manage their risk. But a typical team is strapped for time, and they must rely on tools to save them at the expense of software discipline.

Anyway, my point is that investors want huge returns on investment. This causes sales and marketing teams to over-promise and under-price, which forces delivery teams to drive their projects with extreme risk. Extra people are too expensive, so just wing it. If you survive, maybe you'll get to build it right the second time. If you fail, well, rinse and repeat at the next place.


I can't imagine many other industries as rife with false economies as Software Engineering. There are so incredibly many opportunities to appear to get things done quickly today, only to find that you've hamstrung your future self.


The article just focused on Uncle Bob asking for "unit tests" in "Tools are not the Answer". But I believe Bob was asking a rhetorical question there. If someone is not writing unit tests in the first place, there's no point asking about any other kind of test; fewer and fewer hands would go up.


I stopped reading Uncle Bob's articles a few years back, after he started making money by telling programmers they need to be more disciplined to prevent bugs.

While he's not exactly wrong, he does dismiss any solution other than testing (such as strong type systems). One reason could be that he doesn't make any money from those solutions.

All his blogs and talks seem to send the message: "pay for my content and learn how to be a good programmer by writing tests". Again, not necessarily a bad thing, but there might be other ways than only hand-crafted tests.


Humans are really, really bad at static analysis. A 486 with 8 MB of RAM could outperform any human at this task.

The solution is more and better tools.


I agree with the author of the article to an extent - there are lots of things about Uncle Bob's advice I don't care for. But...

I agree with Uncle Bob's take that discipline is lacking in software development today. It's not that people won't make mistakes if they are more disciplined - of course they will - but the kind of mistakes people make might change. Every day I see code where even a cursory glance at the PR tells me there are issues. The developer didn't even do the barest due diligence to make sure they didn't break things.

Formal verification and other software correctness tools can be great in the right circumstance. I think anyone doing life-critical software should be using something above and beyond the basics.

But, for most projects I see - formal verification isn't really a solution to quality issues. Having developers who give a damn and have a little paranoia and pessimism ("how is this thing going to fail on me?") would do 100 times more for the outcomes. In my mind the best code is the kind you didn't write, and the best infrastructure is the kind you didn't need to stand up, and the best distributed system is no distributed system.


I've thought for a while the real solution to bad software is simple: hire intelligent developers who care

Only problem is, they are few and far between... (in my 10+ years in the industry, I've worked with only 3 people who I'd put into this camp)


Those intelligent developers who care need to make sure the less intelligent, more indifferent developers up their game. Tools and processes are put in place to mitigate risk from that second group, but it's the seniors that need to be able to transfer their mindset to the others. Which may not always work, mind you.

What should be avoided if possible is the hero developer; the risk there is that the hero dies or moves on and leaves behind unmaintainable work.


The risk? Hahaha. It's the reality. The hero eventually moves on and you're left holding the bag.


I'm still going to read the hell out of all of Uncle Bob's books. Every programmer has his faults, and programming is more of an art than a science at this point in history.


Uncle Bob's true value to me was to teach me what bad shape the software world is really in. Look at any engineering discipline. There are standards, there's a rigorous design process, there are checks from more experienced engineers, there are best practices, etc. Software is like the wild west, and if people don't realize that and try to create some sort of standard, then eventually someone with no knowledge of the field will. He's trying to encourage people to understand the problem and think towards creating rules so everyone can perform and collaborate better. Eventually software will become like other more established fields, but it's up to our generations to take us there.


I think his main point is that we don't see MBAs supervising the construction of a bridge, so why do we let them supervise the creation of software? A civil engineer has sufficient organizational clout to push back on unrealistic deadlines when managing a project, but software engineers don't have this level of recognition. The way to be recognized as professionals, according to him, is to start acting like professionals.


> if people don't realize that and try to create some sort of standard, then eventually someone with no knowledge of the field will.

I don't think so... No one's hunting down and calling to account the engineers responsible for the Equifax breach, and nor should they. The problem is blatantly a leadership and resource-allocation issue, and is being treated as such.


Bad software has done worse things than release private info. Just a few of them: failed space missions that cost NASA credibility, killed hospital patients, bankrupted Knight Capital, almost caused World War III due to a false-positive nuclear-launch warning, caused widespread blackouts, killed people (look up the Patriot Missile).

We as developers are just waiting for something so terrible to happen that someone is forced to take notice.


nah.

I think you could quite easily quantify the effect various tools (adding unit tests, using a REPL, static vs. dynamic typing, etc.) have on the various aspects of your code base:

- How quick/easy is it to add new features?

- How often do you add new bugs?

- How long does it take to fix bugs?

...but I think you could also look at the impact individual bad actors (the 'don't care' programmers) have on the same code base.

Now, because the 'don't care' programmers hack out rubbish at the speed of light, it may superficially appear that the net impact on the code base is balanced: more features, more bugs... but that rubbish slowly spreads out into the code base and incrementally triggers a cascading bug-creation rate for all the programmers.

Now, if you want to suggest that tools can add enough value to overcome the negatives from having rubbish programmers, I'm all ears, because I've never seen a tool that can do that. The bad programmers just abuse the tools or ignore them.

So... in that context, I think it's fair to say more tools achieve nothing much when bad programmers and bad programming practice result in such catastrophically bad outcomes for software.

Maybe, if you can make your tools nice enough and add enough processes you can kind of force bad programmers up a notch. I appreciate that; but the question really is, why are there so many programmers who are so rubbish? Why do they never seem to get better? Why don't they care?

Do you really think you can engineer a solution to that? ...because I think it's a social problem, not a technical one.

Maybe some tools can help, and maybe not everything Uncle Bob says is gospel... sure; but I think this is just stupid:

> Uncle Bob gives terrible advice. Following it will make your code worse.

I think history has objectively shown this is false; but of course, you're welcome to have your own opinion without any substantive basis other than that opinion itself if you like.


> Now, if you want to suggest that tools can add enough value to overcome the negatives from having rubbish programmers, I'm all ears, because I've never seen a tool that can do that. The bad programmers just abuse the tools or ignore them.

So much this.

This perfectly matches my experience (and desperation). Bad programmers will do their best to work around tool-imposed limitations that they don't understand. And these workarounds will generally increase code complexity!

- Force a minimum coverage ratio: they will create useless tests that don't check anything (no assertions - see the sketch after this list).

- Force a maximum on function sizes: they will split their methods halfway and pass the whole local state through parameters.

- Force a "no-global variable" policy: they will create singleton classes.

- Force a "no public member variable" policy: they will create set/get methods for their private members.

I'm not advocating for any of the above restrictions, they do more harm than good. Please, give me something I can put into my CI server that checks for bad coding practises, something that checks that the code stays SIMPLE.

Fact: you can inject a new disabled feature into a C codebase with zero risk of introducing any bug ... by wrapping your 50 modifications of the codebase into #if/#endif clauses. Guess what? This will make the code a lot more complex, will slow down every developer, and will increase the likelihood that they create bugs in other features.

Finding bugs is only the tip of the iceberg. What I want is to decrease the probability of their appearance by keeping the code simple. Bugs and code complexity are correlated. Only targeting bugs will break this correlation, and we will be left with correct but rigid code (very hard to modify). Even if my compiler perfectly prevents me from breaking anything, I still want to be able to add a feature in a reasonable time.

Let me summon Rich Hickey here: Simplicity Matters, and "nobody drives on the highway by bumping into the guard rails".


I think Uncle Bob is a great read just because he'll challenge some of your opinions and at least forces you to reason against his arguments. You won't have that if you only visit https://swiftisfreakingawesomeblog.com or http://www.igetpaidtolikethislanguage.org where everybody repeats your opinion.

There's not a lot in Clean Code that I wouldn't recommend to most programmers.


I strongly recommend against reading Clean Code, because it only applies to Java-like languages, and it doesn't really offer good advice anyway. It doesn't even properly apply to C++, and certainly doesn't apply to languages with different models (like Ruby or Python).

Uncle Bob has made a very successful business out of saying things that sound profound, become popular, but are at best the technical equivalent of fortune cookies, and at worst are profoundly bad advice.

Seriously, in TypeWars[1], he says “My own prediction is that TDD is the deciding factor. You don’t need static type checking if you have 100% unit test coverage. And, as we have repeatedly seen, unit test coverage close to 100% can, and is, being achieved. What’s more, the benefits of that achievement are enormous.”

This is so profoundly wrong on so many levels that it isn’t funny. How do you measure coverage? If you’re doing it by lines hit (which is how most dynamic language coverage tools do it), then you’re not getting any sense of how good your boundary conditions or branch coverage is. If you have a better way of measuring coverage (typically branch coverage), then maybe aiming toward 100% is good, but hitting 100% coverage is a complete waste of time. (I aim for ~85% coverage, depending on the code base and how much is boilerplate code.)
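A toy Python example of line coverage overstating what's tested:

    def classify(x):
        return "big" if x > 10 else "small"

    def test_classify():
        # One test yields 100% *line* coverage (every line runs)...
        assert classify(20) == "big"
        # ...but the `else` branch is never exercised. Only branch
        # coverage (e.g. coverage.py run with --branch) flags the gap.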

In Ruby, do you really need a unit test for:

    attr_accessor :foo
If you don’t at least access `foo`, SimpleCov will say that line isn’t covered. You know what? It doesn’t need coverage, because when you write a unit test for that you’re writing a unit test for the language.

Unless you're actively adding "attack tests" to make sure you have all of your boundary conditions covered, and at least know you can abort or recover from an error condition properly, your 100% coverage means absolutely nothing. Property checks are better for this. Fuzzing and mutation tests help find this. Even static typing (of which I'm not actually a fan, because 15 years with C++ makes it clear that static typing is not that useful in weakly typed languages) can help find certain kinds of boundary conditions (look to Ada, where if you declare a type as the range 1..100, only values within that range, or variables that can never exceed that range, can be used with it).

And people take Uncle Bob seriously, even when he posts things that are just asinine attacks on people who have taken him to task when he is wrong (which, IMO, is far more often than he’s right).

[1] http://blog.cleancoder.com/uncle-bob/2016/05/01/TypeWars.htm...


> it only applies to Java-like languages, and it doesn't really make good advice anyway. It doesn’t even properly apply to C++, and certainly doesn’t apply to languages with different models (like Ruby or Python).

I disagree strongly. Sure, there are a very few parts that are more relevant to Java. However, the vast majority of the book is easily applied to almost any language. There aren't many languages that don't have variable names, function/method names, modules, etc. Some of the OOP principles wouldn't easily apply to C, or other languages like it, but a lot of the other advice applies easily.


I read that. I disagree with his opinion. But that's not from Clean Code. And sure, the book is about Java, but I don't remember it being a great idea to have unreadable names, or classes that do a million things, in PHP either, for example. I never program in Java and I'm really happy about that, but a lot of the principles apply.

The unit test zealotry is the part of Clean Code I ignore. But the rest? Most of it is solid advice.


Every type declaration implies a set of unit tests, and needing to actually write them is not better in any way.
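A small Python illustration of the point (assuming a static checker like mypy):

    def mean(xs: list[float]) -> float:
        return sum(xs) / len(xs)

    # With a static checker, `mean("oops")` is rejected before anything runs.
    # Without one, the same guarantee is a test you write and maintain by hand:
    import pytest

    def test_mean_rejects_strings():
        with pytest.raises(TypeError):
            mean("oops")   # sum() over a str raises TypeError at runtime

The annotation and the hand-written test encode the same fact; only one of them comes almost for free.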


Defects in code do indeed come from sloppiness. Of course, real people are sloppy sometimes. So real systems need resilience in depth to help resist system effects from localized sloppiness.

The same set of arguments applies to security. The last few years of history teach us that nobody, not even state actors with deep pockets, can keep secrets forever. Don't we need to change our thinking about that, and make it so security breaches aren't uniformly disastrous? It's all about resilient SYSTEMS dealing with localized sloppiness.


"Don't test the UI" is not equivalent to "No end-to-end testing." The XP people, for example, prefer to make UI a thin shell over a programmable interface that can be easily tested. Then, visually confirming the UI operates the interface is enough.

I admit I don't read much Bob Martin, but does he say you shouldn't use automated functional testing, e.g., Fit or Cucumber?


As far as I know, he actually wrote FitNesse and recommends Cucumber.


After reading the post by Hillel Wayne I decided to read Uncle Bob's article that is mentioned in the post (http://blog.cleancoder.com/uncle-bob/2017/10/04/CodeIsNotThe...). After reading both posts, and after reading most of the comments on this thread, I have to admit that my inclination is in favor of Uncle Bob. I agree with most of the points Uncle Bob made in his blog post.

Tools are not the answer. TDD is not the answer. Agile is not the answer. More programming languages are not the answer. There is no silver bullet. For me the answer is to do the best I can, with the tools I have.

Never stop learning, always strive to improve my skills. And yes, it takes discipline.


I disagree. Neither of them is saying anything is "the answer". They are merely putting emphasis on different things.

From that perspective, I would say that Uncle Bob is more wrong. It doesn't make sense to emphasise discipline before best practices. Discipline is about following through on things and avoiding short cuts. That means in order to have discipline, you first have to have a procedure you are meant to follow in the first place.

Additionally, Uncle Bob seems to look at things from the perspective: "If you're undisciplined, better tools can't make up for that." Sure.

But, IMO, Hillel's perspective is a lot more useful: "If you are disciplined, better tools will help you do better work."


I agree that neither of them is saying anything is "the answer". I read my comment and don't believe I implied it either. What I'm stating is my opinion: For me the answer is to do the best I can, with the tools I have.

Never stop learning, always strive to improve my skills. And yes, it takes discipline.


"There is not now, nor has there ever been, nor will there ever be, any programming language in which it is the least bit difficult to write bad code." - Unknown


If all we needed to write robust software was discipline, we wouldn't need high level programming languages. All we'd need is the documentation of the instruction set that we want to target.

Why are we still taking this geezer seriously?


I think that automated testing is superior to any other tool/technique that we can use to avoid mistakes in programming. Can anyone argue with this? What is the alternative?


Static typing combined with functional programming is superior to automated testing, although the three are actually complementary: automatic, property-based testing works better if you have type info (so the library knows what values to generate) and referential transparency, since side effects can be awkward to specify as laws.

So it is not a coincidence that Haskell developers have innovated testing tools and are now sold on QuickCheck, those two working well together ;-)

Being the Scala developer that I am, I love ScalaCheck, but I've also used jsverify for JavaScript ... not as good as ScalaCheck or QuickCheck, due to the lack of static typing and type classes btw, but it did the job.
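For readers who haven't seen property-based testing, a "law" looks like this (Python/Hypothesis for illustration, stated against the built-in sorted()):

    from hypothesis import given, strategies as st

    @given(st.lists(st.integers()))
    def test_sorted_is_idempotent(xs):
        assert sorted(sorted(xs)) == sorted(xs)

    @given(st.lists(st.integers()))
    def test_sorted_preserves_length(xs):
        assert len(sorted(xs)) == len(xs)

The library generates many random inputs per law and shrinks any failure to a minimal counterexample; static types are what make the "generate valid inputs" part automatic in QuickCheck or ScalaCheck.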


Static typing or functional programming can protect you only from some types of problems. With automated tests you have the flexibility to define the problem you are verifying your code against.


By the same token, tests can only verify certain properties of a function. How could you test that a function is free from side-effects? How could you test it never returns null?


QuickCheck is actually older than the TDD movement!


There is no best tool or technique because there is no silver bullet.

And what do you mean by automated testing? What kind of tests? Unit, integration, regression, black box, white box?

Automated testing is only one tool, that every system should be using. Manual testing is obviously too tedious (though sometimes required) to do for all test sequences for most systems so of course we need to automate.

But what do we test? How do we develop the tests?

Do we test type invariance constraints? In a dynamically typed language you absolutely should! In a statically typed language, you might need to. In C I need to test that my int foo is within the expected range. In Ada I can define a new type Foo which will always be within the right range. Both are statically typed, one offers more than the other.

How do we define a unit? Do I want tests of every class, of every function? Do I write unit tests for private portions of classes? What should be hidden from users as "implementation details", but actually matters to me as the implementer.

How do we actually test out concurrency bugs? I might have a concurrency bug that shows up in 1 of 20 potential traces of the concurrent system. But practically only shows up 1 time in 1,000,000. That's impossible to show with tests with any reliability, but almost trivial to show with TLA+ and some other formal methods (if you can abstract the concurrent algorithm properly). Maybe the bug only occurs when I have at least 10 users on the system, but is still rare or hard to force into existence.

The alternative: Learn static typing. Learn math. Learn formal methods. Learn multiple concurrency paradigms. Learn multiple languages. Internalize their concepts. When you write a program, design the program. Maybe not a full formal spec, but you need to know what it should do or you cannot verify it. You cannot have any confidence in it.

You use a dynamic language like Ruby? Guess what, you can make it do something like Ada's type system! Instead of using a generic integer, create a new class that redefines + and - and other ops so that you get the desired properties (like range constraints or modular arithmetic) but still otherwise acts like an integer. You use a language like C that lets you clobber all over the memory in other threads? Great, learn about channels, semaphores, and message queues. Pick the appropriate subset (or a library if viable) and design your concurrent algorithms in a disciplined fashion.
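The same trick sketched in Python (a toy Percent type, purely illustrative):

    class Percent:
        """An integer constrained to 0..100, Ada-style."""
        def __init__(self, value):
            if not 0 <= value <= 100:
                raise ValueError(f"{value} is out of range 0..100")
            self.value = value

        def __add__(self, other):
            other = other.value if isinstance(other, Percent) else other
            return Percent(self.value + other)   # range re-checked on every op

        def __sub__(self, other):
            other = other.value if isinstance(other, Percent) else other
            return Percent(self.value - other)

        def __repr__(self):
            return f"Percent({self.value})"

    Percent(90) + 5    # => Percent(95)
    Percent(90) + 20   # => ValueError: 110 is out of range 0..100

Every operation re-validates the constraint, so an out-of-range value can never exist: a crude runtime version of Ada's range checks.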


I meant automated testing (of any kind) vs. no automated testing. Most of us don't do any kind of automated testing.


I'm finding it hard to imagine any place that does no automated testing, but then I started my career as a tester and later became a verification engineer before moving to development.

I'll agree with you: if you have no automated testing, then that should be the first priority for a dev shop. By this I mean, if testing is "follow this instruction sheet and do each step" and hoping you've made enough sheets and they're thorough enough, then no other tool or method is likely to help you as much as developing automated tests.


>if you have no automated testing then that should be the first priority for a dev shop

This is the key point.


Let's dismiss ideas, not people.


I think Uncle Bob is way off the mark here.

In Clean Code he hammered in the idea that unit tests form an automated specification of some software system under development. This was a decent idea at the time. So why on earth would he tell people not to use type-safe languages or tools for defining better, formal specifications?

Probably because he has something to sell?

We need better tools, processes, and "disciplined" methods of writing software if we're going to tackle complexity and produce systems that can be considered robust, reliable, and safe. Unit tests alone are never going to cut it. There's a reason Microsoft Research has been heavily investing in formal methods: imagine if the only specification for CosmosDB existed as a unit test suite. It'd have data-losing critical errors in it for years to come.

I liken Bob's post on discipline to the snake oil used to sell Christian religions: you were born sick, but if you believe in me, only I can make you whole. What a crock. There are plenty of smart, capable people out there writing perfectly good software. The problem in the industry isn't that developers aren't writing unit tests the way Bob Martin prescribes: it's that a great deal of hand-waving takes the place of following the state of the art and establishing processes for developing robust, reliable, and safe code.

The entire aerospace, space, and safety-critical systems disciplines in software development have been making huge investments in tools and languages that make the complexity of the requirements on these software systems manageable, correct, and possible to reason about. TLA+ is only one such tool, and I think Hillel is right to point it out: you need more than one tool. Your entire process has to be built around avoiding errors.

There's a reason why engineers designing roads stopped calling vehicle collisions "accidents." Once you start looking for the real root of the problem, the system itself, you start to optimize for different goals in order to reduce the negative outcomes in the design of the system. Vehicle collisions still happen because of human error but a great deal more happen because they were enabled by the system: the roads, the vehicles, the by-laws.

In a similar fashion, errors in software systems are enabled by the different choices we make: stakeholders, deadlines, requirements, time to market, etc. The ISO/IEC/IEEE 29148 guidelines on requirements engineering encourage you to consider these factors as part of your specifications. If a failure of your system will only cause nuisance or interference with the business's goals, then the risk is quite low and you might put more priority on time to market. However, if there's more risk of harming people, say by losing their personal information and putting them and the insurance industry at greater risk, then I'd say you need to build the effort to avoid those risks into your processes: use more formal specifications, use statically typed languages with sound type systems, write property-based as well as declarative tests, etc.

If you start treating software errors the way we started treating auto collisions, you'll start to see where you can reduce the negative outcomes. Stop calling software errors "bugs," just like we stopped calling collisions "accidents." See errors as risk and look for ways to reduce your risk. Know where some risk is acceptable.

Uncle Bob is only calling programmers "undisciplined" because he has a few more books to sell to help them get better.


Well, that's not the _only_ reason. ;-)


Hah!

I have read your books and used some of that material to train several teams over the years to practice test-driven development. So thank you for the inspiration and drive.


Just some cat fight.

One guy likes doing things one way, the other does not.

Ignore.


<sigh>

More nihilism, this time in programming circles.

It's a turbulent age. I can't wait for it to pass.


Can we stop talking about Uncle Bob? He's like the old, weird, drunk, racist uncle at a wedding. His ideas haven't really been relevant for about 10 years, and he is increasingly posting ranty, politicised, attention-seeking diatribes...



