I got involved with this stuff before the term "Agile" existed. At the beginning, it was a bunch of professionals (mostly developers) sincerely trying to find better ways of working. Extreme Programming, for example, came to be because the developers were genuinely interested in experimenting with how their team got stuff done.
It breaks my heart that in the ensuing decade it has turned into exactly the kind of bullshit, top-down, PHB-fluffing idiocy that early agilists were trying to get away from. If you look at the Agile Manifesto, the focus is supposed to be on a) people, b) shipping working software, c) collaboration, and d) being adaptable. That is sure not what it has become.
In my view, we made a crucial mistake: we didn't think about money and power enough. Now the Agile industry is 98% selling idiotic certifications and homeopathic doses of process improvement to organizations that don't really want to change anything at all.
Mad. It makes me mad. Sorry we screwed it up, everybody.
The problem was a wave of consultants who:
a) made Agile rigid, based on their own definition
b) declared themselves experts in making teams agile by that rigid definition
c) charged for it
PS. These same guys, once they couldn't squeeze any more blood out of the Agile stone, moved on to a new marketing term, "craftsmanship". They now charge the same clients even more money to teach them this "new way of doing it ... right".
Plus, they make a mint on the books they hastily write and push out.
I eagerly anticipate the successor to Craftsmanship.
I'm reflecting on how the same "agile experts who sell/sold their services" quickly abandoned selling that and moved on to selling "craftsmanship" once their Agile well dried up.
cough Bob Martin cough
cough Object Mentor cough
Also, it's very, very rare for somebody to make a mint on a software book. I've talked with a number of authors, and their universal view is that writing code pays much better than writing a book. You do it because you have something to say, not because you want to get rich.
Unfortunately, you can be totally sincere in your good intentions and yet still be repeatedly wrong. When you are a high-profile figure who presumes to advise others on the best ways to do their job, that makes you a liability.
It's a shame. Some of Bob Martin's earlier work exploring OO and the SOLID principles was quite decent stuff. But I think it's obvious at this point that he and several of his colleagues at Object Mentor have collectively lost the plot.
The big problems I saw, though, came from people who weren't particularly sincere. They were happy to sell whatever large companies were buying. E.g., two-day "Scrum Master" courses and a splash of Agile holy water to bless whatever top-down idiocy a company was already engaging in.
- Object Mentor are big advocates of XP. The fundamental premise of XP is that if a certain practice is good, then doing more of it must be better. There is no logic in that position at all, and it doesn't stand up to even cursory criticism. Moreover, if XP is as superior to other processes as the typical advocacy quotes and statistics imply, how come organisations using XP aren't consistently reporting dramatically better measurable results, and how come so few software development groups have chosen to adopt it? Sooner or later, people notice that the emperor has no clothes. (I suspect this is why we now have Software Craftsmanship: it's a new positive-sounding but conveniently meaningless marketing term to pitch to clients.)
- Bob Martin has repeatedly stated that anyone who doesn't do TDD is unprofessional. Yet safety-critical software is typically not developed using TDD; in fact, formal methods, BDUF, and other decidedly non-Agile processes are often used in such fields.
- Michael Feathers redefined the term "legacy code" in terms of unit tests. There are decades of research studying what actually causes a project to decay to the point that it is difficult to maintain and update. To my knowledge, a lack of unit tests has not yet been cited as a causal factor by any paper on the subject. (FWIW, I do think Feathers' book on the subject offered some interesting and worthwhile ideas, I just don't accept his premise that having unit tests is what defines whether code is legacy or not for practical maintenance/project management purposes. I think when you try to co-opt an ill-defined but commonly understood term and give it a formal definition that is very different to the mainstream concept, you lose some credibility.)
- Brett Schuchert, a man writing a book on C++, managed to make "Hello, world" take five source files and a makefile totalling nearly 100 lines, using TDD of course.
- Ron Jeffries. Sudoku. Probably enough said. TDD is not an alternative to understanding the problem and how you're going to solve it.
- From a post on the Object Mentor blog, Brett Schuchert apparently advocates pair programming based on a 1975 study of something involving two-person teams, a ten-year-old study of university students, and a couple of links to secondary sources. The original research for almost every one of the primary sources he appeals to either directly or indirectly is no longer available at the cited links less than 18 months later.
- Bob Martin thinks there are no more new kinds of programming language left to find. That's roughly on par with equating Haskell and Brainfuck because they're both Turing complete, and shows a complete lack of awareness of the state of the art.
- When it comes to the amount of up-front design and formal architecture that makes sense for a project, the amount of retconning in recent comments from the TDD guys is laughable. There was a particular interview featuring Bob Martin and Jim Coplien a couple of years back that was almost painful to watch.
I could go on, but if that lot doesn't paint a clear enough picture for anyone reading this, I don't have a powerful enough Kool-Aid antidote to help them.
I do agree with you about the insincerity. That's worse in theory, but unfortunately it's probably no less damaging in practice.
Edit: Here are a few links to support some of the points above.
That team found that they really liked turning particular practices way up. But you can't turn all the knobs up, so you are implicitly turning others down. E.g., if you turn up iteration speed, then you are turning down the sort of heavyweight waterfall requirements process ubiquitous at the time.
So the "extreme" was a way to explore the space of possible processes, not any sort of fundamental principle. Teams trying XP are explicitly encouraged to experiment similarly. I sure have; the process we use is derived from XP but departs from it in a number of areas.
I think a lot of the rest of your points are similar misunderstandings along with some cherry picking. E.g., the OM blog post on pairing. He said that people sometimes asked him for basic background materials, so he posted some links. To go from that to "Brett Schuchert apparently advocates pair programming based on.." is either very poor reading comprehension or the act of somebody with an axe to grind.
As to not doing TDD being unprofessional, I'd generally agree. I tried TDD first in 2001, and have worked on a number of code bases since. For any significant code base that's meant to last and be maintainable, I think it's irresponsible to not have a good unit test suite. I also think there's no more efficient way to get a solid suite than TDD.
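For anyone reading this who hasn't tried the practice, the loop I mean is small and mechanical. Here's a minimal Python sketch of one red-green-refactor pass; the `slugify` example is mine, invented purely for illustration, not anybody's published material:

```python
# Red: write a small failing test first, stating the behaviour we want.
def test_slug_replaces_spaces():
    assert slugify("Hello World") == "hello-world"

def test_slug_strips_punctuation():
    assert slugify("Agile, Inc.") == "agile-inc"

# Green: write just enough code to make the tests pass.
def slugify(title):
    # Keep letters, digits and spaces; drop punctuation; join words with "-".
    cleaned = "".join(c for c in title.lower() if c.isalnum() or c == " ")
    return "-".join(cleaned.split())

# Refactor with the tests as a safety net, then repeat for the next behaviour.
test_slug_replaces_spaces()
test_slug_strips_punctuation()
```

Each pass adds one failing test, just enough code to pass it, and a cleanup step with the growing suite as a safety net; the suite you end up with is a by-product of having driven every behaviour that way.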
If you (or anybody) wants to discuss this further, probably better to email me; that's easy to find from my profile.
Well, of course they're entitled to their opinion, but that's all it is: an opinion. An argument that if some testing is good then test-driving everything must be better, or that if code review is good then full-time review via pair programming must be better, has no basis in logic. And those kinds of arguments go right back to the original book by Kent Beck, and they have been propagated by the XP consultancy crowd from the top right on down ever since.
IMHO, if a trainer is going to go around telling people that if they don't program a certain way then they are unprofessional, then that trainer had better have rock solid empirical data to back up his position. Maybe as you say, I do have a giant misunderstanding, and in fact Object Mentor do make their case based on robust evidence rather than the sort of illogical arguments I've mentioned. In that case, I assume you can cite plenty of examples of this evidence-based approach in their published work, so we can all see it for ourselves. Go ahead; I'll wait.
This is a consultant who presumes to tell others how to do their job, openly posting asking for any source material from others to back up his predetermined position, and then claiming in almost the very next sentence to favour material based on research or experience. He says that the links he gave (the ones where much of the original research is either clearly based on flawed-at-best methodologies or simply not there at all any more) are things he often cites. And he gives no indication, either in that post or anywhere else that I have seen, of having any library of other links to reports of properly conducted studies that support his position. I don't think criticism based on this kind of post is cherry-picking at all, but of course if it is then again you should have no difficulty citing lots of other material from the same consultant that is of better quality and supported by more robust evidence, to demonstrate how the post I picked on was an outlier.
The same goes for any of my other points. If you think I'm cherry-picking, all you have to do to prove it is give a few other examples that refute my point and show that the case I picked on was the exception and not the rule. If you can't do that -- and whether or not you choose to continue the debate here, you know whether you can do that -- then I think you have to accept that I'm not really cherry-picking at all.
Please note that I'm not disputing that an automated unit test suite can be a useful tool. On the contrary, in many contexts I think unit testing is valuable, and I have seen plenty of research that supports such a conclusion more widely than my inevitably limited personal experience.
On the other hand, I don't accept your premise about TDD. For one thing, TDD implies a lot more than merely the creation of unit tests. Among other things, I've worked on projects where bugs really could result in very bad things happening. You don't build that sort of software by trial and error. You have a very clear statement of requirements before you start, and you have a rigorous change request process if those requirements need to be updated over time. You might have formal models of your entire system, in which case you analyse your requirements and determine how to meet them at that level before you even start writing code. At the very least, you probably have your data structures and algorithms worked out in advance, and you get them peer reviewed, possibly by several reviewers looking from different perspectives. Your quality processes probably do involve some sort of formal code review and/or active walkthrough after the code is done, too.
If you came into an environment like that, and claimed that the only "professional" thing to do was to skip all that formal specification and up-front design and systematic modelling and structured peer review, and instead to make up a few test cases as you went along and trust that your code was OK as long as it passed them all, you would be laughed out of the building five minutes later. If you suggested that working in real time with one other developer was a substitute for independent peer review at a distance, they'd just chuck you right out the window to save time.
TDD is not an alternative to understanding the underlying problem you're trying to solve and knowing how to solve it. A test suite is not a substitute for a specification. Pair programming is not a substitute for formal peer review. They never have been, and they never can be.
I haven't gone into it here, but of course there are other areas where TDD simply doesn't work either. Unit testing is at its best when you're working with pure code and discrete inputs and outputs. It's much harder to TDD an algorithm with a continuous input and/or output space. Tell me, how would you test-drive a medical rendering system, which accepts data from a scanner and is required to display a 3D visualisation of parts of a human body based on the readings? Even if this particular example weren't potentially safety-critical, how would you even start to test-drive code where the input consists of thousands of data points, the processing consists of running complex algorithms to compute many more pieces of data, and the observable output is a visualisation of that computed data that varies in real time as the operator moves their "camera" around?
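To make that discrete-versus-continuous point concrete, here is a minimal Python sketch; the `resample` helper and its values are invented for illustration, not taken from any real rendering system:

```python
import math

def resample(scanline, x):
    """Linearly interpolate a 1-D scan line at a continuous coordinate x.
    (An invented stand-in for the continuous-output code discussed above.)"""
    i = min(max(int(math.floor(x)), 0), len(scanline) - 2)
    t = x - i
    return (1 - t) * scanline[i] + t * scanline[i + 1]

# Discrete input: an exact assertion is easy and meaningful.
assert resample([0.0, 3.0], 1.0) == 3.0

# Continuous input: exact equality is already brittle at a single point...
value = resample([0.0, 3.0], 0.1)       # mathematically 0.3
assert value != 0.3                     # floating-point error: 0.30000000000000004
assert math.isclose(value, 0.3, rel_tol=1e-9)
# ...and a handful of point checks like these say nothing about the
# uncountably many other coordinates, which is the real difficulty with
# test-driving rendering-style code.
```

Tolerance-based assertions patch over the floating-point issue, but they don't supply the missing piece: an oracle for what the "right" output of a complex visualisation actually is.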
If you (or anybody) wants to discuss this further, probably better to email me; that's easy to find from my profile.
I appreciate the offer, but I prefer to keep debates that start on a public forum out in the open. That way everyone reading can examine any evidence provided for themselves and draw their own conclusions about which positions stand up to scrutiny.
That's not the argument at all. That is, as I just said, the reason they decided to try that. Their reasons for continuing to do it and further to recommend it are entirely different.
[...] better have rock solid empirical data [...]
You do realize that almost everything that goes on in the industry is not based on rock-solid empirical evidence, right? And also, that you're privileging an arbitrary historical accident by saying that new thing X has to have evidence when the common practice doesn't?
If you came into an environment like that, and claimed that the only "professional" thing to do was to [...] make up a few test cases as you went along and trust that your code was OK [...]
That is not something I have ever heard any Object Mentor person say, and it's not something I said. It's so far from anything I've ever heard somebody like Bob Martin or Kent Beck say that I have a hard time believing the misunderstanding isn't willful.
I prefer to keep debates that start on a public forum out in the open.
Well, I'm not trying to have a debate. If you'd like to have one, you'll have to do it without me.
So you keep saying. The problem is, almost everything Object Mentor advocate does seem to be based on some combination of their personal experience and pure faith. I object to someone telling me that my colleagues and I are "unprofessional" because we happen to believe differently, particularly when we do have measurable data that shows our performance is significantly better than the industry average.
You do realize that almost everything that goes on in the industry is not based on rock-solid empirical evidence, right?
That may be so, but most people in the industry aren't telling me how to do my job, and insulting me for not believing the same things they do.
That is not something I have ever heard any Object Mentor person say, and it's not something I said.
Good for you. XP consultants have been making essentially that argument, in public, for many years. TDD without any planning ahead is inherently a trial-and-error approach, which fails spectacularly in the absence of understanding, as Jeffries so wonderfully demonstrated. Plenty of consultants -- including some of those from Object Mentor -- have given various arbitrary amounts of time they think you should spend on forward planning before you dive into TDD and writing real code, and often those periods have been as short as half an hour. You may choose not to believe that if you wish. I'm not sure even they really believe it any more, as they've backpedalled and weasel-worded and retconned that whole issue repeatedly in their recent work. But I've read the articles and watched the videos and sat in the conference presentations and heard them make an argument with literally no more substance than what I wrote there.
You keep saying that I'm misunderstanding or cherry-picking evidence. Perhaps that is so and I really am missing something important in this whole discussion. However, as far as I can see, throughout this entire thread you haven't actually provided a single counterexample or alternative interpretation of the advice that consultants like those at Object Mentor routinely and publicly give. You're just saying I'm wrong, because, and there's not really anything I can say to answer that.
And let me just leave this here... http://jamesshore.com/Blog/The-Decline-and-Fall-of-Agile.htm...
Like many early Agile adopters I have been shocked and saddened by what it has become. I think the problem is that brainfucked corporate IT programmers (Java, .NET, whatever) have jumped on the bandwagon and driven it into the sea.
I was recently smacked around the head by this in a meeting about how a team was going to implement Jira ticketing to help improve their development process. I am not entirely sure what I was expecting, but I was totally, totally unprepared for the mind-stunning crap that I got: an endless series of slides of things called "workflows", which apparently document the life-cycle of a ticket and need to be mapped out in excruciating, gory detail. I decided to raise my hand and ask a simple question: "how do I just raise a ticket and assign it to someone?" The presenter gave me a stunned "why would you want to do that??" look and explained that it doesn't work that way; first you need to assign everyone into teams and roles (like tester, developer, etc.), and then the workflow will decide where the ticket goes based on the kind of ticket and its status. Mind blown. I could almost appreciate how such a system might be of use on a really large project where you don't know and interact with most project members directly -- they had a team of 8 people, including testers and analysts. And in their minds they were doing Agile.
In another case I was arguing for the use of the Jira Fisheye plugin to enable meaningful access to version control, for things like code review and release diffs. The so-called "Agile practitioners" had a different idea, however: they were keen on a different plugin called Greenhopper, which apparently does Agile process management and the like. I knew something was wrong when it was mentioned that it does nice Gantt charts. I probed a little deeper into why they didn't see the value in Fisheye and found that their actual version control practices were a joke: ludicrously large commits with minimal comments just before a release, no consistent release tags, broken commit history (they seemed to be deleting and re-creating their trunk after every release... what, I don't even...). Advanced process/project management techniques over getting basics like version control right? That doesn't sound like Agile in any universe I am familiar with.
I really struggle to think of any other example of a worthy movement being so thoroughly corrupted and debased into something almost diametrically opposed to its original vision. Fuck Agile programming and what it has become. I think I'll stick with being a Pragmatic programmer.
Of course not. But in my experience and observations, corporate IT programmers, especially those with a Java background, are the worst offenders when it comes to abusing the Agile concept and failing to get the basics anywhere near right.
"And only small teams can be Agile? Amazon.com would disagree with you, for one."
I am not saying that, but I think it is a good question. I have never seen a team of hundreds coordinated to an outcome using an Agile approach. I have been involved in large projects with up to 300 people (think multiple large teams of business analysts, developers, database designers, architects and testers, plus a little hierarchy of IT project management) working towards a single outcome using a highly managed, strict waterfall approach over several years. It wasn't pretty, but we got there in the end, and I would not swear on the Bible that we could have done the same using Agile.
If Amazon is really using an Agile approach to coordinate hundreds of people working on a single project, I would love to read about it.
Rather than teaching people processes, they should help foster better ways of actually thinking about the code itself. E.g., learning new programming languages and frameworks gives one a variety of perspectives to think about code from.