It's a hallmark of "experienced", non-dogmatic product people (UI/UX/Dev) that they can use their intuition to know which happy paths they need to test, which interfaces are unlikely to change (e.g. a user is almost always going to be a member of an organization, so unit tests around that are probably not waste), and what level of quality to build into a given feature relative to the probability that the feature is going to exist in perpetuity.
You can concretize this by calling it "spike driven development" if you want (that's what we do), but the point isn't that TDD is faster or slower; it's that high coverage may be inappropriate at different phases of the build/learn/refine cycle (TDD isn't binary - the degree of test coverage runs from 0 to 100%).
For example, if we're building a speculative feature that we think has value but hasn't been proven yet, we want to spend the least amount of effort (code/time/money/pick some) possible to prove that it's viable.
Small bugs aren't really that interesting (provided the thing "works") and so we're explicitly willing to live with degraded quality to prove the feature is worth further investment. We're also explicitly not building a "large" feature (so it's very likely to get coded in a day or two) so the surface area for major showstopper bugs is minimized.
Often the feature will be thrown away or majorly refactored into something that is completely different.
In this case, full-bore 9x% coverage TDD is probably waste: the feature existed for a short period of time, the interface changed dramatically, or the idea was proven invalid altogether. High test coverage makes major interface refactors really expensive, and you don't need that level of quality for speculative features.
After you prove this feature is a "thing" you want to keep around (and you've nailed down the interface), then it's a perfect time to re-write it with a much higher degree of test coverage.
It really depends on the type of functionality that you're talking about. So many web apps (at least the kind that I seem to work on the most) just consist of data storage and fancy presentation on the screen. Not much worth testing there, especially when the testing frameworks are immature and can suck you into time-consuming, head-scratching, yak-shaving hours of debugging and figuring out what is wrong with your TESTS. Been there, done that... my views on testing are much more pragmatic than they were in the past.
> After you prove this feature is a "thing" you want to keep around (and you've nailed down the interface), then it's a perfect time to re-write it with a much higher degree of test coverage.
I find that tests help me write things that are complete and "nailed down" the very first time.
You must have read that with a different definition of "interface" than I did. Under my reading of "user interface", I don't see any way that a developer writing tests has anything to do with how nailed down it is. You need user feedback for that.
I know this author is advocating it, but he has zero solid data or evidence to back this assertion. He is just trying to sell another religion. What I have noticed is that TDD is being peddled heavily by RoR contracting shops that just care about billable hours and aren't invested in whether the startup or company is going to make it in the long run. I often see that it is the youngish, impressionable engineers who fall into the trap of eating it up. TDD is something that makes younger and less experienced developers feel good (more LOCs) and gives them a false sense that the code is correct, while there is solid data that even 100% unit test coverage finds only 20% of defects, at best.
I have shipped many world-class products, and none of the processes that built those products bore any resemblance to TDD. So, to all the pragmatists out there: feel free to ignore the advice in the article. I have yet to hear of a super successful startup that used TDD.
Having tested code (which is a good goal) != TDD. TDD, to me, is putting the cart before the horse. Ask anybody who has created large systems from scratch: TDD is a major slowdown on the refinement process. Your job is to SHIP a working product. Testing is only a small part of it, and its positives should be weighed against its negatives and side effects on shipping timelines.
Having worked for over 10 years in the industry, I have yet to meet any great programmer who preferred TDD. I hate to attack the author, but from his bio he seems a bit like a process peddler. I don't see that he has worked on any successful startups, or anything that had a good exit. So take his claims with a grain of salt.
http://www.8thlight.com/our-team/robert-martin
There are two things that I dislike about these articles.
The first is that they ignore or gloss over the fact that there is no shortage of world class software written without TDD. I also have yet to see any great programmer preach the virtues of TDD.
The second is that they blur the line between TDD and unit testing, maybe intentionally. I don't use TDD, but that doesn't mean I don't write unit tests. I just don't write unit tests all the time, and I don't write them first most of the time. In fact I think the advocates would have better success if they advocated plain unit testing instead of TDD.
What irritates me though is that several valid criticisms of TDD have been published (1), and in some cases the advocates (Uncle Bob included) were given an opportunity to respond, but did so poorly, or just didn't bother at all. This sends a message that they are trying to sell something.
TDD has its place, but it is not always the right tool, and it is certainly not a silver bullet.
With regard to Dalke's complaints, I participated in that thread a bit; but others were doing a better job than I so I walked away. In short, Dalke's complaints were the complaints of someone who had little or no experience with TDD; and so weren't really significant complaints. Profit's paper complained more about the experimental procedure of Phil Hack's paper in support of TDD, than about TDD itself.
The reference to a "silver bullet" is a straw man. TDD is not a silver bullet. TDD doesn't keep you from screwing up. TDD will not guarantee success. But adopting and following the discipline of TDD can be remarkably beneficial, and shift the odds more towards success.
I work at one of those "contracting shops" (Pivotal Labs), and I think you're misunderstanding the reason we push TDD so damn hard: It's not about speed. It's about predictability.
Good TDD forces you into building loosely-coupled code that's easy to refactor and change. When new business requirements come in, the effort for implementing them isn't increased because of your previous code getting in the way. There's an overhead to writing the tests, but it's a fixed continuous cost. As a result, you're always in a position where you can ship a new feature in a predictable (and reasonably fast) fashion.
Not doing TDD often leads to tightly-coupled brittle software, which can be very fast to implement but also difficult to change down the road. It certainly doesn't have to, but in reality that's what happens 90% of the time.
There are a few successful startups that use TDD (Taskrabbit and Modcloth spring immediately to mind). However, the real question you should be asking is, "Which startups died because they got buried under the weight of their own codebase?" That's a long and very depressing list. In many of those situations, some TDD might have helped.
In short: TDD won't make your startup healthy, but it helps ward off some of the more common fatal diseases.
I don't understand this. You give a disclaimer saying TDD won't make your startup healthy, but then your second paragraph implies that "Good TDD" is a silver bullet by "forcing you" into building "loosely-coupled code that's easy to refactor and change".
TDD can test a monolithic `public static void main(String[] args)` just as well as the most dainty collection of Python scripts. In fact, assuming you wrote the two to behave the same, your tests wouldn't know the difference. Isn't one of the larger problems of TDD that it simply tests quantifiable results rather than profiling the machinations of the code?
Writing truly good code is hard. It takes time, practice and a wealth of knowledge... No "process" (as the GP states) will shortcut these requirements for you.
You can have a polished high quality codebase, but still have a dying startup because you've built something that nobody wants. Conversely, having a desirable product makes up for a vast multitude of sins.
However, technical debt is expensive. Having a clean well-factored codebase makes changes cheaper, which means your company will have lower overhead and be more responsive to market demands.
As far as "forcing you to build loosely coupled code", good TDD should involve a thin layer of integration tests (Selenium/Capybara/Whatever), which drive out unit tests for the individual components. If you let the tests drive your code design, and follow the "Red -> Green -> Refactor" workflow, it tends to shepherd you into writing small easily testable functions and objects that are loosely coupled.
You can also use TDD to salvage crappy code, and derive good design even after the damage has been done. For a beautiful demonstration of this process at work, I strongly recommend Katrina Owen's video on "Therapeutic Refactoring". http://www.youtube.com/watch?v=J4dlF0kcThQ
Of course, there's no substitute for having good technical instincts. I couldn't agree with you more on that point. TDD isn't a silver bullet. It's just a damn useful tool, and more startups should be using it.
> "Which startups died because they got buried under the weight of their own codebase?" That's a long and very depressing list. In many of those situations, some TDD might have helped.
Could you give some examples? PG doesn't include that in his list: http://www.paulgraham.com/startupmistakes.html, and in my personal experience I've never met a startup that failed because they had an unmanageable codebase.
Quite a few actually. I can name two off the top. You've probably never heard of them (why would you, they failed). Saber, and Lucid. Oh, there are certainly others. What kills them is a code base that's too tangled to manage. Estimates grow by an order of magnitude, costs skyrocket, defects mount, customers flee, investors cut their losses.
> Not doing TDD often leads to tightly-coupled brittle software, which can be very fast to implement but also difficult to change down the road. It certainly doesn't have to, but in reality that's what happens 90% of the time.
This is flatly wrong. It may be a tool for productive programmers to keep their code on the right track, but assuming that loosely-coupled code isn't written without it is a huge overreach. Certainly, tests do help, but TDD is just a tool in the toolbox. I have seen many very talented programmers and many of them do not need TDD to produce stable, well-architected projects.
I can only speak from my experience (20 years) of developing web apps, but I've never seen a cohesive, loosely coupled web app, that was both over 2 years old AND not using some form of TDD.
Phrased that way, I agree completely. An established codebase needs to be verified in some ways. But writing code and running it against tests is not exactly equivalent to TDD.
TDD as I understand it is characterized by writing a failing test, followed by implementation code, followed by test-fixing, etc.
However, I can write plenty of good code that does what it's supposed to and works and is stable. THEN I'll refactor as needed, write tests, and since I anticipated my needs, making those tests good and the code testable will be relatively straightforward. That's not TDD, though. It's a pragmatic approach that doesn't prioritize setting requirements (or solidifying an API) over starting simple and iterating quickly.
Funny, I had a comment ready before Matt's that started "I can only speak from my experience..."
So, yeah...in my experience this has played out exactly so: in practice, coders who ignore TDD are probably also ignoring a lot of other good practices. The two principles may be "orthogonal" in theory, but the correlation has strong anecdotal support for me.
FWIW, Robert C. Martin is a talented and well-respected guy who has been writing code for more than 30 years. He has shipped plenty of working, reliable products.
He means something very specific when he talks about TDD, and while he's certainly emphatic about it, I don't think he's wrong. He might not be right either, but he also isn't lacking in "solid data or evidence to back up this assertion".
http://vimeo.com/43536488 is a talk he gave at NDC2012 that you might want to watch and ponder before dismissing the dude. He might be peddling, but he practices what he preaches while associating with people who care deeply (and publicly) about maintainable software.
It's when he ventures into the territory of "startups" that I start to question his experience. He's definitely talented and experienced, but it seems to me that the vast majority of his experience is in consulting -- which is a very different beast than working in a startup.
I had an exchange on Twitter with him today in which I asked what startups he was involved in, and he said that he consulted for several, and then said 8th Light (the consulting firm he works at) is one. To me, that's not relevant experience.
If he wants to make the argument that TDD makes for better software, then fine. I just strongly disagree that it's worth the cost in an early startup environment.
I have 20 years of experience developing web apps, most of which was at startups. Some succeeded despite not doing TDD, others failed despite doing it.
I don't think doing TDD alone will make or break a startup.
But I will say this - when your startup requires 40 developers to maintain the "festering pile of code" instead of the 4-6 that should be required, you are wasting investor dollars.
When prospective candidates for employment see your code and run away from the interview, you are wasting time & money.
When your developers get burnt out dealing with that pile of crap, and your annual turnover exceeds 100% you are wasting time, money and experience.
All of these things I have seen happen, personally, at companies I worked for.
And skipping TDD doesn't help you go faster. The only timescale I've seen where it seems that TDD slows me down, is on the order of minutes. Even after working a couple of hours, I'm ahead of the game because my code works - I don't spend time with a debugger, and I won't have to do a week of refactoring next month just to add a new feature.
You've said in a much more concise way what I took far longer with upstream. I can only beg off that I think my coding practices are better in this regard. :)
For the record, I have been a software developer since 1970, and have worked at several different startups. One in 1976 which was a great success; and another in 1987 which muddled and eventually failed. I have started several of my own companies since 1990. I have recently founded cleancoders.com, which is a very successful video training startup using TDD for the website. I also work for 8th Light which is a consulting company started by my son 6 years ago, and is dedicated to TDD.
I have been practicing TDD since 1999; and have found very few situations in which it does not help me be both better and faster at my job. I would certainly never hire anyone who did not practice it.
I don't understand how your experience at startups in 1976 and 1987, which happened 23 and 12 years respectively before you started practicing TDD, gives you any information on the use of TDD in a startup environment. Consulting doesn't count, because theoretically you've sold your client on TDD's value and they're willing to pay.
Again, understand that I am not challenging your overall experience or skill. I'm not even arguing that TDD is a bad practice or that testing doesn't lead to better code. I'm saying that you are ignoring the most important factor in the success of most startups: the ability to get a product to market quickly and iterate rapidly to solve a problem that is not well understood.
I started a (relatively) successful startup called AgileZen in 2009. When I started developing the first version, I obsessively wrote tests. That lasted about two weeks before I deleted the entire battery of tests.
Now, I believe that testing helps create good software. So why would I trash the tests? Because it was a bootstrapped startup. We had no revenue coming in and I was burning through the small amount of money we had in the bank. We hadn't launched our product yet. We weren't even sure anyone would give a shit once we did launch. I couldn't afford to spend time writing automated tests around code that wasn't validated from a business standpoint.
Fortunately for us, people did care, and we grew very quickly before being acquired by Rally Software in early 2010.
I am a firm believer that if I had stuck to "best practices" and maintained the tests while trying to get our product to market quickly, we may not have succeeded in the fashion we did.
This disconnect between your post and my personal experience leads me to believe what I originally said on Twitter: if you really believe that TDD is a requirement for a startup to be successful, I don't think you've ever really experienced what it's like to work in a startup.
And, for the record: I would never hire anyone who insisted on practicing TDD without enough pragmatism to consider the cost versus the value, and time the introduction of automated tests appropriately.
TDD helps make your code better. In your case, it sounds like the key criterion was not that the code work well, but that it let you validate your belief in its business value, with the minimal possible time/money investment. I would agree TDD wasn't applicable, because it didn't provide the specific value you needed.
I would say, though, that once you reach the point where good code becomes important (and even startups face that need), TDD can help you write better code in less time with less effort. Please note the use of "can help"; IMHO neither TDD nor any other methodology will guarantee success.
The earlier startups ('76 and '87) gave me the experience of what a startup is like. I _was_ the warrior god. I worked 60, 70, 80 hours per week. I _knew_ I could overcome all. In one case it worked, and in another it failed.
By reflecting on those experiences, and many other experiences in companies large and small, I have since learned a great deal about programming; and about human nature. One of my greatest realizations is that you cannot go fast by rushing. Every shortcut you take slows you down. Every attempt at cutting corners adds more corners. And so I adopted the mantra: "The only way to go fast is to go well."
In 2002, when I started the FitNesse (fitnesse.org) project, TDD was the rule. I didn't know how well it would work, because I was very new to TDD at the time. But since the risk was low, I gave it a shot. It succeeded beyond my expectations, both as a programming discipline, and also as an open source project. The power TDD gave us (and still gives us) to keep the code clean and under control is something I'd never experienced in any other project. It was profound. It was powerful. It allowed us to go fast, and _keep_ going fast because the code stayed clean. I have come to view it as a moral imperative. No project team should ever lose control of their code; and any slowdown represents that loss of control.
I have since consulted on many projects using TDD, and have helped many others to adopt it. My son, Micah, who was the lead programmer on the FitNesse project, continued to practice TDD in other projects for clients of my company, with great success. When he founded his own company, 8thLight.com, TDD was a founding principle.
When it came time for me to start another company, (cleancoders.com) TDD was, and has remained, a founding principle. TDD keeps the code under control. That control keeps me going fast. I can't imagine surrendering control of the code ever again.
There's an old saying: "Your true beliefs are exposed under pressure." I'm glad your AgileZen story had a happy ending; but I think you got lucky. When the pressure came, your true beliefs came out, and they did not include TDD. You abandoned the discipline because you thought it was slowing you down. And now you are communicating that meme to the world at large. That's a shame.
As long as someone thinks TDD will slow them down, they will abandon it whenever the pressure is high enough. I, on the other hand, know that TDD speeds me up. So when the pressure comes, I hold to my discipline.
I know I can't convince people that TDD will speed them up. But as I look out over the industry from my rather long perspective, I see the change rolling across it. TDD was unknown in 1999. It has gained in awareness and adoption every year since that time. The momentum continues to build.
In another 10 or 15 years TDD will likely be as prevalent and important to programmers as hand-washing is to surgeons or double-entry bookkeeping is to accountants. I stand a good chance of seeing that happen.
I think that TDD works great only when you have a clear idea of what you're building.
I think that TDD is probably not appropriate for a team that is still trying to figure out what they should be building. Software needs to be incredibly malleable at this stage, and the developers can expect to be constantly refactoring and throwing away big chunks of code. Most early-stage startups find themselves in this situation, this is why you are getting so much pushback here.
But yes, once you have a solid vision of what software should be, TDD will help the software stay true to that vision.
I agree that in a startup software needs to be incredibly malleable, and the developers can expect to be constantly refactoring and throwing away big chunks of code. That's precisely what TDD helps you do. You _can not_ refactor without a suite of tests to protect you; all you can do is hack.
If you need your code to be malleable, for God's sake make sure you have a suite of tests you trust with your life. Otherwise you'll be slowed down by the fear that you'll break the code.
Actually, TDD is the way you figure out what you are building. People who start with a solution in mind may write tests first, doing a kind of pseudo-tdd, but they are not reaping the benefits of letting the requirements drive the design.
This is NOT TDD:
1 - Think of a solution
2 - Imagine a bunch of classes and functions that you just know you’ll need to implement (1)
3 - Write some tests that assert the existence of (2)
4 - Run all tests and fail
5 - Implement a bunch of stuff
6 - Run all tests and fail
7 - Debug
8 - Run the tests and succeed
9 - Write a TODO saying to go back and refactor some stuff later.
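By contrast, the actual cycle is one small behavior at a time: write a failing test, make it pass with the simplest thing that works, then refactor. A minimal sketch in RSpec (the leap-year example is mine, not from the thread):

    # Step 1 (red): assert one small piece of behavior before any implementation exists.
    RSpec.describe "leap_year?" do
      it "treats years divisible by 4 as leap years" do
        expect(leap_year?(2012)).to be true
      end
    end

    # Step 2 (green): the simplest code that makes the test pass.
    def leap_year?(year)
      year % 4 == 0
    end

    # Step 3 (refactor), then repeat: the next failing test ("1900 is not a leap year")
    # is what forces the fuller rule, e.g.
    #   (year % 4 == 0 && year % 100 != 0) || year % 400 == 0

The point is that each requirement arrives as a failing test, and the design grows in response to it, rather than being imagined up front.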
I don't think we necessarily disagree. In the blog post, it looks like the author was training people on "true" TDD by using pair programming and a simple game like tic-tac-toe or rock-paper-scissors as the "target". This worked because teams were using TDD to evolve their design into something that they already knew pretty well.
This is certainly the dominant perception coming out of SV these days. But since the vast majority of startups fail and have no more time (or rather, will) to collect development metrics than they do to implement TDD, I doubt we'll ever have any kind of real validation of this perception.
I suspect it comes from a combination of a) young coders more likely to find themselves at a startup, who are also more likely not to have developed TDD habits, and b) VCs, angels and the like who, while well-intentioned, likely push this notion that TDD is for "mature" companies and/or "we'll pay that off later, just get the code out the door." Young CEOs, CTOs, founders and the like who need to validate all of their startups' initiatives and practices probably are swimming upstream if they want to go TDD from the ground up.
For some instances of prototyping and lightweight apps (like Obie mentioned), this is probably fine. In my experience though, the short term effects of playing "shipping" against "craft/testing" settle in within a matter of days and weeks, not months and years, and the mid-term effects of technical debt when a startup moves into "mature" phase (again, a much shorter time span than anyone realizes) are such that the idea of "debt" and "what are we going to do about it" becomes a/the dominant conversation on the team. I've experienced both sides.
My perspective: the cost of TDD is mainly to young coders who need to learn it and find the curve daunting. Teams built around these type of otherwise talented devs really would incur significant up-front cost to booting up a CI/Test server and getting past the curve to the point where the team is moving at a comfortable pace, comparable to where they were before implementing TDD. In this case, TDD is a very hard sell. I don't know how to solve that problem, other than changing the SV/startup culture to the point where those holding the purse strings see the benefits of craft/testing as being as attainable in the mid-to-short term as they are in the long term (and in my experience, they absolutely are).
If you have buy-in to TDD/craft (I can't really separate the two in my mind) from the top-down, and you have experienced coders that can facilitate implementing, in my opinion it's absolutely worth pursuing. The second-order effects of a culture built on these principles kick in with a vengeance right when a startup needs it: at crucial pivot points.
When you have to change your business assumptions quickly -- and this usually happens weeks or months into the startup lifecycle -- would you rather do it on a codebase where, say, your authentication/authorization and user model was tightly coupled to other concerns or not? Would you rather know with a relative degree of confidence that the changes you're making are not breaking core concerns? Wouldn't you rather do this quickly, rather than thrash around in unclear code that you'd rather just rewrite (you know, like it felt when we started!)?
And if you're pivoting, are you not more likely to be less cash-rich, under more pressure and in need of some really good success right now, since pivoting probably implies that former assumptions did not pan out the way you'd hoped? Isn't that exactly the point where you'd like to both reuse the code you need and know that things are still working around your baseline assumptions that seemed so clear a few short weeks/months ago?
I would think at this point the benefits to a startup in this mode, of a clean, fast-moving and validated codebase, of the kind that good TDD tends to produce, should be patently freaking obvious. Just my opinion.
To be fair, Bob Martin has been peddling this stuff for years, longer than RoR has even existed, so it's understandable that he would take it for granted that TDD is a best practice. You also need to take into consideration the type of corporate environments in which Uncle Bob cut his teeth.
I think his whole crusade grew out of being frustrated by an insanely crufty and rigid waterfall process that he saw on enterprise projects he worked on in the 80s and 90s. I don't actually know what he worked on, but I imagine it was the kind of projects where they would throw a couple dozen C++ programmers at various subsystems conceived by an architect, code like crazy for 6 months, and then spend another 6 months attempting to flush out bugs with human QA.
Post-XP/Agile, I think it's easy to forget that automated testing was not a standardized practice 20 years ago. Everyone had their own methods of doing it, if they did it at all. To me, TDD was a phase I went through for developing good testing discipline.

The power of TDD in my mind is as a training tool for developing strong testing skills. If you are not good at testing, there is always a high burden to write tests, so you do it less because it doesn't seem worth it. However, if you are very good at writing tests, then you find that a lot of the time you can crank out a test/spec suite in about the same amount of time as manually testing. The benefit, of course, is that then you have documented and programmatically verifiable proof of your intention being fulfilled.

Practicing TDD is a way to force yourself to learn how to test things in all cases; after you've mastered that, you can step back and consider what the truly valuable tests are without having your judgement clouded by overweighting the relative difficulty of producing certain tests.
Now that I'm good at testing I almost never do true TDD, but I never produce any long-lived application code without some kind of test coverage.
I'm not a huge fan of TDD either. Tests are helpful when refactoring or adding new features because you avoid regressions. Regressions are the worst kinds of bugs because clients get really, really annoyed with "I thought we fixed this the last 3 times" and "we already paid to fix this" type scenarios.
That said, I think the article has a solid thesis. Just because you don't practice TDD doesn't mean that building things correctly won't help you ship working software faster. If your code is brittle, write some tests around it, refactor a bit, and keep running fast. We can debate over whether you should always write tests first or only write tests to cover brittle and important features, but that's a "how much" argument.
The real problem is when you run into methodologies that always involve work-arounds, and you let the technical debt continually pile up for weeks or months because "you're a startup." This will cause you to run very slowly, and sooner rather than later. I feel that's Uncle Bob's real point, and it's a valid one.
It also felt like "selling a religion" to me. The fact is that I haven't checked on the "TDD discussion" for a while now (maybe 6 months?), and when I read the initial paragraphs and saw how confident the author was, I thought that maybe I had missed something in the meantime, that perhaps some new studies had been published demonstrating that TDD did in fact provide a positive boost to the teams adopting it, and so on.
I read the whole thing but there was no mention of hard numbers. Facts were also lacking.
A few years ago this comment would have made me facepalm, bang my head on the corner of my bulky CRT monitor and look for the nearest ice pick to stab through my eardrums. Kind of like I did when I read the transcript of Joel and Jeff's reaction to Uncle Bob and SOLID.
These days it just makes me sad, for two reasons and not (probably) the ones you think. It makes me sad mainly because, well, in a LOT of ways you're not wrong. You can live (and live, and live) in a comfortable bubble between the time a new codebase is started and the time technical debt settles in and makes new features of nearly any kind far more expensive than they're worth, for an entire career. You can deliver a lot of software that way.
You can also develop some pretty good habits (as they did in the days before TDD frameworks) around manual testing and general experience that mitigate the onset of technical debt and bad architectures. The further down that path you go, the better you get and the more successful projects you complete, the less TDD will seem like a Good Idea, and more like, as you put it, someone trying to "sell a religion."
I've been in the industry 15 years, and I've worked with a lot of guys who think like this and have gone a long way down that path. Probably like you. Some really, really smart guys that I've learned from and admired. Some of the smartest coders I've worked with. They had unreal IQs and analytical strengths that always knocked me out. They could crank out features very, very fast.
And they wrote some of the worst code I've ever seen. Not dumb code. Bad code. Awful. A trail of debt to make the heart sink and the eyes roll back: long, 300-600 line methods, inlined sorting algorithms because maybe they thought the built-in Hash lookup was slower than what they could write themselves. Early denormalization. Unclear intentions, poorly named methods, and (here's the overall characteristic): non-idiomatic code. Code that doesn't follow conventions, code that is hard to read, impossible to reuse, obviously unclear in its original intent, and therefore brittle, brittle, brittle.
The business consequences in specific instances were more than tangible. Six weeks for a refactoring here. Months added to timelines for new features there. Features actually abandoned. Millions in revenue lost.
There has been an uncanny relationship between these kinds of coders and a common theme of resistance to the following ideas: TDD, SOLID, Design Patterns. The same guys who leave a trail of maddening debt (all the faster because they also tended to be some of the smartest coders on the team), also tended to have a deep suspicion of "high level" thinking, frameworks, big-picture conventions. And always, always, the same old red herring, false dichotomy you present here: "Do you want to ship code or write elegant, well-tested code?" As if the two were opposed. As if the two weren't actually directly related.
Here's the thing: they had a point. The inverse errors (over-architected, over-tested, over-thought code) can be just as crazy-making. Some junior dev who just read Design Patterns in 2005, thought they'd Seen the Light, and started spraying Bridge Patterns all over the codebase was just as bad as the aforementioned analytical, structured-code debt machines.
I can only go on what I know, and here's what I know: without exception every guy I've worked with that thought this way -- suspicious of TDD, plays "shipping" against "tested/craftsmanship" -- also created code that in the mid-to-long term cost the company far more money than they apparently saved by "shipping fast" and writing untested, unclear code.
The most frustrating thing to me about this is that, once you integrate these practices -- TDD, patterns, SOLID -- into your workflow, once you master them, they cost nothing in terms of time spent. You get practical about it, use it judiciously where it makes sense, don't use it where it's unnecessary and (like a lot of the Design Pattern stuffs) ignore it altogether. But you can only do this if you can see both sides, and you can only see both sides if you've mastered the practices. I spend much less time writing tested code than I would without it. I spend much less time managing well-designed codebases than I would thrashing around in a tangled mess.
It's like learning Vim enough to get proficient. You have to commit to it. Then you get pragmatic about it, learn its strengths and weaknesses and over time get to know the real benefits. But you have to have that willingness to commit to it up front and anticipate the benefits down the line.
And as an aside, in my 15 years, the great coders I've worked with (a few name-drop-worthy dudes in the mix) have, without exception, been committed to a pragmatic, healthy approach to TDD that was neither religious nor slow, and in fact made the entire team faster, the codebase more robust and the software far more shippable.
And while I have run into a lot of guys who have never written a single test - much less mastered TDD to the point of being able to speak intelligently about the pros and cons from actual experience - who declare loudly and confidently that TDD is "religious" ("snake oil" is another favorite) and something to "feel free to ignore", I have never run into an experienced coder who has integrated TDD into their workflow in a credible way, and who can actually speak to both sides of the argument, say the same thing.
> I don't see that he has worked on any successful startups, or anything that had a good exit.
Since when does one have to work on a successful startup or have a good exit to have opinions on the best way to start projects? I work at a BigCo and have started a couple of projects.
I would love to see an empirical study proving the claims made about TDD in this article and numerous others.
It seems to me TDD is a huge waste of time when prototyping a minimum viable product. You want me to spend 2 hours writing tests for a feature that someone's going to tell me to rip out 15 minutes later? No thanks.
It's really easy to tell others to use TDD, and even admonish them for not using it. But unless you are the one in their shoes, you will not know the whole of their reality.
In a perfect world I'd write tests for everything. In the real world, I write tests for almost nothing. Most of the work I do is on stuff that might be gone tomorrow.
Some workflows don't make sense with TDD. For example, if your company builds proof-of-concept security exploits as part of a security audit or penetration test, then TDD offers negative value, because in many ways, your code is the test -- if it runs and exploits the vulnerability, then it works.
Likewise, when I'm figuring out how something might need to work, like learning a new API or trying to figure out how to build one, I don't do TDD. But I also don't do this as part of my main project -- it goes into a special '~/playground' where I can experiment with ideas.
Other than that, I do everything TDD. Here's why.
It doesn't take two hours to write tests, because by and large, you're writing the same code you'd throw into the REPL, or performing the same actions (in code) that you would need to repeat over-and-over again in a browser to see if the feature works. It's the same amount of work, you just need to write it down so that it can be repeated.
I've found that this saves me tons of time, because I can run the same test over-and-over while I make stuff work.
This assumes you are familiar with your testing tools -- if not, then yes, getting going requires learning the tools, which is the same overhead as learning a new library, algorithm, or language.
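Concretely, the point is that the throwaway check you'd type at the console once becomes an assertion you can re-run forever. A trivial made-up example (it assumes the app already has a Slug class):

    # What you'd poke at in the rails console / irb, once:
    #   >> Slug.for("Hello, World!")
    #   => "hello-world"

    # The same check written down so it can be repeated on every change:
    RSpec.describe Slug do
      it "downcases and hyphenates titles" do
        expect(Slug.for("Hello, World!")).to eq "hello-world"
      end
    end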
If you're working on stuff that might be gone tomorrow, you're doing your customer discovery wrong. Sure, you're going to implement features that it turns out were needless from time to time, but if you're regularly implementing work that gets thrown away fifteen minutes later, you're wasting a huge amount of time.
There are better ways to solve the question of "what does the customer want" other than building it and showing it to them -- check out all the work that Janice Fraser has done over at Luxr.
“I would love to see an empirical study proving the claims made about TDD in this article and numerous others.”
This point is so important that I think it should have been the only one you made. Tests look good on paper. And they are great intuitively. We can make great arguments for them. But last time I asked for a clear study indicating that TDD led to better results than a non-TDD development, everyone seemed to come up blank.
What it sounds like to me is religion. Doesn't mean I won't test. And it certainly doesn't mean I'll eschew testing on a team that does testing. But it still smells suspiciously like religion, and that's very worrisome to me.
TDD teaches you how to write tests at all costs. Once you know how to test in every conceivable scenario, then you can discard TDD and replace it with the wisdom of what tests are useful and understand the real ROI of writing this test or that test. If you haven't forced yourself to get good at writing tests at some point, then you will fail to write a valuable test simply because the cost of writing the test seemed higher than the return, even though that was only because of your lack of skill in that department.
Although TDD does give you more confidence to make changes to your code in future, I think you're overlooking a major benefit, which is allowing swifter code/test/debug cycles.
So rather than having to make a change, run up the app in my browser, and manually test that the latest bit of code is working, I can test individual functions just by running my unit tests.
Of course this doesn't obviate the need for browser based testing too - but it can reduce the amount of it you need.
TDD allows me to produce working code faster by helping me notice and fix my errors sooner. If you're an excellent coder, maybe you don't need this- but I certainly do!
I agree. Prototypes are not very well defined and are prone to rapid changes. Besides that, when you have an app that interacts a lot with third-party APIs, it's almost impossible to write useful tests. One guy working with us has basically mocked the whole world wide web to allow him to do TDD; sometimes TDD is just plain stupid.
I don't understand your point. If your third-party APIs are volatile and out of your control, causing a liability in maintaining tests... wouldn't that cause an even bigger liability in your application code? Such volatility is even more dangerous to your application. This is why we have integration tests.
This is a classic case of why you need tests and an integration policy. When your APIs change or break, the tests break and you are quickly made aware. You just saved yourself a huge embarrassment, and you likely have a competitive advantage over those who don't look ahead the way you should. Need to make a quick release and need to update your handlers? Go for it. If you're on a deadline and confident in the release, go ahead and push it with a couple of tests breaking as false negatives, then swing back for an hour and fix them.
You'll thank yourself when they release v2 of that API and sunset your methods.
I agree in parts. In real life, you can't always run integration tests against third parties, for various reasons; the main one is that some websites/apps don't like being used as a sandbox for every CRUD operation we can think of. But if you isolate too much, your code will only be running against itself and the tests won't be all that useful. The way we assure software quality in our case is to carefully check the logs for data from real users, pretty regularly actually. Not as sexy as TDD, but customers appreciate that you care about their real account and not another Foo Bar foo@bar.com profile.
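For reference, "mocking the whole world wide web" usually means stubbing at the HTTP boundary so the unit tests never touch the live service. A WebMock-style sketch; the client class and endpoint here are invented:

    require "webmock/rspec"

    RSpec.describe WeatherClient do
      it "parses the temperature out of the provider's response" do
        # No real network call: WebMock intercepts the request and returns this canned body.
        stub_request(:get, "https://api.example-weather.test/v1/current?city=Oslo")
          .to_return(status: 200,
                     body: '{"temp_c": 4.2}',
                     headers: { "Content-Type" => "application/json" })

        expect(WeatherClient.new.current_temp("Oslo")).to eq 4.2
      end
    end

This keeps the suite fast and deterministic, but it won't catch the provider changing their API; that's the gap the comments above are arguing about, and it's usually covered (if at all) by a thin layer of real integration or contract tests run less often.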
>You want me to spend 2 hours writing tests for a feature that someone's going to tell me to rip out 15 minutes later? No thanks.
If you are going to rip out the feature 15 minutes later, why even waste the two hours writing it? Also, as someone who typically practices TDD, writing the feature + tests doesn't tend to take any more time than just writing the feature (and I have the cycle time metrics from my current project to prove it).
There have been several posts on this thread in which research papers in support of TDD have been referenced. You can find them if you simply google it. I suggest you do that.
One of the first concepts you learn in introductory finance is that, in a mature market (i.e. the stock market: basically something with a bunch of people and a bunch of information), you have to be compensated for risk. This is why a riskier stock - like a tech stock or a penny stock - carries incredible amounts of risk but will generally give you higher returns in the aggregate than something like pork commodities or a CD. When you choose to purchase a tech stock - or any stock at all - you're saying "okay, I recognize that this is riskier, but I think I'm being fairly compensated for the risk, too."
Choosing to eschew TDD is like purchasing stocks. By definition, going through TDD is going to be a safe route, but it's rare that TDD (at least in my and my friends'/peers' experiences) is actually going to make you get from Point A to Point B any faster. TDD isn't, by default, a superior or inferior approach to anything: it's a tradeoff -- do you want risk or do you want return?
Sometimes, you want to minimize risk, and that's probably smart. Sometimes, you just want to produce an MVP -- and that's okay, too.
> By definition, going through TDD is going to be a safe route, ...
Why is TDD safe? Most TDD advocates seem to be blind to the fact that testing is a terrible way to prove many important properties about software systems, security properties for example. If you're betting your ever-so-scarce programming resources on TDD, you're probably paying too much, getting a lower return than you could be getting, and leaving some serious holes in your software. As I wrote in [1]:
If all you know about getting your code right is TDD, you’ll never bet on types or proofs or constructive correctness because you don’t know how to place those bets. But those bets are often dirt cheap and pay in spades. If you’re not betting on them at least some of the time, whatever you are betting on probably costs more and pays less. You could be doing better.
So I don't think that TDD is a "safe" bet. I think it's an expensive bet that has relatively poor payoffs.
The safety of TDD comes from having a test suite that you trust. Given that suite, you can safely refactor the code. If you can refactor safely, you can improve the design safely. If you can improve the design, you can stop the inevitable slowdown that comes from making a mess.
What is the risk of _not_ doing TDD? The risk is that slowdown. We've all experienced it. What is the cost of TDD? You'd like to say that it takes time; but since the risk is a slowdown, the net is positive no matter how you look at it.
That's the irony of all these complaints. They assume, and sometimes they simply state, that TDD slows you down. And yet, the primary effect of TDD is to speed you up, and speed you up a lot.
Some folks suggest that it's a short-term slowdown for a long-term speedup. But in my experience the short-term is measured in minutes. Yes, it might take you a few extra minutes to write that test first; but by the end of the day you've refactored and cleaned the code so much that you've gone much faster _that day_ than you would have without TDD.
The benefits that you attribute to TDD are not exclusive to TDD. They are the benefits of having well-tested code, and TDD is only one of the ways to get there.
The problem with the TDD way of getting there, however, is that it's expensive: It makes programmers see their code through the pinhole of one failing test at a time, blinding them to larger concerns, which are important. As a result, a lot of avoidably crappy code gets written at first and then must be reworked later, when its flaws are finally allowed to come into view.
If you're a new programmer who hasn't learned how to reason about larger units of logic and the relationships between them, maybe that pinhole restriction proves helpful. But for more seasoned programmers, it's constraining and wasteful.
The idea that TDD involves some kind of blind faith that the tests will generate grand designs and beautiful code is both silly and wrong. You are right about that. Good design and good code require skill, thought, and knowledge irrespective of whether you are using TDD. So as I practice the discipline, I am thinking all the time about larger scale issues, and I am _not_ being blinded to those concerns.
However, the act of writing tests first has a powerful benefit: the code you write _must_ be testable. It is hard to overstate this benefit. It forces a level of decoupling that most programmers, even very experienced programmers, would not otherwise engage in.
It also has a psychological impact on the programmer. If every line of production code you write is in response to a failing test, you will _trust_ your test suite. And when you trust your test suite, you can make fearless changes to the code on a whim. You can _clean_ it and improve the design without trepidation.
Gaining these benefits without writing tests first is possible, but much less reliable. And yet the cost of writing the tests first is no greater than writing the tests second.
> However, the act of writing tests first has a powerful benefit: the code you write _must_ be testable.
No, the act of writing well-tested code at all has that benefit. Whether you write the tests before or after the code, one at a time or in module-sized groups, writing code that's hard to test has immediate and obvious penalties when you test it (e.g., tedious rework), and you'll quickly learn to avoid those penalties. So just having the discipline to write well-tested code at all forces you to write code that's not only testable but easily testable. This benefit is not unique to TDD.
> It also has a psychological impact on the programmer. If every line of production code you write is in response to a failing test, you will _trust_ your test suite.
It's not enough to trust that your tests actually test your code. You also need to trust that your tests express your desired semantics. And that's harder to do when the semantics is not designed in whatever form and grouping is most natural to its representation but rather is extruded, one test at a time, through the pinhole view that TDD imposes upon programmers.
> And yet the cost of writing the tests first is no greater than writing the tests second.
What you seem to be overlooking is that TDD not only forces you to write tests first but also in tiny baby-steps that cause programmers to focus only on satisfying one test at a time. As a result, the initial code that is written satisfies only a small portion of the system's overall semantics (the portion that's been expressed as tests so far), and a lot of that code ends up having to be reworked when later tests finally uncover other requirements that affect it. This leads to rework that would have been avoidable had the programmers not been blinded to those requirements earlier on.
The problem with TDD isn't so much that it's test first but that it promotes a pinhole view of subjects that are not narrow.
"However, the act of writing tests first has a powerful benefit: the code you write _must_ be testable. It is hard to understate this benefit. If forces a level of decoupling that most programmers, even very experience programmers, would not otherwise engage in".
I'm sorry, but I really find that sentiment to be totally inaccurate. I've never seen a good, experienced programmer write hard-to-test or coupled code, regardless of whether they are using TDD or not. A hallmark of what makes them good is that they all have some testing methodology that enforces this and works for them. TDD is one, but there are many others (and yes, that includes good manual-only testing).
I also don't see why you believe TDD is the only way to successfully refactor code, or that only developers who use TDD continually refactor their code to eliminate technical debt and increase productivity. Again, every good programmer does this. TDD is one way to get there. It is not the only way.
It's Test Driven Development - not Proof Of Correctness.
The worst thing about TDD having 'test' in the name is that people think it is about preventing bugs, or catching them.
It is not.
It is about enabling easy refactoring. The other stuff - preventing regressions, catching bugs, etc. is gravy.
Just curious: If you believe that TDD isn't supposed to help you write code that does what it's supposed to do, what do you do in addition to TDD to make sure that your code actually works correctly?
I write a specification (using RSpec) of some behavior. It fails.
Then I write code to make that specification pass.
Now the code is working correctly, as I have defined it.
Then I refactor, using my specs (tests) as a safety net to ensure that everything after the refactor still works as intended.
This is a _VERY_ different approach than coming up with some solution in my head, implementing it (most likely with bugs), and then using tests to find and eliminate as many bugs as possible (but usually not all of them).
Any errors that make it through to a commit, when I am doing TDD, are errors in how I have specified (or failed to specify) the behavior. Any errors in the design or implementation of the solution are caught by building to a spec in very small steps.
That's the key difference between properly done BDD/TDD and other testing. Writing the tests prevents bugs instead of catching them, and it ensures that behavior does not change after refactoring.
It may be a subtle distinction, but in practice it makes a huge impact.
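To make that distinction concrete, here's a tiny made-up example: the spec pins down the promised behavior, and the implementation underneath can be reworked freely as long as the spec stays green.

    # The specification of behavior, written first and watched fail:
    RSpec.describe "Cart" do
      it "totals the line items plus a flat shipping fee" do
        cart = Cart.new([10.0, 15.5], shipping: 4.0)
        expect(cart.total).to eq 29.5
      end
    end

    # The first implementation that makes it pass:
    class Cart
      def initialize(prices, shipping:)
        @prices = prices
        @shipping = shipping
      end

      def total
        @prices.sum + @shipping   # later refactors (memoize, extract a LineItem
      end                         # object, etc.) happen under the same green spec
    end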
1. What do you do about security? How, for example, do you make sure you don't introduce XSS vulnerabilities into your code? To use your words, how does "writing the tests prevent bugs" when we're talking about bugs that create XSS vectors?
2. Don't you think you're paying a penalty by defining and implementing your system's semantics through the pinhole-sized view of one failing test at a time? That is, why wouldn't you be better off defining the semantics in whatever-sized units make the most sense, not necessarily one spec's worth at a time, and then deriving your tests and implementation from the semantics accordingly?
To take your analogy further, an early-stage startup has a high cost of capital. You are expected to produce massive returns (or fail). You thus are explicitly required by investors to take risks. You have a duty to not invest in the software equivalent of CDs.
I like your analogy, but it needs a little something extra.
TDD (and BDD) is an assurance that the promises that your code made yesterday will be kept tomorrow. If you break those promises, you do so consciously.
Not doing TDD is like investing in the pink sheets -- the company you invest in might be a fly-by-night operation, or it might be a legitimate business. It could be both, but you never know, and assuming you're not part of a pump-and-dump scheme, the rewards aren't really higher than listed stocks.
But the risk is a lot higher than listed stocks.
Lots and lots of risk, but very little upside, which is the same as writing code without tests. You save a little time getting started, but spend a lot more time doing the repetitive work that testing automates, and in doing so, eat a lot of long-term technical risk that no company should consider acceptable.
I thought this was going to be an article about overwork, or about charging for what you build as a way of proving your market, or about how employee stock options are often a bad bet. Instead, it's about TDD. What a disappointment.
Which is ironic, because the real startup trap is the one where people build things but are told to not charge for them (or charge very little for it).
TDD is like those best-practices books that everyone pretends to read, but hardly anyone ever really reads (I'm looking at you, Code Complete). The folks who do read these books know the dirty secret that no one else really reads them either, but they feel that endorsing the books somehow gives them credibility.
TDD has its place, as most mainstream methodologies do. But let's just admit that the people who use this methodology are in the minority. The rest of us are working on smallish projects that are struggling to be worthy of the time budget they've been granted, and we're more worried about shipping than about how not having TDD in place will slow us down in phase III. Assuming phase III ever happens. We're using all the best practices we can, but TDD doesn't rise above the bar most of the time.
Uncle Bob isn't saying "If you aren't using TDD at the startup phase, you suck!" What he's saying is "Just because you're in the startup phase and think you're invincible doesn't mean you should throw best practices out the window." There are good excuses for not using TDD, and I tend to agree that "We're in a startup phase. We don't have time for this crap." isn't one of them.
You're right. Then we'd have a passive-aggressive ding. The only thing you would have to add to make it perfect would be "...but I'd never claim the author is trying to do that."
I wouldn't even call it passive-aggressive; it's soapboxing, plain and simple.
> This article is about X. But I don't care what the article says about X. I have something to say about X, and by God, I'm going to say it.
--But I don't know that this is a particularly problematic thing to be happening in a threaded comment system, since people can get sidetracked by "soapbox issues" in a subthread while letting the "parent conversation" continue around them. It's just kind of confusing for people who treat this place like a linear-chronological message-board that discusses one topic at a time.
There are n-ways to join in a conversation, but the more direct experience you have, the better you will be able to field certain types of discussion. So, experts and expert debaters will dive right in and attack the topic head-on. More timid souls will wait in the wings and hope that someone will say something that they can directly respond to with confidence.
Sometimes you have to be slightly shameless if you want to join in a conversation. Stating an opinion (soap-boxing) does feel a bit like cheating the system. But I would argue it's slightly better than standing on the sideline.
Honestly, if your opinions were at least novel, I probably wouldn't have said anything. I've heard the same old debate many times before and frankly, it gets old. Especially when people make tangentially-related topics into being about TDD. Can someone mention TDD in a blog post without dredging up all these old topics?
Ok, then. Let's put it in another way. TDD should be used by those who can afford it. Right?
I don't think many startups can, that's the point I guess.
With all due respect, I'd hope people would read Code Complete out of sheer interest, not signaling. CC is very accessible compared to other books you could be reading.
As for the time argument, I hear it, but I don't buy it. This is an institutional problem, and should be dealt with accordingly. Your responsibility is to advocate what can be done within the allotted timeframe. There's a time and a place for slapping things together and making it work, but it should never be 100% of the time.
If it is, it's a sick institution. And there are a lot of those.
I would love to see a poll of how many people are in your position. I love being proved wrong by stats! (Edit: yes I get the irony that I made a baseless statement and then required the other side to provide stats to back up their arguments; {sigh} I'll try to do better.)
I previously worked in environments which were very stringent about these things. Ex: I formerly worked on simulation software for fighter jets. So, there are definitely places where it makes sense, but I still think it's the minority. TDD is definitely better for VERY LARGE and SENSITIVE products that have long, ongoing development cycles involving large teams.
TDD is very valuable in the consumer web. Not because anyone's life is at risk but due to the speed at which customer confidence erodes when your product doesn't do what it's supposed to.
My feeling is that the point of this article isn't strictly to promote TDD or any single engineering tool that startups tend to eschew. Instead, the author seeks to make the broader point that cutting corners in engineering is not a good idea even in a startup. While it can make startup engineers feel like they are moving quickly relative to the competition, the engineering debt incurred is non-negligible.
I strongly agree with this point, and don't buy the blanket rationale that speed trumps everything in a startup. Yes, speed is quite important when you are bringing something new to market; it lets you get your foot in the door and rapidly find the right fit for your product. But if you cut corners then your product will suffer, and you'll end up treating your customers poorly as a result.
There's a fine balance to all these things, of course. We all have to find the right balance between keeping engineering standards high and imposing heavy-handed structure.
Boy, did I have to scroll down far to find someone who read the article.
I was hoping for tips about how to find a balance when other team members have bought the blanket rationale that speed trumps everything and then complain about why we spend so much time fixing bugs and fighting fires instead of working on the next thing.
Instead a bunch of reading-comprehension-challenged commenters are bitching about TDD again.
Man, I've been back and forth on tests from one extreme to the other. I have to say that I do agree with this but with one caveat - Test Driven Development is a bit miserable.
Test "Driven" Development is a real invitation to write too many tests. A small, good set of tests gives you freedom to work fast, to refactor and to have multiple developers pushing simultaneously. Not having them really isn't sustainable past a certain point but equally to many tests will drag you to the ground.
Tests are overhead, they cost time to write, maintain and run and TDD tends to drag you into the deep end. Avoiding them altogether ends up being a false economy (if only in that it makes you too nervous to push often).
To those who fear the slippery slope, a nice self-annealing approach to get into testing is to only write tests for something that has failed. That way you waste no time writing tests for things that are actually pretty robust but equally you avoid addressing (and fearing) the same issue twice.
I'd like to emphasize this approach: "a nice self-annealing approach to get into testing is to only write tests for something that has failed."
It is great for old-timer enterprises without any tests, and balances the time pressures of startups. However, you need to write those tests not only for prod failures, but also for things that failed in your dev environment.
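For what it's worth, a minimal sketch of what that looks like in practice, assuming Python and pytest (parse_price and the empty-string failure are made-up examples, not anything from this thread):

    import pytest

    def parse_price(text):
        """Parse a price string like '$1,234.50' into a float."""
        cleaned = text.replace("$", "").replace(",", "").strip()
        if not cleaned:                       # the fix for the failure we actually saw
            raise ValueError("empty price string")
        return float(cleaned)

    # Regression test written *after* the failure, so the same bug can't ship twice.
    def test_parse_price_rejects_blank_input():
        with pytest.raises(ValueError):
            parse_price("   ")

    def test_parse_price_handles_thousands_separator():
        assert parse_price("$1,234.50") == 1234.50

You never spend time testing the parts that have proven robust, but anything that has bitten you once stays covered from then on.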
Over the years, I've grown less and less likely to write lots of tests, after being a very large test zealot during the peak of the TDD wave. There are a few reasons.
First, yes, TDD slows you down. The reason this matters is because a lot of our time as developers is spent exploring ideas, and it's pretty well understood that the faster you can get feedback on your ideas the easier it is to creatively develop them. In fact, the final result of your creative process can be completely different depending on the feedback cycle you have. From the micro to the macro perspective, religious TDD introduces a constant-factor slowdown for small projects that yes, ends up being a net win in the long run if you live with the code for long, but is a net loss in the short run and also can prevent you from finding a solution to a problem since your creativity is stifled by the slow speed of idea development.
Second, when building web applications, building tests turns out to be fairly overrated. First, people will tolerate most bugs. (Write good tests around the parts they won't.) Second, if you have a lot of traffic, bugs will be surfaced nearly instantly in logs and very quickly via support tickets/forum posts. (What if there are bugs that don't make it into the logs and don't affect people? If a tree falls in a forest...) Much more important than mean time between failures is mean time to recovery. I'd rather have ten bugs that I fix within 5 minutes of them affecting someone than one bug that festers for a month. Not only because this is healthier for the code base in terms of building robustness, but also because human behavior is such that many fast bug fixes make everyone feel good but few lingering bugs make everyone miserable. People want to feel like you care, and are much more likely to feel cared for when you fix their problems quickly, and are not often interested in just how few bugs you ship if the one that you do has been affecting them for a month.
This isn't theoretical nonsense, it's a very real phenomenon where you use tests and manual testing to get up to a basic working prototype and then just throw it over the fence to flush out the bugs. It's the only way to do it anyway. (This only really works if you have traffic and can deploy changes quickly. To paraphrase a colleague, production traffic is the blood of continuous deployment.) Obsession over deploying bug-free software (an oxymoron anyway) is usually coming from people who haven't gotten over the fact that we don't need to ship software once a month or once a year but can do it every 10 minutes in certain domains. Instead of focusing on not shipping bugs, focus on shipping faster.
I, on the other hand, have grown to love tests more and more.
Not for reasons of finding bugs – though that is often a nice side effect – but because once I have decent test coverage I don't need to look at the application anymore. Being able to run the tests in the background to verify my work is sane, while I move on to the next feature in parallel, I find, is considerably faster than the code/build/review cycle you find yourself in without a decent test suite.
My own performance, being a limited commodity, is the most important factor and I find tests help me increase output, not slow it down as you suggest. They are certainly not a panacea though. As always, use the right tool for the job.
I personally can't rationalize skipping the "review" part of the cycle. If it faces the customer, it's my responsibility and the luxury of not looking at it seems like trading away effectiveness for efficiency.
I see your point. I also take on design roles, so I naturally do that extra double take on the work during the design phases. Though I generally find the tests really do a great job of covering what they should.
If you are working with larger teams with designers and developers, I suppose it may not work out as well.
First of all I agree with the idea that automated testing is not a guarantor of quality. Rather automated testing is one tool, along with exception notification, logging, and a fail-fast philosophy that each shines a light from a different angle to help overall quality. And if you need the strongest possible guarantees against failure then you have to go with a more fundamentally strict language such as Haskell.
But I think your comment misses out on the middle ground in common dynamic languages. What a good test suite does is reflect on code to document the intention of the author and provide an automated means of verifying that that intention was in fact fulfilled. Granted, test suites have bugs too, so a passing test does not guarantee there wasn't a bug, but the fact that you have two (hopefully) orthogonal descriptions of the code in question means that you stand a much better chance of actually teasing apart what happened when something blows up down the line. I've been saved and emboldened by my test suite enough in Ruby that I now consider it irresponsible not to have complete-ish coverage. I just see it as good code hygiene.
I think it is necessary hygiene for Ruby. In dynamic language web apps, everything is so loosely coupled together by a mixture of conventions and magic strings (or symbols, if that makes you feel any better about them) that it's very easy for one of the strings to break without noticing.
But IMO you are mostly (perhaps 80%) working around an inadequacy in the tools. It is not a virtue in and of itself.
I'm in the odd position of having transitioned to working at a startup, writing Backbone / RoR, after having been a compiler engineer, and before that, in ~2006, the author of a server-side web app framework designed to be strongly typed throughout (using a custom language for control binding to achieve this goal). RoR is very error-prone and a lot of work by comparison, especially when interfaced with Backbone so that the UI doesn't need constant whole page round trips.
So many bits and pieces need to be glued together, from attr_accessible through to update params slicing, binding controls in JS, JBuilder json templates, the whole game of finding just the right elements with jQ and friends, so much busywork with so many opportunities for mistakes to creep in - so that tests are absolutely critical.
You are conflating Ruby with RoR. You start by saying "I think it is necessary hygiene for Ruby." but then go on to say "RoR is very error-prone and a lot of work by comparison" and "So many bits and pieces need to be glued together, from attr_accessible through to update params slicing..." -- that's Rails you're talking about. And I don't blame you for that. As a long-time Ruby developer, I can say with no qualms that Rails is not a very good web framework, way too buggy and full of magic. But please don't conflate that with Ruby. Ruby with Sinatra is a hell of a lot cleaner and it's much easier to structure your code well.
Over the years I have found that writing tests first is usually a faster way to get creative feedback. The reason is simple, the tests are easy to write, and they give you confidence that the feedback you are getting is accurate and not the result of something stupid.
Writing tests doesn't take much time since it's just a restatement of the code you already know you want to write. If you don't know what code you want to write, you can't write it. So you think about it. Once you know what you want to write, the test is trivial. Yet that trivial test lets you see the code execute, and gives you the confidence that the code you write was the code you _intended_ to write.
I agree with you about shipping faster. I don't agree with you about bugs. TDD can eliminate the more stupid of those bugs, while helping you go faster.
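To make the "restatement" point concrete, a tiny made-up sketch in Python: if you already know you want a slugify function that lowercases a title and joins the words with hyphens, the test is little more than that sentence written down.

    # Hypothetical example; slugify is not anything from the thread above.
    def slugify(title):
        return "-".join(title.lower().split())

    # The test restates the intent, and proves the code does what was intended.
    def test_slugify_lowercases_and_hyphenates():
        assert slugify("Hello World") == "hello-world"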
Imagine test driven development for visual art. How would one write a test for visual art, beyond viewing and comprehending the partially completed work? If one could write a test in advance, one would have made a significant advance: distilled to a verbal form a description of what good art is.
Instead, many artists -- or software authors, or chefs, find a way to repeatedly sample and appraise their work as it develops over time.
I think this is far more important than TDD, because the truly important problems of software engineering are not in making software that can achieve simple correctness, but in making something that people want.
Actually, that is a great metaphor. But you should know, TDD is the way I repeatedly sample and appraise my work as I go. Multiple times per hour, in fact.
Red -> Green is the sampling and Green -> Refactor is the appraising.
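Roughly, with a made-up Python example (word_count and its test are illustrative only):

    # Red: write a failing test for behaviour that doesn't exist yet.
    def test_word_count_ignores_extra_whitespace():
        assert word_count("  the   quick  fox ") == 3

    # Green: the simplest code that makes the test pass.
    def word_count(text):
        return len(text.split())

    # Refactor: with the test green, rename, extract, or simplify freely,
    # rerunning the test after each change to appraise the result.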
"Of course one of the disciplines I'm talking about is TDD. Anybody who thinks they can go faster by not writing tests is smoking some pretty serious shit."
Why do so many TDD people think that people who don't use TDD don't write tests?
TDD is often a waste of time. I worked at some place that had taken the TDD philosophy to the extreme: every getter and setter had a test.
On the opposite side of the spectrum, I worked on software that processed millions of dollars in daily transactions and there were maybe 5 "tests" in the whole system. This was about 500,000 lines of C and C++ code in the early 2000's.
My personal philosophy is to write unit tests where I think it's important, not test everything.
>Of course one of the disciplines I'm talking about is TDD. Anybody who thinks they can go faster by not writing tests is smoking some pretty serious shit.
That's a serious straw man. In between 0 tests and TDD, there's a lot of room to maneuver.
Some people don't write tests. But Bob was only comparing TDD and 0 tests. A lot of people sprinkle a few unit tests for sanity, instead of letting tests drive development.
There’s a wide spectrum of successful coding practices that get products shipped. Bob’s article only compares the outliers.
TDD is but one method of writing code; one that I've never found particularly useful. There's plenty of other things you can do to perform your job well.
The winners in a brutal race have learned how to avoid as much risk as possible. The prize does not go to the biggest risk-taker. The prize goes to those who do everything they can to minimize the risk of going slow.
What slows a programmer down? Debugging. Messy code. Fear of change. How do you minimize those risks? A comprehensive suite of tests. How do you get that suite? TDD.
That is certainly your fear. It is a fear I don't share. Firstly, I think you'll get there faster if you work well. Secondly, as MySpace showed, being first isn't really all it's cracked up to be.
I'm not making myself clear. You might beat me, personally, once. But you won't beat that one lucky guy that didn't do the testing, got his code to work the first time, and got to market before either one of us. That's how the game's played. Conservative, careful, correct coders don't win that game.
And if your business plan is to be 2nd to market, well, good luck with that.
Yes, some bugs are acceptable. The point of writing software is not to create bug-free code; it's to create value. The marginal returns on eliminating the last bug are much lower than implementing new functionality.
As for the gist of the post, from my experience, TDD is OK; but it is unit tests that are essential. They are far more important in dynamic languages, because you otherwise have very little feedback when you make mistakes as simple as a typo. They're also important in static languages, but fairly large programs can successfully be written without anything near the volume of testing needed with a dynamic language.
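A made-up Python sketch of the feedback gap I mean (the Invoice class is purely illustrative): the typo raises nothing when the file loads and only surfaces when the line actually runs, so even a trivial unit test earns its keep.

    class Invoice:
        def __init__(self, amount):
            self.amount = amount

        def late_fee(self):
            return self.ammount // 20   # typo ('ammount') -> AttributeError at runtime

    # Even the most basic unit test executes the path and surfaces the typo
    # immediately -- feedback a statically typed language would give for free.
    def test_late_fee_is_five_percent_of_amount():
        assert Invoice(100).late_fee() == 5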
In a Rails project, the Rails boot-up time makes TDD a painful experience if you are not using tools like Zeus or Spork. Even in the presence of such tools, you need a powerful machine to not hate the slowness of the whole thing and worse still, break your flow.
My recommendation for someone who hates TDD for the wrong reason, aka: breaking the flow, would be to get a fast machine, fix the code to make TDD as painless as possible and to use the right tools.
But once you start using TDD as a tool to organize your thoughts and model your domain, you might end up becoming too dependent on it and find it hard to work any other way. This is anecdotal experience.
Also, learning how to wield TDD properly takes a lot of time, trial and error, and practice. Good things don't come easy.
There are obviously places where TDD isn't a good fit - a spike, a requirement that is known in advance to change soon, and exploratory programming are all candidates. However, good practice dictates that you refactor your code once a spike calcifies into production code. At this point, TDD becomes just unit-testing.
Most of the arguments against TDD in this thread seem to be against unit testing in general. But we know unit tests are important. Doing it before the fact increases the value of the unit tests manifold and also ensures that you do have coverage (though that is not the primary objective).
And the road to failure is ignoring best practices because "I'm special" and "I'll do it later because I'll magically have time then". It's a balancing act that pithy aphorisms don't help with.
There's a reason people use the metaphor "Design Debt". Debt is a tool with tradeoffs, financial or design, use appropriately.
If the debt outweighs the benefits you don't take it. I have a project right now that I "know" will be fragile and a maintenance headache later, so even though it's urgent I'm writing the appropriate tests and doing all those best-practice things that you know will pay off 10x later but often skip because you don't want to take the time now.
You could measure how much debt your team tends to take on, by measuring how much time is spent on new features and how much on the type of bugs and the refactoring that pays off design debt. This still misses what I think is the most important part of design debt, how much longer it takes to implement new ideas because you are paying the "the pieces this depends on were rushed and don't work/integrate/extend well" tax (I guess I should call this "interest payments" to not mix metaphors).
You can't measure debt as you take it on though, the best you can do is estimate how much work it will be to fix it later (and that's only if it needs to be fixed later, maybe you get lucky and your hacked up code just stays good enough). We all know more than enough about the pitfalls of estimating software projects and this adds in more uncertainty about future need.
This is all part of the craft side of software development, experience helps, but it's not something easily measured. Too many people take that as an excuse to just do the quick and easy thing and say "move fast and break things!" or fall back on over designing and never get any work done. HN talks about the latter more often and pretty much ignores the former. I find this strange, I've read plenty of accounts of failure where the reasons boiled down to "we got to a point where we couldn't adapt our codebase to changes needed to face a new competitor, change in the landscape, business model pivot, etc. because it was too crufty". Enough design debt means some smaller and more agile competitor will eat your lunch.
Well that's definitely a long enough answer to two simple short questions.
I think we just lie on different sides of the fence on whether to apply best practice early and suffer the initial time cost upfront versus getting a rough around the edges product in place and refactoring later.
Do you have links to articles about companies whose business failed due to lack of technical agility? I was thinking about this recently and don't believe I've ever read about such a case, but it's highly likely I'm looking in the wrong places :)
I look at it as a spectrum rather than a binary choice. I find, for myself, that in the last few years I've gotten screwed too often by spending weeks (amortized over a few months to a year) fixing bugs and patching up hacks that I could have avoided with a few hours of upfront cost.
So I try to move my default a little more to the best practice side. I also don't see this epidemic of overdesign in our community, although it exists. I see people hit a design debt wall and have greatly reduced velocity more often. Sometimes laziness reinforces a desire to move quickly and the wrong tradeoffs are chosen. Sometimes the upfront cost is higher because you have to train people in those best practices, or at least teach them the new tools.
I also have noticed that different fields of programming have different sweet spots so it's not surprising that people don't agree on the one true way either.
But if you don't start with TDD, then when are you going to put it in? How do you transition into it? Are you going to go back over everything you've written and write tests for all of it?
I think you build it in when you know what you're actually building. In the initial dev cycle, everything and anything could be ripped out, so I'd suggest holding off until perhaps halfway through your MVP cycle before you start TDDing.
Largely agree with everything here, but in my opinion there is often a very strong current that forces you to do it the fast, stupid way. If you don't have any of these problems, you work in a unique place...
- Agile - Agile IMHO largely screws you in making sound technical decisions. It's not necessarily because agile is flawed - it's usually because business/management uses agile as an excuse to randomly take a hard left turn every other week, making it much harder to make long-term architectural choices that are beneficial.
- No agreed upon standards or unification amongst software professionals - The accountant in this situation has a set of standards and expectations that allow/force him or her to do things in a moderately set way. This allows the accountant to usually fend off management from pressuring things to be done in a shoddy or just-get-it-done way. On the other hand it's much more difficult for software professionals to say "we as a group do not condone writing shitty software" (because ironically a large number of us do ... "move fast, break things" has done more good than bad from my perspective)
- Ageism - Also known as "experience doesn't really matter." Some will say that's because technology changes so much - but it actually doesn't. Just because you've been grappling with software problems and design patterns in Java doesn't mean that when you switch over to Python anything really changes. Same shit, same problems, different words - but in all honesty we seem to be pretty bad at building on the experience of our elders because they are over the hill at 35... what could they possibly teach us?
For me it at times has been frustrating because coming out of school I really enjoyed the fun of designing systems and code that are sustainable, performant, etc., but there typically seems to be more reward for just throwing quality out the window in startups. Just my personal experience.
In my mind, TDD is a defense against complexity. Complexity is always the enemy. If you can simplify your code through structure or clean design you can minimize or remove testing.
The moment you can't hold the whole thing in your head with ease is the moment you should have done TDD a while ago.
Although I've been a hardcore TDD advocate from time to time, I find myself writing fewer and fewer tests in the early stages these days.
In the very earliest days of a startup, you can easily hold the entirety of the problem space in your head. Refactoring is simple, tests or not. I can destroy and rewrite a feature from scratch in hours or days. A non-trivial number of the features I'm writing will not exist in any form a week or two later. This is not a problem with process or planning; this is the nature of a startup.
I think that TDD is immensely valuable as a team and product grows, but claiming that your startup will fail because you aren't applying engineering "best practices" from day 1 is counter to most successful startups I've worked with.
1. What you are doing starts at 0 value.
2. You can (usually) break it and it's ok.
3. You need to change stuff a LOT to figure out something that someone wants to use, rewriting tests makes that slower.
Every test case you write has a cost: It verifies that something of unknown value works. What if the value of that code is 0? Well, you just doubled down on something useless.
There are no absolutes in software development, but how we did it was have uncomfortably too few test cases when something is new, change the product for a while until someone actually wants to use it, then add a bunch of tests until we know it works from there on out.
And I hesitate to add: Also we work in Java so a bunch of testing comes for free, so there's that.
The way I do my TDD is by writing the unit tests to specifically test the functionality of the code.
For example, if I'm writing a module that does validation for a specific type (for example, in JS) I tend to test the functionality while writing the tests. I can't speak for others, but you have to test your code eventually, and it's either going to be in the classes that are using the code, the REPL, or it's going to be in your unit test/integration framework. I usually pick the test framework first because it serves 2 benefits:
1) I can prove my code works. (I must prove this to my self, or the people using the code at some point.)
2) I can reliably change the code later.
Complexity is also mentioned in a couple of comments here. TDD helps simplify complexity by pushing you to write code that is testable. It also makes you think about your API as you write out the different ways the module/code can be used.
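The example above is JS, but the same shape in a hypothetical Python sketch (is_valid_email and its rules are made up purely for illustration):

    import re

    # Hypothetical validation module, written alongside its tests instead of
    # being poked at through a REPL or the calling code.
    EMAIL_RE = re.compile(r"^[^@\s]+@[^@\s]+\.[^@\s]+$")

    def is_valid_email(value):
        return isinstance(value, str) and bool(EMAIL_RE.match(value))

    # The tests double as a record of what "valid" is supposed to mean, and as
    # the safety net for changing the regex later (benefits 1 and 2 above).
    def test_accepts_ordinary_address():
        assert is_valid_email("ada@example.com")

    def test_rejects_missing_domain():
        assert not is_valid_email("ada@")

    def test_rejects_non_strings():
        assert not is_valid_email(None)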
Further, I think there are multiple issues with tests, and depending on the type of test you're doing, different problems can arise. You have your unit tests, and then you have your integration tests.
Unit tests are relatively easy to write (if your code is split into individual chunks that each do a specific task) and they should always be written regardless of the time you have.
I think the major time issues arise with integration tests. When you're testing the functionality between complicated modules that require databases, I/O writes, or network communication, it is sometimes hard to write the tests, and it may not even be worth it.
The majority of people who disregard TDD, usually disregard a specific sub-problem of TDD. I think TDD has its benefits depending on what type of tests you're doing, and how easily the problems can be solved with those types of tests. You can always pick to write unit tests, and disregard integration tests, etc.
I think there are a lot more issues to expand upon, like the language you're using, the platform you're using and how often your code is required to change, but TDD has large benefits, and code rot is very real and TDD can help mitigate it.
I'm not sure if the author has any idea what is it like to bootstrap a web product and watch the money go down the drain day after day, with zero income to compensate, and with the certainty that you won't make it for another 6 months if the product is not out and making revenue. I'll deal with the technical debt later, thanks. I'm pretty sure Facebook and Google didn't do anything remotely close to TDD when they first shipped, and yet they survived.
To those of you who asked for "links" to studies that prove that TDD works; google around, you'll find plenty. Some are positive, some are negative -- what else is new. Now, please show me the studies that show that _your_ discipline works.
To those of you who consider TDD a religion; you are being silly. TDD is a discipline, like double-entry bookkeeping, or sterile procedure. Like those disciplines it has aspects that are dogmatic; but that aim at a purpose. That purpose is to help you write software fast and well.
To those of you who think I'm a process peddler. You're right; I am. I make a very good living teaching people how to do TDD. I have been doing so for over ten years now. I hope to continue for some years to come. My goal is to help raise our industry to a higher level of professional behavior; because at the moment our behavior is pretty poor.
To those of you who wonder whether I've ever worked at a real start-up. I've worked at several over the years. My first was in 1976; and it was very successful. Another was in 1989, and it didn't do so well. I've recently founded cleancoders.com a startup in video training for software developers. The website is done entirely with TDD. And this doesn't count the rather large number of startups I have consulted for in the last 10 years. So I've got a _lot_ of experience with startups.
Folks, I am 60 years old. I got my first job as a programmer when I was 18. I wrote my first program when I was 12 (on a plastic 3-bit computer). I started using TDD in 1999, after I'd been programming for thirty years. I continue to write code using TDD today. I've seen the startup trap. I've lived the startup trap. I strongly advise you to avoid that trap.
Hacking up a quick demo the non-TDD way definitely has its upsides but if you don't force yourself to review and rewrite your early code you would have been better off doing the right thing from the start.
Just preaching TDD is not helpful, we need a coding process that allows clean separation of early and production code.
Statistically speaking, a web startup can safely not invest in TDD. For precisely the same reason that a daily Russian Roulette player can safely not invest in health insurance.
But for the minority who miraculously dodge the bullet, the extra investment will eventually become a lifesaver.
I find that making sure the foundation of an application is solid (I'm an API-first kind of person) lets you get away with a lot less testing in the upper layers. This lets you move faster and experiment far more quickly than if you were built on quicksand.
This "trap" is Deadline Culture and it doesn't pertain only to code quality, but to other matters like product direction and personnel decisions, which are made quickly and often badly. It's sloppy and destructive. Companies with Deadline Culture often make it easy for a total asshole to gain power just by citing existential risks that either do not exist or are of extremely minor importance. Deadline Culture companies are obsessed with competitors, even though it's their own internal corrosion that does them in.
Sometimes, having deadlines is unavoidable. They might be set externally and exist for regulatory reasons. Deadline Culture is when the company starts encouraging the formation of unreasonable deadlines that triumph over professional integrity and intellectual honesty.
VC-istan seems to encourage it with the whole "if we get bored with you, we'll fund your competitors" arrangement.
Deadline Culture is, however, great for the true fascist. Fascists love (real or imaginary) existential threats, especially vague ones they can continually adapt to their needs, but that come with time pressure.