It's a shame most of the comments are focusing on the not-testing point. The last point was much more interesting (if I may paraphrase, engineers in an early-stage startup should be responsible for proving that the features they build actually add value).
Anyway, I agreed with pretty much everything, personally.
So true. Flash and glam when new features are rolled out isn't as important as whether or not they're actually an improvement. An example of this is the difference between Bing and Google.
"Don't worship the code." Well put. I've personally witnessed a situation where a startup was paralyzed by code narcissism. The business was asking for features they couldn't get because implementing those features would mean violating the lovely design patterns chosen by the architecture astronauts. Every developer wants to believe that someday their code will be released as a heroic open source project that solves everyone's problems but in the meantime they neglect to solve the problem their customer is having right now.
I've worked with a software vendor (as a Fortune 50 client) where the architecture and product vision were solid, but the engineering team was deciding what aspects of both they would implement. Not when they would implement them, but if they would. Imagine if a carpenter or autoworker did this. The fallout from this was a major contributing factor to their loss of new business.
The main problem with this is that the author is creating a false dichotomy between writing great software and running a business (hustling). I don't think you have to forgo one to succeed at the other.
I just think that you need to decide up front what your business's core competency is. If it's supposed to be writing great software, then should you really be doing things like "Stop being such a nerd"?
Max Levchin phrased this very point beautifully in his Startup School talk (I think it was advice from Peter Thiel): "A difficult problem doesn't necessarily mean a valuable problem". Your job as a programmer is to add value. Now if you keep working on super interesting stuff that doesn't add value, what's the point really? You should spend your free time on the interesting problems that don't add value.
I mean, writing software that is usable, useful, maintainable, robust and reliable. This is the type of code that both other developers AND your users will love.
I agree in principle, but not in practice. Yeah, in an ideal world, everyone would set aside their own personal desires and wants to focus on the business's goals. And yes, you have to choose the customers' goals if the two should ever conflict.
But in practice, human beings are needy animals who aren't always practical. I find that programmers (including myself) can be more productive if you indulge that side a little bit.
In short, I think that you can have your cake and eat it too: code that programmers appreciate working on and also serves the customers' needs.
It really depends on what problem you're solving. Not all problems require elegant solutions that other programmers will admire. If yours doesn't, you have two options:
1. Switch to a "harder" problem (however that's defined), that necessitates an elegant solution that will impress other programmers.
2. Ignore the desire to impress people who don't matter to your business.
I think it's hilarious that most of the comments I'm seeing here so far seem to be doing the exact over-analysis that this article is trying to dissuade us over-analytical types from doing ... maybe?
What I find hilarious is the idea that things can "just" be built (motherfuckas), rather than being built through lots of hard work, dedication, and yes, even that apparently dirty word: analysis.
What this article is suggesting sounds great for prototypes, which is what startups should be doing early on, but not forever.
"Here’s what you do instead: write integration tests for the critical parts of your application."
Yes! The "unit" in unit tests makes sense when "units" are what define your code -- because you are delivering units. If you are delivering applications, focus your efforts on having quality tests for those.
(Of course competent coders will develop units, and collections of units aka libraries, along the way, and of course unit tests as such will be right for them. But the measure of an application must be taken at the application level, not at the unit- or aggregate-of-units- level.)
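To make "application level" concrete, something like this is what I have in mind (a Capybara-style sketch, assuming a Rails app with Capybara wired in; the page copy and flow are made up):

    it "signs a visitor up and lands them on the dashboard" do
      visit "/signup"
      fill_in "Email",    with: "person@example.com"
      fill_in "Password", with: "secret123"
      click_button "Sign up"
      expect(page).to have_content("Welcome to your dashboard")
    end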
For that matter, what the author complained about in the article didn't strike me as unit testing. It was testing, and it may have been done using a unit testing package, but it sounds like what was being tested was not units.
As a consultant I am constantly called onto projects that are in varied states of "prototype gone wild" - they seem to already be following this advice on many levels and pay the price later. YMMV
It is WAY more expensive to do later than it is to keep this sort of thing in mind up front. I've done a lot of work with product startups that have dug a massive tech debt hole that is actively affecting their bottom line. Rickety foundations.
TDD fully supports the concept of prototyping. You don't unit test a prototype. The trick is to STOP that prototype when the concept is proven and move on to something you can really build on.
So, here's the thing: Early effort is more expensive than later effort, because if the idea doesn't survive, all that effort has an infinite cost/benefit ratio! Before you have any value, what are your tests supposed to be defending?
Once you can quantify the benefit, and know what your business is, then a development process based on testing can accelerate value creation while minimizing risk to existing value.
This is a nice idea, and perhaps is the original meaning of the term, but in practice sometimes a mess is exactly what you should be creating. Since you know you are going to be throwing it in the trash, this mental mode of prototyping fosters creativity like no other.
Technical debt is just that: Debt. And in a start-up that is incurring large piles of various forms of financial debt, it might very much not be rational to incur financial debt to avoid technical debt. Of course, there's a balance to be struck and it's also important to call out the fact that you're incurring technical debt and make it a conscious decision.
you've nailed the core problem with the article-- you don't test stuff at an extremely detailed level (unit tests) until you're pretty damn sure of the feature and the way you want to implement it. the way you get there is through user tests, user tests, user tests. the goal should be to never write a line of code that you'll throw out later.
this article reads like someone read a couple of articles about TDD, decided that they should do TDD on their first hacky prototype of their startup idea, and got upset when they realized their idea wasn't fleshed out so they had to throw out code and consequently the tests were useless. this is not the way product design and development works.
nope. i've done TDD for 3 years, ever since I learned Rails.
in my experience, that's exactly how product design and development works. whether it's a startup or a goliath like Google, you implement a lot of features to test a hypothesis and if they don't work out you scrap them. i think your "never write a line of code you'll throw out later" plan comes from a mythical world.
This is different: he said that he is called as a consultant on projects where this way of working has run amok. So the parent has no reason to propagate his own way of doing things.
I don't know...a hustler actually solves problems. Nasty, gross, annoying, fuzzy problems that have no right to exist, but do anyway. Hackers hate these problems[1].
I moved into a new apartment complex a few months ago from out of state. The landlord was going to put us into a different living situation than we contracted for. All our calls were met with a bland, "We ran out of room." Resident hustler roommate had a talk with them and they meekly knocked down a wall to make things right, and gave us a discount.
Hustling has a very different set of skills than hacking. Hacking rewards thinking carefully and doing the Right Thing The First Time, whereas hustling rewards more of a shotgun approach. Maneuvering one's way past secretaries, remembering 10 acquaintances who might be talked into putting up seed money, and calling all of them---these are things hackers hate to do if they can't accomplish them cleverly, and so they often go neglected on great products with hackers behind them (because no one else could be).
I'll take a good hustler over an idea guy any day. If they're legit a hustler, they're worth their weight in gold.
I'd say a hustler is a do-er. He's solution-focused, knows whom to talk to about which problem to get it solved, goes the extra mile and is just as passionate as the hacker (he just works on a different problem than you).
More than that, being a self-proclaimed "hustler" also feels sleazy. It's sort of like combining an "idea" guy with somebody whose only goal is to sell the product, not to add quality or make something cool.
This is just what I associate with the word "hustler" and not a reflection on the article, on which I have no comment at the moment.
It seems to be used in a more positive way in tech, but I agree that I always read it as having a sleazy connotation--- not just someone in it for the money, but in it to make money at all costs, dishonestly if necessary. The term came from people who use deception to lure people into making bad bets, after all.
The point is, code evolves. It’s never “done”, so don’t write tests that presume it will be static and your interfaces won’t change.
That's exactly why one does TDD, so that you can both be guided in your design (code that's hard to test is probably crappy code) and also have confidence when it comes to refactoring. This is particularly important in a dynamic language.
Good tests are not written with the requirement or presumption that the codebase will be tightly coupled or be difficult to change. Good tests are written entirely to support change and to give the developer confidence in the ability of that code to change easily.
I think we need more and more materials out there on good TDD and OOD because I'm finding that a lot of really smart people have just never seen or experienced it and have been turned off by the first few slimy rungs of the ladder (including me, once!)
However, it takes a lot more time to write code guided by tests than it does to just write code. And you may end up throwing away a lot of those tests and rewriting them when the code changes in response to customer feedback.
In my mind that's the main argument against TDD.
I'm happy to write tests after the v1.0 of an application is shipped and I have sign-off from my customer, because it is clear then that the time is not wasted - they have the working product as soon as possible, after all.
However, in my world it's inevitable that I'll be assigned to another task almost immediately after I ship a product. Those tests often don't get written.
I suppose I'm accumulating a lot of technical debt, but everything seems to work...
i feel the exact opposite-- if you have a good test, the code is nearly trivial to write. the key is to always bounce back and forth between tests and code, so that you're always just writing the minimum possible thing that passes, which (when looking at a test) is super simple.
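for example, a toy illustration of that back-and-forth (rspec-ish, not from any real app):

    require "rspec/autorun"

    # 1. write the smallest spec that fails...
    RSpec.describe "full name" do
      it "joins first and last" do
        expect(full_name("Ada", "Lovelace")).to eq("Ada Lovelace")
      end
    end

    # 2. ...then the minimum code that makes it pass, then repeat
    def full_name(first, last)
      "#{first} #{last}"
    end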
"throwing away tests due to customer feedback" is a red herring-- if you weren't throwing away tests, you'd be throwing away code. additionally, the goal with all development is to only sit down and write code when it's extremely unlikely it's going to change. you should be sure that you're not going to drop this thing you're writing, because you've done the necessary user research to be sure the feature is needed and usable.
"I'm happy to write tests after the v1.0 of an application is shipped and I have sign-off from my customer". if you're shipping untested code, that's a huge problem. the last thing you want is to deal with scaled feedback from a 1.0 release alongside ironing out code quality and reliability with tests
> additionally, the goal with all development is to only sit down and write code when it's extremely unlikely it's going to change.
To a point. But as a general rule, I don't agree. The goal should be to write code that's loosely coupled and highly cohesive, so that you can defer as many changes or decisions as possible to the latest possible point. While this might involve spending more time in planning, the process of developing will itself impact the overall design (in most, but not all, cases).
> the goal with all development is to only sit down and write code when it's extremely unlikely it's going to change
If only this goal were achievable! Unfortunately it is a frequent requirement of my job to show early prototypes and create new iterations of those products based on the feedback I receive.
"tightly coupled" is the norm (I'd pick on Rubyists, but really it's the problem in any language I'm familiar with). It's hard to get over the hump.
So I agree. On the other hand, I think it's also very pragmatic to say "this isn't working for me, there are more important things to focus on".
I think manual testing is underrated, and even integration testing is probably not given its due as one of the better ways to test a product in an automated fashion.
Unit tests in the TDD sense aren't (IMO) all that useful as tests. I'm a big fan of the tweak in calling it Test Driven Design personally.
Code I plan to keep? I'm writing unit tests for anything that I'm worried might get broken. Code I'm going to throw away? Fuck it. Rarely worth testing.
The problem comes when people start treating prototype code as production code. That tells me that a) their prototype was probably too rich, and b) they're asking for trouble by building on a shaky foundation.
I think the solution is to be very clear about whether a given chunk of code is a prototype or for real. When a prototype pays off enough that you want to take it seriously, rewrite it. Tests and all.
The points made here seem like a great idea if your goal is to get something out the door as fast as possible. But they don't seem so great if your goal is to actually produce something that you can continue maintaining over the next few years. So maybe if your goal is to get bought out, then you only care about having something working for a short time, but if your goal is to build great software that you can continue to build and maintain for 5 or 10 years, then you might need to rethink some of this.
TDD does work, and I use it in practice. I know a whole company full of people who all use it in practice, and it works.
I have a guess about why HN likes to upvote opinion pieces that hate on TDD. TDD initially feels like it takes discipline. It's natural to dislike things that require discipline. I think people are trying TDD, finding that it doesn't work as advertised, and gravitating towards the most pleasant explanation: that TDD sucks. They never consider the alternate explanation: you're doing it wrong.
I agree, from the perspective of once being one of those people.
I'm seeing a parallel with fitness or eating right. If you leap into a diet or heavy fitness regime right away, it really sucks for a while. But eventually it works out. Practicing mindful, intelligent TDD and object oriented design results in a similar experience. There's a "dip" you need to plough through before things start to click.
I agree with almost all of what you said. I'm a coder, agilist, craftsman, blah, blah who has worked on a startup more than once. On the last one I wrote almost no unit tests. The whole thing was so experimental from the start that I never got around to it. I explained to myself that I was being a hustler. I'm okay with that.
However, I gotta say it. If unit tests are making it harder to restructure your code, you're doing it wrong. The opposite should be true. The greatest purpose of unit tests is to add comfort and safety (and thus speed) to very big refactors and restructuring. If your system behavior changes, yes, your tests gotta change, and that takes effort. But if you restructure your code (e.g., break it apart to add an intermediate abstraction) and unit tests slow you down because you gotta get all up in 'em, then brother, you got yourself some bad tests. Those should not have been written.
There's too abrupt a distinction between integration and unit tests. Testing the behavior of your code before it's written can be organizing and efficient. Testing private methods, or code that is otherwise internal, is, as was mentioned, an additional maintenance cost.
I don't really agree that unit tests should need to be re-written quite as much as indicated in the article. Our unit tests tend to have good coverage at the lower, model level. At the UI level we rely more on usability testing because it's a bit harder to test automatically and things tend to change more frequently.
I do think it's best to find a balance that works for whatever particular product you're developing. If you don't follow any methodology at all you're likely to spend a lot of time reinventing the wheel. But you don't necessarily need to follow a methodology to the letter in order to have a great team and produce great software.
I think a lot of startups (and companies in general) become so enamored with their work that they lose sight of the real value of what they're doing. For example, as the author said, losing the forest for the trees: getting tunnel vision on one very good feature causes stagnation to the rest of the system as a whole. It doesn't move forward, it just makes one 10/10 feature in a 7/10 system. Metrics that show how useful a userbase finds one feature are always good because they keep the programmer's eye on the difference between being objectively useful and just being neurotic.
> where if I had written unit tests I would have found myself essentially rewriting all of them to respect the new abstraction layer.
Here's your problem: You're either not writing unit tests correctly, or you're writing bad unit tests. If you have to rewrite all your unit tests to respect the new abstraction layer, you're programming astonishingly poorly - that should be a change to a few different tests, not all of them.
the problem isn't that my tests are bad, the problem is that it's commonplace to adapt new interfaces in a constantly evolving codebase. when your unit tests were built for one interface, you have to change them when you change the interface.
for example, the interface to enroll in a course used to be an association between an "enrollment" and the "course". when we added "courses has_many sessions", you enrolled in "courses" via "sessions". you have to rewrite all your enrollment specs to respect that new interface now, regardless of whether they were shitty or well-written.
to me, that's a maintenance cost that isn't worth it at a startup given how rapidly you have to change those internal interfaces.
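roughly the shape of that churn, in rspec-ish pseudocode (simplified, not our real specs -- assume the rails models exist):

    # before: enrollments hang directly off the course
    it "enrolls a user in a course" do
      Enrollment.create!(user: user, course: course)
      expect(course.students).to include(user)
    end

    # after "courses has_many sessions": every one of these specs changes,
    # because enrollment now goes through a session
    it "enrolls a user in a course" do
      session = course.sessions.create!(starts_at: 1.week.from_now)
      Enrollment.create!(user: user, session: session)
      expect(session.students).to include(user)
    end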
> you have to rewrite all your enrollment specs to respect that new interface now, regardless of whether they were shitty or well-written.
No, not if you're doing it correctly.
Again, that's not a unit test, it's a functional test. Unit tests test really small blocks of code. If you're crossing class boundaries, generally, it's not really a unit test.
Those are unit tests (not beautiful ones, written to refactor, but I digress) - they mock cross-object (and even cross-method, in some cases) and they really focus on individual paths through the code. If you change a method, only tests relating to that method fail.
Edit: also, "the problem isn't that my tests are bad," is a poor assumption to start with - you're assuming a priori something that we're discussing here, which is that you're complaining about problems resulting from not testing correctly. "Bad" is a loaded word in any case - I'm not sure there's a test I've seen that couldn't be improved.
And, if you decide to rename any of the methods on the classes you've mocked out here, your unit tests will continue to pass despite the fact your implementation is now full of bugs. And, if you do rename the methods or class you've mocked, you now have to update every single test for any class coupled with the changed method.
I've never understood endo-testing/mock objects in environments where the compiler cannot check your mocked interfaces. I also don't understand how people can argue that you shouldn't be testing against the implementation and then say in the next breath you should be mocking out every single method call the internal implementation makes explicitly. You're just setting yourself up to get lots and lots of green tests on code that will explode as soon as it hits production. Whenever I've done aggressive mock object based testing I soon have zero confidence in my tests because I get burned due to the fact that the mock objects eventually start asserting that the wrong behavior is right and my code explodes when integrated.
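The rename problem in a nutshell (a contrived RSpec-style sketch with made-up names, not anyone's real code):

    require "rspec/autorun"

    class Order
      def initialize(gateway)
        @gateway = gateway
      end

      def complete!
        @gateway.charge(100)  # suppose the real Gateway later renames #charge to #charge!
      end
    end

    RSpec.describe Order do
      it "charges the gateway on completion" do
        gateway = double("Gateway", charge: true)  # plain double, never checked against the real class
        Order.new(gateway).complete!
        expect(gateway).to have_received(:charge)
      end
    end

    # If the real Gateway renames #charge and Order isn't updated, this spec keeps
    # passing -- the double happily answers to the old name -- and the bug only
    # shows up in production or in an integration test. Verifying doubles
    # (instance_double(Gateway)) catch this, at the cost of coupling to the class.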
(And yes, I know the excuse here is that you should then write integration tests and functional tests too. But seriously now, how many tests are you going to end up writing for your 100 line Ruby class before you decide you're going overboard in the name of purity?)
Better to instead just write it so that, rather than having no obvious deficiencies, it obviously has no deficiencies (avoid side effects, state, extra coupling), and write some functional tests just to be safe. Yes, those ones that actually hit the database and test the interaction between multiple classes that TDD advocates loathe because they are so slow and impure. Slow they may be, but at least I know they're testing the code that's going to run on my servers. I'd rather have 10 tests break when I change one class that are easy to fix than have zero tests break when I change one class and let broken code get to production.
> And, if you decide to rename any of the methods on the classes you've mocked out here, your unit tests will continue to pass despite the fact your implementation is now full of bugs.
Yes, that's why you have other tests to cover those implementations. This is just an isolated example.
> And, if you do rename the methods or class you've mocked, you now have to update every single test for any class coupled with the changed method.
Yes, this is true. It's a helpful thing in my experience: you wish to mock as little as possible, and so having clear end points is important. Having to change every single usage of the method means you tend to write better code, in essence.
> I've never understood endo-testing/mock objects in environments where the compiler cannot check your mocked interfaces.
That's fine, don't do it. This is just a demonstration of what works for me.
> I also don't understand how people can argue that you shouldn't be testing against the implementation and then say in the next breath you should be mocking out every single method call the internal implementation makes explicitly. You're just setting yourself up to get lots and lots of green tests on code that will explode as soon as it hits production.
This is true, however, it means that when you edit one method, at most a half dozen tests will fail, as opposed to your entire test suite failing. You end up with very good locality of failure, as opposed to binary 'something is wrong' tests.
> Whenever I've done aggressive mock object based testing I soon have zero confidence in my tests because I get burned due to the fact that the mock objects eventually start asserting that the wrong behavior is right and my code explodes when integrated.
This is true, but not something that can be avoided - you end up with problems either way, and these test (as above) give you very good feedback about where your error is. That, combined with very comprehensive testing, leads to a situation where you can trust your tests really well never to throw false positives.
> (And yes, I know the excuse here is that you should then write integration tests and functional tests too. But seriously now, how many tests are you going to end up writing for your 100 line Ruby class before you decide you're going overboard in the name of purity?)
It depends on how important it is to you - for example, the tests above test functionality that is core to a piece of code that runs on many hundreds of applications - not something you ever want to break. As a result, the investment was worth it. You have to decide those tradeoffs on your own.
> Better to instead just write it so that, rather than having no obvious deficiencies, it obviously has no deficiencies (avoid side effects, state, extra coupling), and write some functional tests just to be safe.
I'm worried about more than obvious deficiencies - I'm worried about corner cases and things you haven't thought of. In writing these tests I caught dozens of corner cases with unspecified or poor behavior.
> Yes, those ones that actually hit the database and test the interaction between multiple classes that TDD advocates loathe because they are so slow and impure. Slow they may be, but at least I know they're testing the code that's going to run on my servers. I'd rather have 10 tests break when I change one class that are easy to fix than have zero tests break when I change one class and let broken code get to production.
I totally agree with that. There are comprehensive integration style tests and comprehensive functional tests too - but they're pointless without the assistance of specific tests that indicate which portion of the application is failing.
If a functional test fails without a unit test failing, you have work to do on your unit test suite. Unit testing is a tool for programming as much as it is a tool to verify correctness.
I totally agree - they're not designed to be readable, unfortunately, they're just designed to test things so that I could make sure they passed after a refactoring. As I said, there are many things I could wish to improve about them - they were just a demonstration of the point at hand (isolation).
You're mixing up interface and implementation. You should be able to change how you implement the association without changing the interface or breaking the test; unless your test knows too much about the internals of the implementation, in which case it's a poorly written test.
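A toy version of what I mean (illustrative minitest, nothing like the real models):

    require "minitest/autorun"

    class Course
      def initialize
        @sessions = [[]]           # v2 internals: enrollments live on sessions
      end

      def enroll(user)
        # v1 used to append straight to an @students array;
        # v2 goes through the default session -- the interface didn't move
        @sessions.first << user
      end

      def students
        @sessions.flatten
      end
    end

    class CourseTest < Minitest::Test
      def test_enrolling_a_user
        course = Course.new
        course.enroll("ada")
        assert_includes course.students, "ada"  # unchanged between v1 and v2
      end
    end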
That's the OO theory, but interfaces do frequently change in any codebase that isn't mature, because it's not always clear up front what the interface actually should be, and if you haven't committed to supporting it (e.g. because it's an externally exposed API that can't be dropped), it can be better to change your 6-month-old interface than keep one you realized wasn't great.
Good tests help you change either the interface or the implementation separately. Trying to change both at once is madness, in my opinion, and almost impossible even with good tests.
Besides, unless you're rewriting literally all of your tests, the ones that you don't change help make sure that your other code isn't broken by your new abstraction layer.
I would like to add that anyone who uses TDD to test-develop any GUI application, such as an iPhone or Android app, is a freaking ill-informed idiot... unit tests are poor behavior analysts or indicators.
I think "idiot" is a little harsh. You can definitely be too earnest in writing unit tests, but I think there are places in almost any program that would benefit from them. For example, in a GUI app chances are you have some sort of model of the data; tests could help ensure that the model is well behaved. This way, if you have an odd GUI glitch you can be confident it's actually in the GUI code rather than in the data underlying the interface.