Anyway, I agreed with pretty much everything, personally.
upvotes for you
I just think that you need to decide up front what your business's core competency is. If it is supposed to be writing great software, then should you really be doing things like "Stop being such a nerd"?
you shouldn't let the first one get in the way of the second.
But in practice, human beings are needy animals who aren't always practical. I find that programmers (including myself) can be more productive if you indulge that side a little bit.
In short, I think that you can have your cake and eat it too: code that programmers appreciate working on and also serves the customers' needs.
1. Switch to a "harder" problem (however that's defined), that necessitates an elegant solution that will impress other programmers.
2. Ignore the desire to impress people who don't matter to your business.
3. There is no third option.
Just build it motherfuckas?
What this article is suggesting sounds great for prototypes, which is what startups should be doing early on, but not forever.
Yes! The "unit" in unit tests makes sense when "units" are what define your code -- because you are delivering units. If you are delivering applications, focus your efforts on having quality tests for those.
(Of course competent coders will develop units, and collections of units aka libraries, along the way, and of course unit tests as such will be right for them. But the measure of an application must be taken at the application level, not at the unit- or aggregate-of-units- level.)
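For instance, an application-level test might look something like this, assuming a Rails app with Capybara (the signup flow and page copy here are invented for illustration):

    # Hypothetical Capybara feature spec. The assertion is about what the
    # application does, not how any individual unit does it, so internals
    # can be restructured freely underneath it.
    require "rails_helper"

    RSpec.feature "Signup" do
      scenario "a visitor creates an account" do
        visit "/signup"
        fill_in "Email", with: "user@example.com"
        fill_in "Password", with: "secret123"
        click_button "Sign up"

        expect(page).to have_content("Welcome")
      end
    end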
TDD fully supports the concept of prototyping. You don't unit test a prototype. The trick is to STOP that prototype when the concept is proven and move on to something you can really build on.
Once you can quantify the benefit, and know what your business is, then a development process based on testing can accelerate value creation while minimizing risk to existing value.
this article reads like someone read a couple of articles about TDD, decided that they should do TDD on their first hacky prototype of their startup idea, and got upset when they realized their idea wasn't fleshed out so they had to throw out code and consequently the tests were useless. this is not the way product design and development works.
in my experience, that's exactly how product design and development works. whether it's a startup or a goliath like Google, you implement a lot of features to test a hypothesis and if they don't work out you scrap them. i think your "never write a line of code you'll throw out later" plan comes from a mythical world.
It's right up there with "I'm an idea guy".
In fact, I think that "I'm an idea guy", has just morphed into "I'm a hustler".
I moved into a new apartment complex a few months ago from out of state. The landlord was going to put us into a different living situation than we contracted for. All our calls were met with a bland, "We ran out of room." Resident hustler roommate had a talk with them and they meekly knocked down a wall to make things right, and gave us a discount.
Hustling has a very different set of skills than hacking. Hacking rewards thinking carefully and doing the Right Thing The First Time, where hustling rewards more of a shotgun approach. Maneuvering one's way past secretaries, remembering 10 acquaintances who might be talked into putting up seed money and calling all of them---these are things hackers hate to do if they can't accomplish them cleverly, and so they often go neglected for great products with hackers behind them (because no one else could be).
I'll take a good hustler over an idea guy any day. If they're a legit hustler, they're worth their weight in gold.
http://www.paulgraham.com/gh.html, search "nasty"
You have just won the gymkhana for linguistic drifting.
A startup can consist of two hackers - but one has to function at least part time as a hustler.
FYI locally at Michigan State the student entrepreneurs group is called hackers and hustlers and it was named after that very post.
This is just what I associate with the word "hustler" and not a reflection on the article, on which I have no comment at the moment.
That's exactly why one does TDD, so that you can both be guided in your design (code that's hard to test is probably crappy code) and also have confidence when it comes to refactoring. This is particularly important in a dynamic language.
Good tests are not written with the requirement or presumption that the codebase will be tightly coupled or be difficult to change. Good tests are written entirely to support change and to give the developer confidence in the ability of that code to change easily.
I think we need more and more materials out there on good TDD and OOD because I'm finding that a lot of really smart people have just never seen or experienced it and have been turned off by the first few slimy rungs of the ladder (including me, once!)
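To make "code that's hard to test is probably crappy code" concrete, here's a minimal sketch; Signup and Mailer are invented names, not anyone's real API:

    # Before: the collaborator is hard-wired, so the class can't be
    # tested without really sending mail.
    class HardToTestSignup
      def complete(user)
        Mailer.deliver_welcome(user)
      end
    end

    # After: test pressure pushes the dependency out to the constructor.
    class Signup
      def initialize(mailer:)
        @mailer = mailer
      end

      def complete(user)
        @mailer.deliver_welcome(user)
      end
    end

    # The spec can now hand in a double, and the design is looser for it.
    RSpec.describe Signup do
      it "sends the welcome mail" do
        mailer = double("mailer")
        expect(mailer).to receive(:deliver_welcome).with("alice")
        Signup.new(mailer: mailer).complete("alice")
      end
    end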
However, it takes a lot more time to write code guided by tests than it does to just write code. And you may end up throwing away a lot of those tests and rewriting them when the code changes in response to customer feedback.
In my mind that's the main argument against TDD.
I'm happy to write tests after the v1.0 of an application is shipped and I have sign-off from my customer, because it is clear then that the time is not wasted - they have the working product as soon as possible, after all.
However, in my world it's inevitable that I'll be assigned to another task almost immediately after I ship a product. Those tests often don't get written.
I suppose I'm accumulating a lot of technical debt, but everything seems to work...
"throwing away tests due to customer feedback" is a red herring-- if you weren't throwing away tests, you'd be throwing away code. additionally, the goal with all development is to only sit down and write code when it's extremely unlikely it's going to change. you should be sure that you're not going to drop this thing you're writing, because you've done the necessary user research to be sure the feature is needed and usable.
"I'm happy to write tests after the v1.0 of an application is shipped and I have sign-off from my customer". if you're shipping untested code, that's a huge problem. the last thing you want is to deal with scaled feedback from a 1.0 release alongside ironing out code quality and reliability with tests
To a point. But as a general rule, I don't agree. The goal should be to write code that's loosely coupled and cohesive enough that you can defer as many changes or decisions as possible to the latest possible point. While this might involve spending more time in planning, the process of developing will itself impact the overall design (in most, but not all, cases).
If only this goal were achievable! Unfortunately it is a frequent requirement of my job to show early prototypes and create new iterations of those products based on the feedback I receive.
So I agree. On the other hand, I think it's also very pragmatic to say "this isn't working for me, there are more important things to focus on".
I think manual testing is underrated, and even integration testing is probably not given its due as one of the better ways to test a product in an automated fashion.
Unit tests in the TDD sense aren't (IMO) all that useful as tests. I'm a big fan of the tweak in calling it Test Driven Design personally.
Which materials do you recommend?
Code I plan to keep? I'm writing unit tests for anything that I'm worried might get broken. Code I'm going to throw away? Fuck it. Rarely worth testing.
The problem comes when people start treating prototype code as production code. That tells me that a) their prototype was probably too rich, and b) they're asking for trouble by building on a shaky foundation.
I think the solution is to be very clear about whether a given chunk of code is a prototype or for real. When a prototype pays off enough that you want to take it seriously, rewrite it. Tests and all.
I have a guess about why HN likes to upvote opinion pieces that hate on TDD. TDD initially feels like it takes discipline. It's natural to dislike things that require discipline. I think people are trying TDD, finding that it doesn't work as advertised, and gravitating towards the most pleasant explanation: that TDD sucks. They never consider the alternate explanation: you're doing it wrong.
I'm seeing a parallel with fitness or eating right. If you leap into a diet or heavy fitness regime right away, it really sucks for a while. But eventually it works out. Practicing mindful, intelligent TDD and object oriented design results in a similar experience. There's a "dip" you need to plough through before things start to click.
However, I gotta say it. If unit tests are making it harder to restructure your code, you're doing it wrong. The opposite should be true. The greatest purpose of unit tests is to add comfort and safety (and thus speed) to very big refactors and restructuring. If your system behavior changes, yes, your tests gotta change, and that takes effort. But if you restructure your code (e.g., break it apart to add an intermediate abstraction) and unit tests slow you down because you've got to get all up in 'em, then brother you got yourself some bad tests. Those should never have been written.
I do think it's best to find a balance that works for whatever particular product you're developing. If you don't follow any methodology at all you're likely to spend a lot of time reinventing the wheel. But you don't necessarily need to follow a methodology to the letter in order to have a great team and produce great software.
Comes from South Park S02E17 Gnomes: http://www.youtube.com/watch?v=TBiSI6OdqvA
Not making the transition at the right time (or at all) has probably killed as many startups as over-engineering in the proving stage.
Here's your problem: You're either not writing unit tests correctly, or you're writing bad unit tests. If you have to rewrite all your unit tests to respect the new abstraction layer, you're programming astonishingly poorly - that should be a change to a few different tests, not all of them.
for example, the interface to enroll in a course used to be an association between an "enrollment" and the "course". when we added "courses has_many sessions", you enrolled in "courses" via "sessions". you have to rewrite all your enrollment specs to respect that new interface now, regardless of whether they were shitty or well-written.
to me, that's a maintenance cost that isn't worth it at a startup given how rapidly you have to change those internal interfaces.
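concretely, the churn looks something like this (hypothetical RSpec, with course and user assumed to come from fixtures or let blocks):

    RSpec.describe "enrolling" do
      # before: the spec talks to the course directly.
      it "enrolls a user in a course" do
        enrollment = course.enrollments.create!(user: user)
        expect(course.enrollments).to include(enrollment)
      end

      # after "course has_many :sessions": the setup in every enrollment
      # spec changes, well-written or not, because the interface moved.
      it "enrolls a user via a session" do
        session = course.sessions.create!
        enrollment = session.enrollments.create!(user: user)
        expect(session.enrollments).to include(enrollment)
      end
    end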
No, not if you're doing it correctly.
Again, that's not a unit test, it's a functional test. Unit tests test really small blocks of code. If you're crossing class boundaries, generally, it's not really a unit test.
Let me show you what I mean with tests I wrote:
Those are unit tests (not beautiful ones, written to refactor, but I digress) - they mock across objects (and, in some cases, even across methods) and they really focus on individual paths through the code. If you change a method, only tests relating to that method fail.
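Something in the same spirit (a hypothetical sketch, not the originals; Invoice and its tax calculator are invented names):

    # The point is the shape: one example per path, every collaborator
    # stubbed at the boundary.
    class Invoice
      def initialize(subtotal:, calculator:)
        @subtotal   = subtotal
        @calculator = calculator
      end

      def total
        @subtotal + @calculator.tax_for(@subtotal)
      end
    end

    RSpec.describe Invoice do
      describe "#total" do
        it "adds the calculator's tax to the subtotal" do
          calculator = double("tax_calculator")
          allow(calculator).to receive(:tax_for).with(100).and_return(8)

          # Only #total is exercised: a change to the calculator's
          # internals can't fail this example, only a change to #total can.
          expect(Invoice.new(subtotal: 100, calculator: calculator).total).to eq(108)
        end
      end
    end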
Edit: also, "the problem isn't that my tests are bad," is a poor assumption to start with - you're assuming a priori something that we're discussing here, which is that you're complaining about problems resulting from not testing correctly. "Bad" is a loaded word in any case - I'm not sure there's a test I've seen that couldn't be improved.
I've never understood endo-testing/mock objects in environments where the compiler cannot check your mocked interfaces. I also don't understand how people can argue that you shouldn't be testing against the implementation and then say in the next breath you should be mocking out every single method call the internal implementation makes explicitly. You're just setting yourself up to get lots and lots of green tests on code that will explode as soon as it hits production. Whenever I've done aggressive mock object based testing I soon have zero confidence in my tests because I get burned due to the fact that the mock objects eventually start asserting that the wrong behavior is right and my code explodes when integrated.
(And yes, I know the excuse here is that you should then write integration tests and functional tests too. But seriously now, how many tests are you going to end up writing for your 100 line Ruby class before you decide you're going overboard in the name of purity?)
Better to just write it so that there are obviously no deficiencies, rather than no obvious deficiencies (avoid side effects, state, extra coupling), and write some functional tests just to be safe. Yes, those ones that actually hit the database and test the interaction between multiple classes that TDD advocates loathe because they are so slow and impure. Slow they may be, but at least I know they're testing the code that's going to run on my servers. I'd rather have 10 tests break when I change one class that are easy to fix than have zero tests break when I change one class and let broken code get to production.
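The kind of test I mean, sketched against hypothetical Rails models (User, Course, Session, Enrollment, and the enrolled_users scope are all invented):

    # Slow and "impure", but it exercises the real schema and real queries.
    RSpec.describe "enrollment counting" do
      it "counts a user once per course even across sessions" do
        user   = User.create!(email: "a@example.com")
        course = Course.create!(name: "Algebra")
        2.times { course.sessions.create!.enrollments.create!(user: user) }

        # This hits the actual database; a broken join or scope fails
        # here, where a fully mocked test would stay green.
        expect(course.enrolled_users.count).to eq(1)
      end
    end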
> And, if you decide to rename any of the methods on the classes you've mocked out here, your unit tests will continue to pass despite the fact your implementation is now full of bugs.
Yes, that's why you have other tests to cover those implementations. This is just an isolated example.
> And, if you do rename the methods or class you've mocked, you now have to update every single test for any class coupled with the changed method.
Yes, this is true. It's a helpful thing in my experience: you wish to mock as little as possible, and so having clear end points is important. Having to change every single usage of the method means you tend to write better code, in essence.
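For what it's worth, rspec-mocks' verifying doubles take some of the sting out of renames: stubbing a method the real class doesn't define raises immediately instead of leaving the suite green. A minimal sketch:

    class Mailer
      def deliver_welcome(user); end
    end

    RSpec.describe "verifying doubles" do
      it "fails fast when the mocked interface drifts" do
        mailer = instance_double(Mailer)
        # If Mailer#deliver_welcome is later renamed, this stub raises at
        # test time rather than silently passing against a stale interface.
        allow(mailer).to receive(:deliver_welcome)
        mailer.deliver_welcome("alice")
      end
    end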
> I've never understood endo-testing/mock objects in environments where the compiler cannot check your mocked interfaces.
That's fine, don't do it. This is just a demonstration of what works for me.
> I also don't understand how people can argue that you shouldn't be testing against the implementation and then say in the next breath you should be mocking out every single method call the internal implementation makes explicitly. You're just setting yourself up to get lots and lots of green tests on code that will explode as soon as it hits production.
This is true, however, it means that when you edit one method, at most a half dozen tests will fail, as opposed to your entire test suite failing. You end up with very good locality of failure, as opposed to binary 'something is wrong' tests.
> Whenever I've done aggressive mock object based testing I soon have zero confidence in my tests because I get burned due to the fact that the mock objects eventually start asserting that the wrong behavior is right and my code explodes when integrated.
This is true, but not something that can be avoided - you end up with problems either way, and these tests (as above) give you very good feedback about where your error is. That, combined with very comprehensive testing, leads to a situation where you can really trust your tests never to throw false positives.
> (And yes, I know the excuse here is that you should then write integration tests and functional tests too. But seriously now, how many tests are you going to end up writing for your 100 line Ruby class before you decide you're going overboard in the name of purity?)
It depends on how important it is to you - for example, the tests above test functionality that is core to a piece of code that runs on many hundreds of applications - not something you ever want to break. As a result, the investment was worth it. You have to decide those tradeoffs on your own.
> Better to just write it so that there are obviously no deficiencies, rather than no obvious deficiencies (avoid side effects, state, extra coupling), and write some functional tests just to be safe.
I'm worried about more than obvious deficiencies - I'm worried about corner cases and things you haven't thought of. In writing these tests I caught dozens of corner cases with unspecified or poor behavior.
> Yes, those ones that actually hit the database and test the interaction between multiple classes that TDD advocates loathe because they are so slow and impure. Slow they may be, but at least I know they're testing the code that's going to run on my servers. I'd rather have 10 tests break when I change one class that are easy to fix than have zero tests break when I change one class and let broken code get to production.
I totally agree with that. There are comprehensive integration style tests and comprehensive functional tests too - but they're pointless without the assistance of specific tests that indicate which portion of the application is failing.
If a functional test fails without a unit test failing, you have work to do on your unit test suite. Unit testing is a tool for programming as much as it is a tool to verify correctness.
We all tend toward "doing what we know." (Coding the next feature, for example) If you recognize it in yourself, that's a big start.
I often "lose" days making code look just a little better, after making it do what it needs to do.