Why We Don't Write Tests (whatdoitest.com)
14 points by genericsteele 1238 days ago | 16 comments



He forgot two of the most important reasons people don't test:

1) We inherited this monolithic spaghetti mess of a legacy system with a class hierarchy that does not lend itself to testing without a major rewrite of the codebase.

2) Online tutorials expertly teach you how to test methods like add(x, y) and things associated with the 5-minute blog tutorial they also have, but fail miserably at teaching you how to test code that actually might exist in the real world.


Yeah I did.

1) Inheriting someone else's bad code and habits is a huge reason to throw testing out the window. It's really frustrating and always comes with a "We'll write tests in the future."

2) This goes in line with another thing I've been finding. It's super easy to show why you should test, but it's much harder to actually show how to test in the real world. These tutorials show the simplest way to write a test, and it hurts those trying to learn.


Any plans to take a crack at solving #2, or does your book already do that (we'll assume for now that #1 is an impossible situation)? I understand testing on an intellectual level, and I understand completely how to write tests for a 5-minute blog. But I have yet to experience instruction on writing tests that actually use real-world classes (i.e. not Dog inherited from Animal) and actual real-world data. First one to do this gets my ebook money.

Specifically, my software deals with hardware devices. Do I simulate those devices in code (and if so, do I need tests to test my device simulator)? Or do I somehow gather many MB of data and keep it stored somehow for testing? I'm thinking these are simple questions for a testing veteran, but nobody I work with is that. And getting permission to spend time learning is not easy in a bad economy. :)


1) is pretty hard, but I'm tackling some basic strategies for adding tests to an untested mass of code.

2) is the entire reason why I'm writing the book. Building a testing habit isn't as simple as following some basic tutorials. It's a fundamental shift in how you think about writing code and can't be summed up in a 5-minute blog post, like you say.

To address your software, the answer depends a bit on the code. For the code that depends on device data, you simulate only the minimum device data your code needs to work. This means that if you have a method that only needs a device id, you only provide a device id. If you have a method that generates a report, you provide all the data that is needed in the report.
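
A minimal sketch of that first idea in Python, using the standard library's unittest (the function and URL are invented for illustration, not from your system):

    import unittest

    def status_url(device_id):
        # Code under test: it only needs an id, not a whole simulated device.
        return "https://api.example.com/devices/%s/status" % device_id

    class StatusUrlTest(unittest.TestCase):
        def test_builds_url_from_id_alone(self):
            # No device simulator required; supply just the one field used.
            self.assertEqual("https://api.example.com/devices/42/status",
                             status_url(42))

    if __name__ == "__main__":
        unittest.main()

The report method would get the same treatment: a hand-built record with exactly the fields the report reads, and nothing more.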

Another approach would be to group the test data by common traits. I don't know enough about your software to come up with concrete examples, but you likely don't need to collect test data for every single device, just data that is representative of each kind of device.
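
As a hypothetical sketch of that grouping (again Python; the traits and fields are invented, since I don't know your devices):

    import unittest

    # One hand-built reading per trait, rather than one per physical device.
    REPRESENTATIVE_READINGS = {
        "low_power": {"voltage": 3.3, "samples": [0.1, 0.2]},
        "high_rate": {"voltage": 12.0, "samples": [0.5] * 1000},
        "no_data": {"voltage": 5.0, "samples": []},
    }

    def average_sample(reading):
        # Code under test: average of the samples, 0.0 when there are none.
        samples = reading["samples"]
        return sum(samples) / len(samples) if samples else 0.0

    class AverageSampleTest(unittest.TestCase):
        def test_every_representative_trait(self):
            for trait, reading in REPRESENTATIVE_READINGS.items():
                # Each trait should yield a non-negative average.
                self.assertGreaterEqual(average_sample(reading), 0.0, trait)

    if __name__ == "__main__":
        unittest.main()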

If you want to find me on the twitter (@genericsteele), we could keep this conversation going. I'm interested in how you see the world of testing, and just this thread has helped me think of new perspectives. I would love to figure out how you could overcome the obstacles your work is throwing at you.


I read the headline and skimmed the article. I only realised at the end, when he's trying to sell you something, that the 'we' is inclusive-marketing-speak.

Before that I thought that it was 'we (my organisation) don't write tests, and here's a defence of why', followed by a list of excuses (in which I thought 'I wouldn't touch this guy with a bargepole'). I only realised he wasn't making excuses for himself after reading the homepage.

tl;dr: misleading title and confessional writing style; YMMV


In my opinion, tests should be an afterthought until your product is seeing success and heavy use.


A year ago, I would have agreed with you. There's nothing stopping you from building a successful product without writing tests. Since I built the habit of writing tests, though, it definitely makes a difference, especially if you're working with a team or contractors.

Tests are a super helpful way to say "See, this works. Make sure it keeps working, everyone."


You're not afraid that actual or potential users might be scared off your product by defects they encounter while using or trying it?


I would manually test every aspect of it, and make sure there are no known defects. So any defect found by a customer would likely be some obscure edge case.


What happens when your customers' data is lost or corrupted because of a bug you could have caught?


Data redundancy in multiple locations and good logging can fix that problem better than tests, I think. Most bugs can be found anyway during development, without coded tests.


For a very narrow range of software, I suppose. Not for software that has side-effects and actually does stuff in the real world.

What about software that sends emails to people, places orders, performs billable work, gives people directions, or supplies them with data that they then carry forward and use in decisions or in other systems?

And are you proposing that, instead of writing tests, you write a play-back-able log system that can roll back state and re-apply transformations if a given component did something incorrect?

I think sensible testing is the way forward, where "sensible" means appropriate to the type of application, language, and requirements. 100% coverage is suitable for industrial code, and 1% coverage is appropriate for toy projects. But no tests at all seems foolhardy.


That's not going to save you from failing to capture key records, or capturing them incorrectly.

You do have a point about manual testing being just as effective as automated testing, just a lot more time-consuming.


If your data transformations are bugged, you've merely succeeded in redundantly duplicating the wrong answer; backups won't save you.


Not sure what you mean by "coded" tests, but I agree that most bugs can be found during development. Unfortunately, my experience is that they rarely are. But if your development team finds most bugs during development, my hat's off to you.


By coded tests, I meant taking the time to write test scripts. I assumed that's what the article meant.



