1) We inherited this monolithic spaghetti mess of a legacy system with a class hierarchy that does not lend itself to testing without a major rewrite of the codebase.
2) Online tutorials expertly teach you how to test methods like add(x, y) and other toy code from the same 5-minute blog tutorial, but fail miserably at teaching you how to test code that might actually exist in the real world.
1) Inheriting someone else's bad code and habits is a huge reason people throw testing out the window. It's really frustrating and always comes with a "We'll write tests in the future."
2) This goes in line with another thing I've been finding. It's super easy to show why you should test, but it's much harder to actually show how to test in the real world. These tutorials show the simplest possible way to write a test, and that hurts those trying to learn.
Specifically, my software deals with hardware devices. Do I simulate those devices in code (and if so, do I need tests to test my device simulator)? Or do I somehow gather many MB of data and keep it stored somehow for testing? I'm thinking these are simple questions for a testing veteran, but nobody I work with is that. And getting permission to spend time learning is not easy in a bad economy. :)
2) is the entire reason why I'm writing the book. Building a testing habit isn't as simple as following some basic tutorials. It's a fundamental shift in how you think about writing code and can't be summed up in a 5-minute blog, like you say.
To address your software: the answer is a little fuzzy. For the code that depends on device data, simulate only the minimum device data your code needs to work. That means if you have a method that only needs a device id, you only provide a device id. If you have a method that generates a report, you provide all the data that report needs.
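To make that concrete, here's a minimal sketch in Python. All the names here (`Device`, `label_for`, `generate_report`) are my inventions for illustration, not anything from your actual codebase; the point is just that each test fabricates only the fields its method actually reads.

```python
# Hypothetical sketch: give each unit under test only the data it needs.
from dataclasses import dataclass, field

@dataclass
class Device:
    device_id: str
    readings: list = field(default_factory=list)

def label_for(device):
    """Only reads the id, so its test only supplies an id."""
    return f"device-{device.device_id}"

def generate_report(device):
    """Also reads the readings, so its test supplies those too."""
    average = sum(device.readings) / len(device.readings)
    return {"id": device.device_id, "average": average}

def test_label_needs_only_an_id():
    # No readings fabricated here, because label_for never touches them.
    assert label_for(Device(device_id="42")) == "device-42"

def test_report_needs_readings():
    # This fixture carries readings because the report consumes them.
    report = generate_report(Device(device_id="42", readings=[1.0, 3.0]))
    assert report["average"] == 2.0

test_label_needs_only_an_id()
test_report_needs_readings()
```

Each fixture stays tiny because it mirrors exactly what the code under test consumes, nothing more.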
Another approach would be to group the test data into common traits. I don't know enough about your software to come up with examples, but you likely don't need to collect test data for every single device, just data that is representative of every kind of device.
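A sketch of that trait idea (again, everything here is hypothetical; `summarize` stands in for whatever your code actually does with device readings): instead of many MB of captured data per device, you keep one small sample per trait that matters.

```python
# Hypothetical sketch: one tiny representative sample per trait,
# instead of a full captured dataset for every physical device.
REPRESENTATIVE_SAMPLES = {
    "normal": [10.0, 11.0, 9.5],
    "empty": [],
    "single_reading": [42.0],
    "out_of_range": [10.0, 9999.0],
}

def summarize(readings, max_valid=1000.0):
    """Toy stand-in: average of the in-range readings, or None."""
    valid = [r for r in readings if r <= max_valid]
    if not valid:
        return None
    return sum(valid) / len(valid)

def test_each_trait():
    expected = {
        "normal": 30.5 / 3,
        "empty": None,
        "single_reading": 42.0,
        "out_of_range": 10.0,  # the 9999.0 reading is filtered out
    }
    for trait, sample in REPRESENTATIVE_SAMPLES.items():
        assert summarize(sample) == expected[trait], trait

test_each_trait()
```

When a new device surfaces a behavior the existing traits don't cover, you add one more small sample rather than another data dump.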
If you want to find me on the twitter (@genericsteele), we could keep this conversation going. I'm interested in how you see the world of testing, and just this thread has helped me think of new perspectives. I would love to figure out how you could overcome the obstacles your work is throwing at you.
Before that, I thought it was 'we (my organisation) don't write tests, and here's a defence of why', followed by a list of excuses (at which point I thought 'I wouldn't touch this guy with a bargepole'). I only realised he wasn't making excuses for himself after reading the homepage.
tl;dr: misleading title and confessional writing style; YMMV.
Tests are a super helpful way to say "See, this works. Make sure it keeps working, everyone."
What about software that sends emails to people, places orders, performs billable work, gives people directions, or supplies them with data that they then carry forward and use in decisions or in other systems?
And are you proposing that instead of writing tests, you write a play-back-able log system that can roll back state and re-apply transformations if a given component did something incorrect?
I think sensible testing is the way forward, where sensible is appropriate to the type of application, language and requirements. 100% coverage is suitable for industrial code and 1% coverage is appropriate for toy projects. But no tests at all seems foolhardy.
You do have a point about manual testing being just as effective as automated testing, just a lot more time-consuming.