I also don't like unit tests and very rarely unit test. I think they provide a false sense of security and were invented by corporate software shops to better quantify "units of work" (oh, how many times I've had unit tests assigned to me in tickets!).
If you can't formally prove something doesn't break (in your head or with pen & paper or via pseudocode), your code is too complex. There are a few exceptions to this, but they are highly technical (e.g. FFT, cryptographic, physics/math implementations, tricky pointer arithmetic, regular expressions, etc.) where a hard-to-see typo can actually break things in non-obvious ways.
Most code is not that -- it's just written poorly (because code standards aren't enforced). Linux is a great example of a project where coding standards are annoyingly enforced (and there's no real "unit" testing) and lo and behold, the code is of exceptional quality.
When working with other people's code, I'd rather have an existing suite of tests I can use to verify my changes.
And more importantly, if JIRAISSUE-15295 is reproducible, then a unit test that a) reproduces it and b) verifies that the bug no longer occurs, is invaluable to prevent someone bringing JIRAISSUE-15295 back from the dead.
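That pattern can be sketched in a few lines. Everything here is invented for illustration (the bug, the function, the names); the point is that the test first reproduces the reported input, then pins the fixed behavior:

```python
# Hypothetical regression test for a fixed bug (all names invented).
# The bug: parse_quantity("") used to raise an unhandled ValueError
# instead of returning None. This test keeps it from coming back.

def parse_quantity(text: str):
    """Toy implementation standing in for the real, fixed code."""
    stripped = text.strip()
    if not stripped:
        return None  # the original bug: empty input crashed here
    return int(stripped)

def test_empty_input_regression():
    # a) reproduces the originally failing input
    assert parse_quantity("") is None
    # b) verifies normal behavior still holds after the fix
    assert parse_quantity("  42 ") == 42
```

Run under any test runner (or called directly), this fails loudly the moment someone reintroduces the crash on empty input.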
Of course, if the unit test is too tightly coupled, and has too many insights into code it shouldn't, then it's worthless as JIRAISSUE-15295 will most likely reoccur via a different code path.
But, poorly written unit tests aside, I have found significant value in unit tests when maintaining a rapidly changing code base.
A rapidly changing codebase is where unit tests are least useful though. Most of the changes are going to be because of new requirements which just means the test has to be updated. It's just busywork at that point.
If you have discovered a way to write unit tests that can tell the difference between a regression and an enhancement, please let me know.
This is probably a bad habit, but I've been writing tests so that I don't have to actually navigate through my app, put it in the correct state, and push a button to see if the feature works or not. Writing the test is just faster.
This is one of the major sources of bugs in my experience. People write unit tests that are perfect and pass, but only pass because you give it exactly the right input to make it pass in the first place.
Then when running the actual app, the data is not exactly like on the unit test and you have bugs.
To cover those scenarios, we use integration/acceptance tests.
So I always come back to the same question: then why should I even bother with unit tests? Especially the style of unit testing that's become the norm (one test per class/public method).
You end up with unit tests that are extremely coupled to the implementation, and without proper integration tests, you can't guarantee it will all work anyway. It only works in a bubble.
I believe tests should be much more about the broader behaviors than the implementation details, but today's TDD evangelists will have you writing tests for every small class you create. I personally still use unit tests from time to time when there are a ton of edge cases on a single behavior that I want to test, but that's the exception, not the rule.
By avoiding unit tests, or to put it differently making the units tested larger, you get more space to refactor, less coupling between tests and implementation as well as more meaningful tests.
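One way to read "larger units": test a module's observable behavior through its public entry point, not each helper inside it. A minimal sketch with invented names:

```python
# Behavior-level test: exercises the module only through its public API,
# so the internals (data structures, helper methods) stay free to change
# under refactoring. All names here are invented for illustration.

class Cart:
    def __init__(self):
        self._items = {}  # internal representation; tests never touch it

    def add(self, sku: str, price: float, qty: int = 1):
        unit, count = self._items.get(sku, (price, 0))
        self._items[sku] = (unit, count + qty)

    def total(self) -> float:
        return sum(unit * count for unit, count in self._items.values())

def test_cart_totals_behavior():
    cart = Cart()
    cart.add("apple", 0.50, 3)
    cart.add("milk", 2.00)
    # Asserts only on observable behavior, never on _items' shape.
    assert cart.total() == 3.50
```

Swap `_items` for a list, a database row, whatever: the test still passes if the behavior holds, which is exactly the decoupling being argued for.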
Somewhere along the way we lost the meaning of "unit" and it became "class/methods", when it was originally supposed to be more at a module level.
I use unit tests primarily for a) verifying my edge cases (like the old joke goes, "a tester walks into a bar and orders -1, 0, jkhkhkhjkh...") and b) preventing regressions.
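Those edge cases lend themselves to a simple table-driven sweep. A sketch with a hypothetical input validator (the bounds and names are assumptions, not from any real codebase):

```python
# Edge-case sweep for a toy order-quantity validator: the "-1, 0, and
# garbage string" cases from the joke, plus a sane upper bound.

def is_valid_quantity(raw) -> bool:
    """Toy validator standing in for real input-handling code."""
    try:
        n = int(raw)
    except (TypeError, ValueError):
        return False           # "jkhkhkhjkh" and None land here
    return 0 < n <= 10_000     # reject -1, 0, and absurd quantities

CASES = [
    ("-1", False),
    ("0", False),
    ("1", True),
    ("10000", True),
    ("10001", False),
    ("jkhkhkhjkh", False),
    (None, False),
]

def test_quantity_edge_cases():
    for raw, expected in CASES:
        assert is_valid_quantity(raw) is expected, f"failed on {raw!r}"
```

Adding a newly discovered edge case is one line in the table, which keeps the regression-prevention half cheap too.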
> This is probably a bad habit, but I've been writing tests so that I don't have to actually navigate through my app, put it in the correct state, and push a button to see if the feature works or not. Writing the test is just faster.
This is a clever way of solving this problem, and I've run into it before as well. In applications where state is very deep (like a game), verifying certain operations on that state is quite tricky. IMO, I'd probably call these integration tests, not unit tests.