
How many times has a unittest saved your day? - GrumpyNl
How much time is spent on writing unit tests compared to receiving the bug and fixing it? All I hear around me is that the screen must be green, so all unit tests have passed. When a unit test fails and the dot is red, 99% of the time they have to fix the unit test and not the actual code. I understand using unit tests to work out your problem, but most of the time it's the unit test that fails and not the code.
======
AnimalMuppet
Several. I've found race conditions with unit tests. I found a call to a pure
virtual function in the base class's destructor. I've found "hey, that code I
just wrote didn't do what I thought it did" not _frequently_ , but still more
times than I care to admit. I've also found "whoops, I didn't think about how
my change would affect that code over there" several times.

That's just _my_ code. Then there's finding out what my _co-worker's_ changes
did to my code...

------
externalreality
I agree: most of the time I am just fixing silly issues, like a mock that is
now broken because I added a method or a field, or some beefy test setup code
that is now broken because of some otherwise innocuous change. I do believe
high-level functional/acceptance tests are important, however; that is, tests
that check whether the various programs that make up the system actually do
what they are meant to do. For example, a CLI test for a program that
interfaces with the user from the CLI. If a program is writing to storage,
check that it is actually writing to the storage properly. All the little
mocks and stubs and so forth seem to me to form a false sense of security.

Those who are in favor of unit tests have one big weapon on their side of the
argument, and that is simply the word "test": "how can a test be bad, right?"

------
mping
If you are specifically asking about unit tests and not testing in general,
they don't save my day so much as give me confidence that my code is working
as it is supposed to, especially at the edge cases. I'm not a fan of mocking a
lot of stuff because it takes time to maintain, so in general terms I prefer
testing higher in the pyramid (e.g. integration testing).

As for saving the day, I find that they are very helpful with refactoring,
especially in dynamic languages. When the business rules change, the first
thing I do is rewrite the test, then just make it pass.

------
flukus
So you made a change and a test broke, what about all the ones that didn't
break? They still represent a lot of scenarios you probably wouldn't have
tested manually.

I would also guess that your tests keep breaking because you're not following
the single assert principle. A unit test should generally (with plenty of
exceptions) not break unless the specific behavior (the unit) it is testing
for changes. A common anti-pattern is to treat tests as scenarios: set up the
scenario, run it, and check everything went as expected for that scenario.
Instead you want to work out the behavior you're testing; "foobar gets written
to the log" is a test, and the setup should be just enough to produce that
behavior.
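A minimal sketch of that style in Python's `unittest` (the function and logger names here are made up for illustration): one behavior, one assertion, just enough setup to trigger it.

```python
import logging
import unittest

# Hypothetical unit under test: a function that logs "foobar" when it runs.
def process(record, log=logging.getLogger("app")):
    log.info("foobar")
    return record

class TestProcessLogging(unittest.TestCase):
    def test_foobar_is_written_to_the_log(self):
        # Just enough setup to produce the one behavior under test.
        with self.assertLogs("app", level="INFO") as captured:
            process({"id": 1})
        # Single assertion: the behavior itself, nothing else.
        self.assertIn("foobar", captured.output[0])
```

If `process` later grows unrelated features, this test keeps passing as long as the logging behavior it names is intact.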

------
photonios
Our code base is primarily Python. We do use type checking, but we risk having
problems that you simply don't have in statically typed languages.

Unit tests save the day every time I need to refactor large parts of the code
base or make a change that affects the entire code base. They don't
necessarily find problems I didn't think of, but they make this kind of large
refactoring a lot easier.

Recently we replaced the ORM. I replaced the code that handles setting things
up and wrote the code to configure the new ORM. Then I just ran the tests.
Almost all of them failed because the imports and syntax were slightly
different. So I took it test by test and kept fixing things until the tests
passed. When all the tests were passing, I was reasonably sure everything was
still working as expected.

Sure, we also face the problems you're facing. Sometimes the tests just fail
for unrelated reasons. Or somebody wrote a test that is a bit flaky and fails
from time to time. Usually we just consider this the cost of doing business,
so to speak.

When tests fail every time the code changes, that usually indicates a problem
with how you write tests. Stick to the single assert principle, make sure
tests are simple, don't require a lot of setup, etc. Complex tests are
usually a sign of complex code.

In dynamic languages, mocks are very popular, but I've found them to be a
frequent source of having to change things. They are often a sign that the
code is poorly split and untestable. Instead, write smaller, isolated modules
and possibly use dependency injection, making the "mocking" part of the code.
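To illustrate (with made-up class names, not anyone's real code base): instead of patching a module-level database client with a mock, pass the dependency in, and the test supplies a tiny in-memory stand-in that needs no mock library at all.

```python
# The service takes its store as a constructor argument (dependency injection),
# so tests never have to patch anything at the module level.
class UserService:
    def __init__(self, store):
        self.store = store  # injected: a real database wrapper, or a fake

    def rename(self, user_id, new_name):
        user = self.store.get(user_id)
        user["name"] = new_name
        self.store.save(user)
        return user

# In tests, a plain dict-backed object plays the role of the real store.
class InMemoryStore:
    def __init__(self, rows):
        self.rows = rows

    def get(self, user_id):
        return dict(self.rows[user_id])

    def save(self, user):
        self.rows[user["id"]] = user

def test_rename_updates_the_store():
    store = InMemoryStore({1: {"id": 1, "name": "old"}})
    UserService(store).rename(1, "new")
    assert store.rows[1]["name"] == "new"
```

The fake only has to honor the same small interface (`get`/`save`), so adding an unrelated method to the real store doesn't break every test that used a mock of it.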

~~~
mbrock
Basic idea: to verify your robot in a quick and harmless way, disconnect its
actuators, feed fake data to its sensors, observe its stream of decisions, and
verify that it behaves according to some rules.

If your robot has different parts with their own rules of behavior, then you
should be able to detach them to test them on their own, or in a much simpler
test harness.
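A toy version of the robot analogy in Python (all names invented for the sketch): the decision rule is a pure function, and the sensor and actuator are injected, so a test can feed fake readings and record the stream of decisions.

```python
# Pure decision rule, trivially testable on its own.
def decide(sensor_reading):
    # Back off when too close to an obstacle (threshold is arbitrary here).
    return "reverse" if sensor_reading < 10 else "forward"

class Robot:
    def __init__(self, sensor, actuator):
        self.sensor = sensor      # injected: real hardware, or fake data
        self.actuator = actuator  # injected: real motors, or a recorder

    def step(self):
        self.actuator(decide(self.sensor()))

def test_robot_reverses_near_obstacles():
    decisions = []
    fake_sensor = lambda: 5            # fake sensor data, no hardware
    Robot(fake_sensor, decisions.append).step()  # actuator just records
    assert decisions == ["reverse"]
```

When the parts depend on each other in tangled ways, this kind of detachment stops being cheap, which is exactly the complaint described above.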

I think the complaints about unit tests often amount to “we’re taking the
robot apart into too many parts that actually depend on each other in complex
ways, so testing them independently is tedious and gives us little real
information.”

Another kind of complaint is “our robot has a huge amount of functions that
all interact in very complicated ways and we haven’t specified any overarching
principles so our tests are just a lot of arbitrary scripts that we have to
change constantly to accommodate the robot’s ever-changing repertoire of
fascinating behaviors.”

Or “there is no exhaustive list of our robot’s sensors and actuators, and many
of them are complex third party products which we integrate without any
adapters, so it’s nearly impossible to take the thing apart and provide a
realistic simulated environment.”

I think the underlying complaint is something like “we were told that there
was an easy method to make our software correct, and it’s disappointing that
it doesn’t work well without considerable investment into domain modeling,
architectural work, mathematical analysis, etc.”

------
cimmanom
Not necessarily unit tests, but automated tests in general? All the time.
Setting aside linting failures (and frankly I’m not sure how those even get
pushed), I’d say we have a new test failure on our integration branch in CI an
average of once a day.

------
bjourne
You really learn to appreciate unit tests when writing algorithms based on
heuristics. One heuristic can easily break another, so unit tests become
critical to avoid introducing regressions. However, BAD unit tests can
definitely be a drag. But the remedy is not to stop writing unit tests, it is
to write better tests! :)

------
sethammons
More times than I can count.

------
tmaly
unit tests have saved me a few times.

they really help with legacy code where you may not be the original creator.

