I agree with the PR comments, but after believing in unit tests for years I'm slowly drifting into the "waste of time" camp.
I'm convinced that unit tests don't usually find bugs. IMO, most bugs are edge cases that were an oversight in the design. If the dev didn't handle the case in code they're not going to know to test for it. Fuzzing is a much better approach.
At my current position I have the opportunity to work with two large code-bases, built by different teams in different offices. One of the projects has ~70% code coverage, the other doesn't have a single test. Exposure to both of these systems really bent my opinion on unit tests and it has not recovered.
The project with high code coverage is basically shit and has so many bugs that we regularly cull anything marked less than "medium" severity as "not worth fixing". This project was written by a team that loves "patterns", so you can find all sorts of gems like Aspect Based Programming, CQRS, N-tier, etc. well mixed into a featureless grey Java EE goo. We get so many bug reports that it's someone's job to go through them.
The other project with no tests is a dream to work on. Not a single file over a few hundred lines, everything linted and well documented. Almost no methods that don't fit on the screen, no recursion. No bullshit "layering" or "patterns". I can't remember the last time we had a bug report, since our monitoring picks up any exception client- and server-side. Every bug I've worked on was identified by our monitoring and fixed before anyone noticed.
What's the difference between the teams that developed these vastly different applications? I've worked with both teams for a while, and honestly, the engineers that wrote no tests are of far higher caliber. Use Linux at home, programming for as long as they can remember, hacking assembler on the weekends and 3D printing random useless objects they could easily buy. The other team went to school to program, and they do it because it pays the bills. Most of the bad programmers know what they're doing is wrong, but they do it anyway so they can pad their resume with more crap and tell the boss how great they are for adding machine learning to the help screen that nobody has ever opened.
If your developers are great, tests will hardly ever fail and be fairly useless; if they're terrible, tests won't save you. Maybe there's some middle ground if you have a mixed team or a bunch of mediocre devs?
Tests help very much against regressions, and when you have a mix of people touching the code.
Anecdotal: I once helped out a team that was writing a Maven plugin for doing static tests on some JS code during build. There was already a test suite with a bunch of test code. As my stuff was fairly complicated, and I have a habit of writing unit tests for such things, I added a bunch. Fast forward a year and a half: I was greeted with a mail saying there was a bug in it. I had to fight the better part of a day to nail it down: first because I was no longer familiar with the code, and second because a bunch of stuff had been added in the meantime. I fixed it and thought it was a good idea to add a test, as it was a nasty corner case. I headed for the natural place where the test would go and found -- exactly the test I was going to write, nicely commented out. A quick check with Git revealed that I had added this test initially, and it was commented out when the new feature causing the bug was added. Firing up git blame was next... This is why I am fond of having tests: you get called out the moment you break something, at least if your test suite is worth its salt.
"git annotate" does the same thing, but "git blame" can be more fun / dramatic if you're looking into the cause of a problem.
Interestingly, "svn annotate" had 2 aliases: "svn blame" and "svn praise". But git didn't add a "praise" alias, just "blame". I actually almost submitted a PR to add "git praise" one time.
> I'm convinced that unit tests don't usually find bugs.
They don't; they test whether the API contract the developer had in mind is still valid.
> IMO, most bugs are edge cases that were an oversight in the design. If the dev didn't handle the case in code they're not going to know to test for it.
You don't write tests to find bugs (in 98% of cases), but you can write tests for bugs found.
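A minimal sketch of what a "test for a bug found" looks like (Python; the normalize_phone helper and the bug number are hypothetical, just for illustration):

    import unittest

    def normalize_phone(raw):
        """Strip punctuation and whitespace from a phone number."""
        return "".join(ch for ch in raw if ch.isdigit() or ch == "+")

    class NormalizePhoneRegressionTest(unittest.TestCase):
        def test_leading_plus_is_preserved(self):
            # Regression test for (hypothetical) bug #1234: an earlier version
            # dropped the leading "+" on international numbers.
            self.assertEqual(normalize_phone("+49 (0) 30 1234"), "+490301234")

    if __name__ == "__main__":
        unittest.main()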
> Fuzzing is a much better approach.
If you're writing an I/O-intensive thing, such as a JSON parser, then yes. For the 80% that is CRUD, probably not.
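For illustration, a bare-bones fuzz loop in Python against the stdlib JSON parser. This is only a sketch of the idea, not a substitute for a real fuzzer:

    import json
    import random

    # Throw random byte strings at the parser and make sure it either parses or
    # fails with the documented exception, never anything unexpected.
    # (json.loads accepts bytes on Python 3.6+.)
    def fuzz_json(iterations=10_000, max_len=64, seed=0):
        rng = random.Random(seed)
        for _ in range(iterations):
            blob = bytes(rng.randrange(256) for _ in range(rng.randrange(max_len)))
            try:
                json.loads(blob)
            except ValueError:
                pass  # JSONDecodeError/UnicodeDecodeError: expected for garbage input

    if __name__ == "__main__":
        fuzz_json()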
> The project with high code coverage is basically shit and has so many bugs that we regularly cull anything marked less than "medium" severity as "not worth fixing". This project was written by a team that loves "patterns", so you can find all sorts of gems like Aspect Based Programming, CQRS, N-tier, etc. well mixed into a featureless grey Java EE goo. We get so many bug reports that it's someone's job to go through them.
You are blaming tests for bad design choices. With the patterns you raised, unit tests only get you so far; integration tests are what help you prevent bad deployments.
> The other project with no tests is a dream to work on. Not a single file over a few hundred lines, everything linted and well documented. Almost no methods that don't fit on the screen, no recursion. No bullshit "layering" or "patterns". I can't remember the last time we had a bug report, since our monitoring picks up any exception client- and server-side. Every bug I've worked on was identified by our monitoring and fixed before anyone noticed.
So how many exceptions were raised due to bad deploys? Code review only gets you so far.
> If your developers are great, tests will hardly ever fail and be fairly useless; if they're terrible, tests won't save you.
Failing tests don't have anything to do with devs being "great" or not. Developers must have the capability of quickly testing the system without manual work, in order to be more effective and ship new features faster. If the tests are one-sided (only unit tests, only integration tests), then this will get you only so far, but it still gets you that far.
Don't abandon good development practices only because you saw a terrible Java EE application.
Tests are a pattern. And patterns are the bread and butter of the mediocre. That's not to say that patterns or tests are bad, but high calibre guys know when to use which tool. As a tool, unit testing is almost useless.
Low calibre guys don't have any feel for what they're doing. They just use the tools and patterns they were taught to use. All the time. This goes from engineers to managers to other disciplines.
I've seen people on a factory floor treating my test instructions for a device I built as some kind of gospel. I had a new girl, who had no idea I designed said gadget, telling me off for not doing the testing exactly the way the instruction manual I wrote says.
The same thing happened with patterns and unit tests. You have hordes of stupid people following the mantra to the letter because they don't actually understand the intent. Any workplace where testing is part of their 'culture' signals to me that it's full of mediocre devs who were whipped into some kind of productivity by overbearing use of patterns. It's a good way to get work done with mediocre devs, but good devs are just stifled by it and avoid places that force it.
I find unit tests to be _most_ useful in very particular cases: when a given function I'm writing has a set of inputs/outputs that I'm going for. Things like parsing a URL into its components, or writing a class that must match a particular interface. I need to make sure the function works anyway, so I can either test it manually, or I can take a few extra moments and commit those test cases to version control.
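For the URL case, a sketch of what I mean (Python, with the stdlib's urlsplit standing in for the function under test):

    import unittest
    from urllib.parse import urlsplit

    # The input/output pairs I'd check by hand anyway, committed as a test.
    CASES = [
        ("https://example.com/a/b?x=1", ("https", "example.com", "/a/b", "x=1")),
        ("ftp://user@host:21/file", ("ftp", "user@host:21", "/file", "")),
        ("/relative/path", ("", "", "/relative/path", "")),
    ]

    class UrlParsingTest(unittest.TestCase):
        def test_known_cases(self):
            for url, expected in CASES:
                parts = urlsplit(url)
                self.assertEqual(
                    (parts.scheme, parts.netloc, parts.path, parts.query), expected)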
For more complex items, I'm much more interested in higher level black-box integration tests.
That's a great example of why unit testing is mostly useless.
Having an expected input/output set when writing something like a parser is standard practice. Turning that set into unit tests is worthless for a few reasons.
1: You will design your code to make them all pass. A unit test is useless if it always passes. When your test framework comes back saying x/x (100%) of tests have passed, you are receiving ZERO information as to the validity of your system.
2: You wrote the unit tests with the same assumptions, biases, and limitations as the code they're testing. If you have a fundamental misunderstanding of what the system should do, it will manifest in both the code and the test. This is true of most unit tests - they are tautological (see the sketch below).
3: While doing all of the above and achieving almost zero additional utility, you had to fragment your logic in a way that easily yields itself to unit testing. More than likely, that's not the most readable or understandable way said code could have been written. You sacrificed clarity for unit testability. Metrics like test code coverage unintentionally steer developers to writing unreadable tangled code.
The only use case for unit testing here would be if this parser was a component that gets frequently updated by many people or a component that gets implemented anew for different configurations. But at this point I'm just talking about regression testing, and there are many ways to do that other than unit testing.
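To make the "tautological" point concrete, a contrived Python sketch (the slugify function is hypothetical):

    def slugify(title):
        return title.lower().replace(" ", "-")

    # Tautological: the expected value is computed with the same logic as the
    # code under test, so the test passes no matter what that logic does.
    def test_slugify_tautological():
        title = "Hello World"
        assert slugify(title) == title.lower().replace(" ", "-")

    # Independent: the expectation is written down by hand, so a misunderstanding
    # of the rules can actually surface here.
    def test_slugify_literal():
        assert slugify("Hello World") == "hello-world"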
Your complaint about always passing only makes sense if you ignore negative tests. Good tests will also test that bad/incorrect input results in predictable behaviour - e.g. invalid input into a parser doesn't parse.
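For example, in Python with pytest, a negative test against the stdlib JSON parser (standing in for "a parser") looks like:

    import json
    import pytest

    def test_valid_input_parses():
        assert json.loads('{"id": 1}') == {"id": 1}

    # Negative test: invalid input must fail in the documented, predictable way,
    # not silently return something or blow up with an unrelated error.
    def test_invalid_input_is_rejected():
        with pytest.raises(json.JSONDecodeError):
            json.loads('{"id": }')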
> While doing all of the above and achieving almost zero additional utility, you had to fragment your logic in a way that easily yields itself to unit testing
Another way to consider it is that unit testing forces you to structure your code to be more composable, which is a win. The amount of intrusive access/changes you need to expose to test code is language-dependent.
> The only use case for unit testing here would be if this parser was a component that gets frequently updated by many people or a component that gets implemented anew for different configurations. But at this point I'm just talking about regression testing, and there are many ways to do that other than unit testing.
And yet successful large-scale projects like LLVM use unit-testing. Not exclusively but it's a part of their overall strategy to ensure code quality. Sure, for very small-scale projects with a few constant team members it can be overkill. Those aren't particularly interesting scenarios because you're facing fewer organizational challenges. The fact of the matter is that despite all the hand-wringing about how it's not useful, unit tests are the inevitable addition to any codebase that has a non-trivial number of contributors, changes and/or lines of code.
The applicability of unit testing to your particular cases varies greatly across languages & runtimes.
For URL parsing, some runtimes/frameworks have that already implemented. E.g. in .NET the Uri class allows getting scheme/host/port/path/segments, and there’s a separate ParseQueryString utility method to parse the query part of the URI.
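For comparison, the Python stdlib analogue of that pair (urlsplit plus parse_qs) would be something like:

    from urllib.parse import urlsplit, parse_qs

    # Python's stdlib analogue of the .NET Uri/ParseQueryString pair mentioned above.
    parts = urlsplit("https://example.com/search?q=unit+tests&page=2")
    print(parts.scheme, parts.hostname, parts.port, parts.path)  # https example.com None /search
    print(parse_qs(parts.query))  # {'q': ['unit tests'], 'page': ['2']}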
To ensure a class conforms to an interface, the majority of strongly-typed OO languages have interfaces in their type systems. If you use that but fail to implement an interface or some part of it, your code just won't compile.
Indeed. Tests allow new members of the team to confidently make changes. I've seen codebases that had near-zero tests and were also a total mess, with one change somewhere breaking a hundred things 30 levels down the stack. We'd find the issue only in production, along with an enraged customer.
Tests are not a replacement for good developers, they are just a tool for contract validation and a regression safety net.
> Developers must have the capability of quickly testing the system without manual work
Running unit tests is hardly quick, especially if you have to compile them. End-to-end tests are even worse in this regard.
> They don't; they test whether the API contract the developer had in mind is still valid.
If you're always breaking the API, then that's a sign that the API is too complex and poorly designed. The API should be the closest thing you have to being set in stone. Linus Torvalds has many rants on breaking the Linux kernel's API (which, also, has no real unit tests).
It's also really easy to tell if you're breaking the API. Are you touching that API code path at this time? Then yes, you're probably breaking the API. Unless there was a preexisting bug that you are fixing (in which case the unit test failed to catch it), you are, by definition, breaking the API, assuming your API truly is doing one logical, self-contained thing at a time, as any good API should.
edit: As an aside, I'd like to point out that POSIX C/C11/jQuery/etc. are littered with deprecated API calls, such as sprintf(). This is almost always the correct thing to do. Deprecate broken interfaces and create new interfaces that fix the issues. Attempting to fix broken APIs by introducing optional "modes" or parameters to an interface, or altering the response is certain to cause bugs in the consumer of the interface.
> Don't abandon good development practices
Unit tests are a tool. There are cases where they make sense, where they are trivial to implement and benefit you greatly at the same time. Then there are cases where implementing a unit test will take an entire day with marginal benefit and the code will be entirely rewritten next year anyway (literally all web development everywhere). It doesn't make sense to spend man-months and man-years writing and maintaining unit tests when the app will get tossed out and rewritten in LatestFad Framework almost as soon as you write the test.
I think it probably depends on the complexity of the code as well. I can't count the number of times my unit tests on projects I alone maintain have saved my ass from releasing some unwanted bug into production due to a change I made.
Especially if the codebase evolves as new end-user requirements are discovered over the lifetime of the project, unit tests on various corner cases can be a lifesaver.
I'm not a bad dev, honestly. The complexity of the code I have to maintain just overwhelms my working memory. And yes, without any silly patterning. Sometimes domain requirements alone are sufficiently complex to confound a person without additional safeguards.
Another level of complexity comes from functionality that is only rational to include from third-party sources. The third-party sources must be updated frequently (because the domain is complex and later versions are usually of subjectively higher quality). The unit tests are about the only thing that can tell me in a timely manner if there was a breaking change somewhere.
Yes, there is smoke testing later on, but I much prefer dealing with a few unit tests telling me they don't work rather than dealing with all the ruckus from bugs caught closer to the end user.
On projects I alone maintain I prefer to only unit test the primary API. That usually gives me the information I need to triangulate issues and I move too slowly otherwise.
My tests are generally against the module interface as well. Unit tests don't need to be atomic, as long as they run sufficiently fast. Sometimes there is no 'correct' output; I just need to pinpoint whether some change affected the output and whether that's a problem or not.
Dogmatic unit testing is silly. Testing should focus on pinning down the critical end-user constraints and covering as much as possible of the functionality visible to the end user. So I would not necessarily focus on testing individual methods unless they are obviously tricky.
In an organization where everybody can code anywhere I would enforce per-method testing, though. Sometimes a succinct and lucid unit test is the best possible documentation.
Thanks. I also have a history with tests and I continue to struggle to find the right balance not just between coverage, but also unit vs integration (and within unit, between behavioural and implementation). I think this uncertainty based on experience is in a whole other class than "I can't get my employees to write tests."
Two quick points.
1 - I've added fuzz testing to my arsenal and find it a good value, especially if you're blasting through a new project.
2 - Good monitoring (logging, metrics) trumps tests when it comes to quality, both for being _much_ easier to do and in terms of absolute worth.
That said, testing is primarily a design tool (aka, identifying tight coupling). The more you do it, the more you learn from it, the less value you get from it because you inherently start to program better (especially true if you're working with static typed languages, where coupling tends to be more problematic). There are secondary benefits though: regression, documentation, onboarding.
I think one key difference between what you describe and my own situation is that my small team manages a lot of different projects. Maybe their total size is the same as your two, but our ability to confidently alter a project we haven't looked at in months is largely due to our test coverage. I agree that, then and there, I get less benefit from tests for the project that I'm currently focused on.
The nut I haven't cracked yet is real integration tests between systems. This seems like something everyone is struggling with, and it's both increasingly critical (as we move to more and more services) and increasingly difficult. My "solution" to this hasn't been testing, but rather: use typed messaging (protocol buffers) and use Elixir (the process isolation lets you have a lot of the same wins as SOA without the drawbacks, but it isn't always a solution, obviously).
>That said, testing is primarily a design tool (aka, identifying tight coupling). The more you do it, the more you learn from it, the less value you get from it because you inherently start to program better
Unit tests "identify" tight coupling because they are themselves a form of tight coupling.
Huh? My interpretation is, it's harder to write shitty code (e.g. hard-coding the database server IP) if you write unittests (where you'll need to abstract the database interface to be able to mock it). In this manner, unittests promote clean, separated interfaces and work against tight coupling.
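A small sketch of that in Python (hypothetical greeting/store names, stdlib unittest.mock):

    from unittest.mock import Mock

    # The code under test takes the store as a dependency instead of hard-coding
    # a connection, which is exactly the decoupling the unittest forces on you.
    def greeting(store, user_id):
        user = store.get_user(user_id)
        return f"Hello, {user['name']}!"

    def test_greeting_uses_store():
        store = Mock()
        store.get_user.return_value = {"name": "Ada"}
        assert greeting(store, 42) == "Hello, Ada!"
        store.get_user.assert_called_once_with(42)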
Tests that couple to database connection code and mock or stub it out are more tightly coupled to the code than tests that just couple to, say, REST end points.
I'm not denying that the pain of having one form of tight coupling interact with another can be used to "guide" the design of the code. It can be. I've done it.
I'm simply pointing out that you're using tight couplings (between unit test and code) to highlight other tight couplings (between code and code).
I use my eyes to detect tight couplings in code that deserve a refactor because that's cheaper than writing 10,000 lines of unit test code to "guide design". Each to their own, though. I know my view isn't a popular one:
https://news.ycombinator.com/item?id=16374624
I've never liked unittests. They make it harder to refactor, which you have to do because all good designs come from iterating many times, and the types of mistakes unittests catch tend to be easy to spot (when the error occurs) and fix anyway. In general I feel like there's a disturbing tendency of programmers to avoid reading code and only think in terms of inputs/outputs. This is often a nice abstraction to make, but not always. See all the comments saying something like "unittests let someone new contribute easily"; I disagree with doing this, and I believe that before you start making any changes you should know where and how a function is being used, instead of relying on unittests. You're saying 'this code may not work, let's write some more code to check it', but what if the test code does not work? The idea is rotten to the core.
I think it can be related in a certain type of poor team that cargo cults strict rules and patterns and mechanistically writes tests for every little getter and setter of all the useless layers of useless glue classes whose real purpose is to mask their lack of understanding.
> If the dev didn't handle the case in code they're not going to know to test for it. Fuzzing is a much better approach.
I routinely call myself a proponent of BDT (Bug Driven Tests) over TDD for much the same reason. That said, tests are HUGELY beneficial for guarding against regressions and ensuring effective refactors. Anecdotal, but on my current project tests helped us:
* Catch (new) bugs/changes in behavior when upgrading libraries.
* Rewrite core components from the ground up with a high degree of confidence in backwards compatibility.
* Refactor our Object model completely for better async programming patterns.
I don't think tests are particularly good at guarding against future bugs in new features, which your comment about fuzzing hits on squarely.
But I DO think tests are good at catching regressions and improving confidence in the effectiveness of fundamental changes in design or underlying utilities version to version.
Unit tests are there to make the code less fragile, so that it can be modified with confidence. But if you need tests to make your code robust, it's likely a mess underneath; probably better to spend the time refactoring.
Personally, I say write tests when it makes development quicker or serves as a good example / spec.
I think unit tests will die one day, and that day is probably not too far away.
These days I follow three "good practice" rules, all of which are violated when you follow common unit testing practice:
* Only put tests on the edge of a project. If you feel like you need lower level test than that then either a) you don't or b) architecturally, you need to break that code off into a different project.
* Test as realistically as possible. That means if your app uses a database, test against the real database - same version as in prod, data that's as close to prod as possible. Where speed conflicts with realism, bias toward realism.
* Isolate and mock everything in your tests that has caused test indeterminism - e.g. date/time, calls to external servers that go down, etc.
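For the date/time case, one way to do that isolation (a sketch; the names are made up) is to inject the clock rather than reach for a mocking library:

    from datetime import datetime, timezone

    # Inject the clock instead of calling datetime.now() inside the function,
    # so tests control the one nondeterministic input.
    def is_expired(token_expiry, now=None):
        now = now or datetime.now(timezone.utc)
        return now >= token_expiry

    def test_is_expired_is_deterministic():
        expiry = datetime(2020, 1, 1, tzinfo=timezone.utc)
        assert is_expired(expiry, now=datetime(2020, 1, 2, tzinfo=timezone.utc))
        assert not is_expired(expiry, now=datetime(2019, 12, 31, tzinfo=timezone.utc))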
I mostly agree with your point, but I think this is too much. Projects should be made up of decoupled modules (units ?) with well-defined interfaces. These should be stable and tested, and mostly without mocking required.
The larger your project the more important this is.
>Projects should be made up of decoupled modules (units ?) with well-defined interfaces.
That goes without saying.
Nonetheless, if it's a very decoupled module with a very well defined, relatively unchanging interface which you could surround with tests with the bare minimum of mocking - to me that says that it probably makes sense to put it in a project by itself.
>The larger your project the more important this is.
The larger a project gets the more I want to break it up.
To be clear, while I'm a big fan of BDD the practice, I strongly dislike cucumber and other gherkin tools. I consider a large part of the relative unpopularity of BDD to be attributable to their problems.
I think the downvotes are largely dogma driven - people are quite attached to unit testing, especially for the situations where it worked for them and largely see them in opposition to "no testing" not a "different kind of test".
> I'm convinced that unit tests don't usually find bugs.
At work, I've rejected many merge requests with the comment "this unit test is equivalent to verifying a checksum of the method source". It's so frustrating that people still think it's necessary to write things like this (a literal, real example):
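Something along these lines (a hypothetical reconstruction in Python, since the actual snippet isn't reproduced here):

    from types import SimpleNamespace

    def full_name(user):
        return user.first_name + " " + user.last_name

    # A "checksum of the method source" test: the assertion restates the
    # implementation, so it can only fail when the source text changes --
    # it encodes no independent expectation about behaviour.
    def test_full_name():
        user = SimpleNamespace(first_name="Ada", last_name="Lovelace")
        assert full_name(user) == user.first_name + " " + user.last_name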
Unit tests aren't supposed to find all bugs. Moreover, if you're not enforcing that the tests pass before they're merged/pushed into a shared branch, they are beyond useless, because they age and, more importantly, the pain of a broken test is multiplied as it escapes the developer making the change and ends up blocking the entire team.
To understand how unit tests are useful, look at how code is developed. Typically there's a write/compile/run cycle that you iterate on as you write code (or you do it in that order if you're a coding god). Then you test it with some sample inputs to validate that it works correctly. The "test it with some sample inputs" is simply what a unit test is. This is frequently a simpler environment to do so as you can control the inputs in a more fine-grained manner than you might otherwise. If you submit this, then at the very least reviewers can have more confidence in the quality of your code, or perhaps spot corner cases that may have been missed in your testing, as devs in my experience are horrible at communicating exactly what was tested in a change (moreover, those descriptions tend to be high-level and leave implicit information out, whereas unit tests do not). Once you get it in, pre-submit validation enforces that someone else can't break the assumptions you've made. This is a double-edged sword because sometimes you have to rewrite sections of code, which can invalidate a lot of unit tests. However, the true value-add of unit tests is much longer-term. When you fix a bug, you write a regression test so that the bug won't resurface as you keep developing. Importantly, you provide a comment that links to the entry in whatever bug system you're using, which can supply further context about the bug.
Unit tests aren't free: they can be over-engineered to the point of basically being another parallel code base to maintain, or they can be over-specified and duplicated so that minor alterations cause a cascading sequence of failures. However, for complex projects with lots of moving parts they can be used to obtain the super-useful result of always being able to guarantee a minimum level of quality before you hand off to a manual QA process. Moreover, the unit tests can serve a very useful role in on-boarding less experienced engineers more quickly (even quality coders take time to ramp up) or in handing off the software to less motivated/inexperienced/lower-quality contractors once the software has transitioned into maintenance mode. Additionally, code reviews can be hit or miss with respect to catching issues, so automated tests ensure that developers can focus on higher-level discussions rather than figuring out if the code works.
Sure, unit tests can go off the rails, with mocks/stubs everywhere to the point of insanity. I prefer to keep test code minimal and only use mocks/stubs when absolutely necessary because the test environment has different needs (e.g. not sending e-mails, "shutdown" meaning something else, etc.). There's no free lunch, but I have yet to see a decent combination of well-thought-out automation & unit tests fail to keep quality up over time (the pre-submit automation part is a 100% prerequisite for unit tests to be useful).
"This is frequently a simpler environment to do so as you can control the inputs in a more fine-grained manner than you might otherwise"
One of the things that really sold me on unit tests for Django development was realising that it was quicker to write a test than to open a shell, import what I was working on and run the code manually.
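Something like this (the URL name is hypothetical; Django's built-in test client does the rest):

    from django.test import TestCase
    from django.urls import reverse

    class ArticleListTest(TestCase):
        # Quicker than opening a shell, importing the view and poking at it by
        # hand, and it sticks around as a regression check afterwards.
        def test_list_page_renders(self):
            response = self.client.get(reverse("article-list"))  # hypothetical URL name
            self.assertEqual(response.status_code, 200)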