I’ve studied and practiced TDD for years. I’ve even written articles to help beginners.
I now only practice it sparingly. Sometimes it makes more sense to write the test first and other times I already know exactly what I want and I know how to make it testable.
An even bigger issue I have is the huge reliance on mocking, which leads to completely brittle tests and code that can withstand no change without all hell breaking loose.
Lately I spend more time on acceptance tests, which I often do write first. I use unit testing for non-trivial logic.
> I already know exactly what I want and I know how to make it testable. An even bigger issue I have is the huge reliance on mocking which leads to completely brittle tests and code. ... use unit testing for non-trivial logic.
Couldn't have said it better myself. If you're unsure about the best form of your API, then TDD can be a hindrance. Tests are important, but TDD can be an anti-pattern.
> An even bigger issue I have is the huge reliance on mocking which leads to completely brittle tests and code that can withstand no change without all hell breaking loose.
In my opinion, unit tests should only cover code that doesn't have any side effects or interact with external dependencies. For example, suppose you have a method that handles an HTTP request, extracts information from that request, creates a database query, queries the database, processes the results from the DB, generates an HTTP response, and then sends the response back to the client. You can easily factor out the parts that don't have external dependencies into their own methods. For example, you could pass an HTTP request as a parameter to the method that extracts information from it and then returns that information.
If you unit test those methods, then you would have coverage for everything other than the actual interaction between the HTTP server and client and the interaction between the server and database. Those cases can then be covered with integration tests where an environment is set up that runs test instances of the HTTP client, server, and database.
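To make that split concrete, here's a minimal sketch of the idea (the handler and helper names are hypothetical, not from any particular codebase): the pure helpers are plain input/output functions, and only the outer method touches the HTTP and database layers.

    # Minimal sketch; names are hypothetical. The pure helpers take plain values
    # and return plain values, so they can be unit tested without mocks; only
    # handle_request touches the request object and the database connection.

    def extract_user_id(request_params):
        # Pull the piece of data we need out of the already-parsed request.
        return int(request_params["user_id"])

    def build_orders_query(user_id):
        # Turn the extracted data into a parameterized query.
        return "SELECT id, total FROM orders WHERE user_id = %s", (user_id,)

    def summarize_orders(rows):
        # Process the raw DB rows into the response payload.
        return {"order_count": len(rows), "grand_total": sum(t for _, t in rows)}

    def handle_request(request_params, db_connection):
        # The only stateful part: wiring the pure steps to the real dependencies.
        user_id = extract_user_id(request_params)
        sql, args = build_orders_query(user_id)
        with db_connection.cursor() as cur:
            cur.execute(sql, args)
            rows = cur.fetchall()
        return summarize_orders(rows)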
I would like to believe this, so let's kick it about. I hypothesize that you just get to move state around: that having a pile of perfectly functional code means that you then have to setup/inject a lot of stateful stuff in order to actually test anything, and the complexity of the code/testcode system is conserved. Is there a good way to test this? Is the whole system easier or harder to understand if the stateful bits are isolated?
In the example I gave, the state is held in the outer method. That is, the connections with the HTTP client and the database.
The methods that process the data from the request, generate the database query, process the data returned from the database and generate the HTTP response can all be tested by setting up the function parameters and asserting on the return value. Setting up mocks to do things like assert that the function calls another method with a particular set of parameters is not necessary.
As for testing interactions with the HTTP client and database, one can set up an environment that runs an actual HTTP client and an actual database server (populating it with test data). Testing against actual implementations is always better than testing against mocks.
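For instance, unit tests for the pure helpers in the sketch above need nothing but inputs and assertions on return values; no mocks are involved (names are again purely illustrative):

    # Illustrative unit tests for the pure helpers sketched earlier: set up the
    # parameters, call the function, assert on the return value. No mocks needed.
    import unittest
    from orders_handler import extract_user_id, summarize_orders  # hypothetical module

    class PureHelperTests(unittest.TestCase):
        def test_extract_user_id(self):
            self.assertEqual(extract_user_id({"user_id": "42"}), 42)

        def test_summarize_orders(self):
            rows = [(1, 10.0), (2, 5.5)]
            self.assertEqual(summarize_orders(rows),
                             {"order_count": 2, "grand_total": 15.5})

    if __name__ == "__main__":
        unittest.main()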
It sounds like, "if you wish to make an apple pie from scratch, you must first invent the universe." But, hey, if it's good enough for the universe, who are we to try to improve on that?
It depends where your state is. If it's all in one place, set-able in one language, then it's probably no worse. But if your state is spread out across 30 different config files for 12 different versions of software, spread among multiple machines... good luck getting it setup for reproducible tests.
I'd say procedural more than functional: it drives you to avoid encapsulated state, and that's about it, unless of course you want to define C's level of abstraction as functional :)
It depends on where you work, but to me acceptance tests run in a browser (e.g. automated site testing with Puppeteer or Selenium), while integration tests, I would say, are tests that run an API call which is allowed to hit a data store.
At some places, the only acceptance testing that exists is user acceptance testing, where a user manually interacts with the application to sign off on updates after verifying that it meets the business requirements
You're right. The definition of the boundary between acceptance and integration tests is fluid and depends on the team context.
In our team, we define the terms less by tactics and more by intent.
It's our belief that you can generalize your idea on what acceptance testing means. The intent of an acceptance test is to verify that -- no matter how it's exercised -- the implementation performs the work and outputs the result I expected. Now, you can exercise this implementation with any interface. Of which, a browser is just one interface to your application layer. An HTTP API is yet another interface; a command-line utility is another; etc.
We actually have separate acceptance tests at the UI level and the API level. Sometimes even at the _feature_ level, in that it exercises a multi-step multi-result pathway through the application.
The engineer responsible for developing a unit of work may or may not implement API level acceptance tests (it's recommended); the reviewing engineer usually writes an automated API level acceptance test that must pass for the merge request to pass from code review -> QA/QC; then the QA/QC engineer implements and runs UI level acceptance tests that must pass to move from QA/QC -> Done.
We've found this process to be very effective in reducing our defect rate and improving the quality of our product.
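Purely as an illustration of what an API-level acceptance test can look like (the endpoint, port, and fields below are placeholders, not our actual product): it exercises a running instance of the application through one of its interfaces and asserts on the business outcome rather than on internals.

    # Hypothetical API-level acceptance test: drive a running test instance of
    # the application over HTTP and assert on the result the business expects.
    import requests

    def test_order_summary_acceptance():
        base_url = "http://localhost:8000"  # deployed test instance (placeholder)
        resp = requests.get(f"{base_url}/users/42/order-summary")
        assert resp.status_code == 200
        body = resp.json()
        # Assert on the outcome the feature promises, not on implementation
        # details; values assume the test environment is seeded with known data.
        assert body["order_count"] == 2
        assert body["grand_total"] == 15.5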
Unit test: tests a single function/class, uses fake or mocked data to test for expectations. Test coverage can be used to find spots missed in testing.
Integration test: test two or more things in isolation - interaction between two classes, functions, etc. OR more than one system (say micro-service) communicating with another. The rest of the systems are fake/mock or fetch/replay golden data.
Unit test should take milliseconds to execute.
I'm not actually sure what acceptance test would be?
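A toy example of the distinction (everything here is hypothetical): the unit test covers a single function with fake data and runs in milliseconds, while the integration test exercises two pieces together with the external dependency faked out.

    import unittest
    from unittest.mock import Mock

    def apply_discount(price, percent):
        # Pure pricing logic: the "unit" under test.
        return round(price * (1 - percent / 100), 2)

    class PriceService:
        # Combines the pricing logic with a client for another system.
        def __init__(self, rate_client):
            self.rate_client = rate_client

        def final_price(self, price):
            return apply_discount(price, self.rate_client.current_discount())

    class UnitTest(unittest.TestCase):
        def test_apply_discount(self):
            # Single function, fake data, milliseconds to run.
            self.assertEqual(apply_discount(100.0, 15), 85.0)

    class IntegrationTest(unittest.TestCase):
        def test_service_and_pricing_together(self):
            # Two things exercised together; the external rate service is faked.
            fake_client = Mock()
            fake_client.current_discount.return_value = 15
            self.assertEqual(PriceService(fake_client).final_price(100.0), 85.0)

    if __name__ == "__main__":
        unittest.main()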
> An even bigger issue I have is the huge reliance on mocking which leads to completely brittle tests and code that can withstand no change without all hell breaking lose.
In the .NET world, at least, I highly recommend the AutoFixture library. It's mainly meant for generating test data, but combined with your preferred mocking library it can auto generate mocks on the fly. For example, imagine that you're testing a class with 5 dependencies, but only one is relevant to the current test. AutoFixture will generate all the mock objects automatically for you, then let you capture the one you care about to set up its behavior for your test.
2. If acceptance tests are the highest level of tests, then focusing only on them may not be very efficient, mainly because they are expensive to run (see the test pyramid). If something doesn't work, then debugging is most likely necessary, which defeats the purpose a bit.
As always, the truth is somewhere in the middle. There are certain stages in the lifetime of a project where TDD can work exceptionally well, but it is by no means a panacea. Things can and will sometimes emerge organically, but most of the time a [lot of] thought has to be put into the architecture of the software we build. Domain knowledge and understanding the tradeoffs that are being made also help a lot.
As for Uncle Bob, the gut feeling I get whenever I hear him speak is “snake-oil salesman”. You need to understand that he is in the business of selling you something (his books, training, etc). And that’s okay - what’s not okay is flat-out attacking everyone that has a different opinion/disagrees with him. Who gave you the authority to dictate what’s right and wrong? It’s sad because he could make money from his [mostly good] ideas without attacking anyone.
Uncle Bob is a menace to TDD. Nothing more effectively or immediately turns people off than being told that they are, in fact, unprofessional and by extension unethical.
As for myself: I am not sure that TDD can really be learned effectively from a book alone. I read Kent Beck's book and my reaction was "well this is concisely-stated bullshit". Then I went to work for Pivotal Labs, and now I understand the fuss.
What I begrudge Bob Martin for is making TDD into religion.
Even this article, which wonders why TDD is not more practiced, has religious overtones ("Print out this list and place it at your desk" -- as though the reader cannot determine for him/herself what to do with the information). In contrast, "I found this table so useful that I cut it out and placed it at my desk" sets a whole different tone.
Absolutely. I mean, I'm pretty sure Linus isn't using TDD to develop the Linux kernel, or git.
My problem is that Uncle Bob's dogmatism is then cargo-culted by mostly 'weak' developers as a shield behind which to create an illusion of competence.
Just repeat outrageous things like everything must be TDD, we must have 100% code coverage, and avoid those tricky decisions called software engineering.
It could be that Linus, the people he chose to be his "captains", the design and code review process, and the sheer amount of Linux usage everywhere may be enough. The Linux kernel might be the most analyzed (by people and machines) piece of software there is.
But then a lot of business-logic software is out there just to support client A being able to make RPC calls to your service, and usually one or two engineers implement it, a few more look at it, it gets deployed, and it's expected to run flawlessly for a long time.
Then someone else modifies it later, and the original "developers" are gone (another project, company, retired, etc.)... Definitely not the case with Linux.
So my take is that such software needs TDD to hold its own, such that it could be developed further by complete strangers.
Does Pivotal have a list of resources/references to help new hires ramp up on TDD practices and principles that you can share?
My current plan is to finish reading and working through the examples in 'Growing Object Oriented Software, Guided by Tests' as that seems to be a very highly recommended book.
Great article! We want better code and we want to foster a culture that values quality work. We want to sleep at night knowing our codebase is under our control and won't wake us up for PagerDuty. I observe that testing first is a great way to express our team's commitment to quality.
There's often a lot of negativity in TDD threads like this. But please limit the FUD because some of us are actually trying to improve and would appreciate encouragement for ourselves and our peers who are on the fence about rigorous testing.
Also, I don't doubt there are better routes to improving our software quality than TDD, I'd just like to hear what those are for people.
I always wonder why there are so many "TDD or not TDD" discussions. TDD is based on some guidelines (like "Agile"); it works for some, it doesn't for others. It depends on many things: tech stack, tooling, team, company culture, etc... just to name a few.
Once you get the discipline to follow your version of TDD, then you can focus on more important stuff. You don't spend time on deciding if something has to be tested or not. Otherwise, having different devs with different levels of experience will lead to inconsistent coverage levels. Similar to following traffic/safety rules, you just do it without second-guessing.
I don't know if I do TDD, but I do write a lot of tests (unit and integration) while writing code. An easy way of mocking components is a huge advantage (e.g. Groovy mocking); after all, tests are code that needs maintenance.
There are multiple reasons to write tests. To me, the way to get the most out of them is TDD or something close to it. Adding tests later reduces their value, simply because using tests during development is more efficient (especially in complex systems) than adding them after the fact.
Personally, I hate writing unit tests, especially if they require complex mocking of deeply nested dependencies.
On the other hand, I love refactoring and/or updating well-tested code much more, so I usually force myself to write tests -- up front as much as I can, but also after the code is written and functionality clearly defined.