
I'd offer a slightly different take:

- Structure your code so it is mostly leaves.

- Unit test the leaves.

- Integration test the rest if needed.

I like this approach in part because making lots of leaves also adds to the "literate"-ness of the code. With lots of opportunities to name your primitives, the code is much closer to being self documenting.

Depending on the project and its requirements, I also think "lazy" testing has value. Any time you are looking at a block of code, suspicious that it's the source of a bug, write a test for it. If you're in an environment where bugs aren't costly, where attribution goes through few layers of code, and bugs are easily visible when they occur, this can save a lot of time.
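
To make the "mostly leaves" shape concrete, here's a rough sketch (Python, with made-up names; just an illustration of the structure, not anyone's actual code): the logic lives in pure leaf functions, and the only non-leaf is a thin I/O wrapper that a single integration test can cover.

    def parse_order_line(line: str) -> tuple[str, int]:
        """Leaf: pure function, trivially unit-testable."""
        sku, qty = line.split(",")
        return sku.strip(), int(qty)

    def total_quantity(lines: list[str]) -> int:
        """Leaf: composed from other leaves, still pure."""
        return sum(qty for _, qty in map(parse_order_line, lines))

    def import_orders(path: str) -> int:
        """Non-leaf: does the I/O and delegates everything else."""
        with open(path) as f:
            return total_quantity(f.read().splitlines())

    # Unit tests cover the leaves exhaustively:
    assert parse_order_line(" ABC , 3 ") == ("ABC", 3)
    assert total_quantity(["a,1", "b,2"]) == 3
    # One integration test against a real temp file covers import_orders.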




I have adopted the same philosophy. A few resources on this, part of the so-called London school of TDD:

- https://github.com/testdouble/contributing-tests/wiki/London... (and the rest of the Wiki)

- http://blog.testdouble.com/posts/2015-09-10-how-i-use-test-d...

- Most of the screencasts and articles at https://www.destroyallsoftware.com/screencasts (especially this brilliant talk https://www.destroyallsoftware.com/talks/boundaries)

- Integration Tests Are A Scam: https://www.youtube.com/watch?v=VDfX44fZoMc

All of these basically go the opposite way of the article's philosophy:

Not too many integration tests, mostly unit tests. Clearly define a contract between the boundaries of the code, and stub/mock on the contract. You'll be left with mostly pure functions at the leaves, which you'll unit test.
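
Roughly what that looks like in practice - a minimal Python sketch with invented names, not taken from any of the linked resources: the contract is an explicit interface, the core stays effectively pure, and the unit test stubs the contract.

    from typing import Protocol

    class RateSource(Protocol):
        """The contract the boundary has to satisfy."""
        def rate(self, currency: str) -> float: ...

    def convert(amount: float, currency: str, rates: RateSource) -> float:
        """Core logic: everything effectful sits behind the contract."""
        return amount * rates.rate(currency)

    class StubRates:
        """Test stub that honours the contract; no network, no mock framework."""
        def rate(self, currency: str) -> float:
            return {"EUR": 1.5, "GBP": 2.0}[currency]

    # Unit test against the stub:
    assert convert(10, "EUR", StubRates()) == 15.0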


Thanks for the links, they make sense. I've always had trouble with blind "you should unit test" advice, but the video in particular explains the reasoning very well :)


I’ve been practicing TDD for 6 years and this is exactly what I ended up doing. It’s a fantastic way to program.

My leaves are either pure functions (FP languages) or value objects that init themselves based on other value objects (OOP languages). These value objects have no methods, no computed properties, etc. Just inert data.

No mocks and no “header” interfaces needed.

On top of that I sprinkle a bunch of UI tests to verify it’s all properly wired up.

Works great!
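
A small sketch of the shape I mean, in Python with invented names (the point is the structure, not the language), and one way to read "init themselves based on other value objects": the leaf builds itself from plain data, so there is nothing to mock.

    from dataclasses import dataclass

    @dataclass(frozen=True)
    class LineItem:
        price_cents: int
        quantity: int

    @dataclass(frozen=True)
    class Invoice:
        total_cents: int

        @classmethod
        def from_items(cls, items: list[LineItem]) -> "Invoice":
            # Builds itself from other value objects; no collaborators to mock.
            return cls(total_cents=sum(i.price_cents * i.quantity for i in items))

    # Unit test: plain data in, plain data out.
    assert Invoice.from_items([LineItem(250, 2), LineItem(100, 1)]) == Invoice(600)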


> Structure your code so it is mostly leaves. Unit test the leaves. Integration test the rest if needed.

Exactly. You expressed my thoughts very succinctly. Though I feel the post tries to say the same thing, just in a lot more words.


I didn't get that from the post at all; I thought it advocated mostly for integration tests, and I didn't see anything about refactoring code to make unit testing easier.


This is my exact mentality as well! In fact, I like it so much that I apply it to system design too. Structure the pieces of code into a directed acyclic graph for great success. A tree structure that terminates in leaves is a DAG.

https://en.m.wikipedia.org/wiki/Directed_acyclic_graph


> Integration test the rest if needed.

Is there any situation where there is integration, but no need to test it?

You seem to be suggesting that if the leaves are thoroughly tested, nothing can go wrong in their integration, but at the same time, I cannot imagine someone believing that.


Exactly - most bugs I see are in integration: mismatches in data models or state. But I also work on business-y applications, which tend to be more integration than local business logic.


Integration tests are always needed in some form, because you need to make sure the leaves are actually called. Since unit tests are executed in a vacuum, the functions might work individually but never be called at all, or might fail because of bugs that only appear when the whole tree is exercised together.
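
A contrived sketch (Python, invented names) of the gap being described: the leaf's unit test says nothing about whether the leaf is ever reached, so you still want at least one test through the whole tree.

    def normalize_email(email: str) -> str:
        """Leaf: pure and easy to unit test."""
        return email.strip().lower()

    class UserService:
        """Non-leaf: wires the leaf into the real flow."""
        def __init__(self) -> None:
            self.saved: list[str] = []

        def register(self, email: str) -> None:
            # If this call were forgotten, normalize_email's unit tests
            # would still pass; only the wiring test below would fail.
            self.saved.append(normalize_email(email))

    # Unit test of the leaf:
    assert normalize_email("  Foo@Example.com ") == "foo@example.com"

    # Test through the tree, proving the leaf is actually reached:
    svc = UserService()
    svc.register("  Foo@Example.com ")
    assert svc.saved == ["foo@example.com"]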


Can anyone give me an example, or explain a little more about "Structure your code so it is mostly leaves"?


Leaves in this context would be classes that have no dependencies.

If you need to create an object, can you pass the name of the class in? Or can the object be created elsewhere and passed in fresh? If you're making a call to a remote service (even your local DB), are you being passed a proxy object?

All of these references can then be provided as a test double or test spy, so long as they are strict about the interface they provide/expect, and you can exhaustively cover whatever internal edge cases you need with unit tests.

Don't _forget_ the integration tests, but my personal opinion is that it usually suffices to have one "success" and one "error" integration test to cover the whole stack, and then rely on unit tests to be more exhaustive about handling the possible error cases.
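
A minimal sketch of that shape (Python, invented names, not from any particular codebase): the dependency is passed in behind an interface, a hand-rolled spy stands in for it in unit tests, and the real thing only appears in the couple of integration tests.

    from typing import Protocol

    class OrderStore(Protocol):
        """The interface the dependency is expected to provide."""
        def save(self, order_id: str, total: int) -> None: ...

    class Checkout:
        def __init__(self, store: OrderStore) -> None:
            self.store = store  # injected, never constructed in here

        def place(self, order_id: str, prices: list[int]) -> None:
            self.store.save(order_id, sum(prices))

    class SpyStore:
        """Test spy that records calls instead of hitting a real database."""
        def __init__(self) -> None:
            self.calls: list[tuple[str, int]] = []

        def save(self, order_id: str, total: int) -> None:
            self.calls.append((order_id, total))

    # Unit tests can be exhaustive against the spy:
    spy = SpyStore()
    Checkout(spy).place("o-1", [200, 50])
    assert spy.calls == [("o-1", 250)]
    # Plus one "success" and one "error" integration test against the real store.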


This is very interesting. I'm not 100% sure I understand. Any example of this or resources on this style?



