
If you really want to buy into the module pattern, it seems like anything that's enough of a unit that you'd want to test could be in its own module.

In the example given, the sum function could be its own module that you require into the stats module. That way you could test them independently, and the stats module could simply expose its own appropriate methods.

This has the added benefit of making the sum function reusable.
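As a sketch of that split (the file names `sum.js` and `stats.js` are hypothetical, assuming CommonJS modules):

```javascript
// sum.js — a hypothetical standalone module, testable on its own
function sum(numbers) {
  return numbers.reduce((total, n) => total + n, 0);
}
// module.exports = sum;

// stats.js — would pull it in with: const sum = require('./sum');
function mean(numbers) {
  return sum(numbers) / numbers.length;
}
// module.exports = { mean };

console.log(sum([1, 2, 3]));  // 6
console.log(mean([1, 2, 3])); // 2
```

Each file can then be tested independently, and `stats` only exposes the methods that make up its own interface.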

Personally I prefer this approach to introducing environmental concerns into your code.




That's a great point. I definitely agree that erring on the side of modules is better. I just worry about times when the purpose of a module is not clearly defined at the outset (e.g. when it only contains one function that is only used in one other module). I have often seen that devolve into the "helper" module, where random functions collect.

I'm also not sure I buy the assertion that all code you would want to test is code you would want to export publicly. Do you have any reasoning to support that idea?


> I'm also not sure I buy the assertion that all code you would want to test is code you would want to export publicly. Do you have any reasoning to support that idea?

All code you would want to test (or even to have exist) is either code in publicly exported functions or code whose pathways can all be exercised through the publicly exported interfaces. Code pathways that are neither in publicly exported functions nor reachable through them are dead code.

If it's not testable via the public interface, it shouldn't exist.


What if a public function is very complicated—too complicated to be considered a unit—and it calls several simpler private functions? Those helpers should be tested with their own unit tests, but shouldn't be exported.

It's not just about coverage in terms of code execution; you also want the right test “resolution”, so you can quickly find the level of abstraction where a bug is, and so you can prove correctness of the parts of a complicated function as you build it.
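A sketch of that shape, with hypothetical names: one exported function built from private helpers, where a helper-level test would localize a bug faster than a test against the public function alone.

```javascript
// Private helpers — too small to export, but each a candidate
// for its own unit test.
function normalize(word) {
  return word.trim().toLowerCase();
}

function dedupe(words) {
  return [...new Set(words)];
}

// The only public, exported function. A wrong result here could come
// from either helper, so testing only at this level gives coarse
// "resolution": the failure doesn't say which abstraction is broken.
function cleanWords(words) {
  return dedupe(words.map(normalize));
}

console.log(cleanWords([' Foo', 'foo ', 'Bar'])); // [ 'foo', 'bar' ]
```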


The other side of this, though, is that you're now coupling your tests to the implementation of your solution and not the interface you want to present. You can't simply refactor your code to improve it; you will also have to find the failing tests, remove them, and write new ones for your new private functions, all while the public behavior of your system has remained working perfectly.

That's my primary reason for not testing private functions.

My alternative is simple, and has been stated elsewhere in the thread; if a function rises to the level of complexity of really needing its own tests, then it probably should be exposed as a public function somewhere in my system. Otherwise, I haven't factored the problem properly yet.


Exactly... coupling is more detrimental than one would think. As your codebase grows, more developers come on board, and features change, what should have been a simple refactor can turn into a nightmare: your test suite starts to give false alarms because you've changed the implementation, or removed a now-unused private function that your tests relied upon directly. Writing unit tests directly against private functions is a good way to ensure your team's velocity doesn't scale as well as it should.

Unit tests will serve you better when written to assert results rather than implementation details.


I think I just mean that a testable unit is a good heuristic for when a piece of code should be pulled out into a separate module. Put another way: when there's enough functionality/complexity in a given function that it needs to be tested independent of the public methods that use it, it's modularizable.

This probably isn't categorically true, but it's served me pretty well.



