Writing tests seems to actually be the easy part.
Figuring out what to test is the hard part.
Integration tests or unit tests.
To mock or not to mock.
How deep to test my logic in controllers.
How many edge cases is enough.
And no matter how good your tests are, you can still get bugs.
Doing Django dev right now. I always write unit tests for a component, but not necessarily integration tests; it depends on whether the component interacts with an outside system (db, redis, aws, etc.). If it does, I always mock those systems, and I always assert as specifically as possible on those mocks. I usually find that by the time I have to write integration tests, the majority of the issues are already worked out, as long as I'm diligent about mocking the return values correctly.
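A minimal, self-contained sketch of that pattern, assuming plain unittest and unittest.mock (in a Django project you'd typically subclass django.test.SimpleTestCase instead). All names here (PaymentGateway, charge_customer, the argument names) are invented for illustration; the point is patching the external dependency and asserting on the exact call.

    import unittest
    from unittest.mock import patch


    class PaymentGateway:
        """Stand-in for a real external client (payments API, redis, aws, ...)."""

        def charge(self, customer, amount, currency):
            raise RuntimeError("would hit the network")


    gateway = PaymentGateway()


    def charge_customer(customer_id, amount_cents):
        """Component under test: delegates the actual charge to the external system."""
        response = gateway.charge(customer=customer_id, amount=amount_cents, currency="usd")
        return response["id"]


    class ChargeCustomerTests(unittest.TestCase):
        # Patch the external dependency by its import path (here, this very module).
        @patch(__name__ + ".gateway")
        def test_charges_exact_amount_in_cents(self, mock_gateway):
            # Mock the return value carefully so it matches what the real system returns.
            mock_gateway.charge.return_value = {"status": "succeeded", "id": "ch_123"}

            charge_id = charge_customer(customer_id=42, amount_cents=1999)

            # Assert as specifically as possible on the mock: exact arguments, exact call count.
            mock_gateway.charge.assert_called_once_with(customer=42, amount=1999, currency="usd")
            self.assertEqual(charge_id, "ch_123")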
I don't think good developers write bad tests. Bad tests come from developers who are inexperienced at writing tests for the type of task they're working on.
But also, given how standard the setUp method is and how often it is used, putting setup config there is not forcing anyone to "read your entire test suite". It forces them to read the setUp method, and that should be no shock to whoever is working with the tests. I don't mind doing it the way the author suggests, but I also think the difference between the two approaches is minimal.
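For comparison, a minimal sketch of the two styles side by side (the Account class is a made-up stand-in). Whether the fixture lives in setUp or inline in the test, the cost to the reader is about the same:

    import unittest


    class Account:
        """Tiny stand-in class so the example is runnable; not from the article."""

        def __init__(self, owner, balance):
            self.owner = owner
            self.balance = balance

        def withdraw(self, amount):
            self.balance -= amount


    class AccountTestWithSetUp(unittest.TestCase):
        def setUp(self):
            # Shared config lives here; anyone familiar with unittest knows to look for it.
            self.account = Account(owner="alice", balance=100)

        def test_withdraw_reduces_balance(self):
            self.account.withdraw(30)
            self.assertEqual(self.account.balance, 70)


    class AccountTestInline(unittest.TestCase):
        def test_withdraw_reduces_balance(self):
            # The article's preference: the full input is visible inside the test itself.
            account = Account(owner="alice", balance=100)
            account.withdraw(30)
            self.assertEqual(account.balance, 70)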
Helper methods deserve equal treatment in code and in tests. Operating on two different layers of abstraction, as well as the other complaints he has about their misuse, would be just as bad in the main code.
Likewise, most of the other problems are just as bad in the code under test. Helper methods should not bury critical values or interact with state unless that is very clear from their name. Overly concise names are bad too, and trying to save keystrokes is pointless. Pointless constants (FOUR = 4) are, well, pointless.
A lot of the points in this article also appear in other works on the topic, such as Kent Beck's Test-Driven Development by Example and Robert Martin's Clean Code.
When you later edit such a helper function, it affects multiple tests at the same time, and you rarely check whether that modification has quietly neutralized some of them.
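A hypothetical illustration of that silent neutralization (all names invented): a shared helper gets edited to suit one test, and another test keeps passing even though it no longer exercises the case it was written for.

    def make_users():
        # Originally this returned a mix of adults and minors.
        # Someone later edits it for another test so that it returns only adults...
        return [{"name": "alice", "age": 34}, {"name": "bob", "age": 41}]


    def filter_adults(users):
        return [u for u in users if u["age"] >= 18]


    def test_filter_adults_excludes_minors():
        # ...and this test still passes, but it no longer proves anything:
        # there is no minor left in the input to be filtered out.
        assert all(u["age"] >= 18 for u in filter_adults(make_users()))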
> Suppose I invented a new ruler that measured in “abstract ruler units.” To convert from “ruler units” to inches or centimeters, you’d use a separate conversion chart. If I offered such a ruler to a carpenter, they’d smack me in the face with it. It would be absurd to add a layer of abstraction to a tool that gives clear, unambiguous information.
Assuming these ARUs are linear (like every unit of measure for distance), you don't need a chart. You just need a multiplier. In that case, it sounds like this is describing a scale ruler [1] -- and your carpenter probably loves it.
For anyone who ever has to read (or draw) an architectural diagram, they're great! Rather than adding a level of abstraction, they remove one. I'm glad rulers haven't simply "existed in the same form for hundreds of years".
It's interesting that the author sees helpers and abstractions as uniformly opposed to simplicity. If that's the case, why use them in production code either? Why not apply the author's advice there as well?
Badly designed helpers and abstractions are certainly ... bad. But well-designed helpers and abstractions make the code much simpler to read, understand, and maintain. That's their entire purpose.
There are certainly a lot of badly designed abstractions out there. If you come across one that makes the code harder to understand and maintain, get rid of it and inline it, regardless of whether it's production or test code. The flip side is that if you see an opportunity for an abstraction that makes the code easier to understand and maintain, you should take it, even in test code.
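As a rough sketch of what a well-designed test helper can look like (all names invented): the helper below hides the required-but-irrelevant fields while keeping the value each test actually cares about as an explicit parameter, so each test still reads clearly.

    import unittest


    def make_order(*, total_cents, country="US", currency="USD", items=None):
        """Build an order with sensible defaults; only the interesting value is required."""
        return {
            "total_cents": total_cents,
            "country": country,
            "currency": currency,
            "items": items or [{"sku": "SKU-1", "qty": 1}],
        }


    def free_shipping(order):
        """Toy rule: free shipping for US orders of $50 or more."""
        return order["country"] == "US" and order["total_cents"] >= 5000


    class FreeShippingTests(unittest.TestCase):
        def test_large_us_order_ships_free(self):
            self.assertTrue(free_shipping(make_order(total_cents=5000)))

        def test_small_order_does_not_ship_free(self):
            self.assertFalse(free_shipping(make_order(total_cents=499)))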
The way I've always thought of it is that my unit tests have to be a level of complexity simpler than the code under test. Otherwise the "bugs" are just as likely to be in my test code as in the production code.
I think he makes a very strong point that if the code is too complicated to use in tests, then the production code should change. If it's hard for you to set up in tests, it's almost impossible for someone else to use in real code. When they should be reusing your component, they will give up and write a 'better' version for themselves.
I don't see how replacing a bunch of copy-pasted code with a helper method makes your bug risk any greater. If anything, it reduces your bug risk: when you need to make a change everywhere, you won't accidentally miss a couple of places.
Not to mention that the right abstractions will actually make your code simpler, not more complex. Imagine someone needs an ArrayList in a test but decides to reimplement it inline using raw arrays instead. Sure, they have avoided a higher-level abstraction, but the resulting code is now both harder to read and more likely to contain bugs.
In production code a helper lets you reduce duplication, which makes refactoring easier and reduces bugs. It also gives you additional interfaces to test against, which can make the code more reliable. And it makes the code more readable.
But in tests you are optimizing for something different: you want to see exactly what the input to the test is, which makes it much easier to read the test as documentation.
Helper functions can work in tests, but they are often completely opaque and make the tests hard to understand.
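A hypothetical contrast (names invented): the first test buries the one value that decides the outcome inside a helper, so it tells you nothing on its own; the second keeps the input in the test body, so it reads as documentation.

    def apply_discount(total_cents, customer):
        """Toy rule: premium customers get 10% off."""
        if customer["tier"] == "premium":
            return int(total_cents * 0.9)
        return total_cents


    def make_default_customer():
        # Opaque: the tier, which is the only thing the tests below care about,
        # is buried in here.
        return {"name": "alice", "tier": "premium"}


    def test_discount_opaque():
        # Reading this test alone, you cannot tell why 900 is the expected answer.
        assert apply_discount(1000, make_default_customer()) == 900


    def test_discount_explicit():
        # The decisive input is right in the test, so it documents the behaviour.
        premium_customer = {"name": "alice", "tier": "premium"}
        assert apply_discount(1000, premium_customer) == 900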
Good QA people are worth their weight in gold.