I stopped thinking about "unit testing" and just started thinking "when I write some code, I should write some other code that runs the code automatically".
There's every likelihood that both my original code and the code I write to test it will have bugs, but it's somewhat unlikely that the bugs will be perfectly complementary.
Sometimes that will happen too, which is why it is not the role of automated testing to eliminate bugs. It is the role of quality control testing to eliminate bugs. The role of automated testing is to reduce the cost of quality control (by reducing regressions and reducing the amount of time required to run manual quality control testing).
I have to credit Sebastian Bergmann with the genesis of this attitude. I took a course on unit testing with him somewhere around 2008 and I said "it seems that test code is just more code, how is it any different from writing code in the rest of your project?" and he said "it isn't, tests are just part of your codebase like all your other code". That was a watershed moment for me -- I always thought I was missing something.
The difference between Quality Control and Quality Assurance is really important, but most developers don’t really know the difference, and their only form of quality is QC catching some bugs on the way out rather than QA catching them earlier in the pipeline. I’m a contract test engineer & very rarely see the latter - for example, test design alongside feature design. It’s amazing how quickly you can improve a product’s quality metrics (defect leakage, customer reports, defect costs, etc.) by focusing a little on the QA side of development.
It’s one of the issues companies have when they think they can just replace QA with developers’ automated tests - it misses every other part of quality except a subset of QC.
> I stopped thinking about "unit testing" and just started thinking "when I write some code, I should write some other code that runs the code automatically".
I like that perspective. Personally, the difference between "unit" and "integration" testing, from an organization/thinking perspective, comes down to what has to be available when the tests run. We currently have .test.ts and .int.test.ts files in our code base. The former gets to run really fast without any dependencies, and the latter takes longer to run, but has full access to DBs and Redis. That separation just helps split it up for CI.
Thinking about it, you can probably combine them and just require everything is always available, but I've found that the distinction has helped how I think about writing pure functions and allows me to iterate on them with tests easily.
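A rough Python analogue of that split, for anyone who wants the concrete shape of it: fast tests exercise pure functions in-process with no services attached, and integration tests (tagged or named separately, like the .int.test.ts convention above) are the only ones that get real DBs. The function and test names here are illustrative, not from the commenter's codebase.

```python
# Fast "unit" tier: a pure function plus a test that needs no DB, no
# Redis, nothing outside the process. This is the tier CI can always run.

def normalize_email(raw: str) -> str:
    """Pure function: trivially testable with no dependencies."""
    return raw.strip().lower()

def test_normalize_email() -> None:
    # Runs in microseconds; iterating on the pure function with tests
    # like this is cheap, which is the payoff the comment describes.
    assert normalize_email("  Alice@Example.COM ") == "alice@example.com"

if __name__ == "__main__":
    test_normalize_email()
    print("fast tests passed")
```

With pytest, the same separation is often done with markers (e.g. deselecting integration-marked tests via `-m`), which keeps both tiers in one tree while letting CI choose what to run.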
Yeah I sometimes write tests that are separate from external dependencies still, I think the freedom I gained through this attitude is that I don’t stress too much about it, especially in the early stages. This gives me “good enough” coverage most of the time without the sorts of extreme bending over backwards to mock out every little dependency that a purist unit testing approach would entail.
As a side note I really liked using Lettuce when I was working with Django not so much because of the whole “BDD” thing which was lost on me but because of the way they did database fixtures.
I wrote a little automated test runner in PHP that uses the same approach, and later wrote a little test runner in node with the same name (but never implemented database testing in it because I’m not masochistic enough to use asynchronous code to access a database).
I've been writing a lot of 8-bit assembler for the 6303, which is in the same processor family as the CPU in the TRS-80, which the author is writing code for (for this project in case anyone is wondering: https://github.com/ajxs/yamaha_dx97). I ran into the exact same issue. I wrote a MAME driver for the target platform, so I could test my builds on my development machine. Obviously that sped things up a lot. The MAME debugger isn't built for unit testing, and it can't really be 'instrumented', but I was able to write a lot of scripts which could get me pretty close. You can set the debugger to initialise the system RAM from a file when it hits the breakpoint at the start of your function, then test the RAM against the desired output when it hits the breakpoint at the end. As clunky as this is, it saved me in a few tricky places!
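The pattern described here (preload RAM, run the routine to an end breakpoint, diff RAM against an expected image) generalizes beyond MAME. A minimal sketch, with a made-up stand-in routine where the real setup would have the emulator run the code under test:

```python
# Sketch of the init-RAM / run / compare-RAM test loop. run_routine is a
# stand-in for "run the emulated CPU until the breakpoint at the end of
# the function"; it is not MAME's API.

def run_routine(ram: bytearray) -> None:
    # Fake routine under test: increments each byte of a buffer at 0x10.
    for addr in range(0x10, 0x14):
        ram[addr] = (ram[addr] + 1) & 0xFF

def ram_test(initial: bytes, expected: bytes) -> bool:
    ram = bytearray(initial)       # "initialise the system RAM from a file"
    run_routine(ram)               # run to the breakpoint at function end
    return bytes(ram) == expected  # "test the RAM against the desired output"

initial  = bytes(0x10) + bytes([1, 2, 3, 4]) + bytes(0x0C)
expected = bytes(0x10) + bytes([2, 3, 4, 5]) + bytes(0x0C)
assert ram_test(initial, expected)
print("RAM image matches")
```

The nice property of this style is that a test case is just two memory images, so adding cases costs almost nothing once the harness exists.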
Plug plug: I've written an assembler[0] for the 6502 (with full LSP and debugging support). It also supports the concept of unit tests whereby your program gets assembled and every test individually gets assembled and run, whereby you can add certain asserts to check for CPU register states and things like that.
I loved coding assembly for the 6809 for the TRS-80 Color Computer: there was the cartridge editor/assembler, a disk assembler, and eventually an assembler for OS-9 that worked together with C, Pascal, and other compilers. (e.g. if your assembly function or a wrapper around it follows the calling conventions, you could write your tests in an HLL)
The 6809 team saw themselves as revolutionaries of system software for microcomputers in that the 6809 supported PIC (Position Independent Code) which could be used to implement shared libraries, even in ROM.
So it is totally in character to have your function in one file and have another file with a test harness and another with the actual test that when linked together form a program.
As far as the verbosity of the tests goes, you could solve that with a good macro assembler, although I don't remember ever using one for the 6809. The Coco3 could support 512 KB of RAM, so it could have supported a full-fat assembler in OS-9. Macro assembly for the IBM 360 was particularly powerful because you could specify a register in an argument; I am wondering today if you could get away in 6809 with adding an "LD[parameter]" instruction which could assemble to LDA, LDB, LDX or such depending on the value of parameter.
The tests don't have to be as comprehensive as he says they should be, though some fundamental algorithms really can use a comprehensive test.
Back in 1988, my manager, who was taking graduate school courses at a local college, requested that we start writing test programs instead of walking down stairs, loading and running our assembly code on the local machine.
To generate adequate tests, we would have to simulate part of the actual machine.
Fortunately, he understood we were more likely to write a bad emulator and let us get our exercise instead.
this stuff about how unit tests aren't tests for units but rather tests that are units is nonsense. the term has been used to mean 'tests for units' for about a century
it is certainly true that the optimal amount and kind of testing depends on your development environment. if for some reason you find yourself writing 6809 assembly, here are some ways you can make testing cheaper
- don't write the tests in assembly. there are c compilers. forth is also a fine option. or you can test manually from a debugger or monitor program
- forth makes a decent monitor program and macro assembler and is scriptable for testing. it's maybe a lot better at that than it is at being a programming language. there are reportedly very good forths for the 6809
- don't do the development self-hosted. you can simulate a 6809 on a laptop or cellphone hundreds of times faster than real time, and the author is in fact doing that. if you have a big computer with either a 6809 simulator or a debugging umbilical to your 6809, you can run the test scripts on the big computer instead of the 6809, while running the code under test on the 6809, real or emulated. then you don't have to write your tests in c or forth; you can write them in tcl, python, js, lua, squirrel, or whatever you prefer. they can control the 6809 like a marionette, and you can use property testing frameworks like hypothesis, which would be very ambitious to implement on the 6809
- maybe non-time-critical parts of your logic can be written in a language that can be compiled to either the 6809 or the big computer, such as c, so you can test it compiled natively on the big computer. cross-compiled to the 6809, this will generate bloated code that uses more memory, so it depends on how tight you are on space, though the 6809 isn't quite as unfriendly to c as something like a 6502 or pdp-10
- the exception is that if you write the non-time-critical parts in an itc forth instead of c or something, it won't use an unreasonable amount of memory, but it will be a lot slower and more bug prone
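a sketch of the marionette setup from the list above: the big computer drives the cpu under test, feeds it generated inputs, and checks each result against a reference model. the Emulator class here is hypothetical; a real setup would talk to a 6809 simulator or a debug umbilical instead.

```python
# Big-computer test script controlling an emulated CPU "like a marionette".
# The Emulator class is a stand-in, not a real simulator API.
import random

class Emulator:
    """Hypothetical scriptable 6809 simulator."""
    def call_add16(self, a: int, b: int) -> int:
        # Pretend this pokes the arguments into RAM, runs the routine
        # under test to a breakpoint, and reads the 16-bit result back.
        return (a + b) & 0xFFFF

emu = Emulator()
reference = lambda a, b: (a + b) % 0x10000  # model written on the big computer

# Poor man's property test using stdlib random; a framework like
# hypothesis adds case shrinking and smarter generation on top of this.
for _ in range(1000):
    a, b = random.randrange(0x10000), random.randrange(0x10000)
    assert emu.call_add16(a, b) == reference(a, b)
print("property held for 1000 random cases")
```

the point is that none of the test machinery has to fit on, or run on, the 6809 itself.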
disclaimer, I haven't programmed a 6809 since 01985, and then it was in basic, and my forth isn't all that hot either
> this stuff about how unit tests aren't tests for units but rather tests that are units is nonsense. the term has been used to mean 'tests for units' for about a century
Is that in the article? I've never heard this before. Unit tests are tests that test a "unit" as opposed to integration tests which test how "units" work together. Of course when it comes to code, what one might call a "unit" is more arbitrary than what one might call a "unit" in an electronics project, which is I think one of the points the author is making.
> Originally, the term “unit” in “unit test” referred not to the system under test but to the test itself. This implies that the test can be executed as one unit and does not rely on other tests running upfront
which is historically incoherent. the author has weakened their post to now say
> Rumors have it that the term “unit” in “unit test” originally referred to test itself, not to a unit of the system under test. (...)
which is still reckless disregard for historical accuracy but now at least it's deniable
If you do this as a recreational activity, absolutely feel free to do it self-hosted, for an authentic 1980s home computer user experience. To me, doing retro computing on modern platforms or using simulated FPGA-based hardware defeats the purpose.
oh, yeah, by all means, if you're looking for ways to increase the challenge level or gain empathy with 80s hackers, that's definitely a good way to do it, and you will probably have a lot of fun
my advice 'don't do the development self-hosted' was only intended as a way to reduce the cost of automated testing (and, well, the whole development project) rather than an abstract moral imperative
no, i am speaking quite precisely. 01923 is hardly medieval, and that's when the term came into common use in engineering jargon, with pretty much its current meaning, plus or minus about ten years
This is a little off, unless you give “unit” a very different meaning than most use. The terms unit test vs. division test are used in the military: a unit test checks the combat readiness of a unit, and a division test the combat readiness of a division. The books that use the term mostly come from US Artillery documentation.
I wonder if the etymology of modern unit testing does come from that; but if it did I’d assume the other terms would exist too.
interesting, i appreciate the correction. for some reason the google books date range searches linked from the n-grams page weren't loading for me, and the corresponding internet archive searches are also broken
there also seem to be some search hits from 'unit tests' for education, in which the 'unit' is presumably a course or subdivision thereof; https://www.google.com.ar/books/edition/The_Organization_of_... is an example, though sadly google refuses to show me any excerpts, perhaps because i'm in a country they don't like
oh, yeah, probably before the computer era, though see the other comment thread for a possible shortening of the time frame; "unit test" in the engineering sense might date from as recently as the 01960s