What I find an interesting design choice about Go’s approach (using ‘defer’) is that deferred calls are executed at the end of the function, not at the end of the current block:
This means that if you alter a function by placing part of its body inside a loop, you may accidentally introduce an O(n) buildup of deferred statements. It also means you could be piling up the resources associated with those deferred calls (e.g., O(n) open file handles).
Because the number of ‘defer’ calls is then unbounded, the compiler may need to generate code that stores the list of pending closures on the heap. This can already happen for a bounded number of defers if the compiler is unable to analyze the function. In those cases it may be faster to fall back to traditional C-like error handling.
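For concreteness, a small sketch (the file names and the process helper are made up) of how defers registered inside a loop all stay pending until the surrounding function returns, so up to len(paths) files can be open at once:

    package main

    import (
        "fmt"
        "os"
    )

    func process(f *os.File) error {
        fmt.Println("processing", f.Name())
        return nil
    }

    func processAll(paths []string) error {
        for _, p := range paths {
            f, err := os.Open(p)
            if err != nil {
                return err
            }
            // Runs when processAll returns, NOT at the end of this iteration,
            // so every opened file stays open until the function exits.
            defer f.Close()
            if err := process(f); err != nil {
                return err
            }
        }
        return nil // all deferred Close calls run here, in LIFO order
    }

    func main() {
        if err := processAll([]string{"a.txt", "b.txt"}); err != nil {
            fmt.Println("error:", err)
        }
    }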
You just explained why it's done this way. Because 'defer' is not scoped to a block, you can use your standard block constructs (if statements, for loops, etc.) to register the defers. If your function opens N files based on some parameter N, it can still use a loop and defer each file's close dynamically.
Sure, there are ways to work around it, like putting another loop inside a single defer, but it's not completely braindead this way.
It's much more annoying to need extra syntax to "expand" the scope of a deferred call beyond a code block when you need it than to just wrap some code in a function when you need it.
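A sketch of what I mean, reusing the hypothetical processAll/process from the earlier sketch; the function literal gives each iteration its own defer scope:

    func processAll(paths []string) error {
        for _, p := range paths {
            err := func() error {
                f, err := os.Open(p)
                if err != nil {
                    return err
                }
                defer f.Close() // runs when this anonymous function returns
                return process(f)
            }()
            if err != nil {
                return err
            }
        }
        return nil
    }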
Is it the end of the function, or the end of the scope?
There's also no reason the compiler couldn't run the deferred call once the last reference has been used (though in practice the common, performance-minded case is probably to just treat it as an additional goroutine that the context passes execution to when it queues any return messages).
> There's also no reason the compiler couldn't run the deferred call once the last reference has been used
There's one reason not to do this: preserving the language's semantics. Rust's version of RAII drops owned objects at the end of their scope for the same reason, even though it 'could' do better, especially given NLL.
A comparison like this would be much more informative if it used more than one layer of setup/cleanup. When you get to four or five, it really highlights how well some of these scale up (or not). Also, neither nested functions nor mini state machines seem to get a mention, which is a shame. Nested functions are a good example of an approach that gets unwieldy fast as layers are added, and mini state machines scale as well as the "kernel" style without having to endure app-snobs' sneers for using a lowly goto.
I'm not sure what you mean by "nested function" as a cleanup mechanism, and couldn't find a mention of state machines for cleanup purposes either (but maybe I'm searching for the wrong keywords). Is there existing open source code where these can be found, or do you have a link to an explanation for these?
Also, I use a style in Java with higher-order functions (aka Callables/Runnables) that's like the pattern ascribed to Ruby, for situations not covered by AutoCloseable.
That just speaks to the usage of any orchestration class with overridable hooks, and it has everything to do with events pertaining to the domain (xUnit's domain being "run a series of isolated tests") and nothing to do with clarifying lower-level code organization. It's sort of as if the author treated onFocus and onBlur in JS DOM events as startup/cleanup. It's just entry/exit orchestration.
The editorial stuff about teardown being unnecessary also ignores that unit tests must fully reset the fixture between every test to be isolated (one reason test doubles are nice: they can generally be reset instantly), and that one of the more popular ways to organize unit tests is around common fixture handlers, a.k.a. setup/teardown routines.
This usually makes sense if you're testing units of an otherwise cohesive module, since related operations usually use related fixtures. If it doesn't make sense, then sure, you move the setup/cleanup into each test, but that totally craps up being able to review them for validity, etc.
Ideally a test is a clean known fixture handed to the test, one state change, and a verification of post-state, nothing more. Anything else complicates the test to some extent or other, and (though devs get this wrong constantly) readability is paramount in tests, or you don't know you're testing the right thing six months from now. Tests have to be their own docs. That's why setup and teardown exist: to hold everything but that.
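To make that concrete, a minimal sketch in Go's testing package (the Counter type and the fixture helper are made up; t.Cleanup requires Go 1.14+): setup hands the test a known-clean fixture and registers teardown, so the test body is just one state change and one verification.

    package counter

    import "testing"

    // Counter is a stand-in for whatever unit is under test.
    type Counter struct{ n int }

    func (c *Counter) Incr()      { c.n++ }
    func (c *Counter) Value() int { return c.n }
    func (c *Counter) Reset()     { c.n = 0 }

    // newFixture plays the setup/teardown role: every test receives a
    // known-clean Counter, and teardown runs automatically afterwards.
    func newFixture(t *testing.T) *Counter {
        t.Helper()
        c := &Counter{}
        t.Cleanup(c.Reset)
        return c
    }

    func TestIncr(t *testing.T) {
        c := newFixture(t) // clean known fixture
        c.Incr()           // one state change
        if got := c.Value(); got != 1 { // verification of post-state
            t.Fatalf("Value() = %d, want 1", got)
        }
    }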
Most of the article was pretty good, though.
Just... Ruby, Python, C++, xUnit... misguided testing advice... wtf? Felt like the author was swimming outside their lane and missed a pass in editing, frankly.
Admittedly the part about xUnit was a slight derail, but I felt it made some sense to mention it, as an example of "the framework" invoking cleanup handlers for the developer.
The remark about unnecessary tearDown() hooks is probably worth a separate article; it's not a very compelling argument in such a short form.
I can see where it's an attractive example for cleanup behavior, for sure. But I do think the distinction between code-level cleanup and domain-level cleanup is important. You can see the fixture management as part of the test code and then they align much more closely. It's usually better to not couple that way, though, and to think of automation as something that fulfills a test instead of being the test.
I also know there's a lot of debate over xUnit patterns and, in particular, setup/teardown. My take is that the perceived value depends a lot on background and how much experience one has actually maintaining automation strategies over time. One of the bigger dangers is losing track of what you're testing so thoroughly that any confidence you might have rests essentially on magical belief instead of periodic review. That especially hits unit tests, since they're added from multiple sources, and it makes things like standardized fixtures/setup/teardowns pretty useful for alignment.
I'll keep a look out for future blog posts. I'm interested in your take.
There's also Haskell's "bracket" (https://wiki.haskell.org/Bracket_pattern), which is similar to Python's "with" statement but with plain functions instead of context managers.
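For readers who don't know Haskell, the general shape (acquire a resource, hand it to a callback, release it no matter how the callback exits) can be sketched as a plain higher-order function in Go too. This is only an illustration of the idea, not Haskell's actual API, and the names are made up:

    package main

    import (
        "fmt"
        "os"
    )

    // bracket acquires a resource, passes it to use, and releases it
    // afterwards, even if use panics (requires Go 1.18+ for generics).
    func bracket[T any](acquire func() (T, error), release func(T), use func(T) error) error {
        r, err := acquire()
        if err != nil {
            return err
        }
        defer release(r)
        return use(r)
    }

    func main() {
        // Hypothetical usage; the file name is made up.
        err := bracket(
            func() (*os.File, error) { return os.Open("data.txt") },
            func(f *os.File) { f.Close() },
            func(f *os.File) error { fmt.Println("using", f.Name()); return nil },
        )
        if err != nil {
            fmt.Println("error:", err)
        }
    }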
Thanks for the pointer! I'm having trouble categorizing this one, tbh. (also its relationship to what dwohnitmok says on this HN discussion)
Superficially, the code on that wiki page makes the bracket pattern look similar to the low-level `dynamic-wind` function and friends in Scheme and Lisp - but then again, I realize the notion of cleanup probably only makes sense when mutable state exists :)
How does that look in practice? Is there a clever way to use the bracket operator together with Haskell's syntactic sugar for monads?
https://play.golang.org/p/q5n0P-mKrmS
> This means that if you alter a function by placing part of its body inside a loop, you may accidentally introduce an O(n) buildup of deferred statements. It also means you could be piling up the resources associated with those deferred calls (e.g., O(n) open file handles).
> Because the number of ‘defer’ calls is then unbounded, the compiler may need to generate code that stores the list of pending closures on the heap. This can already happen for a bounded number of defers if the compiler is unable to analyze the function. In those cases it may be faster to fall back to traditional C-like error handling.
Optimizer passes for eliminating this overhead were added to Go 1.13: https://golang.org/doc/go1.13#runtime