Oh no it isn't. A garbage collector needs to prove that what it collects is garbage. If objects get collected because of an error... that's not really how you want a GC to work.
If you are looking for an apt metaphor, Stalin sort might be more in line with what's going on here. Or maybe "ostrich algorithm".
>Garbage collector needs to prove that what's being collected is garbage
Some collectors may need to do this, but there are several that don't. EpsilonGC is a prime example of a GC that doesn't need to prove anything.
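For context: Epsilon is the JDK's no-op collector (JEP 318, experimental since JDK 11). It handles allocation but never reclaims anything; when the heap fills, the JVM just exits with an OutOfMemoryError. A minimal sketch of enabling it (the heap size and the `MyApp` class name are arbitrary examples, not from the thread):

```shell
# Run a Java program under the no-op Epsilon GC (JDK 11+).
# Nothing is ever collected; once the 128 MB heap is exhausted,
# the JVM terminates with an OutOfMemoryError instead of collecting.
java -XX:+UnlockExperimentalVMOptions -XX:+UseEpsilonGC -Xmx128m MyApp
```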
EpsilonGC is a GC in the same sense that a suitably sized stick is a fully automatic rifle when you hold it to your shoulder and say pew-pew...
I mean, I interpret your comment as a joke, but you could've made it a bit more obvious for people not familiar with the latest fashion in the Java world.
To be fair, this is what the BEAM VM structures everything around: if something is wonky, crash it and restart from a known-good state. Except when BEAM does it, everyone says it's brilliant.
It's one thing to design a crash-only system, and quite another to design a system that crashes all the time and then paper over it with a cloud orchestration layer.
I don't see the fundamental difference. Both systems work under expected conditions and crash parts of themselves when those conditions don't hold. The scale (and thus the visibility of bugs) changes, the technologies change, but the architecture really doesn't. Erlang programs are not magically devoid of bugs; the bugs just aren't surfaced as errors.
I understand this perspective, but a BEAM process can die and respawn in microseconds, while this solution involves booting a whole Linux kernel. The cost of the crash domain matters. Similarly, thread-per-request webservers are a somewhat reasonable architecture on unix but awful on Windows. Why? Windows processes are more expensive to spawn and destroy than unix ones.