This reminds me of a demo a friend once gave me on a Symbolics Lisp Machine. He turned the garbage collector off, then ran a program that allocated memory like mad. After a while, a message popped up saying something like "you are about to run out of memory. Activate GC?" He declined, and the program continued running. After a bit more time, another message popped up saying "you are about to run out of memory, and if you don't start GC now, there will not be enough room to run it when you do. Activate GC?" Again, he declined. After more time, another message popped up saying "you have run out of memory, and it is not possible to start GC now. Would you like to attach an external disk drive?" The idea was that the external drive could serve as additional memory either to continue running without GC or as scratch memory for the GC. I've always found that impressive.
On a Symbolics you can add swap space files when there is an out-of-memory error. If you find some disk space somewhere (possibly after deleting some unused junk), create a swap file, add it and continue.
Reminds me of a comment I once read by someone who had created instrumentation for underground nuclear tests. They had, if memory serves, something like 5 milliseconds between initial detonation and the point where the equipment would be destroyed. "Now that's a hard-real-time constraint," they said.
That reminds me of a wonderful, surely apocryphal story I was told by an old boss who went to grad school at Cornell in the 1970s.
The story goes that there was some talk about the Cornell computer science department doing nothing but ivory tower mathematics that was useless to people who actually, you know, used computers in their research — such as the physicists programming data collection controllers for the particle accelerator. This concerned the head of the CS department as the harsh talk was creating a little bit of mutual resentment, not to mention an image problem for his department. He resolved to help bridge the theory-practice divide and show the non-CS computer users on campus that his theoreticians were doing worthwhile work.
So one day a CS professor schedules an intro talk on formal methods, nothing but basic mathematical maturity required, and the department chair seizes his chance! He invites a bunch of the particle accelerator folks to the talk, telling them formal methods will help them write more reliable code so they don't waste their time re-running experiments because of data collection errors. The physicists roll their eyes and shake their heads, but when the hour rolls around, some of them are present at the back of the room.
The professor gives a pretty standard formal methods talk, introducing a hypothetical (of course) programming language, showing how mathematical propositions about a program can be formulated and proved, defining a simple mathematical problem involving coloring a graph, and demonstrating a proof of correctness for a solution in the hypothetical programming language.
The physicists are impressed but a little wary. After all, their problems are mostly with performance rather than the kind of correctness addressed in the lecture. One of them raises his hand. "Is it possible, using these methods, to bound the number of times each subroutine is called?"
The CS professor, who has been instructed to engage the physicists and be very accommodating and diplomatic, is delighted by the question. "Yes! We can formulate an answer in terms of the input and prove it by induction." He turns to the blackboard and formulates the statement and a proof. "There we have it. And I must congratulate you on such an interesting theoretical question."
"I'm actually getting at something very practical," the physicist replies. "Namely, performance. This program isn't worth much if it takes too long to solve the problem."
The CS professor furrows his brow and says, "Hmmm, I see. Yes, that can be a concern sometimes. Well, I hope these ideas have put all your fears to rest on that topic!"
"I'm afraid it's not so obvious to me," the physicist says.
"Well, we've proved the program terminates, and that when it terminates, it produces a correct solution. We can prove various other properties such as which solution will be produced if there are multiple correct solutions."
"There are other properties of the program that are of interesting as well. Can you prove whether runs within certain memory bounds?"
The CS professor starts to look profoundly bored. "Yes, these methods could be used for that. Space and time bounds can be very interesting."
"Wonderful, wonderful! So if I need to be sure that this code runs in, say, 50 milliseconds...."
"Runs? In 50 milliseconds?"
"Yes, supposing I want to run it hundreds of times per second, I..."
"Hold on a second, I don't think you've got the point after all. You never have to run the program. That is why" (the CS professor gestures at the chalkboard) "we have done all of this work! You never have to run it, because you already know it produces the correct answer!"
Alternative punchline: "Hold on a second, we can't run our program. We've solved the halting problem, so we can't actually run it without violating our assumptions."
Another thing about that involved blowing up the memory storing key material. The memory was small enough that individual pieces might have held the whole key. So, whoever I was reading about split it across the address space in such a way that each fragment would only have a piece of the key.
Apparently the Sinclair calculator designers learned that the electrofluorescent display, and one other component, could live with the power dropped intermittently, sufficiently (I believe) that they could use button cell batteries, making it the smallest model at the time (claimed; I can't verify it, but the "smallest calculator" race was so hotly contested that it was a large part of a super documentary on the Japanese silicon industry). The display, I presume, had some decay time in its elements, but what the other component was will have to go on my too-long research todo list.
Edit: my cherished EF-display calculator from about the same time used a 9V "square" battery that depleted fast, faster than my pocket money could withstand, anyhow.
That's funny because I had the opposite experience: I had a sales call at an unnamed aerospace company. They were interested in Cygnus's free embedded OS (now eCos). Why?
Well, the O/S they were using was licensed "per device". When they deployed, they had one device, but during operation they would have more than one.
They didn't want to get in trouble by wasting the government's money on unnecessary licenses, yet they didn't want to get the government in trouble (because they would get in trouble) for using unlicensed software. A free software (i.e. open source) alternative would solve the problem.
I pretty much immediately realized that they were building a MIRV, and at some point suggested that perhaps it would not be a problem. They didn't seem particularly amused by that point, and after the visit, wow, our salesperson was pissed!
And indeed they didn't follow up -- I imagine they just struck a deal with their existing vendor who will go unnamed.
The same idea applies more generally as well with so-called "region-based" memory allocation. A sub-process allocates incrementally from a pool, then when that process ends, the entire pool is freed at once. Optionally, garbage collection or other memory freeing can happen during the process. Additionally, regions can be arranged hierarchically for processes and sub-processes, recursively.
The easiest way to take advantage of this is if, for example, you have a short-lived command-line tool. Maybe you don't even bother to free any memory at all, since the operating system is going to free it all for you when you're done anyway.
The polymorphic allocators (new in C++17) allow this: you can instantiate a std::pmr::monotonic_buffer_resource and use it to construct all the regular C++ containers for the duration of the remaining scope. This works just fine as long as no memory is needed outside of that scope.
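Something like this, for instance (a rough sketch; the function name and buffer size are made up, and I'm assuming C++17's <memory_resource> is available):

```cpp
#include <cstddef>
#include <memory_resource>
#include <string>
#include <vector>

// A scope-local "region" backed by a stack buffer. Everything allocated
// through the resource is released in one shot when it goes out of scope;
// nothing is freed piecemeal in between.
void process_request()
{
    std::byte buffer[16 * 1024];  // scratch region for this scope
    std::pmr::monotonic_buffer_resource arena{buffer, sizeof(buffer)};
    // (If the buffer runs out, the arena falls back to its upstream
    // resource, which is also how regions can be nested hierarchically.)

    // pmr containers take the resource via their allocator.
    std::pmr::vector<int> values{&arena};
    std::pmr::vector<std::pmr::string> names{&arena};

    for (int i = 0; i < 1000; ++i) {
        values.push_back(i);
        names.emplace_back("item");
    }

    // No per-element deallocation happens here; the whole region is
    // reclaimed when `arena` is destroyed at the end of the scope.
}
```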
That's cool. Do the C++ containers now contain a pointer to the monotonic_buffer_resource, or is there some magic that lets the local calls to push_back, etc., know about the allocator in that local scope?
When I needed many arena-backed vectors, I wrote an ArenaVec with an API that takes a pointer to the arena in all methods that can allocate, which keeps the size of each vector down.
Yes, they do. Since C++11, allocators have not been required to be stateless, so containers (or any allocator-aware objects) have needed to hold an instance of the allocator.
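Concretely, something like this (a small sketch; nothing clever, just showing where the pointer lives):

```cpp
#include <cassert>
#include <memory_resource>
#include <vector>

// A std::pmr container carries a polymorphic_allocator, which is just a
// pointer to the memory_resource it was constructed with; there is no
// scope-level magic.
int main()
{
    std::pmr::monotonic_buffer_resource arena;
    std::pmr::vector<int> v{&arena};

    // The container's allocator remembers the resource it was given.
    assert(v.get_allocator().resource() == &arena);

    // A default-constructed pmr container falls back to the process-wide
    // default resource instead.
    std::pmr::vector<int> w;
    assert(w.get_allocator().resource() == std::pmr::get_default_resource());
}
```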
Same for games written in Java, by using pooling and not deallocating until a safe spot. This was especially important in early Android, where a GC pause would not just feel like minor lag but actually cause a really big stutter.
I used object pooling as a technique to increase performance on some of the tools I wrote in my previous role. These were protocol analysis tools, used for monitoring network links, and pooling most objects to reduce the work of the GC gave an order-of-magnitude increase in performance.
For performance critical code in a managed language, it made an astonishing difference, at least in my use case.
Similarly, many old C and assembly games just have static arrays for a sufficiently large number of each type and allocate from that using some simple scheme (e.g. slot count + compacting, bitmap, used/unused bit in the data structure itself, etc.).
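Roughly this kind of thing (a hypothetical sketch, written in C++ for readability even though the originals would have been C or assembly):

```cpp
#include <array>
#include <cstddef>

// The "static array + used/unused bit" scheme: a fixed pool of Bullet
// objects, no heap allocation after startup.
struct Bullet {
    float x = 0, y = 0, dx = 0, dy = 0;
    bool  alive = false;  // the used/unused bit lives in the object itself
};

constexpr std::size_t kMaxBullets = 256;    // "sufficiently large" for the game
std::array<Bullet, kMaxBullets> g_bullets;  // static storage, never freed

// Returns a free slot, or nullptr if the pool is exhausted.
Bullet* spawn_bullet(float x, float y, float dx, float dy)
{
    for (Bullet& b : g_bullets) {
        if (!b.alive) {
            b = Bullet{x, y, dx, dy, true};
            return &b;
        }
    }
    return nullptr;  // caller decides what to do when the pool is full
}

// "Freeing" is just clearing the flag; the slot is reused on the next spawn.
void kill_bullet(Bullet& b) { b.alive = false; }
```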
I worked at a bank and saw that pattern used in market connection middleware as well. It's really common, but since that stuff is all closed source it's hard to point to a concrete example.
(The thing was also optimized to reuse objects to allocate as little as possible... but a writeup from that team on an internal website said, basically, that Java GC was so efficient nowadays that it may not have been worth the extra maintenance effort and potential for bugs, in retrospect)
You can probably find discussions if you dig around in the mechanical sympathy mailing list.
My impression, btw, is that it still requires pretty significant efforts to control allocation. I'm not sure there's a 1-1 match, but I associate it with people writing custom substitutes for java.lang.String to avoid allocations.
Long time ago I shared an office with a guy who had designed missiles. He had a similar tale, but relating to cooling and thermal management: the guidance system was designed to be inadequately cooled, in that if it ran for long enough it would overheat. However, that time was well in excess of its maximum flight time.
I have a similar apocryphal story from the DoD. There was an image processing element for a missile targeting system that was a 3D stack of silicon. It ran at an amazingly large number of ops per second, but only for a few seconds, because there was no way to get the heat out of the stack.
Which was (apparently) fine, but it made testing very expensive.
This was the early 90s. I don't remember the ops figure, but I remember it being comparable in magnitude to the CM-2, which was where we were running some kind of simulation (in a SCIF, so everything I heard was very second-hand).
OK, so they apparently understood the software well enough to predict to within a factor of 2 the magnitude of the leaks, but not well enough to fix the leaks. I suppose that's possible, but a little unnerving.
Writing code that explicitly manages memory has costs too: there's a risk of use-after-free or double free. If you don't free, you can't have those specific problems. And if you accept this, it opens the way for GC-style programming, at least for some parts of the app.
Small utilities with limited lifetime can be written in the same manner. You just allocate memory and never free it. When the utility finishes its job, it simply terminates, freeing all the memory it allocated in one shot.
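For example, a throwaway filter might look like this (a deliberately leaky sketch; the details are made up, and nothing is freed until the process exits):

```cpp
#include <cstdio>
#include <string>
#include <vector>

// The "never free" style for a short-lived tool: read lines, keep everything,
// print a summary, and let the OS reclaim the whole address space on exit.
// There is no delete/free anywhere.
int main()
{
    std::vector<std::string*> lines;  // pointers deliberately leaked

    char buf[4096];
    while (std::fgets(buf, sizeof buf, stdin)) {
        lines.push_back(new std::string(buf));  // never deleted
    }

    std::printf("read %zu lines\n", lines.size());
    return 0;  // process exit frees everything at once
}
```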
So long as that isn't somehow still in the guidance modules of the latest Tomahawks that can be told to circle / reposition / group awaiting the final target telemetry!