The story goes that there was some talk about the Cornell computer science department doing nothing but ivory tower mathematics that was useless to people who actually, you know, used computers in their research — such as the physicists programming data collection controllers for the particle accelerator. This concerned the head of the CS department as the harsh talk was creating a little bit of mutual resentment, not to mention an image problem for his department. He resolved to help bridge the theory-practice divide and show the non-CS computer users on campus that his theoreticians were doing worthwhile work.
So one day a CS professor schedules an intro talk on formal methods, nothing but basic mathematical maturity required, and the department chair seizes his chance! He invites a bunch of the particle accelerator folks to the talk, telling them formal methods will help them write more reliable code so they don't waste their time re-running experiments because of data collection errors. The physicists roll their eyes and shake their heads, but when the hour rolls around, some of them are present at the back of the room.
The professor gives a pretty standard formal methods talk, introducing a hypothetical (of course) programming language, showing how mathematical propositions about a program can be formulated and proved, defining a simple mathematical problem involving coloring a graph, and demonstrating a proof of correctness for a solution in the hypothetical programming language.
The physicists are impressed but a little wary. After all, their problems are mostly with performance rather than the kind of correctness addressed in the lecture. One of them raises his hand. "Is it possible, using these methods, to bound the number of times each subroutine is called?"
The CS professor, who has been instructed to engage the physicists and be very accommodating and diplomatic, is delighted by the question. "Yes! We can formulate an answer in terms of the input and prove it by induction." He turns to the blackboard and formulates the statement and a proof. "There we have it. And I must congratulate you on such an interesting theoretical question."
"I'm actually getting at something very practical," the physicist replies. "Namely, performance. This program isn't worth much if it takes too long to solve the problem."
The CS professor furrows his brow and says, "Hmmm, I see. Yes, that can be a concern sometimes. Well, I hope these ideas have put all your fears to rest on that topic!"
"I'm afraid it's not so obvious to me," the physicist says.
"Well, we've proved the program terminates, and that when it terminates, it produces a correct solution. We can prove various other properties such as which solution will be produced if there are multiple correct solutions."
"There are other properties of the program that are of interest as well. Can you prove whether it runs within certain memory bounds?"
The CS professor starts to look profoundly bored. "Yes, these methods could be used for that. Space and time bounds can be very interesting."
"Wonderful, wonderful! So if I need to be sure that this code runs in, say, 50 milliseconds...."
"Runs? In 50 milliseconds?"
"Yes, supposing I want to run it hundreds of times per second, I..."
"Hold on a second, I don't think you've got the point after all. You never have to run the program. That is why" (the CS professor gestures at the chalkboard) "we have done all of this work! You never have to run it, because you already know it produces the correct answer!"
(This story was awesome, by the way.)
Apparently the Sinclair calculator designers learned that the electrofluorescent display and one other component could tolerate having the power dropped, enough, I believe, that they could use button-cell batteries, making it the smallest model at the time (a claimed title I can't verify; the "smallest calculator" race was so hotly contested that it made up a large part of a superb documentary on the Japanese silicon industry). The display, I presume, had some decay in its elements, but what the other component was must go on my too-long research todo list.
Edit: my own cherished EF-display calculator from about the same time used a 9V "square" battery, which depleted fast, faster than my pocket money could withstand, anyhow.
Well, the O/S they were using was licensed "per device". When they deployed, they had one device, but during operation they would have more than one.
They didn't want to get in trouble by wasting the government's money on unnecessary licenses, yet they didn't want to get the government in trouble (because they would get in trouble) for using unlicensed software. A free software (i.e. open source) alternative would solve the problem.
I pretty much immediately realized that they were building a MIRV, and at some point suggested that perhaps the per-device licensing would not be a problem. They didn't seem particularly amused by the observation, and after the visit, wow, our salesperson was pissed!
And indeed they didn't follow up -- I imagine they just struck a deal with their existing vendor who will go unnamed.
The same idea applies more generally as well with so-called "region-based" memory allocation. A sub-process allocates incrementally from a pool, then when that process ends, the entire pool is freed at once. Optionally, garbage collection or other memory freeing can happen during the process. Additionally, regions can be arranged hierarchically for processes and sub-processes, recursively.
The easiest way to take advantage of this is when, for example, you have a short-lived command-line tool. Maybe you don't even bother to free any memory at all, since the operating system is going to free it all for you when you're done anyway.
When I needed many arena-backed vectors, I wrote an ArenaVec with an API that takes a pointer to the arena in all methods that can allocate, keeping the memory size down.
For performance-critical code in a managed language, it made an astonishing difference, at least in my use case.
BTW, I think that many of us use the same 'technique' by accident :)
As in, if you deploy often enough, you may never notice that your app has a memory leak.
(The thing was also optimized to reuse objects to allocate as little as possible... but a writeup from that team on an internal website said, basically, that Java GC was so efficient nowadays that it may not have been worth the extra maintenance effort and potential for bugs, in retrospect)
My impression, btw, is that it still requires pretty significant efforts to control allocation. I'm not sure there's a 1-1 match, but I associate it with people writing custom substitutes for java.lang.String to avoid allocations.
which was (apparently) fine, but it made testing very expensive
Was the device able to be downclocked for certain tests?
With only globals used, you don't have to worry about free() at all. The OS does it better and faster.
Anyone know what the watson group at IBM was doing in 1995?