That's really gold. I love these optimization rounds.
My worst experience with this was spending a lot of time optimizing a function that looked like it was eating a whole pile of time, only to realize afterwards that a hand-optimized assembly version was already graciously provided in the same subdirectory. And it ran a lot faster than mine :(
The handbook has this interesting statement: "When a job is dominated by calculations or data logging, a multi-bit processor is more appropriate. When the task is decision and command oriented, a one-bit machine is an excellent choice."
I'd like to see that put to the test with a number of simple control systems to see how it measures up. Plus, I'd love to see the demoscene have a go at 4- and 1-bitters to see what they're really capable of.
Regarding the One-bit computer: although that writeup is a few years old, it so happens that I tweaked and rewrote a few bits just a few days ago! I'm pleased by how much attention it has attracted since it was first published. (At the same time it's startling that the mere mention of the MC14500 left some apparently inattentive readers with a false impression. What I built is NOT a '4500 machine!)
the only part of the design that bothers me is the 2716. i think i've been spoiled by years of microcontrollers you could program over a general-purpose communications channel (hc11 shoutout)
i have to imagine this would be a great introduction to processors for non-specialists. a 20 chip ttl design with instruction decode and a register file and address multiplexing is beyond a lot of people's interest.
all those things are really here in concept though, just small enough that you can fit all of them in 8 bits.
thanks for posting that
It's not like SUBLEQ is a very simple instruction either. It has to read two memory addresses, perform arithmetic, and write the result back, on top of possibly branching.
This machine writes a fixed bit to one address, then reads one bit from another address and branches on it. Even if you break it down into micro-ops, that's only two: set-bit and branch. I'd call it simpler.
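For comparison, here's a minimal sketch in C of what one step of each machine has to do. The memory arrays and function names are mine, purely illustrative; SUBLEQ semantics are the usual mem[b] -= mem[a], branch to c if the result is <= 0.

```c
#include <assert.h>

int mem[256]; /* SUBLEQ memory: operands and data share one address space */

/* One SUBLEQ step: two reads, a subtract, a write-back, and a
   conditional branch, all packed into the "single" instruction. */
int subleq_step(int pc) {
    int a = mem[pc], b = mem[pc + 1], c = mem[pc + 2];
    mem[b] -= mem[a];
    return (mem[b] <= 0) ? c : pc + 3;
}

unsigned char bits[256]; /* the 1-bit machine's bit-addressed memory */

/* One step of the set-bit-and-branch machine: write a fixed bit to
   one address, read a bit from another, branch on it. */
int onebit_step(int pc, int set_addr, int test_addr, int target) {
    bits[set_addr] = 1;
    return bits[test_addr] ? target : pc + 1;
}
```

Even this toy version makes the asymmetry visible: the SUBLEQ step touches three operand fields and does arithmetic, while the 1-bit step is just a store and a conditional jump.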
I don't think such a design would scale up. You cannot put 32 of these together and make a powerful 32-bit computer. There are lots of operations where the bit positions would need to interact. For example an adder where the carry bit must ripple between the bit positions.
Now, which is better: this 1-bit computer or a 32-bit computer? It depends on what you want it to do. If all you want is to take some switch inputs and control some lights, maybe the 1-bit computer is simpler, more reliable, and lower latency. But if you want to blink the lights or add some timer delays, suddenly that 1-bit computer doesn't meet your needs without adding a timer source.
  a2 a1 a0
+ b2 b1 b0
= c2 c1 c0

Taking just the low bit (no carry-in), the sum bit follows a0 XOR b0:

a0 b0 | c0
 0  0 |  0
 0  1 |  1
 1  0 |  1
 1  1 |  0

And that last row (1 + 1 = 0) is exactly where a carry into c1 appears, so the next bit position can't be computed in isolation.
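Here's a small sketch in C (function name mine) of what "the carry must ripple" means in practice: each sum bit needs the carry out of the bit below it before it can be computed, so independent bit slices aren't enough.

```c
/* 3-bit ripple-carry addition done one bit at a time.
   The loop-carried `carry` variable is the dependency that
   prevents the bit positions from being computed in parallel. */
int ripple_add3(int a, int b) {
    int carry = 0, result = 0;
    for (int i = 0; i < 3; i++) {
        int ai = (a >> i) & 1, bi = (b >> i) & 1;
        int sum = ai ^ bi ^ carry;               /* c_i = a_i XOR b_i XOR carry_in */
        carry = (ai & bi) | (carry & (ai ^ bi)); /* carry ripples to the next bit */
        result |= sum << i;
    }
    return result; /* low 3 bits of a + b; the final carry is dropped */
}
```

Chaining 32 independent 1-bit machines gives you 32 copies of the i = 0 case; the carry chain is the part of the circuit they can't provide.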