The article says that ternary uses exponentially fewer symbols (actually, in further nitpicking, it says "bits" rather than "symbols") than binary, but that's not correct. The number of digits only shrinks by a constant factor of log_2(3) ≈ 1.585.
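A quick sketch of the constant-factor claim (using an exact digit count instead of floating-point logs):

```python
def digits(n: int, base: int) -> int:
    # Exact count of digits of n (n > 0) in the given base.
    count = 0
    while n:
        n //= base
        count += 1
    return count

n = 10**6
print(digits(n, 2), digits(n, 3))  # 20 binary digits vs 13 ternary digits
# The ratio tends to log_2(3) ≈ 1.585 as n grows — a constant factor,
# nothing exponential about it.
```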
The comment about subtraction being a lot easier doesn't hold up in light of 2's-complement notation; yes, it's a tiny bit easier, but not by much (the comments about sign bits also seem out of place in this light).
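For anyone who hasn't seen it: in 2's complement the same adder does subtraction, since a - b is just a + (~b + 1). A minimal 8-bit sketch:

```python
WIDTH = 8
MASK = (1 << WIDTH) - 1

def twos_complement_sub(a: int, b: int) -> int:
    # Negate b by inverting its bits and adding 1, then add — no
    # separate subtractor circuit is needed.
    result = (a + ((~b & MASK) + 1)) & MASK
    # Interpret the top bit as the sign.
    return result - (1 << WIDTH) if result & (1 << (WIDTH - 1)) else result

print(twos_complement_sub(5, 9))  # -4
print(twos_complement_sub(9, 5))  # 4
```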
Disk drives use a variant of RLL:
Um, why would you break (or worse, invert) such a common convention?
> So 11 is 9+3-1.
...and not follow through, just two sentences later?
> [...] at which point the number 11 would be 2-0-1 (1+1+9).
Surely you mean 2+0+9 or 9+0+2?
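To check the arithmetic: 11 in ordinary ternary is 1-0-2 reading the most significant digit first (9+0+2), and in balanced ternary it's 1, 1, -1 (9+3-1). A quick sketch:

```python
def to_base3(n: int) -> list:
    # Standard (unbalanced) ternary digits, most significant first.
    digits = []
    while n:
        digits.append(n % 3)
        n //= 3
    return digits[::-1]

def to_balanced_ternary(n: int) -> list:
    # Balanced ternary uses digits {-1, 0, 1}: a remainder of 2 becomes
    # -1 with a carry into the next position.
    digits = []
    while n:
        r = n % 3
        if r == 2:
            r = -1
            n += 1
        digits.append(r)
        n //= 3
    return digits[::-1]

print(to_base3(11))             # [1, 0, 2]  → 9 + 0 + 2
print(to_balanced_ternary(11))  # [1, 1, -1] → 9 + 3 - 1
```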
Oh, are these not in use anymore?
> The first group had two bits, one representing the number 0 and the other representing the number 5. The second group had five bits representing the numbers 0–4.
Sounds like an abacus to me…
And mechanical :)
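A sketch of the bi-quinary encoding described above (names are mine; I'm assuming the lower-indexed "bi" bit stands for 0 and the higher for 5):

```python
def to_biquinary(d: int) -> tuple:
    # One-hot in each group, like beads on an abacus: the "bi" group picks
    # 0 or 5 (upper deck), the "qui" group picks 0-4 (lower deck).
    assert 0 <= d <= 9
    bi = ['0', '0']
    qui = ['0'] * 5
    bi[d // 5] = '1'
    qui[d % 5] = '1'
    return ''.join(bi), ''.join(qui)

print(to_biquinary(7))  # ('01', '00100')  i.e. 5 + 2
print(to_biquinary(3))  # ('10', '00010')  i.e. 0 + 3
```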
A lot of MOSFETs don't even spec their linear regions anymore. The trend is away from linear.
There used to be electronic analog computers. I've been working on an analog photonic computer (very theoretical so far) that I think could work.
Do you know how many electrons you move when you execute an instruction? It's like on the order of 10^18. I might be able to get away with a couple of photons.
Also, there appears to be some dispute about the validity of "spaghetti sort".
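For reference, the idea is O(n) analog sorting: cut one rod per value, stand them up, and repeatedly grab the tallest. The usual objection is that on anything digital each "grab" is itself an O(n) scan. A simulation sketch:

```python
def spaghetti_sort(values):
    # "Cut" a rod per value, then repeatedly grab the tallest rod the
    # flat hand touches. On silicon each grab is an O(n) max-scan, which
    # is exactly the disputed part of the claimed O(n) total.
    rods = list(values)
    out = []
    while rods:
        tallest = max(rods)   # the "lower your hand onto the bundle" step
        rods.remove(tallest)
        out.append(tallest)
    return out[::-1]          # tallest is pulled first, so reverse for ascending

print(spaghetti_sort([3, 1, 4, 1, 5]))  # [1, 1, 3, 4, 5]
```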
I guess my point is that unless you can take advantage of traditional chip architectures or manufacturing, it may take an immense amount of effort to develop something close to their component density.
The problem with blogging about this is that it diverges quite a bit from the current state of the art, and as a result I've been called a crackpot. But I'll show them all lol.