I read this as information density in theory, not necessarily in practice.
For this result to matter in practice, we'd need evidence that ternary hardware can be manufactured and operated at the scale of binary hardware, e.g. comparable clock speeds, error rates, and power draw. Everything I have heard prior pointed to the conclusion that binary is more efficient once you account for the physical properties of the substrate.
I'm sceptical of claims in the realm of physics on the basis of pure math without empirical confirmation.
Is there any modern research into manufacturing ternary chips?
Binary also has mathematical advantages over ternary. For instance, 2 is the smallest integer greater than 1 (no shit), so powers of 2 are as close together as powers can get in the integers.
Bitwise operations correspond to arithmetic in the finite field GF(2); while there is a GF(3), it's not nearly as interesting or useful.
Binary has advantages on a discrete/logical sense (namely bitwise operation).
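As a quick illustration of the GF(2) point (my own sketch, not from the comment above): on single bits, XOR and AND are exactly addition and multiplication mod 2.

```python
# XOR and AND on single bits act as addition and multiplication in GF(2);
# on machine words they do this to each bit position independently.
for a in (0, 1):
    for b in (0, 1):
        assert a ^ b == (a + b) % 2   # XOR is addition mod 2
        assert a & b == (a * b) % 2   # AND is multiplication mod 2
```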
Balanced ternary (unlike plain ternary) has the advantage that the sign is trivially included in the number, so there is no need for one's/two's complement or signed/unsigned extension.
This is partially a lie, as balanced ternary digits can be reinterpreted as normal ternary digits, giving only non-negative integers... Still, it is a more natural encoding of arithmetic.
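To make the "sign comes for free" point concrete, here is a minimal sketch of balanced ternary conversion (function names are my own). Digits are -1, 0, +1, and negating a number just flips every digit, with no sign bit or complement scheme.

```python
def to_balanced_ternary(n: int) -> list[int]:
    """Return balanced-ternary digits, least-significant first, each in {-1, 0, 1}."""
    if n == 0:
        return [0]
    digits = []
    while n != 0:
        r = n % 3
        if r == 2:
            r = -1          # digit 2 becomes -1, pushing a carry upward
        n = (n - r) // 3
        digits.append(r)
    return digits

def from_balanced_ternary(digits: list[int]) -> int:
    return sum(d * 3**i for i, d in enumerate(digits))

# Negation is digit-wise: flipping every digit negates the value.
```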
a) Numeral Systems (e.g. ternary) are just trees, and specific numerals are just paths from root to leaf.
b) A 6-digit numeral roughly corresponds to a tree of height 6.
c) Base 10 corresponds to a tree with 10 possible children for each node.
d) e is the most efficient multiplier when trying to achieve compound growth in the fewest iterations of multiplication.
> Also, why does Euler's constant appear all over the place?
e is special because e^x is its own derivative. It also acts as a "bridge" between addition and multiplication. It often appears where growth or trees are involved.
Thank you for taking the time to explain, with a diagram even.
Another comment mentioned "radix economy", which, together with your description helped me understand (generally) why Euler's constant is the most efficient base in terms of number of digits needed to express numbers.
The cost of finding a leaf in balanced tree (with data only at leaves, which is how we represent integers in base B) of size N is on average proportional to its branching factor B multiplied by its height H. Height is (log N)/(log B), so cost is B * (log N)/(log B).
By derivatives, that's minimized when B satisfies

    0 = (log N) [ 1/(log B) + B · (-1/(log B)^2) · (1/B) ]

which simplifies to

    0 = 1/(log B) - 1/(log B)^2
    1 = log B
    Base of the logarithm = B.
This looks like you can pick any logarithmic base b you want, and so any B you want, but in fact the derivative I wrote assumes e is the base (hence the term "natural" logarithm).
Other bases b would yield a scaling factor of ... e/b, since d(e^x)/dx = e^x and b^x = e^((ln b) x).
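The derivation above is easy to check numerically (a sketch; `radix_cost` is my own name): the cost B · log N / log B bottoms out at B = e, and among integer bases 3 narrowly beats 2, with 2 and 4 exactly tied since 4/ln 4 = 2/ln 2.

```python
import math

# Radix economy: representing N in base B takes about log(N)/log(B) digits,
# each digit having B possible symbols, for a total cost of roughly
# B * log(N) / log(B).
def radix_cost(B: float, N: float = 10**6) -> float:
    return B * math.log(N) / math.log(B)

# Scan a fine grid of bases: the continuous minimum lands at B = e ≈ 2.718.
best_B = min((b / 100 for b in range(200, 400)), key=radix_cost)
```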
You can chase "deeper" reasons for this all day long by digging into the many definitions/properties of e and proving they are equivalent.
Playing with e is the most fun you can have in pre/calculus.
Frankly, it's just a pattern I've observed from playing with numbers myself. I'm unsure how to explain it properly, and I have no academic sources to point toward. You can probably find a better explanation somewhere in an article on optimization problems.
The intuition is that e is the optimal water level when a limited amount of water is distributed among a variable number of buckets, where all the filled buckets multiply each other. Alternatively, it's like how volume(x, y, z) is maximized, where

    volume(x, y, z) = x · y · z
    const = x + y + z

when x = y = z. Except in our original situation, the number of dimensions is arbitrary instead of fixed.
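The bucket picture can be sketched numerically (a toy model of the analogy, not a rigorous argument): split a fixed total S into k equal parts and multiply them, and the product peaks when each part holds roughly e units.

```python
# Toy model: distribute a total S evenly across k buckets and multiply
# the bucket values: product(k) = (S / k) ** k.
def bucket_product(S: float, k: int) -> float:
    return (S / k) ** k

S = 20.0
# The best bucket count puts roughly e units in each bucket,
# i.e. best_k is close to S / e.
best_k = max(range(1, 21), key=lambda k: bucket_product(S, k))
```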
Balanced ternary is insane, and most data formats would need to be adapted to deal with it. Except in specialized processors, it is ludicrous to even consider it until almost every other avenue has been explored.
Binary has so many useful properties that are relatively intuitive to understand. Binary is a uniquely convenient number system. Even day to day, base-two number systems are hyper-convenient.
One thing I can think of where balanced ternary has interesting properties is 2D space partitioning. Balanced ternary partitioning is an interesting tool; then again, you could just use modulo binary space partitioning (i.e. wrap the space around by half the width of a binary partition), which is almost as good. I wonder what the balanced ternary equivalent of an octree would be. I guess it would be a 27-tree, kinda cool, like the 3³ tree.
> used in the Soviet Union until 1970
So was a planned economy, and they still haven't recovered. The KGB sure have staying power though, must be the balanced ternary advantage! ;- )
It's not like balanced ternary has no positives, but you have to acknowledge that well before even decent relay computers were invented, binary was already much better explored than balanced ternary; relay computers obviously helped solidify that (though bi-quinary did not survive, popular as it was since Colossus).
Signed arithmetic is elegant enough in binary, I guarantee you will fail to express even basic arithmetic concepts in balanced ternary to any lay person.
As for experts implementing them in a system, there are advantages to balanced forms, but good luck finding enough experts. The benefits probably start kicking in for very large integers, where the carry itself can take longer than you might like.
This is a matter of physical implementation. Using normal transistors, ternary is horrible, but there are other physical media where ternary is more natural (often ones using magnetic fields).
In hindsight it is a much better way to encode signs for both integers and floating points.