
My favorite one (that will probably never come back) is the concept of a ternary computer built on a balanced ternary number system.

In hindsight it is a much better way to encode signs for both integers and floating-point numbers.




This was the Setun (https://en.wikipedia.org/wiki/Setun), developed at Moscow State University.


Why is ternary better? I haven't heard this argued before.


Maximal information efficiency is achieved at "e-ary" (~2.718281828) distinct values per place, and this is closer to ternary than binary.
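A quick way to see this (my own sketch, using the standard radix-economy cost of base times digits, i.e. b·ln(N)/ln(b)):

```python
import math

def radix_economy(base: int, n: int) -> float:
    """Cost of writing n in a given base: symbols per digit times digits needed."""
    return base * math.log(n) / math.log(base)

N = 10**6
costs = {b: radix_economy(b, N) for b in (2, 3, 4, 10)}
best = min(costs, key=costs.get)  # ternary wins: 3 is the integer nearest e
```

Among integer bases, 3 minimizes this cost precisely because it is the closest integer to e.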


I read this as information density in theory, not necessarily in practice.

For this result to hold in practice, we would need evidence that ternary hardware can be manufactured and operated at the scale of binary hardware, e.g. comparable clock speed, error rates, and power draw. Everything I have heard prior points to the conclusion that binary is more efficient once you account for the physical properties of the substrate.

I'm sceptical of claims in the realm of physics on the basis of pure math without empirical confirmation.

Is there any modern research into manufacturing ternary chips?


Binary also has mathematical advantages over ternary, for instance 2 is the smallest integer greater than 1 (no shit) so powers of 2 are as close together as you can get in the integers.

Bitwise operations correspond to the finite field GF(2); while a GF(3) also exists, it's not nearly as interesting or useful.
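A tiny illustration of the GF(2) point, using nothing beyond Python's built-in bitwise operators: XOR behaves exactly like field addition and AND like field multiplication.

```python
# In GF(2) = {0, 1}, addition is mod-2 addition and multiplication is ordinary.
# The hardware-native bitwise ops implement these directly:
xor_matches_add = all((a ^ b) == (a + b) % 2 for a in (0, 1) for b in (0, 1))
and_matches_mul = all((a & b) == (a * b) % 2 for a in (0, 1) for b in (0, 1))
```

This is why every binary circuit is, for free, an arithmetic circuit over GF(2); ternary logic gates have no comparably direct field structure baked into common hardware.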


Binary has advantages in a discrete/logical sense (namely bitwise operations).

Balanced ternary (plain ternary doesn't) has the advantage that the sign is trivially included in the number, so there is no need for one's/two's complement or signed/unsigned extension.

This is partially a lie, as a balanced ternary string can also be read as ordinary ternary, giving only non-negative integers... Still, it is a more natural encoding of arithmetic.
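For the curious, a minimal sketch of the encoding (function names are mine, not from any standard library). The "sign comes for free" property shows up as negation being a trit-wise flip:

```python
def to_balanced_ternary(n: int) -> list[int]:
    """Encode an integer as trits in {-1, 0, 1}, least significant first."""
    trits = []
    while n != 0:
        r = n % 3
        if r == 2:          # a digit 2 is rewritten as -1 plus a carry
            r = -1
        trits.append(r)
        n = (n - r) // 3
    return trits or [0]

def from_balanced_ternary(trits: list[int]) -> int:
    return sum(t * 3**i for i, t in enumerate(trits))

# Negation = flip every trit; no two's-complement machinery needed.
assert from_balanced_ternary([-t for t in to_balanced_ternary(7)]) == -7
```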


> powers of 2 are as close together as you can get in the integers.

Why is that useful?

> not nearly as interesting or useful.

Why is that important in computing?


I'm unfamiliar with the literature in this area. May I ask why? (Also, why does Euler's constant appear all over the place?)


         /
       (0)
      /   \
    0      (1)
   / \     / \
  0   1   0  (1)

  = 011 = 3 (in decimal)
a) Numeral Systems (e.g. ternary) are just trees, and specific numerals are just paths from root to leaf.

b) A 6-digit numeral roughly corresponds to a tree of depth 6.

c) Base_10 corresponds to a tree with 10 possible children for each node.

d) e is the most efficient multiplier when trying to achieve compound growth in the fewest iterations of multiplication.

> Also, why does Euler's constant appear all over the place?

e is special because e^x is its own derivative. It also acts as a "bridge" between addition and multiplication. It often appears where growth or trees are involved.


Thank you for taking the time to explain, with a diagram even.

Another comment mentioned "radix economy", which, together with your description helped me understand (generally) why Euler's constant is the most efficient base in terms of number of digits needed to express numbers.

https://en.wikipedia.org/wiki/Radix_economy


I don't understand point d)

What do you mean by efficient?

Any other literature you could reference? Cause this sounds interesting and I'd like to do some more research into this.


The cost of finding a leaf in a balanced tree (with data only at the leaves, which is how we represent integers in base B) of size N is on average proportional to its branching factor B multiplied by its height H. Height is (log N)/(log B), so cost is B * (log N)/(log B).

By derivatives, that's minimized when B satisfies

0 = (log N) (1/(log B) + B (-1/(log B)^2) (1/B))

==

0 = 1/(log B) - 1/(log B)^2

1 = log B

So B equals the base of the logarithm.

This looks like you can pick any logarithmic base b you want, and so any B you want, but in fact the derivative I wrote assumes e is the base (hence the term "natural" logarithm).

Other bases b would yield a scaling factor of ... e/b, since d(e^x)/dx = e^x

and b^x = e^((ln b) x)

You can chase "deeper" reasons for this all day long by digging into the many definitions/properties of e and proving they are equivalent.
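As a quick sanity check on the calculus (my own numeric sketch, not from the derivation above), scanning B/ln B over a grid of bases puts the minimum right at e:

```python
import math

def cost(b: float) -> float:
    return b / math.log(b)   # the per-digit cost factor B / ln B

grid = [1.5 + i * 0.001 for i in range(4000)]   # bases from 1.5 to ~5.5
best = min(grid, key=cost)
assert abs(best - math.e) < 0.001   # the minimizer is e ~ 2.718
```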

Playing with e is the most fun you can have in pre/calculus.


Frankly, it's just a pattern I've observed from playing with numbers myself. I'm unsure how to explain it properly, and I have no academic sources to point toward. You can probably find a better explanation somewhere in an article on optimization problems.

The intuition is that e is the optimal water level when a limited amount of water is distributed among a variable number of buckets, where all the filled buckets multiply each other. Alternatively, it's like how volume(x, y, z) is maximized where

  volume(x, y, z) = (x y z)
  const = (x + y + z)
when (x = y = z). Except in our original situation, the number of dimensions is arbitrary instead of fixed.
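The bucket picture can be played with directly; a throwaway sketch (the total S is an arbitrary choice of mine): split a fixed sum into k equal buckets, multiply them together, and the winning bucket size lands near e.

```python
import math

S = 100.0   # total "water" to distribute

def product_of_buckets(k: int) -> float:
    return (S / k) ** k     # k equal buckets, each holding S/k

best_k = max(range(1, 200), key=product_of_buckets)
bucket_size = S / best_k    # hovers around e ~ 2.718
assert abs(bucket_size - math.e) < 0.1
```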


> Also, why does Euler's constant appear all over the place?

That is just a natural consequence.


Downvoters, this is a reference to the "natural" logarithm.


OK, but the percentage increase in efficiency will be what? At the cost of completely overhauling all your low-level hardware?


About 5.7 percent. So not all that much. For comparison, binary is about 42 percent more efficient than decimal. See [1] for more numbers.

1: https://en.wikipedia.org/wiki/Radix_economy#Comparing_differ...
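The 5.7 percent figure can be reproduced from the asymptotic radix economy b/ln(b) (a sketch of mine; the decimal comparison depends on how the linked table averages over ranges, so only the ternary/binary ratio is checked here):

```python
import math

def economy(b: int) -> float:
    return b / math.log(b)   # asymptotic radix economy per digit

# How much more economical ternary is than binary, asymptotically:
ternary_edge = economy(2) / economy(3) - 1   # ~0.057, i.e. about 5.7%
```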


Balanced ternary is insane, and most data formats would need to be adapted to deal with it. Except in specialized processors, it is ludicrous to even consider it until almost every other avenue has been explored.


Because we're now locked into binary? Balanced ternary computers were built and used in the Soviet Union until 1970: http://www.computer-museum.ru/english/setun.htm


> Because we're now locked into binary

Binary has so many useful properties that are relatively intuitive to understand. Binary is a uniquely convenient number system. Even day to day, base-two number systems are hyper-convenient.

One thing I can think of that balanced ternary has interesting properties for is 2D space partitioning. Balanced ternary partitioning is an interesting tool; then again, you could just use modulo binary space partitioning (i.e. wrap the space around by half the width of a binary partition), which is almost as good. I wonder what the balanced ternary equivalent of an octree would be; I guess it would be a 27-tree, kinda cool, like a 3³ tree.

> used in the Soviet Union until 1970

So was a planned economy, and they still haven't recovered. The KGB sure have staying power though, must be the balanced ternary advantage! ;- )

It's not like balanced ternary has no positives, but you have to acknowledge that well before even decent relay computers were invented, binary was already much better explored than balanced ternary; obviously relay computers helped solidify that (though bi-quinary did not survive, popular as it was since Colossus).


The Setun wasn't part of the planned economy, which resulted in it being cancelled by the USSR's economic planners.


> Binary is a uniquely convenient number system.

Binary has many advantages, but more from a logical perspective than a numerical one.

Namely signs.

I don't think balanced ternary computers are ever coming back. I think it is a nice lost technology :)

(And if I ever find myself in a position to start a civilization from scratch, I will consider a balanced ternary/5/7 number system.)


> Namely signs.

Signed arithmetic is elegant enough in binary, I guarantee you will fail to express even basic arithmetic concepts in balanced ternary to any lay person.

As for experts implementing them in a system, there are advantages to balanced forms, but good luck finding enough experts. The benefits probably start kicking in for very large integers, where the carry itself can take longer than you might like.


Information efficiency defined how?

What specifically becomes more efficient about computation?

Very curious, I never heard of this before.


Ternary is one of those things where the theory looks beautiful until you need to start putting circuits together. Then it's kinda shitty.


This is a matter of physical implementation. Using normal transistors, ternary is horrible, but there are other physical media where ternary is more natural (often involving magnetic fields).


Can you make 5nm CPU features with magnetic fields, even in principle?



