I also note that he dedicates a page on his website to "Gustafson's law".
Also worth noting that half of the book is about the "ubox" method for solving optimization problems - also cool, but may be overkill if you are just interested in the numeric format itself. Personally, I've been working on an implementation of the format that I can toy around with - I have no real interest in learning a lot about the cool algorithms I could do with it until I can show myself that it works for basic arithmetic, etc, as well as the author claims.
Gustafson also makes the code available (I think [here](https://www.crcpress.com/The-End-of-Error-Unum-Computing/Gus...). It's Mathematica code... there is a free viewer for that format (if you don't have Mathematica) which can print out a PDF with richly formatted equations.
Also, Googling for that link led me to this [Python implementation someone whipped up](https://github.com/jrmuizel/pyunum).
Would be happy to explain it further, but about to get on a flight.
Most people don't realize that data movement is the most expensive thing in a processor... It takes 100 picojoules to do a double precision (64-bit) floating point operation, but a humongous 4200 picojoules to actually move the 64 bits from DRAM to your registers. The really crazy thing is that around 60% of the power used to move the data is wasted in the processor itself, in the logic powering the hardware cache hierarchy. My startup (http://rexcomputing.com) is solving this with our new processor, and we are working with John Gustafson on experimenting with unums for future generations of our chip.
Even in the case of a core accessing another core's local scratchpad when they are on opposite corners of the chip, it takes only one cycle per hop on the Network on Chip... meaning for our 256-core chip, you can go all the way across the chip (and access a total of 32MB of memory) in 32 cycles... less than the ~40 cycles it takes to access L3 cache on an Intel chip.
But to answer your question directly, we are targeting 1GHz conservatively...we think it could do more, but as we are focused on efficiency, we think it is a good middle ground between performance and energy usage. We'll be able to make a more informed decision (and possibly change that) when we have silicon in hand.
The same way you store some "metadata" by appending ×10^23 to a value of 6.022... here you are adding metadata about precision (certainty), which resolves a ton of the NaN and error issues in IEEE floating point.
It seems to be a superset of floating point (and fixed point?) though, so maybe LAPACK could still work? I may be misreading or misunderstanding this however.
That said, information here is sparse and I'm not an expert on numerical computing (although I do graphics at work and know some about the subject).
You could pad your values to achieve constant time indexing.
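For what it's worth, here's a minimal sketch of that padding idea (my own illustration, not anything from the book): store each variable-length encoding in a fixed-size slot with a length byte, so element i always lives at a fixed offset. The slot size and layout here are assumptions.

```python
SLOT = 8  # assumed maximum slot size: 1 length byte + up to 7 payload bytes

def pack(encodings):
    # each item is a bytes object of length <= SLOT - 1 (hypothetical encoder output);
    # pad every entry out to a fixed-width slot
    return b"".join(bytes([len(e)]) + e.ljust(SLOT - 1, b"\x00") for e in encodings)

def fetch(buf, i):
    # constant-time indexing: element i always starts at byte offset i * SLOT
    slot = buf[i * SLOT:(i + 1) * SLOT]
    return slot[1:1 + slot[0]]
```

You trade away the bandwidth savings of the variable-length format in exchange for O(1) random access, which is exactly the tension the padding comment is pointing at.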
> I have been unable to find a problem that breaks unum math.
Then perhaps try (x+y) != x, where x is a very large number and y is a very small positive one.
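For reference, here is that absorption failure with ordinary IEEE doubles (the claim being that a unum result would instead widen to an interval that still contains the true sum):

```python
x = 1e30  # very large
y = 1.0   # very small by comparison
# With 64-bit IEEE doubles, y vanishes entirely into x's rounding error:
print((x + y) != x)  # prints False: the addition returned exactly x
```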
Folks, if you don't want to buy the book, Amazon lets you do a "look inside the book" that gets enough introduction to explain the unum format, for free.
He's 1/3 of the way to three impossible things before breakfast by slide 15.
You should really read the book. A lot of the atomic operations seem cumbersome, because the unum doesn't have a fixed size representation, but you just get over that when you remember that the real problem is shuttling data over buses. More compute transistors is not a problem.
Now I think we can say some representations are better than others (at least for certain applications). It may even be possible that some representation with a given n can be better in all cases than a representation with a larger n. So an n equals 1 system will obviously be terrible compared to many systems with n around 30 or 60. But I'm not really concerned with that point, just the claim of being able to represent all real numbers.
In the book he talks about a simple unum with values -inf, (-inf, -2), -2, (-2, -1), -1, (-1, 0), 0, (0, 1), 1, (1, 2), 2, (2, inf), and inf; wider intervals like (0, 2), (-2, 0), and (1, inf) emerge when you intentionally use less precision.
This is a mathematically closed system that represents all real numbers and is even useful!
Not an expert here though.
Unums shouldn't be seen as better than ints, longs, floats, or doubles anymore than a double is seen as better than an int. They have different uses and strengths, and there are problems where none are a good choice. Perhaps unums are the best choice for some problems. Perhaps they are clearly and significantly better in some areas. But they should be thought of as an alternative to the existing formats and not a superior replacement.
They're also an inferior replacement in cases where you want to take advantage of highly optimized hardware and getting a provably correct answer doesn't really matter. I don't see unums replacing floats for, say, video game graphics. But for numerical computation, it seems like the only real flaws with unums compared to doubles are the nonexistence of a hardware implementation and the existing popularity of doubles.
It's not the same as dyadic interval math, it's kind of a hybrid.
A reasonable metaphor would be that you cannot draw all of the Mandelbrot diagram, but you can render any piece of it to any resolution you choose. Being able to describe it precisely, and to know the limits of your accuracy, is useful.
So assuming that he's saying anything at all, he's at least being imprecise, and the actual claim should be something like "represent any dyadic interval using a finite number of bits".
Anyway, I guess his motivation might be "you can represent any real number (with finite bits, and therefore finite precision)". In the book, he presents an interesting case: little 4-bit versions of the Unum that can represent:
-inf, (-inf, -2), -2, (-2, -1), -1, (-1, -1/2), -1/2, (-1/2, -0), -0, 0, (0, 1/2), 1/2, (1/2, 1), 1, (1, 2), 2, (2, inf), inf.
Putting together a pair of them, the book outlines simple interval arithmetic (where pairs of numbers can represent any interval between numbers on the line above, and single numbers can represent some of the closed intervals as above). The reason these are kind of neat is that using the standard Unum algorithms (without any fudging), you can get "correct" (albeit terribly imprecise) results for many real number computations. Questions like "is there a number satisfying a numerical predicate in some range?" or the value of a trigonometric or exponential expression will come out "correct" (but you might get an answer like (-inf, inf)). If things work out as well as he claims (and demonstrates for some cases), then you can basically do the math to figure out how precise you want to be and choose an appropriate specialization of the format - or take advantage of the format's flexibility and do computations starting at a low precision and increasing precision until you are satisfied. In particular, it's kind of cool that you can do computations with little 8-bit intervals, and possibly circumvent doing more expensive computations (e.g. if you test whether a property will hold anywhere in the Unum range and it won't hold anywhere, assuming you (and Gustafson) have done the math right, you can avoid doing more expensive checks with increased precision).
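To make the "correct but terribly imprecise" behavior concrete, here's a toy sketch (my own, not Gustafson's actual encoding - open/closed endpoint flags are omitted for brevity): intervals are plain tuples, and after each add the endpoints are snapped outward onto the tiny lattice of representable values.

```python
import math

# Hypothetical lattice of exactly representable values for the tiny format;
# results are widened outward so the true answer is always contained.
LATTICE = [-math.inf, -2, -1, -0.5, 0, 0.5, 1, 2, math.inf]

def round_down(x):
    return max(v for v in LATTICE if v <= x)

def round_up(x):
    return min(v for v in LATTICE if v >= x)

def add(a, b):
    # interval addition: (a_lo, a_hi) + (b_lo, b_hi), widened outward
    return (round_down(a[0] + b[0]), round_up(a[1] + b[1]))

print(add((0.5, 1), (1, 2)))  # -> (1, inf): exact sum (1.5, 3) widened outward
```

The answer (1, inf) is uselessly wide but never wrong, which is the property the sibling comment's "(1/2,1)+(1,2)=(1,inf)" guess is getting at; a wider format with more lattice points would tighten it to something like (3/2, 3).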
Anyway, point is, the presentations are kind of flashy and misleading - and you're right, you can't represent any real number (just finitely representable dyadic intervals)... but the format itself _does_ seem promising...
Not sure how it would go, I think it depends if you maintain the precision or increase it after this operation. I guess 2 bit: (1/2,1)+(1,2)=(1,inf) (?) and 4 bit (2 exp and 2 mantissa?): (1/2,1)+(1,2)=(3/2,3). Maybe there's a built in check to compare how short your interval can get and stop at a reasonable precision (in this case there's no point going for more than 4 bits).
Honestly I find it quite elegant, and it is at least trying to solve a big issue with bandwidth limitations. I do wish the exposition were clearer and more straightforward.
-inf, (-inf, -2), -2, (-2, -1), (-1, 0), 0, (0, 1), (1, 2), 2, (2, inf), inf, and both quiet and signaling NaN. They do not represent ±1/2 or use ±1/2 as an endpoint.
If you add, say, 1 to (1,2), you get (2, inf). The open interval means it does not CONTAIN infinity, but ends at a finite value too large to represent. If the largest positive real you can represent is 2, then (2, inf) is a mathematically correct answer.
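A toy illustration of that semantics (my own sketch, not the actual unum encoding): track an open/closed flag per endpoint, and saturate anything beyond the largest representable magnitude to an *open* infinity, meaning "finite but too large to represent".

```python
import math

MAXREAL = 2  # assumed largest representable positive real in this tiny format

def add(a, b):
    # intervals are (lo, hi, lo_open, hi_open)
    lo, hi = a[0] + b[0], a[1] + b[1]
    lo_open, hi_open = a[2] or b[2], a[3] or b[3]
    if hi > MAXREAL:
        hi, hi_open = math.inf, True    # open: inf itself is NOT contained
    if lo < -MAXREAL:
        lo, lo_open = -math.inf, True
    return (lo, hi, lo_open, hi_open)

# 1 (exact, closed) added to the open interval (1, 2):
print(add((1, 1, False, False), (1, 2, True, True)))  # (2, inf), both ends open
```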
Edit: here was the slide http://i.imgur.com/0eQvvK1.png