
Inventor Claims to Have Solved Floating Point Error Problem - SandmanDP
https://www.hpcwire.com/2018/01/17/inventor-claims-solved-floating-point-error-problem/
======
agar
Considering the over-the-top language ("a game changer for the computing
industry") and questionable or imprecise comments like, "[it] allows
representation of real numbers accurate to the last digit" (um, who reads that
without thinking of irrational numbers?) it sounds too much like a sales pitch
and not like serious research.

I could be wrong, but based on the similarities to interval arithmetic
everyone has already identified, I'm pretty skeptical. At best, this could be
a patent on a more efficient way to build interval arithmetic into a CPU
architecture rather than a completely new technique.

As my British friends would say though, I can't be arsed to actually read the
patent.

~~~
tombert
That's what I was thinking too; if I do 1 / 3, then of course it will have to
truncate, and integration errors would still be inevitable.

~~~
umanwizard
Actually it's possible to represent 1/3 perfectly accurately, but what about
all the numbers that it's theoretically impossible to compute? (Almost all
real numbers have this property)
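
For what it's worth, exact representation of 1/3 only needs rational
arithmetic rather than binary floating point. A quick sketch in Python using
the stdlib fractions module (illustrative, nothing to do with the patent):

```python
# Rationals can be stored exactly as integer pairs; binary floating
# point cannot represent 1/3 (or even 0.1) exactly.
from fractions import Fraction

third = Fraction(1, 3)
print(third + third + third)  # 1 -- exact, no rounding anywhere
print(0.1 + 0.1 + 0.1)        # 0.30000000000000004 -- rounding leaks out
```

The non-computable reals are a different story: no finite scheme of any kind
can represent them.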

~~~
mcny
> Almost all real numbers have this property

> Almost all

I love this comment because it brings back memories of school.

In layman's terms, I think "almost all" in this case means all but a
negligible set.

Can we say almost all real numbers are irrational?

1: 1/1 2/1 3/1 4/1 5/1 ...
2: 1/2 2/2 3/2 4/2 5/2 ...
3: ...

so clearly we can count all the rational numbers but how many irrational
numbers are there? are there (many) more irrational numbers than there are
rational numbers?

~~~
Ma8ee
The rational numbers are clearly countable. The irrational numbers are
uncountable, which means there are strictly more of them: uncountably many
irrationals against only countably many rationals.

~~~
umanwizard
I wasn't talking about irrational numbers.

~~~
pcvarmint
But all non-computable numbers are irrational.

~~~
TuringTest
No, there are also non-computable numbers that are imaginary, complex, or
transfinite.

~~~
pcvarmint
All rational numbers are real. Therefore all non-real numbers are irrational.

~~~
TuringTest
Uh, no, "irrational" is defined as a subset of real numbers.

~~~
pcvarmint
Non-rational, then.

This is about as consequential as debating whether 1 is a prime number.

~~~
TuringTest
This thread was about "all the numbers that it's theoretically impossible to
compute". If you think the difference between "irrational" and "non-rational"
in this context is irrelevant, you have a weak grasp of number theory. Yes,
all are non-computable, but in different ways.

------
shmolyneaux
The floating point error problem has not been solved. This patent describes a
floating-point representation that includes fields for storing error
information. The standard IEEE floating-point representation has three fields:
a sign field, an exponent field, and a mantissa (or significand). This patent
proposes reducing the size of other fields and adding additional fields to
store error information. The error information would be updated by hardware
during regular operations. The patent also proposes a configurable precision
requirement for the numbers. If an operation's error exceeds this limit, an
insufficient significant bits signal "sNaN(isb)" would be raised.

Not only does this method not reduce floating point error, it reduces the
precision that you have for any given number of bits.

Unfortunately, I can't find any of the figures referenced in the patent to
help me understand its novelty.
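
To make that concrete, here's a rough software sketch of the mechanism as
described: carry an error bound next to each value and signal when too few
significant bits remain. The field names, the threshold, and the update rule
are my own guesses for illustration, not the patent's:

```python
import math

class BoundedFloat:
    """Value plus an absolute error bound (illustrative only)."""

    REQUIRED_BITS = 20  # the configurable precision limit

    def __init__(self, value, err=0.0):
        self.value = value
        self.err = err

    def significant_bits(self):
        # how many leading bits of the value are still trustworthy
        if self.err == 0.0 or self.value == 0.0:
            return 53  # full double precision
        return max(0, int(math.log2(abs(self.value) / self.err)))

    def __add__(self, other):
        v = self.value + other.value
        # propagate both bounds plus the rounding of this operation
        e = self.err + other.err + math.ulp(v) / 2
        result = BoundedFloat(v, e)
        if result.significant_bits() < self.REQUIRED_BITS:
            raise FloatingPointError("insufficient significant bits")
        return result

x = BoundedFloat(1.0, 1e-12) + BoundedFloat(1.0, 1e-12)
print(x.value, x.significant_bits())
```

The point of a hardware version would be doing this bookkeeping at no extra
instruction cost; done in software like this, every operation carries
noticeable overhead.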

~~~
jovial_cavalier
> If an operation exceeds this limit, an insufficient significant bits signal
> "sNaN(isb)" would be raised.

What about values like 0.3 whose binary expansion repeats? Wouldn't they
always raise that signal?

~~~
oh_sigh
Presumably if you specify that you want more units of precision than are
available, then yes. But if you say you only want 1 significant digit, then it
can store it.

------
ajennings
Is this different from/better than?

unums:
[https://en.wikipedia.org/wiki/Unum_(number_format)](https://en.wikipedia.org/wiki/Unum_\(number_format\))

interval arithmetic:
[https://en.wikipedia.org/wiki/Interval_arithmetic](https://en.wikipedia.org/wiki/Interval_arithmetic)

~~~
lisper
No.

------
simias
>The inventor patented a process that addresses floating point errors by
computing “two limits (or bounds) that contain the represented real number.
These bounds are carried through successive calculations. When the calculated
result is no longer sufficiently accurate the result is so marked, as are all
further calculations made using that value.”

That does seem useful but it's a bit akin to saying that you've solved the
division-by-zero problem by inventing NaN. Suppose you're writing some
critical piece of software and a floating point operation raises the
"inaccurate" flag, how do you deal with that? Do you at least have access to
the bounds computed by the hardware, so that you may decide to pick a more
conservative value if that makes sense?

Besides, the link to the "1991 Patriot missile failure" kind of contradicts
the claim that this would solve the issue, since Wikipedia says:

>However, the timestamps of the two radar pulses being compared were converted
to floating point differently: one correctly, the other introducing an error
proportionate to the operation time so far (100 hours) caused by the
truncation in a 24-bit fixed-point register.

If the problem comes from truncation in a fixed-point register, I'm not sure
how this invention would've helped.

~~~
wyldfire
> a floating point operation raises the "inaccurate" flag, how do you deal
> with that?

You can trap. ...but then again, existing arithmetic traps are not uniformly
enabled by default.

------
speps
And immediately patents it... so no one else can use it.

EDIT: and for some other methods:
[https://en.wikipedia.org/wiki/Unum_%28number_format%29](https://en.wikipedia.org/wiki/Unum_%28number_format%29),
particularly the latest one being the Posit method:
[http://superfri.org/superfri/article/download/137/232](http://superfri.org/superfri/article/download/137/232)

EDIT2: of course other people can license it, but the other way to bring a new
floating point to the scene would be through the same process that happened
with IEEE 754. There are plenty of people who wouldn't touch anything patented
at all, sometimes even with a patent clause.

~~~
turc1656
He's an inventor. Inventors usually work towards patents. Also, it's not so
"no one else can use it"; it's so that he can license his work to a company
like Intel. The patent is to protect him from a company like Intel going
"sweet, thanks for the fix" and then profiting off his work. Or do you expect
this guy to work for free?

~~~
vec
I don't expect him to work for free, but I do want Intel (and AMD, ARM,
NVIDIA, TI, and anyone else who makes a floating point module) to go "sweet,
thanks for the fix" as quickly and, almost more importantly, as collectively
as possible.

I want this guy to be compensated, but I'd prefer this guy be compensated in a
manner that doesn't prevent third parties from fixing their hardware. In
general, I think bounties are a good solution to this. Failing that, there are
plenty of trade groups and nonprofits and regulatory bodies that could be
tasked (and funded) with acquiring and freely redistributing this class of
innovation if we wanted to.

~~~
sirclueless
You're living in a fantasy world. You're looking at the status quo: some guy
has invented a better floating point circuit, and you think there are two
options. 1) The guy releases it to the public for general use, or 2) the guy
patents it and holds a monopoly over its use.

Obviously 1) is a greater public good than 2), but in reality these are not
the only options.

Here are some other realistic scenarios: 3) With no incentive to make things
public, this guy either stops working on this much earlier, doesn't tell
anyone, or throws it in the garbage. 4) This guy goes and talks to Intel about
his design. They quietly pay him some money or hire him and implement it in
secret. Two years from now they launch a processor with this feature and for
the indefinite future, until their competitors spend costly time reverse
engineering the secret hardware, only Intel processors have this circuit. 5)
Same as 4) except Intel says, "Haha, thanks for being a sucker" and doesn't
pay anyone.

This is the patent system at its best: incentivizing some guy to work on this
invention, then publish his work and describe it in detail. For the next 20
years, he can license it to anyone he wants and profit from his work. After
that point everyone can implement it as a public good.

~~~
ad_hominem
The fantasy world is thinking that plucky little inventors creating something
is the status quo. You think big companies like Intel don't have hundreds of
people working on research full-time? That they freely lease all fruits of
their research out to their competition rather than keeping a 20-year
monopoly?

Patents, like any monopoly-granting device, benefit market incumbents far
more than they encourage new entrants.

~~~
sharemywin
maybe large companies shouldn't be able to own patents.

~~~
oh_sigh
Okay, so now we will have a bunch of 1 person in-name-only companies which
hold patents, which offer exclusive licenses to big companies for $1 per 1000
years.

------
gibrown
It doesn't actually sound like he "solved" it. More like he put error bounds
around it and can detect when the error is more than X.

> When the calculated result is no longer sufficiently accurate the result is
> so marked, as are all further calculations made using that value.

Solving it would be a pretty big deal. This doesn't feel like it is, though I
admit I haven't worked on a similar problem in a long time. Kinda feels like
patent trolling as I imagine that lots of companies have put bounds on
detecting floating point errors when they need it. There are certainly lots of
papers on it:
[https://www.google.com/search?q=floating+point+error+bounds](https://www.google.com/search?q=floating+point+error+bounds)

~~~
cavanasm
IANAL, but if other companies have already done it and it's that easy to find,
then it wouldn't be a good patent troll, because there's obvious and easily
discoverable prior art (which would invalidate the patent anyway).

------
danbruc
Without reading the patent it sounds a lot like interval arithmetic [1] which
sounds like a really good idea at first but is not without its own problems.
For example the reciprocal 1/x for an interval x like [-1,+1] containing 0
consists of two intervals (-∞,-1] and [+1,+∞).

[1]
[https://en.wikipedia.org/wiki/Interval_arithmetic](https://en.wikipedia.org/wiki/Interval_arithmetic)
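
A minimal toy sketch of that failure mode (hand-rolled, not from any
library): the reciprocal of an interval straddling zero cannot be returned as
a single [lo, hi] pair.

```python
import math

def reciprocal(lo, hi):
    """Reciprocal of the interval [lo, hi] as a list of intervals."""
    inf = math.inf
    if lo > 0 or hi < 0:  # zero not contained: a single interval
        return [(min(1 / lo, 1 / hi), max(1 / lo, 1 / hi))]
    if lo == 0:           # zero touches one endpoint
        return [(1 / hi, inf)]
    if hi == 0:
        return [(-inf, 1 / lo)]
    # zero strictly inside: the result splits into two disjoint pieces
    return [(-inf, 1 / lo), (1 / hi, inf)]

print(reciprocal(2.0, 4.0))   # [(0.25, 0.5)]
print(reciprocal(-1.0, 1.0))  # [(-inf, -1.0), (1.0, inf)]
```

A single-interval representation has to collapse that union to (-∞,+∞), which
is exactly how bounds blow up in practice.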

~~~
relate
In your example, this would correspond to having the number x = 0 +-1 and then
wanting to compute 1/x. If your number can potentially be zero, why would you
want to use it as a divisor?

~~~
danbruc
The problem remains if you wrap the division in a non-zero check. Or maybe the
interval [-1,+1] is already kind of a lie, i.e. x is known to be in the
interval but you additionally know that x is non-zero when you are about to
perform the division. The example is just meant to illustrate the problem that
using a single interval is not good enough to track error bounds in the
general case.

------
ktpsns
Even worse, a patent for a "processor design, which allows representation of
real numbers accurate to the last digit" is obviously nonsense. Pi (=3.141...)
is a real number where there is no "last digit".

~~~
leetcrew
I assume it means accurate to the last digit of the representation, not the
number being represented. obviously the latter would be absurd to suggest.

------
ronnybrendel2
Is this
[https://en.wikipedia.org/wiki/Interval_arithmetic](https://en.wikipedia.org/wiki/Interval_arithmetic)
? I.e. you carry the lower and upper bound all the way?

~~~
jmull
In the patent he contrasts his "apparatus" with interval arithmetic. He says
IA greatly increases computation (while his method doesn't) and requires
twice as much storage (again, while his method doesn't).

To me, it looks like a specific mechanism for encoding the bounds and scale of
error into a floating point representation, along with a pipeline for
processing operations on operands of this form (presumably efficiently). So to
me it looks like a specific variant of IA.

It looks like the purpose is to be implemented as an alternative to
conventional floating point libraries and CPU modules. E.g., Intel might
license this and add a floating point module based on this + instructions to
access it to a future CPU. (Well, even if it's great and all is as advertised,
and proves to be generally useful, I'm not sure it would jump right into the
CPU. It would probably have to grow more organically first, but that's another
discussion.)

I mean, I have no idea if this does all of what it says or if it does, whether
that would prove to be generally useful enough to make it out of niche cases.

But it's interesting.

------
whyever
> “In the current art, static error analysis requires significant mathematical
> analysis and cannot determine actual error in real time,” reads a section of
> the patent. “This work must be done by highly skilled mathematician
> programmers. Therefore, error analysis is only used for critical projects
> because of the greatly increased cost and time required. In contrast, the
> present invention provides error computation in real time with, at most, a
> small increase in computation time and a small increase in the maximum
> number of bits available for the significand.”

I'm not sure how much it increases computation time, but software for exactly
this is freely available; see for instance Arb:
[https://github.com/fredrik-johansson/arb](https://github.com/fredrik-johansson/arb)

~~~
bringtheaction
Does anyone know a similar library for Rust?

~~~
whyever
Arb depends on lots of other numerical libraries (namely FLINT, MPFR and GMP
or MPIR). If you want pure-Rust alternatives, the ecosystem is just not there
yet.

------
sundarurfriend
> “Apparatus for Calculating and Retaining a Bound on Error During Floating
> Point Operations and Methods Thereof”

It seems to be a system where the hardware design itself keeps track of the
accuracy losses in floating point calculations, and provides them as part of
the value itself.

The title is (predictably) exaggerated, but it's an interesting idea, and
could potentially be a significant improvement in particular use cases.

------
cwmma
Patent in case anyone is curious

[https://encrypted.google.com/patents/US9817662](https://encrypted.google.com/patents/US9817662)

~~~
pacaro
The usual advice w.r.t. patents is to _not_ read them.

This may seem odd, but it can be the difference between knowing and unknowing
infringement. Knowing infringement results in triple damages.

IANAL — just repeating consistent advice I have received

~~~
amdavidson
Unless you are planning to infringe (which is knowing in itself), this is very
bizarre advice.

Reading a patent is more likely to make you _not_ infringe upon it than to
make you knowingly infringe upon it.

~~~
justrobert
Reading a patent will make you not infringe in the short term, but will you
remember you got the idea from that patent in ten years?

For people who are writing novel software it can be better to always avoid
reading patents, that way they can honestly state they haven't read a specific
patent.

------
ben11kehoe
Mathematica has the cool ability to do symbolic tracking of numerical
precision, so it can tell you when, for example, your differential equation
solver is giving you meaningless results.

~~~
exabrial
Is there a reason why we can't have that as a compiler warning?

~~~
dragontamer
Because it depends on the algorithms you put a float through.

Addition has a maximum error of 1 LSB. Makes sense: the last bit can get
"rounded off", e.g. in double precision 1e16 + 1.0 == 1e16 (the added 1 falls
below the last bit of the significand).

Subtraction of nearly equal values has unbounded relative error (!!!). Well,
I guess a double-precision significand has 53 bits, so cancellation can
theoretically wipe out all 53 bits.

In practice, you need to keep track of the error bounds during the runtime of
the program. It's not something that can be computed at compile time. After
all, adding a positive and a negative number IS subtraction. (So some
subtractions are really additions, with accuracy of 1 LSB, while some
additions are really subtractions, with unbounded relative error.)
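
Both effects are easy to show in ordinary double precision (toy example,
nothing library-specific):

```python
# Addition: an addend below the last bit of the larger operand is
# simply absorbed (at most 1 ulp of error per addition).
big = 1e16
print(big + 1.0 == big)  # True -- the +1 falls below the last bit

# Subtraction: catastrophic cancellation of nearly equal values.
a = 1.0 + 1e-15
b = 1.0
print(a - b)  # 1.1102230246251565e-15, vs the true 1e-15 (~11% off)
```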

------
algorithmsRcool
At a glance this reads similar to Interval Arithmetic in that it places bounds
on how much error a value carries.

Is there something more novel to his approach?

[https://en.wikipedia.org/wiki/Interval_arithmetic](https://en.wikipedia.org/wiki/Interval_arithmetic)

~~~
mark-r
I think the novel part is in the encoding of the error in the bits of the
value. It's hard to see how much value this patent really holds.

------
payne92
Here’s the issued patent:
[https://www.google.com/patents/US9817662](https://www.google.com/patents/US9817662)

Note that it’s a claim on the processing unit implementation (e.g. the FPU),
not the method.

Nonetheless, I’d be very surprised if this stands the test of interval
arithmetic prior art.

~~~
everybodyknows
The beauty of the patent is that it will never be tested, being of no
practical value, for numerous reasons already mentioned in other comments,
plus a few more not worth the bother of going into.

So the inventor gets a patent number for his LinkedIn profile, and USPTO get
their fee, and that's the end of it. A win-win for all involved.

Earlier HN discussion of the phenomenon:
[https://news.ycombinator.com/item?id=16015371](https://news.ycombinator.com/item?id=16015371)

~~~
payne92
"Win-win"...not at all.

Patents like this have "threat value", which is often happily exploited by "IP
monetization" companies, contingent law firms, etc.

This is the kind of stuff that turns into 100x $50k settlement demands.

------
hedora
Unless the article is missing some important nuances, this is just "range
arithmetic" or "interval arithmetic" from the 1950s. Here's a Wikipedia page
explaining how it works:

[https://en.wikipedia.org/wiki/Interval_arithmetic](https://en.wikipedia.org/wiki/Interval_arithmetic)

------
Dangeranger
Wouldn't something like what Douglas Crockford built with DEC64 be more useful
and practical?[0]

[0] [http://dec64.com/](http://dec64.com/)

------
chmike
This looks so obvious. How could this be patented? The real question is why
no one has already implemented it. It wouldn't surprise me if it already
exists.

~~~
DonaldFisk
If no one's thought of it before, how can it be obvious? If they have thought
of it, the prior art can be brought to the attention of the patent office and
the patent invalidated.

------
tomxor
Terrible title with a terrible description of the invention.

What he is doing appears to be interval arithmetic:
[https://en.wikipedia.org/wiki/Interval_arithmetic](https://en.wikipedia.org/wiki/Interval_arithmetic)

Because we don't have infinite computer memory or processing power, numbers
have to be finite, so no one will ever "solve the floating point error
problem". However, being able to quantify the error is both extremely useful
and extremely complex, because you have to determine how the error propagates
through all of the operations applied to the original input values.

In science this is also done based on the precision of the raw data, roughly
by selecting a sensible number of significant figures in the final
calculation. In other words, they omit all of the digits they deem to be
potentially outside of the precision provided by the raw data. E.g. your
inputs are a: 123.456 and b: 789.012, but your result from some multistep
calculation is 12.714625243422799; obviously the extra precision is
artificial and should be reduced to something slightly less than the input
precision (because it will have been rounded).

For floating point math this is about going a step further by calculating the
propagation of error from the end of the maximum length significand provided
by IEEE 754 (where anything longer causes rounding and thus error), and trying
to quantify how that window opens wider and wider as those rounding errors
propagate towards more significant digits as more operations are performed.
With interval arithmetic this is done by keeping track of the upper and lower
bounds of that window (the real number existing somewhere within that window).

This doesn't solve any of the many issues that floating point math has, but it
allows whatever is consuming it to potentially assign significance to the
output of a calculation more precisely. i.e so that you can say
1369.462628234m is actually 1.4e3m (implying ± 100m) perhaps translating into
understanding that your trajectory calculation isn't actually as accurate
accurate as the output looks, but instead the target has a variance of up to
100x100 meters.

I expect the patent details a hardware implementation to make this practical
at the instruction level rather than a likely very slow software
implementation.
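
The widening-window idea can be sketched in a few lines of (slow) software.
This toy version just rounds the bounds outward by one ulp per operation,
where real interval libraries would use directed rounding modes instead:

```python
import math

def widen(lo, hi):
    # conservatively round both bounds outward by one ulp
    return lo - math.ulp(lo), hi + math.ulp(hi)

lo = hi = 0.0
for _ in range(1000):
    lo, hi = widen(lo + 0.1, hi + 0.1)

print(lo, hi, hi - lo)  # bounds near 100, but the window is no longer zero
```

The bounds only ever move apart, which is also why long computations tend to
end with uselessly wide intervals unless the algorithm is restructured.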

------
umanwizard
Obvious crank.

------
tlb
I wrote an interval arithmetic package once too. It was slow, because it had
to change the FP rounding flags multiple times for some operations.

In the end, it seemed like any substantial computation ended up having
extremely wide bounds, much wider than they deserved. Trying to invert a
matrix often resulted in [-Inf .. +Inf] bounds.

------
pizza
Here's a link to the patent
[https://patents.google.com/patent/US9817662B2/en?oq=No.+9%2c...](https://patents.google.com/patent/US9817662B2/en?oq=No.+9%2c817%2c662)

------
titzer
He reinvented interval arithmetic, it sounds like.

Funny. There was a project at Sun Labs in the early 2000s that went far down
this road. Without looking at its specifics, I am still surprised that the
patent was accepted.

------
beyondCritics
This appears to be complete nonsense.

------
ggggtez
You can't patent math.

------
known
Has he solved
[http://0.30000000000000004.com/](http://0.30000000000000004.com/)

~~~
shmolyneaux
Not really. The idea is to store the amount of error in the binary
representation of the number. When converting from decimal "0.3" to this
floating point representation, it's more like 0.2999999999999999889 ±
0.0000000000000000111 (the nearest representable double plus a recorded
error bound).
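
The page's example is reproducible in any language with IEEE 754 doubles; the
sketch below contrasts it with exact decimal arithmetic from Python's stdlib
(which sidesteps the error rather than tracking it, unlike the patent's
scheme):

```python
# Binary floating point cannot represent 0.1, 0.2, or 0.3 exactly,
# so the rounding error surfaces on output.
print(0.1 + 0.2)         # 0.30000000000000004
print(0.1 + 0.2 == 0.3)  # False

# Exact decimal arithmetic avoids this particular error entirely.
from decimal import Decimal
print(Decimal("0.1") + Decimal("0.2"))  # 0.3
```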

