Considering the over-the-top language ("a game changer for the computing industry") and questionable or imprecise claims like "[it] allows representation of real numbers accurate to the last digit" (um, who reads that without thinking of irrational numbers?), it sounds more like a sales pitch than like serious research.
I could be wrong, but based on the similarities to interval arithmetic everyone has already identified, I'm pretty skeptical. At best, this could be a patent on a more efficient way to build interval arithmetic into a CPU architecture rather than a completely new technique.
As my British friends would say though, I can't be arsed to actually read the patent.
There is inaccuracy, but the point is that it tracks how much inaccuracy there might be. I picture this as being similar to how computers can't trust time for all sorts of reasons, so Google's Spanner works with time ranges plus estimates of the potential inaccuracy. It will truncate, so you won't really have 1/3, but you'll know it's approximately 0.333, definitely more than 6/20 but definitely less than 7/20, for instance, and if you ever exceed certain bounds of certainty, the calculation and all resulting calculations are flagged as not being that trustworthy. That's my understanding from the article, anyway.
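As a rough sketch of the bookkeeping I'm imagining (plain interval arithmetic in Python, not whatever the patent actually specifies; the Interval class and its names are just mine for illustration):

    from fractions import Fraction

    class Interval:
        """A value known only to lie somewhere between lo and hi."""
        def __init__(self, lo, hi):
            self.lo, self.hi = Fraction(lo), Fraction(hi)

        def __add__(self, other):
            # Worst-case bounds of a sum: add the lows and add the highs.
            return Interval(self.lo + other.lo, self.hi + other.hi)

        def __mul__(self, other):
            # Worst-case bounds of a product: check all four corner combinations.
            corners = [self.lo * other.lo, self.lo * other.hi,
                       self.hi * other.lo, self.hi * other.hi]
            return Interval(min(corners), max(corners))

        def width(self):
            return self.hi - self.lo  # how much uncertainty has accumulated

    # "Approximately 0.333": truncated 1/3, definitely between 0.333 and 0.334.
    third = Interval(Fraction(333, 1000), Fraction(334, 1000))
    total = third + third + third
    print(total.lo, total.hi, total.width())  # 999/1000 501/500 3/1000

If width() ever exceeds some threshold, the result (and anything computed from it) would get flagged as untrustworthy, which matches the flagging described above.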
> but you'll know it's approximately 0.333, definitely more than 6/20 but definitely less than 7/20, for instance
What strikes me as odd is that my intro course on numerical methods taught how to calculate floating-point error bounds when introducing the concept of floating-point numbers, including how errors propagate with each flop.
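For reference, the textbook propagation rules fit in a couple of lines (a first-order sketch in Python, with function names of my own choosing; a real analysis also has to account for the rounding of each result):

    def add_err(x, ex, y, ey):
        """Sum of two approximate values: absolute error bounds simply add."""
        return x + y, ex + ey

    def mul_err(x, ex, y, ey):
        """Product of two approximate values: to first order the absolute
        error bound is |y|*ex + |x|*ey (ignoring the tiny ex*ey cross term
        and the rounding of the product itself)."""
        return x * y, abs(y) * ex + abs(x) * ey

    v, e = 0.333, 0.0005            # 1/3 rounded to three decimals
    v, e = mul_err(v, e, 3.0, 0.0)  # error bound triples to 0.0015
    print(v, e)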
Yeah I doubt this is anything novel in the purely mathematical realm. It sounds like what's patented is a practical design for doing this on a chip.
Not sure if you were taught a different method, but I envision this being similar to counting "significant digits" in scientific notation, and it sounds like that's very similar to the approach he took. I wonder if that explains their statements about tracking to the last "digit". They obviously can't do that for irrational numbers, so maybe they mean the last "significant" digit as far as the underlying floating-point implementation is concerned.
Actually it's possible to represent 1/3 perfectly accurately, but what about all the numbers that it's theoretically impossible to compute? (Almost all real numbers have this property)
So clearly we can count all the rational numbers, but how many irrational numbers are there? Are there (many) more irrational numbers than there are rational numbers?
You can count computable irrationals by numbering the algorithms that calculate (successive approximations to) them and lining them up in order. This is what Turing used his "Turing Machines" to do.
So all of the irrationals that we use or could ever use in calculations are countable. The uncountability of the irrationals comes entirely from the uncomputable ones, which we will probably never see.
The rational numbers are clearly countable. The irrational numbers are uncountable, which means that for every rational number there are infinitely many irrational numbers.
But for every integer there are also an infinite number of rationals. And since we can put the rationals into 1-1 correspondence with integers, that means that for every rational there are an infinite number of rationals.
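For anyone who hasn't seen that correspondence, it's easy to write down; a minimal sketch that walks the positive rationals diagonal by diagonal and pairs each one with an integer index:

    from fractions import Fraction
    from math import gcd

    def positive_rationals():
        """Enumerate every positive rational exactly once by walking the
        diagonals p + q = 2, 3, 4, ... of the (numerator, denominator) grid."""
        total = 2
        while True:
            for p in range(1, total):
                q = total - p
                if gcd(p, q) == 1:      # skip duplicates like 2/4
                    yield Fraction(p, q)
            total += 1

    # Pair the first few rationals with integer indices.
    gen = positive_rationals()
    for n in range(10):
        print(n, next(gen))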
Infinity is funny like that.
The conclusion that there are somehow more irrationals than rationals depends on subtle philosophical points that have no possible proof or disproof and usually get glossed over. Accepting that philosophy also leads to the conclusion that not only do numbers which can in no way ever be represented exist, but there are more of them than numbers which we can explicitly name. Now I ask you, in what sense do they REALLY exist?
> The conclusion that there are somehow more irrationals than rationals depends on subtle philosophical points that have no possible proof or disproof and usually get glossed over
Can you elaborate? The proof that there are more irrationals than rationals is very straightforward, from my perspective.
The easiest way to understand it is to look at the question from a philosophical framework where it makes no sense to claim that there are "more" irrationals than rationals. And then untangle why it came to a different answer.
In Constructivism, all statements have 3 possible values, not 2. They are true, false, and not proven. All possible objects must have a construction. So instead of talking about a vague "Cauchy sequence", we might instead have a computer program that given n will return a rational within 1/n of the answer.
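A minimal sketch of what such a construction looks like, for e (not anyone's official example, just an illustration; exact integer arithmetic throughout, so the 1/n bound is proved rather than estimated):

    from fractions import Fraction

    def e_approx(n):
        """Return a rational within 1/n of e by summing 1/k! until the
        remaining tail (provably < 2/(m+1)!) drops below 1/n."""
        total, term, m = Fraction(1), Fraction(1), 0
        while 2 * n >= (m + 1) * term.denominator:  # while 2/(m+1)! >= 1/n
            m += 1
            term /= m
            total += term
        return total

    print(e_approx(10), e_approx(10**6))  # 8/3, then a much finer rational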
The first thing to notice is that everything that could exist is contained within the countable set of all possible constructions. There can't be "more" irrationals than rationals.
But what about diagonalization? That proof still works. You still can't enumerate the reals. But why not? The answer is that determining whether a given program represents a real is a decision problem that cannot in general be solved by any algorithm. You are running into the same category of challenges that lie behind Gödel's theorem and the Halting Problem. It is not that there are "more" irrationals than rationals. It is that there are specific programs for which you can't decide whether they represent reals.
From a constructivist's eyes, the traditional proof that there are more irrationals falls apart because you're reasoning about unprovable statements about arbitrary sequences whose logic could have been constructed with the same sort of logic that you're applying to them. Is it any wonder that you wind up concluding the "existence" of things that clearly don't actually exist?
Now stepping back from BOTH philosophies, the differences between them lie in different attitudes about existence and truth: attitudes that underlie the axioms we use, and that cannot possibly be proven one way or another. (Gödel actually proved that. Any contradiction in Constructivism is immediately a contradiction in classical mathematics. And conversely, any proof in classical mathematics that results in a contradiction can be mechanically transformed into a constructive proof that also results in a contradiction.)
However, there is no logical argument that can disprove the constructivist view, in which there aren't "more" irrationals than rationals. And therefore the logical argument that there are must rest on some hidden implicit assumptions.
This thread was about "all the numbers that it's theoretically impossible to compute". If you think the difference between "irrational" and "non-rational" in this context is irrelevant, you have a weak grasp of number theory. Yes, all are non-computable, but in different ways.
Well, if I remember correctly, a product of an irrational and a rational is still irrational, right?
If so, here's a simple argument: If you have N rational numbers and I have K irrationals to start with, I can produce roughly N*K additional irrationals. Since we know at least two irrationals (pi and e) we can make at least twice as many irrationals as rationals.
Hmm, yes, it can be represented in ASCII. But you still have to approximate when storing it in a way that is actually useful for computation using a finite number of bits.
Many real programming languages support arbitrary precision decimal and/or rational numbers.
Sure, there are times when space or speed concerns favor inexact representation over correctness, but that's an optimization that ought to be properly evaluated; the fact that lots of languages are designed in a way which makes it the default everyone reaches for contributes to lots of errors.
Yes. Common Lisp is an example of a language that can represent rationals exactly and do arithmetic on them. You can avoid floating-point precision loss by using this method. But there are drawbacks:
1) The numerator and denominator will often turn into bignums as a calculation progresses, consuming ever-larger amounts of space and time, and
2) This won't help you with any calculation involving irrationals, except to prevent your initial imprecision from growing larger.
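Not Lisp, but Python's fractions.Fraction behaves the same way and makes drawback (1) easy to see; a quick illustration:

    from fractions import Fraction

    x = Fraction(1, 3)
    for p in (7, 11, 13, 17, 19):
        x += Fraction(1, p)   # every step is exact...
        print(x)              # ...but numerator and denominator keep growing

    print(float(x))  # precision is only lost here, converting out at the end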
I've definitely owned a Casio calculator that worked on fractions rather than decimals and could even work in terms of roots and certain irrational numbers like e and pi.
For example, sqrt(2) + sqrt(8) would print 3 sqrt(2) rather than 4.242640687119286~
I wish I knew how it worked; it's probably something simple like what's suggested in another comment.
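No idea what the calculator's firmware actually does, but computer algebra systems pull the same trick by keeping the radicals symbolic; for example with sympy (just to show the behaviour, not claiming this is the calculator's method):

    from sympy import sqrt

    expr = sqrt(2) + sqrt(8)  # sqrt(8) is rewritten as 2*sqrt(2), like terms combine
    print(expr)               # 3*sqrt(2)
    print(expr.evalf())       # 4.2426406871... only when a decimal is requested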
Briefly read, it looks like an idea and not an 'apparatus'. Yes, carrying error-bound information along with the number is interesting. How is that implemented in real time? How is accumulated error calculated (how do we know whether the floating-point values we are calculating with are perfectly precise, e.g. 3.0000... and not 3.000...1)?
> Briefly read, it looks like an idea and not an 'apparatus'.
It's patent law lingo. The patent covers both the idea (the "system") and subsequent implementations (the "apparatus") that are direct implementations of the idea.
That's easy: always round down, and also carry the maximum possible error (calculated by rounding up and subtracting the rounded-down value). For example, 1/3 can be represented as 0.333 with error 0.001. Multiply it by 3 and get 0.999 with error 0.003.
You won't need many bits for the error, and operations can be made reasonably fast since only the lowest several bits are affected.
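A quick Python sketch of that scheme, using three decimal digits of fixed point so it matches the 1/3 example (a real design would presumably work in binary with just a few error bits; the helper names here are mine):

    from fractions import Fraction
    from math import floor

    SCALE = 1000  # three decimal digits, to match the 0.333 example

    def encode(x):
        """Round down onto the grid and record the maximum possible error
        (one grid unit if truncation happened, zero if the value was exact)."""
        lo = floor(x * SCALE)
        err = 0 if Fraction(lo, SCALE) == x else 1
        return lo, err          # 1/3 -> (333, 1): 0.333 with error 0.001

    def mul_small_int(v, err, k):
        """Multiply a tracked value by an exact small integer k."""
        return v * k, err * k   # (333, 1) * 3 -> (999, 3): 0.999 with error 0.003

    v, e = encode(Fraction(1, 3))
    print(v, e)                     # 333 1
    print(mul_small_int(v, e, 3))   # (999, 3)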
> who reads that without thinking of irrational numbers?
Not directly related to the article, but many [1] irrational numbers (π, for example, or sqrt(2)) can be represented in a computer in their entirety, i.e. "accurate to the last digit." Not all digits are stored at once in RAM, of course, but you can obtain an arbitrary digit (given sufficient time). That's precisely how computable numbers are defined, first by Turing in his 1936 paper On Computable Numbers, the paper that first defined the notion of computation; the "numbers" in the title refer to real numbers, including irrational ones.
[1]: Relative to the irrational numbers that "we know", not to all uncountable ones, of course.
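To make that concrete for one such number: any decimal digit of sqrt(2) can be produced on demand with exact integer arithmetic, even though the digits are never all stored at once (a minimal sketch, not an efficient one):

    from math import isqrt

    def sqrt2_digit(k):
        """Return the k-th decimal digit of sqrt(2) after the point (k >= 1).

        isqrt(2 * 10**(2*k)) is the integer floor of sqrt(2) * 10**k,
        so its last decimal digit is the digit we want."""
        return isqrt(2 * 10 ** (2 * k)) % 10

    print([sqrt2_digit(k) for k in range(1, 11)])  # [4, 1, 4, 2, 1, 3, 5, 6, 2, 3]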
How can it be "clickbait" if it's in the article? What am I supposed to click on?
Moreover, these are clearly marked quotes from the press release and the patent. Maybe this technology doesn't merit an article. But if it does, quoting the inventor is exactly what one expects from coverage. Note that the article does invite scepticism, starting with "claims" in the headline.
This gratuitous hatred of journalism is seriously getting out of hand.
"clickbait" ~~ overboard sensationalism for the purposes of getting more attention than demure/traditional language would get. Even though it's in the article, this kind of language can be added by journalists to incentivize non-expert editors/publishers to publish their work versus others.
I agree with you. But there's more "clickbait" online than actual journalism. The saturation point has been reached for a lot of people and it's difficult for them to go back to respecting actual journalism or even distinguishing between the two.