
>We can not obtain them to feed them as input, our real-world machines can not process them if we could...

I'm not sure about this. We often make calculations with computer representations of real numbers. We can calculate quantities based on algebraic or transcendental numbers with arbitrary precision (optimizations, navigation), or we can use them as a basis (image processing) and represent rational numbers in their terms. Real numbers are all over computer applications.






Arbitrary precision is irrelevant. By it we mean arbitrary finite precision, which leaves us not only with computable numbers, but rationals. Representing rationals isn't affected by the paper's findings, because they have the same cardinality as the naturals. Arbitrary precision models reals, it doesn't actually use them.

>We can not obtain them to feed them as input, our real-world machines can not process them if we could

Suppose we are only working with rational multiples of Pi. Then we can ONLY have finite calculations of irrational numbers and it's impossible to have a finite calculation of a rational number (except for multiplication by 0).

>Representing rationals isn't affected by the paper's findings, because they have the same cardinality as the naturals.

The Natural numbers are still infinite which presents its own inherent difficulty. It's not as though countability lends itself better to programming.


First and foremost:

> The Natural numbers are still infinite which presents its own inherent difficulty. It's not as though countability lends itself better to programming

Actually countability does lend itself to programming, essentially by definition. You cannot write a function to enumerate all the real numbers precisely because they're uncountable, even if you had an infinite amount of time. You can trivially do this with a "merely" countably infinite set, like the naturals or rationals.

> Suppose we are only working with rational multiples of Pi. Then we can ONLY have finite calculations of irrational numbers and it's impossible to have a finite calculation of a rational number (except for multiplication by 0).

It sounds like you're suggesting we can store an irrational number in a finite representation by generating it with a finite function (e.g. multiply a rational number of times). But that doesn't work. You can't represent a number N in a finite number of steps if you can't represent all factors of N in a finite number of steps.

This fact follows from the field axioms for the same reason that all products of irrationals and nonzero rationals are themselves irrational. Simply put, if you can't do it for Pi, you can't do it for a rational multiple of Pi.


>You cannot write a function to enumerate all the real numbers precisely because they're uncountable, even if you had an infinite amount of time

Even with an infinite amount of time, your enumeration function will not generate all of the natural numbers, even though the natural numbers are countable.

You're right that we can't write a function to determine an immediate successor but we can write successor functions.

>It sounds like you're suggesting we can store an irrational number in a finite representation by generating it with a finite function (e.g. multiply a rational number of times). But that doesn't work. You can't represent a number N in a finite number of steps if you can't represent all factors of N in a finite number of steps.

We are getting a few concepts muddled. Storing an irrational number as a finite representation is trivially easy. For example: "pi". Alternatively, I could write an algorithm to generate the decimal expansion of the square root of 2. Where it gets difficult is when we talk about numbers that aren't describable in a finite number of terms, i.e., uncomputable numbers. There is a certain unknowable quality to these numbers, and I would suggest that this isn't inherent in the cardinality of the real numbers, and that we run into similar problems when trying to compute some natural numbers, like random numbers.


> Even with infinite amount of time your enumeration function will not generate all of the natural numbers even though the natural number are countable.

Huh? You can trivially write a function to generate all natural numbers:

    def naturals():
        n = 1
        while True:
            yield n  # every natural number appears after finitely many steps
            n += 1
How do you propose doing that for the reals?

> Where it's difficult is when we talk about numbers that aren't describable in a finite number of terms, i.e. Uncomputable.

This is not what uncomputable means. Uncomputable means we cannot calculate a number to any precision, not that the number has an infinite decimal expansion.

Pi is not uncomputable. You can’t store its entire decimal expansion in finite space but it is certainly computable - we compute it all the time. However, almost all real numbers are strictly uncomputable. Among many other proofs, this is evident from the simple fact that the set of all possible Turing machines is countable. Since the reals are uncountable, it follows that almost all reals must not be computable by any Turing machine.
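As a sketch of "we compute it all the time": pi can be approximated to any desired accuracy using only exact rational arithmetic, e.g. via Machin's formula (the helper names here are my own):

```python
from fractions import Fraction
import math

def atan_frac(x, terms):
    # truncated arctan power series, evaluated in exact rational arithmetic
    return sum(Fraction((-1) ** k) * x ** (2 * k + 1) / (2 * k + 1)
               for k in range(terms))

def pi_approx(terms):
    # Machin's formula: pi = 16*arctan(1/5) - 4*arctan(1/239)
    return 16 * atan_frac(Fraction(1, 5), terms) - 4 * atan_frac(Fraction(1, 239), terms)

# more terms buys more correct digits; 15 terms already agrees
# with a double-precision float to roughly the last bit
print(abs(float(pi_approx(15)) - math.pi) < 1e-12)  # True
```

The same recipe works for any computable real: a single finite program yields arbitrarily good approximations. For almost all reals, no such program exists.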

Likewise, there are many potential calculations involving real numbers for which a computer will output a strictly incorrect - not just imprecise - result because its real solution is uncomputable. This does not happen in the naturals or rationals.


> Uncomputable means we cannot calculate a number to any precision

Did you mean “for any precision, we cannot compute to that precision” or “we cannot compute to every precision”?


> Where it's difficult is when we talk about numbers that aren't describable in a finite number of terms, i.e. Uncomputable.

I might be nitpicking here, but uncomputable != indescribable. For example, Chaitin’s constant [1] is uncomputable but perfectly describable (and in fact limit-computable [2]). The set of arithmetically-definable [3] numbers is even larger than that of limit-computable numbers. Computable and limit-computable numbers lie at levels Δ^0_1 and Δ^0_2 of this hierarchy, respectively.

[1] https://en.wikipedia.org/wiki/Chaitin%27s_constant

[2] https://en.wikipedia.org/wiki/Computation_in_the_limit

[3] https://en.wikipedia.org/wiki/Arithmetical_set


All of the rationals are computable. Almost all of the irrationals aren't computable at all... so there's a big difference there.

Yeah, and I suspect that has a lot to do with the underlying mathematical principles too. But the paper instead reduces the issue to the Continuum Hypothesis. The problem might not even have anything to do with countability but instead be fundamentally about the differences between infinite cardinals.

> Suppose we are only working with rational multiples of Pi.

How would you start? You can't put a rational multiple of pi exactly into a computer. You can't get a rational multiple of pi exactly out of a computer. You can calculate rational numbers on the computer and scale them by a factor of pi outside the computer, if you really want, but that doesn't mean the computer is working with irrationals.

> It's not as though countability lends itself better to programming.

Huh? Yes it does; countable sets are still infinite but they're better-behaved than bigger sets.


>You can't put a rational multiple of pi exactly into a computer.

Of course we can: "1/2*pi"

>You can't get a rational multiple of pi exactly out of a computer.

Using the same limitations you seem to have in mind, we can't get one third exactly either.

>...countable sets are still infinite but they're better-behaved than bigger sets.

Yes, and that is surprising because we do have numerous clever applications that skirt around most of the intractability of exotic real numbers. That is why I think this paper is a big deal, especially because it points to the cardinality of the real numbers instead of, say, their inclusion of the uncomputable numbers.


> Using the same limitations you seem to have in mind, we can't get one third exactly either.

We can work with exact rationals; we can do arithmetic on them and normalize them. In particular, we can compute whether two rationals are equal (in finite time), which we can't do with reals.
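For instance, with Python's fractions module (one way to get exact rationals), arithmetic normalizes automatically and equality is a finite computation:

```python
from fractions import Fraction

a = Fraction(1, 3) + Fraction(1, 6)  # exact arithmetic; result normalizes to 1/2
b = Fraction(2, 4)                   # the constructor also normalizes to 1/2
print(a == b)  # True: exact equality, decided in finite time, no epsilon
```

With reals there is no analogous finite equality test: two numbers can agree on every digit you have examined so far and still differ later.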


No, we often make calculations with a finite, countable subset of real numbers. We can only approximate the real numbers with finite precision.

>we often make calculations with a finite, countable subset of real numbers.

We also often make calculations with finite subsets of rational numbers. We don't work with all of the rational numbers.

> We can only approximate the real numbers with finite precision.

The same can be said for rational numbers. It really matters that we can be arbitrarily precise when we talk about .333333... (one third), 1.41421... (square root of 2) or 3.14159... (pi) because we can be as exact as our calculations demand.
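As a sketch of "as exact as our calculations demand" (using Python's decimal module; the function name is mine):

```python
from decimal import Decimal, getcontext

def sqrt2(digits):
    # square root of 2, correct to `digits` significant digits
    getcontext().prec = digits
    return Decimal(2).sqrt()

print(sqrt2(6))   # 1.41421
print(sqrt2(12))  # 1.41421356237
```

Any finite precision is reachable on request; the point under discussion is that this works for every rational and for computable irrationals like sqrt(2), but not for almost all reals.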


Two points:

First, yes the rationals are already countable. But the reals are uncountable. You cannot write a function to generate all real numbers - not even if you allowed the function to run for an infinite amount of time.

Second, most of the reals are individually uncomputable, which literally means no, we cannot calculate them to arbitrary precision (or any exact precision).

Likewise we cannot just redefine the set of real numbers to be restricted to the set of computable reals. Analysis rapidly breaks down when you do so and it follows that the real field is no longer closed (by definition, and therefore no longer subject to the field axioms).


>Likewise we cannot just redefine the set of real numbers to be restricted to the set of computable reals. Analysis rapidly breaks down when you do so and it follows that the real field is no longer closed (by definition, and therefore no longer subject to the field axioms).

I was with you until you wrote that Analysis would break down. We can easily do analysis with dense subsets of the reals and also extensions of the reals, like in Non-Standard Analysis.


> Likewise we cannot just redefine the set of real numbers to be restricted to the set of computable reals. Analysis rapidly breaks down when you do so and it follows that the real field is no longer closed (by definition, and therefore no longer subject to the field axioms).

Whilst this is true, I feel it should be noted that there exist constructive and computable versions of analysis, built to work under assumptions like "all the reals are computable".

Of course, the big difference is that we've mostly agreed on a single canonical version of the classical reals and of classical analysis, whilst there are numerous versions of different constructive analyses depending on what exactly you mean by "constructive".

Additionally many theorems of classical analysis "split up" into multiple independent versions which would be equivalent classically, but only some of which are true constructively.

Worse than that, often there is not a single definition of the "reals" any more. For example, two definitions of the reals (Dedekind Cuts and Cauchy Sequences) may result in entirely different objects constructively speaking. And, in fact, it may be that neither one is actually a good way of capturing the concept of a "real" "number" in that particular universe, so you need yet another definition.

As terrible as this might make it sound, it is worth the effort, in my opinion. In trying to model the real world, classical mathematics throws away just a little bit too much information. This messiness is not just the result of wanting to work under some weird axioms, but an actual statement about the messiness of reality. Trying to sweep these issues under the rug is not a sustainable strategy if you eventually want to solve real-world problems.

Using classical mathematics is fine as a first-pass. It's a good way of investigating the problem under some simplifying assumptions and getting some good and useful results. But after that is done, it's not always enough to just declare the problem "solved" (or "unsolvable"). You need to consider what the practical version of the results will be, especially if you want to write a program to use them.

I am glad that the world is moving towards more computable and constructive versions of mathematics. It is still early days, and there are still way too many different takes on trying to solve the same mathematical issues, but soon enough we'll converge on a select few "generally acceptable" versions of constructive mathematics (in particular, something based on some form of constructive Type Theory) and we'll be able to reform our mathematical and scientific knowledge on a more stable base.


Dedekind cuts and Cauchy sequences don't result in different reals. They're different methods of proof, but the resulting sets are isomorphic to one another, so they're structurally the same even if a real from one looks a bit different from a real in the other.

All the Dedekind cuts and Cauchy sequences do is tell us, in essence: "yes, the axiomatized reals you're using are correct and fine because there are constructive proofs of their existence." Moreover, the point of those proofs (and analysis more generally) is to demonstrate mathematical continuity and uncountability. No matter how you slice this problem, if you construct analysis you will lose computability along the way.


I think there's perhaps a misunderstanding.

You are correct that under classical assumptions of logic ("normal" mathematics), the Dedekind Cuts and Cauchy Sequences indeed result in the construction of isomorphic objects.

However, under constructive assumptions about the logical axioms of mathematics, such as intuitionism (not accepting the law of excluded middle, i.e., not assuming that the statement "P or NOT P" holds for all P), the constructions via Dedekind Cuts and Cauchy Sequences may result in different objects, i.e., it may no longer be provable that they are equivalent.


We can effectively calculate all of the rationals to arbitrary precision, but we can't do that with all of the reals. Most of the reals have no effective procedure to calculate their decimal expansion.

Right, but we're getting away from illustrating the point of the paper, which isn't about number computability (interestingly) but about cardinality of sets. The intractability of this machine learning problem could have more to do with an absence of a successor function in the Real numbers than about the cardinality of uncomputable numbers. Or it could be a consequence of properties of differing infinite sets in general, like the difference between the Reals and the power set of the Reals. What I'm trying to get at is that we're not just talking about infinite decimal expansions.

It's extremely counter intuitive what would happen if a computer system (a Turing machine) was operating symbolically on real numbers, not floating point numbers which have vastly different properties (for example, addition is not associative there although it is commutative). We can refer to constructive logic proofs in the real number theory. There's a ton of surprising facts there, to pick two:

  1. For x and y you cannot prove that x <= y or x >= y
  2. Every function is continuous
Some hand-waving to see why 1. is a problem: given two numbers, x and y, as a lazily-computed infinite decimal expansion, the program that loops in parallel over the expansions and seeks the first difference (to determine which number is greater) will never terminate if x and y have identical expansions.
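That loop can be sketched as follows (assuming, for simplicity, numbers in [0, 1) given as lazy decimal-digit streams; all names here are my own):

```python
from itertools import count

def digit_stream(num, den):
    # lazily yield the decimal digits of num/den, for 0 <= num < den
    while True:
        num *= 10
        yield num // den
        num %= den

def first_difference(xs, ys, give_up_after=None):
    # scan two digit streams in parallel for the first position where
    # they differ. If the streams are identical this loops forever --
    # exactly the non-termination problem described above; the
    # give_up_after cutoff only papers over it.
    for i in count():
        if give_up_after is not None and i >= give_up_after:
            return None
        a, b = next(xs), next(ys)
        if a != b:
            return i

print(first_difference(digit_stream(1, 3), digit_stream(333, 1000)))  # 3
print(first_difference(digit_stream(1, 3), digit_stream(1, 3), give_up_after=100))  # None
```

When the inputs genuinely differ, the scan terminates with the answer; when they are equal, no finite prefix of the expansions can ever tell you so.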

See this submission from two months ago: https://news.ycombinator.com/item?id=18411935


Maybe I'm just arguing semantics here but I would say that while calculations are carried out using floating points the numbers themselves are represented symbolically. For example, if you came up with an algorithm to calculate euclidean distance then you would eventually employ a square root algorithm instead of an approximation of a square root stored as a floating point number. The algorithm "sqrt(2)" is a symbolic representation of the square root of 2.

Yeah, I meant that there's another layer to the symbolic representation of "sqrt(2)" than just the tree of operations, so that you can ask sensible questions about the value of the number. For instance, a generator of the decimal representation, or a function like "given N, return a fraction q/p that's within 1/2^N of the value".

And floats are really a whole other beast than fractions or naturals or real numbers. As I wrote earlier - the "+" operator isn't associative. Python snippet:

    def before():
        x = 0.0
        x += 1.0
        for i in range(1000):
            x += 1e-17
        return x

    def after():
        x = 0.0
        for i in range(1000):
            x += 1e-17
        x += 1.0
        return x


    print(before() == 1.0)
    print(after() == 1.0)

Will print:

   True
   False

Rewrite your functions to be generator functions. They don't even need to be based on the desired level of precision to always be decidable.

Not sure what your point is. I can just suggest to see the PDF: https://news.ycombinator.com/item?id=18411935

Isn't it also possible for two numbers to have unequal expansions but still be equal? Take 0.333... * 3 vs 1. The first expands to 0.999... while the second expands to 1.000..., but they are also equal.

Right, but note that we can use numbers from countable sets and find different symbolic representations for them (3.0 vs 3). That problem is not inherent from the uncountability of the real numbers.

Yup, it is possible. That was just a hand wavy example to show where the issue is. Symbolic real numbers need not be represented as small programs/objects generating a sequence of digits. But all of those implementations/representations will fail to compare some numbers.

Edit: for instance, you could imagine a representation (think: an object of a class with virtual methods in C++) that gives you a fraction and a 1/2^n bound on the error, where you can query the number for whatever n you see fit. Still, in some cases it's undecidable to determine which number is greater.
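Such a representation might look like the following sketch (Python rather than C++; the names are mine): a real is just a query function that, given n, returns a fraction within 1/2^n of the true value.

```python
from fractions import Fraction

def sqrt2_within(n):
    # bisect [1, 2]; after n steps the interval has width 1/2**n,
    # so `lo` is within 1/2**n of the true square root of 2
    lo, hi = Fraction(1), Fraction(2)
    for _ in range(n):
        mid = (lo + hi) / 2
        if mid * mid <= 2:
            lo = mid
        else:
            hi = mid
    return lo

q = sqrt2_within(20)
print(abs(float(q) ** 2 - 2) < 1e-5)  # True
```

Comparing two numbers given this way still reduces to the same problem as comparing digit streams: if they happen to be equal, no finite number of queries can certify it.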



