Mathematicians Measure Infinities, Find They’re Equal (quantamagazine.org)
116 points by lisper 36 days ago | 156 comments



I am having trouble grasping the result.

As far as I understand it, any proof of any result has to be with respect to some axiom system. The continuum hypothesis result is that you cannot prove an infinity exists between the naturals and the real numbers under the usual axiom system A.

How that was proved, I don't know, and I also don't know what axiom system was used in that proof. If the axiom system was also A, then it seems to me that it follows we cannot have p<t, and hence p=t, because otherwise by staying in the same axiom system A you violate the unprovability of the continuum hypothesis in A. This supplies a proof in A that p=t. But this seems trivial.

So the axiom system used to prove that "the continuum hypothesis is unsolvable in A" must be a larger system A+.

I think in A+ we again have p=t. The question this paper answers is whether we can prove "you cannot prove p=t using A" or "you can prove p=t using A". The latter means there is a straight up proof.

Any people who really know this stuff can you comment if I got the gist right?


Same disclaimer as v64: my background is also mathematics, but not in this field.

As he says, what was proved so far is that aleph_0 < aleph_1 <= p <= t <= 2^(aleph_0). The first inequality is by definition (aleph_1 is "the smallest cardinal greater than aleph_0"), the second is a theorem of Hausdorff, the third is supposedly easy, and the last is trivial. Also, if aleph_1 = p, then aleph_1 = t as well.
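Written out in standard notation, that chain of inequalities is:

```latex
\aleph_0 < \aleph_1 \le \mathfrak{p} \le \mathfrak{t} \le 2^{\aleph_0}
```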

The continuum hypothesis claims that they are all equal; that's been proved by Cohen to be independent of what you call "A" (generally called ZFC), via "forcing", which is technical, not easy, and won Cohen a Fields Medal... And you're right in saying that what is proved in an axiom set is true in any larger axiom set.

However, the main point is that "we cannot have p<t, and hence p=t" is false; that is the whole point of undecidability. A statement can be "true", "false" or "not decidable". Many people apparently expected the p=t question to fall into this last group (with some enlarged axiom set "A+" permitting a proof of the strict inequality).

And no, the statement "The continuum hypothesis is undecidable in ZFC" is proved within ZFC, otherwise it wouldn't have been proved at all... To give a trivial counter-example, as a corollary of Cohen's proof, the axiom system "ZFC+CH" (ZFC + continuum hypothesis) is as valid an axiom set as ZFC is. Within it, you can certainly prove that "ZFC implies CH" since... CH is simply true.


Thanks.

I think the idea I had in my head was this: If p=t or p!=t is decidable in ZFC+AoC, then it must be that p=t because otherwise that contradicts that the continuum hypothesis is undecidable.

So a better way to sum up this result is: "The question of whether p=t or p!=t is decidable in ZFC+AoC. And oh yeah, they happen to be equal."

How does that sound?


I hate to be that guy but a friendly correction: ZF + AoC is what ZFC refers to.

I'm not sure I follow your line of thinking. If it's decidable, it could be either p=t or p!=t, no? I.e., if the continuum hypothesis is decidable in ZFC, it could go either way.

Afaik, Cohen proved, building on Gödel, that in fact neither p=t nor p!=t is provable in ZFC. I.e., the continuum hypothesis is not provable in ZFC.

As I understand it, Malliaris and Shelah have proved p=t, i.e., that the continuum hypothesis is true, using set and model theory. As I understand it, they are not relying solely on ZFC to prove this, since that would disprove Cohen, and my understanding is they do not.

Further, I understand this result to mean there can only be two infinities which can be said to be different: the cardinality of the countable rational numbers and the cardinality of the uncountable real numbers. But I don't really understand any of this shit, so it is likely I've got it wrong.


Disclaimer: I have a math background, but this isn't my field.

The continuum hypothesis was proved to be independent of ZFC. From glancing at the first few paragraphs of the actual paper [2], it's been established for decades that the following inequality holds:

aleph_1 <= p <= t,

and that aleph_1 = p can be shown to imply aleph_1 = t. Note that the inequalities aren't strict inequalities, and this proof is essentially saying that they were equivalent all along.

[1] https://en.wikipedia.org/wiki/Zermelo%E2%80%93Fraenkel_set_t...

[2] https://arxiv.org/pdf/1208.5424.pdf


The article is misleading on this point. The Continuum Hypothesis says that it is undecidable in the standard axioms whether there is a cardinality between countable and the continuum (i.e., the cardinality of the reals). This work shows that p and t are different, which also means that t is of greater cardinality than the reals.


Oh, thank you. So the article just threw in Continuum Hypothesis because different infinities 'n stuff.

But what is confusing to me is that the article suggested p and t are defined as collections of subsets of the natural numbers. The power set of the natural numbers has the same cardinality as the reals, right? Since p and t are both subsets of this power set, they must have cardinalities at most the cardinality of the reals.


Wait, I'm confused. When you say p and t are different, you mean they are cardinalities of different sets but still equal in "size", right? Because I've understood they've proved p to be of equal "size" to t, which I guess means they have a one-to-one relation? Fuck, this is confusing.


What is p and t? You say that this work shows that p and t are different. What two entities are proven to be equal then in this work?


p and t are implicitly defined cardinalities (i.e., the smallest cardinalities with given properties). See Definition 1.1 of the source article (https://arxiv.org/pdf/1208.5424.pdf)


Actual article: https://arxiv.org/pdf/1208.5424.pdf

Great results within a very narrow field, which quantamagazine leverages into a clickbaity title.


Why do you think this is clickbait? The headline accurately describes the content of the article, which as far as I can tell accurately describes one result of this paper. Clickbait doesn't mean any interesting headline, it means a misleading and purely attention-grabbing headline.


It's like if someone proved P=NP and the magazine title was: "Mathematicians Measure Difficulty Levels, Find They're Equal."


Clickbait refers to vague titles that also overhype the story. Your example, if anything, underhypes it.


That's not a good headline, but it's not clickbait. If P vs. NP were a fairly obscure problem with little immediate impact outside of academia, that would be a fine layperson headline.

What would you have titled this Quanta article?


Maybe I misread, but I feel like solving P=NP would be closer to solving the continuum hypothesis whereas the article describes solving a related problem which could have disproved the hypothesis should it have been found that the two sets were not equal.


I was very confused by this. The paper linked by OP (the one I also had in mind when I started the article) is from 2012, but the article is written as if the results are recent (specifically, it says 2016!). Admittedly, five years for such a big result in that field is probably still "recent" to experts, but probably not to "pop science" readers.

Furthermore, the lack of an actual citation was very disturbing, since if anybody did want to read the actual paper, they had to google for terms that would lead to basically incomprehensible search results.


The work was first posted to arxiv in 2012. It can take a while until a result is understood by enough experts to confirm its correctness. Imagine if journalists wrote an article claiming P vs. NP had been settled every time someone posted an attempt to arxiv.

Also, this work very recently won a prestigious award.

Also, result published ---> result accepted ---> journalist writes high-quality piece explaining result to lay people can take years.

> the lack of an actual citation was very disturbing

The first linked text in the article is a link to the arxiv paper.


In math, peer review usually takes a very long time, perhaps a year or a year and a half (unless you are unlucky and, when you contact the editor after a year, he realizes that one of the referees "crashed" and must be "reset").

In physics, people get mad if the referees and editor take more than two months.

Each branch of science has its own typical time.


The article links to their journal publication from last year: http://www.ams.org/journals/jams/2016-29-01/S0894-0347-2015-...


I only had one set theory course, so I can't vouch for the details, but this seemed like a well-done article on a subject that could easily have been eye-glazingly superficial or obscure.


Much of my research on number theory and primality relies on a deep understanding in axiomatic set theory. Based on other news sites I've read that attempt to cover naive set theory, I was pleasantly surprised at the article's fairly decent coverage on the topic.


> In a breakthrough that disproves decades of conventional wisdom, two mathematicians have shown that two different variants of infinity are actually the same size

I thought there are only two types of infinity and Cantor already proved that they are different.

* Uncountable infinity which is the cardinality of the set of real numbers

* Countable infinity which is the cardinality of the set of integers

Cantor has already proved that uncountable infinity is larger than countable infinity.

Is this article claiming that mathematicians have proved that these two infinities are equal now? Doesn't that contradict Cantor's proof? What's going on here? Is Cantor's proof flawed, or have we introduced a contradiction into mathematics?


If you take the power set of an infinite set, you /always/ get a cardinality bigger than the original set. So there are an infinite number of infinities.
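A finite-set analogue makes that 2^n growth concrete. This is only a sketch for finite sets, where you can enumerate everything; for infinite sets the "strictly bigger" claim is Cantor's theorem and needs a diagonal argument rather than counting.

```python
# Finite illustration: the power set of an n-element set has 2^n elements,
# which is always strictly more than n.
from itertools import chain, combinations

def powerset(s):
    s = list(s)
    # All subsets of s, grouped by size 0, 1, ..., len(s)
    return list(chain.from_iterable(combinations(s, r) for r in range(len(s) + 1)))

for n in range(8):
    assert len(powerset(range(n))) == 2 ** n > n
```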


This is a good point. But to be super pedantic, there isn't a size of infinity large enough to describe how many sizes of infinity there are.


Why not? The aleph numbers have a bijection with naturals.


See this answer on the mathematics stack exchange (it's really a book excerpt, but I don't know where else to find it.) https://math.stackexchange.com/a/5390/220797


This article is talking about the cardinality of two very specific sets. Before, many believed that t > p, but many believed that this was not provable in ZFC. Both sets contain only sets of integers, so their cardinality is bounded above by the cardinality of the real numbers (the continuum). Some people believed that this result was related to the Continuum hypothesis, and so was only provable one way or the other if you assume the CH or its negative.

As it turns out, both sets have the same cardinality (that of the real numbers) AND it is provable in ZFC.

The title is a little misleading; it's really saying that two infinite sets are the same size, but prior to this their cardinality was unknown, and it wasn't even known if the cardinality was provable in ZFC.


There are infinitely many types of infinity, not just two. The result is that two types of infinity that were long assumed to be different turned out to be the same.

This is more interesting than it sounds because these infinities are in between (but possibly equal to one of) the size of the natural numbers and the size of the real numbers. There are hard limits on what can be known about such infinities in the usual mathematical framework (ZFC): it's impossible to prove or disprove that an infinity strictly between the two exists. The hypothesis that no such infinity exists is called "the continuum hypothesis".

Writing this on mobile, but I hope I've made sense...


Not all uncountable infinities are the same cardinality.


So does this article prove that two uncountable infinities are equal? Can you tell us precisely what two entities have been proven equal in this article?

I understand aleph-0, aleph-1 and 2^aleph-0 and I also understand that if continuum hypothesis is true, then aleph-1 = 2^aleph-0. Is this what these mathematicians have proven, or have they proven something else?


The paper proves that the smallest cardinality of a family of infinite sets of natural numbers that has the strong finite intersection property but no pseudointersection (p) is equal to the smallest cardinality of a tower (t).


My question exactly. I still don't get it.


I did not know that this was an unsolved problem. It is quite paradoxical. You can map the subset of rational numbers in the range [0, 1) to the set of natural numbers, so there are an infinite number of disjoint subsets of the rational numbers that map to the natural numbers. Intuitively a 2nd dimension of infinity should be bigger, but the count of a set is defined to be one dimensional regardless of the dimensionality of the members.

Edit: real -> rational


No, this is incorrect. You can map the set of rational numbers to the set of natural numbers. The set of reals in that range is shown to be uncountably infinite by Cantor diagonalization.

https://www.reddit.com/r/explainlikeimfive/comments/1r5aya/e...


Ah, I was thinking of the rationals. Thanks for the ELI5 link.


Actually if you meant to say that the real numbers in [0,1) map bijectively to the set of natural numbers, that is incorrect.

See https://en.wikipedia.org/wiki/Unit_interval

under the section "Cardinality".


Another thing which has bothered me in the past was that the set of integer 2-tuples could be mapped one-to-one to the set of integers, seemingly without any information loss.


The "seemingly" suggests that you remain unconvinced. If you define for example some kind of distance between an arbitrary (x,y) and (0,0), for any (x,y) you can easily count how many points are closer to the origin. There may be ties but it's easy to define a rule to break them. This means you can order all the pairs of integers: (0,0), (1,0), (0,1), (-1,0), (0, -1), (1,1), (1,-1), (-1,-1), (-1,1), (2,0),....
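The ordering described above can be sketched directly. This is a toy version under stated assumptions: squared distance as the primary key and angle as the tie-breaking rule (the comment only says "some rule").

```python
# Enumerate integer pairs by distance from the origin, breaking ties by
# angle. Every (x, y) in the window gets a unique position in the list,
# illustrating how all of Z^2 can be laid out in one ordered sequence.
from math import atan2

def ordered_pairs(radius):
    pairs = [(x, y)
             for x in range(-radius, radius + 1)
             for y in range(-radius, radius + 1)]
    # Primary key: squared distance to (0, 0); tie-breaker: angle
    return sorted(pairs, key=lambda p: (p[0] ** 2 + p[1] ** 2, atan2(p[1], p[0])))

seq = ordered_pairs(2)
assert seq[0] == (0, 0)            # the origin comes first
assert len(seq) == len(set(seq))   # no pair appears twice
```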


I am not sure if you have heard of them, but space-filling curves are a way to demonstrate this. This is a great video on the Hilbert curve https://www.youtube.com/watch?v=3s7h2MHQtxc


This picture is for natural numbers but it shows you one way to do the mapping:

https://plus.maths.org/issue47/features/macgregor/diagram3.g...


See Cantor's pairing function (his diagonal enumeration of pairs) for exactly how to do this mapping.
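One explicit such mapping is Cantor's pairing function, which numbers the pairs diagonal by diagonal. This sketch covers naturals only; extending it to all integers just needs a preliminary bijection from Z to N.

```python
# Cantor pairing: a bijection N x N -> N obtained by walking the diagonals
# of the grid of pairs: (0,0), (1,0), (0,1), (2,0), (1,1), (0,2), ...
def cantor_pair(x, y):
    return (x + y) * (x + y + 1) // 2 + y

# The pairs on the first 5 diagonals hit every value in 0..14 exactly once.
hits = sorted(cantor_pair(x, s - x) for s in range(5) for x in range(s + 1))
assert hits == list(range(15))
```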


How? Which real number maps to the natural number 42?


It also seems obvious to me.

There is no "infinity", there are just infinite loops, and they are all equivalent.


BTW: Why then does wikipedia say that

the cardinality of the set of all real numbers (denoted c and called cardinality of the continuum) is strictly greater than the cardinality of the set of all natural numbers (denoted ℵ 0 'aleph-naught')?

UPD: I get it, they are actually talking about a third set in the article in a way that's not immediately apparent.


Three things:

First, this research didn't show that all infinities are equal, it just showed that two particular ones are equal. Indeed, in Cantor's formulation, the powerset of any set is larger than the original set, which immediately shows that there is no largest infinite set.

Second, this research wasn't about ℵ₀ and c at all, but about another pair.

Third, the issue about ℵ₀ and c isn't about whether they're equal, but whether there is or isn't a third infinity that's larger than ℵ₀ and smaller than c. It's already known that both answers (yes and no) are consistent with standard mathematical axioms. Edit: this result does have some connection with the continuum hypothesis, but I don't know exactly what that connection is. However, these researchers definitely didn't claim either to prove or disprove the continuum hypothesis.


The cardinality of the reals has been known to be strictly greater than the cardinality of the naturals since Cantor. What the Continuum Hypothesis considers is the cardinality of the reals and the cardinality of the power set of the naturals. The power set of a set is the set of all subsets of that set. If the first set has cardinality N, then the power set has cardinality 2^N. Thus, the reals can be considered to be exponentially more dense than the naturals.

Some intuition: while there may be N unique stocks on the NYSE, there could in theory be 2^N unique mutual funds that are based on the different combinations of stocks.


To be a bit more specific: the cardinality of the reals is equal to the cardinality of the power set of the naturals. This has also been known since the time of Cantor (read: late 19th century, at the dawn of set theory). CH states that no cardinal numbers exist between the cardinalities of the reals and the naturals[1].

Cantor believed CH to be true, but couldn’t prove it, and it was eventually (ca. 1960) proven to be independent of the usual (ZFC) axioms of set theory.

[1] Though CH could of course be stated in infinitely many equivalent ways, by replacing “reals” and “naturals” in this statement with any other pair of sets whose cardinalities are equal to those of the reals and naturals, respectively.

For (uncountably) infinitely many stupid examples, take any non-empty open subset of any connected, finite-dimensional (Hausdorff, second-countable) manifold with at least two points, and any infinite subset of the integers.


Can you simplify for us what exactly have the mathematicians in this article proven?

I get that they have proven some infinity A = some infinity B but can you tell us what these A and B are.

Also, isn't the cardinality of reals equinumerous with that of the cardinality of the power set of naturals?


Quoting from the article:

> Briefly, p is the minimum size of a collection of infinite sets of the natural numbers that have a “strong finite intersection property” and no “pseudointersection,” which means the subsets overlap each other in a particular way; t is called the “tower number” and is the minimum size of a collection of subsets of the natural numbers that is ordered in a way called “reverse almost inclusion” and has no pseudointersection.

In short, there are two infinities, p and t, that are implicitly defined by some characteristics. It was previously known that there was a relationship between them, but it was not suspected that they were, in fact, equal.

Considerable work is required to understand what these are, and what the result means.

And to answer your explicit question, yes, the reals can be put in 1-1 correspondence with 2^N, the power set of the naturals.


Is it me, or did they forget a step in the diagonalization argument (adding/subtracting 1)?


They say to change every digit, and that's enough. Although they don't explicitly say so, you can change each digit in (nearly) any way you like.


My bad, skipped that part somehow.


Photos of humans in an article about math are pretty useless but nevertheless I find it interesting that the article included a photo of the male collaborator, Saharon Shelah, and another male mathematician, but not one of the female collaborator, Maryanthe Malliaris.


There's no photo (of her) on her academic homepage or her own Web articles either. The obvious inference is that she doesn't supply photos to journalists.


There’s a comment from the author below to say she specifically requested no photo.


Thanks, I overlooked that.


Please answer me this one question: What is the average number of bits that are necessary to represent an arbitrary natural number?

If the average number of bits is finite then I will shut up!


The average of an infinite set isn't necessarily a meaningful thing to consider. For example, what is the average integer? Well, you could argue that the answer is 0 because you can pair off the positives and the negatives. On the other hand, you could also say it's 1: pair off 0 and 2, -1 and 3, -2 and 4, etc.

Instead, the rigorous way to generalize the average is essentially to assign a probability to each member of the set, and compute the sum over the set of (n * p(n)), where p(n) is the probability of picking n, and the sum of all of the p(n) is 1. This is generally known as the 'expected value', and it has the property you want that the expected value of a set given a probability distribution is less than or equal to the maximum of the set.

On the other hand, you can't set p(n) to the same value for every natural number, since the sum of p(n) has to be 1 (because it's a probability). But the sum of infinitely many copies of the same number between 0 and 1 is either 0 or infinite.

So instead, you have to have some varying probability. For example, you might say you have probability 1/2 of picking 1, 1/4 of picking 2, 1/8 of picking 3, and so on, halving the probability each time. And in that case, the average number of bits works out to be about 1.63.

In fact, for any way of picking a natural number at random whose probabilities decay fast enough (like the geometric one above), the expected number of bits to represent it is finite (though some heavy-tailed distributions do give an infinite expectation).
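A quick numeric check of the geometric-distribution example above (the exact expectation comes out to about 1.63):

```python
# Expected bit-length of a natural number n drawn with P(n) = 2^(-n),
# n = 1, 2, 3, ...  In Python, n.bit_length() == floor(log2 n) + 1.
# Truncating the sum at n = 200 is harmless: the tail is below 2^(-199).
expected_bits = sum(n.bit_length() * 2.0 ** -n for n in range(1, 200))
print(round(expected_bits, 2))  # finite (about 1.63), despite unbounded n
```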


This is equivalent to asking what is the average number of digits necessary to represent an arbitrary natural number. If you assume that this is a finite number you can quickly arrive at a contradiction.


So there are natural numbers with an infinite number of digits and that is my point. The average can't be larger than the largest number.


The only way to reason about this is as a limit.

lim n->infty sum(1..n)/n.

This limit diverges. Your intuition is breaking down. It might be useful for your understanding to study real analysis where these sorts of questions are handled rigorously.


> necessary to represent an arbitrary natural number

What makes you think this number exists? Infinities cannot be added and divided this way.

> The average can't be larger than the largest number.

But it can be equal to it. Your mistake is not realizing that infinity minus 1 = infinity.

So the average is "smaller" than the largest, and yet equal to it. Because like I said at the start, you can not just add infinities in the normal way you can manipulate finite numbers.


Oh, more than one infinity in the same reality and no contradiction? What a wonderful world we live in..

More than one zero, more than one infinity, why not? More than on nothingness, more than one everything..


So they basically found a way to represent any real number as a unique sequence of bits?

Doesn't sound too hard when spelled like that. I could probably do that, win some prize (not anymore). Do they have anything else like that?


If you've got some free time, can you look into whether or not the real part of every non-trivial zero of the Riemann zeta function is 1/2?


Can you reword that one as a discrete math problem, too?


Let d(n) be the sum of the divisors of n (for example, d(6) = 12 because 1, 2, 3, and 6 divide 6 and 1+2+3+6 = 12). Let h(n) be the nth harmonic number, that is, h(n) = 1 + 1/2 + 1/3 + ... + 1/n. Let ln(x) be the natural logarithm and let e be, well, e.

Show that the inequality:

d(n) <= h(n) + (ln(h(n)) * e^(h(n)))

holds for all n >= 1.

Best I can do, hope that works! [1]

[1] http://www.theoremoftheday.org/NumberTheory/RobinLagarias/To...
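A brute-force check of the inequality for small n is easy. This is only a sketch: verifying it for all n is, per the link above, equivalent to the Riemann hypothesis, so the loop proves nothing beyond its own range.

```python
import math

def sigma(n):
    # d(n) in the comment above: the sum of the divisors of n
    return sum(d for d in range(1, n + 1) if n % d == 0)

def harmonic(n):
    # h(n): the n-th harmonic number 1 + 1/2 + ... + 1/n
    return sum(1.0 / k for k in range(1, n + 1))

# Check d(n) <= h(n) + ln(h(n)) * e^(h(n)) for n = 1..1999
for n in range(1, 2000):
    h = harmonic(n)
    assert sigma(n) <= h + math.log(h) * math.exp(h), n
```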


I find Cantor's diagonal argument unconvincing.

The claim is that there are more real numbers in the range from zero to one than there are natural numbers. To see that this is false simply realize that you don't actually have to write a decimal point to specify the real numbers in this range. Without the decimal point these real numbers just become natural numbers. Can a rational person believe that there are infinite sequences of digits in the form of real numbers but not infinite sequences of digits in the form of natural numbers? The natural numbers are just an infinite sequence of finite numbers. If you believe n is a natural number then you must also believe that n*10 is a natural number. One more digit! There is always one more digit (that is what infinity implies). If there really are an infinite number of natural numbers then some of them must be of a transfinite number of digits or else you would be including numbers in the list more than once.

The problem with Cantor's argument comes down to the fact that the procedure he uses to find a number not in the set is essentially the same as the procedure he uses for creating the infinite set in the first place. The only difference is our understanding of randomness. His procedure for finding a number not in the set may not seem very random but it might be as random as any other. A truly random coin could theoretically come up heads every time. The important part of his argument is that the infinite list of real numbers has no repeats. The diagonalization procedure similarly ensures that there are no repeats. On the one hand he claims the infinite set of real numbers exists. On the other hand he argues that the diagonalization that yields a number not in the set has not already been done. He takes away infinity and then gives it back!

There is only one infinity. It means "repeat". It is simply the interplay of finite state with process. You can think of it as an "infinite loop" in programming. To say that one infinity is smaller than another is to deny that the smaller is infinite. Infinite means without bound.


> Without the decimal point these real numbers just become natural numbers.

No, they don't, because the vast majority of them have an infinite number of digits to the right of the decimal point. That's the key: there are more numbers with an infinite number of non-zero digits (the reals) than there are numbers with a finite number of non-zero digits (the naturals).

> The problem with Cantor's argument comes down to the fact that the procedure he uses to find a number not in the set is essentially the same as the procedure he uses for creating the infinite set in the first place.

Again, no. Cantor doesn't create the set, you do. The proof is like a game. It says: give me any procedure for (putatively) making a list of all of the real numbers, and I will give you back a number that is not in the list.
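The "game" can be sketched in a few lines. This is a finite toy, assuming the listed numbers are given as rows of decimal digits; the real proof applies the same digit-flip along an infinite diagonal.

```python
# Given (a finite prefix of) a list of real numbers as rows of digits,
# build a digit sequence that differs from the i-th row in its i-th digit.
# The constructed number therefore cannot appear anywhere in the list.
def diagonal_counterexample(rows):
    return [(row[i] + 1) % 10 for i, row in enumerate(rows)]

rows = [[0, 0, 0], [1, 1, 1], [2, 2, 2]]
missing = diagonal_counterexample(rows)
assert missing == [1, 2, 3]
assert all(missing[i] != rows[i][i] for i in range(len(rows)))
```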


> The proof is like a game. It says: give me any procedure for (putatively) making a list of all of the real numbers, and I will give you back a number that is not in the list.

The game isn't fair though; we have to write down our complete solution (e.g. in the form of a never-halting, co-recursive computer program), then Cantor can take as much time as he likes (any finite number of steps) to analyse the source code and find a Real it'll never output.

Yet we can turn the tables to play a different game: whatever counter-example-finder Cantor chooses, if we make him write it down (as a computer program), we can always (eventually) find a program which will trick it: outputting only a countable number of Reals, but always a superset of those checked for by Cantor's program.

Of course, if we're representing the players of the game using computer programs, then we can go one step further and show that only the Computable numbers (a subset of the Reals) can ever be generated, and the computables are countable! (We can count them by pairing off all programs with all runtimes)


> if we make him write it down (as a computer program)

He did. Diagonalization is an algorithm. It's a non-halting algorithm (because it operates on infinite input and produces infinite output) but it is nonetheless a perfectly well defined algorithm that can be implemented by a Turing Machine.

> we can always (eventually) find a program which will trick it

No, you can't.


You're right about diagonalization when it's given our choices as input. When I said 'writing down' I meant 'in a self-contained way', i.e. without taking any input (such as our choices).

This rules out diagonalization, since that can't be run until it has a list of numbers to diagonalize. In the 'reverse' game, we're first running the counter-example-outputting program that Cantor provides, and then using its output to pick our numbers. If Cantor gives us a diagonalization program, we'll get a deadlock. In fact, the strategy to beat Cantor in this reverse game is simply the identity function: if Cantor's program claims that "X" isn't in our list, we can refute it by outputting "X".

The way I think about it is this: if we give Cantor our program up-front, he can run it to see what we're going to choose, then pick his 'move' accordingly (e.g. via diagonalization); if Cantor gives us his (self-contained) program up-front, we can run it to see what 'move' he's going to choose, and pick ours accordingly (e.g. via the identity function).

Both games are a win for whoever goes second.

Alternatives to the 'up-front' approach include:

- Interleaving or concurrent outputs, in which case there would be no winner: each time a number is added to the list, diagonalization extends its counter-example with another digit; each time the counter-example is extended, a number with matching prefix is added to the list.

- Interleaving or concurrent programs, with access to each others' source. I think the winner would be whoever has the most computing power (and can hence simulate their opponent further into the future).


> if Cantor gives us his (self-contained) program up-front,

Which he did. Diagonalization is an algorithm (and a trivial one at that).

> we can run it to see what 'move' he's going to choose

No, you can't, because one of the inputs to his algorithm is your algorithm. So you can't run his algorithm until you've committed to yours.

> Both games are a win for whoever goes second.

That's right. But Cantor has to go second. That's why his proof is correct.


Sorry if I wasn't making myself clear: I'm not at all trying to refute Cantor's proof by diagonalization of the uncountability of the reals. There seem to be others in this thread who do take issue with it, but I don't.

It looks like my point's been misunderstood. I'm pointing out that the whole reason that Cantor's proof works (and it does work) is because we're forced to provide our complete list up-front. Or, for those who don't like the idea of 'writing down' something infinite, we must provide a 'description' or 'program' which uniquely defines/generates our list.

You say as much here:

> you can't run his algorithm until you've committed to yours.

Exactly. In Cantor's 'game', his counter-example (a real number not in our list) is generated with full knowledge of our list. Our list is generated with no knowledge of his counter-example.

That's Cantor's proof, and it is not what I'm talking about.

I'm talking about a different 'game', where the dependency is reversed: the "counter-example" is generated with no knowledge of the list; the list is generated with full knowledge of the "counter-example", and hence is able to refute it (by including the "counter-example" in the list).

This 'inverted' setup has two important differences from Cantor's setup:

1) We must replace the program/description of 'a list of reals' with a program/description of 'a function from a real to a list of reals'.

2) We must replace the program/description of 'a function from a list of reals to a real' with a program/description of 'a real number'

In particular, point (2) means that diagonalization is irrelevant in this setup (which I repeat is not Cantor's setup!). It is irrelevant because it doesn't have the right type: it is a function from a list of real numbers to a real number but there is no place for such a function in this alternative setup; instead, we need a plain old real number. This is what I was trying to get across by saying it must be 'written down up-front' (i.e. not after waiting for us to think of a list), or being 'self-contained' (i.e. not requiring an input/reference to our list).

I hope that makes it clear why the following refutations aren't applicable:

> > if Cantor gives us his (self-contained) program up-front,

> Which he did. Diagonalization is an algorithm (and a trivial one at that).

Given what I've said above, we see that diagonalization is not self-contained/provided-up-front/fully-written-down or any of the other phrases I've tried to use to get my point across. Yes, we can write a program for diagonalization, but such a program encodes a function, which has no place in this scenario (which is not the one in Cantor's proof!).

> one of the inputs to his algorithm is your algorithm

Again, the algorithm cannot take my/your/anyone's algorithm as an input, since it cannot take anything as an input, since it is not a function, since I am not talking about Cantor's proof by diagonalization of the uncountability of the real numbers. I am talking about a different situation, in which the "counter-example" is just a number, and hence cannot be provided with any inputs, whether they be algorithms or unicorns.

> > Both games are a win for whoever goes second.

> That's right. But Cantor has to go second.

By "both games" I mean Cantor's proof and this different situation. Cantor doesn't have to go second; that's my point! Cantor wins because he chooses to go second. If we choose to go second, then we will win, by constructing a list which includes the supposed "counter-example".

> That's why his proof is correct.

I know it is; but I've never claimed otherwise, and that is completely beside the point.

The point is that Cantor's proof isn't actually about the real numbers! Instead, it's about how some infinite structures (like "all numbers") are too rich to be completely captured by any particular finite representation (there will always be missing numbers). It suffices to use a countable infinity, like the computable numbers.

Alternatively, we can think of Cantor's proof as talking about computational resources: diagonalization is trivial to state, as you say, but it is computationally difficult to run. This is because it runs the list-generator as a subroutine (to find the digits), which can be made arbitrarily hard by generating the list in an arbitrarily complex way. Since diagonalization then adds 1 mod 10, it always ends up doing slightly more work than it takes to generate the list. Hence we can see Cantor's proof as a statement that more powerful computers can always beat less powerful computers, assuming the code is all globally known, because the more powerful computers can simulate the less powerful ones to see what they'll do. That's the essence of the second player always winning.


> Sorry if I wasn't making myself clear

You still aren't.

> diagonalization is not self-contained/provided-up-front/fully-written-down or any of the other phrases I've tried to use to get my point across

But it is. The fact that one of its inputs happens to be a function doesn't make it any less self-contained.

> The point is that Cantor's proof isn't actually about the real numbers!

It isn't just about the reals, but it is definitely about the reals. The claim is: there exist well-defined mathematical structures that cannot be put into a one-to-one correspondence with the naturals. The proof is constructive, with the reals as the canonical example. Cantor's proof is about the real numbers in the exact same sense that Turing's proof of the non-computability of the halting problem is about Turing machines.


> diagonalization is not self-contained/provided-up-front/fully-written-down or any of the other phrases I've tried to use to get my point across

> But it is.

No it's not. It's a function. That is exactly the opposite of what I mean by "self-contained" (and all of the various rephrasings of how I defined it).

> The fact that one of its inputs happens to be a function doesn't make it any less self-contined.

True, it doesn't become less self-contained, because it's already not self-contained because it has an input. A function whose input is a function is just as self-contained as one whose input is a natural; or whose input is a teapot: they're all, precisely, not "self-contained" as per my definition.

I can't think of any clearer way of stating it. Maybe you're getting derailed by this phrase because you're interpreting it in some way other than the various equivalent, precise definitions I have given over and over?


> it's already not self-contained because it has an input

On this view there is no such thing as a "self-contained function". All functions have inputs.


Exactly. One way to define the same notion of "self-contained" is "not a function", so it makes perfect sense that there are no self-contained functions.

Free variables are also not allowed, e.g. defining diagonalization in terms of some free variable "x" to stand for our list. That's effectively the same as being a function of "x", so again it doesn't count as self-contained.

Hence, a program for diagonalization is not self-contained, since it would be a function, e.g.

    function (List<Real> x) { return ... }
Some examples of programs which are self-contained are:

    42

    sqrt(2)

    // Difficult to compute, but still a well-defined number
    if (riemannHypothesis) then 1 else 2

    // Doesn't halt, evaluates to 1.1111111...
    let f = function(n) { return (1 / n) + f(n * 10); }
     in f(1)
If we want to allow classical logic, then hypercomputation can be included too, e.g.

    if (haltingProblem) then 1 else 2
So by "program" or "description" we just mean a finite representation of a (potentially infinite) object, e.g. like the `1.11111...` example, which we could never write down in 'fully expanded' form.
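To make that concrete, here's a small sketch (Python standing in for the pseudocode above; the names are mine) of a finite program denoting an infinite object, as a lazy digit stream:

```python
from itertools import islice

# A finite program denoting an infinite object: the digit stream of
# 1.1111..., analogous to the non-terminating f(1) above. The program
# is a few lines long; the object it describes never ends.
def digits_of_one_point_one_repeating():
    while True:
        yield 1  # every digit is 1, forever

# We can only ever observe a finite prefix of the infinite object:
prefix = list(islice(digits_of_one_point_one_repeating(), 5))
assert prefix == [1, 1, 1, 1, 1]
```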

We need this idea of "programs" to give meaning to statements like "we must write down our entire list before Cantor chooses his counter-example", or "if we force Cantor to write down his supposed counter-example before we write down our list".

Since we're talking about infinite objects (real numbers with a never-ending list of digits, or lists of real numbers), we can't write them down in a 'fully expanded' form, since we'll never finish writing, and hence statements about what happens "after" are vacuous (for the same reason that proof assistants don't allow (non-productive) non-termination).

If we write down a program instead, then 'the entire thing' can be written down in finite time, and hence it makes sense to talk about what happens "after".

Consider the case where a real number is written down first and then a list is written down (i.e. the opposite of Cantor's proof):

- The number cannot be written down in 'fully expanded' form, since its digits never end, so we'd never get to write down the list.

- The number can be written down as a self-contained program. For example, `pi`.

- That program cannot be/use diagonalization, since diagonalization is not self-contained. We can't write down the diagonalization program as our number, since diagonalization itself is not a number (it's a function) so that's a type error. We can't apply diagonalization to anything, since we don't have access to any list of numbers (assuming that we don't have a time machine); we could make up a list of numbers to diagonalize, but in that case we might as well skip the diagonalization altogether and just make up our number.

The theorem we can prove about this scenario is that it's always possible for us to write down a list containing that number. This is trivial, since we're writing our list after the number has been written down (as a program), so we can just put that at the head of our list.

This "opposite situation" gives a result which is the "opposite" of that in Cantor's proof (in Cantor's proof the number is never in the list). Hence "the second player always wins", and the reason the second player wins is that they get to look at the first player's choice before making their own.

We could make a winning strategy for rock/paper/scissors in a similar way: no matter how complex the strategy of the first player is (even if it requires solving the Riemann Hypothesis!), we can always beat them because we get to look at their choice.
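A minimal sketch of that inverted game (Python, with infinite objects represented as digit-functions; all names here are illustrative, not from anyone's actual formalization):

```python
# The first player commits to a number: a function from digit position
# to digit. For illustration, the repeating digits of 1/7 = 0.142857...
def first_player():
    digits = "142857"
    return lambda j: int(digits[j % 6])

# The second player, seeing that number, builds a list (a function from
# (row, position) to digit) whose row 0 is exactly the committed number.
def second_player(number):
    def list_digit(i, j):
        if i == 0:
            return number(j)  # the "counter-example" heads the list
        return 0              # filler rows; anything would do
    return list_digit

num = first_player()
lst = second_player(num)
# The supposed counter-example is in the list: row 0 matches it digit-for-digit.
assert all(lst(0, j) == num(j) for j in range(100))
```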


All of that is a red herring. The basic structure of Cantor's proof applies equally to finite (what you call "self-contained") lists. Consider:

Theorem: there are an infinite number of the integers.

Proof: Assume to the contrary that there are a finite number of integers. Then it is possible to construct a finite list L that contains all of the integers. Find the largest integer in L and add one. The result will be larger (by one) than the largest integer in L, and hence larger than all of the integers in L, and hence not in L, contradicting our assumption that L contains all of the integers. Hence no finite list can contain all of the integers. QED.

You are saying, "But if you give me any integer, I can produce a finite list containing that integer." That's true. But if you don't see at this point why that is irrelevant then you are beyond my ability to help.


> finite (what you call "self-contained") lists.

Wow, I thought it was pretty clear when I said that self-contained == not a function. I don't know how you've interpreted that to mean 'is finite'...

The thing that must be finite isn't our list, it's the representation/program/description of our list. Here's a finite representation/program/description of an infinite list:

    The list beginning with zero, where each successive element is one more than its predecessor
That represents/computes/describes an infinite list (containing all of the naturals in ascending order), but it itself is not infinite (it only contains 92 characters) and is not a list (it is a program/description/whatever).

That has nothing to do with being "self-contained". All I meant by "self-contained" is, as I've said, "not a function".

The program above is not a function; it has no inputs; it has no free variables; etc. hence it is a finite, self-contained program.

The list it describes/computes/represents is also not a function (it's a list); it has no inputs; it has no free variables; etc. hence it is an infinite, self-contained list.

If you like, we could flesh out examples elsewhere on these axes:

- An example of an infinite, self-contained program is: "the list containing the number zero, followed by the number one, followed by ..." and so on ad infinitum. That's self-contained, since it's not a function (it has no inputs), but it's infinite (of course, to represent it in a Hacker News comment I've had to use a finite description at a meta-level (the '...'), but such is the nature of these things!)

- An example of an infinite, not-self-contained program is: "given a number N, return the list containing N, followed by N+1, followed by N+2, followed ...". It's not self-contained because it is a function: it has an input (N). Again, we have to use a meta-level (the '...') to 'represent the infinite representation' ;)

- An example of a finite, not-self-contained program is diagonalization: "given a list L, return the number whose Nth digit is equal to the Nth digit of the Nth element of L, plus one mod 10." That's finite (it's only 117 characters long), and it's not-self-contained since it's a function (it takes the input 'L'). Note that in this case, we're computing a real number rather than a list, but that's irrelevant to the concepts of "self-contained" (i.e. not a function) and the distinction between representations and objects.

Just for fun, here's another not-self-contained program: "Given a binary tree, return its left-most leaf". That's got nothing to do with numbers, lists, diagonalization, etc. but we can immediately see that it's not self-contained, since it's a function.

The program/object distinction is needed because Cantor's proof talks about "after" we've given/written-down/etc. an infinite list. We cannot write down such a list in 'unexpanded' form (e.g. [0, 1, 2, 3, ...and so on]), since we would keep going forever; and hence any statement about what happens "after" (like Cantor finding a number that's not in our list) is meaningless, since there's no "after".

On the other hand, it's trivial to write down a finite representation/program/description of this list; that's exactly what I've written above as "The list beginning with zero...".

> The basic structure of Cantor's proof applies equally to finite lists... Theorem: there are an infinite number of the integers. Proof: ...

Yes. It also applies to Euclid's proof that there are infinitely many primes ("given a list of primes, I can write down a prime that's not in that list"). We can use it to prove that there's no "best" choice in rock/paper/scissors: given a choice, I can beat it". It's a very useful technique (as is diagonalization).
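Euclid's construction can be written as a program too (a hedged Python illustration; any prime factor of product-plus-one works, since it can't divide the product):

```python
# Given any finite list of primes, produce a prime not in the list:
# multiply them, add one, and take the smallest prime factor.
def prime_not_in(primes):
    n = 1
    for p in primes:
        n *= p
    n += 1
    # The smallest divisor >= 2 of n is necessarily prime, and it can't
    # be in the list (dividing by any listed prime leaves remainder 1).
    d = 2
    while d * d <= n:
        if n % d == 0:
            return d
        d += 1
    return n  # n itself is prime

assert prime_not_in([2, 3, 5]) == 31  # 2*3*5 + 1 = 31, which is prime
assert prime_not_in([2, 7]) == 3      # 2*7 + 1 = 15 = 3*5
```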

> All of that is a red herring

From what? Paying the bills? In that sense, all pure maths (including Cantor's proof) is a 'red herring'. I was pointing out (what I find to be) some interesting aspects of Cantor's proof:

- If we do the choices in the opposite order, we get the opposite result.

- If we interleave the choices (e.g. pick one more list element, pick one more digit, pick one more list element, etc.) then it's a "stalemate".

- If we make the choices concurrently, in a perfect information setting (i.e. each "player" knows their own "program" and their competitor's) then it's a win for whoever has more computing power: since they can simulate their opponent and perform the 'extra step' of add-one-mod-10, or append-number-with-diagonalized-prefix (depending on which "player" they are).

Isn't 'noticing and discussing interesting things about an abstract/formal system' basically the process of mathematics? So why is it a "red herring"? What were you hoping to achieve, other than (hopefully) understanding these aspects of the proof that I've tried to point out (which you can then ignore, or take in another direction, or whatever you like)?

> if you don't see at this point why that is irrelevant then you are beyond my ability to help.

Again, irrelevant to what and help with what? I don't plan on taking these observations any further (e.g. writing a paper), since I would imagine they're pretty clear to those with a mathematical mindset. I was pointing them out here because:

- The HN audience seems to like mathematical ideas (for example, this article!)

- The HN audience isn't all highly trained mathematicians, for whom this might be too trivial to discuss

- The HN audience seems to like computational ideas, which these aspects of the proof are related to (although classical mathematics isn't limited to Turing computability; hence why I said a "program"/"description" could use hypercomputation, if we like)


> Wow, I thought it was pretty clear when I said that self-contained == not a function. I don't know how you've interpreted that to mean 'is finite'...

Because your definition doesn't make sense. Everything in mathematics is a function. I was trying to do you the courtesy of interpreting what you said in a way that made sense rather than just pointing out that you don't seem to understand what a function is.

> The program above is not a function; it has no inputs; it has no free variables; etc. hence it is a finite, self-contained program.

Now it sounds like what you're trying to get at is the concept of a "constant function" (https://en.wikipedia.org/wiki/Constant_function). Constant functions are functions, and they can have inputs.

So now it seems that you've just spent all this time and effort to point out that diagonalization is not a constant function. That's true, but I don't think it's a particularly interesting observation.


> Everything in mathematics is a function.

No, everything in mathematics is precisely what it is defined to be by whoever is doing the defining. If you want to introduce extraneous extra concepts into a theory, then that's fine; for example, Russell and Whitehead seemed to enjoy contorting set-based constructions into areas where they're completely unnecessary.

However, care should be taken to remain compatible with the original definitions. In this case, there is a clear distinction between "a foo" and "a function which takes some input and returns a foo which depends on which particular value was provided as input". To introduce functions as a 'universal substrate' such that everything is a function, then complain that every "foo" is a function, then introduce the concept of 'constant function' serves no purpose other than to obfuscate perfectly clear definitions under a mountain of irrelevant one-upmanship.

> So now it seems that you've just spent all this time and effort to point out that diagonalization is not a constant function.

That's what most of the time and effort have been spent on, yes. That's not at all what I was trying to "point out" though. I was simply assuming that this was common knowledge (for those familiar with Cantor's proof, at least), and so I defined some terms like "self-contained" precisely so that I could ignore algorithms like diagonalization.

> That's true, but I don't think it's a particularly interesting observation.

I agree it's not interesting. But I wouldn't even call it an "observation". Again, it's simply a trivial consequence of the definitions I gave, and I gave those definitions precisely so that I can ignore diagonalization, for this reason.

My actual observations are along the lines of "the steps of Cantor's proof can be rearranged in these ways, which give rise to these alternative results". Diagonalization doesn't matter for any of that, but you seem to keep trying to:

- Insert diagonalization into those rearrangements, despite it being a type error to do so (because diagonalization is a function, not a number; modulo whatever 'constant function' axioms would make you feel better)

- Complain that those rearrangements are "wrong" (they're not; they're my definitions, so they're "right" by definition!).

- Restate over and over various ways that my definitions do not correspond to Cantor's proof. Which is the entire point.

- "Prove wrong" those definitions, by showing that they're incompatible with various aspects of Cantor's proof (e.g. the order of steps) and diagonalization (e.g. its type). Which is completely backwards, since those definitions were chosen precisely so that Cantor's proof and diagonalization will not work for them.


I will just diagonalize forever before I give the set to Cantor. If he claims that he can diagonalize the set to generate a new number not in the set then I will say "Oh, yeah - I missed one. I wasn't really done after all. Give it back to me and I will add some more to it before you get your hands on it!". As you can see, Cantor will never get a list because I will never really be done generating it.

I could just give him the list and say that whatever he adds is precisely what I would have added had I continued. This is like an infinite lazy list in programming.


> Cantor will never get a list because I will never really be done generating it.

That would be true even if you weren't diagonalizing because the list is infinite. In fact, just a single item in the list is potentially infinite. So you can't generate the whole list regardless.

What you have to do is to produce an algorithm that takes any two natural numbers i and j as input and produces as output in finite time the j'th digit of the i'th number in the list. Because you have to produce your digit in finite time you can only diagonalize a finite number of times, so Cantor can always do you one better and produce a number not on your list.


Why is it that Cantor can do an infinite procedure of diagonalization but I can't? If he can diagonalize then I can too. Is it possible that Cantor's "algorithm" is not really an algorithm? Knuth says an algorithm must be correct and must also terminate.

Regardless, I have a lot to ponder.


Cantor is playing by the exact same rules that you are. His burden is: given an integer i, produce in finite time the i'th digit of a real not in your list. (He can't produce the whole thing in finite time because it's infinite, obviously.) He does this by using your algorithm to produce the i'th digit of the i'th row and adds one to it (mod 10).


> given an integer i, produce in finite time the i'th digit of a real not in your list

Yes; to be clear, my point is simply that there are two variants of this task:

- Given an integer i and our list (or the infinite procedure that generates it), then the task is easy since we can diagonalise.

- Given an integer i and no knowledge of our list, the task becomes impossible, since the supposed "real not in your list" is actually independent of the list (by definition); hence we're free to choose any list we like, including waiting until after the supposed counter-example has been generated, and sticking that at the head of our list.


> my point is simply that there are two variants of this task

There are infinite variants of this task. But only one of them is mathematically interesting with respect to the claim that the reals cannot be put into one-to-one correspondence with the naturals.

> including waiting until after the supposed counter-example has been generated

Obviously, if I give you a real you can then generate a list that includes that real. That isn't very interesting.


What does this prove though? That no specific real is guaranteed to be left out of every list? Was that ever in dispute?


Okay, let me try to write these "algorithms" in concrete terms.

Assume that you have a way of generating a sequence R_1, R_2, R_3, ... of real numbers in [0, 1]. (For simplicity, let's just consider [0, 1], because it still has the same cardinality as R.) You have your "algorithm":

    generate_digit(i, j):
        Somehow compute the j'th digit of R_i.
        return the digit
You are allowed to spend an infinitely long time returning each digit. (That is, you can use constructions that are not even computable in finite time, as long as you can prove that given i and j, there is exactly one digit that satisfies your criterion.)

Cantor's objective is to defeat your algorithm by generating a real number not in the list. Similarly as above, it can be written as a function that returns the j'th digit, given j. Behold it in its full glory:

    defeat_indexing(j):
        digit = generate_digit(j, j)
        return (digit + 1) % 10
That's it.
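For what it's worth, the pseudocode above runs essentially unchanged as Python, given a concrete (purely illustrative) choice of list where R_i has every digit equal to i mod 10:

```python
def generate_digit(i, j):
    # Illustrative list: R_i = 0.ddd... where every digit d is i mod 10.
    return i % 10

def defeat_indexing(j):
    # The diagonal number differs from R_j at digit position j.
    return (generate_digit(j, j) + 1) % 10

# The diagonal number is not any R_i: it differs from R_i at digit i.
assert all(defeat_indexing(i) != generate_digit(i, i) for i in range(1000))
```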


"the vast majority of them have an infinite number of digits to the right of the decimal point."

Technically, I believe all of the real numbers have an infinite number of digits to the right of the decimal point.

A vanishingly small portion of them have a repeating series of digits, such as 0. :-)


Start with zero. Add an infinitesimal epsilon an infinite number of times. Now go back to zero and subtract the same epsilon an infinite number of times. You have now traversed all the real numbers.

The decimal representation of the epsilon has an infinite number of zeroes after the decimal point and before the last digit, which is '1'. So if you were to just chop off the leading zero and decimal point to make an equivalence with natural numbers, the epsilon is as much a representation of the natural number 1 as 0.1 or 0.01 or 0.001 or 0.0001 . Infinite real number equivalents to one natural number.


That infinitesimal is not a real number. To simplify a little, a real number is something which is the limit of a sequence of rational numbers. Or, given an error bound 1/n, you can write down a rational number within 1/n of the real number. Two real numbers are the same if the difference between their approximations converges to 0 as n gets arbitrarily large.

A number with infinitely many zeros after the decimal point is within 1/n of zero no matter the n. Therefore the number is zero.

This is like how 0.9999... is 1. The reason is that 1-0.999... is within 1/n of 0 no matter the n.
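The 1/n argument can be checked mechanically for the partial sums 0.9, 0.99, 0.999, ... (a small sketch using Python's exact rationals):

```python
from fractions import Fraction

# The n-nines partial sum 0.99...9 equals (10^n - 1)/10^n; its gap to 1
# is exactly 10^-n, which falls below 1/n for every n, so the limit is 1.
for n in range(1, 30):
    partial = Fraction(10**n - 1, 10**n)
    assert 1 - partial == Fraction(1, 10**n)
    assert 1 - partial < Fraction(1, n)
```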

Suppose you had an infinitesimal epsilon (outside the real numbers --- this is fine, and people do this). How many times are you planning on adding it to itself? To get any actual real number, you are going to have to add it to itself well more than countably many times, though I'm not sure this makes much sense.


Fine, then. Just use the smallest positive nonzero real number, instead.

Edit: I really hope I'm not the only one laughing.


"...only one laughing..."

You are illustrating the reason that mathematicians generally disparage the concept of an "infinitesimal", when it's used as proof rather than conceptual aid. (Yes, I know about https://en.wikipedia.org/wiki/Non-standard_analysis)

Not only do you get wrong conclusions, you get tedious, hard-to-adjudicate arguments.


I agree with your thought, and that of the majority of mathematicians for many years, that infinitesimals are better construed heuristically than literally.

You mention that you know of non-standard analysis and indicate that it's irrelevant. Though I don't know why you think this, I agree with you. I just wanted to plug non-standard analysis as both mathematically interesting and also very useful. One can jettison the philosophical thought that NSA "vindicates" the historical use of infinitesimals (as I think we should), while still seeing NSA as the wonderful and deep piece of math that it in fact is.


Thanks for your support, but I never said anything about non-standard analysis. I made one post in support of its parent, and people jumped on it to say how wrong I am, as apparently I accidentally stumbled over a sore spot in mathematics. I don't know why infinitesimals apparently aren't allowed, but the tone around here has convinced me to not care.

And now every post I have made in this thread tree is getting downvoted. So I'm out. Y'all can argue about nothing--and nearly-nothing--by yourselves.


What would that be?

If c>0 were smallest, then c/2 would be smaller and still positive!


> before the last digit, which is '1'

No. There is no last digit. That's the whole point. If there were a last digit your argument would be correct, but there isn't, so it's not.


I don't see the problem with having a first digit and a last digit and an infinite number of digits in between.

Edit: Infinitesimal divided by two is infinitesimal, in the same way that infinity multiplied by two is infinity. So 0.000...0001 / 2 = 0.000...0001 . Infinitesimal multiplied by any finite number is infinitesimal. Infinitesimal multiplied by infinity is every number in the interval from infinitesimal to infinity. Or none of them. Or just one. Or all of them except one. Infinity just throws common sense out the window, then catches it as it tries to sneak back in through the chimney and sets it on fire. I'm not convinced that any sane person can adequately grasp the concept.

Don't confuse the limitations on mathematical notation with a limitation on imagination. 0.9 repeating is not exactly the same as the infinite sequence of ( 0.9 + 0.09 + 0.009 + ... ). The repeating notation indicates to use nine's complement for that portion of the fraction instead of ten's complement. 0.9 repeating is literally equal to one, by notation convention, but the infinite sequence of 9 digits is one minus infinitesimal, which is equal to one in every calculation that does not involve an infinity.

You have to have an infinite number of infinitesimals to make any number that isn't zero, but when you do that, you can get all of them.


> Don't confuse the limitations on mathematical notation with a limitation on imagination

Good luck proving or calculating anything.

You can define "infinitesimal/2 == infinitesimal", but nothing good will come out of it. A definition is no good unless it lets you do something.

Letting e=infinitesimal, you have e/2==e, so e==2e, so 0==2e-e, so 0==e. This definition is inconsistent with being able to divide by non-zero integers and with subtraction.


That's not the definition, that's just what it does.

The definition of infinitesimal is "the smallest-magnitude number that is greater than zero". If you divide a finite number by infinity, infinitesimal is what you get, but don't go thinking that if you multiply it by infinity again that you will get the same number back, because you won't.

The floating point standard does not include a representation for infinitesimal, but an underflow now hints at its existence, instead of just going to zero.

It's probably easier to think of quantities like zero, one, infinity, and infinitesimal as the base vectors in mutually orthogonal dimensions. Their behaviors can be defined separately, such that whatever rules you choose for them can produce different types of math, perhaps useful for different purposes (or none beyond cranking out the dissertation), in the same way that slightly changing the Euclidian parallel lines property can produce elliptic and hyperbolic geometries.


> The definition of infinitesimal is "the smallest-magnitude number that is greater than zero"

That is hardly a definition. The real numbers are defined either as Dedekind cuts or as equivalence classes of Cauchy sequences of rationals. If you say "e is defined to be a real number such that e>0 and for all c>0, c>e", you would get a contradiction purely from the definition of the reals since "for all c>0, c>e" implies "e <= 0".

The only consistent scenario I can think is that you are actually extending the real numbers with a new element called "infinitesimal." Go ahead, but don't pretend that it is an element of the set of real numbers. Also, don't get the idea that there is some "true" set of numbers that we are trying to approximate with better accuracy. Modern mathematics has blown this idea wide open by introducing a wide array of mutually-inconsistent number systems.

> If you divide a finite number by infinity, infinitesimal is what you get

So you say. This would need to be part of the definition, or at least provable from it. Quoting Timothy Gowers, a mathematical object is what it does. How was I supposed to know that twice infinitesimal is equal to infinitesimal?

Elaborating extension: there is a way to add ("adjoin") an infinitesimal element to the real numbers. Let R(e) be the set of rational functions in e, a formal constant. For instance, 2+3e or 5e^2. I think there is a way to give R(e) a total order by saying 0<e<c for all positive real c. You also have things like 1/e=e^{-1} ("infinity") is greater than all real numbers. Don't make the mistake that R(e) is the real numbers, however.
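A hedged sketch of that extension (pairs a + b·e over the rationals, compared lexicographically so that 0 < e < c for every positive rational c; the class name and operations are mine, and this is emphatically not the set of real numbers):

```python
from fractions import Fraction

class RPlusEps:
    """a + b*e, where e is a formal infinitesimal; lexicographic order."""
    def __init__(self, a, b=0):
        self.a, self.b = Fraction(a), Fraction(b)
    def __lt__(self, other):
        # Compare the rational part first, the infinitesimal part second,
        # so any positive multiple of e sits below every positive rational.
        return (self.a, self.b) < (other.a, other.b)
    def __eq__(self, other):
        return (self.a, self.b) == (other.a, other.b)
    def __truediv__(self, n):
        return RPlusEps(self.a / n, self.b / n)

e = RPlusEps(0, 1)
zero = RPlusEps(0)
tiny = RPlusEps(Fraction(1, 10**9))  # a very small positive rational
assert zero < e and e < tiny  # 0 < e < c for any positive rational c
assert e / 2 < e              # e/2 != e: halving really does shrink it
```

Note that in this system e/2 is strictly smaller than e, which is exactly why "e/2 == e" can't hold in any extension that keeps division and subtraction working.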

> It's probably easier to think of quantities like zero, one, infinity, and infinitesimal as the base vectors in mutually orthogonal dimensions

In what way? In a vector space, I can divide by two, and I am apparently not able to divide infinitesimal by two (in the sense that e/2=e implies e=0).

> in the same way that slightly changing the Euclidian parallel lines property can produce elliptic and hyperbolic geometries

At least right now, there is a rather large difference: many interesting theorems follow from hyperbolic geometry.

What interesting things follow from asserting that there is a smallest-magnitude real number greater than zero? (If you say "I never said the number was real," then there has been no point to this discussion, because it started when you claimed the real numbers were countable by multiplying "infinitesimal" by other numbers.)


You can do that, but then the "last digit" doesn't behave the way you intuitively expect it to. For example, 0.0...1 is exactly equal to 0 for the same reason that 0.999... is exactly equal to 1. So you can't add them up to get a non-zero number.

The reason that adding numbers with a finite number of zeros before the first non-zero digit works to give you any number is that that carries go off to the left. But if there are an infinite number of zeros before the first non-zero digit then there will remain an infinite number of zeros after every addition. The carries can only "overcome" a finite number of zeros.


There's not a problem if you can answer this: What is 0.000...1 + 0.000...9? (The 1 and 9 are digits after infinitely many zeros.)

You are not allowed to say "undefined" if these are real numbers, because you are supposed to be able to add any two real numbers.

You are also not allowed to say 0.000...10 since that changes the place values.

Well, you are allowed to say 0.000...10 if you are imagining a real number is a pair (normal real number, natural number). If this is what you are imagining, then there's not a problem if you can answer this: What is the value of 0.000...1 divided by two?


He's already answered you:

If you are going to allow this kind of notation, then

0.000...1 = 0

0.000...9 = 9 x 0 = 0

0.000...1 + 0.000...9 = 0 + 0 = 10 x 0 = 0.000...10

(0.000...10) / 2 = 0/2 = 0

You're getting hung up on notation and missing the concept.


Neither has logfromblammo answered me nor am I hung up on notation. The notation is only incidental.

They are claiming that it is a well-defined number system with numbers "having a first digit and a last digit and an infinite number of digits in between." I say show that it works.

You are saying the way this works is to disregard the digits after the infinitely many digits. Sure, that would make a consistent system.

They seem to be saying something distinct from your interpretation. It's possible they mean to take the real numbers and adjoin a new "infinitesimal" element, for instance.


I'm not going to answer you, as I haven't made any claim that I care to defend. I made one little post in support of its parent, and people crawled out of the woodwork to tell me how wrong I am, and apparently try to convince me that infinitesimals are not allowed in serious mathematics, or at least not allowed in the way I was trying to use them.

And now every post I have made in this thread tree is getting downvoted. So I'm out. Y'all can argue about nothing--and nearly-nothing--by yourselves.


For what it's worth, I spent time arguing with you because the ideas you were proposing were interesting, but the problem with math is you have to make rigorous definitions and such. The ideas as stated cannot be defended, but as I said elsewhere, you can make something like infinitesimals work with some more effort, but they aren't the real numbers anymore.

"Define it that way, but show me the theorems" is an important organizational philosophy of math. It also helps remove ego from everything. It can be painful creating math without realizing this, and communicating the philosophy was the main point I was trying to make. (I also had some hopes you would show interesting consequences!)

It's not like what you were saying was obviously wrong. It wasn't until the mid-1800's that people really sorted out the real numbers. I myself spent some time thinking I "solved" the 1/0 problem and thought about "numbers" like 0.000..infinitelymany...01, but the nice thing about math is that performing experiments isn't too expensive.

(I didn't downvote you. Sorry for the full-contact lesson in math philosophy, and don't get the idea this is a "sore spot in mathematics," rather the non-existence of infinitesimals in the real numbers is easily defended.)

Edit: Beyond the algebraic way of making infinitesimals work, there is also the real analysis version: limits to 0. The idea of infinitesimal there is that no matter what positive real number you give me, I can give you a smaller one. This concept of infinitesimal isn't a number per se. Similarly, one of the many ways infinity shows up is that no matter what number you give me, I can give you a larger one.

The metaphor of infinity also shows up in: cardinality of sets, as the added point in a one-point compactification, the extended real line, the Riemann sphere, arbitrarily large numbers (limits), and that's all I can think of off the top of my head.


Proofs and explanations are not always the same. Proofs depend on logic and rigor, while explanations depend on the audience. Mathematics and pedagogy are not usually considered to be closely related areas of study. And yet universities make mathematicians teach mathematics.

Are they the best teachers? No. No, they are not. But they are the only ones that understand the subject matter well enough to do it. And that leads to the vicious cycle where you have to think like a mathematician in order to learn math from one, because they have difficulty explaining anything to any other type of person. A student that needs an explanation gets a proof, which is technically correct, but still fails to elucidate.

I think some kinds of math are fun and interesting, but proving the math is [currently] less than 1% of my job, and I have never had to worry about precision that would underflow a 64-bit floating point double.

Take another look at the whole thread tree, originating at https://news.ycombinator.com/item?id=15236430 , and look at the posts by "zelah". Realize that all the responses made them realize that they got something wrong somewhere, but it looks like they are still as confused as ever, and probably net negative karma from being wrong on the Internet and not knowing why.

My original goal was to help zelah understand, and I failed. My secondary goal was to play the game alluded to by lisper, who essentially said I cheated. There is nothing left for me to accomplish here. I wasn't trying to be pissy and storm out the door in a cloud of drama, but rereading, it seems like that's probably the simplest interpretation of my last post. So... sorry for that. I'm still not writing you any theorems.


> Now go back to zero and ...

You can't ever finish adding epsilon infinitely often, so everything after that is dead code. There is no "now" to speak of.

The same thing comes into play when you speak about an infinite number of zeroes after the decimal point and before the last digit. There is no last digit. You could have numbers with two "ends", but the so-called real numbers are different. They stretch infinitely to the right, without end in sight.

There are also p-adic numbers, which stretch infinitely to the left, but they behave very differently from real numbers. https://en.wikipedia.org/wiki/P-adic_number


So if I understand you correctly, you find it unconvincing, and therefore generations of mathematicians who study these things must all be wrong.

Perhaps you simply don't understand the argument in detail, and are relying on your intuition. And perhaps your intuition is faulty.

Which seems more likely?

So let me try to provide a better insight for you. Consider the collection of natural numbers, including 0. Call it N. We all agree that N, also described as the set of non-negative integers, is infinite.

Now imagine flipping a coin at time t_0, t_1, t_2, etc. If you're worried that this will take infinite amounts of time, we can suppose that each flip - because we are practised - takes half the time of the previous flip, so all the flips can be done in finite time. However, we're in the realm of Pure Mathematics now, chasing the puzzle for its own sake, and not worrying about practicalities.

So what might the result be? Well, you might get all heads, you might get all tails, you might get alternating heads and tails; in practice, of course, you'll get something that looks random.

Let's think about all the possible results of flipping the coin. All possible results. Let F be the set of all possible results obtained from flipping the coin at each of t_0, t_1, t_2, and so on. We can think of F as functions from N to {H,T}.

So we have F and N. Let's wonder if it's possible to have a function from N to F that hits every element of F. Suppose we can.

So we have m:N -> F, and for every f in F, there is an n in N such that m(n)=f.

Do you think that's possible? Because hundreds of thousands of mathematicians say that it's not possible.


It is possible. The inverse of m is called q. The function q takes an infinite sequence of coin flips and one by one changes every heads to 1 and every tails to 0. The infinite string of zeros and ones is then prepended with a 1 and interpreted as a transfinite natural number in binary notation. This will not take forever because each change will only take half as long as the previous one. A transfinite natural number can exist since there are an infinite number of natural numbers. If we stop at 1-bit numbers then the natural numbers are not an infinite set. Likewise, if we stop at 2-bit numbers then the natural numbers are not an infinite set. Our only option is to concede that transfinite natural numbers do actually exist and that they can be put into correspondence with all sequences of coin flips.

Hundreds of thousands of mathematicians are wrong.


So I asked: is it possible to have m:N -> F such that for every f in F, there is an n in N such that m(n)=f?

Your reply says yes, but then your construction does not do it. In particular you said:

> The infinite string of zeros and ones is then prepended with a 1 and interpreted as a transfinite natural number in binary notation.

But the set N does not have transfinite natural numbers, so q does not map F to N, it maps F to something else.

So I ask again, is it possible to have m:N -> F, such that for every f in F, there is an n in N such that m(n)=f?


>But the set N does not have transfinite natural numbers, so q does not map F to N, it maps F to something else.

How many natural numbers are there? How many bits does it take to represent the average natural number? If you believe the natural numbers do not include transfinite numbers then how do you pick a successor when counting? There are infinite picks to be made so some of the picks must be transfinite. What I am calling a transfinite natural number must exist in N because N is an infinite set.

Assume that N has only finite numbers in it but is itself an infinite set. Would you care to tell me which number (or numbers) are listed twice? But then it is not really a set!


> How many natural numbers are there?

Infinitely many, more than any finite number.

> If you believe the natural numbers do not include transfinite numbers then how do you pick a successor when counting?

Just add one.

> There are infinite picks to be made so some of the picks must be transfinite.

No, adding one to a finite number does not result in a transfinite number.

> What I am calling a transfinite natural number must exist in N because N is an infinite set.

The definition is that N is the smallest set that contains 0, and every successor of something already in N. Every finite non-negative integer is there, and nothing else - they are all finite.

> Assume that N has only finite numbers in it but is itself an infinite set. Would you care to tell me which number (or numbers) are listed twice?

No number has to be listed twice - just because each thing in the set is finite, that does not mean that the set has to be finite. There is no contradiction in N being infinite, but all its elements being finite.

You appear to be using words in a non-standard manner. As such, quite simply, you need to be amazingly careful, or you will not be understood.

Certainly I don't understand you.


I think you are confusing "arbitrarily long" with "infinite".

The leading/trailing zeros are confusing. So many digits are confusing. Let's pick an easy model: The "naïve roman numbers" (This is a made up name, not a technical name.)

Let's consider the set that has

I

II

III

IIII

IIIII

IIIIII

...

IIIIIIIIIIIIIIIIIIIIIIII

IIIIIIIIIIIIIIIIIIIIIIIII

IIIIIIIIIIIIIIIIIIIIIIIIII

...

...

i.e. all the strings that have a bunch of "I" characters.

Each element has a finite amount of "I". If you delete one of them you get a shorter string/number.

The set has strings that are as long as you want. If you print them in an 8pt Times New Roman font, you can pick one of them that is long enough to wrap around the Earth, one of them is long enough to reach the Moon, ...

This is an infinite set, where each element has a finite amount of "I".


Hundreds of thousands of mathematicians are wrong.

No. No natural number has an infinite number of digits, and you are wrong.

If you weren't, of course, it would be easy to prove me wrong -- simply name the natural number to which the successor function is applied that results in a "transfinite" number, whatever that is.


Be careful, there are such things as transfinite numbers, and they are well-founded. They let us do wonderful things like transfinite induction, and to prove, for example, that Goodstein's Theorem is true, even though it's unprovable in Peano Arithmetic.

But transfinite numbers are not "natural numbers", they are not in the set N, they don't have infinitely many digits, and in the context of this thread, they are a red herring.


I'm fine with nonstandard models; my scare quotes on "transfinite" were more to emphasize that the term was being used without basis or definition.


Phew, you have some courage, questioning the foundations of modern Mathematics in a place like this. But I can relate to your concerns about Cantor's argument. When I first heard it, it also felt artificial and unconvincing to me.

What helped me (as with many proofs and concepts in Math) was an image, a visual metaphor if you like.

Imagine a very, very large paper on which you place infinitely many dots in a grid. That's the infinity you were referring to, the infinity of a for-loop, discrete infinity. Here's the trick: you can always add more dots, say by making the distance between grid points half as small, which would quadruple the number of dots in your grid. But no matter how many dots you place on the paper, no matter how fine your grid, there will always be holes (imagine "zooming in" on a square of four grid points). In fact, most of the paper will be empty!

The other kind of infinity, continuous infinity, does not have any holes. Every spot is covered. You could not add any grid point, because the whole paper itself is painted.

I'm not a "full-time Mathematician", so this view may be entirely wrong. But it helped me understand and appreciate Cantor. Perhaps it did the same for you.

Cheers!


That's a very nice intuition, but for the wrong concept. What you have been describing is the difference between a dense set (almost no holes) and a nowhere-dense set (holes everywhere).

It turns out that there is a nowhere-dense set that is still uncountably infinite: https://en.wikipedia.org/wiki/Cantor_set


No, iamlucaswolf is correct, describing a countable dense set (the dyadic rational points) and an uncountable dense set.


> To see that this is false simply realize that you don't actually have to write a decimal point to specify the real numbers in this range. Without the decimal point these real numbers just become natural numbers.

There are no natural numbers with infinite digits, so this is not correct.


Your answer is buried, which is too bad, because it is the one-sentence rebuttal to the construction above.


> There are no natural numbers with infinite digits, so this is not correct.

Exactly. If you allow an infinite number of digits you wind up with the p-adic numbers, which are uncountable just like the reals.


That's not how infinity works, which is why many results involving infinity are counterintuitive.

For instance, consider the natural numbers and just the even natural numbers. Intuition says these two sets must differ in size, because I took one set and removed half of its elements. But the mapping f(x) = 2x is a trivial bijection from the natural numbers to the even natural numbers, showing the two sets are of equal size.
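For concreteness, a trivial Python check (my own illustration) of that bijection on a finite prefix:

```python
# f(x) = 2x pairs each natural with a distinct even number.
f = lambda x: 2 * x
evens = [f(x) for x in range(10)]
assert evens == [0, 2, 4, 6, 8, 10, 12, 14, 16, 18]
assert len(set(evens)) == len(evens)   # injective on this prefix
assert all(e % 2 == 0 for e in evens)  # lands in the evens
```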

Do you agree with the statement that two infinite sets are of different cardinality if it can be shown that no bijection exists between them? Let's consider an uncountable set that's simpler to imagine than the real numbers: the power set of the natural numbers, that is, the set of all subsets of N. It can be shown that no bijection exists between any set and its power set (Cantor's theorem [1]). Do you agree with that theorem? It can also be shown that a bijection does exist between the power set of N and R [2], implying they are of the same cardinality, and are both of a larger cardinality than N.

> There is only one infinity. It means "repeat".

Then what does it mean to you that infinite sets can be constructed that can be shown to have no bijection between them?

[1] https://en.wikipedia.org/wiki/Cantor%27s_theorem

[2] https://en.wikipedia.org/wiki/Schr%C3%B6der%E2%80%93Bernstei...


The hubris in this thread is spectacular. Multiple people are demonstrating that your grasp of the math here is wrong, yet you keep arguing that no, the mathematicians must be wrong because you haven't grasped the idea. You could argue that only one infinity exists/actually matters in devland, but in mathematics it's absolutely not the case.


Based on your other replies, I suspect it's not worth engaging, but for anyone else reading, here's a great article by a math journal editor writing about all of the different attempts to disprove Cantor's diagonal argument he received and tracing out some common mistakes he saw:

http://www.logic.univie.ac.at/~ykhomski/ST2013/Hodges.pdf


Thank you - I have read this and do realize I am out of my depth.


[I'll try a non technical argument to convince you. It's also not a complete argument, so you must think about this for a while.]

> If you believe n is a natural number then you must also believe that n x 10 is a natural number. One more digit!

If you interpret the natural numbers in this way, the important property is that they have only a finite amount of "interesting" digits. Almost all their digits are zero.

You can define the set of numbers that have a finite amount of non-zero digits. Let's call them the "Very Boring" numbers.

The set of the "Very Boring" numbers is infinite, but it's the same infinity as the Natural numbers.

You can extend this set to include the periodic numbers, for example 46.2222222222222... and 462.2222222222222... and 4622.2222222222222... ... Let's call them the "Boring" numbers. You still get the same infinity.

You can also add 71.3535353535... and 713.5353535353... and 7135.3535353535... and all the eventually periodic numbers. Now you have the "rational" numbers. You still get the same infinity.

Cantor's diagonal argument fails with Very Boring, Boring and Rational numbers. Because the number you get after taking the diagonal digits and changing them may not be Very Boring, Boring or Rational.
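For what it's worth, the "same infinity" claim for the rationals can be made concrete with an explicit enumeration. Here is a Python sketch using the Calkin-Wilf sequence (my choice of enumeration, not the only one), which lists every positive rational exactly once:

```python
from fractions import Fraction

# Calkin-Wilf: q_{n+1} = 1 / (2*floor(q_n) + 1 - q_n), starting from 1.
def calkin_wilf(count):
    q = Fraction(1, 1)
    out = []
    for _ in range(count):
        out.append(q)
        q = 1 / (2 * (q.numerator // q.denominator) + 1 - q)
    return out

first = calkin_wilf(6)
# 1, 1/2, 2, 1/3, 3/2, 2/3, ... each positive rational appears exactly once
assert first == [Fraction(1, 1), Fraction(1, 2), Fraction(2, 1),
                 Fraction(1, 3), Fraction(3, 2), Fraction(2, 3)]
```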

--

A somewhat unrelated technical detail that may be useful:

Most of the time you don't directly prove that the cardinality of the real numbers between 0 and 1 is a bigger infinity than the cardinality of the natural numbers.

It's much easier to consider the infinite strings of digits like "0.765653625367523765..." or "0.5265362556..." or "0.000073468763478..." and also the ones with repetitions like "0.0006767000000..." or "0.0072257822222222...". This is essentially a copy of the real numbers, but in this copy "0.2999999999999..." is different from "0.300000000000000..."

This trick makes it much easier to prove that the diagonal number (add 1 to each digit, say) is not in the list. Then it's possible to fix the details and use the real numbers instead of the infinite strings of digits.


>It's much easier to consider the infinite strings of digits like "0.765653625367523765..." or "0.5265362556..." or "0.000073468763478..." and also the ones with repetitions like "0.0006767000000..." or "0.0072257822222222...". This is essentially a copy of the real numbers, but in this copy "0.2999999999999..." is different from "0.300000000000000..." This trick makes it much easier to prove that the diagonal number (add 1 to each digit, say) is not in the list. Then it's possible to fix the details and use the real numbers instead of the infinite strings of digits.

What am I missing?

"0.765653625367523765..." could be assigned the transfinite natural number beginning "1765653625367523765..."

"0.000073468763478..." could be assigned the transfinite natural number beginning "1000073468763478..."

Transfinite natural numbers must exist otherwise you do not have an infinite set.


>Transfinite natural numbers must exist otherwise you do not have an infinite set

Transfinite

This word does not mean what you think it means. I'm not entirely sure what you think it means, but it's definitely not what it means to everyone else.


Suppose I have the set F = { all integers x where x can be written with a finite number of digits }

Are you saying the set F is finite?

Or are you saying that the set F contains transfinite numbers?


The difference is that `0.7656536...` is smaller than 0.7656537, whereas a similar number with every digit moved left of the decimal point has no bound.


Yes they could, but note that these real numbers are all between 0 and 1. So the "infiniteness" of R is "higher" than that of N.


It's OK not to be convinced by an argument, even a mathematical proof, and especially an informal proof.

But Cantor's argument is correct.

If you don't find a mathematical proof convincing even though all the trained mathematicians seem to believe the proof is correct, here's my advice. You should FIRST convert the proof into a sequence of valid deductions in a fixed logic. (If you cannot do this, you don't understand the proof/theorem; perhaps (re-)take a few mathematics courses.) If you can find a mistake in the completely formal proof, then you can convert that mistake into an informal explanation. If your disagreement boils down to a disagreement with an axiom in the formal system even though most mathematicians accept it as an adequate foundation, realize you're getting close to philosophy.

This advice is meant for people at the first phase in Terry Tao's hierarchy of mathematical maturity. So not great advice if you're a genius or a trained mathematician (read: people call you doctor).

> There is only one infinity. It means "repeat". It is simply the interplay of finite state with process. You can think of it as an "infinite loop" in programming. To say that one infinity is smaller than another is to deny that the smaller is infinite. Infinite means without bound.

But "modeling non-terminating loops in a computer program" is NOT the motivation for real numbers, so this foundational criticism of completing the rationals makes absolutely no sense.


> But "modeling non-terminating loops in a computer program" is NOT the motivation for real numbers

Indeed Turing himself proved that there are non-computable real numbers.


What a shame you're getting downvoted, simply because you're wrong.

    > Without the decimal point these real
    > numbers just become natural numbers.
If your argument is true, presumably you could write a simple program that would generate all the real numbers with a single, infinite loop?

I wonder how you'd manage to generate 0.1 and 1.0 with your scheme.


I agree that the downvoting is unfortunate. I guess the assumption is that the post is simply a troll, which seems to be backed up by some of the down-thread replies. Even so, the original post just seems like a list of common misunderstandings about Cantor's notion of infinity and how it corresponds to the way that we use the word in colloquial use.

In defense of the GP, I think they were simply thinking of numbers in [0,1). If you write them backwards, it almost seems like it would work:

    1    ->  .1
    2    ->  .2
    ...
    9    ->  .9
    10   ->  .01
    11   ->  .11
    12   ->  .21
    ...
    3124 ->  .4213
    ...
Done!

Unfortunately, you are either stuck with the fact that some numbers (even simple rational ones) do not have a finite decimal expansion. In most formal proofs of the diagonal theorem, we use the infinite representation without trailing 0s (using trailing 9's instead) to force uniqueness, which makes it even trickier.

Or you're stuck trying to assume that there are natural numbers with an infinite representation, so the decimal representation of the rational number 1/3 would correspond to an actual natural number.
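A quick Python sketch (my own illustration) of why the reversed-digit map falls short: every image terminates after finitely many digits, so 1/3 = 0.333... is never produced.

```python
# The GP's map: n -> "0." followed by the digits of n reversed.
def rev_map(n: int) -> str:
    return "." + str(n)[::-1]

assert rev_map(3124) == ".4213"
assert rev_map(10) == ".01"
# Each image has only finitely many digits -- exactly len(str(n)) of them --
# so no natural number maps to a non-terminating expansion like 1/3.
assert all(len(rev_map(n)) == len(str(n)) + 1 for n in range(1, 1000))
```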


"What a shame you're getting downvoted, simply because you're wrong."

It's not simply because of being wrong. Being wrong is a hazard to your karma, yes, even sometimes just asking questions can be a hazard (which I dislike and do what little I can to fight, but it's still obviously true). But there's a much bigger hazard being invoked here.


That is extremely disingenuous of you. He's being downvoted for his massive arrogance.


You need to write down the 1-1 correspondence you are proposing. Don't describe it in words, actually start writing the natural numbers in one column and the corresponding reals in the other. When you do this, you will find either that the 1-1 correspondence that you are envisioning doesn't actually work (i.e., it isn't 1-1), or that it does work ... but in that case, Cantor's diagonalization argument can be applied to it.


I find that people who question Cantor's (obviously correct) theorems and concepts seem to share commonalities with people questioning Einstein's (obviously correct) theories and concepts. Let's just say if Cantor's last name was Rasmussen and the infinities weren't indexed Alephs, I wouldn't expect people to get their panties in a bunch over abstract math, and care so much about proving him wrong, a fraud, a lunatic, etc (without even a basic understanding of the subject).

It hits the subconscious strings of jealousy and mistrust of the majority towards the more successful almost-the-same-but-not-quite minority so very perfectly, especially when the subject matter put forward by the minority is cryptic or unintuitive at the first glance...

Just a theory :)


The natural numbers are each finite. In set theory, the standard way they are defined is that they can be constructed by assuming the existence of the empty set (or 0) and assuming that if you "insert a set into itself" the result will be a set. (So they can be thought of as {} = 0, {{}} = 1, {{}, {{}}} = 2, etc.).

The natural numbers are simply the (smallest) set that contains the empty set and is closed under this "insert a set into itself" operation (successor). It only contains finite sets since the successor operation will never turn a finite set into an infinite set.
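A tiny Python sketch (my own illustration) of that construction, with successor(x) = x ∪ {x}:

```python
# Von Neumann naturals: 0 is the empty set, successor(x) = x | {x}.
def successor(x: frozenset) -> frozenset:
    return x | frozenset([x])

zero = frozenset()      # {} = 0
one = successor(zero)   # {{}} = 1
two = successor(one)    # {{}, {{}}} = 2

assert len(zero) == 0 and len(one) == 1 and len(two) == 2
assert zero in two and one in two  # each natural contains all smaller ones
```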

The existence of this set of ALL natural numbers (an infinite object) relies on an axiom, the axiom of infinity.

There is no transfinite natural number.


> There is only one infinity.

Fractions are countable. Real numbers are not.

In other words, fractions, integers, positive integers all belong to the set of countable infinities, meaning there is an isomorphic function that bidirectionally maps each positive integer (the count) to every item in the target, countable infinity.

There is no isomorphism between real numbers and any countable set. If you create one before you are 40, you will get a Fields Medal.

The isomorphism and the distinction between sets that have them and sets that do not has proven useful.

You could think of infinity as one concept, but it is usefully divided into countable and uncountable versions.


The error in your reasoning lies here :

> Without the decimal point these real numbers just become natural numbers

This is wrong because irrational numbers have an infinite number of digits; if you remove the «dot» you end up with a number infinitely long, which is not a natural number. □

Edit: the set of natural numbers is infinite, which means it can contain arbitrarily big numbers, yet it doesn't contain «numbers» with an infinite amount of digits.


There is one English definition of infinity, but when applied to ordinals ("degrees of freedom"), you get different mathematical concepts of infinity.

Vsauce's explanation is approachable: https://youtu.be/SrU9YDoXE88


Thanks - that was very fun!


Why do mathematicians normally not distinguish between numbers for counting and numbers for measuring (length, volume, etc)? They are fundamentally different, and conflation makes many arguments hard to grasp.


I think I figured it out: you must be one of the aliens predicted by the downward Löwenheim–Skolem theorem!

If we could have a model of set theory (a set1 of all set2s, where a set1 is a set in our set theory and a set2 a modeled set, like an interpreter), which is necessarily infinitely large, then the downward Löwenheim–Skolem theorem implies there is a model that is only countably infinite. There is a model of the real numbers in there, so from our point of view, the real numbers are countable! Though from the model's point of view, Cantor's diagonal argument still works, and they are not countable!

My theory is that you are looking at our real numbers and thinking they are countable because you have a much more powerful set of natural numbers than the rest of us. Unfortunately for you, your bijection between our reals and your naturals does not carry over to our set theory. You should find, however, that you do not have a bijection between your reals and your naturals.

One problem with this is that Gödel's incompleteness theorem implies that if we ever had a model and could prove it was a model, then set theory would be inconsistent.

More seriously, Cantor's argument as usually given is not Cantor's original argument. He originally did something involving nested closed intervals, but it was simplified to listing out the digit expansions of a list of real numbers.

I am partial to the following argument: suppose there were an invertible function f between N and infinite sequences of 0's and 1's. The type of f is written N -> (N -> Bool) since an infinite sequence of 0's and 1's is a function from N to {0,1}. Let g(n)=not f(n)(n). This is a function N -> Bool. Since f is invertible, let finv : (N -> Bool) -> N be its inverse. Then finv(g) is a natural number. Plug this into g:

    g(finv(g)) = not f(finv(g))(finv(g))
               = not g(finv(g))
Uh oh, the value of g at finv(g) is not whatever its value is. Something must be wrong: it could be there is no set of natural numbers or booleans (unlikely), that g is not definable (but it is a simple expression of f, and not even recursive; unlikely), or that there is no such function f (this is the only assumption remaining, so there must not have been an f that is a bijection).

Notice there was nothing special about N in this argument. It could have been a finite set, an infinite set, or even the real numbers, and we still would have concluded there is some value at which g is not its own value!
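The argument can be run mechanically. Here is a quick Python sketch (my own illustration) checking that g(n) = not f(n)(n) escapes any candidate f:

```python
# Given any f: N -> (N -> bool), g(n) = not f(n)(n) differs from the
# n-th listed sequence at position n, so g is not f(k) for any k.
def make_g(f):
    return lambda n: not f(n)(n)

# A couple of arbitrary candidate enumerations (my own examples):
candidates = [
    lambda n: (lambda k: k % (n + 2) == 0),
    lambda n: (lambda k: (n >> k) & 1 == 1),
]
for f in candidates:
    g = make_g(f)
    # g disagrees with f(n) at index n, for every n we check
    assert all(g(n) != f(n)(n) for n in range(100))
```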

It is possible to prove that there are at least as many real numbers as there are sequences N -> Bool using infinite series. It is also possible to prove there are no more real numbers than there are such sequences.


I don't think this is an alien that applied downwards Löwenheim-Skolem to the reals. Rather I think this is an alien which applied upwards Löwenheim-Skolem to Peano arithmetic. I think this because the alien talks about infinite natural numbers, aka non-standard natural numbers. It has used upwards Löwenheim-Skolem to get a model of Peano arithmetic that has the cardinality of the continuum, and then there is of course a bijection between our reals and the thing that it calls the natural numbers. I see no sign that it's using a non-standard model of the reals, but lots of signs that it's using a non-standard model of the naturals.


The power of modern logic is that we can make predictions like this!

I wasn't sure whether the Löwenheim–Skolem alien was just translating things into our set theory for our sake, but I think your proposal is more likely.


Congratulations. You hijacked a thread.


"If there really are an infinite number of natural numbers then some of them must be of a transfinite number of digits or else you would be including numbers in the list more than once."

Theorem: There are no infinite natural numbers.

Proof: By Induction:

Induction Start: 0 is finite.

Induction Step: If n is finite then n+1 is finite.

The principle of induction then tells us: All natural numbers are finite.

If all natural numbers are finite, then there are no infinite natural numbers.

End of proof.
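For what it's worth, the induction schema this proof leans on can be written down directly. Here is a sketch in Lean 4 (my own formalization of the proof's shape, with P n standing for "n is finite"):

```lean
theorem all_nats_satisfy (P : Nat → Prop)
    (base : P 0)                     -- induction start: 0 is finite
    (step : ∀ n, P n → P (n + 1)) :  -- if n is finite then n + 1 is finite
    ∀ n, P n := by
  intro n
  induction n with
  | zero => exact base
  | succ n ih => exact step n ih
```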



