In my experience with children, one of the easiest-to-grasp concepts of infinity is provided by the transfinite ordinals, since they can be viewed as a continuation of the usual counting manner of children, but proceeding into the transfinite:
1, 2, 3, ⋯, ω, ω+1, ω+2, ⋯, ω+ω = ω⋅2, ω⋅2+1, ⋯, ω⋅3, ⋯, ω², ω²+1, ⋯, ω²+ω, ⋯
Presumably this person has no experience with 6 year olds? This explanation is horrendous haha
No it isn't. If you ask a child what comes after infinity, "Infinity + 1" is pretty much the default answer. Any kid who knows multiplication knows "Infinity + Infinity" is the same as "Infinity Times Two". The answer of "Infinity TIMES Infinity" is also popular for kids to say when they know a number bigger than their friend (who just proclaimed infinity is the largest number).
I thought that most of us learn at an early age, as a result of this kind of exchange, that "infinity" is not "the biggest number" or even a number at all, as far as the ordinary notion of "number" goes.
No math instruction I had ever discussed infinity with any rigor until calculus -- and even then, it was only infinity as a limit. Infinity as a concept was brushed off in the same way that the square root of negative one was brushed off until we were actually taught about it.
On the one hand I get why that is - the calculus notion of infinity is the one that tends to be useful in applied math - on the other hand it's a shame because the set theoretic notion of infinity has more to offer to someone trying to ponder the nature of the infinite.
Or put another way, "what's ∞ + 1" basically invites the non-answer "that's not a well-formed question" whereas "what's ω + 1" gives you a whole intellectual thread to pull on.
I've always been disappointed that number theory, set theory, etc aren't introduced in middle school or high school.
It makes sense, since those are a lot less useful than the subjects that are taught, but something like number theory is incredibly approachable to a middle school student. And it can show students that math can be a lot less about memorization and a lot more about creative thinking w.r.t. proofs.
I would argue that "that's not a well-formed question" is a correct answer, not a non-answer.
...and that the intellectual thread you are pulling on is a (more) artificial notion, constructed by set theorists for the sake of set theorists, not for the sake of counting or measuring in any real sense.
I’d disagree personally. The ideas that you can add things to an infinite set, or multiply an infinite set, are actually useful concepts. If you imagine the universe is infinite and as such has infinite stars in it (ω), you could discuss how some infinite universes have twice as much star density as our infinite universe (ω⋅2). Or imagine taking a copy of our universe and adding a single star 10 light years from earth (ω+1). In a very real sense the second universe would have twice as much stuff in it as the first universe, even if you can countably map the two universes to each other.
Or maybe put another way, taking the idea that infinity is just infinity makes a lot of sense when you’re primarily considering non-infinite numbers. When you’re primarily considering the concept of infinity and what you can do with it mathematically though, using systems that let you describe infinity with more nuance makes a lot of sense.
It came up a bit in some physics classes: when you can mathematically make something go to positive or negative infinity, being able to remove it from the simplified calculation of something is very handy.
You're not alone! Georg Cantor was deeply concerned about the theological implications of his work on transfinite numbers, to the point that he wrote letters to Pope Leo XIII to explain why the new infinities were consistent with a God of an even higher order of infinity.
> Any kid who knows multiplication knows "Infinity + Infinity" is the same as "Infinity Times Two".
Or is it "Two Times Infinity"? (Hint: It isn't, because "Two Times Infinity" = "Infinity", while "Infinity Times Two" = "Infinity + Infinity". Not sure every kid knows that.)
I think you have that backwards. “Two times infinity” is “infinity, two times” or “infinity, twice,” which maps to Infinity + Infinity. “Infinity times two” is 2 + 2 + 2 + 2 + 2… forever.
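For anyone who wants to poke at this concretely, here is a minimal hand-rolled sketch (my own toy representation, not any library) of ordinals below ω², written as ω·a + b, just enough to see why "two times infinity" collapses to ω while "infinity times two" is ω + ω:

    # Toy ordinals below w^2, written as w*a + b with natural a, b.
    # Illustration only; it just encodes the standard ordinal arithmetic rules.
    from dataclasses import dataclass

    @dataclass(frozen=True)
    class Ord:
        a: int  # coefficient of omega
        b: int  # finite part

        def __add__(self, other):
            # (w*a1 + b1) + (w*a2 + b2): the finite b1 is absorbed whenever a2 > 0.
            if other.a > 0:
                return Ord(self.a + other.a, other.b)
            return Ord(self.a, self.b + other.b)

        def __str__(self):
            if self.a == 0:
                return str(self.b)
            head = "w" if self.a == 1 else f"w*{self.a}"
            return head if self.b == 0 else f"{head} + {self.b}"

    def nat_times(n, alpha):
        # n * alpha for natural n >= 1: n * w collapses to w.
        return Ord(alpha.a, n * alpha.b) if alpha.a > 0 else Ord(0, n * alpha.b)

    def times_nat(alpha, n):
        # alpha * n for natural n >= 1: repeated addition of alpha.
        result = alpha
        for _ in range(n - 1):
            result = result + alpha
        return result

    omega = Ord(1, 0)
    print(nat_times(2, omega))    # w      ("two times infinity" collapses)
    print(times_nat(omega, 2))    # w*2    ("infinity times two" is w + w)
    print(Ord(0, 1) + omega)      # w      (1 + w = w)
    print(omega + Ord(0, 1))      # w + 1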
Is addition defined _by_ set theory, or is set theory one way of defining addition? If it's the later, then there could be other ways of defining addition that don't have the same results for infinity (because our math system doesn't really "work" for infinity, or 0, depending on the circumstances).
I am in no way a mathematician. My question about the definition of addition as it relates to set theory is just that; a question.
It's the latter; I'm also not a mathematician, just a guy who worked through Halmos's "Naive Set Theory" in intense detail...
But your question actually hints at my most profound takeaway from that whole book. I think what you're saying is right, AND that foundations-of-mathematics folks spent a long intense period searching for different set theory axioms that did NOT lead to transfinite numbers. But anything anyone could come up with that included "the axiom of infinity" led to transfinites leaking in.
Which begs the question of how to think about these things. Are they "real"? Are they an oddball side effect that we shouldn't take seriously?
I think you've arrowed right to the philosophical heart of all of this.
Does everything become a paradox given enough time and/or thought?
I think we often end up at the end of logical thought processes back at the original question - how can we observe and describe a system that we are inherently a part of?
There are many ways of definiting everything. Most of them are equivalent in the ways that matter, which is why math "works" so well as the language of science. Some of them are different in critical ways, which opens up vistas of new objects and concepts.
I majored in math and my biggest problem with this is that you don't get to "do" anything infinitely many times in the math that I'm used to. In discrete contexts where infinity is used, you instead can "do" something an unbounded but finite number of times. In a continuous setting you are allowed to pick an arbitrarily large (finite) number.
In that context the first quantity that you refer to above is nonsensical because you can't "increment infinitely many times".
Secondly, I'm not sure your construction is correct, since your Infinity+1 set cannot be a singleton (it must contain all the numbers less than Infinity).
Sorry, of course you're right on "Secondly". The right construction is ω, ω∪{ω}, ω∪{ω}∪{ω∪{ω}}...
For the first point, I went through the book long enough ago that I can't rebuild the proof here, but IIRC the more rigorous idea is that you can construct an order-preserving bijection between 1+ω and ω given the recipe I had above for how to represent numbers as sets, but you can't do it for ω+1, which is order-isomorphic to ω∪{ω}. The axiom of infinity declares that ω itself is a set, opening the door for transfinite numbers.
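If it helps to see the successor recipe concretely, here is a tiny illustrative sketch of the finite stages using Python frozensets; ω itself is exactly the object the axiom of infinity has to hand you, since no loop will ever finish building it:

    # von Neumann naturals: 0 = {} and successor(a) = a ∪ {a}. Finite stages only.
    def successor(alpha):
        return alpha | frozenset({alpha})

    zero = frozenset()
    one = successor(zero)     # {0}
    two = successor(one)      # {0, 1}
    three = successor(two)    # {0, 1, 2}

    print(len(zero), len(one), len(two), len(three))   # 0 1 2 3
    print(two in three, two < three)                   # True True (element of, and proper subset of)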
I had it explained at a very early age as "three lots of three", and to imagine it like three boxes of three ice-creams. Treating the multiplication symbol as one would to indicate quantity in a list, thus calculating how many ice-creams there are.
The technique used by the Oregon public school system in the 80s went something like "Hand the child a 10x10 grid of numbers, then tell them, absent of any other context, that they must be memorized." I like your way better.
Ontario's 1990s curriculum was pretty awesome. The idea of dimension and sets were both introduced simultaneously and joined, using multiplication. Started in the 2nd grade and they just kept elaborating. Number lines and groups of items. (Tied it into geometry, too. Square numbers came up by at least 4th grade.) What is 3 x 3 but moving 3 units, 3 times in one dimension? Now, memorize these tables up to 12 x 12, you won't always have a calculator at hand.
I do though. I still blow minds when I put my iPhone calculator in scientific mode. Math education is important for many reasons. But teaching it as a practical survival skill using no tools does a disservice to the student. Either it is useful as a problem solving exercise or it is a practical skill that should take advantage of tools. "Just memorize this stuff" isn't useful because it backfires into hating learning. Nothing about math makes it ideal for memorization and none of my math teachers spent any time on study skills.
Nah. As somebody who ends up doing a ton of mental math, I think it's valuable. Yes, they should also learn how to use tools. But developing a feel for numbers is valuable, and I think that is much harder to do if one always relies on a calculator. (And yes, of course, this should be learned in a way that doesn't involve the kids hating it. But that's possible.)
Phrases like "you won't always have a calculator at hand" only serve to erode trust in the educator. It's simply not compelling, and for all practical intents and purposes is untrue. Even on backpacking trips I have a cell phone, even if it is off. If you believe mental math is useful then say that and explain the benefits. Students can smell a lie.
That sounds like an excellent thing to somebody who actually said "you won't always have a calculator at hand". Maybe you should find someone like that.
I'm not sure that HN readers-- a community that will disproportionately skew towards folks educated &/or employed in STEM fields-- are indicative of other people's contact with 6-year old kids.
Number blocks is a great show, my 5yo watches and, being entertained by it, absorbs more than I could easily get him to sit still for. Then he asks me questions about what he watched and is more engaged with my answers as a result.
Making a subject "fun" is alright, but making it entertaining (IME) makes for more productive engagement.
Yes. Simple multiplication and even division and fractions are part of the national curriculum at ages 5 to 6 in the UK. Which is about the age when I remember learning them decades ago too. I think we learned how to add and multiply fractions too.
By age 6 to 7 they're expected to understand that addition and multiplication are commutative, while subtraction and division are not.
They don't usually know formal arithmetic multiplication, but they understand the concepts of repeated addition and subtraction well. Most places in the world do start teaching multiplication at age 6/7.
Yes, I learned long division fairly well around that time. I was fortunate (/ disruptive) enough to be sent to a "Montessori school". Long division was definitely pushing it when I was about 5 or 6, but, honestly, given steadier instruction in math starting earlier, I suspect I could have been entirely solid on long division by that time and moving on to algebra. And, I think this is true for a reasonable proportion of children.
My experience, ultimately, was much less ... 'high-quality', let's say. When I left the Montessori school (by 3rd grade), I learned practically no math from then until after high school. First, in normal 'elementary' school (US), multiplication was still being covered in 6th grade. Then, suddenly (from my perspective), letters were being brought into the picture in 7th or 8th grade. So, in my arc, math started to not make sense, at all.
From my perspective, we had spent multiple years on multiplication and long division, which I already understood very well by the end of 2nd grade ... so, there was a period where I basically didn't learn anything, where it seemed like we'd reached the end of math or something. Or, perhaps, like there were some sort of subtleties remaining in multiplication and division. It just gave me a chance to be bored with all of it; boredom correlates heavily with mistakes for kids with attention issues (IMO); this fed into some sort of doubts about my understanding of everything, etc.; and then, suddenly, there was new material again starting in 7th grade. Material that was 'mechanical', and that didn't seem to have explanations I could understand.
Ultimately, I struggled along with that garbage through high school, then, after, took a course where we actually did PROOFS. Basic number theory stuff - modular arithmetic, etc. Bam, suddenly, the subject started to make sense.
Typing this out actually makes me slightly angry. I'm not sure I previously connected it all together - why I had so much trouble with math for some years ... how this 'arc' was pretty much perfectly engineered to make math a problem, for me. In any case, schooling through high school can be a really low quality experience at times - for some students, subjects, etc. The math curricula, methods of teaching, and progression I was exposed to, worked together, in some sense, to make the subject a problem for me. To do almost the opposite of what was intended - to pretty well impede learning. There's no one factor in that story I can point to and say 'here, fix this' ... no one involved in the story was actively attempting to do anything other than what they thought was best or what they were required to do, but, the net result was honestly worse - I now believe (and believed some years ago, even without quite this analysis) - than if I'd just been given some selection of math material to pick from and been allowed some sort of semi-self directed coursework.
Even better, though, if I'd simply had that course with proofs / basic number theory in, say, 8th grade ... guh, would have avoided so much pain, I'm pretty sure...
I explained basically this to my 4 year old nephew recently. He wanted to count to infinity. I asked him what is the biggest problem with counting to infinity? It's too slow. I said OK, let's take bigger steps. We counted by 2's, then 10's, then hundreds and millions and then zillions and other ridiculous superlative numbers. It doesn't really matter, because everything is still too slow. So then we said OK, let's make up a number ω that is half way there: one ω, two ω, done. He's happy. Then I told him to add one more and sent him back to play fetch with the dog.
I taught my kid that the way to think of infinity is that it's like hugs: there's always one more. Unlike candy, which is limited and can be counted, infinity cannot be counted.
Hmm, that could potentially cause confusion later. There are 'countable' and 'uncountable' forms of infinity / infinite sets.
A countably infinite set could be 'counted' (i.e., you could sit around labeling elements using the 'natural' or 'counting' numbers) in the sense that we might count candy. The issue for a human being is that you'd run out of time but not elements to count, at least, proceeding in the sense one might count the candy - a piece at a time. Of course, you can, instead, simply provide a 'bijection' (between the natural numbers and the set you wish to prove is countably infinite), and in a sense, you are done.
The subject of infinity and infinite sets can be kind of subtle, and for years the best mathematicians made many mistakes and had many difficulties handling these concepts in ways that didn't cause potentially serious problems (absurdities, paradoxes, etc.). I think that with the development of things like Zermelo-Fraenkel set theory, Gödel's incompleteness theorems, etc., things became a lot clearer. It's a lot easier, with all of the groundwork laid by people who worked on these, to get a good sense of what is possible and what isn't - what gets you into trouble and what doesn't. But, boy, did it twist the minds of the people trying to work it out at the time. In part, this is because it was less clear, without development in these areas, what math even is and what its limits are ... what its relationship to the structure of the universe, say, even is (something along those lines, in my opinion / experience).
> Hmm, that could potentially cause confusion later [...]
(Q: Do you have kids?)
Our experience is that pretty much everything parents tell young children could potentially cause confusion later.
In no particular order: Father Christmas aka Santa Claus, The Tooth Fairy, Where Babies Come From... it's a long list, our eldest is 13 and we're not done yet.
(sorry for responding after so many days - didn't see reply before)
Ha! Certainly a fair and good point.
I would propose that there is a spectrum when it comes to the 'damage' (a term that comes to mind right now) likely to be caused by various kinds of potentially confusing information.
Given differences in the way different people understand, well, pretty much anything, I'd propose that it might best be thought of as some set of statistical distributions. Using this kind of framework*, we might be able to reasonably improve thinking about what these distributions might look like, how we might tailor the information we provide and how much work we put into trying to avoid introducing possibilities for confusion, etc. Further, I suggest 'set' as we might benefit from 'parameterizing' (thinking about distinct distributions) in terms of traits - autism, ADHD, anxiety, etc.
In my mind, and based on my experiences, I would (in part, thinking in terms of the model I'm proposing here) be much more wary of asserting potentially incorrect information in the realm of math and some of the more 'abstract' subjects that people tend to have more trouble with in the first place. A concept like 'Santa Claus' isn't something that a child may need to be able to use as a basis for building serious skills on, say. Of course, 'Santa Claus' can be helpful for building imagination, ability with storytelling, developing narratives, etc. ... but the fundamental information regarding some specific entity 'Santa Claus' is not really problematic, in terms of the perspective I'm trying to put forward here. On the other hand, statements that are 'too strong' (or 'too weak', possibly), or using terms in ways that aren't standard in mathematical discourse ... these sorts of things can make it feel like the ground is really slipping away as you try to learn other bits of a subject that, again, for many people is ... nebulous ... it's not (so) visual, tactile, ... it's very strange in many ways, early on.
That's the best I can do, right now, in response, I think.
You raise a good point, for sure. And I'm sure there are entire books, there are papers out there in the literature, etc. Personally, I can HIGHLY recommend books like Polya's "How to Solve It" ... as a starting point regarding 'math pedagogy'. That book is a gem, IMO, and gives some real insight into how to think and problem solving in general. And, it's a good gateway to many more resources and research into these areas.
As with everything human and 'complex', there's really no 'optimum' or chance of finding any such thing, I think. Avoiding the worst impacts ... essentially, in terms of opportunities and establishing bases etc., that's doing pretty well - raising children / 'new humans' is hard.
* Which is a way I've been trained to think, sorry if it's not a great model for you - kind of best I can think of off the top of my head and with limited time this moment
> this person has experience teaching children mathematics
Just as an FYI, there are plenty of countries in Europe where many 6 year-olds are still in kindergarten, not at school; as a result they most likely have not properly started learning numbers or reading and writing, unless the kindergarten is playfully toying with numbers already, usually with no obligation but as an enrichment for those kids who love such activities.
Also my first thought. I assume he's writing this to other people who know what transfinite ordinals are (I don't understand the explanation) and would frame it differently with an actual kid. Even in context it's a hilarious quote though, I think it's possible this was on purpose
I think the big assumption that kids can't get "complicated" ideas is faulty.
Sure, they lack rigor, and often will just get the sketch of the idea.
And it's a lot more work to think about how to put things in the terms that a kid will understand given their knowledge so far.
But this idea? "Infinity plus one?!@" --- this is a conversation elementary school kids have on their own. Pulling it a little closer to a sane footing in ordinal analysis is not hard. Half of six year olds can handle it.
On the other hand, there's not a lot of obvious utility to teaching a six year old this particular concept early. On the gripping hand, there is a cost to keeping kids in a bubble where you don't talk about any big ideas (of whatever sort-- mathematical, philosophical, historical, linguistic) at all, or excessively dilute them to the point where they're meaningless.
Richard Feynman would be making disapproving noises.
Explain everything like you're talking to a fifth grader. If you can't, you don't understand your problem fully.
He spent much of his professorship agonizing about how to fit all of physics into a freshman lecture. When he couldn't, he knew we needed to think more about that area.
Transfinite number systems such as the hyperreals should really be taught in school, as they make many parts of math easier: an algebraic definition of derivatives (including an algebraic derivative of step functions without a Dirac 'density') and, yes, natural addition and multiplication.
> Transfinite number systems such as the hyperreals should really be taught in school, as they make many parts of math easier: an algebraic definition of derivatives
Q: What proportion of children study maths long enough to understand derivatives?
Having mechanical formulae for solving closed form equations involving the notation for derivatives… does not mean that derivatives have been understood, in my experience of tutoring not-especially-mathematically-inclined folks.
Do you think 90% of attendees of Gymnasium (which I don’t think is the majority) understand derivatives? My friend’s wife who attended Gymnasium and got reasonably good grades most certainly did not, but she is my only example of a non-mathematician Gymnasium graduate, so I’m quite willing to be convinced she is an outlier.
They are not explaining it to a 6-year-old; they are explaining it to somebody who will in turn explain it to a 6-year-old, which is a different task and has to be optimized in a different way.
In the comments of the answer the author says they have a 4 and a 9 year old:
"Bill, despite your emphatic comments, I know for a fact that counting into the ordinals is something that children can easily learn. I have two young children (ages 4 and 9), who are happy to discuss ℵα
for small ordinals α---although my daughter's pronunciation sounds more like Olive0, Olive1---and my son can count up to small countably infinite ordinals. The pattern below ωω is not difficult to grasp. Below ω2, it is rather like counting to 100, since the numbers have the form ω⋅n+k, essentially two digits"
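A rough way to see the "essentially two digits" remark: an ordinal below ω² is just a pair (n, k) standing for ω·n + k, and counting through them is ordinary lexicographic counting on the pairs. A toy sketch (my own ad-hoc notation, purely for illustration):

    # Ordinals below w^2 as pairs (n, k) meaning w*n + k; counting through them
    # is just lexicographic order on the pairs, hence "essentially two digits".
    def successor(x):
        n, k = x
        return (n, k + 1)

    def show(x):
        n, k = x
        if n == 0:
            return str(k)
        head = "w" if n == 1 else f"w*{n}"
        return head if k == 0 else f"{head}+{k}"

    x = (0, 0)
    for _ in range(4):          # 0, 1, 2, 3
        print(show(x)); x = successor(x)

    x = (2, 0)                  # jump to the limit ordinal w*2 (limits are never reached by successor steps)
    for _ in range(4):          # w*2, w*2+1, w*2+2, w*2+3
        print(show(x)); x = successor(x)

    print((1, 5) < (2, 0))      # True: w+5 comes before w*2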
Ordinals are hard to grasp for people who know the standard school curriculum, who know about countability and uncountable sets, cardinality, and the basic properties and arithmetic of cardinals.
I don't see why it would be hard for people who haven't been familiarized with a similar but different concept.
He has children (not sure about age right now) and discusses mathematics often with them. His tweets have had many interesting examples.
I do not think he means he would use symbols to explain to children, but that the notion of counting natural numbers that children have easily generalises to counting transfinite numbers.
The way I would explain it to a 6 year old would be like this:
Infinity isn't really a number; it's a concept, like the word many or the word few. If someone says they have many of something, you don't wonder whether "many" is odd or even; you just know they have a lot of it. Infinity is kind of like that: it expresses the idea of things going on forever, not an exact quantity of things like the number 10 or 11.
x^2 is not exponential; it's quadratic. 2^x is an example of an exponential function.
The parent comment was alluding to the idea of set cardinality (https://en.wikipedia.org/wiki/Cardinality). Two sets have the same cardinality if you can establish a bijection (a one-to-one mapping) between elements of one set and elements of the other. The set of all natural numbers is said to have a "countably infinite" cardinality.
It turns out that for any countably infinite set S, the set S x S is also countably infinite (see Hilbert's hotel: https://en.wikipedia.org/wiki/Hilbert%27s_paradox_of_the_Gra...). For example, the set of 2-tuples of natural numbers (1, 1), (1, 2), (1, 3), ..., (2, 1), (2, 2), ... is the same size as the set of natural numbers. So in this sense, "endless*endless = endless". Whereas the set of infinitely-long tuples of natural numbers is "uncountably infinite;" it has a cardinality greater than that of the set of natural numbers. Thus, "you have to go exponential"; i.e. "endless^endless".
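One concrete witness for "endless * endless = endless" is the classic Cantor pairing function, an explicit bijection between pairs of naturals and naturals; the sketch below is my write-up of that standard construction, for illustration:

    # Cantor pairing: a bijection between N x N and N, walking the diagonals
    # (0,0), (1,0), (0,1), (2,0), (1,1), (0,2), ...
    def pair(x, y):
        return (x + y) * (x + y + 1) // 2 + y

    def unpair(z):
        w = int(((8 * z + 1) ** 0.5 - 1) // 2)   # which diagonal z lands on
        y = z - w * (w + 1) // 2
        return (w - y, y)

    for z in range(6):
        print(z, unpair(z))                      # every natural codes exactly one pair

    assert all(pair(*unpair(z)) == z for z in range(10_000))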
Huh. Most of those make sense to me, but infinity == infinity being true definitely feels like risky business. The algebra of limits is full of scenarios, even pretty trivial ones, where infinity divided by a lesser infinity turns out to be a real number; those are cases where the two infinities are definitely not equal to each other.
>>> 1/0
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
ZeroDivisionError: division by zero
Still, you do have infinity and NaN in Python, because these are part of the floating-point spec.
>>> float('inf') - float('inf')
nan
>>> float('inf') == float('inf')
True
>>> NAN=float('inf') - float('inf')
>>> NAN == NAN
False
However that's not mathematics, it's computers (these are even stranger...)
I have my own little programming language, PYX [1], and I don't allow this madness (even if it is a violation of the floating-point spec ;-)
pyx
> mathconst.Infinity - mathconst.Infinity
Error: results in 'not a number' - that's not allowed here
#(1) mathconst.Infinity - mathconst.Infinity
|....................^
> 1/0
Error: Can't divide by zero
#(1) 1/0
|..^
[1] PYX - https://github.com/MoserMichael/jscriptparse - it's supposed to be an educational programming language where I am trying to have detailed error messages; it's my side project.
Dividing by 0 or -0 is a valid floating-point operation because there's an infinity in the number system, and JS uses double-precision floating point for all numbers. Python has an integer type and a float type, but its / operator raises ZeroDivisionError for both; the floating-point infinities are still there, you just reach them via float('inf') rather than by dividing by zero.
> those cases where the two infinities are definitely not equal to each other
It's been a while since I was doing this in a classroom, but I feel like those things you're thinking are nonuniform infinities could just as well be thought of as entities comprised of infinity and a (perhaps implied) coefficient. Divide out infinity to reveal the coefficient. (And the same for powers/logs, etc.)
In this model, infinities are indeed uniform (quite similar to a constant), though they are often augmented in any of infinitely many ways.
I can just imagine someone coming up with the infinity symbol, and arguing (as we are here) about whether a circle represents the idea. No, not quite; it needs something more... another circle should suffice, and connect them seamlessly. Yes, yes. This looks much more infinite than a mere circle.
The way I would explain it to a 6 year old would be like this:
There are natural numbers, like 0, 1, 2 and so on. Natural numbers can be odd or even. There is no natural number called infinity. Therefore the question of whether "infinity" is odd or even is meaningless. It does not even type-check.
In math people like well-formed questions, and generally don't like ill-formed questions.
The fallback metaphor I use in these situations or similar ones, "What's outside of the universe" for example, is the old, "What's North of the North Pole?" Then you explain that we can create questions and statements in our languages which don't have logical, mathematical or physical validity. Although we can often describe scientific and technical concepts in common languages, that's just a translation, the real language is math.
Carlos Castaneda is at his most interesting when he wrestles with "what's outside of the universe" paradoxes since his informants seem like they're able to not only hold mutually exclusive concepts but exist in a relationship between them. They'd have an internally consistent idea about what's North of the North Pole and could explain it to you in terms you might understand.
He's given me quite a bit to think about in regard to NULL and the assumptions I make around the concept, which is fascinating in itself because his books are hot garbage.
The first one is basically "Fear and Loathing on the Campaign Trail '72" for anthro majors, and then he gets less focused somehow. He'd be my personal Kilgore Trout if we didn't have contemporary science fiction.
> In math people like well-formed questions, and generally don't like ill-formed questions.
This is not so simple, though. Ill-formed questions can be interesting as motivation to formalise them (i.e. make them well-formed) by generalising/abstracting concepts into new ones, e.g. how even/odd has been generalised to transfinite numbers.
If you try hard enough, you can find similar questions that do type-check. You can talk with 6yo children about them if you want. Still, I stand by my answer; I would say this (and I also think it's the best thing to say, or at least the best I'm capable of).
On the contrary, it's entirely natural. The technical definition is quite intuitive.
"There are many infinities! The smallest one is bigger than all the counting numbers, so you can't count up to it, but it's out there! We call it omega. You can make bigger infinities too, like omega + 1!"
Kids LOVE that, and it's good math too! (But gets tricky quickly, because addition of transfinite ordinals is not commutative, and standard transfinite ordinals don't allow subtraction)
Terminology and strict definitions aside, it is quite intuitive. A commenter below also points this out:
> If you ask a child what comes after infinity, "Infinity + 1" is pretty much the default answer. Any kid who knows multiplication knows "Infinity + Infinity" is the same as "Infinity Times Two". The answer of "Infinity TIMES Infinity" is also popular for kids to say when they know a number bigger than their friend (who just proclaimed infinity is the largest number).
The only thing that is a little off here to me is that I don't think there is mathematical notation for "many" or "few". And yet infinity does have mathematical notation and is used in some equations, no?
It's an analogy, meant to show the similarities between two things in a limited way, to illustrate an idea. They do not have to be exactly the same in every way.
The answer that says "Here is a simple example that has some hope of being comprehensible to a 6-year-old." and then begins "Consider the ring of polynomial functions with integer coefficients, ..." gets upvoted tens of times.
Even the answer that uses "numerocity", "refined cardinality", and "logarithm" as the explanation to a 6-year-old gets upvoted.
The answer, https://math.stackexchange.com/a/49065/13638, that says as the answer-to-a-6-year-old the same thing that several commenters have actually posted here (e.g. https://news.ycombinator.com/item?id=35790064 for one of many), on Hacker News in just the past hour or so, and that explains in terms that a 6-year-old has at least a chance of having encountered, gets 5 votes in 12 years and the submitter is banned from the site.
The answer is using big words but the concept is simple. Like talking about fractions as a quotient ring over a field.
A six year old can absolutely grasp that even means "being able to be split into two equally sized piles" where equally sized means each thing in the left pile can be matched to something in the right. 6 apples is even because you can split them into 3 and 3.
Then for infinity you separate them into the even and odd numbers, boom. Infinity is even.
Saying "infinity isn't a number", to me, is so much worse an answer because it's not satisfying. Because both you and the 6 year old know that isn't right. The 6 year old is grasping at a bigger concept but doesn't have the words.
So there seems like a glaring hole in the answer, but maybe I'm missing something. Because:
> It is easy to prove from this definition by transfinite recursion that the ordinals come in an alternating even/odd pattern, and that every limit ordinal (and hence every infinite cardinal) is even.
Sure, if we use the natural numbers and start at 1, then we can group:
[1, 2], [3, 4], [5, 6], ...
and prove infinity is even.
But we could also just as easily group:
1, [2, 3], [4, 5], [6, 7], ...
and prove infinity is odd.
It's the same if we try to split into two equal subsets, because we can split into:
[1, 3, 5, ...]
[2, 4, 6, ...]
and say it's even. Or we can divide:
1
[2, 4, 6, ...]
[3, 5, 7, ...]
and prove it's odd because we have two equal subsets plus one left over.
So I'm missing the reason for why the second versions aren't just as valid.
(Of course, I'm more inclined to agree with many commenters here that it's just a category error, and asking whether infinity is even/odd is as useful as asking whether democracy is blonde or brunette.)
The definition given was 'if there is another ordinal 𝛽 such that 2⋅𝛽=𝛼' [1], but the intuition is better explained by the post below:
> A set 𝑆 has even cardinality if it can be written as the disjoint union of two subsets 𝐴,𝐵 which have the same cardinality. [2]
In other words, a set is even if it can be paired up, by finding one grouping where it pairs. Finding alternative groupings that do not pair does not matter.
OK, so I guess I'm just understanding that mathematicians arbitrarily decided to prioritize "even" over "odd"?
Because as I stated in another comment, you could just as easily say odd cardinality exists if you can find two subsets with the same cardinality and there's one element left over, and otherwise we call it even.
So at the end of the day, what you're saying is that ultimately infinity would be even just because mathematicians arbitrarily defined 'even' that way -- not because there's any intuitive logic behind it, any deeper justification, or any necessary consistency with parity for finite sets.
Modern mathematics is all about coming up with definitions and rules that give rise to interesting (to a mathematician!) properties when further investigated.
The definition given naturally lets the ordinal numbers continue the odd/even/odd... pattern. Choosing the alternative definition would not.
In one sense that's 'arbitrary', because we decided on one definition over another. But in another sense, we picked the parity rule that lets us extend the same pattern from the natural numbers, so it's a 'better' parity rule. And the fact that one rule gives this pattern while the other does not did not come from humans, but is a 'metamathematical fact' from the universe of possible ways to define things.
So I would say this definition is not fully arbitrary, it's an interaction between what mathematicians find interesting and the Platonic realm of possible mathematical constructs.
Anyway, I'm not a mathematician but it seems this is how the game of math is played: to continually discover new rules that give rise to more interesting math.
Thanks, but you may have misunderstood the definition I have for defining odd numbers, because that corresponds equally to the natural numbers as well.
So there is no better parity rule as you say, it is entirely arbitrary. It's not extending the same pattern, it's seeing that there are two ways of extending it and picking one arbitrarily that happens to prioritize even. When you could have just as easily prioritized odd.
So that's not an argument for why infinity is even, or should be. It's just a decree, an arbitrary labeling, the flip of a coin.
Evenness is a more natural condition, so to speak, in that it has a simple definition and is easy to generalize. Having defined an even number, if an integer isn't even, it's odd.
To get a feel for why this is convenient, consider that you can generalize by replacing "multiples of 2" with "multiples of n". Then, instead of splitting everything into two sets (even/odd), we can naturally split the integers into n sets called equivalence classes modulo n. For n=10, these would be "multiples of 10", "numbers whose remainder after dividing by 10 is 1", "numbers whose remainder after dividing by 10 is 2", and so on. Seen this way, you may find it less arbitrary now.
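If it helps, the "equivalence classes modulo n" picture is easy to play with directly; a small illustrative sketch, assuming nothing beyond the remainder operator (function name is my own):

    # Split integers into residue classes modulo n; "even" is just the n == 2 case
    # (the class with remainder 0).
    def residue_classes(numbers, n):
        classes = {r: [] for r in range(n)}
        for x in numbers:
            classes[x % n].append(x)
        return classes

    print(residue_classes(range(10), 2))
    # {0: [0, 2, 4, 6, 8], 1: [1, 3, 5, 7, 9]}
    print(residue_classes(range(31), 10)[0])
    # [0, 10, 20, 30] -- the multiples of 10, the "identity" class under addition mod 10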
I understand what you're saying, so thank you, but I still find myself disagreeing.
There are just as many odd numbers as even, so there's nothing more natural about either. They alternate. Yes you can extend to higher multiples, but there's still nothing more natural about multiples of 7 vs. multiples of 7 with remainder 3.
And it's just as easy to say that infinity is divisible by 7 as it is to say that infinity is divisible by 7 with remainder 3.
So the entire idea I'm arguing against is that there's anything more natural, more default, more basic about the concept of "evenness" next to "oddness". The very first natural number, 1, is odd -- not even -- so it's just as easy to say that oddness comes first. But really they're fundamentally complementary -- they require each other, neither is more primitive.
It's true that there are just as many odd numbers as even (using most reasonable ways of counting; things always get a bit dicey with infinite sets), and just as many multiples of 7 as "3 more than a multiple of 7" and so on.
Still, there's a good reason to privilege the multiples. With regular addition of the integers, the number zero has a special role, in that n + 0 = 0 + n = n for all n. It's called the "additive identity", and it's the only number that has this property. If we think of inverses of numbers, like "what's the opposite of 19?", then in the world of addition, they are defined in relation to 0. The "opposite" of 19 is -19, because 19 + (-19) = 0.
Many algebraic structures have an identity; in the world of multiplication of fractions, the identity is 1, and the inverse of 19 is now 1/19. A more abstract example would be the operations on a Rubik's Cube, where the identity is "do nothing". That's the least exciting thing to do with a Rubik's Cube, but it has a special role, just like 0 with addition. If we want to talk about inverses of Rubik's operations, then again, they are defined in relation to the identity: the opposite of "rotate the top face a quarter turn clockwise" is "rotate the top face a quarter turn counterclockwise", because the sequence of those two operations gives you "do nothing".
It is in this sense that "multiples of n" are special, because they effectively comprise the identity element under addition modulo n. That is, if we add numbers and only look at the last digit (in other words, the remainder after dividing by 10), we'll find that adding 0, or 10, 20, 30, etc., leaves that digit unchanged. Another way to say this is that if you take two numbers with the same last digit, their difference will be a multiple of 10.
In other words, it isn't merely that there are just as many numbers in one set as another, it's that one of the sets acts as a point of reference. For a real-world metaphor, consider the concept of birthdays (disregarding complications like leap years). If you were born on February 5, then every other February 5 is a birthday, because the difference of those two dates is a multiple of 365. This might highlight the conceptual argument: I would agree that there's nothing fundamentally more special or interesting about February 5 than August 27 or any other day, but it's when we start comparing dates or using them in some frame of reference (like trips around the sun) that the number 365 and its multiples come into focus.
Or, for a real-world example related to evenness vs. oddness, go and flick a light switch an even number of times. If the light was off to begin with, it will still be off at the end; if it was on, it will still be on. Now, if you have a fancy lamp with three settings, then turn the switch a multiple of 3 times. Again, this will preserve the state, and this is why multiples are in some sense special.
Finally, as for infinity: I'm with you in that it gets a bit uncomfortable to talk about the evenness or oddness of infinity itself. At that point it really comes down to the choice of definitions, and a perfectly reasonable definition is that infinity isn't a number but an unattainable goal (it's the trip, not the destination), in which case the concepts of evenness and oddness don't apply at all.
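The light-switch point a couple of paragraphs up can be stated as a one-liner: advancing the state n times returns you to where you started exactly when n is a multiple of the number of settings (k = 2 being the even/odd case). A tiny sketch, for illustration only (the function name is made up):

    # A lamp with k settings, advanced one step per flick. After n flicks the state
    # has moved by n mod k, so it is preserved exactly when n is a multiple of k.
    def state_after(flicks, settings, start=0):
        return (start + flicks) % settings

    print(state_after(4, 2))   # 0: an even number of flicks leaves a 2-state switch unchanged
    print(state_after(5, 2))   # 1: an odd number toggles it
    print(state_after(6, 3))   # 0: multiples of 3 preserve the 3-setting lamp
    print(state_after(7, 3))   # 1: anything else does not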
well, if you claim omega is odd, are you willing to claim omega + 1 is even? There is no ordinal B such that 2 * B is omega + 1, so it fails that definition. So you have to say omega is odd and omega + 1 is also odd, which is... odd.
I'm arguing that because it's just as easy to say that omega is odd as to say that it's even, the whole concept breaks down and loses all meaning.
Because if you want to divide omega + 1 in half to show that it's even, we can do that. If we denote the set element inside of the "1" of "+ 1" by the symbol "a", then we can write out:
[1, 3, 5, 7, ...]
[a, 2, 4, 6, ...]
We can infinitely extend this 1-1 correspondence between these two disjoint subsets, so omega + 1 is evenly divisible. (Or, again, it can also be odd if you choose to arrange the elements differently.)
But I'm not saying that this is useful or interesting. My whole point is that it's not because even/odd is not meaningful at all for transfinite numbers, because they're just as odd as even. That in the same way there's no utility in attempting to decide whether the decimal 2.7 is odd or even, there's similarly no utility in defining omega as odd or even (or omega + 1).
It might be a better explanation but those two are very much not equivalent.
Actually, the fact that splitting it into pairs is the same as splitting it into two disjoint sets of equal cardinality is itself non-trivial. The reason why shows up when you try to bring the two definitions closer together.
Splitting an ordinal into pairs is essentially splitting it into ordered pairs (a_i, b_i) such that the map i to a_i is monotonic, and for no i < j does the pair (a_i, b_i) overlap with (a_j, b_j) in the sense that a_j <= b_i.
Splitting a set into pairs is splitting it into sets {a_i, b_i} such that for no i != j the two sets {a_i, b_i} and {a_j, b_j} overlap.
These two are not the same; you can split pretty much any infinite set into two disjoint sets of equal cardinality.
It's hard to get the definitions general enough to get one definition for both ordinals and sets. Mostly because products of ordinals are a bit weird. For sets (and most other types of mathematical objects) it doesn't matter which way around you pair things up, but for the ordinals you end up with a completely different object if you do it the other way around and this is apparently the more interesting definition of the two.
I would weaken the definition of even/odd to say that a set is even if /there exists/ a way to pair things off, and odd if /there is no way/ to pair things off (ie, not even). So the countable numbers would be even.
But that seems redundant with countable/uncountable sets, because then every countable infinite set would be even (e.g. rational numbers), and every uncountable infinite set would be odd (e.g. real numbers).
It's also not clear to me what justification there would be for a "preference" for the "even" category that way -- it seems arbitrary. Why not be odd if there exists a way to pair things off such that one is left over, and even if there isn't such a way?
I think the reals are also even: If x is rational pair it as you would in the rational case (which we assume is even - I haven't proven this). Otherwise pair it to -x, and thus the reals are even.
Being "even" seems like a much more interesting (and simpler) property of a set. I don't see what use there could be to know that you could pair things off, with one element left over. When you extend the notion you do have to decide what to preserve, but to me parity is much more about divisibility and symmetry than it is about reminader. I agree that it's arbitrary, though less arbitrary than the odd definition.
If you want to pair positives with negatives, then reals would still be odd, as long as zero is unsigned. Zero is the unpaired element, hence odd.
But it all just seems silly. We can say the set of positive integers is even because we can come up with a pairing of elements, while the set of positive reals is odd because we can't come up with a pairing? Where's the mathematical utility in that?
Because evenness is a special case of k-evenness: a set is k-even if it can be divided into equal sets of size k. Which, for finite sets, is equivalent to the size of the set being 0 mod k, i.e., divisible by k. There are many ways to not be divisible by any particular number bigger than two, and only one way to be divisible.
Uncountability is a particularly interesting form of non-divisibility, so I'm just fine calling all uncountable sets odd and countable sets even...
(And just because we're hung up on divisibility by two, let us remember: All prime numbers are odd, and two is the oddest of them all.)
But parity is just a question of sorting the countable numbers into sets of size two, and the even more general form of that is sorting into sets of size N. It's just as easy to say that the countable numbers are odd if there exists a way to sort them into sets of size three. So I'd argue the countable numbers are odd.
And then you could retort with sets of size four, and I could use five, and then we can argue about whether we'll end up at the limit with more odd sets or even sets, and now we're arguing in circles. Reductio ad absurdum.
> we can...prove infinity is even....and prove infinity is odd...
> maybe I'm missing something
The answer said:
> the usual definition is that an ordinal number 𝛼 is even if... Otherwise, it is odd.
In other words, if a number could be proved to be even, it is even. If not, it is odd.
Using their definition, there is no such thing as "proving a number is odd". You'd have to do it by failing to prove its evenness. In the case of infinity, because we can successfully prove evenness, it's even and not odd.
Omega is the lowest countable infinity. There's no parity within a countable infinity in the way you describe.
It's only even or odd with respect to other infinities which the cardinal numbers can count based on the presence of a bijection or not. It's a kind of relative parity.
> There's no parity within a countable infinite as you describe.
That directly contradicts the quoted text I included from the original answer though, as far as I understand. It directly asserted that "every infinite cardinal is even".
> It's a kind of relative parity.
What is relative parity? The original question was whether infinity is even or odd... I don't know what you mean by relative parity.
>To explain the idea to a child, I would focus on the principal idea: whether finite or infinite, a number is even when it can be divided into pairs. For finite sets, this is the same as the ability to divide the set into two sets of equal size, since one may consider the first element of each pair and the second element of each pair.
The answer this quote came from is amazingly obtuse, but it does make me think that infinity must be even, since infinity can be divided into 2 sets, each of which is of equal size since both are infinite.
In mathematics, you can define things in different ways to get different answers. Ways of defining things tend to be highlighted as true (in at least some context) if they are interesting and useful, and ignored if not. I don't think the definition based on "dividing into pairs" is particularly interesting or useful in the context of the child's understanding of numbers, because it's too vague to be useful, and it doesn't lead to any insights.
The definition based on transfinite ordinals explained in the same answer does seem interesting, and I wouldn't be surprised if it were useful. I think this is a case of simplification gone wrong, where everything interesting was lost in the translation to more accessible terminology.
A more honest thing to say to a child would be that the way even and odd are defined only make sense for finite numbers. It's true for the definition they know, and it introduces them to the important insight that logical rules that are created for one kind of thing might not work when applied to something else. I think this would be more accessible and stimulating for a six-year-old than giving them a half-baked verbal imitation of a result from transfinite mathematics.
They'll be thrilled later if they study math and discover that there are definitions of "infinity" and "even" that yield an answer to their childhood question.
An even more honest thing to say is that infinity when used as a number is a hack introduced by mathematicians to make notation and reasoning simple in some cases, but that it can be dangerous in other cases, like any other hack. If you want to use infinity in a safe way, then use limits around your expressions.
(And this quickly resolves the case of this article, since lim x->inf x-2*floor(x/2) does not exist).
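For what it's worth, that expression really does keep oscillating rather than settling down, which is exactly why the limit fails to exist; a quick numeric spot-check (sampling only, not a proof):

    import math

    # x - 2*floor(x/2) is just "x mod 2"; as x grows it keeps sweeping through [0, 2)
    # instead of approaching any single value, so the limit at infinity does not exist.
    for x in [10.0, 10.5, 11.0, 1e6, 1e6 + 1, 1e6 + 1.5]:
        print(x, x - 2 * math.floor(x / 2))
    # 10.0 -> 0.0, 10.5 -> 0.5, 11.0 -> 1.0, 1000000.0 -> 0.0, 1000001.0 -> 1.0, 1000001.5 -> 1.5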
It's not a hack to create a new set and work out rules for how to use it which are both internally consistent and support easy morphisms with more familiar sets.
It may not be easy, but it's hardly a hack. It's one of the big ways math works, really. Are negative numbers a hack? Rational numbers? Algebraic numbers? Well then neither is the two-point compactification of the reals or extending the natural numbers into the ordinal numbers.
These are things with very precise models and interpretations. No hacks at all.
That's true in calculus, and probably a lot of other applied mathematics contexts where rigor tends to get swept under the rug, but it's not completely fair since there are versions of "infinity" that are defined and used rigorously. (The transfinite ordinals and cardinals mentioned in the Stack Overflow article are the example I'm familiar with.)
The concept of "infinity - 1" doesn't exist. Subtraction isn't defined for ordinals. Furthermore even if you try to define it, it doesn't work for limit ordinals.
If you are thinking about the difference between
[0,1,2,3,…]
and
0, [1,2,3,4,…]
Then I regret to inform you the former is omega and the latter is 1+omega which is the same as omega. In other words attempting to subtract one from infinity by removing from the front results in infinity.
> In other words attempting to subtract one from infinity by removing from the front results in infinity.
And I regret to inform you that if you read more carefully, you will find that my comment above makes use of that very same property of infinity. Not only do I already know it; that's the joke.
Specifically, that statements about omega are also statements about 1 + omega. The parent post saying "I think that infinity must be even" is such a statement. Regardless of if it's true or not, well-defined or not, coherent or not, it's equally all that about (infinity - 1).
Should I also spell out that an argument that "n - 1 is even" is also an argument that "n is odd" ?
In the general case, the comparability of cardinals relies on the axiom of choice. In other words, they are comparable, but they require a slightly unintuitive foundation to establish that they are always comparable.
They meant "their cardinalities are equal". It's honestly an easy mistake to make, especially if typing on a small screen. Or especially if having a discussion where sizes of infinity are already being discussed.
> Unlike the case of even integers, one cannot go on to characterize even ordinals as ordinal numbers of the form β·2 = β + β. Ordinal multiplication is not commutative, so in general 2·β ≠ β·2. In fact, the even ordinal ω + 4 cannot be expressed as β + β, and the ordinal number ...
For a six year old, I'd say that infinity is not a number, so it's not even or odd. If s/he ever gets a Ph.D. in math, s/he will understand.
Moreover, I remember when I was a graduate T.A. that one day before lunch I went to a class to learn about the https://en.wikipedia.org/wiki/Alexandroff_extension in the morning. (The idea is that you add one ∞ to a set of numbers to get a compact set. And in the new set ∞ is (almost) a number as good as the other numbers.) After lunch, I went to teach limits to first years students, and with a total straight face I told them that ∞ is not a number.
> After lunch, I went to teach limits to first years students, and with a total straight face I told them that ∞ is not a number.
When you apply Alexandroff extension to add the point at infinity to, say, the real numbers, what you're left with is not a set of numbers (i.e. a field) anymore. So it makes sense to say that ∞ is not a number. Moreover, the way ∞ is used in analysis is different from Alexandroff compactification, in that you usually use two infinities (±∞) as a shorthand for quantification over increasing or decreasing sequences of real numbers (this can be formalized using extended real numbers [0] or other gadgets but doing so has no advantages in a first-year analysis class, and might in fact make matters worse).
It was a long time ago, something like an optional course in Advanced Functional Analysis. It was about the algebras of functions with and without unity, and how to complete the ones without unity using the compactification (i.e. including a ∞) and a few variants.
> two infinities (±∞)
It depends. For the real numbers it depends, but in most cases I agree that it's better to use two. In complex analysis it's much better to have only one infinity. And there are more weird cases, like the projective plane, where you have one infinity in each direction.
> So it makes sense to say that ∞ is not a number.
I agree, it's no longer a field and the operations lose many properties if you try to extend them. So I said "(almost) a number". Anyway, the weird part is that in some cases you can write f(∞) in an advanced math course, but you can never write f(∞) in a first-year math course.
> The problem with transfinites is that you lose commutativity.
Depends on which transfinite algebra you're working with. If you restrict "number" to mean "element of an ordered field" (thus excluding things like the "complex numbers" but matching the usual intuition of how numbers should behave) then you can't include Cantor's ordinals but you can include the Surreal Numbers. Those include infinite ordinals and (due to being a field) have commutative addition and multiplication operations.
I'd say that the problem with transfinites is that you lose intuitive understanding of what's going on, and one of those intuitions is commutativity.
People seem to assume that they know a couple of tricks about infinity (adding, multiplying) and don't stop to think that there should be a much more rigorous definition. Which, they shouldn't -- the average person will never _actually_ care about transfinites.
Imagine the + in C++ when you have to add two complex numbers. They are just a struct with x and y, and some magic to make all operations work as intended.
The use of the + in this example is more like the concatenation of strings, like "Hello " + "World!" is "Hello World!". But in this case, the content of the string doesn't matter so "Hello " == "World!" and there are some magical strings that are infinite.
The idea is that anyone can overload the symbol + and sum whatever they want. It's not necessary to use + with numbers. Obviously, most silly overloads are ignored and nobody uses them. In this case it's a popular overload, so it is taught in advanced math courses and has its own Wikipedia article.
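In Python the same point looks like this: + means whatever a type's __add__ method says it means, so "adding" need not behave like adding finite numbers. A throwaway sketch (not any standard library's infinity, just an illustration of overloading):

    # "+" is just a method call; a type can give it whatever meaning it likes.
    # Here, a made-up "absorbing" value, loosely in the spirit of 1 + infinity = infinity.
    class Absorbing:
        def __add__(self, other):
            return self       # adding anything on the right changes nothing
        def __radd__(self, other):
            return self       # likewise when it sits on the right-hand side
        def __repr__(self):
            return "inf-ish"

    inf_ish = Absorbing()
    print(inf_ish + 1)            # inf-ish
    print(1 + inf_ish)            # inf-ish
    print("Hello " + "World!")    # for strings, "+" already means concatenation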
The concept of "number" has a lot of definitions in mathematics. I agree with this [0] more in depth explanation that calling infinity strictly not a number is not useful (though it certainly is not e.g. a natural number). But more importantly, the concept of evenness readily generalizes to ordinals, so as long as we specify that we are in (or move into) that context, then the question is well formed and interesting.
Cardinal numbers (size) and ordinal numbers (ordering) are both numbers. The numbers we're familiar with represent both concepts, sometimes simultaneously.
I really don't think that block quoting ChatGPT is a good contribution.
I agree. It's an interesting intellectual exercise, but I am not sure if we would miss out on anything if we just had a symbol(s) for specific really large discrete numbers.
Sometimes I wonder if there's a better math language waiting to be invented that eschews the non-discrete.
This thread almost reads like parody to me. It perfectly encapsulates the Stack Exchange experience in that when a question is clearly asked by a beginner in a subject, they are likely to get responses only decipherable by experts, or at least people who know enough to not be asking that question.
In IEC 60559* floating-point arithmetic, pow(-1, ∞) is 1.
This is because all large binary and decimal floating-point numbers are even, and thus so is infinity.
*this is the ISO/IEC counterpart of IEEE 754; recent revisions share the same text, though I don't have direct access on this phone. You can find the specific pow specification in Annex F of the C99 standard.
Is the “thus” for ease of implementation? I.e., so that all floating-point numbers comparing greater than some threshold can be considered even without having to check for infinity?
No, it comes from the fact that floating point is binary and has limited precision. Think of it in terms of scientific notation. Here's an example in decimal. If we limit ourselves to four significant digits, then a number like:
3.101 * 10^3
is odd -- it's equivalent to 3101 (three thousand one hundred one). It's followed by 3.102*10^3 (3102), which is even, and 3.103*10^3 (3103), which is odd. But a number like:
3.101 * 10^5
which is equivalent to 310100, is even. It's followed by 3.102*10^5 (310200), which is also even, and 3.103*10^5 (310300), which is again even. If you have four significant digits and an exponent larger than 3, then the value in the ones place will always be zero. Thus, the number is always a multiple of 10, and therefore even.
Floating point is the same, except it's binary. In a 32-bit float, you have 23 bits of mantissa after the binary point. If the exponent is larger than 23 (i.e. the number is at least 2^24), the ones place is always zero, so the number is guaranteed to be a multiple of 2, and therefore even.
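A quick way to see both facts on an actual machine (just a sketch; the constants are only illustrative):

    #include <cmath>
    #include <cstdio>

    int main() {
        // Above 2^24, adjacent 32-bit floats are at least 2 apart, so every
        // representable value up there is even.
        float big = 20000001.0f;                          // rounds to 20000000.0f
        printf("%.1f\n", big);                            // 20000000.0
        printf("%.1f\n", std::nextafterf(big, INFINITY)); // 20000002.0

        // And, per C99 Annex F / IEC 60559, pow(-1, +infinity) is 1.
        printf("%g\n", std::pow(-1.0, INFINITY));         // 1
        return 0;
    }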
It's not that they are considered even, they just are. There's no way to encode a large odd number in an even-radix floating-point format. You have some (small, compared to the range that the exponent can encode) bits of significand, and once you exhaust those, all numbers are even (or divisible by ten in the rare decimal case).
There are, and it turns out that this is a significant mathematical concept.
The integers between 0 and infinity are defined as "countably infinite". Other infinities are considered countably infinite, or the "same" infinity, if and only if you can arrange them in a list such that each item in the list pairs with an integer in our 0-to-infinity list. So the set of even numbers is countably infinite because every even number i pairs with the number i/2.
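That pairing is easy to make concrete (a trivial sketch):

    #include <cstdio>

    // The pairing described above: the even number i pairs with i/2,
    // so 0<->0, 2<->1, 4<->2, 6<->3, ... and no even number is left out.
    int main() {
        for (int i = 0; i <= 10; i += 2)
            printf("%d <-> %d\n", i, i / 2);
        return 0;
    }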
The decimal (real) numbers between 0 and 1 are not countably infinite, and we know this from a concept called Cantor diagonalization. What Cantor did was a proof by contradiction: assume that the numbers are countably infinite, then you can arrange them in a list. However, he then builds a number by altering the first decimal place of the first number, the second decimal place of the second number, and so on. Finally, he shows that this built number is both a real number and is not on the list. Therefore, the real numbers between 0 and 1 cannot be ordered into a list, therefore they are not countably infinite, and there are more decimal numbers between 0 and 1 than integers between 0 and infinity.
The way I parse "decimal number" in this context is a number expressible as a (finite?) string of decimal numerals. Those numbers are not reals, they are rationals.
Whatever method you use to generate your decimals, you can just slap an integer on each step of the way. You'll never run out of integers.
I'll put Cantor and his proof in a box, tell him to give me his fancy decimals quick as he can, and I can match each one with an integer no problem.
And pairing one infinite list with another infinite list doesn't make either one any more countable, because however high you count, they keep on going.
I think "countably infinite" makes no sense to you because you have a different idea of countable than a mathematician (disclaimer: not a mathematician).
A mathematician compares the size of two sets of stuff by pairing off items from each set, but this is not a mechanical process taking a finite or even unbounded amount of time: they just need to show such a mapping exists or that nonexistence would lead to a contradiction; they don't need to actually carry out the process mechanically. By definition (according to mathematicians), something is countable if it is the same size as the set of natural numbers {0, 1, 2, ...} or smaller, and "countably infinite" just means it is the same size as the naturals (and not smaller, which would make it finite).
A small minority of mathematicians hold the position that proof-by-contradiction is not good enough, and that you really do need to positively prove something. They are called intuitionists.
Presumably, an even smaller minority of mathematicians hold the position that this proof must (theoretically) be able to be carried out in a mechanical manner. They're some flavor of constructivists, but maybe they're better called programmers. <- This is where you are.
> Whatever method you use to generate your decimals, you can just slap an integer on each step of the way. You'll never run out of integers.
Exactly correct! This holds true of everything you can generate stepwise, even infinite sets. Cantor proved that you cannot "generate" (stepwise) all Reals between 0 and 1. Any infinite set you can generate stepwise is Countably Infinite.
> I'll put Cantor and his proof in a box, tell him to give me his fancy decimals quick as he can, and I can match each one with an integer no problem.
Exactly correct! And then infinitely later, when you're "done", having generated every Real between 0 and 1, he will then generate a new Real not on your list. Oops! You have not generated all Reals between 0 and 1, even with infinite time.
> And pairing one infinite list with another infinite list doesn't make either one any more countable, because however high you count, they keep on going.
Exactly correct! Any two sets you can pair together (via a bijection) have the exact same cardinality. Neither is more infinite nor countable than the other. Cantor proved you cannot "pair" the Reals with the Natural Numbers.
You and Cantor agree completely. You're very close to understanding why the Reals are bigger.
There can be no 'and then' after infinitely later.
I don't see why stepwise is important but that must be the key to Cantor's proof.
If he gives me 1.1 1.2 1.3 and I pair with 1 2 3, then he gives me 1.11 and I pair with 4, that seems fine as far as counting is concerned.
The ordering could be entirely random, I don't see how it makes a difference. There will always be enough integers to match.
Is it that my black box metaphor is cheating by coercing a truly 'parallel' generation of decimals into a linear operation? But even then, if I'm getting exponentially bigger chunks of new decimals, I can provide equally large chunks of integers... so it still doesn't make sense to me. Infinity is infinity and you cannot count it.
Mathematicians consider two sets to be of the same size or more precisely "cardinality", if it is possible to construct a 1-1 map of elements from the first set to the second set. These maps can obviously be constructed for sets with finitely many elements, and they can be constructed for sets with an infinite number of elements as well. For instance, the set of all integers has the same cardinality of the set of all positive integers (just enumerate the integers alternating back and forth expanding from 0 - this constructs the 1-1 map).
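The back-and-forth enumeration can be written down explicitly; a sketch:

    #include <cstdio>

    // Enumerate the integers as 0, 1, -1, 2, -2, 3, -3, ...
    // Every natural number n maps to a distinct integer, and every integer
    // is hit exactly once -- this is the 1-1 map being described.
    int int_from_natural(int n) {
        return (n % 2 == 0) ? -(n / 2) : (n + 1) / 2;
    }

    int main() {
        for (int n = 0; n < 7; ++n)
            printf("%d -> %d\n", n, int_from_natural(n));
        return 0;  // 0->0, 1->1, 2->-1, 3->2, 4->-2, 5->3, 6->-3
    }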
We can prove that no such 1-1 map exists between the integers (an infinite set) and the decimals in the interval [0,1] (another infinite set). The proof is by contradiction, meaning that we assume such a 1-1 map exists and prove it leads to a contradiction, therefore our assumption that the 1-1 map exists must be false.
So suppose we were able to construct a map from all decimals in [0,1], by enumerating them according to some clever rule. Let d_i be the ith digit of number i in your mapping. For each i, pick another, different digit d_i'. Let's construct the number with decimal representation D = 0.d_0' d_1' d_2' ...
Assuming we have our 1-1 map, D must be somewhere in our mapping. Let's say it's element k. By our labeling convention, the kth decimal digit of D is then d_k. However, this contradicts our construction of D, whose kth digit is d_k' ≠ d_k. Therefore our assumption that there is a 1-1 map between the decimals in [0,1] and the integers must be false.
It is in this sense that there are infinities of different sizes.
> It is in this sense that there are infinities of different sizes.
They aren’t actually different sizes, though.
All this proves is that under specific set theoretic assumptions, a contradiction arises if you define “size” as “cardinality” and assume that a particular bijective relation exists between your two infinite sets.
It doesn’t actually mean the sets have different sizes, it just means they differ under a set of assumptions that may (or may not) be useful for your purposes.
I don’t have one that doesn’t admit a contradiction here … which I’d argue is because comparing the size of an infinite set is nonsensical, even if the properties used to do so are otherwise useful.
Similarly, I can also work around Russell's paradox by introducing infinite universes, but that doesn't actually resolve the paradox, it just provides a set (ha ha) of rules that may be leveraged to formalize the Set category and otherwise prove useful things.
Just because your formalization admits a proof by contradiction doesn’t actually prove two infinite sets have different sizes, it just proves that a contradiction exists under your assumptions.
If you aren't allowed to operate in a logical system with a concrete definition of "size", then you can't say things like "doesn't actually prove two infinite sets have different sizes". So the whole debate is moot.
> If he gives me 1.1 1.2 1.3 and I pair with 1 2 3, then he gives me 1.11 and I pair with 4, that seems fine as far as counting is concerned.
Exactly correct! Any bijection between the naturals and the reals would suffice to show that they're the same cardinality; the order does not matter. I think where you're getting confused is just in who's trying to do what; who's the "protagonist" and "antagonist" in the proof.
Cantor is not trying to overwhelm you with so many real numbers that you run out of integers. Instead, he completely accepts and agrees with everything you're saying. And then he says: okay, pick any numbering of the reals you like. 1.11 is 4, 1.111 is 76, and 1.1111 is 445662323. It doesn't matter. You pick the pairing. If the reals and integers have the same cardinality, there must be some way to write them all down on an (infinitely long) list. Pick any such list and write it down on an infinitely long sheet of paper.
Cantor's only job now is to show you that any real number exists that is not on your list. To do this, he constructs a number a digit at a time. He looks at the 1st digit of the 1st number, and writes down a different digit for his 1st digit. He looks at the 2nd digit of the 2nd number, and writes a different one for his 2nd digit. He looks at the nth digit of the nth number and writes a different one down for that digit, for every digit. Real numbers never run out of digits, so this goes on forever.
If this number he has written down is on your list, you should be able to point to a number on your list and say "Aha! You see, that is just real # 65,334,649!" but you can't, because it's different from that number in its 65,334,649th digit. It is truly different from every number on your list. And so there are more reals than integers.
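Here is a small sketch of that digit-flipping step, run on a made-up finite prefix of such a list (the digit strings are hypothetical; in the real argument both the list and the digits go on forever):

    #include <cstdio>
    #include <string>
    #include <vector>

    // Given the leading digits of the first few listed reals, build a number
    // whose n-th digit differs from the n-th digit of the n-th entry.
    std::string diagonal(const std::vector<std::string>& listed) {
        std::string out;
        for (size_t n = 0; n < listed.size(); ++n)
            out += (listed[n][n] == '5') ? '6' : '5';  // anything != listed[n][n]
        return out;
    }

    int main() {
        std::vector<std::string> listed = {
            "1415926", "5000000", "3333333", "7182818",
            "1234567", "9999999", "0101010"};
        // The result disagrees with entry n in its n-th digit, so it is not
        // entry n for any n -- i.e. it is not anywhere on the list.
        printf("0.%s...\n", diagonal(listed).c_str());
        return 0;
    }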
I feel the number he generates via any procedure would be on the list since all of them are on the list. What am I missing about the plausibility of a procedure that can always generate a real number not in the series?
> I feel the number he generates via any procedure would be on the list since all of them are on the list
The claim is that all of them are on the list. The constructed number proves that claim false.
It's a proof by contradiction. If you assume there is any way to write an infinite numbered list of all reals, then Cantor shows it's possible to come up with a number not on your list. The construction uses your list as input, and given any list, can always produce a real number not on that list. Therefore there is no way to write an infinite numbered list of all reals.
It relies on the fact that real numbers have (countably) infinite digits, and therefore infinite "degrees of freedom" to be different. This may be one reason it's hard to accept. A "true" real number can contain infinite information in a single number. For instance, we can jam all of the naturals into a single real by just concatenating their decimal representations: 0.1234567891011121314151617181920212223...
This one single real number encodes the full infinite natural number line. That hopefully gives you a sense of why the "infinite digits" definitions of reals makes them qualitatively "bigger" than any number that has finite representation.
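For concreteness, a few lines that print a prefix of that number (just a sketch):

    #include <cstdio>
    #include <string>

    // Concatenate the decimal representations of 1, 2, 3, ... after "0.".
    int main() {
        std::string digits = "0.";
        for (int n = 1; digits.size() < 40; ++n)
            digits += std::to_string(n);
        printf("%s...\n", digits.c_str());  // 0.1234567891011121314...
        return 0;
    }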
No I still don't get it, it's like saying that infinity^2 is larger than infinity. If 0^2 is no larger than 0, then it must be the same for infinity.
I see how a list of reals is like 2D list of infinities, so one grows from the middle and the other grows from the end, but they're both still infinite. I guess I'm still stuck in a 'mechanical' approach and not a mathematical one. I'm not sure I want to leave ;) This has been fascinating to think about anyway.
Cantor's proof is a single attempt. Suppose you could construct a space-filling curve that did indeed map all numbers between 0 and 1 to all integers? Has there been a proof that such a curve does not exist? The fact that his proof leans on a specific set of decimal places at every juncture has always seemed a weakness of his proof, because you can always map any set of numbers from 0 to 1 with any set of decimal places to a set of integers.
Depends what you mean by decimal. Decimal is a system of notation, does it count as a decimal number if it cannot be written in decimal notation (in finite time)?
if not then they are equal, if yes then there are more decimal numbers between 0 and 1 than integers.
I don't usually use that term, but I take it to mean "number you (may, and, if not using e.g. fractions, must) write using a decimal point" because that seems to always be what people intend by it.
Everybody experienced writing irrational numbers using decimal notation in school, so those definitely count.
You truly thought I didn't realize that "3.14" is an abbreviated representation of π, or that I somehow missed years and years of using the "repeating" sign above various decimal representations, or all those "..."s, such that it was plausible I meant the obviously-wrong thing rather than the correct thing? This stuff is hammered in in US K-12 school.
[EDIT] Look, I don't mean to be a dick, performative misreading and plainly-unnecessary "correction" are just two of my least-favorite types of HN post. I probably should have just downvoted the original performative misreading (not yours, the one up-thread) and not Assumed Good Faith that the original poster genuinely doesn't understand what every non-math-nerd means when they say or write "decimal number" (it's the ones you write with a decimal. It's... so very simple, that's why non-math-nerds use that and not "real number", the definition of which they've long since forgotten. "Well but you can't actually represent irrationals entirely in decimal notation" great, wonderful, has zero bearing on what people mean by it).
You really are being mean about someone trying to help you. Not sure why.
“Decimal numbers” is not a term routinely used by mathematicians (quite distinct from primary and secondary teachers of arithmetic who are, unfortunately, rarely mathematicians), precisely because of the confusion you, perhaps unwittingly, elicited. If you mean by this phrase all infinite series with a decimal approximation, then you’re talking about the reals. Some people thought you meant this!
Other people, also quite reasonably, interpret “the Decimal numbers” to mean all numbers that can actually be expressed with (finite) decimal notation, in which case you are talking about (a subset of) the rationals.
It is extremely important, when discussing different sets, to be clear about the difference between these two.
Yes, but you'll see "decimal" more in the wild, and that's what people mean by it. "You write it with a decimal point", and they do usually mean to include the irrationals. So, yes, real numbers, but the reasoning behind their usage is "you write it with a decimal point". I'd bet more people understand "decimal number" used in that sense, than understand "real number".
> Yes never, not in school, not in analysis, and certainly not in numerical analysis.
Weird, I just assumed that was normal in most education systems. I don't know how you'd get a sense of the rough scale of various common irrationals, without having some idea what they look like when represented in decimal notation. Such representations are normal starting not later than when we start seriously working with circles, in US school, and never really stop coming after that. Estimation exercises lean heavily on having some idea of the decimal representation.
> You've proved my point. It's either π or 3.14. Except that the latter is a rational number :)
Never claimed π is 3.14, so no, I didn't at all prove your point. I wrote that it's very well known that it starts that way. When a normal person says "decimal number" they mean to include π, because any usefully-precise decimal representation of it is going to involve a decimal point. At least in the US, they saw it represented "3.14..." or "3.14159..." or whatever, many, many times in school. It's obviously, to a non-mathematician, a "decimal number". They mean "the real numbers" (or, perhaps, depending on context, exclusively the parts of the reals that aren't whole integers), except that name is harder to remember than the incorrect (but more common and intuitive) "decimal numbers".
That's not quite what you have said. 3/2 does not have an integer power of 10 as the denominator, but can be written as a decimal. Of course, it can also be written as a fraction with an integer power of 10 as the denominator, for example 15/10.
(You are of course right on the 1/sqrt(2) issue)
If you thought of this question with no real math training then that's pretty interesting. You should have been a mathematician. Your question is one of the most important and concisely stated questions about infinity that you can ask!
I was thinking that if Pi never repeats itself and the infinity of integers can only go up, then it seems to make sense that there are more decimals between any two numbers than infinite integers. I can't describe the thought process behind it; it just seems intuitive.
I have an opinion that the number of decimal numbers between 0 and e is equal to the number of decimal numbers between e and +Infinity, because a parabola with a=e will grow in x at the same speed as in y.
Turn it into a proof?
I need to revisit Cantor's proof, the argument I was taught left out key aspects of numbers, particularly how "number" and "string of digits you've printed out so far" aren't the same thing. It's really about creating a space-filling curve.
The amount of real numbers between any two distinct real numbers (a,b) is the same as the amount of all real numbers. This is true for (0,e), (0,1), and any other combination.
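One standard way to see that is to write down an explicit bijection from (a,b) onto all of R, for example:

    f : (a,b) \to \mathbb{R}, \qquad
    f(x) = \tan\!\left( \pi\,\frac{x-a}{b-a} - \frac{\pi}{2} \right)

As x runs from a to b, the argument of tan runs from -π/2 to π/2, so f is continuous, strictly increasing, and hits every real number exactly once.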
- No, even if you include all the decimals which can be individually described in any notation whatsoever.
You can disregard all the arguments in the other comments about whether "decimals" includes fractions like 1/7 using decimal repeat notation, or irrationals described by a formula like sqrt(2), or transcendentals from mathematical definitions like pi and e.
Those are interesting and deep rabbit holes, but they don't change the answer to your question, because it is still "no" with all of those. Even with all possible definitions which can be written in any symbolic language. This is because the set of all possible definitions which can be written can be enumerated systematically in a list, and mapped 1:1 to all the integers.
- Yes, if you include all the other numbers in the range 0 to 1 which are not ones you can individually describe. Most real numbers in the range 0 to 1 are actually this type of "individually undescribable". But I can't point out an individual one, of course.
The "real numbers" contain these. They are present due to a consequence of logic that keeps regular math simpler and more consistent than it would be otherwise.
(Aside: The question of whether 1/3 = 0.3(repeating), times 3 = 0.9(repeating), is equal to 1 is an example of choosing the simpler and more consistent logic. Of course 1/3 times 3 is 1 so 0.9(repeating) must be defined as equal to 1, or fractions wouldn't be consistent with decimals...)
But the rationals (fractions), algebraic numbers (solutions to polynomials with integer coefficients, such as square roots), computables (numbers you can define by any algorithm), and even some types of uncomputables (such as all Chaitin's constants for all enumeration rules), and all mathematically individually definable transcendental numbers like pi/4 and e/3 do not contain these.
It follows that all the "individually undescribables" in the real numbers can only, conceptually, be imagined as infinitely long decimals with no repeats and no pattern to the digits definable by a finite-length rule in any language. You obviously can't write one of them down, you can only conceptualise what one already written down might look like. (For example, a spiral of digits of ever-decreasing size would fit one in finite area.) And we can only reason about them as a set by logical construction.
If you were to pick a random real number uniformly (i.e. fairly) from the range 0 to 1 by picking a sequence of random decimal digits, it would be one of these infinitely long decimals with probability 1. Because simple random values from a continuous range are like this, perhaps this explains why they are actually a natural and not unreasonable concept.
So the answer depends on whether your meaning of "decimal numbers" means the "real numbers" in the continuous range 0 to 1, or just certain ways of writing numbers. From the other comments, evidently some people include all sort of things in their idea of "decimals" including 1/3 = 0.333...(repeating) and the exact value of pi/4 for example, not just finite strings of digits. While other people think of "decimals" as being only strings of digits you can write down, so they would not include the exact value of pi/4 for example. These two meanings of "decimal numbers" give different answers to your question.
My understanding is that this is true because there are infinite decimals between every two decimals, infinitely.
For example, there is infinity between 0.1 and 0.2, and infinity between 0.1 and 0.11, etc. i.e. infinite sets of infinity rather than one set of infinity.
In the end it's all infinity, but their sets have higher cardinality described in Aleph terms ... (or something)
It’s not because there are infinite decimals between every two decimal numbers. That applies to the rational numbers too, e.g. there are infinite rational numbers between 1/2 and 3/4. Rather, the real numbers are more dense in a way that makes them fundamentally larger than the integers / rational numbers. “Larger” means not being able to pair up the two sets one by one so that each element of both sets is the member of a pair. No matter how you pair up the integers to the reals, you can prove that some real numbers will be unpaired.
Maybe this is a misguided cheat, but couldn't you map any real number (between 0 and 1) to a natural number by mirroring the decimal digits across the decimal point? So 0.123 -> 321, but also sqrt(2)/2 -> ?601707 where ? is the rest of the decimal representation. This creates infinitely large numbers, but it's still a 1-to-1 mapping.
To explain to a six-year-old I would start by telling them that there are many different kinds of infinity, not just one. Some infinities are odd, others are even, and others are neither. It matters whether you are asking "how many" (cardinals) or "in what position" (ordinals). For regular finite numbers, cardinals and ordinals are (more or less) the same, but for infinities they behave differently. Then, if they want to get into the weeds, you can introduce them to transfinite ordinals, diagonalization, and all that fun stuff.
> It matters whether you are asking "how many" (cardinals) or "in what position" (ordinals)
But "even" and "odd" are all about whether you can partition something into an equal number of pairs or not. If you're asking "in what position" (ordinals), you've explicitly said you're not in the realm of counting sets of things. I would argue division makes no sense in the realm of ordinals! Everyone is saying the transfinite ordinals alternate even-odd, but those are exactly the numbers where we've stated we're only interested in position, not counting. It's not clear to me why "dividing" an ordinal number into equal pairs makes any sense. (Whereas it makes perfect sense for cardinal numbers.)
> But "even" and "odd" are all about whether you can partition something into an equal number of pairs or not.
Sez you. I can just as easily define even and odd in terms of whether or not I can arrive at a given position in a (potentially infinite) sequence by taking two steps at a time.
Right. The problem with teaching infinity by starting with cardinal numbers is that it's either too trivial or too hard. You can establish that several other sets of numbers are identified by the same infinity but there's not much you can do.
> You can establish that several other sets of numbers are identified by the same infinity but there's not much you can do.
Well, you can introduce them to the diagonal argument and the idea of a one-to-one correspondence. That's nothing to sneeze at.
But I think the real trick here is to teach them that numbers can stand for different kinds of ideas, and in particular, they can stand for "how many" or "what position", and that these are different. I would start, not with infinity, but with negative numbers. You can't have "one less than zero" because you can't take away anything from zero. That is the definition of zero. But you can have "the thing before zero", or, to be more precise, "the thing before the zeroth thing (where the zeroth thing is the thing before the first thing)", which we call -1.
Likewise you can't have "one more than infinity" because that's just infinity. That's the definition of infinity. But you can have "the thing after infinity" (or, to be more precise, "the thing after all the things that are the nth thing for all finite values of n", which we call ω.
This just implies that infinity is both even and odd, which means that the statement "infinity is even" is still technically correct by this reasoning.
Infinity is outside the domain of the integers, where the notions of even and odd are defined and make sense.
The notion of infinity is applicable when we are discussing sequences and their behaviour, such as convergence.
Convergence of a sequence x(n) to infinity is, by definition: for each real number ε>0 there exists a natural number N(ε) such that for every n≥N(ε) we have |x(n)|>ε.
Convergence of a sequence x(n) to plus infinity is, by definition: for each real number ε>0 there exists a natural number N(ε) such that for every n≥N(ε) we have x(n)>ε.
Convergence of a sequence x(n) to minus infinity is, by definition: for each real number ε>0 there exists a natural number N(ε) such that for every n≥N(ε) we have x(n)<-ε.
For example, the sequence of natural numbers 1, 2, 3... converges to plus infinity and to infinity; the sequence of negated natural numbers -1, -2, -3... converges to minus infinity and to infinity; and the sequence of sign-alternating numbers (-1)^n * n: -1, 2, -3, 4, -5, 6, -7, 8, -9, 10... converges to infinity.
So the notion of infinity applies to the behaviour of sequences, whose elements nevertheless remain finite. If we consider other mathematical objects, e.g. integer numbers, then the notion of infinity does not apply. If we consider convergence of sequences, where the notion of infinity is applicable, then the notion of even/odd is not applicable.
While discussing sequences converging to infinity with a child, it may be useful to consider some interesting counterexamples: sequences which are unbounded, but still do not converge to infinity, e.g. 1, 2, 1, 4, 1, 6, 1, 8, 1, 10, 1, 12, 1, 14, 1, 16, 1, 18, 1, 20... (the formula is n^((1+(-1)^n)/2)).
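A few lines that print that counterexample, in case it helps to play with it (a sketch):

    #include <cstdio>

    // x(n) = n when n is even, 1 when n is odd: unbounded, but it does not
    // converge to infinity, because the odd-indexed terms are stuck at 1.
    int main() {
        for (int n = 1; n <= 20; ++n)
            printf("%d ", (n % 2 == 0) ? n : 1);
        printf("\n");  // 1 2 1 4 1 6 1 8 1 10 1 12 1 14 1 16 1 18 1 20
        return 0;
    }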
Yeah. Though the defenders of transfinite (ordinal and cardinal) numbers do in fact assert that there are many infinite numbers, such as aleph zero or omega. They are just usually somewhat embarrassed about this and therefore only talk about "ordinals" or "cardinals". It's like trying to hide that you drink beer by saying you merely drink lagers and ales.
Absence of a limit is not always an infinity, e.g. for the sequence (-1)^n.
Even if a sequence is unbounded, it does not always converge to infinity, e.g. n^((1+(-1)^n)/2): 1, 2, 1, 4, 1, 6, 1, 8, 1, 10, 1, 12, 1, 14, 1, 16, 1, 18, 1, 20 ...
Convergence of a sequence x(n) to infinity is, by definition: for each real number ε>0 there exists a natural number N(ε) such that for every n≥N(ε) we have |x(n)|>ε.
David Deutsch expanded on Hilbert's hotel in this chapter https://publicism.info/science/infinity/9.html of one of his books, one of the funnest little discussions of (mostly countable) infinity I've seen.
Ahh, the misuses of infinity again. Infinity is both, simply because inf+1 = inf: if infinity is odd, then infinity + 1 is even, which equals infinity, which is then odd. Think of sets. Inf and -inf are in both sets. You can prove this with deltas and epsilons, but that is beyond the scope of explaining it to 6 year olds.
There are twin primes, and the number of twin primes is conjectured to be infinite, so the number of twin primes should be even... because by definition they come in pairs. Also the number of primes is infinite, so there is both an even number of primes and an odd number of primes.
Infinite sets are indivisible, and also have infinite magnitude. I.e. you cannot subdivide them into anything and lose their infinite property; you also cannot multiply them and change the infinite property,
so, if someone says Infinity = 1/12, it's useful, but tricky.
If you multiply both sides of that by infinity you get infinity * infinity = 1/12 * infinity, i.e. it reduces to infinity = infinity.
Why can't infinity be both even and odd? The field of numbers should allow a possibility for both even & odd to happen at the point of infinity. Of course, infinity would then be a point where the definition of a number could break down. If it doesn't, then it can be both even and odd.
> To explain the idea to a child, I would focus on the principal idea: whether finite or infinite, a number is even when it can be divided into pairs. For finite sets, .....
Pretty sure if I tried to explain this to my kid niece she'd just say: "I'm uncomfortable. Can I go now?"
The key insight when dealing with infinities is that the tools we use to deal with finite numbers extrapolate to infinite sets by talking about relationships between numbers, not individual numbers.
This is also how we get to the notion of infinities larger than other infinities.
I really hate these mathematical technicalities spawned from material implication, the chosen way of making a definition, and vacuous truth - why can't we consider some questions to be marked as nonsense/irrelevant, as in relevance logic?
That has nothing to do with relevance logic. Someone has written a smart-ass answer about transfinite ordinals to a question about infinity; and people upvoted it for right or wrong reasons. To me personally the answer looks witty but misleading.
Not really. One could argue some math is innate and we are just rediscovering it. See the disconnect between natural language and math which happened in the early 20th century because of material implication bringing vacuous truth, leading to an "impedance mismatch". Medicine is still using counterfactuals precisely because of the weirdness introduced by Russell in order to make all Boolean values defined for inference.
That's the definition of infinity in calculus and analysis. Most of the comments in this HN discussion are talking about infinity as a set theoretical concept, i.e. cardinals and ordinals.
There's always someone who sees a question in a submission title and feels the need to comment simply to answer said question in the most boring, banal, and least insightful way possible. Most people realize that if an article that poses a seemingly-simple question makes it to HN frontpage, there's almost certainly some unexpected, interesting, and/or insightful discussion there that reveals that the question wasn't so simple after all.
Almost everything interesting in mathematics stems from a simple question: "How could we extend a concept to be more generally applicable?" Saying that infinity is not even or odd because it's not a number is like claiming that matrices can't be multiplied because they are not numbers.
I am not preventing the discussion or claiming that you shouldn't extend this concept, just objecting to the way the question is formulated ("is infinity an odd or even number"). The stackexchange comments agree with me, and do this extension by pointing out reasons it would be useful to pick one or the other, but I found it important to point out this caveat: there is no logical answer (in terms of numbers), and whatever you pick would be an extension, not a conclusion.
Nowhere in my comment do I say that this is a silly submission or suggest that it shouldn't be in the front page.
Ordinals are often also called ordinal numbers. Both ordinal and cardinal numbers are very much generalisations of the natural numbers to a more general concept of number, depending on whether you see counting sizes of sets or denoting position (e.g. first, second, etc.) as the fundamental thing numbers do. Of course there are also other notions of number that focus on other aspects (e.g. number fields). But I think it's definitely not wrong to call all of these things numbers.
So it seems weird to me to object to the form of the question, it is perfectly fine as it is. If the question was "is infinity an odd or even natural number?", then of course you'd be right.
All of these things are also stuff one could discuss with a 6 year old (i.e. tell them that there are multiple notions of numbers that focus on different aspects, and that the question has a different (probably interesting) answer in each of these different contexts). Insisting that infinity is not a number seems like a less interesting way to talk about this, without even being necessarily more rigorous.
You can build those and then pick if you want them even or odd, which are regular-number concepts. That is exactly what they did, and what I described. You go and re-read it.
ChatGPT response: Infinity is not considered an odd or even number. In mathematics, odd and even are properties of integers, which are finite numbers. Infinity, on the other hand, is not a number in the usual sense. It is a concept that represents an unbounded or limitless quantity. Therefore, the concepts of odd and even do not apply to infinity.
In the ordinals infinity != infinity + 1 (but 1 + infinity = infinity). You are right that infinity is even, and in fact infinity + 1 is odd. All this is explained in the first answer of TFA.
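For anyone wondering why addition stops commuting there, the two sums unpack differently (sketched in the usual ordinal notation):

    1 + \omega = \sup\{\, 1 + n : n < \omega \,\} = \omega,
    \qquad
    \omega + 1 = \omega \cup \{\omega\} \neq \omega

Putting one extra element before an infinite ascending sequence changes nothing, while putting one after it creates a new largest element.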
i think making the odd/even distinction is a mistake. It makes people say things like "2 is the only even prime" as if that's somehow different than "3 is the only prime evenly divisible by 3".
Even is a quality of division when the remainder is 0; even is a quality of making a rectangle with discrete integral sides.
> Infinity is not a number, odd or even, but rather a concept or a mathematical idea that represents an unbounded or limitless quantity. Infinity is not a real number that can be used in ordinary arithmetic operations, but it is used to describe a quantity that is larger than any finite number. Therefore, the concept of odd or even does not apply to infinity.
Imagine if we had ChatGPT a couple hundred years ago: "What is the square root of -1?": "The square root of -1 is not defined because you cannot take the square root of a negative number."
So you are proposing that a couple of hundred years ago, in 1823, a hypothetical ChatGPT would not have been trained on the works of Leonhard Euler, from 80 years before that.