
There is no proof that will ever satisfy a person dead-set against this. Ever since I brought this home from school as a child, my whole family ribbed me mercilessly for it.

If you tell a person that 3/6 = 1/2, they'll believe you - because they have been taught from an early age that fractions can have multiple "representations" for the same underlying amount.

People mistakenly believe that decimal numbers don't have multiple representations - which, for plain finite decimals, is more or less correct. The bar or dot or "..." is there to plug a gap, allowing more values to be represented accurately than plain-old finite decimals allow for. It has the side effect of introducing multiple representations - and even with that addition, it doesn't cover everything: pi, for example, still can't be represented exactly.

But it also exposes a limitation in humans: We cannot imagine infinity. Some of us can abstract it away in useful ways, but for the rest of the world everything has an end.

I wonder if there's anything I can do with my children to prevent them from being bound by this mental limitation?



It's more fundamental: People seem to have the intuition that the decimal representation of a number is a number. I don't know if it's because decimals resemble the natural numbers or what, but decimals seem to have a primacy for people that fractions do not. The idea that there's a gap between the symbol for a thing and the thing itself is the stumbling block.


I think this is a very insightful remark. People think that numerals _are_ numbers, and it's hard to explain why this is not the case, because we have no way to talk about specific numbers _except_ by using numerals. But many frequently-asked questions are based in a confusion between numbers and numerals. For example, many beginner questions on Math SE about irrational numbers are based in the mistaken belief that an irrational number is one whose decimal representation doesn't repeat. I've met many people who were just boggled by the idea that “10” might mean ●●, or ●●●● ●●●● ●●●● ●●●●, rather than ●●●●● ●●●●●. A particularly interesting example I remember is the guy who asked what were the digits that made up the number ∞. It's a number, so it must have digits, right? (https://math.stackexchange.com/q/709657/25554)

Computer programmers (and historians!) have a similar problem with dates, and in particular with issues like daylight saving time and time zones. I think a lot of the problem is that again there's no way to talk about a particular instant of time without adopting some necessarily arbitrary and relative nomenclature like “January 17, 1706 at 09:37 local time in Boston”. But when was this _really_? Unfortunately there is no “really”. (“Oh, you mean Ramadan 1117 AH, now I understand.”)


Apparently the New Math (https://en.wikipedia.org/wiki/New_Math) tried to address this kind of issue quite explicitly, by drawing a consistent distinction between numbers and numerals (where a numeral is a symbol that names a number). Reportedly most American math students found this kind of distinction extremely hard to grasp when they were presented with this kind of issue in elementary school. Maybe it would have worked better when they were a bit older.

I wonder if there's a way of teaching this kind of distinction and issue well in a way that would make sense for most students.

I think Feynman said somewhere that the New Math explicitly taught base representation and base conversions, probably as a way of trying to underscore the idea that "123" is a representation of a number rather than a number. Feynman found this to be of questionable value and thought that most students didn't manage to get the point.

Edit: there's a similar issue in linguistics because you have words, phonemes, phones, graphemes, and glyphs. You could say that "dog" isn't a word, but is rather the standard way of writing a particular word in the standard writing system for English (which would sometimes be indicated by <dog> in linguistic contexts). This idea lets you refer to <alright> and <all right> as ways of writing the same word, or <color> and <colour>, or in the case of languages with multiple writing systems <हिन्दुस्तानी> and <ہندوستانی>, or <אַ שפּראַך איז אַ דיאַלעקט מיט אַן אַרמיי און פֿלאָט> and <a shprakh iz a dialekt mit an armey un flot>.


> the mistaken belief that an irrational number is one whose decimal representation doesn't repeat

...which is true for any base-n representation where n is a natural number (or indeed any rational base). And that's kind of implied most of the time, so it seems like a useful definition. Where would this lead to problems?


I think the problem is that the definition, while valid, makes you hyperfocus on irrational numbers this way.

Seldom do we prove that a number is irrational by inspecting its decimal expansion. This would be in most cases a very unnatural proof. Since irrationality is a negative property (meaning, one arising out of a negation: the number is not a ratio), most of the time you prove it by contradiction. But people who just know the "digits don't repeat" definition expect us to somehow be able to list all of the digits of an irrational number and show that this infinite list doesn't repeat, which is, of course, an impossible task.


It's _true_, but it's not good as a definition, because it's hard to reason about. It drags in all sorts of contingent facts about base-10 representations that are not usually of interest.

The equivalent property, that a number is irrational if it's not equal to m÷n for any integers m and n, is much simpler. So we use that as the definition, and from that simple and intrinsic definition, we prove the _theorem_ that the decimal representation of an irrational number never repeats.


Yeah, can we have an example of an irrational number whose decimal representation repeats or terminates?

Or a rational number whose decimal representation doesn't repeat?


My phrasing was bad. I should have said "the mistaken belief that an irrational number is *defined to be* one whose decimal representation doesn't repeat".

Usually we define it like this: an irrational number is one that isn't a quotient of two integers. Starting from that definition, we then prove the _theorem_ that the decimal representation of a number repeats if and only if the number is rational.

It's much easier to start from the intrinsic properties, and use those to prove things about the representation, than the other way around. But if you don't distinguish the representation from the thing itself, you can't tell which way you are going.


I am still not in agreement.

The proof that the usual definition is equivalent to the representation is fairly straightforward and easy, no matter which side you picked as the definition. And once the equivalence is established, all other proofs proceed naturally. It therefore matters a lot that we pick one as a definition and know which one we picked, but not so much which one we picked.

Now in fact the quotient definition is by far more interesting mathematically. There is also a clear foundational reason to prefer it, namely that you can easily construct and prove things about the rational numbers long before you construct the real numbers. However it is unlikely that anyone who is confused about the definition of a rational number has a clear understanding of how the reals are constructed, so that is not a particularly important consideration for them.

Furthermore the fact that foundational considerations argue for one construction over another has little bearing on what is pedagogically preferable. As a famous example, the easiest way to rigorously define logarithms is through the integral of 1/x. However explaining logarithms that way to someone who doesn't know them is a pedagogical disaster.


I expect mjd is thinking of irrational bases. The number might still be written as 10, in digits that look decimal.


Thank you! This thread is full of people insisting on something wrong because they were taught incorrectly; an irrational number is defined in terms of integer ratios for a reason.

It's not like those people haven't worked with an irrational base before, either! Radians have an irrational base. When we talk about 2π radians, or 1/4π radians, that's exactly what we're doing.


That is not what I meant at all. (My phrasing was unclear. Sorry for the confusion.) Jordi understood what I meant though.


It was unclear; I understand rational numbers to be ratios, and irrationals to be inexpressible as fractions, and I see an almost direct connection between that and digital representation in rational bases, so it seemed deeply confusing to see a consequence of the definition of rationals being refuted. The only way out was an irrational base.


Maybe I'm misunderstanding, but I think the issue with dates is strictly different. Dates are hard not because time is fundamentally hard, but because there is lots of complexity in human representation of time (different places at different times have had similar but different representations of time). But that's not inherent to time. Ignoring relativity, if everyone throughout time used something like seconds since Unix epoch (or some similarly arbitrary point in time), then writing programs about time would be much simpler.

I think the numerals and numbers issue is more complex because numerals are fundamentally hard to reason about. Even the question "what is a number?" is deceptively deep.


No, dates are harder than that. Humans use time to coordinate; the representation of time is fundamentally about communication. Only timestamps of events in the physical world are easy (ish). But that's not always, or perhaps mostly, what people are interested in.

When people receive a time, they may (usually) want it in their own time zone, but they might instead want it in the time zone of the entity they're getting the time from, if they're subsequently going to talk to that entity about the time. When they talk about meeting someone else, when they convey the time of the meeting, they usually mean whatever that time means in the place where they meet, which might be different from the current location of either. It might even be different due to political changes around time zones and daylight saving if the meeting is far enough in the future.


You say "no, dates are harder than that", but it sounds like you agree with me exactly based on the rest of your post. Time is hard because human representations of it are varied, complicated, and often arbitrary--not because there's anything fundamentally hard about the math of time (again, relativity notwithstanding). Contrast that with numbers which are inherently difficult to intuit about apart from issues of representation.


Yet the way we represent numbers is also a human construct. 1 = .999... is hard for people to understand because we think and write in base 10 rather than base 3. There's nothing that is fundamentally hard to reason about here.


Oof, I meant to write "numbers are fundamentally hard to reason about", not "numerals". Missed the edit window. The point I was trying to make is that numbers are very often fundamentally counterintuitive irrespective of notation. E.g., is the set of natural numbers larger than the set of decimal numbers? Or the other way around? Or equal? The complexities of time are almost exclusively inferring and converting between (often ambiguous) representations.


Of course in base three, they'll have a hard time with .222...


> For example, many beginner questions on Math SE about irrational numbers are based in the mistaken belief that an irrational number is one whose decimal representation doesn't repeat.

How is this a mistaken belief?

Every rational number winds up with a repeating decimal representation, and every number with a repeating decimal representation is a rational number. We learn algorithms to go back and forth between the two in elementary school.

Therefore irrational numbers cannot have repeating decimal representations. Conversely numbers with decimal representations that don't wind up repeating cannot be rational and so must be irrational.
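
That back-and-forth can even be mechanized. Here's a rough Python sketch of the fraction-to-decimal direction (the helper name is mine, not a library function); the key fact is that there are only q possible remainders, so the digits must eventually cycle:

    # Long division of p/q in base 10, detecting the repeating cycle.
    def repeating_decimal(p, q):
        whole, r = divmod(p, q)
        digits, seen = [], {}
        while r and r not in seen:
            seen[r] = len(digits)   # remember where this remainder first appeared
            d, r = divmod(r * 10, q)
            digits.append(str(d))
        if r == 0:                  # terminating decimal
            return f"{whole}." + "".join(digits)
        i = seen[r]                 # the cycle restarts at this position
        return f"{whole}." + "".join(digits[:i]) + "(" + "".join(digits[i:]) + ")"

    print(repeating_decimal(1, 3))   # 0.(3)
    print(repeating_decimal(1, 11))  # 0.(09)
    print(repeating_decimal(3, 8))   # 0.375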



Proof: bring up removing time zones, such that there is only one time zone.

The responses are comical, even here on HN, such as not being able to wake up or not knowing when morning is or when meals are. Let’s not forget that time zones are a human invention less than 200 years old.


> Let’s not forget that time zones are a human invention less than 200 years old.

What's the point of that statement? Before the introduction of formal time zones, we had thousands of informal ones, one for each settlement, calibrating noon to the zenith of the sun.


The point is to indicate that humans knew how to wake up, eat food, communicate, and conduct business (even over long distances) before they had clocks much less the relatively recently invented time zones.


200 years ago we didn't need timezones because we couldn't communicate or travel fast enough or often enough for them to matter. With railroads and telegraphs the level of granularity required for "some time 300 miles away" went from days to hours and minutes. No one in the U.S. west wanted to be told "The sun is at its zenith around 8 a.m. for you." For most people talking about time in their day to day lives it's far more useful to communicate a relative time of day with people near them than to communicate an absolute moment in time and do the math to figure out how bright it is outside.


> 200 years ago we didn't need timezones because...

We still do not. China and India are examples of large geographies spanning across a vast amount of longitude and yet each are a single time zone. Time zones are a political entity only, an unnecessary complexity. The absence of time zones will not halt business or communication over long distances.

> For most people talking about time in their day to day lives it's far more useful to communicate a relative time of day

People have done this for thousands of years without modern chronometers. Examples: dusk, dawn, morning, midday, afternoon, evening, twilight.


These two national policies aren't equally difficult: India extends across about 29° of longitude, while China extends across about 61°. The natural size of a time zone is 15° of longitude, so without political considerations India would include about 2-3 time zones, while China would include about 4-5.

I've heard there's some pushback against the Chinese policy in that some people in the west keep an unofficial local time which is widely understood and quoted (though presumably not for things that are sufficiently official or relevant to other regions). Apparently there's currently an ethnic conflict over the time zone status in Xinjiang:

https://en.wikipedia.org/wiki/Xinjiang_Time

Maybe this conflict has now been pushed underground by force?

> In 2018, according to Human Rights Watch, a Uyghur man was arrested and sent to a detention center because he set his watch to Xinjiang Time.


> A particularly interesting example I remember is the guy who asked what were the digits that made up the number ∞. It's a number, so it must have digits, right?

I don't think there are many areas of math where ∞ is a number. In my experience people have a whole other problem with ∞, thinking that it is some sort of huge concept defined globally in math, where it is just a notation shared by various non-mystical definitions across subjects (e.g. bijection-based definition of infinite set, epsilon-based definition of convergence, etc)


Reminds me of Terry Pratchett's description of the mathematical reasoning of camels in Pyramids:

Lack of fingers was another big spur to the development of camel intellect. Human mathematical development had always been held back by everyone’s instinctive tendency, when faced with something really complex in the way of triform polynomials or parametric differentials, to count fingers. Camels started from the word go by counting numbers.


Possibly people are looking at two different symbols and asking "can you show me logically why those are equal." If they're given a definition of "equal" and they still object, that's a different problem.

I have this problem every time I play with group theory again. You get the axioms for a group, which say there is some identity but don't explicitly require the identity to be unique. You can easily prove that the identity of a group is unique ... so long as you define "unique" to mean "if element e1 and element e2 are equal, then we say they are the same element."

You could count things differently and say the identity is "not unique", it would just lead to a lot of stupid and un-illuminating consequences.


I used to have the same annoyance about group and category theory; learning some about the type-theory-as-foundations work (particularly homotopy type theory and cubical type theory) has helped w/ this; in that setting, you have several distinct but well-developed notions of equality: propositional equality, which group theory and most math cares about, vs judgemental equality, which is the one that's "obviously true" by the rules of the system.

e.g. 5 = 5 is true under judgemental and propositional equality, whereas x + 2 = 2 + x is only true under propositional


Yeah forcing myself to keep making sure I'm defining "equality" seems to be a really useful way of "breaking my brain" until I understand what an algebraic structure "really is."


I think a lot of people don't think of math in terms of definitions and proof. Math was just something they were taught as kids. And even if they've gotten into more advanced math, i think the 1 = .9... question hits their kindergarten brain and they just say "no" to it the same way they'd say "no" to someone singing the alphabet song in the wrong order.


I think a lot of people see “find me a number between 0.9999... and 1” as no more or less valid (plus, perhaps, no more or less pedantic and dickish) than “ok smarty pants, why don’t you just keep adding 9s until you reach 1, then we’ll talk.”


I imagine that's true sometimes.

On the other hand, hypothetically, if it were the case that we were missing a key definition that's needed for some proof, and someone didn't believe that proof, then maybe we wouldn't quite know enough yet to decide that the person is in mental kindergarten. A better first step for us might be to supply the missing definition.


This is a good point, but an even more basic issue is that the question "what is a number" is a matter of definition. There isn't a "correct" definition of numbers; only one that we've accepted as standard. The accepted definition of a "real number" is actually quite complicated [1], and it's certainly not easy to convey why this complexity is necessary. Other definitions are also possible [2], but nonstandard.

The simplest definition is: a finite decimal a_k … a_1 . b_1 … b_h is defined to be a fraction and an infinite decimal is defined to be a limit. You'd still have to define what a limit is, but that is somewhat more intuitive.

[1] https://en.wikipedia.org/wiki/Dedekind_cut

[2] https://en.wikipedia.org/wiki/Hyperreal_number


Sorry, no.

There are TWO standard definitions of the real numbers. Namely Dedekind cuts and Cauchy sequences. (They are completely equivalent.) The usual decimal representation of a number turns out to be a Cauchy sequence.

The "simplest definition" that you provide turns out to be rather non-simple in practice. Try proving that multiplication is commutative to see the difficulty.

There are plenty of other number systems out there. Try https://en.wikipedia.org/wiki/Surreal_number or https://en.wikipedia.org/wiki/P-adic_number or the complex numbers.



Yes, but at the same time it is common for people to insist that 0.999… only "approaches" unity as if it were a series approaching a limit, rather than a unique number. Intuition is a funny thing.


Saying "a number isn't a limit" is true, but it's only really relevant if you're talking to someone who genuinely has no idea what limits are. In actual math the number 0.99... can be defined as the limit of a series.


Every number is a limit, yes, but people think of 0.999… as a series. Not rigorously, of course, but that's a common argument even by people (perhaps especially by people) who have a highschool or even math-minor level understanding of series and limits.


great point. My thought on (1/3 = 0.3333...) * 3 = 1 = 0.999... was that it is intuitively obvious that the "problem" is that we use base-10 for decimals. There is nothing magic or unknowable about the quantity 1/3.

I've often wondered if there is some alternate base or mathematical system entirely that would be "better" in these respects. The thought usually comes up thinking about why pi is such an "ugly" number in base-10 decimal.


I think the problem is that people are often only taught base-10 so they confuse numerals with numbers. If you learn base 2 and then base 16 and then base pi, you start to realize that numbers are something more abstract than whichever numeral system we use to represent them. Rightly or wrongly, the way I imagine integers now is an infinite set of different numerals (base infinity?) such that there is only ever 1 digit (I don't actually have concrete pictorials assigned to those numerals).
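
A concrete way to see it, for anyone who wants to play along (a small Python sketch; to_base is my own throwaway helper, not a standard function):

    # The *string* "10" names a different number in every base:
    for base in (2, 8, 10, 16):
        print('"10" in base', base, '->', int("10", base))   # 2, 8, 10, 16

    # And one number (here: ten) gets a different numeral in every base.
    def to_base(n, b):
        digits = ""
        while n:
            n, d = divmod(n, b)
            digits = "0123456789abcdef"[d] + digits
        return digits or "0"

    print(to_base(10, 2), to_base(10, 8), to_base(10, 16))   # 1010 12 a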


Interesting. I find it funny to imagine what if you did have pictorial representations. They'd have more and more complex strokes and knots, and you would also need an infinitely large paper or infinitely precise pen in order to not end up repeating a number sooner or later as you enumerate them.

Come to think of it, this is one way of thinking about the relationship between symbols and geometry.


Who says pictorials have to be 2-dimensional? ;)


Base 12 is a better base overall. It is divisible by more numbers. That is why it is used in various monetary, time and measurement systems.

It's the one issue I have with the metric system... But that ship has sailed :) look up the dozenal society if you're curious how fervent some supporters might be.


The Babylonians thought it was 60. (But imagine having to remember an additional 50 numeric characters.)


> remember additional 50 numeric characters

The numerals are not distinctly varied like our Arabic numerals. Quite the opposite, they are repetitive and completely systematic and require 80% less effort to remember.

https://commons.wikimedia.org/wiki/File:Babylonian_numerals....


Any other argument for a different base aside, in base 12 you have the analogous “problem” that 0.BBB... = 1. None of the usual bases, equipped with this “...” power, avoid the “problem” (non-unique representation).


I think the confusion comes when mathematicians say "a sheet of paper can be 0.333... (0.3_, i.e. 0.3 recurring) units long" or something because in experienced reality we can always choose a measure that's rational (in the maths sense). That 1/3 of a meter can be measured in one-third-meter units and be precise and easily written.

Now sure, make a square using those measures and measure the diagonal, like an awkward mathematician - "see, see, we need irrationals!" - but then we can just cut another measure that's exactly that length ... stupid mathematicians!

Yeah representation is not reality.

Now, what's the ratio of those two measuring sticks ...


Whatever base you use, it will only be useful for rationals. You can't use any base to significantly improve the representation of pi. In base 7, 3.1 would be a nice approximation, but you can't go beyond approximations.


It doesn't have a terrible representation in base pi ;)


;) As someone fond of integers, I can't advocate base pi.


You can't fix irrationals by changing base. But, there are other approaches.

Continued fractions give very approachable representations for common irrational numbers like e and sqrt(2). While π's simple continued fraction has no nice pattern, it has some very well-behaved generalized continued fractions.

https://en.wikipedia.org/wiki/Continued_fraction
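
To give a flavor of what that looks like in practice, here's a minimal Python sketch (convergent is my own helper) folding the continued fraction of sqrt(2), [1; 2, 2, 2, ...], into its convergents:

    from fractions import Fraction
    from math import sqrt

    # Fold the continued fraction terms from the right to get a convergent:
    # [a0; a1, ..., ak] -> a0 + 1/(a1 + 1/(... + 1/ak)).
    def convergent(terms):
        x = Fraction(terms[-1])
        for a in reversed(terms[:-1]):
            x = a + 1 / x
        return x

    for n in range(1, 6):
        c = convergent([1] + [2] * n)
        print(c, "error:", float(c) - sqrt(2))   # 3/2, 7/5, 17/12, 41/29, 99/70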


Decimals are easier to compare than fractions. I can't easily work out in my head which one is bigger, 457/790 or 580/924, but I can easily see that (approximately) 0.57848 is smaller than (approximately) 0.62771.

Since the fundamental thing most people want to do with numbers is see which one is bigger, they favour decimal expansions. And since decimals worked so well for fractions, why not use them for everything else?
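
For what it's worth, that particular comparison can be done exactly, without any decimals, by cross-multiplying (a quick Python sketch):

    # Compare 457/790 and 580/924 exactly: a/b < c/d  iff  a*d < c*b (for b, d > 0).
    a, b, c, d = 457, 790, 580, 924
    print(a * d, "vs", c * b)    # 422268 vs 458200
    print(a * d < c * b)         # True, so 457/790 < 580/924

The decimal expansion is, in effect, the same comparison with the divisions precomputed, which is probably why it feels so much easier.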


"Players and painted stage took all my love, And not those things that they were emblems of."

The Circus Animals' Desertion, W. B. Yeats


Similarly, perhaps, many take for granted that the written version of a word, in current standard spelling, is the word.


The real issue is that we don't define the real number system before we use it. The fact that 0.999... = 1 is a consequence of a formal definition of decimal numbers. We can create a new definition of decimal numbers that does not satisfy this equation and use it in place of our current one.

Let's imagine a new decimal number system with some vague notion of infinitesimal numbers. We lose some properties we enjoy in our current system but all of those properties still hold for numbers with no infinitesimal part. We can still use our every day numbers like nothing has changed yet we also have a notion to describe infinitesimal values. We can make statements like 1/3 is infinitesimally less than 0.333... and carry on like nothing else has changed.

Now let's sit someone down, start with the rational numbers, introduce Dedekind cuts to define the real numbers and prove that in the real number system that 0.999... is exactly equal to one. Let's also convince them that the real numbers are the unique complete ordered field and that each of these properties are indispensable. Then they will believe that 0.999... should be equal to 1.


> There is no proof that will ever satisfy a person dead-set against this.

Indeed. I've torn my hair out trying to convince smart people with PhDs in hard sciences and had to give up in frustration.

I usually find that the most success can be had by kicking the ball to them immediately and having them define what they actually mean when they say "0.999…". If we're going to debate whether that thing equals another thing, we better make sure we know what we're talking about. Inevitably, this either causes the dead-set person to give up, or give a myriad of definitions that are either meaningless, ill-defined, or causes them to realize that they don't actually know what "0.999…" means (or what they want it to mean). It is hard to have the patience to chase down the consequences of their ill-fated definitions, though.


Ask for a number between .9 repeated and 1


"There isn't one, 1 is the very next number right after 0.999... Checkmate atheists." (In all seriousness I don't think it's a very convincing argument for someone who doesn't buy the proofs -- it requires you to believe and have internalized the idea that there are an infinite number of reals between any two distinct reals, and therefore that any pair of reals with nothing between are the same number. Those seem like bigger logical leaps to me than the simple proofs for someone who hasn't thought about this stuff.)


How about, ask for an integer between 1 and 2. Can't think of one? Guess they're the same number then.


Apples and oranges. For any two different real numbers, there's a number between them. Integers work differently.


This is a bold assertion, and one that is not obviously true, especially in cases like 0.999... and 1.0


Those aren’t different real numbers. That’s the whole point of the conversation.


Obviously saying it is not obviously true is false if 0.999... == 1.0


Can something be true and not obvious?


Obviously not.

Which is the point in using the word obvious, obviously. Namely, using it to feel superior or to not provide a better argument.


When I use the word, the point is to call out the fact that I think it's obvious, so if others don't, they can explain why. Not to forestall any discussion.

Anyone who uses the word differently is doing it wrong.



I don't understand how they're the same number.

I will never accept that they are the same. The difference between 0.9 repeating infinitely and 1 is infinitely small, but it isn't zero.


What is an "infinitely small" number?

Is 9999..... the same as infinity?

What is 1.0 - 0.99999.... = ?

What does it mean to say X is a number, if you can't subtract it from another number and get a number as an answer?


> What is an "infinitely small" number?

What is an infinitely large number?

> What does it mean to say X is a number, if you can't subtract it from another number and get a number as an answer?

By that logic, 0.99 repeating isn't a number at all, and therefore can't be equivalent to 1, because you can't subtract it from 1. So my understanding that they are different is correct.


> > What is an "infinitely small" number?

> What is an infinitely large number?

Neither is a well-defined concept within the standard reals, and completely unnecessary for understanding that 0.999…=1.

> > What does it mean to say X is a number, if you can't subtract it from another number and get a number as an answer?

> By that logic, 0.99 repeating isn't a number at all, and therefore can't be equivalent to 1, because you can't subtract it from 1. So my understanding that they are different is correct.

0.99… is a real number. The sequence (a_n)_{n positive integer} with a_n = 9/10^1 + 9/10^2 + … + 9/10^n has a limit (do you want me to prove that?). 0.99… is defined as that limit. That limit is 1. Therefore 0.99… = 1.

I think you're struggling to grasp the definition here. The definition of 0.ddd…, where d is an integer between 0 and 9, is the limit of the above sequence with 9 replaced by d. That limit always exists, and the definition is therefore OK. In the case of d=9, the limit is 1.
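
If it helps, the epsilon game in that definition can be watched numerically (a small Python sketch using exact fractions, so no floating-point noise sneaks in):

    from fractions import Fraction

    # a_n = 0.9, 0.99, 0.999, ... as exact fractions; the gap to 1 is exactly 10**-n.
    a = Fraction(0)
    for n in range(1, 8):
        a += Fraction(9, 10 ** n)
        print(n, a, "gap to 1:", 1 - a)

    # For eps = 1/10**6, M = 6 already works: every later term is within eps of 1.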


    0.9      is not equal to 1,
    0.99     is not equal to 1,
    0.999    is not equal to 1,
    0.9999   is not equal to 1,
    0.99999  is not equal to 1,
    0.999999 is not equal to 1,
and so on, ad infinitum.

Saying that if you add enough "9"s it suddenly equals 1.0 makes absolutely no sense to me, and I seriously doubt that anyone will be able to convince me that it does make sense. I've read every single post in this thread and none of you have gotten me any closer at all to believing or understanding that 0.9 repeating equals 1.

Maybe I'm too old to understand this "new math" where all numbers are equal to each other.


This is correct.

No finite representation of repeating 0.9s can equal 1.0

The ask that people accept infinite representations as valid is a big one.


> 0.9 is not equal to 1,

> 0.99 is not equal to 1,

> 0.999 is not equal to 1,

> 0.9999 is not equal to 1,

> 0.99999 is not equal to 1,

> 0.999999 is not equal to 1,

> and so on, ad infinitum.

You are correct about all of these, and all finite strings of the above form.

> Saying that if you add enough "9"s it suddenly equals 1.0 makes absolutely no sense to me, and I seriously doubt that anyone will be able to convince me that it does make sense. I've read every single post in this thread and none of you have gotten me any closer at all to believing or understanding that 0.9 repeating equals 1.

I think it's because you, and a lot of other people in this thread, are turning the question on its head. The difficulty does not so much lie in figuring out whether 0.999… is equal to 1 or not, but rather in what we mean when we write 0.999….

I know I'm repeating myself from elsewhere in the thread, but I'll try again. Try to go through these step by step, and feel free to let me know where you lose the thread.

DEFINITION: A finite decimal representation of a real number is a finite string of the form `a_m a_{m-1} … a_0 . b_1 b_2 … b_n` where each `a_i` and each `b_i` is a natural number between 0 and 9 inclusive (a digit). We say that this finite decimal representation represents the real number

    a_m*10^m + a_{m-1}*10^{m-1} + … + a_0 + b_1*10^{-1} + b_2*10^{-2} + … + b_n*10^{-n}.
Note: The previous definition deals with finite strings and finite sums. I hope we can agree that these are well-defined and unambiguous concepts.

EXAMPLE: The string `12.98` has `m=1`, `n=2` with `a_1=1`, `a_0=2`, `b_1=9` and `b_2=8`. It therefore represents the real number

    1*10^1 + 2*10^0 + 9*10^{-1} + 8*10^{-2}
(duh!).

Within this standard framework, there is no way to ask "what is 0.999…?". It is not yet defined, because we have only defined what finite strings mean. The standard definition for what one means by 0.999… follows. (One can obviously also define things like 0.888…, 1.999…, etc., but let's stick to one case here.)

DEFINITION: Let `(c_n)_{n natural}` be a sequence of real numbers (let me know if you need a definition of sequences!). We say that the sequence has the limit x as n tends to infinity (these are words, you don't have to ascribe meaning to "infinity" in that sentence – it's just a word, like "gnarf"!) if, given any real eps>0, there exists an M such that for all m > M, |c_m - x| < eps.

Definition (this is the definition you have to wrap your head around before continuing): Consider the sequence `(c_n)_{n natural}` where `c_n` is the finite sum

    9*10^{-1} + 9*10^{-2} + … + 9*10^{-n}
The string `0.999…` (which we colloquially speak of as "zero point nine nine nine with nines repeating forever") denotes the limit of the sequence `(c_n)_{n natural}` as n tends to infinity (if it exists).

"THEOREM": The limit defining `0.999…` does exist. It is `1`.

PROOF: You can fill this in. If you can't, I'm happy to do it.

As you can see, at no point in the above did feelings or beliefs matter :-)
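
For completeness, here is one way to fill in that proof (a standard epsilon argument; the write-up is mine):

    % Claim: the sequence c_n = 9*10^{-1} + ... + 9*10^{-n} has limit 1.
    \begin{proof}
    The finite geometric sum gives $c_n = 1 - 10^{-n}$, so $|c_n - 1| = 10^{-n}$.
    Given any real $\varepsilon > 0$, choose $M$ such that $10^{-M} < \varepsilon$.
    Then for all $m > M$ we have $|c_m - 1| = 10^{-m} < 10^{-M} < \varepsilon$.
    So the limit exists and equals $1$; that is, $0.999\ldots = 1$.
    \end{proof}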


I don't have a strong opinion or much mathematical knowledge, but an "infinitesimal" number is a thing that most people have heard of even if they're fuzzy on what it is. If there is such a thing, what is the difference between 0.999... and 1 - 1/∞?


Those are great questions that not every system is required to address in the same way. (In a similar vein, Double.compare(+0.0, -0.0) != 0 in Java, even though +0.0 == -0.0.) This is a breakdown in notation and/or convention. There is no ground truth, just what's true within the system.


So does this mean that an infinitely small number is zero? As in 1/∞ ?


In the real numbers, there doesn't exist such a thing as an "infinitely small number" that is apart from zero. Yes, there exist infinitely many numbers between any minuscule number and zero, but the way they are defined, every single number you can grasp is finitely small. The "infinitely" small gap is inaccessible. In some other number systems it isn't, but in the standard reals it is.

That means that the "infinitely small" doesn't exist; "smallest apart from zero" doesn't exist either.


You can read about this in any work on nonstandard analysis. ("Nonstandard" is just the name, much like "imaginary" numbers.)

An infinitely small number is zero when projected onto the real number line. If you introduce an infinitesimal quantity to the reals, then for every number there is a unique real number to which that first number is infinitely close (that is, the difference between them is infinitesimal). You can use that real number as a (good) approximation of all the nonstandard numbers in its halo. (As long as you're comparing it to other real numbers.)
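
If you want a toy you can actually compute with, dual numbers are easy to sketch in Python. (To be clear: these are not the hyperreals of nonstandard analysis, but they share the "standard part" idea; the class and names below are mine.)

    # Dual numbers a + b*eps, with eps*eps = 0: a toy system with an infinitesimal.
    class Dual:
        def __init__(self, a, b=0.0):
            self.a, self.b = a, b    # a: real ("standard") part, b: infinitesimal part
        def __add__(self, other):
            return Dual(self.a + other.a, self.b + other.b)
        def __mul__(self, other):
            # (a1 + b1*eps)(a2 + b2*eps) = a1*a2 + (a1*b2 + b1*a2)*eps, since eps^2 = 0
            return Dual(self.a * other.a, self.a * other.b + self.b * other.a)
        def st(self):
            return self.a            # "standard part": project onto the real line

    eps = Dual(0.0, 1.0)
    x = Dual(3.0) + eps              # 3 + eps: infinitely close to 3
    print((x * x).st())              # 9.0: st((3 + eps)**2) = st(9 + 6*eps) = 9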


> So does this mean that an infinitely small number is zero?

What does "infinitely small" mean?

> As in 1/∞ ?

What notion of division are we talking about here? The division most people expect is that of real numbers. ∞ is not a real number, so you'll have to specify what you mean.


There is no infinitely small number between 0.999... and 1. The difference is 0.000... Not infinitely small, but infinitely zero.


> There is no infinitely small number between 0.999... and 1. The difference is 0.000... Not infinitely small, but infinitely zero.

Zero. Just zero. The difference is zero. 0. Because 0.999… = 1.


You are stating that 0.999... = 1 proves that 1 - 0.999... equals zero. I am stating that 1 - 0.999... = 0.000... proves that 0.999... = 1.

I think people intuitively see that infinitely zero equals zero.


> 1 - 0.999... = 0.000... proves that 0.999... = 1.

If people accept the former, and that the RHS of the former is in fact 0, they've already also accepted that 0.999…=1. I don't see what the discussion is at that point


If people don't accept the former, they can take out a pencil and paper to compute it themselves. After a few digits it will become obvious.

I don't have a direct computation for making the latter obvious, just indirect ones like 1 - 0.999... and 3 x 0.333...


> If people don't accept the former, they can take out a pencil and paper to compute it themselves. After a few digits it will become obvious.

How can they compute 1-0.999… when they clearly have no idea what 0.999… is?


They have to know that 0.999... means you never stop writing nines.

Put 1.0 on top, 0.9 on the bottom. Start subtracting from left to right, and keep writing nines on the bottom as you go to the right. In no time you'll see that the answer is infinite zeros.
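
A Python sketch of the same observation with exact fractions (nothing here beyond the standard fractions module):

    from fractions import Fraction

    # 1 minus a finite run of n nines is exactly 10**-n:
    for n in (1, 5, 10):
        nines = Fraction(10**n - 1, 10**n)   # 0.99...9 with n nines
        print(n, 1 - nines)                  # 1/10, 1/100000, 1/10000000000

    # The gap drops below every positive number as n grows, so the limit
    # of the differences -- what "1 - 0.999..." denotes -- is exactly 0.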


> They have to know that 0.999... means you never stop writing nines.

How do they know that that's a real number?


They don't have to know that it's a real number. Knowing that you never stop writing nines is sufficient to perform the calculation.


You're asking them to perform subtraction. They probably know how to do that with real numbers, but probably not with much else. So they'll have to know that they're real numbers (or whatever numbers you are demanding that they be – you're still unclear on this point if it's not actually the reals).


Internally I'm saying they're real numbers. In what I say to the person trying to intuit that 0.999... = 1, I'm deliberately avoiding talking about number systems. I'm assuming this person thinks of numbers as sequences of digits, possibly with a decimal point.


And that is where it all goes wrong.


In calculus, yes.


It's the smallest number bigger than 0.


That doesn't exist. An open interval doesn't have a smallest number.


It does exist. The other poster just clearly showed that it exists by referring to it.

The problem is that if we include such a number in our formal system of math, we quickly find contradictions and the whole system falls apart. So such a number is incompatible with any formal system of math (though I guess you could start building one which does include such a number and see what properties it has).

Herein lies the problem: the people you are talking with do not use a formal system. Their system of math has something like the same flaw as their informal system of grouping things, which would happily include the grouping that contains every grouping that doesn't contain itself. People rarely deal in formal systems, and thus they can handle completely illogical statements fine as long as they are protected from seeing the consequences.


You are certainly correct that people arguing the opposite side probably don't have a formal system in mind, but I think the intuition that an open interval in the Reals doesn't have a smallest number is easy to grasp even without any formal training. So you can force them to see the consequences of it through fairly straightforward logical contradictions.

Assume x is the smallest real number greater than 0. Then x/2 is also a real number and is greater than 0 but less than x. Therefore, x can't be the smallest real number greater than 0.


In math, when assuming the existence of something leads to a contradiction, we conclude that the thing does not exist. The description may exist ("integer between 3 and 4"), but there is no described object. A description names a set or a class, and that class can have 0, 1, or more members.


>In math, when assuming the existence of something proved a contradiction, we conclude that the thing does not exist.

Well only to the extent that you don't want to throw away any of the other axioms. Sometimes you do and there are some fun systems of math, but few have any practicality and those that do are often so advanced that even someone with an undergraduate focus in math can't appreciate those systems.

It is much the same with computer science. I personally enjoyed playing around with formal concepts of computation and adding some extras to see what happens. For example, what happens to a Turing machine if part of the machine can time travel or has access to an oracle. Does this make concepts like time travel inherently contradictory to our notion of computation?

But the practicality of these exercises does not exceed their entertainment value.


Of course it does. It's called the infinitesimal. Its common definition in terms of the reals is 1/∞: https://en.wikipedia.org/wiki/Infinitesimal

If you've taken Calculus, you've already worked with math that requires the infinitesimal to exist.

It's not a value you can meaningfully write out, but you can't write out pi, e, phi, root 2, 1 / 3 in base 10, root -1, etc. "I can't write it down" isn't a particularly unique property for numbers.


> If you've taken Calculus, you've already worked with math that requires the infinitesimal to exist.

Not at all. Standard calculus uses standard real numbers, for which there is no infinitesimal. One may well speak of infinitesimals as a mental tool when building a mental model for calculus, but those infinitesimals are not actual real numbers (or a well-defined mathematical object at all - in standard calculus).


Correct. This is covered in the article. https://en.wikipedia.org/wiki/0.999...#Infinitesimals


There is no smallest positive infinitesimal either. At least in theories that manage to define those rigorously. And it’s mostly a formal trick anyway; standard epsilon-delta calculus avoids them entirely.

Had you actually meaningfully studied this subject, or did you just link to a Wikipedia article you half-heartedly skimmed one day?


That's a funny way to say, "No, I think you misunderstand. I mean to say no single infinitesimal number exists. Like infinity, the concept exists, but as a literal single number, no."


> There is no smallest positive infinitesimal either.

There is in the surreal and hyperreal number systems. I got that from skimming wikipedia though....


At the very least, don't write "Of course it does". It does not in the real number system.


Why is 0.000... bigger than 0?


It isn't.


Or is it? Say I'm a layman and I decide that in the system of math as I understand it, 0.000... is larger than 0. Yes, if I were going to be completely formal with my own system of math I would eventually have to face the problems this introduces and resolve them, but until then I can generally adopt a self-contradictory system and continue to live my life unaffected. Much like many people live their whole lives using naive set theory for their understanding of sets.


Then in your system of math 0.999... is also less than 1.

However, basic arithmetic taught to children requires that adding trailing zeros does not change the value of a number. You'll have a hard time doing arithmetic once you change that assumption.


You are correct. This was a rhetorical question to get traderjane to question whether "bigger than 0" really applies here.


strictly bigger than 0


Doesn't exist.


bigger or equal


0.


Ask for a letter between G and H.


You're missing the point. This would be an analogy fit for talking with someone who's looking for an integer between 1 and 2.


0.00...1


> 0.00...1

And what does this mean? I will remind you that for an integer d between 0 and 9, 0.ddd… means the limit of \sum_{i=1}^N d/10^i as N tends to infinity.


  0.000...1 = 1/∞


And what does the right hand side of that mean? Division is commonly defined for a real numerator and a real, non-zero denominator. You are using the common symbol, but with ∞ in the place of the denominator. Since ∞ is not a real number, you must be using a non-standard definition of division, and have to define what you mean.


In some systems, division by ∞ is not defined at all (forbidden); in others it is defined as 0; in yet others it is defined as non-zero.


> In some systems, division by ∞ is not defined at all (forbidden); in others it is defined as 0; in yet others it is defined as non-zero.

Fine by me. Define whatever notion you're using. You can't just throw out non-standard things and expect people to know what you mean.


No, there's no 1. 2OEH8eoCRo0 is exactly right. Subtract 0.999... from 1 and you get 0.000...


    0.999... + 0.000...1 = 1
    0.000...1 = 1/∞
    0.999... = 1 - 1/∞


You're repeating the same wrong thing you said earlier.

It's 0.999... and not 0.999...0

In the same way, it's 0.000... and not 0.000...1.


  0.999... = 0.999...9
  0.999...9 + 0.000...1 = 1
  0.999...0 + 0.000..1 = 0.999..1
  0.000...1 = 1/∞
  0.999...9 = 1 - 1/∞
  0.999...0 = 1 - 1/∞ - 9/∞ = 1 - 10/∞
  If x/∞ = 0, then 0.999...x = 1.
  If x/∞ ≠ 0, then 0.999...x ≠ 1.


Ok, now you're saying that infinite decimals have final digits.


If the Universe is infinite, then if we compare you to the size of the Universe, you are infinitely small, so you don't exist at all. Why should I waste my time?

If the Universe is finite, then a finite number of elements can make only a finite number of combinations, so this discussion has already been repeated an infinite number of times. Why should I waste my time again?


Yes, you're wasting your time if you compare my size to the size of an infinite universe. If you really want to waste your time that way, you don't want to use real numbers. On the real number line my size is exactly zero. You need to go into infinitesimals, which are out of place when you're looking at decimal notation, which is only for real numbers.


None of these, except 0.999… and 1 are well-known standard objects in this setting. You have to define what you mean.


I defined it:

    0.000...1 = 1/10^∞ = 1/∞


That's not a definition. Neither 10^∞ nor 1/∞ is defined in any standard system, so you'll have to define those too if you want to use them to define 0.000…1.



You're working with surreal numbers? This is not what people would expect unless it's explicitly stated. In addition, you're likely going to have a hard time explaining surreal numbers to someone who struggles to grasp that 0.999... = 1 in the ordinary reals.


Surreal numbers and infinitesimals are simpler to work with when you need to work with infinite series.

Here John Conway explains them: https://www.youtube.com/watch?v=1eAmxgINXrE


Nice notation. I will steal it.


What exactly is the value of the number that ends in a 1 but has an infinite number of 9s before it?


    0.999...1 = 1 - 1/∞ - 8/∞ = 1 - 9/∞


So what you're really trying to say is 0 = 0


You are 1 in infinite Universe, so you are 1/∞, so you are 0.


> It is hard to have the patience to chase down the consequences of their ill-fated definitions, though.

Of course it's hard because in day to day life, even for the vast majority of STEM practitioners, the nuance of the proof that 0.9999... is 1 is not of much utility.

Whenever one sees a 0.999[... to however many digits] one can safely assume it's less than one or perhaps more realistically "almost 1". To say 0.999... with the very specific detail that the 9's go on forever is actually a strange thing to say and outside of most people's experience.

There are simple enough proofs of this that normal folks who paid attention in high school can follow, but I think it has to be framed more as a clever brain-teaser than as a proof.


> Of course it's hard because in day to day life, even for the vast majority of STEM practitioners, the nuance of the proof that 0.9999... is 1 is not of much utility.

Oh absolutely. I'm not expecting STE(no M this time!) practitioners to necessarily be aware of why 0.999…=1 in their daily lives, but I do expect them to have encountered enough situations in their field of expertise where scraping the surface using shallow intuition and gut feeling led them wildly astray. I'm therefore surprised that they're willing to deny this basic fact to the face of mathematicians. The ones I've interacted with also don't happen to be the types that'll start arguing Anatomy 101 facts with a heart surgeon at a bar, but somehow arguing over basic calculus with mathematicians is fine.


> start arguing Anatomy 101 facts with a heart surgeon at a bar

Well, there's this:

wikipedia.org/wiki/Vaccine_hesitancy

wikipedia.org/wiki/Homeopathy


(STATEMENT OF PERSONAL IGNORANCE [SOPI]: Anyone who actually understands this stuff please correct my mistakes below. Thanks.)

In the real numbers, which are not always simple or intuitive, 0.99... = 1. That's true and I seem to understand the proof.

But the real numbers aren't the only system that might be sitting behind "0.99..." and "1" when I write those symbols down and talk intuitively to people in my family. The reals are just the system we're taught first.

I believe there are other systems (I think the surreals are an example) that work just as well for everyday purposes, but where (my understanding is that) there are numbers that differ from 1 by a value that approaches zero, yet those numbers are not equal to 1. (I've played with the surreals but only as a hobbyist.)

If you do calculus in these other numbers, I think physically meaningful problems will still yield the same answers. (For example Zeno's Paradoxes are still not an excuse for failure to attend school.) But it isn't a law of nature, I think, that all number systems that can hold 0.999... and 1 must make them equal.


The easiest way I know to explain it is fractions.

  1 / 3 = 0.33333....

  2 / 3 = 0.66666....
So what's 3 / 3?

Some people don't like that one. They might like this one better:

  1 / 11 = 0.0909090909...
What's 10 times that?

  10 * 0.0909090909... = 0.90909090...
So, let's do some addition and let the values zipper together because a nine will always line up with a zero:

  10 * 0.0909090909... + 0.0909090909... = 0.90909090... + 0.0909090909... = 0.9999999999...
However, 10 * 1 / 11 = 10 / 11. And 10 / 11 + 1 / 11 = 11 / 11. So 11 / 11 must be the same as 0.99999....

This works for any repeating fraction. You can do it with 1/7 and 6/7. You add the decimal representations of the numbers up and the value will be 0.99999...

Technically, it works for any repeating fraction in any base. This is great because a lot of fractions are only repeating fractions in certain bases. So if 0.1 in base 10 is a repeating decimal in base 2 (it is), then you can show that (in decimal) 0.1 + 9 * 0.1 will represent (in binary) 0.11111..., which is equal to 1.

The issue is that 1 / 11 + 10 / 11 (in decimal) must still equal 1 in ALL bases. Well, guess what? In Base 11 the decimal looks like:

  0.1 + 0.A = 1.0
And 1.0 in base 11 is 1.0 in any base.
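
The zipper is easy to check numerically, too. A small Python sketch using exact fractions, so nothing rounds:

    from fractions import Fraction

    # The exact sum, no decimals needed:
    print(Fraction(1, 11) + Fraction(10, 11))        # 1

    # One finite step of the zipper: truncate both expansions to 16 digits and add.
    a = Fraction(int("09" * 8), 10**16)              # 0.0909090909090909
    b = Fraction(int("90" * 8), 10**16)              # 0.9090909090909090
    print(a + b == Fraction(10**16 - 1, 10**16))     # True: 0.9999999999999999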


This is basically how we were taught to convert decimal fractions with periodic decimal expansions to regular fractions in school. You can even do that without lining up the zeroes and nines, just multiply by 10 to the power of the period length: if x = 0.090909..., then 100x = 9.090909... (we shift the decimal point by two positions), and since the stuff after the point is the same, after subtracting it cancels out: 100x - x = 9.0, and so 99x = 9, from which we obtain x = 1/11.
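
That school recipe is mechanical enough to code up (a Python sketch; from_repeating is my own name, and it only handles purely periodic decimals like 0.(09)):

    from fractions import Fraction

    # If x = 0.(p) with period length k, then 10**k * x - x = int(p),
    # so x = int(p) / (10**k - 1).
    def from_repeating(period):
        k = len(period)
        return Fraction(int(period), 10**k - 1)

    print(from_repeating("09"))   # 1/11
    print(from_repeating("3"))    # 1/3
    print(from_repeating("9"))    # 1 -- the shift-and-subtract view of 0.999... = 1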


That 11ths thing is actually really clever. I had never seen that argument before.


A number is just a number, it doesn't approach anything. A series can approach something, but a number can't. In any system where 0.99... is valid notation for a number, it doesn't approach anything.


Sure. Instead of "0.99..." please substitute lim_{n→∞} Σ_{k=1}^{n} 9·10^{-k}.

The point I'm making is that the "obvious truth" 0.99... = 1 that we're all talking about depends on the assumption that we're working in the real numbers.

I claim that the real numbers are not something intuitively obvious to every sufficiently intelligent person; instead they are kind of weird and technical. I go on to claim, though I'm more unsure of this, that the real numbers are not even the only way to make calculus work.


Well, a limit of a series is just a number and doesn't approach anything either. If the series approaches something, we say the limit exists and is equal to that thing.

Anyway, in the surreal numbers you could probably make up a notation where 0.999... actually denotes 1 - ε or something. But I daresay it might not be very useful because then how do you denote 1 - ε/2 or anything else.


We've defined the series we're talking about, and we've defined 0.99... as the limit of that series.

I don't know whether repeating decimals are useful for testing equality of surreal numbers.

All I'm trying to say is we're all talking about anything and everything except the definition "when are two real numbers equal?"

And then we're saying people who don't understand the consequences of that definition are kind of dummies ... while we continue to not actually say what the definition is.


Fair. Maybe the definition is that two real numbers are equal iff there is no real number between them. Which is nonintuitive if you're used to things like natural numbers which have successors. But if you teach people this way of thinking about the real numbers, then arguments like "it's the last number riiight before 1" stop working.


It's smart of you to bring up the surreal numbers: https://thatsmaths.com/2019/01/10/really-0-999999-is-equal-t...


Strictly speaking I brought up the surreal numbers.


Then I guess you're the smart one. My apologies.


Sorry, sorry. I've been trying to bring up nonstandard analysis, and repeatedly getting poked by people saying "what do you think 'calculus' is?" "have you ever heard of a limit?" and so forth. In the process I have apparently become even more ornery than usual.


As we have no idea who you are don’t you think it makes sense we try to figure it out? How one answers the question will vary based on background.


Oh, I figured it out, kind of. [I don't really know what I'm talking about either.]

  1.000...0 = 1
  1.000...05 = 1 + ε/2
  1.000...1 = 1 + ε
  0.999...8 = 1 - 2ε
  0.999...9 = 1 - ε
  0.999...98 = 1 - ε/5
  .
  .
  .


> 0.999...8 = 1 - 2ε

> 0.999...98 = 1 - ε/5

cough


Sure, why not? Different notations for different surreal numbers. But it's not actually a good notation; I wouldn't know how to write 1 + 10ε.


> I go on to claim, though I'm more unsure of this, that the real numbers are not even the only way to make calculus work.

That claim is certainly true. The proof is that calculus (Analysis) exists for the complex number system too. Although I don’t think that’s what you meant and I doubt the complex number system is “more intuitive.” Just out of curiosity, have you heard of real analysis? How do you define calculus?


Take a look at nonstandard calculus:

https://en.wikipedia.org/wiki/Nonstandard_calculus

It is based on the hyperreal numbers:

https://en.wikipedia.org/wiki/Hyperreal_number

Practically speaking, I don't think it buys you anything over traditional calculus/analysis. It's just pointing out that there are alternative approaches to formalizing calculus.


A Riemann integral in complex analysis uses the same definitions as one in real analysis. Derivatives ditto. In the thread we're comparing real analysis to nonstandard analysis.

I'm eager to be corrected if you can tell me something I said that's wrong. I'm not interested in gradually upping the ante with you until it's clear who really has more math background.


I didn’t really see anything wrong in the thread but you use words like “calculus” and I honestly don’t know exactly what you mean. Do you mean the plug and chug methods often taught in high school? Or do you mean the application of a set of theorems derived from the axioms of a given system of numbers? If you mean the former then perhaps we can clear up some misconceptions which are clarified by the latter.


Those number systems do exist, but I'm not sure it's right to say they work just as well for everyday purposes. They work only as long as you use them in a way that reduces to treating them as real numbers, either never computing an infinitesimal in the first place or calculating 23 + 6ε and saying "oh that's basically just 23".


Sure. And it's true, 0.99... is equal to 1.

All I'm saying is [SOPI below] it's all a little more technical than the junior high school proof. For example if 23+6\epsilon = 23, then how do I define 23 + 6\epsilon - 23? I can choose different approaches here, but "zero" is going to be pretty inconvenient when I go to do an integral.

[SOPI] Statement of Personal Ignorance. I don't quite know what I'm talking about. If you do know, please step in and help correct me.


What are you integrating over?


I'm taking the limit of any convergent infinite sum of terms each weighted by \epsilon. Although \epsilon itself equals zero, of course the limit of the sum isn't necessarily zero.

So if we go and define some other funky "infinitesimal" \epsilon != 0 in the surreals, we have to be careful. Apparently people have done that kind of thing successfully, but it took a long time after Cauchy for that to happen.


Even in the surreal numbers, where there do indeed exist numbers "infinitely close" to 1 but smaller than it, 0.9999.. would still be exactly 1. That's a quirk/feature of the decimal representation, not of the underlying theory of numbers.

In the surreal numbers there's a number called ε, a number infinitely close to 0 (but larger than it), so what you might think 0.9999.. represents would actually be written 1-ε, perhaps?

But there's another number, ε/2, that lies between 0 and ε; 1-ε/2 is even closer to 1 than 1-ε is. Indeed, there are infinitely many numbers infinitely close to 1! (and none of them is really represented by 0.999...)


    0.999... + 0.000...1 = 1
    0.000...1 = 1/∞
    0.999... = 1 - 1/∞
Is 1/∞ zero or not?


If you accept that

1/∞ = 0

Then you accept that

∞ * 0 = 1

But the definition of 0 is exactly that anything multiplied by it must be 0. So this cannot be true.

To take a more verbal route: you cannot take nothingness and repeat it. Repeating (or multiplying) nothingness (or 0) is fundamentally nonsense.

Programmer explanation: one cannot loop through `null` even once, let alone a large number of times.


No, you don't have to accept that. In my Analysis 2 course we worked a whole bunch with [0, ∞], i.e. the nonnegative real numbers together with infinity, and we defined 1/∞ = 0 and ∞ * 0 = 0. You lose some of your usual rules of arithmetic, but it gets a lot easier to talk about integrals.
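
A minimal sketch of that convention in Python (xmul is a hypothetical helper, not anything standard; it just shows how the measure-theory convention differs from ordinary float arithmetic):

    import math

    # The [0, inf] convention from measure theory: inf * 0 = 0.
    def xmul(a, b):
        if a == 0 or b == 0:
            return 0        # adopt the convention inf * 0 = 0
        return a * b

    print(xmul(math.inf, 0))    # 0, by the convention above
    print(math.inf * 0.0)       # nan, under ordinary float rules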


https://en.wikipedia.org/wiki/Surreal_number

Quote:

There are also representations like

{ 0, 1, 2, 3, … | } = ω

{ 0 | 1, 1/2, 1/4, 1/8, … } = ε

where ω is a transfinite number greater than all integers and ε is an infinitesimal greater than 0 but less than any positive real number. Moreover, the standard arithmetic operations (addition, subtraction, multiplication, and division) can be extended to these non-real numbers in a manner that turns the collection of surreal numbers into an ordered field, so that one can talk about 2ω or ω − 1 and so forth.


I'm not quite sure I understand what you're trying to tell me? The example I mentioned is not about surreal numbers.


I recommend watching this lecture about surreal numbers by John Conway: https://www.youtube.com/watch?v=1eAmxgINXrE .


There are some weird rules like that for floating point numbers too.
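
For instance, IEEE 754 floats (shown here via Python) split the difference: 1/∞ is defined to be 0, but ∞ * 0 is left undefined. A quick illustration:

    inf = float('inf')

    print(1.0 / inf)    # 0.0  -- so 1/inf = 0 does hold for floats
    print(inf * 0.0)    # nan  -- but inf * 0 is deliberately undefined
    print(inf - inf)    # nan  -- as is inf - inf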


What on earth is .000...1

If it terminates it’s not an infinite series. In any case, if you add any finite number to .99999... it’ll be equal to 1 + that number.


One way to introduce the idea that a number represented as decimal digits can have multiple representations is to talk about the numbers 1 and 1.0 being exactly the same. And that 1.00 is the same as 1. Just like 0, 0.0, and 0.00 are the same number. Most people would agree at this point.

Then keep stretching the number of zeroes to 0.000... - which, again, is exactly the same as 0.

From there, it is not a huge stretch to go from accepting that 0.000... is another way to write 0 to accepting that 0.999... is another way to write 1.


The real question is what do you get if you add:

0.999 ... infinite number of 9s ... 9

and

0.000 ... infinite number of 0s ... 1


> 0.999 ... infinite number of 9s ... 9

> 0.000 ... infinite number of 0s ... 1

Wouldn't these be ill-defined? You can't say "infinite number of 9s" then have the numerical representation terminate with a 9. If the decimal representation terminates, then by definition it isn't infinite.


Whether or not a specific construction is well defined depends on the system you are using. This representation is certainly well definable.

I think your objection here is in the same vein as those who object to the notion that 0.999... = 1.0

For most people, the concept of an infinite representation is not well defined.


> This representation is certainly well definable.

Do you mind defining it more formally for me then or pointing me to explanations/systems that would permit such a definition? I'm not really sure what such a formulation might look like, but then again I'm not nearly well-acquainted enough with mathematical topics past what is commonly taught. Does it involve the hyperreals, surreals, or one of the other systems beyond the "standard" reals as mentioned by other comments in the thread?

> I think your objection here is in the same vein as those who object to the notion that 0.999... = 1.0

> For most people, the concept of an infinite representation is not well defined.

I think (or at least hope) it's a bit more nuanced than that. I understand that some reals with infinite decimal representations can be well defined as an infinite series, and that defining 0.999... using such a series allows other manipulations to be done to complete the proof.

However, adding the concept of an "end" to said infinite series kind of breaks that understanding. The translation to an infinite sum no longer seems to hold, so I'm at a bit of a loss.

It's also somewhat counterintuitive to have an "end" to infinity, but as the rest of the thread shows intuition isn't always reliable for this kind of thing, especially for those who aren't particularly familiar with more detailed bits of math.


When I was a child I was convinced by a pretty simple conversation with my father:

Me: 0.9999... is not the same as 1

Him: Well if it's not the same is it more than 1 or less than 1?

Me: Less

Him: Okay then how much less is it?

At this point I started trying to do 1 - 0.999..., using the methods I'd been taught, and after a few iterations of "borrowing" the 1 I realized the answer was 0.000... which I was pretty convinced was equal to 0.
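
You can replay that borrowing with exact decimal arithmetic: with n nines the remainder is exactly 10^-n, shrinking toward the 0.000... I ended up with. A quick Python sketch:

    from decimal import Decimal, getcontext

    getcontext().prec = 50
    for n in (3, 10, 40):
        nines = Decimal("0." + "9" * n)
        print(n, Decimal(1) - nines)    # 0.001, 1E-10, 1E-40, ...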


Hehe... smart man.

Another one is that

1 / 3 * 3 = 1

<==> 0.333... * 3 = 1

<==> 0.999... = 1


"yes but 1/3 does not equal .333... it's just an approximation since there's no perfect way to represent 1/3"


If 1/3 doesn't equal .333... then how much do they differ by?


    (1/3)/∞


In base 3,

    1/10*10=1, 0.1*10=1
What's the problem?


I usually say "If and only if two numbers are different, then you can find a number between them". People often accept this axiom. Then, I offer them to find a number between 0.999... and 1.
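
A sketch of that axiom with exact rationals in Python (between is a hypothetical helper; this illustrates the idea rather than proving anything):

    from fractions import Fraction

    def between(a, b):
        # For any two distinct numbers, the midpoint lies strictly between them.
        return (a + b) / 2

    print(between(Fraction(1, 3), Fraction(2, 3)))    # 1/2

    # For 0.999... and 1 there is no midpoint to exhibit, because they
    # are the same real number: the "gap" has width zero.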


This works because you have defined what equality means.

We all wanna talk infinity because it sounds more exciting, but I think everybody gets "infinitely close to 1" pretty well intuitively. What they don't get is whether "infinitely close to 1" means "equal to 1". That could happen because these people are stupid.[note] But it could also happen if nobody has defined equality.

[note]for example even highly educated people maybe don't listen, which is functionally a lot like being stupid.


is there any difference between a black hole and nothing? (somewhat joking but I was thinking of a physical analogy of the limit approaching zero)


A black hole has mass. Nothing has no mass.

(also a somewhat joking answer)


Yes, there's a difference.

Firstly, though, there are multiple different types of black hole, from the theoretical to the astrophysical. We must narrow your question down to have any hope of a good answer.

The simplest theoretical black hole, the Schwarzschild black hole, has one variable -- the central mass -- which must be positive.

If we set the central mass in a Schwarzschild spacetime to zero, then we have Minkowski spacetime: no curvature, no horizon, no black hole.

The Schwarzschild spacetime is completely empty except for the central mass, which is constant and located at an infinitesimally small point at all times. The Minkowski spacetime is completely empty everywhere and at all times.

The symmetries of Schwarzschild and Minkowski spacetime are different, and if one were to probe the spacetimes in question with a Synge curvature detector [1], we would quickly discover which we were probing if our probes happened to be placed close to the central mass, and eventually if they were placed far from the central mass.

If one placed the probes infinitely far from the central mass, it would take an infinitely long time to distinguish the presence of the central mass (which makes spacetime non-Minkowski); but these spacetimes are eternal anyway, so that's OK. So that's almost a "yes" to there being a theoretical black hole analogy between (1-) 0.999... and (1-) 1.

I would not call this a physical analogy since neither Minkowski spacetime nor Schwarzschild spacetime is at all physical. Nature is full of stress-energy (gas, dust, ...), any of which breaks the vacuum condition of these spacetimes, there seem to be a lot of astrophysical black holes at the centres of galaxies and individual/binary stars that have become black holes, and even a two-black-hole universe is markedly different than a Schwarzschild spacetime. Additionally, these astrophysical black holes are not eternal, unlike Schwarzschild. In particular, the stellar mass ones were once stars, and the galaxy-centre ones at least had less mass in the past. These last conditions alone are substantial deviations from Schwarzschild that are even more obviously not Minkowski (e.g. if you put a probe finitely but sufficiently far away, you could see an image of the radiant precursor star rather than the black hole!).

Finally, in our physical galaxy the answer to your question is a big "yes!". The observed orbits of these stars [2] would be noticeably different if the central mass in the Milky Way's central parsec were anything but a black hole, and would be even more different if that central mass were not there at all.

--

[1] Synge, J.L., _Gravitation. The General Theory_, ch. XI §8, "A five-point curvature detector".

[2] http://www.astro.ucla.edu/~ghezgroup/gc/animations.html and http://www.astro.ucla.edu/~ghezgroup/gc/blackhole.html


Why not 0.00...1?


This is not an infinite decimal. The digit 1 is somewhere out there.


But this is not a compelling argument to somebody in this situation. While correct, it feels identical to saying "it just is".


The whole numerical representation scheme really is just a man-made system. If you dig deep enough the veneer disappears. This is especially noticeable when you start to see things like numbers that are finite in decimal but have infinitely repeating patterns in binary.


Though you could treat it as an infinite series in a similar way:

0.9 -> 0.99 -> 0.999 -> ... -> ?

0.1 -> 0.01 -> 0.001 -> ... -> ?


The first is an infinite series:

  9/10 + 9/100 + 9/1000 + ... + 9/10^n + ...
The second is not?!


I mean you could if you want to:

1/10 + -9/100 + -9/1000 + -9/10000 + ...

But I just meant them as series, not necessarily as sums.


Or

  1 - 9/10 - 9/100 - ...
or

  1 - ( 9/10 + 9/100 + ... )
So it becomes circular:

  0.00...1 = 1 - 0.999...

(In math a series is a sum:

https://en.m.wikipedia.org/wiki/Series_(mathematics) )


Whoops, you're totally right; I think I should've said "sequence".


Because unlike 0.99..., 0.00...1 is not a Real number.

The decimal representation of a Real number has to be indexed by Natural numbers, i.e. every decimal digit[n] has a well-defined index n which is a Natural number. Infinity is not a Natural number, so 0.00...1 is not a Real number either.
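
To make that concrete, here's a hypothetical sketch that models an expansion as a function digit(n) on natural-number positions:

    # Model a decimal expansion as digit(n) for positions n = 1, 2, 3, ...
    def nines(n):
        return 9    # 0.999...: a 9 at every natural-number position

    def zeros(n):
        return 0    # 0.000...: a 0 at every natural-number position

    # "0.00...1" would need some natural number n with digit(n) == 1,
    # but every natural position already holds a 0. There is no
    # "position infinity" at which to put the trailing 1.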


Then the question is, is 0.00....1 equal to zero? We use the definition of equality above and we say yes.

EDIT: The number above seems well defined. It's lim n->inf (10^-n). That's zero.


It is not.

The decimal representation of a Real number has to be indexed by Natural numbers, i.e. every decimal digit[n] has a well-defined index n which is a Natural number. Infinity is not a Natural number, so 0.00...1 is not a Real number either.


This is usually described mathematically by saying that the representation of a decimal has to be countable, whereas the number 0.000...1 is not a countable representation.

I will say though that if your explanation of 0.999... = 1.0 requires that you explain the distinction between countable and uncountable infinities, that's a big ask for most lay people.


Could you please explain a little more what "not a countable representation" means?


A countable set is one where you can reach any element in a finite number of steps. See:

https://en.wikipedia.org/wiki/Countable_set


If you're saying we never get from the initial 0 to the trailing 1 in a finite number of steps, that's true and that's what DavidVoid is saying. But I haven't made the list of digits uncountably large by adding one element. Instead I put that element in a transfinite position in the ordering and I have to watch out for weird consequences.

I'm no expert, but countability doesn't depend on how the set is ordered. It depends on whether the elements can be placed in 1:1 correspondence with the integers. 1,0,0,... has a countable number of elements, and so does 0,0,0,...,1. They can be put in 1:1 correspondence with each other. This definition of countability is described in your link.

I meant to ask: does "countable representation" have some kind of detailed definition that I can look at?


It's not the set of digits you use which is uncountable, it's the representation itself.


I think it's just another way to word DavidVoid's explanation a few comments up.


Ok. (a) I think you're saying that 0.00 ... 1 is not a real number. (b) Do we agree that lim n->inf 10^-n is a real number? (I.e. zero?) (c) Then you're saying that limit in "(b)" is not a reasonable definition of the string "0.00 ... 1". Is that correct?


That is correct.

lim n->inf 10^-n is exactly 0; it is not 0.00...1.


Since we have no common definition of what 0.00...1 might possibly mean, let's say we agree.


0.0000... is the repeating decimal representation of zero.


What is 0.00...1 times 34?


0.00...34


Is 0.00...1 times 34 equal to 0.00..1 times 3.4?


I think the answer to that question depends on the axioms you are using.


That's a great approach


I didn't grok infinity until I started thinking in terms of verbs rather than nouns. As a static number, the concept of infinity makes no sense; but once reimagined as a process (start counting up from 1, and never stop), all apparent paradoxes disappear.

This is the inverse problem: it could just as easily be reframed as 0.000...0001 = 0. Defined as static nouns (does such a thing exist in nature?), it's seemingly paradoxical, and fascinatingly debatable in a "is a hot dog a sandwich" sort of way. But reframe it as a process (or as code), and all confusion disappears: for how many loops would you like to proceed? If you never stop, 0.99999... clearly approaches 1, without ever reaching it, and asking if they're the "same" is as academic as asking if the Ship of Theseus is the same ship, or if an electron is the same entity from one picosecond to the next.


> it could just as easily be reframed as 0.000...0001 = 0

But it can't be, because there's nothing after "0.000..."; that ... goes on infinitely. It's literally "0s forever, never stopping". It's not a process of "keep adding 0s"; it's the completed result of 0s that never stop. It's not a process, it is a noun.


My argument is that "noun" is a purely human abstraction, and that phenomena that act noun-like in nature are at best snapshots of iterative processes. Within the bounds of the noun abstraction, sure, I'll cede that point.

But if one eschews that abstraction and looks at it purely as a process (I want to render 1/3 in decimal notation, then multiply that decimal notation by 3), there is always that niggling 0.000...1 remainder at every snapshot. The "never stopping" bit is what smuggles verbiness into the "0.999..." noun, while simultaneously pretending it's a static value.


Regardless of whether or not they are human abstractions, a process (making cheese) and a noun (cheese) are two distinct things.


0.000...1 can be written as 1/inf, which makes sense in surreal number math.


You could just as easily say that 0.000…54234 can be written as 1/∞. Surreal Numbers is a bit of a detour in this case. The premise of the idea that 1 - 0.999… could be written 0.000…1 is the mistaken concept that there is some point "after an infinite number of steps" where the expansion of 0.999… stops and you can leave the remaining 1. The expansion never stops and there is no final remainder. The result is 0.000…, which is more typically written as 0. Plain zero, not an infinitesimal.

Personally I find the following algebraic proof to be the most approachable:

          x = 0.999…
          x = 0.9 + 0.0999…
          x = 0.9 + (0.999… ÷ 10)
          x = 0.9 + (x ÷ 10)
    x - 0.9 = x ÷ 10
    10x - 9 = x
     9x - 9 = 0
         9x = 9
          x = 1
     0.999… = 1
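
If you want a machine to check the self-similarity step, here's one way, a sketch using sympy (assuming you're happy to treat x = 0.9 + x/10 as the content of the proof):

    from sympy import Eq, Rational, solve, symbols

    x = symbols('x')
    # The key step of the proof above: x = 0.9 + x/10
    print(solve(Eq(x, Rational(9, 10) + x / 10), x))    # [1]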


    x = 0.999...
    x = 9/10 + 9/10^2 + 9/10^3 ... 9/10^inf
    x = 9/10 + (9/10 + 9/10^2 + 9/10^3 ... 9/10^(inf-1))/10
    x = 9/10 + (x - 9/10^inf)/10
    x - 9/10 = (x - 9/10^inf)/10
    10x - 9 = x - 9/10^inf
    9x - 9 + 9/10^inf = 0
    9x = 9 - 9/10^inf
    x = 1 - 1/10^inf
    x = 1 - 0.000...1


    x = 9/10 + 9/10^2 + 9/10^3 ... 9/10^inf
    x = 9/10 + (9/10 + 9/10^2 + 9/10^3 ... 9/10^(inf-1))/10
This is exactly the issue I was referring to. You're assuming the sequence stops "at infinity" but infinity is not a concrete number of steps, it's the absence of any end condition. Subtracting one step from "no end condition" is nonsense. The sequence (0.999… - 0.9)×10 does not end earlier than 0.999…; these are exactly the same sequence, repeating 9s without end. The difference between them is zero in every digit, with no trailing 1.


The sequence doesn't end earlier, but it does start later by one element, so they are not exactly the same sequences. One infinite sequence has one more element than the other infinite sequence, so the difference is 1/inf.


No, they start at the same element (they're both 0.999… and thus start with 9/10) and have the same (infinite) "number" of elements. If you lined them up digit by digit there is never a point where one digit is a 9 and the other is a 0.


Incidentally, this confusion is part of the reason why actual infinite sequences are written with a trailing ellipsis (9/10 + 9/10² + 9/10³ + …) and not a final term involving infinity. There is no final term—not even 9÷10^∞. So the correct way to write the separation of the leading term in your formula is:

    x = 9÷10 + 9÷10² + 9÷10³ + …
    x = 9÷10 + (9÷10 + 9÷10² + …) ÷ 10
Note that the number of elements in the sequence is the same no matter how many leading terms you write, so long as the pattern doesn't change. The notation { 1, 3, 5, 7, 9, … } and { 1, 3, 5, … } both refer to exactly the same set; the first notation is merely a bit more verbose. Similarly, the parenthesized portion of the second formula above is exactly equal to x, despite being written with two explicit leading terms rather than three.


I find that "infinity as an endless process" concept intuitively very helpful as well. However, reading Gödel, Escher, Bach[1] showed me that there's another, more static logical interpretation of infinity which also comes in handy.

In an infinite process, you can always take "one more step" to create the item after that. Let's assume there exists a "final" mathematical object that comes after every finite item in the generation process (i.e. it is higher than any item in the list, or smaller, or has happened after all of them)... This object doesn't really belong to the infinite generative sequence; it's an item outside all of them, and can't be reached by completing the sequence. It merely exists outside the process and happens to have the property of "dominating" all the items in it.

You can assume its existence in the same way you assume the existence of a number which is the square root of -1, or the way you define triangles whose angles add up to more or less than 180 degrees. If you do that, this object "at the infinite" can be formally defined and treated axiomatically to find out what mathematical properties it possesses.

[1] https://en.wikipedia.org/wiki/G%C3%B6del,_Escher,_Bach


This thought can be made much more concrete when talking about sets. Clearly we can understand a collection of things as a set. Clearly numbers are things, so we can talk about the set of all numbers. But how many elements are in this set? A clever answer could be: It has as many elements as there are natural numbers! As subsets are also a thing, we could ask next how many subsets the set of all numbers has. A clever, but very problematic answer could be "as many as there are natural numbers!"


That’s actually not true, the power set of the naturals is uncountable.


As I said, it would be a very problematic answer. But only once you properly define when two non-finite sets have the same size can you lead this to a contradiction. Infinity can be understood intuitively; its extension to the cardinal numbers cannot.


> As a static number, the concept of infinity makes no sense; but once reimagined as a process

Super insightful. That's the key right there.

The same concept can also be applied to the physical world. Things are not static, they are in constant flux, everything is a process in motion.


Yes, although for me this conception of infinity as a process also captures why there are probably no actual infinite things in the universe, only in silly games with numbers.


Could you elaborate? Why can't an infinite process be an actual thing in the universe? It's not like we know the start and end dates of the universe...


I wonder if this is related to "intuitionist" math. This is an alternative formulation of math which doesn't have the law of excluded middle, recently discussed on Hacker News relating to this physics research: https://www.quantamagazine.org/does-time-really-flow-new-clu...


If you want to work with the real numbers intuitionistically (or constructively), you quickly find out that infinite decimal expansions are not what you want.

In classical mathematics all the usual definitions of real numbers (decimal expansions, Cauchy sequences and Dedekind cuts) are equivalent. If you overthrow the Law of Excluded Middle, these are all different.

Infinite decimal expansions are bad intuitionistically for several reasons. The first one which comes to mind is that you cannot add numbers together. Imagine your numbers started 0.33333 and 0.66666. OK, so far it would seem that the sum would start 0.99999, but somewhere down the line the first number could contain two 4s, making a 1 carry all the way up and leaving one behind, so that it should in reality be 1.00000000001…

On the other hand a 2 could also show up later, making it 0.999999998. Thus, you cannot decide whether the first decimals should be 1.00 or 0.99 without looking at infinitely many decimals. And the fact that 0.999… = 1.000 will not help you out, since 1.00000000001 ≠ 0.999999998.

Being able to define addition on decimal expansions is equivalent for constructivists to solving the halting problem. It cannot be done.

It turns out Cauchy sequences are better behaved, and (with a bit of computational improvement) you can make a lot of things work out. See Bishop's book, Foundations of Constructive Analysis, for details.
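
Here's a concrete version of that carry problem in Python, using exact rationals; the point is that no finite prefix of the inputs determines even the first digit of the sum:

    from fractions import Fraction

    # Two continuations that agree with 0.33333 for five digits:
    a_low  = Fraction(333332, 10**6)    # ...then a 2
    a_high = Fraction(333335, 10**6)    # ...then a 5
    b      = Fraction(666666, 10**6)

    print(float(a_low  + b))    # 0.999998 -> the sum starts "0.9..."
    print(float(a_high + b))    # 1.000001 -> the sum starts "1.0..."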


Thank you for the reference! Your explanation makes sense to me. I'm curious if students that struggle with learning decimal expansions and limits are just subconsciously rejecting the non-constructive foundations of the math they're learning. Probably not, but it's fascinating regardless.


I wonder if there's anything I can do with my children to prevent them from being bound by this mental limitation?

I would try to explain to them that numbers are a framework for us to understand both the observable universe and abstract ideas, depending on what we're using them for.

Like you said, it's hard for people to understand that numbers have multiple representations and to grasp the implications of those representations. I think that if you can communicate that different representations can have the same meaning, accepting those representations when they come across them may be easier.

Or, if they're experienced enough with math, I think going through Euler's identity in addition to the link could help.

https://en.wikipedia.org/wiki/Euler%27s_identity


I feel like most people are not answering this, they are giving the proof in a different way.

Answering how to teach a child not to be bound to 100%s is really hard. I personally would just let them explore on their own; teaching a person that 0.99999... = 1 results in the same thing as teaching them that 0.99999... != 1. You need to teach them that all sciences and maths are changing constantly, that what might be a fact today could change tomorrow. You need to teach them to be open to accepting new information as we progress, while being hesitant enough not to succumb to false/fake information.

That's a very hard lesson to learn and an even harder one to practice. But one that I think a lot of people need to learn.


I'm willing to believe, but every proof on that page I read came down to, basically: it might as well be 1, so it is 1. The way I see it, it comes down to accuracy, like any of our measurements, and falls under rounding error. There's no way we can ever actually measure the infinitely small gap between .999... and 1, so effectively they're the same. As far as math and anything practical and even theoretical is concerned, they're the same, but... conceptually, in my brain, it just feels that little bit smaller. I know I'm wrong for all intents and purposes, but I dunno.


I find the algebraic way convinces most people:

x = 0.9999..

10x = 9.9999...

10x - x = 9

9x = 9

x = 1
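
One caveat worth making explicit: the trick is exact only for the full infinite expansion. With any finite run of 9s there's a leftover, as this quick sketch with exact rationals shows:

    from fractions import Fraction

    for n in (1, 5, 20):
        x = Fraction(10**n - 1, 10**n)    # 0.99...9 with n nines
        print(n, 10 * x - x)              # 9 - 9/10**n, approaching 9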


I was prepared to blow my then 6th or 7th grade daughter's mind with this algebraic proof. I started by asking if 0.999... = 1, to which she said "no." I rephrased it and said it is equal, do you know why? She thought for a moment and said "1/9 is 0.111..., so 9/9 is 0.999... and 9/9 is 1." And I had to admit she had a far better solution than I did.


Because I wondered, it’s not a trick:

x = 0.444444...

10x = 4.444444...

10x - x = 4

9x = 4

x = 4/9 = 0.444444...


All the single digit repeating decimals are x/9.

0.111... == 1/9

0.222... == 2/9

...

0.888... == 8/9

0.999... == 9/9 :)
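
You can generate these by mimicking long division; decimal_digits below is a hypothetical helper, sketched for illustration:

    from fractions import Fraction

    def decimal_digits(frac, count):
        # First `count` digits after the decimal point, by long division.
        rem = frac.numerator % frac.denominator
        digits = []
        for _ in range(count):
            rem *= 10
            digits.append(rem // frac.denominator)
            rem %= frac.denominator
        return digits

    for k in range(1, 10):
        print(k, decimal_digits(Fraction(k, 9), 6))
    # 1 [1, 1, 1, 1, 1, 1] ... 8 [8, 8, 8, 8, 8, 8], but
    # 9 [0, 0, 0, 0, 0, 0]: 9/9 reduces to 1, and long division
    # always produces the terminating representation, never 0.999...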


That's how I was taught in my Algebra class


black magic! I wonder if there are any programming languages that are able to handle this properly?




There are programming languages with rational number types, but none that I know of that represent numbers as repeating decimals.
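
For example, Python's fractions module does exact rational arithmetic, so the 1/3 argument goes through without any representation issue; a quick demonstration:

    from fractions import Fraction

    third = Fraction(1, 3)
    print(third + third + third == 1)    # True, exactly
    print(1 / 3 * 3 == 1.0)              # also True for binary floats here,
                                         # though only because the rounding
                                         # happens to cancel out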


There isn’t a proof because it’s actually kind of arbitrary that 0.999... = 1. Fundamentally, this is true because we chose it to be true.

Now, there are good reasons we chose it to be true, and that’s what people usually use as proofs. If it’s not true then a bunch of mathematical expressions become more inconvenient. But there is no reason as such why 0.999... could not have been defined as something that was always < 1.

Fundamentally, 0.999... has no intrinsic meaning, and its value depends on the meaning we decide to give this representation.


You are downvoted, but you are actually correct. 0.999... != 1 can be true in nonstandard analysis. So if using standard analysis over nonstandard analysis is a convention, then ultimately 0.999... = 1 is a convention too.

(The Wikipedia article even reproduces that argument)


.999... and 1 exist on a continuous line. If they are different numbers, name a number between them.


The Wikipedia article says how: you need a definition of real numbers that includes nonzero infinitesimals (IOW, one that does not satisfy the Archimedean property).

So let there be an ω with 0.999... = 1 - 1/ω. Then a number between 0.999... and 1 would be 1 - 0.5/ω.


How do you know that 1/ω is not 0?


{ 0.999... | 1 } using surreal number notation.


? Just because there is nothing between two numbers does not mean the two numbers are equivalent. What nonsense is this


Either two numbers are equal, or they're not equal. (Unless we're calling into question the law of excluded middle, but must we?)


Without using 0.9999..., please name any two real numbers that don't have a number in between.


That is true for the reals


I've had the same experience, even debating this topic with engineers. I think there are actually two hang-ups.

1. People have had it drilled into their heads that humans can't comprehend infinity. It was taken for granted by philosophers that an "infinite regression" is a logical fallacy (e.g., as used in a proof by Thomas Aquinas), and that tricks such as infinity and the infinitesimal were not rigorous. Mathematical infinity has been a settled matter for all practical purposes since the early 20th century, AFAIK.

2. Related to the above, most people also believe that there is always a gap in any knowledge, and something hiding in that gap. Thus it's perfectly natural to believe that there's something hiding between 0.999... and 1, that we just haven't found yet. Knowing for certain that there is nothing between 0.999... and 1 is regarded as a kind of arrogance.

I think the way to approach this with children is to teach math as an abstract topic, that's not necessarily rooted in the objects of everyday life. For instance there's no physics experiment that can test the necessity of any math being carried beyond roughly the 15th decimal place. Yet we enjoy exploring it anyway.


> It was taken for granted by philosophers, that an "infinite regression" is a logical fallacy (e.g., used in a proof by Thomas Aquinas)...

Aquinas specifically objected to the notion of an essentially ordered infinite causal series. He had no objection to an accidentally ordered infinite causal series or other kinds of infinite series.

This distinction is extremely important for the purposes of understanding his proofs of God's existence, and people often unfairly reject his arguments because they conflate the two.

More reading here: http://edwardfeser.blogspot.com/2010/08/edwards-on-infinite-...


Couldn't you formulate a problem requiring extreme decimal-place accuracy, based on a physical process that's repeated in ways that compound small errors into bigger ones?


That would certainly be an interesting study, I just have never been able to come up with any concrete idea on my own.


I too was unable to convince my family but based on your comment I just thought of a new (to me) example I might have tried. It leverages the grasp of fractions that you mentioned people already have.

Everyone knows that 1/3 = .333... and it can be pretty easily shown that 1/3 + 1/3 = 2/3 = .666...

So I would ask them that since .333... + .333... = .666... does it make sense that .333... + .333... + .333... = .999...? And since .333... = 1/3 isn't .333... + .333... + .333... = .999... the same as saying 1/3 + 1/3 + 1/3 = 3/3? And since 3/3 = 1 and 3/3 = .999... it makes sense that 1 = 3/3 = .999...

This might work on your kids but in my experience recalcitrant people will either act bored as if they don't care or will try to claim that somehow they understood it all along.


What's interesting is that people pretty quickly become comfortable with the idea that 1/3 = 0.333…

So using that as a foothold, we can express 1/3 + 1/3 + 1/3 as 0.333… + 0.333… + 0.333… and it should be pretty easy to digest. At once we can see that in this little zone we've defined, 1 and 0.999… mean the same thing.

Not a rigorous proof, and one or two people will probably bring up whataboutisms like "that's just because the calculator can't do stuff!" but it should at least be proof of comfort for most people.


This is a really good point. Maybe the problem is how we define equality. What's the test for when two numbers are equal?

People accept that 1/3 = 0.333333... The same people don't always seem to accept that 3*0.33333... = 1. Well, how are we defining "equals"? If we can give that definition in black and white, I think that may help.


> What's the test for when two numbers are equal?

I put this in another comment but: For the reals: eliminate a < b and a > b then conclude a = b.


If 3*0.33333... = 1, then 3*0.33333... != 0.9999..., then 0.9999... != 1


Maybe try expressing it in the form of money? Let's say a gallon of gas is 99 cents with infinitely repeating 9's. 0.99999999999 cents. You're still going to end up paying a dollar a gallon for it because eventually it's going to get rounded off. No gas store operator is going to try to cut a penny for you and give you a fraction of a penny. They could argue that the fraction of a penny becomes infinitely small and that giving you a shaving off the side of a copper penny would be infinitely too large.

Now don't mind me while I open up a store where every price tag ends in 0.99...repeating and have a poor college student at the checkout lane with a penny shaver to calm down any rowdy customers he or she can't explain away.


What set the equality in stone for me was learning about limits and series, because 0.999... is essentially a funny way to represent a series.

Before that, despite accepting the proofs that were given to me, there was always something in the back of my brain telling me "mmmm there is something wrong in that". The only thing that came close was a reasoning like the following:

1 divided by 3 = 1/3 = 0.333..., but then 3 * 0.333... = 0.999... so 1 = 0.999...

This comment in the wikipedia page nails it down:

"The lower primate in us still resists, saying: .999~ doesn't really represent a number, then, but a process. To find a number we have to halt the process, at which point the .999~ = 1 thing falls apart. Nonsense."


You must go up to something like limits to make the "..." meaningful.


"The intelligibility of the continuum has been found–many times over–to require that the domain of real numbers be enlarged to include infinitesimals. This enlarged domain may be styled the domain of continuum numbers. It will now be evident that .9999... does not equal 1 but falls infinitesimally short of it. I think that .9999... should indeed be admitted as a number ... though not as a real number."


Is this a mental limitation, or is it a simple defense mechanism against diving into rabbit-holes of thought with no end and no real productivity? It seems much easier to come to the conclusion that .99999... and 1 are different numbers, is it really worth the effort to consider otherwise?

We create these abstractions to simplify our thought, and analyzing or over-analyzing these simplifications can have the opposite effect.


Infinity will forever be an abstraction, as there are no infinite physical actions you can carry out.

Which means that what infinite actions actually constitute is purely a matter of definition, as you cannot verify it experimentally. And that's why, under some definitions, it makes sense to say 1+2+3+4+5+... = -1/12


This is essentially just a matter of limits, without which the world wouldn't make any sense.

So you must explain that if you move your hand closer to an object, technically you are halving the distance infinitely many times, but if 0.999... != 1 then your hand would never touch anything.


> We cannot imagine infinity

Now try imagining that some infinities are bigger than others: https://en.wikipedia.org/wiki/Aleph_number


This is one of my pet peeves in maths.

Although I do understand the concepts presented, the notion of "greater" makes no sense when applied to something without boundaries.

Yet it's used all the time.


Things quickly fall apart if you rely on your intuition.

Let's imagine all the odd numbers: 1, 3, 5, etc. Now imagine all the even numbers: 2, 4, 6, etc.

Can we agree that there is an "infinite" amount of numbers in each of those groups?

Now imagine all the odd numbers and even numbers together. That's also infinite right?

Would you say there are more "all numbers" than just "all odd numbers", or would you say that there are an equal number of them? (hint: the answer is equal).
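
The "equal" answer rests on pairing the sets off against each other; a sketch of the pairing for the first few terms:

    # Pair the n-th counting number with the n-th odd number: n <-> 2n - 1.
    # Every number gets a partner and no odd number is left out, which is
    # what "same size" means for infinite sets.
    for n in range(1, 6):
        print(n, "<->", 2 * n - 1)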


Well, the answer is "equal" because of how you define equal size for infinite sets (one-to-one and onto). It's a very useful definition, but it's hardly the only possible definition.


Personally, I think it makes perfect sense.

Take two sets A and B. If we can assign every element in A to a different one in B, we say that |A|≤|B|.

Makes perfect sense for normal, finite sets, right? As it happens, this definition extends to infinite sets as well.


Right, but many things make sense for finite sets that don't make sense for infinite sets. Just because you can extend that definition doesn't mean that it's "true" for infinite sets.


People mistakenly believe that decimal numbers don't have multiple representations - which, in a way is correct.

It is correct if you take the limit; people usually do not.


Not only can numbers have more than one representation, but they can also have zero!

Looking at you, irrational numbers.


Irrationals have a unique decimal representation in the mathematical sense: given a definition such as $x^2 = 2$, any digit of the decimal expansion of $x$ can be determined.


> Irrationals have a unique decimal representation in the mathematical sense

Not all of them do. Actually, so many don't that mathematically, the number of them that do is zero.

Sure, there are exceptions like sqrt(2) and sqrt(3), but there is an uncountable infinity of irrationals between these two numbers that just don't have a representation.


For irrationals, the problem with an infinite sequence of the digit 9 does not occur. So for any given irrational (given its definition), any digit is determined by that definition, because any irrational number can be approximated with arbitrary precision by a rational number (which has a unique decimal expansion). If you think otherwise, where is the problem?

This is not affected by the fact that the irrationals cannot be counted. Given the irrational, there exists a rational close enough to it which has the same decimal expansion for the first n digits, for any n.


just tell them to write out the complete infinite sequence of 9s

when they are done they will have understood.

chuckle


Yep. Agree 100%. It is like the blue dress.

I think the problem is the repeating notation. Infinite things are non-intuitive and should be presented differently.

Even here on HN you still see people confused about "convergence" and "identity". 0.999... doesn't CONVERGE, it literally is 1.

I suspect this persists even with students who have had second-year college calculus, which covers convergent series and sums.


Fine. Define 0.999... as the limit of the series sum(n=1 ... N)(10^-n), as N-> infinity. This is standard high school calculus. "Number" and "series" and "limit" and "convergence" don't all mean the same thing. However this number is defined as the limit of a convergent series. So the question really is meaningful. (One clue that this question is meaningful is the amount of space introductory calculus textbooks use to address it.)

Because I can still ask, in black and white, what law of "equality" do I use to establish that my limit equals 1? (It does, if I import the definition of "equality" from the real numbers. That's what they do in calculus class. )


Thanks to a commenter who pointed out that my sum above should be

sum(n=1 ... N)(9*10^-n).

I can't, uh ... fully endorse that comment, which is not entirely accurate and doesn't answer my question. But I sure did miss the '9'.


You asked for a "law" of equality (whatever that is) and provided an answer that proves it converges to 1. What more could you possibly want?


We seem to agree on this: you don't think there's any need for a way to determine whether two real numbers are equal.

For ordinary math, though, using some criterion for equality (for example x>=y and y>=x) is basic and not controversial. So it seems unconvincing (to me) when you seem to imply the opposite.


There is an easy way to prove two numbers are equal. Typically in the reals there are three possibilities: a > b, a < b, a = b. If you eliminate a > b and a < b then you are left to conclude a = b. And this is exactly what is done in Apostol's Calculus Vol 1 (IMHO the greatest calc book ever written) chapter 1 when he proves that the area under n^2 is EXACTLY (n^3)/3, with no "calculus". You would be shocked how far into calculus the author gets with just that theorem. Can't recommend that book enough.


Thanks, I'll take a look. I like that kind of thing very much.

I use applied math. I haven't taken a class in real analysis. But it's fun how often grinding out the solution to a "real world," practical PDE turns out not to actually be the nicest (simplest and/or clearest and/or sufficiently insight-producing) way to understand the (hopefully) corresponding physical problem in the lab.

Stripping off the "calculus" and replacing it by limits sometimes seems to help highlight alternate perspectives that the magic "integrals" and "derivatives" kind of conceal.

Even when it's not more effective, it's definitely more fun.


> you don't think there's any need for a way to determine whether two real numbers are equal.

You are putting words in my mouth.

And you clearly do not understand the answer.

I guess I'm not very good at ELI5 because I very clearly answered your question with your own proposal.

Maybe when you get to college a professor can do a better job explaining it to you (if you actually make it to college, because you're going to struggle very hard if that's how you think when an answer is spoon-fed to you).


I'm not sure if you are asking for an answer or a rhetorical question? I'll assume the former.

Your terms are a bit jumbled, so let's keep it simple: you're asking how to prove that an infinite sum converges and what its value is. Convergence proofs require analytic thought, meaning there may not be an immediate look-up. You need to convert the problem into the known corpus of convergent sums or use one of many tests (bounds test, integral test, etc.) to show it converges analytically. Which you only learn through experience and memorization (unless you want to re-prove hundreds of series... maybe you do!). Fortunately this one is easily re-written as a known convergent sum.

First, you missed a factor in your sum (the 9); re-written here:

sum(n=1..inf) 9 * 10^-n

Step 1: you pull out the 9 and it becomes 1/10+1/100+1/1000...

Step 2: Then we shift to n=0 by adding and subtracting 1/10^0, so that the series is in the form n=0..inf:

1/10^0 + 1/10 + 1/100 + 1/1000 + ... + 1/10^n + ... - 1/10^0

Step 3: Now we've got ourselves a geometric series of just 1/10^n. Wikipedia does a great job explaining the sum convergence for a GS from n=0...inf: https://en.wikipedia.org/wiki/Geometric_series

Step 4: compute the geometric convergence. As n goes to infinity, r^n goes to 0 for |r| < 1, so

(1-r^n)/(1-r) = (1-(1/10)^n)/(1-1/10) -> 1/(1-1/10) = 10/9

So we have 10/9 as the solution to Sum[n=0...inf](1/10^n)

Step 5: the remaining arithmetic

Now subtract our 1/10^0, leaving 10/9 - 1 = 1/9, and then multiply by 9 to get 1.
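
If you'd rather see it numerically, the partial sums close in on 1 with an exact gap of 1/10^N; a quick check with exact rationals:

    from fractions import Fraction

    for N in (1, 5, 10, 30):
        s = sum(Fraction(9, 10**n) for n in range(1, N + 1))
        print(N, 1 - s)    # exactly 1/10**N, shrinking toward 0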


> There is no proof that will ever satisfy a person dead-set against this.

Yes there is. There is a proof that uses only fundamentals of first year university analysis. When you see

0.99999....

this can be written as an infinite sum

\sum_{i=0}^\infty 0.9 x 10^{-i}.

Truncate the sum and set

S_n = \sum_{i=0}^n 0.9 x 10^{-i}

and now simply use the rules of geometric progressions to get the limit out:

0.1 S_n = \sum_{i=0}^n 0.9 x 10^{-i-1}

S_n - 0.1 S_n = 0.9 - 0.9 x 10^{-n-1}

0.9 S_n = 0.9 ( 1 - 0.1^{n+1} )

S_n = 1 - 0.1^{n+1}

Now let n tend to infinity to find the limit, which is 1.

You don't need to imagine infinity to do any of this.
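
The final limit can also be checked symbolically; a sketch with sympy:

    from sympy import Rational, limit, oo, symbols

    n = symbols('n')
    S_n = 1 - Rational(1, 10)**(n + 1)
    print(limit(S_n, n, oo))    # 1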



