

What is so wrong with thinking of real numbers as infinite decimals? - tokenadult
http://www.dpmms.cam.ac.uk/~wtg10/decimals.html

======
WilliamLP
> However, it has the advantage, for beginners, of being very close to the
> picture of real numbers they already have.

Haha, no. And here's why.

> Because of irritating difficulties such as the need to carry digits and to
> identify 0.999999.... with 1

Not only does this not fit in with people's existing picture of real numbers,
people become emotionally attached to the opposite with a fervor unknown in
any other part of mathematics! (Second place would be Cantor's theorem for
real numbers, but you have to know a little more to be passionately opposed to
that.)

I think even for beginners who are willing to believe that 0.99999... = 1,
they are going to have trouble agreeing that 0.99999... < 1 is a false
statement.
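(For reference, the usual algebraic sketch of the equality; note it quietly assumes the infinite decimal denotes the limit of its partial sums, which is exactly the step the skeptics are resisting:)

```latex
x = 0.999\ldots
\;\Rightarrow\; 10x = 9.999\ldots
\;\Rightarrow\; 10x - x = 9
\;\Rightarrow\; x = 1
```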

~~~
gjm11
"Beginners" here mostly means "first-year undergraduates reading mathematics
at the University of Cambridge". I can assure you that those people very
seldom have any trouble agreeing that "0.999... < 1" is false.

[EDITED to insert the word "mostly"; I expect WTG wasn't thinking _only_ of
his and his colleagues' first-year students.]

------
mustpax
On a somewhat related note: as computer scientists, we're usually dealing with
discrete data, so correctness comes naturally to us with discrete problems.

The problem with floating point numbers is that most mathematical operations
on them are approximations. People expect the same kind of correctness from
floating point arithmetic that we've come to expect from integer arithmetic,
when they really should be thinking in error bars.

I mean, just ask the average programmer to do currency math and I assure you
they'll go straight for the `double`. Most of the time, not a catastrophic
error, except for when it is.
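To illustrate (a minimal Python sketch; the exact stray digits depend on the platform, but any IEEE-754 machine behaves this way):

```python
from decimal import Decimal

# Binary floats cannot represent 0.1 exactly, so ten additions of
# "ten cents" drift away from exactly one dollar.
float_total = sum([0.1] * 10)
print(float_total == 1.0)    # False on IEEE-754 doubles

# decimal.Decimal does exact base-10 arithmetic, which is why it's
# the usual recommendation for currency.
decimal_total = sum([Decimal("0.10")] * 10)
print(decimal_total == Decimal("1.00"))    # True
```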

For the curious, this is a long but very thorough overview of floating point
arithmetic and its quirks:
<http://docs.sun.com/source/806-3568/ncg_goldberg.html>

~~~
gjm11
That's all very well (and Goldberg's paper is excellent); but it has nothing
whatever to do with the linked page, which is by Fields medallist Tim Gowers,
arguing that in pure mathematics "infinite decimals" is as good a way to
construct the real numbers as "Cauchy sequences" or "Dedekind cuts".

For what little it's worth, I agree with Gowers, and have done since before I
read that page, though I'd prefer to use binary rather than decimal.

~~~
mustpax
Thanks for pointing that out; I stand corrected.

In my defense, I wasn't trying to criticize the theoretical content of the
article, which, in all honesty, is beyond my mathematical knowledge to
criticize. The real number to "infinite" decimal conversion issue just
reminded me of the age-old floating point number arithmetic problem, so I let
out a little rant.

------
dnaquin
The bottom line is: there are numerous equivalent ways of defining the real
numbers, and each has its advantages and disadvantages for understanding. Any
good introductory analysis class will give several of the definitions and
prove them equivalent.

Infinite decimals in particular bring up the equality problem. If you're
willing to think of the real numbers as ONLY infinite decimals, you avoid this
confusion. But it's silly to always write .999... instead of 1.

------
defen
Linked article reminded me of a passage in "Topoi - The Categorial Analysis of
Logic" by Robert Goldblatt:

"It would be somewhat misleading to infer [...] that foundational systems act
primarily as a basis out of which mathematics is actually created. The
artificiality of that view is evident when one reflects that the essential
content of mathematics is already there before the basis is made explicitly,
and does not depend on it for its existence. We may for example think of a
real number as an infinite decimal expression, or a point on the number line.
Alternatively it could be introduced as an element of a complete ordered
field, an equivalence class of Cauchy sequences, or a Dedekind cut. None of
these could be said to be _the_ correct explanation of what a real number is.
Each is an embodiment of an intuitive notion and we evaluate it, not in terms
of its correctness, but rather in terms of its effectiveness in explicating
the nature of the real number system."

Tim Gowers makes a good case for the usefulness of the "infinite decimal
expression" view of real numbers.

------
madcaptenor
Gowers has a few dozen short pieces of this sort, listed at
<http://www.dpmms.cam.ac.uk/~wtg10/mathsindex.html>. Another one that might be
of interest is "why study finite-dimensional vector spaces in the abstract if
they are all isomorphic to R^n?"
(<http://www.dpmms.cam.ac.uk/~wtg10/vspaces.html>) The answer, roughly, is
that the notation is less ugly if you don't have to keep track of coordinates,
and that the abstract theory carries over better to the infinite-dimensional
case.

~~~
WilliamLP
> "why study finite-dimensional vector spaces in the abstract if they are all
> isomorphic to R^n?"

Uh, C^2?

------
teeja
Cantor went mad thinking about infinity - to me, a clear sign of how
worthwhile the OCD path is.

To quote my favorite close-is-good-enough source, Wikipedia: "Poincaré
referred to Cantor's ideas as a 'grave disease' infecting the discipline of
mathematics".

------
Dove
There's nothing wrong with thinking of the real numbers as infinite decimals.
But you probably already do. Learning to think of them as the limits of Cauchy
sequences is good exercise for learning to think like a mathematician.

