
New Proof Settles How to Approximate Numbers Like Pi - theafh
https://www.quantamagazine.org/new-proof-settles-ancient-question-of-how-to-simplify-numbers-like-pi-20190814/
======
gjm11
Ugh. In the title:

"Ancient" means 1941. "Simplify" means approximate. "Numbers like pi" means
irrational numbers. "How to" means "how accurately you can".

Background to the actual conjecture: if you take any irrational number and ask
how well you can approximate it by rationals p/q, then it turns out (the proof
isn't trivial but e.g. plenty of undergraduate mathematicians would be able to
follow it without too much trouble) you can always find infinitely many
approximations where the error is no bigger than of order 1/q^2.
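
This is easy to see empirically. A quick brute-force Python sketch (my own illustration, not the classical pigeonhole proof): for each denominator q, the best numerator is round(pi * q), and Dirichlet's theorem guarantees infinitely many q where the error beats 1/q^2.

```python
import math

# For each denominator q, the best numerator is p = round(pi * q).
# Collect the q where the approximation error beats the 1/q^2 bound.
good = []
for q in range(1, 400):
    p = round(math.pi * q)
    if abs(math.pi - p / q) < 1 / q**2:
        good.append((p, q))
print(good)  # includes the familiar 22/7 and 355/113
```

The survivors are dominated by the continued-fraction convergents of pi, which is no accident.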

The actual conjecture: suppose you want it to be true that for _almost all_
irrational numbers, you can find infinitely many p/q with an approximation
error no more than f(q). Then (so conjectured Duffin and Schaeffer, and so
apparently Maynard and Koukoulopoulos have proved):

you can do it if and only if the sum (over all positive integers q) of f(q)
phi(q) is infinite, where phi is the "Euler totient function": phi(q) = number
of things <= q and coprime to q.
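
A small Python sketch of the criterion (my own illustration, using a standard totient sieve): compare the partial sums of f(q) phi(q) for a divergent choice of f against a convergent one.

```python
def totients(N):
    # Sieve of Euler totients: phi[q] for q = 0..N.
    # Start with phi[q] = q, then for each prime p scale its multiples
    # by (1 - 1/p); a prime p is detected by phi[p] still equalling p.
    phi = list(range(N + 1))
    for p in range(2, N + 1):
        if phi[p] == p:  # p is prime
            for m in range(p, N + 1, p):
                phi[m] -= phi[m] // p
    return phi

PHI = totients(3000)

def sum_f_phi(f, N):
    # Partial sum of f(q) * phi(q); the conjecture asks whether this diverges.
    return sum(f(q) * PHI[q] for q in range(1, N + 1))

# f(q) = 1/q^2: the sum diverges (phi(q)/q^2 behaves on average like a
# constant times 1/q), so almost every irrational is approximable that well.
# f(q) = 1/q^2.5: the sum converges, so almost no irrational is.
for N in (300, 3000):
    print(N, sum_f_phi(lambda q: q**-2, N), sum_f_phi(lambda q: q**-2.5, N))
```

The first column keeps climbing (logarithmically) while the second one stalls, which is the divergence-vs-convergence dichotomy the conjecture turns on.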

Very crudely, how fast does this allow f to decrease with q? Well, the sum of
1/q is infinite but "only just"[1], so we want f(q) phi(q) not to decrease
much faster than 1/q. phi(q) varies fairly wildly but crudely is "not usually
much smaller than q", so we want f(q) q not to decrease much faster than 1/q;
that is, we want f(q) not to decrease much faster than 1/q^2. In other words,
the "classical" result I started with is kinda-sorta about the best you can
do. (I think that at this level of handwaviness this has been well known for a
while; the Duffin-Schaeffer conjecture is about making it extra-precise.)

[1] The sum of 1/n is infinite but that of 1/n^(1+h) is finite if h > 0. The
sum of 1/(n log n) is infinite but that of 1/(n (log n)^(1+h)) is finite if
h > 0. The sum of 1/(n log n log log n) is infinite but that of 1/(n log n
(log log n)^(1+h)) is finite if h > 0. Etc. So in some sense the boundary
between finite and infinite is very close to the (kinda-nonsensical)
1 / (n log n log log n log log log n ...).
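
These partial sums can be sampled numerically; a small Python sketch (the divergent one only grows like log log N, so don't expect drama):

```python
import math

# Partial sums near the divergence boundary: sum 1/(n log n) diverges,
# but only like log log N, while sum 1/(n (log n)^2) converges.
def partial(f, N):
    return sum(f(n) for n in range(2, N + 1))

div = {N: partial(lambda n: 1 / (n * math.log(n)), N) for N in (10**3, 10**6)}
conv = {N: partial(lambda n: 1 / (n * math.log(n) ** 2), N)
        for N in (10**3, 10**6)}
print(div)   # still visibly growing between N = 10^3 and N = 10^6
print(conv)  # essentially flat over the same range
```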

~~~
svat
To be clear, in the statement of the conjecture, f(q) need not be a
polynomial or “nice” function of q like 1/q or 1/q^2. It need not even be
monotonic in q (so statements like “we want f(q) not to decrease much faster
than 1/q^2” can be a bit misleading). One can have arbitrary functions: say,
f(12) = 1/4, f(20) = 1/8, and for other q, f(q) = {0 if q is a multiple of 7
(so such denominators are never allowed), 1/3 if q does not contain the digit
9, 1/(q log q) otherwise}, etc. I imagine the fact that the proof has to work
with arbitrary functions was part of the difficulty.

~~~
gjm11
Yes, f can be _any function at all_ here, and indeed this allows for the sort
of more complicated things you mention, and indeed this is where a lot of the
difficulty lies. Duffin and Schaeffer, for whom the conjecture is named, had
already proved it subject to a restriction on f that kinda-sorta requires it
not to vary too wildly.

(One other bit of precision that I glossed over: the conjecture is actually
about approximations p/q _for which p and q have no common factor_ and that
genuinely makes a difference here.)

------
saagarjha
I understand that the article is trying to simplify the topic for a general
audience, but there are some things that are just plain counterproductive
(even if the article clarifies what they mean later):

> New Proof Settles Ancient Question of How to Simplify Numbers Like Pi

The title, obviously. This should say "approximate", since we're not
simplifying pi.

> Under what circumstances is it possible to represent irrational numbers that
> go on forever — like pi — with simple fractions, like 22/7?

Never, unless you're approximating.

> He proved that for every irrational number, there exist infinitely many
> fractions that approximate the number evermore closely. Specifically, the
> error of each fraction is no more than 1 divided by the square of the
> denominator.

The first part is obviously true and should have been merged with the second
part.

~~~
inlined
Mathematically speaking, pi can’t be “simplified”, but I’m personally curious
what the ratio is that gives pi to the highest precision necessary to
calculate anything tangible (the circumference of the universe to the Planck
length). Practically speaking, one could call that the simplification of pi.

~~~
saagarjha
You'd need something along the lines of 60ish digits:
[https://www.wolframalpha.com/input/?i=diameter+of+the+observ...](https://www.wolframalpha.com/input/?i=diameter+of+the+observable+universe%2Fplank+length)

~~~
inlined
Yes, but what’s the ratio that’s precise to within 10^-60?

~~~
saagarjha
Here's one:
3141592653589793238462643383279502884197169399375105820974945/10^60.
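
A quick sanity check in Python, necessarily limited by double precision:

```python
import math
from fractions import Fraction

# The 60-decimal-digit fraction above agrees with math.pi to within
# floating-point accuracy (double precision can't see past ~1e-16).
p = 3141592653589793238462643383279502884197169399375105820974945
print(abs(float(Fraction(p, 10**60)) - math.pi))  # essentially zero
```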

------
svat
Here's a demonstration I made recently, of best rational approximations:
[https://shreevatsa.net/fraction/best](https://shreevatsa.net/fraction/best) —
much of this Quanta magazine article sets up background on such
approximations, so playing with the numbers on this web page may help build
some intuition about the problem.

In particular, the Duffin-Schaeffer conjecture that has just been proved
involves asking about the set of irrational numbers that are approximated to
within an error of f(q) for each denominator q, while the webpage above simply
finds the best possible approximations for a single fixed irrational number
(well, rational…) from the (semi-)convergents of its continued fraction — so
any denominator is allowed and the error (for convergents) turns out to be the
Dirichlet bound 1/q^2.
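
For instance, the convergents and their errors can be computed with Python's fractions module (my own sketch, not the webpage's code; math.pi stands in for pi, so only the first few convergents are meaningful):

```python
import math
from fractions import Fraction

def convergents(x, n):
    # Standard continued-fraction recurrence: h_k = a_k*h_{k-1} + h_{k-2},
    # and likewise for the denominators k_k.
    h_prev, h, k_prev, k = 0, 1, 1, 0
    out = []
    for _ in range(n):
        a = x.numerator // x.denominator  # a = floor(x)
        h_prev, h = h, a * h + h_prev
        k_prev, k = k, a * k + k_prev
        out.append((h, k))
        if x == a:
            break
        x = 1 / (x - a)
    return out

# math.pi is itself rational (a 53-bit binary approximation), so its first
# few convergents coincide with pi's: 3/1, 22/7, 333/106, 355/113, ...
pi = Fraction(math.pi)
for p, q in convergents(pi, 6):
    err = abs(pi - Fraction(p, q))
    print(p, q, float(err), err < Fraction(1, q * q))
```

Every convergent beats the Dirichlet bound 1/q^2, as the theory promises.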

(For best results, choose a really large bound on the denominator and only ask
for the “bestest” rational approximations. The source code is just a single
HTML file and a bunch of plain JavaScript files to do the computation, so if
you can improve it, please do so and let me know:
[https://gitlab.com/svat/web-fraction/tree/master/website](https://gitlab.com/svat/web-fraction/tree/master/website).)

~~~
papln
In your calculator, what does "bestest" mean?

~~~
svat
Best rational approximation of the second kind (as defined in the text higher
up on the page). Partly a joke, as there's already a natural idea of “best
rational approximations” (the first kind), and these (the second kind) are
even better than that.

------
rini17
I'd like to see rational numbers used in computers much more, instead of
floating point. Not just for improved representability of widespread numbers
such as 1/3, but also for what the article hints at: the accuracy of pi in
half-precision 16-bit floating point is around 1/(2^11) ~ 0.0005, while
355/113 is more precise, 1/(113^2) ~ 0.00008, and the fraction can still be
stored in 16 bits.
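
A quick check of the comparison in Python (struct's 'e' format does the IEEE binary16 rounding):

```python
import math
import struct
from fractions import Fraction

# Round pi to IEEE half precision (binary16) via struct's 'e' format,
# then compare the error against 355/113, which also fits in 16 bits
# (9 bits for 355 plus 7 bits for 113).
half_pi = struct.unpack('e', struct.pack('e', math.pi))[0]
err_half = abs(half_pi - math.pi)
err_frac = float(abs(Fraction(355, 113) - Fraction(math.pi)))
print(err_half)  # on the order of 1e-3
print(err_frac)  # on the order of 1e-7
```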

~~~
jerf
$YOUR_FAVORITE_LANGUAGE almost certainly has a rational library for it. Give
it a try and use it for some serious mathematical purpose; nothing will show
you why we don't do that all the time like trying it.

I invite you to ignore any syntactic issues that may arise, like perhaps
having to use x.Add(y) instead of x + y. If those were the fundamental
problem, the solution would be trivial. The real obstacle to rational numbers
being the pervasive default is that, in exchange for the increased precision,
operations you are used to being O(1) in space and time (albeit with
potentially imprecise results) suddenly take on seemingly-random costs for
both, with disturbing amounts of things like
"O((log2 denominator + log2 numerator)^2)" showing up and then stacking on
top of each other (^4, ^6, ^8, etc.).
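
A small Python sketch of the blowup, using the standard fractions module: summing the harmonic series exactly makes the denominator grow without bound, so each further addition handles ever-bigger integers.

```python
from fractions import Fraction

# With floats, each addition is O(1). With exact rationals, the running
# denominator (roughly lcm(1..n)) keeps growing, so every subsequent
# operation costs more.
total = Fraction(0)
sizes = []
for n in range(1, 101):
    total += Fraction(1, n)
    sizes.append(total.denominator.bit_length())
print(sizes[9], sizes[99])  # after 100 terms, far beyond a 64-bit word
```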

~~~
waqf
And yet if you announced to a general audience of programmers that you had a
marvellous new programming language that guarantees performance but doesn't
guarantee correctness, you'd be laughed out of the room.

~~~
papln
Of course, a general audience of programmers prefers JavaScript and Python,
because we want our languages to be both slow and incorrect.

More seriously, general programming always prefers performance over
correctness, because correctness is expensive and very few programs are rocket
surgery where perfect correctness matters. We have languages dedicated to
correctness, like Ada and Coq. They are very niche because they are slow and
they require a lot of work that the programmer doesn't want to do to hold up
their end of the bargain.

------
leiroigh
I hate how the article waffles between "almost none", "virtually none" and
"negligible".

There is a word with precise mathematical meaning that matches naive non-
mathematical intuition: "almost all" / "almost none" (aka full measure set).

This concept is easy to explain to lay people: count up lengths of intervals,
do some limiting process trickery to deal with the fact that the set cannot be
written as a finite union of intervals. Alternatively, choose a real number
uniformly at random from some interval, probability one will hit the "almost
all" set and probability zero will hit one of the others.

"negligible set" is often used to describe sets that have zero measure and
small Baire category. "Virtually" also has different mathematical
connotations.

The upside is that the article does explain well that the content of the new
result is "don't worry about overlaps between different denominators".

------
oluckyman
Wow. Maynard FTW yet again. A Fields medal down the track would be no surprise
at this rate. He has several more years to qualify.

