
Weird Constants in Math Problems - weinzierl
https://blog.plover.com/math/odd-constants.html
======
seth_tr
Seems the first constant is wrong

>>> import math
>>> 1/(2 * (1 - math.exp(-2)))

0.5782588213748329

Instead it should be

>>> 1/2 * (1 - math.exp(-2))

0.43233235838169365

Which handily appears in OEIS (a great resource for random constants and
sequences) as [http://oeis.org/A247847](http://oeis.org/A247847)

~~~
madcaptenor
Michael Lugo (author of the original post that Mark was responding to) here. I
had written $1/2 (1-e^{-2})$ in my post, meaning the 0.4323 constant, but with
the usual rules of precedence you do in fact get 0.5782; he just propagated my
error.

~~~
mjd
I fixed it on my side also. Thanks to both of you for the correction.

------
tikej
This is nice. I think OEIS and in general libraries of constants will be of
great help in tackling problems with no analytical solutions.

Actually, I do a lot of integrals this way: solve them symbolically, then check
that the transformation was legitimate by evaluating the definite integral
numerically on a few random intervals. I find it very helpful, and I believe
the approach can be extended to problems beyond integrals.
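That checking step is easy to script. A minimal sketch in Python (stdlib only, using the trapezoid rule; the antiderivative being verified is just an illustrative example, not from the parent's work):

```python
import math
import random

random.seed(0)  # reproducible spot checks

def check_antiderivative(F, f, trials=20, lo=0.0, hi=3.0, n=10_000, tol=1e-6):
    """Spot-check that F' = f by comparing F(b) - F(a) against a
    trapezoid-rule estimate of the integral of f over random intervals."""
    for _ in range(trials):
        a, b = sorted(random.uniform(lo, hi) for _ in range(2))
        h = (b - a) / n
        approx = h * (f(a) / 2 + sum(f(a + i * h) for i in range(1, n)) + f(b) / 2)
        if abs((F(b) - F(a)) - approx) > tol * (1 + abs(approx)):
            return False
    return True

# Symbolic result to verify: d/dx [-(x + 1) e^(-x)] = x e^(-x)
F = lambda x: -(x + 1) * math.exp(-x)
f = lambda x: x * math.exp(-x)
print(check_antiderivative(F, f))  # True
```

A check like this won't prove the symbolic step was valid, but a mistake almost always fails on at least one random interval.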

~~~
throwlogon
> It's easy to use Monte Carlo methods and find that when n is large, the
> average cluster size is approximately 2.15013

It never occurred to me before, but running simulations and looking up the
result in a dictionary of constants might be a helpful learning approach for
people who are more adept at going from practice to theory.

~~~
nullc
I solve a lot of problems this way, thanks to OEIS.

How many fromblaz of size N are there? Let's count them for N=1 ... N=8, or
whatever is tractable with naive methods, then hope a useful result shows up
in OEIS.

The OEIS search will also suggest formulas for sequences satisfying simple
linear recurrences, even when there is no matching entry.

Unfortunately, the tools for looking up single constants are significantly
worse than OEIS.
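A minimal sketch of that workflow, using binary strings with no two adjacent 1s as a concrete stand-in for the hypothetical fromblaz:

```python
from itertools import product

def count_ok(n):
    """Brute-force count of length-n binary strings with no two adjacent 1s
    (a stand-in for whatever structure you actually care about)."""
    return sum(
        all(not (a and b) for a, b in zip(s, s[1:]))
        for s in product((0, 1), repeat=n)
    )

terms = [count_ok(n) for n in range(1, 9)]
print(terms)  # [2, 3, 5, 8, 13, 21, 34, 55]
# Searching OEIS for "2, 3, 5, 8, 13, 21, 34, 55" immediately turns up
# the Fibonacci numbers (A000045), which suggests a closed form to prove.
```

The naive count is exponential, but eight terms is usually enough to get a meaningful OEIS hit.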

------
hoseja
[https://www.wolframalpha.com/input/?i=2.15013](https://www.wolframalpha.com/input/?i=2.15013)

It's there as one of "Possible closed forms".

It's there even for 0.4323, but further down; you need to request more closed
forms.

------
lifeformed
One I found as a kid was 0.73908513321. It appears when you just press the Cos
button on a calculator repeatedly (for any starting value). I can't find it
used anywhere else.

~~~
sgdpk
You can see that your constant is a solution of x = cos(x): if the equality
holds, you can replace the x on the right-hand side with cos(x) and repeat the
process indefinitely.

The solution even has a name: the Dottie number [1].

[1]
[https://mathworld.wolfram.com/DottieNumber.html](https://mathworld.wolfram.com/DottieNumber.html)
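A quick check that the repeated-Cos value really is a fixed point of cos:

```python
import math

x = 1.0                 # any starting value works
for _ in range(200):
    x = math.cos(x)

print(x)                     # ~ 0.7390851332151607, the Dottie number
print(abs(x - math.cos(x)))  # essentially 0: x is a fixed point of cos
```

The iteration converges because |cos'(x)| = |sin(x)| < 1 near the fixed point, so each step shrinks the error.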

~~~
ja3k
Beautifully, iterating any continuous function yields a fixed point of the
function whenever the iteration converges.

[https://en.wikipedia.org/wiki/Fixed-point_iteration](https://en.wikipedia.org/wiki/Fixed-point_iteration)

~~~
Enginerrrd
Interestingly, I've found this kinda sorta occasionally works on functionals
of a differential equation too, if you start with a good enough guess. Someone
has probably made this concept and the necessary/sufficient conditions
rigorous, but it's getting into territory that is a good bit too advanced for
me to follow.

As I recall from my fiddling, you are most likely to end up with a series
solution of sorts, so it's a good idea to guess with polynomials or
exponentials, so that you end up with component functions that form a basis
for the analytic functions.
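For what it's worth, this sounds like Picard iteration, which is how the Picard-Lindelöf existence theorem is usually proved. A discrete sketch for the toy ODE y' = y, y(0) = 1, where repeatedly applying the integral operator on a grid approximately builds up the power series of e^t:

```python
import math

N = 1000              # grid points on [0, 1]
h = 1.0 / N
y = [1.0] * (N + 1)   # initial guess: the constant function y(t) = 1

for _ in range(30):   # apply the operator y -> 1 + integral of y from 0 to t
    integral = 0.0
    new = [1.0]
    for i in range(N):
        integral += h * (y[i] + y[i + 1]) / 2   # trapezoid rule
        new.append(1.0 + integral)
    y = new

print(y[-1], math.e)  # y(1) converges toward e = 2.71828...
```

Starting from the constant guess, the k-th iterate is (up to discretization error) the degree-k Taylor polynomial of e^t, which matches the parent's observation about ending up with a series solution.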

~~~
hansvm
It's not too bad to formalize :) The [Wikipedia article on fixed-point
iteration](https://en.m.wikipedia.org/wiki/Fixed-point_iteration) gives lip
service to everything working fine in arbitrary metric spaces. It's easy to
define a useful metric between functions (e.g. by integrating their
difference, ignoring edge cases like occasional pointwise differences, since a
more careful treatment can give a presentation where those don't matter for
the problem at hand), so the same kinds of theorems that work for cos(x) on
the reals also work for differential operators on function spaces.

------
nullc
Too bad that Plouffe's inverter can't seem to stay reliably online, except in
intermittently available cut-down forms.

The database behind it grew huge, making it expensive to operate, and any copy
that goes online seems to go down whenever someone graduates.

------
jonathanstrange
I find series even more fascinating than unexpected constants. Series also
show up in the weirdest places, and it's often very non-trivial to explain (or
not yet explained at all) why some series shows up in two different
areas/applications. Check out the comments in OEIS entries [1]; it's
fascinating.

[1] [https://oeis.org](https://oeis.org)

------
rocqua
I recall doing a numerical approximation of something, and it seemed to
converge to about 2.5.

That was enough for me, in my overconfidence, to assume the constant was
exactly 2.5. Later, when I found an exact derivation, it turned out the actual
constant was pi^2/4, or 2.46740110027..., which was apparently close enough to
fool me.

~~~
mjd
A while back I looked at a problem where Monte Carlo simulation said that the
answer was probably 4.

But analysis showed that the actual answer was 9304682830147 ÷ 2329089562800 =
3.994987….

[https://blog.plover.com/math/breaking-pills.html](https://blog.plover.com/math/breaking-pills.html)
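A quick sanity check of that fraction with exact rationals:

```python
from fractions import Fraction

answer = Fraction(9304682830147, 2329089562800)
print(float(answer))  # ~ 3.994987, annoyingly close to 4
```

At that distance from 4, a Monte Carlo run would need on the order of millions of samples before the gap stood out from the noise.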

------
bo1024
I think these kinds of constants are very normal (and very cool) in these
kinds of problems. "e" shows up a lot. For example, when you throw n balls
into n bins, the probability that a given bin is empty is the probability that
every ball misses it, or (1 - 1/n)^n --> 1/e.
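A quick Monte Carlo check of that limit (n balls into n bins, counting the bins left empty):

```python
import math
import random

def empty_fraction(n, trials=100):
    """Throw n balls into n bins uniformly at random, `trials` times;
    return the average fraction of bins left empty."""
    total = 0
    for _ in range(trials):
        hit = [False] * n
        for _ in range(n):
            hit[random.randrange(n)] = True
        total += hit.count(False)
    return total / (trials * n)

random.seed(0)
frac = empty_fraction(10_000)
print(frac, 1 / math.e)  # both ~ 0.368
```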

~~~
SamReidHughes
Another interesting way to formulate it is as a differential equation in the
number of empty buckets as a function of time. Given k empty buckets, the
probability of picking an empty bucket is k/n. For large values of n and k, we
can pretend it's a continuous and not-particularly-stochastic differential
equation dk/dt = -k/n, with k(0) = n.

That's classic exponential decay, the solution being k = ne^(-t/n). Thus k(n)
= ne^(-1).
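Amusingly, a forward Euler step for this ODE with dt = 1 is k <- k(1 - 1/n), so after n steps it reproduces exactly the (1 - 1/n)^n expression from upthread; the numerical check is almost circular, but here it is anyway:

```python
import math

n = 100_000
k = float(n)            # all n buckets start empty
for _ in range(n):      # one step per ball: dt = 1, integrate to t = n
    k -= k / n          # forward Euler step for dk/dt = -k/n

print(k / n, math.exp(-1))  # both ~ 0.3679
```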

------
SamReidHughes
One bit of the analysis is left out (if you follow the link to the Math Stack
Exchange post) -- the probability of a bin being empty is not independent of
the probability of other bins being empty. From the perspective that the
number of groupings Y(x) is contingent on the total number of bins filled, X,
you might say they're assuming E[Y(X)] = E[Y(E[X])].

But as n -> infinity, the effect of this becomes negligible.

------
godelzilla
A probability problem has Euler's number in its answer? And numeric
approximations are close to that number?

Who cares? That's how probability works. Nothing weird.

