Here's an even better one, which explains not only how Euler calculated logarithms with the bisection method, but also how he used some smart Taylor-series tricks to get the (natural) logarithms of the integers from 1 to 10, to 25 decimals.
How good was Euler at that? I checked his results with a multiple-precision library (mpmath): Euler got all the digits correct for 7 out of 10, and in 2 cases he rounded the last decimal in the wrong direction. Only for log(7) did he do worse: 21 correct decimals, with the last 4 wrong.
Here's the link to Euler's book [1], where these logarithms are on page 118. His values are
To be fair, you'd get marked off for getting ln(1) wrong, so I think we have to give credit for getting it right. (Grading on a curve solves that problem, though I guess the Gaussian hadn't been invented yet.)
The numerator is also "wrong": of his 3 errors, 2 are off by only one in the last digit, in the same direction both times, so without knowing the rounding rules in effect it's hard to say whether those are actual errors. ln(7) is a mess, though still only at the least significant end.
It describes how Briggs created his table of logarithms to 14 decimal places, published in 1624, only 10 years after Napier published the first logarithms in 1614. Those times must have felt like an incredible leap forward, probably similar to LLMs now.
Kepler published his 3rd law of planetary motion in 1619. It would not have been possible to get that without logarithms, and it's quite likely that he knew of logarithms independently of Napier (who came up with them after some discussions with Kepler's boss, Tycho Brahe).
I can't really follow the algorithm Euler used... I've come up with one that is likely equivalent, and easy to do with a 4 function calculator.
For log10....
Scale X to be between 1 and 10
For each time you divide by 10, add 1.0 to the log
each time you have to multiply by 10, subtract 1.0 from the log
Add the decimal point to the log
Loop:
Take X' (the scaled value) to the 10th power
square it (store to memory, if you have it, this is x'^2)
square that (which is x'^4)
square that (now x'^8)
multiply that by x'^2
you now have X'^10
the result now has to be divided by 10 N times to get back between 1.0 and 10; add N as the next digit of the log
if you need more digits, goto Loop
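The squaring chain in the loop (square, store, square, square, multiply by the stored value) can be sketched in a few lines of Python; `tenth_power` is a name I've made up for illustration:

```python
# Tenth power using only multiplication, as on a 4-function
# calculator with one memory slot.
def tenth_power(x):
    x2 = x * x        # square (store to memory)
    x4 = x2 * x2      # square that
    x8 = x4 * x4      # square that
    return x8 * x2    # multiply by the stored x^2: x^10

print(tenth_power(3))  # 59049
```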
Here it is, taking log10(3)
3 0.
59,049 0.4
51,537,752.07320113 0.47
13,220,708.19480807 0.477
16.31350185342626 0.4771
133.4971414230401 0.47712
17.97710116675744 0.477121
352.530441082974 0.4771212
296,460.0951963823 0.47712125
52,439.97032955288 0.477121254
15,726,220.94397862 0.4771212547
Very nice algorithm - no square roots and 1 digit per iteration guaranteed.
I converted this into python to understand it better
def log10(x):
    """
    Find 10 decimal places of log10(x) using calculator functions only
    """
    assert x > 0, "can't log <= 0"
    log = 0
    while x >= 10:
        log += 1
        x /= 10
    while x < 1:
        log -= 1
        x *= 10
    # x is now in [1, 10)
    decimal = 0.1
    for i in range(10):
        x10 = x**10  # or calculate as above so as not to use power function
        while x10 >= 10:
            log += decimal
            x10 /= 10
        x = x10
        decimal /= 10
        print(f"log10(x) = {log:.10f}, x = {x:.10f}")

log10(3)
Which gives
log10(x) = 0.4000000000, x = 5.9049000000
log10(x) = 0.4700000000, x = 5.1537752073
log10(x) = 0.4770000000, x = 1.3220708195
log10(x) = 0.4771000000, x = 1.6313501853
log10(x) = 0.4771200000, x = 1.3349714142
log10(x) = 0.4771210000, x = 1.7977101167
log10(x) = 0.4771212000, x = 3.5253044107
log10(x) = 0.4771212500, x = 2.9646009506
log10(x) = 0.4771212540, x = 5.2439970092
log10(x) = 0.4771212547, x = 1.5726220230
I’m not sure that’s efficient. The number of digits in that tenth power grows very rapidly (59049^10 already has 48 digits; (59049^10)^10 around 478) and there’s the risk that rounding introduces errors (did you really do that with “a 4 function calculator”?)
On the other hand (hand-waving), it takes many iterations before the “far away” digits get shifted left of the decimal point, so numerical analysis can probably show you don’t need them all to reach a target number of digits.
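One way to sidestep the rounding question is to run the same digit-by-digit scheme at a fixed high working precision. This is my own sketch (the function name is made up), using Python's decimal module at 50 significant digits, which is comfortably more than the 10 digits extracted:

```python
from decimal import Decimal, getcontext

# Same algorithm: raise to the 10th power, count the divisions
# by 10 needed to get back into [1, 10); each count is the next
# decimal digit of log10(x). 50-digit precision keeps rounding
# error far away from the first 10 digits.
getcontext().prec = 50

def log10_digits(x, digits=10):
    x = Decimal(x)              # assumes 1 <= x < 10
    result = "0."
    for _ in range(digits):
        x = x ** 10
        n = 0
        while x >= 10:
            x /= 10
            n += 1
        result += str(n)
    return result

print(log10_digits(3))  # 0.4771212547
```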
>did you really do that with “a 4 function calculator”?
I have in the past, this time I used the 4 function calculator mode of Windows 10's calculator. x^2, MS, x^2, x^2 * MR = on the loops, then divide by 1+n zeros, repeat.
I used NotePad++ to record the data as I went.
Doing it for a binary logarithm would be a lot easier, because then each step is just a square and, optionally, a divide by 2.
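That binary variant fits in a few lines: square once per step, and if the square crossed 2, halve it and record a 1 bit. A minimal sketch (my own naming), which produces the fractional bits of log2(x):

```python
# One bit of log2(x) per squaring: x in [1, 2) squares into
# [1, 4); landing in [2, 4) means the leading fractional bit
# of log2 is 1, so halve and record it.
def log2_bits(x, bits=10):
    out = []
    for _ in range(bits):
        x = x * x
        if x >= 2:
            x /= 2
            out.append(1)
        else:
            out.append(0)
    return out

# log2(1.5) ~= 0.585; its first fractional bits:
print(log2_bits(1.5))
```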
> this time I used the 4 function calculator mode of Windows 10's calculator
AFAIK, that calculator has “Infinite precision for basic arithmetic operations (addition, subtraction, multiplication, division) so that calculations never lose precision” (https://github.com/microsoft/calculator)
Yeah I remember that some of the notation in the proof confused me a bit too. I proved it to my satisfaction using a slightly different approach, but I don't remember it well enough to write it out. (I'm sure it's trivial for anyone with math chops, but I don't got them.)
I was randomly re-watching some Youtube video on solving exponential equations and was poking around solving some, using the log() function in R to do the logs, and got to thinking about the "old days" when you had to look these things up in log tables. And that led me to think "Wait, how did the first log tables get written in the first place? And for that matter, how exactly do you work out a log value the manual way anyway???"
A few moments of Googling turned this up. Thought some other folks might find this topic interesting as well.
One fun aspect of all this, is that it forces you to consider that thinking of exponentiation as "repeated multiplication" is a bit dodgy when you allow for decimal exponents. I mean, given 3^2.5, what does it mean to say "multiply 3 by itself 2.5 times?" Actually, thinking about that was the thing that really set me off on an exploration of how the heck they worked out these log values in the beginning.
The basic idea of how to generalise "repeated multiplication" to the whole real continuum is to think about multiplication as a kind of scaling.
Going from 2^n to 2^(n+1) is ultimately about scaling the result up by a factor of x2. So, 2^(n+0.5) ought to be about scaling up halfway to x2, so that if you were to repeat that operation again, you got x2; and that's precisely sqrt(2). That reasoning gets you all the rationals. Since rationals are dense in the reals, impose continuity and you get all reals. This is the root of the algebraic definition of exponentials.
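The half-step reasoning above can be checked numerically in a couple of lines (a sketch, up to float rounding):

```python
import math

# 2^(n+0.5) scales "halfway" to another doubling: applying the
# half-step twice must give a full factor of 2, so the half-step
# is sqrt(2).
half_step = 2 ** 0.5
print(half_step * half_step)          # ~2.0

# The same idea for the earlier 3^2.5 puzzle: 3^2.5 = 3^2 * sqrt(3)
print(3 ** 2.5, 9 * math.sqrt(3))     # both ~15.588
```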
But where there are reals there is calculus, and we also get a lot of insight from a differential definition. The key insight that leads to this definition is that if we could break down the exponential to 2^(n+epsilon) for a very small epsilon, we could stack up O(1/epsilon) of them to get wherever we like on the real continuum. So it makes sense that the definition dy/dx = a.y should produce the same function.
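A crude numerical illustration of that differential definition (my own sketch, forward Euler with step size eps): stacking ~1/eps tiny multiplicative steps of dy/dx = a·y with a = ln(2) should land y(1) near 2^1.

```python
import math

# Integrate dy/dx = a*y from x=0 to x=1 with a = ln(2),
# stacking 1/eps infinitesimal-ish steps.
a = math.log(2)
eps = 1e-5
y = 1.0
for _ in range(100_000):
    y += eps * a * y
print(y)  # close to 2; forward Euler undershoots slightly
```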
This notion of "break down an operation into infinitesimal bits that stack up" can be taken seriously and formalised, and if you do that you end up with the theory of Lie groups and Lie algebras.
My grandfather was a civil engineer for Maricopa County in Arizona (USA, dry desert, last contiguous state to join the union, 1912 I believe) from 195[4-9] to about 1998. When I started working on advanced algebra, while we were driving out to go fishing, he told me that they used to have the 2 interns from ASU (AZ State Univ.) each find a different palo verde tree and calculate, by hand, the log they would use for the bend of a curve for the new road. If they got different answers they'd both have to go back and redo it. I was astonished, because I was always just taught to push the log button on the calculator. Pretty crazy how things have changed, even in curriculum.
Thanks for sharing the article
> “Euler” is pronounced “oiler,” which is why virtually every sports team assembled by the University of Alberta’s math department has been named “the Edmonton Eulers.”
"Euler’s mathematical brilliance" is footnoted: "I’m not exaggerating. I’ve taken no less than 13 University level math courses, and Euler’s name showed up in at least 10 of them. The man’s contributions to mathematics are probably more significant than Einstein’s contributions to physics. Find him on Wikipedia when you have a minute or thirty."
That includes https://en.wikipedia.org/wiki/Euclid–Euler_theorem, which must make it one of the ‘cooperations’ with the longest gap in history (Euclid died ±270BC; Euler lived 1707-1783, and his proof was published posthumously).
> all logarithm tables for three hundred years were borrowed from Mr. Briggs’ tables by reducing the number of decimal places. Only in modern times, with the WPA and computing machines, have new tables been independently computed.
http://eulerarchive.maa.org/hedi/HEDI-2005-07.pdf
The correct values are [1]https://scholarlycommons.pacific.edu/cgi/viewcontent.cgi?art...