Calculating Logarithms by Hand (2010) [pdf] (bureau42.com)
108 points by mindcrime on Jan 1, 2024 | 34 comments



Here's an even better one, which explains not only how Euler calculated logarithms with the bisection method, but also how he used some clever Taylor series tricks to get the natural logarithms of the integers from 1 to 10 to 25 decimals.

http://eulerarchive.maa.org/hedi/HEDI-2005-07.pdf

How good was Euler at that? I checked his results with a multiple-precision library (mpmath): Euler got every digit correct for 7 of the 10 values; in 2 cases he rounded the last decimal in the wrong direction, and for log(7) he got 21 correct decimals with the last 4 wrong.
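The check can be reproduced with a minimal sketch like this, using the standard-library decimal module in place of mpmath (the value string is Euler's ln(7) as printed below, decimal comma replaced by a point):

```python
# Reproducing the digit check with the standard-library decimal module
# (the comment used mpmath; Decimal.ln() serves the same purpose here).
from decimal import Decimal, getcontext

getcontext().prec = 30  # a few guard digits beyond the 25 decimals compared

# Euler's printed value for ln(7), decimal comma replaced by a point
euler_l7 = "1.9459101490553133051054639"
true_l7 = str(Decimal(7).ln())  # rounded to the current precision

# Count matching decimal places up to the first mismatch
agree = 0
for a, b in zip(euler_l7.partition(".")[2], true_l7.partition(".")[2]):
    if a != b:
        break
    agree += 1
print(agree)  # 21 -- Euler's last 4 decimals of ln(7) are wrong
```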

Here's the link to Euler's book [1], where these logarithms are on page 118. His values are

  l1  = 0,00000 00000 00000 00000 00000
  l2  = 0,69314 71805 59945 30941 72321
  l3  = 1,09861 22886 68109 69139 52452
  l4  = 1,38629 43611 19890 61883 44642
  l5  = 1,60943 79124 34100 37460 07593
  l6  = 1,79175 94692 28055 00081 24773
  l7  = 1,94591 01490 55313 30510 54639
  l8  = 2,07944 15416 79835 92825 16964
  l9  = 2,19722 45773 36219 38279 04905
  l10 = 2,30258 50929 94045 68401 79914
The correct values are

  l1  = 0,00000 00000 00000 00000 00000
  l2  = 0,69314 71805 59945 30941 72321
  l3  = 1,09861 22886 68109 69139 52452
  l4  = 1,38629 43611 19890 61883 44642
  l5  = 1,60943 79124 34100 37460 07593
  l6  = 1,79175 94692 28055 00081 24774
  l7  = 1,94591 01490 55313 30510 53527
  l8  = 2,07944 15416 79835 92825 16964
  l9  = 2,19722 45773 36219 38279 04905
  l10 = 2,30258 50929 94045 68401 79915
[1]https://scholarlycommons.pacific.edu/cgi/viewcontent.cgi?art...


> and Euler got all the correct digits for 7 out of 10

This isn't the right denominator; you don't get any credit for calculating digits of ln(1).


To be fair, you'd get marked off for getting ln(1) wrong, so I think we have to give credit for getting it right. (Grading on a curve solves that problem, though I guess the Gaussian hadn't been invented yet.)

The numerator is also "wrong": of his 3 errors, 2 are off by only one in the last digit, in the same direction both times, so without knowing the rounding rules in effect it's hard to say whether those are actual errors. ln(7) is a mess, though still only at the least significant end.


Interesting that the log 7 case is called out as tricky in that linked PDF.


That was a fun read


Happy to hear.

Here's another fun one, although much longer.

https://www.math.ksu.edu/~cjbalm/570s14/briggs.pdf

It describes how Briggs created his table of logarithms to 14 decimal places, published in 1624, only 10 years after Napier published the first logarithms in 1614. Those times must have felt like an incredible leap forward, probably similar to LLMs now.

Kepler published his 3rd law of planetary motion in 1619. It would not have been possible to get that without logarithms, and it's quite likely that he knew of logarithms independently of Napier (who came up with them after some discussions with Kepler's boss, Tycho Brahe).


I can't really follow the algorithm Euler used... I've come up with one that is likely equivalent, and easy to do with a 4 function calculator.

For log10....

  Scale X to be between 1 and 10
    For each time you divide by 10, add 1.0 to the log
    each time you have to multiply by 10, subtract 1.0 from the log

  Add the decimal point to the log

  Loop:
  Take X' (the scaled value) to the 10th power
    square it (store to memory, if you have it, this is x'^2)
    square that  (which is x'^4)
    square that  (now x'^8)
    multiply that by x'^2
    you now have X'^10
  the number now has to be divided by 10 N times to get back between 1.0 and 10; append N as the next digit of the log
    if you need more digits, goto Loop
   
  
  Here it is, taking log10(3)
  
  3                     0.
  59,049                0.4
  51,537,752.07320113   0.47
  13,220,708.19480807   0.477
  16.31350185342626     0.4771
  133.4971414230401     0.47712
  17.97710116675744     0.477121
  352.530441082974      0.4771212
  296,460.0951963823    0.47712125
  52,439.97032955288    0.477121254
  15,726,220.94397862   0.4771212547


Very nice algorithm - no square roots and 1 digit per iteration guaranteed.

I converted this into python to understand it better

    def log10(x):
        """
        Find 10 decimal places of log10(x) using calculator functions only
        """
        assert x > 0, "can't log <= 0"
        log = 0
        while x >= 10:
            log += 1
            x /= 10
        while x < 1:
            log -= 1
            x *= 10
        # x is now [1, 10) 
    
        decimal = 0.1
        for i in range(10):
            x10 = x**10 # or calculate as above so as not to use power function
            while x10 >= 10:
                log += decimal
                x10 /= 10
            x = x10
            decimal /= 10
            print(f"log10(x) = {log:.10f}, x = {x:.10f}")
    
    log10(3)

Which gives

    log10(x) = 0.4000000000, x = 5.9049000000
    log10(x) = 0.4700000000, x = 5.1537752073
    log10(x) = 0.4770000000, x = 1.3220708195
    log10(x) = 0.4771000000, x = 1.6313501853
    log10(x) = 0.4771200000, x = 1.3349714142
    log10(x) = 0.4771210000, x = 1.7977101167
    log10(x) = 0.4771212000, x = 3.5253044107
    log10(x) = 0.4771212500, x = 2.9646009506
    log10(x) = 0.4771212540, x = 5.2439970092
    log10(x) = 0.4771212547, x = 1.5726220230


your print should be x', not x!


I’m not sure that’s efficient. The number of digits in that tenth power grows very rapidly (59049¹⁰ already has 48 digits; (59049¹⁰)¹⁰ around 478) and there’s the risk that rounding introduces errors (did you really do that with “a 4 function calculator”?)

On the other hand, (hand-waving) it takes many iterations before the “far away” digits get shifted left of the decimal point, so numerical analysis can probably show you don’t need them all to reach a target number of digits.


>did you really do that with “a 4 function calculator”?

I have in the past, this time I used the 4 function calculator mode of Windows 10's calculator. x^2, MS, x^2, x^2 * MR = on the loops, then divide by 1+n zeros, repeat.

I used NotePad++ to record the data as I went.

Doing it for a binary logarithm would be a lot easier, because then it's square and optionally divide by 2.
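A sketch of that binary variant, with the same scale-then-iterate structure as the base-10 version above (function name and bit count are mine):

```python
# Sketch of the binary-logarithm variant: scale into [1, 2), then
# "square and optionally divide by 2" once per output bit.
def log2_bits(x, bits=30):
    """Approximate log2(x) for x > 0, one fractional bit per squaring."""
    assert x > 0
    exp = 0
    while x >= 2:       # integer part of the log
        x /= 2
        exp += 1
    while x < 1:
        x *= 2
        exp -= 1
    frac, bit = 0.0, 0.5
    for _ in range(bits):
        x *= x          # squaring doubles the remaining logarithm
        if x >= 2:      # crossed 2: this bit of the log is 1
            x /= 2
            frac += bit
        bit /= 2
    return exp + frac

print(log2_bits(3))  # ≈ 1.5849625 (true log2(3) = 1.58496250072...)
```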


> this time I used the 4 function calculator mode of Windows 10's calculator

AFAIK, that calculator has “Infinite precision for basic arithmetic operations (addition, subtraction, multiplication, division) so that calculations never lose precision” (https://github.com/microsoft/calculator)


The intermediate numbers never get bigger than 10^10 - if you look at the python program I posted you might find that easier to see.


That’s true, but the number of digits to the right of the decimal point grows rapidly.


I think this is quite similar to Clay S Turner's algorithm for calculating binary logarithms:

http://www.claysturner.com/dsp/BinaryLogarithm.pdf


I didn't follow the proof, but yes it is the same algorithm. It'll work for any integer base >= 2.


Yeah I remember that some of the notation in the proof confused me a bit too. I proved it to my satisfaction using a slightly different approach, but I don't remember it well enough to write it out. (I'm sure it's trivial for anyone with math chops, but I don't got them.)


I was randomly re-watching some Youtube video on solving exponential equations and was poking around solving some, using the log() function in R to do the logs, and got to thinking about the "old days" when you had to look these things up in log tables. And that led me to think "Wait, how did the first log tables get written in the first place? And for that matter, how exactly do you work out a log value the manual way anyway???"

A few moments of Googling turned this up. Thought some other folks might find this topic interesting as well.

One fun aspect of all this is that it forces you to consider that thinking of exponentiation as "repeated multiplication" is a bit dodgy once you allow decimal exponents. I mean, given 3^2.5, what does it mean to say "multiply 3 by itself 2.5 times"? Actually, thinking about that was what really set me off on an exploration of how the heck they worked out these log values in the beginning.


> 3^2.5, what does it mean to say "multiply 3 by itself 2.5 times?"

It means 3 · 3 · √3. If you write the exponent in binary, you can approximate any such power as a product of various repeated square roots.
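That recipe can be sketched in code: walk the fractional exponent's binary digits and multiply in successive square roots of the base (a rough illustration; the function is hypothetical and handles non-negative exponents only):

```python
# Illustration of the parent's point: with the exponent's fractional part
# written in binary, base**exponent is a product of repeated square roots.
import math

def pow_by_sqrt(base, exponent, bits=40):
    int_part = int(exponent)
    frac = exponent - int_part
    result = 1.0
    for _ in range(int_part):    # whole part: repeated multiplication
        result *= base
    root = base
    for _ in range(bits):        # fractional part, one binary digit at a time
        root = math.sqrt(root)   # base^(1/2), base^(1/4), base^(1/8), ...
        frac *= 2
        if frac >= 1:            # this binary digit of the exponent is 1
            frac -= 1
            result *= root
    return result

print(pow_by_sqrt(3, 2.5))  # 3 * 3 * sqrt(3) ≈ 15.5884573
```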

> how the heck they worked out these log values in the beginning

If you're really curious about the very beginning, here's an English translation of Briggs's (1624) book: https://17centurymaths.com/contents/albriggs.html

And here are English translation of Napier's (1614, 1619) books: https://17centurymaths.com/contents/napiercontents.html


Wikipedia has a concise enough explanation of Napier's invention and its peculiarity compared to the modern definition [1].

[1] https://en.wikipedia.org/wiki/History_of_logarithms#Napier


Of course Wikipedia has a page titled "History of logarithms"! I should have just looked there first. :-)


A better source is Cajori's (1913) 7-part history of logarithms in the American Mathematical Monthly: 20(1) 5–14 https://www.jstor.org/stable/2973509, 20(2) 35–47 https://www.jstor.org/stable/2974078, 20(3) 75–84 https://www.jstor.org/stable/2973441, 20(4) 107–117 https://www.jstor.org/stable/2972960, 20(5) 148–151 https://www.jstor.org/stable/2972412, 20(6) 173–182 https://www.jstor.org/stable/2973069, 20(7) 205–210 https://www.jstor.org/stable/2974104


A few years ago, I wrote a summary of Napier's conception and construction here: https://math.stackexchange.com/questions/47927/motivation-fo...


The basic idea of how to generalise "repeated multiplication" to the whole real continuum is to think about multiplication as a kind of scaling.

Going from 2^n to 2^(n+1) is ultimately about scaling the result up by a factor of x2. So, 2^(n+0.5) ought to be about scaling up halfway to x2, so that if you were to repeat that operation again, you got x2; and that's precisely sqrt(2). That reasoning gets you all the rationals. Since rationals are dense in the reals, impose continuity and you get all reals. This is the root of the algebraic definition of exponentials.

But where there are reals there is calculus, and we also get a lot of insight from a differential definition. The key insight that leads to this definition is that if we could break down the exponential to 2^(n+epsilon) for a very small epsilon, we could stack up O(1/epsilon) of them to get wherever we like on the real continuum. So it makes sense that the definition dy/dx = a.y should produce the same function.
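Numerically, that "stack up O(1/epsilon) small steps" picture is just Euler's method applied to dy/dx = y·ln(2); a rough sketch (the step count n is arbitrary):

```python
# The "stack up O(1/epsilon) infinitesimal steps" picture, taken literally:
# Euler's method on dy/dx = y*ln(2) starting from y(0) = 1 recovers 2^x.
import math

def two_to_the(x, n=1_000_000):
    step = x / n                      # epsilon
    y = 1.0
    for _ in range(n):
        y += y * math.log(2) * step   # dy = y * ln(2) * dx
    return y

print(two_to_the(0.5), 2 ** 0.5)  # converges to sqrt(2) as n grows
```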

This notion of "break down an operation into infinitesimal bits that stack up" can be taken seriously and formalised, and if you do that you end up with the theory of Lie groups and Lie algebras.


A very good exposition of this topic can be found in the Feynman lectures

https://www.feynmanlectures.caltech.edu/I_22.html


Related:

Hartzler, "A two-and-one-half-place logarithm table", The Mathematics Teacher, Vol. 53, No. 3 (Mar 1960) https://www.jstor.org/stable/27956101

Bayer, "Setting up an approximate antilog table", The Mathematics Teacher, Vol. 55, No. 3 (Mar 1962) https://www.jstor.org/stable/27956560

Doerfler, "Dead Reckoning: Calculating Without Instruments", Taylor Trade Publishing (Sep 1993) https://www.amazon.com/Dead-Reckoning-Calculating-Without-In...


My grandfather was a civil engineer for Maricopa County in Arizona (USA, dry desert, last contiguous state to join the union, in 1912 I believe) from 195[4-9] until about 1998. When I started working on advanced algebra, he told me, while we were driving out to go fishing, that they used to have the 2 interns from ASU (Arizona State University) each find a different palo verde tree and calculate, by hand, the log they would use for the bend of a curve for the new road. If they got different answers, they'd both have to go back and redo it. I was astonished, because I'd always been taught to just push the log button on the calculator. Pretty crazy how things have changed, even in the curriculum. Thanks for the shared article.


It appears Henry Briggs had used repeated square roots to compute logarithms: https://archived.hpcalc.org/laporte/Briggs%20and%20the%20HP3...

Mr Briggs had computed the first table of base 10 logarithms for the first few thousand natural numbers: https://en.wikipedia.org/wiki/Henry_Briggs_(mathematician)
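The repeated-square-root trick rests on the fact that after k square roots, x^(1/2^k) − 1 ≈ ln(x)/2^k. A floating-point sketch of the idea, not Briggs's actual hand procedure (names and the choice of k are mine):

```python
# Repeated-square-root logarithm: after k square roots, x^(1/2^k) is so
# close to 1 that x^(1/2^k) - 1 ~= ln(x)/2^k.
import math

def ln_by_sqrt(x, k=25):
    y = x
    for _ in range(k):
        y = math.sqrt(y)    # y = x^(1/2^k) after the loop
    return (y - 1) * 2**k   # first-order recovery of ln(x)

def log10_by_sqrt(x, k=25):
    # base-10 log as a ratio of two natural-log estimates
    return ln_by_sqrt(x, k) / ln_by_sqrt(10, k)

print(log10_by_sqrt(2))  # ≈ 0.30103
```

Pushing k too high backfires in floating point: y − 1 shrinks toward machine epsilon and cancellation eats the answer, which is one reason Briggs's hand computation to 14 places was such a feat.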


I enjoyed a couple of the footnotes!

> “Euler” is pronounced “oiler,” which is why virtually every sports team assembled by the University of Alberta’s math department has been named “the Edmonton Eulers.”

"Euler’s mathematical brilliance" is footnoted: "I’m not exaggerating. I’ve taken no less than 13 University level math courses, and Euler’s name showed up in at least 10 of them. The man’s contributions to mathematics are probably more significant than Einstein’s contributions to physics. Find him on Wikipedia when you have a minute or thirty."


If you’re a mathematician and you have a conjecture, constant, equation, function or theorem named after you, you can be proud of yourself.

If you’re Euler, that’s 3 conjectures, 11 equations, 9 formulas, 4 functions, 2 identities, 10+ constants, 11 theorems, and 2 laws, some of them shared with others (https://en.wikipedia.org/wiki/List_of_things_named_after_Leo...)

That includes https://en.wikipedia.org/wiki/Euclid–Euler_theorem, which must have been one of the longest-running ‘cooperations’ on a theorem in history (Euclid died around 270 BC; Euler lived 1707-1783, and his proof was published posthumously).


What are the top 3 mathematicians when ranked by the number of sports teams named after them? ;)


> all logarithm tables for three hundred years were borrowed from Mr. Briggs’ tables by reducing the number of decimal places. Only in modern times, with the WPA and computing machines, have new tables been independently computed.

https://www.feynmanlectures.caltech.edu/I_22.html#:~:text=Bu...


What, did you guys forget your slide rules at home?


I've always kinda wanted to buy an actual slide rule, just for the lulz of it. Haven't gotten around to it yet. But one of these days...



