To compute a constant of calculus (2019) (perl.org)
53 points by tosh 65 days ago | 21 comments



Disappointing; I did not see the extremely easy and quite efficient method of calculating e through continued fractions.

  S = [2, 1, 2]  # continued fraction of e: [2; 1, 2, 1, 1, 4, 1, 1, 6, ...]
  T = 0

  for P in range(200):
    S += [1, 1, 4 + 2*P]

  # fold the fraction from the innermost term outwards
  for x in range(len(S) - 1):
    T = 1 / (T + S[-(x + 1)])

  print(S[0] + T)  # value of e


There are many ways to compute e. :)

(The article's original title -- "To compute a constant of calculus: A treatise on multiple ways" -- described its purpose better than the truncated one submitted here on HN. I see posters suggesting various solutions. Perhaps someone will create a public repo based on the article and then folk can add solutions.)

> did not see the extremely easy and quite efficient method of calculating e through continued fractions.

(That could be misinterpreted by readers of this thread who have not read the OP. The OP includes several continued fractions, including ones that seem simpler to me than the one you've shown, though that might be because I'm no expert.)

I'm curious: what name is normally given to the fraction you showed? And are you saying it is more efficient than the Brothers' series?


You made me curious. It's easy to see that the Maclaurin series converges quickly by the nature of the terms.

Since e ≈ 2.7 1828 1828 459, it is very close to the rational number 271,801/99,990 = 2.7(1828). A continued fraction expansion should pick up on that approximation quite quickly, which will make the expansion look efficient. Will the continued fraction continue converging as quickly if you keep going past that particular rational approximation?


Yes. Continued fractions offer the best possible rational approximations, for some definition of distance which I don't remember at this moment.

Moreover, they are uncommonly good if you truncate just before a large integer. So for π, 22/7 is a great approximation because the next number in the continued fraction is 15, but it is blown out of the water by 355/113, which is accurate not merely to the ~0.3% you might expect from the denominator, but to 0.000008%. This comes from the splendidly large next integer, 292.

e is more boring, the continued fraction is [2; 1, 2, 1, 1, 4, 1, 1, 6, ...], and that pattern indeed repeats forever. So you do keep getting these large integers, and they keep getting larger. That first one was 19/7 (0.1%), then 193/71 (0.001%), then 2721/1001 (0.000004%), then 49171/18089, which has about the same error as your approximation, but less than 1/5 of the denominator. So the continued fraction doesn't pick that one up because as far as the continued fraction is concerned it's not very good :D
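The convergents and relative errors quoted above are easy to check with the standard convergent recurrence; here is a quick sketch in Python (variable names are mine, the constants are just the ones quoted above):

```python
from fractions import Fraction
import math

# continued fraction of e: [2; 1, 2, 1, 1, 4, 1, 1, 6, 1, 1, 8, ...]
terms = [2, 1, 2]
for k in range(4, 20, 2):
    terms += [1, 1, k]

# convergent recurrence: p_n = a_n*p_{n-1} + p_{n-2}, same for q
p0, q0, p1, q1 = 1, 0, terms[0], 1
convergents = [Fraction(p1, q1)]
for a in terms[1:]:
    p0, q0, p1, q1 = p1, q1, a*p1 + p0, a*q1 + q0
    convergents.append(Fraction(p1, q1))

for c in convergents:
    print(c, abs(float(c) - math.e) / math.e)
```

Running this reproduces 19/7, 193/71, 2721/1001, 49171/18089, ... with the error shrinking at each step.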

Side note: It turns out that the above computational approach of "build continued fraction and then evaluate it" is not necessary and you can actually stream these. So you can take a + 1/(b + 1/(c + 1/x)) and rephrase it as sort of f_a(f_b(f_c(x))). Simplifying by just looking at

> f_c(m/n) = c + n/m = (c m + 1 n)/(1 m + 0 n)

we can rewrite this as a matrix product,

    [ a 1 ] * [ b 1 ] * [ c 1 ] * [ m ]
    [ 1 0 ]   [ 1 0 ]   [ 1 0 ]   [ n ]
and the nice thing about matrices is that their multiplication is associative, so we can compute front-to-back instead of back-to-front:

    Prelude> (a, b, c, d) .* (w, x, y, z) = (a*w + b*y, a*x + b*z, c*w + d*y, c*x + d*z) :: (Integer, Integer, Integer, Integer)
    Prelude> (2, 1, 1, 0) .* (1, 1, 1, 0) .* (2, 1, 1, 0) .* (1, 1, 1, 0) .* (1, 1, 1, 0) -- starting point
    (19,11,7,4)
    Prelude> let matrices = scanl (\matrix n -> matrix .* (n,1,1,0) .* (1, 1, 1, 0) .* (1, 1, 1, 0)) (19, 11, 7, 4) [4,6..]
    Prelude> take 5 matrices
    [(19,11,7,4),(193,106,71,39),(2721,1457,1001,536),(49171,25946,18089,9545),(1084483,566827,398959,208524)]
    Prelude> 
    Prelude> :m +Data.Ratio
    Prelude Data.Ratio> take 8 $ map (\(a, b, c, d) -> a % c) matrices
    [19 % 7,193 % 71,2721 % 1001,49171 % 18089,1084483 % 398959,28245729 % 10391023,848456353 % 312129649,28875761731 % 10622799089]
note that this is streaming due to the scanl, so it'll compute the next term in O(1) given the last one. (Also note that there are some other "best approximations" hidden in there, because I am consuming the expansion in [4, 1, 1], then [6, 1, 1], then [8, 1, 1] chunks.)
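For readers who don't speak Haskell, the same streaming matrix trick transliterates into Python; a sketch (names are mine), using the same (a, b, c, d) tuple encoding of [[a, b], [c, d]]:

```python
from fractions import Fraction

def mul(A, B):
    # 2x2 integer matrix product; (a, b, c, d) encodes [[a, b], [c, d]]
    a, b, c, d = A
    w, x, y, z = B
    return (a*w + b*y, a*x + b*z, c*w + d*y, c*x + d*z)

def e_cf_terms():
    # continued fraction of e: 2, 1, 2, 1, 1, 4, 1, 1, 6, ...
    yield 2; yield 1; yield 2
    k = 4
    while True:
        yield 1; yield 1; yield k
        k += 2

def approximants(n):
    # fold one term at a time, front-to-back, keeping only one matrix
    M = (1, 0, 0, 1)  # identity
    out, terms = [], e_cf_terms()
    for _ in range(n):
        M = mul(M, (next(terms), 1, 1, 0))
        out.append(Fraction(M[0], M[2]))
    return out

print(approximants(14)[-1])  # should land on 49171/18089
```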

Side side note: it also follows that the most irrational number is 1 + 1/(1 + 1/(1 + 1/(1 + ...))) for this metric of distance, as it always has the smallest possible number in its continued fractions; its approximants are never "very good". Solving for φ = 1 + 1/φ and some quadratic formula reveals this as the golden ratio φ = (1 + √5)/2. If you do this process you will find that the matrix always contains Fibonacci numbers and this matrix [1, 1; 1 0] kind of encodes the Fibonacci recurrence directly.
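That Fibonacci claim is easy to see by just multiplying the matrix out; a small sketch (function name is mine):

```python
def fib_matrix(n):
    # repeatedly right-multiply the identity by [1 1; 1 0],
    # i.e. fold the all-ones continued fraction [1; 1, 1, ...]
    a, b, c, d = 1, 0, 0, 1
    for _ in range(n):
        a, b, c, d = a + b, a, c + d, c
    return (a, b, c, d)

print(fib_matrix(10))  # consecutive Fibonacci numbers: (89, 55, 55, 34)
print(fib_matrix(30)[0] / fib_matrix(30)[2])  # ratio tends to φ ≈ 1.618...
```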


> Solving for φ = 1 + 1/φ and some quadratic formula reveals this as the golden ratio φ = (1 + √5)/2

  x = 1 + 1/x
  x - 1 - 1/x = 0
  x^2 - x - 1 = 0          (multiply through by x)
  x = (1 ± √(1 + 4)) / 2 = (1 ± √5) / 2


This is a lovely comment! I just came by to say thanks for it :)


Super cool article. Brownie points for the callback at the end to "To Compute a Constant of Calculus: (A treatise on multiple ways)" :^)


The code in this post is in Raku[0] — pretty much the perfect language for expressing something in "multiple ways". There's More Than One Way To Do It, as we like to say.

[0]: https://docs.raku.org


I just skimmed through, but I didn’t see the method I expected to find: Use the Taylor series for f(x) = e^x around f(0) to compute e = f(1). Should be pretty efficient and straightforward, right?


>> but I didn’t see the method I expected to find: Use the Taylor series for f(x) = e^x around f(0) to compute e = f(1)

Is that the one where e = 1 + 1/1! + 1/2! + 1/3! + ... ?

Steve Wozniak used that method to compute several thousand digits of e on an Apple II back in the day. I think Byte ran an article on it.

Super easy because each term is just the previous term divided by N.
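That "divide the previous term by N" trick looks like this with exact rationals (a sketch, not Woz's actual program):

```python
from fractions import Fraction

term = Fraction(1)   # the n = 0 term, 1/0! = 1
total = Fraction(1)
for n in range(1, 20):
    term /= n        # 1/n! is just the previous term divided by n
    total += term

print(float(total))  # correct to double precision after 20 terms
```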


Yes, that's the Taylor series.


It's included (he calls it Newton's), and he follows it with the even better Brothers' series.


Yes, the Taylor series converges very quickly, but for some reason the author insists on only using the problem that originally introduced Bernoulli and Euler to the number e.


The author included not only Taylor's (as Newton's) but also the superior Brothers' series.


They did. They called it Newton's series.


Reading this reminds me of Leibniz's formula for pi:

Although it computes the value correctly, it is almost completely useless on a computer. All the terms of the series are reciprocals of odd numbers, so you accumulate rounding error on every term.


Are you saying that because (typically) arbitrary precision rationals are too slow, and floating point yields errors, or for some other reason?

(Raku supports arbitrary precision rationals, and they're normalized to minimize the denominator size, and remain pretty performant until the denominator exceeds 64 bits, so I'm thinking they might do OK for correctly computing transcendentals to maybe 15 decimal digits or so.)


I'm referring to the float registers, not arbitrary precision. So, 'math.pi' essentially.

You probably already know this, so I'll apologise in advance: my understanding is that a float can only represent a value smaller than one exactly if it is a sum of powers of 1/2 (i.e. the denominator is a power of two). Therefore the reciprocal of any odd number will carry an error (bar the first term, which is 1), since odd numbers other than 1 don't fall into that category.

I tried using this series years ago to calculate pi. It was a total disaster; the final value was way off.


Right, yes, for continued fractions / convergent series, floats are almost certainly going to be a complete disaster whereas if only integer division is involved, as is the case with this series, then arbitrary precision rationals are going to be exact, and often sufficiently performant for most such series for results up to about 15 decimal digits or so.
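For instance, a sketch of the Leibniz series summed with exact rationals: there is no rounding error at all, and the remaining gap from π is purely the truncation error of this slowly converging series, not float error:

```python
from fractions import Fraction
import math

# Leibniz: pi/4 = 1 - 1/3 + 1/5 - 1/7 + ... , summed exactly
s = Fraction(0)
for k in range(1000):
    s += Fraction((-1)**k, 2*k + 1)

pi_approx = 4 * s
print(float(pi_approx))  # still off by roughly 1/1000 after 1000 terms
```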

Damian demonstrates this difference between using floats and arbitrary precision rationals when he shows code for computing Bernoulli numbers in his "On The Shoulders Of Giants" presentation.[0]

(Btw, if you care about programming and appreciate a combination of total technical and geek mastery with standup comic delivery, the whole thing is like watching a great movie, whether you watch it from the "start" at the 5 minute mark, or skip forward to the 45 minute mark for the 15 minute Quantum Computing For Beginners section.)

[0] https://www.youtube.com/watch?v=Nq2HkAYbG5o#t=11m09s


It really is a great presentation. I sent Damian an email after watching it and he was also extremely approachable and took the time to answer my questions and talk about the future of raku (at that time Perl 6).


Thank you for the link, I might start watching at the beginning.




