How Many Decimals of Pi Do We Really Need? (2016) (nasa.gov)
149 points by totaldude87 on Sept 28, 2020 | 147 comments



The article seems a bit backward to me. The reason they use 3.141..793 is because that's the value of pi in double precision floating point. And that was set by the Intel 8087 coprocessor. So to me, the real answer is that Intel (and Kahan) decided on the number of decimals in 1980, and NASA uses that because it's the standard.

On the other hand, in the 1960s NASA figured out exactly how many bits of precision they needed to get to the Moon and came up with the 15-bit Apollo Guidance Computer; 14 bits wasn't enough and 16 bits was more than they needed. (The computer supports double precision and triple precision for tasks that needed more.)

The point is that in the 1960s, aerospace systems would carefully consider exactly how many bits they needed and build systems with bizarre numbers of bits like 15 or 27. Now, they use standard systems, which generally have way more accuracy than needed.


I think you are misreading the article, or at least taking an oddly limited "developer-only" view instead of considering the audience. NASA is not trying to answer your question about how many digits of pi they use. They are trying to answer a non-technical person on Facebook (likely a kid) who is wondering, vaguely, if NASA needs to use a highly precise representation of pi to fly spaceships, or if something coarse will work.

The answer the kid is looking for: NASA's most precise representation of pi is more precise than 3.14, but nowhere close to 500 digits like the question suggested. 15 is more than enough for most engineers at NASA, and anything that an astronomer might conceptually want to do would take at most 40 digits of pi to do with almost arbitrary precision. The fact that the current representation is architecturally convenient for modern FPUs is basically immaterial to the person's question, even if that's interesting for people with detailed knowledge about such things.


They're still only answering half the question. Okay, 15 digits is more than enough. But how much more? What would the minimum be, and why are we using more than that?

Ideally the article would talk about what "more than enough" looks like (which it does), but also what minimums look like (which it doesn't), and then mention that they chose that specific size because most computers do two specific sizes really fast and that's the more accurate of the two.


I wish there were more details about the historical context in this.

I recently went down a rabbit hole of trying to implement a cosine function from scratch and found that for most applications where I use cosine (low-resolution graphs or 2d games), I need a shockingly low level of precision. Even four decimals was overkill!

For those that are interested, you can read about my adventures with precision and cosine: https://web.eecs.utk.edu/~azh/blog/cosine.html


If you are really interested in approximating a cosine on the cheap with high precision, you should look into approximating polynomials.

The Taylor expansion produces approximating polynomials that aren't that good.

For instance, if you were to ask which degree-4 polynomial best approximates cos(x), you wouldn't end up with 1 - x^2/2 + x^4/24.

In fact, this polynomial is 0.99958 - 0.496393 x^2 + 0.0372093 x^4; it pretty much coincides[2] with cos(x) on the interval (-π/2, π/2); the error is an order of magnitude smaller than with the Taylor polynomial (see [3] vs [4]).

How to do this? Linear algebra[1].

See, the polynomials form a Hilbert space (a vector space with an inner product), where a decent choice of one is

    <f(x), g(x)> := \int_{-π/2}^{π/2} f(x) g(x) dx 
Do Gram-Schmidt on 1, x, x^2, ... to obtain an orthonormal basis, then use the inner product to compute the projection onto the first d+1 basis elements to obtain the best degree-d polynomial approximation. Voila!

Linear Algebra saves the day yet again.
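For anyone who wants to try this without doing the Gram-Schmidt by hand, here's a minimal sketch in Python (NumPy and SciPy assumed) that computes the same projection by solving the normal equations for the monomial basis; it should reproduce coefficients close to the ones quoted above:

    import numpy as np
    from scipy.integrate import quad

    a, b = -np.pi / 2, np.pi / 2
    deg = 4

    def inner(f, g):
        """The inner product <f, g> = integral of f*g over (-pi/2, pi/2)."""
        return quad(lambda x: f(x) * g(x), a, b)[0]

    monomials = [lambda x, k=k: x ** k for k in range(deg + 1)]

    # Projection of cos onto span{1, x, ..., x^4}: solve the normal equations
    # G c = r with G_ij = <x^i, x^j> and r_i = <cos, x^i>.
    G = np.array([[inner(p, q) for q in monomials] for p in monomials])
    r = np.array([inner(np.cos, p) for p in monomials])
    coeffs = np.linalg.solve(G, r)

    print(coeffs)  # odd coefficients ~0; even ones near 0.99958, -0.496393, 0.0372093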

[1] https://www.math.tamu.edu/~yvorobet/MATH304-503/Lect4-04web....

[2] https://www.wolframalpha.com/input/?i=plot+0.0372093+x%5E4+-...

[3] https://www.wolframalpha.com/input/?i=plot+abs%281-x%5E2%2F2...

[4] https://www.wolframalpha.com/input/?i=plot+abs%281-x%5E2%2F2...


Finding the least-squares best polynomial doesn’t get you the minimum possible worst-case error though, or the minimum worst-case relative error. For those you can use the Remez exchange algorithm, https://en.wikipedia.org/wiki/Remez_algorithm

And if you look at the NASA low-precision number routines, you’ll see that’s exactly what they did.


It is important to note two things, though, for those following along:

1. Approximations using orthogonal polynomial bases (the least squares method, and more generally the Chebyshev methods) are, for transcendental functions, typically as accurate as Remez exchange (including with range reduction) to 16 or so digits of precision. Remez exchange is more difficult to implement correctly than the simple linear algebra methods, the latter of which rely "only" on doing a comparatively simple change of basis. Practically speaking you gain nothing from using Remez exchange instead of Chebyshev to approximate exp(x), for example (see the sketch at the end of this comment).

2. The Remez exchange (and more generally, the MiniMax methods) does not guarantee anything beyond minimizing the worst case error. For many practical applications you don't care about worst case error, you care about average case relative error. It's not uncommon for the Remez exchange to produce polynomials which actually have worse average case relative error.

This is also covered in considerable depth in Trefethen's Approximation Theory and Approximation Practice.
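To make point 1 concrete, a minimal sketch (Python/NumPy assumed; exp(x) on [-1, 1] is chosen arbitrarily as the smooth target): plain interpolation at Chebyshev points is already within a whisker of what Remez exchange could buy you at this degree.

    import numpy as np

    deg = 8
    nodes = np.cos(np.pi * np.arange(deg + 1) / deg)        # Chebyshev (Lobatto) points
    coeffs = np.polynomial.chebyshev.chebfit(nodes, np.exp(nodes), deg)

    x = np.linspace(-1, 1, 10001)
    err = np.abs(np.polynomial.chebyshev.chebval(x, coeffs) - np.exp(x))
    print(err.max())   # roughly 1e-8 at degree 8; a few more degrees reaches machine precision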


Here’s a relevant example: http://www.chebfun.org/examples/approx/EightShades.html

Here are the first 6 chapters of ATAP http://www.chebfun.org/ATAP/atap-first6chapters.pdf

Also cf. https://people.maths.ox.ac.uk/trefethen/mythspaper.pdf

* * *

I’d say the bigger point is not about whether the error is marginally better or worse with a Chebyshev series vs. CF vs. Remez, etc., but rather that any of these will usually beat the others if you add just one more coefficient.

Usually convergence speed matters more than picking a number of coefficients and then exactly optimizing to the last bit.

And as you say, running the Remez algorithm is probably a waste of time if you are trying to calculate a near-optimal approximation on the fly at runtime.


Sure, and thanks for the link!

The least-squares solution is just a step-up from Taylor series that doesn't require anything deeper than the notion of an inner product (the parent comment I was responding to didn't go beyond Taylor).


Here was Apollo’s sine/cosine code, a 5th degree polynomial which clearly had its coefficients optimized for minimax relative error (presumably by the Remez algorithm):

https://fermatslibrary.com/s/apollo-11-implementation-of-tri...

Here’s a comparison between its relative error vs. a 5th degree Taylor series (if you use high precision arithmetic; I’m sure on the machine itself rounding errors made things a bit choppier). In the worst case for the Taylor series the relative error is about 45 times lower in the optimized polynomial:

https://www.desmos.com/calculator/aelh9kf9ms


If anyone wants to do the thing romwell describes without an avalanche of math, try this one-liner:

    bash$ gnuplot -e 'set format x "%.15f"; set format y "%.15f"; set samples 100000; set table "cos.dat"; set xrange [-3*pi/8:3*pi/8]; plot cos(x); f(x) = a + b*x**2 + c*x**4; fit f(x) "cos.dat" using 1:2 via a,b,c'
It then gives you "optimized" values for a/b/c from your template formula, thus resulting in

    cos(x) ~= 0.999923 + -.498826*x*x + .0391071*x*x*x*x
Which is even more accurate but only to about four decimal places.


Here's a Java snippet that may be of interest to you [0] by a user called Riven [1]. It should be considerably faster than the lookup table with LERP (not that it really matters at this point since we're just counting nanoseconds on one hand). I recall going down this rabbit hole somewhere around high school as well, five or so years ago, and ended up brainstorming potential faster implementations on an old Java forum with several users. I believe you have an Intel Core i7-8559U, which, if userbenchmark is to be trusted, leads me to believe the snippet I linked should be in the 3ns range assuming a warm cache for the LUT. Accuracy is configurable based on the sin bits.

[0]: https://gist.github.com/mooman219/9698a531c932c08ccefd86ba7c...

[1]: https://jvm-gaming.org/u/riven/summary


In games it is very common to see lookup tables for trig functions. Not big ones either, maybe a few hundred entries.
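For anyone who hasn't seen one, a minimal sketch in Python (the 256-entry table and the wrap-around convention are arbitrary choices here):

    import math

    TABLE_SIZE = 256
    # One extra entry so the interpolation below never reads past the end.
    SIN_TABLE = [math.sin(2 * math.pi * i / TABLE_SIZE) for i in range(TABLE_SIZE + 1)]

    def fast_sin(x):
        """sin(x) from a small lookup table plus linear interpolation (LERP)."""
        t = (x / (2 * math.pi)) % 1.0 * TABLE_SIZE        # position within one period
        i = min(int(t), TABLE_SIZE - 1)                   # clamp guards a rare wrap-around rounding case
        frac = t - i
        return SIN_TABLE[i] + frac * (SIN_TABLE[i + 1] - SIN_TABLE[i])

    print(fast_sin(1.0), math.sin(1.0))   # agree to within about 1e-4 with 256 entries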


Audio plugin programming makes extensive use of lookup tables for expensive operations. Pre-calculating is a useful technique for such real-time sensitive work (like games that have to render a new frame every 16ms @ 60hz or audio plugins that need to return potentially a buffer every 0.72ms @ 32 samples/44.1kHz).


Calculators actually use lookup tables themselves. I don't know if FPUs do too, but I wouldn't be surprised, as it's faster than series expansion.


Handheld calculators generally use the CORDIC algorithm, https://en.wikipedia.org/wiki/CORDIC
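A minimal sketch of the idea in Python (floats here for readability; the whole point on calculator hardware is that in fixed point the multiplications by 2^-i become shifts, so the loop is all shifts and adds):

    import math

    N = 32
    ANGLES = [math.atan(2.0 ** -i) for i in range(N)]
    # The product of all the 1/sqrt(1 + 2^-2i) scale factors, folded into the start value.
    K = 1.0
    for i in range(N):
        K /= math.sqrt(1.0 + 2.0 ** (-2 * i))

    def cordic_sin_cos(theta):
        """(sin, cos) of theta (|theta| <= pi/2) via CORDIC in rotation mode."""
        x, y, z = K, 0.0, theta
        for i in range(N):
            d = 1.0 if z >= 0 else -1.0
            x, y = x - d * y * 2.0 ** -i, y + d * x * 2.0 ** -i
            z -= d * ANGLES[i]
        return y, x

    print(cordic_sin_cos(0.5))            # compare with:
    print(math.sin(0.5), math.cos(0.5))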


Using the rough equivalence of sin(x) ~= x also works shockingly well for smallish x.


If you have access to modern processors (CPUs, GPUs) then a lookup table makes no sense. The polynomial is faster, more accurate, and needs less space to store precomputed values.


When you peek at the code for computing trig functions in most standard libraries (e.g. C Standard math library in the GNU C compiler), you'll see they typically use a lookup table somewhere in the calculation.

As an example, the LUT will get you in the ballpark of the answer, and then you compute a polynomial to calculate the delta and add the two together.

You can always find a polynomial that is extremely accurate, but it likely will be higher order, etc. A LUT + polynomial is faster. A pure LUT is the fastest, but takes too much memory.
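A minimal sketch of that table-plus-correction structure in Python (the table size and the degree of the correction polynomials are arbitrary here, and real libm code is far more careful about rounding and range reduction):

    import math

    STEP = 2 * math.pi / 256
    # (sin, cos) at 256 coarse grid points; one spare entry for the wrap-around edge case.
    TABLE = [(math.sin(i * STEP), math.cos(i * STEP)) for i in range(257)]

    def sin_lut_poly(x):
        """sin(x): coarse table lookup, then a short polynomial for the remainder."""
        t = (x % (2 * math.pi)) / STEP
        i = int(t)
        d = (t - i) * STEP                 # remainder, |d| < STEP ~ 0.0245
        s0, c0 = TABLE[i]
        sin_d = d - d ** 3 / 6             # tiny Taylor polynomials are enough for |d| this small
        cos_d = 1 - d ** 2 / 2
        return s0 * cos_d + c0 * sin_d     # angle addition: sin(x0 + d)

    print(sin_lut_poly(1.0), math.sin(1.0))   # agree to roughly 1e-8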


I wouldn't be shocked if lookup tables win on massively out-of-order CPUs. Of course, I also wouldn't be surprised if it is the out-of-order nature that makes the polynomial faster.

Would be interesting to see benchmarks. On to my list of things I have a low chance of completing...


Once you take SIMD into account, lookup tables frequently lose. Vector gathers are not cheap at all.


Is there a well-optimised library out there for extremely fast low-precision trigonometric functions? Could it make practical sense to do this?


Yeah, the classic 3D game Elite (1984) uses the small angle approximation throughout, no look-up tables at all (except for drawing the planets).

https://www.bbcelite.com/explore/articles/deep_dive_pitching...


This seems unlikely - if a specific number of bits in a standardized floating-point representation implied relevant calculation errors, NASA would certainly not use it.

Sure, if 24 bits were already sufficient, they would not go out of their way to avoid the extra 8 bits. So in that sense you're right of course. But it's not just "Hey, single precision floats are 32 bits, so why don't we just use that!"


The article doesn't try to justify the exact number of decimal places - and, by eye, the arguments it uses are likely to work to 14 decimal places as well, since the error would be similarly small.

Instead, it tries to answer the thrust of the prompt question: given the massive numbers used in spaceflight, is pi calculated to the greatest possible practical accuracy? Going over the history of the 15-digit version would divert from the interesting part of the article (the effect of precision on calculation) and dilute a nice teachable moment.

Though that fact about the Apollo Computer would make an interesting part of a follow-up.


The Apollo Guidance Computer wasn't 15 decimal places; it was 15 bits. The point is that they didn't use power-of-two word sizes back then; they used whatever word size fit the problem, even though that now seems bizarre.


But the number in the article is to 15 decimal places. Pointing out that that precision comes from the size of the modern double-precision floating point representation doesn't really answer why that representation is enough.


It's a fair point. They would use more decimal places than that if it were necessary, regardless of whatever a double-precision floating point does. Since it's not necessary, the double is adequate (and already widely available by default across programming languages and system architectures).


Double precision from IEEE 754 is 64 bits, of which 53 bits are the significand.

X87 uses an 80-bit format with functionally a 63-bit significand (it actually has a 64-bit mantissa, but that doesn't actually gain you anything, and adds many terrible edge cases).

They use double precision values because that presumably provides more precision than they need, but 32 bit float would be woefully inadequate.

They could use more (x87 or software float for instance) but there's not really any advantage to the increased precision you get, and there are many downsides.


The coprocessor uses 80-bit floats, that’s more than double precision. It even has a special instruction, fldpi, to get π in the full precision: https://www.felixcloutier.com/x86/fld1:fldl2t:fldl2e:fldpi:f...


It does however completely bollocks up all the subsequent trigonometry functions. Even intel says not to use them.

Also alas, 80bit is a bit of a misnomer - when comparing to the more regular ieee754 representations, it’s actually fp79 :D


Intel's Manual says to use FLDPI. The x87 FPU represents pi internally at extended-extended-precision, so it's more like fp82 :D Were you thinking about the argument reduction gotcha? http://galaxy.agh.edu.pl/~amrozek/AK/x87.pdf


Fldpi only loads the register, rounding according to the current rounding mode, so you still only have 63 bits of precision when you load the value.

For intermediate calculations it incorporates the internal extended pi via the GRS bits.

And yeah the train wreck of argument reduction was my reference


Here is the insane thing about pi: any digit of pi can be calculated directly in hexadecimal.

https://en.m.wikipedia.org/wiki/Bailey%E2%80%93Borwein%E2%80...

Like, you can compute digit 10billion without knowing any of the previous digits (in hex..)

If this doesn't prove the matrix then I do not know what does ;-)
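For the curious, a minimal Python sketch of the digit-extraction trick (fine for modest positions; rounding in the float partial sums eventually becomes a concern for very large n, which is where the serious implementations take more care):

    def pi_hex_digit(n):
        """The n-th hexadecimal digit of pi after the point (n = 1, 2, ...),
        via the Bailey-Borwein-Plouffe formula, without computing earlier digits."""
        def series(j):
            s = 0.0
            for k in range(n):                      # the 'head', kept mod 1 via modular exponentiation
                s = (s + pow(16, n - 1 - k, 8 * k + j) / (8 * k + j)) % 1.0
            k = n                                   # a short, rapidly shrinking tail
            while True:
                term = 16.0 ** (n - 1 - k) / (8 * k + j)
                if term < 1e-17:
                    return s
                s += term
                k += 1
        frac = (4 * series(1) - 2 * series(4) - series(5) - series(6)) % 1.0
        return int(16 * frac)

    print([pi_hex_digit(i) for i in range(1, 9)])   # [2, 4, 3, 15, 6, 10, 8, 8] = 0x243f6a88...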


While this is true, the fact remains that it gets harder and harder to calculate (as far as we know). You can see in the given formula that the base-16 multiplicand is not an integer, so it "bleeds" into lower digits and consequently also receives interference from higher digits (which themselves also have interference, and so on, although the dependence converges long before reaching the origin).

There are analogies of the digits of pi to sort of infinite libraries, kind of an evolving random universe of numbers. But the digits of pi are not a local (fixed-size) function of the previous digits, so the analogy to a physical simulation (or cellular automaton) isn't quite right. But the idea of a cellular automaton with varying locality (in this case increasing radius of interaction) is itself quite interesting to me.

(in this case it's a logarithmic neighborhood and the rule is chaotic for the initial conditions)


Now that I think about it, it should be obvious the size of dependency cannot be fixed, otherwise pi would be periodic! (there are finitely many fixed size 'parents', so it must recur)

I should also note that the straightforward interpretation in this case is that of a temporal neighborhood for a Cellular Automaton! That is, dependence of several states back in time, and 0 space dimensions. You can also think of a 1D CA if you introduce a special state that signals the "expansion" of the digits of pi (which digit we're currently expanding)

This also enlightens me in the bizarre concept of multiple time dimensions. If you start with a 2D field, and use the same technique of keeping track of the current active expanding cells (i.e. "current time"), starting from a single active cell (time 0) in a top-left corner, then you can expand cells across a diagonal, and they depend on previous states in two different directions.


And, of course, Fabrice Bellard found a faster version of this formula in his copious spare time: https://en.m.wikipedia.org/wiki/Bellard%27s_formula


In between making tv transmitters with video cards, winning ioccc’s, booting Linux in browsers, ...

The guy is a mad man. I pity his keyboard.


How utterly strange. Thanks for that.


Quite. And, it has me wondering.

Would that not mean that we then have just as quick of a way to calculate an arbitrary decimal digit of pi? Hexa and dec feel geometrically close enough for any dec digit to be fully "covered" by a couple calculations of contiguous hexa digits.


From what I've read, it's not equally easy, and nobody has figured out a way yet, including the really smart people who came up with the hex algorithm.

Maybe there is something special about a power of two base.


How deliciously intriguing. I'll have to take some time and try to sink my teeth into this one.. someday :)

e: Although alas, after an ounce of consideration of some blown up exponents for 10 and 16, it feels a little(read: much more) daunting to find a clean, mechanical conversion.


Does this exemplify that there can be unique properties of a base?


> any digit of pi can be calculated directly in hexadecimal

Curious what you mean by "directly". How many FLOPs does it take to compute the n-th hexadecimal?



If I'm reading that right, it's O(n) to compute the n-th hexit.


so that isn't really directly. Directly is O(1)


Directly doesn't mean immediately here. It means without computing intervening digits.


What is it about hexadecimal that makes this happen?


How do you verify if it’s correct?


Once you guess the formula, the proof is just a few lines of calculus. It's accessible to anyone who has taken Calc I, I think. The full text .pdf of the original paper [0] is freely available; the result is Theorem 1, whose proof is on pages 2 and 3.

I believe the formula was found by computer search, but my memory could be failing me. The three people the result is named after are all well-known for computer-assisted mathematics, for instance using the PSLQ algorithm [1].

[0]: https://www.ams.org/journals/mcom/1997-66-218/S0025-5718-97-...

[1]: https://en.wikipedia.org/wiki/Integer_relation_algorithm


Back in college, my roommate was working on a program to compute Pi to 900 places or so. This was looong before you could google to find out how to do it, so he was inventing it.

He had a prototype that would calculate about 100 digits or so. I asked him how he knew it was correct, and he said in high school he was going for the Guinness world record in memorizing Pi, and he simply knew it was correct. (By the time he was ready to break the record, someone else showed up with having memorized a couple thousand places or so, and he gave up.)

We were allotted strict computer time limits on our accounts on the PDP-10. He figured he could get a thousand digits on the remaining time on his account at the end of the year, and set it up to run overnight.

The program calculated the digits, but had some disk error and writing the output file failed. He couldn't rerun it because he had no computer time left, and that was that.


Back in the late 90's me and a friend were memorizing PI just for fun. I got to 300 digits. Based on how long it took me to recite those 300 digits, we estimated that the world record holder at that time spent about 16 hours just reciting the numbers. I can't imagine the time they must have put into memorizing it.


When my daughter was a teenager, she could recite Pi to 100 digits. I told her that it meant that she could measure a circle the size of the known universe to within the diameter of a proton. :)


I just memorize 3.14159265, since the following digits are 35 (I'll never be able to un-remember this now). My error is like 1 in 10^9, and I've never used Pi in any context where that was even close to mattering. The most accurate calculations I ever had to do in software needed about 1um in a meter accuracy... so I have 3 orders of magnitude of accuracy margin. Never tried to get a spaceship to the edge of the universe with any accuracy though ;) I prefer to use built-in values usually, since they are typically accurate to the full precision, but on platforms where for some reason that wasn't available... Of course the specific calculations you're doing also matter, since small errors can accumulate to larger ones if you're not careful...

Another personal anecdote is that in a company that shall remain unnamed we used to have a Pi from memory competition every Pi day (3/14). The president of the company always won. I don't recall how many digits he knew but it was some ridiculously high number (hundreds). I much prefer my family tradition of eating pie on Pi days.


When ytmnd was all the rage I had https://pi.ytmnd.com/ open for way too long and it unintentionally burned the first 9 digits into my memory.


The source is one of my favorite music videos. https://www.youtube.com/watch?v=XanjZw5hPvE


That was fun. I watched it with my kids, who don't know anything about math nor English, they enjoyed it but had no clue what was going on.


According to my numerical computing classes the best value of PI on your computing platform is:

atan(1.0) * 4

And if there's no math package on your system already defining PI then that's the value you should use - assuming there aren't legal requirements mandating other values to be used instead.
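E.g., a quick check in Python:

    import math

    print(math.atan(1.0) * 4)   # 3.141592653589793
    print(math.pi)              # 3.141592653589793 -- the same double, at least on every platform I've tried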


Are there a lot of systems that define arctangent but not pi?


The cheap Sinclair Scientific calculator (1974) was rather primitive, so instead of storing constants internally, it printed constants such as e and pi on the case. The calculator was rather inaccurate, so you were much better off using 3.14159 from the case than computing 4×atan(1), which gave 3.1440.


It'd be interesting to see their implementation and the trade-offs its designers made.


I'm glad you asked :-) I reverse-engineered the Sinclair Scientific calculator, documented its code, and built a simulator: http://files.righto.com/calculator/sinclair_scientific_simul...

The short answer is that they used a 4-function calculator chip with just 320 words of ROM and managed to reprogram it into a cheap scientific calculator, a remarkable feat. The tradeoff was that the calculator was very slow and inaccurate.


That's amazing! Thank you for sharing!


C/C++ don't define PI but have an atan function. According to the TIOBE Index for September 2020 (https://www.tiobe.com/tiobe-index/) those languages are both in the top 5 of languages used by developers.


C++20 defines mathematical constants including Pi.

https://en.cppreference.com/w/cpp/numeric/constants


Thanks for the heads up! I haven't been paying any attention to C++20. You must be on the cutting edge of C++ to know that - I ended up having to work a bit on my Macbook Pro to get a version of gcc working that defines std::numbers::pi.


They don’t but POSIX requires M_PI.


Yes - Fortran doesn't have a built-in pi constant, but does have trigonometric functions. Usually programs define pi based on atan or some other trig relation.


bc calculator doesn't (or didn't) define pi, but has atan. I've used "4*atan(1) method" in bc to get pi.


This works, `echo 'scale=100;4*a(1)' | bc -lq`


I assume the latter part of your comment refers to the (in)famous attempt by the Indiana state assembly to legislate Pi to be exactly equal to 3.


That thought had come across my mind, but no, I've seen engineering requirements stating something to the effect that the value of PI used in these calculations is to be 3.1415, for example.


Since space is quantized, this would mean that the infinite nature of Pi is non-physical. A mathematical realist could argue that this makes irrational numbers like Pi a mathematical curiosity that lacks objective reality and that at most a few hundred digits of Pi are "real." (Guesstimating what would be required to circle the universe with a precision down to the Planck length.)

Is there a counterargument to this? Is there a case where the infinite irrational nature of Pi would be physically realized?


Given that the formula for relativistic excess radius [0] applied to Earth gives 1.478 mm [1], or 0.23173 parts per billion, anyone using at least 10 significant figures is wasting effort unless they're also accounting for general relativity.

Naturally, I had already memorised the first 12 digits of π years before I found out about that.

I’d be surprised if NASA wasn’t accounting for GR as standard, what with Mercury etc., but for the rest of us, 10 sf should be enough.

[0] https://www.physicsforums.com/threads/what-is-feynmans-exces...

[1] http://www.wolframalpha.com/input/?i=G%2A%28earth%20mass%29%...
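As a quick sanity check of the 1.478 mm figure (Feynman's excess radius is GM/(3c^2); the constants below are rough textbook values):

    G = 6.674e-11          # m^3 kg^-1 s^-2
    M_earth = 5.972e24     # kg
    c = 2.998e8            # m/s
    R_earth = 6.371e6      # m

    excess = G * M_earth / (3 * c ** 2)
    print(excess * 1e3)              # ~1.478 mm
    print(excess / R_earth * 1e9)    # ~0.232 parts per billion of Earth's radius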


Thank you for this comment! I remember the radius excess from undergraduate relativity, but for months I've been unsuccessfully looking for its name or a brief reminder of how it works. What luck!


Okay, so you can calculate the diameter of a gigantic circle with ~40 digits worth. Big deal. Most mathematical formulas that use pi aren't about circles at all.

I am wondering if there is a physical application that actually would benefit from more than 40 digits.


What are they about?


I'm sure GP means they are not explicitly about calculating a circle.

Pi shows up in lots of formulas, all of which end up going back to a circle or trig function at some point (for example Fourier transforms, normal distributions, any periodic motion, etc.).


Interesting other take I've seen: The continued fraction of Pi is [3; 7, 15, 1, 292, ...]

The 292 is a pretty big number, so at that point the fractional approximation is very good. 355/113 is good enough for anything you're doing on planet earth.
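You can watch that happen with a few lines of Python (float precision is fine for the first several terms):

    import math

    def cf_terms(x, n=6):
        """First n continued-fraction terms of x."""
        terms = []
        for _ in range(n):
            a = int(x)
            terms.append(a)
            x = 1.0 / (x - a)
        return terms

    print(cf_terms(math.pi))        # [3, 7, 15, 1, 292, 1]
    print(355 / 113 - math.pi)      # ~2.7e-7 -- truncating just before the 292 gives 355/113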


Some people do work at vastly higher levels of precision. The electron g factor has been experimentally determined as -2.00231930436256 +/- 0.00000000000035. NASA on the other hand uses course corrections rather than trying to achieve insane precision with rockets. It's simply more efficient.


Also because the course cannot be exactly computed in advance. The environment in space, even far away from earth, isn't an ideal vacuum and particle density will depend on solar activity. Then there are effects which are (or until recently were) ill understood, like the pioneer and voyager anomalies. I suspect however, that the effect of not quite perfect burn schedule and burn intensity of the propellant has a much greater effect. And how well do we know the mass of e.g. Jupiter really? GIGO.


> I suspect however, that the effect of not quite perfect burn schedule and burn intensity of the propellant has a much greater effect.

This is the real answer right here. Rockets have all sorts of uncertainties in them. You have the measurement uncertainty in exact orientation and the measurement uncertainty in the acceleration and thus total thrust delivered, all on top of the physical uncertainty in exactly how powerfully your engine is going to burn, and for how long. Remember, there are physical valves that need to open and close to control propellant flow, and there are chaotic perturbations in the conditions inside the combustion chamber. You simply cannot remotely achieve a perfect delta-v in a perfectly specified direction; there are uncertainties on both.


You are right: one can imagine a chaotic system where any deviation from a path will be amplified and have significant costs in the future (i.e. space navigation in realistic gravitational fields); in that case extreme precision may make sense (where an error might be amplified millions of times further along the path).


> 355/113

Just memorize those six digits to get you four digits beyond the three that everybody knows.


I've got Pi memorised more precisely than 355/113 as well for no good reason, and for engineering applications it doesn't serve any purpose.

But the fraction is weirdly close and that's interesting for other (more academic) reasons, such as explaining why plotting primes in polar coordinates looks like a pattern: https://www.youtube.com/watch?v=EK32jo7i5LQ


i just memorized it as / 1 1 3 3 5 5 / where you start with the denominator and circle back.


So you've got to remember three digits, the fact they are doubled, the place to start, and the place to circle back (or deduce any of those).

Or you can remember 1592 as the year Trinity College in Dublin was founded. You didn't know that? You probably won't forget now.


Ha. I don't think I'll remember '1592' directly. Easier to remember the offset of '+100' and combine it with my existing knowledge of who sailed the ocean blue.


Think of it the way we wrote division in grade school, like:

      ___
  113|355


Memorize six digits to get seven back?


Just for the record, 355/113 = 3.14159292... and pi = 3.14159265...


3.1415927, which I memorized in 10th grade (just a hair more accurate than 355/113), has always been more than adequate for every engineering task I've ever had to engage in - including those in the aerospace industry (both rockets and planes/jet engines), and the oilfield industry (all kinds of stuff, including precision sensors).


The computed size (40 decimal digits) is .. less than the 128 bits of the IPv6 address. So, we can encode the radial accuracy of the universe, to the size of a hydrogen molecule in IPv6 packets


Open to correction, but isn't 128 bits a little less than 40 decimal digits?


Yes, just slightly. It would take 133 bits. So I suppose a packet would only hold the width of the universe up to an error of maybe the width of 32 hydrogen atoms.

Clearly unacceptable. I'm sure this will be fixed in IPv7.


There's no IPv7 beyond a proposal, but it was already fixed in IPv9 and deployed in production systems.

RFC 1606 - A Historical Perspective On The Usage Of IP Version 9

https://tools.ietf.org/html/rfc1606

> Whilst there are still many addresses unallocated the available space has been sharply decreased. The discovery of intelligent life on other solar systems with the parallel discovery of a faster-than-light transport stack is the main cause. This enables real time communication with them, and has made the allocation of world-size address spaces necessary, at the level 3 routing hierarchy. There is still only 1 global (spatial) level 2 galaxy wide network required for this galaxy, although the establishment of permanent space stations in deep space may start to exhaust this. This allows level 1 to be used for inter-galaxy routing. The most pressing problem now is the case of parallel universes. Of course there is the danger of assuming that there is no higher extrapolation than parallel universes...


Also you’d need to account for the 3 to the left of the decimal point.


The next question is what is that IPv6 address?

c90f:daa2:2168:c234:c4c6:628b:80dc:1cd1 I think
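A quick way to check, using just the first 50 digits of pi (plenty of headroom for 128 bits):

    from decimal import Decimal, getcontext

    getcontext().prec = 60
    pi = Decimal("3.14159265358979323846264338327950288419716939937510")
    bits = int(pi * (1 << 126))      # the first 128 significant bits of pi
    h = f"{bits:032x}"
    print(":".join(h[i:i + 4] for i in range(0, 32, 4)))
    # prints c90f:daa2:2168:c234:c4c6:628b:80dc:1cd1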


The highest number you can cram into 128 bits is roughly 0.34 * 10^39.

I shifted it so the exponent is equal to the number of digits we can fit.

128 bits could indeed only fit the first 39 digits of PI.

And even then you haven't really stored PI, because you haven't stored where the comma is supposed to go.


3.4 x 10^38

So we'd be off by about <30 hydrogen atoms instead of <1 :)


>"How Many Decimals of Pi Do We Really Need?"

Here's the thing about the transcendental numbers, of which Pi is one: all of them contain information.

All of them (and if someone knows of an exception, please let me know) seem to be able to be generated by functions, iterated functions, where the result of one iteration of the function is fed back into the equation (aka algorithm, aka function, aka "series of repeated steps") for future iterations...

In that respect -- all transcendental numbers -- can be thought of as fractals.

Think of it this way, Nature herself has a way of indexing a whole bunch of fractal algorithms via numbers that have a decimal after the integer component, and an infinite series of digits after that decimal!

Which also seems to imply (via reversal of cause and effect) that infinite information -- can be stored in the proper fractal equation -- although, that's just a personal hypothesis at this point with no real proof to back that claim up...

Anyway, math people out there, feel free to correct me on any or all of this (I claim Socratic ignorance in my reasoning process! <g>) -- but please cite concrete examples to back your specific claim...

You know what someone needs to do?

Look at the number gaps between multiple transcendental numbers... I don't mean like the number gap between pi and phi, or e and phi, or pi and e -- I mean like you take an algorithm for a transcendental, and you bump it (inside of its algorithm) by an integer value of one, then two, etc. If you still get another transcendental, then what is the gap between those transcendentals?

In fact, what is the smallest gap between two transcendentals, and why is this so?

Also, what kind of information -- does that gap represent?

?


This answer is fine as far as it goes, but it ignores the loss of precision involved if you iteratively compute with it. This does happen, even in contexts that NASA cares about. Hopefully they look for this, or they will find they've managed to navigate to the wrong star on some future interstellar mission.


You know, even if I know plenty of digits, I can't think of any real world reason in my personal life for needing more decimals than 3.14 . If I add one more, that's a correction of less than 1/1000 and no measurement I do is that precise. NASA's rocket trajectories are less precise than that. Of course, extremely precise devices need better, but they are removed far from my everyday life.

Has anybody needed more than 3.14?


This is an incredibly interesting question, and the answer is an emphatic YES! Many systems involve iterative schemes, where the output of one step is used as the input to the next step. Here, these precision errors can accumulate, and if there's a multiplicative term in your equations, they can explode!

These sorts of problems are actually very common in a lot of scientific computing and simulation contexts, which is why many in the scientific computing community look aghast at the rise of FP16 (and even fp32) in machine learning applications. Of course, those algorithms are often of a _very_ different nature from (say) the large-scale linear algebra or PDE solvers we're using, but still it's pretty shocking if you're used to worrying about machine precision!
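A toy illustration in Python: spin a point through one "full revolution" in many small steps, once with a truncated pi and once with the full double. The per-step error is invisible on its own but adds up.

    import math

    def spin(pi_value, steps=100_000):
        """Rotate (1, 0) by 2*pi_value/steps, `steps` times."""
        theta = 2 * pi_value / steps
        c, s = math.cos(theta), math.sin(theta)
        x, y = 1.0, 0.0
        for _ in range(steps):
            x, y = x * c - y * s, x * s + y * c
        return x, y

    print(spin(3.14))       # ends near (0.99999, -0.0032): the tiny per-step shortfall accumulates
    print(spin(math.pi))    # essentially back to (1, 0); what's left is accumulated float roundoff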


Machine learning might also prefer focusing on the magnitude rather than an exact value. (With the lower precision number part more about being nudged between magnitude bins.) E.G. bfloat16

https://en.wikipedia.org/wiki/Bfloat16_floating-point_format


That's a thousandth of an inch. Lots of things are machined to a thousandth of an inch tolerance, and some things are machined to a significantly finer tolerance. And if you're drilling bolt holes on a diameter that's greater than an inch, you'll need to use a lot more than 3.14.


For work I needed the accuracy out to 15 decimal places. It was for calculating random jitter based on integrating and averaging a large number of phase noise measurements.


> I can't think of any real world reason in my personal life for needing more decimals than 3.14

If you keep your feet on the ground, that's probably enough for most things. But your grandchildren will probably need more, as spaceflight becomes commoditized. A couple of decimal places might mean the difference between landing on Mars, and icy death in the vacuum of space.


> correction of less than 1/1000 and no measurement I do is that precise. NASA's rocket trajectories are less precise than that.

Source? If you make that big of an error you're gonna fail your orbital insertion at the other end entirely when going to places like Mars, let alone any farther.


Wow, calm down, CydeWeys, you're scaring me. To be clear: No source, imprecise sentence, should have added a ton of buts and ifs, but that would make the question twice as long and thrice as boring. I meant it more in a philosophical way.

For example: I don't think rockets start on course with a precision of 1e-3, and even if they do, small atmospheric changes will probably cause errors greater than that. I assume the feedback loop of the guidance systems easily smooths over errors this small. So while the theoretical trajectory calculations need more precision, the real world guidance might not.

Another consideration: Is e.g. the length of the rocket known this precisely? If temperature on a day changes over a range of e.g. 10 degrees, the materials might expand and contract more than 1e-3.

Now that's NASA. There are clearly things requiring much smaller errors. My CPU lithography being off by a factor of 1e-3 will not end well.

I meant the question in my everyday life. If I build a round table for my home, 3.14 will probably serve me well enough on the drawing table. At what point will random Joe Shmoe notice that pi is not exactly 3.14?


Do you have any sources you can point to from research you've done on this subject? This is all coming off as uninformed speculation.


How precisely do you have to get it right up front? Maybe it's OK to get your initial trajectory off by a 1/1000 part, and make an adjustment when you're 3/4 of the way to Mars and your relative error is now more like 1/250.


There's no world in which it makes sense to save a few digits on pi and then risk running out of fuel while making your mid-course correction because your initial burn was too far off. The mid-course correction is because you can only be so precise when you're firing rocket engines, not because you're foolishly using imprecise constants for no good reason at all.

So I'll reiterate, I want to see a source that NASA uses only 3.14 as the value for pi in making their trajectory calculations. And I want to point out the absurdity of even so much as having this debate in the very comment thread for an article on nasa.gov where NASA itself is saying that they use 3.141592653589793 for pi. We already have as good as a source as we're going to find, and there's a lot more sig figs in it than 3!


Sure. But the question asked was not about what value NASA use, but what precision you need for space travel. The rule of thumb I use when looking at noisy estimates is that you should shoot for twice the precision (one extra bit of information) that you have in your noisiest other value (or of your total noise - they're usually about the same thing). Any less precision than that and your estimate gets worse, but more precision doesn't materially improve your estimate. I don't know a rigorous proof for this but you can hand wave it from the Nyquist-Shannon sampling theorem [0].

So how much uncertainty comes from other sources when deciding on an initial burn for your trip to Mars? Contributing factors could be your fine control over the thrust from the rocket engine, but also solar radiation, gravity or - I suspect this is biggest - attitude control.

I'd guess NASA can achieve a precision of better than 1/500 but probably not 1/50,000, so they'd need about 5 or 6 digits of pi. But I'm interested in hearing a more educated guess!

[0] https://en.m.wikipedia.org/wiki/Nyquist%E2%80%93Shannon_samp...


And so we're back to still needing a source to do better than guessing :/


Some factors are way more important than others for rocket navigation. For instance, the azimuth, the angle you're headed, is very important, since a small error in angle gets you way off target. The Saturn V theodolite, used to calibrate this angle before launch, had an accuracy of better than +/- 2 seconds of arc, i.e. 1 part in 648000. That suggests you'd need to use pi to at least 6 decimal places (probably more) for surveying and azimuth calibration.

The book "Inventing Accuracy" is about missile guidance (which has mostly the same issues) and discusses the various sources of error in detail and the various contributions to the "error budget".


I don't have a source to support the OP, but I would think there is a significant allowance for course correction that reduces the required precision. It's not a purely ballistic trajectory.


The mid-course correction is not because you used an imprecise value of pi. If that's all it were about, you'd just use a precise pi from the beginning and then not even need the mid-course correction. Missions have failed because the rocket engines failed to start up for the mid-course correction. Unnecessarily risking that because for some reason you insist on using far fewer digits of pi than your computer is capable of would be insanity.


The context of my statement/question was definitely "given the precision that can be achieved with available precision" and not with an arbitrary imprecise constant in the calculations. As you say, the mechanical aspect of the launch and transfer orbit insertion is going to be several orders of magnitude more imprecise than the mathematical goal - that is what the mid-course corrections are for (aka the difference between theory and reality)


The comment I was originally responding to said "Has anybody needed more [digits in pi] than 3.14?" and "NASA's rocket trajectories are less precise than [one part in a thousand]", both of which I take issue with. You then stepped in to defend those statements against my objections. I'm glad it turns out you don't actually agree with those statements, but hopefully you can see my confusion.


> Using pi rounded to the 15th decimal

So, a 64-bit double, in other words.



Is that "need" or "find convenient"?

1 in a million gives you an error of roughly 132 feet (40 metres) in the circumference of the Earth. 1 in 100 million brings that down to 1.32 feet (0.4 metres).

1 in 100 million is a mere 8 decimal places (out of the billions of decimal places that we've computed for pi.)



IIRC it was a US state that decided we only needed one decimal place of accuracy - I want to say Kansas, but it could have been Oklahoma or Indiana I think?


If they're going all the way out to 39 or 40 digits (and I understand they're normally not), they might as well stop at the Feynman Point.


That's how much we need for space travel within the solar system. What about using the size of the universe and the Planck length?


I have memorized 3.1415926, which is better than 1 ppm and has been overkill for every practical use I ever had in my life.


Funny. If you're going to stop there, you should end it with a 7 to minimize errors.

The above, while technically true, is exactly as pointless as it sounds. The error is lower with a 7 than a 6, but if either of them is an acceptable approximation, the difference is irrelevant.


Obviously, I know this, but it rhymes better with 6 at the end (I am Polish).

There are only a few instruments I have been around in my life that can actually measure this difference. I have a 6.5-digit voltmeter which would almost be able to detect it in certain situations if I somehow contrived the experiment, but not in any practical measurement where I actually had to calculate anything.


Lots of people memorize more digits of pi than they need, just because it's fun. I've noticed that most who do this stop at 50 digits after the decimal point. When I was younger I knew 250 digits, now I'm lucky to remember beyond 100.

3.141592653589793238462643383279502884197169399375105820974944592307816406286208998628034825342117067982148086513282306647

I haven't practiced in a while, I hope that's right.


> 3.1415926535897932384626433832795028841971693993 [...]

I know this much is correct because that's how much I've memorized.


I know it’s correct because the rest goes off the page.


You don't need any digits of pi. You should be using tau.


And in binary, the bits just shift, so same thing to remember


We used 22/7 when I was at school.


On my best slide rule, it's visibly just a tiny bit to the right of 3.14, but I doubt that I could guess a fourth digit very well if a similar number emerged from a calculation.


That's not going to get you very far if you're looking for precision.


There's always 355/113. It's accurate to roughly 6.6 digits† and strangely it's been easier for me to remember than 22/7.

https://www.johndcook.com/blog/2018/05/22/best-approximation...


I like these kinds of things, but a part of me has always wondered why people don't want to just memorize it to six digits at that point. I mean, how long does it really take you to memorize 3.1415926?

Or you could do what I always did, much to the annoyance of my high-school and college physics teachers, and just write down all my answers in terms of pi and never bother reducing it, since I would always argue that I can keep it exact that way :)


also useful for celebrating π Day in Europe, on a pre-kalend day in July.


Yep 14th March and 22nd July are both PI Days. Who knew? :)


Why does the link have a hootsuite parameter in it? Is it because it was posted by a marketeer?


I don't think the person posting it on HN did it on purpose as they didn't include it in their other submissions.


Just seeing this, i really have no idea, just saw the site, copied url, submitted here, just like any other time!


...EVERYONE!



