Why would you do this instead of just converting the digits to a fraction over a power of 10 and then reducing the fraction (or a power of 2, if it's a floating-point datatype)? I was thinking it was faster, but the recursive procedure involves a division step, so I would assume that calculating the GCD using the binary algorithm (which uses just shifts and subtractions) would be faster. I guess this is for when your numeric datatype can't fit the size of the denominator?
If you assume/know the decimal you got is the result of rounding some computation, and are more interested in a short description than in the most accurate one.
For example, 0.142857 would be 142,857/1,000,000 in your algorithm, but ⅐ in one using best rational approximations (which I assume that paper does, as they're easy to calculate and, according to a fairly intuitive metric, best).
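For instance, here's a quick sketch of that contrast with Python's fractions module (limit_denominator does the best-rational-approximation search):

from fractions import Fraction

x = Fraction("0.142857")
print(x)                           # 142857/1000000 (the exact digits)
print(x.limit_denominator(1000))   # 1/7 (the short description)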
Common Lisp gives you this choice (this being HN, a Lisp mention is obligatory :-)
The CL function #'rational converts a float to a rational assuming the float is completely accurate (i.e. every decimal digit after the last internally-represented one is assumed to be 0), while #'rationalize assumes the float is only accurate to the limit of float precision (i.e. the decimal digits after the last internally-represented one can be anything that would round to the last one).
Both functions preserve the invariant that
(float (rational x) x) == x
and
(float (rationalize x) x) == x
...which is another way of saying they don't lose information.
In practice these tend not to be very useful to me because they don't provide the ability to specify which decimal digit should be considered the last one: they take this parameter from the underlying float representation, which sometimes causes unexpected results (the denominator can end up being larger than you expected), because the input number can effectively have zeros added at the end if the underlying float representation is large enough to capture more bits than you specified when you typed in the number.
In addition, what I really need much of the time is the ability to limit the size of the resulting denominator, like "give me the best rational equivalent of 0.142857 with at most a 3-digit denominator". That function is not built into Common Lisp, but one can write it using the methods in TFA. It loses information, of course, so a round trip from float->rational->float won't necessarily produce the same result.
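For what it's worth, Python's fractions module scratches a similar itch; limit_denominator isn't CL's #'rationalize, but it gives the bounded-denominator behavior described above (a sketch):

from fractions import Fraction

x = 0.142857
assert float(Fraction(x)) == x          # the no-information-loss invariant
f = Fraction(x).limit_denominator(999)  # "at most a 3-digit denominator"
print(f)                                # 1/7
print(float(f) == x)                    # False: the limited round trip is lossy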
> assuming the float is completely accurate (i.e. every decimal digit after the last internally-represented one is assumed to be 0)
Careful though - the float is probably not actually a set of decimal digits but most likely binary ones, so it would be assuming that every binary digit after the last one is zero.
Just because you wrote ‘0.1’ in your source code that doesn’t mean you only have a single significant figure in the float in memory. It’s going to be 0.0001100110011… (repeating 0011 to the extent of your float representation’s precision).
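This is easy to check in Python, which can expose the exact rational value of an IEEE double:

>>> from fractions import Fraction
>>> Fraction(0.1)
Fraction(3602879701896397, 36028797018963968)
>>> (0.1).hex()
'0x1.999999999999ap-4'

The denominator is 2**55, and the repeating pattern is visible in the hex digits (9 = 1001).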
Although the Common Lisp language doesn't actually appear to require that the internal float radix be 2; a floating-point decimal type would be a valid implementation.
> if the underlying float representation is large enough to capture more bits than you specified when you typed in the number.
If you typed in `0.142857` (or however IEEE floats are syntaxed), the correct rational is 142857/1000000. If you want to rationalize relative to number of decimal significant digits, that should be something like `(rationalize "0.142857")` or `(rationalize (radix-mp-float 10 '(1 4 2 8 5 7) -1))`.
That would be correct if the underlying representation were decimal. If it's binary, as it is in most Common Lisps,
(rational 0.142857) --> 9586971/67108864
because the underlying representation is binary, and that denominator is 0x4000000.
My original comment about the representation being large enough to capture more bits than you specified was wrong; the unexpected behavior comes from the internal binary representation and the fact that 9586971/67108864 cannot be reduced.
If you start with integers you get
(/ 142857 1000000) --> 142857/1000000
so you could write a version of #'rational that gives the expected base-10 result, but you'd first have to convert the float to an integer by multiplying by a power of 10.
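A sketch of that base-10 version, in Python for concreteness (rational_base10 is a hypothetical helper; you choose how many decimal digits to treat as significant):

from fractions import Fraction

def rational_base10(x, digits=6):
    # Treat x as exact to `digits` decimal places: scale by a power of 10,
    # round to an integer, then let Fraction reduce.
    scale = 10 ** digits
    return Fraction(round(x * scale), scale)

print(rational_base10(0.142857))  # 142857/1000000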
You might want a simpler representation (smaller denominator) than that will give you.
An example is converting musical pitch ratios to simple fractions (e.g. as used in just intonation).
Literally yesterday I was writing code to do this. The mediant method works beautifully for this, and it can be optimized further than what's presented in the article.
The basic idea behind the optimization (which I got from this [1] SO answer) is to recognize that you only ever update the numerator and denominator of either bound in a linear fashion. So rather than repeatedly updating a given bound one mediant at a time, you alternate between the two bounds and use algebra to determine the multiple by which each bound should be updated.
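Here's a minimal sketch of that batched update (my own reconstruction, not the SO answer verbatim); it's essentially the recurrence CPython uses inside Fraction.limit_denominator:

from fractions import Fraction

def best_rational(x, max_den):
    # Stern-Brocot walk with batched steps: a = n // d counts how many
    # consecutive mediant steps go in the same direction, so each bound
    # is advanced by that multiple at once instead of one step at a time.
    if x.denominator <= max_den:
        return x
    p0, q0, p1, q1 = 0, 1, 1, 0
    n, d = x.numerator, x.denominator
    while True:
        a = n // d
        q2 = q0 + a * q1
        if q2 > max_den:
            break
        p0, q0, p1, q1 = p1, q1, p0 + a * p1, q2
        n, d = d, n - a * d
    # the answer is one of the two semiconvergents straddling x
    k = (max_den - q0) // q1
    lo = Fraction(p0 + k * p1, q0 + k * q1)
    hi = Fraction(p1, q1)
    return hi if abs(hi - x) <= abs(lo - x) else lo

print(best_rational(Fraction("0.263157894737"), 1000))  # 5/19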
Regarding the intersection of fractions and music, "just intonation" is the term you want to research.
I don't think it's that good. At the very least I implemented it and got different answers for the test case of 0.263157894737 = 5/19 depending on the choice of numerical representation.
float64 ended up at 87714233939/333314088968 while decimal and fraction ended up at 87719298244/333333333327.
Here is the code.
import argparse

parser = argparse.ArgumentParser()
parser.add_argument("Q", nargs="?", default="0.263157894737")
parser.add_argument("--method", choices=("float", "decimal", "fraction"), default="float")
args = parser.parse_args()

if args.method == "float":
    print(f"Using float64 for {args.Q!r}")
    one = 1.0
    Q = float(args.Q)
    int2float = float
elif args.method == "decimal":
    print(f"Using decimal for {args.Q!r}")
    import decimal
    one = decimal.Decimal(1)
    Q = decimal.Decimal(args.Q)
    int2float = decimal.Decimal
elif args.method == "fraction":
    print(f"Using fractions for {args.Q!r}")
    import fractions
    one = fractions.Fraction(1)
    Q = fractions.Fraction(args.Q)
    int2float = fractions.Fraction

def to_frac(X):
    # Continued-fraction expansion: Za holds the current remainder,
    # Da/Db the previous two convergent denominators.
    Da = 0
    Db = 1
    Za = X
    while 1:
        Zb = one / (Za - int(Za))
        Dc = Db * int(Zb) + Da
        N = round(X * Dc)
        frac = N / int2float(Dc)
        print(f" {N}/{Dc} = {frac} (diff: {X - frac})")
        if float(N) / float(Dc) == float(X):
            return (N, Dc)
        Da, Db = Db, Dc
        Za = Zb

print("solution:", to_frac(Q))
Another issue is the numerical range. Consider the input "1.5e-318". It causes overflow in the float, decimal, and fraction implementations I gave:
% python frac.py 1.5e-318 --method float
Using float64 for '1.5e-318'
Traceback (most recent call last):
  File "frac.py", line 40, in <module>
    print("solution:", to_frac(Q))
  File "frac.py", line 31, in to_frac
    Dc = Db * int(Zb) + Da
OverflowError: cannot convert float infinity to integer

% python frac.py 1.5e-318 --method fraction
Using fractions for '1.5e-318'
 1/666666666... many 6s removed ...6666 = 1/666666666... many 6s removed ...6666 (diff: -1/666 .. more 6s removed ... 6600.. even more zeros removed ...00)
Traceback (most recent call last):
  File "frac.py", line 40, in <module>
    print("solution:", to_frac(Q))
  File "frac.py", line 35, in to_frac
    if float(N) / float(Dc) == float(X):
OverflowError: int too large to convert to float
while with the mediant solution I don't need to worry about the input range beyond infinity and NaN:
>>> from fractions import Fraction
>>> Fraction(1.5e-318).limit_denominator(1000000)
Fraction(0, 1)
(I used the float() calls to ensure the fraction and decimal methods stop at the limits of float64. If I remove them I still end up with numbers like 3/2E319 which would not be representable as a Turbo Pascal integer, while the Turbo Pascal mediant implementation would not have a problem.)
Finally, the mediant solution is easier to implement.
I had a similar problem a few years ago, when I was making gears (and gear-shaped objects) in a machine shop on old gear-driven machines. We had most, but not all, of the change gears, and when making worm or bevel gears you'd often have a ratio and have to work backwards to figure out a set of gears to get as close as possible to that ratio.
I wrote the simplest possible Pascal/Windows GUI application[1] to simply try all the possible gear combinations and keep the one with the lowest error. I was shocked when it returned an answer in less than a second. I thought for sure I was going to be digging into algorithms like this, but nope... cheap compute saved me. I'd quickly get the gears, and the error in parts per million.
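For flavor, a toy version of that brute-force search in Python (the gear set and target here are invented for illustration; the real app was Pascal):

from itertools import permutations

gears = [24, 28, 30, 32, 36, 40, 44, 48, 56, 64, 72, 86, 100, 127]
target = 0.31831  # e.g. approximating 1/pi

# Try every compound train (a/b)*(c/d) and keep the lowest error.
best = min(
    (abs((a * c) / (b * d) - target), a, b, c, d)
    for a, b, c, d in permutations(gears, 4)
)
err, a, b, c, d = best
print(f"({a}/{b})*({c}/{d}) ~ {target}, error {err / target * 1e6:.1f} ppm")

Fourteen gears taken four at a time is only about 24,000 combinations, which is why it finishes instantly.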
I do feel myself getting Nerd Sniped though... must resist.
I have a background in linguistics (computational), and part of what was mind-blowing when my schooling approached the deep dive on some aspects of it was the sheer extent to which much of what we put into language is artificially constrained by the limits of any given language, which is unable to fully express an infinite range of concepts/thoughts/etc. Some things are difficult to put into words because there are no words for every single gradation of a phenomenon (colors, with their inherent connection to gradients, come to mind). Maybe this seems trivial, and I kind of had an intuitive "of course" understanding of it before the deep-dive work in linguistics, but that's when it really became abundantly clear how limited we are by lexical entries.
All of that is a preamble to say that it seems like the same thing is true of mathematics. No single number system can completely encompass all mathematical expression. This post sort of touches on it with the varying methods of expressing something in decimal vs. fractional notation: 1/3 cannot be fully expressed (well, expanded) in decimal notation, but it is trivial in fractional notation. More esoteric examples exist when entering the realm of hyperreal numbers.
It is both amazing and daunting to confront these limits, which may be a product of the limitations of the human mind (or not; really, I don't know).
And maybe I’m just rambling… I don’t have a strong background in number theory… it is all just so beautiful and humbling though.
For the continued-fraction convergents, instead of Mathematica, here's a web page of mine that does the same thing: https://shreevatsa.net/fraction/best (try it with the 3.213432 from the post, and a large limit for the denominator).
Also, the mediants approach mentioned in the post is not unrelated to continued fractions; the latter can be seen as an "accelerated" version of the mediants (specifically, counting how many times to take certain mediants, and directly getting there), as explained in this great article "Continued fractions without tears": https://www.maa.org/programs/maa-awards/writing-awards/conti...
That is a truly marvelous article, well worth reading for anyone interested in either continued fractions or rational approximations.
For those who are curious, a simple algorithm for generating these good approximations and continued fractions is to take the mediant of a Farey pair that encloses the given number, such as [3/1, 4/1], whose mediant is 7/2, and then choose the sub-interval that contains the desired number. Repeat as desired. For 3.213432 this first step gives [3/1, 7/2] as the next interval to look at, since 3/1 < 3.213432 < 7/2. When doing this to generate the continued fraction representation, start with the nominal interval [0/1, 1/0].
A few steps:
0/1 : 1/1 : 1/0 R
1/1 : 2/1 : 1/0 R
2/1 : 3/1 : 1/0 R
3/1 : 4/1 : 1/0 L
3/1 : 7/2 : 4/1 L
3/1 : 10/3 : 7/2 L
3/1 : 13/4 : 10/3 L
3/1 : 16/5 : 13/4 R
16/5 : 29/9 : 13/4 L
16/5 : 45/14 : 29/9 L
16/5 : 61/19 : 45/14 R
61/19 : 106/33 : 45/14 R
...
The number 106/33 is 3.2121... and is generated by easy computation of the mediants.
Each time the direction of selection changes, we get an ultra-best approximation. If we count the number of steps between the switches, we get the continued fraction representation: [3; 4, 1, 2, ...].
This is also related to the Stern-Brocot tree, for those interested.
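For anyone who wants to replay those steps, here's a small sketch in Python that prints the same lo : mediant : hi table (assuming x > 0 and x is never hit exactly):

from fractions import Fraction

def farey_walk(x, steps):
    lo = (0, 1)   # left endpoint p/q
    hi = (1, 0)   # right endpoint p/q (1/0 stands in for infinity)
    for _ in range(steps):
        med = (lo[0] + hi[0], lo[1] + hi[1])  # the mediant
        side = "R" if Fraction(*med) < x else "L"
        print(f"{lo[0]}/{lo[1]} : {med[0]}/{med[1]} : {hi[0]}/{hi[1]} {side}")
        if side == "R":
            lo = med  # x lies in the right sub-interval
        else:
            hi = med  # x lies in the left sub-interval

farey_walk(Fraction("3.213432"), 12)  # reproduces the table above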
Playing around with these ideas led me to the idea of defining a real number as an oracle that, given a rational interval, answers Yes or No according to whether the number ought to lie in that interval. To define the space of such objects, I used properties that the Yes/No answers have to satisfy. The key property is that if you split a Yes interval into two pieces, one of the pieces should be Yes and the other No (unless the splitting point is the real number itself).
If curious for details, my paper repository on this matter is at https://github.com/jostylr/Reals-as-Oracles where I not only prove that it all works out as it should, but also give an example of doing continued fraction arithmetic in section 7.9 of the main paper for those who just love continued fractions.
He points out that the YouTube video "uses mediants" to compute rational approximations to real numbers. He then says that the continued fraction expansion is better. However, the mediant-based method is totally identical to the continued fraction expansion method. The two are related via the Stern-Brocot tree (https://en.m.wikipedia.org/wiki/Stern%E2%80%93Brocot_tree).
The fractions that one gets from iterating the mediant process are exactly the "semiconvergents" one gets from the continued fraction process. Truncating the continued fraction gives the "convergents"; these are special mediants which are right before switching direction as one zig zags on the tree.
One neat trivium I like to quiz the mathematically-minded on is: what number is the most irrational, i.e., toward which number do rational approximations converge the slowest?
The answer can be determined from continued fractions and is discussed in the video near the end.
Also got me thinking about which fraction is the most "pi efficient" - that is, which fraction is closest to pi relative to how many digits you must use to represent the fraction (or perhaps only the digits in the numerator or denominator)?
I was using continued fractions in Mathematica. Maybe there's enough information in the article to say that this already answers the question; I don't know enough about all of the methods to say, personally. Also, I never pinned down the best metric for weighing accuracy against the number of digits needed.
As others have said, the convergents of the simple continued fraction representation give the best rational approximation [1] relative to the size of the fraction, in the strongest relevant metric: that of absolute error bound. Proofs can be found in any book on Diophantine Approximation; the classic (and readable) reference is Khinchin's Continued Fractions book.
The answer is already there. If you extend the sequence given in the article longer, the ratio of the number of digits in the rational representation to the decimal representation gets worse, not better, so you know you reached the inflection point early.
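One crude way to put numbers on that, taking "correct digits per digit spent" as the metric (my arbitrary choice), is to walk the convergents of float64 pi:

from fractions import Fraction
import math

x = Fraction(math.pi)  # the exact rational value of the float64 pi
p0, q0, p1, q1 = 0, 1, 1, 0
n, d = x.numerator, x.denominator
for _ in range(6):
    a = n // d  # next continued-fraction term
    p0, q0, p1, q1 = p1, q1, p0 + a * p1, q0 + a * q1
    n, d = d, n - a * d
    err = abs(Fraction(p1, q1) - x)
    spent = len(str(p1)) + len(str(q1))
    print(f"{p1}/{q1}: ~{-math.log10(err):.1f} correct digits for {spent} digits spent")

In this run 355/113 stands out at roughly 6.6 correct digits for 6 digits spent; the later convergents all deliver fewer correct digits than they cost.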
If you don't want to do this yourself, use Python's built-in fractions module. Fractions have a built-in method to compute the best fraction given a maximum denominator:
>>> from fractions import Fraction
>>> x = Fraction("3.213432")
>>> int(x)
3
>>> (x % 1).limit_denominator(9999)
Fraction(375, 1757)
Often it will be preferable to limit the error instead of the denominator. "Find the fraction with the smallest denominator that's within 0.005 of 0.33." Does python support this as well?
Not directly that I know of, but that's definitely a straightforward mediant calculation. I'll do it all in integer ratios and assume the answer is somewhere between 0.0 and +inf, exclusive:
from fractions import Fraction

mid = Fraction("0.33")
delta = mid * Fraction("0.005")
lo = mid - delta
lo_p, lo_q = lo.as_integer_ratio()
hi = mid + delta
hi_p, hi_q = hi.as_integer_ratio()

p0, q0, p1, q1 = 0, 1, 1, 0
while True:
    p = p0 + p1
    q = q0 + q1
    if p * lo_q < lo_p * q:  # p/q < lo_p/lo_q
        p0, q0 = p, q
        continue
    if p * hi_q > q * hi_p:  # p/q > hi_p/hi_q
        p1, q1 = p, q
        continue
    break  # in range

frac = Fraction(p, q)
print(f"Found: {p}/{q}")
print(f"{lo} <= {frac} <= {hi}? {lo <= frac <= hi}")
For anyone else who was puzzled by this result (rather than 1/3): the above algorithm finds something whose error is at most .005 times the input value 0.33, i.e. a relative error of .005. 1/3 is the best value for an absolute error of at most .005.
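To see the difference, here is the same walk packaged as a function and run with an absolute tolerance (delta = 0.005 rather than 0.33 * 0.005):

from fractions import Fraction

def smallest_denominator(lo, hi):
    # Stern-Brocot walk: the first mediant that lands inside [lo, hi]
    # is the fraction with the smallest denominator in that interval.
    p0, q0, p1, q1 = 0, 1, 1, 0
    while True:
        p, q = p0 + p1, q0 + q1
        if Fraction(p, q) < lo:
            p0, q0 = p, q  # mediant too small: raise the left bound
        elif Fraction(p, q) > hi:
            p1, q1 = p, q  # mediant too big: lower the right bound
        else:
            return Fraction(p, q)

mid, delta = Fraction("0.33"), Fraction("0.005")
print(smallest_denominator(mid - delta, mid + delta))  # 1/3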
I started studying Lisp recently and it blew my mind that it can do rational fractions, and also virtually infinite-length integers, OOTB. I have used and programmed computers since the 286 era and have never seen this before. Now I don't even understand why people still use calculators (including calculator apps) when we have Lisp, which is easier and more powerful.
There's a nice calculator app bundled with Windows 10, and I even have Mathematica installed, and of course Desmos is usable everywhere, but... when I need to compute anything that's more than simple elementary arithmetic, I just open a Python REPL. It's simple to type, it's readable, and way easier to use than a calculator for persistence (ever tried to use variables on a scientific calculator? It works, but it sucks).
> when I need to compute anything that's more than simple elementary arithmetic, I just open a Python REPL. It's simple to type, it's readable
Indeed. It also is fairly hard (for me at least) to enter a long sequence (longer than 2 operands) into a classic calculator without making a typo. Whenever I need to just add 4 numbers, I don't use a calculator to enter 1 + 2 + 3 + 4, nor a spreadsheet to enter the data in separate cells, select them, and see the sum (I used that approach until recently) - I open IdeOne.com, choose Common Lisp, and enter (+ 1 2 3 4). Needless to say, I don't only use it for formulae this simple; it is of great help in complex cases.
There are a couple of well-known algorithms to find the best rational approximation to a given real number. A good and short book is "Diophantine Approximations" by I. Niven. For code, you can also check out this repo of mine: https://github.com/alidasdan/best-rational-approximation .
Note that decimal numbers represented as floating-point numbers on a computer are actually rational numbers due to limited precision. So converting decimal numbers to fractions, both stored on a computer, means converting a fraction with a potentially large numerator and/or denominator to a simpler fraction, say, one with a far smaller denominator.
For example, an approximation to pi is 3.14159265358979323844, which is the same thing as 314159265358979323844/10^20 (the trivial approximation). Using these algorithms we can convert this fraction to a simpler fraction under different approximation criteria, such as a bound on the approximation error or a bound on the magnitude of the denominator. For this example, we then get various approximations to pi such as 22/7, 355/113, ...
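Those last two can be checked with Python's fractions module (bounding the denominator; the error-bound variant needs the mediant walk discussed elsewhere in the thread):

>>> from fractions import Fraction
>>> x = Fraction(314159265358979323844, 10**20)
>>> x.limit_denominator(10)
Fraction(22, 7)
>>> x.limit_denominator(1000)
Fraction(355, 113)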
In the video this guy uses Web VPython but doesn't share what website he is using. Google points me to glowscript.org, but it's clear he is using a different Web VPython tool (based on the look and icons).
Anyone know what tool/website he is using to code python?
Algorithm To Convert A Decimal To A Fraction, by John Kennedy, Mathematics Department, Santa Monica College, 1900 Pico Blvd., Santa Monica, CA 90405. http://homepage.smc.edu/kennedy_john/DEC2FRAC.PDF