Hang on, how do you store that fraction in 16 bits? The naive way, 8 bits each for numerator and denominator, doesn't work, since 355 doesn't fit in 8 bits.

 Naive: 9 + 7 bits (admittedly, a bit of a stretch). If I'm right about the size of Fn, representing all fractions with denominator 113 or less requires 12 bits, which leaves 4 bits for the whole-number part.
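For reference, here's a quick sketch (helper names `phi` and `farey_size` are mine) checking that an index into F_113 really does fit in 12 bits. The Farey sequence F_n has 1 + Σ φ(k) terms for k = 1..n, which grows like 3n²/π² ≈ 3882 for n = 113:

```python
from math import gcd

def phi(k):
    # Naive Euler totient: count integers in 1..k coprime to k.
    return sum(1 for i in range(1, k + 1) if gcd(i, k) == 1)

def farey_size(n):
    # |F_n| = 1 + sum of phi(k) for k = 1..n (the extra 1 is the term 0/1).
    return 1 + sum(phi(k) for k in range(1, n + 1))

size = farey_size(113)
print(size, size.bit_length())  # ~3900 terms -> a 12-bit index suffices
```

So a 12-bit Farey index plus a 4-bit integer part does come out to exactly 16 bits, as claimed above.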
 That Farey sequence is cool, but I'm a bit skeptical that it's really useful as a general-purpose numeric format. It's dense around fractions with largish numerators and denominators, but sparser around simple fractions like 0, 1/2 and 1. That seems bad for many applications. It may be able to approximate pi quite well, but how about small angles close to 0? If you know the range of numbers you're going to store, just spreading the representations evenly seems like a natural choice -- normal fixed-point reals. Floating point seems like a pretty reasonable extension of that, for when you don't know in advance what the scale will be.
 Sure, there are tradeoffs. Everyone in the thread implies I want to completely obsolete FP or whatnot :( Have you thought through the small-angles example? Because sin(x) is approximately equal to x for small angles. Then, as x grows, we enter the densest area of the Farey sequence, so the precision is fine there too.
 Sorry to be overly harsh! I think I was initially excited by the suggestion, then disappointed when I thought it through and found it less compelling than it sounded. :) Re small angles, I don't quite follow; if I have a couple of small angles, say 0.01 and 0.015, isn't it useful to have good resolution around 0 so I can represent them reasonably accurately? The Farey sequence seems to have its biggest gap right at that point, between 0 and 1/N.
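For what it's worth, you can check where F_n is sparsest by enumerating it with the standard next-term recurrence (the function name `farey` is mine):

```python
from fractions import Fraction

def farey(n):
    """Yield the Farey sequence F_n in ascending order."""
    # Standard recurrence: given consecutive terms a/b < c/d in F_n,
    # the next term is (k*c - a)/(k*d - b) with k = (n + b) // d.
    a, b, c, d = 0, 1, 1, n
    yield Fraction(a, b)
    while c <= n:
        k = (n + b) // d
        a, b, c, d = c, d, k * c - a, k * d - b
        yield Fraction(a, b)

seq = list(farey(113))
gaps = [hi - lo for lo, hi in zip(seq, seq[1:])]
print(max(gaps))  # 1/113 -- the gaps next to 0 and 1 are the widest
```

The widest gaps, 1/113, sit right next to 0 and 1. Around 1/2 the neighbors are 56/113 and 57/113, a gap of 1/226 -- narrower, but still far wider than the roughly 1/113² spacing in the densest regions.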
 I don't follow either. When sin(0.01 rad) = 0.0099998333... and you do need that 0.00000026 of precision, you just choose a representation with enough bits, same as with floating point.
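To make the "enough bits" point concrete, here's a sketch using the standard library's `Fraction.limit_denominator` (which finds the best rational approximation under a denominator bound) on sin(0.01); the error shrinks once the denominator bound is raised enough:

```python
import math
from fractions import Fraction

x = math.sin(0.01)  # 0.0099998333...

# Best rational approximation to sin(0.01) under growing denominator bounds.
for bound in (113, 60_000, 1_000_000):
    approx = Fraction(x).limit_denominator(bound)
    print(bound, approx, f"err={abs(float(approx) - x):.2e}")
```

With denominators capped at 113 the best you can do is 1/100 (error on the order of 1e-7); loosening the cap lets the continued-fraction convergents of sin(0.01) kick in and the error drops by orders of magnitude.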