I invite you to ignore any syntactic issues that may arise, like perhaps having to use x.Add(y) instead of x + y. If those were the fundamental problems, the solution would be trivial. The fundamental problem with rational numbers as the pervasive default is that, in exchange for the increased precision, algorithms you are used to being O(1) in space and time (albeit with potentially imprecise results) suddenly take on seemingly-random costs for both, with disturbing amounts of terms like O((log2 numerator + log2 denominator)^2) showing up and then stacking on top of each other (^4, ^6, ^8, etc.).
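For a concrete taste of that blow-up, here's a quick sketch (Python's fractions.Fraction, purely as an illustration):

```python
from fractions import Fraction

# Summing the harmonic series exactly: each addition can multiply the
# denominators together, so their bit-length keeps growing and every
# further addition costs more than the last.
total = Fraction(0)
for k in range(1, 51):
    total += Fraction(1, k)

# After just 50 terms the reduced denominator is already wider than a
# double's entire 53-bit mantissa.
print(total.denominator.bit_length())
```

Fifty O(1) float additions versus fifty additions whose operands keep getting wider is the whole complaint in miniature.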
More seriously, general programming always prefers performance over correctness, because correctness is expensive and very few programs are rocket surgery where perfect correctness matters. We have languages dedicated to correctness, like Ada and Coq. They are very niche because they are slow and they require a lot of work the programmer doesn't want to do to hold up their end of the bargain.
All the rest you say applies to floating point alike; plus, with fractions it is much easier to test whether the result is accurate or not.
In this post you say "fixed-length fractions", but I don't know exactly what that is: https://www.google.com/search?hl=en&q=fixed%2Dlength%20fract... For some of the natural expansions of that term I can see how they'd have algorithmic complexity equal to floats, but they're going to have similar (not identical, but similar) pathological cases.
Based on what you sketch in your post, if you mean two fields of fixed-length integers, that's got all kinds of problems that make it a non-starter. Redundant representations mean the equality operator becomes more complicated than a bitwise comparison and involves factoring (IEEE floating point has some issues here, but it is at least designed to be amenable to hardware; this factoring thing is going to be a problem at the base silicon level). Redundant representations also mean you're wasting valuable bit patterns, and you can't represent anything bigger than MAX_NUM / 1, so you're going to pay a lot for larger numbers. And you still have the problem that your operations won't be closed and you'll be approximating anyhow; you're going to end up with most of floating point's problems regardless.
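To make the equality problem concrete, here's a sketch in Python (illustrative function names; real hardware would have to do the equivalent in silicon):

```python
from math import gcd

def rat_eq(n1, d1, n2, d2):
    # Cross-multiplication avoids division entirely, but it is already
    # more work than a single bitwise compare, and the products n1*d2
    # and n2*d1 can overflow a fixed-width register.
    return n1 * d2 == n2 * d1

def normalize(n, d):
    # The canonical form needs a gcd, i.e. a full Euclidean loop.
    g = gcd(n, d)
    return (n // g, d // g)

print(rat_eq(1, 2, 2, 4))   # True: same value, different bit patterns
print(normalize(2, 4))      # (1, 2)
```

Either you pay the multiply at every comparison, or you pay the gcd at every write to keep things canonical; neither is a wire-level compare.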
I know there are alternatives to floating point, such as "universal coding": https://insidehpc.com/2018/05/universal-coding-reals-alterna... They would also have similar complexity to floats, but move the pathological behaviors around. (The question in terms of trying to replace floats isn't whether you can get rid of the pathological behaviors; you can't, just by the nature of the beast. But nothing says you can't hide the pathological behaviors under the carpet, under the couch, in the corner of the room, instead of having it pretty close to sitting in the middle of the commonly-traveled path.)
log2(355) + log2(113) ≈ 15.3, so it seems like you could squeeze this into 16 bits, but what about a sign bit? And since 355 > 2^8, I think you also need to reserve some bits to say how the remaining bits are partitioned between the numerator and denominator (there's an interesting writeup of some experiments along these lines under the name "floating bar").
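Spelling out that bit count (a trivial check, just to make the arithmetic explicit):

```python
from math import ceil, log2

# Bits needed for 355/113 with separate unsigned fields, before any
# sign bit or any field recording where the numerator/denominator
# split falls:
num_bits = ceil(log2(355 + 1))   # 9
den_bits = ceil(log2(113 + 1))   # 7
print(num_bits, den_bits, num_bits + den_bits)   # 9 7 16
```

So the 16 bits are spent before you've paid for the sign or the partition field.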
A fundamental fact of any fixed width number format is that you can only represent 2^n values with n bits, so designing a number representation is really all about deciding how you will distribute those values. Some properties you might want:
1. Enough dynamic range for your application
2. A smooth distribution of values within that range (clustering some values too close together implies gaps somewhere else).
3. Efficient arithmetic operations
4. Exact results in some important cases
Floating point optimizes for 1-3, somewhat at the expense of 4 although it is exact for (small enough) integers and fractions with a (small enough) power of 2 in the denominator.
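A quick illustration of property 4 for doubles, showing exactly which cases stay exact:

```python
# Where doubles are exact and where they are not:
print(0.5 + 0.25 == 0.75)        # True: power-of-2 denominators are exact
print(0.1 + 0.2 == 0.3)          # False: 1/10 has no finite binary expansion
print(2.0**53 + 1.0 == 2.0**53)  # True: integers past 2^53 no longer fit
```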
Fixed-width rationals are less good at 1-3, but are better for 4. The distribution of rationals on the number line is pretty funky, and there's even redundancy because 2/4 and 1/2 represent the same value.
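The redundancy is easy to measure by brute force (a sketch assuming the simplest layout: 8-bit numerator, 8-bit nonzero denominator):

```python
from fractions import Fraction

# 8-bit numerator (0..255) and 8-bit denominator (1..255): count how
# many distinct values the 256 * 255 bit patterns actually produce.
patterns = 0
values = set()
for d in range(1, 256):
    for n in range(256):
        patterns += 1
        values.add(Fraction(n, d))   # Fraction normalizes, so 2/4 == 1/2
print(patterns, len(values))         # a large chunk of patterns are redundant
```

Roughly four patterns in ten name a value some other pattern already names, which is a lot of wasted encoding space for a 16-bit format.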
I don't think optimizing for exact results in more special cases is really the right move in most applications, so I don't think we should be surprised by floating point's dominance. On the other hand, I have worked on a project where we decided exact results in special cases really was important.
Here's a whole post on the issue by an actual hardware designer:
To see the limitations of hardware support, let's first look at what hardware can do to speed things up. Roughly, you can really only do two things:
1. Specialization - save dispatching costs in speed and energy.
2. Parallelization - save time, but not energy, by throwing more hardware at the job.
It's not exactly what you're talking about, but there is some use for "block" floating point, where you manually track the binary point through your arithmetic via shifts, which allows you to explicitly control the precision you need for a given portion of your arithmetic. The code is disgusting, however.
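A toy sketch of the idea (all names here are made up; real block-floating-point code lives in fixed-point DSP kernels and is much uglier):

```python
import math

# One exponent shared by the whole block, integer mantissas per value.
MANT_BITS = 15

def bfp_encode(xs):
    peak = max(abs(x) for x in xs) or 1.0
    exp = math.frexp(peak)[1]            # peak lies in [2**(exp-1), 2**exp)
    scale = 2.0 ** (MANT_BITS - 1 - exp)
    return [round(x * scale) for x in xs], exp

def bfp_decode(mants, exp):
    scale = 2.0 ** (MANT_BITS - 1 - exp)
    return [m / scale for m in mants]

mants, exp = bfp_encode([1.0, 0.5, 100.0])
print(bfp_decode(mants, exp))            # values this simple survive exactly

# The catch: one big value in the block drags the shared exponent up
# and flushes the small values toward zero.
print(bfp_decode(*bfp_encode([1e-4, 100.0])))
```

You get per-block control over where the precision goes, at the cost of threading that exponent bookkeeping through every operation by hand.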
I wonder though if there's a numeric representation that could make trigonometric functions into a shift/mask operation, kind of like multiplying by powers of the base for integers/floats. That would be real neat.
The naive way, 8 bits each for numerator and denominator, doesn’t work as 355 is too big.
If I'm correct about F_n, representing all fractions with denominator 113 or less requires 12 bits, which leaves 4 bits for the whole-number part.
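Checking that by brute force (counting the Farey sequence F_113 directly):

```python
from math import gcd

# Size of the Farey sequence F_113: all reduced fractions a/b in [0, 1]
# with b <= 113. The "+ 1" accounts for 0/1 at the start.
n = 113
size = 1 + sum(1 for b in range(1, n + 1)
                 for a in range(1, b + 1) if gcd(a, b) == 1)
print(size, size.bit_length())   # fits in 12 bits, as claimed
```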
It’s dense around fractions with largish numerators and denominators, but sparser around small numerators, like 0, 1/2 and 1. That seems bad for many applications. It may be able to approximate pi quite well, but how about small angles close to 0?
If you know the range of numbers you’re going to store, just spreading the representations evenly seems like a natural choice -- normal fixed-point reals.
Floating-point seems like a pretty reasonable extension of that, when you don’t know in advance what the scale will be.
Have you thought through the small angles example? Because sin(x) is approximately equal to x for small angles. Then, as x rises further, we enter the densest area of the Farey sequence, so the precision is fine there too.
Re small angles, I don’t quite follow; if I have a couple of small angles, say 0.01 and 0.015, isn’t it useful to have good resolution around 0 so I can represent them reasonably accurately? The Farey sequence seems to have its biggest gap right at that point, between 0 and 1/N.
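One way to see that gap concretely (a quick sketch with Python's fractions module):

```python
from fractions import Fraction

# Gaps between neighbors in the Farey sequence F_10: the widest gap
# sits right at the ends, between 0 and 1/10 (and, by symmetry,
# between 9/10 and 1) -- exactly where small angles live.
n = 10
farey = sorted({Fraction(a, b) for b in range(1, n + 1)
                               for a in range(b + 1)})
gaps = [hi - lo for lo, hi in zip(farey, farey[1:])]
print(max(gaps))   # 1/10, and it occurs at the 0 end
```

Adjacent Farey fractions p/q < r/s differ by exactly 1/(qs), which is minimized deep in the interior and maximized at the endpoints, so the resolution really is worst right around 0.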