Runtime would be nice, but ... that's basically what Tokio and the other async frameworks are. What's needed is better/more runtime(s), better support for eliding stuff based on the runtime, etc.
It seems very hard to pick a good 'number' (JS's is actually a double-precision 64-bit IEEE 754 float, which almost never feels right).
Yes, that's true - "number" is probably broader than I'd really want. That said, Python's "int", "float" and "decimal" options (although decimal isn't really first class in the same way the others are) feel like a nice balance. But again, it's interesting that even that is probably a bias towards the type of problems I work with vs. other people who want more specificity.
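For reference, the three Python options being discussed, side by side (a minimal sketch using only the standard library; the Decimal default precision shown is the interpreter's, not something I've configured):

```python
from decimal import Decimal

print(2**100)          # int: arbitrary precision by default, no overflow
print(1 / 3)           # float: IEEE 754 binary64, prints 0.3333333333333333
print(Decimal(1) / 3)  # Decimal: base-10 arithmetic with configurable precision
```

The asymmetry is visible even here: int and float are literals, while Decimal requires an explicit constructor call, which is what "not really first class" means in practice.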
The key, though, is probably to have a strong Number interface, where the overhead of it being an object is compiled away, so you can easily switch out different implementations, optimize to a more concrete type at AOT/JIT time, and have clear semantics for conversion when different parts of the system want different concrete numeric types. You can then have any default you want (an arbitrary-precision library, decimal, whatever), but easily change the declaration and get all the benefits without needing to modify downstream code that respects the interface and doesn't rely on a more specific type (which would be enforced by the type system and thus not silent if incompatible).
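Python's structural typing can sketch the shape of this idea, though without the compile-away optimization part. Here `NumberLike` and `total` are hypothetical names, not any real API; the point is that downstream code written against the interface is untouched when the concrete representation changes:

```python
from __future__ import annotations
from decimal import Decimal
from fractions import Fraction
from typing import Protocol, runtime_checkable

@runtime_checkable
class NumberLike(Protocol):
    # Hypothetical interface: anything that supports + and *
    def __add__(self, other): ...
    def __mul__(self, other): ...

def total(values: list[NumberLike]) -> NumberLike:
    # Downstream code depends only on the interface, so the concrete
    # numeric type can be swapped without editing this function.
    acc = values[0]
    for v in values[1:]:
        acc = acc + v
    return acc

print(total([1, 2, 3]))             # int backend: 6
print(total([Fraction(1, 3)] * 3))  # Fraction backend: 1
print(total([Decimal("0.1")] * 3))  # Decimal backend: 0.3
```

A real implementation of the idea would also need the conversion semantics described above; this sketch sidesteps that by never mixing backends in one call.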
"Number" implies at least the reals, which aren't computable so that's right out. Hans Boehm's "Towards an API for the Real Numbers" is interesting and I've been gradually implementing it in Rust, obviously (as I said, they aren't computable) this can't actually address the reals, but it can make a bunch of numbers humans think about far beyond the machine integers, so that's sometimes useful.
Python at least has bignum integers, but its "float" is just Rust's f64 (the 64-bit machine numbers again, but wearing a funny hat), not even a decimal bignum, and Decimal isn't much better.
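Both halves of that complaint are easy to demonstrate with the standard library: `float` is exactly the hardware binary64, and Decimal only helps if you never let a float near it.

```python
import sys
from decimal import Decimal

# Python's float is the hardware binary64, same as Rust's f64
print(sys.float_info.mant_dig)  # 53 bits of mantissa
print((0.1 + 0.2) == 0.3)       # False

# Decimal helps only when constructed from strings; building one
# from a float just freezes the binary error in base-10 form
print(Decimal(0.1))             # 0.1000000000000000055511151231257827...
print(Decimal("0.1") + Decimal("0.2") == Decimal("0.3"))  # True
```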
I would argue that what "number" implies depends on who you are. To a mathematician it might imply "real" (but then why not complex? etc), but to most of us a number is that thing that you write down with digits - and for the vast majority of practical use cases in modern programming that's a perfectly reasonable definition. So, basically, rational numbers.
The bigger problem is precision. The right thing there, IMO, is to default to infinite (like Python does for ints but not floats), with the ability to constrain as needed. It is also obviously useful to be able to constrain the denominator, e.g. to something like powers of 10 for decimal behavior.
The internal representation really shouldn't matter that much in most actual applications. Let game devs and people who write ML code worry about 32-bit ints and 64-bit floats.
Probably most people want accurate fractions (1/3), so they likely want Rationals. Of course machine-adjacent minds probably immediately want to optimize it to something faster.
I've never needed accurate fractions, except for one case where I should have stored the width and height of the image as the original figures instead of trying to represent the aspect ratio as a fraction. But that's not a big issue; there's literally no downside to keeping the width and height of the image as integers.
I see no reason why I would need to represent it as an accurate fraction instead of two numbers. Even if I divide later, I can always do that inaccurately, since the exact aspect ratio doesn't matter for resizing images (a <1% error won't affect the final result).
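Concretely, the integer-only approach looks like this (`resize_to_width` is a hypothetical helper, not any library's API); rounding happens exactly once, at the end, so the intermediate division error never compounds:

```python
def resize_to_width(width: int, height: int, new_width: int) -> tuple[int, int]:
    # Plain integers in, plain integers out; the aspect ratio is
    # never stored as a fraction, just recomputed when needed.
    new_height = round(height * new_width / width)
    return new_width, new_height

print(resize_to_width(1920, 1080, 1280))  # (1280, 720)
print(resize_to_width(4032, 3024, 800))   # (800, 600)
```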