No you can't.
> What's your point?
Floats really do have lots of advantages over exact real numbers. Being able to compute transcendental functions is one of them. There are others.
They're freaking fast: my cheap laptop with an integrated GPU can do almost 1 TFLOPS. They're fixed-size, i.e. you can handle them without dynamic memory allocation.
They have disadvantages of their own, though: for instance, if two of them are equal, then in general you can't determine that they are in finite time.
This is arguably a problem with floating point as well. Two computations F and G which ought to produce the same result can end up differing because of rounding errors, so equality testing over floating point is inadvisable too. Also, inequality of real numbers is still semi-decidable, which is useful.
People keep bringing up these problems around not being able to compute discontinuous functions, but those problems still manifest over the floating-point numbers, in the sense that a small rounding error in the input can drastically change the output. This obviously isn't a problem over the rational numbers, but many operations, like square root, are not possible over the rationals.
While both rationals and symbolics have their use, saying that floats don’t have any advantages over them is just wrong.
1. I did acknowledge that floating point numbers are faster. I expressly stated that they do have some advantages over real numbers.
2. I'm not talking about symbolic computation.
Yes you did. And then you said it’s their only advantage. It’s not the only one.
For example, in some environments, dynamic memory is undesirable. Especially since you mentioned physical sensors: many real-life sensors are not directly connected to PCs but instead are handled by MCU chips.
> I'm not talking about symbolic computation.
The numbers you’re talking about are pretty close to symbolics in many practical aspects. From the Wikipedia article “Computable number”:
> equivalent definitions can be given using μ-recursive functions, Turing machines, or λ-calculus
I see this all the time with developers like you who have zero embedded experience: you don't seem to understand that in many embedded environments, dynamic memory allocation is simply not possible. You allocate all memory at the start of the program, and that's it. So doing math the way you suggest is completely impossible.
And ultimately, I'm interested in beautiful things as well as practical things. I fully acknowledged that the idea here is impractical, though theoretically possible, with some very interesting theorems coming out of it. I don't want to always think like a hard-nosed practitioner. That's stupidly dogmatic.
Const-me is complaining that I didn't mention his pet interest, which is GPGPU. And then he said some totally irrelevant and obvious stuff (to anyone who knows CS), just like you did. This is why we can't have nice things.
I’ve been programming for a living for a couple of decades now, and I like writing efficient code. Not all ideas, however beautiful they might seem to some people, are good enough for actual software that people use to accomplish stuff. This idea is one of them. Spending thousands of CPU cycles to compute a = b*c is extremely inefficient in the vast majority of cases. Even a hardware-based implementation won’t be efficient; see what happened to Lisp machines.
Apparently, that thing that you find beautiful, I find impractical and therefore only applicable to a narrow set of problems and environments.
Speaking of environments: even on PC, I code for GPGPU as well. Due to the extreme level of parallelism, GPU-running code has almost no support for dynamic memory. There are append/consume buffers, but they’re very limited and not actually dynamic, just static buffers with a thread-safe cursor over them.