The UI is awesome, amazing work! However, arbitrary precision implies there is no fixed upper limit on the number of digits - simple tests like `0.1 + 0.2 == 0.3` (which yields "false") and `2^53 == 2^53 + 1` (which yields "true") indicate you're still using IEEE 754 double-precision floats.
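For reference, here's what plain IEEE 754 doubles do with those two expressions - a std-only Rust sketch you can run as-is:

```rust
fn main() {
    // 0.1, 0.2, and 0.3 have no exact binary representation,
    // so the rounded sum differs from the rounded literal.
    println!("{}", 0.1_f64 + 0.2_f64 == 0.3_f64); // false

    // 2^53 + 1 is the first integer a double cannot represent;
    // it rounds back down to 2^53, so the comparison holds.
    let x = 2.0_f64.powi(53);
    println!("{}", x == x + 1.0); // true
}
```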
If "arbitrary precision" is not as important to you as "high precision", a 128 bit decimal has enough precision for 99% of real-world applications.
Thanks for checking it out! I should have been clearer that this is actively being worked on. Arbitrary precision is ultimately the goal, and I'm currently working on integrating `astro_float` as the base number type.
In the previous version of this comment (where I was still reading it incorrectly) I added a fun fact: the significand of an IEEE 754 double-precision float is allocated only 52 bits, but because every normalized binary significand starts with 1, that leading bit can be left implicit - the "hidden bit trick" - giving 53 bits of effective precision.
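A std-only sketch of that fun fact: pull the stored 52-bit field out of a double and watch the implicit 53rd bit at work.

```rust
fn main() {
    // A double stores sign (1 bit), exponent (11 bits), and
    // significand (52 bits): 1 + 11 + 52 = 64.
    let bits = 1.5_f64.to_bits();
    let stored_significand = bits & ((1u64 << 52) - 1);
    // 1.5 is 1.1 in binary; the leading 1 is implicit, so only
    // the fractional ".1" is stored (top bit of the 52-bit field).
    assert_eq!(stored_significand, 1u64 << 51);

    // The hidden bit buys a 53rd bit of precision: 2^53 - 1 and
    // 2^53 are exact, but 2^53 + 1 rounds back down to 2^53.
    let p = 2.0_f64.powi(53);
    assert_eq!(p - 1.0, 9007199254740991.0);
    assert_eq!(p + 1.0, p);
}
```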
Hey nice! We have similar interests. I built something similar, but with way less calculator functionality than you did :D
But the main idea I was going for was real-time JIT evaluation with rendered errors (specifically to learn and use the cranelift JIT) - it had less to do with the calculator aspect.
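In case it's useful to anyone reading along, here's roughly what the core of that looks like with `cranelift-jit` - a sketch following the cranelift-jit-demo pattern, not my actual project code. The API shifts between Cranelift versions, and the function name `eval` is just a placeholder. It compiles `0.1 + 0.2` to native code and calls it:

```rust
use cranelift::prelude::*;
use cranelift_jit::{JITBuilder, JITModule};
use cranelift_module::{Linkage, Module};

fn main() {
    // Build a JIT module targeting the host ISA.
    let builder = JITBuilder::new(cranelift_module::default_libcall_names())
        .expect("host ISA should be supported");
    let mut module = JITModule::new(builder);

    // Declare a function `() -> f64` and fill in its body.
    let mut ctx = module.make_context();
    ctx.func.signature.returns.push(AbiParam::new(types::F64));

    let mut fn_builder_ctx = FunctionBuilderContext::new();
    {
        let mut b = FunctionBuilder::new(&mut ctx.func, &mut fn_builder_ctx);
        let block = b.create_block();
        b.switch_to_block(block);
        b.seal_block(block);

        // Emit `0.1 + 0.2` as IEEE 754 double arithmetic.
        let lhs = b.ins().f64const(0.1);
        let rhs = b.ins().f64const(0.2);
        let sum = b.ins().fadd(lhs, rhs);
        b.ins().return_(&[sum]);
        b.finalize();
    }

    let id = module
        .declare_function("eval", Linkage::Export, &ctx.func.signature)
        .unwrap();
    module.define_function(id, &mut ctx).unwrap();
    module.clear_context(&mut ctx);
    module.finalize_definitions().unwrap();

    // Cast the finalized code pointer to a callable function.
    let code = module.get_finalized_function(id);
    let eval = unsafe { std::mem::transmute::<*const u8, fn() -> f64>(code) };
    println!("{}", eval()); // 0.30000000000000004
}
```

Rebuilding and re-running a tiny function like this on every edit is what makes the real-time part feel instant.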
If "arbitrary precision" is not as important to you as "high precision", a 128 bit decimal has enough precision for 99% of real-world applications.