This is incorrect. They said "Both
use r = 8 and require hundreds to thousands of digits of precision in θ." That's hundreds to thousands, not hundreds of thousands.
It's still a ton. For reference, a single-precision (32-bit) float has about 6-7 decimal digits of precision, double (your typical float in most things that aren't neural nets) has 15-16, and quad has about 34.
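If you want to sanity-check those digit counts yourself, they fall straight out of the significand widths (24, 53, and 113 bits for IEEE single/double/quad) times log10(2):

```python
import math

# Decimal digits of precision ~= significand bits * log10(2)
for name, bits in [("single", 24), ("double", 53), ("quad", 113)]:
    digits = bits * math.log10(2)
    print(f"{name}: ~{digits:.1f} decimal digits")
```

So "hundreds to thousands of digits" means working at roughly 10-100x quad precision, which is arbitrary-precision-library territory, not hardware floats.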
So it's totally impractical, yeah, but I just wanted to point out that it's not nearly as nuts as you emphasized.