
I'm the main author of Arb. Note that it's an arbitrary-precision library. It's ~100x slower than hardware floating-point because it uses arbitrary-precision floating-point numbers implemented entirely in software. But if you have to do arbitrary-precision arithmetic to begin with, Arb's error tracking only adds negligible further overhead.
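(Arb tracks errors with ball arithmetic: each number is a midpoint plus a radius bounding the error. A minimal machine-precision sketch of why this is cheap, in Python rather than Arb's C, with `ball_add` as an illustrative name of my own; the only per-operation cost beyond the midpoint sum is a couple of radius additions:)

```python
import math

def ball_add(ma, ra, mb, rb):
    """Add two balls [ma +/- ra] and [mb +/- rb].

    The result radius absorbs the input radii plus one ulp of the
    midpoint sum, which bounds the rounding error of the hardware add.
    """
    m = ma + mb
    r = ra + rb + math.ulp(m)  # math.ulp(m) >= |fl(ma+mb) - (ma+mb)|
    return m, r
```

For example, `ball_add(1.0, 0.0, 0.1, 0.0)` returns a ball guaranteed to contain the exact sum of the two input doubles, even though `1.0 + 0.1` is not exactly representable.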

For machine precision, I believe ordinary interval arithmetic is still the best way to go. Unfortunately, this not only uses twice as much space; the time overhead can be enormous on current processors due to switching rounding modes (there are proposed processor improvements that would alleviate this problem). However, the better interval libraries batch operations to minimize such overhead, and it's even possible to write kernel routines for things like matrix multiplication and FFT that run just as fast as the ordinary floating-point versions (if you sacrifice some tightness of the error bounds).
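(One common way to sidestep rounding-mode switches entirely, at the cost of slightly looser bounds, is to widen each endpoint outward by one ulp instead of using directed rounding. A sketch in Python, which exposes no way to change the FPU rounding mode anyway; `iadd` is an illustrative name of my own, not from any particular interval library:)

```python
import math

def iadd(a, b):
    """Add intervals a = (lo, hi) and b = (lo, hi).

    Outward rounding is emulated by nudging each endpoint one ulp
    toward the appropriate infinity with math.nextafter, so no
    rounding-mode switch is needed per operation.
    """
    lo = math.nextafter(a[0] + b[0], -math.inf)  # round sum down
    hi = math.nextafter(a[1] + b[1], math.inf)   # round sum up
    return (lo, hi)
```

The resulting interval is at most one ulp wider per endpoint than what true directed rounding would give, which is the kind of tightness-for-speed trade-off mentioned above.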

Regarding the article, using a more compact encoding for intervals is a fairly old idea and I'm not really sure what is novel here.



> It's ~100x times slower than hardware floating-point because of using arbitrary-precision floating-point numbers implemented entirely in software.

Thanks for the numbers. How did you get that estimate? Did you consider SIMD?



