
Here is the official site[1] of the project. There is a request[2] to add it to Scryer Prolog[3] (an ISO Prolog implementation in Rust), and there are implementations in Rust[4] itself and in the Julia[5] language.

[1] https://posithub.org/index

[2] https://github.com/mthom/scryer-prolog/issues/6

[3] https://github.com/mthom/scryer-prolog

[4] https://gitlab.com/burrbull/softposit-rs

[5] https://juliacomputing.com/blog/2016/03/29/unums.html




He will keep failing to replace IEEE floating point as long as he insists on making NEGATIVE infinity the same as POSITIVE infinity.

Also, the IEEE 754 floating-point standard guarantees the results of addition, subtraction, multiplication, division, and square root to be the exact correctly rounded values, i.e. deterministic results, contrary to what he says.
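
For example, in a conforming implementation even the bit pattern of a basic operation's result is fully determined. A quick check in Rust (the constants below are the correctly rounded results, identical on every conforming platform):

    fn main() {
        // IEEE 754 requires +, -, *, /, sqrt to return the exactly
        // rounded result, so these bit patterns are reproducible.
        let sum = 0.1_f64 + 0.2_f64;
        assert_eq!(sum, 0.30000000000000004);
        assert_eq!(sum.to_bits(), 0x3FD3_3333_3333_3334);
        assert_eq!(2.0_f64.sqrt().to_bits(), 0x3FF6_A09E_667F_3BCD);
    }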


Not so sure. I mean, yes, what you say is right, but there are problems nevertheless; see e.g. Wikipedia:

> Reproducibility

> The IEEE 754-1985 allowed many variations in implementations (such as the encoding of some values and the detection of certain exceptions). IEEE 754-2008 has tightened up many of these, but a few variations still remain (especially for binary formats). The reproducibility clause recommends that language standards should provide a means to write reproducible programs (i.e., programs that will produce the same result in all implementations of a language), and describes what needs to be done to achieve reproducible results.

https://en.wikipedia.org/wiki/IEEE_754#Reproducibility


Reproducibility is an orthogonal issue to Posits vs IEEE754.

Most developers prefer speed over reproducibility, and are encouraged to use denormals-to-zero, fast-math optimizations, fused multiply-add, approximate square-root functions, and whatever else is available to gain performance.
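
Fused multiply-add alone already changes results bit-for-bit, because it rounds once instead of twice. A small Rust illustration using f64::mul_add:

    fn main() {
        let x = 1.0_f64 + f64::EPSILON;  // 1 + 2^-52
        let p = x * x;                   // true square is 1 + 2^-51 + 2^-104,
                                         // rounded to 1 + 2^-51
        // Multiply then subtract: two roundings, the error term is lost.
        assert_eq!(x * x - p, 0.0);
        // Fused multiply-add: x*x - p is computed exactly and rounded once,
        // recovering the 2^-104 the unfused version threw away.
        assert_eq!(x.mul_add(x, -p), 2.0_f64.powi(-104));
    }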

The IEEE 754 standard provides a guarantee of deterministic results, and many multi-precision and interval arithmetic libraries depend on this guarantee to function properly.
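
One concrete example of that dependence: Knuth's TwoSum, the error-free transformation underneath double-double arithmetic and compensated summation, is only correct because IEEE 754 addition and subtraction are correctly rounded:

    // Returns (s, e) with s = fl(a + b) and a + b == s + e exactly.
    // The exactness proof relies on correctly rounded +/- (and no overflow).
    fn two_sum(a: f64, b: f64) -> (f64, f64) {
        let s = a + b;
        let b_virtual = s - a;
        let a_virtual = s - b_virtual;
        let e = (a - a_virtual) + (b - b_virtual);
        (s, e)
    }

    fn main() {
        let (s, e) = two_sum(1.0, 1e-20);
        assert_eq!(s, 1.0);   // the small term vanishes in the rounded sum
        assert_eq!(e, 1e-20); // but is recovered exactly in the error term
    }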

IEEE 754 defines distinct -infinity and +infinity values, and any "new and improved" standard that breaks this axiom is simply incompatible with all the existing floating-point libraries written over the last 30+ years.
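
Concretely, existing code observes the sign of infinity all the time; with a single unsigned infinity (or posits' single exception value), answers like these change:

    fn main() {
        // IEEE 754 preserves the sign of infinity:
        assert_eq!(1.0_f64 / 0.0, f64::INFINITY);
        assert_eq!(-1.0_f64 / 0.0, f64::NEG_INFINITY);
        assert!(f64::NEG_INFINITY < f64::INFINITY);

        // Library code relies on the distinction, e.g. atan(+/-inf) = +/-pi/2:
        assert_eq!(f64::INFINITY.atan(), std::f64::consts::FRAC_PI_2);
        assert_eq!(f64::NEG_INFINITY.atan(), -std::f64::consts::FRAC_PI_2);
    }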

"These claims pander to Ignorance and Wishful Thinking." Kahan (main author of IEEE754) on Posits claims.


> Most developers prefer

You're free to voice your own opinion, but I take some issue with people asserting theirs as if they speak for "most developers". Especially if it comes from a new account with a name like "Gustafnot". That doesn't exactly scream "unbiased" to me.


Having written scientific numerical software for decades, having been in situations where I wanted high speed or I wanted reproducibility, and having worked with hundreds of developers, I agree wholeheartedly with Gustafnot: the vast majority of developers prefer performance over bitwise reproducibility. Lose a few bits here and there and most don't care, because they treat floats as fuzzy to begin with (and almost never care about reproducibility, since it's very hard to obtain due to compilers, libraries, etc.). But slow down code, and they sure notice quickly.

If you really want to claim the opposite, do you have evidence? Or experience that it’s true?


> But slow down code, and they sure notice quickly.

Yet electron is not only a thing for hobbyists, but something even large companies bet their livelihood on.


He argues that underflow/overflow are usually caused by bugs in the code. Having so many special numbers means giving up part of the numeric representation (there are a lot of NaN encodings in IEEE 754), and it adds hardware overhead to deal with all the special cases. That small hardware overhead can add up when working with thousands of FPUs.
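
The NaN point is easy to quantify: in binary32, every pattern with an all-ones exponent and a nonzero significand is a NaN, which spends roughly 16.8 million of the 2^32 encodings on NaNs, where posits reserve a single exception pattern. A quick Rust check:

    fn main() {
        // binary32 NaNs: 2 signs x (2^23 - 1) nonzero significands
        let by_formula = 2 * ((1u32 << 23) - 1);
        assert_eq!(by_formula, 16_777_214);

        // Exhaustive confirmation over all 2^32 bit patterns
        // (slow, but it terminates):
        let by_count = (0..=u32::MAX)
            .filter(|&bits| f32::from_bits(bits).is_nan())
            .count();
        assert_eq!(by_count, by_formula as usize);
    }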

The vast majority of applications don't need fine control of the FPU, and there will always be hardware for the few applications that do need the IEEE 754 features.


That's great, but if the hardware doesn't support it, then wouldn't the implementation be slow?


Besides claimed efficiency improvements, posits are claimed to have advantages over IEEE floats for implementing numerical algorithms correctly. A big part of Gustafson's book The End of Error is devoted to that, claiming to show that a range of numerical algorithms are implemented better with posits than with IEEE floats (better meaning some mix of clearer code, more easily achieved stability, better error bounds, etc.). It's something of a competitor to interval arithmetic in that respect.[1]

A 'softposit' implementation can at least let people experiment with writing posit algorithms to investigate the claimed algorithm-design benefits, even if it's not going to beat hardware floats in speed.

[1] The interval arithmetic people don't seem very happy with his comparisons though: http://frederic.goualard.net/publications/MR3329180.pdf
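
For what it's worth, experimenting along those lines with the softposit-rs crate linked in the first comment looks roughly like this — a minimal sketch assuming, per that repo's README, a P32 type (32-bit posit) convertible to/from f64 with the usual arithmetic operators and Display:

    // Assumes the softposit crate from the linked GitLab repo is
    // declared as a dependency in Cargo.toml.
    use softposit::P32;

    fn main() {
        let a = P32::from(12.3);
        let b = P32::from(154.0);
        let c = a + b;  // posit addition, done in software
        println!("c = {} (as f64: {})", c, f64::from(c));
    }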


Yes, those implementations are slow compared to classical hardware floating point. The goal, as I understand it, is to let people experiment with posits in order to prove that they bring something to the table.


As far as I can tell from https://posithub.org/docs/Posits4.pdf, there are FPGA implementations. Also, on systems where floating-point arithmetic isn't supported in hardware (e.g. the Arduino Uno), you rely on soft FP anyway.


A benchmark analysis of accuracy and speed on AVR platforms would indeed be very interesting, and an actual, immediately possible implementation scenario...



