
I look forward to the end of the float / double.

With variable-sized mantissa and exponent fields, along with error tracking, we'll get rid of that baloney for real.

http://sites.ieee.org/scv-cs/files/2013/03/Right-SizingPreci...

http://www.amazon.com/The-End-Error-Computing-Computational/...




Ok, so tell me, what is the "right size" that will make 0.1 + 0.2 == 0.3?

Because, in binary, I don't think you'll be able to answer.
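
A minimal Python sketch of the point (assuming standard IEEE 754 doubles): 0.1 and 0.2 have repeating binary expansions, so no finite binary precision makes the sum exactly 0.3.

  from fractions import Fraction

  # 1/10 has a factor of 5 in the denominator, so its binary expansion
  # repeats forever; the stored double is the nearest representable value.
  print(Fraction(0.1))        # 3602879701896397/36028797018963968
  print(0.1 + 0.2 == 0.3)     # False
  print(f"{0.1 + 0.2:.20f}")  # 0.30000000000000004441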


Computers are here to help us. Creating abstractions is the name of the game.

"Technically correct" isn't the kind of correct we, as humans, want here. Plenty of languages get it right the way you would naively expect already, as shown here http://0.30000000000000004.com


That's a completely different problem, and really, I have no idea why it keeps coming up in the comments.

Yes, some languages print floating-point numbers in a friendly way by default (and if you're only looking at the default output format, almost anything will do). In what way does that change equality tests?
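
A minimal Python sketch of the distinction (the format string here just stands in for any language's "friendly" default output): formatting changes what you see, not what is stored or compared.

  x = 0.1 + 0.2
  print(f"{x:.2f}")  # 0.30  -- the friendly printed form
  print(x == 0.3)    # False -- the stored bits are unchanged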


Not only print, but store too. If the result is mathematically accurate, then what does it matter how it works inside? That's why we use abstractions: for convenience. Yes, there are trade-offs, but that's a price we should be willing to pay.

https://en.wikipedia.org/wiki/DWIM


What should computers do? Round after every operation?


There are many ways to solve this; most programming languages already have BigDecimal libraries, and some have rational number types. Raw floats shouldn't be the default.
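
A minimal sketch with Python's standard decimal and fractions modules (other languages' BigDecimal and rational types behave similarly):

  from decimal import Decimal
  from fractions import Fraction

  # Decimal works in base 10, so 0.1 is stored exactly
  print(Decimal("0.1") + Decimal("0.2") == Decimal("0.3"))     # True

  # Fraction does exact rational arithmetic
  print(Fraction(1, 10) + Fraction(2, 10) == Fraction(3, 10))  # True

  # Constructing from a float inherits the float's rounding error
  print(Decimal(0.1))  # 0.1000000000000000055511151231257827...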


If you want to "get rid of that baloney," I think what you want is https://en.wikipedia.org/wiki/Decimal64_floating-point_forma...

Rational arithmetic has its upsides too, but it also has downsides.
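
One such downside, sketched with Python's fractions.Fraction (any exact rational type has the same issue): numerators and denominators can grow without bound, so each operation gets more expensive, and irrational results still have to be approximated.

  from fractions import Fraction

  x = Fraction(1, 3)
  for _ in range(10):
      x = x * (1 - x)  # logistic-style update: the denominator squares each step

  print(x.denominator.bit_length())  # ~1600 bits after only 10 iterations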


Floats/doubles aren't going anywhere until there's widespread hardware support for their replacements.



