
Thing is, the practical cases where you want to represent amounts exactly all involve decimals, not binary. If you default to decimals and someone unknowingly uses them in a situation where binary floats would be better, they still get the results they expected, just slower. But if you default to binary floats and someone unknowingly uses them in a situation where an exact decimal amount is needed, it can actually produce incorrect results (incorrect in the sense defined by the functional spec). Defaults should be safe, and performance optimizations that risk correctness should be opt-in.
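A minimal Python sketch of the difference, using the standard-library decimal module (the specific amounts are just illustrative):

    from decimal import Decimal

    # Binary floats cannot represent 0.1 or 0.2 exactly, so the sum
    # picks up rounding error and the exact comparison fails.
    print(0.1 + 0.2 == 0.3)   # False
    print(0.1 + 0.2)          # 0.30000000000000004

    # Decimal represents the same amounts exactly, so arithmetic on
    # money-like quantities gives the result the spec expects.
    print(Decimal("0.1") + Decimal("0.2") == Decimal("0.3"))  # True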



