
I agree, although strictly speaking I think conventional decimal representation should be unambiguous, i.e. there is always one float that is closest to the decimal value (exact halfway cases are resolved deterministically by round-to-even).
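For example (a minimal C sketch of my own, not from the comments): a correctly rounded parser such as strtod maps any decimal string to the unique nearest double, which is what makes the decimal form unambiguous:

    /* Sketch: strtod rounds correctly, so two decimal strings whose
       nearest double is the same parse to identical bits. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    int main(void) {
        double a = strtod("0.1", NULL);
        /* the same double, written with more (still inexact) digits */
        double b = strtod("0.1000000000000000055511151231257827", NULL);
        printf("same bits: %d\n", memcmp(&a, &b, sizeof a) == 0); /* 1 */
        printf("%.17g\n", a); /* 0.10000000000000001 round-trips exactly */
        return 0;
    }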



Strictly speaking, unambiguous decimal representation - such as Ryu and its less-efficient predecessors - is unconventional, but I agree that it's correct and should be the default. (That's the "0.1 + 0.2 = 0.30000000000000004 ≠ 0.3" in my comment.) My point is that it should also be easy to get a natural representation that corresponds closely to the actual bits of the float, in roughly the same way that 0xAAAB corresponds closely to the actual bits of (int16_t)-21845, and so gives you a chance to notice that something weird is going on when, for example, you encounter a multiplication by -21845. (Something like hex(*(uint64_t*)&x) can work, but makes printf %A look beautifully readable by comparison. Also most languages - especially interpreted ones - manage to be inferior to C in this respect.)
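To illustrate the bit-inspection idea (my sketch, using memcpy as the strict-aliasing-safe spelling of the pointer cast above, alongside printf's hex-float format):

    #include <stdio.h>
    #include <string.h>
    #include <stdint.h>
    #include <inttypes.h>

    int main(void) {
        double x = 0.1;
        uint64_t bits;
        memcpy(&bits, &x, sizeof bits);  /* well-defined type pun */
        printf("raw bits:  0x%016" PRIx64 "\n", bits); /* 0x3fb999999999999a */
        printf("hex float: %a\n", x);    /* 0x1.999999999999ap-4 */
        printf("decimal:   %.17g\n", x); /* 0.10000000000000001 */
        return 0;
    }

The %a output exposes the exact significand and exponent, so the trailing ...999...a pattern makes it immediately visible that 0.1 is not exactly representable.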



