I think you're getting something mixed up. Until you get to numbers so big that n + 1 == n, the digits to the left of the decimal point are represented exactly. They have exactly the same precision as fixed point.
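
A quick C sketch of that threshold (assuming IEEE 754 doubles, whose significand holds 53 bits, so 2^53 is the first point where n + 1 == n):

    #include <stdio.h>

    int main(void) {
        double n = 9007199254740992.0;  /* 2^53: the first n where n + 1 == n */
        printf("%d\n", n + 1.0 == n);   /* 1: the +1 is rounded away */

        double m = n - 1.0;             /* 2^53 - 1: still an exact integer */
        printf("%d\n", m + 1.0 == m);   /* 0: below the threshold, the add is exact */
        return 0;
    }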

Ignore how the mantissa looks by itself. Once the exponent is applied, everything to the left of the decimal point is an integer. That's why the mantissa and the exponent use the same base: the exponent losslessly shifts part of the mantissa into the integer range. Any particular floating point number is exactly equivalent to some fixed point number.
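
To make that concrete, a small sketch (assuming IEEE 754 doubles; frexp and ldexp are standard <math.h>) that recovers the integer m and exponent e with x == m * 2^e exactly:

    #include <stdio.h>
    #include <math.h>

    int main(void) {
        double x = 6.125;                       /* arbitrary example value */
        int e;
        double f = frexp(x, &e);                /* x == f * 2^e, with 0.5 <= f < 1 */
        long long m = (long long)ldexp(f, 53);  /* shift all 53 significand bits into integer range */
        e -= 53;
        printf("%g == %lld * 2^%d\n", x, m, e); /* 6.125 == 6896136929411072 * 2^-50 */
        return 0;
    }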

My point is that differentiating the integral and fractional parts of a floating point number makes no sense. Some operations lose a constant amount of precision; others lose more precision the further apart the numbers are. None of them act differently on the integral and fractional parts of the number.
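
For example (a sketch assuming IEEE 754 doubles on a typical x86-64 build), the same addition can be exact or lose an operand entirely, depending only on how far apart the magnitudes are:

    #include <stdio.h>

    int main(void) {
        printf("%.1f\n", (1e16 + 1.0) - 1e16);  /* 0.0: the +1 was absorbed */
        printf("%.1f\n", (1e2 + 1.0) - 1e2);    /* 1.0: exact at this scale */
        return 0;
    }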

Now, of course, there's a fixed point representation for each possible floating point number. But operations on fixed point numbers act differently on the integral and fractional parts of the numbers (in part because a pair of fixed point numbers of the same type can never be so far apart that n + 1 == n).


>But operations on fixed point numbers act differently on the integral and fractional parts of the numbers

Not really. Not in any way different from floating point. Can you explain this point? The efficient way of doing fixed point is to store it as a single scaled number, which is also how floating point stores the mantissa. You could split it into two numbers, but you could equally split a mantissa into multiple numbers if you really wanted; it doesn't change the math.
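
A minimal sketch of that single-scaled-number approach, using a made-up 16.16 format in an int32_t (value = raw / 2^16; the names fix16, FIX16_ONE, and fix16_mul are just for illustration). The integral and fractional bits go through the same machinery:

    #include <stdint.h>
    #include <stdio.h>

    typedef int32_t fix16;              /* 16.16 fixed point: value = raw / 2^16 */
    #define FIX16_ONE (1 << 16)

    static fix16 fix16_mul(fix16 a, fix16 b) {
        /* widen, multiply, then shift the doubled scale factor back out */
        return (fix16)(((int64_t)a * b) >> 16);
    }

    int main(void) {
        fix16 x = 3 * FIX16_ONE + FIX16_ONE / 2;              /* 3.5 */
        fix16 y = 2 * FIX16_ONE;                              /* 2.0 */
        printf("%f\n", fix16_mul(x, y) / (double)FIX16_ONE);  /* 7.000000 */
        return 0;
    }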

>(in part because a pair of fixed point numbers of the same type can never be so far apart that n + 1 == n)

I'd disagree there. You can have fixed point denominated in 100s as easily as you can have it denominated in 0.01s.
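
Sketch of the 100s case (a hypothetical type where value = raw * 100): converting 1 into raw units truncates to 0, so n + 1 == n happens there too:

    #include <stdio.h>

    int main(void) {
        long n_raw = 5;          /* represents the value 500 */
        long one_raw = 1 / 100;  /* 1 in units of 100: truncates to 0 */
        printf("%d\n", n_raw + one_raw == n_raw);  /* 1: n + 1 == n */
        return 0;
    }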

>Some operations lose a constant amount of precision; others lose more precision the further apart the numbers are. None of them act differently on the integral and fractional parts of the number.

Precision is only 'lost' relative to other floating point operations. A long double won't ever have less precision than a 32.32 fixed point number, unless you're using numbers that wouldn't fit in 32.32 in the first place.
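
A sketch of why (assuming an x87-style long double with a 64-bit significand, as on typical x86 Linux toolchains; MSVC and most ARM targets make long double a plain 64-bit double, where this wouldn't hold): any 32.32 value is raw / 2^32 with raw fitting in 64 bits, so it round-trips exactly:

    #include <stdint.h>
    #include <stdio.h>

    int main(void) {
        int64_t raw = INT64_MAX - 12345;                   /* arbitrary 32.32 bit pattern */
        long double x = (long double)raw / 4294967296.0L;  /* divide by 2^32 (exact) */
        int64_t back = (int64_t)(x * 4294967296.0L);       /* multiply back (exact) */
        printf("%d\n", raw == back);                       /* 1: nothing was lost */
        return 0;
    }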

In no way do you have "a single bit" of useful precision. Anything that can corrupt the upper bits of a floating point number can corrupt the upper bits of a fixed point number.
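
For instance (a sketch with an unsigned 16.16 format), a plain add that overflows wraps around and trashes the upper, integral bits:

    #include <stdint.h>
    #include <stdio.h>

    int main(void) {
        uint32_t a = 40000u << 16;                   /* 40000.0 in unsigned 16.16 */
        uint32_t sum = a + a;                        /* wraps mod 2^32 */
        printf("%u.%u\n", sum >> 16, sum & 0xFFFF);  /* 14464.0, not 80000.0 */
        return 0;
    }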
