You can be very careful to avoid the many cases where this can happen, choose a different number format (fixed-point, decimal floating point, numerator/denominator rationals, or arbitrary-precision-mantissa binary floating point; which one depends on your problem domain), or use units that let you work in integers (equivalent to fixed-point, but easier to explain).
Information theory says you can fit 100 values into 7 bits.
If you're not going to use the right encoding, I can also come up with encodings for which 100 bits isn't enough to fit 100 values.
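For the record, the arithmetic behind that claim is a one-liner in Python:

```python
import math

# 100 distinct values need ceil(log2(100)) bits of information, and that is 7:
assert math.ceil(math.log2(100)) == 7
assert 2**6 < 100 <= 2**7  # 64 < 100 <= 128
```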
Fair enough that the title leaves it ambiguous, but the ambiguity is resolved in the first sentence of the post, and further clarified in a lot of detail over the next paragraph and a half.
The whole setup is a problem that shouldn't exist, because you're using 7-bit floats for an inappropriate range. When are you going to run into 7-bit floats as the default? If you're not thinking about it, I suppose this could happen once you scale things up, but then it's less surprising to say "4862057400958023983457609283094859 can't be represented exactly in floats".
In the second section the author clearly shoots down the title, yet chooses to repeat that it 'isn't enough', again as a subheading:
>Seven Bits Looks Like Enough But It Isn’t
When that section clearly demonstrates that it would be enough.
This statement really should be hedged. You can just straight-up do binary-coded decimal and change every pair of decimal digits into 7 bits and back again (bin 0000000 for decimal 00 through bin 1100011 for decimal 99, see reference for chart), end of story, and never suffer any degradation.
I realize this isn't the scheme that is chosen, but that is a choice. Both the title and the quoted sub-heading imply otherwise; i.e. that it isn't enough bits. As shown above clearly it is.
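The pack-two-digits-into-7-bits scheme above can be sketched in a few lines of Python (helper names are mine, not the article's):

```python
def encode_pair(two_digits: str) -> int:
    """Pack a two-digit decimal string '00'..'99' into a 7-bit value."""
    n = int(two_digits)
    assert 0 <= n <= 99
    return n  # 0b0000000 (decimal 00) .. 0b1100011 (decimal 99)

def decode_pair(code: int) -> str:
    """Recover the two decimal digits from a 7-bit value."""
    assert 0 <= code <= 99
    return f"{code:02d}"

assert encode_pair("99") == 0b1100011
assert decode_pair(encode_pair("42")) == "42"  # round-trips losslessly
```

Every one of the 100 pairs round-trips exactly, so a stream of decimal digits never degrades.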
Aside: in this scheme you EVEN have some kind of 28% larger space you can make use of somehow for checksumming or the like!
Naive example: when encoding any value that happens to contain a digit pair below 28, you can also encode parity information:
when encoding, if the two digits you're encoding happen to be 00-27, you ALSO encode checksum value 1 by choosing to add decimal 100 (to arrive at bin 1100100 through 1111111, i.e. decimal 100 through 127), or checksum value 0 by not adding it (i.e. keep bin 0000000 through 0011011, i.e. decimal 00 through 27).
When decoding, if you encounter decimal 100 through 127, subtract 100 from it before recording, and also record a checksum 1; when encountering decimal 00 through 27, record it as-is but also record a checksum value 0.
This means that in 28% of cases (assuming uniformly distributed digit pairs) you gain 1 bit of checksumming information while still perfectly encoding two decimal digits. (If your stream doesn't happen to contain any pairs 00 through 27 then you just don't gain any checksumming information.)
You can use these checksum bits however you like! You can't be guaranteed to have them, but you have a good chance of having them, starting with a 28% chance of having a parity bit in the case of even a single pair of digits!
I'm not an expert on checksums, and the above is just a naive example, but the point is this: not ONLY is 7 bits enough to encode 2 decimal digits: it's even enough to add a parity bit in 28% of input cases.
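Extending the sketch, the parity trick described above looks like this in Python (again hypothetical names; a toy illustration, not a real checksum scheme):

```python
def encode_with_parity(n: int, parity=None) -> int:
    """Encode a digit pair 0..99 in 7 bits; pairs 00..27 can carry
    one extra bit by adding 100 (codes 100..127 still fit in 7 bits)."""
    assert 0 <= n <= 99
    if n <= 27 and parity == 1:
        return n + 100
    return n

def decode_with_parity(code: int):
    """Return (digit_pair, parity_bit_or_None)."""
    assert 0 <= code <= 127
    if code >= 100:
        return code - 100, 1   # carried a 1 bit
    if code <= 27:
        return code, 0         # carried a 0 bit
    return code, None          # 28..99: no room for a side channel

assert decode_with_parity(encode_with_parity(5, 1)) == (5, 1)
assert decode_with_parity(encode_with_parity(5, 0)) == (5, 0)
assert decode_with_parity(encode_with_parity(64)) == (64, None)
```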
It is made clear from the very first sentence of the article that it is about representations that will round-trip between number bases, not whether or not it is possible to encode decimal floating point numbers with two digit accuracy in 7 bits.
A title isn't 'click-baity' just because it's shorter than you think it should be. I think the title was completely fine and the contents of the article were what the title led me to expect, without actually already fully knowing the issue.
And, if all your denominators are powers of ten, you should be using http://en.wikipedia.org/wiki/Decimal_floating_point
They are precise in fixed point representations, but floating point has a single bit as the integer part of the mantissa (and even that one can accumulate error over some sequences of operations).
Ignore how the mantissa looks by itself. Once the exponent gets involved, everything that influences the left of the decimal point is an integer. That's why you use the same base for the mantissa and the exponent. The exponent losslessly shifts part of the mantissa into the integer range. Any particular floating point number is exactly equivalent to a fixed point number.
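You can check this directly in Python: `Fraction` recovers the exact rational value a double stores, and its denominator is always a power of two, i.e. every double is exactly some fixed-point number:

```python
from fractions import Fraction

x = 0.1
exact = Fraction(x)                 # the exact value the double stores
assert exact.denominator == 2**55   # a power of two: integer / 2**55
assert exact == Fraction(3602879701896397, 2**55)
assert float(exact) == x            # converting back is lossless
```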
Now, of course, there's a fixed point representation for each possible floating point number. But operations on fixed point numbers act differently on the integral and fractional parts of the numbers (in part because it's impossible to have a pair of fixed point numbers of the same type, but so different that n+1==n).
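The n+1 == n situation is easy to demonstrate with doubles (a quick sketch):

```python
# Above 2**53 the spacing between adjacent doubles exceeds 1,
# so adding 1 is absorbed by rounding:
n = float(2**53)
assert n + 1 == n   # n + 1 rounds back to n
assert n + 2 != n   # the next representable double is n + 2
```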
Not really, not in any way different from floating point. Can you explain this point? The efficient way of doing fixed point is to store it as a single scaled number, which is also how floating point stores the mantissa. You could split it into two numbers, but you could also split a mantissa into multiple numbers if you really wanted; it doesn't really change the math.
>(in part because it's impossible to have a pair of fixed point numbers of the same type, but so different that n+1==n).
I'd disagree there. You can have fixed point denominated in 100s as easily as you can have it denominated in .01s.
>Some operations lose a constant amount of precision, others lose more precision the more different the numbers are. None of them act differently on the integral and fractional parts of the number.
Precision is only 'lost' compared to other floating point operations. A long double won't ever have less precision than a 32.32 fixed point number, unless you're using numbers that wouldn't fit in 32.32 in the first place.
In no way do you have "a single bit" of useful precision. Anything that can corrupt the upper bits of a floating point number can corrupt the upper bits of a fixed point number.
The only problem with that is that there's no support in most programming languages for either, and no hardware support for decimal floating point on common hardware.
With floating point you also have endless possibilities for how to define them. Next to the standard IEEE formats you have, for instance, the 80-bit ones you had to use on Intel CPUs. With fixed-point arithmetic you don't really have standards, but there are common formats like Q16.16.
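A minimal Q16.16 sketch in Python (scale factor 2**16; hypothetical helper names):

```python
Q = 16  # 16 fractional bits, scale factor 2**16

def to_q(x: float) -> int:
    return round(x * (1 << Q))

def q_mul(a: int, b: int) -> int:
    # Multiplying two Q16.16 values doubles the fraction bits; shift back.
    return (a * b) >> Q

def from_q(a: int) -> float:
    return a / (1 << Q)

# 1.5 * 2.25 == 3.375, exact because all values fit in 16 fraction bits:
assert from_q(q_mul(to_q(1.5), to_q(2.25))) == 3.375
```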
... in 1991 ...
It is a real shame nobody ever seems to read it.
A better term might be fidelity.