
Float Toy - lelf
http://evanw.github.io/float-toy/
======
pcwalton
The thing that made floating point easy to understand for me was the concept
of a dyadic rational [1]. Those are just the rational numbers whose
denominator is a power of two (e.g. 1/2, 3/16, 61/32). Fundamentally, binary
floating point represents dyadic rationals (not counting special stuff like
NaN, infinity, etc.) To represent such a number in floating point form, just
write it as a/2^b. Then the mantissa is a, and the exponent is -b.

This intuitively makes it obvious why, for example, 1/3 can't be represented.
I find the rational form like a/b easier to understand than the "scientific
notation" form like a×2^b.

[1]:
[https://en.wikipedia.org/wiki/Dyadic_rational](https://en.wikipedia.org/wiki/Dyadic_rational)

~~~
twic
I think of a float as an integer with a shift. Take the mantissa, stick on the
implicit leading 1, put it in the middle of an infinitely big string of zeroes
with a decimal (well, binary) point at the right-hand end, then shift it by
the exponent (which can be positive or negative).

If you're comfortable thinking about shifting integers, then perhaps this is a
simple way to think about floats. For me, it's easier than thinking about them
as fractions.
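
A quick C sketch of that view (my own illustration, not from the toy):
frexp splits a double into mantissa and exponent, and scaling the mantissa
up to a whole number gives the "integer with a shift" form:

    #include <math.h>
    #include <stdio.h>

    int main(void) {
        double f = 6.25;
        int e;
        double m = frexp(f, &e);   /* f = m * 2^e, with 0.5 <= |m| < 1 */
        printf("%g = %g * 2^%d\n", f, m, e);
        /* scale the 53-bit mantissa up to an integer, adjust the shift */
        printf("%g = %.0f * 2^%d\n", f, ldexp(m, 53), e - 53);
        return 0;
    }

which prints 6.25 = 0.78125 * 2^3 and 6.25 = 7036874417766400 * 2^-50.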

~~~
emmelaich
Yep, this is how the Unix `bc` tool works: all numbers are internally plain
decimal, but each has a `scale` and a `length`.

Scale is the number of decimal digits after the decimal point, and length
is the number of significant figures.

See `man bc`

------
userbinator
I wrote something similar ~25 years ago, when Win32 first appeared, and still
use it today on the (relatively rare) occasion that I need to debug something
that uses floating-point.

One interesting thing you'll notice about IEEE 754 is that the values sort
naturally if you interpret them as sign-magnitude integers:

        00000000  0
        00000001  1.4e-45 (smallest representable positive value)
        00000002  2.8e-45 (next-smallest representable positive value)
        007fffff  1.1754942e-38
        00800000  1.17549435e-38
        3f000000  0.5
        3f800000  1.
        3fffffff  1.9999999
        40000000  2.
        40400000  3.
        60000000  3.6893488e19
        7f000000  1.7014118e38
        7f7fffff  3.4028235e38 (largest finite value)
        7f800000  Infinity
        7f800001  first NaN
        7fffffff  last NaN

In other words, if you count up in floats, it starts at 0 and goes up by
increasingly larger amounts, then the number right "after" the "largest" is
Infinity, followed by all the NaNs to the end of the range. The negative side
is a "mirror image", the only difference being the highest (sign) bit is set.

~~~
filmor
Just a little nit-pick: The NaNs always compare false, so this property only
holds for values up to +inf (0x7f800000).

------
constexpr
Wow, I didn't expect a four year old project of mine to hit HN front page! If
you're interested in this, you might enjoy some of the stuff we're working on
at Figma. Here's a link to learn more:
[https://www.figma.com/careers/](https://www.figma.com/careers/)

~~~
bostonvaulter2
I use figma at work and it's really neat. Keep up the good work! Now if you
could only convince the designer to give me write access ;)

------
seanalltogether
I feel like I grokked floating point more easily when looking at 16-bit
floating point numbers. The values are much smaller and easier to juggle in
your head. The fact that the fraction part literally translates to x/1024
suddenly became clear to me once I was staring at just 10 bits.

[https://en.wikipedia.org/wiki/Half-precision_floating-point_...](https://en.wikipedia.org/wiki/Half-precision_floating-point_format#Half_precision_examples)
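
To make the x/1024 reading concrete, here's a hand-decoder for a normal
half-precision value (my own sketch; subnormals, infinities, and NaNs are
left out):

    #include <math.h>
    #include <stdio.h>

    int main(void) {
        unsigned short h = 0x3E00;        /* a normal half-precision pattern */
        unsigned sign = h >> 15;
        unsigned exp  = (h >> 10) & 0x1F; /* 5 exponent bits, bias 15 */
        unsigned frac = h & 0x3FF;        /* 10 fraction bits */
        /* value = (-1)^sign * (1 + frac/1024) * 2^(exp - 15) */
        double v = (sign ? -1.0 : 1.0) * (1.0 + frac / 1024.0) * exp2((int)exp - 15);
        printf("0x%04x = %g\n", h, v);    /* 0x3e00 = 1.5 */
        return 0;
    }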

~~~
jacobolus
It’s even easier if you use 8-bit floats. Then there are only 256 of them and
you can plot them all.

------
evanjonr
A similar, arguably better-presented site:
[https://float.exposed/0x40490fdb](https://float.exposed/0x40490fdb)

~~~
zamadatix
It's certainly got more information, but that, combined with the "exact"
values, makes it a bit overwhelming and hard to understand, especially if
you're trying to get a feel for floats the first time and the half and the
double show the same number of digits on screen. It's almost made more for
investigating something you ran across or need to hardcode than for gaining
intuition.

But most importantly, I think the best feature of the one in the submission
is something it doesn't even mention in the description: it has a "click
and drag to write bits" feature that I find much easier to use and follow
than the two dragging sections on that one.

~~~
TazeTSchnitzel
You can click to flip individual bits on float.exposed too, and they don't
have “the same number of digits”…?

~~~
mkl
They do have the same number of decimal digits displayed. You can see it by
clicking between "float" and "double": the decimal representation stays the
same.

~~~
TazeTSchnitzel
What is wrong with that?

~~~
mkl
It makes it look like they have the same precision (well also, going from 64
to 32 bits zeros out a whole lot, which contributes to that impression). In
fact, both decimal representations are usually truncated, and don't accurately
represent the binary floating point number. It's a neat and useful tool
regardless, and I'm being a bit pedantic.

------
mrspeaker
I was playing with this yesterday for about an hour... getting so
frustrated trying to figure out how to count from 0 to 20. But when I got
there, I suddenly had an "omg, I know floating point!" moment.

Then this morning on the train I started wondering: "wait, but how does
floating point addition even make sense?!"... I'm still stumped on that one -
I need "Float Addition Toy" ;)

(Pro-tip: it took me way too long to figure out that you can click and drag to
set/clear the bits quickly)

~~~
Analemma_
[https://ciechanow.ski/exposing-floating-point/](https://ciechanow.ski/exposing-floating-point/)
is the clearest and most coherent explanation of floating-point numbers
I've found; it was really what gave me the lightbulb moment I had been
missing. This site is a great companion to it though.

~~~
animal531
Thanks, that's a great write-up.

------
quelsolaar
I think I found a bug: if you set all bits to 0 except the sign bit, you
get 0 when you should get -0. In IEEE floating point representation, 0 and
-0 are not the same. This is why

    f *= 0;

is not the same operation as

    f = 0;

since the former will retain the sign bit.
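
A small C demonstration of the difference (my own sketch):

    #include <math.h>
    #include <stdio.h>

    int main(void) {
        float f = -5.0f;
        f *= 0;  /* multiplication keeps the sign of the result: -0 */
        printf("%g (signbit=%d)\n", f, signbit(f) != 0);  /* -0 (signbit=1) */
        f = 0;   /* plain assignment stores +0 */
        printf("%g (signbit=%d)\n", f, signbit(f) != 0);  /* 0 (signbit=0) */
        return 0;
    }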

~~~
tboerstad
Thanks for that bit of information, pun intended. I hope I'll never have to
rely on it in production.

------
AceJohnny2
With the 32-bit one, if I set the highest bit of the exponent from 1 to 0, the
exponent goes to -126 (ok), but if I then toggle the lowest bit of the
exponent (rightmost green bit), it modifies the mantissa, not the exponent.

Is that a bug or do I not understand how IEEE754 works (very likely :p)?

~~~
asdvjonbn
That's called a 'subnormal number'.

[https://stackoverflow.com/questions/8341395/what-is-a-subnor...](https://stackoverflow.com/questions/8341395/what-is-a-subnormal-floating-point-number)

        If the exponent is 0, then:

            the leading bit becomes 0
            the exponent is fixed to -126 (not -127 as if we didn't have this exception)

        Such numbers are called subnormal numbers (or denormal numbers, which is a synonym).

All normal floating-point numbers have an implicit leading significant bit of
1. If the exponent field goes from 1 to 0, then this corresponds to changing
the implicit leading bit to a 0, not to decreasing the exponent.

~~~
sillysaurusx
Hmm. But if the exponent’s significant bit goes from 1 to 0, why would that
affect the mantissa at all? Also, if the significant bit is always 1, how are
negative exponents represented?

I understand there’s an implicit “hidden” bit, but what conditions cause it to
become 0 instead of 1? And does that bit represent the sign of the exponent,
or something else?

After reading [https://stackoverflow.com/questions/8341395/what-is-a-subnor...](https://stackoverflow.com/questions/8341395/what-is-a-subnormal-floating-point-number)
more carefully, I am more confused than ever. Both visualization sites seem
to be leaving out key details. And subnormals only seem to apply when
exponent = 0; the visualization problem that OP mentioned happens whenever
the leading exponent bit is set to 0, but the rest of the bits can be
anything.

~~~
microtherion
> But if the exponent’s significant bit goes from 1 to 0, why would that
> affect the mantissa at all?

It's not a matter of the "significant bit", but whether the WHOLE exponent is
0 or not. An exponent of 0 is a special case; as soon as you set ANY bit in
that exponent, the mantissa goes back to the "implicit 1" prepended bit.

> Also, if the significant bit is always 1, how are negative exponents
> represented?

Exponents are represented in a "biased representation", counting up from
2^-127.

> I understand there’s an implicit “hidden” bit, but what conditions cause it
> to become 0 instead of 1?

Exponent bits being all 0.

> And does that bit represent the sign of the exponent, or something else?

It represents the leading bit of the mantissa.
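
Putting the whole rule in one place, here's a small hand-decoder (my own
sketch, ignoring infinities and NaNs) that shows how an all-zero exponent
field flips the implicit bit to 0 and pins the exponent at -126:

    #include <stdio.h>
    #include <string.h>

    static void decode(float f) {
        unsigned u; memcpy(&u, &f, sizeof u);
        unsigned exp  = (u >> 23) & 0xFF;  /* 8 exponent bits, bias 127 */
        unsigned frac = u & 0x7FFFFF;      /* 23 fraction bits */
        if (exp == 0)  /* subnormal: implicit leading 0, exponent fixed */
            printf("%08x = (0 + %u/2^23) * 2^-126\n", u, frac);
        else           /* normal: implicit leading 1, biased exponent */
            printf("%08x = (1 + %u/2^23) * 2^%d\n", u, frac, (int)exp - 127);
    }

    int main(void) {
        decode(1.5f);    /* 3fc00000 = (1 + 4194304/2^23) * 2^0 */
        decode(1e-45f);  /* 00000001 = (0 + 1/2^23) * 2^-126 */
        return 0;
    }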

~~~
sillysaurusx
Ah! I get it now. I was confused; float toy is correct here.

Thanks for the explanations.

------
gautamcgoel
This is awesome. I honestly learned more about floating point in five
minutes of playing with this tool than I have in all my years as a coder.
One of the main takeaways should be that 64 bits is usually way more
precision than we need; 32 is usually more than enough. It would be cool if
the tool also showed the new Bfloat16 type, so we could get some intuition
for that as well. IIRC, Bfloat16 came out of Google Brain to better suit
the needs of deep learning practitioners, who don't need much precision
(especially for large values).
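
For intuition: bfloat16 is simply the top 16 bits of a float32 (same 8-bit
exponent, but only 7 fraction bits). A rough C sketch of my own, using
truncation rather than proper rounding:

    #include <stdio.h>
    #include <string.h>

    static unsigned short to_bf16(float f) {
        unsigned u; memcpy(&u, &f, sizeof u);
        return (unsigned short)(u >> 16);  /* keep sign, exponent, 7 fraction bits */
    }

    static float from_bf16(unsigned short b) {
        unsigned u = (unsigned)b << 16;    /* lost fraction bits become zeros */
        float f; memcpy(&f, &u, sizeof f);
        return f;
    }

    int main(void) {
        float pi = 3.14159265f;
        printf("%.7f -> %.7f\n", pi, from_bf16(to_bf16(pi)));  /* 3.1415927 -> 3.1406250 */
        return 0;
    }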

~~~
moultano
32 is plenty to represent the numbers we care about, but you'll probably
find you want 64 to do math with those numbers. 64 has always been enough
for me to never have to care about the precision, but whenever I've used 32
to save some RAM, I've ended up having to think carefully about how some
operation would reduce the precision in a material way.

~~~
platz
At 32 bits you already have a gap of 16 between adjacent representable
numbers in the 200 millions:

210,828,704

210,828,720
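
You can check that gap directly; in this C sketch (mine), anything less
than half the spacing simply disappears:

    #include <stdio.h>

    int main(void) {
        float f = 210828704.0f;  /* representable: a multiple of 16 in [2^27, 2^28) */
        printf("%.0f\n", f + 7); /* 210828704: the +7 rounds away */
        printf("%.0f\n", f + 9); /* 210828720: rounds up to the next float */
        return 0;
    }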

------
divbzero
Does anyone have a similar “toy” for Unicode? (UTF-8 _vs._ UTF-16 _vs._
UTF-32)

------
amelius
Suggestions for improvement:

Add a (+) and (-) button to move up or down to the nearest floating point
number.

Show the local precision (not sure if this is the correct term), i.e. the
maximum difference between the current float and the two nearest ones.
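
The second suggestion is already easy to compute with the standard library;
a quick C sketch (my own) using nextafterf:

    #include <math.h>
    #include <stdio.h>

    int main(void) {
        float f = 0.1f;
        float up   = nextafterf(f,  INFINITY);  /* nearest float above */
        float down = nextafterf(f, -INFINITY);  /* nearest float below */
        printf("%.12g < %.12g < %.12g\n", down, f, up);
        printf("local spacing: %g\n", up - f);  /* 7.45058e-09 here */
        return 0;
    }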

~~~
mceachen
Nice. Also, if you make the CSS transition for the bits that flip on ++ and
-- slowish, you can see what's going on.

------
snek
I really like this one:
[https://bartaz.github.io/ieee754-visualization/](https://bartaz.github.io/ieee754-visualization/)

------
FriedPickles
This rocks! I bet I grokked it way faster with this toy than I would have
with a written explanation. An ounce of demonstration is worth a pound of
documentation.

~~~
FabHK
Did you grok infs and NaNs (the two different kinds, and their payload) and
zero (both zeros?) and subnormal numbers?

Otherwise a pound of written explanation might still make sense.

------
TheMagicHorsey
I just realized I did not actually understand how floats worked. I just kind
of assumed I knew ... and I did not actually know the details correctly.

Thanks for this!

~~~
baq
well, the details are an IEEE standard, after all. most people should be fine
knowing why 9.999999999 is equal to 10 without having to go to infinity.

------
riceslush
I built something similar: [https://www.isfloat.com/](https://www.isfloat.com/).
It also tells you whether the decimal value given can be exactly
represented as a float. For example:
[https://www.isfloat.com/?v=0.3](https://www.isfloat.com/?v=0.3)

------
Aardwolf
So great! I believe it could also show the difference between quiet and
signaling NaN
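
For reference, the usual encoding (recommended by IEEE 754-2008, though
platforms vary) uses the top fraction bit as the quiet flag; this C sketch
of mine builds one of each:

    #include <math.h>
    #include <stdio.h>
    #include <string.h>

    int main(void) {
        /* exponent all 1s + nonzero fraction = NaN; the top fraction bit
           distinguishes quiet (1) from signaling (0) */
        unsigned qbits = 0x7FC00000, sbits = 0x7F800001;
        float q, s;
        memcpy(&q, &qbits, sizeof q);
        memcpy(&s, &sbits, sizeof s);
        printf("quiet: %d  signaling: %d\n", isnan(q) != 0, isnan(s) != 0);  /* 1  1 */
        return 0;
    }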

------
fctorial
So it's scientific notation in base 2

~~~
netsharc
He could've explained the red bits better: they're 2^-1, 2^-2, 2^-3, etc.

So if the third red bit is 1, it's 1 x 2^-3, i.e. 1 x 1/(2^3), i.e. 1 x
1/8, i.e. 0.125.

~~~
fctorial
And +/-0 are special cased.

------
kecskesadam
[https://www.h-schmidt.net/FloatConverter/IEEE754.html](https://www.h-schmidt.net/FloatConverter/IEEE754.html)

------
sj4nz
Definitely looking forward to the Unum Type III alternative to floats. This
is a nice visualization, and it gives you an idea of how many bit patterns
are wasted on NaNs.

~~~
pmarreck
Yeah, I was going to ask: what's the deal with all those wasted bits?

I'm not a floating-point fan (I do all computations in integer wherever
possible/feasible), but I've been following the universal number story
since it was announced, because it seems less painful/gross than IEEE
floating-point on many fronts. Here's a GitHub repo with a good about page:
[https://github.com/stillwater-sc/universal](https://github.com/stillwater-sc/universal)

------
vortico
Nice work on the "mouse down to toggle" and "drag to copy value"
interactions, a much underused but useful way to set a collection of
boolean states.

------
TYPE_FASTER
Awesome. It would be cool to incorporate it into the Wikipedia article (or at
least, link to it).

------
earthboundkid
Neat, but it should special case -0.

------
vardump
Nice!

Would be nice to get various 16-bit float formats as well. They're getting
more and more common in graphics and NN domains.

Or perhaps generic n-width exponent and m-width mantissa.

~~~
thomasballinger
rreusser made a more generic version inspired by Float Toy:
[https://observablehq.com/@rreusser/binary-input](https://observablehq.com/@rreusser/binary-input)

------
kazinator
Click, hold and swipe to quickly replicate a bit value across a range of
places.

------
qwerty456127
Very valuable. Thanks!

------
kstenerud
I took a deep dive into IEEE 754 binary and decimal floats (DPD and BID)
while researching how to compress floating point values [1]. For 25 years
they had been a magic black box that I couldn't trust for comparisons or
printing, but now I get it. I get why it happened this way, what the
benefits were, how to mitigate the weirdness, and why decimal float
ultimately needs to replace binary float in the future. It's definitely
worth learning.

[1]:
[https://github.com/kstenerud/compact-float/blob/master/compa...](https://github.com/kstenerud/compact-float/blob/master/compact-float-specification.md)

