
Bisecting Floating Point Numbers - jwmerrill
http://squishythinking.com/julia/2014/02/22/bisecting-floats/
======
adrianN
I like numerical methods like this because they sit neatly in the space
between high level algorithmic ideas and low level implementation details.

The library CGAL has many clever algorithms with floating point numbers. Its
strength lies in the mixture of numerical approximation and exact computation
it does to solve problems.
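The general pattern CGAL uses (a fast floating-point filter, falling back to exact arithmetic only when the filtered result is inconclusive) can be sketched in a few lines of Python. The error bound below is purely illustrative, not CGAL's carefully derived constant:

```python
from fractions import Fraction

def orientation(ax, ay, bx, by, cx, cy):
    """Sign of the 2x2 determinant for points a, b, c: +1 = left turn,
    -1 = right turn, 0 = collinear. Filtered float path, exact fallback."""
    # Fast path: evaluate the determinant in floating point.
    det = (bx - ax) * (cy - ay) - (by - ay) * (cx - ax)
    # Crude static error bound (illustrative only).
    eps = 1e-12 * max(abs(bx - ax), abs(cy - ay),
                      abs(by - ay), abs(cx - ax), 1.0) ** 2
    if abs(det) > eps:
        return (det > 0) - (det < 0)
    # Slow path: redo the computation in exact rational arithmetic.
    ax, ay, bx, by, cx, cy = map(Fraction, (ax, ay, bx, by, cx, cy))
    det = (bx - ax) * (cy - ay) - (by - ay) * (cx - ax)
    return (det > 0) - (det < 0)
```

Most calls never leave the fast path; only near-degenerate inputs pay for exactness.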

~~~
jacobolus
Any specific algorithms in CGAL you found particularly clever?

------
lindstorm
Great post!

It may interest you that William Kahan has written quite extensively on the
pitfalls of iterative algorithms on floating point numbers; his discussion of
Muller's pathological recurrence on pp. 14+ of this paper seems particularly
relevant.

[http://www.cs.berkeley.edu/~wkahan/Mindless.pdf](http://www.cs.berkeley.edu/~wkahan/Mindless.pdf)

------
sparky_z
This doesn't work if the output of the function you're checking is a noisy
black box. For example, if fn() creates a finite element model, analyses it
with an external software package, and post-processes the results to get an
output value, you're gonna need a pretty high tolerance for your stopping
condition.
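One way to make bisection tolerate that noise is to average several evaluations at each midpoint and stop on the bracket width rather than the function value. A rough Python sketch, with made-up names and tolerances (it assumes the underlying noiseless function is increasing through its root):

```python
import random

def noisy_bisect(f, lo, hi, tol=1e-3, samples=5):
    """Bisect a noisy sign-changing function, averaging repeated
    evaluations to estimate the sign at each midpoint."""
    while hi - lo > tol:
        mid = lo + (hi - lo) / 2
        avg = sum(f(mid) for _ in range(samples)) / samples
        if avg > 0:
            hi = mid
        else:
            lo = mid
    return lo, hi

# Root at 0.5, with uniform noise of amplitude 0.01 on each evaluation:
lo, hi = noisy_bisect(lambda x: x - 0.5 + random.uniform(-0.01, 0.01), 0.0, 1.0)
```

The final bracket can only be wrong by roughly the noise amplitude, since any midpoint farther from the root than that gets classified deterministically.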

~~~
jwmerrill
If your function isn't a mathematical function (i.e. its output is not
completely determined by its arguments), then it's a little confusing to
figure out what isolating a root would mean. But even in that case,
bisect_root terminates, because it _always_ halves the interval at each step,
even if your input procedure is returning nonsense. Witness

        julia> bisect_root((x) -> rand() - 1/2, 1.0, 2.0)
        (1.1980511149266218,1.198051114926622)
    
        julia> bisect_root((x) -> rand() - 1/2, 1.0, 2.0)
        (1.595600613503659,1.5956006135036591)

~~~
sparky_z
"...it's a little confusing to figure out what isolating a root would mean. "

Well, to take an example from some of my past work as a structural engineer,
say you wanted to find out how much wind load a structure could sustain until
it buckled under its own weight. So you create a finite element model of the
structure that applies the wind load and then ramps up the gravity until the
structure fails. The input is the magnitude of the wind pressure. The output
is the total gravity at failure (in units of g). Subtract 1 from the output,
and you have a function where the root is the level of wind load you're
looking for.

Your point about it always terminating when you use the bisection method is
well taken, however. I was using a hybrid that included some Newton's method
iterations to reduce the total number of steps, since each "function
evaluation" took a lot of computer time.
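A safeguarded hybrid along those lines keeps a sign-change bracket, tries a Newton step, and falls back to a bisection step whenever Newton would leave the bracket. A hedged Python sketch of the idea (not the actual engineering code, and it assumes f(lo) and f(hi) have opposite signs):

```python
def safeguarded_newton(f, df, lo, hi, tol=1e-12, max_iter=100):
    """Newton's method safeguarded by bisection: the bracket [lo, hi]
    always contains a sign change, so the iteration cannot run away."""
    flo = f(lo)
    x = lo + (hi - lo) / 2
    for _ in range(max_iter):
        fx = f(x)
        # Shrink the bracket using the sign of f(x).
        if (fx > 0) == (flo > 0):
            lo, flo = x, fx
        else:
            hi = x
        if hi - lo < tol:
            break
        d = df(x)
        step = x - fx / d if d != 0 else None
        # Accept the Newton step only if it stays inside the bracket.
        x = step if step is not None and lo < step < hi else lo + (hi - lo) / 2
    return x
```

When each "function evaluation" is a full model run, the quadratic convergence of the accepted Newton steps is what saves wall-clock time; the bracket is just insurance.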

------
lindstorm
Good! Instead of (x1 + x2)/2, it is better to compute x1 + (x2 - x1)/2, which
avoids overflow when x1 and x2 have the same sign.

------
castratikron
Floats are evil.

Here's some more fun you can have:

    #include <stdio.h>
    
    int main() {
        /* 0x7FFFFFFF (2^31 - 1) is not representable as a float; it rounds up to 2^31 */
        float a = 0x7FFFFFFF;
        /* near 2^31 floats are spaced 128 apart: a - 64 rounds back to a, a - 65 doesn't */
        printf("%f %f %f\n", a, a - 64, a - 65);
        return 0;
    }

~~~
ska
Floats may be evil, but if so they are often a necessary evil.

~~~
chowells
I dispute "often". I honestly can't think of cases other than certain
scientific calculations where they are more appropriate than fixed-point
numbers.

The fact that scientists had so much input on the early design of hardware and
programming languages means that floats are often a performance hack that
exchanges semantics for speed. But that's hardly necessary in most
applications.
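The semantic difference shows up in even the smallest examples: decimal fractions are not exact in binary floating point, while fixed-point arithmetic on the same quantities (here, plain integer cents) is:

```python
# Classic decimal round-off in binary floating point:
subtotal = 0.1 + 0.2
print(subtotal == 0.3)   # False: neither 0.1 nor 0.2 is exact in binary

# The same sum in fixed point (integer cents) is exact:
cents = 10 + 20
print(cents == 30)       # True
```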

And if hardware and language support was better for fixed point numbers,
floats would be seen as the bizarre special case that's only appropriate in
rare cases.

~~~
zurn
How big an improvement do you feel better hardware fixed-point support could
yield, and what form might the improvements take? Imagine, for example, that
Intel decided to add an instruction set extension in their next CPU.

~~~
chowells
Modern CPUs are already so complex and so fast that I can't imagine the
potential speedup would be significant as compared to modern optimized fixed
point libraries.

Language support is a much bigger issue. Programmers are fundamentally lazy,
and will always do the thing that takes the least work (for them, right now).
So for fixed point numbers to really be used, that means that they need to be
fully first class in the language. Literals need to exist, and with syntax no
more complex than it is for other numerical literals. And of course, all
operators must use the same syntax for both.

This is hard to retrofit into a language. Java and C can't retrofit either in.
C++, Python, and Ruby can do operators, but not literals. You have to get into
pretty uncommon languages before support for additional numerical types
becomes good. Haskell and Scala, for instance, can meet both of the
requirements.
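For instance, the operator half of this is easy in Python; the `Fixed` class below is a toy Q16.16 sketch (a made-up illustration, not a real library), and the missing half is exactly the point above: there is no literal syntax for it.

```python
class Fixed:
    """Toy Q16.16 fixed-point number: the value is raw * 2**-16."""
    SHIFT = 16

    def __init__(self, value=0.0, raw=None):
        self.raw = raw if raw is not None else int(value * (1 << self.SHIFT))

    def __add__(self, other):
        return Fixed(raw=self.raw + other.raw)

    def __mul__(self, other):
        # The product of two Q16.16 numbers is Q32.32; shift back down.
        return Fixed(raw=(self.raw * other.raw) >> self.SHIFT)

    def __float__(self):
        return self.raw / (1 << self.SHIFT)

# Operators read naturally, but every value must go through a constructor:
a, b = Fixed(1.5), Fixed(2.25)
print(float(a + b), float(a * b))   # 3.75 3.375
```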

Given that, I'd say any kind of widespread use of fixed point numbers in the
cases where they're more appropriate than floats is at least 30 years away.
Maybe 50.

~~~
jacobolus
You should give some more concrete examples of domains where you think fixed
point is more appropriate than floating point, but floats are currently used.

As it is, your comment can’t really be discussed or disputed, because it’s
just an abstract complaint with no substantive evidence/analysis presented.

Here are some areas where I think floats are more appropriate than fixed
point, given the availability of CPUs and GPUs with native floating point
support: computer game physics, any kind of 3d rendering, 2d vector graphics
including e.g. map rendering, high dynamic range image/video editing, most
types of statistical analysis, ...

I’m having trouble thinking of any applications where I think fixed point
numbers seem more appropriate than floats or integers, except code running on
embedded devices without native floating point. Audio maybe? Image/video
decoding for immediate playback?

~~~
zurn
Floating point is actually a significant source of bugs in 3D graphics and
games. And a complex thing you have to constantly keep in mind, taking
cognitive resources from other more useful stuff. And a thing you have to code
around, increasing the codebase complexity, making changes harder and taking
time from programming/debugging/testing useful code.

CPUs and GPUs do have support for fixed point (aka integer) data formats, it's
been a DX/OpenGL requirement for a while. Though I don't know what API issues
you run into if you try to avoid FP altogether.

~~~
shele
Floating point arithmetic is a significant source of bugs. And if you
transition to fixed point arithmetic, you will still have a significant source
of same-same-but-different bugs (maybe unless you really only need to add
things and never scale or multiply them).
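The classic example of such a bug is forgetting to rescale after a multiply. A toy Q16.16 sketch in Python (illustrative names, raw integers standing in for fixed-point values):

```python
SHIFT = 16                                # toy Q16.16 fixed point

def to_fix(x):
    return int(x * (1 << SHIFT))

def to_float(r):
    return r / (1 << SHIFT)

a, b = to_fix(1.5), to_fix(2.0)

wrong = a * b                             # forgot to rescale: off by 2**16
right = (a * b) >> SHIFT                  # shift the Q32.32 product back down

print(to_float(wrong), to_float(right))   # 196608.0 3.0
```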

~~~
zurn
What kind of bugs do you have in mind for fixed point?

While there no doubt would be some, I have a feeling they are a significantly
lesser drag (in frequency and debugging complexity) than floating point bugs.
Assuming the language supports fixed point.

Of course you can imagine error-prone fixed point too. Writing shaders with
plain integers while trying to manually keep track of heterogeneous fixed-point
scales would be a nightmare, for example...

