
There are much worse things than NaNs.

They are called denormals (subnormals, in IEEE 754 terms). These appear when you deal at the same time with lots of big numbers (very far away from 0) in operations with lots of small numbers (close to 0).

In such cases the FPU (or whatever handles floating-point numbers) switches to a representation that can be very inefficient, producing operations an order of magnitude slower.

For example, when dealing with IIR filters in audio, your audio buffer might contain them. One solution is to keep a white-noise buffer somewhere (or just a couple of numbers) that is not denormalized and mix it in - that magically normalizes things again.

I'm not someone who deals with "numerical stability" (usually that's physics, audio, or simulation-engine programmers), but I know this from simple experience.

Denormals are part of IEEE floating point. If your implementation is too slow, you can often trade correctness for speed by turning them off (flush-to-zero / denormals-are-zero) in the C/C++ runtime.

They're also a sign you're skirting the limits of FP precision (or worse), so a bit of numerical analysis might still be a good idea...

You cannot simply turn them off everywhere. On certain platforms they are always produced, and nothing can be done but to deal with them openly (by expecting them to happen).
