I actually ran into a bug recently while implementing my first raytracer, where the point calculated from the sphere-intersect test would occasionally end up inside the sphere due to floating-point imprecision, so the diffuse sample rays would have their origins completely in the dark, leading to randomly black pixels. Solved it by bumping every intersection point out by 0.01 along its normal.
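For concreteness, the fix is roughly this (just a sketch with made-up names, plain [f64; 3] vectors, and it assumes the normal is unit length):

    type V3 = [f64; 3];

    const BUMP: f64 = 0.01;

    // Offset a hit point along its (unit-length) surface normal so that
    // floating-point error can't leave the next ray's origin inside the sphere.
    fn bumped_origin(hit: V3, normal: V3) -> V3 {
        [
            hit[0] + normal[0] * BUMP,
            hit[1] + normal[1] * BUMP,
            hit[2] + normal[2] * BUMP,
        ]
    }

Every secondary ray (shadow, diffuse bounce, etc.) then starts from bumped_origin(hit, normal) instead of the raw hit point.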
And then of course there have been several other "x.abs() < 0.01" cases for various purposes. So I could definitely see that being an interesting experiment.
That's really interesting - hadn't thought of that before. To fix that, could you compare the squared distance from the sphere's center against the squared radius and only bump the borderline cases, or is it more efficient without the extra branching?
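Something like this, I mean (again just a sketch with made-up names, assuming you still have the sphere's center and radius handy at that point):

    type V3 = [f64; 3];

    // Compare squared distance from the sphere center against the squared
    // radius, and only push out the hit points that rounded to inside.
    fn fixup_hit(hit: V3, center: V3, normal: V3, radius: f64, eps: f64) -> V3 {
        let d = [hit[0] - center[0], hit[1] - center[1], hit[2] - center[2]];
        let dist_sq = d[0] * d[0] + d[1] * d[1] + d[2] * d[2];
        if dist_sq < radius * radius {
            // Borderline case: bump it out along the surface normal.
            [hit[0] + normal[0] * eps, hit[1] + normal[1] * eps, hit[2] + normal[2] * eps]
        } else {
            hit
        }
    }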
I just did it across the board; since the error is in the floating-point noise, I don't know if I'd even trust a comparison on that. Plus, the discrepancy between "bumped" and "unbumped" samples might cause some visible artifacts.