
Collision Detection, Part 1: Overview (2015) - atomlib
https://0fps.net/2015/01/07/collision-detection-part-1/
======
Animats
I used to work on collision detection in the 1990s. You can do convex
collision detection in O(sqrt(m)+sqrt(n)) the first time, and close to O(1) if
the objects just moved a little and you have the result from the previous
time. See the GJK algorithm. It's basically hill-climbing to find the closest
points.
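
The hill-climbing core of a support query can be sketched over a convex polytope's vertex adjacency graph (an illustrative sketch, not from any particular GJK implementation; the vertex/neighbor layout is invented):

```javascript
// Each vertex: { p: [x, y, z], neighbors: [vertex indices sharing an edge] }.
// Walk uphill in direction `dir` until no neighbor improves the dot product.
// On a strictly convex polytope a local maximum is the global maximum.
function supportVertex(verts, dir, start = 0) {
  const dot = (a, b) => a[0]*b[0] + a[1]*b[1] + a[2]*b[2];
  let cur = start;
  let best = dot(verts[cur].p, dir);
  for (;;) {
    let improved = false;
    for (const n of verts[cur].neighbors) {
      const d = dot(verts[n].p, dir);
      if (d > best) { best = d; cur = n; improved = true; break; }
    }
    if (!improved) return cur; // no neighbor is more extreme: done
  }
}
```

Passing the previous frame's result as `start` is what gives the near-O(1) warm-start behavior described above: if the objects barely moved, the walk terminates after a step or two.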

The author glosses over many real-world problems. Some algorithms work badly
in 32-bit floating point because they depend on small differences between
large numbers. (I ran into that in the days before everybody had good 64-bit
FPUs.) Convex meshes should not be triangulated, because that results in
coplanar faces, and hill-climbing can get stuck on the wrong face. It's
necessary to clean up convex objects a bit - require a minimum break angle of
one or two degrees at each edge to make sure that they're strictly convex, not
just non-concave. The author writes "One important class of objects are convex
polytopes, which have the property that between any pair of points in the
shape the straight line segment connecting them is also contained in the
shape." That's a necessary property, but not a sufficient one; it allows
coplanar faces.

Broad phase collision detection usually involves three lists of the bounding
boxes of the objects, ordered by X, Y, and Z. Only if there's overlap in all
three axes do you need to do the narrow-phase test. The author writes "Using
output sensitive analysis, there is also a lower bound of O(n log(n) + k) (for
comparison based algorithms) by reduction to the element uniqueness problem."
That's also the lower bound for sorting by comparison, of course. But that's a
price you only pay at cold start, when you initially sort all your objects.
Thereafter, you update the ordered lists as the objects move, which, if most
things are not moving at high speed, is cheap.
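
One axis of that scheme can be illustrated with a simple sort-and-sweep (a sketch only; a real broad phase keeps the lists incrementally sorted between frames rather than re-sorting):

```javascript
// Report all pairs of boxes whose x-intervals overlap. Sorting by minX
// lets the inner scan stop early: once a box starts past a's maxX, no
// later box can overlap a on this axis.
function overlappingPairsX(boxes) {
  const order = boxes.map((_, i) => i)
                     .sort((i, j) => boxes[i].minX - boxes[j].minX);
  const pairs = [];
  for (let i = 0; i < order.length; i++) {
    const a = order[i];
    for (let j = i + 1; j < order.length; j++) {
      const b = order[j];
      if (boxes[b].minX > boxes[a].maxX) break; // sorted: no more overlaps with a
      pairs.push([a, b]); // candidate pair; still needs the y and z checks
    }
  }
  return pairs;
}
```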

Large-area collision detection, for big MMOs, has an even coarser phase of
binning by large areas. Just divide the world into squares and only look at
your square and the neighboring squares. This parallelizes nicely.
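
The binning idea can be sketched like this (names and data layout are made up for illustration):

```javascript
// Hash each object into a square cell keyed by integer cell coordinates.
function buildGrid(objects, cellSize) {
  const grid = new Map();
  for (const obj of objects) {
    const key = Math.floor(obj.x / cellSize) + "," + Math.floor(obj.y / cellSize);
    if (!grid.has(key)) grid.set(key, []);
    grid.get(key).push(obj);
  }
  return grid;
}

// Candidates for collision with obj: its own cell plus the 8 neighbors.
function nearbyObjects(grid, obj, cellSize) {
  const cx = Math.floor(obj.x / cellSize), cy = Math.floor(obj.y / cellSize);
  const out = [];
  for (let dx = -1; dx <= 1; dx++)
    for (let dy = -1; dy <= 1; dy++) {
      const cell = grid.get((cx + dx) + "," + (cy + dy));
      if (cell) for (const o of cell) if (o !== obj) out.push(o);
    }
  return out;
}
```

Each cell (and its neighborhood) can be processed independently, which is where the easy parallelism comes from.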

~~~
djmips
People are still using 32-bit floats in games, and it's not something that has
been solved by good 64-bit FPUs.

------
aliswe
Actually, a collision is an overlap with a single initial event. So collision =
overlap + state. That is, an object collides once initially, but may then
continue to overlap the other object.

In 2D games programming, at least in the old days, detection effectively
consisted of checking whether two objects were NOT overlapping.

Consider this pseudocode formula:

    
    
        function isOverlapping(a, b){
          return a.x < b.x + b.w // a's left is to the left of b's right
            && // And
            a.x + a.w > b.x; // a's right is to the right of b's left
        }
    

... And the same for vertical.

Compare with this :

    
    
        function isOverlapping(a, b){
          if(a.x > b.x + b.w) return false; // was to the right
          if(a.x + a.w < b.x) return false; // was to the left
          if(a.y > b.y + b.h) return false; // was below
          if(a.y + a.h < b.y) return false;  // was above
        
          return true; // not not overlapping, so must be overlapping
        }
    

Additionally, these constant additions were unnecessary for each permutation
of the detection matrix, so you would store left and right sides whenever you
moved an object:

    
    
        function move(a, x, y){
          a.left = x;
          a.top = y;
          a.right = x + a.w;
          a.bottom = y + a.h;
        }
    

Then the isOverlapping guard statements would be changed to:

    
    
        if(a.left > b.right) return false; // was to the right
    

Etc. This trades a little memory for CPU time, but it also makes the code
easier to read ...
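
Putting the cached-sides version together, a sketch (assuming screen coordinates with y growing downward):

```javascript
// Overlap test using the precomputed edges; each guard proves the boxes
// are disjoint along one direction, so reaching the end means overlap.
function isOverlapping(a, b){
  if (a.left > b.right) return false;  // a entirely to the right of b
  if (a.right < b.left) return false;  // a entirely to the left of b
  if (a.top > b.bottom) return false;  // a entirely below b
  if (a.bottom < b.top) return false;  // a entirely above b
  return true;
}
```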

~~~
vardump
If you _need_ to optimize this, on modern CPUs, you'd probably want to always
run all tests and have just one exit, instead of multiple possible paths.
That's easier said than done, because "&&" typically does generate a branch.

In other words, doing significantly more computational work but less
conditional branching can run significantly faster, because modern CPUs are
built for this kind of code.

Once you achieve a completely branchless collision test, you've completely
eliminated branch misprediction penalties inside it.

I've sped up existing code too many times simply by removing an if-statement
that checks whether the work needs to be done... :-)

That said, I find it highly unlikely that optimizing a 2D collision test on a
modern CPU yields any measurable performance improvement.
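
A branchless variant can be sketched like this (whether a JavaScript JIT actually emits branch-free code is not guaranteed; in C the same trick of `&`-ing comparison results is more predictable):

```javascript
// All four comparisons are evaluated unconditionally; booleans coerce to
// 0/1 under bitwise AND, so there is a single exit and no short-circuit
// branches between the tests.
function overlapsBranchless(a, b) {
  return ((a.left <= b.right) & (a.right >= b.left) &
          (a.top <= b.bottom) & (a.bottom >= b.top)) === 1;
}
```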

~~~
peterwoerner
A nice trick I have used for similar problems (although finite element
related) is

(x + abs(x))/2 returns x if x > 0 and 0 if x < 0; in other words, it computes
max(x, 0) without a branch.
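
As a one-line illustration of the identity (the function name is made up):

```javascript
// (x + |x|)/2: the abs term cancels x when x is negative, doubles it when
// positive, so the average is max(x, 0) with no conditional.
const clampNonNegative = x => (x + Math.abs(x)) / 2;
```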

