

Ncollide − a Rust 2D and 3D collision detection library - sebcrozet
http://www.ncollide.org

======
krat0sprakhar
Refreshing to see Rust being used in graphics intensive projects. Would be
great if the author could share his experience using Rust.

~~~
illumen
I can't tell if it is using the GPU, SIMD, multiple cores, memory
compression/decompression, code rewriting (JIT), or other high-performance
techniques.

------
doomrobo
Just out of curiosity, what kind of uses would a 4D collision detector have?

~~~
sebcrozet
The one application I know for 4D collision detection is 3D continuous
collision detection. If two 3D objects are moving, you can parameterize their
movement through time to obtain a 4D shape. If those movements are only
translational, and the two shapes are convex and have a support function, then
testing for contact is easy (e.g. using GJK). However if those movements
contain rotations, the resulting 4D shape will be concave so interference
detection may be hard.

I do not think anybody actually uses this method though.
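The translational case above can be sketched in a few lines. This is hypothetical code, not ncollide's API: GJK only needs a support function, and when the motion is linear in time, the time coordinate that maximizes the 4D dot product is always an endpoint of [0, 1].

```rust
// Hedged sketch (not ncollide's API): support function of the 4D shape traced
// by a convex 3D shape (here a unit sphere) translating by `vel` over t in [0, 1].

fn dot(a: [f64; 3], b: [f64; 3]) -> f64 {
    a[0] * b[0] + a[1] * b[1] + a[2] * b[2]
}

// Support point of a unit sphere centered at `c`: farthest point in direction `d`.
fn sphere_support(c: [f64; 3], d: [f64; 3]) -> [f64; 3] {
    let n = dot(d, d).sqrt();
    [c[0] + d[0] / n, c[1] + d[1] / n, c[2] + d[2] / n]
}

// 4D support: the direction is (d, dt). Because the motion is linear in t,
// the maximizing time coordinate is either 0 or 1.
fn swept_support(c: [f64; 3], vel: [f64; 3], d: [f64; 3], dt: f64) -> ([f64; 3], f64) {
    let t = if dot(d, vel) + dt > 0.0 { 1.0 } else { 0.0 };
    let shifted = [c[0] + t * vel[0], c[1] + t * vel[1], c[2] + t * vel[2]];
    (sphere_support(shifted, d), t)
}

fn main() {
    // Sphere at the origin moving along +x: the 4D support along +x picks t = 1,
    // i.e. the far end of the swept volume.
    let (p, t) = swept_support([0.0; 3], [2.0, 0.0, 0.0], [1.0, 0.0, 0.0], 0.0);
    assert_eq!(t, 1.0);
    assert!((p[0] - 3.0).abs() < 1e-9);
}
```

Feeding this support function to a standard GJK loop gives exactly the continuous collision test described above, without ever building the 4D shape explicitly.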

------
dlsym
Can someone explain to me why all these physics demos always seem to run in
slow motion?

~~~
optymizer
Here's my (very ignorant) take on it:

Having worked through half of the Nature of Code book and implemented all of
its physics examples, I realized that the physics libraries are all
approximations (not simulations) of the real world. To make the approximations
as close as possible to what we see in the real world, you need to make many
more fine calculations. This limits the frame rate at which the simulation can
run. You either run your simulation less accurately at 30 updates-per-second,
or you run it with increased accuracy at 10 updates-per-second. When demoing
physics libraries, authors usually want to showcase the small details that
increase realism. This leads to more calculations, which leads to a greater
slowdown.
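A toy illustration of that trade-off (hypothetical code, not from any library): integrating free fall with explicit Euler, where more updates per unit of simulated time land closer to the exact answer, at a higher per-frame cost.

```rust
// Explicit Euler free fall: distance fallen after `total_time` seconds,
// computed with `steps` updates. More (finer) steps = better approximation.
fn fall_distance(total_time: f64, steps: u32) -> f64 {
    let g = 9.8;
    let dt = total_time / steps as f64;
    let (mut y, mut v) = (0.0, 0.0);
    for _ in 0..steps {
        v += g * dt;
        y += v * dt;
    }
    y
}

fn main() {
    let exact = 0.5 * 9.8; // closed-form answer after 1 s: 4.9 m
    let coarse = fall_distance(1.0, 10); // 10 updates for the whole second
    let fine = fall_distance(1.0, 1000); // 1000 updates: 100x the work
    // The finer simulation is much closer to the exact trajectory.
    assert!((fine - exact).abs() < (coarse - exact).abs());
}
```

The same principle applies to contact resolution and constraint solving, where the cost of extra iterations is far higher than one addition per step.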

I think gravity is the first force we notice when it's off, because we're very
used to seeing objects fall. For gravity to feel right, it needs to _feel_
like 9.8 N/kg at a specific framerate. If you update less often, you'll slow
down the world; if you update much faster, objects will fall faster. You might
want to double gravity to 19.6 N/kg if your fps is 50% slower to maintain the
illusion of real speed, but that effectively means that objects will skip
ahead 19.6 units per update instead of just 9.8, and that hurts collision
accuracy. There are a lot of parameters like this that can be tweaked, but
since many results are then used as inputs to other calculations, changing one
parameter affects the entire simulation. Once you reach a certain level of
realism, you don't want to adjust those parameters again for a different rate
of updates-per-second.

In theory, this shouldn't happen if you run your physics library as a state
machine that takes time as input and outputs the state of the system to your
renderer. That would let your physics simulation run at 30 updates per second
and your graphics rendering at 60 fps. In practice, they're all running on the
same machine, along with any video capturing, and are competing for resources.
This leads to slowdowns, which you notice as slow motion.

If anyone can explain this better, please do chime in.

