Thinking Parallel, Part I: Collision Detection on the GPU (2012) (nvidia.com)
93 points by setra on July 5, 2017 | 9 comments



Coarse collision detection is only expensive if done from a cold start every time. In practice, there's a one-time startup cost to sort the bounding boxes, but then only a small incremental cost as things move around. Most moves are local and require only a small number of changes, and many objects in a game don't move at all. Here's the classic paper from Ming Lin, from the 1990s. [1]
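
Roughly, the sorted-axis idea ("sweep and prune") keeps the interval endpoints sorted across frames, so each frame only pays for the few swaps caused by motion. A minimal single-axis sketch in C++, assuming axis-aligned bounding boxes; the names and structure here are illustrative, not taken from the paper:

  #include <cstddef>
  #include <utility>
  #include <vector>

  struct AABB { float lo, hi; int id; };  // extent along one sort axis

  // One frame of a single-axis sweep and prune. The vector persists
  // across frames; after small movements it is almost sorted, so
  // insertion sort runs in nearly O(n) instead of the O(n log n) a
  // cold-start sort would cost.
  std::vector<std::pair<int, int>> broadPhase(std::vector<AABB>& boxes) {
      // Incremental pass: restore sorted order by 'lo'.
      for (std::size_t i = 1; i < boxes.size(); ++i) {
          AABB key = boxes[i];
          std::size_t j = i;
          while (j > 0 && boxes[j - 1].lo > key.lo) {
              boxes[j] = boxes[j - 1];
              --j;
          }
          boxes[j] = key;
      }
      // Sweep: two boxes can only collide if their intervals overlap
      // on this axis; survivors go to the narrow phase (e.g. GJK).
      std::vector<std::pair<int, int>> candidates;
      for (std::size_t i = 0; i < boxes.size(); ++i)
          for (std::size_t j = i + 1;
               j < boxes.size() && boxes[j].lo <= boxes[i].hi; ++j)
              candidates.emplace_back(boxes[i].id, boxes[j].id);
      return candidates;
  }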

Fine collision detection can also be done incrementally; incremental GJK is a good approach. Here's Stephen Cameron's paper on that. [2]
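
For reference, the shape of the underlying test: GJK asks whether the Minkowski difference A - B contains the origin, walking a simplex toward it via support points. A 2D boolean sketch, assuming convex point sets (this is plain GJK, not Cameron's code; his incremental variant also seeds each query with the previous frame's simplex, so after a small motion it usually terminates in one or two support calls):

  struct V2 { double x, y; };
  static V2 sub(V2 a, V2 b) { return {a.x - b.x, a.y - b.y}; }
  static V2 neg(V2 a) { return {-a.x, -a.y}; }
  static double dot(V2 a, V2 b) { return a.x * b.x + a.y * b.y; }

  // Vector triple product (a x b) x c = b*(a.c) - a*(b.c), used to
  // build directions perpendicular to a simplex edge.
  static V2 triple(V2 a, V2 b, V2 c) {
      double ac = dot(a, c), bc = dot(b, c);
      return { b.x * ac - a.x * bc, b.y * ac - a.y * bc };
  }

  // Farthest vertex of a convex point set in direction d.
  static V2 farthest(const V2* p, int n, V2 d) {
      V2 best = p[0];
      for (int i = 1; i < n; ++i)
          if (dot(p[i], d) > dot(best, d)) best = p[i];
      return best;
  }

  // Support point of the Minkowski difference A - B.
  static V2 support(const V2* A, int na, const V2* B, int nb, V2 d) {
      return sub(farthest(A, na, d), farthest(B, nb, neg(d)));
  }

  // True iff the convex hulls of A and B intersect.
  bool gjkIntersects(const V2* A, int na, const V2* B, int nb) {
      V2 d = {1.0, 0.0};                      // arbitrary start direction
      V2 s[3];                                // simplex, newest point last
      s[0] = support(A, na, B, nb, d);
      d = neg(s[0]);                          // head toward the origin
      int n = 1;
      for (int iter = 0; iter < 64; ++iter) {
          V2 a = support(A, na, B, nb, d);
          if (dot(a, d) < 0.0) return false;  // origin is out of reach
          s[n++] = a;
          V2 ao = neg(a);
          if (n == 2) {                       // segment case
              V2 ab = sub(s[0], a);
              d = triple(ab, ao, ab);         // perp to edge, toward origin
          } else {                            // triangle case
              V2 ab = sub(s[1], a), ac = sub(s[0], a);
              V2 abPerp = triple(ac, ab, ab);
              V2 acPerp = triple(ab, ac, ac);
              if (dot(abPerp, ao) > 0.0) {         // origin past edge AB
                  s[0] = s[1]; s[1] = a; n = 2; d = abPerp;
              } else if (dot(acPerp, ao) > 0.0) {  // origin past edge AC
                  s[1] = a; n = 2; d = acPerp;
              } else {
                  return true;                // origin inside the simplex
              }
          }
      }
      return true;  // iteration cap: treat near-degenerate as touching
  }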

I wrote one of those once in C++, using the ordered-bounding-box approach from Lin and the incremental GJK approach from Cameron, plus some new work to guarantee continuity as objects moved. It turns out that incremental GJK has serious floating-point loss-of-significance problems as faces approach parallel, and when you build a physics engine this way, the near-parallel case gets explored very thoroughly as objects settle. You end up needing a 64-bit FPU. That was a problem on some consoles, and it's still a problem on some GPUs.
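
The failure mode is easy to reproduce in isolation. A contrived illustration (values chosen to show the effect, not taken from any engine): the 2D cross product of two nearly parallel face directions is a difference of almost-equal products, and in single precision the angle between them can vanish outright:

  #include <cstdio>

  int main() {
      // Face directions a = (1, t) and b = (1, t + dt), very nearly
      // parallel. The 2D cross product a.x*b.y - a.y*b.x measures the
      // angle between them as a difference of almost-equal terms.
      double t = 0.1, dt = 1e-9;

      float  cf = 1.0f * (float)(t + dt) - (float)t * 1.0f;
      double cd = 1.0 * (t + dt) - t * 1.0;

      // In float, (t + dt) rounds back to t, so the computed angle is
      // exactly zero: the contact normal turns to noise just as
      // settling objects drive faces toward parallel.
      std::printf("float:  %g\n", cf);   // prints 0
      std::printf("double: %g\n", cd);   // prints ~1e-09
      return 0;
  }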

[1] https://www.cs.jhu.edu/~cohen/Publications/icollide.pdf

[2] https://graphics.stanford.edu/courses/cs468-01-fall/Papers/c...


Is GPU-based collision detection at all common in modern games?


No, it's practically unused. In modern games, the GPU is normally fully occupied with rendering.


Is there any evidence you can provide for this? I don't necessarily dispute it, but I don't entirely buy that the GPU is used just for rendering; I thought some games were starting to offload particle systems and physics.


This is regrettably true. PhysX is an Nvidia technology and, as I understand it, its GPU acceleration is only supported on Nvidia GPUs. The consequence is that developers faced with this option quickly realize it's easier to just run the same tech on every user's machine, regardless of GPU vendor (Nvidia, AMD, Intel), and optimize their game the same way for all platforms.

  Can I use an NVIDIA GPU as a PhysX processor and a non-NVIDIA GPU for regular display graphics?

  No. There are multiple technical connections between PhysX processing and graphics that require tight collaboration between the two technologies. To deliver a good experience for users, NVIDIA PhysX technology has been fully verified and enabled using only NVIDIA GPUs for graphics.
http://www.nvidia.ca/object/physx_faq.html

EDIT: I'm talking about game development


Yeah, unfortunately. Both UE4 and Unity use the CPU version of PhysX. If your game runs on a console, it will be running on an AMD GPU, which doesn't support GPU PhysX.

Some games (many, actually) are using GPU-accelerated particles, but they're pretty much purely for rendering.

Remember that frame times are super tight in games (1000 ms / 60 FPS ≈ 16.7 ms per frame, and 33.3 ms at 30 FPS), and consoles are notoriously underpowered.

Working in games, I can say that neither my colleagues nor the companies we work with have much interest in offloading physics to the GPU at the expense of rendering performance.


Unity's physics engine uses only the CPU implementation of PhysX.

https://blogs.unity3d.com/2014/07/08/high-performance-physic...



Isn't it part of Nvidia's PhysX?



