
PhysX SDK 4.0, an Open-Source Physics Engine - homarp
https://news.developer.nvidia.com/announcing-physx-sdk-4-0-an-open-source-physics-engine/
======
visualphoenix
Oh man. On Army of Two we were running a fully deterministic simulation and we
had to patch many parts of PhysX to maintain determinism. In the end, we had to
get a waiver from Sony to ship a binary-patched version of the PhysX libraries.
Good times.

~~~
steve19
I am curious: why did you need determinism?

~~~
chmod775
Without knowing the game: some kind of replay/recap feature maybe, or physics
in cut scenes.

When you replay a series of inputs, you want the accompanying physics to come
out the same. Solvers in your engine that use randomness will either need to
have their RNG seeded with the same value or have the RNG removed.

Another thing causing problems might be how your engine handles time, or
whether it iterates over objects in a specific order relative to how they were
added (you might not have preserved that order in your snapshot).

There are many things that can go wrong.
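
To make that concrete, here is a minimal sketch (hypothetical engine code, not
the PhysX API) of two of those fixes: a seeded RNG and a fixed timestep, so
that replaying the same input stream reproduces the same result:

    // Hypothetical deterministic replay loop: the RNG is seeded with a
    // recorded value, the timestep is fixed (never the wall clock), and
    // bodies are stepped in a stable (insertion) order.
    #include <cstdint>
    #include <random>
    #include <vector>

    struct Input { std::uint32_t buttons; };
    struct Body  { float x = 0.0f, vx = 0.0f; };

    void simulate(std::vector<Body>& bodies, const std::vector<Input>& inputs,
                  std::uint64_t seed) {
        std::mt19937_64 rng(seed);        // same seed -> same random stream
        const float dt = 1.0f / 60.0f;    // fixed timestep, not measured time
        for (const Input& in : inputs) {  // one recorded input per tick
            for (Body& b : bodies) {      // stable iteration order
                if (in.buttons & 1) b.vx += 1.0f;
                // "random" impulse, but reproducible across replays
                b.vx += (static_cast<int>(rng() % 3) - 1) * 0.01f;
                b.x += b.vx * dt;
            }
        }
    }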

~~~
WorldMaker
Sometimes network code is easier with deterministic engines. Some engines
simply replay the remote/network inputs as if they were local (with various
strategies for handling the time sync/time delays), and rely on the property
of determinism of their underlying simulations to avoid divergence in player
world states.

Knowing only how much Army of Two stressed its cooperative network play
(including in its title), that's my best lay guess at why the game wanted
deterministic simulations.

The impression I got from dev team discussions/features on The Halo
Channel/Master Chief Collection was that Halo was built that way: its network
engine wanted rock-solid deterministic physics, so all the replay/recap
features added to later games were a "free" bonus they were able to build on
top of that earlier netcode requirement.
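
As a lay sketch of that input-replay lockstep idea (hypothetical code, not
from any of these games): each peer advances the shared, deterministic
simulation only once it has every player's input for the current tick, so
identical simulations never diverge:

    // Hypothetical lockstep tick: step() is deterministic, so feeding both
    // machines the same merged inputs keeps their worlds identical.
    #include <cstdint>
    #include <map>
    #include <vector>

    struct Input { std::uint32_t buttons = 0; };
    struct World { void step(const std::vector<Input>&) { /* deterministic */ } };

    struct Lockstep {
        World world;
        std::uint64_t tick = 0;
        int playerCount = 2;
        // inputs[tick][player], filled from local sampling and the network
        std::map<std::uint64_t, std::map<int, Input>> inputs;

        void tryAdvance() {
            auto it = inputs.find(tick);
            // Stall (or predict) until every player's input has arrived.
            if (it == inputs.end() || (int)it->second.size() < playerCount)
                return;
            std::vector<Input> frame;
            for (auto& kv : it->second) frame.push_back(kv.second);
            world.step(frame);  // same inputs + determinism => same result
            inputs.erase(it);
            ++tick;
        }
    };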

~~~
visualphoenix
Winner winner chicken dinner. We only sent controller input over the wire for
network play.

~~~
hackits
This has its advantages and disadvantages. On one hand it's great and requires
less bandwidth, but it also means that any floating-point rounding issue can
throw different machines out of sync with each other.
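
A toy illustration of the rounding issue: floating-point addition is not
associative, so the same math evaluated in a different order on two machines
(different compiler, SIMD width, or FPU mode) yields different bits, and those
differences compound tick after tick:

    // Float addition is not associative: the same three values summed in a
    // different order give different results -- enough to desync two peers.
    #include <cstdio>

    int main() {
        float a = 1e8f, b = -1e8f, c = 1.0f;
        float left  = (a + b) + c;  // 0 + 1 = 1.0f
        float right = a + (b + c);  // c is rounded away inside (b + c), = 0.0f
        std::printf("left=%g right=%g equal=%d\n", left, right, left == right);
    }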

~~~
visualphoenix
It was definitely a real pain to ensure the whole stack was deterministic.
Since we only targeted the PS3 and Xbox 360 it was easier to ensure
determinism. All in all, I wouldn't recommend it.

~~~
vvanders
Yeah, unless you have an entity count that exceeds your available bandwidth
you're usually better off with a dead-reckoning system.

That also has the advantage of being able to "fake" events with effects until
the server can resolve them, meaning a latent connection can 'feel' faster.

Kudos to you for shipping a lockstep solution; we could never flush out the
determinism bugs on our titles, so we always just used dead-reckoning unless
it was some dumb turn-based game.
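
For contrast, a minimal dead-reckoning sketch (hypothetical, not from any
particular engine): each client extrapolates a remote entity from its last
authoritative state and eases toward corrections instead of snapping, which is
also what lets you "fake" events so a latent connection feels faster:

    // Hypothetical dead reckoning: extrapolate from the last authoritative
    // state, and blend the rendered position toward the prediction.
    struct State { float x, vx; };

    struct RemoteEntity {
        State known{0.0f, 0.0f};  // last authoritative state from the server
        float displayX = 0.0f;    // what we actually render
        float sinceUpdate = 0.0f;

        void tick(float dt) {
            sinceUpdate += dt;
            // Assume the entity kept its last known velocity.
            float predicted = known.x + known.vx * sinceUpdate;
            // Smoothly correct toward the prediction instead of snapping.
            displayX += (predicted - displayX) * 0.2f;
        }

        void onServerUpdate(const State& s) {
            known = s;            // new authority; extrapolate from here
            sinceUpdate = 0.0f;
        }
    };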

------
Animats
Nice, from a developer point of view.

Havok was once the leader in this area, but they were refinanced in a down
round, acquired by Intel, and then sold off to Microsoft. Now they don't seem
to be very active. Their web site announces the new 2013 version.

(I used to work on this stuff. I'm responsible for the
ragdoll-falling-downstairs cliché, first shown in 1997.)

~~~
nfg
FWIW Havok is very much still trading and active in the games space. I won’t
say any more as the business side of things isn’t my forte. Unfortunate that
the website gives that impression!

Disclaimer: I work there.

~~~
Animats
Putting the documentation back online would be a good start.

------
Impossible
There are a lot of comments about CUDA and GPU compatibility, etc. PhysX is
mostly a CPU library; although some of its systems can run on the GPU, GPU
physics is not widely used in shipping games. Both Unreal Engine and Unity
use PhysX as their default physics engine. It runs on all platforms these
engines support (Windows, Mac, Linux, Android, iOS, Nintendo Switch, Xbox One,
PlayStation 4); Nvidia hardware is not required.

The position based dynamics code is in a separate SDK called Nvidia Flex (
[https://developer.nvidia.com/flex](https://developer.nvidia.com/flex)). Flex
is closed source, runs on the GPU and is implemented in CUDA (Nvidia only).
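
For reference, position based dynamics works by predicting positions from
velocities, projecting constraints directly on the positions, and then
deriving velocities from the position change. A minimal sketch of a single
distance constraint in that style (an illustration of the technique from the
Mueller et al. papers, not the Flex API):

    // Minimal PBD sketch: one distance constraint solved by projecting
    // positions directly (Gauss-Seidel style), then deriving velocities
    // from the position change. Illustration only, not the Flex API.
    #include <cmath>
    #include <cstdio>

    struct Vec3 { float x, y, z; };
    Vec3 operator+(Vec3 a, Vec3 b) { return {a.x + b.x, a.y + b.y, a.z + b.z}; }
    Vec3 operator-(Vec3 a, Vec3 b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
    Vec3 operator*(float s, Vec3 v) { return {s * v.x, s * v.y, s * v.z}; }
    float length(Vec3 v) { return std::sqrt(v.x * v.x + v.y * v.y + v.z * v.z); }

    int main() {
        const float dt = 1.0f / 60.0f;
        Vec3 p[2] = {{0, 0, 0}, {1.5f, 0, 0}};  // current positions
        Vec3 v[2] = {{0, 0, 0}, {0, 0, 0}};     // current velocities
        const float w[2] = {1.0f, 1.0f};        // inverse masses
        const float rest = 1.0f;                // constraint rest length

        // 1. Predict positions from velocities (external forces omitted).
        Vec3 pred[2] = {p[0] + dt * v[0], p[1] + dt * v[1]};

        // 2. Project the distance constraint for a few iterations.
        for (int it = 0; it < 4; ++it) {
            Vec3 d = pred[1] - pred[0];
            float len = length(d);
            float C = len - rest;              // constraint violation
            Vec3 n = (1.0f / len) * d;         // gradient direction
            float s = C / (w[0] + w[1]);
            pred[0] = pred[0] + (w[0] * s) * n;
            pred[1] = pred[1] - (w[1] * s) * n;
        }

        // 3. Velocities follow from the position change; commit positions.
        for (int i = 0; i < 2; ++i) {
            v[i] = (1.0f / dt) * (pred[i] - p[i]);
            p[i] = pred[i];
        }
        std::printf("x0=%.3f x1=%.3f\n", p[0].x, p[1].x);
    }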

~~~
ygra
Is it at the game's discretion where to run the physics code, or does the
selection between GPU and CPU in the NVidia control panel actually work?

------
nudgeee
Very cool. Anyone remember the PhysX PPU hardware?

[http://physxinfo.com/wiki/Ageia_PhysX_PPU](http://physxinfo.com/wiki/Ageia_PhysX_PPU)

~~~
cr0sh
It's too bad they didn't take off with gamers, but they were a rather niche
product for even that market.

~~~
jandrese
In the end it made more sense to buy a beefier graphics card with that money
and use some of it for the physics calculations instead.

~~~
devman0
This is sort of the way the 3D accelerator card went as well. It just got
merged into the graphics card.

~~~
WorldMaker
Plus, the graphics card seems on the verge of merging entirely back to the
mainboard (again). Certainly nVidia (et al) are fighting that (inevitable?)
future, but more systems are built with "integrated" graphics than not and
fewer systems than ever are slotting a graphics card in an expansion spot. (An
interesting under-explored repercussion of the crypto-mining GPU boom causing
such a consumer shortage in graphics cards was how many consumers realized
they didn't need one.)

~~~
snuxoll
For normal consumer use cases a discrete GPU has been "dead" for some time
now, with both Intel and AMD offering CPUs with integrated graphics able to
handle even some (very) light gaming tasks.

Discrete GPUs aren't going anywhere though; PC gaming has been going through
a huge resurgence, and their deployment in enterprise workloads is ever
increasing. Warranty costs alone (you don't want to replace a 400+ mm² GPU die
when a motherboard capacitor dies or the CPU it's attached to fails) dictate
that they remain add-in boards, and then you have the upgradability argument.

~~~
WorldMaker
PC gaming is doing well, but anecdotally the number of PC gamers using tablets
and laptops seems to be driving that as much as if not more than traditional
desktop/tower form factors. Admittedly many "portable" gaming GPUs are still
discrete chips, but in a laptop or tablet form factor they certainly aren't
discrete boards anymore.

~~~
snuxoll
There's a large market for gaming-oriented mobiles, I won't disagree. Still,
the traditional tower market is shrinking in every vertical EXCEPT gaming,
where it has actually grown in the past couple years.

Personally, I detest laptops as anything but machines for on-the-go
productivity - I don't want to replace a full system just to swap a CPU or
GPU, or pay a huge premium for the benefits of portability that I simply don't
need in a gaming system (not to mention the performance compromises that you
ALWAYS make with lower power or thermally limited components). I can see an
argument being made for students or other people with more mobile lifestyles,
but in my house there's three desks with gaming rigs right next to each other
in the family room for my wife, my daughter, and myself - the need for
portability simply isn't there.

The discrete GPU still isn't going anywhere :)

~~~
WorldMaker
I think the question is how big the niche remains. I think the diminishing
returns of the performance benefits from GPU model year to model year seem to
be pushing a lengthening upgrade cycle where I find my towers outlasting the
need to upgrade their GPUs. The last GPU replacement I did (a tower ago and a
couple years back) was because the GPU fried itself, rather than for any
perceived performance gain. Sure, it was a benefit in that case that the one
failed component was not enough to sink that particular Ship of Theseus at
that time, but on the other hand, I don't think I otherwise would have
replaced the GPU on it before I replaced the entire tower. Unlike the tower
before that where a GPU upgrade was a magical thing with drastic quality
improvements.

I use a tower myself currently, but I had an (early) gaming laptop in college
and lately have been considering a return to laptop gaming, given all the
progress that has been made since the last time I tried it. Partly because one
of the few ways to differentiate _today_ between PC and console gaming is
having the freedom of more mobile gaming experiences. (Nintendo's Switch, of
course, says "hello" with its docking-based approach to the console. If
Nintendo is right, the future of even console gaming is probably mobile, too.)

Anyway, yes, the discrete GPU is still around today. I'm just suggesting it
might not be guaranteed to stay. As someone who has been using PCs since the
386 era, there are past versions of me that would have been surprised that the
sound card was reintegrated into motherboards, even for use cases like 5.1+
speaker setups. Dolby Atmos support on the Windows PC is a "free" "app" you
download that asks your video bus (!) to pass certain sound data through its
HDMI port(s) (or that supports existing 5.1+ mainboard outputs if you pay
the license fee, or that supports headphones if you pay the license fee).
There's a PC world where that would seem unimaginable without an extra
expansion board or three and some weird cable juggling. With the diminishing
returns of discrete GPU cards over time (despite AMD and nVidia marketing), it
does feel like the importance, even to gamers, of discrete GPU cards could
similarly come to an end as it did for sound cards.

~~~
animal531
I want to argue it the other way around. The CPU/memory, motherboard, sound
card, etc. can easily become extensions of, and be integrated/absorbed into,
the graphics card.

Why should the design stay as it is?

The GPU is already a parallel design; it just needs to be able to handle
generic current CPU tasks and connect to existing media such as hard drives.
Integrate system memory onto the card, add sound features, and so on.

------
bgorman
Does this open the door for PhysX acceleration on AMD cards? Why would Nvidia
do this from a business point of view?

~~~
Thaxll
Most of the time physics is not running on the GPU; it's a CPU thing.

~~~
hackits
Got into some heated debates with people about PhysX where, from a developer
standpoint, it was offloading all calculations to the CPU even with a PhysX
GPU present. Typically in game engines you need to pause the world and look
into the PhysX simulation to get an object's pos/vel/mass. This causes the
PhysX engine to stop the world to finish its calculations, so crossing the PCI
bus becomes more and more expensive.

The vast majority of the time, to solve this problem, game engines just let
non-interactive objects (clouds, waves, lighting) be calculated by the PhysX
sub-system, only poking it now and then to prevent the PhysX hardware from
swamping the CPU.

~~~
maccard
> Typically in game engines you need to pause the world and look into the
> PhysX simulation to get an object's pos/vel/mass. This causes the PhysX
> engine to stop the world to finish its calculations, so crossing the PCI bus
> becomes more and more expensive.

PhysX keeps two copies of the simulation; you read from one while PhysX is
updating the other, and then they swap the pointers. You don't cross the PCI
bus to get every position/velocity; that info is transferred in bulk at every
step.
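
A minimal sketch of that double-buffering idea (illustrative; not the actual
PhysX API): readers always see a complete snapshot while the solver fills the
other buffer, and a pointer swap publishes the whole step at once:

    // Illustrative double-buffered simulation state: the game reads one
    // buffer while the solver writes the other; swapping publishes in bulk.
    #include <utility>
    #include <vector>

    struct BodyState { float pos[3], vel[3], mass; };

    class SimulationBuffers {
        std::vector<BodyState> a_, b_;
        std::vector<BodyState>* read_  = &a_;  // game code reads this
        std::vector<BodyState>* write_ = &b_;  // solver writes this

    public:
        const std::vector<BodyState>& read() const { return *read_; }

        void step() {
            *write_ = *read_;               // start from the last snapshot
            for (BodyState& b : *write_) {  // advance the simulation
                for (int i = 0; i < 3; ++i)
                    b.pos[i] += b.vel[i] * (1.0f / 60.0f);
            }
            std::swap(read_, write_);       // publish the new step
        }
    };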

------
aleden
    # Disable implicit rules to speedup build
    .SUFFIXES:
    SUFFIXES :=
    %.out:
    %.a:
    ...

[https://github.com/NVIDIAGameWorks/PhysX-3.4/blob/master/Phy...](https://github.com/NVIDIAGameWorks/PhysX-3.4/blob/master/PhysX_3.4/Source/compiler/linux64/Makefile#L68)

Does this actually work? If it has tangible benefits then perhaps the AOSP
could do the same.

~~~
stefan_
AOSP doesn't use makefiles anymore.

~~~
quotemstr
Yes we do. We just compile them to ninja graphs and use those to do the actual
builds. For the moment, the actual input descriptions can (but don't have to)
come in makefile form.

~~~
stefan_
The build has used ninja for a long time now; the "Makefiles" aren't run with
make anymore and instead go through kati, and there is a tree-wide effort to
convert any makefiles into Soong specs.

~~~
quotemstr
Yes, but makefiles still _exist_, and they still provide build configuration.
That they're off to the side during most incremental builds is a separate
matter.

------
electricslpnsld
Rad, I'll have to dig into the source for this! Does PhysX mainly implement
PBD, like all the papers from Mueller that they publish?

------
chombier
Does anyone know about the new "Temporal" Gauss-Seidel solver in 4.0? I can't
find any reference on it.

------
jjuhl
How does it compare to Bullet Physics?

------
shmerl
Doesn't it still depend on CUDA for hardware acceleration? For it to become
truly open, it should be untied from CUDA first.

So far it looks more like a way to advance CUDA usage even further, by giving
away a free higher-level library that's locked into it.

~~~
maccard
> Doesn't it still depend on CUDA for hardware acceleration?

Assuming by hardware you mean GPU acceleration, then yes. It's not really a
push for CUDA usage; most games that use PhysX don't use the GPU acceleration
(not everyone has PhysX cards, and those that do are usually busy using the
GPU for rendering), so in practice, for 99% of use cases, it is open.

~~~
shmerl
Modern cards commonly have rendering and compute pipelines available to be
used in parallel. So in theory nothing stops games from using compute queues
for physics.

~~~
HelloNurse
The issue is that CUDA, unlike OpenCL, is only available on Nvidia hardware.

------
acoye
This is a logical step to get PhysX used by AI researchers.

------
IChooseYou
As far as I know, PhysX doesn't offer x64 libs unless you pay. I couldn't even
find a torrent. But this was years ago so it could've changed.

~~~
maccard
Given that it's now open source, you can compile your own. They have provided
x64 libs for at least the last few years though.

------
nkg
That video... I see a huge potential for "Twitch plays Robotic arms" ! XD
Someone, please do this.

------
xvilka
Too bad they will never open source their drivers or help nouveau instead. The
only hope is if they go bankrupt.

~~~
twtw
> never ... help nouveau

[https://lwn.net/Articles/568038/](https://lwn.net/Articles/568038/)

~~~
craftyguy
1) Those documents are no longer available (or the link is broken)

2) The contribution was actually not very helpful for nouveau, much of the
information was already known. nvidia has done practically nothing since then
to support nouveau in any meaningful way, with the exception of a one-off
patchset to implement some support for Tegra.

I suspect you just found the first link that seemed to validate the point you
are trying to make, without actually understanding what is (or is not) going
on.

~~~
twtw
1) link is broken

[http://download.nvidia.com/open-gpu-doc/DCB/1/DCB-4.0-Specif...](http://download.nvidia.com/open-gpu-doc/DCB/1/DCB-4.0-Specification.html)

> nvidia has done practically nothing since then to support nouveau in any
> meaningful way

There's an awful lot of stuff in that directory for that to be the case.

> I suspect you just found the first link that seemed to validate the point
> you are trying to make, without actually understanding what is (or is not)
> going on.

Funny, I think the same thing when people say "nvidia never helps nouveau."

~~~
craftyguy
Good thing nvidia hardware has not changed one bit since they released that
limited amount of information in 2013!

~~~
twtw
There's a reason I said "that directory" not "that file."

Cutting through the sarcasm, here's some Volta info:
[https://download.nvidia.com/open-gpu-doc/Display-Ref-Manuals...](https://download.nvidia.com/open-gpu-doc/Display-Ref-Manuals/1/gv100/)

As reported on in phoronix:

[https://www.phoronix.com/scan.php?page=news_item&px=NVIDIA-V...](https://www.phoronix.com/scan.php?page=news_item&px=NVIDIA-Volta-GV100-Firmware)

[https://www.phoronix.com/scan.php?page=news_item&px=NVIDIA-V...](https://www.phoronix.com/scan.php?page=news_item&px=NVIDIA-Volta-Display-Headers)

~~~
xvilka
"Before getting too excited, this is strictly about the display hardware and
not about the 3D engine, etc." \- from the link you gave. So still a show but
not a real help.

------
ratsimihah
But does it support ray tracing at 4k 60 fps?

