PhysX SDK 4.0, an Open-Source Physics Engine (nvidia.com)
368 points by homarp 6 days ago | 117 comments





Oh man. On Army of Two we were running a fully deterministic simulation and we had to patch many parts of PhysX to maintain determinism. In the end, we had to get a waiver from Sony to ship a binary-patched version of the PhysX libraries. Good times.

All that hassle just for them to open-source it in the end. Yay bureaucracy! By the way, good work on that game. I thoroughly enjoyed it when it came out.

How did you know what to patch?

Sounds like a potential GDC or Gamasutra writeup.


We could detect if the simulation diverged. Then we had to replay deterministically up to the frame of divergence and then break on the various things that cause divergence: floating point underflow/overflow, NaN, uninitialized memory access, divide by zero, etc. If my memory serves correctly, the PhysX code referenced uninitialized memory. We had to patch their binary to initialize it.
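
(For anyone curious what "break on the things that cause divergence" can look like in practice, here is a minimal sketch of trapping those events on Linux/glibc x86 so the debugger halts on the offending instruction during a deterministic replay. This is just the general idea, not the actual console tooling described above; feenableexcept is a glibc extension.)

    // Hedged sketch: raise SIGFPE on the float events that commonly break
    // determinism, so a debugger stops exactly where the divergence starts.
    #define _GNU_SOURCE
    #include <fenv.h>

    void enable_divergence_traps() {
        // Trap NaN-producing operations, divide-by-zero and overflow.
        feenableexcept(FE_INVALID | FE_DIVBYZERO | FE_OVERFLOW);
    }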

This is the reason why most game physics engines run with a fixed timestep. You normally set the update rate of your physics engine to 120 Hz, 60 Hz, or 30 Hz. Now if your rendering engine is rendering at 120 Hz and your physics engine is working at 60 Hz, you can do some very basic interpolation using just the object's mass and velocity.

Otherwise you run into situations where a large delta-V or a very small delta-V completely breaks the physics calculations. The next issue is that rounding and precision problems with doubles still creep in, and having a physics engine that is deterministic for a given Hz and float/double precision speeds up development time.
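
(A rough sketch of that pattern, with made-up names, assuming a 60 Hz physics step and a faster renderer: advance the simulation with a fixed dt inside an accumulator loop, then blend the last two physics states for display.)

    #include <chrono>

    struct State { double x = 0, v = 1; };                  // toy 1-D body

    State step(State s, double dt) {                         // deterministic fixed-dt physics step
        s.x += s.v * dt;
        return s;
    }

    State lerp(const State& a, const State& b, double t) {   // blend for rendering only
        return { a.x + (b.x - a.x) * t, b.v };
    }

    void run(bool& running) {
        const double dt = 1.0 / 60.0;                        // physics fixed at 60 Hz
        double accumulator = 0.0;
        State prev, curr;
        auto last = std::chrono::steady_clock::now();

        while (running) {
            auto now = std::chrono::steady_clock::now();
            accumulator += std::chrono::duration<double>(now - last).count();
            last = now;

            while (accumulator >= dt) {                      // catch up in fixed steps
                prev = curr;
                curr = step(curr, dt);
                accumulator -= dt;
            }
            double alpha = accumulator / dt;                 // 0..1 between the two physics states
            State drawn = lerp(prev, curr, alpha);           // what the renderer would draw
            (void)drawn;                                     // render(drawn) in a real engine
        }
    }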


One of my favorite all time games.

I am curious, why did you need determinism?

Without knowing the game: Some kind of replay/recap feature maybe or physics in cut scenes.

When you replay a series of inputs, you want the accompanying physics to come out the same. Solvers in your engine that use randomness will either need to have their RNG seeded with the same value or have the RNG removed.

Another thing causing problems might be how your engine handles time, or whether it iterates over objects in a specific order relative to how they were added (your snapshot might not have preserved that order).

There are many things that can go wrong.
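
(A trivial illustration of the RNG point, with made-up names: record the seed alongside the input stream and reconstruct the same generator for the replay.)

    #include <cstdint>
    #include <random>

    struct ReplayHeader {
        uint32_t rng_seed;        // captured when the recording started
        // ... per-frame inputs follow ...
    };

    // Same seed + same order of calls = same sequence of "random" numbers.
    std::mt19937 make_sim_rng(const ReplayHeader& r) {
        return std::mt19937(r.rng_seed);
    }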


Sometimes network code is easier with deterministic engines. Some engines simply replay the remote/network inputs as if they were local (with various strategies for handling the time sync/time delays), and rely on the property of determinism of their underlying simulations to avoid divergence in player world states.

Knowing only how much Army of Two stressed its cooperative network play (including in its title), that's my best lay guess at the strong reason the game wanted deterministic simulations.

The impression I got from dev team discussions/features on The Halo Channel/Master Chief Collection was that Halo was built that way: its network engine wanted rock-solid deterministic physics, so all the replay/recap features added to later games were a "free" bonus they were able to build on top of that earlier netcode requirement.


Age of Empires was built via input synchronization + local determinism: https://www.gamasutra.com/view/feature/131503/1500_archers_o...

Winner winner chicken dinner. We only sent controller input over the wire for network play.

This has its advantages and disadvantages. On one hand it's great and requires less bandwidth, but it also means that any floating point rounding issue can throw different machines out of sync with each other.

It was definitely a real pain to ensure the whole stack was deterministic. Since we only targeted PS3 and Xbox 360 it was easier to ensure determinism. All in all, I wouldn't recommend it.

Yeah, unless you have an entity count that exceeds your available bandwidth you're usually better off with a dead-reckoning system.

That also has the advantage of being able to "fake" events with effects until the server can resolve them meaning a latent connection can 'feel' faster.

Kudos to you for shipping a lockstep solution, we could never flush out the determinism bugs on our titles so we always just used dead-reckoning unless it was some dumb turn based game.
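
(For contrast, a toy dead-reckoning step, with illustrative names: between authoritative updates each peer just extrapolates the last known state from its velocity, then blends or snaps when the next update arrives.)

    struct EntityState {
        float  pos[3];
        float  vel[3];
        double timestamp;    // time of the last authoritative update
    };

    // Advance a remote entity locally using only its last reported state.
    void extrapolate(EntityState& e, double now) {
        const double dt = now - e.timestamp;   // how stale the last update is
        for (int i = 0; i < 3; ++i)
            e.pos[i] += e.vel[i] * static_cast<float>(dt);
        e.timestamp = now;
    }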


Isn't floating point rounding behaviour also deterministic?

Depends on the mode the FPU is put into, fast or precise. If you want to have some fun, start a multiplayer game of AOE and switch the FPU mode from fast to precise. Doesn't take long to get out of sync, even for a basic game.

More recently we've taken the approach of "close enough is good enough". See Red Dead Redemption's "hilarious" cut scenes where the NPCs get involved, or the horse gets attached to the trains.

Another thing to consider about a replay system is that you don't have to be perfect. What you as a player see isn't what the server (if you have one) sees, and it's not what the other player sees. Your view is only a best guess at that, and for most replay systems in games, they will also be best guesses.


Yup, this too. Easy to detect divergence if you compare the output of the RNG on both sides of the network at the start of the frame.
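
(Sketch of that trick, purely illustrative: both peers fold a draw from the simulation RNG into a small per-frame digest and exchange it; the first mismatching frame is where the simulations diverged. Since both sides draw identically, the RNG sequences stay in lockstep.)

    #include <cstdint>
    #include <random>

    uint32_t frame_digest(std::mt19937& sim_rng, uint32_t frame) {
        return static_cast<uint32_t>(sim_rng()) ^ (frame * 2654435761u);
    }

    // Compare against the digest received from the other machine for the same frame.
    bool diverged(uint32_t local_digest, uint32_t remote_digest) {
        return local_digest != remote_digest;
    }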

You need determinism in all multiplayer games that do not have an authoritative server. Although I would argue that you would want determinism in all games.

In terms of this game in particular, it's a multiplayer coop action shooter, so you want everything to behave the same way on both players' machines. The player models should be in consistent places in the world, bullets should fly in the same direction on both, and grenades and other physics objects should bounce the same way.

If these things do not happen then you will have diverging realities on each of the machines.


Nice, from a developer point of view.

Havok was once the leader in this area, but they were refinanced in a down round, acquired by Intel, and then sold off to Microsoft. Now they don't seem to be very active. Their web site announces the new 2013 version.

(I used to work on this stuff. I'm responsible for the ragdoll-falling-downstairs cliche, first shown in 1997.)


FWIW Havok is very much still trading and active in the games space. I won’t say any more as the business side of things isn’t my forte. Unfortunate that the website gives that impression!

Disclaimer: I work there.


Putting the documentation back online would be a good start.

Havok is still the leader / best physics engine out there; the only reason some studios don't use it is that it costs a lot of money.

What about Bullet?

And Ammo.js! There are literally dozens of us using it!

There are a lot of comments about CUDA and GPU compatibility, etc. PhysX is mostly a CPU library; although there are systems that can run on the GPU, GPU physics is not widely used in shipping games. Both Unreal Engine and Unity use PhysX as their default physics engine. It runs on all platforms these engines support (Windows, Mac, Linux, Android, iOS, Nintendo Switch, Xbox One, PlayStation 4); Nvidia hardware is not required.

The position based dynamics code is in a separate SDK called Nvidia Flex ( https://developer.nvidia.com/flex). Flex is closed source, runs on the GPU and is implemented in CUDA (Nvidia only).


> Flex is closed source, runs on the GPU and is implemented in CUDA (Nvidia only).

Flex 1.0 did require CUDA hardware, but with 1.1 they added a Direct3D backend that runs on any DX11-class GPU.

https://developer.nvidia.com/nvidia-flex-110-released


Is that at the game's discretion where to run the physics code or does that selection between GPU and CPU in the NVidia control panel actually work?

So the OSS part is what will be useless in the future, because of the inevitable volumetric future (Flex)?

If you want OSS Position Based Dynamics code you can get Mueller's original C++ implementation here (https://github.com/InteractiveComputerGraphics/PositionBased...). The GPU implementation is not open, but this code can be adapted to run on the GPU outside of Flex.

Scrawk converted the code to C# to support Unity and also implemented a Unity-based version of the fluid dynamics code that runs on the GPU (using portable compute shaders instead of CUDA).

https://github.com/Scrawk/Position-Based-Dynamics https://github.com/Scrawk/PBD-Fluid-in-Unity

The only game that I'm aware of that uses position based dynamics on the GPU is Claybook (https://twvideo01.ubm-us.net/o1/vault/gdc2018/presentations/...). PBD is basically an extension of Jakobsen-style Verlet physics (http://www.cs.cmu.edu/afs/cs/academic/class/15462-s13/www/le...), which is used in a lot of games (Fantastic Contraption, Gish, World of Goo, various bridge builders, etc.), but in all those cases the developers are using custom physics code and not a library. This is very different from rigid body dynamics, where most shipping games use a licensed physics engine (Havok, PhysX, Bullet, ODE, etc.)
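
(For reference, the Jakobsen-style position Verlet step that PBD builds on is tiny; velocity is implicit in the difference between the current and previous positions, and constraints are then enforced by moving positions directly. Illustrative code, not taken from any of the libraries above.)

    struct Particle {
        float pos[3];
        float prev[3];
    };

    // x_new = 2*x - x_prev + a*dt^2  (then project constraints on positions)
    void verlet_step(Particle& p, const float accel[3], float dt) {
        for (int i = 0; i < 3; ++i) {
            const float cur = p.pos[i];
            p.pos[i]  = 2.0f * cur - p.prev[i] + accel[i] * dt * dt;
            p.prev[i] = cur;
        }
    }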


Very cool. Anyone remember the PhysX PPU hardware?

http://physxinfo.com/wiki/Ageia_PhysX_PPU


It's too bad they didn't take off with gamers, but they were a rather niche product for even that market.

In the end it made more sense to buy a beefier graphics card with that money and use some of it for the physics calculations instead.

This is sort of the way the 3D accelerator card went as well. It just got merged into the graphics card.

Plus, the graphics card seems to be on the verge of merging entirely back into the mainboard (again). Certainly nVidia (et al) are fighting that (inevitable?) future, but more systems are built with "integrated" graphics than not, and fewer systems than ever are slotting a graphics card into an expansion slot. (An interesting under-explored repercussion of the crypto-mining GPU boom causing such a consumer shortage in graphics cards was how many consumers realized they didn't need one.)

For normal consumer use cases a discrete GPU has been "dead" for some time now, with both Intel and AMD offering CPUs with integrated graphics able to handle even some (very) light gaming tasks.

Discrete GPUs aren't going anywhere though; PC gaming has been going through a huge resurgence and their deployment in enterprise workloads is ever increasing. Warranty costs alone (you don't want to replace a 400+ mm2 GPU die when a motherboard capacitor dies or the CPU it's attached to fails) dictate that they remain add-in boards, and then you have the upgradability argument.


PC gaming is doing well, but anecdotally the number of PC gamers using tablets and laptops seems to be driving that as much as if not more than traditional desktop/tower form factors. Admittedly many "portable" gaming GPUs are still discrete chips, but in a laptop or tablet form factor they certainly aren't discrete boards anymore.

There's a large market for gaming-oriented mobiles, I won't disagree. Still, the traditional tower market is shrinking in every vertical EXCEPT gaming, where it has actually grown in the past couple years.

Personally, I detest laptops as anything but machines for on-the-go productivity - I don't want to replace a full system just to swap a CPU or GPU, or pay a huge premium for the benefits of portability that I simply don't need in a gaming system (not to mention the performance compromises that you ALWAYS make with lower power or thermally limited components). I can see an argument being made for students or other people with more mobile lifestyles, but in my house there's three desks with gaming rigs right next to each other in the family room for my wife, my daughter, and myself - the need for portability simply isn't there.

The discrete GPU still isn't going anywhere :)


I think the question is how big the niche remains. I think the diminishing returns of the performance benefits from GPU model year to model year seem to be pushing a lengthening upgrade cycle where I find my towers outlasting the need to upgrade their GPUs. The last GPU replacement I did (a tower ago and a couple years back) was because the GPU fried itself, rather than for any perceived performance gain. Sure, it was a benefit in that case that the one failed component was not enough to sink that particular Ship of Theseus at that time, but on the other hand, I don't think I otherwise would have replaced the GPU on it before I replaced the entire tower. Unlike the tower before that where a GPU upgrade was a magical thing with drastic quality improvements.

I use a tower myself currently, but I had an (early) gaming laptop in college and lately have been considering a return to laptop gaming with all the progress that has been made since the last time I tried it. Partly because one of the few ways left to differentiate between PC and console gaming is having the freedom of more mobile gaming experiences. (Nintendo's Switch, of course, says "hello" with its docking-based approach to the console. If Nintendo is right, the future of even console gaming is probably mobile, too.)

Anyway, yes the discrete GPU is still around today. I'm just suggesting it might not be guaranteed to stay. As someone that has been using PCs since the 386 era, there are past versions of me that would have been surprised that the Sound Card was reintegrated into motherboards, even for use cases like 5.1+ speaker setups. Dolby Atmos support on the Windows PC is a "free" "app" you download, that asks your video bus (!) to pass through certain sound data over its HDMI port(s) (or that supports existing 5.1+ mainboard outputs if you pay the license fee, or that supports headphones if you pay the license fee). There's a PC world where that would seem unimaginable without an extra expansion board or three and some weird cable juggling. With the diminishing returns of discrete GPU cards over time (despite AMD and nVidia marketing), it does feel like the importance, even to gamers, of discrete GPU cards could similarly come to an end as it did for sound cards.


I want to argue it the other way around. The CPU/memory, motherboard, sound card etc can easily become extensions of and integrated/absorbed into the graphics card.

Why should the design stay as it is?

The GPU is already a parallel design, it just needs to be able to handle generic current CPU tasks and connect to existing media such as hard drives etc. Integrate system memory onto the card, and add on sound features etc.


Very, very light games, almost Flash-like.

The traditional sales pitch from Intel for GPA is taking games made for discrete GPUs and kind of making them run on their integrated ones.

And they now even sell chips with AMD GPUs on them, as they really aren't good for anything beyond accelerated 2D graphics.


Discrete Graphics cards are niche hardware, but the niche isn't shrinking appreciably anymore. Intel doesn't seem all that interested in competing above the middle-low level of performance, nor do they put nearly the amount of effort into their drivers--which means more headaches when you try to push them hard. They're happy gobbling up the majority of machines where people don't give a shit about the graphics and let nVidia and AMD do the expensive battle to capture the enthusiast market.

This strategy did let them down in that it caused them to miss the opportunities for AI, cryptocoin, ML, and other such markets.


It's not the only time that chasing fat margins let them down. They completely gave the smartphone business away because they didn't see the opportunity. Now they are looking at a half dozen companies from the ARM ecosystem that are hungry and looking to expand their market share into servers. It's completely possible that within a few years ARM CPU/GPU combinations will own a huge part of the computing landscape.

It's the same thing that leaves IBM sitting on the sidelines: an unwillingness to take a chance and invest in things where the margins aren't evident. Eventually someone shows up, invents a whole market (AWS, for example), and makes a killing, leaving them sitting on the sidelines.


Does this open the door for PhysX acceleration on AMD cards? Why would Nvidia do this from a business point of view?

Re: business point of view

See the other link currently on the front page: https://blogs.nvidia.com/blog/2018/12/03/physx-high-fidelity.... Gives some ideas of the rationale.


Most of the time physics is not running on the GPU; it's a CPU thing.

Got into some heated debates with people about PhysX where, from a developer standpoint, it was off-loading all calculations to the CPU even with a PhysX GPU present. Typically in game engines you need to pause the world and look into the PhysX simulation to get the object's pos/vel/mass. This causes the PhysX engine to stop the world to finish its calculations, so crossing the PCI bus becomes more and more expensive.

The vast majority of the time, to solve this problem most game engines just let non-interactive objects (clouds, waves, lighting) be calculated by the PhysX sub-system, only poking it now and then to prevent the PhysX hardware from swamping the CPU.


> Typically in game engines you need to pause the world and look into the PhysX simulation to get the object's pos/vel/mass. This causes the PhysX engine to stop the world to finish its calculations, so crossing the PCI bus becomes more and more expensive.

PhysX keeps two copies of the simulation, you read from one while PhysX is updating the other, and then they swap the pointers. You don't cross the PCI bus to get every position/velocity, that info is transferred in bulk at every step.
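
(Conceptually it's just a double buffer; here's a minimal sketch of the idea, not the PhysX API: game code reads one copy of the state while the solver fills the other, and the roles swap once per step, so no per-object round trips are needed mid-frame.)

    #include <atomic>

    struct SimState { /* positions, velocities, ... */ };

    struct DoubleBufferedSim {
        SimState         bufs[2];
        std::atomic<int> read_index{0};

        const SimState& read() const { return bufs[read_index.load()]; }          // game code
        SimState&       write()      { return bufs[1 - read_index.load()]; }      // solver
        void            swap()       { read_index.store(1 - read_index.load()); } // after each step
    };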


From a business point of view they can get developers/researchers working with PhysX early. Eventually these developers will convince their employer to shell out for other non-free Nvidia products.

Maybe it's just my perception, but fewer and fewer games these days seem to be using PhysX.

Everything that is made with Unity or UE4 uses PhysX.

That said, not that many games use the GPU-accelerated parts of it; for a lot of gameplay physics the CPU code path of PhysX works just fine and does not have special hardware requirements.


> Everything that is made with Unity or UE4 uses PhysX

That's not true - UE4 games can be (and are) built against other physics engines.


Yeah, I should have clarified: most games using Unity or UE4 are using PhysX, since that's what these two engines are using out of the box.

Most engines can use other physics engines. People have also used other engines in Unity

Yeah the GPU-accelerated part is what I was referring to. Doesn't seem like it makes much of a difference these days.

I think you're really misunderstanding how this works. The CPU handles pretty much all game logic. It needs to know the position of each car, enemy, bullet, whatever, so it can decide how to respond to things; if a car hits a player, it does damage. As a result it's impossible to hand off such calculations to the GPU, so all GPU-accelerated physics do is trivial visual-only effects like cloth and water physics or pieces of paper blowing in the wind. Basically anyone expecting GPU-accelerated physics that has logical relevance, such as physics-based destructible environments, is expecting too much. In other words, GPU-accelerated game logic physics was never the promise of PhysX. PhysX focused on making physics really easy and great from a software standpoint, and that has been an outstanding success.

The visual-only portion of GPU physics is not really that compelling which is why there isn't huge uptake. There would need to be a revolution in how games work on a fundamental level where basic game logic is calculated on the GPU to make true GPU physics happen. We might see that eventually but not anytime soon.


GPU-calculated physics works just fine when results are brought back to the CPU.

Raycasting for game logic is CPU-based, as you mention, because the game logic itself is on the CPU. Yet solvers and the true heavy lifting do work well on the GPU.

Except, and this is the true reason we see little GPU physics, no one has spare GPU room. Thus CPU-side physics wins for most games. Outside specific physics-focused games, giving up graphics for faster physics is not a profitable trade.

I say this as a gamedev myself who has several times made this exact decision.


The biggest problem with GPU physics is that it's a very difficult problem to tackle and you'll run into compatibility issues between the hardware vendors. It'll work on Nvidia hardware and won't work on AMD or vice versa.

Many games do have GPU room to spare, but since there are no good GPU-accelerated solutions for physics they don't have much of a choice.


> The biggest problem with GPU physics is that it's a very difficult problem to tackle

This is a completely meaningless statement.

> you'll run into compatibility issues between the hardware vendors

Those compatibility issues already exist in the form of DX or OpenGL drivers, and most games have to face them. Writing a sim in OpenCL would work on both Nvidia and AMD, and even on Integrated GPUs.

> Many games do have GPU room to spare

Many smaller games might, but most big games do not. And those games with GPU room to spare normally have CPU to spare.

>since there are no good GPU-accelerated solutions for physics

There is - PhysX.


>This is a completely meaningless statement.

How is "difficult to implement" a meaningless statement?

>Those compatibility issues already exist in the form of DX or OpenGL drivers, and most games have to face them. Writing a sim in OpenCL would work on both Nvidia and AMD, and even on Integrated GPUs.

With completely different performance characteristics and very difficult to diagnose bugs.

>Many smaller games might, but most big games do not. And those games with GPU room to spare normally have CPU to spare.

Maybe if you're talking about mainstream AAA single player titles, but many multiplayer titles tend to have CPU limits instead.

>There is - PhysX.

Which only works on nvidia hardware and is thus a useless solution.


That doesn't seem right?

https://docs.nvidia.com/gameworks/content/gameworkslibrary/p...

Says: The GPU rigid body feature provides GPU-accelerated implementations of:

- Broad Phase

- Contact generation

- Shape and body management

- Constraint solver


These are generally only used for the visual effects; you still need these features to make the visual effects look good.

> Broad Phase, Contact generation, Shape and body management, Constraint solver

These are absolutely not only used for the visual effects, they are the fundamentals of a physics engine.


The physics of the visual effect.


I am not familiar with PhysX - must you explicitly program with PhysX against the GPU?

No. Most games use it on the CPU, as the GPU accelerated implementation is currently only available on Nvidia cards

Because nobody used it. It's for the same reason Intel open sourced (or at least promised to, last I checked) Thunderbolt.

PhysX is used in both the Unity and Unreal engines, representing well over 50% of games made.

This is not true at all

Which part? If you're talking about units sold rather than games made, then sure, the most popular games are often on their own engines. But that's a different measure than "games made".

"Unreal Engine 4 uses the PhysX 3.3 physics engine to drive its physical simulation calculations and perform all collision calculations. "

https://docs.unrealengine.com/en-us/Engine/Physics

"As announced at GDC’14, Unity 5.0 features an upgrade to PhysX 3.3. Let’s give it a closer look."

https://blogs.unity3d.com/2014/07/08/high-performance-physic...

Most of the stats I can find say Unity is 50-60%, and UE is about 10%:

https://www.linkedin.com/pulse/unity-vs-unreal-engine-more-c...

"He said that Unity powers more than 50 percent of mobile games and about half of all PC games."

https://venturebeat.com/2017/05/25/why-unity-was-able-to-rai...


    # Disable implicit rules to speedup build
    .SUFFIXES:
    SUFFIXES :=
    %.out:
    %.a:
    ...

https://github.com/NVIDIAGameWorks/PhysX-3.4/blob/master/Phy...

Does this actually work? If it has tangible benefits then perhaps the AOSP could do the same.


It helps with debug-logging from make anyway (the -d flag), so you get just a few relevant lines of checked rules for each target, instead of a page of irrelevant stuff like "maybe in SCCS there's a .y file which can generate a .c file which can generate this file ... hmm no I guess not" ... for each target.

Come to think of it, it reduces the number of stat() system calls quite significantly, that can make a difference in large builds.


AOSP doesn't use makefiles anymore.

Yes we do. We just compile them to ninja graphs and use those to do the actual builds. For the moment, the actual input descriptions can (but don't have to) come in makefile form.

The build has used ninja for a long time now; the "Makefiles" aren't used with make anymore and instead use kati, and there is a tree-wide effort to convert any makefiles into Soong specs.

Yes, but makefiles still exist, and they still provide build configuration. That they're off to the side during most incremental builds is a separate matter.

Rad, I'll have to dig into the source for this! Does PhysX mainly implement PBD like all the papers from Mueller they publish?

Does anyone know about the new "Temporal" Gauss-Seidel solver in 4.0? I can't find any reference on it.

How does it compare to Bullet Physics?

Doesn't it still depend on CUDA for hardware acceleration? For it to become truly open, it should be untied from CUDA first.

So far it looks more like a way to advance CUDA usage even further, by giving a free higher level library that's locked into it.


> Doesn't it still depend on CUDA for hardware acceleration?

Assuming by hardware you mean GPU acceleration, then yes. It's not really a push for CUDA usage; most games that use PhysX don't use the GPU acceleration (not everyone has PhysX-capable cards, and those that do are usually busy using the GPU for rendering), so in practice, for 99% of use cases, it is open.


Modern cards commonly have rendering and compute pipelines available to be used in parallel. So in theory nothing stops games from using compute queues for physics.

The issue is that CUDA, unlike OpenCL, is only available on Nvidia hardware.


Yep, that's nice. There should be more of a push to dislodge CUDA's grip on the AI industry.

This is a logical step to get PhysX used by AI researchers.

As far as I know, PhysX doesn't offer x64 libs unless you pay. I couldn't even find a torrent. But this was years ago so it could've changed.

Given that it's now open source, you can compile your own. They have provided x64 libs for at least the last few years though.

That video... I see huge potential for "Twitch Plays Robotic Arms"! XD Someone, please do this.

Too bad they will never open source their drivers or help nouveau instead. The only hope is if they go bankrupt.

> never ... help nouveau

https://lwn.net/Articles/568038/


1) Those documents are no longer available (or the link is broken)

2) The contribution was actually not very helpful for nouveau; much of the information was already known. Nvidia has done practically nothing since then to support nouveau in any meaningful way, with the exception of a one-off patchset to implement some support for Tegra.

I suspect you just found the first link that seemed to validate the point you are trying to make, without actually understanding what is actually (not) going on.


1) The link is broken:

http://download.nvidia.com/open-gpu-doc/DCB/1/DCB-4.0-Specif...

> Nvidia has done practically nothing since then to support nouveau in any meaningful way

There's an awful lot of stuff in that directory for that to be the case.

> suspect you just found the first link that seemed to validate the point you are trying to make, without actually understanding what is actually (not) going on.

Funny, I think the same thing when people say "nvidia never helps nouveau."


Good thing nvidia hardware has not changed one bit since they released that limited amount of information in 2013!

There's a reason I said "that directory" not "that file."

Cutting through the sarcasm, here's some Volta info: https://download.nvidia.com/open-gpu-doc/Display-Ref-Manuals...

As reported on in phoronix:

https://www.phoronix.com/scan.php?page=news_item&px=NVIDIA-V...

https://www.phoronix.com/scan.php?page=news_item&px=NVIDIA-V...


"Before getting too excited, this is strictly about the display hardware and not about the 3D engine, etc." - from the link you gave. So still a show but not a real help.

that was 1 doc out of many on http://download.nvidia.com/open-gpu-doc/

multiple docs have been updated in this past year


It happened once, 5 years ago. And it's not a big deal anyway.

It is only a small piece of the documentation Nouveau developers need. It looks more like a show than genuine help. Also, it is a link from 2013...

Isn't the current problem that modern cards will never exit a very low power boot up state with dogshit performance since that requires a signed firmware of some sort?

I think that sums it up nicely. Nvidia wants the open-source driver to work just well enough that you can get a picture in order to install the proprietary one.


Maybe when Intel throws in their discrete GPUs with open drivers, Nvidia will stop jerking around and collaborate with nouveau developers.

Currently Nvidia are complete jerks when it comes to helping nouveau with documenting their hardware.


Full disclosure: I work for Nvidia, but not on drivers. Many driver devs internally do actually contribute to nouveau as well. Please don't make baseless claims simply because you're angry.

Well, they're not doing a very good job then. I just upgraded to Ubuntu 18.10 on my machine with a Titan X Maxwell, and guess what? Booted to a black screen. Related to Nvidia's refusal to work with everyone else on Wayland support, no doubt.

The sad thing is that this experience is typical and expected for me by now. Nearly every time I install or upgrade Linux, I have to spend hours troubleshooting Nvidia driver issues. Cumulatively, several days of my life have been wasted on this. People are angry because Nvidia has made and continues to make bad decisions that result in a poor experience on Linux.


> Related to Nvidia's refusal to work with everyone else on Wayland support, no doubt.

This is more complicated than I can really comment on, but from my understanding it was not an issue of Nvidia's refusal to work on it so much as it was an issue of Nvidia not being allowed a seat at the table to discuss it. The Wayland protocol was effectively demanding a ground-up rewrite with no ability for compromise, purely because Nvidia being closed source meant they weren't entitled to an opinion. Which is... wow.

I'm sorry that's the typical experience you've had with the driver, though I'm a little surprised by that actually. I don't run X on Ubuntu, but I know there were some issues in the past where they were attempting to "smartly" configure the driver for certain setups and instead ended up causing headaches. That is really my main issue with Ubuntu in general, that they try to "help" you because they know best, and it's also one of the reasons I don't run it. I just use the runfile installer and let it auto-generate the base xconfig.


I'm not blaming developers. It's clearly Nvidia management's fault. Developers don't decide this.

And it's not baseless. Where is their documentation on reclocking for desktop GPUs? Once they hid it behind signed firmware, doing it from nouveau became a major pain, because it requires complete blind reverse engineering. All this effort could be spent on making the drivers better instead, if Nvidia had provided documentation.

It's the reason you can't use nouveau for gaming today on anything recent.


You talk about enabling signed firmware like it was done for a proprietary reason and not a massive security one. Take a look at the fake Pascal GPUs on eBay where people are flashing unsigned firmware to old Fermi cards to fool Windows into thinking they're actual Pascal cards...

That said, you're right that it's not good that the nouveau driver is so far behind the proprietary one, and I'm not trying to say that Nvidia isn't at fault for that, just that it's a more complicated issue than people tend to portray it as.

also:

http://download.nvidia.com/open-gpu-doc/MemoryTweakTable/1/M...

http://download.nvidia.com/open-gpu-doc/MemoryClockTable/1/M...


The problem is not just in the signed nature of the firmware, but in the fact that there is no normal way to control it to achieve needed results like card reclocking, which is required to run it in anything but minimal performance mode. And there is no documentation on any of this for different boards. What stopped Nvidia from giving access to these features and documenting it for developers? I don't buy the security argument. Reclocking works just fine through open AMD drivers for instance.

As Nouveau developers put it:

> Reclocking must be done in firmware. NVIDIA now requires signed firmware to access a lot of useful functionality. They will never release the firmware in a nice redistributable manner, so the avenues for implementing it become much harder:

> (a) Figure out a way to extract the firmware from their released drivers (harder than it sounds) and how to operate it to do the things we need

> (b) Find a bug in their firmware to use to load our own code into the secure environment (any such exploit would be patched, but once we have a version of the firmware that's exploitable with signatures, we can just keep loading it instead of whatever's the latest)

> Of course all that gets us is ... firmware which can toggle stuff GPU-side. Then we have to develop the scripts to actually perform the reclocking to pass on to the firmware. This is the hard part -- due to the wide variety of hardware, ram chips, etc there can be a lot of variation in those scripts. A single developer might only have 1% of the boards out there, but by fuzzing the vbios and seeing how the blob driver reacts, we can get much more significant coverage.

> As part of the signed-everything logic, the blob driver now also verifies that the VBIOS hasn't been tampered with, which means that developing reclocking scripts will require different techniques.

> Moral of the story... just get an Intel or AMD board and move on with life. NVIDIA has no interest in supporting open-source, and so if you want to support open-source, pick a company that aligns with this.

In the end, crippling reclocking can easily be seen as an anti-competitive stance against nouveau, to prevent it from competing with the blob.


First Intel needs to learn how to make discrete GPUs that are actually worthwhile to target.

Anyone should; it doesn't mean that Nvidia won't feel the pressure, which will affect their anti-competitive tactics.

This is correct. What's the point of open sourcing this library when you can't even use their cards on the open source operating system? From what I understand, it wouldn't be that difficult for them to allow the community to develop a proper driver for their devices on Linux; they just don't want to. I think it's fair to point out the hypocrisy of this announcement.

> What's the point of open sourcing this library when you can't even use their cards on the open source operating system?

The point is that these two are completely orthogonal. It's like saying: what's the point of writing open source software for an x86 when the x86 RTL is closed source.

With an open source PhysX library, you can make PhysX work in any environment.


> With an open source PhysX library, you can make PhysX work in any environment.

Not if their drivers on other environments don't support it. Consoles are all AMD, and Nvidia only has 3-4% more market share on PC, while Intel has around 70% of the market. That's a really low number of platforms for a proprietary piece of tech.


PhysX works on a CPU, and the GPU acceleration is optional.

It's actual garbage on the CPU though; it's made to be GPU accelerated. There are many more physics engines that aren't proprietary to certain hardware and run way better than PhysX.

> It's actual garbage on the CPU though; it's made to be GPU accelerated

Do you have a source for this? Plenty of AAA games ship with CPU PhysX only. My experience with GPU PhysX is that it's not worth the resource usage, and the overhead and limitations (e.g. to do with collision filtering) make it not really suitable for general purpose use.

> There are many more physics engines that aren't proprietary to certain hardware and run way better than PhysX

Again, PhysX isn't locked to any hardware. The GPU acceleration (which most games don't use) is locked to an NVidia GPU, but the CPU physics engine isn't. There is Havok and Bullet physics, both of which have pros and cons but aren't necessarily "better" than PhysX. If you have any numbers or sources to prove otherwise, I'd love to see them.


But does it support ray tracing at 4k 60 fps?


