Simulating fluids, fire, and smoke in real-time (andrewkchan.dev)
784 points by ibobev on Dec 19, 2023 | 169 comments



As a person who did a PhD in CFD, I must admit I had never encountered the vorticity confinement method or curl-noise turbulence. I guess you learn something new every day!

Also, in industrial CFD, where the Reynolds numbers are higher, you'd never want something like counteracting artificial dissipation of the numerical method by trying to apply noise. In fact, quite often people want artificial dissipation to stabilize high Re simulations! Guess the requirements in computer graphics are more inclined towards making something that looks right instead of getting the physics right.


> Guess the requirements in computer graphics are more inclined towards making something that looks right instead of getting the physics right.

The first rule of real-time computer graphics has essentially always been "Cheat as much as you can get away with (and usually, even if you can't)." Also, it doesn't even have to look right, it just has to look cool! =)


“Everything is smoke and mirrors in computer graphics - especially the smoke and mirrors!”


This was the big realization for me when I got into graphics - everything on the screen is a lie, and the gestalt is an even bigger lie. It feels similar to how I would imagine it feels to be a well-informed illusionist - the fun isn’t spoiled for me when seeing how the sausage is made - I just appreciate it on more levels.


My favorite example of the lie-concealed-within-the-lie is the ‘Half-Life Alyx Bottles’ thing.

Like, watch this video: https://www.youtube.com/watch?v=9XWxsJKpYYI

This is a whole story about how the liquid in the bottles ‘isn’t really there’ and how it’s not a ‘real physics simulation’ - all just completely ignoring that none of this is real.

There is a sense in which the bottles in Half-Life: Alyx are ‘fake’ - that they sort of have a magic painting on the outside of them that makes them look like they’re full of liquid and that they’re transparent. But there’s also a sense in which the bottles are real and the world outside them is fake. And another sense in which it’s all just tricks to decide what pixels should be what color 90 times a second.


I want to see that shader. How is sloshing implemented? Is the volume of the bottle computed on every frame?

Clearly, there's some sort of a physics simulation going on there, preserving the volume, some momentum, and taking gravity into account. That the result is being rendered over the shader pipeline rather than the triangle one doesn't make it any more or less "real" than the rest of the game. It's a lie only if the entire game is a lie.


Is it really doing any sloshing though? Isn't it "just" using a plane as the surface of the liquid? And then adding a bunch of other effects, like bubbles, to give the impression of sloshing?


This is the perfect example of what I meant. So many great quotes in this about the tricks being stupid and also Math. Also the acknowledgment that it’s not about getting it right it’s about getting you to believe it.


At some point every render-engine builder goes through the exercise of imagining purely physically-modeled photonic simulation. How soon one gives up on this computationally intractable task with limited marginal return on investment is a signifier of wisdom/exhaustion.

And, yes, I've gone way too far down this road in the past.


I have heard the hope expressed that quantum computers might solve that one day, but I'll believe it once I see it.

Till then, I have some hope that native support for raytracing on the GPU will allow for more possibilities ..


Not being a graphics person, is this what hardware ray tracing is, or is that something different?


Raytracing doesn't simulate light, it simulates a very primitive idea of light. There's no diffraction, no interference patterns. You can't simulate the double-slit experiment in a game engine, unless you explicitly program it.

Our universe has a surprising amount of detail. We can't even simulate the simplest molecular interactions fully. Even a collision of two hydrogen atoms is too hard - the required time and space resolution is insanely high, if not infinite.


And what’s more, it simulates a very primitive idea of matter! All geometry is mathematically precise, still made of flat triangles that only approximate curved surfaces, every edge and corner is infinitely sharp, and everything related to materials and interaction with light (diffuse shading, specular highlights, surface roughness, bumpiness, transparency/translucency, and so on) is still the simplest possible model that gives a somewhat plausible look, raytraced or not.


Btw there is some movement towards including wave-optics into graphics instead of traditional ray-optics: https://ssteinberg.xyz/2023/03/27/rtplt/


This is very cool. Thx for sharing


Raytracing simulates geometrical optics. That it doesn't take interference patterns into account is therefore a mathematical limitation and of course true, but irrelevant for most applications.

There are other effects (most notably volume scattering), which could be significant for the rendered image and which are simulatable with raytracing, but are usually neglected for various reasons, often because they are computationally expensive.


> You can't simulate the double-slit experiment in a game engine, unless you explicitly program it.

Afaik you can't even simulate double-slit in high-end offline VFX renderers without custom programming


I wanted to make a dry joke about renderers supporting the double slit experiment, but in a sense you beat me to it.


There sure has been a lot of slits simulated in Blender.


Even the ray tracing / path tracing is half-fake these days cause it's faster to upscale and interpolate frames with neural nets. But yeah in theory you can simulate light realistically


It’s still a model at the end of the day. Material properties like roughness are approximated with numerical values instead of being physical features.

Also light is REALLY complicated when you get close to a surface. A light simulation that properly handles refraction, diffraction, elastic and inelastic scattering, and anisotropic material properties would be very difficult to build and run. It’s much easier to use material values found from experimental results.


If I understood Feynman’s QED at all, light gets quite simple once you get close enough to the surface. ;) Wasn't the idea that everything's a mirror? It sounds like all the complexity comes entirely from all the surface variation - a cracked or ground up mirror is still a mirror at a smaller scale but has a complex aggregate behavior at a larger scale. Brian Greene's string theory talks also send the same message, more or less.


Sure, light gets quite simple as long as you can evaluate path integrals that integrate over the literally infinite possible paths that each contributing photon could possibly take!

Also, light may be simple but light interaction with electrons (ie. matter) is a very different story!


Don't the different phases from all the random paths more or less cancel out, and significant additions of phases only come from paths near the "classical" path? I wonder if this reduction would still be tractable on GPUs to simulate diffraction.


That’s what I remember from QED, the integrals all collapse to something that looks like a small finite-width Dirac impulse around the mirror direction. So the derivation is interesting and would be hard to simulate, but we can approximate the outcome computationally with extremely simple shortcuts. (Of course, with a long list of simplifying assumptions… some materials, some of the known effects, visible light, reasonable image resolutions, usually 3-channel colors, etc. etc.)

I just googled and there’s a ton of different papers on doing diffraction in the last 20 years, more than I expected. I watched a talk on this particular one last year: https://ssteinberg.xyz/2023/03/27/rtplt/


Orbital shapes and energies influence how light interacts with a material. Yes, once you get to QED, it’s simple. Before that is a whole bunch of layers of QFT to determine where the electrons are, and their energy. From that, there are many emergent behaviors.

Also QED is still a model. If you want to simulate every photon action, might as well build a universe.


Had the same realization that game engines were closer to theater stages than reality after a year of study. The term “scene” should have tipped me off to that fact sooner.


A VP at Nvidia has a cool (marketing) line about this, regarding frame generation of "fake" frames. Because DLSS is trained on "real" raytraced images, they are more real than conventional game graphics.

So the non-generated frames are the "fake frames".


> Also, it doesn't even have to look right, it just has to look cool! =)

The refrain is: Plausible. Not realistic.

Doesn't have to be correct. Just needs to lead the audience to accept it.

Yeah. And, cheat as much as you can. And, then just a little more ;)


Why not cheat? I'm not looking for realism in games, I'm looking for escapism and to have fun.


It's kinda cool when it feels real enough to be used as a gameplay mechanic, such as diverting rivers and building hydroelectric dams in the beaver engineering simulator Timberborn: https://www.gamedeveloper.com/design/deep-dive-timberborn-s-...


The reason not to cheat is that visual artifacts and bugs can snap you out of immersion. Think of realizing that you don't appear in a mirror or that throwing a torch into a dark corner doesn't light it up. Even without "bugs" people tend to find more beautiful and accurate things more immersive. So if you want escapism, graphics that match the physics that you are used to day-to-day can help you forget that you are in a simulation.

The reason to cheat is that we currently don't have the hardware or software techniques to physically simulate a virtual world in real-time.


I didn't mean to imply that cheating is bad. Indeed it's mandatory if you want a remotely real-time performance.


I don't think this is a great argument because everybody is looking for some level of realism in games, but you may want less than many others. Without any, you'd have no intuitive behaviors and the controls would make no sense.

I'm not saying this just to be pedantic - my point is that some people do want some games to have very high levels of realism.


Some of the best games I've played in my life were the text based and pixel art games in MS DOS. Your imagination then had to render or enhance the visuals, and it was pretty cool to come up with the pictures in your head.


I realize the thread started about graphics, but I didn't only mean realistic graphics when I referred to realism, because the comment I was replying to just said "realism in games". I do expect there's some degree of realism in your favorite text-based or pixel-art games as well. As you say, it's clear that a game doesn't need photorealistic graphics to be good, because most games do not.


I mean yes, but ultimately we want to be able to simulate ourselves as faithfully as possible to be able to understand our origins, which are likely simulated as well, right?


This reminds me of how the Quake-3-based game Tremulous has just two functions to simulate physics:

https://github.com/darklegion/tremulous/blob/master/src/game...


> ... the requirements in computer graphics are more inclined towards making something that looks right instead of getting the physics right.

That is exactly correct. That said, as something of a physics nerd (it was a big part of the EE curriculum in school) people often chuckled at me in the arcade pointing out things which were violating various principles of physics :-). And one of the fun things was listening to a translated interview with the physicist who worked on Mario World at Nintendo who stressed that while the physics of Mario's world were not the same as "real" physics, they were consistent and had rules just like real physics did, and how that was important for players to understand what they could and could not do in the game (and how they might solve a puzzle in the game).


Yes, consistency. Similar in scifi/fantasy, where an absurd conceit is believable if internally consistent, i.e. self-consistent. (I also call it principled) Too much ad hoc prevents suspension of disbelief.

Reactions are an important part of this, it's not just how good an actor is, but how other characters react to them. If they are ignored, it's like that character doesn't exist, has no effect, is inconsequential, doesn't matter... is not real.

In a fluid simulation, when water does not react to the shoreline, rocks in the water, columns supporting a pier etc, it is like the water (or the obstacles) don't exist. In this sense, the water (or the obstacles) "aren't real".

For example, the water in Sea of Thieves looks magnificent... but because it doesn't interact (it's procedural), it doesn't feel real. It isn't real.

A further problem here with water is that it's difficult to make a continuous fluid that is consistent and isn't realistic, because the basic principles of fluid are so very very simple: conservation of mass, conservation of momentum.
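
(Spelled out for constant density, those two principles are just ∇·u = 0 for mass and ∂u/∂t + (u·∇)u = -(1/ρ)∇p + ν∇²u + g for momentum - the incompressible Navier-Stokes equations; the bracketed list below is all variations on or additions to these.)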

[ Of course, you can still vary density, gravity. There are other effects beyond these: viscosity, surface tension. Also, compressible fluids, different Reynolds numbers, adiabatic effects, behaviour at non-everyday temperatures, pressures, scales, velocities, accelerations etc. ]

There are also simplifications, like depth-averaging (Shallow Water Equations aka St Venant), but again, it's hard to vary it and yet remain self-consistent.

Cellular methods - similar to Conway's Game of Life but for fluid - are maybe an exception, because they are self-consistent - but pretty far from "water", because they lack momentum (and aren't continuous).

The final issue is error: simulations are never perfectly self-consistent anyway, because they must discretize the continuous, which introduces error. In engineering, you can reduce this error with a finer grid, until it is too small to matter for your specific application. For computer graphics, it only needs to be "perceivably" self-consistent - and perhaps pixel size is one measure of this, for the appearance of consistency, in area/volume, acceleration, velocity, displacement (...though we can perceive sub-pixel displacement, e.g. an aliased line).


An imperceptible inconsistency can become perceptible if it is cumulative, such as mass not being conserved very slightly, but over time the amount of water changes significantly - one ad hoc solution is to redistribute the missing water (or remove surplus) everywhere, to compensate.


I think the curl noise paper is from 2007: https://www.cs.ubc.ca/~rbridson/docs/bridson-siggraph2007-cu...

I've used the basic idea from that paper to make a surprisingly decent program to create gas-giant planet textures: https://github.com/smcameron/gaseous-giganticus


Hey that paper references me. ;) I published basic curl noise a few years before that in a Siggraph course with Joe Kniss. Bridson’s very cool paper makes curl noise much more controllable by adding the ability to insert and design boundary conditions, in other words, you can “paint” the noise field and put objects into the noise field and have particles advect around them. Mine and Joe’s version was a turbulence field based on simply taking the curl of a noise field because curl has the property of being incompressible. Thought about it after watching some effects breakdown on X-men’s Nightcrawler teleport effect and they talked about using a fluid simulation IIRC to get the nice subtleties that incompressible “divergence-free” flows give you. BTW I don’t remember exactly how they calculated their turbulence, I have a memory of it being more complicated than curl noise, but maybe they came up with the idea, or maybe it predates X-men too; it’s a very simple idea based on known math, fun and effective for fake fluid simulation.

We liked to call it “curly noise” when we first did it, and I used it on some shots and shared it with other effects animators at DreamWorks at the time. Actually the very first name we used was “curl of noise” because the implementation was literally curl(noise(x)), but curly noise sounded better/cuter. Curly noise is neat because it’s static and analytic, so you can do a fluid-like looking animation sequence with every frame independently. You don’t need to simulate frame 99 in order to render frame 100, you can send all your frames to the farm to render independently. On the other hand, one thing that’s funny about curly noise is that it’s way more expensive to evaluate at a point in space than a voxel grid fluid update step, at least when using Perlin noise which is what I started with. (Curly noise was cheaper than using PDI’s (Nick Foster’s) production fluid solver at the time, but I think the Stam semi-Lagrangian advection thing started making its way around and generally changed things soon after that.)
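
In case it helps anyone reading along, here's a minimal 2D sketch of the curl(noise(x)) idea in C - the sum-of-sines potential is just a self-contained stand-in for real Perlin/simplex noise, and the constants are arbitrary. In 2D, taking the "curl" of a scalar potential psi means using the velocity (dpsi/dy, -dpsi/dx), which is divergence-free by construction:

    #include <math.h>
    #include <stdio.h>

    /* Stand-in for a real noise function (Perlin/simplex). Any smooth
       scalar field works; a sum of sines keeps the sketch self-contained. */
    static double noise2(double x, double y) {
        return sin(1.3 * x) * sin(1.7 * y) + 0.5 * sin(2.9 * x + 0.8 * y);
    }

    /* 2D "curl of noise": velocity = (d(psi)/dy, -d(psi)/dx), computed with
       central differences. The field is divergence-free, so particles
       advected through it swirl like an incompressible fluid. */
    static void curl_noise2(double x, double y, double *vx, double *vy) {
        const double eps = 1e-4;
        double dpsi_dx = (noise2(x + eps, y) - noise2(x - eps, y)) / (2.0 * eps);
        double dpsi_dy = (noise2(x, y + eps) - noise2(x, y - eps)) / (2.0 * eps);
        *vx = dpsi_dy;
        *vy = -dpsi_dx;
    }

    int main(void) {
        /* Advect one particle with forward Euler; each step only needs the
           analytic field, so frames really are independent of each other. */
        double x = 0.1, y = 0.2, dt = 0.05;
        for (int i = 0; i < 100; ++i) {
            double vx, vy;
            curl_noise2(x, y, &vx, &vy);
            x += dt * vx;
            y += dt * vy;
        }
        printf("particle ended at (%f, %f)\n", x, y);
        return 0;
    }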

BTW gaseous giganticus looks really neat! I browsed the simplex noise code for a minute and it looks gnarly, maybe more expensive than Perlin even?


> the simplex noise code for a minute and it looks gnarly, maybe more expensive than Perlin even?

In 2014 when I wrote it, 3D Perlin noise was still patent encumbered. Luckily at the same time I was working on it, reddit user KdotJPG posted a Java implementation of his Open Simplex noise algorithm on r/proceduralgeneration (different than Ken Perlin's Simplex noise), and I ported that to C. But yeah, I think Perlin is a little faster to compute. I think the patent expired just last year.

Also Jan Wedekind recently implemented something pretty similar to gaseous-giganticus, except instead of doing it on the CPU like I did, he managed to get it onto the GPU, described here: https://www.wedesoft.de/software/2023/03/20/procedural-globa...


reminds me of this - https://www.taron.de/forum/viewtopic.php?f=4&t=4

it's a painting program where the paint can be moved around with a similar fluid simulation


This is unfortunately a discussion I've had to have many times. "Why does your CFD software take hours to complete a simulation when Unreal Engine can do it in seconds" has been asked more than once.


Too bad they don't reference the actual inventor of vorticity confinement, Dr. John Steinhoff of the University of Tennessee Space Institute:

https://en.wikipedia.org/wiki/John_Steinhoff

https://en.wikipedia.org/wiki/Vorticity_confinement

And some papers:

https://www.researchgate.net/publication/239547604_Modificat...

https://www.researchgate.net/publication/265066926_Computati...


Maybe you're not the right person to ask, but I'll ask anyway: I would like to learn the basics of CFD not because I expect to do much CFD in life, but because I believe the stuff I would have to learn in order to be able to understand CFD is very useful in other domains.

The problem is my analysis is very weak. My knowledge of linear algebra, differential equations, numerical methods, and so on is limited to roughly the level of an introductory university course. What would you suggest as a good start?

I like reading, but I also like practical exercises. The books I tried to read before to get into CFD were lacking in practical exercises, and when I tried to invent my own exercises, I didn't manage to adapt them to my current level of knowledge, so they were either too hard or too easy. (Consequently, they didn't advance my understanding much.)



This does look really good at first glance. It seems like it uses mathematics that I'm not fully comfortable with yet -- but also takes a slow and intuitive enough approach that I may be able to infer the missing knowledge with some effort. I'll give it a shot! Big thanks.


You'll be able to understand the equations, I guess. The hard part is the numerical analysis: how do you prove your computations will 1/ reach a solution (badly managed computations will diverge and never reach any solution) and 2/ reach a solution that is close to reality?

For me that's the hard part (which I still don't get).

You could start with the Saint-Venant equations; although they look complicated, they're actually within reach. But you'll have to understand the physics behind them first (conservation of mass, of momentum, etc.)


Regarding your two questions, some terms you could look up are: Courant number, von Neumann stability analysis, Kolmogorov length and time scales
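
To make the first of those concrete with the textbook example (nothing specific to any particular solver): for 1D linear advection ∂u/∂t + a ∂u/∂x = 0, an explicit upwind scheme is stable only when the Courant number C = a·Δt/Δx is at most 1, i.e. information must not travel more than one grid cell per time step. Von Neumann stability analysis is how you derive bounds like that for a given scheme.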

With respect to 2, the standard industry practice is a mesh convergence study and comparing your solver's output to experimental data. Sadly, especially with Reynolds-averaged Navier-Stokes, there is no guarantee you'll get a physically correct solution.


Yeah, I know, but I still had no time to dive into the theory enough to get a correct intuition of how it works or why there's no guarantee of a physically correct solution... Fortunately, my colleagues are the professors at my university who teach these, so I'll find an answer :-)


Understand the equations, yes. However, I'm sufficiently out of practice that it takes me a lot of effort to. So I guess you could say I'm not fluent enough to grasp the meaning of these things as fast as I think I would need to in order to properly understand them.


I went to a talk by a guy who worked in this space in the film industry, and he said one of the funniest questions he ever got asked was "Can you make the water look more splashy?"


Was the PhD worth it in your opinion?


From a purely economic standpoint, difficult to say. I was able to build skills that are very in demand in certain technical areas, and breaking into these areas is notoriously difficult otherwise. On the other hand, I earned peanuts for many years. It'll probably take some time for it to pay off.

That said, never do a PhD for economic reasons alone. It's a period in your life where you are given an opportunity to build up any idea you would like. I enjoyed that aspect very much and hence do not regret it one bit.

On the other hand, I also found out that academia sucks so I now work in a completely different field. Unfortunately it's very difficult to find out whether you'll like academia short of doing a PhD, so you should always go into it with a plan B in the back of your head.


I feel this comment. I did a masters with a thesis option because I was not hurting for money with TA and side business income, so figured I could take the extra year (it was an accelerated masters). Loved being able to work in that heady material, but disliked some parts of the academic environment. Was glad I could see it with less time and stress than a PhD. Even so, I still never say never for a PhD, but it’d have to be a perfect confluence of conditions.


> On the other hand, I also found out that academia sucks

I also found that I don't want to be a good academic, but a good researcher. Most importantly, that these are two very different things.


What does worth it mean? FWIW PhDs are really more or less required if you want to teach at a university, or do academic research, or do private/industrial research. Most companies that hire researchers look primarily for PhDs. The other reason would be to get to have 5 years to explore and study some subject in depth and become an expert, but that tends to go hand-in-hand with wanting to do research as a career. If you don’t want to teach college or do research, then PhD might be a questionable choice, and if you do it’s not really a choice, it’s just a prerequisite.

The two sibling comments both mention the economics isn’t guaranteed - and of course nothing is guaranteed on a case by case basis. However, you should be aware that people with PhDs in the US earn an average of around 1.5x more than people with bachelor’s degrees, and the income average for advanced degrees is around 3x higher than people without a 4-year degree. Patents are pretty well dominated by people with PhDs. There’s lots of stats on all this, but I learned it specifically from a report by the St Louis Fed that examined both the incomes as well as the savings rates for people with different educational attainment levels, and also looked at the changing trends over the years, and the differences by race. Statistically speaking, the economics absolutely favor getting an advanced degree of some kind, whether it’s a PhD or MD or JD or whatever.


Hard to say whether economically a PhD always makes sense, but it certainly can open doors that are otherwise firmly closed.


Well, sure. Everything you end up doing in life will open doors that would have otherwise remained firmly closed if you did something else instead.


"Everything you end up doing in life will open doors that would have otherwise remained firmly closed"

Oh no. It is also quite possible to do things that will very firmly close doors that were open before, or to make sure some doors will never open ..

(quite some doors are also not worth going through)


Of course. In fact, it is widely recognized that in a poor economy PhDs can be in an especially bad economic place as any jobs that are available deem them "overqualified".


I wrote a super simple flame simulation a long time ago as a toy in C after reading an article somewhere.

You just set each pixel’s brightness to be the average brightness of the immediately adjacent pixels. Calculate from bottom to top.

Add a few “hot” pixels moving back and forth along the bottom and boom, instant fire.

Looks very cool for a tiny amount of code and no calculus. :)
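
A minimal sketch of the idea in C (not the original program - the dimensions, cooling constant, and ASCII shading here are just illustrative guesses):

    #include <stdio.h>
    #include <stdlib.h>

    #define W 60
    #define H 20

    int main(void) {
        static unsigned char heat[H][W] = {0};
        const char *shade = " .:*#@";

        for (int frame = 0; frame < 200; ++frame) {
            /* Seed a few random hot pixels along the bottom row. */
            for (int i = 0; i < 8; ++i)
                heat[H - 1][rand() % W] = 255;

            /* Sweep bottom to top: each cell becomes the average of the
               three cells below it and its own old value, minus a small
               cooling term so the flames die out with height. */
            for (int y = H - 2; y >= 0; --y)
                for (int x = 1; x < W - 1; ++x) {
                    int avg = (heat[y + 1][x - 1] + heat[y + 1][x] +
                               heat[y + 1][x + 1] + heat[y][x]) / 4;
                    heat[y][x] = (unsigned char)(avg > 2 ? avg - 2 : 0);
                }

            /* Print only the last frame, to keep the sketch quiet. */
            if (frame == 199)
                for (int y = 0; y < H; ++y) {
                    for (int x = 0; x < W; ++x)
                        putchar(shade[heat[y][x] * 5 / 255]);
                    putchar('\n');
                }
        }
        return 0;
    }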


This is the Laplacian operator (in 1D, just the second derivative, or curvature). The sharper the crest, the more negative; the sharper the trough, the more positive. If you change the value there by that much, the effect is averaging (and the discretized form is literally averaging).
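
Concretely, in 1D (with unit pixel spacing) averaging the two neighbours is

    f_new(x) = (f(x-1) + f(x+1)) / 2
             = f(x) + (1/2) * (f(x-1) - 2*f(x) + f(x+1))
             ≈ f(x) + (1/2) * f''(x),

i.e. one explicit Euler step of the heat/diffusion equation, which is why the result looks like a smooth, smoky blur.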

You've been doing calculus the whole time. There's a difference between knowing the path, and walking the path.

Here's a 3Blue1Brown video with intuitive graphics on it https://youtube.com/watch?v=ToIXSwZ1pJU


Bah! I knew it! :)


> no calculus

"set each pixel’s brightness to be the average brightness of the immediately adjacent pixels" sounds like a convolution ;)


It's a trick! Calculus puts me right off, but explain it in code and visualizable ways and I'm on board.


Could you share the repo?


I will, but I need to dig it out of my archives. It was from a time before repos. :)


They mention simulating fire and smoke for games, and doing fluid simulations on the GPU. Something I’ve never understood, if these effects are to run in a game, isn’t the GPU already busy? It seems like running a CFD problem and rendering at the same time is a lot.

Can this stuff run on an iGPU while the dGPU is doing more rendering-related tasks? Or are iGPUs just too weak, better to fall all the way down to the CPU.


> Something I’ve never understood, if these effects are to run in a game, isn’t the GPU already busy?

Short answer: No, it's not "already busy". GPUs are so powerful now that you can do physics, fancy render passes, fluid sims, "Game AI" unit pathing, and more, at 100+ FPS.

Long answer: You have a "frame budget" which is the amount of time between rendering the super fast "slide show" of frames at 60+ FPS. This gives you between 5 and 30 ms to do a bunch of computation to get the results you need to compute state and render the next frame.

That could be moving units around a map, calculating fire physics, blitting terrain textures, rendering verts with materials. In many game engines, you will see a GPU doing dozens of these separate computations per frame.

A GPU is basically just a secondary computer attached to your main computer. You give it a bunch of jobs to do every frame and it outputs the results. You combine the results into something that looks like a game.
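
A minimal sketch of the frame-budget idea in C (the 60 FPS target and the stubbed-out work function are illustrative, not from any particular engine):

    #include <stdio.h>
    #include <time.h>

    /* Stand-in for the per-frame jobs listed above: physics, fluid sim,
       pathing, render passes, etc. */
    static void do_frame_work(void) { /* ... */ }

    static double now_ms(void) {
        struct timespec ts;
        clock_gettime(CLOCK_MONOTONIC, &ts);
        return ts.tv_sec * 1000.0 + ts.tv_nsec / 1e6;
    }

    int main(void) {
        const double budget_ms = 1000.0 / 60.0;  /* ~16.7 ms per frame */
        for (int frame = 0; frame < 600; ++frame) {
            double start = now_ms();
            do_frame_work();
            double elapsed = now_ms() - start;
            if (elapsed > budget_ms)
                printf("frame %d blew the budget: %.2f ms\n", frame, elapsed);
        }
        return 0;
    }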

> Can this stuff run on an iGPU while the dGPU is doing more rendering-related tasks?

Almost no one is using the iGPU for anything. It's completely ignored because it's usually completely useless compared to your main discrete GPU.


In theory it's perfectly possible to do all you describe in 8ms (i.e. VR render times). In reality, we're:

1. still far from properly utilizing modern graphics APIs as is. Some of the largest studios are close, but knowledge is tight-lipped in the industry.

2. even when those top studios can/do, they choose to focus more of the budget on higher render resolution over adding more logic or simulation. Makes for superficially better looking games to help sell.

3. and of course there are other expensive factors right now with more attention like Ray traced lighting which can only be optimized so much on current hardware.

I'd really love to see what the AA or maybe even indie market can do with such techniques one day. I don't have much faith that AAA studios will ever prioritize simulation.


If you enjoy simulation on that level, you might find Star Citizen [0] or the latest updates to No Man's Sky [1] interesting.

You're right that simulation is often an afterthought. Most games prioritize narrative, combat, artist's vision and high fidelity environments over complex systems you can interact with. There are a few outliers though.

Alternatively, you get so far into the simulation side of things with stuff like Dwarf Fortress [2] and all visual fidelity is thrown away for the sake of prioritizing simulation complexity!

[0] https://robertsspaceindustries.com/

[1] https://www.nomanssky.com/

[2] https://www.bay12games.com/dwarves/


Simulation is often already done. Except it’s done offline and then replayed in the game over and over.


> No one is using the iGPU for anything. It's completely ignored because it's usually completely useless compared to your main discrete GPU.

Modern iGPUs are actually quite powerful is my understanding. I think the reason no one does this is that the software model isn’t actually there/standardized/able to work cross vendor since the iGPU and the discrete card are going to be different vendors typically. There’s also little motivation to do this because not everyone has an iGPU which dilutes the economy of scale of using it.

It would be a neat idea to try to run lighter weight things on the iGPU to free up rendering time on the dGPU and make frame rates more consistent, but the incentives aren’t there.


I agree the incentives aren't there. Also agree that it is possible to use the integrated GPU for light tasks, but only light tasks.

In the high performance scenarios where there are all three (discrete GPU, integrated GPU, and CPU) and we try to use the integrated GPU alongside the CPU, it often causes thermal throttling on the shared die between iGPU and CPU.

This keeps the CPU from executing well, keeping up with state changes, and sending the data needed to keep the discrete GPU utilized. In short: don't warm up the CPU, we want it to stay cool; if that means not doing iGPU stuff, don't do it.

When we have multiple discrete GPUs available (render farm), this on-die thermal bottleneck goes away and there are many render pipelines that are made to handle hundreds, even thousands of simultaneous GPUs working on a shared problem set of diverse tasks, similar to trying to utilize both iGPU and dGPU on the same machine but bigger.

Whether or not to use the iGPU is less about scheduling and more about thermal throttling.


That’s probably a better point as to why it’s not invested in although most games are not CPU bound so thermal throttling wouldn’t apply then as much. I think it’s multiple factors combined.

The render pipelines you refer to are all offline non-realtime rendering though for movies/animation/etc right? Somewhat different UX and problem space than realtime gaming.


It still annoys me how badly AMD botched their Fusion/HSA concept, leading iGPUs to be completely ignored in most cases.


I actually have my desktop AMD iGPU enabled despite using a discrete Nvidia card. I use the iGPU to do AI noise reduction for my mic in voice and video calls. I'm not sure if this is really an ideal setup, having both GPUs enabled with both driver packages installed (as compared to just running Nvidias noise reduction on the dGPU, I guess,) but it seems to all work with no issue. The onboard HDMI can even be used at the same time as the discrete cards monitor ports.


It looks like the GPU is doing most of the work… from that point of view when do we start to wonder if the GPU can “offload” anything to the whole computer that is hanging off of it, haha.


> It looks like the GPU is doing most of the work

Yes. The GPU is doing most of the work in a lot of modern games.

It isn't great at everything though, and there are limitations due to its architecture being structured almost solely for the purpose of computing massively parallel instructions.

> when do we start to wonder if the GPU can “offload” anything to the whole computer that is hanging off of it

The main bottleneck for speed on most teams is not having enough "GPU devs" to move stuff off the CPU and onto the GPU. Many games suffer in performance due to folks not knowing how to use the GPU properly.

Because of this, nVidia/AMD invest heavily in making general purpose compute easier and easier on the GPU. The successes they have had in doing this over the last decade are nothing less than staggering.

Ultimately, the way it's looking, GPUs are trying to become good at everything the CPU does and then some. We already have modern cloud server architectures that are 90% GPU and 10% CPU as a complete SoC.

Eventually, the CPU may cease to exist entirely as its fundamental design becomes obsolete. This is usually called a GPGPU in modern server infrastructure.


I’m pretty sure CPUs destroy GPUs at sequential programming and most programs are written in a sequential style. Not sure where the 90/10 claim comes from but there’s plenty of cloud servers with no GPU installed whatsoever and 0 servers without a CPU.


Yup, and until we get a truly general purpose compute GPU that can handle both styles of instruction with automated multi-threading and state management, this will continue.

What I've seen shows me that nVidia is working very hard to eliminate this gap though. General purpose computing on the GPU has never been easier, and it gets better every year.

In my opinion, it's only a matter of time before we can run anything we want on the GPU and realize various speed gains.

As for where the 90/10 comes from, it's from the emerging architectures for advanced AI/graphics compute like the DGX H100 [0].

[0] https://www.nvidia.com/en-us/data-center/dgx-h100/


AI is different. Those servers are set up to run AI jobs & nothing else. That’s still a small fraction of overall cloud machines at the moment. Even if in volume they overtake, that’s just because of the huge surge in demand for AI * the compute requirements associated with it eclipsing the compute requirements for “traditional” cloud compute that is used to keep businesses running. I don’t think you’ll see GPUs running things like databases or the Linux kernel. GPUs may even come with embedded ARM CPUs to run the kernel & only run AI tasks as part of the package as a cost reduction, but I think that’ll take a very long time because you have to figure out how to do cotenancy. It’ll depend on if the CPU remains a huge unnecessary cost for AI servers. I doubt that GPUs will get much better at sequential tasks because it’s an essential programming tradeoff (e.g. it’s the same reason you don’t see everything written in SIMD as SIMD is much closer to GPU-style programming than the more general sequential style)


> Eventually, the CPU may cease to exist as its fundamental design becomes obsolete. This is usually called a GPGPU in modern server infrastructure.

There’s no reason yet to think CPU designs are becoming obsolete. SISD (Single Instruction, Single Data) is the CPU core model and it’s easier to program and does lots of things that you don’t want to use SIMD for. SISD is good for heterogenous workloads, and SIMD is good for homogeneous workloads.

I thought GPGPU was waning these days. That term was used a lot during the period when people were ‘hacking’ GPUs to do general compute when the APIs like OpenGL didn’t offer general programmable computation. Today with CUDA, and compute shaders in every major API, it’s a given that GPUs are for general purpose computation, and it’s even becoming an anachronism that the G in GPU stands for graphics. My soft prediction is that GPU might get a new name & acronym soon that doesn’t have “graphics” in it.


This is somewhat reassuring. A decade ago when clock frequencies had stopped increasing and core count started to increase I predicted that the future was massively multicore.

Then the core count stopped increasing too -- except only if you look in the wrong place! It has in CPUs, but they moved to GPUs.


SIMD parallelism has been improving on the CPU too – although the number of lanes hasn’t increased that much since the SSE days (128 to 512 bits), the spectrum of available vector instructions has grown a lot. And being able to do eight or sixteen operations at the price of one is certainly nothing to scoff at. Autovectorization is a really hard problem, though, and manual vectorization is still something of a dark art, especially due to the scarcity of good abstractions. Programming with cryptically named, architecture-specific intrinsics and doing manual feature detection is not fun.
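
For illustration, roughly what "eight operations at the price of one" looks like with those intrinsics - a minimal AVX sketch (the function name and loop structure are just for the example; compile with -mavx or -march=native):

    #include <immintrin.h>

    /* y[i] += a * x[i], eight floats per iteration, scalar tail at the end. */
    void saxpy(float a, const float *x, float *y, int n) {
        __m256 va = _mm256_set1_ps(a);
        int i = 0;
        for (; i + 8 <= n; i += 8) {
            __m256 vx = _mm256_loadu_ps(x + i);
            __m256 vy = _mm256_loadu_ps(y + i);
            vy = _mm256_add_ps(vy, _mm256_mul_ps(va, vx));
            _mm256_storeu_ps(y + i, vy);
        }
        for (; i < n; ++i)  /* scalar remainder */
            y[i] += a * x[i];
    }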


A game is more than just rendering, and modern games will absolutely get bottlenecked on lower-end CPUs well before you reach say 144 fps. GamersNexus has done a bunch of videos on the topic.


You are not wrong that there are many games that are bottle-necked on lower end CPUs.

I would argue that for many CPU bound games, they could find better ways to utilize the GPU for computation and it is likely they just didn't have the knowledge, time, or budget to do so.

It's easier to write CPU code, every programmer can do it, so it's the most often reached for tool.

Also, at high frame rates, the bottleneck is frequently the CPU due to it not feeding the GPU fast enough, so you lose frames. There is definitely a real world requirement of having a fast enough CPU to properly utilize a high end video card, even if it's just for shoving command buffers and nothing else.


Now that LLMs run on GPU too, future GPUs will need to juggle between the graphics, the physics and the AI for NPCs. Fun times trying to balance all that.

My guess is that the load will become more and more shared between local and remote computing resources.


Maybe in the future, personal computers will have more than one GPU, one for graphics and one for AI?


Many computers already have 2 GPUs, one integrated into the CPU die, and one external (and typically enormously more powerful).

To my knowledge though it's very rare for software to take advantage of this.


In high performance scenarios, the GPU is running full blast while the CPU is running at full blast just feeding data and pre-process work to the GPU.

The GPU is the steam engine hurtling forward, the CPU is just the person shoveling coal into the furnace.

Using the integrated GPU heats up the main die where the CPU is because they live together on the same chip. The die heats up, CPU thermal throttles, CPU stops efficiently feeding data to the GPU at max speed, GPU slows down from under utilization.

In high performance scenarios, the integrated GPU is often a waste of thermal budget.


Doesn't this assume inadequate cooling? A quick google indicates AMD's X3D CPUs begin throttling around 89°C, and that it's not overly challenging to keep them below 80 even under intense CPU load, although that's presumably without any activity on the integrated GPU.

Assuming cooling really is inadequate for running both the CPU cores and the integrated GPU: for GPU-friendly workloads (i.e. no GPU-unfriendly preprocessing operations for the CPU) it would surely make more sense to use the integrated GPU rather than spend the thermal budget having the CPU cores do that work.


What the specs say they'll do and what they actually do are often very different realities in my experience.

I've seen thermal throttling happening at 60°C because overall, the chip is cool, but one core is maxing or two cores are maxing. Which is common in game dev with your primary thread feeding a GPU with command buffer queues and another scheduling the main game loop.

Even with water cooling or the high-end air cooling on my server blades, I see that long term, the system just hits a trade-off point of ~60-70°C and ~85% max CPU clock, even when the cooling system is industry grade, loud as hell, and has an HVAC unit backing it. Probably part of why scaling out to distribute load is so popular.

When I give real work to any iGPU's on these systems, I see the temps bump 5-10°C and clocks on the CPU cores drop a bit. Could be drivers, could be temp curves, I would think these fancy cooling systems I'm running are performing well though. shrug


What do you mean in the future? Multiple GPU chips is already pretty common (on the same card or having multiple cards at the same time). Besides, GPUs are massively parallel chips as well, with specialized units for graphics and AI operations.


Welp. It used to be that PhysX math would run on a dedicated GPU of choice. I remember assigning or realising this during the Red Faction game with forever-destructing walls.

Almost Minecraft, but with rocket launchers on mars


I remember PhysX, but my feeling at the time was “yeah I guess NVIDIA would love for me to buy two graphics cards.” On the other hand, processors without any iGPUs are pretty rare by now.


I'm fairly certain red faction didn't use Nvidia physX at all.


It didn’t!


You don't have to be gaming to use a GPU. Plenty of rendering software has a GPU mode now. But writing a GPU algo is often different from writing a CPU simulation algo, because it is highly parallelized.


EmberGen is absolutely crazy software that does simulation of fire and smoke in real-time on consumer GPUs, and supports a node-based workflow which makes it so easy to create new effects.

Seriously, my workflow probably went from spending hours on making something that now takes minutes to get right.

https://jangafx.com/software/embergen/

I was sure that this submission would be about EmberGen and I'm gonna be honest, I'm a bit sad EmberGen never really got traction on HN (https://hn.algolia.com/?dateRange=all&page=0&prefix=true&que...)

(Not affiliated with EmberGen/JangaFX, just a happy customer)



Odin is such a neat language.

I was equally impressed by Spall: https://odin-lang.org/showcase/spall/


Yes this is fantastic. One of the best software currently available in the market.


If you like this, you'll also enjoy Ten Minute Physics, especially chapter 17 "How to write an Eulerian Fluid Simulator with 200 lines of code."

https://matthias-research.github.io/pages/tenMinutePhysics/i...


This is very nice! Another person explaining this stuff is "10 Minute Physics"

https://matthias-research.github.io/pages/tenMinutePhysics/i...


Does anyone have any recommendations for a former math major turned SWE to get into CFD simulations? I find this material fascinating but it's been a while since I've done any vector calculus or PDEs so my math is very rusty.


If you're more interested in the physics simulation for research, I can't help ya. However, SideFX Houdini is tough to beat if you're going more for entertainment-focused simulations.

https://www.youtube.com/watch?v=zxiqA8_CiC4

Their free non-commercial "Apprentice" version is only limited in its rendering and collaboration capabilities. It's pretty... uh... deep though. Coming from software and moving into this industry, the workflow for learning these sorts of tools is totally different. Lots of people say Houdini is more like an IDE than a 3D modelling program, and I agree in many ways. Rather than using the visual tools like in, say, blender, it's almost entirely based on creating a network of nodes, and modifying attributes and parameters. You can do most stuff in Python more cleanly than others like 3ds Max, though it won't compile, so the performance is bad in big sims. Their own C-like language, vex, is competent, and there's even a more granular version of their node system for the finer work with more complex math and such. It's almost entirely a data-oriented workflow on the technical side.

However, if you're a "learn by reading the docs" type, you're going to have to learn to love tutorials rather quickly. It's very, very different from any environment or paradigm I've worked with, and the community at large, while generally friendly, suffers from the curse of expertise big-time.


Apply for jobs at a place like StarCCM or similar


I used to work at NASA (contractor) doing this. One thing, get a job in the field. If you really want to, apply to Marshall Space Flight Center and / or Ames (Top500: Aitken [#85, 9.07 PFlops], Pleiades [#132, 5.95 PFlops], Electra [#143, 5.44 PFlops] systems) on either the Federal or Contractor side. GRC, LARC, and JSC also have some. At least a few years ago, the Contractor / Federal integration was actually quite good, and nearly transparent (except for allocating money). These groups are both fairly well known at NASA (MSFC, Propulsion Structural, Thermal & Fluid Analysis [2][3], and Ames, Entry Systems [4])

When I was there we were running ~100 million cell (vehicle) or (vehicle + pad) Hybrid RANS/LES sims with moving overset grids, maybe 10-20 species reactive combustion chemistry (SSMEs and SRBs lighting together), and Lagrangian evaporative particle dynamics (launch water suppression) using Overflow/LARC [5], or Loci/Mississippi State University [6]. Note: My info's a decade out of date, so no idea what the SOTA is these days. More than this.

If you're interested in the field and what a lot of the industry is concerned about, then while old (2014), the CFD Vision 2030 Study is not a bad orientation. [7]

Try to get a ticket to Supercomputing and go wander around. This year was in Denver. [8] Their focus tends to be on "large" though, so you'll mostly get massive weather simulations and stellar cloud dynamics. I like the conference, yet it's difficult to get any notice unless you did #CPUs/#GPUs/#FPGAs++.

GOV Outside NASA: NIST (Gaithersburg), DOE (Oak Ridge, Sandia, Los Alamos), Air Force (AF Research Lab), Huntington Beach.

[1] NASA Advanced Supercomputing Division: https://en.wikipedia.org/wiki/NASA_Advanced_Supercomputing_D...

[2] NASA, MSFC: https://www.nasa.gov/wp-content/uploads/2016/01/g-28367g_pst...

[3] Somewhat old examples: https://ntrs.nasa.gov/api/citations/20140016892/downloads/20...

[4] NASA Ames: https://www.nasa.gov/entry-systems-and-technology-division/

[5] OVERFLOW/NASA/LARC: https://overflow.larc.nasa.gov/

[6] Loci/MSU: https://simcenter.msstate.edu/

[7] CFD Vision 2030 Study: https://ntrs.nasa.gov/api/citations/20140003093/downloads/20...

[8] Supercomputing SC23: https://sc23.supercomputing.org/


This is very helpful, thank you. I tried looking at a few job pages on those links but can't seem to find any open SWE positions. A couple of follow-up questions:

1. What am I looking for in a job description that would indicate i'd be working on CFD simulations?

2. What would make me a compelling applicant? This seems very different than the distributed systems/recommendation system work that I have a lot of experience in.

Thanks again!


1. You should look for jobs that mention terms like computational fluid dynamics (CFD), finite volume methods (FVM), and other numerical methods. It's difficult in a short post to list everything, but terms like large eddy simulation (LES), Reynolds-averaged Navier-Stokes (RANS), structured grids, unstructured grids, etc. would be helpful. The numerical methods used are quite vast and depend highly on the particular application.

2. A background in distributed systems is actually very good for this field, but high-performance computing (HPC) is a bit different than other kinds of distributed computing, largely because if a node fails or communication breaks down in some way, it is easier to just kill the job. There's no need to make it fail gracefully. Some of your distributed computing knowledge will be extremely valuable but other parts will never come up.

I suggest taking some time and getting familiar with Message Passing Interface (MPI). It is the standard used for distributed memory programming in HPC applications. Almost all of these programs use MPI at least in part. There are several good tutorials online. I recommend [1]. A strong background in MPI is invaluable in a lot of these codes, especially if you can master non-blocking communication.
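
To give a flavour of what that looks like, here is a minimal non-blocking halo-exchange sketch in C (the 1D domain decomposition and buffer size are made up for illustration; real codes do the same dance in 2D/3D):

    #include <mpi.h>

    #define N 1024  /* interior cells per rank, illustrative */

    int main(int argc, char **argv) {
        MPI_Init(&argc, &argv);
        int rank, size;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        /* u[0] and u[N+1] are ghost cells holding neighbour data. */
        double u[N + 2] = {0};
        int left  = (rank > 0)        ? rank - 1 : MPI_PROC_NULL;
        int right = (rank < size - 1) ? rank + 1 : MPI_PROC_NULL;

        MPI_Request req[4];
        /* Post non-blocking receives for the ghost cells... */
        MPI_Irecv(&u[0],     1, MPI_DOUBLE, left,  0, MPI_COMM_WORLD, &req[0]);
        MPI_Irecv(&u[N + 1], 1, MPI_DOUBLE, right, 1, MPI_COMM_WORLD, &req[1]);
        /* ...and non-blocking sends of our own boundary cells. */
        MPI_Isend(&u[1], 1, MPI_DOUBLE, left,  1, MPI_COMM_WORLD, &req[2]);
        MPI_Isend(&u[N], 1, MPI_DOUBLE, right, 0, MPI_COMM_WORLD, &req[3]);

        /* Interior work that doesn't need the ghost cells can go here,
           overlapping communication with computation. */

        MPI_Waitall(4, req, MPI_STATUSES_IGNORE);

        /* Ghost cells are now valid; boundary stencils can run. */

        MPI_Finalize();
        return 0;
    }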

Learning Fortran may help with some projects and be useless for others. It might be worth the time to at least learn some so that the terminology is not foreign.

I'd also add that many of these jobs are difficult to get even if you have a PhD in the field. Many projects only hire when there is a real need for a new person (or when somebody retires, etc.). Timing the market is an unfortunate reality here. Good luck!

[1] https://mpitutorial.com/tutorials/


Thank you for the additional info @atrettel. Quite accurate in terms of what I remember of the field.

On the NASA perspective, mostly use Linux qsub with MPI. An example of such is:

https://www.nas.nasa.gov/hecc/support/kb/running-multiple-sm...


I hate making two posts, but I remembered there is also a CFD industry job board that you may find useful:

https://www.cfd-online.com/Jobs/

Many of the positions unfortunately are for grad students and postdocs, but there are a lot of CFD software developer positions posted here. It is a good place to check periodically to see what is available.

Good luck!


both posts are very much appreciated, thank you


Glad you found the original post helpful, and thank you to @atrettel for the additional posts, the job board was especially helpful. Sorry for the delay in reply; as HN has no "waiting replies" notification, I only noticed based on the number change.

From looking over the jobs @atrettel linked to, one other point is that there are actually a lot of different fields other than the "norm". There tends to be a stereotype that it's all rockets, airplanes, and cars, yet there's a bunch of others.

Healthcare CFD (Worcester Polytechnic Institute, Ohio State University), Food CFD (University of Michigan-Dearborn), Ocean CFD (Louisiana State University), Plasma CFD (University of Texas at Austin), Civil CFD (UC Berkeley), Petroleum CFD (Pennsylvania State University), Environmental/Pollution CFD (New Jersey Institute of Technology), Weather/Hurricane CFD (Florida International University), Mining/USGS CFD (Colorado School of Mines), CUDA/GPU CFD (Altech LLC), and a bunch of others.

So being from a certain background should not be a worry. There's a lot of industries to lateral in from. The only issue is the limited job pool. However, I started at NASA simply writing software (Perl, FORTRAN, Linux), with a background and knowledge of CFD.

Also, there are definitely jobs (12/24/2023):

Glassdoor, 780: https://www.glassdoor.com/Job/united-states-cfd-engineer-job...

LinkedIn, 50000: https://www.linkedin.com/jobs/computational-fluid-dynamics-j...

Indeed, 800: https://www.indeed.com/q-Computational-Fluid-Dynamics-jobs.h...


If you need a refresher on diff eq... https://www.youtube.com/watch?v=ly4S0oi3Yz8


Clicked hoping for interactive examples. Did not disappoint. Here's an internet point, friend. Thank you for your service.


Bridson's course notes also cover most of this https://www.cs.ubc.ca/~rbridson/fluidsimulation/fluids_notes... There's a second part by Muller, who's also very active in the area: https://matthias-research.github.io/pages/tenMinutePhysics/i...

It became a book, now in 2ed: Fluid Simulation for Computer Graphics


I recently watched this video about implementing a simple fluid simulation and found it quite interesting: https://www.youtube.com/watch?v=rSKMYc1CQHE


Good explanation why CG explosions suck: https://www.youtube.com/watch?v=fPb7yUPcKhk


While not the main point of the article, the introductory premise is a bit off IMO. When you choose simulation, you trade artistic control for painful negotiation via an (often overwhelming) amount of controls.

For a key scene like the balrog, you will probably never decide for simulation and go for full control on every frame instead.

To stay in the tolkien fantasy example, a landscape scene with a nice river, doing a lot of bends, some rocks inside, the occasional happy fish jumping out of water - that would suit a simulation much better.


Not impossible to do both, but very few tools are built where simulation comes for "free". At least not free and real-time as of now. Maybe one day.

But I agree with you. You'd ideally reserve key moments for full control and leave simulation for the background polish (assuming your work isn't bound by a physically realistic world).


I'm really impressed by the output of the distill.pub template and page building system used for this article. It's a shame it was abandoned in 2021 and is no longer maintained.


I've never seen (x, t) anywhere other than the Schrödinger equation - seeing it used here for a dye density function was weird, but it makes sense!


I think I'm probably smart enough to follow this, but maths notation is so opaque that I get stuck the moment the greek letters come out.


You may want to check out Sean Carroll's book "The Biggest Ideas in the Universe" vol. 1. He explains a bit of calculus, how to read and understand equations etc. Basically, this book contains a crash course in physics-related math smeared out across a few chapters. Perfect for noobs like us.


It looks to me like you did not add gamma correction to the final color?


does not work on iOS for me.


It works fine on iOS/safari for me, in the sense that the text is all readable, but the simulations don’t seem to all execute. Which… given that other folks are reporting that the site is slowed to a crawl, I guess because it is running the simulations, I’ll take the iOS experience.


it blows my mind that cfd is a thing we can do in real time on a pc


Yes, we have been able to for a long time. They were already a thing 14 years ago on the PS3 in the game "PixelJunk Shooter" [0].

But real time simulations often use massive simplifications. They aim to look real, not to match exact solutions of the Navier-Stokes equations.

[0] https://www.youtube.com/watch?v=qQkvlxLV6sI


Heck, the Wii had Fluidity around the same time period (2010), and that's a lot weaker than the PS3. Fluidity was pretty neat—you played as a volume of water, changing states as you moved through a level that looked like a classic science textbook: https://youtu.be/j7IooyXp3Pc?si=E79rCrq2mdyZSKoF&t=120


Well, technically yes.

Both use the same technique: Smoothed Particle Hydrodynamics.

But the PS3 was able to fill the whole screen with these particles. Hundreds of them. The game Fluidity seems to have approx. 20.


> The game Fluidity seems to have approx. 20.

It had more than 20, but... not by much.


thanks, i had no idea about pixeljunk shooter! it looks like those fluids are 2-d tho. otoh it's apparently performing a multiphysics simulation including thermodynamics

btw, almost unrelatedly, you have no idea how much i appreciate your exegesis of the voxelspace algorithm


I remember reading about finite element tire models and how complicated they were, they can now run in real time give or take a bit.


What will be quicker: realtime raytraced volumetric super-realistic particle effects, or some kind of diffusion-model shader-like implementation?


Real smoke and flames.


Nolan should've used a real nuke


Looks amazing!


this is so useful thank you!


Page is basically unusable on my Intel MBP, and still a beastly resource consumer on my Apple Silicon Mac. I presume simulations are happening somewhere on the page, but it would be a good idea to make them toggleable. Ideally via user interaction, or, if one insists on auto-play, at least only when scrolled into the viewport.


Interestingly, no noticeable performance issues on my M2 MacBook Air. Maybe he's already made some of the changes you recommended?


Testament to the insane power and efficiency of Apple silicon


Runs fine in Ubuntu Linux using Firefox, but not in Google Chrome on my Lenovo Core i7-12700H 32GB laptop with Nvidia T600.


Works well for me on my Intel MBP, though I'm using Firefox inside of Ubuntu running inside of Parallels; maybe I'm not hitting the same issue that you are. The author may have put in a new "only runs when you click" setting since you wrote your original comment.


Same thing on a modern flagship smartphone.


Pixel 8 Pro here, using DDG browser, smooth for me.


What I can use works great on FF on my S22 Ultra, but it's not interactive because the shader fields won't register taps as click events.


Seems to work only in Safari, using Chrome I can't scroll past the first quarter of the page and CPU is at 100%.


Runs buttery smooth on my M2 here in Safari on macOS 14.2.1

Tried them out in Chrome and they're mostly all the same though I do notice a slight jitter to the rendering in the smoke example.


Same hardware(s) as you, smooth and 0 issues. Pebkac


Both you and the GP failed to mention what browser you're using.


Firefox and DDG browser


I was encountering the same problem on my Intel MBP, and per another one of the comments here, find that switching from Chrome to Safari to view the page allows me to view the whole page, view it smoothly, and without my CPU utilization spiking or my fans spinning up.


Yeah, I just checked to see what this machine is... Mid 2015 15" Retina MBP with an AMD Radeon R9 and the page is buttery smooth in Safari.


Maybe it is one of the many other tabs you have open.


Interesting. I have 64GB of RAM and yet this page managed to kill the tab entirely.


FWIW, works great and animates smoothly on a Surface Laptop 4 with 16 GB RAM (Firefox).


Do you have hardware acceleration disabled in your browser?


I checked and I have it enabled. But, I don't have a dedicated GPU, just whatever is inside the i7-14700K. Maybe it's just not up for the task?


That may be it. The page runs a fire simulation, so it really struggles when my hardware acceleration is off.


Weird. I only have 16GB but seems to run fine for me?


Maybe it's Windows? Seems on MacOS it runs fine with much less RAM.


I am on Windows.



