
"The average speed is 36 FPS (tested on A100)."

Real-Time if you have $8k I guess.




Good ol' "SIGGRAPH realtime": when a graphics paper describes itself as achieving realtime speeds, you always have to double-check that it means actually realtime and not "640x480 at 20fps on the most expensive hardware money can buy". Anything can be realtime if you set the bar low enough.


Depending on what you’re doing, that really isn’t a low bar. Saying you can get decent performance on any hardware is the first step.


> get decent performance

The issue is that in Computer Science "real-time" doesn't just mean "pretty fast"; it refers to a very specific performance guarantee[0]. Doing "real-time" computing is generally considered hard even for problems that are themselves not too challenging, and it involves potentially severe consequences for missing a computational deadline.

Which leads to both confusion and a bit of frustration when sub-fields of CS throw around the term as if it just means "we don't have to wait a long time for it to render" or "you can watch it happen".
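To make the distinction concrete, here's a rough sketch (Python, purely illustrative) of what even a soft real-time loop cares about: not average throughput, but whether each iteration meets its deadline.

    import time

    FRAME_DEADLINE = 1.0 / 60.0  # 16.7 ms budget per frame

    def run_soft_realtime(render_frame, frames=600):
        """Run a fixed-rate loop and count missed deadlines."""
        misses = 0
        for _ in range(frames):
            start = time.perf_counter()
            render_frame()
            elapsed = time.perf_counter() - start
            if elapsed > FRAME_DEADLINE:
                # A hard real-time system would treat this as a failure,
                # not merely a dropped frame.
                misses += 1
            else:
                time.sleep(FRAME_DEADLINE - elapsed)
        return misses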

[0] https://en.wikipedia.org/wiki/Real-time_computing


That link defines it in terms of simulation as well: "The term 'real-time' is also used in simulation to mean that the simulation's clock runs at the same speed as a real clock", and it even states that this was the original usage of the term.

I think that pretty much meets the definition of "you can watch it happen".

Essentially, there are real-time systems and real-time simulations. So it seems they are using the term correctly in the context of simulation.


I don't think it's reasonable to expect the larger community to not use "real time" to mean things other than "hard real time as understood by a hardware engineer building a system that needs guaranteed interrupt latencies".


I think it’s reasonable to assume that it means what you described on this site.


Of course. I'm a "Reality is just 100M lit, shaded, textured polygons per second" kind of guy: realtime is about 65 FPS with no jank.


>> Anything can be realtime if you set the bar low enough.

I was doing "realtime ray tracing" on Pentium-class computers in the 1990s. I took my toy ray tracer, made it an OLE control, and put it inside a small Visual Basic app which handled keypress navigation. It could run in a tiny little window (the size of a large icon) at reasonable frame rates. You might even say it was realtime ray tracing in Visual Basic! So yeah, "realtime" needs some qualifiers ;-)
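For a rough sense of why the tiny window mattered (illustrative arithmetic only, not measurements from that demo): the primary-ray budget scales linearly with pixel count.

    # Primary rays per second needed at a given resolution and frame rate
    # (no shadows or reflections; purely illustrative).
    def rays_per_second(width, height, fps):
        return width * height * fps

    print(rays_per_second(64, 64, 15))      # ~61k rays/s  -- thinkable on a Pentium
    print(rays_per_second(2560, 1440, 30))  # ~110M rays/s -- wants modern hardware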


Fair, but today it could probably run at 30 FPS full-screen at 2K resolution, without any special effort, on an average consumer-grade machine; better if ported to take advantage of the GPU.

Moore's law may be dead in general, but computing power still increases (notwithstanding the software bloat that makes it seem otherwise), and that's still something you can count on when it comes to bleeding-edge research demos.


Microsoft once set the bar for realtime at 640x480 @ 10fps. But that was just for research purposes: you could make out what the system was trying to do, and the update rate was JUST acceptable enough to be interactive.


I’d actually call that a good bar. If you’re looking 5-10 years down the line for consumers, it’s reasonable. If you think the results can influence hardware directions sooner than that (for better performance) it’s also reasonable.


It can be run in real time. It might be 640x480 or 20 fps, but many algorithms out there could never be run in real time on a $10k graphics card or even on a computing cluster.


I mean, A100s were cutting edge a year or so ago; now we're at H200s and B200s (or is it 300s?). It may take a year or two more, but A100-level speed will trickle down to the average consumer as well.


And, from the other end, research demonstrations tend to have a lot of low-hanging fruit when it comes to optimization, which will get picked if the result is interesting enough.


Since this seems to be the first 3DGS approach that uses LODs and blocks, there is probably room for optimization. It might become useful for virtual production use cases, though probably not for mobile.


otoh I remember those old GPU benchmarks that ran at 10 fps when they came out, then over time...

https://www.techpowerup.com/forums/attachments/all-cards-png...


A lot of 3DGS/Nerf research is like this unfortunately (ugh).

Check https://github.com/pierotofy/OpenSplat for something you can run on your 10-year-old laptop, even without a GPU! (I'm the author)


I know, I don't get the fuss either; I coded real-time Gaussian splat renderers more than 7 years ago, with LOD, and they were able to show any kind of point cloud.

They worked with a basic GTX 970 on a big 3D screen and also on an Oculus DK2.


It's the old story of an outsider group (AI researchers, in this case) re-inventing the wheel discovered ages ago by domain experts.


I'm going to guess that the next-gen consumer GPU (5090) will be twice as fast as A100 and will not cost $8k.

So I don't see an insurmountable problem.


No, not unless Nvidia is thinking about financial suicide. The current split between "pro" and "consumer" isn't because it was impossible to avoid; it's because Nvidia is doing market segmentation in order to extract more money from the pro segment.


I chuckled a bit too when I saw it.

By the way, what's the compute power difference between an A100 and a 4090?


I believe the main advantage of the A100 is the memory bandwidth. Computationally the 4090 has a higher clock speed and more CUDA cores, so in that way it is faster.

So for this specific application it really depends on where the bottleneck is.
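A back-of-envelope roofline comparison shows why (spec numbers below are approximate published peaks, so treat them as ballpark):

    # Approximate peak specs; adjust if the exact SKU differs.
    GPUS = {
        "A100 40GB": {"fp32_tflops": 19.5, "mem_bw_tbs": 1.56},
        "RTX 4090":  {"fp32_tflops": 82.6, "mem_bw_tbs": 1.01},
    }

    def bound(gpu, flops_per_byte):
        """Is a kernel with this arithmetic intensity compute- or bandwidth-limited?"""
        s = GPUS[gpu]
        ridge = s["fp32_tflops"] / s["mem_bw_tbs"]  # FLOPs/byte at the roofline ridge
        return "compute-bound" if flops_per_byte > ridge else "bandwidth-bound"

    # A splat renderer streaming lots of point data does little math per byte read:
    print(bound("A100 40GB", 4.0), bound("RTX 4090", 4.0))

If the heavy kernels are bandwidth-bound, the A100's HBM edge matters more than the 4090's extra FP32 throughput.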


4090 is faster in terms of compute, but the A100 has 40GB of VRAM.


"Two more papers down the line..." ;)


Indeed, this very much looks like what we'll likely see from Google Earth within a decade—or perhaps half that.


I’ve seen very impressive Gaussian splatting demos of more limited urban geographies (a few city blocks) running on consumer hardware, so the reason this requires research-tier Nvidia hardware right now is probably down to LOD streaming. More optimization on that front, and this could plausibly come to Google Earth on current devices.
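Hand-wavy sketch of what LOD streaming means here (hypothetical chunking and level scheme, not the paper's actual one): carve the scene into chunks, pick a detail level per chunk from camera distance, and only stream/draw the splats for that level.

    # Hypothetical: each chunk stores its splats at several detail levels;
    # distant chunks get a coarser level with far fewer splats.
    def select_lod(distance_m, full_detail_within=50.0, max_level=4):
        level = 0
        d = distance_m
        while d > full_detail_within and level < max_level:
            d /= 2.0
            level += 1
        return level  # 0 = full detail, higher = coarser

    chunks = {"block_a": 30.0, "block_b": 400.0, "block_c": 2500.0}
    plan = {name: select_lod(dist) for name, dist in chunks.items()}
    print(plan)  # {'block_a': 0, 'block_b': 3, 'block_c': 4}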

“What a time to be alive” indeed!


Two years tops: the technology is there, it would be a considerable improvement to Google Maps, and Google has the required resources.


Just wait 2 years it’ll be on your phone.


You gotta start somewhere


Presumably, this can be used as the first stage in a pipeline: take the models and textures generated from source data, cache them, and stream that data to clients for local rendering.

Consumer GPUs are probably 2-3 generations out from being as capable as an A100.


There are no models or textures, it's just a point cloud of color blobs.

You can convert it to a mesh, but in the process you'd lose the quality and realism that makes it interesting.



