
So you are saying it's completely impossible to start out just rendering the lowest complexity scene and progressively refine it, so that if you stop at any point in time, you still have something reasonable to show for it? And that GPU manufacturers have been working on making steady frame rates more and more difficult instead of easier?



What you are suggesting could lead to horrific flicker. If you render one frame in low complexity followed by another frame at medium complexity, followed by another frame in low complexity, etc., you'd get flashing as (for example) shadows appeared and disappeared repeatedly.


Well, wouldn't that depend on what you mean by "low complexity"? That's what it means nowadays, but could you not design a version of "low complexity" that reduces the appearance of flicker?


The problem is that if the low quality render differs in any appreciable way from the high quality one, there will be flickering. So the low quality renders have to be extremely similar to the high quality ones, in which case why not just use the low-quality ones all the time?

Though come to think of it, there is one example that almost does what you describe. Dynamic resolution scaling has some artifacts, but is being used in some games (and is notably used in Dead Rising 3). Though one has to decide before the frame is rendered what resolution to use, so you still get frames that take longer than 16.7ms or whatever your target is.

One could do something similar with render quality, but it has the same drawback that you have to decide beforehand what quality you want to use. One would also have to ramp up and down the quality slowly, which is difficult as the complexity of a given scene can vary wildly over the space of a single second. It also wouldn't help with spikes in rendering time.
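
For what it's worth, the ramping half of that is usually a simple feedback loop on the previous frame's time. A rough C++ sketch of that kind of loop; the numbers and the scale-with-pixel-count cost model are made up for illustration and aren't taken from any real engine:

    #include <algorithm>
    #include <cstdio>

    int main() {
        const double target_ms = 16.7;   // 60 Hz budget
        double scale = 1.0;              // fraction of native resolution per axis

        // Simulated GPU cost (ms) to render each frame at full resolution; in a
        // real renderer this would come from a GPU timer query for the previous frame.
        const double full_res_cost[] = {14.0, 15.5, 18.0, 22.0, 21.0, 17.0, 15.0, 13.5};

        for (double base_cost : full_res_cost) {
            // Cost roughly scales with pixel count, i.e. with scale^2.
            double frame_ms = base_cost * scale * scale;

            // Ramp slowly: back off when over budget, creep back up when under.
            if (frame_ms > target_ms)
                scale = std::max(0.5, scale * 0.9);
            else
                scale = std::min(1.0, scale * 1.02);

            std::printf("frame took %.1f ms -> render next frame at %.0f%% resolution per axis\n",
                        frame_ms, scale * 100.0);
        }
        return 0;
    }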


I like this post (or at least the direction it begins to take at the end).

"One could do something similar with render quality. There are possible solutions, and I'm thinking creatively about it."

Thank you for taking the time to answer me. <3


I think you're under the misapprehension that rendering is, or can be, done via progressive enhancement; that's never really been a direction games have pushed in.

Rendering a scene is kind of like constructing a building. If you stop at an arbitrary point in the construction process, you don't get a pretty good building, you get a non-functional structure.

The real reason is that there might well be some way of doing a progressively enhanced renderer for a game, but it's never been a high enough priority for anyone to do serious work in that area.


This is an interesting point, and I can kind of see the outline of some vaguely political reasoning here. Let us clarify that we are talking about realtime rendering for games. Of course, rendering in general has been done in all sorts of different ways, and it's pretty straightforward to demonstrate a raytracer that progressively enhances its rendering over time.
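
To make that concrete, here is a toy coarse-to-fine ray tracer in C++: one hard-coded sphere, shaded by filling progressively smaller blocks, so stopping after any pass still leaves a complete (just blocky) picture. Everything in it is invented for illustration; no shipping game renders this way:

    #include <algorithm>
    #include <cmath>
    #include <cstdio>
    #include <vector>

    // Cast a ray from the origin through pixel (x, y) at a unit sphere centred at
    // (0, 0, -3); return a simple diffuse shade, or a dim background value on a miss.
    static float trace(int x, int y, int w, int h) {
        float u = (x + 0.5f) / w * 2.0f - 1.0f;
        float v = 1.0f - (y + 0.5f) / h * 2.0f;
        float dx = u, dy = v, dz = -1.5f;              // ray direction
        float cx = 0.0f, cy = 0.0f, cz = -3.0f;        // sphere centre
        float a = dx*dx + dy*dy + dz*dz;
        float b = -2.0f * (dx*cx + dy*cy + dz*cz);
        float c = cx*cx + cy*cy + cz*cz - 1.0f;
        float disc = b*b - 4.0f*a*c;
        if (disc < 0.0f) return 0.1f;                  // missed the sphere
        float t = (-b - std::sqrt(disc)) / (2.0f*a);   // nearest hit
        float nx = t*dx - cx, ny = t*dy - cy, nz = t*dz - cz;   // unit surface normal
        float lambert = std::max(0.0f, nx*0.5f + ny*0.5f + nz*0.7f);
        return 0.15f + 0.8f * lambert;
    }

    int main() {
        const int W = 64, H = 24;
        std::vector<float> image(W * H, 0.0f);

        // Each pass fills the entire image with blocks of the given size, so the
        // frame is displayable after every pass; later passes only add detail.
        for (int block = 8; block >= 1; block /= 2) {
            for (int y = 0; y < H; y += block)
                for (int x = 0; x < W; x += block) {
                    float shade = trace(x, y, W, H);   // one ray per block
                    for (int by = y; by < std::min(y + block, H); ++by)
                        for (int bx = x; bx < std::min(x + block, W); ++bx)
                            image[by * W + bx] = shade;
                }
            std::printf("pass with %dx%d blocks done -- the image is complete, just coarse\n",
                        block, block);
        }

        // Crude ASCII dump of the fully refined image.
        const char ramp[] = " .:-=+*#";
        for (int y = 0; y < H; ++y) {
            for (int x = 0; x < W; ++x)
                std::putchar(ramp[(int)(image[y * W + x] * 7.99f)]);
            std::putchar('\n');
        }
        return 0;
    }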

The point is, this isn't a direction that cutting-edge games programming has gone in, because getting more detail into the game has always been a higher priority than having a steady, predictable framerate. And to make a further point: practically speaking, the painter's/z-buffer approach is easier to optimise than some other rendering algorithms. The others are not impossible, just not fruit that hangs quite as low.


Yes, it is not possible. Learn about what a Z-buffer is and how it works.

Dude seriously, I have been doing this a long time. I am going to stop replying after saying just one more thing:

If you are running on an OS like Windows (which this product is targeted at), you do realize that the OS can just preempt you at any time and not let you run? How do you predict if you are going to finish a frame if you don't even know how much you will be able to run between now and the end of the frame?


I am not talking about predicting in advance, I am talking about just doing as much as you can, and then when the deadline comes, sending the result. With the way software is written now this results in tearing, because pixels are simply rendered left to right, top to bottom. But what if you rendered them in a different order, such as with an interlaced JPEG, which shows you a low-res version of the image while it's only partially downloaded?
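
Something like GIF/PNG-style interlacing, presumably (progressive JPEG refines frequency coefficients instead, but the coarse-image-first idea is the same). A small C++ sketch of that row ordering:

    #include <cstdio>
    #include <vector>

    // Emit scanline indices in interlaced order: every 8th row first, then the rows
    // halfway between those, and so on. Stopping early still covers the full height
    // of the image, just at a lower vertical resolution.
    std::vector<int> interlaced_row_order(int height) {
        std::vector<int> order;
        const int start[] = {0, 4, 2, 1};
        const int step[]  = {8, 8, 4, 2};
        for (int pass = 0; pass < 4; ++pass)
            for (int row = start[pass]; row < height; row += step[pass])
                order.push_back(row);
        return order;
    }

    int main() {
        for (int row : interlaced_row_order(16))
            std::printf("%d ", row);
        std::printf("\n");
        // Prints: 0 8 4 12 2 6 10 14 1 3 5 7 9 11 13 15
        return 0;
    }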


The render pipeline is, more or less: compute a bunch of geometry on the CPU, send it to the GPU along with textures and other data, tell it to render, and once the GPU has rendered the frame, tell it when to swap buffers.
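
Roughly this shape, with hypothetical stand-in functions in place of whatever graphics API is in use; this is only a sketch of the loop's structure, not a real API:

    #include <cstdio>

    // Hypothetical stand-ins for a real graphics API (Direct3D, Vulkan, OpenGL...).
    // The names are made up; only the shape of the loop matters here.
    struct FrameData { int draw_calls; };

    FrameData simulate_and_build_geometry() { return {120}; }   // CPU-side work
    void upload_to_gpu(const FrameData&) {}                     // buffers, textures, constants
    void submit_draw_calls(const FrameData& f) {
        std::printf("submitted %d draw calls\n", f.draw_calls);
    }
    void wait_for_gpu_and_present() {
        std::printf("GPU finished the whole frame -> swap buffers\n");
    }

    int main() {
        // The frame only becomes displayable once the GPU has finished all of it;
        // only then is the back buffer swapped to the screen.
        for (int frame = 0; frame < 3; ++frame) {
            FrameData f = simulate_and_build_geometry();
            upload_to_gpu(f);
            submit_draw_calls(f);
            wait_for_gpu_and_present();
        }
        return 0;
    }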

There's no point at which you have a usable partial frame to display. It also doesn't make sense to compute every other pixel and, if there's time, come back for the rest, because computing a pixel's neighbor needs a lot of the same intermediate work, and you probably don't have the resources to keep that around. Parallel rendering generally divides the work into different regions of the screen for each unit, not interleaved pixels.
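
For example, a typical split gives each unit a contiguous band of the screen, something like this (illustrative only):

    #include <cstdio>

    int main() {
        // Each unit gets a contiguous band of rows, so neighbouring pixels share
        // intermediate work (triangle setup, cache locality) within one unit.
        const int width = 1920, height = 1080, units = 4;
        for (int u = 0; u < units; ++u) {
            int y0 = height * u / units;
            int y1 = height * (u + 1) / units;
            std::printf("unit %d renders rows %d..%d (full width %d)\n",
                        u, y0, y1 - 1, width);
        }
        return 0;
    }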

To answer your question about why not just use 40fps instead of going up and down: if you cap the framerate at 40fps and your monitor doesn't refresh at an even multiple of 40, you're going to end up with consistent judder, which is probably worse than an occasional framerate drop.
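
You can see the judder just by working out which vsync each frame lands on. A small C++ sketch of 40fps content on a vsynced 60Hz display (a simplified model that ignores render-time variation):

    #include <cmath>
    #include <cstdio>

    int main() {
        // A new frame is ready every 25 ms but can only be shown on a ~16.7 ms
        // refresh boundary, so frames sit on screen for alternating 1 and 2
        // refresh intervals. That alternation is the judder.
        const double frame_interval = 1000.0 / 40.0;   // 25 ms
        const double refresh        = 1000.0 / 60.0;   // ~16.7 ms
        double shown_at = 0.0;
        for (int frame = 1; frame <= 6; ++frame) {
            double ready   = frame * frame_interval;
            double display = std::ceil(ready / refresh) * refresh;  // next vsync
            if (frame > 1)
                std::printf("frame %d stays on screen for %.1f ms\n",
                            frame - 1, display - shown_at);
            shown_at = display;
        }
        return 0;
    }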

Looking at it from the other side, consider all the extra buffering needed to support fixed-refresh-rate monitors when frame generation really doesn't need to be fixed. At the desktop, if nothing on the screen moves, there's no need to transmit the buffer 60 times per second except for legacy reasons, and in a graphics-intensive application the time to generate a buffer may vary. Video is usually recorded at a fixed frequency, but it often doesn't match the frequency of the monitor. CRTs absolutely required that the electron beam trace every pixel several times a second, but LCDs don't.


I only used 40fps as an example. I know you'd probably want 60fps, 30fps, or 20fps (or possibly 50 or 25 if your refresh rate has more of a PAL bent).

Here's an idea I am curious about now. If you can almost but not quite reliably generate full-res images at 60fps, can you generate quarter-resolution images (that is, half the pixels on each dimension, for a quarter of the pixels) at 240fps, or does the per-frame overhead outstrip the efficiency gained from generating fewer pixels? That is, how much fixed overhead is there per frame, and can it be spread over 4 frames with a slightly offset camera?
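
The answer hinges entirely on how big that fixed overhead is. A toy cost model in C++, with completely made-up numbers, just to frame the arithmetic:

    #include <cstdio>

    int main() {
        // Hypothetical cost model, only to frame the question: every frame pays a
        // fixed overhead (visibility, draw submission, full-screen passes) plus a
        // cost proportional to pixel count. The numbers below are invented.
        const double fixed_ms     = 4.0;    // per frame, resolution-independent
        const double per_pixel_ms = 12.0;   // pixel-proportional cost at full res

        double full_frame    = fixed_ms + per_pixel_ms;         // one 60fps frame
        double quarter_frame = fixed_ms + per_pixel_ms / 4.0;   // one quarter-res frame
        double four_quarters = 4.0 * quarter_frame;             // needed within 16.7 ms for 240fps

        std::printf("full-res frame:          %.1f ms (fits the 16.7 ms budget)\n", full_frame);
        std::printf("one quarter-res frame:   %.1f ms (240fps needs under ~4.2 ms)\n", quarter_frame);
        std::printf("four quarter-res frames: %.1f ms (the fixed cost is paid 4x)\n", four_quarters);
        return 0;
    }

With these made-up numbers the fixed cost dominates and 240fps doesn't fit; with a small enough fixed cost it could.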


3D graphics images aren't calculated left to right, top to bottom. They are calculated by drawing a bunch of triangles all over the image to represent 3D geometry. Triangles are often drawn on top of other triangles. Many modern games also use multipass rendering to achieve certain lighting and special effects. Only after a whole image is computed can it be transferred to the monitor, if you want the image to make sense. If you stop rendering halfway through, the end result would be objects full of holes, with entire features missing or distorted. The time needed for the actual transfer of the image to the display is generally a drop in the bucket by comparison.
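
A toy software depth-buffer sketch of that in C++ (rectangles stand in for triangles just to keep the rasteriser tiny; the arbitrary draw order and per-pixel depth test are the point). Stop after half the draws and the nearer object simply isn't there yet:

    #include <algorithm>
    #include <cstdio>
    #include <vector>

    const int W = 40, H = 12;
    std::vector<char>  color(W * H, '.');
    std::vector<float> depth(W * H, 1e9f);

    // Primitives arrive in arbitrary order and overlap; the per-pixel depth test
    // decides which one ends up visible.
    void draw_rect(int x0, int y0, int x1, int y1, float z, char c) {
        for (int y = std::max(0, y0); y < std::min(H, y1); ++y)
            for (int x = std::max(0, x0); x < std::min(W, x1); ++x)
                if (z < depth[y * W + x]) {        // depth test
                    depth[y * W + x] = z;
                    color[y * W + x] = c;
                }
    }

    void show(const char* label) {
        std::printf("%s\n", label);
        for (int y = 0; y < H; ++y)
            std::printf("%.*s\n", W, &color[y * W]);
    }

    int main() {
        draw_rect(0, 0, W, H, 10.0f, '-');     // far background
        draw_rect(5, 2, 20, 9, 5.0f, '#');     // mid-distance wall
        show("Stopped halfway: the near object is simply not there yet");

        draw_rect(12, 4, 30, 11, 2.0f, '@');   // near object, drawn last but closest
        show("Finished frame: the depth test puts it in front");
        return 0;
    }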


Well, while you are right about the way games and GPUs currently work, there is more than one 3D rendering algorithm; the Z-buffer is not the be-all and end-all. The question is what can be rendered efficiently with the hardware we have. At some point we could collectively decide that having a rock-solid framerate is more important than more detail, decide to use a scanline progressive renderer, and there you go. It is possible, but would we do it?

But on the other hand, I was confused: I forgot that the Z-buffer is the algorithm still in use in most game rendering engines, and you cleared that up. Thanks.



