Though come to think of it, there is one example that almost does what you describe. Dynamic resolution scaling has some artifacts, but it is used in some games, notably Dead Rising 3. The catch is that you have to decide what resolution to use before the frame is rendered, so you still get frames that take longer than 16.7ms or whatever your target is.
One could do something similar with render quality, but it has the same drawback: you have to decide beforehand what quality you want to use. One would also have to ramp the quality up and down slowly, which is difficult because the complexity of a given scene can vary wildly over the space of a single second. It also wouldn't help with spikes in rendering time.
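For concreteness, here's roughly what that decide-beforehand loop looks like. This is a minimal sketch; the target, step size, and scale limits are numbers I made up for illustration, not anything taken from a shipping engine:

    // Sketch: pick next frame's resolution scale from how long the last
    // frame took. All constants here are illustrative.
    float ChooseResolutionScale(float currentScale, float lastFrameMs)
    {
        const float targetMs = 16.7f;               // 60 Hz budget
        const float minScale = 0.5f, maxScale = 1.0f;
        const float step = 0.02f;                   // ramp slowly to avoid visible pumping

        float next = currentScale;
        if (lastFrameMs > targetMs)                 // falling behind: shrink
            next -= step;
        else if (lastFrameMs < targetMs * 0.9f)     // headroom: grow back
            next += step;

        if (next < minScale) next = minScale;
        if (next > maxScale) next = maxScale;
        return next;   // decided *before* the frame renders, so spikes still blow the budget
    }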
"One could do something similar with render quality. There are possible solutions, and I'm thinking creatively about it."
Thank you for taking the time to answer me.
Rendering a scene is kind of like constructing a building. If you stop at an arbitrary point in the construction process, you don't get a pretty good building; you get a non-functional structure.
The real reason is that there may well be some way of doing a progressively enhanced, rendered game; it has just never been a high enough priority for anyone to do serious work in that area.
The point is, this isn't a direction that cutting-edge games programming has gone in, because getting more detail into the game has always been a higher priority than having a steady, predictable framerate. And to make a further point: practically speaking, the painter's/z-buffer approach is easier to optimise than some other rendering algorithms. The others are not impossible, just not fruit that hangs quite as low.
Dude seriously, I have been doing this a long time. I am going to stop replying after saying just one more thing:
If you are running on an OS like Windows (which this product is targeted at), you do realize that the OS can just preempt you at any time and not let you run? How do you predict whether you are going to finish a frame when you don't even know how much CPU time you will actually get between now and the end of the frame?
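If you want to see this for yourself, a small experiment along these lines (just a sketch, nothing Windows-specific) shows how much a desktop scheduler wobbles even when all you ask for is a 1ms wait:

    // Sketch: ask the OS to sleep 1 ms and measure what we actually got.
    // On a desktop OS the wake-up time can vary by milliseconds, which is
    // a large chunk of a 16.7 ms frame budget.
    #include <chrono>
    #include <cstdio>
    #include <thread>

    int main()
    {
        using clock = std::chrono::steady_clock;
        for (int i = 0; i < 10; ++i)
        {
            auto t0 = clock::now();
            std::this_thread::sleep_for(std::chrono::milliseconds(1));
            auto t1 = clock::now();
            double ms = std::chrono::duration<double, std::milli>(t1 - t0).count();
            std::printf("asked for 1.00 ms, got %.2f ms\n", ms);
        }
    }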
There's no point at which you have a usable partial frame to display. It also doesn't make sense to compute every other pixel and then, if there's time, come back for the rest, because computing a pixel's neighbour needs a lot of the same intermediate work, and you probably don't have the resources to keep all of that around. Parallel rendering generally gives each unit a different region of the screen, not interleaved pixels.
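A minimal sketch of what that region-per-unit split looks like; ShadePixel here is just a placeholder standing in for the real per-pixel work:

    // Sketch: split the framebuffer into horizontal bands, one per worker,
    // rather than interleaving pixels, so each worker's intermediate data
    // stays local to its own region.
    #include <cstdint>
    #include <thread>
    #include <vector>

    void ShadePixel(std::vector<uint32_t>& fb, int x, int y, int width)
    {
        fb[y * width + x] = 0xFF000000u;   // placeholder for the actual shading work
    }

    void RenderBand(std::vector<uint32_t>& fb, int width, int y0, int y1)
    {
        for (int y = y0; y < y1; ++y)
            for (int x = 0; x < width; ++x)
                ShadePixel(fb, x, y, width);
    }

    int main()
    {
        const int width = 1920, height = 1080, workers = 4;
        std::vector<uint32_t> framebuffer(width * height);
        std::vector<std::thread> pool;
        for (int i = 0; i < workers; ++i)
        {
            int y0 = height * i / workers;
            int y1 = height * (i + 1) / workers;
            pool.emplace_back([&framebuffer, width, y0, y1] {
                RenderBand(framebuffer, width, y0, y1);
            });
        }
        for (auto& t : pool) t.join();
    }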
To answer your question about why not just cap at 40fps instead of going up and down: if you cap the framerate at 40fps and your monitor doesn't refresh at an even multiple of 40, you're going to end up with constant judder, which is probably worse than the occasional dropped frame.
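The arithmetic for a 60Hz monitor: frames finish every 25ms but can only be shown on a 16.7ms grid, so they're held alternately for two refreshes, then one, then two again. A little sketch that prints the cadence:

    // Sketch: when does each 40 fps frame actually reach a 60 Hz screen?
    #include <cmath>
    #include <cstdio>

    int main()
    {
        const double refreshMs = 1000.0 / 60.0;   // ~16.67 ms
        const double frameMs   = 1000.0 / 40.0;   // 25.0 ms
        double prevShown = 0.0;
        for (int n = 1; n <= 8; ++n)
        {
            double ready = n * frameMs;
            double shown = std::ceil(ready / refreshMs) * refreshMs;
            std::printf("frame %d: ready %.1f ms, shown %.1f ms, previous held %.1f ms\n",
                        n, ready, shown, shown - prevShown);
            prevShown = shown;
        }
    }

The "held" column alternates 33.3 ms and 16.7 ms, and that regular long/short pattern is the judder.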
If you look at it from the other side, consider all the extra buffering that is needed to support fixed-refresh monitors when frame generation really doesn't need to be fixed. On the desktop, if nothing on the screen moves, there's no need to transmit the buffer 60 times per second except for legacy reasons, and in a graphics-intensive application the time to generate a buffer may vary. Video is usually recorded at a fixed frequency, but that often doesn't match the frequency of the monitor. CRTs absolutely required that the electron beam trace every pixel several times a second; LCDs don't.
Here's an idea I am curious about now. If you can almost but not quite reliably generate full res images at 60 FPS, can you generate quarter resolution (that is, half the pixels on each dimension for a quarter of the pixels) at 240 fps, or does the overhead for each render outstrip the efficiency from generating fewer pixels? That is, how much of a fixed overhead for a frame is there, and can it be spread over 4 frames, with slightly offset camera?
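Nobody can answer that without profiling a real engine, but a back-of-envelope model shows why the fixed part matters. The overhead and per-pixel numbers below are invented purely for illustration:

    // Back-of-envelope: frame cost = fixedOverheadMs + pixels * perPixelNs.
    // The point is only that the fixed part is paid 4x when you render
    // four quarter-res frames instead of one full-res frame.
    #include <cstdio>

    int main()
    {
        const double fixedOverheadMs = 4.0;              // assumed per-frame cost
        const double perPixelNs      = 5.0;              // assumed per-pixel cost
        const double fullPixels      = 1920.0 * 1080.0;

        double fullFrameMs    = fixedOverheadMs + fullPixels * perPixelNs * 1e-6;
        double quarterFrameMs = fixedOverheadMs + (fullPixels / 4.0) * perPixelNs * 1e-6;

        std::printf("1 full-res frame:     %.1f ms\n", fullFrameMs);
        std::printf("1 quarter-res frame:  %.1f ms (needs to be <= 4.2 ms for 240 fps)\n",
                    quarterFrameMs);
        std::printf("4 quarter-res frames: %.1f ms vs %.1f ms for one full frame\n",
                    4.0 * quarterFrameMs, fullFrameMs);
    }

With these made-up numbers the quarter-res frame still costs well over 4.2 ms, because the fixed overhead doesn't shrink with the pixel count; how it works out in practice depends entirely on where your engine's time actually goes.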
But on the other hand I was confused: I had forgotten that the z-buffer is the algorithm still in use in most game rendering engines, and you cleared that up. Thanks.