The problem is, video cables need to handle the worst-case scenario, and the worst case is a full refresh. Consider full motion video: whenever there's a jump cut, nearly every pixel changes. There's no way to compress that data down without loss of fidelity.
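To put a rough number on that worst case, here is a back-of-the-envelope sketch in Python (my own illustrative figures, not anything from the article or the comment), assuming a 3840x2160 panel at 24 bits per pixel refreshed at 60 Hz:

    # Worst-case link rate for uncompressed full refreshes.
    # Assumed figures: 3840x2160 panel, 24 bits per pixel, 60 Hz.
    width, height = 3840, 2160
    bits_per_pixel = 24
    refresh_hz = 60

    bits_per_frame = width * height * bits_per_pixel
    bits_per_second = bits_per_frame * refresh_hz

    print(f"One full frame: {bits_per_frame / 8 / 2**20:.1f} MiB")        # ~23.7 MiB
    print(f"Sustained link rate: {bits_per_second / 1e9:.1f} Gbit/s")     # ~11.9 Gbit/s

Any link that promises tear-free full-screen updates has to provision for that peak, even though most frames change far fewer pixels.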
I question how necessary all this is. Display resolutions have stagnated while cable bandwidth has followed Moore's law. Until we get past the dpi impasse and start producing true high res screens, we can afford to just keep on upgrading cables occasionally.
I think the author is proposing a non-causal protocol, where the data isn't necessarily sent in order. In FMV, perhaps the preceding scene can be sent with less than the full bandwidth of the cable. The computer can send the display many of the commands to generate future frames in advance, tagged with when they should be executed. Now that there is some buffering, there is more time to send the next full refresh.
At least, that's the intention I got from reading the piece.
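One way to picture that reading is a minimal sketch, assuming a hypothetical DisplayCommand that carries a presentation timestamp so the display's own frame buffer can apply work as it falls due (all names here are made up for illustration, not taken from the article):

    import heapq
    from dataclasses import dataclass, field

    @dataclass(order=True)
    class DisplayCommand:
        present_at_ms: int                    # display time at which to apply
        payload: str = field(compare=False)   # e.g. "blit tile", "full refresh"

    class DisplayBuffer:
        """Receives commands out of order, applies them on schedule."""
        def __init__(self):
            self.pending = []                 # min-heap keyed on present_at_ms

        def receive(self, cmd: DisplayCommand):
            heapq.heappush(self.pending, cmd)

        def tick(self, now_ms: int):
            # Apply every command whose deadline has arrived.
            while self.pending and self.pending[0].present_at_ms <= now_ms:
                cmd = heapq.heappop(self.pending)
                print(f"{now_ms:5d} ms: applying '{cmd.payload}'")

    # The host trickles future work over a slow link ahead of time...
    display = DisplayBuffer()
    display.receive(DisplayCommand(200, "blit tile for frame N+2"))
    display.receive(DisplayCommand(100, "blit tile for frame N+1"))
    display.receive(DisplayCommand(100, "update cursor"))

    # ...and the display executes it in timestamp order as its clock advances.
    for now in (100, 200):
        display.tick(now)

The point of the buffering is exactly what the parent comment describes: quiet scenes leave spare bandwidth, which the host can spend pre-sending commands for busier frames to come.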
It is analogous to variable bit rate (VBR) MP3 audio encoding.
No, it would be analogous to variable sample rate audio encoding, which doesn't exist.
It should also provide a way to degrade gracefully for output devices that don't support high frame rates.
Hierarchical B-frame structures in H.264 already provide this feature. For example (where A is the highest level, B middle, C lowest, and P frames are "ordinary" frames):
Full framerate: P A B A C A B A P
Half framerate: P B C B P (drop the As)
Quarter framerate: P C P (drop the As and Bs)
Eighth framerate: P P (drop the As, Bs, and Cs)
This is possible as P/B/C can't reference As, P/C can't reference Bs, and P can't reference Cs.
Some broadcast hardware encoders are known to use an even more extreme structure that involves 15 frames per hierarchy, allowing the stream to drop down to one sixteenth of the framerate.
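To make the dropping rule concrete, here is a small sketch using the A/B/C/P labels from the comment above (the code is mine, not from any encoder):

    # Temporal scalability by discarding hierarchy levels. Dropping a level
    # is safe because no retained frame references a dropped one.
    FULL_RATE = ["P", "A", "B", "A", "C", "A", "B", "A", "P"]

    DROP_FOR_DIVISOR = {
        1: set(),            # full framerate
        2: {"A"},            # half: drop the As
        4: {"A", "B"},       # quarter: drop the As and Bs
        8: {"A", "B", "C"},  # eighth: only the P frames remain
    }

    def thin(frames, divisor):
        dropped = DROP_FOR_DIVISOR[divisor]
        return [f for f in frames if f not in dropped]

    for divisor in (1, 2, 4, 8):
        print(f"1/{divisor} rate:", " ".join(thin(FULL_RATE, divisor)))

The output matches the lists above: P A B A C A B A P, then P B C B P, P C P, and P P.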
No, the point of the article (or at least the problem to be solved) is driving a high-resolution display from a device, cable, or network that doesn't have the bandwidth to do it the old-fashioned, constantly-refreshing-frames way. DVI is really just a digital hacking-up of analog VGA, which is not far above old-style video, and that makes no sense in a world where all displays sold have frame buffers of their own and memory and graphics hardware are super cheap.
We should move everything to fully digital protocols, share cables, and get power on the cables (in both directions) too.
FTA: "Today, the world has changed. Displays are made of pixels but they all have, or can cheaply add, a “frame buffer” — memory containing the current image. Refresh of pixels that are not changing need not be done on any particular schedule. We usually want to be able to change only some pixels very quickly. Even in video we only rarely change all the pixels at once."
The trouble is, the system has to be designed to accommodate the worst case. "Usually" and "rarely" are of no more use in video than they are in realtime 3D graphics.