
Cascaded Displays: Spatiotemporal Superresolution using Offset Pixel Layers
https://research.nvidia.com/publication/cascaded-displays-spatiotemporal-superresolution-using-offset-pixel-layers
======
chdir
Video showing its capabilities:
[http://www.youtube.com/watch?v=0XwaARRMbSA](http://www.youtube.com/watch?v=0XwaARRMbSA)

~~~
mtkd
Very useful thanks.

In 10 years, the "before" example in a before/after video will still look
like it does in that video, just as it did 10 years ago.

The "before" example shots never seem to evolve, which makes me cynical about
display technology demos I don't see in person.

------
oceanofsolaris
Very interesting approach.

I think one interesting aspect of this is that it couples spatial as well as
temporal interpolation. This means that you get a higher resolution as well
as a higher framerate, but on the downside it seems to introduce additional
artifacts depending on how the two interpolations interact.

I have not yet read the technical paper and only watched the video without
sound, but from this video it seems that moving sharp edges introduce
additional artifacts (can be seen when looking at the features of the houses
in peripheral vision at 5:11 in the video). This is what you would roughly
expect to happen if both pixel grids try to display a sharp edge, but due to
their staggered update, one of these two edges is always at a wrong position.

This problem could probably be somewhat alleviated by an algorithm that has
some knowledge of the upcoming frames, but that would introduce additional
lag (bad for interactive content, horrible for virtual reality, not so bad
for video).

I intend to read the paper later, but can anyone who has already read it
comment on whether they need knowledge of the next frame or half-frame for
the shown examples?
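
To illustrate what I mean, here is a toy 1D model (mine, not from the
paper) of two layers that update on alternating half-frames while an edge
moves one pixel per half-frame:

    # Toy model of the staggered-update artifact (illustrative only).
    def edge_position(t):
        return t  # hypothetical edge position: 1 px per half-frame

    for t in range(1, 6):  # half-frame index
        front = edge_position(t if t % 2 == 0 else t - 1)  # even updates
        rear = edge_position(t if t % 2 == 1 else t - 1)   # odd updates
        print(f"t={t}: front edge at {front}, rear edge at {rear}")

    # One layer always lags by a half-frame of motion, so a moving sharp
    # edge is effectively drawn at two positions at once.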

~~~
lloeki
> _on the downside it seems to introduce additional artifacts_

It definitely introduces a form of ghosting visible near the rear end of the
motorcycle.

As for lag, I can already see John Carmack cringing! There may be an
interesting effect though, in that the increase in apparent resolution is
quadratic while the increase in computation is only linear. Hardware-wise,
this could possibly be done straight in the double-buffering phase without
additional lag, if it can be made to race the beam.

------
birger
If I understand correctly, the idea is that you get a high-resolution
display by putting two low-resolution displays in front of each other?

~~~
aroman
Yeah, this is what I'm wondering about as well. What does the actual
implementation of this look like? Is it just one display being fed 2 low-
resolution image streams? And is there any effort required to synthesize the
cascaded image?

~~~
est
I guess the main point is lower production cost. For a 4K screen you need 4x
as many pixels as a 1080p one, which is difficult to make without a high
defect rate; by cascading two 1080p LCDs you get as-good results at a much
lower cost.

Besides, you don't need 4x the display bandwidth, just double that of 1080p.

Just my guess.
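
A quick back-of-envelope check of those numbers (my arithmetic, not from
the paper):

    # Pixel counts: one "4K" UHD panel vs. two cascaded 1080p panels.
    uhd = 3840 * 2160  # 4K UHD
    fhd = 1920 * 1080  # 1080p
    print(uhd / fhd)      # 4.0: a 4K panel needs 4x the pixels
    print(2 * fhd / fhd)  # 2.0: two cascaded 1080p panels drive only 2x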

~~~
staz
I guess this would also be a nice improvement for smartphone screens, since
you would only need to power half the number of pixels for the same
equivalent density, which would save battery.

~~~
tormeh
Nya, the light will have to travel through two LCD layers, if I've understood
it correctly. You'd need a very powerful backlight.

~~~
tgb
In the video, some of the captures of the cascaded displays were actually
brighter (e.g. around the 4:20 mark). I'm not sure why; it looked like they
were just using a single display's backlight. Anyone know?

------
blencdr
I have difficulty understanding the mechanism of this supersampling (two
successive images to make one?). Can anyone explain it in a simple way?

~~~
ygra
They have two layers, slightly offset (by half a pixel in both directions),
on which they show different images that together combine into one of higher
resolution. They can also show different frames in quick succession, fast
enough that they appear to belong to the same image, with each frame
contributing different parts to either the temporal or the spatial
resolution of the final image.

Since they're using off-the-shelf LCD displays for their prototype, I guess
the final result is not yet flicker-free (they probably cannot show more than
60 fps, and thus not more than 15–30 high-resolution frames per second).
That's also evident in their demonstrating the capabilities with 5 and 10 fps
video. But that's just a matter of a higher refresh rate for the displays, I
guess, unless computing the individual frames is too taxing for now (it
doesn't seem to be; they do plenty of work in shaders, being NVidia and all).

Major benefits seem to be cost, simplicity and size; their prototypes were
built as a head-mounted display and a small projector.
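
As a minimal sketch of the image formation as I understand it from the
video (stacked LCD layers attenuate the backlight multiplicatively; this is
my reading, not code from the paper), in 1D:

    import numpy as np

    # Two low-res layers, the rear one offset by half a pixel.
    # Values are transmittances in [0, 1].
    front = np.array([0.2, 0.9, 0.5, 0.7])
    rear = np.array([0.8, 0.3, 0.6, 1.0])

    # Each pixel spans two half-pixel cells; the rear layer's cells are
    # shifted by one (np.roll wraps at the border, fine for a sketch).
    front_hi = np.repeat(front, 2)
    rear_hi = np.roll(np.repeat(rear, 2), 1)

    perceived = front_hi * rear_hi  # 8 distinct half-pixel cells
    print(perceived)

The pixel edges of the two layers interleave, so intensity can change at
half-pixel positions that neither layer could address alone. Finding the
layer values that best reproduce a given high-resolution target is then an
optimization problem, presumably what they solve in shaders.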

~~~
swimfar
I found this aspect of the build interesting:

"The bare liquid crystal panel was affixed to the base plate, held in direct
contact with the first LCD at a fixed lateral offset. As assembled, the front
polarizer on the bottom LCD is crossed with the rear polarizer on the top LCD.
Rather than remove the polarizers from the thin glass panels, we placed
quarter-wave retarder film between the two (American Polarizers
APQW92-004-PC-140NMHE): rotating the polarization state to restore the
operation of the top LCD."

It's probably extremely basic knowledge for people familiar with polarization,
but I didn't know it could be so simple.

------
npinguy
I would really like to see some data on the memory savings using this
technique. How significant are they?

~~~
druidsbane
I would guess zero. My understanding is that you render at the full, higher
resolution and then simply compute the proper subpixels on the offset
displays to align them right. You still need all the data there, using the
full amount of memory; otherwise you can't really perform the calculations
necessary for the subpixel/temporal interpolation.
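
A naive sketch of that data flow (my guess, not the paper's actual solver):

    import numpy as np

    # Render at the full target resolution, then derive the two offset
    # low-res layers from it (hypothetical sampling, for illustration).
    full = np.random.rand(2160, 3840)  # full-resolution frame

    layer_a = full[0::2, 0::2]  # samples at layer A pixel centers
    layer_b = full[1::2, 1::2]  # half-pixel-offset layer B samples

    # The full-res buffer still has to exist, hence savings of about zero.
    print(full.nbytes, layer_a.nbytes + layer_b.nbytes)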

------
higherpurpose
Unfortunately this will be yet another proprietary technology from Nvidia that
nobody else will use - which means it won't have mass adoption - which means
it's ultimately pointless (unless someone else creates an open source version
of it).

~~~
exDM69
> Unfortunately this will be yet another proprietary technology from Nvidia
> that nobody else will use...

This is a scientific/technical _research paper_ for a computer graphics
conference. It's nowhere near being a technology that ships.

There's a reason that there are so many "Nvidia only" technologies. Take
G-sync displays, for example. The underlying problem dates back to cathode
ray tube displays, but overcoming it takes integration between the display
controller hardware (in the graphics card) and the panel control
electronics. Display manufacturers do not make the GPU hardware, so the only
option is for a GPU company to take the first step.

In the long run some of these technologies will become standard and
widespread, but someone has to take the first step, and that step must be
economically viable.

~~~
DiabloD3
Except what G-sync does is not Nvidia-only. What G-sync does is send frames
at maximum speed (DVI, HDMI, and DP only send frames as fast as strictly
needed to arrive in time; i.e., at 60 Hz it takes approximately 1/60th of a
second to send a frame, even if the connection has the clock rate available
to send it in 1/144th of a second), and then send them on demand instead of
at the next frame interval (to reduce latency and jitter).

However, before Nvidia announced G-sync (which requires special and
expensive hardware), a group of companies led by AMD submitted an addition
to the DisplayPort standard called Freesync, which does the same thing using
existing hardware (and some monitors already in the wild could theoretically
be upgraded to support Freesync with only a firmware update).

Nvidia (as a company that makes gamer-grade GPUs) will be required to
support Freesync on their GPUs (no existing GeForce can do it, according to
Nvidia, due to hardware issues; AMD says GCN-based Radeons should be able to
do it with driver updates) because VESA accepted the Freesync proposal and
finalized it into the next DisplayPort spec (VESA/AMD announced it at CES
2014, shortly after Nvidia announced their much more expensive and
proprietary G-sync).
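
Rough numbers for the "send at maximum speed" point (my own arithmetic, not
from any spec):

    refresh_hz = 60
    link_hz = 144  # a link clocked fast enough for 144 Hz scanout

    normal = 1 / refresh_hz  # frame spread over the whole refresh interval
    burst = 1 / link_hz      # same frame sent at maximum link speed
    print(f"{normal * 1000:.1f} ms vs {burst * 1000:.1f} ms per frame")
    # 16.7 ms vs 6.9 ms: the frame arrives sooner and can be sent on
    # demand instead of waiting for the next fixed interval.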

~~~
exDM69
Yes, you are correct. But G-sync (kind of) ships already; you can buy a DIY
module to mod a display with it. I'm not aware of any other solutions
shipping yet.

I sincerely hope that these proprietary technologies will get put in a
standard that will ship with several vendors. All the variable refresh rate
demos I've seen look _amazing_.

But the point above still stands: it took the initial effort of the GPU
companies to make progress happen on this. It would have been impossible for
a display manufacturer, let alone a panel manufacturer, to make this happen.

------
ksec
What is the real use case for this? Gaming and VR?

We have no problem making 4K screens, and hardware isn't the bottleneck
either.

~~~
Ashwinning
Well, display hardware isn't the problem. 4K, 8K, there's no end to it.
Cascaded displays using multiplied layers seem to offer benefits like
sharpness at super high resolutions and effectively smoother results from
low-frame-rate (staggered) video playback. This helps remove a major
obstacle that high-res display technologies will face in the short term,
which is processing power. Presently, high-end graphics cards can barely
crank out 30 FPS at 4K resolution in games. Also, any compression artifacts
etc. in textures are much more pronounced on high-res/big displays. While
requiring some changes to the game-development workflow, cascaded displays
can potentially help render higher-resolution, better-quality/sharper images
at lower frame rates (i.e. much more cheaply) while still providing that
60 fps feel.

Personally, if this takes off, I can see it saving the Xbox One's ass, as a
lot of the complaints from gamers have been about its inferior capabilities
for rendering high-end games (it renders many games at 720p 30
frames/second, while the PlayStation 4 is able to crank out 1080p for the
same titles). It could also be another factor in prolonging the shelf life
of the present generation of consoles, by enabling them to deliver much
better graphics with the same hardware. Kind of like what normal maps (among
other things) did for the Xbox 360 & PS3: you can see the difference in
graphics between a game released in 2005 and a game released in 2013 on the
same hardware. Among a lot of other factors, that is why it took 7 years
before we saw the next generation of consoles being released. Comparatively,
the Xbox 360 came out within 4 years of the release of the original Xbox.

TL;DR - It's not about the display hardware itself; it's about the ease of
rendering graphics to meet the demands of high-end displays.

~~~
ygra
I'm not sure processing power is going to be any different here. To get
acceptable results from the proposed method you need twice the resolution
(two displays) _and_ twice the frame rate. Which translates, surprise, to
four times as much processing power, just as quadrupling the resolution
would.
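
Checking that arithmetic (mine, not from the paper):

    # Two 1080p layers at double the frame rate push exactly as many
    # pixels per second as a native 4K panel at the base rate.
    fhd, uhd = 1920 * 1080, 3840 * 2160
    cascaded = 2 * fhd * 120  # two layers at 120 Hz
    native = uhd * 60         # one 4K panel at 60 Hz
    print(cascaded == native)  # True: ~498 million pixels/s either way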

~~~
tgb
I think they actually propose something like rendering two 1080p streams at
60 Hz to get the effect of _both_ higher resolution than 1080p and a higher
frame rate than 60 Hz. That's their intent, apparently; who knows whether
the staggered frames actually create problems for viewers if the frame rate
isn't simply doubled.

