

Latency – the sine qua non of AR and VR - mwilcox
http://blogs.valvesoftware.com/abrash/latency-the-sine-qua-non-of-ar-and-vr/

======
kator
I wonder if some of the latency could be soaked up by a physical adaptation of
the display panel. Imagine a display that mechanically adjusts its position in
relation to the head movement, enough to give the rendering time to catch up
with the underlying movement. People won't just sit in a chair and spin 360
degrees constantly. If the physical display could shift the angle of view, or
the pan/scan of the viewable pixels, perhaps the rendering could happen a bit
slower and catch up when the user stops for a second to focus, etc.?

You could render a 'latency' buffer past the edges of the panel's physically
viewable viewport, and the panel could then expose that zone as the head moves
while the rendering continues to catch up with the actual shift. When the head
comes to a stop for even a handful of milliseconds, the display can recenter
while the scene is re-rendered at the now-current position.

~~~
kator
Hmm, on second thought, you could also just render the "overscan" and have the
IMU work with the display to shift the pixels into view while the processor
works to keep the overscan updated. On a pause you can re-render everything
(overscan included) and start over again on the next movement.

    
    
         +------------------------------+
         |   <<< "over scan area" >>>   |
         |   +----------------------+   |
         |   |   viewable by user   |   |
         |   |                      |   |
         |   |                      |   |
         |   |                      |   |
         |   |                      |   |
         |   +----------------------+   |
         |   <<< "over scan area" >>>   |
         +------------------------------+
    

This might leave enough "headroom" to keep up with potential movements at very
low latencies.
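
Roughly, in C++ (all names and constants here are made up for illustration,
not from the article or any real headset SDK): the renderer fills a buffer
larger than the panel, and at scan-out the freshest IMU sample picks which
sub-rectangle of it actually gets shown.

    // Sketch of the overscan idea above: the renderer fills a buffer larger
    // than the panel, and scan-out picks which sub-rectangle to show based on
    // how far the head has turned since the frame was rendered. All names and
    // constants are illustrative.
    #include <algorithm>
    
    constexpr int kPanelW = 1280, kPanelH = 800;    // physically visible pixels
    constexpr int kOverscan = 128;                  // extra pixels on each side
    constexpr int kBufW = kPanelW + 2 * kOverscan;
    constexpr int kBufH = kPanelH + 2 * kOverscan;
    constexpr float kPixelsPerDegree = 12.0f;       // depends on FOV and panel
    
    struct ImuSample { float yawDeg, pitchDeg; };   // latest head orientation
    
    // Top-left corner of the sub-rectangle the panel should expose, given the
    // orientation the frame was rendered for vs. the orientation right now.
    void pickViewport(const ImuSample& renderedFor, const ImuSample& now,
                      int* outX, int* outY)
    {
        float dx = (now.yawDeg   - renderedFor.yawDeg)   * kPixelsPerDegree;
        float dy = (now.pitchDeg - renderedFor.pitchDeg) * kPixelsPerDegree;
        // Clamp so we never read outside the overscan; if the head moves
        // farther than the overscan covers, we show the edge until the
        // renderer delivers a re-centred frame.
        *outX = std::clamp(kOverscan + static_cast<int>(dx), 0, kBufW - kPanelW);
        *outY = std::clamp(kOverscan + static_cast<int>(dy), 0, kBufH - kPanelH);
    }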

~~~
andybak
Eye tracking would allow even more latitude. The eye picks up very little
detail outside the central area of focus. It is also very good at inventing
information to fill in any gaps - for example, during the periods when the
eyeball shifts position we are essentially blind, but the brain fools us into
thinking we see a continuous image.

~~~
ahoge
<http://en.wikipedia.org/wiki/Saccadic_masking>

~~~
wcoenen
In Peter Watts' sci-fi novel "Blindsight"[1] there is a description of aliens
that could program their bodies on the fly to move only during the saccades of
one human observer, in order to hide themselves[2].

[1] <https://en.wikipedia.org/wiki/Blindsight_(Watts_novel)>

[2] <http://www.rifters.com/real/shorts/PeterWatts_Blindsight.pdf> page 236

~~~
biesnecker
What an awesome concept! Thanks for the link, I think I'll be adding this to
my reading list.

------
pascal_cuoq
Michael Abrash's Wikipedia page, which predates the blog post, says:

“He frequently begins a technical discussion with an anecdote that draws
parallels between a real-life experience he has had, and the article's subject
matter. His prose encourages readers to think outside the box and to approach
solving technical problems in an innovative way.”

<http://en.wikipedia.org/wiki/Michael_Abrash>

------
geon
The section about "racing the beam" is interesting.

> But that means that rather than doing rendering work once every 16.6 ms, you
> have to do it once per block. Suppose the screen is split into 16 blocks;
> then one block has to be rendered per millisecond. While the same number of
> pixels still need to be rendered overall, some data structure – possibly the
> whole scene database, or maybe just a display list, if results are good
> enough without stepping the internal simulation to the time of each block –
> still has to be traversed once per block to determine what to draw.

You could do a pretty simple hack, though, that might be good enough: render
the frame into a buffer as usual, but instead of displaying it directly,
re-render it using beam racing, applying the proper rotation around the viewer
to counter the latency of the rendering and transfer time.

This would be a bit like how Google Street View interpolates the panoramas
when you move, but very much simplified, since you only correct for rotation.
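
A crude sketch of that, assuming a yaw-only correction and treating the
reprojection as a plain horizontal shift of the finished buffer (names and
constants are illustrative): each block of scanlines is re-sampled using the
freshest IMU reading just before it goes out.

    // Sketch of the reprojection hack: the frame is rendered once as usual,
    // then right before each block of scanlines is scanned out we re-sample
    // it with a horizontal shift derived from the head rotation since render
    // time. Yaw-only and purely 2D for brevity; names/constants illustrative.
    #include <cstdint>
    #include <vector>
    
    constexpr int kW = 1280, kH = 800, kBlocks = 16;
    constexpr float kPixelsPerDegree = 12.0f;   // depends on FOV and panel width
    
    void scanOutBlock(const std::vector<uint32_t>& frame,  // finished render
                      float renderedYawDeg,   // yaw the frame was rendered for
                      float currentYawDeg,    // freshest IMU yaw, read per block
                      int block,
                      std::vector<uint32_t>& scanout)
    {
        // Computed per block, so later blocks get to use newer IMU readings.
        int shift = static_cast<int>((currentYawDeg - renderedYawDeg)
                                     * kPixelsPerDegree);
        int rows = kH / kBlocks;
        for (int y = block * rows; y < (block + 1) * rows; ++y) {
            for (int x = 0; x < kW; ++x) {
                int sx = x + shift;                          // source column
                sx = sx < 0 ? 0 : (sx >= kW ? kW - 1 : sx);  // clamp at edges
                scanout[y * kW + x] = frame[y * kW + sx];
            }
        }
    }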

Edit: Fixed the "ream-racing" typo.

~~~
tgb
Googling "ream-racing" doesn't give any useful results. Could you elaborate?

~~~
hdevalence
I think it's a typo for "beam-racing".

------
matmann2001
So this guy is responsible for Mode X. I remember implementing Mode X support
in the first OS I ever wrote. Oh, the fond, frustrating memories.

------
andybak
I get the AR bit - but without a 'real' image in the same scene to compare
against, isn't the bar potentially lower for VR?

~~~
apl
Not necessarily. VR is, of course, free of visual misalignment, but there are
other sensors whose readout may conflict with the rendered scene.
Proprioception, balance, Helmholtz's efferent copies: unfortunately we're
pretty good at integrating percepts and detecting state inconsistencies.

------
ucee054
I think you could work with a laggy, semi-VR implementation. This would be
fine at 30ms (or, hell, 300ms).

Once the demand existed for VR headsets for these applications, perhaps
hardware vendors would/could start producing low-latency displays as a way to
compete in this market.

Then, real VR applications could be made to take advantage of the newly
available VR headsets.

Instead of tracking the head, the fake VR could define slight translations and
slight rotations corresponding to WASD etc., a bit like how the Wii works with
Red Steel.
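
One possible mapping, sketched in C++ (all names and rates here are made up
for illustration):

    // Sketch of the "fake VR" idea: instead of head tracking, keys nudge the
    // view by small translations/rotations. One possible mapping; all names
    // and rates are illustrative.
    struct ViewNudge { float yawDeg = 0, forward = 0; };
    
    ViewNudge fakeVrNudge(bool w, bool a, bool s, bool d, float dtSeconds)
    {
        constexpr float kTurnRate = 20.0f;   // degrees per second (deliberately slight)
        constexpr float kMoveRate = 0.5f;    // metres per second (deliberately slight)
        ViewNudge n;
        if (a) n.yawDeg  -= kTurnRate * dtSeconds;
        if (d) n.yawDeg  += kTurnRate * dtSeconds;
        if (w) n.forward += kMoveRate * dtSeconds;
        if (s) n.forward -= kMoveRate * dtSeconds;
        return n;   // applied on top of the normal mouse-look camera each frame
    }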

I think that, even without full VR, would already be an improvement for folks
who want to play shooters today but are thwarted by the controls.

