
Oculus Research to Present Focal Surface Display Discovery at SIGGRAPH - srinathrajaram
https://www.oculus.com/blog/oculus-research-to-present-focal-surface-display-discovery-at-siggraph/
======
iblaine
VR sickness is primarily caused by latency, i.e. you move your head, the image
takes a few milliseconds to respond, and you feel dizzy. But there are other
types of VR sickness, like the inability to focus on an object. This research
improves your ability to focus on objects at different depths. Your vision is
less blurry. So yes, this research does help eliminate nausea in VR. To say
otherwise is misleading.

~~~
istorical
Primarily?

I'd say it's primarily caused by a disconnect between what the human
vestibular system (your sense of balance and spatial orientation) is telling
the brain your body is doing vs the acceleration forces that your body is
feeling vs what your eyes are telling you about what the body is doing.

Or in other words - differences in movement/locomotion your eyes are seeing in
HMD vs what your other senses are experiencing.

Which probably would include latency as a subset.

~~~
Balgair
The term in the medical literature you are looking for is 'Simulator
Sickness'. There are two main theories on its occurrence, one of which is the
issues with the vestibular system. Wikipedia has a good introduction here:
[https://en.wikipedia.org/wiki/Simulator_sickness](https://en.wikipedia.org/wiki/Simulator_sickness)

To note, simulator sickness is not that new; we have been grappling with this
for ~65 years, since the first flight simulators. Despite the massive funding
that the DoD has at its fingertips, we have not found a cure for it, or there
has not been a lot of work done to find one. However, it seems that more
time in the simulator does help, though it may then hinder actual flight
performance. Also, the more experienced pilots had a higher occurrence of it.
Again, it's not that well understood.

~~~
wlesieutre
Mayo Clinic has some recent work on hooking electrodes up to your vestibular
system: [http://newsnetwork.mayoclinic.org/discussion/mayo-clinic-
and...](http://newsnetwork.mayoclinic.org/discussion/mayo-clinic-and-vmocion-
introduce-technology-which-creates-the-sensation-of-motion-transforming-
virtual-reality/)

~~~
Balgair
Wow! Now that is some pretty cool stuff! Besides the VR issues, this has some
really good applications to things like vertigo and Meniere's disease
(unfortunately one of the 'suicide diseases'). Maybe this really will help
people!

[https://en.wikipedia.org/wiki/M%C3%A9ni%C3%A8re%27s_disease#...](https://en.wikipedia.org/wiki/M%C3%A9ni%C3%A8re%27s_disease#Prognosis)

~~~
wlesieutre
I'm slightly skeptical at the moment, because there was a lot of buzz at the
time of the press release, and then it's been total silence for more than a year
now. I tried to find any sort of hands-on commentary to confirm that it
actually works and exists, but no luck.

Still, if it were a total scam I wouldn't think the Mayo Clinic would put
their name on it.

------
spyder
There is a nice table in the paper which compares the capabilities of the
different technologies trying to solve the DOF problem in HMDs.

[http://i.imgur.com/8rdoeS3.png](http://i.imgur.com/8rdoeS3.png)

------
mhalle
Having a display that can support ocular accommodation (selective focus by the
eye) is an important research development, though it will most likely not
change the viewer's experience in a radical way.

Practical electronic 3D displays require bandwidth reduction, both data
bandwidth for transmission and optical bandwidth to create practical or lower-
cost optical modulators. The goal is to use bandwidth reduction techniques
that produce few or no visual artifacts. Some of the techniques used are
the same as in 2D (spatial discretization, time multiplexing, compression),
while others are unique to 3D (view discretization, limits on view angle,
elimination of coherence).

Head-mounted displays are basically descendants of stereoscopes, the first 3D
displays developed by Wheatstone in 1838. Wheatstone's amazing discovery was
that you can throw a huge amount of information about the world away, provide
just two images from two viewpoints, project them out to infinity in front of
a viewer's two eyes using two lenses/light paths, and a vivid sense of 3D is
evoked. That's an incredible amount of information reduction from real life.

In the traditional stereoscope, accommodation is thrown away, mostly because
it's really hard to recreate electro-mechanically, but also because we're
generally fine without it. Accommodation isn't effective for distant objects
(or, as we get older and lose the ability to accommodate, for even larger
depth ranges), so we likely have neural circuitry to discount imperfect
accommodation cues. One of the reasons we turn on bright lights when doing
detailed work is to stop our eyes down and increase our depth of field,
reducing the need for accommodation.

However, there have been perennial debates about the physiological impact of
conflicting depth cues involving accommodation, and those debates are more
interesting in VR where objects can be (virtually) very close to the viewer
and the viewer can dynamically change their physical relationship with virtual
objects.

Until you have a light modulator that can let you experiment with selectively
modulating accommodation within a scene, you can't provide real data on how
important accommodation (even approximate accommodation) is for a particular
application. Can't wait to see the studies.

We did some similar focal plane manipulation in holographic video more than a
decade ago, for related reasons (see Fig 7):

[https://www.researchgate.net/publication/255603167_Reconfigu...](https://www.researchgate.net/publication/255603167_Reconfigurable_image_projection_holograms)

~~~
AndrewKemendo
I would argue it's an important but not the most important thing for VR/AR
given what I've seen from consumer feedback. The most important based on real
world talking with consumers is a tossup between FOV and latency.

------
DaveSapien
This is a much better (IMO) approach by Nvidia:
[http://www.fudzilla.com/news/graphics/39762-nvidia-shows-
off...](http://www.fudzilla.com/news/graphics/39762-nvidia-shows-off-its-
light-field-vr-headset-at-vrla-2016)

~~~
IshKebab
If I'm understanding things, that only has two focal planes. They mention the
approach of just having several focal planes in the video.

~~~
randyrand
They time-multiplex the two planes (switching the displays on and off
quickly). Amazingly, the brain interprets it as a 3rd, or 4th, or 5th plane!

This Facebook optic uses the same technique.

~~~
IshKebab
No that's not accurate. The brain may not be able to distinguish the two focal
planes but it's just wrong to say "the brain interprets it as a 3rd, or 4th or
5th plane".

And this Facebook technique clearly doesn't use the same technique. Instead of
a focal plane they have a focal surface that isn't restricted to a plane. It
might not be noticeably better (who knows) but it's definitely not the same
technique.

------
russdill
Would it be possible to detect the focal distance of the eye and change the
entire focal depth of the display to keep it always in focus, similar to
automated vision testing devices? It could then perform blurring of out of
focus objects as a rendering step.

~~~
noio
That wouldn't account for the fact that the lenses in your eyes are
_physically_ trying to focus on a different plane because of other cues in the
scene. If you look at a close by object in VR (like say the gun you're
holding) your eyes will automatically try to accommodate to render nearby
objects sharp. By doing so they will actually make the gun blurrier, because
it (like everything else) is rendered on a medium-distance plane, while your
eyes are cued in to focus on a nearby object.

Or did I get that completely wrong?

~~~
russdill
That's why you'd need to have the rendering engine blur things that should not
be in focus. From what I understand of how the medical systems work, you'd
shine an IR light into the user's eye. The return from the light would allow
you to measure the focal distance of the eye. You'd then adjust the lenses of
the headset so that the display is exactly that focal distance away. The focal
distance of the eyes would get passed to the renderer. The renderer would apply
proper focal-distance blurring to objects that are not at the correct focal
distance. The main limitations would likely be measuring eye focal
distance in realtime, having lenses respond to it in real time, and the
possibility that the always-in-focus screen-door effect would cause issues.

~~~
russdill
Oh, and accommodation really helps your eyes speed up focusing. Try focusing
on things near and far with both eyes, and then with only one eye. You can
still do it with just one eye, but it goes much faster with two.

------
IshKebab
I'm a bit skeptical of how much of a problem this really is. I have never
noticed it in VR. Perhaps because:

1. Display resolution is still quite low, so really everything is blurry.

2. You will never be able to notice blurriness where you aren't focused
anyway, because you aren't looking there! Everything is always blurry in your
peripheral vision.

3. Surely eye focus is a feedback system, like in cameras? I mean, nobody has
problems focusing on TVs, because your eyes just magically change focal length
until the image is sharp.

I am stereoblind so maybe it is a big problem for others.

~~~
miketuritzin
Have you tried to focus on something that is under a foot from your eyes in
VR? In my experience it is virtually impossible, and I think the cause is
related to this problem (not 100% sure though).

~~~
jobigoud
I think for very close objects you need to have a very good match between your
interpupillary distance, the interaxial distance of the HMD lenses and the
interaxial distance of the virtual cameras.

Also it depends on the HMD screen size; at some point, parts of the object's
image for each eye fall off screen.

------
Gravityloss
Why are holograms / light field displays not technically possible now? I would
think we have bright and dense enough displays, and can shape the
microlenses.

~~~
ryandamm
Because n^4 is nasty.

Full light field displays have to draw every ray, not just every location;
that's two additional dimensions. Resolution scales badly, and if you make any
of the four dimensions lower resolution, you'll get ugly artifacts.

Microlenses aren't the limitation. Pixel density is, both on the physical
manufacturing side and in refresh / drawing.
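
The n^4 scaling is easy to make concrete with back-of-envelope arithmetic;
here's a quick sketch in Python (the resolutions are illustrative, not from
any real display):

```python
# A full light field display must emit a ray for every (x, y) spatial sample
# in every (u, v) angular direction, so the ray count is the product of the
# spatial and angular resolutions -- four dimensions total.

def light_field_rays(spatial_px: int, angular_samples: int) -> int:
    """Rays per frame for an n x n spatial grid with m x m angular samples."""
    return (spatial_px ** 2) * (angular_samples ** 2)

# A modest 1000x1000 spatial display with 100x100 angular resolution:
rays = light_field_rays(1000, 100)
print(f"{rays:.2e} rays per frame")  # 1.00e+10 -- 10,000x a 1 MP 2D image
```

Doubling the spatial resolution alone quadruples the ray count, which is why
none of the four dimensions can be cheaply traded away.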

~~~
phkahler
>> Because n^4 is nasty.

I keep thinking there will be a significant reduction in complexity there. The
intensity of a pixel in a hologram is essentially an integral over all
surfaces visible from that point. So imagine a rather complex formula applied
for each surface for each pixel. Then imagine holographic bounding boxes that
compress complex geometry into a few holograms of what's inside. This would
reduce the n^4 back down, but the resolution required for holograms is still
very very high. But we could use fancy GPUs to evaluate the integrals.

Just hand-waving thinking here...

~~~
erikpukinskis
You can compress most Earth light fields really well. That might be what
you're intuiting. There's a ton of redundancy due to the mostly opaque nature
of reality.

But compressing a light field and projecting one into an eye are totally
different things. Your display needs to be capable of displaying all 3n^4
possible intensities at different times. Depending on the scene and where
you're looking, you can get away with showing only a small handful of those
(some megapixels), but the display still needs to be _capable_ of displaying
them all.

If your display is just fixed LEDs behind a microlens array then you still
need n^4 resolution. Like, a megapixel per pixel. Most of the pixels can be
off at any given time, but you'll need them all to display arbitrary scenes.

~~~
ryandamm
It's actually worse than that, for a near-eye display: unless you know where
the person is focusing, you actually don't know where the redundancy is, so
you have to draw the entire 4D raster. (If you know their focus distance, you
can probably just draw a 2D image on the retina and be done -- you get that
4D->2D projection.)

Otherwise, the existence of those additional rays is what allows for focus
accommodation: as you focus in different planes, it 'shuffles' the light
around, to create sharp edges (where similar light rays line up) or blurry
foreground/background (similar light rays strike different portions of the
retina -- and if any are missing, there is a 'hole' in the blur disk).

Pedantic, perhaps... but these are the stakes.

~~~
erikpukinskis
This is one of the reasons why I'm actually quite bullish on handheld VR/AR.
It has none of the optical challenges of a headset. You lose one hand, and
give up some immersion, but in exchange you get all of the other benefits of
VR/AR without any of the optical challenges and lots more performance
headroom.

I think Google is on the right path with Tango-first. Didn't used to think so.

------
6stringmerc
Ctrl+F "inner ear" - no results. To me, that's going to be the keystone to a
functional VR experience. Until there's some relatively non-intrusive
mechanism to fool the body's systems into playing along, I'm sorry, I don't
think image resolution or refresh rate or FPS will solve the problem. They're
all very important, sure, but I think the biology of the conundrum is the most
challenging in the short term.

~~~
oopsies49
How do astronauts combat nausea in space? Practice. I personally don't expect
a technical solution to this any time soon. We just have to acclimatize
ourselves.

~~~
shazow
Not everyone is fit to be an astronaut. Not everyone is able to eliminate
nausea through reasonable amounts of acclimatization.

The target audience for VR is everyone.

~~~
pera
I know many people who can't play first-person games because of simulator
sickness. That didn't stop the game industry.

~~~
pshc
To add to this, motion sickness from FPS-on-a-screen is inherently unfixable
due to the lack-of-locomotion issue. But roomscale VR deals with that issue.
VR done right could very well expand the FPS market.

------
ryandamm
I think this is overstated (though without looking through the prototype, this
is only speculation).

During normal, outside-of-headset vision, we focus naturally and quickly on
whatever we're looking at. We don't spend time with our eyes consciously
defocused on subject matter in our foveal view. So anything that's out of
focus will tend to be in our peripheral view.

So this is a peripheral technology. I think everyone's still looking for the
killer additional tech that will make VR perfect -- but it's not about one
magic tech bullet, it's about ecosystems slowly growing, and content getting
better. (The headsets are better than people think.)

------
migueloller
"It may even let people who wear corrective lenses comfortably use VR without
their glasses."

If only for this, it's a move in the right direction.

~~~
whatnotests
Yes! Many people cannot use VR headsets unless they wear contacts, and
even then it feels quite unnatural.

------
eutropia
Won't this also need gaze-tracking to be successful? In their video they
described a manually moved camera.

Is this technology compatible with foveal rendering?

~~~
drcode
No, I think this technique presents differently-focused areas to the eye all
at once (sort of like natural light, just at extremely coarse resolution) and
eye tracking is not needed.

------
highd
Sort of strange to describe it as a "discovery" - I'm sure a team of
engineers with a variety of fields of expertise spent 1-3 years solving
problems that led up to this. A "discovery" would seem to describe something
that existed in the aether prior to their work; the word seems to diminish the
innovation and effort they put in.

------
nsxwolf
Weird that I haven't noticed this yet when using VR. I wonder if I'll notice
next time after watching this.

------
cobbzilla
I guess whatever the faults of the paper, I like that they do have a product
on the market and are demoing and publishing research. Magic Leap? Not so
much.

------
AndrewKemendo
Honestly, I found the paper critically lacking where they attempted to make
reference or comparisons to virtual retinal displays. Saying that a VRD is
functionally restricted to _moderate_ FOV in comparison to the 120 degree FOV
of the Rift - using only the embodiment of the deformable membrane mirror as
reference - is ridiculous on its face.

Even a rough version of the deformable mirror AR VRD described by researchers
at UNC Chapel Hill [1] accomplishes 100 degrees FOV with accommodation.

They went further with the Pinlight achieving 110 in 2014 [2]

The technical limit according to our own work for VRD FOV is H: 200°, V: 140°
(combined). So either they're ignoring work in the field intentionally because
they don't want to do VRD, or they don't know about it. My guess would be the
former.

[1][http://telepresence.web.unc.edu/research/dynamic-focus-
augme...](http://telepresence.web.unc.edu/research/dynamic-focus-augmented-
reality-display/)

[2][http://www.cs.unc.edu/%7Emaimone/media/pinlights_siggraph_20...](http://www.cs.unc.edu/%7Emaimone/media/pinlights_siggraph_2014.pdf)

edit: I find this whole thing extremely frustrating. Facebook could throw 2
billion dollars at VRD tech and actually get to a working, stable, consumer-
grade system if they wanted to - everything is there for it. Why aren't they?

~~~
joering2
> Why aren't they?

You can say that about any other multi-billion dollar company, be it
Google, Microsoft, Xerox, Canon, whoever.

The short answer is that VR is not their core business.

~~~
AndrewKemendo
Zuckerberg spent his entire keynote talking about AR - he effectively told
everyone that they are a social AR company. They are investing the most into
AR/VR, creating glasses, etc.

How is that not what they are planning as their core business?

~~~
joering2
Might not be. It might just as well be a strategy to discourage others from
pursuing the same thing while Facebook eventually works it out.

- Hey Mike, this is VP John Smith. Until yesterday we were interested in
investing a $10 million round A in your VR startup, but unfortunately we heard
today that Zuckerberg plans to be deep in it, therefore I regret to inform you
we will be unable to go ahead with the round. Sorry, and we wish you good luck!

~~~
AndrewKemendo
I think I have about 20 of those email responses at this point!

------
mattnewport
Article headline is misleading. Eliminating nausea is not the primary benefit
of this research and isn't even mentioned in the article. It may help a little
with nausea in some cases but it won't eliminate it.

~~~
sctb
Thanks, we've reverted the submitted title “Oculus Research Presents Focal
Surface Display. Will Eliminate Nausea in VR” to that of the article.

------
sitkack
No one has mentioned Magic Leap.

------
jimrandomh
Nausea in VR is already a solved problem, at least for the case where you have
a stationary camera. This is for giving people an extra method of depth
perception, but one which isn't strictly necessary.

~~~
istorical
Saying nausea is solved in VR is like a casual observer saying space flight is
solved after the first Wright brothers flight.

I say this as a VR enthusiast who is optimistic about the industry.

You can't say it's solved until an HMD can forcefully move your viewport in-
HMD without triggering nausea caused by the real body not moving.

~~~
Pxtl
To be fair, he did say a stationary camera.

