
Volume – The world’s first personal volumetric display - bahro
http://lookingglassfactory.com/
======
tobr
Despite their best efforts to cover it up with quick editing and cherry-picked
angles, it's clear this has very low depth resolution and foggy picture
quality.

If this only uses light, how would it deal with occlusion? It would be
impossible to put a dark object in front of a lighter one, right?

~~~
dezt
I am on the Looking Glass team working on Volume, so I can answer you.

As far as resolution: each slice is about 104 px high in its current state,
and about 1500 px wide. The current depth resolution is 10, because Volume
currently has 10 slices. We do use some tricks to reduce the apparent low
depth resolution; for example, we allow a dev to set settings to 'overdraw'
the slices, so slices draw parts of their sister slices, which are then
blended/interpolated with shaders. It helps a lot. In fact, our tests with
more slices don't seem to do much other than increase the quality and
viability of wider viewing angles. At near straight-on viewing angles (within
15 degrees of the center), more slices don't seem to affect the quality of the
display much (interestingly).
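
For anyone curious what that overdraw/blending might look like in practice, here is a rough sketch in Python (the function and parameter names are my own illustration, not Volume's actual tooling): a point lying between two slices contributes to both, with linearly falling weights.

```python
def blend_slices(depth, num_slices=10, overdraw=0.5):
    """Distribute a point's brightness across neighbouring slices.

    depth: normalized depth in [0, 1); num_slices: physical slices;
    overdraw: how far (in slice units) a point bleeds into neighbours.
    Returns {slice_index: weight} with weights summing to 1.
    """
    pos = depth * num_slices  # continuous slice coordinate
    weights = {}
    for i in range(num_slices):
        # distance from this slice's centre, in slice units
        d = abs(pos - (i + 0.5))
        w = max(0.0, 1.0 - d / (0.5 + overdraw))  # triangular falloff
        if w > 0:
            weights[i] = w
    total = sum(weights.values())
    return {i: w / total for i, w in weights.items()}
```

With `overdraw=0.5`, a point at depth 0.12 on a 10-slice display splits roughly 30/70 between slices 0 and 1 - the kind of interpolation that softens visible slice boundaries.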

As an additive display, you are correct: a dim object in front of a light one
won't show well... it really does look a lot like Princess Leia in the
'Obi-Wan, come help me' scene.

There are two ways we found to mitigate this. 1) We allow devs to set a 'black
point', which adds the desired value to all geometry, thereby giving a way to
distinguish 'black' from 'void'. 2) Certain applications benefit a lot from
not rendering slices at all, but instead drawing everything as a normal 3D
scene and then using a shader that reads the depth buffer to determine the
slice of a particular pixel - hence anything in front of something else casts
a 'shadow' into subsequent slices (so it -CAN- occlude in this way, but by
sacrificing the view from the back side). Things like scanned human heads look
very good with this method... they honestly look like ghost heads suspended in
space inside Volume.
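
A minimal sketch of that second technique, in Python over plain 2D lists rather than a real GPU shader (names like `black_point` follow the description above, but the code is illustrative, not Volume's actual implementation):

```python
def slices_from_depth(color, depth, num_slices=10, black_point=0.05):
    """Map a rendered frame into per-slice images via its depth buffer.

    color, depth: same-sized 2D lists; depth normalized to [0, 1).
    Each pixel lands in exactly one slice (the front-most surface),
    so geometry behind it leaves a hole ("shadow") in deeper slices.
    black_point lifts every lit pixel so 'black' stays distinct from 'void'.
    """
    h, w = len(color), len(color[0])
    slices = [[[0.0] * w for _ in range(h)] for _ in range(num_slices)]
    for y in range(h):
        for x in range(w):
            s = min(int(depth[y][x] * num_slices), num_slices - 1)
            slices[s][y][x] = black_point + (1 - black_point) * color[y][x]
    return slices
```

Because each pixel carries only the front-most depth, geometry behind it simply never lands in the deeper slices - exactly the 'shadow' occlusion described.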

Anyway, I hope this answers your questions. (I am the main tools dev.)

~~~
tobr
Thanks for the answers! I think it looks like interesting tech and it could be
fun to play around with it, despite the limitations.

But I really think you should reconsider the messaging around the product. I'd
imagine most of your potential customers would be tech-savvy enough to ask the
questions raised in this thread. Right now your marketing material is making
us suspicious; it looks like you have something to hide.

I think you'd be much better off if you were transparent about the technical
limitations, so as not to be mistaken for another Cicret Bracelet
([https://youtu.be/KbgvSi35n6o](https://youtu.be/KbgvSi35n6o)). Getting that
out of the way would allow you to focus your story on the possibilities, why
this tech is cool (because it is!).

Also, judging by your description some of the material on the site is very
deceptive, such as the animation with the horse. It appears to be very high
resolution, there's no "slicing" even at acute viewing angles, and it has dark
objects occluding lighter ones. That's not really OK.

~~~
dezt
Thanks for the feedback. Knowing precisely what threw you off is super
valuable, and I will directly tell the team members in charge of marketing
about it. I can also tell you for sure that there was no intention to deceive
or misrepresent. In fact, anyone who is in the NYC area is welcome to see and
judge Volume for themselves at our Brooklyn lab. We have a 'library' set up
exactly for that purpose - to let the public come see and play with it! :)

However, I also definitely see what you are saying (and I will push for the
horse in particular to be changed or updated). If there are any other things
that threw you off, please do post in reply.

For anyone interested in dropping by the lab to see Volume for themselves,
feel free to email: future@lookingglassfactory.com (the lab is at the north
tip of Greenpoint, Brooklyn, across the bridge from the Vernon-Jackson stop on
the 7 train).

------
iamleppert
They say it's not a parlor trick, but it's nowhere near true holography or
anything like it. True holography records the object's wavefront as a flat
interference pattern, encoding the phase information from the object (and
thereby a very precise model of its surface). The result is a diffraction
grating which can then be used to replay the identical wave when illuminated
by a suitable reference source of light.

This is the only known way to create true images with real depth of field and
parallax, and requires ultra high resolution to both record and display the
microscopic interference pattern.

There is a good book [https://www.crcpress.com/Ultra-Realistic-Imaging-
Advanced-Te...](https://www.crcpress.com/Ultra-Realistic-Imaging-Advanced-
Techniques-in-Analogue-and-Digital-Colour/Bjelkhagen-Brotherton-
Ratcliffe/p/book/9781439827994) that lays out the mathematical basis on which
the real principle of holography could be applied, given a capable enough
display technology, to re-create an exact replica of the light
field/wavefront that is identical to what we see in real life. Of course, a
single static image of about 3"x5" at sufficient resolution contains over
100 GB of data. The information-carrying capacity of light is amazing.
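
That figure is easy to sanity-check. Assuming a pixel pitch on the order of a visible-light wavelength (~0.5 µm - the exact pitch is my assumption), the arithmetic comes out in the right ballpark:

```python
# Back-of-envelope check of the ~100 GB figure for a 3"x5" hologram.
INCH = 0.0254                  # metres per inch
pitch = 0.5e-6                 # wavelength-scale pixel pitch, metres (assumed)
width, height = 3 * INCH, 5 * INCH
pixels = (width / pitch) * (height / pitch)   # ~3.9e10 pixels
gb_at_one_byte = pixels / 1e9                 # ~39 GB at 1 byte/pixel
# A few bytes per pixel (amplitude + colour) lands well over 100 GB.
```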

None of the current VR/AR systems (including MagicLeap) use anything close to
what is required, and we are still a ways off in both display tech and GPU
bandwidth from generating even a single, static image in a consumer product.

~~~
Animats
There's a small company in Austin, Zebra Imaging, which generates CG holograms
on big pieces of photographic film.[1] Those are real holograms. The military
buys them as 3D models in a flat, portable form, and they're sometimes used in
place of architectural models. Some people still take photographic holograms
using a big flash laser. (This was a thing about 40 years ago. One guy still
does it.) Almost everything else called a "hologram" is about as fake as a
"hoverboard".

Someday someone may make a display with light-wavelength-scale resolution and
display real holograms in real time, but that hasn't been done yet.

[1] [http://www.zebraimaging.com/](http://www.zebraimaging.com/)

~~~
anotheryou
They look pretty good with the right lighting, but objects must never be
clipped by the paper's edge. So you have to keep your 3D content inside an
imaginary pyramid on top of the medium.

~~~
Retric
That's just a question of how wide the paper is.

~~~
anotheryou
Practically: if the paper is wider, it will probably lower your viewing angle
and that stuff is expensive!

------
achr2
I saw a comment today on HN showing voxiebox [1], which seems to be a better
volumetric display, unlike this one, which has a very small number of discrete
layers.

[1] [http://www.voxiebox.com/](http://www.voxiebox.com/)

~~~
Animats
Looks like that has about 20 layers. The flickering suggests a mechanical
scanner. A rotating mirror? Vibrating mirror?

~~~
achr2
I think it's a spinning plane with a projector beneath. Their limitation is
the DLP projector's refresh rate. It seems they may also have extra
limitations in colour space, since most DLP projectors use a rotating colour
wheel.

~~~
nom
I have to agree with the parent: it definitely looks like a display or a
projection surface oscillating in the vertical direction.

I'm really curious whether they opted for a display or a projector. A display
is heavy, but it might be possible if they use a vacuum. A projection surface,
on the other hand, can be very light, but you have to calibrate the optics,
and the projector _has_ to use a display panel and not a DLP chip, etc. It's a
really interesting problem to solve.

------
iLoch
I'll start by saying this is a really interesting device, and a very
impressive technological feat. However, I have some major criticisms too:

The only advantage I see to this vs. something like HoloLens is that it
doesn't require a head mounted display. Unfortunately for this project, that's
not as big a barrier for serious companies as they need it to be. In time I
think this "advantage" will also dwindle due to shrinking component size, etc.

The price is also considerable - it's about a thousand dollars cheaper than a
HoloLens (I'm going to keep going back to HoloLens because to me they're
directly comparable.) But the thousand saved also comes with some significant
drawbacks:

\- The notion of collaboration seems to be non-existent with this device - I
don't see them showing how that might work based on their promo clips. Seems
to be single user driven.

\- It has a back-face. How do you render something like text so that it's
readable from all perspectives?

\- Control appears to be limited, with no standard input. HoloLens, by
comparison, tracks the user's hands.

\- Does it have a top on it? At that point, what really separates it from a 3D
TV? Seems like you'd only get limited perspective translation.

I'm having trouble coming up with really practical applications for such a
device that wouldn't be better served by a head-mounted personal device. I
fully support the effort if they find their niche, but coming from a VR/MR
point of view, I can't really see any real advantages. Please enlighten me! I
don't mean to be a downer!

~~~
phedhex
Hey there --

Full disclosure, I'm working with the company right now. I made the Unity
Asset Tutorial Video as well as the rhythm game Rhythm Reach (and am currently
taking a break from prepping for our launch party).

Also, take what I'm going to say w/ a grain of salt as I've been working in
Volumetrics on and off for nearly 10 years with my personal project Lumarca:
[http://www.albert-hwang.com/lumarca/](http://www.albert-hwang.com/lumarca/)
\-- so I've been drinking the Koolaid for a while now...

Volumetric tech is a completely different medium, full of its own pros and
cons. Many of the things people are bringing up in this thread are totally
true and often infuriating when you start working in this space. You get
x-ray vision (for better or for worse). It's impossible to meaningfully
document (video always smashes it back into 2D). A focus on voxels makes
traditionally 2D content, like text, a weird problem. And, to be sure, the
supporting tech for this isn't nearly as advanced as the stuff supporting VR.
In terms of engineering, this creates a tough balancing act between price,
visual fidelity, and scalability.

So -- why do it?

I do it because it's the only form of digital 3d media that exhibits passive
physical presence. It's viscerally present in a way that no other digital
media is. When people see a truly volumetric display for the first time (in
person), 90% of the time they're totally floored. This is true even of my
other volumetric display that had much much worse visual fidelity. The content
is "worldlocked" in a very real way.

You also bring up the very real problem of practical applications. We're
exploring these questions too (medical visualization w/ DICOM integration,
sonograms, games?). While this display is far from perfect, we're hoping that
the low barrier to entry (a little $ + Unity) plus some community presence
will make the environment friendlier for finding those solutions.

Anyhow, if you (or anybody on this thread) visit the NYC area, please drop us
a note and come see it for yourself in person!

------
Animats
The description is rather vague. It claims 2 million voxels and no moving
parts. That's a resolution of maybe 200 x 200 x 50; the depth dimension seems
smaller than the others. Is it some optical system which projects onto 50
plates (maybe fewer), or something like that? It looks like they're somehow
optically remapping a projector to multiple planes at lower resolution.

There have been depth displays before - vibrating mirror devices (a mirror
mounted on a subwoofer driver) and such. Today we have enough GPU power to
drive a volumetric display, at least at the modest resolution of this device.

But it's awfully low-res. Once you get tired of the 3D effect, it's going to
be painful.

~~~
achr2
I think it only has 10 plates.

I like the 'More 3D than that Tupac hologram' claim - it's true, but it uses
basically the same effect, just with 10 layers.

~~~
joezydeco
You can see 10 layers when the cube rotates at 0:40 in the video.

------
trothamel
It would be interesting to find out what the dimensions of this are in voxels.
The 2 million voxel figure makes me think this is based on a 1920x1080
projector.

A graphic on
[https://www.lookingglassfactory.com/how/](https://www.lookingglassfactory.com/how/)
showing the obligatory macbook has its display divided vertically into what
looks like 10 slices - so perhaps 1920x108x10?

That would seem plausible as to what the photos on the site represent, but
some of the 3d renders use a much higher vertical resolution for the display.

(This is just an educated guess, of course.)
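
That guess is numerically consistent with the claimed spec (quick arithmetic of my own):

```python
# A 1920x1080 projector split vertically into 10 slices of 108 rows each:
voxels = 1920 * 108 * 10
# 2,073,600 - just over the advertised "2 million voxels".
```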

------
diimdeep
awful pacing in the video: 70% of the time is headlines, 10% is something
else, 20% actual device, and even that is cut into super small sequences.
instant ADD

~~~
Pitarou
And a strong scent of techno-bullshit.

------
Pitarou
I believe the video is mostly concealing one important fact: the resolution in
the z-axis is very low. That's the only way to explain the fairly high-res
images in the demo when there are only 2 million voxels.

I'd guess the device is made of something like 20 stacked panels, each with a
300 x 300 resolution.

~~~
DougWebb
On some of the images you can see the layers; it looks like there are 10 of
them. So the resolution is more like 600x330x10.

------
mpolichette
This is cool - I can definitely see this getting better over time. It's cool
to see a new approach to displays.

------
mikejmoffitt
Fun idea, but the resolution is absurdly low, and as an emissive display it
won't be able to block light (show dark things) without trickery or some
drawbacks (like tinting the view glass, or requiring all lights to be off).

"people of all ages can come together to experience something that’s future
AF."

Is the target audience "gullible ass"?

------
mistercow
I was hoping to learn a bit more about how it works. They hand-wave about
"lightfolding", but what does that mean?

~~~
Animats
Mirrors.

------
vorotato
This will have niche application until they improve things. Pretty neat
though.

------
ChristianGeek
Why would I choose this over VR?

~~~
Zikes
Only one person at a time can use a VR headset, and they can get a bit
cumbersome.

~~~
glibgil
Two people can use two VR headsets

~~~
hinkley
Right, but it's the same reason I got a RealD television instead of a shutter
frame one. You're not gonna give a roomful of people (especially if they're
children) shutter glasses. Too expensive by half. RealD glasses are easy to
come by and when the dog eats a pair (or a couple, in my case) it's an
inconvenience.

Headsets are great when you're only entertaining yourself. Terrible when you
have friends you're entertaining.

~~~
glibgil
Friends will hate you if you try to entertain them with this shitty device.
It looks awful.

~~~
hinkley
That's fair, but just because version 1 sucks doesn't mean you won't still end
up teasing your friends who don't own version 3.

It could work, even if this prototype doesn't.

------
goranb
Pepper's Ghost display?

~~~
sp332
It looks like multiple such displays stacked up, for true depth. Each of your
eyes will see a slightly different image because it has real parallax, and
each person around a display will see from a different angle too.

~~~
mistercow
"Multiple pepper's ghost displays stacked up" doesn't really make sense.
Pepper's ghost requires a lot of space, and it isn't really stackable.

~~~
sp332
You're right. Each display uses diffuse instead of specular reflections. That
way each pixel that gets projected onto the screen shines in all directions.

------
jdennaho
This is differentiated from 3d projection onto a monitor or vr headset how?

------
alfanick
don't ever mess with my scrolling experience

