As others have mentioned, it requires pushing quite a number of simultaneous views of the hologram through a hidpi display. As a result, the perceived resolution is not very high and the holograms look a bit fuzzy.
It is right now probably the best out-of-the-box way to interact with holograms, especially in a shared environment. HoloLens can't share holograms by default, and even if an app has implemented sharing, the holograms can't be touched. Meta glasses have some touchability thanks to their depth sensor, but again there's no easy shared way to interact with a hologram.
I think AR like the Looking Glass is underrated; they were very smart to use natural interaction instead of gestures or a mouse/wand. That said, I don't see it competing with AR glasses long-term.
The major issues with these are the limited viewing angles and the enormous bandwidth needed both to render the individual points of view and to actually transfer them to the screen. Heck, a lot of computer games have problems generating stereoscopic (i.e. 2-image) content at the 60 or 90fps required by VR headsets such as the Rift or Vive these days. And these guys want to push 45 distinct images at 60fps?
Good luck with that, especially at that ridiculous price for a tiny screen.
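To put rough numbers on it (panel resolution and color depth here are my assumptions, not from any spec sheet), a quick sketch in Python:

    # Back-of-the-envelope bandwidth for a multi-view display.
    # Assumptions: 4K panel, 45 views, 60 fps, 24-bit color.
    views = 45
    fps = 60
    width, height = 3840, 2160        # assumed panel resolution
    bits_per_pixel = 24

    # The panel link itself only carries the native pixel rate...
    panel_gbps = width * height * fps * bits_per_pixel / 1e9
    print(f"panel link: {panel_gbps:.1f} Gbit/s")      # ~11.9

    # ...but naively rendering every view at full panel resolution
    # before resampling into the lenticular layout costs 45x that.
    render_gbps = panel_gbps * views
    print(f"naive render: {render_gbps:.0f} Gbit/s")   # ~537

The saving grace is that the 45 views share the panel's physical pixels, so per-view resolution (and render cost) drops accordingly.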
These guys offer a 4k, 50" screen:
But there are ways to mitigate it somewhat. It's possible to use a Gaussian or random distribution of light rays so that you end up with only about 10 or so rays per pixel on average, instead of the 45 here.
But yes, expect to need a massive increase in bandwidth for light-field holographic displays. That includes headset-mounted VR, where you could finally have a display without the limited field of view you get from current VR headsets. A VR headset can focus the light rays toward the range of each eye, which can also cut down on bandwidth.
Source: I work in light fields.
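To illustrate the sparse-distribution idea (the counts and layout are illustrative, not any real display's parameters):

    import numpy as np

    # Sketch: instead of lighting all 45 directions under every
    # lenslet, fire each direction with probability 10/45 so a
    # lenslet lights ~10 rays on average; the rest get interpolated.
    rng = np.random.default_rng(0)
    n_lenslets = 1_000_000    # illustrative lenslet count
    total_dirs = 45           # directions per lenslet
    avg_active = 10           # target average rays per lenslet

    mask = rng.random((n_lenslets, total_dirs)) < avg_active / total_dirs
    print(mask.sum(axis=1).mean())   # ~10.0 active rays per lenslet

That mask is where the rendering savings come from; filling in the unlit directions is the interpolation question discussed below.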
Anti-aliasing techniques are common elsewhere in 3-D graphics and can be used just as well in light fields.
Rendering for 2D and rendering for a light-field display are two different beasts -- but I'll own the fact that I'm not formally trained in this and there may be subtleties -- or even obvious signal-processing facts -- that I'm getting wrong. (But if I'm wrong, I'd love to know how, for my own edification.)
It doesn't need that at all. Light fields can be interpolated like anything else, just like Bayer-filter demosaicing for color on camera sensors or 4:2:2 chroma subsampling on the signal side. And if you're doing 3-D rendering, you can match rays exactly to the display's distribution, provided the renderer knows what that distribution is.
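As a sketch of how that interpolation could look (the view layout and the linear blend are my simplifications; a real system would warp by disparity rather than cross-fade):

    import numpy as np

    def interpolate_view(rendered, k):
        # rendered: {view_index: HxWx3 image} for views we rendered;
        # k: index of a skipped view between two rendered neighbors.
        # Blend the nearest neighbors, analogous to demosaicing.
        lo = max(i for i in rendered if i < k)
        hi = min(i for i in rendered if i > k)
        t = (k - lo) / (hi - lo)
        return (1 - t) * rendered[lo] + t * rendered[hi]

    # e.g. render every 4th of 45 views and synthesize the rest:
    rendered = {i: np.zeros((320, 240, 3)) for i in range(0, 45, 4)}
    view_3 = interpolate_view(rendered, 3)   # 25% view 0, 75% view 4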
Interpolation is always going to reduce quality, but it's better than aliasing, so there's going to be a trade-off analysis that needs to be done. I don't know what the results of that would be, so this is all theoretical.
Furthermore, if you're interpolating rays, you're necessarily not doing what you originally proposed, which is to only light up a (random or pseudorandom or evenly distributed) subset of the pixel display elements, presumably to save on rendering cycles.
Let me just say, more generally, that intuition trained on 2D doesn't apply directly to light fields.
I assume the author means the book '1984' by George Orwell, and I can only conclude that the author has never read it and does not know what it is about. Not every dystopian story is 1984. A far more logical reference would have been 'Ready Player One' by Ernest Cline, which is in fact about a dystopia in which people _are_ 'geared up' all the time.
I recommend both books: the first is a work of genius and, sadly, very relevant today; the second is very entertaining and might offer insights into the development of VR in the near future.
The second had me cheering the hero on. It's a great action book with loads of geekery and nerdiness from my childhood.
My recommendation: Read 1984 first, so you have good feelings to end on.
Truly holographic displays will emerge once we can control light interference in the display.
Could you explain this a bit more please to give an idea of the path to get there?
In holography, coherent light scattered off the object (the object wave) interferes with a reference wave. This interference creates a pattern that can be recorded by film. The trick is that if you develop the film to get the black-and-white pattern, you can shine the reference wave onto the film and it interacts with the interference pattern such that the object wave is reconstructed. A hologram is thus an exact recording of the light emitted by the object. This is what the display tries to emulate by offering 64 different images, but it's not quite the same. Since the interference pattern is just a greyscale image, one could use an ultra-high-resolution LCD to synthesize it -- there have been demonstrators of that, but I am not aware of a large holographic display so far.
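For the curious, the textbook math behind that (this is the standard holography derivation, nothing specific to this display): with object wave O and reference wave R, the film records the intensity

    I = |O + R|^2 = |O|^2 + |R|^2 + O R^* + O^* R

and re-illuminating the developed pattern with R gives

    R I = R (|O|^2 + |R|^2) + |R|^2 O + R^2 O^*

where the |R|^2 O term is the reconstructed object wave; the other terms are the zero-order beam and the conjugate 'twin' image.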
The way you described it, it doesn't sound that hard to me:
- Create interference pattern
- Record interference pattern
- Shine ref light onto pattern to recover emitted light
If done perfectly, would this yield a convincing hologram (what I might try to describe as the "visual sense impression of a real 3D object being present")?
What are the major limitations in the current technology? You said it might be possible with a high-end screen; does that mean there is hologram tech out there? I have never seen any demonstrated.
Is there some sort of information processing problem that software could solve on the interference pattern? Or is this more a physics problem -- maybe we do not have the materials that can do the steps required?
From my photography background: yep, ISO 10 would be very tough to work with.
That's been tried before, in many ways. The first try was a vibrating mirror. There's a flat rotating mirror system from FakeSpace. It's not bad; you can walk around it. Move vertically, though, and the illusion breaks down. There's a scheme with gas ionized by intersecting laser beams. That's very low-rez, but truly volumetric.
Eventually, someone may come up with a real hologram system with decent resolution. A research group at MIT built one, but it was very low resolution and single-color. It's not impossible. But this isn't it.
But I'm saddened if this is really the Apple II version of this technology, because if it takes another forty years I probably won't live to see the mature version. I always imagined there would be sensors on the floor and ceiling, not that we'd be watching it in a glass box, but if that's how it has to be, I'm OK with it.
The same shift nowadays should take only about a year and a half (40 / 26), which is about how long it would take if you bought 64 Looking Glasses, built a DIY array, and used ML to construct a 3D video from a 2D one, a la https://en.wikipedia.org/wiki/3D_reconstruction_from_multipl...
Now that GPUs can reliably generate 60 FPS, this is the next step to push that technology, because you'll need 45 x 60 FPS worth of rendering for the same quality. And then you'll push the 45 number higher.
(yes, I know the 60 isn't visible and has to do with control input).
In my mind, you can render $x pixels at 60fps, so the pixel budget for each distinct view is $x / 45.
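Plugging illustrative numbers into that (the panel resolution is an assumption):

    # Per-view pixel budget if a GPU can fill $x pixels at 60 fps.
    x = 3840 * 2160      # assume a 4K panel: ~8.3 Mpixels per frame
    views = 45
    per_view = x // views
    print(per_view)      # 184320 pixels, roughly a 572x322 image

So each distinct view ends up at roughly SD resolution, which lines up with the fuzziness people describe.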
With raytracing you will almost get this for free.
That depends entirely on what you're rendering. You've been able to reliably hit over 60 FPS for decades -- for some content. And you can still render at .000000001 FPS for other content.
Saying "GPUs can generate 60 FPS" is presenting the situation as if framerate were a function of hardware only, whereas the reality of the situation is that it depends at least as much on the software.
I remember the moment when Crytek released a real-time raytraced demo of one of their games running on the best hardware available at the time. It felt like the hardware was finally capable, and now it would be a slow march to the end of raster graphics. Then 4K displays came along, totally exploded the number of pixels to render, and that was pretty much the end of that talk, at least for a couple of decades.
For another, the worst-case cost is $x/45, but I would think that might improve as 3D programmers figure out optimizations for multi-angle view rendering.
From the videos they look higher res than that.
It must be fewer than 45 unique views, with smearing between them.
I guess in their case the microlenses aren't even round, since 45 is 5*9: presumably 9 horizontal x 5 vertical pixels per microlens.
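If it really is a 9-wide x 5-tall block of panel pixels per microlens (a guess), the panel-pixel-to-view mapping would be something like:

    def view_index(x, y):
        # Guessing a 9x5 block of panel pixels under each microlens;
        # the sub-pixel position within the block picks the view.
        u = x % 9            # horizontal position under the lens
        v = y % 5            # vertical position under the lens
        return v * 9 + u     # view id in 0..44

    print(view_index(13, 7))   # u=4, v=2 -> view 22

(Real lenticular displays usually slant the lens array relative to the pixel grid to hide the structure, so the actual mapping is likely more involved.)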
It could also make a good in-store display.
For us, DVI bandwidth was the limiting factor in delivering 40-50 views at reasonable framerates (besides raw GPU computing power), so our display actually had 8(!) DVI inputs. That also gave us a natural interface for distributed rendering, supporting up to 8 GPUs. In most cases though, one monster PC with 3 GPUs and 5 DVI outputs was enough to produce interactive framerates.
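For a sense of why it took 8 inputs: single-link DVI tops out around 3.96 Gbit/s of pixel data (165 MHz x 24 bit), so the arithmetic looks roughly like this (per-view resolution and framerate below are illustrative, not our exact numbers):

    # Aggregate pixel rate for ~48 views vs. single-link DVI capacity.
    views = 48
    w, h, fps = 1024, 768, 30   # assumed per-view resolution and rate
    bits = 24

    total_gbps = views * w * h * fps * bits / 1e9   # ~27.2 Gbit/s
    dvi_gbps = 0.165e9 * bits / 1e9                 # ~3.96 Gbit/s
    print(total_gbps / dvi_gbps)                    # ~6.9 links needed

Hence 8 links with some headroom, and a natural way to shard views across GPUs.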
It also looks like it works best with a black background.