
Chip-scale blue light phased array - WaitWaitWha
https://www.osapublishing.org/ol/abstract.cfm?uri=ol-45-7-1934
======
phkahler
>> Large-scale integration of this platform paves the way for fully
reconfigurable chip-scale three-dimensional volumetric light projection across
the entire visible range.

That's what I thought. Do this at wafer scale and you have a fully holographic
display. Curious how they phase synchronize so many light emitters, but the
full paper looks like $$.

~~~
ambicapter
When you say holographic display, do you mean projected in thin air? How would
that even work?

~~~
sp332
No, like a normal hologram, but this one can be animated by changing the light
coming from it.
[https://en.m.wikipedia.org/wiki/Holography#How_it_works](https://en.m.wikipedia.org/wiki/Holography#How_it_works)

------
hwillis
This is quite a cool paper. I haven't read the full thing yet, but any talk of
displays is EXTREMELY premature. This isn't even _one pixel_ - it's steerable
in only one direction. In order to steer in two directions, you'd need to
stack up hundreds of these. That would give you a pixel... one that's at least
a millimeter wide, or 10x wider than a color block on the screen I'm typing
this into. Certainly it would be incredibly expensive to make an entire
display out of this.

It's even pretty premature for lidar: for safety reasons lidar is in the
infrared, usually around 1 micron wavelength, which makes a big difference in
terms of manufacturability and price. One of the biggest problems for solid
state lidar is the beamwidth, which can be a couple degrees or even >10
degrees in some cases. This is .17 degrees, which is a fantastic result, but
still quite large compared to conventional lidar. Top-end lidar has angular
resolution below .1 degrees, and you can integrate data over time to find
edges as sharp as the beamspot, which can give you some really insane
precision. If we used this system, the intense blur over the spot would
require deconvolution and strongly limit precision.
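
A quick sketch of what that beamwidth means in spot size (the ranges here are
illustrative, not from the paper):

    import math

    # Spot diameter ~= range * divergence (small-angle approximation).
    for r in (10, 50, 100):               # range in meters (illustrative)
        for deg in (0.17, 0.1):           # this device vs. top-end lidar
            spot_cm = 100 * r * math.radians(deg)
            print(f"{r:>3} m at {deg} deg -> {spot_cm:.0f} cm spot")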

Back of the envelope math: at .75 m (2.5'), .17 degrees between pixels would
give you ~2.25 mm pixel pitch. Even at that distance, the blurring will be
intense: the middle picture here[1] is semi-accurate. Obviously fidelity
wasn't the goal, and this doesn't represent the limits of the technology!
Don't give up hope. Just expect to see someone demonstrate something that
exceeds the requirement for fidelity before it actually gets made into a
consumer product.
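
Same arithmetic in Python, if you want to play with the distance:

    import math

    # Pixel pitch at distance r for a 0.17-degree step between beams.
    r = 0.75                                  # meters (~2.5 ft)
    pitch_mm = 1e3 * r * math.tan(math.radians(0.17))
    print(f"{pitch_mm:.2f} mm pixel pitch")   # ~2.2 mm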

This paper has similar results to an MIT paper[2], but at a 3x shorter
wavelength. Naturally, a 3x shorter wavelength is way more than 3x harder.
This device is also smaller: the MIT device is ~1 mm by .1 mm vs. ~1 mm by
~10 nm. That's mostly because the MIT device can steer in two directions: the
first direction works with phase shifters just like this device, but the
second direction works by heating up the emitters so that they change size
(genius).

The device here basically just pipes laser light out through the side of the
wafer, which syncs up excellently with the MIT approach. MIT used very long
(500 micron) antennas that have a thin-thick-thin wavy pattern[3]; at each
wide part (IIRC) diffraction causes light to escape at a particular angle.
Changing the length of the wide parts changes the resonance of the light
inside, and therefore the angle light escapes at, and that's how they did
beamforming. Normally this would require atomically perfect manufacturing, but
the long antenna averages errors out (although you still end up with just
under 1 degree of beamwidth). With a suitable intermediate layer you might be
able to plug this work into the MIT antennas and get full steering without
needing to stack arrays.
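
For intuition, here's the standard first-order grating-coupler relation (the
effective index and period below are made-up but plausible numbers, not the
MIT device's actual geometry):

    import math

    # First-order grating emission: sin(theta) = n_eff - lam / period.
    # Tuning n_eff (e.g. thermally) or the tooth geometry shifts theta.
    def emission_angle_deg(n_eff, lam_m, period_m):
        return math.degrees(math.asin(n_eff - lam_m / period_m))

    lam = 1550e-9                                  # MIT worked in the infrared
    print(emission_angle_deg(1.80, lam, 900e-9))   # ~4.5 deg off-normal
    print(emission_angle_deg(1.82, lam, 900e-9))   # a small index change steers the beam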

This is all way better than current commercial solid state lidars, which
mostly use rectangular patch antennas to project lasers. The optical
properties of those systems are really bad. It's a shame there isn't more
money in research like this, because it's necessary to make that final jump.

[1]: https://en.wikipedia.org/wiki/Gaussian_blur#/media/File:Cappadocia_Gaussian_Blur.svg

[2]: https://www.researchgate.net/publication/320235900_Coherent_solid-state_LIDAR_with_silicon_photonic_optical_phased_arrays

[3]: https://spectrum.ieee.org/image/Mjc5NjU5MA.jpeg

~~~
nshepperd
Wouldn't the display application here be something like a scanning AR/VR
headset that constructs an image directly on your retina with a small number
of beams moving very quickly (kinda like an old CRT display)? In which case
you really just need three "pixels" of different colors (assuming it's
sufficiently responsive to high-frequency control).

1/0.17 = 5.9 ppd is about half the angular resolution of the commercial Index
VR headset, so it does still need a bit more progress before it would be
competitive. Or a lot more, given a few additional factors of 2 for the fact
that it's not emitting from directly inside your pupil, and that you probably
don't want adjacent pixels bleeding into each other (FWHM is still... half
maximum, which is quite a lot).
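
(Checking that arithmetic, with the Index figure as a rough assumption:)

    # ppd if adjacent beams are packed one beamwidth apart (assumption),
    # vs. the ~12 ppd implied by "about half" of the Index.
    ppd = 1 / 0.17
    print(round(ppd, 1))    # 5.9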

~~~
hwillis
> Wouldn't the display application here be something like a scanning AR/VR
> headset which constructs an image directly on your retina with a small
> number of beams moving very quickly (kinda like an old CRT display)?

That would only work if the display stays in exactly the same spot (to within
.01 degrees) relative to your eyes. What's it going to do, project a million
images, each the width of your pupil? You'd need extremely good eye tracking.
You can't put this on a contact lens because the substrate is too thick, in
addition to all the manufacturing problems.

> 1/0.17 = 5.9 ppd is about half the angular resolution of the commercial
> Index VR headset, so it does still need a bit more progress before it would
> be competitive.

Factors of two is a massive underestimate. The screen in a VR headset is
waaaaaay bigger than your retina, which is what you're projecting onto here.
Not only that, but you're really concerned with projecting onto the fovea,
which has about 1 "pixel" per 20 microns. At 10 cm (4"), 20 microns of width
would be .011 degrees per pixel. The beam is actually pretty sharp, so halving
the beamwidth would lower the bleed to <<10%, which is fine. Altogether that's
a ~30x improvement in beamwidth.
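
In code (the 10 cm emitter-to-retina distance is my assumption for a
headset-like geometry):

    import math

    # Angular pitch of foveal "pixels" as seen from the emitter.
    pitch_m = 20e-6                  # ~1 foveal "pixel" per 20 microns
    dist_m = 0.10                    # ~10 cm (4") emitter-to-retina (assumption)
    deg_per_px = math.degrees(math.atan(pitch_m / dist_m))
    print(f"{deg_per_px:.3f} deg/px")          # ~0.011
    # Beam should be ~half the pitch to keep bleed <<10%:
    print(f"{0.17 / (deg_per_px / 2):.0f}x")   # ~30x improvement needed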

~~~
nshepperd
That would be what is required for an ideal display. Current VR displays get
closer to 0.1 degrees per pixel, so it would be a ~3x improvement to be
competitive with existing tech, as I said.

~~~
hwillis
No, I'm talking about what is required to get to the _same level_ as current
VR. Beamwidth and pixels per degree are NOT equivalent. Beamwidth causes
adjacent pixels to blur into each other, which is a much bigger issue when
projecting onto the eye because it's a much smaller surface. The equivalent
factor for a display would be the diffraction blur introduced by a pixel.

Back of the envelope math: at 10 pixels per mm, 500 nm light through a
circular aperture[1] has a half-power beamwidth of .0013 degrees, so two
orders of magnitude better than this technology. _That_ is what you'd need for
an ideal display. The bare minimum is just that the blurring at the retina
does not cause pixels three rows over to bleed into each other.
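
The relation, using the NRAO formula from [1] (HPBW ≈ 1.02 λ/D for a
uniformly illuminated circular aperture; the apertures below are just
illustrative):

    import math

    # Half-power beamwidth of a circular aperture, per [1].
    def hpbw_deg(lam_m, d_m):
        return math.degrees(1.02 * lam_m / d_m)

    def aperture_for_hpbw_m(lam_m, hpbw_deg_target):
        return 1.02 * lam_m / math.radians(hpbw_deg_target)

    lam = 500e-9
    print(hpbw_deg(lam, 100e-6))               # 0.1 mm aperture -> ~0.29 deg
    print(aperture_for_hpbw_m(lam, 0.0013))    # ~2.2 cm aperture for 0.0013 deg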

[1]: https://www.cv.nrao.edu/course/astr534/2DApertures.html

------
ISL
Very cool.

OSA press release here: https://www.osa.org/en-us/about_osa/newsroom/news_releases/2020/chip-based_device_opens_new_doors_for_augmented_re/

------
mysterEFrank
Awesome, this could enable cheap high-resolution 3D displays.

------
cryptonector
Solid state LIDAR?

------
madengr
With an aperiodic spacing, I wonder how high the grating lobes are. The paper
is paywalled.

~~~
mNovak
Being aperiodic should mean there's no well defined grating lobes. Basically
each emitter pair has a different spacing and thus different grating lobe
location, so across the whole array they get smeared out to an even sidelobe
level. But yes, wondering what the mainlobe contrast is.. it's also >lambda
spacing so the efficiency is probably poor.
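
You can see the smearing in a quick 1-D array-factor simulation (the emitter
count, pitch, and jitter are made up, not the paper's layout):

    import numpy as np

    # Far-field array factor of a periodic vs. jittered (aperiodic) array
    # with the same mean pitch d > lambda. The periodic array has 0 dB
    # grating lobes; jitter smears them into a raised sidelobe floor.
    lam = 488e-9
    k = 2 * np.pi / lam
    n, d = 64, 2.0 * lam                      # illustrative values

    rng = np.random.default_rng(0)
    x_per = np.arange(n) * d
    x_aper = x_per + rng.uniform(-0.4 * d, 0.4 * d, size=n)

    u = np.linspace(-1.0, 1.0, 8001)          # u = sin(theta)
    def af_db(x):
        af = np.abs(np.exp(1j * k * np.outer(u, x)).sum(axis=1))
        return 20 * np.log10(af / af.max())

    for name, x in (("periodic ", x_per), ("aperiodic", x_aper)):
        db = af_db(x)
        sidelobes = db[np.abs(u) > 2 * lam / (n * d)]   # mask out the mainlobe
        print(name, f"peak sidelobe: {sidelobes.max():+.1f} dB")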

