
How the LIDAR tech GM just bought probably works - deepnotderp
https://arstechnica.com/cars/2017/10/a-deep-dive-into-the-tech-behind-gms-new-lidar-on-a-chip-company/
======
Animats
Interesting. Frequency-modulated continuous-wave LIDAR is easy to do as a one-
point device. Many such devices have been built. But they're usually short
range, such as the discontinued Swiss Ranger, and don't reject ambient light
as effectively as pulse systems.

Being both eye-safe and sunlight-tolerant is hard. Eye-safe is easier if you
can increase the diameter of the outgoing beam. Eye safety is measured based
on beam energy through a 1/4" hole (an eye pupil), and if the outgoing beam is
made wider (say an inch) the energy per unit area drops. But the optics become
bigger. Flash LIDAR units emit a spreading beam, and if you can keep people
from getting close to the emitter and staring into it, it's not a big problem.
(It's a distance measuring device, so if it detects something at range < 1
foot, it must cut the power way down.)
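
Rough back-of-envelope of that area argument (a uniform "top-hat" beam profile
and illustrative numbers, not anything from a safety standard):

    # Fraction of a collimated beam's power that fits through a pupil-sized
    # aperture, assuming a uniform beam profile. Illustrative numbers only.
    PUPIL_DIAMETER_MM = 6.35  # the ~1/4" hole mentioned above

    def power_into_pupil(beam_power_mw, beam_diameter_mm):
        """Power (mW) landing inside the pupil aperture."""
        if beam_diameter_mm <= PUPIL_DIAMETER_MM:
            return beam_power_mw  # the whole beam fits through the pupil
        return beam_power_mw * (PUPIL_DIAMETER_MM / beam_diameter_mm) ** 2

    for d_mm in (6.35, 25.4):  # a 1/4" beam vs. a 1" beam
        print(f"{d_mm:5.2f} mm beam -> {power_into_pupil(100.0, d_mm):6.2f} mW into the eye")
    # Widening the beam from 1/4" to 1" cuts the in-pupil power by ~16x.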

Sunlight tolerant is done with a few tricks. Narrow-band interference filters
cut out everything but the color of the beam being used. A pulse LIDAR can
outshine the sun for a nanosecond. Continuous-wave systems could in theory
operate below the noise threshold, but the detector has to not saturate.
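
To put rough numbers on "outshine the sun for a nanosecond" (all values below
are assumptions for illustration, not from the article):

    import math

    # Irradiance on the target from a pulsed laser spot vs. in-band sunlight
    # behind a ~1 nm interference filter. Both reflect back toward the receiver
    # with the same geometry, so this ratio roughly survives to the detector
    # during the pulse. All numbers are assumed.
    PEAK_LASER_POWER_W = 100.0        # peak power of a nanosecond pulse
    SPOT_DIAMETER_M = 0.10            # laser spot size at the target
    SOLAR_IN_BAND_W_PER_M2_NM = 0.8   # ~solar spectral irradiance near 900 nm
    FILTER_BANDWIDTH_NM = 1.0

    laser = PEAK_LASER_POWER_W / (math.pi * (SPOT_DIAMETER_M / 2) ** 2)
    sun = SOLAR_IN_BAND_W_PER_M2_NM * FILTER_BANDWIDTH_NM
    print(f"laser on target:  {laser:8.0f} W/m^2")  # ~12,700 W/m^2
    print(f"in-band sunlight: {sun:8.1f} W/m^2")    # ~0.8 W/m^2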

Anyway, there are lots of technologies that can work. Continental bought
Advanced Scientific Concepts' technology, which is known to work fine; it just
cost too much when each unit was built by PhDs in Santa Barbara. Continental
is a huge auto parts maker; making a million of something cheaply is what they
do.

~~~
joshvm
The Mesa Imaging SwissRanger is an amplitude-modulated system, not a
frequency-modulated one. It's also not really LIDAR, as it uses LED
illumination, but AMCW LIDAR is more or less the same principle. A better
comparison might be laser tape measures - these often use phase-modulated
LIDAR rather than direct ToF. You can also buy phase-shift scanning systems
from people like Leica Geosystems.

In a ToF camera (at least some of them - see lock-in pixels), each pixel is
sampled four times per cycle to detect the phase offset from the outgoing
illumination. The SwissRanger was one of the first time-of-flight cameras. The
reason it suffers from short range is phase ambiguity: the lasers are
modulated at around 30 MHz, which gives a wavelength of 10 m or so. The
ambiguity distance is half this (5 m). LIDAR systems historically got round
this by using multiple modulation frequencies for different distance scales.
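
A minimal sketch of that lock-in / four-bucket phase recovery with the 30 MHz
figure plugged in (the sampling convention below is one common one, not
necessarily the SwissRanger's exact scheme):

    import math

    C = 299_792_458.0  # speed of light, m/s
    F_MOD = 30e6       # 30 MHz amplitude modulation, as above

    def distance_from_samples(a0, a1, a2, a3):
        """Recover distance from four samples taken 90 degrees apart in the
        modulation cycle; phase wraps every half-wavelength of the modulation."""
        phase = math.atan2(a3 - a1, a0 - a2) % (2 * math.pi)
        return phase / (2 * math.pi) * C / (2 * F_MOD)  # round trip halves the range

    print(f"unambiguous range at 30 MHz: {C / (2 * F_MOD):.1f} m")  # ~5 m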

This tech is now everywhere thanks to Microsoft buying Canesta.

~~~
mikeha
Leica Geosystems used to sell phase-based scanners. These units were actually
rebadged Zoller+Frohlich scanners. But the partnership ended a few years ago
when Leica introduced the Pxx series. Today the company only makes time-of-
flight scanners.

source: I work at Leica Geosystems.

------
iamleppert
Wondering why no one has thought of using an acousto-optic modulator?
[https://en.wikipedia.org/wiki/Acousto-
optic_modulator](https://en.wikipedia.org/wiki/Acousto-optic_modulator)

It's solid state, has a very fast response time, is super cheap to
manufacture, and can steer a beam with high precision. AOMs are also very
efficient, achieving >90% first-order diffraction efficiency. The AOM material
can be solid and thermally insulating (such as glass), so you can get very
high power.

In the paper cited, they produced a device that had an effective power of 4 mW
with active cooling. How are they going to scale this up, taking into
consideration that first-order diffraction depends on the angle of incidence
of the incoming beam? The steering mechanism is basically a tunable
diffraction grating, which means the active area is going to be tiny.

Most high-power diffraction gratings work by expanding the surface area of the
grating, so it would still be very expensive to create a large tunable
waveguide/grating (think how much it costs to make a CPU die), making the
whole cost savings of solid state a moot point. You could make an array of the
devices, but you're going to need a ton of them to get the necessary power and
a reasonable deflection angle range.

~~~
thatcherc
From the Wikipedia page:

> Consequently, the deflection is typically limited to tens of milliradians.

This is the big drawback with AOMs. They're very precise, but the angle
through which they steer beams is small. You would need hundreds of scanners
to scan a full circle under the best conditions.
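
Quick sanity check on "hundreds", assuming ~20 mrad of usable deflection per
device:

    import math
    PER_AOM_RAD = 20e-3  # "tens of milliradians" per AOM
    print(math.ceil(2 * math.pi / PER_AOM_RAD))  # ~315 devices for a full circle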

AOMs are interesting though in that they provide a way to modulate the
frequency of the light they deflect. This is how I've seen them used before:
as a very precise way to introduce small frequency shifts into a laser beam
for physics research.

------
deepnotderp
I have a genuine question for all the LIDAR experts here: all the optical
phased array research I've seen, such as the MIT, UCB and DARPA "SWEEPER"
work, only achieves a range of something like 2 meters, or 10 meters after
lots of work. The research papers I've seen also mention that accurate,
high-power phase shifters are a challenge, as is getting the optical phased
array to a high enough output power. On the other hand, Quanergy seems to
claim that their standard optical phased array on silicon is getting them 200
meter range and really good specs. Do they have something that academia
doesn't have? Also, Strobe's advisor's thesis says that their LIDAR only has a
range of ~2 meters; isn't that totally unworkable for SDC LIDAR?

~~~
joshvm
The Quanergy system looks like it's doing conventional pulsed time of flight
(maybe FMCW, but I don't know). They're operating in a frequency band that
lets them put out a lot of power without risking eye safety, which means long
range. Pulsed LIDAR is basically limited only by signal to noise, so provided
you get some photons back and you know they're 'yours', you can measure long
distances.
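
The underlying relation is just echo delay (nothing Quanergy-specific; the 200
m figure is the range claimed above):

    C = 299_792_458.0  # m/s

    def range_from_delay(echo_delay_s):
        return C * echo_delay_s / 2  # out and back, so halve it

    print(f"{range_from_delay(1.33e-6):.0f} m")  # a ~1.33 microsecond echo ~= 200 m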

~~~
deepnotderp
The DARPA SWEEPER was also pulsed lidar, and it only had a range of 2 meters.
And the problem really isn't the regulations for power output at the
wavelength, but rather that the optical phased array itself can't put out the
necessary power, regulations notwithstanding.

------
madengr
I assume LIDAR is needed since the AI isn't advanced enough to do depth
perception? If I can drive with two eyeballs, then I'd think a circular
camera array would be plenty. Maybe this is what Tesla plans on using (in
combination with RADAR).

~~~
dljsjr
While LIDAR and stereo cameras (as well as RGBD depth cameras like the Kinect)
effectively do the same thing, LIDAR typically has a much longer effective
range while keeping high resolution.

To give you an example, this[1] commercially available sensor head can give
great stereo depth resolution at around 10 m, but the included LIDAR unit is
good out to around 30 m.

LIDAR also has the advantage of operating on a different portion of the EM
spectrum. It can sometimes be better behaved in situations where there might
be interference at visible wavelengths but not at the LIDAR wavelengths, which
are typically IR (e.g. extremely bright sunny days).

I think that there's a place for both in vehicles as a sort of redundancy.
While they both mostly do the same things, they do them in different ways, and
if LIDAR can be made cost effective then having both is a huge gain.

[1]: [https://carnegierobotics.com/multisense-
sl/](https://carnegierobotics.com/multisense-sl/)

~~~
ajross
That sensor you linked looks to have its two cameras about 10 cm apart. A car
is easily 1.5 m wide, giving a very cheap 15x increase in range for far
objects and beating the LIDAR quote by a ton (and of course you can have
multiple camera pairs for different distance ranges).
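
Back-of-envelope with the usual stereo depth-error relation, dz ~ z^2 * d_err
/ (f * B) - the focal length and disparity error below are made-up but
typical-ish numbers, not that sensor's specs:

    def depth_error_m(z_m, baseline_m, focal_px=1000.0, disparity_err_px=0.25):
        """Approximate depth uncertainty at range z for a given stereo baseline."""
        return (z_m ** 2) * disparity_err_px / (focal_px * baseline_m)

    for b in (0.10, 1.5):  # ~10 cm sensor-head baseline vs. a car-width baseline
        print(f"baseline {b:4.2f} m -> depth error at 30 m: {depth_error_m(30, b):.2f} m")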

I'm with the grandparent: I genuinely don't understand the obsession with
LIDAR in this space. It's complicated and fiddly, and seems to be competing
with an "obvious" solution involving $3 camera parts.

~~~
dljsjr
More cameras still don't fix the interaction with ambient light, and LIDAR
units like the ones produced by Velodyne or Ibeo have ranges out to 200 m.[1]

Again, I think they both have their uses. If LIDAR can be made not so
complicated and fiddly, I think it brings a lot to the table.

[1]: [https://autonomoustuff.com/product/ibeo-lux-
standard/](https://autonomoustuff.com/product/ibeo-lux-standard/)

~~~
ajross
Sure, it could. But again its competition is cheap camera hardware that can be
had for a few bucks and that outperforms the human eyeballs that we _know_ are
safe enough to drive cars on real roads.

I don't doubt that LIDAR _can_ work with some development. I'm just shocked
that it seems to be the default position in the industry and want someone to
explain this to me in a way that makes sense.

------
generj
One of the advantages of solid-state LIDAR has to be an increase in
reliability.

Nobody wants to buy a car which requires a new LIDAR to be installed after
100,000 miles for the sum of $7,000.

Hopefully, once production starts and yield rates increase, the unit costs can
shrink enough to make integration feasible in lower-end products like cell
phones or laptops. I think there are a lot of cool applications for LIDAR
which have been blocked by the current expense.

~~~
JoblessWonder
It will probably lead to the advent of the "car as a service" sales model,
although the pushback from traditional dealers will be huge.

~~~
ivl
Traditional dealers push back against anything and everything, so that's only
to be expected.

------
alex_duf
Out of complete curiosity, and knowing nothing about these technologies:

How do these systems avoid interference with other cars? Say ten of the exact
same car, with the exact same model of LIDAR, are on the same street or at the
same crossing. Can I expect some noise to be received by the sensor?

~~~
qxamak
Yep. Every sensor will receive the output from the other emitters. This
problem is actually pretty similar to the problem of sharing the wireless
signal spectrum - how do all the cell phones in one area work at the same
time? There are a few standard approaches to solving these problems:

\- Frequency Division: each detector uses a different frequency/wavelength.
This approach is very simple to implement, but the usable range of wavelengths
for LIDAR is fairly narrow. So we'd run out of choices pretty quickly.

\- Time Division: each detector is allocated a different operating time slot,
so that only one is active at a time. For cell phones, this is relatively easy
to implement because they all connect to a central system that can coordinate
the timing. But for cars, this would be more difficult since there's no
central system that links the detectors on different cars together (although
they could still communicate with each other through other means).

\- Code Division: each detector's output is pulsed in a unique pattern so that
the reflected signals return that same pattern; the processor can then reject
any patterns that it didn't send out (a toy sketch of this follows below).
This approach is much more complex, but doesn't suffer from many of the
drawbacks of the other solutions.

Code division is by far the most widely used approach here, but frequency and
time division play a small role as well, since different
manufacturers/detectors use different wavelengths and not all detectors are
operated at exactly the same time (the time needed to send and receive a
single signal is extremely short).
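
Toy illustration of the code-division idea (not any vendor's actual scheme):
each unit fires in its own pseudorandom pattern and correlates the received
signal against it, so another unit's pulses mostly wash out:

    import random

    random.seed(0)
    N = 256
    my_code = [random.choice((0, 1)) for _ in range(N)]     # our pulse pattern
    other_code = [random.choice((0, 1)) for _ in range(N)]  # interfering lidar

    DELAY = 37  # our echo comes back shifted by 37 time bins
    received = [0.0] * N
    for i, bit in enumerate(my_code):
        received[(i + DELAY) % N] += bit   # our own echo, delayed
    for i, bit in enumerate(other_code):
        received[i] += bit                 # the other car's pulses, uncorrelated

    def correlate(shift):
        return sum(received[(i + shift) % N] * my_code[i] for i in range(N))

    best = max(range(N), key=correlate)
    print(f"strongest correlation at shift {best} (true delay was {DELAY})")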

~~~
alex_duf
thanks!

------
baybal2
I genuinely don't understand the obsession with LIDARs in the autonomous
vehicle community: millimetre-wave radar is late-seventies tech, does the job
many times better, for less money, and can be made by an electronics
engineering undergrad from Radio Shack parts.

Commies had them in such abundance that they put millimetre-wave imagers (and
that was in the seventies) on things as cheap as vision aids for tank drivers,
field guns, single-shot ATGMs, and even small arms.

~~~
dllu
I don't see how you can say millimetre wave radar "does the job many times
better".

State-of-the-art 79 GHz automotive radar: 0.2 m range uncertainty, 5 degree
azimuth angular uncertainty, dozens of points per second.

Spinning lidar: <0.05 m range uncertainty, <0.1 degree azimuth angular
uncertainty, a million points per second.

Radar definitely has its uses as it can see through weather conditions and
tell velocity, but lidar is the vastly superior sensor otherwise.

~~~
baybal2
Those look surprisingly close to the specs of a Panasonic-made radar.

That is far from what is currently considered advanced for mm-wave radar in
the 93-95 GHz range: 0.5 degrees.

Google "GPU Acceleration of SAR/ISAR Imaging Algorithms" and see how much you
can get with a regular high-bandwidth 10 GHz radar and computing power in the
tenths of gigaflops.

~~~
rmonroe
SAR generally relies on the assumption that your target is not changing. I
question how well it would work in a complicated environment like a road
scene. In my opinion, the single-shot nature of LIDAR is more robust. But it's
possible that SAR techniques have advanced since my work in it 3 years ago.

------
teemwerk
Skimming those papers and reading about the silicon tunable lasers was a bit
mind-blowing.

I studied optics a bunch in college but never managed to work it into my
electrical engineering job. Might be lame, but I didn't have any clue this
kind of integration was possible yet; it's certainly quite exciting. Probably
the coolest thing I've read about in quite some time.

------
samstave
OK, I have a question, but don't shit on me if you think it's stupid:

Wouldn't any of these smart cars be able to know the physical volumes of any
number of cars, including themselves, as well as communicate with each other
and share their velocity, direction, volume, and intentions so as to have a
hive mind of the moving bodies?

What if you also just had a beacon which reported this information to other
cars over a short-range network where the driver is human, so the smart cars
all know where the human-driven cars are?

Then, in addition, you collect all the point data from lidar, subtract the
known data points ("a Prius with this shape, mass, and volume, moving at that
speed"), and over time you have built out a complete point cloud of the
static, non-variable topology?

Rather than spending $75,000 on one lidar periscope, spend $75,000 on
developing a beacon such that all cars are telling each other where they are,
their mass/volume, and their speed?

~~~
grzm
(Assuming I'm understanding you correctly,) if the issue were only other
vehicles, this might be feasible. However, the world includes much more than
just the other vehicles on the road, and the road and its environs aren't
static.

While I can imagine something similar to what you're proposing being a part of
the overall system, I suspect there will always be the need for something
vehicle-based that "sees" the environment.

~~~
samstave
Yeah, I meant in addition to, not instead of.

And I was saying that these beacons should be just a part of the car and super
cheap. The beacon just says "hey, I'm a Prius and this is my speed, direction
and location."

But you'd have to make them completely anon - we already have too much
activity tracking.

So the devices should cycle through random IDs that just spray out the specs
of the vehicle, but change the ID frequently, so a smart car would only know
that it sees various cars - not that "Jim is just ahead on the left."

------
kuwze
I wonder who is going to buy Luminar[0].

[0]: [http://www.businessinsider.com/peter-thiel-backed-austin-
rus...](http://www.businessinsider.com/peter-thiel-backed-austin-russell-
launches-luminar-lidar-startup-2017-4)

------
perlpimp
I see a number of beams going forward. I wonder, however, what this tech can
do to avoid side collisions; a number of my friends have been involved in
those, and they are particularly horrifying - a perp running a red light and
ramming you in the side. Schematically, it can be an interesting problem to
solve too.

------
tyingq
So "no moving parts" by leveraging optical phased arrays and prisms.

I wonder how much their patents restrict others from going the same general
route.

It does seem like the only obvious truly "no moving parts" solution. MEMS
still has moving parts; they are just tiny.

------
burntrelish1273
How would this be better than / supplement 0 lux-capable machine vision?
Rangefinding? Doppler shift relative velocity?

If the benefits aren't at least 3x better, it might not be marketable enough
to justify the added complexities and costs.

------
datenwolf
In other words: swept-source Fourier-domain OCT.

