
Contrary to Musk's claim, Lidar has some advantages in Self Driving technology - gordon_freeman
https://arstechnica.com/cars/2019/08/elon-musk-says-driverless-cars-dont-need-lidar-experts-arent-so-sure/
======
Animats
What gets me is the obsession with "identifying objects". The first thing you
want for self-driving is an elevation map of what's ahead. If it's not flat,
you don't go there. You don't need to know what it is. This is what LIDAR is
good at.
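
A toy sketch of that flatness test (all names and thresholds here are
made up, not from any real system): mark a grid cell as drivable only if
the elevation change to its neighbors is small.

```python
import numpy as np

def drivable_mask(height_map, max_step=0.05):
    """Mark grid cells as drivable if local height changes are small.

    height_map: 2D array of elevations (meters) ahead of the vehicle.
    max_step: largest height difference (m) still treated as 'flat'.
    """
    # Height difference between each cell and its right/down neighbor.
    dx = np.abs(np.diff(height_map, axis=1))
    dy = np.abs(np.diff(height_map, axis=0))
    flat = np.ones(height_map.shape, dtype=bool)
    flat[:, :-1] &= dx <= max_step
    flat[:-1, :] &= dy <= max_step
    return flat
```

A pothole or curb shows up as a step in the map, and the cells around it
get flagged as not drivable, without ever knowing what the object is.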

Radar is not yet usable for ground profiling. Radar returns from asphalt at an
oblique angle just aren't very good. Nor is there enough resolution to see
even large potholes. Maybe someday, with terahertz radar. Not yet.

Now, you can go beyond that. If you're following the car ahead, and it's
moving OK, you can assume that what they just drove over was flat. If the road
far ahead looks like the near road, and the elevation map of the near road
says it's flat, you can perhaps assume that the road far ahead is flat, too.
That's what the Stanford team did in the DARPA Grand Challenge.

Identifying objects is mostly for things that move. Either they're moving now,
or they're of a type that's likely to move. This is where the real "AI" part
comes in - identifying other road users and trying to predict or reasonably
guess what they will do.

Collision avoidance based on object recognition has not worked well from
Tesla. They've hit a street sweeper, a fire truck, a crossing tractor-trailer,
a freeway barrier, and some cars stalled on freeways. All big, all nearly
stationary. This is the trouble with "identify, _then_ avoid".

~~~
TeMPOraL
Even when trying to identify objects, I don't get the push for trying to
understand what they are. The way I see it (and I might be very wrong here),
you should be able to get good enough results by identifying things around you
that are solid (that's the most important part), and tracking their velocity.
This should be doable without any kind of understanding of _what_ the objects
are. Then the car's control system should keep the speed and direction such
that the car can always be stopped before any of the tracked objects hit it.
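
A back-of-the-envelope version of that rule (names and numbers invented,
ignoring friction, curvature, and the object's own motion): cap the speed
so the car can stop within the distance to the nearest tracked solid
object.

```python
import math

def safe_speed(distance_m, decel=6.0, reaction_s=0.2):
    """Max speed (m/s) that still lets the car stop within distance_m.

    Solves distance = v*reaction_s + v^2 / (2*decel) for v, assuming
    constant braking deceleration after a fixed control delay.
    """
    t, a, d = reaction_s, decel, distance_m
    # Quadratic in v: v^2 + 2*a*t*v - 2*a*d = 0; take the positive root.
    return a * (-t + math.sqrt(t * t + 2.0 * d / a))
```

With ~40 m of clear road and hard braking at 6 m/s², that caps you a bit
over 20 m/s; the controller just brakes whenever the current speed
exceeds the cap.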

Having done that, you can play with identifying lanes of traffic and sidewalks
and other "normal" features, and potentially ignoring objects there as long as
they behave according to expectations. But I'd think the first order of
business would still be ensuring that you don't run into solid objects,
whatever they are, and whether or not they're moving.

~~~
coldtea
> _Even when trying to identify objects, I don't get the push for trying
> to understand what they are._

Without that you don't know lots of things.

1) Whether they might move or are completely stationary (e.g. a pole vs a
motorcycle).

2) How they move.

3) Which you're better off hitting if you need to swerve to avoid another car
(is it better to hit the fruit stand or the 10 year old boy?)

4) Whether they tell you something (e.g. traffic signs, traffic lights, a
traffic cop directing you elsewhere, for starters).

5) Whether they represent some danger and you need to keep a distance (e.g. a
bus with open doors, from where someone might come out at any minute).

~~~
dsfyu404ed
>3) Which you're better off hitting if you need to swerve to avoid another car
(is it better to hit the fruit stand or the 10 year old boy?)

This is a non-issue in practice. Just brake and don't turn the wheel. It's
a naive approach and it leaves a lot of accident-avoidance potential on
the table in most situations, but it's what most people do and expect
everyone else to do. Trying to do anything else is impossible to justify
to the public, because everything you have to say about why you shouldn't
panic-stop for an object in the middle of the freeway when there's an open
lane beside you will be drowned out by people telling you you'll crash the
car if you dare touch the steering wheel in an attempt to not crash.

~~~
linuxftw
You're 100% wrong. Many people, especially big-truck drivers, have swerved
off the road to avoid killing people and animals. If you'd ever taken a
defensive driving course, you'd know that it's always quicker to steer to
avoid a collision than it is to brake. This takes a supreme amount of
context and situational awareness.

~~~
dsfyu404ed
>You're 100% wrong. Many people, especially big-truck drivers, have swerved
off the road to avoid killing people and animals. If you'd ever taken a
defensive driving course, you'd know that it's always quicker to steer to
avoid a collision than it is to brake.

You are 100% failing to properly parse my comment. I'm expressly saying that
braking with zero regard for the situational details is not ideal but it
accomplishes the goals of a self driving car.

* be as good or better than humans at not crashing

* react to situations in a manner similar to and predictable by humans

* not make any important stakeholders more likely to get sued

* actually be implementable with current or near future technology

>This takes a supreme amount of context and situational awareness.

Which is hard enough to teach to people, let alone an AI.

~~~
coldtea
Err, if a self-driving car just brakes in that situation, then it fails
all of the above goals:

1) "be as good or better than humans at not crashing"

It could still crash because of momentum/distance. It could cause a pile-up.

2) react to situations in a manner similar to and predictable by humans

People would swerve depending on the situation. It's extremely common, and the
logical thing to do in many cases.

> _not make any important stakeholders more likely to get sued_

Getting sued depends on the effect of your actions. If you kill/hurt your
passengers, cause a pile-up, hit the person in front, etc. you will get sued.

> _actually be implementable with current or near future technology_

That's irrelevant...

------
loourr
This author missed Musk's point entirely. His argument is that to solve
self-driving you need a deep understanding of your surroundings, which
you can only achieve with visible-light video. That's the real hard
problem to solve; you need cameras to solve it, and if you solve it then
lidar becomes unnecessary.

The doomed part is because if companies are spending all of their energy on
creating neural nets around lidar then they'll reach a local maximum where
they never begin to tackle the much more difficult problem truly needed for
self-driving.

~~~
cromwellian
Seems to me that "deep understanding of surrounding" and "only achievable with
visible spectrum" are contradictory. Visible light is readily attenuated,
occluded, and reflected.

The first time Tesla runs over a kid chasing a ball into the street because it
couldn't see him between the cars, this will be readily apparent.

Seems to me that Tesla is in the business of selling cars, while other
self-driving companies are interested in AV for ride sharing or trucking.
The latter have different requirements for styling and cost than the
consumer case, so Musk has several limitations on the sensor suite he can
include in a Tesla.

What he's doing is trying to argue a $5k system with cheap cameras and crappy
radar coverage is all that is needed, because a full no-blind-spot multi-
spectrum system would both cost too much AND likely make the car look ugly.

Two people have already been killed, and several injured, by Tesla autopilot
due to blind spots.

~~~
davidgould
Can you explain how a lidar sees a child hidden between cars? I was under the
impression that lidar was line of sight.

------
QuantumGood
Musk's point has always been to combine vision with radar, instead of
Lidar. I'm amazed that this combination is usually overlooked in
discussions of Tesla/Lidar.

~~~
llbowers
I’m sure this is a dumb question to anyone with knowledge in this field but is
there any reason to not use all three together?

~~~
danepowell
Price. I worked briefly with teams building self driving cars in the past.
Their budget for sensors far exceeded the cost of the car itself.

~~~
stefco_
Of course, they were presumably using a mostly-stock car with custom niche
sensor products, so that comparison would be a bit more favorable in
production.

------
Traster
I think I would find Musk's claims more compelling if he had actually sat
down with an expert and discussed in detail why he believes what he
believes. Instead we're sitting here discussing a couple of kooky quotes
with no real analysis. Even on the face of it:

>"They're all going to dump lidar," Elon Musk said at an April event

We know which companies are building self-driving cars, we know what
technologies they're using and we know how long they've been working on it.
Have we seen any signs that any of these companies are dumping LIDAR? I
would've thought it'd be pretty big news, right?

~~~
coldtea
> _Have we seen any signs that any of these companies are dumping LIDAR? I
> would've thought it'd be pretty big news, right?_

In fact, why would they not have a multi-faceted system, keeping LIDAR and
alternatives?

~~~
CrazyStat
Because LIDAR is expensive. If camera-based neural networks eventually get
good enough that LIDAR provides minimal additional value, they will drop it.
This is what Musk is betting on. We're not there yet for sure, and it's not
clear to me yet whether that's a realistic goal for the 5-10 year time frame.

~~~
hef19898
Musk is betting on a lot of things lately. I would rather have him focus on
one or two hard problems and solve these first.

~~~
whatshisface
Musk's job isn't to solve problems, it's to find people who can solve
problems, put them together, and pay them.

~~~
coldtea
Musk's job is to secure investment (and government deals) with promises and
signs of hope.

------
ThatGeoGuy
> For example, one of the distance estimation algorithms used in the Cornell
> paper, developed by two researchers at Taiwan's National Chiao Tung
> University, relied on a pair of cameras and the parallax effect. It compared
> two images taken from different angles and observed how objects' positions
> differ between the image—the larger the shift, the closer an object is.

The shift or disparity between sensors doesn't really matter. We've known
since the 70s that wider convergence angles beget better object-point
estimation. Yet even the KITTI dataset doesn't attempt to take advantage
of this, and uses two rather average cameras with a (relatively) short
baseline of 0.06m (see:
[http://www.cvlibs.net/datasets/kitti/setup.php](http://www.cvlibs.net/datasets/kitti/setup.php)).
That's 6cm!!! You have the entire width of the car to separate these
cameras by.
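
To put rough numbers on it (the focal length below is an assumed,
plausible value, not KITTI's actual calibration): rectified stereo depth
is Z = f·B/d, so the first-order depth error grows as Z²/(f·B), i.e. it
shrinks linearly as the baseline B grows.

```python
def depth_from_disparity(f_px, baseline_m, disparity_px):
    """Rectified stereo pair: Z = f * B / d."""
    return f_px * baseline_m / disparity_px

def depth_error(f_px, baseline_m, depth_m, disp_err_px=0.5):
    """First-order depth uncertainty: dZ ~= Z^2 / (f * B) * dd."""
    return depth_m ** 2 / (f_px * baseline_m) * disp_err_px

# Assumed 700 px focal length, half-pixel matching error, 50 m target.
print(depth_error(700, 0.06, 50))  # ~30 m error with a 6 cm baseline
print(depth_error(700, 1.50, 50))  # ~1.2 m with a car-width baseline
```

Same matching error, 25x the baseline, 25x less depth uncertainty.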

> This technique only works if the software correctly matches a pixel in one
> image with the corresponding pixel in the other image. If the software gets
> this wrong, then distance estimates can be wildly off.

Again, yeah. But the problem is twofold: you need to detect and match
similar points between two images, and the fundamental setup of your
system can limit your precision and accuracy. Use a wider-angle lens with
better convergent geometry. Publications based on the KITTI dataset don't
even address some of the most basic criticisms from photogrammetry.

Which probably explains why LiDAR gives such a distinct advantage in most
of these data sets. You solve two problems:

1) You solve the correspondence problem trivially, because LiDAR doesn't
need to match points between cameras, and there's no baseline/convergence
criterion that the final point precision depends on.

2) Robust geometric data is well-modelled and well-understood, and
provides an easier criterion for machine learning systems (particularly
ones running over KITTI, as in the article) to converge on than stereo
imagery with a 6cm baseline. You get the scale of the system for free,
and your calibration troubles are whisked away, as LiDAR systems tend to
be better-calibrated and more stable than most lens systems or
configurations you'll find in the cheap off-the-shelf cameras that many
autonomous driving startups are using.

I guess I come off a little negative by looking at this, but my first reaction
to Musk saying that nobody should or will want to ever use LiDAR for this is
that he doesn't know a damn thing about what he's talking about.

~~~
tlb
A 6 cm baseline is enough for humans to make adequate distance estimates.

Besides the correspondence problem, a longer baseline makes it hard to keep
the cameras aligned as the vehicle bounces and flexes. You can't mount them
separately to the car -- a chassis can easily twist by a degree or two. So you
need a stiff mounting bar between them, which you can either put outside the
car like a roof mount (ugly, and it gets buffeted by wind) or inside (also
ugly).

~~~
flor1s
Why even limit yourself to two cameras? If I recall correctly multi-view
geometry benefits from having as many cameras as possible.

In the future we will all have walls covered with a checkerboard pattern in
our garage to calibrate the cameras on our self driving cars. :)

------
georgeburdell
The article misses one important point about LiDAR. Frequency-modulated
continuous-wave ("FMCW") variants get velocity information for free via
the Doppler effect. You can't get that information from a camera without
sophisticated image processing, and you can't get it at high resolution
from RADAR. Knowing velocity as well as position is important for
assessing immediate safety threats.
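
The Doppler arithmetic is simple (the default wavelength below assumes a
1550 nm source, a common FMCW lidar band):

```python
def radial_velocity(doppler_shift_hz, wavelength_m=1.55e-6):
    """Radial velocity from the measured Doppler shift: v = f_d * lambda / 2.

    The factor of 2 accounts for the round trip of the light; a target
    closing at 30 m/s at 1550 nm shifts the return by roughly 39 MHz.
    """
    return doppler_shift_hz * wavelength_m / 2.0
```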

There's a good write-up by the co-founder of SiLC, a silicon photonics LiDAR
startup, here:

[https://www.photonics.com/Articles/Integrated_Photonics_Look...](https://www.photonics.com/Articles/Integrated_Photonics_Looks_to_Advance_Safety_for/a64791)

------
m463
I agree with Musk and see lidar where ray tracing was decades ago. It was an
expensive impractical "holy grail".

A set of lidar sensors right now costs as much as a car.

Maybe at some point in the future one of these lidar startups will come out
with an inexpensive (maybe solid-state) version to augment the current
sensors. Or maybe by that time vision will have gotten much better.

~~~
georgeburdell
The cost of lidar is going to plummet for exactly the reason at the end
of your post. Several startups (SiLC, Aeva, etc.) are using silicon
photonic integrated circuits. Several more early-stage startups have MEMS
or phased-array prototypes for completely solid-state chips.

~~~
kjksf
The thing is, the self-driving wars will likely be over before economies
of scale for lidar production happen.

If non-lidar systems don't work, then the cost of lidar, even at $10k, is
irrelevant.

If you can make a non-lidar system work better than humans (i.e. with
quality acceptable to regulators) before the cost of lidars drops
significantly, then lidars lose on economics.

And the cost of lidar won't drop significantly quickly. The next step-change
in price would probably require mass production i.e. production of hundreds of
thousands of units per year.

Even if lidar robotaxis happen before non-lidar ones, initially they'll
be made in tens of thousands of units per year, leaving a couple of years
for non-lidar tech to catch up.

------
simcop2387
I wonder about the noise aspect of this when you've got 20 cars nearby
also using lidar. Is there a point where these kinds of active sensors
begin interfering with each other? I know it isn't lidar, but Xbox
Kinects used to interfere with each other if you had multiple in one room.

~~~
ThatGeoGuy
That really depends on the modality of the LiDAR. For the record, the
Kinect is technically LiDAR, since it is using "Light Detection And
Ranging."

As for why the Kinect interferes with other units, it's because of the imaging
modality (structured light). The sensors interfere with one another because
they're largely dependent on detecting a specific pattern of projected dots.
If you detect too many dots or if the image gets saturated, you start to have
a problem.

In the case of traditional scanning LiDAR (e.g. terrestrial LiDAR in the
sense of a Leica, Faro, or Velodyne unit), this isn't necessarily the case.
Sure, if the two lasers point exactly at each other for a given point over
their sweep, then at that point the lasers will saturate the measurement and
that specific measurement will not be useful. In time-of-flight based,
mirrorless systems, this matters less than one might think. I can see this
being consistently a problem when scanning with Velodyne tech since they tend
to only rotate about one axis, but for other types of LiDAR I don't think it
would be as big of a deal. Granted, then you have to worry about scanning
speed and how that affects the final results.

Overall, I don't think that unit interference is going to be a significant
factor in adoption. LiDAR is a broad technology and it's not easy to make
assumptions about the entire industry based on a couple implementations or
modalities.

~~~
flor1s
As an aside, the original Kinect and Kinect for Xbox 360 use different
technologies for 3D detection. The original Kinect projects an infrared
pattern and then detects the deformation of the pattern to determine
distance/shape. The Kinect for Xbox 360 uses more traditional time of flight.

~~~
acd10j
You're confusing the Kinect for Xbox 360 with the Kinect 2, i.e. the
Kinect for Xbox One. The Kinect 2 uses time of flight; the original
Kinect and the Kinect for Xbox 360 are the same device and use a
structured-light sensor.

------
streetcat1
To sum up. If I use Lidar - Lidar is good. If I do not use Lidar - Lidar is
bad.

~~~
nostrademons
That's what each of the companies involved claim, but here you additionally
have benchmark results from independent researchers who aren't building self-
driving cars that say Lidar is good.

~~~
streetcat1
Everything you see is a piece of marketing (on both sides). This
"research" was drafted, vetted, reviewed, and approved (by at least 5-10
people).

------
siliconc0w
Another possible argument is simply pragmatic - by not including LIDAR, Tesla
can actually sell cars and therefore be in the best position to get to L5.
They'll have the biggest fleet, the most data, the most technical expertise
and experience, etc. I mean they're actually selling something people are
buying. It may or may not be the correct technical choice but it seems to be
easily the correct business choice.

------
vecplane
One of the major disadvantages of LIDAR is poorer performance in rain, snow,
and fog, which are quite common in many parts of the world. I'm surprised that
isn't being discussed more.

~~~
rhacker
Is that any better with radar or cameras? What happens when there's a
large droplet over a lens? Does the signal diffuse in a heavy rainstorm
and/or completely drown out the return ping?

------
cameldrv
IMO all of the cost and power arguments are currently red herrings. An
autonomous car or truck is worth at least $100k more than a
non-autonomous one, so whatever it takes sensor-wise is worth it.

Where Musk's argument makes sense is that if lidar can't be made to work
in all weather, putting effort into making algorithms for it may be a
dead end. There are a number of companies with various approaches to
making lidar work better in bad weather, though.

------
dmh2000
A big drawback is not performance but that most lidars are expensive and
fragile.

~~~
MobileVet
This was true... until recently when Luminar announced a fairly affordable
(for an automobile BOM) ~$1k version to be available this year.
[https://www.engadget.com/2019/07/12/luminar-affordable-
lidar...](https://www.engadget.com/2019/07/12/luminar-affordable-lidar/)

~~~
acd10j
The article clearly mentions that these won't be ready for production
before 2022; until then it's all speculation.

------
linsomniac
How well do Lidar systems work when there are 5, 10, 20, 50 cars with Lidar
units all in a small area (like a busier intersection in my smallish town)?

I wonder if Musk has done tests of this sort of scenario and whether
that's what he's basing this judgment on.

------
Havoc
Surely a blend would yield the best all weather solution? Even if the lidar
isn't top shelf.

------
killjoywashere
Musk said he is a fan of LIDAR, the Tesla has forward-facing LIDAR, and he
even helped develop a LIDAR for the Dragon docking system. This is a hit job.

