
Lidar Is a Crutch - lawrenceyan
https://lidarmag.com/2019/01/27/elon-musk-is-right-lidar-is-a-crutch/
======
twtw
Good discussion. The arguments that, because humans can do it with two eyes,
any approach using more than cameras is wrong frustrate me to no end.
Technology and engineering aren't about imagining the minimum theoretical
requirements; they're about building stuff that works with what is available.
People can navigate cross-country without GPS, but it's not wrong to put GPS
in cars.

It seems like the teams working on autonomous vehicles need all the crutches
they can get, and it seems like a good idea to lean on superhuman sensors to
make up for subhuman "cognition."

I do wish, though, that people would stop equating all radar with the adaptive
cruise control style radar - imaging radars are a thing that exist, and can be
competitive with lidar.

~~~
threeseed
Humans also have 576 MP quality eyes with a computational engine behind them
the likes of which humanity isn't even remotely close to replicating.

The idea that you can throw a cheap camera in a car and think you can achieve
the same always seemed strange to me.

~~~
sliken
I read the justification for that 576MP number and it looks wildly optimistic.
Sure, eyes can differentiate two lines close together... when at the center of
vision. Then they multiply that across 120 degrees horizontal and vertical.

Unfortunately resolution of your eyes drops off quickly from the center of
vision. Yes you can move your eyes to focus on different things, but so can
cameras.

Sure, the mental image you build is high resolution, but it isn't that
directly related to reality. A good example of this is the numerous optical
illusions that depend on you looking at one point of an image, building a
mental representation of the whole image, then finding a conflict whenever you
move your eye.

Link that seems to be the source of the 576MP number: [https://art-
sheep.com/the-resolution-of-the-human-eye-is-576...](https://art-
sheep.com/the-resolution-of-the-human-eye-is-576-megapixels/)

Not what I'd call a particularly scientific explanation.
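
For reference, the arithmetic behind that 576MP figure (as the comment
describes it): take foveal acuity of roughly 0.3 arcminutes per "pixel" and
extrapolate it across a 120-degree square field. A quick check in Python, with
those assumptions made explicit:

    # The 576MP claim: foveal resolution extrapolated to the whole field.
    arcmin_per_pixel = 0.3   # ~foveal acuity, assumed to hold everywhere
    field_deg = 120          # horizontal and vertical extent, per the claim
    pixels_per_side = field_deg * 60 / arcmin_per_pixel   # 24,000
    print(f"{pixels_per_side ** 2 / 1e6:.0f} MP")         # ~576 MP
    # The catch, as noted above: that acuity exists only at the fovea.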

~~~
TorKlingberg
> Unfortunately resolution of your eyes drops off quickly from the center of
> vision.

True, but we can move our eyes around very quickly. Cameras in self-driving
cars are usually fixed.

Also, human eyes have better dynamic range.

------
sbr464
I'm sure this is being considered, but I'm curious what the strategy is for
handling signal/laser interference in dense traffic situations, in the future
when all cars have 3d lidar installed.

For example, assuming 3 lidar units on the front of each car, in a large
intersection, say two lanes of cars turning in front of 4 lanes of cars
pointed at them, waiting at a red light. That would be hundreds of potential
reflection/collisions per second.

The user manual for an older Hokuyo lidar with a single scanning plane (for
example, mounted on a forklift to avoid hitting a wall) recommends offsetting
the mounting angle slightly across the fleet to avoid interference.

~~~
etrautmann
Current state-of-the-art LIDAR systems have unbelievably tight windows in
terms of space, time, frequency, and pulse coding, which collectively allow
for excellent background rejection. In practice, this actually isn't that much
of an issue. Sunlight and retroreflectors are more difficult challenges to
deal with than other LIDARs causing interference.

More specifically, each detector is looking through a soda straw at a tiny
spot in space, and rejecting anything that doesn't arrive within a narrow
range of time, which also has to have the right light frequency and something
like a pseudorandom code sequence that gets matched-filtered against itself.

In practice, you'd have to send the right wavelength of light to the right 1cm
circle in space, send it during just the right few hundred nanoseconds, send
it with the right pulse sequence (which one could determine but would be
unlikely to happen by chance). Even if it happens once, it would have to
happen many times at the imaging rate of the sensor to cause major issues...
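
To make the pulse-coding point concrete, here is a toy sketch in Python (not
any vendor's actual scheme; codes, amplitudes, and noise levels are invented):
a receiver correlates its input against its own pseudorandom code, so even a
stronger return carrying a different code produces only a small response.

    import numpy as np

    rng = np.random.default_rng(42)
    own_code = rng.choice([-1.0, 1.0], size=256)    # our transmit sequence
    other_code = rng.choice([-1.0, 1.0], size=256)  # another car's sequence

    trace = rng.normal(0.0, 0.5, 4096)     # detector noise
    trace[1000:1256] += own_code           # our echo, starting at sample 1000
    trace[2500:2756] += 3.0 * other_code   # a stronger interfering pulse train

    # Matched filter: correlate against our own code. The echo carrying our
    # code stands out; the interferer's code averages toward zero.
    matched = np.correlate(trace, own_code, mode="valid")
    print("strongest response at sample", matched.argmax())  # expect ~1000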

~~~
vonmoltke
Is there an equivalent to the Skolnik books[1] for LIDAR? I keep applying
radar principles to LIDAR, but it seems like the function is significantly
different if it includes things like a "pseudorandom code sequence".

[1]
[https://books.google.com/books/about/Radar_Handbook.html?id=...](https://books.google.com/books/about/Radar_Handbook.html?id=s35UAAAAMAAJ&source=kp_book_description)

~~~
Junk_Collector
Skolnik's books are good, but they are pretty old and focus mostly on the
hardware/transmission side of radar. The code sequencing is on the baseband
signal processing side and is a direct result of IQ modulation being a thing
in vector signal processing. Lots of modern radar designs use the same style
of pseudorandom encoding to prevent interference between units. It's a major
push in automotive radar right now (the 77 GHz band), for instance.

Think of it as encoding your MAC address on each packet you transmit
wirelessly so you can tell which signals come from which source.

~~~
vonmoltke
I don't recall such a scheme being used on either system I worked on. However,
skimming the paper in your sibling's post it would be something done in the RF
front end, which is probably why I never encountered it. The closest I got to
the front end was pseudo-raw IQ from the pulses. It was also probably tied up
in the anti-jamming measures, so it wouldn't get wide circulation on the
programs.

~~~
Junk_Collector
Yes, it's predominantly a baseband technique, although it has also been
written about a fair bit in the literature as an efficient power-spreading
alternative to a classic chirp. In that case, it is predominantly done to help
propagation, leveraging the extensive work done in communications on "noise-
like" signals.

In the case of automotive radar, using it as an identifier to prevent
interference is appealing because it allows each radar to use the full 6 GHz
of channel bandwidth, which greatly improves the performance of the radar
despite the fact that there are tons of same-channel independent devices
operating. Each one would effectively be a jammer otherwise.

------
ec109685
That Darpa reference in the article is weak. Because in 2007 the state of the
art needed Lidar, that means that it's still a necessary crutch in 2019? Image
recognition has made unbelievable strides in the interim.

As I watch Tesla's "Autopilot" do its thing, it seems its limitations are less
about object recognition and more about what to do with the data it has. Lidar
won't help it intelligently handle a merge of two lanes into one, or seeing a
turn signal and slowing down to let the person in, or seeing a red light and
knowing to start slowing down before it recognizes the car in front of it
slowing down, or knowing exactly what lane to enter on the other side of a
tricky intersection without a car in front of it, or knowing that car X is
actually turning right so you shouldn't start following it if you're going
straight, or having the car see a clear path ahead and having it accelerate to
the speed limit when all other cars are stopped or going much slower in
adjacent lanes, or moving to the left to allow a motorcycle to lane split,
etc., etc. It's still great, but there are a ton of things to learn.

Maybe once there is no human in the driver's seat, you'll need the extra
precision that lidar provides, but there are big gaps before we even get there.

~~~
KaiserPro
A lidar sensor will almost certainly _always_ be lower latency than any sort
of image processing engine.

A lidar gives you a stream of 3d points in the order of megavertices a second.

The processing pipeline for any visual system adds at least one frame period
of the camera (the faster the camera, the less light it can gather) plus the
GPU transfer time (if you are using AI), then processing.

This means you are looking at ~200ms of latency before you know where things
are.
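
For a sense of where a number like that could come from, here is a
back-of-the-envelope budget with assumed stage times (illustrative guesses,
not measurements from any real system):

    # Hypothetical per-frame latency budget for a 30 fps camera pipeline.
    stages_ms = {
        "exposure + sensor readout (one 30 fps frame)": 33,
        "ISP / debayer": 10,
        "host-to-GPU transfer": 15,
        "neural-net inference": 50,
        "postprocessing / tracking": 20,
    }
    for name, ms in stages_ms.items():
        print(f"{name:45s} {ms:4d} ms")
    print(f"{'total (best case)':45s} {sum(stages_ms.values()):4d} ms")
    # Add queuing and a dropped frame or two of jitter, and ~200ms is
    # plausible.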

Lidar is a brilliant sensor. Maybe it will be supplanted by some sexy
beamforming THz radar, but not in the near future.

~~~
kevin_thibedeau
Cameras built using dedicated hardware similar to what lidar systems are using
would not have that sort of latency. You don't have to receive a full frame of
data before it can be processed if you're not using generic COTS cameras.

~~~
KaiserPro
Sure, you can have the thing spitting out scan lines as fast as you like;
that's not the issue.

You are limited by shutter speed, as you know, even if the shutter isn't
global. (Lidar is too, but we'll get to that in a bit.)

Any kind of object/SLAM/3d-recovery system will almost certainly use
descriptors. Something like ORB/SURF/SIFT or others requires a whole bunch of
pixels before it can actually work.

Only once you have feature detection can you extract 3d information (either
in stereo or monocular).
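
A minimal OpenCV sketch of that descriptor step (the file name is a
placeholder): each ORB keypoint is computed from a whole patch of pixels, and
only after matching descriptors across frames or a stereo pair can you
triangulate 3d structure.

    import cv2

    img = cv2.imread("frame.png", cv2.IMREAD_GRAYSCALE)
    orb = cv2.ORB_create(nfeatures=1000)
    # Each keypoint summarizes a ~31x31 pixel patch, not a single pixel.
    keypoints, descriptors = orb.detectAndCompute(img, None)
    print(len(keypoints), "keypoints")
    # A lidar return, by contrast, is a 3d point from the start.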

A datum from a lidar has intrinsic value, as it's a 3d point. Much more
importantly, it doesn't drift; even the best SLAM drifts horribly.

Lidar will be superior for at least 5 years, with major investment possibly
10.

------
Animats
_Todd Neff is the author of "The Laser That’s Changing the World", a history
of lidar._ It's a PR piece for his book.

The terahertz radar people are slowly making progress. Good resolution, almost
as good as LIDAR. Better behavior in rain. Beam steering with no moving parts.
Plus you can tell who's carrying a gun.

Musk is in trouble. No way is Tesla close to real automatic driving. Does
Tesla even have reliable detection of stationary obstacles in a lane yet?
Their track record on partial lane obstructions is terrible. So far, Teslas on
autopilot have driven into a street sweeper, a stalled van, a fire truck, a
crossing semitrailer, a lane divider....

------
01100011
What's wrong with hyperspectral imaging? Active IR systems work just fine at
night. Passive night vision systems aren't exactly cheap but can probably
allow optical navigation using ambient starlight. Throw thermal imaging in the
mix and you have a ton of data to infer objects from.

~~~
dmitriid
What about the sun which never rises above, let's say 10 degrees in winter in
the Nordics?

~~~
JshWright
Self driving cars will have to solve the problem of "snow" long before they
have to worry about that...

------
sethbannon
Interesting counter argument: [https://spectrum.ieee.org/cars-that-
think/transportation/sel...](https://spectrum.ieee.org/cars-that-
think/transportation/self-driving/why-our-companys-trucks-wont-carry-lidar)

------
baybal2
Mr. Musk is wrong, and behind his error is, perhaps, his lack of technical
aptitude.

> Musk and almost everyone else in the business recognize self-driving cars
> and trucks as the future of automotive transportation.

Most of those people think that some kind of "AI" program will somehow think
over the inputs and tell the car where to drive. They all have too high
expectations for the technology coming under the word "AI" these days.

They all do so without understanding what those "AIs" actually are, and
without realising that such programs aren't capable of any "cognition."

Until that changes, any talk of "self-driving" is premature. This does not
preclude, however, coming improvements in cruise control and computerised
collision avoidance.

~~~
iknowstuff
Elon Musk does not know what AIs are? Are you serious?

~~~
baybal2
I am. He never made the impression of being that smart a person. The man began
his career as a so-so game dev and went from one random business to another
(this is a part of his career he doesn't like to be public about) before
PayPal "happened upon him."

He has no substantial CS or technical background. He dropped out of his
physics masters.

Evaluate him on that basis.

~~~
gnulinux
He dropped out of a PhD program in Physics.

Also, he has a startup on AI:
[https://en.m.wikipedia.org/wiki/OpenAI](https://en.m.wikipedia.org/wiki/OpenAI).
Even if you think he's not proficient in the technical aspects of AI, he
definitely has a lot of ideas about the social implications of AI.

------
Aser
> For many, owning a car to commute will make as much sense as owning a cell
> tower to scroll Instagram.

This is total rubbish. I don't believe for a second that taking a self-driving
Uber to work and back every day will be cheaper than driving my own car.

~~~
nathan_long
Why not? Suppose your commute occupies the car for 1/24th of the hours of the
day. In theory you could pay for only that usage.

There would be some markup so the owning company has a profit margin, but the
higher the price is, the greater the incentive for competitors to exist, which
would exert downward pressure. So maybe you'd end up paying 1/20 the price of
owning a car.

If you don't agree, why not?

~~~
wil421
Eventually you will own the car and the costs will decrease dramatically. My
wife’s Corolla is 14 years old, gets 30 miles a gallon, and costs about $250 a
year due to failing sensors and a MAF replacement.

If I had used Uber every time I've used her Corolla over the past 10 years, it
would far outweigh the cost of gas, insurance, and fixing the car.

~~~
bronco21016
You’ve completely left out the cost of the car though. Even over a 14 year
period if you bought the car new for $15k (2005 MSRP Corolla LE) you’re still
at roughly $90/month. Factor in insurance, gas, and continued maintenance and
you’re at a couple hundred a month. Of course that number continues to fall
but eventually you buy a new car. If ride sharing moves to a subscription
model of just a straight $200-300/mo I think it completely obliterates the
market for individually owned small cars used primarily for commuting.

~~~
rimliu
He also left out driving in a car which was not covered in vomit from the
previous rider.

~~~
dmortin
If the car is soiled you'll report it via the app; they send another car and
make the previous user pay for cleaning.

When people learn they have to pay for cleaning/repairs if they don't take
care of the car, they will be much more careful about keeping it clean.

------
iabacu
Not using lidar for self-driving cars is a giant premature optimization (just
like premature automation, etc.).

~~~
sliken
Maybe, but where are all the lidar cars that drive significantly better than a
Tesla?

~~~
darkpuma
Tesla cars, with the sensor packages they currently have, do not have a path
forward to Level 5, despite the marketing claims of Tesla. Claiming that their
cars have all the hardware required for Level 5 is a flagrant lie, which they
can get away with for now because they've given themselves plenty of outs to
avoid ever having to deliver Level 5 capabilities (when they fail they can
blame inadequate software or an unsuitable regulatory environment.)

A word on that software: As it currently stands, they are delivering software
that enables Level 2 capabilities. This is sometimes called "hands on", as in
your hands should remain on the steering wheel and your eyes on the road,
ready to take control in an instant. According to Tesla, drivers are to keep
their hands on the wheel and pay attention to the road; if the driver fails to
do so then they are at fault for any accident. However Elon Musk contradicts
this company policy and has promoted the system as hands off on national
television. Why would he do something so irresponsible? Because
misrepresenting the hardware and software capabilities of his cars helps him
sell cars. He knows the hype for self driving cars is at a fever pitch, and
stretching the truth helps him profit from that hype.

[https://www.businessinsider.com/elon-musk-breaks-tesla-
autop...](https://www.businessinsider.com/elon-musk-breaks-tesla-autopilot-
rule-2018-12)

~~~
leesec
Well no one knows how to get to level 5 since it doesn't exist yet. But there
doesn't seem to be any theoretical reason why maps+vision+radar can't scale to
level 5.

~~~
darkpuma
Their radar doesn't have the angular resolution to do anything other than
adaptive cruise control, and while they boast 360˚ camera coverage, whatever
stereoscopic data they are getting from those cameras is evidently
insufficient to detect a large firetruck parked in the middle of the street.

Now consider MobilEye, a subsidiary of Intel and a major player in the field
of camera/radar driver assist technology. Tesla _was_ using MobilEye tech,
until MobilEye terminated that relationship because they believed Tesla was
being irresponsible with how far they were pushing MobilEye's tech. MobilEye
had a financial incentive to see Tesla succeed with a camera/radar only
solution, and continues to have a financial incentive to downplay the
necessity of LIDAR. But do they? No. Instead you've got MobilEye's CEO Amnon
Shashua talking about the virtues of a combined LIDAR/radar and camera/radar
solution, while trash talking Elon Musk for being irresponsible.

When you consider who is saying what and what their financial incentives are,
it becomes clear that Amnon Shashua is an ethical person and Elon is a car
salesman who is making _technically_ unfalsifiable claims about the
capabilities of the hardware in Tesla cars to profit from automation hype.

------
offtheradar
I'm not sure that he meant lidar is a crutch in the way that this article
represents. Those tiny wavelengths that allow lidar to provide tremendous
detail also make it unreliable in rain and snow. Until that problem is solved,
and I haven't heard that it has been, lidar dependent cars either 1) can't
drive in the rain or 2) must be able to operate reliably in the rain using
technology other than lidar.

Believe it or not, cars will need to be able to drive in the rain, so if
you're going to eventually need to build a car that will work reliably without
lidar then why not build that to begin with?

It looks like most SDC companies are going with lidar because it is faster and
easier, but if it only covers 90% of use cases then that does sound like a
"crutch".

------
gpm
I'm pretty sure a crutch isn't a crutch-crutch but a body-crutch.

Duct tape is a crutch-crutch when applied correctly.

------
cameldrv
Maybe Lidar is a crutch. For a lot of things, a crutch is an adequate
substitute for a non-broken leg. The trouble with crutches is that if you want
to dance, or play baseball, or ride a bike, you can't do it. I can tell you
how I've gotten a much bigger annotated dataset, and if I keep collecting,
soon I'll be as fast as a normal person at walking down the sidewalk, but
people want to do more than just walk down the sidewalk.

~~~
close04
Just because it's a crutch doesn't mean it shouldn't be used. It means you
should use it until you have a tool that can do better (your healed leg?).

As far as I can tell, Musk insists walking unassisted with a _broken_ leg is
better than using a crutch because a _good_ leg is better than a crutch. Sure,
but he's not providing a good leg, he's providing a broken one. It's like
telling people to drag their broken leg today because 3 months from now it
will be better than a crutch. And he's not taking this route because
_today's_ cars do better with normal cameras than with LIDAR (future cars
might, but what good is that today?), but because it's cheaper while still
allowing big claims.

Use the right tool at the right time. In the meantime develop the next tool
and start using it _when it becomes appropriate_.

------
trixie_
Lidar can create a great 3d representation of an environment, but then you're
back to the same problem as cameras. You need some sort of AI to identify
objects in the data. So the question is how much of an advantage does lidar
give you in object identification? It's hard enough to identify objects with
confidence in 2d images. Where is the confidence level at for identifying
and/or discerning 3d objects with AI?

~~~
sliken
Sure, lidar gives you way less resolution. Take the Velodyne Alpha Puck for
instance: 300k points/sec. If you are moving at 60 mph, one second = 88 feet.
So the puck spreads 300k points over 88 feet of highway.

The Tesla system has 10 cameras, but I believe only two of them look forward.
I believe they are 1280x960 @ 30 FPS, or 36M pixels/sec. But there are two of
them: 73M pixels/sec. And each pixel is in color (a lidar return is just a
distance).

So the Tesla system has WAY more information about the environment; granted,
distance has to be inferred, but it also has radar to help with that.

Additionally, being inherently more similar to eyesight, a car using cameras
is likely to get along with human drivers better: slowing down when it's foggy
or rainy, seeing at similar ranges to humans, and being able to use color for
additional context. Is that a UPS truck or an ambulance? Is that a reflection
of a police car with its lights on or just a window reflection? Is that a
boulder or a wind-filled trash bag?
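
Spelling that comparison out with the same figures quoted above (treat the
specs as assumptions):

    # Rough sample-rate comparison: two 1280x960 cameras @ 30 FPS vs a
    # 300k point/sec lidar, for a car doing 60 mph.
    lidar_pts_per_s = 300_000
    cam_px_per_s = 2 * 1280 * 960 * 30      # ~73.7M pixels/sec
    ft_per_s = 60 * 5280 / 3600             # 88 ft/s at 60 mph
    print(f"camera/lidar ratio: {cam_px_per_s / lidar_pts_per_s:.0f}x")
    print(f"lidar points per foot of travel: "
          f"{lidar_pts_per_s / ft_per_s:.0f}")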

~~~
yokaze
> So the tesla system has WAY more information about the environment, granted
> distance has to be inferred, but it also had radar to help with that.

I'd argue the contrary. Intelligence is primarily not about the amount of
data, but about the amount _and_ quality of the data you receive. If I had a
magic sensor giving me obstacles in segmented form, that would be a couple of
KB, and it would beat any other sensor on the market.

Inferring distance from stereo images has its own failure modes, and they are
not as easy to account for as LiDAR's. LiDAR also gives you reflectivity, so
you will be able to differentiate between a UPS truck and an ambulance.

> Is that a reflection of a police car with its light on or just a window
> reflection?

Fun thing: to my knowledge, reflections are a major unsolved problem for
vision. They are easier for LiDAR, as you can rely on the distance
measurement, and the reflected object will appear somewhere outside of any
reasonable position (e.g. underground, behind a wall). Depending on the lidar,
the glass might even register as a second (actually primary) return.

Yes, you need cameras (likely color) to be able to recognise any light-based
signalling (traffic lights, ambulance/police lights...), so LiDAR is not a
panacea. But having the lidar tell you that there is a window and that the
police car is behind it is likely vastly more robust.

Also, the difficulty is that you have to see arbitrary objects on the road and
possibly stop for them. As long as something is larger than maybe a couple of
centimeters (or an inch), it will show up on the LiDAR; with stereo vision,
you need a couple of pixels of texture to infer it.

~~~
sliken
I've worked with lidar data a fair bit in a VR environment. It can be quite
hard to tell what's going on in any kind of complex environment. The data is
so sparse, and the datasets I was working on were static.

300,000 points per second... if you are trying to figure out what's going on
in 1/20th of a second, that's only 15,000 points. Assuming you're scanning 3
lanes (3.7 m each) out to 100 meters, say 3 meters high, that's 3330 cubic
meters. And since a spinning lidar spreads those points over 360 degrees, you
end up with about 2 points per cubic meter. Not exactly going to be easy to
tell a bicycle from a motorcycle, or an ambulance from a UPS truck.

From what I can tell, machine learning has reached near-human levels of object
identification on images; it's not nearly as competitive on things like sparse
monochrome point clouds.

At 65 MPH, to be able to avoid something you need some lead time, which means
distance. The lidar data I've seen is pretty sparse that far out. Of course,
the sexy, almost-real-looking detailed landscapes from lidar come from tripod
mounts and long sample times.

Which leads me to the relevant question: do you have any reason to think that
machine learning will handle lidar at 180 feet of range (2 seconds at 65 mph)
better than a pair of color cameras running at 1280x960 @ 30 FPS?
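
The volume arithmetic above, spelled out with the same assumed numbers:

    # Point density over a 3-lane, 100 m long, 3 m high box in 1/20 s.
    points = 300_000 / 20               # samples in 1/20th of a second
    volume_m3 = (3 * 3.7) * 100 * 3     # about 3330 cubic meters
    print(f"{points / volume_m3:.1f} points/m^3 if every point landed there")
    # A spinning lidar spreads its points over 360 degrees, so only a
    # fraction fall in that box - hence the ~2 points/m^3 figure above.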

~~~
leetcrew
layman here, does lidar necessarily have to sample the whole environment
uniformly?

I ask because, as I understand it, humans actually have quite poor visual
acuity through most of our FOV, with a small very precise region near the
center. the visual cortex does some nifty postprocessing to stitch together a
detailed image, but it seems to me that human vision is mainly effective
because we know what to pay attention to. when I'm driving, I'm not constantly
swiveling my head 360 degrees and uniformly sampling my environment; instead,
I'm looking mostly in the direction of travel, identifying objects of
interest, and taking a closer look when I don't quite understand what
something is.

is it possible for a lidar system to work this way? maybe start with a sparse
pass of the whole environment at the start of a "cycle", and then more densely
sample any objects of interest?

~~~
ezrast
Lidars generally operate by rotating a laser that pulses at a fixed rate, using
optics to sweep the beam up and down to get a reasonable vertical FoV 360
degrees around the car. They output a stream of samples - basically a
continuous series of timestamped (theta-rotation, phi-rotation, distance)
tuples - that software can reconstruct into a point cloud.

But! The lidar data is useless by itself since the car is moving through space
at an unpredictable rate. Each sample has to be cross-referenced by timestamp
with the best-estimate location of the car in order to be brought into a
constant frame of reference. This location estimation is a complex problem
(GPS and accelerometers get us most of the way there but aren't quite high-
fidelity enough without software help) so it can't be done onboard the lidar.

So to do what you suggest, the lidar would need a control system that allows
its operational parameters to be dynamically updated by the car. But what
parameters? Since the laser is already pulsing at least hundreds of thousands
of times per second, there's probably not much room for improvement there
without driving up cost, and if we could go higher we'd just do that all the
time anyway. The only other option would be to slow down the rotation of the
unit while it sweeps over the field of view we've decided is interesting.

That way is a little more conceivable, but I doubt it would work out in
practice. If the unit averages 10 rotations per second, it would be subject to
20 acceleration/deceleration events per second, which would be a significant
increase in wear and tear on the unit. It would also make it harder to
reliably estimate the unit's rotation at any point in time, again driving up
costs.

All this can't grant you much more than, say, a 100% increase in point density
on the road (assuming 120 degrees of "interesting" FoV and a 1/6th sample rate
on the uninteresting parts). If these things are to be produced at scale, I
imagine it would be easier to increase point density by just buying a second
lidar, which would also bring better coverage and some redundancy in case of
failure.
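
A minimal sketch of the reconstruction step described above, assuming samples
of (timestamp, theta, phi, distance) and a simple linear interpolation between
two pose fixes (orientation changes ignored for brevity; all numbers
invented):

    import numpy as np

    def spherical_to_cartesian(theta, phi, dist):
        """Sensor-frame XYZ from the rotation angles (rad) and range (m)."""
        return np.array([
            dist * np.cos(phi) * np.cos(theta),
            dist * np.cos(phi) * np.sin(theta),
            dist * np.sin(phi),
        ])

    def pose_at(t, t0, p0, t1, p1):
        """Best-estimate car position at time t, linearly interpolated."""
        a = (t - t0) / (t1 - t0)
        return (1 - a) * p0 + a * p1

    # One sample taken 10 ms into a 100 ms interval as the car moves 3 m.
    p0, p1 = np.zeros(3), np.array([3.0, 0.0, 0.0])
    t, theta, phi, dist = 0.010, np.radians(15.0), np.radians(-2.0), 42.0
    world_point = (pose_at(t, 0.0, p0, 0.100, p1)
                   + spherical_to_cartesian(theta, phi, dist))
    print(world_point.round(2))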

------
panic
I'm looking forward to lidar in human-driven cars. Imagine how cool it would
be to see the 3D point cloud on your dashboard or reflected on your windshield
as a heads-up display.

~~~
dmitriid
> Imagine how cool it would be to see the 3D point cloud on your dashboard or
> reflected on your windshield as a heads-up display

What would be the point of it other than distracting you from, you know,
actual driving?

~~~
panic
Seeing what's happening all around you in a single glance could improve safety
in a similar way to how it does for self-driving cars.

~~~
dmitriid
Humans are really bad at scanning and interpreting large amounts of visual
data. When driving a car, your brain is already nearly overwhelmed by the
amount of information it needs to process and analyse.

You want to add yet more information to that.

~~~
panic
On the contrary, humans are great at interpreting large amounts of visual
data. That's how we can drive without needing lidar in the first place.

------
ecpottinger
Personally, I don't think Lidar is needed to make self-driving cars. Otherwise
we have a problem, since humans do not come equipped with one to date.

The problem I have with the article is that he said Lidar will never come down
in cost to that of a headlight.

Tech marches on, and headlights are not really that cheap; in the future I
would not be surprised if, with mass production, Lidar does get that cheap.

How much did the first KIM-1 computers cost? How does that compare to what a
simple Arduino, with a hundred times the power and memory, costs today?

~~~
KaiserPro
> I don't think Lidar is needed to make self-driving cars.

The only sensor that can give mm-accurate, high-resolution, long-range 3d
spatial data with low latency is lidar.

For a purely visual system to supplant lidar we need:

1) a self cleaning all weather/all light condition camera with ~170 degree
field of view at ~30 megapixels

2) a Structure from motion system that has a sub 20ms latency, and can work
with spherical images, at full resolution.

3) a semantic object-recognising system that is able to classify any object
class. It must be scale, rotation, colour and occlusion invariant. It must
also update the world map generated by 2

4) An object threat management system, that can take semantic information from
3, rank it in order of threat, to be passed to 5

5) A world motion prediction engine, that takes threats from 4 and works out
if said object is likely to collide with the car

all inside 70-100ms

That's not going to happen soon. Of the whole list, 2 is the closest to
working.

Lidar allows you to cut through 90% of this, because it gives you an accurate
point cloud. From there you can do clustering to make objects, and track those
clusters to measure threats. All of this is doable on a small CPU now. Without
AI.
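
A minimal sketch of that clustering step, on a made-up scan (plain Euclidean
clustering; thresholds are illustrative, and this is nobody's production
pipeline):

    import numpy as np
    from scipy.spatial import cKDTree

    def euclidean_clusters(points, radius=0.5, min_points=5):
        """Group points whose neighbours lie within `radius` metres (BFS)."""
        tree = cKDTree(points)
        unvisited = set(range(len(points)))
        clusters = []
        while unvisited:
            frontier = [unvisited.pop()]
            cluster = list(frontier)
            while frontier:
                idx = frontier.pop()
                for n in tree.query_ball_point(points[idx], radius):
                    if n in unvisited:
                        unvisited.remove(n)
                        frontier.append(n)
                        cluster.append(n)
            if len(cluster) >= min_points:
                clusters.append(points[cluster])
        return clusters

    # Fake scan: two obstacle-sized blobs plus some sparse clutter.
    rng = np.random.default_rng(0)
    scan = np.vstack([
        rng.normal([10, 0, 1], 0.2, (50, 3)),
        rng.normal([15, 3, 1], 0.2, (50, 3)),
        rng.uniform(-2, 2, (20, 3)) + [5, 0, 0],
    ])
    for c in euclidean_clusters(scan):
        print(len(c), "points, centroid", c.mean(axis=0).round(2))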

~~~
sliken
You make lidar sound great, much better than I've heard.

I thought lidar was quite sensitive to dust, rain, snow, and fog. This is
particularly worrisome because lidar samples at a MUCH lower rate than a
camera (from 30M samples/sec for a cheap camera down to 300,000 or so for an
expensive lidar).

While lidar is pretty impressive at short range, what about at 2 seconds away
@ 65 MPH? Will it detect a decelerating car faster than a camera that can
detect a brake light? Will it be better at detecting perpendicular cars vs
parked cars at that range?

Will weather cause the lidar to decay in similar ways to human eyes?

~~~
KaiserPro
>what about at 2 seconds away @ 65 MPH?

65 MPH is about 30 m/s; your average lidar should be good up to about 200
meters, which is about 6 seconds at 65 mph.

> Will it detect a de-accelerating car faster than a camera that can detect a
> brake light?

Now this is an interesting question. Yes and no. A lidar on its own will not
give you object recognition. It will tell you that a reflective surface of
size _n_ is directly in your proposed path, and that since the last scan it
has got closer. From that you can make a very accurate obstacle map.

I don't think that running a car solely on lidar is actually all that feasible
or a clever idea. Not without a boatload of mapping and processing to create a
machine-readable semantic map first.

Having a camera array _as well as_ lidar is a very good idea, as it can
provide blended information from the lidar to the semantic map being generated
by the camera and radar. Your example of brake lights is good, as it provides
a cue as to what is likely to happen.

It also means that the high latency of a visual processing system is less of a
problem, as the model can be updated by the lidar. Camera picks up a car, the
model attaches it to the pointcloud it thinks is the car from the lidar/radar.
When the lidar/radar updates it can create a prediction of where the visual
system should look.

> detecting perpendicular cars vs parked cars at that range?

You don't have to. Lidar gives you a point cloud, and those points can be
roughly translated into hard surfaces. If a hard surface is in your path or
predicted path, take action until it isn't. Dealing with point clouds for
object avoidance is computationally much, much simpler than having to infer 3d
from either structure from motion, stereo, or both.
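
A sketch of that "hard surface in my path" check, with assumed corridor
dimensions (car frame: x forward, y left, z up):

    import numpy as np

    def obstacle_ahead(points, half_width=1.5, max_range=60.0,
                       min_height=0.3):
        """True if any return sits in the corridor ahead; min_height crudely
        filters out ground returns. All thresholds are illustrative."""
        x, y, z = points[:, 0], points[:, 1], points[:, 2]
        hits = ((x > 0) & (x < max_range)
                & (np.abs(y) < half_width) & (z > min_height))
        return bool(hits.any())

    cloud = np.array([[20.0, 0.4, 0.8],    # in-lane return 20 m ahead
                      [30.0, 5.0, 1.0]])   # off to the side, ignored
    print(obstacle_ahead(cloud))           # True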

[https://github.com/raulmur/ORB_SLAM2](https://github.com/raulmur/ORB_SLAM2)
If you look at this SLAM system, the point cloud it generates is very sparse
compared to a lidar's:
[https://youtu.be/W3ELziPYn5k?t=13](https://youtu.be/W3ELziPYn5k?t=13)

But as I said before, you need other sensors to get other data and corroborate
the world model.

>lidar was quite sensitive to dust, rain, snow, and fog.

It is indeed, like any other sensor.

> Will weather cause the lidar to decay in similar ways to human eyes?

Most lidars operate in the near infrared, so they will degrade differently.
Depending on the wavelength, it'll either be sensitive to moisture or not at
all.

------
ijafri
I didn't get the Uber reference. Haven't they given up on driverless cars, at
least for the near future, after the sensor/lidar failure resulting in a fatal
crash?

~~~
JshWright
It wasn't a sensor failure. Multiple sensors (including the lidar) detected
the woman with plenty of time to stop.

~~~
kwhitefoot
It was a management failure. The car had a perfectly ordinary radar-based
emergency braking function factory-fitted by Volvo, but Uber disabled it,
presumably to avoid interference with the car's autonomous driving functions
(if I recall correctly).

------
pauljurczak
He is right about lidar being a crutch, but he is wrong about the amount of
time needed to develop machine vision software good enough to make lidar
unnecessary.

~~~
s17n
Lidar will get cheap before this happens, at which point it won't be a
question of whether it is "necessary" but whether it is useful.

------
rwallace
> In daylight, cameras can do that, too, but not so much in the dark

Why not? We've been seeing TV footage of cameras seeing perfectly well in the
dark for decades. Why is there a problem with using them for automated driving
at night?

