That DARPA reference in the article is weak. Because the state of the art needed lidar in 2007, does that mean it's still a necessary crutch in 2019? Image recognition has made unbelievable strides in the interim.
As I watch Tesla's "Autopilot" do its thing, it seems its limitations are less about object recognition and more about what to do with the data it has. Lidar won't help it intelligently handle a merge of two lanes into one, or seeing a turn signal and slowing down to let the person in, or seeing a red light and knowing to start slowing down before it recognizes the car in front of it slowing down, or knowing exactly what lane to enter on the other side of a tricky intersection without a car in front of it, or knowing that car X is actually turning right so you shouldn't start following it if you're going straight, or having the car see a clear path ahead and accelerate to the speed limit when all other cars are stopped or going much slower in adjacent lanes, or moving to the left to allow a motorcycle to lane split, etc., etc. It's still great, but there are a ton of things to learn.
Maybe once there is no human in the driver's seat, you'll need the extra precision that lidar provides, but there are big gaps before we even get there.
A lidar sensor will almost certainly _always_ be lower latency than any sort of image processing engine.
A lidar gives you a stream of 3D points on the order of millions of points a second.
The processing pipeline for any visual system takes at least one frame period of the camera (and the faster the camera, the less light it can gather), plus the GPU transfer time (if you are using AI), then processing.
This means you are looking at ~200 ms of latency before you know where things are.
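A back-of-the-envelope version of that latency budget (the stage durations below are assumptions for illustration, not measurements of any real system):

```python
# Rough camera-pipeline latency budget. Every stage time here is an
# assumed, illustrative number, not a benchmark.
frame_period_ms = 1000 / 30   # ~33 ms: wait for a full frame at 30 fps
transfer_ms     = 10          # host -> GPU copy of the frame (assumed)
inference_ms    = 50          # neural-net forward pass (assumed)
postprocess_ms  = 15          # decode detections, tracking, etc. (assumed)

total_ms = frame_period_ms + transfer_ms + inference_ms + postprocess_ms
print(f"end-to-end: ~{total_ms:.0f} ms")
```

Even this optimistic single-buffered sum lands over 100 ms; real pipelines are often a frame or two deep in buffering, which is how you end up near the ~200 ms figure.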
Lidar is a brilliant sensor. Maybe it will be supplanted by some sexy beamforming THz radar, but not in the near future.
Cameras built using dedicated hardware similar to what lidar systems are using would not have that sort of latency. You don't have to receive a full frame of data before it can be processed if you're not using generic COTS cameras.
Sure, you can have the thing spitting out scan lines as fast as you like; that's not the issue.
You are limited by shutter speed, as you know, even if the shutter isn't global. (Lidar is too, but we'll get to that in a bit.)
Any kind of object/SLAM/3D-recovery system will almost certainly use descriptors. Something like ORB/SURF/SIFT/etc. requires a whole bunch of pixels before it can actually work.
Only once you have feature detection can you extract 3D information (either stereo or monocular).
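To make the stereo case concrete, here's a minimal sketch of why 3D only falls out after features are matched: depth comes from the disparity between a point's position in the left and right images. The focal length, baseline, and pixel coordinates are made-up example values.

```python
# Stereo depth from one matched feature. All numbers are illustrative
# assumptions, not real camera intrinsics.
focal_px   = 700.0   # focal length in pixels (assumed)
baseline_m = 0.12    # separation between the two cameras (assumed)

x_left, x_right = 412.0, 377.0   # matched feature's column in each image
disparity = x_left - x_right     # 35 px of horizontal shift

# Classic pinhole-stereo relation: depth = f * B / disparity
depth_m = focal_px * baseline_m / disparity
print(f"depth: {depth_m:.2f} m")  # 700 * 0.12 / 35 = 2.40 m
```

The point is the dependency chain: no descriptor match, no disparity; no disparity, no depth.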
A datum from a lidar has intrinsic value, as it's a 3D point. Much more importantly, it does not drift; even the best SLAM drifts horribly.
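By contrast, a sketch of why a single lidar return is already a 3D measurement: it's a range along a known beam direction, so one return yields one point with no matching or inference step. The range and angles below are arbitrary example values.

```python
import math

def lidar_return_to_xyz(range_m, azimuth_rad, elevation_rad):
    """Convert one lidar return (range + known beam angles) to a 3D point.

    Standard spherical-to-Cartesian conversion; the sensor knows the beam
    angles from its own geometry, so no pixel matching is needed.
    """
    x = range_m * math.cos(elevation_rad) * math.cos(azimuth_rad)
    y = range_m * math.cos(elevation_rad) * math.sin(azimuth_rad)
    z = range_m * math.sin(elevation_rad)
    return x, y, z

# Arbitrary example: a 10 m return at 30 deg azimuth, 5 deg elevation.
x, y, z = lidar_return_to_xyz(10.0, math.radians(30), math.radians(5))
```

Each point is metric on arrival, which is why a lidar map doesn't accumulate the scale and pose drift that feature-based SLAM does.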
Lidar will be superior for at least 5 years, with major investment possibly 10.
> Lidar won't help it intelligently handle a merge of two lanes into one, or seeing a turn signal and slowing down to let the person in, or seeing a red light and knowing to start slowing down before it recognizes the car in front of it slowing down, ....
It certainly seems like it's still needed. Some of the high-profile Tesla crashes seem to have been caused by the camera system not properly recognizing the side/back of a truck or the divider on an off-ramp and plowing right into them, where lidar probably would have caught it.