Velodyne Announces a Solid-State Lidar (ieee.org)



Hm, this is not Velodyne's first announcement of a solid state lidar breakthrough.

http://spectrum.ieee.org/cars-that-think/transportation/sens...

Several other companies (Quanergy, Blackmore) are working on these too, but so far they seem to be just press releases. Hopefully we'll see some real ones soon; the current state-of-the-art wide-field lidars cost many thousands of dollars and are (imo) too fragile for use in production vehicles.


I take all solid-state lidar claims with a giant grain of salt. Research at universities is still very primitive (beamwidths[1] of 30-40 degrees) and nobody has demonstrated a solid prototype. Quanergy in particular looks very fuzzy. These companies all imply they are using some kind of patch antenna/diode source, which is much more primitive than the nanostructured antennas in academic research. At the same time, academic research is 15-20 years away from forming a reasonably collimated laser.

The allure of solid state lidar is intense though. Not needing avalanche diodes gives me a bubbly sensation around my prostate. There is probably no such thing as cheap lidar without solid state.

[1] No laser forms a perfectly straight line, but 30 degrees is more like a floodlight. It makes it very difficult to take measurements without applying a complex filter over everything. Basically, all of the data is massively blurred when you get it and has to be deconvolved, which is never perfect. It's very hard to turn a blurry image into a sharp one.


This article doesn't mention a breakthrough in solid-state LIDAR design, as the article you link to does, but rather that they have an actual product ready-ish:

> Velodyne today announced a solid-state automotive lidar ranging system that the company will demonstrate in a few months, release in test kits later this year, and mass produce at its new megafactory in San Jose, Calif., in 2018. The estimated price per unit is in the hundreds of dollars.


Not much info here from Velodyne. What's the range? Is this a flash or MEMS device? Resolution?

Advanced Scientific Concepts has had good flash LIDAR units for sale for years. They just cost too much. They sold them to the DoD and SpaceX. Continental, the German auto parts maker (a very big company, not a startup), has purchased the ASC technology and expects to ship in volume in 2020. Here's a Continental prototype mounted on a Mercedes.[1] It's mounted at bumper height, has a 120-degree field of view, and only a 30 m range. So this is for city driving or slow driving in tight spots.

Continental intends to ship in volume in 2020, but nobody is yet ordering units in the kind of volume a major auto parts manufacturer needs.

Google uses a high-mounted LIDAR with longer range as well as the bumper height sensors. This doesn't address that market.

[1] https://www.youtube.com/watch?v=pxqFX94zBPI


We shipped one on the OSIRIS-REx program to gather material from an asteroid last year, and one will be on the new Boeing crewed capsule as well!

Yeah I'm not impressed with Velodyne's lidars either. The work at Continental is going, quietly, to volume production.


From the image at the top of the article: "up to 200m range" and "35 degree vertical field of view".


In the video, the Continental rep says "120 degree field of view" and "reaching out as far as thirty meters".[1] This may be the short range model for bumper height applications. ASC has built units with much longer ranges.

For more range, you need bigger collecting optics, which means a bigger unit, or a narrower field of view. The tradeoffs are straightforward.

You also have to spread the laser output over a wider area to keep it eye-safe. The laser eye-safety requirement is on power through a 1/4" hole, corresponding to the pupil size of an eye. This protects people staring directly into the emitter. The power can be greater if the beam is wider. If you devoted the top inch of the windshield to sensors, and spread the laser output over a wide area of windshield, the power could be much higher.

[1] https://youtu.be/pxqFX94zBPI?t=139
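
To put rough numbers on that last point, here's a back-of-the-envelope sketch. The per-pupil limit below is a made-up placeholder, not an actual IEC 60825 value; only the scaling matters:

    import math

    # Illustrative sketch of the eye-safety scaling argument above. The
    # "limit" is a placeholder, NOT a real laser-safety class limit.
    PUPIL_DIAMETER_M = 0.007          # ~7 mm (roughly 1/4"), dark-adapted pupil
    MAX_POWER_INTO_PUPIL_W = 0.001    # assumed per-pupil limit (placeholder)

    def allowed_total_power(beam_diameter_m):
        """Total emitted power allowed if only a pupil-sized patch of a
        (assumed uniform) beam can enter an eye at the emitter."""
        pupil_area = math.pi * (PUPIL_DIAMETER_M / 2) ** 2
        beam_area = math.pi * (beam_diameter_m / 2) ** 2
        if beam_area <= pupil_area:
            return MAX_POWER_INTO_PUPIL_W   # the whole beam fits through a pupil
        return MAX_POWER_INTO_PUPIL_W * beam_area / pupil_area

    for d in (0.007, 0.025, 0.30):   # tight beam, 1" window, 30 cm strip of windshield
        print("beam %5.1f cm -> %8.1f mW total" % (d * 100, allowed_total_power(d) * 1000))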


This is mostly hype until they can actually deliver.

I ordered several of their current lidars 5 months ago and now it looks like the total delivery time will be about a year after the order. And they're mostly unresponsive about what is going on.


I was at SPAR 3D in Houston a couple weeks ago (a trade show for the laser scanning/surveying industry), and there was some rumor-mill type chatter about what the hell was going on with Velodyne.

Apparently some folks have heard that they're having some sort of supplier issue that they're not doing a good job of working through or finding a replacement for.


Naïve question: why is lidar so important for driverless cars? We humans drive cars with only visual and audio information. We don't need lidars. Why couldn't cars do the same?


Simple answer: simplicity

Our world around us is 3D. A camera projects this information onto a 2D plane and throws away a lot of information in the process.

When we move in the world we need to know where obstacles are relative to us. We infer this from all the sensory input we receive and have a pretty good idea how far things are away from us because we have learned how size relations should be.

A computer could and can (to some degree) infer this information from regular 2D videos as well, because in the end most algorithms that do obstacle detection/avoidance have to extract positions in 3D to know exactly where those obstacles are.

A LIDAR (like other depth-sensing technologies) provides a distance for each measurement (sometimes the distance is the only information, but some sensors provide intensity, color, etc. as well).

So we immediately know the position in space of the measured point and thus can more or less directly extract obstacles in 3D space relative to the robot/car/drone, without taking the detour of "imagining" 3D information from the projected 2D image.

So it is

3D world -> 3D sensor -> 3D points -> 3D obstacles

versus

3D world -> 2D sensor(s) -> Extraction of depth -> 3D points -> 3D obstacles
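
To make the "3D sensor -> 3D points" step concrete, here's a minimal sketch (my own illustration, not any particular sensor's driver) that turns one lidar return - a range plus the beam's azimuth/elevation - directly into a Cartesian point, with no depth-inference step in between:

    import math

    def lidar_return_to_xyz(range_m, azimuth_deg, elevation_deg):
        """Convert a single lidar measurement (range + beam direction) into a
        3D point in the sensor frame. This is the entire 'depth extraction'."""
        az = math.radians(azimuth_deg)
        el = math.radians(elevation_deg)
        x = range_m * math.cos(el) * math.cos(az)   # forward
        y = range_m * math.cos(el) * math.sin(az)   # left
        z = range_m * math.sin(el)                  # up
        return x, y, z

    # A return at 12.4 m, 30 degrees to the left, 2 degrees above the horizon:
    print(lidar_return_to_xyz(12.4, 30.0, 2.0))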


It's like how we build cars with wheels instead of legs.

No reason to stick with our limitations.


Not so much a limitation as an optimization for a different problem. Wheels are great for going fast on relatively level terrain, not for traversing whatever terrain they encounter.

Lidar is great for distance representations, but requires a highly consistent and exact timing source. Dual optical inputs allow us to approximate the same data through other methods that don't require nearly as exact timing.


> Lidar is great for distance representations, but requires a highly consistent and exact timing source.

Just a small FYI:

If you are doing time-of-flight measurement, then yes. But many (not all) LIDARs out there don't do time-of-flight, because to get the resolution you want or need you have to have an oscillator (and sensors for the return pulse) capable of many, many gigahertz; it gets absurd quickly if you want anything below a resolution of around 10 cm. This isn't easy or cheap (the oscillator actually is fairly easy and cheap - it's the return sensor that needs a very fast rise time that is expensive and difficult, though such devices do exist, some all solid-state, like flash LIDAR).

What many LIDAR sensors actually use (not sure about Velodyne) is a kind of "Doppler shift" of a laser modulated with a carrier wave. It still needs a high-speed multi-gigahertz oscillator to resolve small changes in distance, but everything else is fairly off-the-shelf and nothing real special. One downside of the method is an ambiguity between the farthest and nearest distances measured: when comparing the received modulated beam to the outgoing one, distances that differ by a full modulation cycle look the same, so you can't tell them apart.
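
To put numbers on both points: plain time-of-flight needs roughly 0.67 ns of timing resolution per 10 cm of range resolution (range resolution = c * timing resolution / 2), and here's a quick sketch of the modulated-carrier (phase-shift) scheme showing where the wrap-around ambiguity comes from (generic illustration, arbitrary frequency, not any specific sensor's numbers):

    import math

    C = 299_792_458.0   # speed of light, m/s

    def phase_range(true_distance_m, mod_freq_hz):
        """Distance recovered from the phase shift of an amplitude-modulated
        beam. Anything beyond the unambiguous range wraps around."""
        wavelength = C / mod_freq_hz
        unambiguous_range = wavelength / 2      # out-and-back halves it
        phase = (2 * math.pi * 2 * true_distance_m / wavelength) % (2 * math.pi)
        return phase / (2 * math.pi) * unambiguous_range

    f = 10e6                                    # 10 MHz modulation -> ~15 m unambiguous range
    for d in (5.0, 14.0, 20.0):                 # 20 m aliases back to ~5 m
        print("true %5.1f m -> measured %5.1f m" % (d, phase_range(d, f)))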


Multi-frequency phase interference can be used to get sub-cm accuracy without needing ridiculous oscillators. The only problem is that you need to sample your target several times with different frequencies to remove the ambiguity.

Units like this:

https://www.amazon.com/Bosch-GLM-35-Measure-120-Feet/dp/B00V...

that go for $75 (or less) do this. They're super accurate; the only downside is that it takes a second to cycle through all the necessary frequencies.
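
A minimal sketch of why cycling through frequencies removes the ambiguity - a brute-force consistency search, just to show the idea (real rangefinders do a synthetic-wavelength calculation instead; the frequencies here are arbitrary):

    C = 299_792_458.0   # m/s

    def wrapped(true_d, f):
        """Phase-derived distance, wrapped to the unambiguous range of f."""
        return true_d % (C / (2 * f))

    def resolve(measurements, max_range=200.0, step=0.01):
        """Find the distance consistent with every (freq, wrapped distance)
        pair. Each frequency alone is ambiguous; together they pin it down."""
        best, best_err = None, float("inf")
        steps = int(max_range / step)
        for i in range(steps + 1):
            d = i * step
            err = 0.0
            for f, m in measurements:
                unamb = C / (2 * f)
                err += abs(((d - m) + unamb / 2) % unamb - unamb / 2)
            if err < best_err:
                best, best_err = d, err
        return best

    true_d = 87.654
    freqs = [10e6, 13e6, 17e6]     # three arbitrary modulation frequencies
    meas = [(f, wrapped(true_d, f)) for f in freqs]
    print(resolve(meas))           # prints a value close to 87.654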


Are you sure you mean Doppler shift and not just phase shift due to the distance the light traveled? AFAIK time-of-flight cameras like the ones from PMD use the phase shift, with the obvious problem of the calculated distance being ambiguous beyond a certain distance due to the periodic signal.

But sensors like Velodyne, SICK and I think Hokuyo are pulsed and measure the actual time it takes for the pulse to travel from the emitter to the target and back to the detector.


Sure, it's the road network that really makes wheels work. But we built that instead of machines with legs.


Depending on your technology level in a few areas, a road network with wheeled vehicles is likely either much harder or much easier than legged vehicles of some sort. Tracked vehicles probably fall somewhere in between.

I wonder: if we started heavily colonizing Mars 100 years from today, would we bother to build roads?


I guess that would depend on if there was some reason to try to have extensive industry on Mars or not.

At first, people would live near whatever resource they were using anyway. If a deposit ran out, they might choose to build a haul road rather than move their entire settlement.


But then to save time, they start to just put their domed cities on huge caterpillar tracks. To keep from having to bury their city for radiation protection, they start adding anti-radiation armor. However, the laws on Mars break down after Earth civilization collapses, and these tracked cities start arming themselves with cannon.

Cue action music.

It's a sucky situation, but it makes for an awesome video game!



Since gravity there is only 0.38 of Earth's, flying might be more economical than driving, even for short distances.


Isn't the atmosphere much thinner as well? Depending on the mode of flight, that might offset some of the economy.


One thing LIDAR doesn't provide is immediate knowledge of velocity of a moving obstacle (like other cars). It can be used to infer this (over two or more measurements), but it can't get at that information directly.

For that matter, neither can a camera (or multiple cameras), but I just wanted to point it out, because it is something that can be useful to know in self-driving vehicles, and isn't often considered or mentioned.


What technology can get that information directly, considering velocity is always denominated by time?


FMCW LIDAR can capture velocity directly from Doppler Shift analysis.


Is there a sensor technology that does this more quickly? My naive understanding was that sonar and radar also infer velocity by comparing multiple passes (and light should be at least as fast as radar, shouldn't it?)


Active sonar and radar can get motion towards or away from the sensor in a single pass from the Doppler shift of the pulse.
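
The single-pass measurement is just the Doppler relation; a quick sketch with illustrative numbers (not any specific sensor):

    C = 299_792_458.0   # m/s

    def radial_velocity_from_doppler(freq_shift_hz, carrier_wavelength_m):
        """Radial velocity of a target from the Doppler shift of the return
        (factor of 2 because the wave travels out and back)."""
        return freq_shift_hz * carrier_wavelength_m / 2.0

    # 77 GHz automotive radar: a 10 kHz shift is ~19.5 m/s (~70 km/h) of closing speed.
    print(radial_velocity_from_doppler(10_000, C / 77e9))
    # The same 19.5 m/s on a 1550 nm FMCW lidar produces a shift of ~25 MHz.
    print(2 * 19.5 / 1550e-9 / 1e6, "MHz")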


While I'm no expert, it's obvious right away that humans do many kinds of visual processing that AIs don't, on a conceptual, categorizing, and general-knowledge level, and we use these capabilities to interpret visual scenes, draw object boundaries, and thereby place objects in a modeled 3D space. Example: http://www.optical-illusionist.com/imagefiles/dalmation.jpg - to see the dog, you really do have to know rather a lot about dogs, sunlight, and shadows.


>>Example, http://www.optical-illusionist.com/imagefiles/dalmation.jpg - to see the dog, you really do have to know rather a lot about dogs

(Disclaimer: Cynical Plug)

This is exactly the sort of AI project that I'm working on: being able to recognize patterns even with lots of noise or incomplete information. I tried a show HN submission [1] but there was absolutely no interest. :(

Question: Does anybody know if google is yet at this level? Would google's AI be able to recognize the dalmatian?

I think my AI algorithm would be able to recognize the dog but I'm still not at that stage yet. Really need to get more computer power before that. Thanks for the image though, you've given me some really good testing ideas.

[1] https://news.ycombinator.com/item?id=14057043


If that were not a still image, the dog would be far more visible. Once you incorporate motion tracking in 2D, many things suddenly become apparent.

Pro-tip: for a moving viewer, anything in a 2D image that doesn't move horizontally or vertically is on a collision course with the camera. This is why deer seem to "come out of nowhere": they are not moving within the field of view as they run toward a car, and can actually remain hidden behind the A-pillar the entire time they are running toward the road, right up until you hit 'em.
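
That rule of thumb (constant bearing, closing range) is easy to turn into a check once you have 2D tracks; a toy sketch, not production collision logic:

    def on_collision_course(track, growth_threshold=1.05, drift_threshold_px=2.0):
        """track: list of (x_px, y_px, apparent_size_px) over successive frames.
        If the object barely drifts in the image but keeps growing, its bearing
        is constant while its range is closing -> likely collision course."""
        (x0, y0, s0), (x1, y1, s1) = track[0], track[-1]
        drift = ((x1 - x0) ** 2 + (y1 - y0) ** 2) ** 0.5
        return (s1 / s0 >= growth_threshold) and (drift <= drift_threshold_px)

    deer = [(412, 300, 8), (413, 300, 10), (412, 301, 13), (413, 300, 17)]
    print(on_collision_course(deer))   # True: fixed spot in the frame, but getting bigger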


> Question: Does anybody know if google is yet at this level? Would google's AI be able to recognize the dalmatian?

Apparently not, based on trying out the dalmatian image here:

https://cloud.google.com/vision/


A human brain has a datacenter's worth of processing power using algorithms tuned by hundreds of millions of years of evolution to be excellent at divining the underlying forms behind visual information. We can't put that on a car (yet).

Also, Lidar can avoid some of the optical illusions that will cause humans to crash. E.g. [1]

And, hopefully, precise data will let future computerized cars drive more precisely than humans can.

[1]http://www.news.com.au/lifestyle/real-life/wtf/optical-illus...


I was kinda expecting an image showing the optical illusion causing those crashes.


Naïve answer: we humans drive cars despite having only visual and audio information. We interpret and often miss things, or get fooled by optical illusions. I remember a great story a year or two back about one of Google's cars "weirdly" stopping, before the reporter realised it'd somehow seen through a hedge and noticed a cyclist about to pop into view.

I imagine lidar gets you one step closer to the data you're really after.


Humans have a bit more information than that -- tactile, proprioceptive and complex predictive models they can query (folk physics, theory of mind of other agents, higher order knowledge of roads and places). But I agree, humans perform well despite not being particularly well adapted to driving.

Given that, it would be foolhardy to proceed with vision and sound only because humans can get by with mostly just those. Better sensors push down the intelligence requirements for complex behavior. For example, insects as a group have a wider variety of sensors than any other group of organisms. With pin sized brains and milliwatt power, they forage, hunt, fly, walk and some even learn or communicate with an elaborate (combinatorial) language. Their array of specialized sensors plays a large role in allowing for their surprisingly rich sets of behavior.

While I'm here, I'll also note that optical illusions are not things to get fooled by; rather, they show the strength of the assumptions in our predictive world models. Brains do not see the world as it is (see the checker shadow illusion and inattentional blindness); they predict from, smooth, filter, and adjust their inputs to act more effectively in the world. All inferential systems must have particular structural biases to do well in their target domain. In animals, between what was evolved and what is learned from the early environment, expectation and experience end up affecting, top-down, what we see. It's very non-trivial -- many were surprised to learn, for example, that even the "simple" retina receives inputs from non-visual areas (and the details differ from species to species).


How does lidar see through a hedge?


LIDAR can practically see through vegetation. Some individual beams miss the vegetation and reflect from things they hit on the far side, and the software can detect this.

This is used a lot in mapping and research into e.g. forest canopies in the Amazon. LIDAR has been extensively deployed on aircraft for this kind of purpose.

I've seen high-res LIDAR scans made available by my country's state survey for e.g. archaeologists, and they show the 'true ground' through the vegetation. It's like the vegetation has been stripped away; it has a clinical feel when rendered.


This is a good summary.

Some lidars also offer two distance estimates per point, a first distance and a secondary distance. On the lidar I used that had this capability, the secondary distance was not always robust. But on permeable materials like vegetation, the secondary distance would typically indicate solid materials behind the initial curtain of material.

Anyway, the presence of multiple distance returns in a small solid-angle area (whether from the above primary/secondary, or from nearby scan points) can indicate the density of vegetation. You can certainly see "through" the small gaps in a hedge. Depending on the lidar and the range, gaps as small as a few cm^2 could be enough (talking about vehicle-mounted lidar for autonomous navigation, not remote sensing of forest canopy).


Probably was a radar, not a lidar.


Possibly infrared sensing? Some fancy IR cameras can see a human body through walls, so I guess it's not inconceivable that a LIDAR (which is already IR, I think?) could penetrate a hedge.


You're thinking of far IR (typical black-body radiation at room temperature for most materials). Near IR is what has to be filtered out in webcams and what a lot of these LIDAR devices use; it behaves almost exactly like visible light.


Stereo vision gives you two slightly different perspectives on a scene. A single point in the 3D scene will be seen in a slightly different position in each 2D image. The offset between the two positions give you the depth information. The more offset, the closer the point is to the observer.

The difficult part however is pairing the points between the images. You and I do that easily, because when we see, for example, the top left corner of our refrigerator, we can easily identify that same point in both images, probably because we recognize the objects we are looking at. A computer has more trouble pairing the points.

Lidar is a way of cheating, by temporarily shining a dot on the locations so that the points can be found and matched easily from both images.
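
Once that pairing (the correspondence problem) is solved, the depth itself is a one-line formula; a minimal sketch with made-up rig numbers:

    def depth_from_disparity(disparity_px, baseline_m, focal_length_m, pixel_size_m):
        """Pinhole stereo: Z = f * b / d. The hard part -- deciding which pixel
        in the right image matches which in the left -- is assumed solved."""
        disparity_m = disparity_px * pixel_size_m
        return focal_length_m * baseline_m / disparity_m

    # Illustrative rig: 30 cm baseline, 4 mm lens, 3 micron pixels.
    for d_px in (40, 10, 2):
        z = depth_from_disparity(d_px, 0.30, 0.004, 3e-6)
        print("%3d px disparity -> %6.1f m" % (d_px, z))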


Just to take this a step farther: the distance-measurement error characteristics of lidar and stereo are very different.

To get accurate range through stereo, you need two calibrated cameras, rigidly mounted (with respect to each other), and with a high resolution (pixels x pixels in each image of the stereo pair). All this implies very expensive sensors, large camera rigs, and lots of image processing. To get acceptable stopping distances for a vehicle going 60mph using stereo only to detect obstacles requires things like 1.5-meter camera bars, 2048 pixels per image in the stereo pair, and onboard computing to compute stereo correspondence at 30 or 60 fps. It's hardware-intensive!

TBH, I forget how the stereo range error scales with range - I think it is linear with range, but may be super-linear. This can be a problem for mapping. I think lidar is superior in this regard, in other words, its error scaling is sub-linear with range.

If you're used to using only stereo, the concept of having a lidar for that point-range measurement looks pretty magical. Of course, stereo vision offers some advantages relative to lidar - it's a passive measurement, for example.


Stereo error is quadratic with range. Bad times. You don't need high resolution, just a wide baseline for accuracy at distance. Resolution will only get you so far because you can't get (and you wouldn't want) sensors with pixels below a micron in size. You usually can't change the focal length much because you have constraints on field of view, therefore the only viable option is more baseline.

e_z = e_d * Z^2/(b*f)

e_d is the disparity (i.e. matching) error in metric units (a few microns), Z is distance, b is baseline, f is focal length

It's relatively easy to estimate the accuracy you can achieve on a car because the maximum baseline is fixed to < 2m typically. You can plug in reasonable fields of view, sensors etc. Typically you can assume 0.25 px matching accuracy, assuming the algorithm does sub-pixel interpolation.

Example: e_d = 2.2µm, Z=30m, b=1.5m, f=3mm (for a 2000px wide sensor that's 70 degrees FOV): e_z = ~40 cm. That's not too bad - enough to identify something big. At 100 m you'd have a disparity of 20 pixels. If you had a search radius of 256 px you'd have a close-range of 8m.

Nowadays the compute problem isn't too bad - you just throw a GPU at it (or an FPGA, but you have memory limitations there).

LIDAR has a more or less constant error with range, provided you've got a high enough signal to noise. However this does mean that at short distances, LIDAR is quite poor relatively.
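
Plugging the numbers from that example back into the formula, as a quick check (same assumptions: ~2.2 µm matching error, 1.5 m baseline, 3 mm focal length):

    def stereo_range_error(e_d_m, Z_m, baseline_m, focal_m):
        """e_z = e_d * Z^2 / (b * f): range error grows quadratically with range."""
        return e_d_m * Z_m ** 2 / (baseline_m * focal_m)

    e_d, b, f = 2.2e-6, 1.5, 0.003     # matching error, baseline, focal length
    for Z in (10, 30, 100):
        print("Z = %3d m -> e_z = %6.1f cm" % (Z, stereo_range_error(e_d, Z, b, f) * 100))
    # Z = 30 m gives ~44 cm, matching the ~40 cm figure above; at 100 m it's ~4.9 m.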


Thanks for this. I wasn't sure that it was as bad as quadratic, but there it is. Your calculations show why lidar is such a persuasive alternative.


Why is having a huge cluster of GPUs/CPUs so important for AlphaGo? We humans use only about 100 watts of power. We don't need megawatts of datacenter power.

Visual SLAM is an open research problem, and lidar lets companies sidestep a lot of those problems. I think that's a reasonable bet: falling prices for an existing technology is more predictable than hoping for algorithmic/research advances.


The way brains store weights, propagate updates, propagate inputs etc. is facilitated by their form and the materials they use and properties they take on. Computers like AlphaGo are essentially doing a very expensive hardware emulation of a completely different computing system.

Probably some day "true" native neural network consumer hardware will exist, and I would expect it to be much more efficient. It would be a complete paradigm change from what we have now though, so it's probably going to be awhile.


While not exactly "consumer hardware" - there do exist companies that make and sell neural network hardware; think of it as "embedded research and development" level kind of stuff (in many cases they sell development kits and the like - even to hobbyists who have the cash). For instance:

http://www.cognimem.com/index.php

http://www.general-vision.com/hardware/

...and while not inexpensive, and not quite the same as "neural network hardware", Nvidia does sell several deep learning platforms (beyond their GPU offerings):

http://www.nvidia.com/object/deep-learning-system.html

http://www.nvidia.com/object/embedded-systems-dev-kits-modul...

http://www.nvidia.com/object/drive-px.html


A good demonstration of this is if you flip the comparison to something computers find easy. The world's fastest human calculator seems to be able to do maybe 2 calculations per second; I've not got the details, but let's assume that's a reasonable yardstick.

That's 0.02 FLOPS/W.

RIKEN apparently is at about 7GFLOPS/W. That's three hundred billion times more efficient.

Not a fair comparison, but that's kind of the point.


> Why is having a huge cluster of GPUs/CPUs so important for AlphaGo? We humans only using about 100 Watts of power. We don't need megawatts of datacenter power.

This argument loses a lot of its bite after AlphaGo beats the top humans.

You could argue that you still need lots of energy for training: even a 100-watt human needs to live for 30-40 years before becoming a top Go player. So it's not unreasonable that accelerating the process might use more power. And if you have all that power for training, you might as well use it for inference.

You can probably build a bad Go playing machine in much less than 100 watts. There might be a way to extract something more power-efficient from AlphaGo, but it doesn't seem to be important to anyone.


The human brain is doing the playing, so we can focus on its 20 watts. We can also ignore the ongoing visual and audio processing, complex motion planning, online learning, and signal filtering, and focus on just the Go part (in other words, < 20 watts is dedicated to Go playing).

Humans are also unable to learn effectively beyond ~4 hours per day, due to limits on attention we don't fully understand yet. A talented human can reach pro level in about 8 years, enough to beat a neural-net-only player. The total energy in joules spent learning is still about an order of magnitude less.


Why can't you apply the same logic in reverse, then? We're not yet able to speed up and compress what takes the 100-watt machine as much as 21 years to fully develop into something cars can run locally, so we sidestep the problem with lidar.

Honestly, both seem a little silly though. Would an early, poorly trained version of AlphaGo confirm the need for lidar because it doesn't beat top humans?


One big benefit of LIDAR is that it's an active ranging technique - that is, it illuminates its target. That means it works at night: whereas a human would need 360° headlights, a LIDAR doesn't. The second benefit is that LIDAR is robust and quite accurate. If you measure a distance, you know it's probably right and with what confidence. With vision-based methods, you have trouble reconstructing featureless surfaces. Finally, as others have pointed out, LIDAR gives you 3D information out of the box. You don't need to do a bunch of compute-intensive image matching to get range.

It's much more likely that a stereo vision based system will give you false positives due to an incorrect feature match.

The main downside compared to cameras is that LIDAR data is much sparser. You can get 1MPt/sec LIDAR, but that's over an entire hemisphere normally so in one camera's field of view you might only get 20k measurements compared to millions of pixels.

In the end you need both (plus RADAR, ultrasound and everything else). Cameras will always be required for tasks like lane marking detection, reading road signs and visual identification of obstacles like pedestrians.


Self-driving cars need to be safer and more reliable for regulators and the masses to adopt them. Humans are pretty bad drivers with only visual + audio information when one or both are impaired by natural conditions (weather, night time, etc.) or artificial conditions (DUIs etc.).

By introducing more dimensions of sensing (LiDAR, Sonar etc.) as some others put it, it should theoretically help judgement during driving and thus improve the safety aspect.


Hey grondilui, it's only 2017, not 2027. "We humans can" doesn't mean a computer can. A computer can't even defeat a captcha, many of which are easier than, for example, driving in poor weather under low-light conditions. Humans are really bad at a lot of things: our reaction time is abysmal, we can be distracted, we can't measure anything perfectly. But our ability to process visual input (with a bit of delay) into a model of what we're seeing is not one of those areas. Also don't forget that we have stereoscopic vision too, and those of us who only have one working eye for some reason are at a disadvantage, including when driving. So 3D vision is already necessary for parity with humans. We then do extensive processing not only of still images but of parallax effects etc. Computers have trouble even processing a still image.

But I find it fascinating that you can already ask "we can do it, why can't computers?" You're just a couple of years too soon. :)


> We human drive cars with only visual and audio information.

And we're kinda shitty at it, killing large numbers of people every year doing it.


I like to think we're kinda amazing at it. The fact that a person can safely drive a car at 80mph while being completely relaxed in a wide range of conditions was really amazing to me as a kid. Accidents happen. Even chairs have their risks.


We kill 30k people a year in the US alone, largely through inattention, speeding, drunk/reckless driving, running lights, etc. All stuff a computer driver isn't likely to do.

In 50 years, we'll be shocked humans were ever allowed to do it.


To be fair, you could already improve that to 10k a year if you learned from European countries like the Netherlands. Better infrastructure, signaling, safety rules, and training would help the US a lot.

But sure, self driving cars might also be a solution. Who knows. It might certainly be a faster solution.


Yeah - we seem to hand out licenses like candy in some or most states here in the US.

I'll be the first to say I'm not a great driver. I can and have driven long distances, but I always know I can do better. But all of our training here in the US to pass the test is mainly "ad-hoc". There isn't any kind of "certified training" program that you have to take and pass in order to then get a license. But there probably should be.

Honestly, what there really needs to be is such a program, but done over the course of weeks at a facility like what is used at the Bondurant Racing School (https://bondurant.com/). Having that kind of knowledge and experience to really know your vehicle and its capabilities (both what it can and can't do, and how to handle an emergency) might go a long way toward making better drivers (now that I think of it, I might have to go to that school myself - assuming I can afford it).

I think if we did do a better, stricter job at teaching drivers how to drive, and requiring it for a license, it would go a long way to cutting down on accidents. It wouldn't eliminate them, but it would surely reduce them, probably severely. Learning to keep control of emotions, and also handle stressful situations while driving would have to be a part of this as well.

It wouldn't be a perfect solution, but it would probably be a damn sight better than what we currently do.


It's complicated. In the USA some states like South Carolina have higher rates of fatalities while others have fatalities per billion vehicle-km comparable to the best countries in Europe[1][2]:

  USA and territories   7.1   (includes Guam, Samoa, Puerto Rico, etc.)
  USA, Massachusetts    3.25
  Sweden                3.5
  Netherlands           4.5
  Belgium               7.3
  Spain                 7.8
  Australia             5.2
  Japan                 8
  S. Korea              16
  Brazil                55.9

[1] https://en.wikipedia.org/wiki/List_of_countries_by_traffic-r...

[2] https://en.wikipedia.org/wiki/Transportation_safety_in_the_U...


I think that if you compare by fatalities per km you should also take population density into account: 336 per km2 for Massachusetts vs. 501 per km2 in the Netherlands.


It's literally an extra dimension of sensing.

Yes you can infer distance using parallax (2 regular cameras, like our eyes). But if you can directly SENSE depth using a sensor capable of it (Radar/LIDAR), then you have a lot more certainty, redundancy, simplicity, and can even sense in the dark where cameras aren't able to effectively judge distance.


This isn't entirely true. Photogrammetric methods that measure depth from stereo imagery can be, and often are, more "certain, redundant," and precise when the network is structured correctly. What LiDAR offers here is an easy way to get 3D data quickly, without a large computational overhead and with no dependency on camera position. It also means we don't have to solve the correspondence problem, that is, finding the same point on two objects between two images.

But LiDAR isn't necessarily enough either. A lot of scanners end up being very noisy, or have a limited range, whereas cameras are good as long as there's line of sight. Using the two technologies together makes a whole lot more sense than just using one or the other.

Disclaimer: I am a graduate student who works with LiDAR, photogrammetry, and 3D imaging technologies.


The goal is to collect as much information about the environment as possible so the software can make better decisions than human drivers.


In the curriculum for Udacity's self-driving car nanodegree program, your suggestion was made explicitly. ~"(re: why are we focusing on vision?) Humans drive with just our eyes (even one eye would work), perhaps in the future cars will need only camera data instead." (Perhaps many cameras.)


First off, having multiple sensors and integration of that data is never a bad thing in robotics; in fact, it is almost a necessary thing. More information can mean better decisions by the system driving the vehicle. More sensors can help fill in things other sensors can't "see".

That said, you bring up a good point: Humans do a pretty good job driving vehicles with only a pair of eyeballs (usually), ears, and "feel" (I'm not sure how to put this last one, but a good driver tries to stay and feel "as one" with the vehicle, getting information about the road, various forces internal and external about the car, etc - in order to drive it properly and in control while making various maneuvers). So why couldn't we do the same with a self-driving vehicle?

Well - the truth is - we can:

https://devblogs.nvidia.com/parallelforall/deep-learning-sel...

https://images.nvidia.com/content/tegra/automotive/images/20...

I mention that paper so often in these kinds of discussions that I am sure people are getting sick of it. Note that in that paper only a single camera is used - like driving with one eye closed (or blind in one eye) - yet it was still able to drive properly. Part of Nvidia's purpose here is to advance self-driving vehicles, but they are also trying to sell self-driving vehicle manufacturers on their tech:

http://www.nvidia.com/object/drive-px.html

If the need for expensive and exotic LIDAR systems can be overcome, and simpler systems like cameras and radar can be used instead, it will be both a cheaper way to manufacture these cars and a more reliable one (as the article noted, it's one thing to make a LIDAR - making a reliable LIDAR for automotive use over hundreds of thousands of miles/kilometers is a different ball-o-wax - but manufacturers already have experience with making radar and cameras for vehicles robust).

For another take on using cameras for autonomous navigation - there's this project using high-speed winged drones (and completely open source - including the drone design files):

http://spectrum.ieee.org/automaton/robotics/drones/mit-drone...

https://github.com/andybarry/flight

That one only uses a couple of small cameras for stereo vision.

These two projects (and others, if you care to search) both show that such a system is possible; we probably don't need LIDAR for the majority of driving scenarios. Radar can probably cover other areas. You mention the concept of humans using hearing - honestly, I am not aware of any self-driving vehicle research on that kind of sensing, but if none exists, it seems like it might be a fascinating and promising area of study for someone! Audio sensors would also be another one of those "simple and robust" sensors for automobile usage that manufacturers would like.

That said: if they can make a simple but robust LIDAR system with no moving parts (such designs do exist, like flash LIDAR, for instance) and a low cost (that part is key), having that on a vehicle certainly can't hurt. There are areas (particularly at night, in inclement weather, or in other low-visibility situations) where a camera alone will have similar trouble to a human (though I wonder if a camera operating at other wavelengths - FLIR, near-IR, and near-UV, for instance - could help?).

In those situations, having a LIDAR might be the key to a safer driving experience for a self-driving vehicle, whereas a human without that ability might make a wrong (and potentially fatal) decision.


Redundancy?


Since Tesla is not using lidar and they have one of the most advanced self-driving systems, I wonder about this too.


> they have one of the most advanced self-driving systems

They certainly claim that, but have they got the test data to back that up? All the companies that have actually demonstrated truly self driving cars under real world conditions use Lidar.


"Since Tesla is not using Lidar and they have one of the most advanced self-driving systems, i wonder about this too."

Who says? The chattering masses on HN?

Tesla has certainly deployed a lot of assisted driving systems, but then again, so have a lot of car companies. Lane-keeping and auto-braking are not new, and to date, that's all Tesla has actually shipped.

People who know better have serious doubts that you can do full autonomy with only video/radar input under real-world driving conditions (like darkness). That's why most of their competitors are using LIDAR.


Lidar makes a ton of things easy. Visually extracting the 3D world from cameras is a very hard and computationally intensive problem.

I really hope Velodyne delivers. Quanergy has a nice site but seems like vaporware in the sense that you can't actually buy anything.

A $100 light weight Lidar is truly game changing for robots, self driving cars and drones.


> A $100 light weight Lidar is truly game changing for robots, self driving cars and drones.

This is probably going to end up like the Oculus Rift; it won't be $100.00 - it will probably end up north of $500.00, possibly north of $1000.00.

If it were easy, SICK or Hokuyo would have done it already. The fact that neither has can mean many things, of course, but I bet one of the big ones is that it isn't easy to pack a 3D LIDAR into a small package and make it robust and cheap. Both of those companies' 2D LIDAR solutions already hit the robust portion; Hokuyo's offerings hit the small-package portion (SICK's 2D systems are mostly the size of a coffeemaker - I own a couple), but neither company hits the low price mark.

That could also mean that they have a niche market that's willing to pay those prices, but given the interest and want for fast 3D LIDAR for self-driving vehicles and other uses, the fact that they don't have anything out is telling.

Now doing it all solid-state? Well, there are companies that have these systems as well (supposedly, at least) - called "flash LIDAR": essentially firing a laser to "flash" the scene, then using a grid array of CCD-like high-speed elements to measure the delay time between the flash and reception at each element. From what I've seen, even the low-resolution modules make the former two companies' offerings look dirt-cheap in comparison...


$1000 is still game changing. $1000 is something that's downright cheap to throw a few of on a car if it allows for a true NHTSA level 4 self driving car.


Even north of $1000 would still be cheaper than most of the in-car navigation/entertainment systems that a lot of people order. In relation to a car, $1000 is really not a lot of money. Heck, I just replaced a mirror for 450 EUR...


> and computationally intensive problem

Not an exact analogy, but this reminds me of ancient "software modems", which shaved off a few chips and offloaded processing to the CPU. They were cheap, but had a real impact on your computer's performance.

The trouble with image processing is that it seems basically impossible to perfect with the level of certainty you'd need for 100% autonomy, whereas LIDAR gives you very straightforward data to act on. You'd still need the cameras for recognizing traffic lights and such, of course.


Wasn't the given reason for one of the crashes (where the car went under a trailer) that the vision system thought the trailer was a street sign way above the road? That kind of thing is why you want Lidar.


Finally! Anyone has a guess as to what those might cost?

(Edit: whoops, embarrassing: totally missed the "hundreds of dollars." Sorry!)


[flagged]


The guidelines ask you to please leave out things like “Why you don't read?”:

https://news.ycombinator.com/newsguidelines.html


What does "solid state" here mean? If i am not blind the aryicle doesn't explain that, only that it is cheaper.


By 'lidar' they mean a '2D depth imager using lasers', which has historically been done with a fixed laser and a physically moving galvanometer. Solid state is about getting rid of the moving galvanometer.


Replacing it with what? It seems like something would have to be moving.


There are many techniques for optical beam steering. In some cases you can use MEMS devices (like DLPs), or you can apply a voltage across an optically clear material that changes its properties under the electric field. This is used to bend the light in certain directions (think of an electrically controllable prism).


Don't forget optical nanoantennas! Essentially a wiggly waveguide: you set up a standing wave inside it, and light exits the wiggles at a specific angle, which can be altered by changing the phase of the standing wave.

By having hundreds of wiggles you average out the fabrication error in any individual wiggle and produce an extremely controllable beam. Of course they're still working on the coherent part.


MEMS (as in a DLP TV) seems less than ideal for the abuse a vehicle takes. Curious whether the other option you mention is similar to a cathode ray tube?


It depends on the implementation, the concept seems fragile but they can be amazingly resilient. There are a bunch of MEMS accelerometers that support ranges in the 10s to 100s of Gs.


Square cube law. The mass (cube) of microscopic features is very small compared to their surface area (square), which dictates how thick their connections are. That means they have very little inertia and as long as they aren't scratched or wet they're very resilient.

Electro-optic materials are more like LCDs. Light interacts with the material and the material interacts with electricity. In a cathode ray tube there are electrons, not light, and they interact directly with the electric and magnetic fields. CRTs also need a dozen-odd fields to form a good beam, or 30+ for things like electron microscopes.


There are 3D radar systems where nothing moves; instead, the signal is shaped and directed using precisely timed small signals sent from an array of separately controllable antennas. Maybe light can also be done like that?

I mean - obviously you could just create a stationary disc with lasers mounted all around it and switch between them electronically, but that hardly seems cheaper.


See https://en.wikipedia.org/wiki/Phased_array.

Visible light is the same electromagnetic radiation as radio, just at a different wavelength, so the principle is the same. The much, much smaller wavelength (sub-micrometer vs. centimeters) makes it harder to build a device, though.
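
The steering geometry is the same at any wavelength; a small sketch of the textbook uniform-linear-array relation (element spacing and phase step here are illustrative, not from any real device):

    import math

    def steering_angle_deg(phase_step_rad, element_spacing_m, wavelength_m):
        """Uniform linear phased array: a progressive phase shift between
        neighboring elements tilts the wavefront by theta, where
        sin(theta) = phase_step * wavelength / (2 * pi * spacing)."""
        s = phase_step_rad * wavelength_m / (2 * math.pi * element_spacing_m)
        return math.degrees(math.asin(s))

    # Radar-ish: 3 cm wavelength, half-wavelength spacing, 45-degree phase steps.
    print(steering_angle_deg(math.pi / 4, 0.015, 0.03))       # ~14.5 degrees
    # Optical: 1550 nm light, 2 micron element pitch, same phase step.
    print(steering_angle_deg(math.pi / 4, 2e-6, 1550e-9))     # ~5.6 degrees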


This article [1] goes into some detail of what "solid state" means. I hope it helps to answer your question.

[1] Velodyne Says It's Got a "Breakthrough" in Solid State Lidar Design (Dec 2016) [ http://spectrum.ieee.org/cars-that-think/transportation/sens... ]


As far as I'm aware "solid state" always means "no moving parts", at least when used to describe electronic devices.


It means "not a big spinny thing that looks like an old police light on top of the car".


There are some basic diagrams from Quanergy [1] (page 9).

[1] http://on-demand.gputechconf.com/gtc/2016/presentation/s6726...


The article mentions the angles (120 horizontal, 35 vertical), but not the distance covered. This might explain why the interviewee does not believe the solid-state lidar will replace the current tech.


That's answered in the Velodyne press release [1] linked from the article: "The new Velarray LiDAR sensor uses Velodyne’s proprietary ASICs (Application Specific Integrated Circuits) to achieve superior performance metrics in a small package size of 125mm x 50mm x 55mm that can be embedded into the front, sides, and corners of vehicles. It provides up to a 120-degree horizontal and 35-degree vertical field-of-view, with a 200-meter range even for low-reflectivity objects. With an automotive integrity safety level rating of ASIL B, Velarray will not only ensure safe operation in L4 and L5 autonomous vehicles but also in ADAS-enabled cars. It has a target price in the hundreds of dollars when produced in mass volumes."

[1] [ http://www.businesswire.com/news/home/20170419005516/en/Velo... ]


Then why do you think the current technology will still be useful, given an order-of-magnitude higher price?


I'm not the writer of that piece. But my guess would be that automotive technology tends to change slowly and gets gradually replaced.

Car companies are getting used to reading data from standard Lidar. Getting them to suddenly dump it for Solid-State Lidar may be a step too fast and they would rather go through a transition period first (standard + solid-state) until they are happy with the performance of solid-state lidar.

You are referring to this in the article, right? “I don’t necessarily believe that [the solid-state lidar] will obviate or replace the 360-degree units—it will be a complement,” Marty Neese, chief operating officer of Velodyne, told IEEE Spectrum earlier this month. “There’s a lot of learning yet to go by carmakers to incorporate lidar in a thoughtful way.”


Is it hard to put in 3 x 120-degree units to get 360 coverage if the unit is only 55 mm wide?


It's probably not hard to put the units in, but I imagine combining the data from them wouldn't be trivial.


Are seams really that big a problem? You could go to 4 and have overlap, interleaving, and soft rollover.


Most autonomous vehicles already do this to some extent or another. Tools like ROS (which Uber at least uses for its vehicles) practically handle this for you.

The ROS tf2 library (tf for "transforms", not TensorFlow) allows you to basically input a 3D model of your vehicle, like you might get from CAD, add the pose and location of your various sensors, and it will automatically handle the spatial transforms required to build a single world model for you.
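
Under the hood that's just a rigid transform per sensor; here's a bare-bones numpy sketch of merging returns from three 120-degree units into a single vehicle-frame cloud (mount positions and yaws are invented for illustration - tf2 manages exactly this kind of frame tree for you, plus timing):

    import numpy as np

    def sensor_to_vehicle(points_xyz, yaw_deg, mount_xyz):
        """Rotate points from a sensor frame into the vehicle frame (yaw only,
        for brevity) and translate by the sensor's mounting position."""
        yaw = np.radians(yaw_deg)
        R = np.array([[np.cos(yaw), -np.sin(yaw), 0.0],
                      [np.sin(yaw),  np.cos(yaw), 0.0],
                      [0.0,          0.0,         1.0]])
        return points_xyz @ R.T + np.asarray(mount_xyz)

    # Three hypothetical 120-degree units: front bumper, left rear, right rear.
    sensors = [
        (   0.0, ( 2.0,  0.0, 0.5)),
        ( 120.0, (-1.0,  0.8, 0.5)),
        (-120.0, (-1.0, -0.8, 0.5)),
    ]
    clouds = [np.random.rand(100, 3) * 20 for _ in sensors]   # stand-in point clouds
    merged = np.vstack([sensor_to_vehicle(c, yaw, pos)
                        for c, (yaw, pos) in zip(clouds, sensors)])
    print(merged.shape)   # (300, 3): one cloud, one vehicle frame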


MIT/DARPA also have a combined solid state steerable beam transmitter and sensor using phased arrays:

http://spectrum.ieee.org/tech-talk/semiconductors/optoelectr...


The MIT lidar has really bad resolution and range iirc


Does anyone know if this, or any of the lidars used for self-driving cars, is capable of doing Doppler lidar?


Is lidar still necessary if computer vision advances far enough? We drive just fine without radar.


At the current stage, depending on computer vision may be dangerous. See this BBC article about how it can be fooled [1]

[1] http://www.bbc.com/future/story/20170410-how-to-fool-artific...


Couple of issues with this point:

1) We don't just rely on our vision when driving but also on sound (nearby cars) and touch (the feel of the road). I actually haven't seen any self-driving car projects talk about this aspect, which is interesting.

2) Self-driving cars can't be just as good as humans; they need to be effectively perfect. Quite a lot of people think they are amazing drivers, i.e. infallible, so mistakes from a self-driving car aren't going to go down well.


There are deaf drivers, and I'd wager the vast majority of human drivers pay no attention to the feel of the road.


I haven't really seen many people talking about "Human sensor fusion" regarding sound/touch/etc, and I think it's a very important point.

Are driverless cars even measuring sound, vibration, and G-Forces? I'd like to better understand how those play into the whole sensor fusion of these systems.


You deserve some up-votes.

Tesla famously doesn't use LIDAR, saying cameras (possibly with parking radar) are good enough.

Here's a nice summary: https://cleantechnica.com/2016/07/29/tesla-google-disagree-l...


I'm not sure whether that isn't just a business decision. CMOS cameras are cheap, small, and they don't change very much. Ergo, once they manage to develop a solution using cameras, they won't have to retrofit the hardware in customers' hands to support it, and they can still claim that one day this car might drive itself.

With LIDAR, well, there's a new type of LIDAR coming out. Then there may be different successor designs and form factors.


Tesla has also killed people with this decision. They've had at least one fatal crash due to the image-based system getting confused by a semi truck with a white cargo container in bright sunlight. It thought nothing was in that space and changed lanes accordingly; the car ended up under the semi truck and the occupants were crushed.

Active sensors are needed for human safety applications, passive doesn't make sense.


Tesla has saved people as well by being able to roll out this tech widely. Our eyes are passive and they work. Teslas do have radar sensors.


Sebastian Thrun will tell anyone who will listen that he is sorry he got the self-driving car world hooked on LIDAR. He is 100% behind computer vision now. He would much rather use several $10 web cameras than multi-hundred-dollar LIDARs. If you look at term 1 of the Udacity self-driving car syllabus, it is CV and ML, with zero about how to process point clouds.

NVIDIA is also betting on CV for self driving cars.


Human eyes have depth perception; a camera doesn't.


Stereopsis and stereoblindness is actually a fascinating topic. Some humans don’t have it, and still manage to drive, ride bicycles, or fly planes.

The Mind's Eye by Oliver Sacks has an interesting chapter on a woman who managed to develop stereovision in her late 40s through vision therapy.


Humans have a huge amount of "common sense" knowledge that makes object recognition a lot easier, even without 3D information.


Some VR users (and IMAX 3D customers) have reported 'learning' stereovision.

I'm pretty sure I'm stereoblind. I think my brain is too lazy to combine images into 3D. Pot seemed to kick my brain into overdrive and process that signal. My friend laughs at me for "seeing in 3D for the first time" and doesn't believe me.


Even with only one eye, the eye has to focus differently for different distances. I think that information is present somewhere in the brain and can be used (even when not consciously accessible).


Binocular depth perception doesn't work past maybe 20-40ft, and safe one-eyed drivers exist. Other forms of depth perception could theoretically be done just as well with a camera as with an eye.


Does the human eye have anything special in that regard that couldn't be emulated by two side-by-side cameras? As far as I know, depth perception is mostly an awesome feature of our brain (software), not of our eyes (hardware).


The human eye has some interesting optical defects like chromatic aberration that man-made optical systems usually fix in hardware but that the brain fixes in software. These aberrations, when available to the software as uncorrected input, can provide information useful for determining how to focus properly (and by extension, focus distance, which gives you distance-to-point information dependent on DOF). [1]

Modern photographic lenses tend to contain a large number of lenses in series, a few of which often have an exotic property like being apochromatic, aspheric, or made of fluorite, but such well-corrected lenses may be counter-productive for machine vision. Phase detection in a DSLR relies on separate collection sites like stereo vision, but autofocus still hunts under bad conditions.

[1] https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3189032/


The brain is also hardware, just extremely efficient. When compared to modern processors, the visual cortex does way more while using less energy.


Luminar has announced an interesting lidar as well, but there's no word on how the cost will come down, since it's a 1500 nm lidar, requiring indium gallium arsenide and other expensive, exotic materials.


I would suspect any solid-state lidar design at this point would be GaN (gallium nitride), unless somebody has made a very secret breakthrough.


The current generation of solid state lidar uses silicon, because of the different wavelength.


Is there any chance Lidar could be used in consumer drones in the future?


Absolutely, yes. Even single-point laser rangefinders like the one linked in the top comment here are useful for consumer drones.

The hack people used to do (maybe still do?) was to combine a laser pointer and a webcam, and infer the distance based on where in the Y axis the laser appeared in the image that the webcam received.

This, which is many points in many directions, is much, much better, and it sounds like it will be cheap (at least much cheaper than these devices have been).
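
That hack is plain triangulation: the laser sits a known distance from the camera, so the dot's pixel offset from the image center maps to an angle, and the range follows from the fixed baseline. A sketch with invented calibration numbers:

    def distance_from_laser_dot(dot_offset_px, baseline_m, focal_length_px):
        """Laser mounted parallel to the camera axis, baseline_m from the lens.
        Pinhole model: distance = baseline * focal_length / pixel offset."""
        return baseline_m * focal_length_px / dot_offset_px

    # Invented rig: laser 6 cm below the webcam, ~800 px focal length.
    for offset in (120, 48, 12):
        print("dot %4d px from center -> %5.2f m" % (offset, distance_from_laser_dot(offset, 0.06, 800)))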


One counter-objection: LiDAR on drones will be a whole lot less useful if they all 1) use the same frequency of EM, and 2) can pick up scattered signals from other drones (because a beam was pointed at another drone, or from stray reflections).

Right now there's not much in the way of LiDAR on anything, but once there is, we will need to find ways to stop other active sensors from interfering with our own.


Cheap, small solid state Lidar will have applications across vehicles, robotics, augmented reality, surveillance, mapping and 3D scanning and who knows what else. In theory SSL could be as cheap as phone cameras, and will open up a whole range of possibilities.


Sure, the LeddarTech Vu8 is targeting that in some of its marketing material, at least.


I wonder if we're going to start seeing 3D barcodes that robots use to read each other's ID and relative orientation.


A normal 2D QR code would give you orientation, see all the computer vision AR examples. What would 3D add to this?


Well you see a 3D one has more Ds in it, so more Ds = more better?

/s

Totally not my field at all, was just daydreaming a bit. Thanks for the info.


Velodyne hopes to win the new phase of this game by being first to market. “The first mover sets the standard,” Neese said. “Software is 60 percent of the effort, so if you show up [later] with a new piece of hardware, it may not fit in.”

Garmin has one on the market for $150 https://www.sparkfun.com/products/14032?gclid=CIzZ86-fs9MCFQ...


That thing is basically just a laser range finder, it doesn't produce a 2D "image" of distances (unless you pair it with something to spin it around, at which point you just have a bad version of existing lidars).


> [The new Velodyne unit] provides up to a 120-degree horizontal and 35-degree vertical field-of-view

That LIDAR-Lite is only a single "pixel". The Velodyne press release doesn't specify the resolution, but I'm certain it's > 1x1.


Garmin's (the acquired LIDAR-Lite) is a single-point reader, which is very restrictive for many applications.

Sometime back Quanergy made the same splash that Velodyne is making now. When you talk to them, their price points are very high, compared to what they were talking about in their press release.

I am still waiting for a decent Lidar unit that is dependable, and costs < $500 (with > 1k points per second spread across the sphere).


The Garmin unit seems to only measure a single distance in the direction you point it. This new Velodyne unit touts a 120° angle (but says nothing about the horizontal or vertical resolution).

Their spinning lidars have 16 to 64 vertical beams and a horizontal resolution depending on the rotation speed, with a sampling rate of 300,000 to 2.2M points/sec.

http://velodynelidar.com/products.html

edit - totally missed how many people already pointed this out, sorry.


That looks to be a rangefinder, mapping the distance to a single point in front of the optics.

The device in the article maps a much larger field of view.


Great, now all you need to do is very accurately scan that over thousands of individual points per second.


Anyone know what the deal is with Quanergy [1]?

[1] http://quanergy.com


I'm not sure what you mean by "what the deal is with Quanergy". Does this article [1] answer your query?

[1] "Quanergy Announces $250 Solid-State LIDAR for Cars, Robots, and More" (Jan 2016) [ http://spectrum.ieee.org/cars-that-think/transportation/sens... ]


How does this news bode for Quanergy?


Quanergy gave me an early quote on a dev kit last year that was an order of magnitude higher than that. So unless they've reached that price point from their press release, probably not good...


Quanergy gave me a quote too. It seems they won't sell until you buy thousands of units. They use old-school contact-sales techniques and just seemed very scammy throughout the whole process.


Are you really basing your share buying/selling decisions off HN comments?


Cute, but no. I've heard of Quanergy's solid-state technology and am curious to hear more from someone with expertise.


Would these be able to be used on drones?


It mentions "The Velodyne package measures 125 millimeters by 50 mm by 55 mm", so pretty small.

But, as I understand it, to do something useful with that, you need to be pulling in a lot of data in real time and processing it. So it may not be the Lidar unit itself that's the weight constraint.



