Contrary to Musk's claim, Lidar has some advantages in Self Driving technology (arstechnica.com)
109 points by gordon_freeman 76 days ago | 160 comments



What gets me is the obsession with "identifying objects". The first thing you want for self-driving is an elevation map of what's ahead. If it's not flat, you don't go there. You don't need to know what it is. This is what LIDAR is good at.
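To make that concrete, here's a toy version of the flatness test over a rasterized elevation grid (the grid layout and the 5 cm step threshold are mine, purely for illustration):

    import numpy as np

    def drivable_mask(elevation, max_step_m=0.05):
        # elevation: 2D top-down grid of surface heights (meters),
        # built from the LIDAR point cloud. A cell counts as drivable
        # if the height change to every neighbor is small. No object
        # classification needed.
        dz_rows = np.abs(np.diff(elevation, axis=0))  # (n-1, m)
        dz_cols = np.abs(np.diff(elevation, axis=1))  # (n, m-1)
        ok = np.ones(elevation.shape, dtype=bool)
        ok[:-1, :] &= dz_rows <= max_step_m
        ok[1:, :]  &= dz_rows <= max_step_m
        ok[:, :-1] &= dz_cols <= max_step_m
        ok[:, 1:]  &= dz_cols <= max_step_m
        return ok  # True where the surface is "flat enough"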

Radar is not yet usable for ground profiling. Radar returns from asphalt at an oblique angle just aren't very good. Nor is there enough resolution to see even large potholes. Maybe someday, with terahertz radar. Not yet.

Now, you can go beyond that. If you're following the car ahead, and it's moving OK, you can assume that what they just drove over was flat. If the road far ahead looks like the near road, and the elevation map of the near road says it's flat, you can perhaps assume that the road far ahead is flat, too. That's what the Stanford team did in the DARPA Grand Challenge.

Identifying objects is mostly for things that move. Either they're moving now, or they're of a type that's likely to move. This is where the real "AI" part comes in - identifying other road users and trying to predict or reasonably guess what they will do.

Collision avoidance based on object recognition has not worked well for Tesla. They've hit a street sweeper, a fire truck, a crossing tractor-trailer, a freeway barrier, and some cars stalled on freeways. All big, all nearly stationary. This is the trouble with "identify, then avoid".


Even when trying to identify objects, I don't get the push for trying to understand what they are. The way I see it (and I might be very wrong here), you should be able to get good enough results by identifying things around you that are solid (that's the most important part), and tracking their velocity. This should be doable without any kind of understanding of what the objects are. Then the car's control system should keep the speed and direction such that the car can always be stopped before any of the tracked objects hit it.
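The core constraint there is just stopping distance. A sketch, with an assumed braking deceleration and system latency (illustrative numbers, not anyone's spec):

    import math

    def max_safe_speed(gap_m, decel=6.0, latency_s=0.3):
        # Solve gap = v*latency + v^2/(2*decel) for v: the highest
        # speed at which we can still stop short of the nearest
        # tracked solid object. 6 m/s^2 and 0.3 s are assumptions.
        a = 1.0 / (2.0 * decel)
        b = latency_s
        c = -gap_m
        return (-b + math.sqrt(b * b - 4 * a * c)) / (2 * a)

    # A solid object tracked 40 m ahead:
    # max_safe_speed(40.0) -> ~20 m/s (~72 km/h)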

Having done that, you can play with identifying lanes of traffic and sidewalks and other "normal" features, and potentially ignoring objects there as long as they behave according to expectations. But I'd think the first order of business would still be ensuring that you don't run into solid objects, whatever they are, and whether or not they're moving.


Some things you need to identify and understand: lane markings, police officers, temporary road signs, construction workers guiding traffic, traffic lights, someone waiting at the "yield to pedestrians" crosswalk, emergency response vehicles, ice patches. Just "not running into things" isn't enough to drive on roads. Really you should be trying to understand every vehicle, since naively extrapolating velocity is insufficiently cautious: it doesn't account for future acceleration, which can cause a crash.


>Even when trying to identify objects, I don't get the push for trying to understand what they are.

Without that you don't know lots of things.

1) Whether they might move or are completely stationary (e.g. a pole vs a motorcycle).

2) How they move.

3) Which you're better off hitting if you need to swerve to avoid another car (is it better to hit the fruit stand or the 10 year old boy?)

4) Whether they tell you something (e.g. traffic signs, traffic lights, a traffic cop directing you elsewhere, for starters).

5) Whether they represent some danger and you need to keep a distance (e.g. a bus with open doors, from where someone might come out at any minute).


>3) Which you're better off hitting if you need to swerve to avoid another car (is it better to hit the fruit stand or the 10 year old boy?)

This is a non-issue in practice. Just brake and don't turn the wheel. It's a naive approach and it leaves a lot of accident avoidance potential on the table in most situations, but it's what most people do and expect everyone else to do. Trying to do anything else is impossible to justify to the public, because everything you have to say about why you shouldn't panic-stop for an object in the middle of the freeway when there's an open lane beside you will be drowned out by people telling you you'll crash the car if you dare touch that steering wheel in an attempt to not crash.


You're 100% wrong. Many people, especially big trucks, have swerved off the road to avoid killing people and animals. If you'd ever taken a defensive driving course, you'd know that it's always quicker to steer to avoid a collision than it is to brake. This takes a supreme amount of context and situational awareness.


>You're 100% wrong. Many people, especially big trucks, have swerved off the road to avoid killing people and animals. If you'd ever taken a defensive driving course, you'd know that it's always quicker to steer to avoid a collision than it is to brake.

You are 100% failing to properly parse my comment. I'm expressly saying that braking with zero regard for the situational details is not ideal, but it accomplishes the goals of a self-driving car:

* be as good or better than humans at not crashing

* react to situations in a manner similar to and predictable by humans

* not make any important stakeholders more likely to get sued

* actually be implementable with current or near future technology

>This takes a supreme amount of context and situational awareness.

Which is hard enough to teach to people, let alone an AI.


Err, if a self-driving car just brakes in that situation, then it fails all of the above goals:

1) "be as good or better than humans at not crashing"

It could still crash because of momentum/distance. It could cause a pile-up.

2) react to situations in a manner similar to and predictable by humans

People would swerve depending on the situation. It's extremely common, and the logical thing to do in many cases.

>not make any important stakeholders more likely to get sued

Getting sued depends on the effect of your actions. If you kill/hurt your passengers, cause a pile-up, hit the person in front, etc. you will get sued.

>actually be implementable with current or near future technology

That's irrelevant...


>This is a non-issue in practice. Just brake and don't turn the wheel.

That's really not what a driver would do. Depending on speed and setup, it risks being hit from behind, causing a pile-up, and of course being forced forward (by the car behind you hitting you) and hitting the 10 year old...

Not to mention the chances of hurting/killing the passengers of your own car if you just brake suddenly, as opposed to swerving...


The part about choosing to swerve vs. brake based on the risk of being hit from behind does not make sense to me.

It reminds me of something motorcycle riders say after a crash: "I felt I wouldn't be able to stop in time, so I laid the bike down."

The real reason is: they press the front brake too hard (instinctively), the wheel locks up, and without the gyroscopic effect the slightest disturbance lets gravity do its work. However, admitting the mistake is embarrassing, so a story justifying the action gets told.

In the fraction of a second available for the decision, instinct prevails. Some people swerve, some people brake. If there is enough time to evaluate the action, it doesn't matter which choice was made (either the distance is enough to brake or there are no other cars around).


In a lot of situations blindly braking will get the passenger killed; for example, a moose or caribou will go through your windshield if you hit it.


Identifying objects means you not only know their current position and speed, you can also anticipate their possible speed and trajectory changes.


Wouldn't you need advanced AI for that? I mean, even humans, who are basically conditioned from childhood (at least in the western world) to judge traffic, at times have trouble doing that. That's why we have safety distances: in case your prediction fails, you need enough distance to just react and avoid an accident. That's just one of the reasons why true self-driving is such a hard problem. Just reliably solving the question of what is ahead of and around a car, and coming up with a course around those stationary and moving objects, would be a major step forward.


There's no safety distance available when cars can pull into the road on which you're driving. That's one reason it is valuable to ID objects--there's a big difference between a building sitting by the side of the road and a running car sitting in a driveway with a driver behind the wheel.

Safe driving can mean taking proactive action in the latter case, like changing lanes away from the driveway (if you can) or slowing down, just in case that other driver screws up and tries to pull out in front of you at the last second.

So, yeah, you would need pretty advanced AI for that. Throw it on the pile of reasons I think it will be a long time before we see self-driving cars that are good enough (and safe enough) to replace humans.


You're discussing "drivable area detection".

So far, most manufacturers' cars have no problem with this. Maps and localisation are very, very effective for it, and both camera- and lidar-based systems are pretty good at it. Nearly all players in the self-driving world use a combination. Overall, it's pretty much a solved issue.

Object recognition, behaviour prediction, object interactions, etc. are the remaining unsolved issues, and that's why people talk about them more.


So far, most manufacturers' cars have no problem with this.

Not Tesla.

Tesla hitting construction barricade.[1]

Tesla hitting freeway offramp divider.[2]

[1] https://www.youtube.com/watch?v=-2ml6sjk_8c

[2] https://www.theregister.co.uk/2018/06/07/tesla_crash_report/


Tesla is really reluctant to use maps, and I don't really understand why.

The excuse of "maps can't deal with changes" doesn't hold up. A map gives you the correct answer 99.99% of the time. Cameras give you the right answer 99.9% of the time. A combination is better than either.
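And if (a big if) the two failure modes were independent, the arithmetic is straightforward:

    p_map_wrong = 1e-4  # map wrong 0.01% of the time
    p_cam_wrong = 1e-3  # camera wrong 0.1% of the time

    # If failures were independent, both being wrong at once:
    p_both_wrong = p_map_wrong * p_cam_wrong  # 1e-7
    # In practice failures correlate (a construction zone changes the
    # road AND confuses the camera), so the real gain is smaller.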


Terahertz radar smoothly transitions into LIDAR as the frequency increases.


In theory, yes. In practice, terahertz RF technology isn't here yet. The first terahertz amplifier was made in 2014, it was DARPA-funded, and it's not a simple semiconductor device.


This author missed Musk's point entirely. His argument is that to solve self-driving you need a deep understanding of your surroundings, which you can only achieve with visible light spectrum video. That's the real hard problem to solve; you need cameras to solve it, and if you solve it then lidar becomes unnecessary.

The "doomed" part is that if companies are spending all of their energy on creating neural nets around lidar, then they'll reach a local maximum where they never begin to tackle the much more difficult problem truly needed for self-driving.


Seems to me that "deep understanding of surrounding" and "only achievable with visible spectrum" are contradictory. Visible light is readily attenuated, occluded, and reflected.

The first time Tesla runs over a kid chasing a ball into the street because it couldn't see him between the cars, this will be readily apparent.

Seems to me that Tesla is in the business of selling cars, while other self-driving companies are interested in AVs for ride sharing or trucking. The latter have different requirements for styling and cost than the consumer case, so Musk has several limitations on the sensor suite he can include in a Tesla.

What he's doing is trying to argue a $5k system with cheap cameras and crappy radar coverage is all that is needed, because a full no-blind-spot multi-spectrum system would both cost too much AND likely make the car look ugly.

Two people have already been killed, and several injured, by Tesla autopilot due to blind spots.


Can you explain how a lidar sees a child hidden between cars? I was under the impression that lidar was line of sight.


The things detectable in the visible spectrum are what humans use to drive.

Will it be apparent how fundamentally problematic this is when a human runs over a kid chasing a ball into the street because they couldn't see him between the cars?

How many people have been killed by human drivers due to blind spots?


Exactly. Until a car with reliable object permanence is demonstrated, Tesla must tone down their promises. This LIDAR controversy is just a sideshow, though a car having it will be able to outperform a car without it in many scenarios. An improvement over baseline human perception is very welcome.


> His argument is that to solve self-driving you need a deep understanding of your surroundings which you can only achieve with visible light spectrum video. That's the real hard problem to solve

Musk's argument is more that cameras should be sufficient because humans can drive using only two eyes to perceive the driving environment. He always neglects to mention that humans do this with a combination of sight and a brain capable of general intelligence. I'm sure it's true that if Tesla invents AGI, self-driving with just cameras will become tractable. But "real hard problem to solve" doesn't begin to capture the difficulty.

In reality, since no one has yet invented a self-driving computer, it's impossible to say what components are necessary or even whether there may be more than one way to skin the cat. But one source we should probably take with a grain of salt on this issue is those (like Musk) with an intense commercial interest in one perspective.


You could have made the same argument about any hard problem before it was solved.

"So this guy says a machine can outplay chess with just a CPU, some memory and a bit of code. He neglects to mention that humans have a brain capable of general intelligence".

Years later, computer beats human in chess.

"So this guy says you can teach a computer to play Go just by unsupervised training of neural networks. He neglects to mention that humans have a brain capable of general intelligence".

Years later, computer beats human in Go.

"So this guys says you can program neural network to play computer games competitively using vision and deep learning. He neglects to mention that humans have a brain capable of general intelligence".

We don't need AGI for self driving.

The difficulty of self-driving is probably less than that of a dog walking down the street.

No, a dog doesn't steer a car, because he doesn't have hands, but he's performing the same vision and planning tasks as a human (or AI) driving a car.

He knows where he is, he knows where he wants to go and he uses vision and his non-AGI brain to plan a path to get there while also avoiding dynamic, unexpected obstacles.


Do you have any counterexamples which aren't finite games where all the potential moves and outcomes can (at least in theory) be exhaustively enumerated at any point in time?


You can in theory approximate almost everything nearly perfectly with a very large but finite set of moves. Driving a car can be approximated as a set of finite moves (i.e. 0.001 degree changes in wheel position) made at finite intervals (say 10,000 "frames" per second). That'd give you more precision and a finer time grain than even a human brain is capable of.


And you don't need a 'very large' set of moves in the first place. 100 wheel positions are enough to give you 1-degree accuracy near neutral, and moderate but sufficient accuracy for strong turns. If you're looking at a delta from the previous position, 30 will do the job. Multiply that by 10 to 40 positions for the pedals, run the whole thing at 20Hz, and you're good to go.
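Spelled out with the numbers above, that action space is tiny by game-AI standards:

    steering_deltas = 30   # relative wheel positions per tick
    pedal_positions = 40   # throttle/brake axis
    hz = 20                # decisions per second

    actions_per_tick = steering_deltas * pedal_positions  # 1,200
    # 1,200 choices every 50 ms is a trivially small action space
    # compared to Go; the hard part is the state, not the moves.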


There's Google's StarCraft AI ( https://deepmind.com/blog/alphastar-mastering-real-time-stra... ); the real-time nature makes the potential search space absurdly large.


kjksf did give an example: Go. The state space there is much larger than could ever be enumerated before the heat death of the universe.


That's why I said "at least in theory." The fact remains that everything theoretically possible in Go is known ahead of time, and furthermore it is trivial to generate training data and run simulations to improve the algorithm without putting human lives at risk.


It seems to me that one of the difficulties we have with making robots is trying to model them too closely after ourselves. We don’t have a humanoid maid, but we do have a roomba; similarly for lots of industrial automation. Home automation doesn’t look like c3po walking around your house flipping light switches.

It seems like maybe not the best reasoning to say “humans do it this way, so that’s how my robot should do it.”


Except a Roomba is much more limited than a human maid even in the constrained task of vacuuming the floor. And for a self-driving car you do need human levels of performance. Nor can you cheat and redo the infrastructure to accommodate a simpler system, like with home automation (i.e. replacing light switches with relays).


Except that you could most definitely repurpose some infrastructure to better suit AD. Having a somewhat protected AD-only lane in cities would allow autonomous transport to be utilized, even today.


Even if that is his point, that's making a lot of assumptions.

I don't know why Tesla's Autopilot keeps missing obvious impervious, occlusive surfaces. Detecting obvious impervious, occlusive surfaces is what Lidar excels at, so it's kinda making the point for the other side.


Cameras also can't really see around objects (in front of the car, for instance). Lidar can. With cameras, it's as if the goal is "to mimic human vision". That's fine and all, but why can't we make it "beyond human vision"?


I’m really curious to learn how lidar can see around things. You’re the second person to make this claim in this thread and I’ve never heard of it. Please explain or provide a link or something.



Hey, thanks, that was interesting.

It doesn't look like it will be widely available for a few years at least though.


> which you can only achieve with visible light spectrum video

Well, no.

RGB is really useful. But actually, you can get pretty good object recognition from a point cloud alone. I mean, it's better to have RGB as well, but infrared works just as well.

The problem that appears to escape a lot of the commentary is latency. Sure, you can have a rudimentary stereo camera setup and get _some_ depth information reasonably fast. But it won't be good enough to tell you if that blob that's 100m out is stationary or moving towards you.

Lidar gives you a high-resolution, long-range 3D point cloud at 30 Hz (or faster). The best, most reliable depth from monocular/stereo will have a latency of at least 150 ms and will be a tiny resolution.
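The long-range claim is easy to sanity-check: stereo depth is z = f*B/d, so depth error grows with the square of range. A sketch with assumed camera parameters (not any product's specs):

    def stereo_depth_error(z_m, focal_px=1000.0, baseline_m=0.3,
                           disp_err_px=0.5):
        # z = f*B/d implies dz ~= z^2 * dd / (f*B): a half-pixel
        # disparity error is harmless nearby but huge at range.
        return (z_m ** 2) * disp_err_px / (focal_px * baseline_m)

    # stereo_depth_error(10.0)  -> ~0.17 m
    # stereo_depth_error(100.0) -> ~16.7 m: useless for deciding
    # whether a blob 100 m out is stationary or closing on you.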

The chances are that we will have sub $100 CCD based lidar before we have low noise/low latency/full resolution depth from monocular/stereo cameras.

The other big issue is that to get decent high-res depth from deep learning, you need decent segmentation. Segmentation comes for free with lidar (assuming you overlay RGB on it).

> spending all of their energy on creating neural nets around lidar then they'll reach a local maximum where they never begin to tackle the much more difficult problem truly needed for self-driving.

This does not make all that much sense. You don't just train on lidar, you feed in steering, acceleration, braking, gears, signs, radar, pretty much everything.

The other important thing to note is that Tesla's stuff is still level 2. Volvo, BMW, and a few truck companies are all at least level 4. We are celebrating a "genius" who has yet to actually release a system that does what he claims it should.


Can you give examples of specific vehicles from Volvo/BMW/truck companies that are at level 4?


Musk's argument was refuted by his own Data Scientists.

They admitted their own models are far from perfect and will likely never be. The concerning one in particular was the "is this a large object" model, which initially failed to identify objects such as car-carrying trucks, cranes, etc.

With Lidar you can at least be certain that it will identify an obstacle.


> a deep understanding of your surroundings which you can only achieve with visible light spectrum video

Why can you only achieve that using visible light spectrum video?


Musk's point has always been to combine vision with radar, instead of Lidar. I'm amazed that this combination is usually overlooked in discussions of Tesla/Lidar.


Exactly. What is rarely mentioned is his exact quote on the reasoning for radar vs. lidar:

“If you’re going to use active photon generation, don’t use visible wavelength, because with passive optical you’ve taken care of all visible wavelength stuff. You want to use a wavelength that’s occlusion-penetrating like radar. LIDAR is just active photon generation in the visible spectrum.”

This article is still missing the point when talking about redundancies. LIDAR only works in essentially perfect weather ("not occlusion-penetrating"). Even if it serves only as a "redundancy" there's no advantage in relying on a sensor suite that operates in a less-safe mode in the most adverse road conditions. So basically if you are driving in snow or fog, your LIDAR-based AV has to fall back to Radar+Cameras. If that system can pass all the safety tests in the worst-case road condition then there is no value in the additional sensors that add expense but no safety margin.

What's even more overlooked is power consumption. LIDAR is far more power intensive, especially when we're talking about multiple packages per vehicle. In the future world of Autonomous Electric Vehicle Fleets, the vehicles using LIDAR will get significantly less range efficiency than their radar counterparts and cost significantly more to build. In a fleet scenario where every margin counts this will result in a significant economic pressure to ditch LIDAR.

So in the end I think Elon will be proved right. Those currently investing in LIDAR-based systems will eventually ditch it for purely practical economic reasons. Those that don't will be completely destroyed in the open market.

The real competitive advantage for AEV's is in the software, not hardware. LIDAR is a crutch for bad software that reaches a theoretical maximum far short of what is needed for economically-viable LVL5 autonomy.

I'll restate this clearly: there's simply no economic or technological advantage to using LIDAR for AEV's.


Lidar doesn't use visible spectrum light. They're usually infrared, so Musk's quote makes no sense.

You're making a lot of strong assumptions to draw your conclusions: that commodity cameras and radar can compete on measurement accuracy with lidar systems, and that lidar costs won't decrease with additional investment (we've already seen costs decrease, by like a factor of 10x in less than a decade). The power questions also aren't cut and dry: if you need extra in-vehicle GPUs to support the radar+camera approach, you may well be using more power than a lidar based approach.

There's also no real requirement that AVs operate in snow or dense fog. Those are only considerations in certain climates in certain seasons. You don't actually need the safety system to pass the safety tests (that don't currently exist) in worst case conditions if the vehicle works anyway. Why optimize for the worst case first?

I'll respond clearly: We're multiple computer vision leaps forward away from what Elon needs for success. They're easily half a decade behind Lidar based systems. And people die as a consequence of putting those systems on the road.


> The power questions also aren't cut and dry: if you need extra in-vehicle GPUs to support the radar+camera approach, you may well be using more power than a lidar based approach.

All current lidar-based approaches I'm aware of also supplement with radar+cameras. LIDAR isn't sufficient in isolation. GPUs consume way too much power, there's no way you can cavalierly just add more of them as a scaling solution.

> You don't actually need the safety system to pass the safety tests (that don't currently exist) in worst case conditions if the vehicle works anyway. Why optimize for the worst case first?

Not even sure how to interpret such a statement.

> I'll respond clearly: We're multiple computer vision leaps forward away from what Elon needs for success. They're easily half a decade behind Lidar based systems. And people die as a consequence of putting those systems on the road.

Custom ASICs for ML that Tesla is building is fairly well-understood tech at this point. High-end smartphones have used similar tech for years now (though perhaps an apples-oranges comparison).

Only criticism I would level against Tesla's current approach is their overly optimistic time estimates, and hand-waviness about the complexity of solving certain very complicated edge-cases. However their technological approach is quite sound.


> All current lidar-based approaches I'm aware of also supplement with radar+cameras. LIDAR isn't sufficient in isolation. GPUs consume way too much power, there's no way you can cavalierly just add more of them as a scaling solution.

Yes, but they don't need to get point-cloud-level spatial data from a suite of cameras. That takes more compute power to do accurately at 60 fps with cameras than with a lidar.

> Not even sure how to interpret such a statement.

Let me rephrase: you don't need the vehicles to work in worst-case conditions at all if, economically, they're still a success when you don't allow them to run in those conditions.

If Waymo or Cruise or whomever has self-driving taxis in 2020 deployed in temperate cities, and Tesla doesn't have L4 autonomy until 2024, at which point it also works in a blizzard, it doesn't matter if they're cheaper and more effective than them. Tesla will have already lost.

> Custom ASICs for ML that Tesla is building is fairly well-understood tech at this point.

This is one small piece of the puzzle. You also need algorithms that Tesla doesn't appear to have, and (camera) hardware that Tesla claims to have but others seem to agree can't support what they want.


> If Waymo or Cruise or whomever has self-driving taxis in 2020 deployed in temperate cities, and Tesla doesn't have L4 autonomy until 2024, at which point it also works in a blizzard, it doesn't matter if they're cheaper and more effective than them. Tesla will have already lost.

If those self-driving taxis are not available for retail purchase and cost their operators six-figure sums to add to their fleet - and Tesla have it working in their promised $35k Model 3 - then Tesla will win in the long-term.

Also, remember the car industry moves very, very slowly (no pun intended). The Model 3 has been out for over a year now and has barely captured a fraction of its possible market.

Consider that people will buy/lease a car for individual use for 3-5 years before replacing it: someone buys a normal car in 2021 because they need one at the time, but will still buy Tesla’s FSD car when it comes out.

The “autonomous taxis will replace individual car ownership” trope only applies to hyper-urbanised environments where parking spaces are luxury lifestyle accessories - and where people are already well-served by public-transit infrastructure.

Finally: people who try an autonomous (but unaffordable for exclusive individual use) taxi service in 2020 might be so impressed with the experience that they vow to buy the first available individually purchasable autonomous car - even if that doesn’t happen until Tesla’s 2024 models come out - Tesla still wins.

By analogy, consider that Tesla is probably the /hottest/ car brand in the world today - and they did it without any traditional advertising - and they got started over 10 years ago. Enough people influenced by reviews and YouTube videos of the Roadster and 2012 Model S translated that into real money being spent on Model 3 buys today. That’s an anticipation gap of at least 7 years - that’s impressive. Can you imagine people waiting 7 years for an “affordable” version of a luxury product? Why weren’t any of the other manufacturers doing anything to meet this clear demand for physically attractive EVs with over-hyped autonomous capabilities?

(Disclaimer: I own a Tesla Model X)


> only applies to hyper-urbanised environments

That's where most of the population lives in western countries (both US and EU have ~10% of the population in the largest 10 cities and that's without the well connected metropolitan areas which probably at least double the numbers). So if the futuristic predictions of shared vehicles actually materialize the market for the "personal" vehicle would shrink and manufacturers will have less incentive for whatever they produce for the "personal car" slice of the market. Unless we also get a decentralization trend and people start leaving urban centers.

> Why weren’t any of the other manufacturers doing anything to meet this clear demand for physically attractive EVs with over-hyped autonomous capabilities?

While I agree with the "physically attractive EVs" part (although for me it's more of a nice-to-have than a must), the second part is pretty cynical. You're asking why manufacturers aren't knowingly killing people to sell more cars. Well, they are. Some promise you clean diesel, some promise you self-driving. Both missed the mark a bit. And both of those promises sold lots of cars.


>> I'll respond clearly: We're multiple computer vision leaps forward away from what Elon needs for success. They're easily half a decade behind Lidar based systems. And people die as a consequence of putting those systems on the road.

>Custom ASICs for ML that Tesla is building is fairly well-understood tech at this point. High-end smartphones have used similar tech for years now

joshuamorton is right and you didn't really address the point made. It isn't about understanding the hardware, it's about understanding the software. You point me to any individual who can tell you concretely how a specific convolutional neural network (CNN) arrives at a given solution and I'll take this all back.

We know so little about how CNNs work, and plain old neural networks (NNs) for that matter, it's embarrassing. They're prone to adversarial attacks; not only that, you can successfully mount the same attack against _any_ NN that was trained on a common dataset. We don't know how to effectively defend against these yet in a white-box setting.
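For anyone who hasn't seen one of these attacks, the canonical fast gradient sign method (Goodfellow et al.) is only a few lines; a minimal PyTorch sketch:

    import torch
    import torch.nn.functional as F

    def fgsm(model, x, label, eps=0.03):
        # Fast Gradient Sign Method: nudge every pixel by eps in the
        # direction that most increases the loss. The result usually
        # looks identical to a human but can flip the prediction.
        x = x.clone().detach().requires_grad_(True)
        loss = F.cross_entropy(model(x), label)
        loss.backward()
        return (x + eps * x.grad.sign()).clamp(0, 1).detach()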

We have no idea what the solution space looks like. We barely understand why one of the simplest optimization algorithms outperforms almost all others at these tasks. We barely understand why randomizing data visitation results in solutions that perform completely differently.

The counterargument goes something like "we tested it on a big eval set and it aced it." Well, I'm here to tell you that things change. Assumptions made in creating that eval set might not generalize to all real-world cases. And in this case bad assumptions result in death.

As someone who has spent a good portion of my life working on this stuff I've learned that sometimes (most of the time) the best choice is the obvious, known, simple one. The fact that a good portion of cars may be controlled via something we know so little about should worry you.

On a different note, I think you're also missing why Tesla have opted to invest in a perception suite that doesn't use LIDAR. That reason is cost. Tesla needs to sell cars now. They and their customers cannot afford to put LIDAR on their current platform. At the same time they need to move the metal, and they know their customers want AEVs. I think their strategy is sound from a purely business cost/benefit analysis. It's a risk, but from a financial perspective a good one, because if it works they hit pay dirt.


> The fact that a good portion of cars may be controlled via something we know so little about should worry you.

I bet that we know much, much more about CNNs than about our brains. And today pretty much all the cars are controlled by something that we basically know nothing about. Why doesn’t this worry you?


The fields of neuroscience and psychology are far more advanced compared with the infantile field of deep learning.

Humans also learn independently from one another which means outlier events aren’t as much of a problem. You have the ability to observe the world around you, say you see an accident due to some rare weather event, and learn some abstract lesson extremely quickly. A CNN has to learn this over millions of samples out of band.

EDIT: clarification in last sentence.


Based on the number of actual reproducible controlled experiments done on CNNs, I'm not convinced the field is less advanced than psychology at this point.


Well, we know quite a bit about how humans perform at the relevant levels of abstraction, and that's the level at which driving automation systems should be evaluated. They are not there yet, but there's no reason to believe the problems are insurmountable.


> I think you're also missing why Tesla have opted to invest in a perception suite that doesn't use LIDAR. That reason is cost.

Even beyond pure cost, one problem Tesla has is that they already SOLD tens of thousands of FSD option packages, predicated on NOT needing to retrofit the cars with LIDAR.

So arguably, Tesla might be worse off if they deliver a LIDAR based FSD (and need to retrofit tens of thousands of cars, or pay off the owners), than if they just plod on with a camera based FSD that never quite works safely.


>one problem Tesla has is that they already SOLD tens of thousands of FSD option packages

It's always seemed sketchy to me that they're selling FSD option packages when they don't know when FSD will actually happen or whether the current in-car hardware will be adequate to support it once it does happen.

He's likely banking on a bimodal distribution where FSD either happens very soon (within a year or so) in which case the hardware likely will be fine OR not for 5+ years in which case these cars likely will not be on the road any more and it becomes a moot point.


This is what I meant, I should have been more clear.


> You don't actually need the safety system to pass the safety tests (that don't currently exist) in worst case conditions

I do not agree; making the systems work in all conditions is the hard part of true automated driving.

Also, worst conditions happen maybe 1/50th of the year (a guess). So it's not that rare.


Safety systems certainly need to work in all conditions, but “work” in this case may mean refusing to activate in conditions outside its design range. It’s fine for a self-driving system to refuse to drive in a blizzard; it’s not ok for it to try and then fail to drive in the blizzard.


Refusing to drive could work when starting out, but handling changing conditions on the road is harder.

A scaled request for the driver to take over as conditions get worse can train the driver not to use the self-driving system in adverse conditions, so hopefully it wouldn't have to refuse (unless the driver is negligent).

Getting back to the OP, Musk may have a point: people are terrible at evaluating risk for low-probability/high-consequence events like a car accident, so LIDAR might lose in the market even if it is worth it. But if there were standards for when the car asked the driver to take over and LIDAR is able to pester its drivers less often because it is more capable, then perhaps LIDAR can justify its place in the market.


Moderate rain is not an acceptable condition for cars to refuse to drive. Especially in the middle of long trips.

This means self driving cars must also operate without LIDAR, though the safety advantage when it is operating can still be a net gain.


> Moderate rain is not an acceptable condition for cars to refuse to drive. Especially in the middle of long trips.

It is absolutely acceptable for a self-driving system to refuse to drive in any conditions it can’t handle, but it must also deactivate in a safe way when such conditions arise during operation (e.g. pull over to the side of the road if the driver hasn’t positively acknowledged resuming control).

Commercial viability of any given system is a separate issue, but it’s pretty well accepted moral responsibility to not accept control of a vehicle if you’re unable to operate it safely, regardless of the consequences of that refusal(1). I see no reason to not hold consumer-facing self driving systems to the same standard. Otherwise, they require some specialized training for the operator to be able to recognize the situations in which they are safe to use.

(1) Actual life-and-death situations change this a little, but future availability of rescue personnel and equipment generally weight the conclusion towards operating safely in those situations as well.
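A sketch of the minimal handover logic this implies (state names and the timeout rule are mine, for illustration only):

    from enum import Enum, auto

    class Mode(Enum):
        SELF_DRIVING = auto()
        TAKEOVER_REQUESTED = auto()
        MANUAL = auto()
        PULLING_OVER = auto()

    def step(mode, conditions_ok, driver_acked, timed_out):
        # One tick of the deactivation logic described above.
        if mode is Mode.SELF_DRIVING and not conditions_ok:
            return Mode.TAKEOVER_REQUESTED   # alert the driver
        if mode is Mode.TAKEOVER_REQUESTED:
            if driver_acked:
                return Mode.MANUAL           # driver has control
            if timed_out:
                return Mode.PULLING_OVER     # no ack: stop safely
        return mode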


The pulling over to the side of the road solution doesn’t work at scale. What happens when it starts snowing on a 10 lane freeway during rush hour and 50% of the cars are self driving with this limitation?


It’s unlikely that, in a region that gets such snowstorms, a self-driving system that can’t handle them reaches 50% market penetration. That seems more like a “we’ll cross that bridge when we come to it” scenario.


Rain and fog are much bigger issues for these systems than snow. Being unable to sell cars in places with rain does not leave many options.

However, for rare events it’s very possible for individuals to be fine with something and only collectively do you end up with major problems.


OK then QED. The point some are making is that with at least the current type of lidar, that bridge may not exist. It may make more sense to devote resources to better radars and radar processing.


And I’m not disputing that at all. None of my statements say anything about any particular self-driving technology, as I’m not an expert in the various technologies. My point is that what’s “acceptable” is fundamentally a moral stance, and I stated the minimum bar I believe all drivers (automated or not) need to meet.

This is simply a constraint that any system needs to work within, and it’s entirely possible that precludes the commercial viability of LIDAR-based systems. It’s also possible that there’s some niche market that fair-weather-only systems can be successful in long before the general problem is solved, and we shouldn’t artificially throw those out as “unacceptable” when there’s a reasonable framework for them to operate under.


Unless you're driving on a road, and heavy snow develops earlier or more severely than you and the weather service anticipated. Instead of going the next 2-3 miles safely to your exit, your car will decide it's best to strand you in the middle of the motorway in white-out conditions until the weather passes? Color me skeptical.


> Instead of going the next 2-3 miles safely to your exit

You’re making an assumption here that the system is capable of continuing to travel safely. Obviously being safe at home is better than being stopped on the side of the road, but that’s not the choice you’re actually faced with. Similarly, a system that can operate safely in adverse conditions is obviously better than one that can’t.


I know in my good-old human-driven mode of transport what choice I would make.

I don't give automated driving any allowances, if it can't do what I do, it doesn't belong on the road.


> I don't give automated driving any allowances, if it can't do what I do, it doesn't belong on the road

In that case, you’re almost certainly grouping a lot of current and safe drivers in the “it doesn’t belong on the road” category. Not having lived in a place where I’ve needed to deal with white-out conditions myself, I doubt I’d feel comfortable continuing to drive. I know this about myself, though, and am likely more conservative than you about canceling or rescheduling a trip when such conditions are a possibility. This is the same bar that I’m proposing automatic driving systems need to meet; if that means they only work on cloudless days with 0% chance of rain, so be it. In practice, this means there’ll need to be some way for a human to take control for quite a while yet.


One industry that is very close to full autonomous operations is aerospace. In a controlled environment with fewer things to run into. And even they disengage the autopilot in certain conditions. Why would we aim at full autonomy for cars in an uncontrollable, chaotic environment with a lot of stuff to hit under all circumstances? In some countries you are not allowed to drive on summer tires in winter for safety reasons. So why not limit self-driving capabilities in similar fashion?


From the business perspective, this would not be desirable because it destroys a lot of prospective business cases for self-driving vehicles. Basically all of them that are as-a-service. What good is a taxi service that stops working during bad weather? That's even when there would be more demand for it because people don't want to walk or bike anymore.


Says a lot about some of these business ideas, doesn't it?

But in all seriousness, if self driving cars are not ready as fast or as performant as expected a lot of these long term bets may be in a lot of trouble. This potentially includes Uber, Lyft, Tesla by self-declaration, and others.


And safety isn’t limited to working in harsh conditions. Mistaking a picture on a billboard for a real object, or mistaking a truck’s color for the sky, can also get you a crash in clear weather.

The crutches analogy isn’t a very good one. If you ask a doctor, he will probably advise you to use crutches until you can stand firmly on your own two feet!


Those don't sound like things Lidar would have a problem with. The color doesn't matter, only whether the surface is reflective (basically anything not matte black). Otherwise objects are easy to detect and range.

The problem with weather is different, as the individual droplets bounce the Lidar light before it hits the object.


My point is rather that those are things CV would have problems with that Lidar disambiguates.

And yes there are conditions where neither work well, and frankly where humans barely function either. Much of driving in heavy fog / heavy snowing is really a leap of faith at low speed.


>They're usually infrared, so Musk's quote makes no sense.

Many of Musk's statements don't hold up to the scrutiny of professionals or specialists.


> So in the end I think Elon will be proved right. Those currently investing in LIDAR-based systems will eventually ditch it for purely practical economic reasons. Those that don't will be completely destroyed in the open market.

High quality cameras are insanely complex pieces of electronics and optics. Ditto for processors capable of doing quality image recognition. Large scale manufacturing has made them cheap nonetheless. LIDAR is relatively niche, but if it proves useful to deploy it at scale, I'd expect costs to drop very significantly. The underlying technology uses very simple physics (relative to the algorithmic complexity of image recognition); seems like a solid basis to build a sensor off of.

> LIDAR is a crutch for bad software

You could invert this and say that high precision image recognition is a crutch for ill-suited hardware. The final combination is a product of hardware and software. If LIDAR is currently too expensive or energy-intensive to compete cost-wise at acceptable safety levels, that's one argument, but saying LIDAR is a crutch is just moving the goalposts from "good system" to "cheap hardware".

[edit] Also, just want to point out that RADAR resolutions are way too low to operate a vehicle safely (never mind road signage or other things).


> The underlying technology uses very simple physics (relative to the algorithmic complexity of image recognition); seems like a solid basis to build a sensor off of.

The crux of the argument is you will still need the algorithmic complexity in the end with or without LIDAR, so it doesn't add any advantage.

I'll just paste the quote FTA:

> "Lidar is really a shortcut," added Tesla AI guru Andrej Karpathy. "It sidesteps the fundamental problems of visual recognition that is necessary for autonomy. It gives a false sense of progress, and is ultimately a crutch."


> The crux of the argument is you will still need the algorithmic complexity in the end with or without LIDAR, so it doesn't add any advantage.

The approach described still runs the same ML on a Lidar-style data representation, but with an extra image recognition step to get the data into that format. Image recognition only adds computational complexity after the hardware sensor because you first need to generate a Lidar-like 3D map. So that is conceivably an advantage for Lidar in the long run, given that its method for making the 3D representation of the space (upon which the ML runs) is dirt-simple physics-wise: much simpler than the algorithms and camera electronics image recognition relies on. Lidar is just not cheap yet, whereas the market for processors and cameras is huge and those products have become incredibly sophisticated and cheap. Again, it's hard to imagine an idea as simple as Lidar not getting much cheaper with scale.
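To make the "Lidar-like 3D map" step concrete: this is the unprojection cameras have to earn with computation, sketched with a pinhole model and placeholder intrinsics (not real calibration values):

    import numpy as np

    def depth_to_point_cloud(depth, fx=1000.0, fy=1000.0,
                             cx=640.0, cy=360.0):
        # Turn a per-pixel depth map (from stereo or a depth net)
        # into a lidar-style 3D point cloud via the pinhole model.
        h, w = depth.shape
        u, v = np.meshgrid(np.arange(w), np.arange(h))
        x = (u - cx) * depth / fx
        y = (v - cy) * depth / fy
        return np.stack([x, y, depth], axis=-1).reshape(-1, 3)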

> I'll just paste the quote FTA

My point wasn't an ad hominem one, so I don't think the fact that an expert like Karpathy said "crutch" changes the fact that it's slanted phrasing. But given that Karpathy works for Tesla and has a vested interest in assuaging investors and consumers, it makes sense why he used slanted phrasing. "Lidar is a crutch" makes Tesla sound more visionary than "we don't want to pay for Lidar because it's still too expensive and we think we'll be able to rival Lidar with image recognition." It's a nice way to subtly jab at competitors who are investing in Lidar and frame it as if Lidar was the tech that had catching up to do (when, in fact, the opposite is true).

Since the key point of the article is that ML algorithms work better with Lidar data representations, it's pretty hard to see it any other way. They both go to the same intermediate data representation, and Lidar still wins in a head-to-head. Again, you can argue validly that cameras + image recognition will be good enough, but calling Lidar a "crutch" seems like pro-Tesla spin by a high-up Tesla employee whose job is to make Tesla look good.


While your comments have gone a long way towards changing my mind, Karpathy's comments are a bit rich, given that Tesla's systems could have benefited from a crutch to help avoid running full-tilt into large obstacles.


If all you use is words, it is of course easy to omit that the spatial resolution of radar is barely enough to tell whether there are one or two vehicle-sized objects in front of you, maybe one to the side, and they had better be moving.

To compare it to LIDAR is ludicrous.


Ford did some research to improve Lidar for use in the rain/snow using a filtering algorithm.

> Ford’s autonomous cars rely on LiDAR sensors that emit short bursts of lasers as they drive along. The car pieces together these laser bursts to create a high-resolution 3D map of the environment. The new algorithm allows the car to analyze those laser bursts and their subsequent echoes to figure out whether they’re hitting raindrops or snowflakes.

https://qz.com/637509/driverless-cars-have-a-new-way-to-navi...

Still seems less than ideal but I'm curious if that will ever reach somewhere useful.
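Ford's actual algorithm isn't public as far as I know, but the general idea is simple: precipitation gives weak, isolated returns, while solid surfaces give strong, clustered ones. A crude sketch of that idea (thresholds invented, NOT Ford's method):

    import numpy as np

    def filter_precipitation(points, intensity, min_intensity=0.1,
                             radius_m=0.3, min_neighbors=2):
        # Drop returns that are both weak and spatially isolated --
        # likely raindrops/snowflakes. O(n^2), fine for a sketch.
        strong = intensity >= min_intensity
        d = np.linalg.norm(points[:, None, :] - points[None, :, :],
                           axis=-1)
        neighbors = (d < radius_m).sum(axis=1) - 1
        return points[strong & (neighbors >= min_neighbors)]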


FWIW, lidar is very effective in finding the surface below forest canopies. In fact, it will often identify the underbrush, in addition to the canopy top and the surface.


Aren't there still problems with multiple active sensors sweeping the environment?

I remember it being a problem in cars that used Lidar but cannot find the info anymore.

I think Lidar could still be of help and even the perfect software can use any form of sensory redundancy. But I agree that there might be alternatives.

edit: A laser is probably a lot cheaper than cameras and imaging DSPs if comparable production scales are reached.


>So in the end I think Elon will be proved right. Those currently investing in LIDAR-based systems will eventually ditch it for purely practical economic reasons. Those that don't will be completely destroyed in the open market.

Maybe, but there's no reason to leave a local maximum until you actually have something better.


Musk badly wants for you to not realize that nobody is proposing LIDAR-only, but are rather proposing LIDAR+optical+radar. Musk argues against straw men.

(Also, the radar Tesla is using has jack-shit for angular resolution. It can't tell the difference between a tree next to the road and a fire truck parked right across it. Consequently that radar has very limited utility.)
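Back-of-envelope, with assumed but typical automotive numbers (diffraction limit, beamwidth ~ wavelength/aperture):

    wavelength_m = 3e8 / 77e9   # 77 GHz automotive radar: ~3.9 mm
    aperture_m = 0.10           # ~10 cm antenna array (assumed)

    beamwidth_rad = wavelength_m / aperture_m  # ~0.039 rad (~2.2 deg)
    cross_range_100m = beamwidth_rad * 100.0   # ~3.9 m
    # Objects closer together than ~4 m at 100 m merge into one blob:
    # a roadside tree and a fire truck across the lane look alike.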


More accurately, Musk doesn't care what others think is needed for self-driving, so your aspersions about Musk badly wanting us to think one way or another are not supported by facts.

Neither Tesla nor Musk make a big deal of lack of lidar.

The only reason his views on the subject are public (and so hotly discussed) is because during Autonomy Investor Day he was asked by an investor why Tesla doesn't use a lidar.

So he answered the question. You might not agree with his reasoning but he's not on some "NO LIDAR" publicity tour, trying to change your mind.

Here's the source: https://www.youtube.com/watch?v=Ucp0TTmvqOE

Watch the whole thing. The first time Musk mentions lack of lidar is after being asked.


He should care what experts think. Whenever someone probes Musk in their own area of expertise, they find his knowledge is that of a stubborn, overconfident dilettante.


I think any expert is going to be able to outshine Musk in their area of expertise, but where Musk is great is in seeing the bigger picture. Musk simply doesn't have the time to become an expert in everything, he has to execute now.


> "More accurately, Musk doesn't care what others think is needed for self-driving"

That's obviously wrong. Musk is deeply invested in the matter, particularly the public's perception of the matter. He is currently promoting his 'solution', which has not yet proven itself, as being hardware complete and is selling it to consumers right now. Public perception is his priority and he shit-talks LIDAR whenever he feels doing so is necessary to defend the public perception of the product he's trying to sell.


Actually, there was a presentation given by one of their lead data scientists describing their ML architecture. At no point was radar mentioned. They are purely relying on vision to identify cars, obstacles, traffic lights, etc. with dozens of models, each focused on one particular 'type'.

Radar by the sounds of it is being used purely as a fallback.

The question is: if the vision systems fail to recognise an obstacle at high speed, is the radar long-range enough to compensate in time?

Presentation: https://slideslive.com/38917690/multitask-learning-in-the-wi...


Actually, they did mention radar during Autonomy Investor Day (https://www.youtube.com/watch?v=Ucp0TTmvqOE).

Also, https://www.tesla.com/autopilot says:

"A forward-facing radar with enhanced processing provides additional data about the world on a redundant wavelength that is able to see through heavy rain, fog, dust and even the car ahead."

So it's rather bizarre that you would speculate about how they're not using radar when they explicitly say they do.


There is zero chance a radar will recognize a traffic light. It is prone to false positives, to boot. Its only use is tracking large metal objects: cars and trucks, for the purpose of adaptive cruise control.


Elon Musk or the actual Data Scientist working on it.

I wonder who to trust.


I’m sure this is a dumb question to anyone with knowledge in this field but is there any reason to not use all three together?


Most of the non-Tesla systems do. Waymo uses lidar, radar, and cameras. The cruise vehicles I see around have lidar and radar as well, and I just assume everyone has cameras because they're cheap and easy to stick somewhere.

To be specific about Waymo (to be clear, I work at Google, but don't actually have any special info on this), look at the photo in [0]. The cone thing on top is a lidar, but it also has cameras in the larger part under the cone. The spinny thing on the front is also probably a lidar. The fin that looks like an extra mirror on the back, and the two on the front, have radar. There's also probably a forward-facing radar mounted on the nose somewhere near the grille.

[0]: https://waymo.com/


So self-driving cars are basically going to be really expensive for the first while, as the sensors take time to come down in price, plus there's the computer in the back and the range it costs from the battery.

Sounds like a reasonable trade-off. No one needs to own these cars, just rent them on demand. Plus some wealthy people in the early adoption curve.


It would make sense to use all three.

Lidar used for long range. Vision used for things like colour recognition, e.g. is a traffic light green/red, or does an ambulance have its sirens on. Radar used for reversing etc., where Lidar, given its location, might not be able to see that close.


I suspect that optical will also supplement LIDAR in cases where very precise angular resolution is needed, such as human gesture and posture recognition (which is necessary if only because sometimes humans direct traffic, but also for things like profiling pedestrians to anticipate which is likely to jump into the road without looking.) Being able to detect which way a human head is facing will at the very least be necessary, and while you might be able to read faces with LIDAR from a distance, my gut says that optical will give you better data for that.


Price. I worked briefly with teams building self driving cars in the past. Their budget for sensors far exceeded the cost of the car itself.


Of course, they were presumably using a mostly-stock car with custom niche sensor products, so that comparison would be a bit more favorable in production.


Latest projected price for the lidar unit is $500 to $1000 e.g. https://www.wired.com/story/lidar-cheap-make-self-driving-re...

No-one is expecting to use experimental $75000 Velodyne units in production.


Currently, LIDAR is very expensive, certainly too expensive to build into every Tesla being manufactured. So Musk would not be able to sell a "full FSD capability" option on his cars if he acknowledged LIDAR is useful/necessary to autonomous driving.

The number one link if you search "Tesla" on HN is "All Tesla Cars Being Produced Now Have Full Self-Driving Hardware." It's been an extraordinarily effective marketing gimmick.


LIDAR prices have declined and are going way low-budget now.


For example these guys suggesting $500: https://www.wired.com/story/lidar-cheap-make-self-driving-re...


There is a $300 robot vacuum cleaner by Xiaomi that uses a LIDAR. The part can be found for $70 on AliExpress.

Obviously not the same specs as Waymo's LIDAR but at least, it proves that it doesn't have to be that expensive.


According to a recent interview with pony.ai (https://www.youtube.com/watch?v=0VcpZnIg3M0) the cost to retrofit a car with all necessary sensors is $75k.

A Lidar is a significant part of that.

If a $40k Tesla can do as well as a $40k car + $75k of sensors (including lidar), it's economic game over. Tesla wins by a wide margin.

The $75k will drop in time, but the battle will likely happen before the price of lidars drop significantly enough.


If self-driving becomes a reality, robotaxis will retail for several hundred thousand dollars.


They will certainly have an economic value of multiple hundred thousand dollars, but that applies to all manufacturers. So if a manufacturer is able to produce non-LIDAR self-driving cars and sells those cars to consumers for $100,000 less than the competition, you can bet that they’ll still capture the robotaxi rental value that is there, through an app store-like agreement. Leaving the money on the table would obviously not happen, unless it was intended to drive the competition out of business.

There would probably be room for both these models (direct sales that capture much of the self-driving value and leasing), but regardless there are obviously strong incentives for a 5- or 6-figure reduction in costs.


I have no knowledge in the field, but maybe the expense becomes high?


Expensive, large, and heavy.


That wouldn't be very good click bait


I think I would find Musk's claims more compelling if he had actually sat down with an expert and discussed in detail why he believes what he believes. Instead we're sitting here discussing kooky quotes with no real analysis. Even on the face of it:

>"They're all going to dump lidar," Elon Musk said at an April event

We know which companies are building self-driving cars, we know what technologies they're using and we know how long they've been working on it. Have we seen any signs that any of these companies are dumping LIDAR? I would've thought it'd be pretty big news right?


>Have we seen any signs that any of these companies are dumping LIDAR? I would've thought it'd be pretty big news right?

In fact, why would they not have a multi-faceted system, keeping LIDAR and alternatives?


Because LIDAR is expensive. If camera-based neural networks eventually get good enough that LIDAR provides minimal additional value, they will drop it. This is what Musk is betting on. We're certainly not there yet, and it's not clear to me whether that's a realistic goal on a 5-10 year time frame.


There is some precedent for this. Precise digital surface models are useful in geomorphological studies, and the state of the art for producing a new one in the field is laser scanning. But earlier this decade, researchers started using differential processing of sequential images taken from a platform moving arbitrarily (plane, helicopter, drone, etc.) and were able to produce DSMs with elevation resolutions below 10cm, which is sufficient for a lot of studies. See this paper on "structure from motion":

https://www.sciencedirect.com/science/article/pii/S0169555X1...

The big difference between this and driving is that in SfM, the platform moves arbitrarily around the feature of interest until it has enough data to build the model. Driving is far more linear; your car can't circle an unknown object 5 times until it understands what it is. So, it might be more important for driving applications to maximize the number of different channels of sensor data available to integrate for creating a model.
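
For the curious, here is a minimal two-view sketch of the SfM idea in Python with OpenCV. Everything in it is an illustrative assumption (the image file names, the intrinsics matrix K), not the paper's actual pipeline; real SfM tools bundle-adjust over many views:

    # Minimal two-view structure-from-motion sketch.
    # 'frame1.jpg'/'frame2.jpg' and K are placeholder assumptions.
    import cv2
    import numpy as np

    img1 = cv2.imread("frame1.jpg", cv2.IMREAD_GRAYSCALE)
    img2 = cv2.imread("frame2.jpg", cv2.IMREAD_GRAYSCALE)
    K = np.array([[700.0, 0, 640],
                  [0, 700.0, 360],
                  [0,     0,   1]])  # assumed camera intrinsics

    # Detect and match features between the two views.
    orb = cv2.ORB_create(5000)
    kp1, des1 = orb.detectAndCompute(img1, None)
    kp2, des2 = orb.detectAndCompute(img2, None)
    matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(des1, des2)
    pts1 = np.float32([kp1[m.queryIdx].pt for m in matches])
    pts2 = np.float32([kp2[m.trainIdx].pt for m in matches])

    # Recover relative camera motion, then triangulate 3D points.
    E, mask = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC)
    _, R, t, mask = cv2.recoverPose(E, pts1, pts2, K, mask=mask)
    P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
    P2 = K @ np.hstack([R, t])
    pts4d = cv2.triangulatePoints(P1, P2, pts1.T, pts2.T)
    pts3d = (pts4d[:3] / pts4d[3]).T

    # Absolute scale is unknown without an external reference,
    # which is exactly what a known baseline (or lidar) provides.
    print(f"Reconstructed {len(pts3d)} points (up to scale)")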


>Because LIDAR is expensive

So is the battery technology, and electric vehicles in general. Is there anything unique about LIDAR to assume the costs won't decrease significantly over time?


Doesn't matter. Even if the cost of LIDAR decreases drastically to, say, $100/car, companies will still optimize it out if they can.


Musk is betting on a lot of things lately. I would rather have him focus on one or two hard problems and solve these first.


Musk's job isn't to solve problems, it's to find people who can solve problems, put them together, and pay them.


Musk's job is to secure investment (and government deals) with promises and signs of hope.


I don't think his original two bets, electric cars and private space exploration, have been fully solved yet. He seems as dedicated to those two goals as he ever was.


As I saw in a tweet earlier, "there are no experts in self driving, just people who have failed for different lengths of time".

Also he's saying they're going to have to dump it because it's the wrong approach.


Karpathy is leading the ML team and Musk is no slouch. I'm sure he doesn't write code, but he's been through linear algebra, quantum physics, and statistical mechanics. He understands how photons work, how sensors work, how computers work, how the math works. So he can quickly assess the business utility of a proposed solution, or the plan to find solutions. They have more than a couple guys at this caliber. Every person in that presentation was top-of-their-game, mid-career, I'm-not-falling-on-my-sword-for-some-bullshit.


> For example, one of the distance estimation algorithms used in the Cornell paper, developed by two researchers at Taiwan's National Chiao Tung University, relied on a pair of cameras and the parallax effect. It compared two images taken from different angles and observed how objects' positions differ between the image—the larger the shift, the closer an object is.

The shift, or disparity, between sensors is nothing new. We've known since the 70s that wider convergence angles beget better object point estimation. Yet even the KITTI dataset doesn't attempt to take advantage of this, and uses two rather average cameras with a (relatively) short baseline of 0.06 m (see: http://www.cvlibs.net/datasets/kitti/setup.php). That's 6 cm!!! You have the entire width of the car to separate these cameras.
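
Back-of-the-envelope, this is why the baseline matters so much: depth from disparity is Z = f*B/d, so depth error grows roughly as Z^2/(f*B). A quick sketch (the focal length and matching error here are assumed, illustrative values):

    # Stereo depth error grows as Z^2 / (f * B).
    # f and sigma_d are assumed, illustrative values.
    f = 700.0        # focal length in pixels
    sigma_d = 0.5    # pixel-matching error in pixels

    def depth_error(Z, B):
        # From Z = f*B/d: |dZ/dd| = Z^2 / (f*B)
        return (Z ** 2) * sigma_d / (f * B)

    for B in (0.06, 1.5):  # KITTI-like 6 cm vs. a car-width baseline
        print(f"B={B} m: ~{depth_error(30, B):.1f} m error at 30 m, "
              f"~{depth_error(60, B):.1f} m at 60 m")

Under these assumptions, widening the baseline from 6 cm to 1.5 m cuts the depth error at 60 m from tens of meters to under 2 m.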

> This technique only works if the software correctly matches a pixel in one image with the corresponding pixel in the other image. If the software gets this wrong, then distance estimates can be wildly off.

Again, yeah. But the problem is twofold: you need to detect and match corresponding points between the two images, and the fundamental setup of your system can limit your precision and accuracy. Use a wider-angle lens with better convergent geometry. Publications based on the KITTI dataset don't even address some of the most basic criticisms from photogrammetry.

Which probably explains why LiDAR gives such a distinct advantage in most of these datasets. It solves two problems:

1) It solves the correspondence problem trivially, because LiDAR doesn't need to match points between cameras, and there's no baseline or convergence criterion that the final point precision depends on.

2) Robust geometric data is well modelled and well understood, and provides an easier target for machine learning systems (particularly ones running over KITTI, as in the article) to converge on than stereo imagery with a 6 cm baseline. You get the scale of the system for free (see the sketch below), and your calibration troubles are whisked away, since LiDAR systems tend to be better calibrated and more stable than the cheap off-the-shelf cameras and lens configurations that many autonomous driving startups are using.
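
To illustrate that scale point, assuming a simple spherical-coordinate beam model (not any particular vendor's output format): one lidar return is already a metric 3D point, with no correspondence search and no baseline in the error budget.

    # One lidar return -> a metric 3D point. No matching step,
    # no baseline: scale comes directly from the measured range.
    import math

    def lidar_point(range_m, azimuth_deg, elevation_deg):
        az = math.radians(azimuth_deg)
        el = math.radians(elevation_deg)
        x = range_m * math.cos(el) * math.cos(az)
        y = range_m * math.cos(el) * math.sin(az)
        z = range_m * math.sin(el)
        return (x, y, z)

    # A return 30 m out, 15 degrees left of center, 2 degrees down:
    print(lidar_point(30.0, 15.0, -2.0))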

I guess I come off a little negative here, but my first reaction to Musk saying that nobody should or will ever want to use LiDAR for this is that he doesn't know a damn thing about what he's talking about.


A 6 cm baseline is enough for humans to make adequate distance estimates.

Besides the correspondence problem, a longer baseline makes it hard to keep the cameras aligned as the vehicle bounces and flexes. You can't mount them separately to the car -- a chassis can easily twist by a degree or two. So you need a stiff mounting bar between them, which you can either put outside the car like a roof mount (ugly, and it gets buffeted by wind) or inside (also ugly).
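
That alignment concern is easy to put numbers on: a relative rotation of the cameras shifts features by roughly f*tan(theta) pixels, which quickly swamps the sub-pixel disparities you care about at range (the focal length is the same assumed, illustrative value as above):

    # Spurious disparity from camera misalignment, ~ f * tan(theta).
    # f is an assumed, illustrative focal length in pixels.
    import math

    f = 700.0
    for twist_deg in (0.1, 0.5, 1.0, 2.0):
        shift_px = f * math.tan(math.radians(twist_deg))
        print(f"{twist_deg} deg of twist -> ~{shift_px:.1f} px of error")

A single degree of chassis twist produces roughly 12 px of spurious disparity here, while a distant object's true disparity may be only a few pixels.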


Why even limit yourself to two cameras? If I recall correctly multi-view geometry benefits from having as many cameras as possible.

In the future we will all have walls covered with a checkerboard pattern in our garage to calibrate the cameras on our self driving cars. :)


Great points. It would make perfect sense to have two baselines: one of a few cm for nearby objects, and one of a full car-width for good depth resolution on distant objects (which humans can't do, but humans have much better world models than computers, so better depth perception on the computers' part might close that gap a bit).

I also think lidar or radar will always be necessary. The Tesla fatality last week happened because a big white truck pulled out in front of the car. With a big blank surface, stereo pixel correlation is impossible, but it's trivial for lidar or radar to read such surfaces.


The article misses one important point about LiDAR. Frequency-modulated variants, referred to as "FMCW", get velocity information for free via the Doppler effect. You can't get that information from a camera without sophisticated image processing, and you can't get it at high resolution from RADAR. Knowing velocity as well as position is important for assessing immediate safety threats.

There's a good write-up by the co-founder of SiLC, a silicon photonics LiDAR startup, here:

https://www.photonics.com/Articles/Integrated_Photonics_Look...
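
The Doppler relationship itself is simple: radial velocity v = lambda * f_d / 2, with the factor of two because the light travels out and back. A quick sketch (1550 nm is a common FMCW lidar wavelength; the shift value is illustrative):

    # Radial velocity from the Doppler shift of an FMCW lidar return.
    WAVELENGTH = 1550e-9  # meters; common FMCW lidar wavelength

    def radial_velocity(doppler_shift_hz):
        # Out-and-back path doubles the observed frequency shift.
        return WAVELENGTH * doppler_shift_hz / 2

    # A target closing at ~25 m/s shifts the return by ~32 MHz:
    print(f"{radial_velocity(32.3e6):.1f} m/s")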


I agree with Musk and see lidar as being where ray tracing was decades ago: an expensive, impractical "holy grail".

A set of lidar sensors right now costs as much as a car.

Maybe at some point in the future one of these lidar startups will come out with an inexpensive (maybe solid-state) version to augment the current sensors. Or maybe by that time vision will have gotten much better.


The cost of lidar is going to plummet for exactly the reason at the end of your post. Several startups (SiLC, Aeva, etc.) are using silicon photonic integrated circuits. Several more early-stage startups have MEMS or phased-array prototypes for completely solid-state chips.


Blickfeld is working on solid-state LIDAR. From what I've heard from friends working there, their sensor is/will be available for under $1000, which is a huge cost reduction from the current price(LIDAR) == price(car) solutions.


The thing is, the self-driving wars will likely be over before economies of scale in lidar production kick in.

If a non-lidar system doesn't work, then the cost of lidar, even at $10k, is irrelevant.

If you can make a non-lidar system work better than humans (i.e. with quality acceptable to regulators) before the cost of lidars drops significantly, then lidars lose on economics.

And the cost of lidar won't drop significantly quickly. The next step-change in price would probably require mass production, i.e. hundreds of thousands of units per year.

Even if lidar robotaxis arrive before non-lidar ones, initially they'll be made in tens of thousands of units per year, leaving a couple of years for non-lidar tech to catch up.


Waymo claimed to be able to produce lidar sensors for 10% of market price back in January of 2017 (estimated $7500/unit). If true, it'll be critical to their scaling and success.

https://techcrunch.com/2019/03/06/waymo-to-start-selling-sta...


The waymo cars I've seen seem to have many units on each car.


I wonder about the noise aspect of this when you've got 20 cars nearby also using lidar. Is there a point where these kinds of active sensors begin interfering with each other? I know it isn't lidar, but Xbox Kinects used to interfere with each other if you had multiple in one room.


That really depends on the modality of the LiDAR. For the record, the Kinect is technically LiDAR, since it uses "Light Detection And Ranging."

As for why the Kinect interferes with other units, it's because of the imaging modality (structured light). The sensors interfere with one another because they're largely dependent on detecting a specific pattern of projected dots. If you detect too many dots or if the image gets saturated, you start to have a problem.

In the case of traditional scanning LiDAR (e.g. terrestrial LiDAR in the sense of a Leica, Faro, or Velodyne unit), this isn't necessarily the case. Sure, if two lasers point exactly at each other at a given point in their sweep, the measurement at that point will be saturated and that specific sample won't be useful. In time-of-flight based, mirrorless systems, this matters less than one might think. I can see it being a consistent problem when scanning with Velodyne tech, since those units tend to rotate about only one axis, but for other types of LiDAR I don't think it would be as big of a deal. Granted, then you have to worry about scanning speed and how that affects the final results.
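
For reference, the time-of-flight relationship these pulsed systems rely on, as a trivial sketch (not any unit's actual firmware): a stray pulse from another car only corrupts a sample if it arrives within the listening window for that beam.

    # Pulsed time-of-flight ranging: the pulse travels out and back at c.
    C = 299_792_458.0  # speed of light, m/s

    def tof_range(round_trip_s):
        return C * round_trip_s / 2

    # A 100 m target returns its pulse in ~667 ns; interference has to
    # land inside that window, at the right angle, to corrupt the sample.
    print(f"{tof_range(667e-9):.1f} m")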

Overall, I don't think that unit interference is going to be a significant factor in adoption. LiDAR is a broad technology and it's not easy to make assumptions about the entire industry based on a couple implementations or modalities.


As an aside, the original Kinect and Kinect for Xbox 360 use different technologies for 3D detection. The original Kinect projects an infrared pattern and then detects the deformation of the pattern to determine distance/shape. The Kinect for Xbox 360 uses more traditional time of flight.


You're confusing the Kinect for Xbox 360 with the Kinect 2, i.e. the Kinect for Xbox One. The Kinect 2 uses time of flight; the original Kinect and the Kinect for Xbox 360 are the same device and use a structured-light sensor.


Most next-gen lidar systems will have coherent mixing circuits to combat this exact issue. The technique is typically called "FMCW": frequency-modulated continuous wave.


To sum up: if I use lidar, lidar is good. If I don't use lidar, lidar is bad.


That's what each of the companies involved claims, but here you additionally have benchmark results from independent researchers who aren't building self-driving cars, and they say lidar is good.


Everything you see is a piece of marketing (on both sides). This "research" was drafted, vetted, reviewed, and approved by at least 5-10 people.


To be fair, they picked what they felt was the best option available to them considering their requirements and based on their experience. It would be weird if anyone claimed that they were using bad tech while acknowledging their competitors are doing things right.


Another possible argument is simply pragmatic - by not including LIDAR, Tesla can actually sell cars and therefore be in the best position to get to L5. They'll have the biggest fleet, the most data, the most technical expertise and experience, etc. I mean they're actually selling something people are buying. It may or may not be the correct technical choice but it seems to be easily the correct business choice.


One of the major disadvantages of LIDAR is poorer performance in rain, snow, and fog, which are quite common in many parts of the world. I'm surprised that isn't being discussed more.


Is that any better with radar or cameras? What happens when there's a large droplet over a lens? Does sound diffuse in a heavy rainstorm, and does the rain completely drown out the ping?


Vision is also poorer in rain, snow, and fog.


IMO all of the cost and power arguments are currently red herrings. An autonomous car or truck is worth at least $100k more than a non autonomous one, so whatever it takes sensor wise is worth it.

Where Musk's argument makes sense is that if lidar can't be made to work in all weather, putting effort into making algorithms for it may be a dead end. There are a number of companies with various approaches to making lidar work better in bad weather, though.


A big drawback is not performance but that most lidars are expensive and fragile.


This was true... until recently when Luminar announced a fairly affordable (for an automobile BOM) ~$1k version to be available this year. https://www.engadget.com/2019/07/12/luminar-affordable-lidar...


The article clearly mentions that these won't be ready for production before 2022; until then, it's all speculation.


The drawback is that for autonomy you still need to recognize objects.


How well do Lidar systems work when there are 5, 10, 20, 50 cars with Lidar units all in a small area (like a busier intersection in my smallish town)?

I wonder if Musk has done tests of this sort of scenario, and whether that's what he's basing this judgment on.


Surely a blend would yield the best all-weather solution, even if the lidar isn't top-shelf?


Musk has said he is a fan of LIDAR, Teslas have forward-facing radar, and he even helped develop a LIDAR for the Dragon docking system. This is a hit job.



