Waymo's Self-Driving Cars Are Near: Meet a Teen Who Rides One Every Day (bloomberg.com)
236 points by Fricken on July 31, 2018 | 301 comments



This is a super interesting article. Here are some highlights.

- Miles driven per needed human intervention in 2017: Waymo ~5,700; GM ~1,300; Nissan ~250; everyone else sucks at around 100.

- Tesla is not mentioned at all, which is surprising considering Elon is constantly saying it's around the corner.

- Waymo has an Uber/Lyft-esque app that the customers testing its pilot program are using.

- Waymo's showing (not charging) prices similar to Lyft and Uber ($1.70/mile). An analyst thinks without paid drivers they could go as low as 70¢/mile, and only 35¢/mile by 2020.

- Waymo is only testing in places with perfect weather.

- Waymo plans on launching their first location for a ride hailing service by the end of 2018.


"Miles driven per needed human intervention ..."

Driving in an urban environment is almost nothing like driving in the suburbs.

These numbers are not comparable, for several reasons:

- GM Cruise drives in high-density San Francisco (city roads), while Waymo drives in low-density Mountain View / Phoenix (incl. highways). [1]

- GM Cruise is not optimizing to reduce driver interventions, as that can actually decrease safety (i.e. drivers delaying the intervention).

I work at GM Cruise.

[1] https://medium.com/kylevogt/why-testing-self-driving-cars-in...


If you rely on drivers being aware and able to intervene at any time, then you are creating a dangerous situation. Human beings can't re-engage and regain situational awareness in emergencies.


Yeah, one key question is whether there's a loud alarm triggered for human intervention a few seconds before it's necessary, or whether it relies on the constant awareness of the human. I am skeptical that an acceptable answer is anywhere near being possible.


Yeah, humans shouldn't be driving at all, but to get there, well, you can't make an omelette without breaking a few eggs.


That's an expression I know, as a former military wife, as one used to explain the need to accept training deaths. I have enormous difficulty with the idea that we really need to accept training deaths in self-driving cars.


Cars kill and injure more Americans than wars.

Cruise and Waymo have been as cautious as can reasonably be expected. They both have months-long training courses for their safety drivers, and cameras in the vehicle that monitor driver attention. While the vehicles are still challenged by complex scenarios, basic object detection and emergency braking are pretty good. Neither has had an at-fault accident, excluding an ambiguous but non-injurious incident between a Cruise car and a lane-splitting motorcyclist.


Having a highly trained, standing army is part of what keeps us out of war. Without it, we would soon be invaded and taken over. Historically, every time America shrinks its military, it winds up dragged into a war and having to ramp back up rapidly.

There is no similar need for driverless cars. I'm American. I've lived without a car for over a decade.

I don't see why we can't do more testing before putting lives at stake. I don't see why driverless cars killing people as part of their development is a thing we need to accept.

Your assertion in no way clears up for me why we should shrug at the idea of people dying for this thing.


Having highly trained autonomous vehicles will keep us out of car crashes; currently we just sort of lie down and take it, as though we had ten 9/11-scale tragedies every year and just kind of threw our arms up and said "whatever, nuthin' we can do."

And for the record I don't drive either, haven't since I was a young man.


I don't personally see that as justification for accepting training deaths here.

I'm not talking about stopping the development of driverless vehicles. I'm just telling you that your argument seems like a non sequitur to me.

There are training deaths in the military precisely because humans are being trained to do dangerous things. Why can't driverless vehicles be trained without killing people?

That's the thing you aren't actually answering.


It's simple: opportunity cost.

If a testing plan were going to accelerate, by one month, the adoption of autonomous cars that are twice as safe as human drivers, it would save the lives of approximately 40,000/2/12 ≈ 1,667 people. To stop that testing plan because it is expected to kill 1, 10, or even 100 people would increase the expected number of premature deaths.

There is of course an optimum between training deaths and future lives saved, accounting for uncertainty, etc., but the number of expected training deaths will never be zero, and considering the large number of future deaths currently expected, the optimum is likely to be much higher than zero.
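The back-of-the-envelope arithmetic above can be sketched as follows; all figures are the commenter's assumptions (~40,000 US road deaths per year, autonomous cars twice as safe as humans), not real safety data:

```python
# Expected-value sketch using the figures assumed in the comment above
# (hypothetical assumptions, not real safety data).

US_ROAD_DEATHS_PER_YEAR = 40_000   # assumed annual US road fatalities
SAFETY_FACTOR = 2                  # assume AVs are twice as safe as humans

# Lives saved per year once AVs fully replace human drivers:
lives_saved_per_year = US_ROAD_DEATHS_PER_YEAR * (1 - 1 / SAFETY_FACTOR)

# Lives saved by accelerating adoption by one month:
lives_saved_per_month = lives_saved_per_year / 12

print(round(lives_saved_per_month))  # ≈ 1667
```

On these assumptions, a one-month acceleration saves far more lives than even 100 training deaths would cost, which is the comparison the comment is making.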


How sure are you that the more aggressive development program will lead to faster adoption? If you're talking about a hypothetical program where we are discussing adjusting the acceptable risk level like a slider on an RTS game, then fine, but in practice it might not be like that.

In the real world, a more aggressive program might be that way not because someone carefully dialed in the optimal risk, but because of the psychology and attitude of its executives; the same factors might lead to slipshod engineering, ultimately slowing down progress.

Additionally, bad press from the resultant fatalities could create a political backlash.

I don't know if this is actually the case, but Waymo comes across as one of the more careful and responsible programs, and they seem to have the best engineering and have made the most progress. We don't need 'move fast and break things' in this field. I'd argue we probably don't need it in some other fields either, but that's a different discussion.


> If a testing plan was going to accelerate the adoption of autonomous cars which are twice as safe as human drivers by one month it would save the lives of approximately 40,000/2 / 12 = 1,666.6 people.

This is being overly generous with the assumption that self-driving cars will be safer than human drivers, to the point of being potentially dangerous.

I say potentially dangerous, because this generosity in your hypothetical is being used to justify deaths that need to happen in order to stop nebulous deaths in the future with technology that might not be as safe as your hypothetical assumes.


Is it really any more potentially dangerous than the inverse: being so risk-averse that we'd refuse to probably sacrifice a few to possibly save the many? All we can do is try to optimize our expected value, given our current understanding, with a reasonable risk aversion.

With cars, an improvement to just 90% of the current deaths in the US alone (36k vs. 40k) would literally justify running down 10 people a day. The numbers are uncomfortable, sure, but they don't lie, and while this is a simplistic analysis I don't see where it is qualitatively incorrect: cars kill so many people that even a moderate improvement would be a massive decrease in mortality.
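That break-even claim can be checked directly; again, the 40k/36k figures are the commenter's hypotheticals, not measured data:

```python
# Break-even sketch for the 10%-improvement scenario above
# (hypothetical figures from the comment, not real data).

current_deaths = 40_000    # assumed annual US road deaths today
improved_deaths = 36_000   # assumed deaths with a 10%-safer AV fleet

deaths_avoided_per_year = current_deaths - improved_deaths  # 4,000
break_even_per_day = deaths_avoided_per_year / 365

print(round(break_even_per_day, 1))  # ≈ 11.0 per day
```

So roughly 10-11 incidental deaths per day is the break-even point under those assumptions, which is where the "running down 10 people a day" figure comes from.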

For a comparable situation, see research into emergency medicine. It is impossible to get consent, people likely have died or will die as a result of trials, and yet the trials are (judged to be) in the common good, despite some very reasonable reservations.


You make a very dubious moral argument when you imply that preventing one death can justify another death. That assumption is not at all obvious.

If taken to be true, it can potentially be used to justify any random murder.


We accept that argument all the time in practice.

Seat belts kill. They occasionally strangle people.

Yet they save far more than they kill, so we not only use them, but many countries mandate their use by law knowing people will die as a result.

It is morally justified because we're not sacrificing a known subset of people to save another known subset of people that don't overlap - we're sacrificing a small random subset of people to save a larger random subset drawn from the same larger set, and so reducing the chance of harm to all, rather than transferring it.

This distinction is key, and would reject most "random murders" you might propose.


Which is still under the assumption that there's some orders-of-magnitude-safer-AI-driving-paradise. With seatbelts, we know how many people are saved by them vs. killed by them. With AI driving, we are merely guessing.


It doesn't need to be "some orders-of-magnitude-safer-AI-driving-paradise". If it improves on human driving by 10%, it'd still save thousands of lives a year.


This is still very generous, given that autonomous driving is still orders of magnitude away from matching the safety of human drivers.

Coming from a crowd that is probably intimately familiar with the limitations of Google Assistant's ability to understand the English language, I feel like we're being overly optimistic here.

I don't doubt that eventually we will be able to create safe autonomous vehicles, in the same way that eventually we will be able to treat cancers much more effectively than we do today.

However, I find it odd that posters are not applying the same level of optimism to other fields, nor the same level of skepticism to this field that they would apply to something like cancer research. Especially given that a breakthrough in cancer treatment could effectively save tens of millions of lives annually, versus the one million lives that could be saved if we completely eliminated automotive deaths. Which, again, is a moonshot, given that autonomous processes in other industries still have an annual death toll.


Does preventing 10 deaths not justify 1?

Does preventing 1.1 deaths not justify 1?

You misunderstand or are misrepresenting my position: sometimes lives invested are worth it in lives saved.

Edit: FWIW, I'm not in the field of autonomous vehicles.


Unfortunately, we don't know that it will prevent any deaths. Some people hope that it will, but there's no hard data showing that programs with more deaths progress faster. The outcome could very well be that this strategy prevents 0 deaths and causes 10.

The road to hell is paved with good intentions, and the most dangerous path is the one in which the ends justify the means. If you don't achieve the ends... then you have nothing but tragedy and sorrow.


You're right, but just as a correction on your numbers, you're vastly underestimating the number of potential lives saved, because of the implicit assumptions that humans only exist in the USA and that self-driving technology developed in the USA will only benefit Americans when it comes to safety.

There are more than 1 million road fatalities yearly according to the WHO [1]. Once the technology is developed it'll be rapidly exported; just in the EU there are around 25k deaths/yr [2], another 4k in Japan [3], etc.

So even if you only include countries and other areas with a similar GDP as the US (which could purchase self-driving vehicles at a similar rate) you easily get upwards of 100k deaths/yr.

1. http://www.who.int/gho/road_safety/mortality/en/

2. http://ec.europa.eu/eurostat/web/products-eurostat-news/-/ED...

3. https://en.wikipedia.org/wiki/List_of_motor_vehicle_deaths_i...


As a non-American I still think that underestimating is the right approach for the simple reason that it pre-empts a "but why should we pay with [whichever country you use] lives" argument from a certain subset. Restricting the estimate to "number of lives saved in the country the testing is done" gives an outcome that is much harder to argue against.


Good answer. Have an upvote.

But, your answer implicitly assumes the total elimination of human drivers and their replacement by autonomous vehicles. This strikes me as unlikely.


Once autonomous vehicles are safe enough, and the cost differential is small enough, expect to start seeing a push to outlaw human drivers.

After every fatal crash caused by a human driver expect to see a "how long are we going to put up with this?" push.

It's hard to predict how soon it will happen, but I can't imagine society continuing to put up with human drivers for very long.


It's not unreasonable, given the same opportunity-cost argument, that most people will switch to and prefer autonomous vehicles if they are shown to be inherently safer, more predictable drivers. This is a huge risk to long-haul truck drivers, since those jobs are already long and grueling as it is. Their livelihood is threatened by the fact that robot truck drivers don't sleep. It's not a matter of if they will be replaced but when, since logistics is a huge business and nothing to mess around with.


I have no doubt that truck drivers' jobs are under threat.

But you are justifying this assumption with yet more assumptions, not hard data. For example: If they are proven to be inherently safer.

I'm literally just rather tired at the moment and also finding this argument wearying. But actual reality has a really long track record of failing to conform to human predictions of that sort.

When antibiotics were discovered, it was predicted to be the end of human disease. Fast forward to today, and the articles we routinely read are about the crisis of antibiotic shortages, antibiotic-resistant infections, and what will we do now?

When the first airplanes came out, they had square windows. So did the first jets -- until they began falling from the sky as if some Cthulhoid horror had ripped them to pieces. Then they changed the windows to rounded designs.

Human ability to accurately predict the future is notoriously lousy.


Because it’s impossible to guarantee that an algorithm will be 100% effective in preventing deaths. So, you either are for driverless vehicles and are willing to accept some risk, or you are against driverless vehicles driving on roads. It has to be one or the other.


You can fail to be against something without being for it.

I find it strange that people seem to have a problem with my question but no one but me seems to have a problem with someone simultaneously claiming that we need to treat training deaths as shrug-worthy while assuming that driverless vehicles will clearly eliminate huge numbers of deaths annually once they are out there.

This is the first suggestion I have heard that human drivers actively need to be eliminated instead of driverless vehicles being yet one more option in an increasingly diverse transportation ecosystem.

Although I no longer have a driver's license, the implication that someone might desire to outlaw human drivers someday for supposed safety reasons while simultaneously justifying accepting death by driverless vehicle seems somewhat disturbing.


“You can fail to be against something without being for it.” Not sure what that means, but testing autonomous vehicles will result in deaths. So, you either decide zero deaths are acceptable and don’t test, or allow some deaths and test. There is no third option in this case.


Typically, when we test experimental products to gauge their safety and efficacy, we engage in informed consent with the participating parties.

If everyone involved knew the risks, accepted them and moved forward, I'd agree with your premise.

However, if I am walking down the street and I'm run over by a rogue autonomous vehicle, I didn't give consent.

I don't think anyone would be as blasé as they are with autonomous vehicle deaths in a different situation.

For example, if a potential cure for heart disease was tested by dumping it in the public water supply causing people to die, would we have posters here saying that testing such drugs will result in deaths and shrug them off?


> If everyone involved knew the risks, accepted them and moved forward, I'd agree with your premise.

Do you consent to the risks of letting 16 year olds drive? They're high, but you don't get the option.

I understand not agreeing to extremely-risky self-driving cars.

But if they can beat the very lax standards we use to license humans, that should be good enough.

Requiring them to be infinitely more perfect than humans is nonsense. If a car drives you over, do you really care if it was a robot or a human driving? I don't. The most I can ask for is a universal bar. And all the evidence I've seen is that Waymo is meeting that bar.

> For example

People would object because that's a stupid way to test and unrelated to the job of delivering safe water. If you want to talk about real water treatments, we do make tradeoffs!


> Do you consent to the risks of letting 16 year olds drive? They're high, but you don't get the option.

There's a very low bar for them to pass to have their driving privileges revoked if they prove themselves to be a danger.

Society necessitates that people drive. Society does not necessitate that Company X gets autonomous vehicles on the roads by target date Y so that their investors are happy.

> But if they can beat the very lax standards we use to license humans, that should be good enough.

"If". We have some very lax standards for what we consider intelligible English, yet Alexa can't set a timer correctly when I tell it to.

> Requiring them to be infinitely more perfect than humans is nonsense.

Who is proposing this?

> If a car drives you over, do you really care if it was a robot or a human driving? I don't. The most I can ask for is a universal bar.

Do you apply this accident causation blindness universally? Do you care if a person that hits you was drunk or lacked a driver's license vs driving diligently and licensed?

> People would object because that's a stupid way to test and unrelated to the job of delivering safe water. If you want to talk about real water treatments, we do make tradeoffs!

Some people might object to allowing unproven autonomous vehicles onto the street as stupid, but choose not to use that word in an effort to have a respectful discussion.


> Typically, when we test experimental products to gauge their safety and efficacy, we engage in informed consent with the participating parties.

The state consented on your behalf. You, in fact, automatically "consented" to all sorts of dangerous and dubious experiments, including democracy itself, when you became a resident. Though the entire idea that self-driving cars are dangerous and experimental has no basis in reality and by all accounts Waymo's cars are ridiculously safe, even if it were the case that they were dangerous Waymo is operating with the full blessings of the Arizona government.

> Society necessitates that people drive. Society does not necessitate that Company X gets autonomous vehicles on the roads by target date Y so that their investors are happy.

Of course society does not "necessitate" anything. Society is not some natural phenomenon like gravity that operates in necessity. And there are many, many people who would point out that they do not agree with and certainly do not consent to America's dangerous obsession with car ownership that kills 50k Americans a year and has tremendous economic and ecological consequences. But alas.


The social contract is not carte blanche allowance for anything to happen. There's a feedback loop involved, in which the governed can give or revoke consent.

> Society is not some natural phenomenon like gravity that operates in necessity.

However, people are driven by natural phenomena like the conservation of energy, and thus need to eat. For most people in the US, if they want to eat, it is necessary to drive to work.

> And there are many, many people who would point out that they do not agree with and certainly do not consent to America's dangerous obsession

You're speaking to one of them.


Yes, the government gives consent here, but what we are arguing about is whether that consent should be withdrawn if deaths occur or can occur.


> There's a very low bar for them to pass to have their driving privileges revoked if they prove themselves to be a danger.

Robot privileges can be revoked too.

> Society necessitates that people drive. Society does not necessitate that Company X gets autonomous vehicles on the roads by target date Y so that their investors are happy.

Society necessitates that people use cars to get places. You can 1:1 replace human driving hours with autonomous driving hours.

>> Requiring them to be infinitely more perfect than humans is nonsense.

> Who is proposing this?

Anyone who says that self-driving deaths are 'unacceptable' is requiring self-driving cars to be infinitely more perfect than humans.

> Do you apply this accident causation blindness universally? Do you care if a person that hits you was drunk or lacked a driver's license vs driving diligently and licensed?

Being drunk alters your ability to drive. They would be under the bar.

If someone lacks a license but would have qualified, I guess I don't really care.

> Some people might object to allowing unproven autonomous vehicles onto the street as stupid, but choose not to use that word in effort to have a respectful discussion.

All drivers are unproven at first.


> Robot privileges can be revoked too.

In a way, we're discussing that right now. We're in a thread filled with posters who do not want to revoke those rights on the off chance that more dead people now will prevent even more people from dying in the future.

> Society necessitates that people use cars to get places. You can 1:1 replace human driving hours with autonomous driving hours.

This is a generous hypothetical. Society certainly necessitates that people drive, as there is no other way.

It is not true that we can 1:1 replace human driving with autonomous driving; the article in the OP is evidence of this. It is also just as likely that autonomous driving will never reach 1:1 parity with humans.

> Anyone who says that self-driving deaths are 'unacceptable' is requiring self-driving cars to be infinitely more perfect than humans.

If this is your takeaway, I implore you to give this perspective more than a passing thought so that you can reply without turning it into a straw man argument.

> Being drunk alters your ability to drive. They would be under the bar.

I'm not sure what you're trying to say here, can you clarify?

> If someone lacks a license but would have qualified, I guess I don't really care.

Would you care if they qualified, but had their license revoked, perhaps for hitting people with their car before they hit you?

> All drivers are unproven at first.

Thankfully, we train and test these drivers on closed courses, where injury to uninvolved people is minimized, before we allow them on the open road. We both closely supervise them and restrict why, when, how, and what they can drive.


> In a way, we're discussing that right now. We're in a thread filled with posters who do not want to revoke those rights on the off chance that more dead people now will prevent even more people from dying in the future.

Some people are willing to trade more deaths now for fewer deaths later. But don't take that as proof that Waymo's cars actually will cause more deaths. They've been pretty safe so far.

I'm not arguing that more deaths are acceptable, I'm arguing that some deaths are acceptable if we're going to be consistent with current road policies.

> It is not true to say that we can 1:1 replace human driving with autonomous driving

You misunderstood the 1:1. I mean that you can take particular driving hours and replace them 1:1. That's what the article is about, even. I'm not claiming it will replace all human driving.

> If this is your takeaway, I implore you to give this perspective more than a passing thought so that you can reply without turning it into a straw man argument.

It seems pretty simple to me. "Are you willing to allow self-driving cars that will kill people, if the number of deaths per mile is under some threshold?" What am I missing? I don't want to strawman people, I just want a realistic assessment of risk.

> Would you care if they qualified, but had their license revoked, perhaps for hitting people with their car before they hit you?

Yes, because it means they went under the bar...

> Thankfully, we train and test these drivers on closed courses where injury to uninvolved people is minimized before we allow them to go on the open road.

Your experience is very different from mine. I trained entirely in public areas. I don't even know where I could find a closed course.


> Anyone who says that self-driving deaths are 'unacceptable' is requiring self-driving cars to be infinitely more perfect than humans.

That's a distortion of what I said. Furthermore, it's pretty laughable to have my internet comment treated like some kind of legally enforceable policy.

Last I checked, I'm not Queen of the world whose word is law.


FYI: All caps on the internet is generally considered to be yelling and is a violation of HN guidelines.


I changed it to comply with guidelines. Feel free to respond to substance.


Why should it be disturbing? For me it would be far more disturbing to have safety-proven, affordable self-driving cars that don't carelessly cause accidents, and to still allow humans to drive, at a cost of tens of thousands of lives per year in the US alone.


We discuss privacy issues and the like daily on HN. If software is driving your car, does someone have access to the data on where you go? Can your car be shut down or driven to the nearest police station by a third party? If you are Black, gay or any number of other things, are you cool with giving up such control in an openly hostile social climate? How much cost does it add to the car? If a software update is buggy and you not only can't drive, but it is illegal for a human to drive, does your wife give birth at home while we wait for Google to fix the bug and update the software, because you are neither allowed to drive her to the hospital nor are there human ambulance drivers anymore?

Etc., ad nauseam.


If it can save 40k lives per year it’s in any case a no-brainer. Ask any of the millions of people who have had a relative killed by a car whether they would care at all. Edit: It is also pretty curious that you are against testing self-driving cars because they might kill someone during testing, while you are perfectly fine with 40k people killed per year and are concerned about the privacy of the people. I’ll tell you a secret: a dead person couldn’t care less about his privacy.


You know, someone made a real cavalier-sounding remark about how you need to break a few eggs to make an omelette. I replied to that by saying, basically, that I understand that attitude for making peace with training deaths in the military, but I don't think it's justified for driverless cars. I tried to make it clear later that part of the difference in my mind is that people die in military training because people are being trained to do dangerous things. But if you are training a driverless vehicle, there's really no reason that absolutely has to involve endangering anyone's life.

And, wow, has that gotten tons of push back while people go to great lengths to frame me like I'm some extremist lunatic. Meanwhile, the person cavalierly brushing off training deaths is making rather extreme comments about how driverless vehicles can completely replace all human drivers, etc and most people are not arguing with that. No, I am the one being argued with.

It's starting to look to me like people are basically looking for some silly reason to argue with me in specific. Because I really did not assert a lot of the stuff being hung on me here.

Again, yes, if we can save 40k lives. That's a very big if. It assumes a 100% reduction in mortality. That implies that you expect driverless vehicles to not merely be better than human drivers, you expect them to be perfect and to have flawless performance.

And it's that sort of ridiculous unstated assumption that has me rolling my eyes and going "Wow, people on HN sure are just looking for crazy reasons to argue with me." Because I don't think that's a remotely defensible position.


Yes, because you keep making this statement that it’s not necessary to endanger lives when testing driverless cars. That statement is false. Endangering some lives is a necessary condition for testing driverless cars. Now, maybe we shouldn’t test them and that’s fine, but you are trying to have it both ways.


> If it can save 40k lives per year it’s in any case a no-brainer.

This is a pretty big assumption without any evidence to support it.

There are many solutions that can potentially save even more lives, such as treatments and cures for heart disease and cancer.

However, I do not see anyone arguing to test these potentially life saving miracles on random people who happen to be walking down the street, like we are with autonomous vehicles.

Certainly, if we relaxed standards on testing cancer and heart disease treatments, we'd rapidly accelerate the development of life-saving cures. The more people we test them on, the better data we'll have to build better models, much like with autonomous driving.

If it can save 500k lives per year from cancer and heart disease, would revoking the need for informed consent to test these potential cures be a no-brainer?


> If it can save 40k lives per year it’s in any case a no-brainer.

Just saying that doesn't absolve you from making an actual argument.

> Ask all the millions of people that have a relative killed by a car if they would care at all.

How motherfucking dare you! My father did die in a car crash when I was a kid. But I also live in a country where totalitarianism actually happened. You should wash your mouth, and then you should sit down and make the argument.

Because to reply to all of it, including

> "If you are Black, gay or any number of other things, are you cool with giving up such control in an openly hostile social climate?"

with

> "If it can save 40k lives per year it’s in any case a no-brainer."

is absolutely not good enough. Would you be okay with that being quoted "out of context" (it wouldn't really be out of context; it's the degree of seriousness you decided to muster) on billboards with your real name attached to it?


I'm not defending Tesla or Uber, they're the ones that have killed people, and I think they both were (and in Tesla's case still are) behaving irresponsibly.

Cruise and Waymo have both done tons of closed course testing, that's where they validate their respective systems against mission critical stuff like knowing when to slam on the brakes. But eventually they've got to go out into the real world and learn to deal with real traffic on real roads. I wish I could tell you the risk was zero, but it isn't and never will be.


> Cruise and Waymo have both done tons of closed course testing, that's where they validate their respective systems against mission critical stuff. But eventually they've got to go out into the real world and learn to deal with real traffic on real roads.

Waymo has been doing that longer than anyone, though.


Technically Waymo has been acting like cowboys longer than anyone. They smartened up and got serious about safety around 2015, but before that they were winging it. It was just a goofy science experiment back then.

The Google self-driving car project was under the leadership of Sebastian Thrun and Anthony Levandowski, and as hardcore engineering types their thinking was 'We'll save more people than we kill', and it was as simple as that.


And there has been no fatality for Waymo, while Uber already had one right at the beginning of their program.


Well, to my ear that sounds like "People will still die in car wrecks and I don't even have data substantiating my assumption that fewer people will die than is true currently, but I somehow feel that if robotic vehicles are killing people, that's automatically better than if a human is behind the wheel."


We wouldn't be doing our due diligence if we didn't try. Otherwise it's pretty much a guarantee that 30-40,000 people will die on America's roads next year and every year after that.


That still in no way answers the question I have posed concerning why developing driverless vehicles must involve training deaths.

I have already said I am not against their development per se.


It's an almost nonsensical question, that's why you're having trouble getting people to answer it.

People are saying "a small number of people might die and no one wants that, but it's impractical-to-impossible to guarantee that zero people will die." And you are asking, "but why? But why?"

Now and then there are washing machine deaths. Society accepts these because they're so rare and because it would be impractical to completely prevent them.

Compared to the number of road deaths most experts believe will be prevented over time, the few training deaths we may encounter seem completely inconsequential.


Acting like my question is dumb because the first however many replies to my comment made no real effort to actually answer it isn't a good faith argument and veers rather close to a personal attack.


> Now and then there are washing machine deaths. Society accepts these because they're so rare and because it would be impractical to completely prevent them.

Are you sure about that? Society just accepts them and moves on?

There aren't lawsuits? There aren't recalls? There aren't redesigns? There aren't safety measures taken so deaths don't happen again? There aren't investigations? Fines aren't levied if they violated regulations? Regulations aren't passed in response? Everyone just rolls over and says, "This is just the price of washing clothes" like we are with autonomous cars?


That person you're talking about, that says autonomous cars are just going to cause deaths and it's okay and we shouldn't try to investigate or improve safety measures or check if regulations were violated? That person doesn't exist.

Accepting that accidents will always happen does not imply you learn nothing and improve nothing.


> That person doesn't exist.

I agree, beyond a select few individuals who have not posted on HN at all, I do not see or believe that anyone is calling for unnecessary deaths if they can be avoided.

I do believe that we're letting optimism and good intentions get the better of us, by allowing our interests in the betterment of the humanity to align with the interests of business, which would like to see autonomous cars on the road unencumbered, unregulated and unquestioned as soon as humanly possible.

Hence why I am advocating for a level of healthy skepticism. I am imploring posters who are taking it on faith that autonomous vehicles will solve the problem of automotive deaths if we suspend our disbelief, to apply the same level of skepticism to this field as they would, say, biotech.


There's no way to possibly guarantee 100% safety so proposing it as the standard is as good as killing off the program.

What's that classic example? A robocar has to choose between (potentially) killing a bus full of school kids or a bunch of adults standing around on the sidewalk.


> Having highly trained autonomous vehicles will keep us out of car crashes

I hate to be the guy who says, "Citation please," but yours is an assumption without evidence.

For all we know, autonomous driving safety might plateau at a rate that's lower than that of human drivers.


> why we should shrug at the idea of people dying for this thing

I was going to say "nobody's shrugging", but then I remembered the killing of Elaine Herzberg by the Uber self-driving car. Maybe the wheels of justice are turning slowly, but right now it looks awfully similar to the Government just shrugging at that incident.


> Having a highly trained, standing army...

... is why we go to war every 10 to 15 years to justify the MIC spend instead of returning to a pre-WW2 stance.


> Having a highly trained, standing army is part of what keeps us out of war

The US is in a state of constant war. Having a large standing army enables US imperialism, it's not for protection.


> we should shrug at the idea of people dying for this thing.

We already shrug at the idea of people dying in cars everyday.


Not really.

MADD was effective at getting drunk driving legislation passed in the states, and likes to claim responsibility for decreasing automotive deaths by one half.

There are people out there who aren't shrugging their shoulders and are actively doing something about people dying in cars.


Yes really!

30,000+ people die in cars every year in the United States. That's roughly the same number killed by guns but how many news stories do you see about car deaths?


Someone left this comment and apparently deleted it while I was looking for and failing to find citations for actual numbers of lives saved by ambulance, for example. I have redacted their handle, but I want to leave my reply here:

> Because people's lives are at stake right now. Every year 10s of thousands of people die due to cars.

> Delaying innovation that would prevent deaths dooms those 10s of thousands of people to death in the future.

You say that as if, clearly, cars never save lives. It's all downside.

Ambulances save lives.

Fire trucks save lives.

Those are the easy, obvious answers. But I would argue that lives are also saved and enhanced by access to jobs, access to better quality food because of our complicated infrastructure, access to better medical care, etc.

You aren't counting when things go right. You are only counting when they go wrong.


There are fatal accidents caused by ambulances and fire trucks, do you want to outlaw them because you cannot guarantee that there won’t be a fatal accident during their use?


Well, we accept deaths in cars... right now it's hard to tell the numbers, but based on what Tesla has said, self-driving car stats seem similar to those of normal cars.

Ignoring self-driving cars is essentially accepting deaths in non-self-driving cars just as much...

I sure hope you never argued that we should stop people from driving altogether...


Every large infrastructure project, such as bridges etc., has an expected number of deaths. We know this and accept it as a sacrifice we can live with. Self-driving cars should be seen as no different. And once they are safer than human drivers, they will pay back those lives.


Why do we have to accept preventable deaths of any kind?

We can start banning all sorts of things to reduce car deaths.

Lower the highest speed limit to 35mph.

Ban having a cell phone in your car, because access to it is distracting.

Ban listening to music in your car because that is distracting.

Why focus on slowing progress on the best thing for road safety - self-driving cars, while letting humans risk their lives all day and night due to their own negligence?


"breaking a few eggs": We can surely afford to lose a few cars. I hope that is what you meant.


Do you feel good about having such a cavalier attitude towards human lives?


That seems to be the situation with Uber killing that woman. If I remember correctly, the emergency braking was disabled and they were relying on the driver taking over. But the driver wasn't paying attention, even though the woman had been crossing those two lanes for several seconds, lit by the car's headlights. The video Uber published, which let them off so easy, obviously came from a camera configured with low sensitivity and doesn't correctly reflect what human eyes would actually see in the headlights' field.


The driver was actually watching "The Voice" on their phone.


Yup, the video they first released was pretty misleading considering what the subsequent investigation has revealed.


The part of your post about following hand signals of construction workers is interesting. Could anyone jump out in front of a Cruise car and direct it? Would you have to be wearing something like an orange vest?


To be fair, anyone could jump in front of any car with a regular driver and direct it with hand signals. If someone was standing on the road in front of me directing me into another lane, I would do it, with or without an orange vest.


Most human drivers have an ACL to only accept these inputs from LEOs working in an official capacity. How would we add a visual ACL to a self-driving car?


Eh, I don't think so. When someone is standing in the street waving you into another lane (or not!), the alternative (other than just stopping there) is to ignore the person and drive into them. Most people seem to take the first option and trust the strange, non-uniformed person who risks their life to stand in traffic and direct it.


Yeah, the only reason to not do it is if I suspect some weird robbery heist is underway. In which case my only real option is to just stand there anyway...


In practice your assumption doesn't pan out. Humans follow the directions of almost anyone standing in the street waving their hands. For example, a pedestrian in the crosswalk may stop and wave a car on ahead of them. The car is supposed to stop and wait for them to exit the crosswalk.

By waving their hands, the pedestrian can actually be held accountable if this results in an accident.


Human drivers aren't really aware of the capacity or credentials of those directing traffic. Source: Sometimes I direct traffic. I'm part of a volunteer organization, taking orders from another volunteer organization, often eventually, yes, getting marching orders from police. People who are wearing vests may or may not be cops, construction workers, or volunteers like myself who have little to no oversight preventing them from putting the vest on and walking out into a busy street when they shouldn't.

At an event I was at recently, traffic control involved at least four different organizations, which had their own gear and their own attire, so you really can't count on everyone you see on the road looking the same either.

Additionally, in terms of lack of credentials: there's a certification process for directing traffic (I've renewed mine twice), but there's a pretty non-zero chance that the person who directed you around a construction site never bothered to get it, and you'll probably never know that either.

Your other mistake is believing human drivers actually do what people directing them tell them to. Source: Sometimes I try to direct traffic.


you can manually control the car as well if something is amiss


You really can't if the assumption is that the person being transported may not be a licensed and otherwise competent driver.


This comment made me think of black hat from xkcd.

(e.g., https://xkcd.com/496/ or https://xkcd.com/1243/)


Possibly more relevant: https://www.xkcd.com/1559/


Definitely :) See? Right up their now wrecked alley.


Potentially most relevant: https://www.xkcd.com/1958/


I don't buy the argument about minimizing driver interventions. Regardless of what metric Cruise is optimizing, there is some metric, and hence there is an incentive to cut corners at the expense of safety. It's the responsibility of every engineer and every manager to prevent this. Given that the incentive will exist no matter what, miles driven between interventions is a pretty good metric to optimize.


> low density Mountain View

I wish


But most driving in America is suburban. So if Waymo builds something that only works in the suburbs (and just literally pulls over and tells you to get out if it approaches a city boundary) that's still a bonkers market.


I don't think Tesla is really considered a serious competitor in the self driving car industry by anyone besides Tesla.

Which is probably fortunate for all the other players since all of the Autopilot issues get attributed mainly to Tesla rather than to self driving cars.


Or the regular public (non-technical people). A few weeks ago I attended a meeting at a top-20 university where the head of an interdisciplinary research department was discussing autonomous vehicles and funding. The first and only name he said was Tesla.

Marketing affects anyone not knowledgeable in the machine learning field.


Waymo is largely unknown by the general public. Tesla and Uber are well known, largely because of negative incidents and marketing.


Google's self-driving cars are quite well known. The "Waymo" name is not.


You know what they say, there is no such thing as bad press.


This is unusual. Almost everyone I have talked to in the field thinks Tesla is barking up the wrong tree entirely.


Talking about progress of various players in self-driving industry is like talking about Schrödinger's cat.

They might or might not have made progress but we won't know until we open the box (i.e. until they launch commercially).

No one has any real data about how Cruise compares to Uber compares to Ford compares to Audi compares to ...

The only comparative data we have are yearly reports mandated by California.

Tesla is not there because in 2017, the last year for which we have the data, Tesla did not test on public roads in California in fully autonomous mode so they didn't have to release those stats (https://www.dmv.ca.gov/portal/wcm/connect/f965670d-6c03-46a9...).

At the same time, Tesla is taking this as seriously as one can and is working on making it work. From the DMV report:

"As described above, Tesla analyzes data from billions of miles of driving received from our customer fleet via over-the-air (“OTA”) transmissions. We supplement this with data collected from testing of our engineering fleet in non-autonomous mode, and from autonomous testing that is done in other settings, including on public roads in various other locations around the world.

Through all of this data, we are able to develop our self-driving system more efficiently than only by accumulating data from a limited number of autonomous vehicles tested in limited locations."

Karpathy recently gave a talk about Tesla's progress: https://www.figure-eight.com/building-the-software-2-0-stack...


As I understand it, the reason Tesla isn't in those stats is because they come from a California mandate for certain kinds of self driving car tests, and Tesla isn't doing those kinds of tests. (disclosure: I work at Google, but not on anything related to any of this, just follow SDC stuff as a hobby)


Correct. Car manufacturers and automotive suppliers are doing only very little testing of the kind that has to be reported in California, at least not in California. Those reported statistics are mostly marketing by Silly Valley companies.


I wonder how low these self-driving cars' price per mile has to go to compete with fully mature public transportation systems.

For example, since I live in the Bay Area: the cost of going from Berkeley to Walnut Creek via BART is around $4 for 16 miles. That's $0.25 per mile, compared to the $0.35 estimate in the article.

Maybe I'm doing the wrong comparison: cars like these won't be competing with rail lines but with last-mile options like public buses. That would make more sense, as I pay around $3 to go 2 miles in my city.

Another thought: the city official mentioned in the article seemed enthusiastic about using cars for public transportation. It reminds me of the advent of cars, when cities focused on building roads and highways instead of investing in public transportation systems. Perhaps the self-driving aspect could be transferred to buses? I don't know whether the cost of buses comes mainly from labor, like cars, or some other factor.


Complementary. Mobility at the destination. You can't easily walk to much of Walnut Creek from BART. Nor Berkeley's hills. Take a ride to the station and everyone wins.


Don't forget to take ride sharing into account. I'd be happy to be in a vanpool picking up a half dozen people as it drives into SF. Six people per vehicle makes even $1.70/mile cost competitive with BART.
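A quick back-of-the-envelope check (the $1.70/mile figure is from the article, the $4-for-16-miles BART fare is from elsewhere in the thread, and six riders per vehicle is an assumption):

```python
# Hypothetical per-seat cost: shared robocar vs. BART.
# All inputs are figures quoted in this thread, not official numbers.
robocar_per_mile = 1.70    # article's current per-mile price
occupancy = 6              # assumed vanpool-style load
per_seat = robocar_per_mile / occupancy

bart_per_mile = 4.00 / 16  # Berkeley -> Walnut Creek fare, per mile

print(f"robocar per seat: ${per_seat:.2f}/mile")      # ~$0.28/mile
print(f"BART:             ${bart_per_mile:.2f}/mile")  # $0.25/mile
```

So a full robocar at today's price already lands within a few cents per person per mile of the BART fare.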


So why isn't that happening today? I suppose it sort of is with Line and Pool but surely $10-ish per hour for a driver can't make a huge difference for this sort of service--which is quite widely-used in other countries.


It is happening today. We've always had some vanpool-ish type companies, though they haven't always been easy to work with. There are some startups trying to change that, like https://www.chariot.com/ -- I've been keeping an eye on them and when they start operating along my commute line I will likely switch away from BART.

The big problem with vanpools is scheduling. Everyone wants flexibility and smarter scheduling systems, in a manner very similar to what Uber has provided for individual rides.


Where I live the buses cost about $400K in capital and last about a decade. I don't know the maintenance and fuel costs, but a reasonable guess would be $30K/year. The drivers cost about $100K/year in total compensation.

So driver labor is a huge component of bus cost. A self-driving system's capital cost could be about the same as one year of a driver's compensation.
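Annualizing the (admittedly rough) figures above shows how dominant the driver is in that cost structure; all three inputs are the local estimates quoted, not audited numbers:

```python
# Rough annualized bus cost sketch using the numbers quoted above.
capital_per_year = 400_000 / 10  # $400K bus amortized over a decade
operating = 30_000               # fuel + maintenance guess
driver = 100_000                 # total compensation

total = capital_per_year + operating + driver
print(f"total:        ${total:,.0f}/year")    # $170,000/year
print(f"driver share: {driver / total:.0%}")  # ~59%
```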


> ...the cost of going from Berkeley to Walnut Creek via Bart is around $4 for 16 miles.

That's just what they charge the customer and does not in any way reflect the true cost of the ride.

Plus, you can stuff 6 people in a robocar @ $0.35/mile vs. 6 * $4 to take BART.


Waymo has been testing in bad weather, but not having the public ride in those conditions. I guess they are working on the easier stuff first: https://www.wired.com/story/waymo-self-driving-michigan-test...


Most of your bullet points are interesting and relevant. I have to take issue with the last one though.

In any discussion of self-driving cars I've learned to completely and totally ignore all statements of the kind "X plans to launch Y at Z point in the future"


So I've heard that bad weather is not compatible with driverless vehicles.

However, I haven't seen what happens when they are in heavy snow or rain.

And, I'm sure I'm misunderstanding the problem, but couldn't electric cars with a motor per wheel detect slipping or loss of traction in an individual tire and slow it down?


The issue is detecting the lines and edges of the road after it snows. Snow just makes it all an inconsistent white matte texture. Even human drivers have issues with it.

Heavy rain at night time can obscure road features to both humans and cameras.


If we did large infrastructure projects in the U.S. anymore, we could fit existing roads and highways with technology meant to communicate better with self-driving vehicles.

Our existing road signage and markings were made for humans. Instead of training software to use them, why not add ones it can better understand?


You know how the account, transit, and check numbers along the bottom of a paper check use that weird OCR font? And are printed with magnetic ink?

That's what we should do with road signs and lane markings. Make them resemble the current ones somewhat, but also be optimized for a computer to recognize them even in extreme conditions. Not only with wider differentiation between signs of different types, but also using radio, UV, infrared, or magnetic cues. Lanes could be painted with metallic pigments for example. Even when snow covers the edges of the roadway, the car could read those lines and center itself.

All of this would be a change to what we already do. No need for extra equipment or drastic changes in infrastructure. Just update how existing signs and road markings are made.


> You know how the account, transit, and check numbers along the bottom of a paper check use that weird OCR font? And are printed with magnetic ink?

You're talking about MICR (Magnetic Ink Character Recognition) and the E-13B font.


I agree with you that this is a route which deserves investigation. My question is, how difficult is it to make something which works "even in extreme conditions"? I would guess that it's really difficult, particularly when one takes into account deterioration (asphalt needs to be maintained constantly, these things probably would be too) and monitoring.

That said, I don't know much about this space so maybe I'm just overestimating the difficulty here.


Part of the whole point behind mapping everywhere the vehicles go is so they can mark ahead of time the locations of signage, lights, lane markings, etc. Whether there's a bunch of vehicle-to-infrastructure sensors all over the place or carefully curated internal maps, there will inevitably still be unmarked changes in road conditions, so robotaxis will still need to recognize changed signage and lane markings, along with all the other dynamic elements on the road.


No idea on the cost of this but I always thought it would be neat if we had heated roads (you can get your driveway heated). That would solve the ground visibility problem, the need for snowplows/salt and save tons of money that is normally spent on cleaning up/repairing accidents. It also might get rid of potholes caused by freezing ice saving on maintenance.


> No idea on the cost of this

Driveway systems require about 35W per square foot when operating. Imagine a 36' wide road, and you're looking at around 6MW per mile.

That's a lot.
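The arithmetic behind that estimate (35 W/sq ft is a typical rating for residential driveway-heating mats; the 36-foot road width is the example above):

```python
# Power to heat one mile of 36-foot-wide road at driveway-mat rates.
watts_per_sqft = 35
width_ft = 36
length_ft = 5280            # one mile

area_sqft = width_ft * length_ft             # 190,080 sq ft
power_mw = area_sqft * watts_per_sqft / 1e6  # watts -> megawatts
print(f"{power_mw:.2f} MW per mile")         # ~6.65 MW
```

That's roughly two large wind turbines' worth of power for every mile of road, continuously, whenever it's snowing.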


Add more power and have the roads charge the car like in F-Zero


It may be done if there's a cheap mass source of e.g. geothermal energy. IIRC the main streets in Reykjavik, Iceland are heated this way to avoid clearing snow, but that's because they're pretty much sitting on top of volcanoes.


There is work going on in that space, but I'm guessing it will come to my local roads no sooner than a decade after it would have been useful: https://whatis.techtarget.com/definition/vehicle-to-infrastr...


We could simply mandate that all "express" lane projects be marked to ease use by self-driving cars. ("Express lane" is the new term for toll lanes.) We could even convert HOV lanes to support self-driving cars, provided they are in that mode. The benefit is that early adopters are more likely to pay for express lanes if that includes being able to go nearly hands-free for the whole drive, in a controlled environment with very limited access points.

Which brings up something else: how do we communicate to others that a car is under the control of its onboard systems? We have DRLs and third brake lights. So something at both ends?


> technology meant to communicate better with self-driving vehicles.

It would be 100x cheaper and better to just fit tracks and have trams. Not nearly as sci-fi of course, but infinitely more practical and useful, if the goal is efficient mass transit for non-drivers.

The whole self-driving thing only makes sense if it is cheaper than retrofitting all the infrastructure. If it isn't, there is literally no point in it; just stick with human self-drivers and drivers-for-hire.


Here's a little gif Waymo posted, showing how they use Machine Learning techniques to filter out noise from snow that their sensors pick up:

https://mobile.twitter.com/Waymo/status/994091703379415042/v...


I don't think it's as much about compatibility as it is about trying to focus effort on easiest cases first.


I think the big problem is visibility. I don't think anyone's suggesting the problem is insurmountable. Merely that cars trained only in good weather are unlikely to work well in poor weather conditions.


My sister, 29, just got approved for the Waymo program in Phoenix. She has epilepsy and hasn’t driven in 10 years since she had a seizure while driving on the freeway and crashed. Miraculously she had no serious injuries.

We sarcastically joke about how Silicon Valley is "making the world a better place", oftentimes for privileged individuals, but Waymo is completely changing her life.


One of the coolest, jaw-dropping examples of this I ran across not long ago is Aira (https://aira.io/ - I have no affiliation with the company). First we had Google Glass, with the popularly derided "glassholes" (not a term I approve of, mind you, though it was persistently thrown about in media and on tech forums).

I've known a few people who lost their eyesight. What services like Aira can accomplish is nothing short of life-changing for the blind. And it, in part, started out from a concept that was mocked as representative of supposedly obnoxious Silicon Valley privileged types.


Aira seems a great idea.


New tech almost always starts out being available only to privileged individuals but then becomes commoditized and available to everyone.


If the new tech is a robot that cleans your toilet, costs can come down until nobody ever has to manually clean a toilet again.

If the new tech is an on-demand market that hires a person to clean your toilet, the amount of manual toilet cleaning in the world is unchanged :)


Instead of working for a unionized cleaning company with a minimum wage, you too can be your own employer in the new app economy of toilet cleaning to order and be paid less, expected to buy your own equipment, and denied sick pay and holidays. But still subject to sacking if you don't put in 12-hour days, or get a sub-four-star review.


Actually often military first, then privileged individuals!


Part of the reason is that new tech is always expensive and unreliable, so only the privileged individuals have money to burn on costly and risky experiments.


Per Gibson - The Future is here, it is just not widely distributed.


Evenly distributed


Well, Waymo is doing it right, and everybody else is nowhere.

The next big jump will be when the next-generation LIDARs come out. All solid state, and much cheaper. Industry analysts say 2020 for that.[1] They can be built now, but nobody is prepared to order enough of them yet. Continental, the big European auto parts company, is probably in the lead. (Quanergy keeps announcing, but try to order what they announced in 2016.)

[1] https://www.prnewswire.com/news-releases/global-and-china-in...


The hardware will surely be commoditized... so the big differentiator seems to be software. Software is one thing that Google surely does well.


Google does the hardware well too. They built their own lidar, reducing cost by 90% compared to Velodyne.

https://arstechnica.com/cars/2017/01/googles-waymo-invests-i...

They also built a second, narrowly focused, longer-range lidar that can be automatically pointed at the region of interest around objects visually detected beyond Velodyne's ~200m range.

https://medium.com/waymo/introducing-waymos-suite-of-custom-...

Both are used on their vehicles. If you have better hardware it makes the software problems higher up the stack easier.


Advanced LIDAR requires some less-used semiconductor fab technologies, like indium gallium arsenide. So you can't just send an order to a commercial CMOS fab like TSMC. Continental bought Advanced Scientific Concepts, which had semiconductor physics PhDs figuring out how to do this. The devices work fine, but early models were for DoD and space applications (the Dragon spacecraft has one), built by hand, and priced accordingly.

Once there's a market for a few million a year, the price will come way down.


IMHO that's overhyping things. There absolutely are foundries that will give you a compound semiconductor chip back. E.g. a lot of radio chips use compound semiconductors, you don't necessarily need your own foundry :)

Also flash LIDAR range is pretty bad and using super-sensitive photodetector arrays results in intolerable yield issues.


I've seen the Conti ASC promos, but they haven't posted the range on the solid-state lidars. I was veeery excited about the tech, but from what I understand it's very hard to direct a laser of high enough intensity to implement this in a useful way.


About 200m in the older ASC models.[1] There's a tradeoff between field of view and range, depending on lens option. It's likely that a vehicle would have a pair of forward looking units, a narrow beam long range one and a wide beam short range unit.

Advanced Scientific Concepts has been making good flash LIDAR units for years, at a price point around $100K. Continental acquired ASC and their team, and is transitioning this from a handmade product made by PhDs in Santa Monica, CA into a volume product made like a German auto part.

I saw the original optical bench prototype back in 2003 when we were preparing for the original DARPA Grand Challenge. They aimed it out a garage door into a sunlit parking lot and took 3D images. But it wasn't portable back then, so we couldn't use it.

[1] http://www.advancedscientificconcepts.com/products/overview....


According to Wikipedia, Velodyne lidars have a range of about 120m. If ASC managed to get 200m, that is hella impressive. I did hear that their tech has some gotchas keeping it from series production, but it's not really clear to me what is _really_ happening. I do think that if they manage to launch this at an acceptable price, it's going to be a big thing.


Do you know if there's any (public) companies who are close to producing solid state lidar at any kind of scale?


About 15 of them are talking about it.[1] About five have demoed.[2] As far as I can tell, nobody is shipping in volume. There are lots of rotating-scanner devices now, still expensive.

LeddarTech is shipping 1x8 and 1x16 pixel flash LIDARs now.[3] That's too low-rez for self driving, and they plan higher resolutions. Velodyne's big scanner is 64x400 (full circle scan). BrashTech, which sells drones for inspecting towers, bridges, and other hard to get at infrastructure, has Continental's flash LIDAR on drones.

So we're at "expensive niche product", waiting for somebody to order in volume.

[1] https://www.tetravue.com/news/ces-2018-summary-of-15-solid-s... [2] https://www.motortrend.com/news/2018-ces-tech-six-budget-lid... [3] https://leddartech.com/modules/leddarvu/


Any idea if this is legitimate? [1]

320x24 resolution and it doesn't immediately look like a time of flight camera (seems too expensive, although the range is poor).

Apparently there's an up to 30m version [2].

[1] https://www.seeedstudio.com/Solid-State-TOF-LiDAR-CE30-p-304... [2] https://diyrobocars.com/2018/04/08/first-impressions-of-bene...


It seems to be real; there are reviews, with videos.[1] It's easier at short range; the exotic sensor materials aren't needed. Those things are going to be a big win for small mobile robots.

[1] https://diyrobocars.com/2018/04/08/first-impressions-of-bene...


Software might be a differentiator for a while, but it'll likely get commoditised. Self-driving doesn't involve a complex UI with high switching costs, and it has few network effects and little virality. And Google doesn't really have distribution locked down for self-driving cars like it does for Search or Gmail.

The hardware on the other hand might not get commoditised so easily. It might have patent protection. If nothing else, high capital costs will mean monopoly or duopoly with enough profits.


Yep. Our wetware seems to do an OK job with nothing more than a couple of suboptimal optical sensors.


Has anyone tried putting LIDAR in a human-driven car? It seems like a heads-up display with a 3D image of your surroundings would improve safety even for human drivers (plus it would look really cool).


Audi is selling passenger cars with LIDAR, but they don't do a 3D point cloud visualization.


Full disclosure: I work for one of Google's major competitors in this space.

> Well, Waymo is doing it right, and everybody else is nowhere.

... has been said for >5 years and looks ever less true as time passes. Remember when Google was supposed to introduce a finished, fully automated driving system in 2017? Google had a very impressive technology demonstrator back in 2012, but they have had endless trouble turning that into a viable product. Turns out "let's just throw ridiculous hardware at the problem" leads to issues when you have to build millions of the thing. Even if their system were perfect already, which it isn't, it is not fit for integration into production vehicles. That's why Google has had very little success trying to sell their system to car manufacturers.

Meanwhile, their competitors are progressing along the "bottom-up" path to full automation quite rapidly. Many basic driving tasks are essentially solved already and will filter down to production vehicles over the next few years. I can't know where exactly Google is right now and obviously I can't state where we are, but I fully expect that the different paths to full automation will converge in a few years. Several companies will have systems that are "good enough" for 90% of common driving use cases. Whether Google's system is at that stage already is unknowable, though their testing seems to indicate that it is not. But if it is, Google are still in a race to "miniaturize" it into a viable product before the competing (already viable) products reach the "good enough" level of performance.


Do you actually do any work with autonomous systems?


This is why cities are gonna have to regulate the hell out of autonomous vehicles to reduce congestion.

The induced demand from affordable autonomous cars is going to be incredible, as the set of people who can travel by car expands to include those under 16, the elderly currently unable to drive, and the unlicensed.

If everyone starts sending their kid to school via their own autonomous car it's going to be a disaster.

A much better idea that doesn't run into issues of limited road space is, of course, autonomous public transit.


Hopefully this regulation materializes as a regulation on personal transit, rather than a regulation on autonomous transit. There's no reason a self-driving car with one occupant is any less efficient than a manually-driven car with one occupant.

If everybody starts sending their kid to school in a private waymo it's going to be a disaster, but right now everybody drives themselves to work in a private car and it already is a disaster.

The Phoenix public transport director who hopes that waymo can bring people to the high-capacity bus lines and the lrt has the right idea. If the waymo (or Uber, or Lyft) app can integrate with public transit networks, that's the ideal solution. I look forward to a day where you open your ride-hailing app and you see options for "Uber all the way: $14, 18 minutes journey, pickup in 5 minutes" or "Uber+bus, $10, 20 minutes journey, pickup in 10 minutes" where the Uber+bus option syncs with the bus schedules and schedules your pickup with just enough time to catch a bus and schedules a pickup at the end of the bus segment of the journey.
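The "schedules your pickup with just enough time to catch a bus" logic could be sketched roughly like this. To be clear, everything here is hypothetical: the function names, the 3-minute buffer, and the prices are all made up for illustration.

```python
# Toy sketch of a multimodal planner: time the car leg so the rider
# "just catches" the bus, then pick the cheapest itinerary.
# All times are minutes from now; prices are in dollars.

def plan_car_plus_bus(bus_departure, car_to_stop, buffer=3):
    """Pickup time that gets the rider to the stop `buffer` minutes
    before the bus leaves (never negative, i.e. never in the past)."""
    return max(bus_departure - car_to_stop - buffer, 0)

def cheapest(options):
    """options: (label, price, total_minutes) tuples for each itinerary."""
    return min(options, key=lambda o: o[1])
```

So for a bus leaving in 20 minutes and an 8-minute ride to the stop, the app would book the car for 9 minutes from now, and the "Uber+bus: $10" option would win over "Uber all the way: $14".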


> The Phoenix public transport director who hopes that waymo can bring people to the high-capacity bus lines and the lrt has the right idea.

This is a nice hope to have, but so far all the evidence I've seen is that Lyft/Uber competes with public transit, driving down public transit use and making the streets more congested. I would expect autonomous ride hailing would have an even greater effect.


It's about pricing. If automated driving gets perfected, then a ride on an automated bus can be cheaper than in an automated taxi, because a bus has a lower cost per seat than a car.


> Hopefully this regulation materializes as a regulation on personal transit, rather than a regulation on autonomous transit.

That's the most sensible way to do things, certainly. But the huge public resistance to regulation of this sort may mean that self-driving cars wind up with no regulation, and thus with the vast mess we can imagine. The average driver is going to get pretty angry at the idea that the coming of self-driving cars means they have to drive less; after all, they spent their relatively scarce money on their car.

So what happens? The US has shown a willingness to ignore the need for basic regulation even when the results are miserable chaos. I can't think of a more likely circumstance to continue that tradition than the coming of self-driving cars.


If we leave it up to the free market to decide how they should be deployed and regulated it'll be a shitshow, no doubt about that. Of course, our roads are already a shitshow, because we let the car industry decide how we should design our cities, and then enshrined the resulting dysfunctions in law.

Waymo at least is signalling responsible deployment, they announced today a partnership with the Phoenix Valley transit authority to help encourage greater transit use.

One valid point they made was that while mass transit is great in high-demand areas and along major transit corridors, out on the feeder routes in the suburban sprawl it isn't so effective. Service is infrequent and off-peak buses run mostly empty. Robotaxis will be great for getting suburbanites from their homes to major transit hubs.

https://arstechnica.com/cars/2018/07/waymo-pilot-program-sho...


> Waymo at least is signalling responsible deployment, they announced today a partnership with the Phoenix Valley transit authority to help encourage greater transit use.

Meh, my guess is they're just trying to get their grubby mitts on some Dial-a-Ride dollars...


If I went around criticizing people for trying to make money I wouldn't have any friends.


I'm not sure how much difference there will be between autonomous ride hailing and autonomous public transit.

Surely you'll have "Uber Pool" for these services, basically from the outset. Some may pay a premium to give their kids a safe ride alone, etc. etc., but pooling rides makes complete sense.

Also, in my experience, school buses are somewhat a thing of the past. Parents line up for blocks dropping kids off, acting as individual chauffeurs.

I agree it will increase trip demand, but it could also easily decrease demand for parking in busy areas.

Long-term, it's easy to imagine that autonomous vehicles could provide much higher passenger throughput for a given road-area. Without human drivers to worry about, they only need narrow lanes, could talk to each other and draft, etc.


Even Uber Pool is making traffic congestion worse.

https://jalopnik.com/even-uberpool-is-making-traffic-worse-i...

> Long-term, it's easy to imagine that autonomous vehicles could provide much higher passenger throughput for a given road-area

No this is nonsense. The gains, marginal as they are, would only exist in places like highways outside of cities where you can assume nothing is going to walk out in front of a car.


It seems like such a service could fairly easily coordinate car and van pools during high-traffic times? Private and public transportation might not be all that different in the end.


Look, I always get downvoted for saying mandatory school buses for K-12. But seriously, it's summer now, and there's no traffic in my Bay Area town. You can go wherever you need to in the mornings.


My theory is that parents who drive their kids to school stay on the streets to do chores. This puts a lot more traffic on the roads at peak traffic times. During the summer, these same people spread their chores out during the day. Thus the total number of cars on the road per day is approximately the same whether school is in session or not, but school hours concentrate the distribution, and that concentrated distribution is far worse.


That might be a good chunk of it, but what I was thinking about was the traffic lights around the schools and how they are timed. It becomes highly congested due to the time it takes for people to get into cars, and the lines that form as people try to get into the lots. This backs up onto the main corridors.

Buses would address this due to the density of students per square foot.


Well functioning SDCs should be able to fit 2-5 times as much throughput onto the same roads as human driven ones.

The solution to congestion is entirely independent of who's driving though: Road Pricing!


> gonna have to regulate the hell out of autonomous vehicles to reduce congestion

They could just have some sort of road pricing - The cars send their movements to a government server and it sends a bill based on so much per mile or similar. You could do the same with Ubers - it just really needs an app to transmit the movements and the pricing could vary with time and congestion.
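A minimal sketch of that billing scheme, with the caveat that the base rate and congestion tiers below are invented for illustration, not any real proposal:

```python
# Per-mile road pricing with a congestion multiplier: the car (or app)
# reports road segments travelled, the server totals the bill.

BASE_RATE_PER_MILE = 0.10  # invented flat rate, in dollars

def congestion_multiplier(vehicles_per_mile):
    """Made-up tiers: the busier the segment, the higher the price."""
    if vehicles_per_mile < 50:
        return 1.0
    if vehicles_per_mile < 150:
        return 2.0
    return 4.0

def trip_charge(segments):
    """segments: (miles, vehicles_per_mile) pairs reported by the car."""
    return sum(miles * BASE_RATE_PER_MILE * congestion_multiplier(density)
               for miles, density in segments)
```

A 10-mile trip on empty roads bills at the base rate, while 5 miles through heavy congestion costs twice as much, which is the whole point of varying the price with congestion.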


> A much better idea that doesn't run into issues of limited road space is of course is autonomous public transit.

I suspect you could argue that for metros and trains it already is largely automated. Surely the scheduling is at the very least supported by computer algorithms, or perhaps even automatically planned and then only manually overridden by human elements as needed?


Vancouver's skytrain light rail is fully automated (and thus achieves crazy short headways). Buses however aren't elevated from traffic and so of course they currently require a driver. If we had autonomous buses then a transit system would lower its labour costs, but I don't know what other potential gains there could be.


I'm actually in Vancouver right now for a public transit conference and have been incredibly impressed. Such a great system!


No "need" for a separate train track, I suppose. I wonder if that would be a net savings in the long run though.


Congestion pricing.


Why should we allocate a scarce public resource (roads) to prioritize the needs of able bodied adults who are driving non-autonomous vehicles?


A revealing quote showing how car transport often turns humans into aggressors against pedestrians:

> “Kids walk and it halts,” she says. “It’s so polite. It’s like, ‘Oh sorry.’ It’s not rude enough.”


A passenger can potentially be just as rude as a driver.


That quote is actually from the passenger criticizing the self driving car as being "too polite."


I assume their regular driver is always willing to wind down the window and yell at pedestrians, or use the horn.

Perhaps passengers in self-driving cars should have a horn button.


Perhaps the passenger, who is described as someone acting like she has a chauffeur, should learn better manners and more respect for other people instead.


That would be ideal, but I'm not expecting any major changes in human nature simply because of technological change.


> Perhaps [17 year olds] ... should learn better manners and more respect for other people instead

A sentence we can all get behind, I suspect :-D


Jeez, yes. All the people in this part of the thread lamenting that self-driving cars will drive safely around pedestrians.


Walking in front of someone and blocking them is also rude, though. How do you discourage that without 'aggression'?

An eye for an eye is a terrible thing, but an angry shout in response to a slight can be an effective way of applying social pressure and reducing the number of slights.


I expect this will be one of the problems with self driving cars. People will simply walk in front of them crossing the street whenever they want, knowing they won't be hit.

And also human drivers will cut off self driving cars all the time knowing the robot driver is extra careful and won't be aggressive.


I can see the first "greedy" local authority insisting that the cars upload footage of human drivers breaking rules in no time. These autonomous cars provide excellent surveillance capability at scale.


Good! Maybe cities will finally be able to reclaim their streets from automobiles https://www.smithsonianmag.com/innovation/when-pedestrians-r...


Oh yeah. Also they have huge following distances so everyone is going to be merging in front of them. Honestly, I would feel so frustrated riding in one of these.


they can still honk aggressively. and they can take pictures of license plates


Honking won't mean much if it comes from a robot. People won't care about that.

License plate photos may help, but will the car send each of these to the police automatically? Police could quickly get swamped by these reports from self-driving cars.


> People won't care about that.

The people around them will notice the honking as well. That creates social pressure against people who cause the honking. Police only needs to act on a few of these violations to create a new consensus behavior.


Presumably the passengers could shout at the pedestrians if they like.

You might think Starship Technologies' autonomous delivery bots would have a problem with people trapping or stealing them, but they seem OK: they have trackers, cameras, and a loudspeaker through which people at the base can say 'oi, put me down'. http://elitebusinessmagazine.co.uk/interviews/item/street-sm...


That seems like a solvable problem. If there's so many reports as to be swamping them, then just those with multiple violations would be reviewed.


Even taking a photo of the license plate is not trivial. If the robot car is not directly behind the violating car, the plate may be at an angle where it can't be captured, or captured distorted, so a human is needed to review the footage (if a video is captured) to read the actual plate number. Automation can be a problem in practice.


I feel like I'm repeating myself, but that too seems like a solvable problem.

A car covered in sensors is likely a lot better at capturing and remembering license plates than a human. Even keeping a memory of the last 60 seconds would be more than enough.

Technology-wise, Google is already very good at OCR. They even have algorithms to specifically identify license plates (as seen in Google Maps where they're fuzzed out).

This seems more a problem of policy (how would it be implemented?) than technology. And that too would evolve as necessity grew.
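The "last 60 seconds of memory" idea mentioned above is just a rolling buffer. A toy sketch, where the frames and the downstream plate reader are placeholders rather than anything real:

```python
from collections import deque
import time

class RollingFrameBuffer:
    """Keep only the frames captured within the last `window` seconds."""

    def __init__(self, window=60.0):
        self.window = window
        self.frames = deque()  # (timestamp, frame) pairs, oldest first

    def add(self, frame, now=None):
        """Record a frame and evict anything older than the window."""
        now = time.monotonic() if now is None else now
        self.frames.append((now, frame))
        while self.frames and now - self.frames[0][0] > self.window:
            self.frames.popleft()

    def snapshot(self):
        """Frames to hand off to some (unspecified) plate-reading step."""
        return [f for _, f in self.frames]
```

The policy question of when to actually run recognition and report anything is, as noted, the hard part; the buffering itself is trivial.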


>Trucking: Waymo has outfitted several Peterbilt Class 8 semi trucks with autonomous packages. The hardware is exactly the same as what’s used on its Pacifica minivans, and Krafcik says the software is 95 percent similar

Is that 5% the 5% that lets them pick routes that don't include under-height structures, stupidly sharp turns, highly congested areas, or other places one generally prefers not to be driving something larger than a panel van?

If so, I very much look forward to the eventual (and already severely overdue, considering how trivial the problem is) release of this feature in their consumer-facing maps product.

Yes, I'm kind of annoyed that it's 2018 and I can't just check a checkbox that says "avoid known under-height structures"

>The experience of riding in a Waymo is surprisingly mundane. The robotaxi drives like a very careful human

I would really like to know how careful they mean. There's a fine line between a good chauffeur for grandma and being so timid that anyone capable of driving themselves would be very frustrated with its performance, and people would honk or make obscene gestures at you regularly.


For the carefulness comment.

>While making a left turn in a large multi-lane intersection, the car signals and creeps forward before accelerating into the turn. Waymo drives conservatively, to be sure, but the robots aren’t cowards. Gone are the days where two self-driving cars facing each other in a parking lot might freeze up from an overabundance of politeness:


You can check that box, just not in consumer grade mapping software. Commercial software exists for that, e.g. pcmiler.com


> I would really like to know how careful they mean.

Most of the Waymo vehicles I saw on the street around Mountain View were pretty timid. Not enough that I needed to honk, but they were basically sending out engraved invitations to pull in front of them. Most recent example: in rush hour, at ~5-10 mph, maintaining 3 car lengths of following distance, signaling for an offramp maybe 10 car lengths before the painted exit lane opened, and then following the traffic until the painted lane started, even though the shoulder was wide and the exit lane was clear. Most human drivers in this situation would not have signaled until they were just about to exit, and would probably have driven on the shoulder outside the lines for some of the way.


I'm a rule follower, and it sometimes gets me into trouble in exactly the situation above. Say there is a merge and a long solid white line before the two lanes are supposed to mix. I always wait until it's legal to change lanes, but often cars behind me jump the white line into the lane I want to get into, making it difficult for me to get into that lane.

Here's an analogy: Hey, here's a great life-hack. The next time you are in a movie theater, stand up to get a better view. The people who are sitting behind you might complain, but they are simply chumps for not standing up too.


So…following the rules? Keep in mind that the moment someone takes a picture of a Waymo car blasting through a shoulder, the program is going to get a lot more scrutiny.


So following rules is a problem now?

If Waymo gets caught violating even a small rule, there will be a thousand articles on HN on self-centered Big Bad Silicon Valley.


Following the rules is better than, say, merging into a bus and hoping it moves, or basically anything the Uber cars ever did. But following the rules when it's totally safe to bend them means trips in a self-driving car take longer than trips where a skilled human is driving, which lowers potential acceptance.


You are just saying that the rules are bad.


The rules are missing nuance, but nuanced rules are hard to enforce.

The better rule would be "don't block the shoulder", instead of "don't drive on the shoulder". But then someone ticketed for blocking the shoulder could claim they weren't blocking it, and blocking is a question of degree, not a question of fact.

If you're aware of the situation and know you won't block the shoulder, it's reasonable to drive on it, and you're unlikely to be ticketed even if an officer observes you; then again, not doing it in the presence of an officer is itself part of situational awareness.

Yes, selective enforcement, but driving rules enforcement is going to be selective unless you live in a police state or have big brother watching your high res GPS all the time.


I would think that vehicle height is so rarely an issue that consumers face that they don't bother integrating it for the free product, because instead that data can be sold as part of a commercial maps offering to the trucking companies who need that information to do business.


How about a checkbox to avoid tunnels? Maybe that won't matter, hopefully there aren't too many autonomous Hazardous Materials trucks rolling around full of explosives.


I imagine the first step is to get them running on manually vetted routes.


> while a longer 11.3-mile trip lists a cost of $19.15. That’s similar to the cost of a ride from Uber Technologies Inc. or Lyft Inc., and cheaper than a local taxi.

I’m really curious to learn more about the economics of this price. My understanding is that the lyft/uber pricing is subsidized. If Waymo can do the same but at a profit, then that will make them hugely competitive.


I guesstimated the numbers from several angles.

What I came up with is that single-occupancy (i.e. one person per car) cost will be less than $10/hour, which is competitive with public transport (a $2.50 subsidized bus ticket in San Francisco, assuming an average 15-minute ride).

And it would drop significantly when sharing (2 to 4 people per car) or when deploying mini buses (they cost more to build and operate but can carry 12 to 16 people, like Chariot buses).

And in the longer term (10+ years) they'll be even cheaper thanks to mass production of cars, transitioning to electric cars, continuously improving reliability based on analyzing what breaks most frequently, contracting their own solar plants for the cheapest charging, etc.

The obvious conclusion is that not only are traditional taxis, Uber, and Lyft done, but so are buses, because it makes no sense to subsidize them with hundreds of millions per year (in San Francisco's case) when the private alternative is cheaper and better.

You can see my full reasoning at https://blog.kowalczyk.info/article/ac23f6cdd3b543b3b89d9f68... and https://blog.kowalczyk.info/article/e79db1cb2fcf4329ac37591b...
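The core arithmetic, as a sketch. All inputs here are my assumed numbers from above, not official Waymo or Muni figures:

```python
# Back-of-envelope cost comparison, using the assumptions stated above.

ROBOTAXI_COST_PER_HOUR = 10.0  # my single-occupancy estimate, dollars
BUS_FARE = 2.50                # subsidized SF bus ticket
AVG_BUS_RIDE_MINUTES = 15      # assumed average ride length

def bus_cost_per_hour():
    """Bus fare normalized to an hour of riding."""
    return BUS_FARE * (60 / AVG_BUS_RIDE_MINUTES)

def robotaxi_cost_per_rider_hour(occupants):
    """Sharing splits the hourly cost across riders."""
    return ROBOTAXI_COST_PER_HOUR / occupants
```

Under these assumptions the bus works out to the same $10/hour as a single-occupancy robotaxi, and a 4-person shared robotaxi drops to $2.50/hour per rider, which is where the "buses are done" conclusion comes from.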


Many bus rides are much longer, are available to people who don't have a credit card, and buses carry more people per unit of road space.


Presumably self-driving buses would be cheaper still!


People hate using real buses because they are so expensive in all the non-financial ways. Robot buses won't be much different: https://ideas.4brad.com/sharing-ride-less-sharing-better-tra...


Indeed, estimates indicate 75-80 percent of the cost of running buses is labor. If you could quadruple the frequencies of buses, that would make a big difference.
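That arithmetic, sketched out (the labor share is the estimate above, not an exact figure):

```python
# If labor is a fraction `labor_share` of a bus-hour's cost, removing
# the driver lets the same budget buy 1 / (1 - labor_share) bus-hours.

def service_multiplier(labor_share):
    """Driverless bus-hours the old one-bus-hour budget now buys."""
    return 1 / (1 - labor_share)
```

At a 75% labor share the same budget runs 4x the service, and at 80% it runs 5x, which is the "quadruple the frequencies" claim.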


Maybe. If the model were changed so that people scheduled rides ~15 mins in advance and met for pickup/dropoff at a semi-centralized location.


This idea naturally lends itself to a clustering problem such as k-means! We can imagine a geographical density as a cluster and that semi centralized pickup point as a centroid. As for deciding k, the number of cluster-centroid pairs, we can leave that up to market prices.

One way to adapt the k-means algorithm to this would be to add a “regularizing” or penalizing term proportional to the number of cars needed to be deployed — you can think of this as the cost per car.
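A toy version of that penalized-k-means idea. Purely illustrative: distances are Euclidean on flat 2-D coordinates, and the per-car cost is an arbitrary knob standing in for the market price.

```python
import random

def kmeans(points, k, iters=20, seed=0):
    """Plain Lloyd's algorithm on 2-D points; returns k centroids."""
    rng = random.Random(seed)
    centroids = rng.sample(points, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k),
                          key=lambda i: (p[0] - centroids[i][0]) ** 2 +
                                        (p[1] - centroids[i][1]) ** 2)
            clusters[nearest].append(p)
        # Recompute centroids; keep the old one if a cluster went empty.
        centroids = [(sum(x for x, _ in c) / len(c),
                      sum(y for _, y in c) / len(c)) if c else centroids[i]
                     for i, c in enumerate(clusters)]
    return centroids

def penalized_cost(points, centroids, cost_per_car):
    """Total walking distance to the nearest pickup point,
    plus a regularizing penalty per car deployed."""
    walk = sum(min((p[0] - c[0]) ** 2 + (p[1] - c[1]) ** 2
                   for c in centroids) ** 0.5
               for p in points)
    return walk + cost_per_car * len(centroids)

def best_k(points, max_k, cost_per_car):
    """Let the per-car 'price' decide how many pickup points to run."""
    return min(range(1, max_k + 1),
               key=lambda k: penalized_cost(points, kmeans(points, k),
                                            cost_per_car))
```

The penalty term is what makes the market enter the picture: a cheap car fleet pushes `best_k` up (pickup points near everyone's door), while expensive cars push riders to walk to a few shared centroids.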


A monthly bus pass costs $75.


Uber is not really subsidized in established markets.


Ya, getting self-driving cars for the automotive industry is the equivalent of rockets landing themselves for the space industry. How do you compete with that?


If you can’t compete then legislate... keep an eye out for incumbent friendly legislation in the coming months and years, no doubt preceded by a lot of FUD.


Uber lobbying for increased regulation as a barrier to competition would be the highest of ironies. I want to believe they would know how bad it would look and stay away, but I think the more likely reason it won't happen is that they're working on their own self-driving tech.


By finding things that self-driving cars can't do, such as loading your luggage, opening the door for you, or making sure your car is absolutely spotless.

There's always a way to compete; it's a matter of finding out what people will pay for and people have a long history of paying premium prices for premium services.


It's the 80/20 rule of disruption. Those things are good, but driving from A to B is 80% of everything people really care about.


The flip side of that argument is that everybody and his little sister will dogpile onto the 80% part. The 20% is where the real money is likely to be. It will be a much smaller market, but likely more profitable.

Find your niche.


> By finding things that self-driving cars can't do such as load your luggage, open the door for you, make sure your car is absolutely spotless.

I'm not really sure any of those is intractable for self-driving cars; opening doors is trivial.


I understand that! The point is having a human driver get out and open the door for a passenger, or perform any of these additional services is part of the premium service I alluded to.

Some people will always use the valet even if free parking is available. Those are some of the potential customers I see.


Yeah, but while that might be relevant to the “human involved personal transport industry”, it really doesn't help automotive industry players that aren't competitive in the self-driving space, because once SDCs are good enough, even people providing car-with-included-footman service will use self driving cars to carry the passenger and footman, the latter of whom can then focus on customer service, rather than driving.


The post I was originally responding to was discussing the advantages of SDC over a "real" taxi and asking how it was possible to compete with that, implying that competition was not possible. My response was about ways that a human-driven taxi could compete.

In a general sense, I don't care about the specifics, I just find the thought that you can't compete with something to be intellectually lazy. Discussing the automotive industry at large is diverging from the point I was trying to make.


I see the Waymo test vehicles so often; I am really looking forward to this rolling out on the peninsula!


This might be a dumb question, but are there any negative impacts on our eyes from all these LiDAR components? I feel like I see all the excitement (and I'm excited too), but have there been any studies on the effects of these laser sensors?

What happens when they are ubiquitous?


What does this mean for car ownership/insurance companies/auto dealers/mechanics/public transportation/etc?

Has anyone seen a roadmap that includes how other industries are affected (disrupted)?

Should we think hard about selling our cars now before they are de-valued or become more costly to drive?

Clean Disruption - Why Energy & Transportation will be Obsolete by 2030 - Oslo, March 2016

https://www.youtube.com/watch?v=Kxryv2XrnqM&t=639s


One thing I haven’t seen much discussion of is the value of the data that fleets of these cars are going to throw off.

It’s possible that the value of this data could subsidise ride costs. They basically become a ubiquitous street level surveillance system and for this to be effective you want a lot of them driving around.

Some examples:

Detailed local weather mapping.

Potential for near real time street view.

Very detailed sensing of traffic flows and trends, can also estimate pedestrian traffic.

Could use the sensors to track non-self-driving cars by ALPR (automatic license plate recognition). Locate stolen or wanted vehicles.

Automated reporting of traffic infringements, dangerous driving or accidents.

Can estimate patronage of almost any business, via rides to destination, counting carparks, local foot traffic etc.

Can sell data on infrastructure conditions (pothole location, missing signs, etc) to cities.

Lots of other detailed profiling on riders, consumers, businesses. For example can automatically distinguish between blue collar bars and cocktail lounges.

Almost certainly lots more I couldn’t imagine. It’s the data!


Ubiquitous surveillance is the last thing we need, just the next step in turning the US into an authoritarian nightmare. I don't know who would want to live in a world where robo-cars would call the cops on you for jaywalking or minor non-dangerous traffic violations.

On a side note, the Google Play Services running in the background on most smartphones have already been mapping traffic hotspots and measuring business patronage by time of day.


With Google's existing maps knowledge, I can very easily see them subsidizing the rides by showing passengers advertisements as they pass certain landmarks.

Say, you pass a mall and the passenger sees an ad on the car screen about a sale at Macy's located inside the mall. With a single tap, the passenger has the option of re-routing the car to the mall.

No way Google isn't going to try to monetize it with targeted ads


Ooh that’s not bad. You could also pay to be higher in the rankings when someone asks to be taken to someplace generic like “I want breakfast” or “what liquor stores are nearby”.


All of these things can be done already without self driving cars.


I mean, you could do everything you do on a smartphone before smartphones came out. The consolidation of tools into a single ubiquitous device is what really changed the game.


That consolidation of tools couldn't happen UNTIL the smartphone came out though.

Again, none of those things require self driving technology to put onto a central unit in a car. Add some cameras and sensors that collect data.

A lot of this already happens with things like OnStar and other insurance schemes.


But those sensors aren't going to be installed on every car without a strong core value proposition. Self-driving is a feature that can justify the investment in those sensor installations. Additional use cases can piggy-back on them later.


You've very clearly demonstrated the strong core value proposition.

> Self-driving is a feature that can justify the investment in those sensor installations.

Most of these sensors on their own are pretty cheap for simple data collection. Again, most of this stuff is already tracked, take google maps, or onstar, as examples.


That's such a great talk by Tony Seba. Highly recommended for anyone who hasn't seen it or some of the other ones he's given.

I imagine that the traditional car rental companies will be the first to provide services. They already have fleet maintenance, post-ride clean-up/turnaround, and demand forecasting/repositioning down; all that's left is to set up a subscription model.

Uber could get there but I kinda feel that they will implode if they make an attempt to transition to self-driving cars, from a huge driver backlash or strike.

Though what I most want to see is what happens to all the parking lots and parking garages, which will slowly disappear. Hopefully they can be transformed into green spaces.


I don’t see how caring for a Waymo or a Cruise fleet would be different from that of a city fleet.

A city fleet is basically the set of vehicles owned and operated by a city government, mostly for use by its employees but could also be interpreted to include public transportation.

A caveat is that the Waymo fleet would obviously be significantly bigger and thus more complex to handle, but I’m sure there’s a physical analog of “sharding” or dividing up fleets into geographical sets.
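The geographic "sharding" could be as simple as bucketing vehicles into a coarse lat/lon grid so each depot crew handles one cell. A toy sketch; the cell size and the tuple-based cell IDs are arbitrary choices for illustration:

```python
# Partition a fleet into geographic shards via a flat lat/lon grid.

def shard_id(lat, lon, cell_deg=0.5):
    """Map a position to an integer grid-cell identifier."""
    return (int(lat // cell_deg), int(lon // cell_deg))

def partition_fleet(vehicles, cell_deg=0.5):
    """vehicles: list of (vehicle_id, lat, lon) tuples.
    Returns a dict mapping each grid cell to the vehicle ids in it."""
    shards = {}
    for vid, lat, lon in vehicles:
        shards.setdefault(shard_id(lat, lon, cell_deg), []).append(vid)
    return shards
```

A real fleet would presumably use something smarter than a fixed grid (cells sized by demand, say), but the principle of dividing one huge fleet into independently managed geographic sets is the same.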


If I owned a big auto insurer like Geico, I'd be interested in diversifying into other kinds of risks. Of course Berkshire does have a broader insurance portfolio but still I wonder if this is part of the motivation for them to team up with Amazon and JPM on a new approach to health insurance.


I have just read a bit about the legal side of self-driving cars at https://www.lemberglaw.com/self-driving-autonomous-car-accid.... I think the thing that car companies and lawmakers should think most seriously about is regulation. We all know we will never completely avoid accidents on the roads, so settled regulation is what matters most when accidents involving these robot cars occur.


The pricing is way too high. The price should be compared to that of a car sharing program, not a taxi service.

The trip to school for my kid would cost around 4€ using a car from a car sharing program. That's quite a difference from the mentioned $19. Even with DriveNow, which is maybe the most expensive car sharing service in Germany, we are talking about maybe 9€.

