Waymo’s robotaxi pilot surpassed 6,200 riders in its first month in California (techcrunch.com)
33 points by sdan 30 days ago | 12 comments



Self-driving companies should be required to release the video from every instance where the safety driver has to intervene. It will quickly become clear to the public that these vehicles are decades away from being safe.


One major problem of automotive transport is society's denial of the risks involved. Mandating the release of autonomous-intervention footage, while hundreds die unseen every day in conventional cars, would make everyone less safe.


For all I know, my politicians have been incentivized to allow these autonomous cars on my streets. (For politicians, all it takes is the ability to tout: it'll create jobs, you'll get on TV looking like an innovator.) Why should I believe the government that they're safe? (The most recent Uber self-driving death felt like a cover-up encouraged by politicians, since the original unredacted video wasn't shared for a long time.) How can the public become confident that autonomous cars are safer than human drivers? Certainly there's an incentive for autonomous vehicle manufacturers to lie or distort the truth. What if these autonomous vehicles are merely safer than a drunk driver blowing 0.2 BAC, but not as safe as one blowing 0.08? If presented with those metrics, would you be so dismissive of their risks? Without human drivers intervening, I expect self-driving cars outside of boring highway miles to currently be equivalent to around a 0.3 BAC.

Yes, I understand that self-driving cars are a potentially incredible life-saving tool, and that when perfected, thousands of lives will be saved. But if they're released too early (know of any company with a financial incentive to release a product too early?) they are going to have many victims, possibly someone I know. And yes, the safety of autonomous cars in bumper-to-bumper LA traffic is going to be used to justify their safety on single-lane highways during a Michigan snowstorm. Remember that when self-driving companies cite their "better than human drivers" claim, the humans they're comparing against are "all humans". Who causes the human accidents? The drunk, distracted, inexperienced, or very old. Eliminate those drivers from the statistics, incorporate the non-boring-highway miles, and you'll quickly find that human drivers are likely tens of thousands of times safer than autonomous cars. Isn't that a story the public should understand?

But if you add automatic red light/stop sign detection and automatic braking to augment human drivers, even the drunk, distracted, inexperienced, and very old will become 1000x better than today, and that's going to save lives.

Don't be blinded by the starry future. It's coming, but before we get there, what's more likely to blind you is the fire of an autonomous car that mistook a fire engine for drivable highway.


But a new problem is people thinking computers are always right, so they follow GPS navigation blindly over a cliff, or they don't pay attention when a poorly named driver-assist feature is engaged.

As long as self-driving companies (and the fanboys as well) release PR materials, everyone should be fine with seeing the other side too.

Btw, the claim that Autopilot is better than an average human has not been proven yet; all the existing statistics are very misleading, comparing certain new cars against all vehicles in all conditions. The other statistics from Tesla are also in doubt, because when asked to make the actual data public, Tesla refused (let me know if there has been any progress in this case).


You're going to have to qualify 'safe' a bit. People aren't particularly 'safe' either.

Weather and environmental factors aren't a dominant force in auto accidents - https://ops.fhwa.dot.gov/weather/q1_roadimpact.htm

Idiots are - https://www.reddit.com/r/IdiotsInCars/


The drunk, inexperienced, distracted, and very old are the major causes of accidents for human drivers. But for autonomous vehicles, it's the long-tail scenarios that are going to cause the problems. Weather and environmental factors will multiply those long-tail scenarios and tax the sensors.

Notice that the human eye is amazing. Its resolution is equivalent to hundreds of millions of pixels, and yet we don't allow drivers to have less than half of standard acuity (20/40). But what resolution are self-driving car cameras? 50 megapixels? Hardly. They're legally blind.
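Rough back-of-the-envelope for where "hundreds of millions of pixels" comes from (the acuity, sampling, and field-of-view figures below are all approximations, and mapping acuity onto pixels is a simplification):

    # Pixels needed to match 20/20 acuity across a wide field of view.
    # Assumptions:
    #   - 20/20 vision resolves detail subtending ~1 arcminute
    #   - Nyquist sampling needs ~2 pixels per arcminute
    #   - field of view taken as 120 x 60 degrees
    PX_PER_DEG = 2 * 60                    # 120 pixels per degree
    H_FOV, V_FOV = 120, 60                 # degrees
    pixels = (H_FOV * PX_PER_DEG) * (V_FOV * PX_PER_DEG)
    print(f"{pixels / 1e6:.0f} MP")        # ~104 MP; assuming a wider
                                           # field of view pushes this
                                           # into the hundreds of MP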

Human eyes are even more impressive in their dynamic range. The ability to see a deer on the side of the road while simultaneously being dazzled by an oncoming car's headlights is truly incredible. Self-driving car cameras don't have the dynamic range of a 90-year-old.
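To put that night-driving scene in numbers, a quick sketch (the luminance values are order-of-magnitude assumptions only):

    from math import log2, log10

    headlight_glare = 1e4    # cd/m^2, direct view of oncoming headlights
    deer_in_shadow  = 1e-2   # cd/m^2, unlit roadside at night

    ratio = headlight_glare / deer_in_shadow
    print(f"{log2(ratio):.0f} stops, {20 * log10(ratio):.0f} dB")
    # ~20 stops (~120 dB) in a single scene; a typical single-exposure
    # camera sensor covers roughly 12-14 stops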

When one sensor is temporarily blinded by a drop of rain, how does the car react? When one sensor has failed and the other is blinded by a drop of rain?

Weather causes many more problems for self-driving cars, and yet Waymo tests primarily in sunny locales during the daytime.


Granted, but human drivers are also unsafe. Achieving an order of magnitude better safety than a human is a much lower bar than perfection.


There needs to be more data released than the gross number of disengagements, but video isn't particularly useful for agencies.

There needs to be some kind of common public protocol and infrastructure that allows the government to audit potential crashes and then test other self driving car systems to see how they'd react in the same situation. That'd also likely help with giving visibility into why a particular system acted the way it did, and give the public a better view into how the companies are performing. E.g. how well would Waymo handle the areas that Cruise is currently focusing on?
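One sketch of what a shared disengagement record might look like, purely as illustration (every field name here is invented; no such standard exists today):

    from dataclasses import dataclass, field

    # Hypothetical shared record that an auditor could replay
    # against any vendor's stack.
    @dataclass
    class ActorState:
        kind: str                # "vehicle", "pedestrian", "cyclist", ...
        position: tuple          # (x, y) metres in a shared map frame
        velocity: tuple          # (vx, vy) m/s

    @dataclass
    class Snapshot:
        timestamp: float         # seconds since scenario start
        ego: ActorState
        actors: list = field(default_factory=list)

    @dataclass
    class DisengagementRecord:
        scenario_id: str
        map_region: str          # reference into a shared HD-map format
        weather: str
        snapshots: list = field(default_factory=list)  # lead-up time series
        disengage_reason: str = ""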


The problem with that approach is that self-driving companies have their models trained for their specific hardware, which will differ a lot and make the data incompatible. The only way around that would be to enforce a standard platform, similar to what happens in racing, leaving the differences between competitors to the software that drives it.


I wonder if they could build a spec for a simulated environment that regulators could use to test self driving cars against. Start stacking up edge cases that resulted in crashes to vet new releases and new products.
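A minimal sketch of what that regulator harness could look like, assuming a standardized scenario format and simulator API (Scenario, Simulator, and the stack's step() interface are all hypothetical stand-ins):

    from dataclasses import dataclass

    @dataclass
    class Scenario:
        scenario_id: str
        description: str           # e.g. an edge case distilled from a crash

    class Simulator:
        """Stand-in for the standardized simulated environment."""
        def __init__(self, scenario):
            self.scenario, self.t = scenario, 0
        def done(self):
            return self.t >= 1000  # fixed-length episode
        def sensor_frame(self):
            self.t += 1
            return {}              # would return simulated sensor data
        def apply(self, controls):
            pass                   # would advance the vehicle physics
        def collision(self):
            return False           # would inspect the final sim state

    def certify(stack, scenarios):
        """Replay every archived edge case; any collision fails the stack."""
        failures = []
        for s in scenarios:
            sim = Simulator(s)
            while not sim.done():
                sim.apply(stack.step(sim.sensor_frame()))
            if sim.collision():
                failures.append(s.scenario_id)
        return failures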


The biggest problem with this article is that it takes as a given that commercial autonomous taxi services are inevitable. The data being collected here includes only trips in a pristinely mapped area. There's no data about how often service is suspended because of weather, or how much labor beyond the safety driver is required to keep the vehicles in operational order, or how sensitive the sensors are to common visibility issues that happen in the real world, or which rides are turned down because of construction or other temporary factors, or what the failure states would be without a human safety driver...

But even if Waymo beats all the odds and common sense and makes an autonomous vehicle that can function as a taxi in a way people actually want to use outside of highly regularized, low-density suburban environments, I continue to be surprised that anyone thinks these services would be able to operate profitably, or that they will change anything about how cities work except doubling automobile traffic and irritating the human drivers, cyclists, and pedestrians who are forced to interact with them.


TL;DR: Waymo is not allowed to take money from passengers, and the program is only open to a select few who work there. Most of the miles traveled come from test-driving between rides, not from actual passenger travel.

Free travel for employees in a limited area.




