Waymo outperforms comparable human benchmarks over 7M+ miles (waymo.com)
313 points by ra7 11 months ago | 861 comments



> In July, a Waymo in Tempe, Arizona, braked to avoid hitting a downed branch, leading to a three-car pileup.

> In August, a Waymo at an intersection “began to proceed forward” but then “slowed to a stop” and was hit from behind by an SUV.

> In October, a Waymo vehicle in Chandler, Arizona, was traveling in the left lane when it detected another vehicle approaching from behind at high speed. The Waymo tried to accelerate to avoid a collision but got hit from behind.

It’s worth noting that all 3 of these incidents involve a Waymo getting hit from behind, which is the other driver’s fault even if the Waymo acted “unexpectedly”. This is very, very good news for them.


>Waymo getting hit from behind, which is the other driver’s fault even if the Waymo acted “unexpectedly”.

Yes, but...there is something else to be said here. One of the things we have evolved to do, without necessarily appreciating it, is to intuit the behavior of other humans through theory-of-mind. If AVs consistently act "unexpectedly", this injects a lot more uncertainty into the system, especially when interacting with other humans.

"Acting unexpectedly" is one of the aspects that makes dealing with mentally ill people anxiety-producing. I don't think most of us would automatically want to share the roads with a bunch of mentally ill drivers, even if, statistically, they were better than neurotypical drivers. There's something to be said about these scenarios regarding trust being derived from understanding what someone else is likely thinking.

Edit: the other aspect that needs to be said is that tech in society is governed by policy. People don't generally just accept policy based on statistical arguments. If you think that you can expect people to accept policies that allow AVs without addressing the trust issue, it might be a painful ride.


Ordinary neurotypical drivers act "unexpectedly" on the road all the time. I know that I would brake if I saw a downed branch. And people do much much much more. Suddenly change lanes on the highway with no signaling? Check. Brake suddenly because you almost missed your turn without looking to see if anybody was following close behind? Check. Drive over non-lane portions of the road because you were late seeing your highway exit? Check. Swerve suddenly because you dropped your phone between the seats while texting? Check.

I've seen people reverse up a highway onramp.


I think both points are true: people act unexpectedly all the time, so much so that we've come to expect the unexpected, thanks to the sophistication of our theory of mind. I live in a city where erratic driving is commonplace, but that's in relation to the legal norms. New local norms become accepted, which you can anticipate once you adapt. Some of these 'norms' are handy time savers, others are incredibly dangerous and result in frequent accidents, but persist anyway.

How will AI drivers navigate these 'cultural' differences? Will they insist on following the letter of the law (presumably, for liability) or will they adapt to local practices and potentially lower their overall collision rate with humans driving poorly? An interesting near-term question.


You’re being selectively reductive.

The unexpected is, by definition, what was not predicted.


The point is that the unexpected is supposed to fall within a norm, and self-driving cars cannot tell what a norm is. The idiom is not about the word "unexpected"; it's more about the relationship between chaos and order.


> Ordinary neurotypical drivers act "unexpectedly" on the road all the time.

"all the time" means it is not unexpected. Humans do mistakes and omissions that you've subconsciously learned that humans do, so you can adjust to deal with that.

The difficulty with AI is that it behaves like a completely alien being, making errors that are alien and unpredictable to a human driver.


Each individual behavior is indeed unexpected. When I go through an intersection I don't expect that somebody will blow through a red light and T-bone me. But that does happen.

Perhaps a self-driving car's unexpected behavior is on the "too defensive" side while human drivers are on the "suddenly do something wildly dangerous" side, but the former is easier to account for as a driver, not harder.


You don’t expect that someone will blow through a red light, but you expect that they could, and so you look both ways before you accelerate if you’re the first one waiting when the light turns green.

You wouldn’t expect someone to suddenly accelerate from a stop while they have a red light, so you don’t typically look both ways when driving through a light that has already been green for a while when there are cars on either side waiting.

I’m not saying Waymo cars do that; just an example of expected unexpected vs unexpected unexpected behavior.


Alien and unpredictable until we have enough experience with them to know their edge cases.

From my experience with LLMs, AIs, even when asked to be creative, are fairly consistent, even if at first they seem to be producing output that’s more creative than human norms.


Because of the number of interfaces in the environment, there are a near-infinite number of edge cases. The main point is that because we evolved a theory of mind with humans, we can more reliably narrow that number of edge cases with human drivers without having to experience them first. Having an AV learn those edge cases bears some risk, and that risk may be carried by unwilling members of society.


Unwilling members of society are already carrying the risk of every nascent driver entering the road for the first time solo and every aging driver who hasn't yet had the accident that loses them their license.


That's the point of the risk part of the comment. Because we've evolved to have a theory-of-mind with other humans, we can at least know or intuit some of that risk. That's different than accepting a black-box risk. The whole point of the trust part of the comment is that, in order to build trust, we need to have risk-informed decisions.


> "all the time" means it is not unexpected. Humans do mistakes and omissions that you've subconsciously learned that humans do, so you can adjust to deal with that.

For example, just because someone has their turn signal on and is slowing down doesn't mean they're going to turn.


The question is how different is the set of unexpected things humans do from the set of unexpected things waymo/autopilots do/will do.

Also sometimes I can intuit that the driver in front of me is probably going to do something dumb because of other minor things I've observed about their driving style.


the swarm doesn't though. I can rely on most drivers to avoid black swan events


But unlike an automated vehicle, when a black swan event does occur in a human driver you can't disassemble their thought process, correct it, and apply that correction to all other human drivers unfortunately.


[flagged]


Maybe not the programmer (unless we get to position where they are licensed like other engineering disciplines), but we can hold a company liable. Some companies have already stated they will accept "full liability" [1]. That reference is nearly a decade old, and some companies have admitted AV is a much harder problem than they initially anticipated, so I'm wondering if they would walk that statement back now.

[1] https://www.motortrend.com/news/volvo-will-accept-full-liabi...


Right, but as far as I know, none of the current self-driving testbeds are actively insured and liable. Maybe it's underreported, but if you used the FEMA value of life, you should have seen multiple million-dollar payouts.


> I don't think most of us would automatically want to share the roads with a bunch of mentally ill drivers, even if, statistically, they were better than neurotypical drivers.

I'm not scared of mentally ill drivers. I'm scared of rich 16-year-olds. I'm scared of drunk drivers. I'm scared of drivers sitting so low they can't see most of what is happening around them. I'm scared of drivers having seizures while driving (my mom was hit a while ago by a man who lost his license due to seizures, and still refused to stop driving). I'm scared of drivers who drive without a license because "fuck them, I drive when I want to". I'm scared of people mixing up gas and brake pedals (just got hit by one), in cars which can go 0-60 in 2.6 seconds while weighing 6,000 lbs.


>I'm scared of people mixing up gas and brake pedals

This specific problem is just as much a design flaw as it is a PEBKAC issue.

You have two very similar pedals that perform polar opposite functions right next to each other, and they are both operated by the same foot.

I'm surprised this isn't a bigger problem.


I had to look up the 'PEBKAC' acronym, but I think you allude to the problem of human factors engineering. It's commonplace in aerospace, where safety-critical, time-sensitive decisions must be made and humans are in the loop. I would extend this to autonomous driving systems, particularly when you expand the system boundaries beyond the car itself. Humans are part of that human-car-environment system, whether as pedestrians, passengers, or other drivers, and we should give them consideration.


An average of 44 per day in the US. So... common in absolute terms, but relatively uncommon. And apparently there is software that can help mitigate this.


Slight trolling here: That is a lot of scared. Do you also drive wearing a helmet and protective body suit?

    man who lost his license due to seizures, and still refused to stop driving
If I had to guess, the man's livelihood probably depends upon driving to a job. That is probably why he kept driving.


Oh, absolutely not. He was like 80, retired, and refused to stop driving. He hit my pregnant mom.

Another one, suspended license, got drunk, and decided to play pinball with the car my fiance was in, causing lifelong injuries.

You can't get away from those. America is just a fun place.


I'm sorry to hear about these tragedies in your life.

    You can't get away from those.
I disagree. I recommend that you either leave the US for a different highly advanced economy, or find a US city with decent mass transit. Yes, it will require major changes to your life. Your chance of personal injury while riding mass transit is virtually nil -- a rounding error.


Ah got it, so your solution is to uproot my entire family to optimize for this one issue. Damage is already done. Plus where would I go? Who would take an old man like myself who won't be an ROI on taxes paid before retirement?


> Ah got it, so your solution is to uproot my entire family to optimize for this one issue.

If these tragedies you describe keep happening (as described), and the fault you identified points directly to the problems of American drivers, I find it shocking you suddenly describe it as "this one issue". Either it's as problematic as you described or it's not.


Something can be quite problematic but still cost more to cure than the harms incurred.

And if one believes that, e.g. self driving cars will do a lot to address it in the next decade, they especially might not want to incur the costs of the alternatives.


> I don't think most of us would automatically want to share the roads with a bunch of mentally ill drivers, even if, statistically, they were better than neurotypical drivers.

I've got some bad news for you.


Is it that 1 in 5 adults in the US live with mental illness[1]?

1. https://www.nimh.nih.gov/health/statistics/mental-illness


Others may not necessarily agree, but at least anecdotally, a sizeable portion of drivers I see make all kinds of mistakes (the law of averages dictates more of them are so-called "neurotypical" than not, no?).

"Acting Unexpectedly" can often mean following the actual laws and general guidelines for safe and/or defensive driving. I would hazard a guess that sometimes doing the intuitive thing is, in reality, unsafe and/or against the law. If the car does this in 99% of circumstances, and still gets rear-ended, who is really the problem here?


My wife gets triggered every time I drive 'only' the speed limit.


Driving the speed limit with people whizzing past you is more dangerous than following the speed of everyone around you.


1. The other people are causing the dangerous situation, not you. That is not justification for you to do so too.

2. Most of the time it is a false perception. You only notice the people whizzing past you, not the people driving the speed limit along with you because they never pass you (a variant of survivorship bias). This of course depends on where you are, but most people in fact do drive around the speed limit.


Another thing is, by driving the speed limit I am reducing the average speed on the road. So any driver whose heuristic for choosing a driving speed involves some kind of averaging will drive slower because of me.


> 1. The other people are causing the dangerous situation, not you. That is not justification for you to do so too.

Do you care about justification, or do you care about being safe?


> This of course depends on where you are, but most people in fact do drive around the speed limit.

Counterpoint: Google Maps thinks you can drive an 86-mile trip from Springfield, MA to Albany, NY in 80 minutes, a route which is patently impossible if you're driving the speed limit; you cannot drive 84 miles on I-91 in 76 minutes, an average of just over 66 mph, on a road with no segments posted above 65 mph, without exceeding the speed limit.

https://www.google.com/maps/dir/Springfield,+Massachusetts/A...
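
If it helps, here's that arithmetic as a quick, minimal sketch (the 84 miles and 76 minutes are the figures quoted above):

    # Average speed implied by the Google Maps estimate for the I-91 stretch
    miles = 84
    minutes = 76
    avg_mph = miles / (minutes / 60.0)
    print(round(avg_mph, 1))  # 66.3 -- above a 65 mph limit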

But also, this claim just doesn't stand up to scrutiny. Cars take up a lot of space. You can't just drive through them; you have to take affirmative action to change lanes and pass them. You'd find yourself behind people driving the speed limit pretty often, which you will notice.


> Counterpoint

That is no counterpoint. I am talking about people, not Google Maps. Also, the standard freeway speed limit is 65 MPH AFAIK, so I-91 seems like an exception that Google Maps is not accounting for (and perhaps human drivers as well).

> You'd find yourself behind people driving the speed limit pretty often, which you will notice.

I assume you're implying that you're always driving above the speed limit and you're saying you don't find yourself passing people. There could be many reasons for this. You could be in one of the exceptions that I mention, or you could be driving at times and/or along roads without many people. You could also be selectively remembering things; you are much more likely to remember passing people if the incident is frustrating and less likely if it is not frustrating.


> Also, the standard freeway speed limit is 65 MPH AFAIK, so I-91 seems like an exception that Google Maps is not accounting for (and perhaps human drivers as well).

You cannot average 66 mph without going over 65 mph, and the states involved don't post speed limits over 65 mph. You cannot get this result by a simple oversight under the assumption that speed limits represent the speed people actually drive.

Consider what inputs might lead Google Maps to such a conclusion.


I've heard that. In driving safety class they taught us that the speed limit is safer. I tried to look up a credible source either way and came up blank. Do you have one?

I go by feel. If it feels much safer to speed, I do. Otherwise I'm in the right hand lane, usually a safe stopping distance behind a big rig for aero gains.


I think the speed isn’t where the focus matters. Obviously speed is a factor in severity and a driver who is bombing down the highway is probably more dangerous than your average bear, but safe follow distance is the thing people just overwhelmingly do not practice or care about. If you’re able to react in time if something happens in front of you, that’s what is going to keep you out of (preventable) trouble more than if you’re sticking to 60 instead of 70 on the highway.

The unfortunate thing is people take your propensity to maintain a safe follow distance as an invitation to cut aggressively in front of you, potentially across multiple lanes.

It sucks all around, but I don’t think AVs are the solution and I definitely don’t trust that any company producing AVs is delivering a product that does what it claims. Waymo doesn’t acknowledge the part where their vehicles have human operators take control when the car doesn’t know what to do. I assume they’re not including that data. That’s going to skew the result. Even if the car is super safe in most driving conditions, ignoring what is arguably the least safe conditions the car can be in in your data analysis is fucked and intentionally dishonest.


I don’t think you get the aero gains at safe stopping distances, though.


Maybe not but I like to tell myself that I do :)


If somebody is whizzing past you and you are going the speed limit then they aren't going 5mph faster than you. I've never been on a road where everybody is consistently speeding by so much that it could be perceived as "whizzing."


Ah, yes, kill or be killed.


Try replacing "acting unexpectedly" in your thought process (which superficially I agree with) with the words "acting safely."

It remains to be seen if autonomous driving systems are actually safe. But if the other driver does something that is safe, there's then an onus on the first driver to have accounted for that.


    It remains to be seen if autonomous driving systems are actually safe.
What a strange, leading expression. The phrase "[i]t remains to be seen if" is almost meaningless and highly editorial.

Alternative example:

    It remains to be seen if microplastics will cause higher rates of cancer.

    It remains to be seen if organic peanut butter will cause lower rates of cancer.
What is your reaction to this data published by Waymo? Do you agree that many would reasonably conclude that this particular AV system was safe in these cities? I say yes. I write this as someone who is cautiously optimistic about AV. Waymo seems genuinely lower on the hype compared to other AV companies. I hope they continue this path of higher transparency to encourage other AV companies to do the same.


>But if the other driver does something that is safe, there's then an onus on the first driver to have accounted for that.

I don't disagree, but in order for the first driver to "account" for the actions of the second, they have to have some reasonable ability to predict what that driver will do. That gets us back to the theory-of-mind question.


Are we still talking about a car getting rear ended because it braked? Because you’re meant to leave enough room between you and the car in front to stop safely even if it unexpectedly brakes as hard as possible. Running into the back of a car in front of you (that didn’t just pull out) is always your fault.


I think people are often missing the point. Yes, in a rear-end collision the fault is almost always the following driver's. But having a framework for assigning liability is not the same as having a safety framework. Consider an AV that is consistently brake-checking those behind it due to nuisance alarms. Now I have a harder time predicting what the car in front of me is going to do. Is that a safer or less-safe scenario? Sure, I can mitigate it by giving more trailing distance, but now we've traded traffic flow/congestion for an equal level of safety.


To add onto the sibling's point, the "safe following distance" has a rule of thumb of "3 seconds behind". At 65 MPH (I'm assuming you're in the US), that is approximately 300 feet.

I'm willing to bet that's around 10 times what you were considering as a safe following distance in your head, and probably still 5 times more than what you were picturing for the safe distance behind a brake checking AV.
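
For what it's worth, the arithmetic behind that figure, as a minimal sketch (assuming the 65 mph and the 3-second rule from above):

    # Distance traveled during a 3-second following gap at 65 mph
    mph = 65
    ft_per_sec = mph * 5280 / 3600.0   # ~95.3 ft/s
    print(round(ft_per_sec * 3))       # 286 ft, roughly 300 feet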


The problem with the “safe distance” (my new car has it built into the cruise control) is that it’s more than large enough for a vehicle to merge into, which repeatedly happens.

As more and more cars default to this safety feature, however, it’ll start to even out.


> now we've traded traffic flow/congestion for an equal level of safety.

If you don't maintain a safe following distance for your speed, you are the one creating the dangerous driving environment. Tailgating is worse for both traffic flow and safety.


This, again, gets to missing the point. If a disproportionate number of cars are nuisance brake-checking, it increases the level of uncertainty in driving behavior. I now have to overcompensate on average to maintain the same level of safety.


> I now have to overcompensate on average to maintain the same level of safety.

Tailgating is bad, regardless of whether people brake check or not. If autonomous vehicles are what it takes to get you to stop tailgating and follow at a safe distance, then that is just an added bonus.


There is no single definition of tailgating other than not being able to stop within a reasonable distance. So it is impossible to declare what constitutes tailgating, especially in the mixed case of human drivers and robot drivers (who have a reputation for nuisance braking).

Why, for example, do you think trainers post "Student Driver" stickers on their cars? It's because it signals the driver may be more unpredictable, and people (rightly) tend to give them a wide berth. You're essentially advocating that everyone treat everyone else (and every robot) as a student driver. That's fine for a dichotomous safety mindset, but other people would prefer to recognize the tradeoffs with that approach.

Or maybe you're just deliberately bent on misunderstanding my point, I can't read your mind :)


> There is no single definition of tailgating other than not being able to stop within a reasonable distance. So it is impossible to declare what constitutes tailgating, especially in the mixed case of human drivers and robot drivers (who have a reputation for nuisance braking).

What is this nonsense? The safe following distance is determined by how fast you can stop, not by who is driving the vehicle you are following.

> Why, for example, do you think trainers post "Student Driver" stickers on their cars? It's because it signals the driver may be more unpredictable.

No, it signals they have less experience and are more dangerous drivers. When it comes to driver predictability, student drivers tend to be far more predictable than the adult, overconfident drivers. I've never seen a student driver roaring past me in stopped traffic on a shoulder, or floor the gas to pass me through a light because they didn't want to turn in a turn only lane, or any of the other unpredictable things I see on a regular basis from experienced drivers.

> You're essentially advocating that everyone treat everyone else (and every robot) as a student driver. That's fine for a dichotomous safety mindset, but other people would prefer to recognize the tradeoffs with that approach.

I think that if more drivers treated the people around them as student drivers, our roads would be a lot safer.

I know that if people followed at a safe distance then we would have fewer traffic jams.

Edit: You also seem stuck on the idea that Waymo brakes unexpectedly and unsafely more often than human drivers, yet that isn't clear to me from the data we have. Indeed, the opposite seems true from the data.


>The safe following distance is determined by how fast you can stop, not by who is driving the vehicle you are following.

So when you're driving, do you somehow know the braking distance of every car and the reaction time of every driver around you? You don't, and since their braking distance is needed to know your own braking requirements, you have to use heuristics. Maybe your heuristic is "assume everyone will slam on the brakes, full tilt, at any time." But that is not a pragmatic solution given our current infrastructure. We don't have the road capacity for everyone to drive that way. So we make tradeoffs. Part of that tradeoff means anticipating what other drivers will do and adjusting accordingly. Naturally, this will trade some safety for other things we value. That is the reality of the world we live in. You seem to be advocating for something else. The OP's point was that we might struggle to apply such heuristics without a theory of mind to guide us.

We probably just disagree on the student driver vs. overconfident drivers. I feel like I'm pretty good at anticipating aggressive drivers, and I fear them much less than the super-tentative driver that tends to put other people at risk. But unless you have data, we're just talking about subjective opinion here so it's not really worth delving into further.

>I think that if more driver treated the people around them as student drivers, our roads would be a lot safer.

Sure. But again, it doesn't really fit with the world we live in. Should we all, in general, drive more defensively? Sure. But I doubt our infrastructure will allow for 25+ car lengths between vehicles that the NHTSA recommends, so we're stuck making some tradeoffs.

I agree on the data point. I'm not making strong claims about safety. I'm making claims about uncertainty. One thing that is clear (and I've advocated elsewhere) is that we don't have good data (in part, because companies get to share only what they want in many cases), which makes uncertainty greater.


> We don't have the road capacity for everyone to drive that way. So we make tradeoffs. Part of that tradeoff means anticipating what other drivers will do and adjusting accordingly. Naturally, this will trade some safety for other things we value.

You keep making the same argument over and over even though I've repeatedly explained that it does not match our understanding of how traffic jams form. Traffic jams are caused by braking, especially hard braking. Tailgating increases the need to brake hard if the person in front of you brakes or someone needs to merge. Following closely does not increase throughput; it is simply bad driving with no upside.

> I'm pretty good at anticipating aggressive drivers, and I fear them much less than the super-tentative driver that tends to put other people at risk.

Wow, if you read the behavior I described as "aggressive driving" then we have completely different standards. I was describing reckless driving that blatantly violates the rules of the road.

Stop using excuses for your bad behavior and you might even be able to become a better driver.


What behavior of my own have I stated? Or are you inferring unwarranted conclusions to make ad hominem arguments?

>Traffic jams are caused by braking

Guess what one of the main issues has been in AV...nuisance braking. So much so that they suppress safety-critical actions to avoid unwarranted braking.

I also think you're confusing the proximate causes of congestion for the root causes. Traffic congestion is a load sharing problem. Tailgating is, in part, a symptom of inadequate capacity. You are advocating a solution that exacerbates it by reducing carrying capacity.

To reiterate (yet again) we aren't in disagreement about whether slowing down, or increasing following distance, will increase safety. It will, but also, that's not the point I was making. I'm just saying that is a superficial understanding of the problem and you aren't accounting for the tradeoffs. Those tradeoffs are the reason your proposal misses its mark.


> Tailgating is, in part, a symptom of inadequate capacity. You are advocating a solution that exacerbates it by reducing carrying capacity.

Why are we still debating something that has been answered by science? Stop assuming your intuitions are correct and look at what the science actually says about how follow distances affect the maximum capacity of a road.


Has it been answered by science? Because some of the studies I've come across list "heavy traffic" as a cause of tailgating. I think you might be speaking with unwarranted confidence.


    25+ car lengths between vehicles that the NHTSA recommends
Is this hyperbole? I tried to Google for it, but I could not find it. The best I could find was the three-second rule. Some back-of-the-envelope calcs: length of a Honda Accord (according to Google): 192 inches. 75 miles per hour / 60 minutes per hour / 60 seconds per minute * 5280 feet per mile * 12 inches per foot * 3 seconds / 192 inches = 20.625 car lengths. Even three seconds isn't enough to reach your recommendation of 25+ car lengths.

Note: 75 mph is 120 kmh
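
The same back-of-the-envelope calc in code, in case anyone wants to verify it (same assumptions as above: 75 mph, a 3-second gap, a 192-inch Accord):

    # Car lengths spanned by a 3-second gap at 75 mph
    mph = 75
    inches_per_sec = mph * 5280 * 12 / 3600.0  # 1320 in/s
    gap_inches = inches_per_sec * 3            # 3960 in
    print(gap_inches / 192)                    # 20.625 car lengths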


The actual calculation involves how fast the vehicle you’re following can decelerate, and your reaction time.

You can (usually) follow a large semi a bit closer because its braking distance is longer than yours.

But because of reduced visibility you can end up with “revealed brake checkmate”, where the semi swerves into the next lane because a vehicle is stopped in the lane, which you then need to swerve around or hit.


They recommend estimating 4.5 seconds of stopping distance. But that’s very conservative and probably not realistic.


That's ridiculous. There's nothing "unrealistic" about 4.5 seconds of stopping distance - how could there be? Are you thinking that the highways physically wouldn't fit all the cars spaced at that length?


It's unrealistic if you expect the existing infrastructure to meet both that requirement, the capacity requirements, and the time requirements of people travelling. In the real-world, there are tradeoffs. People who don't understand that tend to have superficial understandings of the problem because their mental models show little nuance.


I’ve entirely lost any semblance of a point you might have had initially. You’re doubling down on a weak stance making hyperbolic claims like “our infrastructure can’t handle cars leaving a safe following distance”. What nonsense!

It’s more effective to let your initial point stand and let the discussion run its course. You’re working against yourself now.


The claim seems reasonable. Most of the time infrastructure is fine, but that 20% of the time between 4pm and 6pm when you are actually using it, it often is not.

More following distance => less throughput. There is already a math problem at play for how to get millions of single-occupancy vehicles to and from a given location in a day. Which is to say, gridlock occurs today - it is a fact. Increasing following distance by 2 to 5x would potentially be quite bad for otherwise heavy but flowing traffic. E.g., NYC is known for good throughput: light turns green, everyone picks up speed and keeps tight following distances. Anything less and that city would lock up more so than it already does.

Part of the issue is just complexity and variety. 2x following distances on most highways would likely be only a good thing. But in each and every circumstance, not necessarily good, perhaps disaster.


There is pretty well-established modeling that demonstrates that the stop-and-go rhythm that arises from tight following distances leads to worse throughput. To avoid the slinky effect, everyone needs to accelerate and decelerate at the same rate. A safe stopping distance is the only way to build in the micro-buffers needed to achieve relatively consistent speed at human scale, or without large-scale traffic orchestration. The naïve takes just sound like armchair hand waving. I guess that’s what I’m responding to.


It does seem on the road that many people lose the ability to predict 30 seconds in the future. That car in front of you has 10 feet to move? GO GO GO! Wait, it only moved just 10 feet - BRAKE BRAKE BRAKE!

I think it can be fun to be the driver that smooths out the shock waves, so I do largely agree with your point. Though, all things being equal, double the following distance and that will double the length of a traffic jam. Longer traffic jams are more impactful - it takes longer to get through them & the longer traffic jam distance is more likely to cause back-ups on on-ramps and in turn to the surface streets leading to those on-ramps. Which is to say, a higher prevalence of gridlock.

The AI crowd is right to say that with 100% AI cars, you can have smooth and fast traffic with impossibly tight following distances. I do believe you need some impractical necessities to accomplish that though.
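
To put rough numbers on that tradeoff, here is a toy steady-state model (my own illustrative assumptions, not a traffic-engineering result): nominal lane capacity is speed divided by the road space each car occupies, i.e. car length plus following gap.

    # Toy lane-capacity model: cars/hour = distance covered per hour
    # divided by space per car. Assumes 16 ft cars at a steady 65 mph.
    car_ft = 16.0
    mph = 65
    ft_per_sec = mph * 5280 / 3600.0
    for gap_sec in (1, 2, 3):
        gap_ft = ft_per_sec * gap_sec
        cars_per_hour = mph * 5280 / (car_ft + gap_ft)
        print(gap_sec, round(cars_per_hour))  # 1s: ~3083, 2s: ~1661, 3s: ~1136

In this crude model, going from a 1-second to a 3-second gap cuts nominal capacity to roughly a third; as noted above, real flow also depends on braking waves, so the steady-state number is only part of the story.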


You seem very combative and to have deliberately missed the connections in other sub-threads, so why don’t you tell me what point you think I’m making and I can tell you how accurate it is.


I’m not combatting anything… I am just offering my perspective that I think your OP is stronger on its own. The sub-thread digressions betray your unfamiliarity with the topic and your narcissistic nature rather than reinforce your OP. Take it or leave it, it’s just an internet opinion.


If you can't relay someone's point as you understand it, there's little reason to weigh in, and even less for that person to try and clarify it. It comes across as a need to argue, rather than a desire to understand.


?


But you aren't overcompensating. Instead you are driving safely. If sudden braking is so rare that you feel comfortable riding right behind a car in front of you then when they do suddenly brake (which will happen eventually), you are now in a very dangerous situation.


If it were doing it consistently then that would be less of an issue. The problem is that complex systems make decisions for complex reasons and are very difficult to predict as a result.

That said, if you routinely tailgate the driver in front on the assumption that nothing will go wrong, then you've chosen to accept the consequences when an otter runs out in front of them (hey, I didn't see that coming either) and they suddenly brake at the limit and you're sitting in their back seat. Or a steering tie rod breaks in a classic car coming the other way, which hits the car in front of you head-on, and now they're stationary.

The question of how much additional caution (in terms of lower speed limits, longer following distances etc.) is optimal in terms of overall QALYs is, I feel, vastly under-considered and under-discussed.


None of these three cases involved the Waymo car behaving in ways that are not that uncommon among human drivers, and our theory of mind does not make us nearly-infallible predictors of what another driver is going to do. Your objection becomes essentially hypothetical unless these cars are behaving in ways that are both outside of the norms established by the driving public, and dangerous.


That’s true, but also one of the selling points of some AI tasks. As a non hypothetical example, the DoD hired a company to train a software dogfighting simulator with RL. What surprised the pilots was how many “best practices” it broke and how it essentially behaved like a pilot with a death wish. Possibly good in war, maybe not so good on a public road.


Your comparison is RL dogfighting? I think you need to brush up on recent autonomous driving systems…


Do you not feel they are related? In general, aviation tech tends to outpace automotive tech (I’ve worked in both). Maybe you could enlighten me?


Modern AVs are not driving like they have a death wish by any stretch of the imagination (and these systems are not developed using raw RL). They are driving safer than humans. Any concern that they are not following established best practices is entirely unfounded and strictly grounded in FUD.


>They are driving safer than humans.

I don’t think you can make this strong of a claim with the available data. Likewise, someone can’t make a strong claim in the opposite direction. The best data I’ve seen is from NTSB investigations, and it clearly shows some very dangerous behavior. But it’s just a snapshot of data.

And I think you’ve taken away the wrong point. It was about unexpected behavior, not “driving with a death wish.”

The ability of people to consistently miss/twist the point to fit their own predetermined viewpoint is tiresome.


> And I think you’ve taken away the wrong point. It was about unexpected behavior, not “driving with a death wish.”

Indeed - so how did systems with a "death wish" enter into this discussion? Well, it turns out it was by you, about which you said "...maybe not so good on a public road."

In this light, your complaint about others twisting the point seems rather ironic.


>so how did systems with a "death wish" enter into this discussion?

It was the way the pilots described the unexpected behavior of the RL model.

If I related a story about ChatGPT where someone said, "It wrote like it was drunk" would you insist that I'm saying ChatGPT actually imbibes in alcohol before coming up with a response? I think you might be missing the point of an analogy. This tends to correlate with people with dichotomous thinking, which is also part of the thread and the difficulty with people understanding the intended point.

The part you missed was that the "maybe not so good on a public road" was about breaking "best practices," not about acting with a death wish. The intent was to underscore, yet again, that unexpected behavior is sometimes beneficial in a wartime environment where you want to keep the other party guessing and off-balance, but not beneficial in a public-safety domain. Again, twisting an argument to make a preconceived point rather than reading it as it was actually written. It's hard to get someone to understand a point when their biases are hell-bent on deliberately not understanding it.


> It was the way the pilots described the unexpected behavior of the RL model.

And you chose to quote it, and to make the point that it is not the sort of thing we want on public roads - as if there's the slightest hint in the extensive testing of these cars that the apocalyptic scenario you chose to introduce to this discussion was anything but hypothetical and hyperbolic FUD.

And then you had the effrontery to chide dcow for responding to your claim, as if they were taking the discussion in this absurd direction. You've chosen to repeat yourself here, so I will say again that there is considerable irony in what you are doing - irony in the sense of statements being made that display a lack of self-awareness.


> > It was the way the pilots described the unexpected behavior of the RL model.

> And you chose to quote it, and to make the point that it is not the sort of thing we want on public roads - as if there's the slightest hint in the extensive testing of these cars that the apocalyptic scenario you chose to introduce to this discussion was anything but hypothetical and hyperbolic FUD.

At some point there were numerous reports of cars using driving assist steering into the dividers at highway offramps. I believe there were in fact some real accidents that happened this way. That definitely was unexpected behaviour and I would argue one could even qualify it as the car acting like it had a death wish, so I don't think the OP statements can be qualified as hyperbolic FUD.

Now you're going to argue that these were early incarnations of non-AV systems (despite the name), but I think they do illustrate how the systems can behave in unpredictable (and dangerous) ways when they encounter novel situations. That's why I commend Waymo for not following the hype and keeping the environment they operate in very restricted.


> Now you're going to argue that these were early incarnations...

Well, yes I am, because, as you realize, it is a valid point! I will also add that, I have, myself, made the point that we have to be careful because complex systems have bizarre failure modes, so just because, in everyday circumstances, these systems may appear to function 'sensibly', we cannot simply trust them to be 'sensible' outside of the scope of testing.

I am, however, a believer in the relevance of empirical evidence, and I also agree (with some caveats) that human-driver performance is a valid basis for establishing whether, and to what extent, autonomous vehicles may be permitted on public roads. We are discussing the publication of some results of Waymo's extensive testing, and I stand by what I wrote in my first response here: none of these three cases involved the Waymo car behaving in ways that are not that uncommon among human drivers, and our theory of mind does not make us nearly-infallible predictors of what another driver is going to do.

I agree with you that Waymo is taking the right approach here (and, FWIW, I regard Tesla's two-faced stance as unethical.)

I doubt I would have objected to Bumby's invocation of a death wish in response to the past events of which you speak, but in response to Waymo's test results and the points I made about them, I think 'hyperbolic FUD' is justified. At some point, a rational person has to make accommodations when things have changed and arguments based on old data lose their relevance.


>I am, however, a believer in the relevance of empirical evidence, and I also agree (with some caveats) that human-driver performance is a valid basis for establishing whether, and to what extent, autonomous vehicles may be permitted on public roads.

This bypasses the entirety of the trust argument originally stated by leveraging the very point (statistics) it cautioned against. (As stated elsewhere: "So all the bleating about statistics may be necessary, but not sufficient, to get wide-scale adoption of AVs on public roadways.")

>none of these three cases

Regardless of whether you think the decision should be based on statistics alone, my further point is that an n=3 sample size is not adequate to make strong claims. Add to that the fact that Waymo only reports the data they want to [1], and there is reason to be careful about making any claims.

Here's how that scenario plays out when we rely on organizations to self-report safety data in a competitive environment, from my experience. Even though it may start with the best of intentions, cost and schedule pressure builds. Now items that would have otherwise been reported as safety incidents are now classified with vague terms like "test anomalies" and essentially buried. They will still report safety metrics, but now it's, at best, incomplete and misleading. Until some event that's egregious enough forces the company to be more transparent.

[1] https://spectrum.ieee.org/have-selfdriving-cars-stopped-gett...


It is absurd of you to suppose that Waymo's testing has yielded just three data points on the safety of its cars... Just suppose there had been no incidents at all - then they would clearly have to be banned from the road, as we would have absolutely no data pertaining to their safety!


Again, missed the point. We shouldn't make strong claims about self-curated data when there's an incentive to make that data look safer than it actually was.

We know they aren't fully transparent. We also know that other well-known and well-funded AV developers have very bad practices that are highlighted when they are forced by regulators to be transparent. While that isn't a smoking gun against Waymo, it should give us pause and make one question a naive perspective in favor of a skeptical one.


"the point"

You made two points right next to each other, about sample size and bias, and the sample size point was invalid.

When called out on that, you don't get to move the goalposts and say that the second one was "the point" and the other person is "missing the point".


There's a subtle nuance you're missing. I am not saying the data is biased; I'm saying we have good reason to believe the data may be biased. I have been careful not to make any strong claims about Waymo here, because my stance is we probably don't have enough data to make such claims. It's a small but crucial difference. We would need more data (i.e., a larger sample size) to make a strong bias claim.

Given the protracted nature of this thread, I get why it's confusing. I have a couple sub-threads that make those two points separately, and was simply trying to show how both are related. If one isn't aware of the broader context of the discussion I was making, I understand why it may seem like goalpost moving. But in reality, I was deliberately trying to tie the two related posts together because (IMO at least) they are related.


> Again, missed the point....

If your absurd claim that Waymo's trials provide just three relevant data points is not part of "the point", then why did you make it? It does not give us any confidence in the proposition that "the point" has been well thought-out.

Furthermore, "the point" keeps shifting: recently it shifted to raising doubts about the provenance of the data whipped up from a six-year-old article. At this point, I feel that a quote is appropriate: "the ability of people to consistently miss/twist the point to fit their own predetermined viewpoint is tiresome."


As I've said elsewhere, my point was part of a larger context. My point was about how important trust is to adoption of AV tech. That goes well beyond the Waymo cases illustrated. The sample size and quality of the data illustrates the need for a broader context of information needed, in addition to the need to understand that humans don't build trust simply from statistical arguments.

And in the vein of trying to steel-man your position, I gave the comment to ChatGPT to see if it, too, considered the central point a claim about AV having a "death wish." Here's what it said:

"The statement highlights that some AI tasks, while effective, may deviate from conventional practices. An example is given where the Department of Defense (DoD) employed a company to train a dogfighting simulator using reinforcement learning (RL). Pilots were surprised by the simulator breaking established best practices and behaving recklessly, akin to a pilot with a disregard for safety. The implication is that while such behavior might be advantageous in a military context, it may pose risks or be unsuitable in civilian settings, such as public roads. The statement underscores the need to carefully consider and tailor AI applications to specific contexts and objectives."

So it seemed to recognize that the central point is that "the behavior" in question is "breaking established best practices" and that the "implication is that while such behavior might be advantageous in a military context, it may pose risks or be unsuitable in civilian settings". There's probably some irony in the fact that AI did better at a reading task.


Sounds like you’re arguing that “AI” is better at navigating complex human discussion than the multiple humans in this thread? I’ll take that conclusion, I guess (whether you ultimately believe yourself or someone else arguing against your own points doesn't really matter, does it).

Really the only thing left is for you to take a flight to SF and watch the Waymo cars drive. Or ride in one if you dare.


Impressive as LLMs are, they lack a theory of mind and are inferior to humans in parsing meaning from statements.

This dispute is not over a difficult or subtle issue: all the people who have responded in this thread see clearly the obvious and unequivocal reading - and, in your own example, ChatGPT also does! It has identified the tacit subject of the sentence "Possibly good in war, maybe not so good on a public road" as the specific military system, with behavior explicitly described as like having a death wish, that is the only topic of the preceding sentence. For one thing, the phrase 'possibly good in war' makes no sense in the reading you are trying to pass off: why, out of nowhere, did the needs of the military appear? And unexpected behavior is not something desirable in general in military systems, any more than elsewhere - it would take very special circumstances and a specific sort of behavior for that proposition to even be entertained. We can see, therefore, that dcow was right to respond 'your comparison is RL dogfighting?...' and 'Modern AVs are not driving like they have a death wish by any stretch of the imagination...'

Oh, and next time you invoke ChatGPT's response, include the prompt, verbatim, like this for example:

Prompt: In the statement 'That’s true, but also one of the selling points of some AI tasks. As a non-hypothetical example, the DoD hired a company to train a software dogfighting simulator with RL. What surprised the pilots was how many "best practices" it broke and how it essentially behaved like a pilot with a death wish. Possibly good in war, maybe not so good on a public road.', what is being called ' not so good on a public road'?

Response: In the statement, the phrase "not so good on a public road" refers to the behavior of the software dogfighting simulator trained with reinforcement learning (RL). The implication is that the simulator, which exhibited behavior contrary to conventional "best practices" and behaved like a pilot with a "death wish," might not be suitable or safe for use in a public road scenario. This suggests a concern about the potentially risky or unpredictable behavior of the AI system in a real-world, civilian setting such as driving on public roads.

What we have in this discussion is a motte-and-bailey fallacy, as we can see in your response to my first post here, which was:

"None of these three cases involved the Waymo car behaving in ways that are not that uncommon among human drivers, and our theory of mind does not make us nearly-infallible predictors of what another driver is going to do. Your objection becomes essentially hypothetical unless these cars are behaving in ways that are both outside of the norms established by the driving public, and dangerous." Your reply, in outline, goes like this:

> That's true...

Here we are in the motte, where you nominally accept that the relevance of your concern, which is not unreasonable in itself, is constrained by the extensive testing that has been performed by Waymo so far...

> ...but...

Here we enter the bailey, where we are supposed to turn our attention to an unrelated system, which was found, on testing, to have alarming unexpected behavior. The bailey has become a place where Waymo has been curating the data to the point where we simply don’t have the slightest idea whether there’s dangerous behavior outside of the human norms lurking in Waymo cars.

It has also become a place where all of Waymo’s extensive testing has produced just three data points against this view. I must say that it seems generous of you to concede even three, if Waymo is curating the data to the extent you imply.


I'll help you out here, since there still appears to be some difficulty.

I've replaced your word "it" to make the point as clear as possible.

>"And you chose to quote it, and to make the point that" [unexpected behavior] "is not the sort of thing we want on public roads"

Or, to put it in different terms, the sometimes unpredictable behavior resulting from RL may be a feature on the battlefield but a bug on public roadways.

I still stand by that point. And, yes, that means dcow also missed that point. There's nothing hypothetical about bringing about real-world case studies of autonomous behavior based on RL models. We've been through this so many times now that I'm coming to the conclusion you may be arguing in bad faith or you get incredibly distracted by certain terms that it inhibits reading comprehension.


>I've replaced your word "it" to make the point as clear as possible.

>"And you chose to quote it, and to make the point that" [unexpected behavior] "is not the sort of thing we want on public roads"

Ah, so we're playing the "I didn't say what I just said" game now. Here's what you actually wrote:

"That’s true, but also one of the selling points of some AI tasks. As a non hypothetical example, the DoD hired a company to train a software dogfighting simulator with RL. What surprised the pilots was how many “best practices” it broke and how it essentially behaved like a pilot with a death wish. Possibly good in war, maybe not so good on a public road."

Anyone with the slightest familiarity with language will find no credibility in your claim that in this comment, the thing that is being called "possibly good in war" but "not so good on a public road" is anything other than the one explicitly-mentioned aspect of the system that you have specifically chosen to present as an example. Furthermore it was completely reasonable for dcow to respond "modern AVs are not driving like they have a death wish by any stretch of the imagination" after you chose to make the above statement.

With your latest response, you continue to put more weight on an anecdote about this unrelated system than you do on the extensive empirical evidence from testing the actual system that is the subject of this article.


Ignoring the no true Scotsman-ism of your post, what's odd is that you are telling someone who actually wrote it what was intended. You've made it completely clear you misunderstood it. I've pointed out exactly where you made the mistake. Yet you can't seem to bring yourself to admit that just maybe your biases made you infer more than what was actually said. The "explicitly mentioned aspect" is the unpredictable behavior. You can tell that, not just from the wording, but from the fact that has been the consistent throughline of the entire sub-thread. And not to beat a dead horse, my secondary point has consistently been that we should not put too much emphasis on self-curated data when there is a bad incentive to embellish it. Yet here you are.


> what's odd is that you are telling someone who actually wrote it what was intended

They're telling you what you wrote, not what you intended to write.

They're correct.

> The "explicitly mentioned aspect" is the unpredictable behavior.

No, it was breaking best practices and behaving like it had a death wish. That's something that has some overlap with being unpredictable, but is not at all the same thing. You can have a predictable death wish, even.

> maybe your biases made you infer more than what was actually said

I don't think I have much bias here and I agree with them. Also consider that the person that writes something is biased to think the communication was clearer than it actually was.

> And not to beat a dead horse, my secondary point has consistently been

It's neat to have a correct secondary point, but it won't make your primary point correct, and people don't need to add a disclaimer of "while your secondary point is fine" every time they criticize your primary point.


Why do you think "best practices" are considered "best practices"? Is it because they lead to the most desired outcomes? If so, what do you think happens when best practices are broken?

Now I'll concede it's possible that RL can lead to finding new best practices. But that's not the case in the story relayed. It is only a good practice in that scenario because the aircraft was unpiloted; it would otherwise put the pilot at too much risk (which is why the pilots said it was acting "as if" it had a death wish; they were anthropomorphizing it). That still means it's bad practice when the goal is to save human lives.


Yes, I understand the concept of a best practice, and why breaking it is bad.

But that's not the same as being unpredictable. Those are orthogonal complaints, and nothing in your quote suggests that the "death wish" flying was more unpredictable.

And we already know that these cars act super cautious, the opposite of acting like they have a death wish.

So that quote has no useful information that we could transfer to the car situation.


> What's odd is that you are telling someone who actually wrote it what was intended...

What I am saying here should be clear to someone such as yourself, who is invoking our theory of mind in his claims: I am making a distinction between what you are now saying you meant all along, and what other people will recognize as having been your intent when you first wrote the passage in question. We cannot present proof, but nevertheless we know, beyond reasonable doubt. Your explanation does not pass the sniff test.


The rules for every vehicle I have ever operated on land, air and water require the overtaking vehicle to maintain separation.


I’m not sure this is the blanket case. Hot air balloons, for example, get right-of-way regardless, on the assumption they have less maneuverability. Weird edge case, I know, but just throwing it out there to underscore the danger of absolute statements.


I have to admit I have not piloted a balloon, but even a sailing vessel overtaking a power boat has to avoid the vessel in front. I would also question the rationality of operating an aircraft you cannot steer :)


But who gets the right of way between 2 hot air balloons?

Funny story, I actually "crashed" in a hot air balloon as a kid, when the hill we were landing on had a draft running up over it that caught the balloon after the basket had touched down, and dragged us along the ground sideways for a good quarter mile.


If I recall correctly, the lower one has right of way because visibility from lower to higher is blocked compared to higher to lower. It is easier for the one further up in the air column to spot and react to the lower one. Going up is also likely a less dangerous proposition than going down.

This is half remembered from a Snowmass balloon rally conversation.


See https://www.law.cornell.edu/cfr/text/14/91.113

Basically, if converging at the same level, the one on the other's right has the right of way.

But a plane in distress has right of way over the balloon.
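
The category pecking order in 91.113 is simple enough to sketch in a few lines. A toy Python version of just the converging case (deliberately ignoring the towing/refueling, head-on, and overtaking clauses):

    # Toy sketch of the 14 CFR 91.113 converging right-of-way priority.
    # Simplified: ignores towing/refueling, head-on, and overtaking rules.
    PRIORITY = {"distress": 0, "balloon": 1, "glider": 2, "airship": 3,
                "airplane": 4, "rotorcraft": 4}
    def has_right_of_way(a, b, a_on_bs_right=False):
        """True if category `a` has right of way over category `b`."""
        if PRIORITY[a] != PRIORITY[b]:
            return PRIORITY[a] < PRIORITY[b]
        # Equal priority, converging (not head-on): the aircraft to the
        # other's right has the right of way.
        return a_on_bs_right
    print(has_right_of_way("balloon", "airplane"))                     # True
    print(has_right_of_way("balloon", "balloon", a_on_bs_right=True))  # True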


Haha well in that case, I’d say the ground had right-of-way :-)


This discussion appears to have run off the road with an unexpected turn.


From the article: "But it’s not enough to answer the most important safety question: whether Waymo’s technology makes fatal crashes less likely."

Wouldn't the most important safety question be "whether Waymo’s technology makes fatal crashes MORE likely?" Why assume Autonomous Vehicles have to perform better?

Waymo fewer fatalities == Win

Waymo same fatalities == Draw

Waymo more fatalities == Bad Waymo


Status quo fatalities == Bad humans

A significant proportion of the current fatal accident rate is based on incidents which involve driving so bad that humans implicated are severely punished for it, or at least banned from driving for a period.

Why would anyone set the benchmark for a commercial driving company lower?


Yeah. The comparison should be to drivers who were properly licensed and not DUI, not to all incidents.


Why do you assume "properly licensed" drivers will be better? My driving test in the United States was a joke. Many are the same. Nothing about it prepared me to be a good driver (which I am not). To me, the phrase "properly licensed" means nothing more than that I hold a small card in my wallet that says I am allowed to drive this car. It says nothing about my skill level.


If your skill level is too poor, you get your license taken away.


I disagree. Some people will drunk drive no matter the legal consequences, or in my case, I once rear-ended someone because I was drowsy but kept driving because I wanted to get home (fortunately no one was hurt, and yes, the accident was completely my fault). The comparison should be against the general population of human drivers because that's the reality on the road.


You really think the safety bar for running a commercial autonomous taxi service should be set lower than the average human legally entitled to drive!?!


Because everyone thinks they're above average. If you're only better than the below-average drivers, then everyone will think they're better off driving themselves, even if that's not true.

Also, we've accepted and made legal frameworks around the concept that people sometimes kill each other with cars. Robots killing people with cars does not benefit from this carve out.


The average data includes literal drug addicts, people driving under the influence of alcohol, and people who drive distracted while texting.

I am none of those, hence I will not trust any system that does not perform at least as well as I do. The bar is much higher than the average.


> hence I will not trust any system that does not perform at least as well as I do.

...which is a pretty big fallacy and, if implemented as law (which I am working to make sure it is not), will get a lot of people killed.

Literal drug addicts, people driving under the influence of alcohol, and people driving distracted are driving right now. Even AV that performs at the level of the median driver will massively reduce the number of those people actually making driving decisions, and therefore directly reduce road deaths.

Why? Because most accidents and deaths are caused by drivers performing at the low end of the spectrum. In the US, roughly a third of traffic deaths involve drunk driving, which would be virtually nonexistent with AVs ("virtually" because the AVs aren't going to be perfect and are going to need manual intervention).

The bar is literally the average, because if everyone drove average, deaths would plummet.

And, unlike the American public, it's massively easier to improve AV driving behavior than average human driver behavior. A system that starts out eliminating the lower end of driving performance while improving over time is a deal too good to turn down if you care about human life.
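
To make the shape of that argument concrete, here's a toy Python simulation with a completely made-up (but plausibly skewed) crash-risk distribution. The numbers are assumptions; the point is that capping the bad tail at the median removes most of the total risk even though only half the drivers change:

    import random
    random.seed(0)
    # Made-up, heavily skewed per-driver annual crash risk: most drivers
    # are fine; a small tail (impaired, distracted, reckless) dominates.
    risk = [random.lognormvariate(-6, 1.5) for _ in range(100_000)]
    median = sorted(risk)[len(risk) // 2]
    # Replace every worse-than-median driver with a median-level AV.
    capped = sum(min(r, median) for r in risk)
    print(f"share of total crash risk removed: {1 - capped / sum(risk):.0%}")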


I am a terrible driver -- so easily distracted. Almost zero men I have met in my life are willing to admit the same. (However, some women in my life do.) As a result, I try to avoid driving as much as possible and use mass transit.


It is only a matter of time before that changes.


You're right. Also, if AI drivers are safer, there will be knock-on effects of making humans drive more safely as well.


These two phrases are strange to me in the context of OP:

    intuit the behavior of other humans through the theory-of-mind
and:

    If AVs consistently act "unexpectedly"
Are you implying that the three specific incidents quoted by the OP are examples of acting unexpectedly? I hope not. Most good, aware drivers would do the same. If you disagree, please provide the alternative driving actions that you consider expected. And would the safety outcomes be better? Unlikely.


No, I'm making a broader claim about the nature of AI in safety critical domains. I am also saying we should be careful about drawing strong conclusions from a sample size of n=3 from a potentially questionable dataset.


I would imagine that, with more experience, anticipating the (more consistent) actions of a machine would be easier than anticipating the actions of an unknown human in an unknown state.


The entire point of this line of discussion is that an ML-based system with extremely weird and unexpected failure conditions and failure states ISN'T "more consistent" than a human, who might follow more closely than physics allows but otherwise is ACTUALLY predictable, because they have a mind that we have evolved to predict.

ML having completely unpredictable failure modes is basically the entire case against putting them anywhere. What would you call a vision system that misidentifies a stop sign because of a couple of unrelated lines painted on it, other than "unpredictable"?


Braking to avoid hitting an obstacle (like that tree branch in the first example) is hardly "acting unexpectedly".

> "Acting unexpectedly" is one of the aspects that makes dealing with mentally ill people anxiety-producing.

Wat.


> Braking to avoid hitting an obstacle (like that tree branch in the first example) is hardly "acting unexpectedly".

Depends on the size of the tree branch.

But also, being hit from behind at low speeds was a pretty common thing in early testing in Mountain View, when they were using the Google name on cars. That there are only a handful of incidents reported in this report means either the software has gotten better at communicating its intent to other drivers, or the driving public is aware that Waymo cars are way more cautious --- if they only make lane changes and unprotected turns by engraved invitation and everyone knows it, that's fine too.

In the early days, it seemed like it might be appropriate to install a 1979 regulation 5 mph rear bumper on these cars, because they'd likely get hit often enough.


>this report means either the software has gotten better at communicating its intent to other drivers, or the driving public is aware that Waymo cars are way more cautious

There's at least one other alternative: selection bias. Since there is no standard industry definition, companies are allowed to not report many incidents.

>"Waymo, on the other hand, ran complex computer simulations after each disengagement, and only reported to the DMV those where it believed the driver was correct to take charge, rather than being overly cautious."[1]

So it may not be reported unless it meets Waymo's (non-independent) selection criteria. I think most people can at least recognize there is a potential conflict of interest when objective reporting isn't required.

[1] https://spectrum.ieee.org/have-selfdriving-cars-stopped-gett...


    Depends on the size of the tree branch.
I love these types of comments on HN: So vague as to be meaningless.

As a thought experiment: Let's put you in a driving simulator and ask you to re-drive the same scenario 10,000 times. In each scenario, we randomly change the size of the branch, both weight and volume. Repeat the same test with 100 other drivers. Repeat the same test with 100 different AV algorithms. I am sure each driver would have their own definition of "branch too large to safely drive over".

What concrete point are you trying to make?

    being hit from behind at low speeds was a pretty common thing in early testing in Mountain View, when they were using the Google name on cars.
Really? Also, this editorialised phrase "pretty common thing in early testing". Again, to me, so vague as to be meaningless. Common? From what perspective. Early testing? From what perspective.


My recollection is during the early on road testing, Google was required to report on all collisions, regardless of severity (which another poster mentioned isn't the case here), and that in most of those collisions, the Google vehicle was hit from the rear at low speeds when the driver behind expected it to go through; without video or other imaging it's hard to judge the exact circumstances of course (and even with video it can be difficult). There may have been a couple inattentive drivers that ran into them with more speed. And the collision where the Google car tried to change lanes into a VTA bus that their algorithm predicted would move for them, being a professional driver and all.

From driving near the things, they're very cautious and sometimes start to move and don't (I saw one try to make a lane change for about a mile on El Camino before it was able to, turning the blinker on and off the whole time).

>> in early testing in Mountain View, when they were using the Google name on cars.

> Early testing? From what perspective.

You know, before they switched the name to Waymo.


The implication is that drivers intentionally ram Google vehicles when they have a ghost of a reason to do so.


Nah, the implication is that many human drivers speed, tailgate, don't pay attention, etc. There's a reason rear end collision blame is usually applied to the car behind by default.

On the road, sometimes I feel like I'm the only one following the rules. :/


Braking for a tree branch of any size seems preferable to running into highway barriers[1] and fire trucks[2].

[1] https://www.kqed.org/news/11801138/apple-engineer-killed-in-...

[2] https://abc7news.com/tesla-autopilot-crash-driver-assist-cra...


Not for a branch that's 2 inches long and a fraction of an inch in diameter.


>Depends on the size of the tree branch.

Our thinking-fast reflexes are to avoid pretty much any obstacle, whether living or dead. Hopefully our thinking-slow brain has time to do a proper evaluation before doing anything drastic.


But this certainly could be construed as unexpected:

>"In August, a Waymo at an intersection “began to proceed forward” but then “slowed to a stop” and was hit from behind by an SUV."

>"Wat."

Are you saying you don't understand why unexpected behavior causes anxiety? It's a pretty well documented effect, from rats to humans.


It was:

> > "Acting unexpectedly" is one of the aspects that makes dealing with mentally ill people anxiety-producing.

> Wat.

The "Wat" probably refers to the fact that this seems unrelated. Dealing with mentally ill people is anxiety-inducing because they act unexpectedly... so what? Lots of things are unexpected. They shouldn't all be drawn into the analogy that says "well, that thing, plus a load of other things, can induce anxiety, therefore that thing should be tarnished with the same brush as all of those things."

People drive in unexpected ways all the time. Of all the criticisms to level at them, "the thing you're doing plus a load of other stuff can induce anxiety" probably isn't top of the list.


This makes me think of people who change two lanes very quickly to be able to turn at the next corner. We all see them; they piss us off with their bad planning, but in the end I don't think an autonomous vehicle would ever try this, depending on programming of course.

Would also be nice to see all incidents, not just injury incidents, to see what kind of unexpected "mistakes" these cars are making.


It shouldn't, simply because it always knows where it's going well ahead of the lane change.


>Dealing with mentally ill people is anxiety-inducing because they act unexpectedly... so what? Lots of things are unexpected.

I think you're understating it.

Mentally ill people act far off the cuff. If I'm walking outside, people can behave unexpectedly (but within the parameters of behavior that doesn't make me anxious).

Imagine: stopping instantly to bend over and tie one's shoes or to look at a storefront. Taking up multiple spaces on the sidewalk. Dropping an item that causes a loud noise. All of these are unexpected movements that require a reaction.

However, if somebody screams about CIA conspiracies or has very erratic mannerisms, that would create more anxiety.

So, apparently their point is that AI behavior on the roads might be more jarring than normal jackass human behavior on the roads.


I am not painting them with the same brush, I'm drawing an analogy to help people understand the context better. In this case, public policy will dictate to what extent AVs are allowed on public roadways. That, in turn, is dictated by trust. I'm pointing out that "trust" may be incompatible with "unpredictability." I'm not sure what throughline you're drawing, but you seem overly hung up on the use of the word "anxiety," and it's causing you to miss the real point.

So to put a finer point on it, people need to acknowledge that public trust is necessary to wide-scale adoption of AV tech. Plenty of psychological research shows how we aren't intuitively wired to understand statistics. So all the bleating about statistics may be necessary, but not sufficient, to get wide-scale adoption of AVs on public roadways.


> seem overly hung up on the use of the word "anxiety,"

It wasn't just the word anxiety; that's a mischaracterisation, and another psychology-adjacent misstep with "hung up on". Your point wasn't great, or at least was very poorly articulated, and that's what caused me to "miss the real point". You seem to have clarified it with this comment, so my challenge to your original point seems to have helped.


Apologies if you think it was a mischaracterization. I was trying to figure out why you missed (what I consider) a pretty straightforward connection between theory-of-mind and predicting behavior. In retrospect, "mentally ill" is too loaded a term and distracts people from that point because it can be triggering. I still think it's a valid point, though, and plenty of people seemed to follow it just fine.


Sounds like the SUV driver was on their phone at the junction, then continued to mess about with it as the cars pulled away.

I imagine the collision speed was quite low since they just left the junction. There's a reason things like following distance and looking in front of you while driving™ exist.


> Are you saying you don't understand why unexpected behavior causes anxiety? It's a pretty well documented effect, from rats to humans.

I'm saying that your leap from "the car in front is behaving unexpectedly because you don't have all the information it has" to "a bunch of mentally ill drivers" is such a complete non-sequitur that it's deserving of a Wat [1].

[1] https://www.destroyallsoftware.com/talks/wat


I understand how you could miss it, but the sequitur is "unexpected behavior".

Why do mentally ill people cause some people anxiety? Because they may behave erratically.* In other words, we have a lot more difficulty ascertaining what they are thinking, and by extension, predicting their behavior. The same can be said for the general public sentiment towards AVs.

* I also understand "mental illness" is a blanket category, and I'm not using it as a stigmatizing term. It may be stretching the analogy too much, but it's simply a proxy for "I have a hard time predicting what's going through this person's mind".


I don't think this applies to any of the incidents mentioned in the article.


In fact, if you don't have enough room to react to "unexpected" behavior, you are at fault lol.


When a human driver must emergency-brake for a downed branch, it's ok. When an AI does it, it's unexpected and needs to be hyperanalyzed. I swear, the trolley problem is absurd; it's poisoned all debate. 99% of crashes are people not being able to stop in time because they don't drive defensively.


>it's unexpected and needs to be hyperanalyzed

There's a good reason for this. It's because the human can be interrogated into what was going through their mind whereas many ML models cannot. That means we can't ascertain if the ML accident is part of a latent issue that may rear its ugly head again (or in a slightly different manner) or just a one-off. That is the original point: a theory-of-mind is important to risk management. That means we will struggle to mitigate the risk if we don't "hyperanalyze" it.


You're missing the context. The AI didn't actually do anything unexpected, unless you expected it to try and drive through a downed branch. The AI behaved exactly as it should. The unexpected part was when the car behind the AI didn't see the branch and, therefore, didn't expect the AI car in front to stop. Unexpected doesn't mean wrong.

Cars can do unexpected things for good reasons, as the AI did in this case.


I'm taking in a larger context; I think just reading the three cited examples is an incorrect approach. For one, Waymo isn't sharing "all" their data: they've already been called out for only sharing disengagement data that their own team judged to reflect genuine errors. That's not necessarily objective, and it can also create perverse incentives to obfuscate. So we don't have a great dataset to work with, because the data-sharing requirements have not been well defined or standardized. Secondly, if you look at reports of other accidents, you can see where AV developers have heinously poor practices as they relate to safety-critical software. Delaying actions as a mitigation for nuisance braking is a really, really bad idea when you are delaying a potentially safety-critical action. I'm not saying Waymo is bad in this regard, but we know other AV developers are, and when you combine that with the lack of confidence in the data and the previous questionable decisions around transparency, it should raise some questions.


Cory Doctorow - Car Wars https://web.archive.org/web/20170301224942/http://this.deaki...

(Linking to the web.archive version because the graphics are better / more understandable when in the context of some of the text)

Chapter 6 is the most relevant here, but it's all a thought provoking story.


Which raises questions for me about how traffic behaves if 25-50% of cars are self-driving. What "feedback loops" might occur? I'd be interested to see large scale tests that demonstrate how self-driving cars deal with each other in high traffic areas.


Getting hit from behind in all of these scenarios means the people behind did not maintain proper distance between vehicles, especially in the 3-car pileup. That's why you're supposed to leave one car length for every 10 mph you're going: so you have enough space to cancel out your momentum. What if instead of a branch it had been a dog or a child?

And I've been to MANY places in the US where people drive terribly and completely unexpectedly. I've seen people put on a left turn signal then move into the RIGHT lane. I've seen someone pass on a one-lane overpass where the car in front was towing a heavy load.

What's good about these cars is they will ALL drive consistently, and the more people get used to them, the easier it will be to drive with them, because you will know how Waymo cars react.
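
For a sense of scale, a back-of-the-envelope Python sketch (assuming a 1.5 s alert-driver reaction time, hard 0.7 g braking on dry pavement, and 15 ft car lengths) shows the one-car-length-per-10-mph rule is actually optimistic if the car ahead stops dead:

    G = 32.2           # ft/s^2
    REACTION_S = 1.5   # assumed alert-driver perception-reaction time
    DECEL = 0.7 * G    # assumed hard braking on dry pavement
    CAR_LEN_FT = 15
    for mph in (30, 50, 70):
        v = mph * 5280 / 3600                # ft/s
        stop_ft = v * REACTION_S + v ** 2 / (2 * DECEL)
        rule_ft = mph / 10 * CAR_LEN_FT      # one car length per 10 mph
        print(f"{mph} mph: full stop needs ~{stop_ft:.0f} ft; rule allows {rule_ft:.0f} ft")

In fairness, if the car ahead brakes rather than hitting a wall, your gap mostly needs to cover your reaction lag, which is why time-based rules of 2+ seconds are the usual advice.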


Quite frankly, the vehicle in front of you is allowed to stop at any time for any reason (legally and practically, as front vehicle has better visibility on road conditions than follower), and it is always incumbent upon the driver to the rear to leave room.

If humans can't do that, the solution is probably more automation.


Has the nuisance braking problem been completely solved? If not, I don't know that I'd agree that more automation is necessarily the answer. More good automation, maybe, but there's a logical jump there.

The Uber fatality from years back showed that the software used "action suppression" to mitigate nuisance braking. That that would be considered acceptable in a safety-critical software application should give us pause before treating more automation as the knee-jerk solution.
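
To make the stakes concrete, here's a quick kinematics sketch (not Uber's actual logic, just assumed numbers) showing what a one-second suppression window does to impact speed:

    import math
    def impact_speed(v0, dist_m, decel=7.0, delay_s=0.0):
        """Speed (m/s) remaining at the obstacle if braking starts after delay_s."""
        dist_left = dist_m - v0 * delay_s     # road consumed before braking begins
        if dist_left <= 0:
            return v0                         # obstacle reached before braking
        v_sq = v0 * v0 - 2 * decel * dist_left
        return math.sqrt(v_sq) if v_sq > 0 else 0.0
    v0 = 17.9  # ~40 mph in m/s, obstacle 30 m ahead
    print(impact_speed(v0, 30, delay_s=0.0))  # 0.0 -> clean stop
    print(impact_speed(v0, 30, delay_s=1.0))  # ~12.3 m/s, i.e. a ~27 mph impact

With these numbers, one second of suppression turns a clean stop into a roughly 27 mph collision.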


"Nuisance braking" is, like jaywalking, a phrase that prioritizes one party's use of the road over another party's. The best policy is still to leave the vehicle in front enough room to brake for any reason. Mostly because "nuisance braking" hasn't been solved in humans either (ever been behind someone who panicked when they realized they were falling asleep behind the wheel? I have.)


I think you are mischaracterizing the problem. Nuisance braking seems to be far more prevalent in AVs than in humans, partly because image classification has more uncertainty.

Now couple that with 1) a general approach of "when in doubt, brake" and 2) the hacky workaround of suppressing braking when it occurs too often, and you've got a bad nuisance braking problem that's primed for a safety incident.

But again, this is disconnected from reality. I'd agree that if we all left the recommended 4.5 seconds of stopping distance, we'd all be safer. But that's not how our roads were built and that's not how humans drive. You're tilting at windmills here to make a point that doesn't need to be made because it doesn't apply to reality.


> But that's not how our roads were built and that's not how humans drive.

Oh, it emphatically is how our roads were built. Most miles in the States were laid down in an era when cars didn't do more than 45. As for how humans drive... I'll direct the audience's attention to the roughly 3,700 road deaths per day worldwide, over a hundred of them in the United States. Even if they brake-check more than human drivers, an automated vehicle being forced to follow its programming to maintain a healthy following distance may very well save lives, especially in an ecosystem where they are the dominant vehicle on the road as opposed to human-operated vehicles.


Fair enough. I should have said not how our roads were built in the context of modern cars and population levels. I thought that assumption was a given. So, sure, we could program every car to obey the speed limit and have a 4.5 sec following distance. But do you think those tradeoffs will be palatable to society? The last portion of my comment was aimed at the oft-ignored aspect that public policy will govern the extent of AV adoption. You have to design your product in that environment, not an abstract one that's been sanitized from all those aspects. The best product in the world is still worthless if society says they don't want you selling it.


> Yes, but...there is something else to be said here. One of the things we have evolved to do, without necessarily appreciating it, is to intuit the behavior of other humans through the theory-of-mind.

One of the first things my driving instructor told me was: never trust anyone on the road except yourself. That’s why you see people waiting for a car with its indicator on to visibly start turning before trusting that it really intends to turn.


For sure. As a driver, one of my main inputs is the other drivers overall driving behaviour that I'm aware of.

A good indicator of their next move is their previous move.


Humans are nothing if not adaptable. We will adjust our expectations.


True, but that doesn't mean it's always a better result. I've adjusted my expectations that software on my smart tv will be glitchy and have interface errors with a lot of apps, but that doesn't mean that's the only or best way to program a system. We adjust to poor quality all the time, but I prefer not to lower my expectations on a safety-critical system.


All this focus on Waymo supposedly acting "unexpectedly", but I don't see that word in the original article, and the statistics here imply the opposite: Waymo gets in fewer accidents overall!

Also, only the 2nd item is even consistent with Waymo behaving unexpectedly (we're not given enough info to know why it stopped). In the first item, the "unexpected" thing is the branch, not the behavior (stopping), and in the third Waymo's behavior didn't contribute to the accident at all -- instead it nearly avoided it despite the other car's bad driving.


It seems like these accidents could have been prevented by humans driving cars with collision avoidance. I'm a big fan of this feature on my relatively late-model Subaru, which tends to come part-and-parcel with adaptive cruise control, which is also quite a positive change in experience driving.

I recently rented an even later-model Malibu that only had collision warning auditory alert. Better than nothing, but I'm surprised cars are still made without automatic braking.


> Better than nothing, but I'm surprised cars are still made without automatic braking.

In the EU, at least, since May 2022, all new cars do have automatic emergency braking, along with intelligent speed assistance; alcohol interlock installation facilitation; driver drowsiness and attention warning; advanced driver distraction warning; emergency stop signal; reversing detection; and event data recorder (“black box”).

Other features like eCall – a built-in automated emergency call for assistance in a road accident – have been mandatory since March 2018.


> along with intelligent speed assistance

Shame. I've never had it work really reliably in any car; it's a feel-good feature but mostly shit. Even more so when it's not even hooked into the cruise control (many cars will provide a shortcut to copy the sign's speed into the cruise control or speed limiter, but far from all of them).


Yeah, the systems I've used are dire.


Sigh, EU cares about your privacy until it doesn't. These are data collection and monitoring nightmares. Big brother here we come.


While I broadly agree with you, at least eCall contacts (via voice and data) the local State 112 emergency services and only self-activates in the case of a collision.

That's far better than the situation in the US, where private services like Tesla, GM's OnStar, and Ford's "Sync with Emergency Assistance" have no limits on data collection.


The auto-braking collision avoidance system on my 2023 Mazda CX-5 actually is exactly what caused my first collision in 20 years. I was slowing down to avoid a car that was turning off, the auto-braking decided I wasn't slowing enough (or, I might have just let go of the brake) and it proceeded to slam on the brakes bringing me to a full stop on a busy road, leading to me being rear-ended. At no time was any of that necessary. I've also had the auto-braking engage (on multiple cars) because of random debris in the road, or seemingly no reason at all.

Granted, I'm sure this will improve over time. But for the past 5ish years, all my experiences with auto-braking have been dangerously negative.


I've never driven a car with auto braking. I've been yapped at many a time for lane "departures" that were not lanes (concrete grooves on the highway being the primary culprit) and sometimes not even real (that "lane" is the shadow of a nearby power wire.) I've also seen the adaptive cruise control appear to fail once when two cars simultaneously changed into my lane, one from each side. It still had a moment it could have acted so I can't conclusively say it failed. It also fails to recognize cars with too great a speed difference.


The classical caveat of any fully automated system - it works well when everyone has it.


The most likely explanation is that you were tailgating.


So what if they were?

A system that takes driving too close and turns it into something more dangerous is not a good thing.


There is actually no indication that Waymo cars are making tailgating even more dangerous.


Huh? Aren't we talking about the anecdote above, where they were following a turning car, then automatic emergency braking kicked in and they got rear-ended? No waymo involved.


I was thinking about the waymo incidents.


Or just human drivers leaving enough room to brake in.


Human drivers being poor at driving is why self-driving cars exist. See: driving under the influence, distracted driving, aggressive drivers.


This tech is wonderful! Fun fact about the inclusion of this technology in automobiles sold in the US:

The Obama administration (2015) was able to successfully negotiate with and convince most major car manufacturers to voluntarily agree to start making new cars with automatic emergency braking. Their agreement stipulated that all new cars must have it by 2022 [1]. But this negotiated agreement is why we started to see some new car models include it post 2015.

The tl;dr is the Obama administration basically said, "Look, if y'all don't agree to these proposed minimal standards, we'll get Congress to pass a law that is more strict." So the companies decided to take the agreement then, to de-risk themselves from having to comply with potentially more stringent requirements in the future.

[1]. https://www.nhtsa.gov/press-releases/us-dot-and-iihs-announc...


AEB and friends also demonstrably reduced costs to insurance companies, who pushed some savings onto consumers to shape demand. My $28k brand-new car has better insurance rates than my 2004 car because all the additional safety automation prevents enough incidents that would otherwise total the car.


Getting rear-ended is almost always the other driver's fault, but 7 years ago I was involved in a serious accident (minor injuries, both cars totaled) when the driver in the fast lane decided to pull over and pick up a hitchhiker. Crossed over two lanes, hard on the brakes, and I had no chance to even get off the gas.

The responsibility was 100% his because of "an unsafe lane change".


Yup, this is the primary case where the rear vehicle isn't at fault. You change lanes into a lane that's moving faster and get hit, you were wrong even if they hit you from behind.


If it can be proven. 25 years ago there was a scam where someone would suddenly change lanes to get rear-ended like that, then claim "back pain" and sue for a lot of $$$. I don't know how common it was, but when there are no witnesses the courts tend to side with the person who was rear-ended.


And this is why you should have a dashcam in your car.


Often there were two cars in that scam. One to first follow really close so you are both distracted and reluctant to stop hard.


That last one is impressive, most humans probably wouldn’t pull it off. And just imagine when all the cars on the road are self-driving, probably none of these accidents would’ve happened.


Not if we have teslas!!!


Why not? Has Tesla had similar incidents?


I've been in a self-driving Tesla vehicle. After hours on the interstate, the person ahead of me slammed on their brakes suddenly. I was caught off guard, not expecting it, and might not have reacted in time. The Tesla braked. So I have anecdotal evidence that the person you're asking isn't well informed on how Teslas respond to this type of situation.

Of course, anecdotal evidence isn't a very high standard. Thankfully, statistics on this sort of thing are tracked. Statistically, the Tesla self-driving features reduce accidents per mile. They have for years now, and the reduction has grown as the technology has matured. So the statistical evidence also indicates that the person you are asking is uninformed.

What is probably happening is that it makes for good clickbait to drag Elon and Tesla into discussions. Moreover, successful content online often provokes emotion. The resulting preponderance of negativity, especially about each driving accident Teslas were involved in or caused, probably tricked them into misunderstanding the reality of the Tesla safety record.


> Statistically, the Tesla self-driving features reduce accidents per mile

While that is the claim, I've never seen an independent analysis of the data. There are reasons to believe that Tesla drivers are not average. I don't know which claims are true, which is why I want an independent analysis of the data, so that factors I didn't think of can be controlled for.


My Subaru from 2018 can do this. It's not rocket science, and most cars nowadays have a collision detection system. This is not a self-driving capability by any means.


90% of new cars can do this, it's called AEB. It's not a Tesla self driving feature.


Teslas keep ramming into parked vehicles on the side of the road, including emergency vehicles. So when a waymo car stops because it doesn't know how to safely proceed, the Teslas might just plow into it.


Nope. We'll keep having accidents if everything is self driving as long as we keep Tesla in the mix.


Hit from behind is a classic blunder. Almost always, it means the other driver was following too closely or not paying attention.

Since all of these accidents happened in the US: is the driver that hits from behind normally responsible for the accident? (For a moment, let's exclude predatory behavior where the front driver is doing something toxic, like intentionally brake-checking on a high-speed road to induce a hit-from-behind accident.)


I feel like that doesn't paint the whole picture. I'm guessing incidents like [1] don't make it into those stats:

> The safety driver unwittingly turned off the car’s self-driving software by touching the gas pedal. He failed to assume control of the steering wheel, and the Pacifica crashed into the highway median.

Why are these not counted? Are they really looking at their car crashes, or just autonomous driving software being in control during those car crashes?

Maybe they want to argue the software is safe, but that doesn't change the fact that I'd still be scared of getting into that car.

[1] https://qz.com/1410928/waymos-self-driving-car-crashed-becau...


This seems like an argument that you should be more worried about getting in a Waymo if there is a safety driver than if there isn't. If so, that would definitely be an interesting conclusion.


Well both would be worrying, maybe one less than the other. Really I'd rather have a safe car where the failure modes are not stupid, so I can stop worrying altogether.


Sure, but the failure modes of the traditional human-controlled car are incredibly stupid, we've just gotten used to it.


> the failure modes of the traditional human-controlled car are incredibly stupid, we've just gotten used to it.

This is missing the point. Just because we accommodate certain types of errors by humans, that doesn't mean we should or would tolerate them if the same errors were made by machines. (Or vice versa, for that matter.) The standards we hold machines and humans to can be very different, and that should be expected.


The driver fell asleep and then pressed the gas pedal… and didn’t see or hear tons of warnings and alarms from the car. Very hard to blame that on the car.


> Maybe they want to argue the software is safe, but that doesn't change the fact that I'd still be scared of getting into that car.

As a rider, you can't touch the Waymo steering wheel or pedals, which eliminates the cause of the accident you referenced.


There is no standardized way of collecting safety data. Each company is able to define their own standards on what is an AV-caused accident, the training conditions, etc.


This is the bigger issue.

I do not trust any data in 2023, and 2024 will be worse.


We can create frameworks to mitigate this problem, though. A good first step is better transparency regarding data reporting of AVs.


I don't think we can ever get truthful reporting of auto accidents, because every wreck I've ever been in involved the other side not being truthful at some point.


I meant specifically for the AV space. That would include turning over all the data regarding an incident, similar to what Uber had to do after the fatal accident in Tempe, as well as training data.


> Maybe they want to argue the software is safe, but that doesn't change the fact that I'd still be scared of getting into that car.

Why this car, and not all cars? If I fail to assume control of my steering wheel, I will also crash.


> Why this car, and not all cars? If I fail to assume control of my steering wheel, I will also crash.

The short answer is because this isn't a deterministic "if". The probability matters too.

The thing is, in a normal car you're forced to be alert all the time. With autonomous driving, 99%+ of the time you have nothing to do. Humans simply cannot pay as much attention all the time when they're not actively forced to. It's much easier to lose attention (drowsiness, chatting with people, etc.) than if you're physically already driving.

And moreover, regaining control of a car requires some context-switching time that isn't there when you're already in control.

If my car is going to disengage at a random point (whether due to my fault or otherwise), I'd rather just be in control the whole time.


> If my car is going to disengage at a random point (whether due to my fault or otherwise)

But it's not at a random point. If you push the disengage button, it disengages and tells you it's disengaged. I don't understand why you keep mischaracterising what happened.


> But it's not at a random point. If you push the disengage button, it disengages and tells you it's disengaged. I don't understand why you keep mischaracterising what happened.

I'm not mischaracterizing anything. To the driver who dozes off and presses this completely unintentionally, this did happen at a random point. He certainly didn't intend or expect it to happen.

You have to realize, dozing off in a situation that's boring 99%+ of the time is a human thing. If you design your car such that a driver is prone to pressing the disengage button unintentionally when he dozes off, you get to share the blame when that happens. It's not something reckless like DUI where you get to put all the blame on the driver for it.


> To the driver who dozes off and presses this completely unintentionally, this did happen at a random point

That's not the only perspective, though. It's possible to doze off when driving a normal car, and that doesn't suddenly become the car's fault, even though, from the driver's perspective, he/she didn't mean for it to happen.

> If you design your car such that a driver is prone to pressing the disengage button unintentionally when he dozes off

But again - how do you know it's been designed that way? This still seems like editorialising without any additional info.


I believe these stats only cover the miles where there was no safety driver.


i.e., every accident where a split second before the collision the control system yields control to the safety driver is not accounted for in these stats?


Waymo is not Tesla, they're actually building self-driving cars.


Correct, but they also aren't including self driving miles where there is a safety driver in the driving seat, so it's fair.


This is very much a by-product of the current regulations that, AFAIK, mandate that a human driver should be able to take control at all times.

That might make sense but this is obviously a little tricky to implement safely.


Apparently Waymo is aiming for SAE level 4, so ultimately they should get past that issue I guess (since the lack of the “human must be ready to take over at any moment” requirement is pretty much the point of… what is it, SAE 3 and over?) But yeah, for testing, the challenge remains…

It would be interesting to know how often humans have to take over from Waymo, and also how often humans who’ve taken over from Waymo get into accidents.

I mean, it is not really realistic, but hypothetically one could imagine a driving strategy that has a low expected number of crashes, but puts the car in an easily detectable, strategically bad position, and then just dumps that position on the safety driver. Of course I’m sure Waymo wouldn’t do anything like that because it is, like, the most obvious bad-faith strategy which would be caught on a rigorous review and would be ruinous for the company.


Having ridden in many a taxi, Uber, Lyft, etc. over the years, all driven by humans... I'd be lying to say I'd be more afraid to get in a Waymo car.


I took a London black cab recently for a trip to the hospital, and the driver at one point overtook about 200 metres of stationary traffic, going around several "keep left" bollards, to go through the red light that the rest of the traffic was queuing for. The driver was in his 70s, so I'm not sure if he's just been doing it so long that he doesn't care about the rules any more, or if he was struggling with some kind of age-related brain degeneration.

In his defence, I did get to the hospital well in advance of my appointment. It reminded me of the old Sega game Crazy Taxi.


IIRC, it is almost impossible to give a taxi cab driver a ticket in London.


Nah - easy to give them a ticket, and they'll be severely punished by one (insurance).

However, most ticket issuing in London is by automated cameras, and Taxi drivers know where all the cameras are.


I recently took a Waymo test ride (round trip, two separate segments) in Los Angeles.

It was extremely uneventful - in a good way - while navigating urban traffic, road construction, unprotected left turns, etc., and felt (subjectivity alert!) a lot safer than many of the rideshare drivers I've ridden with over the past 8 years in LA.

I would definitely do it again and would feel safe putting a family member in one.


I don't even get why these count as negatives against Waymo. There's nothing it can do to stop idiot humans driving too closely or just driving into it.


I agree and I wouldn't hold them against Waymo, but I think when you are developing a self driving car, you should stop analysing it like a crash between two humans with fault and blame, and start looking at it like a system.

If Waymos were having a seriously increased rate of non-fault crashes, that would still be a safety issue, even if every crash was ultimately a human's fault.


Agreed. I think their current performance is super impressive. I think it's possible to get even better and beat humans by lowering the number of not-at-fault incidents too (although there are only so many variables in Waymo's control).

As another commenter mentioned, the fact that the Waymo detected that a vehicle was approaching it from behind at high speed and tried to accelerate to get out of the way is super impressive, and I'm not sure even most good human drivers would have been able to do that.


Yeah. We had an autonomous bus here--involved in an accident the very first day that wouldn't have happened with a human driver. The bus just sat there and let a truck back into it.

I also wonder how it fares in a Kobayashi Maru scenario. I chose to cream a construction cone because the guy in the left turn lane went straight. (Admittedly, I think he didn't realize he was in the left turn lane.) I could see the cone wasn't actually protecting anything; could a car do the same?


Bad drivers are reality. If Waymo drives in a way that leads to more crashes, even if they're not its fault, it's clear to me that it still deserves some responsibility for not following expected road etiquette.


If Waymo drives the way safety engineers are trying to get everyone else to drive, then we should encourage it. There are some things where what everyone else does is wrong. (See the zipper merge.)


The stats clearly show that Waymo gets into fewer crashes, though. In the remaining incidents, the other driver was considered at fault.


Bad drivers are a reality because we decide to tolerate them. We could just not tolerate them, like we don't tolerate bad pilots, bad train drivers, bad people etc.


There is widespread support for allowing people with DUIs to drive in this country. Even people who negligently kill someone with their car are still allowed to drive. There is no political support for banning people from driving.


I think that self driving cars will change “road etiquette”, as you call it, for the better.


A hard stop followed by a rear-end is a huge risk; this is not surprising, and it can happen even with reasonable following distances. It takes a long time to stop, and humans are bad at perceiving it. Just look at how "STOP" is painted on a road compared to how you perceive it when driving: perceived stopping distance is way shorter than actual. This is all to say, for a full, hard stop, the needed stopping distance is huge.

This reminds me of highways where you can see a backup a quarter mile ahead. I tap the brakes about 5 times before letting off the gas to cut speed by 20% before actually even braking, which again is preceded by taps on the brake.

So I don't think the majority of these cases were even tailgaters. I would love to see the data on what the following distances were, the rate of deceleration, and how much space there was for the lead car to avoid the obstacle. Are these Waymo cars just out there brake-checking people? Are they waiting until the last moment before going into emergency-brake mode?

At the end of the day, drastically changing speed is up there among dangerous behaviors; human drivers go to great lengths to avoid it and to signal the intent. Turning right or left off of a 50 mph road? Better believe I'm signaling that turn 2,000 feet away and decelerating in stages, so that with luck even the vehicle 4 cars back in the convoy has lots of time.


These are good examples of the self-reproach Waymo is exhibiting. Contrast with Cruise, which appears to have attempted to suppress information about dragging a pedestrian under one of its cars.


I think it is important to track these. It may be true that these are entirely the other driver's fault, but it is also possible that these accidents were encouraged by unexpected behaviour from the Waymo car. If you quickly exclude things that don't seem like your fault, then you will likely exclude too much. So I'd rather include everything (but maybe flag it as likely not-at-fault) than risk excluding important data.


Before jumping to conclusions, are we sure these Waymos hit from behind didn't awkwardly and randomly stop in the middle of a busy intersection (where no sane human driver would)?

I know YouTube videos aren't always representative of reality, but there are some videos of these cars randomly driving extremely slowly in very busy intersections which might be a contributing factor to getting rear-ended, even if it's not Waymo's "fault" from an insurance perspective.


Before jumping to conclusions...

Nah, I'm jumping straight to this conclusion: if you hit something in front of you that didn't leap out in front of you at the last second, you fucked up. The object that you hit was erratically slowing and speeding up? You should have left more room to allow for the unpredictability. It was raining, can't see, the roads are slick? Leave more room in front of you.

Yes, I way too often don't do that, either. But if there's something in the road ahead of me, and I hit it? Man, there are few scenarios where I can claim that there was nothing I could do. And in the case of a full-sized vehicle in front of me, I don't care how erratically it's driving, don't run into the back of it.


I remember taking Driver's Ed class many years ago. When we got to the section about fault in relation to rear-end collisions, the instructor said, "if you rear-end someone, it is your fault, full stop." The class then spent five minutes asking hypotheticals, to which the answer was always, "nope, still your fault."


He's wrong. What if the car ahead just got there and is moving far below your velocity? And there's the case where the car ahead stopped in a fashion a car can't, e.g., it ran into something massive. Under standard driving conditions, if the car in front of you is involved in a head-on at speed, you're going to hit it. A 2-second following distance assumes the car ahead is subject to the same physics you are.


The one exception is if a car pulls into a lane in front of you when you are traveling faster.

His actual statement was, "if you rear-end someone that you are following, it is your fault..."

If someone abruptly pulls in front of you, you weren't following them.


If you cannot see a car ahead of you in time to stop then you are going too fast for conditions.

The only time it can be not your fault is if the slow moving car switches lanes in front of you before there is time to stop.


That is correct. However, it's also the case that in the real world if you drive legally but do things like brake erratically, you will cause accidents.


Not if the vehicles behind you are self driving cars.


I have a couple more scenarios for you. Overdriving your headlights is a great way to hit something you don't see in time. The safe speed in average conditions on low beams is around 25 to 30 mph, and on high beams, it is around 45 to 50 mph. If there's any glare on the roadway from security lights and oncoming drivers, your safe speed drops 10 to 20 miles an hour.

Related to this is glare from the sun or artificial sources. I lived in a small city with antique-style globe lamps on Main Street. The veiling glare made pedestrians invisible, and even if you knew about the glare and watched for pedestrians, you would still be surprised when they became visible halfway across the street in front of you.
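
You can back those numbers out of basic stopping-distance math. A rough Python sketch, assuming AASHTO-style design values (2.5 s perception-reaction time, 11.2 ft/s² deceleration) and rough guesses at how much road the beams light up:

    import math
    REACTION_S = 2.5   # AASHTO-style design perception-reaction time, s
    DECEL = 11.2       # conservative design deceleration, ft/s^2
    def max_safe_mph(sight_ft):
        # Largest v with v*t + v^2/(2a) <= sight_ft (positive quadratic root).
        a, b, c = 1 / (2 * DECEL), REACTION_S, -sight_ft
        v = (-b + math.sqrt(b * b - 4 * a * c)) / (2 * a)   # ft/s
        return v * 3600 / 5280
    for beams, sight in (("low", 160), ("high", 350)):
        print(f"{beams} beams, ~{sight} ft lit: ~{max_safe_mph(sight):.0f} mph")

The outputs land right around the 25-30 and 45-50 mph figures above.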


If your system is objectively right, but also objectively causing accidents with humans... well, you won't fix the humans.


> If your system is objectively right, but also objectively causing accidents with humans.

It's not causing accidents in these cases. The humans are.


The causality version of "cause", not the blame version. Accidents that would not have happened without the system.


Right, but also imagine how traffic would be if everyone drove with the prerequisite distance to do that. Can you imagine I-5 traffic if everyone had 10 car lengths between them?

So while your statement isn't wrong, it's also not always pragmatic in the real world.


Of course it's pragmatic. Free-flowing traffic at 50mph beats people zooming up to 70mph, then braking, then zooming again. Free-flowing traffic at 50mph annihilates traffic caused by accidents.


Traffic is rarely “free flowing” at any speed on these kinds of roads. Often I see “moving roadblocks”: clumps of cars going around or just under the speed limit jockeying around each other, impeding other traffic from moving around them. So-called “defensive drivers” are often unpredictably overly cautious: I’d wager they are at least an indirect cause of accidents quite often, but are severely under-represented in the statistics (if/when they’re represented at all).


> Often I see “moving roadblocks”: clumps of cars going around or just under the speed limit jockeying around each other, impeding other traffic from moving around them.

Indeed - this is the 70mph is slower than 50mph thing I mention.


Not sure if we’re talking about the same thing. These slower drivers aren’t any safer: they’re weaving around each other and impeding faster traffic from passing. They’re arguably more dangerous because of that. And the flow of traffic is constrained to the speed they choose, which is on average slower than it otherwise would be.


While they are correlated, flow does not equate to capacity. You can have better flow while still having reduced capacity. That’s not pragmatic.


Most traffic is not caused by sheer volume - this is well studied. It is often caused by inability to maneuver, merge, etc. As a result, your i-5 traffic would likely be much much better if everyone left 10 car lengths.

You do not have to raise the average speed of travel very much to make up the theoretical loss due to increased spacing
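
The usual back-of-envelope model here: steady-state lane throughput is speed divided by the road each car occupies (its length plus speed times the time gap). A rough Python sketch with assumed numbers:

    CAR_LEN_M = 4.5  # assumed average vehicle length plus a small margin
    def lane_throughput(speed_kmh, time_gap_s):
        v = speed_kmh / 3.6                   # m/s
        spacing = CAR_LEN_M + v * time_gap_s  # road metres consumed per vehicle
        return 3600 * v / spacing             # vehicles per hour per lane
    print(f"{lane_throughput(5, 0.5):4.0f} veh/h  (5 km/h crawl, 0.5 s gaps)")
    print(f"{lane_throughput(80, 2.0):4.0f} veh/h  (80 km/h, full 2 s gaps)")
    print(f"{lane_throughput(110, 2.0):4.0f} veh/h  (110 km/h, full 2 s gaps)")

Note that at a constant time gap, throughput barely rises with speed (it approaches 3600/gap vehicles per hour), while a crawl collapses it. The win from generous spacing is keeping flow from breaking down in the first place, not packing cars tighter.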


Can you link to those studies about traffic? I’m aware of some of the studies regarding merging (eg the benefits of zipper merging) but I don’t think that covers the bulk of congestion.


This is why cars do not scale: by the time traffic slows down there are 5-6 times more cars in the lane than it can safely handle. So by the time people are asking for "one more lane" they really mean 6 times as many lanes, a regular 4 lane highway needs 20 more lanes!

Moral: support public transit.


I don't know what I-5 traffic is like, and kind of weird that you would refer to a local road on a global forum, but I'll assume it's like the M25.

Roads like that are currently operating at bursting point. There are incidents and accidents every single day and constant police presence is required to unblock them. If you alleviate congestion, more people use the road. They just go back to bursting point. In other words, it's utterly insane.

Can you imagine if there were accidents on railways or in the air every day? Imagine the scandal if train operators were found to be unsafely squeezing more trains onto a line than it could handle. Roads are stressful, inefficient and shit. Enforcing a safe stopping distance and pricing journeys accordingly, like trains, is where we want to be.


I-5 is a notoriously congested interstate in California. I don't live in California anymore, but used it simply because there are a disproportionate number of Californians on HN.

I don't disagree about "pricing journeys accordingly," but there are many reasons why this is difficult in practice in the US. Going through those points is a bit of a digression from my main point. Namely, that there are pragmatic tradeoffs that have to be considered. I'm consistently taken aback by the amount of "simple" solutions people advocate on HN and sometimes I wonder if it's due to software developers constantly working with the abstract rather than the concrete.


"Can you imagine how bad traffic would be if everyone drove safely?" is a hell of a take.


So is "can you imagine how much infrastructure would cost to ensure everyone drove completely safely"

Like most real-world engineering, there is a cost-benefit balance. Could we design an interstate highway system that allows everyone a 10 car buffer? Sure. Would we like how much it costs, the effects on the environment, etc.? Probably not.

As an aside, your comment seems to go against HN guidelines by taking the least charitable interpretation of the comment.


Why?

You might have forgotten the ultimate rule of road safety: Everything is a tradeoff. Safer is only sometimes better. Otherwise the speed limit would be 10mph, because it's quite a safe speed.


I’m guessing an accident caused due to not leaving enough room to safely stop is going to cause a bit more traffic than the alternative


So you're saying we should all drive with a 10 car buffer, then?

If not, then you already recognize the probability is less than 100%, so that has to be baked into your statement.


So you're saying we should all drive with a 10 car buffer, then?

The only comments saying that are...yours. If your argument is so lacking that you need to argue in bad faith, perhaps it is best to not bother at all.

Or perhaps you are not aware of the proper following distance (and therefore, part of the problem about which you complain). Two car lengths (EDIT: seconds, not car lengths; oops) is the general advice.


I'm just trying to understand exactly what they are advocating, because so many people seem to be making a dichotomous safety choice. It's not a simple model and my point is there are tradeoffs.

>Two car lengths is the general advice.

No. It's speed, roadway, and car dependent. Two car lengths isn't even sufficient at 25mph let alone at 70mph.[1] Which all goes to show how poorly people tend to think about these things and quickly resort to overly simplified mental models.

[1] https://one.nhtsa.gov/nhtsa/Safety1nNum3ers/august2015/S1N_A...


> No. It's speed, roadway, and car dependent.

More to the point, my post has a mistake: two-second following distance, not car lengths. Brain fart on my part; apologies for causing you to have to find a URL. But as general rules go, it increases the distance as speed goes up. No, it doesn’t account for everything, but it's good enough for most circumstances.
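
For concreteness, here's the two-second rule worked out; it's nothing more than a unit conversion:

    # distance covered in two seconds at various speeds
    for mph in (25, 45, 70):
        fps = mph * 5280 / 3600     # feet per second
        print(mph, round(2 * fps))  # 25 -> 73 ft, 45 -> 132 ft, 70 -> 205 ft

Even at 25 mph the rule calls for ~73 ft, which is why two car lengths (~30 ft) falls short at any speed.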


> Before jumping to conclusions, are we sure these Waymos hit from behind didn't awkwardly and randomly stop in the middle of a busy intersection (where no sane human driver would)?

I'm nearly certain that I'm alive today because I drove as defensively as something like a Waymo. One day as I was approaching an intersection where I had a green light, I saw a car approaching the same intersection from the cross street at a speed that they couldn't have possibly stopped for their red light.

I instinctively and suddenly slowed down, and that car, as I predicted, ran the red light at high speed and turned just a few yards in front of me.

If I had been more tired, hasty, or it had been darker out, I might not have seen it or reacted in time. A fully autonomous and defensive driving system wouldn't get tired or hasty, and lidar can see fine at night.

And yes, I might have gotten rear ended, but that's a far better outcome than getting t-boned by a car going 65mph.


>If I had been more tired, hasty, or it had been darker out, I might not have seen it or reacted in time.

If you had just been an average driver. Most people are terrible at defensive driving.


> Most people are terrible at defensive driving.

Mostly because people misunderstand what defensive driving means.

It doesn't mean "go slow". It means always maintain a safety margin between your vehicle and others, adjusted for speed, road/weather conditions, and relative direction of travel.

Those are the kinds of objectives a self driving system can very reliably achieve, unlike a human.
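
As a sketch of what such an objective looks like (a toy kinematic model of my own, not anything Waymo has published): the required gap is reaction distance plus braking distance plus a margin.

    # toy following-gap model: reaction distance + braking distance + margin
    def min_gap_m(v_mps, reaction_s=0.5, decel_mps2=4.0, margin_m=2.0):
        return v_mps * reaction_s + v_mps ** 2 / (2 * decel_mps2) + margin_m

    print(round(min_gap_m(13.4)))   # ~30 mph: ~31 m
    print(round(min_gap_m(31.3)))   # ~70 mph: ~140 m
    # lowering decel_mps2 for a wet road automatically widens the required gap

A computer can re-evaluate something like this many times per second without getting tired, bored, or distracted.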


Doesn't matter. If you drive into a slow-moving or stationary object then it's your fault in every sense. If you are driving too closely and are not ready to react to the vehicle in front doing an emergency stop for any reason then it's your fault in every sense.

Most human drivers I observe are taking huge chances every single day. I see them drive at speed around corners they can't fully see around, follow the cars in front far too closely, use their phones, etc. They get lucky, but it's only a matter of time before they have accidents. The three incidents recorded here are simply due to three humans' luck running out.


It “is” the fault of the driver who hits from behind. It’s their responsibility to maintain the safe distance required for braking in such situations.


I've watched Waymo cars, multiple times, as the lead vehicle in the left lane at a traffic-lighted intersection, stop completely while the light is green until other cars pass in frustration, and then cross 4 lanes to make the next RIGHT turn.

Traffic disruption with Waymos is UNDOCUMENTED and a real thing.


> It’s worth noting that all 3 of these incidents involve a Waymo getting hit from behind, which is the other driver’s fault even if the Waymo acted “unexpectedly”. This is very very good news for them.

How so? If human drivers did unexpected stuff like brake for no reason, we'd have a lot more accidents.

I think this just highlights how much better humans are at cooperating on the road compared to automated systems.


That makes the Waymo sound like a moving bollard. Fault or not, I will smash into the back of you if you brake-test me.


Typically, you would be considered responsible for damages for not keeping a safe following distance.


Disclosure: I work at Waymo, but not on the Safety Research team.

The Ars article linked to the Waymo blog post [1], but the underlying paper is at [2] via waymo.com/safety . A lot of folks are assuming this wasn't corrected for location or surface streets, but all of the articles do attempt to mention that. (it's easier to miss in the Ars coverage, but it's there). The paper is naturally more thorough on this, but there's a simple diagram in the blog post, too.

[1] https://waymo.com/blog/2023/12/waymo-significantly-outperfor...

[2] https://assets.ctfassets.net/e6t5diu0txbw/54ngcIlGK4EZnUapYv...


This is the first time I’ve seen an AV PR push that is remotely apples-to-apples. Props to y’all for that.


Do you know why Waymo didn’t include the dog they killed in this report? https://techcrunch.com/2023/06/06/a-waymo-self-driving-car-k...


I believe this study was about rider-only miles. The dog incident had a safety driver monitoring.


Sure, but the news outlet headline is ‘only three minor incidents’ when in fact they killed a dog. That's omitting relevant information from a report about safety.


That's an issue with the headline, not Waymo. The Waymo blog post is clear they are comparing their robo-only driver with human-only drivers.


The Waymo blog post goes so far as to talk about human under-reporting of crashes, so it's only fair that Waymo call out their under-reporting of fatalities. It's suspect to claim the AI is better than the human when a notable fatality happened with a human behind the wheel of the exact same vehicle. Waymos have also hit people / bikers but magically the safety driver was always at fault.


There's good reasons to count robo-only miles. The other measure is confounded in both directions:

- Safety drivers prevent incidents

- Safety drivers cause incidents

Since the human+robo isn't the intended deployment mode, the only reason to count its numbers would be if there wasn't statistical power any other way. And that's no longer true.


A robo-only study is fine, but it’s misleading to hide that the human-supervised treatment has injuries and fatalities.


I’m also unsure how a safety driver affects the robo driving unless it’s an entirely separate program or the safety driver is pressing the “damn the dogs full speed ahead” button.

In my perhaps naive view, a safety driver failing to act and a robo driver alone should behave identically.


E.g., one of the most significant Waymo incidents in autonomous mode was a human driver falling asleep, bumping the gas pedal, and disengaging autonomy.

Comparing robo-only miles has the advantage of not considering incidents like this, and also not considering incidents where the safety driver prevented harm. It also prevents accusations of "you're claiming human Waymo drivers caused these accidents that you've omitted and are lying" because then they're counting 100% of miles driven without humans in the car.


Waymo calls out the under-reporting of human accidents and yet that’s exactly what they’re doing by ignoring their safety driver incidents that include a fatality.


What safety driver accident included a fatality? You're not obscuring talking about the dog with "a fatality", are you? Egads.

From looking at past Waymo data, you get similar numbers (a few more incidents, but a lot more miles driven) with safety drivers included. They're just noisier and more subject to dispute.


Exactly, the numbers are subject to dispute, so this is a press release and the whole dataset is relevant including the dog that was needlessly killed. Though I’m sure the e/acc crowd don’t count it as a “real” death even though the dog really died.


I've already pointed out why I believe the "robo only" numbers are more useful, and that the broader dataset says mainly the same thing.

I'm sorry, but a dog running out in front of a car is not a "traffic fatality" in any conventional sense of the word, and it's certainly not an at-fault accident for the driver.

At this point, I believe you are trolling.


I wasn’t contesting your opinion but rather the veracity of Waymo’s “study” which is really a press release. I and others have seen Waymos in collisions (e.g. in MTV) where Waymo chase vehicles showed up and acted as private security in order to shield the scene / liability. And then there’s Levandowski’s hack to the car and resulting crash that might have ended self-driving taxis today had there been reporting requirements https://www.salon.com/2018/10/16/googles-self-driving-cars-i...

At this point I believe you are trolling and disinterested in discussion that might be critical of Waymo’s brand and safety.


Keep in mind that you're complaining about excluding some incidents with humans drivers, but it's also excluding a whole lot of miles. That is, the denominator gets smaller.

We've previously had accident rates vs. total miles disclosed. Now, we have a new, better number (for reasons stated above). Of course, it's always good to look at a variety of measures.

> discussion that might be critical of Waymo’s brand and safety.

When someone uses the word "fatality" to obscure that they're talking about a not-at-fault accident where a dog runs in front of a vehicle and the dog dies, it convinces me that the person I'm talking to is not an honest counterparty in discussion.


I don't understand your point. Would it be better if it happened to a kid?


Points simplified:

- Most people reading "fatality" assume that it means a person died. This was not true of the incident. Repeatedly using the word "fatality" here is intentionally misleading. I have not heard the word "fatality" used in accident reporting to describe the death of an animal before.

- This was not an at-fault accident. It was not possible to avoid hitting the dog. Talking a lot about this incident doesn't provide very much information about Waymo's safety. (Of course, humans get in accidents that would be impossible for them to avoid, too; all else being equal, such accidents should be included on both sides of the analysis.)

- The analysis didn't include this incident, because Waymo made a very rational choice to calculate using no-safety-driver miles and incidents. This isn't malice; it doesn't change the numbers much (because incidents and miles both go up when you include safety driver miles) and the numbers are higher quality and less confounded without including safety driver miles.


Waymo claims the dog was fully occluded by a parked car before it ran into the street, giving the vehicle and a safety driver at the wheel no time to react.

Not sure why you're focusing on this event. They're talking about miles with no safety driver.


About ten years ago I made a prediction:

Self-driving technology will overtake average human ability with regard to safety within a decade, but the biggest hurdle will be public acceptance. The AI will not make the same kinds of mistakes humans make. So while the aggregate number of accidents will be (likely much) lower without a human at the wheel, the AI will make deadly mistakes that no human would make, and this will terrify the public. A crash that makes no sense to our minds will always be scarier than an intuitively predictable one. The only way self-driving tech will ever succeed is if the AI can be limited to the same kinds of mistakes humans make, just fewer of them, and that's a VERY hard technical nut to crack that I do not believe will be solved anytime soon.

That said, I still believe that the ubiquity of cars is inherently a problem, human operator or no. If we put more effort into self-driving busses and autonomous trains—which have regular schedules, routes, and predictable speeds—I think we would see much greater dividends on our investment and far fewer "unintuitive" errors. Our collective fixation on cars blinds us as a society to this option unfortunately. More cars just clog up the road even more, demand more parking, and otherwise monopolize land use that could be more productive otherwise. More idling/circling driverless cars adds to the blight rather than relieving it. We need to transport more people between points in higher density, not lower, and cars are the lowest density transportation options available.


We could legislate around this very irrational/human fear. Personally I’d feel much safer if the roads were primarily filled with drivers that don’t get emotional and are objectively safer. Even if a few accidents confuse me and seem avoidable. I think what your analysis is missing is that most human accidents are also 100% avoidable. Why aren’t we looking at the incredibly dumb things humans do and asking the same hard questions? Why doesn’t it spook us when a human doesn't see a red light and t-bones cross traffic? Or when a multi-car pileup happens because a large pickup truck with a big car complex is tailgating someone at 85 mph and a sudden stop is required?


The problem is it's not totally irrational to be freaked out by this. Think about the distribution of the error rate.

A lot of fatal accidents can be attributed to inexperience, distracted driving, running lights, drugs, alcohol, asleep at the wheel, medical events or extremely aggressive driving.

The distribution of unsafe driving is not even. Personally I had 2 accidents in my teens and 0 for the following 20 years.

Your typical taxi driver will be fired/fined/reported over time if they drive this way. Further, if you are in a taxi where you notice a driver making you uncomfortable, you can end the ride early (I have).

However, imagine a robodriver that is 4x safer, but where every single vehicle on the road has, at any instant, the same probability of fatally making the same "no human would make this mistake" driving error.

An analogy would be that at any moment, your calm, courteous, focussed spouse behind the wheel suddenly transforms into a 17 year old teen in a Mustang.


But I’m 4x safer. I’ll take those odds.


There's a difference between "I'm 4x safer", and "I'm expected to be 4x safer than the average person is now".

The GP is arguing that there is a way to actively become safer than the average person by avoiding "dangerous" situations. This may or may not be 4x safer than average, though. There's an element of control that is lost when it comes to AI driving.

The relevant point to you is, if you already have good driving skills, always drive responsibly, and generally avoid driving in areas/times that have more drunken drivers around, then you might not actually benefit 4x.


Yes, this is the point I am making. If you do not engage in risky behavior, and are not distracted, you are likely already 2-4x safer than an average driver. So AI driving may not make your driving better.

Next problem is everyone else's driving! It's like an arms race / tragedy of the commons issue as well. Your safe driving doesn't matter if other people are still running lights / driving drunk / etc. Absent near-perfect AI driving, followed by government mandates to take everyone else off the street, it doesn't matter. And that is never going to happen.


It makes no difference to the pedestrian, cyclist or any other party in a crash. One does not choose whether one crashes with/gets run over by a novice or an experienced driver.


Yes but this falls into the "never going to happen" clause of my response.

Unless the government is taking everyone else except the imaginary future perfect robots off the road, its all a moot point.


It doesn't require government action. Insurance will see computers causing fewer accidents, requiring lower payouts and adjusting rates accordingly. Penalties for drunk drivers might become harsher because they now have an alternative. Ditto for old people. Commuters will like having their hands and eyes free. Robo taxis will also make a dent in the market. Over time the number of human-driven vehicles will decrease on its own and it'll make less sense to get a driving permit in the first place.


10-15% of drivers are uninsured.


My understanding is that that is a punishable offense in most states, so that's a matter of consistently enforcing existing laws rather than introducing any new ones.

Anyway, the mechanisms I have listed can still drive adoption of self-driving cars and reduce the number of human-driven cars on the road which also reduces the number of fatalities if the self-driving ones outperform humans. And if we look at QALYs instead of fatalities there may be additional benefits.


Here's a sensible way to make that transition – driving is treated as a privilege. If you are a dipshit on the road more than a certain number of times (driving drunk, running red lights, driving recklessly) then that privilege gets taken away, and you have to use an autonomous car. Over time the safety of everyone on the road goes up regardless of what they are driving.


That only works in parts of the world where there is a viable alternative to driving. Speaking as someone who has lived most of their life in the rural south US, that’s not a reality for most of the world.


You missed:

> and you have to use an autonomous car

The idea being you use this hypothetical level 5 autonomous vehicle instead of the ones that let you be a dipshit.

Anyway, we already revoke peoples’ licenses for drunk driving and we do it in the areas you’re referring to.


See earlier note about the public not accepting autonomous vehicles while they make errors humans consider nonsensical. The actual aggregated stats don't matter. The logic doesn't matter. All that matters is the video of a self-driving car veering off course suddenly and slamming into a school or bumping a pedestrian at high speed on video. It won't matter that no kids got hurt or that the pedestrian survived with minor injuries.

It will feel more dangerous and untrustworthy, and that has always been more than enough to kill a promising solution in this country.

If we can overcome that and actually follow the numbers instead of our guts, we won't need most of the cars in the first place—autonomous or otherwise—because the numbers say mass transit is the better option on all metrics: economic, environmental, safety, land use, etc. Cars are at best a backfill option.


The general public doesn't have to accept anything because they still have a choice to use whatever car they want. It's only the drunk drivers being forced in those evil autonomous cars, and they would have otherwise killed themselves/others anyways if left on their own. And then over time the safety numbers, convenience and more will speak for themselves and convince you to switch.


But what’s the point? Why do individuals need level 5 cars?

Why not just make tons of public transit at that point using the level 5 tech instead and cut traffic?


>> But what’s the point? Why do individuals need level 5 cars?

Because there are places that public transit will never cover? I've been places in the American West that are 50+ miles from the nearest paved road, stop sign, or restroom. I don't think they're getting bus or train service any time soon.


No, but you could have public-rideshare in some of these places.

Level 4+ cars may be quite expensive for some time. Operating costs of L4+ vehicles will likely be low compared to human-driven vehicles (counting the cost of the driver). So, it could lead to car sharing in situations where we don't accept it currently.

> that are 50+ miles from the nearest paved road, stop sign, or restroom.

US population density is lower than most of the developed world, but what you describe is a tiny share of miles driven and can be effectively ignored for now.


Ah, you’re correct I did miss that. I’m not sure how I’d feel about a federally mandated solution like this… but I have to admit, it’s A SOLUTION to an otherwise big problem.


It does spook some people, but sufficiently large portions of the population also like to engage in the very things that cause those collisions, such as using their phone or being distracted, driving inebriated, driving fast, tailgating, etc, such that most people feel okay with others being risky, since they are being risky too.

Of course, when a collision does happen and damages have to be paid, the injured party will start advocating for full liability even if they previously had no issues engaging in the risky driving themselves. This is why it's not reflected at the polls: people don't vote for a politician who promises to crack down hard on moving violations with things like cameras and increased police stops.


And all because of our over-reliance on cars in urban and suburban environments. Mass transit basically does away with moving violations, traffic citations, parking problems, cameras, and police stops. No more need for stroads, toll booths, etc. Our cities might actually be livable again. Walking and biking wouldn't feel like putting our lives at risk. Fewer parking lots means more stores and other amenities closer to where we live.

It's the cars. Autonomous or not, cars are the problem.

But just like health care, we keep choosing and voting for the absolute worst option because it's what we're used to and fear-mongers selling us a story about how any change will inevitably lead to a socialist hellscape. Fear: it's a hell of an addictive drug even as we witness our own obvious decline.


America is huge and its people are very spread out. Its cities are sprawling and low density.

For what it's worth, I don't actually shop at the supermarket closest to my house that is within walking distance because they don't offer good prices.


America does not have an even population distribution. East of the Mississippi is similar to Europe in population density, as is the far west coast (within ~100 miles of the Pacific). There is a lot of land between the Mississippi River and the Pacific Ocean, plus Alaska, that few people live on, and that brings the average density down. However, if you just focus on where people live, density is high enough.

Even the sprawling suburbs are dense enough to support great transit - but since they don't have great transit, everyone drives, creating a death spiral that is hard to break out of.


Yes, there is much work to be done. And it will have to be done to move forward. We've eroded sidewalks and put driveways along major stroads. Parking lots often match or exceed the size of the businesses they serve. We've reached maximum density with car-centric city planning.

There are some software decisions that worked well up to a certain point, but simply won't scale reliably to the current level, let alone what you need for the foreseeable future. At some point you have to bite the bullet and refactor. Refactoring takes skill, and you need the right folks for the job, but it will get harder and more expensive the longer you wait. It typically doesn't have to be done all at once. Just fix as you go and stop following the older patterns. Have a plan and work toward the goal one step at a time. One commit at a time. One stroad at a time.

Folks may believe in the whole "personal freedom" with a car, but how personal or free are you sitting in bumper to bumper traffic? Cars = Liberty is a pernicious lie.


I live in a Czech city called Ostrava. You can look it up on Google Maps.

We have excellent public transport, but it is slowly becoming too expensive for the municipal budget. Given that the city is historically not compact (there are either old industrial brownfields or rivers with adjacent floodplains that are unsafe for residential buildings), trams and buses need to cross kilometers of mostly uninhabited territory before reaching dense parts of the city again. Of course, that costs money in fuel or electricity, extra wear and tear on the vehicles, plus the polycentric character of the city does not allow for a simple network of lines meeting downtown. You need more of a triangle.

And there is approximately nothing that can be done about it. The floodplains are dangerous to build in, the rust belt of brownfields would be too expensive to redevelop, the economy of the city is far from stellar and won't support any extensive redevelopment anywhere; we are already losing population, though not dramatically so.


I'm not sure what bullet we have to bite and why. I'd rather the personal freedom to just drive across town and shop. Our sidewalks are great, we have bike lanes, lots of parks. What we don't have here is a lack of space. We're surrounded by miles of farmland, as are all of the other cities near me.


There is only so far as you are willing to drive. In theory I have the freedom to drive to New York city for my shopping, but that is a 17 hour drive (best case, not counting stops!), so I would never do that.

Even for cities of normal size, the total distance across means you would not want to make shopping on the other side of the city a regular event. So if we add transit and increase density, you can find there are more places within a reasonable range for whatever activity, even though you lost the freedom of the car.

Of course, if your activity isn't shopping - something a city excels at - but instead camping far away from other humans, then a car means freedom. When the activity you want to do is something a city excels at, doing it via transit should mean even more ability to do it, and thus more freedom. Of course, this freedom via transit only works out when there is great transit and high density. Getting there is often difficult.


It’s difficult for mass transit to ever beat a car in direct speed; even in Rome, where the train literally goes from the airport to a block from the hotel, my Uber with luggage beat the coworker who rode the rails, and he didn’t even wait very long at all.


Are you also including finding your car in the parking lot and finding a parking spot at your destination? I find a lot of folks neglect that part of the time taken.

When in Manhattan, the subway can 100% beat a car over the length of the island. (Car can beat the busses going cross town though unless of course you add parking back to the equation.) Getting to Queens seems to be faster on the subway than taking a car across the bridge too.

D.C. and San Francisco are both towns I'd also rather take mass transit than drive.


Yes, it'd be very easy: if you want a vehicle on the road, name who gets sued for damages and who bears criminal culpability for failure.

Almost all our tech wants to skirt the legal system.


Easy to say if you live somewhere without winter. AI models are a LONG way off from the kind of driving you need in near-whiteout conditions.


Nobody should be driving in near-whiteout conditions, human or AI.


Of course it’s best to avoid dangerous conditions, but you must realize that weather changes fast, and a clear road can become very low visibility in a matter of minutes. There are also urgent and emergency situations where it’s rational to take the weather risk and drive, much more cautiously than usual. Finally, the issue is not strictly visibility; it’s more the pavement condition. Autonomous drivers can’t “feel” the road the way a human can, at least not yet, so conditions that a human could safely and cautiously navigate just aren’t safe for a non-human right now.


> the AI will make deadly mistakes that no human would make

You'd be surprised what kinds of mistakes humans make.

Anyway, snarky comment aside: the biggest reason for optimism is that a world full of AI cars will remove the reptile-brained jostling for position that's behind 90% of crashes today, and that it will overall _slow down_ traffic. Slower, calm, tepid-moving traffic, a bunch of electric golf carts puttering around the city. That's a future of AI-only traffic worth signing up for.


It's amazing how many people seem to not see further up the road than the cars directly in front of them. Even when driving tall SUVs or trucks.

My favorite scenario is when someone super impatient suddenly pulls around a car (often without signaling), not noticing:

* The car in front of them is actually going the same speed as the car in front of it

* The lane they were all in is actually going faster than the lane they just pulled into

* Everyone is about to pass a slow person up ahead in the newly selected lane

Person predictably hits the gas to race ahead only to get stuck behind the slow car while the cars they thought they were passing proceed ahead in the lane they just left.

Sometimes frustration and increasingly erratic behavior ensues.

Never gets old :D


This is a genius reply. I 100% agree intellectually as well as from personal experience. I have a British co-worker. When he goes home for the holidays, he is always terrified of how fast people drive on two lane countryside roads with sharp turns and limited visibility. Another Swiss co-worker said the same about snowy local roads in the mountains. Locals drive very fast. As soon as AV is trained on those roads, it will drive much slower, and probably safer.


Why would you think people would drive slower with AV?

They will drive even faster thinking their AI will protect them.

I know this because Tesla has “ludicrous mode” and all other EV manufacturers are bragging about their insane 0-60 and 0-100 mph times, and owners love to show off.


> Why would you think people would drive slower with AV?

Because the autonomous vehicle will be autonomous.


True. I did a bad job explaining myself. What do you think the consequence will be if AV drives too slowly for the impatient?


If it’s truly AV they probably won’t care because they’ll be watching Fast and Furious on the touchscreen.


Why would traffic actually be slower? Have you driven much in busy cities? Usually in California, for example, when traffic conditions allow, the actual advice given in things like driver's ed is to keep with the flow of traffic, including in situations like freeway driving where traffic might be going much faster than the posted speed limit. Certainly AI might drive slower in places flagged as actually necessitating it, but if anything I'd think the advantage of a fleet of AI drivers is that cars could go even faster than before, because suddenly there's nobody stubbornly slowing a lane down unnecessarily out of either panic or simple obstinacy that they're in the right because they're going the posted speed limit.


> the actual advice given in things like drivers ed is to keep with the flow of traffic including in situations like freeway driving where traffic might be going much faster than the posted speed limit.

That's awful advice. It's something that feels right, but in reality it only exacerbates the push-pull accordion effect of too-fast, too-heavy traffic.


One major gripe I have with driving in California is that there's no clear traffic law requiring folks to move over when getting passed on the right.

It's not the speed. It's the speed difference as cars end up weaving between lanes in traffic because slower vehicles sit in the passing lane(s).


It’s actually the law to go the speed of traffic.


An awful law when the speed of traffic is too high for conditions.


Plus it will be MUCH safer for cyclists and pedestrians.


> If we put more effort into self-driving busses and autonomous trains ... we would see much greater dividends on our investment and far fewer "unintuitive" errors.

Everything you said made sense, except focusing on mass transit for FSD, for two reasons:

The urge to focus on bus/train automation comes from an illusion of control (we can control the lane/track), and hence we tend to reflexively attribute easy outcomes to it (low risk, high value).

If we ignore the control part and think about it properly, mass transit is actually higher risk for lower value.

1. Lower value: For something that involves 100+ people on dense economic centers, it's already running at an economy of scale where a human driver just makes sense. I live in Germany where the metro trains & trams are already crazy automated. There is a human driver there just in case, more as a supervisor for the people riding (controlling hooliganism, jammed doors, helping challenged people, dealing with emergencies, etc). I see German trains as already running on FSD4. FSD5 full automation is a waste of time here. Using buses for last mile coverage for few passengers, aka treating buses as "big taxis" is probably worse environmentally than actual taxis.

2. Higher risk: By the same logic you said for cars – "far fewer unintuitive errors" – at a much higher capacity of mass transit – is far more catastrophic. Imagine a self-driving train had just 1 accident in 10 years, but it affected 1000 people. It's sheer terror. Who is liable for it? Government. The problem with going down this mass-transit-first route is, one error means legislating away the entire sector.

Cars are actually lower risk (individual choice, individual liability, accidents don't deter others from adopting) and higher value (last mile, moving away from the dense urban city plans that come with high rents and chokepoints which are crippling even to my beloved, beautiful German cities where even with all the urban sprawl, last mile is still a problem outside A zones).


German trains are not known for being automated. I'm sure you have some, but not as many as you think. No tram in the world runs "crazy" automation; they all currently have a human on board. Only grade-separated trains run fully automated. There might be some automation on your trains, but it isn't full automation.

Your second point is completely wrong: we have trains, and have been running them for more than 100 years. We have real statistics showing that in the real world they are much safer than cars. Sure, you can imagine anything you want, but when real science has real data, why would anyone look at your imagined data?


I think you got the context of the second point entirely wrong, so it's better to read the comment to which it's replied. As I said before, my entire family are proud, satisfied users of the German metro & public transit system – and we are proudly anti-car. Nothing more to prove when we practice rather than preach.

As for the first, it would be worthwhile taking the AI-tinted driverless-only FSD glasses off, and re-read the original comment and response.


    I live in Germany where the metro trains & trams are already crazy automated.
Where? For example, Berlin has a very large U-bahn and S-bahn network. As I understand, it is not automated (FSD4+). Please correct me if wrong.


The whole industry feels like the cart leading the horse to me.

Not having a track to follow on/in the road (magnets, sensors, etc.). Not mandating all cars talk to each other, working together like a mesh/hive/colony.

I understand that has its own set of self-starter issues, but it can be built in WHILE also doing what is currently happening. The fact that roads are being replaced TODAY and still nothing is going into them to help cars drive themselves baffles me.


I am convinced that this will come later. If anything, "hive mind" AV cars will allow them to drive much faster on high speed roads. Imagine if one lane is protected (walled) and reserved for AV cars on an expressway. They could drive crazy fast as a team (150km/h+). Expand that over the years. In 50 years, all expressways might be AV-only and cars driving very, very fast.


You still have weather, unsecured loads, animals. Cars and drivers aren't the only dangerous things on the road.


I agree that the fixation on cars in urban/population dense areas is a problem and the overall use of cars in these areas should be offset by public transportation.

I feel like the one in five Americans who live in rural areas are left out of the conversation, though. You can't eliminate cars for those 60 million or so people.


When you suggest 80% of people live in urban areas, that statistic has a threshold of 2,534.4 people/sq mi. That isn't very dense. You're leaving out a lot more than 20% from the conversation when you talk about eliminating cars.


And, really, that understates it. I'm technically urban per the Census (ex-urban per ESRI). But the idea that anyone near me could reasonably get by with just public transit is laughable. And I actually live quite close to a commuter rail station, and there is a small regional bus system.


Sounds like you and your fellow community members should vote for folks who will prioritize public transit rather than widening stroads. Poor transit options are a policy choice, not an inevitability. The best time to start advocating for livable cities was a decade ago. Second best time is now, so that ten years from now, you and yours will have more options than they have today.

Folks need to be transported at higher and higher densities as a city's population grows. Cars are the lowest density carrier available. Think of how many cars fit on a four-lane road on a mile stretch. How many people are in those cars? How many trains or busses would be needed to move that many people? Now visualize the space taken up by those cars versus the space taken up by busses.

That's how you solve traffic problems with a growing population, even for the folks who still need their cars because their destinations are sufficiently irregular. Mass transit helps those who need their cars too!

As a byproduct, you don't need so many and so large parking lots. Think of all the parking lots around you, which I'm sure there are many. Imagine 80% of them were replaced with housing, retail, office space, parks, meeting places, etc. Then convert the remaining 20% to multi-story parking.

Urban sprawl is a choice. Choose different.


I am 50 miles outside the nearest major city. I'm on a busy country road with two lanes total, where my two nearest neighbors are on 10s of acres. There are no nearby businesses (much less stroads) until you get to a small city of 20K people. I don't know how you solve that with mass transit. And it's considered urban as the US Census defines it.

You may not approve that such places exist but they do. And folks like to live in them.


Okay, so where you live is classified incorrectly. That's fine. Mass transit doesn't work without the "mass" part, which you clearly don't have. I have no problem at all with your vehicle ownership or your choice of place to live.


Well, the census has a binary definition and, for different purposes, it makes various degrees of sense, although I can fairly easily go into one of the largest US cities for a day or evening. I'm not in the boonies, but I'm also clearly not in a location where car-less public transit can remotely work. And I'm not sure there is a reasonable mid-definition, because at that point you're judging what degree of inconvenience is acceptable, which is pretty much the case with the regional transit system around where I live.


Houston and Phoenix are two of the top five cities in America by population, and both have a lower population density than the 50k-person small farming city I live in. America is just huge.


Yes, car-centric land use is horribly inefficient and the core of the problem. You can either throw good money after bad as a matter of public policy, or you can start strategically increasing density.

But it's a choice. There are a lot of folks out there in the "good money after bad" camp who focus too much on what is and what was rather than what could be.


We’re also only talking about “good weather” regions. There’s no way an autonomous vehicle is capable of handling diverse weather and gravel roads, especially snow and ice, at the moment (my Tesla cannot). The conversation is very myopically optimistic at the moment (which is fine, it should be; just pointing it out).


Bet you we could make an autonomous vehicle that handles snow and ice conditions better than the average Seattle driver. Most people aren't any good at driving under those conditions. And half of Seattle forgets how to drive in wet conditions after the summer is over.


My favorite thing about Seattle drivers & traffic (lived there for 15 years).

[rains] Radio traffic announcer: "It's slow out there because it's raining."

[cloudy] Radio traffic announcer: "It's slow out there because of low visibility."

[sunny] Radio traffic announcer: "It's slow out there because it's sunny."


This feels like a straw man to me. If someone who works in construction, works a ranch, tows a livestock trailer, manages a farm, etc. wants an F-150 or F-250 or whatever, I don't think the vast majority of us will even question that decision. Rural residents and (sub)urban residents on average have very different needs and goals, and I have no problem with that. I for one am not fixated on the 20% because by and large, they aren't the problem. They don't greatly contribute to overall traffic congestion, traffic accidents, or even emissions. They also shouldn't block policy directed toward the 80%.

I'm talking about segments of the other 80% that want a dually truck because it makes them look "alpha". Folks buying huge SUVs to feel "safe" while being more prone to rollovers, less able to avoid collisions, and far more likely to kill others—especially pedestrians—in a crash, in addition to monopolizing greater and greater proportions of limited land resources.

You live three miles from your nearest neighbor? Feel free to indulge in a raised pickup with 3 tons of bed capacity and 5 tons of towing with my blessing.

You live in one of the major metropolitan areas in the US? Don't buy a Hummer, Lexus SUV, or F-150, especially if safety is your goal. In fact, those large vehicles should require a new class of drivers license due to their size and performance characteristics just like school busses require a class B and motorcycles a class M due to their different structure and place within our highways. Buy a transit pass. Per capita, folks simply don't die in car accidents when they ride the bus or take a light rail. Don't have good/fast public transit infrastructure where you live? Time to vote for folks who will make it a priority.

Because widening stroads has been tried. It doesn't work. They have never worked. They don't make traffic better, they don't make us safer on the road, they don't get us to our destinations safer, and they certainly don't make the most efficient use of land. It's time to move on. Dump all the stupid, oversized, single-level, paved parking lots and replace them with mixed-use housing, retail, and office space with a public transit hub.

Make just enough parking so that the 20% folks who actually need their daily-use vehicles can visit easily. Preferably they can park in the park-n-rides at the outskirts and hop on a train to the city center so the parking fees are as cheap as possible. Let the 20% decide whether they want self-driving vehicles or not. The 80% should leave them alone and embrace the self-driving busses and trains for themselves.


> the biggest hurdle will be public acceptance. The AI will not make the same kind of mistakes humans make. So while the aggregate number of accidents will be (likely much) lower without a human at the wheel, the AI will make deadly mistakes that no human would make, and this will terrify the public.

I'd argue safety is not a concern. There are a lot of "safe" things we could do, but don't. A significant percentage of the country doesn't even vaccinate its children. Self-driving cars aren't suddenly going to make us aware of our own mortality in ways that life itself hasn't already.

The real fear is lack of accountability. If a drunk plows into a crowd of pedestrians, he will be dragged out of it and (metaphorically) lynched. Justice makes us feel better about circumstances beyond our control. If North Korea test-launched an ICBM that erroneously hit Japan, we'd declare war over their typo. But when a self-driving car erroneously mows down pedestrians, we're told to just accept it, nothing we could do, mumble-mumble-training-data, and tragedies like this happen so we can be safer.

Nothing is going to condition us to resent the idea of Safety more than having our personal agency and sense of justice taken away from us in its name.


What do you think about a separate set of laws that determines payouts from AV companies to people injured in AV accidents? It seems reasonable to me. For over 50 years, (life) insurance companies have offered specific payouts for injuries. In some sense, a framework already exists. Also, if you want to make the system "bullet proof", require AV companies to put a certain amount of money on deposit with the state (or buy insurance from a separate company). (In a sense, this already exists for insurance companies through strict balance-sheet regulations.) Then you cannot have a terrible AV company with many accidents that goes bankrupt from fines and cannot pay.

To be clear: I do think, in my lifetime, AV will become normal in many places. Some horrific accidents will happen that result in massive fines to the companies. As a result, some of them will go bankrupt.


Personal agency and justice are not provided by cars. To that point, how much agency do you feel you have in standstill bumper-to-bumper traffic? How much justice do you feel sitting in traffic, focused on the bumper stickers in front of you, versus doing actually productive things while the bus or train is in motion?

Cars != Liberty

They never were. They have always been rapidly depreciating assets that are useful for one-off destinations and horrible externalities with regard to city planning.


I’ve been saying this since the 90s. Fewer cars, more mass transit in urban regions. But people are stupid: they would rather sit in traffic for 2-3 hours a day than give up a “freedom” that actively restricts them. The paradox is the size of a galaxy.


There’s no evidence to suggest that a system of self-driving vehicles is safer. The solution won’t come from the US because there is way too much red tape. It’s coming from a smaller country, perhaps Japan or Korea.


No lol, we won't have fully self-driving cars in our lifetime. We would need cars that fully understand human speech and intent, and that can react with 100% accuracy, 100x faster than humans.

Possible? Yes, but not for decades to come.


The most critical point, though: the race has started and will not just stop.

It's a question of when, not if, as it's clear that we haven't found any fundamental issues.

The opposite is happening: more people already use it than I thought, and companies are stepping up with insurance to cover you.

Yes, the Tesla driving into a truck was horrible, but there was no genuine global outcry.


    autonomous trains
These already exist in Hong Kong, Singapore, and Paris, but they are crazy expensive. And an AV bus sounds harder than an AV car due to boarding and fare payment issues. I disagree with your ideas for early AV applications.


I think you're assuming a city bus needs a fare. It doesn't. Santa Cruz County, CA has just passed a resolution where all busses in the county will be free to ride. It already is for folks working downtown, anyone under 18, anyone at the university, and for a small per-semester fee at the local community college.

This not only makes it more convenient to get around without worrying about buying transit tickets ahead of time, it means the busses can load from the front AND the back, making the time spent at each stop shorter and schedules easier to keep.

Much easier problem for autonomous setups when you remove fares and just have to monitor if anyone's in danger.

Public mass transit never breaks even on fares. What it does very well, however, is free up the money folks would have spent on gas, parking, car maintenance, etc., so they can move around and spend it more freely in the community.

You see this spectrum in various cities. Some cities make you pay depending on how far the light rail travels, so you naturally condition riders not to ride any more or further than they absolutely need to go. Places like New York City on the other hand just sell you a pass; taking an extra trip to the Met or to Central Park doesn't cost extra, so folks do it, socializing and stimulating the economy along the way.

New York's biggest transit mistake in my opinion is keeping the fares. They'll spend HUNDREDS OF MILLIONS of dollars on NYPD overtime to catch turnstile jumpers when they could spend less just letting people ride for free. https://gothamist.com/news/nypd-overtime-pay-in-the-subway-w...

It's like when Florida drug-tested everyone trying to get public assistance and ended up spending far more on administering and processing the tests than it saved, to catch a relatively small handful. https://www.aclu.org/news/smart-justice/just-we-suspected-fl...


Fares have another purpose as well: crowd control.


According to a quick Google search, the NYC subway has a fare recovery ratio of about 20%. Do you really think "NYPD overtime to catch turnstile jumpers" is 20% of the cost to run the NYC subway? Not even close; surely much less.


> the same kinds of mistakes humans make

We're hosed until we can instruct our vehicles to act drunk.


The two scenarios that concern me long term are:

- What happens when all of the infrastructure gets built up around self-driving cars and people's firsthand knowledge of how to drive diminishes? Once a near monopoly/duopoly is attained by a select few SDC vendors, it will become a utility. Then what fallback does society have if the likely enshittification happens? Do we just have to live with it?

- It's all fine while the companies able to do this are among the most elite, technology-first companies - but what happens when companies known to take shortcuts (like the ones who can barely get Bluetooth working for their audio infotainment system) start trying to enter the market by focusing on lobbying the SDC oversight board?


Regarding your first point… 15 year olds learn the basics of driving in a day. Generally speaking, it’s not that difficult. A scenario where humans forget entirely how to drive and are unable to learn again is incredibly far fetched.


We need private rich people to buy up large tracts of land and build techno test bed cities for this stuff.


If you're building a new city, just do public transit instead of all this inefficient nonsense. Either way it's not a naturally growing city, so if you're going to plan one, plan one that makes sense.


I know Waymo is investing a lot into PR that makes them seem successful, but they are the only company I actually see on track to delivering autonomous cars (on existing infrastructure).

I'm still a bit torn on whether autonomous cars are a good thing once you consider all the second and third order effects (even more cars on the streets, less investment into better modes of transport, and traffic will get a lot worse once people are ok with sitting in bad traffic and watching Netflix). But I have to applaud Waymo for their great execution on a very difficult problem.


As others have noted, autonomous vehicles may actually lead to less car use. Currently, many people must own cars for certain use cases. Because of this, for any given trip the decision to take the car vs. other means is based on the marginal cost of car usage. In contrast, if people no longer need to own cars because of autonomous taxis, the decision of car vs. other reflects the amortized cost of car use, which will be far higher than the marginal cost. Put another way, there are plenty of trips being taken by car now simply because people have a car for other reasons, but if they didn't own a car they'd far sooner take another option vs. renting/Uber/Waymo.
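
The marginal-vs-amortized gap in numbers (purely illustrative; every figure below is assumed):

    # what a trip "costs" an owner vs. a non-owner paying per mile
    fixed = 8000       # $/yr to own: depreciation, insurance, registration
    marginal = 0.20    # $/mi once you own it: fuel and wear
    miles_per_year = 10_000

    amortized = fixed / miles_per_year + marginal
    print(marginal)    # 0.20 $/mi: what the owner "feels" a trip costs
    print(amortized)   # 1.00 $/mi: the true cost a non-owner would weigh

The owner perceives trips as nearly free because the fixed costs are already sunk, while the non-owner weighs the full per-mile price against transit, cycling, or skipping the trip.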


People will very likely also be willing to spend much more time in cars if they don't have to actively drive. E.g., you have a 2-hour commute but you can play on your Steam Deck the whole time, or you can travel by sleeping in your car while it drives 8 hours.

To the extent that self driving taxi services are cheaper than human driven taxi services, they will also increase use of taxi services.

There's no reason to assume that, on balance, people will end up driving less as a result of a technology that makes driving significantly more convenient, simply because it might make taxi services somewhat cheaper and therefore potentially make it easier to not own a car and encourage people to use other modes of transportation for some trips.


Well, sleeping is generally done when demand for cars is extremely low. And a lot of people can’t sleep in cars even when they are a passenger. It’s hard to imagine that becoming common enough, even at very low prices, to add to the number of cars on the road.

While I’d certainly prefer to watch Netflix than actively drive, I’ve still got stuff I need/want to do that I can’t in a car even as a passenger. And it’s just not comfortable for long periods of time. A lot of people get motion sickness staring at a screen in a moving car. Etc.

A lot of people own pickups just because they occasionally want to tow something or move something large. A lot of people own second cars for occasional use. These might become rentals instead when it can affordably just show up at my door in a half hour.

There’s no way to tell how this plays out. There will be some amount of induced demand, there will be some amount of reduction in use. One never knows which will be bigger.

What I do know is traffic deaths kill over 40,000 Americans a year, and driverless cars could potentially get that to 0 or near it, whereas human drivers cannot. I do know we can electrify cars and power them all with renewable energy, not immediately of course, and remove many of the environmental concerns. We can enhance mobility for the elderly and children and mentally disabled who can’t drive.

There’s a strange amount of anti-car propaganda that has gotten people worried about this, but I look forward to a driverless future in which cars are cheap, clean, safe, and available to all.


> There’s a strange amount of anti-car propaganda that has gotten people worried about this, but I look forward to a driverless future in which cars are cheap, clean, safe, and available to all.

It’s not propaganda but jumbled concerns which are often poorly expressed. I think the strongest arguments are:

1. Self-driving cars don’t change pollution (even EVs are better for local air quality but still cause massive carbon emissions and unchanged or worse tire particulates) and may even make it worse locally with the extra mileage from taxi fleets.

2. Self-driving cars only lightly improve congestion, and then only to the extent that they can coordinate and you can ban non-AI drivers from certain chokepoints at certain times. The form factor unavoidably needs far more space per passenger than anything else.

3. Self-driving cars don’t really help with affordability – even if the current prices come closer to parity, that’s a financial stress for many people (e.g. in the region where I live, the average family spends as much on vehicles as they do food).

4. Self-driving safety needs a different relationship with the manufacturer. There are many areas where they can be safer but failures can also be correlated so we really need companies to share liability and have rigorous safety oversight.

As a pedestrian, I’m fairly bullish on the concept given how dangerous the average driver is now compared to 20 years ago but I worry that a lot of politicians are going to ignore the other issues because those require hard choices whereas it’s so compatible with American culture to say you can solve major problems by making an expensive purchase. These shouldn’t be opposing issues, of course, and I’d really like to combine them because autonomous vehicles should soon, if not already, be much better about following speed limits, staying out of bus lanes, etc. Making advanced automatic braking a requirement to enter a city could save thousands of lives every year.


The EV pollution component of this is 100% propaganda spread by legacy automakers and fossil fuel companies, and it is blatantly false. EVs are substantially better for the environment, and the delta is growing. Even in an area where energy is generated by fossil fuels, EVs emit significantly less CO2 per mile driven, like 2-4x less depending on a wide range of factors.

Most new power generation being built in the US is now renewables (over 80%), and if anything we lag much of the world. (Luckily it's easy to tell the near future in this regard, because utility info is all public and planned out years in advance due to permitting, purchasing, etc., so there are functionally no unplanned power plants being built in 2024, or even until 2027 or 2028 or so.) This is because renewables are just cheaper now, and the economics of wind/solar get better every year as generation costs fall and fossil fuel prices rise. Technology usually gets cheaper, fossil fuels usually get more expensive, and both of these seem to be true in this case.

You are correct about particulates, but they're basically insignificant compared to carbon emissions, and probably even offset by the lack of motor oil and various other fluids that spill, need to be produced and then disposed of, the time the car has to be driven in for service, etc. Any sane person would happily trade a 50-80% reduction in lifecycle CO2 emissions per car for a 25% increase in tire particulate matter in the environment. It's only propaganda that makes people mention this, even if it's true, because it's just a non-factor.

I had half a mind to write a long treatise on why I think we'll only see significant EV adoption if/when cars become driverless, but I'll save it and just go with this. Someone I know was killed last week in a hit and run. She got in a minor car accident, got out to check on it, and a third driver hit her and took off.

When it comes to affordability, economists generally put the statistical value of an American life at ~$10 million, and 40,000 people die from traffic deaths every year. Even if we just look at the numbers, Americans buy about 3 million cars a year. So 40,000 * $10 million divided by 3 million is a savings of over $133k per car, which is far in excess of the average car's lifetime cost. Even a 50% reduction in deaths, which for all I know currently existing driverless cars could achieve, would be the same as making all cars free in terms of average cost.
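
Spelling that arithmetic out, taking the figures above as given (note: US new light-vehicle sales are usually reported closer to 15 million a year, which would shrink the per-car figure proportionally):

    deaths_per_year = 40_000        # US traffic deaths
    value_of_life   = 10_000_000    # $ per statistical life
    cars_per_year   = 3_000_000     # the sales figure used above

    savings_per_car = deaths_per_year * value_of_life / cars_per_year
    print(f"${savings_per_car:,.0f} per car")   # -> $133,333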

And even if driverless cars are a total push in every other respect (and I think they'll be much better) 40,000 families a year (and I assume globally, at least 5x that) not losing a wife and mother that way is more than worth whatever we have to do to make it happen.

Stay safe.


> The EV pollution component of this is 100% propaganda spread by legacy auto makers and fossil fuel companies and is blatantly false. They are substantially better for the environment and the delta is growing too. Even in an area where energy is generated by fossil fuels, EVs emit significantly less CO2 per mile driven.

That’s not the argument being made. Everyone knows they pollute less per mile – but unfortunately the manufacturing is roughly half of the lifetime pollution from a vehicle.

https://cleantechnica.com/2022/07/27/ucs-study-shows-lifetim...

https://www.iea.org/data-and-statistics/charts/comparative-l...

This matters especially because consumers have been heavily marketed into massive trucks and SUVs, where the sheer size of the vehicle can make lifetime emissions greater than a small ICE car’s – the lack of tailpipe emissions can’t make up for the manufacturing footprint, even if the vehicle is powered entirely by renewables.

That doesn’t mean that we shouldn’t be electrifying the vehicle fleet quickly, but it’s buying time on the trip to zero emissions, not a solution. Buses and e-bikes get us much further because they don’t suffer from the inherent inefficiency of automobiles.


>Buses and e-bikes get us much further because they don’t suffer from the inherent inefficiency of automobiles.

It's a free country: people are free to choose autonomous cars over e-bikes and buses, and why wouldn't they? The idea that fairly taxing a personal electric car's emissions[0] would make it unaffordable doesn't pass the sniff test.

[0]Fair economic taxation of externalities - considering current status quo.


Those comments always remind me how insular this community is. Go to Cleveland, or Phoenix, or Houston, or literally any city that isn’t in the top five in density, and try getting around by bus or bike and tell me how you like your life.

I don’t particularly love cars or anything, and would be really happy to not have to have one, but there’s no way I’m going to try to rely on buses or bikes. I value my time too much for buses, and my life – and not being either frozen or covered in sweat – too much for any sort of bike.

A car gets you from point A to point B quickly, reliably, comfortably, and with cargo. Nothing else does that, and we are willing to spend a significant portion of our income for it.


We’re only talking about pollution here - the problem is that multi-ton heavy machinery has a much bigger footprint than any other common option for moving a person around. It’s not a “free country” debate, just unavoidable physics: using 4-6K lbs of machine to move 200 lbs of person is going to require a lot more energy than a 20 lb bicycle or having that person share a bus with 50 other people.
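
Back-of-envelope payload fractions, using rough typical vehicle weights (all assumptions; the bus figure assumes 50 riders sharing it):

    person = 200  # lb
    modes = {
        "car (4,500 lb)":              4_500,
        "bus (30,000 lb / 50 riders)": 30_000 / 50,
        "e-bike (50 lb)":              50,
        "bicycle (20 lb)":             20,
    }
    for name, vehicle_lb in modes.items():
        frac = person / (person + vehicle_lb)
        print(f"{name:30s} payload fraction {frac:5.1%}")

The car moves roughly 4% person by mass; the bicycle, over 90%.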

I think taxing carbon would be a great way to encourage people to reconsider how they travel, and would expect many people to pick things like those small EVs for urban usage if that became common.


Both of those links say EVs still have substantially lower carbon emissions.

Studies are all over the map, but the ones that put manufacturing anywhere near 50% of lifetime emissions are all from China.


Speaking for myself, I would absolutely "drive" more miles if my car were autonomous. I'd take the hour+ trip into the city far more if I didn't have to drive or go on the two hour+ drive to the mountains for a day hike. Even if there are fewer cars (which is mostly about the economics) there will absolutely be more car-miles with autonomous systems.


Yeah, I was offered tickets to a bowl game that’s about three hours away - but it won’t end until around 11 pm, and I have to be at work at 6:30 am the next day.

No way I can do that and be functional the next day, but if the car could drive itself, I’d probably be going.


It would probably be much more expensive

If all you pay is the marginal cost, then those that live an hour away will pay six times what those that live ten minutes away pay


I'm assuming I own the vehicle. Whether there's a driver or a computer, I also assume that routine 2-4 hour round trips in a taxi of some form aren't going to be viable for most people.


This is already a reality with the fully electric self-driving tech we have now: trains. And no, people still dislike long commutes, even if they can play their Steam Deck on the train.


I lived the mechanical turk version of this for years: riding a corporate shuttle to work.

I prefer Caltrain.


I do not see how the existence of autonomous taxis is any different than the existence of taxis.

The existence of taxis is (obviously) not enough to curb car usage growth.

EDIT: Some specificity: How would robotaxis replace commuting for millions of people in a way that reduces car rides? The taxi has to move at least from the storage to the rider pickup to the rider dropoff. Without sharing, that's actually more miles and the same number of cars.

Instead, if it picks up two people per day, that's more miles, fewer cars in existence (since both riders don't need a car), but the same number of car trips (plus the to/from storage).

With taxis (robotic or otherwise) the number of miles driven is just going up unless people change their lifestyle. That doesn't do anything to curb car usage.


> I do not see how the existence of autonomous taxis is any different than the existence of taxis.

Cost. The cost of an Uber is way too much for daily travel (vs owning your own car or public transport).

A human-driven taxi needs to pay the driver's salary within an 8-hour shift. An autonomous taxi can run (almost) 24/7, 365 days a year. Which do you think will be the cheaper fare?
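
A toy break-even sketch of that comparison – every number here (vehicle cost, driver wage, speed, opex, revenue hours) is an illustrative assumption, not operator data:

    def breakeven_per_mile(vehicle_cost, life_years, revenue_hours_day,
                           driver_wage_hr=0.0, mph=20, opex_per_mi=0.30):
        # Very rough break-even fare in $/mile.
        miles_per_year = mph * revenue_hours_day * 365
        capital = vehicle_cost / (life_years * miles_per_year)
        wages = driver_wage_hr / mph
        return capital + wages + opex_per_mi

    human = breakeven_per_mile(40_000, 5, revenue_hours_day=8,
                               driver_wage_hr=20)
    robot = breakeven_per_mile(150_000, 5, revenue_hours_day=18)
    print(f"human-driven ~${human:.2f}/mi, autonomous ~${robot:.2f}/mi")

The two levers are removing the wage and spreading the (pricier) vehicle's capital cost over more revenue hours; whether sensor and remote-operations costs eat that up is exactly the open question.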

Another scenario is someone simply renting out their own car as an autonomous taxi whenever they aren't using it themselves (which is most of the time). Then it'll always be cheaper than current-day taxis because it's just a low-effort bonus source of income to the car owner.


An autonomous taxi isn't going to make many more trips per day. Ever hear of "rush hour"? It turns out most people are moving around the city at the same time of day, with far fewer trips in the other parts of the day (except lunch hour, when again all the same people are going to lunch). In the middle of the day the trips people make tend to be different (more likely shopping or delivery: a different car type than commuting).

I think most people will try the taxi, but if you already own a car (that is, transit doesn't make sense for most trips) you will discover it isn't much cheaper than owning your own car, and your own car is waiting outside when you want to go. One big advantage of owning a car over transit is that it is ready when you want to leave, instead of having to call one or hope one is waiting – and if cars need to wait outside your office all day in case your kid gets sick, that increases costs. Instead you can just buy a self-driving car and then leave your things in the car if you go shopping over lunch – something you cannot do with a shared car.


I think the difference in cost between a near-minimum-wage driver and a computer is far smaller than people assume.

And, for a car driven any reasonable amount, most of the cost is in the mileage.


> Which do you think will be the cheaper fare?

Neither, any savings will trickle up to the investors. The price of robotaxis is going to be just below the limit where it would make sense to own a car.


The idea is that the cost of autonomous car use will be much lower than taxis, because there are no labor costs. If you have to get to work every day, taking a taxi (if you can even find one) is much more expensive than buying a car and amortizing the cost across its lifetime. As the price of autonomous taxis fall, that will reverse.

That doesn't mean I agree with the GP's point about it lowering car usage overall. The reduced cost of autonomous taxis also pushes against your reluctance to take one, though perhaps not all the way to "use my car whenever I leave the house" levels. I also think that once people begin replacing their cars with autonomous taxis, they'll sign up for all kinds of taxi subscriptions that will further reduce that reluctance. After all, driving your car now isn't completely free: it still costs you gas money, plus the hassle of actually driving it. And other forms of transportation aren't free either. So the bar here isn't 0.


> The idea is that the cost of autonomous car use will be much lower than taxis

I see this all the time and I just do not believe it is true. Uber/Lyft/etc undercut taxis for users to take market share, and have drastically raised prices to become marginally profitable.

Autonomous cars are more expensive, and the labor in non-autonomous cars is not the majority of the costs. In NYC, a 1hr Uber could easily cost $100 against a minimum wage of $15.

For a taxi trip to become cheaper than a car owner's marginal trip would require a dramatic drop in taxi prices. Even halving them is not really going to do it, and I don't think removing the driver even halves the costs.

The autonomous taxi boosters also seem to overlook what happens to unattended, unmonitored public infrastructure in urban areas of this country. The reason I stopped using Zipcar in NYC was because they were typically trashed inside by the previous drivers. Now imagine an autonomous taxi that gets turned over 10x as often. Good luck.


Once human-driven alternatives (eg. rideshare, taxis) are out-competed by autonomous taxis, what would be the incentive to keep those prices low? Especially if Waymo is the one service with suitably performant autonomous vehicles.


Exactly. People imagine some sort of future SciFi benevolence from PCs that is not going to happen.


This is fair. I was unclear about the distinction between fewer cars (supported perhaps by cheap taxis, robotic or otherwise) and car usage (not supported by cheap taxis of any kind).


There seems to be an implicit assumption in a lot of cases that robo-taxis will drastically slash the price of taxis relative to today. Maybe cut the prices by 50% at best? That's about the delta between me driving my own car versus getting an Uber into the city. It's enough to get me to drive but is certainly not in too cheap to meter territory. And being able to have the vehicle I want with various stuff stored in it today is useful as well.


Here are a couple of possibilities. A working-age person sends their car to their elderly parent's place so that they can use the vehicle to do their groceries. Families with kids in various activities can have the car drop off and pick up all the kids at the appropriate times, without needing multiple vehicles if a parent needs to accompany some of the children. Car pooling becomes more acceptable because you can sleep during the detours to pick people up.

In reality, I don't think it is useful to try to enumerate these small immediate changes that are distinct from the availability of taxis. The long term cultural shift of having autonomous vehicles may lead people to fundamentally share vehicles in a different way. This may lead to a situation where fewer vehicles are driving more miles.


> Car pooling becomes more acceptable because you can sleep during the detours to pick people up.

Only if it is always picking the same people up. Otherwise this is a big negative. People often need to arrive someplace on time. If my car had decided to take a detour to pick someone else up and made me late for my early meeting I'd be mad. Car pools work - to the extent they do - because it is always the same people who need to arrive at the same time.


I think I did the monthly costs to do short commutes with just uber or taxis and it is easily in the high hundreds or low thousands a month (for me, doing a 20ish minute commute each way)

If it ended up being in the low hundreds, well, that's lower than a lot of people's car payments. Couples or roommates could share a car for non commuting purposes or trips.

You factor in intelligent ride sharing and you could halve the number of cars on the road most days.


Is it really the case that those charges are high because the drivers are getting paid so much, or because the vehicles and things like deadheading are expensive? Uber’s been driving driver compensation down for years but there’s only so much room for further reductions and it’s not like the hardware or support for self-driving systems is free.


yeah, I'm not familiar on the economics of it, and I'm not saying you should buy stock in autonomous vehicle companies. This was more of musing that in theory, if the economics of ride sharing are low enough, it could compete with people buying or leasing cars.


> If it ended up being in the low hundreds, well, that's lower than a lot of people's car payments. Couples or roommates could share a car for non commuting purposes or trips.

So the leap here is based on "Autonomous taxi companies will charge less per ride than rideshare"?

perhaps.


Once there is adequate competition, autonomous taxis should be much cheaper.


Except that our experience shows that over time competition decreases and things like regulatory capture happen, so it becomes harder for anyone small to enter the market, and then prices get hiked up.

And the cars and the autonomous driving software itself are becoming more expensive and more subscription-based over time, so those rents are going to have to be passed on to the consumer. Large autonomous taxi services may be able to strike better deals, or even build their own software/vehicles if they're big enough, but you're not going to be able to compete with them effectively by purchasing a Tesla (and presumably consumer prices will rise as there are fewer individually owned vehicles and companies go seeking only the highest margins, abandoning the Toyota Corolla market to the robotaxi corporations).


This can be said for taxis as well, though, right? What's the difference?


Uber tries that, but it turns out in many places you can't offer human-driven taxis much cheaper once you put them on equal footing regarding insurance and other relevant regulations and stop running the service at a loss.


No need to pay a human, and fewer total cars because they can operate nearly 24 hours a day


Most traffic occurs during the morning and evening commute, you'll need roughly the same number of vehicles for those surges unless those norms change as well.


Can't taxis already operate 24hrs a day? Just rotate out the drivers.


They can, but nobody wants a ride at those hours except drunks. Which is one reason taxis look so bad: when a significant portion of your clients are drunk (throwing up in the back, peeing on the seats, and all the other things they do) you don't want a nice car. Nice taxis don't work those shifts. If there is a big shared-car market (I doubt it) you will see different cars at different times of day, profiled for the potential drunk rider.


It's a lot harder to artificially constrain autonomous taxis with taxi medallions.


The problem is that (in the US) an overwhelming majority of car journeys (and traffic) occur during rush hour. And it's difficult to see how autonomous vehicles could reduce the amount of cars used during rush hour. Rush hour traffic involves a lot of vehicles moving to similar destinations, during the same time window. While some cars could certainly be used for multiple journeys during the same rush hour, most cars would likely sit in parking garages all day, just like today.


This. Uber is very expensive because a human has to get paid. If people could get a car "subscription" for X dollars a month and forgo the cost of gas, maintenance, insurance, and all the other headaches – while a company leverages economies of scale to handle all of this – I think people would move away from private cars.

This would also reduce the cost of DoorDash-type services, so instead of paying an extra $10 for your food/groceries/everything to be delivered, you'd pay orders of magnitude less.

This might reduce the traffic on the road.

The pessimist in me thinks that once they got sufficient market share, prices would go back up and we'd be worse off than before lol.


The only way to reduce traffic without changing your routine is by packing more people in less space, i.e. public transit.


I foresee more people will live in vehicles.

It's far cheaper to live in an autonomous motorhome that drives around all day and happens to arrive at work just as you need to be there each morning than to rent an apartment in San Francisco. Driving about is probably cheaper than paying for parking too, especially if you deliberately head for the busiest traffic.


At $10/hr and 14 hours per day moving, that's $50k/yr just in fuel costs. You'd also want to consider depreciation and maintenance.

Already, though, you've got a budget of >$4000/mo for a more comfortable studio apartment.
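
Checking that arithmetic:

    fuel_per_hr, hours_per_day = 10, 14
    annual = fuel_per_hr * hours_per_day * 365          # $51,100/yr
    print(f"${annual:,}/yr -> ${annual / 12:,.0f}/mo")  # ~$4,258/mo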


Especially when you can automate switching rented batteries instead of waiting to charge them. Drive 4 hours out in a random direction, then 4 hours back for work, with nary a minute of downtime


This is already happening if you pay attention to all the vans and even cars with blacked out windows parking in your neighborhood, maybe even in front of your house. Stealth campers are very real, even if they don’t stick out as much as meth RVs.

You could already commute with cheap taxis (eg in the developing world). The more important thing is that people want to live in San Francisco, they wouldn’t be happy commuting from some far off place in the first place. And as stealth campers have already figured out, not a lot of places available to camp in your car even 40-60 miles out, so might as well be where you want.

But the idea that your car could just involve itself in traffic jams all day rather than pay for parking is interesting, it could also look for time limited but free parking and move on to somewhere else when that expired, which is more common outside the city. Heck, it could park at a shopping mall that doesn’t allow walk offs…because no one is walking off.


The US has so much sunk cost in car-centric urban design that discussion of self-driving cars taking investment away from public transit is wasted words. It's not just the roads and the number of people absolutely committed to driving on them. It's urban design so sparse that we'll be locked into personal transit for hundreds of years. Compare Europe, where the Romans laid down street plans thousands of years ago and people will still be walking them in another thousand.

Might as well have the cars in the US drive themselves so we can all get a nap at least.


I would note that a lot of research shows coordinated autonomous vehicles using basic control theory can dramatically improve traffic flow with even a small percentage of vehicles coordinating (I think it was around 10%). They found this all but eliminates most human-caused traffic jams (i.e., most traffic except that caused by emergencies or accidents). In fact, if most vehicles are AVs it becomes more of a dynamic convoy model where all vehicles cooperate to maximize flow. This would require much smaller road infrastructure to achieve the same flow as today. Rather than contributing to the problem, autonomous vehicles could greatly reduce the footprint of road traffic while maintaining individual carriage.
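
A minimal sketch of the idea, reminiscent of the ring-road experiments where a few controlled cars damp phantom jams. Every parameter here is a made-up illustrative value: the human-driver rule is deliberately tuned string-unstable so stop-and-go waves emerge, the "AV" cars add a relative-velocity damping term, and whether 10% is enough depends entirely on the parameters chosen:

    import numpy as np

    rng = np.random.default_rng(0)

    def simulate(av_frac, n=30, ring=300.0, dt=0.1, steps=5000,
                 vmax=15.0, k=0.6, c_av=0.8):
        # Toy ring-road car-following model (no collision handling).
        is_av = rng.random(n) < av_frac
        x = np.sort(rng.random(n)) * ring      # uneven gaps = perturbation
        v = np.zeros(n)

        def v_opt(gap):                        # desired speed vs headway
            return 0.5 * vmax * (np.tanh(gap / 10.0 - 1.0) + np.tanh(1.0))

        for _ in range(steps):
            gap = (np.roll(x, -1) - x) % ring  # distance to the car ahead
            dv = np.roll(v, -1) - v            # leader speed minus own
            a = k * (v_opt(gap) - v)           # human following rule
            a = a + np.where(is_av, c_av * dv, 0.0)  # AV smoothing term
            v = np.clip(v + a * dt, 0.0, vmax)
            x = (x + v * dt) % ring
        return v.std()                         # lower = smoother flow

    print(f" 0% AVs: speed std {simulate(0.0):.2f} m/s")
    print(f"10% AVs: speed std {simulate(0.1):.2f} m/s")

This is only a cartoon (no lanes, no reaction delay), but it shows the mechanism: a handful of cars that damp relative velocity can absorb the waves the others create.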


> even more cars on the streets, .. , and traffic will get a lot worse

I strongly believe it will go the other way, i.e. the 'robo-taxi' vision. Once cars can pick us up, take us where we want and then disappear, very few people will want to own their own car. I honestly think the vast majority of people already don't want to own one, but we don't have a better option. Why would a sane person want to deal with the maintenance, insurance, repairs, depreciation, etc.

Cars will just show up, take us places then go away to get someone else. We won't need nearly as many of them, and we won't need to dedicate so much of our cities to them, and especially not to parking them. We will be able to reclaim our cities.

NOTE: Old School automakers who can't/won't/don't adapt are going to push back on this HARD. But I still think it will happen.

For the record, I'm a car guy. I love cars. I will likely always have one for the weekends. If I was going into a city or commuting, I would take the robo-taxi every time.


>Why would a sane person want to deal with the maintenance, insurance, repairs, depreciation, etc.

Because it will probably still be cheaper if they use it regularly (as is owning in most cases). Because they want a specifically equipped vehicle for young kids/outdoor activities/etc. Because when they want a car, they want one right now.

I'm also skeptical that, if you own a vehicle, it would make any sense to then also rent robo-taxis locally. Certainly I can reserve a private car for an evening event today but it would be 10x or more the cost of parking/gas.


Hard to know what pricing will be, but consider:

- Self-driving vehicle wait times may be reduced to less than a minute as they become more common

- Car ownership also requires the expenses of insurance, maintenance, fuel, registration, and storage space

- Self-driving vehicles may come in many shapes and sizes, suited to carrying anything from a single person to a family of 6


Likewise. I would really love it if robotaxis worked out and - crucially - were cheap, because I think it could feasibly increase transit usage, not decrease it. It solves the last mile problem in an elegant way. Nobody said you needed to take the taxi all the way to your destination. You could hop on a regional train or light rail, have a robotaxi near-perfectly timed (if we assume the train runs on time...) to pick you up at your destination stop, and ride it to your final destination. Same in reverse. No waiting for a bus to transfer to, no riding the bus slowly stop by stop, no walking from the bus stop to your final destination, etc.

I'm as much of a transit advocate as the next guy, but I think a lot of people blind themselves to how annoying the last mile problem is for a lot of destinations anywhere outside of urban cores. There aren't going to be train stations built at every possible origin point and destination point, and even if there's a robust bus network, transfers, slow speed/frequent stopping, and the walk to/from your destination/origin are pretty damn annoying. They're not the end of the world by any means, to be clear (I use buses too!), but if I have a car, why wouldn't I just drive?

Taxis have the potential to solve that in a great way. But I (...and probably most people?) don't currently use them for that purpose since they're way too expensive. As they should be, it's a whole human being tending to your transport personally for twenty minutes or more. If robotaxis can lower the price, it'd be great, but I don't know how confident I am on that happening. The equipment is presumably expensive, the car itself is expensive (though EVs do have much lower maintenance costs), the R&D is expensive. We'll see. Exciting times!


We won't need as many of them, but the ones we use will be almost constantly on the road: 20 cars on the road all the time is more traffic than 100 cars that are on the road 10% of the time and in people's garages the rest of the time


Why would 20 cars be constantly on the road? The cars would sit in parking lots, ready for a call. The actual road traffic will not change: 1000 people going to work is 1000 people going to work, irrespective of whether they own the cars.

The biggest change would be the lack of a need for parking. This will allow us to build more densely.


You'd have more cars constantly on the road because the smaller pool of cars has to also travel to where the people are. If I go from A to B and later from B to A, a robotaxi would also need to get to A and B when I need them. It really won't matter if they're in use or drove off to a parking lot somewhere else, that is extra traffic.


The amount of traffic will increase because people who couldn’t drive before and would ride with other people or use public transit (eg. children and the elderly) can now hail cheap autonomous cars, inefficiently using an entire car for their trip.

Induced demand economics. Make something cheaper and people use more of it.

If Autonomous cars make ride hailing cheaper we’ll see an explosion in its use, and traffic will increase as cars are space inefficient and road size fixed.


Wait, are there 1000 people going to work, and then the cars parking outside of work? Or are those cars leaving where the people work to go elsewhere? The latter creates more cars on the road. Sure, those 1000 cars can go somewhere to park – but then we can't build much denser, as we still need parking for the cars. Maybe we can move the cars out a bit for more density where people work, but then we need roads to get those cars back out.


Traffic and travel time is a much smaller concern when you can be watching Netflix instead of making sure you don't hit, or get hit by anyone.


Traffic and travel time are still a concern: Will I get to work in time for my shift? When I get home how much time will I have for dinner before [whatever you have planned that night]. When do I need to get into this car to make it to [whatever event]

If you are a single person working a flexible schedule (no mandatory meetings), with no other activities planned, traffic and travel time are not a big deal. However, if you have any life at all you will care about traffic and travel time, because you have places to be. Watching Netflix is not your goal; it is how you kill time you would prefer to spend doing something else.


Everything will be the exact same if not worse.

People will continue to all commute at similar times, resulting in the same traffic as before. That it is automated makes no difference.

And supposing a car can go off and do some taxi work afterward instead of being parked: it still has to travel from here to there, which adds even more cars constantly onto the roads, driving to where they're needed, whereas before they'd have been parked.

The likely outcome is permanent rush hour as cars are constantly going back and forth on the highway.


I can see a possible future where there are fewer cars because people don't feel the need to own one, and are just fine with calling an autonomous cab.


The most relevant stat, though, is the number of miles of car usage. If there are ~20% the number of cars, but each car is used 10x as much, we're worse off. If nobody owns a car but always calls a cab, the cab might do twice as many miles deadheading to pickups. And instead of lasting 15 years, a typical car might only last 2 years because it's getting 10x the mileage per day. So fewer cars might result in more gridlock, noise and tire particulate pollution. Fewer cars might mean just as many cars built per year.
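
The rough math on that trade, taking the ratios above as given:

    # Normalize today's fleet and per-car mileage to 1.0.
    robo_fleet = 0.2                       # ~20% as many cars...
    robo_usage = 10                        # ...each used 10x as much
    total_miles = robo_fleet * robo_usage  # 2.0x today's car-miles

    life_today, life_robo = 15, 2          # years until worn out
    built_ratio = (robo_fleet / life_robo) / (1 / life_today)
    print(f"car-miles: {total_miles:.1f}x, "
          f"cars built per year: {built_ratio:.1f}x")  # 2.0x and 1.5x

Under these assumptions the smaller fleet actually requires more cars built per year, not fewer, because each car wears out so much faster.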


I agree that the autonomous cars are likely to cause a shift away from car ownership, reducing the total number of cars (which reduces the impact of making all these cars). It might also drastically cut down the required size of parking lots, which especially in America might be a big improvement.

But if you own the car, it's just waiting wherever you left it. If you have an autonomous cab, it has to make an extra trip from wherever it dropped off the last rider to wherever it's picking you up. That alone increases the number of cars on the road. And that's before you consider the cab potentially driving a holding pattern when nobody is actively using or calling it.

But most of all roads are governed by induced demand. People would take a lot more and longer trips if there was the option to just teleport to the destination. The main downwards pressure on the number of trips is the time investment. That's why adding more lanes to roads often doesn't reduce traffic (outside of a short adjustment period): faster trips means more people willing to take it, which fills up that lane. But a trip people weren't willing to do for 40 minutes behind the wheel they might take if it's instead 60 minutes watching Netflix in a driverless car. Which makes the roads fuller and thus slower for everyone.


Where I live, I can get a cheap taxi, any time, to anywhere I might want to go, through an app. The only difference between that and Waymo seems to be that it is controlled by a meat sack rather than a computer. I don't see autonomous cars as all that different to what I have now.


Where I live is similar, it’s just the prices are prohibitive. Round trips just to locations within five or 10 miles of my house cost upwards of $50-$60. I would end up paying 2-3k a month for Uber/Lyft.

Owning a car is simply more economical. Now if I could buy into fractional ownership of a fleet of vehicles, that may make financial sense for me.


That's fewer cars in garages, not fewer on the roads.


This means there will be more cars on the road and more traffic. Whether or not people individually own the cars is irrelevant.

If autonomous cars drive down the costs of taking a taxi it’ll mean more people will do that versus public transit.

Anything that reduces public transit use or increases individual car use will be disastrous for traffic and transportation in our cities.


But just because fewer people own one personally, will that necessarily mean fewer cars on the road? Might still be an increase in cars, but with a different ownership model. It's tough to make predictions, especially about the future.


There are two questions that may have different answers:

1. Will it mean fewer total cars? Probably. If I have to drive to work and then back home, and you do too, and we each own cars to do it, that's at least two cars. If I can take a Waymo to work, and then it can take you to work, that's only one car.

2. Will it mean fewer cars on the road? (Or, perhaps, let's say fewer car-miles driven.) Plausibly not. If I drive from home (A) to work (B), and you drive from home (C) to work (D), then if we own cars, we drive A-B and C-D. If we use Waymo, it may drive A-B-C-D, which is longer by the B-C leg. That takes up space on the road.

So we may have fewer total cars, but more car-miles driven, and therefore more traffic and congestion.
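
A toy example of point 2 on a number line (all positions made up):

    home_a, work_b = 0, 10    # my commute: A -> B
    home_c, work_d = 13, 25   # your commute: C -> D

    owned  = (work_b - home_a) + (work_d - home_c)                    # A-B, C-D
    shared = (work_b - home_a) + (home_c - work_b) + (work_d - home_c)
    print(f"two owned cars: {owned} mi")    # 22 mi
    print(f"one shared car: {shared} mi")   # 25 mi (extra B-C deadhead leg)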


> the second and third order effects...

You're being overly pessimistic. I can see the opposite occuring on each of your points.

- less traffic due to more efficient driving: once automated driving is pervasive it's natural that cars and traffic as a whole will coordinate and optimise use of the road. You should be able to predict traffic accurately and choose the optimal time to travel. Car speeds will coordinate to maximise flow through roads. Improved public transport will increase the number of passengers per vehicle and reduce personal vehicles.

- more investment into better modes of transport due to lowered costs: the cost structure of buses (and trains) lends itself to larger vehicles with fewer stops. Without having to pay someone to drive, you can remake public transport into something that takes fewer people at a time to more places, without requiring expensive infrastructure. Think small automated buses that serve a web of points instead of routes, so people can request to get from A to B and the system delivers from as close to A to as close to B as possible, as soon as possible, at the lowest cost.

- less car ownership: most people don't want to own cars, so it's very likely that car ownership will drop significantly. With new privately and publicly owned forms of public transport, the need to own a car will disappear in many cases.

I feel that almost all technology is positive (not sure about social media), since it generally gives people more choices and abilities. Automated cars have very few downsides.


More benefits:

- Increase cycling and walking because the roads are much safer

- Less noise from cars revving their engines, or being poorly maintained (holes in mufflers, underinflated tires, etc.)

- No carjacking


> and traffic will get a lot worse once people are ok with sitting in bad traffic and watching Netflix

This one could go either way I think, traffic might actually improve once autonomous driving is the standard.

I also kinda-sorta hope that if autonomous driving takes over, that cars end up gaining the ability to switch onto and off of rails, I think this would be the ideal end-state... people still maintain the ability to move independently of each other but we have the improved safety of transport on rails.


I can't find the report, but IIRC there was a study that calculated that autonomous driving could triple the carrying capacity of highways because they could safely reduce following distance. They also estimated fuel/energy savings due to the vehicles being able to collectively draft off each other.
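
The capacity claim is simple headway arithmetic: each car occupies its own length plus the gap it keeps, which grows with speed and following time. A sketch with illustrative numbers (not from the study the parent recalls):

    def lane_capacity(speed_ms, headway_s, car_len_m=5.0):
        # Vehicles per hour per lane.
        spacing = speed_ms * headway_s + car_len_m
        return 3600 * speed_ms / spacing

    human = lane_capacity(30.0, 2.0)   # ~2 s human following time
    av    = lane_capacity(30.0, 0.5)   # hypothetical tight AV platooning
    print(f"human {human:.0f} veh/h, AV {av:.0f} veh/h, "
          f"{av / human:.1f}x")        # ~1662 vs ~5400, about 3.2x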


Yeah, right, ever seen "following distance" at rush hour in a US city? I've got a higher chance of seeing a unicorn.

In reality, self-driving cars would help by increasing following distance and leaving a genuine gap so people don't crash every goddamn day on the same arterial roads.


You don't even need rails for traffic to improve. Just think of what happens when a traffic light goes green: human drivers slowly, one by one, cross the intersection. Whereas a platoon of self-driving cars can, in principle, just accelerate (or brake) simultaneously. On highways this also improves drag/energy efficiency, and has already been tested in Europe as part of the EU Truck Platooning Challenge.


My only thought was rails truly require little intelligence for autonomy, and require less maintenance (or at least less involved maintenance), but just doing some armchair engineering...


I'm not sure why you think rails are safer than rubber tires. Metal-on-metal contact has way less friction, which makes for much worse stopping distance and worse safety especially at crossings. Having grade-separated dedicated infrastructure _does_ improve safety, especially if there's no human drivers involved, but we can do that just fine with pavement in tunnels or elevated roadways.


Rails are safer because vehicles are more predictable - they are always on the track, with only limited places where they can switch tracks, and you can control that externally, ensuring there are no conflicts. Rails can handle more people because, in the form of a train, they can pack in a lot more people. Cars use a lot of space for engine and luggage compartments.


Very good points... intuitively, rail seems safer (and simpler to automate?), but your points have changed my mind... Now I wonder though, how do modern roller-coasters stop so suddenly? Magnetic brakes?


Electromagnetic brakes are used, but the biggest difference is that the weight of passengers is a much bigger portion of total weight (you don't have a locomotive, and the cars are not nearly as heavy per passenger), and decelerating quickly is part of the appeal. If passengers on trains were unencumbered with stuff and strapped in as on roller coasters, trains could be built to do it too.


The thing is, the average car lifetime in the US is about 12 years, so even if you assume autonomous driving "everywhere" is available in 10-20 years, you probably don't have a vast-majority-autonomous fleet for maybe 50 years. It certainly would be politically infeasible for the government to tell people they have to buy new cars.


I agree that the existing car stock will take a decade or two to age out, but I can't see any reason it takes 50 years to get to a majority autonomous fleet. Maintenance costs for cars in autonomous fleet will be significantly lower due to standardization and economies of scale, so non-autonomous cars will look relatively expensive in comparison (beyond being less convenient), causing them to be scrapped sooner than you'd otherwise expect.


I agree; my original post did not imply this transition would happen quickly. I think transitioning to automated vehicles will rely more on society changing, so at least one generation from the youngest today... And as far as requiring people to get automated cars: if they're safe enough, it will be like outlawing drunk driving, i.e. you're infringing on others' right to live if you don't use an autonomous vehicle. The corollary is that right now there should be laws strictly controlling the use of autonomous vehicles.


OK. I read your comment as saying that it would happen slowly, not quickly, which is what I was disagreeing with. Personally I think the economic advantages of routinely using a self-driving taxi service will drive most people to simply not buy a new car when their old one wears out, and that this will happen many years before human-driven cars are outlawed (or, more likely for a while, regulated/taxed very heavily without being completely illegal).


Like crappy little trains with no capacity?


Magic little trains that can take you from and to anywhere.


Another big negative I think is underconsidered is that a Google owned self driving car fleet will be absolutely plastered with video ads and physical user tracking if they dominate the market enough to get away with it.

Imagine those unmutable video ads that are increasingly common at gas stations, but running constantly inside the car.


They already have those video ads in a lot of taxis. And of course public transit has been plastered with paper ads for decades now. So what you're describing isn't much of a change from the status quo. Unless you believe the advent of self-driving cars will lead to people being okay with ads in the vehicles they own or lease themselves, which I think is highly unlikely.


As an aside, usually those gas station ads are mutable. There are unlabeled buttons on the sides of the monitor - press them. One of them mutes it. I have yet to find a gas station at which this won’t work.


It feels like it's about 50/50 on whether or not that button is broken when I pull up to one of those pumps.


You mean like the (sometimes unmutable) video ads I've seen in practically every traditional taxi for years now?

That ship sailed a while ago and didn't need Google to push it.


Gmail is not like this, and Gmail is free. Waymo will not want to degrade a paid experience with intrusive ads any more than Uber or Lyft do.


Gmail isn’t like this because they read all of your emails


Google stopped using information from emails as part of ad selection years ago, the ads shown in Gmail are based entirely on the other ad personalization data they have from other Google properties. They obviously still parse email contents for spam blocking and such, but that seems like a necessary part of running any webmail service, and not a profit center for them.

I guess it's always possible that they're lying about it, but given the depth of the regulatory and public relations fiasco that would cause I'd be very surprised.



"adblock detected! It is a violation of our terms of service to wear noise cancelling earphones, pulling over."


Windows has ads (and you pay for it); Chrome OS and Android have never had ads, so it doesn’t follow that Waymo will have ads just because it's Google.


Chrome OS and Android are surveillance/data capture platforms designed to funnel data to an advertising giant, including being caught secretly sending location data against users wishes more than once.


Such a bad take

> even more cars on the streets

You don’t know that. I could make a prediction that it would lead to fewer cars on the street. Fewer parked cars especially.

> less investment into better modes of transport

I assume you mean subways, buses, and trams here. But I don’t think it’s fair to call them “better”. They’re hugely expensive, can be disruptive in many ways, and are much less accessible.

> traffic will get a lot worse once people are ok with sitting in bad traffic

You also don’t know that traffic will get worse. Traffic could potentially get much better with better drivers. But also, if people are ok with it, then who cares?


> I assume you mean subways, buses, and trams here. But I don’t think it’s fair to call them “better”. They’re hugely expensive, can be disruptive in many ways, and are much less accessible.

Now this is a bad take. Public transit is _always_ better than individual vehicles when we are talking about a metropolitan area. The amount of resources, land, and pedestrian freedom that is eaten up for roads is insane. Imagine how many people can fit in a subway, and then expand that to each of them individually being in a car on the road.

Public transit is expensive, but so are new highways, highway maintenance, road accidents, speed enforcement, etc. The worst thing is that many times people who don't own cars pay for those services they won't use. All the while public transit is getting its funding cut.

I think the original comment is a little off in that more autonomous vehicles do not directly lead to less public transit. But it is a concern that these profit/investor-driven companies will be competing with public transportation, and this has a lot of implications.


I will definitely agree that a transportation system based on public transit is much better than what we have now. The advantage you get with a 100% AV-based system is coverage that you'll never get with public transit. NYC, which has a great system, still has lots of parts of the city you can't really get to without calling a car or walking a long way. The point-to-point routing should not be discounted either. Getting in a car and going directly to your destination, rather than trying to make a bunch of connections (and dealing with kids or purchased items or a wheelchair), makes a big difference for a lot of people.


I think it's absolutely fair to call public transport "better" for society.

Every single time scientists and city planners are called to answer "how we make the city more livable and reduce traffic" the answer is always better public transport (more trains especially).

The only part that resonates with me is that we don't know whether SDCs could lead to fewer cars. That's true if people use self-driving taxis more than personal cars.


I don't know why you think Cruise isn't on track. Their numbers are also good, although probably not as good as Waymo, but they are also much younger than Waymo. Cruise is being punished by the state of California right now because they tried to cover up their vehicle's worsening of a particular human-caused accident, not because of some problem with their overall numbers.

EDIT: If you disagree, please link to the quantitative data that suggests Cruise isn't on track.


For one, Cruise has essentially disintegrated over the past few months: all key execs left (https://fortune.com/2023/12/13/general-motors-cruise-executi...), massive layoffs, founders gone.

Cruise is done.


Happy to bet on this (and indeed I have, by buying GM stock). Cruise is definitely dealing with a huge, self-inflicted PR disaster here. But all signs I have seen are that their tech is very advanced (although, as previously noted, probably a bit behind Waymo). Cruise and Waymo are head and shoulders above their competitors, and there won't be only one winner (if for no other reason than the threat of anti-trust), so Cruise is likely to succeed.

Again, if you have data that shows Cruise is behind Waymo by a lot, or is behind any other company, please link it.


Could be a good bet, very asymmetric. The question to me is whether the execs are leaving because they know Cruise doesn't have it technically and the jig is up, or if it's really more temporary. Hard to know from the outside. It's also hard to translate Cruise's much worse human-intervention numbers (vs Waymo) into a quantitative measure of 'behindness' in terms of how difficult it is to catch up.

That's why it could be a good bet. Or not.


The event precipitating executives leaving related to the single accident and the deceptive behavior by Cruise surrounding it. To my knowledge, the data shows the tech is good (at least as safe as human drivers) and rapidly improving. But I agree it's hard to know from the outside, and that the sensibleness of the bet definitely depends on the fact that the potential upside is so massive.


Another aspect to Cruise (and potentially Waymo, though it hasn't been publicly stated) is that they claim thousands of miles per disengagement...when on average their cars needed remote assistance every 4-5 miles[0]. Waymo does the same thing, but their numbers just aren't publicly known.

IMO stuff like this is going to lead the public to trust it less, since they're gaming numbers as hard as possible.

[0]: https://www.cnbc.com/2023/11/06/cruise-confirms-robotaxis-re...


You're comparing apples to oranges. A disengagement means there is a safety driver present who takes over to prevent a dangerous situation. The car requesting remote assistance, which occurs when there is no safety driver, is an inconvenience and expense but does not mean there is a dangerous situation. (Of the 22 Cruise rides I took, it happened 3 times, and at no point was there danger.) It just means the car is confused. Conflating these things and accusing Cruise of deception is itself dishonest (even though Cruise has actually been dishonest on many occasions!).

The whole game plan is to have a bank of human operators who provide remote assistance, at initially high rates that are then driven lower over time as the edge cases are ironed out iteratively. The fact that Cruise is using only one human remote assistant to manage ~15 rides, as mentioned in the article you link, tells us that the rate of remote assistance is already so low that it will be a very modest expense. For more, see the comment from Cruise's CEO: https://news.ycombinator.com/item?id=38145997


> A disengagement means there is a safety driver present who takes over to prevent a dangerous situation.

California, at least, cites a disengagement as "whether because of technology failure or situations requiring the test driver/operator to take manual control of the vehicle to operate safely."[0]

Would a car being confused and not being able to proceed without input be a disengagement by that definition? I think so, based off of "technology failure", but it's not reported as that.

> It just means the car is confused. Conflating these things and accusing Cruise of deception is itself being dishonest.

When a car is confused, what happens? It stops. That is a safety issue by itself, as it can lead to emergency services not being able to respond properly and someone dying[1].

The fact people are trying to downplay this as "nothing" is shocking, imo. What happens when a fleet of vehicles gets confused? They all stall, and it results in gridlock and frustration.[2]

[0]: https://www.dmv.ca.gov/portal/vehicle-industry-services/auto...

[1]: https://sfstandard.com/2023/09/01/person-dies-cruise-robotax...

[2]: https://www.forbes.com/sites/bradtempleton/2022/07/08/cruise...


> California, at least, cites a disengagement as "whether because of technology failure or situations requiring the test driver/operator to take manual control of the vehicle to operate safely."

At your Ref. [0], I just opened up the CSV titled "2022 Autonomous Vehicle Disengagement Reports (CSV)" under the header "2022 Disengagement reports". Under the column "Driver present (yes or no)", every single entry said "yes".

> When a car is confused what happens? It stops. That is a safety issue by itself

No, the car pulls over, just as it and every other taxi does when picking people up or dropping them off. It does not just stop in the middle of an intersection. I had 3 of these events in 22 trips, which means the number of times the car pulled over was overwhelmingly dominated by normal pick-up and drop-off, not confusion.

> as it can lead to emergency services not being able to properly respond and killing someone[1]

This article is deceptive, and you're either being deceived or are furthering it. An ambulance being delayed for seconds or minutes by human-driven cars in the road happens all the time. It is a constant occurrence. "90 seconds elapsed between the patient being put on the stretcher and the ambulance leaving the scene" means that at worst the ambulance was delayed by 60 seconds because stretchers don't teleport instantly into ambulances. The article does not causally attribute the death to the delay because that is extremely unlikely. It's not how emergency medicine works. This is just a classic case of fear mongering. ("Ambulance has to take detour around construction. Patient died. Therefore, construction caused death." No.)

This of course doesn't mean that delaying ambulances unnecessarily by even a second should go without punishment/fine. It's avoidable and should be fixed. But it's wrong to think this doesn't happen with humans, and it's slander to suggest the delay probably caused a death in this instance.

> The fact people are trying to downplay this as "nothing" is shocking imo. What happens when a fleet of vehicles get confused, they all stall and it results in gridlock and frustration.[2]

Be more quantitative.


> At your Ref. [0], I just opened up the CSV titled "2022 Autonomous Vehicle Disengagement Reports (CSV)" under the header "2022 Disengagement reports". Under the column "Driver present (yes or no)", every single entry said "yes".

Uh, that's kinda exactly my point? Any time a vehicle stalls, even when it has to be recovered by a person physically, somehow isn't a "disengagement."

> No, the car pulls over, just as it and every other taxi does when picking people up or dropping them off. It does not just stop in the middle of an intersection. I had 3 of these events in 22 trips, which means the number of times the car pulled over was overwhelmingly dominated by normal pick-up and drop-off, not confusion.

Easily refuted. [0], [1], [2], [3] should I continue?

> This article is deceptive, and you're either being deceived or are furthering it. An ambulance being delayed for seconds or minutes by human-driven cars in the road happens all the time. It is a constant occurrence.

"It already happens, so who cares if people die." Not even going to bother with the rest since it's clear you're pushing an angle from these two points alone.

[0]: https://www.youtube.com/watch?v=wWZGZWuUx-Y&t=59s

[1]: https://www.youtube.com/watch?v=h1k8raq83T4

[2]: https://www.cnn.com/2023/08/14/business/driverless-cars-san-...

[3]: https://www.sfchronicle.com/opinion/article/san-francisco-po...


Your first and your third point are not responding to what I actually said.

Your second point is replying to a general quantitative statement with anecdotes. Of course there will be unusual situations. This does not support your original wrong claim that Cruise was being disingenuous.


> Your first and your third point are not responding to what I actually said.

TBH, I didn't even fully read your third point because it's clear you're pushing a certain perspective. I've provided evidence for all my claims, yet a single link hasn't shown up in yours.

10+ vehicles stall and require people to go out and retrieve the vehicles[0]...

Nope, not a disengagement. Clearly the cars didn't have "a technology failure" they just needed to be towed back...for reasons. I guess they all ran out of gas, right?

> Your second point is replying to a general quantitative statement with anecdotes. Of course there will be unusual situations. This does not support your original wrong claim that Cruise was being disingenuous.

He says, while doing the exact same thing.

The vast majority of recorded cases involve the cars "stalling out" in the middle of intersections, roads, or driveways. I have literally never seen evidence of a vehicle "pulling over" when confused.

Please present evidence for your anecdotes.

[0]: https://www.ktvu.com/news/driverless-cruise-cars-cause-traff...


I can believe it. I rode in a Waymo for the first time a couple days ago and it was incredible. No problems with the rain or bad San Francisco drivers. It was a really smooth ride and I felt extremely safe.


I was under the impression the LIDAR approach was compromised by rain? Did something change or did I not understand it right?


That's just some good old Musk/Tesla propaganda. Waymo has developed a really high-resolution lidar, which combined with some software magic means rain is no longer an issue for them.


Thanks. Any recommended reading links on this?


Their 5th-gen lidar point clouds compared to the previous gen: https://www.youtube.com/watch?v=COgEQuqTAug&t=11601s

They have a ton of literature at https://waymo.com/research/ and tech talks on YouTube (search talks by Drago Anguelov). They make heavy use of simulators [1] where they simulate weather events and create their own weather maps [2]. It's a very sophisticated stack.

[1] https://waymo.com/blog/2021/06/SimulationCity.html

[2] https://waymo.com/blog/2022/11/using-cutting-edge-weather-re...


The kind of comment I expect from hacker news, thanks!

It's impressive how the lidar resolution evolved, as per the YouTube video. The added color – I wonder if it's post-processing.


Thanks!


The current Waymo driver uses cameras, RADAR and LIDAR, which are meant to complement each other's capabilities.

https://wondery.com/shows/how-i-built-this/episode/10386-the...


I thought the opposite. I thought this was one of the main reasons in favor of LIDAR vs regular cameras.


There are many videos showing Waymo trips during rain or even (heavy) fog.

A few examples: https://www.youtube.com/watch?v=S4aBNYcBoLI ; https://www.youtube.com/watch?v=B8TGFA6SfAo


Probably using remote human operators to make numbers look better.


Like others said, Waymo One in San Francisco is great. Smooth, confident driving. Good situational awareness (several times when it made an unexpected move, only later did I realize there was a person or a car it was trying to avoid).

Looking forward to it expanding coverage to SFO; that will be a game-changer.

Still not sure of its economics, though. Its current price is on par with Uber Comfort / a little over Uber X. How can that support the R&D or future capital-heavy expansion?


I don't think the price they charge for Waymo is related almost at all to their operating cost. Operating cost is undoubtedly much higher. I suspect Waymo has set fleet size based on how many cars they want operating for gathering the best amount of data and testing improvements, and then prices are set by demand (i.e., price that keeps the cars busy while minimizing wait time).


> Operating cost is undoubtedly much higher.

There you go - I would have said their operating cost is much lower. Paying the wages of the drivers for a year costs more than the car - even a car plus all those fancy LIDARs.

Their development costs are a different story. I suspect only a company like Google could sustain them. But presumably it's a one-off, and if they spread it over 1 million taxis in the USA it would only be a fraction of the revenue.

Those development costs have an upside too. It's a moat. If they pull it off they will have a monopoly. They will get away with charging just under the cost of a real driver for years. We may well be bitching here in a decade's time about the obscene profits Alphabet is making off us, with no obvious way out.


> I would have said their operating cost is much lower. Paying the wages of the drivers for a year costs more than the car - even a car plus all those fancy LIDARs.

No, Cruise shared a few months ago that operating costs per mile is still higher than a car with a paid driver.

https://gmauthority.com/blog/2023/07/gms-cruise-operating-co...

If operating costs were below a human driver's, they would be scaling as fast as possible to recoup the development costs for the current level of tech, which have already been expended.


> No, Cruise shared a few months ago that its operating cost per mile is still higher than a car with a paid driver.

The page you link to says a) it (the entire operation?) is bleeding money, b) its operating costs are falling, and c) quote: "efficiency improvements helping reduce the costs of simulation and machine learning used to test and improve AV performance."

I can't see where it explicitly says operating costs alone exceed what they call the magic threshold of $1/mile (which I presume is what a car plus driver costs). And given they mention (c) above, it seems unlikely they meant just operating costs.

That aside, they go on to say, quote: "Operating costs per mile for Cruise autonomous vehicles or AVs has fallen by an average 15 percent monthly over the first half of 2023." That's an extraordinary rate for costs to fall. If operating costs really did exceed those of a human driver, it wouldn't have remained that way for long, had they continued.
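
For a sense of scale, here's a quick back-of-the-envelope compounding check (my own arithmetic, not from the article; it assumes the quoted 15% decline applied uniformly each month):

    # Compounding a 15% monthly decline over the first half of 2023 (6 months).
    # Illustrative only; assumes the quoted 15% average applied every month.
    cost = 1.0  # normalize January's cost per mile to 1.0
    for month in range(6):
        cost *= 1 - 0.15
    print(f"relative cost after 6 months: {cost:.2f}")  # ~0.38, i.e. a ~62% drop

At that pace, even a cost structure several times a human driver's would cross below it within a year (0.85^12 is about 0.14) - which is exactly why a sustained 15% monthly rate seems unlikely to persist for long.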


Also worth noting that Uber rides are somewhat subsidized by drivers' limited understanding of fuel, insurance, and maintenance costs.


The price on these rides is simply a way to control demand and better approximate real world use cases, not to subsidize operating costs. If Google can make this technology commercially viable there are unlimited avenues for monetization.


> How that can support the R&D or future capital-heavy expansion?

I guess if it's showing enough promise to be profitable on its own (without R&D and expansion costs), Google can probably spare a few more billion.


People keep getting more expensive, the tech keeps getting cheaper, the economics will eventually work out.


I'm extremely skeptical of the economics of scaling Waymo up to a viable, profitable service - at least at a large scale. But the R&D that's gone into it will require a large-scale rollout to pay off.


Removing the driver frees up a lot of money to be used towards all of that.


Does riding in a Waymo One vehicle require a Google account?


> Waymo currently operates commercial robotaxi services in Phoenix, Arizona and San Francisco,

Basically straight lines and 365 days of sun; now send them to small European towns, mountain roads, etc.

https://www.youtube.com/watch?v=RIyEg35Stbo

https://www.youtube.com/watch?v=P7wphiL3vbo

https://www.youtube.com/watch?v=O1ZaoRu7okU


Arizona maybe, but I wouldn't say San Francisco is straight lines. I've seen a few of these videos pop up on YouTube. This one I watched recently is full of construction, double-parked trucks, pedestrians, complicated traffic, etc.

https://youtu.be/5wXO05s-pLc?si=5W-SW5zGXIwgnQpG

It makes a mistake in that one, around 5 minutes in, due to not understanding a construction worker's gesture, but I presume it phones home for advice and someone gets it moving again. Everything else seemed to be handled rather impressively.

Disclaimer: Google employee. My job has nothing to do with cars. But I do love technology and hate driving, so I'd love to see this problem solved. I'm actually quite skeptical that I'll ever have a truly self-driving car, as I also live in a place with weather.


It may not get snowstorms but I would say significant areas of San Francisco are absolutely not particularly easy to drive in.

Understanding someone directing traffic is probably one of the harder problems.

To another point, to the degree that it's predictable ahead of time, I'd be perfectly happy with a car that could only self-drive in some conditions. Highway only, in some subset of weather conditions? That would still be super useful for some of us.


I don't understand the point of comments like this, and I see them all the time when it comes to autonomous vehicles.

First, as all the other respondents have pointed out, your characterization of San Francisco as "straight lines and 365 days of sun" is way, way off. But more importantly, as a consumer, I'd be thrilled to own an autonomous vehicle even if it didn't work in bad weather. There's easily enough data to have a car say "there is a storm coming in your area, can't drive autonomously" long before it would become a safety issue.

And, of course, wouldn't one expect an autonomous vehicle to start in places with better conditions vs yolo'ing it in a blizzard whiteout?


It is not just bad weather. Here in the Midwest it's common for the road lines to be worn and barely visible. There are many more potholes and other problems with the pavement.

My 2023 car with cameras and radar usually cannot activate lane centering - even in good weather - because it cannot tell where the lanes are.

Having driven both in San Francisco and throughout the Midwest, I agree with the parent commenter that San Francisco is a much easier environment for self-driving cars.


I don't really disagree, but again, I don't understand the point of complaining, "Pfft, Waymo can only operate in good conditions." I mean, it wasn't that long ago that Google's autonomous vehicle tech was first announced and everyone thought it looked like magic.

More and more I believe in Louis CK's bit about "everything is amazing and nobody's happy": https://www.youtube.com/watch?v=PdFB7q89_3U


Because there are two things going on. There's the autonomous driving technology, which is goddamn amazing and a huge technical achievement even if it only works in ideal conditions.

Then there's the shysters trying to sell that technology and convince governments that "for safety" people should be required to buy their product. Poking holes in the technology is people's natural response to the very real fear that their ability to operate a vehicle that isn't literally controlled by a huge corporation will be taken away.

The technology is amazing; humans suck.

You see the same thing with EVs: rather than it being a purely positive thing that the market has more options for people with different driving patterns, you have governments committing to banning the sale of new ICE cars while the deployment is literally in its infancy.


> Then there's the shysters trying to sell that technology and convince governments that "for safety" people should be required to buy their product.

That feels like a massive straw man. I'm not aware of anyone at this point, or with any plans, to require the use of autonomous vehicles.

> You see the same thing with EVs: rather than it being a purely positive thing that the market has more options for people with different driving patterns, you have governments committing to banning the sale of new ICE cars while the deployment is literally in its infancy

That also doesn't make sense to me. Governments aren't pushing EVs because the tech is some nirvana or something; they're pushing them because continued use of ICE vehicles is a large contributor to climate change, and we have very limited time to address that.


I assume this is referring to how Europe requires new vehicles be sold with automatic emergency braking.

If we do reach a point where self-driving systems consistently outperform humans in terms of safety, I would absolutely expect a push to at least require it in new vehicles, if not some form of new law to push people toward using the technology. Personally, I find that idea attractive, but I understand why people would fear it.


You have to recognize these as the right-wing dogwhistles that they are. The idea that autonomous driving and vehicle electrification are control strategies by an oppressive deep state are constantly pushed on far-right social media.


> autonomous driving and vehicle electrification are control strategies by an oppressive deep state

Wait.. what? That’s definitely a new one to me…



Why does it matter who pushes that idea if there is some truth to it?


Welp, I'm as left as they come in these parts. Literally a card-carrying socialist, so I'm not consciously dog-whistling anything.

Tech is already inundated with corporations using technology to maintain an iron grip on their products against their users' interests, and I'm just sad to see that autonomous driving seems to be the excuse to extend this grip further, with always-on connectivity that you can't turn off "for safety" and that will also conveniently be used for even more tracking. I don't think it will be used for population control or anything like that, but I expect it will expand the already-existing police remote shutdown.

Autonomous cars that are truly autonomous would be amazing. But what I fear we'll get is DrivingAsAService and regulatory capture.


> I'm as left as they come

I am pleased to introduce you to: https://en.wikipedia.org/wiki/Horseshoe_theory


I thought about that when I wrote it, but I ended up deciding it didn't apply, because I'm nowhere near right-wing social media and had no idea this was even a conspiracy theory.

I was just thinking that the last thing in the universe I want is for a company to exercise Apple-like control over my car. And despite it making no logical sense, companies seem to be using EVs and self-driving as an excuse to create artificial lock-in and add even more data collection. New and "different" seems to mean you're allowed to break consumer expectations for the worse.


It still looks like magic when I see cars without drivers here in PHX. We'll have them on the freeway soon enough - potentially a game-changer for our city.


> potentially a game changer for our city

My bet is you'll be amazed by how little changes. Sure you won't spend any time at gas stations/car dealerships and you'll be able to watch YouTube on your commute, but that's how millions of New Yorkers live and, well, it's fine. That is to say, we could have just built subways/mass transit and gotten all of the benefits.

There is a major downside though, and it's that we'll be doubling down on cars. That means all the mining we have to do for all the components, all the road/vehicle maintenance we have to do, all the waste we have to manage, all the pollution from tires, all the space taken up by roads, garages, parking spaces, all the batteries we have to build/maintain/recycle, all the sprawl and isolation we suffer will increase.


> My bet is you'll be amazed by how little changes. Sure you won't spend any time at gas stations/car dealerships and you'll be able to watch YouTube on your commute, but that's how millions of New Yorkers live and, well, it's fine. That is to say, we could have just built subways/mass transit and gotten all of the benefits.

I'm a big fan of public transportation, and I think it's sad that so many cities have underinvested. But it doesn't help your point to pretend that commuting on a bus or subway is the same thing as commuting in an autonomous personal vehicle, and these kinds of false equivalencies only serve to discredit mass transit advocates.


Oh, I'm not at all pretending there's an equivalence between a personal coach that delivers you anywhere you want to go and a coarse, generic mass transit system that's relatively indifferent to your destination. I'm saying that no one at all is saying that every American (let alone anyone in the world) will have such service. This stuff is 100% marketed to upper-class people. Waymo is entirely an Uber-class service, meaning that only the top 3% of people worldwide can even conceive of ever experiencing it, let alone having it be their daily experience. And for what, not having to take a bus and walk a little after taking a train? I'm trying really hard to come up with something more bourgeois than that, and I'm coming up with nothing.


> I don't understand the point of comments like this,

I mean, the point is pretty straightforward. A study like this is tautological. "Self-driving cars are safe on exactly the fraction of hand-selected roads where self-driving companies are willing to put them" doesn't tell us anything. It's a fluff piece.

If you want to make an actual assessment of safety, you need data from randomized, representative roads, not 0.x% of the streets in America - which is itself a very generous country to pick. Put these things into peak traffic in Rome for an hour.


No, you don’t. This study is very clearly comparing to human drivers on the same streets and time period. Thus, you definitely can say that the human drivers are worse.

Tautological puff pieces are what Tesla does with many of its cherry-picked stats, but this data is much more definitive.


>you definitely can say that the human drivers are worse

No, you can't. You can say the drivers are worse only on the segments where Waymo was willing to test it, and that is worthless. It literally is cherry-picking. What does it matter to people driving in Boston, Chicago, New York, Paris or Berlin if cars can drive safely on a suburban grid in Arizona?

If you get to pick the sample, you get to pick the outcome. We could literally put these cars on the worst 1% of roads and get the opposite result. In fact, we could put them on the worst 90% of roads and get the opposite result. Even the average American consumer gets no meaningful info out of this.
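
To make that concrete, here's a toy calculation with entirely made-up crash rates; it's only meant to show that the road mix, not the driver, can determine which side "wins":

    # Hypothetical crash rates (crashes per million miles) -- numbers invented
    # purely to illustrate how sample selection drives the headline result.
    crash_rate = {
        "easy": {"human": 2.0, "av": 1.0},   # suburban grid: AV ahead
        "hard": {"human": 6.0, "av": 12.0},  # snow, chaos, Rome: AV behind
    }

    def fleet_rate(easy_fraction):
        """Blended crash rate for a given mix of easy vs. hard miles."""
        return {who: easy_fraction * crash_rate["easy"][who]
                     + (1 - easy_fraction) * crash_rate["hard"][who]
                for who in ("human", "av")}

    print(fleet_rate(1.0))  # all easy miles:   {'human': 2.0, 'av': 1.0}  -> AV "safer"
    print(fleet_rate(0.1))  # mostly hard miles: {'human': 5.6, 'av': 10.9} -> AV "worse"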


This makes no sense. Waymo is not saying their cars outperform human drivers in all situations. They are saying they outperform human drivers in the location where they've been deployed. It's exactly an apples-to-apples comparison.

> You can say the drivers are worse only on the segments where Waymo was willing to test it, and that is worthless.

That's not worthless at all. There are tons of places in the US, maybe even a majority, that are quite similar to something between SF and Phoenix. Yes, absolutely, there are tons of places that aren't. Again, so what? I just don't understand why anyone would expect that Waymo would not start with a progressive rollout.


Huh?

Waymo is doing a level 4 rollout to specific areas where they are confident they can operate safely. This study shows they are doing a good job of that. People in these markets get access to safer transportation once Waymo can scale. Indeed, the majority of the US population lives in areas that are roughly comparable in driving complexity to either SF or Phoenix.

The real question is if these safety rates can be maintained while reducing the operational costs to the point where this can be scaled.


SF is a fairly tricky place to drive. I mean it’s no Boston but it’s no mean feat.

Nevertheless, I do believe it might be easier to deploy self-driving cars in more "feral" places where anything goes as far as traffic rules are concerned. In those places, what I've observed is that you can at any point come to a complete standstill and everyone will just navigate around you (within cities, that is). This can actually work in these cars' favor, to be honest.


Boston is the IDEAL place to test out autonomous driving. The narrow roads, lined as they are with trees, rocks, and parked cars, mean that when an autonomous car makes the wrong decision, the damage will be primarily to property - Waymo's and the other guy's - and Waymo can just cut a check, learn some lessons, and move on.

(BTW, Boston's reputation as a place for aggressive driving is undeserved. Insurance companies pay a lot for fender repair for the reason above, making the city look bad, but for injuries against a person, insurers just pay the limit, which an injured pedestrian can run up in the first 24 hours in a hospital. Anything beyond that doesn't show up in car insurance statistics. Houston and Austin have far more aggressive drivers, for instance.)


It's also true that the old elevated central artery was really bad and pretty much required aggressive behavior because of how many ramps there were and the fact that you had to navigate a confusing network of crowded surface streets to access the harbor tunnels. I've admittedly lived in the area a long time but between the Big Dig and GPS, I don't think Boston is uniquely bad these days.


Have you actually driven in SF? There's plenty of rain in the winter and the roads can get quite interesting. Right turns that are actually 170 degrees on a 15% incline. Red lights that are way off to the side and obscured (there's one on Market right after you turn right exiting 101). It's honestly a pretty good stress test as far as American cities go.


I lived in SF, and besides the occasional annoyance of parking my motorcycle on heavily inclined streets, I don't remember anything remotely problematic. I do remember plenty of Waymo cars glitching, though, but that was a long time ago.

TBH, Waymo's CEO said what I said here: https://www.cnet.com/roadshow/news/alphabet-google-waymo-ceo...


That's ridiculous. I live in SF now and the driving situation is seriously challenging.

Even the straight, flat streets can be nuts - like Polk St: pedestrians can jump out anywhere; a slalom of delivery trucks, delivery scooters, ride-hail cars, double parallel-parked; bike lanes - it's a major cyclist route - plus buses and those little tourist scooters. Oh, and the electric unicycles that are always blowing stop signs! The type, shape, size, and behavior is all over the map. Go two blocks over and you have to deal with cable-pulled trolleys having the right of way across the 12% grade intersection. Last week there were hundreds of drunken Santa Clauses jaywalking between bars. Oh, and the line of tourist cars (who knows what traffic behavior they might follow!) heading up to see the crooked street - just as the high school is getting out.

LOL, get your memory checked if you recall driving in SF as unproblematic.


You lived in SF and still feel confident saying that it gets 365 days of sun?


> TBH, Waymo's CEO said what I said here: https://www.cnet.com/roadshow/news/alphabet-google-waymo-ceo...

All the CEO said was that he thinks autonomous driving won't work in every condition or ever be perfect, and that they're always finding and working through unique challenges. That's quite a bit different from "our cars only work on straight lines and with 365 days of sun".


As a stress test, SF is good but not great. Autonomous vehicles seem to have huge problems dealing with snow due to the obscured lane lines. It never really snows in SF (maybe a few flakes in rare conditions but not enough to stick).


> It never really snows in SF (maybe a few flakes in rare conditions but not enough to stick).

It never snows, but between the fog and strong winter storms directly off the Pacific, visibility can often be near zero in SF. I've lost sight of lanes and surroundings many times.


I don't see why not being able to see lane lines would be a problem, at least for the full-fat Waymo solution. My understanding is they use lidar sensors to correlate their exact position, in addition to GPS, which can be accurate to the sub-meter level. In fact, they would have much more information to go on than most wetware drivers, assuming they can correlate the position of street furniture, etc.

The bigger problem, I would imagine, is the car not responding to inputs in a predictable way. I would imagine at a certain point they are going to give up. Even a human driver might need a push in some conditions.
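
For what it's worth, here's a toy sketch of the kind of map-based localization I mean - match what the sensors see against a prior map of landmarks to refine a coarse GPS fix. Purely illustrative (invented landmarks, brute-force search), not Waymo's actual algorithm:

    import numpy as np

    # Prior map: surveyed landmark positions ("street furniture"), in meters.
    prior_map = np.array([[2.0, 5.0], [8.0, 1.0], [4.0, 9.0]])

    true_pose = np.array([3.2, 1.1])  # where the car actually is
    # Lidar sees each landmark relative to the car, with a little noise.
    scan = prior_map - true_pose + np.random.normal(0.0, 0.05, prior_map.shape)

    def mismatch(pose):
        """Sum of distances from scan points (placed at `pose`) to nearest landmarks."""
        pts = scan + pose
        return sum(np.min(np.linalg.norm(prior_map - p, axis=1)) for p in pts)

    gps_guess = np.array([3.0, 1.0])  # coarse GPS estimate
    # Brute-force search a 2 m x 2 m grid around the GPS guess, 10 cm steps.
    candidates = [gps_guess + np.array([dx, dy])
                  for dx in np.linspace(-1, 1, 21)
                  for dy in np.linspace(-1, 1, 21)]
    best = min(candidates, key=mismatch)
    print(best)  # ~[3.2, 1.1]: tighter than GPS alone, no lane lines needed

No lane markings involved at any point, which is the reason snow-covered lines aren't obviously fatal to this approach.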


Have you seen the streets of SF? Sometimes you can’t even see what’s ahead because you’re going from a 15% up to a 15% down and all you see is the horizon


Even in those scenarios, I'd still trust Waymo far more than the current substandard, falsely-advertised, snake-oil alternative offerings ...


I initially read this as referring to human-driven vehicles as substandard, falsely advertised, and snake oil, which... fair...

For what it's worth, as a pedestrian walking around SF late at night, I absolutely do trust Waymo more than human-driven cars.


It's funny that you've got several teams of thousands of engineers working on this, and you label as "snake oil" the one that has deployed its beta to hundreds of thousands of cars. Musk Derangement Syndrome is real.


I see no evidence of a reliable, working product delivered as promised/advertised. For years now.

What do you want me to do? Pretend?

Perhaps MDS is real ...


Do you think Tesla's approach to FSD is as difficult as their peers' approaches, and they are just lazy and incompetent, or do you think their approach is fundamentally easier or harder? Have you thought about it at all?


Who cares about the approach? Consumers only care whether the product works or not. They haven't made it work, and it doesn't look like they will make it work.

You don't get brownie points if you deliberately hamstring yourself (no sensors, no maps) and make the problem harder than it already is. It's like trying to jump higher and higher to fly to the moon instead of using rockets. It doesn't mean you're tackling a fundamentally harder problem.


Consumers don't care, but observers who are trying to understand the nature of what's being developed do. Tesla's approach is not strictly about cars, just like their entire company isn't strictly about cars. They're reaching for something bigger than self-driving with FSD, and if they weren't doing it the way they're doing it, those additional wins wouldn't be in scope.


I think Musk has consistently lied about engineers' understanding of how FSD can work. https://www.carscoops.com/2023/03/elon-musk-overruled-tesla-...

He repeatedly claimed that having contradictory radar and camera data means the system won't know what data to trust. Anybody who has worked with data in any capacity can tell you that is complete nonsense. In the past, people would call combining this data "sensor fusion." These days, even high school kids know you just throw all your features into a model. You don't discard red channel data from your cameras because it might conflict with green channel data. Tesla Insurance doesn't throw away all features except driver age when calculating premiums.
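
As a concrete (and deliberately toy) illustration of "throw all your features into a model": feed both sensors' readings in as separate features and let the fit decide the weighting. The numbers and noise levels below are invented:

    import numpy as np
    from sklearn.linear_model import LinearRegression

    rng = np.random.default_rng(0)
    true_dist = rng.uniform(5, 100, 1000)          # true distance to object (m)
    radar = true_dist + rng.normal(0, 0.5, 1000)   # radar: unbiased, low noise
    camera = true_dist + rng.normal(0, 5.0, 1000)  # camera depth: unbiased, noisier

    # Both features go in, contradictions and all; no data is "discarded".
    X = np.column_stack([radar, camera])
    model = LinearRegression().fit(X, true_dist)
    print(model.coef_)  # radar weighted ~0.99, camera ~0.01: the model learned trust

That's just ordinary least squares standing in for "sensor fusion"; a real stack is far more involved, but the point survives: conflicting inputs don't confuse a fitted model, they get weighted.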


“{name} Derangement Syndrome” is a really neat tool for filtering out opposing viewpoints. Pro tip: save it as a hotkey for easy deployment.


No, there are only two valid values of {name} for which this kind of dynamic is both real and useful to observe; with the rest I would agree with you.


Nice, you keep those thought-terminating cliches locked and loaded


Cruise and Uber were equally snake oil, too, and have spilled more blood than Waymo or Tesla.


Tesla's Full Self-Driving seems to have peaked? I follow the forums and subreddits on this topic, and no one has been talking about major improvements for a while now.


It seems to be forever in beta. It’s not clear that the technology as currently implemented can ever achieve what it set out to do.


That's true, it's still unclear if the method Tesla is taking will eventually work. That's not the definition of snake oil. They're approaching the problem differently, because they think if they solve it this way it will be a better outcome for a variety of reasons, and are working extremely hard on it.


What makes people call it snake oil is the salesperson (Musk) repeatedly making false claims about both the capability (summon your car from across the country...) and availability (... by 2018!) of the technology.


The key part of it being "snake oil" is selling the system before it actually works. It has nothing to do with the approach to solving the problem.


Obviously you roll out a safety critical system like this in the simplest scenarios first as you build confidence.

Also, frankly, humans drive in situations where they just shouldn't. There are times when the conditions are so bad that the only correct decision is to say it is not possible to safely drive today. Personally, I don't drive in dense fog, because your options are basically to drive too fast and risk hitting something you can't see, or to drive at a sensibly slow speed and risk being hit from behind - which isn't your fault but might still kill you. Even the most advanced aeroplanes, with all the latest and greatest sensors, still have defined limits beyond which they won't fly.

Also, that Arc de Triomphe example should just be replaced with a safe priority system. Priorité à droite (priority to the right) is a silly rule, and it's especially silly on a roundabout. In my opinion, crashes there are the fault of the road, not the driver.


I took a Waymo last night in the rain in Union Square in SF. I assure you it wasn't easy: many merges, people walking in the road, cars to pass, etc.


It was raining yesterday and the Waymos were out but it doesn't matter that much. You won't get self-driving cars in your streets in Europe and that's okay.

It's just like how iPhones are wildly popular in the US. I'm sure there's someone who's like "Yeah, try to sell a $1k device to a Romanian and see how it goes" and Apple only gets 25% market share there but they're one of the most successful companies in the world, without Romania.

Some markets can be irrelevant.


Yes, it's good engineering practice to solve the easier problems before solving the hard ones.


I read comments in this vein every time an article on Waymo (or self-driving cars) comes out, and my question is: who cares? Even if it's just the sunny, easy locations that get this advance, that's hundreds of deaths and injuries avoided every year. I understand being pragmatic and practical, but we should celebrate this news IMO.


People love to say this like it's some sort of "gotcha" - that Waymo is cheating at safety by only rolling out their service in places where they're certain they can operate safely.

How about no: let's not send Waymo cars into vastly more challenging conditions. I want them to keep being cautious and responsible.


I'm hoping they work up to the complexity of European roads eventually. Start small and iterate, y'know.


I'm waiting to see how it will handle Chinese roads. There are a few self-driving startups in China, actually, although I haven't heard anything from them recently. It would be the gold standard for a self-driving car to handle Beijing roads, drivers, and pedestrians.


Kyrgyzstan roads. I was nearly killed by a Chinese truck swerving around a donkey that was chilling in the middle of the road in the middle of the night.


I think both Baidu and Pony.ai are allowed to run no-safety-driver taxis for $$ in Beijing now (as of this year).


Oh wow. I totally want to see that on my next trip. If they are running well enough, they've probably already beaten Waymo.


Yeah, especially Southeast Asian roads, with all those motorcycles buzzing around. Actually, I'd be way more impressed with self-driving motorcycles.