US opens probe into 130k Ford vehicles over hands-free tech (reuters.com)
54 points by ripjaygn 4 months ago | 83 comments



The desire to have a car that drives itself is so high that it overrides folks' self-preservation. They'll pay kilodollars for unsafe systems. Regulation is the only solution I can see to prevent buggy beta software from unleashing mayhem on the road.


Reading the facts of the accident central to this investigation, I imagine most meatbag driving systems would probably also have hit the car. It's not like the car ran into a stopped police car with flashing lights on.


I don't know if we have enough facts in the Texas case to guess whether a human in the same situation would have handled it better.

All the facts I've seen are: there was a vehicle stopped in the traffic lanes with its lights off, and at least one human driver ahead of the accident car avoided the stopped vehicle but was close enough to observe the collision.

There are lots of scenarios that fit these facts, some of which are hard for humans and all of which are hard for driver assistance software that ignores stationary objects.

Even if an active human driver would likely still have hit the other car, they may have begun braking sooner and reduced the energy involved in the collision.

If the accident car was closely following the observer car, both in the same lane as the stopped car, and the observer car changed lanes to avoid the stopped car 'at the last second', yeah, that's hard for humans too when there's limited visibility ahead of the car in front (which is a good reason to avoid following so closely, but things happen). If it was just light traffic at high speeds and a stopped car, I'd expect an active driver to at least reduce speed significantly. Yes, it was at night and the stopped car had no lights; however, the accident car's headlights should illuminate the stopped car's reflectors if nothing else, and given the observer car had passed the accident car recently, there should have been some light cast from its headlights as well.


Sure sounds like the Ford was following the observer car, probably pretty closely for the given speed. Having reflectors on the stationary car isn't useful when the stationary car is completely obscured by the observer car. If the observer car missed it by only a couple of seconds, there was probably only five seconds or so for the Ford driver to see it, realize what was happening, and properly respond. And at those speeds and that kind of distance, braking probably wasn't an option, especially if you've already burned a second and a half just realizing "oh shit that's a stopped car". I haven't heard either way whether there was significant braking by the Ford, but driving on an open Texas highway typically means they were probably going >80mph, as the posted speed was probably 80mph. I'm not sure exactly which stretch of I-10 they were on, but a lot of it around San Antonio (especially the unlit stretches) is posted at 80. Even if they slammed on the brakes and managed to cut half their speed they're still slamming into a stationary, heavy object at 40ish mph.
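
Just to put rough numbers on that, here's a back-of-the-envelope sketch (Python). The 80 mph speed, 1.5 s perception-reaction time, 0.7 g braking, and the headway values are all my own assumptions, not facts from the report:

    # All inputs below are assumptions, not facts from the investigation.
    MPH_TO_MS = 0.44704
    G = 9.81

    v0 = 80 * MPH_TO_MS    # assumed initial speed, ~35.8 m/s
    reaction_s = 1.5       # assumed perception-reaction time
    decel = 0.7 * G        # assumed hard braking on dry pavement, ~6.9 m/s^2

    # Impact speed vs. how many seconds of headway the driver had once the
    # stopped car became visible.
    for gap_s in (3.0, 4.0, 5.0):
        gap_m = v0 * gap_s                     # distance available
        braking_m = gap_m - v0 * reaction_s    # distance left once braking starts
        v_sq = v0**2 - 2 * decel * max(braking_m, 0.0)
        impact_mph = max(v_sq, 0.0) ** 0.5 / MPH_TO_MS
        print(f"{gap_s:.0f} s of headway -> impact at ~{impact_mph:.0f} mph")

With those assumed numbers, three seconds of visible headway still means hitting at roughly 50 mph, four seconds gets you down to well under 20 mph, and five seconds is enough to stop. The outcome swings enormously on a second or two of headway.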

I imagine it pretty much is an instance of following too closely and going too fast for the lighting conditions, given the possibility of suddenly encountering a completely stopped object in the road. I agree, it's a bad limitation of "self driving" systems, but in the end I imagine the average human would probably have had a similar following distance and also been travelling at a similar speed.

The Ford system does track attentiveness and will drop out of BlueCruise mode and bring the car to a stop if the driver isn't paying attention. It does a lot of gaze tracking with IR lights and cameras. It's not like on some cars where attaching a little weight to the wheel is enough to fake attention. The driver was paying attention to the road, at least within the last few seconds.

EDIT: Sounds like the stretch in question is a 70mph area. Still, given an open highway and how loads of Texans drive, I imagine they were probably going about 80mph. Also, it does appear this section of I-10 has roadway lighting.

FWIW, I've seen lots of cars slam into the back of a stopped car on a highway in bright daylight conditions without the assistance of driving aids. My old office had a pretty good view of a long stretch of a major 70mph highway; I saw a lot of collisions. Meatbag drivers are pretty terrible.


Thanks for the link elsewhere. While I agree with your assessment about what probably happened, I still say there aren't many facts. It would be helpful to know how far in advance the observer changed lanes.

> Even if they slammed on the brakes and managed to cut half their speed they're still slamming into a stationary, heavy object at 40ish mph.

A 40 mph collision is often survivable these days (that's the speed used for crash tests!). The accident vehicle driver survived regardless; the driver of the stopped car didn't though.

Driver assistance is really unhelpful in these situations. Most likely the lead car slowed down at least a bit while changing lanes, so an active driver would also be slowing and might be quicker to slow further when the unexpected car appears, but driver assistance is going to see that the lead car left the lane and that there are no other cars (because stationary objects are ignored), so it goes ahead and speeds up. Teslas are known to have a very high rate of acceleration in this case; I'm not sure about the acceleration of the Mach E.

Adding more speculation: as an untrained armchair internet crash investigator, my guess is the collision happened at over 40 mph. The IIHS crash test [1] seems to show less damage than the accident vehicle, and the stopped vehicle was overturned in addition to sustaining significant damage.

[1] https://m.youtube.com/watch?v=Po_QfX5U8lI&pp=ygUXbWFjaCBlIGN...


I do agree there's a certain level of variability in the possibilities of what actually happened with the facts given so far. I'd really like to see a video recording of what happened so we can actually judge the possible reaction time for the Ford driver. I totally acknowledge it's possible there were many seconds between the observer car leaving the lane and the Ford colliding; it is a pretty straight stretch of road, so the observer could have seen the car for a bit. Within the unknowns of this incident, it is possible a human without ADAS systems in use here could have responded better. But to me, with what I'm reading about it and my past experiences of watching drivers on Texas highways hitting stopped cars in broad daylight, I imagine a lot of humans would have hit the car in a lot of the possible scenarios.


> Sure sounds like the Ford was following the observer car, probably pretty closely for the given speed. Having reflectors on the stationary car isn't useful when the stationary car is completely obscured by the observer car. If the observer car missed it by only a couple of seconds, there was probably only five seconds or so for the Ford driver to see it, realize what was happening, and properly respond.

Sounds like the Ford wasn't maintaining a safe following distance.


I agree. Maybe these driving aids should force better following distances and lower max speeds when it's dark outside, especially if the system ultimately relies on human vision to properly detect and react to a stopped object.

I'd still say a human would probably have also had an unsafe following distance and speed in this scenario.


A human would have likely driven unsafely too, but human behaviors don't scale as predictably, so I think it's appropriate to hold them to a lower bar.

If a self driving car (or driving aid at least) behaves unsafely in some consistent way, that could cause far more accidents than the "typical human driver" typically behaving in that unsafe way. It's frankly outrageous to me that it's literally possible for automated or assisted driving to fail such a common-sense, basic rule of road safety as following distance.

And that's my real concern with self-driving cars (apart from considering them a competitor to the public transit I'd rather see), that it doesn't feel like there's a sufficient baseline level of caution for their bread-and-butter behaviors.


At 80 mph there should be roughly 110 m between vehicles to maintain the recommended 3 seconds of separation: 80 mph is 36 m/s.

I drive with traffic aware cruise control always on maximum separation (Tesla S), but hardly anyone else abides by the recommendation. Even 1 second separation is 7 car lengths at that speed.
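
To put numbers on those gaps, a quick sketch (assuming roughly 5 m per car length, my own figure):

    # Separation distance at 80 mph for a few time gaps.
    MPH_TO_MS = 0.44704
    CAR_LENGTH_M = 5.0                     # assumed average car length
    speed_ms = 80 * MPH_TO_MS              # 80 mph ~= 35.8 m/s
    for gap_s in (3.0, 2.0, 1.0):
        gap_m = speed_ms * gap_s
        print(f"{gap_s:.0f} s gap -> {gap_m:.0f} m (~{gap_m / CAR_LENGTH_M:.0f} car lengths)")

That's about 107 m (21 car lengths) for the recommended 3 seconds, and still 36 m (7 car lengths) for a 1-second gap.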


It's worth noting that we don't have to make assumptions about how a human would have handled the situation, because BlueCruise is not full self-driving. There was a human at the wheel.

So the answer in both cases is definitely "There was a human operating the vehicle, they saw the vehicle running headlong into stopped traffic, and they failed to use the brake in time." They may, perhaps, have trusted the semi-automation too much, but that's no more excuse for rear-ending a stopped car than setting your regular cruise control to 55 and failing to disengage when you see traffic stopped ahead is.

We want the technology to do better than human, but (unless NHTSA turns up something truly bizarre and awful, like "Driver tried to brake and BlueCruise overrode the command to disable braking," which should be impossible) the situation in both these cases is people died because the car was being driven by a human and the human made a bad call.


This is an important take for this system. BlueCruise does a lot to check for attentiveness with its gaze-tracking. The driver would have been looking at the road at least a few seconds before the accident. As someone who has done several hundred miles on BlueCruise, it can be pretty touchy about you paying attention. Even taking a drink from a cup a little too long will get it to start alerting you and eventually kick you out of it.


Link?



Thanks! Nighttime, 70mph highway, stopped car with its lights off. It has:

> The witness changed lanes to the right to avoid striking the vehicle and later, in her rearview mirror, saw another vehicle strike the stopped Honda.

If that interval was very short, then I could imagine the witness car blocking the Ford's view of the Honda until there wasn't much time to react?


Yeah, that's definitely the big variable in whether this is something most human drivers would also have collided with. Given how the observer car still seems to have had good visibility of the car and the whole accident, I'd guess they probably passed the car just a few seconds before the Mach E collided. I imagine most people's sense of time in that scenario is pretty unreliable; we'd pretty much need some kind of video recording to know exactly how much time the Mach E realistically had to react.


On the other hand, it's hard to build safe self-driving cars without actually deploying them in the real world. If we can tolerate some level of risk, we can get to self-driving cars sooner.

Every year we delay having safe self-driving cars, about 50,000 people die on the road who don't need to die.


It's basically the trolley problem.


Have you used it? I find that when you use it as designed and pay attention, it is much safer and easier to drive. Especially on road trips, I arrive fresher than I would if I were manually driving the entire time. Certainly there are dangers, but there are already huge benefits.


A while back (ok, 10 years ago!), there was an interesting New Yorker article [1] about the rise of automation, specifically related to flight, but I think some points are relevant here:

> But, as pilots were being freed of these responsibilities, they were becoming increasingly susceptible to boredom and complacency--problems that were all the more insidious for being difficult to identify and assess. As one pilot whom Wiener interviewed put it: "I know I'm not in the loop, but I'm not exactly out of the loop. It's more like I'm flying alongside the loop."

I think these assisted driving features in cars are the same way. I personally like the idea of them; humans seem to be incredibly horrible drivers (especially here in the Bay Area). However, when attention and monitoring are still required, people get complacent.

All that said, I'm not sure what the answer is -- maybe these systems need to get to a point where they are fully autonomous? But how do we get there and feel comfortable about it in the first place? Take air travel for example -- planes are probably perfectly capable of taking off, flying themselves from point A to point B, and landing without human intervention. Would you travel on a plane with no pilot? Probably not!

[1] https://www.newyorker.com/science/maria-konnikova/hazards-au...


Isn't that sort-of the same as an increase in accidents after seatbelts became mandatory?


The big problem is it's VERY hard for people to stay continuously ready to take over at a moment's notice for long periods of time and then to maintain the discipline to do that over many trips. We just naturally get complacent and pay less attention than the systems currently demand because of their shortcomings.


For me the big advantage of Tesla FSD is that I find I spend more attention watching for moose in rural areas and pedestrians/situational awareness in urban areas because I can let the computer handle the mundane stuff.

Is this also the case with Blue Cruise? Does the eyeball monitoring camera count watching ditches & sidewalks as "paying attention" or does it force you to strictly keep eyes front?


Yeah, the combination of FSD 12.3.5 + an attentive human is incredible.



Those stats have always seemed gamed. At the basic level, "autopilot technology" is a wide grouping of various marketing wank Tesla invented. Is it or is it not FSD? Because for the longest time it was just driver assistance tech that everyone else has in different forms. Shit, my Civic can self-drive just fine using radar cruise control (it even handles merging vehicles) and lane keep.

The US average passenger vehicle age is also 13 years. This puts _many_ existing vehicles on the road outside the range of modern driver assistance equipment, which of course would affect this chart.


You can also get independent data, for example what drivers have to pay for (liability) insurance based on make and model. The rating system in Germany puts a Model 3 at rating 20 (out of 10-24, where 10 is the lowest cost for insurance); this is much worse than many older cars without any assisted driving. For example, my 10-year-old Ford was at 17, and my current VW (which has a lot of assisted driving features) is at 16. Of course it is not only the car's safety; it is also influenced by the average driver behavior. But it seems in general Tesla isn't doing great when you look at such independent data.


The chart should show something like "miles driven with cruise control enabled" and "miles driven with <mild assist> cruise control enabled" (e.g. radar / laser / automatic, lane assist, etc.) for all brands of vehicles.


kilodollars - the metric unit that might just swing the US away from imperial.


As an American, I've been trying to get people to use the Everest, the amount of money it costs to climb Mount Everest, as the unit of measure for midlife crises and upper class indulgences.

(One Everest is around $50,000-$100,000.)


I know this was just a lighthearted comment, but for folks like me who like to take things too seriously... "grand" has the same value, and has the benefit of already being in common use.


Honestly we’ll probably just figure out how much a football field costs and use that.


Field-dollars™


I'd rather have slightly unsafe self-driving cars than completely unsafe human-driven cars where the human is distracted by whatever's on their phone, kids screaming, accidents on the road, etc.


It’s unfortunate that they are not only putting themselves in danger, but also the lives of people all around them.


But that's every time a human gets behind the wheel also.

It comes down to trust: do you trust a machine to get it right every time, or a human to get it right every time? I think people will answer differently based on their differing experiences with machines and people.

(... but for my money: when the machine malfunctions, we can crack open its black box, look at its analysis logs, find errors, and update every other clone of the machine to never make that mistake again. Can't do that with humans; for all the lessons we learn about road safety, we are trying, inefficiently, to imprint them on a new, naive crop of drivers every day).


It’s not that simple!

Machines distribute accidents without bias. Individuals who are otherwise safe drivers could crash viciously if using self driving modes.

Sometimes you can develop intuition for what cars are being driven by bad human drivers and you can avoid them. Machines will appear to be driving just fine until suddenly they’re not, and this can happen very quickly.

Also, if you know a car is being driven by a human rather than a machine, you can use your empathy to anticipate what a human driver is going to do in a situation and prepare for that, whereas a machine can behave suddenly and unpredictably, increasing your chances of being involved in an accident.

Keep these self driving vehicles in their own lanes.


> Machines distribute accidents without bias

Isn't that a good thing? Or are we assuming we have control over the other drivers who are bad drivers and will kill us when they wreck into us?

> Also, if you know a car is being driven by a human rather than a machine, you can use your empathy to anticipate what a human driver is going to do in a situation and prepare for that

People keep claiming this but I think it's a claim without sufficient evidence. I've watched humans go backwards up a one-way street. I've watched humans stop at three stop signs and miss the fourth. I've watched humans drive perfectly up until the point they had a coronary. I suspect it's something people like to believe because they get to say "I avoided that idiot swerving on the highway; that kept me safe" and they miss the huge confirmation bias that they're really safe because they didn't get wiped out by the guy who had the heart attack and suddenly drifted into the oncoming traffic lane, leaving no time to react. It's an illusion-of-safety feeling that, ironically, comes from so many human drivers being erratic in non-immediately-dangerous ways, far more often than an automated vehicle is erratic at all.

We don't hear a lot from the tens of thousands of drivers killed on the road every year who failed to use empathy to avoid their crash, because they're dead. So we're missing a huge datapoint when evaluating the "humans can communicate with each other empathetically on the road so they're safer than sharing the road with automata" claim.


For me, it's more nuanced than this. Most mistakes that are made by human drivers around me make intuitive sense to me based on how they are driving. They are often somewhat predictable. In those cases, I can partially mitigate those ahead of time (not always - I've been broadsided where I didn't see the other car until the last moment). The wide adoption of cell phones required a significant update to my mental model, but you can spot things like lane drift, uneven speed, etc. that tell you to give space to someone because they are probably distracted. Both chemically-impaired and sleep-deprived drivers tend to give even more clues to stay clear.

I find predictions informed by observation much more difficult with automated driving systems. In many cases, such cars will appear to be driving perfectly, right up until a severe mistake is made. Instead of having observable clues, it requires an understanding of what sensor suite and what version of software is in which vehicle in order to understand the tendencies of that vehicle. Something we can't possibly keep up with, especially given that the developers of the machine learning models being deployed can't deeply characterize each deployment's failure modes.

Given all that, I feel more able to defend myself against human mistakes than automated mistakes.


The current mayhem on the road is killing 40K people every year in the US, and I'm not scared of that. So you'll have to excuse me for not caring about this hypothetical software induced mayhem.


In a way, it's useful that American roads are so unsafe. It means self-driving vehicles can be deployed early, in an environment where they can do some good, well prior to when they'd be a net safety improvement in e.g. Norway.


On the other hand, the behavior that is largely responsible for the difference may also make the technical challenge more difficult.


There needs to be law or case law so that the manufacturers are proportionally responsible for harm their driver assistance features cause. It's the only way it'll affect their bottom line enough for them to take safety as seriously as they should.


It takes thousands of the smartest, highest paid people to make AI work (sort of) most-of-the-time in closed systems like chat interfaces and image generation. There is also no risk to life there.

Trying to get this stuff to work on the road is a fool's errand. Just work on warehouses and factory floors. There is at least a business model there and the world is usually climate controlled. What a huge waste of human capital pursuing an impossibility with this self driving car nonsense.


> There is also no risk to life there.

Careful with absolute statements [1].

[1] - https://www.vice.com/en/article/pkadgm/man-dies-by-suicide-a...


I think what we really have to fear from AI in the near future is not it taking over but people delegating their decision making to an algorithm that has no intelligence behind it.

(This already happens with non-AI systems, but the scale would risk increasing to an inflection point.)


Huge waste of human capital kind of applies to the auto industry in general. There are so many makes and models and variations all requiring advanced engineering and manufacturing. We could solve a lot of other big problems w/ those engineers.


No, you mean that we could solve a lot of other problems that you care about. At least be honest about it.


Yeah, like sane vehicle design; ultra-efficient city design and rural access; multimodal transportation design involving walking, biking, rail, or other alternative modes of transportation.


> We could solve a lot of other big problems w/ those engineers.

Most hardware folks are underpaid as is outside of aerospace and defence. What you are asking would put them on food stamps. There's a shortage of capital and low interest rates, not hardware talent.



I don't understand the general tenor and sentiment of the comments here. I've taken dozens of self-driving car rides in SF in the last few months. They were magical, and _they work_.

It blows my mind that the general defeatist tone is "it can't be done", while it's literally happening right now. It took a bajillion dollars and decades of work, but we're past the tipping point now.

Sure, regulation must happen; it's not like a chat bot, where the worst-case scenario for screwing up is being canceled for a week. Lives are on the line. But an outright "it's impossible and must be stopped" is literally against progress.


I'm with you on this.

Yes, Tesla's FSD sucked and they generated a lot of bad reputation, both for themselves and the industry. But FSD v12 (end-to-end ML), their latest release, is leaps and bounds ahead of v11. I only used to use v11 on relatively empty highways, like cross-country road trips.

With v12, I leave it on 95% of the time - their cameras see more than I do, and process things quicker than I can. The onus is still on me to pay attention, and I do. Yes, there would be idiots who don't. But then again, there are idiot drunk drivers as well.

I am beginning to believe that in a year, Tesla v12 will be really really good, and safer on the road than an average human driver. It probably already is. I haven't researched the stats.

But the current state of the art is Waymo - at this point, a Waymo is actually safer than human drivers. People need to take a few rides in them to believe it - it's almost a solved problem to navigate on city roads.

I'm excited for what the future holds.


I think this is why they are moving into robotics. They're close enough to solving autonomous driving that they've sign-posted that it's a tractable problem for everyone else. We can reasonably expect the other major manufacturers to catch up and once the technology is widespread and EVs are the norm, Tesla has much less of a competitive edge in their core market.

There will be tipping points where it's consistently better than humans so long as there's someone supervising, and then again when it's better without someone supervising. Beyond that point it's just incremental changes.


I jumped into my co-workers Tesla to go to lunch. I asked him if he ever uses the self-driving feature. He turned it on and in less than a second, the car veered directly into the middle turning lane. I watched him yank at the wheel and disable the self-driving mode, explaining, "It's good but sometimes it does that".


It's hard to look at a system where you have "AI" directly causing human deaths and not have a knee-jerk reaction that it can't be done and should be regulated out of existence, even if it's objectively safer than humans. It's an emotional position but as they say it's nigh impossible to reason someone out of a position they didn't reason themselves into.


It's the cars themselves that are dangerous. Whether people or robots are driving them. We used to have sensible regulation to ensure safety, which was unfortunately repealed, resulting in millions of avoidable deaths:

"...this restricted the speed (of horse-less vehicles) to 2mph in towns & 4mph in the country. The Act also required three drivers for each vehicle – two to travel in the vehicle and one to walk ahead carrying the infamous red flag."

https://law-school.open.ac.uk/blog/red-flag-act


What you are describing is essentially anecdata in the grand scheme of things. Yes, there are absolutely scenarios and situations where FSD is a solved problem. The issue is that, relative to all of the situations that occur across the country (and across the world) daily, the percentage of daily miles driven where FSD can perform flawlessly is likely less than 5% of total miles.

FSD can be done, IMO, but it can't be done today.



Are you attempting to use a single video as proof that FSD is 100% reliable in all driving situations globally?

That video creates more questions than it answers.


You asserted that, "the percentage of daily miles driven where FSD can perform flawlessly is likely less than 5% of total miles".

If FSD can operate in situations such as the one in the video, you are wrong by at least an order of magnitude. And FSD does not have to perform flawlessly to perform better than humans.


Except that without a lot more information on that vehicle we don't know if it is optimized for a very specific scenario (low speed driving in crowded areas), or if it could perform similarly on a highway at 65MPH during snow or rain.

There are TONS of examples of FSD working well in various scenarios online. Having worked in video analytics/AI for the last 15 years, I have seen all kinds of demo videos that are essentially highlight reels, while the evidence of the product utterly failing is not released.


It's not about if it can be done or not. Waymo is very impressive, and they continue to expand their scope. Good on them. But their sensor suite is very expensive.

BlueCruise is not self-driving. Neither is Tesla's Full Self Driving; despite the name, read their letter to the CA DMV.

Both of those systems will absolutely ignore stationary vehicles in the path of travel when travelling at highway speeds.


> But their sensor suite is very expensive.

Very true, but the cost trend for these sensors over the past 10 years has been very good and expected to continue.


World doesn't end at SF borders, nor does it revolve around it.

I can come up with tons of corner cases where I simply won't risk life of me and my whole family just because some tech bro said so on the internet. And you know, tons of corner cases that I sometimes experience all over the world sum up into some major percentage.

By all means be a betatester, but don't force it down the throats of unsuspecting non-tech users who often trust what manufacturers claim.


> [...] I simply won't risk life of me and my whole family just because some tech bro said so on the internet.

There isn't the option to take no risk - there are over a million deaths a year[0] from regular road traffic crashes.

Region-restricted automated taxi services like Waymo are already looking pretty safe compared to human drivers, though there is a lot of selection bias (good conditions, well-mapped US cities, ...).

[0]: https://www.who.int/news-room/fact-sheets/detail/road-traffi...


I see 30K on the interweb (US). Where did the million come from?


https://www.who.int/news-room/fact-sheets/detail/road-traffi...

> Every year the lives of approximately 1.19 million people are cut short as a result of a road traffic crash. Between 20 and 50 million more people suffer non-fatal injuries, with many incurring a disability.

(for disclosure, if anyone is confused about JoeAltmaier's reply: the above link wasn't initially in my original comment - I had edited it in shortly before refreshing and seeing their reply)



Further, those two stats are irreconcilable? There are not enough countries in the world to add up to nearly 2M deaths per year, given that the top rates are in the US and Russia. Who else has as many cars? Even China and India can't contribute much because of the dearth of cars.


https://en.wikipedia.org/wiki/List_of_countries_by_traffic-r...

China and India do contribute a fair amount.


This sort of risk analysis is always baffling to me. I live in a big city. It feels like there is some near miss collision to me almost every week. 99.99% of the time it's because of human drivers, not automated systems.

It's virtually guaranteed that in your lifetime you'll have a collision or near miss with a human. And no, it doesn't take "special conditions", it could be a perfectly sunny day and a clear road and someone will do something crazy.

So that's what we put up with on a daily basis, deep threats to human life by human drivers, our safety standards on letting humans drive so low as to be utterly comical. And yet all the handwringing is done over incredibly rare situations where an AI system screws up and its human driver also screws up at oversight.


You don't understand human psychology, nor a few simple facts.

I choose how I behave in various situations. I know I am a way-above-average driver, with most kms driven in a rear-wheel-drive BMW; I keep my distances, do defensive driving, etc. I pick scenarios, I choose how I do the 'battles'. If somebody else does something stupid and 'unique', I trust myself way, way more than some 'AI' in beta test; it's not just reaction time but experience, a massive amount of anticipation where I see bad drivers and overtake them before they do something stupid, etc.

Maybe it's emotional, but I am a highly logical person and don't let emotions interfere with decisions much. Still, no. I kept saying "in 10 years", but this goalpost basically moves as time moves, so I've understood it's in the "maybe in my retirement" category and stopped expecting mass adoption earlier.

Surprise me world, I would love that. But I am being realistic, not bullish just because it would be so nice to have robo cars and taxis.


> World doesn't end at SF borders, nor does it revolve around it.

Funnily enough, his "they just work" mentality does end at SF borders though... as recently as 2 weeks ago.

https://techcrunch.com/2024/04/17/seven-waymo-robotaxis-bloc...

> Six Waymo robotaxis blocked traffic moving onto the Potrero Avenue 101 on-ramp in San Francisco on Tuesday at 9:30 p.m. (...) While routing back to Waymo’s city depot that evening, the first robotaxi in the lineup came across a road closure with traffic cones. The only other path available to the vehicles was to take the freeway, according to a Waymo spokesperson. (...) the company is still only testing on freeways with a human driver in the front seat. (...) After hitting the road closure, the first Waymo vehicle in the lineup then pulled over out of the traffic lane that was blocked by cones, followed by six other Waymo robotaxis.

Also gotta point out, the article uses such a disingenuous way to put it. The Waymo didn't "pull[ed] over out of the traffic lane that was blocked by cones"; the car stopped in the lane of travel and put its flashers on, as evidenced by the video at the top of the article.


A lot of people don't understand there's Waymo, and then there's everybody else, and everybody else is at least 5-10 years behind Waymo.


I'm sure this will get the same amount of discussion as Autopilot...


Now's your chance to start it :)


Proportional to market share? Probably will.


"...but the system that can kill you is still safer than the average driver!"


how is that not a compelling argument?


Dying because software decided to drive through a stop sign is not acceptable to the American driver. It doesn't matter what statistical evidence is presented, it will never be acceptable.


Nobody believes they're an average driver.


We also probably believe the car next to us is being driven by a below average driver.



