
From my perspective as a relatively early adopter of Waymo (60+ rides), I have zero gripes with the driving itself. In fact, I've seen Waymos do things that no human would be able to do*. For drop-offs and pickups I'd like to see improvements, like getting a bit closer to the curb. Occasionally the route selections make no sense at all.

* Waymo made a right turn up a steep hill, two lanes each way. It then pulled a bit into the left lane abruptly, and I didn't get why until, a split second later, a crouched-down skateboarder went by on my right against traffic. There was zero chance a human would pull that off: not enough time for a head check or mirror check. There wouldn't have been an accident, but the Waymo clearly has insane reaction-time and vision advantages - and uses them.




I saw a Waymo come to a screeching halt in a split second as a... less than intelligent individual... skateboarded in front of a bus, into the extremely busy road.

If it had been a human driver, that person would have been dead, no question.

Sometimes I'm riding in a Waymo (which I do every single day, 4x) and it does something, and I do a double take. Then a second later I see whatever it was it reacted to, and I'm like "dang, why did I doubt you, robotic overlord?"

The best part is that it works the same way in the daytime as at night.

Magic.


Waymo is amazing, knowing some of the stuff they do behind the scenes to ensure safety - I would feel safer riding in a Waymo than driving myself.

My biggest fear has always been that Cruise or Tesla would shit the bed so bad we don't get any self-driving, either because of regulatory constraints or ruining the public perception of them.


Absolutely. Glad the California DMV is taking the more level-headed approach here and evaluating them case by case based on merit.


In the case of that particular incident, it seems that the pedestrian was “thrown” into the Cruise's lane after being hit by another (human) driver.


Can any engineers in the field comment on whether the [very cool!] Waymo vs Cruise performance anecdotes being discussed here would have been the result of Google's millimeter-level scanning of SF? Or just better algorithms? Or both?


I’d think better algorithms. A detailed scan of a city’s streets and layout is obsolete before it’s completed.

A driver (bot or not) needs very good reactions to bad situations observed in the field. And ideally a good sense for when a situation is one step from trouble. As a driver, if I can’t see through a bus then I’m going to assume a crazy pedestrian is waiting to jump in front of me.

(A lot of drivers are quite bad frankly and don’t do most of the stuff you’d want someone to do if they’re directing a one-ton blunt weapon around town at 35mph.)


"A lot of drivers are quite bad frankly and don’t do most of the stuff you’d want"

I'd say 90% of good/safe driving is mental, with at least half of that being based on decision-making capability. Yet our tests mostly focus on physical ability, with a small amount of memorizing a subset of the rules.


I wish we put our drivers in driving sims the way pilots have to accrue flight sim hours.


Pilots (at least to get the PPL) log around 40-60 hours on a live aircraft before getting licensed. That's more rigorous, yet very close in principle to what we have in place to obtain a driver's permit, I'd argue.


I think it only took me half a dozen hours or so to get my license? But also that only tested me in best-case scenarios, with good visibility, other good drivers, etc. A better sim should test and train my response to unexpected events in less-than-ideal conditions.


Mine was 30 hours of lessons and 7 of behind-the-wheel practice. Most of that practice was very plain and simple: parking lots, side streets, etc.

Being completely honest, I did not feel totally comfortable behind the wheel until maybe a week or so of actually driving myself, post-license. (Not that I didn't have a lot more to learn, obviously.)

During that week, I could have easily gotten into an accident, and did have one or two close calls.

I would not at all be opposed to an additional ~10 hours of more rigorous behind-the-wheel driving, or a very good-quality sim for that same amount of time.

But realistically, I'd imagine making this a hard requirement would hit some massive walls. Good simulators are expensive, everyone wants to drive, and states have very limited budgets for this sort of thing.


I certainly didn't have 40-60 hours in a car before I got my license. Much less learning permit.


In Germany it’s 28 theory lessons, at least 12 practical lessons (including night driving and Autobahn), 1 multiple choice exam, 1 practical exam to get your driver’s license. Additionally, you need to successfully attend a full-day first aid course.

The multiple-choice and practical exams are not done by the school, but by an accredited / governmental institution. Each has a ~30% failure rate.

Counting 1 lesson unit as 45 minutes it’s definitely not quite as rigorous as a PPL, but certainly in the same spirit, I’d argue.


Cruise vehicles have sufficient sensor payloads to produce a 'millimeter-level scanning of SF' with many fewer hours than they've been on the road there. I doubt that provides the biggest difference. It's all down to the software at this point. Especially because they're not trying to do anything crazy like vision-only.


We keep hearing this is crazy, but is there evidence to back that up? At this point with hundreds of thousands of Tesla vehicles logging millions of miles a month, I feel like we’d be hearing non-stop reports of destruction on the highways if it really is as crazy as the average HN reader has been led to believe.


The only reason we don't hear non-stop about destruction is that you're required to keep both hands on the wheel/yoke at ALL times while FSD is activated. Comparing the current cameras on a Tesla to the human eye is just silly, really. Sure, they cover 360 degrees, which is superior, but in almost any other way the human eye is superior: better dynamic range, better contrast, better resolution, and best of all, we have eyelids. If you sit down and think about it for 10 minutes, you'll realize that gimping your self-driving car by using cameras only is a very, very silly proposition when we have all this cool tech like lidar essentially giving the car superpowers.


If FSD drivers were having to constantly intervene because the car couldn’t accurately map obstacles, we’d be hearing a lot more about it. I drive with FSD all the time - I could give you a list of things it needs to improve on, but not a single one has anything to do with its accuracy of understanding its surroundings.


>not a single one has anything to do with its accuracy of understanding its surroundings.

This has been my gripe for a long time. I feel like many in tech have conflated two problems. With current software the problem of perception (ie "understanding its surroundings") is largely solved*, but this shouldn't be conflated with the much more difficult problem of self-driving.

*for sure, there have been issues with perception. A glaring example is the Uber fatality in AZ.


This exactly. Reading the comments and understanding the huge gap between perception and reality of FSD is eye opening. There are a lot of armchair experts here who wouldn’t be caught dead driving a Tesla but are so confident in their understanding of its strengths, weaknesses, and the underlying reasons.


I do see stories about FSD seemingly trying to drive into obstacles fairly often. It’s true that it does see most obstacles, but most is not good enough for this.


Accuracy of surroundings is absolutely something it could improve on. Adding a different modality (like lidar) would be like adding another sense. Seeing an 18-wheeler without underride guards would be easier with an additional sense. It makes the intelligence part easier because the algorithm can be more sure about its interpretation of the environment.
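
To make that intuition concrete, here's a toy sketch of why an extra independent modality makes the interpretation more certain. The numbers, the naive independence assumption, and the function itself are invented for illustration; this is not any vendor's actual fusion pipeline:

    # Naive Bayes fusion of two independent obstacle detections.
    # Every number here is made up for the example.

    def fuse_independent(p_camera, p_lidar, prior=0.5):
        """Combine two independent detection probabilities into one posterior."""
        prior_odds = prior / (1 - prior)
        # Likelihood ratio each sensor contributes relative to the prior
        lr_cam = (p_camera / (1 - p_camera)) / prior_odds
        lr_lidar = (p_lidar / (1 - p_lidar)) / prior_odds
        odds = prior_odds * lr_cam * lr_lidar
        return odds / (1 + odds)

    # A low-contrast truck side might look ambiguous to the camera alone:
    print(fuse_independent(0.60, 0.60))  # ~0.69: two weak cues reinforce each other
    print(fuse_independent(0.60, 0.95))  # ~0.97: a lidar range return settles it

Real perception stacks fuse at the feature or track level rather than multiplying scalar confidences, but the direction of the effect is the same: a second modality shrinks the ambiguous cases.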


No longer true. Now they use the camera to verify you're looking at the road.

Also, we can drive just fine with two cameras and a neural net, no LIDAR.

In engineering, you don't add something unless it's necessary, not because "well we may as well have it."


You listened to Musk a bit too much.

And no, a neural net and two cameras are not "just fine". The day cameras are as good as your eyes and your neural net is on the level of human intelligence (AGI), maybe then it would be possible. But until then you will need to rely on extra hardware to get there.

Go check on YouTube how FSD behaves in a city with 1/10th the complexity of SF/Waymo. And remember, the difficulty is with the long tail of unexpected events.


We don't drive just fine, we routinely kill each other, often because of poor visibility or not noticing motion. Backing into a busy street? Bam. Open your door without checking? Biker down. Passing a bus at a crosswalk? Pedestrian dead. Driving at night with heavy fog? Off the cliff.

Even your basic non-fancy, non-AI car these days has a variety of sonar/radar assists to help out its cameras. Tesla is just being cheap (and getting people killed because of it).


Speeding and drunk driving are the two big ones that kill people.


>with hundreds of thousands of Tesla vehicles logging millions of miles a month, I feel like we’d be hearing non-stop reports of destruction

Why do you believe that Tesla vehicles are driving millions of miles a month on autopilot?

I suspect that autopilot accounts for a vanishingly small percentage of Tesla miles driven. Mostly because it sucks, but also because of how easy it is to miss a nag on a long road trip, with the result that you are locked out of autopilot until your next stop.

(Yes, even when paying attention to the road, with your hands on the wheel. Hell, autopilot locks you out if you exceed 85 mph for more than a few seconds by pressing the accelerator, such as when passing. This is true even in places where the speed limit is 85 mph, and the slower end of the flow of traffic is 95 mph.)

I love the performance of my Model S Plaid. Autopilot, however, is a joke.


Wow, I've had much more positive experiences with Autopilot on multi-hour highway trips. No shutoffs, no phantom braking in the last year or two. Autopilot on the highway is so much better than any other car I've driven with adaptive cruise. Anyway, the parent is comparing city and highway self-driving and they are completely different.


Tesla reported that there were 400,000 cars in the US and Canada that had access to FSD Beta in January 2023, so I don’t think that’s at all hard to believe.

Edit: Looks like back in January they were adding 10M cumulative miles driven per month (roughly) - https://insideevs.com/news/633328/tesla-fsd-beta-now-active-...


Assuming those data are accurate, it looks like you’re right.

Surprising, but it is what it is.


I probably use AP >80% of the time on the highway. From observing comments in the local Tesla groups, I'm sure there are plenty of people who behave similarly.

Millions of miles per month doesn't surprise me at all.


"This is true even in places where the speed limit is 85 mph, and the slower end of the flow of traffic is 95 mph."

Where do you drive where the speed limit is 85 mph?


Highway 130 outside Austin, Texas has an 85 mph zone.


Poland? 140km/h is the motorway speed limit. Admittedly there aren't that many Teslas here. But they are extremely popular in Germany and you see people drive them at silly speeds.


"Extremely popular in Germany" is a bit of a stretch, from googling around, there are only around 40k Teslas out of 48 million cars overall in Germany (which is about 0.08%).


Well, yes - I think what I really meant was that they are extremely popular [among electric cars]. Whenever I drive through Germany and stop at an Autohof, all the Tesla chargers are always occupied, and you see quite a lot of them on the roads just driving around. But you're right, 40k is nothing in such a big country.


It's a bit of a stretch to just randomly drop some numbers... In the US over the last three quarters, Tesla's market share is 0.003868139%.


Some German cars are driven at sillier speeds. I've been on the autobahn driving my Model S at well over 200 kph and been passed quite quickly by big German cars.


[flagged]


>>no

Care to expand?


There are a number of highways in Texas with an 85 mph limit.


> This is true even in places where the speed limit is 85 mph, and the slower end of the flow of traffic is 95 mph.

Once traffic rules are enforced by autonomous cars that don't let you go over the speed limit, we can start raising the limit to more realistic levels. I guess currently it is priced in that everyone goes over by x km/h.


>Once traffic rules are enforced by autonomous cars that don't let you go over the speed limit, we can start raising the limit to more realistic levels. I guess currently it is priced in that everyone goes over by x km/h.

Bloody nightmare fuel, that, but it's been the inexorable direction manufacturers and governments have been pushing in for a while.

Stop making things that constrain people more than they enable them. This kind of velvet-glove societal manipulation is pretty damn obnoxious once you realize it's a thing.


It's the sensors and Lidar.


> knowing some of the stuff they do behind the scenes to ensure safety

Any info on this? I would love to learn more about what they're doing.


https://waymo.com/safety/ is probably your best resource. You can follow the links from there to learn about all the things they're doing with simulation, etc.


I know most of it was PR, but the stuff Waymo did, showing a test course they built and actually having "unit tests" related to driving, showed me they're not dumping these things on the road to make money; they're trying to get it right.

The competition:

Uber killed someone, and it turned out they had disabled "braking due to objects in the way" because it made the ride jerkier.

Tesla keeps killing drivers, in part because they marketed it as "Autopilot".

Cruise ran over someone and then dragged them 20 feet.

I would have to agree. I honestly hate regulatory capture in most industries, but perhaps Waymo's "unit tests" should be turned over to the DMV and made the gold-standard driving test for AI drivers.


> My biggest fear has always been that Cruise or Tesla would shit the bed so bad we don't get any self-driving, either because of regulatory constraints or ruining the public perception of them

Exactly this. When I saw Tesla releasing their half-baked crap, I feared they would damage the public perception so badly that we wouldn't have self-driving cars for years.


> Waymo is amazing, knowing some of the stuff they do behind the scenes to ensure safety

Such as?


[flagged]


Ah yeah, when something doesn’t conform to your narrative, it surely is an astroturf. All coming from accounts that are many years old, have thousands of karma points, and leave meaningful posts on all types of topics. But the second they post about Waymo in a positive light, that instantly means astroturfing and not their true opinion based on their lived experiences.


Realistically, there are a lot of Googlers and Xooglers on Hacker News.

One of the messages in this thread is literally stating they have special knowledge of the behind-the-scenes workings of Waymo.

I work in the AV space and I honestly don't think there's much of a competitive mentality between the companies in the space, but at the same time there is a tendency for people to pseudo-astroturf these brands: they know someone working at the company and take great personal pride because "I know so-and-so who works on self-driving cars!"


Astroturfing isn't "I have worked here and am therefore biased when expressing support for the product", it's (to quote Wikipedia) "the practice of hiding the sponsors of a message or organization (e.g., political, advertising, religious, or public relations) to make it appear as though it originates from, and is supported by, grassroots participants."

If they explicitly disclose their internal knowledge or affiliation, it's not astroturfing. If they used to work for the company but are now unaffiliated, it's not astroturfing. If the company isn't aware of or supporting the effort, it's not astroturfing. Etc.


Most of them did not mention any relationship: hence the "pseudo-astroturfing"

They have partial relationships that they don't feel the need to disclose.


> They have partial relationships that they don't feel the need to disclose.

Well, that's your assumption. I would preface that with "probably", since we don't know this for a fact.

> Most of them did not mention any relationship: hence the "pseudo-astroturfing"

Side note: I think you mean "quasi-", not "pseudo-". [1]

But, more to your intended point: the point I'm getting at is that you're really stretching the definition of astroturfing here, even with the weasel wording added on it.

Astroturfing is a central campaign coordinated by a vendor in a manner as to appear decentralized. It's really an accusation against the central entity rather than as the pawns they're using in the process. So quasi-astroturfing might be a situation where (say) the vendor doesn't actually organize the campaign, but pays an advertiser who internally organizes the campaign without the knowledge of the vendor. The vendor might overlook signs that this is happening, without directly supporting it. Or they might not realize it at all. That's something that would be close to astroturfing, but not exactly it.

However, people advocating for a product due to their genuine personal opinions in a way that's clearly uncoordinated and unsupported by the vendor is not remotely "astroturfing" in any sense. It's a biased expression of opinions, with a potential conflict of interest (if they're still affiliated with that vendor).

All of which I think matters not only because words matter, but also particularly because this site's rules treat astroturfing differently from "someone is biased on the internet", and the admins very much don't want accusations of astroturfing without evidence.

[1] Pseudo- means fake, like how pseudoscience is fake science. Pseudo-astroturfing would be fake astroturfing, or the faking of fake grassroots support. Meaning you'd be accusing people of being real grassroots supporters who are pretending to be fake ones, which would make for a funny accusation.


Sure thing.


People taking it upon themselves to talk up a brand is the opposite of astroturfing, not pseudo-astroturfing.


When you're indirectly related to the brand and that's affecting your eagerness, it's akin to astroturfing in the very respect that makes astroturfing looked down on: i.e., you're biased.

But it doesn't have the malice/intent of astroturfing.


Everyone is biased. There are no perfectly unbiased opinions. And "having a bias" is not the same thing as astroturfing.


> One of the messages in this thread is literally stating they have special knowledge of the behind-the-scenes workings of Waymo.

I may have hinted as such (being an ex-Googler), and I know this thread has gotten kinda buried but I wanted to address the implication anyway.

When I left Google I sold all of my stock. My investment portfolio is independently managed and I do not advise any specific positions except for the exclusion of my current and former employers. I do not stand to gain financially from any company in the AV space.


Thank you. Contrary to other replies, you do not have to be employed directly by [Institution] expressly for the purposes of covertly pushing an agenda to be guilty of astroturfing.

What's important is the existence or appearance of a conflict of interest, particularly if you're not disclosing your connection. Having a friend who works for Google who might be upset at criticism counts. Being a former employee with stock counts. Having any kind of monetary or professional interest in FSD companies succeeding counts.

If someone is presenting their statements as that of a person who is not involved, and they actually are, their behavior is duplicitous, and justifiably characterized as astroturfing.


It's justifiably characterized as biased. We lose a useful term for distinguishing normal bias from marketing- or PR-instigated efforts. That's the point of the term astroturfing. It's useful in that regard. Or I suppose, if its definition has truly shifted, was useful.


Can a stand-alone complex be an astroturf? I think so, if the social conditions that create the behavior stem from a single entity that stands to benefit.

The point of astroturfing (in comparison to the term it derives from, grassroots) is that it's not people coming together to support a policy, it's people with a vested interest in an entity taking cues from that entity to act in their interests in such a way that it appears as the former. The definition never shifted, the tactics just became less obvious. I would be less suspicious if more comments were complaining or giving measured thoughts, but they weren't.


If I may really stretch the metaphor, I'd say individuals with vested interests are more like sod. They really hold those opinions, and are motivated to express them, but do so "unnaturally". Vs commercial efforts that are entirely fake and are therefore astroturf :D


Bias exists. But by removing the financial incentive from the definition of astroturfing, you make it useless when it comes to distinguishing from bias. Otherwise you turn it into "how many degrees of separation can I find to Waymos" and it gets infinitely tenuous.

I would agree that Waymo employees or shareholders shouldn't be commenting here without disclosing their affiliation. But that's a relatively small class: Googlers (and, increasingly tenuous, Xooglers) don't get paid by Waymo and don't get exposure to it through their GSUs. You can make the claim that Waymo's success adds to Google's brand prestige and has knock-on effects, and maybe that's true to some limited extent. But comparing that to being directly paid by the company to spread propaganda is quite the reach.


>But by removing the financial incentive from the definition of astroturfing

I didn't. I pointed out the less obvious ways a financial incentive might exist.

Excuse me for being skeptical of someone who uses a cutesy nickname for former Google employees.


> I didn't. I pointed out the less obvious ways a financial incentive might exist.

You did no such thing, which is exactly the issue.


>What's important is the existence or appearance of a conflict of interest, particularly if you're not disclosing your connection. Having a friend who works for Google who might be upset at criticism counts. Being a former employee with stock counts. Having any kind of monetary or professional interest in FSD companies succeeding counts.


I feel kinda the same way about Waymo as I did about Apple, as a relatively early adopter. The experience is genuinely magical, and there’s a massive gulf between Cruise and Waymo’s capabilities.

A sufficiently good product may produce customer reviews that are indistinguishable from astroturfing.


Just because some people's opinions differ from yours doesn't mean astroturfing is at play.


Did they give that reason?


The guidelines ask you not to accuse others of being shills.


"unable to move and blocking traffic" is not a new failure mode for cars. I get that it's particularly infuriating to be blocked by a car that is, mechanically, in good shape, but I don't find the story particularly damning. I've blocked traffic that long waiting for a tow truck.


The case he described is different in that, apparently, the vehicles in question lock all of their wheels and can't be accessed, even by law enforcement. A special tow has to be called in. It's a design decision made to protect the company at the expense of the public.


What's your opinion on Prop22?


I can't wait till those are generally available. I'd go out more if I could have a nice night and not have to deal with getting home later. Hell, I live about 1.5h from Seattle and I never go because it's just awful (to me) dealing with cars there. If I could get a cab all the way there and back? Oh man, amazing. Or even a park and ride would be nice. Cities seem so much more approachable to me if I don't need to drive in them. It's a me-issue, for sure, but still.


I feel like Seattle needs all-day and weekend Sounder service more than it needs self-driving robo-taxis. 1.5 hours away, you might be at the edge of or beyond Sounder reach, so this may not help you much unless you get a local bus to your closest station, but you could probably still just drive to your closest station and not have to deal with driving in the city.

If the Sounder ran like the ferries do (say, every hour until like 8PM and then a couple more trips before midnight), it would take the hassle out of going to the city for far more people than robo-taxis ever could.


I am staring aghast with European eyes at a city of 4 million people where the commuter rail doesn't run outside peak hours. How do you get home if you go out for a couple of drinks with colleagues, or stay late for a call?

You shouldn't be asking for a couple of trains between 8pm and midnight. They should be every half hour, or more! Midnight until 6am on Saturday and Sunday could be every hour or two.

(I live in Copenhagen, half the population of Seattle. Commuter trains run every 20 minutes from 5:00 to midnight, every 10 minutes from 6:00 to 18:00, and all night on Friday and Saturday nights. Other trains also run, and the metro is 24/7.)


We taxpayers already spend >$30 per person-trip for the Sounder. It currently doesn't connect well to transit on either end (most users drive to it), and half the city is fighting to make sure it doesn't connect directly with the new "light rail" station. Also our light rail is longer end-to-end than the Paris metro and half the stations are on the freeway with massive parking garage and no pedestrian amenities. Riding transit in the Seattle area is insulting and demeaning.

I've ridden the train in Copenhagen. The stops are near things! Stores and parks! Apartments and hotels! Shopping malls! Like, actually right there! There's a few stops like that in Seattle, but they're also the ones with junkies nodding off, con men "looking for some gas money", and other anti-social behavior.


We sabotage our transit. The Rail Runner doesn't go all the way to terminals at ABQ. That's insane! Were the Santa Fe airport people against it? What gives?


In the wee hours, catching a train and bus to get home is kind of annoying in Malmö. Mainly, the buses run no more than once an hour between 1 and 5am.

Most US towns are basically Uber/taxis only at that time. I live in Austin, where you could theoretically be one of the lucky 1% of the population that both lives near a train station and wants to go somewhere near where that train goes. Even then, you can't use the train past something like 11pm on weekends. CapMetro's hours are a pretty decent source of consternation for us here.


Malmö is a fifth of the size of Copenhagen, or a tenth of Seattle. Once an hour at night isn't so bad for a city that size, is it? The trains to Copenhagen run all night, so I can get home...


I feel this way too, having gone down the online rabbit hole of urbanist content. We don’t need cheaper Ubers, we need trains!


Even before talking about trains, you'd need to get rid of the suburbs and the strict separation of housing from commercial areas.

There is so much to undo, and people are so sensitive about the value of their property, that it is nearly impossible, or would take centuries to fix.


One war will do. Ask Europe.


WW2 was a negative step for many of the cities affected, as it gave them the chance to test the new "car" thing.


Trains are very expensive to build and maintain, so they only work if your city has enough density to support them with sufficient ridership. America really screwed itself by building cities for cars instead of people, and it's hard to change this at this point, especially when every effort at making cities there more walkable and dense is met with fierce opposition.


America has loads of cities and metro areas with way more density than European cities with effective rail systems. You’re right that it’s hard to change due to historical choices, but it may still be worthwhile to transition as soon as possible rather than continue down the wrong path.


Do we actually? European cities that I have been to all have zones with nothing but apartment buildings in dense urban configuration that go on for miles. I've never seen something like that in the states outside of NYC.


There's a select few. Boston. Washington DC. Chicago. The only city in which not owning a car does not involve sacrifice in your lifestyle is New York City.


I think the main problem with the Sounder is not a lack of density (Seattle and Tacoma are both plenty dense; also, both the rail corridor and the rolling stock already exist; the only missing expensive part is the crew) but rather that it is on a very congested freight corridor (particularly the north part to Everett) owned partly by the freight company. So it is hard for Sound Transit to negotiate more usage of the corridor for increased frequency.


Dunno where you are, but trains suck arse here in the UK. It's half the price to fly to Edinburgh than to take a train, and my wife recently took a stressful (delays and cancellations and missed connections) two-and-a-half-hour train ride that should have taken an hour.

Honestly I think buses are far superior due to not having so many dependencies.


I would have thought this was hyperbole had I not experienced it living in Kent years ago. It's in fact worse: it was twice as expensive to take a train from Canterbury to Stansted than to fly from Stansted to Glasgow.

Still, having experienced public transport in the UK (the Underground is everything) and everywhere else, the US car culture is the worst. It keeps poor people poor. Your car breaks down or your license gets suspended, and you cannot get to work. And the acres of parking tarmac filled with cars that are stationary 95% of the time. It's so ugly and such a waste of resources. I happened to live and work along one of the few routes in my city where I could get the bus to work when I first moved to the US. People at work thought I was too poor to afford a car; it was always funny to explain that public transport is the norm in most of the rest of the world.


Trains are much worse in the UK than on the continent, for sure. Especially when it comes to price.


Ya, nothing in the States is set up for that outside of maybe NYC. We just grew our urban planning post-war around cars. It is absolute insanity. Seattle is even considered one of the better cities for public transit.


The Sounder is a heavy rail train on rails that are shared with other trains (e.g. freight). It runs about 7 times per day in one direction and about 10 times per day in the other. There's no reason to run additional trains on that line, because taking it depends almost entirely on finding an empty space in parking garages that are full by about 6AM.

The Seattle area has a newer, dedicated light rail system (Link) that runs every 5-10 minutes from about 5AM to about 1:00AM. The stations are located much more conveniently and frequently, because they were placed based on where a commuter train should stop, not trying to piggyback on existing freight lines. There still isn't as much parking as there should be, but at least the stations are placed so that more than a tiny handful of people can get there without driving and parking.


I’m not so sure about that. The north line connects to Edmonds, which is a major ferry terminal used by a fairly large number of people on the Kitsap peninsula, Port Townsend, etc., plus another ferry terminal at Mukilteo for Whidbey Island. These are major connections which are unusable at the current 4-trains-a-day frequency (Sounder only runs 2 trips and the Amtrak Cascades the other 2; the latter does not stop at Mukilteo). The Link is good and all that, but it won’t reach Everett for another 20 years, and it won’t connect with the ferries either.

The south direction is a little better (and is improving even further with more Cascades runs on the horizon), but it is the same story. For example, Puyallup and Sumner are both decent-sized towns with centrally located stations which won’t get light rail, and the 1 Line won’t reach Tacoma until 2035 at least. There is also a decent transit center at Lakewood; however, I would argue it would probably be better to run the 620 all the way to Tacoma, from where Olympia people could jump onto a Sounder or light rail.


I agree. I actually live a bit closer to Seattle than OP, only 40 min by bus + boat (add 20 outside of commuter hours for a second bus). The Sounder doesn’t actually run anywhere near me (well, I could take the 118 south to the Tahlequah ferry terminal, a 10 min boat to Point Defiance, a Pierce Transit bus to Tacoma, and ride the Sounder from there; which urbanists call the “long way”), so this doesn’t affect me personally.

But you are absolutely right. Tacoma and Everett are the 3rd and 4th largest cities in the state and are in the same metro area as Seattle. The north line is actually worse, only 2 trains a day (plus 2 Amtrak Cascades trains), even though there is a major ferry terminal connection at Edmonds (and a smaller but significant one at Mukilteo). The Sounder service is dismal and needs to be improved (thankfully we are getting more service on the Amtrak Cascades, which runs the same corridor but goes all the way to Portland and Vancouver BC, though not nearly enough).

That said, the bulk of the population in the Seattle metro area is served by the Link light rail. They are almost done with an extension east to Bellevue (turns out building rail on a floating bridge is hard) and eventually it will reach both Tacoma and Everett. Then they will get the frequency they deserve.

As for me, how do I get home? My ferries actually run all night (although with up to 3-hour gaps in the middle of the night) and frequently enough. If there isn’t a bus on the island, I usually just call my partner and she picks me up at the terminal; sometimes I also meet someone I know on the ferry.

The funny bit is that I actually immigrated here from Europe, and my tiny European island in the North Atlantic has a way worse public transit system than Seattle. So for me this is a huge improvement. I’m actually going home to visit and am planning to see a friend of mine play a concert in a town 40 min from the capital, but there are no buses, so I have to call my mom to pick me up when the concert is done.


I live in Ballard, so the Sounder goes right by me. We (my kid and I) even watch it go by sometimes. Nowhere near a place to get on, though, so I’ve actually never been on one. It would be a cool way to link Ballard to downtown Seattle by rail before 2040.


Faroes? I visited during Covid, so there was no need to try staying out late.

But the population of the whole country is 1/100 of Seattle's. Buses in some places during the day, and taxis at other times, is proportionate to what's available in Copenhagen.

I grew up in a small village in England. Choices for a night out were staying over at a friend's house (much preferable) or splitting the £15 taxi between several teenagers, which was a decent price as far as our parents were concerned. (Or walking for two hours and keeping the money, don't tell mum.)


> Faroes?

Not quite that tiny. Iceland. The public transit system inside the capital area is decent (probably better than Spokane’s, despite being similar in size), but after midnight there is really nothing, and on the weekends the frequency gets kind of bad (not North America level bad, though). However, once you go outside the capital area, the system really sucks. Despite the country receiving over a million tourists, which more than justifies an airport train, there is only a bus every couple of hours to the airport and adjacent towns (with a pathetic shelter).


While we’re at it, could we get at least one late night/overnight Amtrak Cascades train? I’d go to a lot more Mariners or Kraken games if it didn’t mean choosing between the hassle of driving and parking or needing a hotel room after the game. If sports aren’t your thing substitute any activity that goes beyond 6pm.


The current ride pricing feels about 10-20% less than an Uber in SF - but I haven't really checked properly. So don't expect the rideshare product (Waymo One) to be a super cheap option.


This morning my coworkers rode in together in a Waymo. One told me it was $16 while an Uber would have been $35.


From my personal experience they are priced cheaper for all rides. But rarely that much cheaper - perhaps a surge time?

The variable that differs is time to pickup. I've found Waymo is generally pretty good for me in off-peak times - perhaps 2-3 minutes further away on average. But I have got the odd "20 minutes away" and then I just order an Uber.

But I will always pick a Waymo over an Uber if they are the same distance away. I love playing my own music and just being in my own world without another human to consider. It is truly a relaxing commute.


> But I will always pick a Waymo over an Uber if they are the same distance away. I love playing my own music and just being in my own world without another human to consider. It is truly a relaxing commute.

Not to ruin your commute, but do you know if there are interior-facing cameras monitoring the passengers?


Per Google, there are cameras inside the car: https://support.google.com/waymo/answer/9190819?hl=en


For how long are the recordings stored?


Are you able to play music on the car speakers from your phone? The process on iPhone seems very awkward (talking to google assistant instead of just selecting a song).


I do it the awkward way too. But I typically play an artist that I loved two decades ago and crank the volume. So "my music" wasn't the best way to describe it; more music that is just for me. I don't find the inbuilt music channels you can play from the center console to be that good. There are lots of UX things that could be done with music that wouldn't be hard.


Isn't that just because the Waymos are still funded by VCs? An Uber might've been the same 10 years ago, but now they've achieved world domination it's time to make a profit.


Waymo is fully owned by a public company. Economically they are indistinguishable from Uber (other than having a huge ad profit center to subsidize them). What VCs are you referring to?


It's still in the R&D phase though. It's not out there to make a profit like Uber or Lyft.


Waymo is definitely in the phase where they want to form a positive impression on all fronts: safety, convenience, price. Once widespread, it's possible that they'll charge more than Uber because of the extra privacy and safety in a Waymo ride.


Not to mention that Waymo will have to own their fleet and can't quietly pass depreciation costs onto their drivers like Uber does.


>Once widespread, it's possible that they'll charge more than Uber because of the extra privacy and safety in a Waymo ride.

It's possible, but I don't think it's something you can assume. Having to pay for a chauffeur isn't cheap, and that's what you're doing with Uber. A robotaxi avoids that cost since it drives itself.


Engineers are significantly more expensive than chauffeurs, especially since the chauffeurs get paid roughly minimum wage - or less. I too am very curious where pricing settles out.


Eventually when Waymo is deployed globally the expensive upfront engineering will be absolutely dwarfed by the cost savings of not having drivers.


Only if there is one engineer per car


Well no, it depends on the multiple. Minimum wage is $15000, and an engineer costs Google an average of at least 10X that in base salary, plus another 10X that in equity. I think the break-even is probably closer to 20 cars per engineer.

Currently they have ~250 cars in SF, and 2,500 employees of which about 600 are engineers.

To break even I'd say they probably need at least 12,000 cars, assuming they don't hire any more engineers and the other 1,900 employees are costless. This also assumes that the cost of the Waymo is exactly the same as a regular car, which at least today, isn't even close. And that the server infrastructure is free, and that insurance is comparable. It's probably also too early to know what the maintenance cost of these systems is compared to a vanilla vehicle.
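
Working that arithmetic through explicitly (a toy calculation using only the rough figures above, which are assumptions, not reported Waymo financials):

    # Back-of-the-envelope break-even from the figures in this comment.
    # All inputs are rough assumptions, not reported data.

    driver_cost = 15_000              # $/yr, one minimum-wage chauffeur
    engineer_cost = 10 * 15_000 * 2   # ~10x base salary plus ~10x in equity = $300k
    engineers = 600

    cars_per_engineer = engineer_cost / driver_cost
    print(cars_per_engineer)          # 20.0 driver-salaries to cover one engineer

    fleet_to_break_even = engineers * cars_per_engineer
    print(fleet_to_break_even)        # 12000.0 cars, vs ~250 deployed in SF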


Given how many taxis a typical city has, there's no way that the cost of engineers would come close to the cost of chauffeurs.

In fact, this is the way economics work for many forms of automation: you're replacing a large amount of labor time with a much smaller amount of engineering time. The engineering time is more expensive of course, but the much smaller amount more than makes up for the increased cost.


Litigating accidents will cost more because suddenly it's not "partner" at fault but your own people. Also, someone's gotta buy that new yacht somehow...


Uber was trying the "there can be only one" approach of operating at a loss to squeeze out competitors on both sides of their business model. The pandemic and other factors disrupted that plan. So now they are scrambling for profit while not being configured for it.


I read that Waymo is losing 2 to 3 billion dollars a year.



Last time I checked (in SF), Uber was $16.70 for a ride I wanted to take and Waymo was $25.40.

All things being equal (or anywhere near equal), I'll pay the human driver rather than the bot.


Funny, I’d choose the opposite.


Why, you don't like people and prefer paying big companies?


Don't forget no pressure to tip your Waymo


There will be. "Donate $1 to Waymo's autonomous safety research fund?"


No, it will be the United Way. Or to help give poor children blankets.

And it will be offered verbally, when Waymo's onboard AI detects you're on a date, so the date can hear.

So even if you give to charites in a meaningful way, you'll end up doing it again or look crass and be embarrassed. And as with many of these "at the till" charity collections, there is a fee given to the collector.

Who will be Google! Plus, not only do they get to track your co-riders, eg who you associate with, and where you go, and even all conversation in the car (anonymized, of course), now they get to track your charitable predilections, by offering choice.


Ok so say hypothetically they have the balls to record conversations in your car. Wouldn't they already be recording from your pocket?

(I know it's sarcasm and you're spot on about the charities lmao)


I wonder though.

You own the phone. And they mostly get in trouble for lying about the things they record on the phone, not for doing it.

But they own the car. And is it illegal for a cabbie to listen, and then tell their boss a stock tip, or that the Joneses are buying a new house?

To add distance, they could listen in the car in real time, process locally, and only upload highlights as tags/text, so there's no recording. They could also process live and then offer services in-ride, e.g. "notice your hair is shaggy, Google Maps says this is a good barber!", and of course, as the barber, if you want that referral, you pay Google Maps.

Google can't help itself. It is like a kleptomaniac: it knows it shouldn't, but for those voices in its head (managers)... do it, do it! You know you want to, you need to, you should!

Oh it's so shiny oh god...

And it relents, turning all pure dreams into ashes.

Waymo will be the ultimate platform to know even more about you, and to monetize it further.


"It's just going to ask you a question"


In the long term, prices will be determined by what the market will bear. It will be irrelevant if Waymo is cheaper to operate because they don't have to pay a driver. And if Waymo is the only game in town, as it seems to be far ahead of everyone else, expect only a minor discount over the regular human driven cab.


Just curious about prices there. How much would a cab, Uber, or Waymo cost for a trip that takes 1.5h to do?

To me (European) that would cost a fortune... probably doing that distance for a night out would be something to do by local train or bus service.


I took an Uber to/from Seattle Airport, but that was like 7 years ago back when they were in undercutting price mode lol. Iirc it was in the $80 range (per trip).

Well worth it to me because I dislike traffic and parking that much. However, I bet I could find a park and ride that's easy and save myself a decent chunk... not sure how to find a _safe_ park and ride, though... heh.

Autonomous rides would be great for me though, as they represent stability. Getting a driver was always a mixed bag. Would they be creepy? Talk a lot? Drive poorly? Etc. Down in Florida I had one break the law several times and in general drive like a maniac. I don't take that many Ubers/etc, but the ones I have taken have really felt like a mixed bag. A dice roll on all the variables, more bad than good.


Does Uber not fill this need today?


No, at least not for me. As my other comment[1] mentions, there's enough friction with Ubers that it's not an enjoyable experience. I'll use them in a pinch, but I avoid them if possible.

[1]: https://news.ycombinator.com/item?id=38012649


Good grief, two skateboarders' lives saved by a Waymo in two consecutive comments.

A couple of weeks ago I came across a biker partially lying in the road, trapped by his bike. I parked my van at an angle, so if it was hit from behind it should roll past him. I jumped out and performed an assessment (I'm a qualified, OK: certificated first aider). I diagnosed "pissed" and ascertained he was unharmed. We got his bike in the back of the van with the help of some concerned pedestrians who initially thought I'd hit him. I drove him home.

I'm not sure a machine would have noticed an odd lump next to a lamppost, with a civic rubbish bin next to it, in the rain. What sort of sensors do these Waymo things have? They will have to be really, really clever and seriously expensive.

I don't deny that eventually a vehicle with enough decent sensors and some fancy processing will outperform me but I've managed 30+ years on the road with just a few scratches, a broken wing mirror and a rear ending from an articulated lorry.

I've managed to return a hire car around Napoli (Italy) after a couple of weeks without any issues! That may not sound too impressive, but have a look at the state of the vehicles around Sant'Antonio Abate etc. - it looks like the locals play bumper cars. May I also note that I'm a Brit and we drive on the left. Italians don't. So I can adapt to local conditions. I have also driven across large parts of the rest of Europe and the North Americas. I can adapt to local driving styles, from "use your entire car as the indicator" as seen in the Amalfi coast area (inter alia) to the "no, after you" you tend to see in large parts of the UK and Eire. I can deal with all the different regulations. A classic European one you don't see elsewhere is the white diamond sign with a yellow centre that indicates that you have right of way until it is cancelled with black stripes. Then there is the French "priority to the right" thing (which has also pervaded the UK somewhat wrt roundabouts).

I can negotiate this with ease: https://www.google.co.uk/maps/@51.5628345,-1.7713341,110m/da... and this on a wing and a prayer: https://www.google.co.uk/maps/@48.8654026,2.3222455,464m/dat...

I'd love to see a Waymo navigate those.


Though Waymo et al aren't really competing with unusually good drivers, more with average drivers, and they are probably safer than those, on the roads they have been trained for.

(One study: 100% reduction in the frequency of bodily injury claims and a 76% decrease in property damage - https://www.ktvu.com/news/waymo-says-its-driverless-cars-saf...)


The thing is, most drivers should not be allowed to drive, given their poor training, driving standards, general lack of empathy, and/or medication or substance abuse issues. But society incorrectly assumes driving is a right, when it should be considered a privilege that you earn and that can be revoked at any time, based on regular tests plus any tickets.

I estimate that only around 20% of drivers really should be allowed to drive, given their current driving standards. Possibly more could be allowed after additional training and showing better driving standards, knowing their license is at risk.


Ironically, it is very unempathetic to claim 80% of drivers lack empathy.

Your super-claim that you are indeed empathic because you're concerned about everybody else does not justify the lower-level empathy violation.

You have been P.C.'d


> Ironically, it is very unempathetic to claim 80% of drivers lack empathy.

Well I may not have phrased it correctly but I am not claiming that. Lack of empathy is one of the various causes of a majority of drivers being unfit for driving.


Ahem... "lack of empathy", how do you plan to quantify/measure that.

Also, I was under the impression that in most countries on this planet, the privilege of being allowed to drive a car is handed out with the driver's licence, which in most countries you actually have to have training and a test for. I also thought it was normal practice that these licenses also get revoked, based on tickets...

The only gripe I have with the system is that the elderly are not automatically subjected to a regular test of their mental faculties and remaining sight. I am getting sick of 90 year old grandmas killing 3 year olds "by accident".


> Ahem... "lack of empathy", how do you plan to quantify/measure that.

That is a tricky one.

> Also, I was under the impression that in most countries on this planet, the privilege of being allowed to drive a car is handed out with the driver's licence, which in most countries you actually have to have training and a test for.

Training which varies a lot depending on the country. I did a lot of hours + 3000 km under the supervision of my father + dedicated classes with emergency braking from 90 kph to 0 on varying surfaces, including having 2 wheels on gravel and 2 wheels on pavement, to be able to handle situations where the car would naturally go sideways. My Mexican partner, in comparison, took a handful of classes using an automatic car and only passed the test using a simulator, and there you go, she had her license.

> I also thought it was normal practice that these licenses also get revoked, based on tickets...

It takes an absurd number of them for that to happen, and we should still require periodic tests and knowledge refreshes of the rules of the road.

> I am getting sick of 90 year old grandmas killing 3 year olds "by accident".

I am pretty sure they represent a marginal fraction of the accidents/killing compared to distracted drivers.


> I am pretty sure they represent a marginal fraction of the accidents/killing compared to distracted drivers.

In Germany, if seniors end up in traffic incidents, they are in the vast majority of cases ruled responsible for causing them [1]. They may not be the group that causes the most accidents, partially because they drive less than someone in their 40s commuting for hours every day, but the difference in at-fault cases is nonetheless significant.

https://www.tagesschau.de/inland/verkehrsunfaelle-senioren-1...


Thanks for posting this. It is a small step towards ensuring public safety, but it is a datapoint which might get older people to think, or more likely, revolt. Still, thanks for helping to fight the tyranny of the elderly.


Distracted or elderly, we seem to agree that regular re-evaluation of drivers would be necessary.

And yes, I know that the standards of different countries are vastly different when it comes to obtaining a driver's license. I could have bought a driver's license while in South America. I am writing this because I am blind. I entertained the thought just for fun and to be able to show how broken the system is.


Re old folk driving, I think that may be a problem that could be solved by self-driving tech. The trouble with just banning oldies when their driving is getting less good is that a lot of them rely on the car, and they cause fewer deaths than teen drivers, who are much less reliant on cars. I'm wondering if the tech will kick in in time for my mum, who's 86. My other relatives have had to stop around 90 due to hitting stuff.


Frankly, I don't care if they rely on a car or not. All I care about is how much they endanger other drivers and pedestrians. And I don't want to wait for self-driving tech for this problem to be solved either. I am a blind pedestrian. I can't jump aside if some old fart suddenly fails to react and is about to hit me. I am afraid of mentally and physically unstable drivers. AFRAID!


"I estimate at around 20% the amount of drivers that really should be allowed to drive"

I don't know where you are but in the UK the driving test is pretty tricky. I failed it twice before passing and that was before it had the theory test added on. That was 30 years ago.

We all have our own perceptions of other drivers, but you do need to try to think in their shoes. Today, whilst driving home, I noticed a police car's lights to the left and decided to indicate and move from lane one to lane two to give them room. The Land Rover behind me decided to undertake me and blunder through in lane one. Now, the LR made a perfectly legitimate move - staying on track. But they then undertook me (naughty). They did their thing and came across as wankers.


I think this ignores the way society is governed. It's easy for an engineer-mindset to think the bar to be cleared is "better than the average driver". But these systems must operate in a society that is often employing something other than an engineering mindset.

I doubt the public and their political representatives (absent regulatory capture) would turn over control to a merely average robot. Do you think, for example, most people would be okay in a drone airliner that is equivalent to the average pilot? I suspect most people would balk at the idea. A large part of the public policy piece is about trust; humans have evolved to understand and predict the behavior of other humans, which leads to mental models about who can be trusted and who can't. We don't have those same evolved intuitions about robots, and when combined with our natural risk-aversion biases, it's going to take a lot more than "average performance" to get society as a whole to fully trust robotic safety-critical operations.


>(absent regulatory capture)

Yes, trial attorneys will keep the bar very high for safety. As they should. But the bar is not infinitely high. Tech companies are salivating to disrupt the transportation market. They want a piece of your car payment, your insurance payment, your taxi spend. Insurance converts the minute risk of large accident costs into a fixed premium payment. Lobbying helps convert financial capital into political capital and then into regulatory change. Our political process tends to follow the desires of the top 0.1% more than the bottom 40% combined.

Our current regime of car infrastructure was once unthinkable. Entire neighborhoods were bulldozed to make way for highways. Yet today the government paying for highways and roads, and mandating that each building be surrounded with ample parking, is the status quo.


I’m not sure the original intent of the national highway system can be chalked up to the interests of the 0.1%. It’s generally attributed to national security, which is a very public interest.


While Eisenhower was inspired by the German autobahn, military use of roads was a sales talking point. The real reason for highways was to allow favored Americans to live in detached houses while still accessing city jobs. We could have it both ways!

Rail is a much more efficient way to move troops and equipment. Modern warfare requires lightning fast movement which means cargo planes. Eisenhower himself used a “New Look” policy of relying heavily on nuclear weapons so that the army could be scaled back.


After looking into it a bit, I think you're right about the impetus of the national highway system. But, while rail is a more efficient way to move materiel, it's also less flexible and more vulnerable. I think in that aspect, a highway system is probably preferable.


I probably should have said better than the average taxi driver, as those generally don't include teenagers, 90-year-olds and so on. Re the politics: say human drivers in an area cause 10 deaths a year vs robots causing 6. Do you think voters are going to say we should have more deaths because humans doing it is better? That said, I think people generally expect automated systems to be safe, and self-driving cars probably will be once they de-glitch them. And then self-driving cars will compete against other self-driving cars more than against humans.


They have multiple radars, lidars, and cameras. They're far more capable of detecting obstacles than people are.


They have different capabilities, but it doesn't follow that such technology is better than the built-in tech in homo sapiens.


One underappreciated benefit they have is that the "eyes" on a Waymo car are mounted at the corners and high above the roof, giving it a much better vantage point than just sitting in the driver's seat.


It is demonstrably better. They see more and react faster. They can see around obstacles that humans can't and they never get tired or distracted.


See more in the rain and dark? Sure.

But can they look at a driver's face and realise that they are distracted with their phone and haven't seen you?


This is an important point. Humans have evolved a way to intuit the thoughts and behaviors of other drivers. It's why people who are mentally ill are unsettling; it's very hard to pick up on their intentions and course of action. That's why I think the hurdle isn't "better than the average human driver." Because humans won't trust a machine in the same way as another human, the level of safety for AVs may have to be much, much greater before the public as a whole trusts them with safety-critical decisions.


Yes they can. They are absolutely capable of looking at gestures. That matters a lot more when you're trying to determine the intent of a pedestrian than of a driver. If it's another driver, the system can also do real-time physics calculations to avoid a vehicle straying into the AV's trajectory. And they might even do some kind of computer vision with other drivers to determine intent. Also, if another driver just hits the car because they were distracted, why would the AV be at fault?

These are really sophisticated systems and they've really thought of a lot of issues. All of these companies have hundreds, if not thousands of engineers who have been working on this problem for over a decade, as well as an army of lawyers trying to get things into compliance. I don't think "well it can't do x as well as a human" is going to hold up in 10 or 20 years when these systems are clearly much safer. Human driven cars will eventually be uninsurable.


> All of these companies have hundreds, if not thousands of engineers who have been working on this problem for over a decade,

So do a lot of tech companies that make a lot of dangerous, fraudulent, or just bad products, or that try to develop technology that never comes to fruition.

The tech is impressive, but that doesn't lead to an assumption that it must be able to do whatever is needed. I've seen enough tech hype cycles to know them when I see them. In the end it will have great strengths and great weaknesses, and we should learn what they are.


>They are absolutely capable of looking at gestures.

But 'looking at gestures' isn't the same thing as having a real-time interpretation of the sensors. This, to the point of another comment above, may be conflating 'perception' with the larger problems of 'self-driving.'

Humans see a gesture and understand the meaning very fast because we process a lot of communication on the subconscious level. If you read the safety report of the Uber incident, it shows that the self-driving system could 'see' the pedestrian, but it struggled to effectively classify them. To be fair, that was years ago and another company, but we (as the public) don't have much insight into how good (or bad) the software is.

So 'seeing gestures,' 'correctly classifying gestures', and 'correctly classifying gestures in a time-sensitive manner' are varying levels of difficulty. The last one is the one that matters for safety-critical operations, and the requirements change with the velocity of the vehicle. IMO, there should be a regulatory framework that addresses a baseline of performance on these systems.


That comparison is a very common sales technique and fallacy: Take two products, list the things product B does better than A, and therefore B is better!


I'm not sure I get what you're saying. I'm saying the Waymo is probably better than me at detecting and avoiding obstacles. That is the point of this thread.


I understand what you are saying. I'm saying that argument presented doesn't support that conclusion (though it may be true for other reasons).


I do want to clarify my comment: the skateboarder would not have died. Waymo provided a larger margin of error for the board rider than a human could possibly have.


No need for this comment - you deployed an anecdote from personal experience. That's fine.

However you finish with an assertion with no evidence. Care to elaborate? What on earth does this actually mean:

"Waymo provided a larger margin of error for the board rider than a human could possibly have."


Humans cannot react very fast; this incident comes down to fast reactions and the good all-around vision in the Waymo. On these measures a human isn't competitive.


And this is just something you've decided is true or...? For one sensor with determinate output based on inputs from a finite set, sure, I won't argue with you. But in any domain with indeterminate outputs coming from an infinite set of inputs, there is no meaningful notion of reaction time. There are only imperfect decisions made with incomplete information, during a dangerous spiral in which the number of possible responses shrinks while information completeness grows (should I react now, or wait until I have more information about how effective each of the dwindling paths available to me is?) - to name just one complexity.

I believe that we will get there and sooner than skeptics believe, but we are so far away from machines having driving skill equal to that of the average human driver, who happens to be driving while distracted 99% of the time.


Here an objective difference is described: a Waymo vehicle reacted to new visual information in a direction that a human driver would not be likely to be looking.


> I parked my van at an angle, so if it was hit from behind it should roll past him.

Good tip!


The yellow diamond with a black stripe puts you into "French priority to the right" mode and it's not restricted to France, it's pretty common across the continent.

I like how it's indicated with road paint here in Switzerland, e.g. https://www.reddit.com/r/Switzerland/comments/wp6clo/who_has...


> I'm not sure a machine would have noticed an odd lump next to a lamppost, with a civic rubbish bin next to it, in the rain.

As opposed to what? You don’t have RADAR, or LIDAR, or echolocation, or any other way to “detect” a person in the road. You have human vision.

This is the idea behind Tesla FSD. If human vision is good enough, computer vision is good enough. We have cameras that can approximate human vision. All the other companies using sensors are doing so as a crutch, because they didn’t have the chops to solve the vision problem.


The correct question to ask isn't "is computer vision enough", it's "if humans had LIDAR sensors, would they use them to cause fewer road accidents".


I really don't need LIDAR. I have roughly 3 billion years of evolution and the end result - eyes, brain etc.

LIDAR is simply a sensor and a very inferior one, in general, when compared to my visual system. For starters it needs something to interpret it. My eyes have a 52 year old brain stuffed with experience behind them and it isn't always a hindrance!

I can reason about other people's driving style and react accordingly. I can also describe my actions and reasoning on internet forums.

I really do not want an inferior sensor such as LIDAR inflicted on me, nor would I want it to be the sole source of information for something that conveys me.


Quite - I don't have those sensors but I do have madly myopic (sort of corrected) stereoscopic vision and a brain, with 30 odd years experience in it. Oh and I have ears. I generally drive with the window down in town. I can look left and listen right.

My office has an entrance onto the A37 in Yeovil (1) Bear in mind we drive on the left in the UK. Picture yourself in a car next to that for sale sign and trying to turn right. That white car will be doing 20-40 mph or much more. In the distance is a roundabout (Fiveways) which is large enough to enable a fast exit and people love to accelerate out of a roundabout. As you can also see this is a hill so the other side is quite fast because cars have to brake to keep down to the speed limit of 30 mph. That's just one scenario.

Anyway, back to your idea that Tesla FSD (Full Self Driving) is equivalent to "me" - that's debatable. I do have my limitations but I can reason about them, and I can reason about what FSD might mean as well.

You assert: "If human vision is good enough, computer vision is good enough." People and computers/cars/whatevs do not perceive things in the same way. I doubt very much that you have two cameras with a very narrow but high res central field of view with a rather shag peripheral view which is tuned for movement. Your analogue "vision" sensors should be mounted on something that can move around (within the confines of a seatbelt). Yes I do have to duck under the rear view mirror and peer around the A pillar etc.

I have no doubt that you have something like a camera with a slack handful of Google Corals, trying to make sense of what is happening but it is really, really complicated. I actually think that your best bet is not to try to replicate my or your sensors and actions but to think outside the box.

Have you ever considered a drone?

Cheers Jon

(1) - https://www.google.co.uk/maps/@50.9471642,-2.6382854,3a,75y,...


Tesla Vision doesn't have an AGI behind it though, lol.

And why gimp ourselves? If we had an extra sense to navigate with, we would 100% be using it.


A rock instead of a hammer is also probably "good enough". Still a bad idea.


Surely some of those "double take(s)" are because you're not driving and thus not on alert like you would/should/better be if you were driving.


I wonder if this will become new pedestrian behavior? Once people learn that a Waymo will stop for them no matter what, they can just cross in front of it.


Human drivers are supposed to stop too.


Or maybe they know they can do anything in front of the car because it will always yield. It will be interesting to see the game theory develop when most of the roads are full of driverless cars and you can gain an edge on the road risk-free.


I don’t know about “do anything in front of the car”… it’s still a several thousand pound mass that isn’t exempt from the laws of physics… but I for one would be MUCH more willing to be a cyclist on the road if most of the cars were autonomous vehicles behaving within the letter of the law.


s/overload/overlord


You are on a technology forum; you should know better than to call it magic. It's talent and sweat.


Yes, but.

Attributing every great outcome to "talent and sweat" is dangerous in safety critical domains. Often it's (at least partly) luck, and missing that distinction (combined with a lot of human biases) can make a team confuse "being lucky" with "being good". The problem with luck is it tends to eventually run out.


Since it seems people are interested: I had one situation in which this sort of thing made me a bit uneasy (so one gripe, actually). I was sitting in the back right seat. Waymo was waiting to make an unprotected left turn over two lanes. There was a vehicle waiting to also make an unprotected left in the other direction. Behind it was a truck. Waymo made that turn where I wouldn't have. I thought it through after, and it had at least two advantages over a human here: 1. It knew the car situation behind the truck well before a driver would have considered it. 2. Its front lidar could see around that truck earlier than a driver could. I still provided feedback that I'd have preferred it not make that turn even if it was safe to do so. At the end of the day I'm the customer and I want to feel safe at all times, not just be safe.

This was the turn - https://www.google.com/maps/@37.7337193,-122.4350752,3a,75y,...


I've been in this situation, and it's something that seems to get tweaked every week.

They definitely pushed the "aggressive" lever up a little over the last month.

I think it's one of those cases where it's sort of obvious it can calculate trajectories and speeds much better than a human, so it's a safe manoeuvre in theory, but it "feels" bad as a passenger.

Same thing for where it can see that it can squeeze through a tight "lane" but a normal human driver would probably wait until the oncoming traffic had passed.


It's a lose-lose situation. I've been reading conversations about self-driving cars for years, and what you always heard back in the day is that they were way too conservative on these kinds of turns and blocked traffic. Hell, even in this very thread people are saying that about Cruise. Yet when they make it more aggressive, people feel unsafe.

As you say, this is something they are constantly tweaking to find the perfect middle ground. That being said, maybe people are extra hyper aware of the turns when it's a self driving car, and wouldn't bat an eye or pay attention if it was a normal taxi driver?


People have an interesting “trust curve” regarding automation. At first they’ll be very suspicious and critical of it and any issues they have will be blamed on it. Then some point later when they’re used to the system, this attitude will suddenly flip around and they’ll start regarding the system as infallible. Sounds like this might be happening here.


Fair point. When I take a passenger in a Waymo and it is their first time, I'm hyper aware of the driving. But for myself personally, I'm mostly tuned out. For me it started to feel "normal" as early as about the 4th ride. So they could probably subtly adjust the "middle ground" based on the passenger's experience and feedback (as long as it doesn't create inconsistencies for surrounding vehicles).


The thing is, the capabilities are not likely to be a strict superset of a human's.

So, if you make it have the same aggressiveness on "average" by having it take advantage of its superior capabilities, those situations are going to feel sketchy to a person.


You should be able to set the "aggressive" level. I believe Tesla has this option internally on their FSD Beta. Hopefully you will be able to set this level for robot taxis and personal vehicles like you can with (some) human drivers.


Call me cynical, but this feels like a way for tech companies to absolve themselves of responsibility.

“Your Honor, the pedestrian would have never been struck if the driver had set the system in its most conservative mode as we suggest on page 1,047 of Terms and Conditions.” /s


One huge advantage sensor-equipped vehicles have is knowing the velocity and acceleration of traffic around them.

Human drivers are often conservative because they don't know the speed/acceleration of a potentially colliding vehicle.

We look at something and estimate (How fast? Speeding up or slowing down?), but we don't know.

If you could put an actual number on it, I think you could drive a lot more "aggressively" and still be perfectly safe.


When a Tesla can accelerate to 60 mph in less than 2 seconds, knowing the instantaneous velocity and acceleration is not very meaningful. You really need to be able to predict what the acceleration profile will look like over the next N seconds of your maneuver. Holding the currently perceived velocity and acceleration constant over the next N seconds is one naive way to do that. But the actual set of possible trajectories of the other actor is much larger, and you need to drive more conservatively to account for that.
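A rough sketch of the difference (illustrative numbers only, not from any real planner): freezing the measured kinematics gives a single predicted position, while bounding the accelerations a performance EV could plausibly apply gives a much wider envelope that a conservative planner has to respect.

    # Naive constant-acceleration extrapolation vs. a bounded reachable envelope.
    def extrapolate(v0, a, t):
        # 1-D position after t seconds; clamp braking at standstill
        if a < 0:
            t = min(t, -v0 / a)  # vehicle stops rather than reversing
        return v0 * t + 0.5 * a * t ** 2

    v0, a_now, horizon = 20.0, 1.0, 3.0           # m/s, m/s^2, seconds
    naive = extrapolate(v0, a_now, horizon)       # ~64 m: holds current accel constant
    envelope = (extrapolate(v0, -8.0, horizon),   # hard braking: ~25 m
                extrapolate(v0, 9.0, horizon))    # max launch: ~100 m
    print(naive, envelope)

The gap between those bounds is the extra margin the naive prediction ignores.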


But they don't know the capabilities of vehicles - say, a Vespa vs some sports bike.

Humans do, and will behave differently around these vehicles.


That seems like pretty weak sauce.

If your sensors can distinguish a Vespa from a sports bike reliably, then compared to all the other things that an autonomous vehicle has to cope with, programming it to treat those two as different categories of vehicle shouldn't be particularly hard.


And really we're talking about mass, right? Which is approximated by size

E.g. bicycle vs motorcycle vs Miata vs BMW 8-series vs Suburban vs tractor-trailer

Because that bounds agility, acceleration, and stopping distance, at least to the precision that it would differ in the next 10 seconds.


> at least to the precision that it would differ in the next 10 seconds.

10 seconds? 10 seconds is an eternity.

Some vehicles of similar size might be more than 1/8th of a mile apart in straight line performance in 10 seconds-- let alone the difference once we've got multidimensional vectors.


Exactly. I was thinking the longest span over which an accident could unfold.


My point is-- vehicle dynamics only make a difference in the very short term, because after like a couple of seconds, vehicles can be almost anywhere relative to you even with low performance.

(But, they can be quite different on the timescale of a second).


It only takes 0.2 seconds to turn a motorcyclist into ground beef.

If you can't tell a bicycle apart from a 4-cylinder racing bike, let alone a Vespa, that's what happens. And Tesla can't. It also can't read hand signals given by cyclists. It can't tell apart a donkey and a horse.


I've been saying that vehicle dynamics is useful information in the short term. So if you're trying to argue with me, I don't think you've understood my point.

If your goal was to interject an anti-Tesla offtopic comment to the general discussion of vehicle dynamics, it was unwelcome.


To be clear and dispel your misconception:

These cars estimate rather than ‘know’.


They objectively know in a much truer sense than any human.

(Vehicle) Lidar/image multi-point calculation against a precision chronometer

(Human) Parallax and object size change estimation against an extremely irregular, low-precision chronometer


Really they both estimate, but the vehicle tends to have smaller error bars on its estimates. Which still goes to your point.


Humans are terrible at high-velocity estimates. That's one of the conditions described in the accident investigation report for that Irish plane which smashed a bunch of runway lighting due to insufficient takeoff power.

A brand new top-of-the-line passenger jet would say e.g. "Caution: Acceleration" because it can work out the velocity of the plane, from there the acceleration, and - given the remaining runway length (it uses GPS to identify the runway it's on) and the takeoff speed - conclude we won't make it. Humans only decide that after it's far too late. Because it comes much earlier, the annunciation allows pilots to abort the takeoff and investigate from the safety of the ground. In the Irish case they'd typed in the wrong air temperature, so the engine performance was much worse than expected; with the right air temp it would have flown just fine.
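For the curious, the check that annunciation implies is simple kinematics (a hedged sketch; the real avionics logic is proprietary and every number below is invented): accelerating from speed v to rotation speed v_r at constant acceleration a consumes (v_r^2 - v^2) / (2a) of runway.

    def runway_needed(v, a, v_r):
        # metres of runway to reach rotation speed at constant acceleration
        return (v_r ** 2 - v ** 2) / (2 * a)

    v = 50.0            # current ground speed, m/s (from GPS)
    a = 1.2             # measured acceleration, m/s^2 (degraded thrust)
    v_r = 80.0          # rotation speed for this weight, m/s
    remaining = 1500.0  # runway left, m (GPS fix + runway database)

    if runway_needed(v, a, v_r) > remaining:   # 1625 m > 1500 m
        print("CAUTION: ACCELERATION")         # early enough to abort safely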


With a limited understanding of truth and know, yes.


LOL. Tell race car drivers and truckers that they're inferior. Their senses are probably tuned just as finely as any vision system. You really discount non-quantitative measurements - as most tech people here do. You are wrong, though, that the best of the meat brains are so inferior.


The only thing wrong with non-quantitative measurements is that physics has a well-known quantitative bias.


In the same way a highway cop's lidar gun estimates.


It is also very important to consider the effect on other drivers.

A self-driving car might calculate that it can squeeze through a gap in oncoming traffic, but doing so will probably cause the human drivers of those cars to slam on the brakes and create a large crash.

So unless they’re driving on a road where only self driving is allowed they’ll probably need to be much more conservative than they can be.

Also thinking about it, the first roads to become robot-only driven will probably be inner city streets where pedestrians abound, so no aggressive driving there either.


I'm laughing because I know that intersection well from taking my kids to basketball practice. I might have had the opposite reaction to you had my car not gone when it could. Also, you have greater visibility as a driver in the front than a passenger in the back, so it's likely that an experienced SF driver might have also done the same thing (speaking as someone who's driven in the city for over 20 years)


I get your point. But the approach to the situation differs, I think. A human driver would creep into the intersection to get a better view and then go if clear. The Waymo proceeded with far more confidence. Just better vision. It started tracking vehicles earlier than a human would (this is shown in a simplified way on the display inside - but I wasn't watching in this case). As it pulled into the intersection, its front camera and/or lidar had a much better angle than a human driver, so it could make a go/no-go assessment earlier and without hesitation.


I suspect a lot of the concern could be handled by having the car display a map of moving things that it will interact with and its projection of encounter times/distances.


To be fair, a human cab driver will usually know the dimensions of their vehicle and what they can get away with better than a random driver.


Another fun anecdote. When a Waymo is in the vicinity of another Waymo they acknowledge each other in a whimsical/cute way - the windshield wipers swipe once (I'm not sure they still do this). But it wasn't implemented very well at some point, since the Waymo I was in was behind another one for many blocks. The wipers went at least 17 times. As an early adopter I provided lots of feedback. Even little things like: I tried working on a laptop in the back* and when I went to type in my 1password password I reached up to cover the camera that is positioned perfectly to see my keystrokes.

* Not recommended in SF. But I imagine on the 280 it would be great one day.


> When a Waymo is in the vicinity of another Waymo they acknowledge each other in a whimsical/cute way - the windshield wipers swipe once

Pretty sure that's just lidars messing with Jaguar's rain sensors.


That explains a lot. Thank you.


Why is working on a laptop not recommended in SF?


The start-stop traffic gave me car sickness quickly. The 280 is a 4-lane highway where, at certain times of day, adaptive cruise control gets very little workout.


It may actually be getting better over time. A lot of the screen-related car sickness goes away with the refresh rate bumped up to 120Hz. Fortunately, these are slowly arriving in not-completely-gaming laptops.


That's interesting. I wonder if that's the same phenomenon that causes VR sickness with low refresh rates


Car sickness happens when reading a book as well. It's your inner ear being out of sync with what you are witnessing. Which is why looking out the front window helps.


There's more than one reason for the sickness to kick in. It can happen with a book, but for some people a slow refresh rate (vs. a book or a high refresh rate) makes a massive difference.



"The" 280? Has Norcal shifted to usage of definite articles for freeways?


Probably a transplant.

For those interested: _Generally_, Bay Area natives will refer to highways as "280", "101", maybe even "Highway X".

SoCal natives (and many others) will, _generally_ say: "the 101", "the 280", etc.


I was genuinely curious, haven't lived in the bay since 2011 and it seemed like it could be shifting. When I lived in the central coast it was a clear shibboleth when meeting people.


> Another fun anecdote. When a Waymo is in the vicinity of another Waymo they acknowledge each other in a whimsical/cute way - the windshield wipers swipe once (I'm not sure they still do this).

What is your motivation here? I can see why some think it looks like promotional material.


What do you even mean by motivation? If it's true (though it's probably a sensor error), it's perfectly reasonable to mention. And lying about it would be ridiculous. What scenario is there where you're worried about bad motivation here? It's not unrealistic for someone to like the idea.


It sounds promotional, having little other value. But the question wasn't answered; instead it was attacked - a good way to avoid answering a question. Oh well.


> But the question wasn't answered

That's because you asked a question with no valid answer.

Seriously, what would you accept as being an innocent response? Anything equivalent to "I thought it was neat" is apparently not good enough since that sentiment was already in the previous post.

If you want to accuse them of being a shill, just do it directly. (But only if you have a good enough reason to break the rule against doing so.) Don't do this roundabout "what's your motivation" thing where you implicitly reject the innocent motivation.

And why would it need to have "other value"?

> a good way to avoid answering

Since I'm the only one that responded, are you suggesting I'm in cahoots with them? Come on, dude.


a) To share some anecdotes, as I was lucky enough to get early access by simply applying through the app. b) I'm a believer/early adopter and, as such, I guess I've enjoyed sharing my experiences.

It turned out I was wrong about the windscreen wipers: LIDAR from the other vehicle tripping the rain sensors.

In any event, I do wish Waymo the very best. They aren't just trying to do something hard, they are doing it whilst getting lots of opposition from SF gov. Whether the business model works - time will tell.


Thanks!

> b) I'm a believer

That is interesting to me. Not that you shouldn't be one, or that such an attitude is new to the world, but I always wonder: Why care about a for-profit company's product? Usually these companies don't care about anything or anyone else.


> When a Waymo is in the vicinity of another Waymo they acknowledge each other in a whimsical/cute way

I hate “whimsical” stuff like this. Part of maturity/adulthood is coming to the realization that you don’t have the right to conscript other people into your sense of humor.


> Part of maturity/adulthood is coming to the realization that you don’t have the right to conscript other people into your sense of humor.

And yet, you aren't willing to accept that they have a different sense of humor without hating them. Ironic.


I don’t hate them, and never said I did. I find their childish antics annoying.


It is, however, interesting to me to confirm they are aware of other AVs and can sync with them, enabling better safety, and could potentially optimize routing through traffic with the combined data. Imagine if the car in front could warn the car behind of an incident, enabling it to take action before a human driver would even be aware of the incident.


> Imagine if the car in front could warn the car behind of an incident

They can - that is why cars have all these pretty lights in the back! They can warn of several things, like the intention of turning, braking, accidents. It's amazing.

And it works wonders if the human driver focuses on the traffic ahead of the car just in front.


Yeah, the lights in the back of your car can warn of exactly four things:

- The car is going to turn left (one blinking yellow light)
- The car is going to turn right (one blinking yellow light)
- The car is stopping (three solid red lights)
- An unspecified error occurred (two blinking yellow lights)

It's not exactly a high-bandwidth form of communication. Most of the purpose of the lights on the back of your car is to remind other human drivers of your continued existence. As you point out, some of this deficiency can be made up by looking past the car in front rather than only at it.

Imagine a world where you could automatically talk over an intercom with the driver in front of you about traffic conditions. I bet you'd find safer, better-informed driving. Self-driving cars make that theoretically easy and humanly pleasant.
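For flavor, here's a purely hypothetical version of the message a car could broadcast to the one behind it. Real V2V efforts (e.g. the SAE J2735 Basic Safety Message) define fields along these lines, but this sketch is not that spec:

    from dataclasses import dataclass

    @dataclass
    class HazardMessage:
        sender_id: str      # pseudonymous vehicle ID
        position: tuple     # (lat, lon)
        speed_mps: float    # sender's current speed
        decel_mps2: float   # how hard the sender is braking
        hazard: str         # e.g. "debris", "stopped-vehicle", "hard-braking"

    msg = HazardMessage("veh-1234", (37.733, -122.435), 3.2, 6.5, "stopped-vehicle")
    # A follower receiving this can begin slowing before its own sensors
    # (or its human driver) can see past the car ahead.

That's a few orders of magnitude more information than three brake lights can carry.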


"When I became a man I put away childish things, including the fear of childishness and the desire to appear very grown up" - CS Lewis


It's hard to get right, and almost no one gets it right. Remember when Lyft cars had pink moustaches on them?


I disagree strongly, I want more personality in the people and things I see in life. Even if I'm not a fan of someone's sense of humour I can appreciate an attempt at levity and whimsy.


That is actually an interesting question. We track how many accidents the automated cars cause, but I'm not sure whether the accidents they prevent are tracked.


That metric is tracked indirectly, by the lower number of accidents that AVs are involved in per distance travelled compared to human-driven vehicles.


> by the lower number of accidents that AVs are involved in per distance travelled compared to human-driven vehicles

Just a note that this number is hard to calculate accurately with an acceptable degree of certainty.

Anyone claiming that AVs are involved in fewer accidents per distance traveled than human drivers is either extrapolating from incomplete data or baking some unreliable assumptions into their statement.

Welch Labs has a good introductory video to this topic: https://youtu.be/yaYER2M8dcs?si=XEB4aWlYf6gnnTqM


Tesla just announced 500 million miles driven by FSD [1]. Per the video, were it fully autonomous they could have a 95% CI on "safer than human" at only 275 million miles [2], but obviously having human supervision ought to remove many of the worst incidents from the dataset. Does anyone know if they publish disengagement data?

[1] https://digitalassets.tesla.com/tesla-contents/image/upload/...

[2] https://youtu.be/yaYER2M8dcs?t=477


This just shows how statistics can mislead. I own a Tesla with FSD and it's extremely unsafe for city driving. Just to quantify, I'd say at its absolute best, about 1 in 8 left turns result in a dangerous error that requires me to retake control of the car. There is no way it even comes close to approaching the safety of a human driver.


I only spent 3/4 of my post adding caveats, geez. Thanks for the first hand intuition, though.


The caveats are missing the point that FSD is very obviously less safe than a human driver, unless you constrain the data to long stretches of interstate road during the day, with nice weather, clearly marked road lines and minimal construction. At that point, my "intuition" tells me human drivers are probably still safer, but under typical driving conditions they very obviously are (at least compared with Tesla FSD; I don't know about Waymo).


The reason why I spent 3/4 of my post on caveats was because I didn't want people to read my post as claiming that FSD was safe, and instead focus on my real point that the unthinkable numbers from the video aren't actually unthinkable anymore because Tesla has a massive fleet. You're right, though, I could have spent 5/6 of my post on caveats instead. I apologize for my indiscretion.


> my real point that the unthinkable numbers from the video aren't actually unthinkable anymore because Tesla has a massive fleet

Yes, I'm addressing that point directly, specifically the fact that this "unthinkable number" is misleading regardless of the number's magnitude.


FSD's imperfections and supervision do not invalidate their fleet's size and its consequent ability to collect training data and statistics (eventually, deaths per mile statistics). The low fleet size assumption in the presentation is simply toast.

If I had claimed that the 500 million number indicated a certain level of deaths-per-mile safety, that would be invalid -- but I spent 3/4 of my post emphasizing that it did not, even though you keep pretending otherwise.


You could start by comparing highway driving, where I think Tesla actually is quite good.


Tesla's mileage numbers are meaningless because the human has to take over frequently. They claim credit for miles driven, but don't disclose disconnects and near misses.

California companies with real self driving have to count their disconnects and report all accidents, however minor, to DMV. You can read the disconnect reports online.


Do you trust claims and data from Tesla?


Do you think they lied about miles driven in the investor presentation?

Nah, that would be illegal. Their statement leaves plenty of room for dirty laundry though. I'm sure they won't disclose disengagement data unless forced, but they have plenty of legal battles that might force them to disclose. That's why I'm asking around. I'd love to rummage through. Or, better, to read an article from someone else who spent the time.


> Nah, that would be illegal.

Musk has violated many rules regarding investors.


Note that it would need to drive those 275 million miles without incident to be safer than a human.

Which for Tesla's FSD is obviously not the case.

https://www.motortrend.com/news/tesla-fsd-autopilot-crashes-...


Your video and my response were talking about fatal crashes. Humans don't go 100 million miles between crashes.

Has FSD had a fatality? Autopilot (the lane-follower) has had a few, but I don't think I've heard about one on FSD, and if their presentations on occupancy networks are to be believed there is a pretty big distinction between the two.


Isn't "FSD" the thing they're no longer allowed to call self driving because it keeps killing cyclists? Google suggests lots of Tesla+cyclist+dead but with Tesla claiming it's all fine and not their fault, which isn't immediately persuasive.


> Google suggests lots of Tesla+cyclist+dead but with Tesla claiming it's all fine and not their fault, which isn't immediately persuasive.

With human drivers -- are we blaming Tesla for those too?

You do you, but I'm here to learn about FSD. It looks like there was a public incident where FSD lunged at a cyclist. See, that's what I'm interested in, and that's why I asked if anyone knew about disengagement stats.


It appears that the clever trick is to have the automated system make choices that would be commercially unfortunate - such as killing the cyclist - but to hand control back to the human driver just before the event occurs. Thus Tesla are not at fault. I feel ok with blaming Tesla for that, yeah.


Is that real? I've heard it widely repeated but the NHTSA definitions very strongly suggest that this loophole doesn't actually exist:

> https://static.nhtsa.gov/odi/ffdd/sgo-2021-01/SGO-2021-01_Da...

> The Reporting Entity’s report of the highest-level driving automation system engaged at any time during the period 30 seconds immediately prior to the commencement of the crash through the conclusion of the crash. Possible values: ADAS, ADS, “Unknown, see Narrative.”


"It appears" according to what?

Stuff people made up is a bad reason to blame a company.


From here[1]:

> The new data set stems from a federal order last summer requiring automakers to report crashes involving driver assistance to assess whether the technology presented safety risks. Tesla‘s vehicles have been found to shut off the advanced driver-assistance system, Autopilot, around one second before impact, according to the regulators.

[1] https://www.washingtonpost.com/technology/2022/06/15/tesla-a...


You also need to cite them using that as a way to attempt to avoid fault.

Especially because the first sentence you quoted strongly suggests they do get counted.


Yeah, their very faux self driving package.


Can someone summarize the video? That was my first thought as well: crash data for humans is clearly underreported. For example, police don't always write reports or human drivers agree to keep it off the books.


The probability of a human driver causing a fatality on any given mile driven is 0.00000109% (1.09 fatalities occur per 100 million miles driven).

Applying some basic statistics: to show, at a 95% confidence level, that a self-driving system causes fewer fatalities than a human, you would have to drive 275 million autonomous miles flawlessly.

This would require a fleet of 100 vehicles to drive continuously for 12.56 years.
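If you want to sanity-check those figures, the 275 million miles falls out of a zero-failure Poisson bound (assuming the AV fleet drives flawlessly and fatalities at the human rate follow a Poisson process; the fleet-years line works out if each car averages about 25 mph around the clock):

    import math

    human_rate = 1.09 / 100e6                  # fatalities per mile
    miles = math.log(1 / 0.05) / human_rate    # solve exp(-rate * m) = 0.05
    print(f"{miles / 1e6:.0f} million miles")  # ~275

    fleet, mph = 100, 25                       # 100 cars, nonstop, ~25 mph avg
    print(f"{miles / (fleet * mph * 24 * 365):.1f} years")  # ~12.6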

And in practice self-driving vehicles don't drive perfectly. Best estimates of the number of miles needed to validate their safety are around 5 billion autonomously driven miles, and that's assuming they actually are safer than a human driver.

Then you get into the comparison itself. In practice AVs don't drive on all the same roads, at the same times, as human drivers. A disproportionate number of accidents happen at night, in adverse weather, and on roads that AVs don't typically drive on.

Then you have to ask if comparing AVs to all drivers and vehicles is a valid comparison. We know, for instance, that vehicles with automated braking and lane assist are involved in fewer accidents.

Then of course, if minimizing accidents is really what you care about, there's something easy we could do right now: just mandate that all vehicles must have a breathalyzer ignition. We do this for some people who have been convicted of DUI, but doing it for everyone would eliminate a third of fatalities.


> Then of course, if minimizing accidents is really what you care about, there's something easy we could do right now: just mandate that all vehicles must have a breathalyzer ignition. We do this for some people who have been convicted of DUI, but doing it for everyone would eliminate a third of fatalities.

In a similar vein, if we put geo-informed speed governors in cars that physically prevented you from exceeding, say, 20% of the speed limit, fatalities would also likely plummet.

But people haaaaate that idea.


I'm fine with it notifying the driver when it thinks they might be speeding, but I don't like the idea of actually limiting the speed of the car. I've used several cars which had a lot of cases where they didn't track the speed right. Driving near but not in a construction zone. Driving in an express lane which has a faster posted speed than the main highway. School zones. I've seen cars routinely get these things wrong. A few months ago I was on an 85 MPH highway and Google Maps suddenly thought I was on the 45 MPH feeder. Add 20%, that's 54 MPH max speed. So what, my car would have quickly enforced the 30 MPH drop and slammed on the brakes to get into compliance?

I'd greatly prefer just automatic enforcement of speeding laws rather than doing things to try and prevent people from speeding.


Honestly I would think something like transponders on every freeway would work better than GPS. Regardless, I think everyone in the thread could think of 10 technological ways of making this work. I think the biggest barriers are political, not logistical, and definitely not engineering.


So we spend a ton of money putting in transponders and readers in cars which still have various failure modes, or we just put cameras on the highways and intersections and say "car with tag 123123 went from gate 1 to gate 2 in x minutes, those are y miles apart, average speed had to be > speed limit, issue ticket to 123123".

The toll roads could trivially automatically enforce speed limits. They already precisely know when each car goes through each gantry, they know the distance between each gantry, so they know everyone's average speed.
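A minimal sketch of that gantry math (tag, times, and distances all invented):

    from datetime import datetime, timedelta

    def average_speed_mph(t1, t2, miles):
        return miles / ((t2 - t1).total_seconds() / 3600)

    t1 = datetime(2023, 11, 1, 8, 0, 0)   # tag 123123 passes gantry 1
    t2 = t1 + timedelta(minutes=8)        # ...and gantry 2, 12 miles later
    avg, limit = average_speed_mph(t1, t2, 12.0), 75.0

    if avg > limit:                       # 12 miles in 8 minutes = 90 mph
        print(f"issue ticket to 123123: averaged {avg:.0f} mph")

No radar needed; the average alone proves the limit was exceeded somewhere on the segment.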


Mostly because I think it would glitch and get the speeds wrong

If it were 100% accurate, and you couldn't get a speeding ticket while it was active, I'd be all for it.


Yeah, because you need to be able to use your vehicle to escape pursuers and also as a ramming weapon. I assume police would get an exception from this rule, but they don't actually have more of a legal right to use their vehicle as a weapon than anyone else; they are just less likely to have their judgement of the situation as an emergency questioned by the DA. It's probably also a Second Amendment violation, but the Supreme Court might be too originalist (and not textualist enough) to buy that argument, as cars did not exist in the decades surrounding the founding.


> prevented you from exceeding, say, 20% of the speed limit

I initially read this as "20% of the speed of light", and thought you were being sarcastic.


Did you mean exceeding the speed limit by 20%?

Because what you actually said is true too, and hints at why "it would be safer" is not a good enough reason to implement something.


I doubt it. Both of these methods are very intrusive.


I sustained traumatic injuries a few months ago when a driver on a suspended learner's permit hit me. The lazy cop issued no tickets for the multiple traffic violations. He couldn't be bothered to show up for the trial and the lazy prosecutor who only notified me of the trial three days in advance went with a bare minimum wrist slap for the suspension. It's as if it officially never happened.


That's awful. The amount of egregious vehicular violence the US has tolerated is disgusting. Waymo seems like the best bet to making experiences like yours a thing of the past.


And I'm not even sure how reliable the "miles driven" metric is. I mean, I'm sure you can estimate it somehow, but what's the margin of error there?


Odometers are pretty well regulated and insurance companies will often have a good record of the readings over long periods. I'm not sure how the org doing the data collection does it precisely, but pretty accurate data is out there.


It would be interesting if some kind of active/inductive measurement could be made.

As a human driver, I'm keenly aware of the "close calls" I've had where I've narrowly avoided a probable collision through good decisions and/or reaction times. I'm probably less aware of the times when I've just gotten lucky.

No doubt self-driving companies have internally available data on this stuff. Can't wait til superior performance to human drivers is a major selling point!


Although things are changing, the overwhelming majority of AV miles are generally-safer miles on protected highways. And yet their statistics are typically compared against vehicle miles on all road types.

Further, most AVs explicitly disengage when they encounter situations beyond their understanding. So again, we’re selecting AV statistics based on generally favorable conditions, but we don’t track human-operated miles the same way.

It’s not really a fair comparison.


> most AVs explicitly disengage when they encounter situations beyond their understanding

What do you mean by disengage? Cruise’s AVs don’t have a driver to take over.


I’m lumping in things like Tesla FSD.

But also, there are explicitly times and areas where Cruises don’t operate due to being insufficiently able to operate. And times where they do just pull over and wait for human intervention when they don’t know what else to do. Both of which are safe and reasonable, but which also selects themselves into a safer set of miles actually traveled.


Human drivers "disengage" also, though we don't think of it that way. I have aging friends who refuse to drive at night. I'll stay home in bad weather. When I was younger, I'd sometimes ask a passenger to back out of tricky parking space for me.


Yes, but we’re still comparing a highly selected set of AV miles vs the massive variety of human-driven miles.


It's not too useful to lump Tesla, Cruise, and Waymo together here. Tesla is years behind Cruise and cruise is years behind Waymo in terms of driving capability. Waymo doesn't even drive on highways, so we don't know how safe it would be (probably very safe).


>Further, most AVs explicitly disengage when they encounter situations beyond their understanding

The bigger problem here is that the machine may not realize it isn't fully understanding the situation. I think that's the more common case: the computer's situational model doesn't match reality, but the machine has enough confidence in its perception to proceed, producing dangerous effects.


Not sure that's fair - the number of accidents in human-driven vehicles varies significantly by the human driving it. Do you compare with the teen that just obtained their license and is more interested in their phone than in driving properly, or the 65-year old doctor who's been driving for work and other purposes every day for the past 40 years? According to https://injuryfacts.nsc.org/motor-vehicle/overview/age-of-dr... it's at least a 6x difference.


You compare what matters - mean accidents per mile driven. And like any actuary, you can compare distributions with finer-granularity data (location, tickets, ages, occupations, collision severity, etc.). None of this is new or intractable. We can have objective standards with confident judgements of what's safe for our roads.

As an aside, public acceptance of driverless cars is another story. A story filled with warped perspectives due to media outlets stoking the audience emotions which keep outlets in business — outrage, fear, and drama. For every story covering an extreme accident involving human drivers, there might be 100 stories covering a moderate driverless accident. No matter how superhuman autonomous cars perform, they’ll have a massive uphill battle for a positive image.


I think you compare it to the average as that's who you are encountering when on the road. You have no way of selecting the other drivers to bias toward the doctor.


Teens who’ve just gotten their license are often quite good drivers. It’s after that when they drive poorly. (This also applies to adults, to a lesser extent.)


I still want that metric restricted to geographical area. Using the average across the entire country often seems outright malicious.


Well, obviously the number of autonomous vehicles involved in accidents is going to be lower, but that's because barely any of them exist compared to the vast majority of people driving their cars. If you had statistics on proportions though, that might be a different story.


You missed "per distance travelled" - that partially normalises the results. You still have to adjust for situation and road type (which Tesla conveniently doesn't do in their advertised numbers) for a better result.


I’ve had several low-light situations where my Tesla identified a pedestrian and applied the brakes before I did (typically dark clothing on a dark street).


This is low-hanging fruit; I had this feature in my 2015 Volvo.


Sure, it’s just more evidence that these automated systems improve overall safety vs. an unassisted driver. (Although I worry a bit about automation-induced inattention negating these benefits)


There are also issues like phantom braking which Teslas are prone to (or were, I'm not sure if that's better these days). That's part of a whole class of problems which the AVs suffer from which humans don't. I think the main problem is that those problems are really unpredictable to human drivers, whereas good defensive drivers will take into account what another lousy human driver might do based on lots of experience.


On the other hand, if you're following the car in front of you too closely to react to phantom braking, the accident is on you. One of the things I appreciate most about using autopilot/FSD everywhere is that the car maintains a safe following distance basically 100% of the time, even when someone cuts me off. Implemented (and used) consistently, this sort of adaptive cruise control by itself should solve a bunch of accident-causing hazards and other traffic issues.

I haven’t had a phantom braking issue in a long time either; I’m not sure if this is because of the FSD package or if the underlying autopilot system has improved.


Ditto, FWIW. The car may not always behave like a human does in those circumstances[1], but at this point it's objectively much better at the attention and recognition side of the task.

[1] It tends to be more conservative than human drivers, and in weird ways. If a pedestrian seems at all to be moving in the direction of traffic, even if they're on a sidewalk and just meandering a bit, the car will try to evade (not violently, but it will brake a bit and pull to the outside of the lane).


Yeah, if anything, my Tesla’s issues seem to stem from being overly concerned about accidents than being recklessly dangerous.


The same applies to regular drivers. We do not track accidents they prevent.


Easy: humans prevent nearly 100% of accidents. A car without a driver invariably crashes within a few seconds.

The question is how many accidents (if any) self driving prevents with respect to average human drivers.


Self-driving cars are used in a very controlled environment.

They will not function in high grass, I'd guess from my experience with various parktronic systems.

The best "off road" demonstration of self-driving and/or AI assistance I've found is this: https://media.jaguarlandrover.com/news/2016/07/jaguar-land-r...

Note they avoid going into grass. A human can deduce trails from the grass profile; can AI? I don't think so.

Will you count an inability to reach a lakeside as an accident? I guess, no.


We have not trained AIs to deduce trails from grass.

AIs need training before they can do things.*

*They just might be learning to do things without being trained based on the emergency behavior I see in LLMs.


My point was not about discerning track from grass, but about driving in actual grass. Current self-driving tech uses sensors that are useless in high grass.

As for "emergency behavior" (emergent behavior, I guess) - we do not know how LLMs are trained. Thus, what you consider "emergency behavior," could very well be a part of training.


This is such an odd take I don't know if it's trolling. Both can be measured against similar metrics (an AV model against the avg AV/avg human, or vice versa).


Or even the ones they cause, for the most part. It's notoriously difficult to get clean data about non-fatal crashes.


This Cruise debacle clearly shows it is notoriously difficult to get clean data about crashes caused by or related to self-driving vehicles.


The companies do track that data internally. It would be nice if the DMV mandated its release to the public. But of course this type of data is a counterfactual, so it's much more subjective.


They should let an accident happen occasionally, and then show how it had the data to avoid it if humans had just trusted them.


Maybe of interest: there has been research by SwissRe on this topic. https://jdsemrau.substack.com/p/paper-review-comparative-saf...


That's behind a paywall. Do you have a link to the SwissRe paper by chance? (Which might also behind a paywall, of course ; )



Waymo also talks about its work with SwissRe here: https://www.youtube.com/watch?v=9-Qu6HNZu8g


Waymo thinks about this a lot and has posted a good video about it here: https://www.youtube.com/watch?v=9-Qu6HNZu8g


> no human would be able to do.

You say this but as a motorcyclist I've learned to have my eyes everywhere. I suspect many others would say the same. You acclimate to the vehicle you most often control: if that's some woolly-handling, sluggish SUV then I suspect, no, you probably wouldn't have done well in this situation, but if you were more used to driving a vehicle that requires more attention I suspect you'd do better.


Fair point. For more context: the Waymo had just made the right turn and there was a line of cars in front, moving slowly or stationary. The skateboarder was crouched down and going fast between the cars and the gutter (against traffic). Perhaps from the driver's seat you'd get a glimpse of the skateboarder, but it would be very late. No other cars in front made any adjustments. I did hear the skateboarder as they went by.


Maybe this is a fluke, but a few months ago an ABC reporter documented her Waymo ride and it had a couple of goof-ups. https://youtu.be/-Rxvl3INKSg

Ultimately, I don't think the driving screw-ups were that big of a deal (just my opinion). The big deal, though, is the moronic lack of customer support. How hard is it really to have customer support staff a phone call away who can reassure the customer and either debug the issue to get them sorted out or at least offer alternatives? It would do so much to improve users' perception.

So, typical Google... Phenomenal engineering, and horrible customer service.

Maybe it was just a hit piece, though, and they cut out some contextual video? No idea.


> Google... Phenomenal engineering, and horrible customer service

How can a company with THAT MUCH MONEY be THAT plagued with an issue that everyone has been shouting about for more than a decade ??

I wonder if there is a business model in taking on Google's customer care as a contractor. I'm sure someone could do a lot better with Google's customer-care budget. Google can start off with a few departments, and then expand based on performance. If it goes too well, they can always get acquired.


Because Google doesn't exist for you; it exists for shareholders. They calculated that not having customer support is more profitable, end of story.


Being a losing member of the cloud race after inventing every modern systems paradigm is profitable? Missing the entire LLM/diffusion cycle despite having laid the groundwork for it is profitable?

Their peer tech companies - MSFT, AAPL, AMZN - all managed to become industry leaders in a completely different sector while maintaining the moat around their original revenue sources.

Google still only makes money on ads. Google has created monster products in B2C search, ads and software platforms. But its B2C services are lagging far behind, and customer service would really help here.


> no human would be able to

_defensive driving has entered the chat_

Seriously, anyone here who rides motorcycles can pipe in, but some of us out there never assume anything on the road and thusly avoid a lot of trouble because of it. Assuming incorrectly is the root cause of probably 99.999% of accidents.


There's driving without attention, there's defensive driving, and then there's ability to react to things which would be physically impossible for people to react to. You can train all you want (and yes, that will improve your survival rate significantly), but the eyes->brain->muscles->movement->car->inertia pipeline has delays we will never be able to work around.


In the real world reacting effectively is less about reflexes and more about situational awareness, anticipating what might happen, and reacting not just quickly but properly when something sudden does occur.


I think in that space ML will beat us completely. It's basically a simple optimisation problem of: these rules improve safety, here's the situation you're in, maximise rules applied and minimise changes required to apply them, repeat. The number of ideas ML can remember, test and execute is not going to be comparable with what people can do. It all relies on getting appropriate data from sensors though.


>Occasionally the route selections make no sense at all.

I'm sure this is sometimes true, but at least sometimes I'd guess that the route selections would make more sense if you saw the conditions and traffic on the route the car didn't take you on.


I’ve been nearly hit by a Waymo as a pedestrian when the thing could see me well before it passed.

I hate that they use giant SUVs - a really inappropriate car for small city streets, even if people love SUVs.


If this ever happens to you or anyone else here, drop them a message with the date/time/place that it happened, and they will watch the footage and probably adjust stuff to not be scary in the future. Without human feedback, it's really hard to program a machine to understand "scary", even if the machine has calculated that it won't hit you.

https://waymo.com/contact/ is the place to go.


You’re subtly gaslighting the parent poster by implying the car couldn’t have hit them and that it was just being “scary”.


Well, has it hit them? A fully automated intersection would be very "scary" for a slow meatbag, yet accident-free.


Waymo uses pretty small Jaguar crossovers. Zoox uses SUVs.


The Jaguar iPace is a full 4-door SUV, not a hatchback or a Rav4. Just because the rear hatch tapers down doesn't make the car smaller from a safety standpoint.

Jaguar iPace: 4,800 lb, 79 in wide, 183 in long

Chrysler Pacifica: 4,500 lb, 79 in wide, 204 in long

Toyota Highlander: 4,400 lb, 76 in wide, 194 in long

Chevy Bolt: 3,500 lb, 69 in wide, 163 in long

The iPace might be great for LA but for SF it's totally inappropriate.


They look smaller than a lot of the SUVs I see in the city while living here.


I finally decided to turn on Tesla Full Self-Driving this month and it's been a mixed bag.

Some over-cautious stuff like what's mentioned above, but also some lane changes and adjustments that were better than what I would have done.

I still prefer to drive myself so I’ll probably turn it back off at the end of the month, but I’ve got high hopes for when I’m older.


That's my experience, having been on the FSD beta for almost two years now. It's still "un-human-driver-like" in a bunch of surprising ways, but on the meat-and-potatoes question of "does it do more or less stupid stuff", I think it's ahead of my own driving at this point.

At this point I trust its microdriving (lanekeeping, distance management, speed control, collision/obstacle avoidance, etc...) more than my own for sure. The higher level stuff is still a mixed bag (will it get stuck at that merge, can it make this lane change in time, does it understand that this other car is going to yield or will it stop, etc...).


Can it detect stationary obstacles at freeway speeds? Do we have good evidence? Because Autopilot couldn't.


Meh, we've been here before. That's a vague and impossible point to argue. I mean, sure, yes it does. Does it for all possible obstacles? Probably not? I certainly haven't tested. This kind of "$SYSTEM must have feature $X which is possessed by $COMPETITOR or else it will fail" framing is just plain bad analysis. It's possible for a Tesla to (1) be a better driver than you on the whole while still (2) doing weird stuff that you're sure you never would.

FWIW: Stationary junk in the freeway causes accidents with human drivers every day, all the time. People are absolutely terrible at dodging crap in the road. But you and everyone else will swear up and down that you'd *never* drive over a shed tire or whatever, even though evidence says you totally would.

Basically: no, you'll never get the evidence you want to "prove" the logical construct you've created. So Teslas, to you, will never be "safe". But in the real world they obviously are.


> to "prove" the logical construct you've created

You're making guesses about my standards that aren't warranted at all. They absolutely can be met.

First off I'm specifically worried about large objects, a meter or larger.

And let's see, Autopilot has done 3 billion miles? Okay, if FSD can manage 1 billion miles of freeway driving with zero or one crashes into large objects, then I'll be convinced they're solving the problem.

That should be fair, right?

> But in the real world they obviously are.

Obvious according to what?

Autopilot wasn't, because it managed to hit entire vehicles repeatedly. There is (or was) a specific warning in the manual that it might hit stationary vehicles when going over 50mph.

What makes it obvious that FSD avoids this specific problem?

> But you and everyone else will swear up and down that you'd never drive over a shed tire or whatever, even though evidence says you totally would.

You made this up about me from nowhere. Please never do that to anyone.


> You're making guesses about my standards that aren't warranted at all. They absolutely can be met. First off I'm specifically worried about large objects, a meter or larger.

I've personally watched my car go around trash cans and bikes in the road, and (obviously) stop for halted vehicles. So, you'll abandon this argument and concede the point? I suspect you won't, because my anecdote isn't enough for you, and those obstacles aren't sufficient proof, etc...

I think I'm more right than you want to admit. There's nothing I can say here to change your mind, and so coming here and demanding "evidence" isn't really an argument in good faith.

And an edit just to pick on this bit:

> Okay, if FSD can manage 1 billion miles of freeway driving with [emphasis in original] zero or one crashes into large objects, then I'll be convinced they're solving the problem.

That's simply a ridiculous argument. That level of safety is way, way, WAY beyond anything you get from existing transportation systems of any kind, period. A quick google shows that in that "one billion miles on the freeway", you'd expect not merely the "one accident" you're demanding, but in fact TWENTY FATALITIES (I couldn't find statistics for mere collisions, but needless to say it's going to be at least an order of magnitude or two higher).

So basically you're sitting here and glibly demanding that this product be 200x safer than its competitors before you'll consider it acceptable... and getting huffy when I call you unserious?


> So, you'll abandon this argument and concede the point? I suspect you won't, because my anecdote isn't enough for you, and those obstacles aren't sufficient proof, etc...

Are you trolling with this?

I'm asking if it can reliably do that. Of course I need more data than one person can collect; I'm not being unreasonable in asking for more than your personal experience. Most people with Autopilot never saw this problem either.

And specifically I need to know about freeway speeds, because speed is an important factor.

There is plenty you could do to change my mind. If you link to something published by Tesla or a government body showing Autopilot and FSD accident rates by type, that would be more than enough.

> So basically you're sitting here and glibly demanding that this product be 200x safer than its competitors before you'll consider it acceptable

I'm not asking for a lack of fatalities. I'm asking for a lack of hitting stationary vehicle-scale objects on the freeway.

Do you think that particular kind of accident is responsible for a majority of fatalities, or something? My expectation is that it's a very rare kind of accident and I also feel like it's a good canary.

Also the freeway fatality rate is about 5.4 per billion miles, not 20.
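
For what it's worth, that figure is consistent with the commonly cited US interstate rate of roughly 0.54 fatalities per 100 million vehicle-miles (the exact number varies by year): 0.54 per 10^8 miles × 10^9 miles ≈ 5.4 fatalities per billion miles, versus roughly 13 per billion across all road types.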


It's so amazing how this discussion goes every single time.

> I'm not asking for a lack of fatalities. I'm asking for a lack of hitting stationary vehicle-scale objects on the freeway.

Exactly! You've constructed an impossible gateway to understanding; I either find a statistic to fit exactly your imagined failure mode or... I'm wrong, and you don't need to change your opinion.

I'm sorry, I truly am, that I don't have a statistic to hand you showing the frequency with which Tesla vehicles on the FSD beta hit 1m+ stationary obstacles on high-speed roadways. I don't, I won't, and I likely never will.

So, again getting to the point upthread: you're safe. You can't lose this argument framed like that, and I concede that point. I'm just saying that that's not a very serious position to take if you're actually interested in genuine safety using metrics that other people care about.

> Also the freeway fatality rate is about 5.4 per billion miles, not 20.

Not the headline I saw immediately, but sure. That sounds plausible too. The fact that you want to claw back an error factor of 200 by 3.5x is also good evidence that you aren't taking the discussion seriously.


> The fact that you want to claw back an error factor of 200 by 3.5x

I don't. Again, I wasn't talking about total accident rate at all. I was talking about a much smaller number. The "200" is nonsense and that was just another reason it's wrong.

> So, again getting to the point upthread: you're safe. You can't lose this argument framed like that, and I concede that point. I'm just saying that that's not a very serious position to take if you're actually interested in genuine safety using metrics that other people care about.

Ugh. Look, I can wait for general safety statistics, but it will take longer. Those will exist, and they can convince me if they're within 2x of humans.

Maybe it's unfair for me to want specific statistics here, but it shouldn't be so hard to get them.

But it's just as unfair for you to act like a single person's anecdotes are enough. You can't just say it's "obviously" safe and treat that like a real argument. Of course the discussion is going to go the same way every time if that's the level of evidence you expect people to accept.

And I thought you were claiming that no evidence could convince me because I'm unreasonable. If your real claim is "nobody has bothered to collect much evidence, therefore there is no way to convince you without doing that job" then yes I agree and I don't think that's my problem.


I rode in an Argo car (RIP) and it avoided an accident where everyone in the car was like "where did that guy come from?". It saw around a corner at an awkward angle before any human could have.


Eyes and ears open in the city... I think a lot of humans would have spared the skater too.

Waymo vehicles impress me, I've seen them perform skillfully like you describe.


I have doubts about the claim you've made. The Waymo just turned up a hill and moved out of the way of a downhill skateboarder? What about that makes it something no human could do? Was the Waymo's vision blocked before it made the turn? Because that's the time to determine whether the turn is safe.

Also, isn't there an implicit assumption that all Waymo "drivers" would perform in exactly the same way?


I'm reasonably sure I've done things like that. I wouldn't need the mirror check because I would already know what's there during the right turn. However, I would personally aim to stay dead center in the lane to reduce the skateboarder's uncertainty during their own tricky maneuver.

This isn’t to say the Waymo car did a bad job. It sounds like a good choice in the situation.


> It then pulled a bit into the left lane abruptly and I didn't get why until a split second later a skateboarder was crouched down and went by on my right against traffic.

Wonder what it would have done if it didn't have room to pull into the left lane without hitting a car. Or perhaps another skateboarder.


> In fact I've seen Waymos do things that no human would be able to do

Waymo has way more sensors. Most importantly, it has Lidar.


Based on this recent-ish YT video [1], Waymo is also crap, just as Cruise is, as any car that stops at a green light in the left lane is a danger to all the other drivers it shares the road with.

[1] https://www.youtube.com/watch?v=-Rxvl3INKSg


There's a lot of legitimate criticism in that short video, but the presenter is also extremely dishonest and biased. E.g. they end the video saying that they started with "a lot of optimism" when at the beginning of the ride they literally said they expected the car to be unsafe. They also seem to mock the AI for obeying traffic laws like speed limits and stop signs, which is a bad start for a video like this.

Someone in the comments also mentions that the destination was on a dead-end street, which may be why the drop-off is a 5-minute walk away. The 5-minute walk was also apparently indicated when she entered the destination. This, along with the pick-up location being on the other side of the road, feels like it might be the result of the AI being overly cautious. I'd find that acceptable for an "autonomous car" but not for one advertised as a replacement for taxis (which, as the presenter mentions, are also used as accessibility aids, where a 5-minute walk uphill can be unacceptable).

There's no footage of the car stopping at the green light, just her saying it's come to a full stop and then an external shot where it's already stopped. That's not enough information to call the stop "unsafe", even if it was impeding traffic. It also stopped with flashing hazards, assuming Waymo doesn't flash hazards during the entire ride (which I hope they don't). It's not clear why it stopped there, but based on the little footage we have it doesn't seem very abrupt, so any traffic behind it would have had plenty of time to notice and react to the decelerating car in front of them.

The stop seems unnecessary, and because it's AI and Waymo didn't provide additional information, it's impossible to say what caused it. The message she saw also indicates it was caused by an error, presumably a navigation issue. Although it's impossible to tell from the editing, the issue seems to have been resolved within a few seconds. A human driver would probably have decided to just follow the direction of traffic for that long, or to change lanes and come to a full stop at the side of the road, but in an urban environment (not at highway speeds) the behavior is not completely unreasonable. What's more concerning to me is that the AI ran into a problem while in traffic that required the car to come to a complete halt and presumably wait for external (human?) intervention, green light or none.

I'd prefer a car that slowly comes to a full stop at a green light because of a software issue over one that keeps dragging a pedestrian it ran over for several seconds to avoid impeding traffic, but the bigger issue with Waymo here is that the car runs into a software issue at all while in traffic.


That looks like a big annoyance, but I wouldn't call that stop dangerous.


Have you ever driven in a congested city? Someone stopping out of the blue on a green light is definitely a danger to everyone around the car he/she/it is driving. Granted, that might not be the case somewhere in the middle of the US where a car passes every 5-10 minutes or so.



