
“All rides in the program will have an autonomous specialist on board for now”

This tells me that we’re still a long way from full level 4 (and certainly level 5) autonomy in a busy city like San Francisco. The edge cases requiring immediate human attention are still too frequent for the human safety driver to be remote, as is the case in Phoenix.

Also, just a reminder that Waymo in Phoenix is nowhere close to being level 5, since it is still heavily geofenced and requires those remote safety monitors. I still think that true level 5 (i.e. the ability to drive autonomously everywhere with zero human oversight, with a safety record at least equivalent to the median human driver) requires AGI. Would love to be proven wrong!




>“All rides in the program will have an autonomous specialist on board for now” This tells me that we’re still a long way

Did you expect something different? I can't really see a boardroom writing a roadmap that goes straight from

  test rides (no passengers) with a backup driver onboard
to

  actual rides with passengers - no backup driver onboard
with no in-between steps.


Middle steps: (1) Board members use product to travel to board meetings. (2) Board members use product as replacement for personal vehicles. (3) Board members demand pay increases.

Back in 1999 the Chinese government announced that airline execs would be airborne at the changeover as reassurance that aircraft were safe from Y2K. Like them or hate them, the incentive logic was sound.

https://www.wired.com/1999/01/y2k-in-china-caught-in-midair/

https://www.cbc.ca/news/science/chinese-airlines-won-t-be-bi...


I think of that as a parachute-rigger solution (https://en.wikipedia.org/wiki/Parachute_rigger).

Historically, people packing parachutes could be randomly selected to jump with the parachute they had packed.


It remains true in the military. Refusal to jump on a chute you have packed will cause you to lose your rigger qualifications.


> Middle steps: (1) Board members use product to travel to board meetings. (2) Board members use product as replacement for personal vehicles. (3) Board members demand pay increases.

How do you know board members aren't already using the Waymo product heavily? It still doesn't mean they can go straight from board members using Waymo without backup drivers to arbitrary customers using Waymo without backup drivers.


Because if they did then they would surely tout that fact at every opportunity. Musk doesn't drive a Ferrari to work. He drives a Tesla to promote his company's product line.


Elon takes a private jet to work.


Possibly to drive between their various houses?


> Possibly to drive between their various houses?

I'm guessing this is the old joke about Eric Schmidt?


"I save money by using nest thermostats in my various houses"


In Phoenix, people are being driven without a safety driver. They are doing these in between steps.


Maybe open up Waymo rides to people with Alphabet shares? Then the owners and customers are the same group. Unfortunately multiple spouses aren't really allowed in America, or you could limit ridership to spouses.


The acid test will be (4) Board members use product on their children.


Followed by (5) board members claiming legal bills for child custody disputes after being caught leaving kids unsupervised in the custody of a 3,000 lb robot.


Arguably all modern vehicles are incredibly heavy robots. AVs are just supposed to be better robots.


No more so than a bicycle is a 20lb robot or an airplane is a 20,000lb robot. Clearly a self-driving vehicle is a different paradigm.


"The average car has 30 to 50 different computers, and high-end cars have as many as 100, and they're accompanied by 60 to 100 different electronic sensors." [1]

The median modern bicycle has 0 computers and sensors.

[1] https://www.ceinetwork.com/cei-blog/auto-computers-means-com...


Skin in the game. Nassim Taleb would agree with this measure!


Always makes me happy to see some good philosophy poking its head up. Great choice too, since the AV problem is fundamentally about Black Swans.


Waymo is already doing test rides with neither passengers nor backup drivers in CA, so they wouldn't be jumping straight from no passengers plus safety drivers to paid passengers without safety drivers if they offered paid, fully driverless rides.


The book Halting State by Charlie Stross (2007) [1] had an interesting self-driving car model, where it was autonomous on simple roads like highways/motorways, and a human driver took over remotely for more complex city streets.

Of course the book showed some failure modes for that, but if network coverage and latency, as well as "backup driver response time", could be made good enough, perhaps this sort of model could offer an acceptable risk trade-off.

[1] https://en.wikipedia.org/wiki/Halting_State


For the computer it doesn't make much difference whether there's a passenger or not.


For a company having paying customers does matter a lot.

Having customers paying for the R&D will help make it sustainable.


I suspect the revenue from passengers in this case looks like a rounding error. But the PR and feedback from early adopters is very valuable.


Yeah, you can put the people with the good feedback in your ad videos and ignore the ones who went through the windshield.


I wonder where "a country road with no lanes which barely fits 1.5 cars in winter in the Czech Republic" is on your scale... Something like this, just imagine the snowdrifts around it https://www.google.com/maps/@49.080269,16.4569252,3a,75y,307...


Now add the completely blind switchback turns, where your "visibility" into whether another car is coming comes from a convex mirror nailed to a tree or post at the apex of the corner - if it hasn't fallen off or been knocked crooked...

basically all of Italy


Or an ambulance going in the opposite direction (because that's the only available choice) on a boulevard in a busy capital city like Bucharest. Saw that a couple of hours ago: the ambulance met a taxi which was going the right way, but of course the taxi had to stop and find a way for the ambulance to pass (by partly going onto the sidewalk). I said to myself that unless we get to AGI there's no way for an "autonomous" car to handle that situation correctly.


You don't even need to go that far. The other day I saw an ambulance going down Burrard Street in Vancouver, BC without lights or sirens; then I guess a call came in, and it put on both and turned around. It's a six-lane street where normal cars aren't allowed to just turn around. It was handled really well by everyone involved, mind you; it wasn't unsafe, but I doubt a computer could've handled it as well as the drivers did.


Very complex-looking behavior sometimes comes from very simple, easy-to-implement principles, like bird flocking behavior https://en.wikipedia.org/wiki/Flocking_(behavior)#Rules

I don't believe people are using their full AGI when driving (and the full "AGI" may as well turn out to be a set of basic pattern-matching capabilities which we haven't discovered yet). After decades of driving the behavior is pretty automatic, and when presented with a complex situation, following a simple rule, like just braking, is frequently the best response, or close to it.
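
As a toy illustration of the flocking point (the three classic boids rules from the linked article; the weights, radius, and the `boid_step` helper are all invented for this sketch, nothing to do with any driving stack):

  # Toy sketch: a few local rules (separation, alignment, cohesion)
  # produce complex-looking flocking behavior.
  import numpy as np

  def boid_step(pos, vel, dt=0.1, radius=2.0, w_sep=1.5, w_ali=1.0, w_coh=1.0):
      new_vel = vel.copy()
      for i in range(len(pos)):
          d = np.linalg.norm(pos - pos[i], axis=1)
          near = (d > 0) & (d < radius)            # neighbors within radius
          if not near.any():
              continue
          sep = (pos[i] - pos[near]).sum(axis=0)   # steer away from crowding
          ali = vel[near].mean(axis=0) - vel[i]    # match neighbors' heading
          coh = pos[near].mean(axis=0) - pos[i]    # drift toward local center
          new_vel[i] += dt * (w_sep * sep + w_ali * ali + w_coh * coh)
      return pos + dt * new_vel, new_vel

  pos = np.random.rand(20, 2) * 10
  vel = np.random.randn(20, 2)
  pos, vel = boid_step(pos, vel)       # repeat to watch a "flock" emerge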


To me the solution to that is obvious and far better than the current status quo. The cars are all attached to a network and when an emergency service vehicle needs to get somewhere in a hurry there is a coordinated effort to move vehicles off the required route.

As things stand emergency vehicles have to cope with a reasonable minority of people who completely panic and actually impede their progress.


This has to work even if network reception is weak or absent. You can't be certain that 100% of cars will receive the signal and get themselves out of the way in time.


Right, so don't use the network: broadcast a signed message on a band reserved for emergency services.
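
A minimal sketch of that idea, assuming Ed25519 signatures via PyNaCl and a pre-provisioned agency public key (the message format and key handling are invented for illustration; a real system would also need replay protection on top of the signature):

  # Hypothetical sketch: a car only acts on emergency broadcasts that verify
  # against a trusted public key baked in at provisioning time.
  from nacl.signing import SigningKey
  from nacl.exceptions import BadSignatureError

  agency_key = SigningKey.generate()          # held by the emergency service
  TRUSTED_VERIFY_KEY = agency_key.verify_key  # shipped to every car

  broadcast = agency_key.sign(b"EMERGENCY clear_route=main_st dir=north ttl=30s")

  def on_broadcast(signed_msg):
      try:
          payload = TRUSTED_VERIFY_KEY.verify(signed_msg)  # raises if forged
      except BadSignatureError:
          return None                                      # ignore spoofed traffic
      return payload.decode()                              # act on it: yield, pull over, etc.

  print(on_broadcast(broadcast))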


> This has to work even if network reception is weak or absent.

Or hacked maliciously.


Oh, you can have that in Bucharest even with regular cars. Lanes are pretty fluid there, as is the preferred direction of travel. I've lived there for only two years and I've seen more vehicles go in the opposite direction ('ghost riders' we call them here) than anywhere else over the rest of my life. Romanian traffic is super dangerous, especially if you are a pedestrian, and you can just about forget cycling in traffic. It is also the only place where a car behind me honked to get me to move over when I was walking on the sidewalk.


That is 101 for autonomous driving. Solved years ago.


People at Tesla and other autonomous driving companies are, of course, aware of and worried about such situations. If you have a few hours and want to see many of the technologies and methods that Tesla is using to solve them, check out Tesla's recent "AI day" presentation. Tesla is quite open about discussing the problems they have solved, the problems they still have, and how they are trying to solve them.

An incomplete list includes:

1) Integrating all the camera views into one 3-D vector space before training the neural network(s) (a toy coordinate-frame sketch follows this list).

2) A large in-house group (~1000 people) doing manual labeling of objects in that vector space, not in each camera view.

3) Training neural networks for labeling objects.

4) Finding edge cases where the autocar failed (for example, when it loses track of the vehicle in front of it because its view is obscured by a flurry of snow knocked off that car's roof), and then querying the large fleet of cars on the road to get back thousands of similar situations to help with training.

5) Overlaying multiple views of the world from many cars to get a better vector space mapping of intersections, parking lots, etc

6) New custom-built hardware for high-speed training of neural nets.

7) Simulations to train on rarely encountered situations, like the one you describe, or situations that are very difficult to label (like a plaza with 100 people in it or a road in an Indian city).

8) Matching 3-D simulations to what the car's cameras would see, using many software techniques.
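
As a toy illustration of item (1) above: at its simplest, fusing per-camera detections into one vehicle-centered "vector space" is a change of coordinates using each camera's extrinsics. The matrices and detections below are placeholders, not Tesla's actual pipeline (which fuses learned features, not raw points):

  # Toy sketch: map per-camera 3-D points into one shared vehicle frame
  # using each camera's extrinsics (rotation R, translation t). Values invented.
  import numpy as np

  cameras = {
      "front": (np.eye(3), np.array([2.0, 0.0, 1.2])),
      "left":  (np.array([[0.0, -1.0, 0.0],
                          [1.0,  0.0, 0.0],
                          [0.0,  0.0, 1.0]]), np.array([1.0, 0.9, 1.2])),
  }
  detections = {                       # points expressed in each camera's own frame
      "front": [np.array([0.0, 0.5, 10.0])],
      "left":  [np.array([0.3, 0.2, 4.0])],
  }

  vehicle_frame = [R @ p + t
                   for cam, (R, t) in cameras.items()
                   for p in detections[cam]]
  print(vehicle_frame)                 # every detection now in one coordinate system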


They're cool about openly discussing it because this is all industry standard stuff. It's a lot of work and impressive, but table stakes for being a serious player in the AV space, which is why the cost of entry is in the billions of dollars.


> People at Tesla and other autonomous driving companies, of course are aware and worry about such situations.

Yeah, a Tesla couldn't possibly drive into a stationary, clearly visible fire engine or concrete barrier, on a dry day, in direct sunlight.


As awful of a failure as that is, and as fun as it is to mock Tesla for it, that claim was that they're aware of edge cases and working on fixing them, not that they're already fixed. So your criticism doesn't really make sense.


A system dealing with 'edge cases' by special-casing them is not going to work for driving; driving is a continuous string of edge cases, and if you approach the problem that way you fix one problem but create the next.


I don't think anybody said anything about special casing them.

I dislike saying anything in defense of tesla's self-driving research, but let's be accurate.


Neither could a human, I'm sure.

At least, I never would...


If you never fail, you aren't moving fast enough.

A million people are killed globally each year by motor vehicles. Staggering amounts of pain and injuries. Massive amounts of property damage. Tesla's cars are not supposed to be left to drive themselves. The chance to prevent so much carnage seems worth letting some people driving Teslas, who fail to pay attention to the road, suffer the consequences of their poor decisions.

Plus these problems are likely to be mostly fixed precisely because they happened.


> If you never fail, you aren't moving fast enough.

Start-up religion doesn't really work when there are lives on the line. That's fine for your social media platform du jour but please don't bring that attitude to anything that has 'mission critical' in the description. That includes medicine, finance, machine control, traffic automation, utilities and so on.


But what about the million people who die every year now? Are the few thousand people who will die because of AI mishaps worth more than the million who die due to human mishaps?

Not to say that we shouldn't be cautious here, but over-caution kills people too



You described a lot of effort, but no results.


From what I've seen of Tesla's solution at least - even busy city centers and complex parking lots are very difficult for present-day autonomous driving technologies. The understanding level necessary just isn't there.

These things are excellent - undeniably better than humans at the boring stuff, highway driving, even major roads. They can rightfully claim massive mileage with high safety levels in those circumstances... but throw them into nastier conditions where you have to understand what objects actually are and things quickly seem to fall apart.


That is like trying to judge modern supercomputing by your experiences with a 6-year-old Dell desktop.

Waymo drove 29,944.69 miles between "disengagements" last year. That is an average California driver needing to touch the wheel once every 2.3 years.

Tesla by comparison is classed as a SAE Level 2 driver assist system and isn't even required to report metrics to the state. While they sell it to consumers as self-driving, they tell the state it is basically fancy cruise control.


"disengagements" is a disingenuous statistic - that'd be like a human driver just giving up and getting out of the car.

What you want is "interventions". Additionally, look at where those miles were driven. Most of them are some of the most simplistic road driving scenarios possible.


> That is an average California driver needing to touch the wheel once every 2.3 years

From my experience of California driving, that doesn't sound too bad. Compared to the entire Eastern seaboard, y'all have great roads and better drivers.


> Waymo drove 29,944.69 miles between "disengagements" last year.

You know better. If most of those miles were in sunny Mountain View suburbs, they don't count.


It's unclear to me why Tesla's solution is so discussed. They are definitely not on the same playing field as Waymo or even Cruise.


There's a lot of people on here who have invested in Tesla


also a lot of people on here who have actually experienced tesla's self-driving. certainly a lot more than have experienced any other self-driving product (at least above a "lane-keeping" system)


Are there a lot of people who have experienced tesla's self-driving?

As I understand it, if you pay for FSD, you don't actually get anything like self-driving, you just get lane-changes on the highway in addition to the lane-keeping. Effectively, you get lane-keeping, which you have if you don't pay too.

All the videos of "FSD driving" are from a small number of beta-testers, and there's no way to opt into the beta.

Because of that, my assumption would be very few people on here have experienced tesla's self-driving. It's only open to a small number of beta testers, whether you have purchased it or not.

On the other hand, waymo is available for the general public to use, though only in specific geographic areas.


Would you describe Tesla's tendency to crash full speed into stopped emergency vehicles during highway driving as "excellent"?

https://www.cnn.com/2021/08/16/business/tesla-autopilot-fede...


While controversial, we tolerate a great deal of casualties caused by human drivers without trying to illegalise those.

While we can (and should) hold autonomous vehicle developers to a much, much higher standard than we hold human drivers, it is precisely because of excellence.


We actually do "illegalise" casualties by human drivers.


I'm sure the grandparent poster meant banning human driving entirely in order to prevent human driving casualties.


The failure modes are going to be very strange and the technology is not strictly comparable to a human driver. It is going to fail in ways that a human never would. Not recognizing obstacles, misrecognizing things, sensors being obscured in a way humans would recognize and fix (you would never drive if you couldn't see out of your eyes!).

It is also possible that if it develops enough it will succeed in ways that a human cannot, such as extremely long monotonous cross-country driving (think 8 hour highway driving) punctuated by a sudden need to intervene within seconds or even milliseconds. Humans are not good at this but technology is. Autonomous cars don't get tired or fatigued. Code doesn't get angry or make otherwise arbitrary and capricious decisions. Autonomous cars can react in milliseconds, whereas humans are much worse.

There will undoubtedly be more accidents if the technology is allowed to develop (and I take no position on this).


That's Autopilot, not FSD beta, though; at this point it's probably 10 generations old


Ah yes, because "autopilot" is not autonomous.


Well yeah, it's like other autopilots:

An autopilot is a system used to control the path of an aircraft, marine craft or spacecraft without requiring constant manual control by a human operator. Autopilots do not replace human operators. Instead, the autopilot assists the operator's control of the vehicle, allowing the operator to focus on broader aspects of operations (for example, monitoring the trajectory, weather and on-board systems).


That's just devious marketing on Tesla's part. They can always excuse customer misunderstandings with the original meaning you explained, while normal people can safely be expected to interpret "autopilot" as full self-driving (and I'd be surprised if they hadn't actually tested this with focus groups beforehand). So not really lying (great for the lawsuits), but constructing misunderstanding on purpose (great for the brand image).


Except for the manual and all the warnings that pop up that say you need to pay attention.

3000 people die every day in automobile accidents, 10% of which are from people who are sleepy. Even standard autopilot is better than a tired driver


I would say it's better than the human tendency to drive full speed into anything while impaired by a drug. Especially since the bug was fixed in Tesla's case, but the bug in humans' case is probably unfixable.


Drugs (or alcohol)? There are so many more failure modes that drugs are the least of my concerns. Especially of unspecified type: I'm not the least bit worried about drivers hopped up on Tylenol. Humans get distracted while driving, by texting or simply out of boredom, and start daydreaming. Don't forget about driving while tired. Or emotionally disturbed (divorce or a death; road rage). Human vision systems are also pretty frail and have bad failure modes, e.g. when the sun is close to the horizon and the driver is headed towards it.


Computer vision systems also have bad failure modes. The camera sensors typically used today have better light sensitivity but less dynamic range than the human eye.


They fixed driving into stationary things? That's news to me. What's your source?

It's not an easy problem to fix at high speed without false positives, and they seem to really hate false positives.


I live in north-central Idaho. 10 minutes from 2 state-universities, but in an otherwise relatively rural part of the county with a 1/4 mile long, somewhat steep driveway.

Every year, I'm amazed at how quickly our personal "veneer of civilization" collapses in the snow.

The prior owners of our home would just keep their kids home from school, and work from home an average of "about a week every winter."

We're a little more aggressive with snow removal, but there are still mornings every winter where I'm getting up at 5 to spend a couple hours plowing and blowing out drifts on our driveway (after typically doing the same thing the night before) just in order for my wife to make it down to our county road which might still have a foot or so of snow covering it.

Similarly, in windy snow-covered conditions, there are a couple spots between us and town where the snow regularly drifts back over the road in a matter of hours, causing a "well, I know the road goes about here, I think I can make it through these drifts if I floor it so here it goes" situation.

Even when the roads are well plowed and clear, there are plenty of situations where it's difficult for me, a human, to distinguish between the plowed-but-still-white-road and the white snow all around it in some lighting conditions.

And let's take snow out of it. Our fastest route into town involves gravel roads. And our paved route is chip-sealed every couple years, and typically doesn't get a divider-line drawn back on it for 6-months or so after.

Which is all to say, I think it's going to be quite a while before I have a car that can autonomously drive me into town in the summer, and global warming aside, I'll probably never have one that can get me there reliably in the winter.


Northern Canada here. We have all been down that road. I had a rental once that wouldn't let me back up because the sensor was frozen over. I doubt AI will ever handle winter situations without help.


> I doubt AI will ever handle winter situations without help.

Sure it will, at least eventually. However, I suspect the humans at the time won't like the answer: that it's not always safe to drive in these conditions, and the car will refuse to drive autonomously, even if it is technically capable of navigating the route. It may deem the risk of getting stuck, etc. to be too high. Or you may need to accept a report to your insurance company that you've opted to override the safety warnings, etc.


Lol. Good luck selling that in the north, the mountains, farm country, or anywhere else more than 10 miles from a Starbucks. Sometimes lives depend on being able to move, and there isn't time to reprogram the robot to understand the risk dynamic. Malfunctioning sensors or broken high-beam circuits (Tesla) are no excuse for a car to remain stuck in a snowbank.


Why do you live in a place where you have to shovel snow from 5am on a week day? I mean I appreciate building character but at some point you're just deciding to live life on hard mode.


First, they are "plowing and blowing", not shoveling (or not shoveling much) - if you have a significant amount of snow, shoveling is just impractical as well as back-breaking. Second, even if you don't get snow overnight, you get the drifting they mention, which is where winds blow snow onto the nice clean driveway you had cleared previously. Drifting can be quite significant with lots of snow and big open areas!

Lastly, not the OP, but winter is my favorite season for the most part, and I love being around lots of snow!


A large band of the United States reliably gets heavy overnight snow. In my case we're talking an hour west of a major metro (Boston). These days, the inevitable travel snafus notwithstanding, I just stay home. But when I had to go into an office, barring a state of emergency, digging out in the early a.m. was a regular occurrence.


Jesus christ HN. Not everyone is an IT guy with a comfortable salary. Some people have families or other roots they don't want to sever, or lack the money and useful skills to move...


Autonomous driving systems are set at various levels of autonomy.

Level 0 is no automation, level 1 is just dumb cruise control, and level 2 is radar adaptive cruise control plus lane keeping (which is where most production systems like Tesla Autopilot and GM Super Cruise currently sit). Level 2 still requires full human supervision; if you engaged it on the road above, it would either fail to engage or you'd crash and it would be your fault. Level 3 adds conditional automation: the system handles the whole driving task in some conditions, but a human must be ready to take over when the system requests it.

Level 4 is where it gets really interesting, because it's supposed to handle everything involved in navigating from Point A to Point B. It's supposed to stop itself in the event of encountering something it can't handle, so you could theoretically take a nap while it drove.

However, an important limitation is that Level 4 autonomy is geofenced: it's only allowed in certain areas on certain roads. Also, it can disable itself in conditions like construction or weather that inhibit visibility. Waymo vehicles like these are ostensibly level 4; if you tell them to drive through a back road in the snow they'll simply refuse to do so. It's only useful in reasonably good conditions in a few big cities.

Level 5 is considered to be Point A to Point B, for any two navigable points, in any conditions that the vehicle can traverse. You could build a Level 5 vehicle without a driver's seat, much less an alert driver. I kind of think this will require something much closer to artificial general intelligence; level 4 is just really difficult conventional programming.
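
Roughly, in code (just a paraphrase of the level distinctions above, not anyone's actual API; the class and method names are invented):

  # Sketch: the questions that actually separate the levels, as described above.
  from dataclasses import dataclass

  @dataclass
  class AutomationLevel:
      level: int        # 0-5 per SAE J3016
      geofenced: bool   # restricted to a vetted operational design domain?

      def fallback(self):
          if self.level <= 2:
              return "human supervises at all times and is liable"
          if self.level == 3:
              return "human must take over when the system requests it"
          return "system brings itself to a safe stop on its own"

      def can_nap(self):
          # Only L4 inside its geofence, or L5 anywhere, removes the driver.
          return self.level == 5 or (self.level == 4 and self.geofenced)

  print(AutomationLevel(level=4, geofenced=True).fallback())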


It's not obvious that Level 4 falls within what one would call really difficult conventional programming. That level entails something like "in the event of any exceptional situation, find a safe stopping location and safely bring the car to a stop there," and even that alone seems incredibly hard.


Actually it doesn't matter if your cruise control is dumb or adaptive. If you have only cruise control, of either kind, then it's level 1.

And if you have lane-keeping but not cruise control, that's also level 1.

The difference between 1 and 2 is weird.


I'd still buy a self-driving car that refuses to drive on that road.


In the back seat of the Waymo there's a "Pull Over" emergency lever.


You can't always "pull over."


Lots of roads like that in Britain as well, and the speed limit is 60 mph/100 km/h. It's not uncommon for two cars on a single-track road to adjust speed to pass each other at a passing place without slowing down much, so at a closing speed of over 100 mph. Perfectly safe for human drivers who know the roads.


This sounds like the sort of "perfectly safe for human drivers who know the roads" that actually results in a fair number of road deaths.


If you look at the accident maps, there are almost none on single track roads and lots on twin track roads. My hypothesis is that driving on a single track road feels much more risky so people pay more attention and slow down more on blind corners. Also, it’s not possible to overtake and a lot of accidents are related to overtaking.


Believe it or not there are tons of two-way roads like that just 30 minutes from Silicon Valley that self-driving cars could practice on. Here's an example: https://goo.gl/maps/1CVb7Mpiwv1VL2sd7


There're also similar roads 30 minutes from Silicon Valley that have all that, plus residences, pedestrians, parked cars, sheer cliffs, unclear driveway splits, and porta-potties, eg. https://goo.gl/maps/57jzzK6fvtCqvu5w5

Strangely I've never seen Waymo vehicles practicing on that. They're all over Mountain View, but I have never once seen them in the mid-peninsula hills.


Just have them drive up to the Lick Observatory and back.


That's just stunningly beautiful - the Czech countryside is something else!

I'd gladly buy a self-driving car that required some additional input on such a road and had additional aids to spot oncoming traffic I can't see behind the tractor that's a few hundred meters forward of the spot linked to. It would still be safer.

To really make things work, we need cars to be able to negotiate the way humans do on the right of way, etc. There is a lot of non-verbal (and when that fails, very verbal) communication while driving. Currently, cars can't communicate with each other or with pedestrians, which limits the possibilities a lot.


You can replicate that without going overseas. Send that autonomous vehicle over the Golden Gate bridge, take any of the next few exits, and turn right. The street I live on is a paved horse path from the 1910s. No snowdrifts, but a lot of aggressive drivers angrily refusing to back up, which will be fun to see software deal with!


As someone who learned to drive in the city, those roads make me sweat bullets.

My grandpa who drives on those roads primarily, sweats bullets in the city.

Maybe you’ll have different driving models to load in different scenarios …


My mother thinks nothing of driving on deserted roads in significant unplowed snow. She gets nervous on a dry, Texas highway at rush hour.


Yeah, that seems perfectly rational. There is nothing to hit on a deserted highway. Driving in traffic, on the other hand, is more stressful and has worse downsides.


> significant unplowed snow

Spinning out on a deserted highway and hitting a snowbank and getting trapped in your car kills a large number of people every year. Even with smartphones, calls for help can't always be responded to in time, resulting in death. (Have an emergency kit in your car if you live above the snow line!)

Driving in city traffic can be quite harrowing, but hitting another car at 20-30 mph isn't usually fatal. (Wear your seatbelts!)

The point that the GP post was trying to make is that humans have different preferences, and what seems dangerous to one doesn't seem (and possibly isn't) dangerous to another. Humans are also notoriously bad at judging danger, e.g. some people feel threatened by the idea of wearing paper masks.


The computer doesn't have to be perfect; it just has to be better than a human.


Adding to this to really drive the point home: it doesn’t even need to be better than a human that’s good at driving. It only needs to be better than the average human driver. Anecdotally speaking, that’s not such a high bar to pass (relative to the topic at hand).


For general acceptance I think it has to be better than how good the average human thinks they are at driving.

Secondly, its dumbest errors have to be better than what the average human thinks their dumbest errors would be. If there is an accident and everyone thinks they would never have made this error, it will crush acceptance.

Looking at the general accident stats and telling people that, on average, there are fewer deaths on the road, but that they might die in a stupid accident they would never have been in had they been driving themselves, will be a very hard pill to swallow. Most people prefer to have the illusion of control even if statistically it means worse expectations.


Level 4 is where most of the value is. If a system could drive in all cities and on highways, that's more than 90% of the benefit.


Agreed 100%. There will be special exit/on ramps built along highways and the trucks will largely just stay in their lane even if slower. It would cut the number of truckers needed by probably 50+%.


For depot to depot runs, sure. Most runs aren't that though, and require direct delivery from manufacturer to purchaser. Plenty of deliveries, for example in Chicago, basically happen off a residential street. Alley docking and turning around in these environments is challenging even for a human.

Add to all this one thing and we're further off than I think most people realize: weather. Show me an FSD doing better than a human in the snow or we're not really anywhere yet.



More likely driving on most highways in decent weather which is a big win. I'd pay for that.


The more I travel, the more I consider myself an SAE level 4 driver :)


This is legally required today. It's not necessarily a reflection on Waymo. Whether their system is perfect or not, it's legally required that they do this until the government changes its mind.


California DMV regulations do allow testing of autonomous vehicles without a safety test driver if certain conditions are met, spelled out in Title 13, Division 1, Chapter 1, Article 3.7, Section 227.38 [1]. The most technically challenging of these is that the car must be capable of operating at SAE level 4, which goes back to the OP's comment. CPUC licensing for commercial passenger services also allows this [2].

That said, I agree with others that this is the natural progression of testing rollout and doesn't tell us anything about the pace at which the rollout will occur, in particular whether it will be faster or slower than Phoenix.

[1] https://www.dmv.ca.gov/portal/file/adopted-regulatory-text-p...

[2] https://www.cpuc.ca.gov/regulatory-services/licensing/transp...


Yeah, the commenter straight up assumes that a person is present because the L4 tech doesn't work, when in reality there are legal, liability, and even user comfort reasons to have someone on board with this new pilot.


L4 with ability to phone home for remote assistance is good enough.

By the time L5 arrives people will have been happily riding around in vehicles with no steering wheels for decades. L4 cars that phone home less and less every year.

Eventually someone will notice that no L4 car has phoned home for a whole year and almost nobody will care. Just a footnote to an era that already feels taken for granted.


Is remote assistance good enough, though? It probably works fine when you have a fleet of 100 cars on the road, but 10,000? 1,000,000?

How many of those cars have caused a traffic jam at a given moment because they’ve encountered a traffic cone? How long does each issue take to resolve? It seems like, in addition to the technical hurdles, there are many more logistical hurdles before this can be rolled out at scale.


When manufacturers (after taxi companies) start competing for this, the manufacturer that can have 1 remote driver per 100 vehicles will beat the one that needs 1 remote driver per 10 vehicles. A manufacturer might require massive halls with thousands of workers to pilot their fleet of cars, but so long as customers pay the bills, that's no problem. And customers will need to pay the bill through subscriptions on FSD packages.


> I still think that true level 5 (i.e. ability to drive autonomously everywhere with zero human oversight with a safety record equivalent to the median human driver) requires AGI.

This might be true. Most of the time (95%) I am on complete human-brain autopilot when I'm driving, but the other 5% needs my full focus and attention. I shut off the radio and tell the other passengers to be quiet (if I have the time for it).


This assumes that the challenges that are hard for a human are the same challenges that are hard for a self-driving car. That might be the case, but self-driving cars may have some theoretical advantages, such as 360° cameras/lidar and the ability to follow satellite navigation without having to take their eyes off the road.

Put another way, the 5% of times I need to focus are usually the times where I am somewhere new and don’t necessarily understand the road layout - which something like Waymo may avoid through mapping for instance.

It might be true, but plenty of problems that have been thought to require true AGI have later been found to not require it after sufficient research - for example it’s not long ago that we thought good image recognition was entirely out of reach.


Anybody who rides with me on a regular basis has come to recognize the sudden stop halfway through a word when I switch from autopilot brain to active driving. There are times when you need more focus on everything than others.


It's only AGI until someone achieves it. Then it becomes statistical analysis.


> Also, just a reminder that Waymo in Phoenix is nowhere close to being level 5

Because they are not even trying to be level 5. They've made it very clear that they will only ever be a level 4 company and that level 5 is not feasible.


Anyone who says L5 is bullshitting honestly.

L4 is enough to be viable and safe, and is all that is needed.

In fact this level crap is bullshit. It's the speak of MBAs at Bain and McKinsey who think they understand tech, not engineers.

Real engineers don't stare at their debugging screens going "check out this data, is it L3 or L4?"

Instead engineers look at things like safety-critical interventions per kilometer, non-critical interventions per kilometer, accidents per kilometer, etc.
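
Those metrics boil down to simple rates over drive logs, something like this (the log entries below are made up purely to show the shape of the calculation):

  # Sketch: interventions and accidents per 1,000 km from invented drive logs.
  drive_log = [
      {"km": 412.0, "critical": 0, "non_critical": 3, "accidents": 0},
      {"km": 655.5, "critical": 1, "non_critical": 2, "accidents": 0},
  ]

  total_km = sum(d["km"] for d in drive_log)
  for metric in ("critical", "non_critical", "accidents"):
      rate = 1000 * sum(d[metric] for d in drive_log) / total_km
      print(f"{metric}: {rate:.2f} per 1,000 km")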


> In fact this level crap is bullshit. It's the speak of MBAs at Bain and McKinsey at who think they understand tech, not engineers.

So how do you explain, succinctly, the difference between a car that can hold 55 mph (but do nothing else automatically) without driver intervention and one that can change from 25 mph to 55 mph and change lanes without driver intervention?

The differences between levels are drastic in terms of implications on overall utility of the technology. There's a reason the terms are used.

> L4 is enough to be viable and safe, and is all that is needed.

Needed for what exactly? Each level has its own benefits. That's exactly the reason the levels were established. I want to be able to get into a car, put in an address, and fall asleep for the whole trip. When do I get that? L4? L5? If I can't do other tasks while the car is driving, then full autonomy is essentially pointless.

> not engineers.

Precisely. The levels have nothing to do with engineers. It's about understanding the benefits for the population. For example - can I fall asleep in the back of the Tesla autonomous car? No, because it's not L5. That's the point.


> When do I get that? L4? L5?

It doesn't really matter so long as there are fallback drivers in all situations. A lower-level autonomy car would remotely connect to a backup driver (in a third-world call center) several times per hour in order to behave like a higher-level one.

A higher-level one would connect more rarely. The only difference I notice as a customer is that I pay more out of the box for the higher-level car and less as a subscription, while the lower-level car costs less out of the box and has a higher subscription fee because the human costs are higher.


It's not an engineer's thing at all. The classifications are very specific differences in the overall system. From an engineer's point of view they are just creating a fully autonomous car. L4 -> L5 is more about how many scenarios that fully autonomous car has been tested through.

https://en.wikipedia.org/wiki/Self-driving_car#Classificatio...


I think the key thing people need to realize from the SAE definition [1] of the levels is that they represent designs of the system rather than abilities of the system. I could slap a camera on my dashboard, tell the car to go when it sees green pixels in the top half of its field of view and stop when it sees red pixels. Then I could get out of the car and turn it on, and for the 5 seconds it took for that car to kill a pedestrian and crash into a tree, that would be level 5 self-driving.

So when people talk about a particular company "achieving" level 4 or level 5, I don't know what they mean. Maybe they mean achieving it "safely" which is murky, since any system can crash. Maybe they mean achieving it legally on public roads, in which case, it's a legal achievement (although depending on what regulatory hoops they had to go through, maybe they had to make technical achievements as well).

[1] : https://web.archive.org/web/20161120142825/http://www.sae.or...


> L4 -> L5 is more about how many scenarios that fully autonomous car has been tested through.

Not really. L5 is impossible, period.

What I think will happen is L4 with 99.999% of cases covered, and have it come to a safe stop for the remaining 0.001%, assuming there is a way to safely stop.

L5 which means 100.000% covered, will not happen, but the PR people will continue to use the term.


> Not really. L5 is impossible, period.

Agreed.

> L5 which means 100.000% covered, will not happen, but the PR people will continue to use the term.

Which is precisely why so many people are critical of the term "fully autonomous".

> 99.999% cases covered

What cases? The point is that edge cases are the issue with autonomous driving. I can fall asleep on a train or a plane because I know there is a human conductor who can handle the edge cases. This doesn't exist with L4. Everything else that doesn't let me fall asleep (read a book, look at my phone, etc.) is only marginally better.

> assuming there was a way to safely stop.

That's a pretty damn strong assumption.


I've always thought of L5 as a car that can operate via its sensors + onboard computing alone, at least as well as a median human driver.

No communicating with a server to download maps, no perfect performance, just a car that knows the state's traffic laws, driving a brand new road in any reasonable weather, and getting into fewer crashes than a human would.


>It's the speak of MBAs at Bain and McKinsey at who think they understand tech, not engineers.

Really? Because the L5 claims come more out of the Ubers and Teslas than the "MBAs".


It's usually the MBAs and PR people at those companies, not the engineers, that use that term.


My point is that "the MBAs" don't have a monopoly on hyperbole.


Level 5 isn't feasible, as much for legal reasons as technical ones.

I don't think any company wants to sign off on the notion that their software will handle all classes of problem, even ones they have no data for at all.


Except Tesla, who are going to be "L5 by the end of the year" every year!


I'm pretty sure Tesla's whole strategy is to overpromise so much that it's as much a legal liability to not have L5 as to have it.


Tesla brand sells a lifestyle at this point, not just a vehicle. They have to keep pumping it.


> Level 5 isn't feasible as much for legal reasons as technical ones.

That point is taken into account under J3016_202104 § 8.8:

“There are technical and practical considerations that mitigate the literal meaning of the stipulation that a Level 5 ADS must be capable of ‘operating the vehicle on-road anywhere that a typically skilled human driver can reasonably operate a conventional vehicle,’ which might otherwise be impossible to achieve. For example, an ADS-equipped vehicle that is capable of operating a vehicle on all roads throughout the US, but, for legal or business reasons, cannot operate the vehicle across the borders in Canada or Mexico can still be considered Level 5, even if geo-fenced to operate only within the U.S.”.


> human safety driver to be remote, as is the case in Phoenix.

It's not a remote human safety driver, it's more like a remote human safety coach.

The difference is giving high level directions vs directly driving the car. They don't remotely drive the car because that would obviously be super dangerous w/r/t connection stability/latency.


Notably, SAE level 5 is actually well below the standard that you've laid out here. The vehicle simply has to be able to make itself safe in situations that it can't handle. This allows room for remote assistance or a human takeover in certain situations.


> I still think that true level 5 ... requires AGI.

In case anyone else was wondering what AGI means, it's Artificial General Intelligence. [1]

1: https://en.wikipedia.org/wiki/Artificial_general_intelligenc...


Definitionally, the achievement of self driving vehicles does not require AGI. Doing one task very well requires a subset of AGI called Weak AI.


Part of driving well requires a diverse array of abilities, right? You know what is litter and what is debris because you can make a guess about material properties based on some observations: looking at something that moves without being touched and is translucent, you probably conclude that it's a plastic bag of some sort and not a hazard. Similarly you probably use a wealth of experience to judge that a small piece of tire is not a hazard, but a chunk is a hazard and a whole one is definitely a hazard.

Or, on seeing an anomalous <children's toy> enter the road you can probably guess that a child might follow shortly after.

I'm not suggesting that the problem cannot be solved without AGI, but you can see why some people might think that though, right?

My personal feeling is that we shouldn't be setting the bar at making a car that can handle any situation anywhere way better than any human at any time, but that we should also try to make roads that are more suitable for self driving vehicles. I'd rather we move to driving agents that don't get bored, frustrated, or angry.


I think the engineer's answer to the child entering the roadway would be: the car should never drive at such a speed that, if the child were to enter the visible zone, it could not swerve and slow enough to avoid hitting them - forget the toy. After that we can move the goalposts and say it's a FAST child on a bike - but then the reasonable answer is that a human driver may also have hit the biking child. Then, of course, we get into the ethics of fault for the accident.


My agreement with you falls largely under my last paragraph. I'm trying to illustrate a couple examples where driving as a human on roads built for human drivers requires perceptive powers and understanding that are beyond 'merely' driving safely, but also require a sort of holistic understanding of the world. If your goal is to make a better than human substitute driver then I don't think it is a completely unreasonable position to believe you'll need some level of AGI. Of course, as we figure out how to do concrete tasks and incorporate them into a system they'll stop being considered traits that would require general intelligence, but I suppose that is a different discussion.

And your example isn't moving goalposts, its just another legitimate example of a situation thats gotta get figured out. If you think that things like understanding that some kid learning to skateboard nearby could fall a surprisingly far distance and thus you should exercise caution, or being aware of factors that imply fast biking children (say, an adult and a child implies the potential for another fast moving child on the same trajectory), that this sort of situational and contextual awareness is critical for proper driving.. then yeah, that would be a reasonable sounding argument to support "I think self driving cars will require some level of progress in AGI".

That's all I'm long-windedly getting at.


"Making roads more suitable for self-driving vehicles" will make roads much worse for pedestrians and cyclists if you're not very careful.


Yes, this is very important to keep in mind, thank you. I wonder what sort of things one could do to make roads easier for automation and still serve regular people trying to be outside.


Stacking shelves in a warehouse is one task. Driving is not one task. There are too many corner cases for a modern-day AI system to perform as well as a median driver in, say, 95% of environments and settings in North America and Europe. I think the argument is that such a system might as well be AGI.


The idea that an AI must have the ability to learn how to do anything in order to learn how to drive seems like an extremely pessimistic and misguided goalpost. That is also not how iterative development works.


I think ML is fantastic, and combined with LiDAR, inter-vehicle mesh networking, and geofenced areas where humans take over, we could quickly arrive at mostly automated driving without trying to reinvent the human brain. We should also be more focused on enforcing established legal limits to newly manufactured cars. Just preventing someone from exceeding the speed limit or driving the wrong way would start saving lives immediately. It would also allow traffic flow to be optimized, and eventually prioritize emergency traffic or allow metro areas to be evacuated efficiently for things like natural disasters.

It would be great to see the dawn of AGI, but I don't think it will ever happen with classical computation. GPT-3 spits out nonsense with the input of the largest and easiest to parse portion of reality, and I have not seen any ML approach replicate the abilities of something as simple as bacteria. ML requires constant validation from human operators, so the same is going to hold true for ML powered vehicle navigation.


Driving is a set of tasks, but not AGI. AGI would be if it could drive and then also learn to write poetry without any code update.


There's no such thing as a remote safety driver.

Cell data connections aren't reliable enough, and having the car emergency stop (and potentially get rear-ended) when it loses signal wouldn't be acceptable.


> ability to drive autonomously everywhere with zero human oversight with a safety record equivalent to the median human driver

I think this statement is off the mark. Comparing to a human is hard. Not many accidents happen because people are bad at driving. Driving is honestly pretty easy. They happen because people are distracted, tired, drunk, or perhaps just an asshole driving recklessly for thrills or for speed.

A self driving car might be a lot "worse" than the average human driver but could still be a huge improvement in terms of expected safety record for driving overall.

They don't need to be better than humans; they just need to not be shit 100% of the time, unlike humans.


Not sure about the AGI requirement. The current systems will always need the heavy involvement of human intelligence to be able to rescue stuck cars, drive in new areas, or monitor for changing driving conditions and update the driving model. There does seem to be at least some hope these systems will be able to run a true driverless taxi service with minimal geofencing. On the other hand, a human can go to a new country with different road markings, signage, rules, and traffic flows and be able to drive safely pretty much immediately, or maybe after a quick Google search. That would truly require AGI.


Would it be easier to just build a "futuristic" test city designed around the idea of self-driving cars, to make it easier for them to work? If self-driving cars are so great, people will move there naturally due to improved quality of life. Trying to make self-driving cars work in current cities is like building around crippling technical debt.

Seems like Google and a few other tech companies could easily bootstrap a small city by planting some offices somewhere


If we're building a futuristic city from the ground up, wouldn't it be better to rely on mass transit? Self driving cars still have almost all the same problems that human-driven cars have.


A futuristic city would have public transport and bikes. 5,000 pounds of metal for transporting one person is horribly inefficient.


> This tells me that we’re still a long way from full level 4

The only thing it tells me is that regulations are more lax in Arizona than California.


I still think that true level 5 ... requires AGI

I agree today's AI tech is a long way off from completely supplanting a human driver. I'm surprised the average consumer I talk to about this seems to think we're on the cusp.

But as vehicles with neural nets become more prevalent I expect we'll see the problem morph as it gets tackled from other angles as well. e.g. Self-driving corridors with road infrastructure aimed to improve AI safety (whether that be additional technology, modified marking standards, etc).

Once upon a time street signs with speed limits, curve warnings, and such didn't exist. After faster cars supplanted horse-drawn carriages, highways became a thing. Eventually when the only reason humans drive is for recreation (e.g. off-roading) the problem from the car's perspective will look somewhat different than it did during the transition.


Is this not a legal requirement?


I was at Yellowstone this last week. https://ridebeep.com/ had some shuttles there doing pretty straightforward navigation on one straight road, and they had human minders as well. The day I went to try them, it was mildly misting, just barely enough to see it on the glass. They were not running the machines because they kept stopping: they initially saw the water drops as obstacles. The humans had made the call that it just wasn't an acceptable experience.


> This tells me that we’re still a long way from full level 4 (and certainly level 5) autonomy in a busy city like San Francisco.

I think it's in part regulatory, relating to paid rides in autonomous vehicles in CA, which is why Cruise is dodging it with passenger-carrying but unpaid rides that are fully driverless. I can't find a good summary of the rules, but I infer from the coverage I've seen that the threshold for having no safety driver when offering paid passenger rides is different from that without paid passengers.


You are making a ton of assumptions about what is driving this decision.


The edge cases requiring immediate human attention are still too frequent for the human safety driver to be remote, as is the case in Phoenix.

I think you can only determine that if you know how many times the human attendant takes over.

Just having a human behind the wheel doesn't tell you much, I don't see how to get full self-driving without an intermediate step of human supervision.


>I still think that true level 5 [...] requires AGI.

Oh, but of course! I'm still surprised by people who think otherwise. The number of corner cases that you have to solve while driving is pretty much limitless; you cannot "train" something on that - heck, some humans are not even fit for it. We are *FAR* from truly autonomous vehicles.


The key metric is probably "obscure incidents" per mile driven, probably classified manually into various levels of danger. Once the "incidents that lead to disaster" count statistically reaches 0, it will definitely roll out en masse without the need for safety drivers.

My guess is that they know how many miles they have to drive in order to reach that number, and it's a whole lot. Statistics and math stuff but you can probably pin it down to the month or quarter based on trends. Either that, or it's about driving every road in the city with all sorts of weather / traffic / pedestrian conditions until there's no issue. This isn't generalized AI driving (L5) but it's a much more logical approach to getting autonomous driving coverage where it's the most valuable.

My guess is that each city will involve a safety driver rollout until they have enough data to know the incident rate is zero. There might be a lot of variance between cities - maps data, weather conditions, customs, etc. Then remove the safety drivers.

I'm sure they also are experimenting with disaster/safety protocols while they do the roll out.

My prediction is that waymo will be a mainstream option within the next 5 years.
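
On the "statistics and math stuff": a common back-of-the-envelope bound here is the rule of three. Assuming incidents are independent, driving N miles with zero incidents gives roughly 3/N as a 95% upper confidence bound on the incident rate, so demonstrating a very low rate takes an enormous number of miles. The numbers below are purely illustrative:

  # Sketch: incident-free miles needed to bound the incident rate below a
  # target at a given confidence level (assumes independent events).
  import math

  def miles_needed(target_rate_per_mile, confidence=0.95):
      # Smallest N with P(zero incidents in N miles | true rate) <= 1 - confidence
      return math.log(1 - confidence) / math.log(1 - target_rate_per_mile)

  print(miles_needed(1e-8))   # ~3.0e8 miles to support "< 1 incident per 100M miles"
  print(3 / 1e-8)             # the rule-of-three shortcut gives about the same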


It doesn't matter for the rider though, unlike a Tesla where you still have to keep your hands on the wheel.


I'm guessing that if all cars became Waymos right now, vehicle fatalities would probably drop by 99%.

But people have a hard time accepting the notion that an unmanned vehicle may be partly responsible for the remaining 1%.


This tells me that you can show progress and still draw magnifying-glass criticism from people.

You can only please some of the people some of the time.


I suspect the main "roadblocks" are about the environment for the SDC.

All traffic signs and signals need to be machine readable at a distance. That is, a traffic light might beam out "I am light 'SFTL783', will be green for 8 more seconds". The location and other data for SFTL783 is in a preloaded database. Same for speed limits and other signage.

An updated 3d map of all roads would also help a lot. As would car-to-car communication systems.
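
For what it's worth, such a beacon might carry something like the fields below; everything here except the "SFTL783" example above is invented, and a real deployment would presumably build on an existing V2X standard (e.g. SAE J2735 SPaT messages) plus message signing:

  # Hypothetical machine-readable signal beacon, as described above.
  from dataclasses import dataclass

  @dataclass(frozen=True)
  class SignalPhaseBeacon:
      signal_id: str          # e.g. "SFTL783", keyed to a preloaded map database
      phase: str              # "green" | "yellow" | "red"
      seconds_remaining: float
      lat: float
      lon: float

  beacon = SignalPhaseBeacon("SFTL783", "green", 8.0, 37.7749, -122.4194)
  print(f"{beacon.signal_id}: {beacon.phase} for {beacon.seconds_remaining:.0f} more seconds")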


Ah yes, the SF elite will gladly shell out fortunes to ride around in a car just for the opportunity to witness the gross power of AI! Not to mention it's a nice Jag. I wonder if they are hiring models for the "autonomous specialist" role?


They are not legally allowed to not have a human driver onboard. It's a legal requirement that is not a relevant signal one way or another.


All that tells you is that they are cautious in the famously litigious California yet confident enough to actually launch their service there. In the unlikely event something goes wrong, even if it is not their fault, they had a person onboard. It's the difference between that being just a minor incident or getting a class action lawsuit with millions/billions in damages. The law is by far the biggest obstacle to level 4 and 5 driving. So, launching in San Francisco is kind of a big step for them.

Once it has proven itself for a bit of time and they know how to set up their geo-fences and which streets to avoid, they can probably get rid of the person. That would be evident by that person never actually doing anything long before that.

The remote safety monitoring is a smart feature, but it's not going to help much for the type of accidents people worry about most, where something unexpected happens very rapidly. AIs are actually really good at dealing with those situations. Arguably better than humans, for whom this is probably a leading cause of traffic fatalities. The way Waymo operates without hands on the wheel (i.e. level 3 & 4) basically means they have safety nailed already in those kinds of situations.

There's no way a remote person would be quick enough to intervene. That person is there for other reasons. It's complex traffic situations that cause AIs to get stuck occasionally that require human intervention. Usually this is less of a safety concern and more of an annoyance.

Interventions per hundreds of thousands of miles traveled is the key metric that Tesla, Waymo, and other companies use for this. Both companies boast some pretty impressive statistics for it, though of course it's hard to confirm those independently.

It's interesting how Tesla and Waymo approach this problem in different ways. Waymo goes for level 4, but only in areas they've thoroughly vetted; it's taken them many years to start moving to areas other than relatively safe and easy Phoenix. Tesla, on the other hand, offers their AI features just about anywhere they can, but positions them as level 2 - really aspiring to level 4, but with a requirement for hands on the wheel just in case (which means it's level 2). Level 2 is a legal tool to dodge a lot of legislation and bureaucracy. It basically means that if anything goes wrong, the driver is at fault. It will stay in place a long time for that reason. Liability, not safety, is the key concern here.

Arguably, Teslas would probably do pretty well in the areas that Waymo has vetted as well. But they are not in the ride sharing market and need to sell cars world wide and not just in specific geofences in Phoenix and San Francisco. But I wouldn't be surprised to see Tesla offer a similar ride sharing service in similarly geo-fenced areas at some point to get to level 4 & 5. I suspect that race is a lot closer than some people seem to think.


Beating humans doesn’t require AGI. Just not drinking, texting, or falling asleep will get you halfway there.





