Cruise AVs are being remotely assisted (RA) 2-4% of the time on average, in complex urban environments. This is low enough already that there isn’t a huge cost benefit to optimizing much further, especially given how useful it is to have humans review things in certain situations.
The stat quoted by nyt is how frequently the AVs initiate an RA session. Of those, many are resolved by the AV itself before the human even looks at things, since we often have the AV initiate proactively, before it is certain it will need help. Many sessions are quick confirmation requests (is it ok to proceed?) that are resolved in seconds. Some take longer and involve guiding the AV through tricky situations. Again, in aggregate this is 2-4% of time in driverless mode.
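A rough way to reconcile the per-mile framing reported by the NYT with the per-time framing above; this is a toy calculation, and the 4-mile spacing and 20-second average session are illustrative assumptions, not Cruise data:

```python
# Toy numbers showing how "initiates an RA session every few miles" and
# "assisted 2-4% of the time" can both be true. Every figure below is
# hypothetical, chosen only to make the units concrete.
miles_per_hour = 18        # typical urban average speed
miles_per_session = 4      # one session every ~4 miles (NYT's 2.5-5 range)
avg_session_seconds = 20   # most sessions resolve in seconds

sessions_per_hour = miles_per_hour / miles_per_session               # 4.5
assisted_seconds_per_hour = sessions_per_hour * avg_session_seconds  # 90
print(f"assisted fraction ~ {assisted_seconds_per_hour / 3600:.1%}")  # 2.5%
```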
In terms of staffing, we are intentionally overstaffed given our small fleet size in order to handle localized bursts of RA demand. With a larger fleet we expect to handle bursts with a smaller ratio of RA operators to AVs. Lastly, I believe the staffing numbers quoted by nyt include several other functions involved in operating fleets of AVs beyond remote assistance (people who clean, charge, maintain, etc.), which also improve significantly with scale and over time.
> Those vehicles were supported by a vast operations staff, with 1.5 workers per vehicle. The workers intervened to assist the company’s vehicles every 2.5 to five miles
At first read, the NYT is definitely implying that 1.5 workers per vehicle intervene to assist driving. Only after reading the above comment do I notice that they shoved the statements together using different meanings of "workers," as they didn't have the actual statistic on hand.
Where are these remote assistance human drivers located (which country), and how have they been screened for temporarily "driving" vehicles in those cities (do these humans have American licenses?)? How is all this regulated?
Basically, I am curious whether these remote assistance drivers are located in foreign nations without American licenses. And if so, how did you get them cleared to "drive" cars on America's roads?
Thanks
PS: I took a Cruise once in Austin and it needed remote assistance.
"The stat quoted by nyt is how frequently the AVs initiate an RA session. Of those, many are resolved by the AV itself before the human even looks at things, since we often have the AV initiate proactively and before it is certain it will need help."
Hoo boy, sure wish the NYT had clarified that. That changes things significantly.
It's telling that you declined a request for an interview, yet still feel the need to clarify on HN. You'd be doing a lot better with transparency and public trust by just taking the interview.
Is there something that's covered by an interview that a list of questions or an email exchange wouldn't cover? Interviews take up a valuable chunk of the CEO's time, so I'm somewhat sympathetic to them declining it.
Reporters take email interviews and interviews with corporate officers besides CEOs all the time. Any CEO can get a media relations person to write up an email response and then check it over quickly -- or have, for example, the CTO do it instead.
Hi Kyle (hello again), thanks for being so transparent with this.
I suspect this is a moment where the news media is looking for a scapegoat/villain from the AV sector, and your team is an easy target given what has happened recently.
I believe that transparency is the right way to address issues and concerns. Please keep doing that.
>> Cruise AVs are being remotely assisted (RA) 2-4% of the time on average, in complex urban environments. This is low enough already that there isn’t a huge cost benefit to optimizing much further, especially given how useful it is to have humans review things in certain situations.
Funny, since I thought full autonomy was the goal of the company. 2 percent human intervention isn't scalable.
For example - does the vehicle have to come to a full stop, as it can't safely proceed without an operator intervention? At busy periods, do they also have to wait for an operator to become available?
A busy junction could easily have 100 cars through it per minute. If two cars every minute stopped unexpectedly, that would hardly be scalable.
That 2% is not the person in the vehicle; it's Cruise employees. It doesn't scale because it is paid employees intervening instead of the customer driving. It scales in comparison to the ride-sharing competition, but not in terms of people owning the vehicles.
They could spend $75/hr on employees and the cost per car-hour would be just $1.50. That's nothing.
3.5M people work as truck drivers in the US, enough, in principle, to drive ~175M cars at the same time assuming 2% of cars need help at any given moment, i.e., ~60% of all the cars and trucks in the entire country being driven simultaneously.
Presumably though they'd be able to shave that down a few fold between where they are and dominating transportation nationwide (should they ever do so). So, it's pretty scalable in practical terms.
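A quick sanity check of both figures, using only the numbers already quoted in the two comments above:

```python
# Checking the arithmetic in the two comments above.
cost_per_car_hour = 75 * 0.02            # $75/hr operators, busy 2% of car-hours
print(f"${cost_per_car_hour:.2f} per car-hour")   # $1.50

truck_drivers = 3_500_000
cars_supported = truck_drivers / 0.02    # if 2% of cars need help at any moment
print(f"{cars_supported:,.0f} cars supported")    # 175,000,000
```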
This is true, and the numbers are even inflated. There's no way they'll pay these people more than $15 or $20 an hour once it scales, which probably further helps your point.
If you sell a ton of these cars, having employees in control 2% of the time means your costs become enormous rather quickly. You would have to have a subscription business model for this to make any sense, where the person buys the car and then keeps paying monthly for self-driving.
Cruise is aiming for a robotaxi market, not for consumer vehicles. In their business plan, they own and operate fleets. In that kind of model, it’s possible that they can get away with 4% human control. Another way to put that is that they would have one operator on duty for every 25 cars.
No-one credible is talking about selling self-driving cars to consumers for a long time. Taxi services make much more economic sense and are all-around more comfortable from a liability perspective as well.
2-4% is very high. It essentially means none of your trips can be completed without remote intervention.
This puts your cars and the safety of the whole city at the mercy of the reliability of mobile networks. This is a fundamental architectural change in the design of the city. Do the telecom operators take liability if they can't meet their designed SLOs for availability? What are the worst-case scenarios that you have considered?
I'm not really sure what you mean there, but I do know that companies are happy to make unsubstantiated claims. One way news organizations deal with this reality is by interviewing the company and questioning them when things are not clear. Personally I find it likelier that Cruise has things to hide if they don't want to answer interview questions, but I guess that's just me.
I would consider this realistic service design, just as Meta’s Cicero (plays blitz Diplomacy) is smart design. It might work as a service.
What the answer glosses over is that even with just 3% of the time requiring human assistance (roughly 2 minutes out of every hour), the term 'autonomous vehicle' is not really applicable anymore in the sense everybody is using/understanding it. The idea behind that term assumed 'full' autonomy. Self-driving cars. And there is no reason to assume that this is still in sight. The answer puts the 'self-driving car' on the shelf.
PS. Being a human assistant seems to me a difficult job, given the constant speed and concentration requirements.
By this logic Tesla does not have "autonomous vehicles" either. They just do adjustments after the car crashes and kills someone instead of doing it online.
Your plan is to make someone in India remote-control my car? What if the signal goes down? What if it lags and they accidentally give the wrong instructions?
Hello Cruise CEO, there's a huge market for durable and profitable "dumb" cars. Why don't you get into that market? In a time when electronics represent over 30% of car costs and ~50% of car failures, people like me would be happy to buy a car that doesn't suck (low-tech) and can be maintained for decades for a reasonable price. In the meantime, I'll keep buying old Renault/Peugeot cars from the fifties/sixties, I guess :(
I actually concur, my DPF is a nagging beast and I hate it.
But I am betting that quite a lot of the electronic components of cars these days are tied to things, my DPF being a great example, that come from safety and environmental regulation. If you pull the ECU out and trick the motor into running anyway, I am betting your emissions profile will suck massively. Ditto the transmission. The rest of my car seems to involve safety features, sensors, and cameras mostly.
By the time you reinvent the car to exclude all these things, then make it roadworthy again, I reckon you would end up with almost exactly the standard modern car again. Car makers aren't putting computers in for funsies.
> Car makers aren't putting computers in for funsies.
There's a bit of everything. Some parts are due to regulations. Some parts are due to providing fancy options (LCD screens, seat heaters..), some parts are useful for diagnostics.
Still, some designs are overly complex because during production it's easier to use a hundred MCUs assigned to a specific task. But for durability purposes, it would be better to have clear circuitry with only a handful of MCUs running open source software.
I'm guessing there's an interesting middle ground to explore between "raw motor + wheels" and "FSD car".
The things you say are just factually wrong on many levels.
> durable and profitable "dumb" cars
Both of these are pretty much wrong.
First of all, just based on regulation and safety, the car is going to have a huge amount of electronics. Second, regulation about emissions also requires a huge amount of electronics. You can't get away from that no matter how dumb you want to make a car. Maybe you don't like it, but society prefers fewer people dying even if that is inconvenient for you.
Granted, in many ways an old, inefficient, smaller car is still safer and better for the environment than a modern huge car. But that is a failure of the regulation.
Next, the idea that such cars would be profitable. This is simply inaccurate. Car companies can barely make money on cars as it is; in fact, without the parts supply they don't really make money on those cars.
Additionally most consumers simply prefer to buy cars with lots of multimedia options and things like that. Having the ability to warm up your car before you get in is useful for example. Having phone conversations in your car is useful. Having GPS in your car is useful. People like having heated seats. People like having good sound in the car.
People simply aren't buying the cars you seem to be demanding, and making such cars simply wouldn't be profitable. In fact, if you look at China you will generally see an increase in the types of features you don't like.
To suggest an autonomous car company gets into that business makes no sense at all.
And I say this as somebody who doesn't own a car and generally thinks cars should be replaced as much as possible and banned in many places.
There are many ways repairability could be improved without going back to 1960s cars. Many of those should be embraced, but you will simply not get around some inherent complexity of the modern world.
They do? Nobody is forcing you to buy a decked out Cadillac. If you want to buy a base model Malibu for $25k, I'm sure you can find a dealer that will gladly sell it to you.
If you want a brand new car for $10k you’re going to need a time machine, or move to a country without modern safety standards.
That's not wrong, but still, some parts wear out and it's harder to find them nowadays. Although to be fair there's still some production for the most popular cars (e.g. the 2CV, 4L).
But it would be interesting if we took that old "built forever" approach (remember washing machines that last a lifetime?) and used it to build modern low-tech cars with the newer higher safety standards and efficiency gains. It's a shame how everything that comes out of factories nowadays has a 10 year lifespan.
Remote operation of vehicles often makes a lot of sense economically, since you can effectively decouple drivers from vehicles/riders. As you pointed out, this means you can shift to deal with peak loads and all of that - great.
Given everything you know now, was it wise to push for expansion over improvements to safety and reliability of the vehicles? On one hand, there is certainly value in expanding a bit to uncover edge-cases sooner. On the other hand, I'm not convinced it was worth expanding before getting the business sorted out.
My guess is that, given the relatively large fixed costs involved in operating an AV fleet, it makes some sense to expand at least up to that sort of 'break even' point. Do we know what that point is? Put differently, is there some natural "stopping point" of expansion where Cruise could hit break-even on its fixed costs and then shift focus towards reliability?
The first thing that came to mind after reading "… makes a lot of sense" was the latency overhead incurred when RA is activated, and the association with drunk driving due to the increased response time.
Maybe the article answers the following, but I don't know, since I haven't read it yet.
- median, p95, p99 latencies for remote assistance (a sketch of how these would be computed is below)
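Not in the article, but for concreteness, here is how those percentiles would typically be computed from a sample of RA round-trip times; the lognormal sample is synthetic, purely for illustration:

```python
import numpy as np

# Synthetic stand-in for real RA round-trip-time logs (milliseconds).
rng = np.random.default_rng(0)
latencies_ms = rng.lognormal(mean=5.0, sigma=0.6, size=10_000)

# The three figures the comment asks about.
p50, p95, p99 = np.percentile(latencies_ms, [50, 95, 99])
print(f"median={p50:.0f} ms  p95={p95:.0f} ms  p99={p99:.0f} ms")
```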
I think a lot of the confusion here is over what's meant by "RA". This isn't a remote driving situation. It's like Waymo, where the human can make suggestions that give the robot additional information about the environment.
So when low-wage mechanical turk costs turn out cheaper than engineering to improve driverless vehicles… this will just be another exploitative gig job for folks in remote locations?
I don't trust that proper attention will be given to improvements in the tech once profit and ROI are weighed against human labor costs, especially in lower-wage nations.
Well, low wage mechanical turk costs have not yet turned out to be cheaper and there's no reason to expect that to happen, so this is one channel for exploitation that I'm not going to worry about.
Huge cojones on the CEO to risk public statements given the enormous legal and regulatory pressure being applied. I certainly wouldn’t recommend this tactic!
Cruise is leveraging human-in-the-loop to expand faster than they otherwise would, with the hope that they will solve autonomy later to bring this down.
I don't think this is a viable strategy though given the enormous costs and challenges involved.
There doesn't exist a short-term timeline where Cruise makes money, and the window is rapidly closing. They needed to expand to show big revenues, even if they had to throw 1.5 bodies per car at the problem.
Prediction: GM will offload Cruise, a buyer will replace leadership and lay off 40% of the company. The tech may live to see another day, but given the challenges that GM has generally (strikes, EVs, etc.), they can no longer endlessly subsidize Cruise.
Human in the loop can be vastly cheaper than you might think.
If this lets them have the only Level 5 system on the market, they could double that and millions would happily pay. Suppose you're a trucking company: would you rather pay $50k/year or $5k/year? That's a stupidly easy choice.
Americans drive roughly 500 hours per year. If they can replace 98% of that with automation and the other 2% with someone making $20/hour, that only costs them ~$200/year, which then drops as the system improves.
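The back-of-the-envelope version of that calculation, using the commenter's assumptions (not industry data):

```python
# Commenter's assumptions: ~500 driving hours/year per American,
# 98% automated, the remaining 2% handled by a $20/hour operator.
hours_driven_per_year = 500
automated_share = 0.98
operator_wage_per_hour = 20

remote_hours = hours_driven_per_year * (1 - automated_share)  # 10 h/yr
annual_cost = remote_hours * operator_wage_per_hour           # $200/yr
print(f"remote-assist labor ~ ${annual_cost:.0f} per car per year")
```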
I'm not saying they can't - I'm saying they are running out of time to do so, and with the DMV shutting them down they've been hamstrung further.
They are burning 100s of millions every quarter. They needed to show either growth/expansion or some sort of positive cash flow. They now have neither.
Imagine a car rental service where someone in an office building drives the empty car to you, then drives it back when you're done with it. No taking public transportation just to get back to the garage to pick up the next drop-off. Imagine simply swapping the driver controlling a long-haul truck remotely when it's time for a shift change. With good handoffs the truck can be driving 24/7 without ever slowing down.
Really the only autonomy you need in that situation is enough to pull the truck/car/whatever over and park it if the connection is lost.
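A minimal sketch of that fallback logic, assuming hypothetical vehicle and link interfaces and an arbitrary 0.5 s link-loss threshold (none of this is any vendor's actual API):

```python
import time

HEARTBEAT_TIMEOUT_S = 0.5  # assumed threshold for declaring the link lost

def remote_control_loop(vehicle, link):
    last_heartbeat = time.monotonic()
    while not vehicle.parked:
        msg = link.poll()  # non-blocking read of operator commands
        if msg is not None:
            last_heartbeat = time.monotonic()
            vehicle.apply(msg)
        elif time.monotonic() - last_heartbeat > HEARTBEAT_TIMEOUT_S:
            # Connection lost: execute the one autonomous maneuver the
            # comment says you really need, then stop driving.
            vehicle.pull_over_and_park()
            break
```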
Teslas have no redundancy for their sensors and depend upon unreliable GPS.
They have some level of redundant processing, but I've seen FSD disengage because of software faults without anything taking over.
They should never be allowed to run a rideshare service just due to this even if they do miraculously solve camera-only perception and not relying on HD maps.
> Two months ago, Kyle Vogt, the chief executive of Cruise, choked up as he recounted how a driver had killed a 4-year-old girl in a stroller at a San Francisco intersection. “It barely made the news,” he said, pausing to collect himself. “Sorry. I get emotional.”
...
> Cruise’s board has hired the law firm Quinn Emanuel to investigate the company’s response to the incident, including its interactions with regulators, law enforcement and the media. / The board plans to evaluate the findings and any recommended changes. Exponent, a consulting firm that evaluates complex software systems, is conducting a separate review of the crash, said two people who attended a companywide meeting at Cruise on Monday.
After the first [edit: the first performative charade, about the little girl in a stroller], why should we trust that the second isn't also a performative charade? What independence or credibility does some hired law firm have that the company itself does not? How about using an independent third party?
Hmm? I saw it exactly the opposite way. A lot of people in the autonomous driving industry are driven by exactly what Vogt describes (the little girl in the stroller, etc.). See also Chris Urmson of Waymo fame's TED talk; he talks about a similar motivation[1].
It's a fallacy everyone conveniently ignores. The woman the Cruise car ran over was actually first hit by a human driver who is still at large; not a peep about him. The press kinda just accepts this as the "cost of doing business".
The way I see it, Vogt sincerely believes autonomous cars will make things safer from the #2 killer of Children under 19 (outside of guns) by a wide margin [2] and therefore accelerated the rollout past what was safe. I see no evidence otherwise.
We have become so desensitized to human deaths due to cars even though those numbers are higher than violent acts of terrorism et al that actually kill far fewer people each year.
Many people have to be killed AT ONCE for it to be news worthy these days.
> A lot of people in the autonomous driving industry are driven by exactly what Vogt describes (little girl in the stroller etc.). See also Chris Urmson of Waymo fame's TED talk, he talks about a similar motivation[1].
To me, that's evidence that it's performative. First, it's a talking point; it looks, smells, walks and talks just like typical corporate/industry framing and messaging, with even a 'think of the children!' line, and the redirection (from the safety of autonomous cars, the topic, to whatabout something else). Second, its repetition by Urmson is further evidence - that's how talking points work. Third, the public's repetition of it, in surprising detail, such as in your comment, is also what we'd expect. Finally, throw in some tears, 'I get emotional' lines, etc. (per the NYT article), and I don't know how it can be missed.
Could it all be legit? Anything is possible - including fully autonomous cars!
The culture of Silicon Valley dictates that any performative charade be taken completely sincerely. It is just what it is. People like Holmes and SBF naturally arising from the obvious incentives are just the cost of doing business in this cultural environment.
Whether the corporate honchos are "sincere" or not is wholly irrelevant to me (and frankly unknowable).
"Think of the children" is usually a vapid misdirect, except of course in the objective measurable leading cause of death right? So in terms of issues where "something must be done", this should be objectively pretty high.
Either we drastically reduce the number of cars on the road and restructure American society around public transit (I wholly support this), or we take the humans out of the equation by making things autonomous. Or some combination of both.
I dont care if this happens under some grand socialist program if we so hate corporations/industries, but it needs to happen yesterday.
The rest is just status quo protection which is unacceptable.
In the US on average, there's a fatality every ~85 million miles driven, and that's an average that includes motorcyclists without helmets, old unsafe unmaintained cars, the worst roads, and adverse weather conditions.
Cruise has barely driven a few million miles, with new modern cars, good weather, and the ability to choose optimal roads, and yet it has already severely injured a pedestrian.
We can argue about Cruise hitting the pedestrian, but reportedly the major injuries were caused by Cruise deciding, after reaching a complete stop, that it had to clear the road, dragging the screaming pedestrian and ending up with the axle over her.
I'm not sure why you're comparing fatality miles vs no-fatality-accident-that-cruise-didn't-cause miles (i.e. we have no idea how safe Cruise would be if there were no human drivers on the road)
That's not even close to a fair comparison. We just have to admit that there isn't a fair comparison yet and everyone's just got an axe to grind.
> I'm not sure why you're comparing fatality miles vs no-fatality-accident-that-cruise-didn't-cause miles
Because it's not that everyone ignores road fatalities; it's just that Cruise hasn't driven (in terms of miles and conditions) anywhere near what might result in a fatality with human drivers.
Even then, in an incident they did not initiate, they unnecessarily made an existing bad situation far, far worse.
> (i.e. we have no idea how safe Cruise would be if there were no human drivers on the road)
Self-driving cars have to exist in a world with human drivers, pedestrians, and the rest of reality. No one cares how well Cruise does in a sterile environment.
They should not only not cause incidents, they should also not make existing incidents far worse because of terrible decisions.
“They should not only not cause incidents, they should also not make existing incidents far worse because of terrible decisions.”
Just FYI, it made this terrible decision because people were mad at Cruise for stopping in the middle of the road to decide if it was safe to proceed. They were asked to change that behavior and pull over, and they did; this time just dragging a human along.
So yes let’s set these absurdly high standards, while we leave children to fend for themselves against human drivers that have met non-existent standards on a continual basis.
But then let’s actually leave the autonomous cars on the road to test if they’re actually meeting them.
As you agreed, some statistic they figure out in a sterile or simulation environment doesn’t actually matter. Let’s put them back on the road..
This is the problem with self driving cars. A human has the awareness to pull over when it's appropriate and also is able to recognize they just ran over somebody and it's best to stop completely. But AVs seem to just have a dumb if/else statement to control this behavior (yes, I know it is actually more complex than that, I work in this space. But that is how they behave).
Driving is infinitely complex. It's becoming increasingly clear that the current approach to AVs is not up to the challenge.
A human's awareness is not constant. It waxes and wanes, even more so with cellphones in hand.
The status quo is indefensible so setting up moving unknowable goal posts for something to replace them doesn’t make sense to me.
This particular problem can be easily solved by cameras in the undercarriage to make sure there aren't humans shoved in there by other bad drivers. I wouldn't mind making that a requirement across the board and moving on to the next challenge the unpredictability of human drivers throws at a repeatable robotic system.
There is no evidence that there is a magical different approach that will work better.
> A humans awareness is not constant. It waxes and wanes, even more so with cellphones in hand.
And even with supposedly* perfectly consistent awareness, the automation still failed catastrophically.
> The status quo is indefensible so setting up moving unknowable goal posts for something to replace them doesn’t make sense to me.
AVs are not better than the status quo, making them even less defensible. A human would not have dragged that poor woman for 20 feet because it was compelled to execute a pull-over maneuver. Even an OCD psychopath knows better.
* None of these things run actual realtime operating systems with fixed, predictable deadlines. Compute requirements can vary wildly depending on the circumstance. When compute spikes, consistency drops. A robot can only approximate constant awareness by massively undersubscribing the compute budget.
" A human would not have drug that poor women for 20 feet because "
Yea, it would probably be a lot more [1]. This is from... just last week. It's a pretty constant occurrence. It's only in the paper of record because it happened in New York.
We don't have the data to claim this, this confidently, and the only way to get the data is let the experiment keep running in the real world (only place that matters).
There will obviously be holes in the awareness (literal missing cameras under the car); that's what the testing is for. If someone says they can sit in a room, in a simulation environment, and come up with all the potential crazy things humans can do around autonomous cars, they are lying to you.
To me, it's either this, or we pull all human drivers off the road, restructure our cities, and put everyone on public transit (I wholly support this).
I reiterate: the status quo is unacceptable and indefensible. The human driver who actually caused the accident has still not been held to account (and probably never will be).
P.S: I accept your point about the system being non-realtime. Though I think there are some critical safety systems (LIDAR/RADAR cutoffs etc.) that might have a real-time response?
> We don't have the data to claim this, this confidently, and the only way to get the data is let the experiment keep running in the real world (only place that matters).
How about we start with something simpler: have Waymo, Cruise and their likes produce a rigorous safety case[1] arguing why their vehicles are safe.
Once the safety case is in the open, we can also evaluate how well their systems satisfy the claims in it, and if the assumptions do not hold, we can stop the experiment.
They are experimenting on humans. The usual requirement is informed consent.
This is just... more paperwork, but sure; it's highly unlikely that these companies don't have this report built internally already. And like I said, there will be scenarios not covered by it, because we simply don't know what they are and can't think them up.
But if we're doing this, let's also make human drivers do this, and for real parity, make sure all human drivers are kitted out with all the same cameras and logging systems we ask of autonomous car companies, auto-submitted to the DMV.
Then analyze all the reports on an annual basis to see if the human and/or autonomous agent should be allowed to continue to operate on the road.
I think people forget that driving is not a right but a privilege, I agree that both humans and autonomous agents should earn this privilege.
P.S: If the claim is that a one-time DMV driving test is enough, then that should be enough for autonomous cars as well (I'm not making that claim)
> I agree, but if we're doing this, let's also make human drivers do this, and for real parity, make sure all human drivers are kitted out with all the same cameras and logging systems we ask of autonomous car companies, auto-submitted to the DMV.
Human drivers are the status quo. Once you consistently show that self driving can do better there would be a point in discussing that.
The problem is that you can't because such technology simply does not exist. There is no perception technology that is reliable enough. There is no prediction technology that is reliable enough.
To me it is obvious that Cruise and Waymo (and their likes) simply cannot withstand any serious scrutiny.
> P.S: If the claim is that a one-time DMV driving test is enough, then that should be enough for autonomous cars as well (I'm not making that claim)
The DMV driving test is just one element. We also know how humans develop and what skills they acquire and when.
We don't let them drive until they're 15-17 (depending on local laws) because they lack certain abilities earlier than that. For example, humans acquire object permanence at around 24 months.
The Cruise incident shows that Cruise vehicles lack object permanence. They should not be eligible even for a DMV appointment.
And I have REAMS of data showing how the status quo is unacceptable. Humans are impetuous, impatient, emotional, inconsistent and terribly distracted. Just a slow rolling, ongoing, widespread disaster on the road.
I have zero data (not anecdotes) showing that particular autonomous companies are somehow worse. They have object permanence, btw (occluded object tracking is a thing), just not for their undercarriage (for now).
So either let’s come up with an objective set of metrics on a set timeline for them to meet and get legalized or let them back on the road so we can figure out what those metrics could be.
When automobiles with human drivers were growing we just let them grow, default accept.
I will vehemently oppose any suggestion that now we must be default reject.
> We don't have the data to claim this, this confidently, and the only way to get the data is let the experiment keep running in the real world (only place that matters).
I don't have to prove it. It's incumbent on the AV evangelists to prove they are better. I signed up to be a driver on roads with other humans. I have zero interest in being part of this experiment. Especially not when it comes out of silicon valley.
For an industry that claims to be all about safety and fixing how dangerous driving is, I expected them to be taking inspiration from Boeing and the commercial airlines. The remarkable, steadily improving safety record of the Airline industry should have been the paragon. Instead, they've copied the move-fast-and-break things playbook from the silicon valley tech bros. Which makes all of these claims hard to take seriously.
When I have talked to Cruise engineers, they use the phrase "Move fast and break things" regularly. They have said to my face that that is their culture. They are proud of it. That kind of culture is not how you get an aerospace-like safety record.
> So yes let’s set these absurdly high standards, while we leave children to fend for themselves against human drivers that have met non-existent standards on a continual basis.
Looks like Cruise was well aware they are not even close to the average human driver when it comes to handling kids.
Independent third parties don't work for free and if you pay them (by your logic) they're no longer independent. The best you can probably hope for is a government investigation.
There are ways to do it. Non-profits don't always need payment, and their mission isn't profit. For example, companies have worked with environmental non-profits on internal climate change and other issues.
Lots of sad things happen. Why is a sophisticated public communicator taking the time to tell this very self-serving story, tear up about it, etc.? It's not incidental; he prepared it.
People like to say self driving cars are safer than human drivers - but the human drivers that tend to do the most unsafe antics seem to be the humans that are least likely to make use of self driving cars.
Seeing that they're the ones who, if there were more alternatives (like those offered by low-cost taxi services enabled by autonomy), would be less likely to have licenses, I don't think this follows at all...
Cruise is trying to save itself from getting shut down by GM. I guess it would look slightly better for optics if the GM board hired them instead of Cruise’s board. But it’s the same money, and it’s GM’s decision at the end.
The first thing? He wouldn't have mentioned it at all. He would discuss the benefits and costs, without this now-cliché talking-point framing that they repeat incessantly. See my other comment for some quick explanation of talking points.
Having to be remotely operated every 2.5 to 5 miles seems to defeat most of the economics of self-driving cars.
Back-of-the-napkin math: cars drive at an average of 18 mph in cities, so that's an intervention every 10-20 min.
Let's assume each takeover lasts 1 min, and that you need remote drivers not too far away for ping purposes, so at the same hourly rate. To guarantee you can take over all demands immediately, due to the birthday paradox, you end up needing something like 30 drivers for 100 vehicles? It's not that incredible of a tech…
Where do you get 30 drivers per 100 vehicles from?
Let's model it as every takeover lasting 1 min, and vehicles needing help 5% of minutes. Then you'd model the # of vehicles needing help in any given minute as a binomial distribution with p=0.05 and N=100, and you find that 99.99% of the time you need fewer than 15 drivers per 100 vehicles. By 20 drivers, you cover all but 2e-8 of the time (or once per century).
But that's a bit misleading. It's a small-size effect. With 10k cars, you cover all but 4e-6 of all minute periods with just 600 drivers (0.06 per car).
By 100k cars, you have 44 9s of reliability with 0.06 drivers per car.
There are some more complicated things that arise, since the distribution of how long vehicles need help will most likely be Poisson-distributed (with an average of 1 min), etc. But the point will stand: for large fleets you only need a modest margin over the average rate to get effectively complete coverage under normal conditions (see the numerical check below). It would only be in really extreme situations, like a hurricane badly messing up a lot of the Eastern seaboard or something, that you'd maybe run into issues (which, admittedly, is a real potential problem).
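A numerical check of that model; this reproduces the commenter's binomial estimates under their stated assumptions (1-minute takeovers, help needed in 5% of minutes, independence across vehicles), not actual fleet data:

```python
from scipy.stats import binom

# P(more vehicles need help in a given minute than there are drivers),
# with each of `fleet` vehicles independently needing help 5% of minutes.
for fleet, drivers in [(100, 15), (100, 20), (10_000, 600), (100_000, 6_000)]:
    p_overflow = binom.sf(drivers, fleet, 0.05)
    print(f"{fleet:>7,} cars, {drivers:>5,} drivers: "
          f"overflow probability per minute ~ {p_overflow:.1e}")
```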
Wages can fall off a cliff within modest distances. To use unemployment rate as a proxy for driver pay: Bakersfield, CA is at 7.5% and San Francisco, CA at 3.5%. Go a little farther to Las Vegas (5.7%) and one can avoid California's minimum wage.
The current taxi market is already structured that way: drivers in SF aren’t from SF. So no competitive advantage there, or not significant enough to change the game yet.
Just FYI, most autonomous car companies have backup drivers.
It's the disengagement rate that drives the number of operators you need per vehicle, and therefore the economics. Theoretically, this rate should be improving steadily at all these companies.
Cruise seems to have a bad disengagement rate right now (<5 miles seems really low), but methinks the nytimes might be partaking in some obfuscation here.
Waymo's should be much better already. Curious by how much though.
> you end up needing like 30 drivers for 100 vehicles?
What? That's literally insane compared to the current standard of 100 drivers for 100 vehicles. They're cutting 70% of the labor cost compared to uber/lyft/etc.
It's pretty reasonable to expect that this will improve over time as well. This is exactly how you want a startup to roll out a new technology.
* Build a pretty good base implementation
* Do things that don't scale for the edge cases
* Reduce the things that don't scale over time
Even if they can only improve this to 10 for 100, that's still a massive improvement.
In my area, a small, rural city, this would literally be a game changer. Right now, there's a single Uber within 15 minutes - if I'm lucky. Meanwhile, Cruise could drop a handful of cars in town, let them idle (at no cost), then pay a driver for a few minutes of intervention every now and then.
This also enables intercity transit. Most of that is highway miles. Outside of the start and end, those are easy and predictable. You could have dozens/hundreds of miles where Cruise can compete with the cost of privately owned vehicles.
Lastly, this makes it feasible for Cruise to reposition cars between cities without huge costs. Currently, that's basically impossible. Any human driven car needs to offer the driver a ride in the opposite direction.
Infrastructure cost could be considered as well. Like it or not but not every part of the world has the same desire nor infrastructure for mass public transport.
Not saying it’s right or wrong, just stating the other half of the equation
Look, isn't remotely assisted driving something unbelievably stupid?
Why should I, when I am in my "driverless" car, rely on someone else who is remote, who needs to be updated at all times about the situation (when things can go wrong in a matter of tenths of seconds while driving), who needs to react, and who is not as motivated as I am (I am risking my life, while they are sitting somewhere without as much skin in the game)?
It makes a lot more sense, then, to have just an assisted-driving car, or a semi-autonomous car where the "assistant" to the AI is me and not someone else.
In my estimation, the 1/10th-of-a-second thing isn't what the remote drivers are for. The car should just stop and avoid getting rear-ended, or dodge, or whatever. Charitably, the remote drivers are aiding where a human would need assistance too, like "what hand signal is that cop making? Do they mean go, or are they telling me to stop?" or "this light appears to be broken; is it safe to treat it as a 4-way stop?"
My Subaru can avoid accidents; it can even avoid things that 100% would not be an accident, even on black ice. So I don't think this is what the remote drivers are doing.
If humans need to remotely intervene for a car in motion, that implies it could impact safety.
If that's correct, then the remote signaling of a problem and the human's response and control must have flawless availability and low latency. How does Cruise achieve that?
Cellular isn't that reliable. Maybe I misunderstand something.
Appears Cruise isn’t giving these remote drivers a steering wheel and gas; rather they make strategic decisions: Go around this, follow this path, pull over, etc. The car is able to follow a path on its own. Determining the correct path is where it gets hard.
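A purely illustrative sketch of what such a strategic-decision message might look like, based only on the description above; the message and field names are hypothetical, not Cruise's actual protocol:

```python
from dataclasses import dataclass
from enum import Enum, auto

class Advice(Enum):
    PROCEED = auto()      # confirm it is safe to continue
    FOLLOW_PATH = auto()  # follow an operator-drawn trajectory
    PULL_OVER = auto()    # stop at the nearest safe spot

@dataclass
class RemoteAssistMessage:
    advice: Advice
    # Operator-drawn waypoints (lat, lon); only meaningful with FOLLOW_PATH.
    waypoints: list[tuple[float, float]] | None = None

# Note the division of labor: the operator picks the strategy, while the
# AV keeps low-level control and would validate any suggested path
# against its own perception before executing it.
```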
An overwhelming majority of Americans will choose 45,000 deaths in car crashes annually (last year's number) in human-driven cars over 450 deaths/year with all self-driving cars.
In the American (and probably ALL) mind(s), human agency trumps all.
This was a plot point in Captain Laserhawk: All the self-driving cars and flying drones were actually being remotely piloted by prisoners in a massive VR facility.
> Company insiders are putting the blame for what went wrong on a tech industry culture — led by the 38-year-old Mr. Vogt — that put a priority on the speed of the program over safety. [...] He named Louise Zhang, vice president of safety, as the company’s interim chief safety officer [...]
I hope Chief Safety Officer isn't just a sacrificial lamb job, like CISO tends to be.
Is the "interim" part hinting at insufficient faith, and maybe future blame will be put on how the VP Safety performed previously (discovered after the non-interim person is hired)?
> [...] and said she would report directly to him.
Is the CSO nominally responsible for safety?
Does the CSO have any leverage to push back when their recommendations aren't taken, other than resigning?
It sounds like there are a lot of people working at GM who don't like Cruise and are willing to complain to the NYT about it. One of those frustrating "we're a startup inside a large company" things.
> Cruise employees worry that there is no easy way to fix the company's problems.
> Company insiders are putting the blame for what went wrong on a tech industry culture.
What, because car companies with car company culture are doing such a great job building self-driving cars?
I'm rooting for both Cruise and Waymo here. Self-driving cars would be great for humanity. Good luck to the teams working hard to make them happen.
Here we go again with a CEO who proclaims "autonomous cars are safer than human-driven cars." And their definition of "safer" conveniently ignores that autonomous cars create new failure modes which do not exist in manually-driven cars.
It may be true that statistically fewer fatalities per mile happen with autonomous cars than with human-driven cars. But that's irrelevant. If the car kills one person because it did something utterly stupid like driving under a semi crossing the highway or dragging a pedestrian along the ground, the public will not accept it.
This is another example of the uncanny valley problem: Most "smart" devices are merely dumb in new ways. If your "smart" gizmo is only smart in how it collects private information from people (e.g. smart TVs), or it's merely smarter than a toggle switch, that's not what the public considers smart. It has to be smarter than a reasonably competent human along almost all dimensions; otherwise you're just using "smart" as a euphemism for "idiot savant." Self-driving cars are a particularly difficult "smart" problem because lives are at stake, and the number of edge cases is astronomical.
> Having to be remotely operated every 2.5 to 5 miles
Regarding Cruise's suspension, how likely is it that the backup driver restarted the car to drive again after the car stopped with the pedestrian below?
This is the same wacky theory I've been spreading about Tesla self-driving for a year or so: "Imagine Tesla self-driving is like some dude driving your car via video game on the other side of the world."
Most people are pretty sure my theory is wrong. I have absolutely no evidence this is true, it's just some crazy idea that popped into my head one day.
Yah, exactly. Even if it isn't real, the sci-fi stories you can think of are endless.
Like imagine there's some industrial block in Da Nang where there are thousands of guys and gals who think they're doing RL for some AI model somewhere. X takes a bathroom break and forgets to turn over the controls to another specialist, and when he gets back he discovers the model has crashed.
Next he reads on the news that there's been a fiery Tesla crash somewhere near Oakland, and he realizes that something is horribly wrong in his world.
We could use multiple predictive language models to determine what direction the story line takes next, but I imagine he quits his job right then and there and is determined to find out the truth behind the program.
What will happen next?
Better yet, base the story off-world so that we aren't so close to the horrible reality of it -- if this is true and it's probably not.
I get a feeling Cruise is going to get sold off within the next 5 yrs. Waymo will likely be the leading provider for “autonomous vehicle” software/hardware.
Government Motors can only sustain such a loss on their books for a short time. This is probably why Vogt has been pushing so hard for market dominance.
It's not an allegation. It's the same as using human feedback for tuning large language models. There are no autonomous cars currently regardless of what is written on the marketing brochures. In various "emergency" situations the cars phone home and ask a human operator to take over the controls.
This is completely incorrect. Remote operators cannot "take over controls" at all and hence cannot help in any "emergency", i.e. safety-critical situation (e.g. preventing a crash). All they can do is assist the vehicle with things like drawing a path to get around a parked vehicle, instructing it to do a multi-point turn when it's stuck, and so on.
What the article says is that Cruise vehicles need some sort of assistance every 2.5 to 5 miles (I highly doubt this number is accurate). Not that they’re getting into emergency situations that frequently.
Based on what I know, yes. Why would they want to do it with the latencies involved? It’s not a reliable solution, so it’s not used in any safety critical path.
> In addition to allowing emergency crews to access and move vehicles, Cruise says that it is also providing its own remote "assistance advisors" the ability to conditionally route its Chevrolet Bolts. This means that if law enforcement directs Cruise to route its vehicles away from an emergency scene, those advisors will maneuver the cars in a way that satisfies the request. The AV provider also says that it has enhanced the ability of these remote operators to clear a scene, should an issue arise.[1]
Can you explain how this supports your assertions? Because this doesn’t say they can take over control of the vehicle or prevent an emergency in the first place. They clear the cars by plotting a new path.
You seemed very certain that I was completely incorrect. My point is that you should consider that you might not have all the details and if you haven't actually worked at an AV company then you do not know what capabilities are granted to remote operators in emergency and non-emergency situations.
"Back in the Waymo office, a “remote assist driver” can view the feeds of eight of the vehicle’s external- and
internal-facing cameras and a dashboard showing what the software is “thinking,” such as if it is preparing to
stop, or the position of other objects around it. The remote drivers can monitor multiple vehicles at once. If a
vehicle gets stuck, the remote assist driver can tell the car how to drive around a construction site or some
other obstacle by using their computer to manually draw a trajectory for the car to follow."
You are still incorrect and unable to prove anything you claimed. The burden of proof is on you when you confidently say they can "take over controls".
I wasn't proving anything. The fact is there are articles explaining that remote operators can take over the controls in an emergency situation, and that's exactly what you were denying. In any case, this discussion has run its course. You can continue to believe autonomous cars cannot be remotely controlled, and I'll believe what I wrote, since I'm pretty sure it's correct. Every AV company has emergency procedures for remote takeover, and it makes sense that they would, because current ML tools and techniques are not good enough for self-driving cars and other kinds of autonomous applications.
Hmm, no. No article explains that a remote operator can take over controls. There's an important distinction between taking over and instructing a car what to do. You don't seem to get that.
> You can continue to believe autonomous cars can not be remotely controlled and I'll believe what I wrote since I'm pretty sure it's correct. Every AV company has emergency procedures for remote takeover
If your proof is "I believe these companies are lying" and nothing else, then this is not a discussion worth having.
That sounds pretty low if it's city driving or poor conditions. I know some of the trial cities are basically easy mode (wide streets, almost never snows..) but still.
First off: not even close. Waymo has a disengagement rate of 0.076 per 1,000 miles.
Second: You're shifting the goalposts from the grandparent comment's assertion that these interventions are to be expected in an "emergency", when the frequency of the interventions shows they're clearly not "emergency" interventions but part of normal operation.
UAVs don't have to deal with traffic (the thought of driving a vehicle with the latency and intermittent connectivity of my drone horrifies me) and when someone dies in a video game they respawn...
It might not actually matter. Since the car can operate autonomously already, the operator doesn't necessarily need to literally drive the car. They might simply need to hop in to verify actions in unusual situations.
I'm imagining a situation where a car comes across a parked truck on a one-way road (common in cities). A human operator comes in the loop to ensure that it's actually safe to switch lanes and pass. Check for things like emergency vehicles, unusual pedestrians, etc. They don't need to literally take the wheel, just confirm that the vehicle can take a specific action.
I thought being a social media moderator and being constantly exposed to violence, racism, and child pornography was bad. Having your whole day being a series of "quick, don't let these people die!" moments seems like the worst tech job on earth.
The title isn't news at all, as every single trustworthy autonomous driving solution MUST HAVE human operators somewhere to take over, but the actual article is a good summary of Cruise's current situation, and I'd guess the competition's as well.
> "Half of Cruise’s 400 cars were in San Francisco when the driverless operations were stopped. Those vehicles were supported by a vast operations staff, with 1.5 workers per vehicle. The workers intervened to assist the company’s vehicles every 2.5 to five miles, according to two people familiar with is operations. In other words, they frequently had to do something to remotely control a car after receiving a cellular signal that it was having problems."
Title of the post should be edited though since it's not the headline of the piece and this information, while interesting, isn't the main thrust of the article.
That's a terrible disengagement rate. Cruise claimed in 2020: "Cruise, for comparison, clocked 831,040 miles with a disengagement rate of 0.082 (per 1000 miles)" [1]. Something's not right here.
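To make the gap concrete, converting the NYT's "every 2.5 to 5 miles" into the per-1,000-mile units used in disengagement reports (using only the two figures quoted):

```python
reported_2020_rate = 0.082  # Cruise's claimed disengagements per 1,000 miles

for miles_between in (2.5, 5.0):
    nyt_rate = 1000 / miles_between  # interventions per 1,000 miles
    print(f"every {miles_between} miles = {nyt_rate:.0f} per 1,000 miles "
          f"({nyt_rate / reported_2020_rate:,.0f}x the reported rate)")
```

The two figures differ by three to four orders of magnitude, which supports the next comment's point that they must be measuring different things.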
Companies measure multiple disengagement rates for different purposes. The DMV numbers are usually safety rate numbers, as in "if a human hadn't intervened there may have been an accident or near miss". The specifics vary company-to-company, and they'll have a large document somewhere laying out exactly what the criteria are. The numbers in the article are some other metric, though I have no idea what. I'm a bit skeptical that it's the average over their entire ODD, given that it's much higher than my own experiences and most of their vehicles were running around the outer city at night, where they seemingly did okay.
It could reflect some particular ODD (e.g. downtown at rush hour) where the vehicles didn't do nearly as well, or something else entirely.
Can we just admit that this likely isn’t possible in our lifetime and put more money into early childhood education, better healthcare and geriatric care?