Cruise is opening driverless cars to the public in San Francisco (getcruise.com)
729 points by d-jones on Feb 1, 2022 | 636 comments



Based on this image:

https://images.ctfassets.net/95kuvdv8zn1v/6h1C7lPC79OLOlddEE...

They and their VC backers are clearly betting that radar + lidar + cameras will be the winning combination for fully self-driving cars, the completely opposite design and engineering philosophy from Tesla, which is attempting "full self driving" with camera sensors alone and a categorical rejection of lidar.

It is interesting to me that right now this is sitting on the HN homepage directly adjacent to: "Tesla to recall vehicles that may disobey stop signs (reuters.com)"


Cruise CEO here.

Our strategy has been to solve the challenges needed to operate driverless robotaxis on a well-equipped vehicle, then aggressively drive the cost down. Many OEMs are doing this in reverse order. They're trying to squeeze orders of magnitude of performance gains out of really low-cost hardware. Today it's unclear what strategy will win.

In a few years our next generation of low-cost compute and sensing lands in these vehicles and our service area will be large enough that you forget there is even a geofence. If OEMs have still not managed to get the necessary performance gains to go fully driverless, we'll know what move was the right one.

We shared several details on how our system works and our future plans here: https://www.youtube.com/playlist?list=PLkK2JX1iHuzz7W8z3roCZ...


It's good to hear a CEO say "we don't know the answer, but we're making a bet" rather than the typical Elizabeth Holmes style "We are absolutely correct and first they ignore you, then they laugh at you, then they fight you, then you get convicted of three counts of wire fraud and go to prison".


"This is a solved problem.", "We're light-years ahead of the competition", or "We expect everyone with existing hardware to be able to monetize their cars as robo-taxis by the end of next year" are other great examples.


to be fair, this is because Elizabeth Holmes is probably a sociopath, so you'd expect to hear something overly confident like that from her. Not that we knew it at the time.


I've met quite a few people like her and I really don't believe they are sociopaths. They say things that sound real and even convince themselves of it, but as soon as you start peeling back the details, there's nothing there. It's like you're talking to a Turing machine and you're the one failing the test. People on these forums rarely speak to these people, because we are largely scientists who love empirical evidence and work with and hang around similar people. There's a whole class of people who have learned to pattern-match talking like us without anything to back it up. See the "Comms" industry for many, many examples.


> There's a whole class of people who have learned to pattern-match talking like us without anything to back it up. See the "Comms" industry for many, many examples.

I've noticed this throughout my career as well. There is a whole class of people who go through life never actually doing anything. They just talk about people doing things. And get paid to talk about people doing things. And by the time people realize that they are just full of shit, they move on to the next place where they get paid to talk about people doing things. And they never actually did anything in the first place, so it's not like you can even say they were a bad worker.


I see you've discovered the people who work in "enterprise sales" for any niche software or technical product.


Roughly 1 in 20 folks out there is a sociopath (some quote from Jordan Peterson). It doesn't have to be full-on scheming for world domination, but it's detectable (and obvious in daily life). I am not an expert on her or the whole saga, but from my perspective many famous people show this trait. In business it's almost mandatory to get and stay on top; it certainly gives one an advantage over nice, fair, honest people.


Sociopaths are not as uncommon as people think. People assume sociopaths are "mad killers", but they're just people who don't feel remorse when, say, lying, and have little to no empathy. I've met mild sociopaths who were just bankers who would never even think about the second- and third-order effects of what they were doing. It's not some kind of rare condition.

I heard the figure was 1 in 30 but 1 in 20 is close enough.


Yes, but all of those quotes are by Elon Musk…


True. I actually replied to the entirely wrong comment, should have been one up... I have a great deal more confidence in truly revolutionary things that have been factually delivered by SpaceX (such as 100+ re-uses of a rocket first stage now), and I question how much of that is really due to Musk at all. Maybe Musk as a figurehead. I wonder if all of the Elon fanboys know that much of what's been accomplished at SpaceX is thanks to Gwynne Shotwell, or even know who she is.

Things like promising "full self driving" for 6+ years now and charging people $12500 for it leave a really bad taste in my mouth and I find it difficult to square with my overall very positive impression of spacex.


I'll bite. Say I'm a Musk fanboy... I can assure you that all Elon fanboys I know are fully aware of who Gwynne Shotwell is. Now, your turn: please explain how the following SpaceX accomplishments are thanks to Gwynne Shotwell (other than the hand-wavy "well, if she didn't get the contracts none of this would be possible"):

- Merlin engine

- Vertical landing

- Raptor engine

... or, what exactly is the "much of what's been accomplished" that you talk about? Look, I'm not trying to minimize her role, she was clearly a great COO for SpaceX, but it seems weird to me that you try to minimize Musk's importance while at the same time picking one other singular person to highlight. I could understand the argument that "it's a team effort, no one person did this alone"; but if we're picking only one person to assign credit to, then surely, _surely_ Musk is that one person, right? I understand the skepticism that he really does engineering & design, so here is supporting evidence: https://www.reddit.com/r/SpaceXLounge/comments/k1e0ta/eviden...


The anonymous “Interviewer” in the last part of that Reddit post was Sam Altman, excerpted from a 20-minute conversation with Elon Musk in 2016 [1] that I found to be interesting even as a non-fanboy.

[1]: https://www.youtube.com/watch?v=tnBQmEqBCY0


Bracing myself for the downvotes, but I suspect any wins coming out of Tesla/SpaceX these days are despite Musk, not thanks to him.


I think we'll see major players leaving this industry soon. Self-driving will be a war of attrition and thus cannot be won by US companies with their insane burn rate. Europe has engineers just as competent making a tenth of what their US counterparts do. If I were a VC I would be head over heels investing in EU self-driving tech. They are the 'cockroaches' of this tech who will survive. I can't imagine e.g. Waymo bankrolling tens of millions of dollars in payroll for years to come.


Does it really matter? If there's no Elon, there's no SpaceX. And I say this as someone that doesn't buy into the founder cult.


I'll be more impressed when he solves climate change.


You might not give him credit even if he did. We're in a thread where people are wondering whether he's not just a figurehead for SpaceX, so... what exactly are we talking about?

AFAIK Musk's involvement in Tesla was specifically to address climate change/ help move the industry towards electric cars. To the extent that this, plus improved battery tech, ends up reducing our oil dependency and eventually contributes to "solving climate change" - would you credit any of that back to him? Or just say that he didn't single-handedly solve climate change, so it doesn't count?


Not profitable, won't happen.


> If there's no Elon, there's no SpaceX.

Why not?

What mystical gift does this one person have that 7 billion other people don't, that permits him and only him to run the company? This is a really unpopular opinion on a web site that exalts founders, but I don't think it really takes much special skill to run a company. Most (but admittedly not all) CEOs are in their position not because of their know-how, but because 1. They founded the company, and happened to be the one that flipped a coin heads 20 times in a row; or 2. They were born into that Ivy League class that closely gatekeeps CxO and SVP positions for themselves; or 3. Were descendants of one of the above.

Assuming a successful CEO is uniquely skilled is like assuming a lottery winner is uniquely skilled at winning the lottery.

I think many people, if given Elon’s financial war chest and basic knowledge of and an interest in rocketry, could have made SpaceX.


Musk didn't have that much money in the early 2000s. Compared to Bezos, he was a small fish back then, and SpaceX almost went bankrupt developing Falcon 1. If it really had, I don't doubt someone would explain persuasively why it could not have avoided that grim fate with a jackass founder like Musk; but it would have been forgotten by now.

Attrition rate among space startups is insane. A lot of exciting projects like Armadillo Aerospace (by John Carmack of DOOM fame) crashed and burned. The graveyard of defunct space companies is huge.


I'm sure there are lots of other people out there who could run today's SpaceX as a space cargo trucking company. But Elon deserves the credit for creating two wildly successful companies that revolutionized their respective industries, both in the face of hugely entrenched competitors in highly regulated markets that hadn't seen successful new players in decades, and both as a side effect of his actual goal of getting humans to Mars.


The only people who were even competing were eccentric billionaires, so let's be clear that he only beat a handful of other people who even had access to attempt the business. It's not that huge of an accomplishment: private spaceflight had been theorized for a long time, but NASA sucked up all the air in the room for ages, and Elon's timing was just right. Out of the handful of billionaires working on this, he was the one who got a chance to succeed at scale.


Honestly, it's hard to read your comment without hearing an envious tone. You even admitted his timing was right; that alone takes skill. The point others are making is that there are lots of examples of failed companies, yet his have been successful. If anyone could have done what he's done, why haven't they?


Well, he was clearly competing with the faceless environment that allowed only eccentric billionaires to appear to be his only competition. If it were an open niche, it would have been filled by others. Reading other threads here, I learn that there have indeed been multiple failed attempts at space companies.

Maybe it's just a selection effect, but maybe they played their cards wisely and maybe some of the key choices can be attributed to the founder of the company that set the vision and picked the team carefully.

I'm personally not a fan of personality cults, but I don't think it's fair to swing too far the other way. It doesn't strike me as plausible that Musk is just sitting on his ass and reaping the benefits of other people's hard work, and did that successfully with at least two companies.


He wasn't a billionaire until years after SpaceX had its first major successes, and the "millionaires bad" narrative got retired once Bernie Sanders became one, so that's not going to work either.


>the credit for creating two wildly successful companies

From reading comments in other similar threads I've seen the argument that Elon did not create Tesla, so maybe it would be more honest to rephrase your "created" wording. I wonder how many people know that Elon did not create Tesla and that he is assigned the role just because of his big social media presence.


Apparently Tesla was founded (as Tesla Motors) on July 1, 2003 by Martin Eberhard and Marc Tarpenning in San Carlos, California.

It gets a bit more complicated however: Ian Wright was the third employee, joining a few months later. The three went looking for venture capital funding in January 2004 and connected with Elon Musk, who contributed US$6.5 million of the initial (Series A) US$7.5 million round of investment in February 2004 and became chairman of the board of directors. Musk then appointed Eberhard as the CEO. J.B. Straubel joined in May 2004 as the fifth employee. A lawsuit settlement agreed to by Eberhard and Tesla in September 2009 allows all five (Eberhard, Tarpenning, Wright, Musk and Straubel) to call themselves co-founders.

So I guess it depends on your definition of "created".

https://en.wikipedia.org/wiki/History_of_Tesla,_Inc.#The_beg...


Then Elon is a god; it depends on who defines what "god" means.

When Bob creates his company X and later gets some money from his dead uncle, your definition of "created" would make the dead uncle the creator of X. I really want to see this definition, but don't segfault if you can't manage it.


He didn't create it, but he can definitely be credited with its success... the Tesla that he bought was a failing company.

Do you really, really think that Musk is just a "big social media presence"? Why doesn't Trump or any of the Kardashians achieve similar feats?


Let me be super short

1. If you know Elon did not create Tesla, then why would you use the word "created" and not be precise? Even if you don't like the truth about Tesla's creation, you can avoid spreading falsehoods and having people correct you and the others you misinform.

2. If you were wrong and thought Tesla was created by Elon, then who is at fault: Elon, Elon fanboys, the Illuminati?

> Why doesn't Trump or any of the Kardashians achieve similar feats?

They probably don't care about cars and space. One dude in your list did manage to accomplish a big thing: he got elected by a large number of people.

There are people who accomplished big things whose names and faces we don't know because they are not media stars: think of the people who saved many lives by inventing medical procedures, or the ones who pushed for the introduction of seat belts in cars, or the ones who proved certain chemicals are dangerous so we stopped using them. In comparison, Elon got his hands on an existing car company and used public money and a lot of PR to increase its value. The timing is not a coincidence; only now have batteries and climate change aligned to make it possible, and remember there were electric cars before Elon appeared on the scene.


Musk set an improbable goal and he's heading toward it. The other people from the Ivy League could have done the same but didn't. He deserves some credit for that.

Another example: Cook (coming from Compaq) is perfectly able to run Apple. A lot of people could have thought about iPhones and Macs, but Jobs deserves some credit for actually starting the company with Wozniak and actually pushing it to deliver those products.

Repeat with any successful FAANG or company in general.


> Jobs deserves some credit for actually starting the company with Wozniak and actually pushing it to deliver those products.

...not to forget the period between 1985 and 1997 when he was ousted, founded NeXT and Pixar, and then re-hired to save Apple, which was on the brink of bankruptcy.


If all it took was money, SpaceX would have lots of competitors.


> Why not?

Check out how Bezos's space company is doing.


Heck, check Armadillo Aerospace if you think that Bezos is "all money no skill" and doesn't count.


To me, Musk's real gift is finding the right people and convincing them to work for him at the right time.


Elon is probably a sociopath too. Not all sociopaths are as dysfunctional as Elizabeth.


Indeed. It is very rare (one of the only cases I can remember) to see a Silicon Valley or VC-funded tech company leader who is unsure of their tech or themselves, given their absolutely optimistic nature (if you could call it that). People who are unsure of things, especially with leading-edge, unproven tech and extreme difficulties, tend to get my vote. I will now be keeping an eye on Cruise, although I still think driverless cars in mainstream use are at least another 5-10 years away. There are just too many edge cases, but I hope people continue to work on it, as it will be part of the solution to housing and property market issues.


Game theory and personality dynamics strategy apply to C-levels too! For some industries, companies, etc. you want to be a Holmes type of total confidence. It depends on who your market is, and by that I mean the VCs you want to attract. If they go for the "arrogant boy/girl genius" schtick, then that's what you do. If they want a humble intellectual, then that's what you do instead. Conversely, you may alternate between the two depending on your audience. Maybe you're humble in HN comments, but a monster in VC meetings. Look at Elon's larger-than-life "boy genius" PR persona. It works really well. He may not be a total fraud like Holmes, but his shoddy car AI has killed at least a couple of people in cases where, if the car had had a lidar-like system, that truck or whatever would have been identified instead of mistaken for part of the sky.

Also Cruise wants to license to automakers, not make their own car, so they have to act like trustworthy partners in their PR. Elon has his own car company and instead is antagonistic and belittling to automakers because he thinks there's a competitive advantage to it. Any positive sentiment towards his competitors is potentially lost sales for Tesla. Capitalism encourages zero-sum thinking and rewards zero-sum strategies.

CEOs are marketers and salespeople primarily and as such know how to play different roles for different situations. They code switch just like everyone else. The role isn't for everyone in tech because a lot of tech people don't have the people, political, and acting skills for it.

tldr; capitalism doesn't work well with honesty, in fact it works best with dishonesty. You don't have a personal relationship with a CEO or company, you're just absorbing marketing delivered via executive personalities. Personalities are perfectly valid marketing tools in capitalism. Take that as you will.


Theranos didn't use the normal set of VCs because they all thought it was a scam; they raised from some random rich people who weren't professional tech VCs instead. It's unfortunate the only thing she was convicted for was defrauding them, since being accredited investors they should be able to live with that.

As for Elon, he's currently doing a bad boy anti-government bit in an attempt to make Tesla "electric cars you can buy even if you're a Republican". Since we want those people buying EVs instead of coal rolling, that's a good thing.


>> It's unfortunate the only thing she was convicted for was defrauding them, since being accredited investors they should be able to live with that.

NO.

Investors are supposed to be able to live with all the usual risks of technology, execution, marketplace dynamics, etc.

They are NOT supposed to be OK with deliberate fraud.

If you invest in Early_Round when the tech looks promising, but then it fails to develop, the CEO truthfully tells everyone what failed and the plan to overcome the failures, and you invest in Later_Round (or don't), and it ultimately fails and you lose your investment: fine.

BUT, if you invested in Early_Round and then the tech fails to develop, but the CEO straight-up lies to you and says they are "light years ahead of everyone else", shows phony endorsements from major industry players, and more, so that you invest again in Later_Round, and then you lose your shirt - that's fraud, and all involved in the fraud should be prosecuted, convicted, and jailed.

Anything less will create an environment where blatant lying for 100s-of-$millions is okay, and that is doomed to systemically fail.


But I think the point is that she wasn't convicted of endangering people's lives, which many would consider a far greater crime than just a con defrauding some gullible marks. People depended on those tests. They made choices (such as whether or not to have surgeries) based on the results.


I agree with your point, and I definitely wonder what was the failure in prosecution that produced those not-guilty verdicts. Not only was people's health involved with the fraudulent testing service, but the healthcare consumers did not in any way sign up for that.

The specific comment that I was responding to seemed to say it should be okay to defraud accredited investors because they should be "able to live with that".


If you're selling a pump and dump like crypto or Uber, you want the CEO to lie to you because it shows he's good at lying! Then you all go out and lie with him, then Softbank gives you a billion dollars for no reason.


Tim Draper wasn’t random.


How many other SV VCs joined Draper? FWIW, I heard that Ellison also put in some early money. If that was it, will you agree that "mostly outside of the usual SV crowd" is accurate?

How many rounds did Draper participate in? If Draper stopped after the first round or two, then it "got some early money from SV but everything else came from outsiders".


Good one!


An extremely wise set of decisions. I also see no reason, a priori, to 'blind' a vehicle to certain spectra of EM emissions; nor to accept that only passive sensing (cameras) can be used, when Active Sensing (probing, if you will) that Radar and LiDAR use is clearly giving the control computers more information about Reality (tm).

Using all (or almost all) available Active and Passive sensing technologies, fused with geofencing and operating at 'low-ish' speeds, surely must be the fastest way to achieve 100% accident-proof self-driving vehicles that operate on ordinary city streets. Congratulations, Cruise. Keep up the Good Work.


One argument would be that once you have many vehicles operating with LIDAR, it's unclear which systems are sufficiently robust against being disrupted by interference from other systems. Same with RADAR - while this is not new technology, we've never really had a regime with potentially dozens of systems operating in close proximity.

For all Tesla's problems, the automation-via-cameras solution is the one I find myself having the least problems with: using a single, obvious input (to humans), you don't wind up in a situation where you can have multiple differently-capable systems disagreeing on what they're seeing.


Generally, you solve this problem by using different (randomized) wavelengths, modulation (e.g. pulsed in a pattern), or if interference is inevitable, do something like WiFi or BLE does. It's not a big problem in practice.
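
Roughly, the idea in toy form (my own sketch, not any vendor's actual implementation): each sensor correlates returns against its own pseudo-random pulse code, so pulses from another emitter mostly wash out.

    import numpy as np

    rng = np.random.default_rng(0)
    code_a = rng.choice([0.0, 1.0], size=256)  # "our" sensor's pulse pattern
    code_b = rng.choice([0.0, 1.0], size=256)  # a nearby vehicle's pattern

    received = np.zeros(1024)
    received[300:300 + 256] += code_a          # our true echo, delayed 300 samples
    received[80:80 + 256] += code_b            # interference from the other sensor
    received += rng.normal(0.0, 0.2, size=received.shape)  # background noise

    corr = np.correlate(received, code_a, mode="valid")
    print("estimated echo delay:", int(np.argmax(corr)))   # ~300; interferer rejected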

I can only suggest you think harder about the 'all cameras' approach. Imagine say a snowstorm. Hail. Rain. Ice sheets. Sun in your 'eyes'. Cameras, basically, suck. Elon's game is to use suck-tech and make it un-suck with computons. Bad Choice.


> I also see no reason, a priori, to 'blind' a vehicle to certain spectra of EM emissions

The reason is simple: cost. The goal isn't to build a proof-of-concept safe AV, it's to build one that meets the safety bar _and_ is as cost-effective as possible, in a reasonable timeframe.

I happen to agree with the target-then-scale approach, but I also agree with Kyle that it's not a given that this approach is definitely superior to the one that launches everywhere and tries to improve functionality.


Sure, but we already know that humans are not very good drivers (car accidents are the #1 or #2 cause of death for age groups between 5 and 50). If you can do better than humans with more input, that is a compelling reason to use more input, even if you could do just as well as humans with a cheaper system.


I agree, but was narrowly addressing the claim " I also see no reason, a priori, to 'blind' a vehicle to certain spectra of EM emissions". Cost is the a priori reason, albeit one that is potentially balanced by others.


What’s your view on whether self-driving cars should automatically be 100% liable for any accidents?

I ask this in the context of machines being governed by classical deterministic physics, so there is an argument that there is no such thing as an accident involving a self-driving car: only a design flaw.

This is a genuine question, as I can see that companies with self-driving systems that work, and who do serious fault analysis and rectification, might be in favour of 100% liability. 100% liability would stop cowboys from entering/surviving in the industry and sullying the reputation of self-driving. A company’s system would have to perform well enough that any residual risk of injury could be covered by an affordable insurance policy.


If you listen closely, you can hear Cruise's legal team shouting, "No you can't make a public comment on what level of liability you think we should accept" no matter where you are in the world!


Of course, knowing that self-driving cars are 100% liable would incentivize some people to attempt to be hit by one of these vehicles for a payout. A more realistic level of liability would be for 100% liability for accidents resulting from an "unforced error".


I still think the cars should anticipate risks and behave accordingly. Out and out fraud aside, they should basically never injure anyone.


There's always some kind of risk. The bridge you're on could fall down. Somewhere, someone will have to judge whether a bad outcome was a failure.


Well, as a pedestrian or cyclist, I'd like to make that risk judgment, not the vendor of the car that the person passing me bought.


I agree

For one thing electric cars are too quiet.

There should be a law that someone with a bell has to walk in front of the car to let people know it is coming


Well, a noise-making device has already been mandated because the startup car company refused to install one voluntarily. It hadn't been formalized before because every other brand has basically had one forever.


100% liable as drivers.


The most common cause of motorcycle injuries that make it to the hospital (and statistics) is someone turning left in front of them in an intersection where they have right of way.

Not quite this, but you get the idea:

https://nypost.com/2021/05/20/motorcyclist-rider-survive-hor...

I couldn't find the clip of a motorcyclist patiently stopped at an intersection getting taken out by an out-of-control left turner. Lots of fully legally stopped vehicles get hit.


How does one take into account lack of maintenance by the end user in a strict 100% liability situation?


This problem should be solvable in software. The car can simply refuse to operate in a situation where maintenance is required.


In the general case it's impractical for electronic sensors to accurately measure the mechanical state of the vehicle. How do they tell us the suspension is rusted out and about to break? (In theory you can play some clever tricks with eddy currents or something but that's not going to be feasible for real world sensing.)


That is not much of a problem in practice. A 'rusted out' suspension doesn't happen overnight. There could be regulatory requirements for self-driving cars to be considered 'streetworthy'. Out of compliance, robotaxi disabled.

The tricks you mentioned are already used for some aircraft inspections.

What the software needs to worry about would be other types of failures. Software is much more likely to detect issues before the driver. Say, brake performance is outside the expected range, or appears to be degrading too quickly.


How do you know maintenance is required in a completely automated fashion?


My fear is that car manufacturers will turn cars into a totally dealer serviceable only thing (even more than they are now), like the car version of the glued shut Microsoft surface that gets a 1/10 on the ifixit repairability score.


In the case of Cruise this wouldn't be a problem, because you wouldn't own the vehicle. It's a robotaxi service. Your point is still valid, though I'd ask: how do you even solve certain classes of issues? Let's say you had to replace a camera. You can't just plop one in and have it work. There is a ton of complex calibration work that needs to happen, both intrinsic & extrinsic.


Service intervals based on time and usage, combined with certified repair. From a passenger's perspective airlines are strictly liable, but presumably airlines could then sue the relevant third parties in such a case. I suspect a similar model could work fine for self-driving cars.


Put an RFID tag in the tire and store how many rotations that tire makes over time. Once it reaches a threshold, refuse to spin that tire further.
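
As a toy sketch (threshold and tag ids invented for illustration; real wear depends on more than revolutions, as the reply below notes):

    # Count lifetime revolutions per RFID-tagged tire; refuse to dispatch past a limit.
    ROTATION_LIMIT = 40_000_000  # made-up figure, not a real service interval

    rotations = {}  # tag id -> lifetime revolution count

    def record(tag_id: str, revs: int) -> None:
        rotations[tag_id] = rotations.get(tag_id, 0) + revs

    def serviceable(tag_id: str) -> bool:
        return rotations.get(tag_id, 0) < ROTATION_LIMIT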


Tire wear is a complex interaction of various factors such as compound, slippage, road surfaces, torque, weather etc and not just wheel revolutions


I wonder about that. The top maintenance issue that comes to my mind is sufficient tread on tires. Bald tires will still work great on dry streets but as soon as it starts raining, you start skidding. I honestly don’t know if software could intervene quickly and reliably enough there.


The software could require a trip to the dealer for a visual inspection of the tires at set intervals. Hopefully free of charge for something so simple. A quick hookup to the computer and the interval is reset.


Tesla vehicles can detect and notify the driver of tires with low tread remaining. It's detected by a delta in rotation speed between other tires and the tire needing replacement. Seems like a software implementation is straightforward.

https://driveteslacanada.ca/software-updates/your-tesla-can-...
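
Roughly the comparison being described, as a toy sketch with illustrative numbers (not Tesla's actual logic):

    # At the same road speed, a tire with less tread (smaller radius) spins slightly
    # faster than the others; a persistent ratio between wheel speeds flags it.
    def radius_deficit(wheel_rpm: float, reference_rpm: float) -> float:
        return 1.0 - reference_rpm / wheel_rpm

    # e.g. one wheel averaging 601 rpm while the others average 595 rpm
    print(f"{radius_deficit(601.0, 595.0):.1%}")  # ~1.0% smaller effective radius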


That doesn't help with end of service life, it helps with uneven wear.

If all the tires wear evenly this detection won't help.


Was editing my comment while you were commenting (removed service life, as software won’t detect dry rot or other defects undetectable from wheel speed measurements). Assuming tires wear evenly, you could still detect the change in rotation speed over time due to tread wear.


You can detect 3mm radius decrease? I would say no.

Remember that wheel slip depends on surface properties.


3mm is about a third of total tread depth, and 1% of the tire's radius. Why wouldn't this be detectable? ABS sensors tend to have 48-tooth tone rings and there's no reason why you couldn't vastly increase this number if you wanted.
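
Back-of-the-envelope sketch of why it's countable (illustrative numbers only):

    import math

    radius_new, radius_worn = 0.318, 0.315  # metres; ~1% (3 mm) difference
    teeth = 48                               # tone-ring pulses per revolution
    distance = 1000.0                        # metres driven

    pulses_new = teeth * distance / (2 * math.pi * radius_new)
    pulses_worn = teeth * distance / (2 * math.pi * radius_worn)
    print(round(pulses_worn - pulses_new))   # ~230 extra pulses per km on the worn tire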

Longitudinal tire slip is caused by thrust in excess of the tire's grip, which is a function of slip speed among many other things. Grip peaks with a mild amount of slip, but slip isn't the norm outside of racing. Mild acceleration produces zero slip.

Lateral slip angle is a different story.


The slip is only 0 when free rolling by definition of the rolling radius.

You also have the rolling radius depending on load (car mass) and tire pressure.

I'm not saying I know it's impossible, but it feels like there is way too much noise.


I think you can. Remember, you have a lot of time to work with. The car already knows when wheels are slipping, so it can discard that data. You just pick a time when you are going straight at a consistent speed on a nice, somewhat flat, dry surface and measure then. Cars do that all the time; even on curvy mountain roads you will find plenty of such stretches, and you only need to measure every few hours.


It does if you have a GPS speed input. In the race car DAQ I use, I can see speed differentials due to tire wear.

A typical 17" tire has a radius of 318 mm and 8 mm of tread. So a bald tire is 2.5% smaller than a new one.
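
As a sketch, with numbers picked to roughly match those figures (not measured data):

    import math

    def rolling_radius_mm(gps_speed_mps: float, wheel_rpm: float) -> float:
        omega = wheel_rpm * 2 * math.pi / 60.0  # wheel angular speed, rad/s
        return 1000.0 * gps_speed_mps / omega

    print(round(rolling_radius_mm(27.0, 810.0)))  # ~318 mm: new tire
    print(round(rolling_radius_mm(27.0, 831.0)))  # ~310 mm: worn by ~8 mm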


In a world where most dealers and manufacturers want you to pay a subscription for everything, it isn't likely to be free of charge.


That seems like bad value when you can look at your own tires. (And should in case they get a sidewall bubble.)


This is what service intervals are for - your car likely requires a service every 12 months or XXXX kilometres, whichever comes first. The service doesn't just include actual work on the car; it includes an inspection of the lights, tyres, etc., and a report to the owner saying "tyres need replacing in the next couple of thousand Ks, they're almost at the wear indicator".


I've given it some thought, and I think the SDC manufacturer must be liable for any accidents the SDC causes. Who else is there? The passengers certainly can't be responsible for any programming or manufacturing errors.

There are corner cases and exceptions, but that has to be the rule.

Which should mean that as an SDC owner, you don't have to pay for car insurance.


>the SDC manufacturer must be liable for any accidents the SDC causes.

There's the rub... How do you separate those from the rest?


We already have a system that relies on assigning fault among the multiple parties involved in an accident. This same approach applies just as well to SDC accidents. It would be even easier than the status quo, given the much richer data SDCs could be regulatorily mandated to provide.


If there is disagreement between the relevant parties, through the court system (or through insurance agreements).


The whole event will be recorded by at least the SDC vehicle.


When you say accidents, do you mean ones where the robo car erred? Asking because of course it's possible to get into an accident where you are not at fault, and I think this would also be true of a robocar.


My conclusion from years of self-driving, LIDAR, etc. research is that managing medium to heavy precipitation reliably might be impossible.

Visual algorithms run into the same problem as human brains, and the size of e.g. rain drops interferes with the frequencies employed by radio techniques.

Is anyone aware of any strategies that give us hope in solving this problem?


Though ... how good a job are humans actually doing in heavy precipitation? I know that under normal circumstances our brains constantly do a bunch of work to create the illusion of a comprehensive high res visual field even though we really only have detail at the fovea. When it's raining heavily, and we think we can see "enough" to drive ... are we right? Or are we just lucky and pedestrians and cyclists are more likely to be off the road at those moments and so accidents increase but not to the point of disaster?


Agreed - I think the "but can it drive in a whiteout blizzard" question is best redirected to: "can a human?"

I suspect there are operating conditions the AVs won't solve acceptably - some of those conditions IMO are also conditions where we should not accept a human to solve acceptably. In general I feel we have a lackadaisical culture around driving that encourages/excuses unsafe behavior, and is overoptimistic about people's ability to drive well.


I grew up in a part of Ontario where total whiteouts can happen fairly frequently on both major highways leading in and out of my small town. It is in fact possible to drive dozens of km in near or total whiteout conditions simply by following the hazard lights of the car ahead of you. You very frequently will see lines of cars kilometres long, all going <20 km/h, white-knuckled and crawling home. Maybe one in every couple thousand goes into the ditch.

I don't think vision-based FSV will ever reliably handle winter conditions like this. The engineering and QA effort just isn't worth the cost-benefit when you factor in the very small number of drivers who are consistently exposed to conditions like that. My father, who spent his career commuting to the city on that highway, was disappointed when I explained this to him.


I was once in the passenger seat in a downpour. My father was driving by the nav, and it seemed like we were traversing the Mekong underwater. It was complete instrument-driving conditions, except at most a foot of road markings was visible. The car was on local roads. He made cautious turns and drove slowly, because it was obviously scary. Suddenly the nav said "Ding! You have reached your destination" in what seemed to be the middle of a road, and we immediately started making noises at the nav.

Then a person knocked on a window through the brown wall. It was someone we were supposed to meet at the destination. He greeted us and told us to come out. We tried to explain that we couldn't just walk what was perhaps a quarter mile to the place in this heavy rain, leaving the car at the roadside. He insisted it would be a short walk and gave us no choice. Only when we stepped out did we realize that the car was right in the middle of the premises we were looking for, just a couple of feet from the main door.

This memory surfaces in the context of human drivers and inclement weather; I'm still in one piece, but maybe that has more to do with my luck than with me playing every game extra safe.


Humans certainly don't reliably handle these conditions. ;)

It does seem like "something else" is needed for these kinds of low-visibility scenarios -- frankly, when nobody should be on the road.


The reason all that works is people drive to what they expect. In such conditions you might hit a human standing in the road, but no human would be there in the first place, only other cars with flashing lights. As such, so long as you stay in the correct lane for your direction of travel and go slow, you don't need to see, because there is no real danger most of the time. Most of the time...


The worst is when one goes into a ditch and the car following them follows into the ditch because their main indicator of where to go was the running lights and tire tracks of the car in front.


Where I grew up people would go together off the side of the mountain this way in very heavy fog.


rumble strips are amazing in whiteouts. it's a relief when you hear them, because you now know where the side of the road is...


people are actually pretty good at driving in blizzards in locales where they happen often. snow tires (and possibly chains), good clearance, and great caution can get you a long way.

obviously you try to avoid driving in these conditions when possible, but sometimes a moderate storm is much more intense than forecast and you get caught out. pulling off to the side of a snowy mountain pass doesn't guarantee your survival either.


I know next to nothing about lidar engineering but 60GHz band radars can still function out to several hundred meters in rain. It is significantly attenuated as the rain rate (in mm/hour) increases, but it takes a lot of rain to make it completely useless.
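
The usual engineering shorthand is a power-law specific attenuation, gamma = k * R^a in dB/km; the coefficients below are placeholders I picked purely for illustration, not looked-up ITU values for 60 GHz.

    def one_way_rain_loss_db(rain_mm_per_hr: float, path_km: float,
                             k: float = 0.9, a: float = 0.8) -> float:
        # gamma = k * R**a (dB/km), integrated over the path length
        return k * rain_mm_per_hr ** a * path_km

    for rain in (5, 25, 100):
        print(rain, "mm/h ->", round(one_way_rain_loss_db(rain, 0.3), 1), "dB over 300 m")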


This depends on how powerful your antennas are; $200 WiGig transmitters will struggle to get much range at those distances.


And how directional and narrow the gain pattern is.


Summers in South Florida will put that to the challenge.


People can't see in that weather either


This has been my argument all along. People are driving in unsafe conditions when they should be stopped.


In Florida, I will say, when it truly pours like that people do tend to drive extremely slowly and turn on their hazards. Also, sometimes these storms just appear out of nowhere. Over the summer I was driving from the Vero Beach area to Fort Lauderdale; on the way to Vero Beach, clear skies, and on the way back there was an ENORMOUS storm that flooded streets and you couldn't see crap. It just happens.


Moreover, contrary to some other commenters:

Unless it truly is "once-in-a-lifetime", or very brief, you cannot just stop and give up. To be a viable replacement, a vehicle must (as a human driver would) continue to make progress even in extremely adverse conditions. The progress might be much slower than usual, might be a re-routing (back up away from floodwater and go elsewhere), etc.


just because you can't write code to handle it doesn't mean a human can't. thousands and thousands of people drive in rain, sleet, snow and hail daily and do it just fine.


Humans do not do just fine - look at the statistics of car accidents. Humans refuse to admit how bad they really are at driving.


Airplanes couldn't fly in inclement weather for decades. Took a while but we solved it.

"Oops it's pouring, can't get a Cruise, gotta fall back to an Uber" doesn't sound that terrible to me, for now.

Cruise could even offer a product that compensates/insures you for that eventuality, if for example Cruise was your primary vehicle.


But if self-driving succeeds then there won't be enough gig-drivers ready-to-go to cover a self-driving outage.


Taxis, available with e.g. a simple phone call, have existed since before telephones and cars really, and they still exist now even as ride-sharing has taken over. They will exist as Cruise rises.


They exist because they're a viable business because there is enough approximately consistent demand. That's not guaranteed to continue.


There might be taxis, but not enough for everyone who wants to catch one. It’s already difficult enough on rainy days.


In addition to going into highly dense cities and inserting autonomous cars into existing driver regulation, an interesting auxiliary strategy could be to partner with a master-planned community designed from the ground up (physically and regulation-wise) to be an autonomous-first town, where the majority of vehicles are autonomous and the majority of homeowners are pro-autonomous-car.

The roads and pedestrian crossings could be much more clearly marked, with RF transceivers, etc., and inclement weather could be considered up front. The HOA agreement could have an "I agree to co-exist with autonomous cars" TOS clause and perhaps a built-in monthly subscription.

I think a ton of home builders (Lennar, etc.) and senior community developers (Ventas) would be interested if only as a PR concept. I also think a lot of remote Techies/senior citizens would be interested [1]. Sort of like this but replace golf carts with autonomous cars.[2]

[1] https://news.voyage.auto/why-retirement-communities-are-perf...

[2] Tom Scott - City of Golf Carts https://www.youtube.com/watch?v=pcVGqtmd2wM


Cruise acqui-hired Voyage, which was basically working towards this. I don't think they've done anything with it since, though.


Congrats on your incredible accomplishment! Thanks for doing this the responsible way. Tesla's approach does not inspire confidence. Starting at the high end, with expensive, reliable tech and slowly bringing the costs (and bulkiness of the equipment) down is the right approach!


In my experience, expensive doesn't necessarily mean more reliable; it could just mean higher fidelity, higher resolution (and possibly less reliable due to the use of parts produced in smaller volumes on the global supply chain), etc.

This improved resolution doesn't necessarily help an AI grok the situation in real time at a >20 Hz response rate, though.


I think the overall sentiment is more "let's avoid premature optimisation" than "let's spend the most money".

If you have pre-sold a 'self driving' capability which you have guaranteed to be backwards compatible on cars you have already sold, then you are effectively cutting out Lidar as an option unless you are going to go back to all those cars and screw it on.

And considering that self-driving isn't solved yet, it seems like a bold move to define both your processing power and your sensing hardware in a way which makes it very difficult to (commercially) change.


"Today it's unclear what strategy will win"

Thank you for saying the honest, obvious answer. I am tired of people claiming to know the implementation details of a technology that does not yet exist. As a nobody retail investor, I have long positions on autonomy (Tesla, Nvidia, GM/Cruise, Google), not specific takes on it.

In fact, I think the radar/vision debate is not going to matter long term, as there can be multiple winners and the tech will likely converge.

https://www.greennewdealio.com/transportation/teslavswaymo/


The challenge I like to bring up is construction zones. How will cars cope when a road is unexpectedly under repair? When traffic is taking turns sharing the left shoulder with a flagman directing you?

Some people I've talked to insist that an up to date map is "all that's needed" and that all such projects will need to be put in the system. Haha, a water main broke and they think people are going to update a database for them?

A traffic light is out and the police are directing traffic at an intersection. This will happen inside any given geo-fence eventually.

The list goes on... forever. Tell me how self driving cars don't need full AGI.


A decent chunk of this list can be handled by the car coming to a safe stop and signalling that it is unable to proceed and you need to navigate the situation.

I suspect a lot of these could also be handled by a remote connection where a human is given the camera input and can indicate how the car should proceed (e.g. a broken water main is a road obstruction that won't clear, and the obvious answer is a manual override to mark the road as unusable so the nav system reroutes).


Let's hope the only passenger has a driver's license and isn't drunk or having a medical emergency.


In both those cases they wouldn't be able to drive anyway, and the result is not more dangerous or worse than the alternative.


>> In both those cases they wouldn't be able to drive anyway...

Well that's not even a robo-taxi then.


Both Cruise and Waymo have remote operators who can direct the cars when they phone home.

Here's an example Vogt discusses a bit: https://youtu.be/sliYTyRpRB8?t=202

...of course this brings up many other problems, like network connectivity and inter-city transport, which the companies have as far as I know not commented on. IMO the sensible solution is obviously to just require passengers be able to take over if given plenty of warning, but for whatever reason Cruise isn't doing this.


Been so impressed with the Cruise approach. No hype, no promises, just keeping quiet, working hard on a very hard problem until it’s ready to launch. Congrats to everybody who’s been a part of this.


Well to be fair they did get acquired and get access to a bunch of resources allowing them to fully execute. A lot of the hype machine is a result of the necessity of getting access to those resources. It’s just a difficult situation.


Congrats on this huge milestone!

So refreshing to see a leader in this field say “we are not sure which one will work out” rather than just hyping their stuff.

Can I get a test ride soon?


Tesla limited themselves to cameras because Musk said "humans can do it with two eyes". He also didn't like the look of LiDAR on cars. Such an idiotic decision. Good to see Cruise is not led by a megalomaniacal CEO.


As of a few years ago, lidar added at least $7500 to the cost of a car. That's a huge price difference for a consumer.


Currently Tesla is charging $12,000 for access to their self-driving package. Even if we assume that the price would increase to $19,500 if they included a lidar (I'm skeptical), it would be the difference between paying $12,000 for a feature that doesn't work versus $19,500 for a feature that might work. This is a luxury option no matter which way you swing it.


Definitely a luxury option for showing off at this point, as a status symbol; lots of people out there daily-drive cars that don't have a Blue Book value anywhere near $12k for the entire vehicle.


> lidar added at least $7500 to the cost of a car

That was ages ago in the self-driving world. Now it costs only a few hundred dollars, starting at $99.

https://velodynelidar.com/products/puck-lite/


The problem isn't how much LIDAR used to cost or costs now. The problem is that customers paid for a product and they still don't have it, many years later. And what is being shown nowadays is nowhere close to what was advertised.


Cruise is a ride service - they don't sell cars. So the actual question is: How much does it cost to pay an Uber driver over the lifetime of a car?


A more accurate summary of Tesla's position is that they believe the incoming data from different systems (lidar, radar, visual, etc.) must be merged, and very often the data is contradictory.

Resolving that correctly takes time (in ms), adds complexity, and will sometimes be judged incorrectly.

Since the visual data is the more accurate one the vast majority of the time, it will take precedence over the other inputs anyway. As humans have proven that visual is technically enough, they decided it makes more sense to squeeze the most out of the visual data rather than collecting other data, crunching it, and then (in most cases) discarding it.
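
To make the "contradictory data" point concrete, here is a toy log-odds fusion of two detections (purely illustrative, nothing to do with Tesla's or anyone's real stack):

    import math

    def logit(p): return math.log(p / (1 - p))
    def sigmoid(x): return 1 / (1 + math.exp(-x))

    camera, radar = 0.10, 0.80  # per-sensor P(obstacle ahead); they disagree
    fused = sigmoid(logit(camera) + logit(radar))  # naive independent fusion
    print(f"camera only: {camera:.2f}  fused: {fused:.2f}")  # 0.10 vs ~0.31
    # Deciding whether ~0.31 warrants braking is exactly the arbitration that
    # costs milliseconds and occasionally produces a wrong (phantom-braking) call.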

I am not sure they are right, and am pretty sure that even if so - they need better cameras.

But misquoting them doesn't really help your argument.


I believe those arguments are simply justifications for the fact that most people won't be able to afford Teslas with Lidars


Amazing! I've been following Cruise for a long time; those videos were so funny. Keep on going and conquer the world!


Will Cruise eventually be available on existing TNCs and other MaaS platforms? Or is the play here to create a new vertically integrated taxi service?

If you've read Dan Sperling's Three Revolutions, any thoughts on what kind of transportation future (https://www.planningreport.com/2018/03/21/dan-sperling-three...) you foresee Cruise contributing to building?


You didn’t need to, but you decided to show up on HN to clearly articulate your strategy, so I applaud you for this.

> Our strategy has been to solve the challenges needed to operate driverless robotaxis on a well-equipped vehicle, then aggressively drive the cost down.

There are broadly two ways to achieve your desired outcome of aggressively lower costs:

1. use money raised from VCs to subsidize the final cost of the product, or;

2. use money earned from customers as a natural consequence of growing demand for your product, in spite of strong competition from established OEMs, to fund your expansion.

Historically, the former has a lower likelihood of success relative to the latter and that’s because the former is really just a cash transfer from VCs to consumers. The latter is how Apple and Tesla have been able to grow into what they are today.

The reason the 2nd kind is so effective is that, when executed correctly, it often leads to a virtuous cycle: your growth will lead to steadily increasing order volumes with your suppliers. This will in turn lead to sourcing more suppliers to keep up with your growth. At a certain point, a supplier will feel confident that you are here for the long haul, causing them to take on more risk by pouring additional capital into their business to expand capacity. This will improve their ability to accommodate your current and future needs quickly, cheaply, or both.

In other words, reality is multidimensional. It is rare for an individual company to aggressively drive down the costs of its product single-handedly, unless that company is ready to assume an enormous amount of risk currently being borne by its ecosystem of partners and suppliers.


I expect the media-led zeitgeist to slime you on a few fronts:

1. AI/automation/tech bros undercutting the working class.

2. The mortal danger of self-driving cars to pedestrians and the public - perhaps with an AI bias/racism zest.

3. The exclusion of people who damage the cars or are otherwise unprofitable (via price, location availability, or outright exclusion) being harmful.

4. The proliferation of self driving cars reducing public transit use, thus reducing public transit investment, reducing transit access for poor, increasing pollution, and clogging roads.

5. Something something self driving taxis are subsidized by the government via public investment in roads.

All of these arguments are bullshit and I am not excited to hear people recite them to me in 5 years.


All-electric fleets of safe, non-honking AVs that are fine with whatever routes are required of them and go to designated areas to park and charge are going to make our downtown areas so much better.


The urbanist PoV is that anything car-shaped is bad for a city and it doesn't matter how smart it is. The proper answer is micromobility, aka ebikes and smaller vehicles that don't need to ever go highway speeds.

Full size EVs are still bad for air quality too because of tire dust.


Full-sized EVs have their place in cities. However, it isn't mass transit; the train or bus is for getting people around. EVs are for getting goods and maintenance tools around, which is a small minority of traffic in most cities.


Just because you don't want to think about externalities doesn't mean they don't exist. There are a lot of strong ideological assumptions you have to make to handwave all these away as "bullshit".


From personal experience I do not see how they are ready. They actively avoid the rules of the road and engage in dangerous driving actions because the car "sees" an obstacle or warning. For example, when a car is double-parked, the self-driving vehicles will swerve into the opposite lane and in some cases almost hit another car, bike rider, or person.

When at stop signs they will sit back and wait even though it is their turn. At times they will slam on the brakes, causing rear-end accidents, because the car saw a bird or reacted to steam from the ground.

Please talk with your legal team about embellishments made in insurance claims against other drivers.


How long ago did you see Cruise cars doing these things?


Unless you have a conceptual AI with a causal, systems-level understanding reacting in real time to a spacetime model of the world based on current and recent events, people are going to get injured and/or killed by unusual real-world events riding in these autonomous cars. Although cats and dogs have great perception, we don't let them drive our cars for a reason.


> people are going to get injured and/or killed by unusual real-world events riding in these autonomous cars

I don't think anyone inside or outside of the AV industry is expecting that there will be zero injuries or fatalities involving AVs. Why would that be the bar, when AV rides displace human drives that already injure and kill tens of thousands?


The difference would be frequency of injury and/or death as AV cars can't think and react dynamically to non-pattern situations.


Sure, and they can't get drunk or fall asleep either. Even taking for granted that AVs can't find an operating domain in which "non-pattern situations" are covered by failsafes, it's far from obvious that the net advantage goes to human drivers.


Does Cruise plan to try to compete with Uber, Lyft, etc.?

I feel like this tech could have a massive social impact if you sold it to local governments so they could offer a highly efficient, subsidized robotaxi service to their residents. It would democratize access to transportation and enable so many classes of underserved people to gain access to reliable transportation.


I'm nowhere near as experienced as you and your team, but that is what I was thinking as I read this. Tesla rather quickly went from the expensive sensors to the camera-based setup they have now, and it'll be interesting to watch how all this unfolds, safely from my 2006 vehicle with nearly no computers.


There's absolutely nothing stopping Tesla from adding back lidar or other sensors if the technology becomes cheaper or it turns out visual-only isn't accurate enough. Elon also has other advantages that no other company is anywhere near competing with, and he clearly understands this; he understands his position well, and it's strong. He's also a very agile entrepreneur/engineer and not afraid to pull the trigger on whatever ideas come to his attention as being the best decision. He's also already succeeded in Tesla's mission, which was to get other vehicle manufacturers to transition to EVs, so anything else after that is really just icing on the cake; Tesla stockholders, however, still believe strongly in him, and I'd argue rightfully so.

For now, by using the cheapest technology he's arguably selling more EVs and/or making more profit per vehicle. If the market's competition requires a course change, then I don't see why he wouldn't take it; I don't think he'd fall prey to the sunk cost fallacy. The reasons for decisions may not be obvious to the public either, as we likely don't know the details of his nuanced master plan.


what's stopping them is fsd has been promised on all these cars without those sensors. taking that away would likely mean a lawsuit


A promise isn't a contract, so whether it's actually guaranteed in the language of whatever agreements were signed will be the determining factor.

And the automotive industry has functioned on risk-benefit-cost analysis since its existence: if the cost of a future fallout is less than the short-term benefit, they tend to decide for the short-term benefit. Most disgustingly, this applies to known problems with vehicles, where recalls only happen if the expected cost of the resulting harm and deaths exceeds the cost of replacing whatever needs to be replaced. I'd hope that practice has greatly improved, but who knows; most of our government agencies seem captured by industrial complexes.


No, courts look at the spirit of the language and the letter of the law. The letter takes precedence only when it is clear that the two parties are not intending to defraud each other and there is just a misunderstanding. If the court decides both parties had a different understanding of the contract than the letter, then what they understood is what is used. As a lawyer in court, your job is to make the court believe that what you understood the contract to be about is what they should use; if the letter supports you, then you yell that, and since the letter is easy to prove while a shared understanding that differs from the letter is nearly impossible to prove, the letter normally wins.

Marketing is admissible in court as evidence of the intended contract. Since marketing is generally easier to understand than the legalese, if the court decides the marketing is misleading they will tend to punish you for that and accept the marketing as the shared understanding over whatever the letter of the contract says.

Note that I used a lot of wishy-washy words like "tend"... Each court case is different, and there is no real rule for what courts will do in any given situation. Consult a lawyer for legal advice about your specific situation.


What does the word “aggressively” accomplish here? You’re talking about a future hypothetical, so why bother?


Does your car stop at stop signs ?

Because at least one of y'all have to for this to work.


I'm assuming "well-equipped" means more capable data processing and higher bandwidth needs; is this the case?

If so, do you have a sense for how many orders of magnitude more bits of data your sensors are acquiring versus Tesla?


thanks for pushing the frontier of self-driving cars and articulating this strategy.

historically, the pattern in tech is to succeed with strategy 2 -- that is, ride moore's law and achieve exceptional performance by combining commodities into super systems. google server farms are the canonical example.

obviously, this is only a pattern and not a law.

tesla's pathway represents strategy 1: start with super machines then drive costs down.

for non-SDC experts like me, could you share why it felt more compelling to start with super machines then drive costs down?

excited to see cruise help lead society into the future!

thanks again.


Sounds exciting! Are you hiring? I had a recruiter from Cruise drop out on me because I wanted to stay and work from Canada and wasn't in a position to relocate to the US.


Hey Kyle,

Why did your previous CEO Dan Ammann quit just before this launch?


I see a few neighborhoods missing on the signup sheet. Are the crazy Bernal Heights streets a bit too much for this stage? :)

Looking forward to riding from my home there!


As a bike rider, father, and sometimes inattentive human, I would like to say thank you for the safety you are bringing to our cities.


Thank you for sharing! Sharing details on how your system works brings confidence to customers...


Did you watch the videos?

Based on the reply to the question "What sets Cruise technology apart from others like Waymo, Tesla... In other words, how was this difficult technical problem solved in a way others were unable to do so far?", which you can hear here (video at the correct time):

https://youtu.be/ABto5nqWgc0?list=PLkK2JX1iHuzz7W8z3roCZEqML...

Thank you, but I won't be volunteering to ride in one of these.


Lol, what answer do you expect from a question like that? At a high level, Cruise is taking the same approach as Waymo, and a different one from Tesla: start with lots of hardware, HD maps, and a targeted operating domain, then try to scale. Answering in any more detail would a) give away trade secrets and b) rely on knowing trade secrets about Waymo's cars that they probably don't know.


Will you share your systems safety work? How many fatalities per drive hour do you expect?


You guys ever think about selling your LIDAR data?


No, the different strategies are that Tesla has a vehicle actually being used by hundreds of thousands of people and is slowly incrementally improving their self driving with massive amounts of feedback and data, while these demo companies are doing if statements around the block.


> vehicle actually being used by hundreds of thousands of people and is slowly incrementally improving their self driving with massive amounts of feedback and data

Throwing data at the problem isn't going to solve it. Only people without expertise in AI think that's how it works.


> while these demo companies are doing if statements around the block.

[citation needed]

I only ever see Tesla fanbase making outrageous claims like this without any supporting evidence.


Well, you are ignoring the point. The whole differentiating strategy between Tesla and everyone else is the incremental improvement of a large number of vehicles versus the magic "hey, we came out of nowhere and now just drive ourselves". This has been repeated throughout tech history, and the incrementally improving real-life approach always wins.


The large number of vehicles makes it harder because you can't do hardware upgrades and a regression will kill someone.


And Tesla will just ingest large amounts of data from their fleet and magically dump an L5 solution one day? That's believable?

Elon Musk has been promising imminent L5 self driving every year for the past 7 years; that requires more than incremental improvement. The ones actually doing incremental improvements are companies like Cruise and Waymo, making it work one geography at a time.


The Coca-Cola Company sells even more units of non-self-driving products than Tesla, and for a fraction of the cost!


They are betting that this hardware combination is the fastest path to market, given the constraints of today's software.

When the ML stack is capable of leveraging purely camera sensors, Cruise and others like them own the fleet and can swap out the hardware. Tesla does not "own" the fleet per se. So perhaps it's different bets on which cars will still be on the road when the ML threshold is crossed.


> when the ML threshold is crossed

If, not when.

Even the most cutting edge research today still pales in comparison to LiDAR.


Is it really an "if"? I think it would be a pretty safe bet that in 100 years human-quality CV object detection will be solved (note, we both know that it is possible AND this doesn't require AGI). So then it's really a question of when (presumably you don't need the full 100 years).


As an amateur (non-AI-expert) it seems to me that behind every corner is lurking a sub-problem that is AGI-equivalent. I don't see any reason to believe that humans do human-quality object detection without also deploying tremendous contextual understanding of the world. So perhaps it will turn out that a computer needs something similar?


I think decision making in driving is highly contextual, but LiDAR doesn't help there either. Purely visual field extraction is something even very simple animals can do (presumably with much weaker abstract context-processing capabilities).


> note, we both know that it is possible AND this doesn't require AGI

Not who you're replying to and not saying you're wrong, but how do we know this?


We know in the sense that very simple animals do it, and it doesn't require decision making (LiDAR only helps with perception anyway).


So I can teach my dog to drive the car?


Your dog can reliably detect objects, judge distance and avoid them.

That's all the person is saying. Simple animals can use only vision to do what we're using lidar and radar to do. But neither camera, lidar nor radar or any combination of them guarantees that you'll be able to make a computer drive a car in all situations.

For me, intuitively, the problem of reconstructing a distance field from cameras can't be way harder than, say, trying to predict what a person on a bike will do next, or detecting lane lines on the road in heavy rain or snow. So it seems very likely that an "AI" capable of driving a car in all situations would be powerful enough not to need lidar or radar (though I don't see the point of dropping radar, as it gives you some ability to "see" around objects, which can make cars better than humans).


Would Tesla be ahead if they had incorporated LiDAR?

Because they already have the data advantage for ML.


Also you can't sell sexy Teslas if they have ugly lidars on top.


I think it's not just that. When Tesla started their FSD journey, they had to determine what sensors they could add to the car. Lidars back then were way more expensive than they are now, and it wouldn't have been feasible to add them at that time.

They can't add them to new vehicles now because they promised the vehicles back then were only a software update away from full autonomy [1]. Building on lidar now would mean developing two heavily diverging stacks. Going back on the promise of old Teslas being "FSD capable" would introduce a huge liability.

Long story short, Tesla's stance on lidar was determined 8 years ago, without the option to revise the decision as things developed.

[1] Note this has turned into "we'll only have to replace the computer in the car", which is still doable, unlike adding sensors to the existing vehicles.


Tesla has really fallen behind. I think Karpathy will be fired this year if his team can't achieve at least L4.


> I think Karpathy will be fired this year if his team can't achieve at least L4.

Ah so Karpathy will be fired this year. Because they're not reaching L4. FSD isn't even L3 yet.


I do expect at least some companies will hit L4 within the decade(?) but it's going to be under limited conditions that won't include urban driving. Which could actually be a very useful capability but isn't the "don't own a car" future that some really are focused on.

------

Level 4: High Automation

System capability: The car can operate without human input or oversight, but only under select conditions defined by factors such as road type or geographic area.

Driver involvement: In a shared car restricted to a defined area, there may not be any. But in a privately owned Level 4 car, the driver might manage all driving duties on surface streets then become a passenger as the car enters a highway.

Example: Google’s now-defunct Firefly pod-car prototype, which had neither pedals nor a steering wheel and was restricted to a top speed of 25 mph.


> "driver might manage all driving duties on surface streets then become a passenger as the car enters a highway."

This is the kind of setup I can't wrap my head around. The car might "require" you to take over when you exit the highway, but it can't exactly "make" you. If you fall asleep on the freeway and the car isn't willing/able to drive at the end of your journey, or in edge cases if you were to pass out, etc. what does it do when it gets to your designated exit, or to the final exit of a designated highway? Are there going to be a bunch of cars all stacked up with flashers on the shoulder by every off-ramp waiting for their people to wake up / quit playing on their phones and engage manual mode?


I'm thinking more like a semi truck. Get onto the freeway on-ramp, pull over and get out, then the truck continues on without you for miles before taking an off-ramp where a driver is waiting to handle the nearby streets. I expect truck stops in rural areas will (with DOT help) get special on/off ramps that are approved (maybe a special stop light?) so that trucks can go to a full service pump for fuel and get back on the freeway.

As you say, city driving is hard, but there are a lot of trucks that cross the US on freeways that are easy to automate.


I'm as skeptical about self-driving as just about anyone. But this seems to be getting into real edge case territory. Person falls asleep/is watching a movie and doesn't respond to increasingly urgent alerts? Is this really a problem? And is it a problem that's greater than fatigued driving today?


Pull into breakdown lane.


What happens when there isn't one? Roadworks, and accidents cause frequent closures of the breakdown lane. L3 has a lot of edge cases where the vehicle is supposedly too dumb to drive, but smart enough to know it shouldn't drive. It may be death by a thousand cuts.


Achieving the “don't own a car” future doesn't require automation, as much as urbanization. No car technology would help reduce the climate crisis we're facing, unless that technology eliminates private transit (as in rides not shared by multiple people, not as in private ownership of said transit)


As someone without a drivers license or a car, automation would still be wildly useful in driving down the cost of occasional journeys to locations inconvenient to serve with public transport to a point where it makes living without a car an option for more people.

You're right, though, that the challenge is that it also makes it easier for everyone to opt for cars over mass transit.


Oh sure, L4 within the decade for companies other than Tesla, totally doable. You could argue Waymo and Cruise are already there with geo limitations.

But Tesla within a year with no lidar? Yeah, no. Not happening.


“I would be shocked if we do not achieve Full Self-Driving safer than human this year. I would be shocked.”

-- Elon Musk

He set the milestone.



He told investors the same thing last year. Elon milestones mean nothing.


What comes first? Tesla FSD or the year of the Linux Desktop


Tesla has not fallen behind, it's rapidly catching up. It's just not that easy to catch up if you are 10 years behind the competition and handicap yourself with inferior hardware. Maybe you are right and Karpathy will get fired, but at that point it's time to sell your tesla stock.


I think you have to be ahead first before you can fall behind. Tesla never had that problem.


What is success?

Does 50 miles of geofenced and daily mapped streets mean Cruise won self driving?

What if Waymo gets to 20k miles of geofenced roads and monthly mapped?

What if Tesla gets to the point of one intervention/crash every 100k miles? 10M Miles?


The human accident rate is about one per 500K miles, so if they were able to get in that range, then yes, they would have succeeded; drivers would be able to stop paying attention to the road without putting themselves and others in danger.

But the current FSD beta's intervention rate is more like one per 10 miles, judging from some quick googling. I see no particular reason to assume that incremental improvement can take us from 10 to 500K.
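
A rough sense of the size of that gap, taking both numbers above at face value (ballpark assumptions, not measured data):

    import math

    miles_per_intervention_fsd = 10        # ballpark from the comment above
    miles_per_crash_human      = 500_000   # ballpark human crash rate

    gap = miles_per_crash_human / miles_per_intervention_fsd
    print(gap)              # 50000.0 -> a ~50,000x improvement needed
    print(math.log10(gap))  # ~4.7 orders of magnitude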


As it can't solve this in 2022 (video at correct time): https://youtu.be/wTybjJj0ptw?t=238

Or even worse, just managing an empty intersection (video at the correct time): https://youtu.be/wTybjJj0ptw?t=280

At what point does releasing software this bad become a criminal liability? Another one at the correct time: https://youtu.be/wTybjJj0ptw?t=652

There are simply no words...Correct time: https://youtu.be/wTybjJj0ptw?t=722

Should not be allowed out of the labs...


This is from November 2021, but I'm still highlighting it because it is just terrifying (correct time, though the video later on also exhibits inabilities of the system): https://youtu.be/9wRRClg_aM8?t=113


After watching your linked videos I'm actually really impressed with it.


It looks great for an early alpha. It needs a fair amount of improvements before it will be ready to be released to end users, though.


Now they just need to draw the Rest of the Owl...

https://www.reddit.com/r/restofthefuckingowl/


That looks like an incredibly stressful way to drive.


> current FSD beta's intervention rate is more like one per 10 miles

Maybe in rural areas? The videos on YouTube are far more than one per 10 miles.

https://www.youtube.com/watch?v=wTybjJj0ptw


On quick watch the driver intervenes at 4min 45sec and 5min 47sec.


A helpful link for your perusal: https://en.wikipedia.org/wiki/Selection_bias


But are you using confirmation bias to find a cognitive bias that fits here?

In all seriousness, we don't have access to the data across all 60k FSD users to know what the intervention rate is or how it has been changing over time.


We do have previous statements that as they get better they are moving on to harder situations. Start with empty roads, and once you can handle them well, start finding harder and harder situations. When you start, you avoid construction zones; once you are doing well, you start looking for them.


Dirty Tesla used to track these stats in his testing and gave up because “it’s not changing”


Which could be a sign some drivers are simply overly cautious. Suppose only 1 in 10 disconnects prevented a crash; then reducing the risk of crashing to zero only reduces the number of disconnects by 10%.

To actually reduce that number you would need to make drivers feel more confident in the vehicle, which is a useful metric, but one only indirectly related to safety.
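
A toy calculation of that point, with made-up numbers purely to show the shape of the argument:

    disconnects      = 100    # driver takeovers over some fixed mileage
    crash_preventing = 0.10   # assume only 1 in 10 takeovers actually avoided a crash

    cautionary = disconnects * (1 - crash_preventing)   # 90 takeovers were just caution
    # Even a car that would never crash still sees the cautionary takeovers,
    # so the disconnect metric only drops from 100 to 90: a 10% improvement.
    print(cautionary / disconnects)   # 0.9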


What is the appropriate point of comparison though? All human drivers? Sober human drivers? Sober cautious human drivers? Sober cautious human drivers with driver assistance technology (e.g. auto-braking and blind spot warning, or potentially even more sophisticated LiDAR tech)?


I don't think this question is even meaningfully defined. There is no "the" point of comparison. The relevant point of comparison is whatever ride it's displacing.

The rideshare explosion has already had a measurable effect on drunk-driving deaths; to the extent that a theoretical lower-cost AV will make rideshare even more accessible, then its effect on drunk-driving reduction absolutely makes non-sober drivers a relevant comparison.

For an average young person who'd get in the car with one of their friends, or drives a bit recklessly themselves[1], an AV at sober-human-driver level would be valuable.

For a guy who needs his kids driven around, a "sober cautious human driver" level of safety may feel right.

For questions like "what should the regulatory bar for launch be", all human drivers seems like an easy answer.

[1] I'm probably guilty to a degree here, on the rare occasions I drive


Is it reasonable to assume AV will be lower-cost than rideshare? The key thing that makes Uber more affordable than a taxi is that vehicle purchase/maintenance/depreciation/liability are all externalised.

In a full-self-driving situation you no longer have to pay your driver, but you do have to pay for all of the above. With the inevitably higher standards of maintenance required for AV fleet vehicles I can't really imagine it being cheaper than it currently is.

Sure the sensor/cv/vision tech will get cheaper, but machines still wear down.


> Is it reasonable to assume AV will be lower-cost than rideshare?

That's what the industry is betting on. I think it's reasonable in the steady-state: labor costs are expensive as hell.

> vehicle purchase/maintenance/depreciation/liability are all externalised.

These aren't 100% externalized with Uber, as they show up in the labor cost. They're only externalized with Uber to the extent that drivers do the math wrong on the costs they're paying[1]. Most of the analyses I've seen of this choose every possible pessimistic assumption, and still end up with net wages that are very high. They're of course low relative to "a living wage", which is what the analyses are focusing on, but that's precisely the point of what we're talking about: even the floor of labor costs is very high, when you're looking at expenses.

[1] Completely tangentially, but also note that this ignores the extent to which people derive value from being able to convert assets around. It's hard to imagine for us SWEs making 1% salaries and sitting on mountains of wealth, but liquidity is a constant and pressing concern for a large portion of the country. See also: payday lenders, where there's a stark difference between the opinions of those who've actually studied the economics of the industry and the midwit affluent John-Oliver-watcher.


> The human accident rate is about one per 500K miles, so if they were able to get in that range, then yes, they would have succeeded; drivers would be able to stop paying attention to the road without putting themselves and others in danger.

Unfortunately, I expect that automation will be held to a higher standard than human drivers, rather than the same standard. When an accident happens, people want to know who to blame, and an unimpaired human driver gets somewhat more latitude for a genuine accident, while a piece of software is always going to be perceived to be at fault (which it may well be, even in a situation where a human wouldn't be considered to be). And conversely, people (somewhat validly) want to have more control: every driver thinks they're above average, and the software won't be as good as their accident rate, and if something happened at least they were in control when it happened.

I don't necessarily even think those are incorrect perspectives; we should hold software to a high standard, and not accept "just" being as good as human drivers when it could be much better. But at the same time, when software does become more reliable than human drivers, we should start switching over to make people safer, even while we continue making it better.

(Personally, I wish we had enough widespread coordination to just build underground automated-vehicles-only roads.)


> Unfortunately, I expect that automation will be held to a higher standard than human drivers, rather than the same standard.

The average driver in a crash is worse than the average driver. Why would we compare FSD with reckless drunks, etc.


I'm expecting that we should compare self-driving vehicles to the average driver, not "the average driver in a crash".


Ye.

Also, I should have written "the average driver in a crash is worse than the median driver".

"* In 2016, 10,497 people died in alcohol-impaired driving crashes, accounting for 28% of all traffic-related deaths in the United States.

* Drugs other than alcohol (legal and illegal) are involved in about 16% of motor vehicle crashes." https://www.cdc.gov/transportationsafety/impaired_driving/im...

If we include recklessness, FSD may need better than half the fatality rate of human drivers to be on par with the median driver.
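
Back-of-the-envelope, using the CDC shares quoted above (the two categories overlap, so treat this as illustrative only):

    alcohol_share = 0.28   # share of traffic deaths involving alcohol impairment
    drug_share    = 0.16   # share involving other drugs (overlaps with alcohol)

    impaired_share   = alcohol_share + drug_share   # ~0.44, an upper bound
    unimpaired_share = 1 - impaired_share           # ~0.56
    # Roughly half of fatalities involve no impairment at all, which is where
    # the "better than half the fatality rate" figure above comes from.
    print(unimpaired_share)   # ~0.56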


The real averages of FSD intervention are unknown since some 2,000 Tesla employees also have NDA'd Beta access, and it would surely differ between rural, suburban, and urban roads.


In many areas it's more about how many interventions per mile are necessary. Anything outside of sunny highway driving is on the edge of that.


It also depends on what kind of miles. Are they running at the same speed? Only easy highways or complex urban intersections?


Not accident rate; crash rate.


Yeah, they’re not trying to solve the same thing?

I think Tesla is right that to solve it for real you need to solve the general case which can’t rely on high resolution maps.

The city cab case is smaller and can rely on them, so the Cruise approach makes sense for that use case. It's just narrower.


The truth of it is that it’s just not possible (with currently existing technology/ML architectures) to create a truly autonomous taxi without HD maps. Everyone in the robotaxi industry knows this - even Tesla builds HD maps, they just don’t call them that.


My knowledge only comes from Karpathy's talks about this (which are great, worth watching if you haven't seen them).

I found his and Tesla's arguments convincing for the general case. That doesn't mean that the narrow cases aren't super cool or valuable (I signed up for this Cruise thing in SF).

I just think that if the software is unable to make decisions based on visual data alone without up to date high resolution maps it'll never achieve true FSD in the general case (not geo locked). You'll end up trapped in a local max otherwise because there are just too many conditions in the real world that vary (and the world is too large to economically map fast enough for that approach). You have to solve the vision problem.

I don't know enough to comment on the approach differences beyond that, but my understanding was that Tesla did not rely on the same stuff that Waymo and Cruise require (largely Lidar and these high resolution maps).


> I just think that if the software is unable to make decisions based on visual data alone without up to date high resolution maps it'll never achieve true FSD in the general case (not geo locked). You'll end up trapped in a local max otherwise because there are just too many conditions in the real world that vary.

My contention is that there’s no way to actually solve for the general case with currently existing technology. The amount of novelty in the real world is too great for any system to account for it without disambiguating via HD maps or remote support.

>You have to solve the vision problem.

This isn’t a vision problem specifically - even if you had LIDAR and high resolution imaging radar and 8 A100s on every Tesla, “true generalized self driving” wouldn’t be achievable without HD maps with our current understanding of Machine Learning.

>My understanding was that Tesla did not rely on the same stuff that Waymo and Cruise require.

Tesla maps individual traffic light elements, stop signs, and lane markings, but will attempt to drive even if the area isn’t mapped.

Disparities in FSD performance in different areas are largely attributable to some areas being better mapped than others; the mapping data has a huge effect on its performance. There are key elements of the driving task (including recognizing and reacting to every single type of sign other than a stop sign) that FSD can't do and relies entirely on maps for.


Novelty isn't nearly as big of a problem as you might think. One of Waymo's famous videos was someone on an electric scooter chasing a duck in the middle of the street. That's very odd behavior, but the car followed the rather simple option of just not hitting them and going forward when possible.

Cars really don't need to identify what something is, just its location and movement, which is a vastly easier problem. A trash can rolling down the street can be treated just like an oil drum doing the same thing, etc.


> Cars really don't need to identify what something is, just its location and movement, which is a vastly easier problem. A trash can rolling down the street can be treated just like an oil drum doing the same thing, etc.

You'd think that, until you encounter something like a turn restriction sign with a bizarre conditional restriction that it's never seen before. At which point the car needs to OCR the text, parse the semantic meaning, and apply it to the scene.


Right by my house I have a four-lane (on one side) intersection with a traffic signal. Each of the lanes goes straight ahead. However, each lane has its own traffic light, and when the traffic light rotation is in that direction, it alternates: the two leftmost straight lanes are red while the rightmost are green, and then it switches (because very shortly after the intersection there is a quick lane reduction to two lanes).

I can't imagine how AI would _correctly_ interpret four straight-arrowed lights in front of it in the intersection, some of which are red and some green. Humans of course recognize that they correlate to the lanes, but this is a more esoteric case for AI to assimilate.


Or treat that turn restriction as applying 100% of the time.


And now we’re already making concessions about the car’s abilities.

There are 10 MPH speed limit signs on Market Street in SF that specify in incredibly small text "when behind trolleys". Assuming we take your approach, the car will just always go down Market at 10 MPH.

Imagine if it's a negative turn restriction, i.e. it permits turns except during certain hours and conditions. Now the car is treating the turn as always permitted and turning into traffic. An edge case, but something it's going to encounter in the real world.


And now you're moving the goalposts. We are talking about extreme edge cases in some random small town, not common signs in a major city. They can always get updates on what some random sign in some random location means; as long as they're safe and don't block traffic, that's all that's needed.

Also, negative restrictions can again default to full restrictions. Permitting a car to, say, park in a snow lane doesn't require the car to park in the snow lane.


I don’t think I’m moving the goalposts - we were discussing whether autonomous driving (which I take to mean L4-L5 driving without the need for a human in the loop) is possible without geofences or HD maps. “Edge cases in some random small town” are exactly the sort of thing you need to worry about without a geofence.

Not to mention these sorts of edge cases are way more common in large cities than small towns - one of the examples I gave was down a central avenue in San Francisco.

>They can always get updates on what some random sign in some random location means; as long as they're safe and don't block traffic, that's all that's needed.

What if it truly fails to parse the sign accurately and does something illegal or dangerous? What does sending an update out look like? Does a human take a look at a crop of the sign and review it? Why not just map it in that case?


> edge cases are way more common

It's not a question of parsing a known sign; even extremely complex rules can be encoded. Further that process can take place from a photo of the sign uploaded by the car to then be encoded by the rules. The general case is stopping and having a remote driver slowly tell the car what to do.

An unknown sign in a place without cellphone reception is about the only case where it really needs to just figure it out on its own rather than simply avoid causing an accident.

> What if it truly fails to parse a sign accurately and does something illegal or dangerous?

Not much, people regularly disobey traffic signs especially ones with complex instructions. Don’t hit stuff or jump in front of another car is generally enough.


> Further that process can take place from a photo of the sign uploaded by the car to then be encoded by the rules. The general case is stopping and having a remote driver slowly tell the car what to do.

So you’re now agreeing that you need some level of remote support to handle edge cases like this?

>An unknown sign in a place without cellphone reception is about the only case where it really needs to just figure it out on its own rather than simply avoid causing an accident.

Yes, and again, this is the sort of thing you actually need to worry about when trying to come up with a generalized self-driving solution.

> Not much, people regularly disobey traffic signs especially ones with complex instructions. Don’t hit stuff or jump in front of another car is generally enough.

What if it misinterprets a one way sign at night when there’s no other signal that it’s turning on to a one way lane and it suddenly finds itself traveling opposite the direction of traffic for a long period before encountering another car? You have to consider all of these edge cases when talking about a generalized solution.

Maybe you still disagree with me in spirit, but do you see how, when we really look at edge cases, you have to fall back on some level of remote operation or mapping?


> So you’re now agreeing that you need some level of remote support to handle edge cases like this?

As a bootstrap step, yes; after that, no, just regular updates for new traffic rules and such. You can't make a purely offline self-driving system that doesn't get updated for 30 years, because laws change. But presumably a non-geofenced self-driving car is going to be tested by driving on every road, either directly or via someone's mapping project.

> What if it misinterprets a one way sign at night when there’s no other signal that it’s turning on to a one way lane and it suddenly finds itself traveling opposite the direction of traffic for a long period before encountering another car? You have to consider all of these edge cases when talking about a generalized solution.

You mean in some location without maps? There are a finite number of roads in the world and they don't change that quickly. If you're worried that the AI is going to, say, end up on an ice road that melts, sure, that's the kind of thing that happens once. But the threshold isn't perfection; it's ~30,000 dead people per year in the US. Beat that and you win.


> I think Tesla is right that to solve it for real you need to solve the general case which can’t rely on high resolution maps.

But they do rely on maps. You cannot use FSD without the latest high-resolution maps.


Or you solve for a subset of highways in a subset of weather conditions. That would be more useful to a lot of people than city cabs which exist today (with human drivers).


Cruise is interesting insofar as they are not simply looking to sell their technology; they also want to monetize it as a service. Not only will they not need a driver, they will also be able to buy the hardware (the car) at cost. If it's successful, their margins will beat Uber's and Lyft's by a long shot.


On the other hand, Uber and Lyft externalize many costs including liability.


Externalizing liability and automated driving seem quite at odds unless Uber somehow manages to bypass laws again.


Is this not what effectively everyone who is doing this (outside of Tesla) is looking at?


As a taxpayer who pays for roads, and suffers from traffic congestion caused by one-occupant and zero-occupant vehicles, I'm eagerly looking forward to reducing the taxes I pay, by taxing those margins, instead.

Ideally, the taxes could be high enough that driverless taxis will operate at barely above break-even. The financial comfort of me and my neighbours is more important to me than the profit margins of a firm that barely employs anyone in my town.

Unlike a factory or a corporate office (that can threaten to move offshore, eliminating jobs and impoverishing a town), the firm in question is a hostage of local politics - not the other way around.


My cynical take: the government is not going to forgo collecting a tax from you that you are already paying. Instead it will tax you and start collecting per-ride fees from Cruise, etc.


Do you think your experience of congestion would be improved by everyone driving private vehicles instead? Not sure I follow the logic here.


Yes, because in the common case, a taxi (driverless or otherwise) drives empty at least some of the time, to pick someone up, thus creating congestion, compared to a private vehicle, which doesn't drive empty.

The cheaper and more convenient you make zero-person and single-person automobile transportation, the more people will use it, and the more congestion they will create.

The more expensive and less convenient you make it, the more trips will use non-automotive, or public transportation, both of which produce far less congestion.


I actually agree with everything here, but on the other hand the decision of whether and how to actually build the massive amounts of non-car infrastructure we need to have transport be efficient and accessible without private cars of any kind, is in a whole different place. At least in the US, it's pretty clear that in most areas there is very limited political will, even in the grass roots, for things like "build good high-speed trains" and "dig new billion dollar subways" etc. So I think pragmatically speaking things like robotaxis are going to be the "solutions" that we'll actually get.

(And yes, I agree that that's dumb since the same politicians and voters have no problem indefinitely subsidizing and expanding the massively money-losing infrastructure called Roads at taxpayer expense!)


On the other hand, once a sufficient percentage of cars on the road are autonomous, couldn’t they use cooperative navigation algorithms to improve throughput a whole lot?

There are so many inefficiencies with human drivers—chaotic merging, unnecessary lane changes, blocking of passing lanes, and so on. I could imagine that optimizing all those away would make a huge difference overall.

You could also probably increase speed limits. And fewer accidents should cause a significant reduction in traffic jams.


Of course. If one assumes relatively inexpensive robo-taxis people living outside cities will definitely come in more often. I certainly would.


Success? Go from point A to point B with minimal incidents. It's not as complicated as most people make it out to be.


More importantly, "driverless" means no one in the driver's seat. What Tesla has is barely even Level 3. Waymo right now is doing rides without anyone in the driver's seat, aka Level 4.

What Tesla is doing is not driverless.


I just drove through the Alps, at night, during a snow storm. This is hardly everyday driving, but it's the sort of experience Canadians are no strangers to.

Success is when I trust the autopilot to handle the weather conditions where I live, not just sunny days in California.


I always wondered why Tesla rejects lidar. My guess is that it is more about profitability/availability than anything else. Just because humans use eyeballs doesn't mean that is the best bet for a computer. This sort of naturalistic fallacy led people to believe (~130 years ago and beyond) that the ideal flying machines would have flapping wings because, well, birds have wings and that is how they fly.

Maybe I'm just salty my 2021 Model Y had radar stripped out of it, and to me the more tech toys the better. Not that it matters, because Tesla FSD is a scam and I wouldn't use it even if it came free with the car.


Lidar was expensive back then and would've added huge costs to the vehicles. Not to mention, it looked ugly on consumer cars. Elon conveniently used "humans use only vision" as an excuse and promised every Tesla had "sufficient" hardware for full autonomy. It's that premature promise (and perhaps Elon's ego) that doesn't let them add sensors even now without breaking trust and/or eating big retrofit costs.

In short, Elon made a high risk bet that vision-only would be enough and so far has been proven horribly wrong. But I've got to say, it was brilliantly executed because it gave Tesla mindshare as a tech company, drove sales and contributed massively to their insane valuation.


Tesla pivoted to video only. They originally tried with radar as well. So any such statements by Musk are already a pivot.


Their radar removal was baffling. There were rumors (perhaps Musk's tweets?) of Tesla investigating 4D imaging radars to replace their really old Continental radars, but they suddenly decided to remove it altogether. Many have attributed it to chip shortage and yet again Musk using vision-only as an excuse for removal.


My money is on the chip shortage. There were some real problems with the radar (like reflections from overpasses), though. I think not being able to ship cars for lack of radars is what forced this decision (vs. trying to fix the radar and/or the way it's fused with the vision).


> It is interesting to me that right now this is sitting on the HN homepage directly adjacent to: "Tesla to recall vehicles that may disobey stop signs (reuters.com)"

Based on https://www.forbes.com/sites/bradtempleton/2022/01/13/a-robo... Tesla's FSD has other issues as well.


In Lex Fridman's interview with George Hotz, Hotz talks about why he thinks radar for AI is a non-starter, and he predicted that even though Tesla was still adamant about using their radar, they would eventually realize they only needed cameras.

Hotz is the founder of comma.ai, which is an open source (I think?) company.


What were his reasons? Searching the web reveals lots of machismo and assorted hero worship, but no actual solid technical arguments. Nobody in the field seems to think a vision-only solution is practical, other than Tesla, who are also not providing solid technical arguments that I can find.

This IEEE item seems to summarize the situation well.

https://spectrum.ieee.org/tesla-places-big-bet-vision-only-s...


It was a discussion about how such a system must function with all the different inputs vs the vision only model.

If you have radar and lidar and vision, then you have at least three different specialist machine learning models running, and then another model running that takes their outputs and decides what the car is going to do. You may have even more than that, some doing specific tasks like localization.

Neural nets and vision only is a more difficult but in the long run straightforward solution. The example he brought up was alphazero vs some other chess engine that has a rook engine, and a knight engine, etc.

Basically he's backing the end to end neural network back approach over some kind of multi-sensor fusion.

https://youtu.be/_L3gNaAVjQ4?t=3801
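
To make the contrast concrete, here's a minimal structural sketch of the two shapes being compared; the functions are toy placeholders, not anyone's actual stack:

    import numpy as np

    # Placeholder per-sensor "specialist" extractors; a real system would use
    # separate neural networks for each sensor.
    def camera_features(image):    return image.mean(axis=(0, 1))   # e.g. a CNN embedding
    def lidar_features(points):    return points.mean(axis=0)       # e.g. a point-cloud net
    def radar_features(returns):   return returns.mean(axis=0)

    def fused_policy(image, points, returns):
        # Multi-sensor fusion: per-sensor models, plus a model that combines
        # their outputs before any driving decision is made.
        fused = np.concatenate([camera_features(image),
                                lidar_features(points),
                                radar_features(returns)])
        return fused   # a planner would map this onto steering/braking

    def end_to_end_policy(image):
        # The vision-only bet: one network straight from pixels to a decision.
        return camera_features(image)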


That argument doesn't make any sense though. Clearly radar + lidar + vision is a superset of vision only. It can perform as well as vision only if you disable the other two.

So the claim is there is absolutely no scenario where the other systems can contribute which seems to be false. E.g. Tesla's cameras are blinded when pointing straight into the sun; the car even tells you the cameras are blinded. If the cameras are splashed with mud they'll also see nothing. Tesla's radar was able to see "through" some obstacles which the vision system cannot (and you can see that in the traffic visualizations).

Now, how far do you want to fuse? Do you just want to overlap the unique sensing abilities of each of those systems? How do you handle conflict in the regions where all systems sense "equally" well? Sure, there are questions. But clearly having more data can't make things worse.

Elon's argument that that's the way to go because that's how humans drive is just ridiculous. And I say this as a proud Model 3 owner (great car, will never ever be autonomous, I don't care). It doesn't pass the sniff test.


> So the claim is there is absolutely no scenario where the other systems can contribute which seems to be false.

No. The claim was that pouring resources into those other systems is better spent improving the vision system, within the context that Tesla and Comma.ai are operating in.

> But clearly having more data can't make things worse.

There are several examples of cars hitting stuff because one part of the sensor suite thought there was a problem, or that there wasn't a problem.

In general, the more complex you make a system, the more complex the failure modes get.


The real equation is closer to: Radar + LIDAR + Vision + Cost + Latency - Battery

Doing it with vision only saves on cost, latency & energy consumption.


I agree it saves on cost and energy consumption. Not sure about latency. But it's an inferior sensing system, and it remains to be seen whether it can do the job or not; so far it seems like not. Even if it can do the job (by some measure), it seems unlikely to outperform the better sensing systems. I'd pay a little more for better safety; we know the cost of "human safety" as it's reflected in our insurance premiums...


If fusing two data sources is necessary prior to making actuation decisions, how could adding another data source not introduce additional processing & latency?

It's not just about paying more dollars, but also range.

You're assuming that a dual system will be safer, but what if such systems are more prone to perception confusion, or other anomalies?

Even if a dual system were safer, it doesn't make sense to say that you will pay for safety absolutely. For example, you can always add an additional smoke alarm to your house for some marginal improvement in safety, but at some point most people decide they are safe enough.


Hotz has his own company doing self-driving using vision only: https://comma.ai/


Didn’t Hotz publicly give up on self driving and say driver assistance is all that will ever be possible?


If it's what I think you're saying, he mentioned that it's a question of liability.

If your car is 'driver assistance' then you don't have liability for accidents. If your car is 'self driving' then you're going to get sued for every accident.

So instead you're always 'driver assistance' just from the risk analysis perspective.


The question is whether the economics of self-driving cars work when you have to add and integrate all this additional equipment. Correct me if I am wrong, but aren't LIDAR cars supposed to cost you $200K+?

Tesla is imitating humans, in a way, by removing LIDAR and relying on compute to make up for it and build a more accurate picture.

Also, isn't Cruise owned by GM - who are the VCs here?


Lidar costs have dropped massively and continue to drop. Waymo, for example, claimed 4 years ago they were able to reduce the cost of their lidar by 90%, from $75,000 to around $7,500 [1]. In the meantime, the range and resolution of these sensors have increased. Anyone not making use of lidar at this point is just hamstringing themselves.

[1] https://www.businessinsider.com/googles-waymo-reduces-lidar-...


200k isn't that much for certain use cases, like shared cars. NYC taxi medallions were significantly more expensive yet the revenue from taxi rides was high enough that people kept buying medallions.


Not exactly apples to apples. A $200k hardware device depreciates in value over time, whereas the taxi medallion was a fairly liquid, appreciating asset.


Yup that's a really good point.


If you search for "bosch lidar" there's a bunch of 2 year old news about them selling one designed for autonomous vehicles for $10,000, with statements that they could likely drive the price towards $200 with mass production.

So $200,000 is probably not correct.


It is correct for current pricing. Advances may drive those prices down. Right now the sensor packages exceed the price of all the other components.


> Tesla is imitating humans, in a way, by removing LIDAR

They are trying to imitate a fraction of what humans can do. And the state of the art ML research still does not account for issues like whether a photo of a person on a truck is real or not.

You really need LiDAR for accurate bounding box detection.


I recall that GM/Cruise acquired a LIDAR manufacturer back in 2017. Not sure if it has worked out, but their rationale for the acquisition, vertical integration, makes sense.

Strobe’s solution will reduce the cost of making these sensors by 99 percent, Vogt says. “The idea that lidar is too costly or exotic to use in a commercial product is now a thing of the past.”

https://www.wired.com/story/gm-cruise-strobe-lidar/


What makes you say Tesla is imitating humans? Their motion planning is all traditional robotics logic, not anything learned.


Vision-based driving. Humans don't have lidar sensors.


Humans don’t regress per-pixel depth or use convolutions and region proposals to draw bounding boxes around objects. They don’t function on models with fixed weights trained by backprop either. The idea that “vision only” somehow more closely resembles how humans drive quickly falls apart if you inspect the internals of these systems.


The similarity is in the problem to be solved, not the details of the compute pipeline. Depth must be inferred somehow, rather than measured by actively interacting with the surface, as it is in LiDAR.


If you really wanted to make this argument, you wouldn’t even want to bother inferring depth, since that’s not what humans do, not directly at least. If you’re actually trying to obtain a depth map as part of your pipeline, LIDAR (or LiDAR + vision feeding into a denser depth prediction model) would always be a better strategy, cost aside, since determining depth from images is an ill posed problem.


My claim is that humans use their eyes as a primary input for driving. I don't think it's controversial. We don't let eyeless people drive. Eyes do not shoot out lasers.


I think the comparison comes from the fact that humans infer a 3D map from stereo vision whereas LiDAR to some extent gives you the ground truth 3D map. You’re right that it falls apart pretty quickly though.


Except Tesla isn’t inferring a 3D map from stereo vision either, at least not outside of the front facing cameras - they’re using monocular depth prediction.


Neither are humans. Our eyes are so close together there's almost no disparity between the eye images beyond a handful of meters. We do 3D by inference beyond the near-field.
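
A quick back-of-the-envelope for that claim (assumed values for the eye baseline and distances, purely illustrative):

    import math

    baseline_m = 0.065                       # ~6.5 cm between human pupils
    for dist_m in (2, 10, 50):
        angle_deg = math.degrees(math.atan(baseline_m / dist_m))
        print(dist_m, round(angle_deg, 3))   # disparity/vergence angle shrinks fast with distance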


Humans aren't just a stereoscopic camera - we can sense depth by knowing what things should look like, or by moving our head, or refocusing, or…

Did you know we can see light polarization?

https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4528539/


You’re describing vision, and not LiDAR. We are in agreement.


They are wholly owned by GM, are they not? Starting this way doesn't preclude them from taking Tesla's vision-only approach in the future. Even Tesla initially had a radar + vision combination approach before moving to pure vision. The real question, whichever technique is used, is whether they can drive the hardware cost low enough that it can be widely deployed.


They moved to pure vision because they were constrained by radar suppliers due to supply chain issues and the chip shortage, not because of any ML progress; they were actually investigating a higher-resolution imaging radar before the pandemic.


They also have lidar cars in Fremont that drive around every so often; that doesn't mean they plan to put lidar into every car anytime soon. It'd be short-sighted not to continuously evaluate solutions that previously had constraints (despite what Elon says, they'd add lidar if it made economic sense and showed an improvement over camera-only, as their camera detection is pretty accurate these days in FSD).


I’ve seen their LIDAR cars in SF too - if I had to guess they’re gathering ground truth data to train monocular depth models on.

And even really naive integrations of LIDAR will show big improvements over camera only. You can do something as simple as overlay the returns from the most recent LIDAR spin over a camera image as a fourth channel and feed it into your models, and most of your depth/spatial predictions will improve dramatically.
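
A minimal sketch of what that kind of naive integration could look like (hypothetical function names; assumes the LiDAR points are already in the camera frame and the 3x3 intrinsics matrix K is known):

    import numpy as np

    def lidar_depth_channel(points_xyz, K, image_hw):
        # Project LiDAR points through the pinhole model and rasterize a sparse
        # depth image. A real pipeline also needs the lidar-to-camera extrinsics
        # and proper z-buffering / occlusion handling.
        h, w = image_hw
        depth = np.zeros((h, w), dtype=np.float32)
        pts = points_xyz[points_xyz[:, 2] > 0.1]   # keep points in front of the camera
        uvw = (K @ pts.T).T
        u = (uvw[:, 0] / uvw[:, 2]).astype(int)
        v = (uvw[:, 1] / uvw[:, 2]).astype(int)
        ok = (u >= 0) & (u < w) & (v >= 0) & (v < h)
        depth[v[ok], u[ok]] = pts[ok, 2]           # last write wins; fine for a sketch
        return depth

    def rgbd_input(image_rgb, points_xyz, K):
        # Stack the sparse LiDAR depth as a fourth channel next to RGB.
        depth = lidar_depth_channel(points_xyz, K, image_rgb.shape[:2])
        return np.concatenate([image_rgb, depth[..., None]], axis=-1)   # H x W x 4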


No, they're not. The linked post even mentions Softbank being a shareholder. It would be good to know what % of Cruise is owned by GM, but as far as I know it hasn't been made public knowledge. Hope someone can correct me on that.


This is true but there’s a bit more to it. Going vision-only is a big move that requires innovation in a lot of areas and changes huge parts of the stack.

Tesla was already heavily relying on vision before going vision-only.


I don't know who owns them (see other discussion). I do know that automakers have aggressively partnered on self-driving. Once this is ready for the mainstream, most automakers will introduce it because they all have rights to it.

I don't know the contracts; it is likely that there is some ordering where the luxury brands can sell it first. It will be rolled down to cheaper brands as needed (depending on either market demand or legal mandates).


The reason camera only won’t happen is because it won’t work. Elon has been fighting reality for years.


How does a human identify obstacles, VRUs [0], and other cars?

0: vulnerable road users, eg pedestrians and bicyclists


> How does a human identify

With the most complex, context-aware, intuitive computer in existence. In addition to eyeballs that are dramatically more capable than any camera Tesla is using.


Not going to comment on the overall lidar vs. pure camera approaches, but I don't think human eyeballs are more capable than a full 360-degree array of cameras looking at everything (including blind spots) at once. For one, human eyeballs cannot see a full 360 degrees or view the environment from outside the car cabin (i.e. unrestricted by pillars and other things that obstruct the field of view from inside the cabin).


Have you ever used night mode on your phone? It looks nothing like the scene you're shooting in terms of colours, but it shows a lot more detail than the human eye can pick up. Same thing with zoom cameras. Also, there are more cameras than eyes, and in better spots.


Have you ever used night mode on your phone while driving at 60 mph at night? It's basically a sophisticated long exposure that won't work for driving.


How does a bird fly?

For human flight, we borrowed the wings, but made them fixed and added propellers, to work around our mechanical incompetence.

I wouldn't be shocked if radar/lidar/sonar/whatever sensors are what it takes to cover our incompetence at matching human brain+vision.

Heck, use multiple "brains" and give each veto power on moving the vehicle. Supposing that stopping doesn't kill you, that would at worst frequently annoy the driver, and sometimes save their life or someone else's.


Well also the largest animal to ever fly only weighed a few hundred lbs so we’re also just limited by the kind of flight we’re trying to do.

At least today we probably could build a craft that flies by flapping its wings but what would be the point?


Forget birds, insects do it, with vision and puny brains.


The human visual system works more like a high resolution event camera than a frame-based camera. Event cameras can deal well with glare and other problems that would otherwise require high dynamic range per frame.


Humans would drive better if they could shoot fricken laser beams out of their eyes.


Why would Elon want to model humans, when humans are mostly responsible for the problems we want to replace humans for?


Can someone explain whether there is a principled reason to not use all the sensors available and choose just cameras?


From public information, it's a matter of price and "appearance." No one who's half serious about understanding how the system works would use cameras alone, but remember: of all the companies serious about putting self-driving cars on the road, Tesla is the only one that designs, manufactures, and sells cars. Incidentally, Tesla is also the only car company pushing camera-only as a "solution", which again, it is not. All other outfits are tech pure plays, so their incentive is more geared towards a working system, because that's the only thing they can sell aside from the dream. If Tesla fails with a camera-only self-driving system, they can still sell cars. Source: used to work in robotics; all my classmates from grad school work or have worked at a self-driving car company.


It sounds very stupid on their part to be so stubborn; at the least, it means an existential risk to the whole company if they don't get their favored solution to work.


Think along the lines of:

1. What is necessary vs sufficient.

2. Processing power.

3. Power consumption.

4. Hardware cost. (even if may have decreased now, the cost of initiating the programme years ago)

5. Training cost / volume of data.

The approach that Tesla is using balances all these things. I can't fully explain why it's so controversial, but I suspect this topic attracts folks from autonomous startups who are using very different approaches, people who repeat what they've heard elsewhere, and the usual anti-Tesla folk.

If you want to know whether Tesla's approach can work, first realise that the sensor suites only help with the 'perception' part of the autonomy problem. Then watch some of the Tesla FSD videos on YouTube and check whether the visualisation seems accurate or not. It's certainly not 100% perfect yet, but it's clear to me that the perception part of the problem is mostly solved. The biggest remaining problems seem behavioural.


As long as Teslas regularly crash into stationary objects because they have no depth sensing system and rely only on camera images, I wouldn't call the perception part of the problem solved.



In the end I think self-driving regulations will require depth checking beyond computer vision, as it can be tricked by any new situation. Depth checks using LiDAR are extremely efficient, up to a football field away, down to the direction someone is facing. RADAR is not as good, but better than video/flat 2D depth detection; it is limited by range, but it does work in weather where LiDAR doesn't and computer vision struggles.

Autopilot, and now FSD, on Teslas has no depth sensing beyond cameras. They removed the RADAR/sonar and currently have zero physical-world depth sensing. Instead of adding LiDAR, Tesla recently [removed RADAR to rely on computer vision alone even more](https://www.cnbc.com/2021/05/25/tesla-ditching-radar-for-aut...).

Self-driving cars need cameras plus physical depth-sensing hardware like LiDAR, or at least RADAR. Tesla now has only cameras and some sensors, none of them for depth; that is insane.

Humans have essentially LiDAR-like quick depth estimation, and hearing gives us RADAR-like input. For autonomous vehicles, depth may actually be MORE important than vision in many scenarios. Humans have inherent depth cues from 3D space, movement, sound, lighting, feel, atmosphere, air pressure, situational awareness, etc. that computer vision working from flattened 2D images will never be able to replicate.

A human can glance at a scene and know how far away things are, not just from vision but from how that view changes along with these other distance cues. Humans easily tell 3D structure from 2D imagery, whereas with a camera everything starts out flat. LiDAR is faster than humans at measuring depth in the actual physical world, rather than inferring it from a flattened image.

With just cameras, no LiDAR OR RADAR, depth can be fooled.

Like this: [TESLA KEEPS “SLAMMING ON THE BRAKES” WHEN IT SEES STOP SIGN ON BILLBOARD](https://futurism.com/the-byte/tesla-slamming-brakes-sees-sto...)

Or like this: [Tesla thinking the Moon is a yellow traffic light, because Teslas have zero depth-sensing equipment now that they removed RADAR and refuse to integrate LiDAR](https://interestingengineering.com/moon-tricks-teslas-full-s...).

LiDAR and humans have instant depth processing and can easily tell the sign is far away; cameras alone cannot.

LiDAR and humans can sense changes in motion; cameras cannot, and even RADAR struggles with dimension (frame-to-frame changes).

LiDAR beats humans at tracking changes in motion, judging depth, and seeing in all directions at once, and it is much faster at all of those things.

[LiDAR vs. RADAR](https://www.fierceelectronics.com/components/lidar-vs-radar)

> Most autonomous vehicle manufacturers including Google, Uber, and Toyota rely heavily on the LiDAR systems to navigate the vehicle. The LiDAR sensors are often used to generate detailed maps of the immediate surroundings such as pedestrians, speed breakers, dividers, and other vehicles. Its ability to create a three-dimensional image is one of the reasons why most automakers are keenly interested in developing this technology with the sole exception of the famous automaker Tesla. Tesla's self-driving cars rely on RADAR technology as the primary sensor.

> High-end LiDAR sensors can identify the details of a few centimeters at more than 100 meters. For example, Waymo's LiDAR system not only detects pedestrians but it can also tell which direction they’re facing. Thus, the autonomous vehicle can accurately predict where the pedestrian will walk. The high-level of accuracy also allows it to see details such as a cyclist waving to let you pass, two football fields away while driving at full speed with incredible accuracy. Waymo has also managed to cut the price of LiDAR sensors by almost 90% in the recent years. A single unit with a price tag of $75,000 a few years ago will now cost just $7,500, making this technology affordable.

> However, this technology also comes with a few distinct disadvantages. The LiDAR system can readily detect objects located in the range of 30 meters to 200 meters. But, when it comes to identifying objects in the vicinity, the system is a big letdown. It works well in all light conditions, but the performance starts to dwindle in the snow, fog, rain, and dusty weather conditions. It also provides a poor optical recognition. That’s why, self-driving car manufacturers such as Google often use LIDAR along with secondary sensors such as cameras and ultrasonic sensors.

> The RADAR system, on the other hand, is relatively less expensive. Cost is one of the reasons why Tesla has chosen this technology over LiDAR. It also works equally well in all weather conditions such as fog, rain, and snow, and dust. However, it is less angularly accurate than LiDAR as it loses the sight of the target vehicle on curves. It may get confused if multiple objects are placed very close to each other. For example, it may consider two small cars in the vicinity as one large vehicle and send wrong proximity signal. Unlike the LiDAR system, RADAR can determine relative traffic speed or the velocity of a moving object accurately using the Doppler frequency shift.

> Though Tesla has been heavily criticized for using RADAR as the primary sensor, it has managed to improve the processing capabilities of its primary sensor allowing it to see through heavy rain, fog, dust, and even a car in front of it. However, besides the primary RADAR sensor, the new Tesla vehicles will also have 8 cameras, 12 ultrasonic sensors, and the new onboard computing system. In other words, both technologies work best when used in combination with cameras and ultrasonic sensors.

LiDAR and depth detection will be needed, no matter how good the pure computer vision solutions get.

The well-known Tesla accidents involved Autopilot running into large trucks with white trailers that blended into the sky; the car rammed into them thinking it was all sky. LiDAR would have been able to tell distance and dimension, which would have prevented those crashes.

[Even the most recent crash, where a Tesla hit an overturned truck, would not have been a problem with LiDAR](https://www.latimes.com/california/story/2021-05-16/tesla-dr...). If you ask me, even sonar, radar, and cameras together are not enough; cameras alone are dangerous.

Eventually I think either Tesla will have to add all of these, or regulations will require LiDAR alongside other tools like sonar/radar (if desired) plus cameras and sensors of all the current types and more. As LiDAR gets cheaper it will capture denser point clouds, a bit like each new Kinect generation, and every iteration will be safer and closer to how humans perceive depth. The point-cloud tools on the iPhone Pro/Max are a good example of how nice it is.

Human distance perception is closer to LiDAR than to RADAR. We can easily tell when something is far in the distance and whether to worry about it. We can easily distinguish the sky from a diesel trailer even when they are the same color. That is the problem with RADAR only: it can be confused by those things because of limited detail and dimension, especially on turns, like the billboard stop sign case. We don't shoot out radio waves or lasers to check distance, but we innately understand distance at a glance, and not from vision alone.

Humans can be fooled about distance, but as we move, dimension and distance become clearer. That is exactly LiDAR's best feature and a trouble spot for RADAR and computer vision, which aren't as good at judging distance while turning or moving. LiDAR was built for that; that is why point clouds are easy to build with it as you move around. LiDAR and humans both learn more as they move or look around, while RADAR can actually get confused by that. LiDAR also has more resolution far away; it can resolve detail well beyond human vision.

In the end I think self-driving cars will use BOTH LiDAR and RADAR, but at least LiDAR in addition to computer vision. They each have pros and cons, but LiDAR is far better at quick distance checks on objects further out; the billboard stop sign would be no issue for LiDAR. It has only recently become economical to use, so it will keep coming down in price, and I predict Tesla will eventually have to adopt it as well.

[Here's an example where RADAR/cameras were jumpy and caused an accident around a Tesla](https://youtu.be/BnbJvUwbewc?t=262): the Tesla itself safely avoids the obstacle, but the traffic around it has to react and an accident results. The Tesla changed lanes and then hit the brakes; the car behind was expecting it to keep going, then crash... dangerous. With LiDAR the detection would not have been as blocky; it would have been more precise and not such a dramatic slowdown.

Until Tesla has LiDAR it will continue to be confused by things like this: [TESLA AUTOPILOT MISTAKES MOON FOR YELLOW TRAFFIC LIGHT](https://futurism.com/the-byte/tesla-autopilot-mistakes-moon-...) and this: [WATCH TESLA’S FULL SELF-DRIVING MODE STEER TOWARD ONCOMING HIGHWAY TRAFFIC](https://futurism.com/the-byte/watch-tesla-self-driving-steer...). [They are also going to want to fix FSD trying to drive toward moving trains](https://twitter.com/TaylorOgan/status/1487080178010542085).

In the end I bet mature self-driving, when it reaches Level 5, will use computer vision, LiDAR, RADAR, and potentially more (data/maps/etc.) to help navigate. [Tesla FSD has been adding more map data, which is basically what they said they didn't need to do](https://twitter.com/WholeMarsBlog/status/1488428565347528707), so LiDAR will have to come along eventually. Proof of them [using maps data](https://twitter.com/IdiocracySpace/status/148843350997893939...) and possibly previous driver data.

LiDAR is extremely fast and accurate out to multiple football fields, even determining which way a pedestrian is facing, with high fidelity. That is much better than a camera or computer vision; LiDAR can take many readings in the time of one complete CV pass.

The best process is to run depth checks across the viewable area, then overlay the results of the computer-vision tick, then check the differences, and when it comes to depth, go with the LiDAR feedback, since video can be wrong or tricked. This is probably roughly how the cars that use LiDAR do it (Waymo/Cruise/etc.); the equipment also sits on the roof for better coverage.
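
For the curious, that fusion step could look roughly like the sketch below (Python; the data shapes and names are made up for illustration, not any particular company's stack): project the LiDAR returns into the camera image, attach a measured range to each vision detection, and prefer the measured range when the two disagree.

```python
import numpy as np

def fuse_depth(detections, lidar_points, K, max_disagreement_m=2.0):
    """Attach a LiDAR-measured range to each camera detection.

    detections:   list of dicts with a 2D bounding box ("box" = x0, y0, x1, y1)
                  and a vision-estimated depth in metres ("est_depth").
    lidar_points: (N, 3) array of points already in the camera frame
                  (x right, y down, z forward).
    K:            3x3 camera intrinsics used to project points into the image.

    For every detection, take the median range of the LiDAR points that
    project inside its box; if that disagrees with the vision estimate by
    more than max_disagreement_m, trust the physical measurement.
    """
    in_front = lidar_points[lidar_points[:, 2] > 0.1]   # ignore points behind the camera
    uvw = (K @ in_front.T).T                             # project into homogeneous pixels
    uv = uvw[:, :2] / uvw[:, 2:3]
    ranges = np.linalg.norm(in_front, axis=1)

    fused = []
    for det in detections:
        x0, y0, x1, y1 = det["box"]
        inside = (uv[:, 0] >= x0) & (uv[:, 0] <= x1) & \
                 (uv[:, 1] >= y0) & (uv[:, 1] <= y1)
        depth = det["est_depth"]
        if inside.any():
            measured = float(np.median(ranges[inside]))
            if abs(measured - depth) > max_disagreement_m:
                depth = measured                          # measured depth wins
        fused.append({**det, "fused_depth": depth})
    return fused
```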

Tesla is going to be massively behind if regulations come down saying self-driving needs depth sensing beyond CV. You can still base a lot of the pipeline on CV, but doing depth solely in CV can be tricked and takes massive amounts of processing, which burns power faster. CV will always trail LiDAR and physical-world sensors in distance, dimension, and depth.

As for thinking a Tesla can drive you without intervention or close supervision: there will eventually be a distance confusion and it won't go well. The name Autopilot was better, as it implied something like a plane's autopilot where you still have to watch it, though planes are much further apart. The name Full Self Driving should be changed immediately, even in beta; it is ripe for lawsuits and problems.

Tesla is trying to brute force self-driving and it will have some scary edge cases, always.


Humans don't actually have any direct depth sensing either. I don't think your ears count because most objects you need to detect while driving aren't really emitting anything your ears can pick up...

So in theory, if a Tesla had cameras instead of eyes, and those cameras were positioned where the driver's eyes would be, and its computer systems were a human brain, and those cameras had the dynamic range, resolution, and auto-focus abilities of the human eye (or better), then the car could drive just like a human. The problem isn't really that humans have direct depth sensing and the car doesn't; the problem is that the vision system of the car is inferior in many ways to the human vision system, and the brain of the car is far inferior to the human brain.

Better sensing can (maybe) make up for that. If a car can sense, accurately and far enough in advance, every object in the scene it might collide with, then avoiding accidents becomes a collision-avoidance problem rather than a general-intelligence problem. In ambiguous situations, just don't crash into anything and don't get too close to other moving objects. The car might not obey traffic rules perfectly, but it probably wouldn't crash ;) If the car occasionally rolls through a red light or a stop sign but does so safely (for itself and the surrounding traffic), that's not a big deal. If the car relies on recognizing a stop sign that's behind a tree or partially obscured, and failing to recognize it leads to a side collision with other traffic or running over pedestrians, that's a rather bigger deal.


Even with a two-camera approach, you are still doing depth estimation with a flat computer-vision algorithm. Also keep in mind that humans can turn their heads and can handle new situations without training. Humans are also better at reading situations in terms of dimension and movement.

LiDAR is a physical-world depth-sensing system. It will always beat simulated depth estimation. It also handles dimension and movement better than computer vision, and it is a 360-degree depth check.

Essentially Tesla is trying to build a LiDAR-like point cloud from camera inputs alone. That may work in many cases, but it will be beaten by LiDAR in every case, because inferred data can never match physically measured data.

> The justification for dropping radar does make sense, says Weinberger, and he adds that the gap between lidar and cameras has narrowed in recent years. Lidar’s big selling point is incredibly accurate depth sensing achieved by bouncing lasers off objects—but vision-based systems can also estimate depth, and their capabilities have improved significantly.

> Weinberger and colleagues made a breakthrough in 2019 by converting camera-based depth estimations into the same kind of 3D point clouds used by lidar, significantly improving accuracy. Karpathy revealed that the company was using such a “pseudo-lidar” technique at the Scaled Machine Learning Conference last year.

> How you estimate depth is important though. One approach compares images from two cameras spaced sufficiently far apart to triangulate the distance to objects. The other is to train AI on huge numbers of images until it learns to pick up depth cues. Weinberger says this is probably the approach Tesla uses because its front facing cameras are too close together for the first technique.

> The benefit of triangulation-based techniques is that measurements are based in physics, much like lidar, says Leaf Jiang, CEO of start-up NODAR, which develops camera-based 3D vision technology based on this approach. Inferring distance is inherently more vulnerable to mistakes in ambiguous situations, he says, for instance, distinguishing an adult at 50 meters from a child at 25 meters. “It tries to figure out distance based on perspective cues or shading cues, or whatnot, and that’s not always reliable,” he says.

> How you sense depth is only part of the problem, though. State-of-the-art machine learning simply recognizes patterns, which means it struggles with novel situations. Unlike a human driver, if it hasn’t encountered a scenario before it has no ability to reason about what to do. “Any AI system has no understanding of what's actually going on,” says Weinberger.

> The logic behind collecting ever more data is that you will capture more of the rare scenarios that could flummox your AI, but there’s a fundamental limit to this approach. “Eventually you have unique cases. And unique cases you can’t train for,” says Weinberger. “The benefits of adding more and more data are diminishing at some point.”

> This is the so-called “long tail problem,” says Marc Pollefeys, a professor at ETH Zurich who has worked on camera-based self-driving, and it presents a major hurdle for going from the kind of driver assistance systems already common in modern cars to truly autonomous vehicles. The underlying technology is similar, he says. But while an automatic braking system designed to augment a driver’s reactions can afford to miss the occasional pedestrian, the margin for error when in complete control of the car is fractions of a percent.

https://spectrum.ieee.org/tesla-places-big-bet-vision-only-s...
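
As a rough illustration of the triangulation point quoted above: two-camera depth boils down to Z = f * B / d (focal length times baseline over disparity), which also shows why a narrow baseline hurts. At long range, a fraction-of-a-pixel matching error becomes a large range error. The numbers below are purely illustrative, not Tesla's or NODAR's actual camera geometry:

```python
def stereo_depth_m(focal_px, baseline_m, disparity_px):
    """Classic two-camera triangulation: Z = f * B / d."""
    return focal_px * baseline_m / disparity_px

f_px = 1000.0                         # focal length in pixels (illustrative)
true_depth = 50.0                     # object 50 m away
for baseline in (0.1, 0.5):           # 10 cm vs 50 cm camera spacing
    d = f_px * baseline / true_depth  # true disparity in pixels
    noisy = stereo_depth_m(f_px, baseline, d - 0.25)  # 1/4-pixel matching error
    print(f"baseline {baseline} m: true {true_depth} m, estimated {noisy:.1f} m")

# Output: the 10 cm baseline turns a quarter-pixel error into ~7 m of range
# error at 50 m, while the 50 cm baseline keeps it near ~1 m. That is the
# advantage of wide-baseline triangulation the quote is describing.
```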


> They and their VC backers are clearly betting on the concept that radars + lidar + imaging will be the ultimate successful solution in full self driving cars, as a completely opposite design and engineering philosophy from Tesla attempting to do "full self driving" with camera sensors and categorical rejection of lidar.

Tesla's approach is a textbook case of premature optimization.


Tesla's approach is a textbook case of differing constraints.

Despite Elon's commentary, the reason Tesla does not use lidar is sensor cost (Waymo/Cruise cars probably carry $250k+ worth of sensors), and the reason Tesla does not use radar is supply chain.


Sensor cost is a lame excuse, especially coming from a company that's one of the leaders of the EV world. Battery prices are decreasing, but sensor prices wouldn't?

Tesla put constraints on themselves because they wanted to sell the product years before it existed, and they were extremely cheap on hardware to optimize their costs.

They optimized their solution before they had even the slightest idea of what's needed to make it work.


Core to Tesla's strategy is massive data collection from consumer-owned cars running beta software (and hardware that the consumer pays for). That model is not compatible with expensive lidars, which, contrary to some other comments in this thread, are still very expensive (just because the entry-level pucks are cheap does not mean full lidar coverage is cheap). There is no way they could push $100k of sensors onto consumers to build out their data collection pipeline. When Tesla was first starting out, affordable lidar did not even exist, so it's hard to call that a lame excuse.

All that said, I'm still pessimistic about Tesla's chances of making camera-only L4 work on any short time horizon. We will see if they pull it off, but it's such a severe disadvantage compared to fully kitted competitors.


I don't think it's a lame excuse at all. Tesla barely managed to get a profitable car on the market given battery costs, and had to bet their company on a strategy of massive investment and reducing costs over a decade+ period.

They are also ahead of every other auto maker in commercially purchasable L2 autopilot.

With that context, it doesn't make sense to add $50k+ to the price of their cars (e.g. doubling the cost of a Model Y) just to get slightly better autopilot performance. Lidar would help them in some cases (e.g. stationary objects), but it's not a panacea. Their strategy makes perfect sense given they want to sell semi-affordable cars to the public.

On the flipside, Cruise & Waymo's strategy of "geofenced L4 at any initial cost" makes perfect sense for their short term robotaxi ambitions.

Maybe in ~10 years these strategies will intersect, but for the time being they are completely different products.


> Tesla barely managed to get a profitable car on the market given battery costs, and had to bet their company on a strategy of massive investment and reducing costs over a decade+ period.

Numbers came out a few days ago showing that Teslas had some of the highest margins in the mainstream auto segment.


Why do we have to discuss Tesla whenever self-driving comes up? Cruise has this technology. Waymo has it. There are a smattering of niche players out there with various levels of self-driving. Tesla emphatically does not have it. They are not in the race.


I agree, but they sure claim to be. Literally marketed as "full self driving".


Musk is a true car salesman.


Actually, I just noticed that today's wording is "full self driving capability".

Sells for 12k USD here, for instance: https://www.tesla.com/modely/design#overview


Where can I buy one?


Yeah, but that's not a tech issue. The few thousand that have the full self driving beta just have the opt-in option to turn on rolling stops. That just has to be removed in their next update.


As you’ll note in the comments on that thread though, FSD has a lot more issues than just that; particularly with stationary objects and at night.


I don't have FSD enabled on my Model 3, but I have the FSD visualization preview. I'd be terrified of FSD at night. During the day I don't see it have any issues registering cars and other obstacles, but during night it barely detects anything.


Seems fairly good at night in 10.9 https://www.youtube.com/watch?v=01QowBvtraE


Tesla will release L4 or L5 self driving this year. Musk said it himself.

Please ignore the fact that this is the 7th (or more?) year in a row he has said this.


I still want a functional Autopilot that doesn't phantom-brake on the freeway, which has gotten worse since they stopped relying on radar. Or a self-park feature that doesn't curb the wheels. I won't even get started on the Summon feature.


"Use Summon to bring your vehicle to you while dealing with a fussy child", unless you ask Legal, in which "pay attention to vehicle at all times. Do not use distracted" is the use case, you mean?


The few times I've ridden in a Tesla using "self driving", the phantom braking was really jarring. It was surprising and concerning how often it slammed on the brakes for no reason.


I’m concerned about the timing of this. Former CEO Dan Ammann, fired in December, was a big champion of the robotaxi business model. It is speculated he was fired because GM CEO Mary Barra disagrees with this strategy and wants them to focus more on integration with existing GM vehicles.

I am afraid that the robotaxi timeline was pulled in so that the rest of the believers inside Cruise can get supporting data to prove it’s a viable model and to make it harder for Mary to change its course. This may come at the expense of safety.


[Disclaimer] I work for cruise.

Safety is the most critical value that our company has, and we live it every day in all of our processes / culture. I won't comment on the executive changes, but I can guarantee that this company (unlike some rivals...) would never compromise safety in any decision.


I once threw myself in front of a Cruise vehicle to test it for safety. (That's a bit of an exaggeration: I saw it coming, sped up, and jumped into an open parking spot on a trajectory that would have taken me in front of it if I hadn't stopped.) It performed admirably. The safety driver didn't see me coming, but the computer did. The car decelerated to the point that it would have been able to stop if I had kept coming. The safety driver couldn't figure out what was happening at first. He was PISSED when he figured it out. Having done that, and having never seen one commit an error or safety infraction, I now have a high degree of trust in the safety of Cruise vehicles.


Can confirm from personal experience in crosswalks with self-driving cars: Cruise is the most timid (read: safest!) of the self-driving cars being tested in SF, IMHO. Cruise will proactively alter its trajectory (such as decelerating) for pedestrians and cyclists at a noticeably earlier threshold than Waymo. This is much more pleasant for everyone surrounding the vehicle, as it clearly signals that you have been recognized as a being needing space.


I live in a part of San Francisco that has a lot of self driving car testing, and I generally agree with this. I'm not saying that I think Cruise's cars are good at driving, but all their errors that I see appear to err on the side of being slow, albeit sometimes to an almost laughable degree. Sometimes it's actually unsafe when you're driving behind them as they'll randomly slam on the brakes for no obvious reason, but I was taught to leave enough following distance and to pay attention to the road. (Still, I bet they get rear-ended a lot.)

That said, I'm probably not signing up for this.


Urban bicyclist here. I don't live in SF, but I'm really curious how current autonomous vehicles behave around cyclists. If you're biking down a one-way street with little room, do most autonomous vehicles just wait behind you? Do they try to pass? What kind of following distance would they give a bicyclist who "dominates" the lane because it's too narrow to let a car pass safely?


I haven't had the exact scenario you describe. In many cases in SF it is you passing them, not the other way around. Cruise gets confused at intersections sometimes, especially in the presence of unexpected cyclists and pedestrians; their error state is to freeze, and then of course it's very easy to get around them. I also notice that if I pass close to the vehicle, it will brake or adjust course to varying degrees depending on street conditions and which tech stack it is.

I have never witnessed a self driving car exhibiting aggressive behavior towards cyclists or pedestrians. I have witnessed many humans driving cars exhibiting aggressive behavior.


As a fellow cyclist who must be vigilant in defending against frequently bad behavioral answers to these questions from human drivers, I’d love to know how any self-driving system approaches them.


Are there narrow streets in America?


Occasionally, yes.

Though it probably depends on your definition of 'narrow'. I'm guessing a lot of Americans think the average SF street is narrow, and it would be a medium or even wide street in some other countries.


Yes, especially in towns and cities that contain street layouts/buildings that pre-date cars.


Unfortunately, during the massive storms in the fall, I witnessed a Cruise car drive right through an intersection where the lights were out. I witnessed a Waymo car make the stop at the same intersection. I live above said intersection and watch it a lot.


Good data point! In reading all of this feedback it does make me think there may be utility for a third-party monitoring service. Think Nielsen but for self driving cars. The intelligence collected from on-road movements could be valuable for both competitors and regulators.


Timid is a good word to describe how I would prefer self-driving vehicles to drive. Consistency and comfort will be more important than minimizing travel times. It's impatient human drivers that are behind a lot of accidents.


This sounds like a shill comment.


I'm a former taxi driver and safe-streets advocate, so if anything my motivations are not sympathetic to their cause; I'm just observing their behavior.


Contrast this to the catastrophic self driving test Uber did on the streets of San Francisco before their cars got thrown out of California.


Hi Nicholas, long time!

Yeah Cruise definitely seems to be moving with safety in mind. Props to the team for this milestone!


I wouldn't read too much into firings at Cruise. Working there was like a company-sized game of "The Weakest Link". Anyone worth their salt gets fired eventually, it's just the way they operate.


> Anyone worth their salt gets fired eventually,

Isn't The Weakest Link about firing the worst performers?


It's a gameshow. You win by being in the last two people and getting more questions right. So, you need to be better than at least one person and then eliminate other competitors. Being "The Strongest Link" just makes you a target for everybody who suspects they can't beat you in that final head-to-head round.

So the game can resolve to two devious but not-so-bright people who've schemed their way to the final two, and then one of them gets lucky with the topics.

If you want to watch something where people are just ludicrously good at general knowledge try "Only Connect" (even the title of the show is a relatively lesser known reference) but note they don't win prizes - why would they, they enjoy quizzes, they're not here for the money anyway.


Based on just a few mistakes and a poll, with a firing happening approximately every 5 minutes.


Super cool comment. Got any proof or evidence or anything?


Just read through the negative reviews on Glassdoor. https://www.glassdoor.com/Reviews/Cruise-Reviews-E977351.htm “Churn” is not an uncommon word.


"Cruise CEO to step down as GM accelerates self-driving car plans"

https://www.engadget.com/cruise-ceo-to-step-down-as-gm-accel...

"GM’s Barra Dismissed Cruise CEO Ammann Over Mission, IPO Timing" https://www.reddit.com/r/wallstreetbets/comments/rk0v7h/gms_...


I fear this also. It seems like the former CEO wanted to go public so the employees would get liquidity but the GM CEO disagreed. This seems like a political effort (perhaps not fully) to show that Cruise can generate a lot of revenue, and then make the case to focus on its self driving taxi business rather than serving GM exclusively.

I guess time will tell. I only hope that the worst thing to happen would be lost money or ego and not lives.


Can we just call out how incredibly awesome this is? We might stumble along the way but this is as big as moving from horse carts to cars. Respect to all of you working on self driving cars!


> Can we just call out how incredibly awesome this is? We might stumble along the way but this is as big as moving from horse carts to cars.

I find it a bit hard to believe that people aren't more concerned about calling out what exact legal framework this operation is being allowed under. Having a car without a driver in it driving around seems like a massive risk for the drivers, pedestrians, and bicyclists sharing the streets of San Francisco; they should be entitled to know why they are test cases for these car-driving programs.

The most I could find was that it seems to be this permit [1], and the link in it states:

> Cruise’s permit is available at www.cpuc.ca.gov/avcissued.

...which is a 404.

It also states:

> More information on the CPUC’s Autonomous Vehicle Passenger Service Pilot Programs is available at www.cpuc.ca.gov/avcpilotinfo.

...which is also a 404.

This does not...inspire confidence, to say the least.

Given that official information about this seems to be a bit thin on the ground, I guess I'll ask the obvious question – if the cars hit and injure someone, is Cruise's CEO going to be held personally liable? The engineers who worked on the tech? The passengers in the car (who are the nearest thing to drivers)?

It's not like this is an unexpected scenario, we have preexisting example of this happening [2]. Or is the plan to sweep real-life consequences under the carpet with euphemisms like "stumble along the way"?

----------------------------------------

[1] https://docs.cpuc.ca.gov/PublishedDocs/Published/G000/M387/K...

[2] https://en.wikipedia.org/wiki/Death_of_Elaine_Herzberg


I don't know the answers. They're definitely worth figuring out, but the legal questions do not take away from the sheer gall to dream and achieve. Figuring out liability is in the domain of law, which is easy compared to building such engineering marvels. I am sure we can figure it out. Self driving car companies take safety VERY VERY seriously not just for potential legal consequences but because a single incident would make consumers lose trust, which is a death knell for such companies.

For folks who think about Cruise's achievement cynically (not directed at the commenter), please consider what your stance would have been back in the day in the following situations: 1) Cars being introduced to challenge horse carts in the late 1800s and early 1900s 2) Countries like India spending on a Space program while having a considerable population below poverty line, rise of SpaceX 3) The rise of the internet in the 90s with its trivial applications like radio, silly websites etc

If your answer is - oh of course I would cheer each of these, but view self driving cars cynically, good for you.


None of the other situations you've mentioned are similar to a self-driving car being unleashed on an unsuspecting dense urban area without clear rules for what happens in the worst case. Those rules are the frameworks that guide decision making. Without clear guidelines, I have a lot of reason to worry that, e.g., a product team that really wants to deliver a feature will ship something without the proper amount of testing.

If you’re letting autonomous multi ton vehicles roam around a dense city, I would expect them to go over and above in playing by the book.


I don't claim that laws don't have catching up to do, or that Cruise shouldn't be liable for any loss of life or property they may cause. We can appreciate their achievement while being cognizant of the need for laws around this.

My second comment was targeted at naysayers and pessimists who find faults and issues everywhere. Yes gaps exist and that's how tech and society plays catch up, but let's not miss the forest for the trees here.


Cynical take: Since human drivers are almost never held accountable for killing pedestrians or cyclists due to inattentive driving or poorly maintained vehicles, it's going to be the same for autonomous vehicles: it's always going to be an unfortunate, unavoidable accident.


Cruise in particular has been around in San Francisco for a while. https://techcrunch.com/2020/12/09/cruise-begins-driverless-t...

> The California DMV, the agency that regulates autonomous vehicle testing in the state, issued Cruise a permit in October that allows the company to test five autonomous vehicles without a driver behind the wheel on specified streets within San Francisco. Cruise has had a permit to test autonomous vehicles with safety drivers behind the wheel since 2015.

I saw their cars a lot while I lived in San Francisco. But back then they had safety drivers and such.

Information about the permit can be found here https://www.dmv.ca.gov/portal/vehicle-industry-services/auto...


Sure I'm concerned about that. I'm also very concerned about how bad human drivers are and how deadly cars are. So long as they beat humans I'm happy to have them. (I'd rather have great transit, but I'll take what I can get)


Yeah, I'm not keen on that euphemism either. I know from experience messing around with ML/AI that the tech is not there yet, so I wager they're going to cheat by having a remote driver. In a dense city that gets a lot of weather.

I'll stay out of SF until this ends in a huge lawsuit.


Cruise has been in SF for many years. It's actually hard to miss their cars if you're around the Sunset area a lot. Or it was when I lived there. I'm not sure if their area of operation has shifted out of there. So I hope you've been out of SF for a while.

That was probably what made me feel like I was in SF more than anything else in SF: seeing their cars drive by with the LiDAR, cameras, and the person in the passenger seat staring at a MacBook Pro.


They’ve always had a person in their cars though. So this is a big difference since they’re saying the vehicles will be fully autonomous (no human supervisor in the vehicle).


I... don't think it's as big a step as that, because back when horse carts and cars were around you could hire a driver. Net effect (not having to drive yourself) is the same. It's an evolutionary step, not a revolution. I don't see it having as dramatic an effect on society or how cities are built as cars did.


It's like the change from horse and cart to a crappy early car that wasn't much better. But it's the start of the AI revolution which will have quite an effect.


The Cruise CEO is here, along with many grand stories of how good/safe their vehicles are.

I can’t help but think this thread may be a marketing ploy. If this were the case, is it allowed on HN?

Edit: In addition, any negative/challenging/sceptical comment is quickly downvoted.


HN's criterion is whether a post is intellectually interesting, or more precisely whether it can support an intellectually interesting discussion. What's nice about that is you can decide it by looking at the article itself, and the thread—you don't need to know nebulous things like the intentions behind the post. I'd say the current post clears the bar fairly easily. Here are a couple of past explanations about this, in case they're of interest:

https://news.ycombinator.com/item?id=20186280

https://news.ycombinator.com/item?id=22871601

As for negative/challenging/sceptical: that depends on the quality of the comment. Thoughtful critique is always welcome. Shallow dismissals and snark are not welcome—not that the target of the criticism always deserves better, but the community deserves better.


Thanks for the response dang. Someone below has brought to my attention that my comment doesn't sit within the guidelines - might it be best for me to edit it with something relevant to the article?

Out of curiosity, and regardless of this case, how do you deal with artificial manipulation of sentiment on HN?

Nudging the views of one of the most active developer communities in the world has value if you were an adversary. Do you see signs of this?

And if the campaign is sustained over a long period, with all comments/threads being within the guidelines, but heavily biased, how does one defend against it?

It would be naive to assume Reddit is the only platform that manipulates user sentiment. And despite the average user here being more capable of critical thinking, we are all still fallible. Can you speak to any of this?


I've seen dang post this link when something like this question is asked: https://hn.algolia.com/?sort=byDate&dateRange=all&type=comme...


Their CEO is here in part because he's been with YC nearly since the beginning (he was the cofounder of Twitch and JustinTV). He's one of the first users of HN, and is definitely a long standing member of the community.

It's a marketing ploy only in the sense that they just launched public signups and wanted to let us know about it.


As someone unaffiliated, I really like that their CEO is here answering questions.


I do like that too! As long as the CEO isn't being shielded by an army of supporters.


Agreed. It is one of the things I find most attractive about HN, that Important People will actually show up in the discussion and participate.


Public, fully autonomous taxis in SF is big news regardless. Hard to imagine it not being extensively discussed here.


No need for a conspiracy. This is hackernews. Not reddit. I doubt they are spending their marketing dollars here.


Tech companies, especially software-heavy tech companies, know the value of good marketing on HN.


You're right. Perhaps they simply announced this post company-wide, so lots of happy employees are here.


By this point you're just repeatedly making insinuations about shillage and astroturfing which is in the guidelines as a thing to not do.


The motivation for my comment was seeing well-meaning comments being downvoted for no apparent reason. On the other end of the spectrum, and perhaps more importantly, someone claiming the car is so safe that they jumped out in front of it without the driver noticing, and it stopped safely. Does that, to you, seem like a responsible thing to post, even assuming it were true?

As for the breaking of rules, dang has commented, and I’ll take responsibility for my actions.


> As for the breaking of rules,

You should just read them when you have a chance since they address the stuff you bring up simply and directly. Don't start meta-threads about votes. Don't accuse fellow users of shillage, brigading, astro-turfing, etc, without evidence. If you suspect abuse, email the mods. It's all in there, along with the rationale.


If it's not, it should be. This is cool stuff; a little shilling on a capitalism-inspired news site should be expected, and I think we're smart enough to sort the BS from the good stuff.


Why is this considered a big stride when Waymo has been doing it for 6 months already? https://blog.waymo.com/2021/08/welcoming-our-first-riders-in...


Because it's not Waymo. It suggests that Waymo has a real competitor and Waymo-level tech has been replicated outside Waymo and their long lead over everyone else is evaporating because they have (to outside appearances) dithered so long. (One wonders if Waymo would even have 'been doing it for 6 months already' in SF if Cruise hadn't been ramping up in SF the past few years...)


From your Waymo link:

> All rides in the program will have an autonomous specialist on board for now

Cruise is the first to do driverless (zero humans in the front seat) testing in SF. Waymo have been doing this for a couple years in Phoenix, but have not yet tested driverless in SF as far as I'm aware.


Yes, though there's the big caveat of only doing it at night (10pm - 6am). Oh and also limited to 25 mph.


Almost exactly 40km/h


Because #2 is still close to #1? I personally am glad to read about this stuff and not just the next "point version of {$SOFTWARE_STACK} released today". I mean that's why we get to choose which article to click on. amiright?


6 months is not really a long time in the timeline of self driving cars which have been in development since the 70s.


...because no one other than Waymo is doing it? This is still a big achievement.


The next thing is a driverless car and a driverless bus “docking” at speed, so you can change vehicle without affecting your trip time.

Economically, one driverless bus will outcompete 50 self-driving cars going in the same direction for 30 minutes.

If this transition is smooth and fast, think what it will do to travel times, car prices, and property prices. How far can you travel with no stops and no traffic jams?


In-transit transfers are a fascinating idea, but there are probably more practical solutions to the problem you're trying to solve.

More realistic I think is finding the right size for vehicles somewhere in between the size of a car and a bus: A bus gets you somewhere at maximum efficiency while sacrificing accuracy (walking to and from bus stops) while a taxi gets you somewhere at maximum accuracy while sacrificing efficiency (roads full of single-passenger vehicles). Something like 6-10-passenger vans are probably the right middle ground to maximize both efficiency and accuracy.


We have public transit right now. All we need to do is to not stigmatize its users.


Yep, and beyond that, there's still a need to remove cars from the world, both to fight climate change and to make cities livable again (see any video from the "Not Just Bikes" YouTube channel for support for this argument). Moving everything to autonomous cars (even, yes, electric autonomous cars) will not solve those issues. Smart urban planning and structural reform are the only things that really stand a chance there.


With the urban sprawl in the US you will frequently need to change vehicles at least once, and that will likely involve a substantial wait and might involve a walk on at least one end. This can quickly turn what could take 15-20 minutes into an hour or more. A docking model like the parent suggests might address this.

I'm also gonna go out there and say what many are afraid to talk about openly. When I used to commute by public transit in SF, the crowd frequently wasn't pleasant as a whole. There were a lot of very nice people going to work, but there were also a lot of folks who caused problems for others: people with mental illnesses, people who seemed not to have access to showers, and folks who were just generally quite rowdy. There were days where I wished there was a separate compartment that's exactly identical but costs $0.50 more. I feel guilty that I wanted that, but I think that's just the reality of life in a country with such an unequal society. I think this is one of the reasons why public transit works better in countries with more wealth redistribution and a stronger social system.


>> All we need to do is to not stigmatize its users.

smh


I live in a city in Europe with excellent public transit (not stigmatized, clean, used by everyone from the pauper to the millionaires, for some routes faster and more convenient than a car). Sometimes it still sucks due to the waiting and the "last mile" problem. A 10-15 min walk from the nearest transit stop to the actual destination, having to change, the waiting times and lack of flexibility (the bus may only be going once every 30 minutes) means I'm still taking an Uber for some trips.

Smaller vehicles going more frequently with fewer stops and more routes would make public transit a lot nicer.


Fun fact: If you try to submit the form with ad-blockers enabled, it errors out saying "Blocked request. Please disable any add blockers" (typo included) :-)


Probably because some ad-blockers block reCAPTCHA.


Good eye


The Cruise tech presentation from last fall seems very relevant:

https://youtu.be/uJWN0K26NxQ

(Highly recommended for anyone interested in self-driving cars!)


Thanks for sharing!


This headline may technically be correct, but it sure does suggest a bit more than what's being offered. "We're opening a sign-up page on our site"?? And (below [1]) Kyle mentions geofences?

How much of the public plans rides in advance, for a limited service area, via the web? I want to hear when these services finally have the capability and capacity to match the experience of Uber and Lyft: get a ride when and wherever you need one.

[1] https://news.ycombinator.com/item?id=30169708


The form asks which mobile device OS you use. Where did you hear that this is only something you must book via the web?


> Today we are opening up our driverless cars in San Francisco to the public

I doubt that means what they hope we think it means. So your employees have been taking your carefully designed ride and now you've let a couple of non-employees ride the course. If it means more than that, especially allowing random rides with random passengers, I'd be very surprised.


Get ready to be surprised. The routes are created by the car on the fly. There is no preset course. The service area is bounded by the current permit, but within those bounds, the car can go anywhere.


I want to be surprised, but after all the hype by FSD boosters, I'm prepared to be disappointed.

I'm skeptical because San Francisco is probably the worst city in North America to do this. I could believe this in San Jose with its wide, flat boulevards, and I'm suspicious that it's not San Jose first. San Francisco is unique in that the city planners decided to ignore the mountains in the middle of their city and laid a grid plan right over them. Most other cities lay roads that gently circle around and up a mountain. Not San Francisco: you go straight up and then straight down at an extreme angle.

The only way this can work is if they've cherry picked the safest streets and geofenced the cars to those routes. That's not what most people imagine about robo taxis. Not glorified light rail without the rails.


I think your criticism in this thread is fully valid. But I wanted to say that even "glorified light rail" is an incredible accomplishment... The challenges are immense for getting failure rates as low as that would require. Behavior prediction, avoidance, pedestrian interaction, etc are all issues for any subset of streets.


I thought the same thing after I had time to think about the light rail comment. I certainly don't want to downplay the technical achievements. I am really excited about the progress. I think myself and others just want these companies to be honest with us, give it to us straight and tell us how we can help create the future, not blast out PR releases meant to suck in the less informed investor. I suppose if that's the only way to get the money to create this then I should learn to accept it.

Edit: two other concerns of mine: 1) SF is a great place to kill a pedestrian with not-completely-proven driving technology, especially a kid, because there are so many of them. And 2) these companies give very little consideration, if any, to the millions of people who drive for a living and might no longer have that means of supporting themselves.


It is not a "course", the employees request to be picked up and dropped off anywhere in the city, just like requesting an Uber/Lyft ride.


Well, that's surprising. I've driven San Francisco as an Uber/Lyft driver and it's a nightmare. I'm really curious how it checks for side traffic when its nose is pointed at the sky at a stop sign on a steep hill.


It might actually do a lot better than you and I. Remember, it's equipped with radar so it can probably see cars the next block over.


Radar doesn't see around corners. At any rate, I'd have to see some evidence that it can handle the worst of The City before I took a ride: the incessant construction, steel plates on the road covering new holes, taped-off lane changes, the resulting traffic jams where merging is happening, and so on. It's a really difficult environment for a human, and I've never yet heard the claim that self-driving is better than humans.


Cruise publishes a lot of footage of their AVs driving around SF for over an hour at a time. In a video from their YouTube channel [1], at around 22:29, you can see exactly how they handle a steep 4-way intersection. You can also see, in the overlaid visualization, which objects their system sees as it performs the maneuver.

The evidence is easily accessible - next time I suggest you just research a little bit first instead of going straight to speculating.

[1] https://youtu.be/HiG__iqgYHM?t=1349


Where was I speculating? I expressed skepticism and said I'd have to see some evidence. You provided some and I'm still not convinced from this. Those are fairly mild hills on wider streets in bright sunshine. But now I know where to get more evidence, so thanks.

Edit: The Cruise CEO kvogt says it's geofenced. That's what I thought.

> In a few years our next generation of low-cost compute and sensing lands in these vehicles and our service area will be large enough that you forget there is even a geofence

https://news.ycombinator.com/item?id=30169708


The car handles a lot of those situations on its own, but can also call remote operators if it's confused, who will instruct it on how to proceed.

As to the earlier question about intersections at steep streets, I would guess they just avoid the most difficult blocks. But they do also have superhuman vision, since they have lidar and cameras on the roof of the car (higher vantage point than a human).


The tech behind it is explained very well here: https://www.youtube.com/watch?v=uJWN0K26NxQ It actually gave me confidence in the overall self-driving approach.


I would love to hear the "underpants planning" of self driving cars iterated, critiqued and discussed.

Tesla's underpants plan appears to be (still?): [1] Base self-driving software on affordable components that already exist in production models. [2] Get lots of high-quality data from an existing fleet using this hardware. [3] Climb the "levels of automation" ladder step by step, straight to consumer. [4] Robotaxis, with Tesla owners lending their cars to Tesla's Uber (Tuber)?

If/when version X of Tesla's self-driving software "solves self driving", in the sense that it can use roads and infrastructure as-is and achieve superhuman safety, Tesla hits a massive jackpot. They'll already have millions of cars on the road. Producing millions of cars takes time and a lot of capital, so this amounts to a few years' head start, which is a big advantage.

Waymo's underpants plan is more of a traditional prototype/proof-of-concept/R&D-lab thing. Google has already poured >$20bn into Waymo without a start date for a business model. But they have every advantage: expensive sensors and hardware, no limits imposed by industrial engineering, manufacturing, marketing, or cost concerns, and limited, hand-picked areas of operation. Waymo is set up to smash performance milestones as quickly as possible.

Waymo, on paper, is better set up to achieve the milestone of a working L5 vehicle. But the road from that milestone to profitable revenue and scale is still pretty long. What if the hardware costs too much? Even if it doesn't, how long would it take to start and scale production to Tesla levels?

George Hotz has his "Android to Tesla's Apple" underpants plan. No vehicle.

It seems to me Cruise (kvogt, the ceo has commented) is doing something similar to Waymo, strategically. There's a lot of interesting stuff to discuss. Personally, I'm interested in takes on how self driving "rolls out." Will the infrastructure (roads, signs, etc.) need to be adapted? How can this all play out?

BTW... kvogt. It would be great to hear you on Lex Fridman.


Amazing job to everyone working at Cruise who is making this happen... absolutely incredible!


I'm not interested in cars I can't own. This idea that we'll turn cars into some kind of service and convert roads into places filled with company-owned vehicles is completely foreign. At that point it'd be cheaper to have self-driving buses and subways. I can only think this viewpoint comes from people who have only ever lived in cities where car ownership is inconvenient given the lack of personal garages. At the moment the only company working towards this idea is Tesla, which is unfortunate. I'd like to see more companies working towards selling self-driving vehicles to customers.


Cars are large, depreciating assets that spend the vast majority of their product lifecycle doing nothing. Conversely, metropolitan areas have to dedicate up to half their available space to empty blacktop storing these unused assets that sit around doing nothing.

The sooner we solve on-demand transportation at scale without requiring a human driver the sooner we can move away from this colossal degree of waste.


All the tech we have consists of large, depreciating assets, yet we keep building it. Humans are a large depreciating asset to the elite; let's hope they don't get tired of us anytime soon and replace us with AI and robotic servants...


That's an Ayn Rand novel, not real life; if you somehow had all the money in the country and fired everyone (i.e. stopped trading with them) then by definition you're not rich because the ability to trade with people is what being rich is.

Instead you've created two economies, one with just you and one with all of them, and the other one's better because it has more people.


Automation breeds new job creation and QoL improvements. Coming closer to a post-scarcity society with little waste would be considered a boon, not dystopia. The capitalist idea that your worth is tied to how many hours you work is fading; the newer generations are already calling for work reform.


By this logic you should promote trailer parks.


Trailer parks are peanuts against really dense buildings like apartments/condos with dedicated park/fitness facilities. Stuff I saw in HK is amazing.


Buses and subways cannot operate door to door. That is a fundamental service restriction which will always create a need for personal transit. People with accessibility needs, safety concerns or timing restrictions create a legitimate and important market for door-to-door personal transportation.

Buses are also inherently less efficient outside of peak operating hours. Why should we operate a fleet of million-dollar, high energy consumption buses looping practically empty late at night? A fleet of cars would far better suit the transportation demands for off-peak small scale transport. Cars as a service are also easy for a local government to subsidize so that lower income riders can access the benefits.


You should visit Tokyo when they let you back in. If your city lets you get a car up to every door, it's not dense enough.


Do Tokyoites enjoy being packed like sardines, other than the fact that they have no choice about it?


You're confusing density with overcrowding. They don't have roommates, making it strictly nicer than SF.

Also, Tokyo is the best city in the world, so yes. The rural areas are depopulating because nobody wants to live there more than a lack of work. There are suburbs for families too, but they're still transit-oriented.

(Not that Japan is against cars. They just don't do all their transportation with them.)


If taxicabs can function in the evenings during low-demand hours, then we can replace those with these robocabs. In Tokyo that's the case as well, for after the buses and trains stop running.

In terms of subsidization, we've had enough problems in the US from the government picking winners, such as the interstate highway system to the detriment of railways. They shouldn't subsidize anything and should let the market handle things. There will be a small demand for robocabs, but most people will still want to own a car, if they want a car at all.


City dweller, raised in a village in the east of England. I got my license 3 months after my 17th and already had a car - I've had a car ever since.

I'd give it up in a heartbeat if hiring a car was easier and more economical, but the reality is that for visiting family over weekends, renting would cost about the same as owning the car full time. The convenience of then having the car ready to go whenever you want it wins out.

I sincerely hope that self driving cars bring in the possibility of renting it for a couple of hours as it drives us to Kent to visit the in laws then drives itself back to London, or simply re-clusters itself into the local network ready to take us back on Sunday evening.


I think that depends on your taste in cars. If you trade in your car for a new one every 3 years, then shared cars will save you money. However, if you drive a 23-year-old car (i.e. the car I drive), then it is fully depreciated, so paying for a nice new car to drive you around has to cost more.

Of course it also depends on how much you drive. If you only drive a few times per month that is very different from driving hundreds of miles per day.

I've concluded that the majority of suburban dwellers will own their own car even where shared cars are possible to get. Because of the amount of driving they do a shared car won't save them much if anything, and by owning a car they can leave "stuff" in the trunk.


In the suburban US, automated taxis make way more sense, especially for folks like my parents who don't drive. Taking an Uber to the supermarket and back? Easy. Taking a bus? Much more difficult (primarily because we don't have a good bus or tram culture here). Same with e.g. visiting friends/family.

Limited capability folks benefit a lot from easy to access taxi service. It won't replace folks who want to own a car or need to (e.g. commuting when public transit isn't good enough).


Even cheaper: don't have computer driven transit, but human driven transit. The current money sunk on the automated driving tech startups could pay drivers for years to come.

What I would like to see is companies and people working towards eliminating as many vehicles from the road as possible. It is unsustainable, if we want a habitable planet, to have a dependency on cars.

This means we have to move back to a denser environment (for the US, think pre-WW2 suburbs vs the current car-dependent sprawl). Either that, or boil the planet. We don't have any other way right now, and as such, Tesla is working towards the idea of heating the planet. Musk is very adamant about that, with his constant efforts to undermine public transit through his other companies.


On any given timeline, dedicated human drivers will always have larger variable costs and waste than dedicated computational drivers, on top of being cost-prohibitive to people who can't afford to pay for human driven rides.

You could make the same argument for ditching email and calendar technology in favor of everyone having human secretaries typing inter-office memos. It's absurd now, yeah, but that's because we live with the benefits.


> Even cheaper: don't have computer driven transit, but human driven transit

Cheaper on what time frame? Obviously the upfront R&D costs start out as more expensive, but the marginal cost goes toward 0 while human cost is flat/increases.

Also cheaper isn't the only axis, safety is a big reason to invest in not having humans drive.


Paying humans has better utility, for society as a whole, than cheap machines.


If that was a universal good why do we keep inventing machines to replace labour? Would it be better to hire servants to wash dishes and fetch water to scrub clothing in the tub, or is this labour that we can largely get machines to do, and free humans from?


Paying humans to do things that machines do better and more safely definitely does not have better utility.


I always assumed that Cruise technology is behind self-driving functionality like "Super Cruise" (https://www.chevrolet.com/electric/super-cruise), if only because of the name and the fact that GM owns Cruise. Can anyone clarify?


The naming is confusing, but the "super cruise" feature of GM vehicles predates the founding of Cruise. I have no knowledge of whether there has been cross-pollination between the two teams since GM acquired Cruise.


Who said all cars would be replaced? There is PLENTY room out there for company cars and personally owned cars.


What is the benefit of a taxi being driverless, for the passenger? Is it cheaper?


At least in the UK, one of the most prolific serial rapists was a licensed taxi driver (not even an Uber driver (who get a lot of stick about not being as safe as "real" taxis) - but a licensed and supposedly-vetted London black taxi driver). https://en.m.wikipedia.org/wiki/John_Worboys

So I guess safety is a major plus based on that.


Totally illogical supposition. I'll bet a lot more taxi drivers have rescued people in trouble than assaulted them; a young female relative of mine was rescued by a passing taxi driver from a dicey situation outside Shepherd's Bush tube station. The chances of thugs standing in front of future driverless vehicles like highway robbers to stop them are very high, especially if the predators can see their prey has something of value to them.


We can do anecdotes back and forth, but many people would absolutely love “Uber but there’s not a stranger in the front seat.”


Perhaps people would "love" that but I have a feeling that the continued retreat from all forms of socialization due to technology is not a good thing by and large for society. I would guess that most people who think they desire this wouldn't even think they desire it in terms of safety but instead in terms of avoiding distraction or awkwardness.


Technology lets me socialize an order of magnitude more than I'd otherwise. We as a species have never been more connected. If it also gives me respite from having to engage with someone I'd rather not in a car, all the better.

Honestly, we could all use a bit more of a break from each other, in my opinion.


Point well taken, I meant physical / proximal socialization. I would contend that most forms of socialization over the internet are lower quality than a candid conversation with a stranger in a car ride.


They will love it until they are leaving the bar and the previous bar patron has left them a big wet, bile-smelling present all over the seat. At least when this sort of thing happens on a bus there are other seats you could use.


Reject the ride due to cleanliness and request another one.


And who knows how long that will take? In some places it takes long enough just to request a single Uber trip, much less two in a row. Imagine how livid you'd get when, a half hour after you first intended to leave the bar, the second self-driving car arrives, and it too is soiled.


Hey, with a huge fleet of self driving cars that aren’t taking Saturday night off and only have mostly bar traffic to consider, I think you’ll manage.

And the robot won’t take offense.


'Your ride request has been refused due to your social credit score, spare the air day and your recent equity transgressions. Here is a 5% off coupon for sensible walking shoes'.

It's not the robot that might take offense, it's the folks in the panopticon and their databases.


The scenario you described already happens in San Francisco and the Bay Area on a regular basis to regular human drivers. There are plenty of videos of peoples' back window being smashed while they're sitting in traffic and their expensive goods are taken. The solution to this is not having human drivers, it's cracking down on property crime and treating it as equally as important to violent crime.


Exactly, I'm in the bay area also. Not hard to see next generation highway robbers standing in the path of self driving vehicles, which will stop to avoid hurting the 'pedestrians' in the road


I’ve always heard the opposite, that real taxis are significantly more dangerous.

Ubers are sent directly to you, and everyone’s identity has been verified.

Compare that to flagging down a random taxi, where you have no idea who the person is before you get in the car.


How is the taxi driver's identity also not verified? Are you assuming there could be a random person who isn't employed by a taxi company masquerading as a valid cab? Since you would need a genuine-looking physical cab to pull that off, it seems a lot less realistic than simply pulling up to a bar in any car, telling a drunk girl you are her Uber, and driving off before she thinks to check the picture on the app, which is something that does actually happen.


Likely be able to get a taxi anywhere. No discrimination. No 20/25/30% tip suggested. No taking advantage of passengers, no weird routes to tack on extra fees.


> ...no weird routes to tack on extra fees.

At first. They will optimize that later.


There's no need for that. The cost of a rideshare trip overwhelmingly goes to driver labor rather than fuel or maintenance, especially in big cities. Uber/Lyft take another 25-40% of the gross fare the passenger pays. Cruise will be capturing all of that and only needs to pass a fraction on to the consumer to get an unassailable price advantage.
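A rough back-of-envelope, where every number is an assumption for illustration rather than a published Uber or Cruise figure:

    # Illustrative fare split for a human-driven rideshare trip (all numbers assumed)
    fare = 20.00                   # what the passenger pays
    platform_cut = 0.30 * fare     # assumed ~30% Uber/Lyft take
    vehicle_cost = 3.50            # assumed fuel + maintenance + depreciation for the trip
    driver_labor = fare - platform_cut - vehicle_cost
    print(driver_labor)            # ~10.50 -- the slice a driverless operator gets to keep

Even passing only part of that labor slice back to the rider undercuts the human-driven fare while leaving a healthy margin.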


This is only true now, until such rides become the norm. Then the margins will shrink and companies will optimize every way they can.


Maybe you haven't ridden a cab in the last 10 years, but like any other major industry these days, they do have an app you can use. You can call one up immediately like an Uber or schedule it in advance like a traditional cab, it gives you a fixed price up front and a set route, and the tip can be included in the app however you like. In my experience, taking a cab like this is better than taking an Uber to the airport, since it's the same price any time of day rather than in flux between $50 and infinity.


You clearly haven't taken a cab lately in a place without Uber or the like.


Fine in the UK. I assume yellow cabs in SF are still just as scammy as ever, since that's basically the reason Uber was started.


Are you arguing that software in general never discriminates or prioritizes the company over users; or that there's something special about self driving tech that will make it far more utopian than other products in the wild?


I would much rather have my own private car than one being driven by someone else. It'll be interesting to see the level of cleanliness, though, as most drivers tend to take care of their car because it's their car. If someone throws up in one of these cars, does it know, or does it just show up to the next pickup?


That raises another question for me: I thought Uber and Lyft[0] had COVID-related cleaning protocols that require their drivers to disinfect some surfaces of the car after each ride. How can a driverless car do that?

[0] https://www.lyft.com/driver/clean


Sounds like getting rid of some hygiene theater is another benefit of self-driving cars then.

Covid transmission via surfaces is basically non-existent, as far as we can tell.


Do they? I've ridden plenty of ubers in the past year and haven't noticed any such disinfecting.


Why would you? Presumably they don't do it while you're there.


And you still think they do it?


You would smell the alcohol


Easy to add cameras and have someone remotely decide to send the car to get cleaned.


I'm sure that any of these services are going to depend on customers reporting when a car shows up in an unacceptable condition.


That sounds like a great way to burn reputation among your customers. "Why would I take a self driving car from the bar, the last one was full of puke?"


Cheaper, you don't have to tip (in the USA, where that is expected), availability (the driver doesn't need to sleep), and safety (you don't have to worry about your driver trying to cheat or rob you).

People aren't getting cheaper. Indeed, this pandemic is showing us just how fragile the labor market is, and automation that was too expensive to consider before (like fry robots at CaliBurger) is now a very reasonable upgrade. Uber was great when it cost $40 from the airport, but not $200 now that the driver is considered a full-time employee with full benefits.


Where are Uber drivers considered full time employees?


Didn't that almost happen in California?


The UK, for a start.


Eventually I think it will reach a point where it’s safer too. The car doesn’t get tired, meanwhile I’ve had plenty of taxi/uber rides where the driver has been driving long, long hours.


> What is the benefit of a taxi being driverless, for the passenger?

Not risking kidnapping, sexual assault, etc.—or even just “this person won't shut up”—from the driver.

> Is it cheaper?

Eventually, though perhaps not initially.


If you can reduce the cost of a car trip to fuel + maintenance + depreciation, you are providing a very compelling alternative to private car ownership. It will change everything.
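For a sense of scale, here is a sketch with ballpark per-mile numbers; all three figures are assumptions for illustration, not Cruise data:

    # Per-mile cost comparison (all figures are rough assumptions)
    owned_car_per_mile = 0.65       # assumed all-in cost of private ownership (fuel, insurance, depreciation)
    rideshare_per_mile = 2.00       # assumed typical big-city rideshare fare
    robotaxi_cost_per_mile = 0.80   # assumed fuel + maintenance + depreciation + fleet overhead
    # Anywhere a robotaxi can price between roughly 0.80 and 2.00 per mile, it beats
    # today's rideshare fares; for low-mileage households it can beat ownership too.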


Nobody owns a car because it's cheaper than other forms of transit. And if you want to be chauffeured around by an AI instead of taking a bus, you will pay for it.


People own cars for convenience. In the US public transit is impossible. A self driving service can be as convenient or even more convenient compared to owning a car. (No registration, oil change, insurance, fueling, parking, maintenance chores) I’m a car guy and I see the appeal.


Indeed. Exactly how this all will play out is unclear, but if it works the impact on land use will be substantial.


> In the US public transit is impossible

No sarcasm. Why is it impossible to have public transit like every other country in the world, but possible to pour tons of money into driverless car development?


The suburban land use pattern doesn't admit good transit because not enough people live close to the stations.

Also, we have bad combinations of when to make things public/private - the US actually has much more public services than many other countries (other places do have privatized mail, transit, etc) but then doesn't fund them. If this was Japan all the good stores would be in the train station and would fund the operations.


Thank you, I really appreciate your answer.


You will pay more than the bus, yes.. but perhaps less than the cost of owning and maintaining your own car.


The benefit, one day, is if you're a criminal whose face is detected, the car doors will lock, and you'll be driven directly to law enforcement who will be ready and waiting to apprehend you.


Although possible.

I suspect and hope there will be some pushback. Imagine a counterpoint in either a bug or a malicious attack where an innocent person is locked in and then driven into a lake or something.

I don't think autonomous cars should be able to lock people in. The car can report to police silently; that's fine.


>> The car can report to police silently, that's fine.

No, it should not do even that. You are making an assumption here that police/government always have good intentions.

What society needs is a kind of power balance: law enforcement being able to catch 98% of criminals is a noble goal, but pushing that number up to 100% is not possible in a democratic society; it requires totalitarian control. That's why we need to make a choice here and oppose surveillance, even if it seems well intentioned.


This is basically the point I was trying to provoke. We forfeit a good bit of our privacy and autonomy with these technologies. Rather disgusting how convenience always wins.


Seems like it would be a lot less dramatic to just trigger a head-on collision on the highway, which you could do today if you broke into the Tesla FSD system.


There's a story to that account: https://www.vice.com/en/article/xygzvz/one-star


Wow...spot on! That was a fun read. I don’t think the scenario is that outlandish.


That won't ever happen. If it did, it would disproportionately affect people of color, be declared a racist "feature," and be banned.


Yeah, you take the biggest cost of a taxi out of the equation.


The $250,000 SF taxi medallions?


Assuming it is a $250k one-time purchase, it seems cheaper than the annual pay for a person to drive around SF, plus the overhead of managing that person.


You still have overhead to manage your fleet of self-driving cars. They aren't self-repairing, self-fueling, or self-cleaning yet.


Cheaper than having individuals manage their own cars.


The Uber trick is that people don't understand how much it costs to own a car, so riders get to extract money from overinvested drivers.


Let's assume you finance it with a 30-year loan at 6%. That is about $18k a year. What is labor? $50k a year?
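Checking that figure with the standard amortization formula, as a sketch assuming a $250k principal:

    # Annual payment on a $250k loan at 6% over 30 years
    principal, rate, years = 250_000, 0.06, 30
    annual_payment = principal * rate / (1 - (1 + rate) ** -years)
    print(round(annual_payment))   # ~18162, i.e. roughly the $18k/year quoted above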


It will likely be cheaper (maybe not initially), but more restrictive in pickup/drop-off and availability (ETA).


Don't you mean less restrictive? A human might have preferences against areas that a computer might not. Further if it's cheaper and less complex to operate (not dealing with employees is a significant decrease in complexity) I would expect more car owners to enter the market and provide services.


The restrictions on pickup/dropoff locations (if any) might have more to do with road conditions. Cruise might not be comfortable with their cars stopping in certain places.


More restrictive because without a human driver it has to have absolute certainty it will stop in a safe place (unlike a human driver who may make the right or wrong call to just double park and let you hop out).

Think of self-driving cars as likely to use their own version of "bus stops" but the route and flexibility would be greater than that of a standard bus


It will be more restrictive because they haven't mapped all streets.


Labor is the most expensive part of a taxi.


1) cost (majority of the uber ride cost is paying the human)

2) social anxiety / norms

3) experience tuning / consistency


For the passenger – unless the cars drive significantly better there's really no difference. Rides may get cheaper but the overall market will decide that.

The benefits to the company who runs the service is, of course, huge.


No forced chitchat, no chance I’ll be rated badly, (eventually) safer than many of the drivers I’ve had.

I’d use it daily if it were an appliance rather than an interaction. Right now I have a car.


This is why I like cabs. Customers should not have a rating.


The Omicron variant of COVID-19 is the most infectious disease known to mankind. In mid-December, with caseloads similar to now, the asymptomatic positivity rate was around 15%, so you can roughly put the chances of your driver being infected with COVID-19 at around 15%. So, right now, I'd feel safer in a driverless cab. (Of course, that calculus changes as the number of cases changes, which is right now happening rapidly.)


>The Omicron variant of COVID-19 is the most infectious disease known to mankind.

Based on what? Certainly not R0.


Judged by Rt together with the serial interval, it is, so "fastest and most widely spreading contagious pathogen in modern history" is probably correct.


R₀ for Omicron is 8-10, but the generation time is 5 days instead of 16 days for measles.
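A rough way to turn those numbers into calendar-time spread (a sketch: the measles R0 of ~15 is my assumption, and the generation times are simply the figures quoted above):

    import math

    # Rough doubling time during exponential growth: Td = Tg * ln(2) / ln(R0)
    def doubling_days(r0, generation_days):
        return generation_days * math.log(2) / math.log(r0)

    print(round(doubling_days(10, 5), 1))    # Omicron: ~1.5 days (R0 ~10 assumed)
    print(round(doubling_days(15, 16), 1))   # measles: ~4.1 days (R0 ~15 assumed)
    # A similar or lower R0 per generation, but a much shorter generation time,
    # is why Omicron spreads faster in calendar time.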


Omicron is the "live vaccine" variant of COVID-19. I've had it and it was no problem.


Cheaper and greater availability at all hours I'd imagine, as the driver doesn't need to take shifts.


Yes. Instead of the variable cost of a human, it's just the variable cost of maintenance + gas.


No. A billion dollars wasn't invested to socialize transportation. Ultimately it may cost more, if there is value in not having to interact with the smell of a fleshbag.


Disappointing how much ragging on Tesla people are doing here. Tesla is far ahead, and I can't believe people are falling for this promotional video and assuming Cruise is farther along.

Tesla's vehicles are in consumers' hands, and many, many thousands of drivers use their system every day. Also, the VERY first comment here mentions a misleading article, "Tesla to recall vehicles". This is misleading since Tesla probably didn't physically recall even a single vehicle; they probably all got an over-the-air update within a couple of days of the problem being spotted. When a problem comes up, Tesla is quick to send out an update, and the news loves to write misleading articles every damn time because bad news about Tesla sells; there are so many wrong articles out there it's insane.

So disappointing that even the people I assumed would be smarter (Hacker News) are falling for this shit. Also, this debate of cameras vs lidar is so old and outdated, because LIDAR IS NOT GOD DAMN MAGICAL; it has its problems too. Tesla has also dropped radar because they don't see it being beneficial enough.

In reality, talking about lidar vs cameras is not that important, especially in the long run. How advanced the underlying tech is matters most right now, and over time the amount of compute power in each car will be the most important factor.


I'm not sure what there is to "fall for". Cruise is offering truly driverless rides (in a limited domain), and Tesla does not have any kind of truly driverless product (although they still call it Full Self Driving).


Any information on boundaries within city limits?


I remember when Cruise started as a post here on HN asking for people to submit their resumes.

Congrats to them!


I notice that they are referred to as "driverless" rather than "self-driving".

Are these actually self-driving or are they remote controlled by a person?


I saw a presentation at a conference that pretty strongly suggested they will have a remote driver take over at certain points (the car will assess it can’t continue and remote driver takes over)


A remote driver, or remote navigator/coach?

The former is incredibly dangerous, the latter is what Waymo is doing.


Second one. Someone remote can help label and plan until the car has enough confidence to proceed.


Right, which isn't a remote driver.


This is a bit surprising to me. I haven't seen many Cruise cars on SF roads in the last few months. I've seen a hell of a lot of Waymo, on the other hand.

Separate thought:

> I’m still surprised I can even write those words — this moment really snuck up on me.

This seems to be poorly worded PR considering how much general worry there is over the safety of self-driving cars. If I were writing it, I would have phrased it along the lines of "I've been waiting months and months for this. We've been ready for months but we understandably had to triple-check all our compliance etc."


I see Cruise a ton around downtown, but Waymo and Zoox seem to be test driving through a larger part of the city.

Sometimes I will see a fleet of Waymo cars in quick succession, going through seemingly the same route.


I still see a lot of Cruise cars training in the mission.


They have a nearby garage next to the Costco in SoMa. Zoox also has a small garage across the street next to the SPCA building.


I still want to know who goes to prison when a pedestrian or cyclist is killed.

Because no one is arguing that no one will be killed; the argument is only that "human drivers kill people too" or that "machines will kill fewer, eventually".

Will it be the passenger? The coder for bad code? The person who assembled the hardware for improperly tested malfunctioning sensors? The person who made the malfunctioning sensors?

Who? Because someone is responsible. You can't just spread the blame around and the corporation just pays a fine and shrugs after someone's life is ended.


Unless the driver is drunk, has malice, or willfully breaking a bunch of other rules, I don’t think they go to prison now.


If someone is playing with their phone and they kill someone, they are doing prison time.

No "oops oh well, I'll do better next time, sorry you're dead but it's your fault for being there". 2+ ton machine, willful negligence.

There's even a connection log with handy timestamps proving the phone was in use.


Virtually all auto accidents in which a third-party fatality occurs result in criminal liability.

It's extremely rare for a third-party fatality in an auto accident to be caused by something other than criminal negligence or intent.


Most auto accidents that cause death don't involve third parties. And there are shockingly a lot of people killed in auto accidents, yet only a very small percentage of those result in prison time.


Exactly why I specified. Multi-party fatal accidents are rare, but they usually involve drunk driving or gross negligence/recklessness


Right, but the original comment didn't specify that, so sounded a bit weird to me.


I don't think it will result in jail time for anyone, unless there is gross negligence and/or reckless disregard for safety by those who worked on it. Even so, given the benefits associated with autonomous vehicles, unless a court can find a deliberate and malicious attempt to endanger people, I doubt anyone would see jail time.

There are plenty of similar precedents for this in the airline industry. Literally hundreds of planes have fallen out of the sky and thousands of people have perished due to design flaws or improper maintenance, be it using the wrong parts, bad repairs, failing to inspect, etc.

Yet individuals working at plane makers, or the mechanics involved, rarely get jail time. Sure, companies are held responsible and can pay restitution, but jail time is exceedingly rare.


When I was in Vegas last I called for a "driverless car" via Lyft but instead of an empty car a male driver with a second man riding shotgun showed up to pick me up. The entire scenario made me feel sketched out so I refused to take the ride because I've seen so many news reports of people getting assaulted by drivers. I and many of my friends would feel more comfortable with the option of a driverless ride so I definitely understand the appeal. I'm excited to hear how this goes.


I'll believe it when it actually happens. I signed up for Waymo in SF last year and still haven't gotten any updates for that.


I have 3 thoughts: (1) Amazing to see progress; let's make self-driving cars a reality! (2) Oh, it's another limited private beta like Waymo has been doing. I'd like to actually use these, and this doesn't seem that much closer. (3) What the heck does Cruise's cap table look like anymore? Acquired by GM, then almost immediately spun back out?


Parent organization is still General Motors, they just operate independently


There must be a little more to this, since clearly they are not a wholly owned subsidiary.


According to the most recent SEC filing I can find with an explicit ownership percentage, "As of June 30, 2019, external investors held 17.1% of the fully diluted equity in GM Cruise Holdings." [1] Presumably that means GM as the parent company owned 82.9%.

[1] https://investor.gm.com/node/19751/html


Signed up. Hopefully they get back to me faster than Waymo’s response time (not hard to get better than never)


Amazing considering that he said he wasn’t comfortable putting his own kid in them just a few months ago.


He didn't say that. He said they weren't ready to yet. I think he meant legally because his son is under 18 so their insurance wouldn't cover it and also it was employees only at the time so again their insurance wouldn't cover it.

I didn't get any sense he meant that he didn't trust the technology (otherwise he wouldn't be in it either).


People routinely take risks themselves they wouldn't impose on their kids.

It's much harder to believe that he didn't bring his son because of some weird insurance exclusion (presumably 1. they were insured for riders under 18 outside the vehicle and 2. he would have explicitly called out that it was a legal formality) than that this was extremely new technology which still had a reasonable chance of getting into a serious accident.


Got a source for that? I searched around and couldn't find anything with him saying that.


https://youtu.be/dmvZBiWYkFQ?t=242

“I want to bring my little son along on this ride, but obviously that’s not where we’re at today.”

edit: Fixed link


Not sure that means what you think it means. He might not have had regulatory or insurance permission to take non-employees in the car. Or it was just company policy to not allow family members (which would seem to be prudent before getting approval for general use), and he's being good and not using his position as CEO to get around the rules.


Either you posted the wrong link, or your link was taken down within the last 20 minutes...

Lots of Cruise employees in this thread.


https://youtu.be/dmvZBiWYkFQ?t=242

I had an extra slash in the link, I corrected it in the original and above.


lol. The conspiracies! It was a bad URL bro.


Yep, it was definitely taken down. I watched it only half an hour ago.


It wasn't though


18+ is reasonable since it's still a dangerous experiment.

I actually would love an automated version of this that goes at 200 or 300 miles an hour, the ultimate thrill ride. Have it run on something like the Autobahn. While a human can't safely drive at that speed, a computer conceivably could.

It would be like sky diving but on land


Why not go on a train?


I love it when the self-driving threads always circle back to reinventing the concept of public transportation as if it doesn't already exist. Want to know what the best hyperloop system in the world is called? The subway, invented over 150 years ago.


Can't sit on the front seat :)


The fastest passenger cars today can only make it to 200mph. Even a computer with perfect vision and 0ms reaction time would not be able to stop in time for a blocked road at that speed unless visibility/road were perfect.

Maybe on roads in the Mojave Desert/South Dakota I90/etc?
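A quick physics sketch of why, assuming ideal dry-pavement braking (friction coefficient ~0.9) and zero reaction time:

    # Minimum braking distance: d = v^2 / (2 * mu * g)
    def braking_distance_m(speed_mph, mu=0.9, g=9.81):
        v = speed_mph * 0.44704          # mph -> m/s
        return v ** 2 / (2 * mu * g)

    print(round(braking_distance_m(200)))   # ~453 m: nearly half a kilometer of clear road needed
    print(round(braking_distance_m(300)))   # ~1019 m at 300 mph

Sight lines that long only exist on very straight, flat, empty roads.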


I think I saw one of these at night with no humans inside. Drivers or passengers. Is that legal?


Yes, Cruise has obtained the relevant permits from the CA DMV. The only permit they're currently missing is the one that lets them charge customers for rides.


That is interesting, because presumably self-driving cars, like all cars, are more dangerous to pedestrians than (that or other) vehicle occupants.


>Is that legal?

It is if you pay the right people enough money. Welcome to the future; where the mass dregs of humanity are now a liability to our elite cybernetic enhanced 1% overlords.


So is this just SF? How many people can actually sign up and use it?

Wow, that's a nice amount of funding. Is this going to be their last funding round before they IPO? Is there a VC who can give more funding? Are employees gonna be rich, or is this gonna be a WeWork scenario?


Cruise was acquired by GM in 2016; unless they are spun off there won't be an IPO. https://en.wikipedia.org/wiki/Cruise_(autonomous_vehicle)


They were spun off when Softbank invested. There's SEC filings on it.


What is the level of oversight by remote humans? Is it one teleoperator to one car?


What a difficult city to drive in, too. There are many "unexpected" pedestrians, bicyclists, skateboarders, and people staggering about in every direction. I avoid driving in SF as much as possible.


I would love to see these in more rural areas without great transit options. I would love to be able to take a driverless car to and from a bar when there isn't population density to support many taxis.


Is there any solution yet that doesn’t require the driver to be sober in order to intervene at any given moment?


The vehicle in this very article! :)


About one month ago, the CEO of Cruise (a leader in the self-driving car industry) was fired by the CEO of GM (a leader in the EV industry). One month later, the cars start driving themselves in one of the most challenging cities. What a great achievement. Then I took a look at the video, and I think Cruise is still using ten-year-old Google self-driving-car technology. When I saw three lidars there, I figured millions of lines of code had been added, as always. More hardware and more code means it is far from production. NIO, XPEV, TUSIMPLE, and HUAWEI use lidar, but not like this.


At last! Uber and Lyft prices have been sky-rocketing in the city. Great to see a new option on the horizon


Asking people who know. Is this BS? Are people on public roads gonna be dying from this soon?

Also how is this legal?


Wait, is Steve Huffman one of the "test" riders in the promo video at the bottom?


Waymo's public beta included a remote driver. Does this?


Neither company has a remote driver. I think the role you are referring to is an operator, who Waymo makes clear never directly drives the car.


Since they’re launching in SF will the autonomous Cruise vehicles be available to queue as get away cars for coordinated shoplifting sprees and/or parked car looting runs?


Echoes of the first Uber ride ever. Exciting.


An outside observer reading Hacker News comment threads might suspect HN to be a luddite community, pouncing on new tech with an assailment of negativity.


The idea that human drivers can create cars that are somehow better drivers than them...has got to be some kind of bunk.


Robots are better at plenty of things than humans.


Specialized things, sure. Agree. But driving has to be almost AGI: an almost entirely open and unrestrained field of endeavor.


Not really. It needs to be more flexible than other current robots, but definitely not AGI.


Really? You think full autonomous that's better than human drivers is not AGI? Perhaps time will tell if your opinion is right.


Yes, it seems pretty obvious to me.

I mean, we already have cars that can handle driving themselves in some contexts (e.g. Waymo in suburban Phoenix) but we're obviously nowhere close to AGI.


That's the thing tho. Specialization (in some contexts), versus a general field of endeavor.

Plus you've got "handle themselves" versus "better than human drivers".

You may be underestimating how hard that is.


How much does it cost?


Do Cruise and Waymo do rolling stops?


Am I still asleep, or is everybody here missing the "May Stop Quickly" printed on the back of the car?

1. Is this even legal? Is this a legal requirement or just some alert text they came up with?

2. If the car stops quickly, it'll eventually result in accidents, and possibly not for itself but for other drivers. Did they consider whether this is safe enough to put out on the street?

3. The video demos are all at night, when there is little traffic. How does it respond during the day when there is more traffic?


Anyone want to take bets on how long this lasts?


Do Cruise driverless cars roll their stops when safe to do so?


What problem is this solving exactly? Talk about a waste of human potential. Great job putting more drivers out of work, if this thing is even safe.


I got hired for my dream job at Cruise, and then was offered near-as-makes-no-difference 3x the TC to work with a friend a week later. I basically completed the new-hire onboarding and then quit. It fucking hurt to do that. What I saw there made me a believer. Cruise is doing amazing shit.


You've got to do what you've got to do. As cool as the company is, they failed to compensate you well enough and lost you.


I need friends like yours. Damn.


Well, to be fair, Cruise offered me a really low salary (a $50k pay cut from my prior position) and no liquid equity, so it was a TC-to-TC comparison. I was willing to do it because it was my dream job, but between the low pay and a 5-day-a-week commute into the city, I just couldn't refuse the other offer.


"Today we are opening up our driverless cars in San Francisco to the public - I’m still surprised I can even write those words"

I, uh, don't know if I like a CEO who is surprised that he's being allowed to do the thing he is doing.


Oh please. This is so pedantic. Clearly he is emphasizing that this is a dream come true, and a few years ago he might not have believed they could do it. He's not saying "I'm shocked that San Francisco is dumb enough to let us do this".


A charitable interpretation is that he's pleasantly surprised the team reached a challenging milestone and they are able to deliver the product.



