Subsidiary of Toyota to acquire Lyft’s self-driving car division (lyft.com)
362 points by bsilvereagle 8 months ago | hide | past | favorite | 341 comments

Funny in hindsight from 2016: "Lyft's president says 'majority' of rides will be in self-driving cars by 2021"



I saw Levandowski give a presentation on self-driving at an Uber all-hands, and it was then and there that I knew that self-driving is 20+ years away.

The example video showed so many small things that we take for granted as human drivers that would need to be built in or hand-coded, things that AI or neural nets would never be able to get. There just isn’t a way to train for every single little thing that could cause an accident. For some things you just need experience and fear, which I imagine can’t be modeled with any current AI.

AI/neural nets are good at detecting patterns, but only marginally so.

Humans have such incredible pattern recognition that we have built-in fear mechanics for when things are _not_ a pattern we recognize.

What percentage of the time do you experience fear when performing a task such as driving? Think about it - that's the margin for error that humans have in pattern recognition for that task.

You're raising an interesting point here. Humans pattern match everything (and integrate it into a world model). When something out of the ordinary appears, attention is brought to it immediately. By comparison, today's AIs just match what they can, leaving everything else out. I don't think that FSD can happen as long as that's the case.

I think a good rule to make is that to be fully self-reliant, an AI would have to recognize random adjustments that are intentionally wrong or notably confusing for what they are, or rather aren't.

I drove to work yesterday, and a driver who had driven over the freshly painted lines to avoid a parked truck had basically left a swerving, lighter set of lines in the middle of the road. Similarly, in my country, at construction sites where the road is redirected for a while, the new lines are often just a different colour and the old ones remain. Sometimes they aren't, and I've gotten confused at least once.

But what if it was intentional, or more pronounced? Perhaps we really should paint ourselves a Looney Tunes-esque scenario: the kind where our hero paints lane lines leading toward a fake tunnel on a wall. Except instead of a wall that radar can easily recognise, it should be some other danger.

Also. If I put a cardboard cutout of a cartoon character next to or on a road will it cause these cars to slow down? Will it take a risk to swerve if the cutout appeared around a corner?

I think the recognition performance is irrelevant; the failure mode of a model not knowing when there's something it can't recognize is the issue. Most classifiers will just fail silently and let the toy car drive into a trailer.

Your argument implies that "I don't know what is in front of me" can just be used as a signal to "stop the car", but that still isn't OK (slamming on the brakes on a freeway because a trailer has someone's face on the back of it, for example).

Recognition performance matters.

I don’t know much about the space, but I’ve always found it interesting that most of the conversation is about autonomous driving for individuals, which to me seems like the last step, after e.g. cargo outside cities and public transportation have been automated. I reckon it’s because the latter are less sexy, lead to job losses, and maybe require more public acceptance?

I know there is a lot of work being done in this area as well, but what is the current state of it, and what are the barriers?

A city bus carrying 20 people has 5% the drivers-per-person that an individual vehicle has. The driverless system only would save 5% the human effort that it would for a solo vehicle, so it needs to be 20x as good before it's worthwhile. Put another way, the bus with 20 riders is already equivalent to automating 19 cars.
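The comment's arithmetic can be sketched directly (the 20-rider figure is the commenter's illustrative assumption, not data):

```python
# Back-of-envelope: drivers-per-person for a 20-rider bus vs. a solo car.
bus_riders = 20
drivers_per_person_bus = 1 / bus_riders   # 0.05, i.e. 5%
drivers_per_person_car = 1 / 1            # 1.0

# Automating the bus saves only 5% as much driver effort per rider
# as automating a solo vehicle does...
savings_ratio = drivers_per_person_bus / drivers_per_person_car
print(savings_ratio)        # 0.05

# ...equivalently, one staffed 20-rider bus already "automates" 19 cars.
cars_equivalent = bus_riders - 1
print(cars_equivalent)      # 19
```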

Roughly the same idea with freight - a single truck driver serves the needs of many people.

Human drivers still do meaningfully cost money to employ, and they do annoying things like needing to sleep or take rest room breaks. Autonomous trucking would completely revolutionize the industry.

Autonomous trucks for urban trucking are a pipe dream, but maybe for long-haul routes.

That said, it's not at all clear to me that developing autonomous trucks for long interstate routes would be meaningfully cheaper than laying rail on literally every high-traffic route on the planet, which uses perfectly-understood technology and has entirely predictable returns.

We already have a lot of rail in the world, but still have lots and lots of trucks. Is it just not enough rail? Roads are also useful for people in a really dynamic way that rail isn’t (especially for big countries like the US).

There is some path dependency here: trains came earlier than cars, and early industrialised nations built rail networks that were far more extensive than they are now. There were also freight trams in cities like Berlin, which delivered cargo to smaller industrial sites in the city.

You can have a driver in the truck for the city portions (at the beginning and the end of the trip), but have them sleep on the highway rather than having to stop the truck for a full night.

Presumably you would be able to leverage a lot of that effort and expand into autonomous driving for other scenarios, right?

That's what's being attempted, essentially.

Autonomous driving in good conditions on highways is already working quite well, but once it's wet/foggy and there are animals wandering onto the roads and other cars are behaving unpredictably... those are not straightforward problems to solve.

Sure, but when you load 20 people onto a vehicle, if you don't have a driver you still need a security guard.

Subways the world over do without just fine.

The train system in Sydney, Australia has a guard cabin which it's recommended to ride near at night when there aren't many people about.

I don't know if it's because I've only been here four years, or because I'm from a city/country with a lot more crime, or it's my train line, but I've never felt unsafe on the late trains or the night buses.

Also, for the unfamiliar, 'guard' here does not mean an armed guard.

Every train in London, even the fully automated ones, has at least one person onboard. Maybe at some point that will change, but there's a lot of pressure against it.

Subways in my country at least definitely do have guards (but they also have human drivers, so I don't think the two are really related).

Profit margin. The driver just isn't an expensive or limiting part of cargo transport.

To be clear, the conversation also isn't about autonomous driving for individually-owned cars. Tesla's disasters aside, it's about autonomous taxi services.

Are you in the freight industry? Because drivers are exactly what is expensive and limits growth right now. There is a humongous supply of trucking jobs, and they are paying really high rates right now (up to 70-80 cents per mile).

Fuel is still wayyyy bigger than the driver.

Both would be valuable - but a truck that is 30% more fuel efficient is probably going to save more money than one that doesn't need a driver (but still needs people for loading).

There are different estimates everywhere - but I usually see 20-30% of trucking costs are drivers. In other freight industries (cargo ships, trains, etc.) it would obviously be less.


Thanks for that chart, but I read it differently. Just because something isn't the No. 1 cost doesn't mean people aren't interested in lowering it. So let's say there are 3 million truck drivers in the US who earn an average of, say, 50K {1}. That's $150 billion in costs per annum. Sure, there are other costs that are higher, but if you could lower that by only 50% -- say, automate some (but not all) routes -- that technology would save $75 billion per year, and that number would grow with inflation. Even at a real discount rate of 8% (10% nominal with 2% inflation), that is worth $937 billion -- nearly a trillion. Now let's say the software vendor who solves this splits the savings with the trucking industry, so they get $37.5B per year and the industry gets the other half. That's some nice recurring revenue, worth significant investment if it's technologically feasible.

So this is all about scale. Everyone is obsessing over Uber for cultural and political reasons, but there are 1/10 as many taxi drivers as truck drivers, and they earn half as much. The big money is in trucking, and even if the problem is only half solved (say, just for some long-haul routes), it would be worth hundreds of billions of dollars, even if truck driver wages are not the single most expensive cost. Hell, they could be the 10th or 20th most expensive; what matters is the billions that could be saved, not the rank ordering.
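The back-of-envelope math above checks out; here it is as a sketch (all inputs are the commenter's assumptions, not sourced figures):

```python
# Commenter's assumed inputs, not sourced data.
drivers = 3_000_000
avg_wage = 50_000
annual_cost = drivers * avg_wage        # $150B per year in driver wages

# Automate half the routes: half the wage bill is saved each year.
annual_savings = annual_cost * 0.5      # $75B per year

# Value the savings as a perpetuity at an 8% real discount rate.
real_rate = 0.08
present_value = annual_savings / real_rate   # ~$937.5B, "nearly a trillion"

# Vendor splits the savings 50/50 with the trucking industry.
vendor_revenue = annual_savings * 0.5   # $37.5B per year recurring

print(annual_cost, annual_savings, present_value, vendor_revenue)
```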


Another thing is that autonomous trucks can drive 24h a day. In most countries there are regulations on how often drivers have to take a break; e.g. in Poland, after 4.5h of driving a truck driver needs a 45min break, and after 9h there is a mandatory minimum 11h break.

In Australia on the major trucking routes, many trucks are company owned and are handed over to fresh drivers at rest stops. The trucks can proceed to their destinations while the drivers sleep.

I think at a minimum all the cars need to be talking to each other over a protocol.

I don't disagree (and nor am I certain you are right), but if that is so, then there are some follow-on issues:

1) The system still has to react safely to pedestrians, cyclists and other entities outside of the network.

1.1) During the transition period, that is going to include vehicles driven by people (I do not think it will be feasible to ban the latter before self-driving vehicles are commonplace, except possibly in city centers and on some major highways.)

2) We do not want systems to learn how to take advantage of the protocol to drive aggressively and antisocially.

I wonder if the software industry has it in itself to produce a safe inter-vehicle network that wouldn't be taken advantage of, either by malicious actors or the manufacturers.

I am sure there is a technically sound protocol someone can come up with, but it would have to survive the corporate food chain, proprietary advances, and the three letter agencies.

Would we see the same dynamic we are seeing now with "net neutrality" and fast lanes for certain endpoints? Like if you pay for a premium service included in the BMW package, you get to travel at full speed, while the rest of the vehicles are artificially slowed down. Or if you are on your way to Disneyland with a paid ticket, all the stop lights will change in your favor.

Thankfully BMW doesn't own the roads.

They do this in LA already: you can choose to be tolled more (wireless tolls) or you can stay in the normal lanes. I'm much more OK with the regional infrastructure charging for it, because you at least have a chance at knowing where the money goes.

The car manufacturers would love to have some sort of residual billing for crap like that I'm sure tho.

Plenty of places have privatized toll roads, in which case a private company owns the toll road and there's nothing (currently) stopping the owner from giving a special pass to particular car owners. BMW could partner with the toll road company to give free passage to BMW owners as an example.

The fact that it doesn't currently happen is a good sign, but as corporations grab hold of society in more ways I don't see why a premium car company wouldn't try to differentiate with those kinds of benefits. As cars become more like a service provided than an object you own, it makes sense that those kinds of things would spice up a particular "package deal" for your car service.

However, that has to be secure enough that a hacker can't simply stop all traffic or cause speeding.

Which is pretty much impossible if you let people have the physical cars. They will report their speed to other cars, and they will measure their speed from pulses generated by something rotating on the vehicle. Replace the rotating thing with your function generator and then you can tell other cars you are going any speed you want. Turn up your reported speed and watch all the other cars get out of your way!
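The attack works because reported speed is just arithmetic on a pulse frequency; a minimal sketch (pulse count and tire size are illustrative assumptions, not any real vehicle's values):

```python
# How a vehicle might derive speed from wheel-sensor pulses.
PULSES_PER_REV = 48          # assumed teeth on the sensor's tone ring
TIRE_CIRCUMFERENCE_M = 2.0   # assumed metres per wheel revolution

def speed_mps(pulse_hz: float) -> float:
    """Speed implied by a given pulse frequency."""
    revs_per_sec = pulse_hz / PULSES_PER_REV
    return revs_per_sec * TIRE_CIRCUMFERENCE_M

# Genuine sensor at highway speed:
print(speed_mps(720))   # 30.0 m/s (~108 km/h)

# Substitute a function generator for the sensor, and the "measured"
# speed becomes whatever frequency you choose to inject:
print(speed_mps(2400))  # 100.0 m/s reported to other cars
```

This is why the sibling comment's suggestion - treat broadcast data as a hint but make final decisions from your own sensors - matters for any inter-vehicle protocol.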

Well, you can take the input as a data point, but make the final decision based on sensor data. Thus the shared data can make the flow smoother.

But I'm also waiting for fake street signs to troll/manipulate self driving vehicles :)

I tried “tesla swatting” with a fake speed sign a couple of years ago. It worked then (autopilot went from 45 to 15 MPH) and I assume it still does.

Can I copyright the phrase “Tesla swatting”? :)

It's easy to put up fake street signs to troll/manipulate human drivers, yet that hardly ever happens. I have read stories of people putting up fake speed cameras in their front yard to slow down traffic, but no one dares to fake speed limit signs.

Here in Germany there are signs with "voluntarily 30" or similar, combined with some surrounding images etc., and yes, one can fake it, but most fakes can be identified relatively easily by humans (positioning, color, material (non-reflecting cardboard), ...). Some high-quality fake sign might, however, exist for a while.

Also there is the reverse: people stealing signs (https://news.ycombinator.com/item?id=25223633) or putting stickers on them (or snow, mud, ...), where most humans know what to expect (stop signs, yield signs, etc. have specific shapes for a purpose; an empty frame where the city limits start; ...).

Going out of a somewhat controlled small area in SV/SFO or AZ is a huge task. Going to a different country requires complete re-learning of the models ... and then there are humans ...

Except in Finland during elections, where advertisements with nice white round circles with numbers inside are standard: a single-vote-per-candidate system where the number of candidates can run to over a hundred.

No human would think that the number next to a person is a speed limit, but machines aren't trained for that...

Sensor data is the input and can be faked!

I've always thought we should just start with a dedicated lane on the highway for self driving vehicles, raise the speed limit on it, and let the vehicles communicate with each other.

Beyond experience and fear, I think an autonomous vehicle will need a reasonably accurate mental model of how other human drivers on the road are likely to react.

The technical term for this is 'situational awareness'

> Situational awareness or situation awareness (SA) is the perception of environmental elements and events with respect to time or space, the comprehension of their meaning, and the projection of their future status

There's a bearish case to be made - not only are the state-of-the-art algorithms nowhere near this, it takes kids quite some time to acquire this skill. Playing ball games with my kids, I've noticed even a 3yo still doesn't intuitively grasp Newtonian mechanics - they're trying to catch the ball where it is now, not where it's going to be in a few seconds. An 8yo is notably better, though not fine-tuned yet. Heck, I know some adults who're lacking in this respect.

Cats can catch a sparrow in mid-flight, though; it may take specialized hardware or software, but not human-level intelligence.

I feel this way too.

I've noticed over the years that the morning commute is a symphony of madness: drivers going over the speed limit, drivers doing illegal things in order to get to work and keep the traffic moving.

Cops seem to let a lot of infractions go during the morning commute. (The rest of the time they let nothing go, but in the morning commute they just seem to know it's a symphony of driving.)

I picture a self-driving car going 25 mph with 20 vehicles behind it.

I don't think you could legally program a car to break any law, even in emergencies. Well, you could, but lawyers would have a fun day in court.

If the 25mph speed limit doesn't make sense on that road it needs to change. If it makes sense, then the self driving car going at speed limit is the right thing to do and human drivers should do the same.

It's as simple as that.

That's the kind of overly simplistic programmer thinking that doesn't work in the real world.

I chuckled! I spoke to Anthony at the Udacity office in Mountain View around 2018 (he’d already left Uber to found his trucking company). He seemed like he’d given up on city streets being autonomously drivable any time soon, but highways were still on the table in the next five years. One idea he floated was to employ drivers only from the exit ramp for the last mile journey. For trucking, this made sense.

Do you mean after he left Google? He left Google to form Otto, which Uber bought. Frankly, it's the biggest mistake of TK's career and caused the downfall, in my opinion. He went against the advice of Emil and Salle, which was a huge mistake, and they were totally right.

Can you share examples of these things? Isn't training data considered "experience"?

Example: One of them showed a self-driving car going down a busy road. There was a car trying to cross that road perpendicularly. As a driver you could see that car inching its way into traffic, but the self-driving car didn't recognize it as a threat. Suddenly the car drove into traffic and the self-driving car almost t-boned it, but the human driver intervened before it hit. That's something that as a driver I would have caught; I would have slowed down because the car was acting weird.

No, training data influences the values of weights and biases in a neural network. An over-fitted NN might encode the training data in its network, but in general, the training data can't be recalled in the same way that a human can recall, learn from, make judgments and adjust their future behavior and perceptions based on their experiences.

Self-driving cars are perennially five years away.

Putting aside the question of whether the tech is there or not (and I would argue it isn’t there for commercial consumer use), the infrastructure and the laws will take years to catch up. This isn’t an overnight problem and many of the barriers are things that tech can’t just design around.

Not to mention there’s a huge voting bloc that’ll be against automation. I think the best first bet is just localized automation (robots in factories).

I’m personally of the opinion we are 10 years away or more, technically. The main issues are weather and the risk threshold people can tolerate. All the tests thus far are in high-sun, no-rain environments. If we are already localizing to southern cities, we may as well just add fiducial markers and guide wires.

In terms of risk threshold, look at COVID: the risk of death under 45 is something like 0.5%. Or the unarmed black homicide count (<30). People have no way of measuring risk; just a few fatalities would have the public in a panic.

In the snow belt the reflective top layer of the lane markings gets scraped off by plows. When it rains the markings will completely disappear at night. Machine learning optimized for the ideal case is never going to work in many parts of the country.

The CDC's best estimate of COVID-19 infection fatality rate for people under 45 is more like 0.05%.


Self-driving cars will always be x years away until purpose built and dedicated self-driving car roads are built to take out the variables that put safety at risk.

Back in late 2015, I was working at a media firm and one of our top directors called together a meeting with our engineering team to talk to us about the impact of our products/content. He was convinced that by 2018 most people would be using self-driving cars regularly and that this was going to hurt our core product.

I don't think I ever tried harder in my life not to LOL.

Ha! I hope he did not pivot the company or shut down the core product.

One of the curses of software projects is "premature convergence", where people do a ton of extra work for an overoptimistic launch date. I have to wonder how much wasted effort went into trying to be seen as a leader in what turned out to be a hype bubble.

Steve McConnell also talks about a common estimation pattern: managers keep asking if something can be sooner, and are only stopped when some engineer proves that a date is 100% impossible. Normally that means that the launch date starts out with a 1% chance of success. Clearly it was lower than that here, but I wonder if that's because a) it's very hard to prove what's impossible here, or b) the Uber/Tesla hype bubble was so intense that it didn't matter what the nerds said.

I dunno, I remember that many of the commenters here were equally optimistic in 2015-16. It took a lot of arguments and discussion before people realised that it was much harder than it seemed.

Then that raises another interesting possibility: that absolutely nobody at the deranged-optimism companies knew what they were doing. Because there were definitely experts who knew those dates were impossible. Less than 18 months after that Lyft prediction, roboticist and AI expert Rodney Brooks said that free-roaming auto-taxis wouldn't be running before 2035: https://rodneybrooks.com/my-dated-predictions/

It is difficult to get a man to understand something, when his stock price depends on not understanding it.

Wisdom for the ages.

Hard not to feel like we’re seeing a swoon in self driving like we saw in crypto back in 2017 and 2018.

The hype got ahead of the tech.

This is just the typical Gartner hype cycle. It applies to most new technologies, although the time scales differ.


My kids are now 7 and 10 years old, respectively.

I wonder if they'll have to learn to drive and get a driver's license.

It could be that by the time they come of age, there are self-driving cars, but they're much more expensive than getting a license (which in Germany costs around 2k euros, including lessons).

Or maybe there'll be no viable self-driving, fully autonomous cars by then? Who knows?

nothing new here, AI has been "5 years away" since the 1950s

AI currently does lots of useful things in all sorts of industries. It just can't do everything that humans can do.

The world is already largely running on AI, just not the version of it that is in people's heads by watching sci-fi movies.

The world is running on good old software.

I don't know of anything important that's running on anything that can be reasonably called AI.

I think we can get by without Siri.

Of course, if you include the entirety of machine learning under the AI label, then whatever. But in my opinion, a principal component analysis does not an AI make. Statistical inference has been around since before computers were invented.

That's the rub, isn't it? Your definition of AI is basically "things we can't do by computer yet", which will always be unobtainable, by definition.

Meanwhile, you've got marketers that are also trying to use that definition, in order to sell you on some software solution, despite it being oxymoronic.

Two better definitions of AI that I'm a fan of are "algorithms that model intelligence" (where "intelligence" is the ability to use knowledge in inference), or the broader definition of "second-order algorithms": algorithms that, rather than encoding a process that finds a solution, encode a process for finding a process that produces a solution.

Why, I'm perfectly willing to take either definition of AI.

My objection to the parent comment still stands; our world is not running on that.

For example, Wolfram Alpha, or OpenAI's algorithms that can learn how to play Super Mario will fit the definitions you provided, but our present world does not depend on systems like that.

I think there is some argument to be made that the Web is "running on" AI, since AI-driven ads businesses are the source of all money in the Web (essentially); and I do think that it's fair to call NNs "AI".

I for one am highly skeptical that the kind of automatic personalized targeting that AI enables is actually worth the price, but the industry overwhelmingly believes it is. So removing AI from the picture would be considered to greatly diminish the value of advertising on the web, per industry pricing, and with the ad money, much of the web would dry up.

People have wildly different definitions for what "reasonable AI" is, especially people working in the field. Colloquially the last decade or so of advancements in machine learning and data mining is all considered AI.

Slap AI on sufficiently many things, and sure, the world is running on AI!

For one, I think it's silly to apply that moniker to something that's not at least interactive.

Machine Learning / Data Mining pipelines that I've touched were running nightly, and furthermore, the systems they powered could continue functioning even if the pipelines crashed / weren't running.

Because with ML, both the input and output are data (information, at best). ML doesn't interact with commands.

The whole point of the "A" in AI is that it's contrasted with Human Intelligence. Nightly batch processes simply ain't it.

Sure, people will still slap that label on anything, but it's boring to do so in the context of this discussion :)

The sci-fi version being as the term was defined by Alan Turing and the marketing version being as s/ML/AI

It was 20 years away for a while. And may be again.

Weak AI is here; AGI is a ways away.

I wonder if it’s possible that this is true in some restricted area where companies are heavily testing their self-driving cars, and COVID reduced the amount of driving people do.

From very brief observation a year ago, it seemed like Lombard Street [1] was 90% self-driving-car training. Definitely a very special case that just disproves the quote from Lyft's president.

[1] https://goo.gl/maps/oXrvVYhDXWVxaoQC8

Uber gave up in December 2020 and sold off their technology to Aurora.

Waymo is now offering rides in Phoenix AZ with no "safety driver".[1] "Waymo One is our fully public, fully autonomous ride hailing service. Now anyone can take fully autonomous rides anytime they're in Metro Phoenix. Just download the app and ride right away."[1]

Waymo also now has significant non-Alphabet investors, having raised US $2 billion in 2020.

No pricing yet, so this is still a demo.

[1] https://arstechnica.com/cars/2020/10/waymo-finally-launches-...

> No pricing yet, so this is still a demo.

If you’re referring to the Waymo rides in Chandler/Phoenix, they do charge. JJRicks (a YouTuber with no affiliation to Waymo AFAIK) has a spreadsheet of the rides he’s taken.


Oh, good. Waymo's site says you can sign up, but there's no indication of pricing there.

Toyota's interest may not be in Level-5 autonomous taxis. They have a history of striving to fully understand every part of their supply chain; this could be a play at developing advanced in-house driver assists.

As an example, I was surprised to learn that Toyota designed their own ECU chips rather than buying off-the-shelf automotive grade MCUs until 2019, when they spun that business off to a subsidiary[1]. They really take vertical integration seriously.

[1]: https://www.reuters.com/article/us-japan-fukushima-anniversa...

> advanced in-house driver assists

One thing I wish any car manufacturer would do is create a system that helps their users use the turn indicator. It’s an endemic problem here.

Something like an annoying voice telling them that they should have used it immediately after turning. And if they fail to use it in three turns during the same drive they get a mandatory safety briefing before they can lock the car.

I think the better way would be automated cameras that fine vehicles that turn or change lanes without turning on the blinker beforehand.

More Orwell(TM) is the last thing the world needs.

You wanna write tickets? Post up a real meatbag cop and make them write tickets. Don't create some dystopian dragnet that you can tune for revenue generation.

Waymo is at the head of the pack. I think we will see full self driving limited to cities + highways this decade.

ZOOX is another company I'm watching https://en.wikipedia.org/wiki/Zoox_(company)

Have some friends at Zoox and can't say I'm confident in them. Retention appears to be a massive problem, entire teams up and quitting.

In the end this makes sense; an actual car company is much better positioned to actually profit from marginal improvements in self-driving (read: safety improvements) year over year, rather than wait for the windfall of FSD taxi service at some unknown point in the future.

Toyota is the last company I expect to bring something new to market well. The last time they did that it was the Prius and it was kind of a fluke IMO.

Toyota's entire market is convincing people (the bulk of whom will trade in within a few years) that a Camry/4Runner/Sienna is worth substantially more than an Altima/Tahoe/Pacifica, based on a bunch of promises about reliability after 150k that the statistical first owner will never see (spare us all the anecdote about your relative who trades in at 300k on the dot, I have one of those too). If robotaxi fleets become the game, then Toyota is gonna lose, because commercial buyers who buy a fleet of whatever has the lowest all-things-included TCO over 3/5/10 years tend to buy a hell of a lot of Chrysler 200s, Nissan Sentras, Chevy Colorados, Chrysler Pacificas, Ford Transit Connects, and other not-so-premium vehicles.

If FSD tech is viable in the next 20 years, I'd be very surprised if the reality of what it will do to Toyota's business doesn't keep them from going all-in enough to bring a viable version to market early enough to get a good market share.

The average age of a car on the road today is 11 years. Toyota's reliability really makes a difference for secondhand owners, and therefore used Toyotas command a premium, and you get more for your car when you sell it or trade it in.

Otherwise, it's true that new cars from many different brands are reliable enough nowadays, even brands that used to be bad like Hyundai (unless you want to buy a car and keep it forever). But used car sales far outstrip new car sales.

Used Toyotas command a premium because relatively few of them end up in the hands of the "oil changes? lol, I can barely afford gas" crowd that wears stuff out and doesn't maintain anything.

You never see a Sienna getting loaded with bags of concrete at Home Depot. Can't say that about a Town and Country. You never see a family of 5 pile out of a (newish) Corolla. Can't say that about an Altima. You never see a 4Runner towing something way too big. Can't say that about a Tahoe.

Toyotas sell for a premium used because they get sold to "premium" customers who keep them nice.

If Toyota financed the people Chrysler and Nissan would finance (insert neggity joke), then in 20 years you would see all the upper-middle-class HN types shitting all over Toyota, because Toyota would be the poor people's car. You'd see them in all sorts of states of disrepair and neglect, and they would be stereotyped as unreliable; after all, why else would they be driving around on 3/4 cylinders?

I can't believe I'm defending Chrysler and Nissan but the blind Toyota worship is just that, blind worship. Most of their niceness comes from people keeping them nice. The real world edge between manufacturers is very, very slim.

At end of life they are no better or worse than anything else that's been treated the way they have (VW's regularly scheduled electrical fires notwithstanding).

This is patently false. Check Australia and Africa where Land Cruisers rule and survive like roaches (as do old Mercedes).

It's because they are well engineered.

So why don't they dominate in Russia and South America then?

What do you mean? In South America Corollas are considered to be far more reliable than equivalent Renault, GM and Fiat competitors. They cost a few thousand dollars extra, which is not a small deal for many people.

They do in South America. In Russia it's because they were only able to import non-Soviet cars starting in the 90s. But they are everywhere there too.

Not only are Toyotas more reliable in the long run, they have fewer defects out of the factory, fewer lemons, and because of this reputation they don't depreciate as quickly. I am going to get a new Tacoma every 2 years because I can sell the old one for nearly the same price as the new one. No more losing 50% of the value as soon as you drive it off the lot.

Aren't the new Tacomas made in Mexico? Do you think that will affect their reliability?

You'd be surprised how many good things come out of "Made in Mexico"... the biggest issues are always out-of-spec components, and starting at a new factory where people are scared of reporting QC problems.

I'd expect the Toyota methodology to trickle down.

If we were talking about VW or Ford making stuff in Mexico people would be shitting all over made in Mexico. "Hurr durr side stepping muh UAW and muh OSHA" and stuff like that

But brands that start with T and end with A can do no wrong on the white collar parts of the internet so Toyota gets a pass.

I mean, you're not wrong, but the history has shown that Japanese manufacturing is good. I don't know whether Toyota will pull it off, but if they do, they'll save some cash.

>but the history has shown that Japanese manufacturing is good.

Last I checked Nissan and Mitsubishi (though they're on the up and up lately) were Japanese. The people of the Camry tax brackets don't exactly hold them in high regard.

I think you're conflating self driving cars with ride-sharing here. Yes, getting rid of human drivers would lower the cost of ride shares and hence increase that market at the expense of having your own car, but the vast majority of current car owners will still have their own cars. Hell, we might even see car ownership increase once you can have your cake and eat it: all the comforts of your own car plus an AI doing the actual driving.

Ride sharing companies have it pretty good now. I'm not sure why they would even want to get rid of independent contractors?

1. They get desperate people to buy specific four-door vehicles, and don't even have to worry about maintenance costs.

1.5 Driver pays for fuel.

2. They don't pay for insurance.

3. They don't have to pay for liability.

4. They pay an independent contractor a percent of sales.

5. Most countries have a burgeoning sector of low-skill help who are begrudgingly copacetic with being exploited. (This I find very sad, especially in a country I used to love. For decades there weren't many takers for shit jobs. Now there's a line.)

Not sure whether you have factored in lost days of revenue into your back of envelope TCO.

When a cheaper less reliable car is in the garage being repaired it's not just about the cost of repair but also the opportunity cost of lost revenue.

So how do you reconcile this with all the cheap domestics being used as fleet vehicles in commercial applications where downtime directly translates to money?

Pretty much every minivan fleet in the US runs on Pacificas and Transit Connects. Small trucks are always the Colorado.

Of course the cabbies love their Priuses, but they only begrudgingly switched after the supply of ex-cop Crown Vics dried up.

I can buy a Toyota here but not an Altima, Tahoe or a Pacifica. So Toyota's market is a little bigger than that.

Regardless, there is no other car company doing anything like the Mirai.

I feel like all these self-driving companies aren't actually in it to achieve success. They know it's a long way off; they're just trying to hit enough milestones to get scooped up by somebody bigger. Perhaps with the exception of Tesla, who I have no explanation for. Self-driving cars are so far down the totem pole in my opinion--if we're going to have to rework our roadways in order to accommodate self-driving cars, why not just innovate there?

A roadway can be MUCH smarter than a car. Each segment of smart road is unique, and only has to worry about itself and whatever cars are present in the area. Smart roads could be managed by a few operators (in fact one operator could shepherd many road segments).

Each smart car has to be able to account for all possible kinds of roads. Smart cars each require an operator who is essentially just sitting there waiting for the car to fail (and therefore the failures become more catastrophic as the tech gets better, since trust rises and attention spans fail).

Put the tech into the ROADS, and just let the cars listen.

> Just trying to hit enough milestones to get scooped up by somebody bigger.

A few years back, during the peak of the self-driving hype cycle, I considered doing exactly this. I think with a small team of sensor and Bayes filter experts, a car outfitted with expensive but off-the-shelf sensors, and about a year of dev time, you could get a car that self pilots 95% of the time on a wide suburban road in clear conditions. That extra 5%, other road types, and worse visibility conditions would each take you far more effort than that. But if your goal was just to show results fast and get acquired, it'd be easy to convince people you were almost there.

> A roadway can be MUCH smarter than a car.

If we're building specialized roadways, you can steer a vehicle with two strips of metal tied together by wooden boards. And it can travel twice as fast as a car while holding a hundred times the passengers. And it was invented 200 years ago.

> At best, I think, we decrease emissions associated with them.

Amen to that. It just shows how crappy our economy is that we don't have trains, pneumatic tubes, and other old-school technology that still hasn't been beaten.

At a technical level, I think smart roads are more straightforward. However, I think a purely technical analysis ignores why self driving cars have been so popular.

The cost of replacing the entire US road system with "smart" roads would doubtless be astronomical - as well as increase the maintenance and infrastructural requirements. We've probably had the technical capabilities to create self-driving-capable roads for years, but the costs are probably too high for people to take it seriously "at scale."

Machines that do things "as well" as humans in similar roles have been a science fiction staple for over a century. That machines do focused tasks much, much better than humans hasn't stopped people from searching for "drop in" replacements for tasks that currently use a human. Suddenly, the scale is on the level of a single vehicle - very manageable if it works. That's why people were so willing to spend billions of dollars to avoid spending trillions on all that road replacement - because they wanted the worlds they saw in science fiction.

You don't have to replace the entire road system. A hybrid model can also exist. Complicated city streets can be "smart", while self driving tech can be good enough to traverse long stretches of highways (which it can basically already do).

Great! Do you want to sit in the first city council meeting and bring up this topic? :P

Software would be great... if it worked. It's going to take a very clear failure of all self-driving software for anyone to even possibly invest in smart roads. Smart roads don't seem to be a startup thing to do - massive friction (like most infrastructure) and no promise of fantastic profits.

Imagine the investors' horror of building smart roads a few years before someone figures out a pure software solution. It's hard to be sure that it won't happen.

I agree with you 100% but I'll try to condense what I think is the biggest hurdle for buy in. People want "their" car to be smart, they don't want "our" roads to be smart.

I'm generalising but I hope you get what I'm saying.

I get it--pride. But mostly, it's easier to sell.

> I feel like all these self-driving companies aren't actually in it to achieve the success.

To that extent, was this acquisition actually a liquidity event for the employees or just an effective "change of ownership"?

Lyft was already public for them.


"In December 2020, Lyft announced that it will launch a multi-city U.S. robotaxi service in 2023 with Motional."


Urbanites have self-driving. They just need to pay a sub-minimum wage driver to do it. Oh, real self-driving you say. That's going to be on a subset of highways in a subset of weather conditions.

And it will happen with trucks first. Either a lead truck running a convoy or remote piloting via drone technology.

Why? Because a truck does 500 miles per day. A self driving truck can do 1000. Amazon cannot exist without long haul trucking

Many long haul trucks carry two drivers and can do more than 500 miles per day.

Perhaps. A lot of "long-haul" trucking is called trains.

In the US at least, trucks are by far the dominant means of transport, at 5x more freight than trains

(slightly out of date, and by value): [1] https://www.bts.gov/newsroom/2017-north-american-freight-num...

Lyft still works with third-party partners, like Motional and Waymo, to provide self-driving services on the platform. The Level 5 division was a separate effort to build its own self-driving car.

They also have a partnership with Waymo to provide autonomous rides. Ultimately, I think Lyft, Uber and the likes will just become a provider/maintainer of robotaxi rides, while the actual tech will be done by Waymo et al.

Google tells me that Uber takes a 25% cut for human-driven rides. So a rider's fare is 3:1 driver to Uber.

If (just to make the math easy) you need to pay a robot a third of what you need to pay a human, I wonder whether the negotiation would come out closer to

1. One part Uber, one part driver (so Uber's cut maintains its value) or

2. Still one part Uber, three parts driver, but the values are all lower and the total volumes higher.

I'd think it would need to shake out closer to the latter case than the former to keep Uber in the game, for a few reasons:

- Self-driving car companies are software companies. They can make Uber clones, given enough resources and time.

- There isn't really a two-sided marketplace bootstrapping process. These companies have enough capital to put a lot of cars on the road, and I don't think the economies of scale are actually that big -- there isn't much cross-benefit from scale in disconnected cities, and inside a given city you need 4x as many cars to halve wait times so the scale benefits tail off sharply. And I bet there's little brand-loyalty.

- If there are few self-driving car companies in a market, they'd have relatively large market power, so they could dictate rates more effectively than an atomised driver pool.

Shrugs, it could happen if the rates were low enough, but I suspect it won't. But I'm probably greatly underestimating the benefits of partnership -- dispatching is probably easy, and AV companies will necessarily be savvy with regulators, but customer-interaction and "app surface-area" are likely places where the existing players have a real leg-up. That said, the labour market is pretty porous in the Bay Area...
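A quick back-of-envelope sketch of the two splits above. All the dollar amounts here are made-up assumptions chosen to match the 25%/3:1 framing; the only inputs taken from the thread are the 25% cut and the "robot costs a third of a human" premise:

```python
# Human-era baseline: a $16 fare split 3:1, driver to Uber (25% cut).
human_fare = 16.0
uber_cut_human = 0.25 * human_fare               # $4 to Uber, $12 to the driver
robot_cost = (human_fare - uber_cut_human) / 3   # robot "paid" a third of a human

# Scenario 1: Uber keeps the same absolute cut, fare drops to robot cost + cut.
fare_1 = robot_cost + uber_cut_human

# Scenario 2: the 3:1 split is preserved, so Uber's cut shrinks with the fare.
fare_2 = robot_cost / 0.75                       # robot side gets 75% of the fare
uber_cut_2 = 0.25 * fare_2

print(f"Scenario 1: fare ${fare_1:.2f}, Uber keeps ${uber_cut_human:.2f}")
print(f"Scenario 2: fare ${fare_2:.2f}, Uber keeps ${uber_cut_2:.2f}")
```

With these toy numbers, scenario 1 gives an $8 fare with Uber's $4 intact, while scenario 2 gives a ~$5.33 fare with Uber's cut shrinking to ~$1.33 - which is why the fare would need much higher volume in the second case for Uber to stay whole.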

3:1 of the gross goes to the driver. Much less after driver expenses like wear and tear, gas and insurance.

That's the rub of a robotaxi service. A hypothetical robotaxi launchable today would probably have:

~$30B in R&D, a 1:10 remote safety operator ratio (first-party employees, not contractors), and a ~$100K bill of materials per vehicle for the lidar sensors etc.

Compare this with an externally owned 8 year old vehicle being driven by someone who doesn't confer liability to the company.

Even if the dream was realized there likely isn't a viable business for robotaxis until the cars are on the road for 10-20 years.
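A rough amortization sketch of the point above. The $100K BOM and 1:10 operator ratio come from the hypothetical numbers in this comment; the operator wage, utilization, and average speed are my own made-up assumptions, and R&D amortization is excluded entirely:

```python
# Per-mile cost of hardware + remote oversight, amortized over years on the road.
bom = 100_000                   # assumed per-vehicle sensor bill of materials ($)
operator_cost_per_hour = 30.0   # assumed fully-loaded remote operator wage ($/hr)
vehicles_per_operator = 10      # the 1:10 safety operator ratio
miles_per_hour = 20             # assumed average urban speed including pickups

def robotaxi_cost_per_mile(years_on_road, hours_per_day=16):
    miles = years_on_road * 365 * hours_per_day * miles_per_hour
    hardware = bom / miles                                        # BOM spread over lifetime miles
    oversight = (operator_cost_per_hour / vehicles_per_operator) / miles_per_hour
    return hardware + oversight

for years in (2, 5, 10):
    print(f"{years:>2} yrs on road: ${robotaxi_cost_per_mile(years):.2f}/mile")
```

The hardware term shrinks with years on the road, but the oversight term is a fixed per-mile floor - which is the shape of the "no viable business until the cars last 10-20 years" argument.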

Vans exist, and rides can be pooled in denser areas. I imagine a 24-hour robovan with multiple paying passengers at a time would pay itself off relatively quickly.

People here like to whine that this would be recreating the bus, not realizing that the reason buses suck is because they have rigid departure schedules, and fixed startpoints and endpoints. People mainly take Ubers over the bus because they're much more flexible.

Yeah, I remember when the company I worked for continually made forward looking projections that were ridiculous whilst in the process of negotiating a sale. Including the pressure not to report anything that would need to be raised as part of the acquisition.

Not surprisingly, it takes a lot of capital to make serious inroads into self-driving. Ultimately, if your main business is not profitable, it is hard to justify spending $100M a year on L5 with no reasonable end in sight.

I think this is why only Waymo or Cruise or one of the other players well funded by a profitable other business will continue to progress towards the true self-driving vehicle... it's just too damn expensive and uncertain as to when the breakthrough/development will happen

What will happen afterward? They'll get acquired by Uber or something?

I think Lyft's core ridesharing business will continue to become more profitable in a duopoly setting with Uber. Ridesharing is a network effect business, if the riders are there, the drivers will come, the more drivers you have, the more availability so you get more riders. The US has proven that it is capable of supporting two overlapping players (not all markets can support this and meet regulatory rules)

You're assuming ridesharing the way Lyft and Uber do it will last for long. Nowhere have I seen anyone confirm that it's profitable or break-even for the drivers. Almost no one drives for years with them, and they're bound to run out of drivers who haven't learned the lesson. Or their prices jack up to more than regular cab prices, like in NYC. One way or another, there's a non-zero possibility these companies won't survive in the same shape.

I don't think this is accurate at all.

1/ It is profitable/break-even for the best drivers (this means you have to optimize and consider the external costs of doing the work... which, unsurprisingly, most people are bad at doing). There are certainly cases of people who have leased a brand new car with poor gas mileage and then driven it into the ground, surprised it hasn't paid off. However, using a Prius or an all-electric vehicle with high reliability results in lower costs. If you run a business and poorly optimize around your costs, you don't make a profit...

2/ Drivers do drive for years for them, but the vast majority do not, because it is not a great long-term career and usually serves as a bridge to full-time or regular part-time work. You can't get promoted, you can't get paid much more for tenure, and you don't pick up any new marketable skills. Even a career warehouse worker or McDonald's front-line worker has a path upwards... and people do work there for short and long periods of time. Just because everyone isn't a long-term employee doesn't make a profession unsustainable; it's just the nature of the job.

The last statement is pure conjecture so I'm not sure how to address it outside of the fact that in an urban setting car ownership comes with a lot of costs and these services expand the radius of people's ability to navigate cities. There will be a premium for that and Lyft and Uber are working their way towards providing that service AND generating a profit.

Probably the opposite, instant massive userbase, then just slowly phase out the meat bags.

I learned to drive in Mexico at 13, where there are almost literally no rules or traffic signals (see the black arrow on the wall? that means don't stop). I remember thinking about how different things were when trying to get my driver's license in US at age 16 - almost easier - in spite of how many more rules there were. I wonder how self-driving cars perform when trained in a world of rules and order and are then thrown into complete chaos in other places.

What does training look like? Do self-driving cars only work in certain countries where they have been trained?

Not even per-country, self-driving algos are trained per city. Just in the tri-state area, you have NYC's extreme aggressiveness, New Jersey's jug handles, and Pittsburgh's lefts.

I'm curious what company tunes locally to this level of driving characteristics (not for different laws which is sometimes necessary, but for different expected aggressiveness and situational awareness like you mention)

I work in this space and am not aware of what you're describing. ML calibration is tuned where it's legally allowed to run on the streets, as well as where there are interesting problems to practice against (bridges, tunnels, etc.)

I was thinking of nuTonomy when I wrote my comment.

This sounds hilarious. What are jug handles, and what are lefts?

Well, they don't really "work" anywhere, if your standards are high enough. But your point is valid; making a self-driving car for many national markets has to be harder than making a regular car for those same markets.

Either the car is trained on Mexico's streets and so behaves as aggressively as the rest of the traffic, or a significant enough number of vehicles are self driving so they can essentially regulate the streets themselves.

>What does training look like?

The same way humans learn/train.

Observe how most people behave, then imitate.

>where there are almost literally no rules or traffic signals ...

That's a lie, though. The meme that México is an uncivilized place is dumb and doesn't hold much substance.

Edit: Downvote me all you want, GP's statement is laughably easy to disprove. ¯\_(ツ)_/¯

OP was being a bit hyperbolic, but driving in Mexico (and most of Latin America) is much different than in the US. I lived and overlanded with motorcycles in Mexico, Guatemala, Colombia, and Chile for the last 12 years and could talk for hours about how the mentality of driving in the US doesn't translate to Latin America.

Blinkers can mean something different at different times, depending on context, while traffic lights and signs are treated as a suggestion. Speed limits and their enforcement are non-existent (outside of expensive toll roads), and lane lines aren't something that has meaning. Pedestrian traffic (or hell, animal and farm traffic) is unexpected and unexplainable, as are public transportation options (imagine giant school buses flying through roads, playing leapfrog to get in front of one another to be able to pick up the next people waiting for a ride).

The consequence of these differences for self driving cars will be a very, very difficult problem to solve unless the majority of the vehicles are self driving, which is not a solution that will happen anytime soon in Latin America.

+1. If you've never seen or ridden a camioneta, it's hard to describe!

I go (or used to go) to Guatemala a lot, and the camionetas are so insanely scary! Going over huge mountains, on roads with giant holes in them, tipping this way and that, filled 3x per seat or more, with mounds of carrots and cargo on top. Lots of times the money guy hangs out the front entrance while it's driving!

They go SO fast it's super scary to me I'm surprised I haven't seen one tip over on those corners or bust a brake and run off the mountain.

The one time I went to Mexico, my taxi driver nearly crashed into a police car because he and the police car both ran their stop signs at a four-way stop. The police didn't pull him over; it's a normal occurrence, apparently.

There was more mayhem, but that one is illustrative.

In my experience, the laws are not as closely adhered to as I have seen in other places.

As a Mexican-American living in San Diego, driving in TJ is way different.

Like, so chaotic that I refuse to drive there. I leave the driving to the locals when I'm there.

I don't think anyone's saying Mexico is uncivilized. It's a beautiful place with beautiful people, massive cultural output, great universities, etc.

I'm just not going to invest in a startup trying to build self-driving cars for Latin America anytime soon.

100% - definitely not saying Mexico is uncivilized, I love it very much. But the traffic laws are just not the same, people!

I've seen some directors leave Lyft's "Level 5" division recently, in the last couple of months. I guess there were a lot of warning signs.

This is probably for the best. Actual engineering (and actual engineers) and Lyft are two very different animals.

How long will this game of musical chairs go on? Technologically we are not there yet. What is called AI by all these self-aggrandizing people is nothing more than glorified "pattern matching". Can the current technology have its use cases? Sure. But calling it self-driving is laughable at best and deceptive at worst.

>What is called as AI by all these self-aggrandizing people is nothing more than glorified "pattern matching"

You can describe humans in exactly the same way.

Toyota / Woven Planet also just announced it was picking Apex.AI as their software platform:


$100M a year of zero-ROI expense, gone. Also no more shared rides due to COVID (which I expect will never come back to Uber or Lyft, since they were gigantic money sinks). Assuming ridership is bouncing back from COVID, I wouldn't be surprised if their first profitable quarter was this year.

Uber and Uberlikes are doing fine here in New Zealand. As long as they can pump money into/scam drivers into being cheaper than taxis they will keep market share.

Why were shared rides such gigantic money sinks?

In theory at least, it can just be some algorithm and some screen space on the app.

I can try to explain:

- Ideally in a shared ride what happens is that instead of 2 drivers, driving 12 miles for passenger A and 14 miles for passenger B, you have 1 driver who drives 15 miles for both passengers trips.

- So 1 driver to pay who is now more efficient, and 2 paying customers. You charge each customer X% less, pay the driver Y% more, and theoretically you could keep your margin the same but now fulfill more rides (another driver is free now that you put two rides in 1 car)

- However, now let's consider how much cheaper it can really be...

- Sharing a ride for a cheaper cost makes sense when you and the person you share with have a generally overlapping route. The discount you get as a customer is a function of how likely you are to get matched with someone.

- Turns out there aren't a lot of rides with good overlap (airport rides might be the best type of ride, tbh). Thus the discount is quite small. If the discount is small, fewer people use it. Fewer people using it makes the discount even smaller! Eventually you have no discount and no incentive to use the service.

- To keep users incentivized to use the shared mode, Lyft and Uber have to subsidize the pricing to make sure that match rate stays high. Every "shared" mode ride that has only 1 person in it is a big loss, but incentivizing more people to use it can result in a smaller net loss across the marketplace
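The bullets above can be put into a toy model. Every number here (fares, driver cost, the 1.25x shared-route overhead) is a made-up assumption for illustration; the only claim carried over from the comment is that an unmatched "shared" ride is a loss while a matched one is efficient:

```python
# Toy model: shared-mode riders get a discount, but only matched rides actually
# share a driver. An unmatched "shared" ride is a solo ride sold at the
# discounted price - a loss relative to a regular ride.

def shared_mode_margin(match_rate, discount, solo_fare=10.0, driver_cost=8.0):
    """Average margin per shared-mode ride, given the fraction of rides
    that actually get matched with a second passenger."""
    fare = solo_fare * (1 - discount)
    # Matched pair: two discounted fares over ~1.25x the driving cost of one trip.
    matched_margin = (2 * fare - 1.25 * driver_cost) / 2
    # Unmatched: one discounted fare, full solo driving cost.
    unmatched_margin = fare - driver_cost
    return match_rate * matched_margin + (1 - match_rate) * unmatched_margin

print(shared_mode_margin(match_rate=0.8, discount=0.25))  # healthy match rate
print(shared_mode_margin(match_rate=0.3, discount=0.25))  # poor match rate
```

The feedback loop in the comment shows up directly: margin is linear in the match rate, so as matching gets worse the sustainable discount shrinks, which further reduces uptake and the match rate.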

Most ride-hailing companies that invested in self-driving cars are giving up on these projects. Most of these startups struggle to sustain their business.

*Most ride-hailing companies that needed a narrative ahead of their IPO for why they had a defensible long term moat and leaned heavily on 'self driving' as their answer to that are now getting pressure from investors on why they keep sinking more money into a problem it doesn't seem they'll ever be able to solve

It truly is the issue of not knowing if it will ever be solved. The uncertainty of progress and uncertainty of a timeline is what really irks investors (who are right to question the investment given it is dragging down the not-quite-profitable main business)

I wonder what the general feeling is towards startups that take a different approach, such as https://comma.ai/ and https://wayve.ai/? These startups take the approach of building the "Android version" of self-driving cars: a technology that can be used as an add-on in existing cars.

I'm surprised nobody mentioned Prop 22. At least in California there's no risk of losing margin to "employees" anymore. Maybe FSD is not required anymore for long-term viability of transportation network companies.

I keep reading about a shortage of Uber and Lyft drivers in California. I think this is correct because I was a driver and they've been trying hard to get me back. None of this would happen with FSD.

Anyone hawking FSD is a stock promoter (liar). Uber/Tesla/Lyft. Elon Musk claims Tesla FSD will be L5 by end of 2021, but they recently filed a document with the CA DMV stating Tesla FSD is L2 automated.

Which one do you think is more realistic, staying L2 by EOY 2021, or magically leaping forward to L5 FSD in 8 months?

The only company working on self-driving that I believe when they issue press releases is Waymo, because Google isn’t trying to juice their stock price all the damn time, and they have operable robotaxis in AZ. I don’t think Waymo claims L5 capability either.

It's those edge cases that make me think real FSD (vehicles with no steering wheels) is a decade-timescale problem.

Figure out 99.9% of driving, but otherwise take a family off a bridge when the sun is blinding the camera? Still need a steering wheel.

I used to live in Mountain View, where Waymo tests their cars, so I would observe them quite often. I sat at a railroad crossing with one, thinking about what could go wrong. What if the signal didn't work, but you could hear the train coming and see it way off to the right or left if you looked? Are Waymo cars listening for trains? Or are they looking at railroad guards only? I suppose they could program the train schedule in, but what if the train system changes the schedule? That could be adjusted for, but how much would I trust this system? Not enough to take a nap behind the wheel.

In Whitby, Ontario, Canada, a dad and his son were killed driving across a railway crossing with no gates. The dad (the driver) was on his phone at the time, and it is believed the phone blocked his view of the train and he was talking too much to hear it.

The person he was talking to said they were talking like normal and suddenly the phone cut off.

Computers probably cannot be worse than the people already on the road.

Even if you're right, that's the same argument as for airplane safety. The problem is being in control versus putting your life in the hands of an unknown third party.

The psychological, ethical and legal implications are completely different. If tomorrow I drive a car and run a kid over, then I'll be in trouble and you'll probably never hear about it. If tomorrow I get in my Tesla self-driving car and it runs a kid over, then you'll hear about it everywhere and Tesla's responsibility will be invoked. Because whose else would it be?

The bar for self driving cars is not being as good (or bad) as a human driver, they need to be orders of magnitude safer in all situations. They need to have airplane industry numbers, not Average Joe drunk driving numbers.

If the bar is "better than someone talking on cell phone while crossing a railroad crossing without looking," I'll stick to manual driving.

Self driving needs to be better than me, not better than average.

Hell, it needs to be better than you with a "train is coming, I'm going to slow down" driver assistance.

If you tell me that no such driver assistance exists, just apply it to scenarios where it does, like lane-keep assist and automatic emergency braking

It would be interesting to put a black box in a car tied to the driver. Anyone who shows patterns of driving worse than average gets a self driving car.

If you are better than average you can keep your dumb car.

Does the average driver even look both ways at the railroad crossing? I do every single time. Such a low-cost preventative measure that could very well end up saving my life one day.

At least where I learned to drive, not only was stopping to look both ways at railroad crossings taught in driving classes at school, it was part of the written test to get a permit/license, as well.

How many people on Hacker News think self-driving cars will be a better driver than themselves in the near future?

Probably me. Waymo may already be better in most situations.

I drive about once or twice a year, always in a rental, and consequently never really know how well my car handles or its dimensions. That, plus generally being rusty, tends to make me fairly nervous.

Yet, nevertheless, I am in charge of a 2000kg block of metal hurtling around pedestrians and cyclists at 70mph.

Scares the shit out of me, to be honest.

What are your circumstances where you simply couldn't get a driver? If I could at all afford it, I would in that case same as I do in very remote 4WD territory.

I don't own a car, so I only ever really drive on holiday... or in extreme situations.

The last time I drove was taking my rabbit to the vet at 2AM on X-mas day, where the vet said due to Covid I would have to wait outside. Not really practical when the temperature is below zero, so I used Zipcar!

Besides that, mainly if I'm going somewhere for a week with poor public transport and high distances to cover (impractical to cycle/walk).

Honestly, I don't think self-driving cars will /ever/ be a better driver than I am. But that also hinges on what you mean by "better". There may reach a point in time when self-driving cars have a lower statistical likelihood of being in an accident than I am, but I doubt that they will ever do that while getting me to my destination more quickly than I do.

Both of those things relate to the fact that I have significantly better situational awareness and driver training than the vast majority of people, including significant time on track racing cars. To match my situational awareness it will not be enough for self-driving cars to rely on their own sensors, but they will need to communicate with one another and act in concert, otherwise my reading of traffic patterns and looking far ahead in the distance will always beat it out for the quickest path through traffic, or the safest response.

To me, where I see a benefit for myself personally, is that self-driving cars don't necessarily need to be better than me, they just need to be better than most, and turn driving from an active to a passive activity. It means I can be engaged in other things. But, that's also the rub. I /like/ driving, so I don't want to exist in a world where I am /forced/ to have a self-driving car to use the roads, as I still want the ability to go out and enjoy myself from time to time. But for a daily commute, turning driving into a passive activity would make me a happier and more productive person (other people are horrendous drivers and it frustrates me observing their unsafe and careless behaviors).

I'm sure almost nobody even outside HN thinks that self driving cars will be better than themselves. Everybody thinks they're above average drivers.

That's a psychological problem that self-driving cars will have to overcome. They need to be so obviously better than any human driver for people to actually consider them. A handful of fatal accidents that people think they could have avoided as a driver and they won't want to get on board.

People are bad at risk assessment, but that's a fact of life. As a result, self driving has to be better than what people _perceive_ as their risk of being in an accident while driving, not what the actual risk may be.

Not a problem if you only sell to the bottom 50% of drivers.

Only a small minority of drivers consider themselves to be in the bottom 50% of drivers, so you might have trouble with that strategy.

People are great at some things computers suck at, and computers are great at some things humans suck at.

Machines making horrible mistakes that no conscious person would ever make is problematic, because we don't have reliable error correction methods for that kind of mistake.

A FSD car will never accidentally press the acceleration pedal when trying to press the brakes, or lose control while trying to read an SMS. Instead it will mistake a bird for a train, hit and run someone, and it would be like “something slowed me down, are my batteries degrading?”

How do you deal with a driver that fails to understand what’s going on?

A human being can drive a million miles without a serious accident. 2018 saw 1.1 deaths per 100 million vehicle miles driven[1]. Human beings are extremely good at driving safely overall.

[1] https://en.wikipedia.org/wiki/Motor_vehicle_fatality_rate_in...

Yet driving deaths are the leading cause of death in older children and young adults.

It's not that we're bad at driving, I suppose, it's that we do an awful lot of it and it's quite a risky activity - especially if you're not 100% engaged.

That being said, if the US cared about road safety, there's quite a lot they could do to improve it already. Many techniques have been used successfully in other countries to significantly reduce road deaths, for example, a nationwide vehicle safety inspection standard and lowering speeds on urban roads.

The fact that these aren't being done leads me to think the concerns about road safety are actually rather irrelevant for the country as a whole. When self driving cars exist and are convenient, people probably will switch to them, as long as the accident rate is vaguely comparable.

> Yet driving deaths are the leading cause of death in older children and young adults.

All vehicle-related deaths in the US average around 32,000 annually, of which about 50% of those are alcohol and/or drug related, leaving the true accidental vehicular death number around 15,000 annually.

The problem we're trying to solve with FSD just doesn't exist at the levels FSD proponents would have you believe. Humans are incredibly safe drivers when you account for the number of people driving, and the number of miles driven per year.

I’m not sure you can discount driving deaths just because they’re alcohol or drug related. A hypothetical FSD system would be a solution to those deaths too, and as long as you are driving on the same road as everyone else, abstaining from them yourself still puts you at risk.

And if everyone drove modern high-end cars with the best assistive driving systems, especially as those improve to something approaching FSD on highways (or even achieve it on certain highways in certain conditions) over the coming few years, it would be better still.

Yeah, I think people conflate seeing people drive erratically or doing dumb things in cars with an inability to operate a car in a way that ultimately prevents deaths. Humans are incredibly good at that in the bigger picture for reasons that are currently impossible to replicate with a computer.

Criticize human driving all you want, but even the worst of us can typically manage to get from point A to point B... Somehow.

One of the eye-opening things about the Wikipedia statistics is that the advent and universality of mobile phones at worst caused a very small uptick in fatalities. With all the bullshit you see on the road, somehow people manage to mostly avoid death doing it. My guess is people can pull attention back to the road at moments it's really needed.

For sure. Well, humans don't really need to make great decisions individually to be protected, ultimately, by most people making an effort not to kill or be killed – to some degree at least. It's a very strange and fascinating thing about us.

I mean, industrial agriculture eventually results in food on tables across the globe. So many systems at work. Run by the same people using their smart phones in their cars, haha. We manage to collectively make things happen.

Easy to criticize, but hard to argue against the results?

> A human being can drive a million miles without a serious accident

A million miles sounds like a lot, but it isn't. The average American drives 12,000 miles annually. Assuming a driving career of 52 years (from age 18 to 70), the average American will drive 624k miles. So the lifetime chances of getting into a serious accident are not exactly small.

(I don't know how to calculate the probability here, actually. Is 624k/1M the lifetime probability of getting into a serious accident?)

Just to comment on the math (not the actual numbers):

624k/1M is the expected number of serious accidents.

The probability of getting into at least one serious accident is 1 - (999999/1000000)^624000 = 46% (Every mile, you have a 99,9999% chance of no serious accident. You want that 624k times in a row).
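The arithmetic can be checked directly. A minimal sketch, using the same assumptions as the parent comments (one serious accident per 1,000,000 miles, and a 624,000-mile driving career from 12,000 miles/year over 52 years):

```python
# Back-of-envelope math from the comments above; the per-mile accident
# rate and lifetime mileage are the commenters' assumptions, not data.
p_accident_per_mile = 1 / 1_000_000
lifetime_miles = 12_000 * 52          # 624,000 miles from age 18 to 70

# Expected number of serious accidents over a lifetime.
expected_accidents = p_accident_per_mile * lifetime_miles

# Probability of at least one: complement of zero accidents every mile.
p_at_least_one = 1 - (1 - p_accident_per_mile) ** lifetime_miles

print(f"expected: {expected_accidents:.3f}")   # 0.624
print(f"P(>=1 accident): {p_at_least_one:.0%}")  # 46%
```

Note the two numbers diverge: 0.624 expected accidents, but only a 46% chance of at least one, since some unlucky drivers have more than one.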

Neat, thanks! So nearly a 1 in 2 chance of getting into a serious accident in a lifetime. Not great odds IMO.

I believe the mileage should be higher because most of us spend time as passengers in cars too.

> Computers probably can not be worse that the people already on the road.

And yet there is no evidence that computers are better drivers than people are.

I do have to ask, are we using the same computers? I've been using them for decades, and they're consistently buggy, error prone and straight-up factually wrong a lot of the time.

Self driving cars would have to kill over 1.35 million people each year to be as bad as human drivers[1].

But there is no reason we should be happy if a self driving technology "only" kills 1.2 million people in a year. That number is absurd and should not be considered acceptable. I think in the semi-distant future we are going to look at manual operation of a motor vehicle as a dangerous party trick, something only to be attempted by professionals or in limited circumstances like pulling into a field to park or some other low speed maneuvering.

[1] https://www.asirt.org/safe-travel/road-safety-facts/

We should be delighted if we have a self driving technology that only kills 1.2 million people a year.

1. That's an incredible amount of saved time; people get back the time they used to have to spend driving. We would have eliminated the largest suck of human time on the planet (truck driving). Etc. The main benefit of self-driving is not safety.

2. We have a working baseline that we can improve upon to drive that number down, and since computer programs don't have the "new drivers need to learn from scratch" problem, those improvements will stick around approximately forever.

> semi-distant future we are going to look at manual operation of a motor vehicle as a dangerous party trick

totally agree but I'm skeptical of current tech to get there in the near time frame being talked about. It's all in how you define semi-distant

Hell, I would be happy if self-driving cars only killed 1.5 million (slightly more) people each year.

> Computers probably can not be worse that the people already on the road.

Computers are better than some drivers, but they're worse than others. If the goal is safety, computerized assistance is almost certainly better than self-driving. Keeping the human driver engaged and doing their best, with the computer supervising, works a lot better than having the human supervising; it's hard for a human to remain alert enough to intervene quickly enough when needed. I don't know if there are good studies yet, but I expect automatic emergency braking to reduce the severity of a lot of collisions. Cross-traffic warnings probably eliminate a lot of minor (and some major) collisions. There are systems now to detect wakefulness; if those are combined with something to safely pull over and stop, that could reduce a lot of tragic collisions as well.

> If the goal is safety

If the goal was safety, a computer assisting a competent driver would be the best. If the goal is private profits for a few individuals, then a computer that doesn't take a paycheck would be best.

Unclear. There's some level of computer driving competence where humans will simply zone out.

that's not quite right.

Computers currently are far worse than the good drivers out there. It is not clear if they ever could be better than a trained and cautious driver.

For example you can easily avoid the situation above by not talking on the phone while driving across rails.

if the AI abruptly crosses the median how are you going to avoid that?

Now look at the AstraZeneca craze. Vaccines save lives, thrombosis cases are extremely rare. Yet the guy on the street is talking to someone on the phone: “Do not take AZ! I repeat: do not take AZ!!!”. Technology has to make just one mistake to lose the layman’s confidence.

>I suppose they could program the train schedule in, but what if the train system changes the schedule?

I laughed at this, thinking about how the German trains are usually ±30 mins and often off by hours.

The train schedule would probably not fare better than a coin flip, except in Switzerland or Japan.

> I suppose they could program the train schedule in, but what if the train system changes the schedule?

Just check the train locations instead :)


Theoretically it should be able to use side-facing lidar and cameras to sense that a big truck-like thing is crossing.

Why not radar?

Because radar lacks resolution. That's why my Tesla phantom-brakes due to low-hanging signs or slow cars in the other lane. They tune it down to ignore these anomalies, and then it has issues running into disabled cars and emergency vehicles.

You don't need resolution to detect a train about to ram into you.

Similarly, lidar resolution is overkill, and an overcomplication compared to a moderately sophisticated mm-wave radar, which on top of that will be much more durable and reliable.

In addition to obeying all railroad crossing signals, I'm certain the Waymo Driver has the concept of trains built in. The long-range sensors on the vehicle would project the probable path of the train and determine if it was safe to proceed along the car's planned route.

I'm not certain the Waymo Driver has the concept of trains built in, but what if they did? How good is it? I applied at Waymo and found out they don't do well in the rain. What if the train is splashing up water in front of it? I'd like to know how it deals with falling rocks, which is a problem in California. Would they tell me, or lie about self-driving like some well-known spokespersons do? What are the hackable weaknesses of Waymo? Have they tried to have visual and computer hackers fool the car and send it the wrong way? It just seems to me it's going to take a while, maybe decades in my amateur opinion.

Unless the train was obscured from view by foliage at the height and angle that the lidar rests at, or if a truck pulled up next to the car, or... Well yeah, the only reason people can handle that situation reasonably well is that they have a large range of sensory input, most of that input has symbolic representation in the mind, and there's a healthy fear of dying that causes us to notice when the ground is shaking and reassess whether the train signal is working correctly.

To be honest, this seems like one of the easier edge cases to handle. Before crossing tracks, check if there's anything coming down the tracks.

Much easier than all of the weird edge cases related to the behavior of other drivers or pedestrians.

In England at least, if you're crossing HSR by foot, you pick up a handset and call train operations to cross. Probably overly conservative but obviously a simple thing for a car to implement along with the usual safety measures (though those would presumably involve gates or not at grade anyway).

This seems like the edgiest of all possible edge case arguments against self-driving cars, considering that a meat-driven car crashed through a working barrier and into a perfectly obvious train in the Bay Area just this morning.

Obviously meat-based drivers have flaws. The question is if the FSD robots at least retain all the existing capabilities of meat-based drivers. Trading one set of deficiencies for another raises the question of which set is preferable.

The reality is that for a long time we will combine both sets of capabilities and use "self driving" tech to enhance human driver capabilities.

In that case self-driving first needs to be able to avoid the relatively simple case of smashing through a barrier, and the human driver can use their wetware to figure out how to handle railroad crossings, which are diverse and complicated.

I agree that it’s an edge case, but there are always completely bonkers people, out of their minds on drugs, or just suicidal.

I don’t want to risk getting figuratively in a car with one if I turn on FSD.

This is where you're wrong because I'm obviously a far better driver than that person who drove in front of a train this morning and it would never happen to me. I trust myself more than I trust other drivers. I trust myself more than I trust Waymo. I'd be happy to have Waymo prove me wrong.

The edge case that I always go to on why truly autonomous vehicles are decades+ away is winter driving. Lane markers and road signs disappear. Slush and salt spray constantly obscure sensors.

Winter driving as a human driver requires an entirely different approach and Waymo hangs out in the Arizona desert where there is basically never any inclement weather.

That, and construction, detours and city driving. In the latter, driving contexts can change without a second's notice, without any signs, markings or signals denoting the change. You learn what those contexts are from experience and understanding of what are ultimately complex social situations, and a shared understanding between you and the people around you.

Simultaneous location and mapping (SLAM) was solved a while back. Autonomous vehicles can locate themselves using any detectable landmark, not just lane markings and road signs.

You don't even need to go to bad environmental settings. Let's talk about other countries. I haven't seen a lot of self-driving cars in Rome or Mumbai yet; there are a lot of places where traffic rules are treated more like suggestions than actual rules. American-style neat wide highways and grid-like streets are not what the places look like where most cars are already being driven.

Mumbai literally has bumper to bumper traffic, I think any self driving car's sensors would just go crazy and give up.

You can simulate poor weather with sensors by either 1. simulating it or 2. driving at night.

Digital data instead of signs seems like the obvious solution.

IMHO FSD cars (along with drones, secure programming languages/OSes, and distributed manufacturing) are a post-war technology. The economics don't make sense while we're at peace and locked into the Pax Americana economy. However, once folks start trying to kill each other, these technologies will give a very large survival advantage to those factions that adopt them. Imagine how useful not having to risk human lives on supply lines would be. And risk expectations get reset when things go from "nobody ever dies unnaturally" to "people are actively trying to kill you", which eliminates the biggest barrier to FSD adoption.

I think we'll get such a war within the next decade, so we may see FSD vehicles sooner than expected, just not in the way we want.

In a hot industrialized war (one that isn't wildly asymmetric), a supply line has so many soldiers' lives depending on it that having the transports manned or not is insignificant. But I think I know the general concept you are getting at, I'd call it the wartime risk economy: when a large percentage of your combat aircraft sorties don't return, it becomes reasonable to operate the engines much closer to the limits of reliable operation than in peace time. The extra performance can save more pilots than the occasional engine failures caused by the emergency operation mode cost.

Just like reliability SLAs, I suspect every extra nine requires an exponential increase in investment. My armchair-know-nothing assessment is we're maybe at 90% coverage* right now, with maybe 10x all investment up until this point needed to reach 99%; then 10x that to reach 99.9%. Even at 99.9%, though, a failure scenario still threatens injuring or killing 0.1% of drivers (350,000 motorists for US alone).

* Coverage as a percentage of scenario's occurrence over the total duration of driving. For example, over 90% of a long-distance trip will be spent on a highway following traffic patterns within a lane with the occasional lane change.
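As a toy illustration of the guess above (the coverage levels, the 10x-per-nine cost, and the ~350M base implied by "0.1% = 350,000" are all the comment's own armchair estimates, not real data):

```python
# Toy model of the "each extra nine costs ~10x" guess above.
# All figures are the comment's armchair estimates, not real data.
us_base = 350_000_000   # base implied by the comment's "0.1% -> 350,000"

investment = 1.0        # relative units: all investment to date = 1x
for coverage in (0.90, 0.99, 0.999):
    uncovered = (1 - coverage) * us_base
    print(f"coverage {coverage:.1%}: ~{investment:.0f}x investment, "
          f"{uncovered:,.0f} motorists exposed to uncovered scenarios")
    investment *= 10
```

So even at 99.9% coverage and ~100x cumulative investment, the uncovered tail still touches hundreds of thousands of motorists, which is the core of the argument.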

We regularize traffic more than your estimate though. Like 99.99% of the 300 mile drive I took yesterday was sitting in a lane.

The hardest part was that Google hadn't mapped a service drive, so it thought the adjacent service drive was the best route (which would be addressed pretty fast if you were trying to deploy self driving service in that area).

I don't think we get to level 5 very soon, but level 4 cars will have the ability to go lots of places pretty soon. If I overestimate driveway distances from yesterday, it's like 99.966% lane miles. There was some construction, but it was already well marked for human drivers.

Wouldn't they detect that the gates were down? Or are there still that many crossings without gates in the USA?

Where I live in the Southwest US there are many crossings without gates. I'd say very few of the ones outside of well populated areas have them.

Gates are only a suggestion - signals/gates malfunction so often you can find compilation videos of them on youtube.

The place I talked about had no gates, after the accident they added gates.

That shouldn't be the standard. It probably will be, but it shouldn't be.

Humans are terrible drivers. If a self-driving car got into half as many accidents as the average human, it would save millions of lives. And kill people, to be sure, but fewer on net.

I also think you could make a reasonable argument that all cars should be banned, right now, based on how many people they kill, but since I don't think that's gonna happen...

You are correct on the aggregate, but that doesn't necessarily work when applied to each individual driver.

I'm a pretty attentive and cautious driver. In the 20 years I have been driving I've been in one accident and that was because another driver was attempting to make an illegal left turn, came across two lanes of traffic, and t-boned me. So if self driving vehicles are only doing better than the worst human drivers, I'm going to be pretty hesitant to turn over control. I'd be in favor of that other driver that hit me using an autonomous vehicle though.

Everyone thinks they're an above-average driver. Half of them must be wrong, so the reliability of any individual's self-assessment is about equal to chance. And even if you're one of the genuinely good drivers—your record certainly sounds excellent—our laws should account for the fact that most drivers can't judge their own skill.

...and they might be right.

It doesn't seem impossible for there to be a long tail of driving (in)ability: most people drive pretty well, with a small fraction that are distractible or reckless enough to account for most accidents.

Just wait until self-driving "personality" becomes part of a car company's brand identity. I suspect that we will see no lack of cars that are designed to be terrible drivers. Of course they will call it "confident", "assertive", perhaps for some brands even "masculine". Cars could become even more like civilian tanks than they already are. I desperately hope that I'm wrong.

To be fair, humans are literally the best species on earth at driving cars

I think you’re right. For the AI we have today, it’s the equivalent of 1990. My guess is that it arrives as sensor arrays improve, wireless networks improve, and quantum laptops become available, in 2040 or so.

Think about it — most software is deployed cloud first these days, but one of the most complex computing tasks we have is relying on some black box computer.

> It's those edge cases that make me think real FSD (vehicles with no steering wheels) is a decade-timescale problem.

I'd be willing to bet it's a quarter century or longer problem. (Longbets, anyone?)

FSD is absolutely achievable, but the task is much bigger than some proponents give it credit.

Or you need a high-bandwidth coverage-everywhere comms system and a robust infrastructure for remote teleop, like what Starsky was building:


Then the steering-wheel-less car at least has the ability to call for help when it's only 98% sure of what to do instead of 99.99% sure. But obviously this kind of model only makes sense in a fleet context, not as an extension to something you own, so it requires greater shifts, at least for personal automobiles.

Most remote assistance is not live driving. A remote human adds labels and maybe even marks out a safe path for the vehicle so that vehicle has enough information to proceed.

Starsky's was. Did you see the picture of their console?

Driving in all conditions requires the computing power of a human brain. When we have computers that powerful, we can talk about L5.

If that logic were valid, then Garry Kasparov would have beaten Deep Blue in 1997.

Chess doesn't require processing rapidly changing visual, inertial, tactile, and auditory information while simultaneously performing fine motor coordination.

> but otherwise take a family off a bridge when the sun is blinding the camera?

I love these wildly over-the-top exaggerations.

When was the last bridge you saw without a barrier to prevent going off the edge?

What makes you think a vehicle vision system will handle "blinded by sun" any worse than humans already do?

Remember it's projecting and predicting the road ahead, even around corners and in the dark - so being blinded by the sun isn't going to cause it to swerve wildly off course and off the bridge - it can continue to use the data it had before being blinded (just as you do).

Also remember it has eight cameras it uses for this. The 16 year old new driver texting and talking to friends coming towards you at 60mph has two.

> When was the last bridge you saw without a barrier to prevent going off the edge?

That's a disingenuous question. Many bridges have wooden guards that will prevent you from going off if you make a small mistake, but not a large one.

> What makes you think a vehicle vision system will handle "blinded by sun" any worse than humans already do?

Humans can move their head, and block the sun with a hand, hat, or sunshade in the car. Humans have two eyes so if one is obstructed, the other may still get good vision.

> so being blinded by the sun isn't going to cause it to swerve wildly off course and off the bridge - it can continue to use the data it had before being blinded (just as you do).

Unless it's an incredibly well-trained AI, it may mistake lens flare for oncoming traffic, or a pedestrian. Car AI is not at the point where it has common sense to assume that lens flare is incorrect information.

"Blinded by the sun" is an understandable failure-state when you design a driving system that's reliant on cameras. From https://arstechnica.com/cars/2019/05/feds-autopilot-was-acti...: "Theoretically, it should be possible to detect the side of a truck using cameras. But it's not always easy. In some lighting conditions, for example, the side of a trailer might be hard to distinguish from the sky behind it."

How many of the 100 daily fatalities on the roads in the US do you think are caused by drivers not "seeing clearly"?

The goal of an AI driver isn't zero crashes. The goal is fewer crashes than human drivers.

Idk if it's that simple, what if you reduce highway deaths by 100 but rural roads go up by 10. If I'm primarily a rural driver that's not enticing.

OK then, let's not put it into full scale production until all types of driving are an order of magnitude (10x) less.

>The 16 year old new driver texting and talking to friends coming towards you at 60mph has two.

My two analog eyes see this perfectly well, why didn't the AI?


I think there's a lot unfair criticism of human drivers in this thread. I don't think we're at the point where we can call machines better than humans when it comes to these tasks.

The human driver's two eyes still have a wider dynamic range than any current affordable video camera. This makes a huge difference in difficult lighting situations like looking into the sun.

>What makes you think a vehicle vision system will handle "blinded by sun" any worse than humans already do?

Self driving software tends to have very poor object permanence.

It is mostly a hardware problem. Teslas can't do FSD with the current hardware because there aren't redundant cameras in each direction. One camera gets mud on it and your car runs itself into oncoming traffic. (It does that anyway right now, but even if the software bugs are fixed, this is still intractable.)

A real solution has to be capable of cleaning all cameras (probably faster than a windshield wiper would), because the distortions caused by even normal rain are hellish to train an AI to handle.

Tesla would be lucky if the extent of its challenges were having enough cameras pointing in the right direction. Tesla handicapped itself by only developing its driving systems with cameras and not lidar.

I don't disagree. But since adding more cameras to a car that already has cameras is a much more intuitive extension for Tesla to select, I used that as an example.

Right, because people have LIDAR installed already.

100% agree but i wonder about the standards applied.

Where I live the streets are tight and most drivers mediocre at best, unaware that cyclists might have right of way and about 1/20 doesn't seem to know the difference between different kind of light settings in the car. At least half the cars on my street have visible scratches/dents.

For me that is the standard to be beaten, not perfection. And the car could still signal and ask for a human to take over in some cases. Self-driving cars for me could also be much slower; no need to speed when you can read a book or play games.

So, essentially, you live in Italy where traffic signs and lights are nothing more than street art and some extra frills on the side of the road.

Silly generalization, please avoid.

It's a generalisation but not silly.

It’s not a generalization.

I'm loath to think about more complicated regulation, but I think Elon is pushing the boundaries of disclosure. That, and the fact that we have a lot of new populism in stocks, makes me think the SEC maybe needs an update.

I kind of like Elon being able to say 'whatever', but on the other hand, it's not his money now, it's crossed the threshold into public financing, so statements like L2 v. L5 are 'material' and saying the wrong thing is a 'lie' and 'bad'.

Again, the paradox is that he's going to be hosting SNL, which is kind of fun to see; on the other hand, it's going to be another occasion to hustle a stock or some kind of crypto, which is distasteful.

Waymo has given up on ever getting L5 capabilities. They have so much invested they can't just shutter it, and keep it going in hopes it attracts an interesting partnership or purchaser.

They did the same thing with Skybox imaging/Terra Bella.

> Waymo has given up on ever getting L5 capabilities.

Except Waymo has never tried to claim L5 capabilities. They are and have always been a strictly L4 technology and they have said multiple times that L5 is not possible to achieve.

I dunno, I've seen the Domino's [Nuro](https://www.nuro.ai/) car roaming Houston and it seems to be doing okay and I haven't heard of any issues... If it's good enough for my pizza, it's good enough for me -I'm mostly pizza anyway! :P

I think it's remote-controlled though?

No, fully autonomous! https://www.nuro.ai/technology

You are probably right about 2021, but I think it is very likely that when L5 does come, it will arrive suddenly and unexpectedly. The same as AlphaGo did. Yes, I am aware that AlphaGo was the result of years of effort. Obviously there is an element of “how do you define ‘arrive’“ here.

I don't think Tesla will have L5 in 2021, but the FSD videos on YouTube are pretty impressive, no?

They're impressive when they're on a straight road. But I think it's very concerning just how noisy and low resolution the data the cameras are working with is. You can watch some of the videos and the car won't recognise other traffic until it's literally within 20 meters of the car. The 3D positions of the cars are jumping around constantly. It will get the speed of traffic wrong and try to turn straight in front of someone that would cause an accident. Yes it's Beta but it's still a huge huge huge way behind what Elon claimed it would be by now. And he's doubled down on the current hardware being all that's required.

I am 100% unconvinced that Tesla can get anywhere with their current system. I don't see how their low resolution cameras can get the necessary information for Level 5 autonomy. It almost feels like a reckless brute force approach to the problem. "Just let AI figure it out". Every autonomous vehicle company is going to be using some sort of machine learning but they're going to be feeding in huge amounts of data. Waymo for example is using multiple LIDAR scanners to build up an accurate 3D model of the world surrounding the car. That's what you need. Not what is effectively guesswork by an AI.

We still don't have truly reliable face detection even after decades of research and we're supposed to believe that a car can reliably drive itself on shitty low resolution cameras alone.

I agree with most of this. I would be happy of Tesla (or anyone else) could just achieve L5 on highways so I could read a book on long drives on say the I90 in NY or I95 going to Florida.

That seems like a problem that could be solved with their current sensors.

Yes, they are, but a lot are also sped up. The videos that are raw show the car still struggling to see around corners, since the cameras are just not in a good enough spot. They really should have put cameras on the very front corners.

But TBH I'm not really THAT impressed that it can take corners and follow lane lines. It still doesn't handle very important edge cases, and the people testing and uploading videos are naturally biased to show how good it is, to push their referral codes.

I'm not anti-Tesla, I love my M3, but we need to be realistic about the future of what these particular cars can do. They're never going to be L5 with the current sensor suite, and they're certainly never going to be robotaxis. Who really wants that anyways? Last thing I want is some drunk bros to destroy my car and have to deal with Tesla support.

I agree, I would not loan my car out to be a robo taxi. As it currently stands, the people that can afford Teslas probably would not take the risk of damage to their cars for the extra income.

They are impressive for what they are, yes, because it's a hard problem.

But it's non stop interventions...

Do you have a link to Elon's FSD claim?

He definitely uses ambiguity to his benefit (eg "soon", "by fall/winter/spring/summer" (in which year?)), but I haven't heard anything about Tesla being L5 by the end of 2021.

This feels like bad faith. Musk has been making these claims every year since 2016. His coverage of these claims on HN is regular and thorough.

I went ahead and Googled one for you from 2020:

Tesla will be able to make its vehicles completely autonomous by the end of this year, founder Elon Musk has said.

It was already "very close" to achieving the basic requirements of this "level-five" autonomy, which requires no driver input, he said.

- July 2020


Do you mean this comment?

I remain confident that we will have the basic functionality for level five autonomy complete this year.

Basic functionality for level five isn't level five.

I mean, off the top of my head he claimed 1 million robotaxis on the road in 2020, and that the car would be able to drive LA to New York by itself, charging and all, in 2018.

Who are you trying to fool?

That's not what I asked about in my initial post.

Who are you trying to fool?

"Basic functionality for level five isn't level five."

I'm surprised you say this totally seriously. To me this means that level 5 will arrive this year, and then there might be extra new cool features, which aren't necessary.

Basic functionality for level 5 sounds both necessary and sufficient to me.

I view part of basic functionality of level five as "Can the car drive on all roads in at least some circumstances?"

That's still not sufficient, because it needs to do that in all circumstances (vehicles, people, animals, stuff, weather, etc.), but even if it can handle all the circumstances (also basic functionality), if it can't handle at least all roads, then it's not level five.

Right now, it's level two, and it's IMO going to be level two for a while. People can ask about level five, but I take any statements about Tesla's current autopilot and/or beta as statements about the level two system.

People can assume that those statements are about some future level five system, but I don't think that's an accurate assumption unless someone from Tesla says autopilot and/or the beta is now level three, four, etc...

If that is your understanding of the phrase "basic functionality for level 5", can GM announce that they are installing two cameras in each car and now they have basic functionality for level 5?

If basic functionality does not need to be sufficient, then even the tiniest necessary thing is basic functionality? Like installing a camera on each new car? In that case, Tesla already has basic functionality for level 5 in terms of both hardware and software.

Having hardware present could be the basic equipment required for level whatever functionality. Assuming the hardware had sufficient coverage/functionality.

Tesla does have basic hardware needed for level two functionality, maybe more. They could have basic software functionality needed for level two or more with the beta, but someone would need some kind of audit to confirm that.

> Do you have a link to Elon's FSD claim?

Google is universally accessible to everyone. Please don't be that guy who corners himself into a blind spot.

Musk claimed coast to coast self-driving trip by end of 2017.


I've searched, but I haven't found any comment from Musk about level five being ready by 2021.

edit - The closest is about the basic functionality for it being ready by the end of the year (see above).

end of last year.

2020 Q4 Earnings call, 27th Jan 2021 (according to https://www.fool.com/earnings/call-transcripts/2021/01/27/te... )

Musk: So -- and this is -- basically I'm highly confident the car will drive itself with reliability in excess of a human this year. [...]


Director of Investor Relations: [...] The next question is, why are you confident Tesla will achieve Level 5 autonomy in 2021? [...]

Musk: I guess, I'm confident based on my understanding of the technical roadmap and the progress that we're making between each beta iteration.

Saying he's confident about each beta iteration, and the FSD beta is currently level two, is not the same as saying Tesla will hit level five by the end of the year.

He is explicitly asked by the person feeding him questions why he is confident that they will reach level five this year. He is confirming that he is confident they will reach level five this year. Not confident that they will improve, confident they will reach level five.

He was asked that question, but he didn't say he was confident they would reach level five this year.

He did refer to the current FSD beta, which is a level two system, and expressed confidence in the technical roadmap and progress they are making between each beta iteration. Then he talked about the level of reliability they would need, capabilities of the system, and challenges they have.

I guess, I'm confident based on my understanding of the technical roadmap and the progress that we're making between each beta iteration. Yes. As I'm saying, it's not remarkable at all for the car to completely drive you from one location to another through a series of complex intersections. It's now about just improving the corner-case reliability and getting it to 99.9999% reliable with respect to an accident.

Basically, we need to get it to better than human by a factor of at least 100% or 200%. And this is happening rapidly because we've got so much training data with all the cars in the field. And the software is improving dramatically. The -- we also write the software for labeling.

And I'll say it's quite challenging. We're moving everything toward video labeling. So all video labeling for video inference -- and there are still a few nets that need to be upgraded to video training and inference. And really, as we transition each net to video, the performance becomes exceptional.

So this is like a hot thing. The video -- the labeling software that we wrote for video labeling, making that better has a huge effect on the efficiency of labeling. And then, of course, the Holy Grail is auto labeling. So we put a lot of work into having the labeling tool be more efficient when used, as well as enabling auto labeling where we can. Dojo is a training supercomputer.

We believe it will be -- we think it may be the best neural net training computer in the world, by possibly an order of magnitude. So it is a whole thing in and of itself. And this is offered potentially as a service. So if others need neural net training, we're not trying to keep it to ourselves.

So I think there could be a whole line of business in and of itself. And then, of course, for training vast amounts of video data and getting the reliability from 100% to 200% better than the average human to 2,000% better than the average human. So that will be very helpful in that regard.


If your response to "why are you confident about X" is "I am confident because...", you can't claim you didn't say you were confident about it.

That's not the situation though.

The question is "why are you confident about X", and the answer is "I am confident because we're making progress in these ways with Y".

If someone says by "fall/winter/spring/summer" and they don't mean the immediate upcoming instance of that season, they are lying. That is not being cleverly ambiguous, it is pure dishonesty.

They could also be wrong.

I think the assumption that it's specifically a lie really highlights where these ideas are coming from.

He made the claim during the Q4 2020 earnings call for Tesla in January 2021: https://www.fool.com/earnings/call-transcripts/2021/01/27/te...

An investor asks “Why are you confident in L5 self-driving before the end of 2021?” and Elon goes on to explain why he is confident that Tesla will achieve L5 self-driving by the end of 2021.

> Do you have a link to Elon's FSD claim?

Musk said he's 'confident' of reaching L5 autonomy by end of 2021 in the Q4 earnings call, which is on YouTube.

Even if I don't want to, I can't help but keep up with his marketing claims through the interests I have. My take, being aware of him but not focused on him, is that beta-FSD has already been released! ... I mean ... what? Maybe I've got it wrong.

It's been released to a small number of people, but... beta-FSD isn't level five.

The only specific statement I've read from him is that the basic functionality for level five will be available this year.

What's interesting to me is that people seem to attribute what news articles synthesize about his statements as statements made by him.


>news articles synthesize about his statements as statements made by him

Musk promised the coast to coast drive in 2017:


Then admitted he couldn't do it in 2017:


This isn't the media delivering a dishonest commentary. These are his words.

I'm referring to my initial post in this thread.

But yeah, we can have both.

Those were his words in that situation, and he was called out for it. In this situation, the media is delivering dishonest commentary, and trying to call him out for their commentary.

He made the claim in the Q4 2020 Tesla earnings call in Jan 2021, here is a transcript: https://www.fool.com/earnings/call-transcripts/2021/01/27/te...

But you see how he/they should be clearer about it?

To me, FSD is ... FSD ... what's FSD if it isn't level 5?

Sorry to the Elon fans (and I appreciate Tesla), but it's nonsense, and people are fools to put up with it, let alone pay $10k or whatever it is for the "opportunity" to have it later.

They could, but that's the nature of sales, and language to some degree. I drive on a parkway and park on a driveway.

As for the specifics, it's what's described on the Tesla website.
