Ok would anyone like to do a long term bet with me? I bet that Carmack/Atwood's bet will be too ambiguous to resolve, and, neither wanting to look bad, they will both pay out to charity.
I would bet 10k that the definition of Level 5 will change enough to make this a moot point, or that another new classification system will be introduced.
There are so many assumptions in this space that it's a fool's errand to try to list them.
I think the next paradigm shift will be in the cultural and regulatory perception, not in technology.
I have NO IDEA what that might mean, and I could be completely wrong! I accept that.
I think the technology has to demonstrate what it can do, and society has to come to terms with what the tech can do, and by doing those things we will painfully update our priors. Our existing prior beliefs will not be our new prior beliefs.
Changing classifications isn't going to affect the bet, because both of these gents know what they're betting on.
If I could make a $10,000 bet with John Carmack, I'd go the way Atwood has. No way there will be fully self-driving cars by 2030. That's only 8 years away. Fully self-driving cars need one of two things:
A) All cars are self-driving, and are networked, and centrally controlled. No exceptions.
B) The AI is smart enough that we should be worried about the ethical implications.
If an AI can react to terrible human drivers as well as a good human driver can, then I'd invite that AI over for dinner.
That's not sufficient - nowhere near sufficient. If an AI can react properly when kids are playing in a 5mph zone with cars parked on either side of the street, before their ball bounces into the street. If it can react appropriately to seeing a young kid clearly struggling to ride a bike. If it can understand that the tennis ball bouncing across the road will be followed shortly by an enthusiastic dog... et cetera, et cetera.
I don't need an AI to compensate for shitty driving (taking a roundabout the wrong way). I need it to drive so as to protect non-vehicles that it encounters.
A) also doesn't work unless all other road users (bikes, scooters, skateboards, pedestrians, etc.) are also banned from the roads. The other option is limiting the autonomous cars to tunnels or dedicated tracks, but then we're just reinventing trains again.
Why do you need A and B? Waymo is operating driverless taxis at Level 4 in Phoenix; what do you see as fundamentally missing (aside from expanding range) for Carmack to be able to win?
Basically, if you can find videos online of Waymo giving up and requesting backup for a situation not severe enough for humans to do the same, then you are not really at level 5 yet. If you cannot rely on the car to handle driving through any circumstance you could reasonably expect your hired chauffeur to handle, then you are not at level 5 yet. Examples of situations where humans give up are hurricanes and blizzards.
And there are plenty of videos of it stopping for situations where no human would ever be equally confused, plus a good handful of videos of the cars doing absolutely unacceptable things resulting in near misses.
Are you really that confident that by 2030 Waymo's roadside assistance program will be disbanded, or at least reduced to a killswitch, summoning a replacement car and a tow truck (because obviously the car must be broken if assistance was called)? Because if they need more than that, it sure isn't level 5 yet.
Those taxis can’t even reliably drive in the rain yet, hence Arizona for the staging grounds. So they’re a far cry from meeting the standard, unless rain is now considered a natural disaster.
I’ve driven in one. Felt like I was driving with a 16 year old driver. I’d never drive in the rain with it. Through an intersection with faulty traffic lights and police directing traffic? Nope. It never got onto the freeway, and it avoided complicated driving/merging decisions by driving through neighborhoods.
> IMO, minimum requirement for level 5 driving is merely to match human driver accident rates.
Minimum requirement for nerds, maybe. Minimum requirements for ordinary folks will probably be more along the lines of "will be noticeably better than me on average" and "will not make any given mistake more frequently (or more severely) than I would".
i.e. I expect people a lot of people would want "strictly better than me on practically every axis" as their minimum requirement, not just "on par with me based on the average accident rate". People just aren't going to put up with seeing cars make dumb mistakes that they would never make, even if the accident rate ends up being good on average.
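The gap between "same average" and "same everywhere" is easy to show with toy numbers. A minimal sketch (all rates are entirely made up for illustration) of how a fleet could match the human average accident rate while still being worse on an axis people care about:

```python
# Hypothetical per-category accident rates, in incidents per million miles.
# Every number here is invented purely for illustration.
human = {"rear_end": 2.0, "lane_change": 1.5, "pedestrian": 0.5}
av    = {"rear_end": 0.5, "lane_change": 0.5, "pedestrian": 3.0}

human_total = sum(human.values())  # 4.0
av_total = sum(av.values())        # 4.0

# Same overall rate...
assert human_total == av_total

# ...but the AV is worse on one axis people will never accept.
worse_axes = [k for k in human if av[k] > human[k]]
print(worse_axes)  # ['pedestrian']
```

Matching the headline number while losing on a salient category is exactly the "dumb mistakes I would never make" problem.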
Not in the courts, no. If switching to self-driving reduces accident rates in 99% of situations, but in that last 1% someone gets killed, then there's a lawsuit waiting to happen.
God it's going to be unbearable isn't it. There's going to be 5 more years of basically no progress on level 5, big tech will lobby to change the regulations so they can basically call anything self-driving and you'll have Tesla bros running around shouting "We did it! Fully Autonomous!" when really the cars are just doing a decent implementation of cruise control.
If we can take really good cruise control and change the environment sufficiently to the point we can claim "fully autonomous", then perhaps that's also a solution.
I'm not sure changing labels and regulations will affect things much. Level 5 is basically tell the car anywhere you want to go and then have a snooze while it drives. Either that works or not, subject to some safety level - say, fewer crashes than humans.
For me, the shift in perception isn't going to happen before the technological leap.
The best driver aid systems I've seen are still just well-executed implementations of lane keep/change assist and radar cruise control- technologies that have been available in luxury cars for ~15 years. None of this stuff is actually new technology. It took decades to make existing driver aid technologies reliable and cheap enough to make them available to the average new car buyer. I have no reason to believe it will get easier from here.
I also have no reason to believe that the legal dilemmas behind autonomous driving features will go away. One of two things must happen in fatal accidents caused by a car operating autonomously:
1. The driver (occupant?) will be liable. OR,
2. The manufacturer will be liable.
Option (1) is unprecedented, morally reprehensible, and would likely be overturned when some manufacturer eventually screws up on a Ford Pinto scale. Option (2) means that autonomous driving features will forever be hampered by the "keep your hands on the wheel!" warning bells that we already deal with today.
We already have big corporations that will risk huge liabilities for fatal vehicle accidents; they're called insurance companies.
So there's precedent for big corporations risking huge liabilities. Just instead of collecting an annual "insurance premium" they'll be collecting an annual "self-driving service subscription"
Thanks! Funny enough, the topic touches engineering, business, actuarial science, and people's life-or-death situations. All ending in courts inconclusively. A huge part of human endeavors put together and nothing came out of them. Very representative of our species.
I understand you to mean that there was not a dramatic result; I believe there were results - engineers and managers changed their goals to include more testing of components, and IIHS and regulators also added testing. Automakers do respond to forced recalls. Reputational harm is taken as seriously as direct fines and punishment.
If it comes down to ambiguity, it will be on the minutiae of what "Level 5" (full autonomy / no human in the loop) actually means in practice. Because in the world we live in, all these vehicles will be networked regardless. The marginal cost of having a human available for whatever fallbacks are needed approaches zero, which means that even if L5 were technically achievable there's little value in actually deploying anything like that.
But for sure count me with Carmack. My car is already driving me around my western suburban environment without trouble; zero-intervention drives are the rule now, not the exception. And it's been a long, long time since I've had to intervene for anything reasonably interpretable as a "safety" concern (mostly it's about stuff like navigation decisions, or the car is being a jerk at a merge and I don't want to be honked at, or it's being too slow pulling into an intersection and there's a lot of traffic, etc...).
Don't be fooled by the memery regarding Tesla. There are 60k+ FSD beta cars on the roads now, all of them with eager drivers wanting to post interesting failures to youtube. If these things were genuinely having trouble, we'd know.
It's not "done" for sure (in particular a lot of tuning needs to be done still), but it's at an effective level 3 already at least in my areas. And some work with remote recovery interfaces (something that Tesla doesn't seem to be focusing on) would get it to L4 in the kind of constrained environments Waymo and Cruise are operating in reasonably quickly, IMHO.
>Don't be fooled by the memery regarding Tesla. There are 60k+ FSD beta cars on the roads now, all of them with eager drivers wanting to post interesting failures to youtube. If these things were genuinely having trouble, we'd know.
You are completely out of touch with reality if you think the people driving "FSD" beta are eagerly trying to show fault. It's precisely the opposite. You think those who get beta are randomly chosen?
>It's at an effective level 3 already at least in my areas.
That's a loooonnng way from an unattended drive across the country.
You own a car and, I bet, a financial stake in Tesla. Am I right?
> You think those who get beta are randomly chosen?
Certainly not. We qualify with a safety score and wait for a new batch of cars to be added.
I genuinely don't understand the level of vitriol about this product. There are a lot of these cars out there. Given the demographics of this site, surely one of your friends has one already. Call them up and get a ride. See it for yourself.
I mean, everything looks terrible if you only watch greatest hits on youtube. There are literally whole subreddits dedicated to bad human driving, even. Yet... things work out. This is working far more than it's not, is all I'm saying. And the echo chamber effect has IMHO gotten a little out of control.
This isn't a hard technology to find for yourself. Go try it.
> If these things were genuinely having trouble, we'd know.
Hm, I see videos of pretty terrible FSD fails all the time - stuff like the car veering off into the other lane, trying to drive into bridge pillars, failing to spot pedestrians, and other fun things. It's quite impressive that the system is nearly there, but I suspect this is the famous final 20% which is gonna be a monumental challenge to overcome.
>It's quite impressive that the system is nearly there, but I suspect this is the famous final 20% which is gonna be a monumental challenge to overcome
The self-driving AI marketers will tell you it's 99% of the way there!
It will never be 100% correct. But that has never been the goal. The goal is to be 10x safer than humans. I don't know exactly when it hits that level, but at this point there doesn't appear to be any reason it cannot get there eventually.
> And some work with remote recovery interfaces (something that Tesla doesn't seem to be focusing on) would get it to L4 in the kind of constrained environments Waymo and Cruise are operating in reasonably quickly, IMHO.
It takes more than that. Just think: Uber's car had lidar and a safety driver, and still killed a woman. There's _a lot_ going on under the hood that separates L4 from L2/L3 drivers.
One thing I think Tesla's missing is a repeatable evaluation framework for how their software performs (a.k.a. what you need to convince yourself of the safety). As I understand it, they mostly test autonomous software by deploying it in ghost mode (current/future miles) instead of simulating on past miles. A lot of long-tail problems (which is what's hard about autonomous driving) are only visible on the order of millions of miles. Some cases you may never encounter at a high enough rate, and you may need to build synthetic data or use other tricks to teach your driver how to act.
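The "order of millions of miles" point falls out of simple Poisson arithmetic. A rough sketch, assuming independent failure events and a failure rate I've picked purely for illustration:

```python
import math

def miles_for_confidence(rate_miles, confidence=0.95):
    """Test miles needed to see a failure mode at least once with the
    given confidence, if it occurs on average once per `rate_miles`
    miles. Assumes independent events (a simple Poisson model)."""
    # P(at least one event in m miles) = 1 - exp(-m / rate_miles)
    # Solve 1 - exp(-m / rate_miles) = confidence for m.
    return -rate_miles * math.log(1 - confidence)

# A failure mode occurring once per million miles needs roughly
# three million test miles for a 95% chance of showing up even once.
print(round(miles_for_confidence(1_000_000)))  # 2995732
```

And that only gets you one observation; statistically characterizing the failure takes far more, which is why resimulation on saved hard cases matters so much.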
Consider construction zones: where are the lanes formed by the cones? Where should I enter, and where should cars come shooting out? Every construction zone is quite nuanced and contextual -- can you prove the autonomous driver can handle those situations reasonably? I think Tesla's software isn't close to being able to crack those driving situations. I mean, they still make basic mistakes with traffic lights (which are very challenging in and of themselves; how about lights with an arrow, or a light that's out?).
If you lack infra to save difficult cases and have no ability to generate synthetic situations and resimulate, it gets real dicey when you try to draw conclusions about real-world performance. Their simulation capabilities are also limited by the fact that they only use cameras; depth estimation / object contours are not accurate enough to do the kinds of sims that Cruise and Waymo do.
Without good ground truth it's really hard to do proper safety evaluation. I suspect Tesla would either need a huge breakthrough in vision technology (maybe they can use something like this https://waymo.com/research/block-nerf) or at least map places their cars are driving so they can properly assess rates to give themselves, regulators, and customers confidence that they are safe for L4. Otherwise, their "good performance" will just be a product of survivorship bias, and bad performance will really hurt people (this is kind of happening now).
Well, I remember the last time someone did such a bet with me. [0] It seems that, whenever the ‘predictor’ turned out to be wrong, they seem to have gone into ‘hiding’. I wonder why that is in my case…
So as for your offer, it's a no thanks and no deal from me.
Heh yeah well I would probably never actually do a long term bet with anyone over the internet, unless I was well known like these guys are, so they can't really hide.
That said I wish there was a good site that allowed doing bets like this online, where you put the money in up front, and deputize a list of people to be judges to decide the winner. (yes it would typically be shorter term than 8 years, and obviously there would have to be rules for dealing with what happens if the judges aren't all available, incentives to reduce the chance of that happening, and so on)
I imagine anti-gambling laws make it difficult to do such things. But I can wish. It would reduce the number of people making poorly thought out assertions as to the future.
Interesting. I wish it didn't require it to be to charity, and wish it had better ways of judging outcome, and wish it allowed any bet (too many requirements as to bet type). But still interesting.
I read the definition and it wouldn't be too hard to narrow it down. It's basically on whether you'll be able to get a robotaxi in a place like NYC, tell it to go anywhere you want, and take a nap while it drives. Which seems fairly clear cut. (The actual terms specify more than one of the top 10 US cities rather than NYC.)
I am not so confident that even in 8 years the outcome will be clear. Just like today Tesla is selling hot garbage and calling it "self driving", I am sure manufacturers will be selling cars that claim to satisfy the requirements of the bet, and even do so... sometimes. I am not confident they will work as reliably as the bettors here are intending, even if it is marketed as L5.
Edit: What I'm getting at is even though it is a friendly bet, they should define the terms of success / failure a little more clearly. Unless there is some governing body that grants L5 status?
Given that these gentlemen are known to have at least ordinary prudence, even in the face of a $10k loss, an operational definition might take the form
L5 = the person asserting that L5 has been achieved is willing to be driven by the vehicle, without access to the controls, through mixed conditions for X hours.
I feel that is pretty bad. It doesn't require the system to actually work very well. It doesn't say anything about the system's effectiveness, just its safety. Still, effectiveness might come down to abstract things like:
L5 = vehicle can achieve similar travel times and destinations to average human driver in mixed conditions.
What if the car "works" under all conditions, but moves extremely slowly, takes huge detours to avoid tricky roads, etc.? In fact, what is stopping this "technically correct" approach: the car calls a human-driven tow truck, gets hitched, and is towed to its destination?
I don't really care for the L5 definition. Humans can't drive on all roads in all conditions even if their vehicles are theoretically capable of it. There are certainly weather conditions I don't want to be driving in. And there are certainly some unpaved roads I don't want to be driving on even with high clearance 4WD.
The realistic definition would be that if a human could not reasonably be expected to drive in it (fog/smog with less than 6 inches visibility, hurricane, blizzard, military invasion) then the cars are not required to drive it in. And perhaps the cars can accept a circumstance or two where humans can't drive, in exchange for one or two where humans could, but the cars can't. But it would need to be circumstances that humans can predict/understand. Like if the car won't drive if the countrywide car-to-car comms network is down or something.
Of course. I meant this in response to dharamon above talking about manufacturer's misleading claims. But I see their edit and I think I also misunderstood your point - in the context of this bet the selected criteria are probably good enough.
If the car moves extremely slowly in a major city, it would get banned after a handful of traffic jams, and so wouldn't be commercially available. And anyways no company would risk the bad PR of launching a car that tops out at below the speed limit.
If the car calls itself a tow truck, it has obviously failed to drive itself.
If the car is programmed to avoid tricky roads, it can't "drive everywhere in all conditions" per SAE.
Exactly. "All conditions" means all conditions. Once a self-driving 4WD truck can be relied upon to get me over Echo Summit pass on US-50 during chain control / whiteout blizzard conditions, then maybe, just MAYBE I'll start to take it seriously. "Full" self driving, at least as of 2022, relies too heavily on all cows being spherical.
If a company falsely claims level 5 they can be disproved pretty easily by some posts on twitter. If the stoppages are too rare to even show up there, then it might as well be level 5.
As someone who has worked on self driving for a few years, and holds some patents in the space - I do not feel at all qualified to make a guess. It is always a little odd to me, then, to see how many folks are. Can we all tone down the confidence? Skepticism is healthy, but armchair expert-ing is not.
Carmack is a genius, and 8 years is a long time. If we look at what cruise is doing in SF, it certainly does not seem impossible.
I like that people are seeing this as 8 years from now when the original bet was from 2003. That there is still a reasonable chance that this could happen by 2030 and by that point was predicted 27 years in advance. It will just add to the Carmack myth if it goes well.
Personally I'm not sure we will see it but there is nothing that jumps out and says that it is impossible at the same time. 8 years ago I would have said it was a no brainer that we would be there, nowadays not so much. It feels like with a lot of things, the last 10% takes 90% of the time.
I have no skin in the game and am an armchair speculator at best, but it is fun to wonder.
Let's just agree to reconvene on this in 8 years' time. It isn't that far away.
Wait where did that “original bet was from 2003” fact come from? Atwood’s post doesn’t mention 2003 at all, and looking at Carmack’s twitter thread didn’t bring anything up for me either.
The terms of the bet require the car to navigate through New York in the winter. Definitely a lot of challenges there that aren't present in Austin. I think 2030 is a nice contentious number to bet over. Far enough in the future that there could be key breakthroughs to make it work.
The terms appear to actually suggest any of the top ten most populous cities in the US, plenty of which are in places where it doesn't snow.
Atwood's bet is extremely pessimistic, and I don't blame him. I don't even think that weather is the biggest issue to self-driving cars: dealing with roads full of human-driven cars will be harder to handle. It's not that AIs can't learn to aggressively accelerate onto a crowded highway or aggressively turn out of a busy stop sign, it's that the companies running the AIs will be unwilling to accept the legal liability that comes with being responsible for a fleet of aggressive AI drivers.
Interesting. I interpreted "work in any of the top ten cities" as meaning I could pick any city and it would work. I can see the other interpretation. Hopefully they agree on the parsing!
What do you mean by "operate a self driving car"? That's the point of SAE5: You can't really operate a SAE5 vehicle other than telling it where to go. No steering wheel, no gas pedal, no brake pedal, no nothing. There is no way you could argue that the owner of a SAE5 car is responsible for it. You might as well be legally responsible for the bus, train, airplane you ride in.
Operate as in, put fuel in it, turn it on, tell it where to go. You can operate a dishwasher without washing the dishes yourself.
People are required to buy insurance to operate a vehicle. I don't see why this needs to be any different.
Companies go out of business. How can a defunct company be held legally liable? Companies and employees can easily die over the 20 year lifetime of a car. Liability can only be placed on the operator.
Companies release new models every year. They get safer every year. Older cars will be more expensive to insure. Will a company want to pay the ongoing insurance cost of your 20-30 year old self-driving vehicle? What if they can't afford to? Will you still be able to use the car you bought despite the company not being able to insure it, or refusing to insure it (and wanting to force you into a new vehicle purchase)?
All these problems magically go away when you make the operator responsible. They are able to make the choice every month whether they want to continue to pay to insure it, and are legally required to have insurance even if they die in the accident.
Some brands will have better safety, and buying insurance for them will be cheaper. Everyone with the same car model will have pretty much the same insurance rate, unless you are negligent when it comes to maintenance.
Insurance companies are more than capable of calculating these costs. Especially over 10-20 year time frames.
If this sounds crazy to you, it's not that much different than buying liability insurance for pets. Owners are legally responsible.
If you don't want to be legally responsible, don't buy a self driving car. Or a dog for that matter.
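The underlying arithmetic insurers would run is not exotic. A toy sketch of premium-as-expected-loss, where the claim probabilities, costs, and loading factor are all hypothetical illustration values, not real actuarial figures:

```python
def annual_premium(claim_prob, avg_claim_cost, loading=0.3):
    """Toy premium: expected annual loss plus a loading factor covering
    the insurer's expenses and profit margin."""
    expected_loss = claim_prob * avg_claim_cost
    return expected_loss * (1 + loading)

# A newer model with half the claim probability gets a premium
# half the size - which is how safer software would show up in
# the owner's monthly bill.
old_model = annual_premium(claim_prob=0.04, avg_claim_cost=20_000)
new_model = annual_premium(claim_prob=0.02, avg_claim_cost=20_000)
print(old_model, new_model)
```

Per-model fleet data would make these rates far easier to estimate than today's per-driver rates, since everyone with the same car model runs the same software.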
No sane person would take that risk. And even if regulations made it a purely civil matter when the car runs over some kid, I'd fully expect the insurance company to go after the manufacturer.
Drivers are not qualified to assess the safety of a self-driving system. These cars should simply not be allowed on the road unless manufacturers take liability.
Well, Waymo is already working in San Francisco. Winter in New York adds traction control and additional obstacles. Traction control is already mostly the vehicle's job anyway, what with anti-skid systems. It can go way beyond that. Stanford has an autonomous, electric, drifting DeLorean.[1][2]
2030 seems impossible to me. In NY, you might not be able to see the road surface at all due to snow. Curbs and other landmarks that are necessary for Waymo to work (they map _everything_) will be covered. Seems unlikely to account for all conditions necessary for level 5 in 7 years.
Yeah. The against side of this bet seems like a no brainer. Is it possible (maybe even more likely than not) that there will be a fully autonomous system that will be available in that timeframe that can navigate many limited access highways in certain weather conditions by then? Absolutely.
And that actually looks very interesting to me for long drives. I actually care less about handling local driving including urban driving near me. I want the long highway drives to be handled.
Yes, 100% autonomous is another kind of interesting and opens up use cases like dropping me off at the airport. But that seems really hard.
Yes, snow is worse in every possible way than rain (unless the rain then freezes to the road). We are so far away from that though.
Don't forget, for L5 it also needs:
- Do the right thing in construction zones where signs and/or barriers may indicate that following the lines is wrong (including flag-men)
- Detect when an out-of-sight emergency vehicle with sirens running may be approaching the next intersection and slow down
- Go 25 "when children are present" and 55 when children are not present (though TBH, humans are mediocre at this, and L5 only requires human-level ability)[1]
To my knowledge, there's nothing in the prototype stage that can handle the above in sunny weather. Going from "no prototype can do this in sunny weather" to "commercially available in 8 years" seems unlikely even if we restricted it to "only in areas with no snow".
> The terms of the bet require the car to navigate through New York in the winter.
By NY, do they mean Manhattan, New York City, or the actual state? Because much of Manhattan is gridlike and almost made for self-driving cars. Though the sheer number of pedestrians, cars, and road work could make it especially challenging.
Navigation in much of Manhattan is straightforward. Driving at high traffic hours is certainly not in large areas of the city. I hate driving in Manhattan and will do just about anything to avoid it.
My reading of the terms of the bet is major cities, which means more than one city with a population in excess of 250K. In my reading just Phoenix and Tucson would be sufficient.
We need solid Level 3 self driving, at least on freeways. The commercial products are stuck at level 2. None of those systems have achieved "does not run into stuff". That's not good enough. People just don't pay close enough attention when the system has control.
Waymo and Cruise have level 4, since they already have driverless taxis in use. Only in some conditions, true, but in actual use. Waymo just got approval to start charging fares, so they're out of experimental mode.
GM Ultra Cruise will supposedly allow real self driving on some freeways, at least in certain conditions. It's not available for consumers yet, due to launch next year.
I think they work in most conditions. How someone can look at Cruise and Waymo already in operation today and think they won't be generally available in 7 years is amazing to me.
There's a huge difference between "works fine 95% of the time in a place with very cooperative weather" and "works 99.99% of the time no matter the weather"
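To put rough numbers on that gap (all figures hypothetical, and assuming trips fail independently), consider a year of commuting:

```python
# Expected interventions over a year of commuting, for two per-trip
# reliability levels. Trip counts and rates are illustrative only.
trips_per_year = 2 * 250  # two commute legs, 250 working days

for per_trip_success in (0.95, 0.9999):
    expected_failures = trips_per_year * (1 - per_trip_success)
    print(per_trip_success, expected_failures)
# 95% per trip means roughly 25 incidents a year;
# 99.99% means well under one a year.
```

Roughly two orders of magnitude separate "works fine most of the time" from something a commuter would actually sleep through.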
I mean there are times when human drivers are insufficient for the conditions.
I disagree with all three assertions above - the difficulty increases exponentially for the last few percentage points, just like the energy required increases towards infinity as you approach the speed of light.
California is a small part of one country.
All we will get out of self-driving cars is more traffic jams, more consumerism, and less security in our lives, because you might be locked out of an essential service if you didn't sign a license agreement, made an offensive tweet, or replied to a spammer.
California weather is very forgiving even if you account for places like Tahoe. AFAIK, none of the self driving companies have much experience with scenarios like New York in winter.
To be commercially available for passenger use in major cities at level 5, regulators have to also agree it's level 5. It's not just a technical problem.
Not really, and they aren't level 5. Essentially the bet is that getting to 5 is harder than people think, and getting to commercially available SAE5 is going to take (at least) longer than 7 years. Bear in mind all commercial systems now are level 2 and testing features for 3. Waymo/Cruise are testing 4.
Unfortunately I tend to agree with Atwood on this one. Machine learning systems often demonstrate amazing results initially, but then seem to lag in that “last mile” compared to their human counterparts. The most irritating example that comes to mind for me is my HomePod - more than a decade after Siri was commercially released, it still regularly wakes up and interrupts my meetings with “I’m sorry I didn’t get that” when I say “I see”.
I think self-driving technology is probably in the same boat. It’s amazing what it can do, and it is definitely useful today in many contexts. But it will be a long time before it is able to achieve parity with human drivers in the enormous number of different contexts that humans are able to handle with relative ease.
For any engineers working on self-driving cars, though, I hope you prove me wrong! I would love it if my next car came with a level 5 self-driving option.
If you look at AI-type computer technology in general, it tends to get steadily better while human performance remains fairly constant. So in chess or Go you can see the computer players steadily climb the rankings until one year they get better than humans. A few years before that, some people say these things are rubbish and will never get there, but the rate of progress is the thing to look at.
With self driving, things like Teslas can drive from A to B but are a bit rubbish and need interventions to avoid crashing, but I think you'll find the level of that keeps improving year on year till one day they get safer than humans.
8 years seems like a pretty short time, especially considering all the technical and legal hurdles that are still basically just being kicked down the road for the time being. I'd like to see fully autonomous cars (I'd like to see cheap, accessible, ubiquitous public transit first), but I'm not holding my breath (for either of those things).
Curiously, back in 2016 there was a bet very similar to this one posted on Long Bets (https://longbets.org/712/) -- the bet required commercial self-driving cars capable of driving through Las Vegas by 2024. The timeframes are even the same (8 years).
I think the requirement that the "vehicle performs all driving tasks under all conditions – except in the case of natural disasters or emergencies" will enable both sides to declare victory. I'm sure that there will be some other unforeseen edge case where it doesn't work even if for all intents and purposes L5 is reached.
Edit: Or to put it more accurately, enable Jeff to declare victory even if he loses.
Yeah i mean they are both honorable guys AFAIK and wouldn't try to welch in the event of a technicality but I can easily see the industry get to a place where there's a long period with debates as to whether L5 was actually attained.
Interesting bet. I don't really know much about self driving car technology. That being said, there are construction zones I've driven through where they didn't really cover the old paint lines, such that it's a tossup which lane people choose. There are busy roads I've driven through with the painted lines faded out of existence, on an S-bend curve that simultaneously adds more lanes, which still catches people by surprise even after it was repainted. Or the NYC free-for-all where all the cars drive together like a school of fish, avoiding all the obstacles jutting into the road yet never quite blocking the lane. And snow == anarchy.
Is there any indication self driving cars can handle these scenarios?
Last year, I sat by a construction zone along a popular self-driving circuit and watched. At least a few times the car made it through without the safety driver taking over, but most of the time the safety driver took over.
This was in the summer, but it was a poorly marked lane shift situation.
This is an easy thing to bet against, so much so that I would raise the bet too. I don't think the current technology is capable of L5 at all, but obviously a lot of people feel differently.
I think the hardware is capable of L5, but the software is still going to take another 20 years.
Humans only have two eyes that can look in just one direction at a time, and we achieve L5. Yes, we get into crashes, but that's because humans do stupid things like text while driving, drive hyper-aggressively, drive drunk, or do simply careless things like change lanes without even looking first, none of which are mistakes a computer would make.
On a side note, I am happy to see that all this hubbub over self-driving has resulted in some really sexy hardware designs. Just thinking about how the Tesla computer is so optimized that they are measuring instructions per pJ really satisfies the inner side of me that wants to hyper-optimize everything.
For that I'd propose a "taxi Turing test": a vehicular automation engineer rides in the back wearing augmented goggles that block out where a driver may or may not be sitting and distort all voice audio to sound robotic. The engineer requests navigation through various routes and environments, or whatever testing they'd do for a normal L5 evaluation. Not sure I see the point, but it's doable.
> I think the hardware is capable of L5, but the software is still going to take another 20 years.
Doesn't this inevitably mean the hardware isn't capable either, just with extra steps? In 20 years, with tens of millions more lines of code, we're going to need more hardware resources to run it (particularly in near real time) than today's hardware provides.
I agree with you, to be clear. I am just pointing out that 20 years worth of software is going to be a monster to run at the timings that will be required. So we could hit a hardware limitation between then and now on top of the hard software problems that must be solved.
> Humans only have two eyes that can only look in one direction at a time, and we achieve L5.
Humans use parallax to perceive depth via the overlap of both eyes, which as far as I know isn't used by automated vehicles, for reasons unknown (patents?). Automated vehicles using visible-light cameras are mapping the world in 2D, while humans perceive the world in limited 3D.
Stereo vision is oversold in humans. You can easily drive with one eye. We use a lot of information to estimate depth, including the relative size of things as well as "structure from motion", the relative movement of things over time.
Tesla does estimate depth using structure from motion (and probably other cues). They call it virtual LIDAR in presentations.
Who does? Parallax requires the cameras to be very carefully angled (toed-in) so they overlap in a very specific way; it isn't simply two cameras recording the same scene straight on.
Most automated-vehicle companies using visible-light cameras have them pointed straight forward, so they aren't using parallax the way a human does.
Parallax only requires you to know the separation and angles (and fields of view) of the cameras. Two parallel cameras at 1 meter separation and 90 degree fields of view are fine. You could change the separation and angles every frame, if you wanted, as long as you always did your calculations from the correct position data.
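A minimal sketch of that calculation, assuming two forward-facing parallel cameras (all numbers are illustrative, not from any real vehicle): for a rectified parallel-camera pair, depth follows directly from the baseline and the pixel disparity of a feature between the two images.

```python
def stereo_depth_m(focal_px: float, baseline_m: float, disparity_px: float) -> float:
    """Depth Z = f * B / d for two parallel, rectified cameras.

    focal_px:     focal length expressed in pixels
    baseline_m:   horizontal separation between the camera centers
    disparity_px: horizontal pixel shift of the same feature between images
    """
    return focal_px * baseline_m / disparity_px

# A feature shifted 20 px between cameras 1 m apart (f = 1000 px)
# sits at 1000 * 1.0 / 20 = 50 m away.
print(stereo_depth_m(1000, 1.0, 20))  # 50.0
```

Note how the estimate degrades with distance: at 50 m, a single pixel of disparity error moves the answer by a couple of meters, which is one reason stereo alone gets less useful at highway ranges.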
It is a tough space, but Magic Leap is a poor reference because that’s a very different space.
I have no idea how close we are to level 5, but I suspect the challenge is closer to “invent a new algorithm that works fine with less data” than it is to “make new hardware”.
I think the intent is to clarify that the vehicles should be available to the general public, rather than just in some DARPA lab somewhere or something.
> L5 in major cities is a bit strange for this, since by definition L5 isn't bound by any restrictions, including geography.
I suspect it is just easier to define a "win" that way. L5 on a test track or L5 on a quiet freeway are very different challenges than what most people think of when they think of L5: Driving in downtown [major city] without intervention.
L5 driving downtown for an entire day with zero interventions, in a consumer vehicle, is really the gold standard for this tech. Nail that and you're done.
It's also just a weird definition in general. I'm a human, and seemingly a competent driver (zero accidents or moving violations), yet there are still regions, specific roads and definitely road conditions where I will say "nope, not gonna try that." Does that mean I am not L5?
Yes actually, I don't think humans are L5. L5 automation is supposed to be the holy grail, it's supposed to be far superior to human driving. Widespread L5 automation is expected to save countless lives every single day.
Isn't the restriction to availability in major cities contradicting the requirement of fully autonomous driving in "all" conditions? Driving in a major city with (mostly) intact infrastructure is much easier than e.g. driving in sand or mud.
The bet says within cities, and humans struggle with those sorts of extreme conditions too. Natural disasters and emergencies are excepted. I suppose it just needs to do approximately as well as a human.
I'm with Jeff, self driving cars have been just a few years away for well over a decade now. I think it's one of those cases where the last 10% of the problem takes 90% of the effort. If we built cities and the road infrastructure around self driving capability that might work, but currently cities aren't built that way and that wouldn't meet the definition of Level 5 anyway.
> I suppose it just needs to do approximately as well as a human.
I think that's a fallacy. We allow humans to injure and kill each other in "accidents" with limited liability because we are human; I can't be expected to 100% reliably decide in a fraction of a second what to do when a car cuts me off, so if I hit it, or avoid it and hit another vehicle or pedestrian, I don't go to jail.
But a computer can be expected to make that decision fast enough; if it can't, it's because the designers or programmers made errors that the manufacturer missed because they rationally calculated they were not worth catching or fixing.
There's some level of safe driving it's unreasonable to expect from self-driving cars – but it's not the same level as humans.
> But a computer can be expected to make that decision fast enough
I don't think this expectation is reasonable. It makes sense only if you conceive of humans as flawed computers. But certainly it is an expectation people have, including a large segment (if not majority) of HN readers, and this expectation is absolutely slowing development and adoption of the technology.
> I suppose it just needs to do approximately as well as a human.
From a practical/logical sense this is true, but I'm not sure it would work out that way. People would never accept FSD cars with accident/injury rates merely comparable to humans'. FSD would have to be at least 10x better than human drivers, maybe more. Psychologically, we are much more risk-averse when not in control. It's why people are afraid of elevators but not stairs, despite a 50x difference in safety.
Historically computers go from "barely able to do X", to "better than any human could ever be" very quickly. I'm with Carmack on this bet.
The more interesting thing to me is: will people accept FSD cars that tell them "no"? For example, we're about to get freezing sleet here, and an FSD car may very well decide "this is an impossible situation to drive safely" and refuse to move.
And it may very well be right; most people way overestimate how good they are at driving in very adverse conditions.
You can probably manage to have 10x less chance of a serious accident than the average human by being sober, calm, attentive and caring about braking distances anyway. "Approximately as well as a human", measured by a pool of human accidents dominated by serious driver failings is a pretty low bar.
> humans struggle with those sorts of extreme conditions too
People drive in a wide range of weather. If you need to check the weather in a major city every day to see if your car will drive itself, your car is not self-driving (IMO). If that weather happens more than 5-10% of days every year, that's not an "extreme condition".
That's what I'm thinking would be required, infrastructure changes and not solely 100% onboard technology. 8 years is not enough time for cities to plan for and build enough of that infrastructure (not to mention, what to build?) for an automaker to commit to a level 5 car when they are already struggling with L3.
In a snow storm on icy roads it’s safe to assume the speeds would be as slow as safety dictates, just as the speeds should also be for humans in the same conditions.
Also, computerized traction control is amazing, especially with electric drive and no transmission lag. When driven by a human it's not infallible, but if the car doesn't have the flawed behaviors that humans sometimes do, it could be pretty competent. They'll have to get it to the point of doing without lane lines too, of course.
As someone who lives in a region where snowfall and icy roads are common I know for a fact that traffic regularly flows at highway speeds in those conditions.
Level 5 implies "all conditions", so it follows that an autonomous system should be able to safely operate alongside human drivers. If a level 5 system can't keep up, it would become a major hazard to non-autonomous vehicles.
Driving the speed limit or higher on an icy road is wildly reckless, it would be insane to program a computer to do that, regardless of whether you could.
By "highway speeds" do you mean appropriate for the weather, or going at the speed limit?
A car should be able to see better than a human in those conditions, and at an appropriate speed it could be reasonably safe. It probably won't go at the speed limit, but a human can't safely drive at the speed limit either.
I think by definition a level 5 system can do in winter exactly what it can do in ideal situations. See the chart on the linked page... "The feature can drive the vehicle under all conditions".
I am genuinely curious what technology (assuming it grows out of today's tech) a car would need onboard to safely operate at L5 under all conditions.
It won't. I am betting with Jeff Atwood. In any case, at the current rate of inflation, in 2030, 10,000 dollars will be the price of a Pumpkin Spice Latte...
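For what it's worth, the compounding needed for that joke to come true is easy to check; the starting latte price is assumed, purely for illustration:

```python
# What constant annual inflation would turn a ~$5 latte into a
# $10,000 one over the 8 years to 2030?
latte_now, latte_2030, years = 5.0, 10_000.0, 8
rate = (latte_2030 / latte_now) ** (1 / years) - 1
print(f"{rate:.0%} per year")  # ~159% per year, i.e. full-blown hyperinflation
```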
I have seen the "last 95% effort" mentioned in many comments here, but the experience with deep learning over the last 10 years is that algorithms get exponentially better at training the same task to a specified precision. Since researchers need to run lots of experiments, we just need time; with the same effort, the algorithms will keep improving at the pace they have so far, similar to Moore's law (just faster).
Regarding the bet I wouldn’t bet against John Carmack (although I’d love to see human on Mars :) ).
Solving self-driving in a select group of cities with permanent 5G connectivity, high-resolution GPS, and $30k worth of sensors is a massively easier problem than solving "Tesla FSD".
For this bet cars can deal with unusual circumstances in all sorts of suboptimal but harmless ways. Pulling over, stopping, going very slowly and cautiously is all allowed. You’ll still get to your destination safely.
> Pulling over, stopping, going very slowly and cautiously is all allowed
Not according to the law though. If these "L5 cars" are obstructing traffic on the regular there's going to be a big problem. And generally, this wouldn't be L5 driving.
> Solving self-driving in a select group of cities with permanent 5g connectivity, hi resolution gps, and 30k worth of sensors is a massively easier problem than solving “Tesla fsd”.
And yet I don’t trust that we’ll be there by 2030 either.
> For this bet cars can deal with unusual circumstances in all sorts of suboptimal but harmless ways.
Stopping or slowing down isn’t always a “safe state”. Stopping in some situations (like on a highway) is the opposite of harmless.
What about infrastructure changes? We are using cars in a really shitty way currently. We have the technology to allow cars to go extremely fast, why are we limiting them?
For instance, if you could make a highly aerodynamic truck with huge, beefy tires and put it in a tunnel with very little atmospheric drag, you could potentially go up to around 300 miles an hour. If you layer the infrastructure with sensors, the car itself doesn't need to be self-driving; it simply needs to communicate with the sensors and stay in its 'track'. No human would be able to drive within such tolerances, but a program can do that quite easily. All this can be done with current technology.
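As a sanity check on the tunnel idea, here's a rough drag-power estimate using the standard drag equation; the drag coefficient, frontal area, and density figures are assumed for illustration, not measured:

```python
def drag_power_w(air_density: float, drag_coeff: float,
                 frontal_area_m2: float, speed_ms: float) -> float:
    """Power needed to overcome aerodynamic drag: P = 0.5 * rho * Cd * A * v^3."""
    return 0.5 * air_density * drag_coeff * frontal_area_m2 * speed_ms ** 3

v = 134.0  # ~300 mph in m/s
sea_level = drag_power_w(1.2, 0.3, 8.0, v)        # roughly 3.5 MW
partial_vacuum = drag_power_w(0.12, 0.3, 8.0, v)  # 10% density -> ~0.35 MW
```

Power scales with v cubed but only linearly with density, so thinning the air in the tunnel buys back most of what the speed costs.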
There's no way. I'm basically trailer trash, so take this with salt and tequila, but Level 5 is gonna be a 95% problem. You can get to 95% by spending gobs of money, but to get the other 5%, you're going to need exponentially more money PER PERCENT. And nobody is going to want to invest that much.
Trailer trash or not, sometimes it takes the kid to point out the nudity of the emperor. This hick from Indiana would take Atwood's side of the bet against any that care to pony up the money for the other side. The only thing about this that surprises me is that this post isn't from, like, five years ago. But Carmack took this bet within the past week? I just don't think we have the evidence to suggest this is realistic, and plenty to suggest that it's a long way off. We do seem to have the evidence that if it happens, it probably won't be Tesla doing it.
EDIT: ah, perhaps Carmack knows better and is simply willing to write off $10K for the lulz (from the TFA): I'd like to thank John for suggesting this friendly wager as a fun way to generate STEM publicity.
We don’t at all have evidence that Tesla can’t do Level 5 within the timeframe of this bet. Tesla is the farthest along of anyone who’s trying self-driving. Waymo and Cruise are geofenced, and still don’t work much better than Tesla. Aside from those, Comma.ai is doing seriously impressive stuff (sneaking up on Tesla tbh) on a way, way smaller budget, with less hardware, and notably, they’re going for a vision-only stack this year. Judging by what Comma’s been able to accomplish with basically smartphone hardware (maybe one step up from that), I’d be shocked if Carmack loses this.
How long have we had commercially available voice recognition for? And why does it still not work ~25% of the time? I'm with codinghorror on this, there's so much interpretation that needs to be done of other drivers that I don't see how this will work.
I am not so sure. Think of the potential value to be unlocked: It will radically change the way real estate is planned. If cars drive themselves, you can switch over to a "Car on demand" model for most current vehicle owners. Why would you need to waste space on parking lots, single garages etc. You also unlock the ability for more people to live further away from work.
It has the potential to extract a massive amount of value out of other parts of the chain as well. Aside from software (which in my opinion just steals existing value from everyone else), pretty much everything else is minuscule growth. What is the next leap for maintaining growth of the economy? Extracting the value out of these blue-collar jobs (trucking, taxi driving, other logistics).
Combine this with the rise of simpler designs in EVs + less car ownership you also can bring in mechanics and repair into your value chain and extract that value as well.
You just outlined the advantages to be had should such come about. But am I missing the part where you explain why you think this has any chance of coming to fruition? Because that's what parent is on about. I mean, if a frog had wings it wouldn't bump its ass when it hops. But no one is explaining how frog wing technology has taken (if you'll pardon the pun) some big leaps the past few years.
It would be way cheaper and more practical to just to have everyone work from home that can or live close by to where they work or have cities invest in better public transport and biking infrastructure.
Sure none of those are as cool as the JohnnyCab but they already exist so require no R&D.
Sure. All it would take is razing LA to the ground and rebuilding it from scratch.
I don't think I'm exaggerating, either. In a city plan like that one, any public transit route is going to have, even at optimal configuration, simultaneously too many stops and too few:
* Too many stops, because the train or bus spends more time stopped than it spends moving, so rides across the city take forever.
* Too few stops, because you still have to walk a mile to actually get where you're going.
Fixing this requires a walkable city, which can be summarized as "don't spend half the land area on parking and stroads."
Lots of people -- including me -- don't believe that it's possible without significant technological advances that look to be extraordinarily expensive. ie open R&D problems.
I will say that if anyone can do this, it will be Waymo/Google. Unlike Tesla, it's not a fraud.
That said, it's notable they're doing this in Arizona. Limited rain, no snow, clear weather all the time.
>> "Car on demand" model for most current vehicle owners
I don't see that. Cars are not just transport. They are fashion statements. They are virtue signaling. They are portable storage lockers. I just don't see everyone who owns a car now giving all that up just because the car can drive itself. The traffic/congestion implications of "car on demand" are also staggering, basically doubling the number of trips made as each car delivers itself empty to the next customer. And further reducing the cost of car ownership will only pull more people away from mass transportation/biking/walking.
There is so much wrong with your statement. First, Gen Z doesn’t care nearly as much as previous generations about driving and what vehicles mean for status. Expect that trend to continue especially when ownership goes away.
And as far as “doubling trips”, the car won’t have to drive far to find its next pickup and it may not have to do empty pickups at all with the equivalent of Lyft Line.
If we don’t need parking lots and spaces we can use that real estate for more mass transit.
Also there are numerous other complications that get glossed over. How many L5 cars are going to be available in my area on a popular holiday travel weekend that have a roof rack for kayaks and a 7 pin trailer connector and electric brake controller for my camper? Not to mention giving up the other customizations I enjoy with my current vehicle - removed rear seats for more space/less weight, dash cam, winter tires, etc.
I suspect if self driving comes to fruition it will be a mixture for many people. I can definitely see our 2 car household just owning one car that fits our niche use cases, and then one on-demand cars for daily work trips and filling in the gaps.
> How many L5 cars are going to be available in my area on a popular holiday travel weekend that have a roof rack for kayaks and a 7 pin trailer connector and electric brake controller for my camper?
Weekend reservations. Just like people do with kayaks today.
> Not to mention giving up the other customizations I enjoy with my current vehicle - removed rear seats for more space/less weight, dash cam, winter tires, etc.
None of this is necessary with robotaxis. You have way more flexibility in terms of space. You won’t need a dashcam. Weight and tires won’t make a difference to the rider.
Anyone who kayaks frequently is much more likely to own than to rent.
And you are far too dismissive of the customization/equipment/storage aspect. For just one example, there are vast numbers of parents who have child seats, strollers, diapers, wipes, etc stored in their cars. You can be sure they have no interest in hauling all that stuff around during their shopping or whatever because the robotaxi is off getting another customer.
You are in a privileged position. Regulations will follow the introduction of self driving cars to make owning a car a much more expensive proposition. With wages stagnant for decades, the mass market will be forced into the rental model. That will be able to cover their use case as well as your use case because there won't be as many of you.
> They are fashion statements. They are virtue signaling. They are portable storage lockers.
Flip it around the other way - if you had a L5 car at your fingertips, would you give it up so that you could make a fashion statement, virtue signal, or have a portable storage locker?
Yes. Instantly. I keep all sorts of stuff in my car, from shopping bags to a trauma kit. When I am on alert (military), I keep a duffle bag full of my stuff so I don't have to go home before a snap exercise. And in a town of pickup trucks and SUVs I enjoy being the smallest car on the road, especially when I see trucks in the ditch needing help. Ever seen a 2-door Honda pull an F-150 out of a ditch? I have. I've got a picture of that one in my office.
Think about the possibility of changing cars on demand. In the morning you can rent a mobile-bed-style car to get to work (take a nap while it drives you, or get some tasks done); coming home, you can rent an SUV to pick up groceries or your kids; and on the weekends, if you need a pickup truck or van, you can request one for the occasion.
Yes, there will be massive traffic implications. This is probably something that will have to be addressed when it happens. If there are dedicated self-driving car lanes, can the cars communicate with each other to help reduce traffic? At the very least, pollution will be significantly reduced in today's smog-filled cities like LA, thanks to the transition to EVs (I don't expect this revolution to occur before that transition).
Finally, in regards to public transportation: it is already a failure in the US outside a few cities, a lost cause in a society that favors individualism over collectivism. In terms of walking/biking, I can see improved access to these modes. Can you imagine how a car that never gets tired or drunk will reduce accident rates, and subsequently insurance rates? If the cards are played right, there is an opportunity for a massive transformation in how cyclists and pedestrians are treated in relation to cars.
I think part of it has to do with the possibility that they are just poorer on average compared to their parents. Circumstances help define the culture so rallying against excessive consumption is more in vogue when you cannot consume in the first place.
I see lots of self-driving car advocates talk about this "on-demand car" model, but I don't see any way it works in practice.
No actual sharing: At rush hour, I and most of the people around me will need a car to get to/from work. Therefore the rental fleet will have to be large enough to meet this peak demand, and effectively there will have to be a car in the fleet allocated to me (to get me to and from work). So (amortized over time) I will have to compensate the company for the cost of the car, plus the cost of bringing it to me every day, plus overhead, plus profit.
Reduced convenience: A car may not be available immediately when I need/want it. I have to either risk not having a car for some important event or plan in advance and book what I need. The car I'm allocated on any given day may be too small to meet my needs. I have to wait for the car to arrive, can't store my stuff in it, have to take time to install/uninstall car seats, get charged a fee if I leave my empty soda can in it.
IMO, for a regular car user (i.e. someone who currently uses a car for multiple trips per day) moving to an on-demand model will be at least as expensive as and less convenient than owning a car. There might be exceptions in very dense urban environments where parking is very expensive, but those places usually have a good public transit alternative.
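The peak-demand argument above can be made concrete with a toy model; every number here is made up purely for illustration:

```python
# If most subscribers want a car during the same rush hour, the fleet
# is sized by peak concurrency, not by total daily trips.
subscribers = 10_000
peak_concurrent_share = 0.8   # assumed: 80% commute at the same time
amortized_car_cost = 6_000    # assumed: $/year incl. depreciation and upkeep

fleet_size = int(subscribers * peak_concurrent_share)
cost_per_subscriber = fleet_size * amortized_car_cost / subscribers
print(fleet_size, cost_per_subscriber)  # 8000 cars, $4800.0/yr each
# ...and that's before deadheading, cleaning, overhead, and profit.
```

The takeaway is that unless commutes can be staggered or pooled, the "shared" fleet quietly converges on nearly one car per commuter.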
It's hard to coordinate and doesn't save money, since you still need your own car to do errands. Pre-COVID, many people absolutely were using Uber Pool to carpool to work.
> I am not so sure. Think of the potential value to be unlocked: It will radically change the way real estate is planned. If cars drive themselves, you can switch over to a "Car on demand" model for most current vehicle owners. Why would you need to waste space on parking lots, single garages etc. You also unlock the ability for more people to live further away from work.
none of that value accrues to the people who make self-driving cars though (or their investors). It's not like if waymo gets rid of your car for you, they get your parking spot or garage.
It's a huge social benefit but investors are looking for return-on-investment, not making a multibillion loss to improve society.
Maybe I didn't word it properly. I discussed the value gained as well as how the advent of self driving will transform existing stores of value like parking spaces.
In this case: the value is captured by the car rental company (which will likely be the car manufacturers themselves). Part of the value comes from the rent that the parking space would have extracted.
>It's a huge social benefit but investors are looking for return-on-investment, not making a multibillion loss to improve society.
It's hard to predict how much value they can gain unless we have more information, but I'd imagine that if the majority of cars become on-demand, there has to be at least a few billion dollars in potential value.
In regards to whether people will actually take up rental: since everyone else is moving to a rental model today (streaming services vs. owning, cell phone "refresh" contracts, Apple potentially offering a monthly subscription that always gets you a new phone, car companies offering all-inclusive monthly car subscriptions), by the time this thing rolls out it is plausible that there will be high uptake on a rental model.
That was the whole point of my response. I am not saying every piece of software steals value from everyone, I am saying software in general steals value from everyone else. So in your example, self driving software steals value from real estate, taxi drivers, truckers, other logistics services. By making them redundant their margin is now owned by the software makers that made them redundant in the first place.
This is what is driving the valuation of all these FAANG companies. They are just vacuuming up all the existing value from the rest of the country/world.
Sure, but that is more of "reassignment of ownership of capital" and that makes it less likely and not more likely that such an endeavor will be funded (for political reasons). Unlike say nuclear fusion (safe and practically unlimited energy at an order of magnitude previously unseen is worth the spend and most countries would bite I think), is self-driving worth hundreds of billions in expenditure?
>Sure, but that is more of "reassignment of ownership of capital"
ha ha your phrasing is funny. It is more like "sudden unexpected reassignment of ownership of capital".
>and that makes it less likely and not more likely that such an endeavor will be funded (for political reasons).
Yes, the powers that be are finally starting to turn against the software people. You see Europe getting tired of losing every last bit of the little value chain it has left. Meanwhile in the US, elites are getting tired of software empowering people other than themselves and are starting to make moves to put the genie back in the bottle. We saw all these tech CEOs brought before Congress, but all it showed is that they don't yet fully understand how they have been robbed, just that they know they have been robbed. If they manage to stop the software industry (big doubt), then yes, I can see them putting a stop to self-driving tech (although China seems determined to steam ahead no matter what). But if they manage to stop self-driving, that means they have also managed to kill a lot of the software industry, and with it one of the last few paths to a solid, comfortable middle/upper-middle-class life in this world.
>Unlike say nuclear fusion (safe and practically unlimited energy at an order of magnitude previously unseen is worth the spend and most countries would bite I think), is self-driving worth hundreds of billions in expenditure?
Now you are pulling an exact number out of thin air. Will it really be hundreds of billions in cost? When you talk about "hundreds of billions" you are talking about a noticeable chunk of the US's GDP and the complete GDP of most countries.
Since when has software or even advanced AI research ever cost that much to develop? Don't get me wrong: I believe it will be in the billions or even tens of billions over many years, but that is a cost that can be absorbed by the companies/government entities pursuing this, and even tens of billions can be justified by the potential gains.
Maybe hundreds is too much but 100 billion spent over years is not necessarily an exaggeration. Google and Cruise together have spent like 20B to get to where we are right now (easily googled) which IMHO is not the 95% mark where the returns get much harder. The cars are heavily geofenced and need help from service personnel regularly.
Correct me if I am wrong, but getting to the point where humans are completely unnecessary is still a research problem. We need more research and more breakthroughs, which costs a lot of money and is non-linear (you are not guaranteed results). It could very easily take $100B to get to a point where a car can ferry people anywhere they want with zero human assistance (no service personnel to guide it out of traps, etc.).
>Since when has software or even advanced AI research ever cost that much to develop?
Since now? If software is eating the world, you should expect software research to also eat up a lot of money. As the low hanging fruit is picked, further gains are going to cost a lot more.
Why does everyone think this is a pipe dream when Waymo has already been offering driverless rides? They were SAE Level 4 already, IIRC.
The key part of the bet is the "in major cities". You may have complications on some road without much data, but major cities will be tested to the inch multiple times by then.
Of course, Waymo has huge sensors, but 8 years to shrink those doesn't seem impossible.
Companies like Waymo and Cruise are both working on making L4 scalable to multiple geos and it's going to take a ton of time and money. But L5 is another beast altogether.
Yeah, I've wondered about testing conditions for a while; at least for a human, snow certainly adds a big twist to things. Off the top of my head, the most challenging driving conditions I semi-regularly encounter in the Northeast US are:
- Heavy snow falling at night on an unplowed, narrow road with ill-defined shoulders. The falling snow ruins visibility, and the smooth, white, unplowed surface of the road makes it super hard to identify where the pavement actually ends.
- In traffic on a busy highway in snow. Berms of slush make sudden movements very dangerous, and there is lots of unpredictable behavior from other cars nearby. Often there are cars/trucks nearby getting stuck, sliding off the road or spinning out. Going slowly enough for conditions might mean others start passing you dangerously. Other times taking a more aggressive approach is required if cars around are going too slowly and don't have the momentum to avoid getting stuck on hills.
In either of those cases I'd likely decide just to not make a trip at all if I knew in advance what it would be like, but sometimes you get caught or otherwise simply have to push through. I guess that's an interesting question: at what point does the car tell you it's too dangerous to go or keep going? Hopefully not while on a back road in the middle of nowhere at the beginning of a two day blizzard.
yeah I reckon they started on the easiest place to implement. but it's still impressive seeing the thing driving around a shopping mall parking lot with pedestrian traffic, they put a lot of trust in this thing: https://www.youtube.com/watch?v=tBJ0GvsQeak&t=3820s
> Waymo has a staff of fleet operators that oversees the cars and provides guidance—such as confirming that the car has chosen a safe route through a construction zone (Waymo says that its remote operators never drive the cars directly).
I ask because I think that there are useful points short of "never makes a mistake."
The alternative to self-driving cars isn't perfection. It is humans, humans who not only vary in their peak performance, but in their context-specific performance. A car that drove better than me when I'm tired is a good thing for everyone.
There's lot of talk about liability, but it seems both wrong and incomplete.
One incompleteness comes up when an "accident" happens that a less-than-perfect self-driving system would have avoided. The legal system might not be willing to penalize in that situation now, but the upcoming Breathalyzer mandate is moving us in that direction.
One wrong focuses on "will the author of the self-driving system be held liable?" I think that the liability will go to the driver/owner, as today. If it's better than me and my insurance company is willing to insure me, then why wouldn't my insurance company be willing to insure my use of a self-driving system?
> to get the other 5%, you're going to need exponentially more money
Yeah, but you're going to be making exponentially more money in the markets you have already entered. Investors will be able to do the math rather than envision dreams. If the math adds up, they will be willing to deploy a truly staggering amount of capital to secure the growth.
I don't think the growth is unlimited; there's got to be an upper bound of "value" even if it's literally "cost of a chauffeur for every single car on the road today".
Also, we are presuming that this will always be a one-sided affair, and that self-driving cars will always have to adapt to roads and not vice versa. I'm almost certain that once self driving cars become a thing somewhere, they will be so desired by everyone else that people will become willing to clean up messy infrastructure elsewhere that would otherwise preclude their deployment.
Ditto, but I think that failure will be regional, and that this will push the fervor. Neighboring regions have L5 but we don't? Suddenly we have a monumental impetus for change.
If you can sell the car at 95%, codinghorror wins the bet. It's only gated by the ability to buy a car. You could even implement a remote driver, where a human drives your car over the internet, and that would fulfill the conditions of the bet.
There's no possible way we get to level 5, as described, by 2030. I mean, "everywhere in all conditions" is really, really broad. My parents live 1.5 miles down a dirt road, single lane, and if you come across another car one of them has to back up to a wide point in the road. There's a hill where in the winter the only way to get up it in the snow is to accelerate to a fast speed, slide around a corner, and have just enough momentum to get to the top. Most humans wouldn't be able to pull it off; I can because I lived there, and even so, I used to keep chains and a comealong in the trunk in case I needed to hook the bumper to a tree and drag the car out of a ditch. I'll eat my shorts if an AI manages that within 8 years. I give us a 50% chance to get to level 3 and 0% chance of getting level 4 or 5.
What do you think the regulatory timeline would be for this? If my startup had a working L5 car today, how long would it take regulators to approve so the general public could buy it? I'm guessing optimistically 3 years but probably 5+. But my opinion has no experience behind it.
Why not? Apartments solve the housing problem, why would home ownership even be necessary?
If I live on the outskirts of a city I don’t want to wait an hour for a robo taxi that might smell like vomit to arrive and overcharge me because it’s “high demand time”.
You use your primary home 100% of the time either to live in or store your things in. It makes sense to own it outright. A vehicle is only necessary for short periods for the vast majority of people.
Of course if it is successful it won’t take an hour because there will be far more and I would expect different price points for different levels of service. I don’t check into a Marriott expecting it to smell like vomit but at Motel 6 it wouldn’t surprise me.
To be clear, I think this is one scenario of many possibilities.
If they are not too expensive, I can see families owning them (to take kids to/from school), as well as disabled or frail people, and people who like getting drunk often.
I'm siding with Jeff on this one, for the exact reason he states - the problem is massively underestimated. There are too many things that us creatures with brains take for granted that computers simply can't handle.
They define cities, but does this bet apply to all highways in the US? I ask because for at least one third of the year the lines on the highways are under snow and dirt, sometimes ice. The DoT does not consider this to be emergency conditions. There aren't enough signs to guess where the road is. There are no reflectors, as the plows would remove them. The power lines are all buried. Will all the states be injecting flush or below-surface RFID sensors into all the roads, every so many feet, by 2030?
There would never be a mix of self-driving cars and human-driven cars.
There will be self-driving cars in special self-driving lanes.
Another option is a personalized self-driving AI model per driver. Most driving occurs in a repeated manner (same times and same routes), so training a self-driving model on each driver's routes might solve most of the issues (i.e., let each driver train his/her own car's model).
However, Tesla's business model relies on driverless taxis, which are supposed to support all routes at all times.
> There would never be a mix of self-driving cars and human-driven cars.
There already is, in Phoenix, Arizona.
> There will be self-driving cars in special self-driving lanes.
This doesn't solve any hard problems for AVs. Rules around the self-driving lanes would be violated by pedestrians and human-operated vehicles, and the car would need to react to that appropriately. It would also prevent last-mile trips, which sort of eliminates the whole point of AVs.
So AVs can travel anywhere in Phoenix? That actually proves my point.
So the problem with AVs is not the AV but the HV. If AVs cannot communicate with other AVs, or at least trust that other cars will not behave irrationally, they will never be able to self-drive.
With mixed AV/HV traffic, those hard problems cannot be solved.
Note that this is true for any kind of robot.
Look at Amazon warehouse robots: they operate in their own part of the warehouse, while humans operate in other parts. This is one of the reasons that we do not have home robots.
I just recently left the auto industry. And I should lead with the fact that I was not ever directly involved with the autonomous driving development. That said, I don’t think anyone is remotely near level 5, and I’m frankly doubtful that level 4 will ever really ship. I don’t doubt that there are some really smart people working on this, but I think this is only ever going to be something that works on clear days with well-marked roads.
I predict that in 2030 none of the top 10 car manufacturers will offer self driving (but cars may have emergency braking). Even those manufacturers who offer some self driving capability now will stop offering it by 2030. There may be some startups (nowhere near to 10) with really advanced tech but only because they haven't been sued into bankruptcy yet due to their low volume.
Only if the self-driving is augmented by a remote human driver. Think Mechanical Turk, but for self-driving cars. 95% of the time, the self-driving car will self-drive, but for that remaining 5%, it needs to detect that it is out of its depth and engage a remote human driver to navigate it back into the 95% space.
I don't think that there would be much need for a 100% L5 car.
Remote driving, where a single driver controls a few cars, possibly staffed by people from emerging countries, will solve this problem, will be quite cheap, and will always add an extra safety element.
Given that, I don't think the last 5% will be an obstacle to the technology making it.
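The handoff policy being described could be sketched roughly like this (the threshold, the mode names, and the idea of a single scalar "planner confidence" are all invented for illustration, not taken from any real AV stack):

```python
from enum import Enum

class Mode(Enum):
    AUTONOMOUS = "autonomous"      # planner is confident; car drives itself
    REMOTE_HUMAN = "remote_human"  # out of its depth; remote operator drives
    SAFE_STOP = "safe_stop"        # no operator available; pull over and wait

# Threshold is invented for illustration; a real system would use far
# richer signals than one scalar confidence score.
CONFIDENCE_THRESHOLD = 0.95

def choose_mode(planner_confidence: float, operator_available: bool) -> Mode:
    """Decide who drives, per the 95%/5% split described above."""
    if planner_confidence >= CONFIDENCE_THRESHOLD:
        return Mode.AUTONOMOUS
    return Mode.REMOTE_HUMAN if operator_available else Mode.SAFE_STOP
```

The interesting engineering problem is entirely hidden in producing that confidence number reliably, which is arguably the hard 5% all over again.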
If it's not legal to drive drunk, why would it be legal to drive up to 5 cars at once through a computer, where latency & limited views would likely create similar impairment?
Sure, 95% automation w/ remote drivers can work most of the time. If there's an obstacle blocking the road an automated car can stop for a remote driver to carefully pass by. But it would be a huge liability in any situation where "stop & wait for a remote driver" isn't a safe option.
I do think there will be a significant market for this kind of thing, but it will be more of a premium subscription for L4 vehicles, mostly sold to older folks & less confident drivers. Using it to approximate L5 directly seems dubious
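To put rough numbers on the latency concern: here is how much distance a car covers before a remote driver's input even arrives (speeds and round-trip latencies are illustrative assumptions, not measurements of any real teleoperation system):

```python
# Illustrative only: speeds and round-trip latencies are assumed values.
MPH_TO_MS = 0.44704  # miles per hour -> metres per second

def extra_distance_m(speed_mph: float, latency_s: float) -> float:
    """Distance covered before a remote driver's reaction can take effect."""
    return speed_mph * MPH_TO_MS * latency_s

for speed in (30, 60):
    for latency in (0.1, 0.3):  # assumed 100 ms and 300 ms round trips
        print(f"{speed} mph, {latency*1000:.0f} ms: "
              f"+{extra_distance_m(speed, latency):.1f} m of blind travel")
```

Even 300 ms at highway speed adds roughly a car length of travel on top of the human's own reaction time, which is exactly the impairment analogy above.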
> VR just… isn't going to happen, in any "changing the world" form, in our lifetimes. This is a subject for a different blog post, but I think AR and projection will do much more for us, far sooner.)
Curious why OP thinks this (in particular for VR computing).
Cost estimates are all over the place, but I could easily see LIDAR that adds less than 30% cost to a typical car model, and that's within the range of many many middle-class households.
This would require the 2030 vehicle to be self driving and absolutely trustworthy.
If your life depended on your mobile phone never crashing, never running into a bug, would you use one? I would have died twice in the last 6 weeks if that were the case.
That line of reasoning gets a little weird, your car already has computers required for it to function, so you are already trusting computers not to crash to not kill you.
The software stack in this case is at least two orders of magnitude more complex than an ECU and ABS controller, though.
Yes, in a modern car you have computers running life-critical software. But these components have redundancies and a very, very limited scope of operation. With self-driving there is one central decision maker for which it is hard to come up with redundancy. If it fails, that is a major problem.
If, for instance, your ABS fails (happened to a friend on a motorcycle), you are still braking.
Is there an agreed upon definition of level 5? I mean what's "everywhere" and "all conditions". I can't drive everywhere and in all conditions for some definitions of everywhere and all conditions. Am I not level 5? :)
"In major cities" is going to be the contentious part of the bet. If a manufacturer releases a car that can be fully autonomous in a few blocks of downtown Phoenix but switches to manual mode everywhere else, who wins?
That is a fun bet, I consider myself WAY more pessimistic about self driving than most people interested in the topic, but 2030 doesn't seem totally implausible to me. I would not be surprised if either person won.
In an ideal world we wouldn't need self-driving cars. Trains are much superior: faster, more energy-efficient, cheaper, and they can carry hundreds of people with a single operator, who can probably be automated anyway.
In an ideal world I imagine most things would either be in walking distance, or you'd walk to a train/bus and then walk to your destination. Subways and bus-only lanes cost a lot upfront but pay off in the long run. Unfortunately most subways are underfunded, overwhelmed, and falling apart because they were made decades ago and not maintained well.
Considering that the U.S. Federal Reserve hegemony is probably going to end by 2030, $10k doesn't seem like much money at all. It would be more interesting if they set aside 0.2 BTC.
IMO the ultimate standard for self-crashing car technology should be that an SCC can win every event of WRC (World Rally Championship) across all the countries.
Rally is the pinnacle of motorsports - tarmac, gravel, snow, forests, thin roads, hairpins, jumps, livestock, the full gamut of constantly changing conditions at high rates of speed. When a SCC can master that without some guidance system that turns it into a glorified slot car, I'll acknowledge that SCC tech is officially better than human drivers.
Until that point, a well-trained human managing the strategy of driving with the computer handling the tactics (execution of strategic intent) will always be the optimal combination.
Frankly I can live without the specialised maximising speed around gravelly bends stuff if they're reliable enough at dealing with real world edge cases of weird driving and jaywalking and stuff in the road to be able to do quarter of a billion miles without a fatality (yes, that's billion, and that's not much above the low bar of average British driver, which includes an awful lot of inebriated, incompetent and reckless driving)
That's a lot more driving edge cases to eliminate than "works surprisingly well", but the problem isn't difficult driving, the problem is essentially bug free software and perfect exception handling.
I'd rate the probability of software surpassing us at going really fast without worrying about g-forces or the possibility of the terrain flipping them quite a bit higher than that tbh.
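As a rough sanity check on the quarter-of-a-billion-miles figure above, using ballpark UK figures (both inputs are approximations I'm assuming, not official statistics):

```python
# Ballpark UK figures (assumptions, not official statistics):
uk_road_deaths_per_year = 1_750    # roughly, recent years
uk_vehicle_miles_per_year = 330e9  # roughly, all motor traffic

miles_per_fatality = uk_vehicle_miles_per_year / uk_road_deaths_per_year
print(f"~{miles_per_fatality / 1e6:.0f} million miles per fatality")
```

That works out to roughly 190 million miles per fatality, which supports the claim that a quarter of a billion miles is only a modest step above the average driver, drunks and all.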
My heart says bet yes because I want one of these cars so badly, but my mind say bet no because that level of technological change inside 8 years seems unlikely to occur.
Americans drive 3T miles each year. There are 6M accidents each year. That comes out to 1 accident per 0.5M miles. Waymo's disengagement rate was 1 per 12K miles.
While disengagements and accidents are not the same, one can argue that for acceptable L5 self-driving, the driver is not expected to be engaged at all, and therefore we need these rates to match.
If Waymo continues to improve its disengagement rate by 50% each year, it would still take at least 10 years before it becomes acceptable. So assuming funding keeps pouring in and advancements continue, L5 self-driving can be expected around 2032. Carmack might lose by just a couple of years.
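The arithmetic above can be checked directly (all figures are the comment's own rough estimates, and the steady 50%-per-year improvement is an assumption, not a Waymo projection):

```python
import math

# Figures from the parent comment (rough estimates, not verified data).
miles_per_accident = 3e12 / 6e6    # ~500k miles per human accident
miles_per_disengagement = 12_000   # Waymo's reported rate

gap = miles_per_accident / miles_per_disengagement  # ~42x improvement needed

# Years to close the gap at 1.5x improvement per year: smallest n
# with 1.5**n >= gap.
years = math.ceil(math.log(gap) / math.log(1.5))
print(f"gap: {gap:.1f}x, years needed: {years}")  # about 10 years
```

So under those assumptions, parity lands around 2032, just past the bet's deadline.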
The "driver" in a Waymo is software, passengers can't become engaged and might not even be legal to drive a motor car. So if these rates were the same then somehow at least one of:
* Waymo's driver is implausibly safe, never having any situations that cause disengagement except that other people hit it (resulting in an accident per your statistics)
* Now we're expecting Waymo to press on after a road accident, because hey, I'm just a little dinged, lets get this passenger to their destination? That sounds both more dangerous (what if there is undiagnosed damage) and potentially illegal (doesn't the US have "must report" laws?)
One disengagement per 12k miles seems comparable to public transport. Once a week I'd get on the train, it goes to Town uneventfully and back again, but twice in five years it didn't. Once it was stuck behind a train that hit a person (most likely a suicide) so we sat in the half-dark for almost an hour (recovering even a body, much less an injured person requires the electricity is turned off, and the train wants to reserve its battery power for emergencies) and once there was some kind of actual fault. I was in no danger either time but obviously my journey was much delayed, that seems comparable to a disengagement.
In a lot of cases a human can remotely diagnose the problem and either tell the Waymo everything is fine now (e.g. the fire truck parked at an angle across the highway has moved, simply drive around it now) or correct its understanding (e.g. that does look like a school bus stopped in a residential area, but it is actually an advertisement for an upcoming Speed prequel, so, it's fine to just drive past it) and then your journey continues, interrupted but no worse than that. Maybe you get a refund from Waymo.
A level 4 car that’s dependent on HD maps, remote support, and constrained ODDs is not a level 5 car. A very safe level 4 car wouldn’t fulfill Carmack’s side of this bet.
This was my first thought, especially since they clarified that it has to happen in a "major city" -- the biggest traffic hazards are going to be other drivers and pedestrians behaving erratically/unpredictably and creating unsafe situations the car has to react to.
You don't even need to have a crash; if you're trying to drive from Manhattan to New Jersey on any given Friday afternoon, you're going to find roads that are "in principle" navigable blocked off by cones and traffic cops waving you through reds etc.
Ok great, you've switched to a language which eliminates a class of memory bugs. Now what do you do about every other type of bug that exists?
You still need the exact same sort of comprehensive testing infrastructure, policies, procedures. You've perhaps marginally reduced your overall bug count at the cost of everything that comes from switching to a new language. Doesn't seem like a particularly great decision.
> (My take on VR is far more pessimistic. VR just… isn't going to happen, in any "changing the world" form, in our lifetimes. This is a subject for a different blog post, but I think AR and projection will do much more for us, far sooner.)
Can Carmack toss in a free Quest 2? It looks like the last time he tried VR was back in 2015. The state of the art is a lot better now.
Since Carmack thinks it will be available by 2030, I have to take the possibility more seriously. Maybe he is privy to some developments not publicly available. But if I had to place money of my own, I'd bet with Coding Horror here.
I really don't believe self driving cars should be on the road at all. If it can't be perfect I do not want a computer without human intelligence controlling cars. I think a better investment would be on denser communities and pushing for more biking, walking, and public transportation.
Ah but your moneyed overlords would prefer that you buy $60,000 cars with inflated prices justified by the software and AI features and continue to live in a dystopian car-dominated landscape.
> If it can't be perfect I do not want a computer without human intelligence controlling cars.
Not even if the AI is less imperfect than humans? In America last year, 46,000 died from road traffic incidents, almost all of which were humans not being perfect.
I mean it depends I guess. If it can make perfect judgement, never hit pedestrians or people riding bikes even if it's dark and rainy, then maybe. My bias is heavily toward living in the city so as far as that goes I would rather remove cars as much as possible than make them autonomous.
If your argument is that AI is hard to get right and slow to develop, and that we can achieve a fast, easy win by doing all the things that reduce car use, I absolutely agree.
It just also sounds like you’re making the perfect the enemy of the good by holding AI to a higher standard than humans.
Seems like we have a bunch of ultra realists on this thread, so I will take the opposite position. Where's the optimism fellas? We have 8 years - we can do it.
True level 5 is essentially AGI - or at least, if we’ve achieved level 5 AVs in a hypothetical future, I’d say it’s a safe bet we’ve also achieved AGI. On that alone I’d bet against Carmack. Level 4 ridehailing in all major cities would be a much more reasonable bet.
I remember when, about 15 years ago, Google started with self-driving cars and a lot of buzz was created about it. The story was that in about 5 years' time every car would be self-driving, and the whole automotive industry had no clue.
Now, years later, Google has spun it out as Waymo and says it will take many more years.
Fun fact: it will take much longer than any of the IT people think, because many of the IT people have no clue about OT and safety.