I guess that isn't as pithy but it's closer to the truth.
When it became clear that Uber's strategy (under Kalanick anyway) was premised on replacing drivers with AIs before the cash ran out I couldn't see general self-driving vehicles coming within 20 years. I still say that's true. AI assistance? Sure. But there's an uncanny valley there too where the AI will be good enough in most circumstances that drivers lose attention and people will die. You already see this with Tesla autopilot.
Here's a simple counterexample to the idea that self-driving cars are "just around the corner": in NYC, quite a few buildings have doormen. This is great for residents. Part of this is dealing with deliveries and so forth but there's also an issue of general security. People can sneak in (and I'm sure do) but just the fact that a human is there acts as a strong (but not complete) deterrent. Just like having a dog is one of the most effective burglary deterrents.
What prevents a lot of bad actions on the roads is actually fear. Fear of what other drivers might do. Fear of road rage by other drivers. That sort of thing. This is just how humans work.
Once a driver knows the car next to them isn't driven by a person it changes their behaviour. They will do things they wouldn't do if it were a human behind the wheel, particularly because they know an AI won't ram them, cut them off, yell at them and whatever. There's no fear there. Even if there's a passenger in the car, it's still (psychologically) different.
How do you program around humans changing their behaviour to take advantage of there being no driver in your car?
I jokingly call it management myopia: anything I don't understand can't be that hard.
Brilliant line, that probably goes a long way towards explaining why software in general simply sucks from a user's perspective.
If the car next to me isn't driven by a person, but is bristling with high resolution cameras that will immediately upload footage of my face to the authorities if I collide with it, I would indeed change my behavior.
They'll be able to correlate across time and space, similar to how phones running Google Maps report their position and velocity to create up-to-the-minute traffic overlays. They'll be able to approach politicians and police forces with a message like "hey, want to get serious about safety? These are the top one hundred drivers in your area who need to be taken off the road now— click through to our portal to find dozens and dozens of videos of each one speeding, weaving, failing to yield, running stop signs, etc."
Each individual incident may be hard to prosecute, but when you have all of them in a bundle, a few a week for months at a time, it'll become impossible not to act on it. When crashes happen, they'll be able to shame the jurisdiction after the fact by publishing dumps of the incidents leading up to it that were not acted on.
The thing is, that car was far enough away that both cars could have easily turned in front with room to spare. My wife had seen it before starting her turn, and realized there was plenty of time. In fact, after a few seconds, the driver in front of my wife realized just how much time they had, and finally made the turn, still before this other car arrived. Because my wife is an attentive driver this wasn't a problem, but you could certainly see a driver in her position starting to make the turn while looking to the left, and rear-ending the car that suddenly stops for no apparent reason. Of course they would be at fault for doing so, but the first driver's excess of caution, if you want to call it that, would also be a contributing factor.
Folks that hit the brakes crossing a green light. Braking well before the off ramp so they are going 40 on the highway. Slowing down while merging.
The worst ones are those that are scared to go but decide too late. Like stopping in the middle of the intersection while turning left because there is a car coming from the distance but late enough they end up blocking the left lane going straight.
Sometimes it gets truly absurd. Once I was standing on the sidewalk at an intersection waiting for an Uber when a driver stopped at a green light and started honking at me, furious when I refused to cross the road.
But I generally agree with you. It’s often egregious and can be extremely dangerous for both the pedestrian and other vehicles when a lone driver stops unexpectedly on a >2 lane road.
Also, the other day, I was making a left from a one-way street, from the left side of it, and someone tried to drive around me on the shoulder to the left, because they were unaware I was waiting for a jogger pushing a stroller to cross in front of me.
Even so, this never made sense as a business strategy, unless Uber somehow has a comparative advantage in achieving self-driving cars and then can get a long monopoly on them.
If they don't -- if others have SDCs around the same time, then sure, their costs go way down, but they can also charge a lot less, because the competition has the same "advantage"!
"Hey, man, do you really think your grocery store can make those superhuge profits when you sell at the wholesale price?"
'Oh, no, I have a plan. You know those barcodes they're putting on products now? I'm going to invent a way to integrate that with the checkouts, and boom, I don't have to pay for people to put price tags on. Labor costs go way down!'
Same problem: "No, you're not. Someone better at that will, and they'll sell the tech to all grocery stores."
There was never a reason to believe Kalanick's operation as it stood in 2010/11 had that comparative advantage, so that strategy never made sense. And today, not surprisingly, Uber is one of the worst at SDCs, and is correspondingly unlikely to have that advantage.
I got a D in my only economics class, but I think that I've seen comparative advantage used to mean that "X is better at doing Y than doing Z", whereas competitive advantage is "X is better at doing Y than Z is at doing Y".
If you're not using it as jargon, then the fact that it is jargon might be confusing.
A canonical example would be the doctor who is a better secretary than their secretary (produces more value per hour, looking only at secretarial value). The secretary still has a comparative advantage in secretarial labor because they forgo $0 of doctor income to work that job, while the doctor would forgo $100/hour they could be earning as a doctor in order to work as a secretary.
This is true even though the doctor has an absolute (or competitive) advantage in secretarial work.
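The distinction above can be made concrete with a few lines of arithmetic. The hourly figures below are hypothetical (extending the $100/hour number from the example), just to show how absolute and comparative advantage point in different directions:

```python
# Hypothetical hourly figures (illustrative, not from any source).
DOCTOR_RATE = 100           # $/hour the doctor earns practicing medicine
DOCTOR_AS_SECRETARY = 30    # $/hour of secretarial value the doctor produces
SECRETARY_RATE = 25         # $/hour of secretarial value the secretary produces

# Absolute (competitive) advantage: the doctor is the better secretary outright.
assert DOCTOR_AS_SECRETARY > SECRETARY_RATE

# Comparative advantage: opportunity cost of an hour of secretarial work.
doctor_forgoes = DOCTOR_RATE   # gives up $100 of doctoring
secretary_forgoes = 0          # gives up nothing better-paid
assert secretary_forgoes < doctor_forgoes

# Value produced per hour when each specializes, versus when the doctor
# spends that hour doing the secretarial work personally.
specialized = DOCTOR_RATE + SECRETARY_RATE   # 125: doctoring + secretarial work
doctor_does_it = DOCTOR_AS_SECRETARY         # 30: better typing, no medicine
print(specialized, doctor_does_it)
```

Specialization wins even though the doctor is "better" at both jobs, which is the whole point of the comparative-advantage framing.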
The real test for whether something is financially optimal is whether you have a comparative, not an absolute, advantage.
Uber not having a competitive/absolute advantage in SDCs would also be a reason not to do it, because of the details of this case, but comparative advantage is correct as well, though perhaps an unnecessarily strong criterion for the point I needed to make.
Yes, however I don't see a relevant difference between that and my phrasing.
It still looks to me as though you were clearly describing a lack of competitive advantage, while using the word comparative. Sure, you can discuss comparative advantage if you want.
This might be due to the fact that I don't really care much about road rage. At least where I live, the chance of someone getting out of their car and actually escalating it into a situation that matters is basically 0. When I lived in a major city, I felt the same way but for different reasons (most people in big cities Don't Have Time For That). But I can't imagine it's an isolated opinion.
This is kind of nitpicking, but this has always been a backup plan at best for Uber. Look at their expenditures and it is clear that their strategy has never been "all in" on self-driving.
Their main plan was and basically still is a combination of driving competitors out of the market, increasing their efficiency, raising prices, and lowering payments to drivers, before the money runs out. If self-driving cars happen, it will be a tremendous boon to Uber's bottom line, but it is not a necessity.
So, textbook monopoly behavior.
The self-driving problem on highways is much more solvable, and can further be improved by convergent infrastructure evolution (forewarning of traffic jams from central control, embedded rfid to aid pathfinding in bad weather, weather reports, wildlife nets/barriers, highly standardized signage, internetworked cars). It would solve and serve a number of functions that are extremely valuable in efficient logistics, safety automation of boring long term driving, and better utilization of infrastructure (automated overnight driving when the interstates are much less used).
I don't really care if my trip to Taco Bell is automated. Sure it would be nice, but that is so insanely difficult compared to automating a 400 mile trip on an interstate.
When they can fully automate a 400-800 mile interstate trip so that I can sleep in my car while it drives overnight, it will massively disrupt the airline industry in so many positive ways.
But the article is from a sensationalist news source, so what can one expect.
This could be implemented today with train tracks and automated trains.
Someone will comment that there is not enough profit, but are the roads making a profit?
If you have the money you can travel A class, much more comfortable than sleeping in a car or an airplane.
I am not against cars, so don't reply with some examples of why you need a car, I understand that... my point is that for long-distance transport where you could sleep, rest or do something else, trains could be done today where AI-driven cars would take at least 20 years.
Long highway drives may be something they don't do that often. So the ability to own a car that can automate that subset of their driving probably doesn't strike them as all that interesting.
Personally, as I've said elsewhere, this seems like a much more tractable problem and gives me a lot of the total benefit of self-driving.
Compare to city driving: 10 minute drive, $5. That's at least 5x more profitable and takes less capex. Plus, if anything goes wrong, your facilities and backup vehicles are already nearby.
As you said the Taco Bell trip doesn't matter.
The one area where it does matter is for the longer distance commuter--the unpleasant experience of spending an hour every morning and every night in stop and go traffic.
Well, it would probably be a significant benefit to someone who is blind to be able to commute to work by themselves.
For example, I avoid doing things on the road that an inattentive other driver might be slow to react to and hit me, such as suddenly cutting across 3 lanes of traffic.
The AI car will have sufficient cameras to record your actions and report them directly to your insurance company if not the police.
This will be MORE effective at deterring stupid driving, not less.
Look at the number of "Plug this in so we can track your driving continuously and we'll give you an insurance discount" apps that now exist.
People will be even happier to tattle on others.
Now please laugh, put down the phone and don't even think about implementing this. It's a terrible idea.
In countries where regulating things is understood as a purpose of government the availability of self-driving will end the "everybody needs to" justification for lax driving standards.
First thing you'll see is Continuing Education. Today for personal drivers testing is one-and-done. If you can hold in your instinct to aggressively cut off other drivers, pass on the inside and generally be a maniac for the length of the test you're set for life. There are small moves towards more continuing education (e.g. Speed Awareness requirements for people who keep getting tickets) but that'll speed up enormously once self-driving is viable.
Compare the regime for driving an articulated lorry (mandatory refresher courses, licenses automatically expire and you must be re-tested) to my grandfather being legal to drive long after he was in no physical or mental condition to be safe on the road.
Then I think you'll start to see tightening of basic requirements. That incautious fast turn you took becomes a Test Failure not a slap on the wrist. All fatalities result in lifetime disqualification. And when your lawyer says "My client needs a car..." the judge says "This isn't a license for a car, it's a license for driving. Buy a car that drives itself" much more often. Rich footballers who used to get away with this have already started to see judges say well, why don't you just hire a chauffeur? Self-driving tech would push this down to middle earners.
HighwAI or HAIghway or AI-95 or Night on the Interstate where the plucky action hero has to save the day with the help of an old car, a grizzled mechanic, and his sweetheart from a malignant swarm of AI controlled cars that have terrorized and paralyzed the country by killing any humans they sense near the roadway. Or a small town is terrorized by a swarm of autonomous vehicles that lurk the stretch of highway between them and the next town, and some disaster necessitates a massive human piloted convoy protected by the town sheriff and traffic cops as best they can until they can eventually make their way to a rendezvous with the State Patrol. Or teenagers having to avoid a gruesome death at the whims of autonomous vehicles on the way to Grandma's house.
Then you all have to bring insurance into it.
Seriously HN. Keep this up, and I don't think we can be friends anymore.
In absence of competition they might use it as a way to gouge you for more money, but there is no lack of competition in the insurance space.
20 years is a long time; arguably you're already wrong because self-driving vehicles already exist. But self-driving with the ability to replace Uber drivers will be here by 2030 at the latest.
If you’re talking anything below level 5 then all we will be doing is making current human drivers worse. Some of the deaths so far have been people who knew the limitations of the technology and still got complacent. If it requires you to have constant supervision then people simply won’t do that and will die.
The real limiting factors for level 5 are less legible to humans. We understand the physical world via years of experience and good priors. Teslas crash straight into concrete lane separators and Ubers hit pedestrians that they know are there. In the latter case the actual reasons for the collision would strike a human as absolutely nuts - things like the algorithm not having a proper sense of object persistence or being terrible at commonsense reasoning. Yes, the Uber system was quite primitive, but the way that it broke gives us some insight into how AI drivers face a totally different set of challenges than what we would think of as "difficult conditions".
(Disclosure: I work for Google which is a sibling of Waymo)
The radar can penetrate fog, rain, etc. https://navtechradar.com/features/autonomous-vehicles/
Ultrasound works well up close for detecting a potential collision.
In theory you could have all four.
* Car double parked. Do I cross the center line to go around them?
* Stop sign that's been bent and it's no longer obvious which street it refers to.
* Semi with its hazards on in a left turn lane. Do I make a left turn around it?
* Cop directing traffic by hand at an intersection.
Most city driving is a series of continual exceptions to the rules, or situations that are one-off. Who thought this would be easy?
Especially that last one -- nobody seems to consider how much we communicate while driving. Several types of communication require having a body: flagging someone on, waving "thank you", eye contact, LACK of eye contact ("they don't see me, I better [prepare to] brake/swerve"). Some involve subtle movements of the car: rolling forward after a stop sign ("I intend to go next"), or hugging one side of lane. Even something as apparently binary as beeping the horn can indicate probably a dozen or more different things depending on physical context and length of the beep, not to mention local culture.
In such a future, if you lived in a city but owned your own real car, there'd be no road to use to drive it from your house to the city limits; and so it'd have to live at the city limits in a garage—just as people with personal planes have to keep them in a rented hangar at an airfield. You'd take this fancy pod-transit to the City Common Car Park, summon your car down from storage, and then get right onto the highway.
If you go that far, the problem is already trivial right? Just making roads specifically for self-driving vehicles would make the problem trivial. What the companies are trying to accomplish is self driving cars in current conditions.
• cities can afford to "change the problem" to fit the solution (as they already have huge transit-system and other public-works budgets, and extensions to cities are mostly centrally-planned.) Some cities in Europe have already entirely banned cars.
• everywhere other than cities—e.g. small rural towns only connected via roads—doesn't have the resources to "change the problem." (But also, roads are optimal in this resource-constrained context, anyway: you can spend very little laying down massively-long stretches of road, pushing the costs onto the people who want to drive on it in the form of vehicle maintenance.) But "everywhere other than cities" also doesn't pose nearly as complex a problem in the first place as cities do (at least in AI judgement terms, not in sensor-requirements terms.)
Thus, it makes sense to choose a hybrid solution, where cities gradually de-car themselves, while self-driving cars get licensed to work everywhere except cities.
Rural areas simply don’t need self driving cars, so the hybrid solution is a non-starter, economically speaking.
I don’t see why. Certainly, there’s the room for cars, and far more people outside of cities own cars and have room for cars—but these are just questions of the capacity for car ownership, not the capacity for driving.
Consider: being unable to drive yourself places in the country is a far worse problem than being unable to drive yourself places in a city. Every job is far away and expects you to commute to it; there’s little-to-no public transit; and far fewer, far more expensive taxi/ridesharing services. Given the distances involved, walking or bicycling are impractical.
The single real solution to this problem, for the class of people involved (people too young to drive, people old enough their faculties have failed them, teenagers living in suburbs who want part-time jobs in the city, disabled people currently relying on privately-operated minibus service) is personal or family-owned self-driving cars. It’s essentially the middle-class equivalent of the accessibility advantage granted by having a dedicated chauffeur.
Also, I feel obligated to mention that specifically in the Rust Belt in the US, there are a lot of people who have lost their licenses because disaffection drove them to alcoholism, which led to a series of DUIs. These people want to work, but they’ll never again be trusted to drive—so how will these people ever get another job?
Cheap driverless cars—ones that don’t have to be smart enough to drive in complex city conditions, only along country roads to about town limits—are clear winners here. (The current attempted semi-solution to these problems is electric bicycles, but they just don’t have the range if you don’t already live pretty close to town. Most such people end up having to move, usually away from their families, which takes away even more of their support-network.)
Just like “why not just give homeless people a house”, a very simple solution to persistent joblessness in these areas is to give the people without driver’s licenses a car that doesn’t require a driver’s license (because it drives itself.)
And the cheaper such a car is, the more of them you can afford to give away on a constrained budget; so you’d better constrain the problem domain as tightly as possible, and ship as MVP-like a product as possible. (Analogy: did the OLPC need to be a powerful computer? No, not for anything it needed to do. So you can cheap out on hardware, and thus make more of them.)
Given these numbers, we would expect car #10 to have to wait 30 seconds or so before they start moving.
As congestion is a feedback loop (lack of car movement results in other cars being unable to move), self-driving cars with a coordination mechanism and microsecond-level control would dramatically reduce the problem.
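The queue start-up effect can be sketched with a toy model. All numbers below are illustrative assumptions (a ~3 second per-car delay yields roughly the ~30 second wait for car #10 mentioned above), not measurements:

```python
# Toy model of queue start-up at a green light. Human drivers each begin
# moving some seconds after the car ahead does, so delays accumulate down
# the queue; coordinated self-driving cars could all begin together after
# a small communication latency.
PER_CAR_DELAY = 3.0   # assumed seconds between successive human starts
COORD_LATENCY = 0.05  # assumed coordination overhead for networked cars

def start_delay(position, per_car_delay):
    """Seconds before the car at 1-indexed `position` starts moving."""
    return (position - 1) * per_car_delay

print(start_delay(10, PER_CAR_DELAY))  # 27.0 s for the tenth human driver
coordinated = COORD_LATENCY            # every networked car, any position
print(coordinated)                     # effectively simultaneous
```

The human delay grows linearly with queue position, while the coordinated delay is constant — which is why the feedback loop bites so much harder with human reaction times in the chain.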
So Personal Rapid Transit? This idea has been tried before, and has never lived up to expectations.
It's not just cities but spend a few hours walking around Boston or Manhattan and just make a note of all the crazy things drivers do. (Sometimes by necessity.)
(Unprotected lefts in busy traffic are one challenging area. There's a certain social aspect to it, and the right decision in one set of circumstances is either way too aggressive in another, or so conservative that you'll never turn at all.)
...How do you write the code for "Always stop at a red light unless you see a police officer waving you through?"
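To make the difficulty concrete: writing the rule itself is trivial. The hard part is every predicate the rule depends on, each of which is an open-ended perception problem. A minimal sketch (all names here are hypothetical):

```python
# A naive encoding of "stop at red unless an officer waves you through".
# The control flow is three lines; reliably computing the boolean inputs
# from camera frames is the actual unsolved problem.

def should_stop(light_is_red, officer_present, officer_waving_us_through):
    if not light_is_red:
        return False
    if officer_present and officer_waving_us_through:
        return False  # exception: obey the human directing traffic
    return True

assert should_stop(True, False, False) is True    # plain red light: stop
assert should_stop(True, True, True) is False     # officer waves: proceed
```

And this is only one exception; every "unless..." clause in real driving adds another perception capability (is that a real officer? are they waving at *me*?) that the rule engine quietly assumes away.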
As I recall, he took a sabbatical to work on self-driving at Toyota. I'll check out that video from last year.
A civilian trying to direct traffic around their broken down car...now that would be harder.
A quick tilt/shake of the head is one I've seen often. Easy enough for humans to read that sort of body language, but probably a harder problem for computers.
The highway goes from N lanes to N×5 (or more!) lanes and then back into N lanes, usually with the lane markings being completely absent in the process. You have to pick a lane to get into, with your choice being potentially impacted by such things as "are you a truck?", "exact change only", or "I have an EZ-Pass", not to mention the potential for some lanes to be outright closed. The signage is likely to be somewhat unique and potentially counterintuitive to normal meanings--the lane under the big yellow light is the kind of lane I usually want to take, especially since it's usually higher speed than the lane under the big green light. Drivers are going to be more erratic than usual ("oops, I don't have EZ-Pass" *SWERVE*). And they usually occur smack-dab in the middle of limited-access highways where people might want to use self-driving cars.
They could also include a transmitter to vehicles.
From the very new drivers, to people who simply don't drive often, to people who got their license in another country where the rules are entirely different (me!), to the very elderly who are losing their sight, reflexes and judgement and people playing with their cell phone or otherwise seriously distracted.
It's a wonder only 35,000 people are killed on US roads every year.
> do an exceedingly bad job of it.
No, they're doing an amazingly good job of it. Given the cumulative number of miles driven and the conditions driven in, it should impress you rather than lead you to conclude that they are doing a very bad job. If the current crop of AI were unleashed in those same conditions, the carnage would be unbelievable. Humans are very well suited to adapting to changing environments; it is the one thing that we have over the rest of the animal kingdom. From hanging from tree limbs to riding on the autobahn with the same software.
People are great drivers if you take into consideration the conditions that they drive in, and that traffic deaths include all vehicle types, not just cars.
Yes, cars are safer than they've ever been. But there is also much more traffic than there ever was (both because there are more people, people are more affluent and some countries have been more or less designed around vehicle ownership), and infrastructure has not always kept up, and - again, in some countries - the car as a status symbol translates into 'right of way for the best protected', which leads to unnecessary accidents.
The number of deaths is - indeed - not the right metric to evaluate driver quality, instead, the number of deaths per total number of miles driven is much better, and further breakdowns into occupants, pedestrians, motorcycles and so on should be made before drawing conclusions. And on that scale - again, taking into account the conditions that people drive in - they are doing very well indeed, some countries excepted.
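Putting rough numbers on that metric: taking the ~35,000 annual US road deaths mentioned above against roughly 3.2 trillion vehicle-miles traveled per year (an assumed, order-of-magnitude figure for the US):

```python
# Deaths per 100 million vehicle-miles, the standard normalization.
# Both inputs are rough, order-of-magnitude US figures.
deaths_per_year = 35_000
vehicle_miles_per_year = 3.2e12  # ~3.2 trillion miles (assumed)

deaths_per_100m_miles = deaths_per_year / (vehicle_miles_per_year / 1e8)
print(round(deaths_per_100m_miles, 2))  # 1.09
```

Roughly one death per hundred million miles driven — which is the bar a self-driving system would have to beat, not the raw annual death count.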
My own best tricks to avoid getting into accidents: don't drive when there is ice / snow / excessive wind, never drive when you're tired, stay away from countries where driving is a bloodsport, keep your car in excellent shape and never ever drive impaired, and that impairment includes cell phone usage.
FWIW I'd be totally for a law that punishes cell phone usage while driving with immediate vehicle confiscation when spotted, as well as instant revocation of the driving license of the driver.
For me, that's been one of the big wins coming from working from home at least some of the time for almost 20 years at this point. I'm pretty much fully WFH or traveling these days but even prior to that I rarely needed to drive in if conditions were bad.
Because of various circumstances, I've had to do a few fairly long drives in bad weather for personal reasons the past couple of years and it's not something I miss at all. (And I came way closer to having an accident because of ice last weekend than I am happy about.)
In other words, human driving is far more dangerous than we care to admit; that's not a technical issue, that's pure denial.
Stopping traffic only works if everybody starts out with the same playbook in mind.
But to start driving in those conditions (or even if those conditions are likely to occur) that's madness, unless you are a first responder.
I think there's an abundance of drivers that are almost always unsuitable for the most challenging situations, or for whom the roadway or conditions encountered are completely unsuitable themselves.
Collisions are everywhere and are dwarfed by near-misses. Fatalities are statistically limited by hazard mitigation, safety measures, and a seemingly larger portion of miraculous good fortune.
Some drivers are completely out of control and some even like it that way. There is a much smaller number of mechanical or health failures.
These all can be the riskiest due to unpredictability.
We will have to admit that each out-of-control driver can exhibit behavior that is completely unexpected and unlike any other's, because that's what they've always done.
That's not a very high bar for different out-of-control AI events by comparison.
Tragedies occurring from individual human driving deficiencies have been largely attributed to social effects, because the underlying mechanical engineering performance has been so overwhelmingly predictable by comparison.
With AI, different unforeseen tragedies are to be expected, and they will not be due to any recognized or imagined individual human driver deficiency. Attribution will fall squarely on engineering-team deficiencies in the programmable electronics that were overlaid onto the fundamentally predictable, 20th-century-proven mechanical platform.
There could be deaths that cannot be dealt with using natural human social constructs.
Being killed by an out-of-control robot is always going to be something that's supposed to be nearly impossible to occur, compared to being killed by an out-of-control human.
Anyway, with emerging programming technologies sometimes actually achieving highly suitable goals which were only reachable when combined with ideal support, and a sales/advertising mentality that can make up the difference when the incredible engineering effort has done all it can and the goal is not quite met nor suitable, everything's looking better than ever so what could go wrong?
That’s going to be really challenging. Good one!
I never figured out what the problem was, this was in the middle of the road and not at an intersection. I was more concerned with doing the right thing than trying to determine the root cause.
You would err on the side of caution, but ultimately you would go with the help of 5G remote assistance so that the passenger can stay napping.
How would humans handle this? They would rely on previous knowledge of the area, aka maps could help.
Not challenging at all imo, this should be trainable.
Recently, I bought a newer Subaru, with EyeSight. It has adaptive cruise and lane keep assist. The LKA is fine - it'll beep if you sway outside of a lane, and automatically adjusts the steering, but it won't keep you centered. It's more of a safety thing, and it works well from that perspective.
The adaptive cruise is really good. It's camera based, and I have had zero problems with it. It works well at night and in pouring rain. It'll even stay pretty close to the car ahead of you if you turn the "tolerance" all the way down. I'm always impressed.
Since I've had this car, I've thought a lot more about the practical implementation details of actual self-driving. I more often notice situations when driving that are seriously complex.
The more I think about it while I'm driving, the more I realize how fucking hard self-driving would be.
However, at one point the guy in front of me turned off onto a small side road. It was at night, and I don't think the car realized he had moved into a turn-off lane. It slammed on the brakes. I probably went from 90kph to 40kph before it realized I was not going to hit that car.
I completely failed to react to the situation. I was worried my erratic braking would cause an accident behind me, but in the moment, I didn't know how to stop it. That was not a type of emergency I had considered or prepared for.
Where it gets dicey is the scenario where the "imminent collision" (hazards on, seatbelts tightened) detection is triggered, and the driver continues to push hard on the accelerator. Tesla has a fairly lengthy statement in the manual about this scenario. The bottom line is there are all kinds of heuristics at play that may or may not result in an override depending on the specific sequence of events.
Why don't you just drive yourself or take a taxi?
I view these as assistance to driving. It's a comfort that the steering and brakes are not overridable. And honestly, if the system messes up so badly as to go flying into a barrier, well, that's not so different from a tire popping, another car careening across into yours, or other catastrophic and unlikely events that do happen. We have seatbelts, crumple zones, airbags, pre-tensioners, cargo hold-downs, and emergency services to help us survive what even 40 years ago would be unsurvivable accidents.
Those are just arbitrary numbers and a simplistic framework, but the point is, you can have a huge increase in safety by the numbers, and a very small increase in problems due to the safety system that cancel it out, because the prior underlying rate of crashes was pretty small.
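One way to see the cancellation effect with concrete (purely illustrative) numbers: because the baseline per-mile crash rate is already tiny, even a rare new failure mode introduced by the safety system can wipe out a large relative gain.

```python
# Illustrative numbers only, not real crash statistics.
baseline = 1.0    # crashes per million miles, human-driven (assumed)
prevented = 0.9   # fraction of those the safety system prevents
new_mode = 0.9    # crashes per million miles the system itself introduces
                  # (still less than a one-in-a-million-miles event)

with_system = baseline * (1 - prevented) + new_mode
print(round(with_system, 2))  # 1.0 -- the "90% safer" headline nets to no gain
```

The headline relative improvement is huge, but measured in absolute crashes per mile, a failure mode an order of magnitude rarer than the baseline fully cancels it.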
I think this is an abstract pattern that comes up in other contexts and it doesn't seem to be intuitive.
Definitely surprised the heck out of me though the first time the car slowed way down on the interstate because the car ahead of me pulled onto the off-ramp.
Now, currently the Tesla system does give you a much clearer idea of what the car sees around it, but still no option to see everything it records; there are means to get this footage, but it's not something every driver can do.
Now my TM3 goes in at the end of the month for the hardware 3.0 upgrade, which will allow it to process more of what it sees and also relay that to me. The difference in what others have shown of just what the car relays back to the driver exposes how much information has to be processed.
Then comes the simple fact: the real issue is that the hard decisions are ones we make all day, driving by exception. We make so many choices that are exceptions to the rule that we are numb to it; it is nearly subconscious.
Then the other issue, other drivers. Not just people who drive badly but those who will go out of their way to cause self driving cars issues. with the number of people on the road you will find them with too much regularity. More might pop up if regulation comes down which demands self driving cars or semi autonomous cars obey all traffic laws, especially speed limits. On some roads I drive just obeying the limit is enough to impart rage on other drivers.
The problem with widespread L4/5 is that you need to get to a car that can literally drive itself between 2 points on a map with a high degree of reliability, in a wide range of weather conditions, on roads of varying conditions, with unexpected/unmapped obstacles that may require doing something technically illegal to get around, without human help. And that, as you say, seems really hard.
Put it this way: If a driverless car would be safer than human drivers, then that would imply that all the necessary technology would already exist to allow humans to be the driver while the car still keeps them out of deadly situations. If such tech is not possible to develop, then it seems unlikely that true driverless tech (which would need to combine that safety tech with a lot of other technology) will happen.
That's the kind of weird edge case that makes me think we're farther from real self-driving than most people want to admit. I'd be hard pressed to define exactly how I'd tell the computer to avoid that. Maybe the answer is that it can't deal with that until it results in hydroplaning, and then it reacts however it can.
Worse than a mere car-destroying pothole, what if the flooded portion of the road no longer existed at all? That's a common enough occurrence that student drivers are generally warned about it specifically, warned to never drive across flooded sections of roadways because your car might fall into 10 feet of water without warning. If a self-driving car doesn't avoid a scenario we teach teenagers to be wary of, I don't think it deserves to be called self-driving.
There's a few other places in town that often flood, including one on a main road that doesn't really have any alternative route. There's also a section of the road along the coast where high surf sometimes hits the sea wall and splashes up and over it on to the street. It's quite a sight, but I wonder what a self driving car would make of _that_.
What do human drivers do there when they are unfamiliar with the road? Seems like the autopilot should be able to do at least no worse than human drivers.
I assume the same as me -- I avoid water-filled ruts whether I'm familiar with a particular road or not.
It may well get advanced enough some day to pick up the difference between wet pavement and water rut, and car-to-car communication could potentially help, but that's a level of technology improvement that feels considerably farther away than just a few years.
I have a 2020 Subaru and it has lane centering on top of that. On the highway, with clear lane markings, it comes very close to driving itself. It won't slow down to handle curves on its own, though.
> the majority of the miles
The "miles" metric is not very relevant; not all miles are created equal. Driving in a straight line with almost zero challenges is nothing like driving as a whole. Would a power supply that can only take idle loads (the majority of the time) be considered any good? How about a phone that can make calls only most of the day?
Even a "dumb" car can be considered self driving by this definition. All you need is cruise control or if you want to get fancy, ACC and LKA. This would allow you to drive for hours on end or hundreds of Km on a highway with little (fractions of a second at a time, maybe a total of 1Km with hands on wheel) or no human intervention.
When the models without those downsides require constant strong attention and nearly unbroken eyesight? Yes, sell me those models now. I'll switch between manual mode and somewhat-limited-feature mode when necessary.
Many cars spend a lot of time idling, especially in crowded cities. Would you call them self driving because they can perform unattended this very, very narrow task (but long) from the whole activity of driving?
As for the “yes, sign me up”, I call bluff. There are plenty of situations in daily life where you need to have constant supervision. You wouldn't let your child operate in them under the assumption that “there's a good chance they won't die”. You will take the constant supervision over the device that only handles 1% of the features you need, and even then might kill.
Take an iron that can iron by itself but only small, cotton clothes, and once in a while it burns down the house. Do you leave it unattended? Do you even call it self ironing?
I don't care what you call it. I just want to be able to ignore the road until the car beeps at me. (with at least 10 seconds of warning)
> Many cars spend a lot of time idling
But a car can't be idling while I use it. A computer can have idle power levels while I browse the web, and a phone that works in a certain range of hours is still very useful. If you wanted to make an analogy for "useless", that didn't come across.
> You wouldn’t let your child operate in them under the assumption that “there’s a good chance they won’t die”.
> once in a while it burns down the house
Who said anything about 'a good chance'? You said those products worked for specific uses. You didn't say they would also fail at random, even inside those limits. This is a different scenario now.
If the car can handle "a straight line with almost zero challenges", sign me up for that easy highway driving. It's only when you remove the word 'almost' that it becomes a cruise control missile / deathtrap.
> I just want to be able to ignore the road until the car beeps at me. (with at least 10 seconds of warning)
You may want it but you are not getting that from any "self driving" system in use now. It's even illegal for you to do so. And that isn't self driving anyway, it's driver assists like ACC and LKA assisting you in very particular conditions as long as you're still in control and alert.
> But a car can't be idling while I use it.
If you're in traffic and the car isn't moving it's idling. In a traffic jam you may even idle more than you move, and many highways and crowded cities give you exactly that.
> You didn't say they would also fail at random
It was an analogy to self driving cars killing people even with those very narrow specific uses so I didn't think I had to spell it out. That was my whole point. Consider it spelled out now.
> If the car can handle "a straight line with almost zero challenges" [...] It's only when you remove the word 'almost' that it becomes a cruise control missile / deathtrap
If a system left to its own devices turns a degraded lane marker into a major life threatening challenge then it just makes my point that you're the driver and it can only assist.
To be a driver you are tested in all kinds of conditions. But you consider a car to be "a driver" because it can drive in a straight line most of the time?
Self driving cars today are Silicon Valley's "not hotdog" app. That was meant as a joke but it fits the current situation of self driving to a T. Great if your needs are ultra specific relative to the whole spectrum of the task at hand and you're willing to take the risk of getting killed in the process.
For a narrow enough set of conditions you can define anything almost any way you want and be correct.
> If you're in traffic and the car isn't moving it's idling.
Let me rephrase. A car does not idle continuously while being useful, it only idles for a minute or two at a time. A computer running at "idle power levels" is more like a self-driving car that only goes up to 15mph. Which would actually be very useful if you're in stop-and-go traffic.
> If a system left to its own devices turns a degraded lane marker into a major life threatening challenge then it just makes my point that you're the driver and it can only assist.
Depends on what you mean by "left to its own devices".
If you mean that I'm zoned out watching a movie, and the car crashes all by itself, then that car doesn't meet the standard I laid out.
If the car beeps at me, and I have to intentionally choose to ignore it to get into a crash, that's acceptable. Because I won't ignore it, and won't crash, even though the car can take over for hour(s) at a time. Go ahead and call it "driver assist" if you want.
> Self driving cars today
Are level 2, and I want a level 3 or 4.
Level 3 is the minimum I described, and I don't care if we call it "self driving" or "driver assist", it's useful.
Level 4 is just level 3 plus the ability to pull over when confused, but I would say it's unambiguously "self-driving" at that point.
I do agree that we're still at level 2: it's relatively amazing (i.e. see what can be done with the limited toolset available), but not really that great: outside of predictable, menial tasks, the rough edges become acutely limiting.
I checked, assuming it's actually using a radar - and you're right. They seem to use a stereo camera system. Neat.
I'm surprised it works well in bad weather, but I've never tried it.
But I agree with the parent, the suite of driver assistance features is very good, but a long way from "self driving".
As I see it, self-driving is a cursed problem. If you could choose a different problem space, or ignore some complications, self-driving would merely be very hard. But the requirement to handle ANY external behavior under ANY external conditions and navigate to ANY destination, while also maintaining safety, is impossible* to satisfy.
One cursed corner of the problem space is ML itself: ML is amazing in that it enables emergent behavior, but ML is terrible in that it gives rise to emergent behavior. The traditional engineering mindset wants a map of inputs to outputs, but you don't get to choose your inputs in the SDC world, and you can't specify all of your outputs.
Another cursed corner is the Always/Never problem. You want safety features like Automated Emergency Braking to Always kick in when there is a problem, and you Never want them to kick in when there isn't a problem.
I really don't know how any of this gets fixed. I do think that sensor fusion and advances in AI can reduce the size of some of the cursed area of the problem, but the problem is also meta-cursed in the definition of Self-Driving: the solution to normal cursed problems is to reduce or change the solution scope, but if you reduce or change the Self-Driving solution scope, then "it's not real self driving".
3. edit: [pdf] https://core.ac.uk/download/pdf/81580604.pdf Especially see section 4.3
DNNs (alone) are incapable of Level 5. OTOH, it's conceivable that in this limited domain, a "Society of AIs" might yield 'acceptable' performance (1K deaths per year?).
I would also like to see a given driving system rated at maximum speed for autonomous use. If you're moving at 5 MPH and your stopping distance (reaction time + coming to a stop) is 5 feet, then your 'Zone of Despair' is the 5 feet in front of you (and some on either side). If you absolutely know this Zone is clear, you can proceed with confidence.
It's a hellishly difficult problem; but I don't think it requires AGI to solve 'adequately'.
Why would they need to force Level 5? They can operate with the promise of L5 in the future, until the moment the statistics show it’s safe to turn it on.
Between now and then they can provide a steadily increasing supply of value to their customers. And steadily ramp the data they have to improve the system.
I have no idea if it will be 2 years or 10 years or 100 years before they can turn on Level 5 without a geofence.... but what difference does that number make? It’s academic. It’s not an existential question for either company’s self driving value prop.
And consider that different jurisdictions will set different dates. All it takes is one government to OK L5 in some area, and Tesla’s ass is covered. And every jurisdiction that signs on will put more pressure on other jurisdictions to follow suit.
To me, that’s the brilliance of Tesla’s strategy. They have decoupled their forcing functions from the L4/L5 legislative question. Governments can drag their feet as long as they want but it won’t hurt Tesla’s ability to sell the tech or collect the data they need to improve the tech. They are already selling autonomy features, and they can make that value prop incrementally better year over year for as long as it takes.
1. Anything less than 100% FULL automation is MORE dangerous than manual driving, because the "driver" will almost certainly lack any situational awareness. When the need for manual intervention happens, it will be at moments when you need maximum awareness and split-second reflexes.
2. There are SO many edge cases and never-seen-before situations that happen when driving "at scale" that the automation features will fail unexpectedly and in strange ways.
3. G and Cruise might be exceptions, but most of the companies in this space are cowboys with reckless disregard for public safety and terrible "iterate quickly" coding practices.
4. At some point there will be an accident that kills a photogenic "middle America" person or people and at that point the government will crush this industry with regulation, with the financial backing of automakers, UAW, and other people who benefit from the status quo.
The only way 100% fully self driving cars will ever happen is for the infrastructure itself to be built to accommodate them. Mixing regular cars, parking, trucks, bicycles, scooters, pedestrians, dog walkers, hoverboards, etc all together on the same roads ensures that the problem is unsolvable.
Something I worry about is that if SD became normal, then people would never get the experience of thousands of hours of driving a car in countless situations that is needed to develop good judgement, much less quick reflexes. And so when a rare situation arises when they need to take over, they won't be able to do it well.
There's a really great YouTube video called "Children of the magenta" that's part of a lecture the chief training pilot for AA gave about 20 years ago or so as part of their continuing education. He goes over incidents and situations and the essential conclusion is that pilots are getting too used to turning dials and flipping switches when in many situations they need to just take control and fly the plane.
Airline pilots already face this -- it's hard to stay engaged in flying when the plane can fly itself. By the time something bad happens and the plane gives up and hands control back to the pilot, the pilot lacks the full situational awareness he would have had if he was flying the whole time.
With enough reliable bandwidth a decent workaround is to have the system fall back to human control by someone other than the local driver. I'm imagining a Car Traffic Control Center where your onboard robot driver sees a situation it doesn't understand and throws control to a remote driver wearing a VR rig with your car's video feeds as input. The remote human driver assesses the situation, steers you carefully past the weird obstacle/issue then returns control to your robot.
A system where robots drive automatically, say, 95% of the time while human remote drivers handle special cases 5% of the time still seems like a big improvement over the status quo - there's a market for that.
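A minimal sketch of that handoff logic might look like the following; the confidence threshold, the controller names, and the idea of a scalar "confidence" score are all invented for illustration:

```python
# Hypothetical sketch of the robot-driver / remote-driver handoff.
# "confidence" would come from the onboard perception stack; the
# threshold and controller names are invented for this example.
from enum import Enum

class Controller(Enum):
    ONBOARD_AI = "onboard_ai"
    REMOTE_HUMAN = "remote_human"

def choose_controller(confidence, remote_available, threshold=0.95):
    """Fall back to a remote human when the onboard AI is unsure."""
    if confidence >= threshold:
        return Controller.ONBOARD_AI
    if remote_available:
        return Controller.REMOTE_HUMAN
    # Last resort: neither is viable, so signal a safe stop.
    return None

print(choose_controller(0.99, True))   # onboard AI keeps driving
print(choose_controller(0.60, True))   # hand off to the remote human
```

The hard part, of course, is everything this sketch hides: producing an honest confidence number, and guaranteeing the bandwidth for the handoff.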
Throwing a remote driver into a dangerous situation with no context sounds like a terrible solution to me. See also: Cpt Dubois from AF447. Doing that deliberately and repeatedly just multiplies the chances of catastrophic error.
And the VR driver job would be so stressful that there may not be many takers. Who would take responsibility if they made a bad call and caused a crash?
But there has to be an assumption that this is a rare event and that it takes place in a context where a VR driver has time to establish some situational awareness. (Oh, and in a lot of situations, there is no "just pull over" option. I've gotten into some bad weather situations, and there are often only sub-optimal options at that point. Pulling over can also be dangerous, or may not even be an option.)
We might be thinking of different situations. I'm mostly imagining a car or truck that does great on the freeway but poorly on surface streets or poorly on particular KINDS of surface streets or even particular KINDS of weather...and we KNOW this and can recognize those situations. The remote driver typically jumps in BEFORE the part that is actually dangerous.
This isn't a new problem - consider a big ship that delegates harbor navigation decisions to a harbormaster and/or tugboat, or a big plane that delegates final runway approach decisions or parking at the gate decisions to a control tower and/or local guy driving a tow vehicle or waving directional flags. You could slice the world up into "regions we can reliably navigate without help" versus "regions where we still need a little help", with the latter group shrinking over time as technology advances and maps get better and edge cases are better handled.
The initial product offering might be for long-haul truckers - the truck drives itself for hours on separated freeways and then throws to a handler when it needs to navigate unfamiliar local surface streets for a delivery. But once you've GOT that sort of infrastructure - basically a map with geofenced areas where remote drivers step in - it's a logical next step to make the help areas dynamic and mark slowdowns or detours due to an accident or a landslide or a cow on the road for similar handling.
95% of that job would not be stressful. I'd be more worried about it being boring...but then, so is normal in-person driving.
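That "regions we can navigate without help" versus "regions where we still need help" split could be modeled as a simple geofence lookup. Everything here (the region names, the rectangular bounds, the coordinates) is invented purely to illustrate the idea:

```python
# Toy geofence map: named regions marked as either autonomous-capable
# or needing remote help. Names, bounds, and coordinates are invented.
REGIONS = {
    "freeway corridor":  {"bounds": (0, 0, 100, 10), "needs_help": False},
    "downtown delivery": {"bounds": (40, 10, 60, 30), "needs_help": True},
}

def needs_remote_help(x, y):
    for info in REGIONS.values():
        x0, y0, x1, y1 = info["bounds"]
        if x0 <= x <= x1 and y0 <= y <= y1:
            return info["needs_help"]
    return True  # off the map entirely: assume help is needed

print(needs_remote_help(50, 5))   # on the freeway: no help needed
print(needs_remote_help(50, 20))  # downtown: throw to a remote driver
```

Making the help areas dynamic is then just a matter of updating the map, which is what the landslide/cow-on-the-road extension amounts to.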
I'm ready to guess what it looks like: Car on autopilot going through a residential neighborhood, playground ball bounces out into the street from between two parked cars, car does not brake in anticipation of a 6-year-old that is not currently visible.
When you're going 45 in a curve driving along PCH, and a sudden fog bank obscures your cameras and LIDAR and the computer says "your controls, good luck!" you have maybe 2 seconds to react, if you're lucky. It might be a lot less.
Humans make really dumb decisions sometimes, but we are also outstandingly capable of reacting to novelty.
Whether it's an illusion of safety or a letdown of attention, the general idea is that humans should never trust that things will go well when there's a real probability that they won't. I think the answer is not in the amount of automation, as you explained well, but in focusing users on the critical parts they should watch out for; there, automation clearly helps us remove the unimportant from the equation and makes us more responsive and more accurate on the important parts. But that takes a lot of great UX, and that's one field where e.g. the military is usually great but commercial companies are abysmal if they can get away with it (read: sell enough to justify not spending a dime on more quality). That's worrying when safety is involved, but it hasn't proven a moral or ethical problem for most industries absent regulation (forced ethics, ha!), so... I think there's valid concern by OP.
As for passenger jets, the Airbus A320 (late 1980s) was the first commercial plane to have a "full" autopilot; all systems were electrical¹ (manoeuvering, thrust control, etc) which allowed the computer to integrate and manage it all. :)
It was tested a number of times by pilots for fun, from taking off to landing entirely on autopilot — ofc they're standing right there ready to take over if anything goes wrong but I've seen it first hand many times. We're talking commercial flights with passengers, it's 100% safe and actually quite "smooth" because the computer is so accurate.
Honestly, the problem is much, much, much easier for planes: a good GPS and it becomes quite the closed problem, and obviously 100% of autopiloted planes are simultaneously piloted by real humans... ready to take over. Yet a plane could technically land itself just fine if pilots were incapacitated, it really could. I suspect it did more than we know for many reasons. And when flying by instrument (means you see s__t), an autopilot is basically just a computer doing what a human would do slower by reading the same data (and maybe cross-checking with physical/manual instruments, but an autopilot doing the grunt work of stick-holding gives you more time to double, triple-check everything incidentally).
¹ Note that all systems are also doubled (even tripled) with mechanical (hydraulic etc.) failovers, because obviously you can lose electricity in catastrophic situations. Hence why it always seemed crazy to me that a plane requires software to fly properly instead of plain old good physics and mechanics.
I think we'll see 100% automation for freeways within a short time. I think the only way we'll see 100% automation for arbitrary point A to point B within 50 years, that a standard human could "safely" do, is if we get flying transport.
On top of the social side of things, the roads are not in great shape in northern climates and many of the visual cues that we use to drive can be missing or very hard to see for many miles. Striping delineating the edge of the road often gets worn away over the course of a few years and doesn't get re-painted for a few more. Some major roads connecting two towns may not even have a paved shoulder, just 24 feet of asphalt with a stripe down the middle. (For reference, 12 foot lanes with 4 foot shoulders are the general norm for this part of the US.) All of this and I have still yet to touch on weather.
I look forward to self driving cars. However, I don't think that they are going to solve many of our traffic issues outside of urban cores. For me, the incremental steps to reach self driving will result in fewer injuries and fatalities on our roadways, and that is a win.
I really think self driving could improve public transportation in urban cores by tracking preset paths. I could also imagine buses that are mostly autonomous, but where a remote driver could override the controls for exceptions.
My answer would be no; the opposite: if self-driving cars work out, people will probably just drive more! The only traffic gains would come from 100% self-driving cars that could then be optimized somewhat globally.
The only thing unleashing it is the decision to turn it on, which is largely up to regulators.
Ya, but no one thought "around the corner" would mean 2020. The strawman is always to interpret "around the corner" as tomorrow, but many people just mean in a decade or two.
The Guardian: https://www.theguardian.com/technology/2015/sep/13/self-driv...
Business Insider: https://www.businessinsider.com/report-10-million-self-drivi...
There is a point in the conversation where Lex and Jim clearly disagree about how "easy" self-driving AI should be. Lex is clearly pessimistic and Jim is clearly optimistic. I have to admit I was more swayed by Lex's points than by Jim's, but it is hard to discount someone so clearly (extraordinarily) expert and working directly in the field.
It feels to me like those working on the concept are expecting that if they keep adding sensors and twiddling with AI they can avoid that.
I get why. It's a vast and expensive undertaking that is out of their control and they want to sell their product asap. But if we started with major city streets and highways it could be a quicker and safer route to get it to market.
Years ago (in the late '90s) I worked on an "Intelligent Traffic Systems" project for the city of Branson, Missouri. The 3M Corporation demonstrated a magnetic tape for street lines, and a snow plow truck outfitted with sensors that could detect the lines, connected to vibrators on each side of the driver's seat. When the truck got too close to the line, the seat would vibrate on the side they were close to. I got to ride in the truck for a demo of the tech, and it worked well.
We also had street cameras that detected autos and could estimate speed and traffic congestion. These sent video and data to the local 911 center. I created a "traffic congestion map" that ran on a web server using that data, which worked pretty much the same way Google Maps shows congestion.
We need "smart streets" to really make this work. Without adding that to the mix, corporations could be banging their heads against the wall, spending billions of dollars trying, and never making the last mile.
It's probably worth ignoring what most people think about topics like this, except to the extent that their insane thrashing affects the situation (votes, funding, etc).
One day we'll look back and say "aww, they tried so hard with the limited tech they had and got so close but what they needed to make it good just didn't exist yet".
Driving is simple but still logical. Though it doesn't seem to involve processing as abstract as what is required for language, that doesn't mean it is just looking at the road ahead and making turns. Say an accident happens and the road conditions are messy, so the road is temporarily switched to alternate, allowing one direction and then the other; how would an AI understand this? It can't.
Current NN-based models have a huge problem guaranteeing robustness, while humans can be incredibly resilient against adversarial scenarios, because we are super-fast few-shot learners.
The only exception might be specially instrumented road tracks, with limited access (like train rails).
And yes, I fully expect such a thing on interstates. It will never happen inside cities.
Want to make money? Design a vehicle agnostic system, primarily aimed at long haul trucks. Install it in a ton of cars, but don't switch it on.
Then instrument some highways, and convince governments to only allow these cars on those lanes. By being vendor agnostic, anyone with a car could get this.
It will require deep pockets, and maybe you would need government to mandate this (some kind of open standard).
I think the direction of the thought is reasonable though, you should just take it a few steps farther. If networked infrastructure is a good idea, then maybe cars are not a good idea. We already have driverless, well defined, organised modes of transportation, they're called trains.
Modern subway systems already pretty much drive themselves; they also come with the added bonus of not having everyone carry two tons of steel around.
What if terrorists spoof phantom cars or rewrite maps to send people off cliffs? Assume they stole the master signing keys, have root on the central servers, exploited 0-days on the client car software, etc. One car misbehaves somewhere, triggering sensors of nearby cars which tell every other car in an N kilometer radius to pull over en masse. If anything, it is more robust to hacking/terrorism than the independently-self-driving Tesla or Waymo approach because those do not have the benefit of the herd. One gazelle in a herd who gets tackled can yell out to save the rest of the herd.
> I think the direction of the thought is reasonable though, you should just take it a few steps farther. If networked infrastructure is a good idea, then maybe cars are not a good idea. We already have driverless, well defined, organised modes of transportation, they're called trains.
How do I take a train/subway from my apartment to the front door of a McDonald's? People go from building to building. It's not practical to do this without cars or buses outside of maybe 5 cities in the world like HK or NYC.
Also there is an ungodly amount of cars in the world. It's a lot cheaper and efficient to retrofit cars with self-driving modules than to recycle all that metal into trains/subways.
Mostly by taking the subway to the nearest station of the mcdonalds and walking. I've lived in more than 5 cities without ever owning a car. Walkable cities on the planet are the norm, not the exception, the US is very skewed in that regard because it built most of its cities around the car, but that is only a fraction of the world population.
Much more important for the future is to ask what all the places do that still have the decision to make if they want to expand their usage of cars, like the African continent and much of Asia, or if they want to invest into mass transit and built their cities around alternative modes of transport.
For example, things like getting companies to agree to a unified standard at a government/industry level & determining frameworks for liability all seem to be as important (and perhaps difficult) as eking out another 0.00001% increase in safety.
I'm not even a fan of the idea anymore, btw; just trying to assess the technical hurdles. Sensors have become numerous and cheaper, and compute power is immense. Investment won't need to be higher to make new steps in 20-30 years.
>The result? They went on a lot more car trips.
That's a kind of pointless 'study'. Of course I will take a lot more trips for some time after getting a chauffeur/self-driving car, just for the novelty of it. One family for one week doesn't really tell you anything.
What happens to car-insurance companies?
Seems like with self-driving cars, the form of car insurance we have now wouldn't really be necessary. I expect car-insurance profits to decline. Isn't there an incentive (read: lobbying) for car-insurance companies to discourage self-driving cars?
In other jurisdictions, auto insurance is provided by a single public auto insurer - these rarely run a notable profit.
Maybe machine learning can be used to tell the difference between a dog and a plastic bag, but you'll need some hard code to describe how to react to either.
My understanding is that's largely how it's done. The ML part is mostly about recognizing objects. But the car doesn't "learn" how to drive. It's told how to drive depending on what's happening in its field of view.
Which is why there's probably misperceptions about the importance of miles on the road. It uncovers un-programmed situations but it's not like the car runs over someone and reinforcement learning leads to it not doing that next time.
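A toy version of that "ML recognizes, rules react" split might look like this; the labels and the reaction table are invented for the example, not drawn from any real system:

```python
# Toy illustration of the split described above: a learned classifier
# would produce the label, and hand-written rules decide the reaction.
# Both the labels and the rules here are invented for illustration.
REACTIONS = {
    "pedestrian":  "brake hard",
    "dog":         "brake hard",
    "plastic_bag": "continue",
    "pothole":     "steer around",
}

def react(detected_label):
    # Unknown objects get the most conservative response.
    return REACTIONS.get(detected_label, "brake hard")

print(react("plastic_bag"))  # continue
print(react("dog"))          # brake hard
```

Note that more road miles grow the label set and the rule table, but nothing in this loop "learns to drive" from a crash.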
Turns out that the underlying technology was far from maturity in each case and the use cases limited to a handful of enthusiasts.
But that's not really sexy, is it? It just doesn't sell as well as fully automated AI driving does.
1. Wild optimism fueled by how rapid the progress of ML had been in certain domains over the course of a few years, combined with a lot of hype and general SV techno-optimism. (And the fact that a certain demographic so desperately wanted a robo-chauffeur to drive them around.)
2. Hypesters and scammers who knew it was mostly smoke and mirrors but it didn't matter so long as they got their payday.
I've even seen it happen within the same company: marketing dude talks to engineer, gets it all wrong and exaggerates capabilities; then CEO demands to know why the capabilities in his sales literature don't exist in the product.
I call it "human informational centipede."
Hypesters and scammers are always around; except in Theranos-type cases, they mostly don't really push the needle. Pretty sure Irene Aldridge didn't change perceptions of HFT much, for example.
To your first point, I agree there was something of a big game of topper going on for a while. If anyone had come out and said that they weren't going to have production self-driving for 10 years (much less 20 or 30), a lot of people, including on boards like this, would have nodded their heads sadly about how far behind $COMPANY was compared to a certain other car company that was already supposedly selling self-driving-capable vehicles.