
Self-driving cars turned out to be harder than expected - apsec112
https://www.vox.com/future-perfect/2020/2/14/21063487/self-driving-cars-autonomous-vehicles-waymo-cruise-uber
======
cletus
Alternate headline: Some people overstated how simple this problem was because
they didn't know what they were talking about, could promote themselves as
experts, could get funding for a self-driving startup or some combination of
the above.

I guess that isn't as pithy but it's closer to the truth.

When it became clear that Uber's strategy (under Kalanick anyway) was premised
on replacing drivers with AIs before the cash ran out I couldn't see general
self-driving vehicles coming within 20 years. I still say that's true. AI
assistance? Sure. But there's an uncanny valley there too where the AI will be
good enough in most circumstances that drivers lose attention and people will
die. You already see this with Tesla autopilot.

Here's a simple counterexample to the idea that self-driving cars are "just
around the corner": in NYC, quite a few buildings have doormen. This is great
for residents. Part of this is dealing with deliveries and so forth but
there's also an issue of general security. People can sneak in (and I'm sure
do) but just the fact that a human is there acts as a strong (but not complete)
deterrent. Just like having a dog is one of the most effective burglary
deterrents.

What prevents a lot of bad actions on the roads is actually fear. Fear of what
other drivers might do. Fear of road rage by other drivers. That sort of
thing. This is just how humans work.

Once a driver knows the car next to them isn't driven by a person it changes
their behaviour. They will do things they wouldn't do if it were a human
behind the wheel, particularly because they know an AI won't ram them, cut
them off, yell at them and whatever. There's no fear there. Even if there's a
passenger in the car, it's still (psychologically) different.

How do you program around humans changing their behaviour to take advantage of
there being no driver in your car?

~~~
AtlasBarfed
As usual the article conflates the (very difficult) problem of self-driving
everywhere with various other sub-problems.

The self-driving problem on highways is much more solvable, and can further be
improved by convergent infrastructure evolution (forewarning of traffic jams
from central control, embedded rfid to aid pathfinding in bad weather, weather
reports, wildlife nets/barriers, highly standardized signage, internetworked
cars). It would serve a number of extremely valuable functions: efficient
logistics, safety automation of boring long-haul driving, and better
utilization of infrastructure (automated overnight driving when the
interstates are much less used).

I don't really care if my trip to Taco Bell is automated. Sure, it would be
nice, but that is so insanely difficult compared to automating a 400 mile trip
on an interstate.

When they can fully automate a 400-800 mile interstate trip so that I can sleep
in my car while it drives overnight, it will massively disrupt the airline
industry in so many positive ways.

But the article is from a sensationalist news source, so what can one expect.

~~~
ghaff
I suspect that part of it is that a lot of people in the tech and tech
journalism space live in and around cities. Many of them are mostly focused on
being driven around and not owning a car.

Long highway drives may be something they don't do that often. So the ability
to own a car that can automate that subset of their driving probably doesn't
strike them as all that interesting.

Personally, as I've said elsewhere, this seems like a much more tractable
problem and gives me a lot of the total benefit of self-driving.

~~~
AlotOfReading
It's way more difficult from a fleet operational perspective and, more
importantly, way less profitable. Take LA<->SF as an example. That's about 6h
driving, and buses do it for $25-50. That's a measly 7-14 cents per minute.
Once you've made the trip, the company has to find someone going the other
way. If there's any directional or temporal imbalance, the fleet ends up with
a lot of cars sitting idle or driving hours to be useful, further reducing
utilization. If the car has an issue halfway, now someone has to pick the
customer up hours from a company facility and get them to their destination.
All this for pennies. You don't even get an easier problem, because you still
have to drive in the city to drop people off.

Compare to city driving: 10 minute drive, $5. That's at least 5x more
profitable and takes less capex. Plus, if anything goes wrong, your facilities
and backup vehicles are already nearby.
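
The per-minute arithmetic above can be sanity-checked in a few lines (the fares and durations are the ones quoted in this comment, not real market data):

```python
# Revenue per minute for a ~6h LA<->SF trip at bus-level fares,
# versus a short in-city trip, using the figures quoted above.
la_sf_minutes = 6 * 60
for fare in (25, 50):
    print(f"LA<->SF at ${fare}: {100 * fare / la_sf_minutes:.0f} cents/min")

city_fare, city_minutes = 5, 10
print(f"city trip at ${city_fare}: {100 * city_fare / city_minutes:.0f} cents/min")
```

At 7-14 cents/min for the long haul versus ~50 cents/min in the city, the city trip wins on revenue density even before counting repositioning and rescue costs.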

~~~
ghaff
I'm assuming individuals still own cars with autonomous features (or rent them
like they do today). Which is sort of my point. This capability doesn't really
help those who want (or need) to be driven everywhere. It's a convenience (and
safety) feature for car owners. It would be a very nice one. But it doesn't
really help eliminate car ownership or the need to drive.

------
Mattasher
You have to wonder, when you see these stories, whether the engineers in
charge of these programs spent much time driving around, trying to see the
road like an AI might. In my experience, driving in the city, it's rare that I
can go more than a few miles before some exception comes up that requires
human judgment. Here are a few of the things I saw just last week:

* Car double parked. Do I cross the center line to go around them?

* Stop sign that's been bent so that it's no longer obvious which street it refers to.

* Semi with its hazards on in a left turn lane. Do I make a left turn around it?

* Cop directing traffic by hand at an intersection.

Most city driving is a series of continual exceptions to the rules, or
situations that are one-off. Who thought this would be easy?

~~~
dstroot
> Cop directing traffic by hand at an intersection.

That’s going to be really challenging. Good one!

~~~
pasttense01
The cop can carry a transmitting device which will communicate with cars
telling them what to do.

~~~
bdamm
Who will pay for that?

~~~
pasttense01
A cop already carries thousands of dollars worth of equipment. For example,
the police radio could be modified to transmit to nearby vehicles.

------
heynk
When I was starting my engineering career, it was right around when the first
DARPA challenges had started. The hype was beginning, and my optimism towards
technology was strong. I thought the predictions and timelines would be
correct, and I still feel strongly that self-driving will be safer than humans
in the long term.

Recently, I bought a newer Subaru, with EyeSight. It has adaptive cruise and
lane keep assist. The LKA is fine - it'll beep if you sway outside of a lane,
and automatically adjusts the steering, but it won't keep you centered. It's
more of a safety thing, and it works well from that perspective.

The adaptive cruise is really good. It's camera based, and I have had zero
problems with it. It works well at night and in pouring rain. It'll even stay
pretty close to the car ahead of you if you turn the "tolerance" all the way
down. I'm always impressed.

Since I've had this car, I've thought a lot more about the practical
implementation details of actual self-driving. I more often notice situations
when driving that are seriously complex.

The more I think about it while I'm driving, the more I realize how fucking
hard self-driving would be.

~~~
slavik81
I tried out a relative's Subaru on a 6h drive over the holidays. I really
liked the adaptive cruise control for following behind folks who were not
keeping a consistent speed. I just set it to a reasonable value at the maximum
following distance and stopped worrying about my speedometer.

However, at one point the guy in front of me turned off onto a small side
road. It was at night, and I don't think the car realized he had moved into a
turn-off lane. It slammed on the brakes. I probably went from 90kph to 40kph
before it realized I was not going to hit that car.

I completely failed to react to the situation. I was worried my erratic
braking would cause an accident behind me, but in the moment, I didn't know
how to stop it. That was not a type of emergency I had considered or prepared
for.

~~~
bdamm
Yeah, this is interesting. My Tesla Model 3 behaves similarly, so I'm often
ready to punch the accelerator. In the Tesla, this is how you solve that
problem. The driver's push on the accelerator contraindicates the AI's
decision to slow down, and so the car follows the driver's direction.

Where it gets dicey is the scenario where the "imminent collision" (hazards
on, seatbelts tightened) detection is triggered, and the driver continues to
push hard on the accelerator. Tesla has a fairly lengthy statement in the
manual about this scenario. The bottom line is there are all kinds of
heuristics at play that may or may not result in an override depending on the
specific sequence of events.

~~~
yread
I'm amazed by people like you. You're a programmer; you know what your code
looks like. Worse still, you have seen other people's code, how they fail to
account for corner cases, you've seen so many articles on HN's front page
about security bugs found by fuzzing. Yet, you trust your LIFE to
"heuristics"? Do you really trust that when the proverbial black swan flies in
front of your car the software won't swerve you into the oncoming traffic?

Why don't you just drive yourself or take a taxi?

~~~
bdamm
The "collision imminent" scenario would occur whether or not the car is in
self-driving mode. If the car manages to avoid a collision, that is the amazing
part to me. And there's plenty of evidence that Teslas do, in fact, avoid quite
a lot of collisions. It would be foolish, however, to drive like it's going to
resolve all your collisions for you.

I view these as assistance to driving. It's a comfort that the steering and
brakes are not overridable. And honestly, if the system messes up so badly as
to go flying into a barrier, well, that's not so different from a tire
popping, another car careening across into yours, or other catastrophic and
unlikely events that do happen. We have seatbelts, crumple zones, airbags,
pre-tensioners, cargo hold-downs, and emergency services to help us survive
what even 40 years ago would be unsurvivable accidents.

~~~
perl4ever
Suppose crashes would otherwise happen on 1% of trips, and the car avoids 99%
of those, but it also causes a crash of its own on 1% of trips. Then it's
making you less safe.

Those are just arbitrary numbers and a simplistic framework, but the point is,
you can have a _huge_ increase in safety by the numbers, and a very small
increase in problems due to the safety system that cancel it out, because the
prior underlying rate of crashes was pretty small.

I think this is an abstract pattern that comes up in other contexts and it
doesn't seem to be intuitive.
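
A quick sketch of that arithmetic (all rates invented for illustration):

```python
# A system that avoids 99% of baseline crashes but independently causes
# crashes at the old baseline rate yields a slightly *higher* net rate.
baseline = 0.01          # assumed crashes per trip without the system
avoided_fraction = 0.99  # share of baseline crashes the system prevents
induced = 0.01           # assumed crashes per trip caused by the system

net = baseline * (1 - avoided_fraction) + induced
print(f"net: {net:.4f} vs baseline: {baseline:.4f}")
# -> net: 0.0101 vs baseline: 0.0100
```

Because the baseline rate is already small, even a rare failure mode of the safety system can wipe out a headline-grabbing avoidance percentage.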

~~~
bdamm
True, the implementation quality does matter.

------
csours
Disclaimer up front: I work for GM, I don't work on SDC.

As I see it, self-driving is a cursed[0][1] problem. If you could choose a
different problem space, or to ignore some complications, self-driving would
merely be very hard. But the requirement to handle ANY external behavior under
ANY external conditions and navigate to ANY destination, while also
maintaining safety, is impossible* to satisfy.

One cursed corner of the problem space is ML itself: ML is amazing in that it
enables emergent behavior [3], but ML is terrible in that it gives rise to
emergent behavior. The traditional engineering mindset wants a map of inputs
to outputs, but you don't get to choose your inputs in the SDC world, and you
can't specify all of your outputs.

Another cursed corner is the Always/Never[2] problem. You want safety features
like Automated Emergency Braking to Always kick in when there is a problem,
and you Never want them to kick in when there isn't a problem.
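
One way to see why Always/Never is so punishing is the base-rate effect: true emergencies are rare, so even a very accurate detector fires mostly on non-emergencies. A sketch with invented rates:

```python
# Assumed, illustrative numbers: a real emergency in 1 of every 100,000
# driving moments; the detector is 99.9% sensitive and 99.9% specific.
p_emergency = 1e-5
sensitivity = 0.999   # P(brake | emergency)       -- the "Always" side
specificity = 0.999   # P(no brake | no emergency) -- the "Never" side

true_brakes = p_emergency * sensitivity
false_brakes = (1 - p_emergency) * (1 - specificity)
print(f"false brakes per justified brake: {false_brakes / true_brakes:.0f}")
```

With these numbers the system false-brakes roughly 100 times for every justified brake, even though both accuracy figures look excellent on paper.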

I really don't know how any of this gets fixed. I do think that sensor fusion
and advances in AI can reduce the size of some of the cursed area of the
problem, but the problem is also meta-cursed in the definition of Self-
Driving: the solution to normal cursed problems is to reduce or change the
solution scope, but if you reduce or change the Self-Driving solution scope,
then "it's not real self driving".

0. https://twvideo01.ubm-us.net/o1/vault/gdc2019/presentations/Jaffe_Alex_Cursed_Problems_In.pdf

1. https://en.wikipedia.org/wiki/Curse_of_dimensionality#Machine_learning

2. https://www.newyorker.com/news/news-desk/always-never

3. edit: [pdf] https://core.ac.uk/download/pdf/81580604.pdf (especially see section 4.3)

~~~
jacquesm
This is spot on. The Always/Never problem is well articulated; it is exactly
why I got rid of a vehicle with auto braking. It braked twice when it wasn't
supposed to, and that was more dangerous than if it had never triggered at
all. And yet, in
an actual imminent crash it would have probably responded a lot quicker than I
ever could. Still, I'm not going to be scared twice in a month by my car doing
stuff 'on its own' to my possible detriment.

~~~
GeorgeTirebiter
Could you share which company's auto-braking had the false positives? What is
the acceptable false-positive threshold for you to use auto-braking again?
Also, did your system tell you _why_ it decided to brake (albeit incorrectly)?

------
erobbins
I've been beating this drum for years (but I'm a nobody, so it's like pissing
in the wind):

1\. Anything less than 100% FULL automation is MORE dangerous than manual
driving, because the "driver" will almost certainly lack any situational
awareness. When the need for manual intervention happens, it will be at the
moments where you need maximum awareness and split-second reflexes.

2\. There are SO many edge cases and never-seen-before situations that happen
when driving "at scale" that the automation features will fail unexpectedly
and in strange ways.

3\. G and Cruise might be exceptions, but most of the companies in this space
are cowboys with reckless disregard for public safety and terrible "iterate
quickly" coding practices.

4\. At some point there will be an accident that kills a photogenic "middle
America" person or people and at that point the government will crush this
industry with regulation, with the financial backing of automakers, UAW, and
other people who benefit from the status quo.

The only way 100% fully self driving cars will ever happen is for the
infrastructure itself to be built to accommodate them. Mixing regular cars,
parking, trucks, bicycles, scooters, pedestrians, dog walkers, hoverboards,
etc all together on the same roads ensures that the problem is unsolvable.

~~~
testcase_delta
To your first point, I'm not convinced that's true. People augment their lives
in lots of ways that don't seem to reduce safety. A few examples off the top
of my head: simple "dumb" cruise control hasn't led to more accidents.
Parachutes have auto-deploy features if the cord isn't pulled by a certain
height. Scuba divers use dive computers that basically eliminate the need to
learn dive tables (and beep at you when you're doing something dumb).
Apparently passenger jets are highly automated (I'm out of my depth on that
one). These are all on the spectrum towards automation and have only been
helpful. Do you think the problem occurs as you approach 100%? Like an uncanny
valley in the 99 to 99.99% range?

~~~
erobbins
The thing with other activities (diving, flying are great examples) is that
when a problem occurs, you generally have minutes to analyze what's happening
and decide on a solution. If my dive computer goes on the fritz, I can decide
to immediately start an ascent, or go off the physical dive tables, or make an
extra safety stop at, say, 20 feet just to be sure.

When you're going 45 in a curve driving along PCH, and a sudden fog bank
obscures your cameras and LIDAR and the computer says "your controls, good
luck!" you have maybe 2 seconds to react, if you're lucky. It might be a lot
less.

Humans make really dumb decisions sometimes, but we are also outstandingly
capable of reacting to novelty.

------
takk309
I am a traffic engineer and one of the things that I do regularly is write 10
year traffic plans for small to medium sized cities, think 60,000 people max.
When a lot of the self driving hype was really kicking off, I was told to make
sure to incorporate them into the plans. This mostly consisted of a few
sentences about self driving and that was it. When modeling future traffic
growth, the client would always say, "these numbers are pretty high, do you
think self-driving cars will lower them?" To which I would respond that it is
doubtful. I am in the rural west, where cars are a large part of life. Ride
sharing and public transportation are rarely a thing outside of the core of
towns. The idea of a self driving car that one doesn't own would be very odd
to a lot of people in this area.

On top of the social side of things, the roads are not in great shape in
northern climates and many of the visual cues that we use to drive can be
missing or very hard to see for many miles. Striping delineating the edge of
the road often gets worn away over the course of a few years and doesn't get
re-painted for a few more. Some major roads connecting two towns may not even
have a paved shoulder, just 24 feet of asphalt with a stripe down the middle.
(For reference, 12 foot lanes with 4 foot shoulders are the general norm for
this part of the US.) All of this, and I have yet to touch on weather.

I look forward to self driving cars. However, I don't think that they are
going to solve many of our traffic issues outside of urban cores. For me, the
incremental steps to reach self driving will result in fewer injuries and
fatalities on our roadways, and that is a win.

~~~
skywhopper
I'm always baffled by folks who assume self-driving cars would reduce traffic.
No matter what, even if they are privately owned, truly self-driving vehicles
could only increase traffic. If they can be dispatched without a human driver
to pick up groceries or take-out, for example, there'll be a lot more trips.
And sharing self-driving cars rather than owning them would surely be more
likely to _increase_ the number of cars on the road since they have to make a
trip to pick you up and another one after they drop you off.

~~~
takk309
The rationale I have heard is that a single car could be used to make
multiple chained trips instead of multiple cars making multiple trips. This is
the same idea behind ride sharing. I feel this breaks down with privately
owned vehicles. I do see an increase in traffic in areas with expensive
parking. Specifically, I would send my car to a cheaper lot, and that would
result in an additional trip. Even worse, that would be a zero-occupancy trip.

~~~
ghaff
I'm also pretty sure it will increase commutes, and this applies even if self-
driving isn't fully door to door. There are probably a lot of people who would
hesitate to do an hour car commute today who would be a lot more open to a 60
or even 90 minute commute if something else is doing the driving.

------
aphextron
It amazes me how Tesla can continue to just outright lie in their sales
material about this. Right now you can purchase a brand new Model S with a
"full self driving" package, even though no such thing exists, nor is there
any timeline at all to when it will (if ever). Autopilot as it stands is
nothing more than an advanced driver assist system with blindspot monitoring
and automatic lane changing. Not even close to a Level 3 system, let alone
fully autonomous.

~~~
pkaye
Didn't Elon Musk say there would be robo-taxis by 2020?

------
bravoetch
This meme of shock and surprise that self-driving cars haven't appeared
overnight is just media BS. Yes, all technology development is a gradual
process and we may not know how long it takes. Companies working on it have
pitch decks and investors so they give their best or most optimistic
estimates.

~~~
ghaff
Go back a couple of years and look on just about any thread about self-driving
on a forum such as this one and you'll find no shortage of people arguing that
they're just around the corner. Because, after all, that's what the SV
hypesters were saying and they'd never lie.

~~~
seanmcdirmid
> you'll find no shortage of people arguing that they're just around the
> corner

Ya, but no one thought "around the corner" meant by 2020. The strawman is
always to take "around the corner" as meaning tomorrow, but many people just
mean in a decade or two.

~~~
Piskvorrr
Yeah. It's been _juuust_ beyond reach, but within five years to mass adoption,
for a century now.

~~~
seanmcdirmid
I don’t think anyone was hyping self driving cars in the 1920s. If they were,
they already had horses that could do that anyways.

~~~
Piskvorrr
Nope. Horses obsolete, slow, hungry, disobedient. Future was in machines.

https://io9.gizmodo.com/when-people-in-the-1920s-and-1930s-imagined-the-future-1695028672

------
reggieband
I've been watching Lex Fridman's youtube podcast and there is a recent
interview with Jim Keller [1]. Keller is a chip designer famous for his
involvement in multiple chips at Intel, AMD, and Apple, and he was a co-author
of the x86-64 instruction set. He also worked for Tesla.

There is a point in the conversation where Lex and Jim clearly disagree about
how "easy" self-driving AI should be. Lex is clearly pessimistic and Jim is
clearly optimistic. I have to admit I was more swayed by Lex's points than by
Jim's, but it is hard to discount someone so clearly (extraordinarily) expert
and working directly in the field.

1. https://www.youtube.com/watch?v=Nb2tebYAaOA

~~~
theresistor
Jim Keller went back to Intel in 2018.

~~~
reggieband
My mistake - I should have checked his bio rather than assume based on the
content of the discussion. I've updated my comment to change his association
with Tesla to the past tense. Thank you.

------
oblib
>some researchers have argued we won’t have widespread self-driving cars until
we’ve made major changes to our streets to make it easier to communicate
information to those cars.

It feels to me like those working on the concept are expecting that if they
keep adding sensors and twiddling with AI they can avoid that.

I get why. It's a vast and expensive undertaking that is out of their control
and they want to sell their product asap. But if we started with major city
streets and highways it could be a quicker and safer route to get it to
market.

Years ago (around the late '90s) I worked on an "Intelligent Traffic Systems"
project for the city of Branson, Missouri. The 3M Corporation demonstrated a
magnetic tape for street lines and a snow plow truck outfitted with sensors
that could detect the lines, connected to vibrators on each side of the
driver's seat. When the truck got too close to the line, the seat would
vibrate on the side it was close to. I got to ride in the truck for a demo of
the tech, and it worked well.

We also had street cameras that detected autos and could estimate speed and
traffic congestion. These sent video and data to the local 911 center. I
created a "traffic congestion map" that ran on a web server using that data,
and it worked pretty much the same way Google Maps shows congestion.

We need "smart streets" to really make this work. Without adding that to the
mix, corporations could be banging their heads against the wall, spending
billions of dollars, and never making it past the last mile.

------
KKKKkkkk1
In the debate over self driving, there are two schools of thought. One says
that you can't solve it without machine learning because it's impossible to
hand-engineer a system to cover all edge cases. The other says that you can't
solve it with machine learning because any solution must have zero glitches.
Both sides are correct.

------
krm01
It's really just a matter of time. Those who started out optimistic (when it
was hyped) but now say they don't see how self-driving cars can handle the
complex situations we humans supposedly can should have some patience. As the
tech improves and more data is used to train models, it'll surely surpass
humans in driving ability.

~~~
mark-r
I have this argument constantly. Humans make mistakes too, so we will
eventually have tech that makes fewer mistakes than humans. The problem is
that humans have a unique ability to make sense of situations they've never
specifically encountered before. No amount of training data will be able to
make up for that, because there will always be situations so rare that they
never made it into the data.

~~~
pasttense01
But what really matters is the overall statistics: the self-driving vehicle
doesn't know how to handle this one-in-10,000 situation, but in the other
9,999 out of 10,000 situations it is equal or better. Thus the self-driving
vehicle averages out to be a lot safer.
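
That averaging claim can be made concrete as a frequency-weighted expectation; whether it holds depends entirely on the numbers plugged in (all rates below are invented):

```python
# Overall crash risk = sum over situation types of (frequency x crash rate).
p_rare = 1 / 10_000            # the one-in-10,000 situation

human_common, human_rare = 1e-6, 1e-6   # assumed human crash rates
av_common,    av_rare    = 5e-7, 1e-3   # AV: 2x better normally, far worse
                                        # in the rare case it can't handle

human = (1 - p_rare) * human_common + p_rare * human_rare
av    = (1 - p_rare) * av_common    + p_rare * av_rare
print(f"human: {human:.2e}  AV: {av:.2e}")
```

With these made-up numbers the AV still averages out safer, but raise the rare-case failure rate another order of magnitude and the conclusion flips, which is why the actual statistics matter.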

~~~
mark-r
Everybody says this, but they simply take it as a matter of faith. I don't see
any statistics to back it up. The article itself points out that there simply
aren't enough self-driving miles at this point to make a valid comparison.

------
noonespecial
It still feels kind of like we're in the "Apple Newton" phase of self driving,
with the iPhone in the indeterminate future.

One day we'll look back and say "aww, they tried so hard with the limited tech
they had and got so close but what they needed to make it good just didn't
exist yet".

------
tanilama
Than expected? I am pretty sure none of the people who actually work in the
related industry expect self-driving cars to come out anytime soon. The hype is
created by media and investors, with intentions other than creating a viable
product.

Driving is simple but still logical. Though it doesn't seem to involve
processing as abstract as what language requires, that doesn't mean it is just
looking at the road ahead and making turns. Say an accident happens and the
road conditions are messy, so a two-way road is switched to alternate,
allowing one direction and then the other. How would an AI understand this? It
can't.

Current NN-based models have huge problems guaranteeing robustness, while
humans can be incredibly resilient against adversarial scenarios, because we
are super-fast few-shot learners.

------
ars
I've said it multiple times on HN, and I'll say it again: self-driving cars are
impossible until we have artificial general intelligence (and not the limited
AI that exists now).

The only exception might be specially instrumented road tracks, with limited
access (like train rails).

And yes, I fully expect such a thing on interstates. It will never happen
inside cities.

Want to make money? Design a vehicle agnostic system, primarily aimed at long
haul trucks. Install it in a ton of cars, but don't switch it on.

Then instrument some highways, and convince governments to allow only these
cars in those lanes. By being vendor agnostic, anyone with a car could get
this.

It will require deep pockets, and maybe you would need government to mandate
this (some kind of open standard).

------
zelly
Self-driving stops being a (difficult) AI problem if you force all cars to be
self-driving and network with each other. It converts the behavioral/theory-
of-mind problem into a distributed computing problem. We could have had self-
driving in the 80s with this approach. Imagine where we would be today if we
had decades of 24/7 trucks and ubiquitous robotaxis. It would be a completely
different world. It's a shame humans are so bad at cooperating. But still,
this is not out of the question today. Obviously the only way to pull this off
would be with massive government subsidies. It would be worth experimenting in
a small country like Luxembourg.

~~~
Barrin92
Relying on networked cars is _monstrously fragile_. One critical error in the
system is going to tank everyone; disconnecting from the system is going to
drastically reduce the safety of both the individual _and_ the system, as it
now has to deal with rogue elements; and the potential for malicious attacks,
like terrorism, or simply some freak natural event is hugely problematic.

I think the direction of the thought is reasonable though; you should just
take it a few steps further. If networked infrastructure is a good idea, then
maybe _cars are not a good idea_. We already have driverless, well defined,
organised modes of transportation; they're called trains.

Modern subway systems already pretty much drive themselves, and they also come
with the added bonus of not having everyone carry two tons of steel around.

~~~
zelly
Those concerns are legitimate. My proposal is not a purely centralized hub-
and-spoke network topology where one data center drives a million cars. Yes,
you can have the central brain too, but I'm thinking more of a mesh network
among nearby cars on the road. The network can be fully connected so that one
nefarious car cannot take down the whole mesh. There are a lot of
cryptographic safeguards to prevent bad actors too. Each car is able to drive
on its own, or pull over, only relying on networking for a stream of sensor
output from nearby cars. The upstream servers can go down without doing
anything worse than making all its client cars pull over and stop. All the
steering is done on the client side.

What if terrorists spoof phantom cars or rewrite maps to send people off
cliffs? Assume they stole the master signing keys, have root on the central
servers, exploited 0-days on the client car software, etc. One car misbehaves
somewhere, triggering sensors of nearby cars which tell every other car in an
N kilometer radius to pull over en masse. If anything, it is more robust to
hacking/terrorism than the independently-self-driving Tesla or Waymo approach
because those do not have the benefit of the herd. One gazelle in a herd who
gets tackled can yell out to save the rest of the herd.

> I think the direction of the thought is reasonable though, you should just
> take it a few steps farther. If networked infrastructure is a good idea,
> then maybe cars are not a good idea. We already have driverless, well
> defined, organised modes of transportation, they're called trains.

How do I take a train/subway from my apartment to the front door of a
McDonald's? People go from building to building. It's not practical to do this
without cars or buses outside of maybe 5 cities in the world like HK or NYC.

Also, there is an ungodly number of cars in the world. It's a lot cheaper and
more efficient to retrofit cars with self-driving modules than to recycle all
that metal into trains/subways.

~~~
Barrin92
> How do I take a train/subway from my apartment to the front door of a
> McDonald's? People go from building to building. It's not practical to do
> this without cars or buses outside of maybe 5 cities in the world like HK or
> NYC.

Mostly by taking the subway to the station nearest the McDonald's and
walking. I've lived in more than 5 cities without ever owning a car. Walkable
cities are the norm on this planet, not the exception; the US is very skewed in
that regard because it built most of its cities around the car, but that is
only a fraction of the world population.

Much more important for the future is what all the places that still have the
decision to make will do: whether they want to expand their usage of cars, like
the African continent and much of Asia, or invest in mass transit and build
their cities around alternative modes of transport.

------
jihoon796
Outside the vacuum of technology (can it be done?), it seems to me that the
role that policy plays when it comes to the actual rollout of self-driving
cars (should it be done and how?) is vastly underrated. This seems to be the
real bottleneck in mass adoption - improvements in technology will have
diminishing returns after a certain point.

For example, things like getting companies to agree to a unified standard at a
government/industry level & determining frameworks for liability all seem to
be as important (and perhaps difficult) as eking out another 0.00001% increase
in safety.

------
axguscbklp
Yes, harder than hype and deluded optimism expected. Plenty of people have
seen through the hype and delusion for years now. Level 5 requires AGI. AGI is
probably nowhere close to being realized - humans may well be centuries away
from reaching the necessary technological level. Humanity might never get
there at all. In any case, if humanity ever developed AGI, it would
revolutionize the world. Self-driving cars would be one of the most boring and
mundane applications.

~~~
viburnum
You’re right, but this is a religious conviction for nerds, like their belief
in space colonies. They flip out if you don’t play along.

------
agumonkey
It's only the end of a wave. The 50s saw some ideas, the 80s a few more, and
this one was a massive push [0]. I honestly don't think the road to SDVs is
long and winding; it's just the natural backpush of a bubble bursting, for now.

[0] I'm not even a fan of the idea anymore, btw, just trying to assess the
technical hurdles. Sensors have become numerous and cheaper, and compute power
is immense. The investment needed to make the next steps in 20-30 years won't
be higher.

------
psychometry
Any software dev worth their salt knew this was going to be a 90-90 problem.

~~~
AnimalMuppet
Oh, far worse. 90% of the effort will get you 90% of the way there. Another
90% of the effort might get you another 9% of the way there. But that last 1%
of "getting there" is going to take _many_ more 90%s of effort.
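A toy way to make that compounding explicit (the numbers are illustrative, not from any real project): if each additional "first 90%" unit of effort closes 90% of whatever gap remains, completion keeps converging but never quite arrives.

```python
# Toy model: each unit of effort closes 90% of the remaining gap.
def completion_after(units_of_effort: int) -> float:
    """Fraction of the project complete after the given effort."""
    return 1 - 0.1 ** units_of_effort

for units in range(1, 7):
    print(f"{units} unit(s) of effort -> {completion_after(units):.6f} complete")
```

One unit gets you to 90%, two units to 99%, and the final fractions of a percent each cost as much as the entire first 90% did.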

------
c3534l
There are problems where we know how difficult the problem is. These problems,
you can just throw money at them to make them work and the skill is in
figuring out how to do it cheaply. There are other problems where we don't
even know what we're missing. I see people, all the time, try to extrapolate
current progress on problems where we don't know what puzzle pieces we're
missing. Nobody knew what it was going to take to build self-driving cars, nor
do we know what it will take today, but the hype machine that sucks in funding
and grants and produces nothing pretended this was a problem we understood and
just had to throw money at. I've found that asking "do we know what we don't
know yet?" has been a surprisingly good way to cut through
the bullshit over the years. I'd say "I told you so" if anyone knew who I was
or had any reason to listen to me.

------
skywhopper
Billions of dollars of VC funding go a long way to convincing people of what
is possible, whatever the reality is.

------
SemiTom
It's one of the biggest engineering problems of the last 10 years: sensory
input issues, dealing with the unexpected behavior of outside influences, and
the need for mechanical and software components that the first two give
commands to and that can then faithfully execute them.
[https://semiengineering.com/challenges-to-building-
autonomou...](https://semiengineering.com/challenges-to-building-autonomous-
chips/) and [https://semiengineering.com/more-data-more-problems-in-
autom...](https://semiengineering.com/more-data-more-problems-in-automotive/)

------
krtong
If you could create instructions efficient enough for a computer to drive a
car, it always seemed like you had also just created the best instructions for
a human to drive a car, and a human with those instructions would always be
the better driver of the two. "Pay attention to this, prioritize these things,
brake when this occurs, turn when this occurs, the safest speed is this given
these parameters," and so on. You would think Silicon Valley would be obsessed
with learning how to drive really well, given the amount of time so many
engineers have spent on automating it. We should have our own F1/WRC team by
now, sponsored by these companies.

------
msoad
The more self-driving advances in the lab, the more safety features for
regular cars are invented, which in turn makes it harder for fully self-
driving systems to compete with human+safety features and justify their
value...

~~~
frosted-flakes
That's an excellent point. Why hand over complete control to a computer when
we can have both? Each has different strengths, so an almost-SDC with a human
at the wheel will surely be better than a true SDC. I think that's a much more
likely future.

------
Andrew_nenakhov
Lived in Russia and have some knowledge about programming. Unless some
breakthrough happens in general purpose artificial intelligence research, I do
not believe self-driving cars that can drive here are possible at all.

------
vgaldikas
>One study attempting to estimate the effects of self-driving cars on car use
behavior simulated a family having a self-driving car by paying for them to
have a chauffeur for a week, and telling them to treat the chauffeur service
the way they’d treat having a car that could drive itself.

>The result? They went on a lot more car trips.

That's kind of a pointless 'study'. Of course I will take a lot more trips for
some time after getting a chauffeur/self-driving car, just for the novelty of
it. One family for one week doesn't really tell you anything.

------
rafaelvasco
As the article said, we're in that awkward transition period. The problem is
that it puts people's lives at risk, even though most deaths could be avoided
if the drivers hadn't been so negligent: watching their smartphones while the
car AI operated the car, sleeping, etc. It's a turbulent transition, but the
tech is here to stay, and it'll not only keep improving, but the advancements
in AI due to self-driving research will surely benefit several other areas.

------
WhompingWindows
It's hard to fully understand the challenges of SDCs, because most of these
players are extremely secretive about their approaches. We rely on
disengagement data from California as a proxy, and that's just not great data
when so many of the players are doing thousands of miles in other states. It's
sad; I wish we knew more about the inner workings of these systems, and I
would love to see them collaborate for the benefit of society.

------
nunez
I have a feeling that self-driving will be one of those things that everyone
will be opted into. Self-driving becomes much easier if everyone is playing by
the same ground rules (with either the same set of cars or a common protocol
through which cars communicate). Trying to engineer away the entire problem
space of driving seems intractable (think: you have to engineer for drunk
drivers doing damn near anything on the road).

~~~
amanaplanacanal
Even if everybody in every vehicle (including motorcycles?) is forced to opt
in, you still have pedestrians, bicycles, scooters, animals, and who knows
what else on the roads. This feels like an impossible problem.

------
Naac
What I think is interesting is that none of these articles talk about one
specific implication of self-driving cars.

What happens to car-insurance companies?

Seems like with self-driving cars the form of car insurance we have now
wouldn't really be necessary. I expect car-insurance profits to decline. Isn't
there an incentive (read: lobbying) for car-insurance companies to discourage
self-driving cars?

~~~
Marsymars
In competitive jurisdictions, car insurance isn't a profitable venture per
customer - insurance companies make money on the float.

In other jurisdictions, auto insurance is provided by a single public auto
insurer - these rarely run a notable profit.

------
sunstone
Who would have thought that it might be tricky to replicate the capabilities
of an organism exquisitely designed by evolution over more than 10 million
years to dash through the jungle canopy with breakneck speed and split-second
coordination and decision making?

------
NohatCoder
I think car companies try to rely way too much on machine learning. You get
some promising results fast, but it is all inside a black box, both verifying
the correctness and changing what is wrong are almost impossible jobs.

Maybe machine learning can be used to tell the difference between a dog and a
plastic bag, but you'll need some hard code to describe how to react to
either.

~~~
ghaff
> but you'll need some hard code to describe how to react to either

My understanding is that's largely how it's done. The ML part is mostly about
recognizing objects. But the car doesn't "learn" how to drive. It's told how
to drive depending on what's happening in its field of view.

Which is why there are probably misperceptions about the importance of miles
on the road. It uncovers un-programmed situations, but it's not like the car
runs over someone and reinforcement learning leads to it not doing that next
time.

~~~
NohatCoder
There are some levels in between, like feeding an ML algorithm recordings of
human drivers and expecting it to learn how to drive from watching those.

------
puranjay
I've become increasingly jaded about the tech hype cycle. We had 3D printing,
VR/AR, crypto, and self-driving cars hyped to no end in 2-4 year cycles. We
were told that they were going to change our world in the next few years.

Turns out that the underlying technology was far from mature in each case and
the use cases limited to a handful of enthusiasts.

~~~
allhacks
I worked at a car marketplace company. The number of times people told us back
in 2014-2016 that Americans would no longer own cars in “2-3 years” and that
consequently our market would vanish overnight... btw, this group of people
included very respectable folks, many of whom have a lot of fans here on HN :)

~~~
wutbrodo
That's trivially stupid though; if a perfect SDC came out tomorrow, it would
take more than 2-4 years to turn over the vast amount of capital out there,
and probably another decade or two to shake people (and thus policy, in a
democracy) out of their status quo bias, no matter how terrible the status quo
is.

~~~
puranjay
You're going to take even more time to convince people to let go of the
control - or at least the idea of control - that comes from driving yourself.

------
axilmar
We could make driving fully automatic by going the train route, i.e. filling
roads with tracks and letting the cars move on those tracks only, fully
automated.

But that's not really sexy, is it? It just doesn't sell as well as fully
automated AI driving does.

------
scottlocklin
You mean the output of VC firms and Silly Con Valley marketing centipedes isn't
strictly true? Perish the thought! Next thing you know, you'll be telling me
the advocates of strong-AI, 3-d printing, Virtual Reality and Drones as
revolutionary technology might not be exactly accurate.

~~~
ghaff
I do wonder how much of the self-driving narrative at the peak of inflated
expectations was the result of

1\. Wild optimism fueled by how rapid the progress of ML had been in certain
domains over the course of a few years combined with a lot of hype and general
SV techno-optimism. (And the fact that a certain demographic so desperately
wanted a robo-chauffeur to drive them around.)

vs.

2\. Hypesters and scammers who knew it was mostly smoke and mirrors but it
didn't matter so long as they got their payday.

~~~
scottlocklin
In the case of 1, I'm certain a whole bunch of it is people getting high on
their own supply; aka, pay a marketer to hype your thing, your competitor does
the same, then you become afraid of all the progress your competitor has made
(which is imaginary marketing hype). I've actually seen this dynamic at work
in "AI" land; it's gotten me work.

I've even seen it happen within the same company: marketing dude talks to
engineer, gets it all wrong and exaggerates capabilities; then CEO demands to
know why the capabilities in his sales literature don't exist in the product.

I call it "human informational centipede."

Hypesters and scammers are always around; except in Theranos-type cases, they
mostly don't really move the needle. Pretty sure Irene Aldridge didn't change
perceptions of HFT much, for example.

~~~
ghaff
You definitely get into feedback loops. And if "everyone" is saying self-
driving is right around the corner you also begin to doubt your own skepticism
especially if you think others should be in a better position to know the
reality than you are.

To your first point, I agree there was something of a big game of topper going
on for a while. If anyone came out and said that they weren't going to have
production self-driving for 10 years (much less 20 or 30), a lot of people,
including on boards like this, would nod their heads sadly about how far
behind $COMPANY was compared to a certain other car company that was already
supposedly selling self-driving-capable vehicles.

------
joejerryronnie
Self-driving personal aircraft would probably be easier to implement than
self-driving cars.

~~~
mdonahoe
Definitely. The air is a universal medium... the ground is a mess.

------
m3kw9
There are lots of edge cases that humans can adapt to easily but would confuse
machines. The risk is that when one of those edge cases comes up and the car
goes full batshit crazy and kills someone, that will bring down the hammer.

~~~
m3kw9
Just look at parking lots and self summoning. Tesla is still in step 2 of 10

------
linksnapzz
Expected by whom?

Where did they find people who thought this was going to be easy, and more
importantly why were they given billions to screw around with after
grotesquely underestimating the complexity of the task at hand?

------
0x8BADF00D
There’s a huge bubble in the autonomous vehicle space. I foresee Waymo being
able to do it, only because I know some of the crazy smart people that work
there. It won’t be until the end of the decade though.

------
spiderfarmer
I have a feeling the adoption of AI will be like the early years of the
internet. Massive hype, then a massive crash, and then slowly it finds its way
into everything, fulfilling most of what was promised.

------
LoSboccacc
Harder than the pundits expected. Nothing in the news cycle ever warranted
such optimism. I've been a skeptic of the current tech long enough. They'll
come eventually, but they haven't yet even started to define the boundaries of
the problem domain, just piled on heuristics, and people literally died
because of corner cases that weren't covered.

Well, unless you want to toe the line of "that wasn't true autopilot", to
which I say: yeah, that's exactly the point, none of them is doing it.

------
vmchale
My impression (not in the field) was that Uber wanted to believe in this out
of wishful thinking, and then everyone scrambled because they thought they
were going to be left behind.

------
betoharres
Just like any other side project: "Oh, that's easy to build" _fast forward 10
weeks_ "Shit, this is harder than I expected"

------
kumarvvr
And they are just getting started in developed countries.

The real test for a self driving car would be on roads in third world
countries.

That would be a real test of capability.

Meanwhile, I just wish I could drive my car without moving my legs or hands so
often. Why can't somebody make a car that can be driven by, say, a simple
joystick-like thingy?

~~~
catalogia
Saab tried joystick control back in the 90s:
[https://en.wikipedia.org/wiki/Saab_9000#Prometheus_(prototyp...](https://en.wikipedia.org/wiki/Saab_9000#Prometheus_\(prototype\))

------
flyGuyOnTheSly
So do a lot of ventures but that's no reason to throw in the towel.

------
readhn
I think the best solution right now is very costly: have a separate lane for
cars equipped with self-driving tech.

Mixed traffic (human & automated cars) is a recipe for disaster.

------
ecolonsmak
As long as software is based on clumsy "if then else when for" statements and
computation relies on binary switches, self-driving cars will remain
impossible.

------
_pmf_
Expect massive push for goal post shifting.

------
jacquesm
Expected by who?

~~~
bnegreve
Apparently by General Motors, Google Waymo, Toyota, Honda and Tesla.

~~~
jacquesm
As far as I know, none of those except Tesla has made hard claims about being
able to deliver full autonomy in the near future.

------
fourthark
Puff piece with critical title.

------
fnord77
Near-human levels of AGI will be needed to achieve Level 5 AVs. Change my
mind.

~~~
mdonahoe
It depends on your definition of vehicle.

I bet we see a few Level 5 flying cars before we see Level 5 ground vehicles.
Roads are more complicated than the airspace.

Flying cars are unlikely to be as mainstream as cars due to energy
requirements though.

------
anewguy9000
no they didn't lol

unless you were a layman, but then, what are your expectations based on?

------
tjpaudio
To anyone with a detailed understanding of the current limits of machine
learning, this should be no surprise; unsupervised learning is far from solved
and ML in its current state will always be plagued by the cat and mouse game
of edge cases. The reality is the industry has decided to go this way anyways
because it has a good profit outlook if you can get it to work, which is the
only thing funding the endeavor. Consider two ways of going about automated
transportation:

1) AI. The car independently makes decisions and drives itself.

2) Networks. The car communicates with a grid to make decisions.

Why did we go for #1? Well, that's easy: capitalism. Consider:

AI:

- The company gets to own the intellectual property and form a temporary
monopoly.

- Easier to sidestep government involvement.

- No need to build large infrastructure.

Networks:

- Shared, so less opportunity for monopoly formation.

- Will need the government to cooperate. Governments are slow.

If I had to guess, we will eventually go the network route. The research used
for AI will drive safety features and failsafes, but not the meat of it.
Anyways, why accelerate a line of stopped cars one at a time with autonomous
vehicles when you could accelerate the entire line of cars simultaneously with
a networked setup?
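A back-of-envelope sketch of that last point, with made-up numbers: independently driven cars start one after another with a reaction delay, while networked cars can all be told to move at once.

```python
# Toy comparison: time until the last car in a stopped queue starts moving.
# Independently driven cars each wait for the car ahead plus a reaction
# delay; networked cars all receive the "go" signal simultaneously.
def last_car_start_time(n_cars: int, reaction_delay_s: float, networked: bool) -> float:
    if networked:
        return 0.0  # everyone gets the green at the same instant
    return (n_cars - 1) * reaction_delay_s  # delays accumulate down the line

print(last_car_start_time(20, 1.0, networked=False))  # 19.0
print(last_car_start_time(20, 1.0, networked=True))   # 0.0
```

Even with a modest one-second reaction delay, the 20th car in an independent queue loses almost 20 seconds per light, which is the throughput gap the networked setup is meant to close.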

~~~
streetcat1
So I do not think that this is an unsupervised problem. As a matter of fact,
this should be one problem where labeled data is not an issue.

I.e. just let humans ride the self-driving car, and record the human actions
as well as all the sensors (at time t-1).

Combine that with all the humans on road X, and you have a model for road X.

The problem here is that each road needs its own model, as well as a model per
time of day.

Another problem is that each company will not share its supervised data.
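A sketch of the labeling scheme described above (the names are illustrative, not from any real system): the sensor snapshot at time t-1 becomes the input, and the human driver's action at time t becomes the label.

```python
# Build supervised (input, label) pairs from aligned sensor and action logs:
# the sensors at t-1 predict the human's action at t.
def build_training_pairs(sensor_log, action_log):
    """sensor_log[t] and action_log[t] are aligned by timestep."""
    return [(sensor_log[t - 1], action_log[t]) for t in range(1, len(action_log))]

sensors = ["frame0", "frame1", "frame2", "frame3"]
actions = ["idle", "accelerate", "steer_left", "brake"]
pairs = build_training_pairs(sensors, actions)
# -> [("frame0", "accelerate"), ("frame1", "steer_left"), ("frame2", "brake")]
```

This is essentially behavioral cloning: no manual annotation is needed because the human's own actions serve as the labels.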

------
sjburt
Than who expected? The criminally credulous tech media?

