
Self-driving cars are headed toward an AI roadblock - devy
https://www.theverge.com/2018/7/3/17530232/self-driving-ai-winter-full-autonomy-waymo-tesla-uber
======
jtbayly
Waymo is the only question mark in my mind, the one thing making me doubt my
bet that we're a decade or more away from usable self-driving cars.

If Waymo has truly solved the problem while everybody else is just trying to
catch up and/or bluffing, I guess I wouldn't be that surprised.

Edited to add: The reason is that they've apparently ordered tens of thousands
of these cars. Can they really use that many only in good-weather places with
perfect roads? It seems unlikely that they are stupid enough to spend that
much money without being confident that their plan will work. One last option
would be that they intend to put tens of thousands of them on the road just to
dramatically speed up the data collection that they've decided is necessary
to truly get the system up to par.

~~~
rbranson
I tend to agree. Waymo ordered 62,000 Chrysler Pacificas with delivery slated
for late 2018. Base retail on the hybrid model is $40,000 USD. Even if they got
an unbelievably generous 50% discount, that means they're $1.24B USD serious,
excluding the cost of the additional autonomous driving equipment. So this
will either be one of the biggest debacles in Google/Alphabet's history, or
they really are ready to go.
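Spelled out as back-of-envelope arithmetic (the 50% discount is purely my
guess, and the autonomy hardware is excluded):

```python
# Rough cost of the Pacifica order; the fleet discount is an assumed figure.
vehicles = 62_000
base_retail_usd = 40_000
assumed_discount = 0.50  # hypothetical generous fleet discount

order_cost_usd = vehicles * base_retail_usd * (1 - assumed_discount)
print(f"${order_cost_usd / 1e9:.2f}B")  # $1.24B
```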

~~~
Fricken
Waymo has also ordered 20k Jaguar I-Paces. Rumour is they have invested about
$11 billion in R&D so far, and I'm crudely estimating the cost of their 80k
upcoming robotaxis to be around $12 billion. Alphabet is committed.

Waymo estimates about 50 trips per day will be served per vehicle. At that
rate, 80k robotaxis is enough to displace most of the taxi/rideshare services
in the southwestern United States, though I'm not expecting to see all of them
on the roads until 2022 or so. Waymo has a ton of work ahead of them. Mapping
and validating in the ~100 square mile area of Chandler has taken them about
18 months so far, and supposedly they are on the verge of a commercial launch.
Every city they hope to deploy in has differing signage, differing driving
habits, and many funky intersections and road anomalies that need specific
attention.
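Putting my crude estimates into numbers (all of these are my guesses, not
Waymo figures):

```python
# Back-of-envelope on the rumored fleet; every figure is an estimate.
pacificas = 62_000
ipaces = 20_000
fleet = pacificas + ipaces                 # the "~80k robotaxis" above
est_fleet_cost_usd = 12e9                  # my crude cost guess
trips_per_vehicle_per_day = 50             # Waymo's stated estimate

per_vehicle_usd = est_fleet_cost_usd / fleet
daily_trips = fleet * trips_per_vehicle_per_day
print(fleet, round(per_vehicle_usd), daily_trips)  # 82000 146341 4100000
```

About 4 million trips a day is the kind of volume that makes displacing
regional taxi/rideshare plausible, if the mapping and validation work scales.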

~~~
rbranson
In this scenario there's no moat for Lyft/Uber either. If Waymo can meet
demand, they'll be able to undercut ride sharing prices so deeply that most
consumers will switch overnight. Banks will be falling over themselves to
provide the massive credit they'll need to bankroll the expansion.

~~~
CPLX
> they'll be able to undercut ride sharing prices

Why?

I see comments like this constantly. But does anyone ever do even a basic
spreadsheet to explain the unit economics of this argument?

Low-skilled people who can drive are pretty cheap and plentiful. Even if self-
driving tech is flawless (spoiler: it's not and won't be soon), it still only
replaces part of their responsibilities. Someone will still have to clean the
cars, for example. Presumably there will be some required level of human
monitoring, etc.

Conversely, capital and highly reliable technology isn't free. You can
calculate pretty easily the rate at which it's profitable to substitute
technology for labor, this is a trade-off we've literally been making for
centuries.

On one side you have plentiful, cheap, low-skilled labor. On the other you have
lasers, fast computers, graphics cards, cameras, and the associated
programming, inspection, and maintenance costs.

Why do we think the latter side is going to be cheaper in anything remotely
like the near term?
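To make that trade-off concrete, here is the kind of basic spreadsheet I mean,
as a sketch; every number in it is an illustrative placeholder, not a sourced
figure:

```python
# Illustrative break-even: human driver vs. amortized AV hardware.
# All inputs below are made-up placeholders; plug in your own.
driver_cost_per_hr = 18.0        # wage + overhead (guess)

av_hardware_usd = 100_000        # sensors + compute (guess)
amortization_years = 5
utilization_hrs_per_day = 16
av_support_per_hr = 6.0          # cleaning, monitoring, maintenance (guess)

av_capital_per_hr = av_hardware_usd / (amortization_years * 365 * utilization_hrs_per_day)
av_total_per_hr = av_capital_per_hr + av_support_per_hr

print(f"human: ${driver_cost_per_hr:.2f}/hr, AV: ${av_total_per_hr:.2f}/hr")
```

Under these made-up inputs the machine side wins on paper, but nudge the
hardware cost, utilization, or support burden and it flips, which is exactly
why the math has to be done rather than assumed.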

~~~
abathur
I don't disagree that some may be over-estimating how much cost can
pragmatically be cut, but I also think you may be glossing over some of the
costs and inefficiencies that come with operating a large heterogeneous
operation composed of individuals and personal or rented vehicles.

A company deploying large mostly-homogenous autonomous fleets may be able to
benefit from:

    
    
      - we probably don't need to tip AVs
      - economies of scale on obtaining vehicles, fuel, maintenance, and cleaning
      - tuning maintenance/cleaning schedules across a fleet towards keeping vehicles in service for longer
      - may be able to perform cleaning/maintenance during off-hours, vs owners who'll often have to trade 
        working hours to do these tasks
      - autonomous systems may drive (and be tuned towards) in ways that also help preserve long-term vehicle 
        value and minimize costs
      - lower insurance, legal, and PR costs if they outperform human drivers, don't molest/murder 
        passengers, etc.
      - minimizing costs around acquiring and managing a human workforce
      - there are probably many small ways to optimize the positioning and functioning of an AV fleet that
        just won't work with a large contractor fleet
    

That said, the potential for a lot of these savings depends on current prices
actually reflecting these components. It may very well be more expensive to
perform some of these activities, no matter how efficiently, than to
exploitatively externalize their costs onto drivers and riders.

~~~
abathur
I haven't looked for work on the price/demand relationship here, but it's also
worth noting that price reductions may not purely be a matter of cutting
costs and passing those on at a static ridership and profit level. There are
likely to be some price points that profoundly change user behavior.

To give an example, my favorite coffee shop is about 7 minutes (+wait) from my
apartment by car either way, and around 18-26 minutes (+wait) by bus (shorter
there, longer back). Without a ride pass and with tip, it's probably around
$7-8 to Uber this one way, vs. $1.25 for the bus.

At this price, I'll usually only Uber if heat or rain make getting to and from
the bus miserable. I have a ride pass atm that knocks this down to around $6-7
with tip, which makes me marginally more likely to take Uber, but it isn't my
default. I took the bus this past Sunday morning, planning to get some open
source work done, but I'd forgotten my laptop wasn't in my bag. The time/sweat
cost of the bus round trip and the money cost of the Uber round trip were high
enough that I just sat and read a book instead.

I'm not sure exactly where, but somewhere between ~$2-5 total, I'd probably
default to taking Uber both ways. Down at the low end of that range, I even
would've gone back for the forgotten laptop.

------
Animats
Another bad article.

As I keep pointing out, the way you start to do automatic driving is by first
profiling the terrain to see where you can go. If it's not flat road, you
don't go there. Doesn't matter why it's not flat. Then try to classify other
road objects and predict their behavior. This only matters for moving objects.

Waymo gets this, as we know from Urmson's talk at SXSW a few years back. Most
of the DARPA Grand Challenge vehicles got this, because they had to drive off-
road, where you have to profile terrain or else.
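A toy sketch of the profiling-first idea (the names and threshold are
invented), assuming you already have a grid of height samples relative to the
expected road plane:

```python
# Toy terrain profiling: mark a cell drivable only if the measured
# surface stays within a tolerance of flat road.
# Grid values are heights in meters relative to the expected road plane.

TOLERANCE_M = 0.10  # anything protruding or dipping more than 10 cm is a no-go

def drivable_mask(height_grid):
    """Return a parallel grid of booleans: True where we may drive."""
    return [[abs(h) <= TOLERANCE_M for h in row] for row in height_grid]

grid = [
    [0.00, 0.01, 0.02],
    [0.00, 0.45, 0.01],   # 45 cm object: doesn't matter why it's not flat
    [0.01, 0.02, 0.00],
]
mask = drivable_mask(grid)
print(mask[1][1])  # False: we don't go there
```

The point is that the drivability decision never asks what the object is;
classification and behavior prediction come later, and only for things that
move.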

Tesla does not get this. Cruise may or may not get this. Uber - well, Uber's
system detected the pedestrian and ran into her anyway, which should end with
someone in jail.

There's this mindset that you just throw deep learning at camera images and
automatic driving comes out. Musk claimed that. It didn't work. We don't hear
much from Tesla about self-driving any more. Udacity's self-driving course is
also deep learning based.

As for how much testing is required, read the California DMV accident
reports.[1] This gives you a sense of what the real-world problems are. 25
minor accidents so far this year. Mostly Cruise. The most common problem,
especially with Waymo, is being rear-ended while cautiously entering an
intersection with limited visibility. Their system will start forward to get a
better view, then detect cross traffic and stop. What may help there is some
convention such as rapidly flashing the brake lights when a sudden stop is
likely and there's a vehicle close behind.

On the LIDAR front, Continental's flash LIDAR is already working well enough
that drone makers are buying it. Continental is ready to produce that thing in
volume, but they need volume orders from automakers before the price comes
down.

[1]
[https://www.dmv.ca.gov/portal/dmv/detail/vr/autonomous/auton...](https://www.dmv.ca.gov/portal/dmv/detail/vr/autonomous/autonomousveh_ol316+)

~~~
zawerf
> The most common problem, especially with Waymo, is being rear-ended while
> cautiously entering an intersection with limited visibility. Their system
> will start forward to get a better view, then detect cross traffic and stop

This actually highlights a really hard problem that (perfect) self driving
will need to solve: theory of mind.

[https://en.wikipedia.org/wiki/Theory_of_mind](https://en.wikipedia.org/wiki/Theory_of_mind)

Not only does the AI need to empathize with and predict how typical humans
react, it also needs to be easy for humans to empathize with. Humans typically
use themselves as a template for a reasonable range of reactions. So when the
capabilities don't match up (like needing to scoot up to work out how to cross
where a typical human wouldn't need to), it will violate their expectations of
you. So in this case, being cautious comes across as crazy and erratic behavior
to a human. (e.g., "what were you thinking?! why did you stop!? shouldn't
you have seen that the road was empty?!")

~~~
ScottBurson
The only time I've ever been rear-ended was that way — pulling out of a
parking lot where the street was at a slightly higher level, so I couldn't see
around the parked cars on either side of the exit. I started to pull out
rather quickly, but then was forced to hit the brakes, and the person behind
me — well, I don't know where they were looking.

I guess that's not quite the same scenario, though, since the person behind me
couldn't see the cross traffic either; they were just cueing off my own
behavior to conclude that it must be clear.

Anyway, you're right: good driving isn't just predicting what other people are
going to do; it's also making sure your own behavior gives _them_ the correct
idea about _your_ intentions.

------
comesee
> “Rather than building AI to solve the pogo stick problem, we should partner
> with the government to ask people to be lawful and considerate,” he said.
> “Safety isn’t just about the quality of the AI technology.”

Wow. What a goalpost move. I don't think this is how successful technology
spreads. Governments usually accommodate tech because it proves itself in the
market, not vice versa.

~~~
microcolonel
Also, good luck marketing a product which only works when everyone is lawful
and considerate, and otherwise has a considerable chance of killing occupants
or pedestrians.

~~~
adrianN
Cars that you can buy today are such a product. As are kitchen knives.

~~~
comesee
Not in the same way; there is accountability when a driver or knife wielder
uses them dangerously. He's arguing for putting the accountability on
pedestrians, which would be the equivalent of putting accountability on a
murder victim. "Hey, it's not the murderer's fault, the victim shouldn't walk
around in bad neighborhoods."

------
adrianN
The crashes that the article claims came from edge cases that the engineers
didn't account for weren't, in fact, edge cases. The Tesla crashes are due to
the way Tesla's system works: it doesn't recognize stationary hazards.
Similarly, Uber's system failed to brake for a hazard that it could clearly
see. That's not an edge case; it doesn't have to know that the object is a
woman pushing a bicycle. It's a solid object moving steadily into the car's
trajectory.

~~~
brookhaven_dude
Edge cases are what really matter in this case!

Isn't there a saying that 80% of the code is written to handle 20% of the
cases?

~~~
ltdanimal
I think you have that flipped. 20% of the code covers 80% of the cases. As you
try to get to 100% coverage of all possible scenarios, the complexity gets in
some cases exponentially harder as all the low-hanging fruit is picked.

~~~
the-dude
So he is exactly right: 80% of the code handles edge cases. You are the one
who has it flipped.

~~~
cozzyd
20% of code covers 80%. 80% of code covers 19%.

------
dawhizkid
As someone who worked at a ridesharing co, I'm also really curious about the
ops side of things, like how will passengers behave when there's no driver in
the car? Will people try to have sex, do drugs, make a mess? What happens when
someone vomits in the car or is very drunk (actually a very common problem in
ridesharing)?

~~~
mulletbum
They change the interiors of the car to accommodate this. There is a reason
public transportation doesn't feel like my Ford Fusion. It is made for lots of
different butts to be in, and if a mess is made they come in and spray it
out. A bus, for example, can be cleaned with a pressure washer (sometimes inside
and out). Seats, headrests, etc. can all be attached with snaps or simple
screws. Anything stolen can easily be monitored, and the last occupant can be
charged for it. There are a lot of ways to get this done and make it extremely
cheap and durable.

------
Zhenya
I'm continually astounded by the Tesla hasbara:

"Tesla and a host of other imitators already sell a limited form of Autopilot"

Imitators? You mean GM, Mercedes and VAG that have had these systems for
longer than Tesla are just bad copies of the glorious Tesla autopilot?

Unreal.

~~~
freerobby
All due respect, have you used any of the Autopilot clones? There are endless
debates about what's better on paper, but everybody I know who's actually used
them says Autopilot is much more mature. Comma.ai seems to be right on their
tail, but it's an aftermarket system requiring a fair bit of DIY (so it's not
very accessible yet for most end users). Tesla was first to market with auto
steering, and seems to have the most mature offering, so framing them as the
market leader seems pretty accurate to me.

~~~
mdorazio
As someone who has used Autopilot and many of its "clones" as well as worked
with several OEMs on these features, Tesla's offering is not more mature, it's
more irresponsible. Autopilot is a combination of ADAS features that many
other OEMs have had for a while and deliberately chosen not to tie together
due to safety concerns. These concerns have been validated multiple times now
by Autopilot errors. The "clones" are deliberately less good because the
manufacturers offering them care more about safety and reliability than about
marketing beta products.

~~~
freerobby
How mature a system is and whether it's morally responsible (or imperative) to
release it are different questions.

> The "clones" are deliberately less good because the manufacturers offering
> them care more about safety and reliability than about marketing beta
> products.

When I say "maturity," I'm not talking about holding back features. I'm saying
that Tesla's system works better in situations where other manufacturers offer
similar features. For example, how many other manufacturers' TACC will slow
down (below the set limit) when a turn is coming up and then accelerate
through the turn as a human is taught to do? How many autosteering systems
will sense an adjacent car drifting from its lane and "creep" away from it to
avoid a possible collision? These are the sorts of mature behaviors that put
Autopilot ahead of the pack and make it much more real-world useful than its
competitors.

On irresponsibility, I passionately disagree. But rather than rehash a bunch
of old discussions I'll just link to prior comments explaining why I feel the
way I do, and we can debate it further in those comments if you're interested.

[https://news.ycombinator.com/item?id=17302164](https://news.ycombinator.com/item?id=17302164)

[https://news.ycombinator.com/item?id=17233845](https://news.ycombinator.com/item?id=17233845)

[https://news.ycombinator.com/item?id=16773903](https://news.ycombinator.com/item?id=16773903)

[https://news.ycombinator.com/item?id=17236608](https://news.ycombinator.com/item?id=17236608)

~~~
mdorazio
Ok, we have different definitions of maturity. To me, it means your system's
core functionality works reliably in the use cases it is designed for. Nice-
to-haves like speed smoothing are secondary.

Thanks for the links to your other comments. I don't think there's fruitful
debate to be had here - we're just of different opinions. I think autopilot
encourages bad driving behavior in drivers who would otherwise be more alert,
and if I've read your comments correctly you think the responsibility is
nevertheless still on the drivers and autopilot probably helps more people
than it harms. I'd love to revisit this if we ever get apples to apples
accident data showing comparisons between autopilot-enabled vehicles and
comparable vehicles from other manufacturers in the same price and age range
(thus with corresponding modern and expensive ADAS systems).

~~~
freerobby
> Ok, we have different definitions of maturity. To me, it means your system's
> core functionality works reliably in the use cases it is designed for. Nice-
> to-haves like speed smoothing are secondary.

I agree with this definition, but I see it more as a sliding scale than a
checkbox. I think it comes down to whether you look at each system in
isolation or whether you put them all on a single spectrum. I'm imagining
human driving at one end of a spectrum and fully automated driving at the
other, and looking at what percent of driving conditions each system provides
a safety or utility improvement in (so you can see why features like speed
smoothing and reacting to nearby cars quickly become relevant to my thinking).
But I can see why, judging each system on its own, one would say that
Autosteer is less mature at what it aims to do than simpler and more
constrained systems are at what they aim to do.

You summarized my take on Autopilot as accurately as one can in a single
sentence (thank you for that btw). Like you, I look forward to the day when
data is available to resolve competing hypotheses like these. Elon pledged on
the last earnings call to release quarterly Autopilot data reports; I'm hoping
those will include Autopilot vs non-Autopilot driving usage and their
respective highway accident rates, broken down by feature (TACC, Autosteer,
TACC + Autosteer, etc). I think that'd be the best way to assess Autopilot
safety, as it would control for all other factors. It'd be great to have
apples-to-apples numbers from all manufacturers, but I think that will take
longer and probably some coordination from NHTSA.

------
samirparikh
I work on self-driving cars. Imho, companies need to collaborate on certain
aspects like perception instead of competing. The vehicle is basically blind
beyond the range of its sensors, or whenever it misses a detection.
Collaborating on these issues would lead to faster progress than competing.

~~~
chronic821
> Imho, companies need to collaborate on certain aspects like perception
> instead of competing.

I also work on self-driving cars. Why would I share technological innovations
with another company?

~~~
TaylorAlexander
So people don’t die.

This is an example where companies could put people over profits and share
what they’ve learned to keep things safe. I don’t see a lot of research papers
(I see none) coming out of Waymo, Tesla, Uber, or Cruise.

~~~
samirparikh
This is another form of collaboration. You could publish your innovations in
some technical forum w/o code and it'll automatically drive the industry
forward since SOTA results will be available.

Right now, no one really knows the performance of the Waymo system. Millions
of miles doesn't mean a lot. I can do billions of miles in a parking lot with
no cars or pedestrians. Even millions of miles on road doesn't say anything
about the performance of the components.

------
intopieces
The story of computation in general — “The Rise of the Machines” — has been
more about humans becoming more like machines than the other way around. The
question is not “will computers ever be able to drive as well as a human?”.
It’s “Will humans remake their driving systems to fit a sufficiently
intelligent computer-driven model?” I believe the answer, as the US ages and
falls below replacement rate, is yes. We are already fully invested in the age
of benevolent machines, and there is no turning back.

------
georgeecollins
Rodney Brooks gave an interview that was recently reposted on Ars Technica. He
has so much real-world experience productizing AI and robotics that no one
else really compares. He is not pessimistic, just realistic, about self-driving
cars. And he has great reasons why true autonomy is going to take a long time.

~~~
totaldick
Rodney Brooks has a history of skepticism about AI (said in 2009 that he
doesn't think AI is even possible), as does Gary Marcus (who's been anti-deep
learning since 2011). "AI is hype" hit pieces always quote them.

~~~
georgeecollins
I assume you are trolling, but in case someone misses your user name: To say
that the director of the MIT Artificial Intelligence lab for ten years, and the
founder of a successful AI workstation company and the most successful
robotics company in the world, "has a history of skepticism about AI" is
idiotic.

~~~
totaldick
Not trolling. I don't think you know Brooks well enough; I've been following
him since the 90s. Rodney Brooks is famous for his subsumption architecture
and disregard of statistical methods. Because of the failures of Marvin
Minsky's GOFAI (Minsky being the previous director of MIT AI), he went in the
opposite direction completely and became highly skeptical of AI while serving
as MIT AI head. This was during the AI winter days of the 90s.

~~~
georgeecollins
Why create a name "totaldick" to make a single post if not to troll? My
response would be: You don't head the most prestigious AI lab in the world
with pure skepticism.

I think you have elaborated on your true objection, his past skepticism about
statistical methods. I don't think that equates to a blanket skepticism. I
also refer you to the talk in my original post. He points out that there are
breakthroughs and pitfalls that are unanticipated. He outlines specific
pitfalls to autonomous driving that he feels no one has adequately addressed.

~~~
totaldick
This is my only account, just signed up recently, and the name was just spur
of the moment. Sorry if it comes across as trolling, that wasn't my intention.
I just know a bit about Rodney Brooks, am a fan of his work, so I gave my 2
cents.

~~~
dang
I believe that you're not trolling but could you please email us
(hn@ycombinator.com) with a clearly-non-trolling username that we can replace
"totaldick" with? A name like that amounts to subtly trolling other users with
every post you make.

------
dsnuh
This is tangentially related, and something I have wondered, so I will ask it
here.

Does anyone know how self driving cars will react to attempts by law
enforcement officers to pull over the vehicle?

~~~
vamin
They will pull over.

~~~
devy
Another attack surface for self-driving cars

~~~
bryanlarsen
Sure, if you're willing to risk going to jail for impersonating a law
enforcement officer.

~~~
dsnuh
Fair enough. I'm sure some would be willing to do that, but likely not many. I
do think it is going to be very interesting to work out some of the scenarios
if we do get to Johnny Cab style self driving cars.

~~~
bryanlarsen
If you're willing to risk jail, disrupting traffic is ridiculously easy, self-
driving cars or not.

------
matthewaveryusa
One major aspect a lot of comments aren't capturing is that autonomous
vehicles create a platform that goes beyond moving people around. Forget about
the passengers during peak demand, that's already a given -- what about all
that cargo that can be moved either with passengers or during off-peak demand
hours?

------
mactitan
> “A decision was made to do nothing based on ambiguity in perception, and the
> emergency braking was turned off because it got too many false alarms from
> the sensor”

So instead of solving this problem it was decided to go forward; abominable. At
least the truth came out.

------
tonyquart
Talking about self-driving cars, I'm always interested in the legal aspect of
these cars: what's the regulation, what are the driver's (or passenger's?)
responsibilities, and what are the automakers' responsibilities? I have just
read some nice information related to this matter at
[https://www.lemberglaw.com/self-driving-autonomous-car-
accid...](https://www.lemberglaw.com/self-driving-autonomous-car-accident-
injury-lawyers-attorneys/). It's always interesting to talk about this future
technology.

------
n_ary
The more I read about AVs, the more I get the idea that these AVs need to
think, to some extent, like a human driver. My notion is that AV research
focuses blindly on making individual cars think too much. I'm curious what
happens if the thinking and data collection are distributed instead, like
swarm intelligence: each nearby AV broadcasts what it sees and where it is
(extending the vision of each car in a given location beyond its own sensors).
If the front car suddenly brakes, has an accident, hits a roadblock, or
switches to human intervention, all following cars negotiate whether they
should bypass (the reporting car will be parking) or wait (the reporting car
will use another lane). If a car is at a blind turn, oncoming traffic from
both sides can negotiate whether this car should attempt the turn because the
others are far enough away or slow enough to let it turn safely, or should
wait until they are gone (high speed, so not enough time for the turning car).
If a road is very confusing, all following cars can switch to or request human
intervention with ample time, etc. Given how cellular phones switch nicely
between different towers and areas, we could use similar tech to allow all
nearby cars to communicate: that is, communicate, coordinate, and decide. Now,
this has some issues, like privacy, drivers with bad intentions feeding bad
data to cause mass confusion, the legitimacy of data received, etc., which
could be an interesting place to borrow some ideas from blockchain for
verifying legitimacy. I think that instead of focusing too much on ML, trying
to incorporate other techs which solve minor but similar issues is a good idea.
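As a toy sketch of what I mean (all the names and message fields here are
invented by me, not taken from any real V2V protocol):

```python
# Toy V2V sketch: each car broadcasts what it sees; a neighbor merges the
# reports to see past its own sensors. All types and names are invented.
from dataclasses import dataclass

@dataclass(frozen=True)
class Report:
    car_id: str
    position: tuple       # (x, y) of the reporting car
    hazards: frozenset    # (x, y) cells the car has flagged

def merged_view(own: Report, broadcasts: list) -> set:
    """Union of hazards seen by us and by every nearby car."""
    view = set(own.hazards)
    for report in broadcasts:
        view |= report.hazards
    return view

me = Report("car-1", (0, 0), frozenset({(5, 2)}))
ahead = Report("car-2", (30, 0), frozenset({(40, 1)}))  # beyond my sensors
print(merged_view(me, [ahead]))  # now includes (40, 1)
```

The hard part, as noted above, is trusting the incoming reports, not the
merging itself.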

Disclaimer: I do not claim to have full/partial/enough understanding of any AV
technology/blockchain/distributed computing/swarm intelligence etc., so my
comment is more or less an opinion based on what I think may be interesting,
which may or may not already have been considered and discarded as not viable
enough by AV researchers.

------
jonplackett
QUESTION: Is this really an AI problem, or is it a sensor problem?

I mean, if there’s a big blob of something solid in the road in front of you,
it doesn’t really matter what it is, does it? You still have to not run into
it.

It seems that because these poor AIs are relying on LIDAR (or, with Tesla,
just radar and cameras), they aren’t confident enough in that data to stop
when they sense a big blob of something in the road.

If you gave any of these AIs a really high quality 3D rendering of the area
around them I would presume they’d do a damn fine job of navigating it and not
running into anything. At that point it’s just a computer game.

So either we need better sensors or better algorithms for extracting better 3D
data from the existing ones.

It’s not like these fatalities are caused by particularly crazy problems. It’s
not that a kid ran behind an ice cream van and the AI couldn’t predict it
would run into the road like a human would. It’s just that they make dumb
decisions a learner driver wouldn’t make because they can’t see the road
properly.

------
pwaai
Well, thankfully the Google CEO is not some super-salesman like Elon Musk, who
is trying to emulate Steve Jobs and failing hard with Tesla.

I would bet the farm on Waymo. They realized it takes a long time to build a
level 4 autonomous vehicle, which is no doubt why Uber tried poaching their
people, attempts which all failed.

Likely the problem is one of numbers. The sheer amount of data, plus the
traffic data Google has access to, all of this adds up to a superior safety
experience.

Meanwhile, a level 2 system masquerading as a level 3 autopilot should be up
for some serious scrutiny, especially if it's true that there appears to be
some issue with the autopilot on the Tesla.

------
alkonaut
The question isn’t when complete autonomy arrives, but how much autonomy we
will add before the cost of adding more is vastly greater than having on-call
humans take over. At that point the ROI of the research will fall, as all you
gain in going from 99.05% to 99.10% autonomy is a few employees across a whole
fleet of vehicles.

I agree with the pessimists: driving is 99% a mindless activity that even a
mediocre AI will handle soon. But the last percent requires not just a better
AI but a human. At least as long as it drives with other people in an
environment designed for people.
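A made-up fleet illustrates how small that marginal gain is (every number here
is hypothetical):

```python
# Hypothetical fleet: how many on-call human hours does a 0.05-point
# autonomy gain actually save?
fleet_size = 1_000
driving_hrs_per_car_per_day = 20

def operator_hours(autonomy_fraction):
    # Hours per day somebody must take over, fleet-wide.
    return fleet_size * driving_hrs_per_car_per_day * (1 - autonomy_fraction)

saved = operator_hours(0.9905) - operator_hours(0.9910)
print(round(saved, 2))  # 10.0 operator-hours/day: roughly one shift, fleet-wide
```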

------
the-pigeon
This article talks about issues with deep learning, then admits it's not used
on these projects, making most of the article not relevant to self-driving
cars despite being about them.

~~~
mcherm
Despite PURPORTING to be about them, probably because news about self-driving
cars tends to get eyeballs and Hacker News upvotes.

------
tempz
What is the most optimistic benefit of a driverless car?

If we put aside creation of unemployment, putting limits on what human drivers
will be able to do, and similar dystopian outlooks, what are the benefits for
the masses?

People who cannot drive will be able to use cars just like people who can. Is
this good? Are interesting places on the planet going to suffer the same fate
as the Internet, which transformed from elite audiences in the 90s to a
tragedy of the commons today?

------
zwieback
How much better than human drivers do self-driving cars have to be to succeed?
If you take a metric like fatalities or damage per mile I'm guessing we ask
our robots to be much better than humans, even though it doesn't really make
sense. Over time the differential will come down but a better intermediate
strategy would be to keep humans away from robot cars until collision
avoidance techniques improve substantially.

~~~
carapace
I had this argument the other day with someone else.

 _You can't make a robot that goes out in public and kills random people._
You just can't do that. Okay?

Q: "What if the robot looks just like a car and there's a person inside the
robot watching TV? Can it kill random people then?"

Still no.

Q: "But _people_ do that all the time! Tens of thousands of people die or are
maimed by traffic collisions every year! My robot can't join the fray?"

Still no. It's insane and horrible that we set up a death and mayhem lottery
and that we force everyone-- children, old people, pregnant ladies, folks in
wheelchairs, ev-er-ee-one -- to play it whether they want to or not. That's a
bad thing we shouldn't do. But it still doesn't make it okay for you to make a
robot that goes out in public and kills random people.

If you make a robot, send it into the world, and it kills someone, you are a
murderer and you should go to jail, _even if your robot looks like a car and
there's a person riding inside it._

\- - - - -

We could make a self-driving _golf cart_ that never went fast enough to be
able to injure people and it could take my mom (who has dementia and can't
drive or ride the bus on her own anymore) to the doctor and back safely. We
could build that today, with existing technology, there's a market for it, and
it wouldn't kill anyone.

Self-driving cars are a fetish that distracts from solving real problems! (One
day that won't be true, let's not kill too many more people until then, hey?)

\- - - - -

Humans drive really well. Like really really crazy good well. When I first
realized how well people drive it made me consider that Guardian Angels might
be a real thing. But then I learned more about how the motor cortex worked and
some of the uncanniness faded.

We should be so lucky as to have robots that drive as well as we do.

In the meantime, the industry should concentrate on incrementally automating
traffic and stop trying to bite off more than current technology can chew.

What we're seeing now is hubris driven by ego and greed. And it's killing
people.

------
gwbas1c
When will we have self-driving cars?

When we can get rid of recycle bins, because we've finally figured out how to
get our robots to sort our trash for us!

~~~
tsycho
I know that the actual system is currently terrible in many places, wherein
recycling and trash are mixed together and re-sorted by humans. Ignoring
that for the moment...

There are some practical reasons for recycle bins, or at least for separating
out paper. Paper can get damaged if mixed with liquids or other
food/compostable material, making it harder or not worth recycling. Keeping it
separate from the very beginning increases the ability and efficiency of
recycling => fewer dead trees.

------
frgtpsswrdlame
This has been hanging out there for a while; it's just that it never got much
mainstream press. I wonder what it is specifically about the concept of "AI"
that makes it so prone to the magical thinking that drives these boom-bust
cycles.

 _" I tell adult audiences not to expect it in their lifetimes. And I say the
same thing to students"_

 _" Merely dealing with lighting conditions, weather conditions, and traffic
conditions is immensely complicated. The software requirements are extremely
daunting. Nobody even has the ability to verify and validate the software. I
estimate that the challenge of fully automated cars is 10 orders of magnitude
more complicated than [fully automated] commercial aviation."_

\- June 2015, Steve Shladover, transportation researcher at the University of
California, Berkeley

[https://www.automobilemag.com/news/the-hurdles-facing-autono...](https://www.automobilemag.com/news/the-hurdles-facing-autonomous-vehicles/)

 _" While I enthusiastically support the research, development, and testing of
self-driving cars, as human limitations and the propensity for distraction are
real threats on the road, I am decidedly less optimistic about what I perceive
to be a rush to field systems that are absolutely not ready for widespread
deployment, and certainly not ready for humans to be completely taken out of
the driver’s seat."_

\- March 2016, Mary Cummings, director of the Humans and Autonomy Laboratory
at Duke

[https://www.commerce.senate.gov/public/_cache/files/c85cb4ef...](https://www.commerce.senate.gov/public/_cache/files/c85cb4ef-8d7f-40fb-968c-c476c5220a3c/8BC0CC7E137483CEFD0C928ECB14E74E.cummings-senate-testimony-2016.pdf)

 _" With autonomous cars, you see these videos from Google and Uber showing a
car driving around, but people have not taken it past 80 percent. It's one of
those problems where it's easy to get to the first 80 percent, but it's
incredibly difficult to solve the last 20 percent. If you have a good GPS,
nicely marked roads like in California, and nice weather without snow or rain,
it's actually not that hard. But guess what? To solve the real problem, for
you or me to buy a car that can drive autonomously from point A to point
B—it's not even close. There are fundamental problems that need to be
solved."_

\- September 2016, Herman Herman, director of the National Robotics
Engineering Center at Carnegie Mellon University

[https://motherboard.vice.com/en_us/article/d7y49y/robotics-l...](https://motherboard.vice.com/en_us/article/d7y49y/robotics-lab-uber-gutted-says-driving-cars-are-not-even-close-carnegie-mellon-nrec)

~~~
endymi0n
"It may be a hundred years before a computer beats humans at Go — maybe even
longer," Piet Hut, an astrophysicist at the Institute for Advanced Study in
New Jersey, told the NYT in 1997. "If a reasonably intelligent person learned
to play Go, in a few months he could beat all existing computer programs. You
don't have to be a Kasparov."

~~~
pluto9
If you care to, you can dig up plenty of examples of people underestimating,
overestimating, or correctly estimating the future capabilities of AI. Just
because some guy in New Jersey underestimated it once doesn't mean we should
accept AI hype uncritically, especially since much of that hype is driven by
moneyed interests.

~~~
mark-r
People are just bad at estimating in general. They tend to assume things will
go in a straight line. You can see this most often when the curve is
exponential, but asymptotic curves will throw people off too.
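
A toy sketch of that failure mode (all numbers invented for illustration):
fit a straight line to the early, slow part of a logistic "capability" curve
and the forecast badly undershoots the takeoff, while a line fitted to the
steep middle would just as badly overshoot the plateau.

```python
import math

# Toy "capability over time" curve: slow start, rapid middle, asymptotic top.
def logistic(t):
    return 1 / (1 + math.exp(-(t - 10)))

# Straight-line forecast fitted to the early, slow-growth years (t = 0..4).
t0, t1 = 0, 4
slope = (logistic(t1) - logistic(t0)) / (t1 - t0)

def linear_forecast(t):
    return logistic(t0) + slope * (t - t0)

# At t=10 the real curve is at 0.5, but the linear extrapolation is still
# near zero: the estimator who "assumed a straight line" missed the takeoff.
for t in (4, 10, 20):
    print(f"t={t:2d}  actual={logistic(t):.4f}  straight-line={linear_forecast(t):.4f}")
```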

------
s2g
> argues the problem is less about building a perfect driving system than
> training bystanders to anticipate self-driving behavior

Can we stop listening to him now?

I'm joking, he's an expert of course, but seriously that's a pretty dumb view.
It's exactly the sort of thing I expect from a SV "disruptor" type.

> “Rather than building AI to solve the pogo stick problem, we should partner
> with the government to ask people to be lawful and considerate,” he said.
> “Safety isn’t just about the quality of the AI technology.”

Just crazy.

These people just really do not seem to care if they kill people. They are
foolishly rushing ahead with a technology that's not ready, thinking they can
just paper over the gaps with laws and patches.

------
mathiasben
As long as software is composed of primitive, language-derived statements and
"functional" logic (IF, WHEN, WHILE, DO, GOTO, etc...), AI will remain a
fantasy.

~~~
jfoutz
This is hard to parse. All software that can be represented as if and goto
(and some sort of assignment i suppose)?

when, while and do are just if and goto.

AlphaGo is all just if and goto. I'm not sure what you're looking for or
alluding to.

Is there some other operation that isn't just if and goto? because i think any
function you come up with, i can just make a big table of inputs that result
in specific outputs.

Heck, decrement and jump if zero is enough. I'm not convinced at all that
specific operations are limiting us somehow. Could you elaborate?
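
To make the reduction concrete, here's a sketch (mine, not from the thread) of
a `while` loop hand-compiled into nothing but `if` and `goto`. Python has no
`goto`, so a program counter plus a dispatch loop stands in for it:

```python
# sum_to_n written with only "if" and "goto": each branch is a numbered
# statement, and setting pc is the goto. The outer while is just the machine
# repeatedly jumping to whatever pc points at.
def sum_to_n(n):
    total = 0
    pc = 0
    while True:
        if pc == 0:        # L0: if n <= 0 goto L3
            pc = 3 if n <= 0 else 1
        elif pc == 1:      # L1: total += n; n -= 1
            total += n
            n -= 1
            pc = 2
        elif pc == 2:      # L2: goto L0
            pc = 0
        else:              # L3: halt, return result
            return total
```

Usage: `sum_to_n(5)` walks L0→L1→L2→L0… until n hits zero, exactly as the
structured `while n > 0` version would.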

~~~
jtbayly
I believe what he's saying is that we don't "think" in if/then statements. So
true AI will never seem intelligent if you judge intelligence as thinking the
way we do.

~~~
vntok
What do you mean? Neurons work exactly like if/then statements. If (input >
level) {transmit information}
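
That reading corresponds to the classic threshold-unit model of a neuron
(a perceptron-style abstraction, not a claim about biology; the replies
dispute whether real neurons reduce to this). A minimal sketch:

```python
# Threshold-unit sketch of "if (input > level) {transmit information}":
# weighted inputs summed against a firing threshold, binary output.
def threshold_neuron(inputs, weights, level):
    activation = sum(i * w for i, w in zip(inputs, weights))
    return 1 if activation > level else 0

# Wired as an AND gate: fires only when both inputs are active.
assert threshold_neuron([1, 1], [0.6, 0.6], 1.0) == 1
assert threshold_neuron([1, 0], [0.6, 0.6], 1.0) == 0
```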

~~~
jtbayly
Ummm.... no. You _might_ be able to make the case that a neuron can be
simulated with a very complex set of if-then statements, but that assumes that
we can know everything about the state of a dozen inputs, which we can't. At
any rate, the brain is not a computer: [https://aeon.co/essays/your-brain-does-not-process-informati...](https://aeon.co/essays/your-brain-does-not-process-information-and-it-is-not-a-computer)

~~~
xj9
i like that this essay puts the complexity of the brain into perspective, but
the distinction between a computer and an organism is kind of arbitrary.
brains are physical systems that can be modeled mathematically. if you can
create a mathematical model of a thing, you can execute that model on a
computer. a brain isn't a classical computer, but that doesn't mean that a
computer can't simulate a brain.

we currently lack the ability to recreate a brain-like entity, but the subtext
that i am reading here is that the complexity of the brain is such that
accurately modelling a brain in mathematical terms is impossible. the "brain-
as-computer" model may not be accurate, but everything that exists _can_ be
expressed in mathematical (and therefore compute-able) terms.

i doubt that cyberbrains will run on anything that we recognize as a general-
purpose cpu. gpu micro-architecture is already a significantly more efficient
option for performing nn computations. as our grasp on this stuff improves,
more specific silicon is being developed to make it even more efficient.

~~~
jtbayly
I don't think it's arbitrary. You even distinguished in this comment between a
brain and a simulation of a brain. Steel is a physical system that can be
modeled mathematically. Accurately simulating steel doesn't make steel.

~~~
a_crc
Agreed. I think radioactive decay is an even better example of a well-modeled
physical system that defies simulation. A simulation of an ounce of decaying
uranium won't tell you which atoms will decay in which order in a physical
chunk. Ergo, some things defy simulation.
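
A Monte Carlo sketch of that point (toy numbers, not real uranium physics):
the model reproduces the statistics, but which particular atoms decay in
which order depends entirely on the random stream, so two runs differing only
in seed agree on the population yet disagree on the ordering.

```python
import random

# Each tick, every surviving atom decays independently with probability p.
# Returns the order in which the atoms decayed.
def decay_order(n_atoms, p_decay, seed):
    rng = random.Random(seed)
    alive = set(range(n_atoms))
    order = []
    while alive:
        for atom in sorted(alive):  # sorted() copies, so mutation is safe
            if rng.random() < p_decay:
                alive.discard(atom)
                order.append(atom)
    return order

run1 = decay_order(100, 0.1, seed=1)
run2 = decay_order(100, 0.1, seed=2)
# Same atoms decay in both runs, but in a different sequence.
print(run1[:5], run2[:5])
```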

