
Why Tesla's Autopilot and Google's car are entirely different animals - mblakele
http://ideas.4brad.com/why-teslas-autopilot-and-googles-car-are-entirely-different-animals
======
oneJob
Tesla has stated publicly they believe that within three years their cars will
be capable of full autonomy, and expect it to take another two years to
receive regulatory approval. By fully autonomous they mean you will be able to
walk outside, have the car come to meet you, have it open and close the door
for you, fall asleep, and wake up several hours later at your destination.

Yes, Tesla is taking an incremental approach to releasing the feature sets
that are required to have a fully autonomous vehicle, but no, the end product
goals for Tesla and Google are not different in kind.

What certainly is different is the manufacturing approach the two companies
are taking. Google is seemingly aiming to release a fully autonomous vehicle
at version 1.0, meaning every system of the car, such as manufacturing
process, sales, customer support, will be at version 1.0 at the same time. In
contrast, when Tesla releases its version 1.0 of the fully autonomous driving
feature set, they will already have very matured versions of the other
components, such as their manufacturing process, battery and drive train
technology, sales and marketing, customer support, etc.

Plus, the Google cars look like something one buys for their four year old
niece or nephew.

~~~
slyall
"...the Google cars look like something one buys for their four year old niece
or nephew. "

You have to stop thinking like some guy from Mad Men and more like somebody
buying AWS instances.

Imagine that instead of buying a single car that you drive everywhere you
instead reserve a car for your daily commute, you might get various options,
eg:

    
      $500/month - Tesla Sports car, 30min journey, unshared occupancy
      $300/month - 2 seat Google-Car, 30min journey, unshared
      $125/month - 8 seat van, 40min journey, shared, up to 1 vehicle change
    

Now if you are a go-getter gunning for VP you'll pick the sports car. But
others might not see the extra expense as being worth it.

~~~
rev_bird
> $125/month - 8 seat van, 40min journey, shared, up to 1 vehicle change

This is interesting -- it's basically a little autonomous bus, for essentially
the same price as a regular one, that might pick up/drop off at a more
convenient location, minus someone there to supervise. I kind of like having a
bus driver.

~~~
Johnny555
But would you rather walk 3 blocks in the rain (or snow) to a bus, wait at the
stop for it to come, then walk another 3 blocks to your office, or would you
rather have the minibus pick you up at your door and drop you at your office?
Point-to-point travel is where self-driving cars excel - even if you're
sharing the vehicle with others so you don't have a direct trip.

------
rm999
>Doing many thousands times better will not be done by incremental improvement

This assertion isn't obvious to me. In my experience incremental updates are
often exponential in their impact (especially if enough resources are put into
a problem). Moore's Law is an excellent example of this: at any given time,
researchers are working on a fixed number of solutions that will generally
make a fixed % impact. This is why we can see a doubling in transistor density
without a huge increase in the size of the industry.

In the case of reducing accidents, I could see a similar exponential pattern.
The first incremental step maybe took the accident rate from 10% to 1% by
eliminating 90% of the possible sources of accidents. In the second step,
researchers will again shoot to eliminate 90% of the current causes of
accidents, bringing the rate to 0.1%. This could repeat every couple years
until the accident rate is sufficiently close to 0.
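The compounding pattern described above can be sketched in a few lines. The 10% starting rate and the 90%-of-remaining-causes-per-cycle figure are the comment's illustrative numbers, not real accident data:

```python
# Illustrative only: each development cycle eliminates 90% of the
# remaining causes of accidents, so the rate falls tenfold per cycle.
def accident_rate(initial_rate, cycles, reduction=0.9):
    """Accident rate left after `cycles` rounds, each removing `reduction` of causes."""
    return initial_rate * (1 - reduction) ** cycles

rates = [accident_rate(0.10, n) for n in range(4)]
# 10% -> 1% -> 0.1% -> 0.01%, with no single step being a breakthrough
```

Each step is "only" incremental, yet three of them multiply out to a thousandfold improvement.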

------
Animats
The Tesla "autopilot" is comparable to what BMW[1] and Mercedes[2] have been
shipping for years.

Self-driving is much harder. The first-order problems of driving on an empty
road were solved by the DARPA Grand Challenge, ten years ago. The second-order
problems involve dealing with other road users. That's hard, and that's what
Google is working on, with considerable success. So is the CMU/Cadillac
consortium, which has demonstrated their self driving car to politicians in
Washington traffic.[3] Nobody seems to pay much attention to that effort,
although they may be closer to a production product than anyone else. (Or not;
Uber hired some of the people involved away from CMU.)

Self-driving cars need and have a lot more sensors than semi-auto cars.
There's a lot more sensing to the sides and rear, and more forward sensing
than just being able to detect the next obstacle ahead. Vision processing is
far more elaborate. Google's vision system explicitly recognizes humans and
bicycles.

Google's little 25MPH driverless car is a way for them to enter the market. At
25MPH, slamming on the brakes is a good solution to situations the system
can't handle. Those things are going to be all over senior communities in a
few years. Google already has higher-performance cars on the road; they can be
seen all over Mountain View most days.

[1] http://www.bmw.com/com/en/insights/technology/connecteddrive/2013/driver_assistance/intelligent_driving.html
[2] http://www.mercedes-benz-intelligent-drive.com/com/en/1_driver-assistance-and-safety/7_active-lane-keeping-assist
[3] https://www.washingtonpost.com/local/trafficandcommuting/driverless-vehicles-even-in-dc-streets-an-autonomous-car-takes-a-capitol-test-run/2014/08/25/6d26baa8-06a4-11e4-8a6a-19355c7e870a_story.html

~~~
Cyph0n
I'm surprised the article didn't mention Mercedes' history with driver-assist
systems in particular. This is very old tech for them. For instance, I recall
seeing a Mercedes S-Class with dynamic cruise control in 2002.

~~~
ghaff
Because Mercedes isn't making absurd claims about autopilot systems and
because "everyone knows" that innovation only comes from Silicon Valley. OK,
that was a bit snarky but assistive driving systems, however useful and
safety-enhancing, don't get people all wide-eyed and excited the way promises
of autonomy in "just five years" do.

------
kordless
> A full robocar product is only workable if you would need to correct it in
> decades or even lifetimes of driving.

I had a conversation about this with friends in Germany a few months back.

In most societies, a mistake that causes suffering to another individual is
usually 'blamed' on the person causing the suffering. In many cases where
causality is obvious, this assignment of blame is fairly straightforward.
Example: Bob fell asleep, which caused him to lose control of his car, which
hit the bus, which killed a child. Bob is now culpable for the child's family
suffering. Bob remains one of many others who share culpability at this point,
assuming others are also falling asleep at the wheel. FWIW, 103M people fell
asleep at the wheel last year in the US, so Bob will likely have company.

Now put an autonomous piece of software written by company X into Bob's car.
Bob engages the autopilot, falls asleep, the autopilot software experiences an
error, the software fails to alert Bob, the software loses control of the car,
which hits a bus, which kills a child. Who is culpable for the family's
suffering now? The software? Company X?

The only way for company X to both a) allow Bob to fall asleep and b) bear the
culpability for a family's suffering is to get the software to the point it
only makes mistakes in a timeframe that is, at a minimum, several orders of
magnitude greater than Bob making the same mistake.

The logic goes that, once a company's software kills a child, it's going to be
pretty hard to keep the public from reacting negatively, even though overall
suffering will decrease. The only option for company X is to require Bob to
accept that he is "driving" the car and bears the culpability for any suffering
the car's software may cause, or, alternately, to be ready to pay a substantial
settlement that offsets suffering.

~~~
ghaff
It's an interesting question in that properly maintained and properly driven
automobiles do have accidents that are clearly no one's fault--skidding on a
patch of ice, deer running into the road, etc. Perhaps a more skilled or more
cautious driver could avoid such an accident--or not. I'm hard-pressed to
think of many other examples where a product used as intended will nonetheless
cause harm to the user or others with some finite probability--but aren't
considered the fault of the manufacturer. Pharmaceuticals and other types of
medical equipment probably come closest. (Though drug companies certainly face
lawsuits for side effects all the time.)

~~~
sokoloff
Chainsaws and handguns are probably other examples.

Fast food is another. (Alcohol, tobacco?)

Side note: I would argue that your patch of ice example is not nearly as good
as the deer one. Skidding on a patch of ice and crashing is, IMO, simply
driving too fast for conditions.

~~~
ghaff
Fast food etc., though, is more "stuff that's bad for you taken to excess" as
opposed to something that can randomly get in an accident.

I agree the deer is the better example. You can have patches of unexpected
black ice, though. I've skidded a few times but never had an accident.

There's certainly gear one uses in activities that have some inherent danger,
like your chainsaw example. I guess things like skis and helmets could be
another. The difference with an AI, though, is that it's the machinery itself
that is making the decisions.

------
keehun
I've always wondered how autonomous cars would handle the first and last
quarter mile of the journey. I'm talking specifically like portions of the
trip like the driveway, getting out of the parking ramp, or navigating small
alleyways where the car could be parked (where GPS could be weak in the city).
Things like even knowing which entrance to go to. Will fully autonomous cars
ever be able to take us from A to B 100% of the time? Will humans always take
over the last tiny bit, where the maps aren't detailed, and park? Humans love to drive
around the lot to park at exactly the "perfect" spot. Cars can parallel park
now, but how will cars decide where to park exactly? Will we ever be able to
have the car take us through the drive-thru?

I personally think autopilot-like auto-cruise, just on the highway and more
established local roads, would be good enough. The convenience of having the
robot take us from A to B, parked to parked, may not be worth the insane price
tag it would take to get there.

~~~
c22
The car can drop you off at your destination and then go park wherever it's
convenient. It can take as long as it wants since you're no longer waiting on
it, so it doesn't have to worry about navigating "unparkable" areas. It could
go to a networked garage where the cars stack side by side without space even
for the doors to open. If your visit is short it could just drive around the
block a few times. Alternatively, it could go pick up another passenger.

------
Evolved
If Google's 25mph car is able to slam on the brakes and avoid an incident
without swerving or taking other action, then it could be argued that once we
move to a fully autonomous society, 25mph (an example figure; it may not be
accurate, but I'll stick with it for the sake of debate) may end up being the
max speed for safety reasons.

This isn't to say that trips taking longer will adversely impact our lives,
because I think what will end up happening is that we will rearrange our lives
to use these longer trips to sleep, work, converse with friends, do homework
on the way to class, etc., and thus the time it takes to get from point A to
point B becomes moot: we are now able to be orders of magnitude more
productive in our vehicles.

This will not only reduce accidents, since vehicles will be able to
communicate with each other and know the intentions of the other car instead
of trying to anticipate them, but it will also reduce or eliminate speeding
tickets and DUIs. Since speeding tickets and DUIs are a large source of
revenue for municipalities, I'd expect that to evolve as well, unfortunately.

------
gfodor
The thing the author is missing is that Google _can't_ incrementally improve,
since they're not in the car business. Tesla, on the other hand, has the
option of either incrementally introducing autonomy to their cars or taking
the Google approach of shipping a 1.0 in a big release years down the road.
That they've clearly chosen the former is telling.

The author pretends that both companies have a choice and have chosen
different strategies, but it's clearly not the case. Unless Google was
planning on building a traditional car business first (a fairly ridiculous
proposition), or partnering and integrating with the supply chain of a major
manufacturer (a stretch, if just to introduce fancy cruise control), they were
never going to be able to iterate towards a robocar.

------
melling
"Tesla’s autopilot isn’t even particularly new."

Guess what... Apple didn't sell the first smartphone either.

Someone takes a small step into car driving automation, tries to create some
buzz around it, then I've got to read about how it's not a big deal. The
nuances between autonomous and auto-pilot need to be discussed. We need a
taxonomy.

I guess writing these sorts of articles is a million times easier than adding
any autonomous features to any vehicle.

Forward progress is extremely important. It comes technically and socially.
Let's hope everyone demands a car with "that stuff" they have in a Tesla.
We'll get a little arms race that'll pay for further development, lives will
be saved (in total), and we'll asymptotically approach the vision.

------
turs0und
"This is not a difference of degree, it is a difference of kind. It is why
there is probably not an evolutionary path from the cruise/autopilot systems
based on existing ADAS technologies to a real robocar."

Really interesting. I did not realize that.

~~~
joelwilliamson
The author didn't justify that statement at all. If we have gone from an error
once per minute (cruise control) to once per 30 minutes, and think that level
of improvement can be repeated twice more, we will be at one error every 450
hours. A third time will put us at one error every 13500 hours.

Is continuous, gradual improvement the best way to fully autonomous cars? I
don't know. But the author's argument is simply that we aren't there yet.
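The arithmetic above is just repeated multiplication by the same improvement factor; a two-line check, using the comment's hypothetical starting interval and 30x step:

```python
# Hypothetical numbers from the comment: errors once per minute (cruise
# control), improved 30x per generation (1 min -> 30 min observed so far).
interval_min = 1.0
for generation in range(4):          # the observed step plus three more
    interval_min *= 30.0
interval_hours = interval_min / 60   # 30**4 minutes = 13500 hours
```

Three more 30x steps on top of the observed one gives 30^4 minutes, i.e. one error every 13,500 hours, matching the comment.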

~~~
makomk
The trouble is that if they're not continually involved in the driving
process, drivers can't actually concentrate well enough to be able to step in
quickly when something happens that the automated systems can't handle. If I
recall correctly, there have been studies on this and it takes tens of seconds
for drivers to be able to respond to an unexpected situation sensibly if they
haven't been actively driving. Mostly-automated cars that rely on drivers to
step in when something goes wrong are probably not an option.

~~~
ghaff
This interaction of computer and human decision-making is actually the subject
of a fair bit of active research, e.g. at Duke: http://hal.pratt.duke.edu/

To your basic point--yep, at some point you need to stop automating unless
you're willing to hand over control entirely.

------
albertzeyer
Another interesting article about Tesla's Autopilot is this:
http://electrek.co/2015/10/30/the-autopilot-is-learning-fast-model-s-owners-are-already-reporting-that-teslas-autopilot-is-self-improving/

It's learning. That is an interesting approach. I wonder how far they'll get
with it.

I guess Google's car will also collect data and help Google to improve the
performance. My impression so far was that it's mostly engineered work
however, and not so much learned (in a machine learning way).

~~~
rasz_pl
It's self-reported; most likely a placebo effect.

------
Evolved
Another way to improve the accident rate is the other side of the robocar
argument wherein we, as humans, do a better job of driving and of
watching/educating our kids.

I understand there are rules of the road and rights-of-way but a right-of-way
for a pedestrian in a crosswalk with the walking signal is not going to stop a
bus from running the light and killing the pedestrian.

Not that I'm blaming the pedestrian but it surely doesn't hurt to think
defensively, look both ways and judge if that bus is going to be able to stop
and act accordingly.

------
amelius
I'm always wondering how Google will certify its car when new releases are
made. Consider that an autonomous car will need X hours/miles of driving
before it will be certified. Now, if Google updates one line of code, the
whole certification process has to start all over.

~~~
maxerickson
I would think the certification might not exactly look like that. At least,
they aren't going to release every revision if the certification is like that.

I sort of think they are collecting the super detailed sensor data so that
they can play it back into new versions of their software and see if it
notices things earlier and makes better choices and such. A mass market
version needs to be able to function on environmental data (like a tree across
the road), so I don't think they are building a perfect map to have as a
crutch.
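A hedged sketch of what that kind of replay regression check could look like. The detectors, frame format, and function names here are invented for illustration; nothing about Google's actual pipeline is public in this thread:

```python
# Replay recorded sensor frames through two software versions and compare
# how early each flags a hazard. Purely illustrative.
from typing import Callable, Optional

Frame = dict  # stand-in for real sensor data, e.g. {"dist": 60.0}

def first_detection(detector: Callable[[Frame], bool],
                    log: list) -> Optional[float]:
    """Timestamp of the first recorded frame the detector flags, else None."""
    for timestamp, frame in log:
        if detector(frame):
            return timestamp
    return None

def no_regression(old: Callable[[Frame], bool],
                  new: Callable[[Frame], bool],
                  log: list) -> bool:
    """New software must notice the hazard no later than the old version did."""
    old_t = first_detection(old, log)
    new_t = first_detection(new, log)
    if old_t is None:
        return True  # old version never saw it; nothing to regress against
    return new_t is not None and new_t <= old_t
```

Run over a library of recorded drives, a check like this would catch a build that reacts later than its predecessor without re-driving any miles, though replay can't show how the environment would have reacted to a different decision.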

~~~
amelius
Yes, I thought about replaying the information too. But the problem is that
every different reaction by the car will lead to new input from the
environment. And you can't store what you didn't anticipate :)

------
erikstarck
A hundred thousand times better is only seventeen doublings.

Which approach has the fastest exponential growth curve? The one with
thousands of cars on the road, learning from each other, or the one with a few
but more capable cars? We'll see. Just remember to think exponential not
linear.
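The doubling count is just a base-2 logarithm, which is easy to verify (numbers from the comment):

```python
import math

# "A hundred thousand times better is only seventeen doublings":
# the smallest n with 2**n >= 100000 is indeed 17.
doublings = math.ceil(math.log2(100_000))  # log2(100000) ~= 16.6
assert 2 ** doublings >= 100_000           # 2**17 == 131072
assert 2 ** (doublings - 1) < 100_000      # 16 doublings falls short
```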

------
urvader
The point everyone is missing is how different the reaction time of a driver
is compared to a computer. A computer has millions of CPU cycles to estimate
the best decision in the time before a human has even understood there will be
an accident.

~~~
duaneb
You're comparing # of cpu cycles to time taken to "understand" something. If
the CPU can make a wrong decision REALLY REALLY quickly it's still gonna be
wrong, whereas the human will have a continuous gradient of moving towards
decision making.

------
mikeash
Google's car is an attempt at full autonomy.

Tesla's Autopilot (mostly) keeps you within the lines and regulates your speed
to match the car in front of you.

Does this really need a full article?

------
melted
You people crack me up. Self driving car was never anything other than an
elaborate PR ploy for Google, the company which derives the vast majority of
its income from its advertising business. Who would you rather work for: a
company that is "building a self driving car" or a company that tracks the
living shit out of everything you do on the web and ruins the web with mostly
irrelevant ads? That's what I thought.

And they want these PR gravy trains running as long as humanly possible, so
launching a real product isn't even a goal.

