
Nvidia on new self-driving system: “basically 5 years ahead and coming in 2017” - stesch
https://electrek.co/2016/11/11/tesla-autopilot-chip-supplier-self-driving-hardware-its-five-years-ahead/
======
Animats
I'm still not happy with self-driving on vision alone, or vision augmented
with radar. There are too many hard cases for vision. Everybody who has good
self-driving right now - Google, Otto, Volvo, GM - uses LIDAR.

Self-driving is coming to the first end users in 2017, in Volvo's test of 100
vehicles. Volvo has multiple LIDARs, multiple radars, multiple cameras,
redundant computers, and redundant actuators. They're being cautious. Yet
they're getting there first.

With the new hardware, Tesla ought to be able to field smart cruise control
that doesn't ram into stopped vehicles partially blocking a lane. They've
rammed stopped vehicles at speed three times now. At least with the new
hardware things should get better. Do they still have the radar blind spot at
windshield height?

~~~
ChuckMcM
The reasoning I've heard goes like this:

People drive reasonably well using vision primarily and with imperfect
visibility of their environment.

Computer learning networks can classify imagery at least as accurately as
humans and sometimes more so.

A computer using imagery that is well classified from an array of visual
sensors with near perfect visibility should be able to drive as well, or
better, than a human driver.

The execution strategy appears to be to run classification and command
prediction all the time, and while the human is in control consider it
supervised learning.
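A minimal sketch of that supervised "shadow mode" loop, with a toy linear model standing in for the real network (everything here is illustrative, not Tesla's actual pipeline):

```python
# Toy sketch of the "shadow mode" idea above: while the human drives, each
# (sensor features, human steering angle) pair is one labeled example, and
# the model is nudged toward the human's command. A real system would train
# a deep CNN on camera frames; a linear model keeps the loop visible.
import random

random.seed(0)
NUM_FEATURES = 4
LEARNING_RATE = 0.01
weights = [0.0] * NUM_FEATURES

def predict(features):
    """Model's proposed steering angle for one frame."""
    return sum(w * f for w, f in zip(weights, features))

def shadow_mode_step(features, human_angle):
    """One supervised update: move the prediction toward the human's command."""
    error = predict(features) - human_angle
    for i in range(NUM_FEATURES):
        weights[i] -= LEARNING_RATE * error * features[i]
    return error ** 2  # squared error for this frame

# Simulate a drive where the human's steering is a fixed function of the
# features - the "ground truth" the model should gradually absorb.
true_w = [0.5, -1.0, 0.25, 2.0]
for _ in range(5000):
    feats = [random.uniform(-1, 1) for _ in range(NUM_FEATURES)]
    loss = shadow_mode_step(feats, sum(t * f for t, f in zip(true_w, feats)))

print(loss)  # near zero: the model has learned to mimic the driver
```

The appeal of the scheme is that labeling is free: every mile a human drives generates training data.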

The argument against LIDAR is just this in reverse: humans don't need LIDAR to
drive, so why should computers?

LIDAR is an engineering solution to the problem of creating a representation
of the 3D space around the vehicle. It is a stand-in for the less well
understood human ability to do the same just by looking around. As a result,
if the "looking around" solution being proposed by NVIDIA and Tesla meets the
engineering requirement, I don't see any reason the car should need LIDAR.

~~~
tedunangst
ML seems pretty bad at classifying things it hasn't seen before though. There
are quite a few examples where an input outside the training data resulted in
misclassification.

Humans may not always see a white truck in a snowstorm, but is computer vision
going to see it either? Or will it pattern match the few visible parts as
something else entirely? Or dismiss the truck entirely as noise?

~~~
ChuckMcM
I don't disagree; both humans and ML are bad at classifying things they
haven't seen before[1]. However, that reasoning doesn't disqualify either
vision-only auto-driving systems or machine learning.

Both statements are true:

"Computer driven cars may crash, even fatally, when they encounter a situation
that they do not recognize." and

"People driving cars may crash, even fatally, when they encounter a situation
that they do not recognize."

The success criterion for self-driving cars is that they drive at least as
well, in the common case, as the set of human drivers who are defined to be
"good" drivers. And self-driving is not invalidated by a computer's
mishandling of an event that a good driver would also mishandle.

I expect that self driving systems will be differentiated by how well they
handle the unusual cases so a Mercedes system might do better in an unusual
situation than a Chevy system. And all of this discussion is orthogonal to
LIDAR :-).

[1] [http://puzzlephotos.blogspot.com/](http://puzzlephotos.blogspot.com/)

~~~
visarga
There is a difference though - humans understand the surrounding state;
computer vision is not quite there. It can recognize things, but in NVIDIA's
case it directly generates steering commands without going through the
intermediate step of building a model.

Humans build models of the world, and such models allow us to predict the
future to a little extent, and explain the reasons behind a situation. Humans
can intuit the intentions of other drivers and the behavior of other objects.
AI can't do that quite as well.

~~~
philjohn
Also, making eye contact and being waved through. Humans are excellent at
reading cues such as this.

------
randomdrake
Super cool to see NVIDIA releasing hardware specifically meant to be a
platform for building self-driving systems[1]:

 _" NVIDIA DRIVE™ PX 2 is the open AI car computing platform that enables
automakers and their tier 1 suppliers to accelerate production of automated
and autonomous vehicles. It scales from a palm-sized, energy efficient module
for AutoCruise capabilities, to a powerful AI supercomputer capable of
autonomous driving."_

I hadn't heard of this before and with their purported pivot to an AI company,
I can't wait to see what other platforms they develop in a similar capacity.

[1] - [http://www.nvidia.com/object/drive-px.html](http://www.nvidia.com/object/drive-px.html)

~~~
dogma1138
NVIDIA has been trying to pivot into a "services" company for a while - NVIDIA
GRID/Gaming Cloud, computing, etc. "AI", or at least fused-sensor automation,
seems like a good place for them, since they already have both the hardware and
the software expertise for it.

NVIDIA actually gave up on even attempting to do console graphics again, since
they didn't want their pipeline to be suffocated by those draconian
single-customer contracts. What keeps AMD's graphics department alive these
days is exactly what would have prevented NVIDIA from pushing their business
forward.

~~~
erk__
The new Nintendo console/handheld has NVIDIA graphics.

~~~
dogma1138
It's the NVIDIA Tegra SoC; this is different from having to fill your pipeline
with custom graphics for a mainstream console.

Nintendo just licensed the SoC from NVIDIA. The PS3 had a custom chip from the
green giant, and AMD does custom APUs for the PS4 and Xbone.

------
matheweis
I thought the reason for Tesla switching away from Mobileye was that
Mobileye and Tesla couldn't come to an agreement on price and data licensing?

[https://electrek.co/2016/09/15/tesla-vision-mobileye-tesla-a...](https://electrek.co/2016/09/15/tesla-vision-mobileye-tesla-autopilot/)

... and because Mobileye wasn't comfortable with Tesla using their system for
level 4 & 5 driving:

[https://electrek.co/2016/09/16/mobileye-responds-to-tesla-ag...](https://electrek.co/2016/09/16/mobileye-responds-to-tesla-again/)

------
bcantrill
Could someone who understands this space weigh in on how technically
interesting this is? (Or isn't?) In particular, their research paper on "End
to End Learning for Self-Driving Cars"[1] seems to yield a system that
requires an unacceptable amount of manual intervention: in their test drive,
they achieve autonomous driving only 98% of the time. But I have no real
expertise in this space; perhaps this result is impressive because it was
end-to-end, or because of the relatively little training? Is such a system going to
be sufficiently safe to be used in fully autonomous systems? Or is NVIDIA's PX
2 interesting but not at all for the way it was used in their demonstration
system?

[1]
[http://images.nvidia.com/content/tegra/automotive/images/201...](http://images.nvidia.com/content/tegra/automotive/images/2016/solutions/pdf/end-to-end-dl-using-px.pdf)
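For what it's worth, the network in the paper is a fairly small CNN (normalization, five conv layers, then fully connected layers down to one steering value). A quick shape walk-through, assuming valid (unpadded) convolutions - the layer sizes come from the paper, but the arithmetic is mine:

```python
# Dimensions of each conv layer's output for a 66x200 input, per my reading
# of the paper's architecture diagram.
def conv_out(size, kernel, stride):
    """Output length of a valid (no padding) convolution."""
    return (size - kernel) // stride + 1

h, w = 66, 200  # input image height/width
layers = [
    (24, 5, 2), (36, 5, 2), (48, 5, 2),  # 5x5 kernels, stride 2
    (64, 3, 1), (64, 3, 1),              # 3x3 kernels, stride 1
]
for channels, k, s in layers:
    h, w = conv_out(h, k, s), conv_out(w, k, s)
    print(f"{channels:3d} channels, {h}x{w}")

flat = 64 * h * w  # flattened into the fully connected head (100, 50, 10, 1)
print("flattened features:", flat)
```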

~~~
ilaksh
It's incredibly freaking amazing if they are using deep learning to drive,
mainly via cameras, 98 percent of the time. No one else can do that. 98
percent is obviously a lot.

~~~
bcantrill
Thanks -- that answers the question! So is it fair to say that it's impressive
because of the absence of LIDAR and/or other sensors -- and that by adding
LIDAR to such a system one could presumably get toward 0% manual
intervention?

~~~
sushirain
The gap between 98% and 99.999% is very difficult to close, and it's not
going to happen in the next 5 years. LIDAR can't help, for example, with
obeying a police officer's gestures.

------
varelse
NVIDIA's Drive PX 2 has too high a power consumption and too low perf/W for
the moment. They're winning this space more because they're there than because
they're the best possible solution.

And they may continue to win, because successful execution of an 80% product
is worth far more than a 90+% PowerPoint processor _cough_ Tensilica et al.
_cough_ - or, because this is such a huge potential market, it might actually
go to a successful competitor. 2018 and beyond will be very interesting.

For while it's really desirable to have the deep learning equivalent of x86
assembly language (CUDA) across a full stack from training to inference, in
the end, IMO cost will be king. I'm not a big fan of $150K high-end servers
filled with $5000 GPUs that can be bested with clever code on a $25K server
filled with $1200 consumer GPUs. But I am a huge fan of charging what you can
while you are unopposed. It's just that I think that state is temporary.

~~~
dogma1138
>I'm not a big fan of $150K high-end servers filled with $5000 GPUs that can
be bested with clever code on a $25K server filled with $1200 consumer GPUs.
But I am a huge fan of charging what you can while you are unopposed. It's
just that I think that state is temporary.

There is virtually not a single "enterprise" grade product that can't be made
at least 50% cheaper (or sometimes 10 times...) with off-the-shelf, hacked
consumer-grade hardware.

Enterprise products always have a pretty steep markup, but what you lose with
those $1200 GPUs is both features (e.g. virtualization, thin provisioning,
DMA/CUDA Direct, etc.) and support. When you buy a $5000 CPU over a $500 one
with the same performance, what you pay for is reliability and support. If you
don't care about that, then fine, but when you need to launch a $100M service
on top of that platform you won't really care about the price tag; it's all in
the cost of doing business.

~~~
mattnewton
I don't understand your argument, so maybe this is off base, but if you are
saying people in industry aren't replacing their supercomputers with commodity
GPUs, you're wrong; both Apple and Google have massive purchase orders for
commodity NVIDIA GPUs because they aren't just cheaper, they are better at
this application. And I imagine other companies are as well.

Edit: "replace" is probably not the right word; this is work that the old
systems don't do well, but they aren't throwing out x86 racks for GPUs, of
course. It's just that, instead of buying more of the same, they're buying
GPUs for machine learning applications.

~~~
dogma1138
They aren't buying consumer GPUs; they may not be buying the NVIDIA dedicated
servers, but they aren't running GeForce chips either.

If nothing else, that's because you cannot virtualize GeForce-line GPUs;
there is no CUDA Direct or NVLink support, etc.

If you are telling me that Google is buying GeForce GPUs and flashing them
with a custom BIOS ripped off a Quadro card so they can do PCIe passthrough in
a hypervisor and initialize the cards, then sorry, I'm not buying it.

~~~
cambion
While I agree that Google is not buying GeForce GPUs, their general use-case
for GPUs does not require virtualization.

They use containers to isolate and throttle different tasks/jobs running on
the same hardware.

At their scale, virtualization would be significantly wasteful in terms of
manageability and overhead.

~~~
dogma1138
I think we have different meanings of "virtualization" when it comes to GPUs.

I'm not talking about running a virtual OS; I'm talking about things like
rCUDA, GPUDirect, and RDMA.

But still, even for their containers solution they need support for GPU
passthrough and vGPU; if not, they can't run containers.

NVIDIA doesn't allow you to run GeForce cards over a hypervisor.

~~~
tmzt
Containers would imply there is no hypervisor involved, only a DRI device
exposed by the kernel and bind-mounted into the namespace. You would still
need support for multiple contexts, but that doesn't require multiple
(virtual) PCI devices or an IOMMU.

------
throwaway123dse
One thing I haven't seen discussed:

Many times I'm driving and have the police wave me through a traffic light.

Would a self-driving car realize what's happening?

What about a driver waving me to get ahead?

~~~
drcross
It's not a solved problem but it's something that people are working on.

[https://youtube.com/watch?v=8aEWHdduPwc](https://youtube.com/watch?v=8aEWHdduPwc)
This is the Mercedes car that communicates to pedestrians about the vehicle's
intent.

[https://en.wikipedia.org/wiki/Vehicular_communication_system...](https://en.wikipedia.org/wiki/Vehicular_communication_systems)
Something like this might feature prominently as well.

------
pimlottc
The headline reads confusingly to me. It sort of sounds like it's saying the
system is five years in the future, but what he meant is that Tesla itself is
five years ahead of the competition in this area. (Off-the-cuff verbal speech
is often a bit hard to follow when written down exactly word-for-word.)

------
mrfusion
How well would a modern car do in the darpa grand challenge? I'm curious how
far we've come.

~~~
frik
I doubt a self-driving car without a LIDAR would make it - on the same "test
road" as in 2006. But it shouldn't be a problem for others like Google, Volvo,
Ford, etc.

We need an independent review of self-driving cars in a few years. It will be
quite interesting to see how good they really are in different driving
situations, like different road and weather conditions. Say goodbye to
camera+radar-only cars on a snowy road with bright winter sun (low sun rays)
or heavy rain on a dark foggy night.

------
pistle
First, I'm all for AI-based transportation solutions, but why does it seem
like there aren't nearly enough redundancies being considered? Are there just
going to be competing proprietary solutions?

In the US, the NHTSA should get ahead of things and push for open standards,
and potentially for some level of development of standard safety features -
like being able to put some form of material or device on objects to mark them
in ways that can transmit specific information about objects like other cars,
fixed structures, etc.

I could make tens of millions selling stickers or paint additives that would
mark a human-driven car's edges to help "protect" it from automated vehicles'
AI.

------
Belenus
Not only is NVIDIA getting those three things that Huang mentioned, but they
are also planning, and maybe getting, a monopoly. Think, rapidly switching
from chip-making to AI, before anybody else, but not for self-driving cars.
Then, when Tesla decided to start self-driving cars, NVIDIA hopped in and now
Tesla is going to use their Drive PX2 supercomputer. This is a smart move by
NVIDIA, after all, they're five years ahead!

------
tim333
The PX 2 is kind of a cool computer: 8 teraflops, 250W, liquid cooled. I
imagine it would run at a good bit less than that most of the time. 8
teraflops is about 8% of Moravec's estimate of brain equivalence, so assuming
you use ~8% of your brain driving, it may be about right.
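The arithmetic behind that comparison, taking Moravec's brain-equivalence estimate as roughly 100 teraflops (the figure that makes 8 TFLOPS come out at ~8%):

```python
px2_tflops = 8       # PX 2 peak throughput, per the article
brain_tflops = 100   # Moravec-style rough brain equivalence (assumption)
fraction = px2_tflops / brain_tflops
print(f"{fraction:.0%} of a Moravec brain")  # 8% of a Moravec brain
```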

~~~
emcq
The brain doesn't do floating point operations. You're trying to compare
apples to oranges.

~~~
tim333
Yeah, but Moravec's arguments were based on the rough processing power he
found necessary to achieve equivalent performance in the simple robots he was
building. So it's a "two apples equal one orange for making fruit salad" kind
of estimate.

------
Joyfield
Noob question here: how do multiple LIDARs not interfere with each other?

------
zump
Question: Does NVIDIA do remote jobs for engineering positions?

------
meow_mix
This article seems to relate mostly to the computer vision problems associated
with self-driving systems, but what about weather conditions? Is this a solved
problem?

~~~
sushirain
Weather conditions are a sensor / computer vision / control problem. It is far
from solved.

------
MarkMc
I would love to see a betting market give a prediction about whether a Tesla
car sold today will have full self drive capability by 2018.

------
tener
Very cool: with a capable CUDA device in the car, one should be able to mine
some crypto coins with it! With some luck the car should be able to pay for
itself!

Alternatively, bearing in mind Elon's creative approach to finances, Tesla
could get some significant hashing power with its fleet!

Edit: looking at the downvote I suppose the joke wasn't obvious?

~~~
Retric
In the 9 years I have been on HN I have upvoted 3 really good jokes and
downvoted thousands of so-so ones. Before you post a joke, think: is it that
good? Jokes are like spam on message boards; they add clutter without aiding
the discussion.

~~~
tener
Wow, really? I just don't see how this is spam. Jokes definitely add some
value, unlike some other stuff out there.

~~~
ilaksh
I thought it was kind of funny but to be honest I couldn't tell you were
joking. It's hard not to assume people are just dumb.

------
sp332
Who thought making decent autonomous cars was going to take as long as 2020? I
mean for regulations etc., maybe, but not for the tech. Earlier this year I
expected it to be done later this year.

Edited for clarity (I hope).

~~~
ghaff
Many people. And I still don't expect to see general-purpose self-driving
(secondary roads/cities, a range of weather conditions) for decades.

~~~
60654
This exactly. I did robotics in the late '90s and it's interesting to see how
many fundamental problems still remain unsolved. The trajectory of tech
advancement is positive, for sure, but not _nearly_ as fast as popsci and
marketing articles suggest.

~~~
ilaksh
What fundamental problems remain to be solved for self driving cars?

~~~
sushirain
Vehicle detection: detect any vehicle, even from the side, even if its shape
is rare, even in the dark, etc. This could be solved with data.

Control: when to yield, without watching the face of the other driver, etc.

Edge cases: obeying a police officer, yielding to an ambulance, cooperating
with other cars.

~~~
slv77
And the non-technical side.

Liability: who is liable when the car kills a pedestrian?

Driver engagement: How do you safely transition from automatic control to
manual control?

Maintenance: Will the manufacturer be obligated to provide software updates
for the life of the vehicle? Even if that vehicle is 20 years old? Even if
newer software is 10x less likely to kill a pedestrian?

Cost: Are the costs one-time or will there be a maintenance fee?

Licensing: Does the manufacturer have the right to disable functionality after
the purchase? Do they have a right to your data? Can they sell that to your
insurer?

Regulation: Who certifies systems? How do they test them? When is a system
"good enough?"

All of these things sound trivial compared to the technical challenges, but
it's these kinds of non-technical challenges that killed the small-airplane
market in the US and are still unresolved 65 years later.

