
Nvidia and Audi aim to bring a self-driving AI car to market by 2020
https://techcrunch.com/2017/01/04/nvidia-and-audi-aim-to-bring-a-self-driving-ai-car-to-market-by-2020/
======
frik
Tesla is also using a modified Nvidia self-driving platform for Autopilot v2,
available in early 2017.

Nvidia demonstrated their platform with many sensors: 10 cameras, ultrasonic
sensors, and at least one LIDAR.

Whereas Tesla is using a sensor mix of radar, ultrasonic sensors and around 7
cameras - LIDAR is probably still too expensive even for a $70k+ car (the big
one you know from Google's cars costs $70k, the smallest costs at least $7k).

It will be interesting to learn about Audi's sensor mix, and what LIDAR
product they choose.

~~~
lucidrains
$250 LIDAR is on the horizon:
[http://driverless.wonderhowto.com/news/quanergys-new-250-sol...](http://driverless.wonderhowto.com/news/quanergys-new-250-solid-state-lidar-could-bring-self-driving-masses-0175790/)

~~~
emmab
> And if you're worried about the sensors being vulnerable to hacking—Eldada
> says they've engineered the sensor with seven layers of protection to make
> sure the system can't be tricked.

A seven layer seal? That could surely only be broken by the darkest of magics.

~~~
mixedbit
Let's hope each of these layers is not guarded by admin:admin credentials.

~~~
elcct
Nah, they got this - it is admin:changeme

------
KKKKkkkk1
The term "self-driving" in this context has no technical meaning behind it.
Cruise control is also self-driving. And don't get me started on "AI". Jen-
Hsun is saying that the car "was trained for four days". WTF? Mobileye has a
team of 100s of annotators working 9-to-5 generating training data. It's as if
this report is tailored to fool credulous readers who have a vision of HAL
9000 driving a car in their mind.

~~~
jayjay71
The article specifies Level 4 autonomy, which means:

The automated system can control the vehicle in all but a few environments
such as severe weather. The driver must enable the automated system only when
it is safe to do so. When enabled, driver attention is not required.

~~~
notheguyouthink
On a sidenote, level 4 autonomy will be really interesting when it's all over
the place. People are already "bad drivers" _(in my opinion, and including
myself)_, but not having to drive 75-90% of the time will mean we rob
ourselves of the constant training that helps keep us _somewhat_ competent at
driving.

How bad will the roads be when we have thousands of people all very unsure of
their driving skills driving in conditions that are dangerous to begin with?
Seems quite the interesting issue.

~~~
LeifCarrotson
I am at my worst as a driver when I am fatigued from the stress of attending
to driving for a long time. Having an L4 system where I could step in for a
few minutes after just riding for an hour, giving fresh attention to those few
minutes, would make me a better driver.

The advent of cruise control has not made people incapable of using the
accelerator. You occasionally meet 16-year-olds who enable it on any 35, 45 or
55 mph road when they reach the speed limit, because holding constant speed is
a skill. But it's not difficult to refrain from using it for a while to brush
up on the skill.

~~~
jacalata
I once spent two hours driving through a snowstorm in Canada. There's no
reason to expect that the conditions which require human attention would only
last for a few minutes.

~~~
LeifCarrotson
That's great. I've done the trip to Michigan Tech from lower Michigan in a few
snowstorms.

But I have done far more miles on highways in good conditions where an L4
system would be helpful. The tech is still useful even if it's not a complete
solution.

------
thinkloop
Wonder how many more times this can get declared before the PR value becomes
a net loss.

~~~
tsenkov
I think that's happening already, among engineers at least.

There should be some healthy dose of inspiring "propaganda", but this got out
of hand - everybody claims "they have it".

I am so sick of this endless stream of lies that I'm not even going to read
the article. The next thing I read on the topic should say something like
"X has a viable self-driving car - it's hitting the market later this
year.".

~~~
stale2002
Really?

You know that there are tens of thousands of self-driving cars on the road
RIGHT NOW, that you as a consumer can buy tomorrow.

Tesla sells them, and they work.

~~~
j_jochem
Tesla cars are not autonomous, as stated by Tesla themselves.

~~~
stale2002
You press a button, take your hands off the wheel, and it drives down the
highway.

That is absolutely self driving.

If half of your driving time is done by the car, that is still a massive
benefit to consumers.

End-to-end autonomy is even better, of course, but I think that everyone is
selling short the in-between 95% self-driving.

Even TINY features, like autobraking in emergencies, have the potential to
save tens of thousands of lives every year, if fully deployed in all cars.

And for jobs, highway driving is 90% of what truck drivers do. If you have
highway self-driving (which we DO, right now!), then there go most truck
driver jobs.

~~~
vidarh
I agree with you part way. Of course it is still a massive benefit.

But fully autonomous driving is transformative in entirely different ways,
which makes it hard to treat the evolutionary steps as all that exciting,
even though many of them probably should be.

~~~
semi-extrinsic
> Of course it is still a massive benefit.

Could you ELI5 what the massive benefit is? The system _requires_ you to be
alert and focused on the road the entire time.

We know for a fact e.g. from the aviation industry that having a human sitting
alert but doing nothing increases human response time and decreases the
correctness rate of their actions, which is one reason why airplanes don't
regularly land on autopilot, even though the tech has been there for a long
time. In fact, auto-land on airliners is only used in very low visibility, low
wind conditions, or for training to ensure pilots use it at least semi-
annually.

The huge benefit of autopilot on airplanes and ships is that for normal
operation out on the ocean or up in the sky, it's sufficiently safe that the
pilot/captain can focus his attention elsewhere for long periods of time.

~~~
stale2002
No it doesn't. Systems like Otto do Level 4 self-driving for trucks on
highways. These are on the roads right now.

The truck drivers can just get up out of their seat and do something else.
There goes 90% of truck jobs, as now you only need a driver for the first and
last parts of a trip.

Other benefits: saved lives. The more highway self-driving cars we have, the
less humans will be driving, and the lower the chance that a human makes a
mistake.

The Level 2 and 3 stuff on a Tesla is already a better driver than most
humans.

~~~
semi-extrinsic
> The Level 2 and 3 stuff on a Tesla is already a better driver than most
> humans.

Ehm, how do you come to this conclusion? Statistically, Autopilot Teslas are
no safer than the NHTSA average, at least so far.

~~~
greglindahl
Statistically, the error bar is so large that your statement, and the one
you're replying to, are not accurate. We don't know because there's too little
data.

~~~
semi-extrinsic
Yes, I agree, to a point. However, if we suppose that Autopilot driving was
several orders of magnitude safer than normal driving, the probability that
the (admittedly poor) statistics at this point would show it being equal to
normal driving is very low.

If I say "black swans are extremely rare in this part of the world", and you
spot one the very next day, the Bayesian in you would assign a low probability
to my statement being true, even though that's based on a sample size of one.
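
The black-swan intuition can be sketched numerically. None of the figures
below come from the thread - the crash rate, fleet mileage and observation
are made-up stand-ins - but a simple Poisson model shows why even one
observed crash makes a "100x safer" hypothesis look unlikely:

```python
from math import exp

# Hypothetical illustration only (invented numbers, not real Tesla/NHTSA
# data): assume humans average 1 fatal crash per 100M miles, the fleet has
# logged 100M Autopilot miles, and 1 fatal crash has been observed.
human_rate = 1.0          # expected fatal crashes per 100M miles
autopilot_miles = 1.0     # fleet mileage, in units of 100M miles
observed_crashes = 1

def poisson_pmf(k, lam):
    """P(K = k) for a Poisson(lam) crash count."""
    fact = 1
    for i in range(2, k + 1):
        fact *= i
    return lam ** k * exp(-lam) / fact

# Likelihood of the observation if Autopilot merely matched the human rate...
p_equal = poisson_pmf(observed_crashes, human_rate * autopilot_miles)
# ...versus if it were 100x (two orders of magnitude) safer.
p_safer = poisson_pmf(observed_crashes, human_rate * autopilot_miles / 100)

print(p_equal)  # ~0.368
print(p_safer)  # ~0.0099
```

With these toy numbers the single crash is roughly 37 times more likely
under "equal safety" than under "100x safer" - a Bayesian would shift belief
accordingly, just as with the black swan.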

------
peteretep
Random thought: the more deep learning is used in training, the less humans
will be able to retroactively explain decisions; this surely has liability
implications

~~~
bobcostas55
Perhaps NNs need functionality to come up with fake rationalizations for their
decisions just like the human brain does.

~~~
worldsayshi
I'm thinking that this idea of "fake rationalization" is a bit backwards.
Many actions happen before we are aware of them or have a rationalization
for them, but becoming aware of why we did something after the fact doesn't
mean that "you" weren't part of the decision. Aren't your reflexes a part of
"you"?

~~~
bobcostas55
Of course, but nobody is going to say "my reflexes did it". We come up with
stories, lines of argumentation, etc. Even though these are "ex post" we often
manage to convince ourselves...

In Homer's time the Greeks viewed the unconscious as an external (divine)
force, the most famous example being Telemachus's sneeze. In a weird way this
feels more intellectually honest than the thing we do.

~~~
peteretep

> we often manage to convince ourselves

In this case, presumably the trick is to convince a jury.

------
joeyspn
Bitcoin and cryptocurrencies, AI, self-driving cars... good (and profitable)
times for GPU manufacturers.

Nvidia stock (NASDAQ:NVDA) is up 4x in the last year and will probably
continue climbing.

~~~
rrdharan
I thought all the bitcoin and crypto stuff had already moved past GPUs and on
to custom ASIC units?

~~~
notheguyouthink
I know next to nothing about this, but my buddy is big into Bitcoin. He runs
a lot of ASIC machines for bitcoin, but he says there are a lot of reasons
to dislike the ASIC-compatible algorithms. He's experimenting with switching
to GPUs; iirc he's mining Ethereum with them at the moment.

I can't go into much detail as it's not my knowledge obviously, but I can
say that GPUs are viable/required for some of the coins out there. Based on
his word, at least.

------
dbcooper
ZF's press release:

[http://www.zf.com/corporate/en_de/press/list/release/release...](http://www.zf.com/corporate/en_de/press/list/release/release_29147.html)

------
ilaksh
I think that Tesla is basically saying 'F it' and releasing something like
this either right now (version 2.0) or in full before the end of 2017.

But they have more or less been doing that the whole time. It's just that
now they have more sensors and deep learning, so they are going to be
autonomous a higher percentage of the time.

So I think as soon as they start rolling it out, more and more Tesla owners
will have 100% autonomous trips, with some exceptions for weird traffic or
weather.

I think this is risky in some ways, but overall it's more ethical than
delaying, because the only way to train/engineer for the exceptional
situations is to get a lot of vehicles running the system and training on
data. Waiting a few years means people die from human error, and you're
unlikely to see massive improvements to the system that would make up for
that.

One thing people will realize eventually is that we create a lot of driving
situations that are structurally unsafe. For example, it is accepted to
speed past pedestrians or bicyclists a few feet away on the sidewalk or bike
lane. No level of AI advancement can prevent some random horrific accidents
in that case. It could be as simple as a pedestrian crossing the street a
little early. People are not going to tolerate AIs going 5 mph anytime a
pedestrian is nearby, but that's the only way you could prevent fatalities
in some situations. That is part of the 'low confidence situations' the
Nvidia guy mentioned. So actually we need laws to protect autonomous tech
companies in those situations, or that will delay situational deployment and
lead to more deaths from human error.

------
ParadisoShlee
I know lawyers and states need time to sort out the legal paperwork around
driverless cars, but I feel like 2020 is simply to appease the auto
companies and give them another few years to stall.

------
zxcvvcxz
Why is everyone so damn cynical about self-driving cars?

We know this is coming, and this technology will improve the lives of so many
people in the long run. Maybe it's from Nvidia and Audi, maybe Tesla, Uber,
Google, that dude who launched and failed and ran away to China, who knows?

I'm excited to think about what opportunities will start to open up once
humans don't need to spend 2+ hrs a day with their hands on the wheel :)

------
lucidrains
Video:
[https://www.youtube.com/watch?v=7jS4AuPnmyg](https://www.youtube.com/watch?v=7jS4AuPnmyg)

"Audi and NVIDIA developed an Audi Q7 piloted driving concept vehicle, which
uses neural networks and end-to-end deep learning. Demo track at CES 2017 in
Las Vegas."

~~~
remir
I wonder how it compares to Tesla Vision.

~~~
dangrossman
They are one and the same, no?
[http://www.nvidia.com/object/tesla-and-nvidia.html](http://www.nvidia.com/object/tesla-and-nvidia.html)

~~~
remir
Tesla made their own software:

 _The computer delivers more than 40 times the processing power of the
previous system, running a Tesla-developed neural net for vision, sonar, and
radar processing_

~~~
dangrossman
Every customer of Nvidia's solution "develops their own neural net" by feeding
their own set of sensor data into Nvidia's training platform, then using the
resulting neural net on Nvidia's hardware in the cars. Did Tesla not do that,
the same way Audi did here, and Nvidia themselves did for BB8? Everyone's
going to be adding some software on top of that for the UI and UX they want to
offer and such. Did Tesla do more than that? Your quote doesn't actually
suggest they did.

~~~
PeterisP
The neural net is 99% of the work - if all they share is a particular GPU
model, but each have "their own" neural net made on their own training data,
then you can pretty much treat it as a completely separate, different system.

The quality and performance of the system is mostly determined by the data,
not the raw computing power, so it's worth comparing them as they can be very
different.
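
The "99% of the work is the net" point can be sketched with a toy perceptron
(all data and numbers below are invented for illustration): identical
architecture, identical initial weights, identical training rule - yet two
fleets feeding it different data end up with models that disagree.

```python
# Toy illustration: the same "architecture" (a single perceptron with the
# same starting weights and update rule) becomes two different systems
# once trained on different data sets.

def train_perceptron(data, epochs=20, lr=0.1):
    """data: list of ((x1, x2), label) pairs with label in {0, 1}."""
    w1, w2, b = 0.0, 0.0, 0.0          # identical starting point
    for _ in range(epochs):
        for (x1, x2), y in data:
            pred = 1 if w1 * x1 + w2 * x2 + b > 0 else 0
            err = y - pred              # perceptron update rule
            w1 += lr * err * x1
            w2 += lr * err * x2
            b += lr * err
    return lambda x1, x2: 1 if w1 * x1 + w2 * x2 + b > 0 else 0

# Two fleets collect differently-labeled sensor data (invented numbers):
fleet_a = [((1, 0), 1), ((0, 1), 0), ((2, 0), 1), ((0, 2), 0)]
fleet_b = [((1, 0), 0), ((0, 1), 1), ((2, 0), 0), ((0, 2), 1)]

model_a = train_perceptron(fleet_a)
model_b = train_perceptron(fleet_b)

# Same input, opposite decisions: effectively two separate systems.
print(model_a(3, 0), model_b(3, 0))  # 1 0
```

Sharing Nvidia's GPU and training platform is like sharing the
`train_perceptron` function here: the behavior that matters comes entirely
from the data each company feeds in.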

------
Hydraulix989
The scary thing about non-ad-hoc techniques is that a deep net is a "black
box" -- you really don't know how pathologies occurred, nor do you know how
to fix them.

Not only that, there are _inherent_ pathologies associated with using deep
nets in the first place.

~~~
lucidrains
Fun thought experiment.

An alien race landed on earth and demands to play a game of Go. We only get to
play one game with them. If they win, our planet is destroyed.

Who would you trust to play for the human race if this scenario happened
tomorrow? Lee Sedol or AlphaGo? Remember that we do not completely understand
how AlphaGo reasons, it is still a black box to us.

~~~
tempestn
Lee Sedol is also a black box, no?

Also, one shot for the survival of humanity is very different from billions of
iterations of driving situations every day for the foreseeable future. A
complete understanding of the system is much more valuable when you have the
opportunity to iterate.

I'm not necessarily arguing for one approach; just saying that the analogy
doesn't really apply to the case at hand.

~~~
cscurmudgeon
> Lee Sedol is also a black box, no?

Last time I checked, we can talk to Lee Sedol and ask him to explain things.
We can ask him questions. We can have an intelligent conversation with him.

~~~
nradov
Human explanations for their decisions are often rationalizations after the
fact. The explanations don't necessarily accurately represent how the
decisions were actually made. Most decisions are made subconsciously based on
intuition and emotion. So that intelligent conversation might not have any
real significance.

~~~
cscurmudgeon
My general point applies to human thinking in general not just Go.

Example: Would you fly on a plane designed ultimately by a human vs an
impenetrable black box?

Also there is a spectrum. Let us not pretend otherwise.

1. One end: No explanations.

2. Middle: Sometimes false explanations and sometimes true explanations.

3. Other end: Always true explanations.

Are we really saying the middle is completely useless?

~~~
nradov
Yes I would fly in any large airplane which is properly certified for
scheduled commercial airline service, regardless of how it was designed. The
FAA has earned our trust and has a good safety record so if they tell me the
design is satisfactory then I would believe them. I also wouldn't take the
risk of flying in any non-certified experimental aircraft, again regardless of
who or what designed it.

We have no way to determine whether an explanation is true, false, or simply a
post-hoc rationalization. We like to believe that we can, but we're just
fooling ourselves.

~~~
cscurmudgeon
> We have no way to determine whether an explanation is true, false, or simply
> a post-hoc rationalization

If we have no way of determining whether something is true or false, then I
can say the same thing about your own statement quoted above. I can just say
it is false and go on with my life.

I sincerely hope you realize the obvious self-contradiction :D

Logic 101. :)

~~~
nradov
There's no self contradiction. I never claimed that we have no way of
determining whether _something_ is true or false. I only claimed that we have
no way of determining whether the explanation a person gives for how he made a
decision actually matches his real mental process or motives. We can't yet
install a debugger with break points and variable watches in the human mind;
it's very much a black box.

Logic 201. :-)

~~~
cscurmudgeon
Logic 101. :)

> I never claimed that we have no way of determining whether something is true
> or false

Everything can be cast as a declarative statement.

Matches(givenStatement, actualIntention).

------
k__
And all this because of gamers.

You're welcome ;)

~~~
visarga
And the guys with the NNs.

------
slaunchwise
Audi? I'd be happy if they could build a car with a damned USB port and a
touch screen.

------
nameisu
lately every few weeks the trend in news is 1 tech company+1 car company

~~~
seeekr
... and soon the trend'll be 1 car company owned by a tech company. Wondering
when the first tech companies are going to start buying car companies -- in
the tried and true spirit of tech (software) eating the world.

~~~
chx
Maybe. Fiat Chrysler is a $16.44B company; nVidia is a $57.15B company. Any
other car group I can think of is much larger than Fiat Chrysler.

