
Tesla deploys massive new Autopilot neural net in v9 - skuzins
https://electrek.co/2018/10/15/tesla-new-autopilot-neural-net-v9/
======
threeseed
Still completely reckless, in my opinion, to deploy any autonomous system
without LIDAR or some other long-range, night-time-capable sensor.

Even if it's not reckless, their competitors will offer low-light and
night-time autonomous driving, which will be a major advantage.

~~~
xvector
I do not get why your comment is so controversial - you are absolutely right.

Conventional cameras alone are not trustworthy for self-driving, and this is
part of the reason every respectable company venturing into self-driving is
incorporating technologies like LIDAR.

I find it sketchy that Tesla markets "future full-self-driving" when they are
unlikely to have the hardware to make that a safe experience.

~~~
jaimex2
Ever wondered why, in millions of years, nature didn't evolve a LIDAR
equivalent and most species use passive vision?

I think Tesla is bang on the money going only with cameras. They are already
better than the human eye.

~~~
mikestew
_Ever wondered why, in millions of years, nature didn't evolve a LIDAR
equivalent and most species use passive vision?_

Because it compensated for poor sensor hardware with a processing unit and
software that we have thus far been unable to even come close to replicating?
Granted, solve that, and we’re just a few years away!

~~~
candiodari
The problem with that is that LIDAR is a sensor so unlike human perception
that humans completely misjudge what it's capable of. After an accident,
humans refuse to believe that the sensor really didn't see it coming, which of
course is a big problem later in court.

There are things these sensors can see that humans just can't, but in the case
of LIDAR, only in specific planes (so, for example, it just doesn't see things
that "point" at the sensor. It just doesn't see stairs or even an abyss, even
at close range).

Sonar has similar problems. It sees everything ... as long as there is a whole
lot of consistent nothing surrounding it. When there is structure on the sea
floor, sonar is useless near it. When a ship on the surface is accelerating,
the eddy currents create a region around, behind and below the ship where the
sonar is blind. And near the surface, sonar is useless. The more wind, the
deeper the problem goes. In a storm, it can be 10 meters and more. People with
decades of experience for some reason seem to outright refuse to believe that.

People seriously misjudge the limitations of these sensors, and this leads to
accidents.

Better to use cameras, which have almost exactly the same issues humans have
(e.g. bad vision in low light, limited view, "blind" angles near corners, bad
optical performance near the edges, ...), which will lead to "understandable"
mistakes.

Nobody will understand if a LIDAR misses a beam sticking out of a truck in
front of you (which would be expected behavior: such a thing is essentially
invisible to LIDAR) and impales the person sitting in the passenger seat on
it.

Or the Tesla fuckup: failing to see the difference between the front and back
wheels of a large truck versus two cars, then decapitating the driver by
driving in between the two sets of wheels at high speed. That's a typical
LIDAR issue.

Sooner or later a LIDAR-driven vehicle will decide that just driving off an
abyss is the best solution to a simple traffic situation (because LIDARs see
abysses everywhere, so they use algorithms that assume abysses don't exist).

I've seen LIDAR-controlled robots drive into tables, "decapitating" (sort of)
themselves, because the robot only saw the table's legs; looking at the data,
it was a perfectly understandable mistake. That robot also threw itself off
the stairs. Again, it's tough to fault it for that, as it saw the stairs as
pretty much the same thing as a stick lying on the floor. Looking at the data
afterwards, that was a perfectly reasonable conclusion.

We lost the robot to the stairs. I was watching it make that decision. Why?
You see it move, and you automatically assume "surely it's not going to go for
the hole in the staircase". And then it decides on a solution. Boom. And yes,
I pressed the emergency stop button. Doesn't help much if the robot is already
falling.

~~~
Ntrails
> Better to use cameras, which have almost exactly the same issues humans have

Surely, he cries, the argument is to use both, and simply apply some kind of
likelihood/confidence weighting to the outputs to combine them?
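The fusion this comment gestures at can be sketched as inverse-variance
weighting of independent estimates of the same quantity; the sensor names and
numbers below are hypothetical, chosen only to illustrate the idea:

```python
# A minimal sketch of confidence-weighted sensor fusion via
# inverse-variance weighting. Sensor values and variances are
# illustrative, not real specifications.

def fuse(estimates):
    """Fuse independent (mean, variance) estimates of one quantity."""
    weights = [1.0 / var for _, var in estimates]
    total = sum(weights)
    fused_mean = sum(w * m for (m, _), w in zip(estimates, weights)) / total
    fused_var = 1.0 / total  # fused variance is below every input variance
    return fused_mean, fused_var

# e.g. distance to an obstacle: a noisy low-light camera estimate and a
# more precise (but not infallible) LIDAR estimate, in meters.
camera = (42.0, 4.0)   # mean 42 m, variance 4 m^2
lidar = (40.0, 0.25)   # mean 40 m, variance 0.25 m^2
mean, var = fuse([camera, lidar])
print(mean, var)       # the fused estimate leans toward the LIDAR reading
```

Each sensor's vote is weighted by how confident it is, so the fused estimate
sits closest to whichever sensor currently has the least noisy view.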

------
Ajedi32
The original forum post this article is about has better info IMO:

[https://teslamotorsclub.com/tmc/threads/neural-networks.101416/page-26#post-3120699](https://teslamotorsclub.com/tmc/threads/neural-networks.101416/page-26#post-3120699)

~~~
synaesthesisx
Whoa, back up - he claims it was built off Inception V1? Truly impressive if
that's the case...

~~~
nkoren
Care to expand on that?

~~~
CarVac
Right from the end of that forum post:

>Inception V1 is a four year old architecture that Tesla is scaling to a
degree that I imagine Inception's creators could not have expected. Indeed, I
would guess that four years ago most people in the field would not have
expected that scaling would work this well. Scaling computational power,
training data, and industrial resources plays to Tesla's strengths and
involves less uncertainty than potentially more powerful but less mature
techniques.

------
Someone
_”When you increase the number of parameters (weights) in an NN by a factor of
5 you don’t just get 5 times the capacity and need 5 times as much training
data. In terms of expressive capacity increase it’s more akin to a number with
5 times as many digits. So if V8’s expressive capacity was 10, V9’s capacity
is more like 100,000.”_

I find it very, very hard to believe that. I know ‘expressive power’ is a
fairly vague concept, but if things scale that well, there must be papers out
there that at least hint at such (IMHO) insane scaling laws.

I think it must also mean that it is fairly easy for those with huge budgets
to build a system that's way better, except that it is too slow or takes too
much power (just as 3D graphics in movies show what will be on our
desks/phones in a decade or two).

I’ve asked this before, but does anybody know of papers that describe an
offline self-driving system that’s as good as perfect?
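The quoted claim can be restated as a toy model: if expressive capacity grows
roughly exponentially in parameter count, then multiplying the parameters by 5
raises the capacity to its 5th power rather than multiplying it by 5. The base
and parameter counts below are made up purely to reproduce the quote's
numbers, not taken from any real network:

```python
# Toy model: capacity ~ base**n_params. Scaling n -> 5n gives
# base**(5n) = (base**n)**5: "a number with 5 times as many digits",
# matching the quote's 10 -> 100,000 example. All values are hypothetical.

def capacity(base, n_params):
    return base ** n_params

v8 = capacity(10, 1)  # hypothetical V8: capacity 10
v9 = capacity(10, 5)  # hypothetical V9: 5x the parameters
print(v8, v9)         # 10 100000
```

Whether real networks behave anything like this exponential model is exactly
what the surrounding comments dispute.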

~~~
X6S1x6Okd1st
Here is a highly specific definition of "expressive power"
[https://en.m.wikipedia.org/wiki/VC_dimension](https://en.m.wikipedia.org/wiki/VC_dimension)

To me the statement doesn't seem that unreasonable.

~~~
mannykannot
Thanks for bringing this to my attention. That article, however, says the
following about neural nets:

 _V is the set of nodes. Each node is a simple computation cell. E is the set
of edges. Each edge has a weight. ... If the activation function is the
sigmoid function and the weights are general, then the VC dimension is ... at
most O(|E|².|V|²)_ [apologies for the crappy formatting]

Someone's quote from the article, however, seems to suggest something
exponential in the number of edges.

~~~
Someone
I haven’t even tried to hunt down the book referenced on Wikipedia, but I
think it’s worse than that. The Wikipedia page says _”The VC dimension of a
neural network is bounded as follows”_. “Is bounded” is an expression in
mathematics that is more about what we know about a problem, than about the
problem itself (as a classical example, see
[https://en.wikipedia.org/wiki/Graham's_number#Context](https://en.wikipedia.org/wiki/Graham's_number#Context).
Graham’s number ‘bounds’ a number whose value we know to be at least 13)

Given the huge range between that upper bound and the lower bound of Ω(|E|²),
chances are that upper bound is far from _tight_
([https://en.wikipedia.org/wiki/Upper_and_lower_bounds#Tight_b...](https://en.wikipedia.org/wiki/Upper_and_lower_bounds#Tight_bounds)).

Also, one line below the _O(|E|².|V|²)_ you quoted:

 _”If the weights come from a finite family (e.g. the weights are real numbers
that can be represented by at most 32 bits in a computer), then, for both
activation functions, the VC dimension is at most O(|E|)”_

Of course, they may use a different activation function, in which case that
mathematical statement doesn't apply, but I would think it more likely that
the statement applies than that the claim made in the article we're discussing
holds.

For example, it would hugely surprise me if an activation function that isn't
increasing, or that has many large discontinuities, behaved much better than
the sigmoid surely used.
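For concreteness, the gap between the two quoted upper bounds can be computed
for a hypothetical network; the node and edge counts below are made up, and
the constant 32 simply stands in for the bit width in the finite-weights
bound:

```python
# Plugging made-up network sizes into the two quoted VC-dimension upper
# bounds to show how far apart they are. |V| = nodes, |E| = edges.
# All numbers are illustrative, not measurements of any real network.
V = 1_000    # nodes
E = 100_000  # edges

bound_real_weights = E**2 * V**2  # O(|E|^2 * |V|^2), general real weights
bound_finite_weights = 32 * E     # O(|E|), weights representable in 32 bits

print(bound_real_weights // bound_finite_weights)  # bounds differ by ~3e9
```

A gap of billions between the two bounds is exactly why "is bounded by" says
little about where the true VC dimension actually sits.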

------
kwhitefoot
I wish that articles like this would make it clear which features apply to
which cars. It would be nice to know if there are any improvements to cars
with Autopilot version 1 for instance.

------
Jemm
Not a single pedestrian in the whole video.

------
desireco42
This is the blog/site whose editor got a Tesla Roadster through affiliate
points. So take that into account when you read its articles about the
otherwise fantastic Tesla cars.

~~~
astrowilliam
Good on them for working hard to get affiliate points for a brand new Tesla.
They worked their butts off to get one.

I found the article to be full of unbiased information and also more personal
info in the "ELECTREK’S TAKE" section.

~~~
Kozoxy
Good on you for finding value in the article, but I found it to be biased in
favor of Tesla, and I also find it sleazy that they didn't disclose their
affiliation with Tesla.

~~~
mgiannopoulos
Their participation in the affiliate scheme is very often mentioned in the
articles.

~~~
FireBeyond
But certainly not in this one.

~~~
Latteland
Every Tesla owner is in the referral program. If you get something like 20
referrals at various times, you could get a car. These guys own a Tesla, I
think two between them, and so could eventually earn a car. Even I am a
_craven person_ who gets something if you use my referral code to buy a
Tesla. You get $100 a year for at least 4 years in Supercharger credits
(that's like $400 a year in a regular gas car, because EVs get the equivalent
of around 100 miles a gallon of gas :-)).

I get virtually nothing unless I get 20 of them - not sure of the current
program, but so far my lifetime total of referrals is: 0. If you have
questions and want to ask someone who has owned two different Teslas since
2012, ask me. If you want to buy a Tesla, I have a referral link too; contact
me via email on my account here, since Hacker News probably wouldn't like me
to just put it out here.

------
Kozoxy
That's great, but when are they going to properly address and mitigate the
potentially massively lethal cyber- and national-security issues made possible
by this tech?

Or do we have to talk only about what they want us to talk about?

~~~
koiz
Everything has the potential to cause issues; that won't stop progress. Let's
not act like they are ignoring this.

~~~
Scoundreller
Do users have the ability to stop updates without repercussions?

Is it really a choice that cannot be overridden by the update?

If X declares war on California, will California shut down every Tesla in X?

Will I be allowed to export my Tesla in 15 years to whatever developing
country I like?

~~~
Kozoxy
Will my car be taken over and kill me?

Will other people's cars be taken over and kill me?

Will my car be taken over and be used to drive me to someone who will harm me?

These are life-or-death concerns, and let's not act like they're not being
ignored.

~~~
koiz
1\. No it won't - please show me where this is happening.

2\. No it won't - please show me where this is happening.

3\. No it won't - please show me where this is happening.

There are concerns, and then there are crazy questions and comments.

~~~
koiz
_#1 is easy: “pay us or we will accelerate your car in a random direction in
24h”. The first round will be scareware, the future may not be._

Can't happen in a Tesla: “so even if somebody would gain access to the car,
they cannot gain access to the powertrain or to the braking system.”

And how is this any different than “we'll kill you in 24h if you don't pay”?

~~~
Scoundreller
How can that be? You can remotely update the system that does autopilot, and
autopilot must have control over throttle/brake/steering.

Even shutting off the lights or disabling the wipers can create big problems.

~~~
koiz
Systems are built in isolation, and communication between them is encrypted.

Will disabling the wipers or turning off the lights kill us now? Not to
mention this hasn't happened, hasn't been proved to be possible, and is all
theory.

Theories are fine, but acting as if Tesla is actively ignoring this is funny.

~~~
Scoundreller
They’re clearly not in isolation. We’re not talking about storing data, but
sending and receiving it, so if the messages are encrypted, you just need to
compromise the code that is doing the encrypting, and voilà.

Disabling all lights, and wipers that were running full blast, while driving
at high speed can be quickly fatal.

~~~
koiz
If they were, we wouldn't be debating this.

Sorry, you are wrong.

~~~
Scoundreller
Not sure which point you were referring to by "they", but we must have
different definitions of "isolation".

If one system absolutely requires communication with the other to function, I
would never call them isolated.

