
Tesla Autopilot HW3 details - toomuchtodo
https://www.reddit.com/r/teslamotors/comments/acjdrt/tesla_autopilot_hw3_details/
======
ovrdrv3
Imagine being on the team involved in the development and reading through this
write up - I'm sure it would bring a smile to their face. "Got this right, got
that wrong."

~~~
TomVDB
I was once on a team that had a product with some crypto features that was
under attack.

There was a hackers forum where daily discoveries were discussed.

It was indeed exhilarating, and exactly the way you describe it: on one hand,
we didn't really want the thing to be cracked, but on the other it was
impossible not to be rooting for those scrappy hackers going down the wrong
path at first before figuring out the right one, one step at a time. Every
morning, we'd log in to that forum to check their overnight progress.

Still, we were confident that our bank-strength crypto algorithm would
prevail.

It did not. :-)

While we had done our due diligence, an external implementation partner had
decided to change the audited code later in the process, which broke things
completely, in the most embarrassing way.

In the end, it didn't matter, and we probably sold a few more units than we
would have sold without the broken crypto.

Fun times.

~~~
stingraycharles
Sounds very interesting. This being Hacker News, are you able to elaborate a
bit — specifically, the type of crypto and how it was broken?

I wonder what the "most embarrassing way" would mean in this context -- I'm
thinking timing attack or padding oracles, but it sounds like it might have
been even more trivial.

~~~
TomVDB
Have some random thoughts about the most crucial element of a
challenge/response system.

~~~
ben_w
xkcd 221?

~~~
TomVDB
Close enough!

There really is an XKCD for everything...

~~~
dtech
To be honest, that joke is very common. A much older version of it exists in a
Dilbert cartoon from 2001 [1], and it wasn't a new joke at that point.

[1]
[https://dilbert.com/strip/2001-10-25](https://dilbert.com/strip/2001-10-25)

------
ucaetano
> As a point of reference - Google’s TPU V1, which is the one that Google uses
> to actually run neural networks (the other versions are optimized for
> training) is very similar to the specs I’ve outlined above.

The TPU V1 is a 2015 chip (publicly announced in 2016, and that Google had
been using internally for over a year). 4 years is a big lag in terms of
technology.

Additionally, the comment about "actually run neural networks" seems to be,
AFAIK, plainly wrong. The first version was _limited_ to inference because it
could only perform integer operations, while V2 and onward support both
training and inference, as they can perform float operations.

TPU V1 was never available to the public, but both V2 and V3 are available on
Google Cloud. I don't have any info on it, but at this point, I'd expect all
V1 chips to have been deprecated and likely removed from production, given the
savings in space and power from V2 and V3.
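The inference-vs-training distinction above comes down to arithmetic: inference can run on integer hardware by quantizing float weights once and doing all the multiply-accumulates in integers, while training needs float math for gradients. A toy sketch of that idea (illustrative only, not the TPU's actual quantization scheme):

```python
# Toy sketch: why integer-only hardware suffices for inference.
# Float weights are quantized to int8-range values once; the dot product
# then runs entirely in integer arithmetic and is rescaled at the end.

def quantize(values, scale=127.0):
    """Map floats in [-1, 1] to int8-range integers."""
    return [round(v * scale) for v in values]

def int_dot(q_weights, q_inputs, w_scale=127.0, x_scale=127.0):
    """Integer multiply-accumulate, dequantized at the end."""
    acc = sum(qw * qx for qw, qx in zip(q_weights, q_inputs))  # pure int math
    return acc / (w_scale * x_scale)

weights = [0.5, -0.25, 0.75]
inputs = [0.2, 0.4, -0.6]

float_result = sum(w * x for w, x in zip(weights, inputs))
int_result = int_dot(quantize(weights), quantize(inputs))

# The integer path closely approximates the float result; the small
# quantization error is acceptable for inference but not for training,
# where gradient updates need float precision.
print(abs(float_result - int_result) < 0.01)
```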

~~~
danjayh
4 years is a big lag if you're not doing safety critical work. If you _are_
doing safety critical, 4 years isn't particularly fast, but it's not
unreasonable. If you've never worked in aerospace or automotive, you would be
shocked at the requirements for software and hardware that can kill people if
it breaks (assuming that Tesla is following the automotive equivalent of
DO-178/DO-254, ISO26262)... I guess the automotive guys do have an advantage
because they're not required to have independence for the various stages of
verification, but it's still a big chunk of work (more than the actual
development).

~~~
ucaetano
> If you are doing safety critical

Well, Tesla clearly isn't doing safety critical work on autonomous systems (it
wouldn't have even launched "autopilot" if it was), so 4 years is indeed a big
lag.

~~~
torpfactory
Tesla claims [1] that autopilot driving is safer than regular driving in terms
of risk of accident per mile. Not sure if I agree with the way they've
constructed their statistics, but for a moment take them at face value.

Isn't then the safer thing to release and encourage autopilot to the largest
extent possible?

[1] [https://www.tesla.com/blog/q3-2018-vehicle-safety-report?redirect=no](https://www.tesla.com/blog/q3-2018-vehicle-safety-report?redirect=no)

~~~
evv
That statistic is wildly and purposefully misleading. It compares the safety
of Autopilot (which only drives on the freeway) with human drivers on any
road. Accidents-per-mile are already much lower on the freeway.

Plus, the statistic ignores the fact that responsibility is actually shared
between humans and the car. If autopilot gets into an unsafe situation and
gives control to the human at the last moment, the car company might claim
that the accident happened under human control. We have a long way to go
before we can truly understand the safety of semi-autonomous vehicles.
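The base-rate problem described above can be made concrete with made-up numbers: give two fleets identical per-road-type safety, and the fleet whose miles are all freeway miles still posts a lower headline accident rate. A toy sketch (the rates are invented; only the structure of the comparison matters):

```python
# Made-up illustrative rates (accidents per million miles).
FREEWAY_RATE = 0.5   # freeways are inherently safer per mile
CITY_RATE = 2.0      # city/rural driving has more conflicts per mile

def rate(miles_by_road):
    """Overall accidents-per-million-miles for a given mileage mix."""
    rates = {"freeway": FREEWAY_RATE, "city": CITY_RATE}
    accidents = sum(m * rates[road] for road, m in miles_by_road.items())
    return accidents / sum(miles_by_road.values())

# Fleet A ("Autopilot-style"): only engaged on freeways.
fleet_a = {"freeway": 10.0}               # millions of miles
# Fleet B (human drivers): a mix of road types.
fleet_b = {"freeway": 5.0, "city": 5.0}

# Same underlying per-road safety, yet fleet A's headline number is
# lower simply because all of its miles are freeway miles.
print(rate(fleet_a))  # 0.5
print(rate(fleet_b))  # 1.25
```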

~~~
Faark
I'd even expect their human-driven statistics aren't actually representative
or useful for comparison either. Their cars are expensive, which mostly
excludes young and thus often inexperienced drivers. They also don't
particularly target older people, the other group with high accident rates.

Controlling for all those variables is hard and not to their benefit, so I'd
never expect their PR team to do it. But I wonder if that bias might even
creep into their automated driving stats... e.g. via drivers taking control in
dangerous situations.

------
azhenley
The hardware upgrade will be free to existing Autopilot owners:
[https://electrek.co/2018/08/08/tesla-autopilot-hardware-upgrade-free-with-full-self-driving-package/](https://electrek.co/2018/08/08/tesla-autopilot-hardware-upgrade-free-with-full-self-driving-package/)

~~~
ashleyn
Still makes me scratch my head when the car was billed as having "full self
driving hardware" at the point of sale. This is the second _hardware_ upgrade
they've done already, although if it's free I suppose nobody but investors are
complaining.

~~~
azhenley
It certainly isn't fully self-driving, but I must say that it does remarkably
well on interstates. Still isn't worth the additional 5-6k for me yet.

~~~
nolok
> I must say that it does remarkably well on interstates

... until it mis-views an exit lane and attempts to crash/kill you, that is
(see the events of 2018).

That is kind of a deal breaker for me; when talking about a 1+ ton metal box
moving me around, I will take "slow to market but not blind" instead.

I'm still waiting for Tesla to accept reality and put in lidar (make the
option more expensive if you have to, and you probably do, whatever).

~~~
AnthonyMouse
> That is kind of deal breaker for me, when talking about 1+ ton metal box
> moving me around I will take "slow to market but not blind" instead.

The way Elon Musk tells it, they put it into production once it was safer per
mile than human drivers. Which seems like a legitimate point.

You use something when it's better than what you would have to use in the
alternative, not when it's 100% perfect with no possibility of ever making a
mistake under any circumstances. Which is probably not even actually possible
to do.

The problem is that if human drivers kill more than 30,000 people in a year,
it's not news, because it's been that way forever. But one autonomous car
kills one person and it's the top story.

~~~
Splines
> _they put it into production once it was safer per mile than human drivers_

The thing is that I can take precautions to increase my safety margin - don't
drive impaired, drive in good weather conditions, get lots of sleep, don't be
aggressive, don't follow closely, etc. etc. etc.

I don't have a Tesla, but I imagine there's nothing I can do to voluntarily
make it safer. It functions normally until it doesn't.

~~~
zarkov99
You can pay attention and keep your hands on the wheel. From what I know all
the people who had accidents failed to do this. The challenge seems to be how
to overcome the false sense of security drivers get, precisely because the
thing works so well most of the time.

~~~
darkpuma
> _" You can pay attention and keep your hands on the wheel."_

In that sense these Level 2 systems are a lot like q-tips. The box says you
shouldn't stick them in your ear, but everybody knows that's what you buy them
for. If everybody were to follow the directions, the consumer desirability of
the product would plummet.

Even though you're not supposed to take your hands off the wheel, level 2
systems are often referred to as "hands off". And during a _60 Minutes_
interview Elon Musk is seen taking his hands off the wheel. Unofficially the
point of Level 2 automation is that you can take your hands off the wheel.
Officially, your hands must remain on the wheel. The unofficial usecase is
what actually gets consumers enthusiastic, but the official usecase is what
companies like Tesla cover their ass with.

They're having their cake and eating it too.

~~~
zozbot123
The actual point of a Level-2 system (other than as a safety measure, e.g. wrt
emergency braking) is that it provides driver assistance for things _other_
than having to pay attention to the road & surroundings at all times - and
there are _plenty_ of such things that can meaningfully impact driver's
comfort. That's all there is to it. Don't think "Autopilot"; think "Fly-by-
wire".

~~~
darkpuma
Yeah that's nice and all, yet when you have Elon Musk promoting his cars by
taking his hands off the steering wheel, it's pretty clear that Tesla's Level
2 system is being sold to consumers as "hands off".

~~~
zozbot123
Why are people assuming in this thread that Tesla is the only auto maker
that's providing Level-2 driver assistance? This couldn't be farther from the
truth.

~~~
darkpuma
The topic of this thread is Tesla, specifically Tesla's implementation of
automation features. That is why I am talking about Tesla's promotion of their
Level 2 system.

------
danso
IIRC, California posts the annual disengagement reports at the end of January.
It’ll be interesting to see if Tesla filed one for 2018. They didn’t report
any miles driven in 2017. In 2016, they reported a small number of miles,
maybe the number needed to record their FSD demo that they have on the
Autopilot page.

[https://www.dmv.ca.gov/portal/dmv/detail/vr/autonomous/testing](https://www.dmv.ca.gov/portal/dmv/detail/vr/autonomous/testing)

~~~
Joky
The problem is that with an L2 system like the current Autopilot, where you
have to take over when you exit a highway and encounter, for example, a
traffic light, you effectively trigger a "disengagement" while operating
within the expected flow of the system. It means that the "disengagement"
metric gets very noisy (I'd say useless) in this case.

~~~
danso
I don't mean that Tesla will file an anomalous number of disengagements; I
mean that they will file a report _at all_, which they didn't do in 2017,
presumably because they stopped autonomous driving on real roads. These
reports don't cover systems like the current Autopilot, as it is not
considered autonomous driving.

------
ModernMech
> with all vehicle sensors (radar, ultrasonics, cameras) staying the same.

Still no LIDAR? LIDAR was _the_ enabling technology in the DARPA challenges,
and it remains so today. I can think of one Tesla customer who would still
have a head if his "self driving" car was equipped with LIDAR.

~~~
dominotw
I thought LIDAR has too high a cost barrier to be in consumer vehicles.

~~~
Alupis
> I thought LIDAR has too high a cost barrier to be in consumer vehicles.

Ok, so you save a few bucks, kill a few people... then eventually come to the
same conclusion every other autonomous-driving company already has... you need
LIDAR for this to work reliably?

~~~
gpm
Or, you know, you get it working without LIDAR and make boatloads of money.

Humans don't rely on any sort of LIDAR to drive, I don't see how you could be
so certain that computers need to.

~~~
Alupis
> I don't see how you could be so certain that computers need to

We're nowhere close to emulating what goes on in a human brain... neural
networks are an absurdly simplified approximation of simple life-forms... it
took the K supercomputer (having 705,024 cores and 1.4 Petabytes of RAM) 40
minutes to emulate 1 second of 1 percent of human brain activity.[1]

Since we cannot come close to sensory input processing a human can do... we
need to augment our systems with better sensors. LIDAR is a better sensor, and
can help "see" things that would otherwise go missed.

> you get it working without LIDAR and make boatloads of money

And why can't you make "boatloads of money" while using LIDAR? Killing people
is bad for business... no?

Before you jump on it: no, LIDAR is no silver bullet... but it's available
technology that can help quite a bit. Why not use it? Unless you've
overpromised what your price point will be for technology that doesn't yet
exist?

[1]
[https://www.telegraph.co.uk/technology/10567942/Supercomputer-models-one-second-of-human-brain-activity.html](https://www.telegraph.co.uk/technology/10567942/Supercomputer-models-one-second-of-human-brain-activity.html)

~~~
darkpuma
> _" Before you jump on it, no LIDAR is no silver bullet... but it's available
> technology that can help quite a bit. Why not use it?"_

Because Elon wanted to advertise his cars as having all the hardware
necessary, because the automotive automation hype is very real. He wanted to
sell luxury cars to tech enthused idealists, but he didn't want to cut down on
his profit margins by actually delivering what he was selling.

The fact that he always admitted the software was lacking gave him plenty of
cover. It allows his customers to continue to believe, even though their car
can't actually drive itself. That's the beauty of selling hopes and dreams
instead of anything concrete.

------
bobowzki
Seems like these operations are common to all kinds of convolution with a
filter kernel...
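Indeed, the multiply-accumulate operations such accelerators are built around are exactly what a convolution with a filter kernel reduces to. A bare-bones sketch (NN-style, i.e. without the kernel flip of textbook convolution):

```python
def conv2d_valid(image, kernel):
    """'Valid' 2D convolution: slide the kernel over the image and
    multiply-accumulate at each position -- the core operation an NN
    accelerator spends most of its cycles on."""
    ih, iw = len(image), len(image[0])
    kh, kw = len(kernel), len(kernel[0])
    out = []
    for y in range(ih - kh + 1):
        row = []
        for x in range(iw - kw + 1):
            acc = 0  # one accumulator per output pixel
            for ky in range(kh):
                for kx in range(kw):
                    acc += image[y + ky][x + kx] * kernel[ky][kx]
            row.append(acc)
        out.append(row)
    return out

# A 3x3 Laplacian-ish kernel over a 4x4 image yields a 2x2 output,
# at the cost of 9 multiply-accumulates per output pixel.
image = [[1, 2, 3, 0],
         [4, 5, 6, 1],
         [7, 8, 9, 2],
         [0, 1, 2, 3]]
kernel = [[0, 1, 0],
          [1, -4, 1],
          [0, 1, 0]]
print(conv2d_valid(image, kernel))
```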

------
Giorgi
Wait, so this is not about the Jetson AGX Xavier but some unknown GPU
computing unit?

------
berkut
Interesting, so no more nVidia hardware...

~~~
Klathmon
Can you blame them? They were bitten once by relying too much on proprietary
tech licensed from Mobileye, who eventually pulled the plug. I can't imagine
they're in any rush to repeat that process.

Bringing as much of the software and architecture in-house as possible shields
them a bit more from that happening again.

I have no idea if that is the ultimate reason, or if it was even a significant
reason, but it sure can't hurt!

~~~
Nullabillity
It's amusing, given that they expect the opposite from their own customers.

~~~
Klathmon
can you explain a bit more what you mean?

~~~
Nullabillity
Intentionally sabotaging repairability, remotely bricking people's cars, etc.

------
sidcool
Is there an ELI5 for this? There are too many details I couldn't grasp.

