
Tesla is working with AMD to develop its own A.I. chip for self-driving cars - adventured
https://www.cnbc.com/2017/09/20/tesla-building-an-ai-chip-for-its-cars-with-amd-globalfoundries.html
======
slg
It is interesting that Tesla still maintains that cars sold today will
eventually be self-driving. That implies the remaining problems are mostly
software related. Meanwhile, they are still heavily investing in developing
new replacement hardware. That continued investment is obviously the right
move from a long-term business perspective. However, if I were a customer it
would have me questioning the capability of, and their commitment to, the
current hardware.

~~~
jimrandomh
If self-driving turns out to need a new CPU/GPU/XYZPU to run the software,
that's a pretty easy retrofit: take out one board and replace it with a new
board of the same physical size. It's the sensors that can't be replaced,
because they involve mounting and wiring in a lot of different places.

~~~
robotresearcher
Assuming power draw, cooling and connectivity are the same. That's a big
assumption.

~~~
mastazi
That's a good point, but given the current general trend I would expect a
newer-gen board to be more power efficient. Of course, that's just
speculation, since I don't have insider knowledge about this specific project.

~~~
felippee
Power per operation is certainly decreasing, though not as rapidly as in the
old days. However, it is not at all clear how much compute power is
necessary for a sufficient level of cognition to allow L4 autonomy.
Experimental cars (such as Waymo's or Uber's) have literally a data center in
their trunk that draws many kilowatts of power. Whether Tesla can do it in
at most a few hundred watts without LIDAR is _highly_ questionable.

~~~
robotresearcher
Do you have a source for that?

I've seen the trunks of working research cars with fewer resources than a
high-end gaming PC.

Multiple kW is more than normal car air conditioning can deal with, since a
resting human outputs only about 100 W. Do these cars vent to the outside?
Otherwise, if outputting multiple kW, your California-based research car is
gonna get nasty very quickly.
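The back-of-the-envelope comparison above can be sketched as follows (the A/C capacity and compute-load figures are illustrative assumptions, not measurements from any actual research vehicle):

```python
# Rough cabin heat-load comparison (illustrative figures, not measurements).
HUMAN_RESTING_W = 100      # approximate heat output of a resting human
AC_CAPACITY_W = 2000       # assumed cooling capacity of typical car A/C
compute_load_w = 3000      # hypothetical "multiple kW" compute rig

# How many extra resting passengers the rig's heat is equivalent to:
passenger_equivalent = compute_load_w / HUMAN_RESTING_W
print(passenger_equivalent)  # 30.0

# If the rig dumps its heat into the cabin, can the assumed A/C keep up?
needs_external_venting = compute_load_w > AC_CAPACITY_W
print(needs_external_venting)  # True
```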

~~~
felippee
My source is a chat with several engineers at companies I cannot disclose. I
also used to work for a company that develops autonomous navigation
solutions. If you have on the order of 10 LIDARs on your vehicle and the same
number of cameras and other sensors, I can assure you a single Drive PX will
not be enough for even basic processing of that data.

Also note that a high-end gaming computer can easily exceed a peak usage of
1 kW, and if it is packed with, say, four GPUs it very quickly approaches
2 kW. The computers have custom cooling loops and vent outside; exact
solutions vary between companies.
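As a rough sanity check on those numbers, here is a hypothetical peak power budget for a four-GPU rig (all TDP values are assumptions for illustration, not specs of any particular system):

```python
# Hypothetical peak power budget for a four-GPU research rig.
gpu_peak_w = 300     # assumed per-GPU peak draw
cpu_peak_w = 150     # assumed CPU peak draw
other_w = 250        # assumed drives, RAM, fans, PSU losses, sensor I/O

total_w = 4 * gpu_peak_w + cpu_peak_w + other_w
print(total_w)  # 1600 -> well past 1 kW, approaching the 2 kW ballpark
```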

And even with all that compute, nobody out there is anywhere close to level
4, let alone level 5. And Tesla promises to solve level 5 with a Drive PX, a
few cameras, and a radar (not a single LIDAR), plus magical software. I'd
really like to see this :)

------
faragon
So after Tesla hired Jim Keller, a brilliant engineer, from Apple (DEC Alpha
21164/21264, AMD K7/x86-64/HyperTransport/K8/K12/Zen, Apple A4/A5), the
cards are on the table: they will play hard to win the AI race in cars.

~~~
senatorobama
How much can a great CPU designer help a fundamentally non-deterministic AI
process?

~~~
zamalek
Assuming that they want to print TPUs:

Optimistically, Tesla could run more elaborate networks. AI being AI, that
rarely equates to better decision-making; still, it's an option for the rare
cases where it helps.

Pragmatically, there may be no benefit to the quality of the decisions that
the AI makes. It will simply drain less energy and make those decisions
faster. Tesla could use the chip themselves to save time and energy on
training the AI.

Both scenarios depend quite strongly on the quality of the CPU/TPU and don't
strongly depend on determinism.

------
firefoxd
There was a correction:

> Correction: GlobalFoundries CEO Sanjay Jha mentioned Tesla as an example of
> a company working with chip fabricators, but did not specifically say that
> it was a GlobalFoundries customer.

------
deepGem
I'm surprised that they went with AMD and not an ARM vendor. ARM is pushing
into the ML space big time with its GPUs, even maintaining a Caffe fork,
CaffeOnACL, for ARM GPUs. These GPUs are built for mobility and energy
efficiency, so these processors are somewhat of a natural fit for TSLA.

[https://github.com/OAID/caffeOnACL](https://github.com/OAID/caffeOnACL)

------
brentis
Think this bodes well for AMD’s datacenter aspirations, with next-gen cloud
being heavily AI (and GPU) focused? Also, what about Musk’s Neuralink?
https://www.theverge.com/2017/3/27/15077864/elon-musk-neuralink-brain-computer-interface-ai-cyborgs

------
mikel205
Audi and Mercedes have been in partnership with NVIDIA for a year or so now;
interesting to see Tesla join up with AMD at this point.

------
synaesthesisx
Interesting considering all Teslas currently come with Nvidia cards (AFAIK)

------
amaks
Interesting parallel with Intel working with Waymo.

------
phkahler
Lisa Lisa Lisa....

