
Nvidia's $1,100 AI brain for robots goes on sale - elorant
https://www.engadget.com/2018/12/12/nvidia-jetson-agx-xavier-robot-processor-available/
======
KineticLensman
I clicked through the various 'manage settings' dialogues starting at the
'before you continue...' splashscreen and eventually found a list [0] of Oath
partners who "participate and allow choice via the Interactive Advertising
Bureau (IAB) Transparency and Consent Framework (TCF)". The list contains more
than 200 different organisations.

I decided not to read the article.

[0]
[https://guce.oath.com/collectConsent/partners/vendors?sessio...](https://guce.oath.com/collectConsent/partners/vendors?sessionId=3_cc-
session_2f4e2e70-b63f-4f48-a2a3-647120b9d27e&lang=en-GB)

~~~
dgzl
> Interactive Advertising Bureau

Why does this just sound terrifying to me?

~~~
eutectic
sounds better than the alternative...

~~~
ethbro
Personally, I would prefer to batch my advertising. Ideally while my eyes were
not on the screen.

------
snops
Actual link to the module, with tech specs:
[https://developer.nvidia.com/embedded/buy/jetson-agx-
xavier](https://developer.nvidia.com/embedded/buy/jetson-agx-xavier)

A good set of links to resources:
[https://elinux.org/Jetson_AGX_Xavier](https://elinux.org/Jetson_AGX_Xavier)

Overall, looks incredibly powerful for the form factor and power usage, with a
ton of high-speed camera, display, and PCIe interfaces.

I don't see any mention of production lifetime guarantees; presumably that's a
"please ask". Other SoM manufacturers promise a few years (up to 10), so you
don't have to worry about redesigning your product every year. The Jetson
module is designed to be fairly tightly integrated, so a swap-out would not be
trivial, e.g. you need to design a heatsink system for it yourself so you can
choose a fan or heat-pipe it to the enclosure walls.

------
tim333
>You're not about to buy one yourself -- it costs $1,099 each in batches of
1,000 units

On the site it has: "Members of the NVIDIA Developer Program are eligible to
receive their first kit at a special price of $1,299 (USD)"
([https://developer.nvidia.com/buy-
jetson?product=all&location...](https://developer.nvidia.com/buy-
jetson?product=all&location=GB))

The specs seem quite impressive really.

~~~
pj_mukh
Is there a devboard?

~~~
tim333
Kinda. This video shows what you get
[https://www.youtube.com/watch?v=XoWW5HiGHsg&feature=youtu.be](https://www.youtube.com/watch?v=XoWW5HiGHsg&feature=youtu.be)

------
jkravitz61
The development kit has been available for the last month. One of the problems
no one talks about is that this platform (along with the TX2/TX1) runs arm64,
which makes it a HUGE pain to get many libraries working. I’ve been using these
for a while, and consistently need to hunt down library source code and
compile it for arm64, since most libraries are distributed without arm64
support. There are also plenty of device-specific closed-source SDKs (such as
Point Grey Ladybug cameras) which just don’t support arm64, so your only option
is to attempt to write your own or pressure the manufacturer to publish an
arm64 version. I do not recommend this platform for hobbyists for this reason:
go buy a small x64 computer and spend 1/10th the time designing a better
battery system.
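To illustrate the kind of check this forces on you: a tiny sketch (the function is hypothetical, written for this example) that decides whether to expect prebuilt binaries or a from-source build when you land on a Jetson:

```python
import platform

# Architectures most projects actually publish prebuilt binaries for.
PREBUILT_ARCHES = {"x86_64", "amd64"}


def needs_source_build(machine=None):
    """Return True if we should expect to compile dependencies ourselves.

    On Jetson boards (TX1/TX2/Xavier), platform.machine() reports
    'aarch64', which many projects still don't ship packages for.
    """
    machine = (machine or platform.machine()).lower()
    return machine not in PREBUILT_ARCHES


if __name__ == "__main__":
    if needs_source_build():
        print("arm64 detected: plan on building libraries from source")
    else:
        print("x86: prebuilt packages should mostly just work")
```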

~~~
jacquesm
If you're into this kind of thing, a little bit of compilation should not scare
you. The norm in the embedded world is to bootstrap your toolchain first; the
availability of a good compiler and endian-clean libraries is amazing
progress.

~~~
jkravitz61
I would argue that it should scare you. This is targeted towards people who
are working in the AI world and are likely not embedded experts. Also, the
bigger problem is closed source libraries and drivers that choose not to
support arm64.

~~~
jacquesm
When building a product for embedded applications closed source libraries and
drivers that do not support your platform are no obstacle at all. You contact
the vendor and make a deal. That's in their - and your - interest.

This is a totally different world from the open source world you are
referencing; likely that embedded product will _also_ not be open source.
Commercial licensing is your only option in that case anyway, unless you are
just looking for FOSS stuff with permissive licenses, but in that case those
won't be closed source to begin with...

So the problem you perceive simply does not exist. The biggest questions will
revolve around commercial viability, proof-of-concept and time to market.
Rarely around such details as closed source libraries or drivers. Though if
your supplier goes belly up those could become factors, but for that you
have escrow agreements.

~~~
jkravitz61
Respectfully, it is absolutely an obstacle. Not every manufacturer wants to
play ball, and in some cases it requires much more investment than playing
around with compiler settings. I would also argue that a large fraction of
research teams/companies are just looking for a platform to prototype on
rather than to actually deploy services on tomorrow. Most applications of this
processor are such low volume that it’s not in most manufacturers' financial
interest to care at this point.

~~~
ianhowson
Practically every chip vendor provides a free toolchain for their products.
The only major exception I can think of is automotive parts, where the
customer is always a multi-billion-dollar corporation.

arm64 is very common (Android!) and Xavier runs stock Ubuntu. If your camera
manufacturer doesn't ship a driver for arm64, you should speak to them. It's
extremely likely that they have one already.

------
mark_l_watson
Impressive compute in a small form factor and running on 10 watts. Also
interesting that they're going after a non-consumer market, although I think
the chip would be a good fit in a handheld gaming device that supported some
inputs from watching the player and had the power for very interesting/fun
‘game AI.’

~~~
dejv
Haven't tried the Xavier yet, but I am using the TX2 in my work (the previous
generation of this type of device), and the CPU is too weak to allow any
serious gaming.

~~~
tonyarkles
Anecdotally, a friend has been using TX2s and got a Xavier to test out. He was
blown away by the performance delta. It’s got an octocore ARM for a CPU, and
while I don’t recall what the TX2 has... that’s a lot of CPU cores to work
with.

I’ve got a Xavier sitting on my desk too, but haven’t played with it much.
Running OpenCV on it and doing some light live video processing was really
smooth.

~~~
dejv
For games you usually want a smaller number of higher-performance cores rather
than many weaker ones. I haven't done much game programming in recent years,
but I still remember the terror of programming the PS3's Cell CPU.

~~~
twtw
Was the CPU core of Cell (the PPE) hard to program? I had the impression that
the difficulty of the Cell was in the need to manage the 8 SPEs, not in
writing software for the PowerPC core.

The 8-core Carmel CPU in Xavier is not like the SPEs in Cell.

~~~
dejv
It was hard to utilise all those cores. I am sure game architectures evolved
during the years, but back then we didn't know how to split the code to
optimally utilise all available resources.

------
scottlocklin
Maybe this should be an 'ask HN' thread. About 10 years ago I considered
taking up robotics as a hobby, and thought better of it upon asking a robotics
professor I did deadlifts with in the gym. My goal was an autonomous robot
which could fetch me arbitrary things from a refrigerator with minimal
trickery (aka radio tags on beer cans, magnetic tape on the floor, etc).
Seemed impossible at the time, or at least a pretty serious Ph.D. thesis type
of effort.

Is there some list of 'open problems in robotics' by which I could inform
myself if this is still an insane goal?

~~~
sjf
Depending on your definition of trickery, you could probably do it right now
with a vending machine fridge and a conveyor belt.

~~~
scottlocklin
Yeah, when I was originally thinking of this, the conclusion I came to was
that this would be a more honest version of the available 'robotics'
solutions.

------
joefourier
It's interesting how each major iteration of Nvidia's embedded boards keeps
increasing in price by a significant amount. The TK1 was $199, the TX1 was (at
release) $599, and now the Xavier is $2,500/$1,299 (with the developer
discount). The TX2 was priced identically to the TX1 at release but was an
incremental update.

With the TK1 being EOL, it seems there is no longer an embedded SBC in the
$100-$200 price range with comparable GPU performance, despite the TK1
being over 4 years old.

------
Symmetry
This look really compelling for cases where a robot isn't big or stationary
enough to just use an industrial PC. I'm really looking forward to seeing how
NVIdia's newest iteration on Transmeta's core does in benchmarks. From the
Wikichip Spec results[1] and quick Phoronix tests[2] it doesn't seem too far
off from an Intel chip clocked down to a similar speed. The whole approach of
JITing form x86 or ARM instructions to an exposed pipeline VLIW design is just
a really interesting one. For the last generation that was used in the Nexus 6
it did very well in areas that VLIWs are traditionally good at like audio
processing and did sort of mediocre in areas where VLIW tends to be bad. A JIT
running underneath the OS has the freedom, in theory, to add things like
memory speculation across library calls that an OoO processor could do. But
the software to do that is, of course, really hard to write. I hope it's
improved in the years since the Nexus 9 came out.

[1]
[https://en.wikichip.org/wiki/nvidia/microarchitectures/carme...](https://en.wikichip.org/wiki/nvidia/microarchitectures/carmel)

[2][https://www.phoronix.com/scan.php?page=article&item=nvidia-c...](https://www.phoronix.com/scan.php?page=article&item=nvidia-
carmel-quick&num=2)

~~~
dejv
Also the GPU is usually lacking in a typical industrial PC. I am using the TX2
for this exact reason: small form factor and good performance for GPU-enabled
code (running OpenCV and ML models). Plus you can easily add your own hardware,
and it acts as a kind of Raspberry Pi on steroids.

------
perpetualcrayon
I think most consumer robots will be driven by centralized computing power.
There's probably no need for the brain to be on the robot, just a good wifi
connection.

EDIT: That is of course for robots that won't need to leave the house. Then
again, I can't imagine the future won't have global high bandwidth cellular
coverage with at least 5 9's availability.

~~~
ianai
So long as there’s enough low latency bandwidth.
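Back-of-the-envelope (all numbers here are illustrative assumptions, not measurements): a control loop running at 50 Hz only has a 20 ms budget per cycle, so a wireless round trip can eat most of it. A toy sketch:

```python
def remote_brain_feasible(loop_hz, rtt_ms, inference_ms):
    """Rough check: does a network round trip plus remote inference fit
    inside one control-loop period? Ignores jitter, which in practice
    is the real killer on wireless links."""
    budget_ms = 1000.0 / loop_hz
    return rtt_ms + inference_ms <= budget_ms


# 50 Hz loop, 30 ms Wi-Fi round trip, 10 ms model inference
print(remote_brain_feasible(50, 30, 10))  # -> False (only a 20 ms budget)
# A slower 10 Hz loop with the same latencies
print(remote_brain_feasible(10, 30, 10))  # -> True (100 ms budget)
```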

------
syntaxing
I bought a TX2 recently at a discount and it was extremely fun to use. I would
love to use a Xavier, but it is a bit out of my price range, so I guess they
priced it solely for industry. It's still amazing to see something like this
priced at only $1K (relatively speaking, this stuff was always expensive). I
highly recommend buying a TX2 if you want to dabble in embedded machine
learning. Shameless plug: if you own a TX2, I recently designed a case
for it: [https://www.powu3.com/cad/tx2/](https://www.powu3.com/cad/tx2/)

------
agumonkey
1000 GBP at
[https://www.siliconhighwaydirect.co.uk/product-p/900-82888-0...](https://www.siliconhighwaydirect.co.uk/product-p/900-82888-0000-000.htm)

not bad

------
amelius
How does it compare to e.g. an Intel Movidius neural compute stick?

~~~
jahewson
That’s an apples-and-oranges comparison: the Xavier is an entire computer,
while the Movidius is just a single accelerator chip.

~~~
amelius
Well, nowadays you can buy an entire computer for a few dollars (e.g. in the
form of a small PCB containing an ARM processor), so I think the comparison
is valid.

~~~
mtgx
Movidius targets sub-1 W. I haven't read the article, but I assume this needs
at least an order of magnitude more power.

------
xvilka
Too bad this comes from the worst company for open source. I wish something
other than CUDA and NVIDIA dominated the modern AI industry.

~~~
twtw
I wish the economics of the present were somewhat different, and that money
didn't exist in the 21st century.

And yet, in the world we live in, I have a hard time faulting a corporation
for not giving away their core products for free.

~~~
xvilka
Well, for example, many other corporations have a friendlier stance to open
source. It is not only about money and profits.

~~~
TomVDB
Nvidia, the worst company for open source, has 127 open source repositories
on GitHub.

~~~
floatboth
Yeah, a bunch of little libraries and obscure experiments, while people want
_drivers_. Repository counts don't mean anything. You need context.

So just look at their competitors. AMD and Intel both have many dedicated
employees directly committing into Mesa. There are _two_ open source
implementations of Vulkan for Radeon GPUs, ffs. AMD is working on Radeon Open
Compute to get all the code written against CUDA to work anywhere. There is
_no_ proprietary Linux driver for Intel GPUs. BTW even Broadcom and Qualcomm
are supporting Mesa now. Nvidia, meanwhile, was a little bit interested in
nouveau on Tegra but is completely against nouveau on desktop.

~~~
TomVDB
> ... while people want drivers

And Nvidia has decided that it's not in their best interest to give away that
IP for free. If they believe that their driver has secret sauce that gives
them a competitive advantage, then that's entirely their prerogative.

> AMD is working on Radeon Open Compute to get all the code written against
> CUDA to work anywhere.

If you were in a position where your proprietary software fueled 90%+ of a
highly profitable industry, would you open source it just for the good of
humanity?

Of course AMD is trying to copy that and open source it: they don't have 90%+
market share to lose. It doesn't cost them anything to do so.

------
saosebastiao
Does it have onboard memory? I feel like calling it a system-on-chip kind of
implies it, but I didn't see anything about it.

~~~
monocasa
"SoC" is pretty orthogonal to having memory, but system on modules almost
always do. This one has 16GB.

------
techsin101
Could someone ELI5 this? I assume it runs code on the GPU, so do you need to
know some special programming language?

~~~
p1esk
It's a Linux board with an ARM processor and a 30W Volta GPU. You connect one
or more cameras to it, and develop GPU-accelerated computer vision apps using
the supplied SDK (CUDA, cuDNN, TensorRT, OpenCV, etc.):
[https://developer.nvidia.com/embedded/jetpack](https://developer.nvidia.com/embedded/jetpack)

You can also install Tensorflow on it.
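For a flavour of the glue code involved: before handing a camera frame to a detector, you typically letterbox-resize it to the network's input size. A minimal sketch in plain Python (the 300x300 input size is an assumption for the example, not anything the SDK mandates):

```python
def fit_frame(frame_w, frame_h, net_size=300):
    """Compute the scaled size that fits a camera frame into a square
    net_size x net_size model input while preserving aspect ratio
    (the usual 'letterbox' resize done before inference)."""
    scale = net_size / max(frame_w, frame_h)
    return round(frame_w * scale), round(frame_h * scale)


# e.g. fitting a 1920x1080 frame into a 300x300 detector input
print(fit_frame(1920, 1080))  # -> (300, 169)
```

In a real app you would do the actual resize with `cv2.resize` (or let TensorRT's preprocessing handle it) and pad the short side to the full 300x300.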

------
sandworm101
They should task this chip with proofreading that article.

