Nvidia's $1,100 AI brain for robots goes on sale (engadget.com)
144 points by elorant 62 days ago | 86 comments



I clicked through the various 'manage settings' dialogues starting at the 'before you continue...' splash screen and eventually found a list [0] of Oath partners who "participate and allow choice via the Interactive Advertising Bureau (IAB) Transparency and Consent Framework (TCF)". The list contains more than 200 different organisations.

I decided not to read the article.

[0] https://guce.oath.com/collectConsent/partners/vendors?sessio...


Do these companies not realize that this is simply in violation of the GDPR? They might as well ignore it entirely if they are just going to blatantly violate it anyway.

The GDPR is pretty clear that fully opting out should be as easy as opting in. "Going to a third-party site and managing your choices with 200 organizations" will not hold up in court.


Until the EU takes those companies to court and wins, nothing will change.


The EU doesn't need to "take those companies to court". An EU regulator just fines them. If the company disagrees, it may end up in court.


Sorry, I guess I meant the EU has to take action. I haven't yet heard of any action under the GDPR.


> Interactive Advertising Bureau

Why does this just sound terrifying to me?


Funny, because they're about the least scary entity in the digital advertising space.


sounds better than the alternative...


Personally, I would prefer to batch my advertising. Ideally while my eyes were not on the screen.


uBlock Origin & uMatrix allow you to read the site without seeing that popup or sending data to the advertisers. I realize that you shouldn't need to do that, but imo it's the reality we live in.


I run uBlock Origin & uMatrix in my browser of choice.

Because (reasons) I opened the article in Edge and was dropped into the reality that I ignore most of the time. I posted because I don't think it hurts sometimes to be reminded of reality.


The lack of uMatrix in Safari is my biggest gripe; otherwise it's a lot more pleasant than Chrome/FF.

Browsing on mobile is always the same trouble too; it's like entering a world where everything went wrong.


I find uMatrix pretty hard to configure so that it safeguards my privacy without breaking everything. I prefer to use private tabs for these kinds of situations, though I wish I understood better what I was doing when configuring uMatrix.


Whenever I get prompted by such an aggressive cookie wall I just open it in a private tab with uBlock Origin loaded as well.

I already use Firefox Focus as a standard browser on mobile and use private tabs on the desktop more and more as well. Whenever I want to reply or use a service that requires cookies (such as logging in on HN) I use the regular session.

Subpar but it works with minimal effort.


Actual link to the module, with tech specs: https://developer.nvidia.com/embedded/buy/jetson-agx-xavier

A good set of links to resources: https://elinux.org/Jetson_AGX_Xavier

Overall, it looks incredibly powerful for the form factor and power usage, with a ton of high-speed camera, display, and PCIe interfaces.

I don't see any mention of production lifetime guarantees; presumably that's a "please ask". Other SoM manufacturers promise a few years (up to 10), so you don't have to worry about redesigning your product every year. The Jetson module is designed to be fairly tightly integrated, so a swap-out would not be trivial: e.g. you need to design a heatsink system for it yourself, choosing between a fan or heat-piping it to the enclosure walls.


>You're not about to buy one yourself -- it costs $1,099 each in batches of 1,000 units

On the site it has: "Members of the NVIDIA Developer Program are eligible to receive their first kit at a special price of $1,299 (USD)" (https://developer.nvidia.com/buy-jetson?product=all&location...)

The specs seem quite impressive really.


Is there a devboard?




The development kit has been available for the last month. One of the problems no one talks about is that this platform (along with the TX2/TX1) runs arm64, which makes it a HUGE pain to get many libraries to work. I've been using these for a while, and consistently need to hunt down library source code and compile it for arm64, since most libraries are distributed without arm64 support. There are also plenty of device-specific closed source SDKs (such as for Point Grey Ladybug cameras) which just don't support arm64, so your only option is to attempt to write your own or pressure the manufacturer to publish an arm64 version. I do not recommend this platform for hobbyists for this reason: go buy a small x64 computer and spend 1/10th the time designing a better battery system.
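
For what it's worth, a minimal Python sketch of the kind of architecture check that ends up gating everything on these boards; this is just an illustration, not part of any SDK:

    import platform

    # Many prebuilt wheels and vendor SDKs only ship x86_64 binaries, so on a
    # Jetson (aarch64) you often end up building dependencies from source.
    arch = platform.machine()
    if arch == "aarch64":
        print("arm64 detected: expect to compile native dependencies yourself")
    else:
        print(arch + ": prebuilt binaries are usually available")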


If you're into this kind of thing, a little bit of compilation should not scare you. The norm in the embedded world is to bootstrap your toolchain first; the availability of a good compiler and libraries that are endian-clean is amazing progress.


I would argue that it should scare you. This is targeted towards people who are working in the AI world and are likely not embedded experts. Also, the bigger problem is closed source libraries and drivers that choose not to support arm64.


When building a product for embedded applications, closed source libraries and drivers that do not support your platform are no obstacle at all. You contact the vendor and make a deal. That's in their - and your - interest.

This is a totally different world than the open source world you are referencing; that embedded product will likely not be open source either. Commercial licensing is your only option in that case anyway, unless you are just looking for FOSS stuff with permissive licenses, but in that case those won't be closed source to begin with...

So the problem you perceive simply does not exist. The biggest questions will revolve around commercial viability, proof-of-concept and time to market. Rarely around such details as closed source libraries or drivers. Though, in case your supplier goes belly up those could become factors, but for that you have escrow agreements.


Respectfully, it is absolutely an obstacle. Not every manufacturer wants to play ball, and in some cases it requires much more investment than playing around with compiler settings. I would also argue that a large fraction of research teams/companies are just looking for a platform to prototype on rather than actually deploy services on tomorrow. Most applications of this processor are such low volume that it's not in most manufacturers' financial interest to care at this point.


Practically every chip vendor provides a free toolchain for their products. The only major exception I can think of is automotive parts, where the customer is always a multi-billion-dollar corporation.

arm64 is very common (Android!) and Xavier runs stock Ubuntu. If your camera manufacturer doesn't ship a driver for arm64, you should speak to them. It's extremely likely that they have one already.


Sorry, no. It's still a problem. You can say there are workarounds. Just because it's always been this way doesn't mean it can't be made better.


Could you give an example for when this has been a problem? I've done quite a few embedded projects and have not encountered any to date, though I'm not too old to learn.


Try that with Broadcom and get back to me.


Mostly a matter of MOQ (minimum order quantity). If you're a hobbyist that definitely isn't going to fly. If you need a few million chips then you're going to be able to make a deal.

The problem is that chips like that are about as hacker unfriendly as they could be. But in industry hardly any of that matters.

Being in the middle - hundreds to tens of thousands of units to ship - is the toughest place of all. No vendors will talk to you and you're going to be out of options if someone decides to EOL that chip your design depends on.

In that case I would advise to only use open hardware and to take the associated performance, size and power requirements hit. At least your product will live.


> I do not recommend this platform for hobbyists for this reason

There's also the fact that the article suggests that they come in batches of 1000 at $1100 each.


You can buy the development kit by itself. The article is referring to the modules themselves, which are the ones sold at $1,100 each in batches of 1,000.


Impressive compute in a small form factor, running on 10 watts. Also interesting that they're going after a non-consumer market, although I think the chip would be a good fit in a handheld gaming device that supported some inputs from watching the player and had the power for very interesting/fun 'game AI.'


Not sure a 10-watt CPU can work in any handheld gaming device:

A typical 3,000 mAh cell phone battery (roughly 11 Wh at a 3.7 V nominal cell voltage) would last only about an hour powering a 10-watt CPU alone; adding the display, WiFi, DDR, etc. would shrink that further.

For automobile-connected or wall-powered devices (robots), 10 watts for the CPU/GPU is fine.
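
A quick back-of-the-envelope check of that battery math, as a minimal sketch (the 3.7 V nominal cell voltage is my assumption; real packs and conversion losses will vary):

    # Rough battery-life estimate for a 10 W load on a phone-sized battery.
    capacity_mah = 3000
    nominal_voltage_v = 3.7            # assumed typical Li-ion nominal voltage
    energy_wh = capacity_mah / 1000 * nominal_voltage_v   # ~11.1 Wh

    load_w = 10
    runtime_min = energy_wh / load_w * 60                 # ~67 minutes

    print("%.1f Wh -> about %.0f minutes at %d W" % (energy_wh, runtime_min, load_w))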


The Nintendo Switch uses the Tegra X1, which fits in a traditional power profile of ~10W, but it is somewhat downclocked (to about half the out-of-the-box specs) to drive that power usage down further, helping achieve its current battery life. So a 10W part can absolutely work if you scale down the clocks a little bit. The Switch uses a 4310 mAh battery, for reference.

In contrast, the TX2 in "MaxQ" mode (energy efficiency savings mode) achieves close to equivalent performance on many benchmarks to the TX1 at half the power budget, so it has 2x the energy efficiency overall.

The real question is where Xavier lands on the power/efficiency curve here, but I'm betting it would be pretty good, and there's nothing to necessarily disqualify a downclocked part here. I think a custom version of Xavier could make a good gaming part, if it weren't for the outrageous cost.



I'm not sure a JITing ARM->VLIW core like that is the best choice for a game console. It'd probably lead to all sorts of weird perf inconsistencies that are hard to track down.


Haven't tried Xavier yet, but I am using the TX2 in my work (the previous generation of this type of device), and the CPU is too weak to allow any serious gaming.


Anecdotally, a friend has been using TX2s and got a Xavier to test out. He was blown away by the performance delta. It’s got an octocore ARM for a CPU, and while I don’t recall what the TX2 has... that’s a lot of CPU cores to work with.

I’ve got a Xavier sitting on my desk too, but haven’t played with it much. Running OpenCV on it and doing some light live video processing was really smooth.


For games you usually want a smaller number of highly performant cores rather than many less performant ones. I haven't done much game programming in recent years, but I still remember the terror of programming the PS3's Cell CPU.


Were the CPU cores of the Cell (the PPE) hard to program? I had the impression that the difficulty of the Cell was in the need to manage the 8 SPEs, not in writing software for the PowerPC core.

The 8 core Carmel CPU in Xavier is not like the SPEs in Cell.


It was hard to utilise all those cores. I am sure game architectures evolved during the years, but back then we didn't know how to split the code to optimally utilise all available resources.


Do you consider the Switch "serious gaming" or do you mean "serious graphics"? I don't know what "serious gaming" means and I doubt it means anything really, but in terms of raw performance: the TX2 default config will be somewhere around 4x more powerful than the (default downclocked) Tegra X1 in the Switch, everything from the CPU to the GPU is far more powerful by every metric, and it has better power consumption at basically every point on that curve.

"Weak CPU cores" does not mean anything, you have to couple it with a metric or comparison -- the Switch has a locked, 1Ghz quad core A57+A53, and it does just fine for a huge amount of games...


Maybe this should be an 'ask HN' thread. About 10 years ago I considered taking up robotics as a hobby, and thought better of it upon asking a robotics professor I did deadlifts with in the gym. My goal was an autonomous robot which could fetch me arbitrary things from a refrigerator with minimal trickery (aka radio tags on beer cans, magnetic tape on the floor, etc). Seemed impossible at the time, or at least a pretty serious Ph.D. thesis type of effort.

Is there some list of 'open problems in robotics' by which I could inform myself if this is still an insane goal?


That goal has gotten easier in the last 10 years, mainly due to machine learning in the vision system. You can reliably train a neural net to find things in a fridge from a depth camera.

The rest of the problem: navigation, motion planning, etc. hasn't changed that much, but is definitely possible on an amateur budget.

The problem with a list of "open problems in robotics" is that just about everything people have come up with has been demonstrated in a lab somewhere. Walking, grasping, manipulation, navigation, swarming, etc. But nobody has managed to combine all those capabilities in a single robot. So the remaining open problem is to solve all the individual problems with one piece of hardware and software.
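
On the vision piece mentioned above: a minimal sketch of what running a trained detector over a fridge snapshot can look like with OpenCV's DNN module. The model file and class list here are hypothetical placeholders, not a real released model:

    import cv2

    # Hypothetical model trained to recognise items on fridge shelves; the
    # ONNX file and class names are placeholders for illustration only.
    net = cv2.dnn.readNetFromONNX("fridge_items.onnx")
    classes = ["beer_can", "milk_carton", "leftovers"]

    frame = cv2.imread("fridge_snapshot.jpg")   # e.g. the RGB image from a depth camera
    blob = cv2.dnn.blobFromImage(frame, scalefactor=1 / 255.0, size=(320, 320))
    net.setInput(blob)
    scores = net.forward().flatten()            # assume one score per class

    print("Most confident class:", classes[int(scores.argmax())])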


Absolutely not an insane goal. If you want to see progress in picking items, a good starting point with a bunch of resources is the Amazon Picking Challenge.

In terms of navigation, magnetic line following is trivial and more complex navigation really isn't that hard these days. Just look at how any of the modern robovacs navigate.

If you want to generally see where things are, I'd recommend checking out some papers or even some writeups from the last ICRA.

EDIT: Just looked at the Amazon picking challenge for the first time in quite a while and it's not as impressive as I remember it to be.


Depending on your definition of trickery, you could probably do it right now with a vending machine fridge and a conveyor belt.


Yeah, when I was originally thinking of this, the conclusion I came to was that this would be a more honest version of the available 'robotics' solutions.


Not that hard with some fiducial tags. Here's my (hobbyist) effort, which cost a bit more than $1k: https://youtu.be/9_9aARNxSuE
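
For anyone curious about the fiducial-tag approach, here's a minimal sketch using OpenCV's ArUco module (needs the opencv-contrib build; the dictionary and camera index are arbitrary choices for illustration, not taken from the video above):

    import cv2
    from cv2 import aruco

    # Detect ArUco fiducial tags in a single frame from the first camera.
    dictionary = aruco.Dictionary_get(aruco.DICT_4X4_50)
    cap = cv2.VideoCapture(0)

    ok, frame = cap.read()
    if ok:
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        corners, ids, _ = aruco.detectMarkers(gray, dictionary)
        if ids is not None:
            print("Detected tag IDs:", ids.flatten().tolist())
    cap.release()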


Thanks very much for posting that. Looks about what I had in mind (though I have machine tools, so I'm hoping I can make fancier hardware).


It is an open problem still. I'm unsure of a list though.

There have been some great advancements in grasping and embodied decision making lately though, so it could fall soon.


It's interesting how each major iteration of Nvidia's embedded boards keeps increasing in price by a significant amount. The TK1 was $199, the TX1 was (at release) $599, and now the Xavier is $2,500/$1,299 (with rebate). The TX2 was priced identically to the TX1 at release but was an incremental update.

With the TK1 being EOL, it seems there is no longer an embedded SBC in the $100-$200 price range that has comparable GPU performance, despite the TK1 being over 4 years old.


This looks really compelling for cases where a robot isn't big or stationary enough to just use an industrial PC. I'm really looking forward to seeing how Nvidia's newest iteration on Transmeta's core does in benchmarks. From the WikiChip SPEC results [1] and quick Phoronix tests [2] it doesn't seem too far off from an Intel chip clocked down to a similar speed. The whole approach of JITing from x86 or ARM instructions to an exposed-pipeline VLIW design is just a really interesting one. For the last generation, which was used in the Nexus 9, it did very well in areas that VLIWs are traditionally good at, like audio processing, and was sort of mediocre in areas where VLIW tends to be bad. A JIT running underneath the OS has the freedom, in theory, to add things like memory speculation across library calls, beyond what an OoO processor could do. But the software to do that is, of course, really hard to write. I hope it's improved in the years since the Nexus 9 came out.

[1] https://en.wikichip.org/wiki/nvidia/microarchitectures/carme...

[2] https://www.phoronix.com/scan.php?page=article&item=nvidia-c...


Also, the GPU is usually lacking in a typical industrial PC. I am using the TX2 for this exact reason: small form factor and good performance for GPU-enabled code (running OpenCV and ML models). Plus you can easily add your own hardware, and it acts as kind of a Raspberry Pi on steroids.


I think most consumer robots will be driven by centralized computing power. There's probably no need for the brain to be on the robot, just a good wifi connection.

EDIT: That is of course for robots that won't need to leave the house. Then again, I can't imagine the future won't have global high bandwidth cellular coverage with at least 5 9's availability.


So long as there’s enough low latency bandwidth.


I bought a TX2 recently at a discount and it was extremely fun to use. I would love to use a Xavier but it is a bit out of my price range, so I guess they priced it solely for industry. It's still amazing to see something like this priced at only $1K (relatively speaking, this stuff was always expensive). I highly recommend buying a TX2 if you want to dabble in embedded machine learning. Shameless plug: if you own a TX2, I recently designed a case for it: https://www.powu3.com/cad/tx2/



How does it compare to e.g. an Intel Movidius neural compute stick?


That's an apples and oranges comparison: the Xavier is an entire computer, the Movidius is just a single accelerator chip.


Well, nowadays you can buy an entire computer for a few dollars (e.g. in the form of a small PCB with an ARM processor), so I think the comparison is valid.


Movidius targets sub-1w. I haven't read the article but I assume this needs at least an order of magnitude more power.


Movidius Myriad chips have a LEON RISC CPU on-die.


Too bad this comes from the worst company for open source. I wish something other than CUDA and Nvidia dominated the modern AI industry.


"The worst company for open source" is a bold claim when there are thousands (millions?) of companies that don't open source anything at all.

Last time I checked, Nvidia has quite a bit of open source software on GitHub.

Open sourcing something that you have developed and paid for(!) should always be at the discretion of those who did so.


It is fine if a company doesn't open source its products. It is not fine when it actively resists attempts to write open source software for its products, like the nouveau drivers for example.


How does it actively resist?

Did it send out C&D orders?

How can it actively resist when it has released at least some useful pieces of information?


They reluctantly provide signed firmware images, very often with significant delay.


Even if they didn't provide signed firmware, they wouldn't be actively resisting anything. Actively resisting would require them to take actions to block developers from creating an open source driver.

But as you admit: not only did they not actively resist, they actively helped out by providing something that they were under no obligation to provide.

I don't think you have a clear understanding of what "the worst" and "actively" means.

It's fine to be disappointed with a company deciding not to support open source in a way you'd like. But let's not go overboard with hyperbole? Have you ever complained about Microsoft not open sourcing the Office suite? About Oracle not open sourcing their crown jewels? About Apple not open sourcing iOS? About Broadcom not open sourcing the RPi drivers? And a million other companies, large and small, not open sourcing their money making products?


I wish the economics of the present were somewhat different, and that money didn't exist in the 21st century.

And yet, in the world we live in, I have a hard time faulting a corporation for not giving away their core products for free.


Well, for example, many other corporations have a friendlier stance to open source. It is not only about money and profits.


Nvidia, the worst company for open source, has 127 open source repositories on GitHub.


Yeah, a bunch of little libraries and obscure experiments, while people want drivers. Repository counts don't mean anything. You need context.

So just look at their competitors. AMD and Intel both have many dedicated employees directly committing to Mesa. There are two open source implementations of Vulkan for Radeon GPUs, ffs. AMD is working on Radeon Open Compute to get all the code written against CUDA to work anywhere. There is no proprietary Linux driver for Intel GPUs. BTW even Broadcom and Qualcomm are supporting Mesa now. Meanwhile Nvidia, uh... was interested in nouveau on Tegra a little bit but is completely against nouveau on desktop.


> ... while people want drivers

And Nvidia has decided that it's not in their best interest to give away that IP for free. If they believe that their driver has secret sauce that gives them a competitive advantage, then that's entirely their prerogative.

> AMD is working on Radeon Open Compute to get all the code written against CUDA to work anywhere.

If you were in a position where your proprietary software fueled 90%+ of a highly profitable industry, would you open source it just for the good of humanity?

Of course AMD is trying to copy that and open source it: they don't have 90%+ market share to lose. It doesn't cost them anything to do so.


Cue the argument that these "don't count" because they use CUDA.


[flagged]


Flagging your comment. See guidelines.

"Please don't impute astroturfing or shillage. That degrades discussion and is usually mistaken. If you're worried about it, email us and we'll look at the data. "


Obviously, I have a bias coming into this discussion (and so do you).

Mine comes from experience in a world of hardware and physical products that gives me a greater affinity for the business model of hardware companies than "open source friendly" advertising giants. I have a fascination with the war of words free software adherents wage against nvidia - the company could give out candy to children and it would get criticized because Linus gave them the finger.

I'll also admit that I am entertained by the no true scotsman squirming that occurs when I point at things that nvidia has done that don't support the story that it is the evil empire.


You're on the right side of this argument but that doesn't mean you shouldn't answer the question.


He's under no such obligation. Unless you have hard proof it would be nice if you did not contribute to attempts to label people here as shills. See HN guidelines.


Ha, fair enough. I would like to argue but the relevant comment has been flagged, so I don't have the material to make a case, and I gracefully concede this point.

However while we're being HN etiquette pedants, the original comment clearly breaches this guideline:

> Eschew flamebait. Don't introduce flamewar topics unless you have something genuinely new to say. Avoid unrelated controversies and generic tangents.

The original comment virtually embeds a confession that it is flame war material:

> Cue the argument that these "don't count" because they use CUDA.

I introduce this just to point out that a strict interpretation of the HN rules can stifle reasonable discussion.


Nvidia isn't just unfriendly towards open source; their embedded division is unfriendly to anyone that isn't a large corporation involved in AI, self-driving cars, and similar applications. Good luck if you try to develop something else on the Jetson as a small-scale company and have to deal with device-specific bugs and driver issues.


Anyone registered in the Nvidia developer program can file bugs. Did you file yours and get ignored or something?


Does it have onboard memory? I feel like calling it a system-on-chip kind of implies it, but I didn't see anything about it.


"SoC" is pretty orthogonal to having memory, but system on modules almost always do. This one has 16GB.


Could someone ELI5 this? I assume it runs code on the GPU, so do you need to know some special programming language?


It's a Linux board with an ARM processor and a 30W Volta GPU. You connect one or more cameras to it, and develop GPU-accelerated computer vision apps using the supplied SDK (CUDA, cuDNN, TensorRT, OpenCV, etc.): https://developer.nvidia.com/embedded/jetpack

You can also install TensorFlow on it.
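
As a rough illustration of what a first smoke test on the board might look like (a minimal sketch; camera index 0 is an assumption, and the GPU check uses the TensorFlow 1.x API):

    import cv2
    import tensorflow as tf

    # Minimal smoke test: confirm TensorFlow sees the GPU and grab one frame
    # from the first attached camera. Camera index 0 is an assumption.
    print("GPU available:", tf.test.is_gpu_available())  # TensorFlow 1.x API

    cap = cv2.VideoCapture(0)
    ok, frame = cap.read()
    if ok:
        print("Captured frame with shape:", frame.shape)
    cap.release()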


They should task this chip with proofreading that article.



