I decided not to read the article.
The GDPR is pretty clear that fully opting out should be as easy as opting in. "Going to a third-party site and managing your choices with 200 organizations" will not hold up in court.
Why does this just sound terrifying to me?
Because (reasons) I opened the article in Edge and was dropped into the reality that I ignore most of the time. I posted because I don't think it hurts sometimes to be reminded of reality.
Browsing on mobile is always the same trouble too: entering a world where everything has gone wrong.
I already use Firefox Focus as a standard browser on mobile and use private tabs on the desktop more and more as well. Whenever I want to reply or use a service that requires cookies (such as logging in on HN) I use the regular session.
Subpar but it works with minimal effort.
A good set of links to resources: https://elinux.org/Jetson_AGX_Xavier
Overall, it looks incredibly powerful for the form factor and power usage, with a ton of high-speed camera, display, and PCIe interfaces.
I don't see any mention of production-lifetime guarantees; presumably that's a "please ask". Other SoM manufacturers promise a few years (up to 10), so you don't have to worry about redesigning your product every year. The Jetson module is designed to be fairly tightly integrated, so a swap-out would not be trivial: for example, you have to design the heatsink system yourself, choosing between a fan or heat-piping to the enclosure walls.
On the site it has: "Members of the NVIDIA Developer Program are eligible to receive their first kit at a special price of $1,299 (USD)" (https://developer.nvidia.com/buy-jetson?product=all&location...)
The specs seem quite impressive really.
This is a totally different world from the open source world you are referencing; that embedded product will likely not be open source either. Commercial licensing is your only option in that case anyway, unless you are looking only for FOSS with permissive licenses, but those won't be closed source to begin with...
So the problem you perceive simply does not exist. The biggest questions will revolve around commercial viability, proof of concept, and time to market, and rarely around details such as closed source libraries or drivers. If your supplier goes belly up those could become factors, but for that you have escrow agreements.
arm64 is very common (Android!) and Xavier runs stock Ubuntu. If your camera manufacturer doesn't ship a driver for arm64, you should speak to them. It's extremely likely that they have one already.
The problem is that chips like that are about as hacker unfriendly as they could be. But in industry hardly any of that matters.
Being in the middle - hundreds to tens of thousands of units to ship - is the toughest place of all. No vendors will talk to you and you're going to be out of options if someone decides to EOL that chip your design depends on.
In that case I would advise using only open hardware and taking the associated performance, size, and power hit. At least your product will live.
There's also the fact that the article suggests that they come in batches of 1000 at $1100 each.
A typical 3000 mAh cell phone battery (roughly 11 Wh at a nominal 3.7 V) would last only about an hour at 10 watts for the CPU alone; adding the display, WiFi, DDR, etc. would cut that further.
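Back-of-envelope, assuming a nominal 3.7 V Li-ion cell (the numbers are illustrative):

    # assumes a nominal 3.7 V Li-ion cell; values are illustrative
    capacity_mah = 3000
    cell_voltage = 3.7                               # V, nominal
    energy_wh = capacity_mah / 1000 * cell_voltage   # ~11.1 Wh
    cpu_watts = 10
    print(energy_wh / cpu_watts * 60)                # ~67 minutes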
For automobile-connected or wall-powered devices (e.g., robots), 10 watts for the CPU/GPU is fine.
In contrast, the TX2 in "MaxQ" mode (its energy-efficiency mode) achieves close to TX1-equivalent performance on many benchmarks at half the power budget, so roughly 2x the energy efficiency overall.
The real question is where Xavier lands on the power/efficiency curve here, but I'm betting it would be pretty good, and there's nothing to necessarily disqualify a downclocked part here. I think a custom version of Xavier could make a good gaming part, if it weren't for the outrageous cost.
This has Tegra X1.
I've got a Xavier sitting on my desk too, though I haven't played with it much yet. Running OpenCV on it and doing some light live video processing was really smooth.
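In case it's useful, "light live video processing" here means roughly this minimal sketch (the camera index and Canny thresholds are arbitrary placeholders, nothing Jetson-specific):

    import cv2

    cap = cv2.VideoCapture(0)                    # first available camera
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        edges = cv2.Canny(gray, 100, 200)        # cheap per-frame processing
        cv2.imshow("edges", edges)
        if cv2.waitKey(1) & 0xFF == ord("q"):    # press q to quit
            break
    cap.release()
    cv2.destroyAllWindows()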
The 8-core Carmel CPU in Xavier is not like the SPEs in Cell.
"Weak CPU cores" does not mean anything, you have to couple it with a metric or comparison -- the Switch has a locked, 1Ghz quad core A57+A53, and it does just fine for a huge amount of games...
Is there some list of 'open problems in robotics' by which I could inform myself if this is still an insane goal?
The rest of the problem (navigation, motion planning, etc.) hasn't changed that much, but it is definitely possible on an amateur budget.
The problem with a list of "open problems in robotics" is that just about everything people have come up with has been demonstrated in a lab somewhere. Walking, grasping, manipulation, navigation, swarming, etc. But nobody has managed to combine all those capabilities in a single robot. So the remaining open problem is to solve all the individual problems with one piece of hardware and software.
In terms of navigation, magnetic line following is trivial and more complex navigation really isn't that hard these days. Just look at how any of the modern robovacs navigate.
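To illustrate just how trivial the basic case is, line following boils down to a proportional controller. A toy sketch, where read_line_offset() and set_motors() are hypothetical stand-ins for real sensor and motor drivers:

    KP = 0.8  # proportional gain; tune per robot

    def follow_line_step(read_line_offset, set_motors, base_speed=0.5):
        # read_line_offset() returns -1.0 (line far left) .. +1.0 (far right)
        error = read_line_offset()
        turn = KP * error
        # steer toward the line: speed up the wheel opposite the offset
        set_motors(base_speed + turn, base_speed - turn)  # (left, right)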
If you want to see where things generally are, I'd recommend checking out some papers, or even some writeups from the last ICRA.
EDIT: Just looked at the Amazon picking challenge for the first time in quite a while and it's not as impressive as I remember it to be.
There have been some great advancements in grasping and embodied decision making lately though, so it could fall soon.
With the TK1 being EOL, it seems there is no longer an embedded SBC in the $100-$200 price range with comparable GPU performance, despite the TK1 being over 4 years old.
EDIT: That is, of course, for robots that won't need to leave the house. Then again, I can't imagine the future won't have global high-bandwidth cellular coverage with at least five-nines availability.
Last time I checked, Nvidia has quite a bit of open source software on GitHub.
Open sourcing something that you have developed and paid for(!) should always be at the discretion of those who did so.
Did it send out C&D orders?
How can it actively resist when it has released at least some useful pieces of information?
But as you admit: not only did they not actively resist, they actively helped out by providing something that they were under no obligation to provide.
I don't think you have a clear understanding of what "the worst" and "actively" mean.
It's fine to be disappointed with a company deciding not to support open source in a way you'd like. But let's not go overboard with hyperbole? Have you ever complained about Microsoft not open sourcing the Office suite? About Oracle not open sourcing their crown jewels? About Apple not open sourcing iOS? About Broadcom not open sourcing the RPi drivers? And a million other companies, large and small, not open sourcing their money making products?
And yet, in the world we live in, I have a hard time faulting a corporation for not giving away their core products for free.
So just look at their competitors. AMD and Intel both have many dedicated employees committing directly to Mesa. There are two open source implementations of Vulkan for Radeon GPUs, ffs. AMD is working on Radeon Open Compute to get all the code written against CUDA to work anywhere. There is no proprietary Linux driver for Intel GPUs. BTW, even Broadcom and Qualcomm are supporting Mesa now. Nvidia, meanwhile, was a little bit interested in nouveau on Tegra but is completely against nouveau on desktop.
And Nvidia has decided that it's not in their best interest to give away that IP for free. If they believe that their driver has secret sauce that gives them a competitive advantage, then that's entirely their prerogative.
> AMD is working on Radeon Open Compute to get all the code written against CUDA to work anywhere.
If you were in a position where your proprietary software fueled 90%+ of a highly profitable industry, would you open source it just for the good of humanity?
Of course AMD is trying to copy that and open source it: they don't have 90%+ market share to lose. It doesn't cost them anything to do so.
"Please don't impute astroturfing or shillage. That degrades discussion and is usually mistaken. If you're worried about it, email us and we'll look at the data. "
Mine comes from experience in a world of hardware and physical products, which gives me a greater affinity for the business model of hardware companies than for "open source friendly" advertising giants. I have a fascination with the war of words free software adherents wage against nvidia: the company could give out candy to children and it would still get criticized, because Linus gave them the finger.
I'll also admit that I am entertained by the no true scotsman squirming that occurs when I point at things that nvidia has done that don't support the story that it is the evil empire.
However, while we're being HN etiquette pedants, the original comment clearly breaches this guideline:
> Eschew flamebait. Don't introduce flamewar topics unless you have something genuinely new to say. Avoid unrelated controversies and generic tangents.
The original comment virtually embeds a confession that it is flame-war material:
> Cue the argument that these "don't count" because they use CUDA.
I introduce this just to point out that a strict interpretation of the HN rules can stifle reasonable discussion.
You can also install TensorFlow on it.
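NVIDIA publishes Jetson-compatible wheels; once it's installed, a quick sanity check that the GPU is actually visible (using the TF 1.x-era API) is something like:

    import tensorflow as tf

    print(tf.test.is_gpu_available())   # True if a CUDA device is usable
    print(tf.test.gpu_device_name())    # e.g. "/device:GPU:0"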