
Open Hardware Pushes GPU Computing Envelope - rbanffy
https://www.nextplatform.com/2017/03/17/open-hardware-pushes-gpu-computing-envelope/
======
amelius
I'm wondering whether an application like deep learning actually requires a
fast interconnect between GPUs. Without that requirement, scaling up would be
much simpler, I suppose.
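
For a rough sense of scale: in synchronous data-parallel training, every GPU
has to exchange a full copy of the gradients each step, so the interconnect
sits on the critical path. A back-of-envelope sketch, where the model size,
GPU count, and link bandwidth are made-up example figures rather than
measurements:

    // Rough per-step gradient traffic for synchronous data-parallel
    // training with a ring all-reduce. All numbers are illustrative
    // assumptions, not benchmarks.
    #include <cstdio>

    int main() {
        const double params      = 100e6;  // assumed model size (~ResNet scale)
        const double bytes_per_p = 4.0;    // fp32 gradients
        const int    gpus        = 8;      // assumed GPUs per node
        const double link_bps    = 12e9;   // ~PCIe 3.0 x16 effective, bytes/s

        // A ring all-reduce moves about 2*(N-1)/N of the gradient buffer
        // through each GPU's link per step.
        double traffic = 2.0 * (gpus - 1) / gpus * params * bytes_per_p;
        double seconds = traffic / link_bps;

        printf("per-GPU traffic per step: %.0f MB\n", traffic / 1e6);
        printf("sync time at %.0f GB/s: %.1f ms\n", link_bps / 1e9, seconds * 1e3);
        return 0;
    }

With those assumptions the synchronization alone costs tens of milliseconds
per step, which can be comparable to the compute time of a step, so past a
handful of GPUs you either need NVLink-class links or have to overlap the
communication with compute.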

~~~
CoolGuySteve
I think this is an area where Intel is sort of mucking things up for everyone.

Because of licensing issues, we'll never see a modern version of the
nForce/Xbox design from Intel/NVidia, where the CPU and GPU share relatively
fast RAM through a common MMU.

The best we can hope for in future shared-RAM designs is Zen/Radeon (or a new
HyperTransport), Intel/Knights Landing, or an ARM/NVidia solution.
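
For what it's worth, the shared-address-space half of that already exists in
software: CUDA unified memory lets the CPU and GPU touch the same allocation
without explicit copies; it's just that pages still migrate over PCIe instead
of sitting in one fast pool, which is exactly the gap a true shared-RAM design
would close. A minimal sketch using only standard CUDA runtime calls:

    // Minimal CUDA unified-memory example: CPU and GPU use one pointer.
    // The shared address space is real; bandwidth is still bounded by
    // PCIe page migration.
    #include <cstdio>
    #include <cuda_runtime.h>

    __global__ void scale(float *x, int n, float a) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n) x[i] *= a;
    }

    int main() {
        const int n = 1 << 20;
        float *x = nullptr;
        cudaMallocManaged(&x, n * sizeof(float));    // visible to CPU and GPU

        for (int i = 0; i < n; ++i) x[i] = 1.0f;     // CPU writes directly
        scale<<<(n + 255) / 256, 256>>>(x, n, 2.0f); // GPU uses same pointer
        cudaDeviceSynchronize();

        printf("x[0] = %f\n", x[0]);                 // CPU reads result, no memcpy
        cudaFree(x);
        return 0;
    }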

But I'm surprised NVidia doesn't make something like a Tegra on steroids for
this application. Basically an ARM running its own Linux, a 10GigE ethernet
port, and a Titan/1080Ti, all in a single blade/PCI-E card. I guess the market
demand isn't there yet.

edit: Looks like the Drive PX2 is more or less what I'm talking about but
meant for cars, so the market demand is there:
[http://wccftech.com/nvidia-pascal-gpu-drive-px-2/](http://wccftech.com/nvidia-pascal-gpu-drive-px-2/)

~~~
arcanus
[https://www.nextplatform.com/2017/02/28/amd-researchers-eye-apus-exascale/](https://www.nextplatform.com/2017/02/28/amd-researchers-eye-apus-exascale/)

AMD Research, the research division of the chip maker, recently took a look at
using accelerated processing units (APUs, AMD’s name for processors with
integrated CPUs and GPUs) combined with multiple memory technologies, advanced
power-management techniques, and an architecture leveraging what they call
“chiplets” to create a compute building block called the Exascale Node
Architecture (ENA) that would form the foundation for a high performing and
highly efficient exascale-capable system.

