Well, almost. Integrated graphics generally take the place of additional CPU cores. For example, there is no 8-core Ryzen with integrated graphics. So if you need cores (like for compiling code) then you end up with a discrete graphics card even if you don't need fast graphics.
I actually wish Intel would make a cheap discrete graphics card for that purpose, because they have the most stable Linux graphics drivers.
So I install 16.04, the most recent LTS release available, because I don't want these machines to turn into a pumpkin after less than 9 months. But the graphics are a total fail: SVGA 800x600 only.
The kernel on that release is too old, and updating the kernel on Ubuntu is fraught with peril. So I look into third-party drivers and find that support for this hardware was dropped entirely, even from the third-party Ubuntu packages. So I go to ATI/AMD's website and find the driver package, but it turns out to be a tarball full of .debs with zero instructions on how to install or enable them. I was having flashbacks to working on Linux installs in the late 90s and having to hack everything to get it working.
In the end I had to install 17.10 and apply the hacks to make it work just to get the graphics running, and now I'm going to have to revisit those machines in a couple of months to upgrade them to 18.04.
Disabling the new systemd DNS resolver that crashes on literally every lookup attempt on our network. Also had to learn how to program the modeline equivalent for Wayland for when you're going through a KVM and don't have EDID. This was no picnic in later versions of X either, but there I had already figured it out and had earlier versions as a reference. I still miss xconfig sometimes.
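Roughly, the two fixes look something like this (the connector name, resolution, and DNS server below are just placeholders, adjust for your setup):

  # stop systemd-resolved from intercepting lookups, point resolv.conf at a real server
  sudo systemctl disable --now systemd-resolved
  sudo rm /etc/resolv.conf
  echo "nameserver 192.168.1.1" | sudo tee /etc/resolv.conf

  # force a display mode at the KMS level (works under Wayland, no EDID required):
  # append to GRUB_CMDLINE_LINUX_DEFAULT in /etc/default/grub, then run update-grub
  video=DP-1:1920x1080@60e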
It's gonna be a breeze in a few months though. And once that happens, the open source AMD drivers are going to be more stable than the Nvidia blob again.
You should have read the friendly documentation! I have AMDGPU running on 16.04 after running an install shell script (amdgpu-pro-install) in the tarball, as spelt out by the docs.
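For reference, the whole dance from the tarball is something like this (exact file name varies by driver release):

  tar -Jxf amdgpu-pro-*.tar.xz
  cd amdgpu-pro-*/
  ./amdgpu-pro-install    # the script sets up a local repo and installs the .debs via apt
  sudo reboot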
After that, the GPU would be too strong on memory and too weak on compute. The minimum HBM2 configuration is a single 1024-bit stack at ~250GB/s. Yeah, the same memory bandwidth as an RX 580 with its 2304 shaders, while the 2400G APU only has 704 shaders.
The Intel + AMD chip manages to solve the problem: the CPU, GPU, and HBM2 all share one package, so the cost of the expensive packaging is spread across the whole design. Second, the GPU is a fully dedicated Vega design that's roughly the size of an RX 580. That way, the dedicated GPU can "keep up" with the super-fast HBM2 RAM.
The GeForce GT 1030, a slower part based on the same architecture as the 1050/1070/1080, gets me hardware-accelerated H.265/HEVC and the ability to play most MKV files without dropping frames at up to very high bitrates. Typical playback of something like a 2160p60 24GB copy of the original Blade Runner sits at about 18% CPU usage across all four cores using a recent VLC beta.
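If you want to confirm the card's decoder (rather than the CPU) is actually doing the heavy lifting, something like this should show the dec column jump during playback:

  nvidia-smi dmon -s u    # per-second GPU utilization, including the video decoder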
But the total money spent was something like $89 motherboard + $89 video card + $165 CPU. It would be nice to not have to buy the video card.
The next generation of motherboard chipsets from AMD should remedy this, but motherboards with those chipsets won't launch before April.
I'm sure that by April there will be similarly priced motherboards that do support 2160p60 out of their built-in HDMI port, for use with these new CPUs.
Picking a random NewEgg motherboard with AM4 & DisplayPort, it looks like it would do the job (see Specs):
So does the cheapest board:
Almost went to an Intel 8th generation Core part and Intel's integrated video. But the performance difference between Intel onboard graphics and an $80 to $90 GeForce GT 1030 was significant. I wanted to build something that would still be able to play ridiculously high bitrate HEVC.
Edit: oops, they released the laptop APUs earlier, like the Ryzen 7 2700U.
Unclear whether it will be BGA-packaged though.
> September 4, 2013 – IFA 2013 – HDMI Forum, Inc., a non-profit, mutual benefit corporation, today announced the release of Version 2.0 of the HDMI Specification. This latest HDMI Specification, the first to be developed by the HDMI Forum, offers a significant increase in bandwidth (up to 18Gbps) to support new features such as 4K@50/60 (2160p),
Anyway, I would recommend a Ryzen 3 + GTX 1050 at a minimum for a gaming computer. The performance here on GTA V, a several-year-old game, is pretty poor.
Maybe with a bigger package they could throw HBM2 on there... Something like a 2-CCX Ryzen 7 configuration (8C/16T) + Vega ~20 CUs + 4GB HBM2 in a package that fits the Threadripper/Epyc socket.
At that point you're stuck paying a premium for the X399 platform, though, so you should probably just buy a dGPU :)
The biggest issue will be heat dissipation - but most games are GPU bottlenecked (especially beyond 8C/16T) and games generally aren't designed to consider the Threadripper module structure anyway.
You could probably add the GPU and a couple of 8GB stacks of HBM to the two empty slots, then throttle one of the CPU modules when the GPU ramps up.
I'd recommend a water block.
You can see a much larger gaming benefit by overclocking the IGP. In most tests I saw today, an overclocked Vega 8 (stock 1100MHz) in the $99 part easily catches up to the stock Vega 11 (1250MHz), and it can be pushed to ~1500MHz.
ACER Swift 3 with Ryzen 5 2500U, Vega 8, 4GB
Lenovo IdeaPad 720S with Ryzen 7 2700U, Vega 10, 8GB
HP Envy X360 with Ryzen 5 2500U, Vega 8, 16GB
There is also an all-in-one:
DELL Inspiron 7775 AIO, Ryzen 7 1700 CPU + RX 580 GPU, 16GB
Ethereum is Dagger-Hashimoto (Ethash), which tries to be RAM-bandwidth bound. GDDR5 and HBM2 can reach bandwidths of 500+ GB/s (compared to a typical CPU with dual-channel DDR4 at only ~50GB/s), so Ethereum's best machines are the cheap GPUs with GDDR5. HBM2 cards (Vega 64 and Titan V) are faster, but I don't remember if they win in price/performance. HBM2 might win in ongoing costs due to being more power-efficient.
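Back-of-the-envelope, assuming the usual figure of ~8KB of DAG traffic per Ethash hash (64 accesses x 128 bytes), the bandwidth ceiling works out to roughly:

  484 GB/s (Vega 64 HBM2)        -> ~59 MH/s theoretical
  256 GB/s (RX 580 GDDR5)        -> ~31 MH/s
  ~40 GB/s (dual-channel DDR4)   -> ~5 MH/s

which is why an APU hanging off DDR4 is a non-starter for Ethash no matter how good its GPU is.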
Monero is Cryptonight, which tries to be memory-latency bound (ideally executing out of a CPU's L3 cache). Unfortunately, it seems like the miners have figured out how to parallelize it, as HBM2-based cards manage to perform incredibly well at Cryptonight.
Basically, the only cryptocoin that MIGHT be worthwhile on an APU connected to DDR4 is Monero / Cryptonight. Even then, because Cryptonight is primarily memory-latency bound, I doubt that the GPU-portion would be any faster than the CPU. The majority of the algorithm is waiting for L3 cache to respond.
The L2 cache in GCN (and probably Vega) is 512KB and shared between 4 compute units, so it's too small to hold Cryptonight's working set. As such, why execute on the on-board GPU when the CPU already maxes out the L3 cache?
Cryptonight requires 2MB of low-latency memory per instance. Threadripper, with 16 cores and 32MB of L3, is basically the ideal CPU setup. But Vega 64 managed to outperform Threadripper by significant margins, probably because HBM2 RAM is incredible (and there seems to be some kind of unexpected parallelization trick in the Cryptonight algorithm).
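Roughly what the cache budget works out to, assuming the 2MB-per-instance scratchpad:

  Ryzen 5 2400G:       4MB L3  ->  2 resident instances
  Ryzen 7 1700/1800X: 16MB L3  ->  8
  Threadripper 1950X: 32MB L3  -> 16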
Good guess. The CPU is nearly twice as fast as the APU's GPU portion, and limited by L3 cache as you correctly noted.
One "qualm" is that AMD's APU is a very interesting platform, which can share pointers and memory between the CPU and GPU. Some degree of caching is shared between the CPU and GPU since both are on the same die!
It's not a big deal for Cryptonight (which basically pushes AES-NI instructions, so it's fastest on the CPU anyway; and L3 is so small that only 2 instances can run on the Ryzen 5 2400G). But in the general case (say, maybe Ethash, which isn't as latency/L3 bound), there may be custom code that can be written to take advantage of AMD's HSA platform.
Code which doesn't exist yet (and therefore can't be benchmarked). Part of the problem is that the ROCm project doesn't even have Raven Ridge support yet, so HSA coding / CPU-GPU sharing can't really take place. But looking forward, it might be an interesting project to see Ethash implemented as HSA / ROCm code.