No — as of 2019, every AMD GPU released this decade supports OpenGL through 4.5 and supports Vulkan, but they still really don't have a great OpenCL situation (ROCm is out of tree on every distro and still only supports parts of 2.0).
For gaming, though, there's no reason not to get an AMD GPU. They are at near performance parity with Nvidia relative to their Windows performance, they work out of the box with the in-tree drivers on every distro, and the only footgun to watch out for is that new hardware generally takes one Mesa feature-release cycle after launch to get stable. You even get hardware-accelerated H.264 encoding and decoding (and VP8/VP9 on some chips) via VA-API. All on top of the fundamental point that they are much more freedom-respecting than Nvidia.
Stop giving Nvidia your money so they can screw you over with it. CUDA, their RTX crap, G-Sync, PhysX, Nvidia "Gameworks", and much more are all anti-competitive, monopolist, exploitative, user-hostile moves meant to screw over competition and customers alike. Nvidia is one of the most reprehensible companies out there, with peers like Oracle. AMD isn't a selfless, helpless angel of a company, but when their products are competitive, and in many ways better (such as supporting Wayland), stop giving such a hostile business your money.
To be fair, only Intel has good OpenCL 2.0+ support. NVidia isn't really pushing OpenCL and AMD's ROCm OpenCL driver is still a bit unstable.
AMD's OpenCL 2.0 support has always been poor. The Windows OpenCL 2.0 tooling didn't have a debugger, for example; it really wasn't a good development environment at all. Only OpenCL 1.2 had a decent debugger, profilers, or analysis tools.
Frankly, it seems like OpenCL 2.0 as a whole is a failure. All the code I've personally seen is OpenCL 1.2. Intel is pushing ICC / autovectorization, NVidia is pushing CUDA, and AMD was pushing HSA, and maybe now HCC / HIP / ROCm. No company wants to champion OpenCL 2.0.
Video games are written with Vulkan shaders, DirectX (HLSL) shaders, or GLSL. They exist independently of the dedicated compute (CUDA / OpenCL) world.
CUDA, OpenMP 4.5 device offload, and ROC / HIP / HCC are the compute solutions that seem to have the best chances in the future... since OpenCL 2.0 is just kinda blah. AMD's ROCm stack still needs more development, but it does seem to be improving steadily.
I know everyone wants to just hold hands, sing Kumbaya and load SYCL -> OpenCL -> SPIR-V neutral code on everyone's GPUs, but that's just not how the world works right now. And I have my doubts that it will ever be a valid path forward.
The hidden message is that CUDA -> Clang -> NVidia is a partially shared stack with HCC -> Clang -> AMD. So LLVM seems to be the common denominator.
And they are doing it again with Vulkan, with a large majority looking for higher-level wrappers instead of dealing directly with its API and its ever-increasing number of extensions.
This assertion makes no sense. The whole reason these APIs are specified in C is that once a C API is available, it's trivial to develop bindings in any conceivable language. You can't do that if you opt for a flavor-of-the-month language.
Furthermore, GPGPU applications are performance-driven, and C is unbeatable in this domain. By providing a standard API in C, and by allowing implementations to be written in C, you avoid forcing end users to pay a performance tax just because someone had an irrational fear of C.
One of the reasons CUDA beat OpenCL was that enough people preferred C++ or Fortran to C and CUDA was happy to accommodate them while OpenCL wasn't.
Nonsense. We're talking about an API. C provides a bare-bones interface that doesn't impose any performance penalty, and just because the API is C, nothing stops anyone from implementing the core functionality in the best tool they can find.
That's like complaining that an air conditioner doesn't cool the room as well as others just because it uses a type F electrical socket instead of a type C plug.
I don't think we are. We are talking about languages.
In CUDA, compute kernels are written in a language called "CUDA C++." In OpenCL 1.2, compute kernels are written in a language called "OpenCL C." Somebody else probably could have (likely did) implement a compiler for their own version of C++ for OpenCL kernels, but the point 'pjmlp was making is that the standard platform did not enable C++ to be used for kernels until long after it was available in CUDA.
NVidia, which created a C++ binding to use Vulkan instead of plain C? Or AMD, which had to create higher-level bindings for the CAD industry to even bother to look into Vulkan?
This blind advocacy for straight C APIs will turn Vulkan into another OpenCL.
CUDA is clearly performance driven, and is a more mature C++ model.
Template functions are a type-safe way to build different (but similar) compute kernels. It's far easier to use C++ templates, constexpr, and the like to generate constant code than to use C-based macros.
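As a hedged sketch of that point (plain C++ here rather than CUDA, which would add `__global__`; all names are invented for illustration): the macro version is textual substitution with no type checking, while the template version is one checked definition whose compile-time parameters the compiler can fold into constant code.

```cpp
#include <cstddef>

// C-macro approach: textual substitution, no type checking, one macro
// invocation per variant you need.
#define SCALE_KERNEL(TYPE, NAME)                          \
    void NAME(TYPE* data, size_t n, TYPE factor) {        \
        for (size_t i = 0; i < n; ++i) data[i] *= factor; \
    }
SCALE_KERNEL(float, scale_f32)

// Template approach: one definition, type-checked specializations, and a
// compile-time parameter (here a toy Unroll knob) checked by static_assert.
template <typename T, int Unroll>
void scale_kernel(T* data, size_t n, T factor) {
    static_assert(Unroll > 0, "Unroll must be positive");
    for (size_t i = 0; i < n; ++i) data[i] *= factor;
}
```

Calling `scale_kernel<float, 4>(a, n, 2.0f)` instantiates exactly the variant you asked for, and a wrong type or parameter fails at compile time instead of silently expanding to broken text.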
In practice, CUDA C++ beats OpenCL C in performance. There's a reason why it is so popular, despite being proprietary and locked down to one company.
The real issue IMO was the split-source model. CUDA's and HCC's single-source model means all your structs and classes work across the GPU and CPU.
If you have a complex datastructure to pass data between the CPU and GPU, you can share all your code on CUDA (or AMD's HCC). But in OpenCL, you have to write a C / C++ / Python version of it, and then rewrite an OpenCL C version of it.
OpenCL C is driven by this interpreter / runtime-compiler thingy, which just causes issues in practice: the compiler is embedded in the device driver.
Since AMD's OpenCL compiler is buggy, different versions of AMD's drivers will segfault on different sets of code. Your single OpenCL program may work on the 19.1.1 AMD drivers but segfault on version 18.7.2.
The single-source compile-ahead-of-time methodology means that compiler bugs stay in developer land. IIRC, NVidia CUDA also had some bugs, but you can just rewrite your code to handle it (or upgrade your developer's compilers when the fix becomes available).
That's simply not possible with OpenCL's model.
If AMD made something that'd beat my 1080 Ti for a reasonable price, I'd definitely buy it. I certainly don't like Nvidia's Linux drivers, but the majority of my non-IGPU needs are Windows-based, so it's not as much of an issue. If I exclusively used Linux on my high-end PCs, I'd likely be more willing to lose some raw performance to go with AMD.
I think the main issue with AMD is that their compute drivers are clearly behind NVidia's. However, their ROCm development is now on Github, so we can publicly see releases and various development actions. AMD has been active on Github, so the drivers are clearly improving.
But I think it is surprising to see just how far behind they are. ROCm is rewriting OpenCL from scratch, HIP / HCC / etc. etc. is built on top of C++ AMP but otherwise seems to be built from scratch as well. As such, there are still major issues like "ROCm / OpenCL doesn't work with Blender 2.79 yet".
And since ROCm / OpenCL is a different compiler, it has different performance characteristics compared to AMDGPU-PRO (the old OpenCL compiler). So code that worked quickly on AMDGPU-PRO (ex: LuxRender) may work slowly on ROCm / OpenCL (or worst case: not at all, due to compiler errors or whatnot).
EDIT: And the documentation... NVidia offers extremely good documentation: not only a complete CUDA guide, but also a performance guide, documented latencies on various instructions (not Agner Fog level, but useful for understanding which instructions are faster than others), etc. AMD used to have an "OpenCL Optimization Guide" with similar information, but it hasn't been updated since the 7970.
EDIT: AMD's Vega ISA documentation is lovely, though. But it's a bit too low-level: while it gives a great idea of how the GPU executes at the assembly level, it doesn't say much about how OpenCL maps onto it, or give optimization tips for that matter. There are certainly nifty features, like DPP or the ds_permute instructions, which probably could be used in a bitonic sort or something, but there's almost no OpenCL-level guide to using those instructions (aside from https://gpuopen.com/amd-gcn-assembly-cross-lane-operations/; that's basically the best you've got).
That's just the reality of the situation right now for anyone looking into AMD Compute. I'm hopeful that the situation will change as AMD works on fixing bugs and developing (there have been a LOT of development items pushed to their Github repo in the past year). But there's just so much software to be written to have AMD catch up to NVidia. Not just code, but also documentation of their GPUs.
I'm still very much looking forward to the Radeon VII due to the memory bandwidth, since I'm currently working on bandwidth-constrained CFD simulations. But that's a specific usecase and I write most things from scratch anyway.
If you really can use those 500 GB/s HBM2 stacks plus 10+ TFLOPS of compute, the Vega is absolutely a monster, at a far cheaper price than the 2080.
I really wonder why video games FPS numbers are so much better on NVidia. The compute power is clearly there, but it just doesn't show in FPS tests.
Anyway, my custom code tests are to try and build a custom constraint-solver for a particular game AI I'm writing. Constraint solvers share similarities to Relational Databases (in particular: the relational join operator) which has been accelerated on GPUs before.
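As a hedged sketch of the parallel being drawn here (not the author's actual solver), a relational equi-join is the "build, then probe" pattern that GPU database work accelerates; a sequential CPU version looks like this, and a GPU version would build and probe the hash table in parallel:

```cpp
#include <unordered_map>
#include <utility>
#include <vector>

// Join rows of R and S on their integer key, returning (r_value, s_value)
// pairs for every matching key.
std::vector<std::pair<int, int>> equi_join(
    const std::vector<std::pair<int, int>>& r,   // (key, value) rows of R
    const std::vector<std::pair<int, int>>& s) { // (key, value) rows of S
    // Build phase: hash every row of R by key.
    std::unordered_multimap<int, int> build;
    for (const auto& row : r) build.emplace(row.first, row.second);

    // Probe phase: look up each row of S and emit all matches.
    std::vector<std::pair<int, int>> out;
    for (const auto& row : s) {
        auto range = build.equal_range(row.first);
        for (auto it = range.first; it != range.second; ++it)
            out.emplace_back(it->second, row.second);
    }
    return out;
}
```

A constraint solver doing the same join over candidate assignments gets the same data-parallel structure, which is what makes the GPU mapping attractive.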
So I too am a bit fortunate that my specific use cases actually enable me to try ROCm. But any "popular" thing (Deep Learning, Matrix Multiplications, etc. etc.) benefits so heavily from CUDA's ecosystem that it's hard to say no to NVidia these days. CUDA is just more mature, with more libraries that help the programmer.
AMD's system is still "some assembly required", especially if you run into a compiler bug or care about performance (gotta study up on that Vega ISA...). And unfortunately, GPU assembly language is a fair bit more mysterious than CPU assembly language. But I expect any decent low-level programmer could figure it out eventually...
As one example, I have a recurring task that runs on my GPU in the background, and I sleep next to the computer that does that. Since I don't want it to be too noisy, and it is acceptable for it to take longer to run while I'm asleep, I have a cron job which changes the power cap through sysfs to a more reasonable 45W (and at those levels, it's much more efficient anyhow, especially with my tuned voltages) at night.
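The mechanism described above can be sketched as a tiny helper like the following. Note the file path is system-specific (on amdgpu it is typically something like `/sys/class/hwmon/hwmonN/power1_cap`, which expects microwatts and needs root); the function name and structure here are my own illustration, not the author's actual script.

```cpp
#include <fstream>
#include <string>

// Write a power cap (in watts) to a hwmon sysfs file as microwatts.
// Returns false if the file cannot be opened or written (e.g. no root,
// or the hwmon node does not exist on this system).
bool set_power_cap(const std::string& path, unsigned watts) {
    std::ofstream f(path);
    if (!f) return false;
    f << watts * 1000000ULL;  // amdgpu's power1_cap is in microwatts
    return static_cast<bool>(f);
}
```

A cron entry could then invoke a small binary built around this at night (`set_power_cap(path, 45)`) and restore the default cap in the morning.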
> I really wonder why video games FPS numbers are so much better on NVidia. The compute power is clearly there, but it just doesn't show in FPS tests.
Drivers are hard, and AMD has sorta just been getting around to doing them well. The Mesa OpenGL drivers are usually faster than AMDGPU-PRO at OpenGL, and RADV is often faster than AMDGPU-PRO Vulkan (and AMDVLK).
I've been hoping these last few years that AMD would try to ship Mesa on Windows (i.e., add a state tracker for the low level APIs underlying D3D), and save themselves the effort. As far as I can tell, there is no IP issue preventing them from doing that (including if they have to ship a proprietary version with some code they don't own). There still seems to be low-hanging fruit in Mesa, but the performance is already usually better.
I'm glad AMD is finally catching up, but a savings of only $51 an entire year later doesn't exactly sound like a particularly great deal to me.
NVidia's 2080 (roughly equivalent to the 1080 Ti) is also $699 to $799, depending on which model you get. It's the nature of how process nodes work now.
Rumor is that the lower-end of the market will get price/performance upgrades, as maybe small-7nm chips will have enough yield to actually give cost-savings. But that's a bit of "hopes and dreams" leaking in, as opposed to any hard data.
For now, it is clear that ~300 mm² 7nm chips (like the Radeon VII) are going to be costly, probably due to low yields, but it's hard to know for sure. Note that Zen 2 and Apple's chips are all around 100 mm² (which seems to indicate that yields are fine for small chips... but even then, Apple's phones definitely increased in price as they hit 7nm).
Implemented by AMD too, under the moniker "HIP."
> their RTX crap
Vendor-independent via DXR, soon to be available through Vulkan as well.
Nvidia GPUs now work with FreeSync monitors.
Open source, as of December.
I want to feel good about AMD, but they have thus far failed to build a stable platform around their GPGPU stack(s). Nvidia has done a pretty good job with making CUDA a stable platform for SW development, however anti-competitive you might think it is.
It's forcing Nvidia's hand. If they try to keep everything proprietary and the market starts to shift to the open alternatives because everybody hates that, everyone who used their solutions to begin with and has to throw them out and start over would be displeased. So their only hope is to try to be just open enough that people continue to use their stuff. But they still suck -- look at the state of the open source drivers for their hardware.
And they should hardly be rewarded for doing the wrong thing for as long as the market would bear. Forgive them after they fall below 40% market share.
Point is, if you are a user of open source software, stay away from Nvidia products.
I can confirm that besides Intel, AMD plays nicely with Wayland too, am typing this from a Wayland GNOME session using the open-source AMDGPU drivers.
I was hyped about RTX when it came out, and I'm all for more performant raytracing. However, my hype has seriously waned since the RTX release due to the lack of support. Until we have more games that support it, I'm inclined to agree that there is no reason to include it when deciding between NVIDIA and AMD at this time. I would argue that by the time raytracing in gaming is more widespread, there will be a more open and accessible solution, likely supported (or built) by AMD.
Cemu, Yuzu, and any other CPU-intensive emulators that only support OpenGL.
The Mesa drivers are much faster (you can get 40% more performance out of Cemu by running it on Linux via Wine), but this doesn't help Windows AMD users.
RTX is meant to be linked to "raytracing".
Raytracing, in general, computes a scene by simulating how photons are affected by matter: e.g., a full reflection off a perfectly smooth, non-absorbing surface, or a partial reflection and path divergence caused by liquids, etc.
In the simulation, a photon that "bounces off" one surface is then "rebounced" by another surface, and so on, and this creates a picture similar to the ones we see in the real world.
RTX may cut the whole "rebouncing" and generation of photons a bit short. In other words, there isn't really any new next-gen technology; it's just a bit more processing power available to do some additional parallel, short / semi-pure raytracing work, which doesn't hold up when your scene is complex and needs many reflections "rebounced" many times.
Again: this is just my initial understanding.
No, they don't work with any distro that uses linux-libre.
Do you have similar issues with the vast majority of software companies out there that don't open source their product?
Whether it's Microsoft, SAS, Wolfram Research, or Adobe; Synopsys, Autodesk, or Blizzard; or any mom-and-pop software company that solves some niche problem. It's all closed source and "anti-competitive".
Are they just as monopolist, exploitative and user-hostile?
If not, what's the difference? Just the fact that they sell hardware along with their software?
It's perfectly reasonable to be concerned about a vendor going bankrupt and choosing not to support their product. If that's the issue, just don't buy their products.
If one thinks that CUDA functionality is really that important, then isn't that proof that it's valuable IP? Nobody should be forced to open up valuable IP. I don't think it's reasonable to call somebody scum of the earth because they value and don't want to give away what they created.
Also, I don't think that someone is bad for making use of crappy laws such as IP laws, but I don't think we want to have that discussion here.