A lot of people are talking along the lines of "oh AMD is nice but... Nvidia".

No: in 2019, all AMD GPUs from this decade support OpenGL through 4.5 and Vulkan. The OpenCL situation still really isn't great, though (ROCm is out of tree on every distro and still only supports parts of 2.0).

For gaming though, there's no reason not to get an AMD GPU. They are at near performance parity with Nvidia relative to their own Windows performance, they work with the inbuilt drivers on every distro out of the box, and the only footgun to watch out for is that new hardware generally takes one Mesa feature-release cycle to get stable after launch. You even get hardware-accelerated H.264 encoding and decoding (and VPx on some chips) via VA-API. All on top of the fundamental point that they are much more freedom-respecting than Nvidia.

Stop giving Nvidia your money to screw you over with. CUDA, their RTX crap, Gsync, Physx, Nvidia "Gameworks", and much more are all anti-competitive, monopolistic, exploitative, user-hostile moves meant to screw over competition and customers alike. Nvidia is one of the most reprehensible companies out there, with peers like Oracle. AMD isn't a selfless, helpless angel of a company, but when its products are competitive, and in many ways better (such as supporting Wayland), stop giving such a hostile business your money.




> No: in 2019, all AMD GPUs from this decade support OpenGL through 4.5 and Vulkan. The OpenCL situation still really isn't great, though (ROCm is out of tree on every distro and still only supports parts of 2.0).

To be fair, only Intel has good OpenCL 2.0+ support. NVidia isn't really pushing OpenCL and AMD's ROCm OpenCL driver is still a bit unstable.

AMD's OpenCL 2.0 support has always been poor. The Windows OpenCL 2.0 stuff didn't have a debugger, for example; it really wasn't a good development environment at all. Only OpenCL 1.2 had a decent debugger, profilers, or analysis tools.

Frankly, it seems like OpenCL 2.0 as a whole is a failure. All the code I've personally seen is OpenCL 1.2. Intel is pushing ICC / autovectorization, NVidia is pushing CUDA, and AMD was pushing HSA and is maybe now pushing HCC / HIP / ROCm. No company wants to champion OpenCL 2.0.

Video game GPU code is written as Vulkan shaders, DirectX shaders, or GLSL. It exists independently of the dedicated compute (CUDA / OpenCL) world.

CUDA, OpenMP 4.5 device offload, and ROC / HIP / HCC are the compute solutions that seem to have the best chances in the future... since OpenCL 2.0 is just kinda blah. AMD's ROCm stack still needs more development, but it does seem to be improving steadily.

----------

I know everyone wants to just hold hands, sing Kumbaya and load SYCL -> OpenCL -> SPIR-V neutral code on everyone's GPUs, but that's just not how the world works right now. And I have my doubts that it will ever be a valid path forward.

The hidden message is that CUDA -> Clang -> NVidia is a partially shared stack with HCC -> Clang -> AMD. So LLVM seems to be the common denominator.


Khronos learned too late that the world has moved on and the community wanted to use something other than C to program their GPGPUs.

And they are doing it again with Vulkan, with a large majority looking for higher-level wrappers instead of dealing directly with its API and ever-increasing number of extensions.


> and the community wanted to use something other than C to program their GPGPUs.

This assertion makes no sense. The whole reason these APIs are specified in C is that once a C API is available it's trivial to develop bindings in any conceivable language. You can't do that if you opt for a flavor of the month.

Furthermore, GPGPU applications are performance-driven, and C is unbeatable in this domain. Thus by providing a standard API in C, and by allowing standard implementations to be written in C, you avoid forcing end-users to pay a performance tax just because someone had an irrational fear of C.


> Furthermore, GPGPU applications are performance-driven, and C is unbeatable in this domain.

One of the reasons CUDA beat OpenCL was that enough people preferred C++ or Fortran to C and CUDA was happy to accommodate them while OpenCL wasn't.


> One of the reasons CUDA beat OpenCL was that enough people preferred C++ or Fortran to C and CUDA was happy to accommodate them while OpenCL wasn't.

Nonsense. We're talking about an API. C provides a bare-bones interface that doesn't impose any performance penalty, and just because the API is C, nothing stops anyone from implementing the core functionality in the best tool they can find.

That's like complaining that an air conditioner doesn't cool the room as well as others just because it uses a type F electrical socket instead of a type C plug.


> We're talking about an API

I don't think we are. We are talking about languages.

In CUDA, compute kernels are written in a language called "CUDA C++." In OpenCL 1.2, compute kernels are written in a language called "OpenCL C." Somebody probably could have implemented (and likely did implement) a compiler for their own version of C++ for OpenCL kernels, but the point 'pjmlp was making is that the standard platform did not enable C++ to be used for kernels until long after it was available in CUDA.


Proper APIs can be defined via IDLs; C is not needed at all.


Tell that to Khronos, which is now forced to support HLSL so that most studios will even bother to port their shaders.

Or to NVidia, which created a C++ binding for Vulkan instead of plain C, or to AMD, which had to create higher-level bindings before the CAD industry would even bother to look into Vulkan.

This blind advocacy for straight C APIs will turn Vulkan into another OpenCL.


Yeah, except in actual high-performance computing, almost nobody uses C. It's all C++ and Fortran.


> Furthermore, GPGPU applications are performance-driven, and C is unbeatable in this domain.

CUDA is clearly performance-driven, and it offers a more mature C++ model.

Template functions are a type-safe way to build different (but similar) compute kernels. It's far easier to use C++ templates, constexpr, and the like to generate compile-time-specialized code than to use C-based macros.

In practice, CUDA C++ beats OpenCL C in performance. There's a reason why it is so popular, despite being proprietary and locked down to one company.
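
To make the template point concrete, here is a rough sketch (kernel name, types, and numbers are invented for illustration, not taken from any real codebase) of how a single templated CUDA C++ kernel stamps out type- and unroll-specialized variants that OpenCL C 1.2 would need macros or hand-copied kernels to match:

    #include <cstdio>

    // One templated kernel; each instantiation is a distinct, fully
    // type-checked variant generated at compile time.
    template <typename T, int UNROLL>
    __global__ void scale_add(T* __restrict__ y, const T* __restrict__ x, T a, int n) {
        int base = (blockIdx.x * blockDim.x + threadIdx.x) * UNROLL;
        #pragma unroll
        for (int i = 0; i < UNROLL; ++i) {
            int idx = base + i;
            if (idx < n) y[idx] = a * x[idx] + y[idx];   // y = a*x + y
        }
    }

    int main() {
        const int n = 1 << 20;
        float *x, *y;
        cudaMallocManaged(&x, n * sizeof(float));
        cudaMallocManaged(&y, n * sizeof(float));
        for (int i = 0; i < n; ++i) { x[i] = 1.0f; y[i] = 2.0f; }

        // <float, 4>, <double, 1>, etc. all come from the same source.
        scale_add<float, 4><<<(n / 4 + 255) / 256, 256>>>(y, x, 3.0f, n);
        cudaDeviceSynchronize();
        printf("y[0] = %f\n", y[0]);   // expect 5.0
        cudaFree(x); cudaFree(y);
        return 0;
    }

The rough equivalent in OpenCL C 1.2 is a pile of #defines and source-string pasting, with the type errors only showing up when the driver compiles the kernel at runtime.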


Honestly, I think I can live with the C thing to program GPGPUs.

The real issue IMO was the split-source model. CUDA's single-source or HCC's single-source approach means all your structs and classes work across both the GPU and CPU.

If you have a complex data structure to pass data between the CPU and GPU, you can share all your code on CUDA (or AMD's HCC). But in OpenCL, you have to write a C / C++ / Python version of it, and then rewrite an OpenCL C version of it.
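
Roughly what "sharing your structs" buys you in a single-source model (the Interval type below is invented purely for illustration; in split-source OpenCL 1.2 you would keep a second copy of it inside the kernel source string and sync the two by hand):

    #include <cstdio>

    // One definition, usable from host and device code because the whole
    // file goes through a single compiler.
    struct Interval {
        float lo, hi;
        __host__ __device__ bool contains(float v) const { return v >= lo && v <= hi; }
    };

    __global__ void count_inside(const Interval* ivs, const float* xs, int n, int* hits) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n && ivs[i].contains(xs[i]))   // same method the CPU code uses
            atomicAdd(hits, 1);
    }

    int main() {
        const int n = 1024;
        Interval* ivs; float* xs; int* hits;
        cudaMallocManaged(&ivs, n * sizeof(Interval));
        cudaMallocManaged(&xs, n * sizeof(float));
        cudaMallocManaged(&hits, sizeof(int));
        for (int i = 0; i < n; ++i) { ivs[i] = {0.0f, float(i)}; xs[i] = 10.0f; }
        *hits = 0;

        printf("host call: %d\n", ivs[42].contains(10.0f));       // CPU side
        count_inside<<<(n + 255) / 256, 256>>>(ivs, xs, n, hits); // GPU side
        cudaDeviceSynchronize();
        printf("intervals containing 10.0: %d\n", *hits);
        cudaFree(ivs); cudaFree(xs); cudaFree(hits);
        return 0;
    }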

OpenCL C is driven by this runtime-compiler thingy, which just causes issues in practice: the compiler is embedded in the device driver.

Since AMD's OpenCL compiler is buggy, different versions of AMD's drivers will segfault on different sets of code. As in, your single OpenCL program may work on the 19.1.1 AMD drivers but segfault on version 18.7.2.

The single-source, compile-ahead-of-time methodology means that compiler bugs stay in developer land. IIRC, NVidia's CUDA compiler also had some bugs, but you can just rewrite your code to work around them (or upgrade your developers' compilers when the fix becomes available).

That's simply not possible with OpenCL's model.


RTX isn't crap. It's an innovation. Yes, it's in its early stages, but nevertheless. True reflections alone will result in new game mechanics. It will accelerate traditional 3D rendering.


At the very least, it will make it easier to implement ambient occlusion and view-space reflections that don't look weird.


I love AMD's innovation in this space, but for high-end gaming Nvidia is still destroying them in raw performance. RTX and G-Sync are definitely stupid, though Nvidia is now adding limited FreeSync compatibility to recent cards.

If AMD made something that'd beat my 1080 Ti for a reasonable price, I'd definitely buy it. I certainly don't like Nvidia's Linux drivers, but the majority of my non-IGPU needs are Windows-based, so it's not as much of an issue. If I exclusively used Linux on my high-end PCs, I'd likely be more willing to lose some raw performance to go with AMD.


Well, the Radeon VII looks like it is around the 1080 Ti / 2080 for $699.

I think the main issue with AMD is that their compute drivers are clearly behind NVidia's. However, ROCm development is now on GitHub, so we can publicly see releases and development activity. AMD has been active there, so the drivers are clearly improving.

But I think it is surprising to see just how far behind they are. ROCm is rewriting OpenCL from scratch, and HIP / HCC / etc. are built on top of the C++ AMP model but otherwise seem to be built from scratch as well. As such, there are still major issues like "ROCm / OpenCL doesn't work with Blender 2.79 yet".

And since ROCm / OpenCL is a different compiler, it has different performance characteristics compared to AMDGPU-PRO (the old OpenCL compiler). So code that ran quickly on AMDGPU-PRO (e.g., LuxRender) may run slowly on ROCm / OpenCL (or, worst case, not at all, due to compiler errors or whatnot).

EDIT: And the documentation... NVidia offers extremely good documentation. Not only a complete CUDA guide, but a "performance" guide, documented latencies on various instructions (not like Agner Fog level, but useful to understand which instructions are faster than others), etc. etc. AMD used to have an "OpenCL Optimization Guide" with similar information, but it hasn't been updated since the 7970.

EDIT: AMD's Vega ISA documentation is lovely, though. But it's a bit too low-level, and while it gives a great idea of how the GPU executes at an assembly level, it doesn't really have much about how OpenCL relates to it, or optimization tips for that matter. There are certainly nifty features, like DPP or the ds_permute instructions, which probably can be used in a bitonic sort or something, but there's almost no "OpenCL-level" guide to how to use those instructions. (Aside from https://gpuopen.com/amd-gcn-assembly-cross-lane-operations/ - that's basically the best you've got.)
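
For comparison, the closest well-documented analogue on the NVidia side is the warp-shuffle family, so here is a hedged sketch (CUDA, not GCN assembly, and my own toy code rather than anything from AMD's or NVidia's docs) of the kind of cross-lane bitonic sort those DPP / ds_permute instructions are good for:

    #include <cstdio>

    // Sort 32 floats held one-per-lane using a bitonic network built on
    // cross-lane shuffles -- no shared-memory round trips needed.
    __device__ float warp_bitonic_sort(float v) {
        int lane = threadIdx.x & 31;
        for (int k = 2; k <= 32; k <<= 1) {          // bitonic merge sizes
            for (int j = k >> 1; j > 0; j >>= 1) {   // butterfly strides
                float partner = __shfl_xor_sync(0xffffffffu, v, j);
                bool ascending = (lane & k) == 0;
                bool take_min  = ((lane & j) == 0) == ascending;
                v = take_min ? fminf(v, partner) : fmaxf(v, partner);
            }
        }
        return v;   // lane i now holds the i-th smallest value in the warp
    }

    __global__ void demo(float* out) {
        float v = (float)((threadIdx.x * 7) % 32);   // scrambled input
        out[threadIdx.x] = warp_bitonic_sort(v);
    }

    int main() {
        float* out;
        cudaMallocManaged(&out, 32 * sizeof(float));
        demo<<<1, 32>>>(out);
        cudaDeviceSynchronize();
        for (int i = 0; i < 32; ++i) printf("%.0f ", out[i]);   // 0 1 2 ... 31
        printf("\n");
        cudaFree(out);
        return 0;
    }

There's no equivalently approachable write-up for doing the same thing on GCN, which is exactly the documentation gap I mean.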

That's just the reality of the situation right now for anyone looking into AMD Compute. I'm hopeful that the situation will change as AMD works on fixing bugs and developing (there have been a LOT of development items pushed to their Github repo in the past year). But there's just so much software to be written to have AMD catch up to NVidia. Not just code, but also documentation of their GPUs.


From my perspective (computational physics, not machine learning) the situation with GPU compute is very simple. If you are fine writing everything from scratch and won't need the CUDA ecosystem (which is really all there is for good sparse matrix, linear algebra, etc. support), write OpenCL 1.2 (or even GLSL if it's a visualization-heavy code with relatively simple compute) and buy whatever gets you the best compute/$ at that time. Otherwise - and this probably includes most people in this space - you have no choice but to keep using CUDA. There is just no meaningful compute ecosystem for AMD GPUs, sadly.

I'm still very much looking forward to the Radeon VII due to the memory bandwidth, since I'm currently working on bandwidth-constrained CFD simulations. But that's a specific usecase and I write most things from scratch anyway.


AMD's hardware is stupid-good from a compute perspective. The Vega 64 is $399 but renders Blender (on AMDGPU-PRO drivers) incredibly fast, at around 2080 or 1080 Ti level. That's basically the main use case I bought a Vega for (which is why I'm very disappointed in ROCm's current bug that breaks Blender).

If you really can use those 500 GB/s HBM2 stacks plus 10+ TFLOPS of compute, the Vega is absolutely a monster, at a far cheaper price than the 2080.

I really wonder why video game FPS numbers are so much better on NVidia. The compute power is clearly there, but it just doesn't show up in FPS tests.

---

Anyway, my custom code tests are an attempt to build a custom constraint solver for a particular game AI I'm writing. Constraint solvers share similarities with relational databases (in particular, the relational join operator), which have been accelerated on GPUs before.

So I too am a bit fortunate that my specific use case actually lets me try ROCm. But anything "popular" (deep learning, matrix multiplications, etc.) benefits so heavily from CUDA's ecosystem that it's hard to say no to NVidia these days. CUDA is just more mature, with more libraries that help the programmer.
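
To make "join on a GPU" concrete, here is a toy sketch (brute-force nested loop, invented names; real GPU joins use hash or sort-merge strategies, and this is nothing like production code), just to show the parallel shape of the problem:

    #include <cstdio>

    // Each thread takes one row of R and scans S for matching keys.
    __global__ void join_count(const int* r_keys, int nr,
                               const int* s_keys, int ns,
                               unsigned long long* matches) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i >= nr) return;
        unsigned long long local = 0;
        for (int j = 0; j < ns; ++j)
            if (r_keys[i] == s_keys[j]) ++local;
        atomicAdd(matches, local);
    }

    int main() {
        const int nr = 1 << 16, ns = 1 << 16;
        int *r, *s;
        unsigned long long* matches;
        cudaMallocManaged(&r, nr * sizeof(int));
        cudaMallocManaged(&s, ns * sizeof(int));
        cudaMallocManaged(&matches, sizeof(unsigned long long));
        for (int i = 0; i < nr; ++i) r[i] = i % 1000;   // toy key distribution
        for (int j = 0; j < ns; ++j) s[j] = j % 1000;
        *matches = 0;

        join_count<<<(nr + 255) / 256, 256>>>(r, nr, s, ns, matches);
        cudaDeviceSynchronize();
        printf("matching (R, S) pairs: %llu\n", *matches);
        cudaFree(r); cudaFree(s); cudaFree(matches);
        return 0;
    }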

AMD's system is still "some assembly required", especially if you run into a compiler bug or care about performance... (gotta study up on that Vega ISA...) And unfortunately, GPU Assembly language is a fair bit more mysterious than CPU Assembly Language. But I expect any decent low-level programmer to figure it out eventually...


I agree, and I'd add that the Radeon VII is probably going to be a lot better. There are some pretty big benefits to the open drivers as well (which can be used for OpenGL, even if you use the AMDGPU-PRO OpenCL, which is probably wise if OpenCL is what you want to do).

As one example, I have a recurring task that runs on my GPU in the background, and I sleep next to the computer that does it. Since I don't want it to be too noisy, and it is acceptable for it to take longer to run while I'm asleep, I have a cron job that at night changes the power cap through sysfs to a more reasonable 45 W (and at those levels it's much more efficient anyhow, especially with my tuned voltages).

> I really wonder why video game FPS numbers are so much better on NVidia. The compute power is clearly there, but it just doesn't show up in FPS tests.

Drivers are hard, and AMD has sorta just been getting around to doing them well. The Mesa OpenGL drivers are usually faster than AMDGPU-PRO at OpenGL, and RADV is often faster than AMDGPU-PRO Vulkan (and AMDVLK).

I've been hoping these last few years that AMD would try to ship Mesa on Windows (i.e., add a state tracker for the low level APIs underlying D3D), and save themselves the effort. As far as I can tell, there is no IP issue preventing them from doing that (including if they have to ship a proprietary version with some code they don't own). There still seems to be low-hanging fruit in Mesa, but the performance is already usually better.


https://github.com/hashcat/hashcat has some assembly optimizations. They look fairly readable.


I bought my 1080 Ti just over a year ago (December 2017) from Newegg for $750. (Newegg item N82E16814126186)

I'm glad AMD is finally catching up, but a savings of only $51 an entire year later doesn't exactly sound like a particularly great deal to me.


Welcome to the end of Moore's Law. 7nm is as expensive per transistor as 14nm was. Sure, you gain double the density, but the wafers cost roughly twice as much to make. So you only get improved performance per watt; cost per transistor stayed flat in this 7nm generation.
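
Back-of-the-envelope version of that, with deliberately round illustrative ratios rather than actual foundry numbers:

    \[
      \frac{\text{cost}}{\text{transistor}}
        = \frac{\text{cost}/\text{mm}^2}{\text{transistors}/\text{mm}^2}
        \approx \frac{2\times}{2\times}
        = \text{flat}
    \]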

NVidia's 2080 (roughly equivalent to the 1080 Ti) is also $699 to $799, depending on which model you get. It's the nature of how the process nodes work now.

-----------

Rumor is that the lower end of the market will get price/performance upgrades, as maybe small 7nm chips will have enough yield to actually give cost savings. But that's a bit of "hopes and dreams" leaking in, as opposed to any hard data.

For now, it is clear that 300mm^2 7nm chips (like the Radeon VII) are going to be costly. Probably due to low yields, but it's hard to know for sure. Note that Zen 2 and Apple chips are all around 100mm^2 or so (which seems to indicate that yields are fine for small chips... but even then, Apple's phones definitely increased in price as they hit 7nm).


Your rhetoric is a bit out of date. I have no problem with AMD (their hardware is good!), but I don't think your presentation is accurate.

> CUDA

Implemented by AMD too, under the moniker "HIP."

> their RTX crap

Vendor-independent via DXR, soon to be available through Vulkan as well.

> Gsync

Nvidia GPUs now work with FreeSync monitors.

> Physx

Open source, as of December.

I want to feel good about AMD, but they have thus far failed to build a stable platform around their GPGPU stack(s). Nvidia has done a pretty good job with making CUDA a stable platform for SW development, however anti-competitive you might think it is.


The reason they're doing that now is that AMD is forcing them to. nVidia's goal was to lock everyone in, which kind of worked, but everybody hates it. And AMD has been successful enough with their open alternatives that it created a significant risk that people would switch to those instead.

It's forcing nVidia's hand. If they try to keep everything proprietary and the market starts to shift to the open alternatives because everybody hates the lock-in, everyone who adopted their solutions to begin with and now has to throw them out and start over will be displeased. So their only hope is to try to be just open enough that people continue to use their stuff. But they still suck -- look at the state of the open source drivers for their hardware.

And they should hardly be rewarded for doing the wrong thing for as long as the market would bear. Forgive them after they fall below 40% market share.


I regret purchasing Nvidia; I run Linux for day-to-day usage. X frequently has issues, even with a compositor at times. My Intel-only laptop is buttery smooth with Sway, while I am forced to use X just because Nvidia doesn't support Wayland properly.

Point is, if you are a user of open-source software, stay away from Nvidia products.


Yes, the NVidia Wayland situation is bizarre, with them forcing EGLStreams on people when nobody wants to use them, etc.

I can confirm that, besides Intel, AMD plays nicely with Wayland too; I am typing this from a Wayland GNOME session using the open-source AMDGPU drivers.


> their RTX crap

I was hyped about RTX when it came out, and I'm all for more performant raytracing, but my hype has seriously waned since the RTX release due to the lack of support. Until we have more games that support it, I'm inclined to agree that there is no reason to factor it in when deciding between NVIDIA and AMD at this time. I would argue that by the time raytracing in gaming is more widespread, there will be a more open and accessible solution, likely supported (or built) by AMD.


Fair enough. But when the choice is a 2080 for $699 with RTX, their new 2080 competitor without RTX for about the same price, or no AMD card at all, it's not a hard choice.


> For gaming though, there's no reason not to get an AMD GPU.

Cemu, yuzu, and any other CPU-intensive emulators that only support OpenGL. The Mesa drivers are much faster (you can get 40% more performance out of Cemu by running it on Linux via Wine), but this doesn't help Windows AMD users.


What's wrong with RTX (other than it being a bleeding-edge tech that isn't well supported yet)?


I'm a "n00b" in this sector, so pls. feel free to correct me:

1) RTX is meant to be linked to "raytracing".

2a) Raytracing in general computes a scene by simulating how photons are affected by matter - e.g. a full reflection off an absolutely smooth, non-absorbing surface, or a partial reflection and path divergence caused by liquids, etc.

2b) In the simulation, the photon that "bounces off" a surface is then "rebounced" by another surface and so on, and this creates a picture similar to the one we are used to seeing in the real world.

3) RTX maybe cuts the whole "rebouncing" and generation of photons a bit short, meaning there isn't really any new next-gen technology; it's just a bit more processing power available for some additional, parallel, short/semi-pure raytracing work, which does not hold up when your scene is complex and needs many reflections "rebounced" many times.

Again: this is just my initial understanding.


> they work with the inbuilt drivers on every distro out of the box

No, they don't work with any distro that uses linux-libre.


What GPU works on linux-libre?


Many old GPUs store their firmware in non-volatile memory, so the OS doesn't need to load it. Such GPUs work with linux-libre as long as there is a free driver.


Pre-900-series Nvidia GPUs would, I think, since Nouveau wasn't forced to use signed Nvidia firmware until the 900 series.


Intel


> CUDA, their RTX crap, Gsync, Physx, Nvidia "Gameworks"

Do you have similar issues with the vast majority of software companies out there that don't open source their product?

Whether it's Microsoft, SAS, Wolfram Research, or Adobe; Synopsys, Autodesk, or Blizzard; or any mom-and-pop software company that solves some niche problem - it's all closed source and "anti-competitive".

Are they just as monopolistic, exploitative, and user-hostile?

If not, what's the difference? Just the fact that they sell hardware along with their software?


Proprietary drivers are the worst kind of proprietary software. If the vendor goes bankrupt or simply chooses to stop supporting your product, you have no path forward to update your operating system. Why use a Canon printer when HP has open-source drivers? Why use Nvidia when AMD has drivers in the kernel that will never be removed? Old Nvidia cards like the 8800 GTX have only poor reverse-engineered support on recent kernels. By contrast, if Microsoft abandons Word, I could easily get it working with virtualization or possibly through shims like Wine. Nvidia knows that their stance is problematic, and for the embedded auto industry they actually support the open-source drivers for their Tegra hardware.


> Why use Nvidia when AMD has drivers in the kernel that will never be removed?

It's perfectly reasonable to be concerned about a vendor going bankrupt or choosing not to support their product. If that's the issue, just don't buy their products.

If one thinks that CUDA functionality is really that important, then isn't that proof that it's valuable IP? Nobody should be forced to open up valuable IP. I don't think it's reasonable to call somebody scum of the earth because they value and don't want to give away what they created.


Nobody talked about forcing Nvidia to open up their IP. Their product is just bad from the driver point of view, and we don't want to buy it.

Also, I don't think that someone is bad for making use of crappy laws such as IP laws, but I don't think we want to have that discussion here.



