No, in 2019 all AMD GPUs from this decade support OpenGL through 4.5 and support Vulkan, but they still really don't have a great OpenCL situation (ROCm is out of tree on every distro and still only supports parts of 2.0).
For gaming though, there's no reason not to get an AMD GPU. They are at near performance parity with Nvidia relative to their Windows performance, they work with the built-in drivers on every distro out of the box, and the only footgun to watch out for is that new hardware generally takes one Mesa feature-release cycle to get stable after launch. You even get hardware-accelerated H.264 encoding and decoding (and VPx on some chips) via VA-API. All on top of the fundamental point that they are much more freedom-respecting than Nvidia.
Stop giving Nvidia your money to be screwed over with. CUDA, their RTX crap, G-Sync, PhysX, Nvidia "Gameworks", and much more are all anti-competitive, exploitative, user-hostile moves meant to screw over competition and customers alike. Nvidia is one of the most reprehensible companies out there, among peers like Oracle. AMD isn't a selfless, helpless angel of a company, but when their products are competitive, and in many ways better (such as supporting Wayland), stop giving such a hostile business your money.
To be fair, only Intel has good OpenCL 2.0+ support. NVidia isn't really pushing OpenCL and AMD's ROCm OpenCL driver is still a bit unstable.
AMD's OpenCL 2.0 support has always been poor. The Windows OpenCL 2.0 tooling didn't have a debugger, for example; it really wasn't a good development environment at all. Only OpenCL 1.2 had decent debuggers, profilers, or analysis tools.
Frankly, it seems like OpenCL 2.0 as a whole is a failure. All the code I've personally seen is OpenCL 1.2. Intel is pushing ICC / autovectorization, NVidia is pushing CUDA, and AMD was pushing HSA, and maybe now HCC / HIP / ROCm. No company wants to champion OpenCL 2.0.
Video game shaders are written for Vulkan, DirectX (HLSL), or OpenGL (GLSL). They exist independently of the dedicated compute (CUDA / OpenCL) world.
CUDA, OpenMP 4.5 device offload, and ROC / HIP / HCC are the compute solutions that seem to have the best chances in the future... since OpenCL 2.0 is just kinda blah. AMD's ROCm stack still needs more development, but it does seem to be improving steadily.
I know everyone wants to just hold hands, sing Kumbaya and load SYCL -> OpenCL -> SPIR-V neutral code on everyone's GPUs, but that's just not how the world works right now. And I have my doubts that it will ever be a valid path forward.
The hidden message is that CUDA -> Clang -> NVidia is a partially shared stack with HCC -> Clang -> AMD. So LLVM seems to be the common denominator.
And they are doing it again with Vulkan, with a large majority looking for higher-level wrappers instead of dealing directly with its API and its increasing number of extensions.
This assertion makes no sense. The whole reason these APIs are specified in terms of C is that once a C API is available, it's trivial to develop bindings in any conceivable language. You can't do that if you opt for a flavor of the month.
Furthermore, GPGPU applications are performance-driven, and C is unbeatable in this domain. Thus, by providing a standard API in C and by enabling standard implementations to be written in C, you avoid forcing end-users to pay a performance tax just because someone had an irrational fear of C.
One of the reasons CUDA beat OpenCL was that enough people preferred C++ or Fortran to C and CUDA was happy to accommodate them while OpenCL wasn't.
Nonsense. We're talking about an API. C provides a bare-bones interface that doesn't impose any performance penalty, and just because the API is C, nothing stops anyone from implementing the core functionality in the best tool they can find.
That's like complaining that an air conditioner doesn't cool the room as well as others just because it uses a type F electrical socket instead of a type C plug.
I don't think we are. We are talking about languages.
In CUDA, compute kernels are written in a language called "CUDA C++." In OpenCL 1.2, compute kernels are written in a language called "OpenCL C." Somebody else probably could have (likely did) implement a compiler for their own version of C++ for OpenCL kernels, but the point 'pjmlp was making is that the standard platform did not enable C++ to be used for kernels until long after it was available in CUDA.
It was NVidia that created a C++ binding for Vulkan instead of plain C, and AMD that had to create higher-level bindings before the CAD industry would even bother to look into Vulkan.
This blind advocacy for straight C APIs will turn Vulkan into another OpenCL.
CUDA is clearly performance-driven, and offers a more mature C++ model.
Template functions are a type-safe way to build different (but similar) compute kernels. It's far easier to use C++ templates, constexpr, and the like to generate constant code than to use C-based macros.
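To make that concrete, here's a minimal sketch (the types, names, and launch parameters are made up) of a templated CUDA C++ kernel: each instantiation is a separate, type-checked kernel, which in OpenCL C you'd typically have to emulate with string pasting or C macros.

    // One kernel template, many type-safe specializations.
    template <typename T, int BlockSize>
    __global__ void scale(T* data, T factor, int n) {
        int i = blockIdx.x * BlockSize + threadIdx.x;
        if (i < n) data[i] *= factor;
    }

    // Host side: each line below produces a distinct, compile-time-checked kernel.
    // scale<float, 256><<<(n + 255) / 256, 256>>>(d_float, 2.0f, n);
    // scale<double, 128><<<(n + 127) / 128, 128>>>(d_double, 0.5, n);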
In practice, CUDA C++ beats OpenCL C in performance. There's a reason why it is so popular, despite being proprietary and locked down to one company.
The real issue IMO was the split-source model. CUDA's single-source or HCC's single-source approach means all your structs and classes work across the GPU and CPU.
If you have a complex data structure to pass data between the CPU and GPU, you can share all your code in CUDA (or AMD's HCC). But in OpenCL, you have to write a C / C++ / Python version of it, and then rewrite an OpenCL C version of it.
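As a rough illustration (the struct and field names are invented), the single-source model looks like this: the same definition compiles for both host and device, so there's nothing to keep in sync by hand.

    struct Particle {          // visible to both CPU and GPU code
        float3 pos;
        float3 vel;
        float  mass;
    };

    __global__ void step(Particle* p, float dt, int n) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n) {
            p[i].pos.x += p[i].vel.x * dt;
            p[i].pos.y += p[i].vel.y * dt;
            p[i].pos.z += p[i].vel.z * dt;
        }
    }

In split-source OpenCL 1.2 you'd re-declare Particle inside the OpenCL C kernel string and hope the two copies never drift apart.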
OpenCL C is driven by a runtime compiler, which just causes issues in practice: the compiler is embedded in the device driver.
Since AMD's OpenCL compiler is buggy, this means that different versions of AMD's drivers will segfault on different sets of code. As in, your single OpenCL program may work on 19.1.1 AMD Drivers, but it may segfault on version 18.7.2.
The single-source compile-ahead-of-time methodology means that compiler bugs stay in developer land. IIRC, NVidia CUDA also had some bugs, but you can just rewrite your code to handle it (or upgrade your developer's compilers when the fix becomes available).
That's simply not possible with OpenCL's model.
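For anyone who hasn't used the API, here's a bare-bones sketch of the host side (error handling omitted, standard OpenCL calls) showing where the driver-embedded compiler comes in: the kernel source is a string that gets built at run time by whatever compiler the installed driver ships.

    #include <CL/cl.h>

    int main() {
        cl_platform_id plat; cl_device_id dev; cl_int err;
        clGetPlatformIDs(1, &plat, nullptr);
        clGetDeviceIDs(plat, CL_DEVICE_TYPE_GPU, 1, &dev, nullptr);
        cl_context ctx = clCreateContext(nullptr, 1, &dev, nullptr, nullptr, &err);

        const char* src =
            "__kernel void scale(__global float* d, float f) {"
            "    d[get_global_id(0)] *= f; }";
        cl_program prog = clCreateProgramWithSource(ctx, 1, &src, nullptr, &err);
        // Compiled here, at run time, by the driver's embedded compiler --
        // so a driver update can change or break the build on end users' machines.
        return clBuildProgram(prog, 1, &dev, "", nullptr, nullptr);
    }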
If AMD made something that'd beat my 1080 Ti for a reasonable price, I'd definitely buy it. I certainly don't like Nvidia's Linux drivers, but the majority of my non-IGPU needs are Windows-based, so it's not as much of an issue. If I exclusively used Linux on my high-end PCs, I'd likely be more willing to lose some raw performance to go with AMD.
I think the main issue with AMD is that their compute drivers are clearly behind NVidia's. However, their ROCm development is now on Github, so we can publicly see releases and various development actions. AMD has been active on Github, so the drivers are clearly improving.
But I think it is surprising to see just how far behind they are. ROCm is rewriting OpenCL from scratch, HIP / HCC / etc. etc. is built on top of C++ AMP but otherwise seems to be built from scratch as well. As such, there are still major issues like "ROCm / OpenCL doesn't work with Blender 2.79 yet".
And since ROCm / OpenCL is a different compiler, it has different performance characteristics compared to AMDGPU-PRO (the old OpenCL compiler). So code that worked quickly on AMDGPU-PRO (ex: LuxRender) may work slowly on ROCm / OpenCL (or worst case: not at all, due to compiler errors or whatnot).
EDIT: And the documentation... NVidia offers extremely good documentation. Not only a complete CUDA guide, but a "performance" guide, documented latencies on various instructions (not like Agner Fog level, but useful to understand which instructions are faster than others), etc. etc. AMD used to have an "OpenCL Optimization Guide" with similar information, but it hasn't been updated since the 7970.
EDIT: AMD's Vega ISA documentation is lovely though. But it's a bit too low-level, and while it gives a great idea of how the GPU executes at an assembly level, it doesn't really have much about how OpenCL relates to it, or optimization tips for that matter. There are certainly nifty features, like DPP, or ds_permute instructions which probably can be used in a Bitonic Sort or something, but there's almost no "OpenCL-level" guide to how to use those instructions. (Aside from https://gpuopen.com/amd-gcn-assembly-cross-lane-operations/ -- that's basically the best you've got.)
That's just the reality of the situation right now for anyone looking into AMD Compute. I'm hopeful that the situation will change as AMD works on fixing bugs and developing (there have been a LOT of development items pushed to their Github repo in the past year). But there's just so much software to be written to have AMD catch up to NVidia. Not just code, but also documentation of their GPUs.
I'm still very much looking forward to the Radeon VII due to the memory bandwidth, since I'm currently working on bandwidth-constrained CFD simulations. But that's a specific usecase and I write most things from scratch anyway.
If you really can use those 500GB/s HBM2 stacks + 10+ TFlops of power, the Vega is absolutely a monster, at far cheaper prices than the 2080.
I really wonder why video games FPS numbers are so much better on NVidia. The compute power is clearly there, but it just doesn't show in FPS tests.
Anyway, my custom code tests are to try and build a custom constraint solver for a particular game AI I'm writing. Constraint solvers share similarities with relational databases (in particular, the relational join operator), which have been accelerated on GPUs before.
So I too am a bit fortunate that my specific use cases actually enable me to try ROCm. But any "popular" thing (Deep Learning, matrix multiplications, etc. etc.) benefits so heavily from CUDA's ecosystem that it's hard to say no to NVidia these days. CUDA is just more mature, with more libraries that help the programmer.
AMD's system is still "some assembly required", especially if you run into a compiler bug or care about performance... (gotta study up on that Vega ISA...) And unfortunately, GPU Assembly language is a fair bit more mysterious than CPU Assembly Language. But I expect any decent low-level programmer to figure it out eventually...
As one example, I have a recurring task that runs on my GPU in the background, and I sleep next to the computer that does that. Since I don't want it to be too noisy, and it is acceptable for it to take longer to run while I'm asleep, I have a cron job which changes the power cap through sysfs to a more reasonable 45W (and at those levels, it's much more efficient anyhow, especially with my tuned voltages) at night.
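For the curious, the mechanism is just a write to the amdgpu hwmon power cap file; the exact path and hwmon index below are assumptions and vary per machine, but the hwmon value is expressed in microwatts.

    #include <fstream>

    // Hypothetical sketch of the night-time cap described above; the real
    // version would live in a cron job. Path/card index are assumed.
    int main() {
        std::ofstream cap("/sys/class/drm/card0/device/hwmon/hwmon0/power1_cap");
        cap << 45000000;   // 45 W, since power1_cap is in microwatts
        return cap.good() ? 0 : 1;
    }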
> I really wonder why video games FPS numbers are so much better on NVidia. The compute power is clearly there, but it just doesn't show in FPS tests.
Drivers are hard, and AMD has sorta just been getting around to doing them well. The Mesa OpenGL drivers are usually faster than AMDGPU-PRO at OpenGL, and RADV is often faster than AMDGPU-PRO Vulkan (and AMDVLK).
I've been hoping these last few years that AMD would try to ship Mesa on Windows (i.e., add a state tracker for the low level APIs underlying D3D), and save themselves the effort. As far as I can tell, there is no IP issue preventing them from doing that (including if they have to ship a proprietary version with some code they don't own). There still seems to be low-hanging fruit in Mesa, but the performance is already usually better.
I'm glad AMD is finally catching up, but a savings of only $51 an entire year later doesn't exactly sound like a particularly great deal to me.
NVidia's 2080 (roughly equivalent to the 1080 Ti) is also $699 to $799, depending on which model you get. It's the nature of how the process nodes work now.
Rumor is that the lower-end of the market will get price/performance upgrades, as maybe small-7nm chips will have enough yield to actually give cost-savings. But that's a bit of "hopes and dreams" leaking in, as opposed to any hard data.
For now, it is clear that 300mm^2 7nm chips (like the Radeon VII) are going to be costly. Probably due to low yields, but its hard to know for sure. Note that Zen2 and Apple chips are all at around 100mm^2 or so (which seems to indicate that yields are fine for small chips... but even then, Apple's Phones definitely increased in price as they hit 7nm)
Implemented by AMD too, under the moniker "HIP."
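Roughly speaking, HIP mirrors the CUDA runtime API almost call-for-call, so the same kernel source can target either vendor's back end; a minimal sketch (names invented):

    #include <hip/hip_runtime.h>

    // Same shape as a CUDA kernel, compiled for AMD (ROCm) or NVIDIA targets.
    __global__ void axpy(float a, const float* x, float* y, int n) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n) y[i] += a * x[i];
    }

    // Host side uses hipMalloc / hipMemcpy / kernel launches that correspond
    // directly to cudaMalloc / cudaMemcpy / <<<...>>> in CUDA, which is what
    // the "hipify" translation tools rely on.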
> their RTX crap
Vendor-independent via DXR, soon to be available through Vulkan as well.
Nvidia GPUs now work with freesync monitors
Open source, as of December.
I want to feel good about AMD, but they have thus far failed to build a stable platform around their GPGPU stack(s). Nvidia has done a pretty good job with making CUDA a stable platform for SW development, however anti-competitive you might think it is.
It's forcing nVidia's hand. If they try to keep everything proprietary and the market starts to shift to the open alternatives because everybody hates that, everyone who used their solutions to begin with and has to throw them out and start over would be displeased. So their only hope is to try to be just open enough that people continue to use their stuff. But they still suck -- look at the state of the open source drivers for their hardware.
And they should hardly be rewarded for doing the wrong thing for as long as the market would bear. Forgive them after they fall below 40% market share.
Point is, if you are a user of open source software, stay away from Nvidia products.
I can confirm that besides Intel, AMD plays nicely with Wayland too, am typing this from a Wayland GNOME session using the open-source AMDGPU drivers.
I was hyped about RTX when it came out, and I'm all for more performant raytracing, but my hype has seriously waned since the RTX release due to the lack of support. Until we have more games that support it, I'm inclined to agree that there is no reason to include it when deciding between NVIDIA and AMD at this time. I would argue that by the time raytracing in gaming is more widespread, there will be a more open and accessible solution, likely supported (or built) by AMD.
Cemu, yuzu, and any other CPU-intensive emulators which only support OpenGL.
The Mesa drivers are much faster (you can get 40% more performance out of Cemu by running it on Linux via Wine), but this doesn't help Windows AMD users.
RTX is meant to be linked to "raytracing".
Raytracing in general renders a scene by computing how photons are affected by matter - e.g. a full reflection off an absolutely smooth, non-absorbing surface, or a partial reflection and path divergence through liquids, etc.
In the simulation, the photon that "bounces off" a surface is then "rebounced" by another surface and so on, and this builds up a picture similar to the one we see in the real world.
RTX maybe cuts the whole "rebouncing" and generation of photons a bit short, meaning that there isn't really any new next-gen technology here; it's just a bit more processing power available to do some additional parallel, short/semi-pure raytracing work, which doesn't hold up when your scene is complex and needs many reflections "rebounced" many times.
Again: this is just my initial understanding.
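To put the "rebouncing" idea in code, here's a toy sketch (not a real renderer; the constants are arbitrary): every bounce spawns another trace call, so cost and image quality both scale with how deep the bounce budget goes, and that budget is exactly what hardware ray tracing tends to keep small.

    #include <cstdio>

    struct Color { float r, g, b; };

    // Pretend shading: some direct light plus an attenuated "rebounce".
    Color shade(int bouncesLeft) {
        Color direct{0.2f, 0.2f, 0.2f};
        if (bouncesLeft == 0) return direct;          // budget exhausted
        Color indirect = shade(bouncesLeft - 1);      // the rebounce
        return {direct.r + 0.5f * indirect.r,
                direct.g + 0.5f * indirect.g,
                direct.b + 0.5f * indirect.b};
    }

    int main() {
        for (int b = 0; b <= 4; ++b)
            std::printf("bounces=%d brightness=%.3f\n", b, shade(b).r);
    }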
No, they don't work with any distro that uses Linux-libre.
Do you have similar issues with the vast majority of software companies out there that don't open source their product?
Whether it's Microsoft, SAS, Wolfram Research or Adobe, Synopsys, Autodesk, or Blizzard, or any mom-and-pop software company that solves some niche problem. It's all closed source and "anti-competitive".
Are they just as monopolist, exploitative and user-hostile?
If not, what's the difference? Just the fact that they sell hardware along with their software?
It's perfectly reasonable to be concerned about a vendor going bankrupt and choosing not to support their product. If that's the issue, just don't buy their products.
If one thinks that CUDA functionality is really that important, then isn't that proof that it's valuable IP? Nobody should be forced to open up valuable IP. I don't think it's reasonable to call somebody scum of the earth because they value and don't want to give away what they created.
Also, I don't think that someone is bad for making use of crappy laws such as IP laws, but I don't think we want to have that discussion here.
I haven't done any kind of elaborate benchmarks, but as someone who runs Linux full-time, I want to support companies that make my life a bit easier.
That said, I have had some issues with my computer showing weird graphical glitches and then crashing... I don't know if that's the driver's fault, but I never had this with my NVidia or Intel cards...
I seem to remember at around the same time that the Intel open source drivers went through a number of regressions.
In the past I've had really bad experiences with ATI's GPUs. My 2016 experience would certainly allay my fears about buying AMD.
Some time ago, I bought an Nvidia card. It works like a charm with the closed driver on Linux and Windows. I mainly do games on Linux/Windows, some GPGPU (machine learning with TensorFlow), and the usual stuff. I couldn't be happier... except if it were open source ;-)
Since switching to the AMD laptop, my experience has been smooth. My only worry is the upgrade path: there aren't that many high-performance AMD laptops out there, but my next purchase is definitely AMD.
I think I ended up bisecting the changelogs to find that they had dropped support two versions back.
After a couple of times of this, what I saw happening (and I am not saying this was it in your case) was that my X config file was being replaced/updated, which really just broke everything. So I got in the habit of always making a backup of that file. Usually, when dropped at the console, I could just back up the config file there, then copy my old file over, and everything would work perfectly on restart.
Except this last time (a few months ago) - but it was inevitable it would happen, and it was entirely my fault.
I had a need a couple of years back to be able to use the latest C++11-capable version of gcc - but 14.04 LTS didn't have it available, and there weren't any backports. So I decided to "wing it" from scratch, compiling a new version.
Then I found myself in dependency hell - which I also got past through a variety of updates for my Ubuntu, or via download and install, etc. It was a complete mess, but in the end I got it working...
...until I tried to update - the entire update system was fairly broken, so no moving forward from 14.04 LTS.
But I thought I could install NVidia's latest proprietary driver - and it needed the compiler and other parts (for what reason I don't know) and it died a horrible death, leaving me with no good options for the driver. I had to fall back to the open source nouveau driver (yuck) just to get my desktop back. But things were pretty well hosed.
Fortunately my OS was on a separate partition and drive, so I bit the bullet and did a reinstall and upgrade (to Budgie Desktop 18.04 LTS), and vowed to never do any hand compile-and-install stuff again (next time if I need such a thing, it's going to be in a VM or containerized).
I had issues with the proprietary nvidia drivers just today and it took me about 1 hour to fix (arch linux).
My attitude was the same as yours, but it has shifted in the last year or so. Maybe just because I want that sick CPU :)
At least, I personally didn't have that experience on Linux with the proprietary driver when the Windows gaming rig did. They likely determined it wasn't worth the effort, or that Linux users would be more likely to lash back at the intrusion. Or it could have been a later target but still on their plan...
Have they changed this and forced their telemetry collection onto the Linux desktop yet?
I doubt they'd prereq it to install drivers for their ML or workstation cards. I'm a little more worried about their gaming/desktop cards.
If you want to install just the driver, you have to extract the driver from their executable and manually handle updates, or use a third party package manager like chocolatey.
Most people won't know this is possible. For the vast majority, the trade-off is giving up unknown bits of their information in exchange for security and stability updates.
Should that really be the default state of things?
It seems that everyone has a different experience on the subject! Just one clarification: I had an ATI card (laptop) from 2005 to 2010, then an ATI (desktop) from 2010 to 2017.
Between a 4850, 7870, 290, and 580, I've been a satisfied customer. It was rough waters in 2012, but nowadays it's flawless.
On the CPU side though... I do want to make a big upgrade this year. I've had a 4770k since release, and 8 cores sounds pretty juicy. But AMD still has their proprietary PSP, and unlike with the Intel ME I have no way to disable it. While it sucks giving Intel money when they have been no help in disabling their government backdoor into my computer, the fact that the community has disabled it (assuming ME cleaner will work properly on whatever Intel CPUs come out this year) makes me lean towards buying another Intel chip. Not because I want to support them, but because AMD hasn't given me an alternative yet.
Well, Intel actually seems to get their open source driver support in far enough ahead of time these days that you can just load the newest Ubuntu with them and it works, but that wasn't the case 5 years ago.
Anyway, AMD has actual graphics cards, and the quality of the drivers they open source is usually very good.
That is hilariously false. I work on the drivers. They're open source. Go take a look.
My friend bought the latest Intel NUC with AMD Vega graphics, and he could not even get Linux to boot on it.
Meanwhile I have Nvidia GTX970 on my desktop PC and everything works fine, even G-Sync works. I have used Nvidia cards with Linux for 10 years now and I have not had any issues like I have with AMD now.
To me it seems like AMD dumping their drivers as open source is a call for help on maintaining them rather than being all friendly to community.
Intel is a bit faster about this but it's not like they have perfect day 1 kernel support either. i.e. temperature monitoring drivers for both first gen Ryzen and Coffee Lake landed in 4.15. At the time Ryzen was almost a year old and Coffee Lake was about half a year old.
1 - https://www.phoronix.com
I've had a Ryzen 1700 since they first came out so I was following the Linux support for that fairly closely at the time, mostly by browsing Phoronix fairly frequently. Thankfully after a few months I didn't need to because things were working pretty well by then, other than the thermal monitoring that took quite a while to land. Linux and brand new hardware rarely seem to get along well regardless of the vendor unfortunately.
It could be a nice way to learn a tool like Binary Ninja or IDA (pro).
It's probably some DeviceIoControl call. Once you know the call's 12-bit (I think?) ID from the userland code, it should be relatively straightforward to follow the driver code to see where it ends up and what hardware registers it uses.
Or maybe it's just something through ACPI?
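If it is a DeviceIoControl path, the user-mode side typically looks something like this sketch; the device name and function number here are made up, and the real ones are exactly what you'd pull out of the vendor's binaries with a disassembler.

    #include <windows.h>
    #include <winioctl.h>

    int main() {
        // Hypothetical device name -- the real one comes from reversing.
        HANDLE dev = CreateFileA("\\\\.\\SomeVendorDevice",
                                 GENERIC_READ | GENERIC_WRITE, 0, nullptr,
                                 OPEN_EXISTING, 0, nullptr);
        if (dev == INVALID_HANDLE_VALUE) return 1;

        // CTL_CODE packs a 12-bit function number (0x801 here, invented)
        // into the IOCTL id that the driver dispatches on.
        DWORD code = CTL_CODE(FILE_DEVICE_UNKNOWN, 0x801,
                              METHOD_BUFFERED, FILE_ANY_ACCESS);
        DWORD out = 0, bytes = 0;
        DeviceIoControl(dev, code, nullptr, 0, &out, sizeof(out), &bytes, nullptr);
        CloseHandle(dev);
        return 0;
    }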
I get why, you have stuff to do and Nvidia performs better, but it's still a little annoying.
OpenBSD seems to be the only open source operating system that suggests that you get an AMD card (or use Intel integrated graphics).
Personally, I do prefer to avoid the headaches from day 1, so it's AMD or Intel.
Same thing with a GTX 760 (currently using v390.42) on Gentoo: I kept that server running multiple times for many weeks at a time and never had crashes/weird things happening while doing GPU-mining, playing or just having the GPU idle while using the CPU.
As for setting things up, I always just replaced the old card with the new one and that was it.
>>a server and a media center will do fine even without proprietary drivers...
Honestly: not sure (never tried). My big question mark involves GPU frequency scaling (maybe now an issue only with "Nouveau"?) when using video filters (e.g. framerate sync and/or nice upscaling filters).
"What are you smoking" is a US colloquialism that means that one's statement is so nonsensical that the most logical explanation is that the speaker was smoking marijuana at the time the statement was made (because the implied alternative is that the speaker is crazy or quite daft).
'When disagreeing, please reply to the argument instead of calling names. "That is idiotic; 1 + 1 is 2, not 3" can be shortened to "1 + 1 is 2, not 3."'
This comment can be shortened to eliminate the first sentence, which is needlessly rude.
US colloquialisms are perfectly appropriate on this site.
And funded by a Brit :-)
Honestly, that's not how today's world works. Sites like Reddit and HN obviously have an international audience. It is not meant for people from tech-centric U.S. cities, but rather to be a sort of sales channel for YC, which BTW has well-known companies founded by non-Americans on its list, such as Gitlab, Stripe etc.
> on US hardware and software
Really? Are you sure there's not a line of free/open-source code in there that's written by a non-American? It runs on Linux, I presume, whose creator is Finnish, and there are many non-U.S. contributors. If there's NGINX somewhere in the pipeline, now you've got some Kazakhstani/Russian code in there. If any part was ever touched by a JetBrains IDE, more evil Russians were involved, etc.
Please keep the nationalistic rhetoric down.
P.S. Did you know that paper is a Chinese technology? Yeah, the same Chinese who keep the U.S. down by hyping climate change! I hope you're not using it! \s
Who currently lives and works in the United States, last I checked.
Not to say I disagree with your point, of course; just pointing out the irony of condemning someone for putting undue emphasis on a particular nation's contributions to something while inadvertently doing the same in the process.
Because (no offense to the nouveau guys, you do a great job, all things considered) the open source one is in pretty bad shape, and that is completely the fault of Nvidia.
Having to separately compile their driver (which does not conform to kernel standards) delivers a subpar experience for Linux users and taints the kernel. If they don't care about Linux users and won't help develop the open source driver, then I don't see any reason to buy their hardware. I'd rather spend it on hardware manufacturers that try to support me as a customer.
Sounds like things are much better these days.
If I build another PC, I'm going to very strongly prefer an AMD card.
There is unfortunately a little boot issue currently (amdgpu conflicts with the EFI framebuffer, so when booting with UEFI you have to turn efifb off, resulting in no display after the bootloader and until the GPU driver loads). But other than that, I'm very happy with amdgpu & Mesa.
There's regular display corruption and flickering on the desktop, most significantly on the second display.
All I need/want is basic 2D composition of the desktop, so maybe their 3D acceleration works better. But there doesn't seem to be any reason to put up with the crappy re-compilation path just to get kinda OK desktop performance with a heap of bugs.
And my god it is fast...
Not in the recent years. The trend has been positive for AMD and negative for Nvidia at least among Linux users for quite some time already.
AMD's OpenCL story has been rather flaky until recently, but supposedly ROCm is the most complete and open option there is now. You can also use Vulkan for compute purposes, as far as I know. And Vulkan support isn't behind Nvidia in any way.
Some further Vulkan+OpenCL interop is still planned by Khronos.
I will reevaluate both brands for the next switch. I would like to use AMD because of its friendliness/efforts towards open-source, but it has to perform (not "top" but at least "good") & be stable & be silent.
>>As for proof, there are lots of sites..
Please post direct links to the specific pages that allow us to see such a direct comparison, or at least data that can be compared.
What I got in return was a downgrade of the available GPGPU features; it now requires the Windows drivers for me to actually make use of the DirectX 11 capabilities and accelerated video decoding, as the radeon driver for the APU only covers a subset of what fglrx was capable of.
So yeah, what is the point of supporting AMD again?
Worst Open Source driver? In my universe that might be lima (no offense to the devs, I think freedreno, etnaviv and videocore drivers are a bit better and I don't think there is a PowerVR open source driver yet)
Remember, Comcast also has software developers.
They want to bind customers to their hardware and cause them to buy newer hardware versions frequently, and there seems to be no better way to do that than via software that they can artificially age if need be, and lock down to prevent alternative solutions.
AMD separated the HDCP and DRM related silicon from the video acceleration units some time ago, to be able to open source their GPU drivers completely, sans the NDA-bound stuff. Even this was a very big step and act of generosity from them toward the Linux community.
I'm sure that the firmware contains some highly proprietary and revealing information about some of their secret sauce. So, they won't be able to do it even if they want to.
Also, if the core enablement and configuration is done in the firmware, some vendors may find themselves in a hard position, since they may be selling crippled GPUs as lower spec cards.
Last, but not least, if folks enable faulty CUs in their cards and see the faults, they may create some (albeit unjustified) noise on "teh internets", which will come back as bad press.
So, while the firmware is good for research and educational purposes, it's also a Pandora's box IMHO.
In the end, opening it up isn't any worse than opening up the kernel driver to begin with (you could apply similar arguments to that). And AMD were OK with it, and from what I've heard, DRM is really the main issue here. As usual, media lobby poisoned the technology for us.
I'm a big free and open software advocate. I primarily use free and open source software, and try to open every line of code I write. I'd like to see the firmware in the open like the drivers. I just wanted to share my understanding of the hardware. If my comments sounded otherwise, I'm sorry, my bad.
BTW, I'm not employed by AMD or ATI. I was just one of the independent members while the GPU driver beta testing was closed to outsiders.
DRM always complicates things, but it always gets broken in the end. Also, it's always a crippling pain.
So far users have been waiting for over a year and heard only silence. I suspect, given their attitude to older hardware, the required firmware will never be released.
(I personally understand that the "firmware" does the low-level stuff, and the drivers of the OS provide the abstraction through their functions, but I might be terribly wrong)
Then, if the performance is "worse" or "better" for a specific OS, it's just because the code of the open-source part (kernel and/or userland progs) and/or the app (game/application/whatever) is not written as well as on the other OSs, right?
Indirectly asking as well: even if the firmware is always the same one, is there no "part" of the firmware that is dedicated to only a specific OS?
It's unlikely that the firmware has OS-specific code. It's more likely that the firmware exposes functionality that happens to be taken advantage of by one OS's drivers more than it is by another OS's drivers -- perhaps in part because of differences in driver execution models on different kernels. Or sometimes (as with nvidia) because a proprietary closed-source Windows driver was written by the company with access to private documentation of all the firmware's features, while the community-written Linux OSS driver was written with incomplete knowledge of the firmware's features derived from reverse engineering.
For the firmware, it's hard to know; I think only someone from AMD can answer this.
It's technically possible if part of a firmware is used only by the Windows driver, for example. Beyond that, I have no idea if that's the case...
What's interesting is that there's no clear reason for the firmware not to be opened, other than that it's probably a huge amount of work to document. It looks like an in-house design, so there shouldn't be IP issues, and it's pretty mundane stuff, so there shouldn't be trade secret issues either. : /
They all seem to be geared toward CUDA, which of course is an NVidia only thing.
I've never really looked deeply into it, but are there performant options, close to CUDA, that would allow me or others to use such ML libraries on AMD GPUs?
See following links for more discussion:
It's an important feature for projects like dxvk.
The whole point of libre software is that the choice of what you use matters, but I rarely see ethical considerations trump hot rodding.