However, having a GPU driver not just be open source but in the upstream Linux kernel is a gigantic deal. Kernel development takes a long time; we have millions of lines in the amdgpu driver, and if every one of those had gone through the full lengthy review process, it would never have made it into the tree.
So it's a necessary evil. I do wish they would clean it up, though. I once sent a fix to amdgpu that made the same change in 3 different files that were largely duplicates of each other. That kind of thing wouldn't fly anywhere else in the kernel.
Another way of looking at it is -- I started playing with openwrt for a relatively small router, with 5 ports plus wifi.
I was amazed at not only the amount of openwrt code required to support the different router families and the different router models, but at the sheer amount of stuff turned on by default in the kernel just in case I might need to load a module for some obscure feature or package. I assume the same goes for a gpu driver both at the source level and in the kernel.
Yes, I did caveat my suggestion. Why not submit a fix if it's so simple ;-)
Assuming they play nice with the community, it could be a huge benefit to AMD in the long run.
Still, 2 million lines is a massive amount of code to start working on.
Proprietary drivers (especially Nvidia's) most likely have lots of similar game-specific workarounds and optimizations (even going as far as overriding shaders in games with better ones they wrote).
If anyone wants to learn more about lower-level aspects of GPUs, the Vulkan driver code I linked is one of the best places to start. It directly implements the Vulkan API on one end and talks to the kernel drivers on the other end, so it's relatively easy to follow if you're a systems programmer with an API-level understanding of graphics. Just pick a Vulkan function of your choice and start tracing through the code, e.g. vkCmdDraw: https://github.com/mesa3d/mesa/blob/master/src/amd/vulkan/ra.... The Vulkan driver calls into some of the low-level radeonsi code I linked from the Gallium tree but it isn't a Gallium-based driver, so you don't have to deal with those extra layers of abstraction.
They are enabled via driconf. Not nearly as many as I imagine you'd find in the proprietary Windows drivers, though.
The Nvidia driver is crappy, doesn't support Optimus, etc, but at least I haven't had any problems with it for as long as I've used it.
Anecdotal data and all that. I'm on a Radeon VII, pretty darn solid, will probably continue to choose AMD cards in the future. Wish the Windows driver were a bit more stable, and it's... frankly weird to be saying that in comparison to the Linux driver for the same card.
I have an AMD 5700XT in my Windows games machine, and the driver is an absolute travesty. And looking over the installed files is a horror show. Qt5WebEngineCore.dll and avcodec-58.dll, because a browser engine and ffmpeg are essential in a device driver. And why does FacebookClient.exe exist? Fuck knows.
Also don't forget that these are user-space apps that are simply bundled with the driver but not necessarily part of it. Qt5WebEngineCore.dll is most likely used by the UI portion of the driver (settings dialogs, radeon software etc.), same with the ffmpeg dll and the facebook client.
NVIDIA does the exact same thing btw. - see
For anyone using Nvidia on Windows, here's a useful tool to carve out most of the trash from the driver prior to installing.
I would only use AMD cards in my Linux boxes. The nvidia drivers/cards pale in comparison.
I'm hoping AMD's next gen turns out to be competitive with the RTX3000 series for my next GPU for the same reason.
However, don't expect a new Radeon GPU to be well supported on day of release, expect 1 kernel release cycle until it basically works, and one more until it has most of the bugs ironed out, and then wait until your favorite distro gets that kernel. So you're looking at 3 to 9 months depending on what distro you use.
I'm personally going to be looking for people selling their RX 5700, to replace my RX 480 ...
Intel simply has no closed source driver for Linux. New hardware is often supported/merged before it is even sold. AMD is trying the same, but not there yet.
Typically the entire system will freeze (speakers will continue to play whatever was in the short audio buffer - pretty awful) for 10-15s, then the driver will detect the hang and reboot the iGPU. Happens much more frequently (every ~15m) when using more graphically intense programs. I can't use blender because sometimes when it hangs it won't reset and requires a full reboot.
There are dozens of issues about it and related problems in Intel's drm fork of the kernel. I (finally) posted a bug report about it months ago, since it seemed to have gotten worse after 5.4, but never heard back from them.
All this to say - be wary of Intel graphics on linux.
In general, it's kind of a crapshoot no matter which way you go, and expect pain if the gpu chipset is less than a year old.
How is that? I think Mesa provides state of the art OpenGL and Vulkan support, especially with work on ACO. Nvidia doesn't have any edge in that anymore. They did a few years ago still, but not today.
Mesa does implement a lot of stuff, but they do not take much advantage of the optimization opportunities the higher-level parts of the API allow. From what I remember, until AMD put some devs on it they didn't care about supporting the entire API at all.
Vulkan support is most likely good though.
(EDIT: yes, "display lists are deprecated", but this is irrelevant: the API is there, available, and works; it works great on Nvidia and still very well on AMD's Windows driver, and a lot of applications use it. Khronos splitting the API into core/compatibility was a mistake that made everything more complicated than necessary. If they wanted a clean API, they should have made something new, like they eventually did with Vulkan, and avoided messing up OpenGL.)
There is always more that could be optimized, especially when it comes to niche use cases, but generally Mesa/radeonsi do a decent job of making things fast.
> yes, "display lists are deprecated", but this is irrelevant, the API is there, available and works and works great on Nvidia and still very good on AMD Windows driver and a lot of applications use it
By "lot of applications" you mean some workstation applications that refuse to upgrade their code. You can still use AMD's closed source driver on Linux if you need optimizations for those. If you don't (and most people won't) then Mesa works extremely well.
> Khronos splitting the API to core/compatibility was a mistake that made everything more complicated than necessary when what they should have done if they wanted a clean API would be to make something new like they eventually did with Vulkan and avoid messing up OpenGL
You could argue for drivers not providing newer features in the compatibility profile (and Mesa did that until recently) but as long as there are customers demanding support for newer features while refusing to move off the older APIs, this is what you will get. I don't think having OpenGL Core and OpenGL Compat sharing some of the API hurt anything here.
Sure, I didn't dispute that; what I wrote was that Nvidia's drivers are faster in some cases, based on code I've actually seen. And they used to be slower until not too long ago in that case too, so it isn't like they aren't improving. But still, Nvidia's implementation is faster.
> By "lot of applications" you mean some workstation applications that refuse to upgrade their code. You can still use AMD's closed source driver on Linux if you need optimizations for those. If you don't (and most people won't) then Mesa works extremely well.
I mean games, applications and tools, not workstation applications. Not every application uses the latest and - rarely - greatest version of everything out there, nor is every application always updated, or even under development (especially games). Those that are may have other priorities too.
But why an application uses some API is irrelevant; the important part is that the API is being used and one implementation is faster than another, showing that the other implementation has room for improvement.
> You could argue for drivers not providing newer features in the compatibility profile (and Mesa did that until recently) but as long as there are customers demanding support for newer features while refusing to move off the older APIs, this is what you will get. I don't think having OpenGL Core and OpenGL Compat sharing some of the API hurt anything here.
My point was that the split itself was a mistake (it isn't like splitting OpenGL into Core and Compatibility was a mandate from heaven - or hell - it was something Khronos came up with), and the hurt was that it made things complicated for a lot of people and split the OpenGL community into two "camps". Not everyone cares about having the best performance out there - some applications are, e.g., tools that won't even come close to using 1% of a GPU's power - but they'd still prefer to rely only on open APIs instead of some proprietary one or some library that may be abandoned next year; code written for OpenGL 1.x 25 years ago can still work fine on modern PCs, after all.
This created issues like libraries and tools only supporting one version or the other, tons of bugs and wasted time spent "migrating" to Core (or supporting both Compatibility and Core), and the invalidation of a ton of existing knowledge and books (OpenGL being backwards compatible down to 1.0 is very helpful, since you can always start at the beginning with something proven and work your way towards more modern functionality on an as-needed basis). In the end all of that was a huge waste of time, since everyone outside Apple decided that Compatibility is necessary - and Apple decided that splitting OpenGL in two halves wasn't enough, so they made everyone's life even harder and came up with a proprietary API all on their own.
And deprecated features? I think there are better things to focus on first optimization wise.
Also it is much more practical (and realistic) to have a few devs optimize a handful of API implementations than expect the thousands of devs who work on thousands of applications to do that (also why OpenGL etc isn't going anywhere).
I wouldn't say that. In all common cases they don't. And as above, deprecated features are the last thing I'd compare on. If you use something deprecated, performance shouldn't be your worry; rewriting your code should be.
The acceleration is excellent, particularly with RadeonSI and RADV, and particularly as the RADV developers (independents, Valve, and some smaller companies whose names I wish I remembered) have been making massive improvements on the shader compiler side. RADV's own shader compiler (ACO) is noticeably better than the first-party AMD LLVM stack, and RADV is substantially faster than any of the first-party AMD Vulkan drivers for both graphics and compute workloads. I hope ACO in RadeonSI becomes a thing; I think it will be a major improvement.
Message to anyone listening from AMD: maybe look into making ACO your primary target rather than LLVM, it is clearly a better design for your GPUs, it has substantially less overhead, and there's no legal reason it can't be a part of all of your drivers.
As for kernel support, it is often same-day or at least it can access the displays on launch day, provided you have the latest stable kernel. ArchLinux is rarely that far behind a new stable kernel release, so on ArchLinux, same-day support of one form or another, and full support that day or some day soon, is the norm.
Some issues I've had with a variety of AMD drivers on my current PC, off the top of my head: turning on the monitor before the PC would cause the GPU to not realize there is a monitor attached; letting the monitor go into power save mode would also cause the GPU to think the monitor was lost; settings for display scaling would be lost after every full reboot (full = a real reboot, not the fast hibernate-based one Win10 does most of the time - you get a full reboot after updates, some installs, etc.); random full system hangs when trying to play GPU-accelerated video (which is pretty much most video on the web, as well as some applications like Microsoft's new Xbox Games app); random reboots too; etc.
So I tend to be careful with updating the drivers. The last issue I had wasn't as bad as the random hangs/reboots (which fortunately haven't happened recently), but I simply couldn't launch the Crimson UI at all. I had to do a full reset and reinstall of the drivers for it to appear again.
In comparison, updating to the latest Nvidia driver when I had an Nvidia GPU (which was from the early 2000s to ~2 years ago) was basically a non-issue: I wouldn't even think twice about it, as I never had any issues.
And FWIW it was the same on Linux too: I never had issues with Nvidia's drivers there either, and performance was more or less the same (at least for OpenGL stuff). But note that I avoid stuff like Wayland, hybrid GPUs, etc. like the plague.
I have a similar issue with a Dell display attached to an AMD card. After suspending the PC, the monitor does not detect the PC at the other end of the DP cable, except for Amazon Basic cables which work for some reason. Digital standards are weird.
The fix for me was switching from Xorg to Wayland. Haven't had a problem since, apart from Steam not liking it all that much.
Now when I turn the laptop (with Radeon gfx) on, I have to turn the monitor off and on before it is recognised.
I have an issue, though, where switching off the monitor for a few days might cause the AMD card to disable the outputs and not recognize the monitor afterwards (I think it is related to the order in which I try to "wake" the monitor) - which I cannot recover from without rebooting the machine.
But this is with a machine never going into suspend or any sleep state - and I can't say if this would be the same with the NVIDIA card. I do not use the NVIDIA card for video output because the proprietary driver would regularly stop showing my desktop - or suddenly any output at all after reboot.
The integrated Intel GPU on my laptop is mostly without issues whatsoever.
On laptops I would still recommend Intel GPUs anyway for power consumption reasons - although AMD APUs are quite interesting and I don't have recent knowledge about how well they compare. The CPU and its ability to lower power consumption under sleep is also relevant there, and this was way better under Intel so far. Unless you need the increase in performance an AMD GPU/APU would offer...
We're long past the worst period for Radeon on Linux which was back in the 2000's with "fglrx" - a driver that I never managed to get working. The new stuff will run with some competence.
In Linux, the driver (including audio) seemed very robust, but I didn't find anything like a detailed control panel for the card's graphics features.
On Windows, the AMD-supplied control panel has plenty of knobs and buttons, but the driver itself seems less robust, particularly w.r.t. audio-over-HDMI.
Apparently AMD doesn't have the resources to debug these millions of lines of code, since this has been open for a year now.
Yet people still say NVIDIA on Linux has issues. They don't support Wayland and tend to lag behind with Linux-only tech in general, but the driver itself is top notch. I haven't had an nv driver crash on Linux in 10 years. It's just the same echo chamber born of the famous moment of Linus flipping NVIDIA the bird.
I haven't had a mentionable issue with AMD cards since switching to the open source driver approx. a decade ago. I have an NVIDIA 1060 card in my workstation for CUDA - every single time I put it back into a running state, I have a realistic chance of completely borking my system.
In fact I had an AMD card installed after the first two incidents, simply to have at least a chance of having working video output when the NVIDIA driver once again doesn't want to talk to the kernel.
That, and the whole practical implications and idealistic differences of having a (mostly) open source driver vs. a (mostly) closed source driver (I think we can agree that the open source NVIDIA driver is out of the discussion).
Obviously you might run into problems if you try to run very recent hardware right after availability. Kernel driver development is not ideal for cutting-edge hardware; some things might break, and it might take some time for your distro to ship the newest kernel/driver.
I am, however, very much looking forward to the new AMD GPUs. Hopefully the RX 6000 series will be near a 3080 in more than the 3 hand picked games in their teaser. Would love to use Wayland on my desktop.
Searching for amdgpu bug reports leads to:
which links to a page saying "Bugzilla is no longer in use" :-(
This is under Qubes/Xen, though, so maybe that causes extra problems. If any devs are reading, I did report it here in the end:
The first was that my distro (KDE Neon based on Ubuntu 18.04) shipped an older version of Mesa at the time, which was too old for the AMDGPU driver, so I had to add a PPA with an updated version. Since Neon updated to a 20.04 base, it works straight from a clean install. It also worked with no issues when I switched to openSUSE Leap 15.2.
The second was that DVI output was limited to single-link instead of dual-link. My monitor at the time only supported full 1440p through dual-link DVI or displayport, and the old GPU didn't have displayport. Buying a displayport cable was a quick fix, and I believe the DVI issue is fixed in the driver now.
Aside from those two minor hurdles, it has been smooth sailing, very good OpenGL performance in the games I play.
I bought my RX 5700XT shortly after release, and was using alpha/beta kernel releases and manually downloading extra files for several months afterwards just to get it running; even then, an upgrade/update might turn into a blank screen on boot. It also broke out-of-the-box support for running full VMs, which was pretty painful for me as well, and I wasn't going down that rabbit hole to try and build it myself.
YMMV of course... but that's just my take on it. I bought specifically for Linux support, but it took a few months to shake out.
Most of the Linux community has a historical hatred of Nvidia because of the driver issue so there’s a lot of relative love out in forums, but just “stable” would be a step up for me for Radeons on windows.
With native performance on official Linux games on par with or better than the Windows equivalent, and more and more games getting Linux ports due to Vulkan, I just about have no need to boot into Windows at home anymore apart from Fusion 360.
As a workstation in Windows, since I don't overclock I don't see any stability issues. Fusion 360 is fast and fluid unlike my 8 year old Sandy Bridge dinosaur at work, even after adding a GT 1030. Good quality Crucial RAM and a no-frills AsRock B450 board make for a rock-solid build. Ditto on Linux as a workstation, everything just works and works well, and it's superb for 3D modeling and music creation (two of my main hobbies).
AMD has incredible CPUs, but just buy an Nvidia GPU - especially if you are using linux.
Also, the open source driver for Nvidia (nouveau) has incredibly poor performance compared to Nvidia's proprietary driver, and lacks essential features such as reclocking for recent hardware generations:
AMD's and Intel's open source drivers are their primary offerings on Linux and have good performance across all hardware generations.
I've switched to AMD now and things are much better. Go with AMD.
The SNA acceleration architecture in the Intel Xorg driver was a disaster in terms of correctness and stability. When SNA appeared as an option it initially seemed quite fast, but didn't take long to reveal it was also quite broken vs. UXA.
I used to explicitly use UXA but for the last 5-10 years simply using modesetting has been the way to go.
Personally I think you're conflating Xorg and kernel driver issues. Xorg is basically unmaintained in general now and unfortunately SNA was the last major development in that context for the Intel driver, and it was not good.
It's true that Nvidia doesn't support Wayland properly, but that's not really an issue in my opinion. Wayland still has its own problems that mean switching from X11 isn't viable yet.
Regarding GPUs and how good they work under Linux, computing on GPUs is only a part of the discussion I would argue...
> 5 or so years of tearing
I know what people are referring to, but a less geeky person might come away from this thinking people get very emotional about bad Linux graphics drivers.
Eagerly awaiting the new AMD hardware.
AMD's ok if you have the room for the discrete card, but I wish they would invest more in integrated on-board chips.
AMD developers in that thread are chasing their tails and still haven't figured out why so many cards are having issues and why others aren't. As a consumer, that's really not inspiring at all.
Randomly locks up, random black screen, random rainbow colors all over my monitors.
With my new Nvidia 2060 which I bought to replace it; nothing. No issues. Works just fine on Manjaro.
For whatever reason, the AMD cards just get clapped on Linux.
That said I just picked up a quadro (not my choice, came with a prebuilt NUC) and I've been pleased to find that it "just works" on freebsd (I use it to realtime transcode video), so clearly great experiences are possible and I don't want to be needlessly harsh.
Personally, I'm dying for a discrete intel card. I can't recall any hiccups with intel chipsets, ever, and that matters WAY more to me than raw performance.
Could you say more about what specifically makes the driver abominable? Is it just those files with largely duplicated code?
Abstraction is one of the main sources of code complexity.
You start with one function used in 3 places, then add boolean args to it to get slightly different functionality in each place; eventually it becomes a mess of complexity.
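A toy sketch of that progression, with invented names and numbers (not actual amdgpu code):

```c
/* Hypothetical helper that started as one shared function and
 * accreted a flag for every new caller. */
static int ring_size(int is_compute, int high_priority, int legacy)
{
    int size = 1024;
    if (is_compute)
        size *= 2;      /* caller 2 wanted bigger rings */
    if (high_priority)
        size += 256;    /* caller 3 wanted extra headroom */
    if (legacy)
        size = 512;     /* caller 4: silently overrides everything above */
    return size;
}
```

Each flag interacts with the others, so understanding any single call site now requires reading every branch, which is the mess being described.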
The amdgpu driver has duplicated files for different versions of things, so it'll have thing_v6.c and thing_v7.c and thing_v8.c with a lot of duplicated functions.
The more common way of doing something like this would be to have structs of function pointers that get populated based on which GPU version you have: one file with all the shared functions, and in each GPU version's definitions you point most of the function pointers at the common implementations, overriding only the ones that genuinely have to differ. That way each shared function is defined exactly once.
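A minimal sketch of that pattern (names invented for illustration; the real amdgpu tables look different):

```c
/* Per-version ops table, populated mostly with shared functions. */
struct gpu_funcs {
    int (*init)(void);
    int (*set_clock)(int mhz);
};

/* Shared implementations, defined once. */
static int common_init(void)         { return 0; }
static int common_set_clock(int mhz) { return mhz; }

/* Only v7 actually needs its own variant. */
static int v7_set_clock(int mhz)     { return mhz * 2; }

static const struct gpu_funcs v6_funcs = {
    .init      = common_init,
    .set_clock = common_set_clock,
};

static const struct gpu_funcs v7_funcs = {
    .init      = common_init,   /* shared with v6 */
    .set_clock = v7_set_clock,  /* version-specific override */
};
```

The caller only ever goes through the table for its detected GPU version, so the duplicated-file approach and this one are behaviorally equivalent; the difference is how many copies of the identical functions exist.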
Having a quick flick through the code now, they do use structs of function pointers in each version for common operations, but they still don't abstract out the ones that are either identical or differ so little that you could special-case them.
Refactoring such a giant driver for no performance gain is going to be extremely low on AMD's todo list, so it'll probably stay like that. It just doesn't look like anything else in the kernel.
Another big issue is the use of bitfields, as much as the register duplication. Bitfields in C/C++ are a minefield if you don't lock down a known-good compiler version, because so much of their layout is technically unspecified. Often certain register fields exist for some registers of a series and not the next, or their functionality/sizing/interpretation is context dependent, or certain locks or write orders are needed for correct access; these are often handled with presence-checking macros.
IMO, if we want better driver code, it's time for GCC/Clang to nail down the bitfield layouts for the embedded use cases. This has been broken for far too long.
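To illustrate the point with a made-up register layout: the allocation order and padding of the bitfield below are up to the compiler/ABI, so it may or may not match the hardware; the mask-and-shift version pins down exact bit positions and is what register-banging code typically falls back to.

```c
#include <stdint.h>

/* Bitfield view: the order in which these fields are packed into
 * the uint32_t is implementation-defined (C11 6.7.2.1). */
union reg_bf {
    struct {
        uint32_t enable  : 1;
        uint32_t mode    : 3;
        uint32_t divider : 8;
        uint32_t rsvd    : 20;
    } f;
    uint32_t raw;
};

/* Portable alternative: explicit shifts and masks. */
#define REG_ENABLE_SHIFT 0
#define REG_ENABLE_MASK  (0x1u << REG_ENABLE_SHIFT)
#define REG_MODE_SHIFT   1
#define REG_MODE_MASK    (0x7u << REG_MODE_SHIFT)

static inline uint32_t reg_set_mode(uint32_t reg, uint32_t mode)
{
    /* Clear the field, then insert the new value, masked to width. */
    return (reg & ~REG_MODE_MASK) |
           ((mode << REG_MODE_SHIFT) & REG_MODE_MASK);
}
```

The mask/shift style is more verbose, but its behavior is identical on every conforming compiler, which is exactly the guarantee the bitfield version lacks.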
Changes like this are probably a good way to get started but I would guess the AMDGPU driver is one of the worst places to get started as a beginner.
It's one of the unusual circumstances where, unfortunately, abstraction can decrease flexibility and increase development time.
That is, it's better to have duplication than the wrong abstraction. This may also be in reference to C compilation, in that loading header files and dependencies costs more than inlined code. That's one of the goals the Go language sought to address, anyway.
So, it includes many times more register definitions than are ever used (consider that there are 8x more register definition lines than actual code lines that could use them), and it includes many sets of 16 or 64 definitions that a software developer would have collapsed into one parameterized definition (all the same except for _00, _01, _02, _03, etc.). But this is exactly what the hardware guys generated for public release, and it is to be used as-is.
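For example (offsets and names invented here), the generated headers look like the first style, where a developer would likely have written the second:

```c
/* Generated style: one define per instance, differing only by a
 * fixed stride. */
#define mmPIPE0_BASE 0x1000
#define mmPIPE1_BASE 0x1100
#define mmPIPE2_BASE 0x1200
/* ...and so on for every instance... */

/* Hand-written style: one parameterized definition covering them all. */
#define mmPIPE_BASE(n) (0x1000 + (n) * 0x100)
```

Both expand to the same constants; the parameterized form is just thousands of lines shorter when a block is instanced 16 or 64 times.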
IMHO it's kinda annoying and sad. The rest of the kernel is held to a higher standard, that's why all the other non-trivial multi-arch multi-family multi-generation code in the linux kernel is much more concise / less sloppy. It takes a lot of effort to make it that way, and commercial companies pretty much never bother, except when required by the Linux maintainers.
But, modern graphics drivers are way too complex and way too much work, and most people do want some proper modern GPU support in the kernel, so compromises have to be made. It's not too bad, just a bunch of inert header lines, git and the compiler handle them just fine I guess.
...which is arguably not compatible with the GPL:
"The source code for a work means the preferred form of the work for making modifications to it."
This is the published hardware interface for the driver, the formal public contract. You can't change it without changing the hardware itself.
If you really want to run the generator... well, the preferred form for modification is open to interpretation and if it's some proprietary tool then just getting the output is preferable to a dependency. Sometimes the rabbit hole is too deep, and we have to draw a line.
From what I can tell, most if not all of the driver is licensed under an MIT-style license. But even if it were GPL, AMD would be the licensor, so it gets to decide the “preferred form of the work”.
> a licensee who receives ... GPL license cannot compel the copyright owner to do anything.
Unless licensee in question has also contributed to a published revision of original licensor's code. And for that to work (remember the wording "preferred form for modification"), you need a form suitable for modification by a skilled stranger with little prior exposure to said work. You would otherwise get different preferred forms of modification of each contributor, which is unworkable.
That’s a nice idea, but it’s not a condition of the GPL. GPL v2 and v3 both only state, “The ‘source code’ for a work means the preferred form of the work for making modifications to it.” That definition exists because without it a licensee might try to argue that distribution of modified and then obfuscated code satisfies the source code offer condition.
Regarding a project licensed to others under the GPL, if the project owner accepts contributions under the GPL, then he becomes a licensee of the contributions. So, as you pointed out, he would need to meet the “preferred form” clause and other terms, at least as regards to the contributed portions. As you might expect, for a substantial project with many contributors, this could become very complicated. Therefore, many projects require contributions be made under a more liberal license (or even a copyright assignment) that allows the contribution to be sub-licensed to others without conditions.
Most, but not all, European jurisdictions have a legal stipulation that copyright assignments are either void or revocable even if the assignor says otherwise, except for work-for-hire. You therefore cannot release yourself from the "preferred form" requirement even by requiring a copyright assignment; otherwise you risk the case where any further published modifications to your work - not only the contributions, but any parts of those modifications that interact so closely as to be inseparable, even by the original licensor - become illegal overnight. As the GPL does not say "the form deemed preferred for modifications by the licensor(s)" but "preferred form ... for modifications", you need to apply the objective definition I stated above. It would be nice if it were stated that way explicitly, though, relieving a lot of load from judges in resolving a possible dispute over which forms are preferable for modification and which are not.
So, in the case of the project owner who (1) starts out owning all of the rights to the project, (2) incorporates code licensed from a contributor and, (3) distributes the combination, the only person who could possibly sue the project owner for copyright infringement is the contributor. The claim would only pertain to the contributor's code, because that is the only part he owns the copyright to. The project owner/defendant would raise the license as a defense and the key question would shift to whether the owner/defendant violated any of the conditions of the license.
Where the license is the GPL, one of the conditions is partially affected by the "preferred form" definition of source code. The court would look at what the owner/defendant did and whether he met that condition. Importantly, the condition and "preferred form" definition would only be considered in relation to the plaintiff's code; the owner/defendant's code wouldn't be relevant.
Regarding the contributor's code being "inseparable", that will not be the case for one very simple reason: If the contributor sues the project owner, then he must identify which portion of the code he is suing about. If he can't do that or can't show ownership of it, then he will lose.
It works like that in fully assignable IP jurisdictions (like USA), but it works like a contract of adhesion in the author's compulsory rights jurisdictions (like Germany and Czechia).
What I meant by an inseparable contribution was a significant contribution that, if eliminated, would make the entire work not resemble its current state; i.e. the line that tells derivative work apart from near-equal co-authorship (which are treated similarly in fully-assignable IP jurisdictions, yet have entirely different regimes in compulsory-rights jurisdictions). Not the entirety of the work, indeed.
> the condition and "preferred form" definition would only be considered in relation to the plaintiff's code; the owner/defendant's code wouldn't be relevant.
It would, in a compulsory rights jurisdiction, because all copyright assignments are either void or revocable at will in such jurisdictions.
I didn't believe this, so I looked at a study of EU copyright law. Rights of authors are split into moral rights and economic rights. Economic rights are transferable as property. Moral rights, however, inure to the author and are inalienable. In some countries, the moral rights include the right to withdraw the work from circulation. This right to withdraw is probably what you are referring to when you say that copyright assignments are void or revocable.
The right to withdraw a work from circulation, however, does not come for free. In Spain it is only, "after indemnification of the holders of exploitation rights for damages and prejudice." In Estonia, "The rights ... shall be exercised at the expense of the author and the author is required to compensate for damage caused to the person who used the work." In France, "... he may only exercise that right on the condition that he indemnify the assignee beforehand for any prejudice the reconsideration or withdrawal may cause him." In Romania, the right is "subject to indemnification of any holder of exploitation rights who might be prejudiced by the exercise of the said withdrawal right."
In all of the examples I could find, the withdrawal right essentially extinguishes an assignment of the economic rights. So, in a sense you are correct that an assignment is revocable. Practically, however, the author who exercises that right would be liable for damages to the assignee, which could be significant, and the author would not be able to exercise the right if he could not pay for the economic harm.
Anyway, this has been interesting and I learned something about European copyright regimes. Thanks.
 Id. at 134.
 Id. at 93.
 Id. at 173.
 Id. at 301.
1) represent it more compactly;
2) represent it in a form that can more easily be read and transformed to handle future use-cases for the data;
3) after some future restructuring of the driver, represent the data in a form that better fits with that structure.
If you have to regenerate the code using the proprietary tool in order to restructure the driver, the generated code is not "the preferred form of the work for making modifications".
And, besides, there is an excellent chance that you will never end up changing the names.
And yeah, sure, pragmatically it might not make much of a difference in this specific case, but if the AMD devs were to port their driver to a new language they wouldn't edit the C headers; they would certainly just update their generator. So the preferred form for modification is clearly not the generated C headers.
Not to mention, if all you wanted to do was change the names, maybe prefix them with something, editing the generator is _still_ clearly the preferred form for making that change.
Strikes me that AMD have supplied everything required: all the driver code in the preferred form for modification of the driver, i.e., a bunch of C files.
Some of these C files are a big long list of slightly opaque magic number defines that relate to the hardware, perhaps generated by some unreleased tool, who can say - it's all speculation at this point - but that's OK! The hardware is not the bit you're going to modify. As far as the people modifying the drivers are concerned, those numbers are never going to change. This portion of the driver is fixed.
Honestly lifting it from a header file manually is going to be easier for everyone.
They sure as shit didn't type out these header files by hand, so clearly these are not the "preferred form" for modification.
I'm not an open-source absolutist: I think the pragmatic solution Linux went with is good here. But it's silly to suggest that the driver couldn't be improved if it were more open.
It’s lists of registers and stuff like that; not things that can really be fixed by external devs.
In other words, there's more functionality that they're keeping secret? Sounds like a challenge...
Edit: so the hacker spirit is not welcome here...?
And ... you know this. Checking your comment history https://hn.algolia.com/?dateRange=all&page=0&prefix=true&que... I easily found https://news.ycombinator.com/item?id=8834863
> Intel CPUs have had undocumented features since their introduction; it's not hard to imagine their chipsets do too.
Before I did the search I thought you were one of the 10,000 (https://xkcd.com/1053/ ), surprised that others didn't share your enthusiasm. Now I don't understand your surprise.
You must surely know your comment about "the hacker spirit is not welcome here" comes across like snobbish gatekeeping, yes? At the very least, the implied lack of knowledge about undocumented features makes it seem that you aren't one to judge what the hacker spirit might be. ... which cannot be correct given your posting history.
Now I happen to know that the vast majority of it is shared between GPU generations to some extent, so someone could abstract things out manually to remove duplication, but it's a huge task.
If those headers aren't expected to change then, with regard to accountability, it's far better to have the code checked into version control and processed as is.
More importantly, if the code is already generated then there's no need to make the build system more brittle by adding a non-standard build target that depends on custom/third-party tools.
In many hardware shops the C definitions for the visible registers are generated automatically from the hardware's source code
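As a minimal sketch of what such generation might look like (the input format, register names, and block prefix here are all invented for illustration; real flows typically derive the list from the RTL or an IP-XACT-style description):

```python
# Hypothetical sketch: emitting C register defines from a machine-readable
# register list, the way many hardware shops auto-generate driver headers.
# All names and offsets below are made up for illustration.
regs = [
    # (name, byte offset, description)
    ("GFX_CTRL",   0x0000, "global control"),
    ("GFX_STATUS", 0x0004, "status flags"),
    ("GFX_FIFO",   0x0100, "command FIFO base"),
]

def emit_header(block: str, regs) -> str:
    """Render the register list as a C header full of #defines."""
    lines = ["/* Auto-generated -- do not edit. */"]
    for name, offset, desc in regs:
        lines.append(f"#define {block}_{name} 0x{offset:08X} /* {desc} */")
    return "\n".join(lines) + "\n"

print(emit_header("AMD_GFX9", regs))
```

Note that renaming or re-prefixing every define is a one-line change to the generator, which is exactly the argument upthread for why the generator, not the generated header, is the "preferred form for making modifications".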
Also, their driver is very complex, and they are constantly improving their hardware. They don't want to be dependent on getting new features and performance improvements upstreamed.
Why are these kinds of licenses even allowed? If I buy a product, surely I can do with it as I please?
Also, why doesn't TSMC slap a license on every IC that leaves their fab, taking (say) a 30% profit from every application in which their ICs are being used?
The problem is in the software (the driver) which you never can buy, only license under a long list of conditions which prohibit specific uses.
If, e.g., Nouveau could implement the interfaces needed for CUDA, you could probably try to use a 3050 in a datacenter. I bet NVidia has provisions against this turn of events, too.
Ok, so who gave software a special status over hardware? Is this desirable? Can we reverse it?
Software is rarely sold (outside of bespoke development). All the off the shelf software is essentially rented.
Software itself has no legal value - the copyright is what is considered to be property. That property can be leased or sold. This is why copyright infringement is called infringement and not theft.
When you “buy” software, you are actually entering into a lease contract to use the software (sometimes perpetual, but increasingly only temporary) which can have various terms and conditions (that you really should read, but never do). But that lease doesn’t grant you the copyright.
I don't think this way of selling (or as you say renting) stuff should be considered legal.
And then it gets even more complex when you get into online services. Game consoles are going online-only next gen. If you buy the PS5 Digital Edition and you mod your OS and Sony bans you from their servers, your console is now a brick. But in many cases it's fair to be banned, such as banning cheaters.
This means I'm not buying but renting, which is not how it is advertised.
(I avoid nvidia whenever possible)
If you buy a GPU you own it and the copy of the software it came with. You are free to use that combination as you choose, forever.
It’s not renting because you don’t have to pay rent to continue to use it. There may be software license restrictions, typically against modifying or reverse-engineering the software. However, it is an error to say that those license restrictions convert your ownership into anything like a rental agreement.
Some digital activists say that we don’t really own the devices that we buy because of license restrictions or restricted device firmware. It’s hyperbole. We do own our devices and the copies of the software they came with, even if they came with artificial limitations.
Does that sound like ownership? Can BMW employee pop over to your garage one day remove some bits of the car he thinks you shouldn't have any more?
That does not sound like ownership to me - again, think back to car ownership. Firstly tampering with your car would have been criminal damage.
Secondly, BMW does not get a say in how you use your car. They can't stop you going over the speed limit. You could get your car fixed without having to involve BMW or going to court to force their hand.
In my view this Sony case looks like compensation for breach of a lease-like contract.
Members of the class could opt out of the settlement and sue Sony individually. A court could theoretically enjoin Sony to restore the feature for those individual plaintiffs, but the plaintiffs would have to show that monetary damages would be insufficient. Generally courts don’t like to force defendants to do things when paying money would be an acceptable outcome.
> In my view this Sony case looks like compensation for breach of a lease-like contract.
I haven’t read the complaint in that case but the plaintiffs probably alleged a breach of the implied covenant of good faith and fair dealing. So, yes, possibly a breach of contract claim but not a lease. (Note: A lease is a specific form of contract in which a lessor transfers possession of property to a lessee, but retains a future interest in the property after the contract term ends.)
You do NOT own the software that comes with your GPU!
Ownership implies the ability to transfer, modify, and resell, none of which are within the rights granted by the license of said software.
It's not "rental" either - it's licensing. You don't have to become a lawyer, but knowing and understanding the difference between proprietorship (ownership) and possession is a good start. Same goes for renting vs. licensing vs. ownership.
TL;DR you do not have ownership of any software that came with any device you bought and it's not hyperbole at all.
When you purchase a consumer GPU that comes with software, you acquire the GPU, the copy of the software it came with, and a license to use the software subject to particular terms and conditions. That is what you own, no more, no less.
This is inaccurate, at least as to purchased software. A license is not a contract because the licensee is not required to do anything. A license can have conditions (restrictions), but not covenants (promises to do something). A license basically functions as a defense against a claim of infringement.
Note: For purchased software there is a contract for the sale of the software subject to the license, but that shouldn’t be confused with the license itself.
That's simply not true. You are indeed making a contract for the sale of the license itself. Otherwise subscription models wouldn't work, and you would even be legally allowed to share and resell the software, which you aren't (i.e. just because it's possible to resell an acquired license while keeping a working copy doesn't make it legal to do so).
As you correctly point out, one who sells his only license to a piece of software no longer has a license. If he kept a copy of the software and continues to use it, he is committing an act of infringement. That is the same whether the license is for a term (subscription) or perpetual.
The datacenter-versus-personal conditions of NVidia drivers attach instead to the use of the copyrighted work. These restrictions are based on the idea of an end user license agreement as an enforceable contract, either agreed-upon when the driver is downloaded or through a theory that copyright attaches to the temporary (in-memory) copy of the driver necessary to run it.
See if you can convince them to "let you" publish a benchmark of their database management system.
that the data set is large enough that it cannot fit in memory
that storage is orders of magnitude slower than memory and memory is orders of magnitude slower than processor cache
Oracle has the “best implementation” given these constraints.
Is that not the case?
First, while storage used to be orders of magnitude slower than memory, now SSD storage is just a single order of magnitude slower;
Second, in many domains now it's often practical to ensure that your data set can fit in memory. For example, if your system is for storing financial transactions (which is a prime market for Oracle), then your enterprise has to be quite large to get a terabyte of transactions and you can put a terabyte (or much more) of RAM in a database system if you choose to.
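A back-of-envelope calculation makes the point about the changed hierarchy. The bandwidth figures below are rough, illustrative values, not measurements of any particular hardware:

```python
# How long a full sequential scan of a 1 TB data set takes at assumed
# bandwidths. All bandwidth numbers are rough assumptions for illustration.
DATASET_BYTES = 1 * 10**12          # 1 TB

bandwidths = {
    "DRAM":      50 * 10**9,        # ~50 GB/s per socket (assumed)
    "NVMe SSD":   5 * 10**9,        # ~5 GB/s (assumed)
    "SATA HDD": 200 * 10**6,        # ~200 MB/s (assumed)
}

for medium, bps in bandwidths.items():
    print(f"{medium:>8}: {DATASET_BYTES / bps:8.0f} s")
```

Under these assumptions, DRAM versus NVMe is a single order of magnitude, while NVMe versus spinning disk is still more than an order of magnitude, which is the crux of the argument above.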
So how would anyone (legally) know?
As for software, well, EA has the right to ban me from their servers if I hack their games, even if I did pay for the product, and this makes sense because it ruins everyone else's experience. I don't pay for HN but if I did they still would have a right to ban my account if I start posting slurs or other abusive content.
Is it desirable? Of course it's desirable; imagine having no control over your own creations and having to deal with the consequences of other people abusing it.
EA famously uses online-only DRM in many of their modern titles; if you get banned from, say, SimCity, you can't run the game at all. There is no "offline mode".
Also, SimCity eventually got an offline mode.
For the record I am against always on DRM so I did not buy this game nor any other game that uses it. I don't believe we need to codify laws banning the practice or any such thing that requires software developers to build things they don't want to build (with the exception of critical fields such as healthcare and aviation).
It's desirable in that a one time purchase does not entitle a customer to a lifetime of server resources; they paid for the game and they can certainly keep the game, but they don't have a right to the services required by the game (those are recurring costs). This makes sense since the alternative is forcing EA to pay to host servers for people that violated their terms of service.
You are correct that it got an offline mode eventually, I overlooked this. But this demonstrates that the market corrected this problem: Enough consumers complained to force a change. Therefore, is there need for external intervention? The simple solution to always-on DRM seems to be to just avoid buying any products that use it.
Only in a limited form e.g. exhaustion doctrine prevents you from restricting resale. If someone wants to resell their exclusive Ferrari, there's nothing Ferrari can legally do (though this'll probably get you blacklisted from ever receiving an exclusive vehicle).
In general, terms can't go against existing laws and have to be 'conscionable' to be enforceable (i.e. they can't be obviously 'unfair').
Sounds like discrimination to me, and not desirable.
But I'm arguing that when you hear "X is discrimination", it's wrong to automatically conclude that X is bad or that X should be changed. There's just a narrow subset of discrimination that's immoral and should be avoided, and a narrow subset that's illegal discrimination (there's some overlap between these two subsets, but they are not exactly the same, of course). Most discrimination, and certainly the default situation, is just the reasonable human activity of applying common sense and acting according to the specific situation, instead of blindly acting the same no matter what, like robots would. It's completely normal to adapt to the specific person, to make adjustments and take custom approaches for different individuals. That definitely is discrimination, but there's nothing a priori wrong with it. For example, custom pricing is one form of discrimination: offering a discount to students or senior people is certainly discrimination, but we generally consider it entirely appropriate.
And in certain cases a lack of discrimination would be completely immoral. For example, the concept of "reasonable accommodations" is a requirement to discriminate. A policy that forbids electronic devices in an exam does not discriminate in any way and applies equally to everyone (in colloquial language one might call it a "discriminatory policy", but that's a misuse of the word to mean its exact opposite; perhaps I'm nitpicking). Yet since it forbids hearing aids for the people who need them, that non-discrimination is bad; and simply allowing all devices equally would be bad for other reasons. So the ADA and equivalent laws require discriminating and applying different rules to people with different abilities.
So if you see a practice that seems definitely bad and harmful, then "is it discrimination?" is the wrong question to ask: it may well be harmful but not discrimination, or discrimination with nothing wrong about it. These aren't edge cases; the overlap is only partial. The proper questions to ask are whether the criteria of the discrimination are fair (the up-thread issue of discriminating by wealth is certainly debatable) and whether the results of that discrimination are appropriate.
The example on ability to repay is closely related to discrimination by pure wealth, but there are businesses with even more straightforward criteria, e.g. financial services that are offered only to individuals with net worth above a certain (quite large) amount, and having less money than that automatically disqualifies you from that service even if you were able and willing to pay the involved fees.
That was not the issue. The question was whether it is desirable.
Personally, it leaves a bad taste. It reminds me of a fashion brand that doesn't sell to obese people (I can't remember the name, but it was in a documentary).
Are you allowing everybody who wants to have sex with you to have sex with you, or are you discriminating in favor of a select few / a unique person?
Discrimination is part of human nature.
Glad to see someone also came to this conclusion!
Some American politician extended copyright protection to software. The rest of the world eventually did the same.
> Is this desirable?
> Can we reverse it?
Sure. We just need to have billions of dollars just like the copyright industry. That money can buy a lot of influence.
>> Is this desirable?
So I'm sure you'd be happy if I just took the software for whatever great startup idea you'd been slaving away on for the last two years, slapped better marketing on it, and undercut you by 50% since I didn't have to employ all those pesky overpaid engineers.
It is their product at that point, because they purchased it.
You should not have the rights to control someone else's product, such as a graphics card, or whatever, after they have purchased it. It's theirs now.
However, if nvidia has sold me a functional graphics card including the driver as an unalienable part of the package that I purchased (since the driver being functional is part of the card being 'fit for purpose' of the sale), I should be free to use the driver without any unreasonable restrictions. I have legally bought [a copy] of it, it's not copyright infringement for me to run it on a computer - even if it resides in a datacenter.
My whole point is that there needs to be more effort to hack and modify these things, and that this would be "more desirable".
And orgs should be using their power to make this happen more. For example, if open source orgs can weaponize licensing agreements against nvidia in order to force them to do this, then they should, and this would be desirable.
Laptops are generally an all or nothing proposition. I wanted a laptop with a high performance CPU and the nvidia GPU just came along with it. Couldn't even disable the thing in firmware since hardware video decoding with the Intel GPU caused kernel panics.
Like you showed your disapproval of Nvidia by giving them your money anyway. So... They're right - people care enough to complain, but not buy something else, so it doesn't really matter.
Are you aware of the concept of market power, switching costs, barriers to entry, and market lock in?
If so, then that should enlighten you as to the explanation for this.
> did it actually matter?
Yes. It matters, yet still did not change consumer behavior, due to the concept of market power.
Sure, in theory you could run an open source driver, and in practice sometimes the driver won't crash, but there's no point: you could get an equally good video card with an open source driver for the same price, since you can't get the fancy card's peak performance from the open source driver.
It’s like buying a blender and then finding out that you’re not allowed to blend anything unless someone in the manufacturers’ operation approves of it.
Well, buying and licensing are not so different in Europe (first sale doctrine). The company cannot forbid you from reselling a license (exhaustion of intellectual property rights) in Europe.
However, the ability to resell a license doesn’t remove other conditions, e.g., restrictions on data center usage.
Not happy? That is what makes FOSS so appealing.
Satya Nadella bought Skyrim recently. All of it.
I don't follow the semiconductor industry closely enough to know anything about TSMC's business practices, but these kind of contracts are far from unheard of in other sectors.
What are they going to do, call you and ask how you are using the GPUs? Don't answer. Message you on Facebook? Don't answer. Visit you? Don't publish your address.
Alternatively, just don't call it a datacenter. Just call it a private internet gaming cafe or something of that sort. NVIDIA doesn't have a right to know what's actually inside.
The US will fall behind in tech if it insists on enforceability of things like this.
AMD's version is simply not supporting their version of CUDA (ROCm) on consumer cards (the Navi ones, anyway).
Monopolists can do anything with your money including sitting on their hands. Also, supporting students and hobbyists may be noble, but education is something we all pay tax for.
Also, hobbyists would be better served if they could develop their own version of cuda.
How are they going to prevent me from doing so?
We no longer buy products these days. We license them. Another form of rent that allows the true owner to maintain control. Somehow this became the norm.
There is nothing wrong with someone doing whatever they want with a product that they now own.
If someone wants to modify their own hardware, that is their right.
You could try to regulate so that what is manufactured is not gimped on its way to the consumer for ideological reasons, but in the end you'd just end up paying more for a separate physical model, because the profit margins on these advanced use cases are simply what drives GPU design.
As for the royalty licensing: TSMC is ahead in capabilities and has captured an enormous portion of the market, but it's not so far ahead that it can eat as far into customer income streams as it wants. Other manufacturers still exist and get deals; Nvidia is using Samsung 8nm for the latest round of GPUs, for example. If TSMC continues to increase its lead, then we may see that type of agreement grow, though.
Because companies would stop using TSMC chips...?
Not to mention the logistical problems to attribute "profit" to any chip in particular.
But perhaps I'm too much thinking from the perspective of what large US based businesses would do.
RTX 30 is manufactured by Samsung.
They're not in (some parts of) Europe.
NVIDIA reduces (actually reduced on die) FP64 calculation units and disables ECC RAM support on GeForce (except some Titans) so they won't be used in datacenters. Previously this worked, because most scientific calculations require FP64 and reliability matters.
But now it's the deep learning era, which doesn't need FP64, and a rare RAM error doesn't matter. So they must enforce the EULA to avoid dirt-cheap GeForce cards being used in datacenters for deep learning.
The price premium is like ~10%, which is fair.
How much is nvidia single handedly holding back innovation and new discoveries?
This is ignoring two very important things.
The first is the number of government-funded projects that burned a mountain of cash and led to nothing. Unfortunately this is the rule rather than the exception in modern times because modern government has been captured by interest groups that divert money from where it's supposed to be going to themselves, which makes everything cost ten times more than it did when the government was funding the Apollo program and ARPANET. So you can't just say "government fund more stuff" without fixing that first.
And the second is that private companies inventing stuff only to see somebody else successfully commercialize it is still causing it to be invented. And the overlap between invention and commercial success can be very little and still cause people to do it, because the reward when it happens is very large.
The saying is really that free market competition drives innovation.
Obviously patents and copyrights are government-issued monopolies, and monopolies are by definition lacking in competition.
The theory is that by granting the monopolies we get more innovation. Often the theory is wrong.
Especially when we allow the company to leverage the monopoly on the thing they actually invented into a monopoly on ancillary things that are only used in combination with that class of product.
Sent from my iPhone.
Microsoft probably started wondering internally why they don't just write their own equation editor, but didn't have time, so decided to do a crazy patch to this one and then start on a rewrite.
But yes, it did eventually get mitigated.
This is the number one usability issue with nouveau: no firmware means no re-clocking, which means bad perf.
is contradicted by the fact that just recently a version of the Windows source (old, but still) was leaked, and people did manage to successfully build and boot the leaked Windows (XP and Server 2003, IIRC) code within days of that source becoming available.
There's a talk by Bryan Cantrill about that.
They basically could not provide a fully functional OS because some marginal yet used-everywhere parts were licensed and proprietary (Bryan cites the internationalization library as an example).
while obviously not an official source, that isn't particularly surprising either.
as an additional relatively-well-known-but-possibly-incorrect bit of internet lore, right now their Linux driver is basically a wrapper around their Windows driver, so that explanation makes a lot of sense. They would have to go through and disentangle what parts they own and what needs to be stripped out / replaced for the linux version at an absolute minimum.
Now AMD is opensource? Great! However, it's still very far from perfect. You only have to take a look at the list of AMDGPU issues at freedesktop... because being opensource is easy, but working reliably in a stable manner is another matter.
And what about AMD bug tracker? It's open, so you can see the bugs. That's a plus, not a minus. Nvidia blob has all the bugs hidden somewhere, so you don't see them. It doesn't mean the blob doesn't have them.
I think it's the opposite. Some years ago, Nvidia was your only chance to have accelerated graphics on Linux. ATI/AMD didn't care about it at all, and Intel cards were not for gaming. So Nvidia made it possible to do things in Linux when nobody else allowed you to... how's that hindering the progress of anything? Especially when nobody forces you to get an Nvidia card.
> And what about AMD bug tracker? It's open, so you can see the bugs. That's a plus, not a minus. Nvidia blob has all the bugs hidden somewhere, so you don't see them. It doesn't mean the blob doesn't have them.
Yes, I didn't say Nvidia was bug free. I just said AMD drivers for Linux are, at the moment, far from perfect, despite being opensource. I'd say, for newer cards, they're worse than Nvidia's. I value opensource, but if I have to choose between having an opensource desktop crashing twice a day, vs. the Nvidia blob, of course I'd go for the latter, as much as I'd love to have a fully opensource OS.
Regarding slowing down progress, I was talking about modern desktop like Wayland compositors and so on. Nvidia was hindering it for years. And their attitude towards Nouveau is disgusting.
As for slowing down Linux desktop progress, I think it's not Nvidia's fault: you could always get a card from another vendor, although the alternatives were not as good. Well, maybe those other vendors are to blame, and not Nvidia...
Today it's less relevant, since usage of Nvidia on Linux is gradually dropping, so their damage to the progress is also diminishing thanks to that. Wayland compositors' developers can simply say - we don't support the blob and don't plan to and be done with it. In the past it was much harder, due to how many Linux users had Nvidia still while alternatives were way less viable.
They're afraid of patent trolls.
That one is fun.
Software isn't supposed to be able to cause that. That's on your monitor.
What I convinced myself of after a few minutes being sure I wasn't hallucinating was that the graphics driver was pushing out malformed data in some way or the other which was triggering bugs in the monitor hardware/firmware, which is easy to believe are plentiful. It would be an interesting project to try to track down and replicate the bug.
Meanwhile projects like Sway have a direct "Go to hell if you use nVidia, we won't let you run this code." It's bizarre that you blame nVidia for this.
It's hardly surprising that a lot of Wayland compositor developers would rather not put in a ton of extra effort to add a special case for one particular set of proprietary drivers, which they would then need to maintain and support separately from the common code path.
The only way I see nVidia succeeding here is if they clearly demonstrate that EGLStreams is a technically superior alternative to GBM, not just for their own hardware but in general, and also contribute the changes needed to support EGLStreams for all the other graphics drivers currently using GBM so that applications don't need to deal with both systems. As long as the EGLStreams code path can only be exercised in combination with the proprietary nVidia drivers it will remain a second-class citizen and projects would be well-advised to avoid it. (Drew DeVault goes into more detail in his objection to the inclusion of EGLStreams support in KWin, which I agree with 100%.)
Or they could just acknowledge that this is a Linux driver, not a Windows driver, and implement the standard Linux GBM interfaces like everyone else even if that means less shared code.
Most Linux distros will also prevent you from submitting a bug report for a kernel issue if you have a tainted kernel.