AMD Open Source Driver for Vulkan (phoronix.com)
216 points by rys on Dec 22, 2017 | 62 comments



AMD has been working for years to make itself the most Linux-friendly GPU option. Sadly, received wisdom in the Linux community still holds that AMD doesn't work on Linux and NVIDIA is the only real choice. Kind of like Windows users complaining about AMD drivers when they haven't been an issue for over five years.

I've had AMD on my Linux boxes for eight years and I haven't had an issue for the past six.


I couldn't agree more. I'm a full time Linux user and have recently switched to AMD because of their awesome open source support. My new system is the most responsive and stable Linux experience I've ever had.

I just recently built a full AMD system (Ryzen 5, Radeon RX 560) and I couldn't be happier. I've had Intel systems that were fine but couldn't keep up with GPU performance. I've also had nvidia systems that were fine performance-wise, but the closed nvidia driver has been more and more problematic lately (crashes, broken VSync, etc.).

Not only that, but it's all free software. When freesync support lands I can't imagine ever using anything but AMD under Linux.


NVidia has been more of a pain under Linux for at least the past 5 years. Even when AMD wasn't contributing much code to the open source drivers, they published hardware specifications for their cards so the mesa and DRM developers could do a halfway decent job. My Radeon R9 290 worked quite well without installing fglrx, as did my Radeon HD 7850.

I'm actually at the point where I need an upgrade for my work laptop, and I am going to seriously push for something that doesn't use NVidia graphics since they're such a pain to deal with under Linux (especially Optimus, holy crap).


I fully agree. My workstation came with an NVIDIA Quadro and it had a lot of driver issues. The proprietary drivers did not work with Wayland and with the open source drivers the screen would regularly flicker, not come out of display sleep, etc.

Based on the positive comments wrt. the amdgpu driver I bought an AMD FirePro W2100 (because its max TDP is 25W). And it's been excellent. Since GCN 1.0 and 1.1 are supported by both the radeon and amdgpu drivers, I use the kernel flags

  amdgpu.si_support=1 radeon.si_support=0 amdgpu.cik_support=1 radeon.cik_support=0
to disable radeon and use the newer amdgpu driver. Everything is incredibly smooth under Wayland at 4k@60Hz and I've had no issues at all so far.
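
(For reference, on a GRUB-based setup, which is an assumption about your distro, these flags go into /etc/default/grub and you regenerate the config:

  GRUB_CMDLINE_LINUX_DEFAULT="quiet amdgpu.si_support=1 radeon.si_support=0 amdgpu.cik_support=1 radeon.cik_support=0"
  sudo grub-mkconfig -o /boot/grub/grub.cfg

On Debian/Ubuntu, sudo update-grub does the same regeneration.)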


While NVIDIA is far from the only choice, and AMD has improved dramatically to the point where they are within reach of NVIDIA's performance, NVIDIA still dominates the high end [1] and has by far the most complete and stable OpenGL implementation. With the Mesa front end on AMD you do not even get a compatibility profile (a major disaster IMO), whereas NVIDIA not only gives you one but in some cases using it gets you a huge bang for your buck in terms of performance.

[1] https://www.phoronix.com/scan.php?page=article&item=16way-gp...


I'm curious - what would you need OpenGL compatibility profile for?


Regardless of what I'd personally need it for, it is still something that NVIDIA's drivers provide and AMD/Mesa lack, and thus a negative on Mesa's side (especially considering that NVIDIA not only provides all that extra stuff but still manages to be faster than the implementations, like Mesa and Apple's, that do not).

Now as for what I would need the compatibility profile for, it is very simple: I just find it more convenient to use, so I have a ton of code that uses it. The original idea behind the separation of profiles was that the core profile would make OpenGL easier to implement and faster, but neither of those actually happened (as I said, NVIDIA, despite implementing the entirety of OpenGL, still has the fastest implementation). I also really disagree with the idea of pushing the complex bits onto the end users of the API just to spare the implementors. It is better to have 1 developer on the driver/implementation side handle the complexity that 1000 users of the API will then enjoy, than to have each of those 1000 users invent 1000 individual solutions to that complexity, multiplying the time spent on it by 1000. (This of course applies to any tech that is "simple" only because it pushes the hard bits to its clients - see Wayland as another example.)
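
To make the convenience point concrete, here is roughly the kind of code the compatibility profile keeps alive (a minimal sketch, assuming an already-created compatibility context):

  #include <GL/gl.h>

  /* Immediate mode: legal only in a compatibility profile. Three calls and a
     colored triangle is on screen; a core profile needs a VBO, a VAO and a
     shader pair before the first triangle appears. */
  void draw_triangle(void) {
      glBegin(GL_TRIANGLES);
      glColor3f(1.0f, 0.0f, 0.0f); glVertex2f(-0.5f, -0.5f);
      glColor3f(0.0f, 1.0f, 0.0f); glVertex2f( 0.5f, -0.5f);
      glColor3f(0.0f, 0.0f, 1.0f); glVertex2f( 0.0f,  0.5f);
      glEnd();
  }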

Ok, so actually I have two reasons: one practical (I find it convenient) and one more ideological (pushing complexity upwards). Actually, I have two ideological reasons; the second (and more important) is that I dislike backwards-incompatible breakage in APIs and libraries (unless the API is meant for internal or utility use, of course, or is in a beta/development stage). That is a much bigger issue though, so let's leave it at that :-P.


The hard truth is that OpenGL 3.0 was far too tame when it came to breaking compatibility. Initially, they tried to design something much closer to Vulkan (back in 2008!), but ended up with a far too conservative final specification in order to preserve backwards compatibility. The disappointment among OpenGL users after the release of the final 3.0 specification was very real.

If you have a good understanding of how GPUs actually work (of which the OpenGL pipeline is just a very crude and simplistic abstraction), you can understand why modern APIs like Vulkan and DX12 are the way they are and also the power they hand to a capable user.

The hardware model underlying the design of OpenGL has aged. The one used for 1.0 and 1.1 is so outdated now that using it runs exactly counter to what the driver needs to do on hardware built within the last 15 years. The story gets a bit better once you get to vertex buffers, but those are not exposed in a way that lets the driver handle them without guesswork on its part. The same is true for textures and framebuffers. Even that damned global OpenGL state is a quaint relic of the past. Modern drivers should get the desired new state all at once instead of piecemeal through a chain of calls that triggers expensive recomputation of the actual GPU hardware state at each step.
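
As a hedged sketch of the difference (function and parameter names here are illustrative, not a complete program):

  #include <vulkan/vulkan.h>

  /* Classic GL delivers state piecemeal:
       glEnable(GL_BLEND); glBlendFunc(...); glDepthFunc(...);
     and the driver revalidates derived GPU state along the way. Vulkan takes
     the complete fixed-function state once, up front: */
  VkPipeline build_pipeline(VkDevice device,
                            const VkGraphicsPipelineCreateInfo *full_state) {
      VkPipeline pipeline = VK_NULL_HANDLE;
      /* Blend, depth, raster and vertex-input state all travel inside
         full_state and are baked into an immutable pipeline object; there is
         no per-draw revalidation chain. */
      vkCreateGraphicsPipelines(device, VK_NULL_HANDLE, 1, full_state,
                                NULL, &pipeline);
      return pipeline;
  }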

Instead of clinging to OpenGL 1.x or 2.x for no good reason, you should really switch to a high level drawing library or rendering engine that knows how to pass things properly to an underlying modern API.


> The hard truth is that OpenGL 3.0 was far too tame when it came to breaking compatibility.

That is your opinion; mine is that breaking compatibility is the wrong answer in all but the most extreme cases (and OpenGL's was not one of them).

Keep in mind however that I am talking about OpenGL. IMO they should indeed have created a more low level API similar to Vulkan back in 3.0, but instead of calling it OpenGL 3.0 they should have called it something else (like, I dunno, Vulkan :-P) and left OpenGL 3.0 alone as an easy to use and fully backwards compatible API, without creating an unnecessary schism (like the one we have with Apple/Mesa and everyone else today).

Of course that is what they eventually did with Vulkan, but I'd rather have had that without the damage they did to OpenGL.

> Instead of clinging to OpenGL 1.x or 2.x for no good reason

I have a good reason; I already explained it in my post.

> you should really switch to a high level drawing library or rendering engine

That library already exists and is called OpenGL with the compatibility profile. The only problem is that it isn't supported by every vendor that claims OpenGL support.


I have no idea why, but there are video games (even fairly recent ones) that require the compatibility profile. No Man's Sky and Dying Light, for example.


Mesa definitely supports the compatibility profile; I use OpenGL 1.x applications on an RX 580.


The OpenGL compatibility profile is something very specific: it is defined by the OpenGL spec as a variation of a 3.x context that is modified to also support all obsolete features of OpenGL 2.1 (as opposed to a core context that drops all the quaint immediate drawing and matrix stack functions among others). It defines a few of the OpenGL 3.x features deliberately differently than a core 3.x context to keep backwards compatibility while providing most new features. This type of context was defined to help migrate applications forward. There's a lot of old OpenGL rendering code in CAD and DCC tools that is barely maintained and these vendors pushed the compatibility profile into the specification.
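
For the curious, this is roughly how an application asks for one (a sketch using GLFW; whether it succeeds is entirely up to the driver):

  #include <GLFW/glfw3.h>

  int main(void) {
      glfwInit();
      /* Request a 3.2 context that keeps the deprecated 2.1 functionality.
         NVIDIA's driver honors this; drivers without compatibility profile
         support for that version hand back NULL instead. */
      glfwWindowHint(GLFW_CONTEXT_VERSION_MAJOR, 3);
      glfwWindowHint(GLFW_CONTEXT_VERSION_MINOR, 2);
      glfwWindowHint(GLFW_OPENGL_PROFILE, GLFW_OPENGL_COMPAT_PROFILE);
      GLFWwindow *win = glfwCreateWindow(640, 480, "compat", NULL, NULL);
      int ok = (win != NULL);   /* NULL means no compatibility context */
      glfwTerminate();
      return ok ? 0 : 1;
  }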


I bought an AMD card for my Linux desktop this year, because they have better open source drivers than nVidia :) Before that, I bought nVidia or Intel. It's a good feeling to run without that ugly nvidia-drivers blob.


Semi-relatedly and unfortunately, AI research is locked in to NVIDIA (on Linux) because all the big linear algebra and autodifferentiation frameworks work only with CUDA.


AMD is working on ROCm (https://rocm.github.io/). It is meant as an open source answer to CUDA and I hope it will now get more support.

"We are excited to present ROCm, the first open-source HPC/Hyperscale-class platform for GPU computing that’s also programming-language independent. We are bringing the UNIX philosophy of choice, minimalism and modular software development to GPU computing. The new ROCm foundation lets you choose or even develop tools and a language run time for your application."


Does anyone here know how far along this effort is? Is it usable today (in the sense that one could develop a TF backend which uses ROCm)?


Not for long. Some of them actually work with AMD now, and with NVIDIA now banning use of GeForce cards for AI research (https://news.ycombinator.com/item?id=15983587), NVIDIA is going to lose that market very quickly.


This is totally not my field so it may very well be the stupidest question ever: is there technically anything that prevents implementing CUDA on top of non-NVIDIA hardware?


We are working on an OpenCL-based layer that enables AMD GPUs; see these links for more info: https://developer.codeplay.com/computecppce/latest/getting-s... https://github.com/tensorflow/tensorflow/issues/22
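
(As a hedged sketch of what the layer looks like to the user, not official docs, a SYCL vector add built with ComputeCpp is along these lines:)

  #include <CL/sycl.hpp>
  #include <vector>

  int main() {
      std::vector<float> a(1024, 1.0f), b(1024, 2.0f), c(1024);
      {   // buffers write back to the host vectors when this scope ends
          cl::sycl::queue q;  // picks a default device (a GPU if available)
          cl::sycl::buffer<float, 1> ba(a.data(), cl::sycl::range<1>(a.size()));
          cl::sycl::buffer<float, 1> bb(b.data(), cl::sycl::range<1>(b.size()));
          cl::sycl::buffer<float, 1> bc(c.data(), cl::sycl::range<1>(c.size()));
          q.submit([&](cl::sycl::handler &h) {
              auto A = ba.get_access<cl::sycl::access::mode::read>(h);
              auto B = bb.get_access<cl::sycl::access::mode::read>(h);
              auto C = bc.get_access<cl::sycl::access::mode::write>(h);
              h.parallel_for<class vadd>(cl::sycl::range<1>(a.size()),
                  [=](cl::sycl::id<1> i) { C[i] = A[i] + B[i]; });
          });
      }
      return c[0] == 3.0f ? 0 : 1;
  }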


AMD is working on a tool to convert CUDA code: https://github.com/ROCm-Developer-Tools/HIP
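
The result is regular C++ against the HIP runtime. A minimal sketch of what converted code looks like (assuming the HIP toolchain is installed):

  #include <hip/hip_runtime.h>

  // A CUDA-style kernel; hipify maps __global__, blockIdx etc. straight across.
  __global__ void saxpy(int n, float a, const float *x, float *y) {
      int i = blockIdx.x * blockDim.x + threadIdx.x;
      if (i < n) y[i] = a * x[i] + y[i];
  }

  int main() {
      const int n = 1 << 20;
      float *x, *y;
      hipMalloc((void**)&x, n * sizeof(float));  // cudaMalloc becomes hipMalloc
      hipMalloc((void**)&y, n * sizeof(float));
      // hipLaunchKernelGGL is HIP's portable replacement for <<<grid, block>>>
      hipLaunchKernelGGL(saxpy, dim3((n + 255) / 256), dim3(256), 0, 0,
                         n, 2.0f, x, y);
      hipDeviceSynchronize();
      hipFree(x);
      hipFree(y);
      return 0;
  }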


No there isn't. Google developed a CUDA frontend for clang/llvm which targets PTX (NVIDIA's virtual machine instruction set); in principle someone could come along and implement a different backend for this frontend, targeting for example AMD's GPU instruction architecture. The other missing ingredient is a reimplementation of CUDA's runtime libraries.
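
(For reference, clang can already compile CUDA sources directly; the compile line looks roughly like this, with the CUDA path being an assumption about a typical install:

  clang++ -x cuda axpy.cu --cuda-gpu-arch=sm_35 -L/usr/local/cuda/lib64 -lcudart -o axpy

so the frontend half of that story already works today.)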


The last part is particularly important. Most deep learning libs rely on things like cuDNN, which implements basic operations by hand, very efficiently.

The ROCm initiative by AMD already "transpiles" most CUDA kernels to be AMD-compatible, but it seems performance is not yet comparable in real-world benchmarks.


CUDA is made by nvidia


Not sure why you were down voted. CUDA is developed by NVIDIA.

CUDA is a registered trademark of NVIDIA - https://trademarks.justia.com/850/30/cuda-85030071.html

Here is the CUDA webpage

https://developer.nvidia.com/cuda-zone


Because it's a pointless answer that basically just repeats part of the question, providing absolutely no new information.


Well, I think it makes for a short answer: NVIDIA doesn't let their proprietary CUDA technology work on non-NVIDIA devices.


> Sadly the legacy of the Linux community talks like AMD doesn't work in Linux and NVIDIA is the only real choice.

Not in my experience. AMD's efforts are welcomed by the Linux community, especially gamers.

See trends here (AMD GPU usage is growing): https://www.gamingonlinux.com/users/statistics#trends


This does not mirror my experience. I have a machine with the amdgpu drivers installed. I specifically installed Ubuntu instead of Arch or something more exotic because I knew there would be issues. And I was right. I can use OpenCL now, but I can't use X11 because the driver uses old ABIs. My experience with NVIDIA's binary blobs, even on systems like FreeBSD, has been stellar in comparison.


> I specifically installed ubuntu instead of arch or something more exotic because i knew there would be issues.

You should try Arch. I've been using AMDGPU with X11 on Arch since late 2015 (when full support for my card entered the mainline kernel) and I've had exactly zero issues. Just installed the driver packages, rebooted, and everything worked (xrandr, power mgmt, OpenGL, games, etc.).
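
(On Arch that amounts to something like

  pacman -S mesa xf86-video-amdgpu vulkan-radeon

though the package names here are from memory and may have shifted; vulkan-radeon is only needed if you want Vulkan via RADV.)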

Maybe your issue is trying to use the proprietary blobs (aka AMDGPU-PRO). I never used these (and I don't know why I should).


Why are you using the closed source driver?


I'm using amdgpu on FreeBSD (-CURRENT, drm-next-kmod), with Vulkan (RADV) + Wayland, it's awesome :)

Yeah, OpenCL is a problem: clover crashes, and the new AMD OpenCL driver is not ported yet (I guess it depends on kernel 4.15? We have amdgpu from 4.12 for now)...

I wish people started using Vulkan for compute instead of OpenCL!!


Wait, Vulkan is for compute like OpenCL is?


It does both compute and graphics.
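
As a minimal sketch, the compute side runs ordinary compute shaders compiled to SPIR-V (e.g. with glslangValidator; the binding here is illustrative) and dispatched on a Vulkan queue:

  #version 450
  layout(local_size_x = 64) in;
  layout(std430, binding = 0) buffer Data { float values[]; };

  void main() {
      uint i = gl_GlobalInvocationID.x;
      values[i] *= 2.0;   // each invocation doubles one element
  }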



Compute is something of a problem for AMD GPUs right now. The open source driver is wonderfully stable and performant for gaming, but to use OpenCL you really want the proprietary driver instead, which is also pretty performant but sometimes has weird behaviors.


I should add that I generally don't like nvidia either. Intel is the GPU of choice generally, but sometimes you need the compute.


I built a new system recently, Ryzen 7 with an AMD Vega 64... I've yet to get the Vega 64 to work right with Debian Testing (or Ubuntu 17.10/16.04). I am hoping I have some time to fiddle with it and maybe get it working before Christmas :)


Vega cards will be supported in Linux 4.15 onwards with mesa drivers. Linux 4.12, 4.13, and 4.14 do support Vega cards but not for display output. You'll have to use AMD's proprietary drivers for now.


I plan to try the 4.15 RC. But then I don't know what all I need on top of that. Phoronix has some articles on the topic, but they don't provide many specifics.

I did try their proprietary drivers, too, but did not have any luck. Maybe I'll have time to tinker with it tomorrow. It would have been nice to have out of the box support :) I suppose they will get there eventually.


In case anyone else is wondering, I was able to get my Vega 64 working, just required building the latest 4.15 RC kernel :)


I have heard that the Vega 64 has some serious thermal issues. My friend had one on Windows and it caused him so much trouble he sold it used (for a profit) and got a Vega 56.


>to make them the most Linux friendly GPU option.

Intel works quite fine. I'm pretty sure it's just nVidia that's the problem.


Agreed; but they were probably referring to big beefy GPUs, so you only have team green and team red.


The problem is that AMD (and Intel) require a lot of moving pieces to come together to make a fully stable system, while nVidia is more of a complete packaged solution. For me, I tried AMD (née ATI) once in 2003 and the experience was so bad, I just completely swore them off. I have literally never had nVidia's drivers crash and kill my X session, or hard-lock the kernel, which were frequent problems with both ATI's closed-source drivers and Intel's open-source drivers. I don't see myself ever buying anything other than nVidia again.


> 2003

You had a problem with ATI's closed-source driver 14 years ago. AMD's open-source driver now is a COMPLETELY different story.


> You had a problem with ATI's closed-source driver 14 years ago. AMD's open-source driver now is a COMPLETELY different story.

Indeed. In 2003 no-one would have based their expectations on what happened in 1989. 14 years is an eternity in this field.


Yes, and it was bad enough that I decided to never ever buy from that company again. For the record, I did have the displeasure of using an AMD GPU in 2010, because that's what my employer at the time purchased, and it was only slightly more stable. I am absolutely convinced that nobody except nVidia can produce stable, working video drivers. The last 14 years haven't produced a counter-example.


Do you also refuse to use RPM-based distros due to dependency hell?

Also, GOOD NEWS: ATI is dead. It wasn't AMD that made the card or the drivers! It was a company called ATI, based in Canada. https://en.wikipedia.org/wiki/ATI_Technologies

AMD bought them in 2006 and killed the ATI brand 3 years later. I bet only a handful of the same people still work on the technology.


I had a bad experience with an ATI card many years ago on an Ubuntu box. And like you, I said that I would never buy an ATI GPU again. Now my computer has an RX 580 and it works like a dream (on Windows and Linux).


You're personifying companies way too much.


My experience in 2017 is the complete opposite. The nVidia binary driver is the only thing I experience that causes me significant pain, including system lockups. Unfortunately we use CUDA, but as soon as ROCm matures, I'm going to be strongly advocating a switch away from nVidia.

https://news.ycombinator.com/item?id=15877262


Why would you base that decision solely on your experience 14 years ago? The driver is completely different now.


> 2003

Read the official documentation on Linux with SUSE and NVIDIA.

http://www.nvidia.com/object/linux_display_ia32_1.0-4363.htm...

Linux in 2003 was a totally different beast from today. Tell me you got your wifi working, or NVIDIA with a working desktop, out of the box back then. I would sit in the terminal and have to install proprietary drivers just to see Gnome or KDE. I would have to sit by the Ethernet port for a while until I got wifi working.

Read this forum post; I can post hundreds of these, because I read them all trying to get my card to work. I had to edit my X11 configs by hand until around 2008.

https://www.opengl.org/discussion_boards/showthread.php/1575...


Huh? Intel drivers have been fine for at least the past decade. When I was much younger I remember toting around a laptop with an A4 APU running the libre drivers just fine, too.


Is there also a useful open-source OpenCL implementation for AMD GPUs? I would find that even more useful.


You can use the in-kernel open source amdgpu driver with the proprietary opencl userspace by grabbing the tar and installing just the opencl .debs.

Check out https://math.dartmouth.edu/~sarunas/amdgpu.html

If you're on Debian instead of Ubuntu you can manually install the debs.
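
(Roughly, with the .deb names being assumptions since they change between driver releases:

  tar xf amdgpu-pro-*.tar.xz
  cd amdgpu-pro-*
  sudo dpkg -i ./opencl-amdgpu-pro*.deb ./libopencl1-amdgpu-pro*.deb

check the extracted directory for the exact package names.)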


https://github.com/RadeonOpenCompute/ROCm - ROCm has open-source support for OpenCL 1.2, but it doesn't yet work with all mainline components (Linux kernel and LLVM changes are still needed); that should change in 2018. It also only works with relatively recent AMD GPUs.


That looks very interesting. I'm OK with it only supporting recent AMD GPUs, but I hope the other changes will be upstreamed soon.


Tangentially related question - is there any good Vulkan tutorial or online class one can take that would teach how to build one's own 3D engine/VR/AR system from scratch? I am in the mood... Thanks!


Have a look at this list, hope it helps! http://stephaniehurlburt.com/blog/2017/7/14/beginner-friendl...


Any chance of this leading to a 3rd party Vulkan driver on Mac?


Great. Maybe now radv and amdvlk can fill each other's gaps.



