It’s Time to Open Up the GPU (gpuopen.com)
459 points by jdub on Jan 26, 2016 | 93 comments



This is an AMD project motivated by the fact that NVIDIA has become the de facto standard for GPU computation in deep learning and machine learning applications.

TensorFlow, Torch, Theano, CNTK, Caffe, and every other deep learning framework out there works out of the box with NVIDIA hardware via the CUDA stack, but not with AMD hardware, e.g., via the OpenCL stack.

Which is a shame, because the entire NVIDIA hardware + software stack is closed.

It would be great to see this new project succeed, and maybe even force NVIDIA to open source key components of its stack, or all of it, down the road.

To the team at AMD running this project: if you are reading this, step one is to get built-in support into all major deep learning and machine learning frameworks.


I'm sad to say, but it's true. AMD has been claiming "this is the year we really open up" since at least 2007[1]. Well, I'm happy to hear them recommit, but I don't believe them anymore.

Now that they are losing ground in the marketplace on the GPU side with deep learning, and their CPUs can't compete with Intel on the performance side, their claims ring hollow.

Imagine where we would be if they had open sourced their drivers in 2007 like they said they would.

1. http://arstechnica.com/gadgets/2007/05/amd-launches-the-hd-2...


I don't think they ever claimed they would open source their drivers. They've certainly released a ton of documentation for their GPUs over the years.


Which is probably why the open source driver is pretty good. radeon is light years ahead of nouveau.


Yep: HW decode works, and it works for games (FPS is 10-20% lower, depending on the game). It might even be better than the Intel driver; I had a few problems with that, but I haven't had any problems with radeon.


And likely all of it had to be vetted by lawyers before being released, to avoid lawsuits from third parties.


This is really unfair; they have documented their GPUs extensively and provided developers for the open drivers. They are currently in the process of integrating all the features from the closed drivers into the open ones.


Have you tried the open AMD radeon drivers lately?


I have not. Care to give us some insight? It's kind of an expensive experiment; it's not like trying out Emacs for a week. Unfortunately, I know that an NVIDIA card will work, so I buy that even though it's not open.


Huge improvement over what they used to be, and these days I get way better performance with it than fglrx.


I can confirm that radeon gives much better performance than it did even a year ago. Anecdotally, I used to get a maximum of 30 fps in a 2003 first-person shooter. Now I can easily get a stable 125 with the latest drivers, or at least 71 fps in maps that are foggy, smoky, etc.


Poke around the site and look at the code on Github, with what looks like ISC/BSD licenses.


We are developing the Rust machine intelligence framework Leaf[1], for which we created the portable high-performance computation framework Collenchyma[2], which abstracts over CUDA, OpenCL, and common host CPUs. Although we are ready to bring machine learning to OpenCL-supported devices, the OpenCL ecosystem lacks the fundamental kernels, tooling, and libraries that already exist for the CUDA ecosystem (e.g. cuDNN[3]). But we are looking forward to supporting OpenCL; maybe SPIR-V will accelerate the process.

[1]: https://github.com/autumnai/leaf

[2]: https://github.com/autumnai/collenchyma

[3]: https://developer.nvidia.com/cudnn


I think SPIR-V + Vulkan would be the better path to follow, considering that NV doesn't really care that much about OpenCL.


I wish we could have hardware vendors compete on hardware with completely open software stacks from drivers on up. AMD is much farther down this path than NVIDIA is. The downside is that it gives the hardware manufacturers less flexibility in implementing new features, because the APIs are designed by committee.

On one side we have design by committee (Khronos) giving us tools like OpenCL, Vulkan, and OpenGL; on the other side we have things like CUDA and DirectX. Design-by-committee APIs will always have a few characteristics: the standards won't move as fast, the APIs will be more hardware agnostic, and they will either be slower and less feature-rich (OpenCL vs CUDA) or lower level and more difficult to use (OpenGL/Vulkan vs DirectX).

Whenever we have a clear market leader, they either push their own standard or Embrace, Extend, and Extinguish the open standards. The leader used to be 3dfx pushing Glide, and NVIDIA/ATI were the underdogs pushing standards that could run on each other's hardware.

The underdogs will try to push the open standards (because they must) and the leaders will push proprietary ones (because they can) and they can move faster than the committee. There's always going to be resistance toward spending money on research your competitor can take advantage of.

I think Vulkan and SPIR-V have the potential to put a real dent in proprietary software stacks. They can provide real benefits over any proprietary stack at the moment for heterogeneous computing (NVIDIA's blind spot is that they will naturally always favor the GPU). I hope it doesn't end up like OpenCL did (feature-poor and less performant than CUDA). We're a long way off from having open hardware and open drivers that can compete against closed stacks, due to the performance arms race. But GOOD open standards get code written in them, and they can win out over market leaders (like what happened with 3dfx). I think Vulkan might be the starting point for this. If it gains traction and can maintain leadership in performance, open stacks will have a chance at competing.

The biggest issue is that closed stacks can always benefit from innovations in open ones, but not the reverse. Because of this, I don't think we'll ever see competitive open drivers from market leaders. But we can have competitive open APIs that work across vendors; if we can at least get back to that stage, I would be happy.


Every company, perhaps every individual, wants to be a big fish in a small pond, rather than one of many small fish in a large pond.

I seem to recall that back during the industrial revolution, every supplier of screws had their own direction and density of winding. Thus the engineers running various massive projects had to introduce and enforce various standards to get anything done at all.


Yep. ANSI and ISO emerged because it was mutually beneficial for all, just like the Khronos group. It's also important that following the standards be voluntary, so innovation can flourish without being hindered by the committee, with the hope that the good ideas will make it back into the open standards.


Yeah, reminds me of the original Altair and IBM PC, and how both were cloned with profit margins on the boxes falling over time. Unfortunately, I think ANSI/ISO and other standards committees were probably poorly suited to setting standards in this area. The right way back then would probably have been to design a reference system and release schematics and other design information.


> The right way back then would probably have been to design a reference system and release schematics and other design information.

The interesting choice is to release a fully open design with a solid implementation for your own hardware but also one for the competitor's hardware that meets every part of the spec but you've spent no effort to optimize.

Then everyone starts with your API, because it supports the largest variety of hardware out of the gate, and only adds the competing proprietary API if the project has the resources and the performance difference justifies it.

Which puts the competing hardware vendor in a tough spot. Either they do the work to optimize the open API implementation for their hardware, essentially guaranteeing that the open API wins, or they don't and users of software that supports only the open API start to avoid their hardware.


This doesn't happen for the same reason that open source developers have such a difficult time implementing open standards for closed, proprietary hardware - how do you add support for a low-level protocol to a competitor's product when their product is essentially a black box?


Create an abstraction layer on top of their public API. This is the part where you don't really care if the performance is great.


I am not talking about APIs.


Isn't it beautiful how open source is the joker card of the underdogs? Google played it with Android against Apple, too.

In 2007 I did my diploma work on an AMD GPU because it was better. In 2011/12, AMD was the standard for Bitcoin mining, yielding twice the performance of NVIDIA. Now, buying my laptop, I ended up buying NVIDIA. What happened to AMD?


AMD is still better for bitcoin mining, but that's a matter of integer versus floating-point processing power.


> AMD is still better for bitcoin mining

Neither CPUs nor GPUs have any remaining relevance there, given ASIC/FPGA miners.


Even FPGAs and older-process ASICs are unusable for Bitcoin today. The network hashpower just broke 1 exahash/second.


Side question: does anyone have links on some power bounds for what the Bitcoin network is currently consuming?

Understand it would be a very rough estimate, but seems like it would be possible to arrive at some kind of number given electricity prices as a ceiling and state of the art efficiency as a floor.

Curious on what the magnitude is in relation to other things...
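
For a back-of-the-envelope sketch of those bounds (in Python; every constant here is an assumption for early 2016, not a measured figure):

    # Rough bounds on Bitcoin network power draw. All constants are assumed
    # early-2016 figures, not measurements.
    hashrate_hs = 1e18                 # ~1 EH/s, per the hashrate mentioned above
    best_asic_j_per_gh = 0.25          # assumed state-of-the-art ASIC efficiency
    electricity_usd_per_kwh = 0.05     # assumed cheap industrial power price
    usd_per_btc = 400                  # assumed exchange rate
    btc_mined_per_day = 25 * 144       # 25 BTC subsidy x ~144 blocks/day

    # Floor: everyone runs the most efficient hardware available.
    floor_watts = hashrate_hs / 1e9 * best_asic_j_per_gh
    print("floor: ~%.0f MW" % (floor_watts / 1e6))

    # Ceiling: miners can't sustainably spend more on power than they earn.
    max_usd_per_day = btc_mined_per_day * usd_per_btc
    ceiling_kwh_per_day = max_usd_per_day / electricity_usd_per_kwh
    ceiling_watts = ceiling_kwh_per_day * 1000 / 24   # kWh per day -> watts
    print("ceiling: ~%.1f GW" % (ceiling_watts / 1e9))

Under those assumptions you land somewhere between roughly 250 MW and a bit over 1 GW; the wide gap is mostly down to how far real fleets sit from state-of-the-art efficiency and cheap power.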


The power/monetary cost of 1 megahash is going to be different in Australia than it is in China.


That's why I said "bounds".


They're still better for integer ops like password cracking. And they're competitive for gaming.


I can only speculate on this aspect (from inside the industry), but it's been plain to see that AMD has been hemorrhaging engineers (though they still have plenty of talent). This relates to your request about getting built-in support: I'm not sure AMD has the manpower to do that work, and they might be hoping the OSS community helps them with that...


I'd be interested in what the ROI on more deeply embedding with OSS devs is vs employing someone to contribute.

From what I've seen, for years the OSS GPU driver crowd's response to "opened" specifications / docs seems to have been "That's nice, but these aren't the docs we need."


Oh, they have the docs they need. That's essentially what is shared with console devs. The problem is that getting up and running with just docs is pretty Herculean. They need good sample implementations to get kickstarted.


It's a shame how NVIDIA is able to close up the market; traditionally there has been great competition between AMD and NVIDIA in that space.


I think it will bite them in the ass the same way Glide did for 3dfx. Vulkan/SPIR-V may end up as the OpenGL of heterogeneous computing. NVIDIA has the momentum at the moment with a focus on GPGPU, but if we can get an open API that leads in performance across vendors for both CPU and GPU, we'll see that competition again. We just have to hope that NVIDIA doesn't nerf their implementation like they did with OpenCL.


It is not only performance.

NVIDIA was clever to see that no one wants to write GPGPU code in C; people would rather leverage their C++ source code and the Fortran libraries with decades of investment behind them.

So CUDA has been language agnostic from the first day.

Khronos has realized their error by creating SPIR, but it might be too late already.

Also the debugging experience for CUDA has always been quite good.


Not a shame for game developers. It's nice to be able to predict that where cards diverge from the specification, they will tend to diverge along the same paths, because they're made by the same vendor.

Granted, it'd be preferable to live in a world where the drivers and hardware are open and inspectable (so you don't have to debug graphics issues as a black-box problem), but we've never lived in that world and between the likelihood of AMD's initiative taking root and a de-facto Nvidia monopoly, I'd rather have the monopoly.


That is, and will continue to be, one of the biggest concerns for the GPU market.

I assume that after a few years, factories will be buying computers, cameras, and GPUs instead of people for inspection.

This will be a large market, and that's what NVIDIA understood when they started making ties with different companies. AMD has a lot of catching up to do, but in my opinion they can do it.


> factories will be buying computers, cameras, and GPUs instead of people for inspection

There's a lot of this stuff on the market already. Even self-contained machine vision devices can be pretty powerful, like this[1] integrated camera+light device with built-in web server for setup and configuration and monitoring. With the built-in software, you can measure parts and verify shapes and dimensions and read labels.

[1] http://www.microscan.com/en-us/Products/Machine-Vision-Syste...


+1. I rarely think a closed system is bad, but this is one of the few exceptions I'd agree with. I own a powerful Radeon card, so it sucks that I can't run CUDA on it and none of the machine learning toolkits will work there (and even if one works, probably with poor performance).


Video encoding might also be worth putting effort toward. There are probably a lot fewer people using GPUs for video encoding than for linear algebra, but it's still something nobody is really doing right now, on open operating systems or otherwise. If AMD did it, I think people would take notice. NVIDIA has started adding APIs.

Video encoding on CPUs is extremely inefficient and essentially precludes on-demand encoding.


Maybe if Apple and Khronos had cared to define native OpenCL support for C++ and Fortran from day one, this would have turned out differently.


Step one is to produce competitive hardware. If AMD's hardware sucks then it doesn't matter if it's supported by frameworks or not.


First, nullify all the patents ...

Seriously, back around the turn of the century I was pretty heavily into graphics and rendering and was frustrated by all the constraints on getting access to documentation. I finally figured out a way to pin down a vendor with all the necessary NDAs and agreements so that I could actually get 100% access to the inner workings of the GPU, as long as I read the documents onsite at the vendor's office and brought no means to make copies. And after all of that, in discussions with the vendor's counsel, their biggest fear was that, with GPUs being so competitive and so heavily protected by IP, anyone they disclosed details to might see something protected by some patent somewhere, leaving them defending a lawsuit, which always settled before discovery because nobody actually wanted to share their documents! It was insane.

That said, I think an open GPU is an excellent idea. However, while I don't think it will get built any time soon, I think just publishing all the techniques and algorithms will draw out all these lawyers, and we could hopefully arrive at the 'available set' of features which one can implement. I expect though that you need a Google level VP8 kind of program for this to make progress (and Google failed there if you recall).


I'm actually amazed that you were able to get an IHV to give you that access.

Interestingly, most console developers currently have this sort of access. If they are just targeting consoles (or better, a specific platform), they're able to get a lot done. The tricky part is if they have to target PC and consoles. They have the access they want on consoles, but can't replicate on PC.

Actually, on top of that, replicating that access on PC would mean giving developers a nightmarish mix of varying, quirky architectures to target. Realistically, what most _engine_ devs want on PC are more current, lower-level APIs (DX12 and Vulkan) and maybe even an OSS shader compiler. The compiler wouldn't even be for them to poke at; I think they'd hope that the OSS community might do a better job than the IHVs at maintaining it.


> I expect though that you need a Google level VP8 kind of program for this to make progress (and Google failed there if you recall)

Google paid off MPEG LA so they wouldn't sue anyone for using VP8. That smells like success. http://arstechnica.com/information-technology/2013/03/google...


I would not be surprised if designing a GPU requires more lawyers than engineers.

I also cannot conceive of many industries where the barriers to entry are higher.


> That said, I think an open GPU is an excellent idea

This isn't what the article is about.


Nullify the patents? Why? A lot of R&D has gone into making the best GPUs what they are. What makes you feel entitled to that IP?


I feel a rant on software patents coming on.

I've been in groups that have been on the receiving end of patent lawsuits twice. Both times the patents were transparently invalid.

The first time involved my dorm at MIT. Back before I arrived, some enterprising student had hooked up our laundry machines to the internet so we could all find out when there was a free one and when our laundry was done. The internet at large got wind of this, and sadly the servers were Slashdotted in both senses of the word. Shortly after the idea appeared on Slashdot, some enterprising entrepreneur filed for a patent on the idea of hooking laundry machines up to the internet and was granted it. Several years later our dorm, through MIT, was sued for patent infringement. Luckily the MIT administration had spines, so they didn't get anywhere. But the patent is still in force, and other universities still have to pay millions of dollars to this group to hook their laundry machines up to the internet.

The second time involved the first company I worked for out of grad school, a sensors consultancy. They had some ideas about structural health monitoring using piezoelectrics and sent their designs off to another company to be fabbed. They filed for patents on their ideas, and shortly after, the fab house filed for patents on those same ideas. They mostly got their patents, but for some reason the fab house got one in first despite filing later, causing their patent application to be denied. Then the fab house sued them, and somewhere in the process there were layoffs and I lost my job.

Just because something is patented doesn't mean that it is original and non-obvious. The Patent Office doesn't actually have enough resources to do the job that has been set out for it. The number of patent applications being submitted has increased exponentially over the years, but the USPTO's staff hasn't. You might ask why they don't just decline to grant a patent until they've examined it carefully. Well, they used to do that, but the backlog got big and Congress In Its Wisdom passed a law saying that they couldn't have a backlog. You might ask why they don't raise filing fees and use the money to hire more workers. Well, Congress In Its Wisdom has decided what the fees are, and has also decided that the USPTO can't keep all the money it collects, since there are needy people in many other Congressional districts. So that's where we are.

It would not surprise me in the least if many NVIDIA and AMD patents cover the same ideas in different language. But just because AMD happens to have a patent on the technique they're using doesn't mean that NVIDIA can't sue them with their own patent on that same technique. Legally, patents carry a presumption of validity, so courts are bound to assume the USPTO gave them the sort of inspection the USPTO is unable to give except in certain complicated cases.


He doesn't - it's written in jest. The next sentence starts with 'Seriously, ..'


In the first year of grad school when I was trying to choose a field to delve in, real time rendering was one of the options. So I took the relevant CS-fivesomething class to see how things worked.

The first time I got my feet wet with code, I realized that the workflow was mostly "write and pray it works". No debugging tools, huge blobs of proprietary code and total lack of consensus among manufacturers even on the most basic stuff scared me away. I ended up doing a machine learning thesis instead.

This was almost ten years ago. I can't say I've been following the graphics field closely, but my impression is that while tooling came a long way in the past ten years, the GPU is still mostly a black box for the developers who don't have access to a debug build of the driver of the graphics hardware at hand.

Such initiatives can't come soon enough.


It is really exciting to see AMD opening up, especially in the GPU driver area, arguably the weakest point for free software. I hope this will go further than the "one step forward, two steps back" approach the same AMD showed with its libreboot involvement:

https://libreboot.org/faq/#amdbastards

https://news.ycombinator.com/item?id=10895021


Hey, what about Intel? They suck even more: https://libreboot.org/faq/#intel


Market leaders tend to push proprietary stacks (because they can), while underdogs push open stacks (because they must). If the open standard is better, it can win. This is how NVIDIA gained the upper hand over 3dfx, by pushing OpenGL and DirectX alongside ATI. They surged ahead, and Glide died along with 3dfx.

Behold, history repeats itself.

I'm hoping that Vulkan will be that new competitive standard moving forward. I don't have much hope for open hardware and open drivers from market leaders (who are too concerned with losing their edge to competitors), but I would be happy with an open API that leads in performance across vendors.


It doesn't give off the impression of being super open when you have samples that say they are only for Radeon cards. This seems almost like a marketing strategy/guise from AMD, and if you look at other websites covering this article, it literally says AMD launches GPUOpen.com. AMD is only mentioned once in the article.. shady in my opinion, and not very 'open' either.

When one company continually calls out for 'openness' in a marketplace, over time I think people get annoyed by it; I know I do. AMD, you're nearing the point where people are just going to scoff at you and take it as whining. You can't say you want openness and then build your tools specifically for your own hardware without looking like an ass. Furthermore, three quarters of the samples are for DX only. How open is DX again?


It doesn't seem all that clandestine. The snippet at the bottom about the author:

Nicolas Thibieroz is the Senior Manager of Worldwide Gaming Engineering at AMD...

From the middle of the article:

GPUOpen marks the beginning of a new philosophy at AMD.


Ahh, I was misinterpreting the third-party affiliate links, where they say they aren't responsible for any content on them.


Did you even read the post? The whole thing talks about how this is AMD's new version of Mantle, which is specifically for GCN cards. It even says who it's written by at the bottom: the senior manager of engineering at AMD.


No, it doesn't. It puts out that it's making something called GPUOpen, which right now is just a brand page with a bunch of fluff and nothing more, with links to resources that are already available on AMD's website, but with nearly all of the AMD branding removed.


I must be going stupid, show me the code?!

Where is any of this? I find a bunch of press releases, but.. there is no code, anywhere.

(Edit: for anyone else struggling, expand the arrow on the top left. Or go directly to e.g)

http://gpuopen.com/professional-compute/

(That said, a bunch of the repositories are just binary dumps. Like, this isn't what Git is for, AMD.)


I wanted to add my GPU compute library for Clojure to their list of libraries, and there is no option to send them anything.

So, it seems that the GPUOpen site is open for AMD to promote whatever they deem promotable. Not very open for this age, in my opinion.


Try to email the author: nicolas.thibieroz@amd.com


Emailed. No response.


Could we open up the website first so we could improve its performance? Because this is just silly.


Sounds great, but they don't mention anything about drivers. Does that mean we still won't get vendor-supported open source drivers?



I keep hearing about this, seeing mentions and links to github, but I have to ask: where's the "meat"?


Wasn't OpenCL supposed to deal with this? Obligatory XKCD[1]

[1] https://m.xkcd.com/927/


OpenCL is basically a framework to make it more convenient to run shaders for non-graphics purposes. It's still a high-level cross-vendor hardware abstraction layer.

This sounds like it's more about officially documenting and supporting the kinds of things people have been reverse-engineering: https://news.ycombinator.com/item?id=10605156
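
To make the "shaders for non-graphics purposes" point concrete, here's roughly what a minimal OpenCL vector add looks like from the host side. This is just a sketch using pyopencl; it assumes you have an OpenCL runtime and the pyopencl package installed.

    import numpy as np
    import pyopencl as cl

    a = np.random.rand(1024).astype(np.float32)
    b = np.random.rand(1024).astype(np.float32)

    ctx = cl.create_some_context()      # pick whatever device the runtime offers
    queue = cl.CommandQueue(ctx)

    mf = cl.mem_flags
    a_buf = cl.Buffer(ctx, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=a)
    b_buf = cl.Buffer(ctx, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=b)
    out_buf = cl.Buffer(ctx, mf.WRITE_ONLY, a.nbytes)

    # The "shader": one work-item per element, no graphics pipeline involved.
    prg = cl.Program(ctx, """
        __kernel void add(__global const float *a,
                          __global const float *b,
                          __global float *out) {
            int gid = get_global_id(0);
            out[gid] = a[gid] + b[gid];
        }
    """).build()

    prg.add(queue, a.shape, None, a_buf, b_buf, out_buf)

    out = np.empty_like(a)
    cl.enqueue_copy(queue, out, out_buf)
    assert np.allclose(out, a + b)

None of that touches a graphics API, but it's still the high-level, vendor-abstracted model described above; the linked post is about the layer underneath it.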


Yes to your OpenCL stuff, no to your reverse-engineering stuff. The post you linked is about generating your own native shaders on GCN. GPUOpen could possibly expose that, but that's a difficult problem to expose to developers considering the variety of architectures exposed at any time by the IHVs (including just inside one IHV at any time). If they did go down that path, they would open up their internal bytecode-to-native-ISA shader compiler...which they haven't done yet, and have shown no indication of doing.


Yes, exposing hardware-specific functionality in a non-portable way may be a difficult problem to handle, but if GPUOpen is to have any substance at all, it's obvious that they're going to actually do some of that. This can't just be an announcement of a batch of OpenGL vendor extensions.

And at the moment, it seems like GPU architectures within one company aren't really that much more diverse than Intel's CPU architectures over a similar time period. They're not overhauling the ISA every year.


> but if GPUOpen is to have any substance at all it's obvious that they're going to actually do some of that.

I don't think they'll do it. I think they're going to shoot for SPIR-V being close enough that SPIR-V compilers generate good enough code for the internal compilers to produce good ISA code.

> They're not overhauling the ISA every year.

They aren't, but the changes from rev to rev are more significant than you'd expect. On top of that, the big bumps are significant, such as Fermi to Kepler or Northern Islands to Southern Islands (or whatever). These are not backwards compatible with each other, unlike the Intel x86 ISA.

Bonus edit: Also...I'd speculate that some IHVs would rather not expose their IL-to-ISA compiler. Not because of 'secret sauce'. But because they aren't that good, and would rather have the OSS community start fresh and do a better job.


Makes me wonder if AMD will open their Radeon IP for licensing. Even NVIDIA has opened up, although no one has come forward to license it yet (I think).


Or just make better drivers for Linux...


Speaking of opening GPUs, is this page making anyone else's scrolling chug hardcore? I don't think I've ever had a page stutter this hard.


5MB website...

90% of it is useless, super-large images.

Also, webpagetest.com counted 23 JavaScript requests.


The thumbnails are 1920×700px in case you use a 20K monitor.


Smooth for me on Win7/FF44b+IE11. Chrome 48 doesn't appear to like it much though.


Safari up-to-date on a MacBook Pro, so much chug. Must be a webkit thing.


Works fine for me on Firefox too. It likely is a WebKit issue.


Safari with a ton of other tabs on a 12" Retina MacBook, on battery, and this website works fine.


Absolutely sluggish on Mobile Safari, latest iPad mini.


A really cool site by AMD


Ever heard of image compression? It exists.


Can we please stop calling it a GPU? These auxiliary processors are used for much more than graphics.


They are used for other things, but

* They're better at graphics than at that other stuff,

* Other auxiliary processors exist which beat GPUs on most kinds of that other stuff (though they aren't as accessible as GPUs),

* Most importantly, what sells GPUs is first and foremost their performance on graphics workloads. A GPU with stellar OpenCL performance but sucky OpenGL performance will be outsold by a huge margin by a GPU with the reverse characteristics.

So they're GPUs alright.


Nope. In other news, we still call them "hard drives," even when nothing is driven.


Couldn't we just re-purpose the G in GPU to stand for "general" rather than "graphics"?


I think VPU (for vector) is the most descriptive.


GPUs are not really vector processing units though.


Most graphics IS vector-based math run through matrices, and the same goes for physics. Every vertex is essentially a vector from the mathematical perspective. But you're right, that doesn't quite capture the massively parallel, simple-core architecture that we see today.
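
As a trivial illustration of the "vertices are vectors run through matrices" point (plain numpy, nothing GPU-specific):

    import numpy as np

    # A vertex as a homogeneous vector, and a translation matrix.
    vertex = np.array([1.0, 2.0, 3.0, 1.0])
    translate = np.array([
        [1.0, 0.0, 0.0, 5.0],   # move +5 along x
        [0.0, 1.0, 0.0, 0.0],
        [0.0, 0.0, 1.0, 0.0],
        [0.0, 0.0, 0.0, 1.0],
    ])

    print(translate @ vertex)   # -> [6. 2. 3. 1.]

A GPU just does that (and much heavier variants of it) for millions of vertices and fragments in parallel, which is why "vector processor" only captures part of the picture.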


Such devices should exist, and should be open from day one.



