Vulkan – Graphics and computing belong together (khronos.org)
309 points by scoopr on Mar 3, 2015 | 91 comments



I applaud this effort. I hope we can get to a point where a GPU requires as many drivers as a CPU.

I think one of the primary reasons more people don't target GPUs to offload computation is that it -- most often -- requires proprietary drivers running on the host OS for it to work at an acceptable speed.

Imagine if putting an AMD or nVidia card in your box was the same as adding a CPU. If you wanted to write OpenCL to execute on the GPU, thread allocation, memory management, etc. for that would be part of an open source OpenCL library that your application links to, and then you could write OpenCL kernels that execute on the GPU via these libraries. No need for proprietary driver blobs running on the host machine.

I hope we can get to a point where trying to sell a GPU that requires a proprietary driver running in the host OS is as viable as trying to sell a CPU that requires the same.


There have been significant benefits to having a driver / runtime layer between the application and the GPU. It has enabled a competitive ecosystem of GPU vendors and I don't think we would have seen as rapid a pace of innovation and performance improvements if we had had a de facto low level hardware interface proprietary to a particular vendor.

This is not to criticize Vulkan; the trend to lower level APIs like Vulkan, DirectX 12 and Metal is a good one, but I don't think the goal should be to eliminate the driver entirely or get rid of any hardware abstraction layer on non-console platforms.


Having open GPU ISAs is not the same as requiring them all to be the same. GCC and LLVM both compile to a variety of architectures but manage to share a lot of code and optimizations between all of the targets.

Having the same for GPUs, where the vendor gives you an ISA and you can use whatever compiler you like (even hand-written assembly), would be far preferable. This is what the open-source Gallium3D stack does, though it's hindered by relying on reverse-engineered information.


I thought CPUs did have binary blob drivers, both in the BIOS and on the operating system.


A driver is always required in order for the OS to speak to hardware. That's essentially what a driver is: an interface between software and hardware.

With CPUs, the code needed to run in the host OS (the driver) is fairly simple code that gives access to the CPU hardware more or less as it is: practically speaking, everything the CPU itself can do, the compiler can output code to do directly. The output of the compiler (e.g. gcc) is sent more or less as-is to the CPU for execution.

With a GPU, we have a large chunk of proprietary code running in the host OS (the GPU driver), which provides a 3D API interface, such as OpenGL or Direct3D, to the GPU hardware. There is a huge difference between what is sent to this driver code, and what the driver sends to the GPU hardware.

Applications submit, e.g., OpenGL instructions to this driver; the driver compiles these OpenGL instructions into code that will execute on the GPU and sends it to the GPU. So the GPU driver essentially acts as a closed source compiler that compiles from OpenGL/Direct3D into whatever intermediate language the GPU accepts, which the GPU may further compile into something its processors can execute.

So for a CPU, the compiler does most of the work, while for a GPU, the driver does most of the work and actually functions as a closed source compiler. On top of that, the driver handles memory management and automatically allocates memory according to which OpenGL instructions are executed.
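
To make the compiler analogy concrete, here's roughly what an application hands the driver at run time with OpenGL: plain GLSL source text, which the driver's built-in (closed source) compiler turns into whatever the GPU actually executes. A minimal sketch, with context setup and error handling omitted:

    // Hand the driver GLSL source text at run time; the driver's own,
    // closed-source compiler turns it into the GPU's native instructions.
    const char *src =
        "#version 330 core\n"
        "void main() { gl_Position = vec4(0.0, 0.0, 0.0, 1.0); }\n";

    GLuint shader = glCreateShader(GL_VERTEX_SHADER);
    glShaderSource(shader, 1, &src, NULL);
    glCompileShader(shader);  // the actual compilation happens inside the driver

    GLint ok = GL_FALSE;
    glGetShaderiv(shader, GL_COMPILE_STATUS, &ok);  // about the only feedback you get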

Imagine if Intel provided a CPU that you could only use to execute Python code. It would require a closed source driver to work. This closed source driver would compile Python code submitted to it into some unknown instructions that execute on this CPU. It would also have exclusive control over an area of memory, portions of which it would automatically allocate to store data from Python variables. That's essentially what nVidia and AMD offer, except the hardware is a processing unit that comes with its own RAM, and the language is OpenGL/Direct3D rather than Python.


Wasn't this approach taken for good reason though, so that the effort to get code running on multiple GPUs stayed reasonable? Remember the days when games would only support select GPUs?

Sure you could have a standard like i386. But isn't it the case that there is a lot more innovation happening in the GPU architecture space making this very difficult?


GCC supports waaaaay more different CPU architectures than these drivers do for GPU architectures. This argument has no merit. The only purpose for these closed architectures is vendor lock-in.


And every vendor has an HLSL and a GLSL shader compiler. You also usually don't ship ten different binaries for ten different architectures, which you would need to do for GPUs.


In this model I think you'd distribute something in an intermediate bytecode, like .NET does w/ MSIL, and then the target system would be responsible for precompiling.
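
For what it's worth, OpenGL already has a small-scale analogue of that "ship something compiled once, let the target system cache the native code" model in ARB_get_program_binary: you let the driver compile and link the program once, then stash and reload its binary blob. A rough sketch, assuming `program` is an already-linked program object and skipping error handling:

    // Ask the driver for the blob it compiled, cache it (e.g. on disk)...
    GLint len = 0;
    glGetProgramiv(program, GL_PROGRAM_BINARY_LENGTH, &len);
    void *blob = malloc(len);
    GLenum fmt = 0;
    glGetProgramBinary(program, len, NULL, &fmt, blob);

    // ...and on later runs skip the source compile entirely:
    glProgramBinary(program, fmt, blob, len);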

Edit: just saw this:

https://news.ycombinator.com/item?id=9140001


This is the rhetoric GPU vendors will use to justify their behavior, of course, but $billions and modern technology can address this if AMD and Nvidia feel that it is important.


> Sure you could have a standard like i386.

We're not talking about standardizing hardware (the ISA), but the API: Vulkan. It really doesn't matter in which form your hardware processing unit comes, as long as its driver accepts SPIR-V and can execute it on said processing unit.


Apropos your hypothetical Intel throwing restrictions scenario: Intel has been considering [0] dropping OpenCL support on the Xeon Phi. Which is exactly that kind of asshole move. Not to say that Nvidia or AMD are good guys in this respect...

[0] https://plus.google.com/105097056044353520580/posts/81ddzBqq...


Are you referring to the microcode updates that operating systems apply on startup? Those are binary blobs, but they're only to correct CPU errata.


I'm not really sure what exactly they are. I know I've had to update my BIOS to be able to use newer versions of CPUs that still fit in the old sockets before [1]. I know I've seen Windows updates for both the motherboard's chipset and for the CPU. But what difference any of that has made, I don't know.

[1] Actually, that particular instance was a complete pain in the ass, because I had bought all of the parts new, but the motherboard didn't have the latest BIOS on it, and I didn't have a compatible CPU on hand. This was also before the liberal return policies that online vendors now have, so I ended up just ordering a different motherboard completely.


You seem to be mixing up different concepts.

The BIOS runs outside of the OS and helps all the parts of your computer speak to each other.

This is why the process is a bit more involved than just running an update from your OS's update manager.

The "binary blob" that's referred to when talking about GPU's is the huge piece of software that you have to download and install in order to make you graphics card able to use all of its features. Some graphics cards won't even use a display's full resolution without the correct driver.

This is different to the CPU in that there is no big downloaded driver sitting between you and it.

If you really want to understand where everything sits, you should check out a book or search for information on Computer Architecture.


> You seem to be mixing up different concepts.

This is oversimplifying, but my understanding is that graphics programs like games send data and instructions as a job into a queue managed by the operating system to the CPU, which then has to quickly decide what to do with it. GPUs function as a sort of co-processor in that the CPU will offload a portion or all of the instructions/data of jobs where GPU processing is indicated in the job instructions. This is why all GPU processing involves CPU overhead. Also, GPUs are somewhat bottlenecked by this whole process.

It's my impression that current trends like Mantle, DX12 and Vulkan are trying to reduce the amount of work the CPU has to do to process jobs intended for the GPU. But they can never really eliminate the role of the CPU, even if something like an ARM chip embedded on the GPU board handles most of the processing the CPU would have done.

tl;dr - CPU is what makes the computer happen but the GPU is like a dedicated graphics co-processor.

Feel free to correct me if I'm wrong.


I took the post to which I originally replied to be more concerned about the proprietary nature of the driver, rather than the size. The BIOS is a type of simplistic set of drivers, even a basic operating system, in essence. My point was that the comparison to CPUs with regards to not needing proprietary drivers was not apt, as there are certainly several layers of proprietary code in between you and your CPU, for most systems.


There is actually a binary blob for processors that the BIOS is responsible for loading. The biggest difference is that you aren't responsible for downloading it, as it is usually included with your motherboard (as the grandparent says). You can find CPU binary blobs in the open source coreboot BIOS much like you can find GPU binary blobs in the Linux kernel.


I had a long debate on multiple forums recently regarding the near-uselessness of OpenGL in the presence of something like OpenCL, especially since OpenGL does its job (as a graphics library) in a very crappy manner by trying to be part framework, part library, doing neither properly, and making the life of a graphics programmer a living hell: you end up coding 80% of the graphics pipeline by hand or with third party matrix libraries anyway. I realized this very strongly when I considered doing graphics using OpenGL but with a photorealistic renderer like a path tracer.

Currently the only reason OpenGL remains unavoidable for using the GPU is that GPU vendors don't expose the "non-GPGPU" parts (like TMUs and ROPs) directly to OpenCL (or CUDA), which is clearly a matter of design rather than technicality.

Unsurprisingly, people were offended or upset (you either don't use OpenGL, or you end up defending it against criticism, subconsciously, because you have invested so much time learning it).

I hope something good comes out of this initiative.


> I had a long debate on multiple forums recently regarding the almost uselessness of OpenGL in presence of something like OpenCL (...) I realized this very strongly when I considered doing graphics using OpenGL but with a photorealistic renderer like a path tracer.

Well, then you're going to be disappointed by Vulkan as well. OpenGL (and modern GPUs) are built around screen space rasterization of points, lines and triangles. Being aimed primarily at the graphics part of GPUs, Vulkan uses the same primitives.

The major difference to OpenGL is that you no longer do stuff like

    // creating a texture and uploading its data, all through mutable global state
    glGenTextures(...)
    glBindTexture(…, texID)
    glTexStorage…(...)
    glTexImage…(...)

    // later, binding it to a texture unit and pointing a sampler uniform at it
    glActiveTexture(... + i)
    glBindTexture(…, texID)
    glUniform1i(sampler_index, i)
operating on global, TLS driver state, which the driver has to queue and reorder into a command stream for the GPU. Instead you allocate some memory (using the Vulkan API), map it, write to it directly, and set elements of a resource descriptor to describe what the layout of the data in that memory actually is.

    // bind the texture's resource descriptor and submit the prerecorded commands
    vkCmdBindDescriptorSet(cmdBuffer, VK_PIPELINE_BIND_POINT_GRAPHICS, textureDescriptorSet[0], 0);
    vkQueueSubmit(graphicsQueue, 1, &cmdBuffer, 0, 0, fence);
    // map the backing memory and write the data directly, no glTexImage-style copy
    vkMapMemory(staticUniformBufferMemory, 0, (void **)&data);
    // ...
    vkUnmapMemory(staticUniformBufferMemory);
It's much more low level, and there's no strong type safety remaining. You can use a block of memory as the target of rendering operations and then, just by using another descriptor, use that same memory as a texture in post processing. However, due to the asynchronous nature of GPU processing, this puts a lot of burden on the application to get all the synchronization right.
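
As a rough illustration of that burden (a sketch only -- the names follow the provisional API above, `device` stands in for the logical device handle, and the exact signatures are a guess until the spec is public): after submitting work with a fence, the application itself has to wait on that fence before reusing the memory the GPU wrote, e.g. before rebinding it as a texture.

    // Nothing is implicit anymore: the application must wait for the GPU
    // to finish the work it submitted with `fence` before touching the
    // memory that rendering wrote to.
    vkWaitForFences(device, 1, &fence, VK_TRUE, UINT64_MAX);  // block until the GPU is done
    vkResetFences(device, 1, &fence);                         // recycle the fence for the next batch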


>It's much more low level. And there's no strong type safety remaining.

It seems from the Vulkan Language Ecosystem graphic that they expect new languages to be developed that translate to Vulkan, for those who want to work at a higher level.


> It seems from the Vulkan Language Ecosystem graphic that they expect new languages to be developed that translate to Vulkan, for those who want to work at a higher level.

Actually no, because Vulkan itself is just an API. But what you can do is create high-level bindings, similar to, say, the Haskell X bindings, that immediately leverage the asynchronous execution, lazy evaluation and built-in concurrent parallelism.

Also, type safety regarding the buffer contents can be mapped into a Hindley-Milner system as well, by looking at the tuple `(memory handle, descriptor)`; in Haskell (e.g.) that would nicely map onto a type constructor.


I think he/she meant targeting SPIR-V with your language of choice.


SPIR-V is on a completely different level. The Vulkan API is called by the binary running on the CPU. The binary generated from SPIR-V is executed on the compute device, which in 99% of all cases that Vulkan is concerned with will be the GPU.

In the same way you don't use GLSL to program OpenGL, you don't use SPIR-V to program Vulkan. GLSL/SPIR are to OpenGL/Vulkan what browser-side JavaScript is to a webserver.


Sure, but like PTX/CUDA it will mean you can write shaders in other languages with a better hardware mapping, instead of compiling them to GLSL/OpenCL C source, which is currently one advantage of CUDA.
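
Presumably the workflow would then be: compile whatever language you like to a SPIR-V binary offline, and hand the raw words to the driver at run time. A hypothetical sketch -- the entry point and struct names are my guess at this stage, and `device`, `spirvData` and `spirvSizeInBytes` stand in for the obvious things:

    // Load a precompiled SPIR-V blob (produced offline from GLSL or any other
    // language with a SPIR-V backend) and hand it to the driver.
    VkShaderModuleCreateInfo info = {0};
    info.sType    = VK_STRUCTURE_TYPE_SHADER_MODULE_CREATE_INFO;
    info.codeSize = spirvSizeInBytes;              // size of the .spv blob in bytes
    info.pCode    = (const uint32_t *)spirvData;   // the SPIR-V words themselves

    VkShaderModule module;
    vkCreateShaderModule(device, &info, NULL, &module);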


Oh yes, I've been waiting to do that with OpenGL for a looong time. As a matter of fact I preferred working with the ARB_vertex_program and ARB_fragment_program extensions; I even designed (but never fully implemented) a custom, Lisp inspired language that compiles into the ARB_???_program assembly.


I've spent 15 years working on OpenGL, but I'd kick it to the kerb in a heartbeat.

AMD and NVIDIA are free to implement Vulkan on Windows and Linux, but the problem is that we will probably be stuck with OpenGL 4.1 on Mac OS X for the foreseeable future.


I don't know too much about the topic (I wish I did), but what makes you think so? I see that it says "Will work on any platform that supports OpenGL ES 3.1 and up" and Apple platforms don't seem to support that (as of now). However, Apple is part of the working group; doesn't that make it very likely they are working on an implementation too?


I'm a cynic, but don't get me wrong, I'd be delighted if Vulkan or OpenCL 2.1 show up in OS X 10.11.

Khronos is a funny beast, a bit like the United Nations. Just because these companies are all part of it doesn't mean they actually like each other and want to cooperate!

Apple has famously lagged far behind with their implementation of OpenGL. They only support OpenGL 4.1, the spec for which was released in 2010 [1].

Their developer tools are also pretty lacking on the desktop, and because they keep their drivers closed, external vendors can't do much to help. (Apple write their own drivers for Intel, NVIDIA and AMD GPUs, and even the engineers at those companies have little insight to what goes on behind Apple's closed doors.)

Perhaps this new, streamlined Vulkan API will be easier and more attractive for Apple to implement.

We live in hope...

[1] http://en.wikipedia.org/wiki/OpenGL#OpenGL_4.1


> Khronos is a funny beast, a bit like the United Nations. Just because these companies are all part of it doesn't mean they actually like each other and want to cooperate!

It seems to me that this is an accurate description for any technology focused consortium or work group.


Apple are traditionally tardy when it comes to OpenGL. Yosemite is still on 4.1 (2010 release), lacking support for things like compute shaders.


This explains why CryEngine supports Linux but not Mac. CryEngine requires OpenGL 4.3, as that was the easiest translation target from DirectX 11 for Crytek.


Oh, I see, didn't realize that. That is a pity because I would very much like for cross platform gaming to become much more of a thing and Vulkan does seem likely to be a big milestone in that, esp. considering Valve's involvement.


Cross platform gaming comes down to game engines abstracting graphics APIs.

Despite urban legends propagating the myth, game consoles don't feature OpenGL APIs as such; they use more low-level ones, even if inspired by OpenGL.


It seems unfair to say "Apple are late with OpenGL features." The third parties could put the same effort into making drivers for OS X that they put into making them (and their crapware frontends) for Windows, but they don't. I may be wrong, but it appears to me that it's Apple doing most of the work.


There's not much hard information on how Apple deals with graphics drivers, but from what I gather they use a unified in-house OpenGL frontend for the hardware-independent layer, then defer to a modified version of the vendors driver for the low-level stuff.

Their frontend only implements up to GL 4.1, so even if the vendors were to release drivers independently there would be no way for an app to access the newer functionality through the OS X frameworks.


A bit of a tangent but... are you sure?

I just replaced a dead drive in an old Mini with an SSD, then installed Yosemite. I'm not really an Apple guy, so I was not aware that Yosemite implemented kext (driver) signing, which had the side effect of 'breaking' TRIM support for 3rd party SSDs. It was never supported to begin with, but prior to Yosemite you could enable it by changing strings in the kext; doing so now will render the Mac unbootable unless you globally disable kext signing. Apple does not offer any other ability for a 3rd party to write their own driver to make this work, short of possibly rewriting the entire AHCI stack, which is not feasible.

https://web.archive.org/web/20150205071750/http://coriolis-s...


From the website:

  > Will work on any platform that supports OpenGL ES 3.1 and up
Furthermore, Apple is on the working group.


It's a single data-point, but Apple was on the working group for KHR_debug and still hasn't implemented it three years later.


It doesn't just magically work, someone still has to write the drivers & user space libraries.

Apple would also then have to support three graphics APIs, vintage OpenGL, Metal (on iOS) and Vulkan (on iOS and OS X?), across four hardware platforms (AMD, NVIDIA, Intel, Imagination on ARM).


Google tried this when they introduced RenderScript, but they dropped support for graphics calculations on the second release and steered it into a way to use C99 with GPU capabilities, without having to drop into the NDK.


Speaking of path tracing, I hope Khronos keeps an eye on the future of Vulkan with regard to supporting path tracing. I don't think we're too many years away from seeing game engines or graphics card makers support path tracing in a big way (mainly because I think it's the future of gaming, especially VR gaming - which is itself the future of gaming). Vulkan should be optimized for that from the beginning, not have it tacked on later.


I noticed the very significant working group members at the bottom of the page. If that is nearly as good as it looks, I'm very hopeful.

Here is the press release: https://www.khronos.org/news/press/khronos-reveals-vulkan-ap...

It seems Valve will work hard on it:

“Industry standard APIs like Vulkan are a critical part of enabling developers to bring the best possible experience to customers on multiple platforms,” said Valve’s Gabe Newell. “Valve and the other Khronos members are working hard to ensure that this high-performance graphics interface is made available as widely as possible and we view it as a critical component of SteamOS and future Valve games.”

Technical previews will be shown at GDC this week.


So it looks like this is the Khronos answer to Mantle/Metal. Does anyone have a quick summary comparing the three? Is Vulkan just targeting game developers or is there a wider set of use cases they want to hit? Is there any current guess about when hardware/software support will come? Can we expect a WebVulkan sometime before 2025?


Vulkan is the evolution of Mantle (get it? :)). AMD contributed a lot to the development of this API and then collaborated with Microsoft to create DX12. Mantle will remain as a testbed for future technologies.


I get that AMD wants this because it will make CPU performance less important for gamers, as they can't currently compete against Intel here. AMDs APU concept would become more attractive that way.

Valve etc. want this because OpenGL sucks, but DX is Win only.

Microsoft will want to defend its DX stronghold - can they, on a technical level? Will Nvidia play along?


NVIDIA will totally play along. They already have something that looks a lot like Vulkan, called NV_command_list, in their beta 346-series drivers.

Apologies for the SlideShare link, I hate SlideShare.

http://www.slideshare.net/tlorach/opengl-nvidia-commandlista...

Sample code here:

https://github.com/nvpro-samples


> Valve etc. want this because OpenGL sucks

Actually Valve labeled OpenGL to be "shockingly efficient" in one of their presentations (postmortem of porting the Source Engine to Linux).


I went back and checked, it was actually an nVidia guy who said that. Rich Geldreich from Valve was also there but he's expressed nothing but disdain for OpenGL since he left the company:

> "I know it's possible for Linux ports to equal or outperform their Windows counterparts, but it's hard. At Valve we had all the driver devs at our beck and call and it was still very difficult to get the Source engine's perf. and stability to where it needed to be relative to Windows. (And this was with a ~8 year old engine - it must be even harder with more modern engines.)"

http://richg42.blogspot.co.uk/2014/11/state-of-linux-gaming....


> I went back and checked, it was actually an nVidia guy who said that.

Hmm, makes sense. But then NVidia really loves OpenGL.


From this diagram[1] it looks like they didn't get rid of the single thread bottleneck for submitting commands to the GPU (which can handle commands in parallel). Didn't Mantle allow parallel submission for that instead of using a single thread with one queue? DX12 supposedly allows that as well. If Vulkan won't allow it, it will be at a disadvantage.

1. https://i.imgur.com/x1CJO96.png


The idea that it's a bottleneck is wrong. Just because it's one thread doesn't make it a bottleneck. Draining from N queues into 1 queue and posting that work off to the GPU is never going to be a bottleneck. Even an Arduino can handle that task fast enough to saturate the latest & greatest from Nvidia or AMD.

Also, FWIW, in DX12 even if parallel submission is supported, what's going to happen is that N user threads submit to 1 driver thread that then submits to the actual GPU.


I understood it that the GPU itself can handle multiple commands concurrently, no? So if the hardware supports it and there is just one thread submitting them, the hardware isn't used to full capacity. Or is that wrong? And what's the point of making one thread do it if multiple can? Queuing through one usually makes sense if you need some ordering.


Most GPUs cannot handle multiple arbitrary commands concurrently, but even if they can, you're still going to easily saturate them with a single thread pushing over command buffers. The command buffers in this case are large, complex work units, not the small micro-transactions of the APIs of old.

And yes, ordering is needed here.


So how exactly do DX12 and Mantle do it then? They don't appear to have such a single queue-manager thread. You said the driver does that task. So if it does, can't it do it for Vulkan as well? Or is their point to reduce driver complexity here?

I can see that it might not be a major issue if the time it takes to fetch from the queue is far smaller than the time it takes to execute a command [buffer]. I.e. if the queue manager fetches and pushes commands to the queue, returning to fetch another without waiting for the first to finish, then it will work fine (it will have to handle GPU saturation, though). But if it is blocking, it will be a bottleneck that won't let the hardware be used fully.


Graphics APIs are almost universally non-blocking, this won't be blocking either.

Vulkan is basically a rebranded Mantle. I suspect you're just mistaken about Mantle, but as that API is not (yet?) open we don't really know if this was a change Khronos made when adopting Mantle or if this is just how Mantle works as well.

DX12 works the same as Vulkan, though, with a single submission thread ("The only serial process necessary is the final submission of command lists to the GPU via the command queue, which is a highly efficient process." http://blogs.msdn.com/b/directx/archive/2014/03/20/directx-1...)
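
So the expected pattern is roughly: N threads record command buffers in parallel, and one thread drains them into the queue. A hypothetical sketch -- handle names like `graphicsQueue`, `recordedBuffers` and `workerCount` are placeholders, and the struct/function shapes are my guess at what the final API will look like:

    // Worker threads each record their own command buffer in parallel
    // (vkBeginCommandBuffer ... vkCmd* ... vkEndCommandBuffer); then the
    // single submission thread pushes all of them to the GPU in one call.
    VkSubmitInfo submit = {0};
    submit.sType              = VK_STRUCTURE_TYPE_SUBMIT_INFO;
    submit.commandBufferCount = workerCount;         // N buffers, recorded on N threads
    submit.pCommandBuffers    = recordedBuffers;     // gathered from the workers
    vkQueueSubmit(graphicsQueue, 1, &submit, fence); // the only serial step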


About Mantle, I saw this: http://techreport.com/review/25683/delving-deeper-into-amd-m...

It shows that multiple queues can be used for one GPU. I guess Vulkan can allow the same but it's not clear from that diagram.


Take a look at that again. There's one queue per type. One queue for graphics, one queue for compute, and one queue for DMA. Note that multiple threads are all feeding the single graphics queue.


I found the answer: https://www.youtube.com/watch?v=EUNMrU8uU5M&t=19m50s

Vulkan does support multiple command queues. And there can be several queues of the same type for one GPU.


Yeah, I understood that. One queue per independent type of task. But the Vulkan diagram didn't even mention that. Maybe such parallelism is AMD-specific, and Vulkan, being more generic, has to cater to a wider range of GPUs. Not sure.


Looks quite nice, actually.

I thought it might have turned into a Longs Peak 2.0, but from the initial overview it is quite interesting.

And I really like that it seems to be moving toward a more language-agnostic API and better tooling support.


Very exciting! I wonder: when will we be able to start making projects with this? It would be interesting to see how it compares with OpenCL when it comes to doing things like Bullet physics' GPU rigid body solver and so on. Also, of course, ray tracing! :D


Maybe there isn't a spec yet, but it seems there are demos already running on Vulkan: http://blog.imgtec.com/powervr/trying-out-the-new-vulkan-gra...


Thanks, this article has some really useful comments. Here's another thread on r/gamedev about it[1]. Not much discussion as of now but hopefully as people wake up it may have interesting comments too.

[1]: http://www.reddit.com/r/gamedev/comments/2xrobv/an_overview_...


I'm by no means an expert in the field, but if I understand the role of Vulkan correctly, you -- as an application developer -- won't be writing your programs against the Vulkan API.

OpenCL will become a library that is implemented by interacting with the GPU via the Vulkan API, and as an application developer you'd use OpenCL. Or perhaps you'd never again use OpenCL, because someone else develops a better library.

Perhaps someone creates a Haskell library -- which talks to the GPU via the Vulkan API -- that implements a map function that executes in parallel on the GPU. In that case you wouldn't use OpenCL or Vulkan directly at all.


> I'm by no means an expert in the field, but if I understand the role of Vulkan correctly, you -- as an application developer -- won't be writing your programs against the Vulkan API.

If I understood correctly, for graphics at least Vulkan gives great flexibility to application developers, letting them decide how to manage multithreading. If developers care about high efficiency (and games usually need it), they'll use it.

Of course if developers are using third party engines which already handle it for them, they might not need to deal with Vulkan directly. But engine developers would have to in such case.


I agree. Engine developers would target Vulkan. But I don't consider a game engine to be an application. It's more of a library, I would say. The game that uses the engine is the application.


Many games write their own engines, so I'd say it concerns game developers too. They'd better understand the underlying logic anyway.

By the way, it looks like Vulkan doesn't get rid of single thread bottleneck for submitting commands to the GPU: https://news.ycombinator.com/item?id=9140849

It doesn't look competitive in comparison with DX12 and Mantle.


The headline made me expect several things until I saw that it was Khronos, the group that gives us, among other things, the OpenGL APIs. Looks like the next-generation OpenGL is getting ready to be shown off.

With Metal and Mantle, it was clear that graphics programmers wanted to use GPUs at lower levels of abstraction to more efficiently exploit the design of GPUs, which have a much different architecture than a decade ago. Without a corresponding standards-based option, low-level APIs threaten to fragment GPU programming.

One organization I didn't see any mention of, which seems to have some future role, albeit completely uncertain, is the Heterogeneous System Architecture Foundation, spearheaded by AMD and with seemingly every chip-maker except Nvidia and Intel on board.

As GPU stream processors get more CPU-like while staying naturally suited to parallel processing, it is only a matter of time before we get the right abstractions: a map gets scheduled across many stream processors and is reduced by a higher single-thread-performance CPU, and suddenly computer vision and many other naturally parallel data synthesis workloads are programmed in a less heterogeneous software environment. They would execute on chips where the stream processors and CPUs share a large amount of commonality, possibly down to micro-op compatibility, through something Nvidia could be pushing towards in their Denver architecture (as yet highly speculative).

To somewhat less than enthusiastic coverage, Nvidia has been building up their partnerships with automakers like crazy. Computer vision is obviously one of the applications that will be required in self-driving cars. Tango and other projects are also quiet beneficiaries of Nvidia tech maturing into the Tegra platform.

We have far from conquered programming and CPU design just because JITs are good, 8GB of RAM is expected, or GPUs can mine megahashes per second, etc. We're in some future's bad old days. The idea that Khronos is involved in unifying the graphics and computing APIs as well only makes the exciting question of who will drive our cars more intriguing.


I find another aspect of this announcement VERY interesting: the SPIR-V intermediate language.

I haven't finished reading the PDF [0] yet, but it looks like LLVM, but for hardware.

[0]: https://www.khronos.org/registry/spir-v/specs/1.0/SPIRV.pdf


It is surprising how much semantic significance is placed on whether a language is encoded in text or binary format.

SPIR-V looks to be a completely new format, not evolved from previous versions of SPIR. This new version is being described as a "fully specified Khronos-defined standard", which is great to hear.


"Rapid progress since June 2014"

Gee, I wonder what happened in June 2014 to inspire this....


What did happen?


I think he's referring to Metal at WWDC. That's the only thing I can think of as being relevant that month.


I think he's referring to Mantle at WWDC.


Mantle is from AMD and dates back to 2013. It has nothing to do with Apple or WWDC.

You're thinking of Metal.


I noticed that it says "the first shading language" to be supported will be GLSL. Does that imply that there will be a new shading language or extensible shading language facilities?


Direct access to the GPU...is that done in some kind of sandbox at least? How do Mantle/Metal/DX12 do it?


»Direct« as in »a very thin layer around the GPU's capabilities and workings«, instead of what OpenGL does currently with massive amounts of state that the driver has to handle and a model that doesn't at all work like the GPU does. This removes a quite hefty translation layer between your software and the GPU, leading to better performance, fewer hacks and workarounds (hopefully), etc.


CUDA gives you pretty much full access, and AFAIK you don't need admin privileges to run a CUDA program. Many of my errors when writing CUDA code resulted in my web browser being rendered as a texture.


"Direct access" most likely doesn't mean literal direct access, just less convoluted and in the way than current GL is.


I'm unaware of Mantle and Metal, but DirectX is direct too.

It's not really a massive issue when you're locally installing programs since you've already compromised your system in the first place. But when leveraged over the web with APIs such as WebGL, it does become a security vulnerability.


Yes, GPUs these days have an MMU and the idea of multiple contexts in order to separate processes from one another.


Now let's hope lock-in proponents won't sabotage this effort.


Will Playstation 4 support this? That would benefit it a lot. I'm assuming Xbox One won't.


The PS4 doesn't even support standard OpenGL, why would they support Vulkan?


Because Vulkan is very much like the APIs you use in console development to talk to the GPU, with the main difference that it's been designed to be GPU-vendor neutral.


Console vendors have always designed their own APIs; why change now?


I would imagine it comes down to cost. If this API does a good job, why spend the effort (read: money) to create a new one with all of the tooling around it?

Though, I should note I doubt this will really save much. Just on paper it is a cost savings. (Unless, as usual, I'm wrong on something.)


Consoles tend to have custom hardware architectures that don't map at all to generic APIs.


I think that was a truer statement before MS entered the game.

Also, for something as low level as this API sounds, it would probably map decently, while still requiring another set of primitives for processor and memory control.

That is, I think the point of this API is to be a subset that can remain a bit more common between platforms, not an all-inclusive API that rules everything.



