DirectX Adopting SPIR-V as the Interchange Format of the Future (microsoft.com)
174 points by AshleysBrain 19 days ago | 133 comments



No surprise here, given the extent to which HLSL is already the de facto shading language for Vulkan.

Khronos has already mentioned at a couple of conferences that there will be no further work on improving GLSL, and given DirectX's weight in the industry, HLSL kind of took over.

Additionally, for the NVidia fans, it might be that Slang also gets a place in the Vulkan ecosystem; discussions are ongoing, as revealed in SIGGRAPH sessions.


My understanding was that dxc lacked support for compiling various HLSL features to SPIR-V (hence SM7 now), so there are still a bunch of Vulkan-focused projects like Godot which only support GLSL.

But yes, the games industry has been almost entirely HLSL since forever, and this is going to help remove the final obstacles.


Yep, DXC's HLSL-to-SPIR-V path especially was a big issue when it came to supporting new features from Vulkan.

Though I would still like to see if Slang can succeed, and I am always a bit afraid of Microsoft just dropping the ball somewhere.


What about WGSL though, the shader language of WebGPU? WebGPU is kind of Vulkan lite, but unlike with Vulkan, Apple is on board and is actually the reason why WGSL exists as yet another shading language.


What about it? Nobody wanted WGSL; it's just an artifact of having to appease Apple during WebGPU's development, as you say. I don't see why it would be adopted for anything else.

The old WebGPU meeting notes have some choice quotes from (IIRC) Unity and Adobe engineers literally begging the committee not to invent a new shader language.


>The old WebGPU meeting notes have some choice quotes from (IIRC) Unity and Adobe engineers literally begging the committee not to invent a new shader language.

This was an interesting tidbit, so I tried to find the source for it. While I did not find it, I did find the December 2019 minutes[0] which has a related point:

>Apple is not comfortable working under Khronos IP framework, because of dispute between Apple Legal & Khronos which is private. Can’t talk about the substance of this dispute. Can’t make any statement for Apple to agree to Khronos IP framework. So we’re discussing, what if we don’t fork? We can’t say whether we’re (Apple) happy with that.

I found this link via rust hn[1] which I found after reading this blog post:[2]

>Vulkan used a bytecode, called SPIR-V, so you could target it from any shader language you wanted. WebGPU was going to use SPIR-V, but then Apple said no

The lobsters thread also links to a relevant HN post:[3]

>I know, I was there. I also think that objection to SPIR-V wasn't completely unfounded. SPIR-V is a nice binary representation of shaders, but it has problems in the context of WebGPU adoption: It's so low level [...] It has a lot of instructions [...] Friction in the features we need, vs features Khronos needs. [...] there is no single well specified and tested textual shading language. HLSL doesn't have a spec.

The linked blog post from lobsters was also discussed on HN, which you also commented in.[4]

It would be great if you could find that Unity/Adobe discussion as I would be interested to read it.

[0] https://docs.google.com/document/d/1F6ns6I3zs-2JL_dT9hOkX_25...

[1] https://lobste.rs/s/q4ment/i_want_talk_about_webgpu

[2] https://cohost.org/mcc/post/1406157-i-want-to-talk-about-web...

[3] https://news.ycombinator.com/item?id=23089745

[4] https://news.ycombinator.com/item?id=35800988


> It would be great if you could find that Unity/Adobe discussion as I would be interested to read it.

https://github.com/gpuweb/gpuweb/wiki/Minutes-2019-09-24

Corentin: Web Shading Language — A high-level shading language made by Apple for WebGPU.

<room in general grimaces>

[...]

Jesse B (Unity): We care about HLSL

Eric B (Adobe): Creating a new high level language is a cardinal sin. Don’t. Do. That. Don’t want to rewrite all my shaders AGAIN.

Jesse B: If we can transcode to HLSL to whatever you need, great. If we can’t, we may not support your platform at all.

Eric B: Would really not like even to write another transcoder. If there’s an existing tool to get to an intermediate representation, that’s good. Would suggest SPIRV is an EXCELLENT existing intermediate representation.

Note the WSL language made by Apple which sparked that discussion is unrelated to the WGSL language they ended up shipping, but the sentiment that the ISV representatives just wanted them to use HLSL or SPIR-V stands.


>WSL

Ah, that explains part of why I couldn't find it. I was searching mainly for WGSL, something like 'WEBGPU minutes "Unity" "HLSL" "WGSL"'. There was also WHLSL, also from Apple, at one point, but it was later dropped in favor of WSL.[0][1]

>A few months ago we discussed a proposal for a new shading language called Web High-Level Shading Language, and began implementation as a proof of concept. Since then, we’ve shifted our approach to this new language, which I will discuss a little later in this post.

>[...]

>Because of community feedback, our approach toward designing the language has evolved. Previously, we designed the language to be source-compatible with HLSL, but we realized this compatibility was not what the community was asking for. Many wanted the shading language to be as simple and as low-level as possible. That would make it an easier compile target for whichever language their development shop uses.

>[...]

>So, we decided to make the language more simple, low-level, and fast to compile, and renamed the language to Web Shading Language to match this pursuit.

The "we designed the language to be source-compatible with HLSL, but we realized this compatibility was not what the community was asking for" comment is funny because Unity's "We care about HLSL" comment seems to be directly against this.

In any case, this is really a disappointing move from Apple. Just another example of them ignoring developers – even large developers like Adobe and Unity – over completely petty disputes and severe NIH.

The craziest line in the post is probably "[WSL] would make it an easier compile target for whichever language their development shop uses." It's like they knew people wanted SPIR-V but wouldn't do it due to some petty legal drama that Apple invented, and then chose literally the worst of all worlds by making yet another compile target instead of at least choosing the next best thing, which would be something compatible with HLSL.

[0] https://github.com/w3c/strategy/issues/153

[1] https://webkit.org/blog/9528/webgpu-and-wsl-in-safari/


> it's just an artifact of having to appease Apple during WebGPUs development

To appease Google, most likely. WebGPU is based on original work by Apple and Mozilla, who based it on Metal.

I doubt Apple would be against whatever Metal uses for its shader language.


The choice was between using or adapting SPIR-V, which is what basically everyone doing multi-platform development wanted, or using anything else and pissing everyone off by making them support another shader language. Apple stonewalled using SPIR-V or any other Khronos IP on unspecified legal grounds, so they effectively forced the decision to roll a new shader language. Post-hoc rationalizations were given (e.g. human-readable formats being more in the spirit of the web, despite WebAssembly already existing at that point), but the technical merits were irrelevant when one of the biggest stakeholders was never, ever going to accept the alternative for non-technical reasons.

https://docs.google.com/document/d/1F6ns6I3zs-2JL_dT9hOkX_25...

Apple is not comfortable working under Khronos IP framework, because of dispute between Apple Legal & Khronos which is private. Can’t talk about the substance of this dispute. Can’t make any statement for Apple to agree to Khronos IP framework. So we’re discussing, what if we don’t fork? We can’t say whether we’re (Apple) happy with that.


I don't understand why people say things that are kind of trivial to disprove, but here's the document with the notes where Apple refuses to use SPIR-V.

https://docs.google.com/document/d/1F6ns6I3zs-2JL_dT9hOkX_25...

> MS: Apple is not comfortable working under Khronos IP framework, because of dispute between Apple Legal & Khronos which is private. Can’t talk about the substance of this dispute. Can’t make any statement for Apple to agree to Khronos IP framework. So we’re discussing, what if we don’t fork? We can’t say whether we’re (Apple) happy with that.

Reading between the lines, it seems like Apple mainly doesn't want to implement SPIR-V because engaging with the "Khronos IP framework" would prevent them from suing other Khronos members over patent disputes.


WebGPU, like WebGL, is a decade behind the native APIs it is based on.

No one asked for a new Rust-like shading language that they have to rewrite their shaders in.

Also, contrary to what FOSS circles believe, most studios don't really care about Web 3D, hence why streaming is such a thing for them.

There have been HLSL to SPIR-V compilers for several years now; this is Microsoft's own official compiler getting a SPIR-V backend as well.


Because WebGL, just like WebAssembly (with its hacky thread support and compilation issues), is a giant kludge.

WebGL still has the fundamental issue of not supporting anything resembling a modern OpenGL feature set (with modern meaning 2010s-era stuff like compute shaders and multi-draw indirect) even in theory, and in practice macOS doesn't support WebGL2, meaning stuff like multiple render targets (which is necessary for deferred rendering) is missing, so it's almost impossible to make a modern-ish game that runs well in a browser.
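
For readers who haven't touched deferred rendering: "multiple render targets" just means attaching several color textures to one framebuffer and writing them all in a single pass. A rough C++ sketch against the GLES 3.0 headers (which is what WebGL2 maps to); the texture handles are assumed to already exist:

    #include <GLES3/gl3.h>   // WebGL2 roughly corresponds to GLES 3.0

    // Minimal g-buffer setup for deferred rendering: three color attachments
    // filled in one geometry pass. Texture handles are assumed to exist already.
    void SetupGBuffer(GLuint albedoTex, GLuint normalTex, GLuint materialTex) {
        GLuint fbo;
        glGenFramebuffers(1, &fbo);
        glBindFramebuffer(GL_FRAMEBUFFER, fbo);
        glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, albedoTex, 0);
        glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT1, GL_TEXTURE_2D, normalTex, 0);
        glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT2, GL_TEXTURE_2D, materialTex, 0);
        const GLenum bufs[] = { GL_COLOR_ATTACHMENT0, GL_COLOR_ATTACHMENT1, GL_COLOR_ATTACHMENT2 };
        glDrawBuffers(3, bufs);  // WebGL1 without extensions can only write a single target
    }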

IMO the problem isn't that WebGPU/Wasm is a decade (or however many years) behind, but that we cannot reliably expect a feature set that existed on typical mid-2000s PCs to work in the browser across all platforms (which is the whole point of the web).


It's almost as if some fruit-based company is sabotaging the efforts to keep its walled garden.


Despite all the bending over backwards to keep the fruit company on board with WebGPU, they still haven't actually shipped their Metal backend in Safari, over a year after Chrome managed to ship DirectX, Metal and Vulkan backends simultaneously. Mozilla hasn't shipped WebGPU either, but their resources can hardly be compared to Apple's.


Honestly, Google's probably almost as guilty: Native Client was a great idea and sidestepped basically all the issues we are having now, but they killed it in favour of 'standard' APIs like Wasm that barely work for their intended purposes.


Nah, Native Client had a lot of its own problems. Except for pthreads-style multithreading, PNaCl couldn't even compete with asm.js, and Spectre/Meltdown would be just as catastrophic for PNaCl as it was for SharedArrayBuffer.


Add Mozilla to the mix, for not wanting to adopt PNaCl and coming up with asm.js instead, and for what?

Firefox is almost irrelevant now, and Google is calling all the shots anyway.

Without Safari's relevance on mobile, the Web would have long turned into ChromeOS everywhere by now.


> Add Mozilla to the mix, for not wanting to adopt PNaCl.

Mozilla wasn't in any position to command the market, even at the time PNaCl was created. PNaCl failed on its own demerits.

> Firefox is almost irrelevant now

Firefox has been irrelevant because it doesn't have the trillion dollar budget of Apple and Google, nor the vendor lock-in, and with that it has no reach that would enable it to steer the web the way it deems fit. It has nothing to do with asm.js.


Not at all. Had Mozilla adopted PNaCl instead of coming up with asm.js, WebAssembly would never have needed to exist; instead, everything got delayed by a decade.

Here is a memory refresher from 2011,

"Mozilla's Rejection of NativeClient Hurts the Open Web"

https://chadaustin.me/2011/01/mozillas-rejection-of-nativecl...


The argument against NaCl was that the browser API, PPAPI, was poorly documented, exposed implementation details of Blink/Chromium, and was thus very difficult to implement in a non-Chromium browser, so it's no surprise that Mozilla, Apple, and Opera were unenthused.

https://bugzilla.mozilla.org/show_bug.cgi?id=729481#c83


Mozilla wasn't the only one with a problem with PNaCl. They were definitely the most opposed to it, but even Opera was strongly against it (granted, this was around 2011).


> Honestly Google's probably almost as guilty - Native Client was a great idea and sidestepped basically all the issues

NaCl failed on its own.

A) It wasn't backwards compatible

B) The spec was essentially "look at the Chrome source"

C) No one other than Google wanted it

D) It was essentially Google's ActiveX (yeah, ActiveX had some nifty ideas and they still persist to this day)


PNaCl specification documents.

https://www.chromium.org/nativeclient/pnacl/

Naturally after a decade not all links are working.


> macOS doesn't support WebGL2

WebGL2 has been fully supported in Safari for quite a while now. In fact it's using the same rendering backend as Chrome and Firefox (ANGLE), and AFAIK Google and Apple engineers worked together to create (or at least improve?) the ANGLE Metal backend and integrate ANGLE into Safari.


Safari supports WebGL2 since version 15 - unless you meant something else by macOS lacking support?

(I agree with your general point though.)


The native WebGPU libraries accept SPIR-V as input, and they offer libraries to convert WGSL to SPIR-V and back. I.e. WGSL is only needed when running WebGPU in browsers, but even there it can be code-generated from other shading languages by going through SPIR-V (but tbh, I actually like WGSL; it's simple and straightforward).


Except that the conversion to WGSL is a complete waste of compute resources, engineering effort and the time of everyone involved. WebGPU is a _web_ API after all, even if people realized the runtimes could be used outside the browser.

Converting your SPIR-V to WGSL just to convert it back to SPIR-V to feed it into a Vulkan driver, or running an entire language frontend just to emit DXIL or Metal IR. We learned 15 years ago that textual shader languages at the GPU API interface are a mistake but we're forced to relearn the same mistakes because Apple wouldn't play ball. What a joke.


WGSL was a mistake and hopefully they get rid of it. It negatively impacts WebGPU's adoption; at least it did for me. The syntax is one of the worst ever created, just horrible.


WGSL could be good for Khronos. It’s a modern language with an actual specification. It’s gaining users every day.


> Khronos already mentioned in a couple of conferences that there will be no further work improving GLSL

Unfortunately, HLSL isn’t an open standard like GLSL. Is it Khronos's intention to focus solely on SPIR-V moving forward, leaving the choice of higher-level shader languages up to application developers?


There's likely to be very little funding for GLSL moving forward, and I would expect no major spec updates ever again, but vendors will probably keep publishing extensions for new GPU features and fixing things up. GLSL still has a fairly large user base. Whether SPIR-V is going to be the only Khronos shading language (or whatever you want to call it) moving forward, that's hard to say. Nvidia is pushing for Slang as a Khronos standard at the moment. Not sure if anyone's biting.


Yes, they officially stated at Vulkanised and SIGGRAPH, among other places, that there is no budget for GLSL improvements, and also that they aren't programming language experts anyway.

It is up to the community to come up with an alternative, and the game development community is mostly on HLSL.


Will this help games be more compatible with the Proton layer on Linux, or is this not related?


In theory, if DirectX games start passing shaders to the driver in SPIR-V, the same format Vulkan uses, then yes, it should make Proton's job easier. Translating the current DXIL format to SPIR-V is apparently non-trivial, to say the least:

https://themaister.net/blog/2021/09/05/my-personal-hell-of-t...

https://themaister.net/blog/2021/10/03/my-personal-hell-of-t...

https://themaister.net/blog/2021/11/07/my-personal-hell-of-t...

https://themaister.net/blog/2022/04/11/my-personal-hell-of-t...

https://themaister.net/blog/2022/04/24/my-personal-hell-of-t...


Maybe. Maybe not; it could well be an incompatible flavour of SPIR-V.


It's unlikely to diverge from the same general flavor as Vulkan. The worst parts of the DXIL-to-SPIR-V conversion I remember from that chain of blog posts are rebuilding structured control flow and how it interacts with atomics and wave convergence.

That's a problem that goes away irrespective of any DX extensions to SPIR-V for supporting the binding model DX uses.


I haven't used either in a while; what is missing from GLSL?


It's C-based, has no support for modular programming, everything needs to be a giant include, and no one is adding features to it as Khronos hasn't assigned any budget to it.

HLSL has evolved to be C++-like, including lightweight templates, mesh shaders and work graphs; it has module support via libraries, and is continuously being improved with each DirectX release.


I'm not a fan of GLSL either, but adding C++-like baggage to shading languages, as HLSL and especially MSL (which is C++) do, is a massive mistake IMHO; I'd prefer WGSL over that sort of pointless language complexity any day.


Long term, shading languages will be a transition phase, and most GPUs will turn into general-purpose compute devices, where we can write code like in the old days of software rendering, except it will be hardware accelerated anyway.

We already see this with rendering engines that are using CUDA instead, or as shown at Vulkanised sessions.

I do agree that, given how much C++ has grown and its security issues, something else would be preferable; maybe NVidia has some luck with their Slang adoption proposal.


At some point you have to stop working in assembly and graduate to a high-level language and beyond.

Modern GPU stuff is getting too complex to be practical without higher language features.


From the POV of assembly, C and any other high-level language are basically the same. That doesn't mean that climbing even higher up the abstraction ladder is a good thing, though (especially for performance).


Hopefully this isn't actually a Third SPIR-V Dialect.


I wouldn't expect to be able to load a D3D12 SPIR-V blob into Vulkan or OpenGL anyway, just because the 'input semantics' are very different (and I think that's also the main difference between GL and Vulkan SPIR-V blobs). But AFAIK SPIR-V is extensible for this type of difference without rendering existing SPIR-V tools completely useless.


I think you’re kind of missing what I was getting at: today, tools which produce SPIR-V for OpenCL cannot be used to produce SPIR-V for Vulkan shaders, and vice versa. MLIR is a possible way out of the mess, but I am not hopeful that the future looks less messy, at least for some years, and it may not have enough incentive to even be feasible to improve.


Ah right, I was thinking of the Vulkan vs GL SPIRV flavours.

I don't think it's much of a problem though. I cannot run a WASM blob compiled for the web in a WASI runtime either, or an x86 executable compiled for Windows on Linux. Heck, I can't even run an executable compiled for one Linux distro on another Linux distro if the glibc library versions don't overlap.


Yes, but you can use the same compiler, unlike with SPIR-V.


I guess the main reason is that entirely different people work on using GPUs for compute tasks versus using GPUs for rendering tasks (unless you're using the 3D API's compute features). E.g. not a technical problem, but organizational.


Yes. The ecosystem is the problem, and it’s a chicken/egg or network-effect problem. There’s no reason we can’t use the same source and compiler to target both Vulkan shaders and OpenCL kernels and now DX12 shaders, but in practice the fact that these all supposedly use the same standard IR doesn’t actually get us the promised utility of implementing that standard, because in practice they do use different variations of the standard.

Therefore this news is less than useless unless it does something to defeat this problem.


I think the main reason why DX switches to SPIRV is that it saves the DX team a lot of work. They can drop dxc which is based on a very old LLVM fork, and replace the DXIL "hack" (which is just LLVM-IR with some things bolted on) with a properly specified bytecode.


This is really good news!


I'd wish more Microsoft devblog content was like this one.


I could do without the shitty memes images.


Aside: I find it funny to see memes with attribution.

Given the culture around memes, attribution feels somehow weird.


Cinematic crossovers have gone too far


Good. Now if only Windows would adopt Vulkan as the graphics API of the future.


What's wrong with d3d12? It works perfectly fine for what it does. In my experience it causes a lot fewer issues than Vulkan. And it's not really due to Windows not supporting Vulkan correctly, since my experience with Vulkan has mostly been on Linux.

I don't dislike Vulkan either, it's just that I don't see the point of replacing something that works pretty well.


Adopting Vulkan doesn't mean removing DirectX 12, just like adopting SPIR-V doesn't mean removing HLSL. No one said anything about getting rid of anything.


SPIR-V is not an alternative to HLSL. It's an intermediary format that you compile HLSL (or GLSL) to.
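
As an illustration (not from the article), here's a rough C++ sketch of driving DXC to emit SPIR-V from an HLSL source; the file name, entry point and target profile are made-up placeholders, and dropping the -spirv argument makes the same invocation emit DXIL instead:

    #include <windows.h>
    #include <dxcapi.h>
    #include <wrl/client.h>
    using Microsoft::WRL::ComPtr;

    int main() {
        ComPtr<IDxcUtils> utils;
        ComPtr<IDxcCompiler3> compiler;
        DxcCreateInstance(CLSID_DxcUtils, IID_PPV_ARGS(&utils));
        DxcCreateInstance(CLSID_DxcCompiler, IID_PPV_ARGS(&compiler));

        // Load the HLSL source (placeholder file name; error handling omitted).
        ComPtr<IDxcBlobEncoding> source;
        utils->LoadFile(L"shader.hlsl", nullptr, &source);
        DxcBuffer buffer{ source->GetBufferPointer(), source->GetBufferSize(), DXC_CP_ACP };

        // "-spirv" selects the SPIR-V backend; omit it to get DXIL from the same source.
        LPCWSTR args[] = { L"-T", L"ps_6_6", L"-E", L"main", L"-spirv" };
        ComPtr<IDxcResult> result;
        compiler->Compile(&buffer, args, _countof(args), nullptr, IID_PPV_ARGS(&result));

        ComPtr<IDxcBlob> blob;
        result->GetOutput(DXC_OUT_OBJECT, IID_PPV_ARGS(&blob), nullptr);
        return blob.Get() ? 0 : 1;
    }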


Reinvention of the wheel and tax on supporting "yet another thing" for developers who need to deal with it.

Same reason standards have some value.


I don't think it's reinventing the wheel, since Vulkan was ready quite a bit after D3D12, but yeah, I guess maybe it could be the standard on Windows after D3D12 becomes obsolete...

But that's going to be in quite a while since I can't think of an actual feature (for end users) that is missing from one vs the other right now.

Basically everything on Windows already uses D3D12/DirectX, so it would actually be a huge wheel reinvention to migrate to a standard just for the sake of it.


I think saying that DX was first, so it's Vulkan that was reinventing the wheel, is incorrect given the historical context.

AMD and DICE developed a prototype API called Mantle, which is what both DX and Vulkan are based on.

Both Vulkan (glNext back then) and DX12 were announced around the same time. VK came a bit later, as standards bodies are usually slower in coming to decisions, but it's not like VK was reinventing anything from DX.

I remember we were having a laugh reading early DX12 documentation as it was in parts just copied from Mantle with names unchanged in places!


> DirectX 12 was announced by Microsoft at GDC on March 20, 2014, and was officially launched alongside Windows 10 on July 29, 2015.

> Vulkan 1.0 was released in February 2016.

What people forget is that Mantle was basically a proprietary AMD API that they wanted and developed until, well, the release of Metal in 2014 and DX 12 in 2015.

Only then did they "graciously" donate Mantle to Khronos for the development of modern APIs.

Vulkan was not just late. It suffers from the same issues as OpenGL before it: designed by committee, lackluster support from the major players.


AMD indicated from the beginning they wanted it to become the universal API.

Opening stuff up formally also takes time. So it was all going towards Vulkan in one form or another, and no one was forcing MS to push its DX12 NIH while this was happening.

And counter to your point, despite Mantle being "proprietary", MS directly used it to create DX12 (same as Vulkan used it), so AMD clearly didn't have any complaints about that.


> AMD indicated from the beginning they wanted it to become the universal API.

Was it just an indication, or was there any actual work done? Such as supporting anything other than AMD cards, inviting others to collaborate, etc.?

> despite Mantle being "proprietary", MS directly used it to create DX12

I can't remember the term for it: what do you call it when a single company develops something with little to no external input and collaboration, even if it's sorta kinda open?

As for "NIH"... Microsoft has/had a much bigger investment and interest in new gaming APIs than the few AMD cards that Mantle supported. And they already had their own platform, their own APIs, etc. Makes sense for them to move forward without waiting for anyone.


Over time the work was obviously done for Mantle → glNext → Vulkan. And that's because AMD were positive about this idea. Their initial presentation of Mantle was in that vein, i.e. to kickstart the progress of a common API.

MS just decided to do the whole thing in parallel for their own NIH reasons, using parts of Mantle practically verbatim. It wouldn't have been possible without AMD basically allowing it.

See: https://x.com/renderpipeline/status/581086347450007553

I.e. I don't see any reason here for MS not to collaborate on Vulkan instead, besides the usual lock-in approach.


Ah, you are right, I forgot that they were both announced at around the same time. It just feels like Vulkan took forever, to the point where some teams at my job had to use OpenGL even for greenfield projects for quite a while after Vulkan was first announced (even when they wanted to use Vulkan).

I wonder if that means that dx12 and Vulkan could have a good interop/compatibility story, since they both have similar origins.


DX12 was pushed as NIH, since it was made from Mantle the same way Vulkan was. So to reduce NIH, it only makes sense to unify it all in Vulkan.

They already made the first sensible step with SPIR-V here. The next step makes the same sense.

And stuff can be translated into Vulkan if it can't be rewritten.


It's Vulkan that was reinventing the DX12 wheel, wasn't it though?


Vulkan is based on Mantle, which predates the release of DX12 by about 2 years.


The same can also be said about D3D12; it is at least 'heavily inspired' by Mantle. In the end, not much of Mantle has survived in Vulkan either, though. Mantle was a much cleaner API than Vulkan because it didn't have to cover as many GPU architectures as Vulkan (Mantle especially didn't have to care about supporting shitty mobile GPUs).


In this case Vulkan is the only option. DX12 is a non-starter since it was never intended to be universally available.


DX12 is proprietary. Vulkan is not.


It's mostly on us, the developers. Vulkan is fully supported on Windows.

I would say that if you want to have multi-platform support, just use Vulkan. It covers most of the platforms (especially if you include MoltenVK)[0].

Though for games, if you want to support Xbox, that usually throws a curveball into API choice planning, as that might be a more important target than Linux/Android/Mac/iOS (maybe even combined) for your game. So if you already have to support DX for that...

[0] https://www.vulkan.org/porting


> Vulkan is fully supported on Windows.

If and only if the GPU vendor supports it (they do, for now). Windows itself fully supports only DX.

> Though, for games, if you want to support Xbox, that usually throws a curveball into API choice

As does PlayStation. Very few develop with Vulkan for PlayStation, as GDN (or whatever the name of the native framework is) is the main choice.

And on mobile, well, you need to support Metal.


Vulkan is already supported on Windows as a first-class citizen by all major IHVs. I am not sure what this "adoption" you speak of would entail. If you're talking about replacing D3D12, that actually is a terrible idea.


That's not really the same as being supported by Windows. I think that's 3rd party support and not built into the OS.


What do you mean when you say "built into the OS"? D3D12 is just an API. The D3D runtime is user-space; both the UMD that wraps it and the KMD are supplied by the hardware vendor. In the end, both a D3D app and a Vulkan app end up talking to the very same KMD. See here for reference:

https://learn.microsoft.com/en-us/windows-hardware/drivers/d...


D3D is clearly more integrated into the OS than Vulkan is.

Most importantly, Windows includes a software D3D renderer (WARP) so apps can depend on it always being present (even if the performance isn’t spectacular). There are lots of situations where Vulkan isn’t present on Windows, for example a Remote Desktop/terminal server session, or machines with old/low-end video cards. These might not be important for AAA games, but for normal applications they are.

Another example: Windows doesn’t include the Vulkan loader (vulkan-1.dll), apps need to bundle/install that.
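
To make the "in-box vs installed-by-the-driver" distinction concrete, here's a rough C++ sketch (assuming desktop Windows; error handling omitted) that probes for the Vulkan loader and then creates a D3D12 device on the always-present WARP adapter:

    #include <windows.h>
    #include <d3d12.h>
    #include <dxgi1_4.h>
    #include <wrl/client.h>
    using Microsoft::WRL::ComPtr;

    int main() {
        // vulkan-1.dll is not shipped with Windows; it only exists if a GPU
        // vendor driver (or the application itself) installed it.
        bool hasVulkanLoader = LoadLibraryA("vulkan-1.dll") != nullptr;

        // D3D12, by contrast, can always fall back to the in-box WARP software rasterizer.
        ComPtr<IDXGIFactory4> factory;
        CreateDXGIFactory1(IID_PPV_ARGS(&factory));
        ComPtr<IDXGIAdapter> warpAdapter;
        factory->EnumWarpAdapter(IID_PPV_ARGS(&warpAdapter));
        ComPtr<ID3D12Device> device;
        D3D12CreateDevice(warpAdapter.Get(), D3D_FEATURE_LEVEL_11_0, IID_PPV_ARGS(&device));

        return (hasVulkanLoader && device.Get()) ? 0 : 1;
    }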


> D3D is clearly more integrated into the OS than Vulkan is.

Sure, but addressing the two points that you brought up would not entail changing Windows _the operating system_, just the stuff that ships with it. You could easily ship SwiftShader alongside WARP, plus the loader library; both of those are just application libraries as far as the OS/kernel is concerned. Of course, now we're in the territory of arguing about "what constitutes an OS" :-)


Oh, I was under the impression that DirectX 12 was built into Windows like Metal is on Apple platforms.


> If you're talking about replacing D3D12, that actually is a terrible idea.

Why do you say that?


I say this because Vulkan is hamstrung by being an "open API" intended to run on a very wide range of devices, including mobiles. This has major repercussions, like the awkward descriptor set binding model (whereas D3D12's descriptor heaps are both easier to deal with and map better to the actual hardware that D3D12 is intended to run on; see e.g. https://www.gfxstrand.net/faith/blog/2022/08/descriptors-are...). Overall, D3D has the benefit of a narrower scope.
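
For anyone who hasn't used both APIs, a rough C++ sketch of the two binding models being contrasted (the heap size and the single binding are hypothetical; error handling omitted):

    #include <windows.h>
    #include <d3d12.h>
    #include <vulkan/vulkan.h>

    // D3D12: one large shader-visible heap; shaders simply index into it.
    ID3D12DescriptorHeap* MakeBigHeap(ID3D12Device* device) {
        D3D12_DESCRIPTOR_HEAP_DESC desc = {};
        desc.Type = D3D12_DESCRIPTOR_HEAP_TYPE_CBV_SRV_UAV;
        desc.NumDescriptors = 1000000;  // hypothetical size
        desc.Flags = D3D12_DESCRIPTOR_HEAP_FLAG_SHADER_VISIBLE;
        ID3D12DescriptorHeap* heap = nullptr;
        device->CreateDescriptorHeap(&desc, IID_PPV_ARGS(&heap));
        return heap;
    }

    // Vulkan: a fixed descriptor set layout declared up front; sets are
    // allocated from pools and bound as whole sets.
    VkDescriptorSetLayout MakeLayout(VkDevice device) {
        VkDescriptorSetLayoutBinding binding = {};
        binding.binding = 0;
        binding.descriptorType = VK_DESCRIPTOR_TYPE_SAMPLED_IMAGE;
        binding.descriptorCount = 1;
        binding.stageFlags = VK_SHADER_STAGE_FRAGMENT_BIT;

        VkDescriptorSetLayoutCreateInfo info = {};
        info.sType = VK_STRUCTURE_TYPE_DESCRIPTOR_SET_LAYOUT_CREATE_INFO;
        info.bindingCount = 1;
        info.pBindings = &binding;

        VkDescriptorSetLayout layout = VK_NULL_HANDLE;
        vkCreateDescriptorSetLayout(device, &info, nullptr, &layout);
        return layout;
    }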

Another problem with being an open API is that (and this is my own speculation) it's easier for IHVs to collaborate with just Microsoft to move faster and hammer out the APIs for upcoming novel features, like work graphs for example, vs. bringing them into the public working group and "showing their cards", so to speak. This is probably why VK gets all the new shiny stuff like rtrt, mesh shaders, etc. only after it has been in D3D for a while.

One could argue this is all solvable by "just" adding a torrent of extensions to Vulkan, but it's really not clear to me what that path offers vs D3D.


I would guess that if DX didn't exist, the iteration on the VK side would just be faster, through extensions, like you've mentioned.

In the end, it might have even sped up the adoption of such features. Currently, if you have a multiplatform engine, even though Windows is like 99% of your PC player base, it's still sometimes a tough decision to use a feature that you can't support on all your targets.


The downside is that it ties them incredibly heavily to Microsoft, and makes cross-platform efforts much harder.


Does that support extend to ARM? Not sure if it's still the case, but I recall that early Windows on ARM devices didn't have native Vulkan (and I believe OpenGL was translated to DirectX via ANGLE).


I haven't laid my hands on any ARM Windows devices, so I wouldn't be able to tell you. I'd be somewhat surprised if the newer Snapdragon stuff doesn't have Vulkan support, because Qualcomm supports Vulkan first-class on its GPUs. In fact, on newer Android devices OpenGL support might already be implemented on top of Vulkan, but don't quote me on that.


LunarG released a native ARM version of the Vulkan SDK shortly after the Snapdragon X machines launched so presumably it works on those.

edit: yup https://vulkan.gpuinfo.org/listreports.php?devicename=Micros...


Why?

Vulkan is not a well-designed API. It's so complicated, verbose, and error-prone. It's pretty bad.


But are you saying that in comparison to DX, or just in general?

We're talking here about a potential DX replacement, not about design in general, and the bulk of it is very similar for both APIs.

There are some small quirks from Vulkan being made to be easily extensible, which in the end I consider worth it.

I personally like how consistent the API is in both patterns and naming. After using it for a while, it's easy to infer what function will do from the name, how it will handle memory, and what you'll need to do with that object after the fact.

I find the documentation better than the DX one.

What are your biggest problems with it?


At least it's documented.


The DirectX specs are much better than both the OpenGL and Vulkan specs because they also go into implementation details and are written in 'documentation language', not 'spec language':

https://microsoft.github.io/DirectX-Specs/


If you search for the 'D3D12 spec', what you actually find is that D3D12 doesn't have a specification at all. D3D12's "spec" is only specified by a document that states the differences from D3D11. There's no complete, holistic document that describes D3D12 entirely in terms of D3D12. You have to cross-reference back and forth between the two documents and try to make sense of it.

Many of D3D12's newer features (Enhanced Barriers, which are largely a clone of Vulkan's pipeline barriers) are woefully under-specified, with no real description of the precise semantics. Just finding out whether a function is safe to call from multiple threads simultaneously is quite difficult.


I don't think that going into implementation details is what I would expect from an interface specification. The interface exists precisely to isolate the API consumer from the implementation details.

And while they're much better than nothing, those documents are certainly not a specification. They are individual documents, each covering a part of the API, with very spotty coverage (mostly focusing on new features) and an unclear relationship to one another.

For example, the precise semantics of ResourceBarrier() are nowhere to be found. You can infer something from the extended barrier documentation, something is written in the function's MSDN page (with vague references to concepts like "promoting" and "decaying"), and something else is written in other random MSDN pages (which you only discover by browsing around; there are no specific links), but at the end of the day you're left to guess the actual assumptions you can make.
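
For context, the call in question is the legacy transition barrier sketched below in C++ (the resource and the before/after states are placeholders). The thing the scattered docs leave you guessing is when the explicit transition can be skipped because the resource was implicitly "promoted" to the needed state or "decayed" back to COMMON:

    #include <d3d12.h>

    void TransitionForSampling(ID3D12GraphicsCommandList* cmdList, ID3D12Resource* texture) {
        D3D12_RESOURCE_BARRIER barrier = {};
        barrier.Type = D3D12_RESOURCE_BARRIER_TYPE_TRANSITION;
        barrier.Transition.pResource = texture;
        barrier.Transition.Subresource = D3D12_RESOURCE_BARRIER_ALL_SUBRESOURCES;
        barrier.Transition.StateBefore = D3D12_RESOURCE_STATE_COPY_DEST;           // example state
        barrier.Transition.StateAfter = D3D12_RESOURCE_STATE_PIXEL_SHADER_RESOURCE; // example state
        // Whether this call could have been omitted (promotion/decay rules) is
        // exactly the kind of detail the scattered docs make hard to pin down.
        cmdList->ResourceBarrier(1, &barrier);
    }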

*EDIT* I don't mean to say that the Vulkan or SPIR-V specification is perfect either. One still has a lot of doubts while reading them. But at least there is an attempt at writing a document that specifies the entire contract that exists between the API implementer and the API consumer. Missing points are in general considered bugs and sometimes fixed.


> I don't think that going into implementation details is what I would expect from an interface specification.

I guess that's why Microsoft calls it an "engineering spec", but I prefer that sort of specification over the Vulkan or GL spec, TBH.

> The interface exists precisely to isolate the API consumer from the implementation details.

In theory that's a good thing, but at least the GL spec was quite useless because concrete drivers still interpreted the specification differently - or were just plain buggy.

Writing GL code precisely against the spec didn't help with making that GL code run on specific drivers at all, and Khronos only worried about their spec, not about the quality of vendor drivers (while some GPU vendors didn't worry much about the quality of their GL drivers either).

The D3D engineering specs seem to be grounded much more in the real world, and the additional information that goes beyond the interface description is extremely helpful (source access would be better of course).


D3D11 and D3D12 are objectively better designed APIs than their 'Khronos counterparts' OpenGL and Vulkan, as is Metal on iOS and macOS.


While I would agree on OpenGL vs D3D11, I don't find the D3D12 vs Vulkan difference to be that big.

What are the parts that you consider objectively better in D3D12 compared to Vulkan?


It should.


[flagged]


Step 1: Microsoft has a proprietary alternative to an open standard, people complain.

Step 2: Microsoft begins adopting the open standard, people complain.



I know that's what they're referring to. If you're concerned about Microsoft gaining undue influence over Vulkan/SPIR-V then rest assured they already effectively controlled the desktop graphics landscape, however they define DirectX becomes the template for hardware vendors to follow, and Vulkan then has to follow their lead.

The pattern is especially obvious with big new features like raytracing, which was added to DirectX first and then some time later added to Vulkan with an API which almost exactly mirrors how DirectX abstracts it. There are even Vulkan extensions which exist specifically to make emulating DirectX semantics easier.


That's understandable. Control over standards has immense value. Just look at Nvidia's CUDA.


CUDA's success owes much to Intel and AMD never providing anything with OpenCL that could be a proper alternative in developer experience, graphical debugging, libraries and stable drivers.

Even the OpenCL 2.x C++ standard was largely ignored or badly supported by their toolchains.


Isn't the point of OpenCL to be... open? Not only did Intel and AMD not provide enough value, but neither did the community.

CUDA... is kind of annoying. And yet, it's the best experience (for GPGPU), as far as I can tell.

I feel like it says something that CUDA sets a standard for GPGPU (i.e. its visible runtime API) but others still fail to catch up.


The problem is the OpenCL development model is just garbage.

Compare the hello world of OpenCL [1] vs CUDA [2]. So much boilerplate and low-level complexity for OpenCL, whereas the CUDA example is just a few simple lines using the CUDA compiler.
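
To make the contrast concrete, here's roughly what the OpenCL host side of a trivial kernel looks like (a C++ sketch, not the linked sample; error handling omitted). The CUDA equivalent of everything below the kernel string is more or less a single add<<<blocks, threads>>>(buf) launch:

    #include <CL/cl.h>

    static const char* kSource =
        "__kernel void add(__global float* a) { a[get_global_id(0)] += 1.0f; }";

    int main() {
        cl_platform_id platform;
        cl_device_id device;
        clGetPlatformIDs(1, &platform, nullptr);                       // pick a platform
        clGetDeviceIDs(platform, CL_DEVICE_TYPE_GPU, 1, &device, nullptr);
        cl_context ctx = clCreateContext(nullptr, 1, &device, nullptr, nullptr, nullptr);
        cl_command_queue queue = clCreateCommandQueue(ctx, device, 0, nullptr);

        // The kernel is compiled at runtime, per device.
        cl_program program = clCreateProgramWithSource(ctx, 1, &kSource, nullptr, nullptr);
        clBuildProgram(program, 1, &device, nullptr, nullptr, nullptr);
        cl_kernel kernel = clCreateKernel(program, "add", nullptr);

        cl_mem buf = clCreateBuffer(ctx, CL_MEM_READ_WRITE, 1024 * sizeof(float), nullptr, nullptr);
        clSetKernelArg(kernel, 0, sizeof(buf), &buf);

        size_t globalSize = 1024;
        clEnqueueNDRangeKernel(queue, kernel, 1, nullptr, &globalSize, nullptr, 0, nullptr, nullptr);
        clFinish(queue);
        return 0;
    }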

And what really sucks is that it's pretty hard to get away from that complexity the way OpenCL is structured. You simply have to know WAY too much about the hardware of the machine you are running on, which means having Intel/AMD/Nvidia code paths in your application logic when trying to make an OpenCL app.

Meanwhile, CUDA, because it's unapologetically just for nVidia cards, completely does away with that complexity in the happy path.

For something to be competitive with CUDA, the standard needs something like a platform-agnostic bytecode to target, so a common accelerated platform can scoop up the bytecode and run it on a given platform.

[1] https://github.com/intel/compute-samples/blob/master/compute...

[2] https://github.com/premprakashp/cuda-hello-world


Yeah, not just OpenCL, but even "newer" standards like WebGPU. I considered making a blog post where I just put the two hello worlds side-by-side and say nothing else.

I was severely disappointed after seeing people praise WebGPU (I believe for being better than OpenGL).

As for the platform-agnostic bytecode, that's where something like MLIR would work too (kind of). But we could also simply just start with transpiling that bytecode into CUDA/PTX.

Better UX with wider platform compatibility: CuPy, Triton.


OpenCL 2.x was a major failure across the board.

OpenGL and Vulkan were good though. Gotta take the wins where they exist.


Thanks to Intel and AMD.


NVidia never even implemented OpenCL 2.0

AMD had a buggy version. Intel had no dGPUs, so no one cared how well an iGPU ran OpenCL (be it 1.2 or 2.0).

--------

AMD was clearly pushing C++ AMP at the time with Microsoft. And IMO, it worked great!! Alas, no one used it, so that died.


Don't blame NVidia for Intel and AMD failures to support OpenCL.


Olympics-tier mental gymnastics, considering Apple treated both Intel and AMD as manufacturing partners at the time.


cough cough

Remind me who owns the OpenCL trademark, again?

Intel and AMD weren't the ones that abandoned it. Speaking in no uncertain terms, there was a sole stakeholder that can be held responsible for letting the project die and preventing the proliferation of Open GPGPU standards. A company that has everything to gain from killing Open standards in the cradle and replacing them with proprietary alternatives. Someone with a well-known grudge against Khronos who's willing to throw an oversized wrench into the plans as long as it hurts their opponents.


Don't blame Apple for what Khronos, Intel and AMD have done with OpenCL after version 1.0.

It isn't Apple's fault that Intel and AMD didn't deliver.


It is entirely Apple's fault that they rejected OpenCL to replace it with a proprietary library. If this was an implementation or specification problem, Apple had every opportunity to shape the project in their own image. They cannot possibly argue that this was done for any other reason than greed, considering they themselves laid the framework for such a project. Without Apple's cooperation, Open Source GPGPU libraries can not reasonably target every client. Apple knows they wield this power, and considering their history it's both illogical and willfully ignorant to assume they're not doing this as part of a broader trend of monopolistic abuse.

Having shut out Nvidia as part of a petty feud, Apple realized they could force any inferior or nonfree CUDA alternative onto their developers no matter how unfinished, slow or bad it is. They turned away from the righteous and immediately obvious path to complicate things for developers that wanted to ship cross-platform apps instead of Mac-only ones.


The fact is that Intel, AMD, and even Google (coming up with RenderScript) didn't give a shit about making OpenCL something developers cared about.


That's not their job. CUDA wasn't "something developers cared about" for 11 fucking years and now look at where we are. If the OEMs focused on doing their job and implementing their standards, then neither of us would be trying to assign blame in the first place.

The worst part is, now that Apple has gone all-in on incomplete and proprietary alternatives, nobody has won. Apple successfully applied their monopoly abuse to a market that they have completely captive. And we want to blame... checks clipboard Intel and AMD, for having renewed interest in a successful decade-old concept.


Would you be willing to share what the deal is with Apple/Khronos relations?


Apple didn't like OpenGL, rightfully so, and came up with their own Metal, which they released two years before the first version of Vulkan.

Now people pretend that Apple is bad because it never adopted Vulkan and never implemented the "good modern OpenGL" (which never really existed).


It runs deeper than that: during the development of WebGPU it came to light that Apple was vetoing the use of any Khronos IP whatsoever, due to a private legal dispute between them. That led to WebGPU having to reinvent the wheel with a brand new shader language, because Apple's lawyers wouldn't sign off on using GLSL or SPIR-V under any circumstances.

The actual details of the dispute never came out, so we don't know if it has been resolved or not.


Apple, refusing to use open standards, and instead demanding everyone else do things their way? Say it’s not so!


The bizarre thing is that Apple did use to cooperate with Khronos; they were involved with OpenGL and even donated the initial version of the OpenCL spec to them. Something dramatic happened behind the scenes at some point.


My absurd pet theory is that this was related to their 2017-2020 dispute with Imagination. Apple started (allegedly) violating Imagination's IP in 2017. They were, at the very least, threatened with a lawsuit, and the threats were compelling enough that they've been paying up since 2020. It could be Apple pulled out of the Khronos IP pool to prepare a lawsuit, or to have better chances of dodging one.


Most likely related to how Khronos managed OpenCL after getting hold of it.


Please, tell us all about how Khronos hurt Apple with free software that Apple had every opportunity to influence. Point to the boo-boo that justifies making things worse for everyone.


My dear, Apple has zero influence on Windows, Linux and Android.

Where are those great OpenCL implementations from Intel, AMD and Google?


I can imagine a scenario: Apple donates OpenCL, then later suggests some changes for the next version. Khronos delays or pushes back, and now OpenCL is stuck from Apple's perspective and they can't do anything about it.


Yep.


I really want them to get it together with OpenCL 3 and especially Vulkan interop, but I'm not really holding out hope for it.


OpenCL 3 is OpenCL 1; no one cares. Intel has made extensions on top for DPC++, AMD is pushing ROCm or whatever else they think of.

Still not showing that they care.


I don't know why anyone would try to care when Apple announced they were pivoting away from OpenCL half a decade ago. The value prop of a cross-platform GPGPU API died the moment that Apple gave up, and OpenGL's treatment reflects what happens once Apple abandons an open standard.


Yes, obviously. It is an incredibly tiresome comment which is brought up every single time Microsoft adopts any sort of open standard, and it's never done with any particular insight into whether this is one of the times that it'll be relevant.


Has it ever not ended up being relevant? Like, I would agree that it is kind of redundant (and thereby maybe doesn't need to be said), but if there are people who actually think "maybe this time will be different", arguably the comment should be pinned to the top of the thread as a reminder?



