Ray Tracing with Metal [video] (developer.apple.com)
139 points by pjmlp on June 16, 2019 | 182 comments

It's a shame that Hacker News posts about graphics API features are so often derailed by discussions about Vulkan adoption. I'd be fine with the discussion if it were in context (a critique of Metal or even Vulkan), but threads about new Metal API features, and even threads about WebGL and WebGPU, are increasingly filled with the same fly-by responses about Apple's lack of support for Vulkan, from people who have never used the API and are not graphics engineers. It would be much more interesting to discuss the merits of the different approaches to raytracing in Metal vs. DXR vs. Vulkan than to read the same tired half-baked concerns about the existence of Metal, although it's a good chance for me to give pjmlp more karma, I guess. No one complains that Apple doesn't implement the entire Android API for iOS; why is it different with Metal?

Edit: The talk isn't even about new API features or raytracing hardware. The only reason it's "with Metal" is that it's a WWDC talk; it could be called "implementing GPU raytracing with compute shaders" and apply to Vulkan, DirectX, OpenGL, or any other modern graphics API. Another good talk along these lines is this one by Matt Swoboda (https://youtu.be/ZrFkKrw1uGI).
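To make the compute-shader framing concrete: the core of that kind of talk is "one GPU thread per pixel, generate a camera ray, intersect it with the scene". Here is a toy CPU-side sketch of that per-pixel kernel (Python standing in for a shading language, with a hypothetical one-sphere scene; not Apple's actual code):

```python
import math

def ray_sphere_t(origin, direction, center, radius):
    """Return the nearest positive hit distance t, or None.
    Solves |o + t*d - c|^2 = r^2 for t (direction assumed normalized)."""
    oc = [o - c for o, c in zip(origin, center)]
    b = sum(d * v for d, v in zip(direction, oc))   # d . (o - c)
    c_term = sum(v * v for v in oc) - radius * radius
    disc = b * b - c_term                           # discriminant / 4
    if disc < 0:
        return None                                 # ray misses the sphere
    t = -b - math.sqrt(disc)                        # nearer of the two roots
    return t if t > 0 else None

def trace(width, height):
    """The loop a GPU compute dispatch would run one thread per pixel."""
    sphere_center, sphere_radius = (0.0, 0.0, -3.0), 1.0
    image = []
    for y in range(height):
        row = []
        for x in range(width):
            # Map the pixel to a ray through a simple pinhole camera at the origin.
            u = (x + 0.5) / width * 2.0 - 1.0
            v = 1.0 - (y + 0.5) / height * 2.0
            length = math.sqrt(u * u + v * v + 1.0)
            direction = (u / length, v / length, -1.0 / length)
            t = ray_sphere_t((0.0, 0.0, 0.0), direction,
                             sphere_center, sphere_radius)
            row.append(1.0 if t is not None else 0.0)  # hit = white, miss = black
        image.append(row)
    return image
```

On a GPU the two `for` loops disappear: each thread gets its `(x, y)` from the dispatch grid and writes one pixel, which is exactly why this maps so cleanly onto compute shaders in Metal, Vulkan, or DirectX alike.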

> why is it different with Metal?

Because the GPU ecosystem has historically been fractured. Vulkan looks like it has the potential to finally end that, but Apple has a walled garden and doesn't seem to be on board.

But I agree, such things aren't all that relevant to the article being discussed here.

Have they? OpenGL was really the only option on most platforms for years. Even Windows has always supported OpenGL, even if driver support from various vendors wasn't always good and official Microsoft support lapsed. Consoles have always used proprietary APIs, and before someone brings up the PS3 and PS4 OpenGL implementations: they are wrappers over low-level APIs, so if they count, then macOS and iOS do support Vulkan via MoltenVK (there is even an official SDK on LunarG). If anything, we've seen more fragmentation in the last 5-6 years due to the brief introduction of Mantle, followed by Metal and Vulkan. Some of this was necessary because OpenGL and DirectX development had stagnated.

> we've seen more fragmentation in the last 5-6 years due to the brief introduction of Mantle

I thought Mantle was what became Vulkan?

> Consoles have always used proprietary APIs

I certainly don't expect console manufacturers to adopt Metal, but at least there's the possibility of (eventual) unification with Vulkan.

> OpenGL ... support from various vendors wasn't always good and official Microsoft support lapsed

And Windows was the main PC gaming platform, so developers used DirectX instead of OpenGL as a result.

On macOS, Metal became a thing; as a result, last time I checked, OpenGL was stuck back at version 4.1 (that's > 5 years behind everyone else).

I don't see Android, iOS, or Linux adopting DirectX. I don't see Windows or Android adopting Metal. If you're hoping for unification, Vulkan seems like quite literally the only possibility. Add to that Vulkan support from a variety of hardware vendors other than AMD and Nvidia as well as officially planned Vulkan-OpenCL interop and I'm actually somewhat optimistic about things.

To add to this, I can see Microsoft adopting Vulkan. I don't understand why Apple has taken this approach. Well, I understand why _Apple_ did, but I feel like most companies in their position wouldn't invent a new API.

Bear in mind that there was a great deal of fragmentation within OpenGL as well, due to tons of proprietary OpenGL extensions to support new features of video cards anyway.

Just because we had "OpenGL" and "DirectX" doesn't mean that there was only one thing to consider when doing engine development.

Strongly agree with this. That is a big part of the reason I don't think a unified API on all platforms helps the situation much. Hardware across different vendors and device categories (mobile vs. console vs. PC) varies wildly. I've never worked on a game targeting console where "PC DirectX" and "Xbox DirectX" weren't different targets, and usually AMD and Nvidia had slightly different code paths in some places for extension or profiling reasons. Add the fact that just having a portable graphics API doesn't magically make your game portable to all platforms, and that most games are made with portable engines that already run in some form on all major consumer hardware, and there seems to be less of a practical reason for Vulkan everywhere.

Still, slightly different code paths is better than totally different API, no?

The web doesn’t automatically give you cross-browser pages. You have to fix compatibility bugs and sometimes do things two ways. But it’s way easier than having separate WinForms, Cocoa, and Qt UIs.

To discuss 3D APIs on their intrinsic value, instead of FOSS religion about them, we are better off visiting Gamasutra, Flipcode (sadly long dead), Gamedev, the IGDA forums, IGF, Game Connection, Making Games, Develop, among many others in the game development scene.

I learned the hard way, a long time ago, how completely different the values of the two communities are.

Programmers have a tendency to over-generalize stuff.

The idea that we need one true programming language, or one true GPU API, or one true text encoding, and should use it everywhere, looks good on the surface. It’s when one digs deep enough into technical details that it becomes clear why people pick less portable technologies. It’s always about tradeoffs.

For GPU API, I think #1 reason is vendors support.

On PCs, Windows is _the_ target platform for the majority of game developers. The only GPU API with first-party support is DirectX. GPU vendors do a decent job supporting GL and VK there (except maybe Intel), but first-party support is still better. Major parts of D3D work in the kernel; dxgkrnl.sys does quite a lot of things.

On Android and many embedded Linuxes, it’s similar with GLES. Vulkan support is improving fast, and I hope in a few years VK will replace GLES there, but so far there are many older devices without it.

One application that has impressed me with its Metal implementation, even on low-end HW (Intel HD 620), is the iTerm2 [1] terminal emulator.

IMO, it is a perfect example of how ordinary screen drawing, if done with a HW-accelerated API, can make a huge difference. This is especially true with low-end hardware: e.g. if the Raspberry Pi (Broadcom/VideoCore) had shipped with OpenGL support enabled by default from the start, the way Apple has done with Metal, it could have offered a decent desktop experience, which is not possible with the current SW rendering on a low-power CPU.

GPU rendering talk tends to get lost in discussions about high-end graphics cards or GPGPU computation, but I think Metal has brought the focus back to general screen drawing, and judging from how developers like George Nachman (iTerm2) have been able to take advantage of it, Apple seems to have done well with it.


If you've ever played with Wayland on a raspberry pi, it's an amazing experience. It doesn't feel slow anymore. Now, apps still run slowly if they are doing anything particularly compute intensive (chromium), but it's very "responsive".

Yes, this has been my experience. Please suggest anything I may have missed which could provide a better experience.

My configuration to minimize variables which diminish Pi desktop experience - USB SSD boot, Zswap, Btrfs on Arch Linux 64bit (for better support for the latter 2).

1. Enabling the VC4 driver at boot enables OpenGL support on the Pi.

2. For the Wayland compositor, I used Enlightenment (Gnome/GDM didn't launch). I enabled HW rendering via OpenGL for all default rendering. The basic windowing experience is fast, as expected from HW rendering. A few default applications like the terminal (Terminology), if used carefully (not resizing windows, not dragging, etc.), work. But other applications that use OpenGL, like the screenshot tool, or applications that use XWayland, such as Chromium and Firefox, failed to launch. So my conclusion is that this setup works only for basic windowing, i.e. file transfers and careful use of the console.

Perhaps Wayland via SW rendering (likely the one mentioned by you) could provide a crash free experience, but that defeats the purpose of getting general screen drawing HW accelerated (to feel responsive)?

Even without Metal, all drawing on macOS/iOS is "hardware-accelerated", especially if based on CALayers. Performance comes from focusing on performance no matter what layer you use.


Thank you.

What's the story with Apple still refusing to support Vulkan natively?

They don't need to. Also Metal isn't really 1:1 comparable to Vulkan in terms of abstraction level.

Same way MS didn't need to support common HTML/JavaScript since they had ActiveX?

If Apple had acted properly, instead of with their usual "eat our lock-in" mentality, they could have provided Vulkan support and then built higher Metal-style abstractions on top of it. But no, it's Apple for you. Eat Apple-only Metal or get lost.

Apple has all the right to not think about Vulkan and invest in their own implementation.

- First release of Metal is June 2014. The Khronos Group began a project to create a next generation graphics API in July 2014. First release of Vulkan is 2016.

- Metal is C++. Vulkan is C. Yes, that may matter

- Metal is specifically optimised for a specific set of hardware with known capabilities. Vulkan aims at everything under the Sun

- Vulkan is headed by a consortium that wasn't known for its great decisions in the past, and Apple might want a faster turnaround on features and capabilities

- It's just business

> Metal is C++. Vulkan is C. Yes, that may matter

The Metal API is Obj-C or Swift, only the shading language is a C++ variant.

A C-API (not C++) would actually be nice. While ARC is quite convenient it also adds a significant overhead. Most of this overhead can be prevented with quite a lot of handholding, but this also reduces the benefits of high-level languages like Objective-C or Swift, I'd even argue that this handholding is more hassle than having a more explicit C-API to begin with.

All of that is not a justification for their sick lock-in.

- It was explained above that Apple knew well all along that there was an effort to make a common low-level API, well before they started Metal. So they purposefully refused to participate.

- C was a proper choice for Vulkan (short of choosing, say, Rust), since it makes it easier to provide bindings from any language than C++ would; making bindings to C++ is far more painful.

- Optimization for hardware happens on low level compilation, so it's a lame excuse not to support Vulkan.

- Faster or not, Apple could observe the result (which was good) once it was ready, and support it. They refused (because Apple).

- Business or not, lock-in is a nasty attitude aimed at taxing developers. There is no need for development-tools lock-in; it's a sick anti-competitive methodology, for which Apple is quite infamous.

When Metal was initially developed and even when it was released, Vulkan wasn't even a proposed project yet. The only comparable thing was Mantle from AMD (which later evolved into Vulkan). At the time NVidia were still saying use OpenGL with AZDO techniques and on mobile, you were stuck with OpenGL ES, with a multitude of issues.

Optimization of existing code happens at a lower level but if you can gear an API for optimal use of your hardware and existing APIs, you make it easier for developers to write code that will have a better shot at being optimized. Not all code optimizes the same way.

Also what you view as taxing developers, can also be flipped around as being able to make an API that is a lot easier for the developers on their platform.

And indeed, Metal is probably the easiest of the modern graphics APIs to get started with, with a very light learning curve even compared to OpenGL, all while not sacrificing performance and expressiveness for more involved development. Vulkan meanwhile remains very inaccessible to many developers, even those with graphics backgrounds.

Let's not also forget that Apple isn't an outlier for lockin, and actually is quite open. OpenCL, Clang, WebGPU, Swift etc. are all very open and cross-platform. Meanwhile even things like DirectX are locked to Windows.

The real goal isn't lockin. It's providing the best API to developers, to let them get in quickly while also optimizing for their hardware. Lockin is an unfortunate side effect, but they're also completely successful in meeting their actual goals.

> The real goal isn't lockin.

That's an excuse, since the result is lock-in. If Apple wants Metal so much, let them have it. But how does that prevent them from supporting Vulkan, for those who don't want to waste resources duplicating work and want to reuse their existing Vulkan codebase on Apple systems? But no, Apple forces you to use Metal. So the claim that it's for developers' benefit won't fly. It's for Apple's benefit, very clearly.

WebGPU is also a counter example, since Apple there try to prevent adoption of common formats like SPIR-V.

When it comes to sabotaging interoperability, Apple are always among the first.

Apple does all their driver and API development in house. Supporting Vulkan would mean a lot more effort on their part. Unlike windows and Linux where support is handled by the GPU vendor instead.

Why support Vulkan, which is arguably harder to use and takes more resources of their internal development teams, when they have an API that provides all the benefits and is arguably a better API? Especially when the major off the shelf engines already support metal. It's diminishing returns at this point.

Personally I'd be quite happy to have Vulkan support on mac, but if you take a step back from your inalienable position that it's a conspiracy, there's actually very sound reasoning behind it.

You may not agree with the decision, but you weaken your argument when you're trying to force everything to fit a black and white narrative. The reality is much more nuanced.

> Apple does all their driver and API development in house. Supporting Vulkan would mean a lot more effort on their part. Unlike windows and Linux where support is handled by the GPU vendor instead.

How does this jibe with your position that Apple isn't performing lock-in, particularly when Apple is now specifically refusing to approve previously-available Nvidia drivers and thereby preventing them from being used in external GPUs for Macs?

The details of that aren't public other than both companies want control of the driver stack and neither is willing to budge. The driver breakage happened between major operating system versions that had significant graphics systems overhauls so would not be compatible as is.

I don't classify that as lock in. It's an unfortunate side effect of business dealings between two immovable companies.

However, Apple does work with AMD and Intel GPUs, and still allows AMD external GPUs.

> Apple does all their driver and API development in house

Not OP but the games and film industry believe this here is the problem. The additional effort to support common APIs is an artificial self-imposed problem resulting in worse software.

I'll definitely agree here that it's got its share of pros and cons, and having worked in both realtime and film work, it is difficult.

But on the flip side, it does fit in with Apple's ability to develop very tailored platforms and make very easy-to-use APIs.

So while I often find myself on the wrong side of the double-edged sword, I do also feel their decisions make sense. And honestly, having done DX9-12, OpenGL, and Vulkan, I won't argue against how easy Metal is to use.

I'm sort of mixed on how easy Metal is. Having to support Objective-C and/or Swift as part of the build toolchain is sort of a headache and Metal actually isn't low-level enough for me to support a lot of the optimizations I can do in DX12/Vulkan and especially doesn't come close to console APIs (which you'd think should be possible given that they can specialize for their hardware). My feeling is that for driver APIs, it's ok for the API to be lower level since users (and other developers) generally won't need to deal with that anyways.

Given what was shown at WWDC regarding software support and hardware for live 8K editing, I don't think the film industry is that worried.

As for games, the sales speak for themselves.

> Supporting Vulkan would mean a lot more effort on their part. Unlike windows and Linux where support is handled by the GPU vendor instead.

First of all, who exactly prevents GPU vendors from doing it for Apple? Apple. Secondly, with Apple's resources I don't buy the excuse of "it's too hard" or "we can't spend on it". Apple very happily spends piles of money when it helps them hamper interoperability. So being cheap here is just another lame excuse for lock-in.

It's simple policy. Apple do their own development and don't use third-party companies for it. This is for a variety of reasons, including providing a tailored platform to their specifications and meeting the level of secrecy required internally.

Whether you agree or not with the policy, it's not one motivated by lock in.

And again it's not a matter of being cheap. It's a matter of perspective. Apple believe metal is worth using and that it delivers needed performance, optimization and features for their stack. Apple also believe in very straightforward developer stories within their own platform. Vulkan doesn't provide value above and beyond what metal does from apples perspective, and it complicates their developer story.

> It's a matter of perspective.

That's my point too - their perspective is lock-in. Not just in graphics APIs but in many things. So this issue falls into the common picture with many other similar ones.

I.e. Apple could easily implement it. They don't for political reasons, and I don't buy any lame excuses like "that's better for developers" and such.

No, your perspective is lock-in. You're trying to fit all their rationale into one convenient box and ignoring the nuance of the situation.

You seem certain that preventing people from hastily porting Android games was NOT a motivation for Apple.

But they have explicitly said in the past: they really prefer we not put cross platform apps on the App Store.

I don’t know why you won’t consider the odds that Apple is trying to hamper cross platform apps with Metal deliberately as a policy when they’ve specifically said that’s their strategic goal.

Theorizing about non malicious motives doesn't change the facts - the result is lock-in. And when it consistently happens all the time for supposedly "other reasons", excuses about it not being done for the sake of lock-in stop being even a tiny bit convincing. To put it differently - who cares about motives at this point, when the result is a nasty lock-in which Apple refuses to remove.

Google is the company currently driving WebGPU efforts and has shown the fruits of their labour at Google IO.

To the point that what Intel has contributed to WebGL 2.0 compute shaders implementation on Chrome might be thrown away, as Google now considers it needless in presence of WebGPU.

Which, by the way, is based on HLSL. As Khronos is also finding out in its Vulkan PR efforts, HLSL is the favourite shading language among professional game devs.

Which is why Samsung and Google are the two biggest contributors to the HLSL-to-SPIR-V compiler.

Shader language shouldn't matter as long as it's compiled to standard SPIR-V. There is no point to limit it to one language.

It matters if the available tooling isn't practical.

What do you mean tooling? Like Unity?

IDE like support for shader development, something that all proprietary APIs support out of the box, and both OpenGL and now Vulkan have always been lacking.

Usually what gets put out by OEMs or FOSS devs looks more like sample apps, is OEM-specific (thus making the API not as portable as advertised), or falls short.

For example you can pinpoint specific pixels and trace back to which code lines from each shader were involved in rendering the pixel. Check the Metal debugging talk at WWDC 2019.

I would honestly have been happy if we just had OpenGL compute shaders available on all desktop operating systems. That was standardized in 2012 by OpenGL 4.3, two years before Metal.

> Let's not also forget that Apple isn't an outlier for lockin, and actually is quite open. OpenCL, Clang, WebGPU, Swift etc. are all very open and cross-platform.

OpenCL is deprecated on macOS, and hasn't been updated past OpenCL 1.2 (2011). It was killed in favour of Metal.

I hope that WebGPU is a good cross-platform API for GPU compute when the standard is finalized and implemented, but I've learned not to count my chickens before they hatch.

"Lock-in" means that there are alternatives that you could use, but the vendor does not let you. When Metal appeared, there were no alternatives, Vulkan wasn't a thing for 2 years after Metal, and even today, Vulkan has seen relatively low adoption, so one could argue that it still isn't really a thing.

Metal might be many things, but lock-in isn't one of them. If you are happy with Vulkan, then good for you; without Metal, Vulkan wouldn't even exist. It took Apple telling Khronos to GTFO for them to start working on Vulkan.

> without Metal, Vulkan wouldn't even exist.

That's completely false. Apple only started the whole Metal effort because of Mantle, which was the origins of Vulkan. But being Apple, instead of collaborating and ending up in proposing things for Vulkan, they pushed for NIH lock-in.

> Lock-in" means that there are alternatives that you could use, but the vendor does not let you.

Which is exactly the case here. Take some project like Wine for example, that works on implementing Direct3D over Vulkan. That's the alternative they want to use, yet Apple prevents them, forcing them to either waste resources on implementing another path using Metal, or to go suboptimal route of extra translation. Who is to be blamed for it, but Apple?

> Metal might be many things, but lock-in isn't one of them.

It wouldn't be, if Vulkan were available as an alternative. But as you said yourself, lock-in happens when the vendor does not let you use the alternatives. Who but Apple prevents native Vulkan support from appearing on their systems? So Metal is very clearly lock-in, according to your own statement.

Your understanding of the history of Mantle, Metal, and Vulkan is very poor.

1. Mantle was not originally being proposed as an "open" API, as you are implying. A reading of the original white paper (https://www.amd.com/Documents/Mantle_White_Paper.pdf) makes apparent that AMD was trying to leverage their game-developer mind-share from the consoles by bringing it to desktops. Mantle was being proposed as an AMD-GPU-only API.

2. Metal was released 10 months after Mantle was announced (and 3 months after API docs surfaced). That's an extremely short timeline for a big company to go from zero to deciding to release a new graphics API, spec that API and its shading language, write drivers and compilers for it, write developer tools, and author documentation. The timeline you're proposing simply doesn't make sense. Apple was clearly working on Metal long beforehand.

3. Vulkan, as an initiative, was reactionary to Apple's release of Metal, given that, up to that point, iOS and macOS were the largest OpenGL[ES] markets and Apple the only company commercially invested in OpenGL (OK, maybe not the only one, but clearly Apple was the most influential Khronos member). By the summer of 2014, it was clear that Mantle, as an AMD-only API, was not commercially viable. Thus, it was "donated" (so to speak) to Microsoft and Khronos to bootstrap DX12 and Vulkan development. (My understanding is that DX12 was already under development, but it's clear that the API shifted drastically in response to Mantle, as evidenced by the DX12 version 1.0 docs containing passages taken verbatim from the Mantle programming guide.)

Please stop spreading false narratives in order to bolster your agenda.

It wasn't open from the start, but AMD communicated their intent to open it. No point pretending Apple didn't know about that. It took a while, but AMD did just that, giving Mantle over to Khronos as the base of Vulkan.

You know that Apple did not start working on Metal the day they announced it, right ?

> Apple only started the whole Metal effort because of Mantle


They responded to it. AMD were first to come up with such design and published it for everyone in the form of Mantle. AMD also clearly communicated their idea to make it open and available for all.

Apple used the idea, but instead of eventually joining Vulkan group, pushed their NIH in their typical fashion.

> without Metal, Vulkan wouldn't even exist. It took Apple telling Khronos to GTFO for them to start working on Vulkan.

This is not true.

What’s the story with Microsoft support Vulkan?

Oh that’s right they only support direct3d.

Because tying an OS cycle to a standards body causes significant problems in terms of feature lag, validation, etc

Is MS being a lock-in jerk an excuse for Apple being one? This thread is about Apple, but if you want to bring MS into it: at least they don't force you to use DX on Windows, i.e. you can use Vulkan there. They do on Xbox, though. Which is a similar problem.

Vulkan only works on the classical Win32 subsystem.

Win32/UWP store sandbox does not support OpenGL ICDs, which is the mechanism used by Vulkan drivers on Windows.

UWP is dead; luckily MS understood they went too far with it and dropped that lock-in nonsense.

UWP is pretty much alive, that narrative keeps being spread by people that don't have any clue about Windows programming.

The only thing from UWP that is dead is a UWP-only store.

The store, now as a mixed Win32/UWP sandbox, and the ongoing replacement of legacy Win32 APIs by UWP ones, is pretty much alive.

In fact React Native for Windows is being rewritten to use UWP APIs, using WinUI 3.0, which is also the official MFC replacement for C++ devs.

Alive as any MS dead end. Of course they won't just drop it, they need to support existing stuff that's using it. But MS buried their plans to make it a mandatory requirement. That's the end of it in essence. Which is good, we don't need this lock-in forced on developers.

Alive, as all new Windows APIs since Vista are based on COM, and UWP is just a continuation of that process, being what .NET should have been all along (a COM Runtime), which is something that the anti-UWP crowd fails to grasp.

Win32 is the Carbon of the Windows world, stuck in a Windows XP concept.

If you want to write Windows applications as if targeting Windows XP, then jump for joy. There are still Apple apps being written in Carbon, and Linux is stuck in the UNIX ways of yore anyway.

Luddites also need to work anyway.

I know you are a big fan of lock-in, but you are lying to yourself if you think it's good for developers. It never is. And those who support such an approach are hurting others.

You crossed into incivility and personal attack in this thread. Please don't. It's not what this site is for.


Frameworks, graphical debuggers, and first-class IDE support are good for developers.

Working with caveman tooling just in the name of some greater good, not so much.

Can you please eliminate the gratuitous provocations? They are quite unnecessary and take discussions to shallower, angrier places.


Sorry dang, I lost track of the thread. I will refrain from ever replying to him again.

Lock-in isn't, which is self-explanatory, and that leads me to think you are purposely wasting time in this thread pretending that you don't understand it. It's not the first time you have defended lock-in.

That lock-in is called a business trade-off, which the large majority of the market prefers to compromise on.

Maybe you should reflect on why no one cares about such Quixotic endeavours, which remind me of soapbox speeches, while the gaming industry just carries on business as usual.

It's not a trade off. It's an anti-competitive tactic. Always puzzling when developers defend this garbage. Shills normally do it.

Do you think I feel anything to be called any kind of names?

I just like to explain how the gaming industry works to those without any real experience in it.

It was my blind OpenGL and Linux zealotry that cost me a few interviews at well-known AAA studios, back when I still cared about making my mark in the industry.

Still, those days left me quite a few friends that I still keep in touch with, and I got to learn what the industry actually cares about.

Producing good IP to sell, using the best tools available, with support from platform owners, and ensuring good deals with publishers, eventually even movies and board games from titles.

Everything else is just windmills.

Now, whether anyone learns from my comments or decides to carry on advocating practices that the industry doesn't care one second about is up to them to decide.

You quite consistently defend lock-in at every opportunity you have, so I don't believe you are doing it for objective reasons. Quite naturally, it's usually those who benefit from such corrupt methods who defend them. So it's a shill position, whether you like it or not.

Some people never learn.

They don’t need to do anything. It would be a more attractive platform to develop for with a modern cross platform graphics api.

I don't know how closely you're following this, but MoltenVK[0], which was formerly a proprietary option for Vulkan on Metal, is open-sourced now.

0: https://github.com/KhronosGroup/MoltenVK/blob/master/README....

Sure, it's a workaround for Apple's dumb refusal to support it natively. It's great, of course, to break the lock-in, but it only highlights the problem. Translation is always suboptimal compared to the native option.

Apple, please add a time multiplier to your video player. There's no need to watch a demonstration like this at 1x time.

Apple is using just a bog-standard <video> tag, so this should be a general plea to browser vendors (including Apple/Safari) to add that functionality.

I vaguely recall the video element already having this functionality - I remember an engineer working on it making the playback rate be negative. The performance was not spectacular :)

I'm not surprised and I don't think it's the fault of browsers or the implementation. Delivery codecs have a lot of optimizations around playing forward. There are a whole other class of video codecs designed around the needs of video production; scrubbing, frame-by-frame, playing backwards, etc.

Frame-by-frame and slow-mo should be fundamentally easier than scrubbing/backwards. Nothing about a forward oriented codec would make frame advance or slow-mo harder. You still get a series of image buffers eventually.

When going frame-by-frame you tend to go back and forth. You can preload and cache, but that leans a lot more on your player and hardware. Yes, you do get every frame but some frames may be very "muddy" because of compression. I'm not saying it's impossible, it's just not an optimized use-case that's trivial with other codecs.

Seeking is another example. It's a hard requirement for video editing, and desirable for users but often compromised for more preferable things like bandwidth and file size. For example, if the data-rate is variable, you don't really know where 3minutes 20seconds into the file is without hunting around (which is trivial if the data-rate is constant). MP3 supports a 1-100% lookup table at the front of the file, but this isn't very granular for long files. Even if you can find the spot in the file, you'll probably snap to an i-frame unless you decode the whole segment.
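The constant-data-rate case really is just arithmetic, and MP3-style lookup tables can be approximated by interpolating between coarse entries. A toy sketch of both (hypothetical helpers, not any real container format):

```python
def cbr_seek_offset(seconds, bitrate_bps, header_bytes=0):
    """Byte offset of a timestamp in a constant-bitrate stream:
    offset = header + t * bitrate / 8."""
    return header_bytes + int(seconds * bitrate_bps / 8)

def vbr_seek_offset(seconds, duration, toc, file_bytes):
    """Approximate a seek with an MP3-style table of contents.
    toc[i] is the fraction of the file's bytes consumed when
    i/(len(toc)-1) of the duration has played; we linearly
    interpolate between the two nearest entries."""
    frac = max(0.0, min(seconds / duration, 1.0))
    pos = frac * (len(toc) - 1)
    i = int(pos)
    j = min(i + 1, len(toc) - 1)
    f = toc[i] + (toc[j] - toc[i]) * (pos - i)
    return int(f * file_bytes)

# 3 minutes 20 seconds into a constant 1 Mbit/s stream:
offset = cbr_seek_offset(3 * 60 + 20, 1_000_000)
```

Note how coarse the VBR version is: with only ~100 table entries, a long file can land the seek far from the requested timestamp, and the player still has to snap to the next decodable i-frame from there, which is exactly the granularity complaint above.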

I'm sure these things will get better as codecs mature and constraints on size and processing power ease up.

Oh yeah, it absolutely makes sense - scrubbing backwards through i-frames (or is it b-frames? I legit can't remember and codecs were never my thing :) ) - it was just funny seeing the massive power usage difference when playing backwards.

Doesn't right click > play speed work?

Not on Chrome on Windows - could be it falls back to a different player that allows that on other platforms/browsers.

No, standard <video> tag everywhere. Apparently Chrome doesn't have that feature.

document.getElementById("video").playbackRate = 1.5
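For what it's worth, playbackRate is part of the standard HTMLMediaElement interface, so a small wrapper works in any browser; the clamping range below is my own guess at sensible bounds, not anything from the spec:

```javascript
// Clamp the requested rate before assigning it; works on any object
// with a playbackRate property, e.g. document.querySelector("video").
function setSpeed(video, rate) {
  video.playbackRate = Math.min(4, Math.max(0.25, rate));
  return video.playbackRate;
}
```

e.g. setSpeed(document.querySelector("video"), 1.5) from the console.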

It's very disappointing that Apple hasn't adopted Vulkan. It's not like they are completely against standards either. I believe they even joined the FIDO Alliance recently.

With Apple adopting Vulkan, it may actually stand a chance against DirectX and the majority of game developers might start developing for Vulkan first. And Apple would benefit greatly from that, just like every other Vulkan supporter would.

Ignoring both technical merit and politics, both of which I’m not qualified to judge on, one reason they didn’t adopt it may be that it didn’t exist when they switched away from OpenGL.


”Metal has been available since June 2, 2014 on iOS devices powered by Apple A7 or later”


”The Khronos Group began a project to create a next generation graphics API in July 2014”

Of course, Apple could have changed direction later on, but they must have had a large investment in Metal by the time it became clear Vulkan would become a long term thing.

Johan Andersson (then of DICE) was a presenter at Metal's unveiling. Andersson was one of the prime movers for Vulkan and Mantle before it, dating to 2013. Apple was very aware of Mantle/Vulkan while designing Metal, just as Apple was very familiar with CUDA when it released its own ill-fated OpenCL alternative.

”its own ill-fated OpenCL alternative.”


”OpenCL is an open standard maintained by the non-profit technology consortium Khronos Group.”

And yes, it started life that way. https://en.wikipedia.org/wiki/OpenCL#History:

”OpenCL was initially developed by Apple Inc., which holds trademark rights, and refined into an initial proposal in collaboration with technical teams at AMD, IBM, Qualcomm, Intel, and Nvidia. Apple submitted this initial proposal to the Khronos Group. On June 16, 2008, the Khronos Compute Working Group was formed with representatives from CPU, GPU, embedded-processor, and software companies. This group worked for five months to finish the technical details of the specification for OpenCL 1.0 by November 18, 2008. This technical specification was reviewed by the Khronos members and approved for public release on December 8, 2008.”

Apple first shipped OpenCL in August, 2009.

So, OpenCL may be ill-fated, but I don’t see why Apple would have to have gone with CUDA, which is Nvidia-exclusive, both in actual hardware, and, AFAIK, license-wise.

That's not really a valid reason: not only did Apple know all along where things were heading (from the Mantle days, toward a common open low-level API), but nothing stopped them from supporting Vulkan once it became official. With Apple's resources, it's peanuts for them. So it looks like a completely political, nasty lock-in stance.

Why would Apple care about how DirectX is doing? They support 'Standards' and 'Open Source' to the extent that it benefits them and that's it.

Vulkan is not designed to be used by game developers directly but rather as a basis for game engines like Unreal or Unity. So I don't see how game developers would benefit from its widespread support, as the majority of them will never have to deal with it directly.

That said Vulkan is an unmitigated disaster, though less so than classic OpenGL. Any API which fiddles with void* in 2019 should be put right back in a box. Security is mentioned exactly twice in the entire spec. Type-safety is mentioned twice also. And that is in a spec which is 900 pages long and incredibly complex.

> Vulkan is not designed to be used by game developers directly but rather as a basis for game engines like Unreal or Unity. So I don't see how game developers would benefit from its widespread support, as the majority of them will never have to deal with it directly.

Vulkan has significant benefits even if you're not writing Unreal or Unity. One of the biggest is that it removes a lot of the heuristics and guesswork that OpenGL drivers do (e.g. with regards to when to move things in and out of GPU memory) that cause a lot of the bugs and incompatibilities on different IHVs/cards/platforms. Its validation layers also leave the halfhearted, vendor specific debugging extensions in the dust.

> That said Vulkan is an unmitigated disaster, though less so than classic OpenGL. Any API which fiddles with void* in 2019 should be put right back in a box. Security is mentioned exactly twice in the entire spec. Type-safety is mentioned twice also. And that is in a spec which is 900 pages long and incredibly complex.

Metal overtly has statically checked memory safety via ARC, but the reality is that the IHV libraries will segfault in response to semantic mistakes, just like Vulkan. A big difference is that Vulkan has cross-IHV validation layers that catch most of these mistakes during development, and they're only getting more comprehensive. It's totally trivial to write a wrapper over Vulkan that fixes the type/memory safety of the C interface; it's not trivial to create a validation system of anywhere near the quality of Vulkan's for Metal.

From what I've seen so far, Metal's own validation layer also does very thorough checks, to a point where it almost replaces the documentation (as far as the quality of the validation error messages goes).

> That said Vulkan is an unmitigated disaster, though less so than classic OpenGL. Any API which fiddles with void* in 2019 should be put right back in a box. Security is mentioned exactly twice in the entire spec. Type-safety is mentioned twice also. And that is in a spec which is 900 pages long and incredibly complex.

OK are you a graphics professional? Have you written renderers in Vulkan? Do you have any idea of what you're talking about? What "void*" pointers are you referring to? Are you aware that type-safety is very much a design goal of the API? Are you aware that the spec ships with a formal memory model which is a godsend for graphics engineers that had to deal with implicit guarantees in other APIs for years? I'm not someone that authored the original spec, but reading your comment is somewhat rage-inducing given how off-base it is.

FWIW there have been one or two (apparent) Apple employees on here who seemed sincerely convinced that Vulkan was a mess compared to Metal. I have no idea how true that is or how much difference it actually makes, but there it is.

Metal, nice OOP based tooling with several frameworks to support handling textures, fonts, materials, GPU live shader debugging, code completion on Xcode with static analysis, C++14 based shaders.

Vulkan, plain old C style API, full of extensions just like OpenGL, leading to several code paths, just the bare bones pixel drawing, go fish for random libraries for anything else, try your luck with RenderDoc or GPU specific debuggers, enjoy Notepad++ + UNIX shell style tooling support.

I believe this is a deliberate misrepresentation of the current state of affairs on Vulkan. Likely for the purposes of spreading disinformation about a specification that legions of smart, dedicated, hardworking engineers came up with after countless hours of debate and consensus.

I mean be real...

there is no way that any programmer who actually used Vulkan would find it as pleasant to work with as you imply in your comment.

Here is the list of Vulkan extensions with device coverage.


Clicking around a bit gives the Vulkan support version per device, which on Android is a mess, unless one only cares about flagship devices.


Now regarding the tooling, since I am spreading disinformation, can you please provide us with the SDK tooling that matches the offerings from Apple, Sony, Nintendo and Microsoft, regarding their own APIs?

Because what LunarG offers is pretty bare bones, and the OEM SDKs pretty much just provide the basic stuff as well, with C samples.

I guess that someone who would rather go hunting for glm, use stb for textures, write their own font rendering engine, compile shaders from the command line, and only knows what RenderDoc is capable of... will feel very uncomfortable with the available set of Metal tooling and frameworks.

And this is one of the best tutorials available for Vulkan beginners, https://vulkan-tutorial.com

Most of the extensions are for what are currently pre-production or fringe features, and with Vulkan 1.1 the WG showed they're not afraid of pulling important (and even not-so-important) extensions into minor version releases much faster than OpenGL put things into core (I was glad, but honestly surprised, that cross-process resource sharing made the cut).

You're definitely right about significant parts of the toolchain being weaker. In particular sorting out how glslang -> Vulkan works in practice is pretty trial-and-error, and if you want to use HLSL it's even more poorly documented. You have to expect to be going through the Vulkan spec (which I will say is much easier than directly using the OpenGL spec). However the Vulkan validation layers are incredible, and leave the equivalents for Metal, DirectX, and especially OpenGL in the dust.

I think you missed the joke.

It was a joke. Read the comment to the end.

I think the supposition that pjmlp is knowingly mischaracterizing Vulkan is both likely to be false and excessively uncharitable (and I say that as a big fan of Vulkan).

I think it's just a matter of not having had much experience with it. It is natural to look at the shape of the API and have PTSD about wrestling with OpenGL for the past few decades.

I'm pretty sure the comment you're responding to is just a switcharoo-format joke. Unless the last sentence is missing an "un" before "pleasant".

Yeah I assumed that was the omission but that makes a lot more sense lol

My understanding is that Vulkan is specifically supposed to be as low level as possible in order to facilitate device manufacturers creating new hardware and to give developers fine grained control over things if they want it.

I believe the stated strategy is to let third party middleware handle the higher level abstractions so that different approaches that appeal to different people can coexist. This also theoretically reduces driver complexity and thus (hopefully) bugs.

I have absolutely no idea if this is actually a good idea, or if it's working out in practice. In case anyone wants to poke around the ecosystem: (https://github.com/vinjn/awesome-vulkan).

Okay, but none of that was true in 2014. If Apple had been building an ecosystem of Vulkan tooling instead of an ecosystem of Metal tooling, presumably Vulkan would have the frameworks, the Xcode support, and (maybe) some of the debuggability (since Apple could have their drivers' Vulkan shader pipeline target LLVM IR, and then do LLVM-internal stuff from there to enable debuggability.)

So what made Metal a better technical choice than Vulkan, back when the options were "Metal without Apple's support built yet" and "Vulkan without Apple's support built yet"?

Being almost two years older than Vulkan.

>seemed sincerely convinced that Vulkan was a mess compared to Metal

Oh god I hope not. I come from the API mess that was OpenGL over 3 major versions, and I was hoping all the lessons learned from that were applied to Vulkan.

I think the lessons learned are being applied, but those are the lessons on how to maximize fragmentation.

Graphics APIs are not good when designed by committee.

Although not an ideal solution, MoltenVK [1] does a great job at bridging that gap

[1] https://github.com/KhronosGroup/MoltenVK

Yes, I find it difficult to imagine developers who are enthusiastic about "metal"; I mean, who wants another platform-specific API for graphics cards? If Apple really thinks we need something other than "vulkan", then they should push to make "metal" a competing open standard. But I doubt they think that, and it's quite easy to imagine ulterior motives; M$ had exactly the same ones.

I'm quite enthusiastic about Metal, and I'm not even much of an Apple fan. Compared to Vulkan or even OpenGL, Metal's API design is much more elegant and developer-friendly. It can be used just fine without a "sanity layer", which can't be said about Vulkan.

All of us who would rather use productive SDKs, with higher-level languages, instead of a C API that only knows about drawing pixels.

The same devs who would rather use DirectX, libGCM, libGNM, GX, GX2, NVN.

I am talking about principles of openness vs. lock-in; I am not personally interested in technical merits until I can depend on it existing beyond Apple's back garden.

My point was I am surprised others do not care about this aspect, you are obviously one example - So my question to you (as someone who values it technically) is: does this not bother you?

APIs win on technical merits and market tooling, not on a feel-good kind of thing.

Not at all. As someone who learned to program graphics when the demoscene was all the rage, it is all about taking advantage of the hardware.

Professional game studios are more than used to having a thin layer to abstract each 3D API, which is a very small piece of any game engine/middleware codebase.

> APIs win on technical merits and market tooling, not on feels good kind of thing.

Ok, but the technical merits of metal stop at Apple. You can abstract away differences in your translation layer if you care about your game existing outside that ecosystem, but you have to leave any merits of metal behind. Doesn't this dull your enthusiasm for improvements and features specific to metal?

I get that in the original demoscene you were using unique hardware, and unportable demos that squeezed everything they could out of one specific piece of hardware felt natural, but Apple computers use the same GPUs everyone else is using... doesn't that make a proprietary low-level graphics API seem more artificial?

Not every developer feels like they have to support every computer in existence.

Many game studios are quite happy being Apple experts, just like many other game studios release exclusives or sell themselves as experts on a given platform, including selling consulting services.

For those, vendor tooling is very much appreciated.

The ones that care about portable code also care about having the easiest way to start coding on each platform.

If Khronos cares about adoption, they should improve LunarG SDK to be more than just a set of bare bones libraries.

So in short, to answer my own question: a developer can have enthusiasm for metal if metal covers all of their target platforms (i.e. Apple's).

Their most important platform uses in-house GPUs these days.

> If Apple really think we need something other than "vulkan" then they should push to make "metal" a competing open standard

Metal predates Vulkan by years. They were already using Metal in shipping products before Vulkan existed.

Why does that mean they shouldn't make it an open standard?

"Metal" is a vertically-integrated software stack that goes up into Objective-C runtime land. An open standard based upon it would require that there be an open-standard Objective-C runtime and toolchain to use in working with it.

Unlike Vulkan, the lowest-level, C-ABI, non-type-safe parts of Metal aren't exposed as public APIs, AFAIK. You could maybe standardize those, but Apple presumably wants to reserve the right to change them while just keeping the higher-level ObjC ABI stable.

> Unlike Vulkan, the lowest-level, C-ABI, non-type-safe parts of Metal aren't exposed as public APIs, AFAIK.

They don't even really exist. Most Metal APIs are defined as protocols (id<MTL...>) for a reason; if you look at the objects you get back, they're directly implemented by the GPU driver.

cos randomers don't get to dictate the scope of the project after-the-fact

If "randomers" are developers who are repelled by the proprietary nature of their graphics API "randomers" may well dictate the scope of the project.

As someone pointed out, it probably has to do with controlling the stack all the way from hardware to userspace. Increasingly, Apple's UIs run on top of Metal, and I get the impression they want complete control, not just "good enough". Which makes sense; it's what makes Apple worth it for many people. I totally get the inconvenience from a developer's (or gamer's) perspective, though.

You can't adopt stuff which doesn't exist.

Apple was the first to release a next-gen 3D API: Metal was released in 2014. Then came MS; DX12 launched with Win10 in 2015. Vulkan was too late, with an initial release in 2016.

Metal 1.0 was ultra-limited; it was more MVP than anything else.

It was done exactly so that Apple could be "first". The initial Vulkan release was more comprehensive than Metal 2 in 2017.

Metal 1.0 reflected the capabilities of mobile GPUs at the time. Apple is not in a race with Khronos.

Yet you will find several comments on this same page claiming that Apple won the race.

Nowhere was it mentioned that Metal 1 and Vulkan 1 were not comparable by a long shot. Exactly, precisely because it was an MVP focused on a specific mobile GPU.

What about AMD’s Mantle?

I don't think Apple could adopt Mantle. They needed a GPU API to span PCs and phones/tablets. Phones and tablets earn them most of their profits, i.e. are more important to them, but Mantle was a PC-only tech.

Created by AMD and supported in only a subset of AMD's cards. Appeared in 2013 (so, a year before Metal, and by that time Apple had probably already invested millions in Metal), and abandoned in 2015.

So. What about Mantle?

Mantle lead to Vulkan. Your point about Vulkan simply arriving too late for Apple is not as clear cut as you make it.

Someone else answered about Mantle/Vulkan/Metal timeline in a different thread: https://news.ycombinator.com/item?id=20200116

(Note: I in no way imply that you spread false narrative etc. It just happens that this linked comment is such a reply)

Sorry, at this point we need to stop wishing Apple would adopt standards, or wishing Apple would stop reinventing the wheel in their own bespoke stacks, and just support what we believe in elsewhere.

Pretty much all of the major game studios already have Vulkan implementations for their engines because of Stadia, Google's game streaming platform. Obviously we've yet to see how much staying power that will have.

Vulkan is on Android, Direct X is on Windows, Metal is on macOS and iOS.

I don't see anything wrong with it. Just like Windows didn't adopt Vulkan. And that is assuming Vulkan is good in the first place.

And most game development is now done with middleware like Unity and Unreal. The way I see it, more and more people will develop with middleware in mind, and those APIs will only be touched by a shrinking number of developers.

Vulkan runs on Windows.

And Linux.

And macOS. And iOS.

Exclusively with the use of MoltenVK, which uses Metal under the hood...

Not exclusively -- gfx-portability (https://github.com/gfx-rs/portability) is another option.

Wasn't aware of that, thank you!

And anybody who is not a masochist and cares about compatibility across all those platforms will use Unity or Unreal or one of the lesser-known others.

DirectX 12 is the easiest of these APIs to use, and it's not exactly fun.

Only on classical Win32 mode.

Does any OS natively support Vulcan? Or do you get the drivers from a third party?

Only Android, and on the Nintendo Switch as a 2nd-tier API (the 1st tier is NVN).

Even on Windows, you need to get the drivers from OEMs, and they only work in old Win32 desktop mode, not in store/UWP apps.

It remains to be seen if Windows sandbox will allow for anything else other than DirectX.

If you consider that, Windows normally uses third-party drivers (by AMD, Intel, Nvidia), which bundle Vulkan by default.

Linux distributions and the more recent Android versions?

Pretty much every OS, except heavily lock-in-oriented ones, where the OS is controlled by some entity that prevents others from providing graphics drivers. In such cases it's up to that entity to produce them, and in most cases their lock-in strategy works against that.

Pretty sure AMD on Linux can do Vulkan with just second-party stuff.

AMD hardware has two drivers on Linux, radv by Mesa and amdvlk by AMD themselves. Not sure what you call "second party", but both are very competitive today.

The "second party" is the user. I don't think any users are writing their own Vulkan drivers.

My mistake, thought about from the user perspective.

Given they make their own hardware, it makes sense to push their own standard, especially with the kind of market penetration they have. Now if it would only be possible to run Metal in the cloud ...

Apple doesn't sell any products with NVidia RTX hardware.

Yet still they feel the need to re-invent the wheel and write an API for it. I guess they really want to maintain that competitive moat.

Ray Tracing has been around for years. Heck, POV-Ray has been a great open source app for decades, even. It’s not some NVidia secret sauce.

> POV-Ray

Is not realtime. You could find an article on raytracing and code something in a weekend. Granted, POV-Ray is vastly more complex than a toy raytracer, but it is not realtime. It takes minutes to hours to render a single frame.

"Computers have been around for years, punch-card mainframes have been great at universities for decades even."

NVidia are, currently, the only vendor who have dedicated silicon for it (AMD have demonstrated DXRT working with existing silicon).

They do actually have a secret sauce.

Real-time just means that they’re doing it faster. NVidia has their own implementation. But ray tracing isn’t an NVidia technology, and NVidia implementing it in silicon is no reason for everyone else to sit around and say “well I guess we have to do it NVidia’s way now”.

> well I guess we have to do it NVidia’s way now

Nobody is, DXRT is a specification that any vendor is free to choose how they implement. NVidia are the only vendor who currently implement it.

If anyone is trying to look it up (Google doesn’t know) DXRT refers to DirectX Ray Tracing.

ATI is also bringing out hardware with dedicated RT cores, so it's likely that the hardware is coming to Macs shortly, just as it is to next-generation game consoles.

Metal is Apple's realtime API, and seeing as DirectX doesn't run on Macs, they'll need an API for raytracing in order to take advantage of ray tracing hardware, just as Vulkan did.

And if Apple doesn't provide a Metal RTRT API for third-party devs to target now, then whenever the hardware acceleration does show up third-party apps won't benefit from it right away. Apple will want the new hardware to bring immediate, large performance boosts to existing apps, not months and years of delay and hesitation from ISVs. Besides, they couldn't do any software RTRT themselves without writing a library for it anyway.

Apple does though design its own GPUs. So it is not a far stretched idea that they might someday ship a GPU with dedicated RT hardware.

They're also likely to kiss and make up with nVidia again at some point, too.

Hmm, I would argue that the amount of investment into the GPU space by Apple almost ensures that they will replace both AMD and Intel at some point.

Probably first in the thinnest of Macbooks as a way to test the waters and buy some more time.

RTX is a marketing term, not a new technology.

Wrong, it refers to new dedicated logic on their GPUs for doing BVH traversal and triangle intersection, much faster than if you use the more general purpose shader units.

The same way GPUs had dedicated logic for rasterisation to make that super fast (hierarchical z-buffering, interpolation and blending), now they have some for ray tracing.

The iOS gaming platform is their biggest though? Think mobile gaming.

Is this not something to do with the new Mac Pro?

Mojave as well does not have signed drivers for Nvidia.

The new Mac Pro only ships with AMD GPUs which do not yet support raytracing.

Of course you can ray trace on AMD, or Intel, or an abacus. It's not some magic; it's just maths you can run on any general-purpose computing device.

You can raytrace by hand too, but that doesn't change the fact that there is no dedicated hardware support for it on any hardware shipped by Apple.

Raytracing is a not a problem that is well suited to GPUs since there are lots of data dependencies, which means parallelism is severely limited.

> Raytracing is a not a problem that is well suited to GPUs since there are lots of data dependencies, which means parallelism is severely limited.

This is a joke, right? I mean, ray tracing is the most common example pointed at as being highly amenable to parallel computation: all ray computations are independent (i.e. they map onto cores), all scene and BRDF data are shared - this matches GPU architecture perfectly. The only reason it's more of a recent trend is that even with parallel compute it's still demanding for real-time performance.

If you don't believe me you can go to shadertoy right now and see endless demos doing raycasting and pathtracing in a quad.
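The independence is visible even in a toy tracer. In the sketch below (orthographic rays against a single sphere; an illustration, not code from any real renderer), each pixel reads only shared scene data and writes only its own slot, so the loop could be split across any number of threads unchanged:

```javascript
// Each pixel fires one ray straight down +z at a sphere centered at the
// origin; a hit is just a point-in-circle test in this orthographic setup.
// No pixel reads or writes another pixel's state.
function traceSphere(width, height, radius) {
  const hits = new Uint8Array(width * height);
  for (let y = 0; y < height; y++) {
    for (let x = 0; x < width; x++) {
      // Map pixel coordinates to [-1, 1] x [-1, 1].
      const ox = (2 * x) / (width - 1) - 1;
      const oy = (2 * y) / (height - 1) - 1;
      hits[y * width + x] = ox * ox + oy * oy <= radius * radius ? 1 : 0;
    }
  }
  return hits;
}
```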

Raytracing is indeed ill suited to GPUs because of the terrible cache coherency you get in a real scene that has a size measured in gigabytes instead of kilobytes that scenes on Shadertoy take up.

> all ray computations are independent (i.e cores)

Which makes it ill-suited to GPUs since they want sibling threads to take the same execution path on cache coherent data. Better than a quad core CPU sure but a very small fraction of the performance they're otherwise capable of.

You can see this with the DXR performance on Pascal vs. Turing.

> that doesn't change the fact that there is no dedicated hardware support for it on any hardware shipped by Apple

Nobody in this thread or anywhere else said there was.

> Raytracing is a not a problem that is well suited to GPUs since there are lots of data dependencies, which means parallelism is severely limited.

...no it's the classic example of an embarrassingly parallel problem - that's literally the term people use because it's so easy to parallelise. Each pixel is completely independent with no data or control dependency between them and they can be rendered in parallel.

There's a subtle distinction here which I believe is the source of this disagreement. Raytracing is indeed embarrassingly parallel - you can render a 5 megapixel image on 5 million different machines in the time it takes to render a single pixel on one machine.

However, each machine will be executing entirely different instructions after a very short period - there's not much "coherency" between adjacent rays, because all it takes is to clip the corner of an object and suddenly you're bouncing around a completely different part of the scene. This is a difficulty for GPUs, which are not true parallel clusters. What they do well is running the same set of calculations on different data, at the same time - in other words, not raytracing. I believe this is what the parent meant by "data dependency" - there are a lot of divergent branches, and the calculations that you do depend entirely on scene data.

Intel's Larrabee architecture would have made GPUs behave like genuine clusters. I think it's a bit sad we don't have general-purpose clusters in our machines, just the hobbled GPUs.

Embarrassingly parallel in theory is different from embarrassingly parallel in practice. One of the major differences is cache -- both instruction cache and data cache. You can, in theory, run 100 rays at once on 100 different threads, but since each ray goes to a completely different part of the scene, it will be loading different geometry, running different shaders, and fetching different textures.

Even in path tracers that aren't using the GPU, one of the relatively recent huge improvements was "ray sorting", which tries to collect rays into bundles that use the same shader and roughly the same area of textures to improve on cache behavior. It brought huge speed increases.

One of the big limitations of NVIDIA's RTX right now is that it does not support ray sorting.
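The bundling idea itself is easy to sketch. Here materialId stands in for whatever coherence key a real renderer would use (shader id, texture set, BVH region); this is an illustration, not any particular engine's API:

```javascript
// Group rays by the material they hit so each bundle is shaded with the
// same code and nearby texture data, instead of in incoherent hit order.
function sortRaysByMaterial(rays) {
  const buckets = new Map();
  for (const ray of rays) {
    if (!buckets.has(ray.materialId)) buckets.set(ray.materialId, []);
    buckets.get(ray.materialId).push(ray);
  }
  // Concatenate bundle-by-bundle; Map preserves first-seen key order.
  return [...buckets.values()].flat();
}
```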

Interesting tidbit about ray sorting: Imagination Technologies had a GPU design in 2014 that had support for ray tracing, including a Coherency Engine:


But there wasn't enough interest in GPU path tracing yet.

You seem to have bought in heavily to marketing.

Pathtracing software has been available for production use on GPUs for years before RTX was available.

Even in games, raytracing has been in use for a long time in various forms.

Yes RTX enables extra optimizations and provides nice APIs for it to boot, but it doesn't enable the technology as a whole. It's very much existed in realtime for a long time and on GPUs for a long time as well.

The real coup, if anything, is denoising. Advances in denoising are what really push realtime pathtracing forward, as you no longer need to fire as many rays to converge on a usable image.

This makes no sense. Why wouldn’t Apple be forward-thinking with their APIs? This news suggests that there will soon be Macs that have raytracing hardware.

And raytracing is a problem that is well-suited to GPU acceleration...ray calculations can be easily run in parallel and distributed.
