CUDA Toolkit Release Notes (nvidia.com)
160 points by sergiomattei 13 days ago | 119 comments





From the notes:

> CUDA 10.2 (Toolkit and NVIDIA driver) is the last release to support macOS for developing and running CUDA applications. Support for macOS will not be available starting with the next release of CUDA.


Between this and the challenges around OpenMP (now generally solved), setting up a Linux workstation just for experimenting was easier than getting it working on my “pro” Apple hardware.

Fully understandable. Anyway, no one is doing ML on Apple hardware because of the ATI chipsets; the last Apple hardware with Nvidia graphics cards is old, or already running Linux or Windows via Boot Camp anyway.

In my understanding, Radeon Instinct cards aren't doing that badly. Yes, they're not supported by CUDA, but they're no slouches either.

It's extremely hard to use the standard ML toolchains for GPU on anything that doesn't support CUDA.

>standard ML toolchains

So Nvidia has achieved their goal and pushed ATI out of the ML market?


ATI was never in the AI market. They have been trying, but their efforts are a joke compared to the best their competition is doing.

Thanks for all the replies. I wasn't very knowledgeable about the current state of the big ML frameworks. I know better now. Thanks again.

It's one thing to have an API and hardware support for such things, it's a different thing entirely to have support from the major ML stacks (pytorch/tensorflow etc).

Sure, but it would have been better if the tools and libraries had used the Vulkan or OpenCL APIs.

Not surprising given Apple’s apparent vendetta against Nvidia hardware

A well-deserved vendetta: Nvidia has screwed Apple over multiple times, going back to the PowerPC days. After the defective-chipset fiasco, one can't blame them for cutting ties.

That seems like a very one-sided take considering Apple’s history.

Moreover, the reason is almost certainly related to Apple not wanting to proliferate Nvidia’s tools and instead force developers to use their Metal API. AFAIK Apple’s even abandoned OpenCL on macOS.


So you are arguing that NVidia constantly messing up while Apple used them in good faith as a supplier, followed by NVidia's denial that the problems even existed and their refusal to accept any culpability (see Bumpgate), is "a very one-sided take", and that we should set that aside and believe this is all about pushing Metal? That seems like the far more narrowly held view to me. Perhaps they are thinking the way you propose, but since NVidia hasn't figured anywhere in Apple's hardware since Bumpgate, I'd suggest that NVidia being a bad partner has considerably more to do with it.

Nearly a decade has passed since nVidia messed up one generation of laptops, so it's probably time to let it go. Grudges don't really help anyone, and Apple is so far from innocent that they shouldn't be promoting grudges as solutions to problems, especially as alternatives to consumer-friendly hardware support.

In much more recent times Apple themselves ruined four generations of laptops and were deceptive and dishonest about it, until they grudgingly rolled out their free keyboard replacement program which starts expiring for their earliest victims at the end of next year. Apple chose to experiment for 3 additional years at, in aggregate, major expense and inconvenience to their customers.

nVidia was never as bad as Apple, and Apple's stance is hypocritical too. How many Foxconn employees killed themselves without Apple permanently prohibiting working with Foxconn? How many child workers have they found in their supply chain without terminating their relationships with those suppliers? nVidia obviously solved their one issue, and has gone on to ship millions of chips in laptops since then without problems.


Apple’s behaviour is irrelevant in this instance. It has nothing to do with the situation being discussed.

Your second paragraph, while containing a kernel of truth, is total hyperbole. It certainly isn't worse than refusing to acknowledge anything was wrong and then refusing to pay for the repairs! That is precisely what NVidia did. At least Apple repaired the keyboards!

Your last paragraph is the usual trite rhetoric in response to anything that can be deemed 'pro-Apple'. Both items have been addressed by Apple (https://www.apple.com/uk/supplier-responsibility/), albeit after pressure, and they are continuing to apply pressure and drop suppliers that don't meet their requirements.


In the end, it's the customers who lose

In the end it's Apple who loses. Customers can use Linux or Windows, but how is Apple going to sell $6000 workstations and $5000 displays without even the option to install aftermarket Nvidia cards? OK, so Apple doesn't trust Nvidia as a supplier, but why do they forbid the release of drivers? Total insanity.

Sounds like AMD and Intel should sever their ties too.

They deserve each other. Lock-in should be avoided, whether from Apple or Nvidia.

So what do people here use to learn/fool around with GPU-dependent ML stuff (non production use cases)? A non Mac laptop? Or do you do it all in the cloud? Using the cloud presumably gets expensive over time and I also would think that the overhead of dealing with a cloud setup and all the associated legwork to get started can be frustrating compared to working locally.

If you're learning the basics, a laptop is great, because you're not number crunching for days at a time.

If you're doing anything process-intensive, GPU or CPU, you'll want a desktop or the cloud. Sure, some laptops are fast, but they heat up, so there isn't much of a way around it.


A Mac to ssh into my Linux desktop. It's cheaper to buy your own GPU.

We need a CUDA alternative for non-NVIDIA GPUs, especially on a Mac.

I recently looked into GPGPU programming, and it seems to be a bit of a mess, which is surprising seeing how long it's been around and how much it is used. I think a large part of it is due to Nvidia using their strong position to hamper anything other than Cuda.

OpenCL 2.0 was announced in 2013, Nvidia added OpenCL 2.0 "for evaluation purposes" in 2017. I don't think they have final support yet.

OpenCL 2.1 uses the same intermediate language as Vulkan, SPIR-V. Nvidia does not seem to support this.

When OpenCL 2.2 was announced in 2017 Khronos said they were working on converging OpenCL and Vulkan compute, but OpenCL will remain separate. I think this means that they will both use the same SPIR-V backend to run shader code, but I don't know.

Khronos also has SYCL. The latest version, 1.2.1, was released a few days ago and uses OpenCL 1.2 as the backend. I'm guessing it doesn't use a newer version because of Nvidia's poor support for them, but AMD doesn't seem to support it well either.

Then there's a multitude of other libraries/platforms, like POCL, HCC, Acoran, that I don't know anything about but found mentions of. AMD has HIP which can convert Cuda to run on AMD hardware, so maybe Cuda is the best option for AMD as well?

I find it a bit of an embarrassment for the industry that it's this messy and complicated, and I don't see it getting better in the near future. It seems as if the proprietary Cuda is still the best option, and that's a big failure for everyone other than Nvidia. I don't want to use a proprietary language, but I want the code to run well on all the major platforms. I still don't know what to use.

https://www.khronos.org/sycl/

http://portablecl.org/

https://gpuopen.com/compute-product/hcc-heterogeneous-comput...

https://www.codeplay.com/products/acoran/

https://gpuopen.com/compute-product/hip-convert-cuda-to-port...


> I don't want to use a proprietary language

CUDA is mostly just limited C++. A couple of the interesting points in the release notes: “Added support for CUDA Virtual Memory Management APIs.; 10.2 now includes libcu++, a parallel standard C++ library for GPUs.” Those things make CUDA even easier to treat like regular C++.
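To illustrate (a throwaway sketch, not from the release notes - the kernel and names are just made up for the example), a trivial CUDA program really is ordinary C++ plus a handful of extensions:

```
#include <cstdio>
#include <cuda_runtime.h>

// __global__ marks a function that runs on the GPU; threadIdx/blockIdx
// give each thread its index. Everything else is plain C++.
__global__ void saxpy(int n, float a, const float* x, float* y) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) y[i] = a * x[i] + y[i];
}

int main() {
    const int n = 1 << 20;
    float *x, *y;
    // Unified memory keeps the host side looking like plain C++ as well.
    cudaMallocManaged(&x, n * sizeof(float));
    cudaMallocManaged(&y, n * sizeof(float));
    for (int i = 0; i < n; ++i) { x[i] = 1.0f; y[i] = 2.0f; }

    // The <<<blocks, threads>>> launch syntax is the most visible extension.
    saxpy<<<(n + 255) / 256, 256>>>(n, 2.0f, x, y);
    cudaDeviceSynchronize();

    printf("y[0] = %f\n", y[0]);  // expect 4.0
    cudaFree(x);
    cudaFree(y);
    return 0;
}
```

Compile it with nvcc and it behaves like any other C++ translation unit.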

The language isn’t really the force keeping people in or out, the libraries & tools are. cuDNN, for example, is something you can’t get in OpenCL.


As you've correctly identified, CUDA is the only sane default choice in the GPGPU world. OpenCL 1.2 is a viable distant second if you're sure you don't need any of the major CUDA libraries and value the portability aspect (not the same thing as performance portability, of course, but still). Finally, if you don't need strict accuracy guarantees and your application is mostly about graphics, you can get by with compute shaders. Everything else - POCL, HCC, ROCm, the various SYCL implementations, etc. - you can safely ignore if you want something usable today; none of them are production-ready or anywhere close to it.

Who actually supports OpenCL 2.0? ROCm, which AFAICT is AMD's new preferred platform, also only supports OpenCL 1.2.

I don't really understand this - why would you use a Mac for anything compute-intensive?

hackintosh

That's such a narrow segment of users. Would it be worth supporting?

Even if it was worth supporting, I guess it's understandable that Apple wouldn't go out of its way to do it...

Would this from AMD count? https://rocm.github.io/

It's not supported on Mac, and even on Linux seems to have weird limitations on specific kernel versions.

It also supports just a subset of AMD chips, which doesn't seem to include the actual AMD chips of the different Macs that I have available.

ROCm would count if it's mature enough so that the setup "just works" on any reasonable environment (in the way that it mostly is so for the major ML platforms on nvidia/CUDA) but it does not yet seem ready for that.


You make a good point, I forgot that it's not Mac or Windows compatible, and the limited hardware support.

AIUI, Vulkan can be used for compute - so why not do that? It might work reasonably well on both nVidia and non-nVidia hardware. Of course, this requires GPU-enabled software to implement Vulkan support, but that's one-time work.

Last year, I supervised a student project that involved writing a Vulkan backend for a compiler that already possessed CUDA and OpenCL backends. While I recall that everything ended up working, it didn't run all that fast. The vast majority of the problems were related to the language used for encoding the shaders, SPIR-V.

Among the difficulties we encountered, I recall the need to use an offline optimiser to do trivial things like CSE and constant folding. CUDA and OpenCL will do this during kernel compilation, but Vulkan implementations seem to be designed as non-optimising (so that loading is faster, I assume). This is a perfectly understandable design, but it means it's a little more awkward to use. For direct programming (as opposed to use as a compilation target), it's a lot more awkward.

Another issue was that basic Vulkan is very restricted, as I assume it's supposed to be usable on simple hardware. Additional functionality can be enabled through a collection of extensions. One particular problem I recall is that our code generator assumes the existence of general pointers, for example such that the same memory can be used for different types of values at different times. SPIR-V has/had a very strict notion of pointers, and no way to cast between different types without an extension, and that extension was IIRC not implemented by NVIDIA or AMD. There were lots of these kinds of issues.

In the end, Vulkan is usable for compute - after all, a single master's student (admittedly one who's very bright) managed to implement a Vulkan-targeting compiler backend in half a year. At the time, though, I concluded that it's not as mature or as practical as CUDA or OpenCL, for reasons that seem perfectly solvable. For direct programming, however, the SPIR-V that is needed for shaders is completely inaccessible - it would be like writing machine code (not assembly) by hand. I assume graphics programmers have some layer on top to provide a sane interface, but since we were writing a compiler anyway, it wasn't a big deal for us.


Vulkan compute shaders are roughly on the level of OpenGL compute shaders, which never were as flexible and general as CUDA and OpenCL kernels.

Even though compute in Vulkan appears as a first class citizen instead of the weird tacked on feature it is in OpenGL, the design is still focused very much on embedding compute steps in a process that is designed to deliver images to a screen. And that uses a lot more opaque object representations like textures and samplers. GLSL has also always been a very restrictive language and that carried over into SPIR-V.


OpenCL is decent, and although it's deprecated, it's probably going to continue working for years. It's not as nice as CUDA for direct programming, but don't most people access CUDA through higher-level libraries anyway? You probably would not be able to tell whether those use OpenCL or CUDA behind the scenes.

Deprecated by Apple with regard to native OS X support for OpenCL - an important distinction.

It's called SYCL and no one uses it. It's been around since 2014.

Why doesn't Apple support NVIDIA, though? Considering the whole ML and AI community uses only NVIDIA GPUs, it sucks that we can't use Apple laptops for that.

NVidia didn't want to pay for the failing NVidia chips in MacBooks when they had manufacturing problems over half a decade ago, and since then Apple has tried everything to make life difficult for NVidia and its customers.

Weren't these chips failing because of bad thermal design of the enclosure?

Nope. These were GPUs confirmed defective by Nvidia themselves.

All companies that used these GPUs were affected. They even sued Nvidia.

Here is an example: https://blog.dell.com/en-us/nvidia-gpu-update-nvidia-class-a...


They were failing in any computer with that chip (Nvidia 8600m), whether it was MacBook, Thinkpad or some Dell.

NVidia changed the solder without switching to a thermal potting compound with the same expansion coefficient as the new solder. Cycles of thermal stress eventually caused the solder to fracture and the chip to lose electrical contact with the board.

Why would you do anything compute intensive on laptops that suffer from cooling issues?

Why make such a broad statement at all? There are laptops for nearly every need - yes there are some which will throttle very quickly and are only good for light work, but there are laptops in every size that have decent enough cooling that they can run at max load 24/7 without any issues. As to why someone would do this - can you really not think of a single reason?

I'm not so sure I agree. My old laptop (still in use) is a Dell Precision m4800. It has a quad-core i7, and is so thick and heavy the word "laptop" is kind of a joke. It has massive heatsinks and fairly loud fans. Still it will throttle back after ~90 seconds of "make -j 4" or other heavily parallel jobs.

If you have a laptop that will run at 100% load indefinitely, it's only because the manufacturer has chosen a low power CPU.


Well, then you need to look around the market a little longer.

At work we use Eurocom laptops with desktop-grade i7s and i9s rated at 90W - and we use them at conferences and trade shows to run our demonstrations; they do absolutely fine under full load. Sure, they weigh about 5 kg, but that's absolutely fine for the target use. I hope that answers the question of "why would you possibly want to do that".

I have a Razer Blade Stealth, a 13" laptop where the manufacturer has actually gone against Intel's advice and given the CPU a budget of 25W instead of the advised 15W - and the cooling to work with that. Both the CPU and GPU can be at full load indefinitely and will not throttle. It's just a well-designed dual-fan cooling solution.

My wife has a Lenovo Y540 - with an i5-9300H, 45W CPU - again, the cooling on it is super beefy and it will run indefinitely at full load. And that's a completely normal 15.6" laptop.

But you know what throttles? Laptops like the MacBook Air, where Apple used a Y-series CPU and gave it zero cooling - that will throttle hard after a while. And that's a 5W chip. It's almost an achievement that they managed to mess this up. But maybe they shouldn't feel too bad - a lot of other companies do mess it up too. Dell XPS. HP Envy. Those are top lines for these brands and they are famous for aggressive throttling under load.

My point is - of course there are laptops on the market that are designed for sustained full load and are completely fine with it. I'm just baffled by 1) how this can not be obvious and 2) how it can be hard to think of one use case where that's useful.


Those Eurocom laptops look neat, thanks for the tip.

I'm just a bit surprised if they manage to squeeze out significantly better thermal performance than Dell does out of a similarly bulky laptop.

Have you actually verified the core frequencies over time with a tool like CPU-z when running a workload that pegs all cores at 100%?

The Precision model I have doesn't stutter or feel any slower under load. I've never managed to make it feel slow, even when running heavy physics simulation codes on all cores in the background. But when you actually monitor frequencies, you see it clocks down by around 20%.


Right, let's clarify what I mean by "throttling". All Intel CPUs have base and turbo speeds, and the turbo speeds usually only apply to a single core, or to multiple cores while there is thermal headroom. By "throttling" I mean the CPU falling below its base frequency to protect itself; I don't mean the turbo frequency dropping under load - that's normal and happens even on desktop CPUs with ample cooling.

As an example - the CPU in the Razer Blade is an i7-8565U, with a base speed of 1.8GHz and a turbo speed of 4.6GHz. Under maximum load I'll see it jump to 4.6GHz briefly, and then settle at 3.2GHz, where it will remain indefinitely. Sure, the CPU has "throttled" down from its maximum turbo speed, but it's stable at 3.2GHz on the default cooling. In comparison, I used to own an MSI GT63R with a quad-core i7 (2630QM if I remember correctly), and that CPU would "throttle" by regularly falling down to 400-600MHz(!!!) as its thermals were overwhelmed. It was not "stable" at either its base or its turbo speed. That behaviour still happens (in the mentioned Air, or an XPS 15 for instance), but there are definitely laptops which don't do that at all.


Ah, okay, then I'm using a different definition of throttling.

Intel will publish, as you say, a base speed of 1.8 GHz, then a single-core Turbo speed of 4.6 GHz and also an all-core Turbo speed of 4.2 GHz (numbers made up, but something like this). If sufficient cooling is available, the CPU should be able to sustain the all-core Turbo number indefinitely. If it can't, I call that throttling. It can be mild (if you go from 4.2 to 3.2) or severe (if you go from 4.2 to 1.8). A colleague has the XPS 15 (well, the Precision equivalent) and he's never seen it drop below base clock; the problem is that the base is something ridiculously low, like 1.2 GHz. If a machine drops below its base frequency due to thermal issues, it has been designed very wrongly.

Our workstations with water cooling run at maximum all-core Turbo freq. for days on end. Those CPUs do exceed the specified TDP when doing so, which is fine as long as the cooler can easily dissipate that heat. And you can get water coolers that support 500W TDP, so no worries.

The only guarantee Intel makes is that the processor will stay within TDP when running at the base clock. What's happening when your laptop goes briefly to 4.2 GHz is that it exceeds both the Intel-stated TDP and the cooling system TDP. Then it throttles back to 3.2 GHz, which is a little below the cooling system TDP but above the processor TDP. In a laptop like the XPS 15, the cooling system TDP is only a little higher than Intel-stated TDP.

The momentary thermal headroom between what the CPU puts out at max Turbo and the cooling system TDP is provided by the heat capacity of the metal in the heatsink/heatpipe.


My Lenovo W530 still has a socketed CPU, which I've upgraded from a 2-core/4-thread i5-3210M to a 4-core/8-thread i7-3610QM, and it will happily chug along without throttling down in parallel compilation, benchmarks, or games using an eGPU. It's a 45W TDP CPU, and there are stronger ones doing fine still. And before you say "that's outdated" - well, not really (https://browser.geekbench.com/processors/753): it's about as fast per core as a 2013 retina MBP and has 4 real cores. I guess my point is "you can make it work".

Now, my work's 2018 15" MBP is suffering massive overheating problems and throttling down just on some light work + YouTube.


Sure but none of the properly cooled laptops are Macs.

Maybe you want to functionally validate your task in the SW environment that you can take with you.

We've got a 28-core Mac Pro now, with lots of expansion ports.

Tensorflow stopped supporting MacOS w/GPUs a year or two ago. Maybe if you're using Pytorch or some other framework that supports MacOS w/GPU/external gpu... but to my knowledge that doesn't exist/isn't widely used.

I think for the larger models now it's more likely people are remotely logging into GPU clusters at this point. I don't think it will be that big of a change.

I do GPGPU programming for a living. It's all NVIDIA. They have the best developer support and the best hardware.

It's nice having a laptop with nvidia so I can test and develop software even if real models are run on big iron.


It's not a change in the sense of the status quo being bad. Everyone on our team wanted os x dev, and the choices of remote dev vs linux dev were subpar for them.

I'm optimistic about Nvidia eGPUs, but it will be a while before that's smoothed out.


Apple hates hackers. Didn't the missing Esc key clue you in?

They added Esc key to the latest Macbook. Do you feel their love now?

Oh, I thank Your Highness for only whipping me 40 times instead of 50! Truly Your Highness is wise and merciful!

That only flies if Ctrl-[ also enables Nvidia support.

MacBooks with iPhone stripe don’t have Nvidia hardware.

Since I'm being downvoted: I'm genuinely interested - are there new MacBooks with Nvidia cards? I'd like to get one if it exists. Am I missing something? They all have AMD/Intel graphics cards, right?

The fair breeze blew, the white foam flew,

The furrow followed free


Apple wants to force people to use APIs it controls like OpenCL then Metal. The availability of CUDA meant if Apple shipped Macs with nVidia devices, there would be an alternative.

This makes sense. Macs don't support their hardware. Researchers and scientists who need CUDA get platforms with the power and capability to support them.

I'm using CUDA under Ubuntu, and have noticed that the CUDA library uninstalls itself every so often. Has anyone else experienced this?

You probably installed a kernel update and the nvidia kernel module didn't recompile itself. You can avoid having to reinstall the whole driver package by just running "dkms autoinstall" and then "modprobe nvidia"

You may first need to unload any loaded nvidia modules (built for an older kernel).. So some combination of "rmmod nvidia_modeset" "rmmod nvidia_uvm" "rmmod nvidia_drm" "rmmod nvidia" and then run dkms

I run a ~1000-node server room for a computer science graduate program at a university; keeping these drivers built and loaded properly has been a nightmare! Nvidia really needs to get things worked out if they want to keep pushing the GPGPU stuff.


What distro are you running in your labs? At my university's cluster running RHEL 6 (the same applies to RHEL 7; hopefully they managed the upgrade over the summer), all that's needed is installing dkms and then the CUDA repo from NVIDIA, which includes the driver and CUDA packages. Any kernel update will rebuild the kmod on reboot. I'm not 100% certain whether that repo is Tesla cards only (which is what we had), but ELRepo also has the generic driver and associated bits (same as negativo, but those are very granular). DKMS is really the only piece that's necessary to keep the system running (other than keeping an eye on which kernel version the kmods were built against if using the non-NVIDIA repo). DKMS also works just fine with the NVIDIA-provided installer; just make sure you have the libglvnd bits installed before you install the driver.

There is an nvidia dev apt repo that is great for installing and setting CUDA up because apt does all the heavy lifting for you. I've never had any problems with it.

First an optional prerequisite, for updated video drivers:

```
sudo apt-get install -y software-properties-common &&
sudo add-apt-repository -y ppa:graphics-drivers/ppa &&
sudo apt-get update &&
sudo apt-get install -y nvidia-driver-NNN   # e.g. nvidia-driver-435
```

This is sometimes a required prerequisite, because these drivers have 32bit and 64bit binaries in them where the ones from nvidia's website or normal apt packages only have the 64bit drivers. (Eg, it's a requirement for Steam and many video games, which will suddenly stop working when CUDA is installed.)

Then there is CUDA itself:

```
sudo apt-get install -y gnupg2 curl ca-certificates &&
curl -fsSL https://developer.download.nvidia.com/compute/cuda/repos/ubu... | sudo apt-key add - &&
echo "deb https://developer.download.nvidia.com/compute/machine-learni... /" | sudo tee /etc/apt/sources.list.d/nvidia-ml.list &&
sudo apt-get update &&
sudo apt-get install -y nvidia-cuda-toolkit libcudnn7 libcudnn7-dev   # the libcudnn7 packages only if also using cuDNN
```

Also note: NVIDIA currently doesn't officially support Ubuntu 19, but their 18.04 repo works perfectly for 19. In the future you can always try grabbing from https://developer.download.nvidia.com/compute/cuda/repos/ubu... instead.


Has anyone experimented with ClojureCL [1]? It claims to run on MacOS.

[1] https://clojurecl.uncomplicate.org/articles/getting_started....


I guess that means we won't be getting a new Webdriver for their graphics cards either :/

I couldn't find any documentation on libcu++. I guess I'll have to download the new SDK and see if there are samples, or look at the headers.
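From the release-notes description alone, my guess is that it mirrors the standard library under a cuda::std:: namespace with headers under <cuda/std/...>, so something along these lines is what I'd expect to work - an untested sketch, and the header path, the placement-new into managed memory, and the minimum architecture are all my assumptions:

```
#include <cstdio>
#include <new>
#include <cuda_runtime.h>
#include <cuda/std/atomic>   // assumed header path

// Untested sketch: cuda::std::atomic should expose the familiar std::atomic
// interface and be usable from device code (on sm_60+, I believe).
__global__ void count_even(const int* data, int n, cuda::std::atomic<int>* counter) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n && data[i] % 2 == 0)
        counter->fetch_add(1, cuda::std::memory_order_relaxed);
}

int main() {
    const int n = 1024;
    int* data;
    cuda::std::atomic<int>* counter;
    cudaMallocManaged(&data, n * sizeof(int));
    cudaMallocManaged(&counter, sizeof(*counter));
    for (int i = 0; i < n; ++i) data[i] = i;
    new (counter) cuda::std::atomic<int>(0);   // construct the atomic in managed memory

    count_even<<<(n + 255) / 256, 256>>>(data, n, counter);
    cudaDeviceSynchronize();

    printf("even numbers: %d\n", counter->load());   // expect 512
    cudaFree(data);
    cudaFree(counter);
    return 0;
}
```

I'm assuming it wants something like nvcc -std=c++11 -arch=sm_60; happy to be corrected once actual docs show up.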

So nvcc is completely based on clang now?

It has been for a long time. Only the very earliest versions of CUDA had an nvcc based on Open64.

Makes sense, given that the underlying drivers only support macOS 10.13 (which is likely considered out-of-support by Apple now)

10.13 support ends September 2020.

True, it does look like Apple provides support for the current and last two releases. 10.12 Sierra got its last (ever) security update at the end of September.

Mac users can always use AMD's OpenCL offerings...

I was under the impression OpenCL was deprecated for Mac OS.

Yep, deprecated in 10.14 in favor of Metal. I expect it to be like OpenGL, where the existing functionality will stay around in maintenance mode for years to come but never improved.

https://developer.apple.com/library/archive/documentation/Pe...


I thought it was still supported and the AMD version of tensorflow was supported as well...

Submitted title was "Nvidia drops support for CUDA on macOS". We changed that for a while to "CUDA 10.2 is the last release to support macOS", which is language from the article itself. Since then someone emailed and asked why the title was like that in light of the discussion at https://news.ycombinator.com/item?id=21617016, so I've reverted to the article's title.

Edit: If the article were more a burying the lede kind of thing, as sometimes happens with press releases, then the original title would be misleading and it would arguably be right to change it. But that seems unlikely here?

Edit: I suppose we could switch the URL to an article like https://gizmodo.com/apple-and-nvidia-are-over-1840015246 if this really is the only story here.


Any title that gives us a clue why the story is worthy of note. At the moment it's frustrating as you have to click through to find out if you want to click through.

(title at the time of my comment was "CUDA Toolkit Release Notes")


As an ML + open source developer for over 10 years, MacOS support for deep learning is already long gone; Linux is the prime AI/ML OS. However, Apple and NVidia parting ways is a good omen for GPU competition, I believe. Whatever hardware Apple comes up with, if usable outside the MacOS software stack, would be an interesting alternative.

> if usable outside MacOS software stack

Except, of course, according to Apple’s history regarding the matter, it won’t.


Unless the swift-for-tensorflow effort cross-fertilises something interesting, I suppose? If you were asked to guess the most likely path by which AMD cards become widely useful, perhaps this would be as good a bet as any.

Someone will need to get ROCm on MacOS, or something like that.

God, I really hope AMD walks away from its deal with Apple as well. I'd rather people abandon macOS early than have to put up with Apple-imposed implementations like Metal, HLS, and WebKit.

At the end of the day, Apple should realize that having money alone does not, by any means, make you a popular company. It's the little things that make people sign up for Apple.


Emphasis on `if usable`

I don't think the general crowd will adapt itself to go on to Apple's stack unless it is radically better.

Apple will have to get its developers to use what it makes by enforcing it in its App Store, the way it pushed people toward WebKit - otherwise it won't have takers.


crosses fingers in OpenCL

Agreed. Quite frankly the new title doesn't help people understand why it's newsworthy.

By the time we changed the title, the comments were making that quite clear.

Good point.

As the whiner who whined about this title: the story isn't worthy of note at all, other than someone fishing a supposedly-important detail out of it and putting it in the title. Apple hasn't so much as sold a machine with an Nvidia GPU in many years. This non-editorializing thing is explained in great illustrative detail here:

https://news.ycombinator.com/item?id=21617907


Perhaps they haven’t sold Nvidia hardware, but the fact that Nvidia’s web drivers were allowed, so that I could connect my eGPU with a GTX 1080 to the 13-inch MacBook Pro, was one of the primary reasons I bought the MBP.

It is fine if they want to kill off features on upcoming hardware, but killing off the ability to use something people have relied on for months is not a good look. If Metal-only was the goal, add a prompt when someone enables said web drivers warning that they’re going off the reservation and that they assume the risk.

These release notes seem to be Nvidia giving up on the idea that they’re going to be able to resolve this dispute; High Sierra is the last release where all but a couple of really old Nvidia GPUs work.


Yep; you’re right. And Metal isn’t the only goal. Preventing people from going off the reservation is the goal. I can see both sides; there are both upsides and downsides for both Apple and their customers. Either way, at this point, Apple’s strategy is clear and widely known. It sucks you can’t use eGPU with Mac, and I’d still be primarily using a Mac if you could, but it’s no longer an available choice. As consumers, the only option now is to vote with our pocketbooks.

The configuration you're describing was never officially supported. I'm not arguing whether this change in support is a good or bad thing, just addressing the concern that Nvidia might be trying to sneak something super-important past people in release notes and that the editorializing should therefore perhaps get a pass.

Even AMD eGPUs have a lot of compromises (e.g. HDCP issues), although those may or may not be Apple's fault.

I bought a lower-spec MBP with the expectation of using eGPUs (in the hope Apple and Nvidia would make up), but it's good to see it confirmed that it won't be worthwhile.


1. I didn't know the fact highlighted in the original title. 2. Doesn't this have an impact on eGPUs, Hackintoshes, and other customizations?

It basically means heavy GPU-based work is pretty much a Linux/Windows-only thing now. Huge problem as machine learning starts to become a topic of interest for people in the art and design community.


> Huge problem as machine learning starts to become a topic of interest for people in the art and design community.

Out of curiosity, what Mac apps are you and other professionals using for art & design? Being a part time artist & ex-Mac person, I thought the stereotype of Mac being an artist/designer’s machine was almost gone. Most professional artists I know have had to switch away from Mac. To be fair, most artists I know are doing 3d and need GPUs. But I’m honestly wondering what else is keeping people on Mac and how well the platform is serving professional designers and artists today.


For 1., again, the standard isn't 'someone might not have known a detail in the story', it's 'does the title represent the story'. If that particular detail is worthy of highlighting, it's easy to find (or even write!) a story about that or point it out in a comment. For 2., that's been the unfortunate case for ages but more importantly, see 1.

It's not a "detail" if it's pretty much the only reason to post or read the story in the first place.

Sure, it might be an important detail to you but that's just not how titling HN stories works, as explained at great length in the thing linked in my comment and many other places.

Personal opinion: I really don't care about "CUDA Toolkit Release Notes" or probably most other "$stuff Release Notes" on HN. I only opened the comments because others here seemed to upvote it more than I anticipated and thought "there must be something special about this release" - and indeed, nVidia dropping MacOS support is something that's worth knowing (even as a Linux dev).

Thus I personally would like to either have the title "CUDA Toolkit Release Notes: MacOS support to be dropped" or a link to the click-baity Gizmodo article (which is missing the CUDA aspect).


> Personal opinion: I really don't care about "CUDA Toolkit Release Notes" or probably most other "$stuff Release Notes" on HN. I only opened the comments because others here seemed to upvote it more than I anticipated and thought "there must be something special about this release" - and indeed, nVidia dropping MacOS support is something that's worth knowing (even as a Linux dev).

Would have to agree here, too. Part of the problem is that there's no obvious indicator to show that the title has changed (some sort of "post title history" feature would be useful here) which then leaves you (as a latecomer to the article) confused as to what's remarkable about something which would normally perhaps be considered mundane, and confused as to the context against most previous comments were made.


Why is the threshold for changing URLs and titles lowering all the time?

It seems like it has gone from something rare, only used in very clear-cut cases, to something used daily.


That is sample bias. HN moderation of titles and URLs has been the same for many years. It was never rare, and was always happening daily.


[flagged]


It works fine on Windows and Linux. Apple didn't even ship a computer with a replaceable GPU for 6 years.

Nvidia spent money building the superior SDK, and that's why CUDA is an important piece of infrastructure. Apple half-heartedly supported OpenCL, then killed it, then pushed Metal, and all this time didn't ship a computer that could even take advantage of this power.

Closed tech is bad but Apple killed off the only real open competitor and replaced it with something even more closed.


The corresponding AMD infrastructure and most related tools are fully open source. The driver is even upstreamed in the Linux kernel!

I am a full-stack web developer using Linux as my development environment. I plan to start learning some ML-related stuff, and I recently bought a Radeon 5700 XT, the current high-end consumer GPU from AMD. Before this I had an RX 580.

The driver support on Linux is fine but not optimal: there are multiple issues regarding color ranges and dual-monitor support, and with Navi-based hardware (5700/5700 XT) plain hardware freezes/hangs. I did not personally experience the hangs, but I installed the 5.4-rc8 kernel and compiled Mesa 20 from git just to avoid them in the first place. There is no Navi support in ROCm, and the preferred way to get OpenCL on older hardware like Polaris (the RX 580, for example) is to just extract the relevant files from the closed-source driver.

I still prefer something with open source drivers, but AMD is far behind NVIDIA in terms of GPGPU software support. I honestly can't understand what their strategy is and why they are allowing the majority of developers to tie themselves and their careers to NVIDIA.

Those tools suck.

You surely can write one yourself or pay for development. If that doesn't work maybe you could help developing one of the existing packages?

Not at all; Khronos just keeps missing the boat on the fact that there is a large community that doesn't want to be stuck with C for GPGPU programming.

By the time they took steps to embrace other languages, the boat was already far out to sea.



