> CUDA 10.2 (Toolkit and NVIDIA driver) is the last release to support macOS for developing and running CUDA applications. Support for macOS will not be available starting with the next release of CUDA.
So Nvidia has achieved their goal and pushed ATI out of the ML market?
Moreover, the reason is almost certainly related to Apple not wanting to proliferate Nvidia’s tools and instead force developers to use their Metal API. AFAIK Apple’s even abandoned OpenCL on macOS.
In much more recent times Apple themselves ruined four generations of laptops and were deceptive and dishonest about it, until they grudgingly rolled out their free keyboard replacement program which starts expiring for their earliest victims at the end of next year. Apple chose to experiment for 3 additional years at, in aggregate, major expense and inconvenience to their customers.
nVidia was never as bad as Apple, and Apple is being hypocritical too. How many Foxconn employees killed themselves without Apple permanently prohibiting working with Foxconn? How many child workers have they found in their supply chain without terminating their relationships with those suppliers? nVidia obviously solved their one issue, and went on to ship millions of chips in laptops since then without problems.
Your second paragraph, while containing a seed of fact, is total hyperbole. It certainly isn't worse than refusing to acknowledge anything was wrong and then refusing to pay for the repairs, which is precisely what NVidia did. At least Apple repaired the keyboards!
Your last paragraph is the usual trite rhetoric to anything that can be deemed ‘pro Apple’. Both items have been addressed by Apple (https://www.apple.com/uk/supplier-responsibility/) albeit after pressure, and they are continuing to apply pressure and drop suppliers that don’t meet their requirements.
If you're doing anything process-intensive, GPU or CPU, you'll want a desktop or the cloud. Sure, some laptops are fast, but they heat up, so there isn't much of a way around it.
OpenCL 2.0 was announced in 2013, Nvidia added OpenCL 2.0 "for evaluation purposes" in 2017. I don't think they have final support yet.
OpenCL 2.1 uses the same intermediate language as Vulkan, SPIR-V. Nvidia does not seem to support this.
When OpenCL 2.2 was announced in 2017 Khronos said they were working on converging OpenCL and Vulkan compute, but OpenCL will remain separate. I think this means that they will both use the same SPIR-V backend to run shader code, but I don't know.
Khronos also has SYCL. The latest version, 1.2.1, was released a few days ago and uses OpenCL 1.2 as the backend. I'm guessing it doesn't use a newer version because of Nvidia's poor support for them, but AMD doesn't seem to support it well either.
Then there's a multitude of other libraries/platforms, like POCL, HCC, Acoran, that I don't know anything about but found mentions of. AMD has HIP which can convert Cuda to run on AMD hardware, so maybe Cuda is the best option for AMD as well?
I find it a bit of an embarrassment for the industry that it's this messy and complicated, and I don't see it getting better in the near future. It seems as if the proprietary Cuda is still the best option, and that's a big failure for everyone other than Nvidia. I don't want to use a proprietary language, but I want the code to run well on all the major platforms. I still don't know what to use.
CUDA is mostly just limited C++. A couple of the interesting points in the release notes: “Added support for CUDA Virtual Memory Management APIs.; 10.2 now includes libcu++, a parallel standard C++ library for GPUs.” Those things make CUDA even easier to treat like regular C++.
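To give a sense of what "mostly just limited C++" looks like in practice, here's a minimal sketch (the saxpy kernel, launch configuration and managed-memory use are just illustrative, not something from the release notes):

    // saxpy.cu - an ordinary C++ function template doubling as a GPU kernel.
    #include <cstdio>
    #include <cuda_runtime.h>

    template <typename T>
    __global__ void saxpy(int n, T a, const T* x, T* y) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n) y[i] = a * x[i] + y[i];   // same arithmetic you'd write on the CPU
    }

    int main() {
        const int n = 1 << 20;
        float *x, *y;
        cudaMallocManaged(&x, n * sizeof(float));   // unified memory keeps the host side plain C++
        cudaMallocManaged(&y, n * sizeof(float));
        for (int i = 0; i < n; ++i) { x[i] = 1.0f; y[i] = 2.0f; }
        saxpy<<<(n + 255) / 256, 256>>>(n, 2.0f, x, y);
        cudaDeviceSynchronize();
        printf("y[0] = %f\n", y[0]);   // expect 4.0
        cudaFree(x); cudaFree(y);
    }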
The language isn’t really the force keeping people in or out, the libraries & tools are. cuDNN, for example, is something you can’t get in OpenCL.
It also supports just a subset of AMD chips, which doesn't seem to include the AMD chips in the various Macs I have available.
ROCm would count if it's mature enough so that the setup "just works" on any reasonable environment (in the way that it mostly is so for the major ML platforms on nvidia/CUDA) but it does not yet seem ready for that.
Among the difficulties we encountered, I recall the need to use an offline optimiser to do trivial things like CSE and constant folding. CUDA and OpenCL will do this during kernel compilation, but Vulkan implementations seem to be designed as non-optimising (in order for loading to be faster, I assume). This is a perfectly understandable design, but it means it's a little more awkward to use. For direct programming (as compared to a compilation target), it's a lot more awkward.
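To make the CSE/constant-folding point concrete, here's a hedged sketch in CUDA terms (illustrative only, not actual generated code) of the kind of redundancy a code generator happily emits because nvcc and OpenCL compilers clean it up at kernel-compile time, whereas a deliberately non-optimising SPIR-V consumer may execute it as written:

    // Illustrative only: redundancy left for the kernel compiler to remove.
    __global__ void scale(float* out, const float* in, int n) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i >= n) return;
        float k = (3.0f * 4.0f) / 2.0f;   // constant folding turns this into 6.0f at compile time
        float a = in[i] * k;
        float b = in[i] * k;              // common subexpression, identical to a
        out[i] = a + b;                   // CSE collapses a and b into a single multiply
    }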
Another issue was that basic Vulkan is very restricted, as I assume it's supposed to be usable on simple hardware. Additional functionality can be enabled through a collection of extensions. One particular problem I recall is that our code generator assumes the existence of general pointers, for example so that the same memory can be used for different types of values at different times. SPIR-V has/had a very strict notion of pointers, and no way to cast between different pointer types without an extension, and that extension was IIRC not implemented by NVIDIA or AMD. There were lots of these kinds of issues.
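For what it's worth, the pattern in question looks roughly like this in CUDA terms (a hedged sketch, not the actual generated code); CUDA and OpenCL C accept the cast directly, while strict SPIR-V needed an extension for it:

    // Sketch of the "general pointer" pattern: one scratch buffer reused for
    // values of different types at different points in the program.
    __global__ void reuse_scratch(char* scratch, int n) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i >= n) return;
        // Phase 1: the buffer holds floats.
        reinterpret_cast<float*>(scratch)[i] = 0.5f * i;
        // Phase 2 (later in the real program): the same bytes hold ints.
        reinterpret_cast<int*>(scratch)[i] = i;
    }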
In the end, Vulkan is usable for compute - after all, a single master's student (admittedly one who's very bright) managed to implement a Vulkan-targeting compiler backend in half a year. At the time, though, I concluded that it's not as mature or as practical as CUDA or OpenCL, for reasons that seem perfectly solvable. For direct programming, however, the SPIR-V that is needed for shaders is completely inaccessible - it would be like writing machine code (not assembly) by hand. I assume graphics programmers have some layer on top to provide a sane interface, but since we were writing a compiler anyway, it wasn't a big deal for us.
Even though compute in Vulkan appears as a first-class citizen instead of the weird tacked-on feature it is in OpenGL, the design is still focused very much on embedding compute steps in a pipeline that is designed to deliver images to a screen. And that pipeline uses a lot more opaque object representations like textures and samplers. GLSL has also always been a very restrictive language, and that carried over into SPIR-V.
All companies that used these GPUs were affected. They even sued Nvidia. Here is an example: https://blog.dell.com/en-us/nvidia-gpu-update-nvidia-class-a...
If you have a laptop that will run at 100% load indefinitely, it's only because the manufacturer has chosen a low power CPU.
At work we use Eurocom laptops with desktop-grade i7 and i9s rated at 90W of power - and we use them for conferences and trade shows to run our demonstrations, they do absolutely fine under full load. Sure they weigh about 5KG, but that's absolutely fine for the target use. I hope that answers the question of "why would you possibly want to do that".
I have a Razer Blade Stealth, a 13" laptop where the manufacturer has actually gone against the advice from Intel and given the CPU a budget of 25W instead of the advised 15W - and the cooling to work with that. Both the CPU and GPU can be at full load indefinitely and will not throttle. It's just a well-designed dual-fan cooling solution.
My wife has a Lenovo Y540 - with an i5-9300H, 45W CPU - again, the cooling on it is super beefy and it will run indefinitely at full load. And that's a completely normal 15.6" laptop.
But you know what throttles? Laptops like the MacBook Air, where Apple used a Y-series CPU and gave it zero cooling - that will throttle hard after a while. And that's a 5W chip. It's almost an achievement that they managed to mess this up. But maybe they shouldn't feel too bad - a lot of other companies do mess it up too. Dell XPS. HP Envy. Those are top lines for these brands and they are famous for aggressive throttling under load.
My point is - of course there are laptops on the market that are designed for sustained full load and are completely, absolutely fine with it. I'm just baffled by 1) how this can not be obvious and 2) how it can be hard to think of one use case where that's useful.
I'm just a bit surprised if they manage to squeeze out significantly better thermal performance than Dell does out of a similarly bulky laptop.
Have you actually verified the core frequencies over time with a tool like CPU-z when running a workload that pegs all cores at 100%?
The Precision model I have doesn't stutter or feel any slower under load. I've never managed to make it feel slow, even when running heavy physics simulation codes on all cores in the background. But when you actually monitor frequencies, you see it clocks down by around 20%.
As an example - the CPU in the Razer Blade is an i7-8565U, with a base speed of 1.8GHz and a turbo speed of 4.6GHz. Under maximum load I'll see it jump to 4.6GHz briefly, and then settle at 3.2GHz, where it will remain indefinitely. Sure, the CPU has "throttled" down from its maximum turbo speed, but it's stable at 3.2GHz on default cooling. In comparison, I used to own an MSI GT63R with a quad-core i7 (2630QM, if I remember correctly) and that CPU would "throttle" by regularly falling down to 400-600MHz(!!!) because its cooling was overwhelmed. It was not "stable" at either its base or its turbo speed. That behaviour still happens (in the mentioned Air, or an XPS 15, for instance) but there are definitely laptops which don't do that at all.
Intel will publish, as you say, a base speed of 1.8 GHz, then a single-core Turbo speed of 4.6 GHz and also an all-core Turbo speed of 4.2 GHz (numbers made up, but something like this). If sufficient cooling is available, the CPU should be able to sustain the all-core Turbo number indefinitely. If it can't, I call that throttling. It can be mild (if you go from 4.2 to 3.2) or severe (if you go from 4.2 to 1.8). A colleague has the XPS 15 (well, the Precision equivalent) and he's never seen it drop below base clock; the problem with it is that base is something ridiculously low like 1.2 GHz. If a machine drops below base frequency due to thermal issues, it has been designed very badly.
Our workstations with water cooling run at maximum all-core Turbo freq. for days on end. Those CPUs do exceed the specified TDP when doing so, which is fine as long as the cooler can easily dissipate that heat. And you can get water coolers that support 500W TDP, so no worries.
The only guarantee Intel makes is that the processor will stay within TDP when running at the base clock. What's happening when your laptop goes briefly to 4.2 GHz is that it exceeds both the Intel-stated TDP and the cooling system TDP. Then it throttles back to 3.2 GHz, which is a little below the cooling system TDP but above the processor TDP. In a laptop like the XPS 15, the cooling system TDP is only a little higher than Intel-stated TDP.
The momentary thermal headroom between what the CPU puts out at max Turbo and the cooling system TDP is provided by the heat capacity of the metal in the heatsink/heatpipe.
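To put rough numbers on that headroom (all figures below are illustrative assumptions, not measurements of any particular machine): suppose the heatsink and heatpipes hold about 200 g of copper (specific heat roughly 385 J/(kg·K)) and are allowed to rise 30 K before the firmware pulls the clocks back, while the CPU at max Turbo draws 45 W against a cooler that can continuously dissipate 30 W:

    E ≈ m · c · ΔT ≈ 0.2 kg × 385 J/(kg·K) × 30 K ≈ 2.3 kJ
    t ≈ E / (P_cpu − P_cooler) ≈ 2300 J / (45 W − 30 W) ≈ 150 s

So you'd expect a couple of minutes of full Turbo before the clocks settle at whatever the cooler can sustain, which is roughly the burst behaviour people report.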
Now, my work's 2018 15" MBP is suffering massive overheating problems and throttling down just on some light work + YouTube.
It's nice having a laptop with nvidia so I can test and develop software even if real models are run on big iron.
I'm optimistic about Nvidia eGPUs, but it will be a while before things smooth out.
The furrow followed free
You may first need to unload any loaded nvidia modules (built for an older kernel), i.e. some combination of "rmmod nvidia_modeset", "rmmod nvidia_uvm", "rmmod nvidia_drm" and "rmmod nvidia", and then run dkms.
I run a ~1000-node server room for a computer science graduate program at a university. Keeping these drivers built and loaded properly has been a nightmare! Nvidia really needs to get things worked out if they want to keep pushing the GPGPU stuff.
First an optional prerequisite, for updated video drivers:
sudo apt-get install -y software-properties-common &&
sudo add-apt-repository -y ppa:graphics-drivers/ppa && sudo apt-get update &&
sudo apt-get install -y nvidia-driver-NNN
This is sometimes a required prerequisite, because these drivers have 32-bit and 64-bit binaries in them, whereas the ones from nvidia's website or the normal apt packages only have the 64-bit drivers. (E.g., it's a requirement for Steam and many video games, which will suddenly stop working when CUDA is installed.)
Then there is CUDA itself:
sudo apt-get install -y gnupg2 curl ca-certificates &&
curl -fsSL https://developer.download.nvidia.com/compute/cuda/repos/ubu... | sudo apt-key add - &&
sudo echo "deb https://developer.download.nvidia.com/compute/machine-learni... /" > /etc/apt/sources.list.d/nvidia-ml.list &&
sudo apt-get update &&
sudo apt-get install -y nvidia-cuda-toolkit libcudnn7 libcudnn7-dev
(If also using cudnn.)
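Once that's installed, a quick way to confirm the toolkit and driver agree with each other is to compile a trivial program against the runtime API (the filename and build line here are just illustrative):

    // check_cuda.cu - sanity check; build with: nvcc check_cuda.cu -o check_cuda
    #include <cstdio>
    #include <cuda_runtime.h>

    int main() {
        int n = 0;
        cudaError_t err = cudaGetDeviceCount(&n);   // fails if the kernel module isn't loaded
        if (err != cudaSuccess) {
            printf("CUDA error: %s\n", cudaGetErrorString(err));
            return 1;
        }
        for (int i = 0; i < n; ++i) {
            cudaDeviceProp p;
            cudaGetDeviceProperties(&p, i);
            printf("Device %d: %s (compute %d.%d)\n", i, p.name, p.major, p.minor);
        }
        return 0;
    }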
Also note: NVIDIA currently doesn't officially support Ubuntu 19, but their 18.04 repo works perfectly for 19. In the future you can always try grabbing from https://developer.download.nvidia.com/compute/cuda/repos/ubu... instead.
Edit: If the article were more a burying the lede kind of thing, as sometimes happens with press releases, then the original title would be misleading and it would arguably be right to change it. But that seems unlikely here?
Edit: I suppose we could switch the URL to an article like https://gizmodo.com/apple-and-nvidia-are-over-1840015246 if this really is the only story here.
(title at the time of my comment was "CUDA Toolkit Release Notes")
Except, of course, according to Apple’s history regarding the matter, it won’t.
God, I really hope AMD goes out of the deal with Apple as well. I'd rather people abandon macOS early than have to put up with Apple imposed implementations like Metal, HLS and Webkit.
At the end of the day, Apple should realize that having money alone does not make you, by any means, a popular company. It's the little things that make people sign up for Apple.
I don't think the general crowd will adapt itself to Apple's stack unless it is radically better.
Until Apple gets its developers to use what it makes by enforcing it through its App Store - the way it pushed people toward WebKit - it won't have takers.
It is fine if they want to kill off features on upcoming hardware, but killing off the capacity to use something that has been in use for months is not the best look. If Metal-only was the goal, add a prompt when someone enables said web drivers warning that they're going off the reservation and that they assume the risk.
These release notes seem to be Nvidia conceding that they're not going to be able to resolve this dispute, and that High Sierra is the last release where all but a couple of really old Nvidia GPUs work.
I bought a lower-spec MBP with the expectation of using eGPUs (in the hope Apple and Nvidia would make up), but it's good to see it confirmed that it won't be worthwhile.
It basically means heavy GPU-based work is pretty much a Linux/Windows-only thing now. That's a huge problem as machine learning starts to become a topic of interest for people in the art and design community.
Out of curiosity, what Mac apps are you and other professionals using for art & design? Being a part time artist & ex-Mac person, I thought the stereotype of Mac being an artist/designer’s machine was almost gone. Most professional artists I know have had to switch away from Mac. To be fair, most artists I know are doing 3d and need GPUs. But I’m honestly wondering what else is keeping people on Mac and how well the platform is serving professional designers and artists today.
Thus I personally would like to either have the title "CUDA Toolkit Release Notes: macOS support to be dropped" or link to the click-baity Gizmodo article (which is missing the CUDA aspect).
Would have to agree here, too. Part of the problem is that there's no obvious indicator to show that the title has changed (some sort of "post title history" feature would be useful here), which then leaves you (as a latecomer to the article) confused as to what's remarkable about something which would normally perhaps be considered mundane, and confused as to the context against which most previous comments were made.
It seems like it has gone from something rare, only used in very clear-cut cases, to something you use daily.
This summarizes it well.
Nvidia spent money building the superior SDK, and that's why CUDA is an important piece of infrastructure. Apple half-heartedly supported OpenCL, then killed it, then pushed Metal, and all this time didn't ship a computer that could even take advantage of this power.
Closed tech is bad but Apple killed off the only real open competitor and replaced it with something even more closed.
By the time they took steps to embrace other languages, the ship had already sailed.