
CUDA 11.0 - ksec
https://docs.nvidia.com/cuda/cuda-toolkit-release-notes/index.html#cuda-whats-new
======
usmannk
I noticed CUDA 11.0 was almost ready for release last week when I went to
install CUDA and the default download page linked to the 11.0 Release
Candidate. The 10.1 and 10.2 links were buried behind a link off to the side
labeled "legacy". The thing is, no library you use is going to be supporting
the CUDA 11.0 RC, that's ridiculous. For example, Pytorch stable is on 10.2
and Tensorflow only goes up to 10.1.

This is generally indicative of how poorly organized the CUDA documentation
and installation instructions are. The Conda dependency manager has made this
a lot easier recently, e.g. by providing prebuilt pytorch binaries. Though if
you want to use packages like NVIDIA Apex for mixed-precision DL[0] you're in
for a huge headache trying to compile torch from source while also managing
your CUDA and nvcc versions, which sometimes must match but sometimes
cannot![1]

[0] Yes, I'm aware that Apex was very recently brought into torch but it seems
that the performance issues haven't been ironed out yet.

[https://stackoverflow.com/questions/53422407/different-cuda-...](https://stackoverflow.com/questions/53422407/different-cuda-versions-shown-by-nvcc-and-nvidia-smi)
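
A quick way to see the mismatch footnote [1] describes: nvcc reports the
toolkit you compile with, while nvidia-smi reports the newest CUDA version the
installed driver supports, and the two can legitimately differ. A minimal
sketch that prints both (it assumes only that the tools, if installed, are on
PATH; output formats vary by install):

```python
import shutil
import subprocess

def cuda_versions():
    """Collect raw version output from nvcc (toolkit) and nvidia-smi (driver).

    nvcc --version shows the toolkit you compile against; nvidia-smi's header
    shows the highest CUDA version the driver supports. A mismatch is normal
    as long as the driver's supported version is >= the toolkit's.
    """
    reports = {}
    for tool, args in (("nvcc", ["nvcc", "--version"]),
                       ("nvidia-smi", ["nvidia-smi"])):
        if shutil.which(tool) is None:
            reports[tool] = None  # not installed, or not on PATH
        else:
            reports[tool] = subprocess.run(
                args, capture_output=True, text=True).stdout
    return reports

for tool, out in cuda_versions().items():
    summary = out.splitlines()[0] if out else "not found"
    print(tool, "->", summary)
```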

~~~
jjoonathan
Yeah, and the CUDA 10.0 official Visual Studio demo project build was broken
for... looks like a year, at least, because they didn't want to populate the
toolkit path. NVidia, you're better than this.

[https://forums.developer.nvidia.com/t/the-cuda-toolkit-v10-0...](https://forums.developer.nvidia.com/t/the-cuda-toolkit-v10-0-directory-does-not-exist/65821/6)

> The Conda dependency manager has made this a lot easier

Yeah but conda is "Let's do dependency management with a SAT solver, it'll be
great!" On a good day, it's just slow. On a bad day, the SAT solver spins for
hours before failing to converge. On a really bad day, the SAT solver does
something "clever."

I've had a couple of really bad days this year. I'm really starting to not
like conda very much.

~~~
pjc50
> "Let's do dependency management with a SAT solver, it'll be great!"

Debian managed something like this over 20 years ago in dpkg. But somehow
people must keep reinventing the wheel.

~~~
jjoonathan
I thought the debian SAT solver was a maintainer tool rather than something
that ran every time? In any case, conda's implementation is really quite awful
by comparison and they would have been well served by copying something that
works instead of building something that doesn't.

------
infairverona
Every time I have to deal with multiple versions of CUDA on Linux I feel like
poking my eyes out. I get that supporting developer libraries that have to
interact with hardware is hard but come on...

~~~
zelly
For something this popular it shouldn't be so hard. I don't think being
related to hardware is an excuse. CUDA is not a driver and exists entirely in
userspace.

This is the kind of thing that happens when you're dealing with a monopoly.

~~~
blueblisters
The economic incentive is simple: open-sourcing the driver will allow an open-
source API to interact with the hardware, allowing AMD/other competitors to
support the same API. So instead of competing at the silicon level, Nvidia
chooses to set up unnecessary barriers to entry at massive cost to
developers/users.

Like Torvalds says [1]: Fuck You, Nvidia.

[1]:
[https://www.youtube.com/watch?v=iYWzMvlj2RQ](https://www.youtube.com/watch?v=iYWzMvlj2RQ)

~~~
mpfundstein
unnecessary? not in the eyes of the shareholders. just compare NVIDIA's stock
surge with AMD's performance. they protect their market and they do it pretty
well.

the Linus video is awesome though :-) And I totally understand his sentiment

------
ziddoap
>cuFFT now accepts __nv_bfloat16 input and output data type for power-of-two
sizes with single precision computations within the kernels.

This exact sentence is listed both under "New Feature" and "Known Issues". I'm
not super familiar with CUDA stuff, but, it can't be both right?

~~~
usmannk
Looks like a mistake, should only be in New Features.

~~~
bjornsing
So a known issue in the known issues?

~~~
willwill100
A known known

------
cjhanks
Does anyone understand why such minor upgrades resulted in a major version
bump? Is this some sort of stability check point? Or some other versioning
convention?

~~~
einpoklum
Well, I think a new microarchitecture means a major bump. So between that and
version bumps due to actual major software features, you get to 11 within 13
years or so.

Also, GCC 9.x compatibility may seem minor to some, but is significant for
others. I also think there's some C++17 support in kernels - that's something
too.

~~~
cjhanks
Ooh, I missed those. Support for C++17 is pretty major. Thanks. Perhaps my
memory is fuzzy, I just remember the CUDA 9->10 switch having some significant
(but not major) performance and feature changes.

------
ykl
Interesting that Fedora support seems to have been dropped. Anyone know why
that might be?

Edit: oh wait I think I see. Latest supported gcc for CUDA 11 is gcc 9.x, but
I think latest Fedora is on gcc 10.
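
The gcc ceiling is easy to check for yourself. A small sketch, where the gcc
9.x limit is an assumption taken from this thread rather than queried from
nvcc:

```python
import re
import shutil
import subprocess

# Ceiling discussed above: CUDA 11.0 documents gcc 9.x as the newest
# supported host compiler (later releases raised this limit).
MAX_SUPPORTED_GCC = 9

def host_gcc_major():
    """Return the major version of the gcc on PATH, or None if gcc is absent."""
    if shutil.which("gcc") is None:
        return None
    out = subprocess.run(["gcc", "-dumpversion"],
                         capture_output=True, text=True).stdout.strip()
    match = re.match(r"\d+", out)
    return int(match.group()) if match else None

major = host_gcc_major()
if major is None:
    print("gcc not found on PATH")
elif major > MAX_SUPPORTED_GCC:
    print(f"gcc {major} exceeds CUDA 11.0's gcc {MAX_SUPPORTED_GCC}.x limit;"
          " try pointing nvcc at an older compiler with -ccbin")
else:
    print(f"gcc {major} is within CUDA 11.0's supported range")
```

nvcc's -ccbin option lets you point it at an older host compiler without
changing the system default.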

~~~
tw04
I'm guessing it was an oversight in the table given they still have all the
fedora installation instructions on the install page:

[https://docs.nvidia.com/cuda/cuda-installation-guide-linux/i...](https://docs.nvidia.com/cuda/cuda-installation-guide-linux/index.html#system-requirements)

------
Yajirobe
> Added support for Ubuntu 20.04 LTS on x86_64 platforms.

Huh? I've been using CUDA for a while now on my Ubuntu 20.04 machine

~~~
hobofan
That probably just means that they are testing that platform in CI now, and
are officially supporting it (taking bug reports into account, etc.).

------
muska3
I'd like to play with CUDA, but I just got a new laptop without an Nvidia
GPU, coming from one that had a built-in Nvidia GPU. It's got a Thunderbolt
port, but unfortunately most of the GPUs are quite expensive at around $400.
Does anyone know any cheaper options?

~~~
paraselene_
If you just want to try for a few hours, you can add GPU(s) onto a GCP
Compute Engine instance. Along with the trial credits it should get you a few
hours poking around with CUDA.

Otherwise, get a pre-owned GTX 950 (one that doesn't require an external
power supply) and a TB3 to PCI-E x16 adapter. Not an enclosure, an adapter.
Should cost you around $200 all in, IIRC. And it allows you to upgrade the
card further down the line since most of the cost is in the adapter.

~~~
muska3
Do you know what exactly I should search for when looking for a "TB3 to PCI-E
x16 adapter"? Will this utilize all thunderbolt lanes available? I've got a
newer laptop that, I believe, has all lanes available.

------
chewxy
oh great. More chasing to do. Anyone interested in working on the CUDA
integration for Go ([https://gorgonia.org/cu](https://gorgonia.org/cu))? PRs
welcome, as I am quite short on time.

------
lihan
Is there Mac OS support?

~~~
desertrider12
Nope. That's mostly on Apple though, as they discourage all APIs that aren't
Metal.

~~~
suyash
Apple needs to build a competing standard to CUDA.

~~~
desertrider12
Apple doesn't directly compete with CUDA, they just want total control of
their platforms. Metal does have great performance and tooling. In practice
nobody does HPC on macs so there's no demand for linear algebra or graph
libraries, which are a big selling point for Nvidia over AMD.

The fact that Apple is trying to kill OpenGL and OpenCL and block Vulkan
definitely sucks though for anyone trying to do indie games, or open source
ML/HPC.

~~~
suyash
Have you looked at the OpenGL APIs? They are a strange legacy beast. I think
Apple does what's best for its users, even if it has to create a new
standard. How about other companies adopting the modern Metal APIs instead?

