Early on, when we first started playing around with general-purpose computing on GPUs, we had Nvidia cards, so I started looking at the APIs that were available to me.
The CUDA ones were easier for me to get started with, had tons of learning content provided by Nvidia, and were more performant on the cards I had at the time compared to the other options. So we built up a lot of expertise in this specific way of coding for GPUs. We also found, time and again, that CUDA was faster than OpenCL for what we were trying to do, and the hardware available to us on cloud providers was Nvidia GPUs.
The second answer to this question is that BlazingSQL is part of a greater ecosystem, RAPIDS (rapids.ai), whose largest contributor by far is Nvidia. We are really happy to be working with their developers to grow this ecosystem, and that means the technology will probably be CUDA-only unless we somehow write pluggable "backends" like they did with Thrust, but that would be eons away from now.
> We also found, time and again, that CUDA was faster than OpenCL for what we were trying to do, and the hardware available to us on cloud providers was Nvidia GPUs.
Were any benchmarks done, or could you provide some more low-level reasons why CUDA was more performant? I'm not experienced with CUDA, just generally interested.
I also have to say that I am a bit skeptical of Nvidia, as I have never received proper support for Linux development on Nvidia GPUs, whether for drivers or for tracking bugs on their cards. It was so frustrating that I just switched to AMD GPUs that "just worked". How is this different for these kinds of use cases? Does Nvidia only care about their potential enterprise customers and not about general usage of their GPUs on Linux? It rubs me the wrong way, and I don't understand it.
Nvidia loves and cherishes you (I think; I don't work there). They want you to be able to do this on your laptop, your server, your supercomputer.
If it has been a few years, I would encourage you to get your feet wet again, because support has gotten a lot better. It's not like five years ago, when it was nigh impossible to get the driver installed and weird conflicts would come up. I generally recommend using the Debian installer if that works for you. RAPIDS is meant to make data science at scale accessible to people. If you have trouble with CUDA, drop by https://rapids-goai.slack.com . There are many people there who are willing to help.
Do you use Nvidia products on Linux? Reading "love" and "Nvidia" in the same sentence feels a little odd, because the general sentiment toward Nvidia in the Linux community is "don't touch it with a ten-foot pole". If I remember correctly, Torvalds himself called it the worst hardware company they had to deal with.
I'm not sure what you're talking about. Games aside, Linux has been the de facto OS for anything serious done with CUDA for almost as long as CUDA has existed. What exactly is the problem with it?
I think this sentiment exists solely among people who don't actually own any NVIDIA hardware. I've never had any problems with their drivers, and crashes in video games can usually be attributed, at least in part, to the game itself. In contrast to Windows, Linux has abysmal support for restarting crashed video drivers.
Linus Torvalds's kernel-developer point of view might be very different from that of the majority of users. End users just need to install Nvidia's proprietary drivers, and everything just works.
For a long time, Nvidia was the best option for 3D graphics on Linux. ATI/AMD had terrible drivers (fglrx/Catalyst), Intel had abysmal performance.
The proprietary drivers are pretty nice and performant, and have been for a long time. The same can't be said about Intel (they don't produce comparable hardware) or AMD (until recently their drivers were garbage, and at the moment their best graphics card is worse than the best NVIDIA one).
With nvidia-docker (a multi-year effort at this point) and AMIs, esp. in the era of ML, this is a non-issue for 80% of our users. The other 20% struggle even without the GPUs. ML is a thing and GPUs run it, so the community has come together here.
Linux laptops remain a mess in general tho, which is annoying for non-cloud dev =/
Well, it pretty much always was a part of the ecosystem; it just was not open source. We have been contributors to RAPIDS for a while. And yes, we are betting on Nvidia for sure.
Most people building GPGPU solutions are going to have to make a decision about which hardware they want to support. After that decision is made, it really isn't something you can revisit without copious amounts of money.
So, the part that confuses me with this argument is that we live in an Intel world where they have 98% market share in servers. We're already at the whim of a single company. Why not challenge that dominance?
Not the same. Two companies make x86 processors, and in the very specific case of this article/comment thread, more than one company supports OpenCL. Nvidia/CUDA is a one-pony show, no matter how you look at it.
That seems like a pretty good reason... I have been looking to learn some GPU programming to optimize some matrix math I've been doing for a pet project. My first instinct was telling me OpenCL since it's portable, but if people who actually know what they're talking about are saying that CUDA is simpler to start with, it might be worth it to me to pick up a cheap Nvidia GPU or Jetson Nano and do some processing that way.
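To make the "simpler to start with" point concrete, here is a minimal sketch of a naive single-precision matrix multiply in CUDA, roughly the first program you might write on a cheap card or a Jetson Nano. The sizes and the use of unified memory (cudaMallocManaged) are illustrative choices, not a tuned implementation:

    #include <cstdio>
    #include <cuda_runtime.h>

    // Naive C = A * B for square n x n matrices: one thread per output element.
    __global__ void matmul(const float* A, const float* B, float* C, int n) {
        int row = blockIdx.y * blockDim.y + threadIdx.y;
        int col = blockIdx.x * blockDim.x + threadIdx.x;
        if (row < n && col < n) {
            float acc = 0.0f;
            for (int k = 0; k < n; ++k)
                acc += A[row * n + k] * B[k * n + col];
            C[row * n + col] = acc;
        }
    }

    int main() {
        const int n = 512;                       // illustrative size
        size_t bytes = (size_t)n * n * sizeof(float);
        float *A, *B, *C;
        // Unified memory keeps the example short; explicit cudaMalloc +
        // cudaMemcpy is the more common pattern in real code.
        cudaMallocManaged(&A, bytes);
        cudaMallocManaged(&B, bytes);
        cudaMallocManaged(&C, bytes);
        for (int i = 0; i < n * n; ++i) { A[i] = 1.0f; B[i] = 2.0f; }

        dim3 block(16, 16);
        dim3 grid((n + block.x - 1) / block.x, (n + block.y - 1) / block.y);
        matmul<<<grid, block>>>(A, B, C, n);
        cudaDeviceSynchronize();

        printf("C[0] = %.1f (expect %.1f)\n", C[0], 2.0f * n);
        cudaFree(A); cudaFree(B); cudaFree(C);
        return 0;
    }

Build with nvcc and run. A serious kernel would tile A and B through shared memory, which is where most CUDA tutorials go next, and for production matrix math you would call cuBLAS instead of hand-rolling this.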
Even if you choose OpenCL, the tools (profiler, debugger, etc.) are usually platform-specific. In addition, my experience with OpenCL across platforms was that each vendor's compiler had distinct issues and that performance was not portable.
I get the appeal of an open API, but OpenCL never grew a development ecosystem or any libraries. IMO it is dying and isn't worth the effort. AMD is implementing the CUDA programming model with HIP; maybe roll with that.
You definitely do not want to use OpenCL for matrix multiplies on Nvidia cards. Matrix multiply is the most highly optimized task on their GPUs, so much so that recent cards have dedicated hardware units (tensor cores) for it. OpenCL cannot take advantage of those.
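To illustrate that point: the idiomatic CUDA answer for matrix multiply is not to hand-write a kernel at all but to call cuBLAS. A minimal sketch, with illustrative sizes and unified memory for brevity:

    #include <cstdio>
    #include <cublas_v2.h>
    #include <cuda_runtime.h>

    int main() {
        const int n = 512;                       // illustrative size
        size_t bytes = (size_t)n * n * sizeof(float);
        float *A, *B, *C;
        cudaMallocManaged(&A, bytes);            // unified memory for brevity
        cudaMallocManaged(&B, bytes);
        cudaMallocManaged(&C, bytes);
        for (int i = 0; i < n * n; ++i) { A[i] = 1.0f; B[i] = 2.0f; }

        cublasHandle_t handle;
        cublasCreate(&handle);
        const float alpha = 1.0f, beta = 0.0f;
        // Plain FP32 GEMM; cuBLAS is column-major, which doesn't change the
        // result for these uniform fill values. Half-precision and TF32
        // variants are where the tensor cores kick in on Volta and newer.
        cublasSgemm(handle, CUBLAS_OP_N, CUBLAS_OP_N, n, n, n,
                    &alpha, A, n, B, n, &beta, C, n);
        cudaDeviceSynchronize();

        printf("C[0] = %.1f (expect %.1f)\n", C[0], 2.0f * n);
        cublasDestroy(handle);
        cudaFree(A); cudaFree(B); cudaFree(C);
        return 0;
    }

Link with -lcublas. That library, and the dedicated hardware it targets, is exactly what you have no route to from OpenCL.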