felipe_aramburu's comments (Hacker News)

This is actually going to be pretty sick for us at BlazingSQL. We are freaking frothing at the chance to scale beyond system and GPU memory without having to shove data through pipes that are no longer needed.

RDMA will be a necessary building block for most of us that are working on HPC solutions and the fact that now we can directly access fast persistent storage from a GPU is going to allow us to scale several orders of magnitude bigger than we currently can.


Have you compared performance between your suggested solutions and what can be achieved using hardware vendor platforms? If not, then what's kind of pathetic is how quickly you dismiss the people above who say they HAVE done this before.

If you have seen something we have not when it comes to performance then please by all means share it so we can learn!


Thanks Leo! We love having you all as early adopters of our tech!


GPU memory is expensive, but a big as #@$% computer is even more expensive. When we show comparisons to things like Spark, we do so on a cost basis. So if we say we are X times faster than some technology on some workload, what we did was launch clusters that have similar costs. Total cost of ownership is also reduced by the fact that the engine itself is totally ephemeral. You can turn it off and on within seconds.


That's a great question. The answer is two-fold.

Early on, when we first started playing around with general-purpose computing on GPUs, we had Nvidia cards to begin with, and I started looking at the APIs that were available to me.

The CUDA ones were easier for me to get started with, had tons of learning content that Nvidia provided, and were more performant on the cards I had at the time compared to other options. So we built up lots of expertise in this specific way of coding for GPUs. We also found time and time again that it was faster than OpenCL for what we were trying to do, and the hardware available to us on cloud providers was Nvidia GPUs.

The second answer to this question is that BlazingSQL is part of a greater ecosystem, RAPIDS (rapids.ai), whose largest contributor by far is Nvidia. We are really happy to be working with their developers to grow this ecosystem, and that means the technology will probably be CUDA-only unless we somehow program "backends" like they did with Thrust, but that would be eons away from now.


> We also found time and time again that it was faster than opencl for what we were trying to do and the hardware available to us on cloud providers was Nvidia GPUs.

Were some benchmarks done perhaps or could you provide some more low-level reasons as to why CUDA was more performant? I'm not experienced with CUDA, just generally interested.

I also have to say that I am a bit skeptical of Nvidia as I have never received any proper support for Linux development on Nvidia GPUs for drivers and generally tracking bugs on their cards. It was so frustrating that I just switched to AMD GPUs that "just worked". How is this different for these kinds of use cases? Does Nvidia only care about their potential enterprise customers but they don't care about general usage of their GPUs on Linux? It seems to rub me the wrong way and I don't understand.


Nvidia loves and cherishes you (I think; I don't work there). They want you to be able to do this on your laptop, your server, your supercomputer.

If it has been a few years, I would encourage you to get your feet wet again, because support has gotten a lot better. It's not like 5 years ago, when it was nigh impossible to get the driver installed and weird conflicts would come up. I generally recommend using the Debian installer if that works for you. RAPIDS is meant to make data science at scale accessible to people. If you have trouble with CUDA, drop by https://rapids-goai.slack.com . There are many people there willing to help.


Do you use Nvidia products on Linux? Reading "love" and "Nvidia" in the same sentence feels a little odd, because the general sentiment toward Nvidia in the Linux community is "don't touch it with a 10 foot pole". If I remember correctly, Torvalds himself called it the worst hardware company they had to deal with.


I'm not sure what you're talking about. Games aside, Linux has been the de facto OS for anything serious with CUDA for almost as long as CUDA has existed. What exactly is the problem with it?


I think this sentiment exists solely among people that don't actually own any Nvidia hardware. I've never had any problems with their drivers; any crashes in video games can usually be attributed, at least in part, to the game itself. In contrast to Windows, Linux has abysmal support for restarting crashed video drivers.



Linus Torvalds's kernel developer point of view might be very different from the majority of users'. For the end users, they just need to install Nvidia's proprietary drivers and everything just works.

For a long time, Nvidia was the best option for 3D graphics on Linux. ATI/AMD had terrible drivers (fglrx/Catalyst), Intel had abysmal performance.


>For the end users, they just need to install Nvidia's proprietary drivers and everything just works.

And that's the crux of the issue: proprietary drivers.


Hollywood and other 3D-heavy creation studios are pretty fine with it.


The proprietary drivers are pretty nice and performant, and have been for a long time. The same can't be said about Intel (they don't produce comparable hardware) or AMD (until recently their drivers were garbage, and at the moment their best graphics card is worse than the best Nvidia one).


We exclusively do Nvidia/Linux.

With nvidia-docker (a multi-year effort at this point) and AMIs, especially in the era of ML, this is a non-issue for 80% of our users. The other 20% struggle even without the GPUs. ML is a thing and GPUs run it, so the community has come together here.

Linux laptops remain a mess in general though, which is annoying for non-cloud dev =/


> blazingsql is part of a greater ecosystem

But now blazingsql is part of an ecosystem within a walled garden fully dependent upon the stability of a single company.


Well, it pretty much always was a part of the ecosystem; it just was not open source. We have been contributors to RAPIDS for a while. And yes, we are betting on Nvidia for sure.

Most people building GPGPU solutions are going to have to make a decision about which hardware they want to support. After that decision is made, it really isn't something you can revisit without copious amounts of money.


So, the part that confuses me with this argument is that we live in an Intel world where they have 98% market share in servers. We're already at the whim of a single company. Why not challenge that dominance?


Not the same. Two companies make x86 processors, and in the very specific case of this article/comment thread, more than one company supports OpenCL. Nvidia/CUDA is a one-pony show, no matter how you look at it.


Thanks for answering my question so quickly!

That seems like a pretty good reason. I have been looking to learn some GPU programming to optimize some matrix math I've been doing for a pet project. My first instinct was OpenCL since it's portable, but if people who actually know what they're talking about say CUDA is simpler to start with, it might be worth it to pick up a cheap Nvidia GPU or Jetson Nano and do some processing that way.


The Colab link below lets you use a GPU for free on Google Cloud.


> OpenCL since it's portable

Even if you choose OpenCL, the tools (profiler, debugger, etc.) are usually platform-specific. In addition, my experience with OpenCL across platforms was that each vendor's compiler had distinct issues and that performance was not portable.

I get the appeal of an open API, but OpenCL never grew a development ecosystem or any libraries. IMO it is dying and isn't worth the effort. AMD is implementing the CUDA programming model with HIP; maybe roll with that.


You definitely do not want to use OpenCL for matrix multiplies on Nvidia cards. That's the most highly optimized task on GPUs, so much so that they have dedicated hardware units for it. OpenCL cannot take advantage of those.
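To make the contrast concrete, here is a minimal sketch (not from the thread) of GPU matrix math via CuPy, which dispatches the multiply to Nvidia's cuBLAS library; the fallback import is an assumption added so the snippet also runs on a CPU-only machine with only NumPy installed.

```python
# GPU path if CuPy is available; otherwise fall back to NumPy,
# which exposes the same array API on the CPU.
try:
    import cupy as xp  # xp.matmul / @ runs on the device via cuBLAS
except ImportError:
    import numpy as xp  # CPU fallback for machines without a GPU

# Two random single-precision matrices.
a = xp.random.rand(512, 512).astype(xp.float32)
b = xp.random.rand(512, 512).astype(xp.float32)

# On CuPy this is a single cuBLAS GEMM call executed on the GPU.
c = a @ b
print(c.shape)  # (512, 512)
```

The point of the `xp` alias is that the same code targets either backend; only the CuPy path benefits from the GPU's dedicated matrix hardware.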


This is a distributed SQL engine, not a database. We store no data. You store your data in HDFS, S3, POSIX filesystems, NFS, etc., and we let you query directly from those filesystems in the file formats you already have. You can look here to see the file formats cuDF supports: https://github.com/rapidsai/cudf/tree/branch-0.9/cpp/src/io

You can try it out yourself here https://colab.research.google.com/drive/1r7S15Ie33yRw8cmET7_...

Or use dockerhub https://hub.docker.com/r/blazingdb/blazingsql/

The benefits are:

Greatly increased processing capacity. With the GPUs we are using, we can simply perform orders of magnitude more instructions per second than a CPU.

Decompression and parsing of formats like CSV and Parquet happen on the GPU, orders of magnitude faster than the best CPU alternatives.

You can take the output of your queries, provide it to machine learning jobs with zero-copy IPC, and get the results back the same way. We are all about interoperability with the RAPIDS ecosystem.
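As a rough illustration of the workflow described above, here is a hedged sketch based on BlazingSQL's public `BlazingContext` API; the table name, column names, and S3 path are made up for the example, and the import is kept inside the function since it requires a RAPIDS install with a GPU.

```python
def average_fare(parquet_path="s3://my-bucket/taxi/*.parquet"):
    """Query Parquet files in place and return a cuDF DataFrame."""
    from blazingsql import BlazingContext  # needs RAPIDS + an Nvidia GPU

    bc = BlazingContext()
    # Register the files as a table; the engine stores no data itself.
    bc.create_table("taxi", parquet_path)
    # The result is a cuDF DataFrame living in GPU memory, ready for
    # zero-copy handoff to cuML or other RAPIDS libraries.
    return bc.sql(
        "SELECT passenger_count, AVG(fare_amount) AS avg_fare "
        "FROM taxi GROUP BY passenger_count"
    )
```

Because the result stays on the GPU as a cuDF DataFrame, passing it into a RAPIDS ML job involves no serialization or host round-trip.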


Is there any reason why a SQL source isn't in that list? Wondering if there's a way to join SQL sources with file storage sources. An example of this would be filtering or enrichment operations.

// sorry if this is a stupid question.


When you say SQL format, do you mean being able to read the output of a JDBC or ODBC driver? If so, it's mostly just a matter of time. You are not the first person to ask about this, and now that there are Java bindings in cuDF, this might become easier to make a reality in the next few months.

Or do you mean being able to read a database's file format natively? If so, there are many reasons: 1. There are many poorly documented or undocumented formats. 2. Even if you decide to read some other DB's format natively, those formats change over time. 3. We would have little control over how and where the data is laid out.


Not a stupid question. The reason is priorities, but it's definitely our goal to do predicate pushdown and join databases to files, streams, etc.


I've read the website, but I couldn't find a hint that the engine is distributed. Even the Spark benchmarks compare a single instance with multiple nodes.

Is it distributed? How do I set it up in distributed mode? Does it support nested Parquet (something that even Spark itself struggles to support in SQL)?


Distributed is getting released in the next few days; I've been playing with it over the past week.

Right now we use Kubernetes on Google Kubernetes Engine (GKE) to deploy in distributed mode.

We don't support nested types at present; there are RAPIDS teams looking into this.


BlazingSQL is built on top of cuDF. We are contributors to RAPIDS.


No. We hadn't heard of PartiQL before you wrote that message; at least I hadn't. I am the CTO of BlazingSQL.


Yes. To be more specific, it works on CUDA 9.2 and 10.0 at the moment, like the rest of the RAPIDS ecosystem.


Why not? That would be in line with what these physicians were doing, which was benign actions intended to alleviate suffering.

It seems that the greater point is that instead of reacting to mental illness by telling people, "you are sick and mentally unfit," they tried to alleviate stress and suffering in the moment and then got people to rest. Many mental disorders can be aggravated by stress and lack of sleep, so it may not be surprising that this provided tangible benefits to the individuals treated this way.

