Hacker News

Seems like a smarter response to me. Go ahead and use a GPU database if you want; I have yet to see a good use case for one, though.



We have seen many.

Big joins are our best use case. Joins are hard for many databases to optimize when they have not seen them before or are not "expecting" them, as when you have to tell Amazon how to partition your different tables onto the same physical machines so that Redshift can return your query in a reasonable amount of time. But most SQL operations can be accelerated by GPUs: ORDER BY (holy smokes, it helps), arithmetic or date transformations (20-30x over comparable CPU code), predicates, GROUP BY. All of these operations run over vectors of data, and SIMD hardware excels at these kinds of loads. The only use case we have found so far that is very poorly suited to GPUs (and this is a nut someone will probably crack one day) is wildcard string searches. Some of our competitors handle this by caching all the data in GPU RAM, but we consider that "cheating", since you could never justify the PCIe transfer cost just to run wildcard string searches.
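As a rough illustration of why these operations map well to vector hardware, here is a minimal NumPy sketch (CPU-side; the column names and data are made up, not from the post) showing a predicate, an arithmetic transform, and a GROUP BY aggregate each expressed as a single whole-column pass, which is the same data-parallel shape a GPU kernel would execute:

```python
import numpy as np

# Hypothetical columnar data: one array per column, as an analytical
# database would lay it out. Names and values are illustrative only.
amount = np.array([10.0, 250.0, 75.0, 500.0, 30.0])
region = np.array([0, 1, 0, 1, 0])  # dictionary-encoded group keys

# Predicate: evaluated over the whole vector at once (SIMD-friendly).
mask = amount > 50.0

# Arithmetic transform: applied element-wise to every surviving row.
taxed = amount[mask] * 1.08

# GROUP BY + SUM: a scatter-add keyed by group, again one data-parallel pass.
sums = np.zeros(2)
np.add.at(sums, region[mask], taxed)

print(sums)  # per-region totals of the taxed amounts
```

None of these steps has any cross-row control flow, which is why a GPU (or any wide SIMD unit) can chew through them; a wildcard string search, by contrast, involves irregular per-row work that doesn't vectorize as cleanly.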


This seems more like an analytics workload: you are querying a bunch of things you can't index ahead of time.

Why not put GPUs on your analytics machines? Or a cluster of them with Spark. Or, heck, distribute the Spark cluster on top of your database.


We are an analytical database, not a transactional one. We would love to integrate with more tools like Spark; alas, we are a small team of five, and building the engine itself has taken most of our time to date.



