Big joins are our best use case. Joins are hard for many databases to optimize when they have not seen them before or are not "expecting" them, as when you tell Amazon how to partition your tables onto the same physical machines so that Redshift can return your query in a reasonable amount of time. But most SQL operations can be accelerated by GPUs: ORDER BY (holy smokes it helps), arithmetic and date transformations (20-30x over comparable CPU code), predicates, GROUP BY. All of these operations happen over vectors of data, and SIMD hardware rocks at running these kinds of loads. The only use case we think is very poorly suited to GPUs so far (and this is a nut someone will probably crack one day) is wildcard string searches. Some of our competitors handle this by caching all the data in GPU RAM, but we consider that "cheating," since you could never justify the PCIe transfer cost just to do wildcard string searches.
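To make the "vectors of data" point concrete, here is a rough sketch in NumPy, which stands in for what a GPU kernel does across thousands of lanes at once. The table, column names, and values are made up for illustration; a real columnar engine works the same way, just on billions of rows:

```python
import numpy as np

# Toy columnar table: each column is a contiguous vector, which is
# exactly the layout SIMD units and GPU kernels are built for.
amount = np.array([12.0, 7.5, 30.0, 4.2, 18.0])
region = np.array([0, 1, 0, 1, 0])   # dictionary-encoded group keys

# Predicate: one comparison per element, no per-row branching.
mask = amount > 10.0                 # WHERE amount > 10

# Arithmetic transform applied to the whole vector at once.
doubled = amount * 2.0

# GROUP BY region, SUM(amount) over the filtered rows.
sums = np.bincount(region[mask], weights=amount[mask], minlength=2)

print(mask.tolist())   # [True, False, True, False, True]
print(sums.tolist())   # [60.0, 0.0]
```

Every step above is a data-parallel pass over a column, which is why these operations map so cleanly onto SIMD lanes and GPU threads, while a wildcard string search over variable-length data does not.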
So why not put GPUs on your analytics machines? Or a cluster of them with Spark? Or, heck, distribute the Spark cluster on top of your database?