
Pushing a Trillion Row Database with GPU Acceleration - rbanffy
https://www.nextplatform.com/2017/04/26/pushing-trillion-row-database-gpu-acceleration/
======
anigbrowl
Those people who said GPUs would never be any use for general-purpose
computing tasks have been looking a bit silly lately. Perhaps they had
forgotten about the advantages of math coprocessors in earlier hardware
architectures.

~~~
pcwalton
Unfortunately, despite GPUs being ubiquitous, they get very little use in the
desktop/mobile space, aside from games. Apps use them as little more than
blitters, if they even do that much. There's a ton of computing power we're
leaving on the table.
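
For a sense of how little code it takes to tap that power, here's a minimal
sketch (assuming a CUDA-capable GPU with the CuPy library installed; the
workload is arbitrary):

    import cupy as cp

    # The array lives in GPU memory; the elementwise math and the reduction
    # both execute on the GPU rather than the CPU.
    x = cp.arange(10_000_000, dtype=cp.float32)
    y = 2.0 * x + 1.0
    total = float(y.sum())  # the scalar result is copied back to the host
    print(total)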

Of course, improving this is a big opportunity for innovation, so perhaps it's
not so unfortunate after all :)

~~~
saganus
I wonder if at some point (hopefully not that far from now) there will be
some breakthrough or development that allows FHE on GPUs.

I'm guessing that could be a front where the unused GPU power in
desktop/mobile gets taken advantage of, besides games.

I could imagine some sort of algorithm/driver/framework plus a protocol that
would let remote services run queries over your personal files (without ever
seeing their contents), accelerated by the client's GPU.

Obviously I'm talking out of my behind here, but I hope maybe someday someone
will figure it out.
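
To make that concrete, here's a toy sketch of the underlying property
(textbook Paillier, which is only additively homomorphic, so far weaker than
full FHE; the tiny hardcoded primes are wildly insecure, and nothing here
touches the GPU, it just shows computing on data the server can't read):

    import math, random

    p, q = 1789, 1999                  # toy primes; real keys use ~1024-bit primes
    n, n2 = p * q, (p * q) ** 2
    g = n + 1                          # standard simplified choice of generator
    lam = math.lcm(p - 1, q - 1)
    mu = pow(lam, -1, n)               # decryption constant; valid because g = n + 1

    def encrypt(m):
        r = random.randrange(1, n)
        while math.gcd(r, n) != 1:     # r must be invertible mod n
            r = random.randrange(1, n)
        return (pow(g, m, n2) * pow(r, n, n2)) % n2

    def decrypt(c):
        return ((pow(c, lam, n2) - 1) // n) * mu % n

    a, b = encrypt(20), encrypt(22)
    assert decrypt(a * b % n2) == 42   # multiplying ciphertexts adds plaintexts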

~~~
skdotdan
Excuse my ignorance, but what do you mean by FHE?

~~~
grzm
Likely Fully Homomorphic Encryption, e.g.,

"Accelerating fully homomorphic encryption using GPU"
http://ieeexplore.ieee.org/document/6408660/

~~~
saganus
Aha!

Very nice. This looks interesting. Even though I probably won't understand it,
at least it means I wasn't that far off.

Hopefully we will see more of this sooner rather than later. FHE is one of
those things that I think would be a game changer if it's cheap enough to use
broadly.

------
ddorian43
No GPU-accelerated search engine yet?

~~~
deepnotderp
Google uses neural nets extensively for search; those run on TPUs. Bing uses
FPGAs for acceleration, presumably also for neural networks.

Both are more efficient than a GPU.

~~~
arcanus
That is for inference. The neural networks are trained using GPUs.

~~~
deepnotderp
Well yes, but I would assume this case is for inference, since models are
pretty much trained once and then deployed everywhere (granted,
hyperparameter tuning is still a consideration).

