Hacker News
nvzhuangdal on Dec 7, 2017

I really hate this kind of clickbait. In general, GPU databases are great for large vector operations against thousands or millions of entries. However, they perform poorly for typical DB usage: selecting random data and join operations. GPU-based DBs are just another tool that only pays for itself in extremely specialized use cases.
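A rough sketch of that contrast, with plain NumPy standing in for a columnar engine (all names and figures here are illustrative, not from any particular GPU database):

```python
import numpy as np

# Illustrative sketch: GPU databases excel when one operation streams over
# an entire column (coalesced, sequential reads), and suffer when the
# workload is scattered point lookups, as in hash-join probes.
rng = np.random.default_rng(0)
n = 1_000_000
prices = rng.random(n)

# Good fit: "SELECT SUM(price) WHERE price > 0.5" is one vectorized pass
# over contiguous memory -- exactly the shape GPUs are built for.
total = prices[prices > 0.5].sum()

# Poor fit: a join probe touches rows in effectively random order,
# which defeats coalesced memory access on a GPU.
probe_keys = rng.integers(0, n, size=100_000)
probed = prices[probe_keys]
```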

Plus, a top-of-the-line Volta GPU only has 16 GB of memory, whereas I can provision an r4.16xlarge RDS instance with 488 GiB of memory, or even set up my own x1e.32xlarge with 3,904 GiB of memory. That's a 30 to 244 times increase in memory size.
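For what it's worth, those ratios check out against the figures quoted above (taking the 16 GiB V100 variant for the Volta number):

```python
# Capacity ratios from the figures quoted in the comment above.
gpu_gib = 16      # Tesla V100, 16 GiB variant
r4_gib = 488      # r4.16xlarge
x1e_gib = 3904    # x1e.32xlarge

print(r4_gib / gpu_gib)    # 30.5
print(x1e_gib / gpu_gib)   # 244.0
```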

According to STREAM benchmarks, the GPU's random-access memory bandwidth scales relative to its sequential-access bandwidth much like the CPU's does. Is the difference down to the CPU's more sophisticated caching behaviour? Did you actually do any tests with random access on the GPU?
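A CPU-side sketch of the kind of test being asked about, timing the same volume of reads sequentially versus through a random index (NumPy as a stand-in; a real GPU measurement would need a CUDA kernel):

```python
import time
import numpy as np

rng = np.random.default_rng(1)
n = 10_000_000
data = rng.random(n)
idx = rng.permutation(n)  # random read order over the same elements

t0 = time.perf_counter()
seq_sum = data.sum()            # sequential, cache/prefetch-friendly reads
seq_t = time.perf_counter() - t0

t0 = time.perf_counter()
rnd_sum = data[idx].sum()       # gather: random-access reads
rnd_t = time.perf_counter() - t0

# Same answer, very different effective bandwidth; STREAM only
# measures the sequential case.
```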

First off, the website is down.

Second, what do they mean by "GPU based"?

I could be wrong, but I have always thought that GPUs/CPUs were used for processing and querying, not storing.

GPU memory is so fast that it might actually have some advantages in retrieval time relative to main system memory.

But this is speculation that I’m throwing out here in the hopes that someone better informed will correct my wrong answer...

It's for in-memory databases:


Although the idea sounds intriguing for numeric data processing, the site looks suspicious and the article won't open.
