> This research paper is talking about performance whilst you're talking about scalability. Those are related but are distinct from each other.
The paper has "Scale to Hundreds of Thousands of Cores" in the title. I have not yet read the paper, but it seems unlikely that it doesn't talk about scalability.
If your data is small enough to easily fit in RAM, you kind of can't have that slow a query on it (or at least you're no longer talking about a database problem).
If you end up having to scan the 10 GB graph many times per query, without acceleration structures such as indices to help you, it will be slow. I'd say it's still a DB problem.
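As a rough illustration (nothing here is from the paper; the graph shape and sizes are made up), repeatedly scanning an in-memory edge list is dramatically slower than probing a prebuilt adjacency index, even though both versions stay entirely in RAM:

```python
import random
import time
from collections import defaultdict

# Hypothetical in-memory graph: a flat edge list, sized well under available RAM.
N_NODES = 1_000_000
N_EDGES = 5_000_000
edges = [(random.randrange(N_NODES), random.randrange(N_NODES))
         for _ in range(N_EDGES)]

queries = [random.randrange(N_NODES) for _ in range(100)]

# Option 1: answer "neighbours of node x" by scanning the whole edge list per query.
start = time.perf_counter()
for x in queries:
    _ = [dst for src, dst in edges if src == x]
scan_time = time.perf_counter() - start

# Option 2: same queries against a prebuilt adjacency index (a hash map),
# i.e. the kind of acceleration structure being talked about.
index = defaultdict(list)
for src, dst in edges:
    index[src].append(dst)

start = time.perf_counter()
for x in queries:
    _ = index[x]
index_time = time.perf_counter() - start

print(f"full scans:    {scan_time:.2f}s")
print(f"index lookups: {index_time:.4f}s")
```

The data fits in memory either way; the difference is purely how many bytes each query has to touch.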
I'm guessing that, when the paper's author mentioned "hundreds of thousands of cores", they didn't have 10 GB of data in mind. That works out to less than a typical L1 cache's worth of data per core.
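Back-of-envelope check (the 300,000-core figure is just a stand-in for "hundreds of thousands"; only the 10 GB number comes from the thread):

```python
# Rough per-core share of a 10 GB dataset spread over "hundreds of thousands" of cores.
data_bytes = 10 * 10**9   # 10 GB
cores = 300_000           # assumed stand-in for "hundreds of thousands"

per_core_kb = data_bytes / cores / 1024
print(f"~{per_core_kb:.0f} KB per core")  # ~33 KB, on the order of an L1 data cache
```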
This is really common across article-comment platforms; is anyone interested in discussing how to incentivise comment sections where people have actually read the paper?