Yes, that's my point. It's too simplistic to say "well, the data fits in RAM"; you usually have to add parallelism to make the workload tolerable. In the past, people have done that with MapReduce or Spark, with GNU parallel, or just by writing parallel code in their favorite language. But fitting in RAM isn't the only thing that determines whether a problem is solvable in a reasonable amount of time.