
I have done plenty of parallel work in Python and R, so I'm still not sure what you mean.



Yes, that's my point. It's too simplistic to say "well, the data fits in RAM"; you have to add parallelism to make the workload tolerable. In the past, people have done that with MapReduce or Spark, with GNU parallel, or just by writing parallel code in their favorite language. RAM by itself isn't the only factor in whether a problem is solvable in a reasonable amount of time.


If you are using Python and R, can't you add your own parallelism?



