14x seemed awfully small. Looking at, e.g., MCX (http://mcx.sourceforge.net/cgi-bin/index.cgi), they report a ~300x speedup for a Monte Carlo simulation on the GPU.

(Note: it may well be that an FPGA version of that particular problem would be even faster. I posted the GP merely to point out that significantly more effort went into tailoring the model to the FPGA than to the GPU, which appeared to be a straightforward "compile for GPU" port without any restructuring to make the data model fit.)
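For concreteness, here is a minimal sketch of why Monte Carlo maps so well to GPUs: every thread runs an independent sample stream with its own RNG state, so thousands of threads stay busy with no communication until a final reduction. This is not MCX (which simulates photon transport); it estimates pi as a stand-in workload, and the grid and sample sizes are arbitrary.

    #include <cstdio>
    #include <curand_kernel.h>

    // Each thread draws its own random points and counts hits inside the
    // unit quarter-circle; per-thread counts are reduced on the host.
    __global__ void mc_pi(unsigned long long *hits, int samples,
                          unsigned long long seed) {
        int tid = blockIdx.x * blockDim.x + threadIdx.x;
        curandState st;
        curand_init(seed, tid, 0, &st);   // independent stream per thread
        unsigned long long local = 0;
        for (int i = 0; i < samples; ++i) {
            float x = curand_uniform(&st);
            float y = curand_uniform(&st);
            if (x * x + y * y <= 1.0f) ++local;
        }
        hits[tid] = local;
    }

    int main() {
        const int blocks = 256, threads = 256, samples = 4096;
        const int n = blocks * threads;
        unsigned long long *d_hits;
        cudaMalloc(&d_hits, n * sizeof(unsigned long long));
        mc_pi<<<blocks, threads>>>(d_hits, samples, 1234ULL);
        unsigned long long *h = new unsigned long long[n], total = 0;
        cudaMemcpy(h, d_hits, n * sizeof(unsigned long long),
                   cudaMemcpyDeviceToHost);
        for (int i = 0; i < n; ++i) total += h[i];
        printf("pi ~= %f\n", 4.0 * (double)total / ((double)n * samples));
        cudaFree(d_hits);
        delete[] h;
        return 0;
    }

The GPU win here comes entirely from the embarrassingly parallel structure; how close a real problem gets to the ~300x figure depends on the specific model and hardware.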

I've also heard that financial firms needed good double-precision support, which has not always run at full throughput on GPUs.
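That gap is easy to see with a rough microbenchmark (a sketch, not a rigorous measurement): time the same FMA loop in float and in double. On consumer GPUs, FP64 units are often provisioned at 1/32 or 1/64 of FP32 throughput, whereas datacenter parts such as the V100/A100 run FP64 at 1/2 rate.

    #include <cstdio>

    // Identical dependent FMA chains in float and double; timing both
    // exposes the hardware's FP64/FP32 throughput ratio.
    __global__ void fma_f32(float *out, int iters) {
        float acc = (float)threadIdx.x;
        for (int i = 0; i < iters; ++i) acc = fmaf(acc, 1.000001f, 0.5f);
        out[blockIdx.x * blockDim.x + threadIdx.x] = acc;
    }

    __global__ void fma_f64(double *out, int iters) {
        double acc = (double)threadIdx.x;
        for (int i = 0; i < iters; ++i) acc = fma(acc, 1.000001, 0.5);
        out[blockIdx.x * blockDim.x + threadIdx.x] = acc;
    }

    int main() {
        const int blocks = 1024, threads = 256, iters = 1 << 16;
        const int n = blocks * threads;
        float *d_f;  double *d_d;
        cudaMalloc(&d_f, n * sizeof(float));
        cudaMalloc(&d_d, n * sizeof(double));

        cudaEvent_t start, stop;
        cudaEventCreate(&start);
        cudaEventCreate(&stop);
        float ms32, ms64;

        cudaEventRecord(start);
        fma_f32<<<blocks, threads>>>(d_f, iters);
        cudaEventRecord(stop);
        cudaEventSynchronize(stop);
        cudaEventElapsedTime(&ms32, start, stop);

        cudaEventRecord(start);
        fma_f64<<<blocks, threads>>>(d_d, iters);
        cudaEventRecord(stop);
        cudaEventSynchronize(stop);
        cudaEventElapsedTime(&ms64, start, stop);

        printf("FP32: %.2f ms   FP64: %.2f ms   slowdown: %.1fx\n",
               ms32, ms64, ms64 / ms32);
        cudaFree(d_f);
        cudaFree(d_d);
        return 0;
    }

With this many threads in flight the dependent chains are latency-hidden, so the measured slowdown approximates the throughput ratio; workloads that genuinely need FP64 therefore see much less than the headline FP32 speedup on consumer hardware.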