
GPGPU. GPU performance is still increasing along Moore's Law, while single-core performance has plateaued. The implication is that at some point the differential will become so great that we'd be stupid to keep running anything other than simple housekeeping tasks on the CPU. A lot of capital investment would need to happen for that transition: we basically need to throw out much of what we've learned about algorithm design over the past 50 years and learn new parallel algorithms. But that's part of what makes it exciting.
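To make the "relearn algorithm design" point concrete: even something as basic as summing N numbers changes shape. The sequential habit is a left-to-right fold (O(N) dependent steps); the GPU version is a tree reduction, where each round of pairwise adds is independent and could run in parallel, finishing in O(log N) rounds. A minimal sketch (the tree schedule is run serially here just to show the access pattern; function name is mine):

```python
def tree_reduce(xs):
    """Pairwise (tree) reduction: every round of additions below is
    data-independent, so on a GPU each round could run in parallel."""
    xs = list(xs)
    while len(xs) > 1:
        if len(xs) % 2:          # pad odd-length rounds with the identity
            xs.append(0)
        # each pair (xs[2i], xs[2i+1]) is independent -> parallelizable
        xs = [xs[i] + xs[i + 1] for i in range(0, len(xs), 2)]
    return xs[0]

print(tree_reduce(range(10)))  # 45, same answer as sum(range(10))
```

Same result as the sequential sum, but the dependency structure is completely different, and that restructuring is the work the comment is talking about.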





Sounds interesting; which language is best positioned for GPGPU?

C++ through CUDA is by far the most popular option. There is some support in other languages, but the support and ecosystem are far from what exists for CUDA and C++.

Python, via RAPIDS.ai. It got there first because most of the data science community doing production work at scale is in Python. It feels like the early days of Hadoop and Spark.
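Part of why RAPIDS caught on is that its cuDF library deliberately mirrors the pandas API, so existing Python data-science code often ports by swapping an import. A minimal sketch (column names and data are made up; written against pandas so it runs anywhere, but on a GPU box with RAPIDS installed, `import cudf as pd` is the usual drop-in):

```python
import pandas as pd  # with RAPIDS on a GPU machine: `import cudf as pd`

# Hypothetical sensor readings; with cuDF the DataFrame lives in GPU
# memory and the groupby runs as a parallel kernel, but the calls are
# the same as in pandas.
df = pd.DataFrame({
    "device": ["a", "b", "a", "b", "a"],
    "reading": [1.0, 2.0, 3.0, 4.0, 5.0],
})
means = df.groupby("device")["reading"].mean()
print(means["a"], means["b"])  # 3.0 3.0
```

That API mirroring is what makes the Hadoop/Spark comparison apt: teams keep their existing code shape and swap the execution engine underneath.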

IMO Golang and JS are both better technical fits (Go for parallel concurrency, JS for concurrency/V8/typed arrays/wasm), and we got close via the Apache Arrow libraries, but it will be a year or two more for them: a core supporter is needed, and we had to stop the JS side after we wrote the Arrow support. The Python side is exploding, so now it's just a matter of time.




