
Love this concept and hope it does well, or that something like it becomes a de facto standard. There's an explosion of software using GPUs. In the DL space alone there are PyTorch and TensorFlow, plus many frameworks that compete with them, plug into them, etc. There's also an explosion of DL hardware coming: NVIDIA's whole product line, competing GPUs, TPUs, Inferentia, a bunch of startups... That compatibility matrix is going to be insane, so you need a good integration layer between them. If everyone can support CUDA, great, but there's also OpenCL and other competing standards. Any such layer is going to have performance trade-offs, but some of that cost must be worth it to keep some semblance of a common framework; otherwise small and new players don't stand much of a chance.


I wish PyTorch would utilize CPUs more effectively out of the box. The last time I tried to run DNN training on CPU (last summer), I was disappointed to find that only a single core was used on my Ryzen machine. Yes, CPUs don't have as much memory bandwidth or compute as GPUs, but this still leaves a lot of performance on the table.


PyTorch can use more than one core; CPU ops are parallelized through OpenMP/MKL intra-op threading by default, so seeing only one core busy usually means the thread count got pinned to 1 somewhere.
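For anyone hitting the single-core behavior, it's worth checking and bumping the thread settings before concluding PyTorch itself is the limit. A rough sketch, assuming a recent PyTorch build (the 4096x4096 matmul is just an arbitrary workload to watch in htop):

    import os
    import torch

    # Intra-op threads drive CPU parallelism for ops like matmul and conv.
    # Depending on the build and env vars (e.g. OMP_NUM_THREADS), this can
    # end up pinned to 1.
    print("intra-op threads:", torch.get_num_threads())
    print("inter-op threads:", torch.get_num_interop_threads())

    # Spread intra-op work across all logical cores.
    torch.set_num_threads(os.cpu_count())

    # Quick sanity check: a large CPU matmul should now light up every
    # core while it runs.
    x = torch.randn(4096, 4096)
    y = torch.randn(4096, 4096)
    z = x @ y

Setting OMP_NUM_THREADS in the environment before launching Python accomplishes the same thing without code changes.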




