
ONNX is cool. The other option is TensorFlow.js, which I have found quite nice as a usable matrix lib for JS, with shockingly good perf. Would love to know how well they compare.
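If it helps with the comparison, this is roughly the kind of usage I mean; a minimal sketch assuming the standard tf.js WebGL backend:

  // minimal TensorFlow.js sketch: multiply two matrices on the GPU via WebGL
  import * as tf from '@tensorflow/tfjs';

  async function main() {
    await tf.setBackend('webgl');   // explicitly pick the WebGL backend
    const a = tf.randomNormal([1024, 1024]);
    const b = tf.randomNormal([1024, 1024]);
    const c = tf.matMul(a, b);      // dispatched to the GPU
    const data = await c.data();    // async readback into a Float32Array
    console.log(data[0]);
    tf.dispose([a, b, c]);          // free GPU memory explicitly
  }

  main();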



Also a shout-out to Taichi and GPU.js as alternatives in this space. I've also had success with Hamster.js, which 'parallelizes' computations using Web Workers instead of the GPU (who knows, in the future the two might be combined?).
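For anyone who hasn't tried GPU.js, a kernel is just a JS function that gets compiled to a shader; a sketch along the lines of the matrix-multiply example in their README:

  // GPU.js sketch: a matmul kernel written as plain JS, transpiled to a shader
  const { GPU } = require('gpu.js');
  const gpu = new GPU();

  // helper: an n x n matrix of random numbers as nested arrays
  const randMatrix = (n) =>
    Array.from({ length: n }, () => Array.from({ length: n }, () => Math.random()));

  const multiplyMatrix = gpu
    .createKernel(function (a, b) {
      let sum = 0;
      for (let i = 0; i < 512; i++) {
        sum += a[this.thread.y][i] * b[i][this.thread.x];
      }
      return sum;
    })
    .setOutput([512, 512]);

  const out = multiplyMatrix(randMatrix(512), randMatrix(512));
  console.log(out[0][0]);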


"who knows, in the future the two might be combined?"

You can already combine both today, and I've been experimenting with it. The problem is still the high latency of the GPU: it takes ages to get an answer, and the timing is not consistent. That makes scheduling a nightmare for me when dividing jobs between the CPU and GPU. It would probably require a new hardware architecture, with the GPU and CPU more closely connected, to make use of that in a sane way. (There are some designs aiming for this, as far as I know.)
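A quick way to see what I mean, as a sketch with tf.js on the WebGL backend (timing the full dispatch-plus-readback, which is what actually matters for scheduling):

  // time the same GPU matmul repeatedly and look at the spread between runs
  import * as tf from '@tensorflow/tfjs';

  async function timeOnce(n) {
    const a = tf.randomNormal([n, n]);
    const b = tf.randomNormal([n, n]);
    const t0 = performance.now();
    const c = tf.matMul(a, b);
    await c.data();                 // waits until the result is read back from the GPU
    const ms = performance.now() - t0;
    tf.dispose([a, b, c]);
    return ms;
  }

  (async () => {
    await tf.setBackend('webgl');
    for (let i = 0; i < 10; i++) {
      console.log(`run ${i}: ${(await timeOnce(512)).toFixed(1)} ms`);
    }
  })();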

edit: you probably meant hamsterS.js


They are probably two different use cases. Parallelizing with Web Workers could be faster for algorithms that do a lot of branching (minimax comes to mind), but if you can vectorize (matmuls, for example) then the GPU probably dominates.
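Something like this is what I'd picture for the branchy case; 'minimax-worker.js' is a hypothetical worker script that searches one subtree and posts back a score:

  // fan the top-level moves of a branchy search out to Web Workers
  const moves = ['e4', 'd4', 'c4', 'Nf3'];
  const scores = moves.map((move) => new Promise((resolve) => {
    const w = new Worker('minimax-worker.js');  // hypothetical worker script
    w.onmessage = (e) => { resolve(e.data); w.terminate(); };
    w.postMessage({ move, depth: 6 });
  }));

  Promise.all(scores).then((results) => {
    const best = moves[results.indexOf(Math.max(...results))];
    console.log('best move:', best);
  });

The vectorized side would just be a single tf.js or GPU.js kernel call like the ones above.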


It would be cool to implement some of these in either library to see how they stack up. In the Hamster.js case, I am envisioning each worker having access to a separate GPU on your local machine, and having results come in asynchronously on the main thread. Massive in-browser simulations with access to existing JS visualisation packages would make real-time prototyping more feasible.
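Roughly what I'm picturing, as a sketch; 'sim-worker.js' and render() are placeholders, and any GPU work inside a worker would hinge on OffscreenCanvas/WebGL support in the browser:

  // N workers, each running its own simulation chunk, results streaming back async
  const N = navigator.hardwareConcurrency || 4;

  function render(id, frame) {
    // plug into your JS visualisation package of choice as frames arrive
    console.log(`worker ${id} sent a frame with ${frame.length} values`);
  }

  const workers = Array.from({ length: N }, (_, id) => {
    const w = new Worker('sim-worker.js');      // hypothetical worker script
    w.onmessage = (e) => render(id, e.data);    // results arrive asynchronously
    w.postMessage({ id, steps: 1000 });
    return w;
  });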



