We store NDArrays as floating-point WebGLTextures (in the RGBA channels). Mathematical operations are defined as fragment shaders that operate on WebGLTextures and produce new WebGLTextures.
Each fragment shader we write computes a single output value of the result NDArray; the WebGL stack then parallelizes that program across all output values. This is how we get the performance we do.
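As a rough sketch of that pattern (illustrative only, not the library's actual shader code): an element-wise add over two floating-point input textures, assuming WebGL 1 with the OES_texture_float extension. For clarity this stores one value per texel in the R channel, whereas the real code packs values into RGBA and maps texture coordinates to NDArray indices; the names uA, uB, and vTexCoord are made up for this example.

```typescript
// Hypothetical fragment shader for elementwise add of two NDArrays
// stored as float textures. One value per texel (R channel) to keep
// the sketch simple.
const ADD_FRAGMENT_SHADER = `
  precision highp float;

  uniform sampler2D uA;    // first input NDArray, stored as a texture
  uniform sampler2D uB;    // second input NDArray, same shape
  varying vec2 vTexCoord;  // this fragment's position in the output

  void main() {
    // Each invocation computes exactly one output value; the GPU runs
    // this program for every texel of the result texture in parallel.
    float a = texture2D(uA, vTexCoord).r;
    float b = texture2D(uB, vTexCoord).r;
    gl_FragColor = vec4(a + b, 0.0, 0.0, 0.0);
  }
`;
```

Drawing a full-screen quad into a framebuffer whose color attachment is a float texture then yields a new texture holding the result NDArray, ready to be fed into the next op.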
It will be interesting to see whether the industry produces a standard for GPGPU in the browser, given that even on the desktop the open standard is less widely used than the proprietary one.
They did: WebCL. Sadly, it had multiple security issues, so the browsers that had implemented it in their beta channels (just Chrome and Firefox, I believe) ended up removing it. Now I think it's completely stalled, and no one is planning to implement it.
Also sadly, SIMD.js support is coming along extremely slowly.
WebAssembly is coming along quite nicely.
And SwiftShader is quite a nice fallback for blacklisted GPUs: it simulates WebGL on the CPU and takes advantage of SIMD.
We wanted to do hardware-accelerated deep learning on the web, but we realized there was no NumPy equivalent. Our linear algebra layer has now matured to the point where we can start building a more functional automatic differentiation layer. We're going to completely remove the Graph in favor of a much simpler API by the end of January.
Once that happens, we'll continue to build higher-level abstractions that folks are familiar with: layers, networks, etc.
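To make "functional automatic differentiation without a Graph" concrete, here is a toy, eager reverse-mode autodiff sketch in TypeScript. Every name in it is hypothetical; it illustrates the general technique (ops execute immediately and record a tape for the backward pass), not the actual deeplearn.js API.

```typescript
// Toy eager autodiff: each op runs immediately and records how to
// propagate gradients, so no explicit Graph object is ever built.
class Value {
  grad = 0;
  constructor(
    public data: number,
    private backward_: () => void = () => {},
    private parents: Value[] = [],
  ) {}

  add(other: Value): Value {
    const out = new Value(this.data + other.data, () => {
      this.grad += out.grad;   // d(a+b)/da = 1
      other.grad += out.grad;  // d(a+b)/db = 1
    }, [this, other]);
    return out;
  }

  mul(other: Value): Value {
    const out = new Value(this.data * other.data, () => {
      this.grad += other.data * out.grad;  // d(a*b)/da = b
      other.grad += this.data * out.grad;  // d(a*b)/db = a
    }, [this, other]);
    return out;
  }

  // Reverse-mode sweep: topologically order the recorded ops, then
  // propagate gradients from this output back to every input.
  backward(): void {
    const order: Value[] = [];
    const seen = new Set<Value>();
    const visit = (v: Value) => {
      if (seen.has(v)) return;
      seen.add(v);
      v.parents.forEach(visit);
      order.push(v);
    };
    visit(this);
    this.grad = 1;
    for (let i = order.length - 1; i >= 0; i--) order[i].backward_();
  }
}

// Usage: y = x*x + x, so dy/dx at x = 3 is 2*3 + 1 = 7.
const x = new Value(3);
const y = x.mul(x).add(x);
y.backward();
console.log(y.data, x.grad); // 12 7
```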
We really started from nothing, but we're getting there :)
I ended up switching to OpenCL since I am running this on my desktop. Just curious to see what you did. Thanks!