
Author of deeplearn.js here. A quick summary:

We store NDArrays as floating point WebGLTextures (in rgba channels). Mathematical operations are defined as fragment shaders that operate on WebGLTextures and produce new WebGLTextures.
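The RGBA packing can be sketched on the CPU like this (an illustrative sketch, not deeplearn.js's actual layout): four float values go into each texel's r, g, b, a channels, so an N-element array needs ceil(N / 4) texels.

```javascript
// Pack a flat Float32Array into RGBA texels, 4 values per texel.
// The packed buffer is what would be uploaded to the GPU, e.g. via
// gl.texImage2D(..., gl.RGBA, gl.FLOAT, packed).
function packRGBA(values) {
  const numTexels = Math.ceil(values.length / 4);
  const packed = new Float32Array(numTexels * 4); // trailing slots stay 0
  packed.set(values);
  return packed;
}

packRGBA(new Float32Array([1, 2, 3, 4, 5])).length; // → 8 (2 texels)
```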

The fragment shaders we write operate in the context of a single output value of our result NDArray, which gets parallelized by the WebGL stack. This is how we get the performance that we do.
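The per-output-element model can be simulated on the CPU (an illustrative sketch of the idea, not deeplearn.js's shader code): each "shader invocation" computes exactly one output element, and the loop over elements is what the GPU runs in parallel.

```javascript
// Hypothetical elementwise-add "shader body": given the output index,
// read one value from each input and produce one result.
function addShaderBody(a, b, outIndex) {
  return a[outIndex] + b[outIndex];
}

// The "rasterizer": invoke the shader body once per output element.
// On the GPU, this loop is what the WebGL stack parallelizes.
function run(shaderBody, inputs, outSize) {
  const out = new Float32Array(outSize);
  for (let i = 0; i < outSize; i++) {
    out[i] = shaderBody(...inputs, i);
  }
  return out;
}

const a = new Float32Array([1, 2, 3, 4]);
const b = new Float32Array([10, 20, 30, 40]);
const sum = run(addShaderBody, [a, b], 4); // → Float32Array [11, 22, 33, 44]
```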

Which... is pretty much how GPGPU started in the early 2000s. Sad/funny how we're going through this cycle again.

It will be interesting to see if the industry will produce a standard for GPGPU in the browser, given that on the desktop the open standard is less common than a proprietary one.

This is still done in pretty much every game engine I've worked with (for general computation used to support rendering as much as for the rendering itself). It's frankly extremely practical, and better than many GPGPU APIs because it more closely matches what the hardware is doing internally (GPU core warps, texel caches, vertex caches, etc.).

> It will be interesting to see if the industry will produce a standard for GPGPU in the browser.

They did: WebCL. Sadly, it had multiple security issues, so the browsers that had implemented it in their beta channels (just Chrome and Firefox, I believe) ended up removing it. Now, I think it's totally stalled and no one is planning on implementing it.

Also sadly, SIMD.js support is coming along extremely slowly.

WebGPU conversations are ongoing: https://en.wikipedia.org/wiki/WebGPU

WebAssembly is coming along quite nicely.

And SwiftShader is a quite nice fallback for blacklisted GPUs. They simulate WebGL on the CPU and take advantage of SIMD: https://github.com/google/swiftshader

Are there plans to offer the whole zoo?


As I understand it, deeplearn.js is more of a kitchen than a prepared meal. Part of the library is referred to as “NumPy for the web”, with classes to run linear algebra operations efficiently by leveraging the GPU. I don’t see why you couldn’t use those pieces to set up other networks. I think the name “deeplearn.js” is more about capitalizing on the branding momentum of “deep learning” than a demonstration of one kind of network. I’m in the middle of introductory machine learning classes, so I hope someone will correct me if I’m wrong.

You're right. Some history:

We wanted to do hardware-accelerated deep learning on the web, but we realized there was no NumPy equivalent. Our linear algebra layer has now matured to a place where we can start building a more functional automatic differentiation layer. We're going to completely remove the Graph in favor of a much simpler API by end of January.

Once that happens, we'll continue to build higher level abstractions that folks are familiar with: layers, networks, etc.

We really started from nothing, but we're getting there :)

Thanks for the explanation! I've recently been working on my own deep learning library (for fun) and was doing something similar. Aren't GL textures sampled inexactly, since texture coordinates are floating point? Do you just rely on the floating-point error being small enough that you can reliably index weights?

I ended up switching to OpenCL since I am running this on my desktop. Just curious to see what you did. Thanks!

You can set nearest neighbor interpolation for the texture (aka no interpolation), and gl_FragCoord can be used to determine which pixel the fragment shader is operating on.
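The index math a fragment shader does with gl_FragCoord can be sketched like this (illustrative only; deeplearn.js's actual shaders may differ). gl_FragCoord reports the pixel center, e.g. (0.5, 0.5) for the first texel, so flooring it recovers integer texel coordinates; with nearest-neighbor filtering there is no interpolation, so each texel reads back exactly.

```javascript
// Map a fragment's gl_FragCoord-style pixel coordinate to a flat NDArray
// index, assuming 4 values packed per texel in the RGBA channels.
function texelToFlatIndex(fragCoordX, fragCoordY, texWidth, channel) {
  const col = Math.floor(fragCoordX); // pixel centers sit at half-integers
  const row = Math.floor(fragCoordY);
  return (row * texWidth + col) * 4 + channel;
}

// e.g. the fragment at gl_FragCoord = (2.5, 1.5) on a 4-texel-wide texture,
// reading the green channel (channel 1):
texelToFlatIndex(2.5, 1.5, 4, 1); // → (1*4 + 2)*4 + 1 = 25
```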

Sick! That is a world-class hack.

It's not really a hack, it's just using the GPU's parallel computing capabilities to compute things in parallel. This technique has been around for ages.

Languages, buddy, languages. Just as languages were a barrier to human cultures spreading their ideas, it's analogous in the computing world. JS is catching up with many concepts that were prevalent in other languages/environments, and thanks to JS they are now becoming more accessible and popular to the commoners.
