
I believe 2D canvas rendering is performed on the CPU rather than the GPU.

You wouldn't use a 2d context for this; you'd use WebGL shaders instead. Besides that, most operations on a 2d context are performed on the GPU anyway.

Most is done on the GPU nowadays - only getImageData() and putImageData() have to go via the CPU.

BTW, you can also "cheat" your way to metaballs: https://stackoverflow.com/questions/17177748/increasing-real...

Much of it is offloaded to the GPU by recent browsers.

The important bit is getting the metaball function into the fragment shader. I'm not really a web guy, but I know you can do that with WebGL.
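A minimal sketch of the field that fragment shader would evaluate, written here in plain JavaScript rather than GLSL for clarity (the function name, ball layout, and threshold value are all illustrative, not from the thread):

```javascript
// Classic metaball field: each ball contributes r^2 / d^2, where d is the
// distance from the pixel to the ball's center. In a WebGL fragment shader
// this same sum runs once per pixel on the GPU.
function metaballField(x, y, balls) {
  let sum = 0;
  for (const b of balls) {
    const dx = x - b.x;
    const dy = y - b.y;
    sum += (b.r * b.r) / (dx * dx + dy * dy);
  }
  return sum;
}

// A pixel is "inside" the blob wherever the field exceeds a threshold;
// that comparison is what makes nearby balls merge into one shape.
const THRESHOLD = 1.0; // illustrative value
const balls = [{ x: 100, y: 100, r: 30 }, { x: 150, y: 100, r: 30 }];
const inside = metaballField(105, 100, balls) > THRESHOLD; // true near a center
```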

For a canvas with a more limited API, you can still do it if images are GPU-accelerated and a composite mode like "lighter" is available. If so, you can do basically the same thing: render the metaball falloff function to an image once, then draw that image for each metaball. Going via an image introduces extra aliasing artifacts, but it can get around the API limitations.

Edit: I suppose you would still want to find a GPU-accelerated threshold function for the step after that.
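A sketch of that sprite approach under the stated assumptions (the function name and the linear falloff curve are illustrative; the r^2/d^2 metaball falloff could be baked into the sprite instead):

```javascript
// Build a radial-falloff sprite as raw RGBA pixels, brightest at the center
// and fading to transparent at the edge. The intensity is stored in the
// alpha channel so overlapping draws accumulate.
function makeFalloffSprite(size) {
  const pixels = new Uint8ClampedArray(size * size * 4);
  const c = (size - 1) / 2;
  for (let y = 0; y < size; y++) {
    for (let x = 0; x < size; x++) {
      const dx = x - c, dy = y - c;
      const d = Math.sqrt(dx * dx + dy * dy) / c; // 0 at center, 1 at edge
      const v = Math.max(0, 1 - d);               // illustrative linear falloff
      const i = (y * size + x) * 4;
      pixels[i] = pixels[i + 1] = pixels[i + 2] = 255; // white
      pixels[i + 3] = Math.round(255 * v);             // intensity in alpha
    }
  }
  return pixels;
}
```

In the browser you'd write these pixels into an offscreen canvas with putImageData() once, then drawImage() that canvas at each metaball's position with globalCompositeOperation = "lighter" so overlapping sprites sum; the threshold step would then run over the accumulated result.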

