"Thinking quickly, I made a megaphone by taking a squirrel and tying it to a megaphone I had."
Seriously: This is an interesting article because it's about more than just rendering video; it's about procedurally creating a new video and then rendering it with a pre-existing rendering program. It would be nice if the title reflected that.
Another way to do it on the fly is to feed the image data into ffmpeg's stdin. I haven't tried BMP with that method, so the intermediate encoding might not be spared.
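For what it's worth, here's a rough Node.js sketch of that stdin approach (assuming ffmpeg is on PATH and the node-canvas 'canvas' package is installed; the drawing is just placeholder content):

```js
// Spawn ffmpeg reading an image stream from stdin, and write one
// PNG-encoded frame per iteration.
const { spawn } = require('child_process');
const { createCanvas } = require('canvas');

const width = 640, height = 360, fps = 30, totalFrames = 150;
const canvas = createCanvas(width, height);
const ctx = canvas.getContext('2d');

const ffmpeg = spawn('ffmpeg', [
  '-y',
  '-f', 'image2pipe',        // read a stream of concatenated images from stdin
  '-framerate', String(fps),
  '-i', '-',                 // '-' means stdin
  '-c:v', 'libx264',
  '-pix_fmt', 'yuv420p',
  'out.mp4',
], { stdio: ['pipe', 'inherit', 'inherit'] });

(async () => {
  for (let frame = 0; frame < totalFrames; frame++) {
    // Draw something that changes per frame.
    ctx.fillStyle = '#112233';
    ctx.fillRect(0, 0, width, height);
    ctx.fillStyle = '#ffcc00';
    ctx.fillRect((frame * 4) % width, height / 2 - 20, 40, 40);

    // PNG here; piping raw RGBA instead ('-f rawvideo -pix_fmt rgba -s 640x360')
    // would skip the intermediate image encoding entirely.
    const buf = canvas.toBuffer('image/png');
    if (!ffmpeg.stdin.write(buf)) {
      await new Promise((resolve) => ffmpeg.stdin.once('drain', resolve));
    }
  }
  ffmpeg.stdin.end();
})();
```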
I'm gonna need to dive further into this. I do JS canvas and WebGL rendering to videos with Chrome/Puppeteer via 'puppeteer-screen-recorder'.
But they do at times cause some issues. The latest 3D renders have been causing memory issues that I think would be solved with bigger boxes, but I haven't needed to investigate for a little while.
I've thought about just outputting frames and then later having ffmpeg stitch them into a video (sketched below), but haven't gotten around to really testing it.
I'm guessing this is limited to 2D canvas, but I'm excited to check it out. Thanks!
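A minimal sketch of the frames-then-stitch idea, assuming ffmpeg on PATH and the node-canvas 'canvas' package (in the Puppeteer case you'd write page.screenshot() buffers instead of canvas.toBuffer()):

```js
// Dump numbered PNG frames to disk, then let ffmpeg assemble them.
const fs = require('fs');
const { execFileSync } = require('child_process');
const { createCanvas } = require('canvas');

const width = 640, height = 360, fps = 30, totalFrames = 90;
const canvas = createCanvas(width, height);
const ctx = canvas.getContext('2d');

fs.mkdirSync('frames', { recursive: true });
for (let i = 0; i < totalFrames; i++) {
  // Placeholder frame content.
  ctx.fillStyle = '#202030';
  ctx.fillRect(0, 0, width, height);
  ctx.strokeStyle = '#44dd88';
  ctx.beginPath();
  ctx.arc(width / 2, height / 2, 20 + i, 0, Math.PI * 2);
  ctx.stroke();

  // Zero-padded names so ffmpeg's %05d pattern picks them up in order.
  const name = `frames/frame_${String(i).padStart(5, '0')}.png`;
  fs.writeFileSync(name, canvas.toBuffer('image/png'));
}

// Stitch the frames into an mp4.
execFileSync('ffmpeg', [
  '-y',
  '-framerate', String(fps),
  '-i', 'frames/frame_%05d.png',
  '-c:v', 'libx264',
  '-pix_fmt', 'yuv420p',
  'out.mp4',
], { stdio: 'inherit' });
```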
WebGL/3D works fine, just with some additional dependencies (e.g. mesa drivers) and a little more setup in Node.js to create the context and copy the framebuffer to node-canvas to do the image encoding (rough sketch below).
Here's a little 3D animation I've rendered using a similar technique (plus WebGL) in a docker container:
The main thing to watch out for is whether you need specific WebGL extensions that might not be supported. Array instancing is the main one I use, which is supported.
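A bare-bones sketch of that headless WebGL to node-canvas hand-off, assuming the 'gl' (headless-gl) and 'canvas' packages; rendering is reduced to a clear here, and a real scene would set up shaders and buffers as usual:

```js
const fs = require('fs');
const createGL = require('gl');
const { createCanvas } = require('canvas');

const width = 640, height = 360;

// Create an offscreen WebGL context (this is where the mesa-style
// dependencies come into play inside a container).
const gl = createGL(width, height, { preserveDrawingBuffer: true });

// "Render": just clear the framebuffer to a solid colour for the sketch.
gl.clearColor(0.1, 0.2, 0.5, 1.0);
gl.clear(gl.COLOR_BUFFER_BIT);

// Read the framebuffer back as RGBA bytes.
const pixels = new Uint8Array(width * height * 4);
gl.readPixels(0, 0, width, height, gl.RGBA, gl.UNSIGNED_BYTE, pixels);

// GL's origin is bottom-left, canvas's is top-left, so flip the rows.
const rowBytes = width * 4;
const flipped = new Uint8Array(pixels.length);
for (let y = 0; y < height; y++) {
  flipped.set(
    pixels.subarray(y * rowBytes, (y + 1) * rowBytes),
    (height - 1 - y) * rowBytes
  );
}

// Copy into node-canvas and let it do the image encoding.
const canvas = createCanvas(width, height);
const ctx = canvas.getContext('2d');
const imageData = ctx.createImageData(width, height);
imageData.data.set(flipped);
ctx.putImageData(imageData, 0, 0);
fs.writeFileSync('frame_00000.png', canvas.toBuffer('image/png'));

// Extension support can be checked up front before relying on it, e.g.:
// gl.getExtension('ANGLE_instanced_arrays') !== null
```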