If you want to support every meeting platform, you can’t really make any assumptions about the data format.
To my knowledge, Zoom’s web client uses a custom codec delivered inside a WASM blob. How would you capture that video data to forward it to your recording system? How do you decode it later?
Even if the incoming streams are in a standard format, compositing the meeting as a post-processing step from raw recorded tracks isn’t simple. Video call participants have gaps, network issues, and layer changes; you can’t assume much about the samples the way you would with typical video files. (Coincidentally, this is exactly what I’m working on right now at my job.)
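To give a flavor of what that post-processing involves, here's a minimal sketch of just one sub-problem: a participant joins late or drops out, so their recorded track has holes, and you have to pad every track to a common meeting timeline before compositing. This is purely illustrative pseudocode under my own assumptions; the names and structures aren't from any real pipeline or API.

```python
from dataclasses import dataclass

@dataclass
class Segment:
    start: float   # seconds, relative to meeting start
    end: float
    kind: str      # "video" for real recorded samples, "filler" for gaps

def fill_gaps(segments: list[Segment], meeting_end: float) -> list[Segment]:
    """Insert filler wherever a participant had no media, so every
    participant track covers the same timeline before compositing."""
    out: list[Segment] = []
    cursor = 0.0
    for seg in sorted(segments, key=lambda s: s.start):
        if seg.start > cursor:
            # joined late, or dropped out mid-call
            out.append(Segment(cursor, seg.start, "filler"))
        out.append(seg)
        cursor = max(cursor, seg.end)
    if cursor < meeting_end:
        # left before the meeting ended
        out.append(Segment(cursor, meeting_end, "filler"))
    return out

# Hypothetical participant: joined 12 s in, dropped for ~20 s mid-call
track = [Segment(12.0, 300.0, "video"), Segment(321.5, 1800.0, "video")]
print(fill_gaps(track, meeting_end=1800.0))
```

And that's before you get to the harder parts like layout changes, audio/video drift, and streams whose resolution keeps switching.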
At some point, I'd hope the output of Zoom's codec quickly becomes something that can be hardware decoded. Otherwise CPU load, battery consumption, and energy usage are going to be through the roof.
The most common video conferencing codec on WebRTC is VP8, which has hardware decoding support almost nowhere either. Zoom’s own codec must be an efficiency improvement over VP8, which is best described as patent-free leftovers from the back of the fridge.
Hardware decoding works best when you have a single stable high bitrate stream with predictable keyframes — something like a 4K video player.
Video meetings are not like that. You may have a dozen participant streams, and most of them are suffering from packet loss. Lots of decoder context switching and messy samples are not a good fit for typical hardware decoders.
This makes sense. I find it curious that a WASM codec could be competitive with something that is presumably decoded natively. I know Teams is a CPU hog, but I don't remember Zoom being one.