No, this story is about interprocess communication on a single computer; it has practically nothing to do with WebSockets versus something else over an IP network.
Because they were sending so much data to another process over the WebSocket.
An uncompressed 1920x1080 30fps RGB stream is about 178 megabytes per second. (This is 99% likely what they were capturing from the headless browser, although maybe at a lower frame rate - you don't need the full 30fps for a meeting capture.)
In comparison, a standard Netflix HD stream is around 1.5 megabits per second, so roughly 0.19 megabytes per second.
The uncompressed stream is almost a thousand times larger. At that rate, the WebSocket overhead starts having an impact.
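For anyone who wants to sanity-check the arithmetic, here's a quick back-of-the-envelope sketch in Python (the 1.5 Mbit/s Netflix figure is just the estimate used above):

    # Rough check of the numbers above.
    width, height, fps = 1920, 1080, 30
    bytes_per_pixel = 3                       # RGB, 8 bits per channel, no alpha

    uncompressed = width * height * bytes_per_pixel * fps   # bytes per second
    print(uncompressed / 2**20)               # ~178 MiB/s

    netflix_hd = 1.5e6 / 8                    # 1.5 Mbit/s -> bytes per second
    print(netflix_hd / 1e6)                   # ~0.19 MB/s

    print(uncompressed / netflix_hd)          # ~995x larger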
It should still have the same impact at scale, right? I.e., if I had a server handling enough WebSocket connections to be at 90% CPU usage, switching to a protocol with lower overhead should reduce the usage and thus save me money. This is of course assuming the system isn't I/O-bound.