OP says that Gemini received still images as input, not video, and the dev blog post shows it was instructed to reply to each input in relevant terms. Needless to say, that's quite different from what the demo implies, and at least theoretically it's already within GPT's abilities.
Implying they've solved single-token latency, however, is quite distasteful.