
The Truth Behind Liveplace’s Photo-Realistic 3D World And OTOY’s Rendering Engine - nickb
http://www.techcrunch.com/2008/08/20/the-truth-behind-liveplaces-photo-realistic-3d-world-and-otoys-rendering-engine/
======
eugenejen
All the replies on TechCrunch seem trapped in the same assumption: "for each
client, the server has to render the whole scene again and again."

I suspect they don't actually need to do that. Here is my guess.

1. Given a city model, it can be rendered for any camera position and
orientation.

2. If users are only allowed to walk on the streets, the set of possible
camera positions and orientations shrinks dramatically.

3. If users can only move at a fixed speed, the transitions between camera
states are constrained as well.

4. Given 1, 2, and 3, the possible camera positions and orientations are
finite.

5. The render farm pre-renders every possible scene, each one indexed by
position and camera orientation (see the sketch after this list).

6. The server assembles the pre-rendered scenes according to each user's
camera position, orientation, and pace, and streams the result out.

7. One more parameter, time of day, can be added, so the city gets a 24-hour
cycle with the sun and moon moving.
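To make the combinatorics concrete, here is a rough sketch of that
pre-rendered frame store (the grid spacing, heading step, and key layout are
my own assumptions, not anything OTOY has said):

```python
# Quantize the camera-state space from steps 1-7 and key pre-rendered
# frames by it. All numbers here are made-up assumptions.
STREET_POINTS = [(x, y) for x in range(0, 1000, 2)   # 2 m grid; streets would be a subset
                        for y in range(0, 1000, 2)]
HEADINGS = range(0, 360, 10)                          # 10-degree heading steps
HOURS = range(24)                                     # time-of-day parameter

def frame_key(pos, heading, hour):
    """Key identifying one pre-rendered frame in the farm's store."""
    return (pos, heading, hour)

# Offline, the render farm would fill a store along the lines of:
#   store[frame_key(pos, heading, hour)] = ray_trace(city_model, pos, heading, hour)
n_frames = len(STREET_POINTS) * len(HEADINGS) * len(HOURS)
print(f"{n_frames:,} candidate frames")   # 216,000,000 with these numbers
```

Even with coarse quantization the space is huge, but it is finite, and it is
rendered once for everyone rather than once per client.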

So the problem reduces to assembling those pre-rendered scenes fast enough and
streaming them. And because the output is effectively a movie, one important
task is to find the difference between consecutive frames and compress it
quickly.
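A minimal sketch of that frame-differencing idea, assuming raw, same-sized
frames (a real system would use a proper video codec such as H.264, which
does this far better):

```python
import zlib

def encode_delta(prev: bytes, cur: bytes) -> bytes:
    """XOR consecutive frames: unchanged pixels become zero bytes,
    which zlib compresses extremely well."""
    return zlib.compress(bytes(a ^ b for a, b in zip(prev, cur)))

def decode_delta(prev: bytes, packet: bytes) -> bytes:
    """Invert encode_delta: XOR the decompressed diff back onto prev."""
    diff = zlib.decompress(packet)
    return bytes(a ^ b for a, b in zip(prev, diff))
```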

If they want to add users' avatars, the avatars may not need to be ray-traced
at all; they could just be objects composited over the pre-rendered scene.
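Plain alpha compositing would do for that overlay (a pure-Python sketch; the
RGBA frame format and blending here are my own assumptions, and a real system
would do this on the GPU):

```python
def composite(background, sprite, top, left):
    """Alpha-blend an avatar `sprite` over the pre-rendered `background`
    in place. Both are lists of rows of (r, g, b, a) tuples."""
    for dy, row in enumerate(sprite):
        for dx, (r, g, b, a) in enumerate(row):
            y, x = top + dy, left + dx
            br, bg, bb, _ = background[y][x]
            t = a / 255.0
            background[y][x] = (round(r * t + br * (1 - t)),
                                round(g * t + bg * (1 - t)),
                                round(b * t + bb * (1 - t)),
                                255)
```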

