Not really. The process takes RGB and depth images and generates a new mesh of the scene.
It reconstructs occluded objects by using multiple images captured from within a view box, so the scene should look fully intact as long as the viewer stays inside that box.
Think of stitching together a 3D panorama, but with enough capture data that you can also peek around things.
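To make that concrete, here's a rough sketch (plain numpy, made-up names like `fx` and `cam_to_world`, nothing from the actual Seurat code) of the unprojection step: each RGB+depth capture from inside the box becomes a set of world-space points, and merging captures from several positions is what fills in surfaces any single view would occlude.

```python
import numpy as np

def unproject_rgbd(rgb, depth, fx, fy, cx, cy, cam_to_world):
    """Turn one RGB + depth capture into colored world-space points."""
    h, w = depth.shape
    us, vs = np.meshgrid(np.arange(w), np.arange(h))  # pixel coordinates
    # Pinhole unprojection: pixel + depth -> camera-space position.
    x = (us - cx) * depth / fx
    y = (vs - cy) * depth / fy
    pts_cam = np.stack([x, y, depth, np.ones_like(depth)], axis=-1).reshape(-1, 4)
    # Move points into a shared world frame so captures can be merged.
    pts_world = (cam_to_world @ pts_cam.T).T[:, :3]
    return pts_world, rgb.reshape(-1, 3)

# Merging the point sets from several capture positions inside the box is what
# lets the rebuilt mesh cover surfaces any single view would have occluded.
```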
Depends what you need. Everything, including GI, is baked into the mesh.
If you want dynamic lights, you could try baking out the scene as diffuse and then again as normal maps. That way you get simplified realtime illumination while still baking in AO, static diffuse lights, and the LODed mesh.
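Something like this is what I mean by the split; a toy numpy sketch rather than any particular engine's API: static diffuse/GI and AO come from the bake, and one dynamic light is evaluated against the baked normals at runtime.

```python
import numpy as np

def shade(albedo, baked_diffuse, baked_ao, normals, light_dir, light_color):
    """Baked static lighting plus one dynamic directional light; AO stays baked."""
    n = normals / np.linalg.norm(normals, axis=-1, keepdims=True)
    l = light_dir / np.linalg.norm(light_dir)
    ndotl = np.clip((n * l).sum(axis=-1, keepdims=True), 0.0, 1.0)
    static = albedo * baked_diffuse                   # baked GI / static lights
    dynamic = albedo * light_color * ndotl            # realtime contribution
    return baked_ao[..., None] * (static + dynamic)   # AO still fully baked in
```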
I may be understanding Seurat wrong, but I think it's more than just that. I think they bake a surface light field and do some form of image-based rendering against a much lower-poly version of the mesh.
Correct: it generates a surface light field from a set of RGBZ images that sample the scene surfaces. In turn, that's simplified into a set of alpha-textured quads that very accurately represent the scene from a specified headbox.
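If it helps, you can picture the output roughly like this (illustrative Python, not the actual Seurat data format): a flat list of RGBA-textured quads that an ordinary rasterizer alpha-blends, with the representation only guaranteed to hold up for eye positions inside the headbox.

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class Quad:
    center: np.ndarray   # world-space center, used only for sorting here
    corners: np.ndarray  # 4x3 world-space corners
    rgba: np.ndarray     # baked HxWx4 texture; alpha encodes coverage

def draw_order(quads, eye, headbox_min, headbox_max):
    """Back-to-front order for alpha blending, for an eye inside the headbox."""
    assert np.all(eye >= headbox_min) and np.all(eye <= headbox_max), \
        "the simplified scene is only meant to hold up inside the headbox"
    dists = [np.linalg.norm(q.center - eye) for q in quads]
    return [q for _, q in sorted(zip(dists, quads), key=lambda t: -t[0])]
```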