It also worked with only one photo.
Extending this into using known 3rd party geometries of identifiable objects instead of reconstructing by hand seems like a very logical extension in retrospect.
As cited in the paper and by Canoma, this 1996 paper by Paul Debevec is really where it all started: Modeling and Rendering Architecture from Photographs: A hybrid geometry- and image-based approach
Still a very impressive video:
Once this happens (and it doesn't look like it'll take long) photography will no longer be an accurate reference for knowledge about the real world.
I think we passed that point a few years ago. The trick now is to lower the barrier so that it is more accessible to more people.
The state of the art in rendering looks perfect:
Our product, Clara.io, makes it possible to render things like this out in real-time:
I used to love that technology was pushing towards this point when I was a kid. Now it scares me. Maybe I'm getting old..
It would be interesting to see whether people could correctly tell which images are renders and which are real, with better than 50% accuracy, if they were given 50 sample images.
Because being told that an image is computer generated, and then saying "Yep, it's unrealistic", isn't a particularly compelling argument.
Maybe not completely realistic but getting pretty good, especially in a browser.
DanBC made an interesting comment - it would be interesting to see renderings like these in a double-blind test against photographs and see how well they stack up.
Sorry, I'm not a native English speaker (so I'm not confident enough to downvote or anything), but your judging this use of “girl” as the female version of “boy”, while ignoring the overall mode of expression, doesn't seem adequate. I would have no objections if thomaseng's comment were more formal:
> I would consider these pictures of a girl quite lifelike
But it's not.
If I were the author, and the pictures were of a man, I'd totally say “guy”. Once you flip the gender, “guy” seems to become “girl”, not “woman”. (Again, given the overall informal style used.)
And as for the word “guy”, it doesn't sound in any way belittling to a grown-up man (and you just used it yourself).
The English language is often unhelpful in that exact equivalents of the word you want that exist for one gender don't exist for the other, or else carry other connotations. Master vs Mistress for instance.
For client side WebGL, click the Real-Time tab.
(Personal recollection: there was a solid model of Shaq's head at the 3D modeling company Viewpoint Datalabs back in the day. His head is huge.)
Stills from Running Man taken at about 01:19 - http://imgur.com/rQlxigG
(1) Fitting 3D stock models to existing models using a simple but interactive ray casting approach.
(2) Estimating soft lighting on objects fairly convincingly.
(3) Re-rendering the stock models using the artificial lighting and textures of the original photographs.
It is a pretty cool approach. There are real limitations to this but I think that the automated lighting estimate is just cool and has wide applications in the visual effects space.
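Step (1) above, the interactive fitting, boils down to back-projecting a clicked pixel into a 3D ray and intersecting it with the scene to place the stock model. Here's a minimal sketch of that idea, assuming a pinhole camera with hypothetical intrinsics (`fx`, `fy`, `cx`, `cy`), camera-space y pointing down, and a flat ground plane; the real paper's alignment is of course more involved:

```python
import numpy as np

def pixel_to_ray(u, v, fx, fy, cx, cy):
    """Back-project a clicked pixel (u, v) into a unit camera-space ray
    using a simple pinhole model (image y grows downward)."""
    d = np.array([(u - cx) / fx, (v - cy) / fy, 1.0])
    return d / np.linalg.norm(d)

def intersect_ground(origin, direction, ground_y=0.0):
    """Intersect a ray with the horizontal plane y = ground_y.
    Returns the 3D hit point where a stock model could be placed,
    or None if the ray is parallel to or pointing away from the plane."""
    if abs(direction[1]) < 1e-9:
        return None
    t = (ground_y - origin[1]) / direction[1]
    return origin + t * direction if t > 0 else None
```

For example, with the camera at the origin and the ground 1.6 units below it, clicking one focal length below the principal point lands the model at equal depth and drop. This is just the placement seed; orienting and scaling the model to the photo is where the interactive part comes in.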
PatchMatch algorithm: http://gfx.cs.princeton.edu/pubs/Barnes_2009_PAR/index.php
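For flavor, here's a rough, unoptimized sketch of the core idea in the Barnes et al. algorithm linked above: build an approximate nearest-neighbor field between two images via random initialization, neighbor propagation, and a shrinking random search. This assumes grayscale numpy arrays and skips all the real implementation's optimizations:

```python
import numpy as np

def patchmatch(src, dst, patch=5, iters=4, rng=np.random.default_rng(0)):
    """Minimal PatchMatch sketch: nnf[y, x] holds the (row, col) in dst
    whose patch best matches the patch around (y, x) in src."""
    h, w = src.shape[:2]
    H, W = dst.shape[:2]
    r = patch // 2

    def dist(sy, sx, dy, dx):
        # Sum of squared differences between (clamped) patches.
        a = src[max(sy - r, 0):sy + r + 1, max(sx - r, 0):sx + r + 1]
        b = dst[max(dy - r, 0):dy + r + 1, max(dx - r, 0):dx + r + 1]
        hh, ww = min(a.shape[0], b.shape[0]), min(a.shape[1], b.shape[1])
        d = a[:hh, :ww].astype(float) - b[:hh, :ww].astype(float)
        return float(np.sum(d * d))

    # 1. Random initialization of the field.
    nnf = np.stack([rng.integers(0, H, (h, w)),
                    rng.integers(0, W, (h, w))], axis=-1)
    cost = np.array([[dist(y, x, *nnf[y, x]) for x in range(w)]
                     for y in range(h)])

    for it in range(iters):
        # Alternate scan order each iteration, as in the paper.
        ys = range(h) if it % 2 == 0 else range(h - 1, -1, -1)
        xs = list(range(w)) if it % 2 == 0 else list(range(w - 1, -1, -1))
        step = 1 if it % 2 == 0 else -1
        for y in ys:
            for x in xs:
                # 2. Propagation: try the already-scanned neighbors' offsets.
                for ny, nx in ((y - step, x), (y, x - step)):
                    if 0 <= ny < h and 0 <= nx < w:
                        cy = nnf[ny, nx, 0] + (y - ny)
                        cx = nnf[ny, nx, 1] + (x - nx)
                        if 0 <= cy < H and 0 <= cx < W:
                            c = dist(y, x, cy, cx)
                            if c < cost[y, x]:
                                nnf[y, x] = (cy, cx); cost[y, x] = c
                # 3. Random search around the current best, halving the radius.
                rad = max(H, W)
                while rad >= 1:
                    cy = int(np.clip(nnf[y, x, 0] + rng.integers(-rad, rad + 1), 0, H - 1))
                    cx = int(np.clip(nnf[y, x, 1] + rng.integers(-rad, rad + 1), 0, W - 1))
                    c = dist(y, x, cy, cx)
                    if c < cost[y, x]:
                        nnf[y, x] = (cy, cx); cost[y, x] = c
                    rad //= 2
    return nnf
```

The resulting field is what lets hole-filling tools pull plausible patches from elsewhere in the image; the magic is mostly that propagation spreads good matches across coherent regions very quickly.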
I'd really like to see a video of someone starting with an image and using these algorithms and tools to create one of these effects from start to finish.
So I think the fingers indeed were filled in algorithmically. This is plausible since, as best as I can tell, current Context Aware Fill algorithms are based on magic.
Follow some of the links here for examples and detailed explanation: http://www.adobe.com/technology/projects/content-aware-fill....
As per Wikipedia: "Imagination, also called the faculty of imagining, is the ability to form new images and sensations that are not perceived through senses such as sight, hearing, or other senses." Imagination is magic. Everyone knows that. So any generative model in general is ;)
Example with a 3D truck:
This and Photoshop's context-aware fill (to help fill the holes left from removing the object in the reference image) are very handy to achieve such effects.
In this scenario you already have all the 3D elements in hand, as well as the complete environment, so there's no need to search for them. Lots of things that would call for re-rendering can be done with this approach post-render.
Now, if the author wants to claim authenticity or ownership of a picture, he just has to present the original picture so that people can attest it is not significantly modified and/or he is indeed the author.
Of course, reading the private keys on the chip has to be very hard.
This is almost impossible – the camera's processor can be tampered with and the environment can be altered (e.g. GPS spoofing).
The best we could do would be a notary service where a trusted third party produces a signature for a set of bits at a particular time. That would prevent both altering an unwilling third party's photos and back-dating images after an important event.
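The notary idea is easy to sketch: the service signs (hash, timestamp) pairs, and anyone can later check that a given file existed unmodified at that time. A toy version, using HMAC as a stand-in for a real public-key signature (everything here, including the key name, is made up; a real notary would sign with a private key so verification doesn't require sharing a secret):

```python
import hashlib
import hmac
import json
import time

# Hypothetical secret held only by the trusted notary service.
NOTARY_KEY = b"hypothetical-notary-secret"

def notarize(image_bytes, key=NOTARY_KEY, now=None):
    """Produce a signed (hash, timestamp) record for a blob of image bytes."""
    record = {
        "sha256": hashlib.sha256(image_bytes).hexdigest(),
        "timestamp": int(now if now is not None else time.time()),
    }
    payload = json.dumps(record, sort_keys=True).encode()
    record["signature"] = hmac.new(key, payload, hashlib.sha256).hexdigest()
    return record

def verify(image_bytes, record, key=NOTARY_KEY):
    """Check both that the signature is genuine and that the bytes match."""
    payload = json.dumps(
        {"sha256": record["sha256"], "timestamp": record["timestamp"]},
        sort_keys=True).encode()
    ok_sig = hmac.compare_digest(
        record["signature"],
        hmac.new(key, payload, hashlib.sha256).hexdigest())
    return ok_sig and hashlib.sha256(image_bytes).hexdigest() == record["sha256"]
```

Note the limits: this proves a photo existed at a time, not that it depicts reality; nothing stops someone from notarizing a doctored image the moment they finish doctoring it.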
All these are unreliable to an extent which suggests there will probably be a fair market for forensic photography software in the future…
Sadly, the spec does not say how it works.
That's why I want the date and camera SN to be authenticated as well. A photo of an actual event could be shown to have the correct date, while a photo of an altered print would have a later date.
"the user (c) interactively aligns the model to the photograph and provides a mask for the ground and shadow"
What is a mask for ground and shadow and how hard is it to develop one?
To create these in Photoshop you can use the magic wand tool, which selects regions of similar color. But you can create these types of masks in a variety of ways.
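Under the hood, magic-wand-style selection is roughly a flood fill with a color tolerance. A minimal sketch (not Photoshop's actual algorithm, which adds anti-aliasing, contiguity options, etc.), assuming an H×W×3 uint8 numpy image:

```python
from collections import deque

import numpy as np

def magic_wand_mask(img, seed, tol=30):
    """Flood fill from `seed` (row, col), selecting 4-connected pixels
    whose color is within `tol` of the seed pixel's color per channel.
    Returns a boolean mask the same height and width as the image."""
    h, w = img.shape[:2]
    seed_color = img[seed].astype(int)
    mask = np.zeros((h, w), dtype=bool)
    mask[seed] = True
    q = deque([seed])
    while q:
        y, x = q.popleft()
        for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
            if 0 <= ny < h and 0 <= nx < w and not mask[ny, nx]:
                if np.all(np.abs(img[ny, nx].astype(int) - seed_color) <= tol):
                    mask[ny, nx] = True
                    q.append((ny, nx))
    return mask
```

A ground/shadow mask for the paper's pipeline would be this kind of selection (or a hand-painted one) over the ground plane and the object's cast shadow, so the re-render knows which pixels it may rewrite.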
> For areas of the object that do not satisfy the criteria of geometric symmetry and appearance similarity, such as the underside of the taxi cab in Figure 1, the assignment defaults to the stock model appearance. The assignment also defaults to the stock model appearance when after several iterations, the remaining parts of the object are partitioned into several small areas where the object lacks structural symmetries relative to the visible areas. In this case, we allow the user to fill the appearance in these areas on the texture map of the 3D model using PatchMatch.