Nice work! Are you doing any 'content-aware fill' style magic on the invented occluded pixels?

I wonder what else can be done with the depth data... Fog is the obvious one. You could potentially insert elements into the scene that knew when they were behind foreground items.
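
Here's roughly what that occlusion test could look like, as a minimal numpy sketch; the function name, the aligned depth map, and the single sprite_depth value for the inserted element are all my assumptions:

    import numpy as np

    def insert_with_occlusion(photo, depth, sprite, sprite_alpha, sprite_depth, x, y):
        # photo: HxWx3 float, depth: HxW (larger = farther), aligned to photo.
        # sprite: hxwx3 float, sprite_alpha: hxw in [0, 1], sprite_depth: scalar.
        h, w = sprite.shape[:2]
        region = photo[y:y+h, x:x+w]
        region_depth = depth[y:y+h, x:x+w]
        # Draw the element only where it is closer than the scene,
        # so foreground objects naturally occlude it.
        visible = (sprite_depth < region_depth).astype(float) * sprite_alpha
        visible = visible[..., None]
        photo[y:y+h, x:x+w] = visible * sprite + (1 - visible) * region
        return photo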

I'm sure some kind of point cloud or mesh could also be derived, but I'm not sure how good it would be.
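
For the point cloud, back-projecting the depth map through a pinhole model is probably enough to get something viewable; a sketch under the assumption that you know (or guess) the focal length in pixels:

    import numpy as np

    def depth_to_point_cloud(depth, fx, fy, cx, cy):
        # depth: HxW metric depth; fx/fy: focal length in pixels; cx/cy: principal point.
        h, w = depth.shape
        u, v = np.meshgrid(np.arange(w), np.arange(h))
        z = depth
        x = (u - cx) * z / fx
        y = (v - cy) * z / fy
        # N x 3 array of (x, y, z) points, one per pixel.
        return np.stack([x, y, z], axis=-1).reshape(-1, 3)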

Funnily enough, I nearly posted to /r/android/ earlier: "I wish you could save the generated z-buffer" - it didn't occur to me to actually look!

Fog, refocus, background separation, chroma shifting.

As for the fill: if the depth map is of good quality and doesn't have sharp edges, there's no need for it, unless you go berserk with the scale of displacement.
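
Fog in particular is almost a one-liner once you have the depth: blend each pixel toward a fog colour with exponential falloff. A rough sketch (the fog colour and density constant are just placeholders):

    import numpy as np

    def apply_fog(photo, depth, fog_color=(200, 210, 220), density=0.15):
        # photo: HxWx3 float, depth: HxW aligned to the photo (larger = farther).
        # Classic exponential fog: transmittance drops off with distance.
        transmittance = np.exp(-density * depth)[..., None]
        fog = np.array(fog_color, dtype=float)
        return transmittance * photo + (1 - transmittance) * fog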


If you're interested in Google's metadata spec for what the camera actually captures:

https://developers.google.com/depthmap-metadata/
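
Per that spec, the depth map is embedded in the image's XMP as a base64-encoded image (GDepth:Data), and GDepth:Near, GDepth:Far and GDepth:Format (RangeLinear or RangeInverse) tell you how to turn the 8-bit values back into distances. My reading of the conversion, untested against real files:

    import numpy as np

    def decode_gdepth(d8, near, far, fmt="RangeInverse"):
        # d8: HxW uint8 depth map decoded from GDepth:Data.
        # near/far: GDepth:Near and GDepth:Far from the XMP metadata.
        d = d8.astype(float) / 255.0  # normalise to [0, 1]
        if fmt == "RangeLinear":
            return d * (far - near) + near
        if fmt == "RangeInverse":
            return far * near / (far - d * (far - near))
        raise ValueError("unknown GDepth:Format: %s" % fmt)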



