I always wondered why Zorin's original work [1] didn't make it into photography or 3D rendering/gaming; we've basically had two decades of ugly perspective distortions that everybody got used to.
I would have liked to see the results compared to ground truth, i.e. a picture of the person taken from the center of the lens.
And I would have liked to see more false-positive rejection, e.g. things that get detected as a face but aren't, at the edges of photos. But really that relies on the robustness of the face-detection heuristic; as it stands, it's a short and sweet heuristic that makes people look more normal at the edges of photos.
The image on the left is a rectangular crop of the image after lens correction, the one on the right combines lens correction and their warping method. That combination allows them to pull in a few pixels that would've been outside the rectangular region otherwise.
Going into this paper, I expected they would be correcting for barrel distortion. I was disappointed to read that they instead:
>we formulate an optimization problem to create a content-aware warping mesh which locally adapts to the stereographic projection on facial regions, and seamlessly evolves to the perspective projection over the background.
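As I read it, the core idea is a spatially varying blend between two projections. The paper solves an optimization for a warping mesh; the sketch below is a simplified per-pixel radial blend I wrote to illustrate the concept, with a hypothetical smooth face-mask weight (`radial_warp_blend` and `face_weight` are my own names, not the paper's):

```python
import numpy as np

def radial_warp_blend(theta, f, w):
    """Blend perspective and stereographic radial projections.

    theta : angle from the optical axis (radians)
    f     : focal length (pixels)
    w     : blend weight in [0, 1]; 1 = fully stereographic (face region),
            0 = fully perspective (background)

    NOTE: simplified per-pixel blend for illustration only; the paper
    instead optimizes a content-aware warping mesh.
    """
    r_persp = f * np.tan(theta)           # perspective: r = f * tan(theta)
    r_stereo = 2 * f * np.tan(theta / 2)  # stereographic: r = 2f * tan(theta/2)
    return (1 - w) * r_persp + w * r_stereo

def face_weight(dist, radius, feather):
    """Toy face mask: 1 inside a detected face, 0 far away, linear ramp between."""
    return np.clip(1 - (dist - radius) / feather, 0, 1)
```

For any nonzero angle the stereographic radius is smaller than the perspective one, which is why faces near the image edge get "pulled in" and stop looking stretched, while `w = 0` in the background leaves straight lines straight.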
Reminds me of a person who made a YouTube tutorial with the factually incorrect title "Manually correct perspective in Photoshop". He did not correct the (already correct) perspective, but rather selectively distorted parts of scenery photographs so bridges looked "vertical". In fact, he was making those parts of the image no longer conform to the mathematical photo projection. Instead of picking a better-suited image projection, that video and this paper selectively fudge parts of the image to use a different projection.
[1] http://graphics.stanford.edu/~dzorin/perception/sig95/index....