Hacker News

Does anyone have a paper (paywalled is fine, I have institutional access) on the tech behind this? Fascinating!

Which part? For the main lighting technique, most of the magic comes from the artist providing the surface normal components. Then, to produce the image under a specified lighting direction, you do the standard 3D-graphics thing: take the dot product of the light vector with the normal vector and scale the diffuse color by the result. http://en.wikipedia.org/wiki/Lambertian_reflectance
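The dot-product step above can be sketched in a few lines of NumPy; this is a minimal illustration of Lambertian diffuse shading (function and array names are my own, not from any particular implementation):

```python
import numpy as np

def lambert_shade(normals, albedo, light_dir):
    """normals: (H, W, 3) unit surface normals per pixel;
    albedo: (H, W, 3) diffuse color; light_dir: (3,) direction toward the light."""
    l = np.asarray(light_dir, dtype=float)
    l /= np.linalg.norm(l)
    # Per-pixel N . L, clamped at zero so back-facing pixels go dark
    # rather than negative.
    n_dot_l = np.clip(np.einsum('hwc,c->hw', normals, l), 0.0, None)
    return albedo * n_dot_l[..., None]

# A flat pixel (normal pointing out of the screen) lit head-on keeps
# its full diffuse color:
normals = np.zeros((1, 1, 3)); normals[..., 2] = 1.0
albedo = np.ones((1, 1, 3))
shaded = lambert_shade(normals, albedo, [0.0, 0.0, 1.0])
```

Once the normal map exists, re-lighting under any new light direction is just this one dot product per pixel, which is why the technique is cheap enough to run in real time.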

If the artist doesn't draw the X, Y, and Z surface normal components directly but instead draws some other set of lighting profiles, you could use photometric stereo to recover the surface normals. (If this is the approach used, then applying such a technique to specially crafted pixel art is indeed novel.)
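For concreteness, here's a hedged sketch of the classic Lambertian photometric-stereo setup: given grayscale images of the same sprite under k >= 3 known light directions, each pixel's scaled normal is recovered by least squares (the function name and array layout are illustrative, not from the article):

```python
import numpy as np

def photometric_stereo(images, light_dirs):
    """images: (k, H, W) intensities under k lights;
    light_dirs: (k, 3) unit directions toward each light.
    Returns (normals (H, W, 3), albedo (H, W))."""
    k, h, w = images.shape
    L = np.asarray(light_dirs, dtype=float)      # (k, 3)
    I = images.reshape(k, -1)                    # (k, H*W)
    # Lambertian model: I = L @ (albedo * normal). Solve for the scaled
    # normal G at every pixel at once via least squares.
    G, *_ = np.linalg.lstsq(L, I, rcond=None)    # (3, H*W)
    albedo = np.linalg.norm(G, axis=0)
    normals = G / np.maximum(albedo, 1e-12)      # unit normals
    return normals.T.reshape(h, w, 3), albedo.reshape(h, w)
```

With exactly three non-coplanar lights the system is determined; with more lighting profiles the least-squares fit averages out drawing noise, which seems well suited to hand-painted inputs.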

Here's a factorization technique for photometric stereo that could be applied to the artist inputs: http://www.wisdom.weizmann.ac.il/mathusers/vision/courses/20...

Yeah, it's the second case I was interested in. It looks like the algorithm generates a best-fit normal map from the various lighting profiles; presumably it must be told which direction the light is coming from in each.

I do a lot of research work with stereo, so thanks for the link! Have to give that a go sometime :)

My guess is that it works by blending the hard-lighting-case sprites trigonometrically, with weights derived from the vector between the light's position and the sprite's origin.
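That guess could look something like this: assume four pre-drawn sprites lit from the right, top, left, and bottom, and derive cosine blend weights from the angle of the light relative to the sprite. This is purely a sketch of the speculation above, not the actual method:

```python
import math

def blend_weights(light_x, light_y, sprite_x=0.0, sprite_y=0.0):
    """Cosine blend weights for [right, top, left, bottom] lighting sprites."""
    angle = math.atan2(light_y - sprite_y, light_x - sprite_x)
    # Each lighting profile peaks when the light comes from its direction
    # and falls off with the cosine of the angular difference.
    weights = [max(0.0, math.cos(angle - a))
               for a in (0.0, math.pi / 2, math.pi, 3 * math.pi / 2)]
    total = sum(weights) or 1.0
    return [wgt / total for wgt in weights]
```

A light directly to the right of the sprite would then select the "lit from the right" sprite almost exclusively, with neighboring profiles fading in as the light moves around.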

I understand how normal mapping works, what I was curious about is how the normal maps are generated automatically from different lighting views.
