
> Is it so hard to believe that such models have developed a sense for how light propagates through a scene...

This is exactly the thing I usually notice in AI images (outside of the hand trope).

I'm not GP, and at best a layman in the field, but it's not hard to believe that a model can generate believable lighting given enough training data. If I'm not mistaken, though, it would do so through sheer volume of memorized correlations: "a shadow here usually accompanies an object there."

But that's extremely inefficient, and not how we reason. It's like memorizing the multiplication table without understanding math: pairing an endless number of properties with each other.

We, on the other hand, develop a grasp of where light sources exist (sun, lamp), surmise where shadows fall, and can conjure any image in our mind from that model instead.
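
To make the contrast concrete, here's a toy sketch (not how diffusion models actually work; the 2D scene and all names are made up for illustration). The "pairing" approach is a lookup table of memorized scene/shadow combinations; the "model" approach computes the shadow from the light's position with one line of geometry:

    # Toy 2D scene: a light at (light_x, light_h) above the ground,
    # and a pole of height obj_h standing at obj_x.

    # "Pairing" approach: memorize where shadows fell in training data.
    # Unseen combinations simply aren't covered.
    memorized_shadows = {
        # (light_x, light_h, obj_x, obj_h) -> shadow length
        (0.0, 10.0, 2.0, 1.0): 0.22,
        (5.0, 10.0, 2.0, 1.0): -0.33,
        # ...one entry per scene ever observed...
    }

    # "Model" approach: one rule derived from how light propagates.
    # Similar triangles: the shadow tip lies where the ray from the
    # light over the pole's top hits the ground.
    def shadow_length(light_x, light_h, obj_x, obj_h):
        return obj_h * (obj_x - light_x) / (light_h - obj_h)

    print(shadow_length(0.0, 10.0, 2.0, 1.0))  # ~0.22, memorized or not
    print(shadow_length(7.3, 12.0, 2.0, 1.5))  # works for unseen scenes too

The first approach needs a new entry for every scene it will ever render; the second generalizes from four numbers and a line of geometry.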



