Hacker News

This is one of the biggest challenges with AI in my opinion. The models can generate the transformations but they have no concept of correctness when applied to vague generative tasks like creating a face based on a set of existing photos.

Basically, there's just one level of cognition. In this case, the AI would only achieve the expected fidelity if the system were layered with more and more models that check for correctness and accuracy (does this look like a woman, does this look like a mouth, does this look like a nose, etc.). The problem with this approach is that it becomes incredibly hard to determine what set of checks is needed to be 100% successful at a complex task.
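To make the layering idea concrete, here's a minimal sketch of composing verifier models on top of a generator's output. All the names and the dict-based "image" are hypothetical stand-ins; a real system would use trained classifiers for each check:

```python
# Hypothetical sketch: layer verifier models over a generator's output.
# Each checker is a stub; in practice these would be trained classifiers
# ("does this look like a face / a mouth / a nose?").

def looks_like_face(image):
    # stub: a real version would run a face classifier
    return image.get("face_score", 0.0) > 0.5

def looks_like_mouth(image):
    return image.get("mouth_score", 0.0) > 0.5

def looks_like_nose(image):
    return image.get("nose_score", 0.0) > 0.5

CHECKS = [looks_like_face, looks_like_mouth, looks_like_nose]

def accept(image):
    """Accept a generated image only if every verifier passes."""
    return all(check(image) for check in CHECKS)

good = {"face_score": 0.9, "mouth_score": 0.8, "nose_score": 0.7}
bad  = {"face_score": 0.9, "mouth_score": 0.1, "nose_score": 0.7}
print(accept(good))  # True
print(accept(bad))   # False
```

The hard part, as noted above, isn't composing the checks — it's knowing when the list of checks is complete.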

This is the reason why I think we are still far, far away from fully cognitive AI, and it's also why you only see AI used for very narrow use cases.

Self-driving cars seem to be the first real attempt to have a broad AI system applied to a super-complex and unpredictable field, but I always see conflicting information regarding the progress and challenges in this area.




I think you could feed the output of the GAN into yet another network that assesses the quality of the generated image and automatically tweaks the parameters a bit until it doesn't look like an alien.

In fact, a network like that is probably already part of the original GAN training phase — that's essentially the role the discriminator plays.
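The tweak-until-it-passes loop described above can be sketched roughly like this. Everything here is a toy stand-in: the "generator" and "critic" are stubs, and the refinement is simple seeded hill-climbing on a scalar latent rather than real gradient steps through a network:

```python
import random

# Hypothetical sketch: generate, score with a critic, and nudge the
# latent until the critic is satisfied. Real systems would use trained
# networks and gradient-based updates instead of these stubs.

def generate(latent):
    # stub generator: "realism" is just closeness of the latent to 0.5
    return {"realism": 1.0 - abs(latent - 0.5)}

def critic(image):
    # stub quality network: returns a realism score in [0, 1]
    return image["realism"]

def refine(latent, threshold=0.9, steps=100, lr=0.1):
    """Perturb the latent, keeping only changes the critic prefers."""
    rng = random.Random(0)  # seeded so the sketch is deterministic
    for _ in range(steps):
        if critic(generate(latent)) >= threshold:
            break
        candidate = latent + rng.uniform(-lr, lr)
        if critic(generate(candidate)) > critic(generate(latent)):
            latent = candidate  # keep the tweak only if it helps
    return latent

final = refine(0.0)
print(critic(generate(final)))
```

During GAN training proper, the discriminator plays this critic role and the generator's weights (not a single latent) are what get updated.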




