
They're not just training the model to make pictures from nothing. They're training the model to make pictures from an input.

So I assume they're simulating what the sensor input would look like for, say, a planet or asteroid or elephant or whatever, viewed through the relevant type of sensor system. Then when they feed in the black hole sensor data, they get pictures that look like the black holes we imagined, even though we never told the model what a black hole looks like.
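A minimal sketch of that idea, with everything here assumed for illustration: a hypothetical linear sensor (random projections standing in for real instrument sampling), simulated training objects (bright disks), and a least-squares "model" trained to map sensor readings back to images. The point is only that the reconstructor is fit on simulated (object, measurement) pairs and then applied to measurements of an object it was never shown directly:

```python
import numpy as np

rng = np.random.default_rng(0)

n = 16  # image side length; images flatten to n*n pixels
m = 80  # number of sensor measurements (fewer than pixels)

# Hypothetical linear sensor: each measurement is a random projection
# of the image (a stand-in for whatever the real instrument records).
A = rng.normal(size=(m, n * n))

def random_disk():
    """Simulate a training object: a bright disk at a random position."""
    yy, xx = np.mgrid[0:n, 0:n]
    cy, cx = rng.integers(3, n - 3, size=2)
    r = rng.integers(2, 4)
    return ((yy - cy) ** 2 + (xx - cx) ** 2 <= r ** 2).astype(float)

# Training set: simulated objects and what the sensor would record of them.
X = np.stack([random_disk().ravel() for _ in range(2000)])  # images
Y = X @ A.T                                                 # measurements

# "Train" a reconstructor: least-squares map from measurements to images.
W, *_ = np.linalg.lstsq(Y, X, rcond=None)

# Apply it to measurements of an object the model never saw directly.
target = random_disk().ravel()
recon = (target @ A.T) @ W

err = np.linalg.norm(recon - target) / np.linalg.norm(target)
```

Swap in a realistic forward model for `A` (e.g. the actual telescope's sampling pattern) and a neural network for `W`, and this is the same shape of pipeline: the model only ever learns "measurements of simulated things look like this," and the reconstruction of the unseen object falls out of that.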
