Do you understand Katie's explanation?
They have a sparse set of data that is part of an image. They have trained a model to look at the sparse set and make an educated guess about what the full image looks like. They train it by feeding it full images.
The full images you feed into the model thus have an effect on the final image generated. To see how large that effect is, they trained different versions of the model with different sets of complete images. Some sets were images of what we thought a black hole looked like. The risk is that this heavily biases the model and ensures that the output looks like what we expect it to, even if that isn't actually true.
They also trained the model with non-black-hole images. Since the output of the model was approximately the same, this indicates that the resulting picture doesn't look like what we think a black hole looks like just because the model was trained on black hole images. It likely really looks like that.
The model doesn't need to be told what a black hole looks like. The sparse measurements, combined with knowledge of how sparse data generally combines to form an image, are enough. The model learned that the sparse data is not likely pure noise; instead, there are shapes and lines and gradients that relate the sparse data points to each other.
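Roughly, the training loop might look something like this. (A toy sketch, not the actual EHT pipeline: the model and shapes are made up, and training_pairs is assumed to be a given set of (sparse measurement, full image) pairs.)

    import torch
    import torch.nn as nn

    # Toy stand-in for the reconstruction net: 64 sparse measurements
    # in, a flattened 64x64 "full image" out.
    model = nn.Sequential(
        nn.Linear(64, 256), nn.ReLU(),
        nn.Linear(256, 64 * 64),
    )
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss_fn = nn.MSELoss()

    for sparse, full in training_pairs:  # assumed given: (sparse, full) pairs
        pred = model(sparse)             # guess the full image from sparse data
        loss = loss_fn(pred, full.view(-1))
        opt.zero_grad()
        loss.backward()                  # nudge the weights toward the target
        opt.step()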
Her sketch-artist analogy is good. If you have a functionally complete description and give it to three sketch artists from different cultures who are used to different-looking people, they will still draw the same person. However, if your description isn't actually detailed enough, their sketches will differ significantly as they use their existing knowledge and biases to fill in the gaps with what they think is likely.
What does training mean?
I thought that training means adjusting the neural network until it learns to convert our input into the expected output, the "complete image".
But if training means teaching the model to produce the expected "complete image", then how is it possible that "the output of the model was approximately the same" [for different training "complete image"s]?
The output images are approximately the same because the model is "looking" at training images at a lower level than we do. The talk says they chop the images up into small pieces, so the model never "sees" the full shapes that are in the full images. It only sees small local features. I guess it turns out that these smaller pieces are pretty generic, in that they are common between images of black holes and everything else. The curve of an elephant trunk looks similar to the curve of an event horizon if you cut it out in a small enough piece.
Perhaps if they didn't do this step, the model would be more sensitive to the images it's trained on.
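For what it's worth, the chopping step is easy to picture in code (a hypothetical sketch, with random arrays standing in for real training images):

    import numpy as np

    def patches(img, k=8):
        # Slice an image into k x k tiles; the model only ever sees these.
        h, w = img.shape
        return [img[i:i + k, j:j + k]
                for i in range(0, h - k + 1, k)
                for j in range(0, w - k + 1, k)]

    elephant = np.random.rand(64, 64)       # stand-ins for real images
    black_hole_sim = np.random.rand(64, 64)

    # At 8x8 a curve is just a curve: the patch statistics of very
    # different images end up similar, which is why the learned prior
    # isn't very sensitive to which set of full images you train on.
    print(len(patches(elephant)), len(patches(black_hole_sim)))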
>They also trained the model with non-black-hole images. Since the output of the model was approximately the same, this indicates that the resulting picture doesn't look like what we think a black hole looks like just because the model was trained on black hole images. It likely really looks like that.
If you are feeding non-black-hole images in and getting black-hole results out, wouldn't that be indicative of an over-trained model? Her other analogy was that we can't rule out that there is an elephant at the center of the galaxy, but it sounds like if you feed a picture of an elephant in, you'll get a picture of a black hole out?
They also showed that when they fed in simulated sparse measurements based on real full images of generic things, they got back fuzzy versions of the real image. So if you put in a sparsely captured elephant (if, for instance, there were one at the center of the galaxy), you'd get an image of the elephant out, not this black hole.
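That sanity check is simple to sketch. (Simplified: the real measurements are sparse Fourier samples from the telescope array, not sparse pixels, and the real prior is learned rather than a hand-coded smoothness rule.)

    import numpy as np

    rng = np.random.default_rng(0)
    truth = np.zeros((64, 64))
    truth[20:44, 20:44] = 1.0                 # stand-in "real" image
    mask = rng.random(truth.shape) < 0.1      # keep only ~10% of the pixels

    est = np.where(mask, truth, truth[mask].mean())
    for _ in range(500):                      # diffuse values into the gaps
        smoothed = (np.roll(est, 1, 0) + np.roll(est, -1, 0) +
                    np.roll(est, 1, 1) + np.roll(est, -1, 1)) / 4
        est = np.where(mask, truth, smoothed) # measured pixels stay fixed

    # If est comes back as a fuzzy version of truth (and not a ring),
    # the reconstruction is being driven by the data, not by the prior.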
To complete the artist analogy, imagine that the suspect being drawn by each artist is a stereotypical-looking American. The description given to the artists doesn't say that; it just describes how the person looks. One of the three sketch artists is American, and the others are Chinese and Ethiopian.
If the American draws a stereotypical American, how can you be sure the drawing is accurate, and that it's not just what he assumed the person would look like because everyone he has ever seen looks like that?
You look at what the other two draw. If they both draw the same stereotypical American, even though they have no knowledge of what a stereotypical American looks like, you can be pretty sure that they determined that based on the description provided to them. The actual data.
They still likely used some of their knowledge about what humans in general look like, though. This is analogous to how the model uses its training on what a generic image looks like. For instance, several sparse pixels of the same value are likely to have pixels of that same value between them. The model puts rules like this together and spits out a picture of what we think a black hole looks like, even though it's never seen a black hole before.
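That filling-in rule is about the simplest prior there is. A one-dimensional toy version:

    import numpy as np

    x_known = np.array([0.0, 10.0])   # positions of two sparse measurements
    y_known = np.array([0.7, 0.7])    # both measured the same brightness
    x_fill = np.linspace(0.0, 10.0, 11)
    print(np.interp(x_fill, x_known, y_known))  # 0.7 everywhere in between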
Did they try to feed random noise into their trained image builder?
I suspect that the output of that trained image builder is always the same "black hole", even with random noise as an input.
I think if you fed in random noise you would get random noise out.
So I assume they're simulating what an input would look like for, say, a planet or asteroid or elephant or whatever, given that it was viewed through the relevant type of sensor system. Then when they feed in the black hole sensor data, they get pictures that look like the black holes we imagined, even though we never told the model what a black hole looks like.
They don't make a habit of posting the shitty TEDx talks to the main channel, I'm guessing. (And there's plenty of those.) This is definitely high quality relative to most TEDx talks, so I understand why it was upgraded.