What Garrett Stanley and Yang Dan did in that work [1] is very cool, but it's a reconstruction. It is "what a cat sees" in the same sense that a false-color image is "what a satellite sees". Essentially, they regressed the activity of an ensemble of LGN neurons (visually responsive neurons in the thalamus) against the pixel values of the stimulus frames to build a set of linear filters in space and time. Then they used those filters to reconstruct the scenes from the neural activity alone [2]. So it's what the cat sees in a kind of information-theoretic sense.
1. https://stanley.gatech.edu/publications/stanley_dan_1999.pdf
2. And actually, on a reread this morning, it doesn't seem that they cross-validated their reconstructions. So the quality of their reconstructed movie frames may be overstated. Regardless, there is no doubt that their technique works, and it was a great starting point. Keep in mind: they did this in 1999!
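Not their actual pipeline, obviously, but for intuition, here's a minimal sketch of that style of linear decoding on synthetic data. Everything here (the toy dimensions, the ridge penalty, the `lagged_design` helper) is my own assumption, not anything from the paper. It regresses lagged ensemble activity against pixel values and then scores the reconstruction on a held-out split, which also speaks to the cross-validation caveat in footnote 2:

```python
import numpy as np

# Synthetic stand-in data. Real data would be LGN spike counts binned
# in time, paired with downsampled movie frames.
rng = np.random.default_rng(0)
T, n_neurons, n_pixels, n_lags = 2000, 40, 16 * 16, 8

stimulus = rng.standard_normal((T, n_pixels))            # flattened frames
true_filters = 0.1 * rng.standard_normal((n_neurons, n_pixels))
# Each toy neuron responds linearly to the current frame, plus noise.
responses = stimulus @ true_filters.T + 0.5 * rng.standard_normal((T, n_neurons))

def lagged_design(resp, n_lags):
    """Stack each time bin with the following n_lags-1 bins of activity,
    so the decoder can use responses that trail a frame (the temporal
    part of the spatiotemporal filters)."""
    n_windows = resp.shape[0] - n_lags + 1
    return np.stack([resp[t:t + n_lags].ravel() for t in range(n_windows)])

X = lagged_design(responses, n_lags)   # (windows, neurons * lags)
Y = stimulus[:X.shape[0]]              # frame at the start of each window

# Held-out split, since in-sample reconstructions overstate quality.
split = int(0.8 * X.shape[0])
X_tr, X_te, Y_tr, Y_te = X[:split], X[split:], Y[:split], Y[split:]

# Ridge regression: W = (X'X + lam*I)^-1 X'Y, one column per pixel.
lam = 10.0
W = np.linalg.solve(X_tr.T @ X_tr + lam * np.eye(X_tr.shape[1]), X_tr.T @ Y_tr)

Y_hat = X_te @ W
corr = np.corrcoef(Y_hat.ravel(), Y_te.ravel())[0, 1]
print(f"held-out pixelwise correlation: {corr:.2f}")
```

On real recordings you'd bin spikes, pick the lags where LGN responses actually follow the stimulus, and sweep the ridge penalty by cross-validation, but the shape of the method is the same.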