This statement is false. There is a well-known thought experiment called Mary's Room, the gist of which is that possessing all conceivable scientific knowledge about how humans perceive color is still not a substitute for being a human and actually perceiving the color red: https://philosophynow.org/issues/99/What_Did_Mary_Know
The experience of seeing red is an example of what philosophers call a “quale” (plural: “qualia”).
Given Google's AI systems that identify cats, birds, and so on, it is reasonable to imagine the technology evolving toward systems that can discuss those objects at the level of a typical person. With an AI trained on text alone, however, that is not possible: it would be like discussing color with a blind person or sound with a deaf person.
In any case, at some level everything is symbols. A video is just a bunch of 1s and 0s, as is text, and everything else. A being raised on only text input would have qualia just like a being raised on video input; it would just be different qualia.
It may well look indistinguishable from the outside, but on the inside it just wouldn't be the same.
If you assert that a person can understand everything there is to know about the color red and then still not understand what it is like to see red, you have either contradicted yourself or assumed dualism.
Also, it's a thought experiment. Some people will claim the answer to its question (did Mary learn anything new when she finally saw red?) is no, she learned nothing. Others will claim that she did. That thing she learned beyond the physical facts is what theoretically cannot be conveyed by science, or possibly even by language.