OpenAI showed this in 2017 with the sentiment neuron (https://openai.com/research/unsupervised-sentiment-neuron). Their model, trained only to predict the next character, ended up with a single unit that tracked the sentiment of the text. Sentiment is a general property of language, so I'd agree the model learned a generalized representation from the data.
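To make the idea concrete, here's a minimal sketch of the probing technique behind that finding: score each hidden unit on how well a simple threshold on its activation alone predicts sentiment, and see whether one unit dominates. The data here is synthetic (the signal is injected into unit 13 by construction), just to illustrate the method, not to reproduce OpenAI's result:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for hidden states: 64-dim activations for 200 texts,
# with the sentiment signal deliberately injected into one unit (index 13),
# mimicking the single "sentiment neuron" OpenAI found in their LSTM.
labels = rng.integers(0, 2, size=200)          # 0 = negative, 1 = positive
acts = rng.normal(0, 1, size=(200, 64))
acts[:, 13] += 3.0 * (labels * 2 - 1)          # shift unit 13 by sentiment

# Single-unit probe: accuracy of thresholding each unit at zero
# (taking whichever sign convention scores better).
accs = [max(np.mean((acts[:, i] > 0) == labels),
            np.mean((acts[:, i] <= 0) == labels)) for i in range(64)]
best = int(np.argmax(accs))
print(best, round(accs[best], 2))  # unit 13 stands out far above chance
```

With real model activations in place of the synthetic ones, the same probe is how you'd check whether a representation is concentrated in one unit or spread across many.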
Having said that, the real question is what percentage of the learned representations actually generalize. A perfect model would learn only representations that generalize and none that overfit, but that's unreasonable to expect from a machine *and* even from a human.
Maybe we just don't know. We are staring at a black box and running some statistical tests, but we don't actually know whether current AI architectures are capable of reaching anything equivalent to human intelligence.