Yes, I too am familiar with the 101-level understanding, but I've also heard of LLMs doing things that stretch that model. Perhaps that's just a matter of combining things from their training data in unexpected ways, hence the second half of my question.