I appreciate this; that is why I said "LLMs and other models". Knowing the probability relations between words, tokens, or concepts/thought vectors is important, and it can be supplemented by smaller embedded special-purpose models/inference engines and domain knowledge in those areas.
As I said, it is overhyped in some areas and underhyped in others.