EX: "How long do stars last?" means something very different in a science class than in a tabloid headline. Is that tabloid talking divorce or obscurity? Notice that, three sentences in, I am still clarifying "last".
EDIT: a combination of noise, I should say, and paucity of information.
Asking a computer to solve all the ambiguity in human language perfectly is asking it to solve it far better than any human can.
For human-level NLP, you need to model the mechanism by which the relationship network is generated, and ground it in a set of experiences - or some digital analogue of experiences.
Naive statistical methods are not a good way to approach that problem.
So no, Wikipedia will not provide enough context, for all kinds of reasons - not least that human communication carries multiple layers of meaning: some contradictory, some metaphorical, and all of them able to rest on unstated implication.
Vector arithmetic is not a useful model for that level of verbal reasoning.
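For readers unfamiliar with what "vector arithmetic" means here, this is the word-embedding analogy trick ("king - man + woman ≈ queen"). A minimal sketch, using made-up toy vectors rather than trained embeddings, of the kind of arithmetic the comment above argues is insufficient:

```python
# Toy sketch of word-vector analogy arithmetic. The vectors are
# hand-picked for illustration; real embeddings are trained from corpora.
import numpy as np

vecs = {
    "king":  np.array([0.9, 0.8, 0.1]),
    "queen": np.array([0.9, 0.1, 0.8]),
    "man":   np.array([0.1, 0.9, 0.1]),
    "woman": np.array([0.1, 0.1, 0.9]),
}

def cosine(a, b):
    # Cosine similarity: 1.0 means same direction, 0.0 means orthogonal.
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# "king" - "man" + "woman" lands nearest "queen" in this toy space.
target = vecs["king"] - vecs["man"] + vecs["woman"]
best = max((w for w in vecs if w != "king"),
           key=lambda w: cosine(target, vecs[w]))
print(best)  # queen
```

The trick captures some relational regularities, but it operates on single points in a fixed space - there is no mechanism for layered, contradictory, or context-dependent meaning, which is the gap being described.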
For an AI to determine its own goals, well, now you get into awareness ... consciousness. At a fundamental mathematical level, we still have no idea how these work.
We can measure electrical signals in the brain and know it's a combination of chemicals and pulses that somehow makes us do what we do ... but we are still a long way from understanding how that process really works.
I'd actually just say that we've not really defined these very well, and so arguing about how far along the path we are to them isn't that productive.
In a similar context they probably end up parsed as the same question, assuming matching inflection, posture, etc. Spoken conversations are messy, but they also have redundancy and pseudo-checksums. Written language tends to be more formal because it's a much narrower channel and you don't get as much feedback.
PS: It's also really common for someone to ask a question when they don't have enough context to understand what question they should be asking.