2) Training doesn't necessarily mean the model produces the "correct" output that matches the top Google result.
And LLM probability-based generation is unreasonably effective when you suppress the creativity you don't need in a factual Q&A for yourself, e.g. by lowering the sampling temperature.
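"Suppressing creativity" is typically done through the sampling temperature. A minimal sketch (function name and logit values are illustrative, not from any particular library) of how lowering the temperature sharpens the next-token distribution toward the most likely token:

```python
import math

def softmax_with_temperature(logits, temperature):
    # Scale logits by 1/temperature before normalizing;
    # lower temperature sharpens the distribution toward the top token.
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

logits = [2.0, 1.0, 0.5]  # hypothetical next-token logits

creative = softmax_with_temperature(logits, 1.0)  # flatter: spreads probability
factual = softmax_with_temperature(logits, 0.1)   # sharper: near-greedy decoding

# The low-temperature distribution puts almost all its mass on the top token,
# which is what you want for a factual Q&A rather than creative writing.
print(max(factual) > max(creative))  # True
```

At temperature 0.1 the top token here gets nearly all the probability mass, while at 1.0 it gets only about 63%; taking the temperature toward 0 approaches greedy decoding.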