> If you haven't read it yourself how do you know...
This vacuous objection can be raised against every single piece of information any human has ever learned from elsewhere, recursively, back to the dawn of communication, regardless of the nature of the third-party source.
Furthermore, LLM hallucination, particularly when reviewing documents, is no longer a problem I experience with the models I use. For example, my LLM setup and the query I would use cause the output to include verbatim quotes of the differences, which makes spot checking with Ctrl+F/F3 easy.
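A minimal sketch of that spot check in Python (the file names and the quote-tag format are my assumptions, not anything standard): ask the model to wrap every verbatim quote in tags, then confirm each quoted passage actually appears in the new TOS text.

```python
# Sketch: verify that passages the LLM quoted really appear verbatim in
# the new TOS text (a programmatic version of the Ctrl+F spot check).
# File names and the <quote>...</quote> convention are assumptions.
import re

with open("tos_new.txt", encoding="utf-8") as f:
    new_tos = " ".join(f.read().split())  # collapse whitespace once

with open("llm_analysis.txt", encoding="utf-8") as f:
    analysis = f.read()

quotes = re.findall(r"<quote>(.*?)</quote>", analysis, flags=re.DOTALL)

for q in quotes:
    normalized = " ".join(q.split())
    status = "OK" if normalized in new_tos else "NOT FOUND (possible hallucination)"
    print(f"{status}: {normalized[:80]}")
```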
LLMs are not a third-party source of information; they're prediction engines with known hallucination behaviors. When faced with a difficult or impossible task (e.g. the user fails to provide a diff, or fails to provide anything to compare against), and when the training data contains essentially one kind of answer (there is very little text on the internet that's positive about a TOS change), the most likely outcome is that the model just makes something up that resembles that kind of answer. Yes, sometimes it will notice and ask for more information, or call out to a tool to make a diff, but that all depends on the user's setup, their settings, and the state of the RNG that day.
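Producing that diff locally is the easy part; a rough sketch (file names are assumptions) using Python's difflib, so the model summarizes real changes instead of guessing at them:

```python
# Sketch: build a unified diff of the old vs. new TOS locally, then
# paste it into the prompt alongside both documents. File names are
# assumptions.
import difflib

with open("tos_old.txt", encoding="utf-8") as f:
    old_lines = f.read().splitlines()
with open("tos_new.txt", encoding="utf-8") as f:
    new_lines = f.read().splitlines()

diff = difflib.unified_diff(
    old_lines, new_lines,
    fromfile="tos_old.txt", tofile="tos_new.txt",
    lineterm="",
)
print("\n".join(diff))
```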
Surprise, surprise ... the people get one change, Name.com gets all the rest, including making parts of it more ambiguous.
But the LLM analysis made it easy to understand, and it took longer to read than it did to generate.