I was thinking LLMs are actually an incredible compression of human knowledge. If you can swing it, a decent model and a few GPUs would be amazingly handy for helping rebuild civilization.
I wonder if there has been any study of what happens to an LLM subjected to bit rot, given the heavy compression of facts on one hand and the “superposition of stored facts” on the other. (I’m obviously a layman here; I don’t even have the correct vocabulary.)
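For a rough feel of what such a study might look like, here’s a toy sketch (purely my own illustration, assuming PyTorch plus the Hugging Face transformers library and the small “gpt2” checkpoint as a stand-in) that flips a few random bits in the weights and checks how much the model’s perplexity degrades:

    # Toy "bit rot" experiment: flip random bits in the float32 weights of a small
    # LM and compare perplexity before and after. The model name and flip rate
    # are placeholders chosen only for illustration.
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    model = AutoModelForCausalLM.from_pretrained("gpt2").eval()
    tok = AutoTokenizer.from_pretrained("gpt2")

    def flip_random_bits(model, flips_per_million=10, seed=0):
        gen = torch.Generator().manual_seed(seed)
        for p in model.parameters():
            bits = p.data.view(torch.int32).view(-1)   # reinterpret float32 bit patterns
            n = max(1, int(bits.numel() * flips_per_million / 1e6))
            idx = torch.randint(0, bits.numel(), (n,), generator=gen)
            pos = torch.randint(0, 32, (n,), generator=gen, dtype=torch.int32)
            mask = torch.bitwise_left_shift(torch.ones(n, dtype=torch.int32), pos)
            bits[idx] = torch.bitwise_xor(bits[idx], mask)  # flip one random bit per hit

    def perplexity(text):
        ids = tok(text, return_tensors="pt").input_ids
        with torch.no_grad():
            loss = model(ids, labels=ids).loss
        return torch.exp(loss).item()

    sample = "The capital of France is Paris."
    print("clean perplexity:    ", perplexity(sample))
    flip_random_bits(model, flips_per_million=10)
    print("corrupted perplexity:", perplexity(sample))

My naive expectation is that flips in the low mantissa bits barely matter, while a flipped sign or exponent bit in the wrong place can blow the whole thing up (perplexity jumping to something huge, or NaN), but that’s exactly the kind of thing I’d love to see measured properly.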
But that’s during training, not during inference, and dropout is more structured in where it happens. I do think it points to them being somewhat resilient, but we haven’t had LLMs around long enough for a good test.
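To make the training-vs-inference distinction concrete: dropout only does anything while the network is in training mode; at inference it’s a no-op, so it doesn’t really model weights rotting in storage. A tiny PyTorch illustration:

    # Dropout is a train-time regularizer: in train() mode it zeroes activations
    # at random (and rescales the rest); in eval() mode it does nothing at all.
    import torch
    import torch.nn as nn

    drop = nn.Dropout(p=0.5)
    x = torch.ones(8)

    drop.train()
    print(drop(x))   # about half the entries zeroed, survivors scaled by 1/(1-p) = 2
    drop.eval()
    print(drop(x))   # identical to x: dropout is a no-op at inference time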