I was thinking LLMs are actually an incredible compression of human knowledge. If you can swing it, a decent model and a few GPUs would be amazingly handy to help rebuild civilization.



I wonder if there has been any study of what happens to LLMs subjected to bit rot, given the heavy compression of facts on one hand and the “superposition of stored facts” on the other. (I’m obviously a layman here; I don’t even have the correct vocabulary.)
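
Not aware of a formal study, but one crude way to get a feel for it is to flip random bits in a weight matrix and watch the output drift. A rough, hypothetical sketch (numpy only, a toy single-layer “model”, and the flip rate is just a made-up knob):

    import numpy as np

    rng = np.random.default_rng(0)

    def flip_random_bits(weights, flip_prob):
        """Flip each bit of a float32 array independently with probability flip_prob."""
        as_bits = weights.astype(np.float32).view(np.uint32)
        mask = np.zeros_like(as_bits)
        for bit in range(32):
            # Set this bit in the mask with probability flip_prob per element.
            mask |= (rng.random(as_bits.shape) < flip_prob).astype(np.uint32) << bit
        return (as_bits ^ mask).view(np.float32)

    # Toy "model": a single linear layer applied to a fixed input.
    w = rng.standard_normal((256, 256)).astype(np.float32)
    x = rng.standard_normal(256).astype(np.float32)
    clean = w @ x

    for p in (1e-7, 1e-5, 1e-3):
        noisy = flip_random_bits(w, p) @ x
        err = np.linalg.norm(noisy - clean) / np.linalg.norm(clean)
        print(f"flip prob {p:g}: relative output error {err:.3f}")

Flips that land in exponent or sign bits do far more damage than mantissa flips, so the error is very uneven across runs; a real study would presumably measure perplexity on an actual model rather than this toy.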


Lots of models are trained with dropout, which is kinda like bitrot at very high rates...


But that’s during training, not during inference, and the dropout is more structured in where it happens. I do think it points to them being somewhat resilient, but LLMs haven’t existed long enough for a good test.
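
For what it’s worth, the train-only point is easy to see in PyTorch: a dropout layer only zeroes activations while the module is in training mode and is a pass-through at inference, so corruption at inference time is damage the network never learned to route around. A minimal illustration (assumes PyTorch is installed):

    import torch
    import torch.nn as nn

    torch.manual_seed(0)
    drop = nn.Dropout(p=0.5)   # zeroes ~half the activations, scales the rest by 2x
    x = torch.ones(8)

    drop.train()               # training mode: dropout is active
    print(drop(x))             # roughly half the entries are zero

    drop.eval()                # inference/eval mode: dropout is a no-op
    print(drop(x))             # identical to the input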


Or just get lucky and end up with a Dr. Stone [1] :)

[1]: https://en.wikipedia.org/wiki/Dr._Stone



