
I wonder if there has been any study of what happens to LLMs subjected to bit rot, given the degree to which facts are compressed on one hand and the “superposition of stored facts” on the other (I’m obviously a layman here; I don’t even have the correct vocabulary).



Lots of models are trained with dropout, which is kinda like bitrot at very high rates...
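(Rough sketch of what that looks like, assuming PyTorch; the layer sizes and dropout rate are made up for illustration:)

    import torch
    import torch.nn as nn

    layer = nn.Linear(512, 512)
    drop = nn.Dropout(p=0.1)   # each training step randomly zeroes ~10% of activations

    layer.train()
    x = torch.randn(8, 512)
    y = drop(layer(x))         # the network has to learn to cope with the missing units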


But that’s during training, not during inference. It’s also more structured in terms of where the dropout happens. I do think that points to them being somewhat resilient, but LLMs haven’t existed long enough for a good test.
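You could probably get a crude test today by corrupting a trained model’s weights directly and measuring how much quality drops. A minimal sketch, assuming PyTorch (the corruption rate and the use of noise as a stand-in for flipped bits are arbitrary choices):

    import torch

    @torch.no_grad()
    def rot_weights(model, rot_prob=1e-6):
        # Unstructured, inference-time corruption: unlike dropout, this hits
        # stored weights at random rather than activations during training.
        for p in model.parameters():
            mask = torch.rand_like(p) < rot_prob
            # crude stand-in for flipped bits: replace affected weights with noise
            p[mask] = torch.randn(int(mask.sum()), dtype=p.dtype, device=p.device)

Then compare perplexity on some held-out text before and after corruption.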



