Well, humans already produce "an innocuous, test-passing function that's catastrophically wrong" even without LLMs involved. So the real question is whether LLM adoption will significantly increase the rate of such incidents. I don't know if anyone can really answer that yet.