
I tend to think it's more about people using languages that fail to provide enough assurances about how they behave in all cases. Assumptions have to be made in practical use, and those assumptions end up being wrong in odd, minute ways, or on new platforms with slightly different behavior, or after compiler writers decide to take advantage of some ambiguity for the sake of performance.

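For example, here's a minimal C sketch of the kind of ambiguity I mean (the language and the signed-overflow example are my assumption, not something specific): signed integer overflow is undefined behavior in C, so an optimizing compiler is allowed to assume it never happens and delete an overflow check outright.

    #include <limits.h>
    #include <stdio.h>

    /* Signed overflow is undefined behavior, so an optimizing compiler
       may assume x + 1 > x always holds for signed x and compile this
       check down to a constant 0. */
    static int will_overflow(int x) {
        return x + 1 < x;
    }

    int main(void) {
        /* Built with optimizations (e.g. gcc -O2), this can print 0
           even though x + 1 wraps past INT_MAX at runtime. */
        printf("%d\n", will_overflow(INT_MAX));
        return 0;
    }

Code that relied on that check being honored was "correct" on the compiler it was written against, and silently broken on the next one.
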
When someone tries your software with a newer compiler, a newer CPU, or a slightly different architecture than the one you wrote it on and it doesn't work right, it's easy to come away thinking programs are never "done".



