
This is different. A failure, like a flipped bit in RAM, is just that: a failure. With many-core processors, the normal mode of operation is to produce correct results that may nonetheless differ from run to run, assuming more than one correct solution exists. Even if the results are exactly the same, or only one solution exists, the process of getting there might differ.



Sure, the reason for the unreliability is different, but the outcome is the same, so why does it matter?


If I understood correctly, the outcome is not the same. With a memory failure you get incorrect results. With a concurrent program you get correct results, but they might not be the same results every time (provided there's more than one correct answer), or the computation that produced them might be different (in timing, power consumption, etc.).
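
For example, a minimal Go sketch (findAny and the test values are hypothetical, made up for illustration): several goroutines race to report any index holding the target, so every run returns a correct index, but not necessarily the same one.

    package main

    import "fmt"

    // findAny returns the index of some occurrence of target.
    // Each goroutine checks one slot; whichever match reaches the
    // channel first wins, so different runs can return different,
    // equally correct indices. (Illustrative sketch: it assumes
    // target occurs at least once, otherwise the receive blocks.)
    func findAny(xs []int, target int) int {
      found := make(chan int, len(xs)) // buffered: losing goroutines never block
      for i := range xs {
        go func(idx int) {
          if xs[idx] == target {
            found <- idx
          }
        }(i)
      }
      return <-found
    }

    func main() {
      xs := []int{7, 3, 7, 1, 7}
      fmt.Println(findAny(xs, 7)) // may print 0, 2, or 4 on different runs
    }

All of 0, 2, and 4 are correct answers; the scheduler just decides which one you observe on a given run.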


Still seems the same to me in principle. For problems where there are multiple correct answers, probabilistic methods are frequently used to find them (e.g. genetic algorithms). All kinds of things can change timing and power consumption, especially when using a probabilistic method.
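
For instance, even a trivial probabilistic method shows the "multiple correct answers" behavior. Here is a minimal Go sketch (solveOne and all values are made up for illustration; a much simpler method than a full GA): Newton's iteration on x^2 = 4 from a random starting point lands on either root, +2 or -2, and both are correct.

    package main

    import (
      "fmt"
      "math/rand"
    )

    // Newton's method on f(x) = x^2 - 4 from a random start.
    // Both roots, +2 and -2, are correct; which one a run converges
    // to depends only on the sign of the random starting point.
    // (Sketch: assumes the start isn't exactly 0.)
    func solveOne(r *rand.Rand) float64 {
      x := r.Float64()*20 - 10 // random start in [-10, 10)
      for i := 0; i < 60; i++ {
        x -= (x*x - 4) / (2 * x) // Newton step
      }
      return x
    }

    func main() {
      r := rand.New(rand.NewSource(42)) // vary the seed, vary the answers
      for i := 0; i < 3; i++ {
        fmt.Println(solveOne(r)) // each line is ~2 or ~-2
      }
    }

Different seeds give different answers and different iteration paths, i.e. different timing, yet every answer is correct.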



