
Isn't every real computer program inherently probabilistic? Even if the code is not, the hardware always has some chance of failure. Even with ECC RAM and the like, there is always some chance that multiple bits will be simultaneously corrupted into a new but still valid configuration.

You can't get to 100% reliability. You just have to choose how many nines you want. Why should this be any different?
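For a rough sense of scale (my own arithmetic, not from the article): each extra nine of availability cuts the allowed downtime per year by a factor of ten. A quick sketch:

    # Rough downtime budget per year for N nines of availability.
    MINUTES_PER_YEAR = 365 * 24 * 60

    for nines in range(2, 6):
        unavailability = 10 ** -nines
        downtime_min = MINUTES_PER_YEAR * unavailability
        print(f"{nines} nines: ~{downtime_min:.1f} min/year of downtime")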




This is different. A flipped bit in RAM is a failure: you get an incorrect result. With many-core processors, the normal mode of operation is to produce correct results that may nonetheless differ from run to run, assuming more than one correct solution exists. Even if the results are exactly the same, or only one solution exists, the process of getting there might be different.
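A minimal sketch of what I mean (Python, with a made-up worker function): every run produces a correct set of results, but the order in which the threads finish, and therefore the final list, can differ between runs.

    import threading

    results = []
    lock = threading.Lock()

    def worker(n):
        # Each worker computes a correct partial result.
        with lock:
            results.append(n * n)

    threads = [threading.Thread(target=worker, args=(i,)) for i in range(4)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()

    # The contents are always {0, 1, 4, 9}, but the order depends on
    # scheduling, so repeated runs can print different (equally valid) lists.
    print(results)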

-----


Sure, it's different in terms of why it's not reliable, but the outcome is the same, so why does it matter?

-----


If I understood correctly, the outcome is not the same. With a memory failure you get incorrect results. In a concurrent program, you get correct results, but they might not be the same results every time (provided that there's more than one correct answer), or the computation that produced them might be different (in terms of timing, power consumption, etc.).

-----


Still seems the same to me in principle. For problems where there are multiple correct answers, probabilistic methods are frequently used to find them (e.g. genetic algorithms). All kinds of things can change timing and power consumption, especially when using a probabilistic method.
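As a toy illustration (my own example, not tied to genetic algorithms specifically): a randomized search over a problem with many correct answers returns a valid answer every time, but which answer you get, and how long the search takes, varies from run to run.

    import random

    # Toy problem with many correct answers: find any pair of distinct
    # candidates that sums to the target. Which valid pair comes back,
    # and how many samples it takes, differs between runs.
    def find_pair(candidates, target):
        while True:
            a, b = random.sample(candidates, 2)
            if a + b == target:
                return a, b

    print(find_pair(list(range(20)), target=10))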

-----



