
Is there any evidence that running a GPU/CPU at 100% for X years degrades the product vs average use over the same period?


No; there have been many tests. The mistake seems to come from thinking GPUs "wear out" like cars do. The cooler might, but the silicon is fine.

Some even suspect that crypto mining is gentler on the card than gaming, since keeping it pegged at 100% avoids the constant ramp-up/ramp-down of clocks and activity. In turn, that would reduce thermal cycling stress.


Actually, there's reasonable evidence to believe this isn't true. This paper from Google (https://sigops.org/s/conferences/hotos/2021/papers/hotos21-s...) outlines specific scenarios where CPUs fail over time. Given the evidence that these are silicon defects that actually worsen over time, there's no reason to imagine these failures don't extend to GPUs as well.

The difference in the data here is obviously scale: Google has -way- more CPUs than GPUs, so the absolute failure counts will be different.


I'll concede that silicon can wear out over time; it can't be completely immune to it, and I wasn't speaking in absolutes. But as you mention, it's a question of scale. I'm curious how likely it is that a second-hand GPU from a crypto miner is actually affected.


So that's actually one of the awful implications of this paper: it's probably happening at a rate higher than humans would ever notice.

If a given piece of silicon is hosing up a GEMM (matrix multiply), in graphics scenarios this may be invisible to the human eye: it could just introduce artifacts into a rendered scene that are entirely ephemeral to that one frame.

In the case of crypto mining, though, it's completely possible (probable?) that there are GPUs that can't ever calculate a proper SHA3 hash (see the paper's example of AES instructions that fail in symmetric ways).
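
To make that asymmetry concrete, here's a toy sketch in Python (my own illustration, not anything from the paper; the flipped bit stands in for a marginal execution unit corrupting a value somewhere in the pipeline). One wrong value in a frame buffer is a single bad pixel for 1/60th of a second; the same fault feeding a SHA3 computation changes the whole digest, so the mined work is never valid:

    import hashlib

    block = bytes(64)                    # stand-in for a mining work unit
    good = hashlib.sha3_256(block).hexdigest()

    corrupted = bytearray(block)
    corrupted[17] ^= 0x01                # one bit flipped by a faulty unit
    bad = hashlib.sha3_256(bytes(corrupted)).hexdigest()

    print(good)
    print(bad)                           # shares essentially nothing with good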


> GPUs aren’t cars

I think another comparison is with hard drives, which are also used by some cryptocurrency schemes, and do degrade faster with intensive use.


Yes, because hard drives, like cars, have moving parts that can wear out.[a] The silicon of a GPU doesn't have moving parts and is therefore more resilient.

[a]: It's (hopefully) common knowledge in the tech community that purchasing used hard drives is a Bad Idea(TM)


SSDs have no moving parts, and they wear out too, right?


Yes, but for different reasons: flash cells have a finite write endurance. The grandparent is probably incorrect, though; there is emerging evidence that silicon itself is actually changing/failing over time. See this paper from Google on their CPU cores, where they have practical evidence of this occurring: https://sigops.org/s/conferences/hotos/2021/papers/hotos21-s...

If their data is correct, it should follow that these exact issues will show up on small-process-node GPUs as well.
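
The detection idea is simple enough to sketch. Here's a toy version in Python/NumPy (my own, not the paper's tooling; a real GPU screen would run the kernel on the device via CUDA/OpenCL rather than on the host, and this assumes the local BLAS is run-to-run deterministic): run the same deterministic workload many times and flag any result that disagrees with the reference. On healthy silicon it never fires; on a "mercurial" part it fires intermittently.

    import numpy as np

    def checksum(seed: int = 0) -> bytes:
        rng = np.random.default_rng(seed)    # fixed seed -> identical inputs every run
        a = rng.standard_normal((256, 256))
        b = rng.standard_normal((256, 256))
        return (a @ b).tobytes()             # deterministic matmul result, serialized

    reference = checksum()
    for i in range(1000):
        if checksum() != reference:
            print(f"silent corruption detected on iteration {i}")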


It's not emerging; electromigration has been a core constraint in semiconductor design for 60-odd years. E.g.:

Blech, I. A. and Sello, H. (1966). The Failure of Thin Aluminum Current-Carrying Strips on Oxidized Silicon. In Proc. Symposium on Physics of Failure in Electronics, 496-505.



