I wasn't aware that there was a difference at all. Haven't there been cases of changing a single resistor to switch the card's mode?
In theory you make 1 model of chip.
You manufacture them.
You test them: what clock speeds are they stable at? Are any of the cores faulty?
Many (probably most) will have defects. Some will be duds and have to be thrown out entirely.
The parts of the chip with the defects are switched off and it is resold as lower-end hardware.
The common, less capable parts sell for less; the rarer, fully working parts sell for more.
The high-end parts sell for more than the cost of manufacturing, the low-end parts go for less, and they offset each other.
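Roughly, the binning step is just a classification over the per-die test results. A toy sketch of the idea (the thresholds, core counts and SKU names here are invented for illustration, not real vendor numbers):

```c
#include <stdio.h>

/* Toy model of chip binning: classify each tested die into a SKU.
 * All thresholds and names are made up for illustration. */
struct die_test {
    int max_stable_mhz;   /* highest clock the die passed at */
    int working_cores;    /* cores that returned correct results */
    int total_cores;
};

static const char *bin(struct die_test t)
{
    if (t.working_cores < t.total_cores - 2)
        return "scrap";                 /* too many dead cores: dud */
    if (t.working_cores == t.total_cores && t.max_stable_mhz >= 1100)
        return "high-end SKU";          /* fully enabled, high clocks */
    if (t.max_stable_mhz >= 950)
        return "mid-range SKU";         /* some cores fused off, lower clocks */
    return "low-end SKU";
}

int main(void)
{
    struct die_test d = { .max_stable_mhz = 980, .working_cores = 15, .total_cores = 16 };
    printf("%s\n", bin(d));             /* prints "mid-range SKU" */
    return 0;
}
```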
So when people force a model of GPU to perform above its rating, they are basically ignoring the QC and will be dealing with undefined behaviour. Was the card binned down just because it produced more heat at the higher levels (in which case a high-end heatsink might counteract the issue), or because there is a core that returns faulty data?
That's only the theory though.
In reality it might be artificially skewed to increase profits. Maybe there are many more fully working chips than the desired price tiers call for. Maybe the cheaper models are selling better, so parts of chips are turned off even though they passed QC. There are only two companies selling gaming GPUs, so there could be an artificial duopoly: Nvidia only has to be price-competitive with AMD.
But I thought pro chips actually tended to be clocked a bit lower than gaming chips. There's no 'ignoring QC' there if you trick the board, quite the opposite.
Yes, the pro GPUs have different clocks, ECC memory, and there may be other differences on the board as well.
Resistor hack turns a Nvidia GTX690 into a Quadro K5000 or Tesla K10
No it doesn't. This was discussed earlier in the thread you link to. This hack triggers a bug in the display driver, which allows you to work around a silly limitation in the Linux driver.
You can do the same with a software hack in the kernel that spoofs the PCI vendor:device identifiers. And the driver bug is probably fixed so it won't work any more. Hopefully the silly monitor count limitation (not present on Windows drivers with same GPU) is lifted as well so this hack isn't needed any more.
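For illustration, the Linux kernel already has a mechanism for overriding what a device reports about itself: a PCI fixup quirk. A minimal sketch of that idea is below; the device IDs are placeholders I have not verified, and the proprietary driver may read the ID straight from config space, so this is a sketch of the general technique rather than the exact hack discussed in the thread:

```c
#include <linux/pci.h>

/* Illustrative only: make the spoofed device ID visible to drivers by
 * rewriting pci_dev->device during enumeration. IDs are placeholders. */
static void quirk_spoof_device_id(struct pci_dev *dev)
{
	pci_info(dev, "spoofing device ID 0x%04x -> 0x1234\n", dev->device);
	dev->device = 0x1234;   /* placeholder: the pro card's device ID */
}
DECLARE_PCI_FIXUP_EARLY(PCI_VENDOR_ID_NVIDIA, 0x5678, quirk_spoof_device_id);
```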
This resistor hack does not change the GPU clocks or enable units that have been fused off. It does not turn a gaming gpu into a pro gpu.
There are also other differences such as error correcting memory in pro gpus, different clocks, etc. Some units that are not essential for gaming performance have been fused off in gaming gpus so that the chips can run faster and hotter.
It's funny how you never hear anyone complain about the fact that Intel's i3, i5 and i7 models are almost the same chip but there's a huge price (and perf) difference. Intel's latest product line has 22 chips of the same design, and they definitely don't have 22 semiconductor production lines.
> I wasn't aware that there was a difference at all. Haven't there been cases of changing a single resistor to switch the card's mode?
This was debunked in an earlier HN discussion, all the resistor hack does is spoof the PCI vendor:device identifier. The fused off parts won't come back up. It won't turn a gaming gpu into a pro gpu, it only triggers a bug in the driver that allows multiple displays (what a silly limitation).