The GPU shortage ended not because of a crypto crash, but because Ethereum switched to proof-of-stake and stopped using GPUs. Other cryptocurrencies pay out only a fraction of what Ethereum mining did, so mining them on GPUs isn't profitable unless you have free electricity.
Literally none of the LLMs we're talking about were trained on the consumer GPUs affected by the shortage; they were trained on things like NVIDIA A100 pods or custom hardware like Google TPU clusters.
The GPUs that are good for cryptocurrency mining are decent for running ML models but poor for training LLMs, and vice versa; the hardware requirements diverge. Training large models demands not just high compute but enormous amounts of memory and extremely high-speed interconnect, which costs far more than the pure compute that mining needs, so that hardware isn't cost-efficient for mining.
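To see why memory dominates, here's a rough back-of-envelope sketch. It assumes a common mixed-precision Adam setup (fp16 weights and gradients, plus fp32 master weights and two fp32 optimizer moments); the 70B figure is just an illustrative model size, not any specific model.

```python
# Rough estimate of memory needed just for training state of a dense LLM.
# Assumption: mixed-precision Adam -> fp16 weights (2 B) + fp16 grads (2 B)
# + fp32 master weights (4 B) + two fp32 moments (4 B + 4 B) = 16 B/param.
def training_memory_gb(params_billions: float) -> float:
    params = params_billions * 1e9
    bytes_per_param = 2 + 2 + 4 + 4 + 4
    return params * bytes_per_param / 1e9

# A hypothetical 70B-parameter model:
print(f"{training_memory_gb(70):.0f} GB")  # 1120 GB
```

That's over a terabyte of state before counting activations, so the model must be sharded across many accelerators, and every training step moves data between them, which is why interconnect bandwidth matters so much. A mining card with 8–24 GB and no fast interconnect simply can't play in that game.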