I vaguely recall (many years ago) some GPU (a Matrox one, iirc) coming out and it being a huge deal because of how many transistors it had. I think it was something like an amazing 100 million. Times have changed :D
The MHz->GHz wars. They drove Intel and AMD to chase hyper-deep pipelines with terrible stall characteristics and awful cycle efficiency.
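Back-of-envelope on why deep pipelines stall badly: a branch misprediction flushes roughly the whole pipeline, so the penalty scales with depth. The stage counts, branch frequency, and mispredict rate below are illustrative assumptions, not measured figures.

```python
def cpi_penalty(pipeline_depth, branch_freq=0.2, mispredict_rate=0.05):
    """Extra cycles per instruction lost to branch mispredictions.

    Assumes a flush costs roughly the pipeline depth in cycles, and that
    branch_freq of instructions are branches with the given mispredict rate.
    """
    return branch_freq * mispredict_rate * pipeline_depth

# ~P6-era (12-stage) vs ~Prescott-era (31-stage) depths, both assumed:
print(cpi_penalty(12), cpi_penalty(31))  # 0.12 vs 0.31 extra CPI
```

Same workload, same predictor accuracy, and the deeper machine eats well over twice the misprediction overhead, which is part of why the GHz figure flattered those chips.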
You're misremembering. Matrox cards were also terribly slow in DOS and anything adjacent to 3D. I own several models with SGRAM. For scale: even the GeForce 2 GTS was 25M transistors; the G200, 10M; the Voodoo2, 4M.
> You're misremembering. Matrox cards were also terribly slow in DOS and anything adjacent to 3D
Nope, it was the Parhelia, released in 2002 - so somewhat later than the hardware you're considering. The Parhelia was what I was thinking of, with a truly insane number of transistors: 80 million! ("something like" was bearing quite a lot of weight in my first comment :D ). However, it was only in the same ballpark, not actually faster than the GeForce4 Ti or the Radeon 8500, and it was more expensive, so it failed in the market.
The difference between 1990 and 2000 was astounding. Between 2000 and 2010, OK. Between 2010 and 2020, meh. Now, what computers did in the '90s, cell networks did from 2005 to 2015.
Moore's law is still alive. By 2030 we're estimated to have one trillion transistors on a chip. We'll need that for our ever-increasing hunger for memory and processing power: a 1TB model on your iPhone 22 at 100 tokens per second.
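The trillion-transistor figure roughly checks out as a pure doubling exercise. A quick sketch, assuming a ~114-billion-transistor 2022 baseline (the Apple M1 Ultra is one such chip) and a doubling every two years; both numbers are assumptions for illustration:

```python
# Project transistor counts forward under Moore's-law doubling.
BASE_YEAR = 2022
BASE_TRANSISTORS = 114e9  # assumed 2022 baseline (~M1 Ultra scale)

def projected(year, doubling_period=2.0):
    """Transistor count projected to `year` by exponential doubling."""
    return BASE_TRANSISTORS * 2 ** ((year - BASE_YEAR) / doubling_period)

print(f"{projected(2030):.2e}")  # 1.82e+12 -- on the order of a trillion
```

Four doublings between 2022 and 2030 gets you to ~1.8 trillion, so the 2030 estimate only needs the historical pace to mostly hold.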