Also, I'm not clear on the arguments presented in the slides. The transistor normalized price curves for 20nm and 14nm do eventually cross over the preceding curves, unlike the 40nm curve. And even putting the same transistor count on a smaller process node can result in lower power consumption once the leakage problems are mitigated, so the value is increased.
Edit: For historical background, here's a fascinating conversation from 5 years ago between Morris Chang (founder and chairman, TSMC) and Jen-Hsun Huang (CEO, Nvidia):
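To make the crossover argument concrete, here's a toy model in Python. Every number in it is made up; the point is only the shape of the curves: a new node starts with expensive, low-yield wafers, so its normalized price per transistor begins above the old node's curve and crosses below it as yields mature.

    def cost_per_transistor(launch_price, mature_price, density, quarters):
        # Hypothetical learning curve: wafer price decays toward its mature level.
        wafer_price = mature_price + (launch_price - mature_price) * 0.8 ** quarters
        return wafer_price / density

    # Made-up parameters: the 28nm wafer launches expensive but carries ~2x density.
    for q in range(8):
        c40 = cost_per_transistor(4000, 2500, density=1.0, quarters=q + 8)   # mature node
        c28 = cost_per_transistor(7000, 3500, density=2.04, quarters=q)      # new node
        marker = "  <- 28nm now cheaper" if c28 < c40 else ""
        print("quarter %2d: 40nm %7.1f  28nm %7.1f%s" % (q, c40, c28, marker))

With these invented parameters the 28nm curve crosses under the 40nm curve after about three quarters; the complaint in the article amounts to saying that crossover point keeps moving further out.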
That's not the reason. Intel's fabs are at capacity producing chips with much higher margins than GPUs. Check the price per die area on a GeForce card vs. the CPU that goes on the same motherboard, and then remember that the video card has to be assembled and stuffed (with a ton of DRAM) and board-tested, whereas the CPU just has to be packaged and shipped. The economics simply wouldn't allow it even if Intel wanted to play.
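A back-of-the-envelope sketch of that comparison; every price, die size, and cost figure below is invented purely to illustrate the shape of the margin argument, not taken from Intel or NVIDIA.

    # All numbers are hypothetical, just to show the comparison.
    cpu = {"price": 300.0, "die_mm2": 216.0, "non_silicon_cost": 10.0}   # package + test
    gpu = {"price": 500.0, "die_mm2": 520.0, "non_silicon_cost": 150.0}  # board, DRAM, assembly, test

    for name, part in (("CPU", cpu), ("GPU card", gpu)):
        per_mm2 = (part["price"] - part["non_silicon_cost"]) / part["die_mm2"]
        print("%s: ~$%.2f of silicon revenue per mm^2 of die" % (name, per_mm2))

    # ~$1.34/mm^2 vs ~$0.67/mm^2: a capacity-constrained fab fills its
    # lines with the product that pays the most per unit of wafer area.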
I used to design CPU tests for a major x86 CPU manufacturer. 100% of manufactured chips are tested, sometimes for many hours.
That's still a half node ahead, but this is mostly a matter of Intel specifically, rather than CPU manufacturers in general, being fast to adopt more advanced process nodes.
Just another chapter, like the breakup of Wintel. The 800-pound gorilla in the room is that the PC business is no longer king. A company like TSMC has limited resources, and as demand for them skyrockets from mobile companies, the price goes up.
AMD just gave up their stake in GlobalFoundries, so I'm assuming so.
At the absolute cutting edge, I think there are only two or three companies on the planet stamping hardware at that level: Intel is one, TSMC another, and I'm sure IBM is in there somewhere. (Does IBM have its own foundries? Given their PowerPC history and then the Cell processor I assumed so, but maybe all that production was external?)
So yes, production costs have increased. And that may have (is having, I guess) effects on the speed at which new technologies are adopted (i.e. the crossover into "worth it" is delayed).
But isn't that just a way of saying that semiconductors are finally becoming a mature technology? That's not really such a shock, nor does it justify the poo-flinging at TSMC.
So the real question in my mind is whether TSMC is having problems that the other foundries (Samsung or GlobalFoundries, say) are not. Given that Intel has been sampling 22nm parts already, it seems like the real news here is that TSMC sucks, and the bit about production costs is just ammunition.
If an entire industry's financial projections are based on exponential improvement then becoming a mature technology is apocalyptic, especially if it happens earlier than predicted.
I think Intel will be able to bundle a "good enough for playing AAA games in 1080p with maxed-out graphics" video processor into the main CPU in about one to two years, maybe less. Those CPUs will be more expensive than the tinier (in transistor count) ones. This will happen around when they hit 14nm. NVIDIA will go out of business or become a niche company dedicated to vector processing for scientific computation or heavy workstations, and maybe a small player in the ARM market with their Tegra architecture. Maybe AMD will go down the same road, making something similar to Intel but using their ATI technology.
Bonus 1: too bad, no more Singularity mumbo-jumbo in less than two years.
Bonus 2: I am one of the few that believes ARM will never be relevant in the desktop/laptop/console/gaming segment, because of what I think Intel will come up with.
And then you see what game devs are working on for the future such as this: http://www.youtube.com/watch?v=Gf26ZhHz6uM or this: http://www.youtube.com/watch?v=Q4MCqM6Jq_0 or this: http://www.youtube.com/watch?v=EhwZ7Sb0PHA and the notion of "good enough" quickly flies out the window.
It is fascinating to see the split between design and fabrication play out like it has. As Sun faded out, I thought they were at a huge disadvantage without fabrication capability; it really looked like the only way to compete and matter was to do it all. That could ultimately end up being Intel's disadvantage if some of the fab companies can outdo them on fab costs; but like I said, Intel is insanely good at what they do.
It's also remarkable that Nvidia now looks like the odd man out. They seemed unbeatable for so many years there.
And I don't know a lot about hardware, but my GTX 460 can't max out games even at 1080p, and it's pulling 150 watts. If you packed all of that hardware and 1GB of video memory onto a CPU, it would melt.
I think 1080p is here to stay, at least for three or four years. TV manufacturers seem to be focused more on things like 3D than on increasing resolution. I don't blame them: 1080p for movies had very slow acceptance, and 1080p is already pretty much indistinguishable from 4K when viewed from more than 3-4 meters away, or when projected at less than 60 inches of diagonal size.
Have you seen one of the new iPad screens? It's hard to believe that that kind of DPI on computer monitors isn't going to have huge adoption.
They can catch up as fab processes make their way into consoles, or into some sort of gaming set-top box that works more like a PC. I believe it's sort of a common assumption that PCs are going niche.
> Have you seen one of the new iPad screens? It's hard to believe that that kind of DPI on computer monitors isn't going to have huge adoption.
I haven't. I'm sort of into photography (see profile) and would like to, in order to see some of my photos displayed at that DPI on an iPad held about 20-30cm from my face. That resolution should really help give more of a depth effect in full-screen photos, and maybe some day we'll no longer need to print them in order to fully appreciate them. I know I'll drool in awe when I see one; it happened with the iPhone 4 :P
But... remember, resolution is all about distance, because as you put things further away, pixels seem smaller. According to Apple, a "retina display" is 57 arcseconds; that's (I believe) an iPad 3 at 30cm.

WARNING: I didn't find an easy calculator for these numbers, so please someone correct me if I'm wrong. A 46-inch 1080p TV at 2.5m should present you with smaller pixels than that, around 45 arcseconds. A 27-inch computer monitor at 2560x1440, like the iMac display or the Dell U2711, at 60cm has a pixel size of around 75 arcseconds.

So we are almost there for the best displays, and definitely already there for big TVs, taking average viewing distance into account. Since I imagine graphics-intensive gaming will take place mostly on TVs, using something more like a console than a PC, I don't see a bright future for NVIDIA, but I do see most gaming taking place on Intel SoCs.
Also, to complete my numbers: a 23-inch 1080p monitor at 60cm should be more like 85 arcseconds... so maybe there is still some leeway for graphics cards, but they'll definitely hit a roadblock soon.
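For anyone who wants to check or extend these numbers, here's the quick-and-dirty Python calculator I wished I had. It just computes the angular size of one pixel at a given viewing distance; it lands in the same ballpark as the figures above, with the differences down to rounding and the exact screen dimensions assumed.

    import math

    def pixel_arcseconds(diagonal_in, res_x, res_y, distance_m):
        """Angular size of one pixel, assuming square pixels and the stated aspect ratio."""
        aspect = res_x / res_y
        width_m = diagonal_in * 0.0254 * aspect / math.hypot(aspect, 1)
        pixel_pitch_m = width_m / res_x
        angle_rad = 2 * math.atan(pixel_pitch_m / (2 * distance_m))
        return math.degrees(angle_rad) * 3600

    print(pixel_arcseconds(9.7, 2048, 1536, 0.30))   # iPad 3 at 30cm     -> ~66
    print(pixel_arcseconds(46, 1920, 1080, 2.5))     # 46" 1080p at 2.5m  -> ~44
    print(pixel_arcseconds(27, 2560, 1440, 0.60))    # 27" 1440p at 60cm  -> ~80
    print(pixel_arcseconds(23, 1920, 1080, 0.60))    # 23" 1080p at 60cm  -> ~91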
AMD has a full line of high-performance video chipsets to pull tech from, but their CPU/graphics combo still only barely reaches 'adequate' for low-end usage. I don't see where Intel is going to leapfrog everyone else in this area, unless they buy Nvidia (which would probably be blocked).
Windows 8 will be ARM-optimized, which means that Intel is genuinely worried. Nvidia's Tegra is already being used for servers... and of course, people are buying many more tablets than laptops.
Handsets with PS3-class graphics will be out in 2013, so Intel is truly looking for ways to stay relevant in an iPad world.
They can stay relevant in gaming. When/if PCs fade out and we go back to a big-servers/workstation/thin-client world, Intel will be able to live in all three. They are already doing well in big servers and workstations, and I think they are the only ones currently in a position to be the dominant technology provider for the "gaming thin client," whatever it ends up looking like... I don't know if it will be more like a console, or more like a small PC you plug into your TV. NVIDIA only has something to offer for the workstation type of PC.
Plus, the big, big reason is that there is already a ton of Tegra-optimized content out there from big publishers.
Tegra has been very successful in the thin-client/iPad world and is looking to screw with Intel's server world with Project Denver.
And I'm not even talking about Nvidia's next-gen Kepler graphics card, released this week with serious power optimizations. Intel is seriously far away from that world.
In short, Intel has 0 in graphics capabilities, while Nvidia has 0.75 in CPU capabilities.
If transistor prices stop declining for discrete graphics card manufacturers, your GPU will never gain more transistors again unless you pay a hefty premium for the difference in transistor count. This is what NVIDIA is complaining about in the article. They will have to compete with Intel's bundled graphics, and not only in the lower segments of the market but also in the mid and high/mid segments.
But if people are willing to pay $1-2K for a gaming rig (and given that games are $100-200 plus a monthly subscription, they probably will), then the proportion of that cost that goes to the GPU will increase for a few years.
The question is whether, in the 'post-PC era', it would be worth games companies putting out games that will play on a $500 home PC with an embedded Intel GPU, even if that GPU is as good as today's GTX 295. Between high-end gamers, family console gamers, and casual mobile gamers, is there any profit in good-enough?
Of course Nvidia can't just throw its hands up and say "welp, we've hit the wall" when it has someone to blame.
It's fascinating to watch those predictions about the end of Moore's Law come true.
There's been a lot of work on nanoelectronics. For example, some scientists recently announced at a semiconductor conference that they had found a way to bypass the diffraction limit completely, making 2nm features with a 650nm laser. Similarly, I know an engineer who worked on a massively parallel electron-beam etching device, which could also be applied at very small node sizes. There are a billion other technical challenges, but there's been a lot of progress in making features at the molecular level.
If anything, this is better than expected: EUV has had some serious problems over the past few years, yet its delay hasn't stopped the next few nodes.
Moore's law doesn't require that shrinkage to continue, though, as there are other technologies available. Memristors in particular may allow scaling faster than Moore's law would otherwise predict.
That comes with a very large set of engineering challenges but if those are overcome you can expect another big jump.
Another possible solution is to use 3D chip stacking to add dedicated hardware for a wide variety of tasks, most of which is turned off when not being used.
Or something different?
I wonder if the price will ever come down to the point where it's used in consumer devices.
We don't just need to cram more transistors onto a chip; we need those transistors to use proportionally less power (preferably even less than that).
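A quick sketch of why, in classic Dennard-scaling terms; the shrink factor and the assumption that voltage and frequency stop scaling are mine, not from the comment above.

    # k is the linear shrink factor per node; density grows by k^2.
    k = 1.4

    # Dynamic power per transistor ~ C * V^2 * f.
    # Dennard scaling: C ~ 1/k, V ~ 1/k, f ~ k  ->  power per transistor ~ 1/k^2.
    dennard_power = (1 / k) * (1 / k) ** 2 * k
    print("power per transistor (Dennard):", dennard_power)         # ~0.51
    print("power density (x k^2 transistors):", dennard_power * k**2)  # ~1.0, flat

    # Post-Dennard: V stuck (leakage), and say f flat too -> power per transistor ~ 1/k.
    stuck_power = (1 / k) * 1.0 ** 2 * 1.0
    print("power density with V fixed:", stuck_power * k**2)        # ~1.4, rising

If per-transistor power only falls as 1/k while density rises as k^2, power density climbs every node, which is exactly why the extra transistors stop being free.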
The main reason for going small is cost: silicon costs you per square inch, so smaller transistors are supposed to buy you more hardware for the same money. If that stops, there is less reason to go small.
Further, part of bringing costs down involves efficiencies of scale. If you don't have the scale, the price will limit the output. It might be back to the supercomputer world for those who need that particular extra bit of speed.
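The per-area cost argument in code form, with hypothetical wafer prices and densities (picked only so the ratio comes out roughly flat):

    # $/transistor = wafer price / transistors per wafer. All numbers invented.
    def usd_per_billion_transistors(wafer_price_usd, usable_mm2, mtx_per_mm2):
        transistors_bn = usable_mm2 * mtx_per_mm2 / 1000.0  # Mtx -> Btx
        return wafer_price_usd / transistors_bn

    old_node = usd_per_billion_transistors(3000, 60000, 6.0)   # mature node
    new_node = usd_per_billion_transistors(6200, 60000, 12.0)  # ~2x density, ~2x wafer price
    print(old_node, new_node)  # ~8.3 vs ~8.6: the shrink buys no more hardware per dollar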
It has nothing to do with computing architectures.
That said, you can't outsource your complete manufacturing process only to come back later and wish for "virtual IDM" partnerships. That's all of the benefits and none of the risk.
But they are very different beasts with different technology and a different (much smaller) market, right?
However... where size itself (and power consumption) are the primary factors, there will be demand. Which means that SoC GPUs will adopt new process nodes more vigorously than video card GPUs. http://www.embedded.com/electronics-news/4304118/Imagination... Sounds like disruption.
(40/28)^2 ≈ 2, so 28nm gives roughly twice the transistor density of 40nm; it's not that bad compared to 40nm, assuming, again, comparable yields.
That aside, assuming a fab better than TSMC's showed up on your doorstep today, you're not out of the woods in terms of being dependent on suppliers. On the contrary, designing fab tools is also a very capital-intensive business and tends towards an oligopoly, so the TSMCs and Intels of the world are just as dependent on tool vendors as NVIDIA is on TSMC. The same holds for the very specialized materials and consumables fabs use, like the glass to make mask sets.
• The ex-CEO was criticized for selling their mobile business to Qualcomm just before the Apple+Android explosion
• AMD spun off GlobalFoundries due to immediate liquidity problems but maintained a stake. Recently they have sold that stake, so if AMD succeeds at being design-only, their actions are justified. But it would seem they are dwindling (e.g. Bulldozer is hardly a home run, and lots of sources say the move to automated layout of their transistors is a significant part of the problem), so being design-only could be seen as a mistake.
Obligatory: Intel uses extensive auto-layout and auto-analysis of designs; they seem to have the advantage in this area by merging automatic synthesis with focused optimizations by engineers.