> Intel for their part is nothing short of extremely proud over what advancements they have made over the last several years to make their 14nm process a reality
And so they should be. From the outside looking in, this slow creep of progress might seem inevitable, but it's anything but. Every new process node they hit is a huge achievement, and I greatly admire Intel not just for managing it, but for doing it on a timetable planned years in advance. The fact that they're only missing their goals by a few months here and there is nothing short of incredible (in my personal opinion).
To put it another way, some things you can put a price tag on, but fundamental research isn't usually one of them. Pouring billions in and getting steady results out is something that sounds easy, but really isn't, from what I've seen of it.
Completely agreed, and it's hitting a dangerous point where not enough companies are putting in the necessary investment. I studied quite a bit of both VLSI/chip design and economics in college, but I never fully understood what was behind Moore's Law until I read this quora comment: http://www.quora.com/Moores-Law/For-how-long-will-Moores-Law...
Here's a snippet:
>The limits of Moore's Law is not driven by current technology, after all if we knew how to go smaller we probably would have done it. The limits of Moore's Law are really a matter of cost. Each new node shrink increases the technology development cost in a fairly predictable way by about 40%. The result is that Moore's Law is limited by economics.
> ...
> When [Intel] figures out that they don't have to invest so heavily to stay ahead, then Moore's Law will slow
Moore's Law doesn't exist in a vacuum - the cost to keep Moore's Law going has been increasing exponentially, and fewer and fewer companies have been able to keep up.
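A quick back-of-the-envelope of how that compounds (the starting cost is an arbitrary unit I made up; only the ~40% per node figure comes from the quote above):

    # Sketch: if each node shrink raises development cost by ~40%
    # (the figure quoted above), staying on the treadmill compounds fast.
    start_cost = 1.0          # arbitrary units for the current node (placeholder)
    growth_per_node = 1.40    # ~40% increase per shrink, per the quote

    for node in range(1, 8):
        cost = start_cost * growth_per_node ** node
        print(f"{node} node(s) out: ~{cost:.1f}x today's development cost")

Seven shrinks out you're at roughly an order of magnitude more development cost, which is why the pool of companies that can afford to play keeps shrinking.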
The semiconductor industry is just that - an industry, with numerous companies providing the materials, equipment, operations, etc. that Intel, TSMC, and Samsung need to build and run their fabs.
In order for this to work, there is a need for coordination. The manufacturers of steppers, for example, must be able to deliver a stepper that can draw features at the needed resolution at a given time. Since development takes time, they and everyone else need to communicate and decide how fast Moore's Law should progress.
Moore's Law thus represents what all parties agree can be achieved and what will meet end-user requirements over time. The industry organisation responsible for this is the ITRS - the International Technology Roadmap for Semiconductors.
If one wants to see the predictions for Moore's Law, the ITRS has documents available that are well worth studying. When I worked on ASIC design for mobile handsets, we used these documents for product roadmaps, since we could predict the power consumption, gate density, etc. for future process nodes.
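As a toy illustration of the kind of projection I mean (assuming the textbook ideal of a ~0.7x linear shrink per node, i.e. roughly 2x density; the starting node and numbers here are just placeholders, the real ones come from the ITRS tables):

    # Toy roadmap projection, assuming ideal scaling: ~0.7x linear shrink
    # per node, so gate density roughly doubles (area scales as 0.7^2).
    SHRINK = 0.7            # ideal linear scale factor per node (assumption)
    base_node_nm = 22.0     # illustrative starting node
    base_density = 1.0      # relative gate density at the base node

    for n in range(1, 4):
        node_nm = base_node_nm * SHRINK ** n
        density = base_density / SHRINK ** (2 * n)
        print(f"node ~{node_nm:.0f} nm: ~{density:.1f}x gate density")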
Intel can, to a higher degree than many others, move forward at their own pace. But they don't build their fabs from scratch, nor produce all their materials themselves. This means that even if Intel develops a process in the lab with smaller feature sizes, better efficiency, and whatnot than what was agreed upon, they can't move that much faster than the rest of the industry if they want to manufacture chips in commercial quantities.
After reading the article, the first thing I started wondering was why we need to wait two years to double the number of transistors on a chip - why can't we advance faster? Only to discover your comment a little later. Thanks.
This is just one of those instances which makes me realize how much I appreciate the existence of a place such as HN.
> it's hitting a dangerous point where not enough companies are putting in the necessary investment.
That is, of course, because they are reaching the point where they couldn't recoup the cost. Intel has tremendous volumes and margins, which will allow them to scale to more expensive nodes more quickly than much of the rest of the industry.
Well, Intel can't slow down unilaterally - they have suppliers and a whole network of companies that depend on having everything on schedule. I guess they could slow down as long as they give a couple of years of warning, though.
I wouldn't be surprised if, at some point later on (when it's almost impossible to move to a new process node), Intel starts developing ARM chips again (they did it before with StrongARM and XScale). With the manufacturing advantages that Intel has, they could easily become the largest player in that market, giving them an easy cash injection.
A family would share one or two X86 CPUs, replaced every 3-5 years. But very soon I'll probably have at least half a dozen ARM CPUs within a few meters of me wherever I go, all with much shorter replacement cycles.
And, Intel could charge more for them since nobody else can match their process capabilities. It may not be a ton of money, but it's better than having a factory sit idle in a hypothetical future where x86 chips are mostly used in servers and higher end workstations.
Qualcomm is likely to hit a $9.x billion annual profit run-rate in the next four quarters. By comparison Intel's profit for 2013 was $9.6 billion.
If Intel had been smart about it, the moment they saw the iPhone (not to mention that Jobs asked Intel to come up with a solution for the iPhone), they would have fired up a new ARM chip and gone after the smartphone market. By not doing so, they likely left tens of billions in profit on the table. Shareholders should be furious.
With that logic, why should anyone ever seek a remedy for a failure in judgement?
Intel has had excellent ARM chips in the past and has sold them off. They are too ideologically married to X86, just like Microsoft was too infatuated with Win32 as an API. Selling ARM chips is, I think, to them a signal of defeat rather than a focus on customers' needs. I think AMD is being rather mature in making a server ARM part.
I work in the Failure Analysis division; my job is to take an electron microscope and look for defects in wafers that come out of the fab. Our results get sent to the engineers, who make the changes in the fab and make more wafers for us to tear apart. I don't think I'm allowed to say much about it, but it's a pretty cool process, and I'm happy that I decided to move up to Hillsboro to work for them.
Almost went into this exact field with KLA. I'm happy where I am now working on software, but I do miss semiconductor physics from time to time, though I'm sure my head would explode if I ever went back to it :)
"The end result is that while Intel’s cost per transistor is not decreasing as quickly as the area per transistor, the cost is still decreasing and significantly so."
Maybe...
The problem is, is Intel saying this as (honest) engineers or as (somewhat less honest) business people? Every IP business has enormous flexibility in how it defines costs and where it places them. nV's complaint reflects the cost it pays, which ultimately reflects some sort of aggregated cost for TSMC covering not just per-wafer manufacturing costs, but also R&D, equipment, financing, various salaries, and so on.
Intel, in a graph like this, has the flexibility to define basically whatever it likes as "$/transistor". On the one hand, it could be an honest reckoning (basically the TSMC price), but on the other, it could be a bare "cost of materials and processing", omitting everything from capital expenditures to prior R&D.
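To make that concrete, here's a toy calculation - every number is an invented placeholder, not anything from Intel or TSMC - showing how the same wafer output gives a very different "$/transistor" depending on whether amortized R&D and capex get folded in:

    # Toy illustration of the accounting flexibility described above.
    # All figures are invented placeholders, not real Intel/TSMC data.
    transistors_per_wafer = 2e12                  # hypothetical yield per wafer

    materials_and_processing = 5_000.0            # $ per wafer, narrow definition
    amortized_rnd_and_capex = 3_000.0             # $ per wafer, extra loading
    full_cost = materials_and_processing + amortized_rnd_and_capex

    for label, wafer_cost in [("materials only", materials_and_processing),
                              ("fully loaded", full_cost)]:
        print(f"{label}: ${wafer_cost / transistors_per_wafer:.2e} per transistor")

Same wafers, same transistors, but the "cost per transistor" moves substantially depending on which definition you pick.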
Anyone got a reference for the number of atoms in the pitch of 14nm transistors? I recall it was pretty low for 22nm and it's surprisingly hard to find this information.
The order of magnitude is correct. The interconnect pitch being 52 nm means that the width of a wire plus the isolation to the neighboring wire is just under 100 atoms.
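Rough check of that figure (a back-of-the-envelope that uses the silicon lattice constant of ~0.543 nm as the spacing unit, which seems to be the convention behind the "100 atoms" number; counting individual atomic planes would give a larger count):

    # Back-of-the-envelope for the "just under 100" figure.
    interconnect_pitch_nm = 52.0
    si_lattice_constant_nm = 0.543   # silicon lattice constant

    print(interconnect_pitch_nm / si_lattice_constant_nm)   # ~96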