For the last 20 years people have had serious doubts about breaching 7nm (whatever that figure means today). But even if Keller is a demigod (pun half intended), I'm starting to be seriously dubious about 20 years of continued progress... unless he means a slow descent to 1-2nm, or he's thinking of sub-atomic electronics / neutronics / spintronics (in which case good on him).
Yes. But that's a huge decline compared to even the recent past.
Performance increases from generation to generation used to come much faster. TSMC's N16 to N7 still delivered a doubling, or nearly a doubling, of performance and of price/performance over the long term. N5 to N3 is barely in the single digits.
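A quick back-of-the-envelope sketch of why that matters (Python; the per-node percentages are illustrative assumptions, not TSMC's actual published figures): a near-doubling per node doubles performance in roughly one generation, while single-digit gains take around nine generations to compound to the same result.

```python
from math import log

# Rough compounding math. The per-node gains below are illustrative
# assumptions, not TSMC's published N16->N7 or N5->N3 figures.
def generations_to_double(gain_per_node: float) -> float:
    """Node transitions needed to double performance at a given
    fractional gain per transition."""
    return log(2) / log(1 + gain_per_node)

print(f"{generations_to_double(0.90):.1f}")  # ~1.1 nodes at ~90%/node
print(f"{generations_to_double(0.08):.1f}")  # ~9.0 nodes at ~8%/node
```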
Every fab generation is more expensive than the last. Soon every GIGAFAB will cost $30 billion, while the technology risk keeps increasing.
Not sure that is really true based on the data. Remember, Moore's law says the number of transistors in an IC doubles every two years, which doesn't necessarily mean a doubling of performance. For a while in the '90s, performance was also doubling every two years, but that was largely due to frequency scaling.
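To make that distinction concrete, here's a minimal sketch of what the law literally predicts: transistor count, not performance. (The Intel 4004 baseline of ~2,300 transistors in 1971 is just a convenient, well-known reference point.)

```python
# Moore's Law as literally stated: transistor count doubles every
# two years. Note it says nothing about clock speed or performance.
def projected_transistors(base_count: float, base_year: int,
                          year: int, doubling_years: float = 2.0) -> float:
    return base_count * 2 ** ((year - base_year) / doubling_years)

# Starting from the Intel 4004 (~2,300 transistors, 1971):
print(f"{projected_transistors(2300, 1971, 2021):.1e}")  # ~7.7e+10
```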
A lot of the new processes have not had the same cost reductions. Some of the increase in transistor count also comes from physically larger chips. And note that "Epyc Rome" on that graph isn't actually a single chip but uses chiplets.
Yeah, and after you have a working $30B fab, how many people are going to follow you and build one?
The first one built will get cheaper to run every year - it will pay for itself by the time a second company even tries to compete. The first person to the "final" node will have a natural, insurmountable monopoly.
You could extract rent basically forever after that point.
I don't think we'll see a final node in our lifetimes. Improvements are slowing down and will become a trickle, but that doesn't mean research stops entirely.
Consider other mature technology, like the internal combustion engine. ICEs have been improved continuously, though the changes have become marginal as the technology matured. However, if research and improvements on ICEs end entirely, it won't be because the technology has been fully explored but because they're obsoleted by electric cars.
I thought the drivers of cost were the design work, patents, trade secrets, etc. involved with each new process. If there's a "final" node, those costs should decrease over time and the node should eventually become more of a commodity.
Since the "nm" numbers are just marketing anyway, I don't think they mean much with regard to how small we can go. We can keep shrinking until the actual smallest feature size hits physical limits, and that size is so decoupled from the nm number that we can't tell how close "7nm" really is just from the name. (Well, we can tell: there's a cool YouTube video measuring actual transistor feature sizes with a scanning electron microscope. We just can't tell from the naming/marketing.)
David Patterson is not disputing that there are decades left of transistor shrinking; he's just saying that the statement "transistor count doubles every 2 years" doesn't hold up empirically. [1]
David Patterson is saying he considers Moore's Law dead because the current reality, call it "transistor count doubling every three years," doesn't match the law's exact statement.
In other words, he is simply being very pedantic about his definition. I can see where he's coming from with that argument.
It's more than that, though; it's important to remember why Moore made his law in the first place.
The rough organizational structure of a VLSI team that makes CPUs is the following pipeline:
architecture team -> team that designs the circuits which implement the architecture -> team that manufactures the circuits
The law was a message to the architecture team: by the time your architecture gets to manufacturing, you should expect there to be ~2x the number of transistors available that you have today, and that should influence your decisions when making trade-offs.
And that held for a long time. But if you're on a CPU architecture team today and you operate that way, you will likely be disappointed when your design reaches manufacturing. Therefore one should consider Moore's Law dead when architecting CPUs.
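As a minimal sketch of that planning failure (Python; the four-year lead time and the 10-billion-transistor starting budget are hypothetical numbers, not from any real design), here's how far off an architect ends up if they plan on the classic 2-year doubling but the industry actually delivers a 3-year one:

```python
# How a stale doubling assumption skews an architect's transistor
# budget. The lead time and today's budget are hypothetical.
def budget_at_tapeout(today: float, lead_time_years: float,
                      doubling_years: float) -> float:
    return today * 2 ** (lead_time_years / doubling_years)

today = 10e9  # transistors available on today's node (assumed)
lead = 4.0    # years from architecture work to manufacturing (assumed)

planned = budget_at_tapeout(today, lead, 2.0)  # classic 2-year cadence
actual = budget_at_tapeout(today, lead, 3.0)   # slower 3-year cadence
print(f"planned {planned:.1e}, actual {actual:.1e}, "
      f"shortfall {1 - actual / planned:.0%}")  # ~37% short
```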
I don't think it's irrelevant to look at the changing timescale. If the law has already stretched to 3 years, there isn't any reason it won't stretch to 4, 5, or some other N years in the future.
[1] https://www.youtube.com/watch?v=Nb2tebYAaOA&t=1800