Reading the development story and just how long they worked on this, with would-be competitors falling by the wayside the whole time, I'm less surprised than I was that Intel missed the boat on this technology. Intel, and apparently almost everyone else other than TSMC, were skeptical this would work.
But now that it does, Intel has some catching up to do, building fabs around these machines to match what TSMC is doing with them.
The article mentions Intel is slated to get the first next-gen EUV device ("high-NA EUV"). I wonder how much lead time this buys them over when TSMC gets theirs? Presumably it would be just a month or two at most, unless Intel is getting more than just the first. If they can produce 55 of the current generation per year, the next generation will presumably come off the line something like an order of magnitude more slowly at first.
You don't buy a few and expect them to carry your latest nodes unless you want a capacity-constrained launch cycle. So it's likely the whole order for that year would go to Intel, if they do intend to use it on their 4nm.
TSMC has settled on the older generation anyway. Their "slightly" delayed 3nm isn't on GAAFET / Nanowire / RibbonFET / MBCFET, or whatever term they decide to use (the naming is getting worse than their "nm" naming), and it isn't on High-NA EUV either; that's scheduled for their 2nm, sometime in 2025.
So the lead time may be around a year, or a bit more.
And I keep mentioning this everywhere, on HN or Semiwiki and other places. TSMC is a (very) conservative company. Betting on something exotic and not quite ready isn't their thing.
Fun stuff. I'm actually working on a mechanical part for ASML that goes inside of the vacuum part of the system and deals with the wafers. Very stringent specs and when the final parts are here for processing, ASML engineers will be here to watch.
> the CTO jokingly gave him a plaque emblazoned with the words “Scientifically Accurate But Practically Useless.”
...Until it isn't and it gets your stuff done. The best CTOs I have ever met were far more superficial in how they treated their issues than the best engineers were about theirs (perhaps a sign of being focused more/only on the bigger picture?).
Can anyone explain what role ASML plays in TSMC's advantage over other fabs? Let's say TSMC and the other fabs can all source the same equipment from ASML; what exactly does TSMC do better than the others?
They say in that article that Intel will produce chips based on 10nm tech in 2025.
However Wikipedia says the Apple M1 chips are currently using 5nm tech...
How come?
There's lots of variation in how companies define their nodes, and the names aren't really connected to any measurement anymore. I.e., if a company talks about its "10nm node", that doesn't mean there is a physical feature 10nm in size (and, e.g., Intel's 10nm node is often considered equivalent to Samsung's 7nm node) - it's better to think of it as a version number, in a way. As such, talking about these things gets confusing quickly, and the article also doesn't clarify what exactly it means when it says "10nm in size".
Different companies call these technologies different things, but they're fairly divorced from the actual minimum feature size these days. Your advertised feature size is a marketing term, so it's going to be gamed to some extent.
Everyone knew Moore's law was going to be a sigmoid curve; otherwise we'd eventually be making transistors smaller than atoms, which is impossible. Today the term is used as vernacular for "well, we are still getting smaller." Nobody knows what the last process node for conventional flat semiconductors will be, but recent speculation is around 1.5nm.
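To put a rough number on the "smaller than atoms" point (purely back-of-the-envelope; the 1.5nm starting size, the two-year cadence, and the ~0.2nm silicon atom diameter are my own assumptions, and node names aren't real dimensions anyway):

```python
# Back-of-the-envelope: if density really doubled every two years, linear
# feature size would shrink by ~1.4x per step. Starting from an assumed
# "real" 1.5 nm feature, count steps until it drops below the ~0.2 nm
# diameter of a silicon atom. A thought experiment, not a roadmap.
import math

feature_nm = 1.5          # assumed starting feature size (nm)
atom_nm = 0.2             # approximate silicon atom diameter (nm)
shrink = math.sqrt(2)     # density doubling ~= linear shrink of sqrt(2)
years = 0

while feature_nm > atom_nm:
    feature_nm /= shrink  # one ~2-year scaling step
    years += 2

print(f"Features would pass atomic scale after roughly {years} years")
# Six steps: 1.5 nm falls below 0.2 nm after roughly 12 years of doubling.
```

So even on this very generous reading, classic scaling only has a handful of doublings left before it has to flatten into the sigmoid.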
Small embedded and low performance chips may have already topped out in a cost/benefit sense. There's no benefit to a 5nm process microcontroller for a coffee maker. The really small nodes may stay reserved for high performance or ultra low power chips.
Even after we really do hit the top of the sigmoid, it doesn't mean computers getting more powerful is over. It just means we'll have to go to 3D, massively parallel systems with chiplets, compute/memory integration to reduce RAM latency, special-purpose accelerators using quantum or photonic computation, etc.
It does probably mean the free lunch of "faster systems with existing code" is largely over. Of course, that free lunch train started to end in the 2000s when clock speeds on conventional chips topped out in the 2-5 GHz range. It meant we had to go parallel, a transition very much still in progress.
Edit: remember too that there is a TON of performance on the table on today's systems via more efficient slimmed down code. I think we will see the small/simple/efficient trend increase and bloat will be seen as much more embarrassing than it is today.
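To put a toy number on the "performance on the table" point (my own sketch, not drawn from anything in this thread; the sizes and names are arbitrary): the same membership lookups done against a Python list versus a set, where a one-line data-structure change is worth orders of magnitude.

```python
# Toy illustration of performance left on the table: identical results,
# but an O(N) scan per lookup vs. an O(1) average-case hash probe.
# Absolute timings depend on the machine; the ratio is the point.
import timeit

N = 100_000
haystack_list = list(range(N))
haystack_set = set(haystack_list)
needles = list(range(0, N, 1000))   # 100 values to look up

def scan_list():
    # 'in' on a list walks the elements one by one
    return sum(1 for x in needles if x in haystack_list)

def probe_set():
    # 'in' on a set hashes straight to the bucket
    return sum(1 for x in needles if x in haystack_set)

assert scan_list() == probe_set()
print("list scans:", timeit.timeit(scan_list, number=5))
print("set probes:", timeit.timeit(probe_set, number=5))
```

Bloat usually isn't one bad loop; it's thousands of little choices like this spread across layers of abstraction, which is why the gap between what the hardware can do and what shipped software does keeps growing.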
Look at what could be done with efficient code on an 8-bit 64K RAM computer in the 1980s:
> Edit: remember too that there is a TON of performance on the table on today's systems via more efficient slimmed down code. I think we will see the small/simple/efficient trend increase and bloat will be seen as much more embarrassing than it is today.
100% agree on that, but GEOS was terrible. I can't imagine anyone actually using it for anything useful.
My point was that GEOS worked... on less than 64K of usable memory and a ~2 MHz 8-bit in-order CPU. I do remember painting, writing, and using a (300 baud!) modem with it as a kid. It was highly constrained but basically usable, and it always stuck with me later as impressive for the resource constraints.
LUnix is another impressive C64 project, a Unix in the same tiny memory/CPU footprint!
We got highly useful, very good GUIs on computers with hundreds of kilobytes of RAM and 8-16 MHz 16-bit and 32-bit CPUs. Windows 3, OS/2, classic Mac OS, and X11/Motif were a lot more useful and usable than GEOS.
Today's computers are hundreds to thousands of times more powerful. They should be a lot faster than they are.
Right, which also hasn’t quite been happening lately, but the term has changed in popular use to mean something more like “still getting smaller every few years”.
The 80/20 rule comes into effect as well. Without a monumental change in the structure of CPUs, we will only be seeing small steps in performance for the foreseeable future. A 5% improvement per generation is good, and I am not complaining, but even if there are twice as many transistors in a CPU, without an instruction set to take advantage of them and software written to utilize the available resources, it's nearly a moot point.
I believe we're long past the point where a clock speed boost or a new instruction set can generate any huge leaps in processing power over a single generation, and I would go so far as to say that there is very little practical difference between, say, a 6th-gen Core i7 and an 11th-gen Core i7 from the end user's perspective.