So far Gelsinger's ambitious roadmap has worked out. Goodbye MBA mentality and back to Grovian execution and engineering centric culture.
Intel made a huge error when they decided to delay the DUV -> EUV transition. Now Intel is the first to order ASML’s EXE:5200 and push High-NA. PowerVia and RibbonFET are what Intel is going to use. Meanwhile, Intel's EUV 3nm chips are coming out this year.
I realize the whole MBA bad idea is popular right now on HN but it's worth remembering that Intel's struggles with 10nm were the result of too much engineering ambition rather than too little. The original 10nm was very VERY ambitious and if it had actually worked and had hit volume production on anything resembling the original timeline Intel would have essentially had a half decade worth of a process advantage over its competitors. Unfortunately, letting engineering go nuts can sometimes screw you just as much as letting the out of touch bean counters rule. Engineering based businesses have to manage both the engineering and the business side. Failing to do that means disaster.
You can't accurately describe Intel's 10nm disaster without mentioning that they were making a huge bet that EUV wasn't going to be ready anytime soon so they were trying everything they could to keep up with Moore's Law except using EUV. But some of the things Intel planned for 10nm turned out to be harder to get working correctly than EUV.
It wasn't simply the engineers going nuts trying to make a huge jump all at once. They were taking a bunch of unique risks in order to follow a different path from the rest of the industry. If Intel had planned to follow a similar EUV timeline to the rest of the industry, they would have been subject to the same risks as everyone else regarding EUV and probably could have maintained a moderate lead throughout that transition, with a worst-case outcome being that they would be part of an industry-wide failure to keep up with Moore's Law if EUV didn't work out. Instead, they ended up years behind.
> But some of the things Intel planned for 10nm turned out to be harder to get working correctly than EUV.
Is that actually true? The direct competitor to Intel 10nm is TSMC N7, which is an overall similar process - DUV, multiple patterning etc. -, achieves similar performance and power efficiency, and had a similar timeline to how the Intel process played out (as opposed to how it was originally scheduled). TSMC also only began using EUV for processes following N7.
Don't indulge Intel's attempts to erase Cannon Lake from history. The 10nm that Intel shipped in 2019 was significantly scaled back from what they originally planned for their 10nm node, but was still a year later to the market than TSMC N7 and was never good enough to be competitive for desktop CPUs. By the time they had iterated enough to have a new process that could be used to offer faster desktop CPUs than their mature 14nm process, they decided to rename it to "Intel 7" and shipped it at the same time as TSMC N5 products (though still before AMD's N5 products).
This. Intel 10nm and TSMC N7 aren't comparable in their development. Intel didn't simply scale down their 14nm but took a different path. TSMC N7 is basically a scaling of their N16. These two processes are 90% comparable. Only base layers are slightly different. N5 is quite different compared to N7 due to new litho though.
The MBA idea is correct in this case. Intel had the board and leadership that led it to ambitious process without enough urgency or resources, or ability to course correct quickly enough.
The most important questions must be discussed in the C-suite. Intel didn't have enough people there to make those decisions.
> Engineering based businesses have to manage both the engineering and the business side. Failing to do that means disaster.
Top engineers can learn to manage business at the highest levels. Business leaders can't learn enough engineering to manage engineering companies.
> Business leaders can't learn enough engineering to manage engineering companies.
Cisco Systems was led by CEO John T. Chambers from 1995-2015. His education was a BS and BA in business, a JD, and later an MBA. After he got his MBA he started in sales. During his time as CEO at Cisco, sales went from $1.9 billion to $49.2 billion. In 2000, Cisco became the most valuable company in the world.
Before Chambers was John Morgridge, who was an MBA. He helped oust the two founding engineers. Before him was Bill Graves (who had a BS in physics, but was only at Cisco for a year).
I think a lot of the reasons that engineers rag on business leaders more than engineering ones are that:
1. They haven't tried creating a product and then making money off it. It's amazing how painfully difficult that can be (without good Sales and Marketing).
2. They haven't done a (good) MBA and don't understand how much goes into it, so write it off as similar to an undergraduate business degree.
3. They have a superiority complex from their university days where engineering was the hardest discipline.
Pat Gelsinger's favourite phrase is "We all work for Sales and Marketing". He says it over and over. I think at this time in Intel's history an engineer is the better one to be running the ship, but the idea that you need to be 100% tech savvy to run a successful tech company is, as you proved through your example, patently false. A good CTO can make all the difference in the product, while a good MBA-type CEO can focus on everything else.
I’ve worked at both large and small companies, and engineers do tend to forget that no matter how good what they build is, it’s going nowhere without someone selling it.
Similarly, people on the business side often neglect the blood sweat and tears that can sometimes go into building something, forgetting sometimes that without a product there is no sales or marketing.
I really liked this response, but this is HN, people love romanticizing engineers to a fault.
At the end of the day, it’s just another false dichotomy. There are shitty business people just as there are shitty engineers.
Best not to be married to any one idea in this case and just take it on a case-by-case basis.
Which is another way of saying: it's gotta be a partnership. That generalizes across all business categories, right? Product and Sales/Marketing have to work together.
That's such a banal insight that it's absurd so many businesses screw it up (in either direction) so badly.
Yes, even at my current company, which is by most measures, I'd say successful, this comes up to an absurd degree.
PMs will be ready to roll out a product with their finger on the button and Sales is like yo wtf we don't even know what we're charging for it yet or it's not even built into cpq/billing yet. So absurd that it's almost funny
Yeah, but does the person selling it deserve 100-1000x the remuneration of the people making it? Business leaders get all sorts of perks that the engineers don't; that's where the resentment comes from.
Not sure Cisco had "two founding engineers", actually. Len would've qualified as an engineer, but Sandy was just... Sandy. And IIRC she thought of herself as the "business side" of the pair. Other pure engineers were of course there early on, but weren't running the company.
On the other hand, it's not obvious Chambers can be counted as a success in running an "engineering company". Chambers ran an acquisition company. It's entirely possible that an acquisition company was a more profitable idea than an engineering company, but still.
> It's entirely possible that an acquisition company was a more profitable idea than an engineering company, but still.
I think the primary criticism engineers have against "MBAs" is that they make money, but frequently destroy value.
So "during his time as CEO at Cisco, sales went from $1.9 billion to $49.2 billion." is clear evidence of making money, but is not at all clear evidence of producing value and not any evidence at all of producing more money or value in the US in aggregate.
I don't think very many engineers would agree that "value == money," but it wouldn't surprise me if that was a core axiom of many people defending "MBA" leadership.
This is very, very left field, but when the US went off the gold standard, money ceased to have a hard relationship to value because money could be invented, and thus money games became more profitable than producing. It's worth contemplating China's strategy. Our money-first businesses sending production to China made China the place where value is created. So when covid hit and we needed masks, we had money, but not value, and people started to understand the difference, leading to many of the efforts we see now like CHIPS. War is an economic shift where value is everything and money is nothing, and since the world is heating up (metaphorically and literally), it's worth centering arguments around value produced rather than money made.
I think it is more common that "shortsighted value" or "false value" is looked down on by engineers. For example, you can increase profitability in the short term by gutting R&D expense (engineers), but over time maintaining the system and innovating to stay abreast of the competition will significantly suffer. Or "combining" products that on paper are complementary, but are made of such different tech stacks and designed for such different personas, that the combo gets botched and users hate the product even if it manages to be sold to execs who won't ever use it themselves.
You made an absolute statement . . . "business leaders can never lead an engineering firm." All that is needed to disprove that statement is one counterexample, because then it has to morph into "some business leaders can lead an engineering firm and some can't."
Ironically given the subject matter, this is similar to how math proofs work . . .
This is why I try to not correct people’s spelling in comments on the internet, your own comment is bound to have a misspelling or other grammatical faux pas somewhere in there.
It's the nature of the word "can't" which allows a single counterexample to debunk an argument.
If the grandparent poster really meant "It is quite difficult", then it seems that we have arrived at a more accurate representation of the original argument, and thus 0xbadcafebee has been contributing greatly to the discussion.
A rising tide lifts all boats. Maybe "most valuable tech company in the world" would be a more convincing argument, since the tide receded shortly afterwards.
At the time that Cisco was top of the world, there were plenty of other high profile tech companies -- Microsoft, IBM, Intel, Sun, HP, DEC. On the networking side there were 3Com, Nortel etc. They all rose the tide of the dotcom bubble, yet Cisco came out on top. You should wonder why.
This general argument is so tiring, and really dismissive and ignorant of the capabilities of other people who do work outside the realm of engineering, as well as being ignorant of the actual content and usefulness of an MBA education. All this argument really signals is "business bros have dumb monkey brains and can't learn technology but the engineers are so smart that they can learn business."
So many flaws with it:
0. MBAs are master's degrees, which means they're the second degree of those who have them. For example, Tim Cook's first degree was an Industrial Engineering degree, and his master's degree was an MBA. So the idea that someone who has an MBA is incapable of learning how to do technical work is essentially backwards.
1. How many tens if not hundreds of thousands of self-taught no-degree engineers are out there coding right now? If a person without an engineering degree can write code, build technical products, etc, what makes you think that a person with a business degree can't learn or understand technological concepts? Aren't some of the most famous technical founders of Silicon Valley college drop-outs? If Steve Wozniak could design the Apple II without having finished his engineering degree couldn't someone who happens to have a business degree do the same?
2. What makes you think that the information within an MBA is trivial to learn and apply effectively? Do engineers know when to structure the company in a matrix organizational structure versus a functional organizational structure? Do engineers know how to apply organizational behavior techniques to diagnose and resolve group psychology issues like demotivated teams or poor product quality? Do engineers know how to evaluate the business and political climate of a different region to decide what type of business structure to enter into when expanding internationally? Do engineers know how to read the financial disclosures of a company to evaluate whether they are a healthy company, how much their valuation should be, whether they should be acquired? Do they know how to prepare a balance sheet, statement of cash flows, or income statement? The list goes on and on - even if an MBA won't confuse you with Fourier transforms, the time investment to learn all the concepts is similar to any degree with similar credit hours.
3. How much in-the-weeds technical work do you think top management is doing at engineering companies anyway?
4. An MBA is a breadth degree just like your typical computer engineering or computer science degree. Just like engineers, people who get MBAs build most of their skill set through on-the-job experience. For example, in computer engineering you might learn about power systems, transistors, microcontroller programming, digital circuit design, and analog circuit design, but it's unlikely that you would do all of those things in one career path. In the same way, an MBA is a foundation in a number of topics: accounting, corporate finance, organizational behavior, operations, information technology, entrepreneurship, etc. The measure of the person who has the degree isn't just the content of the degree, it's where that starting point leads them and how effectively they build their career on top of it.
> Do engineers know when to structure the company in a matrix organizational structure versus a foundational organizational structure? Do engineers know how to apply organizational behavior techniques to diagnose and resolve group psychology issues like demotivated teams or poor product quality?
The better question is, do MBAs?
> How much in-the-weeds technical work do you think top management is doing at engineering companies anyway?
You don't have to be in the technical weeds to be making decisions where technical considerations are important.
I think your comment is circling the crux of the issue. Technical individual contributors see the results of bad management decisions and attribute blame to an easy but tangible target (MBAs), which crucially has a "solution" that appeals to them: ignore the bean counters and let me do my own thing.
Good leaders don't need to be technical, though it helps, and just as an engineering degree doesn't guarantee a good IC, an MBA doesn't guarantee a good leader. But it's less appealing to complain about bad employees, and also it's easy to forget that even good employees (on both "sides" of the IC/management divide) can make bad decisions if there's incomplete info and someone has to make a judgement call
A business where one unit thinks it can make decisions in a vacuum is going to suffer regardless of the particular field of training that the leadership comes from.
If Engineering and Business work _together_ it doesn't matter who's in the lead position.
> Goodbye MBA mentality and back to Grovian execution and engineering centric culture.
> Intel made huge error when they decided to delay DUV -> EUV transition.
Just as an FYI, that error was made when the CEO was an engineer, not an MBA.
And I find it amusing for folks here to cheer Grovian culture. Andy Grove's management style had all of what people criticize about Amazon's culture, on steroids. Indeed, I believe Jeff Bezos took some of the 14 leadership principles from Grove (who was CEO at that time).
> And I find it amusing for folks here to cheer Grovian culture. Andy Grove's management style had all of what people criticize about Amazon's culture, on steroids.
People tend to forgive a leader's flaws, including really terrible flaws, if the leader seems to be producing results that people like.
Yeah, and I should mention that the idea to make the foundry a big part of Intel's business came from a past CEO who was an MBA (circa 2010-2011), and it was the engineers in the company who sabotaged that effort.
History showed the engineers to be wrong, and here we are with Intel trying to compete with TSMC for customers.
> Just as an FYI, that error was made when the CEO was an engineer, not an MBA.
Hm... no? Most of the EUV roadmap decisions were likely finalized c. 2009-2010 when they bet on double patterning for what would become the 14nm node. That was under Paul Otellini, who was an MBA who climbed the ladder in Intel via the sales and marketing organization.
The leadership principles are nonsensical because (no joke) they occur in pairs labelling extrema of various dimensions, the point being that _every_ activity can be described as lying on some point in 7-dimensional bullshit space, and that point can be either characterized as close to or far from some leadership principle.
I don't know how else to describe it, but it's just a justification system for a deep hierarchy to belittle the workers.
I say this having worked years in the inner engineering sanctum of Apple where none of this bullshit existed (both during and after the Steve Jobs era).
Not to defend Amazon's LP's, I learned early on their only utility was to weaponize them in arguments where you can retroactively think of ways to justify your position with them, but are they literally opposed pairs?
I don't disagree that they're often contradictory but I can't come up with directly opposed pairings.
(Not that it matters to your point, it's just bugging me to be spinning my wheels trying to come up with the pairs)
The Intel roadmap is a nice work of optimistic investor-targeted marketing, but I have no idea how to interpret it.
I can see in the roadmap slide that 10A "arrives" in late 2027.
However, the Intel roadmap also shows both Intel 4/3 and Intel 20A/18A present from the start of 2023. The article mentions that 18A/20A nodes have been in "some form of production since 2023". Meanwhile, current Intel chips are still partially outsourced to TSMC, and Intel has promised zetta-scale systems by 2027.
They also outsource the SoC and IO tiles, in addition to the GPU tiles, to TSMC for Meteor Lake. But unless I have missed something, Intel is making all of their own CPU cores/tiles throughout their product lines, even for server/data center, though not necessarily profitably.
Intel used to be undisputed leaders in chip fabrication. They've always been hit or miss on chip design. Most notably, every time they've tried to develop two CPU microarchitectures in parallel, at least one of them has been a major failure. And most of the times AMD has been able to take the lead, it's been because Intel's chip design failed so hard it squandered their fab advantage and gave AMD an opportunity to catch up.
Since Intel's early designs for a discrete GPU were based off introducing a third x86 microarchitecture, the failure was not really surprising.
> it's been because Intel's chip design failed so hard it squandered their fab advantage and gave AMD an opportunity to catch up.
Pretty sure you got that backwards. Intel fell behind because their fab advantage dissipated as they struggled on 14nm for 4 generations longer than their timelines anticipated, their chip design was actually doing alright at the time (without the foreknowledge of Spectre, of course).
In addition, AMD and Intel went pretty tit-for-tat all the way through 90nm; with AMD usually having the superior per-node technology (e.g. AMD 90nm > Intel 90nm) and Intel usually being slightly ahead in node-size. Ironically, similar to Intel and Samsung/TSMC/etc now, in reverse. That didn't really fall apart until 65nm, and really crash until 45nm.
I may simply have been considering a longer period of history than you. I was counting Itanium and later Pentium 4 among Intel's major failures in microarchitecture design, and consider early Opteron to be AMD's most significant and sustained time in the lead prior to their Zen renaissance.
Intel being stuck on 14nm for so long was basically a single sustained fab failure, but their inability to make any significant improvements to the Skylake CPU core or deliver an improved iGPU during those years also illustrates some severe problems on the chip design side of the company: they're too trusting of what their fab teams promise, and their chip designs are too closely tied to a particular fab process preventing them from porting their designs to a working process when things go wrong with their fab plans.
> So all of a sudden, as Warren Buffet says, “You don’t know who’s swimming naked until the tide goes out.” When the tide went out with the process technology, and hey, we were swimming naked, our designs were not competitive. So all of a sudden we realized, “Huh, the rising tide ain’t saving us. We don’t have leadership architecture anymore.” And you saw the exposure.
Do you think NVidia is the world leader (by a wide margin) in GPU market share because their hardware or software is so great? It seems like no one can displace CUDA at this point.
I don't know about the hardware, but one factor about the software is that Nvidia generally pushes their proprietary software solutions, while AMD favors more open source and open standards. This has likely helped Nvidia a lot. People sometimes confuse this, but open source being good overall doesn't mean that it is good for the business of an individual company. Just look at Apple and iMessage vs RCS. Proprietary tech can be used to push out competitors.
Both? I’m leaning a bit more towards HW though. CUDA isn’t exactly that exceptionally hard to replicate (if you’re willing to spend billions of dollars anyway)
Why aren't these companies investing more into 3-dimensional chips instead of trying to squeeze more on the same 2-dimensional die when we're so close to hard atom-size limits?
A 3cm x 3cm x 3cm cube could fit a hell of a lot of transistors and gates even if it is 20nm.
How would manufacturing a hollow triangular prism, or a cube, be accomplished more cheaply than assembling planar components? When people discuss the possibility of 3D semiconductor fabrication, they're usually talking about additive manufacturing of multiple layers of transistors atop a single silicon wafer substrate. That's only going to be economical if most of the volume deposited is used for transistors rather than needing to be removed later. Large hollow shapes would be difficult and wasteful to manufacture that way; transistors aren't as easy to 3d-print as plastic filament.
I can't imagine it myself, though a massively parallel thing that didn't need to worry about heat might work. IDK is that what an optical type chip is supposed to somehow do?
I know a little about how semiconductors work: the heat comes from the waste energy released as transistors switch between conducting and not conducting, since the semiconducting parts need their conduction paths opened or closed by changing charge levels in the narrow regions. I can barely imagine some sort of interference-based optical chip that might not generate heat at the logic-combination stage, but something somewhere has got to flip/flop, and that's probably where the heat will be.
The other option we've heard about for decades involves better conduction of the heat out of the chip. If that's the case, thermally conductive layers could be sandwiched between stacked 2D layers. Maybe that'll finally be economically viable for high-density logic. As for RAM, stacking seems to work well enough so far.
That would still be starting from the planar wafers we're manufacturing today, but the question was about making 3D structures directly without just assembling 2D chips as we have them currently.
Cooling and power supply. Plus it's really hard to have more than one transistor level: you can try to put transistors on both sides of a die or you can take separate dies, grind some of them really thin, and then put them on top of each other.
I’m not the one who downvoted you, but while stacking in 3D might work for things like ram and storage, it doesn’t help the same way with cpu. They aren’t trying to get the overall die size smaller per se, but rather the node size. Generally speaking, smaller transistors = lower energy consumption + higher performance for the same number of transistors.
I wonder how practical it would be to switch to non-binary transistors. I believe some early vacuum tubes were ternary instead of binary. Probably not practical, but it would be interesting to see how it could fit together.
Storage (NAND flash) and many off-chip communication protocols (both wired and wireless) are already non-binary and use multiple discrete voltage levels. There's also a lot of interest in more or less analog storage and compute elements for ML, where it's fine to have weights and activations represented in a fundamentally fuzzy manner. But I don't think there's any promising way to perform discrete deterministic logic and computation in base 3 or higher using transistors, aside from what we already do for multi-bit quantities.
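To make the "non-binary" storage point concrete, here's a tiny Python sketch of how bits per cell follow from the number of distinguishable voltage levels (the SLC/MLC/TLC/QLC level counts are the usual NAND flash conventions):

    import math

    # Bits stored per cell = log2(number of distinguishable voltage levels).
    # Level counts below are the standard NAND flash generations.
    cell_types = {"SLC": 2, "MLC": 4, "TLC": 8, "QLC": 16}

    for name, levels in cell_types.items():
        print(f"{name}: {levels} levels -> {math.log2(levels):.0f} bits per cell")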
Intel's five nodes in four years plan is extremely aggressive and ambitious. They are trying to pull off something that I believe is not feasible. Lots of moving parts in parallel.
I worked in a Fab for a year and the complexity is mind blowing. I don't see how they can execute to build those nodes and get the yields under control in such a short timeframe.
Yes. I think they can get the node out with a little bit of delay. But I don't think they'll be very profitable for a while.
However, they do have good timing because the world is about to enter a period where chip designers will be desperate to get new AI chips manufactured.
There is only one cutting edge fab which is TSMC. Chip designers want a second supplier desperately to create some competition.
I think 20A/18A will only be used by products customer can buy in mid-2025 - not late 2024 like they said.
You can see here that Intel has already delayed everything by 1-2 quarters from their 2022 roadmap:
There is an AI chip frenzy for sure. However consumers aren't buying those - data centers are - those owned by Microsoft, Google, Facebook, Tesla and Apple. I therefore question the sustainability.
What happens if AI is a bubble and those companies don't get an ROI? What if they can't find a way to monetize AI because consumers just see it as a commodity? What if after ramp up AI chip purchase dies down because models are already trained and inference doesn't need that much compute?
AI is going to drive huge demand for new silicon for inference mostly but also for training. Every device needs to have new processors that are optimized for running ML. For example, when the new iPhones integrate LLMs directly onto iPhones, you'll need to keep buying new iPhones because Apple will drastically increase the size of Neural Engine every generation.
AI is going to cause a huge uplift in chip demand from clouds to small devices.
Hmm. For iPhones and Android, Apple and Samsung already have that covered. Their AI chips already exist and production must already be planned for the coming years. NVIDIA isn't going to contribute.
I heard predictions for increase in AI PC sales but I don't buy it. That remains to be proven.
So we are left with data centers. Isn't that right?
They need to drastically increase the size of the NPU every year to keep up. We are a long way away from running GPT4 on an iPhone.
Every chip will eventually need to be replaced with one that can do inference for a massive AI model. And we will need more chips to put in more places.
The step from 7 to 5nm shrunk pitch to 85%, a mere 19% of marketing hype.
But the step from 2 to 1nm shrunk gate pitch to 93.3% instead of the suggested 50%, a large 87% marketing hype.
Or viewed as a shrink by only 6.7%, it's a whopping 600+% hype.
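For anyone who wants to reproduce those percentages, here's the back-of-envelope arithmetic (the 85% and 93.3% pitch ratios are taken from the parent comment, so treat them as illustrative rather than authoritative):

    # "Hype" = how far the actual pitch shrink falls short of what the node
    # names imply (new_name / old_name).
    def hype(old_name, new_name, actual_pitch_ratio):
        suggested = new_name / old_name
        return actual_pitch_ratio / suggested - 1

    print(f"7nm -> 5nm: {hype(7, 5, 0.85):.0%} hype")    # ~19%
    print(f"2nm -> 1nm: {hype(2, 1, 0.933):.0%} hype")   # ~87%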
The hype-to-reality gap is going to keep getting much bigger every generation of announcements due to Dennard scaling and the end of Moore's Law. I was fortunate enough to be an active computer technologist through the "golden age" from 1980 to ~2010. Every three to four years things got about twice as fast for around half the cost. And not just some features or aspects but almost the entire system in almost every way. We had no idea back then just how extraordinary that period was.
Today, the industry has regressed to hyping tiny incremental gains in narrow sub-metrics which only rarely have a material impact on overall performance across an entire system or most applications and usually at higher cost. Those that entered the industry in the last 15 years don't really appreciate just how much progress has slowed to a crawl.
Sadly, looking at the ten year roadmap projections from the likes of IMEC, there aren't any big leaps on the horizon. Progress will come mostly in single-digit percentages and most gains will be increasingly conditional (eg limited to one logic type or only in certain contexts). And the costs are going to keep skyrocketing. I hope we'll "get lucky" with some unexpected breakthrough but there's no reason to expect that.
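For reference, the Dennard scaling mentioned above is the textbook constant-field model: shrink dimensions and voltage by a factor k and, ideally, everything improves at once. A minimal sketch of the ideal rules (real devices stopped following them around the mid-2000s):

    # Ideal constant-field (Dennard) scaling with linear shrink factor k > 1.
    def dennard(k):
        return {
            "gate delay": 1 / k,              # circuits get faster
            "transistor density": k ** 2,     # more devices per area
            "power per transistor": 1 / k**2,
            "power density": 1.0,             # constant -- the part that broke
        }

    print(dennard(1.4))  # one "ideal" node step (~sqrt(2) linear shrink)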
I feel like there needs to be a "Gordon Moore" of marketing let's say Mordon Gore's law, that says for every X% decrease in actual size there is a 2^X% claimed by marketing.
I can't wait for the day when marketing nanometer figures blow right past atomic size and we're measuring feature sizes in quarks. By the time feature sizes DO actually get close to atomic size from an engineering perspective, marketing will be damn near the planck length.
However note that the feature size barely shrinks between 2 nm and 1nm. I'm not sure what size Intel will go with, but offhand this might be close to the limits of silicon as we know it. I'm not sure where things will get pushed in the quest for further progression.
Also of note, IIRC some of AMD's presentations around when Ryzen launched involved mention that cache / memory (and maybe some other features?) on CPUs weren't scaling down effectively past 7 ~ 5 nm, which is one of the reasons GPUs had the cores and then external memory controllers. I know automakers and a bunch of other consumers that would like bulk (cheap) products made on modern wafers (300mm) with a semi-modern process that's super inexpensive. Particularly power ICs which would prefer lower leakage even if it means a slower response.
These gate and metal pitch numbers don’t tell the whole story. In the end it’s logic gate density that counts.
And besides decreasing the gate and metal pitch, the logic gates themselves have also shrunk (typically expressed by measuring the height of a gate in number of metal tracks), from 9 tracks down to 6 tracks.
Changing the transistor from planar to fins, and now hopefully to ribbons with eventually stacked PMOS and NMOS, is a big enabler.
That said, we’re still not hitting the ideal scaling numbers. We’re just doing somewhat better than what’s suggested by poly and metal pitch only.
Well what is 1nm measuring? The diameter of the sphere in which the combined gray brain matter of the marketing department could be squeezed into? Lithography wavelength?
It's a standardised "process" sizing established by the International Technology Roadmap for Semiconductors. Originally they did correspond to actual feature sizes but since around 1997, they no longer correlate to actual sizes and are more just a marketing term that matches the same naming scheme as previously.
Occasionally a given process will correspond to actual sizes, but it's more out of luck than anything else.
This isn't fully correct. Drawn 7nm is like 20nm, but effective gate length is shorter due to short-channel effects. So these numbers are almost a 1-to-1 reflection of effective gate length. People mix up the transistor pitch with gate length. We can make small gate length devices, but diffusion (source/drain) and contacts don't shrink as much, so we end up needing a larger area for these.
1nm is the real physical size of a transistor that you would need, to get equivalent performance characteristics, if the transistor's other layout characteristics were fixed in 1997.
The divergence is because when FinFET layout was introduced, it gave as much of an improvement as a node shrink would have otherwise done.
See also: AMD Athlon "3000" CPUs, clocked at only 2.x GHz, performing "as well as an equivalent" 3.0GHz Intel CPU.
In principle 7nm has 57, 64 or 76nm gate pitch. Routing is quite inefficient in 57nm gate pitch, so 64nm is used most often. As far as I can see in the design rule manual there's no possibility for 60nm pitch. 40nm metal pitch for colored metals is correct.
Standard cell height pitch has a similar story, 270nm height is possible but means high parasitics induced RC delay. For good performance 300nm or higher height pitches are used.
These tables miss a huge context though. TSMC N7 drawn gate length is like 20nm, not 60nm. This pitch includes diffusion (source/drain) and contacts. Biggest limit is interconnect (vias are impossible to shrink as we make them now).
> The phrase "7 nm" does not refer to any dimension on the integrated circuits, and has no relation to gate length, metal pitch, or gate pitch; since at least 1997, "node" has become a commercial name for marketing purposes that indicates new generations of process technologies, without any relation to physical properties. However, the smallest dimension within an individual transistor, the fin width, can sometimes be 7 nm. TSMC and Samsung's "10 nm" (10 LPE) processes are somewhere between Intel's "14 nm" and "10 nm" processes in transistor density.
This is such a ridiculous way to do, well, anything. Especially something that's supposed to be based on technology and science. It's how you would sell infomercial products or gas station virility pills.
It's true that marketing sort of ran away with it but it's important to note that the original idea was sound and didn't have to do with marketing hype. "Process shrinks" require many different industries to coordinate or else nothing can get done, and the ITRS roadmap was meant to be the mechanism to accomplish that coordination. What happened was that not long after it was introduced the actual feature sizes diverged from the names on the roadmap, but they decided to just keep using the roadmap names because the need for coordination didn't actually go away and, well, the names were right there, why bother changing them.
The end result seems crazy now, but it's the result of a set of decisions that were individually rational, not marketing-driven, at each step.
It's fine, there's a roadmap for it -
"the 1000 quark" node should last as a good 10, 15 years until we get to "1 quark"
Then we move onto the "10,000 planck length node". That should easily get us to the end of Moore's law.
Why do you care? Are you designing a standard cell library for it?
Seriously. I don’t understand why so many people care that the “nm” number is a marketing term. The chip manufacturing processes are now often improving in ways that measuring the minimum pitch doesn’t really capture. So they bump down the number whenever it improves in some other way. It’s that simple.
I have designed a standard cell library. I don’t care what number they use in marketing. If you actually want to know the performance/characteristics/dimensions of the process there are so many other numbers you need to know and you’d refer to the actual documentation.
As an end-user customer, all I need to know is that "3nm" is slightly better in performance/density/power than "4nm", and the label they assign achieves that goal.
Well nm is not something anyone should care about. Chip designers care about gate density. But end users only care about speed, power consumption, capacity, etc.
Once again, the "process size" are marketing numbers. There's no actual feature measuring 1nm.
The actual transistors are around 40nm across [1]. They change the geometry of the features, either the shape of the transistor gate or even the method of power delivery. All really incredible features and worthy of awe. Just not actually making transistors with finger-countable number of atoms.
I have often heard that these are “just marketing.” But humor me - where does the “1nm” even come from? What is the calculation that ends up spitting out 1nm, even if its invalid?
It's supposed to reflect transistor density. If you take a chip made on a say 90nm process and shrink it by 90x it should have approximately the same transistor density as a "marketing 1nm" chip. The scaling stopped around 45 nanometers.
You can see this if you compare 2 processes: Intel 45nm had 2,779,468 transistors/mm², and the Apple A14 (7nm) had a transistor density of 134,100,000 t/mm²
2,779,468*(45/7)²=114,865,769, so the two are quite close.
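Generalizing that check (the density figures are the ones quoted above), the idea is simply that an ideally named node should have density growing as the square of the name ratio:

    # If node names scaled ideally, density should grow as (old/new)^2.
    def predicted_density(base_density_per_mm2, base_node_nm, new_node_nm):
        return base_density_per_mm2 * (base_node_nm / new_node_nm) ** 2

    pred = predicted_density(2_779_468, 45, 7)   # Intel 45nm scaled to a "7nm" name
    print(f"predicted ~{pred:,.0f}/mm^2 vs. ~134,100,000/mm^2 for the A14")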
The size used to correspond to minimum feature size. A transistor consists of multiple features and used to be fairly planar with things arranged side by side more or less.
As it became harder to shrink the minimum feature size, they figured out other ways to shrink. For example various ways of stacking things on top of each other rather than having them side by side[1].
As such they could cram more transistors in the same area compared to a planar transistor, and hence you got the same effect as shrinking the minumum feature size.
Not sure exactly how they name things, but one could calculate what feature size would have been required to get a given transistor density using a plain planar transistor, for comparison.
It used to be that it was a linear scaling factor. So if you moved from, say, 40nm to 20nm, you'd see a 4x increase in transistor density. In the early days of VLSI scaling (roughly from 45nm and up), that really did work by just "making everything smaller". And in that world, the smallest thing was generally the size of the middle/active region of the transistor on top of which the gate sat, so the "gate length" (which is actually the *width* of a resist line that crosses the transistor) became the standard.
But then they started having to cheat: you can take a big transistor and "fold" it vertically to be smaller but still have the same gate area, etc... The actual feature sizes may not have changed much but you have the 2x density increase, so you name your new node "32nm" even though the actual width of the gate feature when seen from above didn't change, etc...
But then somewhere around 10nm everyone just gave up and started handing out random numbers. TSMC's 3nm process is not remotely a 2x density increase over 5nm, for example.
Sort of. Down to around 14nm nodes, maybe lower, the number was the actual minimum length of the transistor gate. Somewhere thereabouts the transistors weren't shrinking as much but they needed the number to go down so it became some rough estimate of PPA (power performance area) improvement - it's not just density.
There's also a distinction between "drawn length" which is the number specified by the designer, and the actual feature size on silicon. This can be scaled up or down either completely arbitrarily (meaning the drawn length is a total sham) or optically (meaning the drawn length is real but the chip is fabricated with a magnification <1)
Oh no, long before 14nm. A quick google tells me that the gate pitch on Intel 14nm is 70nm!
Obviously there's a ton of complexity here and lots of cheats and optimizations were done over the decades that weren't directly related to linear sizing. But I stand behind the threshold I gave: the big discontinuity in the industry, where "node size" and "feature size" clearly began to significantly diverge, was the introduction of finfet/tri-gate transistors in Intel 32nm.
Gate pitch is not the same as gate length. The smallest feature size is usually the transistor gate length. In Samsung 14nm processes for example the drawn fin length is in fact 14nm. The actual physical size of the fin is closer to 8nm but I believe the whole structure is still close to 14.
The "divergence" started happening long before finfets where Intel is concerned, but they weren't manufacturing for anyone else. For the rest of the foundries, it still followed somewhat logically, if not exactly, down to 14 at least.
The actual pattern was intended to follow "previous / sqrt(2)", such that every node would double areal density, and every two nodes would double linear density.
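A quick illustration of that pattern, starting from 90nm; the rounded values line up with the familiar 65/45/32/22/16 names:

    # Each node divides the previous linear dimension by sqrt(2),
    # which doubles areal density per step.
    node = 90.0
    for _ in range(6):
        node /= 2 ** 0.5
        print(f"{node:.1f} nm")   # 63.6, 45.0, 31.8, 22.5, 15.9, 11.3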
If you looked at the chip sideways we would be shooting way beyond femtometers, but if you happen to look at it from above then it's only 1 nm. Don't you think it's strange that the manufacturer is using metrics that make them look bad?
They could at least use a number that reflects current transistor density: the feature size in nm that planar 1980s-style transistors would need in order to reach that density.
TSMC now labels their process e.g. “N3”. So there has been a shift away from nm in some contexts.
But it really doesn’t matter. There’s not a single physical number you can extract from the process that accurately describes its performance. So just continuing to use “nm” and assigning some number that feels right is actually a reasonable approach.
Transistor density for just, like, a grid of unconnected transistors? Or some reference design or something like that?
IMO it is interesting to get a general idea of where the companies are, but in the end the element size doesn’t matter to end users. People should check the linpack benchmarks of the chips that come out, or whatever.
SRAM size has not been scaling at all in recent nodes, so these days the notion of uniform scaling is also breaking down quite a bit. This means that future designs will have less cache memory per core, unless they use chiplets to add more memory (either SRAM or eDRAM) close enough to the chip to become usable as some kind of bespoke cache.
Just for the record, the "not at all" part is incorrect for the nodes I'm aware of. Correct would be "way worse", i.e. it's still getting denser, but the improvement is way worse than that of random logic.
TSMC's N3E (their first 3nm that will actually see broad use) has the same SRAM cell size as N5. Their original 3nm had 5% smaller SRAM cell size than N5, but that turned out to be too aggressive and the process was too expensive for most of their customers. So for the time being, TSMC has indeed hit a wall for SRAM scaling. But it looks like N3P will be able to shrink SRAM again.
I said this yesterday in a different thread, but Intel needs to jettison its design business and just be a foundry. Pulling a reverse AMD. I don’t think they’ll be able to meaningfully acquire customers for their foundry business unless they split the company. I also think their foundry business is the one thing that could cause the stock to soar, and was historically what gave them their edge. I think there is a lot of demand for another possible fab outside of TSMC[1] because of the risk China poses to Taiwan but only if there is a process advantage. Right now TSMC has proven itself to be stable and continues to deliver for its customers. Intel is playing catch up, and sort of needs to prove it’s dedicated to innovating its fab business. I don’t think competing with its potential customers is a way of doing that.
[1] I know Samsung has foundry services (among others) but I don’t think they have the leading node capabilities that really compete with TSMC.
Story time: I worked on Google Fiber. I believed in the project and it did a lot of good work but here, ultimately, was the problem: leadership couldn't decide if the future of Internet delivery was wired or wireless. If it was wireless then an investment of billions of dollars might be made valueless. If it was wired and the company pursued wireless, then this would also lose.
But here's the thing: if you decide to do neither then you definitely lose. But, more importantly, no executive would lose their head from making a wrong decision. It's one of these situations where doing anything, even the wrong thing, is better than doing nothing because doing nothing will definitely lose.
Intel's 10nm process seemed like a similar kind of inflection point. Back in the mid-2010s it wasn't clear what the future of lithography would be. Was it EUV? Was it X-ray lithography? Something else? Intel seemed unable to commit. I bet no executive wanted to put their ass on the line and be wrong. So Intel loses to ASML and TSMC but it's OK because all the executives kept getting paid.
I forget the exact timelines but Intel's 10nm transition was first predicted in 2014 (?) and it got delayed at least 5 years. Prior to this, Intel's process improvements were its secret weapon. It constantly stayed ahead of the competition. There were hiccups though, most notably the Pentium 4 transition in the Gigahertz race (only saved by the Pentium 3 -> Centrino -> Core architecture transition) and pushing EPIC/Itanium, where they got killed by Athlon 64 and its x86_64 architecture.
I see the same problems at Boeing: once-engineering-driven companies get taken over by finance leeches. This usually follows an actual or virtual monopoly, just as Steve Jobs described [1].
I used to work with an old Sun Microsystems dude, he was an executive at the company I was at. We used to have these meetings every week and he ended up attending one. We had been trying to come to a conclusion for weeks on a specific piece of tech. He stopped the meeting dead in its tracks and said we're going to make a decision right now, if it's wrong, we'll learn from it, if it's right, awesome. Not making this decision is more costly than making the wrong decision.
I just remember thinking, finally, someone with some authority is getting this ball moving.
Reminds me of a story that happened at my company.
We needed, and had purchased, a rather expensive database software license; however, we didn't have the hardware yet to run that database. The guys doing hardware spent MONTHS debating which $10k piece of hardware they'd pick to run the DB. The DB license cost? Something like $0.5 mill.
As one engineer said to me "I don't care what hardware you guys get, purchase them all! We are wasting god knows how much money on a license we can't use because we don't have the hardware to install it on!"
I've noticed a lot of people, even ones making $200k+ in salary, are really bad when it comes to making decisions that involve any amount of money.
E.g. I've been in meetings with multiple developers whose combined salaries add up to well over $1 million/year, debating for way too much time whether it's worth it to buy a $500/month service to help automate some aspect of devops.
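A rough illustration with the figures from that anecdote (the $1M payroll and $500/month service; the hours-per-week assumption is mine): the hourly payroll cost of the room is about the same as a month of the service being debated.

    team_payroll_per_year = 1_000_000                 # everyone in the meeting
    hourly_burn = team_payroll_per_year / (52 * 40)   # ~$480 per meeting-hour
    service_cost_per_month = 500

    print(f"one hour of this meeting: ~${hourly_burn:,.0f}")
    print(f"one month of the service: ${service_cost_per_month}")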
Maybe this wasn't the case for your specific anecdote, but in the scenario I'm describing I got the feeling that a lot of people think about business purchases in the context of their own personal finances rather than in the context of the business's finances. Leading people to be extremely cautious with things like a $10k purchase that would be "expensive" if purchased as an individual and "cheap" if purchased as a company.
In those cases, getting an exec to come in and pull the trigger can help. The exec is used to looking at big picture budgets/strategy, which IC's aren't. (Although I'm sure someone here can come up with another anecdote proving that wrong)
Every so often my company would provide lunches for the developers. However, they didn't want to spend too much money doing this. So how did they resolve it? They put together a committee of devs to discuss lunch options/etc. Easily $1+ million/year of salary in one room debating whether we do Jimmy Johns or McDonalds and how they'd get the food to the office.
For $2000, you can get some pretty nice catering for 100 people. But like you said, people just seem bad at thinking of that sort of big picture.
Providing daily lunches to developers is an insane ROI and I don't know why it's not standard practice. It's pennies compared to their salary, and they are happier, spend more time thinking about work instead of what/where to get lunch, spend more time eating together and conversing, and probably eat healthier food which mitigates the post-lunch coma.
The TAM separately considers whether the value of snacks provided to employees is excludable from income under Section 119. Rather than revisit the business reasons previously analyzed, the TAM relies on Tougher v. Commissioner, 441 F.2d 1148 (9th Cir. 1971), to determine that the snacks are not meals. Accordingly, a snack cannot be a meal furnished for the convenience of the employer. Nevertheless, the TAM does conclude that the value of the snacks is excludable from employee income as de minimis fringe benefits under Section 132(e)(1).
Used to be something that companies just generally recognized as a good thing. Look at workplaces built in the '60s through the '80s and basically all of them had cafeterias as part of the building plan. [1]
Subsidized. The trick was striking a balance in making the food cheap enough that the employees would eat it but not so cheap that the company is footing the entire bill.
When I worked at a merchant bank in London in the 90s there was a very good cafeteria with great food that was not free, but it was so subsidised that it might as well be.
+1 to this, I had one job where lunch was provided and it very much brought people together, particularly across team boundaries / job levels, and often "tricked" you into having pseudo meetings over lunch.
I find that still happens organically in smaller companies without, but in larger companies things trend towards more clique like behaviour without it (caveat small sample size)
Same thing with hardware in every company I've worked at. Really, "Sr. Staff Engineer Alice" makes $300k/yr, but only gets a budget of $2k for a laptop that's meant to last four years and is their primary tool? How does this make sense?
I see devs that are paid a fortune and they use these silly 20 or 24 inch monitors and I don't understand it. A large 4K monitor is very cheap and really increases productivity. Give them 3, ffs.
Meanwhile, there is an accountant somewhere that thinks he's a genius for keeping the hardware budget in check.
Another good one: my company periodically forces all software engineers to wipe their workstations / laptops and re-install everything from scratch. I have one such wipe coming up, with no way to avoid it, will probably spend 3-4 days just setting up all software again and do no work.
Internalising what's a reasonable business expense compared to personal expenses is a skill I feel I've only fairly recently developed.
It's super important to consider the human cost and opportunity cost of each decision, and it's scaling characteristics.
Eg: spending $20+k on a ci system like circleci, GitHub actions, whatever might feel like a big purchase, but if you consider that split by the number of developers using it, their salaries, and that in general it scales with your head count rather than user count suddenly it's pretty attractive.
On the other hand some other seemingly small (unit) cost that applies per user of your product might be worth optimizing/eliminating as that can have a big impact on your margins - but even then you need to balance it against opportunity cost. We could increase margins by x% with y investment, or increase revenue by z doing something else where the delta of revenue outweighs the better margins.
Basically it's a big juggling act involving numbers that we're not used to dealing with in our personal finances, and you need to ground your thinking in terms of scaling characteristics and the company's overall revenue/burn to appreciate what actually moves the needle.
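As a sketch of that "divide by head count" framing (the $20k CI figure is from above; the team size and salary are hypothetical numbers purely for illustration):

    ci_cost_per_year = 20_000
    developers = 40                      # hypothetical team size
    avg_fully_loaded_salary = 180_000    # hypothetical

    per_dev = ci_cost_per_year / developers
    print(f"${per_dev:,.0f} per developer per year "
          f"({per_dev / avg_fully_loaded_salary:.2%} of one salary)")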
Yea, people generally don't think about these things. People were genuinely surprised to find out that 70% of our expenses as a software business were... salaries. The next largest category? Rent. Everything else was effectively a rounding error.
When I was a hardware product manager, there were a ton of decisions people asked for and a heck of a lot of them basically didn't matter. If we needed to course correct, we'd course correct. The main thing was not sitting around twiddling our thumbs for weeks or longer.
A crap boss is one that doesn't make choices, a good boss is one that does, and a great boss is one that makes sure that the best of the possible choices is made given the data at hand.
Having careers and heads depend on not making the wrong choice just pushes people toward paralysis.
The Google Fiber project was always meant to push the carriers into competition. Google knew that if they didn't launch Google Fiber, none of their other ventures or the internet as a whole, could be as successful. Google paid big money for YouTube and the plan was always to turn it into the service it is today. At the time, there were also worries whether the carriers would restrict services (aka net neutrality) or if they would charge by GB. Launching Google Fiber made it such that the carriers had to start competing and upgrade their infrastructure.
If it wasn't for Google Fiber, I'm certain that we'd be stuck with 20mbps speeds, the cable/DSL monopoly, and we wouldn't have the likes of the OTT services and the choices that we have today. Or at least it would have been delayed by quite a bit.
I worked for a company that was an equipment vendor for Google Fiber and other service providers.
Plenty of countries have better (faster and/or cheaper) broadband options than most of the US, without having any Google involvement. Competition (or government enforced requirements and price caps) are what's needed, Google Fiber had a bit more of an incentive than most for aiming to undercut their competitors but ultimately I think you're overstating their importance.
Competition would be nice, but just the appearance of credible competition was enough to induce the incumbents to do better.
Google Fiber deployed to the Kansas Cities, making themselves credible competition. Then, they announced 20 cities they would deploy to. Suddenly, incumbents in 20 cities had deployment plans and deployed before Google Fiber got anywhere, and then Google Fiber decided not to do any new deployments.
Would the incumbents have deployed without Google Fiber's credible competitive announcements? Maybe? We'd need inside information to know for sure. It sure doesn't feel like they would have though.
> Would the incumbents have deployed without Google Fiber's credible competitive announcements?
Of COURSE they wouldn't!
If google fiber hadn't happened, all providers would have continued sitting on their collective asses, soaking up as much money as possible, doing the least possible legally permissible work, nickel-and-diming customers as much as possible.
> Plenty of countries have better (faster and/or cheaper) broadband options than most of the US, without having any Google involvement.
Those countries have governments willing to regulate for the benefit of the consumer, or else to provide the service directly[1]. That there are better ways to do something doesn't mean it's not valuable to have done.
[1] Almost nowhere, in any market, had competing gigabit landlines in residential areas over the timeframe discussed. "Competition" is absolutely not the solution here.
Most countries have policies that expressly prohibit competition, or make it unnecessarily expensive.
Suppose the government owned the utility poles or trenches along the roads, paid for them in the same way as they pay for the roads, and access to use them was provided to all comers for free. All you have to do is fill out some basic paperwork and follow some basic rules to make sure you're not cutting someone else's lines etc.
People would install it. You -- an individual -- could go out and put fiber in the trench on your street, wire up the whole street, pool everybody's monthly fee and use it to pay for transit.
The reason people don't do this is that it's illegal, or to do it without it being illegal would require millions of dollars in legal fees and compliance costs and pole access charges.
Usually those countries have some combination of lower labor costs, higher density (you can run fiber and then hang 5x as many subscribers off it) and a more lax regulatory landscape (try getting permits to dig in a US city).
You don't necessarily need competition either. Switzerland's state owned telecoms provider provides 25gbit symmetrical fibre to practically all homes in all cities and it is very affordable.
That seems a very US-centric way of seeing the internet evolution.
The rest of the world moved to higher speeds and stopped counting GBs (except on mobile) decades ago, and I mean decades.
In 2004 in Italy I had a 20 Mbit/s fiber connection, and 100 Mbit/s a few years later. I still remember pinging 4, literally 4 ms, on Counter-Strike 1.6.
And Google Fiber only started much later, in 2010. So I don't see any impact from Google Fiber on the internet as a whole; maybe it pushed US carriers not to do worse (internet in the US is not really that amazing in terms of speeds and latency).
One thing I noticed is that while speeds increased in the decades since then, latency got worse. Even with the fastest connection I can use, I rarely if ever ping below 30 ms on the very same Counter-Strike 1.6 or newer versions.
> If it wasn't for Google Fiber, I'm certain that we'd be stuck with 20 Mbps speeds
Are you trying to say that Google Fiber influenced the behaviour of incumbent telcos in other regions? If it's the same region, sure, but the area served by Google Fiber is/was tiny.
> I see the same problems at Boeing: once engineering-driven, companies get taken over by finance leeches. This usually follows an actual or virtual monopoly, just as Steve Jobs described [1].
There is a nice mental framework for viewing such things. It has a bit of a religious origin, but it effectively explains and describes what you're seeing (I'm viewing it through an atheistic lens). I mean, the egregore.
This is the natural life cycle of an egregore! It is explained by there being two groups: those that serve the purpose the egregore was created for (engineers, people who provide value), and those that serve the egregore itself (finance, people who extract value). Both groups need to exist for a healthy entity. But the balance (seemingly) always tips: the egregore eventually chooses the group that serves the egregore to lead, and when that happens the original vision is often lost, and the company loses customer trust by altering the relationship the customer has with the egregore (how much value the customer extracts from the egregore vs. how much value the egregore extracts from the customer).
> leadership couldn't decide if the future of Internet delivery was wired or wireless.
Weird way to think of this problem (IMO). I'd think there would always be a mixed wired and wireless world.
Even if customers don't end up using wired connections to their homes, you still need a wired connection to the antennas serving a home, neighborhood, or apartment building. That's where a lot of telcos today are making their money: not from the end customer, but from T-Mobile or AT&T as they put fiber lines directly to the antenna towers.
And even if Google wanted to be the end-to-end ISP for someone, they'd benefit from a vast fiber network even if they later decided wireless was best, because they'd already have fiber wherever they needed their wireless antennas.
The last mile is expensive. Even hooking the customer up to the line running outside their house is expensive; I've seen different customers estimate this at anywhere between $2000 and $5000 per premises. This assumes a ~40% customer take-up rate, so with more competitors the cost to each goes up. It's one reason why overbuilds make no sense and municipal broadband is the best model for last-mile Internet delivery.
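To make that take-rate arithmetic concrete, here's a minimal sketch (my own illustration, not from the thread; the even split between competitors and the exact figures are assumptions):

    # Rough model: the quoted $2000-$5000 per connected premises already assumes a
    # ~40% take-up rate, so splitting that take rate among N overbuilders roughly
    # multiplies the build cost each ISP has to recoup per paying subscriber.
    ASSUMED_TAKE_RATE = 0.40  # fraction of passed homes that buy service from anyone

    def cost_per_subscriber(quoted_cost_per_premises, competitors=1):
        cost_per_home_passed = quoted_cost_per_premises * ASSUMED_TAKE_RATE
        share_of_take = ASSUMED_TAKE_RATE / competitors  # even split is a simplification
        return cost_per_home_passed / share_of_take

    for quoted in (2000, 5000):
        alone = cost_per_subscriber(quoted, competitors=1)      # the quoted figure itself
        overbuilt = cost_per_subscriber(quoted, competitors=2)  # one overbuilder: roughly double
        print(f"${quoted}/premises -> ${alone:,.0f} per subscriber alone, "
              f"${overbuilt:,.0f} with one overbuilder")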
Wireless bandwidth keeps going up; wireless is already >1 Gbps inside a building. What if, instead of spending $5000 per house, you could use tightbeam wireless or a dense cellular network with >1 Gbps bandwidth? You may have spent billions on a network that would take decades to amortize, and have it be made worthless by wireless last-mile delivery.
> Wireless bandwidth keeps going up; wireless is already >1 Gbps inside a building. What if, instead of spending $5000 per house, you could use tightbeam wireless or a dense cellular network with >1 Gbps bandwidth? You may have spent billions on a network that would take decades to amortize, and have it be made worthless by wireless last-mile delivery.
I'd presume you wouldn't cut the existing wired customers over to wireless. So it's not like the $5000 spent is lost; it's just that you can serve new customers more cheaply (if you expect a lot of growth in an area).
Overbuilds are a weird one. They can make sense for a brand-new community, since you can get a BUNCH of homes done cheaply and can pretty much turn on internet the moment someone moves in.
If the wireless last mile option costs $500/household then you'll get destroyed by a competitor who can come in and offer the same or better service at substantially lower cost. So yes, it does matter.
I'm not sure what you mean by "overbuild" here. I mean it in the sense that both AT&T Fiber and Verizon provide services to the same homes. US policy notionally tries to encourage this (because, you know, markets solve everything) but if you have 100K homes and 40% of them get service regardless, having 2 competitors means each ISP has to recoup their costs from half as many homes.
I'm talking about wiring brand new builds for an ISP. Getting the coax, fiber, telephone line right to the home before the lawn is sodded and drywall is installed is a really fast/cheap job. It costs basically nothing to do because there aren't a whole bunch of easements and property rights problems to navigate.
Generally, a developer already owns the rights to everything so it's just working with them to get everything done. And they like it because who doesn't want an internet ready community to sell?
You end up talking $100 per unit vs the $2000 or $5000. Which is a great discount for the ISP.
Moreover, builders often pay the cost to hook homes up to fibre during construction. Locally (in eastern Ontario, Canada) I have seen the incumbent quote and get $500k to fibre up new rural subdivisions.
> If the wireless last mile option costs $500/household then you'll get destroyed by a competitor who can come in and offer the same or better service at substantially lower cost.
That's not even necessarily true. The $2000-$5000 is a one-time cost, and that piece of fiber could last 50 years. Having to amortize $40-$100/year against a service that costs around that much a month isn't fatal. Meanwhile, they're offering service at whatever maximum speed, so you set your price $1 below theirs for that tier and then charge $10-$20/month more for double the speed, which they can't offer at all. Half the customers take you up on the higher speeds and you make back your costs; the other half take the $1/month discount, and your competitor is the one who gets destroyed.
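For a quick back-of-the-envelope on that amortization point (a sketch using the figures from this subthread; the 50-year lifetime and the ~$70/month service price are illustrative assumptions):

    # Amortize a one-time fiber build cost over an assumed plant lifetime and compare
    # it to a typical monthly service price. Ignores cost of capital on purpose.
    ASSUMED_LIFETIME_YEARS = 50
    ASSUMED_MONTHLY_PRICE = 70  # illustrative; the thread's range is roughly $40-$100

    for build_cost in (2000, 5000):
        per_year = build_cost / ASSUMED_LIFETIME_YEARS
        per_month = per_year / 12
        print(f"${build_cost} build -> ${per_year:.0f}/year (~${per_month:.1f}/month), "
              f"versus ~${ASSUMED_MONTHLY_PRICE}/month of service revenue")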
> Intel seemed primed to dominate the chip industry as it transitioned into the era of Extreme Ultraviolet Lithography (EUV). The company had played a pivotal role in the development of EUV technology, with Andy Grove’s early investment of $200 million in the 1990s being a crucial factor.
I know nothing, but it felt like Intel paid the price of being first. They picked something hard and pricey, and it didn't pan out in time, allowing other competitors to catch up and adapt nicely to markets (mobile).
> I know nothing, but it felt like Intel paid the price of being first.
As the book goes into, there were other things in question: since TSMC only did fab, and did not design, they had more customers/opportunities to iterate the process and get good at it (more focus).
There was internal-to-Intel stuff that contributed to losing the lead as well.
I'm only partially through the book currently, and there's a lot of chip history being described (going back to the 1950s), so I'm not going to retain all of it in a single pass.
> I believed in the project and it did a lot of good work but here, ultimately, was the problem: leadership couldn't decide if the future of Internet delivery was wired or wireless. If it was wireless then an investment of billions of dollars might be made valueless. If it was wired and the company pursued wireless, then this would also lose.
This one is particularly amusing because the difference is primarily a business distinction and not a technical one.
Here's how your tablet gets internet via fiber: There is a strand of fiber that comes near your house and then you attach an 802.11 wireless access point to it. Every few years the latter has to be replaced as new standards are created.
Here's how your tablet gets internet via 5G: There is a strand of fiber that comes near your house and then the telco attaches a cellular wireless access point to it. Every few years the latter has to be replaced as new standards are created.
They should have just built the fiber network and put cell sites on some of the poles. Then you sell fiber to anybody who buys it and cellular to anybody who buys it and you don't have to care which one wins.
You start with product people, but if you do well enough, making your product better doesn't really move the needle anymore, so organizations tend to promote...
Marketing/sales/operations people. These people are usually pretty good at understanding what the customer wants, and so have a decent feel for the product. Perhaps innovation goes down, but the customer is getting what they want. Then, once you saturate the market, sales and marketing are no longer going to move the needle, so you promote...
Finance people. They usually don't have a great feel for the product, nor even for what the customer wants, but they understand how to increase revenue and decrease costs, and at this point in the company lifecycle that is what matters most. The risk is that you are in a competitive space where competitors are willing to jump on any product stumble. Often companies get stuck at this stage and stagnate, but usually they are so large and entrenched they keep doing just fine anyway.
One difference I'd point to is that Intel was doing "fine" not committing to future lithography. I put that in quotes because clearly it wasn't a fine plan over the long run, but not spending money on the future is a fine plan in the short/medium term. AMD had been struggling for years and Intel continued to handily beat them. ARM processors weren't a threat at the time either. Intel certainly had the better part of a decade where they weren't committing to future lithography and doing fine.
Before someone says, "but they lost mobile to ARM during that period," lithography isn't why they lost mobile to ARM. Apple was using TSMC's 16nm process in their September 2016 iPhone while Intel started shipping 14nm processors 2 years earlier. Mobile chose ARM when Intel wasn't behind on lithography.
With Google Fiber, not choosing had immediate repercussions. With Intel, the repercussions took the better part of a decade to manifest. Google just decided it didn't really care about the home internet business. No one at Google could say "yea, we're not rolling out wired or wireless home internet and the business is booming." Intel didn't decide that they were exiting the processor business, but their processor business was doing "fine" without this decision being made. Intel could say, "we aren't investing in future lithography and the business is booming anyway. Maybe future lithography is just a big waste of money."
You're correct that not choosing means you lose. However, sometimes it isn't obvious for a while. Google Fiber's lack of decision had obvious, immediate results and you couldn't delude yourself otherwise. Intel could delude itself. Execs could write reports about how they were still ahead of the competition (they were) and how they weren't wasting money on unproven technology. Fast forward a decade and they're not fine, but it took a while for that to manifest.
Plus, if Apple hadn't helped push TSMC forward so much, would Intel be in quite as bad a situation? Qualcomm has been happy to just package ARM reference designs together with their modems, and it's mainly their poor performance compared to Apple that has pushed them forward. While Android users on HN might be buying Snapdragon 8-series processors, the vast majority of Android devices aren't using high-end ARM cores. The vast majority of the market for high-end ARM cores is Apple. If Apple hadn't made a long-term commitment to TSMC for 2016-2021, would TSMC have pushed as hard on EUV? It's a lot easier to invest when you have a guaranteed customer, like TSMC had in Apple.
If Apple hadn't pushed performance so strongly, would we have seen as much EUV investment as quickly? It's unlikely it would be pushed by the Android ecosystem where most processors are low-end. TSMC serving Apple meant EUV investment. Once Apple was shipping extremely fast processors, Qualcomm and others wanted to be able to get to at least 50-70% of what Apple was offering (so there were more buyers). Once it was available, AMD could use it to push hard against Intel. Once there were more buyers, Samsung wanted to make sure that its fabrication business was at least in the ballpark.
But if Apple hadn't been focused on taking a strong performance lead, it might have been another 5+ years before Intel's lack of decision came back to haunt it. If it had taken 12-17 years instead of 7-9 years for others to put the screws to Intel, Intel would have basked in those profits for a long time while its execs were touted as having amazing insight. Of course, you're right: eventually Intel would have gotten its comeuppance. But Intel could have pretended it didn't need to invest in the future for a long time. By contrast, when Google didn't make a decision on wireless vs. wired, that was just the end of expanding that business.
Is Intel able to do this mostly because of heavy government subsidies, plus ASML being "nudged" to give Intel higher priority in the queue for its latest lithography machines?
That's awesome, but their current most advanced process, Intel 4, is still a 7nm-class node. So it's unlikely that this is going to happen on the timeline they promised.
I don't think they were decrying the use of Angstroms, but rather the continued use of size-based terminology that has become decoupled from the actual size of any physical features.
I think they meant fake as in "doesn't relate to any size of the transistors": the gate and metal pitches are e.g. 40 nm and 54 nm respectively for Intel's "7nm" node [0], and even the fin pitch is 34 nm, almost 5 times bigger than the marketing term would like to imply.
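Just to make that ratio explicit, a quick calculation from the pitch numbers cited above (the figures are the ones in that comment, not independently verified here):

    # Compare the marketing node name ("7nm") with the reported physical pitches.
    marketing_nm = 7
    pitches_nm = {"fin pitch": 34, "gate pitch": 40, "metal pitch": 54}

    for feature, pitch in pitches_nm.items():
        print(f"{feature}: {pitch} nm = {pitch / marketing_nm:.1f}x the marketing number")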
It does not refer to the physical size of any element or feature of the chips.
It's the marketing department's claim about what you'd have had to do to achieve "equivalent performance" using geometries (and probably other things) that are no longer used. Or to put it another way it's completely untethered from reality in every way.
I posted this link as a comment below but it might be worth linking from the top what I have always thought is a very good article, "The Node is Nonsense":
Am I the only one that's really bothered by Intel using A for ångström in their marketing? It should be Å. A is a completely different letter and the unit symbol for ampere.
Everything supports Unicode these days, so the only reason they don't use the correct letter is laziness.
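For what it's worth, the distinction is easy to see programmatically; a small Python check (just an illustration) of the three relevant code points:

    import unicodedata

    # "A" (the Latin letter, also the ampere symbol), "Å" (U+00C5, used for ångström)
    # and the dedicated ANGSTROM SIGN (U+212B) are three distinct code points;
    # NFC normalization folds U+212B back into U+00C5.
    for ch in ("A", "\u00c5", "\u212b"):
        print(f"U+{ord(ch):04X}  {unicodedata.name(ch)}")

    print(unicodedata.normalize("NFC", "\u212b") == "\u00c5")  # True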
> Esfarjani also shared details about Intel's globe-spanning operations. In addition to its existing facilities, the company plans to invest $100 billion over the next five years on expansions and new production sites.
Isn't the PC market shrinking? Or is Intel expecting server market growth to more than make up for it?
This horse is dead and disfigured, but here we go again.
Intel 4 (the rebranded Intel "7nm") is roughly equivalent to TSMC "3nm" in transistor density. By the transistor-density metric, Intel is not way behind. There is no reason right now to think they can't deliver 10A/1nm in 2027.
It's worth noting that Intel 4 only barely exists (mobile only, and slower than Intel 7). Intel is still saying that they will launch 20A by the end of the year and 18A next year, but I will be pretty surprised if that timeline sticks.
While Meteor Lake is likely to be the only product ever made with "Intel 4", the improved variant of "Intel 4" rebranded as "Intel 3" is expected to launch later this year in several kinds of server CPUs, i.e. Sierra Forest, Granite Rapids and Granite Rapids D.
Moreover, "Intel 3" will also be used for some I/O or cache memory tiles even in some later Intel CPUs where the computing tiles will be made with denser processes like "Intel 18A".
For now, Meteor Lake with "Intel 4" is the first Intel product made with a process developed after the change of CEO. Whether Pat Gelsinger has succeeded in restoring Intel's competitiveness remains to be seen later this year, first based on whether the planned server CPUs launch successfully, and then, by the end of the year, on the first CPUs with big cores using the first new microarchitecture since 2021 (Arrow Lake, Arrow Lake S and Lunar Lake).
Yeah, but the most advanced chips in Meteor Lake are still made by TSMC: three of them (GPU, SoC and IOE, at N5/N6). Granted, there are lots of comparisons saying TSMC N5 is equivalent to Intel 7, etc., but when you actually come down to real-life testing of power and
Intel 4 and 3 are transitional processes, but 75% of customers opt for 18A; that's why they pushed for it, and it may appear in volume sooner than the older nodes, as the article noted:
> Capacity for the Intel 4 and Intel 3 processes doesn't build as quickly as 20A/18A, but that isn't surprising — the majority of the company's wins for its third-party foundry business have been with the 18A node, which Intel says is according to plan
The 18A process is planned to be ready in the second half of this year, and Intel is sticking to that. 18A chips (Intel processors and GPUs) will appear next year.
Intel 4 and 20A are clearly transitional processes that lack a full suite of transistor libraries and are thus unsuitable for many kinds of chips/chiplets. The lack of interest in Intel 3 may simply be that it sucks, and isn't worth taping out a chip on if 18A is as soon and as good as Intel promises.
It's weird how widespread this notion is, that TSMC has this colossal lead on the rest of the world that no one else can possibly catch up to. Yes, TSMC is ahead at the moment thanks in large part to the massive volume it is getting from cellphone CPUs. But the lead over Samsung and Intel is almost as small as the nanometers that all three companies are dealing with. IBM has gone fabless but still has some of the best chip designers in the world. Etc.
When Intel had the world's leading fabs, driven off massive volume in PC CPUs, I don't remember people being so emphatic about how insurmountable its technological lead over the rest of the world was. Yes, I know the geopolitical issues around TSMC bring it more attention, good and bad. But still.
Oh, there was plenty of guffawing and huffing in the Intel-centered community at the time. Remember when TSMC lagged so far behind that they started launching half nodes just to not be a "full node" behind? What goes around comes around, but for sure people will cheer for winners.