How IBM invented the automated fab (ieee.org)
184 points by jnord 15 days ago | 43 comments



When I interned at IBM, this was a big deal. IBM was really invested in being "vertically" integrated, "from sand to software" as they would say. One wonders what another run at this concept would look like given advances in semiconductor manufacturing.

I have often wondered whether a chip fab that could make 1,000 chips of a given type economically (which is to say, using your custom chip in your system would be less expensive than adopting an off-the-shelf chip) would be a thing. The whole 'tiny tapeout' thing would be a lot more interesting too.


It's not the wafer cost. The masks (the "negatives" for the lithography) are the problem. A mask set (you need multiple exposures for one device) for a modern EUV node costs $20-30 million. That's the limiting factor. You can't get cheaper than that.
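A quick back-of-envelope sketch makes the scale problem concrete. The $25M figure is just the midpoint of the range above; all numbers here are illustrative, not industry quotes:

    # Mask-set amortization per chip (illustrative numbers only)
    mask_set_cost = 25_000_000  # assumed mid-range EUV mask set, USD
    for volume in (1_000, 100_000, 10_000_000):
        per_chip = mask_set_cost / volume
        print(f"{volume:>10,} chips -> ${per_chip:,.2f} mask cost per chip")

At a thousand units the masks alone add $25,000 per chip before a single wafer is processed; at ten million units they add $2.50.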

As a sibling comment notes, multi-product wafers are a theoretical answer. However, since you have process corners (process variation isn't uniform across the wafer), they are infeasible for anything but the cheapest parts.

The real next moonshot in the foundry business would be to lower the respin costs, i.e., the amount of money it costs when your fabbed first silicon doesn't yield or validate functionally in the way you had expected/planned.

If I were the US government (or any other), I'd focus on that. Subsidize the respin cost to zero in the short term, given certain prerequisites for start-ups, and push an all-out Manhattan-project R&D effort to lower the respin cost in the long run.


I get that for state-of-the-art fabs. Those optimize for long runs on big wafers. My question, though, is whether you can find a solution at a different node that favors cost/turnaround at the expense of not scaling.

For example, could one make a 200nm node with conventional UV masks and a limit of, say, 10 layers? Non-mask lithography options? Or, as in the article, 'sub' masks where you step a single-die image across the wafer?


Even a 10x cheaper mask set would still be $2-3 million USD.


> lower the respin costs

You might be interested in structured ASICs, which allow for substantial reuse of masks between different products. At the extreme was a via-only definition product where all interconnects were specified with one mask (and the via masks were among the cheapest to make since they are very uniform).

In regular ASIC development, we've had extra unrouted transistors available to wire in case of a mistake (so hopefully the respin involved just some new metal layers). Techniques like FIB can be used to test fixes to lower the number of respins, too. I'm not sure how much of this was automated to maximize chances of being useful.

https://en.wikipedia.org/wiki/Structured_ASIC_platform


Multibeam Corporation is making "maskless" lithography tools:

https://multibeamcorp.com/applications/#high-value


Masks make sense the same way mega print runs with fixed plates and other high-yield copy operations make sense.

Offhand, I'm not aware of a chip-fab tech analogous to the laser printer: not a literal laser used to print, but a 'low-run, tolerable-quality' approach. The equivalent of a local small print shop with a mid-range printer that uses no masks but still churns out output at high quality and speed.


Isn't electron-beam lithography [0] that sort of thing? Slow, obviously, and I'm not sure about the resolution.

0: https://en.wikipedia.org/wiki/Electron-beam_lithography


It is. Unfortunately it never worked at scale.


Jim Keller is working on a small fab design intended to make small runs economical: Atomic Semi. It hasn't yielded anything yet, but they only started in 2023.

https://atomicsemi.com/


Note also that Sam Zeloof is a co-founder. He's nowhere near as famous as Keller, but he built a small-scale fab in his parents' garage when he was a teenager. I doubt there are many people with more "small runs" street cred than that.


I worked at a company that was vertically integrated. The fab was across the road from where ASICs were designed. The last technology node, I believe, was 130nm on 6" wafers. To go the next step would have cost something like $1B in 1990s dollars. That's a tall order for an in-house fab; if you're going to spend that kind of money the fab had better be doing something all the time. So either you take on foundry work, or you get rid of the fab and farm out your work to a foundry elsewhere.

As far as I know, the same thing happened up the food chain; the company that did make our next ASICs (IBM, Essex Junction Vermont) has spun off that fab as well. So it goes.


Given that there are only three companies in the world even capable of competing on the sand side, I'd say that window of opportunity is shrinking fast.


You can quite easily compete on old nodes, and we will have to when old, paid-off hardware starts dying.


Full vertical integration implies big iron, or something like it. Nobody is going to buy TI 45nm today when 3nm exists and runs circles around it, no matter how good the software is.

Old nodes are fine for a lot of industrial applications, but they don't care about the full stack software as much.

The best positioned seem to be those contracting out the sand and doing the rest: Nvidia, AMD, and all the new AI players.

Intel probably has the best chance at full vertical, but from everything I read it seems to be suffering from the same bloat IBM suffered.


Isn't Intel currently trying to spin out their foundry as a separate business?


Unsure, but it wouldn't surprise me. It's the AMD/GloFo split all over again. I guess the difference is they took billions in CHIPS Act subsidies, so they probably need to pretend to compete for a while.


In case anyone was wondering, "AMD GloFo" is explained here: https://en.wikipedia.org/wiki/GlobalFoundries


> your custom chip in your system was less expensive than adopting an off the shelf chip

That's a tall order: off the shelf is rarely very expensive for the functionality you get (that is, compared to design cost).

And that's not even necessary. If you could get a chip that's much more expensive but has specific advantages, you'd already have a business. See FPGAs.


I've always found it amusing that somehow custom silicon makes economic sense in the absolute cheapest products.

You look inside a child's toy? A musical greeting card? A remote control? A $5 multimeter? ASIC. Often in the form of a black epoxy blob 'chip on board'.

You look inside a $30,000 industrial robot arm? No no no, we couldn't possibly afford custom silicon, FPGAs are the only option.


First of all, the epoxy blob may well contain a mostly off-the-shelf SoC, maybe lightly customized (mask-programmed ROM, choice of peripherals). Not a fully custom design.

Second, volume! If you're going to make a million of something, the NRE of a relatively low-tech chip isn't so bad.

Also, FPGAs can be reprogrammed in the field if necessary.


Volume dominates here for the same reasons a standard-sized screw or bolt gets used anywhere possible (if a fastener is even needed at all; the analogy falls apart for seam-welded plastics).

The toys are also designed as make-once-and-trash, since that's their sales model. The environmental costs of the resources aren't correctly factored into the overall price. That super-expensive robot arm is a tool that needs to work; its economic value is in continued production. Modular components that can be replaced are more valued here, to make warranty repairs and out-of-warranty field service possible. The customers literally demand and pay for that functionality.


The robot arm isn't running off a battery and most of them are large enough that board size isn't a constraint, so why add the complexity of custom silicon if it isn't needed? Using off-the-shelf FPGAs also makes repairs easier.


The context here was making 1000 chips. Custom. You amortize (decide whether it's worth it) based on the 1000 chips.

So yeah, if you are going to churn out a million greeting cards, you might do a very simple and relatively inexpensive custom design (say, a modest number of standard cells of plain logic plus a ROM of the exact size you need, slow, on a conservative node), get a million slightly smaller custom chips, and come out ahead.

If you are going to ship a thousand robot arms, you might need a high-performance chip that is costly to design, and you would only amortize that design cost over 1,000 chips. So no, you may or may not be able to afford that design; that's not the issue. The issue is that it's not worth it. If it's an expensive military robot arm that launches missiles, you stuff 4-5 FPGAs in there. If it's a cheap robot arm, you stuff a Raspberry Pi in there. Either way, no custom chip for you, and it's not a question of "couldn't possibly afford custom silicon".
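To make the amortization argument concrete, here's a tiny break-even sketch. The NRE and unit costs are made-up placeholders, not industry figures:

    # Illustrative break-even: custom ASIC vs. off-the-shelf part (all numbers assumed)
    nre = 2_000_000      # assumed design + mask NRE for a high-performance chip, USD
    custom_unit = 20.0   # assumed per-unit cost of the custom chip, USD
    cots_unit = 150.0    # assumed per-unit cost of the FPGA / off-the-shelf part, USD
    break_even = nre / (cots_unit - custom_unit)
    print(f"Break-even volume: {break_even:,.0f} units")  # ~15,385 units

With these placeholder figures the custom part only pays off past roughly 15,000 units, which is why a thousand robot arms get FPGAs and a million greeting cards get an epoxy blob.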


Multi-product wafers are a thing, especially on older nodes. They're even accessible to hobbyists.


Node size and price per sq. mm? The last time I looked at what it would cost to have a partial wafer, it was > $25,000 (of which slightly more than half was NRE charges), but I would love to find something "hobbyist accessible"!


Tiny Tapeout is $150 for a chip with a small design (200 x 100 µm) on Sky130.


My understanding is that a chip design for one fab isn’t portable to another.

How can a hobbyist design just be slapped onto a wafer (with others' designs) and call it a day?


Indeed, you develop using a fab-specific PDK. But if you share a PDK (Tower Semi's, for instance), your design can share shuttle wafers with designs from multiple companies. This is done for old nodes and low-volume or R&D parts.


IBM Microelectronics was a fairly significant fab and semiconductor player until the early 2010s.

Remember PowerPC? That was IBM, and it was used everywhere from iMacs to Xbox 360s to ThinkPads.

Sadly, fabrication became commoditized through outsourcing to Taiwan and South Korea, which gave unfair advantages to their state-adjacent firms like TSMC and Samsung.


Then it was given away to GlobalFoundries, which ran out of cash trying to take IBM's 7nm process into high-volume manufacturing and gave up on being a leading-edge fab. IBM sued GF for this.

IIRC, the GF 7nm process was rumoured to have the best specs vs. Intel, TSMC and Samsung.


They also tried to do this with a free-electron laser as the EUV source.

A Japanese lab is continuing those efforts, at least.


Just a nit, but PowerPCs were not used in Thinkpads except for very limited production runs in the mid-1990s. The problem was that IBM didn't have an OS for the platform. They had AIX, but it didn't make sense on a laptop. The idea was that OS/2 would provide a PC desktop of their own, but it barely shipped for PowerPC before IBM pulled the plug.

However, IBM did design and build x86 chips in the 1990s, and these were used in ThinkPads.


https://en.wikipedia.org/wiki/IBM_ThinkPad_Power_Series lists 8 models and says 1994-98, which is a long time ago but certainly isn't nothing, and says they ran AIX, Solaris, and Windows NT.


These machines cost $12,000+ in 1990s dollars.

I think it's probably more accurate to say that versions of AIX, Solaris, and NT (and, I think, a beta of OS/2) technically existed briefly for some models. While some commercial software might have been ported, I doubt it was ever officially supported. Except perhaps some AIX software? I assume it was binary compatible with PowerPC AIX.


Checking https://en.wikipedia.org/wiki/SPARCstation_20 implies that that's a perfectly reasonable price for what it was? The more interesting question is (relative) sales volume, I think.


Funny to hear there were ~8 models because my impression has always been that they sold ~8 of them total.


> but PowerPCs were not used in Thinkpads except for very limited production runs in the mid-1990s. The problem was that IBM didn't have an OS for the platform

Doh. You're right!

> However, IBM did design and build x86 chips in the 1990s, and these were used in ThinkPads.

Yep! Those were fabbed by IBM Microelectronics, along with a lot of server-SKU x86 chips back in the 2000s.


I believe IBM also designed/fabbed the CPUs for the Nintendo GameCube, Wii, and Wii U.


The degree to which automation is now essential is astounding. Every time a human is on the cleanroom floor, you are burning dollars in defects. For a process node at 3nm and beyond, I don't think you could achieve any yield at all if the automation rate fell even a few percent.


In the early '80s I visited an IBM plant near Paris, France, and they were customizing gate arrays by direct write on the wafer (I think ion implantation), making the chips for mainframes on demand. From what I saw, they were doing chip design work locally. The only non-IBM equipment I saw at the plant was some very fast end-of-line test machines.



What a fascinating read. Thanks for sharing.


+1



