Indeed. And they've got designs using at least two different processes (one for compute, another for IO), but the article has examples using 4 or 5! I just can't see anyone else going that far.
Well, AMD has a few "3D V-cache" variants with an extra cache/memory die, so that's 3 chiplet types today. And these chips do not have an integrated GPU, which would make sense to put on a separate die (which is what Intel announced).
I wouldn't be surprised if, after a period of evolution and leapfrogging, they end up in pretty similar spots. They have similar constraints and tools, after all.
That's cool. A quick search didn't turn up anything special/different about the silicon for that chiplet, but it would absolutely make sense for them to tweak those wafers to be less leaky and/or denser.
I don't know either whether the 3D cache tile uses a different, optimized process from the CPU one.
But IMHO chiplets are first about cost optimization, and only second about process optimization.
The first cost optimization is to leverage the better yield of a small die compared to a monolithic one. For a given defect density, N small chiplets will be cheaper than one monolithic die with the same number of cores: a single fault kills the whole monolithic die, whereas it kills only one chiplet. An SoC is then assembled only from good chiplets ("known good die", or KGD, is the term of art). That's what drove AMD to chiplets in the first place.
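As a rough back-of-the-envelope illustration, here's a minimal Python sketch using the classic Poisson yield model Y = exp(-D*A); the defect density and die areas are made-up numbers for illustration, not any foundry's actual figures:

    import math

    def poisson_yield(defect_density_per_cm2, die_area_cm2):
        # Classic Poisson yield model: fraction of dies with zero defects.
        return math.exp(-defect_density_per_cm2 * die_area_cm2)

    # Hypothetical numbers, for illustration only.
    D = 0.2          # defects per cm^2
    big_die = 6.0    # cm^2, monolithic design
    chiplet = 0.75   # cm^2, one of 8 chiplets covering the same total area

    y_mono = poisson_yield(D, big_die)
    y_chip = poisson_yield(D, chiplet)

    # Good silicon per wafer is proportional to yield, so the cost per
    # good die scales as 1/yield (ignoring packaging and test costs).
    print(f"Monolithic yield: {y_mono:.1%}")   # ~30%
    print(f"Chiplet yield:    {y_chip:.1%}")   # ~86%
    print(f"Silicon cost ratio (mono vs chiplets): {y_chip / y_mono:.2f}x")

With these (invented) numbers, the same silicon area yields roughly 2.9x more good dies when split into chiplets, before accounting for the extra packaging and test costs.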
The process optimization can also be about cost savings more than performance: if an older node is acceptable, it's cheaper.
Then there are the design savings, and this shows up in Intel's chiplets presentation: by developing, say, M CPU and N GPU chiplet variants, one develops M+N tiles but, by mixing and matching, can offer M×N SoCs. One has to add the chiplet interconnection complexity (extra work vs a monolithic design), but this may still save on design development. And design development costs increase a lot with each new node.
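A toy sketch of that mix-and-match arithmetic (the tile names are hypothetical, it's just counting combinations):

    from itertools import product

    # Hypothetical tile catalogs: develop M + N tile designs...
    cpu_tiles = ["4-core", "8-core", "16-core"]   # M = 3 CPU tile designs
    gpu_tiles = ["small-gpu", "big-gpu"]          # N = 2 GPU tile designs

    # ...but mix and match them into M x N distinct SoCs.
    skus = [f"{c} + {g}" for c, g in product(cpu_tiles, gpu_tiles)]

    print(f"Tile designs developed: {len(cpu_tiles) + len(gpu_tiles)}")  # 5
    print(f"SoC products offered:   {len(skus)}")                       # 6
    for sku in skus:
        print(" ", sku)

So 5 tile designs fund 6 products here, and the gap widens quickly as M and N grow.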
Chiplets may also become a way to extend the life of a design: at some point we may see chips embedding a mix of "old" and "new" chiplets, which would stretch the useful life of a chiplet design and give more time to amortize the increasing design costs.
So I guess a good way to see chiplets is as a cost-management tool first, in the face of ever-increasing design costs on advanced nodes, with the nice side effect that they also open up the possibility of optimizing the process per tile, if needed.
With a chiplet standard and the possibility of treating chiplets as silicon IP today, a chiplet's target market may be extended too. For example, Intel as a fab could sell its chiplets (or part of its catalog) to its fab customers. This would be yet another way to absorb increasing development costs, but we're not there yet.