
By using several small chips instead of a big monolithic one, it's possible to reduce costs in 2 ways:

1) the yield is better for a small die. For a given defect density, a big chip has a higher probability of containing a defect than a small one. Basic example: you use 4 chips instead of one, and a defect that would have killed the big chip now only kills one of the four small chips. It's more subtle than this; there are simulators on the web to see the impact of size on cost, for those interested;

2) parts of the chip can use cheaper nodes. For example, the I/Os can not only use less advanced and cheaper nodes, but those nodes often have better support for analog IPs.
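The yield effect in (1) can be sketched with a simple Poisson yield model, Y = exp(-D*A). The defect density and die areas below are assumed, illustrative numbers, not figures from the comment:

```python
import math

# Poisson yield model: a die of area A (mm^2) with defect density
# D (defects/mm^2) works with probability exp(-D * A).
D = 0.002  # defects per mm^2 (assumed, illustrative)

big_area = 400.0    # one monolithic 400 mm^2 die (assumed)
small_area = 100.0  # four 100 mm^2 chiplets covering the same area

yield_big = math.exp(-D * big_area)
yield_small = math.exp(-D * small_area)

# Silicon area needed per good monolithic chip vs per good chiplet,
# assuming known-good-die testing before assembly (and ignoring the
# packaging/interconnect overhead the chiplet approach pays instead).
area_per_good_big = big_area / yield_big
area_per_good_set = 4 * small_area / yield_small

print(f"monolithic yield: {yield_big:.1%}")
print(f"chiplet yield:    {yield_small:.1%}")
print(f"mm^2 of silicon per good monolithic chip: {area_per_good_big:.0f}")
print(f"mm^2 of silicon per good 4-chiplet set:   {area_per_good_set:.0f}")
```

With these numbers the monolithic die yields about 45% while each chiplet yields about 82%, so the chiplet approach wastes far less silicon per working product.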

On the flip side, communications that were internal to the big monolithic die must now cross the boundaries between the small dies. And communication is expensive: you would certainly not want to handle it through a PCB. Instead, short-range local interconnects are used that are much more power efficient than a PCB interconnect (though not as good as in-die wiring). These require sophisticated packaging, which adds to the cost. Still, for complex chips the net effect is positive; see what AMD did (with Intel now following).




Couldn’t they use fiber optic connections to effectively remove the overhead of the interconnect communication?

I saw a demo from Kodak once where they printed a fiber-optic-backed motherboard, and it was lightning fast even as they increased the distance. No clue what the heck happened to that tech, but I do recall one of the folks giving a lengthy explanation of how fiber optics could replace some of the metal used in CPUs because they could make microscopic glass

Edit: turns out I was onto something with this; there is work being done on this exact problem [0]. I still wonder WTF happened to that Kodak tech though

[0]: https://spectrum.ieee.org/amp/optical-interconnects-26589434...


At the distances (millimeters) involved, optical is slower, more expensive, and consumes more power than electrical signaling.
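The distance point can be sketched numerically: over millimeters, the signal's flight time is tens of picoseconds either way, so the electro-optical conversion dominates optical latency. All figures below are illustrative assumptions, not measurements:

```python
# Rough latency comparison at chiplet-to-chiplet distances.
# Assumed propagation speeds: ~0.5c on a package trace, ~0.66c in glass.
# Assumed E/O + O/E (serdes + conversion) overhead: ~1 ns.
C = 3e8        # speed of light, m/s
dist = 5e-3    # 5 mm die-to-die distance (assumed)

t_elec = dist / (0.5 * C)          # electrical flight time
t_opt_flight = dist / (0.66 * C)   # optical flight time
t_conversion = 1e-9                # assumed conversion overhead

print(f"electrical flight time:   {t_elec * 1e12:.0f} ps")
print(f"optical flight time:      {t_opt_flight * 1e12:.0f} ps")
print(f"optical incl. conversion: {(t_opt_flight + t_conversion) * 1e12:.0f} ps")
```

Under these assumptions the conversion overhead is roughly 30x the entire electrical flight time, which is why optical only wins at much longer distances.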


As long as the processing is done with electrons (CMOS), there will be a pretty big hit for converting back and forth to photons. In-package chiplet communication is actually not that inefficient as long as you can put the dies right next to each other (like on a chip); Intel demonstrated this with AIB. With parallel interfaces, you can get to 0.1-0.5 pJ/bit transferred, which is good enough for most uses.
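To put the 0.1-0.5 pJ/bit range in perspective, link power is just energy per bit times bit rate; the 1 Tb/s link bandwidth below is an assumed example figure:

```python
# Back-of-the-envelope: link power (W) = energy per bit (J) * bit rate (bit/s).
def link_power_watts(pj_per_bit: float, tbps: float) -> float:
    """Power drawn by a link at a given pJ/bit efficiency and Tb/s rate."""
    return pj_per_bit * 1e-12 * tbps * 1e12

# Sweep the 0.1-0.5 pJ/bit range from the comment at an assumed 1 Tb/s.
for e in (0.1, 0.3, 0.5):
    print(f"{e} pJ/bit at 1 Tb/s -> {link_power_watts(e, 1.0):.2f} W")
```

So even a full terabit per second of die-to-die traffic costs only a fraction of a watt at these efficiencies, which is tiny next to a CPU's overall power budget.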


Am I right that chiplet packages tend not to last as long as single chips? I have a friend who repairs computers, and he claims that "combines" (his word for chiplet packages) tend to break the chip-to-substrate connections, which he cannot repair; all he can do is replace the whole BGA package.


One chiplet probably can't be sanely replaced if it dies, but equally we couldn't really cut out and replace part of a single die either. So that seems like a wash.

I could believe they're more vulnerable to mechanical damage. It also seems possible that mechanical stress from thermal expansion is more of a problem. I suppose we won't really know for a while yet.


This is what I would expect. With thermal conduction differing across the die boundaries, disparities in heat flow to the heat sink, and the possibility of one chiplet running hotter than the others, there might be too much stress on the joined area. This also assumes the package is not experiencing any additional mechanical stress from motion or vibration.


Not sure what your friend is referring to, but packages with exposed dies are definitely more fragile than ones with built-in heat spreaders.



