EDPS: When Chips Become 3D Systems and the Challenges of 3DHI

One of the presentations at the recent EDPS was by Cadence's John Park. Unlike most people at Cadence, whose background is in either IC design or PCB design, John's background is in packaging. Packaging used to be a comparatively sleepy backwater until the last few years, but suddenly it is one of the hottest topics in the semiconductor world. His presentation was about the challenges of 3DHI, or 3D Heterogeneous Integration.

EDPS is the IEEE Electronic Design Process Symposium. For years the meeting was held in Monterey, which was a good location with a great view of the Pacific (and maybe dolphins), but it was comparatively hard to get attendees from Silicon Valley to drive down and pay for a hotel room. So the meeting was moved to SEMI's headquarters in Milpitas. I could only attend the first day and the keynote on the second day. It was quite a week for meetings, with Samsung Foundry Forum on Monday, Samsung SAFE on Tuesday, PCB West on Wednesday, and EDPS on Thursday and Friday.

I have actually covered this topic quite a bit recently, especially since this year's HOT CHIPS had so many examples. See my posts:

The big driver of all of this is economics, but a full analysis requires a lot of pricing information that is not readily available and which has been changing. The most obvious is the cost of a die of a given size, in a given process, and also the cost of different multi-die packaging technologies. It is only in the last few years that 3D packaging technologies have reached a sufficient volume that they seem to have become economic enough for more widespread use. Wafer costs have been going up with each process node, meaning that the balance between doing a large SoC versus putting multiple die into a single package has changed, and continues to change.

John Park's opening slide sums up what seems to be the current situation.

Simply following Moore's Law alone is no longer the best technical and economical path forward.

One of the big drivers of cost is yield, and a big die simply doesn't yield as well as two (or more) small die making up the same area. A single large die has to avoid every defect in its area, whereas with smaller die a defect only kills the die it lands on. The other big driver is the capability to optimize the design by doing different die (such as analog and RF) in different processes, keeping the most advanced node just for the digital logic that can benefit from it. Another challenge is getting enough electronics into devices with a very small form factor, such as smartwatches, which may have space vertically but not horizontally.
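The yield argument above can be sketched with a simple Poisson yield model, Y = exp(-D·A). This is an illustration under assumed numbers (real fabs use more elaborate models such as Murphy or negative binomial, and real defect densities and die areas are not public):

```python
import math

def die_yield(area_cm2, defect_density_per_cm2):
    """Fraction of die with zero defects, simple Poisson model."""
    return math.exp(-defect_density_per_cm2 * area_cm2)

D = 0.1      # defects per cm^2 (assumed for illustration)
big = 8.0    # one large 8 cm^2 SoC die
small = 4.0  # each of two 4 cm^2 chiplets covering the same area

y_big = die_yield(big, D)
y_small = die_yield(small, D)

print(f"Yield of one {big} cm^2 die:       {y_big:.1%}")
print(f"Yield of each {small} cm^2 chiplet: {y_small:.1%}")
# The chiplets yield better per unit of wafer area because a defect
# only kills the 4 cm^2 die it lands on, not the whole 8 cm^2.
```

With these made-up numbers, the single big die yields about 45% while each half-size chiplet yields about 67%, which is the economic pressure the rest of this post is about.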

The transition from systems-on-chip to heterogeneous integration actually proceeds in two directions. One is moving from lots of packages on a PCB to putting the die in those packages into a single multi-chiplet design. The advantages of this approach are a smaller footprint, a simpler PCB, higher bandwidth I/O, and lower power.

The other direction is taking a large SoC and breaking it up into separate die that are manufactured separately before being brought together in a single package. This approach has lower NRE costs, a shorter time to market, the capability to do designs that are larger than the reticle size, and a more flexible IP use model.

This is not new, of course. We've had various forms of multi-chip modules (MCMs) for over 30 years. But they were expensive and typically only used for specialized purposes like building radar. Now there is a richer portfolio of technologies (see the diagram above). On the left are packaging technologies that can be used with "normal" die, meaning die without any through-silicon-vias (TSVs). Over on the right are silicon stacking technologies, using direct copper-to-copper bonding without any bumps. In the center are mixtures with die on a 2.5D interposer, or the ultra-high-density packaging known as fan-out wafer level packaging, or FOWLP. There is a lot of difference in how designs need to be done between the left and right of that diagram.

3D packaging (as contrasted with silicon stacking) uses solder-based connections (bumps), each die is designed independently, and signaling is done with I/O buffers, in a similar manner to packages on a PCB. On the other hand, silicon stacking has solder-free connections, and the design is a single RTL that is partitioned during physical implementation.

One of John's messages that he has been preaching for some time is the need for assembly design kits or ADKs. This is the equivalent of the PDKs that we are all familiar with for integrated circuit design. Historically, OSATs have either not had or been reluctant to disclose all the data that package designers require. But foundries have had this problem for years, where PDKs need to be made available but much of the raw data is confidential. If ever a "retail" market for chiplets is going to happen, we need a lot more standardization (such as UCIe or OpenHBI). Today, apart from some use of HBM (high bandwidth memory), multi-chiplet designs are created by a single company designing all the chiplets to either work together in a single design or, perhaps, allow chiplets to be configured in a number of different designs with different features (such as performance, price, and so forth).

On the design side, meaning designing the whole system with several or many die, the biggest requirement is to have a common database for the entire 3D-IC: chips, chiplets, tiles, packaging, PCB, and perhaps even connectors and backplanes. These designs can obviously be very large, with billions or even tens of billions of instances (with bigger designs coming all the time, of course). John kept his presentation generic, instead of rolling off all the Cadence product names, and used the above diagram that shows the huge number of tools that can be involved in doing one of these designs.

But I have the secret decoder ring, in particular that the big orange box at the top is OrbitIO. For a more detailed look at OrbitIO, see my post Brian Jackson Introduces a Mystery Product at IMAPS (Shh, It's OrbitIO). Above is the diagram with Cadence product names.

One other challenge with putting multiple die in a package goes under the name "known good die" or KGD. What this means is that the chiplets need to be extensively tested before assembly, so presumably at wafer sort, to ensure that they pass. If you skimp a little on testing with a single die in a package, then if a bad die slips through, you've lost the cost of the package. You've also lost the cost of the die, but since it was bad, it had no value anyway. Once you put multiple die in a package, that logic doesn't work. If there are four die, and one bad die slips through, and you only find out at final test, then not only are you discarding one bad die, you are discarding three good die (I'm assuming there is no way to open up the package and recover the good die economically, which might not always be true).
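The known-good-die arithmetic above can be made concrete in a few lines of Python. The escape rate and costs here are made-up numbers purely for illustration, and the model ignores the rare case of more than one bad die escaping into the same package:

```python
def expected_scrap_cost(n_die, escape_rate, die_cost, pkg_cost):
    """Expected cost scrapped per assembled package when an escaped
    bad die is only caught at final test (approximation: at most one
    bad die per package)."""
    p_fail = 1 - (1 - escape_rate) ** n_die
    # On a failure we scrap the package plus (n_die - 1) good die;
    # the bad die itself had no value anyway.
    return p_fail * (pkg_cost + (n_die - 1) * die_cost)

# Single die in a package: a slipped-through bad die only costs the package.
single = expected_scrap_cost(1, 0.02, 50.0, 20.0)

# Four die in a package: one bad die drags three good die down with it.
quad = expected_scrap_cost(4, 0.02, 50.0, 20.0)

print(f"Expected scrap per package, 1 die: ${single:.2f}")
print(f"Expected scrap per package, 4 die: ${quad:.2f}")
```

Even with only a 2% test escape rate, the four-die package scraps far more value per failure, which is why known good die testing gets much more rigorous (and expensive) as die counts go up.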

John wrapped up with a slide on what to look for in a next-generation 3D integration platform.
