eSilicon recently released a white paper detailing its experiences with chiplets and its thoughts on their future. The author is Dr. Carlos Macián. I have also covered a presentation Carlos gave recently at the AI Hardware Summit; he is well-spoken and quite knowledgeable. To get the white paper, visit the white paper page on the eSilicon website, where you can access the many white papers the company has developed.
Chiplets are more than an interesting concept. Many large companies and start-ups are investing in this approach. Even the US government, in the form of the Defense Advanced Research Projects Agency (DARPA), is trying to develop a useful methodology for it. So, what is a chiplet?
When you design with chiplets, you put multiple dies into the same package. This by itself is not a new technique; we have had multichip modules (MCMs) for quite some time. But MCM designs were usually reserved for high-end, somewhat expensive products. Today’s chiplet market is not about stacking a big memory die on top of a big processor die. It is about standardizing the connection method used to place multiple dies on a substrate to build a complete system. The problem is that there is no standard specification for chiplets.
One benefit of a chiplet approach is that each chiplet can be developed in a different technology node, and of course they can come from different manufacturers. This would seem more efficient than implementing everything in one very fast, expensive process when not all of the design needs to be at that node. Analog or RF portions of the design may well be best suited to 28nm or larger nodes, and slower portions may be just fine at 90nm. But these chiplets have to “plug in” to the substrate that connects them. To make an effective market out of this, you need a standardized “socket.”
The appropriate socket will depend on the target market. Low-end applications could plug in multiple sensors, small processors, small radios, and a bit of memory to make IoT devices. These could be built using a BGA model, though the pitch and electrical interfaces would need some standardization; companies such as zGlue are already trying to build a design environment around this approach. For higher-end applications, you could use faster chip-to-chip interfaces such as those from NVIDIA, Intel, or eSilicon, along with 2.5D/3D interconnects and other approaches that bring memory closer to the processors, creating a huge benefit. This flavor of chiplets is a good method for some designs, but if you want to be a chiplet provider, how do you standardize your chiplet products across the different vendor technologies? Then there is DARPA, which might be able to build a solution that is less dependent on what is best from a cost perspective. You can read more about DARPA’s Common Heterogeneous Integration and IP Reuse Strategies (CHIPS) program here.
I think Carlos’ white paper provides the answer for the sweet spot of this market. I don’t expect an off-the-shelf market to develop anytime soon for high-end applications. But data center, machine learning, domain-specific processors, and other high-performance, high-efficiency solutions are needed right now. Fortunately, this technology is also available now. Grab the eSilicon white paper here.