GSA 3DIC


by Paul McLellan on 04-10-2014 at 6:27 pm

At the GSA Silicon Summit this afternoon there was a discussion of 3D IC and 2.5D IC. The session was moderated by Javier DeLaCruz of eSilicon and the panelists were:

  • Calvin Cheung of ASE (an OSAT)
  • Gil Levy of OptimalTest (a test house)
  • Bob Patti of Tezzaron (semiconductor company specializing in TSV-based designs)
  • Riko Radojcic of Qualcomm (you don’t need me to tell you what they do)
  • Arif Rahman of Altera (FPGAs, working with Intel on 3D apparently)
  • Brandon Wang of Cadence (where he is director of 3D IC solutions)

So what are the success stories? The biggest is that image sensors are all 3D ICs; your phone contains one or two in its cameras. Other sensors are also being made in 3D, but in lower volume. One reason image sensors came first is that they don't have significant power (and therefore thermal) issues.

Coming next is memory on logic: think of the Micron Hybrid Memory Cube, but where the logic die implements the design's real functionality rather than just a memory controller. The big issue is thermal: all the heat generated on the logic die has to get out through the memory die. But going forward, the power of this approach should be lower than the standard approach, where almost all the power goes into the DDRx interfaces (and that roadmap doesn't extend much further). So lots of work is underway.
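The interface-power argument above can be made concrete with a back-of-the-envelope calculation. The bandwidth and energy-per-bit figures below are illustrative assumptions in the ranges commonly quoted at the time, not measured values for any specific product:

```python
# Rough comparison of off-chip DDRx interface power versus a TSV-based
# memory-on-logic interface. All numbers are ILLUSTRATIVE ASSUMPTIONS.

def interface_power_watts(bandwidth_gbps: float, energy_pj_per_bit: float) -> float:
    """Power = bandwidth (bits/s) * energy per bit (J/bit)."""
    bits_per_second = bandwidth_gbps * 1e9
    joules_per_bit = energy_pj_per_bit * 1e-12
    return bits_per_second * joules_per_bit

BANDWIDTH_GBPS = 128       # assumed memory bandwidth, gigabits per second
DDR_PJ_PER_BIT = 25.0      # assumed off-chip DDRx I/O energy per bit
TSV_PJ_PER_BIT = 3.0       # assumed stacked TSV I/O energy per bit

print(f"DDRx interface: {interface_power_watts(BANDWIDTH_GBPS, DDR_PJ_PER_BIT):.2f} W")
print(f"TSV interface:  {interface_power_watts(BANDWIDTH_GBPS, TSV_PJ_PER_BIT):.2f} W")
```

Under these assumptions the stacked interface burns several watts less at the same bandwidth, which is the power saving the panel was pointing at.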

But others were not so sure. One problem with memory on logic is that the memory and the logic die come from different manufacturers, which leads to technical issues and to business issues over who is responsible for what. Who thins the die? Who tests the die? Who is responsible for yield loss? Compare that with sensors on logic, where the same company owns both die.

In the EDA world, sensors on logic is not very demanding. Memory and FPGA designs (Xilinx is shipping 3D parts) are so regular that it is feasible to do a lot by hand. The cost issues are also much less severe: Xilinx makes parts that sell for thousands of dollars and simply would not yield as a single huge die, so 3D actually saves them money.

The big challenge is the true 3D system. The main driver there is connectivity. There may be thousands of signals in the sort of system Qualcomm, for example, would like to use 3D for. Tezzaron has one design with 12,000 3D connections.
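A quick sanity check shows why connection counts in the thousands are practical with TSVs: even a generous per-TSV footprint adds little silicon area. The pitch and die size below are illustrative assumptions, not Tezzaron's actual numbers:

```python
# Area cost of a large TSV array. Pitch and die size are ILLUSTRATIVE
# ASSUMPTIONS; only the 12,000 connection count comes from the article.

TSV_COUNT = 12_000     # connection count mentioned for the Tezzaron design
TSV_PITCH_UM = 10.0    # assumed TSV pitch including keep-out zone (micrometres)
DIE_AREA_MM2 = 50.0    # assumed logic die area

tsv_area_mm2 = TSV_COUNT * (TSV_PITCH_UM * 1e-3) ** 2   # convert um to mm
overhead_pct = 100.0 * tsv_area_mm2 / DIE_AREA_MM2
print(f"TSV array area: {tsv_area_mm2:.2f} mm^2 ({overhead_pct:.1f}% of die)")
```

Even 12,000 connections cost only a percent or two of the die under these assumptions, which is why TSVs open up connectivity that bond wires and package balls never could.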

The big problem, though, is cost. Calvin: "If volume goes up, costs will come down." Riko: "If costs come down, 3D chips will be in every phone."

Another issue is test. You really only want to build 3D stacks from "known good die," and you have to be able to test the stacks afterwards. This really requires BIST and self-repair after assembly to avoid too much yield loss.
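The known-good-die requirement falls out of simple yield arithmetic: to first order, the yield of a stack is the product of the yields of its die, so losses from untested die compound quickly. The yield numbers here are illustrative assumptions:

```python
# Why "known good die" matters: stack yield is (to first order) the product
# of the die yields, ignoring assembly losses. Numbers are ILLUSTRATIVE.

def stack_yield(die_yield: float, num_die: int) -> float:
    """First-order yield of a stack of identical, independently yielding die."""
    return die_yield ** num_die

# Stacking four untested die that each yield 95%:
print(f"{stack_yield(0.95, 4):.3f}")   # 0.815 -- nearly 1 in 5 stacks is scrap
```

Testing before stacking (plus BIST and self-repair afterwards) is what keeps that compounding from destroying the economics.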

So there are clearly some short-term issues of who owns yield loss. But longer term another issue is who does the R&D. Perhaps a consortium (modeled on Sematech) is needed to drive a roadmap that is out 5 or more years and to avoid building a cool solution that nobody wants.

One lesson learned: the system needs to be architected for 3D from the beginning to take advantage of it. This requires partitioning the design onto multiple die. But there is a major tool problem: there is no pathfinding tool for early-stage exploration, doing "what if" analysis across thermal, transistors near TSVs, reliability and other issues. Nobody is creating such a tool either, since it is unclear whether more than a handful of customers would ever use it. There is certainly not going to be any automatic tool in the foreseeable future that reads in one design and spits out multiple optimized chips. Today the limit of the tools is being able to read and understand more than one design at once. That allows design to be done semi-manually, but it certainly increases the risk of re-spins.
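To make the missing "pathfinding" capability concrete, here is a toy sketch of the kind of what-if exploration such a tool would automate: scoring candidate partitions against a crude cost model. The candidate names, numbers, and the scoring model itself are all hypothetical:

```python
# Toy what-if pathfinding sketch: rank candidate partitions by a crude cost
# model. Candidates, weights, and the model are all HYPOTHETICAL -- a real
# pathfinding tool would model thermal, TSV stress, reliability, and more.

candidates = {
    # name: (cross-die signals, power of the hottest die in watts)
    "memory-on-logic": (1200, 4.0),
    "logic-on-logic":  (8000, 2.5),
    "single-die":      (0,    6.0),
}

TSV_COST_PER_SIGNAL = 0.001   # assumed relative cost per 3D connection
THERMAL_COST_PER_W = 0.5      # assumed relative penalty per watt in the stack

for name, (signals, power_w) in sorted(
        candidates.items(),
        key=lambda kv: kv[1][0] * TSV_COST_PER_SIGNAL + kv[1][1] * THERMAL_COST_PER_W):
    score = signals * TSV_COST_PER_SIGNAL + power_w * THERMAL_COST_PER_W
    print(f"{name:16s} score = {score:.2f}")
```

A real tool would need far richer models, but even this toy version shows the shape of the exploration loop the panel said no vendor is building.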

Eventually with learning the costs should come down and it should be cheaper (and lower power) to use 3D than not to use 3D.
