
Morris Chang Comments on Q2 2012: 28nm, 20nm, 16nm, FinFets, CAPEX, etc…

by Daniel Nenni on 07-19-2012 at 5:42 pm

Twenty-eight nanometer is progressing very well. Our output and our yields are both above the plans that we set for ourselves and communicated to our customers early in the year, meaning January-February. Ever since then we have, of course, tried to exceed that plan, and we have indeed exceeded it in both output and yields.

We expect 28 nanometer to ramp up to about 68,000 12-inch wafers per month by the end of the year. By the fourth quarter we will be nearly caught up with demand, and from the first quarter of 2013 on we expect to fully meet 28 nanometer demand. It is also then that we expect the 28 nanometer gross margin to catch up with the corporate average.

As I said, today both the defect density and the yields are better than 40 nanometer was at the same stage of its volume ramp, and they are also better than what we planned early in the year and communicated to our customers at that time.

Now, next a few words on 20 nanometer, 20 SoC. We have made very good progress on the 112 megabit SRAM yield, but there are still challenges to overcome in meeting our yield plan for the entire chip, which of course has both the logic and the SRAM on it.

Now, for the markets we serve, we believe our 20 nanometer SoC is fully competitive with anyone’s 20 nanometer or 22 nanometer offering. And one important point to make is that our 20 nanometer has the industry’s leading metal pitch of 64 nanometers, while our leading competitors have an 80 nanometer metal pitch. That gives us an advantage in device density and die size.

Now, as for the timing, we expect our 20 nanometer technology to be qualified by the end of this year and ready to support customers (inaudible) in Q1 of 2013. I think we’ll start some production of 20 nanometer next year, but at a small scale, very, very low, what we would call risk production basically. 2014 will be the ramp year for 20 SoC.

Last time I mentioned that we will have a FinFET product after 20 SoC, and today I’m glad to say that we have been planning the 16 nanometer FinFET. Right after our 20 nanometer (inaudible), which is the 20 SoC, we will offer FinFET at 16 nanometer for significant active power reduction. We expect to achieve speed and logic density levels comparable to the industry’s leading players’ 14 nanometer FinFET.

So, we expect our 20 SoC to be competitive with competitors’ 22 nanometer or 20 nanometer products, and we expect our 16 nanometer FinFET to be competitive with competitors’ 14 nanometer FinFET products. You might ask why we are calling it 16. In fact, until two days ago we were undecided on whether to call it 14 or 16 FinFET. We decided on 16 FinFET because, first, we want to be somewhat modest, and second, quite a few major customers already ask for the 16 FinFET designation and we didn’t want to confuse them by switching to 14 now. But we expect it to be competitive with other people’s 14 nanometer offerings.

Our 16 nanometer FinFET is expected to deliver about a 25% speed gain over the 20 nanometer SoC at the same standby power, a 25% to 30% power reduction at the same speed and standby power, and, for mobile products, a 10% to 20% speed gain at the same total power. As for timing, we expect it about one year after 20 SoC, namely ready for risk production at the end of 2013 or early 2014.

Now, why are we carrying such high capital intensity? This has actually been a focus of internal discussion among our top-level managers for the last two years, and basically, we invest in capacity to get future growth. Look back at TSMC’s history: between 1997 and 2002, our capital intensity ratio stayed mostly above 60%, and, as you recall, the high-tech bubble burst in late 2000 and early 2001. In spite of that, after sustaining high capital intensity for those six years, our revenue CAGR between 1997 and 2007, a 10-year period, was 20%, while foundry industry growth over the same period was 16%. As a result, our foundry market share rose from 31% in 1997 to 43% in 2007.
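
Those growth figures hang together. Here is a quick back-of-envelope check in Python; the CAGRs and the 1997 share come from the remarks above, the arithmetic is mine:

```python
# Back-of-envelope check of the growth figures quoted above.

years = 10                      # 1997 through 2007
tsmc_cagr = 0.20                # TSMC revenue CAGR
foundry_cagr = 0.16             # foundry industry CAGR
share_1997 = 0.31               # TSMC foundry market share in 1997

# Revenue multiples implied by the CAGRs over the 10-year period
tsmc_multiple = (1 + tsmc_cagr) ** years        # ~6.2x
foundry_multiple = (1 + foundry_cagr) ** years  # ~4.4x

# Growing at 20% while the industry grows at 16% drifts share upward
implied_share_2007 = share_1997 * tsmc_multiple / foundry_multiple

print(f"TSMC revenue multiple 1997-2007:    {tsmc_multiple:.1f}x")
print(f"Foundry revenue multiple 1997-2007: {foundry_multiple:.1f}x")
print(f"Implied 2007 market share:          {implied_share_2007:.0%}")
# ~44%, consistent with the 31% -> 43% share gain Chang describes
```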

And as far as this year’s CapEx is concerned, at this point we are still following the guidance that we gave you last time, I believe US$8 billion to US$8.5 billion. As for next year, we are not going to forecast until early next year. I think I have already given you a view of our reasoning, our strategy and our objectives, but as to the exact number, I will not give it to you until early next year.

The 13 page transcript is HERE.


CEVA-XC4000 new DSP IP core

by Eric Esteve on 07-19-2012 at 9:53 am

The CEVA-XC4000 offers unparalleled, scalable performance capabilities and innovative power management to address the most demanding communication standards, including LTE-Advanced, 802.11ac and DVB-T2, on a single architecture. Building upon its highly successful predecessors, the CEVA-XC4000 architecture sets a new milestone for power efficiency and utilizes an innovative instruction set to enable highly complex, software-based baseband processing which otherwise could only be accomplished with dedicated hardware. CEVA-XC4000 is a fully programmable low-power DSP architecture framework supporting the most demanding communication standards for cellular, Wi-Fi, DTV, white space, and more.

While performance is key, low power consumption is becoming the most important concern today, especially, but not only, for mobile electronic devices like smartphones and media tablets. Addressing the ever-increasing requirement for higher performance together with lower power consumption, the CEVA-XC4000 architecture incorporates new and innovative power-oriented enhancements, including CEVA’s second generation Power Scaling Unit (PSU 2.0) which dynamically supports clock and voltage scaling with fine granularity within the processor, memories, buses and system resources. The architecture also utilizes Tightly Coupled Extensions (TCE) to deliver inter-connected power-optimized coprocessors and interfaces for the implementation of critical PHY functions, further reducing power consumption. A rebalanced pipeline with low-level module isolation is also highly optimized for power.

Performance has been drastically improved: the CEVA-XC4000 delivers a 5X performance improvement over the CEVA-XC323 DSP for LTE-A processing (while consuming 50% less power). The CEVA-XC4000 architecture is offered in a series of six fully programmable DSP cores, offering modem developers a wide spectrum of performance capabilities while complying with the most stringent power constraints. By taking advantage of a unified development infrastructure composed of code-compatible cores, a set of optimized software libraries and a single tool chain, customers can significantly reduce software development costs while leveraging their software investment in future products. “The CEVA-XC4000 redefines the concept of a ‘universal communication architecture’, enabling every conceivable advanced cellular, connectivity, DTV, white space and powerline communication standard to be efficiently supported by a single DSP architecture,” said Gideon Wertheizer, CEO of CEVA.

“Today’s advanced wireless communications landscape is a complex array of evolving standards and protocols that product developers must support quickly, cost-effectively and efficiently,” noted Linley Gwennap, principal analyst of The Linley Group. “Based on the widespread adoption of its CEVA-XC architecture, the company has already delivered a programmable platform that meets the performance, power and die area requirements for today’s baseband applications. CEVA’s new XC4000 architecture is a scalable architecture with improved computational efficiency for next-generation wireless standards such as LTE-A and 802.11ac.”

System Know-how

The CEVA-XC4000 incorporates enhanced system-level mechanisms, queues and interfaces to deliver exceptional performance, realizing faster connectivity, higher bandwidth, lower latency and better PHY control. The architecture offers uncompromising modem quality using two distinct inter-mixable high-precision instruction sets, supporting the most advanced 4×4 and 8×8 MIMO algorithms.

In order to better serve CEVA-XC4000 customers, CEVA has also announced today complete reference architectures targeting complex communication standards, including LTE-A Rel-10 and Wi-Fi 802.11ac supporting up to 1.7 Gbps, in collaboration with CEVA-XCnet partners mimoOn and Antcor. These reference architectures are complemented with highly optimized software libraries for LTE-A and Wi-Fi.

Streamlined Software Development

The CEVA-XC4000 DSP architecture is supported by CEVA-Toolbox™, a complete software development environment incorporating Vec-C™ compiler technology for advanced vector processors, enabling the entire architecture to be programmed at the C level. An integrated simulator provides accurate and efficient verification of the entire system, including the memory sub-systems. In addition, CEVA-Toolbox includes libraries, a graphical debugger, and a complete optimization tool chain named CEVA Application Optimizer, which enables automatic and manual optimization applied to the C source code. For more information, visit www.ceva-dsp.com/CEVA-XC4000.html.

It is not surprising to learn that CEVA’s DSP IP market share is estimated at 70%, a magic number for an IP vendor! This figure, published in The Linley Group’s recent report, ‘A Guide to CPU Cores and Processor IP’, was commented on by one of its co-authors: “CEVA continues to be the most successful supplier of DSP IP — its licensees shipped more than one billion chips in 2011,” said J. Scott Gardner, an analyst at The Linley Group. “CEVA has an impressive customer base for its DSP portfolio, especially in communications and multimedia. Furthermore, with the 4G transition well underway, high-performance programmable DSPs are required to efficiently handle complex multimode baseband processing. CEVA is well positioned to capitalize on this trend.”

Eric Esteve from IPNEST


TSMC Reports Second Highest Quarterly Profit!

by Daniel Nenni on 07-19-2012 at 5:23 am

We all knew this quarter would be big but maybe not this big. Not all good news though so keep on reading. The news coverage is all over the map, mostly because mainstream reporters have no idea what a pure-play foundry really is. They also underestimate the power of mobile computing, which should be a “Revenue by Application” market segment in itself. I can’t wait to read what the tabloid analysts/journalists have to say about the results.

Yes, I’m still in Taiwan thus the early morning post. Tomorrow I go to Taipei for some hard earned rest and relaxation. I’m a big fan of trying new things but I just could not bring myself to eat this cricket. The person who did eat it said, “Not as good as I expected.” So there you have it. The fish I did eat and it was very good. I lost the staring match though, that fish did not blink ever. Squid on a stick? No thank you. Double click to enlarge at your own risk.



“The world’s biggest contract chipmaker, and rivals including Samsung Electronics and AMD, could face a downturn in global technology spending as Europe’s woes continue to dent demand and China’s economy slows.” Reuters


AMD is TSMC’s customer, not rival, and Samsung’s $390M 2011 foundry revenue does not rival TSMC’s $14.5B. Intel and Samsung are rivals, Intel and AMD are rivals, Xilinx and Altera are rivals, etc… I can’t wait to read John Cooley’s expert analysis. That should be a hoot. Last time John was in a fab was never. Here is the call transcript, good luck John!

“It makes complete sense to dedicate a whole fab, or two whole fabs, in fact, to just one customer,” Morris Chang said today. He didn’t name any clients TSMC may offer a whole factory to, or say whether it has immediate plans to do so. Bloomberg Businessweek.

I will give you three guesses who that customer is and they had all better be fruit.

While all this looks good I do see that some clarification is required:

  • Q4 2011 28nm revenue was 2%
  • Q1 2012 28nm revenue was 5%
  • Q2 2012 28nm revenue was 7%
  • Q3 28nm revenue is forecast to be 14%

That is a very interesting process ramp. Stay tuned, I will ask around a bit before I leave Taiwan. Post your guesses in the comment section or email it to me privately. I’m sure this will turn into great tabloid fodder.
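
To put those percentages in dollar terms, here is a rough sketch in Python. Only the Q2 revenue of NT$128.1 billion and the Q3 guidance midpoint of NT$137 billion come from the report below; the NT$30 per US dollar exchange rate is my own rough assumption for illustration:

```python
# Rough dollar value of the 28nm ramp, using figures reported in this post.
# The NT$30/US$ exchange rate is an approximation for illustration only.

NTD_PER_USD = 30.0  # approximate 2012 exchange rate (assumption)

quarters = [
    # (label, total revenue in NT$ billions, 28nm share of revenue)
    ("Q2 2012", 128.1, 0.07),            # reported
    ("Q3 2012 (forecast)", 137.0, 0.14), # guidance midpoint, forecast share
]

for label, revenue_ntd_b, share in quarters:
    ntd = revenue_ntd_b * share
    usd = ntd / NTD_PER_USD
    print(f"{label}: ~NT${ntd:.1f}B (~US${usd:.2f}B) of 28nm revenue")
# 28nm revenue roughly doubles quarter over quarter: ~NT$9B to ~NT$19B
```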

Second-quarter net income rose to NT$41.8 billion, missing the NT$42.2 billion average of 19 analyst estimates compiled by Bloomberg. The earnings result included a NT$2.68 billion charge for its 5.6 percent stake in Shanghai-based SMIC. Consolidated revenue, reported earlier, rose 16 percent from a year earlier to NT$128.1 billion, beating the NT$127.2 billion average of analyst estimates and surpassing the company’s own forecast of NT$126 billion to NT$128 billion.

Looking forward:

Revenue in the three months ending Sept. 30 may be NT$136 billion ($4.5 billion) to NT$138 billion, the Hsinchu, Taiwan-based company said today, compared with the NT$134.8 billion average of 22 analyst estimates compiled by Bloomberg. Second-quarter operating income rose 23 percent to NT$46.7 billion, beating the NT$45 billion average of 15 estimates.

Other reports say that fabless inventory levels have been increasing due to a weakening global economy, which would lead to an inventory correction in the fourth quarter and a dip in TSMC’s revenue that could continue into the first quarter of 2013. Don’t believe a word of it. It is going to be a very merry mobile Christmas! An iPhone5 and iPad Mini for all! Ho ho ho…


Higgs bosons, (un)certainty, and black holes

by Beth Martin on 07-18-2012 at 9:00 pm

Ever since the announcement in early July from CERN that they likely have, probably, finally found the Higgs boson, I’ve been thinking about what quantum mechanics means to our daily ‘classical model’ existence. On the surface, nothing. The most fantastical aspects of quantum mechanics, like uncertainty, tunneling and the like, vanish from our perspective just as leaves on a tree appear as a homogenous green halo from a distance.

I asked a colleague at Mentor Graphics, physics PhD Dan Blanks, to help me make sense of the importance of the CERN announcement. He pointed out that indeed, even Higgs himself has no idea what his boson could be good for (aside from a Nobel Prize. Cha-ching!). But, he said, “without a Higgs field imparting mass to particles, matter would never clump together to form stars, planets, galaxies, semiconductors, and us.”

Although quantum mechanics was not developed with any practical use in mind, Blanks said, we know that quantum mechanics made the transistor possible, and the transistor is our lives. The ideas of quantum mechanics account for the properties of metals, insulators, and semiconductors we use today.

Peter Higgs proposed his boson as a mechanism of the Standard Model in 1964 to explain how things acquire mass. Here’s my analogy: You are at DAC. The mobile bar rolls onto the show floor at 4pm. The once uniform crowd now starts accumulating near the bartender. As he moves, he attracts more people, which gives him greater mass and therefore more momentum. The thirsty conference-goers are the Higgs field, a background field that becomes locally distorted whenever a particle (the bar) moves through it. The distortion generates the bar’s (particle’s) mass. The Higgs boson is a clustering in the Higgs field, like a wave of people gathering to spread the word as the refreshments come in. Okay, roughly.

But why are the CERN scientists so sure they found the Higgs (or a Higgs-ish) boson? Because they calculated that they meet “5-sigma” of statistical significance. Yes, we know that means they are pretty certain. But, actually, what does it mean in this context? An informative article in the Wall Street Journal explains. If you recall statistics, 5-sigma means a one-in-3.5-million probability. In this case, it means a one in 3.5 million chance that “an experiment just like the one announced this week would nevertheless come up with a result appearing to confirm it does exist.” In other words, a false positive. And, the same conclusion was reached by two independent CERN teams. They are so sure that they invited Peter Higgs himself to the announcement.
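
If you want to see where “one in 3.5 million” comes from, it is the one-sided tail probability of a normal distribution beyond 5 standard deviations. A quick check in Python (using scipy, which is my tooling choice, not CERN’s):

```python
# Where "one in 3.5 million" comes from: the one-sided tail probability
# of a standard normal distribution beyond 5 sigma.
from scipy.stats import norm

p = norm.sf(5)                      # survival function: P(Z > 5)
print(f"P(Z > 5 sigma) = {p:.3e}")  # ~2.87e-07
print(f"Roughly 1 in {1 / p:,.0f}") # ~1 in 3,500,000
```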

Even the most famous living physicist, Stephen Hawking, is confident enough in the finding that he paid up on a $100 bet. Hawking bet Gordon Kane (University of Michigan) that the Higgs boson did not exist. Kane came to believe Hawking was right and paid up years ago. Hawking was big enough to refund that money when the tables turned. On a side note, after seeing Hawking and physicist Kip Thorne recently, I was wondering why Moore’s law doesn’t ultimately lead to a black hole. Follow me: double the number of transistors, shrink the chip size, increase the density. Eventually, shouldn’t it become so dense that it will collapse in on itself? If so, that would be a small black hole that Hawking postulates would give off enough energy to supply all our power needs for all eternity. Yep, I smell a Nobel Prize…
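
For the curious, here is a back-of-envelope check of the black hole musing, a toy calculation of my own that assumes a one-gram die about 10 mm on a side:

```python
# Back-of-envelope check: how close is a chip to black-hole density?
# Assumes a 1-gram die roughly 10 mm on a side (both are assumptions).
G = 6.674e-11   # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8     # speed of light, m/s

mass_kg = 1e-3                                # ~1 gram of silicon
r_schwarzschild = 2 * G * mass_kg / c**2      # radius it must shrink below
die_edge_m = 0.01                             # ~10 mm die edge

print(f"Schwarzschild radius of a 1 g chip: {r_schwarzschild:.2e} m")
print(f"Actual die size:                    {die_edge_m:.2e} m")
print(f"Shrink still needed:                {die_edge_m / r_schwarzschild:.1e}x")
# ~1.5e-30 m versus 1e-2 m: about 28 orders of magnitude of scaling to go,
# so Moore's law will hit plenty of other limits before gravity takes over.
```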


It Takes a Village: Mentor and ARM Team Up on Test

by Beth Martin on 07-18-2012 at 5:01 pm

Benjamin Franklin said, “I didn’t fail the test, I just found 100 ways to do it wrong.” I was reminded of this line during a joint Mentor-ARM seminar yesterday about testing ARM cores and memories. The complexity of testing modern SoC designs at advanced nodes, with multiple integrated ARM cores and other IP, opens up plenty of room for error.

The seminar was a full house with nearly 100 attendees. Steve Pateras, product marketing manager at Mentor, presented first. He described the conditions that are driving new strategies for test and repair. Both ARM and Mentor realized that their customers were integrating single or multiple ARM processors into their SoCs, and teamed up to provide a fully automated DFT strategy. They have been working closely for over 7 years. The Cortex-A15 is the core for which they ironed out a full DFT reference flow, which includes documentation and command scripts for test insertion, synthesis, and automatic test pattern generation. The reference flow is available now for the ARM Cortex-A15 and will be available for newer ARM cores going forward.

Pateras also described their support for testing memories on a shared bus used in ARM cores. They continue to add automation for things like generating the library files that the MBIST insertion flow needs for each ARM core. Customers can now automate the creation of the core lib file, and will soon have automation for the logical lib file. The physical lib file is created by the ARM compiler.

The second presenter was Richard Slobodnik, a product engineer at ARM. He described the ARM core architecture and gave a nice review of the history of ARM core test strategies that eventually led to their standardized MBIST Shared Bus interface. Blocks with a shared bus and with memories on the bus (memory clusters) have a shared test interface to the bus (see the figure).

The shared bus lets multiple memory arrays be serviced with a single BIST interface, which means you don’t need a BIST wrapper for every memory block. ARM has thoughtfully added these interfaces so they don’t impact design performance, and so you can run at-speed tests.
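
To make the idea concrete, here is a conceptual sketch in Python of what a shared-bus test scheme buys you: one BIST engine drives a march test through a single bus-side interface and addresses each memory in the cluster in turn. This is purely my illustration; the class names and the simplified march test are invented, and it is not ARM’s MBIST Shared Bus protocol or Mentor’s insertion flow.

```python
# Conceptual model of shared-bus MBIST: one controller, one bus-side test
# interface, many memories. Illustrative only; not the actual ARM/Mentor design.

class Memory:
    def __init__(self, name, depth):
        self.name = name
        self.cells = [0] * depth

    def write(self, addr, data):
        self.cells[addr] = data

    def read(self, addr):
        return self.cells[addr]

class SharedBusBIST:
    """A single BIST engine that services every memory reachable over the bus."""
    def __init__(self, memories):
        self.memories = memories

    def march_test(self, mem):
        # Simplified march element: write 0s, read 0s, then write 1s, read 1s.
        for pattern in (0, 1):
            for addr in range(len(mem.cells)):
                mem.write(addr, pattern)
            for addr in range(len(mem.cells)):
                if mem.read(addr) != pattern:
                    return False
        return True

    def run(self):
        return {mem.name: self.march_test(mem) for mem in self.memories}

cluster = [Memory("l2_tag", 64), Memory("l2_data", 256), Memory("tlb", 32)]
print(SharedBusBIST(cluster).run())  # one interface, no per-memory BIST wrapper
```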

Mr. Slobodnik identified some of the problems that have arisen over the years of developing the shared bus flow, including the difficulty of manually creating the BIST input libraries; Mentor added automation to solve that problem. Another snag is not being able to trace memories behind the shared bus, which could result in missing some of the memories during BIST insertion. A workaround using Mentor’s ETChecker tool was described, and Mentor is working on providing fully automated support.

A key message I got from this seminar is not only about the complexity of testing SoCs, but that success for the designers depends on extensive cooperation between EDA and IP vendors. Mentor and ARM realize that for their customers to succeed, it truly takes a village.

To learn more, check out the Mentor whitepaper High Quality Test of ARM® Cortex™-A15 Processor Using Tessent® TestKompress® and a high-level pitch of the shared bus support in this video titled Mentor Graphics support of ARM IP.


How Do You Extract 3D IC Structures?

by Daniel Payne on 07-18-2012 at 2:01 pm

The press has been buzzing about 3D everything for the past few years, so when it comes to IC design it’s a fair question to ask how would you actually extract 3D IC structures for use by analysis tools like a circuit simulator. I read a white paper by Christen Decoin and Vassilis Kourkoulos of Mentor Graphics this week and became intrigued by some new extraction technology coming our way. Back in May 2012 Mentor joined STMicroelectronics and other companies as part of the Grenoble Institute of Technological Research (IRT). This joint work is the result of that collaboration. Continue reading “How Do You Extract 3D IC Structures?”


EDAC Announces EDA up 6.3% in Q1 versus 2011

by Paul McLellan on 07-17-2012 at 11:24 pm

EDAC announced that EDA industry revenue increased 6.3% for Q1 2012 to $1536.9M compared to a year ago. Sequentially it declined, as it normally does from Q4 to Q1, by 9.6%. Every category except services increased revenue and every region increased revenue except for Japan. The full report is available by subscription, of course.

Here are a few highlights:

  • CAE up 6.5% to $564.8M
  • IC Physical Design and Verification up 10.2% to $350.9M
  • PCB up 5.1% to $147.5M
  • IP up 5.4% to $391.3M
  • Services, the only down segment, dropped 3.8% to $82.4M
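
As a quick sanity check, the five segments sum exactly to the reported total (trivial, but worth seeing; the figures are the ones quoted above from the EDAC release):

```python
# Check that the reported segments sum to the reported Q1 2012 total.
segments = {
    "CAE": 564.8,
    "IC Physical Design and Verification": 350.9,
    "PCB": 147.5,
    "IP": 391.3,
    "Services": 82.4,
}
total = sum(segments.values())
print(f"Sum of segments: ${total:.1f}M")  # 1536.9, matching the reported total
```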


The EDAC press release is here (pdf).


EUV Masks

by Paul McLellan on 07-17-2012 at 11:00 pm

This is really the second part of this blog about the challenges of EUV lithography. The next speaker was Franklin Kalk, CTO of Toppan Photomasks. He too emphasized that we can make almost arbitrarily small features, but more and more masks are required (not that I suspect he would complain, being in the mask business). For EUV to go mainstream, its cost must come in below the cost of all those extra masks, and the masks must be sufficiently defect-free. This may happen earlier in memory, since the built-in redundancy makes it more tolerant of defects.


EUV mask blanks start with a substrate of low thermal expansion material, because the mask absorbs about 30% of the EUV light, gets hot, and must not distort. On top of that come 40-50 alternating layers of silicon and molybdenum, then a capping layer of ruthenium to prevent oxidation. The EUV absorber (where we don’t want the light to reflect) is a tantalum boron nitride film topped with an anti-reflective oxide.

Apart from the obvious fact that this is reflective technology, the other big difference with EUV masks is that mask blanks will never (for the foreseeable future anyway) be defect-free. So unlike traditional masks, which are essentially blank slates until we pattern them, we need fiducials on the blank so that we can record where the defects are and, subsequently, decide precisely where on the blank to print the pattern to best avoid them. In fact it gets worse: today we can’t even find all the defects that will print, because we are reaching the limits of what we can see even with a scanning electron microscope (SEM).

Obviously the fewer defects there are, the better. There has been a lot of progress at reducing pit defects without adding particles, and blank yield is improving, but there is still a lot of work to do. Sematech’s numbers show defect counts down to 9 at 50nm and 12 at 40nm. The reticle is typically 4X the wafer, so 40nm on the mask translates into 10nm on the wafer.

So how do we avoid defects, given that they will exist? We want all the defects to fall on parts of the mask where we block the EUV light (corresponding to a place on the wafer where we don’t want to expose the photoresist). We can rotate the blank to all four positions. In principle we could shift the pattern on the blank too, but today we cannot yet shift accurately enough off the fiducials on the blank.
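
Here is a toy sketch in Python of that blank-qualification idea: given defect locations recorded against the blank’s fiducials and a map of where the absorber will be, try each of the four rotations and see whether every defect lands on absorber. The function names and the example pattern are invented for illustration; real mask data prep is far more involved than this.

```python
# Toy check of EUV blank defect avoidance: for which of the four allowed blank
# rotations does every recorded defect fall on an absorber (dark) region?
# Illustrative only; not an actual mask data-prep tool.

blank_size = 100.0  # arbitrary mask-scale units

def rotate(point, quarter_turns, size):
    """Rotate (x, y) on a square blank by 90-degree steps."""
    x, y = point
    for _ in range(quarter_turns % 4):
        x, y = y, size - x
    return x, y

def left_half_dark(x, y):
    """Toy absorber map: the left half of the field is absorber (dark)."""
    return x < blank_size / 2

def usable_orientations(defects, is_absorber, size):
    """Return the rotations (0-3) for which all defects land on absorber."""
    return [q for q in range(4)
            if all(is_absorber(*rotate(d, q, size)) for d in defects)]

# Defects recorded against the blank fiducials (mask-scale coordinates)
defects = [(20.0, 35.0), (30.0, 80.0)]
print(usable_orientations(defects, left_half_dark, blank_size))  # e.g. [0]
```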

Synopsys did some research into this and took 7 designs (I think this actually means mask layers) and 10 actual mask blanks with data on where the defects were. It turns out that every blank could be used in at least 3 of the designs and every design had at least 2 usable blanks.

The full toolkit to be able to find the defects, shift the mask to avoid defects and so on is not expected to be ready until 2018. But Sematech thinks that the first year for high volume manufacturing will be 2015, and they will need 4-5,000 quality blanks with an estimated 45-50% yield to good masks.

Because EUV masks are used in a vacuum, they need a complicated pod with an outer and inner assembly. The inner assembly goes into the stepper where it is pumped down to a vacuum.

There are also the unknown unknowns that will show up after insertion of EUV into volume manufacturing. With refractive masks we had surprises: haze after 5 years and millions of exposures, for example. With EUV masks there are sure to be similar issues. Masks simply are not indestructible.



Design-to-Silicon Platform Workshops!

by Daniel Nenni on 07-17-2012 at 7:30 pm

Have you seen the latest design rule manuals? At 28nm and 20nm, design sign-off is no longer just DRC and LVS. These basic components of physical verification are being augmented by an expansive set of yield analysis and critical feature identification capabilities, as well as layout enhancements, printability, and performance validation. Total cycle time is on the rise due to larger, more complex designs, higher error counts, and more verification iterations, so we have some work to do here.

Learn how to leverage the superior performance and capacity of the Calibre design-to-silicon (D2S) platform, a comprehensive suite of tools designed to address the complex handoff between design and manufacturing. The Calibre D2S platform offers fast and reliable solutions to design rule checking (DRC), design for manufacturing (DFM), full-chip parasitic extraction (xRC), layout vs. schematic (LVS), silicon vs. layout, and electrical rule checking (ERC).

Target Audience:
IC Design Engineers who are serious about an in-depth evaluation of the Calibre Design-to-Silicon platform.

What you Will Learn:

  • Reduce turnaround time with advanced Calibre scaling algorithms and debugging capabilities that work directly within your design environment
  • Execute DFM functions and visualize results using Calibre Interactive and RVE
  • Understand the benefits of hierarchical vs. flat verification
  • Highlight DRC errors in a layout environment by using Calibre RVE
  • Learn the concepts of Waivers and hierarchical LVS
  • Identify and automatically repair planarity issues in low-density regions
  • Identify antennas and understand various repair methods
  • Use Calibre’s advanced Nanometer Silicon Modeling capabilities and understand advanced hierarchical parasitic extraction
  • Address manufacturability issues by using Calibre DFM tools that help analyze Critical Areas and features
  • Understand the importance of identifying LPC hotspots on advanced design nodes

Register Now:
  • Jul 19, 2012 - Fremont, CA - Register
  • Aug 16, 2012 - Fremont, CA - Register
  • Aug 23, 2012 - Irvine, CA - Register
  • Sep 20, 2012 - Fremont, CA - Register
  • Oct 18, 2012 - Fremont, CA - Register
  • Nov 15, 2012 - Fremont, CA - Register
  • Dec 13, 2012 - Fremont, CA - Register
  • Jan 17, 2013 - Fremont, CA - Register


How about this: attend the workshop, write a detailed review on SemiWiki, and I will let you drive my Porsche. This one has the new Porsche Doppelkupplungsgetriebe (PDK) transmission with the Sport Chrono Package. An unforgettable driving experience for sure. Porsche… there is no substitute.


3D Thermal Analysis

by Paul McLellan on 07-17-2012 at 11:32 am

Matt Elmore of ANSYS/Apache has an interesting blog posting about thermal analysis in 3D integrated circuits. With both technical and economic challenges at process nodes below 28nm, product groups are increasingly looking toward through-silicon-via (TSV) based approaches as a way of keeping Moore’s law on track and delivering increasingly complex systems at acceptable cost.

There are lots of challenges with 3D ICs, from floorplanning, to noise, to power distribution, to test. But one of the big ones is thermal analysis. Once you stack a lot of die on top of each other, heat from the silicon in the center can really only be dissipated through the other die. The TSVs themselves, which are large copper plugs (well, large by semiconductor standards), are not just an electrical interconnect between adjacent die but also a thermal connection. Heat from the center is moved along the vertical axis, which with care can be a very good thing. The biggest area where care needs to be taken is to ensure that hot spots on one die do not align with hot spots on the die above or below. The big risk here is thermal runaway, where a temperature increase in turn increases current and power and so further increases the temperature. A chip can be completely destroyed by this.

Temperature affects performance, reliability, stress and leakage. So a full analysis of a 3D design is not straightforward since everything affects everything else. In particular, temperature affects performance and performance affects temperature.

A good analysis needs a model of how the temperature affects other aspects of the design at micron resolution. In turn this needs to interact with models of the chip, package and board to arrive at a sort of “thermal closure”.
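
Here is a minimal sketch of what that coupled loop looks like, a toy Python model with made-up coefficients rather than Apache’s actual methodology: leakage power rises with temperature, temperature rises with power, and you iterate until the pair either settles (thermal closure) or runs away.

```python
# Toy electro-thermal loop: leakage rises with temperature, temperature rises
# with power. Iterate until convergence ("thermal closure") or divergence
# (thermal runaway). All coefficients are illustrative, not from any tool.

def total_power(temp_c, dynamic_w=2.0, leak_ref_w=0.5, leak_doubling_c=20.0):
    """Dynamic power plus leakage that doubles every `leak_doubling_c` degrees C."""
    return dynamic_w + leak_ref_w * 2 ** ((temp_c - 25.0) / leak_doubling_c)

def solve_temperature(theta_ja_c_per_w, ambient_c=25.0, iters=50, tol=0.01):
    temp = ambient_c
    for _ in range(iters):
        new_temp = ambient_c + theta_ja_c_per_w * total_power(temp)
        if new_temp > 1000.0:      # far past any sane junction temperature
            return None            # diverging: thermal runaway
        if abs(new_temp - temp) < tol:
            return new_temp        # converged: thermal closure
        temp = new_temp
    return None

print(solve_temperature(theta_ja_c_per_w=10.0))  # settles around 65 C
print(solve_temperature(theta_ja_c_per_w=40.0))  # feedback too strong: None
```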

Read the blog posting here.