
Front-End Design Summit: Physically Aware Design
by Daniel Nenni on 11-24-2013 at 12:00 pm

Save closure time and boost performance by incorporating knowledge of physically aware design early into your front-end design implementation flow

With the adoption of advanced process nodes, design closure is becoming increasingly difficult due to the lack of convergence between the front end and the back end of the register-transfer level (RTL)-to-signoff flow. Incorporating knowledge of physically aware design early in the front-end design process is quickly becoming a must-have to enable faster convergence in the back end.

Join us at the Front-End Design Summit, where you can network with fellow logic designers and speak directly with Cadence® R&D experts from our Encounter® RTL Compiler, Encounter® Test, and Conformal® product teams. At this day-long technical event, you will:

  • Hear from Cadence customers the challenges they faced during logic synthesis, advanced low-power design and verification, engineering change order (ECO), and design-for-test (DFT) implementation, and the strategies they employed to address them
  • Discover how best to achieve power, performance, and area goals on industry-leading IP cores
  • Network, share your knowledge, and exchange best practices with your industry peers
  • Hear from Cadence R&D on product updates, solutions, and future directions
Win an Apple iPad mini®! The drawing will be held at the end of the event.


Date: 05 Dec 2013 (9:00am – 5:00pm PST)
Location: Cadence Design Systems Campus, Bldg. 10, San Jose, CA

Agenda
Time | Title | Speaker
9:00am | Registration and Breakfast |
9:30am | Welcome | Andy Lin, VP of R&D, Front End Design, Cadence
9:45am | Keynote |
10:15am | Advantages of RTL Compiler Using Physically Aware Structuring | Jaga Shanmugavadivelu, Technical Lead, Cisco Systems
10:45am | Break |
11:00am | Addressing Physical Challenges Early in RTL Synthesis | Ankush Sood, R&D Director, Cadence
11:30am | ECO Experience on a High Performance Mobile ASIC | Deepa Thali, Staff Engineer, Qualcomm
12:00pm | Corporate Update | Chi-Ping Hsu, Sr. VP R&D, DSG and CSO, Cadence
12:15pm | Lunch with R&D |
1:15pm | Optimizing PPA for Tensilica BBE32 Core | Jagesh Sanghavi, Engg. Director, Cadence
1:45pm | Power-Efficient SmartScan Test Architecture for Processor Designs | Alan Hales, DFT Lead, Texas Instruments
2:15pm | Addressing Test Challenges for GigaScale SoCs | Michael Vachon, Group Director R&D, Cadence
2:45pm | Break |
3:00pm | Comparison of Various Flows for Post-Mask ECO: Manual vs. Conformal-ECO | Pihay Saelieo, Senior MTS CAD Design Engineer, Spansion
3:30pm | Streamlining Your Verification Flow | Kei-Yong Khoo, Group Director R&D, Cadence
4:00pm | Bridging the Gap in an RTL2GDSII Flow | Paul Cunningham, VP of R&D, ICD, Cadence
4:30pm | Final Q&A and Conference Survey |
4:45pm | Networking Event, iPad mini Drawing |

Questions About this Event? Send email to events@cadence.com

Register »



Intel’s Mea Culpa!
by Daniel Nenni on 11-24-2013 at 11:00 am

The Intel analyst meeting last week read like an absolute train wreck, with INTC stock dropping 5%+ the very next day. Since I work in the fabless semiconductor ecosystem during the day I was not able to listen to it live like the other pundits. Nor am I as easily fooled by PowerPoint slides. I did, however, review the materials and would like to comment on current Intel CEO Brian K’s coming legacy. One thing I can tell you is that it will not be “Speak softly and carry a big stick” because Brian is speaking very loudly and carrying a very small stick by comparison. My bet is that Brian’s legacy will also be the shortest Intel CEO legacy thus far, absolutely.


“We seemed to have lost our way,” Andy Bryant, Intel chairman of the board said.
“We were in denial on tablets. It put us in a hole and we had to catch up.”

Brian M. Krzanich
CEO, 2013
EDUCATION: B.A. San Jose State University

Paul S. Otellini
CEO, 2005-2013
EDUCATION: MBA, University of California-Berkeley

Craig R. Barrett
CEO, 1998-2005
EDUCATION: Ph.D., Stanford University

Andrew S. Grove
CEO, 1987-1998
EDUCATION: Ph.D., University of California-Berkeley

Gordon E. Moore
CEO, 1975-1987
EDUCATION: Ph.D., California Institute of Technology

Robert N. Noyce
CEO, 1968-1975
EDUCATION: Ph.D., Massachusetts Institute of Technology

First I would like you to remember the 14nm fiasco. I wrote on August 1st of this year that Intel 14nm would be delayed. I put a question mark in the title so I would not get sued, even though I knew it was true. 14nm process move-in had been delayed and Intel was not truthful about it. On August 24th Paul McLellan attended the SEMI Silicon Valley luncheon and wrote “Intel 14nm really is delayed” as it was discussed openly. Unfortunately Brian K continued the ruse at the Intel Developers Forum a month later, looking us in the eye and saying 14nm was NOT delayed. He even showed a 14nm-based laptop which we were not allowed to touch. On the quarterly conference call a month later Brian finally fessed up to the delay. Paul wrote about it in “Yes Intel 14nm really is delayed and they lost $600M on Mobile”. So either Brian did not know 14nm had been delayed or he was not honest about it; I’m not sure which is worse for a CEO with 30+ years of manufacturing experience.

Second, let me weigh in on the Intel-as-a-foundry proposition. I had serious doubts when Samsung announced that they were entering the foundry business 6+ years ago. The foundry business is intensely customer-centric, which was not Samsung’s strength, but they lucked out by landing Apple. Apple first used the ASIC model and relied on Samsung for everything after the initial design. Apple has since become one of the largest fabless semiconductor companies and has the ability to manufacture wherever they choose (for 20nm Apple has moved to TSMC).

Samsung entered the foundry business at 90nm, then moved through 65nm, 45nm, 32nm, 28nm, and 20nm with very little customer traction other than Apple. But during that time Samsung learned the foundry business, developed a design enablement ecosystem second only to TSMC’s, allowed customers into the process development cycle, and at 14nm Samsung will become the number two foundry in the world, absolutely.

So why does anybody think that Intel can do it any faster than Samsung? Personally I don’t think Brian K. has the cojones to make it in the foundry business. This is an absolutely cutthroat industry with no customer loyalty whatsoever. It is all about collaboration, price, and technology on each and every node, and being good at only one of the three will not cut it.

The top fabless semiconductor companies are currently straddling TSMC and Samsung at 14nm to get the best pricing and delivery. 10nm collaboration is already underway and let me tell you it is VERY intimate and there will be NO process secrets left unexposed. So you have to ask yourself: Does Intel really want to be the third big dog eating out of the same bowl?

More Articles by Daniel Nenni…..



Better License Usage vs More Licenses
by Paul McLellan on 11-22-2013 at 11:00 am

When you see a new product announcement from an EDA company, it is always put in terms that make it seem as if the engineer is sitting at his or her desk with a big server, running the new tool to wondrous effect. But the reality is that most companies have a computing infrastructure of server farms, often several in different geographical locations, and some way of scheduling jobs. The infrastructure that makes all of this work is not as sexy as the latest FinFET extraction or double-patterning-aware router. But it is just as important, and it can have as big an effect on the turnaround time for jobs and on the utilization of those server farms.

Runtime Design Automation’s NetworkComputer (NC) is the fastest of the job schedulers out there. For example, other schedulers take 40 minutes to dispatch 300,000 jobs, but NC can do it in 11. The theoretical minimum, set by network latency, is 9 minutes, so NC is close. In fact it is 4-10X faster than LSF and UGE, with latencies ranging from 1.6ms to 24ms.

OK, that is nice, but it is probably not going to make your VP of engineering salivate. The implications, however, are huge. With other, less smart schedulers, many tool licenses often sit unused. Simply by switching to NC, utilization gets close to 100%, with all of the licenses in use all of the time. Getting the same effect with the old scheduler would require doubling the number of licenses, which is obviously very expensive. And if you can afford additional licenses and use NC too, you multiply the capacity you had before.

Another problem is that you are in, say, San Jose. All those licenses in Bangalore are going unused since it is the middle of the night there. But the Bangalore team doesn’t want you to use them, because when they get into work in the morning they don’t want to find that the licenses have all been stolen by California. What is required is the capability to use those licenses while still giving priority to the locals, so that when they want them back, the jobs that are “borrowing” them are pre-empted.

What is required is a system that allows you to configure per-site allocations, and when a site is idle let other sites use that pool. But then when the idle site wakes up, usage shifts back to the defined allocations. NC can do exactly this, ensuring that a license never sits idle if someone, somewhere wants it.

For example the above graph shows this in action. The bottom of the three graphs shows jobs running in Austin and Sunnyvale. Then in the middle, India takes all those licenses for several hours. But when Austin and Sunnyvale come back at the right hand side, the Indian jobs are pre-empted and the license use returns to the US.
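To make the policy concrete, here is a minimal sketch of the borrow-and-preempt idea. This is purely illustrative Python, not NetworkComputer's actual configuration or API, and the site names and license counts are made up.

```python
# Toy model of per-site license allocation with borrowing and pre-emption.
# Illustrative only -- not RTDA NetworkComputer's real configuration or API.

class LicensePool:
    def __init__(self, allocations):
        self.allocations = allocations                      # site -> licenses owned
        self.own_use = {s: 0 for s in allocations}          # used by the owning site
        self.lent = {s: 0 for s in allocations}             # lent out to other sites

    def request(self, site):
        """A job at `site` asks for one license."""
        if self.own_use[site] + self.lent[site] < self.allocations[site]:
            self.own_use[site] += 1                         # use the local allocation first
            return "granted"
        for other in self.allocations:                      # otherwise borrow an idle license
            if other != site and self.own_use[other] + self.lent[other] < self.allocations[other]:
                self.lent[other] += 1
                return f"granted (borrowed from {other})"
        return "queued"

    def reclaim(self, site):
        """The owning site wakes up and wants a lent-out license back."""
        if self.lent[site] > 0:
            self.lent[site] -= 1                            # the borrowing job is pre-empted/requeued
            self.own_use[site] += 1
            return "borrowed job pre-empted"
        return "nothing to reclaim"

pool = LicensePool({"sunnyvale": 4, "india": 4})
print(pool.request("sunnyvale"))                            # uses Sunnyvale's own pool
print([pool.request("india") for _ in range(5)][-1])        # India's 5th request borrows from Sunnyvale
```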

More information on NetworkComputer is here.


Signoff Summit and Voltus
by Paul McLellan on 11-22-2013 at 10:21 am

Yesterday Cadence had an all-day Signoff Summit where they talked about the tools that they have for signoff in advanced nodes. Well, of course, those tools work just fine in non-advanced nodes too, but at 20nm and 16nm there are FinFETs, double patterning, timing impacts from dummy metal fill, a gazillion corners to be analyzed and so on.

The core of Cadence’s signoff environment consists of 3 tools, two of them new and one of them updated. These are:

  • Tempus, Cadence’s new timing engine announced in May
  • Voltus, Cadence’s new power grid analysis tool announced a couple of weeks ago
  • QRC, Cadence’s parasitic extraction tool

These tools are designed to interact, because at these process nodes signoff is increasingly like tuning a steel-drum for a Caribbean band, where every change you make alters every other note on the drum. Every change you make to the power network alters the parasitics and the timing, and adjustments to the timing change the power demands. You just have to cross your fingers and hope that the changes get smaller and smaller and eventually converge.

There is apparently an ancient Haida saying that “everything depends on everything else.” Sounds like the perfect metaphor for advanced node signoff!

The biggest effect is that voltage affects timing, so accurate analysis of the power grid (especially IR drop) is very important. But timing affects the power supply too: as changes are made to the design to meet timing, there are knock-on effects that incrementally change voltage and temperature (and temperature affects timing and power dissipation, and not in a good way). And changes to the power net change all the parasitics. This all needs to be integrated with Allegro and Sigrity to take account of package and board effects, since major current changes, especially inrush current when powering up domains that were powered down, can cause huge transients that affect the whole on-chip power network and thus the timing and…you get the idea. Everything depends on everything else.
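As a way to picture the "everything depends on everything else" loop, here is a deliberately over-simplified sketch of the kind of fixed-point iteration involved. The update functions and coefficients are invented for illustration; this is not how Tempus or Voltus are actually implemented.

```python
# Toy fixed-point view of coupled power-grid / timing analysis.
# All models below are invented placeholders, purely to show the iteration.

def ir_drop(current_a):
    return 0.001 * current_a            # made-up grid resistance model (V per A)

def path_delay(vdd_eff):
    return 100.0 / vdd_eff              # made-up delay model: lower supply -> slower paths (ps)

def current_demand(delay_ps):
    return 50.0 + 200.0 / delay_ps      # made-up model: fixing timing draws more current (A)

vdd_nominal = 0.9
current = 60.0                          # arbitrary starting guess
for iteration in range(1, 51):
    vdd_eff = vdd_nominal - ir_drop(current)   # "power grid analysis"
    delay = path_delay(vdd_eff)                # "timing analysis" at the droopy supply
    new_current = current_demand(delay)        # power demand implied by the timing fixes
    if abs(new_current - current) < 1e-6:      # changes keep shrinking -> converged
        break
    current = new_current

print(f"converged after {iteration} iterations: Vdd_eff = {vdd_eff:.3f} V, delay = {delay:.1f} ps")
```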

I wrote about Tempus in detail on SemiWiki when it was announced here. It is a static timing analysis (STA) tool that has been designed from the ground up to be massively parallel. Yesterday Ruben Molina of Cadence said that the sweet spot on a single server is to use 8 cores, after which you should distribute the design across multiple servers. Note that Tempus doesn’t just do the easy distribution, analyzing different corners in parallel, but can also distribute timing analysis of a single large design.

Voltus carries much the same message, but for power analysis. For over a decade, Cadence’s analysis in this area has been based on the VoltageStorm technology that came with the Simplex acquisition, later renamed EPS. Voltus, however, is a completely new tool. I don’t know how much code it shares with Tempus, but I’m betting quite a bit, based on the fact that it has the same massively parallel value proposition and the two tools are clearly tightly integrated. It is 10X the speed of other solutions on the market and supports designs of up to one billion instances. What does it do?

  • IR drop and electromigration analysis and optimization
  • Power consumption calculation and analysis
  • Analysis of power impact on design closure, from chip to package to PCB

Also during the day there were several presentations by actual users of Cadence’s signoff tools: GlobalFoundries, nVidia, TI, Conexant and LSI Logic. In particular, nVidia is one of the lead customers for Voltus and presented some of their experience.

Information on Voltus is here. The Voltus white paper is here.


More articles by Paul McLellan…


Thermal Analysis for 3D SoC Integration
by Daniel Payne on 11-21-2013 at 7:01 pm

The first time I saw a DRAM in a ceramic package running on a tester, I made the mistake of touching the metal lid with my finger, scorching my finger and teaching me the lesson that ICs can run extremely hot. I’ve read a lot over the past few years about 3D IC design, and my mind immediately became curious about how an engineer would go about simulating or estimating the thermal performance before building a prototype. Last month the Global Semiconductor Association (GSA) invited Gene Matter from Docea Power to talk about:

  • How to model a 3D IC for dynamic power and thermal analysis
  • Creating compact thermal models for fast simulation and acceptable accuracy
  • Performing “what-if” analysis on the floor plan while running real software loads


Gene Matter, Docea Power

The cross-section of a typical 3D IC may contain a substrate that stacks multiple ICs, such as a processor, DRAM, RF, and non-volatile memory.

During the design process you want to know how temperature in such a 3D system impacts: power, peak performance, aging, and package costs. Here’s a thermal modeling flow used by Docea Power that creates a Compact Thermal Model (CTM):

The EDA tool from Docea is called Ace Thermal Modeler (ATM):

Once you’ve created a thermal model, the next step is to define your power model and use case, then run simulation with Aceplorer to understand the temporal and spatial effects:

By modeling thermal effects at the system level an engineer can now:

  • Analyze how IP leakage is temperature-dependent
  • Explore multiple power and thermal management strategies
  • Qualify the environment capacitive effect
  • Qualify the design of minimum cooling properties
  • Trade off and explore various floor plan and proximity dependencies
  • Find and fix spatial or temporal hot spots and gradients
  • Choose an optimal thermal sensor location
  • Manage costs across: Die, package, PCB, chassis, the complete system

3D IC Example – WIOMING

Here’s a Memory-on-Logic 3D stack example:


Source: CEA LETI, Pascal Vivet
A cross-section view shows how the SoC is connected to the DRAM memory:

Eight heaters and several thermal sensors were placed around the SoC in order to characterize it accurately to within about 1 degree Celsius. A compact thermal model was created; static simulation results were generated in milliseconds, while a dynamic simulation took only seconds to complete. The difference between simulated and measured results across all scenarios showed an average error of just 4.22%.
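To give a feel for what a compact thermal model is, here is a tiny lumped-RC sketch: one thermal node for the die and one for the package, stepped with explicit Euler integration. The topology and all of the values are invented for illustration; this is not Docea's ATM model format or the WIOMING model.

```python
# Minimal lumped-RC compact thermal model: die node -> package node -> ambient.
# All resistances, capacitances, and power numbers are invented for illustration.

R_die_pkg = 2.0    # K/W, die-to-package thermal resistance (assumed)
R_pkg_amb = 5.0    # K/W, package-to-ambient thermal resistance (assumed)
C_die = 0.05       # J/K, die thermal capacitance (assumed)
C_pkg = 1.0        # J/K, package thermal capacitance (assumed)
T_AMB = 25.0       # ambient temperature, degrees C

def simulate(power_trace, dt=0.01):
    """power_trace: die power (W) per time step; returns the die temperature trace."""
    t_die, t_pkg, trace = T_AMB, T_AMB, []
    for p in power_trace:
        q_dp = (t_die - t_pkg) / R_die_pkg        # heat flow die -> package
        q_pa = (t_pkg - T_AMB) / R_pkg_amb        # heat flow package -> ambient
        t_die += dt * (p - q_dp) / C_die          # explicit Euler temperature updates
        t_pkg += dt * (q_dp - q_pa) / C_pkg
        trace.append(t_die)
    return trace

# Example use case: a 2 W burst for 5 s followed by 0.2 W idle for 5 s
trace = simulate([2.0] * 500 + [0.2] * 500)
print(f"peak die temperature: {max(trace):.1f} C")
```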

Transient simulation results also showed acceptable correlation between simulated and measured values.

Summary

It’s possible to use thermal modeling at the system level with Ace Thermal Modeler to explore and measure a 3D stack with TSVs. Compact thermal models allow for quick run times and decent accuracy. The simulated results correlate well with measured silicon values, as seen with the WIOMING example where a WideIO DRAM was added on top of an SoC.

You may read the complete presentation on the GSA web site.

More Articles by Daniel Payne …..



It’s about the mobile GPU memory bandwidth per watt, folks
by Don Dingee on 11-21-2013 at 4:00 pm

There has been a lot of huffing and puffing lately about 64-bit cores making it into the Apple A7 and other mobile SoCs, and we could probably dedicate a post to that discussion. However, there are a couple other wrinkles to the Apple A7 that should be getting a lot more attention.

There are two primary causes of user frustration in multimedia applications. Continue reading “It’s about the mobile GPU memory bandwidth per watt, folks”


QCOM delivers first TSMC 20nm mobile chips!
by Daniel Nenni on 11-21-2013 at 3:00 pm

QCOM is now sampling the TSMC 20nm version of its market-dominating Gobi LTE modem. The announcement also included a new turbocharged version of their 28nm Snapdragon 800 SoC with a Krait 450 quad-core CPU and Adreno 420 GPU. Given the comparable benchmarks between the Intel 22nm SoC and the 28nm SoCs from Apple and QCOM, the new 20nm mobile products from the top fabless semiconductor companies will be well beyond Intel’s 22nm reach, absolutely.

The question is: When will Intel have a competitive 14nm SoC? The answer will hopefully come today at the Intel Analyst conference so stay tuned to SemiWiki. I will compare the conference info with what I have heard and see how they match up. Spoiler alert: Production Intel 14nm SoCs will not arrive until 2015, believe it.

TSMC’s 20nm process technology can provide 30 percent higher speed, 1.9 times the density, or 25 percent less power than its 28nm technology. The advanced 20nm technology is demonstrating double-digit yield on its 112Mb SRAM. The high-performance device is equipped with second-generation gate-last HKMG and third-generation silicon-germanium (SiGe) strain technology. By leveraging its 28nm experience, TSMC’s 20nm process further optimizes back-end-of-line (BEOL) technology options and relies on deep collaboration with customers to continue down the Moore’s Law shrink path, with technology and design innovation keeping production costs in check.

The new QCOM Krait 450 quad-core SoC is the first mobile CPU capable of running at speeds of up to 2.5GHz per core with a memory bandwidth of 25.6GB/s, which will significantly increase the speed of running apps and browsing the internet. According to QCOM it is also capable of delivering Ultra HD (4K) resolution video, images, and graphics to mobile devices and HDTVs via their new Adreno graphics engine (the Adreno 420 GPU claims a 40% graphics boost over the Snapdragon 800). QCOM also claims to have integrated hardware-accelerated image stabilization, which would be an industry first. The quad-core processors are still 32-bit, which was a bit of a disappointment for me. If anyone can push Android to 64-bit it is QCOM. As it turns out, Apple really did pull a rabbit out of the hat with their 64-bit ARM-based A7 SoC for the iPhone 5s, which I have and am thoroughly enjoying!

“Using a smartphone or tablet powered by Snapdragon 805 processor is like having an UltraHD home theater in your pocket, with 4K video, imaging and graphics, all built for mobile,” said Murthy Renduchintala, executive vice president, Qualcomm Technologies, Inc., and co-president, QCT. “We’re delivering the mobile industry’s first truly end-to-end Ultra HD solution, and coupled with our industry leading Gobi LTE modems and RF transceivers, streaming and watching content at 4K resolution will finally be possible.”

The FinFET versions of the Snapdragon and Gobi LTE modems are expected to sample one year from now with a 20% performance boost or a 35% power savings from the silicon alone. I also expect they will have a 64-bit ARM-based architecture for greater throughput. Apple’s next A8 SoC (iPhone 6) is also TSMC 20nm, which will mark the first time Apple has silicon competitive with other tablets and smartphones. Apple’s A7, which just came out, is old-school 28nm, and last year’s A6 was 32nm. Exciting times in the fabless semiconductor ecosystem, absolutely!

See the Qualcomm presentation HERE.

More Articles by Daniel Nenni…..



Semiconductor market could grow 15% in 2014
by Bill Jewell on 11-20-2013 at 8:00 pm

The global semiconductor market has grown 4% for the first three quarters of 2013 compared to a year ago, according to World Semiconductor Trade Statistics (WSTS). Guidance for 4Q 2013 revenue change versus 3Q 2013 varies widely for key semiconductor companies. Texas Instruments (TI), Broadcom, Infineon and Renesas all expect declines ranging from 7% to 10% based on the midpoint of their guidance. Intel, Qualcomm, STMicroelectronics (ST) and Advanced Micro Devices (AMD) guide toward flat or low single-digit growth. Micron Technology did not provide specific revenue guidance, but provided estimates of DRAM and flash memory bit growth and price changes for their quarter ending in late November. Based on the Micron guidance, Semiconductor Intelligence estimates revenue growth of 30%. Samsung did not provide revenue guidance, but expects solid demand and a tight market (meaning higher prices) for both DRAM and flash memory. Based on the table below, the 4Q 2013 semiconductor market should be flat or up low single digits from 3Q 2013. Thus full-year 2013 growth should be 5% to 6%.

Key Semiconductor Company Revenue Guidance, 4Q 2013 versus 3Q 2013

Company | Low end | Midpoint | High end
Intel | -2% | 2% | 5%
Qualcomm | -3% | 2% | 6%
TI | -12% | -8% | -4%
Micron | | 30%* |
ST | -4% | 0% | 4%
Broadcom | -11% | -8% | -5%
Renesas | | -10% |
Infineon | -9% | -7% | -5%
AMD | 2% | 5% | 8%

*Estimate based on bit growth and price guidance.

What will semiconductor market growth be in 2014? We at Semiconductor Intelligence expect growth to accelerate from 2013 to 2014. One major factor driving the acceleration is the expectation of increasing global GDP growth in 2014. The table below shows the International Monetary Fund (IMF) November 2013 forecast for GDP growth. The IMF expects world GDP growth to accelerate from 2.9% in 2013 to 3.6% in 2014. Advanced economies are forecast to grow 2.0%, up from 1.2% in 2013. The key drivers in advanced economies are the U.S., with GDP growth accelerating by one percentage point, and the Euro Area, which should move from a 0.4% decline in 2013 to 1.0% growth in 2014. Developing economies are projected to grow 5.1% in 2014, up from 4.5% in 2013. China is forecast to have slightly lower growth in 2014 than in 2013, but other developing economies such as India, Mexico, Russia, Eastern Europe and Southeast Asia are all expected to see accelerating growth in 2014.

Real GDP Annual Percent Change (IMF, November 2013)

Region | 2013 | 2014
World | 2.9 | 3.6
Advanced Economies | 1.2 | 2.0
U.S. | 1.6 | 2.6
Euro Area | -0.4 | 1.0
Japan | 2.0 | 1.2
Developing Economies | 4.5 | 5.1
China | 7.6 | 7.3

Many factors affect the semiconductor market, but GDP growth is a key element. The components of GDP include business investment and consumer durable goods spending – both major drivers of semiconductors. We at Semiconductor Intelligence have developed a proprietary model of semiconductor market growth based on changes in GDP. The model is illustrated below for 2003 to 2014. The model is generally accurate in predicting the acceleration or deceleration of the semiconductor market. The only exception in the last 10 years is 2012, when the model predicted slight acceleration in semiconductor market growth while the market actually declined. In six of the last ten years the model has been within a couple of percentage points of the actual market change. Based on the IMF forecast of 3.6% GDP growth in 2014, the model predicts semiconductor market growth of 12%. Of course the accuracy of the model is dependent on the accuracy of the GDP forecast.
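The Semiconductor Intelligence model itself is proprietary, but the general idea of relating semiconductor growth to GDP growth can be sketched with a simple least-squares fit. The data pairs and the resulting coefficients below are placeholders chosen only to show the mechanics; they are not Semiconductor Intelligence's data or model.

```python
# Hypothetical sketch of a GDP-driven semiconductor growth model.
# The (GDP growth, semiconductor growth) pairs are illustrative placeholders,
# NOT Semiconductor Intelligence's data, and the fit is not their model.

import numpy as np

history = [(2.5, -2.0), (3.0, 5.0), (3.5, 10.0), (4.0, 20.0), (2.0, -9.0)]
gdp = np.array([g for g, _ in history])
semi = np.array([s for _, s in history])

slope, intercept = np.polyfit(gdp, semi, 1)      # semi_growth ~= slope * gdp_growth + intercept

def predict(gdp_growth):
    return slope * gdp_growth + intercept

print(f"fit: semi_growth ~= {slope:.1f} * gdp_growth {intercept:+.1f}")
print(f"illustrative prediction at 3.6% GDP growth: {predict(3.6):.0f}%")
```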


In November 2012 Semiconductor Intelligence forecast semiconductor market growth of 9% in 2013 and 12% in 2014. In May 2013 we revised this to 6% in 2013 and 15% in 2014. We are continuing to hold to this forecast. As stated earlier, 2013 will probably finish with 5% to 6% growth. Although the model calls for 12% growth in 2014, we believe there is upside potential for GDP and semiconductor market growth.

How does our 15% growth for 2014 compare to other semiconductor market forecasts? The optimists are Objective Analysis and Future Horizons. In June Jim Handy of Objective Analysis projected 2014 growth of over 20%. Malcolm Penn of Future Horizons recently called for 25% growth. Other forecasters expect 2014 growth to be similar to 2013, ranging from 2.9% from IDC to 8% from IC Insights.


More Articles by Bill Jewell…..


The Rosetta Stone of Lithography
by Paul McLellan on 11-20-2013 at 3:14 pm

At major EDA events, CEDA (the IEEE council on EDA, I guess you already know what that bit stands for) hosts a lunch and presentation for attendees and others. This week was ICCAD and the speaker was Lars Liebmann of IBM on The Escalating Design Impact of Resolution-Challenged Lithography. Lars decided to give us a whirlwind tour of the history of recent lithography. I’ll summarize things here and talk about some of the future technologies and challenges that he described in a later blog.

Lars started by presenting what he called the Rosetta Stone of lithography. This summarizes the past challenges survived and the future challenges to come on a single slide. Almost anything you need to know about lithography as an EDA professional is on this one slide. One important thing to realize is that process names are increasingly just names; the critical thing is the minimum pitch that is allowed on a layer. For example, at 22nm the minimum pitch is 80nm. At 10nm the minimum pitch is 48nm.

The fundamental equation of lithography is that the resolution (always talked about as the half-pitch) is k1 * lambda / NA, where

  • k1 is the Rayleigh parameter, which is a measure of the lithography complexity. Yield is affected if it drops below 0.65, and then we need to do something about it (such as OPC or double patterning, but that story is yet to come)
  • NA is the numerical aperture, which is the sine of the largest diffracted angle captured by the lens. It is hard to scale since lens manufacture is hard for NA > 0.5 but, worse, the depth of field scales as NA^-2, making planarity of the wafer more and more critical
  • lambda is the wavelength of light, which for many years has been 193nm.

The actual pitch is twice this number. So if the number is 100 then you can have metal (or whatever) at 100nm width and 100nm space (or two numbers that are close but add up to 200).
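As a rough worked example (the numbers here are typical published values for water-immersion ArF scanners, not figures taken from Lars's talk): with lambda = 193nm, NA of roughly 1.35, and k1 near its practical floor of about 0.28, the half-pitch comes out to roughly 0.28 * 193 / 1.35, or about 40nm, i.e. an 80nm minimum pitch, which is the single-exposure limit that comes up again below.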

In the early days of semiconductor manufacturing, before this Rosetta Stone even begins, we scaled by scaling lambda, the wavelength of the light we used. First we used G-line at 436nm and then in 1984 went to I-line at 365nm. In 1989 we switched to KrF light sources at 248nm and in 2001 to ArF at 193nm. We then expected to go to F2 at 157nm but that never happened; it was too difficult to build effective optics and masks. And by the time we thought about Ar2 at 126nm, that already required full vacuum and reflective optics, so why not go all the way to X-rays (EUV is at a 14nm wavelength). So we have been stuck with 193nm light since 2001, as you can see on the 3rd line down on the Rosetta Stone, the one that only has one entry.

The slide starts at 130nm, which was the first time that we used 193nm light. At that point we could use conventional lithography without doing anything unusual: flash the light through the reticle onto the wafer with no more than rudimentary correction on the mask. Since then we have had to scale using NA and k1 down to 28nm, at which point scaling NA ran into a wall since it was impossible to manufacture the lenses, and we were left with only being able to scale k1.

At 90nm we needed powerful optical proximity correction (OPC), essentially turning the mask into less of a mask and more of a diffraction grating, where the light that got through interfered in just the way we wanted to give us something approaching the pattern we required. We couldn’t make square corners (OPC is a sort of low-pass filter), but we could live with rounded corners and vias that were more circular than square. But OPC couldn’t correct everything, so from an EDA point of view we needed tools to check the design, locate hot spots that OPC would fail to correct, and get the designer to fix them.

From 65nm to 32nm we used off-axis illumination and asymmetric illumination. Without going into all the details, one of the inputs into the equation for what angle to tilt the illumination is the pitch of the patterns on the wafer. So for DRAM this was not such a big issue, but for logic we had to have a lot of rules about the dominant direction on a layer, and increasingly complicated design rules since not all pitches were allowed any more. This was also when immersion lithography was introduced, which got us down to 32nm.

To get to the next process generation, 22nm (80nm pitch), off-axis illumination and immersion lithography were no longer enough. For layers that didn’t only have patterns in one direction, we needed double exposure: one mask for the horizontal patterns and one for the vertical. However, there was still only one photoresist step and one etch step. The rules about prohibited pitches became more complex, leading to unbelievably huge design rule decks.

80nm pitch is the least we can get out of the optical system. To go further we need to go to double patterning (DP), what lithographers call LELE (litho-etch-litho-etch). In principle this should take us down to 40nm but since the two masks used in double patterning are not self-aligned, we need to give up 10nm for those errors and 50nm is the smallest we can get with double patterning. I have written in detail about double patterning on Semiwiki here.
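Decomposing a layer into the two LELE masks is commonly framed as a graph two-coloring problem: polygons that sit closer than the single-exposure spacing limit get an edge in a conflict graph, and a legal decomposition exists only if that graph is bipartite. The sketch below is a toy version of that check; the graph input is hypothetical, and real decomposition tools also handle stitching, which this ignores.

```python
# Toy bipartiteness check for LELE double-patterning decomposition.
# conflict[i] lists the polygons that are too close to polygon i to share a mask.
# Stitching (splitting one polygon across the two masks) is deliberately ignored.

from collections import deque

def decompose_two_masks(conflict):
    color = {}                                   # polygon -> 0 (mask A) or 1 (mask B)
    for start in conflict:
        if start in color:
            continue
        color[start] = 0
        queue = deque([start])
        while queue:
            node = queue.popleft()
            for neighbor in conflict[node]:
                if neighbor not in color:
                    color[neighbor] = 1 - color[node]
                    queue.append(neighbor)
                elif color[neighbor] == color[node]:
                    return None                  # odd cycle: no 2-coloring without a redesign or stitch
    return color

print(decompose_two_masks({0: [1], 1: [0, 2], 2: [1]}))        # chain of 3: decomposable
print(decompose_two_masks({0: [1, 2], 1: [0, 2], 2: [0, 1]}))  # triangle: returns None
```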

There is also triple patterning, TP (called LE3 by the lithographers). This is not used to increase resolution (it isn’t really possible to use it that way) but rather to get better resolution of 2D patterns. But it leads to some big issues in EDA, such as how to communicate complex structures that cannot be 3-colored.

Another type of double patterning is what IBM calls sidewall image transfer and what many people call SADP, for self-aligned double patterning. In this approach the two separate patterns of DP are constructed in a way that removes that 10nm penalty. A mandrel is constructed using a single mask, and it is then used to build sidewalls on each side of the mandrel. The mandrel is removed, leaving everything at the desired pitch. Another wrinkle is that it is no longer possible to build anything other than gratings with no ends; a separate cut mask is required to divide these up. In fact this approach is also used on some critical layers even with LELE DP. If you have ever seen any 20nm layout, that is why it looks so regular: only certain pitches are allowed and the lines have to be continuous and then cut.

Another problem is that the area that we need to inspect for interactions increases. Actually, of course, the area remains the same, but the number of patterns drawn into it increases. So from the point of view of someone sitting in front of a layout editor, more and more polygons need to be considered; in particular, it is no longer just the nearest neighbor but the next one over too. This causes big problems when cells are placed next to each other, since the interaction area stretches deeper into the cell. Further, vias, which used to simply be colored the same as the metal they contacted, can interact over greater distances and so need to be actively colored, leading to more complexity in the routes.

So this is where we are today. First-generation multiple patterning required only a few layers using LELE (DP), and cell-to-cell interactions could be managed through simple rules. As we go to 10nm we will have more layers using LELE, a few layers using LE3 (TP), and a few layers needing SADP, with lots of complex cell-to-cell interactions.

That’s enough for one blog. More next week.

The presentation and a video of the talk should be here on the CEDA website when it eventually gets published.


More articles by Paul McLellan…


Revisiting Andy Grove’s "Only the Paranoid Survive"
by Ed McKernan on 11-19-2013 at 10:00 pm

Over the course of the last fifty years there have been two significant books that have delivered emotional and operational clarity on the rise and fall of high-tech companies and industries: The Innovator’s Dilemma and Only the Paranoid Survive. Amazingly, these two books were released within a year of each other (1996, 1997) and at the height of Andy Grove’s tenure as CEO of Intel. Still today, The Innovator’s Dilemma is more often used as a shorthand catchphrase to describe the seemingly inevitable fall of an established player in a maturing industry, whereas Grove’s phrase is seen as a rallying cry to remain vigilant against competitors’ forays into one’s market. What is most remarkable about Grove’s book is that it really provides a roadmap for companies to avoid the Innovator’s Dilemma trap, and it would be a timely read today as Intel wrestles with its future.

Finished in 1996, “Only the Paranoid Survive” describes inflection points and 10X factors that can impact a company positively or negatively, launching it into high growth at the expense of its competitors or, conversely, into a downturn from which survival is in serious doubt. In fact, if a CEO and his team are not able to capitalize on the inflection point, a business exit is more than likely. Grove uses the case of Intel’s exit from the DRAM business in the mid-1980s and the response to the Pentium bug debacle in 1994 to highlight how the company moved off an inflection point in an upward, positive way.

Intel was founded in 1968 by Robert Noyce and Gordon Moore to develop the DRAM, an integrated circuit, as a low-cost, small-footprint replacement for the core memory used in mainframe computers. Andy Grove was an assistant of Moore’s at Fairchild and was hired on as the first employee. Quite often he is referred to as the third founder, primarily because he became the most recognizable face of Intel as he transformed the company from a commodity memory player into a dominant microprocessor supplier, expanding revenue from roughly $2.7B in 1987 to nearly $21B when he stepped down in 1997.

The company relied mainly on DRAMs during its first decade as sales exploded to $400M by 1978; however, with growth came many competitors, including nimble startups like Mostek and well-capitalized Japanese conglomerates like NEC, Toshiba and Hitachi. The field became overcrowded, and when the dollar soared in value relative to the yen, price-cutting and dumping ensued to the point that Intel’s market share crashed to a low of 1.3% in 1984. It would have been the end for Intel, the innovative Silicon Valley startup with some of the brightest minds in the industry, were it not for early experimental work on a new memory called the EPROM and a 4-bit calculator chip called the microprocessor. All three semiconductor building blocks of the modern computer were invented by 1971, three years into the company’s existence, and yet each would reach its prime importance at a different stage.

The often-recounted story of exiting the DRAM business occurs in mid-1985, a time that Grove describes as coming after a year of wandering aimlessly. He is meeting with then-CEO Gordon Moore in his office, discussing the quandary of remaining in the DRAM business. Emotionally, he and many of Intel’s employees are attached to DRAM as the device they rode to success, and in many ways it is critical, as it was considered the technology driver for new process technologies given its uniformity and high volume. However, each generation of DRAM density invariably had a different leader. To be profitable meant being first to market, and with no assurance of that, only deep pockets could guarantee survival across multiple generations. The Japanese had the advantage of cheap financing and the ability to employ multiple design teams in order to increase their chance of winning the next-generation design.

As Grove looks out the window of his office at the rotating Ferris wheel of the Great America amusement park, he turns to Moore and asks, “If we get kicked out and the board brought in a new CEO, what do you think he would do?” Gordon answers without hesitation, “He would get us out of memories.” And so the two walk out the door and reenter, convinced that they must execute on the plan to get out of the memory business and concentrate on microprocessors and EPROMs, which Intel would grow to dominate over the following years.

For those readers who are not close followers of Intel, there is sometimes an assumption that DRAMs made up the majority of the revenue and that microprocessors were a nascent business. In reality, the company was saved by IBM’s selection of the 8088 processor for its PC, which launched in 1981 and shipped roughly 400 thousand units in its first year, or, in the words of Bill Lowe, VP of the Personal Systems Group, more than the installed base of Big Blue’s mainframes. Also key was an investment by IBM in December 1982 to guarantee Intel had the resources to support the company’s growth and development of new processors.

While Intel would take three years to exit the DRAM business, Grove notes that credit had to be given to the middle managers making resourcing decisions on a day-to-day basis, such as allocating more production wafers to microprocessors than to DRAM, as the critical part of the transition process. Still, plants had to be closed, and with them came mass layoffs. Intel’s future survival and dominance would require a roadmap out of commodity products and into a sole-source technology leadership position.

Andy Grove’s remaking of Intel would continue over the next dozen years, during which time he pushed AMD out of a second-sourcing agreement that had originally been required by IBM; outmaneuvered the RISC processor competitors and Microsoft; subsumed all the ancillary chipset logic of the PC, sans graphics controllers; led a dramatic branding campaign that made Intel a household name recognized worldwide; and kept the PC market split among many rivals, with none attaining even 30% market share. All of these tactics added up to market dominance and a market capitalization of $197B, up from $4B when he took over and more than 50% higher than today’s.

The story of Intel’s dominance from the 386 generation until the end of the century stands in mighty contrast to the missed mobile inflection point of the past ten years and what is likely the next one: its leading-edge process technology, which enables high-margin x86 server and PC processors and could be shared with others in a foundry arrangement. Recounting the history of Intel allows one to view not only the inflection points but also the mistakes and successes along the way. Survival, in Grove’s book, was more than paranoia; it was reinforcing a market trend as well as developing contingency plans, listening to the remote field sales Cassandras, and proactively developing and testing for new markets. Grove was not mistake-free, and some initiatives started under his watch were not snuffed out in time to prevent damage to the company. All this is what makes looking at Intel uniquely interesting.


More articles by Ed McKernan…