
Common Platform: Onward to the Future

by Paul McLellan on 03-14-2012 at 3:03 pm

There were keynotes from all three semiconductor partners in the Common Platform Alliance and, as if to show how common they are, they all talked about the problems that need to be addressed over the next decade and a half, and they all said pretty much the same thing. Gary Patton of IBM went first, so he got to say everything first. It is also clear that IBM does all the early research before technologies reach the point at which cooperative development can begin.

Gary started off by pointing out that technology shifts seem to last about a decade before a new disruptive change is required. Servers were built out of bipolar in the 1980s until heat limits forced the switch to planar CMOS in the 1990s. Various changes such as high-K metal gate and embedded high-performance memory extended this. Around 2010 things went 3D, at both the small and the large scale: small in the switch to FinFETs, and large in the sense of stacking die using TSV technology. These should last until about 2020 or so, when new technology will be needed based on silicon and carbon nanotubes and integrated photonics.

For the time being, IBM is mostly focused on SOI with up to 15 levels of metal for servers. Samsung and GF are mainly focused on bulk for the SoC business, especially mobile.

The challenges in getting down to about 10nm fall into four areas:

  • lithography
  • devices
  • interconnect
  • packaging and subsystem integration

In litho, we are having to switch to double patterning (which, of course, takes twice as long). But lithography is now so complex that it is no longer possible to stick with restrictive design rules (avoid these things and your chip will work); instead there is a switch to prescriptive design rules (these are the only patterns you are allowed).

EUV is coming at some point but there are still major challenges. The biggest, of course, is that you can't build lenses at that wavelength, so there is a switch to reflective optics. One thing I hadn't realized is that this means your mask can't have a pellicle to keep defects out of the optical plane, which is another challenge. That's on top of all the other changes: new photoresist, new mask material, new wavelength of light and so on. EUV is currently at around 5-10 wafers per hour and needs to get to over 100 to be economically viable.

For devices, as probably everyone knows, we will switch to fully depleted FinFETs in the short term. Further out we will want to switch to carbon electronics which has very high carrier mobility. But manufacturing is a big challenge since carbon nanotubes don’t always form properly as semiconductors and sometimes simply form as metallic conductors.

Interconnect is a major challenge since nothing good comes out of scaling: resistance goes up, capacitance goes up, wire reliability gets worse, and insulator reliability gets worse. One promising area is integrated photonics, where multiple signals (wavelengths) can be propagated down the same waveguide and then demultiplexed out. Otherwise it is basic incremental process improvement, slightly better materials and so forth.

The challenge in packaging is firstly power and thermal, getting the heat out. But there are also issues as die get more fragile and packages get less rigid leading to reliability issues. And all the time it is necessary to boost the bandwidth between the chip and its environment using finer pitches and improved contacts. One thing I hadn’t realized is that the switch to lead-free means stiffer contact between the chip and the package, again a reliability issue.

At the lunch, GF said that 32nm is in production with real product in the line (IBM confirmed this saying they are running 32nm SOI at GF). The yield issues seem to be solved with yields doubling in a quarter. 28nm is in early production with real products in the line. 20nm risk production for LPE will be this year and for LPM next year.


Timing Closure for ECOs in your SOC Design

by Daniel Payne on 03-14-2012 at 1:07 pm

I decided to attend a webinar today hosted by Synopsys, “Streamline Your PrimeTime ECO Flow For Fastest Setup, Hold and Timing DRC Closure.” The format was to present the slides first and hold questions until the end. Plenty of time was spent on questions, which made this webinar different from most others I’ve attended. The on-demand webinar is here.

David Guinther announced the webinar and noted the DATE conference in Germany had Synopsys users presenting. The next webinar in the series is called “Faster Timing Signoff” scheduled for April. Continue reading “Timing Closure for ECOs in your SOC Design”


SICAS is dead (and WSTS isn’t feeling too good)

by Bill Jewell on 03-13-2012 at 8:16 pm

The SICAS (Semiconductor Industry Capacity Statistics) program has been discontinued after the release of the 4Q 2011 data, available through the SIA at http://www.sia-online.org/industry-statistics/semiconductor-capacity-utilization-sicas-reports/

The latest report stated: “Due to significant changes in the SICAS program participation base in 2011, the quarterly SICAS capacity and utilization report will be discontinued, effective Quarter 1 2012.” SICAS lost the Taiwanese companies Nanya Technology, Taiwan Semiconductor Manufacturing Company Ltd. (TSMC) and United Microelectronics Corporation (UMC) beginning in 2Q 2011. SICAS members likely questioned the value of continued participation without the two largest wafer foundry companies.

The end of SICAS is a major disappointment. The semiconductor industry has lost the definitive source of capacity and utilization data, a key component in determining the current and near term industry conditions. It is especially disappointing to me since I served on the founding executive committee of SICAS in 1995.

The death of SICAS follows the withdrawal of Intel and Advanced Micro Devices (AMD) from World Semiconductor Trade Statistics (WSTS) as originally reported in the Wall Street Journal: http://www.djnewsplus.com/rssarticle/SB133045771410082239.html
Without Intel and AMD, it will be extremely difficult for WSTS to report accurate statistics for microprocessors. Intel and AMD account for over 90% of the microprocessor market and microprocessors account for about 15% of the semiconductor market. WSTS may need to drop microprocessors from its product coverage, which would result in WSTS no longer being a reliable source of data on the overall semiconductor market.
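A quick back-of-the-envelope check of how big that hole would be (the share figures are from the paragraph above; the sketch is mine):

```python
# Rough estimate of the slice of the total semiconductor market that
# WSTS would stop covering without Intel and AMD, using the article's
# share figures.
intel_amd_mpu_share = 0.90   # Intel + AMD: over 90% of microprocessors
mpu_semi_share = 0.15        # microprocessors: about 15% of semiconductors

uncovered = intel_amd_mpu_share * mpu_semi_share
print(f"Share of the total market going dark: at least {uncovered:.1%}")
```

So the two companies alone represent roughly 13.5% (or more) of the overall market, which is why dropping microprocessors would undermine WSTS's total-market numbers.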

As with the loss of SICAS, the loss of Intel and AMD from WSTS is a significant disappointment. I was Texas Instruments’ representative to WSTS for 14 years and served a term as WSTS chairperson. However, I believe WSTS will adapt and survive. The organization provides detail on numerous product and application markets, which provides vital information to member companies and industry analysts.

Final SICAS data

The chart below shows SICAS data for total IC capacity in thousands of eight-inch equivalent wafers per week. Capacity for TSMC and UMC was added to the SICAS capacity beginning with 2Q 2011 for comparison with prior quarters. 4Q 2011 IC capacity (including TSMC and UMC) was 2,205 thousand wafers, up 2.5% from 2,151 thousand in 3Q 2011 and the seventh consecutive quarterly increase. IC capacity in 4Q 2011 was just 1% below the record capacity of 2,223 thousand wafers in 3Q 2008 and was up 14% from the cyclical low of 1,927 thousand wafers in 3Q 2009.
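As a sanity check, the percentages quoted above can be reproduced from the wafer counts (my arithmetic, using the figures in the paragraph):

```python
# SICAS total IC capacity, thousands of 8-inch-equivalent wafers per week.
q4_2011 = 2205
q3_2011 = 2151
record_q3_2008 = 2223
low_q3_2009 = 1927

qoq_growth = (q4_2011 - q3_2011) / q3_2011                    # about 2.5%
gap_to_record = (record_q3_2008 - q4_2011) / record_q3_2008   # about 0.8%, the "just 1%" in the text
rise_from_low = (q4_2011 - low_q3_2009) / low_q3_2009         # about 14.4%
print(f"{qoq_growth:.1%}, {gap_to_record:.1%}, {rise_from_low:.1%}")
```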

The trend for MOS IC capacity utilization is shown in the chart below. The SICAS data on capacity utilization for MOS ICs excluding foundry wafers was used through 1Q 2011. This data series is fairly comparable to the 2Q-4Q 2011 SICAS total MOS IC capacity utilization, which does not include TSMC and UMC. For the current cycle, utilization peaked at 94.9% in 2Q 2010. 4Q 2011 utilization dropped to 88.9%, the lowest level since 88.8% in 4Q 2009. The 4Q 2011 drop in utilization was expected due to the weak semiconductor market, primarily caused by floods in Thailand disrupting HDD production and by economic weakness in Europe.

Capacity utilization will likely recover by 2Q 2012. Digitimes says industry sources expect TSMC’s 2Q 2012 utilization to be around 95% due to strong orders. Capacity growth in 1Q 2012 should be relatively flat. Combined data on semiconductor manufacturing equipment bookings and billings from SEMI and SEAJ shows billings have declined in each of the last three quarters, with 4Q 2011 down 24% from 1Q 2011. Bookings began to pick up in 4Q 2011, indicating capacity growth should resume by the second half of 2012. Year 2011 billings were $34 billion, up 8.8% from 2010. SEMI forecasts 18% growth in fab equipment spending in 2013.


CDNLive: the Keynotes

by Paul McLellan on 03-13-2012 at 2:24 pm

There were three keynotes at CDNLive this morning, and one theme ran through them: collaboration. In fact, there was one specific instance of collaboration that all three speakers mentioned: taping out an ARM Cortex-A15 in TSMC 20nm technology using a Cadence tool flow.

Lip-Bu Tan, Cadence’s CEO, went first. He had some numbers showing that semiconductors and electronics should continue to grow at twice the rate of world GDP, and the GSA semiconductor index points up for the next couple of quarters. Underlying this growth is that increasing integration leads to many more devices. Mainframes shipped perhaps 1M units, PCs 100M units, but mobile internet devices (smartphones, iPads) are in the 10B unit range.

Rick Cassidy of TSMC was next up. He had an interesting retrospective on cost. From 1970 to today, transistor cost has fallen by a factor of 10⁸ and microprocessor cost per transistor per cycle by a factor of 10¹¹. An example Rick uses with MBA students (who know nothing about semiconductors) is that if Manhattan were a chip in 1962, by today it would have shrunk so much that it fits on an iPod screen. And if we continue on the same path for another 50 years, the entire world will fit on that screen. Of course, driving this is the scale of fabs like those TSMC is building: 180,000 12″ wafers per month. Last year, TSMC shipped 13.2M 8″-equivalent wafers. That’s a lot of silicon.
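Those cost factors line up with classic Moore's-law scaling. A quick sketch (my arithmetic, not Rick's slides): a 10⁸ reduction over roughly four decades implies transistor cost halving about every year and a half.

```python
import math

# How often transistor cost must halve for it to fall by a factor of
# 10^8 between 1970 and 2012.
cost_reduction = 1e8
years = 2012 - 1970                      # 42 years
halvings = math.log2(cost_reduction)     # about 26.6 halvings
halving_period = years / halvings        # about 1.6 years per halving
```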

Tom Lantzsch of ARM started by asking everyone whether they were more likely to return home for a forgotten wallet or a forgotten phone, and most of us figured we could more easily do without our wallet. His interesting statistic of the day is that the cloud needs approximately one server for every 600 mobile phones (and one for every 120 or so tablets). ARM is increasingly moving into the home (smart TV) and the car (infotainment). Underlying everything ARM does is energy efficiency (aka low power). This is why ARM is moving into servers, a move that has many commentators perplexed. ARM’s view is that servers will mimic what has happened in smartphone SoCs, with a general-purpose CPU (Intel) being replaced by multiple smaller CPUs and specialized functions such as video decode (no prizes for guessing which CPUs Tom expects those to be). Developing countries simply don’t have the power to build a datacenter the way they are currently done. Instead, he expects servers with a power budget of 5W, such as Calxeda’s, simplifying not just power but also cabling and physical size. Servers are likely to be specialized since, for example, Netflix doesn’t need general-purpose servers, just specialized video pumps.

Tom’s equivalent of TSMC’s wafer statistics was that ARM now has 275+ silicon partners, 900+ ecosystem partners and 30B+ ARM-based chips shipped: 50+ mobile phone application processors, 100+ phone designs, 100+ tablet designs (where are they all?) and 1B+ applications. That’s a lot of compute power.


The Coming Gamer Tablet from…… Apple!

by Ed McKernan on 03-13-2012 at 11:25 am

After the introduction of the new iPad, Apple has placed itself just two short steps away from dominating the computer market, including PCs. One step, which is widely reported, is a smaller iPad with an 8” screen that aims for a $299 price point; Amazon will take the rest of the market under $299. The second step is purely speculation on my part but is within easy reach of the current hardware, and it will result in re-alignments in the semiconductor industry to the benefit of Apple and the detriment of everyone else. A leveling will occur that forces some to return to their areas of expertise, including a software company based in Redmond. The strategy is multi-faceted and thus will require an additional blog to fully outline.

To begin with, we must recognize that the new iPad will not only reinforce but expand Apple’s lead in the tablet market. Tim Bajarin in his latest tech opinions column calls it revolutionary because of the impact the high-resolution screen will have on several industries. It is a great article that I highly recommend to anyone who follows the computer industry. After reading it, I sensed that Apple will not want to stop with the current iPad but will extend it further to make it even more competitive with ultrabooks.

As a long-time processor marketing guy, I see an opportunity for Apple to go one step further: leverage the Retina display with a high-performance graphics solution connected to its future A6 processor. For an additional $20-$25 of BOM cost, Apple can add an AMD or nVidia mobile graphics chip that will push the iPad into a similar performance window as the ultrabooks to be powered by Intel’s Ivy Bridge ULV parts. That’s because in this tablet/ultrabook market the processor is not required to be high performance, as the workload shifts to the graphics unit; hence the decision by Intel to spend the new transistor budget on improving the Ivy Bridge graphics unit.

The interesting dynamic that will play out in 2012 is that Intel’s Ivy Bridge ULV part will probably not drop below $225 for PC OEMs. Apple therefore will have the opportunity to introduce a $899-$999 Gamer iPad with 4G LTE that has a $175 processor-plus-graphics cost advantage. Furthermore, I can envision a Gamer iPad with an 11” screen (same as the MacBook Air) that, with an optional keyboard, can find its way into the corporate market as an alternative to a Windows 8 machine running on an ARM-based nVidia Tegra 3 or an Intel x86-based Atom processor. Apple then will leave it up to corporations to decide if they need an Intel Ivy Bridge based MacBook Air or an A6 ARM-based iPad with PC-level graphics, selling at a discount but delivering higher margins to Apple.
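Working backwards through the author's (admittedly speculative) numbers: a $175 advantage against a $225 Ivy Bridge ULV part implies Apple's combined A6-plus-GPU silicon cost is about $50, which is consistent with the $20-$25 discrete-GPU adder plus a $25-$30 applications processor.

```python
# Implied silicon costs behind the claimed $175 advantage.
# All dollar figures are the blog's speculation, not published prices.
ivy_bridge_ulv_price = 225        # assumed Intel price to PC OEMs
claimed_advantage = 175           # processor + graphics cost advantage
gpu_bom_adder = (20, 25)          # added mobile GPU BOM cost range

apple_silicon_total = ivy_bridge_ulv_price - claimed_advantage   # $50
implied_a6_cost = tuple(apple_silicon_total - g for g in gpu_bom_adder)  # ($30, $25)
```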

In my previous blog, I mentioned how Apple’s Phil Schiller, Senior VP of Worldwide Marketing, stated that the new A5X processor with quad-core graphics was 4 times faster than nVidia’s Tegra 3, and that nVidia resisted responding aggressively to the claims because they had other business at risk with Apple. The de-positioning of the Tegra 3 was a deliberate attempt by Apple to tell nVidia that they should abandon their tablet and smartphone attempts and return to their core, which is developing high-performance PC and supercomputing graphics solutions. (In addition, Apple has effectively tagged competitive tablets as less-than devices.) With a Gamer Tablet, Apple will in effect force nVidia to choose between competing with them or cooperating, because the alternative is to let AMD take the business.

In the next blog we can look at Apple’s likely impact on Intel and Qualcomm in the coming year.

Full Disclosure: I am Long AAPL, INTC, QCOM and ALTR


More Growth in EDA

by Daniel Payne on 03-12-2012 at 6:53 pm

I love to read good news about growth in EDA, especially when our industry has seen single-digit growth for several years now. What I read on March 8th from ClioSoft was a 53% increase in bookings for 2011; now that’s what I call growth.

ClioSoft provides Hardware Configuration Management (HCM) software to EDA users, typically those doing transistor-level IC design with both schematics and layout. Last year I was able to speak with several users of ClioSoft tools to find out first-hand what their experience was in adopting and using HCM in an IC design flow.

I blogged recently about “What Just Changed on my Transistor-Level Schematic” and it’s getting plenty of discussion, both here at SemiWiki and on LinkedIn, with some 1,261 page views at SemiWiki in under a month. Engineers are interested in how to manage their IC design projects better, especially as their teams grow and become geographically separated.

Also Read

What Changed On My Transistor-Level Schematic?

Manage Your Cadence Virtuoso Libraries, PDKs & Design IPs (Webinar)

EDA Tool Flow at MoSys Plus Design Data Management


Virtual Prototype your SoC including FlexNoC

by Eric Esteve on 03-12-2012 at 1:10 pm

Designing larger-than-ever SoCs, integrating multiple ARM Cortex-A15 and Cortex-A9 processor cores as well as complex IP functions like HDMI, DDR3 memory, Ethernet, SATA or PCI Express controllers, pushes designers to search for better price, performance and area tradeoffs, and the SoC interconnect plays a vital role in serving this need. Using an advanced Network-on-Chip (NoC) like Arteris FlexNoC is an efficient way to optimize the SoC and get the best possible return on the high investment tied to state-of-the-art SoC development, by launching the IC in the right market window and benefiting from a time-to-market advantage over the competition. Architects also want to virtually prototype their design, since that is a good way to run these price, performance and area tradeoffs at the early stages of the design; they can do this with Carbon’s SoCDesigner Plus and prove their design assumptions before committing to implementation.

The joint Carbon/Arteris solution offers design teams a way to easily create and import accurate Arteris FlexNoC interconnect models for Carbon SoCDesigner Plus: the new Carbon/Arteris flow allows Carbon’s SoCDesigner Plus users to use Arteris FlexNoC to configure their NoC interconnect fabric IP and then upload the configuration to Carbon IP Exchange. The web portal then creates a 100% accurate virtual model of the configuration and makes it available for download and use in SoCDesigner Plus. “We see strong demand for models of Arteris’ NoC interconnect IP,” states Bill Neifert, chief technology officer at Carbon Design Systems®, the leading supplier of virtual platform and secure model solutions. “Our partnership with Arteris enables engineers to make architectural decisions and design tradeoffs based upon a 100%-accurate virtual representation.”

“Simulation with virtual models of our NoC interconnect IP are the best way to make system-on-chip architectural optimizations and tradeoffs,” comments Kurt Shuler, Arteris’ vice president of marketing. “By partnering with Carbon to make 100% accurate models of our IP available on Carbon IP Exchange, we are empowering design teams to utilize virtual models earlier in the design process.”

On the Carbon IP Exchange web portal, we can see that the partnership enables a secure flow: the FlexNoC model is compiled directly from Arteris’s register transfer level (RTL) code and maintains 100% functional accuracy. The model integrates directly with Carbon’s SoCDesigner Plus virtual platform. There is no “interpretation” and no modeling step between the RTL source code from Arteris and its virtual prototyping use within Carbon SoCDesigner Plus.

To learn more about the availability of Arteris’ FlexNoC IP on the Carbon Design Systems IP Exchange web portal, just go here.

By Eric Esteve from IPNEST


Book Review – iWoz

by Daniel Payne on 03-12-2012 at 11:01 am

I bought my first personal computer in 1979: a Radio Shack TRS-80 Model I with just 16KB of RAM, a black-and-white monitor and a cassette tape drive for storage. I chose the Radio Shack over the Apple II because it cost less, but I was always interested in Apple products and the engineers behind them from the early days. It was a pure delight to read Steve Wozniak’s autobiography, iWoz, and the Kindle version costs only $8.61.

Wozniak writes in a style that sounds like he is having a conversation with you, face to face, very readable and personal. In the book he covers his family, upbringing, school days, pranks and how his father’s job in engineering attracted him to also pursue an engineering career.

A fascination with science, math and building electronics projects set the stage for Steve Wozniak to meet the younger Steve Jobs and together they start building and selling illegal phone devices to make toll-free calls anywhere in the world. Wozniak does stints in college at Colorado and California, then lands a dream job at HP where he loves working on calculator designs. Part-time outside of HP he designs and builds the Apple I and Jobs sells the first 100 to a store for $50,000 to start their fledgling company.

In those early days an engineer like Wozniak would first design the computer system on paper, then breadboard the design with wire wrap, plug in chips, then start debugging it by using oscilloscopes or simply connecting the output to a TV. Very little software existed to simulate an actual design, so it was build first, then debug in hardware.

Wozniak hesitantly leaves HP to found Apple and the folks at HP simply let him go. They forgot to copyright the Apple I and soon there was an exact clone, so for the Apple II they didn’t repeat that mistake. Engineering won out over marketing (Jobs) in the Apple II, as Wozniak insisted on seven expansion slots, not just a measly two. It was interesting to read about how DRAM chips from Intel replaced SRAM in the Apple computer, how they chose their CPU, writing a dialect of BASIC, interfacing with the first 5 1/4 inch floppy drives, creating boot ROMs, etc. If you are a computer geek, then this story is for you.

The rate of growth of Apple and how its IPO made hundreds of millionaires is the stuff that drives risk takers of all sizes in Silicon Valley and around the world to build their own companies and try to change the world.

After the IPO at Apple, Wozniak survived a plane crash, recovered, and then moved in a different direction: finishing his fourth year of college under an assumed name, sponsoring rock concerts, and promoting peace between the US and the Soviet Union.

Fame and fortune took a personal toll as Wozniak got divorced and then re-married. He talks about helping out his local school district by teaching a computer class, then venturing into charities and even funding the Tech Museum in San Jose.

In the 1980s I can remember visiting Apple Computer to sell them EDA software at a time when they were still designing their own graphics chips. When Steve Jobs saw the GUI on our EDA tool he said, “Well, that’s just brain dead.” Of course, he was right, and now that EDA company is history.

If you are curious about the history of the Personal Computer and want to hear it straight from the engineer of the Apple I and Apple II, then get this book and enjoy the journey.


My Design Automation and Test in Europe Conference Agenda (DATE 2012)

by Daniel Nenni on 03-11-2012 at 7:00 pm


As this blog is being posted I’m on my way to Dresden for the 2012 Design Automation and Test in Europe Conference. DATE used to bounce between Munich and Paris; I have attended many times, but not in the past couple of years. No excuse really, just busy with other things.
DATE 2012 Highlights in Dresden include E-Mobility and More-than-Moore in the Conference and Ecosystem Exhibition. For me DATE 2012 holds three exciting things:
Continue reading “My Design Automation and Test in Europe Conference Agenda (DATE 2012)”


Power Issues for Chip and Board: webinar

by Paul McLellan on 03-10-2012 at 4:24 pm

Last month Brian Bailey at EDN moderated an interesting webinar about power issues. Unusually, it combined two different domains: doing things by modeling and actually taking measurements off real chips and boards. The two participants were Arvind Shanmugavel from the Apache subsidiary of Ansys, and Randy White from Tektronix.

Nobody needs me to tell them that power is a major issue for chip and board design. This webinar wasn’t so much about how to reduce power, important though that is, but more about how to deliver power and analyze what is being delivered, and then take measurements to see what is really happening. With low noise margins but a transistor threshold voltage that cannot change much, issues with the power supply such as voltage droop will cause systems to fail.

One of the biggest areas is active-state power management, whereby the software works with the underlying chip to control things like voltage islands and powered-down blocks. If a block doesn’t need to produce its result fast, why bother to run it in high-speed/high-power mode? The challenge is that the transitions, changing the voltage of a block or powering it on or off, produce major transients in the power network. The most extreme is powering up a block that was powered down. Done naively, the inrush current will cause a voltage drop across the whole chip, so the received wisdom is to power the block up slowly (which, of course, means you need to know far enough in advance that you’ll want it) and only connect the main power transistors once the block is up to the supply voltage (so there is no inrush current).
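A toy calculation shows why the slow ramp matters (my illustrative numbers, not from the webinar): modeling the powered-down block as a capacitance that must charge up to the supply, the peak current scales as I = C * dV/dt, so stretching the ramp by three orders of magnitude cuts the inrush by the same factor.

```python
# Toy inrush-current model for powering up a gated block.
# I = C * dV/dt: the faster the supply ramps, the larger the current.
block_capacitance = 10e-9   # assumed block + decap capacitance: 10 nF
supply_voltage = 1.0        # 1 V supply

hard_ramp = 1e-9            # slamming the power gates on: ~1 ns ramp
staged_ramp = 1e-6          # weak pre-charge transistors first: ~1 us ramp

i_hard = block_capacitance * supply_voltage / hard_ramp      # 10 A spike
i_staged = block_capacitance * supply_voltage / staged_ramp  # 10 mA, benign
```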

For me the most interesting parts were Randy’s comments about measuring everything since it’s not an area I know lots about (I’m a software guy by background). At GHz performance levels, everything affects everything. Every probe has its own inductance, capacitance, resistance and so affects the measurement. I had no idea that Tektronix provides Spice models of all their probes so that you can work out what you expect to see on the scope given which probes you are using, since it differs from what the simulation says the actual signal value will be.

Arvind had an interesting example showing how the thermal map of a die varies depending on the package, and not just due to thermal aspects of the package but to transient power supply effects too.

We used to have a lot of margin on power supplies, with as much as 25% tolerance. Now, especially in battery-powered mobile devices, it can be as low as 5%. This requires both an accurate power model and accurate ways of taking measurements off the actual devices without, Heisenberg-style, perturbing the system too much.

The webinar is here.