Semiconductor market to grow 3% in 2011, 9% in 2012
by Bill Jewell on 11-16-2011 at 9:00 pm

The outlook for the global semiconductor market in 2011 has deteriorated from earlier in the year due to multiple factors, including slower than expected economic growth in the U.S., the debt crises in Europe, and the Japan earthquake and tsunami. Recent forecasts have narrowed to a range of -1.4% to 3.5%. In the first half of 2011, forecasts ranged from 5% to 10%. 2012 growth is expected to improve over 2011, with a range of 3.4% to 10.4%.

WSTS has released data on the semiconductor market through 3Q 2011. Thus, year 2011 growth will be determined by growth in 4Q 2011. The mid-points of key company guidance for 4Q 2011 revenue growth vary widely. Microprocessor companies Intel and AMD expect growth of about 3%. Qualcomm's midpoint is 10%. Texas Instruments and STMicroelectronics, which are largely analog, expect declines of 2% and 9%, respectively. The major Japanese semiconductor companies are continuing to bounce back from the March earthquake and tsunami. 3Q11 revenue growth over 2Q11 (in yen) was 21% for Toshiba's semiconductor business and 17% for Renesas Electronics. Based on revenue forecasts for the fiscal year ending March 2012, and assuming the same quarter-over-quarter growth rates for 4Q11 and 1Q12, Toshiba's 4Q11 semiconductor revenue growth is estimated at 24% and Renesas's at 4%.
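
To make the arithmetic behind those estimates concrete, here is a minimal Python sketch of the calculation implied above: given the two quarters already booked in a fiscal year running April 2011 through March 2012 and the full-year revenue forecast, solve for the single sequential growth rate assumed for both remaining quarters. The figures in the example are hypothetical placeholders, not company data.

    # Sketch: back out the implied sequential growth rate g for the two remaining
    # quarters of a fiscal year (Apr 2011 - Mar 2012), assuming both grow at the
    # same rate:  q2 + q3 + q3*(1+g) + q3*(1+g)**2 = fy_forecast
    import math

    def implied_quarterly_growth(q2, q3, fy_forecast):
        remaining = fy_forecast - q2 - q3        # revenue still to be booked
        # quadratic in x = 1 + g:  q3*x**2 + q3*x - remaining = 0
        disc = q3 ** 2 + 4 * q3 * remaining
        x = (-q3 + math.sqrt(disc)) / (2 * q3)   # take the positive root
        return x - 1

    # hypothetical example: two quarters booked at 100 and 110, fiscal year forecast 470
    print(f"implied quarterly growth: {implied_quarterly_growth(100, 110, 470):.1%}")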

We at Semiconductor Intelligence have developed three scenarios for 4Q11 and year 2011 semiconductor revenue growth which we believe encompass the likely alternatives. As shown in the table below, the lowest case is no growth in 4Q11, leading to 2.3% annual growth. The middle case results in 3% growth in 2011 and the high case results in 3.5% growth. Our official forecast is 3% growth in 2011 and 9% in 2012, one percentage point lower in each year than our August forecast.

Why are our forecasts for 2011 and 2012 semiconductor market growth at the high end compared to other forecasters? The answer is that demand for electronics remains healthy despite global economic problems. The chart below shows worldwide unit shipment change versus a year ago, based on data from IDC and Strategy Analytics. Mobile phone growth has moderated to 11% to 12% in 2Q11 and 3Q11 after a strong recovery from the recession. PC growth has slowed significantly, with a decline in 1Q11 and growth in the 3% to 4% range for the last two quarters. Some of the slowdown in PC growth can be attributed to the rapid rise of media tablets, such as the Apple iPad. For many users, a media tablet is a replacement for a PC. Adding media tablet units and PC units results in higher growth for the combination, in the 14% to 17% range for the last two quarters. Thus growth in mobile phones and in PCs plus media tablets has been double digit in 2Q11 and 3Q11, similar to growth rates in the first half of 2008 prior to the global financial crisis.

Semiconductor Intelligence, LLC can perform a variety of services to provide you and your company with the intelligence needed to compete in the highly volatile environments of the semiconductor and electronics markets.


The Power of the Platform!
by Daniel Nenni on 11-16-2011 at 10:06 am

The Nintendo Wii is one of the most successful gaming platforms, with the most diverse set of games — from fun games that can be enjoyed by the whole family to fitness programs that can be used by adults. Nintendo beat the dominant Sony PlayStation and Microsoft Xbox by thinking outside the box and creating a platform that was really easy to use and powerful enough to support a wide range of 'games'.

When playing a game on the Wii with my son, I had the thought that probably the first really successful gaming platform was a deck of playing cards. (In retrospect, I should probably not have verbalized my thought, because it brought teenage laughter and derision, unwarranted in my opinion.) Unlike board games, the deck of playing cards is a true platform that has spawned thousands of games played by young and old in every corner of the world. While games like Trivial Pursuit or Sudoku may enjoy intense popularity for a period of time, the interest will eventually fade. Playing cards, however, will endure, because new games and variations can be invented by anyone.

The power of platforms has most recently been demonstrated by Apple and Google. Nokia and BlackBerry created devices that did a few things very well and were wildly successful for a period of time. Apple and Google trumped them by creating powerful platforms in iOS and Android, thus unleashing the creativity of millions of developers to create hundreds of thousands of applications that no single company could even imagine, much less develop.

Extensible platforms not only allow third parties to expand and enhance your product offering, they also give customers the power to customize and control their own user experiences. This is particularly true in technical and complex areas, such as EDA, where each design team has unique requirements and a cookie-cutter application would not be feasible. An extensible platform adds new functionality from third parties to your product, making it more appealing to new customers. It also makes your product much more 'sticky': once customers customize it and integrate it into their flow, it is that much less likely that they would replace it with a competing product without a REALLY good reason.

EDA vendors have long realized the power of the platform and the need for extensibility. One very successful platform that I am quite familiar with is Cadence Virtuoso. With the powerful Skill extension language Cadence has created a strong ecosystem of third party products that integrate and enrich the Virtuoso flow beyond anything that Cadence could invent or implement on its own. The millions of lines of Skill code written by customers and partners virtually ensures the dominance of Virtuoso for years to come despite strong new competition from several major EDA vendors.

This brings me to DAC 2011. Some of our competitors were talking about a new application — IP management. It seems that someone was using long-legged, provocatively clad women to try to build buzz about IP management. We know that several of our customers are managing and sharing IP blocks and PDKs across multiple projects using the ClioSoft SOS Enterprise Edition hardware configuration management (HCM) platform. Did we miss the marketing boat?

What do you really need to manage your IP? Broadly speaking, you need to be able to manage and version control your IP, allow users across the enterprise to browse and search for IP based on functionality and attributes such as technology or foundry (preferably with a web browser), view datasheets or compare IP to help select the right one, re-use and track usage of the IP, get notifications about new revisions, easily upgrade if necessary, and track and report issues – all while making sure that access is controlled.
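
As a purely illustrative sketch of the requirements above (and emphatically not ClioSoft's actual API), here is a minimal Python data model for versioned, attribute-searchable IP with usage tracking; all class and field names are hypothetical.

    from dataclasses import dataclass, field

    @dataclass
    class IPBlock:
        name: str
        version: str
        attributes: dict                               # e.g. {"foundry": "TSMC", "node": "28nm"}
        datasheet_url: str = ""
        used_by: list = field(default_factory=list)    # projects reusing this IP

    class IPCatalog:
        def __init__(self):
            self._blocks = []

        def register(self, block):
            self._blocks.append(block)

        def search(self, **attrs):
            # return IP blocks whose attributes match all given key/value pairs
            return [b for b in self._blocks
                    if all(b.attributes.get(k) == v for k, v in attrs.items())]

        def record_usage(self, name, version, project):
            for b in self._blocks:
                if b.name == name and b.version == version:
                    b.used_by.append(project)

    catalog = IPCatalog()
    catalog.register(IPBlock("pll_core", "1.2", {"foundry": "TSMC", "node": "28nm"}))
    print(catalog.search(node="28nm"))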

ClioSoft’s SOS platform already has all the underlying functionality and it has been in use for years:

  • Reference, reuse, track, and update IP
  • Customize and manage any attributes
  • Web interface to browse and search for IP based on attributes
  • Integration with issue tracking systems
  • Comprehensive access controls
  • Patented Universal DM Adaptor to manage composite objects

One customer who dropped by our booth at DAC told us that he is already managing IP on the SOS platform and did not understand what all the hype was about. He came by to request a few changes to SOS to make it even better. Since ClioSoft owns the SOS HCM platform and does not rely on third party software configuration management systems to do the heavy lifting, we were able to add the suggested enhancements quickly.

We realized that what we were missing was an 'app' to better demonstrate how to use the SOS platform to manage IP. By defining the right set of attributes and adding some custom GUI elements, we were able to build our own 'app' in just a few days to demonstrate how the production-proven SOS platform can be used to manage and reuse IP across the enterprise. Since every design team has a different interpretation of what IP is and how it should be managed, an open application built on the SOS platform allows customers to easily customize the interface, attributes, and flow to manage and reuse IP. Customers are not forced to adopt a methodology built into an IP management application. Instead, the SOS platform is easily adapted to meet customers' needs.

The power of the platform – a robust and custom IP management solution in just days. Request an SOS platform demonstration HERE.

by Srinath Anantharaman, founder and CEO of ClioSoft

Also Read

Analog IP Design at Moortec

Hardware Configuration Management approach awarded a Patent

Transistor Level IC Design?


Who is winning the cell-phone wars?
by Paul McLellan on 11-15-2011 at 2:17 pm

Answer: it depends how you count. Units, market share, revenue, profit.

According to Gartner, Android has doubled its market share and now runs on just over half of the world's smartphones. Android handset sales actually tripled during the year, with 61 million sold last quarter, not that far off a million a day.

iPhone sales increased by 4 million but market share dropped. This is somewhat to be expected since iPhone 4S was delayed (the story I’ve heard is that they had power problems with the A5 chip inside the smaller enclosure of a phone versus an iPad, and that delayed the release from early summer). But everyone knew it was coming which meant that Apple partially Osborned themselves as people waited for the new model. Nonetheless, Apple is still making almost all the profit in the smartphone market. The direct profit, that is. Google’s monetization is different and harder to measure. But the Android phones in general are much lower margin than iPhone for their manufacturers.

Symbian is being phased out and so its market share is cratering. Nokia is switching to WP7 but hasn’t really got its act together there yet. They have their first products just coming out “real soon now” and it remains to be seen whether, as Gartner predicts, Nokia+WP7 makes a real go of it in the second half of next year. I remain somewhat skeptical but I think that it is interesting that Microsoft is regarded by the carriers as a safe third choice against the Apple/Google behemoth. That’s not how the PC manufacturers thought of Microsoft. I remain skeptical about Nokia, under attack at the low end by cheap Chinese phones and at the high end by iPhone/Android. As Nokia’s then CEO said at iPhone’s launch: “it’s only a handset announcement.” Yes, like a diamond is only carbon.

RIM (BlackBerry) continued to lose market share and, at this point, I have to say that I think it is doomed. It is now down to just 10% of the US market. And it is even more doomed in tablets, where apparently the second version of their tablet still doesn't have email unless connected to a BlackBerry.

If you look at units, Nokia is still #1 but will soon be overtaken by Samsung. The total market at 440MU/qtr is not far off 2 billion handsets per year. With the world population at 7 billion that's an amazing number. Perhaps more amazing: Rovio just announced 500M downloads of Angry Birds (mostly at 99c), which means one in 14 people in the world has downloaded it (although some people have it on more than one platform). Apparently 400,000,000,000 birds have been launched.

According to another analyst firm, Canaccord, Apple now has four percent of the cell-phone market by unit volume but over half the profits. In Q2 I read one report that Apple actually had two-thirds of the profit, but some companies (Nokia, I'm looking at you) posted a loss, which may distort things a little; Nokia eked out a profit last quarter. When Apple launched the iPhone, Nokia made 67% of the profits; now it is down to 4%.


Formally verifying protocols
by Paul McLellan on 11-15-2011 at 1:19 pm

I attended much of the Jasper users' group meeting a week ago. There were several interesting presentations that I can't blog about because companies are shy, and some that would only be of interest if you were a user of Jasper's products on a daily basis.

But for me the most interesting presentations were several on an area where I didn't realize this sort of formal verification was being used. The big driver is that modern multi-core processors now require much more sophisticated cache control than before. ARM in particular has created some quite sophisticated protocols under the AMBA4 umbrella, which they announced at DAC.

In the old days, cache management was largely done in software, invalidating large parts of the cache to ensure no stale data could get accessed, and forcing the cache to gradually be reloaded from main memory. There are several reasons why this is no longer appropriate. Caches have gotten very large and the penalty for off-chip access back to main memory is enormous. Large amounts of data flowing through a slow interface are bad news.

As a historical note, the first cache memory I came across was on the Cambridge University Titan computer. It had 32 words of memory accessed using the bottom 7 bits as the key and was only used for instructions (not data). This architecture sounds too small to be useful, but in fact it ensures that any loop of fewer than 32 instructions runs out of the cache, so that trivial amount of additional memory made a huge performance difference.
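
For readers who have not met a direct-mapped cache, here is a toy Python sketch (not the Titan's exact scheme) of why a loop shorter than the cache runs entirely out of it after the first pass: each address maps to one slot, so a small loop never evicts its own instructions.

    class DirectMappedICache:
        def __init__(self, slots=32):
            self.slots = slots
            self.tags = [None] * slots       # which address currently occupies each slot
            self.hits = self.misses = 0

        def fetch(self, addr):
            idx = addr % self.slots
            if self.tags[idx] == addr:
                self.hits += 1
            else:
                self.misses += 1
                self.tags[idx] = addr        # refill the slot from main memory

    cache = DirectMappedICache(slots=32)
    loop = list(range(0x100, 0x100 + 24))    # a 24-instruction loop
    for _ in range(100):                     # execute 100 iterations of the loop
        for pc in loop:
            cache.fetch(pc)
    print(cache.misses, "misses in", cache.hits + cache.misses, "fetches")
    # -> only the 24 cold misses; every later fetch hits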

Anyway, caches now need to have more intelligence. Instead of invalidating lots of data that might turn out to be stale, the cache controllers need to invalidate on a line-by-line basis in order to ensure that anybody reading an address gets the latest value written. This needs to be extended even to devices that don't have caches, since a DMA device cannot simply go to main memory due to delayed write-back.

Obviously these protocols are pretty complicated, so how do you verify them? I don't mean how you verify that a given RTL implementation of the protocol is good; that is the normal verification problem that formal and simulation techniques have been used on for years. I mean how you verify that the protocol itself is correct: in particular, that the caches are coherent (any reader correctly gets the last write) and deadlock-free (all operations will eventually complete or, as a weaker condition, at least one operation can always make progress).

Since this was a Jasper User Group Meeting, it wouldn't be wild to guess that you use formal verification techniques. The clever part is that there is a table-driven product called Jasper ActiveModel. This creates a circuit that is a surrogate for the protocol. Yes, it has registers and gates, but these are not something implementable; they capture the fundamental atomic operations in the protocol. Then, using standard proof techniques, this circuit can be analyzed to make sure it has the good properties it needs.
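
Jasper's internals are not described here, but the underlying idea can be shown with a toy: describe a protocol's atomic operations as a transition system over a tiny state space, then exhaustively explore it, checking a coherence invariant and that some operation can always fire. The Python sketch below does this for a two-cache, single-line, MSI-style protocol; it illustrates the approach, not ActiveModel itself.

    from collections import deque

    # per-cache line states: "I"nvalid, "S"hared, "M"odified

    def step(state, cache, op):
        # one atomic protocol operation; returns the next global state
        s = list(state)
        other = 1 - cache
        if op == "read":
            if s[other] == "M":          # snoop: the owner downgrades to Shared
                s[other] = "S"
            if s[cache] == "I":
                s[cache] = "S"
        else:                            # op == "write"
            s[other] = "I"               # invalidate the other copy
            s[cache] = "M"
        return tuple(s)

    def explore(initial=("I", "I")):
        seen, queue = {initial}, deque([initial])
        while queue:
            st = queue.popleft()
            # coherence invariant: a Modified copy may only coexist with an Invalid one
            assert not ("M" in st and st.count("I") != 1), f"coherence violation: {st}"
            succs = {step(st, c, op) for c in (0, 1) for op in ("read", "write")}
            assert succs, f"deadlock at {st}"    # some operation can always fire
            for nxt in succs - seen:
                seen.add(nxt)
                queue.append(nxt)
        return sorted(seen)

    print("reachable states:", explore())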

It was a very worthwhile exercise. It turned out that the original spec contained some errors. Of course they were weird corner cases, but that is what formal verification is so good at. Simulation hits the stuff you already thought of. There was one circumstance under which the whole cache infrastructure could deadlock, and another in which it was possible to get access to stale data.

Similar approaches have been taken for verifying communication protocols, which also have similar issues: they might deadlock, the wrong data might get through and so on.


Media Tablet & Smartphones to generate $6 Billion market in… power management IC segment by 2012, says IPnest
by Eric Esteve on 11-15-2011 at 10:59 am

With the worldwide annual media tablet shipment forecast changing (growing, in fact) almost every quarter, the latest from ABI Research calling for shipments to approach 100 million units in 2012 and pass 150 million in 2014, and a similar forecast for smartphones, passing 400 million units this year (438 million) and approaching a billion units shipped by 2016, there is no doubt that these two applications are the key drivers of the semiconductor market for the next five years.

But the real question is: which type of semiconductor? Let's have a look at the comparative bill of materials (BOM) for a media tablet and a smartphone:

From the first column (smartphone) we extract a total value of about $200, and about $300 for the media tablet column. Clearly, a significant share of the value comes from the display, touch screen and mechanical parts, especially for the media tablet. Because this site is SemiWiki (and not DisplayWiki or MechaWiki… even if that would be interesting) we will concentrate on the semiconductor content only. We have built the following table, restricted to semiconductors, where we can see that the BOM ratio is no longer 2 to 3 but rather 8 to 9. The semiconductor content of the two devices is very similar: the NAND flash, DRAM and application processor can be treated as identical (which is no longer true if you compare a media tablet with 32 GB of NAND flash to a smartphone with 16 GB, so this is a "theoretical" case). The NAND flash and DRAM suppliers are well known (Samsung, Elpida, SanDisk…), and it is highly doubtful that a fabless newcomer will emerge in these two segments. That is why we have decided to zoom in on the semiconductor content excluding the memories. The BOM then comes down to $42 for the smartphone and $50 for the media tablet.

We have addressed the application processor segment in a previous post; it is pretty crowded, as we count nine big players (Broadcom, Freescale, Intel, Marvell, Nvidia, Qualcomm, Renesas, ST-Ericsson and TI) and a bunch of newcomers, some of which are now becoming well established (Mtekvision or Spreadtrum):

  • Anyka Technologies Corporation
  • Beijing Ingenic Semiconductor Co., Ltd.
  • Chongqing Chongyou Information Technologies Company
  • Fuzhou Rockchip Electronics Co., Ltd
  • Hisilicon Technologies
  • Leadcore Technology
  • MagicEyes Digital, Inc
  • MStar Semiconductor
  • Mtekvision
  • Novatek Microelectronics
  • Spreadtrum
  • Shanghai Jade Technologies Co., Ltd.

Because this segment is so crowded, why not look elsewhere? It could be connectivity (WiFi, WLAN or Bluetooth) or sensor (gyroscope or accelerometer) chips… but we have selected power management (PM) ICs: the PM semiconductor content reaches 40% of the semiconductor BOM for media tablets and 20% for smartphones.

Let's try to make a quick assessment of the PM IC Total Addressable Market (TAM). IPnest has already built a smartphone shipment forecast for 2010-2016 (see the blog on SemiWiki), and we have forecast information available for media tablets from ABI Research. To derive the PM IC TAM forecast for 2011-2016, we have to:

  • Consolidate the two shipment forecasts (media tablet and smartphone)
  • Assess the price erosion, or ASP evolution, for PM ICs

First, the combined forecast by unit shipments, for 2010-2016:

Then we can calculate the Total Available Market for power management ICs in both the smartphone and media tablet applications, applying the respective PM IC ASP for each application. We have neglected any potential decline in PM IC usage in media tablets or smartphones, as it is unlikely to happen: to satisfy end users by increasing the time between charges, the trend is toward systems with better power management capabilities. We have also neglected potential growth in the pervasion of power management devices; such growth would lead to a higher TAM, and a higher TAM would increase the number of competitors, leading to more drastic price erosion. We have assumed the same price erosion rate as for application processors, or 33% over a five-year period. With these assumptions, the power management IC market segment is expected to reach $6 billion by next year, and up to $8 billion by 2016.
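
To make the TAM arithmetic concrete, here is a minimal Python sketch of the calculation described above. Apart from the 438 million smartphone figure quoted earlier, every shipment and ASP number below is a hypothetical placeholder, and the roughly 33% five-year ASP erosion is applied evenly per year.

    # hypothetical unit forecasts, millions of units per year
    smartphone_units = {2011: 438, 2012: 520, 2013: 610, 2014: 700, 2015: 800, 2016: 950}
    tablet_units     = {2011: 65,  2012: 100, 2013: 125, 2014: 150, 2015: 175, 2016: 200}

    ASP_2011 = {"smartphone": 8.0, "tablet": 20.0}    # hypothetical PM IC content per unit, $
    EROSION_PER_YEAR = 1 - (1 - 0.33) ** (1 / 5)      # ~33% total erosion over five years

    for year in range(2011, 2017):
        factor = (1 - EROSION_PER_YEAR) ** (year - 2011)
        tam = (smartphone_units[year] * ASP_2011["smartphone"] +
               tablet_units[year] * ASP_2011["tablet"]) * factor / 1000   # $M -> $B
        print(year, f"PM IC TAM ~ ${tam:.1f}B")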

The power management market segment has been relatively neglected by analysts and bloggers so far, at least compared with the application processor segment, where you can find multiple articles. Looking at this IC segment's market weight helps us better understand the long-term strategy of companies like Texas Instruments, which is trying to consolidate its position in the PM segment (which, by the way, is NOT part of the Wireless Business Unit) by running high-priced acquisitions, at a time when one might feel that it is defocusing from the wireless market – which is completely wrong! It is just focusing more on the power management segment… the application becoming a kind of enabler for the PM IC!

Eric Esteve from IPnest


Jen Hsun Huang’s Game Over Strategy for Windows 8!
by Ed McKernan on 11-15-2011 at 6:00 am

It is always a treat to listen to the nVidia earnings conference call as Jen Hsun Huang offers his take on the industry as well as a peek at his company's future plans. Invariably a Wall St. analyst will ask about Windows 8 and Project Denver – the code name for the ARM-based processor designed to run Windows 8 with great graphics performance, in categories that Jen Hsun describes in meticulous detail as tablets and clamshells. In last week's call, Jen Hsun went out a little further on the ski tips as he claimed that he is going to take ARM's architecture into market segments it hasn't gone into before through "extensions." Let me cut to the chase: he is going to build a CPU with x86 instruction translation, with the help of the cadre of engineers imported from Transmeta.

Before I go any further, let me back up for those of us who didn't get the Microsoft update. Up until a Microsoft analyst meeting in September, the standard line from the Redmond folks was that everything that ran on Windows 7 would run on Windows 8, regardless of the processor (x86 or ARM). Renee James of Intel mentioned at a spring Intel conference that x86 apps would not run on ARM-based Windows 8 machines. Microsoft had a cow and let the world know that Ms. James was incorrect. Turns out she was correct, and she should know, since she heads up the software group at Intel that makes sure Windows 8 and its applications will run on Intel's newest processors. Uh Oh, the Emperor Just Lost His Clothes!

Just to set some leveling here… Intel and Microsoft are going through a long divorce. It will be messy and stretch out for years, maybe even decades. Renee James’ comments are the type that are strategic and the wording is reviewed in great detail by multiple people, including CEO Paul Otellini. So Paul is saying to Steve Ballmer, “Time to come clean buddy.” And to ARM, Otellini is saying, “x86 isn’t dead in PCs by a long shot.” Or maybe nVidia has a different answer.

That brings us up to Microsoft’s September statement from Steven Sinofsky on what Windows 8 can run:

STEVEN SINOFSKY: Sure. I don’t think I said quite that. I think I said that if it runs on a Windows 7 PC, it’ll run on Windows 8. So, all the Windows 7 PCs are X86 or 64-bit.

We’ve been very clear since the very first CES demos and forward that the ARM product won’t run any X86 applications. We’ve done a bunch of work to enable that — enable a great experience there, particularly around devices and device drivers. We built a great deal of what we call class drivers, with the ability to run all sorts of printers and peripherals out of the box with the ARM version.

Oh what would we do without analysts to tell us the future?

Back in the days when Transmeta was still around, it shopped itself to the usual suspects. Jen Hsun passed on a direct buyout and instead hired some of the best engineers. Rumors flew around about nVidia building a direct x86 competitor to Intel; however, the true value of Transmeta engineering was in the x86-compatible software translator that sat on top of a VLIW core with some hardware hooks for performance. The significance of Transmeta was both the translator and the discovery that the world was about to ditch MHz and go for mobile wireless devices with extra-long battery life that emphasized the visual experience over some PC Mag benchmark. All the benchmarks of the day, however, centered on jumping in and out of Office applications, making tweaks here and there, demonstrating why 1GHz Pentiums were much more valuable than 933MHz ones.
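
For readers unfamiliar with how such a translator earns its keep, here is a toy Python sketch of the central trick (not Transmeta's Code Morphing software or any actual nVidia design): translate each guest basic block once, cache the native version, and reuse it on every later execution, so hot loops pay the translation cost only once.

    class TranslationCache:
        def __init__(self, translate_fn):
            self.translate_fn = translate_fn    # expensive: guest block -> native code
            self.cache = {}                     # guest block address -> translated code
            self.translations = 0

        def run_block(self, guest_addr, guest_block):
            native = self.cache.get(guest_addr)
            if native is None:                  # first visit: translate and cache
                self.translations += 1
                native = self.translate_fn(guest_block)
                self.cache[guest_addr] = native
            return native()                     # later visits reuse the cached code

    # stand-in "translator": wrap the guest block in a callable
    tc = TranslationCache(lambda block: (lambda: sum(block)))
    hot_loop = [1, 2, 3, 4]
    for _ in range(100_000):                    # a hot loop executed 100,000 times
        tc.run_block(0x401000, hot_loop)
    print("translations performed:", tc.translations)   # -> 1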

Bear in mind that Jen Hsun Huang expects the Project Denver question at every quarterly earnings call, so here it is in his own words:

“Project Denver, our focus there is to supplement, add to ARM’s capabilities by extending the ARM’s architecture to segments in a marketplace that they’re not, themselves, focused on. And there are some segments in the marketplace where single-threaded performance is still very important and 64 bit is vital. And so we dedicated ourselves to create a new architecture that extends the ARM instruction set, which is inherently very energy-efficient already, and extend it to high-performance segments that we need for our company to grow our market. And we’re busily working on Denver. It is on track. And our expectation is that we’ll talk about it more, hopefully, towards the end of next year. And until then, until then I’m excited, as you are.”

Essentially, nVidia's model is that for most of the PC market what matters is compatibility and graphics performance. In the nVidia model the x86 CPU is a sidecar. In the future you will pay more for a better graphics experience than for CPU performance. If the performance of Jen Hsun's multicore ARM is way beyond what a typical Microsoft Office user expects, then an x86 software translator on top of the ARM cores running at 20% of native performance should be just fine. I picked 20% – maybe it's 25% or 30% but you get the idea. To be unique and to get away from the pack (TI and Qualcomm), nVidia will implement some instruction extensions to enable the translator. Since nVidia already has the gaming community on its side writing games that directly go to the graphics GPU, Jen Hsun can envision a scenario where it is Game Over!


Not your father’s Tensilica
by Paul McLellan on 11-14-2011 at 5:27 pm

Tensilica has been around for quite a long time. Their key technology is a system for generating a custom processor, the idea being to better match the processor to the performance, power and area requirements, as compared with a fully general-purpose control processor (such as one of the ARM processors). Of course, generating a processor on its own isn't much use: how would you program it? So the system also generates custom compilers, virtual platform models and so on: everything you need to be able to use the processor.

I’ve said before in the context of ARM that what is most valuable is not the microprocessor design itself, it is the ecosystem that surrounds it. That is the barrier to entry, not the fact that ARM does a reasonable job of implementing processors.

In the early days of Tensilica, this technology was what they sold. Early adopters who needed a custom processor could buy the system, design their processor, put it on an SoC, and program it using the generated compiler and model. ARC (now part of Synopsys via Virage) was the other reasonably well-known competitor. I remember talking to them once and they admitted that lots of people really wanted a fixed processor because, for example, they wanted to know the performance in advance.

Tensilica found the same thing. There isn’t a huge market of people wanting to design their own processor. But there is a huge market of people who want a programmable block that has certain characteristics, and a market for people who want a given function implemented without having to write a whole load of Verilog to create a fully-customized solution.

So Tensilica has been taking its own technology and using it to create blocks that are easier to use. Effectively they are the custom processor design experts so that their customers don't have to be. The first application that got a lot of traction was 24-bit audio.

More recently, there is the ongoing transition to LTE (which stands for Long Term Evolution, talk about an uninformative and generic name) for 4G wireless. This is very complicated, and will be high-volume (on the handset side anyway, base-station not so much).

Difficult-to-use but flexible technologies often end up finding a business like this. The real experts are in the company and it is easier for them to "eat their own dogfood" than it is to teach other people to become black-belt users.



Semiconductor Power Consumption and Fingertop Computing!
by Daniel Nenni on 11-13-2011 at 4:38 pm

Can semiconductor devices change the temperature of the earth? The heat from my Dell XPS changes the temperature of my lap! A 63” flat screen TV changes the temperature of my living room. I just purchased six of the latest iPhones for my family (under duress) and signed up for another two years with Verizon, so our carbon footprint changes once again.

As computing goes from the desktop to the laptop to the fingertop with a total available market (TAM) of 7B+ people, power has become a critical mess. A recent Time Magazine article, 2045: The Year Man Becomes Immortal suggests that we will successfully reverse-engineer the human brain by the mid-2020s. Replicating the computing power of the human brain is one thing, unfortunately, replicating its power efficiency is quite another!

Power efficiency is overwhelming design reviews across the semiconductor design and manufacturing ecosystem: cost and power, performance and power, temperature and power, etc. Who knows this better than Chris Malachowsky, co-founder and CTO of NVIDIA? Chris and the IEEE Council on EDA bought me lunch at ICCAD last week. Chris talked about everything from superphones to supercomputers and the semiconductor power challenges ahead.

Chris was quick to point out that NVIDIA is a processor company, not a graphics (GPU) company, with $3.5B in revenue, 6,900 employees, and 2,000+ patents. Chris currently runs a 50+ PhD research group inside NVIDIA. One of the projects his group is working on is a supercomputer capable of a MILLION TRILLION calculations per second for the Department of Energy Exascale Program, all in the name of science. The hitch is that it can only consume 20MW (which works out to about 50 billion calculations per second per watt)!

NVIDIA has 3 very synergistic market segments for their technology:

  • Mobile (Tegra)
  • Gaming/Visual Computing (GeForce/Quadro)
  • Supercomputing (Tesla)

The largest market today is computer gaming at $35B+ but fingertop computing (mobile) is where the hyper growth is as it intersects all three markets. We now live in a pixel based world and whoever controls those pixels wins!

One of the worst kept secrets is NVIDIA's new Tegra 3 architecture, which is an example of what Chris Malachowsky called a multi-disciplinary approach to semiconductor power management. The best write-up is Anandtech's NVIDIA's Tegra 3 Launched: Architecture Revealed.

The Tegra 3 is a quad core SoC with almost twice the die size of its predecessor, from 49mm^2 to around 80mm^2, built on the TSMC 40nm LPG process. The performance/throughput of Tegra 3 is about five times better than last year's Tegra 2 with 60%+ less power consumption. The Tegra 4 (code name Wayne) has already been taped out on TSMC 28nm and will appear in 2012. We will never see a better process shrink than 40nm to 28nm in regard to performance and power, so expect the Tegra 4 to be extra cool and fast!

The key to the Tegra 3's low power consumption is the fifth ARM Cortex-A9 "companion" core, running at 500MHz. You can run one to four cores at 1.3GHz or just the companion core for background tasks, thus the power savings, and thus the multi-disciplinary approach to low power SoC realization. You will see a flood of Tegra 3 based devices at CES in January so expect NVIDIA to have a Very Happy New Year!
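
A toy Python sketch of that idea (not NVIDIA's actual scheduler): light background demand stays on the low-clocked companion core, and only heavier demand wakes one to four fast cores. The only numbers used are the clock rates quoted above; the demand values are made up.

    COMPANION_MHZ = 500
    FAST_MHZ = 1300
    FAST_CORES_MAX = 4

    def pick_configuration(demand_mhz):
        # return (description, total active MHz) for a given CPU demand estimate
        if demand_mhz <= COMPANION_MHZ:
            return "companion core only", COMPANION_MHZ
        fast_cores = min(FAST_CORES_MAX, -(-demand_mhz // FAST_MHZ))   # ceiling division
        return f"{fast_cores} fast core(s)", fast_cores * FAST_MHZ

    for demand in (100, 400, 1200, 3000, 6000):
        print(demand, "->", pick_configuration(demand))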

The fingertop computing technology appetite is insatiable, and with a TAM of 7B+ units you can expect many more multi-disciplinary approaches to low power semiconductor design and manufacturing, believe it!



Physical Verification of 3D-IC Designs using TSVs
by Daniel Payne on 11-12-2011 at 10:36 am

3D-IC design has become a popular discussion topic in the past few years because of the integration benefits and potential cost savings, so I wanted to learn more about how the DRC and LVS flows were being adapted. My first stop was the Global Semiconductor Alliance web site where I found a presentation about how DRC and LVS flows were extended by Mentor Graphics for the Calibre tool to handle TSV (Through-Silicon Via) technology. This extension is called Calibre 3DSTACK.

With TSVs each die now becomes double-sided in terms of metal interconnect. DRC and LVS now have to verify the TSVs, plus the front and back metal layers.

The new 3DSTACK configuration file controls DRC and LVS across the stacked die:

A second source that I read was at SOC IP, where more details were provided about the configuration file.

This rule file for the 3D stack has a list of dies with their order number, the position, rotation and orientation of each die, and the location of the GDS layout files and associated rule files and directories.
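
To make that concrete, here is an illustrative Python data model of the kind of information such a stack-level rule file carries. This is NOT Calibre 3DSTACK syntax; the field names and values are hypothetical.

    from dataclasses import dataclass

    @dataclass
    class DieEntry:
        order: int          # position in the stack, bottom die = 0
        name: str
        x: float            # placement offset of the die origin, in microns
        y: float
        rotation: int       # 0 / 90 / 180 / 270 degrees
        flipped: bool       # face-down orientation
        gds_file: str       # layout database for this die
        rule_file: str      # DRC/LVS rule deck for this die

    stack = [
        DieEntry(0, "logic_die",  0.0, 0.0,   0, False, "logic.gds",  "logic.rules"),
        DieEntry(1, "memory_die", 0.0, 0.0, 180, True,  "memory.gds", "memory.rules"),
    ]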

Doing the parasitic extraction requires new information about the size and electrical properties of the microbumps, copper pillars and bonding materials.

One methodology is to first run DRC, LVS and extraction on each die separately, then add the interfaces. The interface between the stacked dies uses a separate GDS, and LVS/DRC checks are run against this GDS.

For connectivity checking between dies, text labels are inserted at the interface microbump locations.
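
Here is a toy Python sketch of that label-matching idea (not the actual Calibre check): each die exports (net label, x, y) triples at its microbump locations, facing bumps are paired by coordinate, and any pair whose labels disagree is flagged. The coordinates and tolerance below are made up.

    def check_interface(top_bumps, bottom_bumps, tol=0.5):
        # each argument is a list of (net_label, x, y); tol is in microns
        errors = []
        for net_t, xt, yt in top_bumps:
            match = next(((net_b, xb, yb) for net_b, xb, yb in bottom_bumps
                          if abs(xb - xt) <= tol and abs(yb - yt) <= tol), None)
            if match is None:
                errors.append(f"{net_t}: no facing bump at ({xt}, {yt})")
            elif match[0] != net_t:
                errors.append(f"{net_t} connects to {match[0]} at ({xt}, {yt})")
        return errors

    top    = [("VDD", 10.0, 10.0), ("CLK", 20.0, 10.0)]
    bottom = [("VDD", 10.1, 10.0), ("RST", 20.0, 10.1)]
    print(check_interface(top, bottom))   # -> ['CLK connects to RST at (20.0, 10.0)']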

With these new 3D extensions, Calibre can run DRC, LVS and extraction on the entire 3D stack. A GUI helps you visualize the 3D rules and the results from DRC, LVS and extraction:

TSMC Partner of the Year Award
Based on this extension of Calibre into the 3D realm, TSMC has just announced that Mentor was chosen as the TSMC Partner of the Year. IC designers continue to use the familiar Calibre rule decks with the added 3DSTACK technology.

Summary
Yes, 3D-IC design is a reality today, where foundries and EDA companies are working together to provide tools and technology to extend 2D and 2.5D flows for DRC, LVS and extraction.



SPICE Circuit Simulation at Magma
by Daniel Payne on 11-11-2011 at 11:36 am

All four of the public EDA companies offer SPICE circuit simulation tools for use by IC designers at the transistor level, and Magma has been offering two SPICE circuit simulators:

  • FineSIM SPICE (parallel SPICE)
  • FineSIM PRO (accelerated, parallel SPICE)

An early advantage offered by Magma was a SPICE simulator that could be run in parallel on multiple CPUs. The SPICE competitors have all now followed suit and re-written their tools to catch up to FineSim in that feature.

I also blogged about FineSIM SPICE and FineSIM Pro in June at DAC.

When I talk to circuit designers about SPICE tools they tell me that they want:

  • Accuracy
  • Speed
  • Capacity
  • Compatibility
  • Integration
  • Value for the dollar
  • Support

The priority of these seven attributes really depends on what you are designing.

Feedback from anonymous SPICE circuit benchmarks suggests that FineSim SPICE can be preferred over Synopsys HSPICE:

  • Accuracy – about the same, qualified at TSMC for 65nm, 40nm and 28nm
  • Speed – FineSim SPICE can be 3X to 10X faster
  • Capacity – around 1.5M MOS devices, up to 30M RC elements
  • Compatibility – uses inputs: HSPICE, Spectre, Eldo, SPF, DSPF. Models: BSIM3, BSIM4. Outputs: TR0, fsdb, WDF.
  • Integration – co-simulates with Verilog, Verilog-A and VHDL
  • Value – depends on the deal you can make with your Account Manager
  • Support – excellent

Room for Improvement
Cadence, Synopsys and Mentor all have HDL simulators that support Verilog, VHDL, SystemVerilog and SystemC. These HDL simulators have been deeply integrated with their SPICE tools, letting you simulate accurate analog with the SPICE engine in context with digital. Magma has no Verilog or VHDL simulator and only does co-simulation, which is really primitive in comparison to these deeper integrations using single-kernel technology.

Memory designers use hierarchy and FineSim Pro does offer a decent simulation capacity of 5M MOS devices, although it is not a hierarchical simulator, so you cannot simulate a hierarchical netlist with 100M or more transistors in it. Both Cadence and Synopsys offer hierarchical SPICE simulators. With FineSim Pro you have to adopt a methodology of netlist cutting to simulate just the critical portions of your hierarchical memory design.
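
As a toy illustration of what netlist cutting can look like (this is a user-side methodology, not a FineSim feature), here is a Python sketch that pulls a single .SUBCKT definition out of a hierarchical SPICE netlist so that only the critical block gets simulated. The example netlist is hypothetical and the extraction ignores nested subcircuits.

    def extract_subckt(netlist_text, name):
        # return the .SUBCKT <name> ... .ENDS block as text, or None if not found
        keep, capture = [], False
        for line in netlist_text.splitlines():
            tokens = line.split()
            if (not capture and len(tokens) >= 2
                    and tokens[0].lower() == ".subckt" and tokens[1] == name):
                capture = True
            if capture:
                keep.append(line)
                if tokens and tokens[0].lower() == ".ends":
                    break
        return "\n".join(keep) if keep else None

    netlist = """\
    .SUBCKT inv in out VDD VSS
    MP out in VDD VDD pmos W=0.2u L=0.04u
    MN out in VSS VSS nmos W=0.1u L=0.04u
    .ENDS
    .SUBCKT chain a c VDD VSS
    X1 a b VDD VSS inv
    X2 b c VDD VSS inv
    .ENDS
    """
    print(extract_subckt(netlist, "inv"))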

Summary
You really have to benchmark a SPICE circuit simulator on your own designs, your models, your analysis, and your design methodology to determine if it is better than what you are currently using. This is a highly competitive area for EDA tools and by all accounts Magma has world-class technology that works well for a wide range of transistor-level netlists, like custom analog IP, large mixed-signal designs, memory design and characterization.

We've set up a Wiki page for all SPICE and Fast SPICE circuit simulators to give you a feel for which companies have tools.