
Intel’s 22nm Process. Atom, ARM, Apple
by Paul McLellan on 05-05-2011 at 9:52 am

Intel had a big press event yesterday at which they announced details of their 22nm process. In a change from their current planar processes, it uses a vertical fin with the gate wrapped around three of its sides. These three gates give much better control of leakage through transistors that are switched off, along with more drive current through transistors that are on. Intel claims 37% better performance and 50% lower power compared to 32nm. Although vertical transistors have been talked about for nearly a decade, Intel’s tri-gate is the first time anyone has brought them into volume production.

One speculation is whether Atom at 22nm will hit a better power/performance point than ARM does at 28nm in TSMC and other foundries. Of course the Atom vs ARM battle isn’t really about technical specs so much as about ecosystems and partners. The mobile industry in particular is unlikely to suddenly switch everything to Atom for a small increase in battery life, because doing so would be very expensive and risky.

Before the press conference there were rumors that Intel would also announce one or more foundry deals (they didn’t). They are building 4 fabs to run 22nm, which is a lot of capacity. Presumably they reckon that with their solid lead of 1-2 process generations over the competition they can wrest some big-volume names from TSMC and Samsung. Apple, in particular, is one name that keeps coming up. Supposedly Apple is in transition right now from using Samsung as their foundry to using TSMC, mainly because Samsung competes with them in cell phones and tablets. But Intel will have this problem too if they become a foundry for Apple. Intel just acquired Infineon wireless (so I’m assuming they must have an ARM license through that), but it’s not clear how serious they will be, since Infineon was never really a market leader and has struggled since Siemens got out of mobile phones completely.

Intel clearly needs either to acquire some silicon-hungry product lines or to bring on a lot of wafer foundry business if it is going to fill its 22nm capacity. I still don’t see how Atom can realistically displace ARM: Intel would have to run some ARM silicon (their own via Infineon, Apple’s, someone else’s), so at least part of the ARM ecosystem would have access to Intel’s manufacturing, and manufacturing alone probably isn’t decisive anyway. Apple on Intel does make some sense though, giving Apple the possibility of keeping its specs ahead of anything its competition could come up with.



Chip Power Models
by Paul McLellan on 05-04-2011 at 4:21 pm

As the complexity of chip-package-system (CPS) interactions has increased, the tradeoffs involved in power and noise analysis have become steadily harder to manage. As is so often the case in semiconductor design, such issues first arise as second-order effects that can largely be ignored, but each process node makes the problem worse until it can no longer be ignored.

Traditionally, CPS co-analysis and co-optimization suffered from limited accuracy and productivity. Analysis was done using heuristic models of the die, or relied on assumptions such as lumped Cdie/Rdie values. Without a good model of the die, the only way to do a reasonable analysis was to hand the package and PCB artwork to the chip-level designers so that they could model the full chip along with an appropriate amount of the off-chip interconnect. But these simulations were very time consuming, making CPS convergence difficult. What was required was a good chip power model, so that the package and board engineers could iterate their part of the design independently and with much quicker turnaround.

Apache introduced its CPM as a die-modeling technology in 2006. It leverages full-chip time-domain and AC analysis to create a compact and highly accurate electrical representation of the chip in its various operating modes. It models the entire die power delivery network (PDN), including device-level behavior (switching, leakage) and parasitics, to create a SPICE-based model with ports at the die-level bumps or pads. It accurately represents the electrical response of the chip across a wide range of frequencies, from DC to multi-GHz.

CPM 2.0 is Apache’s second generation of this modeling technology. It models the operation of the chip in a manner that causes additional stress for the system PDN, in particular taking into account resonance frequencies in the PDN. These are increasingly important in nodes below 40nm. This allows package and board engineers to view the impact of their design changes deep inside the chip.

PDN analysis must use both frequency-domain and time-domain simulations. Frequency-domain simulations are used to understand the various resonance points of the combined PDN and to ensure the system impedance does not get too high. A full transistor-level analysis is not feasible on a modern chip, but CPM 2.0 addresses this by including a resonance-aware mode in which current signatures are deliberately picked around the CPS resonance frequency. This is in contrast to CPM 1.0, which targeted the scenario with the highest power consumption; the two are not the same.
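
To make the resonance point concrete, here is a minimal sketch of a frequency-domain impedance scan over a lumped chip/package PDN model. This is purely illustrative: the element values are my own assumptions, not numbers from Apache or any real design, and a real CPM captures far more detail than two RLC branches.

```python
# A minimal sketch of a frequency-domain PDN impedance scan over a lumped
# chip/package model. All element values are illustrative assumptions.
import numpy as np

Rpkg, Lpkg = 1e-3, 50e-12    # package path: resistance (ohm), inductance (H)
Rdie, Cdie = 0.2e-3, 100e-9  # on-die decap branch: ESR (ohm), capacitance (F)

f = np.logspace(6, 10, 2000)           # sweep 1 MHz .. 10 GHz
w = 2 * np.pi * f
Z_pkg = Rpkg + 1j * w * Lpkg           # inductive path back to the supply
Z_die = Rdie + 1.0 / (1j * w * Cdie)   # on-die decoupling capacitance
Z = Z_pkg * Z_die / (Z_pkg + Z_die)    # impedance seen from the die bumps

k = np.argmax(np.abs(Z))
print(f"anti-resonance near {f[k]/1e6:.0f} MHz, peak |Z| = {abs(Z[k])*1e3:.1f} mohm")
```

The peak in |Z| is the chip/package anti-resonance; current signatures concentrated near that frequency are exactly the stress case that a resonance-aware mode is designed to expose.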

Variable power features are used to model transients on the chip, such as when the chip moves from one clock-gating mode to another, when power-down areas are powered on and so on. The CPM provides stress mode coverage of these potentially significant events.

Apache CPM white paper


Apache at DAC
by Paul McLellan on 05-04-2011 at 2:38 pm

DAC is less than a month away: the tradeshow runs June 6-8th, longer depending on what other events you might also be attending. Apache is in booth 2448 (marked in red on the DAC floorplan map).

Many of the presentations at the Apache booth will be customers (such as ARM, Xilinx, ST Ericsson, GlobalFoundries and TSMC) discussing various power-related topics.

Apache also has 3 presentations at the exhibitor forum (which takes place at booth 1005):

  • Tuesday June 7th at 1pm: Solutions for full-chip automated ESD design and verification
  • Tuesday June 7th at 3:40pm: RTL to silicon power integrity flow
  • Wednesday June 8th at 10:15am: Fundamentals of full-chip substrate noise analysis

As if that isn’t enough, Apache is also appearing at several of its partners’ booths: TSMC, GlobalFoundries and ChipEstimate.

Apache is also sponsoring the new DAC Monday evening cocktail hour, along with ARM.


Two New Platforms for Systems Designers
by Daniel Payne on 05-03-2011 at 7:03 pm

Introduction
Today at the Embedded Systems Conference, Cadence announced something of interest to systems designers.

What’s New?
The Rapid Prototyping Platform and Virtual System Platform are what’s new, and they are intended to enable and automate concurrent hardware and software development. I can remember Mentor promoting concurrent design back in the early 1990s, so the idea is a long-standing one that is still pertinent today.

Ran Avinun
I had the pleasure of speaking by phone last week with Ran Avinun, a marketing group director at Cadence, to learn more about this.

Q: What’s so different with your four platforms?
A: They are actually designed to work together, instead of four point tools that don’t talk to each other.

Q: Why would I use Cadence tools for my system design?
A: With this new approach you’ll have tools that are Open, Connected and Scalable.

Q: Who is using the new platforms?
A: NVIDIA is using three of the platforms: Rapid Prototyping, Incisive, Verification Computing.

Q: What makes your Rapid Prototyping Platform special?
A: It’s fast to bring up your new system, because it uses an ASIC flow, not an FPGA flow (although it uses FPGA chips).

Q: Do I have to tweak my RTL code to make it work on the FPGAs?
A: No, just stick to your familiar ASIC coding style and flow.

Q: How many gates can I prototype?
A: Each board holds up to 30M ASIC gates.

Q: What is the Virtual System Platform all about?
A: It is more software centric and allows TLM (Transaction Level Models) and RTL to work together.

Q: What kind of systems would model best in the Virtual System Platform?
A: ARM-based designs, wireless multimedia, iOS or Android systems.

Q: Can I debug a multi-core system?
A: Yes, you can.

Q: Which models, compilers and debuggers can I use?
A: Many 3rd party companies provide these.

Q: When will the Virtual System Platform be ready?
A: Later this year.

Summary
Cadence is now offering all four platforms for systems designers: Virtual System, Incisive Verification, Verification Computing and Rapid Prototyping. These platforms are not just point tools; rather, they were designed to play together, which should enable concurrent hardware and software design. Mentor and Synopsys have some new competition to consider in the systems design space.


37 Billion IC with MTP IP from now to 2015: clearly, Kilopass and GlobalFoundries partnership make sense…
by Eric Esteve on 05-02-2011 at 4:42 am

Although there has always been a strong relationship between Kilopass and Chartered Semiconductor, it has been further strengthened by the acquisition of Chartered by GLOBALFOUNDRIES, allowing Kilopass’s customers to integrate NVM IP on advanced technology nodes, down to 40nm and even 28nm in the near future.

Before going into more detail on the NVM technology and the types of IP available on GLOBALFOUNDRIES technologies, I would like to understand to what extent NVM is, and will be in the future, a strategic piece of a SoC design. To do so, let’s quickly build a forecast for ICs, ASIC or ASSP, that will integrate at least one NVM IP. This NVM can be used for:


  • Chip identification (unique ID per IC)
  • Trimming or calibration (usually for mixed-signal ICs)
  • Coefficient storage (specific to image sensors)
  • Configuration
  • ROM patching
  • MCU code storage

We will look at the two major market segments where NVM is commonly used: Mobile Electronic Devices (wireless handsets and media tablets) and Consumer Electronics (set-top boxes and HDTV), simply because the applications in these two segments generate huge production volumes. Using information from the very good article “Reap the Benefits without the Cost: Mobile Handset Chips Utilize Antifuse NVM from Configuration to Code Storage”, which you can find here (***), we have been able to evaluate the number of ICs using NVM in a smartphone, as well as in the more traditional feature phone and low-cost handset. We have used the Wireless Handset 2010-2015 forecast built by IPnest to evaluate the number of systems, as well as some data from ABI Research for the media tablet, and built the following table.

Let me be precise and state that we had to make some approximations: the Automotive and Industrial segments have been neglected, and in the Consumer Electronics segment I have only counted HDTV and STB. Nevertheless, the result is impressive! The number of ICs manufactured in 2010 that include an NVM IP is above 3 billion units, growing to almost 9 billion units in 2015, and the cumulative 2010-2015 forecast suggests that more than 37 BILLION ICs will be manufactured including one or more NVM IP. Are you still questioning whether NVM is strategic for a silicon foundry? You should not, as the answer is clearly “Yes, it is!”

As usual, looking at history helps to better understand GLOBALFOUNDRIES’ positioning with respect to NVM. Having worked for Atmel, a company founded by one of the patent holders for Flash technology, George Perlegos (the patents dating from when he was on the Intel payroll), I know that offering a Flash function to an ASIC or ASSP customer on a CMOS process was very attractive. The issue we had at that time (in 1999-2002) was that we could only offer a lagging CMOS process (0.25um when the market was on 0.13um), with an added cost of about 40% (from the extra masks needed to support Flash on top of the native CMOS process). In the end, ASIC with embedded Flash was a very good marketing argument, but such projects were very few and did not bring much revenue. That was 10 years ago.

If we look at today, in 2011, GLOBALFOUNDRIES is able to support ICs integrating NVM from Kilopass not only on the mature technology nodes supported by Chartered Semiconductor, from 130nm to 110nm, but also on 40nm and soon on 65nm (a test chip integrating the NVM IP has recently been taped out on the GLOBALFOUNDRIES 40nm process), and in the near future on 28nm as well. The technology barrier has been broken. It is now possible to design SoCs on the most advanced technology nodes, for media processor or wireless processor platforms, integrating NVM IP to support trimming or calibration (a few hundred bits) or large-capacity code storage. NVM is available on pure CMOS, on the latest technologies, and is able to scale. The offer from Kilopass, as we can see below, covers the majority, if not all, of the needs, whether in terms of code size (from 16b to 4Mb) or supported technologies, from 180nm down to 28nm:
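
As a quick sanity check on those numbers, here is a back-of-the-envelope sketch assuming straight-line growth between the quoted endpoints. The 3.3-billion starting value is my own reading of “above 3 Billion”; nothing here comes from the actual table.

```python
# Back-of-the-envelope check on the cumulative 2010-2015 figure, assuming
# straight-line growth between the article's endpoints. The 3.3B starting
# value is an assumption; only the trend matters here.
import numpy as np

years = np.arange(2010, 2016)
units = np.linspace(3.3, 9.0, len(years))  # billions of NVM-bearing ICs per year

for y, u in zip(years, units):
    print(f"{y}: {u:.1f}B ICs")
print(f"cumulative 2010-2015: {units.sum():.0f}B ICs")  # ~37B
```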

According to Kilopass, their differentiation in the crowded NVM IP market starts with the fact that the company was created in 2001, making them the innovator in this market. They also point to the quality of their solution and its high level of reliability, and, because the technology cannot be reverse engineered, its security. Finally, and probably most important, their NVM IP is scalable, allowing it to support customer product evolution across several process nodes: one of their customers has ported the NVM IP from 150nm down to 40nm, passing through 130nm, 90nm and 65nm, guaranteeing real longevity.

For more information: Kilopass


Eric Esteve
IP-Nest


Graphical DRC vs Text-based DRC
by Daniel Payne on 05-01-2011 at 11:42 am

Introduction
IC designs go through a layout process and then a verification step to determine whether the layout’s layer widths and spacings conform to a set of manufacturing design rules. Adhering to the layout rules helps ensure that your chip has acceptable yield.

At the 28nm node a typical DRC (Design Rule Check) deck has about 2,500 rules, which requires a fab engineer to write about 15,000 lines of rule code to check them.

In the early days of IC design these layout rules were basically a minimum width and spacing number for each layer of your design.

It’s not so simple today because we have layout rules where the width and spacing depend upon the length of the net and also upon adjacent interconnect.
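
To give a flavor of what a text-based rule boils down to, here is a toy sketch of the classic minimum-width and minimum-spacing check. The rectangle representation and rule values are invented for illustration and have nothing to do with real SVRF syntax or any foundry deck.

```python
# A toy flavor of a text-based width/spacing rule check.
# Rectangles are (x1, y1, x2, y2) in nm; rule values are invented.
MIN_WIDTH = 50   # nm
MIN_SPACE = 70   # nm

def width_violations(rects):
    """Rectangles narrower than MIN_WIDTH in either dimension."""
    return [r for r in rects if min(r[2] - r[0], r[3] - r[1]) < MIN_WIDTH]

def spacing_violations(rects):
    """Pairs of non-touching rectangles with an axis gap under MIN_SPACE."""
    bad = []
    for i, a in enumerate(rects):
        for b in rects[i + 1:]:
            dx = max(b[0] - a[2], a[0] - b[2], 0)  # horizontal gap, 0 if overlapping
            dy = max(b[1] - a[3], a[1] - b[3], 0)  # vertical gap
            gap = max(dx, dy)
            if 0 < gap < MIN_SPACE:
                bad.append((a, b, gap))
    return bad

metal1 = [(0, 0, 40, 200), (100, 0, 160, 200), (200, 0, 260, 40)]
print(width_violations(metal1))    # first shape is only 40 nm wide
print(spacing_violations(metal1))  # 60 nm and 40 nm gaps both flagged
```

Real decks layer thousands of such checks, plus the context-dependent rules described above, which is exactly why they run to roughly 15,000 lines.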

Even when your IC layout is DRC clean there can be locations on the layout that will cause unintended yield loss.

If you only knew ahead of time where these areas are located then you could change the layout in order to improve your yield.

What the Foundry Knows
The good news is that every foundry knows through silicon experience what many of these yield limiting layouts look like. The issue is how to describe these complex layouts because using text-based rules takes a lot of expertise to create and much time to verify correctness.

Graphical DRC versus Text-based DRC
DRC decks have traditionally been written using a text editor, just like software code. A new approach has emerged for defining layout rules using graphical patterns instead of written rules. This new approach is called DRC+, and Globalfoundries has defined a set of layout patterns that limit yield.

Calibre Pattern Matching
I met with Michael White, Product Marketing Manager at Mentor Graphics to learn more about how pattern matching has been added to the Calibre product line.

Q: Who would use this new DRC+ graphical approach?
A: Mostly the foundry, or IDM would use it to find yield detractors. Even fab lite companies can create their own set of yield detractors.

Q: Who defines the patterns?
A: Anyone involved with yield analysis would create these patterns.

Q: When I graphically define a pattern can I then see the rules that it writes?
A: No, those rules are encrypted and used by Calibre internally.

Q: Any pattern could be rotated or flipped, how do you handle that?
A: Calibre finds all 8 orientations of a defined pattern automatically.

Q: If I already know how to use Calibre, then how much do I have to learn about DRC+?
A: You would run Calibre like before and view results with RVE (Results Viewing Environment). You can learn the graphical tool to enter new patterns and that takes a day.

Q: Can I mix DRC+, equations and rules?
A: Yes, they can all be used on your IC layouts.

Q: Can my patterns find all-angle layouts?
A: Today we support Manhattan patterns although it’s technically feasible to add all-angle support.

Q: Could you read an existing SVRF (Standard Verification Rule Format) deck and create a graphical version of it?
A: That’s not really practical.

Q: What kind of time savings is there if I use graphical pattern matching to create my rules?
A: Toshiba reports that it takes 36X less time to create a golden SVRF deck using this graphical approach.

Q: About how much time does it take to learn how to use graphical rules?
A: Customers take about one to two weeks elapsed time to come up to speed with this approach.

Q: Do the foundries protect their graphical rules?
A: Yes, they use encryption in our tools to protect their rules.

Q: Are there limitations to the number of graphical rules that can be run?
A: We see engineers using millions of patterns with acceptable run times.

Q: When a process node goes from 40nm to 28nm, then how many patterns are there?
A: About 10X new patterns for each successive node.

Q: When Calibre finds a pattern match, then how can the violation be fixed?
A: Within our P&R flow we do automated fixing of DRC violations.

Q: What are the benefits of using Calibre for pattern matching?
A: Fast speed, a golden flow with DRC+ added, single deck with both DRC and DRC+, same Calibre RVE for debug, fits into major IC tool flows, supports hierarchy.

Q: Globalfoundries supports DRC+, what about other foundries?
A: Stay tuned.
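
The “all 8 orientations” answer above is easy to picture with a sketch. Assuming a Manhattan pattern represented as a small 0/1 grid (a toy stand-in, not Calibre’s internal representation), the orientations are the rotations and mirror images of the square:

```python
# The "8 orientations" of a Manhattan pattern: every combination of
# 90-degree rotation and mirroring -- the symmetries of the square.
import numpy as np

def orientations(pattern):
    """Return the distinct rotations/reflections of a 2-D 0/1 grid."""
    views, p = [], np.asarray(pattern)
    for _ in range(4):
        views.append(p)
        views.append(np.fliplr(p))
        p = np.rot90(p)
    # symmetric patterns collapse to fewer than 8 distinct orientations
    seen = {v.shape + tuple(v.ravel()): v for v in views}
    return list(seen.values())

L_shape = [[1, 0],
           [1, 1]]
print(len(orientations(L_shape)))  # 4 -- the L has a diagonal symmetry
```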

Summary
Text-based DRC has been used for decades, however the addition of DRC+ will significantly reduce the amount of time that it takes to create a golden deck of layout design rules. Globalfoundries and Mentor have cooperated to add DRC+ to Calibre and serve IC customers at the 28nm node.


Mentor 2 : Carl Icahn 0
by Daniel Nenni on 05-01-2011 at 9:46 am

The corporate raiders are still throwing rocks at Mentor Graphics. I have followed this reality show VERY closely and find their latest assault seriously counterproductive. Disinformation is common in EDA but I expected more from Carl Icahn and the Raiderettes. They are quite the drama queens. Here is a billion dollar question: Why did Carl Icahn go after Mentor Graphics and not Cadence?

My first Carl Icahn blog, Personal Message to Carl Icahn RE: MENT, is here. It was followed by Mentor Graphics Should Be Acquired or Sold: Carl Icahn, here. Thousands of people have read these blogs, dozens of people have quizzed me on them, and my opinion is the same today: EDA needs Mentor Graphics!

Carl C. Icahn issued the following open letter to shareholders of Mentor Graphics Corporation:

CARL C. ICAHN
767 Fifth Avenue, 47th Floor
New York, New York 10153
April 28, 2011

Dear Fellow Shareholders:


Blah blah blah blah blah………


Sincerely yours,

CARL C. ICAHN

You can find the complete letter here, but I would not waste your time. Lots of numbers, lots of obfuscation. Here is what I would care about if I were a MENT shareholder:

Cadence projects the year at $1B in revenue with non-GAAP earnings of $0.40 per share.

Mentor projects the year at $1B in revenue with non-GAAP earnings of $1.00 per share.

I was around Mentor before Wally joined. I was there when Cadence and Synopsys were born. Mentor is the #2 EDA company today no matter what anybody says, believe it. Cadence has been rotting from the inside since the Avant! fiasco. I worked for Avant! so that is where my opinion comes from.

They just closed the Blockbuster store near me; I switched to Netflix years ago. In 2005 Carl Icahn attacked Blockbuster and its CEO, and it got personal. Carl obviously did not know the video delivery business, and Carl does not know EDA: déjà vu all over again. Does an EDA duopoly sound like fun? Are you enjoying your leading edge smart phones and tablets? Do you really want two EDA companies (SNPS / CDNS) that literally hate each other controlling your semiconductor future?

As they say, what doesn’t kill you makes you stronger. Can Mentor Graphics reduce expenses? Of course they can, and I believe they will; that is the easy part. Can Mentor increase shareholder value? Absolutely! EDA is built on acquisitions, and the biggest fault I find with Mentor is a “bottom feeding” acquisition strategy. They certainly have made some good ones, LogicVision for example: Mentor paid $13M and has made ten times that much. Actually I think the LogicVision ROI is more than eleven times that ($140M+), but what have the Mentor M&A guys been doing since? Time for a new M&A team?

My blog Mentor Acquires Magma? can be found here. It was my most viewed blog last year. As I said before, the company cultures are the farthest thing from a match you will ever see, but the product match is excellent. Mentor acquiring Magma would be comparable to the Synopsys acquisition of Avant!: all product and no executives, which certainly complicates things. But if you look back, if not for the Avant! products, where would Synopsys be? Synopsys would be a distant number two if Cadence had merged with Avant! instead of lawyering them to death.

Just my opinion of course, but I would be happy to debate anyone who disagrees. Rumor has it Wally Rhines gave Carl Icahn the definitive book on EDA: EDA Graffiti. Read the book, Carl. Mentor’s annual shareholder meeting is May 12th in Wilsonville; maybe I will see you there.



TSMC Conference Call is a 6.5 on the Richter Scale
by Daniel Nenni on 04-28-2011 at 12:17 am

TSMC continues to drive the economic recovery with impressive Q1 numbers and an even more impressive Q2 and Q3 outlook. TSMC is my economic bellwether due to its diverse customer base and the sheer volume of consumer electronics silicon it produces. The big surprise in the 1-hour Q1 conference call is a new Giga Fab (#15) breaking ground this year for added 40nm and 28nm production. This is TSMC’s 3rd Giga Fab, each of which can produce 100,000+ wafers per month. So the TSMC strategy is clear: economies of scale, out-produce your competitors, and prepare for a wafer price war like no other.

Coincidentally I’m in Taiwan this week and yes, another earthquake hit, this time just hours before my arrival. As I blogged before, my Taiwan friends think I bring California earthquakes to Taiwan. My first was in September 1999, then again in July 2009. This year my earthquake karma is better. My Taiwan trips usually start with a 6am Monday morning arrival and a 7pm Thursday evening departure. My March trip ended early, so I was in the air for the Thursday 6.9 earthquake. This trip started late with a Monday evening arrival, so again I was in the air for the Monday 6.5 quake. I am now required by the Taiwan government to give 30 days advance notice of my arrival so they can be earthquake prepared. No fab damage was reported after Monday’s quake.

The other big surprise for some is that TSMC 28nm production will start at the end of Q2 2010. Dr. Shang-Yi Chiang, Vice President of TSMC R&D, confirmed that “TSMC plans to start trial-run production of 28-nanometer technology in June”, which is what he told me personally at our April 13th meeting. GlobalFoundries told me its 28nm trial-run production is scheduled for Q1 2011, so TSMC is still 6 months ahead. GlobalFoundries also announced a 20nm node, again following TSMC.

On the financial side, TSMC’s balance sheet can be found here, conference call materials here, management report here, earnings release here, and the conference call transcript here. The most interesting numbers to me are Revenue by Application, which showed gains in communications and consumer electronics but a decrease in computers, and Revenue by Technology: advanced process technologies (0.13-micron and below) accounted for 71% of wafer revenues; within that, 90-nanometer accounted for 17%, 65-nanometer for 27%, and 40-nanometer jumped to 14% of total wafer sales.

Moving forward, TSMC expects Q2 sales to reach T$100-102 billion, up from Q1’s T$92.19 billion and beating market expectations. TSMC also said second-quarter gross profit margin should be 48-50 percent, compared with 47.9 percent in the previous three months, and expects an operating profit margin of 36.5-38.5 percent, versus the first quarter’s 37 percent.
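
A quick bit of arithmetic on that guidance, just to make the implied numbers explicit:

```python
# Implied Q2 gross profit range, straight from the quoted revenue and
# margin guidance above; nothing here beyond the stated ranges.
rev_low, rev_high = 100.0, 102.0   # T$ billions, Q2 revenue guidance
gm_low, gm_high = 0.48, 0.50       # gross margin guidance

print(f"implied Q2 gross profit: T${rev_low * gm_low:.0f}B to T${rev_high * gm_high:.0f}B")
# implied Q2 gross profit: T$48B to T$51B
```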

Clearly semiconductor manufacturing outsourcing is moving forward at a rapid pace. TSMC Chairman and CEO Morris Chang forecast that 2010 sales growth in the global semiconductor market (+22%) will again be outpaced by foundry market growth (+36%). Unfortunately, with all electronics sectors showing stronger than seasonal demand, wafer rationing is upon us.

According to Morris Chang:

“We have been building capacity as fast as we could and the result is still that demand is 30 per cent greater than supply”

“TSMC’s plants are likely to continue to run at full capacity for the next year”

“In the very short term [over the next nine to 12 months], it makes no sense to ask our customers to give us more orders [because of the lack of capacity]”

“Perhaps in a year or two our utilisation will drop below 100 per cent, but we’ll take that in stride”


Ivo Bolsens of Xilinx and Crossover Designs
by Paul McLellan on 04-27-2011 at 4:14 pm

I was at Mentor’s u2u (user group) meeting and one of the keynotes was by Ivo Bolsens of Xilinx. The other was by Wally Rhines and is summarized here.

Ivo started off by analogizing SoCs to the sports cars of the industry (fast but expensive) and FPGAs to the station wagons (not cool). In fact he even said that when Xilinx started, an FPGA was a pretty silly idea: when everyone was optimizing every square micron of area, it took 20 gates to do the work of 1. But, of course, each process generation makes the idea less silly. A crossover SoC is a design that takes some of the integration ideas from SoCs and couples them with the flexibility and time-to-market of programmable logic, along with the financial constraints and application focus of ASSPs. It is like a crossover vehicle, which is really an SUV built on a car chassis.

Of course people have tried to do this sort of thing before. Remember structured ASICs like LSI Logic’s RapidChip? Or multi-chip modules, which suffered from excessive power consumption and slow chip-to-chip signaling (not much better than going off-chip across a PCB)?

There are two technologies that Ivo sees as the foundation of crossover chips. One is the monolithic CPU plus programmable logic. The other is 3D system-in-package, or mostly what has come to be called 2.5D: flipping microbumped die onto a silicon interposer. More about that is also here, and see the wiki on 3D chips here. The main thing about both of these approaches is that they tighten the connection between CPU and FPGA.

So Xilinx created the Zynq series (and not just to get the all-time high score at Scrabble), containing:

  • Dual ARM Cortex-A9 MP
  • Integrated ARM memory controllers and peripherals
  • Integrated on-chip memory
  • 28nm Xilinx state-of-the-art FPGA
  • Flexible array of I/Os: serial transceivers, analog-to-digital converter inputs

So it is a blend of CPU and FPGA. Or, looked at another way, it is an SoC with just the CPU subsystem realized in custom silicon and everything else realized by programming the FPGA. By adding cache coherency between the processor cores and the FPGA you get automatic cache-to-cache data movement, low-latency access to FPGA data from the CPU and vice versa, and no need for repeated cache flushing.
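
As a sketch of what that low-latency CPU-side access can look like in practice, here is how a register implemented in the FPGA fabric might be touched from embedded Linux running on the ARM cores. The base address and register offsets are hypothetical placeholders, not from any Xilinx documentation; a real design would take them from its own address map.

```python
# Sketch: touching an FPGA-fabric register from Linux on the ARM cores
# via /dev/mem. FPGA_BASE and the register offsets are hypothetical.
# Needs root and real hardware; shown only as a sketch.
import mmap, os, struct

FPGA_BASE = 0x40000000   # hypothetical AXI base address of an FPGA peripheral

fd = os.open("/dev/mem", os.O_RDWR | os.O_SYNC)
try:
    reg = mmap.mmap(fd, mmap.PAGESIZE, offset=FPGA_BASE)
    try:
        (status,) = struct.unpack_from("<I", reg, 0x0)   # read 32-bit status reg
        print(f"status = 0x{status:08x}")
        struct.pack_into("<I", reg, 0x4, 0x1)            # write 32-bit control reg
    finally:
        reg.close()
finally:
    os.close(fd)
```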

Standards are clearly going to be important in the crossover world: standards for FPGA-CPU interconnect and protocols, and rules for TSVs, microbumps and the interoperability of silicon from different companies.

In conclusion:

  • FPGA landscape will change
  • Heterogeneous multicore and 3D are key
  • Standards are key enablers
  • New business models will emerge

Wally’s u2u keynote
by Paul McLellan on 04-27-2011 at 3:25 pm

I was at Wally’s u2u (Mentor user group) keynote yesterday. The other keynote was by Ivo Bolsens of Xilinx and is covered here. Wally started off by looking at how the semiconductor industry has recovered: silicon area shipments are now back on trend after a pronounced drop in 2009, and revenue has followed. Finally the semiconductor business broke through the $300B barrier that it had nudged up against a couple of times in the past.

Then Japan’s tsunami came along. With 22% of world semiconductor production, Japan is the #1 region, slightly ahead of Taiwan (I was surprised by this, as I assumed Taiwan was #1, but Wally’s numbers are always rock solid). But things seem to be recovering reasonably fast, and the current forecast is that world GDP growth will be 3.6% versus 3.9% before the tsunami, a real drop but not a catastrophe.

Over the last 20 years Japanese industry has really specialized. Back then every company made VCRs, every company made TVs, and so on. And there were (and still are) some odd combinations, Yamaha in musical instruments and motorbikes, for example. But now that Japanese companies have specialized, in many cases they hold very large market shares of key ingredients for downstream processes. For example, one of the ingredients for black automotive paint was made in just one factory, destroyed in the tsunami, leaving Ford able to offer you a car in “any color you want as long as it is not black.” This combination of specialization and sole-sourcing has had tremendous economic benefit: learning is not diluted by being spread across multiple companies, and inventory and WIP are reduced. There is a risk of a backlash against this as a result of the disaster in Japan, a move towards dual sourcing and geographic dispersion. But that carries huge increased costs, along with a decrease in R&D efficiency, a reduced rate of quality improvement, increased inventories and so on.

EDA also has tremendous specialization. There are 68 product categories with more than $1M in revenue, almost all of them dominated by a single company: big segments dominated by big EDA companies and small ones by small companies. In many cases the #1 has over 70% market share and never less than 40% (for the big categories).

EDA has consistently been about 2% of semiconductor revenue, although that share has gradually declined since 2001 to 1.5%. Unfortunately, EDA headcount has been increasing faster than EDA revenue, leading to huge pressure on costs and on the need for each company to grow its market share to get back in balance.

As Wally pointed out in his 2006 DAC keynote, EDA grows from new segments that spring into existence, become large and then remain at that size almost indefinitely. In 2006 he predicted lots of growth in DFM, ESL and AMS/RF and, lo, so it came to pass.

Looking forward, where will the growth come from? Wally’s predictions:

  • High level design: HLS, virtual platforms etc
  • Intelligent test-bench: automatic generation of tests scalable to multicore speedups, and integrated with emulation (including emulating the test harness too)
  • Physical verification and implementation. This is not a new area, of course, but the complexity of new design rules means that a completely new approach is needed to get a single pass flow that takes the real rules (including DFM, impact of metal fill on timing etc) into account early enough
  • Embedded software. The cost of designing the silicon of an SoC has been roughly constant for several years. But the cost of the embedded software for that SoC has exploded. There is an opportunity to take some of the re-use and automated verification from the silicon world into the software world.