
Silicon IP to take over CAE in EDAC results… soon but not yet!
by Eric Esteve on 07-20-2011 at 11:44 am

Very interesting results released by EDAC for Q1 2011: Computer Aided Engineering (CAE) is still the largest category at $530.6M, but the second category is now Silicon IP (SIP) at $371.4M, followed by IC Physical Design & Verification at $318.5M. Even more significant are the four-quarter moving average results, which show growth in every category: +12.9% for CAE and +7.6% for IC PD&V, but as high as +27.9% for SIP!

To check whether this is really a long-term trend, I had a look at the results for Q1 2006, the same quarter five years earlier. At that time, CAE was at $510M and IC PD&V at $315M, both very close to the 2011 results, while SIP was at $225M. In other words, the two largest EDA categories, based purely on S/W tools, have stayed almost flat over five years, while SIP has grown by 65%! If SIP keeps the same growth rate (and CAE stays flat), it will take only three years for SIP to pass CAE. That is, SIP should become the largest category reported by EDAC during 2014.
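For what it's worth, here is a quick back-of-the-envelope check of that three-year figure (my own sketch, using the Q1 numbers above and the 13.5% CAGR cited in the next paragraph, with CAE assumed flat):

```python
import math

# Q1 2011 EDAC figures ($M) and the five-year SIP CAGR cited below
sip, cae, cagr = 371.4, 530.6, 0.135

# Years for SIP to catch a flat CAE: sip * (1 + cagr)**t = cae
years = math.log(cae / sip) / math.log(1 + cagr)
print(f"SIP passes CAE after {years:.1f} years")  # ~2.8 years, i.e. during 2014
```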

Honestly, I don’t see any reason why this would not happen. The Compound Annual Growth Rate (CAGR) for SIP (at least from the companies that are part of EDAC) has been 13.5% over the last five years. The forecast we have built at IPnest for the Interface IP segment alone exhibits a 14.3% CAGR between 2010 and 2015. In fact, if we take into account the revenues coming from ALL the IP vendors, we will see that Silicon IP licensing revenues are already at the same level as CAE revenues!

From Semico: The SIP market is undergoing a round of consolidation, with the number of companies shrinking by approximately 50% in 2010 compared to 2000. However, this is not a sign of a weakening market, but rather of the market sorting itself out, with strong contenders consolidating their positions.
Semico projects this market to continue to grow, exhibiting a CAGR of 12.6% from 2010 to 2015.

If you take a look at the EDAC member list, you will realize that, while the most important IP vendors like ARM Ltd or Synopsys are members, most of the “small” IP vendors are not. This means that the SIP category as reported by EDAC is representative of market trends (when SIP grows, the overall IP market grows) but is not a 100% precise image of the IP market. If we consolidate the EDAC SIP category results for 2010, they come to $1,300M. We know the overall IP market is much larger, but we have no direct data. The latest available data are the 2006 results as reported by Gartner: $1,770M, including “Technology Licensing” revenue of $442.7M from companies like Rambus, IBM, Saifun, Nvidia or MOSAID Technologies. If we remove the Technology Licensing revenues, that leaves $1,327M for SIP licensing alone in 2006. Then, if we apply the same growth rate seen in the EDAC results between 2006 and 2010, or 45%, the estimate of overall Silicon IP licensing revenues comes to $1,924M in 2010. This simply means that overall SIP revenues were already very close to CAE revenues in 2010! For reference, the latter, as reported by EDAC, was $2,006M.
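Here is the same estimate worked through step by step (just a sketch of the arithmetic in the paragraph above, nothing more):

```python
# Gartner 2006 figures ($M) and the EDAC-derived growth rate, as cited above
gartner_2006_total = 1770.0   # overall SIP revenue incl. Technology Licensing
tech_licensing_2006 = 442.7   # Rambus, IBM, Saifun, Nvidia, MOSAID, etc.
sip_licensing_2006 = gartner_2006_total - tech_licensing_2006  # ~$1,327M

growth_2006_to_2010 = 1.45    # EDAC SIP category grew ~45% over the period
sip_licensing_2010 = sip_licensing_2006 * growth_2006_to_2010
print(f"Estimated overall SIP licensing in 2010: ${sip_licensing_2010:,.0f}M")
# ~$1,925M -- essentially the $1,924M above, vs. $2,006M for CAE
```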

By Eric Esteve from IPnest


EDA Consortium Newsletter, Q1 2011
by Daniel Payne on 07-19-2011 at 1:08 pm

Each quarter, the EDA Consortium publishes the Market Statistics Service (MSS) report containing detailed revenue data for the EDA industry. The report compiles data submitted confidentially by both public and private EDA companies into tables and charts listing the data by both EDA category and geographic region. This newsletter highlights the results for the first quarter of 2011. Additional details are available in the press release, or by subscribing to the EDA Consortium MSS report.
Overall, first quarter 2011 EDA revenues increased 16% compared to the same period in 2010. Total revenue for Q1 was $1446.4 million. Figures 1 and 2, below, summarize the revenue growth for Q1 2011 compared to Q1 2010, detailed by category (figure 1) and geographic region (figure 2). The MSS report contains many additional sub-categories, allowing subscribers to perform a more detailed analysis of revenues affecting their business. A complete list of categories for 2011 is available here.

Figure 1: Q1 2011 EDA revenue growth by category
Figure 2: Q1 2011 EDA revenue growth by region
Tables 1 and 2, below, show the percentage growth for the EDA industry by major category and region. (Negative growth is listed in parentheses).
Category                            Revenue ($M)   % Change
CAE                                      530.6        15.7
IC Physical Design & Verification        318.5        16.1
PCB & MCM                                140.4        28.3
SIP                                      371.4        15.7
Services                                  85.6         2.2

Region     Revenue ($M)   % Change
Americas        602.4        22.2
EMEA            241.8         7.8
Japan           295.3        17.7
APAC            307.0        10.0

Table 1: Q1 2011 EDA revenue growth by category
Table 2: Q1 2011 EDA revenue growth by region
Figure 3 (below) shows the EDA revenue percentages by major category. As the chart shows, CAE remains the largest category, followed by Semiconductor IP and IC Physical Design & Verification tools. Geographically, the Americas is the largest consumer of EDA tools, with the remainder divided amongst Europe, Middle East, and Africa (EMEA), Japan, and Asia Pacific (APAC), as shown in Figure 4.

Figure 3: Q1 2011 EDA revenue percentage by category
Figure 4: Q1 2011 EDA revenue percentage by region
Figure 5 shows the historical EDA revenue for the major categories (CAE, PCB & MCM, IC Physical Design & Verification, SIP and Services) from Q1 1996 through Q1 2011. Each quarter’s MSS report contains detailed data for the current year as well as the previous three years’ quarterly data, in both tabular and graphical formats.

Figure 5: EDA revenue history, 1996 – present
Data is reported confidentially to an independent accounting firm, which allows both public and private companies to report revenue data by detailed category. Individual company data is not reported, and steps are taken to further protect individual data for categories with a small number of reporting companies. Contributing data is free, and contributors will receive the quarterly MSS executive summary report. The full report is available via subscription, and contains substantially more detailed information for EDA revenues by category and region, providing the information subscribers need to analyze trends in EDA.
For more information on the MSS report, including information on subscribing to the report and the benefits of joining the EDA Consortium, please visit the EDAC web site, or email mss11@edac.org.


Synopsys Virtualizer
by Paul McLellan on 07-19-2011 at 8:00 am

As you probably know, Synopsys last year acquired VaST and CoWare, and a couple of years earlier had acquired Virtio. All three companies competed primarily in the virtual platform market. In addition, Synopsys is the #2 IP company (behind ARM) and has a wide range of tools for SoC design. So the interesting question is how they would pull all this technology together.

I talked to them yesterday and focused the discussion on what is new. I’ve written many times elsewhere about the attractiveness of the value proposition of virtual platforms for software development, and for tying the software development teams more tightly to the hardware (often chip, but not necessarily) development teams. The challenge has never been that message, but the practicalities of implementing it. The two big challenges were always how to create the models in a timely enough manner for the software developers, and how to tie the virtual platform into the environment that the software developers wanted to use anyway (and, to a lesser extent, how to tie it into the environment that the system and chip developers were using anyway). Obviously, the later the models arrive and the more cumbersome the integration, the less attractive switching to the virtual platform approach becomes.

The answer turns out to be a new product called Virtualizer. It pulls together the three simulation environments. There is more integration to be done, but this is the first product that allows all the models to run together, and so it unifies the portfolio of models from all three companies, along with models for Synopsys DesignWare. None of the old ways of doing modeling are made obsolete, but for non-processor models the preferred approach is SystemC TLM. For processor models, which are actually JIT cross-compilers under the hood, there is a proprietary approach (SystemC models of a processor would never be fast enough for software development). To make creating specific platforms faster, they have also created reference platforms that provide an easy starting point: remove unwanted peripherals and add new ones without having to start completely from scratch.

The next area they have focused on is going beyond just running code fast but addressing integration of debug and analytics with the platform. The idea, as with most software debug/testing environments, is to reduce the time from when a problem is detected until the line of code creating the problem is identified.

The third problem they have focused on is how to fit Virtualizer into existing flows. Not just flows for software development, although that is obviously one of the main challenges, but also interfacing virtual platforms to other tools such as VCS simulation, emulation and HAPS FPGA prototyping. Further, in the three main markets where they are focused (wireless, automotive and semiconductor), they are working on higher levels of system integration, such as tying a handset simulation together with a base-station simulator. Some work is also going on in secondary areas such as networking, aerospace and industrial.

One area that is making virtual platforms more important in automotive is the new ISO 26262 standard, concerned with the functional safety of automotive electronic and software systems. In Europe this is already almost being treated as a regulation. Of course, there have been several high-profile recalls due to software problems, most notably the braking problems of the Toyota Prius, which also puts a premium on ensuring that adequate testing is done, can be documented, can start earlier and so on. Precisely the attributes that virtual platforms deliver.

If you want to play with virtual prototyping, Synopsys has a cloud-based demonstration that lets you play around for a couple of hours. There is also a recent webinar on optimizing power management with virtual platforms.


SpringSoft Community Conferences
by Paul McLellan on 07-18-2011 at 5:31 pm

During the next 6 months or so, SpringSoft will be running a dozen community conferences. These are open not just to users but to anyone interested in SpringSoft’s technology.

There will be three conferences in the US in October, in Irvine, Austin and San Jose; for more details as they become available, check here. There will also be three in Bangalore (India), Seoul (Korea) and Yokohama (Japan).

But the first three, coming up next month, are in Taiwan and China.

August 4th in Hsinchu, Taiwan (八月四号在新竹台灣)

August 10th in Shanghai, China (八月十号在上海中国)

August 12th in Beijing, China (八月十二号在北京中国)

All have the same agenda. The morning consists of two keynotes followed by lunch. In the afternoon there are parallel sessions covering either functional verification (Verdi and ProtoLink) or physical layout (Laker). The day wraps up at 5pm with closing remarks and a drawing for an iPad 2.

Full details of the Asian seminars are here.


Variation Analysis
by Paul McLellan on 07-18-2011 at 1:33 pm

I like to say that “you can’t ignore the physics any more” to point out that we now have to worry about lots of physical effects that we never needed to consider before. But “you can’t ignore the statistics any more” would be another good slogan. In the design world we like to pretend that the world is pass/fail. But manufacturing is actually a statistical process and isn’t pass/fail at all. One area that is getting worse with each process generation is process variation, and it is now breaking the designer’s genteel pass/fail model.

For those of you interested in variation, there is an interesting research note from Gary Smith EDA. One of the biggest takeaways is that, of course, you are interested in variation if you are designing ICs in a modern process node, say 65nm or below. In a recent survey of design engineering management, 37% identified variation-aware design as important at 90nm, rising all the way to 95-100% at 28nm and 22nm. If you are not worrying about variation now, you probably should be and certainly will be. 65nm seems to be the tipping point.

Today, only about a quarter of design organizations have variation-aware tools deployed, with another quarter planning to deploy this year. The only alternative to using variation-aware tools is to guard-band everything with worst-possible-case behavior. The problem is that at the most advanced process nodes there isn’t really any way to do this: the worst-case variation is just too large. The basic problem is well illustrated by this diagram: for some parameter, the typical (mean) performance advances nicely from node to node, but the worst-case performance doesn’t advance nearly as much, since the increased variation means that the point some number of standard deviations from the mean hardly moves (and can even get worse). Inadequate handling of variation shows up as worse performance in some metric, forces respins when the first design doesn’t work or, when problems get detected late in the design cycle, leads to tapeout delays.
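To make that concrete, here is a toy illustration (made-up numbers, not from the Gary Smith report) of how the mean can improve from node to node while the 3-sigma worst case barely moves:

```python
# A delay-like parameter where lower is better: the mean improves at each
# node, but sigma grows, so the 3-sigma worst case barely improves at all.
nodes = [
    ("65nm", 1.00, 0.05),   # (node, mean, sigma) -- hypothetical values
    ("45nm", 0.80, 0.08),
    ("28nm", 0.64, 0.12),
]
for name, mu, sigma in nodes:
    print(f"{name}: typical {mu:.2f}, 3-sigma worst case {mu + 3 * sigma:.2f}")
# Typical improves 1.00 -> 0.64 (36%), but worst case only 1.15 -> 1.00 (13%):
# guard-banding to the worst case throws away most of the node's gain.
```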

All the main foundries have released reference flows that incorporate variation analysis tools, primarily from Solido Design Automation.

Solido is the current leader in tools to address variation. The tools are primarily used by people designing at the transistor level: analog and RF designers, standard-cell designers, memory designers and so on. STARC in Japan recently did a case study in which the Solido variation tools exceeded STARC’s performance specifications across process corner and local mismatch conditions. Solido is also in the TSMC 28nm AMS 2.0 reference flow and has been silicon validated.

Gary Smith’s full report is here.
Solido’s website is here.
TSMC AMS 2.0 Wiki is here.


Richard Goering does Q&A with ClioSoft CEO
by Daniel Payne on 07-18-2011 at 11:05 am

Richard Goering is well-known from his editorial days at EE Times (going back some 25 years); now at Cadence, he blogs at least once a week on EDA topics that touch Cadence tools.

Before DAC he talked with Srinath Anantharaman about how Cadence tools work together with ClioSoft tools to keep IC Design Data Management Simple.

Through just nine questions Richard finds out where ClioSoft came from, how their tools work inside of a Cadence IC flow, and what is new at DAC this year.

Also Read

Hardware Configuration Management at DAC

Cadence Virtuoso 6.1.5 and ClioSoft Hardware Configuration Management – Webinar Review

How Avnera uses Hardware Configuration Management with Virtuoso IC Tools


Intel Briefing: Tri-Gate Technology and Atom SoC
by Daniel Nenni on 07-17-2011 at 3:00 pm

Sorry to disappoint, but my two hours at the Intel RNB were a very positive experience. It is much more fun writing negative things about industry leaders because I enjoy the resulting hate mail and personal attacks, but the candor and transparency of the Intel guys won me over. They even asked ME questions, which was a bit telling. I also picked up a very nice Intel hat. I now blog for hats!

The first meeting was with Jon Carvill, Mobile Media guy at Intel. Before that he was VP of Communications at GlobalFoundries and Head of PR at AMD/ATI. I worked with Jon at GlobalFoundries; he’s a stand-up guy, very technical, and almost 7 feet tall. I don’t take pictures with Jon since he makes me look like a dwarf.

The second meeting was with Rob Willoner, a long time manufacturing guy at Intel and Radoslaw Walczyk, an Intel PR guy. You can find Rob’s Tri-Gate presentation HERE. In these types of meetings you watch the face of the PR guy when the technology guy answers questions. If the PR guy flinches you are getting good information!

The questions they asked me were about 40nm yield and 28nm ramping (I will blog on that next week). It was interesting that the conversation went there.

The questions I asked them were about Tri-Gate and Atom in regards to the foundry business. I’m a foundry guy and would really like to see Intel get serious and “raise the foundry competition bar”. With that said, here are my comments on Intel in the foundry business, Tri-Gate technology, and Atom SoCs:

1. Intel is definitely serious about the foundry business, not only to promote Atom as an SoC block, but also to fill 22nm capacity. Intel will start the foundry business with FPGAs from Achronix and Tabula. FPGAs have very regular structures, which makes it easier to tune a process for them. FPGA performance is also important, and Intel is certainly the expert on high-speed silicon.
2. Intel will not manufacture ARM designs. This kills the “Apple to foundry at Intel” rumors. The Apple A6 processor will be fabbed at TSMC 20nm HKMG using ultra-low-power 3D IC technology, believe it! This also makes Intel a “boutique” foundry like Samsung and not an “open” foundry like TSMC. That position could change of course, but probably not in my lifetime.
3. Intel still has a lot to learn about a pure-play foundry design ecosystem. None of my design questions were answered, simply because they did not know. Example: Intel does not acknowledge the term restricted design rules (RDRs), since microprocessor design rules have always been restricted. TSMC just went to RDRs at 28nm as a result of the 40nm ramping problem. More about that in the next blog.
4. The Ivy Bridge processor is not in production at 22nm. It’s a test vehicle only and will not be in production until sometime next year. 22nm Atom SoC production will be in 2013. The Intel PR guy flinched at this one. 😉 To be fair, Intel production levels are much higher than most, so the Intel internal definition of production is not the same as the Intel PR definition.
5. What is the difference between Tri-Gate and FinFET? Tri-Gate is a type of FinFET; FinFET is the more general term. Intel’s Tri-Gate work started in 2002, and the current implementation uses standard semiconductor manufacturing equipment with a few extra steps. More on Tri-Gate HERE.
6. Tri-Gate manufacturing costs are +2-3%? That would be wafer manufacturing cost only, which does not include mask and other prep costs. 2-3% is definitely PR spin and not the actual cost delta.

Clearly this is just scratching the surface of the briefing, so if you have questions, post them in the comment section and they will get answered. You have to be a registered SemiWiki user to read/write comments; when you register, put “iPad2” in the referral section and you might even win one.

By the way, when I’m not in Taiwan, I’m on the Iron Horse Trail with my new walking partner Max. Max is a six month old Great Dane and he already weighs 110 lbs. I like how Max’s big head makes mine look small. Peet’s Coffee in Danville is our favorite destination so stop on by and say “Hi”. Be nice though or Max will slobber on you.


Webinar: IP integration methodology
by Paul McLellan on 07-17-2011 at 12:24 pm

The next Apache webinar is coming up on 21st July at 11am Pacific time, on “IP integration methodology”.

This webinar will be conducted by Arvind Shanmugavel, Director of Applications Engineering at Apache Design Solutions. Mr. Shanmugavel has been with Apache since 2007, supporting the RedHawk and Totem product lines. Prior to Apache he worked at Sun Microsystems for several years, leading various design initiatives for advanced microprocessor designs. He received his Master’s in Electrical Engineering from the University of Cincinnati, Ohio.

Today’s SoCs consist of several IP blocks, developed internally or externally. Successful integration of IP into a single-chip design requires a methodology that considers the power-noise impact of merging sensitive analog circuitry with high-speed digital logic on the same piece of silicon. In addition, it must handle the sharing of IP information and knowledge between disparate design groups, to ensure the design will work to specification and at the lowest cost. Apache’s power analysis and optimization solutions allow IP designers to validate their designs and create protected, portable models that can be used for mixed-signal analysis and SoC sign-off. Apache’s IP Integration Methodology targets the design, validation, and cost reduction of highly integrated mixed-signal SoCs, to help deliver robust single-chip designs.

More details on the webinars here.

Register to attend here (and don’t forget to select semiwiki.com in the “How did you hear about it?” box).


First low-power webinar: Ultra-low-power Methodology
by Paul McLellan on 07-13-2011 at 12:10 pm

The first of the low-power webinars is coming up on July 19th at 11am Pacific time. The webinar will be conducted by Preeti Gupta, Sr. Technical Marketing Manager at Apache Design Solutions. Preeti has 10 years of experience in the exciting world of CMOS power. She has a Master’s in Electrical Engineering from the Indian Institute of Technology, New Delhi, India.

Meeting the power budget and reducing operational and/or stand-by power requires a methodology that establishes power as a design target during the micro-architecture and RTL design process, not something left until the end of the design cycle. Apache’s analysis-driven reduction techniques allow designers to explore different power-saving modes. Once RTL optimization is complete and a synthesized netlist is available, designers can run layout-based power integrity analysis to qualify the success of the RTL-stage optimizations, ensuring that the voltage drop in the chip is contained. Apache’s Ultra-Low-Power Methodology enables successful design and delivery of low-power chips by offering a comprehensive flow that spans the entire design process.

More details on the webinars here.

Register to attend here (and don’t forget to select semiwiki.com in the “How did you hear about it?” box).


And it’s Intel at 22nm but wait, Samsung slips ahead by 2nm…
by Paul McLellan on 07-12-2011 at 12:46 pm

Another announcement of interest, given all the discussion of Intel’s 22nm process around here, is that Samsung (along with ARM, Cadence and Synopsys) announced that they have taped out a 20nm ARM test-chip (using a Synopsys/Cadence flow).

An interesting wrinkle is that at 32nm and 28nm they used a gate-first process, but for 20nm they have switched to gate-last. Of course, taping out a chip is different from having manufactured one and gotten it to yield well. There have been numerous problems with many of the novel process steps in technology nodes below 30nm.

The chip contains an ARM Cortex-M0 along with custom memories and, obviously, various test structures.

It is interesting to look at Intel’s versus Samsung’s semiconductor revenues (thanks Nitin!). In 2010 Intel was at $40B and Samsung was at $28B. But Samsung grew at 60% versus “only” 25% for Intel. Another couple of years of that and Samsung will take Intel’s crown as the #1 semiconductor manufacturer.
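A naive extrapolation of those two growth rates shows how quickly that happens (my own arithmetic, obviously not a forecast):

```python
intel, samsung = 40.0, 28.0   # $B, 2010 semiconductor revenue per the post
for year in (2011, 2012):
    intel *= 1.25             # Intel's 25% growth rate, extrapolated
    samsung *= 1.60           # Samsung's 60% growth rate, extrapolated
    print(f"{year}: Intel ${intel:.0f}B, Samsung ${samsung:.0f}B")
# 2011: Intel $50B, Samsung $45B
# 2012: Intel $62B, Samsung $72B -- Samsung on top in year two
```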

As I’ve said before, Intel needs to get products into the fast-growing mobile markets, and I’m still not convinced that Atom’s advantages (Windows compatibility) really matter there. Of course, Intel’s process may be enough to make it competitive, but that depends on whether Intel’s wafers are cheap enough.