
Global Technology Conference 2011
by Daniel Nenni on 07-24-2011 at 1:13 pm

Competition is what made the semiconductor industry and semiconductors themselves what they are today! Competition is what drives innovation and keeps costs down. Not destructive competition, where the success of one depends on the failure of another, but constructive competition that promotes mutual survival and growth, where everybody can win. The semiconductor design ecosystem, on the other hand, is the poster child for destructive competition, which is why EDA valuations are a fraction of what they should be, but I digress…

GlobalFoundries is the first “truly global” foundry, which brings a different type of competition. Truly global means fabs in Dresden, New York, and Singapore, with a new fab planned for Abu Dhabi and more planned for other parts of the world. India? Russia? If they put a fab in Russia maybe Sarah Palin can see it from Alaska! 😀

The first Global Technology Conference was one of the best I have attended. It was packed with semiconductor industry executives from around the world. Even as a lowly Blogger, I was welcomed with executive interviews and V.I.P. treatment all the way. The Global guys are a class act, believe it.

This year:
GLOBALFOUNDRIES senior executives and technologists share their vision and perspectives on driving leading-edge technology innovation through True Collaboration as the industry moves to the 32/28nm technology node and beyond.

In addition to the technical highlights of the GFI roadmap to 20nm and 3DIC, here is the meat of the conference as planned today:

The GlobalFoundries ecosystem partners will also be there for discussions and demonstrations:

This conference is all about communication within the semiconductor design and manufacturing ecosystem which is the biggest challenge we face as an industry today. It’s time to take action. It’s time to take personal responsibility for the industry that supports my extravagant lifestyle. Attend this conference and make a difference!


Intel Q2 Financial Secret: “Shhhh….We’re on Allocation”
by Ed McKernan on 07-22-2011 at 10:47 pm


Every Semiconductor Analyst has given Intel the once over a hundred times about their slowing PC unit volume. They are looking in the wrong place because the true secret of the Q2 earnings – in my humble opinion – is that Intel’s factories are full and parts are on allocation. What???

Check it out: the high-end, 8 and 10 core XEON processors introduced this spring are selling for between $100 and $1200 over list on the gray market. Gray markets act as a bleed-off valve: in times of production excess, parts sell for under list, while in times of shortage a customer will make a quick buck selling out the back door to others who have an urgent need.

I didn’t have a clue about the current “Allocation Situation” until I listened to the earnings conference call. From my perspective it was all stellar until they got to the data center revenue growth. It was up only 15% year over year. I was expecting 25-30% – which is what Intel cranked out the last 3 quarters. Why 15% after Google ups the Cap Ex by 100% and IBM waxes about the Cloud?

With a 30% year over year increase in the Data Center Group, Intel would have hit $13.4B in revenue – a true blowout. Then, to add to the intrigue, they go on to forecast Q3 revenue of $14B +/- $500M, a large increase in R&D, and even more Cap Ex for 2011. How can this be if PC sales are lagging and dividends are being shoveled out the door at a furious pace? Did Paul Otellini lose control of the checkbook?
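
Here is a minimal back-of-envelope check on that counterfactual. The 15% actual and 30% hypothetical growth rates come from the article; the actual Q2 2011 revenue (roughly $13.0B) and the year-ago Data Center Group base (roughly $2.2B) are my own rough assumptions, not figures from the article:

```python
# Back-of-envelope check on the "$13.4B blowout" counterfactual. The growth
# rates come from the article; the revenue figures below are my own rough
# assumptions, not numbers reported in the piece.

actual_q2_revenue   = 13.0    # $B, assumed actual Q2 2011 total revenue
dcg_year_ago        = 2.2     # $B, assumed Data Center Group revenue, Q2 2010
actual_growth       = 0.15    # DCG growth Intel reported, per the article
hypothetical_growth = 0.30    # the growth the author was expecting

extra = dcg_year_ago * (hypothetical_growth - actual_growth)
print(f"Total revenue with 30% DCG growth: ~${actual_q2_revenue + extra:.1f}B")  # ~$13.3B
```

Under those assumptions the sketch lands within about $0.1B of the author’s $13.4B figure, which is as close as a back-of-envelope check can promise.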

I believe there once was a high tech exec that said, “cash flow is more important than your mother.” Obviously he wasn’t invited over to Sunday dinner with Mom after that. History shows that Otellini runs a tight ship but makes strong bets on forward trends. The trend in Data Center is strong and worthy of writing some big checks for more capacity and a few hundred more engineers to kill ARM by 14nm (more about this in a later column).

Here’s a second little nugget to chew on. Otellini knows that regardless of the shortfalls in netbooks – a minor $350M business per quarter – Intel is at a tipping point with data center revenue, and that profits are compounding at a staggering rate. Gross margins of 80%+ and operating margins of 50%+ mean he needs to go ahead and build the fabs as fast as possible to capitalize on the customers that are waiting for these new ultra-efficient, high-performance server chips that reduce sky-high power and cooling bills.

These new XEON processors, introduced in the spring at an ASP 25% higher than the old models, are mighty big die, with some measuring over 500mm2 in area. The pricing data suggests that they are not yet yielding well enough to satisfy demand, but they are yielding well enough to be extremely profitable. All it takes is 5 or 10 good die per wafer to hit the high profit margins. So while the XEON family makes up less than 6% of Intel’s unit volume, it probably occupies a complete fab.
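
To see why a handful of good die per wafer is enough, here is a rough sketch of the wafer economics. The ~500mm2 die size comes from this article and the chip prices match the $2000 to $4600 Xeon range cited later in the piece; the 300mm wafer, edge exclusion, and “good die” counts are purely illustrative assumptions:

```python
# Rough wafer-economics sketch for a ~500 mm^2 Xeon-class die. The die area is
# from the article and the prices match the $2000-$4600 range it cites; the
# wafer size, edge exclusion, and good-die counts are illustrative assumptions.

import math

WAFER_DIAMETER_MM = 300     # assumed 300 mm production wafer
EDGE_EXCLUSION_MM = 5       # assumed unusable ring at the wafer edge
DIE_AREA_MM2      = 500     # big Xeon die, per the article

usable_radius = WAFER_DIAMETER_MM / 2 - EDGE_EXCLUSION_MM
usable_area   = math.pi * usable_radius ** 2
gross_die     = int(usable_area // DIE_AREA_MM2)   # crude upper bound; ignores scribe lines

print(f"Gross die per wafer (upper bound): {gross_die}")
for good_die in (5, 10, 50):
    for price in (2000, 4600):
        print(f"  {good_die:2d} good die x ${price}/chip = ${good_die * price:,} per wafer")
```

Even at the low end of that price range, 10 good die cover roughly $20,000 of revenue per wafer, which is why yields only have to be “good enough” before the product becomes extremely profitable.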

One can ask, if this scenario is true, what is the impact on the notebook and desktop space? I believe Intel built enough to satisfy the typical seasonal demand and then cut over all other wafers to the server side, leaving some crumbs for AMD – which turned in a good quarter for Q2 and forecast a strong Q3.

Back in the 1990s Intel experienced capacity crunches multiple times during the 386, 486 and Pentium ramps. Their solution was to hold prices flat for several quarters. So I reviewed Intel’s CPU pricing since January, across what has to be 300 SKUs by now, and I cannot find one CPU that has seen a price reduction. The second derivative of the flat pricing is that AMD sees a surprise pickup and/or the industry sees lower unit volume.

So the mystery grows… is the PC slowdown to single-digit growth actually due, at least in part, to a capacity crunch at Intel and not to the iPad or the crummy economy in Europe or the US? I believe that Paul Otellini’s end game is at 14nm, and he would certainly sacrifice PC units with $20 Atom chips in netbooks and $60 Celerons destined for low-end notebooks in order to sell $2000 – $4600 XEON chips that will fund both the accelerated deployment of larger 14nm fabs that come on line in 2013 and the armies of engineers designing x86 for tablets and smartphones going into those same fabs.

Full Disclosure: As an investor I am long INTC. However this is not a recommendation to buy any of the stocks covered in this article. Every investor needs to do his homework with regards to investing.


PowerArtist webinar
by Paul McLellan on 07-21-2011 at 3:21 pm

The next Apache webinar is on PowerArtist, RTL Power Analysis on July 26th at 11am Pacific time. The webinar will be conducted by David “Woody” Norwood, Principal Applications Engineer at Apache Design Solutions. David has been supporting RTL Power products for the past 8 years. He has broad EDA industry experience with 25 years in a variety of Applications Engineering and Management roles focusing on power and logic verification technologies.

PowerArtist is a complete RTL design-for-power platform providing fully integrated advanced analysis and automatic reduction technologies, covering sequential logic, combinational clock gating, memory, and datapath for complex IP and SoC designs. By enabling analysis, reduction, and optimization early in the design cycle, PowerArtist helps designers meet power budget requirements and increase the power efficiency of their ICs.

To register for the webinar go here.


Intel’s Barbed Wire Fence Strategy
by Ed McKernan on 07-21-2011 at 11:38 am

Analysts tend to make judgments about Intel based on an existing conventional wisdom (CW) and project it in a straight line into the future. As a former Intel, Cyrix, and Transmeta processor marketing guy, I would like to offer a different perspective, as I have been both inside the tent looking out and outside looking in.

The current CW is that Intel is doomed… it’s OK, we have been here before. Each time CW says Intel is doomed they implement what I will call their Barbed Wire Strategy to counter their threats and expand their market and influence. If I may, I will explain the Barbed Wire Strategy.

A month ago I moved with my family to Austin. I had the job of driving our car with one of my boys from Silicon Valley through Arizona, New Mexico and West Texas. In West Texas there are a lot of ranchers with big tracts of land (thousands of acres), all ringed with barbed wire. It is not particularly high, but it is there to keep cattle in and people out. These days it’s also keeping in a lot of windmills. Typically the ranch house is deep inside the property, off a long dirt road that most people couldn’t even find the entrance to. Consider the ranch house like Intel’s processors: they are very valuable and have existed forever. If someone were to invade the property, they would not make it to the house. Now if the rancher wants to increase his property to handle more cattle (or windmills), he can buy the property next to him, move the barbed wire fence farther out, and increase his personal wealth.

Today’s CW is that the ARM Camp has Intel’s number and the game is over. My take is that Intel is already well down the path to implementing the Barbed Wire Strategy on a number of fronts. I will talk about servers today.

Warren Buffett talks about investing in companies with high castle walls and big moats; however, in the ever-changing tech business you need the Barbed Wire Strategy. At the first sign of a competitive threat, Intel looks to expand the property lines and move its barbed wire farther out from the center of the ranch. This past week they acquired Fulcrum, a switch chip startup that competes with Broadcom – except no system guy would buy from Fulcrum for fear they would not be around in a crunch. Intel acquired Fulcrum in order to own the whole line card in the Data Center (sans DRAM). The switch business is worth around $1B for Broadcom. So just as Broadcom commoditized Cisco’s switch business over the past 5 years, now Intel will commoditize Broadcom’s and Marvell’s switch business. Intel may not get the whole $1B of revenue, but it will be additive and, more importantly, it will move the barbed wire fence farther out from the ranch house.

For someone new to the industry, a review of history helps. The clearest example I can give is from the early 1990s, when Intel was facing a resurgent AMD and new processor vendors Cyrix, NexGen and C&T. The chipset market was a thriving third-party market. Intel wanted to increase the barriers to entry for all, so they took it upon themselves to develop their own chipset for the Pentium. The chipset added a minor amount of revenue, but the protective barrier it set up allowed Pentium prices to rise dramatically. Chipset vendors melted away, along with Cyrix. AMD acquired NexGen.

Expect Data Center revenues to rise with the Fulcrum acquisition and more importantly start thinking about the impact this will have on the ARM vendors aiming for the server space.

More Barbed Wire stories to follow.

Want to learn Mixed-Signal Design and Verification?
by Daniel Payne on 07-20-2011 at 6:13 pm

Workshops are a great way to learn hands-on about IC design technology. Mentor has a free workshop to introduce you to creating, simulating and verifying mixed-signal (analog and digital) designs.

PLL waveforms showing both digital and analog signals.

Dates in Fremont, California
July 26, 2011
September 15, 2011
November 8, 2011

Their tool is called Questa ADMS and spans both digital design with HDL and analog design using SPICE or Fast SPICE.

These tools work both in a Mentor environment and the Cadence environment.


Questa ADMS inside Design Architect IC


Questa ADMS inside of the Cadence Virtuoso Analog Design Environment

Overview

Mentor Graphics cordially invites you to attend a FREE “hands-on” Mixed-Signal Design and Verification Workshop. In this workshop we will explore the current trends of IC design and highlight the challenges these trends create. This workshop will expose you to comprehensive solutions necessary to improve your design and verification productivity.
During this lab-intensive technical workshop, you will gain first-hand experience evaluating Questa ADMS, Mentor’s Mixed-Signal Simulation Solution.
Lab 1 – Getting started with ADMS

  • Explore the ADMS graphical interface and infrastructure
  • Run digital and mixed-signal simulations with an adder and ADC circuits

Lab 2 – Mixed-Signal Simulation: Digital-centric

  • Learn about using analog SPICE circuits within a digital netlist hierarchy
  • Explore the analog-digital interface and how to bridge the domains
  • Understand the importance of validating analog and digital blocks together

Lab 3 – Mixed-Signal Simulation: Analog-centric

  • Learn how to use HDL behavioral models within a schematic
  • Observe the impact of AMS modeling on performance and functionality on a PLL circuit



Gary Smith on the Apache acquisition
by Paul McLellan on 07-20-2011 at 4:44 pm

Gary Smith has a note out about the Apache acquisition by ANSYS (unfortunately, if you get his email newsletter, the link there takes you to the wrong article, but it really is here or here as pdf). Most of the note actually describes the acquisition and the Apache product line, which won’t reveal much new to anyone here.

He regards the product lines as largely complementary: Apache brings the low-power IC design solutions, while ANSYS provides the extraction software for electronic packages and boards. Although Apache and ANSYS overlap with common customers, together they will offer complementary software solutions that enhance chip, package and board system design, particularly in the areas of electromagnetic interference (EMI), thermal stress and reliability, signal integrity and power integrity.


Oasys’s customers
by Paul McLellan on 07-20-2011 at 1:36 pm

I haven’t made a secret of the fact that I maintain Oasys Design Systems’ website. So I had a small task yesterday of adding Qualcomm to the list of customer logos that cycle through on the home page. It is a pretty impressive list, including Juniper Networks, NetLogic Microsystems, Texas Instruments and STMicroelectronics. Oh yes, and Xilinx, who have licensed the technology but haven’t publicly said anything about what they are doing with it.

Here’s the elevator summary of what makes Oasys’s Chip Synthesis different from traditional synthesis. Traditional synthesis, including the Ambit BuildGates that the founders of Oasys worked on a decade ago, works by doing a quick and dirty conversion of the RTL to a gate-level netlist and then applying a powerful but slow optimizer to the netlist. Problem #1: it’s slow. Problem #2: you need to keep the entire netlist in memory the whole time, which limits the size of design that can be handled. Chip Synthesis takes almost the opposite approach and puts most of the intelligence into how to turn RTL into optimized gates a small piece at a time. Further optimization is done not by working directly on that netlist, as in traditional synthesis, but by returning to the RTL level with a bit more information on the local constraints and generating a new small piece of netlist from a small piece of RTL.

As Jaggy Rao of TI said: “RealTime Designer is not just an incremental improvement, it is truly the next generation of physical synthesis for complex, multi-million gate designs at leading-edge process nodes.”

If you want the 500-storey-building elevator version of how Chip Synthesis works, there is a white paper here.

Oh yes, and if you didn’t go to DAC, or you did go but you missed it, then have a good laugh at the 2011 Oasys DAC video.


Silicon IP to take over CAE in EDAC results… soon but not yet!
by Eric Esteve on 07-20-2011 at 11:44 am

EDAC has released very interesting results for Q1 2011. Computer Aided Engineering (CAE) is still the largest category at $530.6M, the second-largest category is Silicon IP (SIP) at $371.4M, followed by IC Physical Design & Verification at $318.5M. Even more significant are the four-quarter moving average results, which show growth in every category: +12.9% for CAE and +7.6% for IC PD&V, but as high as +27.9% for SIP!

To check whether this really is a long-term trend, I had a look at the results for Q1 2006, the same quarter five years ago. At that time, CAE was at $510M and IC PD&V at $315M, both very close to the 2011 results, while SIP was at $225M. This means that the two largest EDA categories, based purely on S/W tools, have stayed almost flat over five years, while SIP has grown by 65%! If SIP keeps the same growth rate (and CAE keeps staying flat), it will take only about three years for SIP to pass the CAE category. In other words, we should see SIP become the largest category reported by EDAC during 2014.
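
A quick way to sanity-check that three-year claim: start from the Q1 2011 figures above, hold CAE flat (the author’s assumption), and compound SIP at the roughly 13.5% five-year CAGR quoted in the next paragraph. A minimal sketch:

```python
# Sanity check on "SIP passes CAE in about three years". The Q1 2011 figures
# and the flat-CAE assumption come from the article; the 13.5% CAGR is the
# five-year EDAC figure quoted in the next paragraph.

cae_q1_2011 = 530.6   # $M, CAE revenue, Q1 2011
sip = 371.4           # $M, SIP revenue, Q1 2011
cagr = 0.135

years = 0
while sip < cae_q1_2011:
    sip *= 1 + cagr
    years += 1

print(f"SIP overtakes a flat CAE after about {years} years")   # -> 3 years
```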

Honestly, I don’t see any reason why this would not happen. The Compound Annual Growth Rate (CAGR) for SIP (at least for the companies that are part of EDAC) has been 13.5% over the last five years. The forecast we have built at IPnest for Interface IP alone exhibits a 14.3% CAGR between 2010 and 2015. In fact, if we take into account the revenues coming from ALL the IP vendors, we will see that Silicon IP licensing revenues are already at the same level as CAE revenues!

From Semico: The SIP market is undergoing a round of consolidation with the number of companies shrinking approximately 50% by 2010 compared to 2000. However, this is not a sign of a weakening market, but rather of the market sorting itself out with strong contenders consolidating their positions.
Semico projects this market to continue to grow, exhibiting a CAGR of 12.6% from 2010 – 2015.

If you take a look at the EDAC member list, you will realize that, while the most important IP vendors like ARM Ltd or Synopsys are members, most of the “small” IP vendors are not. This means that the SIP category as reported by EDAC is representative of the market trends (when SIP grows, the overall IP market grows), but it is not a 100% precise image of the IP market. If we consolidate the results of the SIP category (from EDAC) for 2010, it comes to $1,300M. We know the overall IP market is much larger, but we have no direct data. The latest available data are the 2006 results as reported by Gartner: $1,770M, including “Technology Licensing” revenue of $442.7M coming from companies like Rambus, IBM, Saifun, Nvidia or MOSAID Technologies. If we remove the Technology Licensing revenues, that leaves $1,327M for SIP licensing alone in 2006. Then, if we apply the same growth rate we have seen in the EDAC results between 2006 and 2010, about 45%, the estimate of overall Silicon IP licensing revenue comes to roughly $1,924M in 2010. This simply means that overall SIP revenues were already very close to CAE revenues in 2010! For reference, the latter, as reported by EDAC, was $2,006M.
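
For anyone who wants to retrace that estimate, here is the arithmetic laid out step by step. Every input is a figure quoted in the paragraph above; only the rounding is mine:

```python
# Reproducing the overall-IP-market estimate above. All inputs are the figures
# quoted in the paragraph (Gartner 2006, EDAC SIP growth, EDAC CAE 2010).

gartner_2006_total  = 1770.0   # $M, overall IP market per Gartner, 2006
tech_licensing_2006 = 442.7    # $M, "Technology Licensing" (Rambus, IBM, etc.)
sip_licensing_2006  = gartner_2006_total - tech_licensing_2006   # ~$1,327M

edac_sip_growth_2006_2010 = 0.45   # ~45% growth seen in the EDAC SIP category
sip_licensing_2010 = sip_licensing_2006 * (1 + edac_sip_growth_2006_2010)

cae_2010 = 2006.0   # $M, CAE revenue reported by EDAC for 2010
print(f"Estimated overall SIP licensing, 2010: ~${sip_licensing_2010:,.0f}M "
      f"(vs CAE at ${cae_2010:,.0f}M)")   # ~$1,925M vs $2,006M
```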

By Eric Esteve from IPnest


EDA Consortium Newsletter, Q1 2011
by Daniel Payne on 07-19-2011 at 1:08 pm

Each quarter, the EDA Consortium publishes the Market Statistics Service (MSS) report containing detailed revenue data for the EDA industry. The report compiles data submitted confidentially by both public and private EDA companies into tables and charts listing the data by both EDA category and geographic region. This newsletter highlights the results for the first quarter of 2011. Additional details are available in the press release, or by subscribing to the EDA Consortium MSS report.
Overall, first quarter 2011 EDA revenues increased 16% compared to the same period in 2010. Total revenue for Q1 was $1446.4 million. Figures 1 and 2, below, summarize the revenue growth for Q1 2011 compared to Q1 2010, detailed by category (figure 1) and geographic region (figure 2). The MSS report contains many additional sub-categories, allowing subscribers to perform a more detailed analysis of revenues affecting their business. A complete list of categories for 2011 is available here.

Figure 1: Q1 2011 EDA revenue growth by category
Figure 2: Q1 2011 EDA revenue growth by region
Tables 1 and 2, below, show the percentage growth for the EDA industry by major category and region. (Negative growth is listed in parentheses).
Category                            Revenue ($M)   % Change
CAE                                 530.6          15.7
IC Physical Design & Verification   318.5          16.1
PCB & MCM                           140.4          28.3
SIP                                 371.4          15.7
Services                            85.6           2.2

Table 1: Q1 2011 EDA revenue growth by category

Region     Revenue ($M)   % Change
Americas   602.4          22.2
EMEA       241.8          7.8
Japan      295.3          17.7
APAC       307.0          10.0

Table 2: Q1 2011 EDA revenue growth by region
Figure 3 (below) shows the EDA revenue percentages by major category. As the chart shows, CAE remains the largest category, followed by Semiconductor IP and IC Physical Design & Verification tools. Geographically, the Americas is the largest consumer of EDA tools, with the remainder divided among Europe, the Middle East, and Africa (EMEA), Japan, and Asia Pacific (APAC), as shown in Figure 4.

Figure 3: Q1 2011 EDA revenue percentage by category
Figure 4: Q1 2011 EDA revenue percentage by region
Figure 5 shows the historical EDA revenue for the major categories (CAE, PCB & MCM, IC Physical Design and Verification, SIP and Services) from Q1 1996 through Q1 2011. Each quarter’s MSS report contains detailed data for the current year as well as the previous three years’ quarterly data, in both tabular and graphical formats.

Figure 5: EDA revenue history, 1996 – present
Data is reported confidentially to an independent accounting firm, which allows both public and private companies to report revenue data by detailed category. Individual company data is not reported, and steps are taken to further protect individual data for categories with a small number of reporting companies. Contributing data is free, and contributors will receive the quarterly MSS executive summary report. The full report is available via subscription, and contains substantially more detailed information for EDA revenues by category and region, providing the information subscribers need to analyze trends in EDA.
For more information on the MSS report, including information on subscribing to the report and the benefits of joining the EDA Consortium, please visit the EDAC web site, or email mss11@edac.org.


Synopsys Virtualizer
by Paul McLellan on 07-19-2011 at 8:00 am

As you probably know, Synopsys acquired VaST and CoWare last year, and a couple of years earlier had acquired Virtio. All three companies competed primarily in the virtual platform market. In addition, Synopsys is the #2 IP company (behind ARM) and has a wide range of tools for SoC design. So the interesting question is how they would pull all this technology together.

I talked to them yesterday and focused the discussion on what is new. I’ve written many times elsewhere about the attractiveness of the value proposition of using virtual platforms for software development and for tying the software development teams more tightly to the hardware (often chip, but not necessarily) development teams. The challenge has never been that message, but the practicalities of implementing it. The two big challenges were always how to create the models early enough for the software developers, and how to tie the virtual platform into the environment the software developers wanted to use anyway (and, to a lesser extent, into the environment the system and chip developers were using anyway). Obviously, the later the models arrive and the more cumbersome the integration, the less attractive switching to the virtual platform approach becomes.

The answer turns out to be a new product called Virtualizer. It pulls together the three simulation environments. There is more integration to be done, but this is the first product that allows all the models to run together, and so it unifies the portfolio of models from all three companies, along with models for Synopsys DesignWare. None of the old ways of doing modeling are made obsolete, but for non-processor models the preferred approach is SystemC TLM. For processor models, which are actually JIT cross-compilers under the hood, there is a proprietary approach (SystemC models of a processor would never be fast enough for software development). To make creating specific platforms faster, they have also created reference platforms that provide an easy starting point: remove unwanted peripherals and add new ones without having to start completely from scratch.

The next area they have focused on is going beyond just running code fast but addressing integration of debug and analytics with the platform. The idea, as with most software debug/testing environments, is to reduce the time from when a problem is detected until the line of code creating the problem is identified.

The third problem they have focused on is how to fit Virtualizer into existing flows. Not just flows for software development, although that is obviously one of the main challenges, but also interfacing virtual platforms to other tools such as VCS simulation, emulation and HAPS FPGA prototyping. Further, in the three main markets where they are focused (wireless, automotive and semiconductor) they are pursuing higher system-level integration, such as tying a handset simulation together with a base-station simulator. Some work is also going on in secondary areas such as networking, aerospace and industrial.

One area that is making virtual platforms more important in automotive is a new standard, ISO 26262, concerned with the functional safety of automotive software and electronics. In Europe this is already almost being treated as a regulation. Of course there have been several high-profile recalls due to software problems, most notably the braking problems of the Toyota Prius, which also puts a premium on ensuring that adequate testing is done, can be documented, can start earlier and so on. Precisely the attributes that virtual platforms deliver.

If you want to play with virtual prototyping, Synopsys has a cloud-based demonstration that allows you to play around for a couple of hours. There is also a recent webinar on optimizing power management with virtual platforms.