
Learning Properties, Assertions and Covers for Hardware Design
by Daniel Payne on 02-25-2013 at 12:10 pm

How do you learn new hardware design topics? I just got trained on property-based verification for hardware designers using a free online class from Aldec. The material was created by Jerry Kaczynski, a Research Engineer at Aldec.



FinFET Design Challenges at 14nm and 10nm
by Daniel Payne on 02-25-2013 at 11:09 am


At DAC 2012 we were hearing about the viability of the 20nm design ecosystem; IC process technology never stands still, however, and early process development is now under way at the 10nm and 14nm nodes, where FinFET technology is being touted. Earlier in February, Vassilios Gerousis, a Distinguished Engineer at Cadence, presented a session at the Common Platform Technology Forum: Next Generation R&D and Advanced Tools for 14nm and Beyond. Richard Goering blogged about this.

There have already been three tapeouts at 14nm using FinFET test chips:

  • Cadence, ARM – Cortex M0, IBM – SOI
  • Cadence, ARM – SRAM macros, Samsung – Bulk CMOS
  • Cadence, ARM – Cortex-A7, Samsung – Bulk CMOS

The mantra of collaboration continues into 2013, as EDA vendors, foundries and design teams all combine into a virtual team to get the IC design job done. Double patterning (DPT) started at the 20nm node and continues at 14nm and 10nm, with an additional wrinkle called litho-etch litho-etch (LELE) to create yet another acronym. With LELE the foundry exposes and etches more than once, using alternating masks.
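To make the decomposition idea concrete, here is a minimal Python sketch of the mask-coloring concept behind LELE: shapes spaced closer than the single-exposure limit are assigned to alternating masks by two-coloring a conflict graph. The shape coordinates and spacing threshold are invented for illustration; this is not any foundry's or EDA vendor's actual algorithm.

```python
from collections import deque

# Hypothetical layout shapes: name -> x position. Two shapes closer than
# MIN_SAME_MASK_SPACING must go on different masks (different "colors").
MIN_SAME_MASK_SPACING = 64  # invented threshold, in nm
shapes = {"A": 0, "B": 50, "C": 100, "D": 200}

# Build the conflict graph: an edge means "too close for one mask".
conflicts = {s: set() for s in shapes}
for a in shapes:
    for b in shapes:
        if a < b and abs(shapes[a] - shapes[b]) < MIN_SAME_MASK_SPACING:
            conflicts[a].add(b)
            conflicts[b].add(a)

# BFS two-coloring: mask 0 = first litho-etch pass, mask 1 = second.
mask = {}
for start in shapes:
    if start in mask:
        continue
    mask[start] = 0
    queue = deque([start])
    while queue:
        node = queue.popleft()
        for nbr in conflicts[node]:
            if nbr not in mask:
                mask[nbr] = 1 - mask[node]
                queue.append(nbr)
            elif mask[nbr] == mask[node]:
                # Odd cycle: no legal two-mask decomposition; a real
                # tool would flag this as a coloring conflict.
                raise ValueError(f"coloring conflict: {node} vs {nbr}")

print(mask)  # e.g. {'A': 0, 'B': 1, 'C': 0, 'D': 0}
```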

At the 10nm node there's a new block mask added as part of Self-Aligned Double Patterning (SADP), where relief patterns are created, sidewalls are deposited, and finally the unintended shapes are trimmed away. That's far more complex than DPT at 20nm.

At 10nm, EDA tools like routers need to take into account things like:

  • Color-mappable rule set
  • Block masks
  • Sidewall Image Transfer (SIT) effects

With FinFET transistors the widths of the devices are quantized, instead of spanning continuous width ranges.
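To see what that quantization means in practice, here is a small sketch using the common approximation W_eff = n_fins × (2·H_fin + T_fin). The fin dimensions are invented for illustration, not any foundry's actual numbers.

```python
import math

# Invented example fin geometry (not any specific process's figures).
FIN_HEIGHT_NM = 30.0     # H_fin: height of one fin
FIN_THICKNESS_NM = 8.0   # T_fin: thickness of one fin
W_PER_FIN_NM = 2 * FIN_HEIGHT_NM + FIN_THICKNESS_NM  # conduction width per fin

def fins_for_target_width(target_w_nm: float) -> int:
    """Round a desired planar-style width up to a whole number of fins."""
    return max(1, math.ceil(target_w_nm / W_PER_FIN_NM))

for target in (50, 100, 150, 200):
    n = fins_for_target_width(target)
    print(f"target {target:4} nm -> {n} fin(s), "
          f"effective width {n * W_PER_FIN_NM:.0f} nm")
# Targets of 150 nm and 200 nm both land on 3 fins (204 nm): the device
# width snaps to discrete steps rather than tracking the drawn value.
```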

Cadence has a FinFET tool flow, and for circuit simulation there's the BSIM-CMG device model, which Cadence helped develop.

All of this research will help ensure that SoC designers will have proven IP and EDA tool methodologies in place within a few years to exploit the 14nm and 10nm nodes. I look forward to hearing about the silicon results of these test chips.



At DVCon: Pre-Simulation Verification for RTL Sign-Off includes Automating Power Optimization and DFT
by Graham Bell on 02-24-2013 at 8:10 pm

By now, you will have seen several postings about all the different activities going on at the Design and Verification Conference (DVCon), being held Feb. 25-28 at its usual location, the DoubleTree Hotel in San Jose, CA. Besides organizing an experts panel, “Where Does Design End and Verification Begin?”, Real Intent with Calypto Design Systems and DeFacTo Technologies is presenting a joint tutorial on Thursday afternoon: “Pre-Simulation Verification for RTL Sign-Off”.

It is interesting to note that the Big Three EDA companies each have their tutorials on Thursday morning, and three companies from the next tier each have a tutorial in the afternoon. Conference attendees will face the tough decision of picking which tutorial to attend. I wanted to share with the SemiWiki audience some quick insights on the pre-simulation verification topic and a slide that will be presented next week.

The scope of pre-simulation verification is so broad that it is not covered by any one company. While it might be argued that one of the big three can do it all, I think they would not claim to have a best-in-class solution for the different problem areas that require specific attention at the RT level.

What are the specific areas that need attention? In the tutorial we will cover power exploration, analysis and optimization using an abstract model with high-level synthesis (HLS), followed by RTL static verification covering: syntax and semantic checking (lint); constraints planning and management; reset analysis and optimization; automatic intent verification; clock domain crossing sign-off; design-for-test (DFT) analysis and insertion; and X-analysis with correction of the related optimism and pessimism.
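As a toy illustration of the X-optimism problem mentioned above, the sketch below models three-valued logic in Python: a gate-level AND with an X input propagates the unknown honestly, while an RTL-style if/else silently swallows an X control value, which is exactly the optimism such analyses must correct. This is my own simplified model, not a depiction of any vendor's tool.

```python
X = "x"  # the unknown value

def and3(a, b):
    """Three-valued AND: 0 dominates, otherwise X is contagious."""
    if a == 0 or b == 0:
        return 0          # 0 AND anything is 0, X or not
    if a == X or b == X:
        return X          # the unknown propagates honestly
    return 1

print(and3(1, X))  # -> 'x'  (gate-level view keeps the X)
print(and3(0, X))  # -> 0

def rtl_mux(sel, a, b):
    """RTL-style 'if (sel) q = a; else q = b;'. When sel is X, event
    simulators take the else branch, hiding the unknown (X-optimism)."""
    return a if sel == 1 else b

print(rtl_mux(X, "a_val", "b_val"))  # -> 'b_val': the X silently vanished
```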

This slide from the presentation gives a quick illustration of the entire flow. The benefits are early elimination of complex bugs before simulation, and early closure on power and DFT before the gate level.

There is a lot of material to cover, and I think this joint tutorial with Calypto and DeFacTo will make it clear what needs to happen before simulation (and synthesis) so teams have RTL that is signed off for implementation.


MUST: DSP-ready solution for tomorrow's smartphones based on CEVA-XC 4000
by Eric Esteve on 02-24-2013 at 5:13 am

Like Guinness dark beer, competition is good for you! I mean good for the end user, as it pushes DSP IP suppliers to provide ever-better solutions. I am not talking about me-too competition, like we saw in the past when IBM tried to displace TI at Nokia by offering a clone of the LEAD (the DSP IP core from TI used in every Nokia wireless phone at that time); that happened in the 90's… MUST (Multicore System Technology) does not try to mimic any competitor's solution: this multi-core system based on the CEVA-XC 4000 family includes Data Traffic Management (DTM), data cache and cache coherency, a vector floating point instruction set and high-performance FPU, as well as a portfolio of ultra-low-power co-processors for wireless modems.

The latest modem specifications for wireless terminals are very challenging:

  • Strict latency requirements are defined by the standard: the multiple DSP processors and co-processors must be fully synchronized with minimal overhead
  • Supporting a true multi-mode modem design requires ultra-low-power co-processors that allow multi-mode support (LTE-A, HSPA+, WiFi, etc.)
  • The system interconnect should support very high bandwidth: data management has to be integrated into the DSP IP solution
  • The size of the modem is dominated by memory buffers: a smart architecture can reduce memory size requirements by up to 30%

We can better understand why a DSP IP vendor like CEVA had to evolve from a DSP core supplier into a technology supplier, as illustrated by the MUST solution above.

MUST integrates a Data Traffic Manager (DTM) and an Advanced System Interconnect. I love the above picture illustrating the DTM in action with three cases, as I feel I can, probably for the first time, understand how it works (a minimal flow-control sketch follows the list):

  • First case (top): the HW accelerator's integrated buffer is FULL and sends a flag to the DTM, so the DTM stops sending data from the associated CEVA-XC buffer
  • Second case (middle): the CEVA-XC associated buffer is EMPTY and sends a flag to the TCE, so the TCE can start working
  • Third case (bottom): the CEVA-XC associated buffer is OK to send data through the TCM to the core's associated buffer, which is OK to receive (not FULL), so the data transmission can continue
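Here is the promised sketch of the flow-control idea behind the three cases: a transfer proceeds only while the source buffer is not EMPTY and the destination buffer is not FULL. It is a toy model of the concept, not CEVA's DTM implementation; the names and buffer depths are invented.

```python
from collections import deque

class Buffer:
    """Bounded FIFO exposing the FULL/EMPTY flags the DTM reacts to."""
    def __init__(self, depth):
        self.depth = depth
        self.fifo = deque()
    def full(self):
        return len(self.fifo) == self.depth
    def empty(self):
        return len(self.fifo) == 0

def dtm_transfer(src, dst):
    """One arbitration step: move one word only if the source has data
    and the destination is not FULL (the FULL flag stalls the sender)."""
    if src.empty() or dst.full():
        return False          # stall: wait for the flags to change
    dst.fifo.append(src.fifo.popleft())
    return True

xc_buf = Buffer(depth=4)      # CEVA-XC associated buffer (depth invented)
accel_buf = Buffer(depth=2)   # HW accelerator integrated buffer (invented)

for word in range(4):         # the DSP produces four words
    xc_buf.fifo.append(word)

while dtm_transfer(xc_buf, accel_buf):
    pass                      # the DTM streams until a flag stops it

print(len(xc_buf.fifo), len(accel_buf.fifo))  # -> 2 2: stopped by FULL
```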

This Advanced System Interconnect is based on AXI-4, allowing easy system integration and high Quality of Service (QoS). The multi-layer FIC (Fast Inter-Connect) provides low-latency, high-throughput master and slave ports, and the system is based on a multi-level memory architecture using local TCMs.

I also love the “Dynamic Scheduling Scenario” picture; it lets me feel, once again, like I understand dynamic scheduling in a symmetric system. It allows dynamic task allocation to DSP cores at runtime, but requires software abstraction using task-oriented APIs and the use of shared external memories. Such an architecture is more commonly used in wireless infrastructure applications than in terminals.
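A minimal sketch of the dynamic scheduling concept (my own illustration, not CEVA's scheduler): tasks are pulled at runtime by whichever DSP core frees up first, which balances load far better than a compile-time static assignment when task lengths vary. The task durations are invented.

```python
import heapq

# Invented task durations (think PHY jobs of varying size): a bursty mix
# where a compile-time assignment piles the long jobs onto one core.
tasks = [9, 1, 1, 9, 1, 1, 9, 1, 1]
NUM_CORES = 3

def dynamic_makespan(tasks, num_cores):
    """Runtime allocation: each task goes to whichever core frees up
    first, which is what pulling from a shared task queue achieves."""
    cores = [(0, cid) for cid in range(num_cores)]  # (busy-until, core id)
    heapq.heapify(cores)
    for t in tasks:
        busy_until, cid = heapq.heappop(cores)
        heapq.heappush(cores, (busy_until + t, cid))
    return max(busy for busy, _ in cores)

def static_makespan(tasks, num_cores):
    """Compile-time round-robin assignment, blind to task lengths."""
    load = [0] * num_cores
    for i, t in enumerate(tasks):
        load[i % num_cores] += t
    return max(load)

print("dynamic:", dynamic_makespan(tasks, NUM_CORES))  # -> 12 time units
print("static: ", static_makespan(tasks, NUM_CORES))   # -> 27 time units
```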

The move to multi-core processors (DSP or CPU) has led to the implementation of cache coherency techniques. I think we can easily understand the need for cache coherency, even if the implementation of the concept is anything but simple. Once again, the above picture greatly helps in understanding cache coherency. Data present in the shared L2 cache can be Core 1 exclusive (red), shared (orange), or Core 2 exclusive (yellow). The cache coherency mechanism, implemented in hardware in each DSP core, allows the system to run properly.
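To illustrate the concept (and only the concept; this is a textbook-style toy, not CEVA's hardware mechanism), the sketch below tracks which cores hold a line: a line read by both cores is shared, and a write forces invalidation of the other core's copy, leaving the writer exclusive.

```python
class Directory:
    """Toy directory-based coherency for two cores sharing an L2,
    mirroring the exclusive/shared/exclusive states in the diagram."""
    def __init__(self):
        self.owners = {}   # address -> set of cores holding the line

    def read(self, core, addr):
        self.owners.setdefault(addr, set()).add(core)  # line becomes shared

    def write(self, core, addr):
        # The writing core must gain exclusive ownership: every other
        # core's cached copy is invalidated first.
        for other in self.owners.get(addr, set()) - {core}:
            print(f"invalidate {addr:#x} in core {other}")
        self.owners[addr] = {core}

    def state(self, addr):
        holders = self.owners.get(addr, set())
        if len(holders) > 1:
            return "shared"
        if len(holders) == 1:
            return f"core {next(iter(holders))} exclusive"
        return "uncached"

d = Directory()
d.read(1, 0x100); d.read(2, 0x100)
print(d.state(0x100))   # -> shared (orange in the diagram)
d.write(2, 0x100)       # -> invalidate 0x100 in core 1
print(d.state(0x100))   # -> core 2 exclusive (yellow)
```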

Finally, if we look at the complete view of MUST (above), we better understand why it was necessary to describe the different blocks and concepts, as the complete solution is complex. Let's also mention the vector floating point unit (FPU) capability of the DSP core, as well as its offloading capabilities: MUST also integrates co-processors, the Tightly Coupled Extensions (TCEs), optimized to support for example FFT or Viterbi, or even user-defined functions.

The best IP solution has to be backed up by a development board (see above), as well as by ESL tools integration with full debug capabilities, compliant with TLM 2.0 and with full support for Carbon and Synopsys, to allow smooth design integration and debug, leading to a short TTM.

To learn more about CEVA DSP and platforms, visit http://www.ceva-dsp.com/DSP-Cores.html.

Eric Esteve from IPNEST


How Can You Work Better with Your Foundry?
by glforte on 02-22-2013 at 5:40 pm

The fabless revolution in the digital semiconductor industry is a revolution no more: with just a few integrated device manufacturers (IDMs) remaining on the playing field, fabless is now the normal way to do business. However, the learning curve for each new process node continues as it always has, with a host of new technical challenges for designers, whether they work in a fabless model or not. What is different is that leading-edge IC designs now must be carefully crafted to be compatible with manufacturing requirements, and these are, in varying degrees, unique to each foundry. Moreover, designers are also doing more with legacy processes, which is creating new technical challenges as well as opportunities.

Design for Manufacturing, HKMG, double patterning, smart fill, FinFET, ESD checking, waiver management, IP verification, 3D-IC: the list goes on. Mentor Graphics is engaged with all the key foundry companies to develop and qualify new tools that address all these issues, so they will be ready when you need them for your latest IC designs, regardless of which process node is your newest. Along the way we gain valuable insights and lessons that we would like to pass on to you. We're putting this information in one location, our Foundry Solution site, to make it easy for you to find. You can access it directly from this link: Mentor Graphics Foundry Solution site, or navigate to it from the Mentor Graphics home page (www.mentor.com) by clicking on the “Solutions” tab at the top.

So if you are a fabless IC designer dealing with today’s design to manufacturing challenges, or you need to educate yourself on what is coming at the next node, come visit our Foundry Solution site, bookmark the page, and follow us on Twitter to get alerts when new content is posted.


Is debugging a task, or a continuous process?
by Don Dingee on 02-22-2013 at 2:59 pm

Early in my so-called EE career, I sat in a workshop led by the director of quality for the Ford truck plant in Louisville, KY, where “Quality is Job #1.” At that time, they were gaining experience in electronic control modules (ECMs) for fuel efficiency and emissions control. Who better to transfer the secrets of Crosby and Deming to a bunch of missile designers?



Modeling TSV, IBIS-AMI and SERDES with HSPICE
by Daniel Payne on 02-21-2013 at 8:10 pm

The HSPICE circuit simulator has been around for decades and is widely used by IC designers worldwide, so I watched the HSPICE SIG by video today and summarize what happened. Engineers from Micron, Altera and AMD presented on how they are using HSPICE to model TSVs, IBIS-AMI models and SERDES, respectively.


Innovate or Die, the NoC IP Market is Cruel…
by Eric Esteve on 02-21-2013 at 8:14 am

I blogged in 2011 about the Arteris-Sonics case, initiated by Sonics, which claimed that Arteris' NoC IP product was infringing Sonics' patents. In that article, we saw that the architecture of Sonics' interconnect IP product was not only older but also different from Arteris' NoC architecture: the products launched initially by Sonics, in the 1995-2000 years, were closer to a crossbar switch than to a Network-on-Chip. Sonics management probably realized that the crossbar-switch-based architecture was 10 years old, and late compared with the NoC, and launched SonicsGN (a “router-based network architecture for SoCs”) in September 2011. The timing is important here: the directly competing product, FlexNoC from Arteris, had been on the market since 2005 and was being adopted by many SoC chip makers, with a strong adoption rate in 2011, as the company gained more customers that year than during the previous five years combined. Who could have imagined that Sonics management would choose the legal field to compensate for an obvious engineering and technology weakness: the company was five years late!


SNAP from Sonics launched in 2009… any success on the market?

It seems that this maneuver had exactly the opposite result to the one expected by the initiator of the legal case. Instead of discouraging potential and actual Arteris customers from adopting or continuing to integrate FlexNoC, the legal move from Sonics made semiconductor makers not only nervous, but also wary… about Sonics. We have to wait for the outcome of the legal battle, but this marketing/legal maneuver could become a case study taught in MBA schools, illustrating what a company that is late to market should absolutely avoid doing: trying to compensate for a lack of technology by moving onto the legal field when you don't have the right ammunition…

In fact, the market is also buzzing with rumors, which are difficult to verify (that's almost the definition of a rumor), so please don't take these as facts… until validated:

  • At the end of 2012, Sonics was about to be acquired by a large EDA vendor (think music, not movies), but the deal was not concluded: did the buyer realize that Sonics was simply too late on the NoC battlefield?
  • Sonics Armenia, where the company has an R&D operation, will lay off 18 of its 30 employees…
  • Sonics is abandoning its China/APAC operations: it is shutting down its Taiwan office and all the employees are now looking for jobs…
  • Two of the three marketing people have been laid off…
  • The cherry on the cake (except for Sonics employees…): some sources say the company is in a huge cash crunch, with only a few months of cash left.

I found an article posted on September 9, 2011 on Chip Estimate, “On Chip Interconnection IP gains attention” by j.blyer@ieee.org; this extract is very interesting, as it puts the spotlight on the key point:

Two major EDA companies focused entirely on the on-chip interconnect IP market are Arteris and Sonics. Each approaches the core-to-core communication problem in a different way.

Arteris replaces traditional on-chip, fixed bus architectures with a packet-based, “network on a chip (NOC)” technology. Instead of point-to-point dedicated wire connections, the company’s NOC approach reuses existing wires. Data is sent as scheduled, layered packets across the same wires.

Sonics also uses NoC technology, but in the past emphasized socket-based instead of packet-based design. The company’s recent SonicsGN (SGN) announcement shifted that emphasis to a router-based network architecture for SoCs.

Amazingly, representatives of both Sonics and Arteris were interviewed by the author, in an article clearly illustrating that Sonics was not selling the right product but had decided to “shift… to a router-based network architecture for SoCs”.

In fact, this makes it easier to understand why Sonics sued Arteris: to slow down a competitor, and to sell SonicsGN into the vacuum created by the lawsuit. Unfortunately, Sonics did not foresee two problems:

  • Sonics made big Arteris customers very mad. As a result, these customers started working even more closely with Arteris.
  • Sonics had not finished SonicsGN, so they didn't have a product to sell into their planned vacuum.

By the way, the two bullets above are pure speculation, and the whole article is “for entertainment purposes only”; the real battle is being fought on two fronts, the legal one (where we still don't know the outcome) and the market… it seems that comparing the NoC IP design wins made by the two companies in 2011 and 2012 should identify the winner!

Eric Esteve


Intel’s x86 – Foundry Breakup Comes into View
by Ed McKernan on 02-21-2013 at 12:46 am

The announcement by Intel during their January earnings call that they were going to hike capex in 2013 over 2012 left many folks scrambling for the reasons and the what-the-hecks. Here was a company exiting 2012 with 50% utilization of its advanced 22nm process and yet signaling more building to come. Furthermore, this increase seems counter to the claim that most of its current 22nm tools can be repositioned to the 14nm fabs that ramp later this year. It was Paul Otellini who spoke the words of the strategy, but it likely came from Intel Chairman and manufacturing guru Andy Bryant. Could Intel be saying that to ultimately win they had to hit the gas pedal, guessing that only they could get to 14nm and then 10nm with cash still in the bank? Or is there something else at play (i.e. a breakup of x86 from Foundry, followed by the signing of multiple large customers)?

If you go back just 12 months, you will hear Paul Otellini articulate very clearly that Intel receives equivalent value from the x86 architecture and from its fab process. With the slowdown in the PC market and the likely strong corporate adoption of iPads, the x86 architecture's value fades and we are left with Process Technology and Manufacturing as the asset ready for a true breakout. As mentioned in a previous article, Intel could continue to operate as a strong cash-flow-generating company with a Data Center business that doubles over the next five years and a client-side x86 business at two-thirds or better of current revenue and margins. The assumption was that capex would be dialed back by nearly 40% and every wafer utilized. But that scenario is now off the table.

Intel's continued capacity buildout is of such a magnitude that it will support not only the entire x86 market (assume, at the extreme, no AMD or NVDA) but also a combination of several smartphone and tablet silicon leaders (i.e. Apple and Qualcomm, Broadcom, and nVidia). With this buildout we should expect wafer pricing for next-generation leading-edge process technology (20nm or below) to come down significantly as Intel rifles in low-ball pricing to TSMC and Samsung customers, who in turn ask their suppliers to sharpen their pencils. If Intel can't get TSMC customers, then it will reset the margins in the industry. One expects, though, that in the end Intel will sign on customers at favorable 22nm or 14nm pricing and try to make back the profit at 10nm.

Three years ago, I became convinced that Intel had put in place a plan to wrap up most of the x86 market by building processors focused on much higher-performing graphics while simultaneously forcing down TDP thermals, making it extremely difficult to design a thin notebook that could utilize a standalone graphics controller. Die area shifted from the x86 core to graphics to close the gap with AMD and nVidia. This business plan, unbeknownst to most, is still the modus operandi, with Haswell being perhaps the pinnacle of the cannibalization strategy. Soon many analysts will declare that Haswell has overshot the performance target at the expense of a larger die size relative to Ivy Bridge. Intel will need to immediately go on a die-size crash diet in order to meet pricing requirements at good margins that stabilize the declining PC market. The flip side is that x86 client capacity will drop as a percentage of Intel's total fab footprint, especially at 14nm. Again, what was the $13B in 2013 capex targeted at?

During this same period three years ago, I was exchanging emails with a long-time semiconductor analyst on the prospect of Apple moving to Intel as a foundry, due to the expectation that Samsung would become a competitor. It appears Intel made a run for Apple's business, based on its dramatically increased capex in 2011 and 2012. However, the ambitions of its internal mobile group to pursue the smartphone and tablet markets with the slow-off-the-mark Atom processors likely made Apple reticent to deal. This, it should be noted, was the Paul Otellini strategy operating in parallel to the Andy Bryant fab buildout strategy.

The resignation of Paul Otellini came as a surprise to many, but may in hindsight be seen as a moment as significant in Intel's history as Andy Grove and Gordon Moore exiting the DRAM market to focus on x86 during the early days of the PC. I believe what it says is that we have reached the point in time when the Grove-Otellini model of tightly integrating custom product design with manufacturing, which maximized performance and margins in the PC market, is not transferable to the new mobile market. The pace of innovation in the applications processor and the rise of baseband-wireless as the most valuable component in mobiles have exposed the design-side weakness of Intel. Andy Bryant knows that Intel's most valuable asset is its process technology, and the dollars to be gained servicing the mobile silicon suppliers will be many times greater than today's $35B x86 client business.

Think about it for a moment. The model that has driven Intel since 1968 is soon to be relegated to server processors and legacy x86 clients as the Foundry takes the company reins. Android and iOS broke the Wintel computing link that required x86.

At what stage is Intel in its ascent to the foundry model? First, there has to be a true breakdown of the old model. A signpost appeared just last week of what we should expect in the months leading up to Foundry Independence.

With loud Bronx cheers from Wall St. analysts, Apple dropped the price of its 13” MacBook Pro notebook from $1699 to $1499. The press attributed it to slowing Apple sales. Not likely, as Apple has resisted MacBook Pro price cuts for the past three years in order to provide an umbrella under which the iPad could flourish.

My investigation suggests that Intel cut the price of its Ivy Bridge i5 processor by roughly $100 to spur notebook demand and close the gap with tablets. Intel has chips to move and must remain competitive on cost even as Haswell rolls in later this summer. More importantly, I see this as the first of many price-reduction steps that effectively move the Ivy Bridge i5 into the Celeron space as the i7 moves down to take the i5 space. It represents the collapse of the Intel x86 price model that has been the driver of its business for the past 20 years, and thus the diminishing of the x86 client business. It is incredible to think that Andy Bryant may, for the sake of saving Intel, have to toss aside a $30B+ business with large profits in order to chase the much larger mobile business.

At some point soon, Andy Bryant will have to consider implementing one or both of the following options. One is to radically scale back Intel's mobile group to focus on just the traditional PC and Mac market, and thereby signal to Apple, Qualcomm and others that there will be no competition from the Intel product groups. The second option is to spin out the whole x86 business (client and Data Center). The value of the pieces would be much greater than the whole, and Intel's foundries could then open themselves up to an even broader set of customers (e.g. nVidia, AMD, Broadcom). Competing with and taking customers from Samsung and TSMC is the only business plan that unlocks Intel's true valuation, and based on Intel's capex plans it seems to be on the horizon. Look for a dramatically new Intel to communicate this plan as soon as its Analyst Meeting in May.

Full Disclosure: I am Long AAPL, INTC, QCOM, ALTR


Prediction is very difficult, especially about the future
by Bill Jewell on 02-20-2013 at 8:01 pm

The above quote is attributed to both physicist Niels Bohr and baseball's Yogi Berra. The statement certainly applies to predicting the semiconductor market. Semiconductors operate on physical principles, but the market for semiconductors is affected by numerous factors. The outcome of a baseball game can be determined by a single pitch, which could result in a strikeout or a home run. Semiconductors are at the end of the electronics food chain: semiconductor companies are dependent on electronics companies, who are dependent on the buying patterns of businesses, governments and consumers. At each level of the supply chain are distributors, inventories, and outsourced manufacturing. Despite these complexities, many organizations, companies and individuals still try to forecast the semiconductor market.

How accurate are these predictions? I have collected forecasts for the last five years from publicly available sources. The forecasts used were released in the period from October of the prior year to the end of February in the forecast year. Thus the forecasts were made before any monthly data for the forecast year was available from WSTS (World Semiconductor Trade Statistics). The charts below show the forecasts in sequential order compared to the final market data for the year from WSTS.

Year 2008 semiconductor market projections ranged from 12% growth from Future Horizons to 3.4% from Gartner. At the end of 2007 most forecasters expected solid growth in 2008. However, as time went on, weakness in the economy became more apparent; Gartner's March forecast reflected some of this weakness with only 3.4% growth. The economy continued to deteriorate in 2008, resulting in a semiconductor market decline of 2.8%. No one predicted this decline a year in advance, but the weakness of the world economy in 2008 surprised almost all the experts.


For 2009, the forecasts were universally pessimistic. In November 2008, WSTS predicted a modest decline of 2.2%. In early 2009 it became apparent the market could see a significant decline: the 4Q 2008 market was down 24% from 3Q 2008, a record drop. Forecasts in January and February of 2009 called for major declines in 2009 of over 20%. However, the semiconductor market bounced back quickly after it became obvious the global recession (driven by a housing market collapse) was not having a major impact on demand for electronics. A market decline of 16% in 1Q 2009 was followed by 20% increases in both 2Q 2009 and 3Q 2009. The year wound up with a 9% decline. WSTS turned out to have the most accurate forecast, not because it foresaw the strong bounce-back but because it was made before the weakness in 4Q 2008 became known.

The momentum of the recovery in 2009 led to optimistic projections for 2010. Future Horizons (F.H.) in November 2009 predicted 22% growth for 2010. WSTS and IC Insights were more conservative at 12.2% and 15%, respectively. Later forecasts were 20% or higher. I began forecasts for my company, Semiconductor Intelligence (SI), with 25%. Based on the available forecasts, my 25% was the highest made before any 2010 monthly data was released. The year finished stronger than anyone predicted, with 31.8% growth in 2010. Although no one came very close to the final growth rate, most forecasters correctly predicted a very strong market in 2010.

Following the exceptionally high growth in 2010, forecasters expected growth to moderate in 2011 but remain healthy. My October 2010 forecast for Semiconductor Intelligence (SI) was 9%. IC Insights, Semico Research and IDC also had forecasts of 9% or higher. WSTS, Gartner and iSuppli were more conservative, in the 4% to 6% range. Mike Cowan's forecasts use a model based on historical WSTS data and thus make no assumptions about the future; his December 2010 forecast was for 2.3% growth in 2011. The global economy was weaker than expected in 2011 due to a weak recovery in the U.S., the earthquake and tsunami in Japan, and the European debt crisis. The 2011 semiconductor market finished up a slight 0.4%. Mike Cowan's forecast was the most accurate.
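To give a feel for what a purely data-driven model looks like, here is a deliberately naive sketch in that spirit: fit a straight line to recent annual revenue and extrapolate one year, with no judgment about the future. The revenue figures are approximate illustrations, and this is emphatically not Mike Cowan's actual model.

```python
# Naive trend extrapolation: no economic assumptions, only past data.
# Annual worldwide semiconductor revenue in $B (approximate, for
# illustration only; check WSTS for the official figures).
years = [2006, 2007, 2008, 2009, 2010]
revenue = [247.7, 255.6, 248.6, 226.3, 298.3]

# Ordinary least-squares fit of revenue vs. year.
n = len(years)
mean_x = sum(years) / n
mean_y = sum(revenue) / n
slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(years, revenue))
         / sum((x - mean_x) ** 2 for x in years))
intercept = mean_y - slope * mean_x

# Extrapolate one year ahead and express it as growth vs. the last year.
forecast_2011 = slope * 2011 + intercept
growth = 100 * (forecast_2011 / revenue[-1] - 1)
print(f"2011 forecast: ${forecast_2011:.1f}B ({growth:+.1f}% vs 2010)")
```

Even this toy shows why such models can miss turning points: the straight-line fit is dragged down by the 2008-2009 recession and extrapolates a decline for 2011, a year that actually finished slightly up.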

Projections for year 2012 fell primarily into two camps. Semico Research, Future Horizons (F.H.) and IC Insights expected growth of 7% to 8%. WSTS, IDC, Gartner, IHS iSuppli and Mike Cowan were in the 2% to 3% range. My forecast for Semiconductor Intelligence in February 2012 called for a 1% decline. Based on the available forecasts, Semiconductor Intelligence was the first to call for a decline in the 2012 market. The global economy in 2012 again fell below expectations, and the semiconductor market finished with a decline of 2.7%.

What conclusions can be drawn from these comparisons? First, no one is consistently correct. In general, the forecasters are closer to each other than they are to the final result. This is not much of a surprise, since most forecasters work from similar assumptions, and unexpected events can lead to a final result significantly different from expectations. Second, the most accurate forecast is often the latest, since it is based on the most recent data. This is not always the case, as seen with the November 2008 WSTS forecast for 2009.

Third, why bother? If no one can accurately predict the semiconductor market, why should anyone pay attention to forecasts? The answer is that every business needs a plan. Plans should be specific but flexible. Semiconductor market forecasts provide guidance based on the latest available data and assumptions. Ideally, forecasts should be updated frequently based on the latest results: we at Semiconductor Intelligence update our forecasts quarterly, and Mike Cowan updates his monthly.

What is the outlook for the current year? Forecasts for 2013 range from 3.6% from Mike Cowan to 8.3% from IHS iSuppli. This is a fairly narrow range compared to prior years; however, as we have seen before, the final 2013 market growth could fall outside it.

Semiconductor Intelligence does not claim to be consistently more accurate in our forecasts than other companies or organizations. However, we were the most accurate of the publicly available forecasts for both 2010 and 2012. We provide the assumptions behind our forecasts and update them quarterly. More information is available on our website: http://www.semiconductorintelligence.com/