
HP Loses Its Autonomy!

by Daniel Nenni on 12-09-2012 at 9:00 pm

HP buys Autonomy for $11B and then takes an $8.8B writedown?! Was HP swindled by Autonomy? As a longtime HP customer I’m outraged by this behavior, not just the overpriced acquisition but the behavior of HP as a whole. Even today 4 of the 6 laptops in my house are HPs, as are my printers. How am I supposed to buy HP products with a straight face after this?

Having friends at Oracle, HP, and Autonomy, this virtual reality show interests me and is a continuing humorous email thread amongst friends. Humorous, as in we can’t believe this is happening!

The cast of characters all have one thing in common: they should have known better.

Raymond J. Lane Executive Chairman since 2011
Mr. Lane has served as HP’s Executive Chairman since September 2011. Previously, Mr. Lane served as HP’s non-executive Chairman from November 2010 to September 2011. Mr. Lane has served as Managing Partner of Kleiner Perkins Caufield & Byers, a private equity firm, since 2000. Prior to joining Kleiner Perkins, Mr. Lane was President and Chief Operating Officer and a director of Oracle Corporation, a software company. Before joining Oracle in 1992, Mr. Lane was a senior partner of Booz Allen Hamilton, a consulting company. Prior to Booz Allen Hamilton, Mr. Lane served as a division vice president with Electronic Data Systems Corporation, an IT services company that HP acquired in August 2008. Mr. Lane also is a director of Quest Software, Inc. and several private companies.

Given Ray’s depth of experience, how does he get duped into acquiring Autonomy at an inflated price? The same could be said for the entire HP board, which in my opinion should be held accountable for this $11B debacle.

The most interesting character is Frank Quattrone, who represented Autonomy in this deal. I have written about him before; Frank handled the Synopsys acquisition of Magma.

Investment banker Frank Quattrone, formerly of Credit Suisse First Boston (CSFB), took dozens of technology companies public including Netscape, Cisco, Amazon.com, and coincidentally Magma Design Automation. Unfortunately CSFB got on the wrong side of the SEC by using supposedly neutral CSFB equity research analysts to promote technology stocks in concert with the CSFB Technology Group headed by Frank Quattrone. Frank was also prosecuted personally for interfering with a government probe.

To make a long story short: Frank Quattrone went to trial twice. The first trial ended in a hung jury in 2003 and the second in a conviction for obstruction of justice and witness tampering in 2004. Frank was sentenced to 18 months in prison, but an appeals court reversed the conviction, and prosecutors agreed to drop the complaint a year later. Frank never paid a fine, never served time in prison, and never admitted wrongdoing. Talk about a clean getaway! Quattrone is now head of the merchant banking firm Qatalyst Partners, which is staffed with cronies and former CSFB people.

Another interesting note, Quattrone took Netscape public, Marc Andreessen co-founded Netscape, and Marc Andreessen is on the board of HP. The plot thickens!

Here’s a short version of what happened, based on what I have read and heard:

Simply stated, Autonomy over-reported its product revenues by booking services revenue as product sales at the time of the sale rather than as the services were delivered. For example, if you sell a 5-year service agreement you are supposed to recognize that revenue over the five-year period, not all five years up front. This is Revenue Recognition 101; it really is the oldest trick in the M&A book.
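To make the accounting concrete, here is a minimal sketch in Python with made-up numbers (not HP’s or Autonomy’s actual figures) of ratable revenue recognition versus booking a service contract up front:

```python
# Hypothetical numbers for illustration only.

def ratable_revenue(contract_value: float, years: int) -> list[float]:
    """Recognize a multi-year service contract evenly over its term."""
    return [contract_value / years] * years

# A $5M, 5-year service agreement:
proper = ratable_revenue(5_000_000, 5)   # $1M of revenue per year for 5 years
upfront = [5_000_000.0] + [0.0] * 4      # the "oldest trick": all in year 1

# Both recognize the same total over the contract, but up-front booking
# inflates year-1 "product" revenue by 5x.
assert sum(proper) == sum(upfront)
print(proper[0], upfront[0])  # 1000000.0 5000000.0
```

Same total dollars either way; the only thing the trick buys is the appearance of faster-growing, higher-margin product revenue in the year of the sale.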

Shame on Autonomy CEO Dr. Mike Lynch for allowing it, and double shame on the auditors hired by HP for not catching it, but for me the blame sits squarely on the HP board of directors for not doing their jobs and protecting HP’s stakeholders. Just my opinion, of course!


Apple Will NOT Manufacture SoCs at Intel

by Daniel Nenni on 12-09-2012 at 5:00 pm

The internet is a funny place where rumors are true and truths are rumors. The latest one has Apple using Intel as a foundry. This is fuel for the rivalry between SemiWiki blogger Ed McKernan and me. Ed says Apple will use Intel, I say Apple will use TSMC, we have a very expensive dinner riding on this one.

TSMC falls on possible competition from Intel
Taiwan Semiconductor Manufacturing Co. (TSMC), the world’s largest made-to-order chip-maker, fell yesterday on reports it may compete with Intel for Apple orders...

Yes, TSM stock took a hit based on a rumor, so we have a motive (Apple consumes more than 200,000 12-inch wafers per year). The Intel rumors started last year with Intel saying it would “like” to manufacture the Apple A4 and A5 SoCs. Clearly that did not happen, nor will it. Moving an SoC among foundries is serious business, both technically and politically.

Samsung and Intel manufacturing processes could not be further from compatible. I work with Sagantec, the leading process migration company, and can tell you a migration between Intel and Samsung is not technically feasible; it would be a complete re-design. Apple uses Samsung IP. Apple uses commercial EDA tools certified by Samsung. The ROI is simply not there for Apple to move the Apple Ax SoC between fabs.

Let’s not forget how Apple became a fabless semiconductor company. They first started with Samsung as an ASIC customer. Apple did the initial RTL design and tossed it over the wall to Samsung for IP integration and physical implementation. Apple then acquired P.A. Semi (Palo Alto Semiconductor) for $278M in 2008 which brought serious SoC design experience in-house. Do a quick search on LinkedIn and you will see quite a few PA Semi people are still at Apple. Apple has also made other acquisitions and investments in the fabless semiconductor ecosystem.

Politically it is a problem as well. Apple’s experience with Samsung, where a vendor (Samsung) directly competes with its largest foundry customer (Apple), will definitely change the way we all do business. Qualcomm is in the same boat with Samsung: Samsung was the largest Snapdragon SoC customer, and now the Samsung Exynos SoC is replacing Snapdragon. I was at ISSCC when Samsung first presented a paper on Exynos and saw the question line queue up quickly with competing SoC engineers from Qualcomm, Broadcom, TI, Apple, etc. Few if any questions were directly answered, as is the Samsung way.

For the record, I have no problem with Samsung competing with customers by providing low-cost alternatives. It is good for consumers, good for the consumer electronics industry, and good for the semiconductor ecosystem. It will force Apple and Qualcomm to innovate and differentiate at different pricing levels. I also have no problem with the resulting legal actions, as they will force Samsung to innovate and differentiate rather than replicate. It’s all about the customer experience: better products at affordable price points.

Back to the Apple “TSMC versus Intel” debate:

Can Intel be successful in the foundry business? Of course it can, but it will not happen anytime soon. It took Samsung 10+ years to get to the number 4 spot. Today Intel’s foundry operation uses the ASIC business model, as Samsung did in the early days: customers throw RTL designs over the Intel wall for physical implementation. This helps Intel learn the SoC foundry business and protects Intel’s process secrets. Moving forward, Intel will have to develop a fabless semiconductor ecosystem (exposing process secrets) and forge EDA and IP partnerships with the likes of ARM.

Intel will also have to avoid the competing-with-customers conundrum. Intel’s Ultrabooks are a blatant copy of MacBooks, the Intel Atom will someday compete with ARM, and don’t be surprised if Intel comes out with an SoC of its own. Sounds a bit like Samsung, right? Déjà vu all over again. TSMC, on the other hand, is a pure-play foundry and does not compete with customers.

My bet: moving forward, Apple will use Samsung for 28nm (iPhone 5s) and TSMC for 20nm (iPhone 6). Intel certainly has a shot at 14nm and 10nm, but never ever count out TSMC. If you want to bet a lunch on Apple manufacturing at Samsung or Intel at 20nm, post it in the comment section. I will cover all lunch bets against TSMC.

Full disclosure: I can eat my weight in sushi!


How Apple Plans to Leverage Intel’s Foundry

by Ed McKernan on 12-09-2012 at 4:00 pm

Tim Cook’s strategy is remarkable in its scope and breadth: disengage from Samsung as a supplier of LCDs, memory and processors while simultaneously creating a worldwide supply chain from the remnants of former leaders like Sharp, Elpida, Toshiba and soon Intel. By 2014, Apple should have in place a supply chain for 500M iOS devices (iPhones, iPads, iTVs and iPods). Add in the near-term foundry relationship with TSMC and it spells not just freedom but a true vertical operation independent of its number 1 competitor. Add to this the fact that Apple may soon be lower in cost, an inconceivable thought to most observers. How can Apple have lower costs than Samsung? This is the story of 2013-2014, as Tim Cook attempts to put the last pieces in place to build affordable $200 iPhones and $250 iPad Minis for the half of the world that is currently unaddressed (e.g., China Mobile and India).

A recent article in DigiTimes questions whether TSMC will have enough capacity to support Apple’s projected demand of 200M units while also servicing its other major customers like Qualcomm, Broadcom, Nvidia and the rest of the leading-edge silicon buyers. Will TSMC make the investment to capture all of mobile while Intel sits on three empty 14nm fabs? Not likely, unless Apple and Qualcomm write checks to guarantee the demand. Six months ago Qualcomm started down the path of diversification when demand skyrocketed and TSMC could not turn on a dime to fulfill it. Tim Cook knows Jony Ive will continue to pump out great products at a more rapid pace in the coming quarters, trying to deny Samsung and Microsoft the openings of past summer lulls, when yearly refreshes were still waiting in the wings. All that is holding Apple back is a global supply chain that is larger and lower cost than what Samsung has today.

Many in the press have spent the past six months talking about the transition taking place from Samsung to TSMC. They have overlooked how Apple is leveraging its Japan infrastructure to build LCDs, DRAM and NAND flash at a time when the Yen is about to collapse. From the mid-1980s to today the Yen has appreciated by 300% relative to the dollar. Moore’s Law is a hugely deflationary force that requires a country to offset its creative destruction by inflating away its currency. So the US and the half of the world linked to the dollar (including China) have prospered while the Japanese economy has contracted over these past 25 years. If the Yen drops dramatically from 80 to 120 or 150, as is necessary to restart Japan’s economy, then Apple will have the beginnings of its low-cost strategy to outflank Samsung. A robust, high-volume processor and baseband chip supply chain is all that remains.

When the Intel earnings call comes in January, I anticipate a number of analysts will demand a full accounting of the projected fab loadings for 22nm and 14nm in 2013 and 2014. The timing of recent leaks about Intel and Apple negotiating a fab deal is not unexpected. On the table are three 14nm fabs that require no Apple capital investment to satisfy demand beyond the 200M units that TSMC and Samsung can supply. Quite likely Apple has the opportunity to build more than 500M additional units with Intel at current die sizes. But wait: Apple would also get an extra shrink and significantly lower power in a move to Intel’s 14nm process. Therefore the Intel fab option is really the one road available that can catapult Apple beyond Samsung in their battle for market-share leadership in smartphones and tablets. It is also the one option that is unavailable to Samsung.

Analyst Doug Freedman recently speculated that Intel and Apple are considering a foundry partnership in which the former agrees to build ARM chips for the iPhone while the latter converts the iPad to x86. It’s possible, but in the end I believe that three empty fabs running 14nm FinFET are a much stronger religion than x86-architected chips. Intel would be happy with anything Apple wants built because there are tremendous sunk costs in the new fabs. This is where TSMC has to be cautious.

While Stacy Smith, Intel’s CFO, repeated at a conference last week that they don’t see PC volume falling again in 2013, there are hints that Ivy Bridge pricing has already been reduced to move Ultrabook price points down to sub-$500. Smith also noted that 22nm fab equipment is now being transitioned into 14nm fabs. The Ireland fab may be on hold now, but in reality it is like an aircraft carrier waiting to be outfitted with fighters. Six large fabs are what await the combined x86 and Apple demand. Of the six, fewer than three are currently needed to support 350M PCs and 20M x86-based servers. Any slide in x86 demand just opens up more space for Apple, and at lower prices.

At some point the operating margins building the A7 for Apple will cross above those of an Atom chip. Just to be clear, I am saying that an A7 will deliver more profit than an Atom chip when Sales, Marketing and Engineering costs are factored in for the latter. If Intel can sell Apple on its 4G LTE baseband, then the low cost, dual sourced processor and baseband supply chain will be complete.

Scenarios such as those above are only possible when markets are going through huge transitions. I am still intrigued by Intel’s decision to double its fab footprint back in 2010. What were the reasons for the boldness, and is that view still held internally today? I can only speculate that they truly believed they alone would get to 14nm while the rest of the industry faded.

Full Disclosure: I am Long AAPL, INTC, ALTR, QCOM


Apache Power Artist Capabilities II

by Paul McLellan on 12-09-2012 at 4:00 pm

This is the second part of my discussion with Paul Traynar, Apache’s PowerArtist guru. Part I, which discussed sequential power reduction capabilities, was here.

There are two big challenges in doing power analysis at the RTL level. First, how do you get an accurate enough model of what the design will dissipate, given that you only have RTL and not gates or anything more accurate? Second, what vectors do you use, since you probably just have a lot of functional verification vectors rather than a carefully hand-crafted set of a few vectors that exercise the design just right?

This matters because underdesigning the power grid, or choosing a package that cannot cope with the power, leads to reliability issues. On the other hand, overdesigning the power grid or choosing an unnecessarily expensive package hurts the competitiveness of the chip or the final system.

PowerArtist contains two approaches that address these issues.

The first is PowerArtist Calibration and Estimator, or PACE. This analyzes similar designs and looks at factors such as net capacitance, which library cells are used, and how the clock tree is implemented. There is a bit of a chicken-and-egg problem the first time through with a new process or a new design style, although usually data from earlier nodes can be used with some judicious scaling.

PACE turns out to be surprisingly accurate, with RTL power numbers typically within about 15% of post-layout gate-level analysis with parasitics.

The second approach in PowerArtist is the RTL Power Model or RPM. This is a reduction of the design to the critical factors that can then be used in a tool like RedHawk to analyze the power grid, package selection and perhaps even PCB noise issues. The challenge is to get vectors that represent realistic worst-case scenarios. This has to be done by using information in the RTL to identify power critical situations.

RPM is generated using whatever vectors exist. By examining how the vectors interact with the design, it is possible to reduce the number of vectors to just a few critical ones that get the circuit into a critical state: for example, peak power in the presence of high average power (which tends to drain the decaps), or vectors that cause very high changes in current when, say, large parts of the design come out of clock gating and start to toggle. The RPM captures how the circuit behaves during these critical periods, which can then be used to stress other parts of the design and ensure reliability.
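As a rough illustration of this kind of vector screening (my own sketch, not PowerArtist’s actual algorithm; the function names, window size, and thresholds are all invented), one can scan a per-cycle power trace for windows where a high peak rides on sustained high average power, and for the cycle with the largest instantaneous swing:

```python
# Hypothetical sketch: screen a per-cycle power trace for the "critical"
# situations described above. Thresholds and windowing are illustrative.

def critical_windows(power, window=8, avg_floor=0.6, peak_floor=0.9):
    """Return (start, peak, avg) for windows with high peak AND high average,
    relative to the maximum power seen anywhere in the trace."""
    scale = max(power)
    hits = []
    for i in range(len(power) - window + 1):
        w = power[i:i + window]
        avg, peak = sum(w) / window, max(w)
        if avg >= avg_floor * scale and peak >= peak_floor * scale:
            hits.append((i, peak, avg))
    return hits

def max_swing(power):
    """Cycle with the largest cycle-to-cycle change in power draw
    (a stand-in for high di/dt)."""
    deltas = [abs(b - a) for a, b in zip(power, power[1:])]
    return deltas.index(max(deltas)) + 1

# Toy trace: idle, then a block comes out of clock gating and toggles hard.
trace = ([0.125] * 6
         + [0.875, 1.0, 0.875, 1.0, 0.875, 1.0]
         + [0.25, 0.125])
print(critical_windows(trace))  # windows starting where the block wakes up
print(max_swing(trace))         # the wake-up edge itself
```

On the toy trace the flagged windows all start where the gated block wakes up, and the largest swing is the wake-up edge itself; a real tool works from RTL activity rather than a bare power trace, but the selection principle is the same.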

While Paul was over from the UK he also recorded an educast on Power Budgeting Using RTL Power Models.


A Brief History of the MIPS Architecture

by IMGJen on 12-07-2012 at 1:00 pm

MIPS is one of the most prolific, longest-living industry-standard processor architectures, existing in numerous incarnations over nearly three decades.

MIPS has powered products including game systems from Nintendo and Sony; DVRs from Dish Network, EchoStar and TiVo; set-top boxes from Cisco and Motorola; DTVs from Samsung and LG; routers from Cisco, NetGear and Linksys; automobiles from Toyota, Volvo, Lexus and Cadillac; printers from HP, Brother and Ricoh; digital cameras from Canon, Samsung, FujiFilm, Sony, Kodak, Nikon, Pentax and Olympus; and countless others.

At the heart of MIPS is its RISC (reduced instruction set computing) instruction set, which was an entirely new concept for computer architecture at the time of its inception in the early 1980s.

The basic idea of RISC is that using simple instructions, which enable easier pipelining and larger caches, can dramatically boost performance. The emergence of high-level programming languages and compilers in the early 1980s enhanced this value proposition, making it possible for programmers to compile code into simple instructions that would execute extremely quickly.

Around that time, several other industry trends converged, creating an ideal environment for a disruptive company to bring RISC technology to market. These included the advent of the fabless semiconductor model, introduction of UNIX as an open, portable operating system, and proliferation of VLSI design in research labs, enabling increased experimentation with new ideas and technologies such as RISC. You can read more about these trends in a transcript of a 2011 Computer History Museum panel, MIPS Oral History Panel Session 1: Founding the Company.

In 1981, Dr. John Hennessy at Stanford University led a team of researchers in building a new microprocessor using RISC principles – this was the MIPS project.

Technologists John Moussouris, Edward “Skip” Stritter and others then joined Hennessy in 1984 to commercialize the MIPS project, through the creation of MIPS Computer Systems, a fabless semiconductor company that was the first to produce a commercially available RISC microprocessor.

MIPS’ value proposition was in providing high performance at low cost. In retrospect, the benefits of RISC may sound obvious, but at the time, RISC represented a fundamental philosophical shift. The MIPS team not only had to develop and sell a new product, but they also had to evangelize a new design philosophy.

The company’s first chip design, the R2000, was released in 1985, based on the MIPS I Instruction Set Architecture (ISA), and the next design, the R3000, was released in 1988. It was used primarily in SGI’s workstations, and later in workstations and servers from DEC. Based on its growing traction, in 1989, MIPS Computer Systems went public. The R6000 was then introduced based on the MIPS II ISA. In 1991, MIPS released the world’s first 64-bit microprocessor, the R4000. This design was extremely important to MIPS’ biggest customer, SGI, and as such, SGI bought MIPS in 1992. SGI subsequently incorporated MIPS as a wholly owned subsidiary, MIPS Technologies, Inc. It was during this time that SGI introduced its IRIS Crimson product, the first 64-bit workstation, which was featured in the movie Jurassic Park. More about this on Wikipedia here and here.

Over the next several years, MIPS introduced the R8000, R10000 and other processor variants. With each generation, more functionality was added to the MIPS architecture. During this time, MIPS began licensing its processor designs to other companies. By the late 1990s, the MIPS architecture continued to proliferate, and in 1997, the company shipped a record-breaking 48 million units. Riding on this momentum, SGI spun MIPS out as an IP company, with a business model based on licensing its architecture and microprocessor core designs. MIPS Technologies, Inc. subsequently held an IPO in 1998. You can get more in-depth information on the history of MIPS from Wikipedia here.

Around this time, MIPS introduced the 32-bit MIPS32® ISA and the 64-bit MIPS64® ISA – compatible architectures that leveraged the previous MIPS I, MIPS II, MIPS III, MIPS IV and MIPS V releases. With this move, the privilege resource architecture (PRA) and ISAs were standardized, laying the foundation for future innovation.

Since then, the MIPS architecture has continued to evolve, adding numerous innovations including a 64-bit floating point unit for 32-bit CPUs, multi-threading, DSP support, microcontroller-specific extensions, the microMIPS™ code compression ISA, enhanced virtual addressing (EVA) and much more. With Release 5 of the MIPS architecture, the company is rolling out hardware virtualization and SIMD support.

Of course it’s not all about the elegance or capabilities of the ISA. Today, ecosystem is everything. Over the nearly three decades of its life, a large infrastructure of supporting development tools, operating systems, applications, middleware, device drivers and more has grown around MIPS. Support continues to grow, including initiatives such as Linux, Android, Java, JavaScript, HTML5, WebGL, GNU and LLVM toolchains, apps, games and more.

MIPS licensees have shipped more than 3 billion units since 2000. The ever-evolving MIPS architecture is inside of a large range of products, including the majority of DTVs, set-top boxes and WiFi routers shipping today. With the advent of the open, portable Android OS, MIPS is also now shipping inside of mobile devices. As intelligence is increasingly designed into just about every product from thermostats to high-end consumer products, the embedded microprocessor market is thriving, and the MIPS architecture continues to evolve to meet the needs of new generations of products while maintaining its simple, elegant RISC roots. For more info: www.mips.com.


Subsystem IP, myth or reality?

by Eric Esteve on 12-07-2012 at 5:00 am

I participated in a panel during IP-SoC, and I must say that “Subsystem IP, Myth or Reality” was a great moment. The panel was a mix of mid-size IP vendors (CAST, Sonics), one large EDA company (Martin Lund from Cadence), a SemiWiki blogger, and one large IDM (Peter Hirt from STM), who represented the customer side very well. And, to make the panel even more effective, the audience was great, with Joachim Kunkel from Synopsys, Philippe Quiniot, STM Group VP, IP Sourcing & Strategy, and Marc Miller, VP Marketing for Tabula, to name a few very active participants.

There are many ways to introduce such a question; I propose to start with my first slide, just to position the problem. Subsystem IP is a great idea, and I am sure everybody will agree with the concept, but could it be another one of those great stories that end up failing, like the Transputer, gate arrays integrating FPGA, or the Structured ASIC IP platform? To position the problem in a more positive way: how should the industry proceed (I mean vendors AND customers) in order to keep Subsystem IP alive and growing?

Hal Barbour and Bill Finch from CAST decided to be specific, talking about a “Platform” instead of Subsystem IP and taking a “CPU Subsystem” as an example. I am not sure that I agree with the “Platform” concept, but I must say that some Subsystem IP is CPU-based, like the “ARC-based sound system IP” from Synopsys. Why am I not convinced by “Platform” for an IP vendor? “Platform” sounds to me like a frozen system, which probably means a lack of flexibility and something that is over-designed if it is to be used by a variety of customers. Some of these customers will pay for extra gates that they don’t use…

To answer my own objection immediately, I am sharing with you a slide presented by Philippe Quiniot during the morning keynotes, and this picture clearly shows that STM thinks it should go toward unified platform development! Nevertheless, you should notice that these platforms will be developed by STM, for STM’s internal product group needs… In other words, STM does not expect any IP vendor to provide such a complex, and complete, platform…

Sonics proposed a view of Subsystem IP which is more application-specific, showing the integration trends for smartphones and tablets extracted from a Gartner report. This picture could be representative of the roadmaps of the various chip makers addressing this market segment (it’s a bit early to know if integration will actually happen this way, right?), but it can be used to raise the next big question that was debated during the panel: differentiation.

Imagine that you are an IP vendor who decides to trust Gartner, builds various Subsystem IP according to this roadmap, then goes to sell these products to the chip makers active in the smartphone and tablet segments… you may be successful with the companies developing me-too products. But the leaders, the Qualcomms, Nvidias and STMs, will tell you that what they expect, beyond the right performance and good deliverables (their basic needs), is differentiation.

As Peter Hirt put it, “It’s all about differentiation” and “Subsystems developed by IP vendors are available to everybody.” He concluded his slide by saying that “…Customer may select subset of Subsystem…” The important word here is “subset”! That means an IP vendor may decide to develop a complete subsystem, to prove it is a competence center for a given area, but may end up selling only a subset.

We should link this comment to a very interesting assertion made by Martin Lund (during the morning keynotes, and again during the panel): IP reuse is an oxymoron, and fallacious!

What Martin said is that Cadence has never sold exactly the same version of an IP function to two different customers, and this was, by the way, confirmed by other IP vendors. Why? Because customers need to differentiate, and to do it they will always need highly configurable IP. Does that mean the Subsystem IP concept is dead? Not at all, but be sure it is configurable enough to allow differentiation!

Finally, I will end with my last slide (above), recalling some basics for building a Subsystem IP business case. As for why great ideas like the Transputer or ASICs integrating FPGA blocks failed in the past: Moore’s Law killed the Transputer, since it became possible to build very complex CPUs a few years later, and cost overhead killed the second great idea. We could add “lack of differentiation” for Subsystem IP, if IP vendors forget to build in the high configurability that allows for differentiation…

When I told you this was a great panel, you can trust me, or read this extract from an email from Hal Barbour, who organized the panel: “I received numerous compliments from audience members on how great the panel discussion was. And I spoke to Gabriele this morning and she also told me that she had heard favorable comments too. It was clear that we had a group of people that were experts in this subject matter…”

Eric Esteve from IPNEST


Yield Analysis and Diagnosis Webinar

by Beth Martin on 12-06-2012 at 10:02 pm

Sign up for a free webinar on December 11 on Accelerating Yield and Failure Analysis with Diagnosis.

The one-hour presentation will be delivered via webcast by Geir Eide, Mentor’s foremost expert in yield learning. He will cover scan diagnosis, a software-based technique that effectively identifies defects in digital logic and scan chains, as well as recent advancements in diagnosis technology and industrial case studies. There is time after the presentation for Q&A and further discussion.

Topics to be covered include:

  • Best practices for data collection and diagnosis of digital semiconductor devices
  • Statistical analysis of diagnosis results to pick the correct dies for an effective failure analysis
  • Layout-aware diagnosis
  • Scan chain diagnosis
  • Correlating diagnosis and DFM analysis results

Who should attend:

  • Engineers and managers responsible for digital semiconductor product design, test, quality, or yield
  • Engineers and managers responsible for digital semiconductor product and technology advancement
  • Failure Analysis Lab Managers or Process Engineers
  • Engineers involved in manufacturing production or process development
  • Anyone involved with the impact of low yield or low product quality

Sign up today!


Apache Power Artist Capabilities I

by Paul McLellan on 12-06-2012 at 2:05 pm

I sat down last week with Paul Traynar, who was over from the UK. He is Apache’s PowerArtist guru. The first thing we talked about was PowerArtist’s sequential power reduction capabilities.

Forward propagation of enables means that when a clock-gated register feeds a downstream register, that downstream register can be gated on the following clock cycle. Often a register remains unchanged for many clock cycles. And often the downstream register depends not on a single upstream register but on multiple ones (for example, the output of an adder), in which case care must be taken to ensure the downstream register is clocked whenever any of its inputs change.

The other approach, which is a bit trickier, is to propagate upstream: if a register is not clocked on this clock cycle, then the registers that feed it do not need to be correct on the previous clock cycle, because the value is not going to be used. It is harder to figure out how to create such signals a clock cycle earlier, though, since there may not be an appropriate signal available.

Observability Don’t Care (ODC) applies when several registers feed into a mux. Given the value of the mux select, only one of the registers is actually required to hold the correct value; the others don’t need to be up to date, since their values will be blocked by the mux.
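Stated as a toy enable computation (hypothetical Python, just to show the logic, not how any tool implements it): only the register the mux select will actually pass needs its clock this cycle.

```python
def odc_enables(select: int, n_inputs: int) -> list[bool]:
    """Per-register clock enables for n registers feeding an n:1 mux.

    Only the register the mux will pass needs an up-to-date value;
    updates to the others are unobservable downstream this cycle.
    """
    return [i == select for i in range(n_inputs)]

# Four registers into a 4:1 mux with select = 2:
print(odc_enables(2, 4))  # [False, False, True, False]
```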

Another approach, which is not sequential but is sometimes appropriate, is to compare the input and output of a register. This only makes sense if the input is stable over long runs of clock cycles. If the registers are large, the comparison can generate huge XOR gates, but splitting these registers often yields good results. The comparison can be used to inhibit clocking of downstream registers when the register is not changing its value.
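Behaviorally, the compare-based idea looks like this (a Python sketch of the intent, not RTL; in hardware the comparison is the XOR tree, and the event counter below just stands in for dynamic clock power):

```python
class GatedRegister:
    """Behavioral model of a register clocked only when its input changes."""

    def __init__(self, width: int):
        self.value = 0
        self.width = width         # a wide register may be split into slices
        self.clock_events = 0      # proxy for dynamic clocking power

    def tick(self, d: int) -> None:
        # The comparison below is the "huge XOR gate" from the text;
        # splitting a wide register means comparing (and gating) slices.
        if d != self.value:
            self.value = d
            self.clock_events += 1  # only burn clock power on a real change

# A data stream that is stable for long runs of cycles:
reg = GatedRegister(width=32)
for d in [5, 5, 5, 5, 9, 9, 9, 9, 9, 5]:
    reg.tick(d)

print(reg.clock_events)  # 3 clock events vs 10 cycles without gating
```

The win is exactly the gap between changes and cycles: the longer the input sits still, the more clock edges (and downstream clocking) the comparator suppresses.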

There are also reduction approaches associated with memories. There are huge power savings possible from looking at address/data stability and inhibiting unnecessary memory operations.


PowerArtist works by analyzing the design, looking at the topology and doing power analysis. It then makes judgments about how much power will be saved, and thus which changes to the design make sense. If registers have many bits, then the power saved by not clocking them can be large, justifying the overhead (in power and area) of calculating the clock-gating signals.

The user interface (see the picture above) to the PowerArtist sequential reduction browser highlights registers that can be clock-gated and the power that can be saved. The same registers can also be traced in the schematic view.

Coming in part II: RTL Power Model (RPM) and Power Artist Calibration Estimator (PACE)


Intel Taps The Debt Market: Should They Go Private?

Intel Taps The Debt Market: Should They Go Private?
by Ed McKernan on 12-06-2012 at 9:25 am

Intel’s ability this week to raise $6B in debt at rock-bottom interest rates should give one pause to consider what this portends for the future of the company and whether it remains in public hands. We live in extraordinary times, where a fiscally excessive government can sell 10-year Treasuries at 1.6% and the largest firms can tap the markets at will for rates that are not much higher. Is it possible that Intel will go private by the choice of management, or even by way of a leveraged buyout? When one steps back and examines Intel’s financials, it is easy to see how its current valuation, even with the ARM threat in mobile, in no way reflects the cash flows that can be generated by a growing Data Center group that overcomes a declining PC group, plus the future untapped leading-edge process capacity coming on stream. It leads me to believe that Intel management is in deep discussion not only about the future of the x86 chip business but also about how to maximize its valuation for the executives and employees. As they say: better to figure out your own business, or someone else will do it for you.

Until last fall, when they raised $5B in the debt markets, Intel had avoided what is seen in the markets as a mark of death for growth companies. Debt is for consumer-oriented, slow-growth companies, a means to lower capital costs, especially since interest offsets profits and reduces corporate tax. The Walmarts of the world can prove to the capital markets that their revenue and cash flows will be consistent, unlike tech companies. But the low P/E valuations of Microsoft, Intel, and even Apple are at the opposite extreme of what occurred in 2000. Therefore, to satisfy investors in this time of global uncertainty, the tech giants have had to borrow money in order to buy back stock and increase dividend yields. Intel now offers a 4.5% dividend yield and, as the chart above shows, has spent $90B in the past decade to buy back stock and pay out dividends. Another $90B and the company would just about have a complete buyout covered.

If we assume that Intel’s Data Center business grows to $20B in 2016 from roughly $10B in 2011, then given its 50% operating margins it can offset a reduction of the PC processor business by roughly a third, from $35B to less than $25B. This number obviously depends somewhat on the ASPs and margins of the client group. Long term, I see it as a larger, more profitable version of what IBM put in place with its legacy mainframe business in the Gerstner era of the 1990s.

In terms of comparison, last year a server processor had an average ASP of $500 while a PC socket was near $100. The former had gross margins exceeding 80% while PC processors came in at just under 60%. The two x86 processor families are diverging in die size and power requirements: server chips command higher ASPs as they get larger and can consume over 110W max, while PC processors are headed toward smaller die sizes and lower voltages in order to scale eventually down to sub-5W max. This will shift the mix inside the fabs as server chips begin to dominate the number of wafers shipped. It is easy to imagine the combined x86 groups consuming roughly the same number of wafers from three dedicated fabs; thus the need to fill the extra fabs with anything that is high volume.

Intel’s ability to generate a huge amount of cash has been overlooked by many analysts, probably because of the high outlay on capital equipment these past two years, the stock buybacks, and the rich dividend. If Intel scales its CapEx back from $12B to $7B (in line with average historical outlays) and excludes the stock buyback, then it would be able to generate roughly $17B annually in cash at the run rate of the last four quarters. As a side note, in 2011 Intel generated $6B in free cash after spending $18.4B on stock buybacks and dividends. And for all that, the stock is below $20, well below the $70 high of 2000.

Therefore, in the scenario I outlined above, with Data Center revenue rising to $20B in 2016 and Client revenue dropping to $25B in the same timeframe, it is possible to see how Intel generates enough cash to buy itself out in 6-7 years at today’s stock price. I am of course assuming that CapEx is scaled back to $7B and that some operating expenses are cut to reflect a company with lower growth potential. This scenario does not even account for the Foundry revenue that three 14nm fabs could add. Perhaps then a buyout is possible in 5 years. By comparison, Apple’s cash flow would require over 10 years for a buyout.
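As a sanity check on that 6-7 year figure, here is the back-of-the-envelope version, assuming roughly 5B shares outstanding at a $20 price (both figures are my assumptions, not from the article):

```python
# Back-of-the-envelope buyout timeline at the article's assumed cash generation.
shares_outstanding = 5.0e9
share_price = 20.0
market_cap = shares_outstanding * share_price   # about $100B

annual_cash = 17e9        # cash generation with CapEx cut to $7B, no buybacks
years = market_cap / annual_cash
print(f"{years:.1f} years to cover the market cap")
```

At these numbers the market cap is covered in roughly six years of cash flow, consistent with the 6-7 year range above.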

Intel’s new $6B debt has an average interest rate of 2.38% vs. the current 4.5% dividend yield. When taxes are figured in, the debt being used to buy back stock will actually increase Intel’s cash flow by over $160M a year (by reducing dividend outlays). This opens up the question of how much debt Intel could secure that would enable management to run the company for its own benefit. I consider this an opening round, and someone on Wall St. will pick up a pencil and start playing with some possibilities now that debt is nearly free and equity is expensive. A private buyout of Intel to tap its enormous cash flow is something no one could have imagined in 2000, or even last year. That is how far fear has taken over the market.
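The arithmetic behind that cash-flow claim, assuming interest is deductible at a 35% corporate tax rate (the tax rate is my assumption; the 2.38% coupon and 4.5% yield come from the text):

```python
# Debt-for-buyback arithmetic: retiring shares saves their dividend, while the
# interest paid on the debt is reduced by its tax deductibility.
debt = 6e9
interest_rate = 0.0238
dividend_yield = 0.045
tax_rate = 0.35

dividends_saved = debt * dividend_yield                     # retired shares pay no dividend
after_tax_interest = debt * interest_rate * (1 - tax_rate)  # interest is tax-deductible
net_gain = dividends_saved - after_tax_interest
print(f"cash flow gain: ${net_gain/1e6:.0f}M per year")
```

This lands at roughly $177M a year, consistent with the "over $160M" figure.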

Full Disclosure: I am Long AAPL, INTC, ALTR, QCOM


A Brief History of the Fabless Semiconductor Ecosystem

A Brief History of the Fabless Semiconductor Ecosystem
by Daniel Nenni on 12-05-2012 at 7:00 pm

Clearly the fabless semiconductor ecosystem is driving the semiconductor industry and is responsible for both the majority of the innovation and the sharp decline in consumer electronics costs we have experienced. By definition, a fabless semiconductor company does not have to spend time and money on manufacturing-related issues and can focus its research and development efforts on specific market segments much more quickly. Seriously, without the fabless semiconductor revolution would we even have the mobile devices we depend on today? Given that, what does the future hold for the traditional semiconductor integrated device manufacturer (IDM)?

An integrated device manufacturer (IDM) is a semiconductor company which designs, manufactures, and sells integrated circuit (IC) products. As a classification, IDM is often used to differentiate between a company which handles semiconductor manufacturing in-house and a fabless semiconductor company, which outsources production to a third party. Due to the dynamic nature of the semiconductor industry, the term IDM has become less accurate than when it was coined (Wikipedia).

Depending on whom you ask, either Xilinx or Chips and Technologies was officially the first fabless semiconductor company, and from the stories I have heard, both companies mentioned building a fab in their business plans in order to get funding while neither actually planned on doing it. The rest is history, with legions of fabless semiconductor companies following and today dominating the industry.

The fabless semiconductor ecosystem went professional in 1994 with the founding of the Fabless Semiconductor Association (FSA), later renamed the Global Semiconductor Alliance (GSA). Why didn’t the fabless leadership join Wilf Corrigan (LSI Logic), Robert Noyce (Intel), Jerry Sanders (AMD), Charles Sporck (National Semiconductor), and John Welty (Motorola) at the industry-leading Semiconductor Industry Association (SIA)? Because they were not allowed to, that’s why! Back then the fabless business model was dismissed with the infamous catch phrase, “Real men have fabs.”

The Global Semiconductor Alliance (GSA) mission is to accelerate the growth and increase the return on invested capital of the global semiconductor industry by fostering a more effective ecosystem through collaboration, integration and innovation. It addresses the challenges within the supply chain including IP, EDA/design, wafer manufacturing, test and packaging to enable industry-wide solutions. Providing a platform for meaningful global collaboration, the Alliance identifies and articulates market opportunities, encourages and supports entrepreneurship, and provides members with comprehensive and unique market intelligence. Members include companies throughout the supply chain representing 25 countries across the globe.

The FSA started with forty charter members, of which only a handful remain due to acquisitions and attrition. Currently more than five hundred companies participate in five different member segments: Semiconductor Members, Supplier Partner Members, Service Partner Members, Industry Partner Members, and Organizations / Associations / Government & Educational Partner Members. Today this would be called crowdsourcing, a commonly accepted practice enabled by the internet and social media. Back in the 1990s, however, this type of collaboration within the semiconductor industry was unheard of and was publicly lambasted by the IDMs. Fortunately, the founding FSA members were comfortable with disruptive business models, and the fabless semiconductor ecosystem was born.

It is interesting to see how the top semiconductor companies have evolved financially, based on the top 10 rankings over the last 25 years. According to Gartner Dataquest and later iSuppli, Japan dominated the semiconductor industry in 1987, with Intel barely making the top 10:

Rank 1987 | Company | Country of origin | Revenue (million USD)
1 | NEC Semiconductors | Japan | 3,368
2 | Toshiba Semiconductor | Japan | 3,029
3 | Hitachi Semiconductors | Japan | 2,618
4 | Motorola Semiconductors | USA | 2,434
5 | Texas Instruments | USA | 2,127
6 | Fujitsu Semiconductors | Japan | 1,801
7 | Philips Semiconductors | Netherlands | 1,602
8 | National Semiconductor | USA | 1,506
9 | Mitsubishi Semiconductors | Japan | 1,492
10 | Intel Corporation | USA | 1,491

Five years later, in 1992, it is really just a re-ordering, with Intel at #1 and National Semiconductor replaced by Matsushita (Panasonic):

Rank 1992 | Rank 1987 | Company | Country of origin | Revenue (million USD)
1 | 3 | Intel Corporation | USA | 5,091
2 | 1 | NEC Semiconductors | Japan | 4,869
3 | 2 | Toshiba Semiconductor | Japan | 4,675
4 | 4 | Motorola Semiconductors | USA | 3,634
5 | 5 | Hitachi Semiconductors | Japan | 3,851
6 | 6 | Texas Instruments | USA | 3,087
7 | 7 | Fujitsu Semiconductors | Japan | 2,553
8 | 8 | Mitsubishi Semiconductors | Japan | 2,213
9 | 10 | Philips Semiconductors | Netherlands | 2,113
10 | 9 | Matsushita Semiconductors | Japan | 1,942

Five years later, Samsung continues its climb to the top as the Japanese semiconductor companies begin to decline and consolidate. Consolidation also brought SGS-Thomson into the top 10.

Rank 1997 | Rank 1996 | Company | Country of origin | Revenue (million USD)
1 | 1 | Intel Corporation | USA | 21,746
2 | 2 | NEC Semiconductors | Japan | 10,222
3 | 3 | Motorola Semiconductors | USA | 8,067
4 | 6 | Texas Instruments | USA | 7,352
5 | 4 | Toshiba Semiconductor | Japan | 7,253
6 | 5 | Hitachi Semiconductors | Japan | 6,298
7 | 7 | Samsung Semiconductors | South Korea | 5,856
8 | 9 | Philips Semiconductors | Netherlands | 4,440
9 | 8 | Fujitsu Semiconductors | Japan | 4,622
10 | 10 | SGS-Thomson | France/Italy | 4,019

Another five years and it’s more of the same, with Samsung quickly climbing to #2 and SGS-Thomson renamed STMicroelectronics.

Rank 2002 | Rank 2001 | Company | Country of origin | Revenue (million USD)
1 | 1 | Intel Corporation | USA | 23,700
2 | 5 | Samsung Electronics | South Korea | 8,750
3 | 3 | Toshiba Semiconductor | Japan | 6,420
4 | 2 | STMicroelectronics | France/Italy | 6,380
5 | 4 | Texas Instruments | USA | 6,350
6 | 8 | Infineon Technologies (semiconductor spin-off from Siemens) | Germany | 5,370
7 | 7 | NEC Semiconductors | Japan | 5,320
8 | 6 | Motorola Semiconductors | USA | 4,810
9 | 9 | Philips Semiconductors | Netherlands | 4,360
10 | 12 | Hitachi Semiconductors | Japan | 4,210

Five more years and the shuffling continues, with fewer Japanese semiconductor companies and one more from South Korea. Interesting note: Hyundai Electronics merged with LG Semiconductor and was renamed Hynix, now SK Hynix after merging with the SK Group, the third largest conglomerate in South Korea.

Rank 2007 | Rank 2006 | Company | Country of origin | Revenue (million USD) | 2007/2006 change | Market share
1 | 1 | Intel Corporation | USA | 33,995 | +7.8% | 12.6%
2 | 2 | Samsung Electronics | South Korea | 19,691 | -0.8% | 7.3%
3 | 3 | Texas Instruments | USA | 12,275 | -2.6% | 4.6%
4 | 4 | Toshiba Semiconductor | Japan | 12,186 | +20.2% | 4.5%
5 | 5 | STMicroelectronics | France/Italy | 10,000 | +1.5% | 3.7%
6 | 7 | Hynix | South Korea | 9,047 | +15.0% | 3.4%
7 | 6 | Renesas Technology | Japan | 8,001 | 1.3% | 3.0%
8 | 15 | Sony | Japan | 7,974 | +55.5% | 3.0%
9 | 14 | Infineon Technologies | Germany | 6,201 | +21.1% | 2.3%
10 | 8 | AMD | USA | 5,918 | -21.2% | 2.2%
11 | 10 | NXP | Netherlands | 5,746 | +0.7% | 2.1%

Qualcomm was the first fabless semiconductor company to reach the top 10, in 2009; in 2011 it sits at #6, with fellow fabless company Broadcom at #10.

Rank 2011 | Rank 2010 | Company | Country of origin | Revenue (million USD) | 2011/2010 change | Market share
1 | 1 | Intel Corporation(1) | USA | 49,685 | +23.0% | 15.9%
2 | 2 | Samsung Electronics | South Korea | 29,242 | +3.0% | 9.3%
3 | 4 | Texas Instruments(2) | USA | 14,081 | +8.4% | 4.5%
4 | 3 | Toshiba Semiconductor | Japan | 13,362 | +2.7% | 4.3%
5 | 5 | Renesas Electronics | Japan | 11,153 | -6.2% | 3.6%
6 | 9 | Qualcomm(3) | USA | 10,080 | +39.9% | 3.2%
7 | 7 | STMicroelectronics | France/Italy | 9,792 | -5.4% | 3.1%
8 | 6 | Hynix | South Korea | 8,911 | -14.2% | 2.8%
9 | 8 | Micron Technology | USA | 7,344 | -17.3% | 2.3%
10 | 10 | Broadcom | USA | 7,153 | +7.0% | 2.3%

According to IC Insights, semiconductor R&D spending will hit a record high of $53.4B in 2012, which is 16.2% of total semiconductor industry sales of $329.9B. It is interesting to note that 7 of the top 12 semiconductor R&D spenders are fabless. Another interesting number would be the total fabless R&D dollars spent per year, which would include all companies that are part of the fabless semiconductor ecosystem. Even if you just took ARM, ARM customers, and the foundries that make ARM-based products, the combined spend would comfortably exceed Intel’s $8.3B R&D spend in 2011.

What’s in store for the semiconductor industry over the next 5-10 years? Clearly more consolidation, more fabless semiconductor companies in the top 10, and fewer fabs being built by even fewer companies. The old guard semiconductor IDMs are closing fabs left and right and going fab-lite or, like AMD, completely fabless. Moving forward, the majority of the leading-edge fabs will be built by Intel, Samsung, and TSMC. Intel and Samsung have even opened their semiconductor manufacturing doors to the fabless market as boutique foundries focusing on select customers in target markets (FPGA and mobile) in order to fill excess capacity, so we have come full circle yet again.

A Brief History of Semiconductors
A Brief History of Moore’s Law
A Brief History of ASICs
A Brief History of Programmable Devices
A Brief History of the Fabless Semiconductor Industry
A Brief History of TSMC
A Brief History of EDA
A Brief History of Semiconductor IP
A Brief History of SoCs