
Broadcom’s Bet the Company Acquisition of Netlogic
by Ed McKernan on 09-12-2011 at 1:19 pm

I surmised a month ago that Broadcom could be a likely acquirer of TI's OMAP business in order to compete more effectively in smartphones and tablets. I was not bold enough. Instead, Broadcom has offered $3.7B for Netlogic in order to be an even bigger player in the communications infrastructure market by picking up TCAMs and a family of multi-processor MIPS solutions. The acquisition is not cheap: Broadcom is offering to buy Netlogic at a price equal to 10 times its current sales. In addition, it represents 42% of Broadcom's current valuation, although one can argue that semiconductors as a whole are undervalued in the market.

I want to highlight the significance of the acquisition relative to the two competing visions in the marketplace as to how best to serve the communications market from a semiconductor solution point of view. On one side are the off-the-shelf standard ASSP solutions from Broadcom, Marvell, Netlogic, Cavium, Freescale, EZChip and others. On the other are the customizable solutions that were once done entirely with ASICs but which are now more and more being taken over by FPGAs. Altera and Xilinx have made this a focus because they are able to generate a lot of revenue by selling very high-ASP parts as prototype and early production units.

The first camp believes that over time multicore MIPS-based processors are the most flexible way to build and update communications equipment. Broadcom's major weakness was that its switch chips, better suited for cost-sensitive volume switches, did not give it a view into new high-performance designs. Netlogic, on the other hand, gets to see nearly every new high-end design because it is the de facto sole provider of TCAM chips. Broadcom, in a sense, is paying a premium to get this inside worldwide view into the customer base.

Switching to the other side, the FPGA camp: Cisco and Juniper built their businesses in the 1990s and 2000s off of custom ASICs fabricated at IBM and TI. The development teams still consist mostly of ASIC designers. They have been asked to rely less on ASICs and more on FPGAs because the ASICs never generate high volume. The design flow for FPGAs is like that for ASICs, which is a big plus. What Xilinx and Altera figured out several years ago is that if they bolted the fastest SerDes onto their latest chips, they would attract more design wins. Altera of late is doing the best at pushing SerDes speeds. Whereas in the 1990s FPGAs were used for simple bus translations, in the 2000s they became standard front ends to many line cards because of their SerDes speeds and flexibility.

Turning to this decade, FPGAs are being used more and more for building data buses, deep packet inspection and even traffic management. If Xilinx and Altera offered unlimited gates at a reasonable price, the likelihood is that they would own most of the line card. There is one area where they are coming up short, and I expect this will be corrected soon. The networking line card needs some amount of high-level processing. Traditionally this has been MIPS, and recently Intel has shown up. FPGAs need to incorporate processors and have a clean interface into the high-speed fabric.

At the 28nm process node, Altera and Xilinx are attempting to take this one step further by becoming more economical, offering more gates at the same price. Xilinx is pushing the envelope on gate count by utilizing 3D packaging technology. This should allow them to effectively double the gate count at less than 2X the cost.
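To see why stacking slices can beat one giant die, here is a back-of-the-envelope yield calculation, a toy Poisson yield model with made-up defect density, wafer cost and assembly overhead rather than Xilinx's actual numbers:

import math

def cost_per_good_die(area_cm2, defect_density, wafer_cost=5000.0, wafer_area_cm2=706.0):
    """Cost of one good die under a simple Poisson yield model.
    Wafer cost, wafer area and defect density are illustrative assumptions."""
    yield_fraction = math.exp(-area_cm2 * defect_density)   # Poisson yield
    gross_die = wafer_area_cm2 / area_cm2                    # ignores edge loss
    return wafer_cost / (gross_die * yield_fraction)

D0 = 0.5                                     # defects per cm^2 (assumed)
monolithic = cost_per_good_die(4.0, D0)      # one hypothetical 4 cm^2 die
two_slices = 2 * cost_per_good_die(2.0, D0)  # two 2 cm^2 slices
assembled = two_slices * 1.25                # assumed 25% interposer/assembly adder

print(f"monolithic double-size die: ${monolithic:.0f}")
print(f"two slices on an interposer: ${assembled:.0f}")

The point is simply that two modest die yield far better than one die of twice the area, which is where the better-than-linear cost scaling comes from.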

Altera's approach is to set aside silicon area for customers to drop in their own IP block, a customized solution that may benefit the Ciscos and Junipers of the world who don't like to share their crown jewels. There is a metal-mask adder, but it is still a time-to-market and lower-cost alternative to a full ASIC. I call this approach a "Programmable ASSP" because it combines the benefits of both.

In the short term there will be no clear-cut winner, as both approaches have benefits and their adherents among design engineers. There is, however, a longer-term financial aspect that can sway the market: the FPGA vendors have a much higher margin structure than Broadcom and the rest of the ASSP vendors. Altera and Xilinx have 14-18% higher gross margins, but more importantly their operating margins are over two times greater. It comes down to R&D: Altera and Xilinx get much more for their R&D dollar than Broadcom or Netlogic. All this seems to lead to the conclusion that Altera and Xilinx have the advantage in their ability to explore ways of crafting new solutions in the communications space.

Broadcom has a lot to prove over the coming months. In the announcement, they forecasted that the Netlogic acquisition would be accretive. Netlogic's TCAM business is sound; however, it hasn't grown to the level expected with the rollout of IPv6. More significant for Broadcom is the fact that the processor business is expensive. Unlike the PC market, the NPU market never achieved the multibillion-dollar size that many analysts expected a decade ago. It is, according to the Linley Group, roughly $400M in size, which is relatively small for the investment needed at 28nm and below.


Synopsys STAR Webinar, embedded memory test and repair solutions
by Eric Esteve on 09-12-2011 at 8:16 am

The acquisition of Virage Logic by Synopsys in 2010 allowed Synopsys to build a stronger, more diversified IP portfolio, including embedded SRAM, embedded non-volatile memory and embedded test and repair solutions. Looking back in time, I remember the end of the '80s: at that time the up-to-date way to embed SRAM in your ASIC design was to use a compiler provided by the ASIC vendor to implement the SRAM, and to develop the test vectors yourself. By 2000, most ASIC vendors were sourcing the SRAM compiler externally (from Virage Logic…), and ASIC designers were benefiting from faster, denser memories with Built-In Self-Test (BIST) integrated. But that was not enough, as the SRAM, becoming very large, could have a negative impact on the yield of the ASIC. Then, in 2002, Virage Logic introduced repair capability with the STAR product (for Self-Test and Repair).
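To make the repair idea concrete, here is a minimal, purely illustrative sketch (my own toy model, not Synopsys or Virage Logic code) of the basic principle: BIST identifies failing rows, and a repair step remaps those addresses onto spare rows so the memory still presents a clean address space.

class RepairableSRAM:
    """Toy model of row repair: failing rows found by BIST are remapped to spares.
    This illustrates the general self-test-and-repair idea only."""

    def __init__(self, rows, words_per_row, spare_rows):
        self.array = [[0] * words_per_row for _ in range(rows + spare_rows)]
        self.remap = {}                                   # failing row -> spare row
        self.free_spares = list(range(rows, rows + spare_rows))

    def repair(self, failing_rows):
        """Allocate one spare per failing row; return False if spares run out."""
        for row in failing_rows:
            if not self.free_spares:
                return False                              # not repairable: yield loss
            self.remap[row] = self.free_spares.pop(0)
        return True

    def _physical_row(self, row):
        return self.remap.get(row, row)

    def write(self, row, col, value):
        self.array[self._physical_row(row)][col] = value

    def read(self, row, col):
        return self.array[self._physical_row(row)][col]

# Example: BIST reports rows 3 and 17 as failing; repair, then use the memory as usual.
mem = RepairableSRAM(rows=256, words_per_row=32, spare_rows=4)
assert mem.repair([3, 17])
mem.write(3, 0, 0xA5)
print(hex(mem.read(3, 0)))   # 0xa5, transparently served from a spare row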

To register for this STAR Webinar, just go here.

Specifically, the DesignWare Self-Test and Repair (STAR) Memory System consists of:

  • Test and repair register transfer level (RTL) IP, such as STAR Processor, wrapper compiler, shared fuse processor and synthesizable TAP controller
  • Design automation tools such as STAR Builder for automated insertion of RTL and STAR Verifier for automated test bench generation
  • Manufacturing automation tools such as STAR Vector Generator for automated generation of WGL/STIL and programmability in patterns, and STAR Silicon Debugger for rapid isolation, localization and classification of faults
  • An open memory model for all memories. In order to generate DesignWare STAR Memory System views, Synopsys provides the MASIS memory description language. In addition a MASIS compiler is available to memory developers to automate generation and verification of the memory behavioral and structural description
  • The DesignWare STAR ECC IP, which offers a highly automated design implementation and test diagnostic flow that enables SoC designers to quickly address multiple transient errors in advanced automotive, aerospace and high-end computing designs
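As a loose illustration of what the ECC piece does, the snippet below is a textbook Hamming(7,4) single-error-correcting code, not the DesignWare STAR ECC implementation: three parity bits protect four data bits, so a single transient bit flip in a stored word can be detected and corrected.

def hamming74_encode(d):
    """d: list of 4 data bits. Returns the 7-bit codeword [p1, p2, d1, p3, d2, d3, d4]."""
    d1, d2, d3, d4 = d
    p1 = d1 ^ d2 ^ d4
    p2 = d1 ^ d3 ^ d4
    p3 = d2 ^ d3 ^ d4
    return [p1, p2, d1, p3, d2, d3, d4]

def hamming74_decode(c):
    """Correct up to one flipped bit, then return the 4 data bits."""
    p1, p2, d1, p3, d2, d3, d4 = c
    s1 = p1 ^ d1 ^ d2 ^ d4           # parity check over codeword positions 1,3,5,7
    s2 = p2 ^ d1 ^ d3 ^ d4           # positions 2,3,6,7
    s3 = p3 ^ d2 ^ d3 ^ d4           # positions 4,5,6,7
    syndrome = 4 * s3 + 2 * s2 + s1  # 1-based position of the flipped bit, 0 if clean
    c = list(c)
    if syndrome:
        c[syndrome - 1] ^= 1         # flip it back
    return [c[2], c[4], c[5], c[6]]

# Simulate a transient single-bit upset in a stored word.
word = [1, 0, 1, 1]
stored = hamming74_encode(word)
stored[5] ^= 1                       # soft error flips one bit
assert hamming74_decode(stored) == word
print("single-bit upset corrected")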

This webinar will be held by Yervant Zorian, Chief Architect for the embedded test & repair product line, and Sandeep Kaushik, the Product Marketing Manager for the Embedded Memory Test and Repair product line at Synopsys. They will explain:

  • The technical trends and challenges associated with embedded test, repair and diagnostics in today’s designs.
  • The trade-offs and design impact of various solutions.
  • How Synopsys’ DesignWare® STAR Memory System® can meet your embedded test, repair and diagnostics needs.

They will tell you why STAR can lead to:
Increased Profit Margin

  • The DesignWare STAR Memory System can enable an increase of the native die yield through memory repair, leading to increased profit margins

Predictable High Quality with Substantial Reduction in Manufacturing Test Costs and Shorter Time-to-Volume

  • The DesignWare STAR Memory System has superior diagnostics capabilities to enable quick bring up of working silicon, thereby enabling manufacturing to quickly ramp to volume production. The DesignWare STAR Memory System also has automated test bench capabilities and a proven validation flow to ensure a successful bring up of first silicon on the automatic test equipment

Minimum Impact on Design Characteristics (Performance, Power and Area)

  • Because the test and repair system is transparently integrated within the DesignWare STAR Memory System, it ensures minimal impact on timing and area and allows designers to quickly achieve timing closure. This advanced embedded test automation can reduce insertion time by weeks

To register for this STAR Webinar, just go here.

Eric Esteve


Cadence ClosedAccess
by Paul McLellan on 09-11-2011 at 4:00 pm

There are various rumors around about Cadence starting to close up stuff that has been open for a long time. Way back in the mists of time, as part of the acquisition of CCT, the Federal Trade Commission forced Cadence to open up LEF/DEF and allow interoperability with Cadence tools (actually only place and route), I believe for 10 years. Back then Cadence was the #1 EDA company by a long way, in round figures making about $400M in revenue each quarter and dropping $100M to the bottom line. Cadence opened up LEF/DEF and created the Connections Program as a result.

Recently I’ve heard about a couple of areas where Cadence seems to be throwing its weight around.

Firstly, it seems that they have deliberately made Virtuoso so that some functions only operate with Cadence PDKs (SKILL-based) and not with any of the flavors of open PDKs that are around.

I’ve written before about the various efforts to produce more open PDKs that run on any layout system, as opposed to the Skill-based PDKs that only run in Cadence’s Virtuoso environment.

Apparently what is happening is this. OpenAccess includes a standard interface for PCells which is used by all OA PCells whether implemented in Skill, Python, TCL, C++ or anything else. There is also a provision for an end application to query the "evaluator" used with each PCell. In IC 6.1.5, code has been added to Virtuoso GXL that uses this query, and if the answer is anything other than "cdsSkillPCell" then a warning message "whatever could not complete" is issued and that GXL feature is aborted. Previous versions, 6.1.4 and earlier, all worked correctly. In particular, modgen no longer works.
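To make the described behavior concrete, here is a toy model of that gating logic. This is not Cadence or OpenAccess code; the evaluator query is represented by a hypothetical callback passed in by the caller.

# Illustrative pseudologic only: NOT Cadence or OpenAccess source code.
# get_evaluator_name is a hypothetical stand-in for however the application
# queries the evaluator attached to a PCell through OpenAccess.

def run_gxl_feature(feature_name, pcells, get_evaluator_name):
    """Model of the reported behavior: the feature aborts unless every
    PCell reports the Skill evaluator."""
    for pcell in pcells:
        if get_evaluator_name(pcell) != "cdsSkillPCell":
            print(f"WARNING: {feature_name} could not complete")
            return False                  # feature aborted
    print(f"{feature_name} completed")
    return True

# With an open (e.g. Python-based) PDK the evaluator name differs, so the feature
# refuses to run even though the PCell itself evaluates fine.
run_gxl_feature("modgen", ["nmos_pcell"], lambda p: "pythonPCell")    # aborted
run_gxl_feature("modgen", ["nmos_pcell"], lambda p: "cdsSkillPCell")  # completes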

Apparently semiconductor companies are very annoyed about this for a couple of reasons. Firstly, they have to generate multiple PDKs if Cadence won't accept any of the open standards (and Cadence has enough market share that they cannot really be avoided). Second, the incredibly complex rules that modern process nodes require are simply much easier to express in the more powerful modern languages used in the open PDKs (e.g. Python). At least one major foundry has said publicly that they can deliver modern-language PDKs to their customers weeks earlier than they can with Skill.

Cadence apparently claim that it is purely an accident that certain tools fail with anything other than Skill-based PDKs, as opposed to something that they have deliberately done to block them. But they won't put any effort into finding what is wrong (well, they know). To be fair to them, they have had major financial problems (remember those Intel guys?) which have meant that they have had to cut back drastically on engineering investment and really prioritize what they can do.

One rumor is that this non-interoperability has escalated to CEO level and Cadence’s CEO refused to change this deliberate incompatibility. TSMC must be very frustrated by this since it causes them additional PDK expense. Relations between the two companies seem to be strained over Cadence not supporting the TSMC iPDK format. It would be an interesting battle of the monsters if TSMC took a stand to only support open PDKs, a sort of who’s going to blink first scenario.

The second rumor is that Cadence have changed the definitions of LEF/DEF and now demand that anyone using them license the formats. To be fair, Synopsys does the same with .lib. I don’t know if they are demanding big licensing fees or anything. Of course there may be technical reasons LEF/DEF have to change to accommodate modern process nodes, just as .lib has had to change to accommodate, especially, more powerful timing models.

It is, of course, ironic that Cadence's consent decree came about due to its dominance in place & route. Whatever else Cadence can be accused of, dominance in place & route is no longer one of them. Synopsys and Magma have large market shares, and Mentor and Atoptech have credible technical solutions too. It is in custom design with Virtuoso where Cadence arguably has a large enough market share to fall under laws that restrict what monopolists can do, things like the essential facilities doctrine, which overrides the general principle that a company does not have to deal with its competitors if it chooses not to.

Obviously I'm not a lawyer, but it's unclear if Cadence is doing anything it is not allowed to. But it is certainly doing things that contradict its EDA360 messaging about being open, and that would probably have been illegal during the decade of the consent decree which recently ended. Plus, apparently they have 50 people in legal (no wonder they are tight in engineering), but they seem to have their hands full with class-action suits (funny thing: I just tried to use Google to see if there was any color worth adding to this and the #1 hit was a website called Law360!).

And then there is the Cadence “dis-connections” partner program. But that’s a topic for another time.

Related blogs: Layout for AMS Nanometer ICs



2.5D and 3D designs
by Paul McLellan on 09-07-2011 at 1:54 pm

Going up! Power and performance issues, along with manufacturing yield issues, limit how much bigger chips can get in two dimensions. That, and the fact that you can’t manufacture two different processes on the same wafer, mean that we are going up into the third dimension.

The simplest way is what is called package-in-package where, typically, the cache memory is put into the same package as the microprocessor (or the SoC containing it) and bonded using traditional bonding technologies. For example, Apple's A5 package contains an SoC (manufactured by Samsung) and memory chips (from Elpida and other suppliers). For chips where both layouts are under the control of the same design team, microbumps can also be used as a bonding technique, flipping the top chip over so that the bumps align with equivalent landing pads on the lower chip, completing all the interconnectivity.

The next technique, already in production at some companies like Xilinx, is to use a silicon interposer. This is (usually) a large silicon “circuit board” with perhaps 4 layers of metal built in a non-leading edge process and also usually containing a lot of decoupling capacitors. The other die are microbumped and flipped over onto the interposer, and the interposer is connected to the package using through silicon vias (TSVs). Note that this approach does not require TSVs on the active die, avoiding a lot of complications.

I think it will be several years before we see true 3D stacks with TSVs through active die and more than two layers of silicon. It requires a lot of changes to the EDA flow, a lot of changes to the assembly flow, and the exclusion areas around TSVs (where no active circuitry can be placed) may be prohibitive, forcing the TSVs to the periphery of the die and thus significantly lowering the number of connections possible between die.

But all of these approaches create new problems in verifying power, signal and reliability integrity. Solving this requires a new verification methodology that provides accurate modeling and simulation across the whole system: all the die, interposers, the package and perhaps even the board.

TSVs and interposer design can cause inter-die noise and other reliability issues. As I said above, the interposer usually contains decaps and so the power supply integrity needs to take these into account. In fact it is not possible to analyze the die in isolation since the power distribution is on the interposer.

One approach, if all the die layout data (including the interposer) is available, is to do concurrent simulation. Typically some of the die may be from an IP or memory vendor, and in this case a model-based analysis can be used, with CPMs (chip power models) standing in for the detailed data that is unavailable.

One challenge that going up in the 3rd dimension creates is the issue of thermally induced failures. Obviously heat has a harder time getting out from the center of the stack than in a traditional two-dimensional chip design. The solution is to create a chip thermal model (CTM) for each die, which must include temperature-dependent power modeling (leakage is very dependent on temperature), metal density and self-heating power. Handing all these models to a chip-package-system thermal/stress simulation tool for power-thermal co-analysis, the power and temperature distribution can be calculated.
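Here is a minimal sketch of the feedback loop that power-thermal co-analysis has to resolve; all coefficients below are made-up illustrative values, not a real CTM. Leakage rises with temperature, temperature rises with total power, and the analysis iterates until the two agree.

import math

def leakage_w(temp_c, leak_at_25c=2.0, alpha=0.03):
    """Toy exponential leakage model: roughly doubles every 23 C (assumed)."""
    return leak_at_25c * math.exp(alpha * (temp_c - 25.0))

def cosimulate(dynamic_w=10.0, theta_ja=1.5, ambient_c=45.0, tol=1e-3):
    """Fixed-point iteration between power and temperature for one die."""
    temp = ambient_c
    for _ in range(100):
        total_w = dynamic_w + leakage_w(temp)
        new_temp = ambient_c + theta_ja * total_w   # lumped thermal resistance
        if abs(new_temp - temp) < tol:
            return new_temp, total_w
        temp = new_temp
    raise RuntimeError("no stable operating point: thermal runaway")

temp, power = cosimulate()
print(f"converged at {temp:.1f} C, {power:.1f} W")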

A final problem is signal integrity. The wide I/O (maybe thousands of connections) between the die and the interposer can cause significant jitter due to simultaneous switching. Any SSO (simultaneously switching outputs) analysis needs to consider the drivers and receivers on the different die as well as the layout of the buses on the interposer. Despite the interposer being passive (no transistors), its design still requires a comprehensive chip-package-system methodology.
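As a back-of-the-envelope illustration of why thousands of simultaneously switching connections matter, here is the classic first-order ground-bounce estimate dV = N * L * dI/dt; every value below is an assumption rather than measured interposer data.

# Classic first-order SSO (ground bounce) estimate: dV = N * L * dI/dt.
# Every number here is an illustrative assumption.
N = 512          # outputs switching simultaneously on one supply segment
L = 50e-12       # effective shared supply/return inductance for the bank (H)
dI = 2e-3        # current swing per output (A)
dt = 100e-12     # edge rate (s)

bounce_v = N * L * (dI / dt)
print(f"estimated supply bounce: {bounce_v * 1000:.0f} mV")   # about 512 mV here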

Going up into the 3rd dimension is an opportunity to get lower power, higher performance and smaller physical size (compared to multiple chips on a board). But it brings with it new verification challenges in power, thermal and signal integrity to ensure that it is all going to perform as expected.

Norman Chang’s full blog entry is here.

Related blog: TSMC Versus Intel: The Race to Semiconductors in 3D!



TSMC and Dr. Morris Chang!
by Daniel Nenni on 09-05-2011 at 6:14 pm

While I was in Taiwan last month battling a Super Typhoon, Morris Chang was in Silicon Valley picking up his IEEE Medal of Honor. Gordon Moore, Andrew Grove, and Robert Noyce all have medals. The other winners, including 10 Nobel prize recipients, are listed HERE. An updated wiki on Dr. Morris Chang is located HERE.

The 12+ hour plane ride home gives a person plenty of time to reflect on why TSMC is so successful. Leadership is certainly important; just take a look at the executive staff on the new TSMC corporate website ( www.tsmc.com ). But in my opinion, TSMC's success boils down to one thing: it is a dedicated IC foundry that depends on its customers and ecosystem partners, and it has never forgotten that.

Foundry 2010 Revenues:
(1) TSMC $13B
(2) UMC $4B
(3) GFI $3.5B
(4) SMIC $1.5B
(5) Dongbu $512M
(6) Tower/Jazz $509M
(7) Vanguard $508M
(8) IBM $430M
(9) Samsung $420M
(10) MagnaChip $405M

But if you ask how TSMC and Dr. Morris Chang himself got where they are today, it can be summed up in three words: Business Model Innovation. Other business model innovators include eSilicon, ARM, Apple, Dell, Starbucks, eBay, Google, etc. I would argue that without TSMC some of these businesses would not even exist.

Morris Chang's education started at Harvard but quickly moved to MIT as his interest in technology began to drive his future. From MIT mechanical engineering graduate school, Morris went directly into the semiconductor industry at the process level and was quickly moved into management. After completing an electrical engineering PhD at Stanford, Morris leveraged his semiconductor process management success and went to Taiwan to head the Industrial Technology Research Institute (ITRI), which led to the founding of TSMC.

In 1987 TSMC started two process nodes behind the incumbent semiconductor manufacturers (IDMs). Morris Chang made the first TSMC sales calls with a single brochure: TSMC Core Values: Integrity, Commitment, Innovation, Partnership. Four to five years later TSMC was only one node behind and the orders started pouring in. Within 10 years TSMC had caught up with the IDMs (Intel excepted) and the fabless semiconductor industry blossomed, enabling a whole new era of semiconductor design and manufacturing.

Morris Chang Awards

  • 1998, “Top 25 Managers of the Year” and “Stars of Asia” by Business Week.
  • 1998, “One of The Most Significant Contributors in the 50 years of Semiconductor Industry” by BancAmerica Robertson Stephens.
  • 2000, “IEEE Robert N. Noyce Award” for Exceptional Contributions to Microelectronics Industry.
  • 2000, “Exemplary Leadership Award” from the Fabless Semiconductor Association (GSA).
  • 2005, “Top 10 Most Influential Leaders of the World” by Electronic Business.
  • 2008, “Semiconductor Industry Association’s Robert N. Noyce Award”
  • 2009, “EE Times Annual Creativity in Electronics Lifetime Achievement Award”
  • 2011, IEEE Medal of Honor

Dr. Morris Chang turned 80 on July 10th, 2011. I have seen him in Fab 12 but we have not met. Morris returned to the CEO job in June of 2009 and is still running TSMC full time as CEO and Chairman. He works from 8:30am to 6:30pm like most TSMC employees and says that a successful company's life cycle is rapid expansion, then a period of consolidation, then maturity. The same could be said about Morris himself.

Here is a new 5 minute video from TSMC. I highly recommend watching it:

Pioneer of Dedicated IC Foundry Business Model

Related Blogs: TSMC 28nm / 20nm Update!


Manufacturing Analysis and Scoring (MAS): GLOBALFOUNDRIES and Mentor Graphics
by Daniel Payne on 09-05-2011 at 3:37 pm

Last week GLOBALFOUNDRIES and Mentor Graphics presented at the Tech Design Forum on how they collaborated on a third-generation DFM flow. When I reviewed the slides of the presentation, it really struck me how the old Pass/Fail thinking in DRC (Design Rule Checking) for layout rules had been replaced with a score represented as a number between 0 and 1.

An example is shown above where an enclosure rule is described as: M1 minimum overlap past CA for at least two opposite sides with the other two sides >= 0.00um (Rectangular enclosure).

The top Metal line has a Via with a short overlap and its score is about 0.4 (less manufacturable, lower yielding), while the bottom Metal line has a Via with a much longer overlap, so its score is about 0.9 (more manufacturable, higher yielding).

When I first started designing full-custom ICs, all of our rules were described as Pass/Fail, nothing ambiguous at all. Today, however, DRC rules are a different story at the 28nm node and below because of DFM and lithography challenges, so IC designers need a new way to be warned early in the design process about layout practices that impact yield.

One way that GLOBALFOUNDRIES started describing the scoring on a rule was to provide two categories of rules: Low Priority and High Priority.


MCD Scoring Model

The blue line on the top is for Low Priority rules, and the X axis represents the score; the slope of the line moves downward gradually. In contrast, the High Priority rule line, shown in red, has a much steeper slope, meaning that its score approaches zero more quickly.

With just these two rule lines we can bound all of the DRC rules in our process by placing each rule into one of two categories: High or Low priority.
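As a rough illustration of priority-based scoring, the sketch below maps a measured dimension to a score between 0 and 1, with high-priority rules falling toward zero more steeply. The scoring function and all the numbers are made up for illustration, not the actual MCD or MAS equations; the MAS refinement would simply give each rule its own silicon-calibrated slope.

def dfm_score(measured_um, min_rule_um, recommended_um, priority="low"):
    """Toy DFM score in [0, 1]: 1 at or above the recommended value, 0 below the
    hard minimum, with a priority-dependent slope in between."""
    if measured_um >= recommended_um:
        return 1.0
    if measured_um < min_rule_um:
        return 0.0                                   # hard DRC violation
    margin = (measured_um - min_rule_um) / (recommended_um - min_rule_um)
    exponent = 3.0 if priority == "high" else 1.0    # steeper drop for high priority
    return round(margin ** exponent, 2)

# Via enclosure example, loosely mirroring the 0.4 vs 0.9 scores mentioned above
# (the minimum and recommended overlaps are assumed numbers).
print(dfm_score(0.012, 0.005, 0.025))   # short overlap -> lower score (0.35)
print(dfm_score(0.022, 0.005, 0.025))   # long overlap  -> higher score (0.85)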

An improvement over the MCD scoring model is called the MAS scoring model where each DRC rule has its own slope based on actual Silicon measurements:


MAS Scoring Model

The green and grey lines correspond with two DRC rules that have been measured and plotted.

This new MAS Scoring Model is supported by Mentor's Calibre CFA tool:

In this dialog I can see the MAS score for each DFM rule and then decide if I want to edit my layout to improve the score.

Many process effects are covered by the MAS score: litho, stress, charging, random particle defect, etc.

I can now sort and rank which DFM layout issues to work on first.

Summary
GLOBALFOUNDRIES is using DFM scoring to qualify IP at the 65nm and smaller nodes as a means to improve quality. Mentor Calibre tools use the equation-based MAS scoring and litho-friendly design so that a designer can converge more quickly on DFM fixes in their IP.


What changes to expect in Verification IP landscape after Synopsys acquisition of nSys?
by Eric Esteve on 09-05-2011 at 4:53 am

Even if the nSys acquisition by Synopsys will not have a major impact on Synopsys' balance sheet, it is a kind of earthquake in the verification market landscape. After the Denali acquisition by Cadence in 2010, nSys was most probably the market leader in verification IP among the independent VIP providers (excluding Cadence). The company's VIP portfolio bears comparison with Cadence's, as nSys supports the PCI family, communication standards, storage interfaces, USB, DDR3/2 memory controllers, MIPI, AMBA and some miscellaneous interfaces. nSys being privately owned, we don't know the company's revenue, nor whether the company is profitable. Was this acquisition an "asset sale", just an opportunistic deal closed by Synopsys at low cost, with the side effect of competing directly with Cadence in a market where that company has invested heavily during the last three years? Or is the goal to consolidate Synopsys' position in the interface IP market, where the company is the dominant provider, present, and leader, in every segment (DDRn, PCIe, USB, SATA, HDMI, MIPI…), by adding "independent" VIP to the current offering?

Synopsys was offering "bundled" VIP, and this is not the best way to valorize the product, as Design IP customers expect to get a bundled VIP almost for free. If Synopsys' acquisition of nSys illustrates a real strategy inflection, another side effect will be the loss of accuracy of the "Yalta description": Cadence dominant in VIP and Synopsys in the IP market!

Only the Synopsys and nSys management teams know the answer. Today we can only evaluate the impact of this acquisition on the day-to-day life of SoC design teams when their SoC integrates an interface IP… which happens in most cases.

I found an interesting testimonial on nSys web site: “We had debated using bundled VIP solutions, which were available with PCIe IP, but after evaluating the nSys Verification IP for PCIe, we dropped the idea. We were impressed by the level of maturity of the PCIe nVS. We also realized that the PCIe nVS provided us with the ability to do independent verification of the IP that could not have been achieved with the bundled models. The nSys solution has helped our engineering team increase productivity too.” From Manoj Agarwal, Manager ASIC, EFI.

The important word in this testimonial is “independent”. We have expressed this concern in the past, when saying:

“Common sense remark about the BFM and IP: when you select a VIP provider to verify an external IP, it is better to make sure that the design teams for the BFM and for the IP are different and independent (high-level specification and architecture made by two different people). This is to avoid “common mode failure”, a principle well known in aeronautics, for example.”

A SoC project manager will now have the option to buy an "independent" VIP to run the verification of the interface function… from the same vendor selling the IP. He can still buy it from the main competitor (Cadence) or from one of the remaining VIP providers (Avery, ExpertIO, PerfectVIP, Sibridge Technology, SmartDV Technology), but the one-stop-shop argument (buy the controller, the PHY and the verification IP together) will be reinforced, especially because the VIP now comes from what was a genuinely independent source.

Is Synopsys' acquisition of nSys an opportunistic asset sale? Honestly, I don't know, but this is certainly a stone in Cadence's garden (as that company bought Denali in 2010, and products from Yogitech SpA, IntelliProp Inc. and HDL Design House in October 2008, to consolidate its VIP portfolio) and a threat for the remaining VIP providers. Is it good news for Synopsys customers? Yes, because the "one stop shop" will ease the procurement and technical support process. Synopsys customers should just make sure to buy at the right price… market consolidation can make life easier… and prices higher!

Eric Esteve from IPnest


HP Will Farm Out Server Business to Intel
by Ed McKernan on 09-04-2011 at 7:36 pm


In a Washington Post column this past Sunday, Barry Ritholtz, a Wall St. money manager who has a blog called The Big Picture, recounts the destruction that Apple has inflicted on a wide swath of technology companies (see And then there were none). He calls it "creative destruction writ large." Ritholtz, though, is only accounting for what has occurred to date. I would contend that we are about to start round two, and the changes coming will be just as significant. If I were to guess, HP will soon decide to farm out its server business to Intel. Intel will soon realize that they need to step up to the plate for a number of reasons.

When HP hired Leo Apotheker, the ex-CEO of software giant SAP, the Board of Directors (which includes Marc Andreessen and Ray Lane, formerly of Oracle) implicitly fired the flare guns signaling that they were in distress and were going to make radical changes as they reoriented the company toward the software sphere of the likes of Oracle and IBM. To do this, they had to follow in IBM's footsteps by first stripping out PCs. IBM, however, sold its PC group to Lenovo back in 2004, before the last downturn. Unfortunately for HP, it will get much less for its PC business than what it paid for Compaq.

The next step for HP is risky but necessary. They need to consolidate server hardware development under Intel. Itanium-based servers are selling at a run rate of $500M a quarter at HP and are now less than 5% of the overall server market, compared to IBM Power and Oracle SPARC, which together account for nearly 30% of server dollars. Intel and AMD x86 servers make up the rest (see the chart below). In addition, IBM's mainframe and Power server businesses are growing while HP's Itanium business is down 10% year over year.

Oracle's acquisition of Sun always intrigued me as to whether it was meant as a short-term effort to force HP to retreat on Itanium or as a much longer-term strategy of giving away hardware with every software sale. When Oracle picked up Sun, it still held a solid #2 position in the RISC world, next to IBM. By taking on Sun, Oracle guaranteed SPARC's survival and at the same time put a damper on HP gaining more share. New SPARC processors were not falling behind Itanium as Intel scaled back on timely deliveries of new cores at new process nodes. More importantly, the acquisition was a signal to ISVs (Independent Software Vendors) not to waste their time porting apps to yet another platform, namely Itanium. Oracle made sure that HP was seen as an orphaned child when it announced earlier this year that it was withdrawing support for Itanium.

There is only one architecture, at this moment, that can challenge SPARC and Power, and it is x86. It is in HP's interest to consolidate on x86 and reduce its hardware R&D budget. If needed, a software translator can be written to get any remaining Itanium apps running on x86. Since the latest Xeon processors are three process nodes ahead of Itanium, there should be little performance difference. But what about Intel: do they want to be the box builder for HP?

I would contend that Intel has to get into the box business and is already headed there. The chief issue holding them back is the reaction from HP, Dell and IBM. None of them is generating great margins on x86 servers. With regard to Dell, Intel could buy them off with a processor discount on the standard PC business, especially since Dell will now be the largest-volume PC maker. IBM is trickier.

But why does Intel want to go into the server systems business? The answer is severalfold. From a business perspective, they need more silicon dollars as well as sheet metal dollars. Intel sees another $20-$30B opportunity in ramping up, and they will need it to counteract any flatness or drop in the client side of the processor business. Earlier this year Intel bought Fulcrum; if they build the boxes for the data center, then they have the potential to eat away at Broadcom's $1B switch chip business.

A more interesting angle is the data center power consumption problem. Servers consume 90% of the power in a data center. It used to be that processors were the majority of that power, but with the performance gap growing between processors and DRAM and the rise of virtualization, it is now both a processor and a memory problem. Intel is working on platform solutions to minimize power, but they expect to get paid for their inventions.

Intel has started to increase prices on server processors based on reducing a data center's power bill. Over the course of the next few years they will let processor prices creep up, even with the looming threat of ARM. This is a new value proposition that can be taken one step further. If they build the entire data center box with processors, memory, networking and eventually storage (starting with SSDs), then they can maximize the value proposition to data centers, which may not have alternative suppliers.
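To see why that value proposition can hold up, here is a toy total-cost-of-ownership comparison; every figure is an assumption for illustration only. If a more efficient processor saves enough electricity over a server's service life, a higher chip price can still lower the data center's total bill.

def lifetime_cost(cpu_price, server_watts, years=3, kwh_price=0.10, pue=1.8):
    """Purchase price plus electricity over the service life.
    PUE accounts for cooling and power-distribution overhead on top of server power."""
    hours = years * 365 * 24
    energy_kwh = server_watts * pue * hours / 1000.0
    return cpu_price + energy_kwh * kwh_price

baseline  = lifetime_cost(cpu_price=800.0,  server_watts=400.0)
efficient = lifetime_cost(cpu_price=1100.0, server_watts=320.0)  # pricier CPU, lower power

print(f"baseline server:  ${baseline:,.0f}")
print(f"efficient server: ${efficient:,.0f}  (cheaper overall despite a dearer CPU)")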

In some ways Intel is at risk if it just delivers silicon without building the whole data center rack. There are plenty of design groups at places like Google, Facebook and others who understand the tradeoffs of power and performance and would like to keep cranking out new systems based on the best available technology. By putting down its big foot, Intel could eliminate these design groups and make it more difficult for a new processor entrant (AMD or ARM based) to get into the game.


I love you, you love me, we’re a happy family…
by Paul McLellan on 08-31-2011 at 8:00 pm

The CEO panel at the 2nd GTC wasn’t especially enlightening. The theme was that going forward will require cooperation for success and everyone was really ready to cooperate.

The most interesting concept was Aart talking about moving from what he called "scale complexity", aka Moore's Law, to what he called "systemic complexity": we are moving from the age where transistors get cheaper at each process generation to one where you can build larger systems, but the per-transistor cost will not be less.

Aart also had the most memorable image of the afternoon. He was talking about how amazing it is that you go to a fine-dining restaurant with 8 people and, course after course, things come to the table at the same time, all perfectly prepared. My daughter's boyfriend is the chef of just such a restaurant, so I know a bit about it from behind the scenes, and it is still amazing. Delivering a foundry capability is like that: the process, the tools, the IP, the manufacturing ramp and everything else need to all be ready at the same time. The output isn't the sum of the factors but the product, and if one is zero the whole thing is a big zero. Global Foundries' kitchen just happens to cost billions of dollars, a bit more than even the most over-the-top fine-dining restaurant.

Mojy, who was chairing the session, had one question to try and break up the love-in. Global Foundries, IBM and Samsung all compete, and yet they cooperate in process development, even going as far as fab-same implementation (the same equipment etc. in all the fabs of each company). Will the big EDA companies cooperate in the same way? Of course this is a bit of an unfair question. The only reason that semiconductor companies cooperate is that technology development has got too expensive for any one company (except Intel, always the exception) to do it alone as they would have done 15 years ago (and, indeed, did). While foundries and semiconductor companies get some differentiation through process, most comes from what they design (for IDMs) or how they service customers operationally (for foundries). The software that EDA companies create is their differentiation. If Cadence, Synopsys, Magma and Mentor cooperated to build a shared next-generation place and route system, it is hard to see how they would differentiate themselves. Yes, some companies have better AEs, some have better geographic coverage in some places, etc., but basically they would all be selling a product that they could only differentiate by price. Today, with unique systems but broadly similar capabilities, they are already close to that situation. So the CEOs largely ducked the question, since "no" would have been too direct an answer.


Global Technology Conference 2011
by Paul McLellan on 08-31-2011 at 7:07 pm

I went to the second Global Technology Conference yesterday. It started with a keynote by Ajit Manocha, who has been CEO for about two months. I hadn't realized until someone asked him during the press lunch that he is technically only the "acting" CEO. Actually, given his experience he might be the right person anyway, rather than just a safe pair of hands in the meantime. He was Chief Manufacturing Officer for NXP (nee Philips Semiconductors) and so already has, as he put it, experience of running multiple fabs in multiple countries. When asked if he might become the permanent CEO, he basically said that he'd advised the board to look for the best person possible. And then he added that, of course, if he didn't deliver he'd be out of a job anyway.

Ajit (and everyone at Global) makes a big deal about being globally distributed as opposed to clustered in one country like companies using the "traditional model", such companies going unmentioned as if the mere mention of TSMC might lose business (oh, wow, I didn't know you had a competitor, I must give them a call). Of course the tsunami in Japan has made people more aware of how vulnerable supply chains sometimes are, and of course Taiwan also sits in earthquake country, not to mention political-instability country if China's leaders decided to do something stupid. Global tries to have every process (the recent ones, anyway; the old ones are only in the old Chartered fabs in Singapore) in at least two of their fabs (Singapore, Dresden in the old East Germany, and the one under construction in upstate New York, which is now ready for equipment install ahead of schedule).

Ajit talked mostly about getting closer to customers and being the vendor of choice. Of course at some level everyone tries to do that, and it is much easier to talk about than to achieve in practice. But here are a few interesting statistics: over 150 customers, over 11,000 employees and $8B spent on capex in 2010-2011.

The capacity they have in place is quite impressive. Fab 1 (the old AMD fab in Dresden) is expanding to 80,000 wafer starts per month. I assume that means 300mm wafers rather than the 200mm wafer-equivalents that are sometimes used. The focus is 45/40/32/28nm. Fab 8 (under construction in New York) is big: six football fields of clean room with over 7 miles of overhead track for moving wafer transport vehicles around. It will have 60,000 wafer starts per month once ramped, focused on 28/20nm. And in Singapore (the old Chartered fabs) they have a lot of 200mm capacity and, in Fab 7, 50,000 wafer starts per month anywhere from 130nm to 40nm.

The meat of what Global is up to was in Gregg Bartlett's presentation on the implementation of their process roadmap. He is very proud that they have gate-first 32nm HKMG ramped while other people using it are struggling. During the lunch he was asked about Intel's 3D transistor. He thinks that despite some advantages, it will prove very difficult to control in the vertical dimension and is too restrictive for a general foundry business. Which is interesting, if true, since Intel has more capacity than it needs and so is entering the foundry business!

At 28nm they will use basically the same FEOL (front end of line, i.e. transistors) as at 32nm, namely gate-first HKMG. Compared to 40nm this is a 100% density increase and either a 40% increase in performance or a 40% reduction in power (depending on how you take it). He reckons that die are 10-20% smaller relative to 28nm gate-last processes. That would be TSMC.

But apparently at 20nm, litho restrictions mean that you can no longer get that 10-20% benefit, so they will switch to gate-last. Versus 28nm this is nearly a 50% area shrink, and they are investing in innovation in interconnect technologies.
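A quick arithmetic sketch of how those quoted figures compose (the 1.00 starting area is arbitrary, and 15% is used for the claimed 10-20% gate-first advantage):

# Composing the scaling figures quoted above; the 1.00 starting area is arbitrary.
area_40nm = 1.00
area_28nm_gate_first = area_40nm / 2.0               # "100% density increase" vs 40nm: half the area
area_28nm_gate_last = area_28nm_gate_first / 0.85    # gate-first die claimed 10-20% smaller (15% used)
area_20nm = area_28nm_gate_first * 0.5               # "nearly a 50% area shrink" vs 28nm

for node, area in [("40nm", area_40nm),
                   ("28nm gate-first", area_28nm_gate_first),
                   ("28nm gate-last (est.)", area_28nm_gate_last),
                   ("20nm", area_20nm)]:
    print(f"{node:22s} relative area {area:.2f}")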

And after that? 16/14nm: multi-gate FinFET transistors, lots of innovation. Also innovation in EUV (extreme ultraviolet) lithography, where they have been doing a lot of development work (over 60 masks delivered) and will have a production installation in New York in the second half of next year.