Semiconductor Yield @ 28nm HKMG!
by Daniel Nenni on 08-28-2011 at 4:00 pm

Whether you use a gate-first or gate-last High-k Metal Gate implementation, yield will be your #1 concern at 28nm, which makes variation analysis and verification a big challenge. One of the consulting projects I have been working on with the foundries and top fabless semiconductor companies is High-Sigma Monte Carlo (HSMC) verification technologies. It has been a bumpy two years certainly, but the results make for a good blog so I expect this one will be well read.

GLOBALFOUNDRIES Selects Solido Variation Designer for High-Sigma Monte Carlo and PVT Design in its AMS Reference Flow

“We are pleased to work with Solido to include variation analysis and design methodology in our AMS Reference Flow,” said Richard Trihy, director of design enablement at GLOBALFOUNDRIES. “Solido Variation Designer together with GLOBALFOUNDRIES models makes it possible to perform high-sigma design for high-yield applications.”

Solido HSMC is a fast, accurate, scalable, and verifiable technology that can be used both to improve feedback within the design loop and to comprehensively verify yield-critical high-sigma designs.

Since billions of standard Monte Carlo (MC) simulations would be required for six sigma verification, most yield sensitive semiconductor designers use a small number of MC runs and extrapolate the results. Others manually construct analytical models relating process variation to performance and yield. Unfortunately, both approaches are time consuming and untrustworthy at 28nm HKMG.
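
As a rough back-of-the-envelope illustration of why brute-force Monte Carlo is hopeless at high sigma (my own sketch in Python, not anything from Solido’s tool), the one-sided normal tail probability at 6 sigma is about 1e-9, so billions of runs are needed just to observe a handful of failing samples:

    import math

    def tail_probability(sigma):
        """One-sided tail probability of a standard normal beyond 'sigma'."""
        return 0.5 * math.erfc(sigma / math.sqrt(2.0))

    for sigma in (3, 4, 5, 6):
        p_fail = tail_probability(sigma)
        # To observe roughly 10 failing samples you need about 10/p_fail runs.
        runs_needed = 10 / p_fail
        print(f"{sigma}-sigma: P(fail) ~ {p_fail:.1e}, "
              f"~{runs_needed:.0e} MC runs to see ~10 failures")

The speed and simulation-count figures below are measured against exactly this brute-force baseline.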

Here are some of the results I have seen during recent evaluations and production use of Solido HSMC:

Speed:

  • 4,700,000x faster than Monte Carlo for 6-sigma analysis
  • 16,666,667x fewer simulations than Monte Carlo for 6-sigma analysis
  • Completed in approximately 1 day, well within production timelines

Accuracy:

  • Properly determined performance at 6-sigma, with an error probability of less than 1e-12
  • Used actual Monte Carlo samples to calculate results
  • Provided high-sigma corners to use for design debug

Scalable:

  • Scaled to 6-sigma (5 billion Monte Carlo samples)
  • Scaled to more than 50 process variables

Verifiable:

  • Error probability was reported by the tool
  • Results used actual Monte Carlo samples – not based on mathematical estimates


Mohamed Abu-Rahma of Qualcomm did a presentation at #48DAC last June in San Diego. A video of his presentation can be seen HERE. Mohamed used Solido HSMC and Synopsys HSPICE for six sigma memory design verification.

Other approaches to six-sigma simulation include:

  • Quasi Monte Carlo (QMC)
  • Direct Model-based
  • Worst-Case Distance (WCD)
  • Rejection Model-Based (Statistical Blockade)
  • Control Variate Model-Based (CV)
  • Markov Chain Monte Carlo (MCMC)
  • Importance Sampling (IS)

None of these were successful at 28nm, due to excessive simulation times and the inability to correlate with silicon. The Worst-Case Distance approach in particular is currently being peddled by an EDA vendor whose name I will not mention. They claim it correlates to silicon but it does not! Not even close! But I digress…..

Having come from Virage Logic and worked with Solido for the last two years, I am basing this blog on my personal experience. If you have hard data that suggests otherwise, let me know and I will post it.

I would love to describe in detail how Solido solved this very difficult problem. Unfortunately I’m under multiple NDAs with the penalty of death and dismemberment (not necessarily in that order). You can download a Solido white paper on high-sigma Monte Carlo verification HERE. There is another Solido white paper that goes into greater detail on how they solved this problem, but it requires an NDA. You can also get a Webex HSMC briefing by contacting Solido directly HERE. I observed one just last week and it was quite good; I highly recommend it!


Layout for analog/mixed-signal nanometer ICs
by Paul McLellan on 08-26-2011 at 5:24 pm

Analog has always been difficult: a bit of a black art, persuading a digital process to create well-behaved analog circuits, capacitors, resistors and all the rest. In the distant past, we would solve this by putting the analog on a separate chip, often in a non-leading-edge process. But modern SoCs integrate large amounts of digital logic along with RF, analog and mixed-signal functionality on a single die, and are then manufactured in the most bleeding-edge process. This is a huge challenge.

The complexity of design rules at 28nm and below has greatly complicated the process. Traditional custom layout is tedious, and inflexible once started (you only have to think of the layout impact of a trivial digital gate-level change to see this). As a result, layout teams don’t start layout until the circuit design is nearly complete, and so they have to work under tremendous tape-out pressure. However, a more agile approach is possible using automation. It reduces the effort needed, allows layout to be overlapped with circuit design and produces better (specifically smaller) layouts.

Timely design of the RF, analog and mixed-signal parts of many SoCs has become the long pole in the tent, the part of the schedule driving how fast the chip can be taped out. Part of the challenge is technical, since the higher variability of silicon at 28nm and below (and above too, but to a lesser extent) threatens to make many analog functions unmanufacturable. To cope with variability and control parasitics, foundries have introduced ever more complex design and DFM rules, and designers have come up with ever more elaborate circuits with more devices. Of course, both of these add to the work needed to complete designs.

The solution is an agile layout flow involving large-scale automation. The key advance is the capability to quickly lay out not just a single structure (e.g. a current mirror) in the presence of a few localized rules, but rather the entire design hierarchy in the presence of all the rules (complex design rules, area constraints, signal flow, matching, shielding etc).

Only by automating the whole layout process is it possible to move to an agile “throw away” layout to be done before the circuit design is finalized, and thus start layout earlier and do it concurrently with circuit design.

The advantages of this approach are:

  • significantly lower layout effort, since tasks that were previously done by hand are automated. This is especially the case in the face of very complex design rules, where multiple iterations to fix design rule violations are avoided
  • large time-to-market improvement, since layout is started earlier and takes less total time, so it finishes soon after circuit design is complete
  • typically, at 28nm, a 10-20% die size reduction versus handcrafted layout

For more details of Ciranova’s agile layout automation, the white paper is here.


Will AMD and Samsung Battle Intel and Micron?
by Ed McKernan on 08-26-2011 at 2:00 pm

We received some good feedback from our article on Intel’s Back to the Future Buy of Micron, so I thought I would present another story line that gives readers a better perspective of what may be coming down the road. In this case, it is the story of AMD and Samsung partnering to counter Intel’s platform play with Micron. The initial out-of-the-box idea of Intel buying Micron is based on my theory that whoever controls the platform wins. The new mobile environment is driven by two components, the processor and NAND flash. You can argue wireless technologies, but Qualcomm is like Switzerland, supplying all comers. Intel (CPU centric) and Samsung (NAND centric) are the two likely head-to-head competitors. Each one needs a partner to fill out the platform: thus, Intel with Micron and Samsung with AMD. The semiconductor world can operate in a unipolar or bipolar fashion; a multipolar world eventually consolidates into one of the former.

The challenge for a company that operates like a monopoly is that it slows down in the delivery of new products or fails to address new market segments. Intel has had a long run as the leading supplier of processors in laptops and notebook PCs. However, as most are witnessing now, they missed on addressing the smartphone and tablet markets. In the notebook market, Intel could deliver processors within a 35 watt thermal design power (TDP) threshold. Now Intel is scrambling to redesign an x86-based processor that can meet the more stringent power requirements of iPhones and iPads. The ultrabook initiative, which started in the spring, is an attempt to close the gap with tablets using a PC product that has much better battery life and comes closer in weight.

It will take two years for the initiative to come to full completion. The new mobile world of iPads, smartphones and MacBook Airs can trace its genealogy to version 2.0 of the iPod. It was at this point that Steve Jobs converted Apple to a roadmap that would build innovative products around the lower power and smaller physical dimensions of NAND flash. And with Moore’s Law behind it, NAND flash offers a path to lower-cost storage for future products that will tap into the cloud. When one looks at the bill-of-materials profile of the components in Apple’s iPhones and iPads, one can see that the NAND flash is anywhere from 2 to 5 times the dollar content of the ARM processor. In the MacBook Air the NAND flash content is one half to slightly more than that of the Intel processor. If you were to combine the three platforms, flash outsells the processor content by at least 3:1. Given current trends this will grow, and it therefore becomes the basis for Intel seeking to be a flash supplier. This is especially true if they can make a faster proprietary link between the processor and storage.

Turning to Samsung’s side of the platform: they obviously recognize the growing trend of NAND flash in mobile compute platforms. Samsung should look to leverage this strength into a platform play that also includes the processor, in this case both ARM and x86. Samsung will also look for ways to separate themselves from competitors such as Toshiba and SanDisk. This is where pushing ahead early to 450mm fabs could have an impact.

During the course of the next decade, there are three major platform battles that will take place between ARM and x86 processors. Today Intel dominates servers and legacy desktop and notebook PCs, while ARM dominates SSD-based smartphones and tablets. The crossover platform is the MacBook Air, with an Intel processor and an SSD. Intel has been increasing, and will likely continue to increase, ASPs in the server market as a value proposition on data-center power consumption. In the traditional PC space, Intel is confronted with a slow-growth market that will require them to reduce ASPs in order to prevent ARM from entering the space and to avoid losing share to the SSD-based platforms. There is not a direct 1:1 cannibalization in this scenario, but we will understand more fairly soon. ASP declines by Intel will in one way or another be a function of keeping their fabs full.

As one can see, there are a lot of variables in determining who wins, and to what degree, in all three of the major platforms. If Samsung wants to be a major player in all three then it needs an x86 architecture as well as a top-notch ARM design team to compete against Intel. Assuming NAND flash will grow in revenue faster than x86 processors, Samsung should utilize AMD’s x86 to strip the profits out of the legacy PC and the highly profitable server space. Intel will likely utilize flash to enhance the platform in terms of improving overall performance relative to ARM. Because Intel supplies 100% of Apple’s x86 business, it will have a more difficult time offering discounts to non-Apple customers, because any discount will be immediately subtracted from Apple’s purchases. Since Apple is the growth player in the PC market, they will dictate Intel’s floor pricing. AMD is not an Apple supplier, and therefore has the freedom to experiment with x86 pricing with the rest of the PC market. To implement a strategy complementary to the MacBook Air, AMD needs to adjust its processor line by developing a processor that trades some x86 performance for greater graphics performance. The combined solution (or APU in AMD terminology) must be sold for $50-$75, or more than $150 less than Intel’s solution. And finally, the maximum thermal design power (TDP) of the processor should be in the range of 2-5W.

In the past month, many of the Taiwan-based notebook OEMs have complained that they are unable to match Apple’s price on the entry-level $999 MacBook Air. Apple can now secure LCD panels, NAND flash and other components at lower prices. In addition, the PC makers must pay Microsoft an O/S license fee. For these vendors to be able to compete, they must utilize a non-Intel processor. The lowest-cost Intel Core i5 ULV processor is roughly $220, and Intel will likely not offer a lower-cost ULV processor until Ivy Bridge reaches the middle of its production life cycle sometime in 2013.

On the ARM front, Samsung needs an experienced design team to develop a family of processors for smartphones and tablets. The highly successful A4 processor was designed by a group in Austin called Intrinsity, which Apple snapped up last year. Mark McDermott, one of the co-founders, and someone I once worked with at Cyrix in the 1990s, has been designing ultra low power processors for 20 years. Experience counts and Samsung is in need of processor designers who can make the performance and power tradeoffs between processor and graphics cores. AMD is overloaded with engineering talent.

The platform wars, not just processor wars, are heating up as Intel and Samsung look to gain control of the major semiconductor content going into new mobile devices, legacy PCs and data center servers. It looks to be a decade-long struggle that will be better understood after 450mm fabs are in place. What may have seemed out of the question a few months ago (e.g. Intel buying Micron or Samsung teaming up with AMD) is likely to be up for serious consideration. Who would have guessed a month ago that Google would buy Motorola or that HP would exit the PC business? The tectonic plates are shifting.


Transistor Level IC Design?
by Daniel Payne on 08-26-2011 at 1:23 pm

If you are doing transistor-level IC design then you’ve probably come up against questions like:

  • What changed in this schematic sheet?
  • How did my IC layout change since last week?

In the old days we would hold up the old and new versions of the schematics or IC layout and try to eye-ball what had changed. Now we have an automated tool that does this comparison for us: Visual Design Diff, or VDD, from ClioSoft.

If you’d like to win an iPad 2 then go and play their game to spot the differences.

Also Read

How Tektronix uses Hardware Configuration Management tools in an IC flow

Richard Goering does Q&A with ClioSoft CEO

Hardware Configuration Management at DAC


Third Generation DFM Flow: GLOBALFOUNDRIES and Mentor Graphics
by Daniel Payne on 08-26-2011 at 11:17 am

[Figure: Calibre yield analyzer DFM flow]

Introduction
Mentor Graphics and GLOBALFOUNDRIES have been working together since the 65nm node, over several process generations, on making IC designs yield better. Michael Buehler-Garcia, director of Calibre Design Solutions Marketing at Mentor Graphics, spoke with me by phone today to explain how they are working with GLOBALFOUNDRIES on a 3rd generation DFM (Design For Manufacturing) flow.

3rd party IP providers like ARM and Virage have been using this evolving DFM flow to ensure that SoCs will have acceptable yields. If the IP on your SoC is litho-clean, then the effort to make the entire SoC clean is decreased. GLOBALFOUNDRIES has a mandate that their IP providers pass a DFM metric.

Manufacturing Analysis and Scoring (MAS)
Box A of the flow shown above is where GLOBALFOUNDRIES measures yield and gathers the yield-modeling information needed to give Mentor the design-to-silicon interactions. These could be equations that describe the variation in fail rates of recommended rules, or defect density distributions for particle shorts and opens.

Random and Systematic Defects and Process Variations
At nodes of 100nm and below, both random defects and process variations limit yield. Critical Area Analysis (CAA) is used for random defects and Critical Failure Analysis (CFA) is used for systematic defects and process variations. These analyses help pinpoint problem areas in the IC layout prior to tape-out.
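
For readers who want a feel for how defect-density data turns into a yield number, here is a toy Poisson-model calculation in Python (a minimal sketch in the spirit of Critical Area Analysis; the layer names, critical areas and defect densities are invented for illustration, not GLOBALFOUNDRIES data):

    import math

    def poisson_yield(critical_area_cm2, defect_density_per_cm2):
        """Classic random-defect yield model: Y = exp(-A_crit * D0)."""
        return math.exp(-critical_area_cm2 * defect_density_per_cm2)

    # layer: (critical area in cm^2, defect density D0 in defects/cm^2) -- made up
    layers = {
        "metal1 shorts": (0.30, 0.10),
        "metal1 opens":  (0.25, 0.05),
        "via1 opens":    (0.10, 0.08),
    }

    total_yield = 1.0
    for name, (a_crit, d0) in layers.items():
        y = poisson_yield(a_crit, d0)
        total_yield *= y
        print(f"{name}: yield limited to {y:.3f}")

    print(f"combined random-defect limited yield: {total_yield:.3f}")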

DRC+ Pattern-based Design Rule Checking Technology
Patterns that identify low-yield areas of an IC can be defined visually and then run in a DRC tool like Calibre.

Litho Friendly Design (LFD)
Calibre LFD accurately models the impact of lithographic processes on “as-drawn” layout data to determine the actual “as-built” dimensions of fabricated gates and metal interconnects. There are new LFD design kits for the 28nm and 20nm nodes at GLOBALFOUNDRIES.

Calibre LFD uses process variation (PV) bands that predict failure in common configurations including pinching, bridging, area overlap and CD variability.
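
As a loose illustration of the kind of check PV bands enable (my own simplified sketch, not Calibre LFD’s actual algorithm; the wire data and limits are invented), a hot-spot is flagged when the worst-case printed contour pinches below a minimum width or bloats toward a neighbor:

    # Toy PV-band style hot-spot check. Assume litho simulation gives, per wire,
    # the narrowest ("inner") and widest ("outer") printed widths across the
    # process window; the limits below are assumptions, not foundry rules.
    PINCH_LIMIT_NM = 20    # assumed minimum printable width
    BRIDGE_LIMIT_NM = 25   # assumed minimum printed spacing

    # (name, drawn width, inner printed width, outer printed width, drawn spacing)
    wires = [
        ("net_a", 32, 18, 40, 60),
        ("net_b", 32, 28, 44, 34),
    ]

    for name, drawn_w, inner_w, outer_w, drawn_space in wires:
        if inner_w < PINCH_LIMIT_NM:
            print(f"{name}: pinching risk, worst-case width {inner_w} nm")
        # If this wire and its neighbor each bloat by (outer_w - drawn_w)/2 per
        # edge, the spacing between them shrinks by roughly (outer_w - drawn_w).
        worst_space = drawn_space - (outer_w - drawn_w)
        if worst_space < BRIDGE_LIMIT_NM:
            print(f"{name}: bridging risk, worst-case spacing {worst_space} nm")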

In early 20nm process development, the foundry uses Calibre LFD to predict hot-spots and then create the design rules.

Place and Route
Integration with the Olympus-SOC™ design tool enables feed-forward of Calibre LFD results to give designers guidance on recommended layout improvements, and to enable revalidation of correct timing after modifications.

Summary
Foundries, EDA vendors and IC design companies are collaborating very closely to ensure that IC designs will have both acceptable yield and predictable performance. GLOBALFOUNDRIES and Mentor Graphics continue to partner on their 3rd generation DFM flow to enable IC designs at 28nm and smaller nodes. AMD is a leading-edge IC company using the Calibre DFM tools on the GLOBALFOUNDRIES process.

To learn more about how Mentor and GLOBALFOUNDRIES are working together, you can visit the Global Technology Conference at the Santa Clara Convention Center on August 30, 2011.


Mentor catapults Calypto
by Paul McLellan on 08-26-2011 at 10:36 am

Mentor has transferred its Catapult (high level synthesis) product line, including the people, to Calypto. Terms were not disclosed but apparently it is a non-cash deal. Calypto gets the product line. Mentor gets a big chunk of ownership of Calypto. So maybe the right way to look at this is as a partial acquisition of Calypto.

It has to be the most unusual M&A transaction that we’ve seen in EDA since, maybe, the similar deal when Cadence transferred SPW to CoWare. There are some weird details too: for example, current Catapult customers will continue to be supported by Mentor.

Who is Calypto? It was formed years ago to tackle the hard problem of sequential formal verification (sequential logical equivalence checking, or SLEC). The market for this was people using high-level synthesis (HLS), since other than simulation they didn’t have any way to check that the tool wasn’t screwing up. There were many HLS tools: Mentor’s Catapult, Synfora (now acquired by Synopsys), Forte (still independent). More would come along later: AutoESL (now acquired by Xilinx), Cadence’s C to Silicon. But there really weren’t enough people at that time using HLS seriously to create a big enough market for SLEC.

So Calypto built a second product line on the same foundation, to do power reduction by sequential optimization. This is looking for things like “if this register could be clock-gated under certain conditions, so it doesn’t change on this clock cycle, then the downstream register can be clock gated on the following clock cycle because it won’t change.” For certain types of designs this turns out to save a lot of power. And a lot more people were interested in saving power than doing SLEC (although everyone who saved power this way needed to use SLEC to make sure that the design was functionally the same afterwards).
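
The intuition is easy to show with a toy cycle-by-cycle model (a minimal Python sketch of the idea described above, not Calypto’s algorithm; the register names and enable pattern are invented): if the upstream register was not enabled last cycle, the downstream register cannot receive a new value this cycle, so its clock can be gated one cycle later without changing behavior.

    def simulate(enables, data, gate_reg2=False):
        """Two back-to-back registers; reg2 normally captures reg1 every cycle."""
        reg1, reg2 = 0, 0
        prev_en = 0                        # reg1's enable, delayed by one cycle
        trace = []
        for en, d in zip(enables, data):
            if prev_en or not gate_reg2:   # derived clock enable for reg2
                reg2 = reg1
            if en:                         # reg1's original enable
                reg1 = d
            prev_en = en
            trace.append(reg2)
        return trace

    enables = [1, 0, 0, 1, 0, 1, 0, 0]
    data    = [3, 7, 7, 5, 5, 9, 9, 9]

    # Gating reg2 with the delayed enable is functionally identical...
    assert simulate(enables, data) == simulate(enables, data, gate_reg2=True)
    # ...but reg2's clock now only needs to fire on a fraction of the cycles.
    print("reg2 clocked on", sum(enables), "of", len(enables), "cycles")

SLEC is then what proves that the gated and ungated versions really are sequentially equivalent.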

So now Calypto also has HLS, and so has a tidy little portfolio: HLS, SLEC for checking that HLS didn’t screw up, and power reduction. Presumably in time some of the power reduction technology can be built into HLS so that you can synthesize for power or performance or area or whatever.

Calypto was rumored to be in acquisition talks with Cadence last year but obviously nothing happened (my guess: they wanted more than Cadence was prepared to pay). They were also rumored to be trying to raise money without success.

Mentor says they remain deeply committed to ESL and view this transaction as a way to speed adoption. I don’t see it. I’m not sure how this works financially. Catapult was always regarded as the market leader in HLS (by revenue), but Mentor also had a large team working on it. If the product is cash-flow positive then I can’t see why they would transfer it; if it is cash-flow negative I don’t see how Calypto can afford it unless there is a cash injection as part of the transaction.

So what has Mentor got left to speed ESL adoption? The other parts of Simon Bloch’s group (apparently he is out too) were FPGA synthesis (normal RTL level) and the virtual platform technology called Vista.

Maybe Mentor decided that HLS required the kind of focused sales force that only a start-up has. Mentor seems to suffer (paging Carl Icahn) from relatively high sales costs (although Wally says that is largely because they account for them slightly differently than, say, Synopsys). Their fragmented product line means that their sales costs are almost bound to be higher than those of the other big guys, who are largely selling a complete capability (give us all your money, or most of it, and we’ll give you all the tools you need, or most of them).

Or perhaps it is entirely financially driven. Mentor gets some expenses off their books, and reduces their sales costs a little. But without knowing the deal or knowing the low-level ins and outs of Mentor’s financials, it’s not really possible to tell.

Full press release here.


20nm SoC Design
by Paul McLellan on 08-25-2011 at 12:48 am

There are a large number of challenges at 20nm that didn’t exist at 45nm or even 32nm.

The biggest issues are in the lithography area. Until now it has been possible to make a reticle using advanced reticle enhancement technology (RET) decoration and have it print. Amazing when you think that at 45nm we are making 45nm features using 193nm light; a mask is a sort of specialized diffraction grating. But at 20nm we need to go to double patterning, whereby only half the polygons on a given layer can be on one reticle and a second reticle is needed to carry the others. Of course the rules for which polygons go on which reticle are not directly comprehensible to designers. It is also likely that we are moving towards restricted design rules, where instead of having minimum spacing rules we have rules that restrict spacing to a handful of values. We’ve pretty much been doing that for contact and via layers for years, but now it will affect everything. This explosion of design rules means that the rules themselves are pretty much opaque to the designers who have to follow them, and design rule checking, RET decoration, reticle assignment and so on must be tightly integrated into both automated tools (such as place and route) and more manual tools (such as layout editors).
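
Conceptually, assigning polygons to the two reticles is a graph two-coloring problem. The sketch below (my own Python illustration, not a foundry decomposition engine) builds a conflict graph from pairs of polygons that are too close to share a mask, and either finds a legal split or reports that the layout needs to change:

    from collections import deque

    def assign_masks(num_polygons, conflicts):
        """Return a mask (0/1) per polygon, or None if no legal 2-mask split exists."""
        adj = [[] for _ in range(num_polygons)]
        for a, b in conflicts:
            adj[a].append(b)
            adj[b].append(a)
        mask = [None] * num_polygons
        for start in range(num_polygons):
            if mask[start] is not None:
                continue
            mask[start] = 0
            queue = deque([start])
            while queue:
                u = queue.popleft()
                for v in adj[u]:
                    if mask[v] is None:
                        mask[v] = 1 - mask[u]   # neighbours go on the other reticle
                        queue.append(v)
                    elif mask[v] == mask[u]:
                        return None             # odd cycle: no 2-mask assignment
        return mask

    # Pairs of polygons that are too close to share a reticle:
    print(assign_masks(4, [(0, 1), (1, 2), (2, 3)]))  # [0, 1, 0, 1] -> decomposable
    print(assign_masks(3, [(0, 1), (1, 2), (2, 0)]))  # None -> layout must be fixed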

Another theme that runs through many blog entries here is that variation is becoming more and more extreme. The variance gets so large that it is not possible to simply guard-band it, or else you’ll find you’ve built a very expensive fab for minimal improvement in performance. Variation needs to be analyzed more systematically than that.

These two aspects are major challenges. Of course we still have all the old challenges: designs get larger and larger moving from node to node, requiring more and more productive tools and better databases. Not to mention timing closure, meeting power budgets, analysis of noise across the chip, package and board, and maybe 3D TSV designs. Manufacturing test. The list goes on.

To get a clear vision of what success will require, view Magma’s webinar on 20nm SoC design here.



Magic Media Tablet: illusion about a niche market?
by Eric Esteve on 08-24-2011 at 9:31 am

According to ABI Research, worldwide annual media tablet shipments are expected to top 120 million units in 2015, which is more than decent for a niche market. But if you compare it with the smartphone market (you can find the smartphone shipments forecast here), media tablets will amount to only about 15% of smartphone unit shipments in 2015. Maybe the media tablet silicon content will generate more dollars per unit? Alas, the answer is no, at least if we look at the heart of the system, the application processor, as the IC used in a media tablet is exactly the same as in a smartphone: OMAP4 from TI, Tegra 2 from NVIDIA, and even Apple using the A4 for the iPhone 4 and iPad (or the A5 for the iPhone 5 and iPad 2). So the semiconductor Total Addressable Market (TAM) is directly linked to media tablet unit shipments, at least if we evaluate the application processor TAM, and the comparison with smartphone unit shipments makes sense.

Is this TAM really addressable by Nvidia, TI, Qualcomm and others?
The same research from ABI says, “Android media tablets have collectively taken 20% market share away from the iPad in the last 12 months. However, no single vendor using Android (or any other OS) has been able to mount a significant challenge against it.” This means that, during the last 12 months, the “real” TAM for the pure-play semiconductor vendors (excluding Apple) was 20% of the 40 million units shipped (IPnest evaluation), or 8 million application processors; at an Average Selling Price (ASP) of $20, that makes a “huge” market of $160M. Let’s compare it with the same figures for the smartphone market for the same period: about 400 million units shipped in total, 330 million of them non-Apple, creating a TAM of over $6B. The application processor TAM for media tablets is today less than 3% of the equivalent for smartphones!

This will certainly change in the future, as shipments of both products are expected to grow strongly, and we can expect Android-based media tablets to gain market share against the iPad, increasing both the market size and the addressable part of the application processor IC market. If we assume that Apple’s share of the media tablet market falls from 80% to 50% between 2011 and 2015, that will leave about 60 million units to A5 competitors: Nvidia, Qualcomm, TI and others. As the ASP for application processors is expected to decline to about $15 in 2015, the TAM will grow to about $1B, which is pretty good for a niche market! But this is still a mere 10% or so of the application processor market for smartphones.
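
The arithmetic behind these two paragraphs is simple enough to reproduce (a quick Python sketch using the article’s own estimates; none of these figures are independent data):

    # Last 12 months (IPnest evaluation): 40M tablets, 20% non-Apple, ~$20 AP ASP
    tam_2011 = 40e6 * 0.20 * 20.0
    print(f"2011 addressable tablet AP market: ${tam_2011 / 1e6:.0f}M")      # ~$160M

    smartphone_ap_tam = 6e9   # non-Apple smartphone AP TAM quoted above
    print(f"as a share of the smartphone AP TAM: {tam_2011 / smartphone_ap_tam:.1%}")

    # 2015 scenario: 120M tablets, Apple share down to 50%, AP ASP ~ $15
    tam_2015 = 120e6 * 0.50 * 15.0
    print(f"2015 addressable tablet AP market: ${tam_2015 / 1e9:.1f}B")      # ~$1B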

You may want to question these forecasts for the media tablet, thinking the market growth evaluation is underestimated… Let’s have a look at the regional split in the first figure: the largest part is North America, then comes Western Europe and then Asia-Pacific. Eastern Europe, MEA and Latin America stay very low. The first conclusion is that it’s a market for “rich” people (in countries where personal income is high).

If you still have a doubt, just imagine you are 20, part of the middle class in one of these “rich” countries, and you have $500 to invest. What will you buy first?

  • A laptop (even if you probably need to spend a little more)?
  • A netbook (in this case some money will remain)?
  • A smartphone (you will probably not pay for it up front, thanks to the subscription)?
  • Or a media tablet?

I would bet that the media tablet will come last, or just before the netbook!

Last point: a couple of years ago some analysts predicted that the netbook segment would explode. As a matter of fact, netbook shipments saw very strong growth in 2009. That seemed to confirm these predictions, but it was in fact due to the recession: it was more economical to buy a netbook than a laptop. If we look at the forecast for netbook shipments in 2011, they are expected to decline by 18%, partly because the economy is better than in 2009 and partly because they are being cannibalized by media tablets. So the media tablet is not essential, as a PC or a smartphone can be, and you can guess that tablets would be severely hurt by any economic recession, more than smartphones would be. We think the media tablet is a nice niche product for rich people, but it will never generate for the application processor IC manufacturers the revenues that smartphones will. But you may have a different opinion… feel free to post it!

Eric Esteve from IPnest


Itanium Neutron Bombs Hit HP Campuses, Oracle Looking for Survivors
by Ed McKernan on 08-23-2011 at 11:37 pm

It was a series of Itanium Neutron Bombs detonating during the reign of 4 management teams (Platt, Fiorina, Hurd and Apotheker) that left HP campuses in Cupertino and Palo Alto in the custody of crickets. The devastation to employees and stockholders is absolutely immense and the current strategy calls for a further retreat into the enemy territory of IBM and Oracle. If you want to point to a date that will live in Infamy – it is July 6, 1994. The day when Intel and HP tied the knot with a pair of Itanium rings. The stock then sat at $9.25, a point it may soon revisit.

It was a marriage intended to solidify the positions of both spouses as monopolistas in their respective markets. Intel, the king of PC processors was looking for a way to expand its reach into the rich, RISC based server space that was then dominated by IBM, Sun and HP. HP wanted to leverage Intel’s processor design and Fab economics to propel them into the lead. If HP were to sprinkle a new Intel 64 bit processor across all its various lines of workstations and minicomputers it could outperform and outsell Sun’s SPARC. There were numerous issues with this deal from the start. Would Intel sell Itanium at a reasonable price? Would Intel hit a performance target that met the market needs? Would software vendors port over to the new architecture?

The answers to these questions are the same today as they were back then. No hindsight is required. In fact, by early 1994 the industry had just witnessed the knockout blow Intel inflicted upon the RISC camp with its complex x86 architecture. Intel prevailed with volume, good-enough performance, and a truckload of software apps that made it impossible for the industry to move to RISC (see Intel’s Back to the Future Buy of Micron).

But a new twist appeared that created confusion and uncertainty inside Intel and would set in motion the Itanium train. HP engineers were playing around with a new architecture called VLIW (Very Long Instruction Word). They claimed it was the architecture of the future that would outperform any implementation of x86.

Factions inside Intel argued both sides of the x86 vs. VLIW debate. A titanic battle ensued where the only possible outcome was to do not one or the other but both. However, to win the approval of analysts, roadmaps appeared in the mid 1990s showing that Itanium processors would occupy not only high-end servers but also desktops and laptops. To get there, though, would require porting Microsoft and the whole PC software industry. Intel listened to the sweet-talking server harlot, girded its loins and poured billions into the Itanium hardware and software ecosystem. Seventeen years later the needle has barely budged.

All was quiet on the western front for Intel and HP as they continued to print money through the late 1990s. However, HP thought it wiser to assemble the complete set of Over-the-Hill Gang minicomputer architectures and convert them to Itanium as well. So added to its PA-RISC and Apollo computers were Convex Computer, acquired in 1995, and Tandem and DEC VAX, acquired with Compaq in 2001. Each entity, though, had the issue of porting applications, and to address this HP created software translators that effectively ran at only 10-20% of native Itanium speed. A good example of less is more.

No worries: with Compaq out of the way, HP could win the PC market through economies of scale on the purchasing side and through domination of the retail channel by securing most of the shelf space. Fiorina and Hurd kept cutting operating expenses as if they were kudzu in North Georgia. But no matter how much they cut, they couldn’t eliminate the next lower-cost competitor coming out of Taiwan or China.

Craig Barrett, the CEO of Intel, went on his own buying spree in the late 1990s for networking silicon to fill his fabs, all the while neglecting the threat that AMD had planned with the launch of its 64-bit x86 server processor in September 2003. Intel finally threw in the towel on converting the server world to Itanium when it launched the 64-bit Xeon processor in July 2004, exactly 10 years after the HP-Intel handshake. Imagine where Intel would be today if it could rewind the clock and, instead of pouring billions into Itanium, had built a 64-bit x86 that hit the market in 2000. There likely would have been no resurrection of AMD coming out of the tech downturn.

The HP that began in 1939 in a garage in Palo Alto does live on in the 1999 spinout called Agilent and its further spinout called Avago. Agilent was the largest IPO in Silicon Valley at the time; both businesses were considered too slow-growing to fit under the HP corporate umbrella. But both are highly profitable, as they were when they spun out. On a price-to-sales basis, Agilent and Avago are 4 and 10 times more valuable than HP, respectively.

As Mark Hurd and Larry Ellison huddle in Oracle’s headquarters, what if anything is left of HP that would be of value (sans printers)? Ah, you say – the property that the HP campuses occupy in Cupertino and Palo Alto! Let’s jump in the car and take a look!

The drive along the tree-shaded Pruneridge Avenue in Cupertino is very calm. I know it well because my wife and I lived in an apartment on the corner of Pruneridge and Wolfe when we moved to California in 1998. My wife worked for HP for a short period of time in the old Tandem building. I would point out the campus to my boys as we drove past it on the way to our pool at the far end of the campus. For many years the buildings were unoccupied, until a real estate agent put in a bid from a secret buyer – Larry’s friend Steve. In 2015 it will all come alive again when a Pentagon-sized spaceship campus opens up.


Formal Verification for Post-silicon Debug
by Paul McLellan on 08-23-2011 at 5:52 pm

OK, let’s face it: when you think of post-silicon debug, formal verification is not the first thing that springs to mind. But once a design has been manufactured, debugging can be very expensive. As then-CEO of MIPS John Bourgoin said at DesignCon 2006, “Finding bugs in model testing is the least expensive and most desired approach, but the cost of a bug goes up 10× if it’s detected in component test, 10× more if it’s discovered in system test, and 10× more if it’s discovered in the field, leading to a failure, a recall, or damage to a customer’s reputation.”

But formal verification at the chip level is a major challenge. There are capacity issues, of course, since chips are much larger than individual blocks (yeah, who knew?). The properties to be validated are high-level and complex. So basic assertion-based verification (ABV) is inadequate and deep formal is required.

Traditionally, when a bug is found in silicon, additional simulation is done to try to identify the source of the undesired behavior. But this approach can be very time-consuming, and it is inadequate for finding those bugs that only occur in actual silicon after a long time (and so only appear in simulation after a geological era has gone by). Of course, the original testbench doesn’t exhibit the behavior (or the chip wouldn’t have been released to manufacturing), meaning that more vectors need to be created without necessarily having much idea about which vectors would be good.

Formal verification avoids this problem since it ignores the testbench completely and uses its own input stimuli. As a result, formal analysis is capable of generating conclusive answers to verification problems.

Formal verification can cut through this: declare a property asserting that the bad behavior does not exist. Since the behavior is known to exist, formal verification should fail this property and, at the same time, generate an exact sequence of waveforms that reproduces the problem.
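
To make the idea concrete, here is a minimal explicit-state sketch in Python (a toy stand-in for a formal engine; the design, the property and all names are my own inventions): state the property that the bad behavior never occurs, exhaustively explore the reachable states, and get back the exact input sequence that reproduces the bug:

    from collections import deque

    def next_state(state, inp):
        """Toy 3-bit counter that silently wraps around: the 'bug' we are hunting."""
        return (state + inp) % 8

    def find_counterexample(initial=0, inputs=(0, 1)):
        """Property: the design never returns to its reset state once it has left it."""
        seen = {initial}
        queue = deque([(initial, [])])        # breadth-first: shortest trace first
        while queue:
            state, trace = queue.popleft()
            for inp in inputs:
                nxt = next_state(state, inp)
                if trace and nxt == initial:  # property violated: wrapped back to reset
                    return trace + [inp]      # exact stimulus that reproduces the bug
                if nxt not in seen:
                    seen.add(nxt)
                    queue.append((nxt, trace + [inp]))
        return None                           # search exhausted: property proven

    print("counterexample inputs:", find_counterexample())
    # -> [1, 1, 1, 1, 1, 1, 1, 1]  (eight increments wrap the counter back to 0)

Once a fix is proposed, re-running the same check either finds a new trace or exhausts the state space; that exhaustion is the proof that the problem really is fixed.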

Only formal tools with a focus on full proofs of complex design properties will have the capacity to converge on the problem within an acceptable time. Additionally, once the problem has been identified and a fix proposed, formal verification can be used to prove that the problem really is fixed.

With formal’s unique ability to remove verification ambiguity, it becomes an invaluable tool in reducing the time and effort spent addressing post-silicon debug issues. Formal’s ability to quickly identify bugs, assure the cleanliness of sub-blocks, and verify the completeness of design fixes makes it the highest-value post-silicon debug tool in the team’s arsenal.

For the Jasper white paper on the subject, go here.