
Intel Proves Last Year’s Conventional Wisdom Wrong

by Ed McKernan on 12-11-2011 at 7:00 pm

Back in the 1990s, Richard Branson, the legendary entrepreneur and investor, was asked how to become a millionaire, and he allegedly responded, “There’s really nothing to it. Start as a billionaire and then buy an airline.” I think the same principle can be applied to a large part of the semiconductor industry as we witness another major downturn that has been in the works since this summer and cuts across memory, analog and logic vendors. The one true shelter in the storm has been Intel, whose stock saw a major upswing following the September 13th announcement by Microsoft that Windows 8 would need an x86 processor to ensure backward compatibility – a necessary requirement in the business world. Not surprisingly, ARM’s stock has declined since then.

Forecasting and controlling for variables can be tricky, as anyone can argue that the semiconductor industry is merely reflecting the slowdown in Europe or the Thailand flood that took out 30% of the world’s hard drive production. Last week I had the opportunity to meet a number of customers in China focused on the consumer electronics business. It was quite shocking to hear the magnitude of the price drops that have occurred in the memory and microcontroller markets since June. The Thailand flood caused DRAM vendors to dump product immediately rather than wait for the PC cutbacks expected in Q1. Wafers were then reallocated to build more flash and MCUs, which led to price declines there as well. Recent semiconductor forecasts show DRAM down nearly 30% year over year. The bright spots were x86 processors and NAND flash.

The viewpoint that I have been trying to communicate is much longer term. What should we expect over the next two to four years? At the beginning of 2011, the conventional wisdom (CW) was that Apple’s growth in tablets would spill over to other vendors and, as a result, PCs would see a major slowdown to the benefit of ARM suppliers – despite Intel communicating to the world that it was significantly upping its capital expenditure to nearly $11B to retrofit 4 fabs for 22nm and to build 2 fabs for 14nm. ARM’s and nVidia’s stocks raced ahead on the expected tablet boom and the follow-on Windows 8 driven “ARM PCs” coming in 2H 2012.

Few analysts thought for a moment to look into the reasoning behind Intel’s massive CapEx, which was followed by an even greater stock buyback combined with increasing dividend payouts. It turns out that Intel has known for more than a year that Microsoft’s Windows 8 was going to have to be split in order to have a lightweight O/S for consumers and a heavy-duty version for corporate. Furthermore, the data center build out with $4000 Xeon processors and double digit emerging market growth would overcome any PC cannibalization in the Western World due to tablets. In the end, Otellini could write the checks and still sleep at night.

It is true that Intel would wish to have a competitive tablet processor to close any pathway for ARM to build on its Smartphone success. But from all the presentations that Intel has given this year, it is apparent that they believe they just need to get to 14nm production with Finfets in 2014 and then they will be All Alone with a 4-year process lead. Doors will close on competitors and foundry partnerships will be established – particularly with Apple and probably Qualcomm and one of the large FPGA vendors.

From our current observation point, we can see that Intel has a greater wind at its back today as compared to 12 months ago. The tablet market is Apple’s and Amazon’s based on a $10 processor. Intel will field a $10 part for the purpose of forcing nVidia, Qualcomm, TI and others to play in the mud. I expect many ARM CPU vendors will re-evaluate the worthiness of playing in the tablet and smartphone markets at such a low price and return on investment.

AMD has fallen off the radar screen in the near term, giving Intel sole ownership of the ultrabook market. Intel will look to convert 70%+ of the mobile PC market to ultrabooks because in 18-24 months (after Haswell) they could own it all and diminish nVidia’s graphics business, which thrives today on attachments to Sandy Bridge.

Finally, in 2011, Intel figured out that the right way to look at tablets and smartphones was as the drivers of the cloud that is built on $4000 Xeon processors. Intel now expects its server and storage business to double in the next 5 years to $20B. I think this is conservative. Regardless, it is rare to hear of a large Fortune 500 company growing an 80%+ Gross Margin business at double-digit rates.


During a Question and Answer segment at the recent Credit Suisse Investors Conference, Paul Otellini was confident as he explained the economics of today’s Fabs and the $10B ones coming with 450mm in 2018. The dwindling number of players who are able to afford the price tag and the 4-year process lead with 14nm coming in 2014 should make competitors shudder. The capital-intensive airline business model that Richard Branson spoke about may be about to come to most of the semiconductor industry, with the likely exception of Intel.

FULL DISCLOSURE: I am Long AAPL and INTC


Synopsys Eats Magma: What Really Happened with Winners and Losers!

by Daniel Nenni on 12-10-2011 at 6:00 pm

Conspiracy theories abound! The inside story of the Synopsys (SNPS) acquisition of Magma (LAVA) brings us back to the 1990’s tech boom with shady investment bankers and pump/dump schemes. After scanning my memory banks and digging around Silicon Valley for skeletons with a backhoe here is what I found out:

The Commission brings this action against defendant Credit Suisse First Boston LLC, f/k/a Credit Suisse First Boston Corporation (“CSFB”) to redress its violation of provisions of the Securities Exchange Act of 1934 (“Exchange Act”) and pertinent rules thereunder, and rules of NASD Inc. (“NASD”) (formerly known as the National Association of Securities Dealers) and the New York Stock Exchange, Inc. (“NYSE”).

Investment banker Frank Quattrone, formerly of Credit Suisse First Boston (CSFB), took dozens of technology companies public including Netscape, Cisco, Amazon.com, and coincidentally Magma Design Automation. Unfortunately CSFB got on the wrong side of the SEC by using supposedly neutral CSFB equity research analysts to promote technology stocks in concert with the CSFB Technology Group headed by Frank Quattrone. Frank was also prosecuted personally for interfering with a government probe.

6. The undue and improper influence imposed by CSFB’s investment bankers on the firm’s technology research analysts caused CSFB to issue fraudulent research reports on two companies: Digital Impact, Inc. (“Digital Impact”) and Synopsys, Inc. (“Synopsys”). The reports were fraudulent in that they expressed positive views of the companies’ stocks that were contrary to the analysts’ true, privately held beliefs.

The full complaint is HERE; it is an interesting read.

To make a long story short: Frank Quattrone went to trial twice; the first ended in a hung jury in 2003 and the second resulted in a conviction for obstruction of justice and witness tampering in 2004. Frank was sentenced to 1.5 years in prison before an appeals court reversed the conviction, and prosecutors agreed to drop the complaint a year later. Frank didn’t pay a fine or serve time in prison, nor did he admit wrongdoing! Talk about a clean getaway! Quattrone is now head of merchant banking firm Qatalyst Partners, which, coincidentally, handled the Synopsys acquisition of Magma on behalf of Magma. Qatalyst is staffed with Quattrone cronies and former CSFB people. Disclosure: This blog is opinion, conjecture, rumor, and non-legally-binding nonsense from an internationally recognized industry blogger who does not know any better. To be clear, this blog is for entertainment purposes only.

Okay, here’s what I think happened: Qatalyst went to Magma CEO Rajeev Madhavan with a doom-and-gloom Magma prediction for 2012 and a promise of a big fat check from Synopsys. In parallel, Qatalyst went to a Synopsys board member (or members) and suggested that investors want to see a return on the $1B+ pile of cash Synopsys was hoarding, adding that “if you don’t buy Magma, your competition will”. The rest is in the press releases.

A couple of interesting notes: Synopsys will have to pay Magma $30M if the acquisition does not go through. I can assure you there are some people who definitely do NOT want this merger to go through so there is a possibility it will not pass regulatory scrutiny. Frank Quattrone’s involvement may not help this process assuming he has some regulatory enemies from his legal victory.

Magma will have to pay Synopsys $17M if they get a better offer and back out of the deal. Mentor only has $120M in cash, so they are in no position for a bidding war, even though I think that is the rightful home for the Magma products. Cadence has $700M in cash but I don’t think they could outbid Synopsys even if they wanted to, which, from what I have been told, they don’t.

“Bringing together the complementary technologies of Synopsys and Magma, as well as our R&D and support capabilities, will help us deliver advanced tools earlier, thus, directly impacting our customers demand for increased design productivity.” Aart J. de Geus Synopsys (SNPS) Q4 2011 Earnings Call November 30, 2011 5:00 PM ET

If “complementary technologies” means “overlapping products” I agree with Aart. Daniel Payne did a nice product comparison table on the SemiWiki Synopsys Acquires Magma!?!?!? Forum thread. 10,000+ people have viewed it thus far, which would be considered “going viral” on our little EDA island.

Winners and Losers?

Synopsys is the biggest winner. Magma has been undercutting EDA pricing since day one so expect bigger margins for Synopsys! Aart also gets to write the final Magma chapter which has gotta feel pretty good. Kudos to Synopsys on this one.

Emerging EDA companies like Berkeley Design Automation and ATopTech are big winners. One of Magma’s biggest attractions was that they were NOT Synopsys/Cadence. Big EDA customers and semiconductor foundries do not like product monopolies and will search out innovative alternatives.

Magma is a winner with a very nice exit. Being dog number four in a three dog race is not much fun. Magma’s accomplishments are notable, no shame there, and they do have some excellent people/technology.

Cadence is a winner/loser. Winner as they do not have to compete with Magma anymore. Loser as they are now even farther behind Synopsys in just about everything. Magma customers are losers. If history repeats, Synopsys will upsize prices and legacy the overlapping Magma products, as they did with EPIC, NASSDA, etc…

Mentor is the biggest loser. If Mentor had acquired Magma (as I blogged), Mentor would be the #2 EDA company hands down. Carl Icahn really missed a great opportunity to make history. You really let me down here, Carl. Comments will not be allowed on this blog.

Please share your thoughts on the Synopsys Acquires Magma!?!?!? Forum thread. Send all personal attacks and death threats to me directly at: idontcare@semiwiki.com.


Atrenta’s users. Survey says….

by Paul McLellan on 12-09-2011 at 7:32 pm

Atrenta did an online survey of their users. Of course, Atrenta’s users are not necessarily representative of the whole marketplace, so it is unclear how far the results generalize – your mileage may vary. About half the respondents were design engineers, a quarter CAD engineers, and the rest split between test engineers, verification and other roles.

There are some questions that focus on use of Atrenta’s tools that I don’t think are of such wide interest, so I’ll focus on the things that caught my eye.

Firstly, the method used to create RTL: do you write it from scratch or modify existing RTL? It is now a 40:60 split, with 40% of designers writing their own RTL and 60% modifying existing RTL.

When it comes to the top level RTL, there is a split between doing it manually (57%), with scripts (57%) and using a 3rd party EDA tool (12%). Yes, those numbers total more than 100%, some people obviously use more than one technique.

On the main limitations of their current approach, designers had a litany of woes. Missing design manipulation features (35%), support not consistent and reliable (26%) and ECOs hard to handle (34%). But clearly the #1 problem is the difficulty of debugging design issues at 49%. There were many other things listed from missing IP-XACT files, IP being unqualified, to just plain “error prone”.


The final question was about what aspects of the design flow were most critical to improve. The choices for each feature were critical, very important, nice to have, not important and don’t know. So let’s take the critical and very important groups and see what the top concerns were.

First was reducing verification bugs due to connectivity problems. The next three are all facets of a similar problem: rapidly adapting legacy designs, the effort to integrate 3rd party IP, and the effort to make updates when 3rd party IP is in use. Slightly behind that is reducing the time and effort to create test benches.


Challenges in 3D-IC and 2½D Design

by Paul McLellan on 12-09-2011 at 5:18 pm

3D IC design, and what has come to be known as 2½D IC design (active die on a silicon interposer), require new approaches to verification: through-silicon vias (TSVs), together with the fact that several different semiconductor processes may be involved, create a new set of design challenges.

The power delivery network is a challenge in a modern 2D (i.e. normal) die, but designing it is more challenging still with TSVs passing power up from the die at the bottom of the stack (or the interposer) to the higher die. There are possibilities of inter-die noise and other issues. There are two approaches to handling this. The first, which can be used if all the die data is available, is to simulate everything concurrently. The second is to use models: a chip power model (CPM) is generated to stand in for the missing data and co-analyzed with the data that is available.

Another specific power-related problem is thermal and thermal-induced stress failures. The IC power is very temperature-dependent, especially leakage power. In a 3D design the thermal dissipation is more complex. Similar to CPM, a chip thermal model (CTM) can be generated for each die in the design, including temperature dependent power and per-layer metal density. The CTM is used for accurate prediction of power and temperature distribution.

From a signal integrity point of view, a new problem is jitter noise analysis for wide-I/O applications. In an interposer design, which is a lot less pin limited than a regular package, a parallel bus might have as many as 8K bits which, apart from skew considerations, can introduce significant jitter due to simultaneous switching.

So it is clear that a new approach is required: comprehensive analysis of power, noise, and reliability to ensure successful tape-out of 3D and silicon-interposer-based designs.

This is a summary of a paper by Dr Norman Chang of Apache presented at the IEEE/CPMT society 3D-IC workshop held in Newport Beach on December 9th. The conference website is here.


Low power techniques

by Paul McLellan on 12-08-2011 at 5:49 pm

There was recently a forum discussion about the best low power techniques. Not surprisingly we didn’t come up with a new technique nobody had ever thought of but it was an interesting discussion.

First there are the techniques that by now have become standard. If anyone wants more detail on these, two good resources are the Synopsys Low Power Methodology Manual (LPMM) and the Cadence/Si2 Practical Guide to Low Power Design. The first emphasizes UPF and the second CPF, but there is a wealth of background information in both books that isn’t especially tied to either power format.

  • Clock gating
  • Multiple Vt devices
  • Back/forward biasing of devices
  • Power gating for shutdown
  • State retention
  • Multi-voltage supplies
  • Dynamic voltage scaling
  • Dynamic frequency scaling (Dynamic Voltage and Frequency Scaling – DVFS)
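Many of these techniques ultimately trade against the classic dynamic-power relation P = αCV²f, which is why DVFS (the last items on the list) is so effective: voltage enters squared. A rough back-of-the-envelope sketch, with invented operating points that are not from any real part:

```python
# Illustrative only: why DVFS saves so much power.
# Dynamic (switching) power scales as P = alpha * C * V^2 * f, so
# dropping voltage and frequency together gives a super-linear saving.

def dynamic_power(alpha, c_eff, vdd, freq):
    """Switching power in watts: activity * capacitance * V^2 * f."""
    return alpha * c_eff * vdd**2 * freq

# Hypothetical operating points (not from any datasheet).
full   = dynamic_power(alpha=0.2, c_eff=1e-9, vdd=1.0, freq=1e9)    # full speed
scaled = dynamic_power(alpha=0.2, c_eff=1e-9, vdd=0.8, freq=0.5e9)  # DVFS point

print(f"full-speed power : {full:.3f} W")
print(f"scaled power     : {scaled:.3f} W")
print(f"saving           : {100 * (1 - scaled / full):.0f}%")
```

Halving frequency alone would save 50%; combined with the 1.0 V to 0.8 V drop the saving grows to 68%, which is the whole argument for scaling voltage and frequency together rather than frequency alone.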

A lot can be done at the system/architectural level, essentially controlling chip level power functionality from the embedded software, such as powering down the transmit/receive logic in a cell-phone when no call is taking place.

Asynchronous logic offers potential for power saving, especially the ability to take silicon variation in the form of lower power rather than binning for higher performance. After all, the clock itself consumes 30% of many SoCs’ power budgets. But there are huge problems with the asynchronous design flow, since synthesis, static timing, timing-driven place & route, scan test, etc. are all inherently built on a synchronous model and break down when there is no clock. These are solvable problems if enough people wanted to use asynchronous approaches, but a lot of tools need to be fixed all at once (to be fair, this was also the case with the introduction of CPF and UPF). It definitely has a feel of “you just have to boil the ocean.”

With more powerful tools, such as those from Calypto, more clock gating can be done than the simple cases that synthesis handles (replacing muxes that recirculate values with a clock gating cell). If a register doesn’t change on this clock cycle, then the downstream register won’t change on the next clock cycle. Some datapaths have a surprising number of these sorts of structures that can be optimized, although the actual power savings are usually data dependent.
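The sequential clock-gating observation can be sketched in a few lines of Python. This toy model is not Calypto’s algorithm, just the underlying idea: when an upstream register holds its value, the downstream register’s next value is guaranteed unchanged, so its clock edge that cycle is wasted energy.

```python
# Toy model of sequential clock gating: if register A does not change
# on this cycle, register B (which computes purely from A) cannot
# change on the next cycle, so B's clock edge could be gated.

def downstream(a):
    # Arbitrary combinational function between the two registers.
    return (a * 3 + 1) % 16

a, b = 5, downstream(5)
gated_cycles = 0

# Values arriving at register A each cycle; repeats mean A held its value.
for a_next in [5, 5, 7, 7, 2]:
    if a_next == a:             # A unchanged -> B's next value is unchanged,
        gated_cycles += 1       # so B's clock edge can safely be gated
    else:
        b = downstream(a_next)  # B must capture a new value
    a = a_next

print(f"gated {gated_cycles} of 5 cycles")  # prints: gated 3 of 5 cycles
```

How much this saves on real silicon depends entirely on how often the data actually holds still, which is the data dependence the paragraph above mentions.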

As we have gone down through the process nodes, leakage power has gone from an insignificant part of total power dissipation to being over half in some chips. Some of the new Finfet transistor approaches look like they will have a big positive effect on leakage power too, but there is a lot that can be done with any process using libraries containing both low-leakage and high-performance cells and using the high-performance cells only on the most critical paths.
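The dual-library idea in the last sentence can be illustrated with a toy selection rule (real flows do this inside synthesis with full timing analysis; the delay and leakage numbers below are invented): use fast, leaky low-Vt cells only on paths that would otherwise miss timing, and low-leakage high-Vt cells everywhere else.

```python
# Hypothetical dual-Vt library: low-Vt cells are faster but leak far more.
# Rule of thumb: assign low-Vt only where the path would otherwise fail timing.

HVT = {"delay_ps": 120, "leak_nw": 1}   # low leakage, slower
LVT = {"delay_ps": 90,  "leak_nw": 10}  # fast, ~10x the leakage

clock_period_ps = 1000

# (path name, number of gate stages) -- invented example paths
paths = [("critical", 10), ("typical", 7), ("short", 5)]

total_leak = 0
for name, stages in paths:
    # Does this path meet timing with slow, low-leakage cells?
    cell = HVT if stages * HVT["delay_ps"] <= clock_period_ps else LVT
    total_leak += stages * cell["leak_nw"]
    print(f"{name:8s}: {'LVT' if cell is LVT else 'HVT'}, "
          f"delay {stages * cell['delay_ps']} ps")

print(f"total leakage: {total_leak} nW")  # vs 220 nW if everything were LVT
```

Only the 10-stage critical path gets the leaky cells, so total leakage is 112 nW instead of the 220 nW an all-low-Vt netlist would burn, while every path still meets the 1 ns clock.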

The real bottom line is that power requires attention at all levels. The embedded software ‘knows’ a lot about which parts of the chip are needed and when; for example, the iPad supposedly has multiple clock rates for the A5 chip and only goes up to the full 1 GHz when that performance is needed for CPU-intensive operations. Architectural-level techniques, such as the choice of IP blocks, can have a major impact. Then low-power synthesis with clock gating and multiple libraries, circuit design techniques, and finally process innovation that keeps power under control as the transistors get smaller and smaller.


Driving in the bus lane

by Paul McLellan on 12-08-2011 at 1:16 pm

Modern microprocessor and memory designs often have hundreds of datapaths that traverse the width of the chip, many of them very wide (over one thousand signals). To meet signal timing and slope targets for these buses, designers must insert repeater cells to improve the speed of the signal. Until now, the operations associated with managing large numbers of large buses have been manual: bus planning, bus routing, bus interleaving, repeater cell insertion and so on. However, the large and growing number of buses, especially in multi-core microprocessor designs, means that a manual approach is both too slow and too inaccurate. So Pulsic have created an automated product to handle this increasingly critical task: Unity Bus Planner.

Another problem that automation helps to address is that the first plan is never the last. As Helmuth von Moltke observed, “No campaign plan survives first contact with the enemy.” In just the same way, no bus plan survives once the detailed placement of the rest of the chip gets done. Repeater cells, since they involve transistors, cannot fly over the active areas but have to interact with them, so as the detailed layout of the chip converges the bus plan has to be constantly amended.

During the initial floorplanning stage, designers do block placement and power-grid planning followed by initial bus and repeater planning. Buses that cross the whole chip need to be carefully planned. At the end of this stage initial parasitic data is available for simulating critical parts of the design.

During bus planning itself, designers fit as many buses as possible through dedicated channels to avoid timing issues. Very fast signals require shielding with DC signals (such as ground) to prevent crosstalk noise issues. Often architects interleave buses so that they shield each other rather than using valuable resources for dedicated shielding. But planning, interleaving and routing hundreds of very wide buses is slow and error-prone. Internal tools created to do this are often unmaintainable and inadequate for the new generation of still larger chips.

Signals on wide buses need to arrive simultaneously and with similar slopes so that they can be correctly latched. This means that the paths must match in terms of metal layers, vias, repeaters and so on – a very time-consuming process, especially when changes inevitably need to be made as the rest of the design is completed.

With interactive and automated planning, routing and management, designers can complete bus and repeater-cell planning in minutes or hours rather than days or weeks. Automation also makes the inevitable subsequent modifications faster and more predictable.

The Unity Bus Planner product page is here.



Improving Analog/Mixed Signal Circuit Reliability at Advanced Nodes

by glforte on 12-07-2011 at 3:52 pm

Preventing electrical circuit failure is a growing concern for IC designers today. Certain types of failures, such as electrostatic discharge (ESD) events, have well-established best practices and design rules that circuit designers should be following. Other issues have emerged more recently, such as how to check circuits for correct supply connections when there are different voltage regions on a chip, in addition to other low power and electrical overstress (EOS) issues. While these topics are not unique to a specific technology node, for analog and mixed-signal designs they become increasingly critical as gate oxides get thinner at the most advanced nodes and circuit designers continue to put more and more voltage domains into their designs.

In addition to ESD protection circuit errors, some of the emerging problems that designers may face at advanced nodes include:

  • Possible re-spins due to layout errors

    • Thin-oxide/low-power transistor gate driven by the wrong (too high) voltage supply causing transistor failures across the chip
    • This might also cause degradation over a period of time, leading to reliability issues and product recalls
  • Chip reliability and performance degradation

    • Un-isolated ground/sink voltage levels between cells or circuits
    • High-voltage transistors operating in non-saturation because of insufficient/low-voltage supply

TSMC and MGC have worked together to define and develop rule decks for Calibre® PERC™ that enable automatic advanced circuit verification that addresses many of these issues. For example, in TSMC Reference Flow 11 and 12, and in AMS Reference Flow 1 and 2, as well as in standard TSMC PDK offerings, MGC and TSMC have collaborated to provide checks for ESD, latch-up and multi-power domain verification at the 28nm and 40nm nodes. The companies estimate that by using a robust and automated solution like Calibre PERC, users can achieve over 90% coverage of advanced electrical rules with no false errors and runtimes measured in minutes. This is a significant improvement over marker layers, which may achieve around 30% coverage and often result in false positives, and it is far better than visual inspection, which typically achieves only about 10% coverage and is extremely labor intensive.

Calibre PERC introduces a different level of circuit verification capability because it can utilize both netlist and layout (GDS) information simultaneously to perform checks. In addition, it can employ topological constraints to verify that the correct structures are in place wherever circuit design rules require them. Here is a representative list of the checks that Calibre PERC can be used to perform:

  • Point to point resistance
  • Current density
  • Hot gate/diffusion identification
  • Layer extension/coverage
  • Device matching
  • DECAP placement
  • Forward biased PN junctions
  • Low power checking

    • Thin-oxide gate considerations, e.g., maximum allowable voltage
    • Voltage propagation checks, e.g., device breakdown and reverse current issues
    • Detect floating gates
    • Verify correct level shifter insertions and correct data retention cell locations
  • Checks against design intent annotated in netlists

    • Matched pairs
    • Balanced nets/devices
    • Signal nets that should not cross
    • MOS device guard rings
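To make the first of these checks concrete, here is a sketch of what a point-to-point resistance check computes. This is the underlying math, not Calibre PERC’s implementation, and the net, pin numbers and 8-ohm rule limit are invented: given the extracted resistor network of a net, solve for the effective resistance between two pins and flag it if it exceeds the limit (a typical ESD concern).

```python
# Sketch of a point-to-point resistance check on an extracted net.
# Inject 1 A at pin a, sink it at pin b, solve the nodal equations;
# the resulting voltage difference equals the effective resistance in ohms.
import numpy as np

def p2p_resistance(n_nodes, resistors, a, b):
    """resistors: list of (node_i, node_j, ohms). Returns R(a, b)."""
    G = np.zeros((n_nodes, n_nodes))          # conductance (Laplacian) matrix
    for i, j, r in resistors:
        g = 1.0 / r
        G[i, i] += g; G[j, j] += g
        G[i, j] -= g; G[j, i] -= g
    I = np.zeros(n_nodes)
    I[a], I[b] = 1.0, -1.0                    # 1 A in at a, out at b
    G[b, :] = 0.0; G[b, b] = 1.0; I[b] = 0.0  # pin b as the reference node
    V = np.linalg.solve(G, I)
    return V[a] - V[b]

# Invented example: a 3-node net with two 5-ohm segments in series from
# pin 0 to pin 2, checked against a hypothetical 8-ohm rule limit.
r = p2p_resistance(3, [(0, 1, 5.0), (1, 2, 5.0)], a=0, b=2)
print(f"R(pin0, pin2) = {r:.1f} ohms -> {'VIOLATION' if r > 8.0 else 'ok'}")
```

On a real design the resistor list comes from parasitic extraction and the limits from the foundry rule deck; the point of a tool-based check is that this computation runs automatically over every flagged pin pair instead of being eyeballed in the layout.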

Customers are constantly finding new ways to employ Calibre PERC to automate new types of circuit checks. Leave a reply or contact Matthew Hogan if you would like to explore how Calibre PERC might be used to improve the efficiency and robustness of new or existing checks.

by Steven Chen, TSMC and Matthew Hogan, Mentor Graphics

Acknowledgements
The authors would like to recognize members of the TSMC R&D team Yi-Kan Cheng, and his teams, Steven Chen, MJ Huang, Achilles Hsiao, and Christy Lin, as well as the entire Calibre PERC development and QA team for their support and dedication in making these outstanding capabilities possible.

This article is based on a joint presentation by TSMC and Mentor Graphics at the TSMC Open Innovation Platform Ecosystem Forum. The entire presentation is available online on the TSMC website (click here).



How Fast (and accurate) is Your SPICE Circuit Simulator?

by Daniel Payne on 12-06-2011 at 6:17 pm

In my dad’s generation they tweaked cars to become hotrods while in EDA today we have companies that tweak SPICE circuit simulators to become crowned speed champions. The perennial question though is, “How fast and accurate is my SPICE circuit simulator?”


Methodics can Now use www.methodics.com after Domain Name Battle in EDA

by Daniel Payne on 12-06-2011 at 5:36 pm

Imagine trying to run your EDA business only to have a competitor squat on your domain name and then make disparaging remarks about you. This sounds like a match made for reality TV; however, it is quite real, and now this chapter in EDA has a happy ending because Methodics can use www.methodics.com as their domain name.


The Bad Guy
We’ve been blogging at SemiWiki about how Shiv Sikand of competitor IC Manage registered www.methodics.com in frustration after a sales employee left for Methodics. The content published at www.methodics.com escalated the controversy considerably because it falsely implied that Methodics was out of business.

The Resolution
Fortunately, businesses with domain name disputes have recourse through the World Intellectual Property Organization (WIPO), and on November 28, 2011 Methodics won the www.methodics.com domain name, which they now use for their site, replacing www.methodics-da.com.

IC Data Management Companies
There are a few EDA companies offering tools that help manage the IC design process.

Conclusion
Don’t cyber squat on your competitor’s company name or product name, just compete based on the merits of your product and the skills of your sales force.