

How much IP & reuse in this SoC?
by Eric Esteve on 03-17-2011 at 10:12 am

According to the GSA-Wharton survey, design IP block reuse in a new IC product averages 44%. Looking at the latest wireless platform from TI, the OMAP5, we have listed the blocks which have been (or will be) reused, coming from internal (or external) IP sourcing, for a license cost evaluation of at least $10M!

For those who missed the information, the latest wireless platform, or application processor, from TI was announced last month. This true System on Chip is designed in 28nm technology and includes no less than four processor cores (two ARM Cortex-A15 and two Cortex-M4), one DSP core (TI C64x) and multiple dedicated image and audio processors, to mention only pure computing power. All these cores are clearly IP, whether they come from ARM, Imagination Technologies or TI itself. The M-Shield system security technology, initially introduced by TI, is more of an internally developed function, enhanced (crypto DMA added) and re-used from the previous generation (OMAP4), as is the multi-pipe display sub-system. So far, all these blocks are IP or reused functions.

Let’s take a look at the interface blocks, including external memory controllers and protocol-based communication functions; these are strong candidates for IP sourcing or internal reuse:
· Two LPDDR2 Memory Controllers (EMIF1, EMIF2)
· One NAND/NOR Flash Memory Controller (GPMC)
· One Memory Card Controller (MMC/SD)
· To my knowledge for the first time in a wireless processor, a SATA 2.0 storage interface (a PHY plus a Controller)
· For the first time in the OMAP family, a SuperSpeed USB OTG, signaling USB 3.0 penetration of the wireless handset market (USB 3.0 PHY plus a Dual Mode OTG Controller)
· More usual is the USB 2.0 Host, but the function is instantiated three times (USB 2.0 PHY plus a Host Controller)
· For the MIPI functions, the list of supported specifications is long:
  o LLI/UniPort to interface with a companion device in order to share the same external memory and save a couple of $ on each handset (MIPI M-PHY and LLI Controller)
  o To interface with the Modem, another LLI (same goal as above?) and two HSI functions (High Speed Synchronous Link, a legacy function probably to be replaced by DigRF in the future)
  o To interface with the various (!) cameras, one CSI-3 function (M-PHY, up to 5 Gbps, and a CSI-3 Controller) and no less than three CSI-2 functions (D-PHY, limited to 1 Gbps, and the CSI-2 Controllers)
  o To handle the Display, two DSI serial interfaces (D-PHY and the DSI Controllers) and one DBI (Bus Interface)
  o And SlimBus, a low-performance, low-power serial interface to audio chips

· With HDMI 1.4a, we come back to a well-known protocol used in the PC and consumer markets

I understand you may be getting tired of reading such a long list, so I will stop here for the moment. Interesting to notice: almost every one of the interface functions listed above would generate an IP license cost of about $500K (it can be less or more). This assumes external sourcing, which is certainly not true for all the blocks. If we look now at the different processor cores, all except the DSP have to be licensed. The “technology license” paid by TI to ARM to use the Cortex-A15 and Cortex-M4 weighs several million dollars (even if the cores can be reused in other ICs). So, in total, the processing power used in OMAP5 comes at a $3M to $5M cost.

To be exhaustive, we have to add to this list of IP (or re-used blocks) all the interfaces and I/Os (UART, GPIO, I2C and so on) not listed before, as well as some high-value blocks like embedded memory (L2 cache, L3 RAM), a network-on-chip interconnect, more than one PLL… and maybe more.

If we look at the block diagram, we see that IP or re-used blocks can match all the listed functions. Does that mean that 100% of OMAP5 is made of IP? Certainly not, as the block diagram does not show the testability or the power management, essential parts of this SoC. But an estimate of 80% IP/reuse, at a theoretical license cost in the range of $10M, looks realistic for OMAP5.
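As a rough illustration of how such an estimate adds up, here is a back-of-envelope sketch in Python. The per-interface figure (~$500K) and the $3M to $5M processor range come from the article itself; the block counts are simply my reading of the interface list above, and none of this reflects TI's actual licensing terms.

```python
# Back-of-envelope IP license cost estimate for an OMAP5-class SoC.
# All figures are illustrative: roughly $500K per interface IP (PHY plus
# controller) and a few $M of processor "technology licenses", per the article.

interface_blocks = {                     # block: assumed instance count
    "LPDDR2 memory controller (EMIF)": 2,
    "NAND/NOR flash controller (GPMC)": 1,
    "MMC/SD controller": 1,
    "SATA 2.0 PHY + controller": 1,
    "USB 3.0 OTG PHY + controller": 1,
    "USB 2.0 host PHY + controller": 3,
    "MIPI LLI (M-PHY + controller)": 2,
    "MIPI HSI": 2,
    "MIPI CSI-3 (M-PHY + controller)": 1,
    "MIPI CSI-2 (D-PHY + controller)": 3,
    "MIPI DSI (D-PHY + controller)": 2,
    "HDMI 1.4a": 1,
}

COST_PER_INTERFACE = 0.5e6    # ~$500K each, "can be less or more"
PROCESSOR_LICENSES = 4.0e6    # assumed mid-point of the $3M-$5M estimate

interface_total = sum(interface_blocks.values()) * COST_PER_INTERFACE
total = interface_total + PROCESSOR_LICENSES

print(f"Interface IP if all externally sourced: ${interface_total / 1e6:.1f}M")
print(f"Processor cores:                        ${PROCESSOR_LICENSES / 1e6:.1f}M")
print(f"Theoretical total:                      ${total / 1e6:.1f}M")
```

Since several of these blocks are internally developed or reused rather than externally licensed, the realistic figure lands back in the "at least $10M" range quoted above rather than at the full theoretical total.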



Apache files S-1
by Paul McLellan on 03-14-2011 at 3:50 pm

Apache Design Solutions today filed its S-1 with the SEC in preparation for an initial public offering (IPO). This is a big deal since there hasn’t been an IPO of an EDA company for many years (Magma was the last, 10 years ago). As a private company they have not had to reveal their financials until now.

It turns out that they did $44M in revenue last year (2010), $34M the year before (2009) and $25M two years ago (2008). That’s pretty impressive growth. They were profitable in all 3 years. Apache has done a good job of building up multiple product lines each of which is strong in its space. It is almost impossible to get to IPO levels of revenue and growth with a single product line unless it is in one of the huge market spaces such as synthesis or place & route.

Of course there is a large amount of uncertainty in the market due to the situation in Japan, Libya and maybe the Euro. None of that has anything to do with Apache directly but Wall Street is an emotional place and the general market can make a big impact on the success of an IPO. Few people remember that SDA was going to go public on what turned out to be the day of the 1987 stock-market crash. Oops. They never went public, and ended up merging with (already public) ECAD systems to create Cadence.

If Apache’s IPO is a big success, then a couple of other companies rumored to be thinking about IPOs, Atrenta and eSilicon, may follow. They are both private so I’ve not seen the numbers for either of them, but the numbers I’ve heard make them sound ready for the costs of Sarbanes-Oxley and all the other annoyances that come with being a public company.



Checking AMS design rules instantly
by Paul McLellan on 03-13-2011 at 5:25 pm

With each process generation, the design rules get more and more complex. One data point: there are twice as many checks at 28nm as there are at 90nm. In fact, the complexity of the rules is outpacing the ability to describe them using the simplified approaches used in the DRC engines built into layout editors, or in formats like LEF.

Worse still, at 28nm and below the rules are really too complicated for a designer to understand. As Richard Rouse of MoSys said “at 28nm, even the metal rules are incomprehensible.” It goes without saying that the rules for the even more critical layers for transistor formation are even more opaque.

It is going to get worse too, since at 22nm and below we will need to use double patterning for the most critical layers. That means it is not really even possible to know whether a design is in compliance: it depends on the double-patterning algorithm being able to assign shapes to the two patterns in a way that the lithography will work.
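To see why compliance ends up depending on the decomposition algorithm, here is a toy Python sketch (not a real decomposition tool): shapes spaced closer than the same-mask limit must land on different masks, which is a graph two-coloring problem, and an odd cycle of such conflicts has no legal assignment.

```python
# Toy illustration only: shapes whose spacing is below the same-mask limit must
# be assigned to different masks, i.e. the conflict graph must be 2-colorable.
# An odd cycle of conflicts has no legal split, so the layout as drawn cannot
# be made double-patterning compliant.
from collections import deque

def assign_masks(conflicts, shapes):
    """Try to 2-color the conflict graph; return a mask assignment or None."""
    color = {}
    for start in shapes:
        if start in color:
            continue
        color[start] = 0
        queue = deque([start])
        while queue:
            node = queue.popleft()
            for neighbor in conflicts.get(node, ()):
                if neighbor not in color:
                    color[neighbor] = 1 - color[node]
                    queue.append(neighbor)
                elif color[neighbor] == color[node]:
                    return None          # odd cycle: no legal assignment
    return color

shapes = ["A", "B", "C"]
# A, B and C are pairwise too close: a 3-cycle of conflicts.
conflicts = {"A": ["B", "C"], "B": ["A", "C"], "C": ["A", "B"]}
print(assign_masks(conflicts, shapes))   # -> None (not decomposable as drawn)
```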

It used to be that analog designers were comfortable staying a couple of nodes behind the digital guys, but that is no longer possible. 70% of designs are mixed-signal SoCs, driven in particular by the communications market. Mobile consumer chips need analog, and even some of the digital blocks, such as graphics accelerators, typically require custom-designed datapaths.

Digital designs are about scaling, using simplified rules to be able to handle enormous designs. Custom and analog design is about optimization and so needs to push to the limits of what the process can deliver.

The only solution to this problem is to run the real signoff DRC with the real signoff rules. The signoff DRC is pretty much universally Mentor’s Calibre (sorry Cadence and Synopsys). But running batch Calibre from within the Laker™ Custom Layout Automation System is inconveniently slow and disruptive. So Mentor has created an integratable version called Calibre RealTime (and, by the way, doesn’t Mentor know that Oasys already has a product called RealTime Designer?) which operates on top of the OpenAccess (OA) database, and Laker is the first, and so far only, layout editor to do the integration. Rules are continuously checked in parallel with editing. A full check takes about 0.2 to 0.3 seconds. In the demonstration I saw at SpringSoft’s booth it was almost instantaneous, rather like the spell-checker in a word processor. In fact, at 28nm it was checking 6,649 shapes in 0.3 seconds. Laker optimizes which checks are required based on what changes were made to the design (if you don’t change the metal you don’t need to run the metal checks again, for example), which further improves performance.
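The layer-based filtering of checks can be illustrated with a minimal Python sketch. This is not how Calibre RealTime is implemented, and the rule and layer names below are made up; the point is simply that an edit which touches no metal triggers no metal checks.

```python
# Minimal sketch of incremental, layer-aware DRC scheduling (illustrative only).
# Each rule declares the layers it reads; after an edit we re-run only the
# rules whose layers intersect the set of layers that actually changed.
from dataclasses import dataclass

@dataclass(frozen=True)
class Rule:
    name: str
    layers: frozenset  # layers this check depends on

RULE_DECK = [
    Rule("M1.SPACING", frozenset({"metal1"})),
    Rule("M2.WIDTH", frozenset({"metal2"})),
    Rule("POLY.ENCLOSURE", frozenset({"poly", "active"})),
    Rule("VIA1.COVERAGE", frozenset({"metal1", "metal2", "via1"})),
]

def rules_to_rerun(changed_layers):
    """Return only the rules affected by the layers touched in the last edit."""
    changed = frozenset(changed_layers)
    return [r for r in RULE_DECK if r.layers & changed]

# Example: the designer nudged a poly shape; metal rules are skipped entirely.
for rule in rules_to_rerun({"poly"}):
    print("re-checking", rule.name)   # -> POLY.ENCLOSURE only
```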

Historically, layout designers have been very reluctant to adopt anything that might slow them down. But the complexity of the rules at modern process nodes, along with the high performance and unobtrusiveness of the integration, gives the Laker signoff-driven layout flow a good chance of immediate adoption.



Moore’s Law and Semiconductor Design and Manufacturing
by Daniel Nenni on 03-12-2011 at 4:51 am

The semiconductor design and manufacturing challenges at 40nm and 28nm are a direct result of Moore’s Law: the climbing transistor count and shrinking geometries. It’s a process AND design issue, and the interaction is at the transistor level. Transistors may be shrinking, but atoms aren’t, so now it actually matters when even a few atoms are out of place. Process variations, whether they are statistical, proximity-related, or otherwise, have to be thoughtfully accounted for if we are to achieve low-power, high-performance, and high-yield design goals.

"The primary problem today, as we take 40 nm into production, is variability,” he says. “There is only so much the process engineers can do to reduce process-based variations in critical quantities. We can characterize the variations, and, in fact, we have very good models today. But they are time-consuming models to use. So, most of our customers still don’t use statistical-design techniques. That means, unavoidably, that we must leave some performance on the table.”Dr. Jack Sun, TSMC Vice President of R&D

Transistor-level designs, which include mixed-signal, analog/RF, embedded memory, standard cell, and I/O, are the most susceptible to parametric yield issues caused by process variation.

Process variation may occur for many reasons during manufacturing, such as minor changes in humidity or temperature in the clean room when wafers are transported, or due to non-uniformities introduced during process steps resulting in variation in gate oxide, doping, and lithography; the bottom line is that it changes the performance of the transistors.

The most commonly used technique for estimating the effects of process variation is to run SPICE simulations using the digital process corners provided by the foundry as part of the SPICE models in the process design kit (PDK). This concept is universally familiar to transistor-level designers, and digital corners are generally run for most analog designs as part of the design process.

Digital process corners are provided by the foundry and are typically determined from Idsat characterization data for N- and P-channel transistors. Plus and minus three-sigma points may be selected to represent the Fast and Slow corners for these devices. These corners are provided to represent the process variation that the designer must account for in their designs. This variation can cause significant changes in the duty cycle and slew rate of digital signals, and can sometimes result in catastrophic failure of the entire system.
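As a minimal illustration of that corner-extraction idea, the sketch below derives "slow" and "fast" points as the plus and minus three-sigma values of a hypothetical Idsat sample set. The numbers are invented; real foundry corners come from far richer characterization data than ten samples.

```python
# Illustrative only: deriving "fast" and "slow" digital corner points as the
# +/- 3-sigma values of a (hypothetical) Idsat distribution for one device type.
import statistics

# Hypothetical Idsat characterization data (mA/um) for an NMOS device.
idsat_samples = [0.98, 1.02, 1.01, 0.97, 1.05, 0.99, 1.03, 1.00, 0.96, 1.04]

mu = statistics.mean(idsat_samples)
sigma = statistics.stdev(idsat_samples)

slow_corner = mu - 3 * sigma   # "SS"-like point: weak/slow device
fast_corner = mu + 3 * sigma   # "FF"-like point: strong/fast device

print(f"nominal Idsat = {mu:.3f}, slow corner = {slow_corner:.3f}, "
      f"fast corner = {fast_corner:.3f} (mA/um)")
```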
However, digital corners have three important characteristics that limit their use as accurate indicators of variation bounds, especially for analog designs:

  • Digital corners account for global variation, are developed for a digital design context, and are represented as “slow” and “fast”, which is largely irrelevant in analog design.
  • Digital corners do not include local variation (mismatch) effects, which are critical in analog design.
  • Digital corners are not design-specific, which is necessary to determine the impact of variation on different analog circuit and topology types.

These characteristics limit the accuracy of the digital corners, and analog designers are left with considerable guesswork or heuristics as to the true effects of variation on their designs. The industry-standard workaround for this limitation has been to include ample design margins (over-design) to compensate for the unknown effects of process variation. However, this comes at the cost of larger-than-necessary design area as well as higher-than-necessary power consumption, which increases manufacturing costs and makes products less competitive. The other option is to guess at how much to tighten design margins, which can put design yield at risk (under-design). In some cases under- and over-design can co-exist for different output parameters of the same circuit, as shown below. The figure shows simulation results for digital corners as well as Monte Carlo simulations, which are representative of the actual variation distribution.

To estimate device mismatch effects and other local process variation effects, the designer may apply a suite of ad-hoc design methods which typically only very broadly estimate whether mismatch is likely to be a problem or not. These methods often require modification of the schematic and are imprecise estimators. For example, a designer may add a voltage source for one device in a current mirror to simulate the effects of a voltage offset.

The most reliable and commonly used method for measuring the effects of process variation is Monte Carlo analysis, which simulates a set of random statistical samples based on statistical process models. Since SPICE simulations take time to run (seconds to hours) and the number of design variables is typically high (1000s or more), it is commonly the case that the sample size is too small to make reliable statistical conclusions about design yield. Rather, Monte Carlo analysis is used as a statistical test to suggest that it is likely that the design will not result in catastrophic yield loss. Monte Carlo analysis typically takes hours to days to run, which prohibits its use in a fast, iterative statistical design flow, where the designer tunes the design, then verifies with Monte Carlo analysis, and repeats. For this reason, it is common practice to over-margin in anticipation of local process variation effects rather than to carefully tune the design to consider the actual process variation effects. Monte Carlo is therefore best suited as a rough verification tool that is typically run once at the end of the design cycle.
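Here is a deliberately simplified Monte Carlo sketch of the flow described above. A real analysis would draw device parameters from the foundry's statistical models and run a SPICE simulation for each sample; the toy gain model, spec limit, and sigma values below are assumptions used purely for illustration.

```python
# Illustrative Monte Carlo sketch (not any vendor's flow): sample device
# variation, evaluate a circuit metric, and estimate how often the design
# meets its spec. A real flow would call a SPICE simulator per sample
# instead of the toy gain model used here.
import random

N_SAMPLES = 1000          # often limited by SPICE runtime in practice
SPEC_MIN_GAIN = 40.0      # hypothetical spec: amplifier gain in dB

def sample_gain():
    # Global variation: a shared shift applied to the whole die.
    global_shift = random.gauss(0.0, 1.5)
    # Local (mismatch) variation: independent per device of a matched pair.
    m1 = random.gauss(0.0, 0.8)
    m2 = random.gauss(0.0, 0.8)
    nominal_gain = 43.0
    # Toy model: gain shifts with global variation and degrades with mismatch.
    return nominal_gain + 0.5 * global_shift - abs(m1 - m2)

passes = sum(sample_gain() >= SPEC_MIN_GAIN for _ in range(N_SAMPLES))
print(f"estimated yield for this metric: {100.0 * passes / N_SAMPLES:.1f}%")
```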

The solution is a fast, iterative AMS Reference Flow that captures all relevant variation effects into a design-specific, corner-based flow representing process variation (global and local) as well as environmental variation (temperature and voltage).

Graphical data provided by Solido Design Automation’s Variation Designer.



Getting Real Time Calibre DRC Results
by Daniel Payne on 03-10-2011 at 10:00 am

Last week I met with Joseph Davis, Ph.D. at Mentor Graphics in Wilsonville, Oregon to learn about a new product designed for full-custom IC layout designers to improve productivity.

The traditional flow for full-custom IC layout designers has been nearly unchanged for decades:

  • Read a schematic or use Schematic Driven Layout (SDL)
  • Create an IC layout with polygons, objects, generators or cells
  • Manually interconnect or use a P&R tool
  • Run DRC and LVS in batch
  • Find and fix errors

As designers use the newest layout nodes at 28nm, the complexity explosion in DRC rules has been quite daunting, with a doubling of rules compared to 90nm:

IC layout designers cannot keep all of these new rules in their heads while doing custom layout, and running DRC tools in batch mode dozens or hundreds of times a day will certainly slow down design productivity.

Faced with these pressures of more DRC rules and the increased difficulty of getting a clean layout, a design team could choose among several options:


  • Avoid optimizing the layout; just get it DRC and LVS clean and get the job done.
  • Take more time to get the layout clean.
  • Hire more layout designers or contractors.

None of these options looks appealing to me, so it was good news to hear about a new approach to make IC layout work go faster. This approach is to use the Calibre tool interactively while doing IC layout instead of using it in batch mode:

The demo that I saw was a custom IC layout at 28nm with dozens of transistors, 6,649 shapes and about 3,000 checks. When this layout was run with Calibre in batch mode it took about 20 seconds to produce results; however, when used interactively the results came back in under 1 second. Both runs used identical rule decks for Calibre and were run on the same CPU, the difference being that the interactive version of Calibre produced near-instant results.

Mentor calls this new tool Calibre RealTime and it works in a flow that uses the OpenAccess database, a de facto industry standard. It was a bit odd to be inside a Mentor building and watch a demo using the SpringSoft Laker tool for IC layout, since it competes with IC Station from Mentor. You can get this same technology for Mentor’s IC Station tool, but you have to wait until July.

    Benefits of this interactive approach with Calibre RealTime are:

  • I can see my DRC errors rapidly: just mouse over the error region and read a detailed description of the violation.

  • The DRC rule deck is the full Calibre deck, so I’m getting sign-off quality results at the earliest possible time. In the past, IC tools had some rudimentary interactive DRC checks; however, they were never sign-off quality, so you still had to run a batch DRC run only to discover that you still had a lot of fixes required.
  • With the SpringSoft Laker tool I just push one button and get Calibre results in under a second. There’s no real learning curve; it’s just that simple to install and start using immediately.
  • Of course I wouldn’t run this interactive DRC check on a design with more than 1,000 transistors at a time; it’s really meant for smaller blocks. If you use Calibre RealTime on small blocks while your design is being created, it’s going to catch your DRC errors almost instantly so you can start fixing the layout and avoid making the same mistake again.
  • Mentor IC Station users will benefit from this same productivity boost in just a few months, so you should be able to see a demo at DAC.
  • Works with MCells, PCells, routers and manual editing, so it fits into your IC flow at every step of the process.

    Summary
    The Calibre RealTime tool will certainly save you time on full-custom IC layout by producing DRC results in under a second on small blocks. SpringSoft Laker users will benefit first, then IC Station users in a few months.

It makes sense for Mentor tools to work with the OpenAccess database; now I wonder whether IC Station will be the next tool to use OA.



    Semiconductor IP would be nothing without VIP…
    by Eric Esteve on 03-10-2011 at 6:35 am

    …but what is the weight of the Verification IP market?

If the IP market is a niche market (see: **) with revenue of about 1% of the overall semiconductor business, how should we qualify the VIP market? An ultra-niche market? But the verification of the IP integrated into the SoC is an essential piece of the engineering puzzle when you are involved in SoC design. Let me clarify what type of verification I want to discuss: the verification of a protocol-based function, which can be an interface protocol (USB, PCIe, SATA, MIPI and more), a specific memory interface (DDR, DDR2,…, GDDR,…, Flash ONFI and more) or a bus protocol (AMBA AXI,…, OCP and more).

    ** http://www.semiwiki.com/forum/f152/ip-paradox-such-low-payback-such-useful-piece-design-369.html

    Courtesy of Perfectus
The principle of this type of VIP is: you write test benches and send them to a scenario generator, which activates the drivers of the Bus Functional Model (BFM), accessing the function you want to verify, the Device Under Test (DUT). The verification engine itself is made of the transaction generator, error injector and so on, and of the BFM, which is specific to the protocol and to the agent configuration: if the DUT is a PCIe Root Port x4, the BFM will be a PCIe Endpoint x4. If the DUT is the hand, the BFM is the glove. When you write test benches, you in fact access a library of protocol-specific test suites (test vectors written by the VIP vendor), which is sold on top of the VIP. So, after applying the test benches, you now have to monitor the behavior of the DUT and check for compliance with the protocol specification. This defines another product sold by the VIP vendor: the monitor/checker, which allows you to evaluate both the coverage of your test benches and compliance with the protocol.
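To make those roles concrete, here is a toy Python sketch of the generator / BFM / DUT / monitor arrangement described above. Commercial VIP is typically delivered as SystemVerilog/UVM components, and the "protocol" below (a single parity bit) is imaginary; the sketch only shows how the pieces plug together.

```python
# Toy sketch of the VIP roles described above (illustrative only; commercial
# VIP is typically SystemVerilog/UVM and the "protocol" here is imaginary).
import random

class ScenarioGenerator:
    """Turns test bench intent into a stream of transactions (optionally with errors)."""
    def __init__(self, inject_errors=False):
        self.inject_errors = inject_errors
    def transactions(self, count):
        for i in range(count):
            payload = random.randrange(256)
            good_parity = bin(payload).count("1") % 2
            parity = good_parity ^ 1 if (self.inject_errors and i % 4 == 0) else good_parity
            yield {"payload": payload, "parity": parity}

class BusFunctionalModel:
    """Drives transactions into the DUT the way a real link partner would."""
    def __init__(self, dut):
        self.dut = dut
    def drive(self, txn):
        return self.dut.receive(txn)

class DutModel:
    """Stand-in for the Device Under Test: accepts a transaction, returns a response."""
    def receive(self, txn):
        ok = bin(txn["payload"]).count("1") % 2 == txn["parity"]
        return {"ack": ok}

class MonitorChecker:
    """Observes traffic and checks protocol compliance (here: parity handling)."""
    def __init__(self):
        self.errors = 0
    def check(self, txn, response):
        expected_ack = bin(txn["payload"]).count("1") % 2 == txn["parity"]
        if response["ack"] != expected_ack:
            self.errors += 1

dut, monitor = DutModel(), MonitorChecker()
bfm = BusFunctionalModel(dut)
for txn in ScenarioGenerator(inject_errors=True).transactions(20):
    monitor.check(txn, bfm.drive(txn))
print("protocol violations observed:", monitor.errors)
```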

A common-sense remark about the BFM and the IP: when you select a VIP provider to verify an external IP, it is better to make sure that the design teams for the BFM and for the IP are different and independent (high-level specification and architecture made by two different people). This is to avoid “common mode failure”, a principle well known in aeronautics, for example.

If we look at these products from a cost perspective, the unit cost is relatively low (a few $10K), but as you probably know, the verification task is heavy (60 to 70% of the overall design project), so you want to parallelize as much as possible to minimize the time devoted to verification. To do so, you have to buy several licenses for the same VIP, ending up with a cost in the few $100K per project. Now, if we try to evaluate the market size for VIP, as we can do for design IP, we realize that there is no market data available. We simply don’t know it! Making an evaluation is absolutely not straightforward. You could think that looking at the number of IP licenses sold for the protocol-based functions listed above could lead to a simple equation: Market Size = (Number of IP licenses) x (average cost of VIP).

But that does not work! In fact, the people buying an IP do not necessarily run the complete verification on it, although some IDMs or fabless companies do: it is not binary behavior! What you can say is that a design team that internally develops a protocol-based function will most certainly run a complete verification of that function. But you don’t know precisely how many IDMs or fabless companies prefer developing a protocol-based function internally rather than buying an IP!

If we look at the VIP players, we see that Cadence has invested heavily in this market: when they bought Denali in 2010 (for $315M), they bought a strong VIP portfolio and competencies, and probably good market share. They also bought products from Yogitech SpA, IntelliProp Inc. and HDL Design House in October 2008 to complete their portfolio. When you consider that Synopsys integrates their VIP into DesignWare, so they do not try to get value for each VIP, you can say that there is a kind of Yalta: design IP to Synopsys, VIP to Cadence. But Cadence is not alone, and faces pretty dynamic competition, mostly coming from mid-size companies with R&D based in India and sales offices in the US (VIP is design-resource intensive), like nSys, Perfectus, Avery, Sibridge Technologies and more.

The VIP paradox: we don’t know the market size, but VIP is an essential piece of the puzzle for SoC design, as most SoCs use several protocol-based functions (sometimes up to several dozen). We don’t know if just selling VIP is a profitable business, or if you also need to provide design services (like many vendors do). What we propose is to run a market survey, asking the VIP vendors to disclose their revenue, so we could build a more precise picture of this market: evaluating the current size, understanding the market trends and building a forecast. The ball is in the VIP vendors’ camp!



    Apple Creates Semiconductor Opportunities . . .
    by Steve Moran on 03-09-2011 at 7:22 pm

    There has been a lot of press this past week surrounding the release of iPad2. While it has some significant improvements, they are, for the most part, incremental. In my view the lack of flash, a USB port and a memory card slot continue to be huge deficits. Until this past week my reservations about the iPad have been mostly theoretical, never having actually used one. Then, earlier this week I got a chance to use an iPad for half a day while traveling. It was an OK experience and, in a pinch, it would serve my needs, but I found myself wanting to get back to my laptop as soon as possible. I would even have preferred my little Dell netbook.

I am already an Android user on my smartphone and I think that, when I make the leap to a pad, I will look for an Android platform. That said, I give Apple all the credit for creating a new challenge for its competitors while continuing to lead the pack, creating the ultimate user experience and making buckets of money along the way.

In the very near term it is certain there will be a plethora of Android (and other) pads. The iPad will continue to set the standard that others will attempt to meet or exceed. With this background, I recently came across an article in the EEherald that addresses the substantial opportunities for semiconductor companies to be a part of the Android pad ecosystem. You can read the whole article here, but these are the highlights:

– In a very short period of time price will be nearly as important as features.

    – With streaming media and real time communication, multicore processing (more than 2) will be essential.

– The marketplace is likely to branch into application-specific pads for areas such as health care, engineering, and environmental monitoring. Much of the differentiation will be software, but hardware will play a role.

    – Power consumption will continue to be a major problem and opportunity.

    – There will likely be the need to include more radios.

    The EEherald article concludes by pointing out that the odds of any company getting their chips inside the Apple iPad are remote, but that with so many companies preparing to leap into the Android pad market this is a huge emerging market.

    I am looking forward to the burst of innovation that Apple has spawned.



    TSMC 2011 Technology Symposium Theme Explained
    by Daniel Nenni on 03-09-2011 at 6:49 pm

The 17th Annual TSMC Technology Symposium will be held in San Jose, California on April 5th, 2011. Dr. Morris Chang will again be the keynote speaker. The theme this year is “Trusted Technology and Capacity Provider” and I think it’s important to not only hear what people are saying but also understand why they are saying it, so that is what this blog is all about.

You can bet TSMC spent a lot of time on this theme, crafting every word. When dealing with TSMC you have to factor in the Taiwanese culture, which is quite humble and reserved. Add in the recent semiconductor industry developments that I have been tweeting, and I offer you an Americanized translation of “Trusted Technology and Capacity Provider”: the phrase made famous by the legendary rock band Queen, “We are the Champions!”

    DanielNenni #TSMC said to make 40nm Chipset for #INTEL’s Ivy Bridge CPU:
    http://tinyurl.com/46qk89b March 5

    DanielNenni #AMD contracts #TSMC to make another CPU Product:
    http://tinyurl.com/4lel5zy March 2

    DanielNenni #Apple moves #SAMSUNG designs to #TSMC:
    http://tinyurl.com/64ofq67 February 15

    DanielNenni #TSMC 2011 capacity 2 rise 20%
http://tinyurl.com/4j5v6qt February 15

    DanielNenni #SAMSUNG orders 1M #NVIDIA #TEGRA2 (#TSMC) chips:
http://tinyurl.com/4aa2xo6 February 15

    DanielNenni #TSMC and #NVIDIA ship one-billionth GPU:
    http://tinyurl.com/4juzdvd January 13

TRUST in the semiconductor industry is something you earn by telling people what you are going to do and then doing it. After inventing and leading the pure-play foundry business for the past 21 years, you have to give them this one: TSMC is the most trusted semiconductor foundry in the world today.

    TECHNOLOGY today is 40nm, 28nm, and 20nm geometries. Being first to a semiconductor process node is a technical challenge, but also a valuable learning experience, and there is no substitute for experience. TSMC is the semiconductor foundry technology leader.

CAPACITY is manufacturing efficiency. Capacity is yield. Capacity is the ability to ship massive quantities of wafers. From MiniFab to MegaFab to GigaFab, TSMC is first in semiconductor foundry capacity.

Now that you have read the blog, let me tell you why I wrote it. The semiconductor foundry business is highly competitive, which breeds innovation. Innovation is good, I like innovation, and I would like to see more of it. Other foundries, take note: the foundry business is all about TRUST, TECHNOLOGY, and CAPACITY.

    Right now I’m sitting across from Tom Quan, TSMC Design Methodology & Service Marketing, in the EVA Airways executive lounge. Both Tom and I are flying back from Taiwan to San Jose early to participate in tomorrow’s Design Technology Forum. Tom is giving a presentation on “Successful Mixed Signal Design on Advanced Nodes”. I will be moderating a panel on “Enabling True Collaboration Across the Ecosystem to Deliver Maximum Innovation”. I hope to see you there.



    Essential signal data and Siloti
    by Paul McLellan on 03-05-2011 at 3:24 pm

One of the challenges with verifying today’s large chips is deciding which signals to record during simulation so that you can work out the root cause when you detect something anomalous in the results. If you record too few signals, you risk having to re-run the entire simulation because a signal you did not record turns out to be important. If you record too many, or simply record all the signals to be on the safe side, then the simulation time can get prohibitively long. In either case, whether re-running the simulation or running it very slowly, the time taken for verification increases unacceptably.

The solution to this paradox is to record the minimal essential set of data needed from the logic simulation to achieve full visibility. This guarantees that it will not be necessary to re-run the simulation, without slowing the simulation down unnecessarily. A trivial example: it is obviously not necessary to record both a signal and its inverse, since one value can easily be re-created from the other. Working out which signals form the essential set is not feasible to do by hand for anything except the smallest designs (where it doesn’t really matter, since the overhead of recording everything is not high).

    SpringSoft’s Siloti automates this process, minimizing simulation overhead while ensuring that the necessary data is available for the Verdi system for debug and analysis.

There are two basic parts to the Siloti process. Before simulation there is a visibility analysis phase in which the essential signals are determined; these are the signals that will be recorded during the simulation. Then, after the simulation, there is a data expansion phase in which the signals that were not recorded are calculated. This is done on demand, so that values are only determined if and when they are needed. Additionally, the signals recorded can be used to re-initialize a simulation and re-run any particular simulation window without requiring the whole simulation to be restarted from the beginning.
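As a minimal illustration of data expansion, the sketch below records only an "essential" subset of signals and recomputes the derived ones on demand from a hypothetical netlist description. This is not SpringSoft's algorithm, just the shape of the idea.

```python
# Illustrative sketch (not SpringSoft's algorithm): if we record only an
# "essential" subset of signals, values of the remaining combinational
# signals can be expanded on demand from their recorded fan-in.

# Hypothetical netlist: each derived signal is a function of other signals.
NETLIST = {
    "n_and": lambda v: v["a"] & v["b"],
    "n_inv": lambda v: 1 - v["n_and"],          # an inverse need not be recorded
    "y":     lambda v: v["n_inv"] | v["c"],
}

RECORDED = ["a", "b", "c"]   # essential set chosen by visibility analysis

def expand(sample):
    """Given one time-step of recorded values, recompute the derived signals."""
    values = dict(sample)
    for sig, fn in NETLIST.items():              # topological order assumed
        values[sig] = fn(values)
    return values

# One recorded simulation sample; everything else is reconstructed on demand.
print(expand({"a": 1, "b": 1, "c": 0}))
# -> {'a': 1, 'b': 1, 'c': 0, 'n_and': 1, 'n_inv': 0, 'y': 0}
```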

Using this approach results in dump file sizes of around 25% of what results from recording all signals, and simulation turnaround times around 20% of the original. With verification taking up to 80% of design effort, these are big savings.

    More information:



    Mentor Graphics 1 : Carl Icahn 0!
    by Daniel Nenni on 03-04-2011 at 10:03 pm

    This is just another blog about Carl Icahn and his quest to conquer EDA, when in fact EDA is conquering him. It includes highlights from my dinner with Mentor Graphics and Physicist Brian Greene, the Mentor Q4 conference call, and meeting Mentor CEO Wally Rhines at DvCon 2011.

It wasn’t just the free food this time; dinner with Brian Greene was a big enough draw to lure me to downtown San Jose on a school night. Brian has his own Wikipedia page, so you know he is a big deal. Mentor executives and their top customers would also be there, and I really wanted to hear “open bar” discussions about the Carl Icahn saga. Unfortunately the Mentor executives did not show and nobody really knew what was going on with Carl, nor did they care (except for me), so the night was a bust in that regard.

The dinner, however, was excellent, and the “who’s who” of the semiconductor world did show up, so it was well worth my time. Brian Greene’s lecture on parallel universes versus a finite universe was riveting. According to string theory there is nothing in modern-day physics that precludes a “multiverse,” which totally supports the subplot of the Men in Black movie.

    The excuse used for the Mentor executives not coming to dinner was the Q4 conference call the next day. Turns out it was a very good excuse! No way could Wally sit through dinner with a straight face with these numbers in his pocket:

Walden Rhines: Well, Mentor’s Q4 ’11 and fiscal year set all-time records in almost every category. Bookings in the fourth quarter grew 45% and for the year, 30%. Revenue grew 30% in the fourth quarter and 14% for the year, the fastest growth rates of the big three EDA companies. Established customers continue to purchase more Mentor products than ever, with growth in the annualized run rate of our 10 largest contract renewals at 30%.

    The transcript for the call is HERE. It’s a good read so I won’t ruin it for you but it definitely rained on Carl Icahn’s $17 per share offer. Carl would be lucky to buy Mentor Graphics at $20 per share now! After careful thought, after thorough investigation, after numerous debates, I have come to the conclusion that Carl has no big exit here. If someone reading this sees a big exit for Carl please comment because I just don’t see it.

Wally Rhines did make it to DvCon and he was all smiles, business as usual. His keynote “From Volume to Velocity” was right on target! Wally did 73 slides in 53 minutes; here are my favorites:

This is conservative compared to what I see with early access customers. Verification is a huge bottleneck during process ramping.

    The mobile semiconductor market is driving the new process nodes for sure. Expect that to continue for years to come.

    The SoC revolution is driving gate count. One chip does all and IP integration is key!

    Purchased IP will continue to grow so gentlemen start your IP companies!

    Sad but true.

Here is the bottom line, Mr. Icahn: verification and test are not only the largest EDA growth applications, they are also very “sticky”. Mentor Graphics owns verification and test, so Mentor Graphics is not going anywhere but up in the EDA rankings. If you want to buy Mentor Graphics you will have to dig deep ($20+ per share).

    Related blogs:
    Personal Message to Carl Icahn RE: MENT
    Mentor Acquires Magma?
    Mentor – Cadence Merger and the Federal Trade Commission
    Mentor Graphics Should Be Acquired or Sold: Carl Icahn
    Mentor Graphics Should Be Acquired or Sold: Carl Icahn COUNTERPOINT