
A Brief History of ASIC, part II

by Paul McLellan on 08-23-2012 at 8:00 pm

All semiconductor companies were caught up in ASIC in some way or another because of the basic economics. Semiconductor technology allowed medium sized designs to be done, and medium sized designs were pretty much all different. The technology didn’t yet allow whole systems to be put on a single chip. So semiconductor companies couldn’t survive just supplying basic building block chips any more since these were largely being superseded by ASIC chips. But they couldn’t build whole systems like a PC, a TV or a CD player since the semiconductor technology would not allow that level of integration. So most semiconductor companies, especially the Japanese and even Intel, started ASIC business lines and the market became very competitive.

ASIC turned out to be a difficult business to make money in. The system company owned the specialized knowledge of what was in the chip, so the semiconductor company could not price to value. Plus the system company knew the size of the chip and thus roughly what it should have cost with a reasonable markup. The money turned out to be in the largest, most difficult designs. Most ASIC companies could not execute a design like that successfully, so that market was a lot less competitive. The specialized ASIC companies that could, primarily VLSI Technology and LSI Logic again, could charge premium pricing based on their track record of bringing in the most challenging designs on schedule. If you are building a skyscraper, you don't go with a company that has only built houses.

As a result of this, and of getting a better understanding of just how unprofitable low-volume designs were, everyone realized that there were fewer than a hundred designs a year being done that were really worth winning. It became a race for those hundred sockets.

Semiconductor technology continued to get more powerful and it became possible to build whole systems (or large parts of them) on a single integrated circuit. These were known as systems-on-chip or SoCs. The ASIC companies all started to build whole systems such as chipsets for PCs or for cell-phones alongside their ASIC businesses which were more focused on just those hundred designs that were worth winning. So all semiconductor companies started to look the same, with lines of standard products and, often, an ASIC product line too.

One important aspect of the ASIC model was that the tooling, the jargon word for the masks, belonged to the ASIC company. This meant that the only company that could manufacture the design was that ASIC company. Even if another semiconductor company offered them a great deal, they couldn’t just hand over the masks and take it. This would become important in the next phase of what ASIC would morph into.

ASIC companies charged a big premium over the raw cost of the silicon that they shipped to their customers. ASIC required a network of design centers all over the world staffed with some of the best designers available, obviously an expensive proposition. Customers started to resent paying this premium, especially on very high volume designs. They knew they could get cheaper silicon elsewhere but that meant starting the design all over again with the new semiconductor supplier.

Also, by then, two other things had changed. Foundries such as TSMC had come into existence. And knowledge about how to do physical design was much more widespread and, at least partially, encapsulated in software tools available from the EDA industry. This meant that there was a new route to silicon for the system companies: ignore the ASIC companies, do the entire design including the semiconductor-knowledge-heavy physical design, and then get a foundry like TSMC to manufacture it. This was known as customer-owned tooling or COT, since the system company, as opposed to the ASIC company or the foundry, owned the whole design. If one foundry gave poor pricing the system company could transfer the design to a different manufacturer.

However, the COT approach was not without its challenges. Doing physical design of a chip is not straightforward. Many companies found that the premium that they were paying ASIC companies for the expertise in their design centers wasn’t for nothing, and they struggled to complete designs on their own without that support. As a result, companies were created to supply that support, known as design services companies.

Design service companies played the role that the ASIC companies’ design centers did, providing specialized semiconductor design knowledge to complement the system companies’ knowledge. In some cases they would do the entire design, known as turnkey design. More often they would do all or some of the physical design and, often, manage the interface with the foundry to oversee the manufacturing process, another area where system companies lacked experience.

One company in particular, eSilicon, operates with a business model identical to the old ASIC companies except in one respect. It has no fab. It actually builds all of the customers’ products in one of the foundries (primarily TSMC).

Another change has been the growth of field-programmable gate-arrays (FPGAs) which are used for many of the same purposes as ASIC used to be.

So that is the ASIC landscape today. There is very limited ASIC business conducted by a few semiconductor companies. There are design services companies and virtual ASIC companies like eSilicon. There are no pure-play ASIC companies. A lot of what used to be ASIC has migrated to FPGA. A Brief History of ASIC Part I is HERE.

A Brief History of Semiconductors
A Brief History of ASICs
A Brief History of Programmable Devices
A Brief History of the Fabless Semiconductor Industry
A Brief History of TSMC
A Brief History of EDA
A Brief History of Semiconductor IP
A Brief History of SoCs


Book Review: Mixed-Signal Methodology Guide

by Daniel Payne on 08-23-2012 at 4:00 pm

Almost every SoC has multiple analog blocks, so AMS methodology is an important topic for our growing electronics industry. Authored by Jess Chen (Qualcomm), Michael Henrie (ClioSoft), Monte Mar (Boeing) and Mladen Nizic (Cadence), the book is subtitled Advanced Methodology for AMS IP and SoC Design, Verification and Implementation. Cadence published the book; I've just read the first chapter and deemed it worth reviewing because it shows the challenges of both AMS design and verification and discusses multiple approaches, without favoring one particular EDA vendor's tools.

Reviews of Chapters 2 through 11 are here.

Mladen Nizic
I spoke by phone with Mladen Nizic on Thursday afternoon to better understand how this book came to be.

Q: Why was this book on AMS design and verification methodology necessary?
A: Technology and business drivers are demanding changes in products and therefore methodology. We need more discussion between Digital and Analog designers, collaborating to adopt new methodologies.

Q: How did you select the authors?
A: I knew the topics that I wanted to cover, then started asking customers and people that I knew. We gathered authors from Boeing, Cadence, Qualcomm and ClioSoft.

Q: What are the obstacles to adopting a new methodology?
A: Organizational and technical barriers exist. Most organizations have separate digital SoC and analog design groups; they do design differently and are somewhat isolated. You start to see engineers with the title AMS Verification Engineer appearing now. The complexity of AMS designs is increasing with more blocks being added. Advanced nodes bring challenges, which require even more analysis.

Q: Will reading the book be sufficient to make a difference?
A: The whole design team needs to read the book, then discuss their methodology, and start to adapt some or all of the recommended techniques. Analog designers need to learn what their Digital counterparts are doing for design and verification.

Q: Why should designers spend the time reading about AMS methodology?
A: To become better rounded in their approach to design and verification. You can also just read the chapters that are of interest to each group.

Q: Where do I buy this book?
A: Right now you can pre-order it at Lulu.com, and soon afterwards on Amazon.com.

Q: Can I buy an e-Book version?
A: There will be an e-Book version coming out after the hard-copy.

Q: Is there a place for designers to discuss what they read in the book?
A: Good idea, we’re still working on launching that, so stay tuned.

Chapter 1: Mixed-Signal Design Trends & Challenges

The continuous-time output of analog IP blocks is contrasted with the binary output of digital IP blocks, and various approaches to mixed-signal verification are introduced:

To gain verification speed, consider abstracting the analog IP into behavioral models instead of simulating only at the transistor level with a SPICE or Fast-SPICE circuit simulator.
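The speed argument can be illustrated with a toy example (not from the book, and hypothetical component values): a closed-form behavioral model of an RC step response gives in one evaluation what a SPICE-style fine-timestep integration takes thousands of steps to compute.

```python
import math

R, C = 1e3, 1e-9          # 1 kohm, 1 nF -> tau = 1 us (hypothetical values)
TAU = R * C
V_IN = 1.0                # 1 V step input

def behavioral(t):
    """Behavioral model: closed-form RC step response, one evaluation."""
    return V_IN * (1.0 - math.exp(-t / TAU))

def transistor_level(t, dt=1e-9):
    """SPICE-style numerical integration (forward Euler) with a fine
    timestep -- thousands of evaluations for essentially the same answer."""
    v = 0.0
    for _ in range(int(t / dt)):
        v += dt * (V_IN - v) / TAU
    return v

t = 3e-6  # three time constants
print(round(behavioral(t), 3))        # 0.95, from one evaluation
print(round(transistor_level(t), 3))  # 0.95, after 3000 integration steps
```

The two agree to well within typical model tolerance, but the behavioral version is what makes full-chip mixed-signal simulation tractable.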

Topics raised: mixed-signal verification, behavioral modeling, low-power verification, DFT, chip planning, AMS IP reuse, full-chip sign-off, substrate noise, IC/package co-design, design collaboration and data management.

Other chapters include topics like:

  • Overview of Mixed-Signal Design Methodologies
  • AMS Behavioral Modeling
  • Mixed-Signal Verification Methodology
  • A Practical Methodology for Verifying RF Designs
  • Event-Driven Time-Domain Behavioral Modeling of Phase-Locked Loops
  • Verifying Digitally-Assisted Analog Designs
  • Mixed-Signal Physical Implementation Methodology
  • Electrically-Aware Design Methodologies for Advanced Process Nodes
  • IC Package Co-Design for Mixed-Signal Systems
  • Data Management for Mixed-Signal Designs.





ARM + Broadcom + Linux = Raspberry Pi

by Daniel Payne on 08-23-2012 at 12:28 am

Broadcom has designed an impressive SoC named the BCM2835 with the following integrated features:

  • ARM CPU at 700MHz
  • GPU – VideoCore IV
  • RAM – 256 MB

The British chaps at Raspberry Pi have created a $35.00 Linux-based computer based on the Broadcom BCM2835 chip that is tiny in size but big in utility.


Debugging Subtle Cache Problems

by Paul McLellan on 08-22-2012 at 5:11 pm

When I worked for virtual platform companies, one of the things I used to tell prospective customers was that virtual prototypes were not some second-rate approach to software and hardware development to be dropped the moment real silicon was available; in many ways they were better than the real hardware, since they had so much better controllability and observability. An interesting blog entry on Carbon's website shows this strongly.

A customer was having a problem with cache corruption while booting Linux on an ARM-based platform. They had hardware but they couldn’t freeze the system at the exact moment cache corruption occurred and since they couldn’t really have enough visibility anyway, they had not been able to find the root problem. Instead they had managed to find a software workaround that at least got the system booted.

Carbon worked with the customer to put together a virtual platform of the system. They produced a model of the processor using Carbon IP Exchange (note that this did not require the customer to have detailed knowledge of the processor RTL). They omitted all the peripherals not needed for the boot (and also didn't need to run any driver initialization code, speeding and simplifying things). Presumably the problem was somewhere in the memory subsystem. There was RTL for the L2 cache, AHB fabric, boot ROM, memory controller, and parts of their internal register/bus structure. This was all compiled with Carbon Model Studio to a 100% accurate model that could be dropped into the platform. And when they booted the virtual platform the cache problem indeed showed up. When you are having a debugging party it is always nice when the guest of honor deigns to show up. Nothing is worse than trying to fix a Heisenbug, where the problem goes away the moment you add instrumentation to isolate it.

By analyzing waveforms on the ARM Advanced High-performance Bus (AHB), the cache memory, the memory controller, and assembly code (all views that Carbon's SoC Designer provides) they isolated the problem. It turned out that the immediate problem was that a read was taking place from memory before the write to that memory location had been flushed from the cache, thus picking up the wrong value. In turn, the root cause was that a write-through to the cache (which should write the cache and also write through to main memory) had been wrongly implemented as a write-back (which just goes to the cache immediately and only gets written to main memory when that line in the cache is flushed). With that knowledge it was easy to fix the problem.
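The difference between the two write policies, and why mixing them up produces a stale read, can be sketched in a few lines (a toy cache over a dict-as-memory, not Carbon's model or the customer's design):

```python
class ToyCache:
    """Toy cache front-end over a main-memory dict, illustrating
    write-through vs write-back policies."""

    def __init__(self, memory, write_through):
        self.memory = memory          # "main memory": addr -> value
        self.lines = {}               # values currently held in the cache
        self.write_through = write_through

    def write(self, addr, value):
        self.lines[addr] = value
        if self.write_through:
            self.memory[addr] = value  # memory is updated immediately
        # write-back: memory is only updated when the line is flushed

    def flush(self):
        self.memory.update(self.lines)
        self.lines.clear()

# Another bus master (or early boot code) reads main memory directly,
# bypassing the cache -- the situation described in the post.
memory = {0x100: 0}
cache = ToyCache(memory, write_through=False)   # the buggy implementation
cache.write(0x100, 42)
print(memory[0x100])   # prints 0 -- stale: the write is stuck in the cache

memory = {0x100: 0}
cache = ToyCache(memory, write_through=True)    # the intended behavior
cache.write(0x100, 42)
print(memory[0x100])   # prints 42 -- memory sees the write immediately
```

In the write-back case the correct value only reaches memory after an explicit `flush()`, which is exactly the window in which the customer's read picked up the wrong value.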

They had not been able to fix this problem using real hardware because they couldn't see enough of the hardware/software interaction, bus transactions and so forth. But with a virtual platform, where the system can be run with fine granularity, complete accuracy and full visibility, this type of bug can be tracked down. As Eric Raymond said about open source, "given enough eyeballs, all bugs are shallow." Well, with enough visibility all bugs are shallow too. Read more detail about this story on the Carbon blog here.


Synopsys up to $1.75B

by Paul McLellan on 08-22-2012 at 4:24 pm

Synopsys announced their results today. With Magma rolled in (but not yet SpringSoft, since that acquisition hasn't technically closed) they had revenue of $443M, up 15% from $387M last year. This means that they are all but a $1.75B company and a large part of the entire EDA industry (which I think of as being $5B or so, depending on just what you do about services, IP, ARM in particular, etc.). GAAP earnings per share were $0.50 (non-GAAP was $0.55). GAAP stands for generally accepted accounting principles; the non-GAAP figures exclude various items, especially the way acquisitions and stock compensation are accounted for.

For their fourth quarter they expect revenue to be flat versus this quarter at $440-448 million. So for fiscal 2012 (which ends in October) they expect revenue of $1.74B to $1.75B, with positive cash flow for the year of $450M. If Synopsys manages to grow another 15% next year (either through organic growth or through acquisition, and presumably SpringSoft is already in the bag) then they will be over $2B a year, running roughly $500M/quarter. I think that is an amazing achievement given that Synopsys's revenue in 2000 was $784M, an amount it now takes just five months to bring in.

And, talking of acquisitions, it still has close to $1B in cash. The one area where Synopsys is weak is emulation so I wouldn’t be the least bit surprised to see them go after Eve.

Full press release including P&L, balance sheet etc is here.


Cadence at 20nm

by Paul McLellan on 08-21-2012 at 8:10 pm

Cadence has a new white paper out about the changes in IC design that are coming at 20nm. One thing is very clear: 20nm is not simply “more of the same”. All design, from basic standard cells up to huge SoCs has several new challenges to go along with all the old ones that we had at 45nm and 28nm.

I should emphasize that the paper is really about the problems of 20nm design and not a sales pitch for Cadence. I could be wrong but I don’t think it mentions a single Cadence tool. You don’t need to be a Cadence customer to profit from reading it.

The biggest change, and the one that everyone has heard the most about, is double patterning. This means that for those layers that are double patterned (the fine-pitch ones) two masks are required. Half the polygons on the layer go on one mask and the other half on the other mask. The challenge is that no two patterns on the same mask can be too close, so during design the tools need to ensure that it is always possible to divide the polygons into two such sets (for example, you can never have three polygons that are all at minimum distance from each other, since there is no way to split them legally into two). Since this is algorithmically a graph-coloring problem, it is often referred to as coloring the polygons.
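The coloring check itself amounts to a standard bipartite-graph test: put an edge between any two polygons that are closer than the minimum same-mask spacing, then try to 2-color the resulting conflict graph. A minimal sketch (hypothetical conflict data, not a real DRC engine):

```python
from collections import deque

def assign_masks(conflicts, num_polygons):
    """Try to split polygons into two masks so that no two conflicting
    (too-close) polygons share a mask.
    conflicts: (i, j) pairs that would violate same-mask spacing.
    Returns a list of mask ids (0 or 1), or None if no legal split
    exists (an odd cycle, e.g. three mutually too-close polygons)."""
    adj = [[] for _ in range(num_polygons)]
    for i, j in conflicts:
        adj[i].append(j)
        adj[j].append(i)
    mask = [None] * num_polygons
    for start in range(num_polygons):
        if mask[start] is not None:
            continue
        mask[start] = 0
        queue = deque([start])
        while queue:             # BFS, alternating masks along conflict edges
            u = queue.popleft()
            for v in adj[u]:
                if mask[v] is None:
                    mask[v] = 1 - mask[u]
                    queue.append(v)
                elif mask[v] == mask[u]:
                    return None  # odd cycle: not manufacturable with two masks
    return mask

# Three polygons all at minimum distance from each other: impossible.
print(assign_masks([(0, 1), (1, 2), (0, 2)], 3))  # None
# A chain of conflicts is fine: just alternate masks.
print(assign_masks([(0, 1), (1, 2)], 3))          # [0, 1, 0]
```

Real decomposition tools work on polygon geometry and handle stitching and cut masks, but this 2-colorability test is the core reason a triangle of minimum-spaced polygons is illegal.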

Place and route obviously needs to be double pattern aware and not create routing structures that are not manufacturable. Less obvious is that even if standard cells are double pattern legal, when they are placed next to each other they may cause issues between polygons internal to the cells.

Some layers at 20nm will require three masks: two to lay down the double-patterned grid and then a third "cut mask" to split up some of the patterns in a way that wouldn't otherwise have been possible to manufacture.

Another issue with double patterning is that most patterning is not self-aligned, meaning that the variation between polygons on the two masks is greater than the variation between two polygons on the same mask (which are self-aligned by definition). This means that verification tools need to be aware of the patterning and, in some cases, designers need to be given tools to assign polygons to masks where it is important that they end up on the same mask.

Design rules at 20nm are incredibly complicated. Cadence reckons that of 5,000 design rules, only 30-40 are for double patterning. There are layout direction and orientation rules, and even voltage-dependent design rules. The early experience of people I've talked to is that the design rules are now beyond human comprehension and you need to have DRC running essentially continuously while doing layout.

The other big issue with 20nm is layout-dependent effects (LDEs). The performance of a transistor or a gate no longer depends just on its layout in isolation but also on what is near it. Almost every line on the layout, such as the edge of a well, has some non-local effect on the silicon, causing performance changes in nearby active areas. At 20nm the performance can vary by as much as 30% depending on the layout context.

A major cause of LDEs is mechanical stress. Traditionally this was addressed by guardbanding critical paths but at 20nm this will cause too much performance loss and instead all physical design and analysis tools will need to be LDE aware.

Of course in addition to these two big new issues (double patterning, LDEs) there are all the old issues that just get worse, whether design complexity, clock tree synthesis and so on.

Based on numbers from Handel Jones’s IBS, 20nm fabs will cost from $4-7B (depending on capacity), process R&D will be $2.1-3B on top of that, and mask costs will range from $5-8M per design. And design costs: $120-500M. You’d better want a lot of die when you get to production.

Download the Cadence white paper here.


A Brief History of ASIC, part I

by Paul McLellan on 08-21-2012 at 7:00 pm

In the early 1980s the ideas and infrastructure for what would eventually be called ASIC started to come together. Semiconductor technology had reached the point that a useful number of transistors could be put onto a chip. But unlike earlier, when a chip only held a few transistors and thus could be used to create basic generic building blocks useful for everyone, each customer wanted something different on their chip. This was something the traditional semiconductor companies were not equipped to provide. Their model was to imagine what the market needed, create it, manufacture it and then sell it on the open market to multiple customers.

The other problem was that semiconductor companies knew lots about semiconductors, obviously, but didn’t have system knowledge. The system companies, on the other hand, knew just what they wanted to build but didn’t have enough semiconductor knowledge to create their own chips and didn’t really have a means to manufacture those chips even if they did. What was required was a new way of doing chip design. The early part of the design, which was system knowledge heavy, would be done by the system companies. And the later part of the design, which was semiconductor knowledge heavy, would be done by the company that was going to manufacture it.

Two companies in particular, VLSI Technology and LSI Logic, pioneered this. They were startup companies with their own fabs (try getting that funded today) with a business model that they would, largely, build other people’s chips. In the early days, since there weren’t really any chips out there to be built, they had to find other revenue streams. VLSI Technology, for example, made almost all of its money building read-only-memories (ROMs) that went into the cartridges for the first generation of video game consoles.

Daisy, Mentor and Valid, who had electronic design systems mostly targeted at printed circuit boards, realized that they could use those systems for the front end of ASIC design too.

ASIC design typically worked like this. A system company, typically someone building an add-on board for the PC market since that was the big driver of electronics in that era, would come up with some idea for a chip. They would negotiate with several ASIC companies to decide which to select, although it was always a slightly odd negotiation since only a vague idea of the size of the design was available at that point. They would then partner with one ASIC company such as LSI Logic who would supply them with a library of basic building blocks called cells.

The system company would use a software tool called a schematic editor to create the design, picking the cells they wanted from the library and deciding how they should be connected up. The output from this process is called a netlist, essentially a list of cells and connections.
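Conceptually a netlist needs nothing more than instances of library cells and the nets attached to their pins. A toy representation (hypothetical cell and net names, far simpler than real netlist formats like EDIF):

```python
# A tiny netlist: instances of library cells, each recording which
# net (named wire) is attached to each of its pins.
netlist = {
    "instances": {
        "U1": {"cell": "NAND2", "pins": {"A": "in1", "B": "in2", "Y": "n1"}},
        "U2": {"cell": "INV",   "pins": {"A": "n1", "Y": "out"}},
    },
}

def fanout(netlist, net):
    """Return the (instance, pin) pairs connected to a net -- the kind
    of query every downstream tool (simulation, place & route) needs."""
    return [(inst, pin)
            for inst, data in netlist["instances"].items()
            for pin, n in data["pins"].items()
            if n == net]

print(fanout(netlist, "n1"))  # [('U1', 'Y'), ('U2', 'A')]
```

Everything downstream, from simulation to place & route, consumes some richer variant of exactly this structure.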

Just like writing software or writing a book, the first draft of the design would be full of errors. But with semiconductor technology it isn't possible to build the part and see what the errors are. Even back then it cost tens of thousands of dollars and a couple of months to manufacture the first chip, known as the prototype. Also, unlike with a book, it's not possible to simply proofread the schematic by inspection; too many errors would still slip through.

Instead, a program called a simulator was used. A flight simulator tells a pilot what would happen if he or she moves the controls a certain way and, unlike the real world, doesn't cost a fortune if the plane crashes. In the same way, a simulation of the design checked how it behaved given certain inputs, without requiring the expense of building the chip. Errors that were detected could be fixed and the simulation run again.
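For a digital design, the core of such a simulator is just evaluating each gate's output from its input values and propagating the results through the netlist in order. A toy sketch (a hypothetical two-gate design; real simulators of the era were event-driven and handled timing):

```python
# Evaluate a combinational netlist by propagating 0/1 values through
# gates listed in topological order -- the essence of logic simulation.
GATES = {
    "NAND2": lambda a, b: 1 - (a & b),
    "INV":   lambda a: 1 - a,
}

def simulate(gates, inputs):
    """gates: list of (cell_type, input_nets, output_net), in an order
    where every gate's inputs are computed before the gate itself.
    inputs: dict of primary-input net values (0 or 1)."""
    values = dict(inputs)
    for gtype, in_nets, out_net in gates:
        values[out_net] = GATES[gtype](*(values[n] for n in in_nets))
    return values

design = [("NAND2", ["in1", "in2"], "n1"),
          ("INV",   ["n1"],         "out")]
print(simulate(design, {"in1": 1, "in2": 1})["out"])  # 1 (NAND -> 0, INV -> 1)
print(simulate(design, {"in1": 0, "in2": 1})["out"])  # 0
```

Running the design against a set of input vectors and comparing the outputs to the intended behavior is exactly the check the system companies performed before committing to silicon.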

When the design was finally determined to be correct, the netlist was sent from the system company to the ASIC company for the next step. Using a program known as a place & route tool, the netlist would be converted to a full chip design. In addition to creating the actual layout for manufacture, this process also produced detailed timing: just how long every signal would take to change. This detailed timing would be sent back to the system company so that they could do a final check in the simulator to make sure that everything worked with the real timing, versus the estimated timing used earlier.

At that point the system company took a deep breath and gave the go ahead to spend the money to make the prototypes. This was (and is) called tapeout.

The ASIC company then had the masks manufactured that would be needed to run the design through their fab. In fact there were two main ASIC technologies, called gate-array and cell-based. In a gate-array design, only the interconnect masks were required since the transistors themselves were pre-fabricated on a gate-array base. In cell-based design, all the masks were required since every design was completely different. The attraction of the gate-array approach was that, in return for giving up some flexibility, the manufacturing was faster and cheaper. Faster, because only the interconnect layers needed to be manufactured after tapeout. Cheaper, because the gate-array bases themselves were mass-produced in higher volume than any individual design.

A couple of months later the prototypes would have been manufactured and samples shipped back to the system company. These parts typically would then have been incorporated into complete systems and those systems tested. For example, if the chip went into an add-in board for a PC then a few boards would have been manufactured, put into a PC and checked for correct operation.

At the end of that process, the system company took another deep breath and placed an order with the ASIC company for volume manufacturing, ordering thousands or possibly even millions, of chips. They would receive these a few months later, build them into their own products, and ship those products to market. A Brief History of ASIC Part II is HERE.



A Brief History of Mentor Graphics

by Beth Martin on 08-20-2012 at 11:00 pm

In 1981, Pac-Man was sweeping the nation, the first space shuttle launched, and a small group of engineers in Oregon started not only a new company (Mentor Graphics), but an entirely new industry, electronic design automation (EDA).


Mentor founders Tom Bruggere, Gerry Langeler, and Dave Moffenbeier left Tektronix with a great idea and solid VC funding. To choose a company name, the three gathered at Bruggere's home, the "world headquarters" of the fledgling enterprise. Moffenbeier's choices, 'Enormous Enterprises' and 'Follies Bruggere,' while witty, did not seem calculated to inspire confidence in either potential investors or customers. Langeler, however, had always wanted to own a company called 'Mentor.' Later, 'Graphics' was added when it was discovered that the lone word 'Mentor' was already trademarked by another company. Mentor, now based in Wilsonville, Oregon, became one of the first commercial EDA companies, along with Daisy Systems and Valid Logic Systems.

The Mentor Graphics team decided what kind of product it would create by surveying engineers across the country about their most significant needs, which led them to computer-aided engineering (CAE), and the idea of linking their software to a commercial workstation. Unlike the other EDA startups, who used proprietary computers to run their software, the Mentor founders chose Apollo workstations as the hardware platform for their first software products. Creating their software from scratch to meet the specific requirements of their customers, and not building their own hardware, proved to be key advantages over their competitors in the early years. One wrinkle—at the time they settled on the Apollo, it was still only a specification. However, the Mentor founders knew the Apollo founders, and trusted that they would produce a computer that combined the time-sharing capabilities of a mainframe with the processing power of a dedicated minicomputer.

The Apollo computers were delivered in the fall of 1981, and the Mentor engineers began developing their software. The goal was to demonstrate their first interactive simulation product, IDEA 1000, at DAC in Las Vegas the following summer. Rather than being lost in the crowd in a booth, they rented a hotel suite and invited participants to private demonstrations. That is, invitations were slipped under hotel room doors at Caesar's Palace, but because they didn't know which rooms DAC attendees were staying in, the invitations were passed out indiscriminately to vacationers and conference-goers alike. The demos were very well received (by conference-goers, anyway), and by the end of DAC, between one-third and half of the 1,200 attendees had visited the Mentor hotel suite (there is no record of whether any vacationers showed up). The IDEA 1000 was a hit.

By 1983, Mentor had its second product, MSPICE, for interactive analog simulation. It also began opening offices across the US, and in Europe and Asia. By 1984, Mentor reported its first profit and went public. Longtime Mentor employee Marti Brown said it was an exciting time to work for Mentor. The executives worked well together, complementing each other’s strengths, and Bruggere in particular was dedicated to creating a very worker-friendly environment, including building a day care center on the main campus. Throughout the 80s, Mentor grew aggressively (you can see all the companies Mentor acquired on the EDA Mergers and Acquisitions Wiki). As an aside, Tektronix, where the Mentor founders had previously worked, also entered the market with the acquisition of a company called CAE Systems. The technology was uncompetitive, though, and Tektronix eventually sold the assets of CAE Systems to Mentor Graphics in 1988.[SUP]3[/SUP]
Times got tough for Mentor from 1990 to 1993, as it faced new competition and a changing EDA landscape. While Mentor had always sold complete packages of software and workstations, competitors were beginning to provide standalone software capable of running on multiple hardware platforms. In addition, Mentor fell far behind schedule on the so-called 8.0 release, a re-write of the entire design automation software suite. This combination of factors led some of its earliest and most loyal customers to look elsewhere, and forced Mentor to declare its first loss as a public company. Learning from the experience, however, Mentor began to redesign its products and redefine itself as a software company, expanding its product offerings into a wide range of design and verification technology. As it made the transition, the company began to recapture market share and attract new customers.

In 1992, founder and president Gerry Langeler left Mentor for the VC world. In 1993, Dave Moffenbeier left to become a serial entrepreneur. That same year, Wally Rhines came on as president and CEO, replacing Tom Bruggere, who also retired as chairman in early 1994.[SUP]4[/SUP] Under Rhines’ leadership, Mentor has continued to grow and thrive, introducing new products and entering new markets (such as thermal analysis and wire harness design), cementing its position as the IC verification reference tool set for major foundries, and reporting consistent profits. In recent years, Mentor has been a target for acquisition (by Cadence, 2008) and activist investment (by famed corporate pirate Carl Icahn, 2010). Despite those events, Mentor continued to grow its revenue and profitability to record levels by introducing products for totally new markets. Throughout its 31 years, Mentor has been a solid anchor of the EDA industry that it helped to create.

Very Interesting References and Asides

  1. Daisy Systems merged with Cadnetix, which was acquired by Intergraph, which spun out the EDA business as VeriBest, which was bought by Mentor in 1999.
  2. Valid was acquired by Cadence in 1991.
  3. Thanks to industry gadfly Simon Favre for this information.
  4. Another interesting aside about Bruggere: his English manor-style home, which is for sale, was used last year as a filming location for the TV series Grimm, as the site of poor Mavis’ murder (Season 1, episode 20).

The Business Case for Algorithmic Memories
by Adam Kablanian on 08-20-2012 at 11:00 am

Economic considerations are a primary driver in determining which technology solutions get selected and how they are implemented in a company’s design environment. In developing Memoir’s Algorithmic Memory technology and our Renaissance product line, we have held fast to two basic premises: our technology and products have to work as promised, and we have to reduce the risk and total cost of development for our customers. The reality is that the entire semiconductor ecosystem needs to be approached in a new way. Gone are the days when ROI was a second- or even third-tier concern. Gone, too, are the days when multiple iterations of a product were not only tolerated but accepted as the norm.

One of the most expensive and risky parts of chip design is silicon validation. From the beginning, Memoir has focused on developing its technology using exhaustive formal verification that eliminates the need for further silicon validation by the customer. It may sound like this approach should be a given in today’s economically challenging product development environment, but implementing this philosophy across a product portfolio requires a deep understanding of the underpinnings of embedded memory technology. We have invested substantial time and energy in developing the exhaustive formal verification process used to test and certify our Algorithmic Memory before shipping it to customers. This is unique for an IP company, and it is the cornerstone of our risk reduction strategy, which also significantly reduces cost for the customer.

For the past 40 years, the semiconductor industry has blindly continued to use the 8T bit cell to build dual-port SRAM memories. Today, successfully incorporating 8T-cell dual-port memories into SoC designs is not as simple as it used to be. The current word in the industry is that the 8T bit cell is problematic in terms of design margins and minimum VDD, which is paramount for low-power designs. Yields are also a concern. So, rather than just coming up with a different way to implement the 8T bit cell in synchronous dual-port memories, we have chosen a different path. With Algorithmic Memory technology, customers can use single-port memory built from the six-transistor (6T) bit cell to create new dual-port and multi-port memories for synchronous chips, matching the performance of an 8T-based design methodology. Eliminating the 8T cell also simplifies testing, since only a single type of memory, using only the 6T bit cell, needs to be tested. This reduces overall design and test complexity, which translates into faster time-to-market, better yields, and cost savings.
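Memoir has not published the details of its algorithm, so purely as an illustration of the general idea: one classic way to make a single-port (1RW) array behave like a 1-read/1-write dual-port memory is double pumping, i.e. clocking the array at twice the system clock so that one write and one read are each serviced in their own internal phase. The class name and interface below are invented for this sketch, not Memoir's actual product.

```python
class DoublePumpedDualPort:
    """Illustrative sketch: emulate a 1-read/1-write dual-port memory
    on top of a single-port (1RW) array by running the array at twice
    the system clock. Phase 0 of each system cycle serves the write,
    phase 1 serves the read. Real algorithmic-memory IP uses more
    sophisticated (and proprietary) schemes."""

    def __init__(self, depth):
        # The underlying single-port array, modeled as a plain list.
        self.mem = [0] * depth

    def cycle(self, write=None, read_addr=None):
        """One system cycle: optional write as an (addr, data) pair,
        plus an optional read. Internally the single port is used
        twice, once per phase."""
        # Phase 0: the single port performs the write.
        if write is not None:
            addr, data = write
            self.mem[addr] = data
        # Phase 1: the single port performs the read. Because the
        # write has already committed, a same-cycle read of the same
        # address returns the new data (write-first behavior).
        if read_addr is not None:
            return self.mem[read_addr]
        return None
```

The cost of this particular trick is that the array must run at twice the system frequency, which is one reason more elaborate schemes (banking, replication, or algorithmic approaches like Memoir's) are used when the clock budget is tight.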

Algorithmic Memory brings an innovative design methodology that reduces overall product development risk, design time, and implementation costs. While it is difficult to translate these savings into specific dollar amounts, we have learned that by focusing on all levels of the embedded memory development ecosystem, we can cut in half the number of physical memory compilers our customers have to develop. That alone yields substantial cost savings, because fewer physical memory compilers have to be developed, maintained, and silicon validated again and again every time there is a technology change. The greatest savings, though, is that our exhaustive formal verification process eliminates the need for further silicon validation. For synchronous designs this is a major advance in design and product development methodology, and it represents an industry sea change in how SoC IP technology is developed and deployed.

In the past, there have been pockets of innovation in the embedded memory space. With Algorithmic Memory, however, there is for the first time a third-party IP offering that can have a significant, industry-level impact and help advance the semiconductor ecosystem as a whole.