
Tanner EDA Newsletter – Fall 2011
by Daniel Payne on 09-15-2011 at 10:47 am


From the President: Another Great Year

Thanks to innovative, cost-effective technology and excellence in customer support, we’ve just ended fiscal year 2011 (on May 31st) with solid growth. Revenue was up 8%, we added 139 new customers, and we’re continuing to reach out to technology partners for MEMS and for the analog and mixed-signal suite. As we continue to create powerful new tools and move as fast as standards allow towards more open systems, the dedication of our FAEs and the quality of both product development and ongoing maintenance and support have been major factors in our success. (For more information on our fiscal year-end, click here.)
Largely as a result of the release of HiPer Silicon v15 very early in fiscal year 2010, our existing loyal customers accounted for a 13% greater share of new purchases in 2011 than in 2010. Tanner EDA’s L-Edit MEMS and related product offerings also saw strong growth over the year, continuing to gain traction as this design space matures.
Product line growth was also significant. At DAC 2011 the company previewed version 16 of HiPer Silicon (to be released in Q4 2011), which adds OpenAccess database compatibility for layout, enabling designers to share files with colleagues and business partners using Si2 database standards. Larger design teams also appreciated the redesigned multi-user functionality. For more detail, see Tanner EDA at DAC in the sidebar. We also extended and deepened our PDK portfolio, announcing availability of a jointly-developed Tanner EDA/Dongbu HiTek foundry-certified 0.18-micron analog CMOS PDK at the end of May.
2011 also saw Tanner EDA focusing on extending and deepening technology partner relationships in areas related to MEMS (SoftMEMS) and analog SPICE simulation (Berkeley Design Automation). See more.
One of the things that continues to make Tanner EDA successful is our close working relationship with all our users. Many thanks to customers old and new for continuing to drive analog design innovation with us.
– Greg Lebsack, President
Click here for a September 2011 interview of Greg Lebsack on SemiWiki.

What’s New in Tanner EDA Tools

V15 of our full-flow design suite got even better with releases v15.13 and v15.14.
In v15.13, we included major improvements to the Verilog export in S-Edit as well as modified temperature sweeps to give better user control. T-Spice saw an updated SPICE command wizard and a new duty cycle function that computes the duty cycle of a waveform. New legend controls were added to W-Edit along with a new linear regression measurement and expanded image format support. L-Edit saw improvements to the SDL placement algorithm and increased integration with our HiPer DevGen tool. HiPer Verify users saw improved performance and memory usage in Extract.
In v15.14, we made updates to Verilog views and improved conditions and other options for imports within S-Edit. T-Spice and W-Edit received several corrections and modifications that improve performance and user functionality. L-Edit received a host of functional corrections and several key improvements that dramatically improve the quality of DXF imports. HiPer Verify received several functional improvements and performance enhancements.
For release notes, click here
V16, which will be released in Q4, was previewed at DAC 2011. Built on our 24-year legacy of providing industry-leading price-performance and interoperability, v16 will include full OpenAccess database compatibility for layout. By enabling file sharing based on OpenAccess database standards, large design teams will be able to collaborate more easily. For preview information, contact your sales person directly or email us at sales@tannereda.com.

Analog Insights: Tanner EDA Partners with Berkeley Design Automation for Faster Spice Simulation

Tanner EDA and Berkeley Design Automation (BDA) are partnering to expand Tanner EDA’s powerful integrated circuit design environment to include a FastSPICE product. BDA’s Analog FastSPICE (AFS) is being integrated with Tanner EDA’s S-Edit schematic capture and W-Edit waveform analyzer, giving analog designers unprecedented speed in a powerful, easy-to-use design system. Users will be able to drive the AFS simulator directly from S-Edit to get all of the speed and accuracy necessary for nanometer design. The faster simulations will allow users to perform a much more thorough verification of their designs.
BDA Analog FastSPICE
AFS accelerates simulation through advanced numerical analysis and computational techniques without sacrificing accuracy. AFS is different from most FastSPICE products because it generates true operating points and solves the full circuit matrix and original device equations at each timestep without ever taking shortcuts, thus ensuring the highest accuracy. AFS speed improvements are most dramatic when a design has digital circuitry or when simulating an entire design with parasitics. Faster simulations allow for more corner analyses or Monte Carlo runs, resulting in a much more thorough verification of a design. More comprehensive verification means a more robust design and less risk.
Design Flow
Tanner EDA users will have two choices for the simulation engine: T-Spice and AFS. AFS is well suited to entire designs with digital circuitry and/or parasitics, while T-Spice is well suited to small blocks and circuits with high analog content. Users can create schematics of their design in S-Edit through its powerful, easy-to-use interface and then choose to simulate with either T-Spice or AFS. Users can then interactively view, measure, and analyze waveforms through W-Edit’s intuitive and highly configurable multiple-window, multiple-chart interface. W-Edit directly reads both T-Spice and AFS results. After laying out a design in L-Edit, users can perform post-layout extraction with or without routing parasitics and then simulate the extracted netlist with either simulation engine. Tanner EDA shortens design cycles through tight integration of schematic, simulation, and waveform analysis. Users can easily select which simulator they want to use in the Setup Simulation dialog.
Example
The simulation of an 8-bit successive approximation ADC is shown as an example. The ADC has an 8-bit R2R DAC, a comparator, and various digital circuitry to control the DAC and produce the digital output. The design consists of 1,160 MOSFETs, 19 resistors, 1 capacitor, and 731 nodes.
The input of the ADC, the temperature, and five corner models were swept for a comprehensive verification. The input of the ADC was swept over the range 0.0 V to 2.2 V in steps of ½ Least Significant Bit (LSB), or 4.3 mV. The temperature was swept from -55°C to 125°C in increments of 10°C. The five major corner models were simulated (TT, FF, SS, FS, SF). Differential Nonlinearity (DNL) and Integral Nonlinearity (INL) were measured at each temperature, input, and corner.
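As a rough illustration (this is a sketch only, not Tanner EDA's actual test scripts), the Python snippet below enumerates that sweep using the figures quoted above and shows the standard way DNL and INL are derived from measured code-transition levels:

```python
# Sketch only -- not Tanner EDA's test scripts. Enumerate the sweep described
# above and show a typical DNL/INL calculation from code-transition levels.
import numpy as np

VDD, BITS = 2.2, 8
LSB = VDD / 2**BITS                              # ~8.6 mV per LSB
inputs = np.linspace(0.0, VDD, 2**BITS * 2 + 1)  # 1/2-LSB steps (~4.3 mV)
temps = list(range(-55, 126, 10))                # -55 C .. 125 C, 19 points
corners = ["TT", "FF", "SS", "FS", "SF"]

print(len(inputs) * len(temps) * len(corners))   # ~48,700 operating points

def dnl_inl(transitions, lsb=LSB):
    """DNL/INL (in LSB) from measured code-transition input voltages."""
    widths = np.diff(transitions)   # actual analog width of each output code
    dnl = widths / lsb - 1.0        # deviation of each code width from 1 LSB
    inl = np.cumsum(dnl)            # running sum gives integral nonlinearity
    return dnl, inl
```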
Simulator    DNL         INL
AFS          0.04 LSB    ±1 LSB
T-Spice      0.04 LSB    ±1 LSB

For this ADC, AFS and T-Spice gave the same results, although AFS produced them more quickly because the ADC has a large amount of digital circuitry. The results were easily displayed and analyzed in W-Edit regardless of whether they came from AFS or T-Spice. Below are plots of the DNL for all temperatures and corner models.

FAE Focus: Ragip Ispir, Tanner EDA Japan

Ragip Ispir is Technical Manager at Tanner EDA in Tokyo. He grew up in Turkey and received his Bachelor’s Degree in Electrical and Electronics Engineering at Middle East Technical University (METU) in Ankara, Turkey. He later went on to complete his Ph.D., with specialization in RF and Microwave circuits, at Okayama University, Japan. Then he worked for Murata Manufacturing Co., Ltd. at Yokohama Technical Center as a design engineer for two years. Ragip joined Tanner EDA in 2006 and is currently responsible for technical support of both pre-sales and post-sales as well as PDK development.
Ragip enjoys hiking and reading books.

Tips & Tricks

From Ragip Ispir, Technical Manager at Tanner EDA in Tokyo
Tip #1: Making use of T-Cells from sample files
Many users are familiar with using T-Cells to generate parametric layouts. Like other cells, T-Cells can easily be copied, or even referenced, from another file. In the sample folder, which can be set up via “Help > Setup Examples”, there are several examples of useful T-Cells. Below are some of them, with a short explanation of the auto-generated layouts.
T-Cell Name and Auto-Generated Cell:

“Ellipse Generator”: an ellipse for which you can specify the number of sides (vertices).
“Rounded Rectangle”: a rectangle whose corners can be rounded.
“Spiral Generator”: a spiral for which you can specify the number, spacing, and width of rings.
“Concentric Tori”: a set of tori for which you can specify start and stop angles and the incremental spacing between each torus.
“Layout Text Generator”: text rendered as layout geometry.
“NFET Generator”: an N-type MOSFET. Rules for the MOSFET are read from the Standard DRC rules setup. Before instancing this T-Cell, make sure that you have the necessary layers and required rules defined in the DRC setup. In the T-Cell code, you can customize layer names or initialize parameters without referring to the DRC rules setup.

How to instance a T-Cell from another file
To instance a T-Cell distributed in the sample folder, simply run the “Cell > Instance” command. In the “Select Cell to Instance” dialog window, browse to “Installed Sample Folder\Features By Tool\L-Edit\T-Cells”, select the tdb file, and then select the T-Cell from the list. In “Reference type”, select “Copy cell to current file”.
Note that if you select “External reference”, a version of the cell (XRefCell) will still be copied to the current file, but it will be locked and linked to the original file, so that if the original cell is changed you will be prompted to update it when you open your file. You can examine XRef cells via “Cell > Examine XRefCells” and perform operations such as Redirect and Unlink.

If the technology names are different, a Conflict Resolution dialog is displayed; you need to put a check mark in “Ignore different technologies”. Also, if some layers are missing or a cell with the same name already exists in your file, you need to take the appropriate action as guided by the dialog.
When the T-Cell is copied to your design, you will be asked to enter the parameters from which the cell is auto-generated.
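To give a feel for what such a parametric cell computes when you enter those parameters, here is a small Python sketch that generates the vertex list for a rounded rectangle. It is an illustration only, not actual L-Edit T-Cell code (T-Cells are written against L-Edit's own macro/UPI interface), and the parameter names are made up:

```python
# Toy parametric-cell sketch: vertices of a rounded rectangle from parameters.
import math

def rounded_rectangle(width, height, corner_radius, pts_per_corner=8):
    """Return polygon vertices (x, y) for a rectangle with rounded corners."""
    w, h, r = width / 2.0, height / 2.0, corner_radius
    # Centers of the four corner arcs, ordered counter-clockwise.
    centers = [(w - r, h - r), (-(w - r), h - r),
               (-(w - r), -(h - r)), (w - r, -(h - r))]
    start_angles = [0.0, 90.0, 180.0, 270.0]
    verts = []
    for (cx, cy), a0 in zip(centers, start_angles):
        for i in range(pts_per_corner + 1):          # sample each 90-degree arc
            a = math.radians(a0 + 90.0 * i / pts_per_corner)
            verts.append((cx + r * math.cos(a), cy + r * math.sin(a)))
    return verts

print(len(rounded_rectangle(10.0, 6.0, 1.0)))   # 36 vertices
```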

Tip #2: Controlling the number of vertices when outputting GDSII file
In L-Edit, when you output to a GDSII file, curved objects like circles, arcs, tori, and curved polygons are converted to polygons based on the manufacturing grid specified in the design. The manufacturing grid corresponds to the resolution at which the manufacturer can produce layout objects. If you have big curved objects in your design, such objects may exceed the maximum number of vertices the manufacturer can process. When exporting to GDSII, you can put a check mark in “Fracture polygons with more than 199 vertices”, where 199 is the default maximum and can be changed. On export, the curved objects are first converted to polygons and then fractured if the number of vertices exceeds that limit.
You can also control the number of vertices of selected objects manually with the commands available in the “Draw > Convert” menu.

Suppose that you have a circle with radius 500u, and the manufacturing grid in your design is set to 0.05u (“Setup > Design | Grid”). When you convert this circle to a polygon using the “Draw > Convert > To Polygon…” command, the resultant polygon will have 592 vertices. You can fracture it with the “Draw > Convert > Fracture Polygons…” command. If you set the maximum number of vertices to 199, you end up with 4 polygons each having 150 vertices.
In some cases, depending on the process, manufacturers charge more if the number of vertices exceeds a certain number. To decrease the number of vertices, you can increase the “Manufacturing grid” temporarily in “Setup > Design | Grid” before converting the selected curved objects to a polygon. If you change “Manufacturing Grid” from 0.05u to 0.5u, then convert the same circle to a polygon, you will find that it has only 179 vertices.
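The relationship between grid size and vertex count can be estimated with the standard chord-error argument. The sketch below is an approximation only; L-Edit's own conversion (which presumably also snaps vertices to the grid) produces the 592 and 179 counts quoted above, so the absolute numbers differ, but it shows why a 10x coarser manufacturing grid cuts the vertex count by roughly a factor of the square root of 10:

```python
# Rough estimate (not L-Edit's exact algorithm) of how many vertices a circle
# needs so that the chord-to-arc error stays within the manufacturing grid.
import math

def approx_vertices(radius, grid):
    """Smallest n-gon whose sagitta r*(1 - cos(pi/n)) is within the grid."""
    n = 4
    while radius * (1.0 - math.cos(math.pi / n)) > grid:
        n += 1
    return n

print(approx_vertices(500.0, 0.05))   # fine grid: a few hundred vertices
print(approx_vertices(500.0, 0.5))    # 10x coarser grid: roughly 1/sqrt(10) as many
```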


In This Issue

From the President: Another Great Year
What’s New in Tanner EDA Tools
Analog Insights: Tanner EDA Partners with Berkeley Design Automation for Faster Spice Simulation
FAE Focus: Ragip Ispir, Tanner EDA Japan
Tips & Tricks
Happenings @ Tanner EDA
Partner Spotlight: EDA Solutions

Happenings @ Tanner EDA

Tanner EDA Press Releases Since the Last Issue:
Tanner EDA Announces Solid Growth in Revenue, Technology Partnerships and Customer Base for Fiscal 2011
Tanner EDA Lands Experienced EDA Account Manager for Growing Customer Base
Berkeley Design Automation and Tanner EDA Accelerate Circuit Verification for A/MS Designers
Tanner EDA Unveils HiPer Silicon v16 with Open Access at DAC 2011
Read More on Our Press Page


Tanner EDA in the News
SemiWiki:
Interview with Tanner EDA president Greg Lebsack

ChipEstimate / Japan:
“Full-flow Design Suite for Analog and Mixed-signal: Tanner EDA Provides In-depth On-demand Demo Online”

EDACafe:
Video interview of John Zuk at DAC

EETimes / Europe:
“Foundry-certified 0.18-micron analog CMOS PDK”

Electronic Products:
“High-productivity design tools for custom analog ICs”

SemiWiki:
“Tanner EDA at DAC” and “An Affordable 3D Field Solver at DAC”

Tanner EDA at DAC
DAC 2011 was held in San Diego from June 5-10. Tanner EDA and partners participated in exhibitor forums, a technical panel, and booth presentations and demonstrations. Tanner EDA also co-sponsored the 5th Annual IPL Luncheon at DAC: “Interoperable PDK Standards are Here to Stay: New Era of Analog/Custom Innovation”.

Booth presentations by Tanner EDA & Partners:
A pre-release version of v16, including Open Access database compatibility for layout, which enables designers to share files with colleagues and business partners using Si2 database standards. Larger design teams also appreciated the redesigned multi-user functionality.
A joint solution developed by Berkeley Design Automation, Inc., the nanometer circuit verification leader, and Tanner EDA to accelerate circuit verification for A/MS designers. Both Tanner EDA and Berkeley Design Automation demonstrated the integration of Tanner EDA’s HiPer Silicon™ design suite with the Berkeley Design Automation Analog FastSPICE™ Platform.
TowerJazz
Speaker: Ofer Tamir, Director CAD, Design Enablement & Support

X-Fab
Speaker: Joerg Doblaski, Senior Engineer

Tanner EDA on HiPer Verify High Performance Physical Verification
Speaker: Jeff Miller, Director of Product Management

Tanner EDA on HiPer DevGen Layout Acceleration
Speaker: Nicolas Williams, Director of Product Management

Exhibitor Forums:
Analog IC Design: Why a Cohesive Tool Flow Drives Productivity
Mass Sivilotti, Chief Scientist, and John Zuk, VP Marketing & Strategy, outlined how productivity has become a mandate as analog IC designers strive to keep pace with rapidly increasing market demands for quicker time-to-market. The productivity advantages of using a cohesive analog design tool suite comprising schematic capture, simulation, layout, and physical verification were discussed. Preliminary responses from a recent survey of analog designers were presented (e.g., 28% of 42 preliminary respondents prefer to use development tools from a single vendor for compatibility reasons; expense and lack of a single point of control were cited as the main challenges when using tools from multiple vendors).

To contribute to this survey of the effect of a cohesive workflow on A/MS design productivity, please visit the survey here. Full results will be published in Q4.
Analog IC Design at the Edge: A New Twist for Nanoscale Productivity
Dr Lanny Lewyn, President, Lewyn Consulting and Nicolas Williams, Director of Product Management for Tanner EDA, explained why nanoscale analog IC design productivity is a major concern as chip device counts approach 1 billion at 32 nm. A multitude of physical device pattern separation dimensions must now be entered into the pre-layout simulation models to accurately predict post-layout circuit performance. The presented approach — based on the seminal work of Mead and Conway — offers a novel method that enables rapid circuit simulation in a multitude of nanoscale technology nodes and platform options. As a result, pre-layout simulation accuracy is improved, which has a direct impact on increasing analog IC manufacturing yields while simultaneously increasing design productivity.

For a copy of this presentation, please contact sales@tannereda.com.
Pavilion Panel:
Why the Delay in Analog PDK?
Mass Sivilotti from Tanner EDA, Tom Quan from Taiwan Semiconductor Manufacturing Co., Ltd. (TSMC) and Ofer Tamir from TowerJazz, with Steve Klass of 7Stalks Consulting acting as moderator, participated in a lively discussion on why it takes so long for foundries to release analog/ mixed-signal process design kits (PDKs). With the amount of A/MS content in designs growing, and the pressure to move to smaller process nodes increasing, the audience appreciated a chance to talk to the people who develop PDKs and reference flows.

Upcoming Industry Events
TowerJazz Global Symposium – silver sponsorship and presentation by joint customer Novocell Semiconductor on Nov 3 in Newport Beach, CA.
EDS Fair from Nov 16-18 in Yokohama, Japan – exhibiting
Learn more about Tanner EDA Events


What’s New in Tanner EDA Tools
View latest release notes here

Webinars on Demand
We are now offering Webinars On-Demand from our webinar library. Click to visit our Videos & Demos homepage to download webinars on:

    • Analog layout and Tanner EDA’s L-Edit and Specialty Tools
    • Analog acceleration and Tanner EDA’s HiPer DevGen tool
    • High performance physical verification and Tanner EDA’s HiPer Verify

See a schedule of upcoming webinars

Training on Tanner EDA Tools
We offer training for analog and mixed-signal IC & MEMS design, taught by Tanner EDA experts with extensive design experience.

    • Training at Tanner EDA is available each month in our own classroom, with workstations loaded with our entire tool suite. (See information below.)
    • Customized training can be planned and modified to meet the unique needs of an individual, design team or company.
    • All our training can be scheduled at your site, via the web or at our corporate offices in Monrovia, California.

Sign up for Training

Partner Spotlight: EDA Solutions

In November 2001, Tanner EDA appointed EDA Solutions as the exclusive European representative for Tanner EDA tools. Ten years on, Paul Double, founder and CEO, joined us for a brief Q&A for this quarter’s issue of Tanner EDA Today:
Q: Please tell us a little about EDA Solutions.
A: We have been exclusive representatives for Tanner for 10 years now, and in that time we have brought over 400% growth in sales in the region and helped develop the tools, the brand and the reputation for Tanner as the best supported, most cost-effective design solution on the market.
Q: What makes Tanner EDA solutions a good fit for EDA Solutions to represent in Europe?
A: The design tools from Tanner EDA offer an effective and affordable alternative to those from the traditional big vendors. It therefore suits a small, highly-focused company like ours to represent such a tool flow. Like Tanner EDA, we have a commitment to unrivalled customer service, which has helped us build a very mutually beneficial relationship with our customers.
Q: Which is the biggest market segment served by EDA Solutions? Is it commercial or educational? What design types?
A: We have a long-standing relationship with Europractice to serve our academic customers. For Tanner EDA in Europe, though, the majority of EDA Solutions’ focus has been on the commercial business. Our main application areas include sensing, imaging/display, power/HV and MEMS. We are helped in our role by the strength of the European analog/mixed-signal foundries and our PDK support for their processes. Our traditional customer base has been startups, but we have seen a big surge in the number of more established design/product companies switching to Tanner EDA as more of the market becomes aware of the company’s “less is more” approach: providing just the right level of features and functionality within a complete analog design flow, from schematic capture, circuit simulation, and waveform probing to physical layout and verification.

Past Newsletter Issues

Summer 2011
Spring 2011

Read more in Newsletter archives

© Copyright 2010, Tanner EDA. All Rights Reserved


Simulating in the Cloud
by Paul McLellan on 09-13-2011 at 1:43 pm

Yesterday I met with David Hsu who is the marketing guy for Synopsys’s cloud computing solution that they announced at their user-group meeting earlier this year. It was fun to catch up; David used to work for me back in VLSI days although he was an engineer writing place and route software back then.

David admits that this is an experiment. Nobody really knows how EDA in the cloud is going to work (or even if it will work) but for sure nobody is going to find out sitting meditating in their office. So Synopsys took what they considered a bounded problem:

  • High-utilization, otherwise it will be too difficult to get even initial interest
  • Not incredibly highly-priced, so that the cloud solution doesn’t immediately undercut the existing business
  • Consumes lots of cycles when used, so the scalability of the cloud is a genuine attraction

VCS simulation seemed to meet all those aims. Verification is, depending on who you ask and how you count, 60-70% of design. So certainly high utilization. VCS is not a $250K product. Of course nobody, not even Synopsys, really knows what customers are paying for it since they do large bundled deals. But not incredibly highly priced for sure. And it consumes lots of cycles when used. Lots. Especially coming up to tapeout when it may be necessary to run the entire RTL verification suite in as short a time as possible. Under those circumstances, a thousand machines for a day is a big difference from 10 machines for 3 months.

The cloud solution is sold by the hour, all inclusive. The precise price depends on how much you buy and all the usual negotiations, but it is of the order of $2.50 to $5/hour. The back end is Amazon’s cloud solution and the sweet spot is batch-mode regression testing, where the thing everyone is most interested in optimizing is wall-clock time.
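As a back-of-the-envelope illustration (the only inputs are the quoted $2.50 to $5/hour range and the machine counts mentioned above), the two ways of buying roughly the same compute cost about the same; the difference is almost entirely in wall-clock time:

```python
# Back-of-the-envelope comparison, assuming the quoted hourly price range.
rate_low, rate_high = 2.50, 5.00                 # $ per machine-hour, all in

burst  = {"machines": 1000, "hours": 24}         # 1,000 machines for a day
steady = {"machines": 10,   "hours": 90 * 24}    # 10 machines for ~3 months

for name, cfg in [("burst", burst), ("steady", steady)]:
    machine_hours = cfg["machines"] * cfg["hours"]
    print(f"{name}: {machine_hours:,} machine-hours, "
          f"wall clock {cfg['hours']} h, "
          f"cost ${machine_hours * rate_low:,.0f} to ${machine_hours * rate_high:,.0f}")
# Both land around 22,000-24,000 machine-hours, but the wall-clock time differs
# by a factor of ~90 -- the knob an engineering manager never had before.
```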

The biggest challenge has been security issues. If a customer wants to buy a VCS simulator, an engineering manager can cut a PO. If a customer wants to ship the crown jewels of their company out of the building, then legal, senior management and even, in some companies, the Chief Security Officer need to get involved. Some of this is pure emotion, not something driven by numbers. David told me of one meeting where a lawyer asked an engineering manager “how can you risk the company’s survival like this?” I suggested that an appropriate response from the engineering manager would have been “how can you risk the company’s survival by delaying all our tapeouts?” What is rational to an engineer is emotional to a lawyer, and vice versa.

But the underlying driver for the business is strong. Large companies are doubling the size of their server farms every two years (Moore’s Law for server farms, I guess). Increasingly, companies want a base load, maybe 75-80% of peak, handled in-house, and then to offload the remaining 20-25% to something such as the Synopsys cloud solution.

I asked David if the business is growing faster or slower than expected. He admitted it was slower but also pointed out, reasonably, that nobody has a clue how fast it should grow–it is a new market. But it does seem to be starting to catch on. The biggest attraction is that knob to turn that an engineering manager has never had before: you can reduce the time to run a regression by spending more money. But not much more money. And you don’t end up with licenses and hardware that will sit unused until the next peak load comes along.

To me one of the biggest challenges for EDA in the cloud is that nobody has a single vendor flow. But if you have to keep moving data out of one cloud into another, especially with the size of design databases today, that is probably not tractable. David admitted this was a problem: they have customers who want to use Denali (now Cadence) with the Synopsys cloud solution, just like they do in their own farms. But in the cloud that requires more than just handing out two purchase orders, it requires Synopsys and Cadence to co-operate. How’s that working these days? Conceivably a 3rd party edacloud.com (I just made up the name) could integrate tools from multiple suppliers and deliver the true value of making everything work together, but it is probably not a realistic financial model to buy all the tools required up-front as a “normal” multi-year license. While it might be easier for a 3rd party to get Denali from Cadence and VCS from Synopsys and put them up on Amazon, it is not clear. Ultimately it is customers who have the power to drive this. But as with the open PDK situation, even that might not be enough.

Moving an entire design database (once it gets to physical design) is not simply a matter of copying it across the net. You can put it on a disk drive and FedEx it to Amazon, and they will accept this. In fact that is apparently how Netflix got its database of streamable video onto Amazon, although they shipped whole file servers. But in the iterative loops of any design process this seems unwieldy, to say the least.

Anyway, VCS in the cloud is clearly promising, but it is also too soon to tell whether it is a harbinger of things to come or a backroad.

For further details, visit the Synopsys cloud page where you can find the white papers too.




Hardware Configuration Management approach awarded a Patent
by Daniel Payne on 09-13-2011 at 11:21 am

Hardware designers use complex EDA tool flows that have collections of underlying binary and text files. Keeping track of the versions of your IC design can be a real issue when your projects use teams of engineers. ClioSoft has been offering HCM (Hardware Configuration Management) tools that work in the most popular flows: Cadence, Mentor, Synopsys and SpringSoft.

Today ClioSoft announced that patent number 7,975,247 has been issued by the US Patent & Trademark Office. The patent is titled “Method and system for organizing data generated by electronic design automation tools”. The product that uses this technology is called the Universal Data Management Adaptor (UDMA).

Abstract

A method and system for organizing a plurality of files generated by an Electronic Design and Automation (EDA) tool into composite objects is disclosed. The system provides a plurality of rules, which may be configured for various EDA tools. These rules may be configured for any EDA tool by specifying various parameters such as filename patterns, file formats, directory name patterns, and the like. Using these rules which are configured for an EDA tool, the files that form a part of the design objects are identified and packaged in the form of composite objects.
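A toy sketch of the idea the abstract describes follows; the rules and file extensions are hypothetical and this is not ClioSoft's implementation, just an illustration of how per-tool filename and directory patterns can group loose files into composite objects that are then versioned as a unit:

```python
# Toy illustration of rule-based packaging of EDA files into composite objects.
import fnmatch
from collections import defaultdict

# Hypothetical rules for a hypothetical schematic tool: path pattern -> role.
RULES = {
    "*/sch/*.cdb": "schematic",
    "*/sym/*.cdb": "symbol",
    "*/wir/*.dat": "wiring",
}

def package(files):
    """Group loose files into (cell, role) composite objects via the rules."""
    objects = defaultdict(list)
    for path in files:
        for pattern, role in RULES.items():
            if fnmatch.fnmatch(path, pattern):
                cell = path.split("/")[-3]   # cell name = directory above sch/sym/wir
                objects[(cell, role)].append(path)
    return dict(objects)

print(package(["adc/sch/adc.cdb", "adc/sym/adc.cdb", "adc/wir/net.dat"]))
```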

Overview


Patent Front Cover

Figure 4a shows a PCB file structure with folders for schematics (sch), symbols (sym) and wiring (wir) along with multiple versions. With UDMA this complex tree can be represented by figure 4b where version control shows that schematic 2 and symbol 1 are the latest versions:

Figure 5 shows the basic flow of how files from an EDA tool are configured by the UDMA and readied for HCM:

Inventors
Srinath Anantharaman (Pleasanton, CA), Sriram Rajamanohar (Fremont, CA), Anagha Pandharpurkar (Fremont, CA), Karim Khalfan (San Jose, CA)

Summary
ClioSoft has built an IC-centric HCM system from the ground up that is unique enough to be patented. Your favorite IC tools have already been integrated by ClioSoft, or you can use the UDMA tool and add any new EDA tool for HCM.

Also Read

Transistor Level IC Design?

How Tektronix uses Hardware Configuration Management tools in an IC flow

Richard Goering does Q&A with ClioSoft CEO


Another Up Year in a Down Economy for Tanner EDA
by Daniel Payne on 09-13-2011 at 11:00 am

Almost every week I read about a slowing world economy, yet in EDA we have some bright spots to talk about, like Tanner EDA finishing its 24th year with an 8% increase in revenue. More details are in the press release from today.

I spoke with Greg Lebsack, President of Tanner EDA, on Monday to ask how they are growing. Greg has been with the company for 2 1/2 years now and came from a software business background. During the past 12 months they’ve been able to serve a broad list of design customers, across all regions, with no single account dominating the growth. Our previous meeting was at DAC three months ago, where I got an update on their tools and process design kits.

Annual Highlights
Highlights for the year ending in May 2011 are:

  • 139 new customers
  • HiPer Silicon suite of analog IC design tools increasingly being used for: sensors, imagers, medical, power management and analog IP
  • Version 16 demonstrated at DAC (read my blog from DAC for more details)
  • New analog PDK added for Dongbu HiTek at 180nm, and TowerJazz at 180nm for power management
  • Integration between HiPer Silicon and Berkeley DA tools (Analog Fast SPICE)

Why the Growth?
I see several factors causing the growth in EDA for Tanner: Standardized IC Database, SPICE Integration, Analog PDKs and a market-driven approach.

Standardized IC Database
While many of their users run exclusively on Tanner EDA’s analog design suite (HiPer Silicon), their tools can co-exist in a Cadence flow because version 16 (previewed at DAC) uses the OpenAccess database. This is an important point because you want to save time by using a common database instead of importing and exporting, which may lose valuable design intent. Gone are the days of proprietary IC design databases that locked EDA users into a single vendor; instead, the trend is towards standards-based design data, where multiple EDA tools can be combined into a flow that works.

SPICE Integration
Analog Fast SPICE is a term coined by Berkeley Design Automation years ago as they created a new category of SPICE circuit simulators that fit between SPICE and Fast SPICE tools. By working together with Tanner EDA we get a flow that uses the HiPer Silicon tools shown above with a fast and accurate SPICE simulator called AFS Circuit Simulator (see the SPICE wiki page for comparisons). Common customers often drive formal integration plans like this one. I see Tanner EDA users opting for the AFS Circuit Simulator on post-layout simulations where they can experience the benefits of higher capacity.

Analog PDKs
Unlike the digital PDKs grabbing headlines at 28nm, the analog world has PDKs that are economical at 180nm, as shown in the following technology roadmap for Dongbu HiTek, an analog foundry located in South Korea:

Another standardization trend adopted by Tanner EDA is the Interoperable PDK movement, known as iPDK. Instead of using proprietary models in their PDK (like Cadence with Skill) this group has standardized in order to reduce development costs. In January 2011 I blogged about how Tanner EDA and TowerJazz are using an iPDK at the 180nm node.

Market Driven Approach
I’ve worked at companies driven by engineering, sales and marketing. I’d say that Tanner EDA is now more market-driven, meaning that they are focused on serving a handful of well-defined markets like analog, AMS and MEMS. For the early years I saw Tanner EDA as being primarily engineering driven, which is a good place to start out.

Summary
Only a handful of companies survive for 24 years in the EDA industry, and Tanner happens to be in that distinguished group. Because they are focused and executing well, we see them in growth mode even in a down economy.


When analog/RF/mixed-signal IC design meets nanometer CMOS geometries!
by Daniel Nenni on 09-13-2011 at 9:22 am

In working with TSMC and GlobalFoundries on AMS design reference flows I have experienced firsthand the increasing verification challenges of nanometer analog, RF, and mixed-signal circuits. Tools in this area have to be both silicon accurate and blindingly fast! Berkeley Design Automation is one of the key vendors in this market, and this blog comes from discussions with BDA CEO Ravi Subramanian. I first met Ravi at the EDAC CEO Forecast panel I moderated in January; he is probably the only EDA CEO who spends more time in Taiwan than I do!

When analog/RF/mixed-signal IC design meets nanometer CMOS geometries, the world changes. Analog/RF circuit complexity increases as more transistors are used to realize circuit functions; radically new analog circuit techniques that operate at low voltages are born, creating new analysis headaches; digital techniques are mixed into analog processing chains, creating complex requirements for verifying the performance of such digitally-controlled/calibrated/assisted analog/RF circuits; more and more circuits need to operate where devices are in the nonlinear operating region, creating analysis headaches; layout effects determine whether the full-potential of a new circuit design can be achieved; and second and third order physical effects now become first-order effects in the performance of these circuits.

Circuit simulation remains the verification tool of choice, but with little innovation from traditional analog/RF tool suppliers, designers are forced to fit their design to the limitations of tools – breaking down blocks into sizes that lend themselves to easy convergence in transient or periodic analysis, using linear approximations to estimate the performance of nonlinear circuits, ignoring the impact of device noise because of the impracticality of including stochastic effects in circuit analysis, characterizing circuit performance for variation without leveraging distribution theory, and cutting corners in post-layout analysis because of the long cycle times in including layout dependent effects in circuit performance analysis.

In the face of all of this, design flows are being retooled to leverage the best in class technologies that have emerged to solve these new problems with dramatic impact on productivity and silicon success. Retooling does not mean throwing out the old and bringing in the new – rather it is an evolutionary approach to tune the design flow by employing the best technology in each stage of the flow that will remove limitations or enable new analyses.

On September 22nd at TechMart in Santa Clara, key EDA vendors will showcase advanced nanometer circuit verification technologies and techniques. Hosted by Berkeley Design Automation, the forum will include technologists from selected EDA, industry and academic partners. You’ll hear real circuit case studies where these solutions have been used to verify challenging nanometer circuits, including data converters; clock generation and recovery circuits (PLLs, DLLs); high-speed I/O; image sensors; and RFCMOS ICs.

I hope to see you there! Register today, space is limited.



Memo To New AMD CEO: Time For A Breakout Strategy!
by Ed McKernan on 09-12-2011 at 2:52 pm

“Where’s the Taurus?” In the history of company turnarounds, it was one of the most penetrating and catalyzing opening questions ever offered by a new CEO to a demoralized executive team. The CEO was Alan Mulally, who spent years at Boeing and at one point in the 1980s studied the successful rollout of the original Ford Taurus. To get a full sense of what Mulally faced, you have to follow the progression of the dialogue in a Fast Company article written after the encounter:

Continue reading “Memo To New AMD CEO: Time For A Breakout Strategy!”


Broadcom’s Bet the Company Acquisition of Netlogic
by Ed McKernan on 09-12-2011 at 1:19 pm

I surmised a month ago that Broadcom could be a likely acquirer of TI’s OMAP business in order to compete more effectively in smart phones and tablets. I was not bold enough. Instead, Broadcom has offered $3.7B for Netlogic in order to be an even bigger player in the communications infrastructure market by picking up TCAMs and a family of multi-processor MIPS solutions. The acquisition is not cheap, as they are offering to buy Netlogic at a price equal to 10 times its current sales. In addition, it represents 42% of Broadcom’s current valuation, although one can argue that semiconductors as a whole are undervalued in the market.

I want to highlight the significance of the acquisition relative to the two competing visions in the marketplace as to how best to serve the communications market from a semiconductor solution point of view. On one side are the off-the-shelf standard ASSP solutions from Broadcom, Marvell, Netlogic, Cavium, Freescale, EZChip, etc. On the other are the customizable solutions that were once done entirely with ASICs but which are now more and more being taken over by FPGAs. Altera and Xilinx have made this a focus because they are able to generate a lot of revenue by selling very high-ASP parts as prototype and early production units.

The first camp believes that over time multicore MIPS-based processors are the most flexible way to build and update communications equipment. Broadcom’s major weakness was that it was not able to get a view into new high-performance designs, since its switch chips are better suited to cost-sensitive volume switches. Netlogic, on the other hand, gets to see nearly every new high-end design because it is the de facto sole provider of TCAM chips. Broadcom, in a sense, is paying a premium to get this inside worldwide view into the customer base.

Switching to the other side, the FPGA camp: Cisco and Juniper built their businesses in the 1990s and 2000s off of custom ASICs fabricated at IBM and TI. The development teams still consist mostly of ASIC designers. They have been asked to rely less on ASICs and more on FPGAs because the ASICs never generate high volume. The design flow for FPGAs is like that for ASICs, which is a big plus. What Xilinx and Altera figured out several years ago is that if they bolted the fastest SerDes onto their latest chips, they would attract more design wins. Altera of late is doing the best at pushing SerDes speeds. Whereas in the 1990s FPGAs were used for simple bus translations, in the 2000s they became standard front ends to many line cards because of the SerDes speeds and flexibility.

Turning to this decade, FPGAs are being used more and more for building data buses, deep packet inspection and even traffic management. If Xilinx and Altera offered unlimited gates at a reasonable price, the likelihood is that they would own most of the line card. There is one area where they are coming up short, and I expect this will be corrected soon. The networking line card needs some amount of high-level processing. Traditionally, this has been MIPS, and recently Intel has shown up. FPGAs need to incorporate processors and have a clean interface into the high-speed fabric.

At the 28nm process node, Altera and Xilinx are attempting to take it one step further in their ability to be more economical by offering more gates at the same price. Xilinx is pushing the envelope on gate count by utilizing 3D package technology. This should allow them to effectively double the gate counts at less than 2X the cost.

Altera’s approach is to set aside silicon area for customers to drop in their own IP block: a customized solution that may benefit the Ciscos and Junipers of the world, who don’t like to share their company jewels. There is a metal-mask adder, but it is still a time-to-market and lower-cost alternative to a full ASIC. I call this approach a “Programmable ASSP” because it combines the benefits of both.

In the short term there will be no clear-cut winner, as both approaches have benefits and their own appeal to design engineers. There is, however, a longer-term financial aspect that can sway the market: the FPGA vendors have a much higher margin structure than Broadcom and the rest of the ASSP vendors. Altera and Xilinx have 14-18% higher gross margins, but more importantly their operating margins are over two times greater. It comes down to R&D. Altera and Xilinx get much more for their R&D dollar than Broadcom or Netlogic. All this seems to lead to the conclusion that Altera and Xilinx have the advantage in their ability to explore ways of crafting new solutions in the communications space.

Broadcom has a lot to prove over the coming months. In the announcement, they forecasted that the Netlogic acquisition would be accretive. Netlogic’s TCAM business is sound; however, it hasn’t grown to the level expected with the rollout of IPv6. More significant for Broadcom is the fact that the processor business is expensive. Unlike the PC market, the NPU market never achieved the multibillion-dollar size that many analysts expected a decade ago. It is, according to the Linley Group, roughly $400M in size, which is relatively small for the investment needed at 28nm and below.


Synopsys STAR Webinar, embedded memory test and repair solutions
by Eric Esteve on 09-12-2011 at 8:16 am

The acquisition of Virage Logic by Synopsys in 2010 allowed Synopsys to build a stronger, more diversified IP portfolio, including embedded SRAM, embedded non-volatile memory and embedded test and repair solutions. Looking back in time, I remember the end of the 80’s: at that time the up-to-date solution for embedding SRAM in your ASIC design was to use a compiler provided by the ASIC vendor to implement the SRAM, and then develop the test vectors yourself. In 2000, most ASIC vendors were externally sourcing the SRAM compiler (from Virage Logic…), and ASIC designers were taking advantage of faster, denser memories with integrated Built-In Self-Test (BIST). But that was not enough, as the SRAMs, becoming very large, could have a negative impact on the yield of the ASIC. Then, in 2002, Virage Logic introduced repair capability with the STAR (Self-Test and Repair) product.

To register for this STAR webinar, just go here.

Specifically, the DesignWare Self-Test and Repair (STAR) Memory System consists of:

  • Test and repair register transfer level (RTL) IP, such as STAR Processor, wrapper compiler, shared fuse processor and synthesizable TAP controller
  • Design automation tools such as STAR Builder for automated insertion of RTL and STAR Verifier for automated test bench generation
  • Manufacturing automation tools such as STAR Vector Generator for automated generation of WGL/STIL and programmability in patterns, and STAR Silicon Debugger for rapid isolation, localization and classification of faults
  • An open memory model for all memories. In order to generate DesignWare STAR Memory System views, Synopsys provides the MASIS memory description language. In addition a MASIS compiler is available to memory developers to automate generation and verification of the memory behavioral and structural description
  • The DesignWare STAR ECC IP, which offers a highly automated design implementation and test diagnostic flow that enables SoC designers to quickly address multiple transient errors in advanced automotive, aerospace and high-end computing designs

This webinar will be held by Yervant Zorian, Chief Architect for the embedded test & repair product line, and Sandeep Kaushik, Product Marketing Manager for the Embedded Memory Test and Repair product line at Synopsys. They will explain:

  • The technical trends and challenges associated with embedded test, repair and diagnostics in today’s designs.
  • The trade-offs and design impact of various solutions.
  • How Synopsys’ DesignWare® STAR Memory System® can meet your embedded test, repair and diagnostics needs.

They will tell you why STAR can lead to:
Increased Profit Margin

  • The DesignWare STAR Memory System can enable an increase of the native die yield through memory repair, leading to increased profit margins

Predictable High Quality with Substantial Reduction in Manufacturing Test Costs; Shorter Time-to-Volume

  • The DesignWare STAR Memory System has superior diagnostics capabilities to enable quick bring up of working silicon, thereby enabling manufacturing to quickly ramp to volume production. The DesignWare STAR Memory System also has automated test bench capabilities and a proven validation flow to ensure a successful bring up of first silicon on the automatic test equipment

Minimum Impact on Design Characteristics (Performance, Power and Area)

  • Because the test and repair system is transparently integrated within the DesignWare STAR Memory System, it ensures minimal impact on timing and area and allows designers to quickly achieve timing closure. This advanced embedded test automation can reduce insertion time by weeks

To register for this STAR webinar, just go here.

Eric Esteve


Cadence ClosedAccess
by Paul McLellan on 09-11-2011 at 4:00 pm

There are various rumors around about Cadence starting to close up stuff that has been open for a long time. Way back in the mists of time, as part of the acquisition of CCT, the Federal Trade Commission forced Cadence to open up LEF/DEF and allow interoperability with Cadence tools (actually only place and route), I believe for 10 years. Back then Cadence was the #1 EDA company by a long way, in round figures making about $400M in revenue each quarter and dropping $100M to the bottom line. Cadence opened up LEF/DEF and created the Connections Program as a result.

Recently I’ve heard about a couple of areas where Cadence seems to be throwing its weight around.

Firstly, it seems that they have deliberately made Virtuoso so that some functions only operate with Cadence PDKs (SKILL-based) and not with any of the flavors of open PDKs that are around.

I’ve written before about the various efforts to produce more open PDKs that run on any layout system, as opposed to the Skill-based PDKs that only run in Cadence’s Virtuoso environment.

Apparently what is happening is this: OpenAccess includes a standard interface for PCells, which is used by all OA PCells whether implemented in SKILL, Python, TCL, C++ or anything else. There is also a provision for an end application to query the “evaluator” used with each PCell. In IC 6.1.5, code has been added to Virtuoso GXL that uses this query; if the answer is anything other than “cdsSkillPCell”, then a warning message of the form “<feature> could not complete” is issued and that GXL feature is aborted. Previous versions, 6.1.4 and earlier, all worked correctly. In particular, modgen no longer works.
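In pseudocode, the gating behavior described amounts to something like the sketch below. The names are hypothetical Python stand-ins, not actual OpenAccess or Virtuoso APIs; only the check against the "cdsSkillPCell" evaluator name comes from the description above.

```python
# Hypothetical illustration only -- not OpenAccess or Virtuoso code.
class FakePCell:
    """Stand-in for a PCell master; real OA objects expose the evaluator name."""
    def __init__(self, evaluator_name):
        self.evaluator_name = evaluator_name

def run_gxl_feature(feature_name, pcell, body):
    """body: callable that does the feature's real work on the PCell."""
    if pcell.evaluator_name != "cdsSkillPCell":      # the gate described above
        print(f"Warning: {feature_name} could not complete")
        return None                                  # feature aborts
    return body(pcell)

run_gxl_feature("modgen", FakePCell("pyPCellEvaluator"), body=lambda p: "ok")  # aborts
run_gxl_feature("modgen", FakePCell("cdsSkillPCell"), body=lambda p: "ok")     # runs
```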

Apparently semiconductor companies are very annoyed about this for a couple of reasons. Firstly, they have to generate multiple PDKs if Cadence won’t accept any of the open standards (and Cadence has enough market share that they cannot really be avoided). Second, the incredibly complex rules that modern process nodes require are simply much easier using the more powerful modern languages used in the open PDKs (e.g. Python). At least one major foundry has said publicly that they can deliver modern-language PDKs to their customers weeks earlier than they can with Skill.

Cadence apparently claims that it is purely an accident that certain tools fail with anything other than SKILL-based PDKs, as opposed to something that they have deliberately done to block them. But they won’t put any effort into finding what is wrong (well, they know). To be fair to them, they have had major financial problems (remember those Intel guys?) that have meant they have had to cut back drastically on engineering investment and really prioritize what they can do.

One rumor is that this non-interoperability has escalated to CEO level and Cadence’s CEO refused to change this deliberate incompatibility. TSMC must be very frustrated by this since it causes them additional PDK expense. Relations between the two companies seem to be strained over Cadence not supporting the TSMC iPDK format. It would be an interesting battle of the monsters if TSMC took a stand to only support open PDKs, a sort of who’s going to blink first scenario.

The second rumor is that Cadence have changed the definitions of LEF/DEF and now demand that anyone using them license the formats. To be fair, Synopsys does the same with .lib. I don’t know if they are demanding big licensing fees or anything. Of course there may be technical reasons LEF/DEF have to change to accommodate modern process nodes, just as .lib has had to change to accommodate, especially, more powerful timing models.

It is, of course, ironic that Cadence’s consent decree came about due to its dominance in place & route. Whatever else Cadence can be accused of, dominance in place & route is no longer one of them. Synopsys and Magma have large market shares, and Mentor and Atoptech have credible technical solutions too. It is in custom design with Virtuoso where Cadence arguably has large enough market share to fall under laws that restrict what monopolists can do, things like the essential facilities doctrine, which overrides the general principle that a company does not have to deal with its competitors if it chooses not to.

Obviously I’m not a lawyer, and it’s unclear whether Cadence is doing anything it is not allowed to. But it is certainly doing things that contradict its EDA360 messaging about being open, and that would probably have been illegal during the decade of the consent decree, which recently ended. Plus, apparently they have 50 people in legal (no wonder they are tight in engineering), but they seem to have their hands full with class action suits (funny thing: I just tried to use Google to see if there was any color worth adding to this and the #1 hit was a website called Law360!).

And then there is the Cadence “dis-connections” partner program. But that’s a topic for another time.

Related blogs: Layout for AMS Nanometer ICs



2.5D and 3D designs
by Paul McLellan on 09-07-2011 at 1:54 pm

Going up! Power and performance issues, along with manufacturing yield issues, limit how much bigger chips can get in two dimensions. That, and the fact that you can’t manufacture two different processes on the same wafer, mean that we are going up into the third dimension.

The simplest way is what is called package-in-package where, typically, the cache memory is put into the same package as the microprocessor (or the SoC containing it) and bonded using traditional bonding technologies. For example, Apple’s A5 chip contains an SoC (manufactured by Samsung) and memory chips (from Elpida and other suppliers). For chips where both layouts are under the control of the same design team, microbumps can also be used as a bonding technique, flipping the top chip over so that the bumps align with equivalent landing pads on the lower chip, completing all the interconnectivity.

The next technique, already in production at some companies like Xilinx, is to use a silicon interposer. This is (usually) a large silicon “circuit board” with perhaps 4 layers of metal built in a non-leading edge process and also usually containing a lot of decoupling capacitors. The other die are microbumped and flipped over onto the interposer, and the interposer is connected to the package using through silicon vias (TSVs). Note that this approach does not require TSVs on the active die, avoiding a lot of complications.

I think it is several years before we will see true 3D stacks with TSVs through active die and more than two layers of silicon. It requires a lot of changes to the EDA flow, a lot of changes to the assembly flow, and the exclusion areas around TSVs (where no active circuitry can be placed) may be prohibitive, forcing the TSVs to the periphery of the die and thus significantly lowering the number of connections possible between die.

But all of these approaches create new problems to verify power, signal and reliability integrity. To solve this requires a new verification methodology that provides accurate modeling and simulation across the whole system: all the die, interposers, package and perhaps even the board.

TSVs and interposer design can cause inter-die noise and other reliability issues. As I said above, the interposer usually contains decaps and so the power supply integrity needs to take these into account. In fact it is not possible to analyze the die in isolation since the power distribution is on the interposer.

One approach, if all the die layout data (including the interposer) is available, is to do concurrent simulation. Typically some of the die may be from an IP or memory vendor, and in this case a model-based analysis can be used, with CPMs (chip power models) standing in for the detailed data that is unavailable.

One challenge that going up in the 3rd dimension creates is the issue of thermally induced failures. Obviously, heat generated has a harder time getting out from the center than in a traditional two-dimensional chip design. The solution is to create a chip thermal model (CTM) for each die, which must include temperature-dependent power modeling (leakage is very dependent on temperature), metal density and self-heating power. Handing all these models to a chip-package-system thermal/stress simulation tool for power-thermal co-analysis, the power and temperature distribution can be calculated.
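The co-analysis is needed because leakage and temperature feed back on each other: leakage grows with temperature, and temperature grows with total power. A toy fixed-point iteration (made-up coefficients, not a real CTM or thermal solver) shows the idea:

```python
# Toy power-thermal co-analysis loop with made-up coefficients.
def co_analyze(dyn_power_w=2.0, t_ambient_c=25.0, theta_ja=10.0,
               leak_ref_w=0.5, t_ref_c=25.0, leak_doubling_c=20.0):
    """Iterate temperature and leakage to a self-consistent operating point."""
    t = t_ambient_c
    for _ in range(100):
        leak = leak_ref_w * 2 ** ((t - t_ref_c) / leak_doubling_c)  # grows with T
        t_new = t_ambient_c + theta_ja * (dyn_power_w + leak)       # deg C
        if abs(t_new - t) < 0.01:
            break
        t = t_new
    return t_new, leak

temp, leakage = co_analyze()
print(f"converged at {temp:.1f} C with {leakage:.2f} W of leakage")
```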

A final problem is signal integrity. The wide I/O (maybe thousands of connections) between the die and the interposer can cause significant jitter due to simultaneous switching. Any SSO (simultaneously switching outputs) solution needs to consider the drivers and receivers on the different die as well as the layout of the buses on the interposer. Despite the interposer being passive (no transistors) its design still requires a comprehensive CPS methodology.

Going up into the 3rd dimension is an opportunity to get lower power, higher performance and smaller physical size (compared to multiple chips on a board). But it brings with it new verification challenges in power, thermal and signal integrity to ensure that it is all going to perform as expected.

Norman Chang’s full blog entry is here.

Related blog: TSMC Versus Intel: The Race to Semiconductors in 3D!