
AMS Design, Optimization and Porting
by Daniel Payne on 09-19-2011 at 2:35 pm

AMS design teams can follow a traditional path or try something new. The traditional path goes through the following steps:
1. Design requirements
2. Try a transistor-level schematic
3. Run circuit simulation
4. Compare the simulated results versus the requirements, re-size the transistors and go back to step 3 or 2
5. Create an IC layout
6. Extract parasitics, re-run circuit simulation
7. Compare the simulated results versus the requirements, re-size the transistors and go back to step 5 or 2

    You probably noticed that there are iteration loops in the traditional flow after I’ve created a sized schematic or produced an IC layout. These loops take both precious CPU time and wall time, which means that your schedules tend to slip because you’re not meeting your specs soon enough.

    There is another AMS design flow called model-based design that can reduce the time to design, optimize or port an IC design:
1. Design requirements
2. Try a transistor-level schematic
3. Create a model
4. Run an optimizer (inputs: model, constraints, process; output: sized schematic)
5. Schematic-driven layout
6. Extract parasitics, re-run circuit simulation
7. Compare the simulated results versus requirements

Magma has created this new AMS design flow and calls its tool the Titan Analog Design Accelerator (ADX). What strikes me most about this approach is that I'm not spending the majority of my time running circuit simulations and tweaking transistor sizes manually; instead, I'm creating a model of my IC design using equations, then asking an optimizer to do the hard work of creating the best sized schematic for me that meets the specification.

    The design constraints for the analog optimizer could be:

    • Area
    • Power
    • New specification
    • Speed or frequency
    • PVT corners
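
To make the model-plus-constraints idea concrete, here is a toy sketch of equation-based sizing. The square-law device equation, process numbers and gm spec below are textbook assumptions of mine, not anything taken from Titan ADX:

```python
import numpy as np
from scipy.optimize import minimize

# Toy model-based sizing: describe the circuit with equations of the device
# widths, then let a numeric optimizer find the smallest design that still
# meets the spec. All numbers below are illustrative assumptions.
KP, L, ID, SPEC_GM = 100e-6, 0.18e-6, 100e-6, 1e-3   # assumed process + spec

def area(w):
    # Objective: total gate area of the two devices.
    return np.sum(w) * L

def gm_margin(w):
    # Square-law model: gm = sqrt(2 * KP * (W/L) * ID); must meet SPEC_GM.
    return np.sqrt(2 * KP * (w[0] / L) * ID) - SPEC_GM

res = minimize(area, x0=[10e-6, 10e-6],
               bounds=[(0.5e-6, 500e-6)] * 2,
               constraints=[{"type": "ineq", "fun": gm_margin}])
print("sized widths (um):", res.x * 1e6)
```

A real tool handles many devices, many constraints and PVT corners at once, but the division of labor is the same: the designer supplies the model, the optimizer does the sizing.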

    ADC Example
Using the model-based approach in Titan ADX, an ADC circuit was designed and then automatically optimized. The following plot shows 11 different results from the optimizer: Power versus Input Range in red, and Active Area versus Input Range in blue.

    I can look at the trade-offs shown in the plot and then choose which of these 11 sized schematics to use. Both area and power are minimized around the 1.9V input range according to these results.

    Optimizer Feedback
    The analog optimizer provides plenty of info to help you make design tradeoffs:

    • Sensitivity info for each constraint
    • Constraints that limit design objectives most
    • Critical PVT corners
    • Floorplan constraints
    • Layout constraints

    Porting an IC Design
    Let’s say that you wanted to port an AMS block from 130nm to 90nm. The following table will give you an idea of the time difference between a traditional flow and the newer model-based flow to port your design:

You still have to learn the new model-based design approach before you can start seeing results like this, so take that into account. Analog designers can be resistant to changing their methodologies; however, this new approach can provide your company with attractive time-saving and optimization benefits.

To get the most benefit from this Magma flow you have to add the Analog Virtual Prototyper (AVP), which lets you define the placement for your transistors.

    Summary
If you are an AMS designer who wants an optimal IC schematic and layout sooner, then consider looking at the model-based approach offered by Magma, called the Titan Analog Design Accelerator.




    PVT and Statistical Design in Nanometer Process Geometries
    by Daniel Nenni on 09-18-2011 at 9:00 am

On Sept 22, 2011, the nm Circuit Verification Forum will be held in Silicon Valley, hosted by Berkeley Design Automation. At this forum, Trent McConaghy of Solido DA will present a case study on the TSMC Reference Flow 2.0 VCO circuit, to showcase Fast PVT in the steps of extracting PVT corners, verifying PVT, and doing post-layout PVT verification. The presentation will cover the speed benefit of Solido Fast PVT, and the multiplicative speed benefit when combined with Berkeley DA's Analog FastSPICE simulator. The picture below shows the benefits in the context of a corner-driven design flow, reducing the time taken for a thorough PVT flow from 4.8 days to 1.8 hours.

    Process, voltage, and temperature (PVT) variations are often modeled as a set of PVT corners. Traditionally, only a handful of corners have been necessary: with FF and SS process (P) corners, plus extreme values for voltage (V) and temperature (T), all combinations would mean 2^3=8 possible corners.

    With modern processes, many more process corners are often needed in order to properly bracket process variation across different device types. Furthermore, transistors are smaller, performance margins are smaller, voltages are lower, and there may be multiple supply voltages. To properly bracket these variations, more variables with more values per variable are needed.

This leads to more corners. Consider the reference VCO circuit from the TSMC AMS Reference Flow 2.0 on the TSMC 28nm process. A reasonable setup of its PVT variation has 15 modelset values, 3 values for temperature, and 5 values for each of its three voltage variables, totaling 3375 corners. Industry-standard simulators take 70 s per corner on this circuit, which means 66 hours to evaluate all corners serially. Even with 10 parallel cores, the runtime is 6.6 hours.

    Designers may cope by guessing which corners cause the worst-case performance, but that is risky: a wrong guess could mean that the design hasn’t accounted for the true worst-case, which means failure in testing followed by a re-spin, or worse, failure in the field.

    And what about layout parasitics? Ideally one does a thorough PVT analysis after layout. But each of these simulations takes 18 minutes on industry-standard simulators. Therefore, even with 10 cores, it would take 4.2 days to run 3375 corners.
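
The arithmetic behind those runtime figures is easy to reproduce:

```python
# Reproducing the runtimes quoted above from the article's numbers.
corners = 3375                       # PVT corners for the reference VCO
pre_s, post_s = 70, 18 * 60          # seconds per corner, pre-/post-layout
cores = 10

print(corners * pre_s / 3600)                 # ~65.6 h serial ("66 hours")
print(corners * pre_s / 3600 / cores)         # ~6.6 h on 10 cores
print(corners * post_s / 3600 / 24 / cores)   # ~4.2 days post-layout, 10 cores
```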

    What is sorely needed is a way to quickly yet thoroughly identify worst-case PVT corners, when there are hundreds, thousands, or even tens of thousands of possible corners.



    Solido Design Automation has developed a new application called Fast PVT to address this. It uses adaptive machine learning technologies to rapidly identify the worst-case corners, often reducing the number of simulations by 10x or more. Fast PVT enables users to rapidly extract a handful of worst-case corners, which the user subsequently uses in rapid-turnaround design iterations. Once the corners meet spec, Fast PVT can be used for a more conservative PVT verification. Fast PVT is also applicable to post-layout analysis.
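
As a conceptual illustration of what such an adaptive search can look like (this is a generic sketch, not Solido's algorithm), the idea is to simulate a small seed set, fit a cheap surrogate model, and spend the remaining simulation budget only on corners the surrogate predicts to be near-worst:

```python
import numpy as np
from itertools import product

# Generic adaptive worst-case corner search on an illustrative PVT grid.
rng = np.random.default_rng(0)
grid = np.array(list(product([-1, 0, 1], [-1, 0, 1], [-1, -0.5, 0, 0.5, 1],
                             [-1, 0, 1], [-1, 0, 1])), float)   # 405 corners

def simulate(c):
    # Stand-in for a real (expensive) SPICE run at corner c.
    return c @ np.array([0.6, -1.1, 0.4, 0.2, -0.3]) + 0.2 * np.cos(3 * c).sum()

seen = list(rng.choice(len(grid), 20, replace=False))   # seed sample
for _ in range(4):                                      # refinement rounds
    A = np.c_[np.ones(len(seen)), grid[seen]]
    y = np.array([simulate(c) for c in grid[seen]])
    beta, *_ = np.linalg.lstsq(A, y, rcond=None)        # cheap linear surrogate
    pred = np.c_[np.ones(len(grid)), grid] @ beta
    seen += [i for i in np.argsort(pred)[-5:] if i not in seen]
print("simulated", len(seen), "of", len(grid), "corners;",
      "worst output found:", max(simulate(grid[i]) for i in seen))
```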

Of course, PVT is not always the way. Some designers have access to sufficiently good statistical MOS models to consider doing statistical analysis, which is inherently more accurate than PVT. Ideally, one would consider statistical process variation effects during the design loop, in order to get to optimal power, performance, and area subject to a target yield. However, since Monte Carlo (MC) simulations are far too slow within the design loop, MC is traditionally run as a verification afterthought. For high-sigma, the challenge is even greater, since it is not feasible to do the 5 billion or so MC simulations to verify a 6-sigma yield. There is a final challenge: designers don't traditionally think in statistics; they think in terms of corners.
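
The "5 billion or so" figure follows directly from the normal tail probability:

```python
from scipy.stats import norm

p_fail = norm.sf(6.0)        # one-sided 6-sigma tail, ~9.87e-10
# Naive Monte Carlo must observe at least a handful of failures to
# estimate such a rate, so the sample count is several times 1/p_fail:
print(p_fail, 5 / p_fail)    # ~9.9e-10 and ~5.1e9 simulations
```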

Fortunately, there is a way for designers to design with corners, yet consider statistical (and even high-sigma statistical) variations. The key is to extract statistical corners that actually bound the 3-sigma or 6-sigma output performance for the design at hand. To reiterate: these corners bound the performance of the circuit in a statistical sense, rather than traditional global MOS corners like "FF", which bound the performance of the device. Also, there needs to be a fast, pragmatic statistical (or high-sigma statistical) verification step. These steps of corner extraction and verification fit into a familiar-feeling corner-driven design flow: extract corners, design against them, then verify, and iterate if needed.
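
A minimal sketch of the corner-extraction idea, as a conceptual illustration rather than Solido's method: run a Monte Carlo over the statistical process parameters, find the sample whose output lands at the one-sided 3-sigma quantile, and reuse that sample's parameter vector as a design corner. The toy "simulator" below is a made-up stand-in for SPICE:

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate(p):
    # Hypothetical stand-in for a SPICE run driven by statistical params.
    return 1.0 + 0.10 * p[0] - 0.05 * p[1] ** 2

samples = rng.standard_normal((2000, 2))     # statistical process params
outputs = np.array([simulate(p) for p in samples])

q3 = np.quantile(outputs, 0.99865)           # one-sided 3-sigma quantile
corner = samples[np.argmin(np.abs(outputs - q3))]
print("3-sigma output bound:", q3, "corner parameters:", corner)
```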

    This is exactly the same flow as the PVT corner-driven flow. The only difference is how corner-extraction / verification tools themselves behave. In the end, we have a unified, designer-friendly approach to handle PVT, statistical, or high-sigma variation.

    As shown below, Solido DA supplies appropriate tools to support the flows for all three styles of variation, and in conjunction with Berkeley DA’s AFS, provides speedups of 10x+ to 100x+.

    Trent McConaghy is the Solido Chief Science Officer, an engaging speaker, and someone who I have thoroughly enjoyed working with the past two years. If you are doing 28nm analog/RF, IO, memory or standard cell digital library design you will not want to miss talking to Trent!



    Fast Track Seminars
    by Paul McLellan on 09-15-2011 at 6:11 pm


Atrenta’s SoC realization seminars, “Fast Track Your SoC Design”, have started. The first one was in Ottawa last Tuesday, and it was a full house. In a straw poll, most of the attendees acknowledged facing IP handoff and quality issues. The keynote speaker was Dr Yuejian Wu, director of ASIC development at Infinera and an adjunct professor at the University of British Columbia (which seems about as far away as you can get from Ottawa without actually leaving Canada!). He talked about “Fast silicon validation with built-in functional tests.” Other attendees shared their experience with the SpyGlass tools and methodologies. Most of the interest, as judged by the questions, seemed to be on GenSys, Power and Advanced Lint.

    The next seminar is coming up next Tuesday, September 27th from noon until 5pm at the Network Meeting Center (5201 Great America Parkway, Santa Clara, by the Hyatt Hotel). The keynote will be Suk Lee, director of design infrastructure marketing at TSMC, another longtime EDA guy who worked for me at one point years ago. He will be speaking about soft IP quality.

    The seminar is free and includes lunch (who said there is no such thing as a free lunch) and closes with a cocktail reception.

    To register, go here. There is also a seminar in Bangalore on October 13th.



    Phil Bishop and marketing at Magma
    by Paul McLellan on 09-15-2011 at 4:59 pm

    Earlier in the week I met with Phil Bishop, who is the corporate VP of worldwide marketing at Magma.

I started by asking him where he came from. He originally started as a designer at Motorola in microprocessors and microcontrollers. Then he moved to Silicon Compiler Systems (remember them?), which ended up being acquired by Mentor. He stayed at Mentor for twelve years and ended up as VP of consulting, with some product and IP framework responsibilities. Then he decided the lure of fish & chips was too great and went to the UK for 5 years as the CEO of Celoxica. He built that up and took it public. Interestingly, one place he found revenue was in the banking sector, selling them Xilinx-based add-in boards to accelerate Black-Scholes option pricing. Sounds more fun than the usual EDA term-license deals. When he came back from the UK he became Pyxis CEO and left soon before its sale to Mentor last year.

    Rajeev recruited him to Magma, initially to do global account sales until one day he found himself in charge of all of marketing: marcom, product marketing, solutions marketing and foundry interface.

His key marketing thrust is to pull Magma’s product line together into a unified message (I used to call this empirical marketing when I was doing it at Cadence). The Silicon One tag is an umbrella name for this. Magma has a strong product line to do this, with FineSim and Titan to address analog/mixed-signal, and a strong verification portfolio of Tekton (timing verification), QCP (extraction) and Quartz (DRC/LVS). Of course there are some holes that Magma has to fill with partners: it has no DFT solution of its own and no IP portfolio of its own, for which it works closely with ARM, MIPS, Imagination and others.

    Another thing he is trying to do is drive the product marketing folks to take responsibility for moving opportunities from the funnel into true pre-sales engagements, so as to better understand all the adoption issues and pushback. At the same time, do a better job on competitive analysis, looking out a year or so to where the market will be, not just looking at what the competition has this quarter. This allows them to do a better job of driving engineering who are operating on that kind of a timescale anyway.

    The big thrust that Phil feels is driving Magma’s business is the move to have everything on the same chip. All the high-performance digital, all the analog, RF.

    Talking of Silicon One, coming up are the Silicon One seminars. The keynote speakers are pretty interesting:

• Boston on the 22nd has Carl Anderson of IBM talking about the EDA cloud. IBM has put a lot of effort into building its own cloud for EDA and significantly reduced its IT cost as a result.
• Santa Clara on the 26th has Jack Harding of eSilicon. He will talk about how chips are no longer primarily digital or primarily analog, they are totally both, and the challenge is how to model, simulate, validate and optimize such beasts.
• Austin on the 28th has Ty Garibay, now of Altera and recently in charge of engineering at TI wireless. He is talking about how to integrate SoC and FPGA into multi-chip packages, given the changed semiconductor economics whereby 20nm will be slow in coming and not cheap when it gets here, extending the push for greater integration than ever at 28nm.

    After that the seminars go to Asia and Europe.

    To see the agenda for the day at the seminars go here.

    To register for any of these seminars go here.




    Tanner EDA Newsletter – Fall 2011
    by Daniel Payne on 09-15-2011 at 10:47 am


From the President: Another Great Year
Thanks to innovative, cost-effective technology and excellence in customer support, we’ve just ended fiscal year 2011 (on May 31st) with solid growth. Revenue was up 8%, we added 139 new customers, and we’re continuing to reach out to technology partners for MEMS and for the analog and mixed-signal suite. As we continue to create powerful new tools and move as fast as standards allow towards more open systems, the dedication of our FAEs and the quality of both product development and ongoing maintenance and support have been major factors in our success. (For more information on our fiscal year-end, click here.)
Largely as a result of the release of HiPer Silicon v15 very early in fiscal year 2010, our existing loyal customers accounted for a 13% greater share of new purchases in 2011 than in 2010. Tanner EDA’s L-Edit MEMS and related product offerings also saw strong growth over the year, continuing to gain traction as this design space matures.
Product line growth was also significant. At DAC 2011 the company previewed version 16 of HiPer Silicon (to be released in Q4 2011), which adds OpenAccess database compatibility for layout, enabling designers to share files with colleagues and business partners using Si2 database standards. Larger design teams also appreciated the redesigned multi-user functionality. For more detail, see Tanner EDA at DAC in the sidebar. We also extended and deepened our PDK portfolio, announcing availability of a jointly developed Tanner EDA/Dongbu HiTek foundry-certified 0.18-micron analog CMOS PDK at the end of May.
2011 also saw Tanner EDA focusing on extending and deepening technology partner relationships in areas related to MEMS (SoftMEMS) and analog SPICE simulation (Berkeley Design Automation). See more.
    One of the things that continues to make Tanner EDA successful is our close working relationships with all our users. Many thanks to customers old and new for continuing, with us, to drive analog design innovation.
    – Greg Lebsack, President
    Click here for a September 2011 interview of Greg Lebsack on SemiWiki.

    What’s New in Tanner EDA Tools

    V15 of our full-flow design suite got even better with releases v15.13 and v15.14.
    In v15.13, we included major improvements to the Verilog export in S-Edit as well as modified temperature sweeps to give better user control. T-Spice saw an updated SPICE command wizard and a new duty cycle function that computes the duty cycle of a waveform. New legend controls were added to W-Edit along with a new linear regression measurement and expanded image format support. L-Edit saw improvements to the SDL placement algorithm and increased integration with our HiPer DevGen tool. HiPer Verify users saw improved performance and memory usage in Extract.
    In v15.14, we made updates to Verilog views and improved conditions and other options for imports within S-Edit. T-Spice and W-Edit have several corrections and modifications to improve performance and user functionality. L-Edit received a host of functional corrections and several key improvements that dramatically improve quality of DXF imports. HiPer Verify has several functional improvements and performance enhancements.
    For release notes, click here
V16, which will be released in Q4, was previewed at DAC 2011. Built on our 24-year legacy of providing industry-leading price-performance and interoperability, v16 will include full OpenAccess database compatibility for layout. By enabling file sharing based on OpenAccess database standards, large design teams will be able to collaborate more easily. For preview information, contact your sales person directly or email us at sales@tannereda.com

    Analog Insights: Tanner EDA Partners with Berkeley Design Automation for Faster Spice Simulation

Tanner EDA and Berkeley Design Automation (BDA) are partnering to expand Tanner EDA’s powerful integrated circuit design environment to include a FastSPICE product. BDA’s Analog FastSPICE (AFS) is being integrated with Tanner EDA’s S-Edit schematic capture and W-Edit waveform analyzer, giving analog designers unprecedented speed in a powerful, easy-to-use design system. Users will be able to drive the AFS simulator directly from S-Edit to get all of the speed and accuracy necessary for nanometer design. The faster simulations will allow users to perform a much more thorough verification of their designs.
    BDA Analog FastSPICE
AFS accelerates simulation through advanced numerical analysis and computational techniques without sacrificing accuracy. AFS is different from most FastSPICE products because it generates true operating points and solves the full circuit matrix and original device equations at each timestep without ever taking shortcuts, thus ensuring the highest accuracy. AFS speed improvements are most dramatic when a design has digital circuitry or when simulating an entire design with parasitics. Faster simulations allow for more corner analyses or Monte Carlo runs, resulting in a much more thorough verification of a design. More comprehensive verification means a more robust design and less risk.
    Design Flow
Tanner EDA users will have two choices for the simulation engine: T-Spice and AFS. AFS is good for entire designs with digital circuitry and/or parasitics while T-Spice is good for small blocks and circuits with high analog content. Users can create schematics of their design in S-Edit through its powerful, easy-to-use interface and then choose to simulate with either T-Spice or AFS. Users can then interactively view, measure, and analyze waveforms through W-Edit’s intuitive and highly configurable multiple window, multiple chart interface. W-Edit will directly read both T-Spice and AFS results. After laying out a design in L-Edit, users can perform post-layout extraction with or without routing parasitics and then simulate them with either simulation engine. Tanner EDA shortens design cycles through tight integration of schematic, simulation, and waveform analysis. Users can easily select which simulator they want to use in the Setup Simulation dialog.
    Example
    The simulation of an 8-bit successive approximation ADC is shown as an example. The ADC has an 8-bit R2R DAC, a comparator, and various digital circuitry to control the DAC and produce the digital output. The design consists of 1,160 MOSFETs, 19 resistors, 1 capacitor, and 731 nodes.
The input of the ADC, the temperature, and five corner models were swept for a comprehensive verification. The input of the ADC was swept over the range 0.0 V to 2.2 V in steps of ½ LSB (4.3 mV). The temperature was swept from -55°C to 125°C in increments of 10°C. The five major corner models were simulated (TT, FF, SS, FS, SF). Differential Nonlinearity (DNL) and Integral Nonlinearity (INL) were measured at each temperature, input and corner.
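
For reference, here is how DNL and INL fall out of the measured code-transition voltages. The setup numbers match the article (2.2 V over 2^8 codes gives an LSB of ~8.6 mV, so a half-LSB step is the quoted 4.3 mV); the noisy transitions are fabricated purely for illustration:

```python
import numpy as np

nbits, vfs = 8, 2.2
lsb = vfs / 2**nbits                     # ~8.6 mV; half-LSB ~4.3 mV

# transitions[i] = input voltage where the output code steps from i to i+1;
# here: ideal transition points plus a little fake measurement noise.
ideal = lsb * (np.arange(1, 2**nbits) + 0.5)
transitions = ideal + np.random.default_rng(1).normal(0, 5e-4, ideal.size)

dnl = np.diff(transitions) / lsb - 1.0   # per-code step-size error, in LSB
inl = np.cumsum(dnl)                     # deviation from ideal line, in LSB
print("max |DNL| = %.2f LSB, max |INL| = %.2f LSB"
      % (abs(dnl).max(), abs(inl).max()))
```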
Simulator | DNL      | INL
AFS       | 0.04 LSB | ±1 LSB
T-Spice   | 0.04 LSB | ±1 LSB

For this ADC, AFS and T-Spice gave the same results, although AFS produced results more quickly than T-Spice because the ADC had a large amount of digital circuitry. The results were easily displayed and analyzed in W-Edit regardless of whether they came from AFS or T-Spice. Below are plots of the DNL for all temperatures and corner models.

    FAE Focus: Ragip Ispir, Tanner EDA Japan

    Ragip Ispir is Technical Manager at Tanner EDA in Tokyo. He grew up in Turkey and received his Bachelor’s Degree in Electrical and Electronics Engineering at Middle East Technical University (METU) in Ankara, Turkey. He later went on to complete his Ph.D., with specialization in RF and Microwave circuits, at Okayama University, Japan. Then he worked for Murata Manufacturing Co., Ltd. at Yokohama Technical Center as a design engineer for two years. Ragip joined Tanner EDA in 2006 and is currently responsible for technical support of both pre-sales and post-sales as well as PDK development.
    Ragip enjoys hiking and reading books.

    Tips & Tricks

    From Ragip Ispir, Technical Manager at Tanner EDA in Tokyo
    Tip #1: Making use of T-Cells from sample files
Many users are familiar with using T-Cells to generate parametric layouts. Like other cells, T-Cells can easily be copied — even referenced — from another file. In the sample folder, which can be set up via “Help > Setup Examples”, there are several examples of useful T-Cells. Below are some of them, with a short explanation of the auto-generated layouts.
T-Cell Name | Auto-Generated Cell
“Ellipse Generator” | An ellipse for which you can specify the number of sides (vertices).
“Rounded Rectangle” | A rectangle whose corners can be rounded.
“Spiral Generator” | A spiral for which you can specify the number, spacing and width of rings.
“Concentric Tori” | A number of tori for which you can specify start and stop angles and the incremental spacing between each torus.
“Layout Text Generator” | Text rendered as layout geometry.
“NFET Generator” | An N-type MOSFET. Rules for the MOSFET are read in from the standard DRC rules setup. Before instancing this T-Cell, make sure that you have the necessary layers and required rules defined in the DRC setup. In the T-Cell code, you can customize layer names or initialize parameters without referring to the DRC rules setup.

    How to instance a T-Cell from another file
To instance a T-Cell distributed in the sample folder, simply run the “Cell > Instance” command. In the “Select Cell to Instance” dialog window, browse for “Installed Sample Folder\Features By Tool\L-Edit\T-Cells”, select the tdb file, and then select the T-Cell from the list. In the “Reference type” select “Copy cell to current file”.
    Note that, if you select “External reference”, a version of the cell (XRefCell) will still be copied to the current file, but it will be locked and linked to the original file so that if the original cell is changed you will be informed to update it when you open your file. You can examine XRef cells from “Cell > Examine XRefCells” and do operations like Redirect, Unlink etc.

    If the technology names are different, a Conflict Resolution dialog is displayed. You need to put a check mark in “Ignore different technologies”. Also, if some layers are missing or a cell with the same name exists in your file, you need to take proper action as guided in the dialog.
    When the T-Cell is copied to your design, you will be asked to enter parameters to generate the auto-generated cell.

    Tip #2: Controlling the number of vertices when outputting GDSII file
In L-Edit, when you output to a GDSII file, curved objects like circles, arcs, tori and curved polygons are converted to polygons based on the manufacturing grid specified in the design. The manufacturing grid corresponds to the resolution at which the manufacturer can produce layout objects. If you have big curved objects in your design, such objects may exceed the maximum number of vertices the manufacturer can process. When exporting to GDSII, you can put a check mark in “Fracture polygons with more than 199 vertices”, where 199 is the default maximum number of vertices and can be changed. When exporting to GDSII, the curved objects are first converted to polygons and then fractured if the number of vertices exceeds 199.
    You can control the number of vertices of selected objects manually by the commands available in “Draw>Convert” menu.

    Suppose that you have a circle with radius 500u, and the manufacturing grid in your design is set to 0.05u (“Setup > Design | Grid”). When you convert this circle to a polygon using the “Draw > Convert > To Polygon…” command, the resultant polygon will have 592 vertices. You can fracture it with the “Draw > Convert > Fracture Polygons…” command. If you set the maximum number of vertices to 199, you end up with 4 polygons each having 150 vertices.
    In some cases, depending on the process, manufacturers charge more if the number of vertices exceeds a certain number. To decrease the number of vertices, you can increase the “Manufacturing grid” temporarily in “Setup > Design | Grid” before converting the selected curved objects to a polygon. If you change “Manufacturing Grid” from 0.05u to 0.5u, then convert the same circle to a polygon, you will find that it has only 179 vertices.
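
A rough model shows why coarsening the grid cuts the vertex count. The tolerance rule below (keep the chordal deviation within the manufacturing grid) is an assumption of mine, so the absolute counts will not match L-Edit’s 592 and 179, but the scaling trend does:

```python
import math

def vertices_for_circle(radius, grid):
    # Place vertices so each segment's sagitta r*(1 - cos(theta/2)) <= grid.
    theta = 2 * math.acos(1 - grid / radius)     # max angle per segment
    return math.ceil(2 * math.pi / theta)

for grid in (0.05, 0.5):
    print(grid, vertices_for_circle(500.0, grid))
# Vertex count scales as 1/sqrt(grid): a 10x coarser grid gives ~3.2x fewer
# vertices, the same ratio as the 592 -> 179 drop quoted above.
```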


    In This Issue

    From the President: Another Great Year
    What’s New in Tanner EDA Tools
    Analog Insights: Tanner EDA Partners with Berkeley Design Automation for Faster Spice Simulation
    FAE Focus: Ragip Ispir, Tanner EDA Japan
    Tips & Tricks
    Happenings @ Tanner EDA
    Partner Spotlight: EDA Solutions

    Happenings @ Tanner EDA

    Tanner EDA Press Releases Since the Last Issue:
    Tanner EDA Announces Solid Growth in Revenue, Technology Partnerships and Customer Base for Fiscal 2011
    Tanner EDA Lands Experienced EDA Account Manager for Growing Customer Base
    Berkeley Design Automation and Tanner EDA Accelerate Circuit Verification for A/MS Designers
    Tanner EDA Unveils HiPer Silicon v16 with Open Access at DAC 2011
    Read More on Our Press Page


    Tanner EDA in the News
    SemiWiki:
    Interview with Tanner EDA president Greg Lebsack

    ChipEstimate / Japan:
    “Full-flow Design Suite for Analog and Mixed-signal: Tanner EDA Provides In-depth On-demand Demo Online”

    EDACafe:
    Video interview of John Zuk at DAC

    EETimes / Europe:
    “Foundry-certified 0.18-micron analog CMOS PDK”

    Electronic Products:
    “High-productivity design tools for custom analog ICs”

    SemiWiki:
    “Tanner EDA at DAC” and “An Affordable 3D Field Solver at DAC”

    Tanner EDA at DAC
    DAC 2011 was held in San Diego from June 5-10. Tanner EDA and partners participated in exhibitor forums, a technical panel, and booth presentations and demonstrations. Tanner EDA also co-sponsored the 5th Annual IPL Luncheon at DAC: Interoperable PDK Standards are Here to Stay: New Era of Analog / Custom Innovation

    Booth presentations by Tanner EDA & Partners:
A pre-release version of v16, including OpenAccess database compatibility for layout, which enables designers to share files with colleagues and business partners using Si2 database standards, along with redesigned multi-user functionality for larger design teams.
    A joint solution developed by Berkeley Design Automation, Inc., the nanometer circuit verification leader, and Tanner EDA to accelerate circuit verification for A/MS designers. Both Tanner EDA and Berkeley Design Automation demonstrated the integration of Tanner EDA’s HiPer Silicon™ design suite with the Berkeley Design Automation Analog FastSPICE™ Platform.
    TowerJazz
    Speaker: Ofer Tamir, Director CAD, Design Enablement & Support

    X-Fab
    Speaker: Joerg Doblaski, Senior Engineer

    Tanner EDA on HiPer Verify High Performance Physical Verification
    Speaker: Jeff Miller, Director of Product Management

    Tanner EDA on HiPer DevGen Layout Acceleration
    Speaker: Nicolas Williams, Director of Product Management

    Exhibitor Forums:
    Analog IC Design: Why a Cohesive Tool Flow Drives Productivity
Mass Sivilotti, Chief Scientist, and John Zuk, VP Marketing & Strategy, outlined how productivity has become a mandate as analog IC designers strive to keep pace with rapidly increasing market demands around quicker time-to-market. The productivity advantages of using a cohesive analog design tool suite comprised of schematic capture, simulation, layout, and physical verification were discussed. Preliminary responses from a recent survey of analog designers were presented (e.g., 28% of 42 preliminary respondents prefer to use development tools from a single vendor for compatibility; expense and lack of a single point of control were cited as the main challenges when using tools from multiple vendors).

    To contribute to this survey of the effect of a cohesive workflow on A/MS design productivity, please visit the survey here. Full results will be published in Q4.
    Analog IC Design at the Edge: A New Twist for Nanoscale Productivity
    Dr Lanny Lewyn, President, Lewyn Consulting and Nicolas Williams, Director of Product Management for Tanner EDA, explained why nanoscale analog IC design productivity is a major concern as chip device counts approach 1 billion at 32 nm. A multitude of physical device pattern separation dimensions must now be entered into the pre-layout simulation models to accurately predict post-layout circuit performance. The presented approach — based on the seminal work of Mead and Conway — offers a novel method that enables rapid circuit simulation in a multitude of nanoscale technology nodes and platform options. As a result, pre-layout simulation accuracy is improved, which has a direct impact on increasing analog IC manufacturing yields while simultaneously increasing design productivity.

For a copy of this presentation, please contact sales@tannereda.com
    Pavilion Panel:
    Why the Delay in Analog PDK?
    Mass Sivilotti from Tanner EDA, Tom Quan from Taiwan Semiconductor Manufacturing Co., Ltd. (TSMC) and Ofer Tamir from TowerJazz, with Steve Klass of 7Stalks Consulting acting as moderator, participated in a lively discussion on why it takes so long for foundries to release analog/ mixed-signal process design kits (PDKs). With the amount of A/MS content in designs growing, and the pressure to move to smaller process nodes increasing, the audience appreciated a chance to talk to the people who develop PDKs and reference flows.

    Upcoming Industry Events
    TowerJazz Global Symposium – silver sponsorship and presentation by joint customer Novocell Semiconductor on Nov 3 in Newport Beach, CA.
    EDS Fair from Nov 16-18 in Yokohama, Japan – exhibiting
    Learn more about Tanner EDA Events


    What’s New in Tanner EDA Tools
    View latest release notes here

    Webinars on Demand
    We are now offering Webinars On-Demand from our webinar library. Click to visit our Videos & Demos homepage to download webinars on:

• Analog layout and Tanner EDA’s L-Edit and Specialty Tools
• Analog acceleration and Tanner EDA’s HiPer DevGen tool
• High performance physical verification and Tanner EDA’s HiPer Verify

    See a schedule of upcoming webinars

    Training on Tanner EDA Tools
    We offer training for analog and mixed-signal IC & MEMS design, taught by Tanner EDA experts with extensive design experience.

• Training at Tanner EDA is available each month in our own classroom with workstations loaded with our entire tool suite. (See information below.)
• Customized training can be planned and modified to meet the unique needs of an individual, design team or company.
• All our training can be scheduled at your site, via the web or at our corporate offices in Monrovia, California.

    Sign up for Training

    Partner Spotlight: EDA Solutions

In November 2001, Tanner EDA appointed EDA Solutions as the exclusive European representative for Tanner EDA tools. Ten years on, Paul Double, founder and CEO, joined us for a brief Q&A for this quarter’s issue of Tanner EDA Today:
Q: Please tell us a little about EDA Solutions.
A: We have been exclusive representatives for Tanner for 10 years now and in that time we have brought over 400% growth in sales in the region, and helped develop the tools, the brand and the reputation for Tanner as the best supported, most cost-effective design solution on the market.
Q: What makes Tanner EDA solutions a good fit for EDA Solutions to represent in Europe?
A: The design tools from Tanner EDA offer an effective and affordable alternative to those from the traditional big vendors. It therefore suits a small, highly focused company like ours to represent such a tool flow. Like Tanner EDA, we have a commitment to unrivalled customer service, which has helped us to build up a very mutually beneficial relationship with our customers.
Q: Which is the biggest market segment served by EDA Solutions? Is it commercial or educational? What design types?
A: We have a long-standing relationship with Europractice to serve our academic customers. For Tanner EDA in Europe, though, the majority of EDA Solutions’ focus has been on the commercial business. Our main application areas include sensing, imaging/display, power/HV and MEMS. We are helped in our role by the strength of the European analog/mixed-signal foundries and our PDK support for their processes. Our traditional customer base has been startups, but we have seen a big surge in the number of more established design/product companies switching to Tanner EDA as more of the market becomes aware of the company’s “less is more” approach to providing just the right level of features and functionality within a complete analog design flow from schematic capture, circuit simulation, and waveform probing to physical layout and verification.

    Past Newsletter Issues

    Summer 2011
    Spring 2011

    Read more in Newsletter archives




    Simulating in the Cloud
    by Paul McLellan on 09-13-2011 at 1:43 pm

    Yesterday I met with David Hsu who is the marketing guy for Synopsys’s cloud computing solution that they announced at their user-group meeting earlier this year. It was fun to catch up; David used to work for me back in VLSI days although he was an engineer writing place and route software back then.

    David admits that this is an experiment. Nobody really knows how EDA in the cloud is going to work (or even if it will work) but for sure nobody is going to find out sitting meditating in their office. So Synopsys took what they considered a bounded problem:

• High utilization, otherwise it will be too difficult to get even initial interest
    • Not incredibly highly-priced, so that the cloud solution doesn’t immediately undercut the existing business
    • Consumes lots of cycles when used, so the scalability of the cloud is a genuine attraction

    VCS simulation seemed to meet all those aims. Verification is, depending on who you ask and how you count, 60-70% of design. So certainly high utilization. VCS is not a $250K product. Of course nobody, not even Synopsys, really knows what customers are paying for it since they do large bundled deals. But not incredibly highly priced for sure. And it consumes lots of cycles when used. Lots. Especially coming up to tapeout when it may be necessary to run the entire RTL verification suite in as short a time as possible. Under those circumstances, a thousand machines for a day is a big difference from 10 machines for 3 months.

The cloud solution is sold by the hour, all inclusive. The precise price depends on how much you buy and all the usual negotiations, but it is of the order of $2.50 to $5 per hour. The back end is Amazon’s cloud solution and the sweet spot is batch-mode regression testing, where the thing everyone is most interested in optimizing is wall-clock time.
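
A quick back-of-the-envelope using those rates and the burst scenario mentioned earlier:

```python
# 1,000 machines for a day at the quoted $2.50-$5.00 per machine-hour.
machine_hours = 1000 * 24
for rate in (2.50, 5.00):
    print(f"${rate:.2f}/hr -> ${machine_hours * rate:,.0f} for the burst")
# The same 24,000 machine-hours on 10 in-house machines takes ~100 days;
# the cloud burst returns the regression results in one.
```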

The biggest challenge has been security issues. If a customer wants to buy a VCS simulator, an engineering manager can cut a PO. If a customer wants to ship the crown jewels of their company out of the building, then legal, senior management and even, in some companies, the Chief Security Officer need to get involved. Solving some of this is pure emotion, not driven by numbers. David told me of one meeting where a lawyer asked an engineering manager “how can you risk the company’s survival like this?” I suggested that an appropriate response from the engineering manager would have been “how can you risk the company’s survival by delaying all our tapeouts?” What is rational to an engineer is emotional to a lawyer, and vice versa.

But the underlying driver for the business is strong. Large companies are doubling the size of their server farms every two years (Moore’s Law for server farms, I guess). Increasingly, companies want a base load, maybe 75-80% of peak, handled in-house, and then offload the remaining 20-25% to something such as the Synopsys cloud solution.

I asked David if the business is growing faster or slower than expected. He admitted it was slower, but also pointed out, reasonably, that nobody has a clue how fast it should grow; it is a new market. But it does seem to be starting to catch on. The biggest attraction is a knob to turn that an engineering manager has never had before: you can reduce the time to run a regression by spending more money. But not much more money. And you don’t end up with licenses and hardware that will sit unused until the next peak load comes along.

To me one of the biggest challenges for EDA in the cloud is that nobody has a single-vendor flow. But if you have to keep moving data out of one cloud into another, especially with the size of design databases today, that is probably not tractable. David admitted this was a problem: they have customers who want to use Denali (now Cadence) with the Synopsys cloud solution, just like they do in their own farms. But in the cloud that requires more than just handing out two purchase orders; it requires Synopsys and Cadence to co-operate. How’s that working these days? Conceivably a 3rd party edacloud.com (I just made up the name) could integrate tools from multiple suppliers and deliver the true value of making everything work together, but it is probably not a realistic financial model to buy all the tools required up-front as a “normal” multi-year license. It might be easier for a 3rd party to get Denali from Cadence and VCS from Synopsys and put them up on Amazon, but even that is not clear. Ultimately it is customers who have the power to drive this. But as with the open PDK situation, even that might not be enough.

Moving an entire design database (once it gets to physical) is not simply a matter of copying it across the net. You can put it on a disk drive and FedEx it to Amazon, and they will handle this. In fact that is apparently how Netflix got its database of streamable video onto Amazon, although they shipped whole file-servers. But in the iterative loops of any design process this seems unwieldy, to say the least.

    Anyway, VCS in the cloud is clearly promising but also too soon to tell whether it is a harbinger of things to come or a backroad.

    For further details, visit the Synopsys cloud page where you can find the white papers too.





    Hardware Configuration Management approach awarded a Patent
    by Daniel Payne on 09-13-2011 at 11:21 am

Hardware designers use complex EDA tool flows that have collections of underlying binary and text files. Keeping track of the versions of your IC design can be a real issue when your projects use teams of engineers. ClioSoft has been offering HCM (Hardware Configuration Management) tools that work with the most popular flows: Cadence, Mentor, Synopsys and SpringSoft.

Today ClioSoft announced that patent number 7,975,247 has been issued by the US Patent & Trademark Office. The patent is titled “Method and system for organizing data generated by electronic design automation tools”. The product that uses this technology is called the Universal Data Management Adaptor (UDMA).

    Abstract

    A method and system for organizing a plurality of files generated by an Electronic Design and Automation (EDA) tool into composite objects is disclosed. The system provides a plurality of rules, which may be configured for various EDA tools. These rules may be configured for any EDA tool by specifying various parameters such as filename patterns, file formats, directory name patterns, and the like. Using these rules which are configured for an EDA tool, the files that form a part of the design objects are identified and packaged in the form of composite objects.

    Overview


    Patent Front Cover

Figure 4a shows a PCB file structure with folders for schematics (sch), symbols (sym) and wiring (wir) along with multiple versions. With UDMA this complex tree can be represented by figure 4b, where version control shows that schematic 2 and symbol 1 are the latest versions.

Figure 5 shows the basic flow of how files from an EDA tool are configured by the UDMA and readied for HCM.
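
In the spirit of those figures, here is a toy sketch of the rule-driven grouping described in the abstract. The filename patterns and helper below are hypothetical illustrations, not ClioSoft’s implementation:

```python
import re
from collections import defaultdict

# Rules keyed by filename pattern group an EDA tool's loose files into
# named composite design objects (hypothetical patterns for sch/sym/wir).
RULES = [
    (re.compile(r"(?P<cell>\w+)\.sch(\.\d+)?$"), "schematic"),
    (re.compile(r"(?P<cell>\w+)\.sym(\.\d+)?$"), "symbol"),
    (re.compile(r"(?P<cell>\w+)\.wir(\.\d+)?$"), "wiring"),
]

def compose(files):
    objects = defaultdict(list)
    for f in files:
        for pattern, kind in RULES:
            m = pattern.search(f)
            if m:
                objects[(m["cell"], kind)].append(f)   # versions group together
                break
    return dict(objects)

print(compose(["adc.sch.1", "adc.sch.2", "adc.sym.1", "notes.txt"]))
```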

    Inventors
    Anantharaman; Srinath (Pleasanton, CA), Rajamanohar; Sriram (Fremont, CA), Pandharpurkar; Anagha (Fremont, CA), Khalfan; Karim (San Jose, CA)

    Summary
    ClioSoft has built an IC-centric HCM system from the ground up that is unique enough to be patented. Your favorite IC tools have already been integrated by ClioSoft, or you can use the UDMA tool and add any new EDA tool for HCM.

    Also Read

    Transistor Level IC Design?

    How Tektronix uses Hardware Configuration Management tools in an IC flow

    Richard Goering does Q&A with ClioSoft CEO



    Another Up Year in a Down Economy for Tanner EDA
    by Daniel Payne on 09-13-2011 at 11:00 am

    Almost every week I read about a slowing world economy, yet in EDA we have some bright spots to talk about, like Tanner EDA finishing its 24th year with an 8% increase in revenue. More details are in the press release from today.

I spoke with Greg Lebsack, President of Tanner EDA, on Monday to ask how they are growing. Greg has been with the company for 2 1/2 years now and came from a software business background. During the past 12 months they’ve been able to serve a broad list of design customers, across all regions, with no single account dominating the growth. Our previous meeting was at DAC, three months ago, where I got an update on their tools and process design kits.

    Annual Highlights
    Highlights for the year ending in May 2011 are:

    • 139 new customers
• HiPer Silicon suite of analog IC design tools increasingly being used for sensors, imagers, medical, power management and analog IP
    • Version 16 demonstrated at DAC (read my blog from DAC for more details)
    • New analog PDK added for Dongbu HiTek at 180nm, and TowerJazz at 180nm for power management
    • Integration between HiPer Silicon and Berkeley DA tools (Analog Fast SPICE)

    Why the Growth?
    I see several factors causing the growth in EDA for Tanner: Standardized IC Database, SPICE Integration, Analog PDKs and a market-driven approach.

    Standardized IC Database
While many of their users run exclusively on Tanner EDA’s analog design suite (HiPer Silicon), their tools can co-exist in a Cadence flow, as version 16 (previewed at DAC) uses the OpenAccess database. This is an important point because you want to save time by using a common database instead of importing and exporting, which may lose valuable design intent. Gone are the days of proprietary IC design databases that locked EDA users into a single vendor; instead the trend is towards standards-based design data, where multiple EDA tools can be combined into a flow that works.

    SPICE Integration
    Analog Fast SPICE is a term coined by Berkeley Design Automation years ago as they created a new category of SPICE circuit simulators that fit between SPICE and Fast SPICE tools. By working together with Tanner EDA we get a flow that uses the HiPer Silicon tools shown above with a fast and accurate SPICE simulator called AFS Circuit Simulator (see the SPICE wiki page for comparisons). Common customers often drive formal integration plans like this one. I see Tanner EDA users opting for the AFS Circuit Simulator on post-layout simulations where they can experience the benefits of higher capacity.

    Analog PDKs
Unlike the digital PDKs grabbing headlines at 28nm, the analog world has PDKs that are economical at 180nm, as shown in the technology roadmap for Dongbu HiTek, an analog foundry located in South Korea.

Another standardization trend adopted by Tanner EDA is the Interoperable PDK movement, known as iPDK. Instead of using proprietary languages in their PDKs (like Cadence with SKILL), this group has standardized in order to reduce development costs. In January 2011 I blogged about how Tanner EDA and TowerJazz are using an iPDK at the 180nm node.

    Market Driven Approach
I’ve worked at companies driven by engineering, sales and marketing. I’d say that Tanner EDA is now more market-driven, meaning that they are focused on serving a handful of well-defined markets like analog, AMS and MEMS. In its early years I saw Tanner EDA as primarily engineering-driven, which is a good place to start out.

    Summary
Only a handful of companies survive for 24 years in the EDA industry, and Tanner happens to be in that distinguished group. Because they are focused and executing well, we see them in growth mode even in a down economy.



    When analog/RF/mixed-signal IC design meets nanometer CMOS geometries!
    by Daniel Nenni on 09-13-2011 at 9:22 am

In working with TSMC and GlobalFoundries on AMS design reference flows I have experienced firsthand the increasing verification challenges of nanometer analog, RF, and mixed-signal circuits. Tools in this area have to be both silicon-accurate and blindingly fast! Berkeley Design Automation is one of the key vendors in this market and this blog comes from discussions with BDA CEO Ravi Subramanian. I first met Ravi at the EDAC CEO Forecast panel I moderated in January; he is probably the only EDA CEO who spends more time in Taiwan than I do!

    When analog/RF/mixed-signal IC design meets nanometer CMOS geometries, the world changes. Analog/RF circuit complexity increases as more transistors are used to realize circuit functions; radically new analog circuit techniques that operate at low voltages are born, creating new analysis headaches; digital techniques are mixed into analog processing chains, creating complex requirements for verifying the performance of such digitally-controlled/calibrated/assisted analog/RF circuits; more and more circuits need to operate where devices are in the nonlinear operating region, creating analysis headaches; layout effects determine whether the full-potential of a new circuit design can be achieved; and second and third order physical effects now become first-order effects in the performance of these circuits.

    Circuit simulation remains the verification tool of choice, but with little innovation from traditional analog/RF tool suppliers, designers are forced to fit their design to the limitations of tools – breaking down blocks into sizes that lend themselves to easy convergence in transient or periodic analysis, using linear approximations to estimate the performance of nonlinear circuits, ignoring the impact of device noise because of the impracticality of including stochastic effects in circuit analysis, characterizing circuit performance for variation without leveraging distribution theory, and cutting corners in post-layout analysis because of the long cycle times in including layout dependent effects in circuit performance analysis.

    In the face of all of this, design flows are being retooled to leverage the best in class technologies that have emerged to solve these new problems with dramatic impact on productivity and silicon success. Retooling does not mean throwing out the old and bringing in the new – rather it is an evolutionary approach to tune the design flow by employing the best technology in each stage of the flow that will remove limitations or enable new analyses.

On September 22nd at TechMart in Santa Clara, a forum hosted by Berkeley Design Automation, including technologists from selected EDA, industry and academic partners, will showcase advanced nanometer circuit verification technologies and techniques. You’ll hear real circuit case studies where these solutions have been used to verify challenging nanometer circuits, including data converters, clock generation and recovery circuits (PLLs, DLLs), high-speed I/O, image sensors and RFCMOS ICs.

    I hope to see you there! Register today, space is limited.




    Memo To New AMD CEO: Time For A Breakout Strategy!
    by Ed McKernan on 09-12-2011 at 2:52 pm

“Where’s the Taurus?” In the history of company turnarounds, it was one of the most penetrating and catalyzing opening questions ever offered by a new CEO to a demoralized executive team. The CEO was Alan Mulally, who spent years at Boeing and at one point in the 1980s studied the successful rollout of the original Ford Taurus. To get a full sense of what Mulally faced, you have to follow the progression of the dialogue in a Fast Company article written after the encounter:

    Continue reading “Memo To New AMD CEO: Time For A Breakout Strategy!”