

How to detect weak nodes in a power-off analog circuit?
by Jean-Francois Debroux on 09-01-2014 at 4:00 pm

Most analog cells have a power-off mode intended to reduce power consumption. In this mode, all the circuit branches between the supply lines are set to a high-impedance state by driving the MOS gates to a blocking voltage. The situation is somewhat similar to that of tri-state digital circuits.

When a branch is set to that high-impedance state, all the nodes in that branch are in high impedance too. The concern is that the voltages on these nodes are undefined. More precisely, the actual voltages are set by leakage currents: some leakage currents pull the voltages up while others pull them down. Depending on the actual leakage current values, these high-impedance nodes, or weak nodes, can settle at almost any value, normally within one diode drop of the supply rails.

If such a weak node drives a CMOS inverter input, the inverter can draw a significant current whenever the weak node voltage happens to sit close to the inverter threshold.

Detecting such a situation is important, as it may impair production yield or, even worse, cause failures in the field later on after some leakage current has drifted. But it is not a simple task, since leakage currents are not always realistically modeled and, in any case, they change from wafer to wafer and from device to device. Monte-Carlo analysis might help, but leakage currents are not always statistically modeled properly either.

A feature common to most analog simulators (SPICE derivatives such as Eldo, Spectre, and probably others) can be used to reveal weak nodes.

This common feature is that the conductance of a circuit branch cannot be lower than a simulator parameter called “gmin”. This parameter often defaults to 1e-12 S but can be changed arbitrarily by the user. Its effect is that the simulated circuit has a 1/gmin resistance in parallel with every branch. It is normally intended to help convergence during the first iterations, but these added resistances are not removed once the solution is reached. This is why gmin must be set appropriately with respect to the circuit operating currents, to limit its impact on the result. We can use this feature to detect weak nodes.
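As a back-of-the-envelope sketch of this effect (plain Python with made-up branch count, leakage, and supply values, not simulator code): the measured power-off current of a healthy circuit is roughly the true leakage plus one VDD*gmin term for each off branch shunted by a 1/gmin resistor.

```python
def supply_current(gmin, i_leak=1e-12, n_branches=10, vdd=1.8):
    """Power-off supply current of a healthy circuit: true leakage plus
    the current through the N added 1/gmin shunt resistors, each seeing
    roughly VDD across its off branch."""
    return i_leak + n_branches * vdd * gmin

# At the default gmin of 1e-12 S, the shunt term (18 pA here) already
# dominates a 1 pA true leakage, which is how gmin can hide the real
# leakage current.
```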

Let’s analyze the following potentially faulty circuit:

To analyze it, let’s simulate the operating point and measure the supply current for various gmin values over a wide range. The following graph was obtained with Eldo for a 180 nm CMOS process:

The curve shows four different areas from left to right: a horizontal branch, a steep positive slope, a plateau, and a positive slope. The rightmost positive slope, at high gmin values, results from the current flowing through the 1/gmin resistances in the branches. The leftmost horizontal branch results from leakage currents. The plateau and the steep slope on the left, however, result from the weak-node effect. The reason is that once the 1/gmin resistance gets low enough, the two branches driving the weak node pull it towards VDD/2, causing simultaneous conduction in the inverter.

Now, let’s fix that circuit by driving the weak node to 0 (1 would work too) and run the same analysis again:


Now, the curve is normal, with a nearly constant slope of about one decade per decade and a horizontal branch on the left. Note that the default 1e-12 gmin value is high enough in this case to hide the actual leakage current.

The gmin-sweeping method can detect weak nodes through their signature. If a plateau exists for intermediate gmin values, and even more significantly if a steep slope exists, there is a good chance that weak nodes affect the circuit's power-off current. Tracing the currents through the circuit hierarchy then leads you to the location of the issue.
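To make the signature concrete, here is a small sketch (plain Python; the slope thresholds are assumptions for illustration, not values from the article) that flags a steep log-log slope immediately followed by a plateau in a gmin sweep.

```python
from math import log10

def loglog_slopes(gmin, current):
    """Decade-per-decade slopes between consecutive sweep points."""
    pts = list(zip(gmin, current))
    return [(log10(i2) - log10(i1)) / (log10(g2) - log10(g1))
            for (g1, i1), (g2, i2) in zip(pts, pts[1:])]

def has_weak_node_signature(gmin, current, steep=1.5, flat=0.2):
    """Flag a steep rise (> steep dec/dec) immediately followed by a
    plateau (|slope| < flat) -- the weak-node signature described above."""
    s = loglog_slopes(gmin, current)
    return any(s[i] > steep and abs(s[i + 1]) < flat
               for i in range(len(s) - 1))
```

A healthy curve (leakage floor rising smoothly into the one-decade-per-decade region) never trips the steep-then-flat test, while a faulty curve with a sharp rise into a plateau does.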

To check that you can use this method with your simulator, you can use the faulty circuit above and its fixed version. This method has proved useful in many cases, but there is no guarantee it will always detect weak nodes, especially in large circuits, since the faulty current can be hidden by the sum of normal leakage currents. This is why I suggest applying the method incrementally, for every individual block, upwards to the circuit top.



September is Semiconductor Design Webinar Month!
by Daniel Nenni on 09-01-2014 at 9:00 am

The nice thing about webinars is that if you register for the live one and you can't attend, you will still get first notice when the replay goes up. The other nice thing is that you can read a blog review of a webinar or whitepaper on SemiWiki first to see if it is worth your time. And if you do attend a webinar, you can post a review of it on SemiWiki if you think it is worth other people's time.

The SemiWiki calendar is one of the most traveled areas and webinars are the primary reason. Members can post webinar notices or other events on the calendar for all to see. Collaboration is the key to the success of the fabless semiconductor ecosystem and webinars are very collaborative.

Looking at September, I don't remember seeing so many webinars in one month, but it is certainly an encouraging sign:


  • Reduce Your Design Risk By Enabling a Comprehensive Signoff Flow for Timing Constraints
  • VCS AMS for Advanced SoC Mixed-signal Verification
  • Physical Lint: Fast RTL Analysis that Identifies Logic Structures Known to Negatively Impact Design Convergence
  • Flow to improve collaboration within dispersed design teams
  • Pre-silicon Optimization of System Designs using the ARM CoreLink NIC-400 Interconnect
  • SoC Emulation Made Easy
  • Quick Introduction to SCE-MI
  • IP Signoff
  • Static Design Rule Checks in FPGA Design
  • TCAD to SPICE Simulation of SiC and Si Power Devices
  • Powerful and Easy to Use RTL Restructuring
  • TSMC Open Innovation Platform Ecosystem Forum

    Okay, the TSMC OIP is not a webinar but it is definitely a premier fabless semiconductor ecosystem event:

    The TSMC Open Innovation Platform® Ecosystem Forum brings together TSMC’s design ecosystem companies and our customers to share real case solutions to today’s design challenges. Success stories that illustrate best practice in TSMC’s design ecosystem will highlight the event.

    More than 90% of last year’s attendees said that the Forum helped them “better understand TSMC’s Open Innovation Platform” and that “they found it effective to hear directly from TSMC OIP member companies.”

    This year’s event will prove equally valuable as you hear directly from TSMC OIP companies about how to leverage their technology to your design challenges!

    This year, the forum is a day-long conference kicking off with trend-setting addresses and announcements from TSMC executives.

    The afternoon sessions include 30 selected technical papers from TSMC’s EDA, IP, Design Center Alliance and Value Chain Aggregator member companies, and an Ecosystem Pavilion featuring up to 80 member companies showcasing their products and services.

    Date:
    Tuesday, September 30th, 2014 8:00 AM – 6:30 PM
    Venue: San Jose Convention Center, CA
    Learn About:

    • Emerging advanced node design challenges including 16FinFET, 16FinFET+, 20nm, 28nm and TSMC-specific design solutions
    • Successful, real-life applications of design technologies and IP from ecosystem members and other TSMC customers
    • Ecosystem-specific TSMC reference flow implementations
    • New innovations for next generation product designs

    Hear directly from ecosystem companies about their TSMC-specific design solutions

    Network with your peers and 1,000 industry experts and end users.

    TSMC Open Innovation Platform Ecosystem Forum is an “invitation-only” event. Please register to attend. We look forward to seeing you at the 2014 Open Innovation Platform Ecosystem Forum.



    Power and Thermal Analysis of Data Center and Server ICs
    by Daniel Payne on 08-31-2014 at 4:00 pm

    The server market is a diverse, yet standardized market. The ICs and components designed and manufactured into final assemblies must meet form-factor requirements for rack mounts and blades. The form-factor enclosures and the component placement dictate the thermal-mechanical properties, and hence the thermal cooling limits, which are driven by the energy and power consumption of the system.

    The workloads for server applications also vary significantly, but all share elements of reliability, security and manageability. Many server operating systems combine virtualization technologies, and the applications can be multi-threaded and amenable to heterogeneous or symmetrical multi-processing.

    Related: Power Modeling and Simulation of System Memory Subsystem

    Simulation of the dynamic behavior of complex multi-core CPU designs, with high reliability storage and memory and high throughput IO is important for meeting thermal and power targets. You can over design and lose on costs, or target narrow use profiles and not meet performance or QoS requirements. Many performance and functional simulators do not address IO throughput and do not provide suitable trace information for accurate system-level power estimation. Traditional benchmark software workload analysis often does not account for error conditions, fault handling, user or network-defined conditions affecting packet throughput, processing as well as security and virtualization features.

    After determining the primary network operating system you will run on the server, estimated number of concurrent users and any storage requirements, the next critical decision to make is selecting the appropriate server form factor.

    Servers come in three general form factors: tower, rack and blade.

    So how could you go about designing ICs for server requirements with such a diversity of applications?

    Build a thermal-mechanical model
    Start with the targeted form factors in which your IC design will be used. In some 1U rack enclosures there is no local fan cooling. Many rack enclosures have multiple variable-speed fans and elaborate cooling mechanisms. The enclosure and component placement may dictate the TDP (thermal design power) limit of key components, notably the CPUs, chipset and memory modules.

    • Consider the enclosure, PCB/assembly and packaged-device volumes in x, y and z coordinates, then the material stack-up, material properties and HTCs (heat transfer coefficients).
    • Assign the power sources to the associated volumes consuming power and sensors or probes to monitor the temperature.
    • For an IC that could be used in several different enclosures, you could have models of each chassis or enclosure, and then a model of the PCB/assembly for that enclosure and of course, the package model of your IC’s.
    • When you model the HTC you can place sensors or probes at strategic locations such as the inlets, fan exhaust, CPU die/package, memory modules or memory devices.
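As a minimal illustration of such a thermal stack-up, here is a sketch of a single lumped heat path with hypothetical thermal resistances (not a real enclosure model), checking a candidate power level against a junction temperature limit:

```python
def junction_temp(p_watts, t_ambient=35.0, rth_jc=0.3, rth_ca=1.2):
    """Steady-state junction temperature (deg C) for one lumped heat
    path: ambient plus power times the junction-to-case and
    case-to-ambient thermal resistances (K/W)."""
    return t_ambient + p_watts * (rth_jc + rth_ca)

def within_tdp(p_watts, t_j_max=100.0):
    """Check a candidate power level against a junction temperature limit."""
    return junction_temp(p_watts) <= t_j_max
```

A full model would chain many such resistances per enclosure, PCB and package, with a probe at each node.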

    Related: ESL Tool Update from #51DAC

    Power models of the ICs
    Pre-existing IP can be re-used where its dynamic, leakage, and state-dependent power equations still apply. The power model architect can map the server IC power states and system states to the IP block. The architect must also account for server-specific functions such as redundancy, ECC, failover and recovery mechanisms, quantified in terms of logic area, along with the corresponding active, idle and standby/leakage power states and the percentage of residency in each state. New IP power models are created using the same parameterization: logic/transistor count or area, power states, power equations per state, and percentage of residency in each state as a function of workload and operating conditions.
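The state/residency parameterization described above can be sketched in a few lines (states, power levels, and residencies here are illustrative assumptions, not real silicon data):

```python
def average_power(states):
    """states: {name: (power_watts, residency_fraction)}.
    Residencies must sum to 1 for a well-formed workload profile."""
    total_res = sum(r for _, r in states.values())
    assert abs(total_res - 1.0) < 1e-9, "residencies must sum to 1"
    return sum(p * r for p, r in states.values())

# Hypothetical server IP block profile for one workload:
server_ip_block = {
    "active":  (2.0, 0.30),   # dynamic + leakage while processing
    "idle":    (0.5, 0.50),   # clocked but no traffic
    "standby": (0.05, 0.20),  # leakage-dominated
}
```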

    Power stimuli
    The stimulus can come from performance or functional simulators running server database and web-server applications, in the form of CSV or VCD exported traces. Portions of the trace can also be used to inject error conditions, retries or other activity based on characterized or statistical data. In this way the power architect can get the activity factors and dynamic power of the processing, memory and storage subsystems that are unique to server workloads. The IO and connectivity power can also be modeled using bandwidth and traffic generators with security and reliability features enabled and disabled. Throughput can be adjusted based on error conditions, retries and packet payload delivery. The user can create complex power traces by adding steps or tasks and concatenating the power stimulus to drive the power model with concurrent and pipelined tasks.
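A minimal sketch of that trace concatenation (task names, durations, and power levels are invented for illustration):

```python
def concat_tasks(*tasks):
    """Each task is a list of (duration_s, power_w) steps; concatenating
    tasks builds a composite power stimulus."""
    trace = []
    for task in tasks:
        trace.extend(task)
    return trace

def trace_energy(trace):
    """Total energy (joules) of a stepwise power trace."""
    return sum(d * p for d, p in trace)

db_query = [(0.002, 1.8), (0.010, 0.6)]   # burst then tail
io_burst = [(0.001, 2.2)]
retry = io_burst * 3                      # injected error/retry activity
workload = concat_tasks(db_query, io_burst, retry)
```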


    For multiple use cases, form factors and layouts, consider using an ESL power-thermal profiling tool flow:

    Docea Power provides an ESL power and thermal solution using the Ace Thermal Modeler, which can generate a compact thermal model for running coupled power-thermal simulations, as well as a Thermal Profiling tool that can be used with power traces from characterized workloads.

    Summary
    Power and thermal modeling for server ICs and SoCs is much like the approach used for SoCs in smartphone, tablet and mobile applications. However, the key differences are:

    • The IP blocks often have server specific hardware features such as ECC, security and packet processing acceleration. The corresponding power models need to comprehend server specific features and account for the power in server specific power states.
    • Server power states are based on high availability and QoS, so throughput is key. Processor and memory active and idle states are highly optimized, and many components can sit at standby power levels while still meeting low-latency requirements. New IP has been developed for latency-tolerant IO and new bus technologies.
    • Server specific features like security and error handling need to be included in the power model. The security, reliability and manageability functions may add power, but in some instances the power penalty is highly dependent on system software and operating conditions.
    • The power stimulus should provide configurable conditions for error injection, congestion and encryption/decryption in the event traces to activate server specific features.
    • The thermal model needs to account for various chassis, PCB orientation, airflow and ambient environmental conditions.


    TCAD to SPICE Simulation of Power Devices
    by Daniel Payne on 08-31-2014 at 1:30 pm

    The periodic table shows that Silicon (Si) is in a column along with other elements like Carbon (C) and Germanium (Ge). With so much emphasis on Silicon, you’d think that the other semiconductor materials have been neglected a bit.

    Silicon is a wonderful material, and most of our consumer electronics and handheld devices use it for transistors. However, when we open the hood of an automobile or ride an electric subway, there’s another semiconductor, SiC (silicon carbide), used for high-voltage applications. Companies like GeneSiC Semiconductor have designed a family of 1700V and 1200V SiC transistors for power electronics in:

     

    • Telecom and networking power supplies
    • Uninterruptable power supplies
    • Solar inverters
    • Industrial motor control systems
    • Downhole applications


    SiC Transistor

    To learn more about SiC, I plan to attend a webinar hosted by Dr. Eric Guichard from Silvaco on September 23rd from 10AM to 11AM (PDT). Eric has been with Silvaco since 1995, and prior to that he worked as a senior SOI engineer at LETI and Thomson Military and Space.

    Related: Teach Yourself Silvaco


    Dr. Eric Guichard, Silvaco

    Silvaco has a strong presence in the TCAD area, and at this webinar will discuss the methods used to design, simulate and optimize the performance of power devices using TCAD and SPICE simulations. Wide-bandgap semiconductors such as SiC have begun to attract attention due to their projected performance improvements over silicon. Simulating SiC devices is more challenging than simulating silicon-based devices. In this webinar Eric will review the requirements for accurately simulating SiC-based power devices. He will also present a completely automated TCAD-to-SPICE flow that helps reduce the cost and time taken to develop a silicon-based IGBT power device.

    Related: Silvaco News. Silicon Valley, China and Korea

    Here’s what to expect at the webinar:

     

    • Key challenges of power device TCAD simulation
    • Key challenges of SiC TCAD simulation
    • TCAD simulation of SiC IGBT, Trench MOS and DMOS
      • 2D and 3D TCAD simulations (meshing, solver, physical models)
      • When to use 3D over 2D
    • Full TCAD to SPICE IGBT flow example
      • Process and Device simulations for IV curve generation
      • TCAD-based SPICE parameter extraction using HiSIM-IGBT compact model
      • Correlation between circuit performance and process variation
      • Circuit performance optimization


    DMOS Transistor cross-section


    N-channel IGBT cross-section

    I’ve attended previous Silvaco webinars and found them to be informative, detailed and hosted by technical authorities with deep experience. My favorite part of a webinar is the Q&A time at the end, when engineers get their questions answered by the expert.

    Related: Modeling and Analysis of Single Event Effects (SEE)



    Big Data, the Cloud and the Internet of (Silicon) Things
    by Paul McLellan on 08-31-2014 at 7:01 am

    Next week, eSilicon is kicking off a very widespread survey to measure some important semiconductor design and manufacturing challenges. Their goal is to gauge customer sentiment regarding how Big Data, the Cloud and the Internet can impact these challenges. But here’s a secret: the survey is already live and you can go and fill it in right now.

    I went through the survey (I didn’t finally submit it since my “latest design” was a blog post stored in the cloud on the internet but didn’t involve any silicon design). The survey is 22 questions long and will take 5-10 minutes to complete. And there are prizes! You’ll be entered to win one of five $100 Amazon gift certificates (or they will donate $100 to the Red Cross if you prefer).

    After asking you about the technology of your last design, eSilicon get onto what factors were difficult. Embedded software, quality/stability of project inputs, power closure, performance, yield management, area, IP bugs, test, packaging, and so on.

    Next, on to business challenges such as time to market, production leadtimes, production cost, mask tooling cost and more.

    The heart of the survey is how you want to see big data, the cloud and the internet leveraged in semiconductor design and manufacturing processes.

    I won’t run through all the questions. After all, you should really go over there and read them all and fill in your answers. The more data collected the more useful the responses will be.

    But question 18 is interesting. It asks which channels are most important for keeping you informed about semiconductor design and manufacturing. Newsletters, magazines, vendor websites, social media, conferences; and which ones. And of course blogs. This question should clearly be answered SemiWiki, although I might just be biased.


    The rubric from eSilicon says: Over the last 14 years eSilicon has optimized ASIC designs and accelerated time-to-market for our customers. Using our deep design and manufacturing experience, plus customizable IP, we’ve successfully delivered almost 200 million chips across 170+ customer designs. We’re working on a wide range of exciting designs today.

    We’ve always leveraged database and Internet technologies to automate our business, resulting in a more transparent, predictable and reliable experience for our customers. We see Big Data and the Cloud playing important roles going forward. We want to know what you think, to better understand your needs and ensure that our Big Data, Cloud and Internet of (Silicon) Things strategies are aligned.

    Your answers will be kept in confidence and your responses to our questions will not be used to try to sell you anything.

    The link to the survey is here.

    More articles by Paul McLellan…



    Webinar: Collaboration Within Dispersed Design Teams
    by Daniel Nenni on 08-30-2014 at 7:00 am

    In the face of shrinking time-to-market windows, semiconductor companies are aggressively vying with each other to bring out new ICs and SoCs, or variants of existing ones, to gain market share. The growth of the mobile market – wireless, networking, storage, and computing – as well as new areas such as the Internet of Things (IoT) and wearables, has resulted in an increased use of analog/mixed-signal (AMS) and/or RF functionality coupled with digital logic. To emerge as leaders in a specific market segment, design teams need to overcome increasing challenges to tape out chips successfully. Not only are teams using more IP in a given design, they are also dealing with challenges such as developing complex functionality, managing multifaceted design flows using best-in-class tools, and complex mixed-signal verification, often in the face of shrinking geometries.

    Webinar registration: Flow to improve collaboration within dispersed design team

    Exacerbating all of the above is the difficulty of finding all the necessary talent in one location. This has resulted in semiconductor companies opening up design centers all over the world, wherever good talent is available, and investing heavily in the training of new college graduates. But for an SoC to be taped out on schedule, these dispersed design team members must work in tandem with each other as well as overcome their cultural and communication barriers. As the design teams get larger and spread across geographical boundaries, it becomes an increasing dilemma to coordinate the work amongst the team members, especially for analog and mixed-signal designs.


    For design team managers and engineers to be more productive and efficient, they need quick answers to questions such as:

    • What design changes have been checked in this week?
    • Is the layout for this design DRC/LVS clean?
    • The design was working yesterday. What changed?
    • Which revision of the schematic was the layout created for?
    • Which designs have been completed and frozen?
    • Looks like the schematic had some ECOs! What exactly changed?

    Managing large teams spread across multiple sites has never been very easy. It becomes more difficult when one considers the challenges of taping out a mixed-signal SoC successfully, across different time zones and cultures. To improve collaboration between multi-site design team members, it is important to lay the foundations of a good infrastructure and invest in data management software. Without the presence of design data management and a formal collaboration process, a lot of time is spent on needless communication. This often leads to a big loss of productivity for the engineers and creates spurious errors, which take considerable time to rectify.

    Also Read: Leveraging Design Team Energy!

    Of course, a data management solution has to work with complex design flows and enhance productivity rather than adding more overhead for the design engineers. To learn about a flexible, comprehensive and non-intrusive data management flow that works with different design tools and will help improve design collaboration between widely-dispersed team members, ClioSoft is giving a webinar on the use of the SOS Design Collaboration Platform integrated with Cadence Virtuoso® technology.

    Webinar registration: Flow to improve collaboration within dispersed design team

    Also Read

    Leveraging Design Team Energy!

    Webinar: Making Design Reuse Work

    Importance of Data Management in SoC Verification



    Assertion Synthesis: From Startup to Mainstream
    by Daniel Payne on 08-30-2014 at 7:00 am

    In college many of us dreamed of starting up our own company by offering something new that has never been done before. Today I spoke by phone with Yunshan Zhu in Shanghai, and he has actually lived out this scenario by founding NextOp in 2006, then getting that company acquired by Atrenta in 2012. The new capability that NextOp created was something called assertion synthesis, and the product name is BugScope.


    Continue reading “Assertion Synthesis: From Startup to Mainstream”



    Transistor-level Sizing Optimization
    by Daniel Payne on 08-29-2014 at 4:00 pm

    RTL designers know that their code gets transformed into gates and cells by a logic synthesis tool; however, these gates and cells are in turn made up of transistors, and sometimes you really need to optimize the transistor sizing to reach power, performance and area goals. I’ve done transistor-level IC design before, and the old process of manually choosing a transistor size, simulating in SPICE, analyzing, then changing the size and re-iterating is time consuming. Once again EDA tools come to the rescue, in the form of transistor-level sizing optimization.
    Related: An IO Design Optimization Flow for Reliability in 28nm CMOS

    I just read about an engineer at Altera named Oh Guan Hoe who wanted to use such an automated approach to design their FPGAs. He presented his approach at the MunEDA User Group meeting. Altera got started back in 1983 and offered the first reprogrammable logic semiconductor chips, and today they are #2 behind Xilinx.

    The FPGA routing architecture used at Altera has two levels of hierarchy, where the lowest level is an adaptive logic module (ALM) and the second level is a collection of rows and columns of routing wires.

    An FPGA routing switch is designed with muxing pass transistors, buffers and output demux transistors. It is the primary contributor to overall FPGA performance and hence critical for circuit optimization.

    In the above diagram the buffering and drive stage has an inverter with a single PMOS device used as a half-latch to restore high levels after the single NMOS pass transistors. The final inverter size is tuned based upon the routing wire length. The NMOS pass transistors at nodes “m” and “n” represent the programmability in the routing region.

    Related: Five Things You Don’t Know About MunEDA…and One You Do

    The performance and optimization goals were to:


  • Achieve 5% faster delay time
  • Match the delay between rise-rise and fall-fall times on the routing drivers within 2ps
  • Reduce the layout area by 5 to 10%
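Conceptually, the kind of search such an optimizer automates can be sketched as follows. This is a toy drive-strength delay model and an exhaustive grid search, invented for illustration; the real WiCkeD algorithms are far more sophisticated.

```python
def delays(wp, wn, cload=10.0):
    """Toy RC delay model (arbitrary units): delay ~ load / drive
    strength, crediting the NMOS ~2x mobility for equal width."""
    return cload / wp, cload / (2.0 * wn)   # (rise, fall)

def best_sizing(widths, skew_limit=0.2):
    """Smallest-area sizing (wp + wn as an area proxy) whose rise and
    fall delays match within skew_limit."""
    feasible = []
    for wp in widths:
        for wn in widths:
            rise, fall = delays(wp, wn)
            if abs(rise - fall) <= skew_limit:
                feasible.append((wp + wn, wp, wn))
    return min(feasible) if feasible else None
```

With an allowed width set of {1, 2, 3, 4}, the search keeps only the balanced pairs and then picks the smallest-area one.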

    The basic flow for transistor-level design and optimization at Altera is shown below:

    Schematic capture is done with the popular Cadence Virtuoso tool, and then circuit simulation can be done with a selection of SPICE, Analog FastSPICE or FastSPICE tools. The transistor optimization step is done with a tool called WiCkeD from MunEDA.

    An initial circuit simulation was run to establish a baseline on three different driver circuits, where only Driver A actually met the delay-matching goal of < 2ps:

    The WiCkeD simulation analysis included:

    • Simulation
    • Deterministic Nominal Optimization
    • Worst-case operation

    Driver A was run through optimization and showed an area reduction of 12% (Green is in spec, Red is out of spec). Charts on the left-hand side are for the delay times of fall-fall, and rise-rise, respectively. The upper right chart is the delta in time delay between rise and fall. Area reduction is the final chart in the bottom right:

    Results for Driver B after optimization show an area reduction of 9%:

    Finally, results for Driver C after optimization show a 15% reduction in area:

    In the GUI for WiCkeD you can see a comparison of nominal versus worst-case values after optimization:

    Summary

    Three different driver circuits were simulated and optimized at the transistor level using the WiCkeD tool to:

    • Achieve a balance in rise and fall times
    • Reach area reduction goals
    • Run simulation and optimization in a reasonable amount of time

    The delay times were a bit off from the design targets, but still within an acceptable range.

    Read the complete 32 page presentation here.

    Upcoming Events

    You can visit with MunEDA at the following events in September:



    FinFET Design for Power, Noise and Reliability
    by Daniel Payne on 08-29-2014 at 4:00 pm

    IC designers have been running analysis tools for power, noise and reliability for many years now, so what is new when you start using FinFET transistors instead of planar transistors? Calvin Chow from ANSYS (Apache Design) presented on this topic earlier in the summer in a 33-minute webinar that has been archived. A brief registration is required to view the archived webinar.

    Related: FinFET Based Designs Made Easy & Reliable

    A quick recap of why FinFET device characteristics at 14nm are better than bulk at 20nm or 28nm includes:

    • Improved speed
    • Reduced power
    • Higher device density

    This chart shows performance versus VDD values for three technology nodes: FinFET at 14nm, 20nm planar and 28nm planar:

    As the value of VDD lowers, the circuit delay improvement of the FinFET increases versus planar devices. On the downside, FinFET designs add new challenges like:

    • Reduced noise margins
    • Reduced EM (Electro Migration), ESD (Electro Static Discharge) tolerance
    • Increased temperature effects
    • Higher device capacitance

    The specific EDA tool that ANSYS offers for power, noise and reliability analysis is called RedHawk 2014 and it addresses each of the new challenges:

    Instead of analyzing chip and package separately for IR drop, the recommended approach is a simultaneous analysis using a distributed model of the package instead of a simple, lumped model. Shown below are IR drop analysis results of chip and package, first using a lumped package model which has a 13.8mV drop, next using a distributed package model which shows a more accurate 19.2mV drop value:
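The gap between the two models can be illustrated with a toy calculation. The currents and resistances below are invented so that the result echoes the numbers above; they are not taken from the webinar.

```python
def lumped_drop(i_total, r_effective):
    """Lumped package model: one effective resistor, so the reported
    drop is an average over the whole die."""
    return i_total * r_effective

def distributed_worst_drop(region_currents, path_resistances):
    """Distributed model: each die region draws its own current through
    its own package path; the worst region sets the reported drop."""
    return max(i * r for i, r in zip(region_currents, path_resistances))

# Hypothetical numbers: a 1 A die averaged over a 13.8 mOhm effective
# package vs. a hotspot region drawing 0.4 A through a 48 mOhm path.
lumped = lumped_drop(1.0, 0.0138)                       # 13.8 mV
worst = distributed_worst_drop([0.4, 0.3, 0.2, 0.1],
                               [0.048, 0.02, 0.02, 0.02])  # 19.2 mV
```

The lumped average cannot see the hotspot, which is why the distributed model reports the larger, more realistic drop.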

    Typical runtime to do a package extraction with RedHawk-CPA on a 6 layer package is 10 minutes, while using about 15 GB of RAM. To do the IR drop simulation required just 58 minutes and under 8 GB of RAM.

    FinFET Foundries
    The FinFET processes at TSMC 16nm v1.0 and the Intel custom foundry at 14nm are certified for use with RedHawk 2014 on the following types of analysis:

    • Resistance correlation
    • EM rule handling
    • IR, Dynamic Voltage Drop extraction and analysis

    I would expect support for FinFET processes at Samsung and GLOBALFOUNDRIES in the near future.

    Related: Intel & ANSYS Enable 14nm Chip Production

    Analysis
    With higher drive currents, it’s even more important for FinFET designs to have layout checks for connectivity analysis like:

    • Missing vias
    • Power and ground grid weakness check
    • Resistance checks
    • Power/Ground balance
    • Switch placement
    • Pad placement
    • IR drop checks
    • High power density checks

    Reliability analysis includes things like: Electro Migration (EM), thermal and Electro Static Discharge (ESD). Power noise analysis looks for issues of: dynamic voltage drop (DVD) on the power grid, low power compliance with multiple voltage domains, and the impact from power noise on timing. As an example, here’s a plot showing analysis results for timing hotspot and a DVD map, so you can focus first on fixing the IR drop issues in the timing hotspot area:

    Summary

    The engineers at ANSYS have a long history of building EDA tools for power, noise and reliability analysis. Now they’ve extended that experience into newer IC designs using FinFET technology from foundries like TSMC and Intel using the RedHawk 2014 software release.



    Improving Complex System Design
    by Paul McLellan on 08-29-2014 at 7:01 am

    Next week Mike Jensen of Mentor will present a webinar Improving Complex System Design Reliability and Robustness. The webinar will be presented live twice and presumably available for replay soon after, as is usually the case:

    • September 4th 6.00-6.45am pacific (9pm in Asia, 3pm in most of Europe)
    • September 4th 10.00-10.45am pacific


    This is actually a webinar about Mentor’s SystemVision multi-physics development environment. The webinar will cover how using model-based design can produce order-of-magnitude improvements in productivity and quality and help ensure the reliability and robustness of your next system design.

    Complex systems require a unique design approach to ensure reliable and robust performance. A model-based design methodology provides a virtual system incubator for evaluating design ideas, analyzing parameter variability, optimizing costs, and jump-starting production. Using this approach, advanced individual component models overcome limitations in traditional approaches and account for manufacturing and environmental variations so that the integrated system model reflects the cumulative variability. Tolerance stack-ups can be evaluated, and reasonable performance margins verified, resulting in improved design reliability, and lower system and warranty costs.


    Mike will showcase robust design methodologies for system development, focusing on statistical and parametric analysis methods (Monte Carlo, sensitivity, worst case) presented in SystemVision. SystemVision supports full-featured model development using both VHDL-AMS and SPICE formats, simulates systems at multiple levels of model abstraction, quantifies the effect of variation in component performance, and is fully integrated with the popular DxDesigner / Xpedition environment from Mentor Graphics.
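The flavor of that statistical variability analysis can be sketched in a few lines. This is a hypothetical first-order RC low-pass stage with assumed uniform 5% component tolerances, not a SystemVision model:

```python
import random
from math import pi

def cutoff_hz(r, c):
    """-3 dB cutoff frequency of a first-order RC low-pass filter."""
    return 1.0 / (2.0 * pi * r * c)

def monte_carlo_cutoff(n=10000, r_nom=1e3, c_nom=100e-9, tol=0.05, seed=1):
    """Sample uniform component tolerances and return the spread of the
    cutoff frequency -- a toy tolerance stack-up analysis."""
    rng = random.Random(seed)
    samples = [cutoff_hz(r_nom * (1 + rng.uniform(-tol, tol)),
                         c_nom * (1 + rng.uniform(-tol, tol)))
               for _ in range(n)]
    return min(samples), max(samples)
```

The spread between the minimum and maximum sampled cutoffs is the tolerance stack-up against which performance margins would be verified.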

    This is a product which spans a huge range of capabilities and has a correspondingly broad range of people who would benefit from attending: system engineers, control engineers, mechanical engineers, engineers developing test environments for complex multi-domain systems, engineers doing analog, digital or mixed signal design. And of course, people managing groups in these areas.

    More information, including a link to register for the webinar, is here. More information on SystemVision is here.


    Also, if you are in automotive in Michigan, there is a special automotive U2U meeting in two weeks on September 10th in the Aurora Hotel in Dearborn. Details, including a registration link (it’s free) are here.


    More articles by Paul McLellan…