

Know All About ESD and Save Your Chips & Systems
by Pawan Fangaria on 08-24-2014 at 7:30 pm

In this age of electronics, especially with so many types of handheld devices and more wearable devices on the way, it is of utmost importance to protect the massive circuitry inside those tiny parts from ESD-related failures. The protection needs to happen at all stages – cells inside the chips, the package, and the whole system – with special ESD cells connected to external pins to absorb or drain the ESD energy and thus save sensitive internal circuits from damage. As an example, if not protected, the sensitive internals of an IC can be damaged by a discharge of even 8V to 10V, far below the level a human can feel. So, is it possible to examine and protect each cell without overdesigning the circuit? It cannot be done without automated tools; complete analysis at the chip (layout, connectivity, metal grid, resistances, substrate, etc.), package and system level has to be done, weak spots found and fixed, and most importantly, ESD signoff completed before manufacturing a chip or finalizing the assembly of a system.

Amid ever-shrinking geometries, isolated power and ground networks for analog and digital in mixed-signal designs, and other complexities in SoC design and packaging, the challenge of protecting circuits and systems from ESD has only increased. Read this article for more details about the typical ESD challenges and their solutions. The new trends, tools, demos and how-to tutorials are ready to be unveiled at this year's EOS/ESD Symposium, to be held September 7-12 at the Westin La Paloma, Tucson, AZ, USA. A quick look at the agenda tells me about an interesting commercial tutorial offered by a leader in this domain, ANSYS.

The tutorial (including lecture, presentations and demonstrations) will cover ESD analysis – including identification of HBM, MM and CDM on-chip ESD failures – and a solution for the complete chip-package-system of a semiconductor design using the ANSYS-Apache suite of tools. PathFinder, a robust, silicon-proven solution certified by several semiconductor foundries, provides versatile full-chip static ESD analysis as well as block/IP-level static and dynamic analysis. It checks the design for adherence to ESD rules and reports any violations or weak areas, such as current density exceeding the limit of a wire or via, or inappropriate resistance between pads and clamp cells, between active devices and clamp cells, or between multiple clamp cells. An easy-to-use GUI environment is provided for debugging the violating paths. The system can also be used for early prototyping and design exploration to trade off area against metal routing.
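
To make the flavor of such a static check concrete, here is a minimal sketch of a pad-to-clamp resistance check on an extracted metal network. It is purely illustrative – the node names, the 1-ohm limit and the graph model are assumptions, not PathFinder's actual data model – and it sums resistance along the best single path, whereas a production tool computes effective resistance over the full network:

import networkx as nx

# Hypothetical rule: maximum metal resistance from an I/O pad to its clamp.
R_LIMIT_OHMS = 1.0

# Model the extracted network as a graph; edge weights are wire/via resistances.
g = nx.Graph()
g.add_edge("PAD_IO3", "M5_strap", res=0.2)
g.add_edge("M5_strap", "via_array_12", res=0.15)
g.add_edge("via_array_12", "CLAMP_A", res=0.4)
g.add_edge("M5_strap", "CLAMP_B", res=1.3)   # deliberately weak connection

def check_pad_to_clamp(graph, pad, clamps, limit):
    # Flag clamps whose best-case metal path from the pad exceeds the limit.
    for clamp in clamps:
        try:
            r = nx.dijkstra_path_length(graph, pad, clamp, weight="res")
        except nx.NetworkXNoPath:
            print(f"VIOLATION: no discharge path from {pad} to {clamp}")
            continue
        status = "ok" if r <= limit else "VIOLATION"
        print(f"{pad} -> {clamp}: {r:.2f} ohm [{status}]")

check_pad_to_clamp(g, "PAD_IO3", ["CLAMP_A", "CLAMP_B"], R_LIMIT_OHMS)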

The system also works with other ANSYS tools, such as RedHawk and Totem, to handle ESD verification and signoff of large SoCs, and with the ANSYS HFSS and SIwave platforms (using chip-level ESD models generated by PathFinder) for a system-level ESD solution.

It’s a nice tutorial to attend to understand and appreciate how ESD issues are uncovered and solved by this set of tools in a complex SoC design and assembly. The full-day tutorial is divided into three sessions, as per the following schedule –

Tutorial Day – September 11, 2014

Session 1 – Chip-level Static ESD Analysis – 8:30 AM – 12:00 PM

This session includes full-chip, layout-based ESD analysis with comprehensive pin-to-pin connectivity checks, resistance checks and interconnect failure analysis along with their root causes. There will be case studies demonstrating hard-to-detect ESD issues, narrow current paths, missing vias across multiple metal layers and so on.

Session 2 – Dynamic ESD Analysis – 1:00 PM – 2:30 PM

In this session, IP-level transient ESD simulation will be explained, along with how it helps in failure analysis. The diagnosis and analysis of transient ESD behavior will be discussed, including modeling of the die-level metal grid, substrate network and well diodes, the effective capacitance of the package, and the pogo pin. Interesting case studies will be presented highlighting how PathFinder uncovered quite involved HBM (Human Body Model) and CDM (Charged Device Model) failures.

Session 3 – System-level ESD Analysis – 3:00 PM – 4:30 PM

This session will cover system-level ESD simulation in detail. This includes CECM (Chip ESD Compact Model) generation using PathFinder for system-level ESD analysis, and full-wave model generation of the ESD gun, ESD protection devices, PCB wires and vias, and connectors using ANSYS HFSS and SIwave. A comprehensive CPS (Chip-Package-System) dynamic ESD simulation will be run, addressing IEC 61000-4-2 testing and correlation to silicon measurements.

So, go ahead and register for this tutorial here. Choose option 3 – Symposium plus tutorials or seminars. Check this tutorial flyer for more information about the tutorial.

If you are attending the Symposium, then do not forget to attend these excellent technical paper presentations by experts in the semiconductor industry –

On September 10, 9:00 AM – 10:15 AM

4A.1 HBM Failure Diagnosis on a High-frequency Analog Design with Full-chip Dynamic ESD Simulation
Paul Tong, Anna Tam, Ping Ping Xu, KS Lin, John Hui, Pericom, Inc.
Norman Chang, Bo Hu, Karthik Srinivasan, Margaret Schmitt, ANSYS, Inc.

4A.2 System-Level ESD Failure Diagnosis with Chip-Package-System Dynamic ESD Simulation
Robert (Soungho) Myoung, Norman Chang, ANSYS, Inc.
Byongsu Seol, Samsung Electronics Co., Ltd.

More Articles by Pawan Fangaria…..



Kilopass v. Sidense Update!
by Daniel Nenni on 08-24-2014 at 12:30 pm

It looks like Sidense finally has closure on its request for attorney fees. Generally, in the U.S., parties in a lawsuit pay their respective attorney fees, which can be staggering. However, U.S. law allows the courts to shift the payment of the winner’s attorney fees to the losing party in “exceptional” cases. Recent case law has given “exceptional” a much less stringent definition, and our own Kilopass v. Sidense is one of those cases.

Also Read: Sidense Beats Kilopass in Court Again!

“Judge Illston’s discretionary order seems unlikely to be overturned,” said Roger Cook of Kilpatrick Townsend & Stockton, who represents Sidense. “Essentially, all her facts are taken verbatim from facts stated in the Federal Circuit opinion and from her order granting summary judgment of noninfringement, which was summarily affirmed.”

Two years ago a judge denied the Sidense request for attorney fees; that denial has just been reversed. You can read the 25-page decision HERE. It is an entertaining read and illustrates what NOT to do when filing a patent infringement claim. Here is one of the more interesting parts:

Despite that advice from its Perkins patent counsel that Sidense did ‘NOT infringe [the] claims literally,’ and that Kilopass’s case was ‘much tougher,’ Kilopass retained the law firm of Morrison Foerster (“MoFo”) to conduct another analysis.

MoFo then immediately began its more detailed investigation in order to meet Kilopass’s deadline. However, eight days later, on March 27, 2008, Kilopass instructed MoFo to stop all work on the project. MoFo subsequently sent Kilopass an invoice for 44 hours of work at a cost of $20,125 “relating to Kilopass’s investigation of potential infringement claims against Sidense.” The invoice was accompanied by a preliminary infringement chart for the ’751 patent reflecting [MoFo’s] analysis.

Also Read: ATopTech’s Legal Woes Continue!

Although MoFo’s preliminary infringement chart opined favorably to Kilopass regarding the doctrine of equivalents, there is no evidence in the record that MoFo’s analysis was complete at that time, nor is there any evidence that Kilopass considered MoFo’s preliminary infringement chart in deciding to bring suit against Sidense.

Moreover, there is no evidence in the record showing that Kilopass informed MoFo about the Perkins counsel’s prior analysis or Mr. Peng’s prior statements about the size differences between Sidense’s memory cells and Kilopass’s memory cells. Kilopass retained MoFo to conduct an infringement analysis but terminated that relationship only eight days later. It does not appear that Kilopass was aware of how much work MoFo had done up to that point or that MoFo was even in the process of completing an infringement chart. In other words, it appears that Kilopass officials had already made up their minds prior to learning of MoFo’s infringement analysis.

Sidense is looking to recover more than $5M in legal fees. Yes, that is how much it costs to defend against a patent infringement claim, frivolous or not. And remember, during discovery all of your private emails, meeting notes, and social media activity are fair game, so always be very careful about what you put in writing.

About Sidense Corp.
Sidense Corp. provides very dense, highly reliable and secure non-volatile one-time programmable (OTP) Logic Non-Volatile Memory (LNVM) IP for use in standard-logic CMOS processes. The Company, with over 120 patents granted or pending, licenses OTP memory IP based on its innovative one-transistor 1T-Fuse™ bit cell, which does not require extra masks or process steps to manufacture. Sidense 1T-OTP macros provide a better field-programmable, reliable and cost-effective solution than flash, mask ROM, eFuse and other embedded and off-chip NVM technologies for many code storage, encryption key, analog trimming and device configuration uses. For more information, please visit www.sidense.com.



FinFETs for your Next SoC
by Daniel Payne on 08-24-2014 at 7:00 am

Planar CMOS processes have been offered for decades, and all the way down to the 28nm node they have been riding the benefits of Moore’s Law. A few years back we started hearing from Intel about TriGate (aka FinFET) at the 22nm node, a more 3D approach to building transistors than planar CMOS. Ever since then, the foundries and IDMs have been jockeying for position on FinFET technology.

Should your next SoC design use a FinFET process? It all depends on your product requirements and on which libraries are available to implement your design. I viewed a webinar from July that addressed this topic, presented by Prasad Saggurti, who works at Synopsys in the embedded memory IP group.

The basic FinFET layout in ideal form looks like this:

The active gate area is the intersection of the green gate material and the arrows, which show the direction of current flow. As a 3D structure, the layout area of a FinFET is smaller than that of a planar device, and the source-drain region of a FinFET is fully depleted, making for:

  • Reduced transistor leakage compared to planar CMOS
  • Less variability
  • Lower operating voltages

Multiple fins are required to get different effective widths for a FinFET transistor, so device sizing is quantized rather than continuously variable, as the sketch below illustrates:
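
Here is a minimal back-of-envelope sketch of that quantization, using the common approximation that each fin contributes two sidewalls plus the fin top to the effective width. The fin dimensions below are made-up placeholders, not any foundry’s actual numbers:

import math

FIN_HEIGHT_NM = 30.0     # assumed fin height (illustrative only)
FIN_THICKNESS_NM = 8.0   # assumed fin body thickness (illustrative only)

def effective_width_nm(n_fins):
    # Each fin contributes two sidewalls plus the top surface.
    return n_fins * (2 * FIN_HEIGHT_NM + FIN_THICKNESS_NM)

def fins_for_target_width(target_nm):
    # A designer cannot request an arbitrary W; round up to whole fins.
    return max(1, math.ceil(target_nm / effective_width_nm(1)))

for n in (1, 2, 3):
    print(f"{n} fin(s) -> W_eff = {effective_width_nm(n):.0f} nm")
print(f"W = 150 nm needs {fins_for_target_width(150)} fins")
# 150/68 rounds up to 3 fins, so the realized width overshoots the request.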

On the downside, FinFET technology brings some new circuit design trade-offs:

  • Quantized widths
  • No body biasing to control leakage or speed
  • Higher parasitic values
  • Aging and self-heating effects

Foundries offering FinFET technology today include:

  • TSMC (16nm, 16nm+)
  • Samsung + GLOBALFOUNDRIES will offer a common FinFET process at 14nm.
  • Intel at 22nm and 14nm, FPGA vendor Altera is using an Intel 14nm FinFET process, while Xilinx is using the TSMC 20nm and 16nm FinFET processes.
  • UMC 14nm

Technology reasons to choose FinFET include performance: a 16nm FinFET is about 30% faster than a 28nm planar CMOS transistor. Leakage current in FinFET can be one half that of planar CMOS, which then allows even smaller Vt values, which helps lower Vdd, finally reducing overall power. It’s even possible to design FinFET-based SRAM circuits that operate at 500mV.

Related: Intel 14nm is NOT in Production Yet!

Area reductions in SRAMs when going from 28nm planar to 16nm FinFET are up to 40%.

Synopsys EDA software and IP have been used to implement SoCs in all four FinFET processes (Intel, TSMC, UMC, Samsung). Synopsys also offers a range of FinFET-based IP where you can choose to trade off speed against density to match your requirements.

Related: USB 3.0 IP on FinFET may stop port pinching

For FinFET costs you have to deal directly with each foundry and make your own decision on the benefits and ROI.

Failure mechanisms in FinFETs are different from planar, so the memory redundancy approach and BIST change. For example, you should be checking for resistive shorts and opens between fins. SER (Soft Error Rate) for FinFETs looks better than planar so far, so stay tuned for results from radiation labs.

Early adopters of FinFET technologies are those market segments with:

  • Highest volumes and margins to justify increased costs.
  • Power reduction requirements.
  • Performance improvement needs.

My favorite phrase, “techonomics”, from Aart de Geus wasn’t used in this webinar on moving to FinFET technology, although the concept was the same. Migrate to FinFET technology if it makes technical and economic sense for your market segment.

Q&A

Q: Does area reduction consider double patterning?

Yes, it does. DPT adds complexity for circuit designers and providers of IP.

Q: How many gates does it take for SMS Memory BIST to support FinFET-specific failures?

No extra gates are required for BIST; it’s all done in software.

Q: What speeds are you able to achieve on FinFET processes?

It’s design and foundry specific. Typically a 1.5GHz design at 28nm can now run at 3.0GHz in FinFET.

Q: Can I implement memory BIST from Synopsys with my own memory?

Yes, you can use Synopsys memory BIST on your custom memory; it’s been done before.

Q: Do you have any FinFET yield numbers?

You must contact your foundry directly, because that data is proprietary.

Q: How different are the FinFET SRAM defects?

FinFET defects are a superset of existing defects, so we’ve extended our algorithms to account for that. At ITC we have a paper this year on that topic.

Q: Which FinFET processes has Synopsys built IP on?

Our FinFET IP is ready today for all of the foundries talked about.

Q: How do FinFETs compare to FD-SOI?

That’s a long answer, so it will be tabled for now.

Related: FD-SOI at 14nm

Q: What track heights will you support with logic libraries?

We offer high speed at 10.5 tracks, high density at 9 tracks, ultra-high density at 7.5 tracks.



Why do you need 9D Sensor Fusion to support 3D orientation?
by Eric Esteve on 08-23-2014 at 12:03 pm

Motion sensors are commonly applied in a broad range of consumer products, including smartphones, wearable devices, game controllers and sports watches, with applications ranging from screen orientation to indoor navigation. If you want to build an Inertial Measurement Unit (IMU) to efficiently compute 3D orientation, why do you need not only a 3D gyroscope, but also a 3D magnetometer and a 3D accelerometer?

In fact, the integration of gyroscope measurement errors leads to an accumulating error in the calculated orientation, so gyroscopes alone cannot provide an absolute measurement of orientation. An accelerometer and a magnetometer measure the earth’s gravitational and magnetic fields respectively, and so provide an absolute reference of orientation. However, they are likely to be subject to high levels of noise; for example, accelerations due to motion will corrupt the measured direction of gravity. The task of an orientation filter is to compute a single estimate of orientation through the optimal fusion of gyroscope, accelerometer and magnetometer measurements.

The White Paper “Ultra Low-Power 9D Fusion” describes a technique to efficiently compute 3D orientation from the sensory inputs of three motion sensors:


  • An accelerometer measuring linear acceleration along three axes (XYZ)
  • A gyroscope measuring angular velocity around three axes
  • A magnetometer (compass) measuring the magnetic field strength along three axes.

Many sensor fusion solutions exist. The 9D fusion algorithm presented in this white paper is based on the above-described principles of the orientation filter and is claimed to be at least as accurate as algorithms based on Kalman filtering, another popular class of fusion algorithms.

I suggest you carefully read the white paper, which describes the quaternion used to represent the current orientation of the device, Q = [ q0 q1 q2 q3 ], and the benefit of using quaternions to represent rotation…
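
As a flavor of the quaternion arithmetic involved, here is a minimal sketch of just the gyroscope propagation step of an orientation filter – the integration of angular rate into the quaternion, without the accelerometer/magnetometer correction terms, and not the white paper’s actual fixed-point implementation:

import numpy as np

def quat_mult(q, r):
    # Hamilton product q ⊗ r, with quaternions stored as [q0, q1, q2, q3].
    w0, x0, y0, z0 = q
    w1, x1, y1, z1 = r
    return np.array([
        w0*w1 - x0*x1 - y0*y1 - z0*z1,
        w0*x1 + x0*w1 + y0*z1 - z0*y1,
        w0*y1 - x0*z1 + y0*w1 + z0*x1,
        w0*z1 + x0*y1 - y0*x1 + z0*w1,
    ])

def propagate(q, gyro_rad_s, dt):
    # One integration step of q_dot = 0.5 * q ⊗ [0, wx, wy, wz].
    omega = np.array([0.0, *gyro_rad_s])
    q = q + 0.5 * quat_mult(q, omega) * dt
    return q / np.linalg.norm(q)   # renormalize to keep q a unit quaternion

q = np.array([1.0, 0.0, 0.0, 0.0])        # identity orientation
q = propagate(q, [0.0, 0.0, 0.1], 0.01)   # small rotation about the Z axis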

The 9D fusion application has been implemented on a specific configuration of the DesignWare Sensor IP Subsystem. It consists of an ARC EM4 32-bit processor and a user-configurable set of serial digital interfaces, analog-to-digital converter interfaces, hardware accelerators, and a software library of DSP functions and I/O software drivers. The ARC EM4 processor is configured to have a single-cycle 32-bit integer multiplier. In addition, the subsystem contains fixed-point hardware accelerators for multiply-accumulate (MAC), trigonometric functions and the square root function.

The Synopsys white paper describes the need for sensor fusion when implementing applications like 3D orientation: integrating a 3D gyroscope is not enough, so 3D magnetometer and 3D accelerometer sensors are needed to achieve the required precision. Synopsys has built a 9D sensor fusion solution integrating a 3D gyroscope, 3D accelerometer and 3D magnetometer around the ARC EM4 32-bit processor and a number of user-configurable APEX extensions. To optimize this architecture for low power consumption, the floating-point version of the algorithm has been converted into a fixed-point version, enabling the use of a (single-cycle) integer multiplier and, more importantly, opening the way for further optimization through hardware accelerators. The DesignWare Sensor IP Subsystem can reduce energy consumption by a factor of 6-25 compared to other commercial processors, as shown in the above table.

You will find more about the algorithm and its implementation in the White Paper “Ultra Low-Power 9D Fusion”.

By Eric Esteve from IPNEST

More Articles by Eric Esteve…..



Distributed RLCK Models for Transmission Lines in High Speed Applications
by Tom Simon on 08-23-2014 at 10:00 am

Design engineers frequently struggle with transmission line design and modeling. We can define a length of interconnect that contains more than 1/100th of a wavelength as a transmission line. This seems to be the breakpoint where distributed effects start to become significant. To improve circuit performance these long runs are usually shielded and drawn on silicon as differential pairs. Transmission lines are used for high-speed signal propagation and occur in Low Noise Amplifiers (LNAs), Power Amplifiers (PAs), clock distribution networks and many other kinds of high performance analog circuits.

Figure 1 – TSMC 60GHz RDK Power Amplifier

There are many tools that can provide a small-signal (i.e. n-port) model for a transmission line. But high speed analog circuit design usually requires transient simulations, so the engineer needs models compatible with large-signal simulation. Over the years several techniques have been employed to solve this problem.

One common approach is rational fitting: an arbitrary network is used to represent the equivalent circuit, and a fit is obtained through mathematical methods. However, the fit will frequently sacrifice DC behavior (i.e. the DC operating point) and may contain negative device values – resulting in a model with active behavior (gain). This can cause convergence problems during transient circuit simulation, defeating the purpose of building the linear model in the first place.

Using a physics-based model is generally a better approach. A circuit that represents the physical system topology is used as the target network for the fitting operation. But because transmission lines exhibit distributed effects, a simple lumped RLCK model will not provide a good solution.

To include distributed effects in their models, engineers will often derive a unit circuit and manually cut the transmission line into short segments that can be fitted individually to the unit circuit. These segments are then joined at the schematic level, providing what is hoped to be an equivalent circuit for the transmission line – conceptually like the sketch below. This is tedious and potentially error prone, and the method has several other shortcomings.
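
To make the segmentation idea concrete, here is a minimal sketch that chains N copies of a unit RLC cell into a SPICE subcircuit. The per-unit-length values are placeholders, not fitted data, and the mutual-inductance (K) and return-path elements of a real RLCK cell are omitted for brevity:

# Per-unit-length values: placeholders, not fitted to any real structure.
R_PUL = 2.0e3     # ohm/m, series resistance per unit length
L_PUL = 4.0e-7    # H/m, series inductance per unit length
C_PUL = 1.2e-10   # F/m, shunt capacitance per unit length

def tline_subckt(name, length_m, n_seg):
    # Emit a SPICE subcircuit of n_seg cascaded R-L-C unit cells.
    dl = length_m / n_seg
    lines = [f".subckt {name} in out gnd"]
    for i in range(n_seg):
        a = f"n{i}" if i else "in"
        b = f"n{i+1}" if i < n_seg - 1 else "out"
        lines.append(f"R{i} {a} m{i} {R_PUL * dl:.4g}")
        lines.append(f"L{i} m{i} {b} {L_PUL * dl:.4g}")
        lines.append(f"C{i} {b} gnd {C_PUL * dl:.4g}")
    lines.append(".ends")
    return "\n".join(lines)

print(tline_subckt("tline500u", length_m=500e-6, n_seg=10))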

Figure 2 – Physically segmented transmission lines

Usually the ground is assumed to be ideal, and any losses produced in the ground are incorporated into the model for the signal lines – improperly distributing the losses and potentially adversely affecting the DC behavior. This makes the model’s accuracy very sensitive to the actual ground connections and will likely violate the assumptions made in transferring the ground loss effect to the signal lines. The model will therefore not properly reflect actual behavior in a circuit.

The second issue is linear coupling along the transmission line. If the Telegrapher’s equations are used to calculate the inductance per unit length of a short segment, and the result is compared to the same calculation for a long segment, the values will not agree. It is necessary to EM-simulate the full structure for a variety of reasons, which precludes using a segmented approach to both EM simulation and fitting.

Figure 3 below shows how line length affects inductance per unit length. These are PeakView EM simulation results illustrating the variation of inductance per unit length as overall length is increased. The line lengths shown range from 140 microns to 12,000 microns in the upper curve.

Figure 3 – L per unit length at various overall lengths

Fortunately, starting with its n-port EM model for the full system, accounting for all EM effects, PeakView can automatically generate an accurate, segmented, distributed RLCK model. This model will even properly reflect the effects of various ground connections. PeakView’s patented 3D full-wave electromagnetic solver takes layout directly from Virtuoso; alternatively, PeakView’s PCircuits can be used to synthesize and then model optimal structures using the same method.

The transmission line Physics Based Model (PBM) will automatically fit a segmented unit cell that accounts for primary, mutual, shunt and return-path elements. The user can select the number of segments to be used. PeakView will automatically fit the complete network and return a SPICE-compatible RLCK subcircuit that can be synced back to Cadence ADE for simulation.

Here are some examples of n-port to PBM model comparisons.

Figure 4 – T-Line S11 magnitude, n-port model vs. PeakView PBM
Figure 5 – T-line S11 phase, n-port model vs. PeakView PBM

Figure 6 shows circuit simulation and silicon measurements for the 60GHz power amplifier of figure 1, using PeakView transmission line and inductor models. Light blue and red are S21 simulated and measured, respectively; yellow and green are S11 simulated and measured, respectively. The circuit and measurements are courtesy of TSMC.

Figure 6 – Measurement versus simulation

In conclusion, as wavelengths get shorter, transmission line behavior in interconnect is becoming more significant in high speed analog designs. Despite smaller process nodes, signal lines are in many cases long enough to require full-wave EM modeling. PeakView, with its comprehensive EM and simulation model generation capabilities, delivers an efficient, reliable and accurate solution for design engineers. PBMs (Physics Based Models) are available for single or differential transmission lines. Shielding options include bottom-plate and side shields. Ground pins can be specified where needed.

Lorentz Solution’s flagship EM platform, PeakView, provides a comprehensive solution for automatically generating fully passive broadband distributed RLCK models for a wide range of transmission line structures.



Cadence white paper helps you select what comes after DDR4
by Eric Esteve on 08-23-2014 at 4:49 am

The DRAM market is shaking… In 2014, analysts predict that LPDDR4 will surpass DDR4 for the first time. When releasing the DDR4 standard, JEDEC clearly stated that the industry should not expect any DDR5. Does this mean that new DRAM technology development is ending with DDR4? According to Mike Howard, principal analyst at IHS iSuppli, “DDR4 for servers, laptops and mobile devices will be around for a long time as no successor is under development,” adding that “It will be the last DDR iteration.” In fact, if you look at the DRAM industry, you can discover four emerging technologies, all based on a 3D stacking approach, namely: HMC, HBM, Wide I/O 2 and DDR4-3DS. This blog has been built from information in various articles that you can find on the Cadence web page. Let’s take a short look at these emerging technologies.

Wide I/O 2: Supporting 3D-IC Packaging for PC and Server Applications
The Wide I/O 2 standard from JEDEC, expected to reach mass production in 2015, covers high bandwidth 2.5D silicon interposer and 3D stacked-die packaging for memory devices. Wide I/O 2 is designed for high-end mobile applications; the goal is to provide high bandwidth at the lowest possible power. The standard uses a significantly larger I/O pin count than LPDDRn, but at a lower frequency. Because stacking reduces interconnect length and capacitance, this results in the lowest I/O power for higher bandwidth. Just note that Wide I/O 2 is still a parallel protocol and that a logic die is inserted between the SoC and the stacked memory dies.
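
A back-of-envelope illustration of the wide-and-slow argument: dynamic I/O power scales roughly as N·a·C·V²·f, so for equal aggregate bandwidth, many low-capacitance stacked pins at low frequency can beat a few high-capacitance pins at high frequency. All numbers below are illustrative assumptions, not figures from the Wide I/O 2 or LPDDR specifications:

def io_power_w(n_pins, cap_f, vdd, freq_hz, activity=0.5):
    # Dynamic switching power: P = N * a * C * V^2 * f
    return n_pins * activity * cap_f * vdd**2 * freq_hz

TARGET_BPS = 512e9  # arbitrary common bandwidth target for both interfaces

# Few fast off-package pins: high trace capacitance, high toggle rate.
narrow = io_power_w(n_pins=64, cap_f=2e-12, vdd=1.2, freq_hz=TARGET_BPS / 64)

# Many slow stacked pins: TSV-class capacitance, low toggle rate.
wide = io_power_w(n_pins=512, cap_f=0.2e-12, vdd=1.1, freq_hz=TARGET_BPS / 512)

print(f"narrow/fast: {narrow:.2f} W, wide/slow: {wide:.2f} W")
# With these assumptions the wide interface needs roughly a tenth of the power.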

HMC: Breaking Barriers to Reach 400G
HMC is being developed by the Hybrid Memory Cube Consortium and is expected to be in mass production in 2014. The architecture essentially combines SerDes-based, high-speed logic process technology with a stack of through-silicon-via (TSV) bonded memory die. In an example configuration, each DRAM die is divided into 16 “cores” and then stacked. The logic base sits at the bottom, with 16 different logic segments, each segment controlling the four or eight DRAMs that sit on top of it. This type of memory architecture supports more “DRAM I/O pins” and, therefore, more bandwidth (as high as 400G). According to the Hybrid Memory Cube Consortium, a single HMC can deliver more than 15X the performance of a DDR3 module and consume 70% less energy per bit than DDR3. So, the Hybrid Memory Cube is based on an ultra-high-speed serial protocol (10, 12.5 or 15 Gbps today, 25 Gbps for the next release); the memory chip maker supplying the “cube” also integrates a logic die, while the customer SoC is packaged separately.

HBM: Emerging Standard for Graphics
HBM is another emerging memory standard defined by the JEDEC organization. HBM was developed as a revolutionary upgrade for graphics applications. Expected to be in mass production in 2015, the HBM standard applies to stacked DRAM die and is built using TSV technologies to support bandwidth from 128GB/s to 256GB/s. JEDEC’s HBM task force is now part of the JC-42.3 Subcommittee, which continues to work to define support for up to 8-high TSV stacks of memory on a 1,024-bit wide data interface. In October 2013, the Subcommittee published JESD235: High Bandwidth Memory (HBM) DRAM, which uses a wide-interface architecture to achieve high-speed, low-power operation. Please note that HBM is still a parallel protocol and that a logic die is inserted between the SoC and the stacked memory dies.

DDR4-3DS
A lower cost, lower performance alternative 3D approach is DDR4-3DS. This standard is an evolutionary development of the current DDR4 interface. With DDR4-3DS, a master and a slave memory chip are bonded together and packaged as a pair. Only the master die has the memory interface logic, thus reducing the load on the host controller. As a partial step towards 3D, the technology provides incremental improvement and cost efficiency, but does not achieve the breakthrough performance and power benefits of the other technologies.

The Cadence white paper “Five Emerging DRAM Interfaces You Should Know for Your Next Design” will certainly help you deepen your knowledge of these emerging protocols, while “3D Memory Landscape Takes Shape” specifically addresses the 3D-related architectures.

One or several of these technologies will eventually replace DDR4, but DDR4 will be used for a long time, especially because it’s the last DDR iteration. Cumulative DDR4 memory controller IP sales amounted to $100 million in 2013 (source: IPnest) and will run to several hundred million dollars over 2014-2020. But IP vendors have to prepare for the future, and Cadence will have to support some if not all of these emerging technologies. If you already know the bandwidth requirement, expected release date and acceptable cost per bit for your next application, the above table can help you make a first selection of the emerging memory technology to support.

By Eric Esteve from IPNEST



Automatic RTL Restructuring: A Need Rather Than Convenience
by Pawan Fangaria on 08-22-2014 at 5:00 pm

In the semiconductor design industry, most designs are created and optimized at the RTL level, mainly through home-grown scripts or manual methods. As there can be several iterations in optimizing the hierarchy for physical implementation, it’s too late to do hierarchical optimizations at the netlist level, after reaching the floorplan or layout stage. All design exploration and optimization must therefore be done at the RTL stage. The problem with netlist-level restructuring is that verification takes weeks to complete, and any functional change to the RTL forces repeated modifications at the netlist; hence the interest in moving to RTL-level solutions. However, with today’s increasing design complexity and the multiple parameters to optimize under various constraints, it’s not possible to handle RTL restructuring manually. Effective and easy-to-use tools with automated procedures are needed for this important task.

I came across an article written by Daniel Payne and also a whitepaper on Atrenta’s website, which provided great insights into RTL restructuring. It was then a nice opportunity to talk with Samantak Chakrabarti, Director of R&D at Atrenta’s Noida site. Samantak provided interesting details about this capability in the SpyGlass RTL Signoff Platform, and clarified some of my queries about RTL restructuring as done by GenSys RTL and how it is being used by customers. Here are the highlights of the conversation:

Q: Considering large design sizes, I can definitely see the need for automated RTL restructuring. Are there any specific scenarios that mandate this restructuring?

A: Yes, there are a few. To begin with, large and, more importantly, complex designs (with several configurations in them) can be optimized with automated RTL restructuring in a much shorter time, greatly improving designers’ productivity. The need to automate this procedure is more pronounced with power and physical optimization. For power, certain structures need to be aligned with their respective power domains, which are organized by the RTL hierarchy. Similarly, on the physical side, certain DFT elements need a defined physical location, which is likewise defined by the RTL hierarchy. Additionally, there are physical reroutings, feed-throughs and logic insertions that take place to improve congestion in a design. Each of these design improvements requires manipulation of the RTL structure. Doing such operations manually can be very tedious, time consuming, and error prone. Our GenSys RTL tool has an easy-to-use GUI with many automated features that can be used to restructure any RTL very efficiently.

That reminds me of a presentation by STMicroelectronics at DAC about how they used GenSys RTL to split a wide, long system bus and organize the design into four physical units (PUs), aligned appropriately to be fed by the split bus. This definitely makes routing much easier and reduces congestion to a large extent.

Q: What do you mean by configurations in complex designs? How are those handled?

A: Today, an SoC is not just one monolithic design; it contains several IP blocks. By using conditional expressions in the RTL, a designer can retain the original IP as-is. However, while restructuring, pragmas, assertions, conditions under ifdefs, etc. need to be retained, any behavioural code must remain in source format, and net and instance names must be preserved. To maintain all of this critical design information, a designer can use the full potential of GenSys to optimize the RTL while keeping the necessary design constraints intact.

Q: I’m sure design houses must have automated scripts to handle such RTL restructuring. Are those not sufficient?

A: Scripts can definitely be used; in fact, design houses are using them. However, scripts are tailored to a particular design style or language like Verilog or VHDL, and can be error prone. Also, writing scripts for every design style and for newer language constructs, especially SystemVerilog, and then verifying them, can be very time consuming. With GenSys RTL, one can easily reconfigure the design hierarchy visually and generate the new RTL in minutes.

Q: What kind of GUI does GenSys RTL have for this purpose? How can it be used in an automated way?

A: If you look at the GUI in the above picture, a designer can intuitively view the current hierarchy, modify it by moving IP across the hierarchy, and undo/redo through the hierarchy manipulation widget. When they are satisfied, they simply click ‘Apply’ to generate the new RTL. Many kinds of grouping, ungrouping and partitioning operations can be done while modifying the hierarchy. Additionally, the tool supports Tcl commands that can be used to restructure in batch mode as well.
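
As a toy illustration of what such a regrouping operation conceptually does – this is not GenSys’s actual Tcl API or data model, just a sketch where the hierarchy is a dictionary of modules – consider grouping two instances into a new physical unit:

# Toy hierarchy model: module name -> its child instances (name -> module).
design = {
    "top": {"instances": {"u_cpu": "cpu", "u_dma": "dma", "u_phy": "phy"}},
    "cpu": {"instances": {}},
    "dma": {"instances": {}},
    "phy": {"instances": {}},
}

def group(design, parent, new_module, instance_names):
    # Create new_module under parent and re-parent the named instances into it.
    design[new_module] = {"instances": {}}
    for name in instance_names:
        design[new_module]["instances"][name] = design[parent]["instances"].pop(name)
    design[parent]["instances"]["u_" + new_module] = new_module

# Group the DMA and PHY into a new 'io_subsys' unit under top:
group(design, "top", "io_subsys", ["u_dma", "u_phy"])
print(design["top"]["instances"])   # {'u_cpu': 'cpu', 'u_io_subsys': 'io_subsys'}

A real tool must of course also rewrite port lists, punch through connections and preserve names – exactly the bookkeeping that makes manual restructuring error prone.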

Q: Interesting! How was the requirement for such a tool conceived?

A: The initial requirements came from customers who were doing large SoC designs. Since the design exploration activity was consuming a lot of time, they felt the need to automate RTL restructuring for design exploration. We developed the GenSys RTL tool to address this for our partner customers, and now multiple top semiconductor companies use the tool. We do get interesting feature requests from them, which we will be adding to the GenSys RTL solution in the near future.

Q: You mentioned that with this tool, designers can restructure in a much shorter time. Can you quantify this?

A: Yes. One of our customers who was using scripts took about two months to complete the restructuring process for optimization of their design. The same design could have been restructured by the same designers in the GenSys RTL environment in just 3 days.

Q: That’s great! How do you make sure that the automatically generated new RTL is perfect?

A: It’s a requirement to verify the newly generated RTL. The GenSys RTL solution generates a mapping file along with the RTL, which can be used by standard formal verification tools to verify equivalence between the new RTL and the original RTL. This mapping file is also used for restructuring other side files like UPF, SDC, etc.

Q: I have another important question. Designs are done with certain physical intent. How do you make sure that the original physical intent is preserved in the new RTL?

A: That definitely is an important question. The SpyGlass RTL Signoff solution is a complete platform for design at the RTL level. The newly generated RTL can be verified through SpyGlass Physical for its correct physical intent, along with other aspects that can be verified through other tools within the SpyGlass Platform. Any necessary modification can be done through wrapper scripts or changes in files.

Q: Nice. Which RTL languages does GenSys RTL support?

A: GenSys supports all three prominent design languages: Verilog, VHDL, and SystemVerilog.

Q: Any experience with accommodating changes in 3rd-party IP?

A: This is another interesting observation. We have seen our customers verify 3rd-party IP against their own design IP using the GenSys RTL methodology. They simply replace their IP with the 3rd-party IP, under certain conditions, and verify the overall RTL. This checks the equivalence between the two IPs. Also, in certain cases, IP such as memories can be merged into a single path with GenSys. We have also seen new requirements such as handling behavioral logic, glue logic between IP, and so on, which we will be supporting in GenSys RTL going forward.

This was a great interaction with Samantak, and it makes me think that the GenSys RTL solution has more capabilities for SoC design exploration than just RTL restructuring.

More Articles by Pawan Fangaria…..



Cadence and Reverse Debugging
by Paul McLellan on 08-22-2014 at 7:01 am

I wrote back in March about Undo Software. They have a reverse debugging solution called UndoDB (the DB is for debug, not database). I have had a soft spot for reverse debugging ever since seeing one of the engineers at Virtutech type reverse-single-step, watching the code back up a single instruction, and realizing that literally months of my career could have been saved if I had had the same capability. Undo has a similar capability for Linux running on x86 and ARM. Of course, under the hood it doesn’t actually run the code backwards. Instead, it regularly saves a snapshot of the running program and records all its inputs. By restoring a snapshot and re-running the code with the same inputs up to just before the point of interest, it gives the appearance of code running backwards, almost instantaneously.
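
Here is a toy sketch of that snapshot-and-replay idea, for a deterministic step function with a recorded input log. Real engines such as UndoDB do this at the machine level; this only shows the concept:

import copy

class ReplayDebugger:
    # Reverse debugging by snapshot + deterministic replay (toy model).
    def __init__(self, initial_state, step_fn, inputs, snap_every=100):
        self.step_fn = step_fn        # deterministic: (state, input) -> state
        self.inputs = inputs          # recorded input log
        self.snap_every = snap_every
        self.snaps = {0: copy.deepcopy(initial_state)}
        self.state = initial_state
        self.t = 0

    def step_forward(self):
        self.state = self.step_fn(self.state, self.inputs[self.t])
        self.t += 1
        if self.t % self.snap_every == 0:
            self.snaps[self.t] = copy.deepcopy(self.state)

    def reverse_step(self):
        # "Go backwards": restore the nearest earlier snapshot, then replay.
        assert self.t > 0
        target = self.t - 1
        base = max(k for k in self.snaps if k <= target)
        self.state = copy.deepcopy(self.snaps[base])
        self.t = base
        while self.t < target:
            self.step_forward()

dbg = ReplayDebugger({"x": 0}, lambda s, i: {"x": s["x"] + i},
                     inputs=[1] * 10, snap_every=4)
for _ in range(10):
    dbg.step_forward()
dbg.reverse_step()
print(dbg.t, dbg.state)   # 9 {'x': 9}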

Reverse debugging is a nice productivity kick for day-to-day programming. But it is especially powerful for tracking down the really difficult bugs where the source of the bug and its detection are a long way apart, and where the source is completely non-obvious. For example, a data structure is corrupted because it has been over-written; the code that did the over-writing might be anywhere. This is especially difficult in systems such as mobile phones, where the code doesn’t run the same way every time and errors can be very intermittent, or in tools like simulators that take in one language and turn it into binary code dynamically, which typically means many software tools such as static analyzers don’t work.

One company that has been making use of UndoDB is one you are very familiar with: Cadence, in their advanced verification solutions (AVS) business unit. Synopsys and Mentor are also customers, apparently. Cadence AVS used the tool internally to track down hard-to-find bugs. But they also needed to be able to debug code on customer sites. Due to the crown-jewel nature of many semiconductor designs, the designs often are not allowed to leave the customer’s own network and servers. Undo has an option called Out-and-about that can easily be run on customers’ servers, so Cadence could use UndoDB there too.

Over to Jonathan DeCock, a senior software architect at Cadence: Our engineer had spent months struggling to try to track down the problem. It only struck in 1 in 300 runs, making finding it like looking for a needle in a haystack. We’d been using GDB, but that didn’t let us see what had caused the problem, as when the code failed we were so far past the point of failure that we couldn’t find the source of the bug.

They set up a 20-machine server farm on the customer site, running multiple copies of the tool 24×7 until the problem struck.

DeCock again: As soon as the code failed we got experts on the line and stepped backwards and forwards line-by-line using UndoDB. We found the bug in three hours, and it then took just two hours to solve, which was a huge win after three months of searching using other methods. Given its nature, we simply couldn’t have found it through source-code analysers, as it was generated within dynamic code.

That was my experience at Virtutech’s customers too. Bugs that once might have taken months to track down (or, in some cases, had already taken months) were solved in minutes. You just don’t have those “how on earth did that happen?” problems.

UndoDB works on ARM and x86 (32- and 64-bit) processors, on the Linux and Android operating systems, and with any language supported by gdb, most notably C and C++.

Undo Software has a case study, “Cadence Design Systems: Finding Customer-Critical Bugs with UndoDB”. You can download the case study here.


More articles by Paul McLellan…



Crossfire – Your partner for IP development, what’s new?
by Pawan Fangaria on 08-21-2014 at 4:00 pm

As SoCs and IPs grow in size and complexity, the number of formats, databases, and libraries of standard cells and IOs also increases. It becomes a clumsy task to check every cell in a library for consistency among the various formats with respect to functionality, timing, naming, labels and so on, and for its complex physical properties such as size, pitch alignment, routability of its pins and several others. It’s very tedious to do manually; scripts can be an option, but they risk missing finer points, which may prove costly in the end. The important point here is to spend designers’ time on real value addition in the design rather than on checking such consistencies. Considering the shrinking time-to-market window and the increasing complexity of SoCs and IPs demanding more development time, standard automated tools are clearly needed to fill this gap: checking consistency between the various formats used by different tools and the correctness of libraries.

Fractal Technologies has rightly perceived this need and is leading this space with an excellent tool, Crossfire™, which you must have seen at DAC; they have been consistently presenting it there. There are new features, naturally driven by demand, which I will talk about a little later. Let’s first look at the foolproof methodology Crossfire employs to check consistency across all formats, and how this methodology is scalable to any new format, whether standard or proprietary.

Crossfire is architected around a central database with associated APIs, and readers for the standard, popular formats are available. Any new format that requires verification needs to be read into the central database and then consistency-checked. The system already has a rich set of checks covering library data (lib, cell and view names, labels, cell presence, net and terminal consistency, etc.), functionality (equivalence between functions, e.g. schematic versus Spice netlist), physical properties (layout polygon-level checks, boundary alignment for abutment, cell size information in the LEF SITE construct, pin routability and cell architecture compliance with P&R), characterization data (cross-referencing of arcs between Liberty, Verilog and SDF back-annotation and their match with the respective functions of the cell) and so on. The characterization data check can be performed on LIB (NLDM), ECSM or CCS data, with specific checks for each.
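
As a toy illustration of one such consistency check – comparing pin names between a Liberty view and a LEF view of the same cell – here is a minimal sketch. The regexes are grossly simplified stand-ins for real parsers, and the .lib and LEF snippets are made up:

import re

def liberty_pins(lib_text):
    # Grossly simplified; real Liberty parsing needs a proper grammar.
    return set(re.findall(r'pin\s*\(\s*"?(\w+)"?\s*\)', lib_text))

def lef_pins(lef_text):
    return set(re.findall(r'^\s*PIN\s+(\w+)', lef_text, flags=re.MULTILINE))

lib_src = 'cell (AND2) { pin (A) {} pin (B) {} pin (Y) {} }'
lef_src = """MACRO AND2
  PIN A
  PIN B
  PIN Z
END AND2"""

only_lib = liberty_pins(lib_src) - lef_pins(lef_src)
only_lef = lef_pins(lef_src) - liberty_pins(lib_src)
if only_lib or only_lef:
    print(f"pin mismatch: only in .lib {sorted(only_lib)}, only in LEF {sorted(only_lef)}")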

Crossfire provides a unified environment for graphically browsing and viewing cells in any database or format in a single window. It makes debugging easy by flagging error messages which, when clicked, take you directly to the offending data in the applicable format. A waiver mechanism is also implemented. The library verification setup can be done in the GUI as well as in batch mode.

The system also provides database-independent checking through an API, which can be used from Perl, Tcl or Python, so the validation scripts already available in design houses can be easily integrated. Most of the formats used in the semiconductor design industry are supported in Crossfire, and thanks to its open architecture, any newer format can be easily added to the Crossfire database.

For IPs, Crossfire is optimized to deal with large data such as GDS, and to keep checking time down across formats it is intelligent enough to run only the relevant checks; for example, routing checks on GDS but cell size checks on LEF. The final QA report generated by Crossfire can be delivered along with the IP, so that IP customers can cross-verify it as part of their incoming inspection.

Coming to what’s newly added in Crossfire – driven by rising demand for faster checks and new checks for productivity improvement –

  • Parallel parsing – allows faster parsing of formats
  • setupAPI – for fast automatic setup creation
  • Speedup in various checks – 2x to 40x
  • New checks – pg_pin definition, related bias pin, input pins only connected to gates in Spice, parameter differences over a range of .lib files, finding the PVT corner in a set of .lib files, and full difference between two LEF files
  • Zipped message database
  • Strict syntax checking on .lib files

Crossfire can be used from the very beginning of a design setup through final integration to keep a check on the consistency and quality of the design amid the many formats, databases and tools in the flow and the heterogeneous sources of IPs and blocks, thus conserving, and perhaps shortening, the design cycle time.

More Articles by Pawan Fangaria…..



Synopsys Earnings
by Paul McLellan on 08-21-2014 at 12:00 pm

The perfect quarterly result is to slightly beat the consensus on revenue and earnings, and to say nothing negative about guidance for the upcoming quarter. Synopsys delivered all that with their latest quarter yesterday. Revenue was $521M versus $483M last year, solid growth of over 8%. Non-GAAP earnings per share were 65 cents. The revenue number is significant since it should put Synopsys firmly over the $2B revenue level for the whole year.

Their official guidance for the full year is:

  • Revenue: $2.055 billion – $2.065 billion
  • Other income and expense: $10 million – $12 million
  • Tax rate applied in non-GAAP net income calculations: approximately 20 percent
  • Fully diluted outstanding shares: 155 million – 159 million
  • GAAP earnings per share: $1.57 – $1.63
  • Non-GAAP earnings per share: $2.48 – $2.50
  • Cash flow from operations: at least $500 million

Aart’s discussion of the results on the conference call also contained no real surprises. The semiconductor industry is strong but the global economy is vulnerable in all sorts of ways. But Synopsys themselves are very well positioned: We lead in chip implementation. We have a comprehensive solution for verification that reaches all the way up to the intersection of hardware and software. We’re the number two IP company in the world. And with the acquisition of Coverity, we’ve not only significantly expanded our portfolio and customer base, but are transforming Synopsys into its next incarnation.

Some more interesting facts came out of the prepared remarks, not so much specific to Synopsys as to the semiconductor ecosystem as a whole.

Aart said they were tracking 150 FinFET designs around the world and Synopsys is integral to more than 95% of them. I don’t think that last remark means anything significant, just that some Synopsys tool is used. I wouldn’t be surprised if Cadence is integral to 95% of them too.

100 million chips have shipped containing Synopsys’s USB 3.0 IP. The transition to USB3 has been going relatively slowly, since so much of what USB is used for doesn’t require any of the extra capability it delivers. Your mouse and keyboard just don’t need more than USB2 (USB1, actually).

During the quarter Synopsys announced foundry support for Intel’s 14nm process. Aart said that Synopsys was already heavily used internally at Intel to design their own products. I was surprised since Intel is so paranoid (so they survive!) about ever saying which tools they used. At the Jasper users’ group a couple of years ago Intel gave the keynote and pointed out that it should not be taken as an endorsement of Jasper or even a sign that they used Jasper’s products. Synopsys also announced foundry support for Samsung and TSMC FinFET designs.

IC Compiler II (the physical design environment) is finally being introduced for full production and should ramp strongly in 2015: Based on intense customer interest across the design spectrum, the introduction of our new IC Compiler II is clearly the highlight of 2014, and a milestone in design automation. This next generation tool delivers an astounding 10X improvement in throughput. That’s 10X, not 10%. Since announcing IC Compiler II in March, we delivered its production version exactly on schedule last month. The number of customer engagements has grown very rapidly, and we have booked our first orders. So far, the accelerating number of tape-outs has been in 40, 28 and 16/14 FinFET designs, and the first chips have already successfully come back from manufacturing.

One area where Synopsys is doing something new is Coverity. This is not in the EDA space directly; it is in the mainstream software development space. Due to the way acquisition accounting handles deferred revenue, this will be dilutive this year, breakeven next, and accretive in 2016. But one important fact is that Coverity sells into different accounts from Synopsys, and even when they sell into the same accounts, it is to different groups and budgets, and so offers incremental growth: Recall that half of Coverity’s customers are brand new to Synopsys, representing a completely new pool of potential buyers. The other half are companies we know well, but where Coverity is used in different parts of the company and accessing different budgets.

So, delivered what was expected, no surprises going forward. Just what Wall Street likes.

More articles by Paul McLellan…