
A Fill Solution for 20nm at TSMC
by glforte on 03-17-2014 at 5:12 pm

By Jeff Wilson, Mentor Graphics

We’ve talked about the new requirements for fill in IC design at advanced nodes in previous blogs on this site. This time I’d like to describe the fill solution that Mentor and TSMC have jointly developed to meet the fill requirements of TSMC’s 20nm (N20) manufacturing process.

The traditional purpose for metal fill was to improve planarity in the chemical-mechanical polishing (or planarization) process. However, at advanced nodes fill is also used to manage effects related to electrochemical deposition (ECD), etch, lithography, stress, and rapid thermal annealing (RTA). TSMC’s N20 design rules require fill shapes to be evenly distributed and also require a greater variety of fill shapes (see figure). Designers need to add fill not just to metal layers, but also to poly, diffusion, and via layers. Other requirements of the newer technologies include analysis of density gradients, as well as absolute density. Additional constraints include perimeter and uniformity of fill spanning multiple layers. In many cases, multiple fill layers are interdependent.

Comparison of Front-End of Line (FEOL) fill at 65 nm and 20 nm. Fill shapes are no longer just squares, but now require support for multiple layers with a specific pattern to achieve higher density targets. (source: Mentor Graphics)
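To make the density-gradient requirement concrete, here is a minimal sketch (my illustration, not Mentor’s implementation; the window size and thresholds are invented for the example) of checking absolute density per window and the gradient between neighboring windows:

```python
import numpy as np

def density_checks(fill_area, window_um=50.0, min_density=0.25,
                   max_density=0.75, max_gradient=0.10):
    """fill_area: 2D array of metal area per analysis window (um^2).

    Checks absolute density per window, and the density difference
    (gradient) between adjacent windows; thresholds are illustrative.
    """
    window_area = window_um ** 2
    density = fill_area / window_area                 # fraction per window

    abs_violations = (density < min_density) | (density > max_density)

    # Gradient: density change between horizontally/vertically adjacent
    # windows must stay below max_gradient.
    dx = np.abs(np.diff(density, axis=1))
    dy = np.abs(np.diff(density, axis=0))
    grad_violation = (dx > max_gradient).any() or (dy > max_gradient).any()

    return abs_violations, grad_violation
```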

The collaborative solution from TSMC and Mentor for TSMC’s N20 fill requirements uses the Calibre® YieldEnhancer product’s SmartFill technology, which performs fill and analysis concurrently to ensure that the fill patterns meet TSMC’s forbidden pitch, multi-layer fill, and double patterning rules. Concurrent analysis of the layout also helps designers ensure that timing constraints are met by balancing the amount of fill against unwanted parasitic effects (i.e., added capacitance) as the fill process is being performed. The addition of double patterning (DP) at N20 also adds a new dimension to fill. Metal fill, like all the shapes in the layout, must be colored and decomposed into two masks. Fill observes DP rules to balance the light emitted through the mask to improve design uniformity.

Implementation of these requirements for 20nm required new technology development for SmartFill in the areas of core algorithms, support for multi-layer fill shapes and shape groupings, and repeatable cell-based fill patterns. The SmartFill solution uses the Calibre interfaces to integrate with full custom environments like Pyxis, Virtuoso, and Laker, and place and route tools such as Olympus-SoC, IC Compiler and Encounter to support net-aware fill. Designers can provide a list of critical nets that receive special treatment during the fill procedure and the filling engine can make informed fill placement decisions based on both the type of signals and which signals are timing-critical.
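As a rough picture of what “net-aware” means (again just a sketch; the spacing values and data layout are assumptions, not Calibre’s API), a fill engine might enforce a larger keep-out around timing-critical nets to limit added coupling capacitance:

```python
def keep_fill_shape(fill_xy, nets, default_space=0.1, critical_space=0.5):
    """Decide whether to keep a candidate fill shape.

    nets: list of (x, y, is_critical) net segment locations in um.
    Critical nets get a larger keep-out to limit added coupling.
    """
    fx, fy = fill_xy
    for nx, ny, is_critical in nets:
        space = critical_space if is_critical else default_space
        if abs(fx - nx) < space and abs(fy - ny) < space:
            return False          # too close to a net: skip this shape
    return True
```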

A key advantage of the SmartFill solution at TSMC is that it allows designers to meet IC fill constraints in a single pass with reduced impact on circuit performance and no manual intervention. The approach produces a relatively small post-fill GDS database size, which enables faster data transfers and quicker turnaround times, and the implementation fits into existing design flows to support final timing verification.

References
For more detail on the fill solution used at TSMC for 20nm, download the related white paper, “Advanced Filling Techniques for N20 and Beyond – SmartFill at TSMC.”

Author
Jeff Wilson is the DFM Product Marketing Manager in Mentor Graphics’ Calibre organization. He is responsible for the development of products that address the challenges of CMP and CAA. He previously worked at Motorola and SCS. Jeff received a BS in Design Engineering from Brigham Young University and an MBA from the University of Oregon.


GSA Silicon Summit Is On April 10th
by Paul McLellan on 03-17-2014 at 1:01 pm

The annual GSA Silicon Summit is coming up in a few weeks. It is on April 10th at the Computer History Museum. Registration is at 9am and the meeting itself gets started at 9.45am. The summit finishes at 2.15pm. There are three sections during the day, and lunch is provided.

The first section is on Advancements in Nanoscale Manufacturing. The session will open with an overview detailing the challenges of continued gate scaling, the industry’s exploration of alternative materials and processes in the fabrication of nanoscale structures, and the resulting applications that may be enabled. A panel discussion will follow, addressing some of the challenges involved in implementing alternative CMOS solutions as well as recent advancements in nanoscale engineering. It is presented and moderated by Joe Sawicki of Mentor. The panelists are Dan Ambrush of Sematech, Rob Aitken of ARM, Paul Farrar of the Global 450 Consortium (G450C), Peter Huang of TSMC and John Kibarian of PDF Solutions.

The second session is on Innovative Solutions. It will open with an overview of how manufacturing and packaging innovations are enabling improvements in semiconductor design, driving increased functionality and performance to meet new opportunities arising in the digital economy. A panel discussion will follow, exploring current and future advances in integrating digital, RF, analog/mixed-signal, memory and sensors in close proximity to achieve increased performance from a scaling, material and process perspective. It is presented and moderated by Gary Bronner of Rambus. The panelists are Jim Aralis of Microsemi, Charlie Cheng of Kilopass, Peter Gammel of Skyworks, Lawrence Loh of MediaTek and Mark Miscione of Peregrine Semiconductor.

After lunch, the third session is on Enabling a 2.5D/3D Ecosystem. 2.5D/3D technology holds great promise for enabling heterogeneous integration and reducing design complexity; the session will provide an overview of where the industry stands in developing and commercializing the technology and what remains to be done. A panel discussion will follow, addressing the use cases for 2.5D/3D technology, as well as the business needs within the supply chain required to ignite 2.5D/3D adoption and market growth, moving it, if possible, from a nascent alternative to a mature option. The session is presented and moderated by Javier De La Cruz of eSilicon. The panelists are Calvin Cheung of ASE, Gil Levy of Optimaltest, Bob Patti of Tezzaron, Riki Radojcic of Qualcomm, Arif Rahmen of Altera and Brandon Wang of Cadence.

The detailed agenda, including a link for registration, is here.


More articles by Paul McLellan…


The Technology to Continue Moore’s Law…
by Eric Esteve on 03-17-2014 at 11:59 am

Can we agree that Moore’s law is ending after the 28nm technology node? This does not mean that new silicon technologies, like 14nm and beyond, or new transistor architectures like FinFET, will not happen. There will be market demand for chips developed on such advanced technologies: mobile applications and high-performance computing, to name a few. These are the applications where more IP (multiple CPUs, GPUs, SRAM and various “functions”) has to be integrated into a single SoC, running faster than the previous generation but with better power efficiency. But when you add to this specification that a single SoC (platform) will have to ship in the tens of millions of units, if not hundreds, to meet the economics, you realize that only a few applications will qualify, not the majority. We saw in a previous blog that the entry ticket for 14nm FinFET is close to $200 million (International Business Strategies, Inc. 2013 report). And when you look at the picture below, you realize that manufacturing fab CAPEX (normalized to 1K wafers per week) increases by 86% from 28nm planar to 14nm FF. Going further in technology is possible, and will happen, but no longer according to Moore’s law. We may even name the last node marking Moore’s law’s limit: it’s 28nm!

So, if targeting a smaller technology node is no longer the solution, as it has been for the last 50 years, what can the mainstream semiconductor industry do to keep benefiting from a Moore’s-law-like dynamic? Our industry will have to be smart! We could even say smarter than Moore!

There are certainly solutions, and most probably more than one track to explore. Smart packaging is one way to increase density (package density instead of chip density), placing dice side by side (2D) or stacking them (3D). It also lowers power dissipation: the chip-to-chip interconnections inside the package are shorter and far less capacitive, so the CV² switching energy, and thus dynamic power, decreases. Another approach, directly linked to silicon processing, can make sense: targeting Fully Depleted Silicon On Insulator (FD-SOI) technology. If you look at the above table, you can see that blindly following Moore’s law from 28nm bulk to 20nm bulk and 14nm FF leads to a performance improvement but also a cost increase. The reasons are process related, as we can see in the next picture:

The bulk transistor architecture we have been using for decades is reaching its limits at 20nm. These limits are imposed by the laws of physics and lead to the following issues:

  • Transistor channel level: a high-cost process flow with more critical process steps (impacting yield, and therefore cost too)
  • Heavily doped wells (left of the source, right of the drain): high variability, forcing a longer minimum gate length and severe layout limitations (impacting density, with design becoming more and more complex to handle)
  • High leakage current: a severe limitation, as it can annihilate the benefit gained on dynamic current
  • Weak process compensation: design/process co-optimization is low; we will see that the forward body bias (FBB) effect in FD-SOI is a very smart way to compensate for process variations, which exist whatever the technology

We can then clearly see two possible routes. The one closer to Moore’s law as we have known it is to adopt a FinFET transistor architecture and move one technology node further, to 14nm FF (apparently 20nm will not be used for long); we will see in the next picture the cost impact of this 3D technology. The other route is to manipulate the silicon substrate to create a thin buried oxide, the FD-SOI technology, and stay with a 2D (planar) architecture, and even stay at the 28nm gate length!

If you compare the number of mask layers, and even more importantly the number of critical layers, 14nm exhibits 66% more critical layers than 28nm. No surprise, then, that the 14nm FF process costs 38% more than 28nm. This is the right time to recall that an SOI wafer costs $500 more than a regular wafer. For a complete cost analysis, you also have to remember the 86% CAPEX increase to build the wafer fab! If you recall the wafer cost analysis published a while ago, CAPEX is a pretty heavy part of processed wafer cost. Multiplying CAPEX by 1.86 means you would have to sell 86% more 14nm FF wafers than 28nm FD-SOI wafers to amortize the extra CAPEX. Or you can just say that 14nm FF is more expensive than 28nm FD-SOI…
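Using only the ratios quoted in this article, a back-of-the-envelope comparison might look like the following; the baseline 28nm wafer cost and the CAPEX share of wafer cost are illustrative assumptions, not figures from the report:

```python
# Back-of-the-envelope wafer cost comparison using the article's ratios.
# Assumed baseline numbers (NOT from the article): a 28nm bulk wafer at
# $3,000, of which 50% is CAPEX amortization at equal wafer volumes.
base_wafer_cost = 3000.0   # assumed 28nm bulk processed-wafer cost ($)
capex_share = 0.5          # assumed CAPEX fraction of wafer cost

soi_premium = 500.0        # FD-SOI substrate premium, from the article ($)
ff_process_uplift = 1.38   # 14nm FF process cost vs 28nm, from the article
ff_capex_uplift = 1.86     # 14nm FF fab CAPEX vs 28nm, from the article

cost_28_fdsoi = base_wafer_cost + soi_premium

# 14nm FF: the non-CAPEX portion scales with the 38% process uplift, and
# the CAPEX portion scales with the 86% increase (at equal wafer volume).
cost_14_ff = (base_wafer_cost * (1 - capex_share) * ff_process_uplift
              + base_wafer_cost * capex_share * ff_capex_uplift)

print(f"28nm FD-SOI wafer: ${cost_28_fdsoi:,.0f}")   # $3,500
print(f"14nm FF wafer:     ${cost_14_ff:,.0f}")      # $4,860
```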

Thus, can we already say that FD-SOI is smarter than Moore?

Let’s take a look at a few examples: (1) CPU, GPU and logic, (2) memories and (3) analog and high speed, and try to analyze the FD-SOI benefits with respect to 28nm bulk.

  • CPU, GPU and logic: here we are looking for high performance and power efficiency. FD-SOI technology exhibits excellent leakage behavior (in fact, there is no source-to-bulk or drain-to-bulk leakage!), and dynamic power can be 30% lower (at equivalent performance, as described here), so we can tick the power-benefit box. Performance: we have written about a high-speed data networking ASIC whose chip maker claims a 30% performance improvement, thanks to the forward body bias effect. Note that you can take only one of these 30% gains at a time, or optimize for the best performance-to-power trade-off, for a mobile application for example…
  • Memories: again, the direct benefit of Silicon On Insulator technology is lower leakage compared with bulk, so an SoC designer may optimize the architecture, integrating more embedded memory or designing for lower power.
  • Analog and high speed: almost every SoC integrates analog (ADC or DAC) or high-speed PHYs to support interface protocols (DDRn, USB 3.0, PCIe, etc.). Paul has shown in this blog that FD-SOI analog performance is far beyond that of bulk, which translates into a better figure of merit for high-speed IP.

So, can we say that a technology that offers lower cost than bulk HPM and FinFET, plus better reliability and yield thanks to true process compensation through body bias, plus flexibility of use, is smarter than Moore? I would guess so…

From Eric Esteve from IPNEST

More Articles by Eric Esteve…..



Xilinx & Apache Team up for FPGA Reliability at 20nm
by Pawan Fangaria on 03-17-2014 at 12:00 am

In this age of SoCs, with hundreds of IPs from different sources integrated together and working at high operating frequencies, FPGA designers are hard pressed to maintain chip reliability against issues arising from excessive static and dynamic IR drop, power and ground noise, electromigration, and so on. While IPs are tested and verified in isolation, their real test begins when all types of IPs (digital, analog, mixed-signal, etc.) interface with each other and work together inside an SoC. An integrated analysis, verification and optimization methodology must exist, not only for the chip but also with the package and board included, that can sign off the system for power noise reliability as well as electromigration (EM).

Although the electrical concepts being verified appear simple, large design sizes with varying degrees of complexity, including mixed-signal, multi-voltage, and low noise margins, make the task a great challenge for designers. Static IR drop analysis works on a DC pattern (with average currents); it is good for finding gross design weaknesses early, but it cannot represent true transient noise. Dynamic transient analysis takes into account the actual leakage and transient switching currents. Typically the P/G network is extracted into an RLC circuit and SPICE-level simulation is performed for transient noise analysis. However, due to the large size and complexity of the RLC network, using SPICE is not viable. Fast-SPICE simulators have appeared, yet they may not be sufficient to complete P/G analysis for large SoCs in a reasonable time, and their methodologies can have trouble identifying IR and EM bottlenecks.

Xilinx has developed an integrated, comprehensive, single-pass flow for static and dynamic voltage drop analysis at both the IP and the full-chip level. It uses Apache’s Totem and RedHawk platforms for analysis and simulation.


[Static and Dynamic Voltage Drop Analysis Flow]

IP data is provided as a GDSII layout, a SPICE netlist in DSPF/SPEF format, and a testbench or input vectors in SPICE format. For an IC or SoC, the package layout is also needed, along with technology parameters in industry-standard formats (iRCX, nxtgrd), device model data, and layer mapping information. Simulations are performed with the pre-characterization and layout data; analysis then looks for any design weakness or hotspot. Interactive fixing and analysis resolve the issues, and a clean transistor-level model is written out for full-chip analysis. This flow can analyze large IPs while maintaining SPICE-level accuracy by using several techniques.


[Circuit Modeling Detail]

The non-linear circuit is turned into a linear one by pre-characterizing the current, intrinsic resistance and device capacitance of each transistor and replacing every transistor with its own model. The RLC elements of the P/G network and the associated package/board are extracted and simulated for P/G noise. The transistor models act as current sinks while the parasitic network provides the impedance. The capacitance can come from various sources, such as the P/G mesh, device diffusion and gate, and signal lines on the chip. The linear circuit is then solved by proprietary solver technology, and the voltage/current can be obtained at every wire, via and transistor.


[GDSII Modeling and Simulation Flow Using Totem]

GDSII data is used to create a model of the power and ground geometries along with the locations of transistors. A high-performance, high-capacity RLC extraction engine (one that exploits the regularity of the P/G mesh) obtains the parasitics of the P/G network for all power domains, with selective inclusion of mesh capacitance and inductance as needed (neither is needed for static analysis).

For static analysis, an average or peak device current is used, and the DC voltage/current is computed for each wire and via from the voltage sources down to the transistors. For transient analysis, the true characterized transient current profile is used along with the associated effective transistor resistance and capacitances, and time-domain current and voltage waveforms are computed for each wire and via.
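To illustrate the static case, here is a toy nodal-analysis solve (a hand-built three-node resistive grid, nothing like Apache’s production solver): pre-characterized average device currents act as sinks, and the linear system G·v = i yields the voltage at every node:

```python
import numpy as np

# Toy static IR-drop solve: three internal P/G nodes fed from a 0.9 V pad.
# The conductance matrix G comes from nodal analysis of the resistor mesh;
# the right-hand side holds the pad injection and the device sink currents.
VDD = 0.9
g_pad = 1 / 0.05          # 50 mOhm from the pad to node 0
g_mesh = 1 / 0.10         # 100 mOhm between adjacent mesh nodes

# Nodes 0-1-2 in a chain: node 0 ties to the pad, each node sinks current.
G = np.array([
    [g_pad + g_mesh, -g_mesh,          0.0],
    [-g_mesh,         2 * g_mesh,  -g_mesh],
    [0.0,            -g_mesh,       g_mesh],
])
sink = np.array([0.010, 0.020, 0.015])        # average device currents (A)
i = np.array([g_pad * VDD, 0.0, 0.0]) - sink  # pad injection minus sinks

v = np.linalg.solve(G, i)                     # node voltages
print("IR drop per node (mV):", (VDD - v) * 1e3)
```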

Electromigration for wires and vias is determined simultaneously, based on the current flowing through the P/G mesh, and violations are checked against the limits defined in the technology file. For static analysis, DC or average EM limits are checked. For transient analysis, average, RMS and peak EM limits are checked.


[Signal EM Analysis with Vectored Approach]

Signal interconnect EM is verified in a similar manner: the switching currents of transistors connected to signal nets are pre-characterized and used to model the average, RMS and peak currents on each signal net, which are then compared against the limits specified in the technology file. This can be done with either a vectored or a vector-less approach. In the vector-less approach, the current waveform at a driver output is constructed from a few basic parameters provided for the net under analysis. The vector-less approach can achieve 100% coverage, making it possible to sign off EM on every signal net in the design.

By aligning with the bottom-up approach of FPGA design, where IPs are developed separately and then integrated, this methodology promotes IP reuse. The challenge is to perform IP-level validation and then model top-level connectivity with IP-level design constraints, such as the maximum allowable voltage drop. The CMM blocks can be used for full-chip analysis of mixed-signal designs without compromising accuracy or runtime.

More details of this methodology can be found in a technical paper jointly authored by Sujeeth Udipi at Xilinx and Karan Sahni at Apache, presented at DesignCon 2014. The paper also suggests the type of analysis required for different design styles in order to save time without loss of accuracy.


More Articles by Pawan Fangaria…..


Cadence is all about Semiconductor IP!
by Daniel Nenni on 03-16-2014 at 9:00 am

Cadence continues on its quest to be a top semiconductor IP supplier, which is a good thing, since the semiconductor world now revolves around IP. Cadence CEO Lip-Bu Tan mentioned IP 14 times during his keynote, and he was followed by the president of Imagination Technologies and the CEO of the recently acquired Tensilica. I was not given the slides for these presentations, so we will leave it at that for now. I did sneak a quick photo of this slide, which is a nice overview of the current Cadence IP offering.


Semiconductor IP will also be a focus at the IEEE EDPS conference next month. Last year I organized FinFET day, this year it will be IP day:

If you look at semiconductor IP usage trends over the last five process nodes (65nm, 40nm, 28nm, 20nm, 16nm), the number of unique IPs per tape-out is increasing while the ability to re-use IP across nodes is dropping. And thanks to the ultracompetitive mobile market, with new products coming at us every day, design cycles are incredibly short and designs incredibly complex. In this session we will explore the semiconductor IP challenges facing the fabless semiconductor ecosystem.

Coincidentally, Cadence VP of IP Martin Lund will be keynoting:

Every chip is different. So the promise of IP that will work for all doesn’t quite match up to reality. Even with standards-based IP, design teams often request specialized interfaces, memory structures, and other changes so the IP fits better in their SoC. How close are the IP companies getting to delivering IP the way chip designers really want? How close are we to the promised Lego-like approach to chip design with off-the-shelf IP? And what are IP companies working on to close the gap?

Also joining us at the podium:

eSilicon: Patrick Soheili, VP, Business Development and VP and GM, IP Solutions
IPextreme: Warren Savage, President and CEO
Arteris: Kurt Shuler, VP Marketing
TSMC: Lluis Paris, Deputy Director, IP Portfolio Management
Mentor: Carey Robertson, Director of Product Marketing
Atrenta: Bernard Murphy, CTO

And my favorite EDA CEO, Wally Rhines, will be keynoting the Thursday night dinner:

THE BIG SQUEEZE
For decades, we’ve known it was coming and now it’s here. Moore’s Law, which is really just a special case of the “learning curve”, can no longer drive the 30% per year reduction in cost per transistor, beginning with the 20/16/14 nm generation. Either we find innovations beyond just shrinking feature sizes and increasing wafer diameter, or we slow our progress down the learning curve, introducing innovative new electronic capabilities at a slower rate than in the past. There are lots of alternatives, including a reduction in the profitability of the members of the supply chain, to keep the progress continuing at the same rate as the last fifty years. Dr. Rhines will review the mathematical basis for the dilemma and, with his brand of humor, provide a roadmap of possibilities for the decade ahead.
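For those who want the learning curve made explicit, one standard formalization (my addition, not from the talk) has unit cost falling as a power of cumulative volume:

```latex
C(Q) = C_1 \, Q^{-b}, \qquad \frac{C(2Q)}{C(Q)} = 2^{-b}
```

where C_1 is the cost of the first unit and b sets the cost reduction per doubling of cumulative volume; a steady ~30% per year decline in cost per transistor is the special case where cumulative transistor volume doubles on a regular cadence.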

EDPS early-bird registration is open now; I hope to see you there!

More Articles by Daniel Nenni…..



MAS370 MH370 DO254 and Cell Data
by Luke Miller on 03-15-2014 at 10:00 pm

For the connected, instant-knowledge information world we live in, the missing Malaysia Airlines Flight 370 is most humbling. Let us be reminded as we look for details and theorize… that someone’s Father, Mother, Brother, Sister, Son, Daughter, Friend is missing. Just terrible, but the Millers continue to pray and hope for that miracle.

What have we learned from a technology standpoint? Being a RADAR/EW fella, it amazes me how many holes we have in the airspace as we get further from US soil. Check out NEADS (Northeast Air Defense Sector), which is right down the road from me. Simply amazing, but it is not possible for every place to have such resources. We have learned about transponders and other fabulous technologies. Engines that keep sending a pulse back to their designers, and the list goes on. It is obvious the plane was not destroyed mid-flight, as we know hardware does not keep pinging without power.

If one looks at the DO-254 standard, Design Assurance Guidance for Airborne Electronic Hardware, designing hardware into an aircraft is anything but trivial, nor for the faint of heart. Most designers prove that their requirements work; the DO-254 standard, in a nutshell, requires one to also prove what is not supposed to work. Being an FPGA guru, I know this is possible, but it takes stellar IP and a dedicated team. Check out logicircuit.com for an excellent overview of these standards and IP. This basically means every design and bit permutation is tracked and covered to prove that the design cannot cause a safety-critical issue.

It looks more and more like, unfortunately, we do not have the world’s best people. Trust is a funny thing; every day we place our lives in the hands of others. I experienced that this week, first with my wife driving me to the airport and then with my six flights. You know what I was thinking about? I wondered about the character of these pilots. My co-passengers? I looked them over; I couldn’t help it.

With the simple flick of a switch, the fifth-largest passenger jet in the world, the Boeing 777, went into stealth mode… Humbling.

Technology is revealed in the search for the airplane: the data analysis, the tracking algorithms, the possibilities. I believe it is time to pull the cell phone data from all the pilots and passengers, to see if their phones were not in flight mode, and correlate that data with what we know. As you know, you can get coverage in spots, and there is a record of pings when a cell phone tries to get service… Perhaps some texts got through. It would be great if some of the phones were on and this data led to a safe recovery of the people. That would be fantastic. Search the cell data; the answer may be in there.

As you know, after this is over we can expect the overcorrection, the new rules, the new technology, etc. Technology can track us, heal us, and help us, but it will never give integrity, trust, respect, virtue and the like. Electronics and laws can hold us accountable, but the rest is up to each one of us. Still pulling for the miracle.


Getting 3D TV from 2D Content
by Daniel Payne on 03-14-2014 at 7:28 pm

3D TV has been all the rage over the past few years because of the added realism it offers the viewer, but there’s really not that much content that you can stream or play on a Blu-ray device. Wouldn’t it be cool if there were a box that could create 3D on the fly from a 2D stream or Blu-ray? This week I discovered that such a box does exist, and I got to see it myself courtesy of a company called VEFXi and their converter box, the 3D-Bee.

Manny Muro is the VP of Engineering at VEFXi, and he invited me over to their place in North Plains, Oregon, about a 40-minute drive from my home. The 3D-Bee product has an FPGA inside, along with some off-the-shelf video chips. Manny and his team are hard at work on a next-generation chip that will have higher performance and be implemented as an ASIC, where the ASIC vendor will take the RTL code and do the physical implementation. Their design process starts with an architect writing algorithms, which are then manually coded in Verilog, followed by logic synthesis with Synopsys. For virtual prototyping they are using a Xilinx Spartan-6 development board from Avnet. With an HDMI daughter card they can look at real video output to verify their design.

I asked Manny if they were using any of the high-level synthesis (HLS) capability in the Vivado software from Xilinx, but surprisingly he said they didn’t, because they needed a finer level of control over the implementation. I remember that Luke Miller blogged about HLS in Vivado, and I had thought that video conversion would be a perfect fit for it at VEFXi.

For a demo they used a Blu-ray player with 2D content and a 3D TV that required glasses, and we watched different action movie clips where the 3D effect made the movie even more compelling. I learned that 3D TVs use two different frame-packing technologies (a toy sketch of both follows the list):

  • Side by side
  • Frame sequential
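To make the two formats concrete, here is a toy sketch of how the left and right views are packed in each scheme (illustrative only; real converters filter rather than decimate, and handle color and signaling metadata):

```python
import numpy as np

def side_by_side(left, right):
    """Pack both views into one frame at half horizontal resolution."""
    half = lambda img: img[:, ::2]               # crude 2:1 decimation
    return np.hstack([half(left), half(right)])  # same size as one input

def frame_sequential(left_frames, right_frames):
    """Alternate full-resolution views in time: L, R, L, R, ...
    The TV drives the shutter glasses in sync with the alternation."""
    out = []
    for l, r in zip(left_frames, right_frames):
        out.extend([l, r])
    return out
```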

I also met Craig Peterson, the founder and CEO of VEFXi. Craig is also on the board for the International 3D Society (I3DS). For the grand finale, Craig showed me a demo of some of their upcoming 3D technology that provides dramatically more depth during viewing compared to any other technology out there.

The holy grail of 3D is to do away with the funny glasses and view 3D unencumbered, naturally; the industry phrase is auto-stereoscopic 3D, meaning glasses-free 3D. Stay tuned for some upcoming product announcements in this area from VEFXi.

The 3D-Bee product family has been on the market since 2011 and you can buy it online directly at www.3d-bee.com/store for just $349. VEFXi is also looking to hire a couple of ASIC design engineers.



Jasper at DVCon and EJUG
by Paul McLellan on 03-13-2014 at 7:05 pm

The Jasper European User Group meeting (EJUG) is coming up in a couple of weeks. It will be held in the Munich Hilton (which I have stayed in many times, the S-bahn from the airport pretty much stops in the basement) on April 2nd.

The schedule for the day is:
9:00 AM – Registration and continental breakfast
9:30 AM – Jasper Overview
9:45 AM – Customer Success Stories
10:00 AM – ARM Presentation
10:45 AM – Break
11:15 AM – Port-Based Generic Mechanism for Connectivity by Ericsson
11:45 AM – Efficient IP Bring-up with Jasper
12:30 PM – Lunch
2:00 PM – Advanced Functional Verification for Quality and Productivity Increase by ST
2:45 PM – Low Power Formal Verification with Jasper
3:15 PM – Exhaustive Security Verification with Jasper
3:45 PM – Break
4:15 PM – High-Performance Sequential Equivalence Checking with Jasper
4:45 PM – Coverage Verification with Jasper
5:15 PM – Panel: Efficient Verification Problem Solving with Jasper
-Yann Oddos, Intel
-ARM
-Yousaf Gulzar, Ericsson
-Guy Dupenloup, ST
6:00 PM – Closing remarks and cocktail reception

Registration for EJUG is here.

At DVCon some of Jasper’s customers presented on various aspects of using formal techniques. Prosenjit Chatterjee, Scott Fields and Syed Suhaib from Nvidia presented A Formal Verification App Towards Efficient, Chip-Wide Clock Gating Verification. Clock gating is an important part of SoC design for controlling power, but with multiple clock domains the verification can be complex. Nvidia presented a fully automated approach built on top of Jasper’s sequential equivalence checking (SEC) app. The SEC app performs various optimizations automatically to achieve deeper proof bounds, or in many cases even full proofs, by taking advantage of the symmetry of the setup. Nvidia applied this methodology across the chip, illustrating its usefulness: they found multiple clock gating bugs across many projects, over half of which were found after supposedly high simulation coverage of the design. If you want to find obscure corner cases that are not handled correctly, formal techniques once again outperform simulation.
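To see what sequential equivalence checking establishes here, consider a toy model (mine, not Nvidia’s or Jasper’s): a register with an enable versus the same register driven by a gated clock. SEC must prove the two hold identical state for every input sequence; the brute-force loop below is the miniature analogue of that proof:

```python
from itertools import product

def enabled_reg(seq):
    """Reference: register updates when enable is high (clock free-runs)."""
    q, trace = 0, []
    for en, d in seq:
        if en:
            q = d
        trace.append(q)
    return trace

def clock_gated_reg(seq):
    """Implementation: the clock pulses only when the gating term holds."""
    q, trace = 0, []
    for en, d in seq:
        clk = en            # gated clock: suppressed when enable is low
        if clk:
            q = d
        trace.append(q)
    return trace

# Brute-force "equivalence check" over all 4-cycle (enable, data) sequences.
ok = all(enabled_reg(s) == clock_gated_reg(s)
         for s in product(product([0, 1], repeat=2), repeat=4))
print("sequentially equivalent:", ok)
```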

Shuqing Zhao, Shan Yan and Yafeng Fang from Broadcom presented Practical Approach Using a Formal App to Detect X-Optimism-Related RTL Bugs. X-optimism is a problem during RTL simulation, and with SoCs the size they are, gate-level simulation is simply not practical for eliminating the issues. The JasperGold X-propagation verification app reads in the RTL, analyzes the design, and then automatically implements assertions to check for all X occurrences on targets such as clocks, resets, control signals and output ports. If the formally proved X occurrences are determined by the user to be unexpected, it usually implies they were masked in RTL simulation due to X-optimism. Broadcom discussed the results of this approach using two case studies, a power management controller module and an audio processing module, both of which had design bugs masked by X-optimism.
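A minimal model of X-optimism itself (a Python analogue of Verilog if/else semantics, purely illustrative): when a branch condition is X, RTL simulation silently takes one branch and masks the unknown, while an X-accurate evaluation propagates it:

```python
X = "x"  # unknown value, e.g. from an uninitialized or unreset register

def mux_pessimistic(sel, a, b):
    """X-accurate: if the select is unknown, the output is unknown
    (unless both inputs agree). This is what X-propagation checks model."""
    if sel is X:
        return a if a == b else X
    return a if sel else b

def mux_rtl_optimistic(sel, a, b):
    """Typical RTL if/else semantics: an X select falls through to the
    else branch, so the unknown is silently masked in simulation."""
    return a if sel == 1 else b

sel = X                                 # e.g. a flop that missed its reset
print(mux_rtl_optimistic(sel, 1, 0))    # -> 0  : bug masked in RTL sim
print(mux_pessimistic(sel, 1, 0))       # -> 'x': unknown correctly propagates
```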

And save the dates: DVCon next year is March 2-5, 2015, and DVCon Europe is October 14-15 this year (also in Munich, like EJUG).

The DVCon website is here.


More articles by Paul McLellan…


Cadence and ARM BFF
by Paul McLellan on 03-13-2014 at 6:38 pm

The biggest market for semiconductors is mobile and an ARM processor is the center of the axle around which it revolves. So everyone in the mobile ecosystem needs to work closely with ARM. At CDNLive earlier this week Cadence and ARM announced that they are deepening their partnership. Most of what they announced makes it a lot easier to use Cadence’s products with ARM’s without the user having to put it all together themselves. The announcement is a three-legged stool.

The first leg is that Cadence has a new adaptable interconnect performance characterization test suite in the Cadence Interconnect Workbench, along with AMBA Designer integration, that delivers a significant speed-up of performance analysis and verification for systems based on CoreLink CCI-400 system IP and the NIC-400 design tool.

The second leg of the stool is that Cadence now provides ARM Fast Models combined with the Palladium XP II platform to support ARMv8-based system embedded OS verification. What this means in practice is that it is much easier to use Cadence’s hybrid virtual platform technology using ARM Fast Models for the processor (and perhaps some peripherals) and Palladium emulation to model the parts of the design that only exist at RTL. In particular, operating system (OS) bringup should be straightforward since everything is coming from a single supplier, Cadence.

Thirdly, verification IP (VIP) supporting the ARM AMBA5 CHI protocol for advanced networking, storage and server systems is now available for simulation and the Palladium XP II platform.

Together these three capabilities make bringing up ARM Cortex based systems easier. The lead customer for this is Nvidia. As Kevin Kranzusch, Vice President of System Software, says: “The Cadence Palladium solution for embedded software development enabled by ARM-based Fast Models helps us reduce the system software validation cycle and ensures a smoother post-silicon bring-up.”

I worked for several years in the virtual platform market, as did Frank Schirrmeister, whom I met to discuss the announcement. The big problem with virtual platform technology was never the processor modeling, which was amazing technology, but modeling the boring peripherals. No matter how good the modeling technology, it took time and money to create and maintain the models. Since we were selling the ability to start software development earlier, taking time reduced the value proposition, and taking money is always worse than not needing to. What looks really significant now is that emulation has transformed from something esoteric that the occasional development group had for SoC design into something mainstream that is part of every verification strategy. The result is that hybrid virtual platforms, with processors modeled using JIT compiler technology and peripherals modeled as RTL on an emulator, are the “killer app” for virtual platforms.


More articles by Paul McLellan…


Designing for Wearables!
by Daniel Nenni on 03-13-2014 at 5:30 pm

Wearables are going to be a real game changer for the fabless semiconductor ecosystem, absolutely. What other high-volume semiconductor market segment has such a low barrier to entry? Speaking of low barriers to entry, the first stop on my Southern California trip last week was Monrovia, the home of Tanner EDA. Tanner is already a player in wearable design enablement due to their low cost and low learning curve.

Tanner EDA tools for analog and mixed-signal IC and MEMS design offer designers a seamless, efficient path from design capture through verification. Our powerful, robust tool suite is ideal for applications including Power Management, Life Sciences / Biomedical, Displays, Image Sensors, Automotive, Aerospace, RF, Photovoltaics, Consumer Electronics, MEMS and Wearables! Try it for free!

Generally speaking, 80% of the silicon is shipped by 20% of the companies (the tried and true 80/20 rule). This may also be the case with wearables from Google, Apple, Samsung, Intel, LG, Sony, Microsoft, etc., but the remaining 20% represents billions of units over the next 10 years, and there will be thousands of semiconductor entrepreneurs vying for that 20%, so you had better get started!

I guess the smartwatch would be considered the first wearable. My first smartwatch was a Pulsar calculator watch, remember those? I was tossed out of a math exam for wearing one. Not for using it, just for having it on my wrist! Even today wearable technology scares people. The Google “Glasshole” stories are pretty funny.

The history of the wrist watch is pretty interesting. Wearable time pieces were generally kept in a pocket until the military started strapping them to their wrists for more efficient time telling. My Grandfather wore his first wrist watch in WWI but went back to the traditional pocket watch afterwards until his death at age 102. We are creatures of habit for sure. Wrist watches continued to grow in popularity in the 1920s and pretty much everyone wore a watch until the mobile phone became ubiquitous.

I stopped wearing a watch when I started carrying a Blackberry and I have not worn one since. Even though I’m a fitness fanatic I have yet to buy a Fitbit or Fuelband. What’s it going to take to get something back on my wrist? From what I can tell Apple is on the right track by combining a superset of functions from the iPhone and a health wrist band. What I really want to know is how much longer I have left to finish my bucket list? By profiling my many health and movement indicators, a smartwatch should be able to predict a catastrophic health event such as a seizure, stroke, or heart attack. And that capability is coming, believe it. We will change the world once again.

More Articles by Daniel Nenni…..
