SOCFIT, Circuit Level Soft Error Analysis
by Paul McLellan on 05-13-2013 at 2:50 pm

I blogged recently about reliability testing with high-energy neutron beams. Beam testing is good for gathering basic reliability data, but it is not much help for addressing reliability while the chip is still being designed, when something can still be done about it.

That is where IROC Technologies' SOCFIT tool comes in. It takes the data from that type of silicon analysis with real neutrons and uses it to analyze how the various cells on the chip have been hooked together, producing reliability estimates. SOCFIT quickly and accurately calculates the failure rate (FIT, failures per billion device-hours) and various derating factors for the SoC. It works from either an RTL or a gate-level representation of the design.


SOCFIT uses the foundry's SER database for FIT and derating simulation. It can handle very large designs with tens of millions of flops. It then produces an extensive report detailing the contribution of each cell in the design to the overall FIT rate, along with the derating details. It also includes smart fault-injection simulation for application derating. SOCFIT is available as a standalone tool, but usually, at least the first time, design groups work with IROCtech experts both to get good results and to learn how to interpret them.
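To make the arithmetic concrete, here is a minimal sketch (in Python, and emphatically not SOCFIT itself) of how per-cell raw FIT rates and derating factors roll up into a chip-level estimate. The cell names, instance counts, and factors below are invented for illustration:

```python
# Minimal sketch (not SOCFIT): rolling up per-cell raw FIT rates and
# derating factors into a chip-level soft-error estimate.
# 1 FIT = 1 failure per 10^9 device-hours. All values are illustrative.
cells = [
    # (cell name, instance count, raw FIT per cell,
    #  timing derating, architectural derating)
    ("dff_std",   2_000_000, 1.0e-4, 0.5, 0.3),
    ("sram_bit", 64_000_000, 5.0e-5, 1.0, 0.1),
]

chip_fit = 0.0
for name, count, raw_fit, tvf, avf in cells:
    # Effective FIT = raw FIT x timing vulnerability x architectural
    # vulnerability, summed over all instances of the cell.
    chip_fit += count * raw_fit * tvf * avf

print(f"Estimated chip-level soft-error rate: {chip_fit:.1f} FIT")
```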


How much is worth investing to make a design more reliable and less sensitive to SEEs is an economic question, and the answer depends on the end market. Satellite electronics might justify triple redundancy and voting, since satellites have to survive in a far more hostile environment than anything on the ground. Making a cell phone that reliable is just not worth the cost: phones are not mission critical and will most likely crash regularly due to software bugs and other issues, not cosmic rays. And a phone only has to last a few years.

Automotive electronics is one of the areas most focused on reliability. Cars have to last for twenty years and operate in deserts and Alaskan winters, and while it doesn't matter too much if your radio reboots due to a particle upset, the engine-control ECU(s) are another matter. Medical electronics is another area that cannot tolerate much unreliability.

As transistors get smaller and smaller, and power-supply voltages continue to decrease, the charge that a high-energy particle deposits becomes more and more likely to exceed a node's critical charge and cause an SEE upset. So this is not a problem that is going away; it is a problem that is going to keep getting worse.
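A back-of-the-envelope illustration of why: to first order, the critical charge of a node is Qcrit ≈ Cnode × Vdd, so both shrinking node capacitance and lower supply voltage reduce the margin. The values below are assumptions for illustration, not measured data:

```python
# Back-of-the-envelope illustration (assumed values, not measured data):
# critical charge Qcrit ~ C_node * Vdd to first order.
c_node = 1.0e-15  # assumed node capacitance: 1 fF
for vdd in (1.2, 0.9, 0.7):
    q_crit = c_node * vdd  # coulombs
    print(f"Vdd = {vdd:.1f} V -> Qcrit ~ {q_crit * 1e15:.2f} fC")
# A particle strike depositing around 1 fC, harmless at the higher
# supply voltage, can flip the node as Vdd drops.
```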

A case study on SOCFIT is here. The datasheet for SOCFIT is here.

IROCtech will be at DAC in booth #1738.


UVM/SystemVerilog: Verification and Debugging
by Daniel Payne on 05-13-2013 at 2:45 pm

At DAC in just three weeks you can learn which EDA vendors are supporting the latest UVM 1.1d (Universal Verification Methodology) standard as defined by Accellera. One of those vendors is Aldec, and they have a 45-minute technical session that you can register for online. Space will fill up quickly, so get signed up sooner rather than later.
Continue reading “UVM/SystemVerilog: Verification and Debugging”


Sonics-ARM Form A Potent IP Combination
by Randy Smith on 05-13-2013 at 1:30 pm

Recently, Sonics and ARM entered into an agreement whereby ARM licensed a significant portion of Sonics' patent portfolio. Sonics, Inc. is one of the leading providers of connectivity IP, often referred to as network-on-chip (NoC). ARM is the leading provider of processor intellectual property (IP). The potential scope of their relationship is huge:

“A broad IP ecosystem is critical for the successful deployment of SoCs,” said Tom Cronk, executive vice president and general manager, Processor Division, ARM. “The agreement to license Sonics’ interconnect patents and to support Sonics on their next-generation interconnect and low power management is an important step in strengthening the ecosystem. Sonics and ARM have a clear vision for the future of IP technology, which we look forward to realizing together.”

Most of the leading edge designs today consist of these two components (processors and NoC) plus other modules including memories and various specialized blocks for interfaces (e.g., USB, LVDS) and data processing (e.g., motion estimation). From an architectural perspective, the processor and NoC choices are probably the most critical decisions a chip architect needs to make. That ARM and Sonics are so obviously interested in cooperating with each other is significant and fortuitous for their customers and the industry in general.

While the formal announcement did not contain any financial information, it did indicate that a large number of patents – 138 – were licensed to ARM. Recent deals in this area usually value patents in the range of $200k to $700k per patent (e.g., MIPS-Imagination, Google-Motorola, the Nortel patents). While those more public deals involved large corporate sellers, it is reasonable to estimate that ARM paid somewhere around the low end of this range or slightly less; even at $150k per patent, 138 patents comes to roughly $20.7M. So the deal was probably worth more than $20M.

The future collaboration between the companies appears to be focused on power management. Sonics has been investing in the development of advanced power-management solutions that leverage the NoC's knowledge of interconnect traffic and SoC activity to manage power domains more efficiently. ARM's collaboration with Sonics on this initiative suggests that ARM believes Sonics' approach has merit. Given that ARM's processors dominate the handheld marketplace, power management is a huge issue. The picture below shows a typical tablet SoC's various power domains and how the NoC is well positioned to help manage power. If Sonics can use its relationship with ARM and its strength in NoC to bring about better power solutions for these devices, this relationship will clearly deliver value to their mutual customers and position Sonics for significant growth in the years to come.


DAC does not seem to be a place to meet with NoC companies, though Sonics is sponsoring the "Kickin' it up in Austin" events at DAC. However, Sonics is attending the Multicore Developers Conference this month if you'd like to meet them.

Sonics, Inc. is the leader in system IP for cloud-scale SoCs. As a pioneer of network-on-chip (NoC) technology, Sonics offers SoC designers one of the world's largest portfolios of system IP for mobile, digital entertainment, wireless and home networking. With a broad array of silicon-proven IP, Sonics helps designers eliminate memory bottlenecks associated with complex, high-speed SoC design, streamline and unify data flows, and solve persistent network challenges in embedded systems with multiple cores. Sonics has more than 138 patent properties to date and has enabled its customers to ship more than two billion chips worldwide. Founded in 1996, Sonics is headquartered in Milpitas, Calif. with offices worldwide. For more information, please visit www.sonicsinc.com, www.sonicsinc.com/blog, and follow us on Twitter at http://twitter.com/sonicsinc.


A random walk down OS-VVM
by Don Dingee on 05-13-2013 at 11:14 am

Unlike one prevailing theory of financial markets, digital designs definitely don't function or evolve randomly. But many engineers have bought into the theory that designs can be tested completely at random. Certainly there is value to randomness: it exercises combinations of inputs, including unexpected ones that a designer wouldn't try but a test engineer without a priori bias would.

Continue reading “A random walk down OS-VVM”


Tektronix at #50DAC
by Daniel Nenni on 05-13-2013 at 10:00 am

If you grew up in a similar era to mine, you will know Tektronix as a company that manufactures test and measurement devices. Every lab I was in during high school and college had Tek oscilloscopes and logic analyzers. At #50DAC, however, attendees who visit Tektronix will experience firsthand RTL simulation-level visibility into multi-FPGA prototypes, eliminating recompiles for faster, more efficient debugging.

BEAVERTON, Ore., May 13, 2013 – Tektronix, Inc., a leading worldwide provider of test, measurement and monitoring instrumentation, today announced it will showcase its recently introduced Certus 2.0 ASIC prototyping debug solution at the 2013 Design Automation Conference in Austin, TX, June 2-6, Booth 819. DAC is the premier conference devoted to the design and automation of electronic systems (EDA), embedded systems and software (ESS), and intellectual property (IP).

Shown for the first time at the Design Automation Conference (DAC), the Certus 2.0 suite of software and RTL-based embedded instruments fundamentally changes the ASIC prototyping flow by enabling full RTL-level visibility and making FPGA internal visibility a feature of the prototyping platform. This simulation-level visibility allows engineers to diagnose multiple defects in a day versus a week or more with existing tools.

“Proactive debug capability for ASIC prototypes has been missing within the FPGA ecosystem,” said Dave Farrell, general manager for the embedded instrumentation group at Tektronix. “DAC attendees will now be able to see firsthand how Certus 2.0 fundamentally changes the ASIC prototyping flow and dramatically increases debug productivity.”

Proactive debug strategy
Certus 2.0 allows designers to automatically instrument all the signals likely to be needed in each of the FPGAs in a multi-FPGA ASIC prototype, with a small FPGA LUT impact. This enables a proactive debug and instrumentation strategy, eliminating the need to re-compile the FPGA to debug each new behavior, typically a painful eight- to eighteen-hour ordeal with traditional tools. Other key capabilities include:

  • Automatic identification and instrumentation of RTL signals based on type and instance name including flip-flops, state machines, interfaces and enumerated types
  • On-chip, at-speed capture and compression of many seconds of data without special external hardware or consuming FPGA I/O resources
  • Advanced on-chip triggering bringing the power of logic analyzer trigger methods to embedded instrumentation
  • Time-correlated capture results across clock domains and multiple FPGAs providing a system-wide view of the entire target design

Certus 2.0 works on any existing commercial or custom ASIC prototyping platform, and does not need special connectors, cables, or external hardware.

Tektronix Embedded Instrumentation Solutions
Following the acquisition of Veridae Systems in 2011, Tektronix Embedded Instrumentation solutions reflect the growing importance of Electronic Design Automation (EDA) software in helping engineers solve difficult instrumentation and debug challenges.

Wondering what else Tektronix is up to? Check out the Tektronix Bandwidth Banter blog and stay up to date on the latest news from Tektronix on Twitter and Facebook.

About Tektronix
For more than sixty-five years, engineers have turned to Tektronix for test, measurement and monitoring solutions to solve design challenges, improve productivity and dramatically reduce time to market. Tektronix is a leading supplier of test equipment for engineers focused on electronic design, manufacturing, and advanced technology development. Headquartered in Beaverton, Oregon, Tektronix serves customers worldwide and offers award-winning service and support. Stay on the leading edge at www.tektronix.com.



Cliosoft CEO on Design Collaboration Challenges!
by Daniel Nenni on 05-12-2013 at 8:30 pm

Cliosoft was one of the first SemiWiki subscribers and it is a pleasure to work with them. They have one of the busiest landing pages, with more than 30 articles authored by Daniel Payne, Paul McLellan, and me. Srinath and I have lunch occasionally and exchange ideas, observations, and experiences:

Q: What are the specific design challenges your customers are facing?

Design teams approach us when they are having issues sharing design data and collaborating between team members. Design teams are growing and hiring talent wherever it is available. Design flows are complex, often using tools from different EDA vendors. Efficiently sharing design data across multiple design centers is a requirement even for small startups.

Q: What does your company do?

ClioSoft provides design data management solutions integrated seamlessly into design flows from all the leading EDA vendors. We provide collaboration, revision control, release management, access controls, IP management & reuse – features similar to what software configuration management (SCM) systems provide for software development teams. Our solutions are design-aware. For instance, users can run commands on an entire design hierarchy or invoke a visual comparison between two revisions of a schematic or layout. We refer to our solutions as Hardware Configuration Management (HCM) because they are built from the ground up to meet the needs of hardware design teams.

Q: Why did you start/join your company?

I have been in EDA software development for over thirty years. I started my career in the early days of commercial EDA with Silvar-Lisco. After that I worked in engineering and management positions at Synopsys and Vantage Analysis Systems. When Vantage was acquired by Viewlogic, a few of us left to start a consulting company called Proxy Modeling. Our consulting assignments often led to streamlining flows and helping set up revision control and design management strategies. Since I had several years of software development experience, I had used SCM systems like Apollo Computer's DSEE and IBM's ClearCase. I soon realized that SCM systems were not always ideal for managing hardware design data. Software is typically made up of relatively small text files that users create. Hardware designs are often done with graphical tools like schematic or layout editors that generate loads of files, many of them large binary files. So I founded ClioSoft to provide a revision control and configuration management system built to meet the challenges of hardware design.

For a more detailed history:
http://www.semiwiki.com/forum/content/2011-brief-history-cliosoft.html

Q: How does your company help with your customers’ design challenges?

As more members are added to customer teams, individual designer productivity suffers because more time is spent coordinating and sharing data and information. ClioSoft's solutions grease the wheels to improve team productivity. Designers can efficiently share design data with their team members whether they are sitting in the next office or across the world. All changes are tracked, which improves accountability and visibility; everyone knows what is happening in the project. Our tools provide insurance against mistakes and peace of mind that the design team taped out using all the correct versions of design files. As teams get more comfortable with our solutions, the holy grail of design reuse becomes much easier and more practical.

Q: What are the tool flows your customers are using?

We support a variety of different flows, from digital front-end to analog/mixed-signal and even PCB design. We have a close relationship with all the major EDA vendors and seamless integration with Cadence Virtuoso, Mentor Pyxis, and Synopsys Custom Designer & Laker (previously SpringSoft), and we have just added integration with Agilent ADS. Using our Universal DM Adaptor technology, a rule-based system, customers manage data from a variety of flows such as Cadence Allegro, Mentor BoardStation, etc.

Q: What will you be focusing on at the Design Automation Conference this year?

We will be focusing this year on SOS viaADS – our integration with Agilent’s Advanced Design System. This product is the result of close cooperation between Agilent and ClioSoft engineering teams over a period of 18 months. It is a deeply integrated solution that provides revision control and collaboration in the ADS flow for both Windows and Linux platforms. Many of our customers use ADS along with other flows like Cadence Virtuoso. Now ADS users will be able to get the same benefits as Virtuoso users and they will be able to manage all their design data in one SOS project repository.
Here is a link to the press release:

http://www.cliosoft.com/news/press/pr_2013_05_07_agilent.shtml

Also see Cliosoft at #50DAC:
http://www.cliosoft.com/dac/

Q: Where can SemiWiki readers get more information?

http://www.cliosoft.com

http://www.semiwiki.com/forum/content/section/397-cliosoft.html

ClioSoft is the premier developer of hardware configuration management (HCM) solutions. The company’s SOS™ Design Collaboration platform is built from the ground up to handle the requirements of hardware design flows. The SOS platform provides a sophisticated multi-site development environment that enables global team collaboration, design and IP reuse, and efficient management of design data from concept through tape-out. Custom engineered adaptors seamlessly integrate SOS with leading design flows – Cadence’s Virtuoso® Custom IC, Synopsys’ Galaxy Custom Designer, Mentor’s IC flows, and SpringSoft’s Laker™ Custom Layout Automation System. ClioSoft’s innovative Universal DM Adaptor technology “future proofs” data management needs by ensuring that data from any flow can be meaningfully managed. The Visual Design Diff (VDD) engine enables designers to easily identify changes between two versions of a schematic or layout by graphically highlighting the differences directly in the editors.

Also Read

Agilent ADS Integrated with ClioSoft

Data Management for Designers

Modern Data Management


A Big Boost for Equivalency Checking
by Daniel Payne on 05-12-2013 at 1:41 pm

Thirty years ago, in 1983, Daniel Gajski and Robert Kuhn created the now-famous Y-chart to show the various levels of abstraction in electronic system design:

We can still use this Y-chart today because it reflects how engineers do their SoC designs. Along the behavioral axis there is a need to know that each level of abstraction really is equivalent to the others, to ensure that the design is consistent and that no errors have crept in from causes such as:

  • Addition of DFT structures
  • Addition of low-power techniques, like clock gating
  • Changes in cells during timing closure
  • Engineering Change Orders
  • Manual netlist changes

One brute-force approach is to run functional simulation, re-using your test benches on each level of behavioral model. That approach takes a lot of time and still is not guaranteed to find all logical differences between two levels of models.

A more elegant approach is to use a class of EDA tools known as equivalency checking (EC), which takes a mathematical approach to proving equivalence between two levels. Equivalency checking has traditionally had a few limitations:

  • Slow run-time speeds
  • Limited capacity
  • Complexity in terms of learning and setup

Where there's a need there's an opportunity, so the software engineers at Oasys have worked to address each of these three limitations by adding new features to EC: hierarchy, automatic partitioning, and parallel multiprocessing. With these new technical features you can use EC with:

  • Faster run-time speeds
  • Higher capacity by scaling
  • Simplicity in use

Let’s look at some actual numbers using this new EC approach:

The Oasys tool is called RealTime Parallel EC, and it can verify the sub-blocks of a hierarchical design simultaneously, so run-times scale linearly with the number of processors available.
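The idea is easy to sketch. Below is a toy illustration (not Oasys code): partition the design into blocks, then check each block's two representations in a separate process. The check_equivalent function here is a hypothetical stand-in that brute-forces a miter over all input patterns, which is only feasible for tiny combinational blocks, but it shows why independent blocks parallelize so cleanly:

```python
# Illustrative sketch only (not the Oasys implementation): checking the
# sub-blocks of a partitioned design in parallel. check_equivalent()
# brute-forces a miter over every input pattern, workable only for tiny
# combinational blocks; a real EC engine uses formal methods instead.
from itertools import product
from multiprocessing import Pool

# Two representations of each block, defined at module level so they
# can be pickled and shipped to worker processes.
def ref_sum(a, b):   return a ^ b          # reference: XOR
def impl_sum(a, b):  return (a + b) & 1    # implementation: add mod 2
def ref_carry(a, b): return a & b          # reference: AND
def impl_carry(a, b):return a * b          # implementation: multiply

blocks = [
    ("sum",   ref_sum,   impl_sum,   2),
    ("carry", ref_carry, impl_carry, 2),
]

def check_equivalent(block):
    name, ref_fn, impl_fn, n_inputs = block
    for bits in product([0, 1], repeat=n_inputs):
        if ref_fn(*bits) != impl_fn(*bits):  # miter fires: mismatch
            return name, False
    return name, True

if __name__ == "__main__":
    with Pool() as pool:  # one block per worker: near-linear scaling
        for name, ok in pool.map(check_equivalent, blocks):
            print(name, "equivalent" if ok else "MISMATCH")
```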


If you travel to DAC then plan on visiting Oasys in booth #1231 to get your questions answered.

Oasys is a privately held company that was founded in 2004 by a team of leading RTL synthesis developers from Ambit and Cadence. The team created a next-generation physical RTL synthesis platform that empowers SoC/ASIC design teams to conquer the timing, power, area, and routability challenges of today's complex SoCs, ASICs, and IP blocks. Oasys RealTime synthesis optimizes at a higher level of abstraction (the RTL level vs. the gate level with other synthesis tools), enabling it to provide up to 10x faster turnaround times and the capacity to synthesize the entire top level of the largest SoCs, ASICs, or IP blocks, all while being physically aware for better correlation with physical design.

The company is funded by Intel Capital, Xilinx Ventures, and several private investors. The first product from Oasys, RealTime Designer, was launched in 2009 and is being used successfully by many of the top semiconductor vendors worldwide. The company's newest product, RealTime Explorer, gives SoC/ASIC front-end design teams a unique capability to quickly identify and resolve top-level timing and routability issues before RTL hand-off to the back-end groups for synthesis and physical design implementation, reducing schedules by an average of one to two months.


iDRM Brings Design Rules to Life!
by Pawan Fangaria on 05-11-2013 at 8:00 pm

The much-awaited automatic tool for DRM (Design Rule Manual) and DRC (Design Rule Check) deck creation is here! I am particularly excited to learn about this because I had been hearing about the need for it (in a different context) from the designers I worked with to improve their design productivity through the use of our EDA tools (at my past company). Considering the ever-growing size and complexity of DRMs (in terms of the number of complex rules, with multiple variables and conditions attached to them) as we go down the process nodes, it's natural to expect an automated tool to ease the process.

Traditionally, the DRM is written manually by process engineers without any standards; these are the secret rules (or limitations, in other words) of a fab at a particular process node, available in hard copy or, at best, PDF. Programmers or CAD engineers are at the mercy of that description: they must interpret it correctly and develop the DRC (Design Rule Check) deck, the software code that implements checks for those rules and flags violations. The whole process is rigid, manual, unidirectional, time consuming, and error prone. Ironically, the designers who actually have to verify their designs against these rules have no say in the process. For any change to a rule, they have to wait a long time, sacrificing their design's window of opportunity. The DRM and DRC deck, in the first instance, take years (going through several iterations) before they become available to designers and others with reasonable confidence in their correctness.

I am impressed with Sage Design Automation, which recognized this process bottleneck in the overall value chain of the semiconductor industry, changed the paradigm, and came up with an innovative concept and tool called iDRM (Integrated Design Rule Management).

iDRM is essentially a design rule compiler integrated with a graphical editor, which can capture design rules as layout patterns, arrows marking constraints between shapes (such as width and separation), and expressions defining the rules, like the example above.

Once a rule is captured, iDRM automatically transforms it into an executable program which can be run on any production layout to validate it against the rule. This delights the process engineer, who can then run it on a particular layout, obtain a pass/fail report, and compare it with actual process-induced issues such as litho hotspots. In case of any mismatch, he or she can quickly modify the rule description to match the process accurately.
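To make the "rule captured as data, compiled into an executable check" idea concrete, here is a simplified sketch; the rule format, layer names, and dimensions are invented for illustration and bear no relation to iDRM's actual internal representation:

```python
# Simplified illustration of the concept (not iDRM's format): a design
# rule captured as data, then executed against layout geometry.
# Rectangles are (layer, x1, y1, x2, y2) in nm; all values hypothetical.
rule = {"name": "M1.W.1", "layer": "M1", "min_width": 50}

layout = [
    ("M1", 0,   0, 200,  60),  # 60 nm wide -> passes
    ("M1", 0, 100, 200, 140),  # 40 nm wide -> violates M1.W.1
]

def check_min_width(rule, shapes):
    violations = []
    for layer, x1, y1, x2, y2 in shapes:
        if layer != rule["layer"]:
            continue
        width = min(x2 - x1, y2 - y1)  # narrow dimension of the shape
        if width < rule["min_width"]:
            violations.append((rule["name"], (x1, y1, x2, y2), width))
    return violations

for name, shape, w in check_min_width(rule, layout):
    print(f"{name}: shape {shape} is {w} nm wide, below the minimum")
```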

[Correlating iDRM rule with imported fabrication/litho failure data]

iDRM can automatically generate QA pass/fail test patterns for each captured rule. This can be used to generate large sets of QA test structures with maximum coverage, which otherwise used to be a very time-consuming process. Moreover, the patterns are consistent and accurate with respect to the captured rule. They can be used to verify the correctness and completeness of any third-party DRC deck.

[DRC deck QA test patterns generated by iDRM]

iDRM can also generate statistical graphs (in various formats, such as bar charts) of all occurrences of a particular pattern (matching the captured rule) in the design, and its integrated layout viewer can locate the exact position of a pattern. This provides a good way to scan and analyze the overall layout.

The overall concept is novel: it bridges the gap between process and design by automating design rule generation and verification. iDRM is user friendly, flexible, and easy to use, and it provides a graphical platform for formal, clear, unambiguous depiction of design rules, eliminating communication gaps and errors for faster closure. It takes an order of magnitude less time to create a complete and correct DRM and DRC deck together, and any change can easily be accommodated. It clearly provides a competitive advantage to those who use it. Designers too can cheer: using this tool in their design flow, they can create specific, robust, and optimal layout structures that deliver high yield and performance and are, of course, design-rule correct. That can give them a differentiated edge!

Further information can be found at Sage’s white paper here.

Sign up for a demo at DAC booth #2233 here.


Winning in Monte Carlo: Managing Simulations Under Variability and Reliability
by Daniel Nenni on 05-11-2013 at 7:00 pm

I recently talked to Trent McConaghy about his book on variation-aware design of custom ICs and the #50DAC tutorial we are doing:

Winning in Monte Carlo: Managing Simulations Under Variability and Reliability.

Trent is the Solido Chief Technology Officer, an engaging speaker, one of the brightest minds in EDA, and someone who I have thoroughly enjoyed working with for the past three years.

Topic Area: Design for Manufacturability
Date: Monday, June 3, 2013
Time: 11:00 AM – 1:00 PM
Location: 13AB
Summary: Thanks to FinFETs and other process innovations, we are still shrinking devices. But it comes at a steep price: variability and reliability have become far worse, so effective design and verification is causing an explosion in simulations. First, Daniel Nenni will do the introductions and present process variation content and analytics from SemiWiki.com. Prof. Georges Gielen will then describe CAD and circuit techniques for variability and reliability. Next, Yu (Kevin) Cao from ASU will describe how variability and aging affect bulk vs. FinFET device performance. More corners and statistical spreads will come into play, so advanced IC design tools will be needed to minimize design cycle times. Then Trent McConaghy from Solido Design Automation will describe industrial techniques for fast PVT, 3-sigma, and high-sigma verification. Finally, Ting Ku, Director of Engineering at Nvidia, will describe a signal integrity case study using variation-aware design techniques.
To Monte Carlo… and beyond!


Q: What is Variation-Aware Design of Custom Integrated Circuits: A Hands-on Field Guide about?

It describes a way to think about approaching design of custom ICs, such that (a) PVT and statistical variation doesn’t kill your circuit, and (b) handling the variation doesn’t chew up all your simulation resources. It doesn’t focus on the physical mechanisms causing variation, but instead examines and compares possible ways to cope with variation.

Q: What is variation-aware design?

Variation comprises the uncontrollable aspects of your circuit and its environment that affect the behavior of your design. These include global process variation, local process variation (mismatch), and environmental variation such as temperature and loading. While you cannot control such parameters in the real world, you *can* control them in simulations to understand their effects, and to find designs that perform and yield well despite the variations. Variation-aware design is the act of designing circuits to meet yield, performance, and power targets despite variation.

Q: Why is variation-aware design important?

It's not important in its own right, but it can contribute in a big way to the things that really matter. For companies, it's the bottom line that matters, and the bottom line is affected by yield, number of chips per wafer (area), and sales (which are affected by power, performance, and time to market). For designers, what matters is making the design work, yield well, and perform well. These days, variation can profoundly affect yield, area, power, and performance, not to mention cause respins and kill schedules. So designing variation-aware helps ensure that the targets for yield, area, power, and performance are hit in a timely fashion… and ultimately helps maintain a healthy bottom line.

Q: What inspired you to write the book?

There are two main factors:

First, our customers had been asking for deep technical white papers about Solido tools, which we had taken the time to write over the years. They were very happy with those white papers, and it helped them to become super-users. But the white papers were piecemeal. We were finding that there was a simple, practical way to think about handling variation at a high level; and from that high level one could zoom into the details of handling PVT, statistical, and high-sigma variation. We thought this knowledge might be useful to our users, and also to a broader audience curious about how to address variation. We realized that a book was a great way to package this content.

Second, writing the book was a way of “giving back” to the academic community. Since I started doing CAD in the late 90s, I’ve always strived to balance industry and academia. I paused my time in industry to go back for a PhD, during which I published a fair bit, and capped off with a book. I had a good experience doing that book with Springer, so when they approached me for a new book I took it very seriously. We’re quite happy with the book’s reception from the academic community — some professors have already started using it as part of their courses. While Solido is a for-profit company, I’m glad we were able to give back to academia.

Q: What is unique about Solido’s methods?

We have three main tools, for PVT verification, 3-sigma verification, and high-sigma verification. For each of these, we are the first to invent and introduce tools that are simultaneously (a) fast, (b) accurate, (c) scalable, and (d) verifiable [i.e. can I trust it?]. It turns out that it's fairly easy to hit two or three of those criteria, but hitting all four is actually quite hard, and each tool took algorithmic breakthroughs. For example, in high-sigma verification there have been many publications using importance sampling algorithms, but those were typically on 6 variables; 24 variables was considered high dimension! But if you have an accurate model of process variation, with 10 local process variables per device, then even a 6T bitcell has 60 local variables! A sense amp or flip-flop could hit 200. To solve the high-sigma problem in a way that achieved (a), (b), (c), and (d) simultaneously, we had to completely re-think the problem to chop down the algorithmic computational complexity.
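For readers unfamiliar with the technique mentioned above, here is a toy one-variable illustration of importance sampling for a rare-event probability (nothing like a production high-sigma tool): sample from a distribution shifted toward the failure region, then re-weight each sample by the likelihood ratio. In one dimension this works beautifully; as the interview notes, scaling it to hundreds of variables is the hard part:

```python
# Toy illustration of importance sampling for a rare (high-sigma) event,
# in one dimension only. Real circuits have hundreds of variables, which
# is exactly where naive importance sampling breaks down.
import numpy as np

rng = np.random.default_rng(1)
threshold, shift, n = 5.0, 5.0, 200_000

# Plain Monte Carlo: P(X > 5 sigma) ~ 2.9e-7, so 200k draws from N(0,1)
# will almost certainly observe zero failures.
plain = np.mean(rng.standard_normal(n) > threshold)

# Importance sampling: draw from N(shift, 1), centered on the failure
# region, and weight by the likelihood ratio phi(x) / phi(x - shift).
x = rng.normal(loc=shift, size=n)
w = np.exp(-0.5 * x**2 + 0.5 * (x - shift)**2)
is_est = np.mean((x > threshold) * w)

print(f"plain MC estimate:            {plain:.2e}")   # almost always 0
print(f"importance sampling estimate: {is_est:.2e}")  # ~2.9e-7
```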

Q: What is the intended audience for the book?

The primary target audience is designers, who want to ship chips that work without having to muck around too much with variation. I like to say that 'no designer should have to know the definition of kurtosis.' The book shows designers a straightforward way to think about variation and address specific variation issues, while avoiding the ratholes along the way. For example, designers are often asked to run 100 Monte Carlo samples, but the next step is unclear. As the book describes, the way to approach MC sampling is to have a specific task or question attached, such as "does my circuit meet the 3-sigma yield target?", and appropriate tools to facilitate answering the question.
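A small sketch of why "100 Monte Carlo samples" alone answers little (illustrative statistics, not Solido's method): what turns sampling into a verdict is the confidence attached to the pass/fail count:

```python
# Illustration (not Solido's method): what 100 Monte Carlo samples can
# and cannot say about a 3-sigma yield target (~99.73% for a two-sided
# normal bound).
from math import comb

def binom_cdf(k, n, p):
    # P(at most k failures) in n samples with per-sample failure prob p
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k + 1))

n = 100
target_fail_prob = 1 - 0.9973  # failure rate right at the 3-sigma limit

# If the true failure rate were exactly at the 3-sigma limit, how often
# would 100 samples still show zero failures?
p_zero = binom_cdf(0, n, target_fail_prob)
print(f"P(0 failures in 100 samples at the 3-sigma limit) = {p_zero:.2f}")
# ~0.76: passing all 100 samples is weak evidence of 3-sigma yield.
```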

Managers and CAD managers will also find the book of value, as it can help guide the design of their flows. For example, thanks to the deeper understanding the book brought, some managers have been implementing PVT signoff flows where each PVT signoff run may cover >10,000 corners (at a cost of hundreds of simulations). Other managers have been implementing "true 3-sigma corner" statistical design flows. And so on.

Finally, as implied above, CAD academics will find this book of value because it provides a well-defined set of technical CAD problems, as well as baseline approaches to each CAD problem. The book can serve as a reference and starting point for their own work.

Q: How do I get a copy of the book?

All the usual suspects:
-Amazon http://www.amazon.com/Variation-Aware-Design-Custom-Integrated-Circuits/dp/146142268X
-Springer online http://www.springer.com/engineering/circuits+%26+systems/book/978-1-4614-2268-6
-The Springer booth at ISSCC, DAC, DATE, etc. Say hi to Chuck!
-It’s also been rumored that existing users of Solido software can get a copy if they ask… 🙂

Q: Will you be exhibiting at the Design Automation Conference?

Yes, readers can sign up for a demo here: http://www.solidodesign.com/page/dac-2013-demo-signup/



Calypto, in Three Part Harmony
by Paul McLellan on 05-11-2013 at 8:00 am

As Julius Caesar said, "Gallia est omnis divisa in partes tres" (all Gaul is divided into three parts). Calypto is similar, with three product lines that work together to provide a system-level approach to SoC design. Two of those product lines are not unique, in the sense that similar capabilities are available from a handful of other companies, but the original core technology that Calypto worked on when it was first founded, sequential logical equivalence checking (SLEC), is.

The three technologies that make up Calypto are:

SLEC: in the same way that logical equivalence checking proves a gate-level netlist is equivalent to the RTL of a design (and thus that synthesis did its job correctly), SLEC proves that the output RTL is equivalent to the C/C++/SystemC of a design (and thus that HLS did its job correctly). When Calypto was founded, the CEO was Devadas Varma, who I'd worked with at Ambit, and what he and his team proposed doing seemed beyond the frontier of what was possible. But they succeeded, and today's SLEC product is the result.
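To give a flavor of the problem SLEC solves, here is a toy bounded simulation sketch. A real SLEC tool proves equivalence over all reachable states rather than sampling cycles, so treat this purely as an illustration of what "sequential equivalence" means:

```python
# Toy sketch of sequential equivalence (illustrative only; a real SLEC
# tool proves equivalence formally, it does not just simulate). The two
# machines below compute the same function -- output equals the previous
# input -- but with different state encodings, so a purely structural,
# cycle-by-cycle combinational check would flag a false mismatch.
import random

def run(machine, inputs):
    state, outputs = machine["reset"], []
    for x in inputs:
        state, out = machine["step"](state, x)
        outputs.append(out)
    return outputs

ref  = {"reset": 0, "step": lambda s, x: (x, s)}          # stores the input
impl = {"reset": 1, "step": lambda s, x: (1 - x, 1 - s)}  # stores its complement

random.seed(0)
for _ in range(1000):
    stimulus = [random.randint(0, 1) for _ in range(20)]
    assert run(ref, stimulus) == run(impl, stimulus)
print("no mismatch in bounded random search (evidence, not a proof)")
```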

Catapult: this is high-level synthesis (HLS) technology that was originally a Mentor product but was spun out of Mentor and into Calypto along with some people (and, having worked for Greg Hinckley, the COO/CFO of Mentor, when I was at VLSI, I'm sure it was a sophisticated financial transaction too). Since HLS requires SLEC for verification (probably even more so than RTL-to-gates, since HLS is less mature technology), this transaction made a lot of sense from a sales point of view: one salesperson can sell both products from one company. Plus the Calypto sales team is focused on this market, whereas Mentor has a huge product line and it is easy for products to suffer from a lack of sales focus.

PowerPro: this performs power analysis and sequential power optimization. Since this alters the sequential behavior of the design, it also requires SLEC to verify that the changes didn't alter the functionality, merely reduced the power. The tool works by identifying register transfers that will not alter the results of the design and generating a small amount of additional circuitry to suppress them.
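Here is a toy model of one such sequential optimization, stability-based gating (purely illustrative, not PowerPro's algorithm): suppress a register update whenever the incoming value equals the stored one, since that update cannot change downstream results:

```python
# Toy model of one sequential power optimization (illustrative only, not
# PowerPro's algorithm): gate a register's clock when the incoming data
# equals the stored value, since the write cannot change any result.
def count_register_writes(data_stream, gated):
    reg, writes = 0, 0
    for d in data_stream:
        if gated and d == reg:
            continue  # gated clock: redundant write suppressed
        reg, writes = d, writes + 1
    return writes

stream = [0, 0, 1, 1, 1, 0, 1, 1, 0, 0]
print("writes without gating:", count_register_writes(stream, gated=False))  # 10
print("writes with gating:   ", count_register_writes(stream, gated=True))   # 4
```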

Calypto has integrated these three technologies to offer the industry's only HLS solution that can synthesize power-optimized RTL from C++ or SystemC and formally verify the synthesized RTL against the original C++ or SystemC. It is almost a general rule in EDA that optimizations done at a higher level are more powerful than those at a lower level. By moving up to a level above RTL, the designer has more options for power-performance-area (PPA) tradeoffs, and having power optimization under the hood of high-level synthesis makes those tradeoffs simpler to explore.

You can see these three products at DAC at the Calypto booth, #1247. There is a complete list of suite demos that you can register for here.