
Tektronix at #50DAC
by Daniel Nenni on 05-13-2013 at 10:00 am

If you grew up in the same era I did, you will know Tektronix as a company that manufactures test and measurement devices. Every lab I was in during high school and college had Tek oscilloscopes and logic analyzers. At #50DAC, however, attendees who visit Tektronix will experience firsthand RTL simulation-level visibility into multi-FPGA prototypes, eliminating recompiles for faster, more efficient debugging.

BEAVERTON, Ore., May 13, 2013 – Tektronix, Inc., a leading worldwide provider of test, measurement and monitoring instrumentation, today announced it will showcase its recently introduced Certus 2.0 ASIC prototyping debug solution at the 2013 Design Automation Conference in Austin, TX, June 2-6, Booth 819. DAC is the premier conference devoted to the design and automation of electronic systems (EDA), embedded systems and software (ESS), and intellectual property (IP).

Shown for the first time at the Design Automation Conference (DAC), the Certus 2.0 suite of software and RTL-based embedded instruments fundamentally changes the ASIC prototyping flow by enabling full RTL-level visibility and making FPGA internal visibility a feature of the prototyping platform. This simulation-level visibility allows engineers to diagnose multiple defects in a day versus a week or more with existing tools.

“Proactive debug capability for ASIC prototypes has been missing within the FPGA ecosystem,” said Dave Farrell, general manager for the embedded instrumentation group at Tektronix. “DAC attendees will now be able to see firsthand how Certus 2.0 fundamentally changes the ASIC prototyping flow and dramatically increases debug productivity.”

Proactive debug strategy
Certus 2.0 allows designers to automatically instrument all the signals likely to be needed in each of the FPGAs in a multi-FPGA ASIC prototype with a small FPGA LUT impact. This enables a proactive debug and instrumentation strategy, eliminating the need to re-compile the FPGA to debug each new behavior, typically a painful eight- to eighteen-hour ordeal with traditional tools. Other key capabilities include (a toy sketch of the signal-selection idea follows the list):

  • Automatic identification and instrumentation of RTL signals based on type and instance name including flip-flops, state machines, interfaces and enumerated types
  • On-chip, at-speed capture and compression of many seconds of data without special external hardware or consuming FPGA I/O resources
  • Advanced on-chip triggering bringing the power of logic analyzer trigger methods to embedded instrumentation
  • Time-correlated capture results across clock domains and multiple FPGAs providing a system-wide view of the entire target design
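
As a toy illustration of the first capability above, here is how selecting signals for instrumentation by type and instance name might look in a script. The signal records, kinds, and regex below are invented for this sketch and are not Certus 2.0's actual interface:

```python
# Hypothetical sketch: pick signals to instrument by type and instance-name
# pattern. Records, kinds, and regex are invented; not the Certus 2.0 API.
import re

signals = [
    {"name": "u_core/fsm_state",    "kind": "state_machine"},
    {"name": "u_core/pipe_valid_q", "kind": "flip_flop"},
    {"name": "u_bus/axi_awvalid",   "kind": "interface"},
    {"name": "u_misc/scratch_wire", "kind": "wire"},
]

def select_for_instrumentation(sigs, kinds, name_regex):
    """Keep signals of an interesting kind whose name matches the pattern."""
    pat = re.compile(name_regex)
    return [s for s in sigs if s["kind"] in kinds and pat.search(s["name"])]

picked = select_for_instrumentation(
    signals,
    kinds={"flip_flop", "state_machine", "interface"},
    name_regex=r"u_(core|bus)/",
)
for s in picked:
    print("instrument:", s["name"])   # three of the four signals qualify
```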

Certus 2.0 works on any existing commercial or custom ASIC prototyping platform, and does not need special connectors, cables, or external hardware.

Tektronix Embedded Instrumentation Solutions
Following the acquisition of Veridae Systems in 2011, Tektronix Embedded Instrumentation solutions reflect the growing importance of Electronic Design Automation (EDA) software in helping engineers solve difficult instrumentation and debug challenges.

Wondering what else Tektronix is up to? Check out the Tektronix Bandwidth Banter blog and stay up to date on the latest news from Tektronix on Twitter and Facebook.

About Tektronix
For more than sixty-five years, engineers have turned to Tektronix for test, measurement and monitoring solutions to solve design challenges, improve productivity and dramatically reduce time to market. Tektronix is a leading supplier of test equipment for engineers focused on electronic design, manufacturing, and advanced technology development. Headquartered in Beaverton, Oregon, Tektronix serves customers worldwide and offers award-winning service and support. Stay on the leading edge at www.tektronix.com.



Cliosoft CEO on Design Collaboration Challenges!
by Daniel Nenni on 05-12-2013 at 8:30 pm

Cliosoft was one of the first SemiWiki subscribers, and it is a pleasure to work with them. They have one of the busiest landing pages, with more than 30 articles authored by Daniel Payne, Paul McLellan, and myself. Srinath and I have lunch occasionally and exchange ideas, observations, and experiences:

Q: What are the specific design challenges your customers are facing?

Design teams approach us when they are having issues sharing design data and collaborating between team members. Design teams are growing and hiring talent wherever it is available. Design flows are complex, often using tools from different EDA vendors. Efficiently sharing design data across multiple design centers is a requirement even for small startups.

Q: What does your company do?

ClioSoft provides design data management solutions integrated seamlessly into design flows from all the leading EDA vendors. We provide collaboration, revision control, release management, access controls, IP management & reuse – features similar to what software configuration management (SCM) systems provide for software development teams. Our solutions are design-aware. For instance, users can run commands on an entire design hierarchy or invoke a visual comparison between two revisions of a schematic or layout. We refer to our solutions as Hardware Configuration Management (HCM) because they are built from the ground up to meet the needs of hardware design teams.

Q: Why did you start/join your company?

I have been in EDA software development for over thirty years. I started my career in the early days of commercial EDA with Silvar-Lisco. After that I worked in engineering and management positions at Synopsys and Vantage Analysis Systems. When Vantage was acquired by Viewlogic, a few of us left to start a consulting company called Proxy Modeling. Our consulting assignments often led to streamlining flows and helping set up revision control and design management strategies. Since I had several years of software development experience, I had used SCM systems like Apollo Computer’s DSEE and IBM’s ClearCase. I soon realized that SCM systems were not always ideal for managing hardware design data. Software is typically made up of relatively small text files that users create. Hardware designs, in contrast, are created with graphical tools like schematic or layout editors that generate loads of files, many of them large binary files. So I founded ClioSoft to provide a revision control and configuration management system built to meet the challenges of hardware design.

For a more detailed history:
http://www.semiwiki.com/forum/content/2011-brief-history-cliosoft.html

Q: How does your company help with your customers’ design challenges?

As more members are added to customer teams, the individual designer’s productivity suffers because more time is spent coordinating and sharing data and information. ClioSoft’s solutions grease the wheels to improve team productivity. Designers can efficiently share design data with their team members whether they are sitting in the next office or across the world. All changes are tracked, which improves accountability and visibility: everyone knows what is happening in the project. Our tools provide insurance against mistakes and peace of mind that the design team taped out using all the correct versions of design files. As teams get more comfortable with our solutions, the holy grail of design reuse becomes much easier and more practical.

Q: What are the tool flows your customers are using?

We support a variety of different flows, from digital front-end to analog/mixed-signal and even PCB design. We have close relationships with all the major EDA vendors and seamless integrations with Cadence Virtuoso, Mentor Pyxis, and Synopsys Custom Designer and Laker (previously SpringSoft), and we have just added an integration with Agilent ADS. Using our Universal DM Adaptor technology, a rule-based system, customers manage data from a variety of other flows such as Cadence Allegro, Mentor BoardStation, etc.

Q: What will you be focusing on at the Design Automation Conference this year?

We will be focusing this year on SOS viaADS – our integration with Agilent’s Advanced Design System. This product is the result of close cooperation between the Agilent and ClioSoft engineering teams over a period of 18 months. It is a deeply integrated solution that provides revision control and collaboration in the ADS flow on both Windows and Linux platforms. Many of our customers use ADS along with other flows like Cadence Virtuoso. Now ADS users will be able to get the same benefits as Virtuoso users, and they will be able to manage all their design data in one SOS project repository.
Here is a link to the press release:

http://www.cliosoft.com/news/press/pr_2013_05_07_agilent.shtml

Also see Cliosoft at #50DAC:
http://www.cliosoft.com/dac/

Q: Where can SemiWiki readers get more information?

http://www.cliosoft.com

http://www.semiwiki.com/forum/content/section/397-cliosoft.html

ClioSoft is the premier developer of hardware configuration management (HCM) solutions. The company’s SOS™ Design Collaboration platform is built from the ground up to handle the requirements of hardware design flows. The SOS platform provides a sophisticated multi-site development environment that enables global team collaboration, design and IP reuse, and efficient management of design data from concept through tape-out. Custom engineered adaptors seamlessly integrate SOS with leading design flows – Cadence’s Virtuoso® Custom IC, Synopsys’ Galaxy Custom Designer, Mentor’s IC flows, and SpringSoft’s Laker™ Custom Layout Automation System. ClioSoft’s innovative Universal DM Adaptor technology “future proofs” data management needs by ensuring that data from any flow can be meaningfully managed. The Visual Design Diff (VDD) engine enables designers to easily identify changes between two versions of a schematic or layout by graphically highlighting the differences directly in the editors.

Also Read

Agilent ADS Integrated with ClioSoft

Data Management for Designers

Modern Data Management


A Big Boost for Equivalency Checking
by Daniel Payne on 05-12-2013 at 1:41 pm

Thirty years ago, in 1983, Professors Daniel Gajski and Robert Kuhn created the now-famous Y-Chart to show the various levels of abstraction in electronic system design.

We can still use this Y-Chart today because it still reflects how engineers do their SoC designs. Along the behavioral axis there is a need to know that each level of abstraction is really equivalent to the other levels, to ensure that the design is consistent and that no errors have crept in. Such errors may be caused by:

  • Addition of DFT structures
  • Addition of low-power techniques, like clock gating
  • Changes in cells during timing closure
  • Engineering Change Orders
  • Manual netlist changes

One brute-force approach is to run functional simulation and re-use your test benches on each level of behavioral models. That approach takes a lot of time, however, and still is not guaranteed to find all logical differences between two levels of models.
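
To see why the brute-force approach is risky, consider this toy miter in Python: a "high-level" model and a buggy "low-level" model that differ on a single input pair. Random simulation usually misses the bug; covering the complete input space (which real EC tools do mathematically with SAT/BDD engines rather than enumeration) always finds it. Both models are invented for illustration:

```python
# Toy miter: two implementations of the same 8-bit function, one with a
# deliberate corner-case bug.
import random

def ref_model(a, b):          # "high-level" model
    return (a + b) & 0xFF

def impl_model(a, b):         # "low-level" model, wrong on one input pair
    if a == 0xFF and b == 0xFF:
        return 0x01           # bug: should be 0xFE
    return (a + b) & 0xFF

# Simulation-style check: 1000 random vectors may never hit the bad corner.
mismatch = any(ref_model(a, b) != impl_model(a, b)
               for a, b in ((random.randrange(256), random.randrange(256))
                            for _ in range(1000)))
print("random sim found mismatch:", mismatch)   # often False

# Formal-style check: cover the complete input space, so the bug is certain
# to be found (65,536 cases here; real designs need smarter engines).
proved = all(ref_model(a, b) == impl_model(a, b)
             for a in range(256) for b in range(256))
print("exhaustively equivalent:", proved)       # always False
```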

A more elegant approach is to use a class of EDA tools known as Equivalency Checking (EC), which take a mathematical approach to proving equivalency between two levels. Equivalency Checking has traditionally had a few limitations:

  • Slow run-time speeds
  • Limited capacity
  • Complexity in terms of learning and setup

Where there’s a need there’s an opportunity, so the software engineers at Oasys have worked to address each of these three limitations by adding new features to EC such as hierarchy, automatic partitioning and parallel multi-processing. With these new technical features you can use EC with:

  • Faster run-time speeds
  • Higher capacity by scaling
  • Simplicity in use

Let’s look at some actual numbers using this new EC approach:

The Oasys tool is called RealTime Parallel EC; it can simultaneously verify the sub-blocks in a hierarchical design, so run-times scale linearly with the number of processors available.
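
The parallel-scaling idea can be sketched in a few lines: independent sub-blocks are verified concurrently, so wall-clock time drops with the number of workers. This is a generic multiprocessing sketch, not Oasys code; check_block() merely stands in for a per-block EC run:

```python
# Minimal sketch of partitioned, parallel equivalence checking.
from multiprocessing import Pool

def check_block(block):
    """Pretend EC of one sub-block: exhaustively compare two tiny models."""
    name, bits = block
    mask = (1 << bits) - 1
    ref = lambda x: (x * 3) & mask
    impl = lambda x: (x + x + x) & mask
    ok = all(ref(x) == impl(x) for x in range(1 << bits))
    return name, ok

if __name__ == "__main__":
    blocks = [("alu", 16), ("decoder", 14), ("lsu", 15), ("ifu", 12)]
    with Pool(processes=4) as pool:           # one worker per sub-block
        for name, ok in pool.map(check_block, blocks):
            print(f"{name}: {'equivalent' if ok else 'MISMATCH'}")
```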


If you travel to DAC then plan on visiting Oasys in booth #1231 to get your questions answered.

Oasys is a privately held company that was founded in 2004 by a team of leading RTL synthesis developers from Ambit and Cadence. The team created a next-generation physical RTL synthesis platform that empowers SoC/ASIC design teams to conquer the timing, power, area, and routability challenges of today’s complex SoCs, ASICs, and IP blocks. Oasys RealTime synthesis optimizes at a higher level of abstraction (the RTL level versus the gate level with other synthesis tools), enabling it to provide up to 10x faster turnaround times and the capacity to synthesize the entire top level of the largest SoCs, ASICs or IP blocks, all while being physically aware for better correlation with physical design.

The company is funded by Intel Capital, Xilinx Ventures, and several private investors. The first product from Oasys, RealTime Designer, was launched in 2009 and is being used successfully by many of the top semiconductor vendors worldwide. The company’s newest product, RealTime Explorer, provides a unique capability for SoC/ASIC front-end design teams to quickly identify and resolve top-level timing and routability issues before RTL hand-off to the back-end groups for synthesis and physical design implementation, reducing schedules by an average of 1-2 months.


iDRM Brings Design Rules to Life!
by Pawan Fangaria on 05-11-2013 at 8:00 pm

A much-awaited automatic tool for DRM (Design Rule Manual) and DRC (Design Rule Check) deck creation is here! I am particularly excited about this because, at my previous company, I had often heard about the need for such a tool (in a different context) from the designers I worked with to improve their design productivity through our EDA tools. Considering the ever-growing size and complexity of DRMs (in terms of the number of complex rules, each with multiple variables and conditions) as we move down process nodes, it is natural to expect an automated tool to ease the process.

Traditionally, the DRM is written manually by process engineers without any standards; the rules are the secrets (or, put another way, the limitations) of a fab at a particular process node, available in hard copy or, at best, PDF. Programmers or CAD engineers are at the mercy of that description: they must interpret it correctly and develop the DRC (Design Rule Check) deck, the software code that implements checks for those rules and flags violations. The whole process is rigid, manual, unidirectional, time-consuming and error-prone. Ironically, the designers who actually have to verify their designs against these rules have no say in the process. For any change to any rule, they must wait a long time, sacrificing their design’s window of opportunity. The DRM and DRC deck, at the very first instance, take years (going through several iterations) before they become available to designers and others with reasonable confidence in their correctness.

I am impressed with Sage Design Automation, which recognized this process bottleneck in the overall value chain of the semiconductor industry, changed the paradigm, and came up with an innovative concept and tool called iDRM (Integrated Design Rule Management).

iDRM is essentially a design rule compiler integrated with a graphical editor. It captures design rules as layout patterns, arrows marking constraints between shapes (such as width and separation), and expressions defining the rules.

Once a rule is captured, iDRM automatically transforms it into an executable program that can be run on any production layout to validate the layout against the rule. This delights process engineers, who can run it on a particular layout, obtain a pass/fail report, and compare the results with actual process-induced issues such as litho hotspots. In case of a mismatch, they can quickly modify the rule description until it accurately matches the process.
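
To make the "captured rule becomes an executable check" idea concrete, here is a toy width-and-spacing checker over axis-aligned rectangles. iDRM's rule formats and APIs are not described publicly here, so the representation and numbers below are invented:

```python
# Toy DRC-style check: a rule expressed as (pattern + expression) becomes
# runnable code. Rectangles are (x1, y1, x2, y2); values are invented.
def violates_min_width(rect, min_width):
    x1, y1, x2, y2 = rect
    return min(x2 - x1, y2 - y1) < min_width

def spacing(r1, r2):
    """Edge-to-edge distance between two rectangles (0 if they overlap)."""
    dx = max(r1[0] - r2[2], r2[0] - r1[2], 0)
    dy = max(r1[1] - r2[3], r2[1] - r1[3], 0)
    return (dx * dx + dy * dy) ** 0.5

layout = [(0, 0, 50, 8), (0, 20, 50, 28), (60, 0, 64, 28)]  # toy shapes
MIN_WIDTH, MIN_SPACE = 10, 15

for i, r in enumerate(layout):
    if violates_min_width(r, MIN_WIDTH):
        print(f"width violation on shape {i}: {r}")
for i in range(len(layout)):
    for j in range(i + 1, len(layout)):
        s = spacing(layout[i], layout[j])
        if 0 < s < MIN_SPACE:
            print(f"spacing violation between {i} and {j}: {s:.1f}")
```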

[Correlating iDRM rule with imported fabrication/litho failure data]

iDRM can automatically generate pass/fail QA test patterns for each captured rule. These can be used to generate large sets of QA test structures with maximum coverage, which otherwise used to be a very time-consuming process. Moreover, the patterns are consistent and accurate with respect to the captured rule, and they can be used to verify the correctness and completeness of any third-party DRC deck.

[DRC deck QA test patterns generated by iDRM]

iDRM can also generate statistical graphs (in various formats, such as bar charts) of all occurrences of any particular pattern (matching the captured rule) in the design, and its integrated layout viewer can locate the exact position of each pattern. This provides a good way to scan and analyze the overall layout.

The overall concept is novel: it bridges the gap between process and design by automating design rule generation and verification. iDRM is user-friendly, flexible, and easy to use, and it provides a graphical platform for formal, clear, unambiguous capture of design rules, eliminating communication gaps and errors for faster closure. It takes an order of magnitude less time to create a complete and correct DRM and DRC deck together, and any change can be accommodated easily. It clearly provides a competitive advantage to those who use it. Designers, too, can cheer: using this tool in their design flow, they can create specific, robust, and optimal layout structures that deliver high yield and performance while remaining design-rule correct. That can give them a differentiated edge!

Further information can be found at Sage’s white paper here.

Sign up for a demo at DAC booth #2233 here.


Winning in Monte Carlo: Managing Simulations Under Variability and Reliability
by Daniel Nenni on 05-11-2013 at 7:00 pm

I recently talked to Trent McConaghy about his book on variation-aware design of custom ICs and the #50DAC tutorial we are doing:

Winning in Monte Carlo: Managing Simulations Under Variability and Reliability.

Trent is the Solido Chief Technology Officer, an engaging speaker, one of the brightest minds in EDA, and someone who I have thoroughly enjoyed working with for the past three years.

Topic Area: Design for Manufacturability
Date: Monday, June 3, 2013
Time: 11:00 AM — 1:00 PM
Location: 13AB
Summary: Thanks to FinFETs and other process innovations, we are still shrinking devices. But this comes at a steep price: variability and reliability have become far worse, so effective design and verification is causing an explosion in simulations. First, Daniel Nenni will do the introductions and present process variation content and analytics from SemiWiki.com. Prof. Georges Gielen will then describe CAD and circuit techniques for variability and reliability. Next, Yu (Kevin) Cao from ASU will describe how variability and aging affect bulk vs. FinFET device performance. More corners and statistical spreads will come into play, so advanced IC design tools will be needed to minimize design cycle times. Then, Trent McConaghy from Solido Design Automation will describe industrial techniques for fast PVT, 3-sigma, and high-sigma verification. Finally, Ting Ku, Director of Engineering at Nvidia, will describe a signal integrity case study using variation-aware design techniques.
To Monte Carlo… and beyond!


Q: What is Variation-Aware Design of Custom Integrated Circuits: A Hands-on Field Guide about?

It describes a way to think about approaching design of custom ICs, such that (a) PVT and statistical variation doesn’t kill your circuit, and (b) handling the variation doesn’t chew up all your simulation resources. It doesn’t focus on the physical mechanisms causing variation, but instead examines and compares possible ways to cope with variation.

Q: What is variation-aware design?

Variation is the uncontrollable aspects of your circuit and its environment that affect the behavior of your design. These include global process variation, local process variation (mismatch), and environmental variation such as temperature and loading. While you cannot control such parameters in the real world, you *can* control them in simulations to understand their effects, and to determine designs that perform and yield well despite the variations. Variation-aware design is the act of designing circuits to meet yield, performance, and power targets despite variation.
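
Here is a hedged sketch of what "controlling variation in simulation" looks like: a toy RC-delay circuit evaluated under global process variation (one shared draw per die), local mismatch (a fresh draw per device), and an environmental temperature sweep. The distributions, delay model, and spec are all illustrative, not from any real PDK:

```python
# Toy variation-aware yield estimate: global + local + environmental variation.
import random

def rc_delay(r_nom, c_nom, temp_c, global_shift, local_shift):
    # Resistance varies with process (global + local) and temperature.
    r = r_nom * (1 + global_shift + local_shift) * (1 + 0.002 * (temp_c - 25))
    return 0.69 * r * c_nom           # ~RC step-response delay (ps)

random.seed(1)
SPEC_PS, trials, fails = 95.0, 10_000, 0
for _ in range(trials):
    g = random.gauss(0, 0.05)         # global process: one draw per "die"
    for temp in (-40, 25, 125):       # environmental corners
        d = rc_delay(r_nom=1.0, c_nom=100.0, temp_c=temp,
                     global_shift=g, local_shift=random.gauss(0, 0.03))
        if d > SPEC_PS:               # fail if any corner misses spec
            fails += 1
            break
print(f"estimated yield: {1 - fails / trials:.3%}")
```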

Q: Why is variation-aware design important?

It’s not important in its own right, but it can contribute in a big way to the things that really matter. For companies, it’s the bottom line that matters, and the bottom line is affected by yield, number of chips per wafer (area), and sales (which are affected by power and performance, and time to market). For designers, what matters is making the design work, yield well, and perform well. These days, variation can profoundly affect yield, area, power and performance, not to mention cause respins and kill schedules. So, designing while variation-aware helps ensure that the targets for yield, area, power, and performance are hit in a timely fashion… and ultimately helps to maintain a healthy bottom line.

Q: What inspired you to write the book?

There are two main factors:

First, our customers had been asking for deep technical white papers about Solido tools, which we had taken the time to write over the years. They were very happy with those white papers, and it helped them to become super-users. But the white papers were piecemeal. We were finding that there was a simple, practical way to think about handling variation at a high level; and from that high level one could zoom into the details of handling PVT, statistical, and high-sigma variation. We thought this knowledge might be useful to our users, and also to a broader audience curious about how to address variation. We realized that a book was a great way to package this content.

Second, writing the book was a way of “giving back” to the academic community. Since I started doing CAD in the late 90s, I’ve always strived to balance industry and academia. I paused my time in industry to go back for a PhD, during which I published a fair bit, and capped off with a book. I had a good experience doing that book with Springer, so when they approached me for a new book I took it very seriously. We’re quite happy with the book’s reception from the academic community — some professors have already started using it as part of their courses. While Solido is a for-profit company, I’m glad we were able to give back to academia.

Q: What is unique about Solido’s methods?

We have three main tools, for PVT verification, 3-sigma verification, and high-sigma verification. For each of these three tools, we are the first to invent and introduce tools that are simultaneously (a) fast (b) accurate (c) scalable, and (d) verifiable [i.e. can I trust it?]. It turns out that it’s fairly easy to hit two or three of those criteria, but hitting all four is actually quite hard, and each tool took algorithmic breakthroughs. For example, in high-sigma verification there have been many publications using importance sampling algorithms, but those were typically on 6 variables. 24 variables was considered high dimension! But if you have an accurate model of process variation, with 10 local process variables per device, then even a 6T bitcell has 60 local variables! A sense amp or flip flop could hit 200. To solve the high-sigma problem in a way that achieved (a)(b)(c)(d) simultaneously, we had to completely re-think the high-sigma problem, to chop down the algorithmic computational complexity.
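
The high-sigma problem is easy to feel with a tiny example. The sketch below is textbook mean-shift importance sampling on a single variable, not Solido's proprietary algorithm: plain Monte Carlo wastes its samples far from the failure region, while sampling from a shifted distribution and reweighting by the likelihood ratio recovers the tiny probability:

```python
# Estimate P(X > 4.5) for a standard normal "performance" variable.
import math, random

random.seed(0)
N, THRESH, SHIFT = 100_000, 4.5, 4.5

# Plain Monte Carlo: the true probability is ~3.4e-6, so 1e5 samples
# usually see zero failures and the estimate is useless.
plain = sum(random.gauss(0, 1) > THRESH for _ in range(N)) / N

# Importance sampling: draw from N(SHIFT, 1), where failures are common,
# and reweight each failing sample by f(x)/g(x) = exp(-SHIFT*x + SHIFT^2/2).
acc = 0.0
for _ in range(N):
    x = random.gauss(SHIFT, 1)
    if x > THRESH:
        acc += math.exp(-SHIFT * x + 0.5 * SHIFT * SHIFT)
print(f"plain MC estimate: {plain:.2e}")
print(f"importance sample: {acc / N:.2e}  (true ~ 3.4e-06)")
```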

Q: What is the intended audience for the book?

The primary target audience is designers, who want to ship chips that work without having to muck around too much with variation. I like to say that no designer should have to know the definition of kurtosis. The book shows designers a straightforward way to think about variation and address specific variation issues, while avoiding the ratholes along the way. For example, designers are often asked to run 100 Monte Carlo samples, but the next step is unclear. As the book describes, the way to approach MC sampling is to attach a specific task or question, such as “does my circuit meet the 3-sigma yield target?”, and use appropriate tools to facilitate answering it.
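
A back-of-envelope check of the "100 Monte Carlo samples" example (my arithmetic, not the book's): by the statistical rule of three, even 100 samples with zero failures only bound the failure rate near 3%, while a one-sided 3-sigma target allows about 0.135%:

```python
# Why 100 clean MC samples cannot confirm a 3-sigma yield target.
import math

n_samples = 100
upper_fail = 3.0 / n_samples      # rule of three: ~95% conf. upper bound
target_fail = 1 - 0.99865         # one-sided 3-sigma failure rate

print(f"demonstrable failure bound: {upper_fail:.3%}")       # 3.000%
print(f"3-sigma target failure rate: {target_fail:.3%}")     # 0.135%
print("100 clean samples prove the target:", upper_fail <= target_fail)
# Rule of three => need on the order of 3/0.00135 clean samples:
print("samples needed (roughly):", math.ceil(3 / target_fail))  # ~2223
```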

Managers and CAD managers will also find the book of value, as it can help guide them in the design of their flows. For example, thanks to the deeper understanding that the book brought, some managers have been implementing PVT signoff flows where each PVT signoff run may cover >10,000 corners (at a cost of hundreds of simulations). Other managers have been implementing “true 3-sigma corner” statistical design flows. And so on.

Finally, as implied above, CAD academics will find this book of value because it provides a well-defined set of technical CAD problems, as well as baseline approaches to each CAD problem. The book can serve as a reference and starting point for their own work.

Q: How do I get a copy of the book?

All the usual suspects:
-Amazon http://www.amazon.com/Variation-Aware-Design-Custom-Integrated-Circuits/dp/146142268X
-Springer online http://www.springer.com/engineering/circuits+%26+systems/book/978-1-4614-2268-6
-The Springer booth at ISSCC, DAC, DATE, etc. Say hi to Chuck!
-It’s also been rumored that existing users of Solido software can get a copy if they ask… 🙂

Q: Will you be exhibiting at the Design Automation Conference?

Yes, readers can sign up for a demo here: http://www.solidodesign.com/page/dac-2013-demo-signup/



Calypto, in Three Part Harmony
by Paul McLellan on 05-11-2013 at 8:00 am

As Julius Caesar said, “Gallia est omnis divisa in partes tres.” All Gaul is divided into 3 parts. Calypto is similar with three product lines that work together to provide a system level approach to SoC design. Two of those product lines are not unique, in the sense that similar capabilities are available from a handful of other companies, but the original core technology that Calypto worked on when it was first founded, sequential logical equivalence checking (SLEC), is.

The three technologies that make up Calypto are:

SLEC: in the same way that logical equivalence checking proves that a gate-level netlist is equivalent to the RTL of a design (and thus that synthesis did its job correctly), SLEC proves that the output RTL is equivalent to the C/C++/SystemC of a design (and thus that HLS did its job correctly). When Calypto was founded, the CEO was Devadas Varma, whom I’d worked with at Ambit, and what he and his team proposed doing seemed beyond the frontier of what was possible, but they succeeded and today’s SLEC product is the result.

Catapult: this is high-level synthesis (HLS) technology that was originally a Mentor product but was spun out of Mentor and into Calypto along with some people (and having worked for Greg Hinckley, the COO/CFO of Mentor, when I was at VLSI, I’m sure it was a sophisticated financial transaction too). Since HLS requires SLEC for verification (which is probably even more important than for RTL/gates, since HLS is a less mature technology), this transaction made a lot of sense from a sales point of view: one salesperson can sell both products from one company. Plus, the Calypto sales team is focused on this market, whereas Mentor has a huge product line and it is easy for products to suffer from a lack of sales focus.

PowerPro: this performs power analysis and sequential power optimization. Since this alters the sequential behavior of the design, it also requires SLEC to verify that the changes didn’t alter the functionality, merely reduced the power. The tool works by identifying register transfers that will not alter the results of the design and generating a small amount of additional circuitry to suppress them.
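
The sequential-optimization idea can be sketched abstractly (PowerPro's algorithms are proprietary; this toy model just illustrates the principle): suppress register loads that cannot change the observable outputs, then confirm the cycle-by-cycle traces match, which is the kind of equivalence SLEC must prove on real designs:

```python
# Toy model: gate off register updates that reload the same value.
def run(inputs, gated):
    q, trace, loads = 0, [], 0
    for d in inputs:
        if gated and d == q:
            trace.append(q)          # update suppressed: no register load
            continue
        q = d                        # register loads a new value
        loads += 1
        trace.append(q)
    return trace, loads

stimulus = [3, 3, 3, 7, 7, 2, 2, 2, 2, 5]
plain_trace, plain_loads = run(stimulus, gated=False)
gated_trace, gated_loads = run(stimulus, gated=True)
assert plain_trace == gated_trace    # same observable behavior every cycle
print(f"register loads: {plain_loads} -> {gated_loads} after gating")
```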

Calypto has integrated these three technologies to offer the industry’s only HLS solution that can synthesize power-optimized RTL from C++ or SystemC and formally verify the synthesized RTL against the original C++ or SystemC. It is almost a general rule in EDA that optimizations done at a high level are more powerful than those done at a lower level. By moving up a level above RTL, the designer has more options for power-performance-area (PPA) tradeoffs, and having power optimization under the hood of high-level synthesis makes those tradeoffs simpler to explore.

You can see these three products at DAC at the Calypto booth, #1247. There is a complete list of suite demos that you can register for here.


Prototyping Over 100 Million ASIC Gates Capacity
by Daniel Payne on 05-10-2013 at 12:42 pm

Most SoCs today are being prototyped in FPGA hardware before committing to costly IC fabrication. You could just design and build your own FPGA prototyping system, or instead choose something off the shelf and then concentrate on your core competence of SoC design.

Thanks to FPGA vendors like Xilinx, we now have FPGA prototyping platforms that can reach over 100 million ASIC gates of capacity at a reasonable cost. Aldec has created such an FPGA prototyping platform, called the HES-7, and we’ve been blogging about it here on SemiWiki.


These prototyping boards use two Xilinx Virtex-7 2000T devices, which together provide 4 million FPGA logic cells, or about 24 million ASIC gates of capacity, and that’s not counting the available RAM and DSP resources. You can then connect up to four of these boards together, bringing your total ASIC gate count to 96 million plus the RAM and DSP resources, crossing the 100 million gate barrier.
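
Spelling out the capacity arithmetic from the paragraph above (the gates-per-cell ratio is inferred from the quoted numbers, not an official Xilinx figure):

```python
# HES-7 capacity arithmetic as quoted in the text.
cells_per_2000t = 2_000_000          # Virtex-7 2000T logic cells
devices_per_board = 2
boards = 4
asic_gates_per_cell = 6              # implied: 4M cells ~ 24M ASIC gates

cells_per_board = cells_per_2000t * devices_per_board
gates_per_board = cells_per_board * asic_gates_per_cell
print(f"per board: {cells_per_board/1e6:.0f}M cells ~ "
      f"{gates_per_board/1e6:.0f}M ASIC gates")
print(f"four boards: {boards * gates_per_board/1e6:.0f}M ASIC gates")
```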

The popular ARM architecture is also available as a dual-core Cortex-A9 MPCore using the Xilinx Zynq-7000 device. You can even run open-source Linux, Android and FreeRTOS, available from Xilinx, on these prototyping boards, enabling your hardware and software teams to verify more quickly.

The HES-7 is fast enough (up to 1GHz or so) that you can prototype SoC designs for video, communications, control systems and bridging. Add your own hardware to the board using the daughterboard connectors, which are open and fully specified. See a full list of features here.

Further Reading

There’s a White Paper: ARM Cortex SoC Prototyping Platform for Industrial Applications

Aldec at DAC
At DAC next month in Austin you have a chance to meet with the best and brightest in EDA and semiconductor IP, all in one convenient place and time. Consider signing up for the 45-minute technical session at Aldec in booth #2225:



Is my Library or Semi IP really OK to use?
by Daniel Payne on 05-10-2013 at 11:42 am

The tremendous growth in IC and SoC design complexity now has engineers placing billions of transistors on a single chip. To make that growth possible, design teams resort to using libraries and semi IP provided by other groups in their company or by outside IP vendors. To lower risk, you must know that the IP being used in your next SoC is correct and that no errors are present.

You could create some incoming tests for your re-used IP, or maybe even buy some Verification IP (VIP). Alternatively, a three-year-old EDA start-up called Fractal Technologies has a tool that helps you test the quality of IP by:

  • Reporting mismatches or modeling errors for Libraries and IP

    • Do all schematic pins occur as terminals in layout and abstract views?
    • Are all delay arcs from Liberty present in Verilog?
    • Can all pins be routed in first-metal?
    • Is a reset pin active-low in SPICE, Verilog, and .lib files?
    • Does the LEF abstract correctly cover the layout view?
    • Do all cells abut?
    • Are cell and pin properties present, and do they have the correct contents?
    • Are certain pins located correctly within the cell?
  • Checking view consistency (ECSM, CCS)

    • Are CCS peak currents increasing with capacitance?
    • Are cell delays increasing with increasing temperature and decreasing supply voltage? (a monotonicity sketch follows this list)
  • Checks occurrence and correctness of cells, pins and terminals
  • Cross-checks delay tables, delay path conditions, setup and hold-times
  • Checks consistency of Liberty characterization data
  • Checks routability requirements on cell terminals
  • Checks functionality descriptions
  • Checks layout representations
  • Checks can be coded by end-users in popular scripting languages
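
Checks like the two CCS questions above reduce to monotonicity tests over characterization tables. Here is a sketch with invented table values; Crossfire's internal representation is not public, so this only illustrates the kind of rule being applied:

```python
# Monotonicity-style consistency checks over characterization data.
def is_monotonic(values, increasing=True):
    pairs = list(zip(values, values[1:]))
    if increasing:
        return all(b >= a for a, b in pairs)
    return all(b <= a for a, b in pairs)

# delay (ns) at increasing temperature, fixed voltage
delay_vs_temp = [0.82, 0.88, 0.95, 1.04]
# delay (ns) at increasing supply voltage, fixed temperature
delay_vs_vdd = [1.31, 1.10, 0.95, 0.87]
# CCS peak current (mA) at increasing output capacitance
ipeak_vs_cap = [1.2, 1.9, 2.6, 2.5]      # the final dip is a modeling error

print("delay rises with temp: ", is_monotonic(delay_vs_temp))          # True
print("delay falls with vdd:  ", is_monotonic(delay_vs_vdd, False))    # True
print("ipeak rises with cap:  ", is_monotonic(ipeak_vs_cap))   # False -> flag
```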

This checking technology is called Crossfire and it works with industry standard formats:

  • LEF, DEF
  • GDSII, OASIS
  • CDB
  • OA (Open Access)
  • Liberty NLDM, NLPM, NLNM, CCS, CCSN, ECSM
  • Milkyway from Synopsys
  • Verilog, SystemVerilog, Verilog AMS, VHDL
  • PLIB
  • Timing Library Format
  • HSPICE
  • FastScan, Tetramax
  • STIL/CTL (Core Test Language)

If your group creates or uses libraries or semi IP, this technology can improve your quality in less time.

At DAC you can see the folks at Fractal Technologies in booth #1617, ask for Rene Donkers.



Forte CEO on Design and Verification Complexity
by Daniel Nenni on 05-10-2013 at 9:00 am

Sean Dart’s first DAC (Las Vegas) was as a customer in 1989. Designs were hitting 15,000 gates back then, so he was looking for better schematic editors and simulators for gate-level design. Fast forward 25 years, and Sean’s customers are doing 15,000,000-gate subsystems, a number that grows steadily every year. Unfortunately, design schedules are not growing, so once again EDA is critical. By automating the generation of high-quality RTL code from high-level design descriptions, companies like Forte provide a way for designers to handle the increasing gate counts without increasing design schedules.

Q: What are the specific design challenges your customers are facing?

The dimension that Forte addresses for our customers is complexity. Coding models at much higher levels of abstraction allows designers to deal with the complexity of both design and verification in a much more complete manner.

There is, of course, a productivity benefit in the initial creation of models (IP), but the most noticeable improvement comes through reuse. High-level IP is significantly more reusable (and retargetable) than RTL IP. This dramatically improves the value of that IP for our customers by both increasing the effective longevity of the IP and making it much cheaper (and more timely) to retarget in future designs.

Q: What does your company do?

Our flagship product is Cynthesizer, the number-one SystemC-based high-level synthesis tool on the market. In the last few years we have also invested heavily in IP. This includes industry-leading fixed-point and floating-point IP that ships in tens of millions of consumer devices. We have also added a lot of IP in SystemC form, which is very high-level, retargetable and ready for use in your ESL flow. I believe the industry will move to high-level IP in order to realize its flexibility and productivity gains.

Q: Why did you join your company?

I started out with Chronology in 1997 and was focused on verification. We merged with CynApps in 2001 to form Forte and the product direction moved to being more synthesis-focused. At that time, I was the VP engineering and became CEO in 2006.

The idea of building a high-level synthesis tool was intriguing to me and the concept of ensuring that verification was considered in the tool flow from the very ground up obviously fit my previous experience. This has proved to be one of the critical components leading to successful deployment of HLS in the marketplace. I am still very passionate about the technology and the continued growth in adoption of Cynthesizer is great motivation to continue down the path.

Q: How does your company help with your customers’ design challenges?

As I mentioned before, our products really help with the issue of design complexity. But it is not only about the core implementation tools. Forte has added a lot of supplementary IP that comes with the tool to help users get started more quickly. One example is the Interface Generator, a utility that allows users to quickly configure a number of complex interfaces and have full SystemC versions of those interfaces generated automatically.

Other non-core elements that are very important include tools to develop and debug your SystemC code, plus very complete training and kick-start materials and examples: many white papers, a complete online Knowledge Base, online instructional videos, and detailed pre-canned examples and tutorials. Simply producing the best result from synthesis is not the only requirement. We have recognized that we need to work closely with new users to get them to the “expert” development level as quickly as possible, and these collateral materials are a critical element in that process.

Q: What are the tool flows your customers are using?

The Forte tool flow sits right on top of our customers’ existing RTL development flows. The input to the process is SystemC and the output is Verilog RTL that is then processed by all the major logic-synthesis tools and simulators. Forte provides a complete methodology, including integration and automation with those downstream tools, to ease the adoption of Cynthesizer.

Q: What will you be focusing on at the Design Automation Conference this year?

We’ve been widely considered the industry standard for SystemC synthesis, and this year we’re announcing the next generation of our Cynthesizer SystemC synthesis product – Cynthesizer 5.0.

Cynthesizer 5.0 is the culmination of several years’ worth of work to redesign our core synthesis platform from the ground up. We’ll be demonstrating the advantages of the new “C5” platform in terms of ease of use, performance and quality of results.

Perhaps more importantly, we are also introducing Cynthesizer Low Power, our low-power synthesis product that utilizes the C5 platform and performs a number of low-power optimizations directly in the Cynthesizer core – not as an RTL post-processing step.

We’re also rolling out our new ease-of-use products including a SystemC IDE called Cynthesizer Workbench and our new YouTube channel for customer education.

Q: Where can SemiWiki readers get more information?

We have a number of online resources to provide more information.

Our web site: www.ForteDS.com

Our YouTube channel, containing a number of instructional videos and demos: www.youtube.com/ForteDesignSystems

Our Facebook page: www.facebook.com/ForteDS

Our Blog: CynCity.ForteDS.com

And now our SemiWiki landing page: Forte on SemiWiki.com

Forte Design Systems™ is the #1 provider of electronic system-level (ESL) synthesis software, confirmed by Gary Smith EDA, provider of market intelligence for the global Electronic Design Automation (EDA) market. Forte’s software enables design at a higher level of abstraction and improves design results. Its innovative synthesis technologies and intellectual property offerings allow design teams creating complex electronic chips and systems to reduce their overall design and verification time. More than half of the top 20 worldwide semiconductor companies use Forte’s products in production today for ASIC, SoC and FPGA design. Forte is headquartered in San Jose, Calif., with additional offices in England, Japan, Korea and the United States. For more information, visit www.ForteDS.com.



Modern SoC designs require a placement- and routing-aware ECO solution to close timing
by Jamie Chen on 05-09-2013 at 9:30 pm

As an applications engineer with over 15 years supporting physical design tools that enable implementation closure, I have seen the complexity of timing closure grow continuously from one process node to the next. At 28nm, the number of scenarios for timing sign-off has increased far beyond what a Place & Route tool can handle. Most designers have turned to Static Timing Analysis (STA) tools for a solution. But STA tools have two limitations:

  • STA tools usually run in a scenario-by-scenario fashion. For STA tools to generate ECOs that close timing for all scenarios, one would need to run multiple sessions at the same time, one session per scenario. This requires the STA tools to be run simultaneously on multiple servers, with each server needing a license.
  • Current STA tools do not have or use physical information. As a result, many ECOs (Engineering Change Orders) generated by STA tools may turn out not to be implementable in the physical world due to placement and/or routing congestion.

These limitations prompted the need for a new solution that can:

  • Simultaneously handle a large number of scenarios without requiring a large number of licenses and server machines
  • Understand the impact placement and routing have on those scenarios and implement ECO directives accordingly

These requirements are critical to effectively and efficiently achieving timing closure; a toy sketch of the idea follows.
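
A minimal sketch of those two requirements in action (purely illustrative; not ICScape's tools, data, or algorithms): evaluate one candidate hold-fix ECO against every sign-off scenario at once, and reject fixes in congested regions where a detoured route could create new setup violations:

```python
# Toy multi-scenario, congestion-aware hold-fix planning.
scenarios = {                  # hold slack (ps) per scenario at one endpoint
    "ss_0p81v_125c": -12.0,
    "ff_0p99v_m40c": -35.0,
    "tt_0p90v_25c":  +4.0,
}
congestion = {"(120,340)": 0.95, "(480,210)": 0.40}   # routing utilization

def plan_hold_fix(slacks, site, delay_per_buffer=15.0, max_congestion=0.8):
    worst = min(slacks.values())          # consider all scenarios together
    if worst >= 0:
        return "no fix needed"
    if congestion[site] > max_congestion:
        return f"site {site} too congested: detour risks new setup violations"
    n = -int(worst // delay_per_buffer)   # buffers to cover the worst scenario
    return f"insert {n} buffer(s) at {site}, fixing all scenarios at once"

print(plan_hold_fix(scenarios, "(120,340)"))   # rejected: congestion
print(plan_hold_fix(scenarios, "(480,210)"))   # accepted: 3 buffers
```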

Without these capabilities, designers are forced not only into a process that takes too many iterations and too long to close, but often also into accepting lower chip performance to meet time-to-market goals.

In a recent customer engagement, I had to help the customer close timing on a design that was highly congested in both placement and routing. In addition, the design required timing closure on more than 100 sign-off scenarios. It would have taken multiple engineers many weeks to close timing using an STA-based methodology.

A key point to note is that not all routing-congested areas are also placement-congested, such as the channels between the macros at the top level of an SoC design. Hence, to effectively address timing violations, the tools and flow must understand both placement and routing congestion. Otherwise, one might cause new setup violations while fixing hold violations with detoured ECO routes. This is the primary reason why an STA-based flow that is not placement-aware and, most importantly, routing-aware takes many iterations to close timing.

We identified the congestion issues and used a placement- and routing-aware timing closure solution that could simultaneously handle all MMMC scenarios. The result: quicker timing closure with far fewer iterations!

At 20nm, a timing closure solution must be routing-aware, because the additional requirements of double patterning and Vt implant rules have a direct impact on timing and hence on closure.

I welcome your comments and invite you to share your experiences with timing closure.

ICScape Inc. (Santa Clara, California) develops and markets solutions that accelerate SoC design closure. Its flagship products, ClockExplorer and TimingExplorer, were released in 2006 and 2009 respectively and have been successfully used in over 100 SoC tape-outs. Other products from ICScape include PowerExplorer, RCExplorer and LibExplorer. The company offers sales and technical support for its products in the US, China, Japan, South Korea and Taiwan.
