
A Programmable Electrical Rule Checker

by Daniel Payne on 04-29-2013 at 11:21 pm

IC designers involved with physical design are familiar with acronyms like DRC (Design Rule Check), LVS (Layout Versus Schematic) and DFM (Design For Manufacturing), but how would you go about checking for compliance with ESD (Electrostatic Discharge) rules? You may be able to kludge something together with your DRC tool and some Tcl or Skill code, but it turns out that there is an easier approach: a programmable electrical rule checker. At Mentor Graphics they’ve dubbed this product Calibre PERC. I’ve blogged about PERC before, but I wanted to see what was new and decided to watch an on-demand web seminar where the emphasis was on actually using the tool.

This demo was conducted by Dina Medhat, TME at Mentor Graphics.

Dina explained that PERC can be used for three types of checks:


  • Advanced ERC Checks, something you cannot do with just DRC
  • ESD Checks
  • Design Guidelines, enforce your circuit design methodology

When I designed full-custom ICs at Intel we had a written circuit design methodology that embodied our best transistor-level practices; however, we had no automated method to enforce those practices. It was a manual process, with a senior circuit designer staring at schematics and IC layout looking for something out of place. Talk about tedious and error prone.

    Seven ERC Checks

    The first ERC check demonstrated was an input cell pin incorrectly tied to a power pin.

    The second ERC check was finding two connected cells with different power supplies without a level-shifter cell.

    Rules were written for each of these cases and then a test design with each violation included was run through Calibre PERC. The output results looked very intuitive and descriptive to me because they pinpointed which rule was violated and where to find it in the schematic or layout views:

    The violation for different VSS nets showed one cell using VSS1 (green) and the second cell using VSS2 (pink).

    The GUI reminded me of using a web browser, because links opened up a schematic viewer that showed which rule was violated.
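For illustration, a rule of the second kind can be sketched in ordinary Python against a toy netlist model. The cell names, the dictionary format, and the "LS" level-shifter type below are all invented for this sketch; actual PERC rules are written in the tool's own rule language, not Python.

```python
# Toy netlist: each cell records its type, supply domain, and pins.
# These names and this structure are hypothetical, for illustration only.
cells = {
    "U1": {"type": "INV",  "supply": "VDD1", "out": "n1"},
    "U2": {"type": "NAND", "supply": "VDD2", "in": ["n1"]},
    "U3": {"type": "LS",   "supply": "VDD2", "in": ["n2"]},
}

def check_level_shifters(cells):
    """Report driver/receiver pairs that cross supply domains
    without the receiver being a level-shifter (type 'LS')."""
    violations = []
    # Map each net to the cell that drives it
    drivers = {c["out"]: (name, c) for name, c in cells.items() if "out" in c}
    for name, c in cells.items():
        for net in c.get("in", []):
            if net in drivers:
                dname, d = drivers[net]
                if d["supply"] != c["supply"] and c["type"] != "LS":
                    violations.append((dname, name, net))
    return violations

print(check_level_shifters(cells))
# U1 (VDD1) drives U2 (VDD2) and U2 is not a level shifter, so it is flagged
```

The point is not the code itself but the shape of the check: it is a topological query over the netlist, which is exactly what DRC alone cannot express.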

    For ESD the first check had three constraints:

    • Resistor must exist
    • Resistor value must be greater than 100 ohms
    • A Turn-off MOS device must exist

Each ESD constraint had a rule written for PERC, which was then run on a test layout, showing results in both schematic and layout views. The second constraint was violated in this layout because it had a resistor value of only 65.0 ohms, instead of the spec of 100 ohms or more (shown in orange):

A second ESD check looked for a CDM (Charged Device Model) clamp and capacitor configuration in a voltage divider network:

    For the third ESD check they wanted to ensure that IO pads were all using protection devices with bipolar transistors, resistors and diodes in a specific topology:
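As a rough sketch of how the first ESD check's three constraints combine, here is a hypothetical Python predicate. The pad dictionary and its field names are invented for illustration and are not PERC syntax:

```python
def check_esd_pad(pad, min_res_ohms=100.0):
    """Return the list of violated ESD constraints for one I/O pad:
    a resistor must exist, its value must exceed min_res_ohms,
    and a turn-off MOS device must be present."""
    violations = []
    if pad.get("resistor") is None:
        violations.append("resistor missing")
    elif pad["resistor"] < min_res_ohms:
        violations.append(
            f"resistor {pad['resistor']} ohms < {min_res_ohms} ohms")
    if not pad.get("turn_off_mos", False):
        violations.append("turn-off MOS device missing")
    return violations

# The failing case from the demo: a 65-ohm resistor on an otherwise good pad
pad = {"resistor": 65.0, "turn_off_mos": True}
print(check_esd_pad(pad))  # ['resistor 65.0 ohms < 100.0 ohms']
```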

    Two checks were desired to validate design guidelines:

    • Logic gates with common outputs cannot have different inputs
    • Incomplete pass gates must be flagged


    This demo didn’t show the details of how you write the rules, but I recall from previous discussions at Mentor that most rules take a dozen or so lines of code.

    Summary
    If you do transistor-level IC design and want to enforce your best practices, then consider using an automated approach with a tool like PERC.




    Recovery in 2013 Semiconductor Capex
    by Bill Jewell on 04-29-2013 at 11:00 pm

    Semiconductor manufacturing equipment has been on an upswing for the last few months. Combined data from Semiconductor Equipment and Materials International (SEMI) and Semiconductor Equipment Association of Japan (SEAJ) shows three-month-average bookings have increased for five consecutive months through March 2013. Billings have increased for the last two months.

    The question is whether the recovery in semiconductor equipment will continue. After a severe fall off during the 2008-2009 recession, semiconductor equipment recovered in the second half of 2009 and through 2010. The market weakened again in March 2011 following the Japan earthquake and tsunami. A recovery beginning in late 2011 was short lived. Bookings and billings peaked at $3 billion in May 2012 before beginning another decline due to concerns over the European debt crisis and a weak U.S. economic recovery. March 2013 bookings and billings were only about two-thirds of the May 2012 peak.

    SEMI’s December 2012 forecast called for 2013 semiconductor equipment shipments to be down 0.4%. The 0.4% decline would require average quarter-to-quarter growth in shipments of about 10%, equal to the 1Q 2013 growth over 4Q 2012. IC Insights was slightly more optimistic in its March 2013 semiconductor capital expenditure forecasts, calling for 2013 growth of 1.8%.

    The announced capital expenditure plans of the largest semiconductor manufacturers indicate moderate growth in 2013 capex. The table below shows the 2013 capex guidance for the three largest integrated device manufacturers (IDMs): Intel, Samsung and SK Hynix and for the three largest wafer foundries: TSMC, Global Foundries and UMC.

Of the six companies, three plan significant capex increases in 2013, ranging from 9% to 17%. One company is flat and two predict declines. The total for the six companies (estimating a 10% drop for SK Hynix) is a capex increase of $2.5 billion, up 6% from 2012. Assuming other semiconductor manufacturers follow this trend, 2013 should show moderate but positive growth in capex and semiconductor equipment shipments. We at Semiconductor Intelligence are forecasting 2013 semiconductor capital expenditures will increase 5% from 2012. Semiconductor manufacturing equipment billings (which lag overall capex) should increase 2% in 2013.
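The aggregate above (+$2.5 billion, +6%) implies a 2012 base for the six companies of roughly $42 billion. The per-company figures below are placeholders chosen only to show the aggregation arithmetic, not the article's actual table, so the resulting percentage will not match the article exactly:

```python
# Hypothetical 2012 capex in $B and 2013 guided growth: three increases
# in the 9%-17% range, one flat, two declines, mirroring the structure
# described in the article (figures are illustrative, not the real table).
capex_2012 = {"A": 10.0, "B": 12.0, "C": 6.0, "D": 6.0, "E": 6.0, "F": 2.0}
growth_2013 = {"A": 0.17, "B": 0.09, "C": -0.10, "D": 0.12, "E": 0.0, "F": -0.05}

capex_2013 = {k: v * (1 + growth_2013[k]) for k, v in capex_2012.items()}
delta = sum(capex_2013.values()) - sum(capex_2012.values())
pct = delta / sum(capex_2012.values()) * 100
print(f"aggregate change: ${delta:.1f}B ({pct:.1f}%)")
```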



    Hot Topic – CMOS Image Sensor Verification!

    by Daniel Nenni on 04-29-2013 at 7:30 pm

Mobile applications require CMOS image sensor devices that have a high signal-to-noise ratio (SNR), low power, small area, high resolution, high dynamic range, and a high frame rate. CMOS image sensor imaging performance is noise limited, requiring accurate noise analysis of the pixel array electronics and column readout circuitry.

    Image sensor noise sources can be categorized as spatial and temporal noise sources. Spatial noise sources include dark fixed pattern, light fixed pattern, column fixed pattern, row fixed pattern, defect pixels, dead and sick pixels, scratches, and so on. In the case of dark fixed pattern, the dark current becomes very small in deep nanometer processes and its effect is typically not noticeable during normal pixel operation.

Temporal noise is random in nature and fundamentally limits image sensor performance. Temporal noise includes kT/C noise, flicker noise (1/f), dark current shot noise, photon shot noise, power supply noise, phase noise, ADC quantization noise, and so on. Temporal noise dominates the pixel random noise floor and is the main source of noise in the readout circuitry.
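Of the noise terms listed, kT/C noise has a simple closed form: the RMS voltage noise sampled onto a capacitor is sqrt(kT/C). A quick back-of-the-envelope check, using an illustrative 5 fF sense-node capacitance (a value chosen for this sketch, not taken from the article):

```python
import math

K_BOLTZMANN = 1.380649e-23  # Boltzmann constant, J/K

def ktc_noise_vrms(cap_farads, temp_kelvin=300.0):
    """RMS thermal (kT/C) noise voltage for a sampling capacitor."""
    return math.sqrt(K_BOLTZMANN * temp_kelvin / cap_farads)

# ~0.9 mV rms on 5 fF at room temperature
print(f"{ktc_noise_vrms(5e-15) * 1e6:.0f} uV rms")
```

This is why small pixel capacitances make the readout chain so sensitive: shrinking C raises the sampled noise as 1/sqrt(C).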

    With the Berkeley Design Automation Analog FastSPICE (AFS) Platform, designers can use transient noise analysis to verify the impact of temporal random device noise on the readout circuitry, including ADCs and comparators, with nanometer SPICE accuracy. In addition, designers can include post-layout parasitics and characterize the circuit for process variation and device mismatch.

    In the case of comparators, as illustrated in the plots, AFS transient noise analysis quantifies absolute jitter in the trigger point with nanometer SPICE accuracy. This accuracy is important, because the comparator is a sharp transition circuit where small noise can cause large waveform perturbations.
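A toy model shows why small input noise at the comparator's trigger point becomes timing jitter: for a ramp of slope S crossing a threshold, a voltage noise of sigma_v maps to a time jitter of roughly sigma_v / S. This is a conceptual Monte Carlo sketch with made-up numbers, not AFS's transient noise algorithm:

```python
import random
import statistics

def trigger_times(slope_v_per_s, vth, sigma_v, trials=2000, seed=1):
    """Crossing times of a noiseless ramp against a threshold that is
    perturbed by a Gaussian noise sample on each trial."""
    rng = random.Random(seed)
    times = []
    for _ in range(trials):
        v_eff = vth + rng.gauss(0.0, sigma_v)  # noise shifts the trip point
        times.append(v_eff / slope_v_per_s)    # time when ramp crosses it
    return times

# 1 V/ns ramp, 0.6 V threshold, 1 mV rms noise -> expect ~1 ps rms jitter
t = trigger_times(slope_v_per_s=1e9, vth=0.6, sigma_v=1e-3)
print(f"rms jitter ~ {statistics.stdev(t) * 1e12:.2f} ps")
```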

    For full-circuit functional verification of CMOS image sensor devices, the AFS Platform has the accuracy, performance, and capacity to handle multiframe verification of a representative subset of the full array and readout circuitry with nanometer SPICE accuracy.

    Further reading: CMOS Image Sensor Verification Hot Topic:

    http://www.berkeley-da.com/prod/hot_topic_req.html

    Berkeley Design Automation, Inc. is the recognized leader in nanometer circuit verification. The company combines the world’s fastest nanometer circuit verification platform, Analog FastSPICE, with exceptional application expertise to uniquely address nanometer circuit design challenges. More than 100 companies rely on Berkeley Design Automation to verify their nanometer-scale circuits. Berkeley Design Automation was recognized as one of the 500 fastest growing technology companies in North America by revenue in 2011 and again in 2012 by Deloitte. The company is privately held and backed by Woodside Fund, Bessemer Venture Partners, Panasonic Corp., NTT Corp., IT-Farm, and MUFJ Capital. For more information, visit http://www.berkeley-da.com.



    Beyond one FPGA comfort zone

    by Don Dingee on 04-29-2013 at 5:00 pm

Unless you are a small company with one design team, it is unlikely that you have standardized on one FPGA vendor for all your needs, forever and ever. You probably have a favorite, because of the specific class of part you use most often or the tool you are most familiar with, but I’d bet you use more than one FPGA vendor routinely.

    Continue reading “Beyond one FPGA comfort zone”


    Transient Noise Analysis (TNA)

    by Rupindermand on 04-29-2013 at 4:21 pm

    Tanner EDA Applications Engineers see a broad range of technical challenges that our users are trying to overcome. Here’s one worth sharing – it deals with transient noise analysis (TNA) for a comparator design. The customer is a producer of advanced flow measurement devices for application in medicine and research. The designer was trying to simulate (quantify) the jitter caused by the output noise of a comparator through its transition region. This signal goes into a buffer – and the buffer should also contribute some amount of jitter. Performing a noise simulation to assess the total onoise while biasing all devices in the transition region (the comparator being a very high-gain amplifier) resulted in very high total output noise.

The designer had been using LTspice to run AC noise simulations to calculate onoise, then dividing it by the gain of the comparator to get the equivalent inoise. The calculated inoise was then used to run a transient simulation. SPICE simulators include a small-signal AC analysis that calculates the noise contributed by various devices as a function of frequency, but this noise analysis applies only to circuits that operate at a constant DC operating point.
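The designer's workaround reduces to two formulas: integrate the output noise density over the bandwidth to get total onoise, then divide by the gain to refer it to the input. A sketch with illustrative numbers (the density, bandwidth and gain here are invented, not the customer's values):

```python
import math

def integrated_onoise(density_v_per_rthz, bandwidth_hz):
    """Total RMS output noise for a flat (white) noise density."""
    return density_v_per_rthz * math.sqrt(bandwidth_hz)

def equivalent_inoise(onoise_vrms, gain):
    """Input-referred noise: output noise divided by the gain."""
    return onoise_vrms / gain

onoise = integrated_onoise(100e-9, 1e6)   # 100 nV/sqrt(Hz) over 1 MHz
inoise = equivalent_inoise(onoise, gain=1000.0)
print(f"onoise = {onoise * 1e3:.1f} mV rms, inoise = {inoise * 1e6:.1f} uV rms")
```

The catch, as the paragraph above notes, is that this AC approach assumes a fixed operating point, which a comparator swinging through its transition region does not have.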

To obtain better insight into the problem, we used Tanner-AFS Transient Noise Analysis (TNA) to simulate realistic noise with the customer’s data. We set up a test bench in S-Edit to run TNA with T-AFS. Working closely with the designer, we measured the maximum peak-to-peak period jitter and the maximum peak-to-peak absolute jitter. The transient noise analysis was first run without noise, then with a noise seed of 500 and a noise scale of 1. Results of the simulations were analyzed and compared in W-Edit to determine the effect of noise. To see the spread of period jitter, a set of simulations was run, each using a different seed, with statistical measurements performed using histograms in W-Edit. The results were used to inform the final design, supporting a successful tape-out.
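The seed-sweep workflow can be sketched in a few lines: run the "simulation" once per noise seed, measure peak-to-peak period jitter in each run, and look at the spread. The oscillator model below is a stand-in for the real T-AFS netlist, and all numbers are illustrative:

```python
import random
import statistics

def periods_with_noise(nominal_period, sigma, cycles, seed):
    """Stand-in for one noisy simulation run: each cycle's period is the
    nominal period perturbed by Gaussian noise from the given seed."""
    rng = random.Random(seed)
    return [nominal_period + rng.gauss(0.0, sigma) for _ in range(cycles)]

def pk_pk_period_jitter(periods):
    """Peak-to-peak period jitter over one run."""
    return max(periods) - min(periods)

results = []
for seed in range(20):  # one "simulation" per noise seed
    p = periods_with_noise(1e-6, 2e-9, cycles=200, seed=seed)
    results.append(pk_pk_period_jitter(p))

print(f"mean pk-pk jitter over seeds: {statistics.mean(results) * 1e9:.1f} ns")
```

In the real flow the histogramming over seeds happens in W-Edit; here the `results` list plays that role.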

Tanner EDA will exhibit at DAC 2013, June 2-4, in booth 2442 and in the ARM Connected Community® (CC) Pavilion, #921. The entire analog and mixed-signal design suite will be demonstrated:

    • Front-end design tools for schematic capture, analog SPICE and FastSPICE simulation, digital simulation, transient noise analysis, waveform analysis,
    • Back-end tools, including analog layout, SDL, routing and layout accelerators as well as static timing and synthesis, and
    • Physical verification, including DRC and LVS.

Visit www.tannereda.com to learn more. DAC demo sign-ups are HERE.

Tanner EDA provides a complete line of software solutions that drive innovation for the design, layout and verification of analog and mixed-signal (A/MS) integrated circuits (ICs) and MEMS. Customers are creating breakthrough applications in areas such as power management, displays and imaging, automotive, consumer electronics, life sciences, and RF devices. A low learning curve, high interoperability, and a powerful user interface improve design team productivity and enable a low total cost of ownership (TCO). Capability and performance are matched by low support requirements and high support capability as well as an ecosystem of partners that bring advanced capabilities to A/MS designs.

    Founded in 1988, Tanner EDA solutions deliver just the right mixture of features, functionality and usability. The company has shipped over 33,000 licenses of its software to more than 5,000 customers in 67 countries.





    MOS-AK/GSA Munich Workshop

    by Daniel Nenni on 04-29-2013 at 4:06 pm

    The MOS-AK/GSA Modeling Working Group, a global compact modeling standardization forum, completed its annual spring compact modeling workshop on April 11-12, 2013 at the Institute for Technical Electronics, TUM, Munich. The event received full sponsorship from leading industrial partners including MunEDA and Tanner EDA. The German Branch of IEEE EDS was the workshop technical program promoter. More than 30 international academic researchers and modeling engineers attended three sessions to hear 12 technical compact modeling presentations.

    The workshop’s three sessions focused on common compact modeling actions. Sessions included: How to consolidate and build consistent simulation hierarchy at all levels of advanced TCAD numerical modeling; Compact/SPICE modeling for Analog / Mixed Signal circuits; and Corner modeling and statistical simulations.

The MOS-AK/GSA speakers discussed: statistical modeling with backward propagation of variance (BPV) and covariance equations (K.-W. Pieper; Infineon); circuit sizing: corner model challenges and applications (M. Sylvester; MunEDA); compact modeling activities in the framework of the EU-Funded “COMON” project (B. Iñiguez; URV); effective device modeling and verification tools (I. Nickeleit; Agilent); modeling effects of dynamic BTI degradation on analog and mixed-signal CMOS circuits (L. Heiss; LTE, TUM); STEEPER: tunnel field effect transistors (TFETs) technology, devices and applications (T. Schulz; Intel); current and future challenges for TCAD (C. Jungemann; RWTH); advances in Verilog-A compact semiconductor device modeling with Qucs/QucsStudio (M. Brinson; London Metropolitan University); FDSOI devices benchmarking (B.-Y. Nguyen; SOITEC); COMON: SOI multigate devices modeling (A. Kloes; THM); COMON: FinFET modeling activities (U. Monga; Intel); COMON: HV MOS devices modeling (M. Bucher; TUC).

The event was accompanied by a series of software/hardware demos by MOS-AK/GSA industrial partners: Agilent, MunEDA and Tanner EDA. The session technical and software/hardware demo presentations are available for download at: http://www.mos-ak.org/munich_2013/

    The MOS-AK/GSA Modeling Working Group is coordinating several upcoming modeling events: a special compact modeling session at the MIXDES Conference in Gdynia (https://www.mixdes.org); an autumn Q3/2013 MOS-AK/GSA workshop in Bucharest, a winter Q4/2013 MOS-AK/GSA meeting in Washington DC, and a spring Q2/2014 MOS-AK/GSA meeting in London (http://www.mos-ak.org).

    About MOS-AK/GSA Modeling Working Group:
    In January 2009, GSA merged its efforts with MOS-AK, a well-known industry compact modeling volunteer group primarily focused in Europe, to re-activate its Modeling Working Group. Its purpose, initiatives and deliverables coincide with MOS-AK’s purpose, initiatives and deliverables. The Modeling Working Group plays a central role in developing a common language among foundries, CAD vendors, IC designers and model developers by contributing and promoting different elements of compact model standardization and related tools for model development, validation/implementation and distribution.

About MunEDA:
MunEDA provides leading EDA software technology for analysis, modelling, optimization, and verification of the performance and yield of analog, mixed-signal and digital designs. Founded in 2001, MunEDA is headquartered in Munich, Germany, with worldwide offices and representation by leading EDA distributors. MunEDA solutions are in industrial use by leading semiconductor companies in the areas of communications, computing, memories, automotive, and consumer electronics.

About Tanner EDA:
    Tanner EDA provides a complete line of software solutions that drive innovation for the design, layout and verification of analog and mixed-signal (A/MS) integrated circuits (ICs) and MEMS. Customers are creating breakthrough applications in areas such as power management, displays and imaging, automotive, consumer electronics, life sciences, and RF devices. Capability and performance are matched by low support requirements and high support capability as well as an ecosystem of partners that bring advanced capabilities to A/MS designs.



    Challenges of 20nm IC Design

    by Daniel Payne on 04-29-2013 at 11:38 am

    Designing at the 20nm node is harder than at 28nm, mostly because of the lithography and process variability challenges that in turn require changes to EDA tools and mask making. The attraction of 20nm design is realizing SoCs with 20 billion transistors. Saleem Haider from Synopsys spoke with me last week to review how Synopsys has re-tooled their EDA software to enable 20nm design.


    Saleem Haider, Synopsys
    Continue reading “Challenges of 20nm IC Design”


    NVM IP Security Solutions…

    by Eric Esteve on 04-29-2013 at 8:51 am

If you need to securely store data that is unique by nature, such as an encryption key or a software code update, in your SoC, then you will probably decide to implement a Non-Volatile Memory (NVM) block, delivered as an IP function, instead of using an expensive CMOS technology with embedded Flash capability. For example, Synopsys DesignWare non-volatile memory (NVM) AEON®/multiple-time programmable (MTP) EEPROM IP delivers EEPROM-level performance in standard CMOS processes. The target applications for NVM IP range from multimedia SoCs (for Digital Rights Management purposes) to analog chip calibration and trimming. The silicon-proven DesignWare NVM IP is delivered as a hard GDSII block and includes all the required control and support circuitry, including the charge pump and high-voltage distribution circuits. As we will see, NVM IP should be available in multiple technology nodes and multiple process flavors.

NVM IP is frequently used in wireless (Bluetooth, digital radio, NFC) and digital home (HDMI port processor) SoCs to store customer configuration and calibration data. In this case, the NVM IP should be available in the most advanced nodes, in order to support the technologies in use for wireless and digital home applications. Such applications do not require very high endurance; a maximum of 10,000 write cycles is largely enough. Likewise, a temperature range of -40°C to +125°C is well suited for digital home SoCs, as indicated in the table below (click for a better view):

For NVM used in high-voltage technologies, like HV CMOS or Bipolar-CMOS-DMOS, to support power management (battery fuel gauge, digital power) for performance tracking or configuration settings applications, the requirements are different. The IP needs to be qualified over a wider temperature range, -40°C to +150°C, and designing on a more mature technology node allows for higher endurance, up to 1,000,000 write cycles in 250 nm technologies. Read and programming voltages can be much higher as well, as you can see in the table below:

A third family of NVM IP is specifically dedicated to analog or mixed-signal designs, for applications like encryption or authentication, EEPROM replacement, and customization or calibration settings. The target technologies range from 180 nm down to 90 nm, supporting high endurance of at least 100,000 write cycles, and up to 1,000,000 in some cases.

You can learn much more about the DesignWare NVM IP, and start integrating MTP capabilities into your advanced SoCs, by going to the above link and downloading one of these papers:

    Comparison of data storage methods in floating gate and antifuse NVM IP technologies

    As far as I am concerned, I would recommend the White Paper titled “Protect your Electronic Wallet Against Hackers”. This paper will not only teach you about the different data storage methods, like “floating gate” or “antifuse” NVM IP technologies, it will also explain what protection level these different design approaches are offering. Even more exciting, the paper will precisely describe three common reverse engineering techniques used by hackers to get access to the supposedly safely stored information – your wallet.

    The aim of the paper is to direct you to the most resistant NVM IP technology, as you can see per this extract:

    The Most Resistant NVM IP Technology to Reverse Engineering Schemes
Both antifuse and floating gate NVM IP technologies are relatively immune to reverse engineering, and both require someone with the right equipment and skill level to extract the contents. But there are several advantages of floating gate for MTP applications over antifuse for OTP applications that designers should consider when developing SoCs for data storage applications with higher security requirements:

    • Floating gate technology makes no physical change to the silicon structure and thus is more resistant to techniques such as top-down planar inspection
    • The contents of a floating gate technology can be disturbed or erased by plasma etch techniques during the preparation process. Antifuse technology is not affected by plasma etch and samples can be prepared easily
    • The act of attempting to reverse engineer a floating gate technology using voltage contrast will erase the data contents after one attempt. Antifuse technology allows for multiple attempts without disturbing the data contents.

The paper shows you, step by step, how hackers proceed to reverse engineer NVM IP (I don’t think we are talking about 15-year-old geeks…), for example with this voltage contrast measurement technique:

    De-processing required for effective voltage contrast measurements

    Just have a look at the beginning of the paper summary here:

The capability to protect personal information from hackers through a secure element is critical to the continued development of the NFC ecosystem. Design engineers and system architects who most effectively implement data security from the start will have a competitive advantage in the marketplace. One of the key aspects of data security in NFC is the NVM in which the data is stored. There are two main technologies in use today for NVM IP in SoC applications: antifuse for OTP and floating gate for MTP. Understanding the basic differences between the two technologies, and their impact on reverse engineering techniques, is critical to making the right NVM IP technology choice for the end application. After reviewing three common reverse engineering techniques, the conclusion is … (just go to Protect your Electronic Wallet Against Hackers to get the final words…)

    Eric Esteve from IPNEST



Properly Handing Off Clock Tree Synthesis Specifications

    by Randy Smith on 04-28-2013 at 1:00 pm

Given today’s design requirements with respect to low power, there is increasing focus on the contribution of a design’s clock trees to total power. The design decisions made by the front-end team to achieve high performance without wasting power must be conveyed to the back-end team. This hand-off must be accurate and complete. A key component of that hand-off is the clock tree synthesis (CTS) constraints.

Let’s look at what can go wrong and how to avoid these pitfalls.

The clock trees in chips ten years ago were fairly simple, and most chips had only a handful of clock trees. In today’s technologies this has exploded into a forest of clock trees. Sheer volume alone points to the need for automation. But even more daunting are the complexities of today’s clock trees. Clock gating has been in use for a while now to aid in reducing power. Included IP blocks will have their own clock requirements. There are generated clocks, overlapping clocks, clock dividers, and on and on. All of this information needs to be packaged by the front-end team into the SDC file and clock specification (clock constraint) file for use by the back-end team.

ICScape’s ClockExplorer tool was developed to provide analysis capabilities that help both teams understand the entire clock graph being developed. It crosschecks the equivalence of constraints generated by the front-end and back-end teams, and both teams can use ClockExplorer to analyze and sign off the netlist and clock constraints. ClockExplorer checks the clock structure and aids in the generation of constraints for a CTS tool, including CTS sequencing for complex situations with multiple SDC files and overlapping clock trees.
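To make the crosscheck idea concrete, here is a minimal sketch that parses `create_clock` commands from two SDC texts and reports clocks whose period or source port differ between the front-end and back-end versions. Real SDC parsing, and ClockExplorer's checks, cover far more than this toy regex handles:

```python
import re

# Matches the simplest create_clock form: -name, -period, [get_ports ...]
CLOCK_RE = re.compile(
    r"create_clock\s+-name\s+(\S+)\s+-period\s+([\d.]+)\s+\[get_ports\s+(\S+)\]")

def parse_clocks(sdc_text):
    """Return {clock_name: (period_ns, source_port)} for each create_clock."""
    return {m.group(1): (float(m.group(2)), m.group(3))
            for m in CLOCK_RE.finditer(sdc_text)}

def crosscheck(front_sdc, back_sdc):
    """List clocks whose definition differs (or is missing) between files."""
    a, b = parse_clocks(front_sdc), parse_clocks(back_sdc)
    mismatches = []
    for name in sorted(set(a) | set(b)):
        if a.get(name) != b.get(name):
            mismatches.append((name, a.get(name), b.get(name)))
    return mismatches

front = 'create_clock -name clk_core -period 2.0 [get_ports clk]'
back  = 'create_clock -name clk_core -period 2.5 [get_ports clk]'
print(crosscheck(front, back))  # period mismatch on clk_core is flagged
```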

If these tasks are done manually by either team, mistakes are much more likely to occur.

Beyond the important capabilities of simply generating and checking the constraints, ClockExplorer also optimizes the clock topology to reduce latency. As a visual aid, ClockExplorer also generates a clock schematic, greatly assisting in reviews and discussions between the teams. For a more detailed look at all the analysis features of ClockExplorer, including more details on its SDC constraint checking features, see the white paper.


By using tools such as ICScape’s ClockExplorer, I think that front-end and back-end design teams will be able to cut design errors due to improper understanding, or generation, of clock tree synthesis constraints. They will have a common view of the clock system, with consistent checking and automated generation handling the key aspects of the constraint files. This should make a difficult task much easier and more reliable. Where discrepancies do crop up, the visual aid enabled by the automatic generation of clock schematics should make debugging and communication between the teams much easier.

    You can also see ICScape at DAC. Schedule a meeting by clicking here.


    Reduce Errors in Multi-threaded Designs

    by Randy Smith on 04-28-2013 at 1:00 pm

Many advanced algorithmic IPs are described in C++. We use this language because of its flexibility. Of course, software algorithms are written to be executed on processors, so they don’t solve all the issues of implementing the algorithm directly in hardware. This is not simply a high-level synthesis (HLS) issue. Usually, for implementation in hardware, a software algorithm needs to be transformed to operate on a streaming, sample-by-sample basis. To achieve the required performance, a monolithic software algorithm is implemented as a chain of modules operating in parallel. The single-threaded C++ algorithm won’t meet the system constraints if it is left in single-threaded form. For such a multi-module or multi-threaded implementation, you’re going to need more architectural information, such as the macro-level block diagram and the number and type of interfaces between the modules. This type of information is best captured in a language like SystemC. But how do you get there?

    Making transformations like these is a task that requires both familiarity with hardware as well as familiarity with the algorithms. One part which can be automated is the insertion of the communications channels to properly interface the threads or modules. This is not just the synchronization mechanism, but also storage and buffering since the streams and modules are working independently. The communications between blocks is a necessary part of the design but may not specifically add unique value.

    Forte has created an application inside its Cynthesizer Workbench called Interface Generator which automatically generates the mechanisms to efficiently manage the data transfer between multiple threads and modules. The important decisions requiring algorithmic understanding, such as the types of channel to use to interface two modules, are left to the designer. The designer is given many types of channels to choose from – the data type, data storage capacity, how the data is to be synchronized, etc.

    Using this interface generation approach, these custom SystemC channels are added to the design library. The designer can use a set of function calls implemented in the channels to handle the transfer of data between the streams or modules. The standardized function calls created by the Interface Generator give the designer a new layer of abstraction, hiding the details and reducing implementation errors inherent in creating the interface code manually (see this video for an example). Errors are reduced by using these standardized function calls to implement the complex interface behavior. Also, this gives the designer the flexibility to try different types of channels to see which type of channel is best for meeting the target specification without having to write low-level RTL protocols for each attempt. Accessing the communication channel by calling these functions allows the designer to work at a higher level of abstraction, with the details of the storage and synchronization protocols encapsulated inside the channel.
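A rough analogy in Python illustrates the generated-channel idea: the designer calls `put`/`get` on a channel object and never touches the synchronization or buffering underneath. Cynthesizer's channels are SystemC, of course; this sketch only mirrors the abstraction, with invented names:

```python
import queue
import threading

class Channel:
    """Buffered, blocking point-to-point channel between two threads,
    hiding storage depth and synchronization behind put/get calls."""
    def __init__(self, depth=4):
        self._q = queue.Queue(maxsize=depth)  # storage + synchronization
    def put(self, sample):
        self._q.put(sample)   # blocks when the buffer is full
    def get(self):
        return self._q.get()  # blocks when the buffer is empty

def producer(ch, n):
    # Stand-in for one module of the pipeline streaming samples out
    for i in range(n):
        ch.put(i * i)

ch = Channel(depth=4)
t = threading.Thread(target=producer, args=(ch, 5))
t.start()
received = [ch.get() for _ in range(5)]  # consumer side of the channel
t.join()
print(received)  # [0, 1, 4, 9, 16]
```

Swapping the channel's depth or blocking policy changes the timing behavior without touching producer or consumer code, which is the design-exploration benefit the paragraph above describes.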

It seems to me that this approach would be useful in a huge number of applications. For example, many video vendors have C or C++ implementations of picture improvement algorithms. The algorithms are implemented under different specifications each time they are used. In order to meet the various constraints placed on the picture improvement module, such as area, performance (e.g., frames per second), and power, the designers will explore different ways to parallelize the design. How the threads or modules are connected will have an impact on the correctness of the design as well as its performance. Using the Interface Generator, the designer can easily experiment with multiple channel types to see which types meet the overall design specification.

    Another situation where such a problem may come up is when an algorithm is deemed too large and needs to be broken into multiple modules. This could be due to a chip’s floorplan constraints or to allow the design to be broken down for easier verification. It could be a way to save cost by splitting an algorithm into two or more less expensive FPGAs instead of one large FPGA, or it could be to assign the work to a number of designers working at the same time. The value of the Interface Generator is quite clear here as errors are reduced and multiple different interface approaches can be tried in order to meet the design objectives. A video showing usage of the Interface Generator in the design of an edge detection filter can be found here.

    Bottom line: Designers who need to take an untimed C++ design and implement it as a multi-threaded or multi-module hardware design can benefit from the automatic creation of communication channels.

    For more information see the Forte website here.

    Forte is an ‘I LOVE DAC’ sponsor. To get your free DAC badge, or to sign up for a Forte demo at DAC, click here.