

Raised on radio: RPUs target autos and wearables
by Don Dingee on 12-29-2013 at 9:00 pm

We’ve become familiar with Imagination Technologies as a leading provider of IP for mobile GPUs, and within the last year the acquisition of the MIPS architecture has established them further in NPU and SoC circles. Their latest move targets an IP solution more in line with their heritage.

Imagination, way before becoming famous in every iPhone, was a digital radio company. With the UK home to the world’s largest digital radio network, much of that success was powered by the Pure family of digital audio broadcasting (DAB) receivers. Initially designed to showcase Imagination’s digital radio chipsets, Pure took off as a consumer brand by breaking the £100 DAB receiver price point.

The timeline of the computer industry is proof history repeats itself. The smartphone and tablet have blossomed, with most now carrying Wi-Fi and Bluetooth as standard payload. Digital connectivity is moving into new application areas such as automotive, the Internet of Things, and wearables, with requirements for even smaller and more power efficient radio implementations.

There is no shortage of combination chipsets for Wi-Fi plus Bluetooth out there, well represented by the likes of Broadcom, Intel, Marvell, MediaTek, Qualcomm Atheros, Redpine Signals, and TI. Chipset solutions evolved because SoC makers were reluctant to place RF elements directly on chip, for several reasons. Historically, RF solutions have been finicky, resisting attempts by mere digital design mortals to integrate – but that resistance is diminishing as solutions improve.

More importantly, RF solutions have posed a thorny business problem. Adding a diverse set of multiple radios drives up bill-of-materials costs, and including unnecessary features in devices targeting low price points is a losing proposition. The standards themselves have been a moving target, with the state of the art today being 802.11ac for Wi-Fi and Bluetooth 4.0 including Low Energy technology. Also, varying regulatory and carrier requirements in worldwide markets have made integration challenging, driving many device vendors to prefer RF chipsets external to the mobile SoC so solutions can be tailored.

This RF churn may finally be settling out, as 802.11ac and Bluetooth 4.0 are becoming checklist items no device vendor wants to be without. There is another checklist item for automotive makers: FM radio, still alive and well everywhere in the world. FM radio also fits well with wearables, offering users the option of over-the-air broadcasts or Internet radio capability.

Against that backdrop, the battle may be shifting to a more integrated, lower power SoC design with combination RF capability built in. Imagination is well poised to fill this role, combining their RF heritage with deep experience in low-power SoC IP. This is the motivation for the new Ensigma Series4 radio processing unit (RPU), an IP core combining Wi-Fi, Bluetooth, and an FM radio receiver.

Combining physical interfaces is one thing, but significant attention needs to be paid to blending traffic under real-world use scenarios such as audio streaming. The Ensigma Series4 carries advanced coexistence algorithms with features like Bluetooth cycle prediction, packet prioritization supporting QoS, and optimized use of unified memory. If effective, this could mean significant improvements in power and performance compared to external chipsets.
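To make the coexistence idea concrete, here is a minimal, hypothetical sketch of priority-based scheduling between a periodic Bluetooth audio link and queued Wi-Fi traffic. It is not Imagination’s Ensigma implementation, whose algorithms are not public in this detail; the interval, slot length, and packet durations are made-up numbers chosen only to show how reserved audio slots and QoS priorities might interact.

```python
# Toy illustration of priority-based radio coexistence scheduling. This is
# NOT Imagination's Ensigma implementation -- just a sketch of the general
# idea: predictable Bluetooth audio slots are reserved first, and queued
# Wi-Fi traffic is granted the remaining airtime in QoS-priority order.
# The interval and packet durations below are made-up numbers.
import heapq

BT_AUDIO_PERIOD_US = 3750   # hypothetical audio link interval
BT_AUDIO_SLOT_US = 1250     # hypothetical reserved slot length

def schedule(wifi_packets, horizon_us):
    """wifi_packets: list of (priority, duration_us, name); lower number = higher priority."""
    reserved = [(t, BT_AUDIO_SLOT_US, "BT audio")
                for t in range(0, horizon_us, BT_AUDIO_PERIOD_US)]
    queue = list(wifi_packets)
    heapq.heapify(queue)                     # pop highest-priority Wi-Fi packet first
    grants, now = [], 0
    for slot_start, slot_len, name in reserved:
        # Fill the gap before the next reserved Bluetooth slot with Wi-Fi traffic.
        while queue and now + queue[0][1] <= slot_start:
            prio, dur, pkt = heapq.heappop(queue)
            grants.append((now, pkt))
            now += dur
        grants.append((slot_start, name))    # the predicted audio slot always wins
        now = max(now, slot_start) + slot_len
    return grants

if __name__ == "__main__":
    wifi = [(2, 2000, "Wi-Fi best-effort"), (1, 1500, "Wi-Fi video (QoS)")]
    for t, pkt in schedule(wifi, 15000):
        print(f"{t:>6} us  {pkt}")
```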

The Ensigma Series4 is also configurable, so baseband solutions can be customized by the OEM where that need exists. For more on the Ensigma Series4 architecture, visit the Imagination blog:

Ensigma combo IP delivers Wi-Fi, Bluetooth, and FM connectivity

Will Imagination succeed with RPU IP? Combo RF chipset vendors have so far feared to tread on this ground, but with the experience earned in trailblazing digital radio markets and delivering power-efficient IP for mobile solutions, an Imagination RPU solution is certainly plausible for a new generation of tiny wearable devices. The addition of an FM receiver – which presumably could be configured for other over-the-air standards like HD Radio or DAB+ – adds some extra intrigue, especially for automotive OEMs.

What do you think of the markets and this approach?

More articles by Don Dingee…





Intel Wafer Pricing Exposed!
by Daniel Nenni on 12-28-2013 at 12:00 pm

One of the big questions on Intel’s foundry strategy is: Can they compete on wafer pricing? Fortunately there are now detailed reports that support what most of us fabless folks already know. The simple answer is no, Intel cannot compete with TSMC or Samsung on wafer pricing at 28nm, 20nm, and 14nm.

In fact, recent reports have shown that Intel 32nm versus TSMC 28nm gives TSMC a 30%+ wafer cost advantage. At Intel 22nm versus TSMC 20nm the cost advantage is 35%+. This is an apples-to-apples comparison of Atom SoC versus ARM SoC silicon. Another key metric is capacity. During the recent investor meeting, Intel CFO Stacy Smith claimed Intel was at 80% capacity, so that is the number used in the wafer cost calculations for both Intel and TSMC. I question this number since Intel has three idle fabs (OR, AZ, Ireland) and TSMC 28nm was at 100% capacity up until recently, but I digress…

On the technical side we now know that, even with Intel’s superior process claims, TSMC 28nm SoCs easily beat Intel at 32nm in both power and performance. TSMC 20nm SoCs will again beat Intel 22nm. 14nm SoCs have yet to launch, but one thing I can tell you is that Intel will NOT win business from TSMC’s top customers, which make up more than 50% of fabless revenues:


  • Qualcomm: TSMC and Samsung
  • Apple: TSMC and Samsung
  • NVIDIA: TSMC and Samsung
  • AMD: TSMC and GlobalFoundries
  • MediaTek: TSMC
  • Marvell: TSMC and Samsung
  • Broadcom: TSMC and Samsung
  • TI: TSMC
  • Spreadtrum: TSMC and Samsung
  • Xilinx: TSMC

As you can see, most of these customers will straddle TSMC and Samsung at 14nm to get pricing concessions, which will make it even more difficult for Intel to compete. Additionally, Intel will have the added burden of the three idle fabs, which brings utilization down to 50% (my guess, since Intel was not “transparent” about it during analyst day). I’m really looking forward to the utilization conversation on the next earnings call. Mr. Smith has some explaining to do! Let’s see what kind of answer $15M+ in CFO compensation will get us. Since TSMC 20nm and 16nm use the same metal fabric, the fabs are the same, so expect a very high utilization rate.
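To see why utilization matters so much to wafer pricing, here is a back-of-the-envelope sketch. Every number in it is a hypothetical placeholder rather than Intel’s or TSMC’s actual cost structure; the point is simply that fixed fab costs spread over fewer wafer starts inflate the effective cost of each wafer.

```python
# Back-of-the-envelope sketch of why utilization drives wafer cost. Every
# number here is a hypothetical placeholder, not Intel's or TSMC's actual
# cost structure; the shape of the curve is the point.

def cost_per_wafer(fixed_cost_per_quarter, capacity_wafer_starts, utilization,
                   variable_cost_per_wafer):
    """Fixed costs (depreciation, staffing) are spread over the wafers actually
    started, so the effective cost per wafer rises as utilization falls."""
    wafers_started = capacity_wafer_starts * utilization
    return fixed_cost_per_quarter / wafers_started + variable_cost_per_wafer

FIXED = 1_500_000_000   # hypothetical $/quarter for a leading-edge fab network
CAPACITY = 600_000      # hypothetical wafer starts per quarter at full load
VARIABLE = 1_500        # hypothetical materials/consumables $ per wafer

for util in (1.0, 0.8, 0.5):
    cost = cost_per_wafer(FIXED, CAPACITY, util, VARIABLE)
    print(f"utilization {util:.0%}: ${cost:,.0f} per wafer")
```

Dropping utilization from 80% to 50% in this toy model raises the effective wafer cost by roughly 40%, which is the heart of the argument above.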

    Also read: Should Intel Offer Foundry Services?

The bottom line is that the Intel 14nm “Fill the Fab” foundry strategy is a paper tiger to appease Wall Street. At 10nm it may be a different story altogether. If Intel does in fact deliver 10nm SoCs a year or two ahead of the foundries, they may get business at the normal Intel price premium. But at 14nm it is simply not going to happen, no way, no how.

I also question the business model where you allow your products to be manufactured by a direct competitor. It is a conflict of interest. It is a desperate business move. It is the reason why pure-play foundries exist. But these are desperate times, with only one pure-play foundry (TSMC) for leading-edge SoC silicon. If GlobalFoundries and UMC had the capacity and were able to deliver wafers in lockstep with TSMC, Samsung and Intel would absolutely not have a chance in the foundry business.

    More Articles by Daniel Nenni…..




    Macro Placement Challenges
    by Paul McLellan on 12-27-2013 at 7:28 pm

One of the challenges of physical design of a modern SoC is macro placement. Back when a design had just a few macros, floorplanning could be handled largely manually. But modern SoCs suffer from a number of problems. A new white paper from Mentor covers Olympus-SoC’s features to address these issues:

• As we move to smaller technology nodes we can put a lot more functionality on the same die, though perhaps not quite as much as you want, since the metal fabric is not scaling as fast as it has historically. The varying aspect ratios of the macros make it difficult to pack them tightly without wasting silicon area. Designers are left with the challenge of running multiple iterations to identify a reasonable configuration to take through full place and route.
• Poor seed placement. Floorplan designers typically run multiple trials using various seed placement configurations in the hope of finding the ideal solution. Since the quality of the initial placement is rarely optimal, you see an increase in both the number of trials with various seed placements and the number of iterations of the pre-CTS runs.
    • Design space exploration. Conventional tools lack the ability to perform early design space exploration that considers various design metrics such as timing, area, leakage power, and congestion. This leads to inferior seed placement and also to performance and area being left on the table. There is typically no automated method to collate and present the results of all the trial macro placements done, so there is no way to make an informed decision on the full-blown implementation.
    • Missing macro analysis functionality. Traditional tools provide very primitive macro analysis capabilities for determining whether a certain placement configuration is best suited for implementation. The analysis engines typically do not have the intelligence to analyze the connectivity through multiple sequential stages. Another drawback is that if a macro placement already exists for a block based on legacy design experience, the analysis functionality is either not supported or is very minimal. In order to determine the best placement, it is critical to have very powerful analysis and incremental what-if analysis capabilities.
• Pre clock-tree synthesis (CTS) optimization. After getting the initial seed placement, there are typically many cycles iterating through the full-blown pre-CTS flow to analyze timing, area, congestion and other design metrics before choosing a configuration for implementation. Because the seed placement QoR is often sub-par, the design needs to be analyzed to determine the feasibility of closing the design or block, which involves launching multiple trials in parallel. The pre-CTS runs are costly both in terms of resources and time, but can be eliminated with a good-quality seed placement.
• The result of these problems is more iterations, and as a consequence the time needed to reach timing closure increases.


    Olympus-SoC offers a completely automated macro placement (AMP) technology for both block level and hierarchical designs that delivers the best QoR in the shortest time. It offers tools for design space exploration to make the right trade-offs to meet various design metrics like timing, area, and congestion. AMP includes powerful what-if macro analysis capabilities for fine tuning macro placement results. The AMP flow significantly reduces the design time and eliminates weeks of manual iterations.
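As a rough illustration of what trial-and-score design space exploration looks like, here is a toy sketch that runs a handful of hypothetical seed-placement trials, collates timing, area, and congestion metrics, and picks the best candidate by a weighted score. It is not Mentor’s AMP algorithm; every function and number in it is a stand-in.

```python
# Minimal sketch of trial-and-score design space exploration for seed macro
# placements. It only illustrates the flow described above (run trials in
# parallel, collate metrics, pick a seed for full place and route); it is not
# Mentor's Olympus-SoC AMP algorithm, and every function and number is a
# hypothetical stand-in.
import random

def run_trial(seed):
    """Stand-in for one seed-placement trial; returns made-up design metrics."""
    rng = random.Random(seed)
    return {
        "wns_ns": -rng.uniform(0.0, 0.5),           # worst negative slack
        "area_um2": rng.uniform(0.95, 1.10) * 1e6,  # placed area
        "congestion": rng.uniform(0.0, 1.0),        # 0 = clean, 1 = severe
    }

def score(m):
    """Lower is better: penalize timing violations, area, and congestion."""
    timing_penalty = -m["wns_ns"] * 10.0            # wns is negative when violated
    area_penalty = m["area_um2"] * 1e-6
    congestion_penalty = m["congestion"] * 2.0
    return timing_penalty + area_penalty + congestion_penalty

# Run a batch of trials (these would be launched in parallel in practice),
# then pick the best candidate to take through the full pre-CTS flow.
trials = {seed: run_trial(seed) for seed in range(16)}
best = min(trials, key=lambda s: score(trials[s]))
print("best seed placement:", best, trials[best])
```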

    Olympus-SoC’s AMP technology offers the following features:

    • Data flow graph driven macro placement and legalization
    • Design space exploration with customizable recipes for parallel exploration
    • Powerful macro analysis and refinement
    • Completely automated flow with minimal manual intervention

The white paper covers the flow through the graph-driven macro placement engine, how to assess the quality of a macro placement, and how to refine the placement. There is also a small case study.

    The white paper is here.


    More articles by Paul McLellan…




    Get into a Xilinx FPGA for Under $90
    by Luke Miller on 12-27-2013 at 6:30 pm

Jump into Xilinx Programmable Logic today! I wanted to encourage my dear readers: if you have not tried using a Xilinx FPGA (Field Programmable Gate Array) or even a CPLD (Complex Programmable Logic Device), then it is worth your time to begin your evaluation. Maybe you got one for Christmas? If not, it is easier than you think to start exploring the wonderful world of Xilinx!

Many folks, even the non-nerdy, know what a CPU or GPU is, but the FPGA remains in the shadows at times, so I would like to highlight some suggestions that will allow you, without much money and effort, to enter the world of Xilinx programmable logic devices. Xilinx is certainly the world’s FPGA leader: the 28nm Virtex-7 is a huge success, coupled with Zynq (dual-ARM devices, and even a leg) and the 20nm UltraScale family boasting 5,500+ DSPs and 64 GTs, things are looking wonderful for Xilinx. But maybe you don’t need all that horsepower and are just curious and want to learn more without a heavy investment of time and money.

So how do you try Xilinx programmable logic? To the outsider, things can feel scary, hearing terms like VHDL, Verilog, synthesis, place and route, constraints, and timing closure. In fact it can feel overwhelming. Well, the good news is that it is not as hard as one may think. Most of us, in our FORTRAN or BASIC (or Abacus for some of you ;)) days, had to write the ‘Hello World’ program. The equivalent of ‘Hello World’ for Xilinx programmable logic, in my humble opinion, is the blinking LED. Getting the LED to blink means many steps were taught and caught along the way, and the LED winking at you will propel you to learn more and more of the multidimensional world of Xilinx FPGAs.

Like anything new there is a learning curve, but I am convinced that once you start playing with Xilinx FPGAs you will have more ideas for designs that you would like to implement. The possibilities are endless, as Xilinx FPGAs do not tie you down to any particular standard. You program or design the interface, and you design the algorithms. It could be SPI, RS-232, PCIe, fiber, or 10 or 100 GbE feeding a video tracking algorithm for, well, whatever tickles your fancy. The point is you control more than just some functions: you own the space, the IO, the RAM, the DSP.

Today, you can get into a Xilinx board for $89 from Digilent, Avnet has the MicroZed for $199, and Adapteva has the Zynq-based Parallella board for $99. They come with the necessary software license to begin playing with your Xilinx FPGA board. What you’ll find is that you have a blank canvas in the form of silicon. What do you want it to do? Once you are comfortable with the Xilinx FPGA board and the tool flow, you’ll really be wondering why you did not do this sooner. There are literally unlimited resources, forums, and groups creating a growing Xilinx FPGA community. So what will you create? Check out Xilinx.com for more details on all their programmable logic solutions and the aforementioned boards to see what you may have been missing.

    More articles by Luke Miller…




    Patterns looking inside, not just between, logic cells
    by Don Dingee on 12-27-2013 at 5:00 pm

    Traditional logic testing relies on blasting pattern after pattern at the inputs, trying to exercise combinations to shake faults out of logic and hopefully have them manifested at an observable pin, be it a test point or a final output stage. It’s a remarkably inefficient process with a lot of randomness and luck involved.

Getting beyond naïve patterns (typified by the “walking 0 or 1” or transition sequences such as hex FF-00-AA-55 applied across a set of inputs) and random patterns hoping to bump into stuck-at defects, computational strategies for automated test pattern generation (ATPG) began modeling logic to look for more subtle faults such as transition, bridging, open, and small-delay. However, as designs have gotten larger, the gate-exhaustive strategy – hitting every input with every possible combination in a two-cycle sequence designed to shake out both static and dynamic faults – quickly generates a lot of patterns but still fails to find many cell-internal defects.
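Here is a quick sketch of the two points above, using a hypothetical 2-input NAND cell: only certain patterns expose a given stuck-at defect, and gate-exhaustive two-cycle sequences multiply rapidly as cell inputs grow. The defect and the cell are illustrative choices, not anything from the white paper.

```python
# Sketch of two points from the paragraph above, using a hypothetical 2-input
# NAND cell: only certain patterns expose a given stuck-at defect, and
# gate-exhaustive two-cycle sequences multiply rapidly with cell inputs.
# Real ATPG works on the whole netlist with far richer fault models.
from itertools import product

def nand(a, b):
    return int(not (a and b))

def nand_input_a_stuck_at_0(a, b):      # hypothetical defect on input 'a'
    return nand(0, b)

# A pattern detects the defect when good and faulty outputs differ.
detecting = [(a, b) for a, b in product((0, 1), repeat=2)
             if nand(a, b) != nand_input_a_stuck_at_0(a, b)]
print("patterns detecting a stuck-at-0 on input a:", detecting)   # [(1, 1)]

# Gate-exhaustive two-cycle testing applies every ordered pair of input
# vectors, so the sequence count grows as 4**n for an n-input cell.
for n in (2, 4, 6, 8):
    print(f"{n}-input cell: {4**n} two-cycle input sequences")
```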

    A new white paper from Steve Pateras at Mentor Graphics explores a relatively new idea in ATPG that challenges many existing assumptions about finding and shaking faults from a design.

    Cell-Aware Test

A cell-aware test approach goes much deeper, seeking to target specific defects with a smaller set of stimuli. Rather than analyzing the logic of the design first, cell-aware test goes after the technology library of the process node. It begins with an extraction of the physical library, sifting through the GDSII representation and looking at transistor-level effects with parasitic resistances and capacitances. These values then go into an analog simulation, looking for a spanning set of cell input voltages (still in the analog domain) that can expose the possible faults.

    After extending the simulation into a two-cycle version looking for the small-delay defects, the analog simulation is ready to move into the digital domain. The list of input combinations is converted into necessary digital input values to expose each fault within each cell. With this cell-aware fault model, now ATPG can go after the gate-level netlist using the input combinations, creating a much more efficient set of patterns to expose the faults.

    A big advantage of the cell-aware test approach is its fidelity to a process node. Once the analog analysis is applied to all cells in a technology library for a node, it becomes golden – designs using that node use the same cells, so the fault models are reusable.

But how good are the patterns? Pateras offers a couple of pieces of analysis addressing the question. The first is a bit trivial: cell-aware patterns cover defects better in every cell, by definition from the targeted analysis. The second observation is a bit more surprising: for a given cell, cell-aware analysis actually produces more patterns in many cases compared to stuck-at or transition strategies, but detects more defects per cell.

Play that back again: the cell-aware test approach is not about reducing pattern count, but rather about finding more defects. Pateras presents data for cell-aware test showing that, on average, the same number of patterns shakes out 2.5% more defects, and a 70% increase in patterns catches 5% more defects. Those gains are significant considering those defects are otherwise either blithely passing through device-level test or requiring more expensive screening steps such as performance margining or system-level testing to be exposed.

    The last section of the paper looks at the new frontier of FinFETs, with a whole new set of defect mechanisms. Cell-aware test applies directly, able to model the physical properties of a process node and its defects. A short discussion of leakage and drive-strength defects explains how the analog simulation can handle these and any other defects that are uncovered as processes continue to evolve.

    For finding more defects, cell-aware test shows significant promise.

    More articles by Don Dingee…





    Smart Watch, Phone, Phablet, Tablet, Thin Notebook…?
    by Pawan Fangaria on 12-26-2013 at 12:30 pm

There are more, but wait a moment: from this set, which ones do you need? Or let me ask the question differently (I know you may like something impulsively and have the money to buy it): which ones do you want to buy and own? Still confused? I guess what you need, you already have, but you want to change it for something new and different. While I will talk about some more functionally different, fancy wearable things later, let me analyse these to help you make a rational decision.

When I first saw a SmartWatch advertisement, all these devices flashed through my mind, and I had been thinking of writing an article on this topic from the user’s point of view. After reading Beth Martin’s article, “The hottest real estate? Your wrist!”, I couldn’t wait any more.

So, what’s there in a SmartWatch? It has most of what you have in your SmartPhone, or less than that. But because it’s wearable, you don’t have to carry it in your pocket. Carrying something is a problem! Yes, that’s the problem with the laptop too; one has to carry it in a separate case. And so most of the laptop’s functionality started moving into the SmartPhone (so that you don’t have to carry a separate case), and it became smaller and smaller to fit into your pocket, until it was realized that we need a bigger screen for applications like reading or compiling a document. So, we now see a trend toward larger screens on SmartPhones!!


Where do we go from here, a Phablet, Tablet or Notebook again? Well, a Phablet is a negotiation between a phone and a tablet, not easily pocket-able but manageable. Talking about impulse buying, Tablets came in as a huge success with fantastic customer demand, as they were easy to handle with a touch keyboard. They were a fancy among school-going kids, and I guess they still are. But that market for Tablets is vanishing, taken away by Phablets and now by thin, sleek Notebooks.

With the larger screen of a Notebook, one has the great convenience of working as if in an office, and that cannot be ruled out. The negatives are mostly being eliminated by the emergence of thinner and lighter Notebooks with better screens, hybrid form factors, better ergonomics and longer battery life. That also removes another inconvenience: carrying a Bluetooth keyboard along with a Tablet.

Coming back to the question: which devices will we carry on our body, in our pocket or in our bag? My opinion: the new ultra-thin Notebooks (like the MacBook Air and Lenovo Yoga) will make a comeback in our bags, and SmartPhones will remain evergreen in our pockets. The SmartWatch and other wearable devices will not be mainstream products; they need a careful approach to target the appropriate segment and position the smart product smartly for that segment. It’s okay for them to be geek devices, for that matter.

For a SmartWatch to position itself well, it must have important features meaningful to the user, such as health monitoring, fitness tracking, built-in GPS, a battery life of about a month, wireless charging (better still, a self-charging solar cell) and other nifty features. Emulating a SmartPhone on that small screen may be a mistake. In age-old wrist watch positioning, the niche segment of Rolex and Rado still rules, and there are many players in the mainstream segment. And we have seen the death of digital watches, which had arrived with a big bang in the past.

FuelBand, Nike’s sporty brand, can be a strong contender for that real estate on your wrist, specifically for athletes and sports enthusiasts. Then there are other uncommon wearable devices such as the SmartBra (by Microsoft), clearly targeted towards women; the Sproutling Fitness Tracker for babies, clearly useful for mothers of small babies; and others like the SmartWig (by SONY), Google Glass, etc. Imagine a SmartWig that sits on your head containing various sensors to track your brain activity, blood pressure and temperature, a vibrator to signal a phone call, and then camera, speakers, GPS and so on. Are you ready to wear it all day long and expose your brain to all that radiation?

Time to think about what you want to wear, what you want to keep in your pocket and what you want to carry in your bag!! Eliminate one in favor of another if its functionality (as you need it) is completely overlapped by the other; that makes the choice easy!! Happy investing in your personal technology 🙂

    More Articles by Pawan Fangaria…..




    Highest Test Quality in Shortest Time – It’s Possible!
    by Pawan Fangaria on 12-26-2013 at 10:30 am

Traditionally, ATPG (Automatic Test Pattern Generation) and BIST (Built-In Self-Test) are the two approaches for testing the whole semiconductor design squeezed onto an IC; ATPG requires external test equipment and test vectors to test targeted faults, while the BIST circuit is implemented on chip along with the functional logic of the IC. As is evident from these “external and internal to the IC” approaches, both have unique advantages and limitations.

By using ATPG, any digital chip design can be tested with high quality, covering a wide variety of defects and fault models such as stuck-at, transition, bridge, path-delay, and small-delay. However, it requires a large number of test patterns to get high fault coverage, and fault coverage of 99% or more is the norm in today’s complex chip designs to sustain the required quality. Therefore, compressed test data is stored on the tester and applied to the chip’s scan chains, which sit between a de-compressor and a compactor, speeding up the test and requiring less tester memory.

On the other hand, logic BIST can be used in any environment (without the need for a tester) for designs that need re-test in the system. This proves very important for system self-test in critical applications such as satellite, automotive, or flight control systems. It’s a very practical approach for plug-and-play design support with short test time, provided the design is not pseudo-random pattern resistant. In the internal arrangement, a pseudo-random pattern generator (PRPG) generates test patterns which pass through the scan chains, and the responses are collected in a multiple input signature register (MISR), which determines pass or fail by comparison with the expected signature. Unlike ATPG, this approach has lower coverage of faults that need special targeting.
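For readers who have not seen this structure before, here is a toy model of that loop: an LFSR standing in for the PRPG, an arbitrary combinational function standing in for the scanned logic, and a MISR compacting the responses into a signature. The widths, polynomial taps, and pattern count are arbitrary illustration choices, not anything from Mentor’s Tessent implementation.

```python
# Toy logic BIST loop: an LFSR stands in for the PRPG, an arbitrary
# combinational function stands in for the scanned logic, and a MISR
# compacts the responses into a signature compared against a golden value.
# Widths, polynomial taps, and pattern count are arbitrary illustration
# choices, not anything from a production LBIST implementation.

WIDTH = 8
MASK = (1 << WIDTH) - 1
TAPS = (7, 5, 4, 3)          # bit positions XORed to form the feedback

def lfsr_step(state):
    fb = 0
    for t in TAPS:
        fb ^= (state >> t) & 1
    return ((state << 1) | fb) & MASK

def misr_step(signature, response):
    # A MISR is an LFSR that also XORs in the circuit response each cycle.
    return lfsr_step(signature) ^ (response & MASK)

def circuit_under_test(pattern):
    # Stand-in for the scanned logic: any deterministic function will do.
    return ((pattern & 0x0F) + (pattern >> 4)) ^ 0xA5

def run_lbist(n_patterns=1000, seed=0x5A):
    state, signature = seed, 0
    for _ in range(n_patterns):
        state = lfsr_step(state)                              # PRPG pattern
        signature = misr_step(signature, circuit_under_test(state))
    return signature

golden = run_lbist()
print(f"golden signature: 0x{golden:02X}")
print("self-test:", "pass" if run_lbist() == golden else "fail")
```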

Again, in ATPG, a flip-flop capturing a wrong value (due to a design issue) can be masked at a mild loss of fault coverage. But in logic BIST, the entire chain between the PRPG and MISR needs masking in such cases. Logic BIST also cannot tolerate any unknown value (from any non-scan instance), whereas ATPG can ignore an unknown value at the tester. That means a higher design impact for BIST insertion than for ATPG compression.


So what is the best strategy to attain maximum coverage in the shortest possible test time? As we can see, ATPG and BIST complement each other, and with both offering more common features these days, Mentor has exploited this in a hybrid approach that combines the logic from embedded compression ATPG and logic BIST (LBIST).


    [Hybrid shared test logic with top-level LBIST controller to manage clocking and sequencing of LBIST tests]

Most of the common logic of the two test methodologies is merged, saving area. Compression ATPG and LBIST are shared within each design block, and there is an additional LBIST controller at the top level. In this approach, ATPG needs to target only those faults that are not already detected by LBIST, saving test time significantly.

This hybrid approach, which can be used in top-down as well as bottom-up design flows, saves hardware cost as well as test time and provides the highest fault coverage. Also, in the bottom-up flow, Mentor’s Tessent TestKompress can re-use the patterns previously generated at the core level, saving test pattern generation time.

EDA vendors such as Mentor provide tools to re-use logic between embedded compression ATPG and LBIST. A whitepaper at Mentor’s website provides a great level of detail about ATPG, LBIST, and the hybrid test strategy. An interesting read!!

    More Articles by Pawan Fangaria…..




    A little FPGA-based prototyping takes the eXpress
    by Don Dingee on 12-26-2013 at 9:00 am

    Ever sat around waiting for a time slot on the one piece of big, powerful, expensive engineering equipment everyone in the building wants to use? It’s frustrating for engineers, and a project manager’s nightmare: a tool that can deliver big results, and a lot of schedule juggling.

    Continue reading “A little FPGA-based prototyping takes the eXpress”



    SLEC is Not LEC
    by Paul McLellan on 12-20-2013 at 3:00 pm

One of the questions that Calypto is asked all the time is: what is the difference between sequential logical equivalence checking (SLEC) and logical equivalence checking (LEC)?

LEC is the type of equivalence checking that has been around for 20 years, although, like all EDA technologies, it has gradually become more powerful. LEC is typically used to verify that a netlist implementation of a design corresponds to the RTL implementation (although more rarely it can be used for RTL-to-RTL and netlist-to-netlist verification). However, LEC suffers from a major restriction: the sequential behavior of the design must remain unchanged. Every register and every flop in one of the designs must correspond exactly to an equivalent one in the other. Tools vary as to how restrictive they are about whether the registers need to be named the same. And this is not quite true: there are a few simple transformations that RTL synthesis does that a typical LEC tool can handle, such as register retiming (whereby logic is moved across registers and might invert the register contents in a very predictable way).

    SLEC is used when the sequential behavior is not the same or when the high level description is completely untimed C++ or SystemC and so the sequential behavior is not fully described. The three most common cases where this can happen are:

• a tool such as Calypto’s PowerPro is used to do sequential power optimization. It does this by suppressing register transfers when the results are not going to be used, which completely changes the sequential behavior at the register level, although if everything is done correctly it should not change the behavior of the block at its outputs. SLEC is used to confirm that this is indeed the case (a toy illustration of this case follows the list)
• a high level synthesis (HLS) tool such as Catapult is used to transform a design from a C/C++ description to RTL. SLEC can check that the HLS tool created functionally correct RTL from the high level input
• a high level C/C++ description of the design is automatically or manually transformed into another C/C++ description (perhaps to make it synthesize better), and SLEC can be used to ensure this transformation did not introduce any errors
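The first case is the easiest to picture. Below is a toy simulation-based illustration (not how Calypto’s SLEC actually works, which is formal rather than simulation-based): a reference model that registers its input every cycle and an “optimized” model that suppresses the register update when the value will not be used. Their internal register states diverge, yet their outputs agree on every cycle, which is exactly the kind of equivalence SLEC must establish.

```python
# Toy illustration of the first case above: a reference model that registers
# its input every cycle, and an "optimized" model that suppresses the register
# update whenever the value will not be consumed (in the spirit of sequential
# power optimization). The internal register states diverge, yet the outputs
# agree on every cycle. A real SLEC tool proves this formally; here we only
# cross-check by random simulation, purely for intuition.
import random

class Reference:
    def __init__(self):
        self.reg = 0
    def step(self, data, use):
        self.reg = data                  # register transfer every cycle
        return self.reg if use else 0

class Optimized:
    def __init__(self):
        self.reg = 0
    def step(self, data, use):
        if use:                          # transfer only when the value is used
            self.reg = data
        return self.reg if use else 0

ref, opt = Reference(), Optimized()
rng = random.Random(1)
for cycle in range(10_000):
    data, use = rng.randrange(256), rng.random() < 0.5
    assert ref.step(data, use) == opt.step(data, use), f"mismatch at cycle {cycle}"
print("outputs equivalent over 10,000 random cycles (register states often differ)")
```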

So stepping back to the 50,000-foot level, LEC is used to check that logic synthesis tools have done their job correctly (and these days logic synthesis is buried inside place and route, so it is not just the pure front-end tools that are involved). It can also be used to check that manual changes made at that level (such as hand-optimizing some gates) have been made correctly.

    SLEC is used to check that high level synthesis tools have done their job correctly or that other tools such as PowerPro that make sequential changes have done their job correctly.

Combining both technologies gives you complete end-to-end verification, from the high-level description through the various tools that change the design, all the way down to the netlist.


    More articles by Paul McLellan…