
Spec + RTL + Coverage = Success

joconnor (Guest)
I recently listened in on a Webcast where the Wilson Research Group and Mentor Graphics 2012 Functional Verification Study was presented. You can read about it here. The slides show that the percentage of non-FPGA respins caused by logic or functional flaws was ≈55% in 2004 and ≈48% in 2012. That’s only a seven-point improvement in eight years! They also show that only 25% of development teams use functional coverage as sign-off criteria. Why such a meager following?

Shortly into my management career, I was working at an early-stage chip start-up; we were executing to an aggressive schedule and taking many shortcuts. When we entered the system-level verification stage, progress slowed. We were finding more bugs than expected and spending too much time and effort debugging simple mistakes in the design. Sound familiar?

We made the choice to stop and finish writing the module-level specifications, on the promise that the project would take less time overall if we committed to this path. We reviewed each module’s interface and functionality and made sure they were aligned and coded to the same rules. We then redid the module-level verification against the updated specs. When that work was completed, we restarted the system-level verification and, after weeding out the remaining minor problems, spent a couple of months running pseudo-random verification before we found another bug. We kept validating through the fab cycle with no new bugs found.

It didn’t take long to find “The Bug” once the chip was soldered down, and it took even less time to isolate the problem: a path that should have been sequential was coded as combinational logic, making one of the features unusable. After all our planning, time, and effort, how could we have missed it?
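Looking back, this is exactly the kind of bug a single assertion catches. Here is a minimal sketch in today’s SystemVerilog (which we didn’t have then; clk, rst_n, d, and q are hypothetical names): if the spec says the path is sequential, the output must always equal the previous cycle’s input.

```systemverilog
// Hypothetical sketch: the spec calls for a sequential (registered)
// path from d to q. If the path is accidentally coded as combinational
// logic, q tracks d in the same cycle, and this assertion fires as
// soon as d changes value.
module seq_path_check (input logic clk, rst_n, d, q);
  a_registered : assert property (
    @(posedge clk) disable iff (!rst_n)
      q == $past(d)
  ) else $error("q did not match $past(d): path may be combinational");
endmodule
```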

Over the next 16 months, we taped out additional designs, but we changed our release criteria based on what we learned from the first chip. We added three requirements:

  • Functional coverage at the module level
  • Functional coverage at the system level
  • Performance measurement of every test at the system level (a sketch of this follows below)
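Here is a rough sketch of what that third item can look like, in today’s SystemVerilog rather than the “C” we used; the start/done handshake and the cycle budget are made up for illustration:

```systemverilog
// Hypothetical sketch: per-transaction latency check at the system
// level. 'start' and 'done' bracket one transaction; the budget
// MAX_LATENCY_CYCLES stands in for a number taken from the spec.
module perf_monitor #(parameter int unsigned MAX_LATENCY_CYCLES = 16)
  (input logic clk, rst_n, start, done);

  int unsigned cycles;
  bit          busy;

  always_ff @(posedge clk or negedge rst_n) begin
    if (!rst_n) begin
      busy   <= 1'b0;
      cycles <= 0;
    end else if (start) begin
      busy   <= 1'b1;   // a transaction has begun
      cycles <= 1;
    end else if (busy) begin
      cycles <= cycles + 1;
      if (done) begin
        busy <= 1'b0;   // transaction complete: check it against spec
        if (cycles > MAX_LATENCY_CYCLES)
          $error("Transaction took %0d cycles; spec allows %0d",
                 cycles, MAX_LATENCY_CYCLES);
      end
    end
  end
endmodule
```

Because the monitor runs in every test, a performance regression shows up the moment it is introduced, not in the lab.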

Our architecture documents had been written with flow charts to describe the functionality. Our chip specifications either used those same diagrams or added new ones to match the intended implementation. One of the designers realized we could extract the functional coverage directly from these diagrams. We added functional coverage both in the “C” behavioral model and from the flowcharts, but only for the modules that had changed; adding coverage to modules already validated didn’t make sense. We wrote “C” code to extract the chip’s performance during each test and verified it was within spec. We ran simulations until we achieved full coverage. The chip tape-outs were successes: there were architectural and layout issues, but we found no functional or performance problems.
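In today’s SystemVerilog terms (the project itself predated the language, and the state names here are hypothetical), extracting coverage from a flowchart amounts to giving every box a state bin and every arrow a transition bin:

```systemverilog
// Hypothetical sketch: each box in a spec flowchart becomes a state
// bin and each arrow a transition bin, so "full coverage" means every
// documented path through the diagram has been exercised.
typedef enum logic [1:0] {IDLE, FETCH, EXEC, WRITEBACK} state_e;

module flow_cov (input logic clk, input state_e state);
  covergroup cg_flowchart @(posedge clk);
    cp_state : coverpoint state {
      // One bin per flowchart box
      bins boxes[] = {IDLE, FETCH, EXEC, WRITEBACK};
      // One bin per flowchart arrow (the documented transitions)
      bins arrows[] = (IDLE => FETCH), (FETCH => EXEC),
                      (EXEC => WRITEBACK), (WRITEBACK => IDLE);
      // Any transition the spec does not draw is a bug, not a hole
      illegal_bins undocumented = default sequence;
    }
  endgroup

  cg_flowchart cg = new();
endmodule
```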

The timeframe above was over a decade ago. We didn’t have SystemVerilog or uVM. Assertion-Based Verification wasn’t even a buzzword. Formal verification was still in its infancy. Solid Oak Technologies? Not even envisioned yet. And yet, we used functional coverage as our sign-off criteria.

What we learned was to follow these three steps for success:
  1. Write a complete and detailed specification – Don’t take shortcuts with your specifications. As the experience above shows, until you write the complete specification you don’t really know what to create or how it is intended to work. Write the details in such a way that extracting the intent is easy.
  2. Code RTL from the specification and follow the KISS principle – I can’t tell you how many times I’ve reverse-engineered code into flowcharts and the designer doesn’t recognize the logic because it has been modified so many times from the original design intent. Too often, coding is started before the spec is completed or before the design intent is fully understood. The end result is band-aided code that is neither simple nor understandable, and no longer matches the original intent.
  3. Extract the functional coverage from the specification, not the RTL – If it’s not in the specification, why code it or waste time testing it? Today’s design complexities require RTL to be created in manageable modules, i.e. no one codes a whole chip in a single file. Write specs at the module level and extract the design intent from them (a small example follows below).
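For example (with made-up names), a single spec sentence such as “a retry must be issued within eight cycles of a NAK” maps directly onto a coverage point, with no reference to how the RTL implements it:

```systemverilog
// Hypothetical sketch: coverage written from a spec statement
// ("a retry must be issued within eight cycles of a NAK"),
// independent of the RTL's internal structure.
module spec_cov (input logic clk, rst_n, nak, retry);
  c_nak_then_retry : cover property (
    @(posedge clk) disable iff (!rst_n)
      nak ##[1:8] retry
  );
endmodule
```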


Here is an example design using functional coverage as the sign-off criteria. It contains an implementation of a SHA-2/256 hash generator, complete with the module-level specification, RTL, assertions, SystemVerilog and uVM test benches, the formal scripts for Mentor’s Questa® Formal, and a functional coverage report. All of this was generated from the diagrams in the specification using Solid Oak’s CoverAll™ software; only the test benches were modified to obtain the functional coverage.

Jim O’Connor – President, CEO and Founder of Solid Oak Technologies

Good learning

Hi

First of all, thanks for this article.

It was a really nice learning experience reading it. Being relatively new to this field, I have made most of the mistakes you point out here and, after facing several problems, came to the same conclusions you have posted; you have made it much more organized for a newbie like me. I would also like to ask: is the functional coverage you describe similar to testbench checking, or is it something different? I am mostly a VHDL (occasionally Verilog) designer. Can we perform functional coverage using VHDL as well (if it is something different from testbench checking)?

Could you point me to some learning links about that?
If only SystemVerilog can perform functional coverage (I think I read SystemC can also do it, but I don’t know that language), are there free simulators for SystemVerilog for students like us? ModelSim is highly priced, and at the system level we cannot use the free version, since it only supports 500 lines.

Bests,
Jaffry
 

Hi Jaffry,

I look at functional coverage in two ways: capturing the intent of the design, and capturing the external environment. The first belongs to the design engineer; who knows the intent better than the person who designed it? The second belongs to the verification engineer, who is responsible for modelling the external environment and covering all the possible test scenarios the design must respond to. The two engineers must work closely together to ensure it’s all done properly. Typically, a verification engineer will develop a test plan and execute it against the design. Once the test plan is exhausted, the design-intent coverage will show any test-coverage holes. The plan can then be augmented to cover the holes. This provides a clear indication of when the verification task is complete.
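As a rough sketch of the two views (all of the names here, fifo_level, pkt_t and so on, are made up just for illustration):

```systemverilog
// Hypothetical sketch of the two coverage views described above.
package cov_views;
  typedef enum {READ, WRITE, CTRL} kind_e;
  typedef struct { int len; kind_e kind; } pkt_t;
endpackage

module dual_cov #(parameter int DEPTH = 16)
  (input logic clk, input int fifo_level);
  import cov_views::*;

  // Design-intent coverage, owned by the designer: samples internal
  // behavior the spec promises (here, a FIFO reaching its corners).
  covergroup cg_intent @(posedge clk);
    cp_fifo : coverpoint fifo_level {
      bins empty = {0};
      bins full  = {DEPTH};
      bins mid   = {[1:DEPTH-1]};
    }
  endgroup

  // Environment coverage, owned by the verification engineer: samples
  // the stimulus space the design must respond to.
  covergroup cg_env with function sample (pkt_t pkt);
    cp_len  : coverpoint pkt.len  { bins short_p = {[1:64]};
                                    bins long_p  = {[65:1500]}; }
    cp_kind : coverpoint pkt.kind;
    x_mix   : cross cp_len, cp_kind;
  endgroup

  cg_intent ci = new();
  cg_env    ce = new();
endmodule
```

The testbench calls ce.sample(pkt) for each stimulus item, while cg_intent samples automatically on the clock; holes in either view tell each engineer where to look.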

Testbench checking is typically done by a scoreboard (uVM) or some other method, and usually involves a behavioral model. This is different from what I’ve described above: the scoreboard indicates whether the output stream of the design is correct (or incorrect) given the current state of the input stream.
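For contrast, here’s a minimal sketch of that style of checking without the uVM machinery (the type and names are made up; a real uVM scoreboard would receive items through analysis ports):

```systemverilog
// Hypothetical sketch of scoreboard-style checking: expected results
// from a behavioral/reference model are queued and compared in order
// against the DUT's output stream.
class scoreboard #(type T = int);
  T expected_q[$];   // filled by the reference model

  function void add_expected(T item);
    expected_q.push_back(item);
  endfunction

  // Called for each item the DUT actually produces
  function void check_actual(T item);
    T exp;
    if (expected_q.size() == 0) begin
      $error("Unexpected DUT output: %p", item);
      return;
    end
    exp = expected_q.pop_front();
    if (item !== exp)
      $error("Mismatch: expected %p, got %p", exp, item);
  endfunction
endclass
```

The reference model calls add_expected() for each stimulus item, and the monitor watching the DUT calls check_actual() for each observed result.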

As far as using VHDL for coverage, there’s a paper on the subject here. SystemC does have coverage capabilities. Solid Oak’s tools support the SVA, PSL, and OVL assertion languages.

I haven’t kept up with the free simulators, so I don’t currently know whether they support SystemVerilog, but a 500-line limit would make it tough to include functional coverage along with the design.

Jim
 