Design for Verification on the block level

Amnon Parnass

Design for verification (DFV) is defined differently by EDA vendors and other stakeholders. The clearest definition comes from Synopsys: “the purpose of DFV is to leverage a designer’s intent and knowledge to strengthen the verification effort”. This definition calls for a clear articulation of the design intent by the designer. It further calls for “maximizing the correctness of functionality of individual modules” and for “Ensuring the correctness of integration of these modules”. While these requirements define DFV in a compelling way, their implementation in current design and verification flows does not produce the required result: a design that is clean and ready for integration.

The above definition requires some of the block verification work, such as assertion definition, to be done upfront by the designer at an early stage of the development process. Although this may sound simple and straightforward, such an effort seldom fits a chip project's schedule, and the extra work is hard to justify when the pressure is to finish the design stage so that other critical downstream work can start. The verification engineer given the task of checking the block is therefore overwhelmed with low-level bugs and fails to concentrate on the high-level block functionality and its integration at the chip level.
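As a concrete illustration of what upfront assertion definition by the designer might look like, here is a minimal hypothetical sketch in SystemVerilog; the signal names (req, busy, grant) and the one-cycle grant latency are assumptions made for this example, not taken from any particular design:

module req_grant_intent (
    input logic clk,
    input logic rst_n,
    input logic req,
    input logic busy,
    input logic grant
);
    // Design intent captured by the designer at design time, not
    // reconstructed later by the verification engineer: an unstalled
    // request must be granted exactly one cycle later.
    property p_grant_next_cycle;
        @(posedge clk) disable iff (!rst_n)
        (req && !busy) |=> grant;
    endproperty

    a_grant_next_cycle: assert property (p_grant_next_cycle)
        else $error("grant did not follow an unstalled req");
endmodule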

To address this prevalent scenario, the RMM (Reuse Methodology Manual) notes that “companies have set up internal development groups dedicated to developing reusable blocks”. This may be true for some of the large design IPs that are reused across multiple projects, but not for the low-level modules that drive much of the block-level verification effort.

The same principles used for large IP blocks, which are either developed by separate teams or bought from a third party, can be applied to low-level generic components. If those components abide by the design-for-verification requirements above, so that their correctness is maximized, their integration is simple, and their behavior is well defined, the benefit to the verification effort can be very significant.

An example block:

[Attachment 4018: example block diagram]

All the blocks in red could potentially be pre-designed and pre-verified, saving many test cases and enabling verification to concentrate on the central block, where the impact on the overall functionality of the device is most critical.

The question to ask is: why treat the large IP blocks differently from the low-level generic portions that may repeat many times in each device?
 
Amnon,

Good points.

Today's block is yesterday's SoC, so verification is clearly one of the critical tasks in all modern IC designs. I've heard estimates that on GPU and CPU designs there are 7-10 verification engineers per design engineer.

If you can define a design methodology where low-level blocks are pre-verified and you can easily re-use the verification, then let's hear about it.
 
Daniel,
Thanks for your comment.

What I am proposing is to use low-level components and functions the same way we use high-level IP.
For example, when designing an SoC, the architects of the device specify the main blocks and interfaces, such as USB, DDR, CPU and I2C, that can potentially be sourced from an IP vendor, as well as the blocks that are proprietary and require internal design.
In much the same way, when designing a block, generic components can be identified: modules like arbiters, FIFOs, memory structures and inter-block interfaces can be designed separately, pre-verified and put in a library for use by other blocks. Supplementing those blocks with assertions that check proper interface functionality, and with predefined coverage items, would also turn them into effective tools for debugging and verifying the surrounding logic (see the sketch below).
The advantages can be huge: designers concentrate on higher-level design, and verification engineers see fewer of the time-consuming low-level bugs, not to mention the advantages of a unified design structure where blocks around the chip use the same building blocks. Some handy tools for version control and watermarking, plus clear metrics for deciding which components to include, would help integrate this flow into a modern design environment.
I believe a library of such components should be developed over time and maintained vigorously; smaller teams could even outsource it.
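To make this concrete, here is a minimal sketch of what such a library FIFO might look like, with interface assertions and a predefined coverage item built in; the port names, parameters and implementation details are assumptions made for illustration, not a definitive library component:

module lib_sync_fifo #(
    parameter int WIDTH = 8,
    parameter int DEPTH = 16
) (
    input  logic             clk,
    input  logic             rst_n,
    input  logic             wr_en,
    input  logic [WIDTH-1:0] wr_data,
    input  logic             rd_en,
    output logic [WIDTH-1:0] rd_data,
    output logic             full,
    output logic             empty
);
    logic [WIDTH-1:0]         mem [DEPTH];
    logic [$clog2(DEPTH):0]   count;
    logic [$clog2(DEPTH)-1:0] wr_ptr, rd_ptr;

    assign full    = (count == DEPTH);
    assign empty   = (count == 0);
    assign rd_data = mem[rd_ptr];

    always_ff @(posedge clk or negedge rst_n) begin
        if (!rst_n) begin
            count  <= '0;
            wr_ptr <= '0;
            rd_ptr <= '0;
        end else begin
            if (wr_en && !full) begin
                mem[wr_ptr] <= wr_data;
                wr_ptr <= (wr_ptr == DEPTH-1) ? '0 : wr_ptr + 1'b1;
            end
            if (rd_en && !empty)
                rd_ptr <= (rd_ptr == DEPTH-1) ? '0 : rd_ptr + 1'b1;
            count <= count + (wr_en && !full) - (rd_en && !empty);
        end
    end

    // Interface assertions: once the block itself is pre-verified,
    // these mainly check the surrounding logic at integration time.
    a_no_write_when_full: assert property (
        @(posedge clk) disable iff (!rst_n) !(wr_en && full))
        else $error("write attempted while FIFO full");

    a_no_read_when_empty: assert property (
        @(posedge clk) disable iff (!rst_n) !(rd_en && empty))
        else $error("read attempted while FIFO empty");

    // Predefined coverage item: confirm the full condition is exercised.
    c_fifo_full: cover property (@(posedge clk) disable iff (!rst_n) full);
endmodule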

Thanks,
 
Amnon,

I like where you're heading with this.

Many years back, we took the approach of capturing design intent in the form of flow and timing diagrams and converting these diagrams to RTL and assertions manually. That methodology was very successful and resulted in a number of first-silicon tapeouts. As you stated, it took us a bit longer to tape out, but with first-pass success the overall project schedule was much shorter.

A couple of years ago, I started a company to automate the process, because as the designs got bigger, conversion by hand was tedious and error prone. As part of this process, we added the ability to create libraries of functions, captured as flow or timing diagrams, to be used in creating bigger modules. The idea is that once you have created and verified a FIFO, for example, all designers should use that same RTL code and assertions, i.e. the design intent. You can add the design assumptions as well: things like the FIFO can't be written when full, or, in the case of timing diagrams, the protocol assumptions and assertions (a sketch of this assumption/assertion split follows below). This way designers don't all have to spend time generating the same assertions for generic items like FIFOs, arbiters, etc., basically anything the design team uses more than once. Once done, you have a robust ABV model of your design suitable for verification.
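To sketch what that assumption/assertion split might look like in practice, here is a minimal hypothetical SVA checker; the signal names are illustrative, and in a formal tool an assume constrains the environment while an assert obligates the design:

// Hypothetical checker, attached to each FIFO instance (e.g. via bind).
module fifo_intent_checker (
    input logic clk,
    input logic rst_n,
    input logic wr_en,
    input logic rd_en,
    input logic full,
    input logic empty
);
    // Design assumption carried with the block: the environment
    // never writes when the FIFO is full.
    m_no_write_when_full: assume property (
        @(posedge clk) disable iff (!rst_n) full |-> !wr_en);

    // Design assertion: the FIFO must never report full and empty together.
    a_not_full_and_empty: assert property (
        @(posedge clk) disable iff (!rst_n) !(full && empty));
endmodule

Bound to every instance, a checker like this travels with the module, so the same captured intent is re-checked each time the block is reused.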

In addition, the higher-level blocks and one-of-a-kind functions are created with the same flow and timing diagrams, using the generic building blocks in order to leverage as much reuse as practical. When the higher-level modules are reused themselves, the ABV model represented by all the flow and timing diagrams goes with the module into the verification task. Since the RTL can be auto-generated from the flow diagrams, you basically get it for free. The real design effort goes into creating the architecture and capturing the diagrams and design assumptions. Finally, with a robust ABV model, bugs are found quickly, the verification metrics are readily available, and the verification task is bounded and measurable.

Whether you automate the task or do it manually, to get the most from this process you need to capture the full design intent, both functional intent and design assumptions.

Jim O'Connor
Solid Oak Technologies, LLC
 
Hi Jim,

It's good to know you see potential in improving the way we do block-level design. I looked through your website, and I like the approach of gathering the design intent, especially the option to reuse generic blocks that can be replicated from one design to the next.
The addition of assertions to the flow really improves it, as the assertions serve to check the surrounding design.
I followed a similar approach and started a website (see the link below) providing those low-level building blocks so designers can concentrate on the higher-level functionality of their blocks, as well as save time and effort; adding assertions was my next step. I am currently using some of the blocks very successfully in a new ASIC and can actually see people take the available building blocks, such as FIFOs, synchronizers and arbiters, into consideration when defining the micro-architecture, and ultimately get their design running in a much shorter cycle. People are actually instantiating the pre-designed blocks first and then filling in the rest of the logic in between (one such block is sketched below).
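As one example of the kind of building block such a library might hold, here is a minimal two-flop synchronizer sketch; the module and port names are assumptions for illustration, and a production library version would typically also carry synthesis attributes and usage assertions:

// Minimal library sketch: two-flop synchronizer for a single-bit
// signal crossing into the clk_dst domain.
module lib_sync_2ff (
    input  logic clk_dst,
    input  logic rst_n,
    input  logic d_async,   // driven from another clock domain
    output logic q_sync     // synchronized to clk_dst
);
    logic meta;  // first stage; may go metastable

    always_ff @(posedge clk_dst or negedge rst_n) begin
        if (!rst_n) begin
            meta   <= 1'b0;
            q_sync <= 1'b0;
        end else begin
            meta   <= d_async;
            q_sync <= meta;
        end
    end
endmodule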

In my view, every design team should have access to a library of pre-designed low-level building blocks. This way many bugs can be avoided, and the block architecture and performance requirements are not lost in the details.
 