Verification Futures India 2013: Quick recap

Verification Futures started in 2011 in the UK and in 2013 touched ground in India too. It is a one-day conference organized by T&VS, providing a platform for users to share their verification challenges and for EDA vendors to respond with potential and upcoming solutions. The conference was held on 19 March in Bangalore and turned out to be a huge success. It is a unique event, extending an opportunity to meet the verification fraternity and collaborate on common challenges. I thank Mike Bartley for bringing it to India and highly recommend attending it.

The discussions covered a variety of topics in verification, and in one way or another all the challenges pointed back to the basic issue of ‘verification closure’. The market demands designs with more functionality, a smaller footprint, higher performance and lower power, delivered in a continuously shrinking time window. Every design experiences constant spec changes until the last minute, and every function of the ASIC design cycle is expected to respond promptly. With limited resources (in terms of both quantity and capability), the turnaround time for verification falls on the critical path. Multiple approaches surfaced during the discussions at the event, giving solution seekers enough food for thought and the community some hope. Some of them are summarized below.

Hitting the coverage goals is the end point for closure. However, the definition of these goals is bounded on one side by an individual’s ability to capture the design in a specification and on the other by the ability to converge that specification into a coverage model. Further, a disconnect with the software team aggravates this issue: the software may never exercise some capabilities of the hardware while hitting cases no one even imagined. HW-SW co-verification could be a potential solution to narrow down the ever-increasing verification space and to increase the useful coverage.
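
To make the notion of a coverage model concrete, here is a minimal sketch in plain Python (not from the talks; the transaction attributes and bin names are hypothetical) of the bookkeeping a functional coverage model performs. A real flow would express this as SystemVerilog covergroups, but the idea is the same.

    from itertools import product

    # Hypothetical coverage model: a cross of burst length and access type.
    BURST_LENS = (1, 4, 8, 16)
    ACCESS_TYPES = ("read", "write")

    class CoverageModel:
        def __init__(self):
            # One bin per (burst_len, access) pair; every bin starts unhit.
            self.bins = dict.fromkeys(product(BURST_LENS, ACCESS_TYPES), 0)

        def sample(self, burst_len, access):
            # Called once per observed transaction, e.g. from a bus monitor.
            if (burst_len, access) in self.bins:
                self.bins[(burst_len, access)] += 1

        def report(self):
            hit = sum(1 for n in self.bins.values() if n > 0)
            holes = [b for b, n in self.bins.items() if n == 0]
            print(f"coverage: {100.0 * hit / len(self.bins):.1f}%, holes: {holes}")

Sampling one such model from both the hardware testbench and traces of real software runs is one concrete way HW-SW co-verification can separate coverage that matters from bins the software will never reach.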

Verifying the design and delivering one that is an exact representation of the spec has been the responsibility of the verification team. Given that the problem compounds by the day, there may be a need to enable “Design for Verification”: designs that are correct by construction, are easier to debug and follow bug-avoidance strategies during development. EDA would need to enhance tool capabilities, and the design community would need to undergo a paradigm shift, to enable this.
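
As a loose illustration of bug avoidance built into the design itself, the sketch below (plain Python, purely illustrative) embeds invariant checks into a toy FIFO, the software analogue of the assertions a designer would embed in RTL for simulation and formal tools to check.

    class Fifo:
        """Toy FIFO with embedded invariants, in the spirit of RTL assertions."""

        def __init__(self, depth):
            assert depth > 0, "depth must be positive"
            self.depth = depth
            self.data = []

        def push(self, item):
            # Analogue of an overflow assertion on the write port.
            assert len(self.data) < self.depth, "push on a full FIFO"
            self.data.append(item)

        def pop(self):
            # Analogue of an underflow assertion on the read port.
            assert self.data, "pop on an empty FIFO"
            return self.data.pop(0)

Checks like these fire at the point of misuse instead of letting state corrupt silently, which is what makes such designs easier to debug.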

Constrained Random Verification has been adopted widely to hit corner cases and build high confidence in verification. However, this approach also leads to redundant stimulus generation, covering the same ground over and over again. This means that, even with grading in place, achieving 100% coverage is easier said than done. Deploying directed approaches (such as graph-based stimulus) or formal has its own set of challenges, so a combination of these approaches may be needed. Which flow suits which part of the design? Is 100% proof/coverage a ‘must’? Can we come up with objective ways of defining closure with a mixed bag? The answer lies in collaboration between the ecosystem partners, including EDA vendors, IP vendors, design service providers and product developers. The key would be to ‘learn from each other’s experiences’.
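
The redundancy of pure random stimulus is easy to quantify. The sketch below is a synthetic illustration (plain Python; the bin count and uniform-random model are assumptions, not data from the event): if each stimulus lands in one random coverage bin, closing the last few bins costs far more stimuli than the first many.

    import random

    def random_coverage_run(num_bins=100, seed=1):
        """Count how many stimuli it takes to reach coverage milestones by chance."""
        random.seed(seed)
        hit, stimuli, milestones = set(), 0, {}
        while len(hit) < num_bins:
            stimuli += 1
            hit.add(random.randrange(num_bins))  # each stimulus lands in one random bin
            pct = 100 * len(hit) // num_bins
            if pct in (50, 90, 100) and pct not in milestones:
                milestones[pct] = stimuli
        return milestones

    # Typical shape: on the order of 70 stimuli for 50% coverage but 500 or more
    # for 100%; most late stimuli re-hit bins that are already covered.
    print(random_coverage_run())

Grading, directed or graph-based stimulus and formal all attack exactly that long tail, which is why the mixed-bag question above matters.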

If we cannot contain the problem, are there alternatives to manage the explosion? Is there a replacement for the CPU-based simulation approach? Can we avoid the constraint of limited CPU cycles during peak execution periods? Cloud-based solutions offering elasticity, hardware acceleration for increased velocity and GPU-based platforms for enhanced performance are some potential answers.
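
As a small sketch of the elasticity argument (hypothetical code, not any vendor’s flow), fanning a regression out over whatever compute is currently available looks roughly like this in Python:

    from concurrent.futures import ProcessPoolExecutor

    def run_sim(seed):
        # Placeholder: a real flow would launch one simulation with this seed
        # and return its pass/fail status and coverage database.
        return f"seed {seed}: passed"

    if __name__ == "__main__":
        # Elasticity: widen max_workers as more (cloud) machines become available,
        # instead of queueing behind a fixed on-site compute farm.
        with ProcessPoolExecutor(max_workers=8) as pool:
            for result in pool.map(run_sim, range(32)):
                print(result)

Emulation and GPU-based platforms attack the same bottleneck from the other side, by making each individual run faster rather than running more of them in parallel.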

The presentation from Mentor included a quote from Peter Drucker:
- What gets measured, gets done
- What gets measured, gets improved
- What gets measured, gets managed

While the context of the citation was coverage, it applies to all aspects of verification. To enable continual improvement, we need to think beyond the constraints, monitor beyond the signals and measure beyond coverage!
 