
How to Balance Directed and Constraint Random Testing

A critical aspect of any VIP (verification IP) is its test suite. Without the right types of tests, it is impossible to verify a design optimally. Care must be taken to ensure that the tests cover all the important areas of the design that are to be verified.

Let us start by asking: what is the difference between directed tests and constraint random tests?


Directed tests are simple tests in which a particular scenario is recreated for a known feature and the expectations are set accordingly. Constraint random tests can cover a large number of scenarios and/or multiple configurations. This is possible because the generated scenario is randomized under constraints that restrict parameters to values within the range defined by the specification.
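To make the contrast concrete, here is a minimal SystemVerilog sketch. The transaction class and its fields (`bus_txn`, `addr`, the `'h10..'h7F` legal range) are assumed for illustration only, not taken from any particular specification:

```systemverilog
// Hypothetical bus transaction, for illustration only.
class bus_txn;
  rand bit [7:0]  addr;
  rand bit [31:0] data;
  rand bit        write;

  // Constraint keeping the address inside an assumed specification range.
  constraint legal_c {
    addr inside {['h10 : 'h7F]};
  }
endclass

module tb;
  initial begin
    bus_txn t = new();

    // Directed stimulus: one known scenario, expectations fixed up front.
    t.addr = 'h20; t.data = 32'hDEAD_BEEF; t.write = 1;
    // drive(t); check_response(...);  // expected result is known exactly

    // Constraint random stimulus: the solver picks any legal combination,
    // so the checks must live in the environment, not in the test.
    repeat (10) begin
      if (!t.randomize()) $error("randomization failed");
      // drive(t);  // a scoreboard/monitor verifies the response
    end
  end
endmodule
```

The directed branch fixes every field, so the test itself can assert the expected outcome; the randomized branch only promises legality, so checking moves into the testbench.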


Note: While an IP designer would wish to verify all the features using constraint random tests, a verification engineer would want to use as few constraint random tests as possible.


What are the trade-offs of using directed tests versus constraint random tests?


With too many directed tests, you could end up missing some important permutations.

With too many constraint random tests, you could end up with increased complexity, which arises for three reasons:

  1. The verification environment must implement all checks implicitly, since expectations cannot be set within the tests, leading to increased testbench complexity.
  2. Uncertainty about which features are being tested and which scenarios are being covered leads to increased complexity in mapping a test to a feature.
  3. Errors encountered during test runs require considerable effort to isolate, leading to increased debug complexity.
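The first reason above is worth illustrating: with random stimulus, the expected result must be computed inside the environment. A minimal sketch, with all names (`bus_scoreboard`, `mem_model`) assumed for illustration:

```systemverilog
// Minimal scoreboard sketch: because stimulus is random, the environment
// keeps a reference model and predicts the expected result itself.
class bus_scoreboard;
  // Reference model: remembers the last data written to each address.
  bit [31:0] mem_model [bit [7:0]];

  function void check_write(bit [7:0] addr, bit [31:0] data);
    mem_model[addr] = data;           // record the write
  endfunction

  function void check_read(bit [7:0] addr, bit [31:0] data);
    if (mem_model.exists(addr) && mem_model[addr] !== data)
      $error("addr %0h: expected %0h, got %0h",
             addr, mem_model[addr], data);
  endfunction
endclass
```

Every check of this kind must be built once into the environment, which is exactly the testbench complexity the list describes.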

So, when does one use directed tests?


Use directed tests to check whether the features have been implemented correctly. Dedicate a directed case to each unique feature identified. Such tests give a clear picture of the various functionalities covered by a feature, along with a basic check on each of them. If some features are implemented incorrectly in the design, directed tests can catch them. This is especially handy during the initial stages of development, when developers want to exercise each feature separately.


And, when does one use constraint random tests?


Use constraint random tests for the relatively important features and related scenarios, to ensure that the design behaves as per the specification under all possible real-world scenarios and configurations. Given the complexity involved in constraint random tests, you must also identify where constraint randomization is not really necessary and a directed test is sufficient to verify the feature. For instance, features like a test mode, or unrecoverable timeouts requiring top-level application intervention, do not justify constraint random tests. These types of features can be verified with the help of a directed test.
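The split can be sketched in SystemVerilog. The packet class, its fields, and the 1..1500 length range are assumed examples, not from any specification in the article:

```systemverilog
// Illustrative only: 'pkt' and its fields are assumed.
class pkt;
  rand bit [10:0] length;         // assumed spec range: 1..1500
  rand bit [2:0]  priority_q;
  constraint spec_c { length inside {[1:1500]}; }
endclass

module tb;
  initial begin
    pkt p = new();

    // Mainline traffic: constraint random sweeps many legal combinations.
    repeat (100) begin
      if (!p.randomize()) $error("randomization failed");
      // drive(p);  // environment-side checks verify the response
    end

    // A one-off feature such as a test mode does not justify randomization;
    // a single directed stimulus is enough.
    p.length = 64; p.priority_q = 0;
    // enter_test_mode(); drive(p); check_test_mode_response();
  end
endmodule
```

The randomized loop buys breadth where the feature has many legal configurations; the directed tail handles the feature with essentially one interesting scenario.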


Thus, before implementing tests, you must dissect the design thoroughly and prioritize all features and related scenarios based on the criticality of each feature. Use directed tests to check that the features have been implemented correctly, and constraint random tests to check that each feature behaves as expected under multiple circumstances.


At Arrow Devices, we believe optimal verification can be accomplished by striking the right balance between constraint random and directed test cases.

Using this approach, you can combine the efficiency of constraint random tests with the transparency offered by directed tests. It ensures that all features are tested and that design operation in real-world scenarios is verified optimally. Thus, the concerns of both verification engineers and IP designers in meeting verification goals are addressed.

Contributors: Sudhanshu AR, Anand Shirahatti, Neha Mittal
 