Randomization Fools Us Some of the Time
by Bernard Murphy on 10-13-2020 at 6:00 am

Though hopefully not some of us all of the time. Randomization is a technique used in verification to improve coverage in testing. You develop the tests you know you have to run, then throw randomization on top of that to search around those starter tests and explore possibilities you haven't considered. Truly random tests are not actually very useful; many won't represent realistic possibilities, wasting time and compute resources on useless verification. More useful is to constrain randomization in ways that should ensure the randomized tests you run are still meaningful. Unsurprisingly, this is known as constrained random testing, a mainstay in functional verification today. It's a low-effort way to increase coverage. Or is it? Constrained randomization fools us sometimes. Dave Rich at Mentor just released a white paper on that topic.
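To make the idea concrete, here is a minimal constrained-random sketch in SystemVerilog. The class, fields and constraints are my own illustration, not taken from the white paper.

class packet;
  rand bit [7:0] addr;
  rand bit [3:0] len;
  // The constraint keeps randomization meaningful: only word-aligned
  // addresses and non-zero lengths are generated.
  constraint legal_c {
    addr % 4 == 0;
    len inside {[1:15]};
  }
endclass

module tb;
  initial begin
    packet p = new();
    repeat (10) begin
      if (!p.randomize()) $error("randomization failed");
      $display("addr=%0d len=%0d", p.addr, p.len);
    end
  end
endmodule

Everything the constraints don't mention is left to the solver, which is where the extra coverage comes from.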

Mixing variable types

SystemVerilog is pretty easy-going about letting you mix types in expressions, a philosophy I assume it inherited from C. In SV you can specify word sizes even more finely than in C, but the same principle holds. From a few hundred feet up they're all just values: throw them together into a complex expression and let the compiler figure out the details. That's especially true in constraints, where we're not trying to synthesize hardware; we just want to calculate.

But the devil is in those details. Variables in a constraint sub-expression may need to be extended for correct evaluation, and expressions may overflow, with unexpected consequences. We're pretty careful about this kind of thing in synthesis, perhaps less so in constraints. Dave uses the example expression A+B>>C>D to illustrate. It's already ugly in relying on implicit operator precedence, but beyond that, in his example A is 3-bit, B and D are 4-bit, and C is an integer. The shift operation's sizing rules can truncate the value of A+B, so the comparison may not deliver what you expected. That is the first problem: an expression will evaluate the way the language reference manual says it should, which may not be the way you intended. There is no "do what I mean, not what I say" option in the language.
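A small self-contained sketch of the effect, using the article's widths but with values I picked to expose the problem; treat the exact numbers as illustrative.

module width_demo;
  bit [2:0] A;       // 3-bit
  bit [3:0] B, D;    // 4-bit
  int       C;       // 32-bit integer
  initial begin
    A = 3'd7; B = 4'd15; C = 0; D = 4'd10;
    // Precedence makes this ((A + B) >> C) > D. The widest operand in the
    // comparison is 4 bits, so A + B is evaluated in a 4-bit context and
    // 7 + 15 = 22 wraps to 6 before it is compared with D.
    $display("as written : %0d", A + B >> C > D);      // prints 0
    // Casting one operand up to 32 bits forces a wider context and gives
    // the comparison you probably intended.
    $display("with a cast: %0d", 32'(A) + B >> C > D); // prints 1
  end
endmodule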

Rich shares other examples, such as comparing signed and unsigned variables, where a signed value unintentionally overflows from a positive value to a negative one. Like most bugs, it's obvious when you see it, but easy to overlook when you forget one of the variables is signed.
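Something along those lines can be sketched in a few lines; the variables here are mine, not the paper's.

module sign_demo;
  byte      s;   // 8-bit signed
  bit [7:0] u;   // 8-bit unsigned
  initial begin
    s = 8'd127;
    s = s + 1;   // silently wraps from +127 to -128
    u = 8'd1;
    // With one unsigned operand the comparison is performed unsigned, so
    // the bit pattern of -128 is read as 128 and this branch is taken.
    if (s > u) $display("s=%0d compares as greater than u=%0d", s, u);
  end
endmodule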

Randomization adds more devilry

So far this is about being careful with calculations in SystemVerilog, a generally good practice whether or not you're using those expressions in constraints. However, it's one thing to carefully reason your way through each sub-expression when the values involved are ones that make sense to you. Though Dave doesn't mention this, I suspect there's an additional level of danger when those variables are randomized. Did you really reason your way through all the possible randomized values they could take on?
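As a rough illustration (my own sketch, not something from the white paper), a brute-force sweep of the article's expression with C fixed at 0 and D at 10 shows how many value combinations behave differently under 4-bit evaluation than under the wider evaluation you might have had in mind.

module count_solutions;
  int narrow_ok, wide_ok;
  initial begin
    for (int a = 0; a < 8; a++)          // every 3-bit value of A
      for (int b = 0; b < 16; b++) begin // every 4-bit value of B
        if ((a + b) % 16 > 10) narrow_ok++; // what the LRM rules compute
        if ((a + b) > 10)      wide_ok++;   // what was probably intended
      end
    // For these widths and values, only 40 of the 68 intended combinations
    // survive the truncation; a constraint written this way would silently
    // never generate the other 28.
    $display("narrow=%0d wide=%0d", narrow_ok, wide_ok);
  end
endmodule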

There's one other concern in any kind of random generation, especially constrained random: distribution. You want to avoid generated tests being heavily weighted toward certain variable values, with little testing of others. Constraints skew distributions; this is unavoidable, so you need to be able to control that skew. Dave gives some hints on how this can be managed.
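Here is a hedged sketch of both the skew and one knob for controlling it; the class, weights and values are illustrative rather than anything recommended in the paper.

class txn;
  rand bit [7:0] len;
  rand bit [7:0] burst;
  // The legality constraint alone skews len toward small values: len == 0
  // has seventeen legal burst partners while len == 16 has only one, and a
  // solver that distributes uniformly over solutions will follow that shape.
  constraint legal_c  { len + burst <= 16; }
  // A dist constraint is one way to pull weight back where you want it.
  constraint weight_c { len dist { 0 :/ 1, [1:14] :/ 4, [15:16] :/ 1 }; }
endclass

This is just one knob; see the white paper for Dave's own suggestions.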

Good white paper. You can read it in full HERE.

Also Read:

Siemens PAVE360 Stepping Up to Digital Twins

Verifying Warm Memory. Virtualizing to manage complexity

Trusted IoT Ecosystem for Security – Created by the GSA and Chaired by Mentor/Siemens
