

# Tech Tip: Weighting Generation of "Extreme" Values

[Team Specman welcomes guest blogger Vitaly Lagoon, an Architect in the Generation Technology R&D group]

Consider the case where you have a generatable variable "x" taking random values from [0..99], i.e. all values are equally likely from the generation point of view.  However, imagine that from a verification perspective you are most interested in checking the extreme cases x==0 and x==99, as well as a few random values in between, for safety's sake.

One approach would be to capture this in a coverage definition with three buckets: [0], [1..98], [99].  However, running with this definition would give you really bad coverage of the extreme ends of the range, because the generator would hit the middle bucket roughly 98 times out of 100.

Fortunately, there is an easy and effective alternative.  First, define a generatable variable – let’s call it 'range_x' in this example:

```e
range_x : [min, max, others];
```

then constrain it using:

```e
keep (range_x == min) == (x == 0);
keep (range_x == max) == (x == 99);
// The third case ("others") is implied, so you don't need a constraint for it.
```
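Putting the pieces together, here is a minimal self-contained sketch (the struct name 'packet' and the exact range declaration of 'x' are illustrative assumptions, not from the original post):

```e
<'
struct packet {
    // The variable of interest, uniformly random over [0..99] by default.
    x : uint [0..99];

    // An auxiliary generatable field whose three values mirror the
    // three coverage buckets we care about.
    range_x : [min, max, others];

    // Tie each bucket value to the corresponding region of 'x'.
    keep (range_x == min) == (x == 0);
    keep (range_x == max) == (x == 99);
    // 'others' is implied: it holds exactly when x is in [1..98].
};
'>
```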

Now, running with coverage on 'range_x' you will see things converge a lot faster -- in only five generation cycles we would have an 87% chance of filling all three ‘range_x’ buckets.  And in 10 cycles, the chance of filling all three buckets is 98%!  Clearly, making 'range_x' generatable has the effect of uniformly directing the solver towards one of the areas of interest.

Of course this example can be extended and specialized by using 'select' on 'range_x', e.g. you can easily put more weight on some buckets.
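For instance, a soft select on 'range_x' could bias generation further toward the extremes (the weights below are made up for illustration):

```e
// Hypothetical weighting: each extreme gets twice the weight of the middle.
keep soft range_x == select {
    40 : min;
    40 : max;
    20 : others;
};
```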

The general point: when collecting coverage on generatable parameters, it makes sense to move the bucket (and also cross) definitions out of the coverage specification and into constraints.  Buckets defined in coverage only help you observe things, whereas the equivalent definition in constraints actually steers the generator towards the interesting stuff.
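Concretely, the coverage definition then shrinks to a plain item with no bucket ranges, because the buckets already live in the constraints (the struct name 'packet' and event name 'done' are illustrative assumptions):

```e
<'
extend packet {
    event done;
    cover done is {
        // No 'ranges' option needed -- min/max/others are the buckets.
        item range_x;
    };
};
'>
```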

One final note: this example/capability is only available with IntelliGen.  With the legacy Pgen generator, the ordering of the fields affects the order of generation.  Thus, if you did not define "range_x" before "x" (or explicitly generate "range_x" before "x"), you would not see uniform generation on "range_x".

Happy generating!

Vitaly Lagoon
Architect
Generation Technology R&D

By Cedric Fau on May 21, 2009
Wouldn't it be simpler to constrain 'x' directly with a big weight on the extreme values?
```e
keep soft x == select {
    1 : 0;
    1 : [1..98];
    1 : 99;
};
```
In that case, you cover all the ranges with 3 different seeds.

By Vitaly Lagoon on May 22, 2009
To the previous comment:
Using 'select' works fine in the above, somewhat trivial example. The alternative I'm suggesting may work better in a more realistic setting. Here is why:
a) 'select' only works on individual variables, so 'cross'es cannot be done as easily.
b) when using 'select' on 'x' you have to duplicate the definition of the coverage buckets -- once in the cover group declaration and once in the select. That adds to the maintenance burden of the code. In my approach you define the buckets only once, as constraints, and then use plain and simple coverage of the 'range_x' variable.
c) my method allows constraining the model directly in terms of coverage. For example, in a specific test we may want to use 'keep range_x != max'.
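As a sketch of point (c), such a test-specific restriction could be a one-line extension (assuming the fields live in a struct named 'packet', an illustrative name):

```e
<'
extend packet {
    // In this test, never generate the high extreme x == 99.
    keep range_x != max;
};
'>
```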