When running coverage regressions, some coverpoints almost never get hit, no matter how many seeds we run. This comes down to how our testbench weights certain constraints: some constrained values are hit 1000 times in a regression, while others are hit 0 times.
The issue is most prominent when we cross-cover two different constraints that each only run 10% of the time: some combinations of their values are not hit until we reach an insanely high number of seeds.
Is there a way to:
- Create a pool of constraint values that must be run?
- Set a fixed number of times that a constraint variant must be hit during a coverage regression? Once it has been hit that many times, it is removed from the pool of constraint variants that can be selected.
- Set a fixed distribution that must be adhered to? e.g. across 1000 tests, make sure each constraint value is hit a given percentage of the total; once a value reaches its percentage, it is removed from the pool.
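Conceptually, the pool I have in mind might look like this sketch. Every name here (`datatype_e`, `datatype_pool`, `quota`, `hit_count`, `get_legal`) is an assumption for illustration, not our actual testbench code:

```systemverilog
// Hypothetical "quota pool": tracks required vs. actual hits per value
// and exposes only the values whose quota has not yet been met.
typedef enum bit [1:0] {INT8, FLOAT16, INT4} datatype_e;

class datatype_pool;
  int quota     [datatype_e];  // required hits per value
  int hit_count [datatype_e];  // hits recorded so far

  function new();
    quota[INT8]    = 800;  // example quotas, assumed numbers
    quota[FLOAT16] = 100;
    quota[INT4]    = 100;
    foreach (quota[dt]) hit_count[dt] = 0;
  endfunction

  // Collect the values whose quota is not yet met
  function void get_legal(ref datatype_e legal[$]);
    legal.delete();
    foreach (quota[dt])
      if (hit_count[dt] < quota[dt])
        legal.push_back(dt);
  endfunction

  // Record a hit; a value drops out of get_legal() once its quota is met
  function void record(datatype_e dt);
    hit_count[dt]++;
  endfunction
endclass
```

A transaction could then be randomized with an inline constraint such as `randomize() with { datatype inside {legal}; }` so exhausted values are excluded.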
I will simplify our DUT for this example. Our testbench has an enum called datatype (this maps to 2-bit logic driven to the DUT interface).
Our testbench has a datatype constraint (datatype_c) with the following weights:
- INT8 - 70%
- FLOAT16 - 20%
- INT4 - 10%
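As a minimal sketch of this setup (the enum and class names are illustrative, not our actual code), datatype_c is a weighted `dist` constraint:

```systemverilog
typedef enum bit [1:0] {INT8, FLOAT16, INT4} datatype_e;

class datatype_txn;
  rand datatype_e datatype;

  // 70% INT8, 20% FLOAT16, 10% INT4
  constraint datatype_c {
    datatype dist { INT8 := 70, FLOAT16 := 20, INT4 := 10 };
  }
endclass
```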
The datatype_c constraint configures our testbench so that:
- We can configure our DUT to expect this datatype
- We can configure what data we will generate and drive to our dut
We have a test called datatype_test, and we want to run 1000 seeds of it.
We want INT4 and FLOAT16 to each be hit at least 100 times, i.e. 10% of the total tests each.
Once INT4 has been hit 100 times, could we remove it from the possible values datatype can be set to for the remaining tests?
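For a single simulation, I imagine something like the following sketch (all class and variable names are assumptions), but I am not sure how to extend it across 1000 separate seeds, since the hit counts would not persist between simulations:

```systemverilog
typedef enum bit [1:0] {INT8, FLOAT16, INT4} datatype_e;

class txn;
  rand datatype_e datatype;
  constraint datatype_c {
    datatype dist { INT8 := 70, FLOAT16 := 20, INT4 := 10 };
  }
endclass

module tb;
  initial begin
    txn t            = new();
    int int4_hits    = 0;
    bit int4_allowed = 1;

    repeat (1000) begin
      // Once INT4 has reached its quota, forbid it via an inline constraint;
      // the dist weights then apply only to the remaining legal values
      if (!t.randomize() with { !int4_allowed -> datatype != INT4; })
        $fatal(1, "randomize failed");
      if (t.datatype == INT4) begin
        int4_hits++;
        if (int4_hits >= 100) int4_allowed = 0;  // quota met: remove INT4
      end
    end
  end
endmodule
```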
How would I do this?