Appropriately sampling DUT inputs for functional coverage

I’m looking for some input on proper coverage sampling - specifically, whether sampling on DUT inputs is allowable. There appear to be differing guidelines on the matter:

From Doulos, ‘Easier UVM Guidelines’ (https://www.doulos.com/knowhow/sysverilog/uvm/easier_uvm_guidelines/detail/#link413):
“Sample values within the DUT or at the outputs of the DUT. Do not sample the stimulus applied to the inputs of the DUT. Sample DUT registers when the register value is changed by the DUT, not when it is changed directly by the stimulus.”

From ASIC World (Functional Coverage Part-I):
“These set of coverage points are coded in class which is instantiated inside a input monitor or coverage class itself will have ability to sample the DUT input signals. This is perfect for having functional coverage on stimulus that DUT is being driven with,”

And from Mentor’s own ‘Coverage Cookbook’:
“Functional Coverage Validity
Functional coverage data is only valid when a check passes. If you are not using automatic tools to merge coverage from different simulations, then you should ensure that the coverage model is only sampled when a test passes. However, if you are using verification management tools to collect coverage from multiple simulations, then there is no need to do this since the coverage results from tests that fail would not be merged with the rest of the functional coverage data”

Is there a way to reconcile these into a better guideline (or set of guidelines)?

For example, consider the verification of a very simple DUT that is just a combinational logic circuit. A requirement could be that an output shall be equal to a Boolean expression of the inputs. Assuming we have a proper check, how should we sample coverage to make sure the DUT is exercised with inputs we deem interesting?
Would it be wrong to sample coverage from, say, a transaction item sent from a monitor on the DUT inputs (a sketch of what I mean follows below)?
If not, then when does it become wrong to do this?
What about if we intend to run multiple simulations and merge the results?
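
For concreteness, here is a minimal sketch of what I have in mind, assuming a UVM environment; all names here (comb_txn, comb_input_coverage, the a/b/c fields) are made up for illustration:

```systemverilog
import uvm_pkg::*;
`include "uvm_macros.svh"

// Hypothetical transaction published by the input monitor's analysis port.
class comb_txn extends uvm_sequence_item;
  rand bit a, b, c;   // DUT inputs as observed on the interface
  `uvm_object_utils(comb_txn)
  function new(string name = "comb_txn");
    super.new(name);
  endfunction
endclass

// Coverage subscriber connected to the input monitor's analysis port.
class comb_input_coverage extends uvm_subscriber #(comb_txn);
  `uvm_component_utils(comb_input_coverage)

  covergroup input_cg with function sample(bit a, bit b, bit c);
    cp_a : coverpoint a;
    cp_b : coverpoint b;
    cp_c : coverpoint c;
    x_abc: cross cp_a, cp_b, cp_c;  // the input combinations we deem interesting
  endgroup

  function new(string name, uvm_component parent);
    super.new(name, parent);
    input_cg = new();
  endfunction

  // Called once per transaction observed by the input monitor, so the
  // coverage reflects what the DUT was actually driven with.
  function void write(comb_txn t);
    input_cg.sample(t.a, t.b, t.c);
  endfunction
endclass
```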

Thanks!

In reply to bborden:

The Coverage Cookbook refers to coverage in general, which includes code coverage and assertion coverage. It only makes sense to merge coverage from tests that pass. I think what it is trying to say is that if you have a single simulation run that consists of many tests, you need a way to keep the coverage from the failing portion of that run out of the overall metrics.

I think what the Doulos guideline is trying to say is to sample the input stimulus downstream of the input ports. I don’t think that’s absolutely necessary, or at least there should be some combination of input and output coverage.
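
As one possible way to read "sample within the DUT" rather than at the stimulus, here is a rough sketch of a coverage module bound into the design; the module and signal names (my_dut, ctrl_we, ctrl_reg) are assumptions, and the point is only that the covergroup samples an internal register on cycles where the DUT itself updates it:

```systemverilog
// Small coverage module intended to be bound into the DUT.
module dut_reg_cov (
  input logic       clk,
  input logic       ctrl_we,   // write strobe driven by DUT logic
  input logic [7:0] ctrl_reg   // internal DUT register
);
  // Sample only on cycles where the DUT updates the register,
  // not when the stimulus happens to drive the inputs.
  covergroup ctrl_cg @(posedge clk iff ctrl_we);
    coverpoint ctrl_reg {
      bins zero   = {8'h00};
      bins all_on = {8'hFF};
      bins others = default;
    }
  endgroup

  ctrl_cg cg = new();
endmodule

// Bind the coverage module into the (hypothetical) DUT so it sees the
// internal signals directly.
bind my_dut dut_reg_cov u_reg_cov (
  .clk     (clk),
  .ctrl_we (ctrl_we),
  .ctrl_reg(ctrl_reg)
);
```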

In reply to dave_59:

Hi Dave, thanks for the reply

I think what you’ve said makes sense - we don’t want to collect coverage on a failing test.

Are there any good examples of this kind of guard? Is it best to hold off sampling coverage for, say, a transaction item until the test using that item is complete?

Thanks

In reply to bborden:

I rarely see people writing multiple tests in a single simulation. Conceptually, if any test fails because of a design bug, then the design has to change and all previously collected coverage could be considered invalid. But there needs to be some trade-off between those extremes.
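
As a rough illustration of the kind of guard asked about above (a sketch, not a definitive recipe), here is a scoreboard-style checker for the combinational example earlier in the thread that only samples its covergroup when the compare passes. All names, and the assumed requirement y == (a & b) | c, are hypothetical:

```systemverilog
import uvm_pkg::*;
`include "uvm_macros.svh"

// Hypothetical transaction carrying both the driven inputs and the
// observed output for the combinational example.
class comb_txn extends uvm_sequence_item;
  rand bit a, b, c;   // DUT inputs
  bit      y;         // observed DUT output
  `uvm_object_utils(comb_txn)
  function new(string name = "comb_txn");
    super.new(name);
  endfunction
endclass

// Scoreboard-style checker that samples coverage only on a passing check.
class comb_scoreboard extends uvm_subscriber #(comb_txn);
  `uvm_component_utils(comb_scoreboard)

  covergroup checked_cg with function sample(bit a, bit b, bit c);
    cp_a : coverpoint a;
    cp_b : coverpoint b;
    cp_c : coverpoint c;
    x_abc: cross cp_a, cp_b, cp_c;  // counted only when the check passed
  endgroup

  function new(string name, uvm_component parent);
    super.new(name, parent);
    checked_cg = new();
  endfunction

  // Reference model for the assumed requirement: y == (a & b) | c
  function bit expected(comb_txn t);
    return (t.a & t.b) | t.c;
  endfunction

  function void write(comb_txn t);
    if (t.y == expected(t))
      checked_cg.sample(t.a, t.b, t.c);  // coverage counts only on a pass
    else
      `uvm_error("COMB_SB", "output mismatch - coverage not sampled")
  endfunction
endclass
```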