The UVM Cookbook shows coverage collectors getting items directly from a monitor/agent.
On the projects I’ve worked on, coverage integrity is critical because coverage is what we use to show what has been tested and proven to work, so we take careful steps to avoid any possibility of false-positive coverage. For that reason I treat coverage as a “proof of checking,” not a “proof of randomization.” Connecting a coverage collector directly to a monitor is essentially proof of randomization: it shows that the monitor observed the value, but not that the DUT handled that value correctly. To prove that a covered value actually worked as expected, coverage should be sampled only on items that have passed scoreboarding/checking.
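For illustration, this is roughly the connection I’m questioning (all names here are made up, not from any particular library): the coverage subscriber hangs straight off the monitor’s analysis port, so every observed item is sampled whether or not it was ever checked.

```systemverilog
// Hypothetical env fragment: coverage connected directly to the monitor
class bus_env extends uvm_env;
  `uvm_component_utils(bus_env)

  bus_agent    agent;  // contains a monitor with an analysis port "ap"
  bus_coverage cov;    // a uvm_subscriber whose write() samples a covergroup

  function new(string name, uvm_component parent);
    super.new(name, parent);
  endfunction

  function void build_phase(uvm_phase phase);
    agent = bus_agent::type_id::create("agent", this);
    cov   = bus_coverage::type_id::create("cov", this);
  endfunction

  function void connect_phase(uvm_phase phase);
    // Every item the monitor observes is sampled for coverage,
    // regardless of whether it was ever checked against a prediction
    agent.monitor.ap.connect(cov.analysis_export);
  endfunction
endclass
```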
Sure, some people argue that if something doesn’t check out, the test will fail and its coverage will never be merged, but there are ways this can go wrong. For example, sometimes people:
- forget to implement checking of certain things
- comment out certain checks and forget to un-comment them
- demote errors to warnings early in development to get things passing, but forget to change them back
- waive errors that they shouldn’t
- finish tests with observed values that haven’t been compared/checked yet (but have already had coverage scored)
- turn off checking altogether
What does the Verification Academy community think about this? Do others think it’s okay to score coverage coming directly out of a monitor?
In reply to jeremy.ralph:
I agree with your position wholeheartedly.
What I am wondering is: what is the best way to structure the communication between the scoreboard and the coverage block? Given your point that data must be checked before it is covered, there would necessarily be numerous “sample()” calls to the coverage block scattered throughout the scoreboard code. Perhaps this is unavoidable, and maybe a generic function can be added to avoid substantial code duplication, but a new connection between the scoreboard and the coverage block would need to be added (instead of just between the monitor and its subscribers).
As an added bonus, putting the “sample()” calls in the scoreboard avoids duplicating the “checking” code you would otherwise need in the coverage block to ensure that sampling occurs only after the particular combination of events representing the covered feature has actually occurred. Since that checking code must already exist in the scoreboard, it makes no sense to duplicate it in the coverage block.
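To make that concrete, here is a rough sketch of what I have in mind (all names are hypothetical and the prediction/lookup details are omitted): the coverage block exposes a sample(item) entry point, and the scoreboard calls it only once a compare has passed.

```systemverilog
// Hypothetical coverage block with an explicit sample() entry point
class my_coverage extends uvm_component;
  `uvm_component_utils(my_coverage)

  my_item item;

  covergroup cg;
    cp_addr : coverpoint item.addr;
    cp_kind : coverpoint item.kind;
  endgroup

  function new(string name, uvm_component parent);
    super.new(name, parent);
    cg = new();
  endfunction

  function void sample(my_item t);
    item = t;
    cg.sample();
  endfunction
endclass

// Scoreboard fragment: coverage is scored only when the check passes
class my_scoreboard extends uvm_scoreboard;
  `uvm_component_utils(my_scoreboard)

  my_coverage cov;  // handle assigned by the env during connect_phase

  function new(string name, uvm_component parent);
    super.new(name, parent);
  endfunction

  function void check_and_cover(my_item observed, my_item expected);
    // relies on the item's compare()/do_compare() implementation
    if (!observed.compare(expected)) begin
      `uvm_error("SB", "observed item does not match expected")
      return;
    end
    cov.sample(observed);  // only checked items contribute to coverage
  endfunction
endclass
```

The “has the right combination of events actually happened” logic stays in the scoreboard, and the coverage block remains a thin sampling wrapper.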
In reply to astoddard:
Yes, what is the best way to structure things for proof of checking? That’s a good question.
I’ve known teams that use analysis ports to connect monitors to predictors, checkers, and coverage models in a very regimented way. For some this may be too rigid and carry too much overhead, but it does keep things clean and decoupled (which is good when there are multiple developers/owners). Some have also created tools to automate the related “switch-board” of analysis ports, connections, type-casting, and so on.
Under this scheme, transaction items from the monitors go into the predictor(s) and into the comparator’s observed port. Expected transaction items from the predictor go into the comparator’s expected port (and can carry extended fields for coverage if state derived during prediction is to be covered). Then, the check-passing expected items coming out of the comparator are passed via an analysis port to the coverage component(s) for coverage processing.
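A skeleton of the env-level wiring for that kind of arrangement might look something like this (the component and port names are placeholders, not anything from the standard library):

```systemverilog
// Hypothetical env: coverage fed only by check-passing items from the comparator
class checked_cov_env extends uvm_env;
  `uvm_component_utils(checked_cov_env)

  in_agent_t   in_agent;    // monitors the stimulus side
  out_agent_t  out_agent;   // monitors the response side
  predictor_t  predictor;   // turns observed stimulus into expected items
  comparator_t comparator;  // checks expected vs. observed
  coverage_t   cov;         // coverage model, subscribed to passing items

  function new(string name, uvm_component parent);
    super.new(name, parent);
  endfunction

  function void build_phase(uvm_phase phase);
    in_agent   = in_agent_t::type_id::create("in_agent", this);
    out_agent  = out_agent_t::type_id::create("out_agent", this);
    predictor  = predictor_t::type_id::create("predictor", this);
    comparator = comparator_t::type_id::create("comparator", this);
    cov        = coverage_t::type_id::create("cov", this);
  endfunction

  function void connect_phase(uvm_phase phase);
    // Stimulus-side monitor feeds the predictor
    in_agent.monitor.ap.connect(predictor.analysis_export);
    // Response-side monitor feeds the comparator's observed port
    out_agent.monitor.ap.connect(comparator.observed_export);
    // Predicted (expected) items feed the comparator's expected port
    predictor.expected_ap.connect(comparator.expected_export);
    // Only check-passing expected items are broadcast to the coverage model
    comparator.passed_ap.connect(cov.analysis_export);
  endfunction
endclass
```

With this wiring the coverage model never even sees an item unless the comparator has passed it, so a disabled or broken check shows up as missing coverage rather than as a false positive.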
The decoupling is also good for unit testing of the predictor, checker, and coverage components.