The UVM Cookbook shows coverage collectors getting items directly from a monitor/agent.
On the projects I've worked on, coverage integrity is critical: coverage is used to show what has been tested and proven to work, so careful steps are taken to avoid any possibility of false-positive coverage. For this reason I use coverage as "proof of checking," not "proof of randomization." Connecting a coverage collector directly to a monitor is essentially proof of randomization: it shows that the monitor observed the value, but not that the DUT handled that value properly. To prove that a covered value actually worked as expected, coverage should be sampled only on items that have passed scoreboarding/checking.
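As a rough sketch of what I mean (the names `bus_item`, `checked_ap`, and `compare_against_expected` are hypothetical, not from any standard library), the scoreboard can expose a second analysis port that broadcasts only items that passed comparison, and the coverage collector subscribes to that port instead of the monitor's:

```systemverilog
import uvm_pkg::*;
`include "uvm_macros.svh"

// Hypothetical transaction type, just for illustration.
class bus_item extends uvm_sequence_item;
  `uvm_object_utils(bus_item)
  rand bit [7:0] addr;
  rand bit [7:0] data;
  function new(string name = "bus_item");
    super.new(name);
  endfunction
endclass

class bus_scoreboard extends uvm_scoreboard;
  `uvm_component_utils(bus_scoreboard)

  // Receives every item the monitor observed.
  uvm_analysis_imp #(bus_item, bus_scoreboard) observed_imp;
  // Broadcasts only items that passed checking.
  uvm_analysis_port #(bus_item) checked_ap;

  function new(string name, uvm_component parent);
    super.new(name, parent);
    observed_imp = new("observed_imp", this);
    checked_ap   = new("checked_ap", this);
  endfunction

  // Placeholder for the real compare-against-reference-model step.
  virtual function bit compare_against_expected(bus_item item);
    return 1; // assume pass, for this sketch only
  endfunction

  function void write(bus_item item);
    if (compare_against_expected(item))
      checked_ap.write(item); // coverage is scored only now
    else
      `uvm_error("SCB", "Mismatch; item not forwarded for coverage")
  endfunction
endclass
```

In the env's connect_phase, the coverage subscriber then hangs off `scoreboard.checked_ap` rather than the monitor's analysis port, so an item that fails checking can never contribute to the coverage database.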
Sure, some people argue that if things don't check out, the test will fail and the coverage will never be merged, but there are plenty of ways that assumption can break down. For example, sometimes people:
- forget to implement checks for certain behaviors
- comment out certain checks and forget to un-comment them
- demote errors to warnings early in development to get things passing, then forget to change them back
- waive errors that they shouldn't
- finish tests with observed values that haven't been compared/checked yet (but have already been scored for coverage)
- turn off checking altogether
What does the Verification Academy community think about this? Do others think it's okay to score coverage coming directly out of a monitor?