Should all possible scenarios be included in the functional coverage?

Do we need to put all possible scenarios in the functional coverage?
For example, I have the following scenarios:

  1. Mode 0 write → reset → issue_any_cmd
  2. Mode 0 read → reset → issue_any_cmd
  3. Mode 1 write → reset → issue_any_cmd
  4. Mode 1 read → reset → issue_any_cmd

  5. Mode 5 write …
  6. Mode 5 read …

Now, to ensure that each of these scenarios has been tested, I can do one of two things. First, I can put all of them in the functional coverage model, write a single test that randomizes the generation of these scenarios, and run it with a large number of seeds so that every scenario is eventually generated. Second, I can leave them out of the functional coverage model and instead write a directed testcase for each scenario. That gives 10 testcases, which I then add to my regression test list and run. This way, even though I did not include the scenarios in the functional coverage, I am still sure they have been exercised.
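For illustration, the first option would look roughly like this as a coverage model. This is only a sketch; all names (mode_e, op_e, scenario_cg) are made up, and it assumes sampling happens only after the reset → issue_any_cmd tail has been observed:

```systemverilog
// Illustrative enum types for the scenarios listed above.
typedef enum bit [2:0] {MODE0, MODE1, MODE2, MODE3, MODE4, MODE5} mode_e;
typedef enum bit       {WRITE, READ} op_e;

// One cross bin per (mode, operation) scenario. sample() should be
// called only once the full "cmd -> reset -> issue_any_cmd" sequence
// has been seen, so a hit on the cross implies the whole scenario ran.
covergroup scenario_cg with function sample(mode_e mode, op_e op);
  cp_mode     : coverpoint mode;
  cp_op       : coverpoint op;
  cx_scenario : cross cp_mode, cp_op;
endgroup
```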

Which one do you think is the right way to do it? The first or the second?

I am thinking that the disadvantage of the second way is that I am not basing coverage on the actual signals that went into the DUT, only on the fact that the testcase was run. Running the testcase and declaring the scenario covered is not reliable, since the signals that actually entered the DUT may differ from what was intended, due to a possible bug in the drivers or in some other part of the testbench.
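What I mean is that the coverage would have to be sampled from a monitor that reconstructs transactions from the actual DUT interface, not from the sequence or driver that produced the stimulus. A rough UVM-style sketch of that, reusing the types from the covergroup above (the class names are again made up):

```systemverilog
import uvm_pkg::*;
`include "uvm_macros.svh"

// Illustrative transaction, rebuilt by a monitor from the DUT pins.
class cmd_item extends uvm_sequence_item;
  `uvm_object_utils(cmd_item)
  mode_e mode;
  op_e   op;
  function new(string name = "cmd_item");
    super.new(name);
  endfunction
endclass

// Coverage collector on the monitor's analysis path: the covergroup
// only samples commands that were really observed at the DUT, so a
// driver bug shows up as missing coverage instead of a false pass.
class cmd_cov_collector extends uvm_subscriber #(cmd_item);
  `uvm_component_utils(cmd_cov_collector)
  scenario_cg cg;
  function new(string name, uvm_component parent);
    super.new(name, parent);
    cg = new();
  endfunction
  // Called once per observed transaction by the monitor's analysis port.
  function void write(cmd_item t);
    cg.sample(t.mode, t.op);
  endfunction
endclass
```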

Am I correct?

Thank you.

Regards,
Reuben

In reply to Reuben:

At the end of the day you want to make sure that you exercised the required functionality by really running those scenarios. How you ensure that you did that shouldn’t really matter. I’ve heard a lot of dogmatic people preach that you need a cover point for everything, but in some situations this just isn’t practical.

You’re right to say that in the second case you’re not basing your conclusion on anything you observed from the DUT. A possible bug in the testbench causing the scenario not to be run would give you a false positive; but at the same time, coverage definitions, and especially coverage collection, can also be buggy, causing you to report more coverage than you’re actually driving into the design.
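To illustrate that second failure mode, here is a hypothetical coverage model that over-reports because it samples on every clock edge instead of on completed commands (it reuses the enum types from the sketch in the question):

```systemverilog
// Buggy coverage: unconditional sampling marks (mode, op) bins covered
// from whatever values sit on the signals between commands, so the
// report claims more than was actually driven into the design.
module buggy_cov_example (
  input logic  clk,
  input mode_e mode,
  input op_e   op
);
  covergroup buggy_cg @(posedge clk);  // BUG: no transaction qualifier
    cp_mode : coverpoint mode;
    cp_op   : coverpoint op;
    cx      : cross cp_mode, cp_op;
  endgroup
  buggy_cg cg = new();
  // A safer version would gate sampling on a command-accepted strobe,
  // e.g. @(posedge clk iff cmd_valid).
endmodule
```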