I have started creating a UVM environment to verify a counter/timer that generates different interrupts upon reaching certain count values, times, etc.
a. So I created a simple APB driver for writing/reading, i.e. configuring, the counter registers.
b. Along with this I have a UVM register model.
What would be a good approach for a self-checking environment? Can I create some model within the testbench which predicts the proper count operations and generation of interrupts, and compare its results with the RTL output?
Or what would be a better checking mechanism that I could implement within the environment? Do send in your suggestions!
You need a small model - code a scoreboard containing a behavioral model of your counter/timer [make it accurate enough for your spec and test plan but not too complicated].
It would watch config register changes [triggered from the ‘bus’ end of the register model where they actually take effect, not from the ‘API’ end]
It would watch the clock source to the counter [via a small interface/monitor if anything other than the bus clock. Consider whether you only need the simple case where the clock is running all the time at the same period or the more complex case where you need to count each clock edge.]
It would watch the interrupt output via a small interface/monitor.
It would predict how many clock cycles remain before the interrupt would be expected, update that prediction with each config register change or clock edge observed, and do 2 checks: (1) triggered by the interrupt - did the interrupt occur too soon → fail, and (2) triggered by its internal prediction - did too many clock cycles pass without an interrupt yet → fail.
That’s the checking side; the rest is stimulus (registers and clocking, and a higher level reflecting the use model: setup, wait for interrupt, clear interrupt, etc.).
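A rough sketch of that structure follows; the transaction classes, field meanings, and the clock_tick hook are assumptions to illustrate the shape, not a definitive implementation.

    // Very rough sketch of the scoreboard structure described above.
    `include "uvm_macros.svh"
    import uvm_pkg::*;

    class timer_cfg_txn extends uvm_sequence_item;  // observed at the 'bus' end
      rand bit [31:0] addr, data;
      `uvm_object_utils(timer_cfg_txn)
      function new(string name = "timer_cfg_txn"); super.new(name); endfunction
    endclass

    class irq_txn extends uvm_sequence_item;        // one per observed interrupt
      `uvm_object_utils(irq_txn)
      function new(string name = "irq_txn"); super.new(name); endfunction
    endclass

    `uvm_analysis_imp_decl(_cfg)
    `uvm_analysis_imp_decl(_irq)

    class timer_scoreboard extends uvm_scoreboard;
      `uvm_component_utils(timer_scoreboard)

      uvm_analysis_imp_cfg #(timer_cfg_txn, timer_scoreboard) cfg_export;
      uvm_analysis_imp_irq #(irq_txn,       timer_scoreboard) irq_export;

      // Behavioral-model state: cycles left until the interrupt is expected.
      int unsigned cycles_remaining;
      bit          irq_expected;

      function new(string name, uvm_component parent);
        super.new(name, parent);
        cfg_export = new("cfg_export", this);
        irq_export = new("irq_export", this);
      endfunction

      // Config register change seen at the bus end: recompute the prediction.
      function void write_cfg(timer_cfg_txn t);
        // decode load/period/enable writes here (simplified placeholder)
        irq_expected = 1;
      endfunction

      // Check 1, triggered by the interrupt: too soon -> fail.
      function void write_irq(irq_txn t);
        if (!irq_expected || cycles_remaining > 0)
          `uvm_error("SB", "Interrupt occurred earlier than predicted")
        irq_expected = 0;
      endfunction

      // Check 2, triggered by the prediction (call this on every timer clock
      // edge, e.g. from the clock monitor): too late -> fail.
      function void clock_tick();
        if (irq_expected) begin
          if (cycles_remaining == 0)
            `uvm_error("SB", "Predicted interrupt did not occur in time")
          else
            cycles_remaining--;
        end
      endfunction
    endclass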
You can do a lot of checking with assertions in either the DUT, in a checker or module bound to the DUT, or in an SV interface.
This can alleviate the need for a scoreboard. Below is sample code with assertions for a simple counter with specific requirements.
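(The exact properties come from the requirements; the signal names and the single-cycle interrupt pulse in this sketch are assumptions.)

    // Sketch: assertions for a simple up-counter with enable and a terminal
    // count 'tc'; 'irq' is assumed to pulse for one cycle when 'tc' is hit.
    module counter_sva #(parameter int W = 16)
      (input logic         clk, rst_n, enable,
       input logic [W-1:0] count, tc,
       input logic         irq);

      default clocking cb @(posedge clk); endclocking
      default disable iff (!rst_n);

      // While enabled and below the terminal count, count increments by 1.
      ap_incr: assert property (enable && count != tc |=> count == $past(count) + 1'b1);

      // While disabled, the counter holds its value.
      ap_hold: assert property (!enable |=> count == $past(count));

      // The interrupt fires in the cycle after the terminal count is reached.
      ap_irq_set:   assert property (enable && count == tc |=> irq);

      // The interrupt never fires unless the terminal count was just reached.
      ap_irq_cause: assert property (irq |-> $past(enable && count == tc));
    endmodule

    // Typically bound to the DUT so the RTL stays untouched, e.g.:
    //   bind counter counter_sva #(.W(16)) u_counter_sva (.*);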
Also, see my White paper: “Using SVA for scoreboarding and TB designs”
Abstract
Though assertions are typically used for the verification of properties, they can be applied in many other verification applications. For example, in scoreboarding, functions can be called from within a sequence match item after the assertion reaches a desired point.
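For instance, a minimal sketch of that idea (the signal names, the latency window, and the a+b behavior are assumptions): one function pushes the expected result into a queue when a request is seen, and another pops and compares it when the response completes, both invoked from sequence match items.

    // Sketch only: assumes 'done' pulses exactly once per request.
    module req_rsp_sva_sb
      (input logic       clk, rst_n, req, done,
       input logic [7:0] a, b,
       input logic [8:0] result);

      logic [8:0] expected_q[$];

      function void push_expected(logic [7:0] ai, bi);
        expected_q.push_back({1'b0, ai} + bi);  // assumed behavior: result = a + b
      endfunction

      function void check_result(logic [8:0] r);
        logic [8:0] exp;
        if (expected_q.size() == 0) begin
          $error("done seen with no outstanding request");
          return;
        end
        exp = expected_q.pop_front();
        if (r !== exp) $error("result %0d != expected %0d", r, exp);
      endfunction

      // The (expression, function_call()) match items run the functions when
      // the corresponding point of the sequence matches.
      ap_sb: assert property (@(posedge clk) disable iff (!rst_n)
               (req, push_expected(a, b)) |-> ##[1:8] (done, check_result(result)));
    endmodule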
“It would watch config register changes [triggered from the ‘bus’ end of the register model where they actually take effect, not from the ‘API’ end]”
a. Is it a good practice to create the model within the scoreboard itself?
b. Or can I create a predictor model separately and feed its results to the scoreboard, so that the scoreboard gets results from both the predictor model and the DUT, and only the checking is done inside the scoreboard?
Either might be right; it depends on the DUT. I don’t believe there is any best practice on this choice of scoreboard design. In my experience with real-world designs it is rare to be able to separate the predictor and scoreboard comparison as in (b); that only seems to work for textbook examples. Most designs have complex temporal interactions and/or aggregation of data items from different sources, and the comparison needs to be done in the middle of that with a customized data structure, as in (a). Assertions can help - both low-level ones to save replicating cause and effect as procedural code, and high-level ones wielding rules on abstracted events above a layer of procedural model code.
Read more about various scoreboard design considerations in my DVCon 2012 paper on Scoreboard Architecture.
“Assertions can help - both low-level ones to save replicating cause and effect as procedural code, and high-level ones wielding rules on abstracted events above a layer of procedural model code.”
A scoreboard / predictor / assertions are all techniques to verify or check that the design meets the requirements. In effect, they are all “assertions” in the general sense, in that they assert/check that everything the DUT has experienced so far is indeed OK.
UVM makes a big hoopla about the scoreboard/monitor approach; but, as Gordon and I mentioned, an assertion language (e.g., SVA) is a methodology that is very efficient from a coding viewpoint because it skips the separate steps of monitoring and predicting the results. SVA does not work well in all cases, but for many cases it does a fairly good job, and it should not be discounted just because it is “NOT UVM”.
Another advantage of SVA is that it clarifies the requirements, something that should be done prior to and during the design process.
Ben Cohen SystemVerilog.us
a. Say, as you suggested, I have a reference model which does the processing and predicts the desired behavior of the DUT.
SCENARIO:
Say after process X, the DUT has updated its status register (an internal operation).
Seeing the register change, the testbench does a backdoor access and updates the register model to keep track of the DUT’s internal updates. So this internally changes the mirrored and desired values of the register model.
At the same time, the reference model tries to update the desired values of the register model. Can both happen at the same time and cause a problem? Or, because of simulation time semantics, will the reference model update the register model only after the DUT’s update has taken effect?
b. So is it better to have a separate register model for the reference model, so that the desired/correct values of the status flags etc. can be maintained and later compared with the RTL-tracking register model?
c. Or is it better that the reference model updates the desired value of the register model, which in turn can be compared with the mirrored value that the DUT has updated?
Kindly comment on the same and suggest some good coding practices for such scenarios! A rough fragment illustrating the two update paths is shown below.
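(The “STATUS” register name, expected_status, and the task wrapper are made up for illustration; this assumes import uvm_pkg::* and a compiled register model.)

    // Fragment illustrating the two update paths on the same register model.
    task automatic update_both_paths(uvm_reg_block  regmodel,
                                     uvm_reg_data_t expected_status);
      uvm_status_e status;
      uvm_reg      status_reg = regmodel.get_reg_by_name("STATUS");  // made-up name

      // Path 1: observe what the DUT actually did via a backdoor read and let
      // the register model update its mirrored value from the observed data.
      status_reg.mirror(status, UVM_NO_CHECK, UVM_BACKDOOR);

      // Path 2: the reference model records what the status *should* now be.
      // set() only touches the desired value, so the mirrored (observed) and
      // desired (expected) values can be compared later.
      status_reg.set(expected_status);
    endtask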
“Say after process X, the DUT has updated its status register (an internal operation).
Seeing the register change, the testbench does a backdoor access and updates the register model to keep track of the DUT’s internal updates. So this internally changes the mirrored and desired values of the register model.”
I may be wrong, but it seems odd to me that verification code would be built based on a DUT’s (possibly faulty) behavior. IMHO, a checker, however built, should be an independent and unbiased construct. In real life, they are called auditors.
“At the same time, the reference model tries to update the desired values of the register model. Can both happen at the same time and cause a problem? Or, because of simulation time semantics, will the reference model update the register model only after the DUT’s update has taken effect?”
The reference model is, and should be, a different way of arriving at the results. For example, if the DUT is a multiplier, the checker would expect the result, x cycles later, to be result = a * b. The checker would not be a copy of the RTL implementation of the multiplier.
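In SVA that requirement can be stated almost literally (a sketch; the 4-cycle latency and the port names are assumptions):

    // The checker restates the requirement, not the implementation: N cycles
    // after 'start', 'result' equals the product of the operands at 'start'.
    module mult_checker #(parameter int N = 4)
      (input logic        clk, rst_n, start,
       input logic [7:0]  a, b,
       input logic [15:0] result);

      ap_mult: assert property (@(posedge clk) disable iff (!rst_n)
                 start |-> ##N result == $past(a, N) * $past(b, N));
    endmodule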
“b. So is it better to have a separate register model for the reference model, so that the desired/correct values of the status flags etc. can be maintained and later compared with the RTL-tracking register model?”
YES, IMHO
“c. Or is it better that the reference model updates the desired value of the register model, which in turn can be compared with the mirrored value that the DUT has updated?”
As I said, the checker is a separate, independent entity. A monitor may gather observed DUT behavior and send (or write) the condensed results to the checker. For example:
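One possible shape of that flow (the interface, field, and class names here are assumptions): the monitor condenses the pin-level activity into a small item and writes it through an analysis port; the checker subscribes and applies its own, independently derived rules.

    // Sketch: a monitor that publishes one condensed item per observed interrupt.
    interface timer_if(input logic clk);
      logic enable, irq;
    endinterface

    `include "uvm_macros.svh"
    import uvm_pkg::*;

    class irq_event extends uvm_sequence_item;
      int unsigned cycles_since_enable;  // the condensed result the checker needs
      `uvm_object_utils(irq_event)
      function new(string name = "irq_event"); super.new(name); endfunction
    endclass

    class irq_monitor extends uvm_monitor;
      `uvm_component_utils(irq_monitor)
      uvm_analysis_port #(irq_event) ap;
      virtual timer_if vif;  // set via uvm_config_db in the enclosing env

      function new(string name, uvm_component parent);
        super.new(name, parent);
        ap = new("ap", this);
      endfunction

      task run_phase(uvm_phase phase);
        int unsigned count;
        forever begin
          @(posedge vif.clk);
          count = vif.enable ? count + 1 : 0;
          if (vif.irq) begin
            irq_event ev = irq_event::type_id::create("ev");
            ev.cycles_since_enable = count;
            ap.write(ev);  // the checker's write() does the comparison
          end
        end
      endtask
    endclass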