Hello:
I need to test a DUT that processes some information and returns the results serially.
So a typical test for this DUT, in my UVM framework, looks like this:
- Use interface A to configure the DUT by writing to its register bank (done through a driver that uses SeqItemA).
- Generate a sequence item for its input data, using interface B (here the item type for the driver would be SeqItemB).
- Use interface C to start the process. (SeqItemC).
- “Wait” for the data to be output serially so it can be compared to the result generated by the predictor. The data here is on interface D, and its monitor gathers it into a SeqItemD.
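To make the flow concrete, here is a rough sketch of the virtual sequence that drives steps A, B and C. The class, handle and field names (simple_test_vseq, sqr_a, and so on) are simplified placeholders, not my real code:

```systemverilog
// Rough sketch of the flow above; names are simplified placeholders.
class simple_test_vseq extends uvm_sequence #(uvm_sequence_item);
  `uvm_object_utils(simple_test_vseq)

  // Sequencer handles, assigned by the test before starting the sequence.
  uvm_sequencer #(SeqItemA) sqr_a;
  uvm_sequencer #(SeqItemB) sqr_b;
  uvm_sequencer #(SeqItemC) sqr_c;

  function new(string name = "simple_test_vseq");
    super.new(name);
  endfunction

  task body();
    SeqItemA cfg  = SeqItemA::type_id::create("cfg");
    SeqItemB data = SeqItemB::type_id::create("data");
    SeqItemC go   = SeqItemC::type_id::create("go");

    // 1) Configure the register bank over interface A.
    start_item(cfg, -1, sqr_a);
    finish_item(cfg);
    // 2) Send the input data over interface B.
    start_item(data, -1, sqr_b);
    finish_item(data);
    // 3) Kick off processing over interface C.
    start_item(go, -1, sqr_c);
    finish_item(go);
    // body() now returns and the test's objection drops --
    // even though interface D is still shifting data out.
  endtask
endclass
```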
If we assume that this process is done only once (a single, simple test), each sequencer for items A, B and C will contain only one item of its type, and when all sequencers are finished, the simulation will end.
The problem is that the data still needs to be processed and then output: when all drivers are done, the DUT still requires more time to shift out all of its processed data. So the monitor for interface D never gets to complete an item, and the comparison is never carried out. The number of output bits varies widely with the configuration, anywhere from 60 bits to over 60,000.
So what I need is a way to end the simulation when the monitor for interface D has finished, and not before. I imagine this must be a rather common occurrence, but being new to UVM, I don’t know what good practice is in these scenarios.
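For concreteness, the kind of thing I have in mind is the interface-D monitor holding a run_phase objection until it has assembled the full output item, something like the sketch below. I don't know whether this is considered good practice, and the names (monitor_d, collect_serial_item) are placeholders:

```systemverilog
// What I imagine (unsure if this is good practice): the interface-D
// monitor keeps run_phase alive until the serial output is collected.
class monitor_d extends uvm_monitor;
  `uvm_component_utils(monitor_d)

  uvm_analysis_port #(SeqItemD) ap;

  function new(string name, uvm_component parent);
    super.new(name, parent);
    ap = new("ap", this);
  endfunction

  task run_phase(uvm_phase phase);
    SeqItemD item;
    phase.raise_objection(this, "waiting for serial output on interface D");
    collect_serial_item(item);  // placeholder: gathers 60 to 60000+ bits
    ap.write(item);             // hand the completed item to the scoreboard
    phase.drop_objection(this, "serial output collected");
  endtask
endclass
```

My worry with this is whether a monitor should ever raise objections at all, or whether that belongs in the test or scoreboard.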