How to keep a simulation from ending too soon when using the UVM framework

Hello:

I need to test a DUT that processes some information and returns the results serially.

So a typical test for this DUT, in my UVM framework, looks like this:

  1. Use interface A to configure the DUT by writing to its register bank (done through a driver that uses SeqItemA).
  2. Generate a sequence item for the input data, using interface B (here the driver's item type would be SeqItemB).
  3. Use interface C to start the processing (SeqItemC).
  4. “Wait” for the data to be serially output so it can be compared against the result generated by the predictor. The data here is on interface D, and its monitor gathers it into a SeqItemD.

If we assume that this process happens only once (a single, simple test), each of the sequencers for items A, B, and C will contain only one item of its type, and when all sequencers are finished, the simulation will end. A rough sketch of such a test is shown below.
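Purely for illustration, a single-shot test along these lines might look like the following. All class and path names here (my_test, my_env, cfg_seq, data_seq, start_seq, env.agent_a.sequencer, and so on) are hypothetical stand-ins for the real testbench; the point is where the objection drops.

```systemverilog
import uvm_pkg::*;
`include "uvm_macros.svh"

class my_test extends uvm_test;
  `uvm_component_utils(my_test)

  my_env env; // hypothetical environment containing agents for A, B, C, D

  function new(string name, uvm_component parent);
    super.new(name, parent);
  endfunction

  function void build_phase(uvm_phase phase);
    super.build_phase(phase);
    env = my_env::type_id::create("env", this);
  endfunction

  task run_phase(uvm_phase phase);
    cfg_seq   seq_a = cfg_seq::type_id::create("seq_a");
    data_seq  seq_b = data_seq::type_id::create("seq_b");
    start_seq seq_c = start_seq::type_id::create("seq_c");

    phase.raise_objection(this);
    seq_a.start(env.agent_a.sequencer); // 1. write the register bank (SeqItemA)
    seq_b.start(env.agent_b.sequencer); // 2. drive the input data    (SeqItemB)
    seq_c.start(env.agent_c.sequencer); // 3. start the processing    (SeqItemC)
    phase.drop_objection(this); // the objection drops as soon as the last
                                // sequence finishes, so the simulation ends
                                // before the DUT has shifted its result out
                                // on interface D
  endtask
endclass
```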

The problem is that the data still needs to be processed and then output: when all the drivers are done, the DUT still requires more time to emit its processed data, so the monitor for interface D never completes an item and the comparison is never carried out. The number of output bits varies widely with the configuration, anywhere from 60 bits to over 60,000.

So what I need is a way to end the simulation only when the monitor for interface D has finished, and not before. I imagine this must be a rather common occurrence, but being new to UVM, I don’t know what good practice is in these scenarios.

In reply to aarelovich:

See why phase raise_objection and drop_objection is required in test class run_phase? | Verification Academy
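The gist of the linked article: run_phase keeps running only while at least one objection is raised, so the objection should be dropped only after the last expected activity has been checked, not right after the last stimulus is driven. A minimal sketch of the pattern (the description strings are optional):

```systemverilog
task run_phase(uvm_phase phase);
  phase.raise_objection(this, "stimulus and checking in flight");
  // drive the A/B/C stimulus ...
  // ... then block here until interface D has been fully observed and checked
  phase.drop_objection(this, "all DUT output checked");
endtask
```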

In reply to dave_59:

Thank you! This was very helpful. I implemented a simple logic variable in my comparator that lets me detect when the number of compared DUT outputs matches the number of predicted outputs. My test then waits until that variable becomes 1 before dropping its objection.
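For anyone finding this later, here is roughly what that approach can look like. This is a hypothetical sketch, not the poster's actual code: my_comparator, seq_item_d, n_expected, and the env.comparator path are all illustrative names. The comparator counts compared items against the predicted total and sets a flag once they match.

```systemverilog
class my_comparator extends uvm_component;
  `uvm_component_utils(my_comparator)

  int unsigned n_expected; // number of outputs the predictor expects
  int unsigned n_compared; // number of SeqItemD items compared so far
  bit          done;       // becomes 1 once every expected output is checked

  function new(string name, uvm_component parent);
    super.new(name, parent);
  endfunction

  // called once per output item gathered by the interface-D monitor
  function void compare_item(seq_item_d actual, seq_item_d expected);
    if (!actual.compare(expected))
      `uvm_error("CMP", "DUT output does not match the predicted result")
    n_compared++;
    if (n_compared == n_expected)
      done = 1;
  endfunction
endclass
```

The test then holds its objection until that flag goes to 1:

```systemverilog
// inside the test class
task run_phase(uvm_phase phase);
  phase.raise_objection(this);
  // ... start the configuration, data, and start sequences as before ...
  wait (env.comparator.done); // block until interface D is fully checked
  phase.drop_objection(this);
endtask
```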