Delaying end of phase (From run_phase to extract_phase)

What is the best way to delay end of phase after all the other components have agreed that the phase should end?
I have a driver and a sequence to model a pipelined protocol. The sequence is started in the test during the main_phase, while the driver drives the interface in the run_phase. Since it is a pipelined protocol, the test moves to the extract_phase after the last sequence item is sent, so to delay the end of the phase I am using phase_ready_to_end.
My code snippet is as shown:

class my_test extends uvm_test;
  …
  task main_phase(uvm_phase phase);
    phase.raise_objection(this);
    seqh.start(seqrh);
    phase.drop_objection(this);
  endtask
endclass

class my_driver extends uvm_driver;
  …
  bit busy = 1;
  bit ending;

  task run_phase(uvm_phase phase);
    forever begin
      …
      if (ending) begin
        phase.drop_objection(this);
        busy = 0;
      end
    end
  endtask

  virtual function void phase_ready_to_end(uvm_phase phase);
    if (phase.get_name() == "main") begin
      ending = 1;
      if (busy)
        phase.raise_objection(this);
    end
  endfunction
  …
endclass
I get an error in this case, because the phase handle used for raising the objection (the main phase passed to phase_ready_to_end) does not match the one used for dropping it (the run phase in run_phase).
However, this works fine when I change main_phase to run_phase in the test.
So does this mean we have to use run_phase in the test? Is this a limitation with respect to the sub-phases?


We suggest everyone stick with the run_phase.

If your test only operates in the main_phase, that is identical to the run_phase. But what happens if you add a shutdown_phase to your test? Did you really want your driver to extend the main_phase? Or do you just want the last time consuming phase to be extended? The end of the run_phase represents the end of the last time consuming phase.
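To illustrate Dave's suggestion, here is a minimal sketch of a driver that extends the end of the run_phase instead of the main_phase. The `busy`/`ending` flags follow the original snippet; the item type `my_item` and the driving code are placeholders, not a drop-in fix:

```systemverilog
class my_driver extends uvm_driver #(my_item);
  `uvm_component_utils(my_driver)
  bit busy, ending;

  function new(string name, uvm_component parent);
    super.new(name, parent);
  endfunction

  task run_phase(uvm_phase phase);
    forever begin
      seq_item_port.get_next_item(req);
      busy = 1;
      // ... drive the pipelined transfer on the interface ...
      seq_item_port.item_done();
      busy = 0;
      if (ending)
        phase.drop_objection(this); // balances the raise in phase_ready_to_end
    end
  endtask

  // Called when run_phase is about to end; keep it alive while still busy.
  // The phase handle here IS the run phase, so raise/drop now match.
  virtual function void phase_ready_to_end(uvm_phase phase);
    if (phase.get_name() == "run" && busy) begin
      ending = 1;
      phase.raise_objection(this);
    end
  endfunction
endclass
```

Because both the raise and the drop operate on the same run-phase handle, the objection mismatch from the original snippet disappears.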

Thanks Dave. I meant the last time consuming phase.

The above can also be achieved by delaying the sequence using one of the methods below:
1. Triggering events in the driver (via the uvm_event_pool) for each transaction and waiting for those events in the sequence
2. Using the response handler in the sequence and counting transactions as the driver puts each response
…and then ending the sequence.
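A sketch of the response-handler idea (method 2), assuming the driver returns a response per item (e.g. via item_done(rsp) or put_response). The counters `num_sent`/`num_rsp` and the repeat count are illustrative names, not part of any UVM API:

```systemverilog
class my_seq extends uvm_sequence #(my_item);
  `uvm_object_utils(my_seq)
  int num_sent, num_rsp;

  function new(string name = "my_seq");
    super.new(name);
    use_response_handler(1); // route responses to response_handler()
  endfunction

  // Called by the sequencer each time the driver puts a response
  virtual function void response_handler(uvm_sequence_item response);
    num_rsp++;
  endfunction

  virtual task body();
    repeat (10) begin
      req = my_item::type_id::create("req");
      start_item(req);
      assert(req.randomize());
      finish_item(req);
      num_sent++;
    end
    // Do not let the sequence (and hence the phase) end
    // until every pipelined transaction has completed
    wait (num_rsp == num_sent);
  endtask
endclass
```

The sequence itself stays alive until the pipeline drains, so the test's objection covers the tail of the protocol without touching phase_ready_to_end.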

Which is the best method to adopt? Does using one over the other have any impact on performance?

In reply to malathi@aceic.com:

I don’t see how using the response handler will extend the sequence. And I don’t recommend using events as they can be difficult to follow and maintain.

As long as you are not raising and dropping objections for every transaction, performance should not be an issue.

What I would do (and do in my environments) is have every part of your environment run in the run_phase, with the exception of sequences and potentially one component that has a task-based phase running after the main_phase. Have sequences run in the main_phase. When those sequences end because the test is done, the main_phase will proceed to post_main, etc. You can then add a delay in any of the UVM phases that run after main, which also run in parallel with the run_phase. That phase can be added to any component in your env, or you can start a default sequence in one of these phases. My personal preference is the shutdown_phase.
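For example, a delay in the shutdown_phase might look like the sketch below (the component and the 100 ns value are arbitrary choices for illustration):

```systemverilog
class my_env extends uvm_env;
  `uvm_component_utils(my_env)

  function new(string name, uvm_component parent);
    super.new(name, parent);
  endfunction

  // Runs after main_phase; holds the testbench open while the
  // pipeline drains, without touching the run_phase objections.
  task shutdown_phase(uvm_phase phase);
    phase.raise_objection(this);
    #100ns;
    phase.drop_objection(this);
  endtask
endclass
```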

If I understand your objective properly, what you are trying to do is extend the simulation after the last transaction/sequence has executed. In that case you can extend the simulation by setting a proper "drain time" on the objection. With this, the simulation will continue for #drain_time after the last transaction/sequence in the final time-consuming phase has executed.
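Setting a drain time could look like this sketch; the 100 ns value is arbitrary, and the exact accessor depends on your UVM version:

```systemverilog
class my_test extends uvm_test;
  `uvm_component_utils(my_test)

  function new(string name, uvm_component parent);
    super.new(name, parent);
  endfunction

  task run_phase(uvm_phase phase);
    // UVM 1.2: fetch the phase's objection and set a drain time.
    // (In UVM 1.1, phase.phase_done.set_drain_time(this, 100ns) is the
    // equivalent call.)
    phase.get_objection().set_drain_time(this, 100ns);
    // ... start sequences, raise/drop objections as usual ...
  endtask
endclass
```

After the last objection drops, the simulator keeps running for the drain time before the phase ends, which is usually enough for a pipeline to flush.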

In reply to ktmills:

Although adding delays in other time-consuming phases serves the purpose, I would advise against it. The same can be achieved by setting a drain time. Phases were added for proper structuring and synchronization of testbench code, so adding delays in phases seems arbitrary to me.