UVM End of test

I am exploring different ways to end a UVM test. One method that has come up often while studying blogs on Verification Academy and other sites is to use phase_ready_to_end. I have some questions regarding the implementation of this method.

I am using this method in my scoreboard class. My understanding is that after the usual run_phase finishes, phase_ready_to_end is called. The reason I am using it is that my scoreboard's run_phase finishes early while there is still data in its queues that needs to be processed, so I am trying to prolong the scoreboard's run_phase with this method. Here is some pseudo-code that I have used.

function void phase_ready_to_end(uvm_phase phase);
  if (phase.get_name() != "run") return;
  if (queue.size() != 0) begin
    phase.raise_objection(.obj(this));  // keep the run phase alive
    fork
      begin
        delay_phase(phase);
      end
    join_none
  end
endfunction

task delay_phase(uvm_phase phase);
  wait (queue.size() == 0);          // wait until the queue has been drained
  phase.drop_objection(.obj(this));
endtask

I have taken inspiration for this implementation from this link: UVM-End of Test Mechanism, for your reference. Here are some open questions on which I need guidance and help.

To the best of my understanding, phase_ready_to_end is called at the end of run_phase; when it runs, it raises an objection for the scoreboard's run_phase and starts the delay_phase task.

The delay_phase task just waits for the queue to empty, but I do not see any method or task that will pop items from the queue. Do I have to call some method to pop from the queue, or, per point 1 above, will the raised objection restart the run phase so that there is no need for that and I only have to wait long enough?

In reply to AbdulRauf1251:

The function phase_ready_to_end gets called at the end of every task-based phase when all objections have been dropped (or were never raised at all).

Typically a scoreboard has a queue or some kind of array of transactions waiting to be checked, sent from a monitor via an analysis_port write() method.
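Putting these points together, a minimal sketch of a scoreboard using this hook might look like the following (the class, queue, and transaction names here are placeholders, not taken from the original code):

```systemverilog
// Sketch only: my_tx, my_scoreboard, and item_q are placeholder names.
import uvm_pkg::*;
`include "uvm_macros.svh"

class my_tx extends uvm_sequence_item;
  `uvm_object_utils(my_tx)
  function new(string name = "my_tx"); super.new(name); endfunction
endclass

class my_scoreboard extends uvm_scoreboard;
  `uvm_component_utils(my_scoreboard)

  my_tx item_q[$];  // filled by an analysis write() method (not shown)

  function new(string name, uvm_component parent);
    super.new(name, parent);
  endfunction

  // Called once all run-phase objections have been dropped.
  function void phase_ready_to_end(uvm_phase phase);
    if (phase.get_name() != "run") return;
    if (item_q.size() != 0) begin
      phase.raise_objection(this, "scoreboard still draining its queue");
      fork
        begin
          wait (item_q.size() == 0);  // relies on another process popping items
          phase.drop_objection(this, "scoreboard queue empty");
        end
      join_none
    end
  endfunction
endclass
```

Note that phase_ready_to_end does not restart run_phase; the wait only completes if some still-running process (such as a checker loop in run_phase) keeps consuming the queue.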

In reply to dave_59:
Hi Dave, thanks for your reply. I am using this function in the scoreboard because when all objections have been dropped, there are still some items in the queue. I am using this function to get those items out.

The question I really want to ask this forum: when this method is called and a task is started inside it, as I have done, where I am waiting for the queue to pop all its items, will that ever happen?

In reply to AbdulRauf1251:

The thing is, I get into this function, I raise the objection, and I call that task, but I am still waiting. Nothing is happening on the scoreboard side to empty its queues.

In reply to AbdulRauf1251:

The scoreboard needs 2 values to compare something. You should have 2 storages where the values to be compared are kept. If one is empty, nothing else happens.
Raising and dropping objections in phase_ready_to_end is not necessary, because it prolongs the run_phase of your scoreboard.

In reply to AbdulRauf1251:

Then you need to debug what is still in the queue and why it is not getting removed.
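As a sketch of that debugging step, a helper like this could dump whatever is left when phase_ready_to_end fires (item_q and the transaction's convert2string() are assumptions about your scoreboard and transaction class):

```systemverilog
// Sketch only: dump the remaining queue contents for debugging.
function void dump_queue();
  `uvm_info("SCB", $sformatf("%0d item(s) still in the queue", item_q.size()), UVM_LOW)
  foreach (item_q[i])
    `uvm_info("SCB", $sformatf("item_q[%0d]: %s", i, item_q[i].convert2string()), UVM_LOW)
endfunction
```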

In reply to dave_59:

Let me give you some pre-context to this question. I have a scoreboard with two queues whose write methods are implemented, and they are being fed correctly by their sources.

task run_phase (uvm_phase phase);
 forever begin
   compare_queues(); // takes data from the two queues and compares it; both queues are implemented correctly and take data from their respective sources
 end
endtask

Let me give you a scenario: suppose a total of 10 transactions are generated, but the scoreboard was able to process only 6 of them, leaving 4 transactions when all objections are dropped. To tackle that, I implemented this phase_ready_to_end method in my scoreboard.

The problem I am having with this method is that when I raise the objection in phase_ready_to_end and call the delay_phase method, nothing happens. I am curious whether there is more to this implementation or not.

In reply to AbdulRauf1251:
It is useless to raise/drop objections in phase_ready_to_end, because this task extends the run_phase of the corresponding component.
Printing the remaining content/size of your queues would help to solve your problem.

In reply to chr_sue:

Thanks for your response. So you are saying I should not raise or drop an objection inside phase_ready_to_end. One thing I want to ask: if I don't raise any objection, does that mean the run_phase still extends?
I did that and checked the size of the queues. The scoreboard did not perform its task (which was to pop queue entries if present); instead it was stuck at that point.

In reply to AbdulRauf1251:

Yes, you understand me correctly.
BTW, using analysis FIFOs instead of queues has a few benefits: the get call blocks until there is an entry in the FIFO, so you do not have to check the size as you do with a queue.
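A minimal sketch of the analysis-FIFO approach, assuming the seq_item transaction type used elsewhere in this thread and illustrative instance names:

```systemverilog
// Sketch only: rec_fifo and my_scoreboard are illustrative names.
class my_scoreboard extends uvm_scoreboard;
  `uvm_component_utils(my_scoreboard)

  uvm_tlm_analysis_fifo #(seq_item) rec_fifo;

  function new(string name, uvm_component parent);
    super.new(name, parent);
    rec_fifo = new("rec_fifo", this);
  endfunction

  task run_phase(uvm_phase phase);
    seq_item tr;
    forever begin
      rec_fifo.get(tr);  // blocks until the monitor writes a transaction
      // ... check tr here ...
    end
  endtask
endclass

// In the env's connect_phase, the monitor's analysis port connects to
// the FIFO's analysis_export:
//   monitor.analysis_port.connect(scoreboard.rec_fifo.analysis_export);
```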

In reply to chr_sue:

One more thing to ask you: I have observed that my scoreboard's run_phase and the other components' run_phases are out of sync. Suppose 10 transactions are generated; the monitor samples these 10 transactions and sends them to the scoreboard using queues, but the scoreboard's run_phase was only able to process 7 transactions, and after it had processed 7, it kind of turned off.
Is this behaviour possible?

In reply to AbdulRauf1251:

If your environment is constructed correctly, then there is no 'out-of-sync'.
If the monitor extracts 10 transactions, it will also publish these 10 transactions to the connected components, i.e. scoreboard and/or coverage collector. The scoreboard needs 2 different inputs to compare something. The storage which receives the transactions from the monitor has to receive 10 transactions in total, but you might see fewer, because some of them have already been processed.
You never said where the 2nd compare value comes from. Could you please clarify this?

In reply to chr_sue:

We have a pair of monitors that call write methods implemented inside the scoreboard. The monitors capture transactions from the bus and call these write methods to push the transactions. Thus the source and destination monitors write into two queues - source and destination - as and when they find transactions.

  1. We have a checker task with read-and-check running in a forever loop in the run phase of the scoreboard. It is in a while loop and watches whether the destination queue has a non-zero entry. Once it finds one, it pops the head entry from the destination queue, then pops the head entry from the source queue as well, and compares the two entries to declare the check a PASS or FAIL.

  2. There are more than 2 queues and more than a pair of source/destination of course, but broadly this is the architecture around here.

  3. Now, in the current scenario, it seems the checker tasks stop printing after a certain point of time in some of the test cases. Upon adding debug prints thoroughly, it seems the checker tasks that do the job described in #2/#3 above, and get called inside the forever loop of the run phase, exit gracefully one last time. However, they are never entered again - which is to say the forever loop that should be calling them didn't call them, as if the forever loop of the run phase stopped completely.

  4. We also added another forever loop in the run phase that observes whether the queues are empty. From the prints inside that parallel loop and from the monitor prints, we know the queues aren't empty and the monitors kept pushing writes into the queues for a long time.

  5. It seems the forever loop stopped working all of a sudden (going by the prints spewed out), while another set of threads we added in the run phase in another forever loop, just to monitor those queues, keeps printing that the queues have contents. So the run phase shouldn't be over, but the checker tasks running in the forever loop have stopped.

  6. We are using Vivado 2020.2 for the simulation. This is a baffling problem for us, and we went through the prints multiple times to make sure nothing was missed. It seems we are either missing something very basic or have hit a bug / broken some basics of UVM coding to land here.

If you have any help or thoughts here, I will appreciate that greatly.

In reply to AbdulRauf1251:

Because I do not see your code, I can only guess what your problem is. It might come from using queues for storing transactions instead of uvm_tlm_analysis_fifos. The queue does not have a blocking interface, but the FIFO has: if a transaction is missing, the FIFO waits until it gets one, while the queue simply runs empty.

In reply to chr_sue:
Thanks for the response and your time. Below this text is a pseudo-code to explain the problem statement.

A few points -

  1. Currently enb is equal to 2'b11, so we are running both threads in parallel; queueRec2 and queueRec1 are the two queues.
  2. Whenever the monitor writes into the analysis port, transactions reach the scoreboard through the analysis port into these queues.
  3. When a queue's size becomes non-zero, data is popped from the back of the queue.
  4. In the code you can also see the respective write methods for each analysis port. Through these display messages, in one such simulation, I came to know that the monitor has written into the port 10 times.
  5. But from the debug print messages in thread1 and thread2, I find the prints only 6 times. At the same time, the queue's write print shows that it got the entries.
// analysis write() implementations must be functions, not tasks
function void scoreboard::write_rec2(input seq_item tr);
 seq_item pkt = new();
 pkt.copy(tr);
 queueRec2.push_front(pkt);
 $display("Queue Got this data: %0h", pkt.data);
endfunction

function void scoreboard::write_rec1(input seq_item tr);
 seq_item pkt = new();
 pkt.copy(tr);
 queueRec1.push_front(pkt);
 $display("Queue Got this data: %0h", pkt.data);
endfunction

task scoreboard::run_phase (uvm_phase phase);
  forever begin
    if (enb == 2'b11) begin
      fork
        thread1();
        thread2();
      join
    end
    else if (enb == 2'b10) begin
      thread1();
    end
    else if (enb == 2'b01) begin
      thread2();
    end
  end
endtask

task scoreboard::thread2();
  wait (queueRec2.size() != 0);
  Rec2 = queueRec2.pop_back();
  $display("Queue Received this Data: %0h", Rec2.data);
  wait (queueTrans2.size() != 0);
  trans2 = queueTrans2.pop_back();
  compare(trans2.data, Rec2.data);  // this function takes two inputs and checks equality
endtask

task scoreboard::thread1();
  wait (queueRec1.size() != 0);
  Rec1 = queueRec1.pop_back();
  $display("Queue Received this Data: %0h", Rec1.data);
  wait (queueTrans1.size() != 0);
  trans1 = queueTrans1.pop_back();
  compare(trans1.data, Rec1.data);  // this function takes two inputs and checks equality
endtask

task monitor::run_phase(uvm_phase phase);
  forever begin
    sample_data();
  end
endtask

task monitor::sample_data();
  for (int i = 0; i < 16; i++) begin
    @(posedge clk iff (rdy && vld));
    out_tr.data[i] = vif.databit;  // out_tr.data is a 16-entry array, storing databit (a single bit) at every rdy && vld
  end
  analysis_port.write(out_tr);
endtask
I will look into your suggestion of using FIFOs instead of queues, but I am not able to understand what leads to this behaviour. I have read in a blog that, due to some delay in the testbench environment's components, the scoreboard sometimes can't keep up and ends up broken. Except for these very high-level thoughts, I haven't seen much information about the theory. Can you please share some thoughts on this angle too, if you think that's a possibility?

In reply to AbdulRauf1251:

One of the most important rules in implementing testbenches is: 'as easy as possible'.
With your queue approach you do not follow this rule, because the UVM library has analysis FIFOs which behave better than queues. The write method puts the transaction into the FIFO, and the FIFO has a blocking get interface: if you do a get on an empty FIFO, it waits until a transaction is available. Then you do not need to check whether there is something in the storage.
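Applied to the thread1 task shown earlier, the blocking-get style could look like this sketch (fifoRec1 and fifoTrans1 are hypothetical uvm_tlm_analysis_fifo#(seq_item) members that would replace queueRec1 and queueTrans1):

```systemverilog
// Sketch only: blocking gets replace the wait/size/pop_back pattern.
task scoreboard::thread1();
  seq_item rec_tr, trans_tr;
  fifoRec1.get(rec_tr);      // blocks until a received transaction arrives
  $display("FIFO Received this Data: %0h", rec_tr.data);
  fifoTrans1.get(trans_tr);  // blocks until the matching source transaction arrives
  compare(trans_tr.data, rec_tr.data);
endtask
```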

With respect to your code I have 2 questions:
(1) Where does the variable enb come from?
(2) Where do the transactions come from (sources)? Do you have 2 monitors?

With respect to the monitor functionality, it is recommended to send a copy of the collected transaction to the analysis port. Then you do not have to take care of the copying in the scoreboard.
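A sketch of that recommendation applied to the sample_data task shown earlier: creating a fresh transaction per sample means every write() publishes a distinct object instead of reusing the same out_tr handle (the factory create() call assumes seq_item is registered with the UVM factory):

```systemverilog
// Sketch only: one new transaction object per sampled packet.
task monitor::sample_data();
  seq_item out_tr = seq_item::type_id::create("out_tr");
  for (int i = 0; i < 16; i++) begin
    @(posedge clk iff (rdy && vld));
    out_tr.data[i] = vif.databit;
  end
  analysis_port.write(out_tr);  // subscribers can safely keep this handle
endtask
```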

In reply to chr_sue:

So, the answer to the first question: the enb value comes from the test class. The test class sets it for uvm_root using uvm_config_db, and I retrieve it from the uvm_config_db in the scoreboard's build_phase.
The answer to the second question is yes. We have multiple monitors; their source code is more or less the same.

In reply to AbdulRauf1251:

@chr_sue, what about this - do you have any thoughts on it?
I have read in a blog that, due to some delay in the testbench environment's components, the scoreboard sometimes can't keep up and ends up broken. Except for these very high-level thoughts, I haven't seen much information about the theory. Can you please share some thoughts on this angle too, if you think that's a possibility?

In reply to AbdulRauf1251:

In reply to chr_sue:
So, the answer to the first question: the enb value comes from the test class. The test class sets it for uvm_root using uvm_config_db, and I retrieve it from the uvm_config_db in the scoreboard's build_phase.
The answer to the second question is yes. We have multiple monitors; their source code is more or less the same.

You should set enb in the test class with the scoreboard's path and not for uvm_root.
And you do not need to differentiate between the write functions. A write is only executed in the components connected to that specific analysis_port.
The transaction level does not know real timing; only the order of the data is relevant. Using tlm_analysis_fifos synchronizes the data automatically.
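As a sketch, setting enb for the scoreboard instance rather than globally might look like this (the path "env.scoreboard" is illustrative and depends on your hierarchy):

```systemverilog
// Sketch only: in the test's build_phase, target the scoreboard instance
// instead of uvm_root / "*".
uvm_config_db#(bit [1:0])::set(this, "env.scoreboard", "enb", 2'b11);

// In the scoreboard's build_phase:
if (!uvm_config_db#(bit [1:0])::get(this, "", "enb", enb))
  `uvm_fatal("SCB", "enb was not set via uvm_config_db")
```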

In reply to chr_sue:

Thanks, I will look into the fifos for synchronization.