UVM End of test

In reply to AbdulRauf1251:
It is useless to raise/drop objections in phase_ready_to_end, because this task extends the run_phase of the corresponding component.
Printing the remaining contents/sizes of your queues would help you narrow down your problem.
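
A minimal sketch of such a debug print, e.g. in the scoreboard's check_phase (the queue names here are placeholders, not your actual code):

function void scoreboard::check_phase(uvm_phase phase);
  super.check_phase(phase);
  // Report anything left unprocessed at the end of the test.
  if (queueRec1.size() != 0)
    `uvm_warning(get_type_name(), $sformatf("%0d entries left in queueRec1", queueRec1.size()))
  if (queueTrans1.size() != 0)
    `uvm_warning(get_type_name(), $sformatf("%0d entries left in queueTrans1", queueTrans1.size()))
endfunction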

In reply to chr_sue:

Thanks for your response. So you are saying I should not raise or drop objections inside phase_ready_to_end. One thing I want to ask: if I don't raise any objection there, does that mean the run_phase is still being extended?
I did that and checked the size of the queues. The scoreboard didn't perform its task (which was to pop the queue entries, if there were any); instead it was stuck at that point.

In reply to AbdulRauf1251:

Yes, you understand me correctly.
BTW, using analysis fifos instead of queues has a few benefits, because the get call blocks until there is an entry in the fifo. You do not have to check the size as you do for a queue.
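
A minimal sketch of the difference (queueRec1, fifoRec1 and seq_item are placeholder names):

// Queue style: the consumer has to poll/wait on the size itself.
task get_from_queue(output seq_item pkt);
  wait (queueRec1.size() != 0);
  pkt = queueRec1.pop_back();
endtask

// uvm_tlm_analysis_fifo style: get() simply blocks until an item is available.
task get_from_fifo(output seq_item pkt);
  fifoRec1.get(pkt);
endtask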

In reply to chr_sue:

One more thing to ask you: I have observed that my scoreboard's run_phase and the other components' run_phases are out of sync. For example, suppose 10 transactions have been generated. The monitor samples these 10 transactions and sends them to the scoreboard using queues, but the scoreboard's run_phase was only able to process 7 transactions; after it had processed 7, it kind of turned off.
Is this behaviour possible?

In reply to AbdulRauf1251:

If your environment is constructed correctly, then there is no 'out-of-sync'.
If the monitor extracts 10 transactions, it will also publish these 10 transactions to the connected components, i.e. the scoreboard and/or the coverage collector. The scoreboard needs 2 different inputs to compare something. The storage which receives the transactions from the monitor has to hold 10 transactions in total, but you might see fewer, because some of them have already been processed.
You never said where the 2nd compare value comes from. Could you please clarify this?

In reply to chr_sue:

We have a pair of monitors that call write methods implemented inside the scoreboard. The monitors capture the transactions from the bus and call these write methods to push the transactions. Thus the two monitors, source and destination, write into two queues, source and destination, as and when they find the transactions.

  1. We have a checker task (read and check) running in a forever loop in the run_phase of the scoreboard. It sits in a while loop and watches whether the destination queue has a non-zero number of entries. Once it finds one, it pops the head entry from the destination queue, then pops the head entry from the source queue as well, and compares the two entries to declare the check a PASS or FAIL.

  2. There are more than 2 queues and more than one pair of source/destination monitors, of course, but broadly this is the architecture.

  3. Now, in the current scenario, the checker tasks stop printing after a certain point in time in some of the test cases. After adding debug prints thoroughly, it seems that the checker tasks that do the read-and-check job described above, and that get called inside the forever loop of the run_phase, exit gracefully one last time. However, they are never entered again, which is to say that the forever loop that should be calling them didn't call them, as if the forever loop of the run_phase had stopped completely.

  4. We also added another forever loop in the run_phase that observes whether the queues are empty. From the prints inside that parallel loop and from the monitor prints, we know that the queues aren't empty and that the monitors kept pushing writes into the queues for a long time.

  5. It seems the checker forever loop stopped working all of a sudden (going by the prints), while the other set of threads that we added in the run_phase, in a separate forever loop just to monitor those queues, keeps printing that the queues have contents. So the run_phase shouldn't be over, but the checker tasks running in the forever loop have stopped.

  6. We are using Vivado 2020.2 for the simulation. This is a baffling problem for us, and we went through the prints multiple times to make sure nothing was missed. It seems we are either missing something very basic or have hit a bug / broken some basic rule of UVM coding to land here.

If you have any help or thoughts here, we will appreciate them greatly.

In reply to AbdulRauf1251:

Because I do not see your code, I can only guess what your problem is. It might come from the usage of queues for storing the transactions instead of uvm_tlm_analysis_fifos. The queue does not have a blocking interface, but the fifo does. If a transaction is missing, the fifo waits until it gets one; the queue simply runs empty.
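
A minimal sketch of how such a fifo could be hooked up, assuming an env with a monitor mon_rec1 (analysis port ap) and a scoreboard scb containing a uvm_tlm_analysis_fifo called fifo_Rec1 (all hypothetical names):

function void my_env::connect_phase(uvm_phase phase);
  super.connect_phase(phase);
  // The monitor's analysis port writes straight into the fifo's built-in
  // analysis_export; no write() implementation is needed in the scoreboard.
  mon_rec1.ap.connect(scb.fifo_Rec1.analysis_export);
endfunction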

In reply to chr_sue:
Thanks for the response and your time. Below is pseudo-code to explain the problem statement.

A few points:

  1. Currently enb is equal to 2'b11, so we are running both threads in parallel; queueRec2 and queueRec1 are the two queues.
  2. Whenever the monitors write into the analysis ports, the transactions reach the scoreboard through the analysis ports and land in these queues.
  3. When a queue's size becomes non-zero, data is popped from the back of the queue.
  4. In the code you can also see the write methods for the respective analysis ports. Through their display messages, in one such simulation, I came to know that the monitors wrote into the ports 10 times.
  5. But for the debug print messages in thread1 and thread2, I find the prints only 6 times. At the same time, the queues' write prints show that entries did go in.
function void scoreboard::write_rec2(input seq_item tr);
 seq_item pkt = new();
 pkt.copy(tr);
 queueRec2.push_front(pkt);
 $display("Queue Got this data: %0h", pkt.data);
endfunction

function void scoreboard::write_rec1(input seq_item tr);
 seq_item pkt = new();
 pkt.copy(tr);
 queueRec1.push_front(pkt);
 $display("Queue Got this data: %0h", pkt.data);
endfunction

task scoreboard::run_phase (uvm_phase phase);
  forever begin
     if (enb == 2'b11) begin
	fork 
		begin thread1(); end
		begin thread2(); end
	join
     end
     else if (enb == 2'b10) begin
	thread1();
     end
     else if (enb == 2'b01) begin
	thread2();
     end
  end
endtask

task scoreboard::thread2();
	wait (queueRec2.size() != 0);
	Rec2 = queueRec2.pop_back();
	$display("Queue Received this Data: %0h", Rec2.data);
	wait (queueTrans2.size() != 0);
	trans2 = queueTrans2.pop_back();
	compare(trans2.data, Rec2.data);  // this function takes two inputs and checks equality
endtask

task scoreboard::thread1();
	wait (queueRec1.size() != 0);
	Rec1 = queueRec1.pop_back();
	$display("Queue Received this Data: %0h", Rec1.data);
	wait (queueTrans1.size() != 0);
	trans1 = queueTrans1.pop_back();
	compare(trans1.data, Rec1.data);  // this function takes two inputs and checks equality
endtask

task monitor:: run_phase(uvm_phase phase);
	forever begin
	  sample_data();
	end 
endtask 

task monitor:: sample_data();
	for (int i = 0; i < 16; i++) begin
		@(posedge clk iff(rdy && vld));
		out_tr.data[i] = vif.databit;	// out_tr.data is an array of 16, storing databit (which is of single bit) at every rdy && vld.
	end 
	analysis_port.write(out_tr);
endtask
I will look into your suggestion of using fifos instead of queues, but I am not able to understand what leads to this behaviour. I have read in a blog somewhere that, due to some delay in the testbench environment's components, the scoreboard sometimes can't process things and ends up broken. Except for these very high-level thoughts, I haven't seen much info about this theory. Can you please share some thoughts on this side too, if you think that's a possibility?

In reply to AbdulRauf1251:

One of the most important rules in implementing testbenches is: ‘as easy as possible’.
With your queue approach you do not follow this rule, because the UVM library has analysis fifos which behave better than queues. The write method puts the transaction into the fifo. The fifo has a blocking get interface: if you do a get on the fifo and it is empty, it waits until a transaction is available. Then you do not need to check whether there is something in the storage.

With respect to your code I have 2 questions:
(1) Where does the variable enb come from?
(2) Where do the transactions come from (sources)? Do you have 2 monitors?

With respect to the monitor functionality, it is recommended to send a copy of the collected transaction to the analysis port. Then you do not have to take care of the copying in the scoreboard.
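
A minimal sketch of that idea, reusing the names from the pseudo-code above and assuming seq_item implements the usual copy/clone hooks:

task monitor::sample_data();
  seq_item tr_copy;
  for (int i = 0; i < 16; i++) begin
    @(posedge vif.clk iff (vif.rdy && vif.vld));
    out_tr.data[i] = vif.databit;
  end
  // Publish a clone so later updates of out_tr cannot corrupt what the
  // scoreboard has already received.
  $cast(tr_copy, out_tr.clone());
  analysis_port.write(tr_copy);
endtask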

In reply to chr_sue:

So, the answer to the first question: the enb value comes from the test class. The test class sets its value on uvm_root using uvm_config_db. In the scoreboard, I get the value from the uvm_config_db inside the scoreboard's build_phase.
The answer to the second question is yes, we have multiple monitors; their source code is more or less the same.

In reply to AbdulRauf1251:

@chr_sue, what about this, do you have any thoughts on it?
I have read in a blog somewhere that, due to some delay in the testbench environment's components, the scoreboard sometimes can't process things and ends up broken. Except for these very high-level thoughts, I haven't seen much info about this theory. Can you please share some thoughts on this side too, if you think that's a possibility?

In reply to AbdulRauf1251:

In reply to chr_sue:
So, the answer to the first question: the enb value comes from the test class. The test class sets its value on uvm_root using uvm_config_db. In the scoreboard, I get the value from the uvm_config_db inside the scoreboard's build_phase.
The answer to the second question is yes, we have multiple monitors; their source code is more or less the same.

You should set enb in the test class for the scoreboard and not for uvm_root.
And you do not need to differentiate between the write functions. The write is only executed in the components connected to that specific analysis_port.
The transaction level does not know real timing; only the order of the data is relevant. Using tlm_analysis_fifos synchronizes the data automatically.
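
A minimal sketch of the narrower config_db scoping, assuming enb is a 2-bit value and the scoreboard instance lives at env.scb (a hypothetical path):

// In the test's build_phase: scope the setting to the scoreboard only,
// instead of setting it on uvm_root with a "*" path.
uvm_config_db#(bit [1:0])::set(this, "env.scb", "enb", 2'b11);

// In the scoreboard's build_phase:
if (!uvm_config_db#(bit [1:0])::get(this, "", "enb", enb))
  `uvm_fatal(get_type_name(), "enb not found in config_db")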

In reply to chr_sue:

Thanks, I will look into the fifos for synchronization.

In reply to AbdulRauf1251:

Hi @chr_sue, I have used fifos instead of queues, but the results are still the same, and I don't understand why. I have given the updated scoreboard code with the fifos added below.


function void scoreboard::write_rec2(input seq_item tr);
 seq_item pkt = new();
 pkt.copy(tr);
 fifo_Rec2.try_put(pkt);
 $display("Queue Got this data: %0h", pkt.data);
endfunction

function void scoreboard::write_rec1(input seq_item tr);
 seq_item pkt = new();
 pkt.copy(tr);
 fifo_Rec1.try_put(pkt);
 $display("Queue Got this data: %0h", pkt.data);
endfunction

task scoreboard::run_phase (uvm_phase phase);
  forever begin
     if (enb == 2'b11) begin
	fork 
		begin thread1(); end
		begin thread2(); end
	join
     end
     else if (enb == 2'b10) begin
	thread1();
     end
     else if (enb == 2'b01) begin
	thread2();
     end
  end
endtask

task scoreboard::thread2();
	fifo_Rec2.get(Rec2);
	fifo_Trans2.get(trans2);
	$display("Queue Received this Data: %0h", Rec2.data);
	compare(trans2.data, Rec2.data);  // this function takes two inputs and checks equality
endtask

task scoreboard::thread1();
	fifo_Rec1.get(Rec1);
	fifo_Trans1.get(trans1);
	$display("Queue Received this Data: %0h", Rec1.data);
	compare(trans1.data, Rec1.data);  // this function takes two inputs and checks equality
endtask

Can you also give me some thoughts about this theory?
I have read in a blog somewhere that, due to some delay in the testbench environment's components, the scoreboard sometimes can't process things and ends up broken. Except for these very high-level thoughts, I haven't seen much info about this theory. Can you please share some thoughts on this side too, if you think that's a possibility?

In reply to dave_59:
Hi @dave_59, please give your thoughts on this as well.
We have a pair of monitors that call write methods implemented inside the scoreboard. The monitors capture the transactions from the bus and call these write methods to push the transactions. Thus the two monitors, source and destination, write into two queues, source and destination, as and when they find the transactions.

  1. We have a checker task (read and check) running in a forever loop in the run_phase of the scoreboard. It sits in a while loop and watches whether the destination queue has a non-zero number of entries. Once it finds one, it pops the head entry from the destination queue, then pops the head entry from the source queue as well, and compares the two entries to declare the check a PASS or FAIL.

  2. There are more than 2 queues and more than one pair of source/destination monitors, of course, but broadly this is the architecture.

  3. Now, in the current scenario, the checker tasks stop printing after a certain point in time in some of the test cases. After adding debug prints thoroughly, it seems that the checker tasks that do the read-and-check job described above, and that get called inside the forever loop of the run_phase, exit gracefully one last time. However, they are never entered again, which is to say that the forever loop that should be calling them didn't call them, as if the forever loop of the run_phase had stopped completely.

  4. We also added another forever loop in the run_phase that observes whether the queues are empty. From the prints inside that parallel loop and from the monitor prints, we know that the queues aren't empty and that the monitors kept pushing writes into the queues for a long time.

  5. It seems the checker forever loop stopped working all of a sudden (going by the prints), while the other set of threads that we added in the run_phase, in a separate forever loop just to monitor those queues, keeps printing that the queues have contents. So the run_phase shouldn't be over, but the checker tasks running in the forever loop have stopped.

  6. We are using Vivado 2020.2 for the simulation. This is a baffling problem for us, and we went through the prints multiple times to make sure nothing was missed. It seems we are either missing something very basic or have hit a bug / broken some basic rule of UVM coding to land here.

If you have any help or thoughts here, we will appreciate them greatly.

In reply to AbdulRauf1251:

I do not see the fifo declarations, and the code is not correct. try_put should never be used on the analysis path. If your transactions are coming from different analysis ports, you do not need the suffixes.
Please show the structural scoreboard code and say where the transactions are coming from.
Please note that compare is a built-in function of uvm_sequence_item which takes only 1 input.
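
A minimal sketch of using that built-in compare inside a checker task, assuming seq_item registers its fields with the field macros or implements do_compare (fifo names as in the code above):

task check_one_pair();
  seq_item Rec1, trans1;
  fifo_Rec1.get(Rec1);
  fifo_Trans1.get(trans1);
  // Compare whole items instead of individual data fields.
  if (!trans1.compare(Rec1))
    `uvm_error(get_type_name(), "trans1 and Rec1 do not match")
endtask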

In reply to chr_sue:

Thanks for the response. Here is my basic scoreboard structure. The following points need to be taken into consideration.

  1. The structure below has two analysis imports, which are connected to two different monitors (let's just say monitor_trans1 and monitor_Rec1). Both these monitors write into the scoreboard through these analysis imports.
  2. To differentiate between the two different write imports, I have used the decl macro. I am using a tlm fifo (not an analysis fifo) here to store the incoming transactions. Why am I using it? Because I have an analysis import and I can use its write method. I know the analysis fifo has its own merits, but my scope of work suits this kind of implementation.
  3. Both monitors described in point 1 send data: monitor_trans1 monitors the data driven to the DUT and monitor_Rec1 monitors the response data from the DUT.
  4. Now to the issue. In the analysis import's write implementation I am also printing the transaction received from its respective monitor (call it a write print). In task thread1, when we get the stored transaction, we print the transaction again (call it a read print).
  5. I have run a test containing 10 transactions, which means 10 transactions are sent to the DUT and the DUT gives 10 responses. So the total number of entries seen by each respective monitor will be 10 (meaning there will be 10 write prints per fifo).
  6. Now in task thread1() I am getting those transactions from the fifos and comparing the data of concern. What I see in my log: there are 10 write prints present for each fifo, which means each monitor has sent its transactions 10 times, but when I look at the read prints for each fifo, there are only 7, which means there are still some entries in both fifos which are not taken out during the normal simulation time of the scoreboard's run_phase.

I need to know why this behavior is happening. I have used queues, fifos, everything, but this behaviour keeps happening. I feel somehow my scoreboard shuts down and does not recover.
I read in a blog somewhere that when there is some delay in a uvm_component's response, other uvm_components which depend on that response shut themselves down. Is that correct or not?



`uvm_analysis_imp_decl(_trans1)
`uvm_analysis_imp_decl(_Rec1)

class scoreboard extends uvm_scoreboard;
 uvm_tlm_fifo #(seq_item) fifoRec1,fifotrans1;
 uvm_analysis_imp_trans1 #(seq_item,scoreboard) item_fifotrans1;
 uvm_analysis_imp_Rec1 #(seq_item,scoreboard) item_fifoRec1;

 function new(string name, uvm_component parent);
  super.new(name,parent);
 endfunction
 
// The usual build_phase is implemented as well; I have not given its implementation here.
endclass

function void scoreboard::write_trans1(input seq_item tr);
 seq_item pkt = new();
 pkt.copy(tr);
 fifotrans1.try_put(pkt);
 $display("Queue Got this data: %0h", pkt.data);
endfunction

function void scoreboard::write_Rec1(input seq_item tr);
 seq_item pkt = new();
 pkt.copy(tr);
 fifoRec1.try_put(pkt);
 $display("Queue Got this data: %0h", pkt.data);
endfunction
 
task scoreboard::run_phase (uvm_phase phase);
  forever begin
    thread1();
  end
endtask
function compare_data (input bit [31:0] indata1, indata2);
 if (indata1 == indata2) begin
  $display("Pass");
 end
 else begin 
  $display("Fail");
 end
endfunction 

task scoreboard::thread1();
	fifoRec1.get(Rec1);
	$display("Queue Received this Data: %0h", Rec1.data);
	fifotrans1.get(trans1);
	$display("Queue Received this Data: %0h", trans1.data);
	compare_data(trans1.data, Rec1.data);  // this function takes two inputs and checks equality
endtask

In reply to AbdulRauf1251:
Your scoreboard looks strange. BTW, imp stands for 'implementation' and not for 'import'.
Your scoreboard should look like this:

class scoreboard extends uvm_scoreboard;
  `uvm_component_utils(scoreboard)
   uvm_tlm_analysis_fifo #(seq_item) fifo_Rec1;
   uvm_tlm_analysis_fifo #(seq_item) fifo_Trans1;
 
   function new(string name, uvm_component parent);
     super.new(name,parent);
   endfunction

   function void build_phase(uvm_phase phase);
      super.build_phase(phase);
      fifo_Rec1 = new("fifo_Rec1", this);
      fifo_Trans1 = new("fifo_Trans1", this);
   endfunction
 
task run_phase (uvm_phase phase);
  thread1();
endtask

function void compare_data (input bit [31:0] indata1, indata2);
 if (indata1 == indata2) begin
    `uvm_info(get_type_name(), $sformatf("compare passed"), UVM_LOW)
 end
 else begin 
    `uvm_error(get_type_name(), $sformatf("compare failed with indata1 = %0h indata2 = %0h", indata1, indata2))
 end
endfunction 
 
task thread1();
   seq_item Rec1;
   seq_item Trans1;
   forever begin
     fifo_Rec1.get(Rec1);
     `uvm_info(get_type_name(), $sformatf("Rec1 Received this Data: %0h", Rec1.data), UVM_MEDIUM)
     fifo_Trans1.get(Trans1);
     `uvm_info(get_type_name(), $sformatf("Trans1 Received this Data: %0h", Trans1.data), UVM_MEDIUM)
     compare_data(Trans1.data, Rec1.data);  // this function takes two inputs and checks equality
   end
endtask
endclass

Please note:
(1) You do not need to declare the suffixes
(2) a uvm_tlm_fifo is not sufficient for the analysis path. It has to be a uvm_tlm_analysis_fifo
(3) the uvm_tlm_analysis_fifo has a built-in uvm_analysis_export. You do not need a uvm_analysis_imp
(4) You do not need to implement a specific write function to put the data into the uvm_tlm_analysis_fifo
(5) a function always has to have a return type

In reply to chr_sue:

Thanks, chr_sue. Can you also give me some thoughts about this theory?
I have read in a blog somewhere that, due to some delay in the testbench environment's components, the scoreboard sometimes can't process things and ends up broken. Except for these very high-level thoughts, I haven't seen much info about this theory. Can you please share some thoughts on this side too, if you think that's a possibility? Also, what is phase_ready_to_end, and when can we use it?

In reply to AbdulRauf1251:

I do not really understand what you mean by 'delay in the scoreboard'. As I said, the scoreboard is self-synchronizing because it depends only on the order of the transactions. You have to take care that you do not lose any transactions and that you do not change their order. Both are guaranteed when using uvm_tlm_analysis_fifos: they are unbounded and can store any number of transactions.
What can happen is that you do not see all transactions generated by the sequencer in the scoreboard, because the objection mechanism has stopped the execution of the testbench before all transactions were processed in the scoreboard. This is the case when you have only 1 objection mechanism implemented, in the test. You have to decide whether this is critical with respect to your verification quality. In most cases it is not relevant: if you generate 10000 seq_items and process only 9999 in the scoreboard, this might not be critical.
But there are mechanisms in the UVM to avoid this:
(1) Defining a drain_time. The corresponding process waits exactly this long before it drops the objection. This is not easy to use, because you have to specify an absolute time.
(2) A more flexible way is to implement phase_ready_to_end. The end of the phase is then delayed as long as this hook keeps an objection raised. A sketch of both mechanisms follows below.
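
As an illustration, a minimal hedged sketch of both mechanisms; the drain/poll times are placeholders and fifo_Rec1 refers to the analysis fifo from the scoreboard example above:

// (1) Drain time, set inside the test's run_phase: allow an extra 100ns
//     (placeholder value) after the last objection has been dropped.
task run_phase(uvm_phase phase);
  phase.raise_objection(this);
  phase.phase_done.set_drain_time(this, 100ns);
  // ... start sequences here ...
  phase.drop_objection(this);
endtask

// (2) phase_ready_to_end, added inside the scoreboard class: keep the
//     run_phase alive while the analysis fifo still holds transactions.
virtual function void phase_ready_to_end(uvm_phase phase);
  if (phase.is(uvm_run_phase::get()) && !fifo_Rec1.is_empty()) begin
    phase.raise_objection(this, "scoreboard still busy");
    fork
      begin
        while (!fifo_Rec1.is_empty()) #10ns; // poll; interval is a placeholder
        phase.drop_objection(this, "scoreboard drained");
      end
    join_none
  end
endfunction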