Two parallel threads working on same driver

Hi

In my environment, I have a host agent and a driver that must access two different interfaces of the DUT (TX and RX).

I have sequences for TX transactions and RX transactions, called from a test in a fork...join:
fork
  tx_seq();
  rx_seq();
join

However, the driver completes one transaction (either TX or RX) before starting the next, since the driver is coded like this:

forever begin
  seq_item_port.get_next_item(item);
  case (item.req_type)
    TX: tx_dr();
    RX: rx_dr();
  endcase
  seq_item_port.item_done();
end

I want to change this to do a full-duplex transaction (TX and RX driven out in parallel). Is this possible given that the agent has a single sequencer?

Please give some suggestions on how I can build a driver that caters to this with a single agent.

Thanks


Hi,

One way to do this is as follows:
→ Your TX and RX sequences must share a common sequence item or transaction.
→ Define an enum in a package, e.g. typedef enum bit [1:0] {TX, RX, FULL_DUPLEX} op_type;
→ Set the enum variable inside the build_phase of the testcase using set_config_int("*", "op_t", 2); or set it globally.
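As a side note, set_config_int is deprecated in current UVM releases in favor of uvm_config_db. A sketch of the same step using uvm_config_db (the field name op_t and the enum are taken from this thread; everything else is illustrative):

```systemverilog
// In a package visible to both the test and the driver:
typedef enum bit [1:0] {TX, RX, FULL_DUPLEX} op_type;

// In the test's build_phase: broadcast the mode to all components.
uvm_config_db#(op_type)::set(this, "*", "op_t", FULL_DUPLEX);

// In the driver's build_phase: fetch it into a local op_t field.
if (!uvm_config_db#(op_type)::get(this, "", "op_t", op_t))
  `uvm_fatal("CFG", "op_t was not set by the test")
```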

→ Just like the fork...join in the testcase, apply the same inside the driver:
seq_item_port.get_next_item(item);
drive_dut(); // task containing the DUT-driving logic
seq_item_port.item_done();
→ Define the task drive_dut inside the driver like:
task drive_dut();
  if (op_t == FULL_DUPLEX) begin // op_t holds the enum value
    fork
      tx_dr();
      rx_dr();
    join
  end
endtask
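For completeness, a sketch of the test side under these assumptions (sequence, environment, and sequencer handle names are illustrative, not from the thread): both sequences are started on the same single sequencer, and the driver's drive_dut fork splits the work.

```systemverilog
virtual task run_phase(uvm_phase phase);
  tx_seq t_seq = tx_seq::type_id::create("t_seq");
  rx_seq r_seq = rx_seq::type_id::create("r_seq");
  phase.raise_objection(this);
  fork
    t_seq.start(env.agent.sqr); // both sequences target the one sequencer
    r_seq.start(env.agent.sqr);
  join
  phase.drop_objection(this);
endtask
```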

Hope the above helps.

Thanks and regards,
Mahee.

In reply to mahesh_424:

Thanks Mahesh.

This is a simple fix to the problem.

Sujitha

Since you said full duplex, I understand that the Tx and Rx transactions will happen simultaneously.

For your requirement you can use seq_item_port.peek to identify the type of the current item on the sequencer FIFO, and then, if it matches the required type, call item_done.

Maybe the following code will help:

class my_data_item extends uvm_sequence_item;
//Common fields if any b/w tx and rx
endclass : my_data_item

class my_tx_item extends my_data_item;
 // Fields for the Tx Item
endclass : my_tx_item

class my_rx_item extends my_data_item;
  // Fields for the Rx Item
endclass : my_rx_item

class my_driver extends uvm_driver#(my_data_item);

my_tx_item tx_q[$];
my_rx_item rx_q[$];

virtual task run_phase(uvm_phase phase);
 fork
  get_item();
  tx_drv();
  rx_drv();
 join
endtask : run_phase

virtual task get_item();
 my_data_item item;
 forever begin
  if(tx_q.size()==0) begin // Fetch an item only if we don't have one in our local queue
   seq_item_port.peek(item);
   if(item.get_type_name()=="my_tx_item") begin
     my_tx_item tx_item;
     $cast(tx_item,item);
     tx_q.push_back(tx_item);
     seq_item_port.item_done();
   end
  end
  if(rx_q.size()==0) begin
   seq_item_port.peek(item);
   if(item.get_type_name()=="my_rx_item") begin
     my_rx_item rx_item;
     $cast(rx_item,item);
     rx_q.push_back(rx_item);
     seq_item_port.item_done();
   end
  end
 #1; // This is a must to avoid deadlock
 end // End: forever
endtask : get_item

virtual task tx_drv();
 while(1) begin
  if(tx_q.size()!=0) begin
   my_tx_item tx_item;
   tx_item = tx_q.pop_front();

   // Implement Tx Driving part here

  end
  #1; // This is a must to avoid deadlock
 end
endtask : tx_drv

virtual task rx_drv();
 while(1) begin
  if(rx_q.size()!=0) begin
   my_rx_item rx_item;
   rx_item = rx_q.pop_front();

   // Implement Rx Driving part here

  end
  #1; // This is a must to avoid deadlock
 end
endtask : rx_drv

endclass : my_driver

In reply to advaneharshal:

The #1 is never required.
But you have to make sure get_item runs first, and only then run tx_drv and rx_drv in parallel. Your fork/join does not guarantee that get_item executes first; the order in which the forked threads start is indeterminate (though repeatable for a given simulator).
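One way to drop the #1 polling entirely is to replace the queues with mailboxes, whose get() blocks until an item is available, so neither consumer thread spins. A sketch reusing the class names from the code above (still illustrative, not a drop-in implementation):

```systemverilog
class my_driver extends uvm_driver#(my_data_item);
  mailbox #(my_tx_item) tx_mb = new();
  mailbox #(my_rx_item) rx_mb = new();

  virtual task run_phase(uvm_phase phase);
    fork
      get_items();
      tx_drv();
      rx_drv();
    join
  endtask

  virtual task get_items();
    my_tx_item tx_item;
    my_rx_item rx_item;
    forever begin
      seq_item_port.get_next_item(req); // blocks until the sequencer has an item
      if ($cast(tx_item, req))      tx_mb.put(tx_item);
      else if ($cast(rx_item, req)) rx_mb.put(rx_item);
      seq_item_port.item_done();        // free the sequencer for the other stream
    end
  endtask

  virtual task tx_drv();
    my_tx_item tx_item;
    forever begin
      tx_mb.get(tx_item); // blocks until get_items delivers one; no #1 needed
      // Implement Tx driving here
    end
  endtask

  virtual task rx_drv();
    my_rx_item rx_item;
    forever begin
      rx_mb.get(rx_item); // blocks until get_items delivers one; no #1 needed
      // Implement Rx driving here
    end
  endtask
endclass : my_driver
```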