Is this the right way to do pipelined access through the driver?

I need to generate 15 outstanding writes on my bus, as in a pipeline: the 15 write addresses are sent out on the bus without waiting for the responses to come back. First, I make my driver get 15 sequence items from the sequence and put them into a TLM FIFO before I start my write task. Then I have my task send the 15 write addresses out to the bus. Is this the right way to do it?
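For concreteness, here is a minimal sketch of the flow described above, assuming a hypothetical bus_item sequence item and a hypothetical driver class; write_addr_phase() is just a placeholder for the protocol-specific signalling, not part of any library API.

import uvm_pkg::*;
`include "uvm_macros.svh"

// Hypothetical sequence item, reduced to the fields needed for the sketch
class bus_item extends uvm_sequence_item;
  rand bit [31:0] addr;
  rand bit [31:0] data;
  `uvm_object_utils(bus_item)
  function new(string name = "bus_item");
    super.new(name);
  endfunction
endclass

// Driver sketch: buffer 15 items in a TLM FIFO, then issue the write
// addresses without waiting for any response to come back
class pipelined_write_driver extends uvm_driver #(bus_item);
  `uvm_component_utils(pipelined_write_driver)

  uvm_tlm_fifo #(bus_item) write_fifo;

  function new(string name, uvm_component parent);
    super.new(name, parent);
    write_fifo = new("write_fifo", this, 15); // depth = max outstanding writes
  endfunction

  task run_phase(uvm_phase phase);
    fork
      collect_items();   // pull items from the sequencer into the FIFO
      issue_addresses(); // drive write addresses as fast as the bus allows
    join
  endtask

  task collect_items();
    bus_item req;
    forever begin
      seq_item_port.get_next_item(req);
      write_fifo.put(req);       // blocks when 15 writes are already buffered
      seq_item_port.item_done(); // no response is returned to the sequence here
    end
  endtask

  task issue_addresses();
    bus_item req;
    forever begin
      write_fifo.get(req);
      write_addr_phase(req);
    end
  endtask

  // Placeholder: drive one write-address beat on the bus
  task write_addr_phase(bus_item req);
    // e.g. vif.awaddr <= req.addr; (protocol-specific)
  endtask
endclass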

Thanks for the response,

Allen

What are you going to do with the responses?

Take a look at:

https://verificationacademy.com/uvm-ovm/Driver/Pipelined

This should give you a starting point on how to address the design of your driver.

In reply to mperyer:

Hi,
I am trying to implement a partial AXI protocol in which the driver stores both read and write transactions in queues.
But although the sequence sends 20 transactions each for read and write, only 4 write transactions and 2 read transactions appear on the bus.

Is there a simpler way to wait in the driver itself (for example, counting the number of posedges of the RVALID and BRESP signals) before ending the run phase? Or should it be done with UVM events, as shown in the link?

Thanks in advance.

Regards,
Chandan

In reply to chandanc9:

My project experience with another protocol shows that any implementation like the one you are doing, with queues etc., ends up in a fiasco.
I strongly recommend following the example mperyer is pointing to:
https://verificationacademy.com/uvm-ovm/Driver/Pipelined

But you have to investigate how many 'do_pipelined_transfer' tasks you are running in the fork-join. It depends on the number of clock cycles the protocol needs and might be more than 2.
Following these recommendations gives you all the flexibility for further options.
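To illustrate, here is a compact sketch loosely following the cookbook structure linked above, assuming the same hypothetical bus_item sequence item as in the earlier sketch; the class name and the empty phase tasks are placeholders. The number of do_pipelined_transfer copies in the fork-join sets how many transfers can be in flight at once.

class pipelined_axi_driver extends uvm_driver #(bus_item);
  `uvm_component_utils(pipelined_axi_driver)

  semaphore pipeline_lock = new(1); // serializes the cmd (address) phase

  function new(string name, uvm_component parent);
    super.new(name, parent);
  endfunction

  task run_phase(uvm_phase phase);
    // One copy per transfer that can be in flight: two suit a simple
    // cmd + data pipeline, deeper protocols need more copies
    fork
      do_pipelined_transfer();
      do_pipelined_transfer();
    join
  endtask

  task do_pipelined_transfer();
    bus_item req;
    forever begin
      pipeline_lock.get();              // only one cmd phase at a time
      seq_item_port.get_next_item(req);
      drive_cmd_phase(req);
      seq_item_port.item_done();        // lets the sequence issue the next item
      pipeline_lock.put();              // the other task may start its cmd phase
      drive_data_phase(req);            // data phase overlaps later cmd phases
    end
  endtask

  // Placeholders for the protocol-specific cmd and data phases
  task drive_cmd_phase(bus_item req);  endtask
  task drive_data_phase(bus_item req); endtask
endclass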

In reply to chr_sue:

Thanks a lot chr_sue. Still, I would like your kind suggestion for the following scenario:
I want to generate a number of write transactions followed by reads. In the driver's run phase, tasks for each channel could be launched in a fork-join (five of them). So there are actually outstanding transactions instead of an actual pipeline. Then I have to make the sequence wait, as in https://verificationacademy.com/cookbook/driver/pipelined :


.....
  // Do not end the sequence until the last req item is complete
  wait(count == 20);
endtask: body

// This response_handler function is enabled to keep the sequence response
// FIFO empty
function void response_handler(uvm_sequence_item response);
  count++;
endfunction: response_handler

or use the next example, with events to signal from the driver to the sequence. Is the event-based one better in any way, even though it is more complicated?
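For completeness, a minimal sketch of how the counting variant above might fit into a full sequence, assuming the hypothetical bus_item item used earlier in the thread. use_response_handler(1) must be called so that response_handler() is invoked instead of the responses piling up in the sequence response FIFO, and the driver must actually return one response per item (e.g. via rsp_port.write or put_response) for the count to advance.

class pipelined_write_seq extends uvm_sequence #(bus_item);
  `uvm_object_utils(pipelined_write_seq)

  int count; // bumped once per completed transfer by response_handler()

  function new(string name = "pipelined_write_seq");
    super.new(name);
  endfunction

  task body();
    bus_item req;
    use_response_handler(1); // route responses to response_handler()
    count = 0;
    repeat (20) begin
      req = bus_item::type_id::create("req");
      start_item(req);
      if (!req.randomize()) `uvm_error("RAND", "randomize failed")
      finish_item(req); // returns as soon as the driver accepts the item
    end
    // Do not end the sequence until the last req item is complete
    wait (count == 20);
  endtask

  function void response_handler(uvm_sequence_item response);
    count++;
  endfunction
endclass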

In reply to chandanc9:

It is not correct that you merely have outstanding requests if you are doing things correctly.
And the response handler is only needed if you want to pass back responses, which is not always the case.
The pipelined processing has a 'cmd' and a 'data' phase. When the first cmd phase has completed, you can start the next cmd phase. One option to control the execution of the cmd phases is to employ a semaphore with 1 key. You perform a get on the semaphore; the task which gets the key is executed first in the fork-join. After the cmd phase in the first task completes, the next task that gets the key is executed, and so on. The data phases run independently.
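As a plain-SystemVerilog illustration of this single-key semaphore idea (outside UVM; the module name and cycle counts are arbitrary placeholders): three transfer threads compete for one key, so their cmd phases execute one after another while their data phases overlap freely.

module semaphore_pipeline_demo;
  bit clk;
  semaphore cmd_lock = new(1); // one key: only one cmd phase at a time

  always #5 clk = ~clk;

  task automatic transfer(int id);
    cmd_lock.get(1);                // wait for the key
    $display("%0t: transfer %0d cmd phase", $time, id);
    repeat (2) @(posedge clk);      // cmd phase occupies the bus
    cmd_lock.put(1);                // release: next cmd phase may start
    $display("%0t: transfer %0d data phase", $time, id);
    repeat (4) @(posedge clk);      // data phase runs independently
    $display("%0t: transfer %0d done", $time, id);
  endtask

  initial begin
    fork
      transfer(0);
      transfer(1);
      transfer(2);
    join
    $finish;
  end
endmodule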