UVM driver asynchronous FIFO issue


I am writing a UVM testbench for a CDC FIFO. I use separate read and write drivers so the virtual sequence can run independent read/write requests.

The first simple test I am running is multiple random reads and writes.

The virtual sequence body code:

  virtual task body();
    assert(randomize(m_times) with {m_times < FIFO_DEPTH;} );
    wr_seq   = cdc_fifo_write_seq::type_id::create ("wr_seq");
    rd_seq   = cdc_fifo_read_seq::type_id::create ("rd_seq");
    repeat (m_times) begin

The read and write drivers sequence code (each has its own sequence):

  virtual task body();
    `uvm_do (req)

The transaction type is shared between both drivers and contains a data member only.

The read driver run_phase code:

  task drive_read();
    transaction = new("transaction");
    wait(vif.reset_n == 0);
    vif.data_rd_en    <= 1'b0;
    wait(vif.reset_n == 1);

    @(posedge vif.clk_rd);

    forever begin
      // #1ns
      while (vif.empty) begin
        @(posedge vif.clk_rd);
      end

      vif.data_rd_en    <= 1'b1;
      @(posedge vif.clk_rd);
      vif.data_rd_en    <= 1'b0;
    end
  endtask
  task run_phase(uvm_phase phase);
    `uvm_info("FIFO read DRIVER", "Running run_phase", UVM_HIGH)

The error I am getting in simulation: the first write is immediately followed by a read, which makes the FIFO assert its empty signal, so the read driver should not assert data_rd_en for a second read, to avoid underflow.
However, empty seems to be updated slightly late because it is driven by combinational logic (some delta cycles, I guess), so the while (vif.empty) condition is skipped, causing an underflow.

If I add something like a #1ns delay before this condition it works fine, but how can I solve this issue a different way? Is there something wrong in the driver code or sequences?
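One common way to avoid this kind of delta-cycle race is to drive and sample through a clocking block in the interface, so the driver sees empty sampled with #1step input skew (i.e., the settled value just before the clock edge). A minimal sketch, with the interface name and signal list assumed from the driver code above:

```systemverilog
// Hypothetical interface sketch -- names assumed, not from the original post.
interface fifo_rd_if (input logic clk_rd, input logic reset_n);
  logic empty;
  logic data_rd_en;

  // Inputs are sampled #1step before the edge (after combinational logic
  // has settled), outputs are driven at the edge: no delta-cycle race.
  clocking rd_cb @(posedge clk_rd);
    default input #1step output #0;
    input  empty;
    output data_rd_en;
  endclocking
endinterface
```

The driver then waits on the clocking block instead of the raw clock, e.g. `while (vif.rd_cb.empty) @(vif.rd_cb);` before driving `vif.rd_cb.data_rd_en <= 1'b1;`, so the empty it tests is the value that had settled at the previous edge rather than a stale mid-update value.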

Many thanks

In reply to asadmosab:

Hard to follow with so little code shown, but maybe you need to put your first @(posedge vif.clk_rd); inside the forever loop, or an @(negedge vif.clk_rd); at the end of the forever loop.
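A sketch of the second suggestion, assuming the same signal names as the driver code above: sampling empty on the falling edge gives the combinational empty logic half a clock period to settle, while data_rd_en is still asserted across the rising edge, so back-to-back reads on consecutive rising edges remain possible.

```systemverilog
// Hypothetical rework of the forever loop -- a sketch, not the poster's code.
forever begin
  @(negedge vif.clk_rd);          // empty has settled by mid-cycle
  while (vif.empty)
    @(negedge vif.clk_rd);
  vif.data_rd_en <= 1'b1;         // asserted in time for the next rising edge
  @(posedge vif.clk_rd);          // DUT performs the read here
  vif.data_rd_en <= 1'b0;
end
```

If the FIFO is not empty, the loop reasserts data_rd_en at each falling edge, so reads occur on every rising edge with no gap.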

In reply to dave_59:

Thanks for the reply Dave.

Adding a clock delay solves the issue, but I am trying to send consecutive read requests with no delay in between. Is there a way to do that?

My guess is that the delay through the virtual interface to the driver is longer than the delay from the sequence to the driver for sending the next transaction. Can I control this delay?

If I add a condition in the tp_top where the DUT is instantiated not to read when empty, it also solves the issue.
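That gating can be as simple as the following sketch (signal and instance names assumed, not from the original post). Note that this hides the race from the DUT rather than fixing the driver's sampling, so the driver could still count a read that never happened:

```systemverilog
// Hypothetical gating in the top module where the DUT is instantiated:
// the DUT never sees a read enable while empty is asserted.
assign dut_rd_en = fifo_if.data_rd_en & ~fifo_if.empty;
```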

Which part of the code should I share to explain more?