Inserting delay in monitor if chip_select goes high (deselected)

I have a monitor which keeps sampling signals while chip_select (active low) is '0', as shown below:

virtual task run_phase(uvm_phase phase);
  super.run_phase(phase);
  forever begin
    @(posedge vif.sclk);
    if (vif.reset == 1) begin
      seq_item_collected.data_in = vif.data_in;
      seq_item_collected.cs_in   = vif.cs_in;

      @(vif.monitor_cb);  // clocking block event (no posedge qualifier)
      if (vif.cs_in == 0) begin  // chip_select
        seq_item_collected.data_out = vif.data_out;
        :
        :
      end
      trans_collected_port.write(seq_item_collected);
    end // reset == 1

  end // forever
endtask : run_phase

I face an issue here. Since I stop sampling as soon as I see cs_in (chip_select) go high, I get a mismatch on the last data sent.
Because data_out from the DUT is available one clock after data_in (which is in sync with cs_in), the last data_in is missed.

I wanted to sample just once more, right after I see cs_in going high.
I tried inserting the code below:
@(posedge vif.cs_in);
@(posedge vif.sclk);

But it's not helping. The above code made my test run for a very long time, and I had to break the simulation.
Is there a better way to do this?

Thanks in advance

In reply to uvmsd:

Your monitor implementation has to meet the requirements of your bus protocol. In most cases such a protocol expresses its delays in clock cycles and nothing else.
BTW, cs_in is a control signal and should not be a member of your seq_item class.
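To make the clock-cycle-based approach concrete, here is a minimal sketch reusing the names from the monitor above (treat it as an outline under those assumptions, not a verified replacement). The idea is to remember the previous cycle's cs_in and take exactly one more sample on the clock where cs_in transitions from 0 to 1, instead of blocking on `@(posedge vif.cs_in)`, which never fires if cs_in is already high when the wait starts:

```systemverilog
// Assumes vif, seq_item_collected and trans_collected_port as in the
// original monitor. cs_prev tracks the previous cycle's chip-select so
// the 0 -> 1 (deselect) edge can be detected on the sampling clock.
bit cs_prev = 1'b1;
forever begin
  @(posedge vif.sclk);
  if (vif.cs_in == 1'b0) begin
    // device selected: normal sampling as before
    seq_item_collected.data_in  = vif.data_in;
    seq_item_collected.data_out = vif.data_out;
  end
  else if (cs_prev == 1'b0) begin
    // deselect edge: data_out for the last data_in beat is valid on
    // this clock, so sample it exactly once before going idle
    seq_item_collected.data_out = vif.data_out;
    trans_collected_port.write(seq_item_collected);
  end
  cs_prev = vif.cs_in;
end
```

Because the edge is detected synchronously on vif.sclk, the loop cannot hang waiting for an asynchronous cs_in edge that has already passed.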

In reply to chr_sue:

@chr_sue, yes, I agree with both of your points.

  1. Since I am sampling my data_out based on cs_in, I am missing the last transaction. I think I need to look into it again.
  2. The protocol requires cs_in to be low (active low) for, say, a 400-byte packet. It needs to be aligned with data_in and should go high (deassert) after the last bit of the packet. I was confused about whether to generate cs_in in the sequence or in the driver. Since the packet is generated in the sequence, it was easy to generate cs_in aligned with the packet. Is there a way to generate the packet in the sequence and send an event to the driver so it drives cs_in at the packet boundaries? What is the ideal practice to follow in such cases?

In reply to uvmsd:

You are driving cs_in in your driver. It is useless to have it in the seq_item, because it has to follow the timing of your protocol, and the seq_item does not know anything about the timing…
What exactly do you mean when you say you lost the last transaction? Do you not see this transaction in the monitor? And why do you not see it? Is the simulation stopping in the meantime?
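One common way to keep cs_in entirely in the driver is to assert it for exactly the length of each frame pulled from the sequencer. A hypothetical sketch, using the usual UVM driver names (`seq_item_port`, `req`) and assuming the seq_item carries its payload in a byte array called `data`:

```systemverilog
// Hypothetical driver run_phase: cs_in is owned by the driver, which
// asserts it (active low) for the duration of each frame and deasserts
// it at the frame boundary. The field name req.data is an assumption.
virtual task run_phase(uvm_phase phase);
  forever begin
    seq_item_port.get_next_item(req);
    vif.cs_in <= 1'b0;                 // select for the whole frame
    foreach (req.data[i]) begin
      @(posedge vif.sclk);
      vif.data_in <= req.data[i];
    end
    @(posedge vif.sclk);
    vif.cs_in <= 1'b1;                 // deselect after the last beat
    seq_item_port.item_done();
  end
endtask
```

With this structure, the sequence only decides *what* data goes out and in what order; all cs_in timing, including alignment to packet boundaries, stays in the driver.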

In reply to chr_sue:

@chr_sue: In our case, cs_in is more like a data_valid. It has to be low (active low) just for the frame length. Since the frame-boundary info is easily available in the sequence, I had implemented it there, but I was facing an issue inserting a delay between the frames.
I have moved cs_in generation to the driver now. Things look good, and I have better control.

And thanks much for your inputs.

In reply to uvmsd:

You are wrong: the frame boundary is not known in the sequence. The sequence and seq_item do not know anything about the timing; they know only the order. Modelling cs_in in the seq_item creates your problem. Move it out and it will work. You can easily model gaps of random length between the frames.
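Modelling such a random-length gap in the driver can be as simple as idling for a randomized number of clock cycles after deselect. A small sketch (the min/max bounds are illustrative assumptions) that would sit between frames in the driver's forever loop:

```systemverilog
// Random inter-frame gap, driven entirely by the driver: after the
// frame ends, keep cs_in deasserted for a randomized number of clocks.
int unsigned gap;
gap = $urandom_range(2, 10);           // hypothetical min/max gap length
vif.cs_in <= 1'b1;                     // deselected between frames
repeat (gap) @(posedge vif.sclk);
```

Because the gap is drawn fresh for every frame, consecutive runs exercise different inter-frame spacings without the sequence knowing anything about cycles.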

In reply to chr_sue:

@chr_sue: I did mention in my above message that “I moved cs_in generation to driver now. Things look good and now have better control.”

I have moved cs_in generation out of sequence. That is what my previous message mentioned.

Thanks for your inputs.