Sending txns in a specific order between multiple interfaces

Hi Forum,
I am modeling a UVM TB to drive txns across 3 input interfaces to a DUT (a router).
Since there are 3 independent interfaces, I have 3 driver-sequencer pairs (one for each interface).
I am trying to generate 2 scenarios:

(1) Transactions are driven in parallel across the 3 interfaces. For this I can simply use:

// Within main_phase() of the test
  txn_seq seq1, seq2, seq3;
  phase.raise_objection(this);

  // Create the 3 sequences here
    ............
  // Then start them in parallel across the 3 interfaces
  fork
    seq1.start(env.agent_h.seqr_h1);
    seq2.start(env.agent_h.seqr_h2);
    seq3.start(env.agent_h.seqr_h3);
  join
  phase.drop_objection(this);

(2) Transactions are driven one at a time.
Eg: the 1st txn is driven on intf2, then the 2nd txn on intf3, and then the 3rd txn on intf1.

// Within main_phase() of the test
  txn_seq seq1, seq2, seq3;
  bit [1:0] order[3];
  phase.raise_objection(this);

  // Create the 3 sequences here
     .......................
  // Pick a random permutation of {1,2,3}
  void'(std::randomize(order) with { foreach (order[i]) order[i] inside {[1:3]}; unique {order}; });

  // Start the sequences one at a time, in the randomized order
  foreach (order[i]) begin
    case (order[i])
      1: seq1.start(env.agent_h.seqr_h1);
      2: seq2.start(env.agent_h.seqr_h2);
      3: seq3.start(env.agent_h.seqr_h3);
    endcase
  end

  phase.drop_objection(this);
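
One variant I considered: the same random one-at-a-time ordering can be obtained by shuffling a queue of indices instead of constraining an array (a sketch using the same sequence and sequencer handles as above):

```systemverilog
// Sketch: random one-at-a-time ordering via a shuffled index queue.
// Assumes seq1..seq3 and seqr_h1..seqr_h3 exist as in the code above.
int idx_q[$] = {1, 2, 3};
idx_q.shuffle();                 // built-in random permutation, no constraint needed

foreach (idx_q[i]) begin
  case (idx_q[i])
    1: seq1.start(env.agent_h.seqr_h1);
    2: seq2.start(env.agent_h.seqr_h2);
    3: seq3.start(env.agent_h.seqr_h3);
  endcase
end
```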

Is there an alternative / better approach to achieve (2) ?

What assures in your 2nd approach that the transactions will actually “be driven” by the BFM according to this order?

If the BFM waits for some pre_delay before driving a transaction on its interface, the actual order of transactions on the buses might be different (and in some cases they might even overlap).

You should recognize that the actual timing is defined by your drivers and not by your sequence execution.
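
To illustrate: if the driver completes the handshake before it actually drives the pins (a pipelined style), the bus order is decoupled from the order in which start() was called. A hypothetical driver fragment (names assumed):

```systemverilog
// Hypothetical pipelined driver fragment: item_done() is called before
// the pre-drive delay, so finish_item() in the sequence unblocks early
// and the next sequence can start before this txn hits the bus.
task run_phase(uvm_phase phase);
  forever begin
    seq_item_port.get_next_item(req);
    seq_item_port.item_done();                  // handshake completes immediately
    repeat (req.pre_delay) @(posedge vif.clk);  // assumed delay knob in the txn
    drive_txn(req);                             // hypothetical BFM task
  end
endtask
```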

I didn’t quite get your comment.
My understanding is that since the code is procedural, each start task will unblock only when its txn has been driven (i.e., when finish_item unblocks).
Assuming the driver uses the get_next_item & item_done approach, finish_item will unblock only once the driver has driven the txn on the interface.
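
The driver pattern this reasoning assumes looks like the following sketch (drive_txn is a hypothetical task that performs the pin wiggling):

```systemverilog
// Driver pattern assumed above: item_done() is called only after the
// transaction is fully driven, so finish_item() in the sequence (and
// hence seqN.start() in the test) unblocks only once the bus activity
// for that txn is complete.
task run_phase(uvm_phase phase);
  forever begin
    seq_item_port.get_next_item(req);
    drive_txn(req);                 // hypothetical task that drives the pins
    seq_item_port.item_done();      // handshake completes after driving
  end
endtask
```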

When you add the assumption that the driver uses get_next_item & item_done, it adds more clarity…

A few additional thoughts I could write down:

  1. There are 3 interface agents in your example, each one of them has its own sequencer and driver.
  2. You wrote: seq1.start(env.agent_h.seqr_h1);
    Is each txn_seq extended from a uvm_sequence_item or a uvm_sequence?
    Could the sequence have a stream of uvm_sequence_item(s) or txns?
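
If txn_seq is a uvm_sequence, it can generate a whole stream of items per start() call, which changes what "one at a time" means at the interface level. A minimal sketch (txn is an assumed uvm_sequence_item type, num_txns an assumed knob):

```systemverilog
// Minimal sketch: txn_seq as a uvm_sequence producing a stream of txns.
class txn_seq extends uvm_sequence #(txn);
  `uvm_object_utils(txn_seq)
  rand int unsigned num_txns = 5;   // assumed knob, not from the original post

  function new(string name = "txn_seq");
    super.new(name);
  endfunction

  task body();
    repeat (num_txns) begin
      req = txn::type_id::create("req");
      start_item(req);
      if (!req.randomize()) `uvm_error(get_type_name(), "randomize failed")
      finish_item(req);   // blocks until the driver calls item_done()
    end
  endtask
endclass
```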