Sequence driving issue

Hi,

I have a sequence driving issue. In my request I have an 8-bit msg_id field that has to be unique for every outstanding request transaction, so I declared a queue of msg_ids to track the ids currently in use.

Both the ways below give me issues:

  1. I pushed the id into the queue when I saw a request transaction and deleted it when the response transaction with that id came out on the output interface (a sketch of this bookkeeping follows the list).
    The issue with this approach is that my virtual sequence generates transactions and sends them to the sequencer, where they sit in a queue, and the delay between request transactions can be anything. So there is a gap between when a sequence item is generated and when it is actually driven on the interface, but my tracking queue is based only on the observed activity on the input and output interfaces and does not see the items still lying in the sequencer's queue. Hence the sequence can pick an id that was already generated but is still waiting in the sequencer's queue, and when the second instance of that id finally gets transferred onto the interface, my unique-id assertion fires.

Then I moved the pushing of ids (marking them as in use) from my system monitor into the vseq, so that they are tracked at generation time.

  2. Again, due to the large delay between sequence generation and the actual time of driving, I run out of all 256 ids because they are all sitting in the sequencer's queue, and my constraint solver fails because it cannot find a unique id.
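
For context, the monitor-side bookkeeping in approach 1 looks roughly like the following; my_txn, its is_response flag, and the cfg handle are illustrative names, not my exact code:

function void write(my_txn t);   // analysis_imp callback in the system monitor
  int idx[$];
  if (!t.is_response) begin
    // request observed on the input interface: mark its id as in use
    cfg.active_ids.push_back(t.msg_id);
  end
  else begin
    // response observed on the output interface: the id is free again
    idx = cfg.active_ids.find_first_index(x) with (x == t.msg_id);
    if (idx.size() > 0)
      cfg.active_ids.delete(idx[0]);
  end
endfunction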

I have something like below in my virtual seq:



bit [7:0] active_ids[$];

task body();
  int num_repetitions = 1000;

  while (num_repetitions > 0) begin
    fork
      if (active_ids.size() > 0) begin
        drive_seq1();
        drive_seq2();
        // decrement the number of repetitions when the req was successfully sent to the sequencer
        num_repetitions--;
      end
    join
  end
endtask: body

task drive_seq1();
  assert(req_seq.randomize() with {
    !(msg_id inside {active_ids});
  })
  else begin
    `uvm_fatal(get_type_name(), "Randomization failure")
  end
  req_seq.start(p_sequencer.req_seqr);
endtask: drive_seq1


Either my simulation hangs with time not advancing, or it exits with num_repetitions less than 1000.

I am not sure how to solve this issue.
Could anyone explain, possibly with a code snippet, how to solve it?

Thanks in advance,
Madhu

In reply to mseyunni:

I don’t see enough information here to answer your question.
Why do you have a fork-join with a single if-statement inside it?
If active_ids starts out empty, how is the if-condition ever satisfied?

It appears that you want to make sure that a) each msg_id is unique relative to other ids that are in-flight at the time the message is created, and b) you only want to assign/store msg_ids when the message is sent to the DUT.

If this is the case, then you could use the config_db to ensure that the driver sequence and monitor both have a handle to the same active_id queue. In the sequence, you can randomize the msg_id in between start_item() and finish_item(), and constrain it so that it is not inside active_ids, as you show in the constraint block in your drive_seq1() task. So, you’ll only be randomizing the msg_id when the transaction is about to be sent, so you don’t have to worry about pre-assigning ids. Then, you can use the monitor to recognize response transactions and delete the ids when you see them.
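
A rough sketch of that idea is below, assuming a request item type with the msg_id field and a shared my_config handle obtained via uvm_config_db; names beyond the ones already in this thread (my_req, tr) are just for illustration:

// Inside the request sequence's body(); cfg is the shared my_config handle
// retrieved via uvm_config_db (retrieval code omitted here).
task body();
  my_req req;                              // your request item type
  req = my_req::type_id::create("req");
  start_item(req);
  // Randomize only when the item is about to be sent to the driver, so ids
  // are not pre-assigned while items sit in the sequencer queue.
  if (!req.randomize() with { !(msg_id inside {cfg.active_ids}); })
    `uvm_fatal(get_type_name(), "Randomization failure")
  cfg.active_ids.push_back(req.msg_id);    // mark the id as in flight
  finish_item(req);
endtask

// In the monitor, delete the id when the matching response is observed:
//   idx = cfg.active_ids.find_first_index(x) with (x == tr.msg_id);
//   if (idx.size() > 0)
//     cfg.active_ids.delete(idx[0]);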

In reply to tfitz:

Why do you have a fork-join with a single if-statement inside it?

Sorry, I left out some of the code. Here is the updated code, which works for me. I would like to know whether this is correct and will work in all possible situations.



class my_config extends uvm_object;

  bit [7:0] active_ids[$];

endclass

my_config cfg; // handle to the shared my_config object
               // Get the handle to the my_config object
               // code omitted intentionally.

task body();
  int num_repetitions = 1000;
  int scenario_id = 1;
  fork
    while (scenario_id <= num_repetitions) begin
      wait(cfg.active_ids.size() < 256);
      drive_seq1();
      drive_seq2();
      scenario_id++;
    end
    drive_seq3();
    drive_seq4();
  join
endtask: body

task drive_seq1();
  assert(req_seq.randomize() with {
    !(msg_id inside {cfg.active_ids});
  })
  else begin
    `uvm_fatal(get_type_name(), "Randomization failure")
  end
  req_seq.start(p_sequencer.req_seqr);
endtask: drive_seq1

drive_seq2(), drive_seq3() and drive_seq4() run their sequences on their respective sequencers.


If this is the case, then you could use the config_db to ensure that the driver sequence and monitor both have a handle to the same active_id queue. In the sequence, you can randomize the msg_id in between start_item() and finish_item(), and constrain it so that it is not inside active_ids, as you show in the constraint block in your drive_seq1() task. So, you’ll only be randomizing the msg_id when the transaction is about to be sent, so you don’t have to worry about pre-assigning ids. Then, you can use the monitor to recognize response transactions and delete the ids when you see them.

Yes, I am using the config_db as you suggested in one of my previous posts (thanks for that).

In the above code I have put a wait statement to stop trying to randomize seq1 when there are no free ids. Once the scoreboard/system monitor frees them up, the wait unblocks and the loop moves on.

I have some questions:

  1. If I replace the wait with if (cfg.active_ids.size() < 256) … do stuff, the simulation hangs. Could you explain why?

  2. Should I encapsulate the while block in a separate begin … end block, or does it not matter?

  3. I would like to run seq2() only when I start seq1(), and I would like to run seq3 and seq4 in parallel with seq1 (and seq2). seq1 must not use any ids that are in use (which I am taking care of by tracking the ids as above), and it must also avoid the ids taken by seq3 and seq4. Basically, seq3 and seq4 pick up free ids to invalidate caches inside the DUT, and I have to make sure that seq1 does not use those ids until I see the invalidation happen in the RTL. How can I achieve this, i.e. stop seq1 from taking the ids picked up by seq3 and seq4?

Thanks,
Madhu