In SystemVerilog LRM (1800-2012), section 4.4.2.3:
“If events are being executed in the active region set, an explicit #0 delay control requires the process to be suspended and an event to be scheduled into the Inactive region of the current time slot so that the process can be resumed in the next Inactive to Active iteration.”
Can anyone help explain how this “Inactive to Active iteration” works? Let’s use the following code from uvm_sequence_base::start() as an example. I removed extra lines to simplify the sequence of events, and I assume the default pre/post_body and pre/post_start behavior (they return immediately, so they consume no time). When the forked process starts, it is in the Active region. When the first #0 occurs (before pre_start), the current process suspends and an event gets scheduled into the Inactive region? Then the process has to wait for the other concurrent processes to finish their current Active region events before the scheduler moves back and re-activates the event scheduled in the Inactive region (calling pre_start)? Is this whole sequence considered one “Inactive to Active iteration”?
In a totally different process, uvm_phase::execute_phase(), there are #0s between phase states. For non-task phases, none of the states consume time, so all states finish within one time slot. I assume these “phase state” + “#0” pairs are also “Inactive to Active iterations”? How do these iterations interact with the iterations in uvm_sequence_base::start()?
fork
  begin
    m_sequence_process = process::self();

    // absorb delta to ensure PRE_START was seen
    #0;
    pre_start();

    m_sequence_state = UVM_PRE_BODY;
    #0;
    pre_body();

    m_sequence_state = UVM_BODY;
    #0;
    body();

    m_sequence_state = UVM_ENDED;
    #0;

    m_sequence_state = UVM_POST_BODY;
    #0;
    post_body();

    m_sequence_state = UVM_POST_START;
    #0;
    post_start();

    m_sequence_state = UVM_FINISHED;
    #0;
  end
join
LRM section 4.5 partly answered my question…now I think all processes proceed and get scheduled in parallel. If there are iterations in the so-called iterative regions, a process moves back and forth between the Active (or Reactive) region and the other regions.
Now I’m wondering what #n or ##n (with a default clocking) would mean in scheduling semantics. Does the process move directly to the Active region n time units away / n clock cycles away?
Before answering this, let me just say that using #0 in your code is a very poor programming practice (even for the UVM base class library). You are just moving a race condition forward by one Active-to-Inactive iteration, not eliminating it. Seeing #0s usually means the authors wrote their code too quickly, or did not take the time to understand another piece of code and remove the race condition in the first place.
You are correct that all processes share one Active event region. Verilog executes the Active events in any order it chooses until the region is empty. Executing an event can create new events in any region, so the Active region must drain empty before the scheduler can move on. There is an Inactive region for every scheduled delay. When the Active region empties, the events in the #0 Inactive region become the Active region, and execution continues. If the #0 Inactive region is empty, the scheduler looks at the next region (NBA), and that becomes the Active region, and so on for all the other regions in the current time slot. Once all the regions in the current time slot are empty, the scheduler looks for the next time slot with a non-empty region, makes that the Active region, and the current time jumps to that time slot.
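To make the iteration order concrete, here is a minimal sketch (not from UVM, names are made up) that should print A, B, C in that order: the #0 process resumes from the Inactive region after the Active region drains, and the nonblocking update to v only becomes visible once the NBA region has executed.

module region_order;
  int v;
  initial begin
    v <= 1;                                        // update scheduled in the NBA region
    $display("A: active, v=%0d", v);               // v is still 0 here
    #0 $display("B: inactive->active, v=%0d", v);  // still 0; NBA has not run yet
    @(v);                                          // wakes when the NBA region updates v
    $display("C: after NBA update, v=%0d", v);     // now 1
  end
endmodule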
Simulation events only execute in the Active region. A #n delay schedules an event into the Inactive region of the time slot n time units away. ##n is just a shortcut for repeat (n) @(cb), where cb is the default clocking block.
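As a quick illustration (a sketch with made-up names; clk and cb are assumptions, not anything from the thread):

module delay_demo;
  bit clk;
  always #5 clk = ~clk;

  // a default clocking makes the procedural ##n shortcut legal
  default clocking cb @(posedge clk); endclocking

  initial begin
    #10;               // resumes via the Inactive region 10 time units away
    ##3;               // shortcut for: repeat (3) @(cb);
    repeat (3) @(cb);  // the explicit form of the line above
    $finish;
  end
endmodule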
Thanks for your response. I know I can count on you :)
But is #0 the whole reason the Inactive region exists in SystemVerilog event scheduling? And is the whole purpose of the Active/Inactive iteration to give a process a chance to go back to the Active region?
Any delay creates an inactive region event; #0 is just a special case.
The problem with #0s is that they have a tendency to accumulate. You add a #0 to wait for another process to finish, then someone adds a #0 for something else, and then you need a #0 #0.
A better solution is using mailboxes or semaphores, or waiting for an NBA update event.
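As an illustration of the semaphore alternative (a sketch with made-up names, not UVM code), an explicit handshake removes the guesswork entirely:

module handshake;
  semaphore go = new(0);     // starts with no keys

  initial begin : producer
    // ... set up whatever state the consumer depends on ...
    go.put(1);               // signal readiness explicitly
  end

  initial begin : consumer
    go.get(1);               // blocks until the producer is done; no #0 guessing
    // ... safely consume the producer's state ...
  end
endmodule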
I guess #0 comes in handy when we try to model a sequence of events within one time slot and want to guarantee those events occur in a deterministic order, assuming no dependency on or from another process’s set of events.
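For instance (a contrived sketch of what I mean, deterministic only as long as nothing else races with these two processes):

module order_with_pound0;
  initial
    $display("step 1");    // executes in the first Active iteration

  initial begin
    #0;                    // defer to the Inactive region
    $display("step 2");    // prints after the Active region has drained
  end
endmodule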
The following is commonly used after join_none to let the forked process start. Is there any other way to replace the #0 in this scenario? Thanks!
// phase runner, isolated from calling process
fork begin
  // spawn the phase runner task
  phase_runner_proc = process::self();
  uvm_phase::m_run_phases();
end
join_none
#0; // let the phase runner start
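One possible replacement (just a sketch, not the actual UVM code) is to block on the handle the child assigns, so the parent resumes exactly when the runner has started:

process phase_runner_proc;
fork begin
  phase_runner_proc = process::self();  // child publishes its handle first
  uvm_phase::m_run_phases();
end
join_none
wait (phase_runner_proc != null);  // resumes once the runner has actually started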
Erhhh, I probably picked the worst example :) There are other examples here and there with nothing after the #0 (no wait, etc.), where the #0 exists purely to move the region forward.
But I do agree with you that there might be a better approach. Actually, I just noticed that in the same uvm_phase::execute_phase() task there are a few calls to uvm_wait_for_nba_region(). It is probably the approach you just mentioned, “wait for an NBA update event”…
This task is defined in uvm_globals.svh. Basically it does a dummy NBA assignment and then gets back to the Active region by waiting on @(nba) before the task returns.
task uvm_wait_for_nba_region;
  string s;
  int nba;
  int next_nba;

  //If `included directly in a program block, can't use a non-blocking assign,
  //but it isn't needed since program blocks are in a separate region.
`ifndef UVM_NO_WAIT_FOR_NBA
  next_nba++;
  nba <= next_nba;
  @(nba);
`else
  repeat(`UVM_POUND_ZERO_COUNT) #0;
`endif
endtask
The restriction on NBAs in program blocks was removed back in 1800-2005.
But you can see that people made arbitrary #0 loops to make things work. The code would work until it broke, and then they would increase the loop count. Very inefficient.