I can see one reason, and it can be explained by considering the evaluation regions within a time step (see my paper).
clk is the RTL design clock; internal_clk is the testbench (TB) clock. The internal_clk is derived from the RTL clk:
always @(cb_all or negedge CLK) internal_clk = CLK;
Both clock edges occur in the SAME time step (e.g., at t==100ns), but the RTL activities in the ACTIVE region are processed first. Example:
// RTL
assign d = b && k;            // "d" is evaluated and assigned in the ACTIVE Region
always @(posedge clk) a <= b; // "a" is evaluated in the ACTIVE Region, but is assigned in the NBA Region.
// TB
The TB could use cb_all. However, if it uses internal_clk instead: because internal_clk is derived from clk, its edge is the last queued clocking event for that same time step. Thus, any ACTIVE-Region processing triggered after it is TB related, kept apart from the ACTIVE-Region processing done first for the RTL code.
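Putting the pieces together, here is a minimal self-contained sketch of that ordering (only internal_clk and cb_all come from the discussion above; the module, clock period, and signals a/b are assumed for illustration):

module tb;
  bit clk = 0, internal_clk;
  logic a, b = 1;

  always #5 clk = ~clk;  // RTL design clock

  clocking cb_all @(posedge clk);
  endclocking

  // Derived TB clock: the cb_all event fires in the Observed region,
  // after the design's ACTIVE and NBA work for this posedge, so the
  // resulting posedge of internal_clk is queued later in the same time step.
  always @(cb_all or negedge clk) internal_clk = clk;

  always @(posedge clk) a <= b;  // RTL: evaluated in ACTIVE, updated in NBA

  // TB: by the time internal_clk rises, the NBA update of "a" is visible.
  always @(posedge internal_clk)
    $display("[%0t] TB sees a=%b", $time, a);

  initial begin
    repeat (3) @(posedge clk);
    #1 $finish;
  end
endmodule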
I don't think it makes a difference. For the TB, the recommendation is to use clocking blocks.
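For reference, a bare-bones clocking-block sketch (the names cb, a, and b are assumed, not from the thread); sampling and driving through the clocking block gives the race isolation without a derived clock:

module tb;
  bit clk;
  logic a, b;

  always #5 clk = ~clk;
  assign a = b;  // stand-in for the design

  clocking cb @(posedge clk);
    default input #1step output #0;
    input  a;  // sampled in the Preponed region, just before the edge
    output b;  // driven in the Re-NBA region, after design NBA updates
  endclocking

  initial begin
    @(cb) cb.b <= 1;                       // race-free drive
    @(cb) $display("sampled a=%b", cb.a);  // race-free sample
    $finish;
  end
endmodule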
See https://www.perplexity.ai/search/in-systemverilog-coding-why-is-aLyZnh1sRFasVEfXtAM_Mw
Bottom line: Maybe an extra layer of isolation.