Synchronization of two interfaces in two different clock domains in a predictor

Dear UVM experts,
I would like some ideas and information on how to proceed with the verification of two interfaces that use two different clock domains.
I just want high-level ideas on how to do this.
The problem I have is the following:
Input interface A delivers monitored_XFER_A to the predictor at time T1; at the same time, input interface B delivers monitored_XFER_B to the same predictor.

Now I would like to predict how the DUT should behave. In other words, I want to predict when these two xfers produce a change in the DUT output (another xfer on a third monitored interface).
However, I cannot use those transfers directly, because the design samples the bus at different times than the predictor does. The designer has a pipeline of registers that delays the incoming bus data on its way to the DUT logic (and not only registers: there can be state machines or complex logic that delays the reaction of the DUT by a fixed number of clocks).
This is worse when there are different clock domains, since the designer normally adds even more registers to avoid clock-domain-crossing problems.
That means the DUT only sees the interface A transfer at, e.g., T1+3 and the interface B transfer at T1+5. In that case, the DUT reacts at a different time and sees a different case than my predictor does, which receives both monitored interface xfers at the same time T1.

The question is: what are the typical solutions for such cases? Is there a correct way to handle these problems in UVM? Any link for reference?
I can think of creating "unsafe temporal" windows (where one interface is seen, or not, with respect to the other in the predictor), or of delaying the monitored xfers according to the DUT output clock.
What are your suggestions?
Thanks in advance.

In reply to JA:

This is difficult to answer without knowing the differences in timing implemented by the DUT versus the predictor. In many cases you have to put your transactions into an analysis FIFO for each interface and wait for all interfaces to send their transactions before sending them on to the predictor.
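
Something along these lines, as a sketch (xfer_a_t/xfer_b_t are placeholder transaction types, and the connection in the env is only indicated in a comment):

class my_predictor extends uvm_component;
   `uvm_component_utils(my_predictor)

   // one analysis FIFO per monitored interface
   uvm_tlm_analysis_fifo #(xfer_a_t) fifo_a;
   uvm_tlm_analysis_fifo #(xfer_b_t) fifo_b;

   function new(string name, uvm_component parent);
      super.new(name, parent);
      fifo_a = new("fifo_a", this);
      fifo_b = new("fifo_b", this);
   endfunction
endclass

// in the env's connect_phase (assumed names):
//   monitor_a.ap.connect(predictor.fifo_a.analysis_export);
//   monitor_b.ap.connect(predictor.fifo_b.analysis_export);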

Thanks Dave (you are the best),
Do you mean that I have to implement this waiting in the monitor, in the environment, or in the predictor?
Since this depends on each DUT, would it not be more convenient to implement it inside the predictor, which holds the DUT-specific implementation?

And do you have a link or an example of this construction?

Do you mean using the analysis FIFO described here:
https://verificationacademy.com/verification-methodology-reference/uvm/docs_1.1d/html/files/tlm1/uvm_tlm_fifos-svh.html
The description says: " Typical usage is as a buffer between an uvm_analysis_port in an initiator component and TLM1 target component."
Yes, that is what I want: a buffer between those points. But
how do I control which value of the buffer I take?

I mean that, in the end, I have to wait as many clocks (of the specific clock of the interface I will predict) as are needed to align with the triggering output event of the DUT. In order to do that, I need to take the clock period information from the interface, and know how many clocks I need to wait, depending on the DUT implementation.

Now that I know that the recommended UVM solution is buffering (with an analysis port), I wonder if creating my own shift-register FIFO, read on each DUT clock, would also be an accepted solution. The FIFO input would be the previous monitored value if there is no new value from the monitor, or the new value from the monitor otherwise. In such a FIFO, I know that each place/value corresponds to a look into the past of the pipeline structure of the DUT. Then I only need to know which place of the FIFO to read in the predictor to be aligned with the DUT.
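
What I have in mind is roughly the following (just a sketch; all the names are invented by me):

// a pipeline of the last N monitored values, shifted on every DUT clock
localparam int PIPE_DEPTH = 8;        // deeper than the worst-case DUT pipeline
A_xfer_t pipe[PIPE_DEPTH];
A_xfer_t latest_monitored_xfer;       // updated by the monitor's write()

task shift_on_dut_clk();
   forever begin
      @(posedge dut_out_clk);
      for (int i = PIPE_DEPTH-1; i > 0; i--)
         pipe[i] = pipe[i-1];               // one step further into the past
      pipe[0] = latest_monitored_xfer;      // held value if nothing new arrived
   end
endtask

// the predictor then reads pipe[K], where K is the DUT pipeline depth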
Is there something like that in UVM?

I do not know if I can do the same with the analysis FIFO. (I hope I have made myself understood.)
Thanks.

In reply to JA:

See if this helps: AnalysisConnections | Verification Academy

Hi JA,

I see your monitor is plugged onto an interface which gives your predictor information too early. I understand the logic you would like to predict sits a little deeper down in the RTL, after some pipeline stages. I think it is better to plug your monitor at that level, i.e., on an interface (say Interface_A_int) close to the logic you would like to predict. You will be lucky if your Interface_A and Interface_A_int exhibit a one-to-one mapping behavior, so that you can reuse your monitor as-is.

An alternative approach is to avoid modeling such time-sensitive things entirely (as it has its own maintenance effort) and accomplish the verification using extensive SVAs. We don't always need to model/predict in a class container :-)
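
For example, if the spec fixes the latency from interface A to output C, a single property can check it directly (the signal names and the latency of 5 clocks are just for illustration):

// illustration only: a_valid/c_valid and the latency of 5 are made-up names
property a_to_c_fixed_latency;
   @(posedge clk) disable iff (!rst_n)
      a_valid |-> ##5 c_valid;
endproperty

ASSERT_A2C: assert property (a_to_c_fixed_latency)
   else $error("C did not react 5 clocks after A was valid");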

These are my kind suggestions.

Thanks,
Prem

In reply to run2prem:

Hi,
thanks for your answer. I cannot plug the monitor into the internal logic of the DUT, because the DUT has already dissolved the protocol internally into different state machines and registers. It is difficult to have a perfect match of the protocol after these pipeline stages. I agree it would be good if I could match the interface with the pipelines perfectly, but that is not the case. :(

Thanks!!!
Best regards,
JA

In reply to JA:

I’d assume your DUT has to give an indication of which values are valid. This is also needed for any electronics around your DUT. You have to use exactly this indication signal.

In reply to chr_sue:

Thanks, that is another solution: just spy on a signal inside the DUT and use it in the predictor.
I always tend to verify the DUT as a black box and avoid extracting internal state into my predictor. Doing this with a spy/bound signal has the disadvantage of not using the TLM handshaking of your monitor, and a second disadvantage is that, in case we simulate the netlist, the hierarchy of that signal (path and name) could change.
A third thing I see (which could be good or bad) is that I want to predict in a static manner. I mean, if three clock delays are needed after the transaction reaches the predictor, then there should also be three clock delays in the future (even if some DUT modification is done). If I use the internal/spy valid signal, then I have a predictor which adapts to the DUT (via the spy of that signal). That could have negative or positive effects.

But YES, that is a good solution.
Thanks!!

In reply to JA:

That was not my answer. I was assuming there is an external signal which indicates that a value is valid.
What does your design spec say? Is there only a relationship defined between the asynchronous clocks? If yes, then you can observe it with an SVA, and you can generate an event there to be used in your UVM environment.
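
For example (a hypothetical sketch; the signal names and the latency are made up), the pass action of an assertion can trigger an event your UVM environment waits on:

// fire an event towards the UVM side when the defined relationship
// between the interfaces is observed (e.g. via a virtual interface)
event dut_sampled_a;

DUT_SAW_A: assert property (
   @(posedge clk) disable iff (!rst_n) a_valid |-> ##3 1'b1
) ->dut_sampled_a;   // non-vacuous pass: the DUT has now sampled A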

In reply to chr_sue:

The DUT has two interfaces. Each interface has a different protocol. It is defined when each protocol or interface is ready with its data. However, internally the DUT reads/reacts to these defined interfaces at different pipeline stages to create another output, which is what I want to predict.
Thanks

In reply to JA:

This sounds like an out-of-order approach, right? Then you have to collect your responses and compare those with the same id. This is what I would do.
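
For example, roughly like this (assuming your transaction has an id field and the usual uvm_object compare()):

// rough sketch: match expected and actual transactions by id
A_xfer_t expected_by_id[int];

function void write_expected(A_xfer_t t);
   expected_by_id[t.id] = t;
endfunction

function void write_actual(A_xfer_t t);
   if (expected_by_id.exists(t.id)) begin
      if (!t.compare(expected_by_id[t.id]))
         `uvm_error("CMP", $sformatf("mismatch for id %0d", t.id))
      expected_by_id.delete(t.id);
   end
endfunction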

In reply to chr_sue:

The problem is not to keep the order of the transactions, but to find the point in time where the DUT decides to do something on the output interface with the information it has from the input interfaces.

I have tried to create a more visual example.


THIS IS JUST AN EXAMPLE, NOT CONSIDERING THAT INT(A) AND INT(B) HAVE DIFFERENT CLOCKS

TIME MARK             T1 T2 T3 T4 T5 T6 T7
MONITOR A SEND2PRED   A1 0  A2 0  0  0  0
MONITOR B SEND2PRED   B1 0  B2 0  0  0  0
DUT sees A XFER       0  0  A1 A1 A2 A2 A2    (delay due to internal logic)
DUT sees B XFER       0  0  0  0  B1 B1 B2
DUT REACTS on C       0  0  0  0  1  1  2
clock                 C  C  C  C  C  C  C


As I said, the DUT sees a valid interface A and interface B at times T3 and T5, respectively. So its logic (interface/logic C) reacts differently from the predictor, which sees the monitored xfer of A and the monitored xfer of B both at the same time T1.

The solution proposed here is to put, between each monitor and the predictor, a FIFO in which I can delay each interface by the required number of clocks, so that I predict at the same time the DUT reacts, and therefore get no mismatches on interface C (or whatever signal/register you want to predict). This problem happens mainly when you have many interfaces with different timings and the output depends on status information coming from the different interfaces.
Thanks

In reply to JA:

I do not understand why you want to delay a transaction coming from any monitor. On the Transaction Level (TL) you do not think in terms of time. There is only one ordering scheme, and that is the order in which the transactions have been extracted by the monitors. uvm_analysis_fifos can store the transactions. Doing a get on both FIFOs will block until transactions are available. This is how TLM works.
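
A minimal sketch, assuming one uvm_tlm_analysis_fifo per interface (names as in the sketch further up):

task run_phase(uvm_phase phase);
   xfer_a_t xa;
   xfer_b_t xb;
   forever begin
      fifo_a.get(xa);    // blocks until monitor A has delivered a transaction
      fifo_b.get(xb);    // blocks until monitor B has delivered a transaction
      predict(xa, xb);   // hypothetical prediction routine
   end
endtask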

In reply to JA:

Here I present a very simple implementation of a delay. With this example the user gets a delayed, copied version of the monitored transaction (A_xfer_delayed_class_var). The user can then call another specific write function where the prediction is done at the correct time.


class predictor_MODULE extends uvm_component;
   `uvm_component_utils(predictor_MODULE)

   //declare the TLM imp with the suffix _monitor_A_interface
   `uvm_analysis_imp_decl(_monitor_A_interface)
   // ...

   //DECLARATION
   A_xfer_t A_xfer_delayed_class_var;
   event    A_xfer_delayed_class_var_event;
   longint  class_var_time_from_A_to_DUT_reaction_FUNCTION_A_ps;

   virtual function void write_monitor_A_interface(A_xfer_t A_xfer);
      A_xfer_t A_xfer_fork;   // automatic: each call gets its own copy
      longint  delay_ps;      // capture the delay before spawning

      //defined 7 clocks (pipeline depth) + small 10 ps offset
      class_var_time_from_A_to_DUT_reaction_FUNCTION_A_ps =
         A_xfer.interface_clk_period_ps*7 + 10;
      delay_ps = class_var_time_from_A_to_DUT_reaction_FUNCTION_A_ps;

      $cast(A_xfer_fork, A_xfer.clone());   //DEEP COPY before spawning the process

      fork
         begin
            #(delay_ps * 1ps);   //delay until the DUT has reacted
            //DEEP COPY to the class variable at the correct time
            $cast(A_xfer_delayed_class_var, A_xfer_fork.clone());
            ->A_xfer_delayed_class_var_event;
            //call the prediction function at the correct time
            write_delayed_custom(A_xfer_delayed_class_var);
         end
      join_none
   endfunction

   virtual function void write_delayed_custom(A_xfer_t A_xfer);
      //implement the prediction at the correct time using the other
      //(equally delayed) interfaces
   endfunction

   // ...
endclass

You can also implement this for different input types: instead of "A_xfer_t" you can delay a "bit" signal. In that case you do not need clone, just assignments.
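
For example (a sketch with made-up names):

// same idea for a plain bit: an assignment is enough, no clone needed
bit   a_valid_delayed;
event a_valid_delayed_event;

virtual function void write_monitor_A_valid(bit a_valid);
   fork
      begin
         #(class_var_time_from_A_to_DUT_reaction_FUNCTION_A_ps * 1ps);
         a_valid_delayed = a_valid;   // plain copy of the value
         ->a_valid_delayed_event;
      end
   join_none
endfunction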

I recommend defining class_var_time_from_A_to_DUT_reaction_FUNCTION_A_ps as:

class_var_time_from_A_to_DUT_reaction_FUNCTION_A_ps = A_xfer.interface_clk_period_ps*7 + 10;

In other words, I recommend expressing the delays in terms of clock-period variables (A_xfer.interface_clk_period_ps). That is because those delays, or internal pipelines, normally depend on the defined interface clock. So if the clock period for that monitored interface is changed at some point, you will still have a correct prediction.

Another consideration to take into account is what happens if there is a reset in the xfer. In that case, probably no delay would be needed.
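
For example (a sketch; is_reset is an assumed field of A_xfer_t):

// hypothetical: let a reset bypass the pipeline delay
virtual function void write_monitor_A_interface(A_xfer_t A_xfer);
   if (A_xfer.is_reset) begin
      write_delayed_custom(A_xfer);   // react immediately, without delay
      return;
   end
   // ...otherwise apply the delayed path shown above
endfunction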

In reply to chr_sue:

The reason for the delays is that keeping the order of the transactions is not enough to have a CORRECT prediction of the output. As I said, the small delays between the two input interfaces create different output values (or predicted values) that I need to model.

In reply to JA:

I understand what you are saying. Then both interfaces are not completely independent; it seems there is a timing relationship between the clocks. I guess there is a minimum and a maximum timing delay. I'd run the tests with the minimum and the maximum clock delays.