Why do I have to keep aligning the predictor with the design under test?

This is not a UVM-specific question but a general verification question.

I have attempted to write predictor logic for some in-house IPs (using a UVM testbench), but I always end up spending time aligning the predictors to match RTL behaviour, which is not the right way to do things. It means I must be doing something wrong, not working at a high enough abstraction level, or maybe a predictor is not the right way to verify a particular IP.

Let’s take a simple example of a protocol converter IP, e.g. I2C ↔ Wishbone (simple behaviour).
These are two simple protocols, and I have no issues verifying them on their own.
Adding to the requirements, there is a bunch of configuration inputs which control the behaviour of the I2C side and how frames are converted to Wishbone (WB). Some of these inputs take effect immediately and some only after the in-flight frame is complete (a STOP or REPEATED START has been seen).

The predictor samples the configuration inputs when the I2C monitor detects a START, then waits for the I2C frame to complete and checks the expected response (ACK/NACK) and data (from WB).
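
For reference, the predictor flow is roughly like the sketch below. The class and field names (i2c_frame, wb_item, conv_cfg, base_addr) and the conversion rule are simplified placeholders for illustration, not the actual testbench code; how the configuration snapshot is taken at START is discussed further down.

    import uvm_pkg::*;
    `include "uvm_macros.svh"

    // Placeholder transaction and configuration types, for illustration only
    class i2c_frame extends uvm_sequence_item;
      rand bit [6:0] addr;       // 7-bit I2C target address
      rand bit       rnw;        // 1 = read, 0 = write
      rand byte      data[$];    // payload bytes of the frame
      bit            ack;        // response observed by the monitor
      `uvm_object_utils(i2c_frame)
      function new(string name = "i2c_frame"); super.new(name); endfunction
    endclass

    class wb_item extends uvm_sequence_item;
      rand bit [31:0] adr;
      rand bit [31:0] dat;
      rand bit        we;
      `uvm_object_utils(wb_item)
      function new(string name = "wb_item"); super.new(name); endfunction
    endclass

    class conv_cfg extends uvm_object;
      bit [31:0] base_addr;      // example knob: WB base address used by the converter
      `uvm_object_utils(conv_cfg)
      function new(string name = "conv_cfg"); super.new(name); endfunction
    endclass

    // Predictor: consumes completed I2C frames, produces expected WB items
    class i2c2wb_predictor extends uvm_component;
      `uvm_component_utils(i2c2wb_predictor)

      uvm_analysis_imp #(i2c_frame, i2c2wb_predictor) i2c_export;   // from I2C monitor
      uvm_analysis_port #(wb_item)                    expected_ap;  // to scoreboard

      conv_cfg cfg;   // configuration snapshot, taken when START is seen (not shown here)

      function new(string name, uvm_component parent);
        super.new(name, parent);
        i2c_export  = new("i2c_export", this);
        expected_ap = new("expected_ap", this);
        cfg         = conv_cfg::type_id::create("cfg");
      endfunction

      // Called by the I2C monitor once per completed frame
      function void write(i2c_frame fr);
        wb_item exp = wb_item::type_id::create("exp");
        exp.adr = cfg.base_addr + fr.addr;          // toy conversion rule
        exp.we  = !fr.rnw;
        if (fr.data.size() > 0) exp.dat = fr.data[0];
        expected_ap.write(exp);                     // scoreboard compares vs. WB monitor
      endfunction
    endclass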

The main issue I faced with this IP was deciding when to send a configuration update to the predictor.
From the testbench's point of view, the frame is complete when the I2C monitor broadcasts the event, but the DUT considers it complete when its internal FSM returns to IDLE (which is not necessarily when it sees STOP; it can be earlier).
The DUT applies the configuration when its FSM goes to IDLE, which is earlier than when the predictor applies it, and this causes mismatches between the two.
The point at which the DUT FSM goes to IDLE also varies with the configuration inputs. I believe every design will have some sort of quirky behaviour in at least a few situations.

So I am interested to hear experts' opinions. If anyone has suggestions on how I can improve my approach to writing predictors, I am all ears.

In reply to njain:

Your assumptions about the predictor, or more accurately your reference model, are wrong.
The predictor/reference model is a TL (transaction-level) model and it is completely independent of any timing. It does not know anything about clock cycles and control signals. It has only one ordering scheme: the order of the data. Your RTL model is very specific and depends on clock cycles and control signals. Besides these signals it also has data signals like addresses and data. These are extracted by a monitor, and this is what you have to compare against the data of your reference model. The exact protocol of your functional interface can be checked using SV assertions, which verify the correct timing behaviour.
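
For example, a timing/protocol check on the Wishbone side can be written as a standalone SVA checker along these lines. The signal names follow Wishbone classic, and the 16-cycle ACK timeout is only an illustrative bound, not a spec value:

    // Protocol-timing checks done with SVA, outside the reference model
    module wb_protocol_checks (
      input logic        clk_i,
      input logic        rst_i,
      input logic        cyc_o,
      input logic        stb_o,
      input logic        ack_i,
      input logic [31:0] adr_o
    );
      // The address must be held stable while a request is pending
      a_adr_stable : assert property (
        @(posedge clk_i) disable iff (rst_i)
        (cyc_o && stb_o && !ack_i) |=> $stable(adr_o)
      ) else $error("WB address changed while the request was still pending");

      // Every request must be acknowledged within 16 cycles
      a_ack_timeout : assert property (
        @(posedge clk_i) disable iff (rst_i)
        (cyc_o && stb_o) |-> ##[0:16] ack_i
      ) else $error("WB request not acknowledged within 16 cycles");
    endmodule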

In reply to chr_sue:

I agree that the predictor or reference model should be independent of any timing, but IMHO not necessarily of all control signals. It needs information from the control signals to generate the expected TL output.
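
Something along these lines is what I mean by getting that information to the predictor: a small monitor samples the configuration inputs and publishes a conv_cfg object (the placeholder type from the sketch above) whenever they change. The cfg_if interface and its base_addr knob are hypothetical names, not the actual design's:

    // Hypothetical configuration interface, for illustration only
    interface cfg_if (input logic clk);
      logic [31:0] base_addr;
    endinterface

    class cfg_monitor extends uvm_monitor;
      `uvm_component_utils(cfg_monitor)

      uvm_analysis_port #(conv_cfg) cfg_ap;
      virtual cfg_if vif;   // set via uvm_config_db in the real bench (not shown)

      function new(string name, uvm_component parent);
        super.new(name, parent);
        cfg_ap = new("cfg_ap", this);
      endfunction

      task run_phase(uvm_phase phase);
        conv_cfg c;
        forever begin
          @(vif.base_addr);                 // wait for any change on the knob
          c = conv_cfg::type_id::create("c");
          c.base_addr = vif.base_addr;      // copy the knob into the TL object
          cfg_ap.write(c);                  // the predictor decides when to apply it
        end
      endtask
    endclass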

I never intended to do exact protocol checking using the reference model.

In reply to njain:

It seems you have a problem with the period between seeing STOP and reaching IDLE. Then you have to make your reference model more accurate. I believe there is a dependency between both events, and this is what you have to model at the TL.
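
For example, the apply point can be modelled at TL with something like the sketch below: configuration updates seen while a frame is in flight are queued and only become active at the frame boundary, without tracking any clock cycles. conv_cfg is the placeholder type from the earlier sketch, and the exact rule for when a pending update takes effect is an assumption that has to come from the design specification:

    // TL model of the "configuration applies when the DUT returns to IDLE" rule
    class cfg_apply_model;
      conv_cfg active;                 // configuration the predictor uses right now
      protected conv_cfg pending[$];   // updates seen while a frame is in flight
      protected bit      frame_in_flight;

      // From the config monitor: a knob changed
      function void config_update(conv_cfg c);
        if (frame_in_flight) pending.push_back(c);  // defer to the frame boundary
        else                 active = c;            // DUT is idle: applies at once
      endfunction

      // From the I2C monitor: START observed
      function void frame_started();
        frame_in_flight = 1;
      endfunction

      // From the I2C monitor: STOP or REPEATED START observed
      function void frame_ended();
        frame_in_flight = 0;
        if (pending.size() > 0) active = pending[$];  // latest pending update wins
        pending.delete();
      endfunction
    endclass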