This is not a UVM-specific question but a general verification question.
I have attempted to write predictor logic for some in-house IPs (using a UVM testbench), but I always end up spending time aligning them to match RTL behaviour, which is not the right way to do things. It means I must be doing something wrong: not using a high enough abstraction level, or maybe a predictor is not the right way to verify a particular IP.
Let’s take a simple example: a protocol converter IP, e.g. I2C ↔ Wishbone (simple behaviour).
These are two simple protocols, so there is no issue verifying them individually.
Adding to the requirements: a bunch of configuration inputs control the behaviour of the I2C side and how frames are converted to Wishbone (WB). Some of these inputs take effect immediately, and some only after the in-flight frame is complete (on a STOP or repeated START).
The predictor samples the configuration inputs when the I2C monitor detects a START, then waits for the I2C frame to complete and checks the expected response (ACK/NACK) and data (from WB).
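To make the sampling scheme concrete, here is a minimal, language-agnostic sketch in Python (not SystemVerilog) of a predictor that snapshots the configuration at START and predicts the whole frame against that snapshot. All names (`Config`, `Frame`, `wb_invert_data`, the toy prediction rule) are hypothetical illustrations, not from the actual IP:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Config:
    # Hypothetical configuration field: invert data bytes on the WB side.
    wb_invert_data: bool = False

@dataclass
class Frame:
    addr: int
    write: bool
    data: list

class Predictor:
    def __init__(self, cfg: Config):
        self.cfg = cfg          # "live" configuration (updated by config writes)
        self.frame_cfg = None   # snapshot taken when START is seen

    def on_start(self):
        # Sample the configuration once, when the monitor sees START;
        # the whole frame is predicted against this snapshot.
        self.frame_cfg = self.cfg

    def on_frame_done(self, frame: Frame):
        cfg = self.frame_cfg
        # Toy prediction rule: invert bytes if the snapshot says so.
        data = [b ^ 0xFF for b in frame.data] if cfg.wb_invert_data \
               else list(frame.data)
        return {"ack": True, "wb_data": data}
```

Because `on_start` captures the snapshot, a configuration write landing mid-frame does not disturb the prediction for the frame already in flight, which is the intent of the scheme above.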
The main issue I faced with this IP was deciding when to send a configuration update to the predictor.
From the testbench’s point of view, the frame is complete when the I2C monitor broadcasts the event, but the DUT considers it complete when its internal FSM returns to IDLE, which is not necessarily when it sees STOP (it can be earlier).
The DUT applies the configuration when its FSM goes to IDLE, which is earlier than when the predictor applies it, and this causes mismatches between the two.
Exactly when the DUT FSM goes to IDLE also varies with the configuration inputs. I believe every design will have some sort of quirky behaviour in at least a few situations.
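One possible mitigation, again sketched in Python under my own assumptions: instead of applying a deferred configuration update when the monitor sees STOP, queue it and apply it on the same event the DUT uses (its FSM returning to IDLE), which the testbench could observe via a whitebox probe or an internal-signal monitor. Everything here (`ConfigAligner`, `on_dut_idle`, the callback names) is hypothetical:

```python
from collections import deque

class ConfigAligner:
    def __init__(self, cfg):
        self.active_cfg = cfg        # configuration the predictor uses
        self.pending = deque()       # deferred updates, in arrival order
        self.frame_in_flight = False

    def config_write(self, new_cfg):
        if self.frame_in_flight:
            self.pending.append(new_cfg)  # defer: a frame is in flight
        else:
            self.active_cfg = new_cfg     # immediate-effect case

    def on_start(self):
        self.frame_in_flight = True

    def on_dut_idle(self):
        # Fired from a probe on the DUT's FSM state, not from the
        # monitor's STOP event, so the predictor switches configuration
        # at the same point the RTL does.
        self.frame_in_flight = False
        while self.pending:
            self.active_cfg = self.pending.popleft()
```

The trade-off is that the whitebox probe ties the testbench to the RTL hierarchy, but it removes the need to reverse-engineer exactly how far before STOP the FSM idles under each configuration.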
So, I am interested to hear expert opinions. If anyone has suggestions on how I can improve at writing predictors, I am all ears.