How to deal with RTL and behavioral model mismatch

Hi,

The block I am verifying is a traffic policer in a network chip that uses a token-bucket algorithm.
The algorithm is simple: the DUT adds tokens to the bucket at a configured period and also receives incoming packets.
If a packet's length is less than or equal to the tokens remaining in the bucket, the packet is output with GREEN color;
otherwise, it is output with RED color.
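
For concreteness, here is a minimal Python sketch of that decision. The class and parameter names are illustrative rather than taken from the actual design, and it assumes the usual policer convention that a GREEN packet consumes tokens while a RED packet does not:

```python
class TokenBucketModel:
    """Reference model of the policer as described above (sketch only)."""

    def __init__(self, tokens_per_period, bucket_depth):
        self.tokens_per_period = tokens_per_period  # tokens added each update period
        self.bucket_depth = bucket_depth            # saturation limit of the bucket
        self.tokens = bucket_depth                  # assume the bucket starts full

    def on_update_period(self):
        # Called once per configured period: add tokens, saturating at the depth.
        self.tokens = min(self.tokens + self.tokens_per_period, self.bucket_depth)

    def on_packet(self, length):
        # GREEN if the packet fits in the remaining tokens, RED otherwise.
        # Assumes GREEN packets consume tokens and RED packets do not.
        if length <= self.tokens:
            self.tokens -= length
            return "GREEN"
        return "RED"
```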

The challenge is that the DUT takes some extra cycles to update its tokens each time, while the model does not, so their output colors may not match. In the interval where the model has already updated its tokens but the DUT has not, many packets may arrive. Since the model and the DUT may make different decisions for those packets, the remaining token counts in the two drift further and further apart; the large delays across multiple flows add more turbulence, and longer simulations make it worse. After a few update periods the model becomes almost useless and untrustworthy: more and more output colors mismatch, but the TB cannot judge whether that is just RTL timing delay or a real bug.

I have searched the internet exhaustively but, unluckily, have not found any papers detailing approaches to deal with this kind of mismatch, so I look forward to your comments. Please let me know if I have described the problem clearly.

Many thanks,
Wade

Wade,

This is a common problem when trying to predict RTL functionality that is sensitive to slight variations in cycle timing behavior, and there’s no perfect solution to it.

I’ve experienced this a long time back trying to create a golden model to predict which requester would be granted by a bus arbiter. Each time my test found a mismatch, the RTL designer would investigate and explain that in “this particular case”, he adds an extra delay before granting, or “this particular case” I grant the other guy, etc. In the end, there were more exception cases than rules. Also in the end, the golden model ended up simply being a behavioral copy of the RTL – not a particularly useful way to spend engineering effort.

Verification is all about “bang-for-the-buck” and prioritization. Accordingly, let me ask this: How critical is it that your DUT switches from green to red (and vice-versa) at an exact boundary? What if your golden model provided a threshold (say N packets) around the boundary in which you might issue a warning or note, but not an error? This would allow your tests to be self-checking on a macro scale for cases where packets should be clearly marked as red or green, but not fail on the micro scale where your model and DUT may be out of sync for several cycles. In addition, you’d have display warnings/messages that you could use to manually spot-check a couple of instances and verify correct behavior around the boundary cases.
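
One rough way such a tolerance band might look in the scoreboard's compare step is sketched below in Python. This is only an illustration of the idea, not an actual implementation: the function name, the slack_tokens parameter, and the banding criterion (distance of the packet length from the model's current token count) are all assumptions, and in a SystemVerilog/UVM bench the equivalent logic would sit in the scoreboard's compare routine.

```python
import logging

log = logging.getLogger("policer_scoreboard")

def check_color(pkt_len, model_tokens, model_color, dut_color, slack_tokens):
    """Compare DUT vs. model color with a tolerance band around the boundary.

    slack_tokens is the width of the band (in tokens) around the GREEN/RED
    decision point inside which a mismatch is only warned about, on the
    assumption that the DUT's delayed token update can legitimately flip
    the decision there.
    """
    if dut_color == model_color:
        return True

    # How far this packet sits from the model's GREEN/RED boundary.
    distance_from_boundary = abs(model_tokens - pkt_len)

    if distance_from_boundary <= slack_tokens:
        # Near the boundary: note it for manual spot checks, but do not fail.
        log.warning("boundary-case mismatch: model=%s dut=%s tokens=%d len=%d",
                    model_color, dut_color, model_tokens, pkt_len)
        return True

    # Clearly inside GREEN or RED territory: colors should agree regardless
    # of a few cycles of token-update latency, so flag a real error.
    log.error("color mismatch away from boundary: model=%s dut=%s tokens=%d len=%d",
              model_color, dut_color, model_tokens, pkt_len)
    return False
```

One plausible way to size the band is to make slack_tokens roughly the number of tokens (or, per the N-packets suggestion above, N maximum-length packets) that can accumulate during the DUT's token-update latency.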

Seems to me like this would be a more efficient use of your verification efforts than trying to make your model match the exact cycle timing of the DUT and debug all of those false positives…

Dan