Help on UVM for non transaction based designs

I’ve been using UVM/OVM since its beginnings a few years back, and I have come up with several solutions that augment the standard UVM/OVM methodology to support designs that are not inherently transaction based, specifically microprocessor core verification at the core and unit level. I am now looking for some feedback from the community and from the UVM designers/specialists.
I have found that, in the designs I work with, generating stimulus using transactions can often create more overhead than it saves. An example is trying to drive pure control signals, which have no temporal, transaction-based nature. I want to have some top-level sequence (virtual sequence) control these signals, but there is a large overhead in allocating a transaction, assigning its variables, and sending the transaction to the driver, just to have the driver take the transaction variables and directly assign them to the interface to be driven into the DUT. There are cases where transactions do make sense for the designs I work with, but my gripe with UVM/OVM’s methodology and examples is that there aren’t any that address control-signal-based designs like the one I have described above.
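To make the overhead concrete, here is a minimal sketch of the pattern described above: a sequence item whose only job is to carry control-signal values, and a driver that simply copies those fields onto the interface. All class, field, and interface names here are illustrative assumptions, not taken from the actual environment.

```systemverilog
import uvm_pkg::*;
`include "uvm_macros.svh"

// A transaction that exists only to ferry control-signal values.
class ctrl_item extends uvm_sequence_item;
  rand bit       stall_req;
  rand bit [3:0] mode_sel;
  `uvm_object_utils(ctrl_item)
  function new(string name = "ctrl_item");
    super.new(name);
  endfunction
endclass

class ctrl_driver extends uvm_driver #(ctrl_item);
  `uvm_component_utils(ctrl_driver)
  virtual ctrl_if vif;  // set via uvm_config_db in build_phase (not shown)

  function new(string name, uvm_component parent);
    super.new(name, parent);
  endfunction

  task run_phase(uvm_phase phase);
    forever begin
      seq_item_port.get_next_item(req);
      // The entire "driving protocol" is a direct field-to-pin copy:
      vif.stall_req <= req.stall_req;
      vif.mode_sel  <= req.mode_sel;
      @(posedge vif.clk);
      seq_item_port.item_done();
    end
  endtask
endclass
```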
Along the same lines, the monitor/scoreboard concept also creates massive overhead. Creating transactions that essentially assign interface signal values directly to transaction variables, and sending those transactions through a TLM channel to a scoreboard, is a huge performance killer. It also creates a problem in deciding when to do the modeling in the scoreboard if the correctness of the modeling depends on transactions from multiple monitors that arrive at the scoreboard at different times within the same delta cycle; conversely, there could be one monitor that samples all the signals, but that isn’t ideal either. Since the design is not really transaction based, control-signal input sent in every cycle can directly affect control-signal output, which has to be checked by the scoreboard every cycle.
As such, in our environment, we still use transactions to send control-signal information to the drivers even though it’s not very efficient. However, we adopted a different methodology for our scoreboards. Since our scoreboards are in essence models of the DUT, each scoreboard is directly tied to one DUT and only that DUT, so trying to abstract the scoreboard away from the DUT through monitors and TLM connections doesn’t make sense. For that reason we decided to remove the monitors and TLM channels and let the scoreboard directly view all the signals going into and out of the DUT via a virtual interface. The scoreboard starts in the run phase and has a ‘forever @(posedge clock)’ structure, where each cycle the modeling/checking is performed by sampling the signals of the virtual interface.
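A minimal sketch of the monitor-less scoreboard described above might look like the following. The interface name, signal names, and the trivial reference function are all placeholder assumptions; the real cycle-accurate model would be far larger.

```systemverilog
import uvm_pkg::*;
`include "uvm_macros.svh"

// Scoreboard that samples the DUT through a virtual interface every
// clock instead of receiving transactions over TLM ports.
class cycle_scoreboard extends uvm_scoreboard;
  `uvm_component_utils(cycle_scoreboard)
  virtual dut_if vif;

  function new(string name, uvm_component parent);
    super.new(name, parent);
  endfunction

  function void build_phase(uvm_phase phase);
    super.build_phase(phase);
    if (!uvm_config_db#(virtual dut_if)::get(this, "", "vif", vif))
      `uvm_fatal("NOVIF", "virtual interface not set for scoreboard")
  endfunction

  task run_phase(uvm_phase phase);
    forever @(posedge vif.clk) begin
      // Sample inputs, advance the cycle-accurate reference model,
      // and compare DUT outputs against expectations every cycle.
      bit expected_hold;
      expected_hold = model_hold(vif.x, vif.y, vif.z);
      if (vif.stage_1_hold !== expected_hold)
        `uvm_error("CHK", $sformatf("stage_1_hold=%0b expected %0b",
                                    vif.stage_1_hold, expected_hold))
    end
  endtask

  // Stand-in for the (much larger) cycle-accurate model.
  function bit model_hold(bit x, bit y, bit z);
    return x | y | z;
  endfunction
endclass
```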

So my question is this.
Is there a better way to do this which is more UVM centric?
My question to the UVM designers/specialists.
UVM is great for transaction-based designs; can we get some focus and examples on other types of designs?

Any feedback would be greatly appreciated.

Very good topic to discuss indeed - thanks for taking the time to share some “real-life verification issues” :-) While the input/stimulus driving is still “abstract-able” (as you also seem to have done), the checking/monitoring is a huge, unnecessary pain if we want to be too religious about UVM base-class usage. Why not use assertions for it instead? For instance, we recently had a customer case of PCIe-to-SPI address decoding, and we simply had to write two SVAs per interface (one to say “got the correct decoding”, another “if my-sel is ON, we had better have seen the PCIe address in my space earlier”).
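As a rough illustration of those two SVAs, something like the sketch below could work. The signal names, address-range parameters, and the one-cycle decode latency are all assumptions here, not details from the actual customer design.

```systemverilog
// Hypothetical checker for the PCIe-to-SPI decode example.
module spi_decode_checks
  #(parameter logic [31:0] MY_BASE = 32'h4000_0000,
    parameter int unsigned MY_SIZE = 32'h1000)
   (input logic clk, rst_n, addr_valid, my_sel,
    input logic [31:0] addr);

  // 1) "Correct decoding": an address in my space must raise my_sel
  //    on the following cycle.
  property p_correct_decoding;
    @(posedge clk) disable iff (!rst_n)
    (addr_valid && addr inside {[MY_BASE : MY_BASE + MY_SIZE - 1]}) |=> my_sel;
  endproperty
  a_decode: assert property (p_correct_decoding);

  // 2) "If my_sel is ON, the PCIe address must have been in my space
  //    earlier" (here, exactly one cycle earlier).
  property p_sel_implies_addr;
    @(posedge clk) disable iff (!rst_n)
    my_sel |-> $past(addr_valid &&
                     addr inside {[MY_BASE : MY_BASE + MY_SIZE - 1]});
  endproperty
  a_sel: assert property (p_sel_implies_addr);
endmodule
```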

If you can provide a few more samples of the kind of “control signals” you deal with, maybe we can explore further.

Warm Regards
Ajeetha, CVC

In reply to Ajeetha Kumari CVC:

Without going too in-depth and breaching confidentiality, here is the short and the long of it.
Short: assertions won’t work. Too many variables.
Long: we model an entire sub-unit of a processor. Unfortunately, this isn’t the standard five-stage IF/DCD/EX/MEM/WB processor. If only I were lucky enough to have that few stages in each unit, much less the entire design. Checking of the output control signals is very dependent on the different stages/pipelines. Writing an assertion to check them would be nearly impossible, if not plain impossible, without modeling these stages/pipelines. Our checkers/scoreboards are written to check the design cycle-accurately and are as big as the RTL itself. Without such cycle-accurate checkers the verification of the design would be compromised. Such checkers ensure not only that we get the correct answer, but that we get it as designed; i.e., if we are off by a cycle on when a control signal is produced, we care, as this ultimately affects performance.
Thanks for the response.

With due respect to your sensitivity and experience, I beg to differ. I believe you are thinking of “black-box” or end-to-end assertions and hence saying “it won’t work”.

In any case - if you/your team has decided so, I don’t want to counter it for your usage.

The below is mainly for other readers who may still want to consider assertions:

There are proven track records of assertions on such designs; see one example at: http://www.cs.rice.edu/~vardi/comp607/bentley.pdf

And if you wonder how one would create/code so many white-box assertions, technology such as “Assertion Synthesis” (www.nextopsoftware.com) might well be the answer.

Warm Regards
Ajeetha, CVC

In reply to Ajeetha Kumari CVC:

After reading the article, I still feel as though assertions won’t work to effectively/efficiently verify a full sub-unit. The references to assertions in the article were toward formal verification, and our sub-units are much too large for formal verification techniques.
As an example, if we were to use assertions to check outputs and even internal state signals, each assertion would either have to look directly at the RTL or be driven by some stage modeling; and if it’s stage modeling, is that not just the same thing as the checker/model? The downside to assertions is that the longer you go without checking internals before the output may or may not make its way out, the more state variables you have to keep track of. If you choose to check more internals more often, then you sacrifice performance, as you have many more assertions incrementally checking the design to verify correct behaviour. So here you are trading ease of assertion writing for quantity of assertions.
By saying assertions won’t work, I should really have said that assertions for our design would be less efficient and harder to maintain. We do currently maintain assertions to check common occurrences and the correctness of the outcome, but those scenarios are very limited in scope.

An example:
Hold equations
stage_1_hold = x | y | z.

Yes, I could write an assertion very easily around this. But the problem gets more complicated if x, y, and z are all combinational signals. I can’t assume x, y, and z are correct unless I have an assertion for each of them or they are included in my stage_1_hold assertion. So begins the process of writing an assertion for each of these cases until a register is found; only then could I stop, because the register input/output pair should have been checked by another assertion.
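The “easy” assertion, and one of the upstream checks it immediately drags in, might look like the sketch below. The names x, y, z and their hypothetical drivers (some_valid, some_stall) are illustrative, not from the real design.

```systemverilog
// Hypothetical checker around the hold equation.
module hold_checks (input logic clk,
                    input logic stage_1_hold, x, y, z,
                    input logic some_valid, some_stall);

  // The easy part: the hold output matches its equation.
  a_hold: assert property (@(posedge clk) stage_1_hold == (x | y | z));

  // But x is itself combinational, so it needs its own check, and so on
  // back up the cone of logic until a registered signal is reached.
  a_x: assert property (@(posedge clk) x == (some_valid && !some_stall));
endmodule
```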

If I decide not to check combinational signals, I leave a huge gap in my verification that could potentially contain a bug, e.g. if z comes on inadvertently. This bug won’t cause any results to come back incorrect, as holding longer than you have to is really only a performance bug (unless you hold indefinitely, which I’ll exclude in this case). So we hold an extra cycle (or more), and our performance gets degraded. But since I didn’t check all combinational signals, no tests fail due to this bug, and we (designers and verifiers) think everything is OK.

So I’m left with having to write all these assertions, and in the end what I have implicitly done is write a model, except one that is much more complex and likely much more fragmented.

I have given just a small example of why I think assertions won’t work; the hold equations are just one of many things I have to deal with. If correct results were all that mattered, then assertions would work great. Unfortunately for us, correct results also need to come at the correct time, and that correct time is primarily driven by control signals (in my example, the stage hold equations).

I have heard of NextOp; my company actually has a license for it. A group here used it and did like it, but said it’s not the be-all and end-all and that modeling is still needed. This is because the assertions you get out of the tool are only as good as your ability to stimulate the design with the tests the team decides to feed the tool. If 1 in 1,000 random tests exercises a portion of the design that would affect the tool’s results, then what if 1 in 10,000 exercises a bug? At what point do you stop feeding tests to the tool for it to generate proper assertions, so that the 1-in-10,000 test that exercises the bug gets caught? Now imagine running random tests 24x7. Each test has the same possibility of exposing a bug or exercising a portion of the design never exercised before. If you never hit the scenario when running with the tool, then the assertions generated by the tool might never catch the bug. This is why the team reported to us that they still need modeling.

I really enjoy these types of dialogues, because they open my mind to thinking outside the box.
I look forward to your retort.