The Whys of UVM

This question is for dave_59 and chr_sue and anyone else that may be interested. I think it belongs in the UVM forum but feel free to move it.

In the thread “Why do we use drivers?”, you both gave the usual answers. “It’s a software design paradigm.” “It’s for reuse.”

We’re over 20 years into the experiment of UVM and its predecessors. All the original developers and advocates have moved on, and many are retiring, which makes me feel old.

The industry still struggles mightily to hire and train verification engineers. Verification is still not taught in universities, so new grads always start in design. Most engineers who cross over into verification are either a) RTL designers pulled in for lack of anyone else to do the job, or b) experienced RTL designers looking for a new challenge. All RTL designers. And they all ask the same WHY questions. Why drivers? Why agents? Why analysis ports? Why the class-based aspects of SV, when I already know how to write clocked RTL to do the job?

No one ever wrote a book about the “whys” of UVM. What training exists, including that on this site, never focuses on the “why”, or on the bad things that can happen when you don’t follow the rules. Only on the “how”.

I still cannot answer questions like “Why drivers?” with anything other than “Because that’s how we do it, now go away.” So designers cheat, they write tests that drive the DUT directly, the projects get done on schedule, and the bad habits are established.

I don’t know what answer I am expecting, but I would like to know your thoughts.

In reply to scott_barvian:

The Separation of Concerns principle I gave in my answer to the “why use drivers” question is a language-agnostic, contemporary principle.
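To make that separation concrete, here is a minimal, hypothetical SystemVerilog/UVM sketch (the `bus_txn`, `bus_driver`, and `bus_if` names are invented for illustration): the sequence item describes “what” to send, while the driver is the only place that knows “how” it reaches the pins.

```systemverilog
// Hypothetical write transaction: the "what". No pin knowledge here.
class bus_txn extends uvm_sequence_item;
  rand bit [31:0] addr;
  rand bit [31:0] data;
  `uvm_object_utils(bus_txn)
  function new(string name = "bus_txn");
    super.new(name);
  endfunction
endclass

// The driver owns the "how": signal timing lives in exactly one place.
class bus_driver extends uvm_driver #(bus_txn);
  `uvm_component_utils(bus_driver)
  virtual bus_if vif;  // assumed set via uvm_config_db in the enclosing agent
  function new(string name, uvm_component parent);
    super.new(name, parent);
  endfunction
  task run_phase(uvm_phase phase);
    forever begin
      seq_item_port.get_next_item(req);  // receive the "what"
      @(posedge vif.clk);                // translate it into pin wiggles
      vif.addr  <= req.addr;
      vif.data  <= req.data;
      vif.wr_en <= 1'b1;
      @(posedge vif.clk);
      vif.wr_en <= 1'b0;
      seq_item_port.item_done();
    end
  endtask
endclass
```

If the bus protocol changes, only the driver’s `run_phase` needs to change; every sequence that produces `bus_txn` objects is reused as-is, which is the reuse argument in concrete form.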

It’s unfortunate that DVCon just went through a catastrophic loss of proceedings. They’ve recovered most of the papers and posters, but many of the tutorials and workshops that addressed the “why” are gone.

A few papers that might be worth reading are “First Reports from the UVM Trenches: User-friendly, Versatile and Malleable, or just the Emperor’s New Methodology?” and “UVM Rapid Adoption: A Practical Subset of UVM”.

However, “Because that’s how we do it, now go away.” could be the best answer for certain practices. Sometimes it’s more important to focus on the final decision rather than getting caught up in the argument or discussion that led to it. Take naming conventions like camelCase versus snake_case and others. It’s important to be consistent with the chosen convention, which makes code easier to read and understand amongst a wider group. The same can be said about testbench architecture.

In reply to dave_59:

For me the UVM has 4 key benefits:
(1) increasing the verification productivity
(2) plug-and-play reuse
(3) improving the simulation speed
(4) one verification methodology for all companies

In my opinion this describes the “WHY”.
A few more detailed explanations:
(1) When introducing the UVM in a company, it initially requires more effort than continuing with the old approach. The additional effort is 1.3 to 1.5 times that of the old methodology. In the 2nd project it is around 0.8 to 0.7, and in the 3rd project it goes down to around 0.5 of the effort of the traditional approach. These are numbers from my personal experience introducing the UVM in companies.
The minimal prerequisite for the verification engineers is SV and UVM training. Even after the UVM training, the engineers will not be able to start a project on their own. Best is to engage a UVM coach who helps to start this new methodology. This coaching effort is not very high, but it pays off in the 1st project because it helps the team follow the basic rules/guidelines of the UVM.
A good starting point is to use a UVM framework generator to create a UVM environment. This has the benefit that you can focus on your application-specific problems and do not have to spend a lot of effort implementing all the TLM-related constructs.
(2) The reuse aspect is the most important means of increasing productivity. And it is plug-and-play reuse: you do not have to modify anything in your code with the exception of parameter settings.
(3) The UVM makes heavy use of TLM. TLM reduces the number of events and thereby significantly improves simulation speed.
(4) Because it is not a company-specific approach, you can ask the UVM community to solve issues you are facing. This is a big difference compared to the traditional approaches.

The practice I see is somewhat different. Only a few companies follow this strict approach. A lot of other companies implement an approach which is not strict, resulting in serious problems.

In reply to chr_sue:

In reply to dave_59:
(3) improving the simulation speed

Do you have any numbers for this? Doesn’t the solver add to the overhead of a SystemVerilog simulation? I have seen the Verilog vs. VHDL comparisons, but I have never seen anything like SystemVerilog vs Verilog.

In reply to Jim Lewis:
This is a misunderstanding. I am not measuring SystemVerilog vs. Verilog. What I mean is that the UVM heavily employs a specific software technique - transaction-level modelling (TLM) - which tremendously reduces the number of events. This results in less activity during the simulation and thus increases the simulation speed. Only the so-called transactors (drivers, monitors) work on the pin level with all its events. Depending on the ratio of transactor code to TLM code, the simulation speeds up accordingly.
But there are no numbers, because we would have to compare a traditional TB with a UVM environment of identical behavior, and nobody has implemented and checked this.
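As a rough sketch of the event-count argument (assuming a hypothetical `bus_txn` sequence item with `addr`/`data` fields, published by a monitor through an analysis port), a TLM-connected scoreboard executes once per transaction rather than once per clock edge:

```systemverilog
// Hypothetical scoreboard: reached through a TLM analysis connection,
// so it evaluates once per bus_txn object, not once per posedge clk.
class bus_scoreboard extends uvm_scoreboard;
  `uvm_component_utils(bus_scoreboard)
  uvm_analysis_imp #(bus_txn, bus_scoreboard) analysis_export;
  function new(string name, uvm_component parent);
    super.new(name, parent);
    analysis_export = new("analysis_export", this);
  endfunction
  // Called by the monitor's analysis port write(); no clock sensitivity here.
  function void write(bus_txn t);
    `uvm_info("SCB", $sformatf("checking addr=%0h data=%0h", t.addr, t.data),
              UVM_MEDIUM)
    // compare against a reference model here
  endfunction
endclass
```

A pin-level checker, by contrast, would typically sit in an `always @(posedge clk)` block and wake on every cycle, including the many idle ones between real transfers.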

In reply to chr_sue:

So is your claim about “improving simulation speed” an “it ought to do that”? Or did you measure the speed of their previous undisciplined SV approach against the new UVM-based approach?

Maybe someone did a study of simulation speed of SystemVerilog against VHDL?

TLM can be functionally equivalent, cycle equivalent, or even timing equivalent. I would suspect that once you get to cycle equivalent (which I think you would need to verify a design), TLM has the same number of events on the interface as an undisciplined “copy and paste” approach. Going further, TLM requires additional overhead to hand off the transaction information vs. the “copy and paste” approach. That said, I doubt the additional TLM overhead is much - and there is no doubt that the other benefits of TLM outweigh it.

OTOH, I do agree with your first two claims - as I noted similar items in my response to the “Whys of Drivers” post.

WRT your 4th claim, SV only makes sense for the Verilog community. You see this more clearly in the FPGA community, where for verification VHDL is used more often than SV - and UVM is on a decreasing trend whereas VHDL’s OSVVM is on an increasing trend.

In reply to Jim Lewis:

I’d not overestimate the rise of OSVVM. It’s quite new, and it’s growing because the VHDL community does not have to learn another language. Another reason is that EDA companies are saying UVM is too complex for FPGA. I do not believe the UVM is too complicated; it is only a question of how you employ it. In my daily life as a SV/UVM trainer and UVM consultant I have seen a lot of companies facing serious issues with the UVM because they did not follow the guidelines and were violating basic rules. The problem is dealing with the freedom in the right way.
And I have my doubts about OSVVM. I believe it is not as powerful as the UVM.