Technical Leads' Expectations on UVM
As with any large framework, there is more than one way to get things done in UVM. As a technical lead, one wants to establish a set of coding guidelines and best practices and ensure that team members adhere to them. Typical concerns from a technical lead's perspective are:
- Simulator performance – impacted if one makes poor choices in UVM
- Debug-ability – use consistent debug hooks to improve turn-around time
- Productivity BKMs (Best Known Methods)
Leads would also benefit from an automated audit of UVM code prior to every milestone release of their teams’ code base.
Figure 1 - UVM testbench expectations
Verification Engineers’ Expectations on UVM
Not every verification engineer is lucky enough to have been trained by UVM experts. Given the myriad of base classes and features such as TLM, the factory, and the config DB, junior engineers find UVM hard to comprehend at the beginning. The fact that things can be done in many ways in UVM only adds to the confusion. Typical expectations from a verification engineer would be:
- A concise set of dos and don’ts (aka rules)
- A brief description of why such rules exist
- How to ensure his/her code is compliant with a basic set of key rules, say at every check-in to a revision control system (Git/SVN/P4/SOS/ClearCase, etc.)
- Handy debug tricks that can save hours during the coding process
WHAT CAN GO WRONG WITH UVM-BASED TESTBENCHES?
Depending on the team's expertise, the answer to this question varies; however, on average we have observed that a typical UVM testbench offers about 20-30% scope for improvement. The specific areas of focus (we refer to them as perspectives) can vary across teams, typical ones being:
- Performance
- Reuse
- Debug
- Functionality (of TB typically, at times design as well, albeit indirectly)
- Productivity
- Coding styles (aka Lint)
Over the past few years, attempts have been made to apply the concept of linting to testbenches. Given the success of RTL lint tools, this seems like an obvious step. However, verification engineers are smart and are already on tight schedules. Linting usually throws up a sea of messages, and finding issues in perspectives such as the ones listed above is nothing short of finding a needle in a haystack. Attempting to run plain "lint" style checks on a UVM code base simply falls flat and discourages adoption of such technology.
In addition, verification as a discipline employs many junior engineers who at times do not understand the intricacies of a complex methodology such as UVM, which leads to sub-optimal code.
SAMPLE RULES, CODE SNIPPETS AND POTENTIAL FIXES
Having looked at what can go wrong with UVM code, let’s look at some real-life use cases.
Performance Perspective - Example
UVM supports wildcard usage in the uvm_config_db::set/get API. Though it is handy at times, this comes with a performance penalty, as the wildcard search goes through DPI. Moreover, a single wildcard anywhere in inst_name incurs a significant penalty in the UVM search algorithm, as can be seen in the code comments from the UVM BCL in Figure 2, below.
Figure 2 - UVM BCL code comments
The snippet is extracted from the UVM 1.1d base class library and is indicative of what poor user code can lead to. Even a single use of a wildcard enables a "greedy" search algorithm for all subsequent uvm_config_db searches. Below is an example of user code demonstrating this poor style:
uvm_config_db #(int)::set(.cntxt(this), .inst_name("axi_ag_0"),
                          .field_name("*enable*"), .value(1));
Potential fixes for the above style, offering better performance and the same functionality at the cost of slightly more verbose code, are shown below:
1. ENABLE SCOREBOARD_0:
uvm_config_db #(int)::set(.cntxt(this), .inst_name("axi_ag_0"),
                          .field_name("sbrd_enable_0"), .value(1));
2. ENABLE SCOREBOARD_1:
uvm_config_db #(int)::set(.cntxt(this), .inst_name("axi_ag_0"),
                          .field_name("sbrd_enable_1"), .value(1));
3. ENABLE FUNCTIONAL COVERAGE:
uvm_config_db #(int)::set(.cntxt(this), .inst_name("axi_ag_0"),
                          .field_name("fcov_enable"), .value(1));
One of the challenges is identifying the list of field names that an existing (poor-style) wildcard expands to.
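As a starting point, one can dump the resource pool once the environment is built, or run with the +UVM_CONFIG_DB_TRACE plusarg so that every uvm_config_db::set/get call is reported in the log. Below is a minimal sketch of such a hook; the axi_base_test class name is illustrative.
function void axi_base_test::end_of_elaboration_phase(uvm_phase phase);
  super.end_of_elaboration_phase(.phase(phase));
  // Dump every resource (name, scope, type) currently registered, so that
  // wildcard field_names can be mapped to the explicit names they match
  uvm_resource_pool::get().dump();
endfunction : end_of_elaboration_phase
Running one simulation with this hook (or with +UVM_CONFIG_DB_TRACE on the command line) and searching the log gives the explicit field names needed to replace the wildcard.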
Debug Perspective - Example
UVM has built-in debug hooks for easier log-file-based analysis. The problem is that engineers are not always aware of these hooks, or they do not add them in production environments, leading to costly debug cycles. We show two such features below:
1. UVM FACTORY PRINT
function void axi_env::start_of_simulation_phase(uvm_phase phase);
  uvm_factory f_h;
  super.start_of_simulation_phase(.phase(phase));
  f_h = uvm_factory::get();
  f_h.print(.all_types(1));
endfunction : start_of_simulation_phase
A sample log from Questa® with the above debug hook lists all registered types along with any configured overrides.
The guideline that factory.print() shall be invoked at the right time/phase in a simulation goes a long way in debug: the information appears early in the simulation (usually at time 0), so one can be sure whether the intended overrides took effect as per the test specification/requirements.
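As an illustration, consider an override such as the one below in a test's build_phase; the axi_err_test, axi_master_driver and axi_master_err_driver names are hypothetical. The factory printout at start_of_simulation is what confirms, at time 0, that this override is actually in effect.
function void axi_err_test::build_phase(uvm_phase phase);
  super.build_phase(.phase(phase));
  // Replace every axi_master_driver with an error-injecting variant;
  // the factory.print() hook above will list this type override at time 0
  axi_master_driver::type_id::set_type_override(axi_master_err_driver::get_type());
endfunction : build_phase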
2. UVM TOPOLOGY PRINT
UVM components are built hierarchically at time 0. It is essential to get the testbench configured properly before running simulations and debugging the design behavior. One of the most useful debug hooks in UVM is the popular print_topology() – though popular, it is not called by default by UVM, and engineers do not always remember to call this API, leading to unnecessary debug cycles.
Figure 3, below, shows a sample output from this debug API.
Figure 3 - Debug API sample output
A code snippet enabling the print output is shown below:
function void axi_env::end_of_elaboration_phase(uvm_phase phase);
  uvm_root t_h;
  super.end_of_elaboration_phase(.phase(phase));
  t_h = uvm_root::get();
  t_h.print_topology();
endfunction : end_of_elaboration_phase
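Alternatively, if adding the call to every environment is a concern, uvm_root provides an enable_print_topology flag that makes UVM print the topology automatically once end_of_elaboration completes. A minimal sketch is shown below; the axi_base_test class name is illustrative.
function void axi_base_test::build_phase(uvm_phase phase);
  super.build_phase(.phase(phase));
  // Ask UVM to print the full component topology automatically
  // after the end_of_elaboration phase completes
  uvm_root::get().enable_print_topology = 1;
endfunction : build_phase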
Reuse Perspective - Example
1. USING FACTORY-BASED CREATION
One of the primary motivations for using UVM in a project is to keep components and transactions reusable; this is achieved by deferring the creation of objects to the factory instead of direct new() invocation. A key check is to ensure that all interesting objects indeed follow this rule.
The code snippet below demonstrates this key rule.
function void axi_env::build_phase(uvm_phase phase);
  super.build_phase(.phase(phase));
  this.axi_agt_0 = axi_agent::type_id::create(.name("axi_agt_0"), .parent(this));
  // POOR code: this.axi_agt_0 = new(.name("axi_agt_0"), .parent(this));
endfunction : build_phase
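The payoff shows up when the environment is reused: because axi_agt_0 is created through the factory, a derived test can swap in an extended agent without touching the env code. A minimal sketch is shown below; the axi_reuse_test and axi_agent_v2 names (and the "env" instance path) are hypothetical.
function void axi_reuse_test::build_phase(uvm_phase phase);
  super.build_phase(.phase(phase));
  // Replace only the agent instance at "env.axi_agt_0" with an extended
  // agent; this works solely because the env used type_id::create(), not new()
  axi_agent::type_id::set_inst_override(axi_agent_v2::get_type(),
                                        "env.axi_agt_0", this);
endfunction : build_phase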
VERIFYING THE VERIFIER
So how do we “verify” the verification code itself? There are at least two angles to this question:
- Purely functional correctness
- Quality perspectives such as:
a. Debug-ability of the code (Directly impacts schedule)
b. Simulation-performance-friendly code (Impacts licensing costs and time)
c. Reusability perspective (Impacts the time for the next task/project), etc.
With domain experts and using the right off-the-shelf VIPs such as Questa® VIPs, the functionality angle can be addressed. What about the second angle?
We at VerifWorks have done years of consulting, learned several best practices, and encountered many poor coding styles. Over the years, we have distilled these findings into a new solution named Karuta (a Design Verification rule checker). When run on a typical UVM-based testbench along with Questa®, Karuta highlights various strengths and weaknesses of users' UVM code, thereby helping them address the quality angle.
We ran Karuta on some popular, open-source, non-trivial UVM code bases from Google, Cadence, Juniper, Ariane RISC-V®, TVIP and more. Our analysis of these code bases reveals interesting results that left us baffled. Table 1, below, gives a sneak preview.
Table 1 - Code base analysis
Do visit us at the upcoming DVCon US 2020 exhibition to learn more about these results and how Karuta can help in your projects.