How can a test change the environment configuration?

Can a test randomize environment configuration variables and run all possible combinations in a single test? If so, how?

For example:

class base_test extends uvm_test;
    ...

    typedef enum {BIT8, BIT16, BIT32} width_e;
    randc width_e data_width;

    typedef enum {PORTA, PORTB} port_e;
    randc port_e port;

    /* Possible combinations of data_width and port: 3 x 2 = 6 */

    env     env_h;
    env_cfg env_cfg_h;

    /* Assume I have two ports in the testbench; agnt is instantiated twice in the env */

    function void build_phase(uvm_phase phase);
        super.build_phase(phase);

        assert(randomize());

        /* Construct env_cfg_h, then change the variables inside it */
        env_cfg_h = env_cfg::type_id::create("env_cfg_h");
        env_cfg_h.port  = port;
        env_cfg_h.width = data_width;

        uvm_config_db#(env_cfg)::set(this, "env_h*", "env_cfg", env_cfg_h);
    endfunction

    task run_phase(uvm_phase phase);
        phase.raise_objection(this);

        /* run sequences */

        phase.drop_objection(this);
    endtask

endclass


In the code above the environment configuration is randomized once and the test runs with it. How do I get it to run six times, covering all possible combinations? I could put a loop in the run_phase and call assert(randomize()) on each iteration, but the env_cfg_h variables are assigned to the downstream components (agnt, drvr, mntr variables) only in the build_phase. How do I get them reassigned in the run_phase?

In reply to gopal_susarla:

Why do you not use a parameter for the data_width? Then you could use a configuration object in your test. I guess both agents are always connected, right?

In reply to chr_sue:

Yes, the two agents are always connected. The test needs to pick an agent (PORTA, PORTB) and the size of the data (BIT8, BIT16, BIT32). The code above (base_test) runs one random configuration; the assert(randomize()) call in its build_phase accomplishes that.

The problem is when I try to call assert(randomize()) again in the run_phase of base_test. The agents (and the drivers and monitors within them) are all built using the information from env_cfg_h in the build_phase. When I change the env_cfg_h variables in the run_phase of base_test, how do they get reassigned to those components?

In reply to gopal_susarla:

It is always better to run multiple shorter tests than it is to run one long test. This is because if you find an error, you can get to the failure point quicker for debugging and re-testing. I would recommend running separate simulations for each set of parameters.

Also, never use assert() with a randomize() call. This is because it is possible to disable assertions, resulting in your testbench failing without warning. Instead, you should use an if() statement to check the return code from the randomize call.
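
For example, a minimal sketch of the recommended check (the failure message is illustrative):

if (!randomize()) begin
    `uvm_fatal(get_type_name(), "Randomization of test configuration failed")
end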

In reply to cgales:

Cgales,
Thank you for your input. I understand your suggestions.

What I am trying to work out is this:

  1. How are multiple testbench configurations handled in UVM? I was hoping to randomize my TB configuration in a single test. Is there a way to ‘spawn’ 6 tests from a defined set of possibilities?

  2. Correct me if I’m wrong: the testbench configuration is set up only in the ‘build_phase’?

regards,
Gopal

In reply to gopal_susarla:

Yes, the testbench is normally configured only in the build phase.

With respect to multiple testbench configurations, there are several considerations which you need to take into account when you are designing your environment:

  • Is your DUT parameterized? If so, you will need to have these parameters set at the top testbench level. These parameters will then need to be identical in your UVM environment.

  • Does the bit width really affect your stimulus generation? Many times you can use integers for stimulus and only use the required number of bits. This keeps you from having to use parameters for data elements.

  • Does the port selected really affect the DUT? What happens when you use both ports at the same time? Your tests could be designed to send stimulus to both ports and predict the DUT response appropriately.

  • If your DUT is not parameterized but only configurable, then you can use a randomized configuration. I would recommend randomizing the configuration only once at the test level, but you could use a loop to create different configurations. Note that these configurations aren’t needed in the build phase, but only control how the DUT is configured via registers.
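
As a rough sketch of that last point, assuming env_cfg_h is itself randomizable and the configuration is applied to the DUT through its registers (the fatal message and register step are illustrative):

task run_phase(uvm_phase phase);
    phase.raise_objection(this);
    repeat (6) begin
        if (!env_cfg_h.randomize())
            `uvm_fatal(get_type_name(), "Config randomization failed")
        /* program the DUT configuration registers to match env_cfg_h,
           then run the traffic sequences for this configuration */
    end
    phase.drop_objection(this);
endtask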

In reply to gopal_susarla:

Hi Gopal,
Yes, configuration is only done in the build phase.
But I usually just have reusable/configurable classes and then multiple non-configurable top-level testbenches that configure those reusable/configurable classes.

If you have a single top-level configurable testbench, then every time you change the testbench, you have to run all your configurations (and make sure you didn’t break anything) before you do a checkin.

Whereas with multiple top-level ones, if one changes (or you create a new one), the old ones remain intact. The only time multiple testbenches have to be regressed is if you change a common module - and then you only need to regress the testbenches that use it (or its edited configuration).
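
A minimal sketch of the idea at the test level, reusing the configurable base_test from the question (the derived class name is illustrative, and separate top-level modules work the same way):

class porta_bit8_test extends base_test;
    `uvm_component_utils(porta_bit8_test)

    function new(string name, uvm_component parent);
        super.new(name, parent);
    endfunction

    function void build_phase(uvm_phase phase);
        super.build_phase(phase);
        /* pin this test to one fixed configuration */
        env_cfg_h.port  = PORTA;
        env_cfg_h.width = BIT8;
    endfunction
endclass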

About 10% of software is Documentation, 10% coding, 80% maintaining - so I code for lower maintenance costs.

Regards,
Erik

Thank you, cgales and Erik. Both of you provided some good points that I will explore and understand. I shall see how I can ‘beef up’ my testbench with your points.

But in the interim, I just created $value$plusargs for ‘data_width’ and ‘port’ and supplied the values from the command line instead of randomizing them. Since I need to run all the combinations, randomizing wouldn’t have guaranteed hitting every one.
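
Roughly, it looks like this in the build_phase of base_test (the plusarg names are just what I chose; the defaults are illustrative):

string width_s, port_s;

if (!$value$plusargs("DATA_WIDTH=%s", width_s)) width_s = "BIT32";
if (!$value$plusargs("PORT=%s",       port_s))  port_s  = "PORTA";

case (width_s)
    "BIT8"  : env_cfg_h.width = BIT8;
    "BIT16" : env_cfg_h.width = BIT16;
    default : env_cfg_h.width = BIT32;
endcase

env_cfg_h.port = (port_s == "PORTB") ? PORTB : PORTA;

The regression script then launches six runs, passing e.g. +DATA_WIDTH=BIT16 +PORT=PORTB on the simulator command line.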

Thanks again for the effective feedback. Very much appreciated.

regards,
Gopal

In reply to gopal_susarla:

$value$plusargs is a very good solution.
Good luck!

Erik