Best way to randomize from virtual sequence

Hi All,
I have a basic question related to UVM methodology.
I have a virtual sequence in which there are three child sequences.
The child sequences initially have some constraints, but now I want to change them from the top-level virtual sequence.
What is the recommended method of doing this? `uvm_do_with doesn’t seem to be working.

Thanks a lot for help,
GG

You should never use the `uvm_do_* macros. Read this section of the UVM Cookbook on using virtual sequences and how to start the sub-sequences. If you want to randomize the sub-sequence, simply call the randomize() function for the sub-sequence, using the with {} clause to pass in the appropriate constraints.
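To make that concrete, here is a minimal sketch of a virtual sequence body() that creates a sub-sequence and passes top-level constraints via randomize() with {}. The class and field names (my_sub_seq, addr, len, the sequencer handle m_sub_sqr) are illustrative assumptions, not from the original post.

```systemverilog
// Sketch only - my_sub_seq, addr, len, and m_sub_sqr are assumed names.
class top_vseq extends uvm_sequence;
  `uvm_object_utils(top_vseq)

  uvm_sequencer_base m_sub_sqr;  // assigned by the test before start()

  function new(string name = "top_vseq");
    super.new(name);
  endfunction

  virtual task body();
    my_sub_seq seq = my_sub_seq::type_id::create("seq");
    // Add or override constraints from the top level at randomize() time
    if (!seq.randomize() with { addr inside {[32'h1000:32'h1FFF]};
                                len == 8; })
      `uvm_error("VSEQ", "Randomization of seq failed")
    seq.start(m_sub_sqr);
  endtask
endclass
```

The inline with {} constraints are solved together with the sub-sequence's own constraints, so the child's defaults still apply unless they conflict.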

Thanks a lot .

I’m going to disagree with the Cookbook’s view on `uvm_do. If you know what it does, it is a very useful little macro that completely encapsulates the generation and randomization of sequence_items.

You need to be aware of the sequence / sequence_item you are randomizing with `uvm_do_with. It does work very well, but you need to be aware of scope.

In reply to Phill_Ferg:

The reason that the `uvm_do_* macros aren’t recommended is that there are several inherent flaws which make their usage model limited:

  • A new sequence item is created every time. If you are creating complex sequence items or sending many items in a loop, there can be a significant performance hit. The recommendation in this case would be to create only one sequence item and reuse it multiple times.
  • The error checking for randomize() failure generates only a warning, which might not be what the user desires.
  • You cannot modify a sequence item directly. Often it is more efficient to create a sequence item and assign values directly instead of calling randomize(). This is especially true if you don’t want to apply any constraints.

Since the one-line macro expands to three statements, the flexibility gained by manually coding only what you need outweighs the convenience of the macros.
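For reference, an approximate hand-coded equivalent of `uvm_do inside a sequence body() looks like the sketch below (my_item is an assumed sequence_item type). This is the form in which each step can be customized individually.

```systemverilog
// Approximate hand-coded expansion of `uvm_do(req) - my_item is assumed.
virtual task body();
  my_item req = my_item::type_id::create("req");
  start_item(req);                 // wait for the sequencer grant
  if (!req.randomize())            // promote failure to an error, not a warning
    `uvm_error("SEQ", "req randomization failed")
  finish_item(req);                // hand the item to the driver
endtask
```

Written out this way, the error severity, the constraints, and the item lifetime are all under the user's control.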

In reply to cgales:

I hope you don’t find me argumentative, but i’m just curious on some of your points.

  • OK, so a sequence_item is created, but since sequence_items are objects, wouldn’t they disappear in whatever garbage collection SV has? The same data is still taking up the same memory space at that point in time, with the same overheads.

  • I appreciate the randomization point - I never knew this!

  • The last point is subject to the sequence or sequence_item hierarchy. The point of coverage-driven verification is to constrain what you need and let everything else randomize. In that context, creating a sequence_item and letting the contents randomize based on a few constraints is exactly the simplicity and abstraction that is the holy grail of test design.
    This is the appeal of `uvm_do to me - my tests are just a collection of sequences that call `uvm_do.
    Most of my sequence_items have constraints for the limits, and then all the hard data is generated with $urandom in post_randomize(), depending on how much control I need of the data distribution.

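The style described above might look something like this sketch: constraints bound the sizes, and post_randomize() fills in the payload with $urandom_range. All names (my_item, len, payload) are illustrative assumptions.

```systemverilog
// Sketch of "constraints for the limits, hard data in post_randomize".
class my_item extends uvm_sequence_item;
  `uvm_object_utils(my_item)

  rand int unsigned len;
  byte unsigned     payload[];

  constraint len_c { len inside {[1:64]}; }  // only the limit is constrained

  function new(string name = "my_item");
    super.new(name);
  endfunction

  function void post_randomize();
    payload = new[len];
    foreach (payload[i]) payload[i] = $urandom_range(0, 255);
  endfunction
endclass
```

This keeps the solver's job small (one length variable) while still producing fully random data.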
In reply to Phill_Ferg:

Nothing wrong with discussing the advantages and disadvantages of coding styles as there are many different ways of accomplishing the same thing. Our recommendations come from our experiences with customers around the world involving all different complexities of verification environments, so we try to provide something that works well with all users.

With respect to creating a new sequence item with every call, the amount of memory used shouldn’t change, although this would be simulator dependent with how and when garbage collection is performed. The main benefit with creating only one sequence item and reusing it comes from the performance overhead required for each item. If you are doing single transactions, then there is no difference. However, we see customers doing loops of thousands of transactions. In this case, there can be a measurable performance impact.
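The reuse pattern described here can be sketched as follows, assuming the driver does not keep a reference to the item after item_done() (my_item is an assumed type):

```systemverilog
// Sketch: one sequence_item reused across a long loop instead of
// constructing a new object per iteration.
virtual task body();
  my_item req = my_item::type_id::create("req");
  repeat (10_000) begin
    start_item(req);
    if (!req.randomize())           // re-randomize the same object each pass
      `uvm_error("SEQ", "req randomization failed")
    finish_item(req);
  end
endtask
```

Compared with `uvm_do in the loop, this avoids one object construction per transaction.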

As you mentioned, a completely self-contained randomizable environment is ideal, but new customers who don’t understand this concept will utilize UVM in a directed-type methodology. This results in sequences with many specific constraints which are better applied without the overhead of a randomize() call.
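A directed-style sequence of the kind described here might simply assign fields and skip randomize() entirely. The field names below are assumptions for illustration:

```systemverilog
// Directed sketch: fixed field values instead of constraints.
virtual task body();
  my_item req = my_item::type_id::create("req");
  start_item(req);
  req.addr = 32'h0000_1000;   // assign directly - no randomize() overhead
  req.data = 32'hDEAD_BEEF;
  finish_item(req);
endtask
```

This is exactly the case where the fixed create/randomize/send expansion of `uvm_do gets in the way.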

Overall, it is up to the user to determine which approach they desire to use. It is easier to start out with the explicit generation and make minor changes than it is to start with the macros and then realize that you will need to re-factor at a later date to make a small change.

In reply to cgales:

What is stopping you from removing all macros from UVM?

Due to different approaches to doing the same thing (`uvm_do (Cadence) vs .randomize() (Mentor)), OVM & UVM became CVM (confused verification methodology).

What the industry needs is a UVM (unified verification methodology).

No Cadence vs Mentor vs Synopsys.

In reply to spradhan:

Unfortunately UVM is designed by committee, with each member wanting their own ‘features’ which carry on the concepts of their original legacy Verification Methodologies. This results in a methodology which contains many features which may or may not be considered ideal.

The coding guidelines and recommendations we provide are designed to make it easy for the new UVM user to get started while providing the power and flexibility for the advanced user.