Exercising State Machines with Command Sequences

Verification Horizons - December 2019 - Tom Fitzpatrick, Editor

Matthew Ballance - Mentor, A Siemens Business

Almost every non-trivial design contains at least one state machine, and exercising that state machine through its legal states, state transitions, and the different reasons for state transitions is key to verifying the design’s functionality. In some cases, we can exercise a state machine simply as a side-effect of performing normal operations on the design. In other cases, the state machine may be sufficiently complex that we must take explicit targeted steps to effectively exercise the state machine. In this article, we will see how inFact’s systematic stimulus generation and ability to generate constraint-aware functional coverage simplify the process of exercising a state machine by generating command sequences.

STATE MACHINE EXAMPLE

The example used in this article is the state machine for an LPDDR SDRAM memory. A generic and slightly-simplified LPDDR state diagram is shown below, with the relevant states and the commands that provoke transitions between them.

The state machine shown here appears deceptively simple. As we will see, exercising all valid three-deep sequences of commands is far from trivial! Much of the complication comes from the fact that we don’t just need to exercise the state machine. We also need to ensure the design is in a state where transitions in the state machine can be exercised.

Figure 1 - LPDDR State Machine


MAPPING TO STIMULUS

There are a few ways that we could write stimulus for generating LPDDR commands to exercise the memory device’s state machine. We could write a transaction that represents a single LPDDR command and not attempt to target the sequence of commands. On the other extreme, we could write a set of directed tests that attempts to exercise each possible transition in the state machine. We will take a slightly different path, and capture the constraints that relate the device state to each of the three commands. This will allow us to use inFact’s systematic stimulus generation to efficiently exercise all the valid command sequences.

Identifying State

The LPDDR state machine, like many others, is conditioned by device state. For example, in order to perform a write (WR_x) on one of the banks, that bank must be in the active state. Our first task is to identify the device states that dictate the valid commands that can be applied at any given point in time.

With LPDDR, there are two elements of state to be aware of:

  • Whether the device is in the self-refresh state
  • Whether a given bank is active

We start by capturing these state elements in a struct, as shown below in Figure 2. Note that in this LPDDR memory, there are eight banks.

Figure 2 - LPDDR State Information

typedef struct {
	rand bit[3:0]		bank_active[8];
	rand bit			refresh;
} cmd_state_s;


We will use this state to condition the set of commands that can be generated next.

Identifying Command-Generation Constraints

The first thing we need to do in forming the command-generation constraints is to identify the set of commands. The enumerated type shown in Figure 3 below encodes every command that can be applied – either globally or to a specific bank.

Figure 3 - LPDDR command set

typedef enum {
	PREA, 
	SRE,
	SRX,
	REFA,
	ACT_0, ACT_1, ACT_2, ACT_3, ACT_4, ACT_5, ACT_6, ACT_7,
	WR_0, WR_1, WR_2, WR_3, WR_4, WR_5, WR_6, WR_7,
	WRA_0, WRA_1, WRA_2, WRA_3, WRA_4, WRA_5, WRA_6, WRA_7,
	RD_0, RD_1, RD_2, RD_3, RD_4, RD_5, RD_6, RD_7,
	RDA_0, RDA_1, RDA_2, RDA_3, RDA_4, RDA_5, RDA_6, RDA_7,
	PRE_0, PRE_1, PRE_2, PRE_3, PRE_4, PRE_5, PRE_6, PRE_7,
	REF_0, REF_1, REF_2, REF_3, REF_4, REF_5, REF_6, REF_7
} lpddr_cmd_e;
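
As a quick sanity check on the size of this command set, the enumeration can be reproduced outside of SystemVerilog. The Python sketch below is purely illustrative (it is not part of the article's tool flow) and simply rebuilds the same list: 4 global commands plus 7 per-bank command kinds across 8 banks.

```python
# Illustrative model of the lpddr_cmd_e command set (not tool code):
# 4 global commands plus 7 per-bank command kinds across 8 banks.
GLOBAL_CMDS = ["PREA", "SRE", "SRX", "REFA"]
BANK_CMD_KINDS = ["ACT", "WR", "WRA", "RD", "RDA", "PRE", "REF"]

COMMANDS = GLOBAL_CMDS + [
    f"{kind}_{bank}" for kind in BANK_CMD_KINDS for bank in range(8)
]

assert len(COMMANDS) == 60  # matches the 60 enum values in Figure 3
```

This 60-value command alphabet is what makes the later coverage numbers work out.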


We next capture everything necessary to describe a single command within a class, as shown in Figure 4.

Figure 4 - Single-Command Class

class lpddr_cmd;
	rand lpddr_cmd_e		cmd;
	rand int 			    target_bank;
	rand cmd_state_s		state;
	// . . .
 
endclass


The single-command class captures the device state prior to execution of the command (the state field), the command to be generated, and the target bank (if applicable) to which the command is applied.

Now we need to form constraints against the current state that reflect the rules expressed in the state machine.

Figure 5 - Constraints on Self-Refresh

constraint refresh_c {
	// When in self-refresh mode, the only thing we can
	// do is to exit self-refresh mode. When not in 
	// self-refresh mode, we cannot exit self-refresh mode
	if (state.refresh) {
		// SRE stays in self-refresh mode; SRX exits
		cmd inside {SRE, SRX};
	} else {
		cmd != SRX;
	}
 
	(state.bank_active.sum() != 0)  -> cmd != SRE;
 
}


Figure 5 above shows the constraints necessary to encode the arcs between the idle state and the self-refresh state. Note that the only thing we can do while in self-refresh state is to stay in that state or exit self-refresh state. When we are not in the refresh state, we cannot issue a self-refresh-exit command. Finally, we cannot issue a self-refresh command unless all of the banks are idle.
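
To make those rules concrete, the following Python sketch is a hypothetical reference model of the refresh_c constraint (the function name and representation are our own, not the article's code): given the refresh flag and the per-bank active flags, it decides whether a command satisfies the self-refresh rules.

```python
# Illustrative reference model of the refresh_c constraint in Figure 5
# (a Python sketch, not inFact or SystemVerilog).
def refresh_legal(cmd, refresh, bank_active):
    """True if 'cmd' satisfies the self-refresh rules for this state."""
    if refresh:
        # In self-refresh mode we may only stay (SRE) or exit (SRX)
        if cmd not in ("SRE", "SRX"):
            return False
    else:
        # Not in self-refresh mode: exiting is not meaningful
        if cmd == "SRX":
            return False
    # Self-refresh entry requires all banks to be idle
    if cmd == "SRE" and any(bank_active):
        return False
    return True

# In self-refresh with an active bank, only SRX remains legal
assert {c for c in ("SRE", "SRX", "PREA", "WR_0")
        if refresh_legal(c, True, [1] + [0] * 7)} == {"SRX"}
```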

A similar set of constraints controls when bank-specific commands can be issued, as shown in Figure 6.

Figure 6 - Bank-specific Command Constraints

constraint bank_cmd_c {
    if (cmd inside {PREA, SRE, SRX, REFA}) {
        target_bank == -1;
    } else {
        target_bank inside {[0:7]};
 
        // Restrict possible commands based on bank state
        foreach (state.bank_active[i]) {
            if (i == target_bank) {
                if (state.bank_active[i]) {
                    // The only things we cannot do when the bank
                    // is active are activate it again or refresh it
                    cmd inside {WR_0+i, WRA_0+i, RD_0+i,
                                RDA_0+i, PRE_0+i};
                } else {
                    // In the idle state, the only things we can do
                    // are activate the bank or refresh it
                    cmd inside {ACT_0+i, REF_0+i};
                }
            }
        }
    }
}


These constraints primarily ensure that bank-specific commands can only be issued when the target bank is in the appropriate state. Again, each restriction traces back to a rule described by the state machine shown in Figure 1.
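
The bank-state rules can be modeled the same way. The sketch below is again a hypothetical helper in Python, not the article's tool flow; it parses a command name like "WR_3" and applies the active/idle rules from Figure 6.

```python
# Illustrative model of the bank_cmd_c constraint in Figure 6
# (hypothetical helper, not the article's code).
GLOBALS = {"PREA", "SRE", "SRX", "REFA"}

def bank_cmd_legal(cmd, bank_active):
    """True if 'cmd' satisfies the bank-state rules of Figure 6."""
    if cmd in GLOBALS:
        return True          # global commands carry no target bank
    kind, bank = cmd.rsplit("_", 1)
    bank = int(bank)
    if bank_active[bank]:
        # Active bank: anything except re-activate or refresh
        return kind in ("WR", "WRA", "RD", "RDA", "PRE")
    # Idle bank: may only be activated or refreshed
    return kind in ("ACT", "REF")

assert bank_cmd_legal("WR_3", [0, 0, 0, 1, 0, 0, 0, 0])
assert not bank_cmd_legal("ACT_3", [0, 0, 0, 1, 0, 0, 0, 0])
```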

Identifying Command-Sequence Constraints

That’s just the first step. Now we need to capture the rules around sequences of commands. As mentioned at the beginning of the article, we are targeting generation of sequences of three commands. Consequently, we define a three-deep array of the command classes we looked at in the previous section.

Figure 7 - Command-sequence Class

class lpddr_cmd_seq;
	parameter int unsigned N_CMDS = 3;
 
	rand lpddr_cmd		cmds[N_CMDS];
	cmd_state_s		    prev_state;
 
	function new();
		foreach (cmds[i]) begin
		        cmds[i] = new();
		end
	endfunction
    // . . .
 
endclass


Note that we also capture the previous state, which is the state the device was left in by the last command of the previous sequence (or the initial device state, for the first sequence).

Figure 8 - State-Transition Constraints

// Constraints to relate the sub-commands
constraint prop_bank_state_c {
    foreach (cmds[i]) {
        if (i == 0) {
            // Pull the previous state
            cmds[i].state.refresh == prev_state.refresh;
            foreach (cmds[i].state.bank_active[j]) {
                cmds[i].state.bank_active[j] == prev_state.bank_active[j];
            }
        } else {
            // Look back and determine how the
            // last command impacts the current state
            if (cmds[i-1].cmd == SRE) {
                cmds[i].state.refresh == 1;
            } else {
                cmds[i].state.refresh == 0;
            }
 
            if (cmds[i-1].cmd inside {PREA, SRE, SRX, REFA}) {
                // a PREA command deactivates all banks
                // Other global commands require banks to be inactive
                foreach (cmds[i].state.bank_active[j]) {
                    cmds[i].state.bank_active[j] == 0;
                }
            } else {
                foreach (cmds[i].state.bank_active[j]) {
                    if (cmds[i-1].cmd == ACT_0+j) {
                        // A previous bank-activate command causes
                        // the current bank state to be 1
                        cmds[i].state.bank_active[j] == 1;
                    } else if (
                        cmds[i-1].cmd == WRA_0+j
                        || cmds[i-1].cmd == RDA_0+j
                        || cmds[i-1].cmd == PRE_0+j) {
                        // A RD/WR with auto-precharge deactivates the bank
                        // An explicit precharge deactivates the bank
                        cmds[i].state.bank_active[j] == 0;
                    } else {
                        // Other bank-specific commands leave
                        // the bank state unchanged
                        cmds[i].state.bank_active[j] == 
                            cmds[i-1].state.bank_active[j];
                    }
                }
            }
        }
    }
}


The constraints shown in Figure 8 above compute the value of the state variables at each step of the command sequence based on the command that is executed. For example, if the last command was SRE (self-refresh enter), then the state of the design for this command will be ‘refresh’. Note, also, that the state of the first command is set to be equal to the state from the end of the previous command sequence, captured in the prev_state field.
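
Viewed procedurally, the prop_bank_state_c constraints amount to a next-state function. The Python sketch below mirrors that function for illustration only (the names are ours, not the article's code): given the state before a command, it computes the state the next command will see.

```python
# Illustrative next-state function mirroring the prop_bank_state_c
# constraints of Figure 8 (a Python sketch, not the article's code).
def next_state(refresh, bank_active, cmd):
    """Return (refresh, bank_active) as seen by the following command."""
    refresh = (cmd == "SRE")  # only SRE leaves the device in self-refresh
    if cmd in ("PREA", "SRE", "SRX", "REFA"):
        # PREA deactivates all banks; the other global commands are
        # only legal when all banks are already idle
        return refresh, [0] * 8
    kind, bank = cmd.rsplit("_", 1)
    bank = int(bank)
    nxt = list(bank_active)
    if kind == "ACT":
        nxt[bank] = 1            # activate opens the bank
    elif kind in ("WRA", "RDA", "PRE"):
        nxt[bank] = 0            # auto-precharge / precharge closes it
    return refresh, nxt          # other commands leave bank state as-is

r, b = next_state(False, [0] * 8, "ACT_2")
assert (r, b[2]) == (False, 1)
```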

We’ve now defined the stimulus model for generating three-deep command sequences of LPDDR commands. We’ll come back to how we will use inFact to efficiently generate these command sequences.

DEFINING COVERAGE

In addition to generating valid stimulus, we also want to be able to collect coverage. It’s difficult to create coverage for command sequences like this because command validity depends on device state, and because of the constraints across commands. Fortunately, inFact gives us a way both to easily specify these coverage goals and to generate a SystemVerilog coverage model from this coverage specification.

The first step is to capture the coverage goals. Figure 9 shows a CSV (comma-separated value) file displayed as a spreadsheet that captures the coverage goals for our three-deep command sequence.

Figure 9 - Coverage Definition File


Our coverage specification simply needs to capture the goals we care about: a cross between the three commands in the three-deep command sequence array.

Figure 10 - Generating Functional Coverage

qcc lpddr_cmd_seq_pkg::lpddr_cmd_seq \
    -coverage-strategy app_lpddr_cmd_seq_cov.csv \
    -name app_lpddr_cmd_seq_cov \
    -o app_lpddr_cmd_seq_cov.svh


Figure 10 above shows the inFact command used to generate a SystemVerilog covergroup. inFact leverages the coverage goals captured in the CSV file along with the constraints captured in the SystemVerilog class to generate a SystemVerilog covergroup that accurately captures the reachable combinations and excludes the unreachable combinations.

A big benefit of using inFact’s automation in creating functional coverage is that inFact will automatically compute the unreachable solutions from the constraints, and will generate the exclusion bins to exclude these unreachable cases. As Figure 11 shows, the exclusions in this case are both extensive and complex. Certainly not something that is easy to create by hand!

Figure 11 - Command-sequence Coverage Exclusions


In addition to generating code to exclude unreachable combinations, inFact also reports the number of reachable combinations, as shown in Figure 12. As expected, all 60 values of each individual command coverpoint are reachable. However, due to the constraints, only 187,845 of the apparent 216,000 command-sequence combinations are reachable. That’s still quite a large number of command sequences we need to exercise!
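
The arithmetic behind those figures is simple to check:

```python
# Arithmetic behind Figure 12: 60 possible commands at each of the
# three sequence positions gives the apparent size of the cross.
N_COMMANDS = 60
apparent = N_COMMANDS ** 3
assert apparent == 216_000

# The constraints prune this to the 187,845 reachable tuples the
# article reports, i.e. roughly 87% of the apparent cross.
reachable = 187_845
ratio = reachable / apparent
```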

Figure 12 - Command-Sequence Reachable Combinations

// Coverpoint cmd_0
cmd_0 : coverpoint item.cmds[0].cmd {
	option.weight = 60;
	bins lpddr_cmd_seq_inst_cmds_0_cmd[] = {[lpddr_cmd_seq_pkg::PREA:lpddr_cmd_seq_pkg::REF_7]};
}
// Coverpoint cmd_1
cmd_1 : coverpoint item.cmds[1].cmd {
	option.weight = 60;
	bins lpddr_cmd_seq_inst_cmds_1_cmd[] = {[lpddr_cmd_seq_pkg::PREA:lpddr_cmd_seq_pkg::REF_7]};
}
// Coverpoint cmd_2
cmd_2 : coverpoint item.cmds[2].cmd {
	option.weight = 60;
	bins lpddr_cmd_seq_inst_cmds_2_cmd[] = {[lpddr_cmd_seq_pkg::PREA:lpddr_cmd_seq_pkg::REF_7]};
}
// Cross cmd_cross
cmd_cross : cross cmd_0, cmd_1, cmd_2 {
	option.weight = 187845;
}


TESTBENCH INTEGRATION

In order to integrate our inFact-automated command-sequence generator into the testbench, we need to do two things: use inFact to create a stimulus-generator class, and integrate that class into a UVM sequence (assuming we’re using UVM in our testbench).

Figure 13 - Stimulus-Generation Class Creation Command

qso lpddr_cmd_seq_pkg::lpddr_cmd_seq \
  -coverage-strategy app_lpddr_cmd_seq_cov.csv \
  -o lpddr_cmd_seq_gen.svh


Figure 13 above shows the inFact command that reads in the command-sequence class that contains the array of commands and constraints and the coverage-strategy CSV file, and produces a class to allow inFact to efficiently generate all the reachable command-sequence combinations.

Figure 14 - Integrating the Command-Sequence Generator

class lpddr_cmdseq_seq extends lpddr_seq;
 
	function new(string name = "lpddr_cmdseq_seq");
		super.new(name);
	endfunction
 
	task body();
		lpddr_cmd_seq cmd_seq = lpddr_cmd_seq::type_id::create("cmd_seq");
		lpddr_cmd_seq_gen cmd_gen = new({get_full_name(), ".cmd_gen"});
 
		repeat (10000) begin
			// Call inFact to generate three commands
			cmd_gen.ifc_fill(cmd_seq);
 
			// Execute the three commands
			foreach (cmd_seq.cmds[i]) begin
				run_cmd(cmd_seq.cmds[i]);
			end
		end
	endtask
 
endclass


Now, of course, we need to connect our command-sequence generator to our UVM testbench. About the only challenge here is that our command generator produces three commands at a time, while our testbench accepts a single command at a time. The integration approach we take, as shown in Figure 14 above, is to generate three commands at a time, then apply them one at a time to the testbench via the ‘run_cmd’ task.

Note that our sequence runs a loop of 10,000 command sequences (30,000 commands). This means that we will need to run quite a few simulations in order to achieve our command-sequence coverage goal. Fortunately, inFact offers a regression mode where every simulation makes unique progress toward the overall coverage goal.
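
As a rough sense of scale, here is a back-of-the-envelope lower bound on the number of runs. It idealizes the regression mode by assuming every generated sequence is unique, so real regressions may need more runs than this:

```python
import math

# Idealized lower bound: each run generates 10,000 three-command
# sequences, and 187,845 distinct sequences are reachable. Assuming
# no sequence is ever repeated across runs (an idealization), we need
# at least this many simulations to cover the cross.
runs = math.ceil(187_845 / 10_000)
assert runs == 19
```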

CONCLUSION

Command sequences are an excellent way to exercise a sequential design. While there are several ways to generate command sequences, capturing the relationships between the design state and valid next command as constraints allows automation to help. Using this constraint-based description, inFact is able to generate a SystemVerilog covergroup that accurately captures the reachable command sequences, as well as enabling us to efficiently and systematically generate all the command sequences of interest.


© Mentor, a Siemens Business, All rights reserved www.mentor.com
