Targeting Internal-State Scenarios in an Uncertain World

by Matthew Ballance, Verification Technologist, Mentor Graphics

Verification Horizons, June 2012 - Tom Fitzpatrick, Editor

The challenges inherent in verifying today's complex designs are widely understood. Just identifying and exercising all the operating modes of a modern design can be difficult, and creating tests that exercise all of these input cases is labor-intensive. With a directed-test methodology, the engineering effort needed to design, implement, and manage a sufficiently comprehensive test suite makes ensuring design quality extremely difficult. Random test methodology helps to address the productivity and management challenges, since automation is leveraged more efficiently. However, the inherent redundancy of randomly-generated stimulus makes it difficult to ensure that all critical cases are hit.

Questa inFact Intelligent Testbench Automation, part of the Questa verification platform, provides a comprehensive solution for efficiently and thoroughly exercising the functional input space of a design – the input commands and operating modes. Questa inFact's efficient graph-based stimulus description enables 10-100x more unique stimulus to be created in a given period of time than could be created using directed tests. The advanced coverage-targeting algorithms within Questa inFact achieve input functional coverage 10-100x faster than random stimulus, and this benefit scales easily across a simulation farm.

Many of the most interesting verification scenarios, however, involve design-internal state. These internal-state scenarios often end up being verified with directed tests, due to the difficulty of coercing random tests to reliably target the desired scenarios. Often, the difficulty in exercising these internal-state scenarios lies in properly combining the inputs required to achieve the pre-conditions for the internal-state scenario with the stimulus required to make progress towards coverage of the scenario. For example, a customer I recently worked with found that in one case their entire regression suite covered only 5% of a moderately-sized internal-state coverage model, due to the dual requirements of first creating the pre-conditions and then hitting an interesting internal-coverage case once the pre-conditions were met.

In this article, we will look at how two capabilities of Questa inFact Intelligent Testbench Automation can be used to target verification scenarios involving design-internal state more efficiently.

PIPELINED COMMAND PROCESSOR EXAMPLE

The example that we will examine in this article is a command-processing pipeline. The pipeline, in this case, is a five-stage pipeline that processes a command with operands. This particular processor supports eight commands – CMD1 through CMD8. Under ideal circumstances, a new input command is accepted by the pipeline every cycle, and a single command completes every cycle. As with all pipelined processors, however, stalls can occur when one of the stages takes longer than one cycle to complete.

One of our verification tasks for this pipeline involves ensuring that certain command sequences proceed through the pipeline. Specifically, we want to ensure that all combinations of back-to-back commands are exercised: for example, CMD1 followed by CMD1, CMD1 followed by CMD2, and so on. We also want to exercise these same command pairs with one, two, and three other commands in the middle. Figure 1 summarizes the sequences that we wish to verify. The blue-shaded commands are the ones that we care about from a coverage perspective. The grey-shaded boxes are the commands whose specific value we don't care about, apart from ensuring that they differ from the commands that begin and end the sequence.

We are using a UVM environment for this block. Stimulus is described as a UVM sequence item that contains a field that specifies the command (cmd) as well as fields for both command operands (operand_a, operand_b). A UVM sequence running on the sequencer is responsible for generating this stimulus, while the driver converts the command described in the sequence item to signal-level transactions that are applied to the command-processor's interface.
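
As a point of reference, a minimal SystemVerilog sketch of such a sequence item is shown below. The cmd_item class name and its three fields come from the article; the command enumeration and the 32-bit operand widths are illustrative assumptions.

    // Minimal sketch of the cmd_item sequence item described above.
    // cmd, operand_a, and operand_b come from the article; the enum
    // type and the operand widths are assumptions.
    import uvm_pkg::*;
    `include "uvm_macros.svh"

    typedef enum bit [2:0] {CMD1, CMD2, CMD3, CMD4, CMD5, CMD6, CMD7, CMD8} cmd_e;

    class cmd_item extends uvm_sequence_item;
      rand cmd_e      cmd;        // which of the eight commands to issue
      rand bit [31:0] operand_a;  // first command operand
      rand bit [31:0] operand_b;  // second command operand

      `uvm_object_utils_begin(cmd_item)
        `uvm_field_enum(cmd_e, cmd, UVM_ALL_ON)
        `uvm_field_int(operand_a, UVM_ALL_ON)
        `uvm_field_int(operand_b, UVM_ALL_ON)
      `uvm_object_utils_end

      function new(string name = "cmd_item");
        super.new(name);
      endfunction
    endclass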

Figure 1 - Command-Sequence Examples

This verification scenario presents a couple of challenges. First, describing the full set of desired sequences is difficult. We could, of course, carefully create the scenario sequences using directed tests, but this would be a significant amount of work. We could leverage random generation on a single-command basis and hope to hit all the cases. However, the efficiency with which we can achieve our goals is hampered by the redundancy inherent in random generation, and by the fact that the constraint solver doesn't comprehend the overall sequential goal that we are targeting. The second challenge involves the pipeline stalls. From our perspective as test writers, these stalls are unpredictable. Despite our careful efforts to design a command sequence to apply to the pipeline, what is actually processed by the pipeline may be quite different from what we intended.

DESCRIBING THE STIMULUS SPACE

The task of describing and generating the command sequences is a classic input-stimulus problem. First, we create a set of inFact rules that describe the sequence of five commands. The rule description specifies the variables for which inFact will select values and the constraints between the variable values (in this case, for simplicity, there are no validity constraints).

At the top of the rule description, we declare graph variables, using the meta_action keyword, corresponding to the fields in the cmd_item sequence item: cmd, operand_a, and operand_b. We also need to check the state of the pipeline when we issue a command. The cmdX_stall_i meta_action_import variables bring the current state of the pipeline into inFact from the testbench. Since we are describing a sequence of five commands, we create five sets of variable declarations to represent cmd1 through cmd5.
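
The rules language itself is tool-specific, so purely to illustrate the shape of these declarations, a hypothetical fragment might look like the following. Only the meta_action and meta_action_import keywords and the variable names are taken from the article; the surrounding syntax is an assumption, not verified inFact syntax.

    // Hypothetical illustration only - not verified inFact rules syntax.
    // The keywords and variable names come from the article; everything
    // else is an assumption.
    meta_action        cmd1;          // command selected for step 1
    meta_action        operand_a1;    // operands for step 1
    meta_action        operand_b1;
    meta_action_import cmd1_stall_i;  // pipeline stall state from the testbench
    // ...repeated for cmd2/operand_a2/... through cmd5...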

Figure 2 - UVM Environment

We use symbols to group our variables together. Each symbol defined below in Figure 3 (Cmd1 through Cmd5) declares the sequence of operations needed to issue a single command to the command processor. Specifically: Call the UVM Sequence API start_item task, sample the stall state of the pipeline, select values for cmd, operand_a, and operand_b, then call the UVM Sequence API finish_item task to send the sequence item to the command processor.
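
For comparison, the behavior that inFact drives through these symbols corresponds roughly to the hand-written UVM sequence sketched below. start_item and finish_item are the standard UVM sequence API; the sequence class name is an assumption, and where inFact would sample the stall state and select values, this sketch simply calls randomize().

    // Rough hand-written analogue of one pass through Cmd1..Cmd5.
    // start_item/finish_item are standard UVM; the class name is an
    // assumption, and randomize() stands in for inFact's value selection.
    class cmd_seq extends uvm_sequence #(cmd_item);
      `uvm_object_utils(cmd_seq)

      function new(string name = "cmd_seq");
        super.new(name);
      endfunction

      virtual task body();
        cmd_item item;
        repeat (5) begin
          item = cmd_item::type_id::create("item");
          start_item(item);           // arbitrate for the sequencer
          // ...the stall state (cmdX_stall_i) would be sampled here...
          if (!item.randomize())      // select cmd, operand_a, operand_b
            `uvm_error("CMD_SEQ", "randomization failed")
          finish_item(item);          // send the item to the driver
        end
      endtask
    endclass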

Figure 3 - Command-Sequence Rules

Finally, at the bottom of the rule file we describe the top-level operation sequence. The most important aspect of this operation sequence is the repeat loop that contains references to the Cmd1 through Cmd5 symbols. During execution, this will cause inFact to repeatedly generate sequences of five commands.

Figure 4, below, provides a visual representation of our stimulus space. We can see the top-level sequence of operations described at the bottom of the rule file.

Figure 4 - Command-Sequence Graph

The graph is expanded to show the implementation specifics of the Cmd1 symbol. As you can see, the graph is a nice visual way to view the stimulus space and verification process. For each of the five commands in the sequence, we will read in state information from the testbench that indicates whether the pipeline is stalled, and select a command and operands to issue.

TARGETING VERIFICATION GOALS

Next, we need to describe the set of stimulus for Questa inFact to generate, corresponding to the verification goals outlined above. At a high level, we are interested in crossing the following variables in order to realize the command sequences described earlier (a coverage-model sketch follows the list):

  • Cmd1 x Cmd2
  • Cmd1 x Cmd3
  • Cmd1 x Cmd4
  • Cmd1 x Cmd5
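
Purely as an illustration of these crosses, a plain SystemVerilog covergroup capturing the same intent might look like the sketch below. The covergroup and coverpoint names are assumptions, and in the actual flow the goals live in inFact's coverage-targeting description rather than in a covergroup.

    // Illustrative covergroup for the Cmd1 x CmdN crosses; names are
    // assumptions, and cmd_e is the enum from the earlier sketch.
    covergroup cmd_seq_cg with function sample(cmd_e cmd1, cmd_e cmd2,
                                               cmd_e cmd3, cmd_e cmd4,
                                               cmd_e cmd5);
      cp_cmd1 : coverpoint cmd1;
      cp_cmd2 : coverpoint cmd2;
      cp_cmd3 : coverpoint cmd3;
      cp_cmd4 : coverpoint cmd4;
      cp_cmd5 : coverpoint cmd5;

      x_cmd1_cmd2 : cross cp_cmd1, cp_cmd2;  // back-to-back pair
      x_cmd1_cmd3 : cross cp_cmd1, cp_cmd3;  // one command in between
      x_cmd1_cmd4 : cross cp_cmd1, cp_cmd4;  // two commands in between
      x_cmd1_cmd5 : cross cp_cmd1, cp_cmd5;  // three commands in between
    endgroup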

However, we also need to account for the requirement that commands in the middle of our command sequences must be different from the starting and ending commands in the sequence. Questa inFact provides a special type of constraint, called a coverage constraint, which provides an added level of flexibility and productivity when describing stimulus-creation scenarios like the one above. A coverage constraint only applies while inFact is targeting a specific stimulus-generation goal, so stimulus can be tightly targeted during that portion of the simulation and then revert to being less constrained once the goal is achieved.

We create four constraints like the one shown below to describe the specific restriction that commands in the middle of a sequence must be different from the commands at the beginning and the end of the sequence. The constraint shown in Figure 5 describes the restrictions on a three-command sequence. In this case, our verification goals call for the command in the middle of the sequence (cmd2) to be different from the command at the beginning of the sequence (cmd1) and the command at the end of the sequence (cmd3). This constraint, and the other three like it, are linked to the corresponding cross-coverage goals that describe our verification goals.
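
Expressed as an ordinary SystemVerilog constraint, the three-command restriction amounts to the sketch below. This is an analogue for illustration only: a plain constraint applies unconditionally, whereas the coverage constraint in Figure 5 applies only while inFact is targeting the associated coverage goal.

    // SystemVerilog analogue of the Figure 5 restriction; the class
    // name is an assumption. Unlike an inFact coverage constraint,
    // this constraint is always in force.
    class cmd_triple;
      rand cmd_e cmd1, cmd2, cmd3;
      constraint mid_cmd_differs_c {
        cmd2 != cmd1;  // middle command differs from the starting command
        cmd2 != cmd3;  // middle command differs from the ending command
      }
    endclass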

Figure 5 - Coverage Constraint

REACTIVE STIMULUS

Of course, our efficiently-described, comprehensive stimulus isn't much use if the design doesn't actually accept what we apply. Fortunately, Questa inFact supports generating reactive stimulus based on state information from the environment. inFact can react to the current design state in cases where this is required to create valid stimulus. In addition, inFact can make coverage-targeting decisions based on input from the environment. This enables inFact to take advantage of the current state of the environment to make rapid progress towards coverage, even when it isn't able to directly control the environment state. In other words, inFact constantly looks for an opportunity to make progress towards the user-specified verification goals, making choices based on the current state to target goals that have not yet been satisfied.

In this case, the environment provides a way to query whether the first stage of the pipeline is stalled. Feeding this design-state information to inFact before each command is issued allows inFact to properly target our back-to-back command verification goals. Since the pipe-stage stall information tells us whether our verification scenario was properly applied, we reference the stall information in our coverage constraints. If the pipeline stalls during application of a command sequence, the coverage constraint will evaluate to false, causing inFact to retry that command sequence at a future time.
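
A minimal sketch of the testbench side of this handshake, under the assumption that the stage-1 stall state is visible on a virtual interface, is shown below. The interface and signal names are assumptions; how inFact itself consumes the cmdX_stall_i values is internal to the tool.

    // Hypothetical sampling of the stage-1 stall state; interface and
    // signal names are assumptions. The sampled value is what the
    // cmdX_stall_i meta_action_import variables would carry into inFact.
    interface cmd_proc_if(input logic clk);
      logic stage1_stall;  // asserted when the first pipe stage is stalled
    endinterface

    class stall_monitor;
      virtual cmd_proc_if vif;

      function new(virtual cmd_proc_if vif);
        this.vif = vif;
      endfunction

      // Sampled before each command is issued; a '1' means the sequence
      // was disturbed and the coverage goal should be retried later.
      function bit stage1_stalled();
        return vif.stage1_stall;
      endfunction
    endclass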

CONCLUSION

As we've seen from this example, Questa inFact provides specific features that simplify the process of targeting coverage involving design-internal states. Coverage constraints simplify the process of describing complex verification goals. Questa inFact's reactive stimulus generation enables inFact to react to the design state and generate stimulus that makes progress towards the verification goals whenever possible. And, as always, inFact's redundancy-eliminating algorithms enable efficient coverage closure for verification goals with and without design-state dependencies. The customer I mentioned at the beginning of the article applied inFact to the verification problem where their full regression achieved only 5% of a particular internal-state coverage goal. With a small amount of integration work and one short inFact simulation, they were able to achieve full coverage of that verification goal. For them, achieving this type of difficult-to-hit coverage goal was critical to the success of their project. The ability to achieve the verification goal efficiently – both in terms of engineering investment and simulation time – was truly intelligent testbench automation.
