Verification Horizons | December 2017

Getting Generic with Test Intent: Separating Test Intent from Design Details with Portable Stimulus

Verification Horizons - Tom Fitzpatrick, Editor

 Separating Test Intent from Design Details with Portable Stimulus by Matthew Ballance - Mentor, A Siemens Business

It’s pretty typical to think about writing tests for a specific design. However, as the number of SoCs and SoC variants that a verification team is responsible for grows, creating tests that are specific to the design is becoming impractical. There has been a fair amount of innovation in this space recently. Some organizations are using design-configuration details to customize parameterized test suites. Some have even gone as far as generating both the design and the test suite from the same description.

The emerging Accellera Portable Stimulus Standard (PSS) provides features that enable test writers to maintain a strong separation between test intent (the high-level rules bounding the test scenario to produce) and the design-specific tests that are run against a specific design. This article shows how Accellera PSS can be used to develop generic test intent for generating memory traffic in an SoC, and how that generic test intent is targeted to a specific design.

Figure 1. Example Multi-Core SoC


SoCs have complex memory subsystems with cache coherent interconnects for cores and accelerators, multiple levels of interconnects and bridges, and many initiators and targets – often with limited accessibility. While it’s certainly important to verify connectivity between all initiators and targets, it is much more important to generate traffic between the various initiators and targets to validate performance. The goal here is to stress the interconnect bandwidth by generating multi-hop traffic and parallel transfers.

There are, of course, multiple approaches to generating traffic across an SoC memory subsystem. SystemVerilog constrained-random tests can be used, as could manually coded directed tests. While both of these approaches are tried and tested, both also have drawbacks. Constrained-random tests are generally limited to simulation and (maybe) emulation, but we often want to run these traffic tests across all the engines: simulation, emulation, and prototype. Directed tests, while portable, are incredibly laborious to create, and often miss critical corner cases. A bigger challenge to both approaches, though, is the impact that a design change has on the test suite. Because test intent (what to test) is so enmeshed with test realization (how to implement the test behavior), a design change often forces a manual review of hundreds of tests (or more) to identify what must be updated to cover the new behavior in the design variant, and which tests exercise features that no longer exist in it.

If we take a step back from our memory subsystem traffic-generation problem, our test intent is actually quite simple: generate traffic between available initiators and available targets. Accellera PSS allows us to generalize a surprising amount of our overall test intent and test scenarios without knowing anything about the specific resources present in the design. Accellera PSS also allows design details to be specified such that they are quite separate from the generic test intent and scenarios, making these easily reusable.

GENERIC TEST INTENT INFRASTRUCTURE

As previously stated, memory-subsystem traffic generation involves using initiators to transfer data from memory to memory. We begin capturing generic test intent by characterizing a transfer: specifically, where the data is stored and which initiator performs the transfer.

Figure 2 shows the Accellera PSS description of a memory transfer and related types. A buffer in PSS is a data type that specifies that its producer must complete execution before its consumer can execute. The data_xfer_b type shown above captures the region in which the data is stored, the address and size of the data, what initiator transferred the data, and how many “hops” the data took in getting to its current location.

Figure 2. Generic Description of a Transfer


Note that empty enumerated types have been defined to capture the initiator moving the data (data_mover_e) and the memory region where the data is stored (mem_region_e). The specific enumerators that compose these types will be specified by the system configuration.
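In rough PSS terms, a declaration like the one described might look like the following sketch. The type names data_xfer_b, data_mover_e, and mem_region_e come from the article; the field names addr, size, and mover, and the field widths, are illustrative assumptions.

```pss
// Empty enums: the system configuration will extend these
// with design-specific initiators and memory regions
enum data_mover_e { };
enum mem_region_e { };

// Buffer flow object: the producer must complete before the consumer runs
buffer data_xfer_b {
    rand mem_region_e region;    // memory region where the data is stored
    rand bit[31:0]    addr;      // address of the data (name assumed)
    rand bit[31:0]    size;      // size of the data (name assumed)
    rand data_mover_e mover;     // initiator that transferred the data (name assumed)
    rand bit[7:0]     num_hops;  // how many hops the data has taken so far
}
```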

GENERIC TEST INTENT PRIMITIVES

Accellera PSS uses the action construct to specify the high-level test intent behavior. The component construct is used to collect resources and the actions that use those resources.

Figure 3 shows the PSS description for a generic component action that transfers data. Note that the move_data_a action is abstract, which means that it cannot be used on its own (it’s far too generic). However, this generic outline will be used as the basis for design-specific actions that represent the actual initiators in the system.

Figure 3. Generic Data Mover Action


The move_data_a action has an input and an output of type data_xfer_b. This specifies that some action must run before an action that inherits from move_data_a, and that this action produces a data transfer descriptor that can be used by another action. Figure 4 shows a diagram of the move_data_a action with its input and output buffers.
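An abstract data-mover action of this shape might be sketched as follows. The names data_mover_c and move_data_a appear in the article; the port names src_data and dst_data, and the hop-count constraint, are assumptions consistent with the num_hops field discussed later.

```pss
component data_mover_c {
    // Abstract: cannot be traversed directly; design-specific
    // initiator actions will inherit from it
    abstract action move_data_a {
        input  data_xfer_b src_data;  // must be produced by a prior action
        output data_xfer_b dst_data;  // available to a subsequent action

        // Each transfer adds one hop (assumed constraint)
        constraint dst_data.num_hops == src_data.num_hops + 1;
    }
}
```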

Figure 4. Generic Data Mover Action Diagram


It’s also helpful to provide generic actions that produce an initialized data-transfer descriptor and that accept and terminate a series of transfers. Basic versions of these can be provided (as shown in Figure 5), with more environment-specific versions supplied for specific designs.
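Basic versions of these source and sink actions might look like the sketch below. The action names data_src_a and data_sink_a appear later in the article; the enclosing component name and port names are assumptions.

```pss
component mem_test_base_c {   // component name assumed
    // Produces a freshly-initialized data-transfer descriptor
    action data_src_a {
        output data_xfer_b out_data;
        constraint out_data.num_hops == 0;  // nothing has moved it yet
    }

    // Accepts and terminates a series of transfers
    action data_sink_a {
        input data_xfer_b in_data;
    }
}
```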

Figure 5. Data Initialization and Sink Action


GENERIC TEST INTENT

With just the infrastructure and primitives we’ve defined thus far, we can already get started specifying test intent. Our first test scenario is shown in Figure 6.

Figure 6. Point-to-point Scenario


We’ve created a top-level action (mem2mem_point2point_a) that instantiates the data_src_a and data_sink_a actions and traverses them in an activity block, with the stipulation that exactly one action comes between them (num_hops==1).
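Such a point-to-point scenario might be captured along these lines; the component name mem2mem_test_c is taken from the targeting discussion later in the article, while the port name in_data is assumed.

```pss
component mem2mem_test_c {
    action mem2mem_point2point_a {
        data_src_a  src;
        data_sink_a sink;

        activity {
            src;
            sink;
        }

        // Exactly one (as-yet-unspecified) transfer action must
        // come between src and sink; the tool infers which
        constraint sink.in_data.num_hops == 1;
    }
}
```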

Figure 7 (below) shows a graphical representation of our point-to-point scenario. Note that, while we have required an action to exist between src and sink, we haven’t specified what it is – just that it must accept an input data buffer and produce an output data buffer. Likewise, our coverage goals are specified in terms of the set of target memory regions and initiators, despite the fact that we don’t know which initiators and targets our design will eventually contain.

Figure 7. Graphical Representation of Point-to-point


This ability of a PSS description to capture generic test intent is one thing that makes it such a powerful way to specify test scenarios.

CAPTURING SYSTEM SPECIFICS

Of course, we do have to capture the specifics of our actual design before we can generate interesting and legal tests. Our example system contains:

  • Two processor cores
  • One DMA engine with 32 channels
  • Two accelerators with private memory
  • A DDR controller
  • An internal RAM and ROM

We need to provide actions to describe the behavior of the initiators in our system that we want to test.

Figure 8 shows a PSS component and action that represents one of the processors in our system. PSS groups actions and the resources they require inside a component. In this case, we have a resource type to represent the CPU being used (cpu_r), and a pool of that resource type (core) to ensure that only one CPU operation can occur at one time.

Figure 8. CPU Component and Action


Note that our cpu_c component inherits from the data_mover_c component, and the memcpy_a action inherits from the move_data_a action. As a consequence, memcpy_a has the same data buffer input and output that move_data_a has.
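A CPU component of this form might be sketched as follows. The names cpu_c, cpu_r, core, and memcpy_a come from the article; the resource-claim handle name is an assumption.

```pss
component cpu_c : data_mover_c {
    resource cpu_r { }     // represents the CPU itself
    pool [1] cpu_r core;   // pool of size 1: one CPU operation at a time
    bind core *;           // make the pool visible to actions in this component

    action memcpy_a : move_data_a {
        lock cpu_r core_l;  // claims the core for the duration of the copy
    }
}
```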

Figure 9 below shows a description of the DMA component. Just like with our CPU component, we use a resource to describe how many parallel operations can run on an instance of the DMA component. Because we have 32 DMA channels, we create a pool of 32 dma_channel_r resources.
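The DMA component might be described along the same lines; dma_channel_r comes from the article, while the component, pool, and action names are assumptions (dma_c appears later in the system-resources discussion).

```pss
component dma_c : data_mover_c {
    resource dma_channel_r { }
    pool [32] dma_channel_r channels;  // 32 channels: up to 32 parallel transfers
    bind channels *;

    action mem2mem_a : move_data_a {   // action name assumed
        lock dma_channel_r channel;    // each transfer occupies one channel
    }
}
```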

Figure 9. DMA Component


CAPTURING SYSTEM RESOURCES

Thus far, we have captured information about blocks within the design. These components and actions may well have been created by the teams responsible for verifying the IP, and reused at SoC level. Now, though, we need to capture the complete view of the resources and actions available in our SoC.

Figure 10 shows a top-level component with a component instance to capture each available resource in the design. We have two instances of the cpu_c component to represent the two processors, an instance of the dma_c component to represent the DMA engine, and component instances to represent the accelerators.
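A top-level component of this kind might look like the following sketch; cpu_c, dma_c, and mem_subsystem_c come from the article, while the accelerator component types and all instance names are assumptions.

```pss
component mem_subsystem_c {
    cpu_c    cpu0;     // two processor cores
    cpu_c    cpu1;
    dma_c    dma0;     // DMA engine with 32 channels
    codec_c  codec0;   // accelerator with private memory (type name assumed)
    crypto_c crypto0;  // accelerator with private memory (type name assumed)
}
```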

Figure 10. Mem Subsystem Resources


CAPTURING SYSTEM-LEVEL CONSTRAINTS

Now that we’ve captured the available resources, we need to capture the system-level constraints.

Figure 11 (below) shows the system-level constraints. Note that we use type extension (the extend statement) to layer our system-level constraints into the existing enumerated types and base action. Like many powerful programming constructs, type extension is very useful when used judiciously, though overuse can easily lead to spaghetti code.

Figure 11. System-Level Constraints


In addition to capturing the available memory regions (MEM_codec, MEM_crypto, etc.), we also capture restrictions on which initiators can access which targets. Note that we’ve captured the restriction that only accelerators can access their local memories by stating that if either the source or destination memory is an accelerator-local memory, then the initiator must be the corresponding accelerator.
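A sketch of these system-level extensions might look like the following. MEM_codec and MEM_crypto come from the article; the remaining enumerators are assumptions based on the system description (DDR, internal RAM and ROM), as are the initiator names.

```pss
extend enum mem_region_e {
    MEM_codec, MEM_crypto,       // accelerator-local memories
    MEM_DDR, MEM_RAM, MEM_ROM    // names assumed from the system description
}

extend enum data_mover_e {
    CPU, DMA, CODEC, CRYPTO      // names assumed
}

extend action data_mover_c::move_data_a {
    // Only the codec may touch its private memory
    constraint (src_data.region == MEM_codec || dst_data.region == MEM_codec) ->
        dst_data.mover == CODEC;
    // Only the crypto accelerator may touch its private memory
    constraint (src_data.region == MEM_crypto || dst_data.region == MEM_crypto) ->
        dst_data.mover == CRYPTO;
}
```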

All in all, it is a fairly simple process to capture system capabilities and constraints.

BRINGING TEST INTENT AND SYSTEM SPECIFICS TOGETHER

Now that we have both generic test intent and system specifics available, we can bring them together and start generating specific tests.

Figure 12 shows how we can customize our generic test intent (mem2mem_test_c) with the capabilities of our specific system. Our specific test component extends from the generic test scenario we previously described. By instantiating the mem_subsystem_c component and connecting all the actions to the same pool of buffers, we make our system-specific actions and resources available to our generic test scenario.
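The targeting step might be sketched as below: a design-specific test component inherits from the generic scenario component, instantiates the system resources, and shares a single buffer pool. The derived component name and pool name are assumptions.

```pss
component soc_mem2mem_test_c : mem2mem_test_c {  // name assumed
    mem_subsystem_c  subsys;    // design-specific actions and resources
    pool data_xfer_b xfer_p;    // shared pool of transfer descriptors
    bind xfer_p *;              // all actions exchange buffers via this pool
}
```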

Figure 12. System-Targeted Test Intent


Figure 13 shows a few specific scenarios that could result from our point-to-point scenario combined with our system description. One important thing about a portable stimulus description is that it is declarative and statically analyzable. This means that we can use analysis tools to discover exactly how many legal solutions there are to our point-to-point scenario in the presence of the available system resources. In our case, there are a total of 72 legal point-to-point scenarios.

Figure 13. Example Specific Scenarios


EXTENDING THE SCENARIO

We can easily expand our set of tests by using the system resource and constraint description we already have, and just altering our original test scenario a bit.

For example, we can alter the number of ‘hops’ our data takes moving from source to sink, as shown in Figure 14. If we increase the number of transfers to 2, there are 864 possible test scenarios. Expanding the number of hops to 4 results in an incredible 124,416 legal test scenarios. Not bad for just a few extra lines of PSS description!
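Changing the hop count is literally a one-constraint edit to the scenario; a sketch of a two-hop variant (action name assumed) might be:

```pss
extend component mem2mem_test_c {
    action mem2mem_2hop_a {        // name assumed
        data_src_a  src;
        data_sink_a sink;

        activity { src; sink; }

        // Two inferred transfer actions between src and sink
        constraint sink.in_data.num_hops == 2;
    }
}
```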

Figure 14. Expanding Test Scenario


We can just as easily extend the scenario to account for parallel transfers. In this case, we reuse our two-hop scenario and run two instances in parallel (Figure 15).
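Assuming the two-hop scenario is captured as an action (the name mem2mem_2hop_a here is an assumption), running two instances in parallel might look like:

```pss
extend component mem2mem_test_c {
    action mem2mem_parallel_a {    // name assumed
        mem2mem_2hop_a xfer1;      // first two-hop transfer chain
        mem2mem_2hop_a xfer2;      // second, run concurrently

        activity {
            parallel {
                xfer1;
                xfer2;
            }
        }
    }
}
```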

Figure 15. Generating Parallel Transfers


The resulting transfers will be parallel back-to-back transfers, an example of which is shown in Figure 16. Because we’ve captured the available resources and their restrictions, our PSS processing tool will ensure that only legal sets of parallel transfers are generated.

Figure 16. Example Back-to-Back Parallel Transfers


CHANGING THE DESIGN

Updating a test suite when the SoC changes, or trying to reuse a test suite for an existing SoC on a variant, is laborious and challenging. Just for a start, the set of available resources is different and the memory map is different.

The process is entirely different with a PSS-based test suite. Let’s assume we have an SoC variant that doesn’t have a codec, but does have an additional local RAM (Figure 17).

Figure 17. SoC Variant


The only change we need to make is to our description of the system resources. In this case, we need to remove the codec component instance and add another RAM to the mem_region_e enumeration, as shown in Figure 18.
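In sketch form, the edit amounts to one enumerator and one deleted instance (the enumerator and instance names here are assumptions):

```pss
// 1) Add the new local RAM to the memory-region enumeration
extend enum mem_region_e { MEM_RAM2 }  // name assumed

// 2) Edit the system description: the codec instance is simply removed
component mem_subsystem_c {
    cpu_c    cpu0;
    cpu_c    cpu1;
    dma_c    dma0;
    crypto_c crypto0;   // codec0 instance deleted for this variant
}
```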

Figure 18. System Description Changes


With only these minor changes, a PSS processing tool can re-generate specific tests from our high-level test intent that match the new system. In this case, making these design changes expands the number of transfers described by our original point-to-point transfer test from 72 to 128.

SUMMARY

As we’ve seen from this simple example, the capabilities of Accellera PSS go far beyond the simple ability to target the same test intent to various verification platforms. PSS allows us to dramatically raise the abstraction level at which test intent is described, allowing us to easily capture generic test intent and test scenarios independent of the design details. Modeling available design resources and constraints and using these to shape test intent is straightforward. Finally, PSS test intent easily adapts to design changes, preserving the effort invested in capturing test intent. Combined, all of these capabilities dramatically boost verification productivity!


© Mentor, a Siemens Business, All rights reserved www.mentor.com
