
Portable Stimulus Modeling in a High-Level Synthesis User's Verification Flow

Verification Horizons, June 2017 - Tom Fitzpatrick, Editor

by Mike Andrews and Mike Fingeroff - Mentor, A Siemens Business

Portable Stimulus has become quite the buzzword in the verification community in the last year or two, but like most 'new' concepts it has evolved from established tools and methodologies. For example, having a common stimulus model between different levels of design abstraction has been possible for many years with graph-based stimulus automation tools like Questa® inFact. High-Level Synthesis (HLS), which synthesizes SystemC/C++ to RTL, has also been available for many years, with most users doing functional verification at the C level using a mixture of home-grown environments and directed C tests. With HLS now capable of handling very large hierarchical designs, however, there has been a growing need for a verification methodology that enables high-performance, production-worthy constrained-random stimulus for SystemC/C++, in order to achieve coverage closure at the C level and then reproduce that exact stimulus to test the synthesized RTL with confidence. This article describes a methodology in which a stimulus model can be defined (and refined) to help reach 100% code coverage of the C++ HLS DUT, and then reused in a SystemVerilog or UVM testbench with the synthesized RTL. Given a truly common model, it is also possible to maintain random stability between the two environments, allowing an issue to be found in one domain and then debugged in the other.

INTRODUCTION

A little over a decade ago, ESL (Electronic System-Level) methodologies were all the rage, and a number of language options promised to raise the abstraction level for both design and verification, with C/C++, SystemC, and SystemVerilog being the dominant ones. While C/C++ and SystemC are the most prevalent languages for abstract hardware and system modeling, SystemVerilog has standardized the features needed for advanced verification, such as constrained-random stimulus and functional coverage.

At the same time, many users have been looking for more efficient ways to describe stimulus, specifically ways to expand the number of verification scenarios that can be automated from a compact description and to improve the efficiency of the generation process.

Questa® inFact has provided just such a capability, with a rule/graph-based approach borrowed from software testing techniques and enhanced for hardware verification. Since this rule-based model is independent of the target language (it has been applied in at least seven different HVL environments), it has always been a portable stimulus solution.

HLS users often state that one of the major benefits of moving to C++/SystemC for design is the verification performance, which enables them to run substantially more tests. However, no standard in the C environment matches the power of SystemVerilog's advanced verification capabilities, which include, among other things, random/automated stimulus modeling. A portable stimulus solution provides this power and capability, and it preserves the investment in creating a stimulus model: a model created for a C-based simulation environment can be leveraged downstream when the RTL is wrapped in SystemVerilog, and vice versa.

THE COMMON STIMULUS MODEL

The rule-based stimulus model is, as you might expect, created hierarchically from a main top-level rule file, which typically varies very little from a default template, and one or more modular rule segment files. The top-level rule file declares the main rule_graph, giving the graph model a name and, depending on the code architecture chosen, need not contain anything other than statements to import the rule segment files that define the details of the stimulus to be applied.

The example in Figure 1 below shows four separate files, two of which – test_data_C.rules and test_data_SV.rules – define a graph object called test_data_gen. These two top-level rule files correspond to graph components that are language-specific wrappers for the actual stimulus model. In other words, the inFact automated generators will create a C++ class called test_data_C and an SV class called test_data_SV, respectively, and each of these will define a rule graph model test_data_gen.

Figure 1. Hierarchical Rule Code Architecture


Both top-level rule files import a common hierarchy of rule segment files that actually define the behavior. By keeping the entire definition of the rules hierarchy in common files, the compiled graph models will behave identically.

The test_data_C.rules file has one extra construct: an attribute that specifies language-specific requirements for the generated code. In this case, it specifies the code needed to add an include statement to the generated C++ class definition file. The language supports other attributes that can be used to customize the generated HVL files, but these have no effect on the underlying graph model.

The test_data_gen.rseg file defines the rule for the scenario(s) that the graph can generate, which in this case is simply to loop through the randomization of the contents of the test_data object, as shown in Figure 2 below.

Figure 2. The test_data_gen Rule Graph


Note: the scenario rule could include multiple objects – either instances of the same object type or instances of multiple different types – as well as other graph topology constructs, as will be described briefly later.

The test_data object itself, declared as a struct in the inFact rule language, is defined in a separate rule segment file to allow for modularity and reuse. This struct has additional hierarchy, defining other structs called packedArray0 and packedArray1, which mirror C++ structs defined and used for the DUT stimulus in the C++ testbench.

This is another key element of the methodology: the rule graph references objects that have the same names and hierarchy as their HVL counterparts and use data types that can be mapped to the corresponding C++ and SV types. Since the inFact language allows bit widths to be defined for all variables, this allows us to target the Mentor algorithmic bit-accurate data types and the SystemC data types.

In this example, the form of the test data object was derived from the C++ testbench around a C model that implements a configurable vector multiply-accumulate. The first step in implementing this methodology, therefore, is to determine which of the DUT inputs are going to be randomized by the graph model and what their bit-widths are, and to collect these into a test data struct or class. In this case, packedArray0 contains an 8-element fixed-size array of 10-bit values, and packedArray1 contains a similar array of 7-bit values. Added to these is a single 4-bit quantity called num_col. The data types used for these structs are the Mentor Algorithmic bit-accurate data types, allowing designs to be modeled with arbitrary precision.

Although this begins with the C++ testbench, there will also need to be a similar object in the SystemVerilog domain. The SystemVerilog and C++ versions of this object are shown in Figure 3.

Figure 3. Test Data Types for SV and C++
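(The figure itself is not reproduced in this text version. As a rough reference point, below is a minimal sketch of what the C++ side of these types might look like, assuming Mentor's Algorithmic C bit-accurate types (ac_int.h); the member names pa0, pa1, data, and coeff, and the 8-element size of the second array, are illustrative assumptions beyond what the article states.)

    #include <ac_int.h>   // Mentor Algorithmic C bit-accurate data types

    // 8-element fixed-size array of 10-bit data values
    struct packedArray0 {
        ac_int<10, false> data[8];
    };

    // A similar array of 7-bit coefficient values (8 elements assumed)
    struct packedArray1 {
        ac_int<7, false> coeff[8];
    };

    struct test_data {
        packedArray0     pa0;      // member names are assumptions
        packedArray1     pa1;
        ac_int<4, false> num_col;  // constrained by the rule model to 1..8
    };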


The SystemVerilog model, like the inFact model, can contain algebraic constraints, and probably should if it is ever going to be randomized using a traditional SystemVerilog .randomize() call. If the inFact graph is always going to do the randomization, then this is not necessary.

Note: the single constraint in this simple example limits num_col to the range 1 to 8, but the inFact language supports all the common constraint operators used in SystemVerilog, with some minor syntax differences. As a bonus, for those familiar with SystemVerilog syntax, a utility is available to create or update the inFact graph model from the SystemVerilog one.

RUNNING C IN A TESTBENCH

Once the test data object is defined, integrating the portable stimulus model into the C testbench is quite simple. As mentioned previously, a C++ class is created automatically from the common rule model, and this class has a built-in method that corresponds to an interface defined in the rules language. The test_data_gen.rseg file declares an interface called fill that operates on any instance of the type test_data. This produces a method, task, or function in the generated HVL object called ifc_fill, formed simply by prepending ifc_.

This method, task, or function takes an argument that is a handle to the corresponding HVL object of the same name – i.e., the test_data class or struct shown earlier.

So the integration mechanism is simply to construct an instance of the class containing the portable stimulus model and then call its ifc_fill method with a handle to the testbench test_data container. Figure 4 below shows a code excerpt from the C++ testbench, with the creation of a handle to the test_data struct – td_h – and a handle to the class containing the inFact model – td_gen_h – with the latter's constructor call defining the instance name for inFact to use internally. This inFact instance name is important, as will be discussed later.

Figure 4. Code Snippet from C++ Testbench


Inside a for loop in the C++ test, the call to the ifc_fill method can be seen, followed by the assignment of the contents of the td_h struct instance to the local variables that are applied to the C function serving as the DUT in this bench.
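Since the code in Figure 4 is not reproduced here, the following is a minimal sketch of the integration pattern just described. The header names, the loop count, and the DUT function vector_mac() are hypothetical; the class name test_data_C, the handle names td_h and td_gen_h, and the ifc_fill method come from the article.

    #include "test_data.h"    // testbench stimulus struct (assumed header name)
    #include "test_data_C.h"  // inFact-generated wrapper class (assumed header name)
    #include "vector_mac.h"   // hypothetical header declaring the C DUT function

    int main() {
        test_data   *td_h     = new test_data();
        // The constructor argument sets the instance name that inFact
        // uses internally (important later for random stability).
        test_data_C *td_gen_h = new test_data_C("td_gen_h");

        for (int i = 0; i < 1000; i++) {
            // The graph model randomizes the contents of the struct,
            // honoring any constraints defined in the rule files.
            td_gen_h->ifc_fill(td_h);

            // Apply the randomized fields to the C function that is the
            // DUT in this bench (field and function names are illustrative).
            vector_mac(td_h->pa0.data, td_h->pa1.coeff, td_h->num_col);
        }
        return 0;
    }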

This architecture is not really any different from using a SystemVerilog random class or sequence item with .randomize(), or a SystemC/SCV class with its 'next' method. The only difference is that the model doing the randomizing is an inFact graph model.

At this stage, the value that the inFact portable stimulus model adds is the ability to randomize several numeric values while obeying any algebraic constraints defined on those values or their relationships.

CONSIDERING COVERAGE

An additional value of the inFact model is that another type of input can be overlaid on the stimulus model: a coverage strategy. A strategy is somewhat analogous to a SystemVerilog covergroup, in that it defines the variables of interest, the desired bins of their values, and crosses of these variables. The difference is that it is an input to the randomization process that alters the random distribution to efficiently cover the goals in the strategy.

The coverage metrics being measured in this case are not functional coverage coverpoints/crosses but rather code coverage, which is more common in C/C++ environments (although functional coverage could also be implemented). So the goals defined in the coverage strategy should be, as the name implies, the encoding of a strategy (or strategies) expected to achieve high code coverage, or to target specific areas not included in other strategies.

Since the DUT in this example – the multiplier – is quite simple, a fairly simple strategy may suffice. The inFact tool set includes utilities that can create coverage strategies from a variety of inputs, including automated strategies of pre-defined types, custom strategies defined using a CSV file or spreadsheet, and a graphical editor. In this example, an automated strategy can be used, which targets each stimulus variable in isolation, i.e., no crosses. For each variable in the test_data hierarchy (including each array element), the utility will ascertain all the legal values, employing an analysis of the constraints, and divide them into a defined number of bins. For this example a total of 128 bins was specified, since that means all the coeff values are covered for each 7-bit element in that array. Distinct edge bins (the individual values at the top and bottom of the range) can be added if desired, and in this case the larger quantities – the 10-bit data values – had two single-value bins created, one at each of the two extremes.
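To make the binning arithmetic concrete, here is a conceptual sketch (an illustration only, not inFact's actual algorithm) of splitting a variable's legal range into a fixed number of bins, with optional single-value edge bins. For a 7-bit variable, 128 bins over the 128 legal values degenerate to one value per bin:

    #include <cstdint>
    #include <iostream>
    #include <vector>

    struct Bin { uint32_t lo, hi; };

    // Split [lo, hi] into num_bins roughly equal ranges; optionally add
    // single-value bins at the two extremes (illustrative only).
    std::vector<Bin> make_bins(uint32_t lo, uint32_t hi,
                               uint32_t num_bins, bool edge_bins) {
        std::vector<Bin> bins;
        if (edge_bins) {
            bins.push_back({lo, lo});   // single value at the bottom
            ++lo; --hi; num_bins -= 2;  // interior bins cover the rest
        }
        uint64_t span = (uint64_t)hi - lo + 1;
        for (uint32_t b = 0; b < num_bins; ++b)
            bins.push_back({(uint32_t)(lo + span * b / num_bins),
                            (uint32_t)(lo + span * (b + 1) / num_bins - 1)});
        if (edge_bins)
            bins.push_back({hi + 1, hi + 1});  // single value at the top
        return bins;
    }

    int main() {
        // 7-bit coeff values: 128 bins over 0..127 -> one value per bin.
        auto coeff_bins = make_bins(0, 127, 128, false);
        std::cout << coeff_bins.size() << " bins; first covers "
                  << coeff_bins[0].lo << ".." << coeff_bins[0].hi << "\n";
        return 0;
    }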

As hoped, after running the automated strategy to completion, the code coverage results are very good – hitting 100%, as shown in Figure 5 below (the results from the initial pure-random test approach were about 20% lower).

Figure 5. Code Coverage Results


Note: Being able to achieve 100% code coverage on the C++ source is essential to easily closing coverage on the synthesized RTL from HLS using the same stimulus, because debugging C++ coverage issues is far easier than debugging the RTL output from HLS.

Figure 6. SystemVerilog Testbench Code Excerpt


PORTABLE STIMULUS WITH RANDOM STABILITY

While getting high code coverage is nice, the point of this article is to describe how a stimulus model, and one or more accompanying coverage strategies, can be developed in one domain and then re-run in another. A seed for the inFact stimulus model can be defined by the user, or simply output to a file from the original run.

The SystemVerilog-wrapped version of the model can be dropped into an SV testbench to drive the RTL DUT in the same way as the C version: simply instantiate the SV class object that contains it, and then use its built-in task – ifc_fill – to randomize the contents of the SystemVerilog test_data class, as shown in Figure 6, above.

In this case, the packed arrays used in the test_data class need to be reformatted to fit the wide reg objects that are the DUT inputs, but that is quite simple, using a concatenation – {arrEl[0], ... , arrEl[N]} – to achieve it.

The state of the coverage strategy can also be queried via another built-in function of the strategy – allCoverageGoalsHaveBeenMet() – and used as a qualifier for generating new inputs or to define a loop exit condition for the test.
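The article shows this query being used in the SystemVerilog testbench; assuming (an assumption, not stated in the text) that the generated C++ class exposes the same built-in, the fixed-count for loop in the earlier C++ sketch could be replaced with a coverage-driven exit condition:

    // Assumes the generated C++ wrapper exposes the same query as the
    // SV version; handle names reuse the earlier C++ testbench sketch.
    while (!td_gen_h->allCoverageGoalsHaveBeenMet()) {
        td_gen_h->ifc_fill(td_h);                 // next stimulus item
        vector_mac(td_h->pa0.data, td_h->pa1.coeff,
                   td_h->num_col);                // hypothetical DUT call
    }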

When the SystemVerilog testbench is run, the code coverage produced for the RTL DUT is also high – 97.11% in this case, when running until the coverage strategy completed – as shown in Figure 7 below.

Figure 7. The RTL Code Coverage Results


While this example is simple, it does illustrate the reusability of the common portable stimulus model that the Questa® inFact tool suite provides. Of course, additional tests are always likely to be needed for the RTL version of the DUT to handle the additional behavior added during synthesis, because the HLS process adds structures that do not exist in the untimed C++ source description, such as stallable interface protocols, control FSMs, and clock and reset logic. However, by closing 100% coverage on the C++ using an inFact portable stimulus model, we are guaranteed to get the same coverage of the design functionality when running RTL verification. Then it is simply a matter of adding tests to cover the remaining structures added by HLS.

CREATING MORE COMPLEX SCENARIOS

Any number of rule graph scenario models can be created in this way and applied in either domain. For example, a new scenario can be created by adding a new rule segment that creates two instances of the test data object – td1 and td2 – and uses them with the same fill interface in series in the rule. This allows the creation of a coverage strategy that achieves transition coverage of one of the fields in test_data, e.g., the num_col variable. Figure 8 shows this new rule graph and the selection of the num_col variables in td1 and td2 as the fields to target for cross coverage.

Figure 8. Scenario with Two Test Data Instances


SUMMARY

The existence of portable stimulus solutions can bring advanced verification capabilities to a C-based high-level verification environment, and also allows the investment in stimulus models and coverage information to be reused at other levels of abstraction.

High-level synthesis users stand to benefit in particular, especially if the stimulus can be mirrored in both environments via seed-based random stability, since they are more familiar with the C source of the design and would find it easier to develop a comprehensive set of stimulus models at that level. For the HLS user, portable stimulus provides a standards-based methodology to predictably and quickly close coverage from C to RTL.

