by Matthew Ballance, Verification Technologist, Mentor Graphics
The challenges inherent in verifying today's complex designs are widely understood. Simply identifying and exercising all the operating modes of one of these designs can be difficult, and creating tests that exercise all of those input cases is labor-intensive. With a directed-test methodology, the engineering effort needed to design, implement, and manage the test suite makes it extremely difficult to create sufficiently comprehensive tests to ensure design quality. Random test methodology helps to address the productivity and management challenges, since automation is leveraged more efficiently. However, ensuring that all critical cases are hit with random testing is difficult, due to the inherent redundancy of randomly-generated stimulus.
Questa inFact Intelligent Testbench Automation, part of the Questa verification platform, provides a comprehensive solution for efficiently and comprehensively exercising the functional input space of a design – the input commands and operating modes. Questa inFact's efficient graph-based stimulus description enables 10-100x more unique stimulus to be created in a given period of time than could be created using directed tests. The advanced coverage-targeting algorithms within Questa inFact achieve input functional coverage 10-100x faster than random stimulus, and allow this benefit to be easily scaled across a simulation farm.
Many of the most interesting verification scenarios, however, involve design-internal state. These internal-state scenarios often end up being verified with directed tests, due to the difficulty of coercing random tests to reliably target the desired scenarios. Often, the difficulty in exercising these internal-state scenarios lies in properly combining the inputs required to establish the pre-conditions for the scenario with the stimulus required to make progress toward covering it once those pre-conditions are met. For example, a customer I recently worked with found that, in one case, their entire regression suite covered only 5% of a moderately-sized internal-state coverage goal, precisely because of this dual requirement of first creating the pre-conditions and then hitting an interesting internal-coverage case once the pre-conditions were met.
In this article, we will look at how two capabilities of Questa inFact Intelligent Testbench Automation can be used to more efficiently target verification scenarios involving design-internal state.
PIPELINED COMMAND PROCESSOR EXAMPLE
The example that we will examine in this article is a command-processing pipeline. The pipeline, in this case, is a 5-stage pipeline that processes a command with operands. This particular processor supports eight commands – CMD1 through CMD8. Under ideal circumstances, the pipeline accepts a new input command every cycle, and a single command completes every cycle. As with all pipelined processors, however, stalls can occur when one of the stages takes longer than one cycle to complete.
One of our verification tasks for this pipeline involves ensuring that certain command sequences proceed through the pipeline. Specifically, we want to ensure that all combinations of back-to-back commands are exercised. For example, CMD1 followed by CMD1, CMD1 followed by CMD2, etc. We also want to exercise these same back-to-back command sequences with one, two, and three different commands in the middle. Figure 1 below summarizes the sequences that we wish to verify. The blue-shaded commands below are the ones that we care about from a coverage perspective. The grey-shaded boxes are the commands whose specific value we don't care about, apart from ensuring that these commands are different from the commands that begin and end the command sequence.
We are using a UVM environment for this block. Stimulus is described as a UVM sequence item that contains a field that specifies the command (cmd) as well as fields for both command operands (operand_a, operand_b). A UVM sequence running on the sequencer is responsible for generating this stimulus, while the driver converts the command described in the sequence item to signal-level transactions that are applied to the command-processor's interface.
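To make the discussion concrete, a minimal sketch of such a sequence item is shown below. The class name cmd_item and the field names come from the article; the enum encoding, operand widths, and factory-registration details are illustrative assumptions rather than the actual environment code.

typedef enum bit [2:0] {CMD1, CMD2, CMD3, CMD4, CMD5, CMD6, CMD7, CMD8} cmd_e;

class cmd_item extends uvm_sequence_item;
  rand cmd_e      cmd;        // which of the eight commands to issue
  rand bit [31:0] operand_a;  // first command operand (width assumed)
  rand bit [31:0] operand_b;  // second command operand (width assumed)

  `uvm_object_utils_begin(cmd_item)
    `uvm_field_enum(cmd_e, cmd, UVM_ALL_ON)
    `uvm_field_int(operand_a, UVM_ALL_ON)
    `uvm_field_int(operand_b, UVM_ALL_ON)
  `uvm_object_utils_end

  function new(string name = "cmd_item");
    super.new(name);
  endfunction
endclass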
Figure 1 - Command-Sequence Examples
This verification scenario presents a couple of challenges. First, describing the full set of desired sequences is difficult. We could, of course, carefully construct the scenario sequences using directed tests, but this would be a significant amount of work. We could leverage random generation on a single-command basis and hope to hit all the cases. However, the efficiency with which we can achieve our goals is hampered by the redundancy inherent in random generation and by the fact that the constraint solver doesn't comprehend the overall sequential goal that we are targeting. The second challenge involves pipeline stalls. From our perspective as test writers, these stalls are unpredictable. Despite our careful efforts to design a command sequence to apply to the pipeline, what the pipeline actually processes may be quite different from what we intended.
DESCRIBING THE STIMULUS SPACE
The task of describing and generating the command sequences is a classic input-stimulus problem. First, we create a set of inFact rules that describe the sequence of five commands. The rule description specifies the variables for which inFact will select values and the constraints between the variable values (in this case, for simplicity, there are no validity constraints).
At the top of the rule description, we declare graph variables, using the meta_action keyword, corresponding to the fields in the cmd_item sequence item: cmd, operand_a, and operand_b. We also need to check the state of the pipeline when we issue a command. The cmdX_stall_i meta_action_import variables bring the current state of the pipeline into inFact from the testbench. Since we are describing a sequence of five commands, we create five sets of these variable declarations to represent cmd1 through cmd5.
Figure 2 - UVM Environment
We use symbols to group our variables together. Each symbol defined in Figure 3 below (Cmd1 through Cmd5) declares the sequence of operations needed to issue a single command to the command processor. Specifically, each symbol calls the UVM sequence API start_item task, samples the stall state of the pipeline, selects values for cmd, operand_a, and operand_b, and then calls the finish_item task to send the sequence item on to the command processor.
Figure 3 - Command-sequence Rules
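Figure 3 shows the inFact rule code itself. For readers more familiar with the testbench side, the hand-written UVM sequence body below performs the same per-command sequence of operations that each Cmd symbol drives: start_item, sample the stall state, select the command and operands, then finish_item. The cmd_proc_if interface and its stall_i signal are assumptions about how the environment exposes the pipeline state; in the inFact flow, the sampled stall value is exported to the graph through the cmdX_stall_i meta_action_import variables, and the command and operand values are chosen by inFact rather than by randomize().

interface cmd_proc_if;      // assumed interface exposing the stage-1 stall flag
  logic stall_i;
endinterface

class cmd_seq extends uvm_sequence #(cmd_item);
  `uvm_object_utils(cmd_seq)
  virtual cmd_proc_if vif;  // assumed handle for observing pipeline state

  function new(string name = "cmd_seq");
    super.new(name);
  endfunction

  virtual task body();
    cmd_item item;
    bit      stalled;
    item = cmd_item::type_id::create("item");
    start_item(item);          // UVM sequence API: arbitrate for the sequencer
    stalled = vif.stall_i;     // sample the pipeline stall state (cmdX_stall_i)
    if (!item.randomize())     // select values for cmd, operand_a, operand_b
      `uvm_error("cmd_seq", "randomization failed")
    finish_item(item);         // send the item to the driver / command processor
  endtask
endclass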
Finally, at the bottom of the rule file we describe the top-level operation sequence. The most important aspect of this operation sequence is the repeat loop that contains references to the Cmd1 through Cmd5 symbols. During execution, this will cause inFact to repeatedly generate sequences of five commands.
Figure 4, below, provides a visual representation of our stimulus space. We can see the top-level sequence of operations described at the bottom of the rule file.
Figure 4 - Command-Sequence Graph
The graph is expanded to show the implementation specifics of the Cmd1 symbol. As you can see, the graph is a nice visual way to view the stimulus space and verification process. For each of the five commands in the sequence, we will read in state information from the testbench that indicates whether the pipeline is stalled, and select a command and operands to issue.
TARGETING VERIFICATION GOALS
Next, we need to describe the stimulus that we want Questa inFact to generate, corresponding to the verification goals outlined above. At a high level, we are interested in crossing the following variables in order to realize the command sequences described earlier (a rough SystemVerilog covergroup analogue of these crosses is sketched after the list):
- Cmd1 x Cmd2
- Cmd1 x Cmd3
- Cmd1 x Cmd4
- Cmd1 x Cmd5
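As a rough analogue, the SystemVerilog covergroup below captures these four crosses, sampled once per five-command burst. inFact expresses the same goals in its own coverage-strategy format, so this covergroup is purely illustrative, and the coverpoint and cross names are assumptions.

covergroup cmd_seq_cg with function sample(cmd_e c1, cmd_e c2, cmd_e c3,
                                            cmd_e c4, cmd_e c5);
  cp_cmd1 : coverpoint c1;
  cp_cmd2 : coverpoint c2;
  cp_cmd3 : coverpoint c3;
  cp_cmd4 : coverpoint c4;
  cp_cmd5 : coverpoint c5;
  // Back-to-back pairs and pairs separated by one, two, or three commands
  cmd1_x_cmd2 : cross cp_cmd1, cp_cmd2;
  cmd1_x_cmd3 : cross cp_cmd1, cp_cmd3;
  cmd1_x_cmd4 : cross cp_cmd1, cp_cmd4;
  cmd1_x_cmd5 : cross cp_cmd1, cp_cmd5;
endgroup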
However, we also need to account for the requirement that commands in the middle of our command sequences must be different from the starting and ending commands in the sequence. Questa inFact provides a special type of constraint, called a coverage constraint, which adds flexibility and productivity when describing stimulus-creation scenarios like the one above. A coverage constraint applies only while inFact is targeting a specific stimulus-generation goal, which enables stimulus to be tightly targeted during the portion of the simulation in which inFact is pursuing that goal, and then to revert to being less constrained once inFact achieves it.
We create four constraints like the one shown below to capture the restriction that commands in the middle of a sequence must be different from the commands at its beginning and end. The constraint shown in Figure 5 describes the restrictions on a three-command sequence: our verification goals call for the command in the middle of the sequence (cmd2) to be different from both the command at the beginning of the sequence (cmd1) and the command at the end of the sequence (cmd3). This constraint, and the other three like it, are linked to the corresponding cross-coverage goals that describe our verification goals.
Figure 5 - Coverage Constraint
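Since Figure 5 isn't reproduced here, the SystemVerilog-flavored constraint below conveys the same three-command restriction. Note the important difference: a plain SystemVerilog constraint applies unconditionally, whereas the inFact coverage constraint applies only while inFact is targeting the linked cross-coverage goal. The class and constraint names are assumptions made for this sketch.

class cmd_triplet;
  rand cmd_e cmd1, cmd2, cmd3;

  // Middle command must differ from the commands that start and end the sequence
  constraint middle_cmd_differs_c {
    cmd2 != cmd1;
    cmd2 != cmd3;
  }
endclass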
REACTIVE STIMULUS
Of course, our efficiently-described, comprehensive stimulus isn't much use if the design doesn't actually accept what we apply. Fortunately, Questa inFact supports generating reactive stimulus based on state information from the environment. inFact is able to react to the current design state in cases where this is required to create valid stimulus. In addition, inFact can make coverage-targeting decisions based on input from the environment. This allows inFact to take advantage of the current state of the environment to make rapid progress toward coverage, even when it cannot directly control that state. In other words, inFact is constantly looking for an opportunity to make progress toward the user-specified verification goals, and makes choices based on the current state to target verification goals that have not yet been satisfied.
In this case, the environment provides a way to query whether the first stage of the pipeline is stalled. Feeding this design-state information to inFact before each command is issued allows inFact to properly target our back-to-back command verification goals. Since the pipe-stage stall information tells us whether our verification scenario was properly applied, we reference the stall information in our coverage constraints. If the pipeline stalls during application of a command sequence, the coverage constraint will evaluate to false, causing inFact to retry that command sequence at a future time.
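As an illustration of that last point, the sketch below credits a five-command burst toward the cross goals only if stage 1 never stalled while the burst was applied. In the inFact flow this check is made through the cmdX_stall_i imports referenced by the coverage constraints, and a burst that stalls is simply retried later; the function and variable names below are assumptions for this sketch.

cmd_seq_cg cmd_seq_cov = new();   // instance of the covergroup sketched earlier

function void sample_burst(cmd_e burst_cmds[5], bit burst_stalled[5]);
  bit any_stall = 0;
  foreach (burst_stalled[i])
    any_stall |= burst_stalled[i];
  if (!any_stall)   // burst applied cleanly: count it toward the cross goals
    cmd_seq_cov.sample(burst_cmds[0], burst_cmds[1], burst_cmds[2],
                       burst_cmds[3], burst_cmds[4]);
endfunction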
CONCLUSION
As we've seen from this example, Questa inFact provides specific features that simplify the process of targeting coverage involving design-internal states. Coverage constraints simplify the process of describing complex verification goals. Questa inFact's reactive stimulus generation enables inFact to react to the design state and generate stimulus that makes progress towards the verification goals whenever possible. And, as always, inFact's redundancy-eliminating algorithms enable efficient coverage closure for verification goals with and without design-state dependencies. The customer I mentioned at the beginning of the article applied inFact to the verification problem where their full regression achieved only 5% of a particular internal-state coverage goal. With a small amount of integration work and one short inFact simulation, they were able to achieve full coverage of that verification goal. For them, achieving this type of difficult-to-hit coverage goal was critical to the success of their project. The ability to achieve the verification goal efficiently – both in terms of engineering investment and simulation efficiency – was truly intelligent testbench automation.