It’s pretty typical to think about writing tests for a specific design. However, as the number of SoCs and SoC variants that a verification team is responsible for grows, creating design-specific tests is becoming impractical. There has been a fair amount of innovation in this space recently. Some organizations are using design-configuration details to customize parameterized test suites. Some have even gone as far as generating both the design and the test suite from the same description.
The emerging Accellera Portable Stimulus Standard (PSS) provides features that enable test writers to maintain a strong separation between test intent (the high-level rules bounding the test scenario to produce) and the design-specific tests that are run against a specific design. This article shows how Accellera PSS can be used to develop generic test intent for generating memory traffic in an SoC, and how that generic test intent is targeted to a specific design.
Figure 1. Example Multi-Core SoC
SoCs have complex memory subsystems with cache coherent interconnects for cores and accelerators, multiple levels of interconnects and bridges, and many initiators and targets – often with limited accessibility. While it’s certainly important to verify connectivity between all initiators and targets, it is much more important to generate traffic between the various initiators and targets to validate performance. The goal here is to stress the interconnect bandwidth by generating multi-hop traffic and parallel transfers.
There are, of course, multiple approaches to generating traffic across an SoC memory subsystem. SystemVerilog constrained-random tests can be used, as could manually-coded directed tests. While both of these approaches are tried and tested, both also have drawbacks. Constrained-random tests are generally limited to simulation and (maybe) emulation, but we often want to run these traffic tests across all the engines – simulation, emulation, and prototype. Directed tests, while portable, are incredibly laborious to create and often miss critical corner cases. A bigger challenge for both approaches, though, is the impact that a design change has on the test suite. Because test intent (what to test) is so enmeshed with test realization (how to implement the test behavior), a design change often forces a manual review of hundreds of tests (or more) to identify what needs updating: the new features of the design variant that must now be tested, and the tests that exercise features the variant no longer has.
If we take a step back from our memory subsystem traffic-generation problem, our test intent is actually quite simple: generate traffic between available initiators and available targets. Accellera PSS allows us to generalize a surprising amount of our overall test intent and test scenarios without knowing anything about the specific resources present in the design. Accellera PSS also allows design details to be specified such that they are quite separate from the generic test intent and scenarios, making these easily reusable.
GENERIC TEST INTENT INFRASTRUCTURE
As previously stated, memory-subsystem traffic generation involves using initiators to transfer data from memory to memory. We start capturing generic test intent by characterizing a transfer: specifically, where the data is stored and which initiator is performing the transfer.
Figure 2 shows the Accellera PSS description of a memory transfer and related types. A buffer in PSS is a data type that specifies that its producer must complete execution before its consumer can execute. The data_xfer_b type shown in Figure 2 captures the region in which the data is stored, the address and size of the data, which initiator transferred the data, and how many “hops” the data took in getting to its current location.
Figure 2. Generic Description of a Transfer
Note that empty enumerated types have been defined to capture the initiator moving the data (data_mover_e) and the memory region where the data is stored (mem_region_e). The specific enumerators that compose these types will be specified by the system configuration.
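As a rough sketch (the exact field names and widths here are illustrative rather than taken from Figure 2), the descriptor and its related types might look something like this:

enum data_mover_e { };    // initiators; enumerators added by the system configuration
enum mem_region_e { };    // memory regions; enumerators added by the system configuration

buffer data_xfer_b {
    rand mem_region_e   region;     // memory region where the data is stored
    rand bit[32]        addr;       // address of the data
    rand bit[32]        size;       // size of the data
    rand data_mover_e   mover;      // initiator that performed the transfer
    rand int            num_hops;   // hops the data has taken so far

    constraint { num_hops >= 0; }
}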
GENERIC TEST INTENT PRIMITIVES
Accellera PSS uses the action construct to specify the high-level test intent behavior. The component construct is used to collect resources and the actions that use those resources.
Figure 3 shows the PSS description of a generic component and an action that transfers data. Note that the move_data_a action is abstract, which means that it cannot be used on its own (it’s far too generic). However, this generic outline will be used as the basis for design-specific actions that represent the actual initiators in the system.
Figure 3. Generic Data Mover Action
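Such a generic mover component and its abstract action might look something like the sketch below; the dat_i/dat_o field names and the hop-counting constraint are assumptions here:

component data_mover_c {
    // Generic outline of "move some data": too generic to traverse on its
    // own, so it is abstract; concrete initiator actions inherit from it
    abstract action move_data_a {
        input  data_xfer_b dat_i;    // descriptor produced by a previous action
        output data_xfer_b dat_o;    // descriptor handed to a following action

        constraint {
            dat_o.num_hops == dat_i.num_hops + 1;   // each move adds one hop
            dat_o.size     == dat_i.size;           // payload size is preserved
        }
    }
}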
The move_data_a action has an input and an output of type data_xfer_b. This specifies that some action must run before an action that inherits from move_data_a, and that this action produces a data transfer descriptor that can be used by another action. Figure 4 shows a diagram of the move_data_a action with its input and output buffers.
Figure 4. Generic Data Mover Action Diagram
It’s also helpful to provide generic actions: one that produces an initialized data-transfer descriptor and one that accepts and terminates a series of transfers. Basic versions of these can be provided (as shown in Figure 5), with more environment-specific versions provided for specific designs.
Figure 5. Data Initialization and Sink Action
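A minimal sketch of these two actions might look as follows; placing them in data_mover_c via a type extension, and the zero-hops constraint on the source, are assumptions:

extend component data_mover_c {
    // Produces an initialized transfer descriptor (zero hops so far)
    action data_src_a {
        output data_xfer_b dat_o;
        constraint { dat_o.num_hops == 0; }
    }

    // Accepts and terminates a chain of transfers
    action data_sink_a {
        input data_xfer_b dat_i;
    }
}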
GENERIC TEST INTENT
With just the infrastructure and primitives we’ve defined thus far, we can already get started specifying test intent. Our first test scenario is shown in Figure 6.
Figure 6. Point-to-point Scenario
We’ve created a top-level action (mem2mem_point2point_a) that instantiates the data_src_a and data_sink_a actions and traverses them in an activity block, with the stipulation that exactly one action come between them (num_hops==1).
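A sketch of what this scenario component might look like (the src/sink handle names and the data_mover_c instance are illustrative):

component mem2mem_test_c {
    data_mover_c mover;    // hosts the generic src/sink actions

    action mem2mem_point2point_a {
        data_mover_c::data_src_a   src;
        data_mover_c::data_sink_a  sink;

        // Exactly one (unspecified) mover action must sit between src and
        // sink: the descriptor the sink consumes has taken exactly one hop
        constraint { sink.dat_i.num_hops == 1; }

        activity {
            src;
            sink;
        }
    }
}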
Figure 7 (below) shows a graphical representation of our point-to-point scenario. Note that, while we have required an action to exist between src and sink, we haven’t specified what it is – just that it must accept an input data buffer and produce an output data buffer. Likewise, our coverage goals are specified in terms of the set of target memory regions and initiators, despite the fact that we don’t know which initiators and targets our design will eventually contain.
Figure 7. Graphical Representation of Point-to-point
This ability of a PSS description to capture generic test intent is one thing that makes it such a powerful way to specify test scenarios.
CAPTURING SYSTEM SPECIFICS
Of course, we do have to capture the specifics of our actual design before we can generate interesting and legal tests. Our example system contains:
- Two processor cores
- One DMA engine with 32 channels
- Two accelerators with private memory
- A DDR controller
- An internal RAM and ROM
We need to provide actions to describe the behavior of the initiators in our system that we want to test.
Figure 8 shows a PSS component and action that represents one of the processors in our system. PSS groups actions and the resources they require inside a component. In this case, we have a resource type to represent the CPU being used (cpu_r), and a pool of that resource type (core) to ensure that only one CPU operation can occur at one time.
Figure 8. CPU Component and Action
Note that our cpu_c component inherits from the data_mover_c component, and the memcpy_a action inherits from the move_data_a action. As a consequence, memcpy_a has the same data buffer input and output as move_data_a.
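A sketch along these lines (the core_claim field name is illustrative, and the exec block containing the actual memcpy implementation is omitted):

component cpu_c : data_mover_c {
    // A single-entry resource pool: only one operation at a time per core
    resource cpu_r { }
    pool [1] cpu_r core;
    bind core *;

    action memcpy_a : move_data_a {
        lock cpu_r core_claim;    // claims the core for the duration of the copy

        // exec body with the core-specific memcpy code omitted here
    }
}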
Figure 9 below shows a description of the DMA component. Just like with our CPU component, we use a resource to describe how many parallel operations can run on an instance of the DMA component. Because we have 32 DMA channels, we create a pool of 32 dma_channel_r resources.
Figure 9. DMA Component
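A corresponding sketch of the DMA component (the mem2mem_a action name is an assumption):

component dma_c : data_mover_c {
    // 32 channels, so up to 32 DMA transfers can run in parallel
    resource dma_channel_r { }
    pool [32] dma_channel_r channels;
    bind channels *;

    action mem2mem_a : move_data_a {
        lock dma_channel_r channel;    // each transfer claims one channel

        // exec body that programs the DMA engine omitted here
    }
}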
CAPTURING SYSTEM RESOURCES
Thus far, we have captured information about blocks within the design. These components and actions may well have been created by the teams responsible for verifying the IP, and reused at SoC level. Now, though, we need to capture the complete view of the resources and actions available in our SoC.
Figure 10 shows a top-level component with a component instance to capture each available resource in the design. We have two instances of the cpu_c component to represent the two processors, an instance of the dma_c component to represent the DMA engine, and component instances to represent the accelerators.
Figure 10. Mem Subsystem Resources
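A sketch of such a top-level resource description (the accel_c component name and the instance names are assumptions):

component mem_subsystem_c {
    cpu_c    cpu0;       // processor core 0
    cpu_c    cpu1;       // processor core 1
    dma_c    dma0;       // 32-channel DMA engine
    accel_c  codec;      // codec accelerator with private memory
    accel_c  crypto;     // crypto accelerator with private memory
}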
CAPTURING SYSTEM-LEVEL CONSTRAINTS
Now that we’ve captured the available resources, we need to capture the system-level constraints.
Figure 11 (below) shows the system-level constraints. Note that we use type extension (the extend statement) to layer our system-level constraints into the existing enumerated types and base action. Like many powerful programming constructs, type extension is very useful when used judiciously, though overuse can easily lead to spaghetti code.
Figure 11. System-Level Constraints
In addition to capturing the available memory regions (MEM_codec, MEM_crypto, etc.), we also capture restrictions on which initiators can access which targets. Note that we’ve captured the restriction that only accelerators can access their local memories by stating that if either the source or destination memory is an accelerator-local memory, then the initiator must be the corresponding accelerator.
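A sketch of what these extensions might look like; beyond MEM_codec and MEM_crypto, the enumerator names and the exact form of the access-restriction constraints are assumptions:

// Populate the empty enums with this SoC's memory regions and initiators
extend enum mem_region_e {
    MEM_codec, MEM_crypto, MEM_ddr, MEM_ram, MEM_rom
}

extend enum data_mover_e {
    CPU0, CPU1, DMA, ACCEL_codec, ACCEL_crypto
}

// Layer the access restrictions onto every data-mover action
extend action data_mover_c::move_data_a {
    constraint {
        // Only the codec accelerator touches codec-local memory
        (dat_i.region == MEM_codec || dat_o.region == MEM_codec) ->
            dat_o.mover == ACCEL_codec;

        // Only the crypto accelerator touches crypto-local memory
        (dat_i.region == MEM_crypto || dat_o.region == MEM_crypto) ->
            dat_o.mover == ACCEL_crypto;
    }
}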
So, all in all, a fairly simple process to capture system capabilities and constraints.
BRINGING TEST INTENT AND SYSTEM SPECIFICS TOGETHER
Now that we have both generic test intent and system specifics available, we can bring them together and start generating specific tests.
Figure 12 shows how we can customize our generic test intent (mem2mem_test_c) with the capabilities of our specific system. Our specific test component extends from the generic test scenario we previously described. By instantiating the mem_subsystem_c component and connecting all the actions to the same pool of buffers, we make our system-specific actions and resources available to our generic test scenario.
Figure 12. System-Targeted Test Intent
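A sketch of the system-targeted test component (the my_soc_test_c and xfer_p names are illustrative):

component my_soc_test_c : mem2mem_test_c {
    mem_subsystem_c subsys;    // the design-specific initiators and resources

    // One shared pool of transfer descriptors; binding it with '*' connects
    // every data_xfer_b input and output in this component subtree, so the
    // system-specific actions become available to the generic scenarios
    pool data_xfer_b xfer_p;
    bind xfer_p *;
}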
Figure 13 shows a few specific scenarios that could result from our point-to-point scenario combined with our system description. One important thing about a portable stimulus description is that it is declarative and statically analyzable. This means that we can use analysis tools to discover exactly how many legal solutions there are to our point-to-point scenario in the presence of the available system resources. In our case, there are a total of 72 legal point-to-point scenarios.
Figure 13. Example Specific Scenarios
EXTENDING THE SCENARIO
We can easily expand our set of tests by using the system resource and constraint description we already have, and just altering our original test scenario a bit.
For example, we can alter the number of ‘hops’ our data takes moving from source to sink, as shown in Figure 14. If we increase the number of hops to two, there are 864 possible test scenarios. Expanding the number of hops to four results in an incredible 124,416 legal test scenarios. Not bad for just a few extra lines of PSS description!
Figure 14. Expanding Test Scenario
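A sketch of the two-hop variant (the mem2mem_2hop_a action name is an assumption):

extend component mem2mem_test_c {
    // Same shape as the point-to-point action, but requiring two hops;
    // changing the constant to 4 gives the four-hop variant
    action mem2mem_2hop_a {
        data_mover_c::data_src_a   src;
        data_mover_c::data_sink_a  sink;

        constraint { sink.dat_i.num_hops == 2; }

        activity {
            src;
            sink;
        }
    }
}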
We can just as easily extend the scenario to account for parallel transfers. In this case, we reuse our two-hop scenario and run two instances in parallel (Figure 15).
Figure 15. Generating Parallel Transfers
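A sketch of the parallel scenario, reusing the assumed mem2mem_2hop_a action from above:

extend component mem2mem_test_c {
    action mem2mem_parallel_a {
        activity {
            // Two anonymous traversals of the two-hop scenario, run in parallel
            parallel {
                do mem2mem_2hop_a;
                do mem2mem_2hop_a;
            }
        }
    }
}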
The resulting transfers will be parallel back-to-back transfers, an example of which is shown in Figure 16. Because we’ve captured the available resources and their restrictions, our PSS processing tool will ensure that only legal sets of parallel transfers are generated.
Figure 16. Example Back-to-Back Parallel Transfers
CHANGING THE DESIGN
Updating a test suite when the SoC changes, or trying to reuse a test suite for an existing SoC on a variant, is laborious and challenging. Just for a start, the set of available resources is different and the memory map is different.
The process is entirely different with a PSS-based test suite. Let’s assume we have an SoC variant that doesn’t have a codec, but does have an additional local RAM (Figure 17).
Figure 17. SoC Variant
The only change we need to make is to our description of the system resources. In this case, we need to remove the codec component instance and add another RAM to the mem_region_e enumeration, as shown in Figure 18.
Figure 18. System Description Changes
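A sketch of the updated description (the MEM_ram2 enumerator name is an assumption):

// Variant subsystem: the codec accelerator instance is removed
component mem_subsystem_c {
    cpu_c    cpu0;
    cpu_c    cpu1;
    dma_c    dma0;
    accel_c  crypto;
}

// ...and an additional internal RAM is added to the memory-region enum
extend enum mem_region_e {
    MEM_ram2
}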
With only these minor changes, a PSS processing tool can re-generate specific tests from our high-level test intent that match the new system. In this case, these design changes expand the number of scenarios generated from our original point-to-point test from 72 to 128.
SUMMARY
As we’ve seen from this simple example, the capabilities of Accellera PSS go far beyond the simple ability to target the same test intent to various verification platforms. PSS allows us to dramatically raise the abstraction level at which test intent is described, allowing us to easily capture generic test intent and test scenarios independent of the design details. Modeling available design resources and constraints and using these to shape test intent is straightforward. Finally, PSS test intent easily adapts to design changes, preserving the effort invested in capturing test intent. Combined, all of these capabilities dramatically boost verification productivity!