Building a complex signal processing function requires a deep understanding of the signal characteristics and of the available algorithms and their performance.
When it comes to the verification(a) of such designs, a fairly generic approach consists of injecting more or less realistic stimuli and comparing the results against reference models (most often written in C or Matlab®).
Requirement based designs following a DO-254(1) process add the constraint that each function of the design must be specified as a traceable requirement. Proof of the complete verification of each requirement must also be provided, with additional emphasis on physical verification, i.e. running the tests on the physical device.
This article describes a combined requirement and metric driven methodology developed at a customer site for the verification of a complex signal processing SoC block under DO-254 constraints. This methodology also enables both horizontal and vertical reuse of the tests, allowing tests to run both in IP simulation and on FPGA boards at SoC level. This approach is described in a generic way and can be applied to different signal or data processing designs.
In the process of building a complex signal processing block, a mathematical model of the function is usually developed to validate(b) the core functionality. In our case, the model was first developed with Matlab and Simulink® and was validated as a data-sample-accurate reference model.
As this algorithm completely defines our signal processing function, it would be ideal to describe it as one big requirement. However, given the billions (if not infinitely many) possible design states and combinations, the high level of abstraction, and the complexity of the algorithm, this high-level requirement does not satisfy the unitary-complete-consistent-traceable-unambiguous-verifiable-atomic definition(2) of a "requirement based" approach. We therefore need to split this main function into smaller derived requirements, each specifying a specific part of the algorithm (e.g., FIR filter, correlator, etc.)
The design we had to verify (Figure 1) contains over 50 such atomic derived requirements, which together completely specify the processing algorithm, from sampling to results in RAM.
Figure 1: Design Under Test.
In addition to these signal processing functions, dedicated blocks and control logic ensure communication with the CPU and the system via interrupts, control and status registers, and two AXI4 interfaces. At SoC level, the core CPU can then read and interpret the results and take appropriate actions.
The following sections describe a metric driven and requirement based verification strategy that allows tests to be reused from IP simulation to the implemented SoC on an FPGA board.
METRIC DRIVEN REQUIREMENT BASED VERIFICATION
As stated above, each atomic computation of the signal processing algorithm is described as an atomic requirement that the final design shall obey and that the verification shall cover. The following example shows a requirement for a FIR filter:
Figure 2: Sample of a FIR Filter requirement.
Analysing this requirement reveals specific points of functional interest that must be covered before the design can be considered verified:
- Are all bits of the input covered?
- Are all bits of the output covered?
- Are all bits of all coefficients used?
- Are minimal, null and maximum values reached (inputs, outputs, coefficients)?
In the above requirement, the use of defined terms has simplified the specification, but this does not make the requirement any simpler. The complexity hidden in the glossary of terms is actually the key point of the verification analysis.
In this example, the FIR filter is not just a sum, it is a sum of complex products between complex values of the form A = a + j·b, where j is the imaginary unit (j² = -1).
The verification analysis therefore requires covering additional points:
- Has each complex product been exercised with minimum, null, and maximum values of the imaginary and real parts of the sample Xi, crossed with minimum, null, and maximum values of the imaginary and real parts of the coefficient Ci?
- Have we reached the worst case, where the carry of each single sum propagates into the final result, on both the imaginary and real parts?
This analysis yields the list of what needs to be covered to actually verify the requirement, and it maps easily to SystemVerilog functional coverage points and covergroups, as explained in(3).
In our case, the testbench is architected around a SystemVerilog wrapper that directly monitors the internal signals of the design described in the requirements (and only these signals) and builds covergroups based on the monitored signals (Figure 3).
Figure 3: Grey box monitor of the specified signals.
REQUIREMENT BASED ASSERTIONS AND CHECKERS
Although the final test results are compared against the reference model, DO-254 certification requires clear traceability between the requirements and their verification. While the coverage above ensures that the different conditions of a requirement have been exercised, it provides no direct proof that the design behaves as specified by each requirement.
Each requirement therefore needs either to be modeled or to rely on assertions that check the actual values against the specified expectation.
Adding SVA assertions and, in some cases, small functional SystemVerilog models of the different filters fulfilled this need. Reporting techniques similar to the one described in(4) then allow us to capture the assertion results and map them to the requirements.
Note, though, that since these assertions actually verify the requirements, they need additional qualification(c) effort.
COVERAGE, ASSERTIONS AND TESTS
While the coverage records whether a requirement has been exercised, and the assertions actually verify the requirement (provided the coverage is reached), hitting our coverage goals requires more elaborate testing. Different papers, including(3), suggest that coverage-driven random methodologies may be applied to DO-254 projects, but developing interesting use cases with such an approach is hard and time consuming for these kinds of complex signal processing blocks. Since we had a Matlab model available, it was much more convenient to use this model to generate our test scenarios.
REFERENCE MODEL AND REQUIREMENTS
The reference model which served to validate the high-level function is split between a generic core model and dedicated hardware parameters (vector sizes, filters, signal widths, …).
Adding use-case parameters on top of this, the model then holds all the data necessary to generate our test inputs and the expected results of the complete processing. We can therefore compare the results of the RTL with the results of this model to verify the overall functionality of the design, from the input samples to the final result.
As the model has been validated using the same parameters and samples, we can ensure that the design follows the same algorithm. In addition to the requirement based coverage and assertions described previously, this check validates the complete algorithm as a whole and provides additional assurance that the set of requirements describes the complete algorithm.
In our case, we chose to generate the tests and their expected results directly from this Matlab model. Generating the tests directly from Matlab reduces the amount of file processing, such as intermediate parsing, but requires some extra Matlab scripting effort.
Other possible approaches could be:
- Link Matlab to the simulator, but this can only work in simulation and not on the physical device
- Use different scripting languages to interpret Matlab results. This would require extra qualification(c) of the environment
In the end, the generated tests embed:
- The test configuration sequence
- The self-checking function responsible for the result analysis
Several types of implementation are possible for the tests:
- Stimulus command files, but these lack flexibility, reusability, and maintainability in the long run
- UVM sequences
- SystemVerilog tasks or VHDL procedures
SEAMLESS MODULE VERIFICATION
Considering how to replay the RTL tests on the FPGA platform, generating C is actually the most reasonable choice:
- In simulation, the C test running on the host computer uses the DPI interface to control the AXI4 agents (monitor, driver) connected to the design.
- At board level, the C test running on the platform CPU accesses the hardware directly, through the same software drivers.
The diagram below shows the overall simulation verification environment.
Figure 4: Overall Verification Environment.
SoC LEVEL PHYSICAL REUSE
Provided some precautions are taken in the test software architecture, the write/read register functions and memory DPI calls can be remapped to direct pointer accesses aligned with the SoC address map; the same C code can therefore be reused at SoC level and run bare metal on the physical hardware (see Figure 5).
Figure 5: Physical SoC Environment.
In order to reuse the same expected data, the same set of samples needs to be driven into the design. On the physical platform this can be an issue and may require a dedicated hardware prototype, a signal generator, or a synthesized Bus Functional Model (BFM). Our case was simplified by the use of an embedded signal generator for which the Matlab model was also available in this flow.
The main distinction between the IP-level simulation and the physical device execution is therefore the latency introduced by the software execution on the device's core CPU (whereas in simulation, the C code executes in virtually zero cycles with respect to the RTL). Another difference is that the physical device lacks observability points, so assertions and functional coverage are missing from physical testing. But since the tests are self-checking, the self-check acts as a signature ensuring that the physical device has exercised the same functions in the same way; the functional coverage reached in simulation is therefore hit by the physical tests in the same way.
While the top-level algorithm is validated by independent implementations and comparison of the final expected results, the set of derived requirements obtained should also be validated.
In particular, the completeness of the derived set of requirements imposes that:
- Requirements define outputs in regards to inputs and history
- Each input is either a primary input of the design, or the output of another requirement
- Each output value or sequence is either fully defined, or defined at a specific point in time or on specific conditions
- Special care is taken in regards to outputs/inputs that are valid at specific points in time to ensure they are sampled at the right time by other requirements
A traceability matrix with inputs, outputs, and links to the source requirements validates this completeness.
The approach described here combines different verification methodologies. It brings metric driven verification to requirement based verification. It brings both vertical and horizontal verification reuse, from IP-level simulation to the physical SoC device, using generated self-checking C tests. By providing functional covergroup and assertion traceability up to the requirements, this approach can be applied to safety-critical processes such as DO-254 and ISO 26262.
The terms "validation", "verification" and "qualification" used in this article are aligned on the DO-254 / ED-80 terminology.
- Verification ensures that the hardware is what has been specified
- Validation ensures the specification is what we want
- Qualification ensures that the verification environment and tools are capable of performing the verification (ED-80, Chapter 6, Validation and Verification Process; ED-80, Chapter 11, Tool Assessment and Qualification)
1. RTCA/DO-254 (EUROCAE/ED-80), "Design Assurance Guidance for Airborne Electronic Hardware"
2. Wikipedia: "Requirement"
3. A. Efody, "Who Takes the Driver Seat for ISO 26262 and DO-254 Verification? Reconciling Coverage Driven Verification with Requirement Based Verification," DVCon Europe 2016, Mentor Graphics
4. "Verifying Airborne Electronics Hardware: Automating the Capture of Assertion Verification Results for DO-254"
5. "DO-254 Testing of High Speed FPGA Interfaces"
6. "Best Practices for FPGA and ASIC Development"