Verification Horizons: Complete Issue
by Tom Fitzpatrick, Editor and Verification Technologist, Mentor Graphics Corporation
Now that it's May here in New England, I'm sure you'll all be happy to know that, even after our record-setting snowfall this past winter, there is, in fact, no more snow on my lawn. When this statement became a fact is still a source of amazement, but I'm just glad the snow is gone.
Of course, that means it's time to get the lawn in shape, and I'm faced with the unfortunate situation that my 17-year-old son is too busy (with homework and standardized tests) to be of much help these days. Still, I'd rather ride the tractor than shovel snow any day, so I can't really complain. Speaking of greener grass, the lawn is lush on this issue's side of the fence.
by Vipul Patel, ASIC Engineer, eInfochips
This article focuses on Assertion-Based Verification (ABV) methodology and discusses automation techniques for capturing verification results to accelerate the verification process. It also shows how requirement traceability is maintained through the use of assertions to comply with the mandatory DO-254 standard.
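As a hedged illustration of that traceability idea (the requirement identifier, module, and signal names below are hypothetical, not taken from the article), an SVA assertion can be labeled with the requirement it verifies so that its pass/fail result in the simulation or formal report maps directly back to the DO-254 requirement:

```systemverilog
// Hypothetical sketch: the assertion label carries a requirement ID
// (REQ_HW_042) so that its result in the verification report traces back
// to that requirement. Module and signal names are illustrative only.
module fifo_reqs_checker (
  input logic clk,
  input logic rst_n,
  input logic full,
  input logic wr_en
);
  // REQ_HW_042: No write shall be accepted while the FIFO is full.
  property p_no_write_when_full;
    @(posedge clk) disable iff (!rst_n) full |-> !wr_en;
  endproperty

  REQ_HW_042_no_write_when_full : assert property (p_no_write_when_full)
    else $error("REQ_HW_042 violated: write attempted while FIFO is full");
endmodule
```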
When developing airborne electronics hardware (AEH), documenting the evidence linking verification results to requirements is an important compliance criterion for satisfying regulatory guidance for certification. Section 6.2 of the RTCA/DO-254 “Design Assurance Guidance for Airborne Electronic Hardware” defines verification processes and the subsequent objectives that are required for safety assurance and correctness, as given below:
* Section 6.2.1(1) – Evidence is provided that the hardware implementation meets the requirements.
* Section 6.2.1(2) – Traceability is established between hardware requirements, the implementation, and the verification procedures and results.
* Section 6.2.1(3) – Acceptance test criteria are identified, can be implemented, and are consistent with the hardware design assurance levels of the hardware functions.
by Nir Weintroub, CEO, and Sani Jabsheh, Verisense
As the complexity of electronics for airborne applications continues to rise, an increasing number of applications need to comply with the RTCA DO-254/Eurocae ED-80 standard for certification of complex electronic hardware, which includes FPGAs and ASICs.
The DO-254 standard requires that, for the most stringent levels of compliance (levels A and B), the verification process for FPGAs and ASICs must measure and record verification coverage by running tests on the device in its operational environment. In practice, this means that verification engineers and designers need to compare the behavior of the physical outputs on the hardware device pins with the corresponding RTL model simulation results. In addition, the standard requires running robustness tests on the interface pins. Robustness testing is accomplished by forcing abnormal or slightly out-of-spec behavior on the device and ensuring that it can deal with this behavior without catastrophic results.
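As a rough, hypothetical sketch of the RTL-simulation side of such a robustness test (the DUT stub, pins, and values below are invented for illustration), a testbench can drive a slightly out-of-spec value on an interface pin and check that the design flags it rather than failing silently; the same stimulus would then be reproduced on the physical device pins and the results compared:

```systemverilog
// Hypothetical robustness-test sketch: drive a corrupted parity bit (an
// out-of-spec condition) and check the design reports it. Names are
// illustrative only; this is not the article's actual test setup.
module dut_stub (
  input  logic       clk, rst_n,
  input  logic [7:0] data_in,
  input  logic       parity_in,   // expected to equal ^data_in
  output logic       err_flag
);
  always_ff @(posedge clk or negedge rst_n)
    if (!rst_n) err_flag <= 1'b0;
    else        err_flag <= (^data_in != parity_in);  // flag bad parity
endmodule

module robustness_test;
  logic clk = 0, rst_n;
  logic [7:0] data_in;
  logic parity_in, err_flag;

  always #5 clk = ~clk;
  dut_stub u_dut (.*);

  initial begin
    rst_n = 0; data_in = '0; parity_in = 0;
    #20 rst_n = 1;

    // Normal, in-spec stimulus
    @(posedge clk); data_in = 8'hA5; parity_in = ^8'hA5;

    // Abnormal stimulus: corrupt the parity pin, as a lab test would do
    // by forcing the physical pin out of spec
    @(posedge clk); parity_in = ~parity_in;

    // The device must flag the fault rather than fail silently
    repeat (2) @(posedge clk);
    assert (err_flag) else $error("DUT did not report out-of-spec input");
    $finish;
  end
endmodule
```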
by Ajay Daga, CEO, FishTail Design Automation, and Benoit Nadeau-Dostie, Chief Architect, Mentor Graphics
Built-In Self-Test (BIST) is widely used to test embedded memories. This is necessary because of the large number of embedded memories in a circuit, which can run into the thousands or even tens of thousands; it is impractical to provide access to all of these memories and apply a high-quality test. The memory BIST (MBIST) tool reads in user RTL; finds memories and clock sources; generates a test plan that the user can customize if needed; and generates MBIST IP, timing constraints, simulation testbenches, and manufacturing test patterns adapted to the end-user circuit.
Multi-cycle paths (MCPs) are used to improve the performance of the circuit without resorting to the expensive pipelining that would otherwise be required when testing memories at gigahertz frequencies. Most registers of the MBIST controller update only every two clock cycles; only a few registers need to operate at full speed to perform the test operations. The architecture takes advantage of the fact that most memory test algorithms require complex operations such as read-modify-write.
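The sketch below is not the actual MBIST IP, just an illustration of the underlying idea: a register bank that is enabled only every other cycle, which allows the paths feeding it to be declared as two-cycle paths in the timing constraints (module and signal names are hypothetical; exact constraint syntax depends on the tool):

```systemverilog
// Illustrative only: "slow_state" updates every second cycle, so paths
// ending at it could be relaxed with an SDC constraint along the lines of
//   set_multicycle_path 2 -setup -to [get_cells slow_state*]
module mcp_example (
  input  logic        clk,
  input  logic        rst_n,
  input  logic [31:0] next_state,
  output logic [31:0] slow_state
);
  logic phase;  // toggles every cycle; enables the slow registers every 2nd cycle

  always_ff @(posedge clk or negedge rst_n)
    if (!rst_n) phase <= 1'b0;
    else        phase <= ~phase;

  always_ff @(posedge clk or negedge rst_n)
    if (!rst_n)     slow_state <= '0;
    else if (phase) slow_state <= next_state;  // updates only every two cycles
endmodule
```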
by Jin Zhang, Senior Director of Marketing & GM Asia Pacific, and Vigyan Singhal, President & CEO, OSKI Technology
Planning is key to success in any major endeavor, and the same is true for meaningful formal applications. End-to-end formal, with the goal of achieving formal sign-off, is a task that usually takes weeks, if not months, to complete, depending on the size and complexity of the design under test (DUT), so dedicating time and effort to planning is of utmost importance. While most formal engineers and their managers understand the need for formal planning, they often do not know how to conduct planning thorough enough to arrive at a solid formal test plan for execution.
by Tao Jia, HDL Verifier Development Lead, and Jack Erickson, HDL Product Marketing Manager, MathWorks
The growing sophistication of verification environments has increased the amount of infrastructure that verification teams must develop. For instance, UVM environments offer scalability and flexibility at the cost of upfront efforts to create the UVM infrastructure, bus-functional models, coverage models, scoreboard, and test sequences.
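As a hedged, minimal example of one piece of that infrastructure (the transaction type, fields, and class names below are invented for illustration, not taken from the article), here is a UVM scoreboard that receives transactions from a monitor's analysis port and checks them against a trivial reference model:

```systemverilog
// Hypothetical sketch of a UVM scoreboard with a trivial reference model.
// The transaction class and its fields are illustrative only.
import uvm_pkg::*;
`include "uvm_macros.svh"

class alu_txn extends uvm_sequence_item;
  rand bit [7:0] a, b;
  bit      [8:0] sum;   // DUT result as observed by the monitor
  `uvm_object_utils(alu_txn)
  function new(string name = "alu_txn");
    super.new(name);
  endfunction
endclass

class alu_scoreboard extends uvm_scoreboard;
  `uvm_component_utils(alu_scoreboard)

  // The monitor publishes observed transactions into this export
  uvm_analysis_imp #(alu_txn, alu_scoreboard) analysis_export;

  function new(string name, uvm_component parent);
    super.new(name, parent);
    analysis_export = new("analysis_export", this);
  endfunction

  // Reference model: recompute the expected result for every observed txn
  function void write(alu_txn t);
    bit [8:0] expected = t.a + t.b;
    if (t.sum !== expected)
      `uvm_error("SCB", $sformatf("a=%0d b=%0d: got %0d, expected %0d",
                                  t.a, t.b, t.sum, expected))
  endfunction
endclass
```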
Engineers everywhere use MATLAB and Simulink to design systems and algorithms. Math-intensive algorithms, such as signal and image processing, typically begin with MATLAB language-based design. Complex systems, such as control and communications, typically begin with Simulink and Model-Based Design. At this early stage, engineers create behavioral models of the algorithms and subsystems along with system environment models and test sequences to validate the design given the requirements. The rest of this article will refer to this stage as "system design and verification." At the end of this stage, the detailed requirements are captured in a paper specification for the hardware and software teams.
by Marcela Simkova and Neil Hand, VP of Marketing and Business Development, Codasip Ltd.
This article describes an automated approach to improve design coverage by utilizing genetic algorithms added to standard UVM verification environments running in Questa® from Mentor Graphics®. To demonstrate the effectiveness of the approach, the article will utilize real-world data from the verification of a 32-bit ASIP processor from Codasip®.
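As a hedged sketch of the general idea (this is not Codasip's actual implementation; the chromosome encoding and parameters are invented for illustration), each individual in the genetic algorithm can encode a set of stimulus weights, its fitness can be the functional coverage reached when the UVM regression runs with those weights, and crossover and mutation then breed the next generation of weight sets:

```systemverilog
// Hypothetical GA "individual" for coverage-driven stimulus tuning. The
// weights might steer, for example, how often each instruction class is
// generated by the UVM sequences; fitness would be filled in after a
// regression run by querying the achieved functional coverage.
class individual;
  rand bit [7:0] weight [4];   // stimulus weights (encoding is illustrative)
  real           fitness;      // coverage [%] achieved with these weights

  // Uniform crossover: each gene is taken from one of the two parents
  function individual crossover(individual other);
    individual child = new();
    foreach (weight[i])
      child.weight[i] = $urandom_range(1) ? weight[i] : other.weight[i];
    return child;
  endfunction

  // Mutation: occasionally replace a gene with a random value
  function void mutate(int percent = 5);
    foreach (weight[i])
      if ($urandom_range(99) < percent)
        weight[i] = $urandom_range(255);
  endfunction
endclass
```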
Application-specific instruction-set processors (ASIP) have become an integral part of embedded systems, as they can be optimized for high performance, small area, and low power consumption by changing the processor’s instruction set to complement the class of target applications. This ability makes them an ideal fit for many applications — including the Internet of Things (IoT) and medical devices — that require advanced data processing with extremely low power.
by Neil Johnson, Principal Consultant, XtremeEDA, and Mark Glasser, Principal Engineer, Verification Architect, NVIDIA
Writing tests, particularly unit tests, can be a tedious chore. Even more tedious, not to mention frustrating, is debugging testbench code as project schedules tighten and release pressure builds. With quality being a non-negotiable aspect of hardware development, verification is a pay-me-now or pay-me-later activity that cannot be avoided. Building and running unit tests has a cost, but there is a greater cost in not unit testing. Unit testing is a proactive pay-now technique that helps avoid running up debts that become much more expensive to pay later.
Despite academics and software developers advocating the practice of writing the test suite before writing the code, this is rarely, if ever, done by hardware developers or verification engineers. That is true in design, and it is also typical in verification, where dedicating time to testing testbench code is not generally part of a verification project plan. As a result, testbench bugs discovered late in the process can be very expensive to fix and add uncertainty to a project plan. Even worse, they can mask RTL bugs, making it possible for those bugs to reach customers undetected.
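As a small, framework-agnostic illustration (the helper function and checks below are hypothetical), a unit test for a piece of testbench code can be as simple as a module that exercises the code directly and reports its own pass/fail:

```systemverilog
// Hypothetical unit test for a small testbench utility: a parity helper
// that a monitor might use. Each check pins down one piece of intended
// behavior and reports its own failure.
package tb_utils;
  function automatic bit even_parity(bit [7:0] data);
    return ~(^data);   // 1 when the number of set bits is even
  endfunction
endpackage

module tb_utils_unit_test;
  import tb_utils::*;

  initial begin
    assert (even_parity(8'h00) == 1'b1) else $error("all-zero byte: expected even parity");
    assert (even_parity(8'h01) == 1'b0) else $error("one bit set: expected odd parity");
    assert (even_parity(8'h03) == 1'b1) else $error("two bits set: expected even parity");
    $display("tb_utils_unit_test done");
  end
endmodule
```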
by Dr. Lauro Rizzatti, Verification Consultant, Rizzatti LLC
In its second decade, the hardware emulation landscape changed considerably, with a few mergers and acquisitions and new players entering the market. The hardware emulators improved notably via new architectures based on custom ASICs. The supporting software improved remarkably, and new modes of deployment were devised. The customer base expanded outside the niche of processors and graphics, and hardware emulation slowly attracted more and more attention.
While commercial FPGAs continued to be used in the mainstream emulation systems of the time (i.e., Quickturn, Zycad, and IKOS), four companies — three startups plus IBM — pioneered different approaches.
IBM continued the experimentation it started a decade earlier with the YSE and EVE. By 1995, it had perfected its technology, based on arrays of simple Boolean processors that processed a design data structure stored in a large memory via a scheduling mechanism. The technology was now applicable to emulation. While IBM never launched a commercial product, in 1995 it signed an exclusive OEM agreement with Quickturn that gave the partner the right to deploy the technology in a new emulation product.
by Lior Grinzaig, Verification Engineer, Marvell Semiconductor Ltd.
Long simulation run times are a bottleneck in the verification process.
A lengthy delay between the start of a simulation run and the availability of simulation results has several implications:
* Code development (design and verification) and the debug process are slow and clumsy. Some scenarios are not feasible to verify on a simulator and must be verified on faster platforms — such as an FPGA or emulator, which have their own weaknesses.
* Long turn-around times.
* Engineers must make frequent context-switches, which can reduce efficiency and lead to mistakes.
Coding style has a significant effect on simulation run times. Therefore, it is imperative that code writers examine their code, asking not only "does the code produce the desired output?" but also "is the code economical, and if not, what can be done to improve it?" The following discussion presents some useful methods for analyzing code based on these questions.
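As one small, hypothetical illustration of that "is the code economical?" question (the signal names and delays are invented), the two processes below detect the same condition, but the first wakes the simulator on every time step while the second is purely event-driven:

```systemverilog
// Illustrative comparison of a polling loop versus an event-driven wait.
module polling_vs_event;
  bit done = 0;

  initial #1000 done = 1;   // the condition the testbench is waiting for

  // Wasteful: re-evaluated every time unit until done rises
  initial begin
    while (!done) #1;
    $display("[%0t] polling loop saw done", $time);
    $finish;
  end

  // Economical: sleeps until the event actually occurs
  initial begin
    @(posedge done);
    $display("[%0t] event-driven wait saw done", $time);
  end
endmodule
```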
by David Kaushinsky, Application Engineer, Mentor Graphics
All types of electronic systems can malfunction due to external factors. The main sources of faults within electronic components are radiation, electromigration, and electromagnetic interference. The evaluation of a fault-tolerant system is a complex task that requires the use of different levels of modeling. Compared with other possible approaches, such as proving or analytical modeling, fault injection is particularly attractive.
Fault injection is a key requirement of functional safety standards such as the automotive ISO 26262 standard, and it is highly recommended during hardware integration tests to verify the completeness and correctness of the safety mechanisms' implementation with respect to the hardware safety requirements.
This article reviews the use of processors in the automotive industry, the origins of faults in processors, architectures of fault-tolerant processors, and techniques for processor verification with fault injection.
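The sketch below is a hedged illustration of simulation-based fault injection (not the article's tool flow; the module, protection scheme, and values are hypothetical): a single-event upset is modeled by forcing a bit-flip on a protected register, and the test checks that the safety mechanism reports the fault.

```systemverilog
// Hypothetical fault-injection sketch: a parity-protected register and a
// test that forces a bit-flip and checks the fault is detected.
module protected_reg (
  input  logic       clk, rst_n,
  input  logic [7:0] d,
  output logic [7:0] q,
  output logic       fault   // safety mechanism: parity-mismatch detector
);
  logic parity;
  always_ff @(posedge clk or negedge rst_n)
    if (!rst_n) begin q <= '0; parity <= 1'b0; end
    else        begin q <= d;  parity <= ^d;   end
  assign fault = (^q != parity);
endmodule

module fault_injection_test;
  logic clk = 0, rst_n;
  logic [7:0] d, q;
  logic fault;

  always #5 clk = ~clk;
  protected_reg dut (.*);

  initial begin
    rst_n = 0; d = 8'h3C;
    #20 rst_n = 1;
    repeat (2) @(posedge clk);

    // Inject a single-event upset: flip bit 3 of the protected register
    force dut.q = 8'h34;   // 8'h3C with bit 3 flipped
    #1;
    assert (fault) else $error("safety mechanism missed the injected fault");
    release dut.q;
    $finish;
  end
endmodule
```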
by Amit Tanwar and Manoj Manu, Questa VIP Engineering, Mentor Graphics
Because of the complexities involved in the entire design verification flow, a traditional Verification IP (VIP) tends to overlook the subtle aspects of physical layer (PHY) verification, often leading to costly debug phases later in the verification cycle.
In addition, because of the several possible topologies in a PHY implementation, completely exercising the role and related functionality of a PHY becomes challenging for a traditional VIP.
Furthermore, the analog signaling and the largely common functionality of the physical layer across serial protocols led the industry to define a common PHY that multiple protocols can use and that segregates the PHY logic from that of the general ASIC. One such common PHY is used in the PCI Express, USB 3.0 and 3.1, and SATA protocols. Similarly, M-PHY is used in SSIC, M-PCIe, and LLI protocols, among others.
This article describes the limitations of a traditional VIP for PHY verification, which can typically be resolved using an exclusive PHY verification kit.