Verification Horizons: Complete Issue
by Tom Fitzpatrick, Editor and Verification Technologist, Mentor Graphics Corporation
Hi everyone, and welcome to another super-sized Verification Horizons issue for DVCon U.S.
Last spring, my wife and I decided to take a family vacation to Hawaii this February, which coincidentally is just before DVCon. We're celebrating several family milestones, including our 20th anniversary, my son's 18th birthday and my father-in-law's 80th, so we're all going. As I've mentioned before, my wife is great at planning and managing things like this, and there are many similarities between planning a vacation and managing a verification project. As we know, having a plan is critical to a project's success, so my endearingly "old-school" wife has all of our plans written out in a document, which she'll print out and bring with us. As a "tool guy," I showed her an app I use that will automatically load all our confirmation emails into a cool interactive itinerary. It's great to have a plan, but automation and tools are what make the plan into something usable. You'll see this theme throughout the following articles.
by Shaela Rahman, Baker Hughes
FPGA designs are becoming too large to verify by visually checking waveforms; their functionality has grown beyond easy comprehension at that level. At Baker Hughes, a top-tier oilfield service company, we primarily create small-scale FPGA designs, typically less than 100 thousand gates, but our designs are growing in size and complexity. In addition, they are part of more complex systems that require long lab integration times.
For these reasons, our FPGA design and verification team needed to find a new methodology that does not rely on visual checks, shortens our lab integration testing time, standardizes our testbenches, and makes our functional verification more robust.
VHDL was the language of choice for both our RTL designs and our testbenches, but our testbenches had no standardization or robustness. We did not reuse them, though we always intended to. We knew the Universal Verification Methodology (UVM) provides testbench standardization, which is why we wanted to investigate adopting a UVM-based flow.
In order to justify the transition from a VHDL to a UVM testbench, we conducted an evaluation to compare the pros and cons of using three styles of testbench: a simple VHDL testbench, an advanced VHDL testbench, and a UVM testbench. The metrics used to evaluate them were the level of code coverage achieved, the reusability of testbench components, and the ease of creating different stimuli.
by Neil Johnson, XtremeEDA Corp. and Nathan Albaugh & Jim Sullivan, Qualcomm Technologies, Inc.
Verification teams don't typically verify testbench components. But this Qualcomm Technologies IP team recognized the necessity of unit testing a critical testbench component, and the debug time and frustration doing so could prevent for downstream IP and chip teams.
This experience report documents the team's first unit testing effort with SVUnit, from start to finish. We discuss how the work was justified to managers so that engineers could be assigned, how the right test subject was picked to ensure a positive outcome, the method used for verifying testbench checkers, the component defect rate, how unit testing and the defects it found were framed as opportunities for improvement, and the lessons learned for long-term maintenance. The test method focuses on SVUnit's UVM report mocking to write automated tests for uvm_error checks.
Debug has become a fundamental part of SoC development. So fundamental, in fact, that in 2014 verification engineers estimated 37% of their time was spent debugging code. Assuming an eight-hour day, 37% works out to roughly two hours and 58 minutes a day, every day, spent debugging issues that were more than likely created by the development team itself. For hardware developers this is a horrid statistic that signals an obvious breakdown in development.
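For readers who have not seen the technique, here is a stripped-down sketch of unit testing a checker with SVUnit's UVM report mocking. The skeleton follows SVUnit's generated test template; the checker itself and the exact mock calls (expect_error()/verify_complete() and the svunit_uvm_mock_pkg import) are our assumptions for illustration and may differ from both the SVUnit version you use and the code the team actually wrote.

```systemverilog
`include "svunit_defines.svh"

// Hypothetical checker under test: flags "valid without ready" as an error.
module my_checker;
  import uvm_pkg::*;
  `include "uvm_macros.svh"
  task check(bit valid, bit ready);
    if (valid && !ready) `uvm_error("MY_CHECKER", "valid asserted without ready")
  endtask
endmodule

module my_checker_unit_test;
  import svunit_pkg::svunit_testcase;
  import svunit_uvm_mock_pkg::*;   // assumed home of uvm_report_mock

  string name = "my_checker_ut";
  svunit_testcase svunit_ut;

  // Unit under test
  my_checker uut();

  function void build();
    svunit_ut = new(name);
  endfunction

  task setup();
    svunit_ut.setup();
    // depending on the SVUnit version, the report mock may also need a reset here
  endtask

  task teardown();
    svunit_ut.teardown();
  endtask

  `SVUNIT_TESTS_BEGIN

    // The checker must emit the expected uvm_error for an illegal condition;
    // the mock intercepts the report so the test passes or fails automatically
    // instead of someone scanning a log file.
    `SVTEST(flags_valid_without_ready)
      uvm_report_mock::expect_error();                  // assumed mock API
      uut.check(.valid(1), .ready(0));
      `FAIL_UNLESS(uvm_report_mock::verify_complete()); // assumed mock API
    `SVTEST_END

  `SVUNIT_TESTS_END
endmodule
```

Run in isolation against the checker alone, a test like this takes seconds, which is what makes it practical to catch testbench-component defects before they cost downstream teams days of debug.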
by Harry Foster, Michael Horn, Bob Oden, Pradeep Salla, and Hans van der Schoot, Mentor Graphics
The literature for many of today's testbench verification methodologies (such as UVM) often references various software or object-oriented design patterns. For example, the UVM Cookbook (available on Verification Academy) references the observer pattern when discussing the Analysis Port. One problem with the discussion of patterns in existing publications is that it is generally difficult to search, reference, and leverage the solutions these patterns provide, since the publications are distributed across multiple heterogeneous platforms and databases and documented in a variety of formats. In addition, most of the published verification pattern examples deal more with the software implementation details of constructing a testbench. To address these concerns, we have decided to extend the application of patterns across the entire domain of verification (i.e., from specification to methodology to implementation, and across multiple verification engines such as formal, simulation, and emulation) and have just released a comprehensive pattern library on Verification Academy.
But first, we should answer the question, "What is a pattern?" In the process of designing something (e.g., a building, a software program, or an airplane) the designer often makes numerous decisions about how to solve specific problems. It would be nice if the knowledge gained from solving a specific problem could be shared, and this is where patterns help out. That is, if the designer can identify common factors contributing to the derived solution in such a way that it can be applied to other similar recurring problems, then the resulting generalized problem-solution pair is known as a pattern. Documenting patterns provides a method of describing good design practices within a field of expertise and enables designers to improve the quality of their own designs by reusing a proven solution to a recurring problem.
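As a concrete illustration of the observer pattern behind the analysis port, here is a minimal, generic UVM sketch (the transaction and component names are ours, not taken from the pattern library): a monitor publishes transactions through an analysis port, and any number of subscribers observe them without the monitor knowing, or caring, who is listening.

```systemverilog
package observer_demo_pkg;
  import uvm_pkg::*;
  `include "uvm_macros.svh"

  // Hypothetical transaction type, for illustration only
  class bus_txn extends uvm_sequence_item;
    rand bit [31:0] addr;
    rand bit [31:0] data;
    `uvm_object_utils(bus_txn)
    function new(string name = "bus_txn");
      super.new(name);
    endfunction
  endclass

  // The "subject": a monitor that broadcasts what it observes
  class bus_monitor extends uvm_monitor;
    `uvm_component_utils(bus_monitor)
    uvm_analysis_port #(bus_txn) ap;

    function new(string name, uvm_component parent);
      super.new(name, parent);
      ap = new("ap", this);
    endfunction

    // A real monitor would sample an interface; here we show only the publish step
    function void publish(bus_txn t);
      ap.write(t);   // notify every connected observer
    endfunction
  endclass

  // An "observer": a coverage collector subscribed to the monitor
  class bus_coverage extends uvm_subscriber #(bus_txn);
    `uvm_component_utils(bus_coverage)
    function new(string name, uvm_component parent);
      super.new(name, parent);
    endfunction
    // Called automatically for every transaction the monitor publishes
    function void write(bus_txn t);
      `uvm_info("COV", $sformatf("observed addr=0x%0h", t.addr), UVM_LOW)
    endfunction
  endclass
endpackage
```

An enclosing environment connects the two in its connect_phase, e.g. monitor.ap.connect(coverage.analysis_export); a scoreboard, a logger, or nothing at all can be attached without touching the monitor, which is exactly the decoupling the observer pattern generalizes.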
by Thomas Ellis, Mentor Graphics
For all the incredible technological advances to date, no one has found a way to generate additional time; consequently, there never seems to be enough of it. Since time cannot be created, it is critically important to ensure that it is spent as wisely as possible. Applying automation to common tasks and identifying problems earlier are just two proven ways to make the best use of time during the verification process. Continuous Integration (CI) is a software practice focused on doing precisely that, resulting in a more efficient use of time.
The basic principle behind Continuous Integration is that the longer a branch of code is checked out, the more it drifts away from what is stored in the repository. The more the two diverge, the more complicated it becomes to eventually merge the changes back in, ultimately leading to what is commonly referred to as "integration hell". To avoid this, and ultimately save engineers time, CI calls for integrating regularly and often (typically daily).
Regular check-ins are, of course, only half the equation; you also need to be able to verify the changes quickly, otherwise many small check-ins over several days are no different from one large check-in at week's end. Commonly, in a Continuous Integration environment, a CI server monitors source control for check-ins, which in turn trigger a CI process (time-based triggers are also common). This process builds the necessary design files and runs the requisite integration tests. Once complete, the results of the tests are reported back to the user and, assuming everything passed, the changes can be safely committed to the repository.
by Saumya Agrawal and Manish Chand, Mentor Graphics
The display protocol IP market is growing at a very fast pace. This is chiefly the outcome of the incredible increase in popularity of a wide variety of display source devices, such as DVD players and computer systems, and display sink/receiver devices, such as televisions, projectors, and display instruments. End users, the consumers, have also become more technologically savvy, increasing the demand for more and better products.
With this rapid growth in sophisticated display-dependent devices, the verification of such devices must grow even faster. This verification must occur within very tight schedules, which demands a user-friendly solution so verification engineers can spend their verification cycles productively and effectively. This article provides a unique solution, far superior to other solutions in the industry, that allows engineers to design structured, reusable VIP that is easy to use and makes it easy to debug issues.
The display IP segment involves transferring images and in some cases audio of various types and dimensions. Protocols like HDMI, Embedded DisplayPort, V-by-One, and others, as defined by multiple organizations, fall under this category. Along with these, there are various content protection mechanisms specified by various HDCP versions.
by Saurabh Sharma, Mentor Graphics
Non-Volatile Memory Express® (NVMe®) is a new software interface optimized for PCIe® Solid State Drives (SSD). It significantly improves both random and sequential performance by reducing latency, enabling high levels of parallelism, and streamlining the command set while providing support for security, end-to-end data protection, and other client and enterprise features. NVMe provides a standards-based approach, enabling broad ecosystem adoption and PCIe SSD interoperability.
This article provides an overview of the NVMe specification and examines some of its key features. We will discuss its pros and cons, compare it to conventional technologies, and point out key areas to focus on during its verification. In particular, we will describe how NVMe Questa® Verification IP (QVIP) effectively contributes to and accelerates the verification of PCIe-based SSDs that use NVMe interfaces.
Over the last decade, the amount of data stored by enterprises and consumers has increased astronomically. For enterprise and client users, this data has enormous value, but it requires platform advancements to deliver the needed information quickly and efficiently.
by Yogesh Chaudhary and Vikas Sharma, Mentor Graphics
The MIPI Alliance's signature dishes, C-PHY™ and D-PHY™, are becoming favorites of the imaging industry. These interfaces allow system designers to easily scale up the existing MIPI Alliance Camera Serial Interface (CSI-2™) and Display Serial Interface (DSI™) ecosystems to support higher-resolution image sensors and displays while keeping power consumption low. This gives them an edge in bringing bigger and better pictures to mobile systems.
"The MIPI C-PHY specification was developed to reduce the interface signaling rate to enable a wide range of high-performance and cost-optimized applications, such as very low-cost, low-resolution image sensors; sensors offering up to 60 megapixels; and even 4K display panels," said Rick Wietfeldt, chair of the MIPI Alliance Technical Steering Group.
"The MIPI C-PHY, D-PHY and M-PHY®, these three physical layers, combined with MIPI Alliance application protocols, address the evolving interface needs of the entire mobile device. Fundamentally, MIPI Alliance interfaces enable manufacturers to simplify the design process, reduce costs, create economies of scale and shorten time-tomarket," said Ken Drottar, chair of the MIPI Alliance PHY Working Group.
by Mark Peryer, Mentor Graphics
Almost all electronic systems use memory components, either for storing executable software or for storing data. Accurate memory models are fundamental, and making these models available in proven, standards-based libraries is essential to the functional verification of these kinds of designs. The models that make up the library should possess specific qualities, and the library itself should deliver a comprehensive solution that supports any type of simulation environment.
High fidelity memory models should include:
- Front and back door memory protocol interfaces
- Assertions
- Functional coverage monitors
- Memory protocol debug support
- Standards compliance
- Compatibility with all major simulators
- Flexible configuration for second-source evaluation
Mentor Graphics® now offers a new, comprehensive memory Verification IP (VIP) library that embodies all of these qualities and addresses the growing need for accurate memory simulation models.
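To make the front door/back door distinction concrete, here is a minimal, generic sketch of a memory model written purely for illustration; it is not the Mentor VIP API, and all of the names are hypothetical. The front door path goes through the bus-style pins, the back door reads and writes the storage array directly in zero time, and a simple assertion and coverage monitor sit alongside.

```systemverilog
// Hypothetical illustration only; not the Questa memory VIP interface.
interface simple_mem_if #(parameter AW = 16, DW = 32) (input logic clk);

  // Front door: a trivial synchronous write/read protocol
  logic          wr_en, rd_en;
  logic [AW-1:0] addr;
  logic [DW-1:0] wdata, rdata;

  // Storage array shared by both access paths
  logic [DW-1:0] mem [2**AW];

  // Front door behavior
  always_ff @(posedge clk) begin
    if (wr_en) mem[addr] <= wdata;
    if (rd_en) rdata     <= mem[addr];
  end

  // Back door: zero-time access for loading images or checking results
  function automatic void backdoor_write(logic [AW-1:0] a, logic [DW-1:0] d);
    mem[a] = d;
  endfunction

  function automatic logic [DW-1:0] backdoor_read(logic [AW-1:0] a);
    return mem[a];
  endfunction

  // Protocol assertion: never read and write in the same cycle
  assert property (@(posedge clk) !(wr_en && rd_en))
    else $error("simultaneous read and write");

  // Functional coverage monitor on the front door traffic
  covergroup mem_cg @(posedge clk);
    cp_op   : coverpoint {wr_en, rd_en} { bins write = {2'b10};
                                          bins read  = {2'b01}; }
    cp_addr : coverpoint addr { bins low = {[0:255]}; bins rest = default; }
  endgroup
  mem_cg cg = new();

endinterface
```

A production model layers a complete memory protocol, compliance checks, debug support, and the configurability listed above on top of this same basic structure.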
by Doug Amos, Mentor Graphics
Well done team; we've managed to get hundreds of millions of gates of FPGA-hostile RTL running at 10MHz, split across a dozen FPGAs. Now what? The first SoC silicon arrives in a few months, so let's get going with integrating our software with the hardware and testing the heck out of it. For that, we'll need to really understand what's going on inside all those FPGAs.
Ah, there's the rub.
Our conversations with very many prototypers, confirmed by numerous user surveys, tell us that debug has emerged as just about the biggest challenge for prototypers today. In fact, debug is really a series of challenges.
Assuming you trust your FPGA hardware, the first challenge is to make sure you haven't introduced new bugs during the process of converting and partitioning the SoC into the FPGA hardware. Any unintended functional inconsistency between the SoC design and its FPGA prototype means that we don't even get to first base.
The recommended approach is to bring the design up piecemeal, and debug each new piece in its turn. Some prototypes can be driven via a signal-level interface from the RTL simulation testbench to the top-level ports on the design. Then a relevant simulation testbench can be applied as each function is added to the prototype in turn. When the whole design passes this initial check, we can be confident that the FPGA prototype is a valid cycle-accurate representation of our RTL.
by Shashi Bhutada, Mentor Graphics
DO-254 and other safety-critical applications require meticulous initial requirements capture followed by accurate functional verification. "Elemental Analysis" in DO-254 refers to verification completeness: ensuring that all 'elements' of a design are actually exercised by the pre-planned testing. Code coverage is good for checking whether implementation code has been tested, but it cannot guarantee functional accuracy. Currently, functional accuracy is guaranteed using pre-planned directed tests, auditing the test code, and auditing the log files, which does not scale as designs get more complex. In this article we will look at using SystemVerilog syntax to concisely describe functional coverage in the context of accurate "elemental analysis".
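As a flavor of the SystemVerilog syntax in question, here is a minimal covergroup sketch; the module, signal, and requirement names are ours and purely illustrative. Each coverpoint and cross ties back to a requirement, giving an auditable, machine-checked record that the planned element conditions were actually exercised.

```systemverilog
// Illustrative only: module, signal, and requirement names are hypothetical.
module alu_element_coverage (
  input logic       clk,
  input logic [3:0] opcode,
  input logic [7:0] operand_a
);

  covergroup alu_element_cg @(posedge clk);
    option.per_instance = 1;

    // REQ-ALU-001: every documented opcode must be exercised
    cp_opcode : coverpoint opcode {
      bins add      = {4'h0};
      bins sub      = {4'h1};
      bins mul      = {4'h2};
      bins reserved = {[4'h3:4'hF]};  // anomalous encodings, tracked separately
    }

    // REQ-ALU-002: operand extremes must be hit
    cp_operand : coverpoint operand_a {
      bins zero   = {8'h00};
      bins max    = {8'hFF};
      bins middle = default;
    }

    // REQ-ALU-003: every opcode class against every operand class
    x_op_operand : cross cp_opcode, cp_operand;
  endgroup

  alu_element_cg cg = new();

endmodule
```

A module like this can be attached to the element with a bind statement, and its results merge into the coverage database next to code coverage, which is what makes the evidence auditable rather than a matter of reading log files.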
"Safety Specific Verification Analysis" as called out in DO-254 Appendix B addresses the need to check not only intended-function requirements, but also anomalous behaviors. It is left up to the applicant to propose a method to sufficiently stimulate the "element" to expose any anomalous behavior. In this article we shall focus on this "Safety Specific Verification Analysis" or "Sufficient Elemental Analysis". We will be using UVM to describe the stimulus space to look for anomalous behavior.
The article will also highlight items to use when auditing a flow from a DER's (Designated Engineering Representative's) point of view.
by François Cerisier, AEDVICES Consulting
Building a complex signal processing function requires a deep understanding of the signal characteristics and of the different algorithms and their performance.
When it comes to the verification(a) of such designs, a fairly generic approach consists of injecting more or less realistic stimulus and comparing the results against reference models (most often C or Matlab®).
Requirement-based designs following a DO-254(1) process add the constraint that each function of the design should be specified as a traceable requirement. Proof of the complete verification of each requirement should also be provided, with additional emphasis on physical verification, and therefore on running the tests on the physical device.
This article describes a combined requirement and metric driven methodology developed at a customer site for the verification of a complex signal processing SoC block under DO-254 constraints. This methodology also enables both horizontal and vertical reuse of the tests, allowing tests to run both in IP simulation and on FPGA boards at SoC level. This approach is described in a generic way and can be applied to different signal or data processing designs.
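As an illustration of the reference-model comparison described above, here is a generic scoreboard sketch of our own; it is not the customer's environment, and the DPI function name is hypothetical. A golden model compiled from C (or generated from the Matlab® code) is wrapped behind a DPI-C import and called transaction by transaction.

```systemverilog
package dsp_check_pkg;
  import uvm_pkg::*;
  `include "uvm_macros.svh"

  // Hypothetical golden model compiled from C (or generated from Matlab code)
  import "DPI-C" function int ref_filter(input int sample);

  // Transaction carrying one input sample and the DUT's corresponding output
  class dsp_txn extends uvm_sequence_item;
    int sample_in;
    int dut_out;
    `uvm_object_utils(dsp_txn)
    function new(string name = "dsp_txn");
      super.new(name);
    endfunction
  endclass

  // Scoreboard: runs the reference model on the same stimulus and compares
  class dsp_scoreboard extends uvm_scoreboard;
    `uvm_component_utils(dsp_scoreboard)

    uvm_analysis_imp #(dsp_txn, dsp_scoreboard) analysis_export;

    function new(string name, uvm_component parent);
      super.new(name, parent);
      analysis_export = new("analysis_export", this);
    endfunction

    // Called by the monitor for every completed transaction
    function void write(dsp_txn t);
      int expected = ref_filter(t.sample_in);
      if (t.dut_out != expected)
        `uvm_error("SCB", $sformatf("sample %0d: expected %0d, got %0d",
                                    t.sample_in, expected, t.dut_out))
    endfunction
  endclass
endpackage
```

Because the comparison sits behind a standard analysis connection, the same checking code can, in principle, follow the tests from IP-level simulation up to SoC level, which is the kind of reuse the methodology aims for.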
by Hari Patel and Amarkumar Solanki, eInfochips
As per the DO-254 standard, Airborne Electronic Hardware (AEH) requires rigorous assurance that the device behaves as intended within optimal operating conditions. For DAL A (Design Assurance Level A) devices, you need to verify 100% of the device's functionality and achieve 100% code coverage, including FEC (Focused Expression Coverage). Code coverage can be managed using a simulation tool such as Questa®, while functional coverage requires a comprehensive Verification Case Document (VCD) with cases traced to each requirement of the AEH device.
Once the VCD is defined, it is necessary to ensure the accuracy of the testbench code and test cases written to achieve the intended operation. To do so, a Verification Procedure Document (VPD) is generated, which consists of test procedures and coverage information for each of the test cases.
Once the VCD and VPD are defined, a test plan is generated by linking both documents, which proves that all test scenarios are covered, with 100% coverage of covergroups, checkers, assertions, and test cases. Simulation tools like Questa provide support for generating test plan documents: given a UCDB file as input to the vcover command, the tool reports the overall coverage and what is not covered.
by Ates Berna, ELECTRA IC
In late 2014, we found ourselves in a project to develop a custom-interconnect, UVM-compliant VIP. Not only did we need to develop a custom UVM VIP, we also needed to connect it to a DUT with PCIe and Avalon Streaming interfaces and perform advanced verification using our custom UVM VIP.
The challenges were:
- Developing a custom interconnect UVM compliant VIP from scratch
- Verification of a DUT using the custom interconnect UVM VIP
- Verification of the PCIe interface of the DUT
- Making sure that the documentation created during the verification is DO-254 compliant: requirements, traceability information, functional coverage reports, and code coverage reports needed to be created
This verification project had to be compliant with DO-254. Therefore, not only did we have to plan the UVM environment and how we were going to do the verification, we also had to plan how we would manage different aspects of the project life cycle.
by Gunther Clasen, Ensilica
Testbenches written in SystemVerilog and UVM face the problem of configurability and reusability between block and system level. Whereas reuse of UVCs from a block-level to a system-level verification environment is relatively easy, the same cannot be said for the UVCs' connection to the harness: the interfaces that these UVCs need change from connections to primary inputs and outputs at block level to a set of hierarchical probes into the DUT at system level. This requires a rewrite of all interface connections and hinders reuse.
This article demonstrates how to write interface connections only once and use them in both block- and system-level testbenches. The order is not important: system-level testbenches can be written before all the blocks of the DUT are completed, and DUT and UVM blocks can easily be interchanged. By taking care not to use virtual interfaces in the UVC, but instead placing Bus Functional Models (BFMs) in the interface – so-called polymorphic interfaces – UVCs can be made fully configurable as well as reusable.
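To make the technique concrete, here is a minimal polymorphic-interface sketch under our own naming assumptions (it is not the article's code): the UVC only ever sees an abstract BFM handle, while the interface supplies a concrete implementation bound to its own signals and publishes it through the configuration database.

```systemverilog
package bus_uvc_pkg;
  // Abstract BFM API: this is all the UVC ever depends on.
  virtual class bus_bfm_base;
    pure virtual task drive(bit [31:0] addr, bit [31:0] data);
  endclass
endpackage

// The interface works unchanged whether its ports are tied to block-level
// pins or to hierarchical probes at system level.
interface bus_if (input logic clk);
  import uvm_pkg::*;
  import bus_uvc_pkg::*;

  logic        valid;
  logic [31:0] addr;
  logic [31:0] data;

  // Concrete BFM: lives inside the interface, so it can touch the signals
  // directly without any virtual interface handle.
  class bus_bfm extends bus_bfm_base;
    task drive(bit [31:0] a, bit [31:0] d);
      @(posedge clk);
      valid <= 1'b1;
      addr  <= a;
      data  <= d;
      @(posedge clk);
      valid <= 1'b0;
    endtask
  endclass

  bus_bfm bfm = new();

  // Publish the abstract handle; the UVC retrieves bus_bfm_base and never
  // needs to know where in the hierarchy this interface is instantiated.
  initial uvm_config_db#(bus_bfm_base)::set(null, "*", "bus_bfm", bfm);
endinterface
```

A driver then retrieves the handle with uvm_config_db#(bus_bfm_base)::get(this, "", "bus_bfm", m_bfm) and calls m_bfm.drive(); because only the abstract type appears in the UVC, the same driver code runs unchanged whether the interface sits on a block's primary ports or is buried as a hierarchical probe deep inside the SoC.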