Verification Horizons Articles:
by Tom Fitzpatrick, Editor and Verification Technologist, Mentor Graphics Corporation
As I write this, spring appears to have finally arrived here in New England – about a month and a half later than the calendar says it should have. As much as I love warm spring weather, though, it means that I now have to deal with my lawn again. I know that many people actually enjoy working on the lawn, but as far as I'm concerned, the greatest advance in lawn-care technology happened last year when my son became old enough to drive the lawn mower. If you've ever seen a 13-year-old boy driving a lawn tractor, you'll understand my characterizing him as "constrained-random" when it comes to getting the lawn cut. I handle the "directed testing" by taking care of the edging and hard-to-reach spots, and together we manage to get the lawn done in considerably less time than it used to take me alone.
by Joshua Rensch, Verification Lead, Lockheed Martin and Tom Fitzpatrick, Verification Methodologist, Mentor Graphics Corporation
Many of us are so used to the idea of "verification methodology," including constrained random and functional coverage, that we sometimes lose sight of the fact that there is still a large section of the industry to whom these are new concepts. Every once in a while, it's a good idea to go back to "first principles" and understand how we got where we are and why things like the OVM and UVM are so popular. We have both found ourselves trying to explain these ideas to colleagues, and we thought it might be helpful to document some of the discussions we've had.
by Mark Peryer, Verification Methodologist, Mentor Graphics Corporation
The UVM register model provides a way of tracking the register content of a DUT and a convenience layer for accessing register and memory locations within the DUT. The register model abstraction reflects the structure of a hardware-software register specification, since that is the common reference specification for hardware design and verification engineers, and it is also used by software engineers developing firmware layer software. It is very important that all three groups reference a common specification and it is crucial that the design is verified against an accurate model.
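A minimal sketch shows the kind of abstraction the register model provides. The register name and field layout below are purely illustrative, not taken from the article:

```systemverilog
// Hypothetical 32-bit control register modeled with the UVM register layer.
// The model mirrors the hardware-software register specification, so tests
// can read/write by name instead of hard-coding addresses.
class ctrl_reg extends uvm_reg;
  `uvm_object_utils(ctrl_reg)
  rand uvm_reg_field enable;
  rand uvm_reg_field mode;

  function new(string name = "ctrl_reg");
    super.new(name, 32, UVM_NO_COVERAGE);
  endfunction

  virtual function void build();
    // configure(parent, size, lsb_pos, access, volatile, reset,
    //           has_reset, is_rand, individually_accessible)
    enable = uvm_reg_field::type_id::create("enable");
    enable.configure(this, 1, 0, "RW", 0, 1'b0, 1, 1, 0);
    mode   = uvm_reg_field::type_id::create("mode");
    mode.configure(this, 2, 1, "RW", 0, 2'b00, 1, 1, 0);
  endfunction
endclass
```

A test would then access the DUT through the model, e.g. `regmodel.ctrl.write(status, 32'h3);`, while the model's mirror value tracks the expected register content for checking.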
by Hans van der Schoot, Anoop Saha, Ankit Garg, Krishnamurthy Suresh, Emulation Division, Mentor Graphics Corporation
A methodology is presented for writing modern SystemVerilog testbenches that can be used not only for software simulation but, especially, for hardware-assisted acceleration. The methodology is founded on a transaction-based co-emulation approach and enables truly single-source, fully IEEE 1800 SystemVerilog-compliant, transaction-level testbenches that work for both simulation and acceleration. Substantial run-time improvements are possible in acceleration mode without sacrificing simulator verification capabilities and integrations, including SystemVerilog coverage-driven, constrained-random and assertion-based techniques, as well as prevalent verification methodologies like OVM or UVM.
by Mike Andrews, Verification Technologist, Mentor Graphics
The Questa® inFact intelligent testbench automation tool has already been proven to help verification teams dramatically reduce the time it takes to reach their coverage goals. It does this by intelligently traversing a graph-based description of the test sequences, allowing the user to prioritize the input combinations required to meet the testbench coverage metrics while still delivering those sequences in a pseudo-random order to the device under test (DUT). The rule language, an extended Backus-Naur Form (BNF) notation used to describe the graph structure, has recently been enhanced with two powerful new features.
by Darron May, Manager of Verification Analysis Solutions and Gabriel Chidolue, Verification Technologist, Mentor Graphics Corporation
With the sheer volume of data produced by today's verification environments, there is a real need for solutions that deliver the highest capacities along with the performance to access and analyze the data in a timely manner. No single coverage metric can be used to measure functional verification completeness, and today's complex systems demand multiple verification methods.
by Yuan Lu, Nextop Software Inc., and Ping Yeung, Mentor Graphics Corporation
As SoC integration complexity has grown tremendously over the last decade, traditional black-box, checker-based verification methodologies have failed to provide the observability needed. Assertion-based verification (ABV) is widely recognized as a solution to this problem. ABV is a methodology in which designers use assertions to capture specific internal design intent or interface specifications and, through simulation, formal verification, or emulation of these assertions, verify that the design correctly implements that intent. Assertions actively monitor a design (or testbench) to ensure correct functional behavior.
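As a flavor of the kind of interface assertion the article has in mind, here is a small SystemVerilog Assertions sketch. The req/ack protocol and the 4-cycle bound are assumptions for illustration, not from the article:

```systemverilog
// A simple interface checker: once req rises it must stay asserted
// until ack, and ack must arrive within 4 cycles. Bound to a design
// or instantiated in a testbench, the assertion monitors the signals
// on every clock, giving the internal observability the article
// describes.
module req_ack_checker(input logic clk, rst_n, req, ack);
  property p_req_until_ack;
    @(posedge clk) disable iff (!rst_n)
      $rose(req) |-> req [*1:4] ##0 ack;
  endproperty

  a_req_ack: assert property (p_req_until_ack)
    else $error("req not acknowledged within 4 cycles");
endmodule
```

The same property can be used in simulation, handed to a formal tool as a proof target, or synthesized into an emulator, which is what makes ABV attractive across engines.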
by Mike Bartley and Mike Benjamin, Test and Verification Solutions
Over the years there have been numerous attempts to develop benchmarking methodologies. One of the most widely used is the Capability Maturity Model Integration (CMMI) developed by the Software Engineering Institute at Carnegie Mellon University. Although aimed at software engineering, it provides a framework that is widely applicable to most business activities. However, whilst we have drawn considerable inspiration from CMMI, it has a number of serious limitations when used to benchmark a highly specific activity such as functional verification.
by Tanja Cotra, Program Manager, HDL Design House
With the increasing number of different VITAL model families, there is a need for a base Verification Environment (VE) that can be reused with each new VITAL model family. Applying UVM methodology to a SystemVerilog testbench for the VITAL models provides such a unified VE. The reusability of this UVM VE is its key benefit compared to the standard approach to VITAL model verification (directed testing). It also incorporates the rule of the "4 Cs" (Configuration, Constraints, Checkers and Coverage): instead of writing specific tests for each DUT feature, a single test can be randomized and run as part of regression, which speeds up the collection of functional coverage.
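The constrained-random half of the "4 Cs" can be sketched as a single randomizable sequence item. The field names and address range below are hypothetical, chosen only to show the pattern:

```systemverilog
// One randomized transaction replaces many directed tests: each
// regression run draws new legal stimulus, and functional coverage
// accumulates across runs.
class vital_txn extends uvm_sequence_item;
  `uvm_object_utils(vital_txn)

  rand bit [7:0]  addr;
  rand bit [31:0] data;
  rand bit        write;

  // Constraint keeps stimulus within the legal register window
  // (illustrative range).
  constraint c_addr_range { addr inside {[8'h00:8'h3F]}; }

  function new(string name = "vital_txn");
    super.new(name);
  endfunction
endclass
```

A covergroup sampling `addr`, `data` and `write` then closes the loop, so coverage collection, rather than test writing, gates regression progress.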
by Sean Safarpour, Evean Qin, and Mustafa Abbas, Vennsa Technologies Inc.
Functional debug is a dreaded yet necessary part of today's verification effort. At the 2010 Microprocessor Test and Verification Workshop, experts agreed that debug consumes approximately one third of the design development time. Typically, debugging occurs in two steps: triage and root-cause analysis. The triage step is applied when bugs are first discovered, usually while debugging regression failures.
by Adam Erickson, Mentor Graphics Corporation
Are macros evil? Well, yes and no. Macros are an unavoidable and integral part of any piece of software, and the Open Verification Methodology (OVM) and Universal Verification Methodology (UVM) libraries are no exception. Macros should be employed sparingly to ease repetitive typing of small bits of code, to hide implementation differences or limitations among the vendors’ simulators, or to ensure correct operation of critical features. Although the benefits of the OVM and UVM macros may be obvious and immediate, benchmarks and recurring support issues have exposed their hidden costs.
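The trade-off the article benchmarks can be seen in a small example. The class below is hypothetical; the two registration styles shown are the standard OVM/UVM ones:

```systemverilog
// Macro-based field automation: concise, but `uvm_field_int generates
// generic copy/compare/print/pack code whose overhead the article's
// benchmarks expose.
class packet extends uvm_sequence_item;
  rand bit [31:0] addr;
  rand bit [31:0] data;

  `uvm_object_utils_begin(packet)
    `uvm_field_int(addr, UVM_ALL_ON)
    `uvm_field_int(data, UVM_ALL_ON)
  `uvm_object_utils_end

  function new(string name = "packet");
    super.new(name);
  endfunction
endclass
```

The leaner alternative is to register with plain `uvm_object_utils(packet) and hand-code `do_copy`/`do_compare`/`do_print`, trading a few extra lines for direct, type-specific code paths.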