Verification Horizons Articles:
by Tom Fitzpatrick, Editor and Verification Technologist, Mentor Graphics Corporation
Hello, and welcome to the annual Super-Sized DAC Edition of Verification Horizons.
I think it's important every once in a while to take a step back and appreciate the creativity, ingenuity and effort that has transformed the technology we experience in our everyday lives. It's easy for us to get complacent about such things, and even take the next new release of our favorite gadget in stride. But since we know what goes into making these things, we should never lose that sense of wonder. I'm reminded of that wonder every time my kids show their grandparents a new iPhone app or show them a PowerPoint presentation they did for homework, and it energizes me to know that I've contributed, in even a small way, to that sense of wonder. I hope that thought and these articles will help you appreciate the magic of what we do.
by Josh Rensch, Application Engineer and John Boone, Sales Specialist, Mentor Graphics
This article is divided into descriptions of each phase shown in Figure 1, the Best Practices Flow. Each phase is broken into Overview, Techniques and Deliverables sections. Information in the Overview is always specific to its phase; information in the other sections may apply to later phases but is not repeated in each one. A topic is introduced only in the phase where it is first used, and subsequent phases refer back to that introduction. For example, the PLAN phase Techniques section calls out issue tracking. A project should track issues throughout the whole development process, not just in the PLAN phase; nevertheless, the topic is introduced once, in the PLAN phase, and later phases simply refer back to it.
by Rich Edelman, Verification Technologist, Mentor Graphics
Guess what's a good way to find bugs and assure functional correctness? Have a good testbench.
Testbenches are different from RTL. They have behavioral Verilog code. They have files and data structures. They have threads. They have objects. Modern testbenches are likely class-based, using SystemVerilog, the UVM and object-oriented programming concepts.
Debugging these modern class-based testbenches has been painful for at least two reasons. First, they are designed and built using object-oriented coding styles, macros, dynamic transactions, phasing and other constructs completely foreign to RTL verification experts. Second, the tools for class-based debug have lagged behind the simulators' ability to simulate them.
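As a hedged illustration (the names and structure here are hypothetical, not taken from the article), the following sketch shows the kind of class-based constructs being described: classes, dynamically allocated transaction objects, and a driver thread blocking on a mailbox, none of which exist in RTL:

```systemverilog
// Hypothetical sketch of class-based testbench constructs: classes,
// dynamically created transaction objects, and a driver thread that
// blocks on a mailbox. All names are illustrative.
class bus_txn;
  rand bit [7:0]  addr;
  rand bit [31:0] data;
  function string convert2string();
    return $sformatf("addr=%0h data=%0h", addr, data);
  endfunction
endclass

class bus_driver;
  mailbox #(bus_txn) req_mb = new();
  task run();
    bus_txn t;
    forever begin
      req_mb.get(t);  // blocks until a transaction object arrives
      $display("driving %s", t.convert2string());
    end
  endtask
endclass
```

Debugging such code means inspecting dynamically created objects and blocked threads rather than static signals, which is exactly where RTL-oriented debug tools have historically fallen short.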
by Russ Klein, Program Director, Mentor Graphics
Emulators, like Mentor Graphics Veloce®, are able to run RTL designs orders of magnitude faster than logic simulators. As a result, emulation is used to execute verification runs which would otherwise be impractical in logic simulation. Often these verification runs include some software executing on the design, as software is taking an increasing role in the functionality of a System-on-Chip (SoC). Software simply executes too slowly to run anything but the smallest testcase in logic simulation. For example, booting embedded Linux on a typical ARM® design might take 20 or 30 minutes in emulation. The same activity in a logic simulator would take 7 or 8 months.
With significant software being executed as part of the verification run, there needs to be some way to debug it. Historically, this is accomplished using a JTAG (or similar) probe with a general-purpose debugger. JTAG stands for "Joint Test Action Group", the standards organization that developed a standard interface for extracting data from a board to an external system for testing purposes. This capability is used to extract the register state from a processor, which a debugger can use to present the state of the processor. The JTAG standard was extended to let the debugger not just view the state of the processor but also set that state and control its execution. As a result, JTAG probes have become the "standard" method of debugging "bare metal" embedded software on a board or FPGA prototype. Through just a few signals, the debugger and the JTAG probe can download a software image to a board, single-step through the program, and read arbitrary locations in memory.
by Vaibhav Gupta, Lead Member Technical Staff and Yogesh Chaudhary, Consulting Staff, Mentor Graphics
Questa Verification IP (QVIP) enables reuse of verification components and the build environment. Other features include a layered verification methodology supported by highly configurable and feature-rich transactors, a protocol assertion monitor, advanced debug support and comprehensive functional coverage. LLI-based design IPs require a complex verification system in which standalone Verilog testing can kick-start verification planning but is not sufficient for complete verification. What's needed is a well-planned and well-executed QVIP and methodology that address the following questions: Is the protocol captured in the design IP verified? Which error scenarios are verified? Have you covered all required scenarios? Can you provide a progress report to your manager? These challenges are not new for verification engineers. However, complex verification projects force teams to do more planning. If they don't, their verification engineers can easily get lost in technical detail, which slips the project schedule, jeopardizes quality and increases the risk of a re-spin.
by Martin Vlach, PhDude, IEEE Fella, Chief Technologist AMS, Mentor Graphics
Well here I am again. Last time I talked about putting stuff together, and when I mean stuff, it turned out that the digital folks handed me an RTL and the analog dudes gave me a SPICE netlist. I finally got it all working together, and let'er rip, thinking that this was easy and I'll be done in no time. But that "no time" turned into hours, and then days, and then nights, too. Simulating the analog stuff just takes forever, and while I don't mind having a cup of Joe now and then, I got pretty jittery. And don't talk to me about my boss. She was after me all the time – are we done yet? When are we going to know if we put it together right? Tell me something, anything, I can't wait, we have a market to go to, my bosses are after me – you get the drill.
by Eldon Nelson M.S. P.E., Verification Engineer, Micron Technology
SystemVerilog has the concept of covergroups, which keep track of conditions observed during a simulation. If you have a single instance of a covergroup in your design, you don't need to deal with merging coverage across instances. If you do have multiple instances of a covergroup and want to merge coverage, there are implementation details that can make a big difference in what information you will be able to collect. An incorrect application of a covergroup might collect extraneous information that slows down the simulation and the merging process. Or, it could produce covergroups that cannot be merged with other instances because of where the covergroup was defined. You can avoid these mistakes by making informed choices when implementing a covergroup.
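As a hedged sketch (the covergroup, its signals and its bins are hypothetical, not taken from the article), the following shows the kind of implementation choice at stake: with `option.per_instance = 1` each instance keeps its own bins, while without it all instances of the type accumulate into a single per-type result:

```systemverilog
// Hypothetical covergroup embedded in a module; clk and opcode are
// assumed signals. With option.per_instance = 1, cg_a and cg_b each
// track their own coverage and can be merged or reported separately;
// without it, both instances fold into one per-type result.
covergroup op_cg(string inst_name) @(posedge clk);
  option.per_instance = 1;
  option.name = inst_name;
  cp_opcode: coverpoint opcode {
    bins load  = {4'h1};
    bins store = {4'h2};
    bins alu[] = {[4'h3:4'h7]};
  }
endgroup

op_cg cg_a = new("port_a");
op_cg cg_b = new("port_b");
```

Where the covergroup is defined (in a class, a module, or an interface) also determines whether tools can later merge its instances across simulations, which is the pitfall the article warns about.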
by Samrat Patel, ASIC Verification Engineer, and Vipul Patel, ASIC Engineer, eInfochips
A verification engineer's fundamental goal is ensuring that the device under test (DUT) behaves correctly in its verification environment. Achieving this goal gets more difficult as chip designs grow larger and more complex, with thousands of possible states and transitions. A comprehensive verification environment must be created that minimizes wasted effort. Using functional coverage as a guide for directing verification resources by identifying tested and untested portions of the design is a good way to do just that.
Functional coverage is user-defined: each piece of functionality the test plan calls out for testing is mapped to a cover point. Whenever that functionality is hit during simulation, the corresponding coverage point is automatically updated. A functional coverage report can then be generated summarizing how many coverage points were hit, metrics that can be used to measure overall verification progress.
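A minimal sketch of this flow, with hypothetical signals (`clk`, `wr_en`, `burst_len`) standing in for a real design:

```systemverilog
// Hypothetical cover point tied to a test-plan item: "observe every
// legal write burst length". Sampling happens automatically on each
// qualifying clock edge; get_coverage() reports the percentage of
// bins hit, which can feed an overall progress metric.
covergroup wr_burst_cg @(posedge clk iff wr_en);
  cp_burst: coverpoint burst_len {
    bins len[] = {1, 2, 4, 8};  // one bin per legal burst length
  }
endgroup

wr_burst_cg wr_cg = new();

final begin
  $display("write-burst coverage: %0.2f%%", wr_cg.get_coverage());
end
```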
by Gunjan Gupta and Anand Shirahatti, CTO, Arrow Devices
Functional coverage plays a very important role in verifying the completeness of a design. However, customizing a coverage plan for different chips, users, specification versions, and so on is a tedious process, especially for a complex design.
Quite often a verification engineer needs to customize the coverage plan: the plan can vary among multiple users, and regular specification updates must be reflected in it. Making all these changes in the source coverage code leads to conflicts and confusion among different users and projects. Managing these issues is a challenging and time-consuming task for a verification engineer.
by Mahak Singh, Design Engineer, Siddhartha Mukherjee, Sr. Design Engineer, Truechip
With the growing complexity of systems-on-chip and the number of gates in designs, verification has become a huge task. As design complexity increases, verification is also evolving and growing day by day. Once an underdog, it is now a champion, consuming more than 70% of the manpower and time of the whole tape-out cycle. That was not the case earlier, when designs were simple and less focus was given to verification. The industry was set for a change. More minds and companies jumped into the verification domain, and within a decade it was a different ball game altogether: open groups, new ideas, better strategies backed by global verification standards (such as the IEEE standards). The verification enigma was about to reveal all its secrets. Let us have a look.
by Mihajlo Katona, Head of Functional Verification, Frobas
Developing UVM-based testbenches from scratch is a time consuming and error-prone process. Engineers have to learn object oriented programming (OOP), a technique with which ASIC developers generally are not familiar. Automatic generation of testbench building blocks, guided by the standard, is a sensible way to speed up the initial setup. Verification engineers can then focus on specific tasks associated with verifying the design.
This article discusses how a UVM verification environment was set up easily for a mixed signal device under test (DUT) using a scripting tool developed in-house and based on a testbench configuration file. The article focuses mostly on presenting two mixed signal DUT examples and the corresponding UVM-based testbenches with a digital-on-top structure. Also discussed: how Questa® InFact is used to improve coverage based on results from nightly regression runs for functional and mixed signal verification, managed with Questa Verification Run Manager.