Verification Horizons Complete Issue
by Tom Fitzpatrick, Editor and Verification Technologist, Mentor Graphics Corporation
Welcome to the 10th Anniversary edition of Verification Horizons! It's hard to believe we've been doing this long enough to reach such a significant milestone, but it's been a great ride. For me personally, this issue marks two significant milestones as well.
My son and I reached the pinnacle of our Boy Scout experience this past summer when we successfully completed a 10-day 80-mile hike at the Philmont Scout Ranch in New Mexico, including reaching the 12,441 ft. summit of Baldy Mountain. Please indulge me in a bit of fatherly pride when I tell you that the highlight of the trip was sharing the experience with my son. The other major milestone is that my daughter has just started high school.
She's quickly gotten involved in the Drama Club and was cast in the fall play, which is a milestone I'm sure she'll remember for a long time. We are very proud of how well she's handled this transition in her life, despite the fact that it makes my wife and me feel old that our youngest is now in high school.
As with everything, milestones mark the end of one phase and the beginning of another.
As much as we've enjoyed sharing so many articles about verification techniques and technology with you over the years, we look forward to continuing that mission as we advance to our next milestone.
by Mark Peryer, Verification IP Architect, Mentor Graphics
One of the most common requirements for the verification of a chip, board or system is to be able to model the behaviour of memory components, and this is why memory models are one of the most prevalent types of Verification IP (VIP).
Memory models have two main functions. The first is to store information in a data structure so that it can be written, retrieved and updated. The second is to provide a signal-level interface which allows access to the storage array using a pre-defined protocol (see figure 1 for a representative functional block diagram). For effective verification, the model should also check that the signal-level protocol on the host interface is behaving correctly, provide backdoor access to the storage array, and provide a means of collecting functional coverage.
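As a rough illustration of the storage side, a minimal sketch might wrap a sparse associative array with front-door routines called by the protocol bus-functional model and backdoor routines for direct testbench access. The class and method names below are invented for illustration and do not reflect any particular VIP's API:

// Minimal illustrative sketch: a sparse storage core with front-door
// and backdoor access, of the kind a memory-model VIP wraps with a BFM.
class mem_storage #(int ADDR_W = 32, int DATA_W = 32);

  // Sparse associative array: only locations that have been written exist
  local logic [DATA_W-1:0] mem [bit [ADDR_W-1:0]];

  // Front-door style access, normally called by the protocol BFM
  function void write(bit [ADDR_W-1:0] addr, logic [DATA_W-1:0] data);
    mem[addr] = data;
  endfunction

  function logic [DATA_W-1:0] read(bit [ADDR_W-1:0] addr);
    if (!mem.exists(addr))
      return 'x;                      // unwritten location reads as unknown
    return mem[addr];
  endfunction

  // Backdoor access for test setup and checking, bypassing the protocol
  function void backdoor_load(bit [ADDR_W-1:0] addr, logic [DATA_W-1:0] data);
    mem[addr] = data;
  endfunction

  function bit backdoor_compare(bit [ADDR_W-1:0] addr, logic [DATA_W-1:0] expected);
    if (!mem.exists(addr))
      return 0;
    return (mem[addr] === expected);
  endfunction
endclass

The associative array keeps the memory footprint proportional to the locations actually used, which is what makes modelling multi-gigabyte devices practical in simulation.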
Over the years, memory devices such as Dynamic RAMs (DRAMs) have driven semiconductor process development as manufacturers have sought to create faster memories with greater storage capacity through the use of smaller feature sizes. Over time, several different memory protocols have been developed to optimise the various trade-offs between device access speed and packaging costs. Today, one of the most widely used protocols is the Double Data Rate (DDR) memory protocol, together with its low-power variant, LPDDR. The DDR protocol is designed for high-speed access, with data transferred on both edges of the clock. In practice, however, this level of throughput can only be achieved by managing the DDR accesses so that time- and power-expensive operations such as activate and pre-charge take place in one part of the memory whilst a data transfer takes place in another. It is the task of the DDR controller to order the operations presented to the DDR device so as to maximise throughput, and verifying these complex trade-offs requires a sophisticated memory model.
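As one hedged example of the coverage aspect mentioned above, a simplified covergroup (with an invented command enumeration and sampling interface) might track which commands, banks and back-to-back command sequences have actually been exercised against the device:

// Simplified, illustrative coverage of DDR command interleaving per bank
typedef enum {ACTIVATE, READ, WRITE, PRECHARGE, REFRESH} ddr_cmd_e;

class ddr_cmd_coverage;
  ddr_cmd_e cmd;
  bit [2:0] bank;

  covergroup cg;
    CMD  : coverpoint cmd;
    BANK : coverpoint bank;
    // Was row management in one bank overlapped with traffic in another?
    CMD_X_BANK : cross CMD, BANK;
    // Back-to-back command sequences (e.g. ACTIVATE followed by READ)
    CMD_SEQ : coverpoint cmd {
      bins act_to_rd = (ACTIVATE => READ);
      bins act_to_wr = (ACTIVATE => WRITE);
      bins rd_to_pre = (READ => PRECHARGE);
    }
  endgroup

  function new();
    cg = new();
  endfunction

  // Called by the model each time a command is decoded at the pins
  function void sample(ddr_cmd_e c, bit [2:0] b);
    cmd  = c;
    bank = b;
    cg.sample();
  endfunction
endclass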
by Staffan Berg, European Applications Engineer, and Mike Andrews, Verification Technologist, Mentor Graphics
Verifying that a specific implementation of a processor is fully compliant with the specification is a difficult task. Due to the very large total stimuli space it is difficult, if not impossible, to ensure that every architectural and micro-architectural feature has been exercised. Typical approaches involve collecting large test-suites of real SW, as well as using program generators based on constrained-random generation of instruction streams, but there are drawbacks to each.
The SystemVerilog Constrained-Random (CR) 'single class with constraints on all possible fields' approach can be unwieldy and hard for solvers to handle, especially when dealing with a series of instructions, each carrying many constraints. Slow solving performance, testbench bugs hidden in the complexity, or simply not having the flexibility to fully describe the desired verification scenarios are quite common issues. Even when the problem is adequately modeled, achieving meaningful coverage can be close to impossible. There are custom options other than SV CR: creating an instruction stream generator from scratch in, for example, C++ may seem like a viable option, but the complexity and vast solution space quickly make this approach unmanageable. Moreover, languages like C++ have no notion of functional coverage, which makes it hard or impossible to assess when verification is complete.
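To make the scaling problem concrete, the 'single class' style being criticized here often looks something like the hedged sketch below. The opcodes, fields and constraints are invented for illustration; the point is the structure, with one flat class carrying rules for every instruction form and a whole program randomized as a single array:

// Hypothetical sketch of the "single class with constraints on all
// possible fields" style; a real ISA has far more formats and rules.
typedef enum {ADD, SUB, LOAD, STORE, BRANCH} opcode_e;

class instruction;
  rand opcode_e   opcode;
  rand bit [4:0]  rd, rs1, rs2;
  rand bit [15:0] imm;

  // Every legality rule for every instruction form ends up in one place
  constraint legal_c {
    (opcode inside {LOAD, STORE}) -> imm[1:0] == 2'b00;  // aligned accesses
    (opcode == BRANCH)            -> rd == 5'd0;         // no destination reg
  }
endclass

class instruction_stream;
  rand instruction program[];
  int unsigned length;

  function new(int unsigned len = 500);
    length  = len;
    program = new[len];
    foreach (program[i]) program[i] = new();
  endfunction

  // Keep the pre-allocated size; only the instruction fields are solved
  constraint size_c { program.size() == length; }

  // Cross-instruction rules (hazards, ordering) pile up here quickly
  constraint hazard_c {
    foreach (program[i])
      if (i > 0)
        (program[i-1].opcode == LOAD) -> (program[i].rs1 != program[i-1].rd);
  }
endclass

A single call such as assert(stream.randomize()) then asks the solver to resolve every field of every instruction simultaneously, which is exactly where the solving performance and debuggability problems described above tend to appear.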
by Eric Rentschler, Chief Validation Scientist, Mentor Graphics
With ASIC complexity on the increase and unrelenting time-to-market pressure, many silicon design teams still face serious schedule risk from unplanned spins and long post-silicon debug cycles. However, there are opportunities on both the pre-silicon and post-silicon sides that can be systematically improved using on-chip debug solutions. In the pre-silicon world, there is a great opportunity for a solution that can provide more cycles of execution than is possible today with simulation and emulation. But functions like power management span HW, FW, BIOS, virtualization and OS levels, and are difficult to cover until silicon hardware is available. The most recent industry tape-out data shows that despite the use of state-of-the-art pre-silicon verification techniques, the number of spins to get to market is not only higher than planned, but is not improving over time. On the post-silicon side, the black-box complexity of today's designs calls for enhanced internal observation, and many design teams could benefit from enhanced debug logic. A more intuitive debug GUI can enable higher productivity through quicker adoption. The Certus™ on-chip logic analyzer addresses much of this in a systematic manner as an off-the-shelf solution. Certus can be deployed in FPGA or in ASIC silicon, with an easy-to-use GUI for instrumenting the debug logic and interacting with the instruments in silicon. After a decades-long legacy of EDA improvements in test, verification and physical design, Mentor is bridging the gap into the post-silicon realm. This makes cross-functional teams significantly more productive and reduces schedule risk, while allowing teams to focus more on their core business.
by Dr. Lauro Rizzatti, Verification Consultant, Rizzatti LLC
At the beginning of hardware emulation's third decade, circa 2005, system and chip engineers were developing ever more complex designs that mixed many interconnected blocks, embedded multicore processors, digital signal processors (DSPs) and a plethora of peripherals, supported by large memories. The combination of all of these components gave real meaning to the designation system on chip (SoC). The sizes of the largest designs were pushing north of 100-million gates. Embedded software—once virtually absent in ASIC designs—started to implement chip functionality.
By the end of hardware emulation's third decade, i.e., 2015, the largest designs are reaching into the billion-gate region, and well over 50% of functionality is now realized in software. Design verification continues to consume between 50% and 70% of the design cycle.
These trends became tremendous drivers for hardware emulator vendors to redesign and enhance their tools, opening the door to opportunities in all fields of the semiconductor business, from multimedia to networking to storage.
by Kiran Sharma, Sr. Verification Engineer and Vipin Kumar, Sr. Verification Engineer, Agnisys Technology Pvt. Ltd.
Present-day designs use standard interfaces for the connection and management of functional blocks in Systems on Chip (SoCs). These interface protocols are so complex that creating in-house VIPs can consume a great deal of an engineer's development time. A fully verified interface should include all the complex protocol compliance checking, generation and application of different test case scenarios, etc.
Our IDesignSpec tool automatically generates register and memory interfaces which can connect to all the standard bus protocols. One of the outputs of the IDesignSpec product is the bus client RTL. Generating this IP is challenging, since what gets generated is based on the specification that the user provides.
For the reliability and credibility of our design, we need to ensure that it is fully compliant with the standard protocol. Testing such dynamic designs is a huge task, since the customer can create virtually any possible specification. So it is not a static IP that needs to be verified, nor is it a configurable IP. It really is a generated IP, so the task is an order of magnitude more complicated.
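To give a flavour of what must be verified for every generated configuration, consider the drastically simplified register block below. This is a hypothetical sketch with made-up signal names and a generic write/read decode, not actual IDesignSpec output; the real RTL depends entirely on the user's specification and the chosen bus protocol:

// Drastically simplified, hypothetical flavour of a bus client register block
module regblock_example (
  input  logic        clk,
  input  logic        rst_n,
  input  logic        sel,
  input  logic        wr_en,
  input  logic [7:0]  addr,
  input  logic [31:0] wdata,
  output logic [31:0] rdata
);

  logic [31:0] ctrl_reg;    // RW register at offset 0x00
  logic [31:0] status_reg;  // RO register at offset 0x04

  // Write decode
  always_ff @(posedge clk or negedge rst_n) begin
    if (!rst_n)
      ctrl_reg <= '0;
    else if (sel && wr_en && addr == 8'h00)
      ctrl_reg <= wdata;
  end

  // Read decode
  always_comb begin
    case (addr)
      8'h00:   rdata = ctrl_reg;
      8'h04:   rdata = status_reg;
      default: rdata = '0;
    endcase
  end

  assign status_reg = 32'hDEAD_BEEF;  // placeholder for hardware status inputs
endmodule

Every access type (RW, RO, W1C and so on), every address map and every protocol variant the specification allows must behave correctly, which is why verifying a generated IP is so much harder than verifying a fixed one.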
by Anshul Jain, Verification Engineer, Oski Technology
Most false positives (i.e., missed design bugs) encountered in the practice of model checking on industrial designs can be reduced to the problem of a failing cover. Debugging the root cause of such a failing cover can be a laborious process when the formal testbench has many constraints. This article describes a solution that minimizes the number of model checking runs needed to isolate a minimal set of constraints necessary for the failure, which helps improve formal verification productivity.
Formal verification in the form of model checking has become an integral part of the chip design sign-off process at many semiconductor companies. The process involves building formal testbenches that can exhaustively verify certain design units. This process complements the simulation testbenches that typically run at full-chip or subsystem level.
There are many reasons why any verification methodology (simulation or formal) can give rise to false negatives or false positives, undermining the confidence built by the verification process. False negatives are typically less worrisome because they are accompanied by a failure report or a counterexample that can be analyzed to determine whether the root cause is a design bug. False positives in formal verification are important to eliminate, and debugging the root cause of such false positives can often be very tedious.
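One way to picture the constraint isolation described above, as a sketch of the general divide-and-conquer idea rather than the exact algorithm this article develops, is a search that discards chunks of constraints while the cover keeps failing, so far fewer model checking runs are needed than when removing constraints one at a time. The run_cover() routine below is a hypothetical hook standing in for launching a formal run with only the given constraint subset enabled:

// Hypothetical sketch of constraint-set minimization for a failing cover
class cover_fail_isolator;

  // Environment-specific: launch a model checking run with only the listed
  // constraints enabled; return 1 if the cover still fails (is unreachable)
  virtual function bit run_cover(string constraints[$]);
    return 0;  // placeholder: a real flow would drive the formal tool here
  endfunction

  // Discard chunks of constraints while the cover keeps failing, halving
  // the chunk size each pass (a delta-debugging style search)
  function void minimize(ref string active[$]);
    int chunk = active.size() / 2;
    while (chunk >= 1) begin
      int i = 0;
      while (i < active.size()) begin
        string trial[$];
        foreach (active[k])
          if (k < i || k >= i + chunk)
            trial.push_back(active[k]);  // candidate set without this chunk
        if (run_cover(trial))
          active = trial;                // still fails: chunk was irrelevant
        else
          i += chunk;                    // chunk is needed; keep it, move on
      end
      chunk /= 2;
    end
    // 'active' now holds a much smaller set still causing the failure
  endfunction
endclass

With this style of search, the number of runs grows roughly with the size of the final constraint set and the logarithm of the original set, rather than with the total number of constraints.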
by Jacob Andersen, CTO, Kevin Steffensen, Consultant and UVM Specialist, and Peter Jensen, Managing Director, SyoSil ApS
All UVM engineers employ scoreboarding for checking DUT behavior against a reference model, but only a few spend their time wisely by employing an existing scoreboard architecture. The main reason is that existing frameworks have inadequately served user needs and have failed to improve user effectiveness in the debug situation. This article presents a better UVM scoreboard framework, focusing on scalability, architectural separation and connectivity to foreign environments. Our scoreboard architecture has been used successfully in UVM testbenches at various architectural levels and across models (RTL, SC). Based on our work, the SV/UVM user ecosystem will be able to improve how scoreboards are designed, configured and reused across projects, applications and models/architectural levels.
Addressing the increasing challenges met when performing functional verification, UVM [1] proposes a robust and productive approach for how to build and reuse verification components, environments and sequences/tests. When it comes to describing how to scoreboard and check the behavior of your design against one or more reference models, UVM offers less help. UVM does not prescribe a scoreboard architecture, but leaves the implementer to extend the empty uvm_scoreboard base class into a custom scoreboard that connects to analysis ports. Experience shows that custom scoreboard implementations across different application domains share many common deficiencies. Users struggle to implement checking mechanisms for designs under test exposed to random stimuli, while sacrificing aspects like efficiency, easy debug and a clean implementation.
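That hand-rolled starting point typically looks something like the sketch below (my_item and the port names are invented for illustration): an extension of the empty uvm_scoreboard with a couple of analysis imps, an expected-item queue and an in-order compare, all of which has to be written, debugged and maintained again for every project.

// Typical hand-rolled starting point (illustrative names only)
import uvm_pkg::*;
`include "uvm_macros.svh"

class my_item extends uvm_sequence_item;
  rand bit [31:0] addr, data;
  `uvm_object_utils_begin(my_item)
    `uvm_field_int(addr, UVM_ALL_ON)
    `uvm_field_int(data, UVM_ALL_ON)
  `uvm_object_utils_end
  function new(string name = "my_item");
    super.new(name);
  endfunction
endclass

`uvm_analysis_imp_decl(_exp)   // items from the reference model
`uvm_analysis_imp_decl(_act)   // items from the DUT monitor

class my_scoreboard extends uvm_scoreboard;
  `uvm_component_utils(my_scoreboard)

  uvm_analysis_imp_exp #(my_item, my_scoreboard) exp_export;
  uvm_analysis_imp_act #(my_item, my_scoreboard) act_export;

  my_item exp_q[$];   // expected items awaiting a match, in order

  function new(string name, uvm_component parent);
    super.new(name, parent);
    exp_export = new("exp_export", this);
    act_export = new("act_export", this);
  endfunction

  function void write_exp(my_item t);
    exp_q.push_back(t);
  endfunction

  function void write_act(my_item t);
    if (exp_q.size() == 0)
      `uvm_error("SB", "Actual item received with no expected item queued")
    else if (!t.compare(exp_q.pop_front()))
      `uvm_error("SB", "Mismatch between DUT and reference model")
  endfunction
endclass

Everything beyond this skeleton, such as out-of-order matching, multiple streams, end-of-test draining and useful mismatch reporting, is left entirely to the implementer, which is the gap the framework presented in this article aims to close.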