Verification Horizons Articles:
by Tom Fitzpatrick - Mentor, A Siemens Business
Welcome to the DVCon US 2019 edition of Verification Horizons. We here in New England are still basking in the glow of yet another Patriots Super Bowl victory to go along with the Red Sox having won the World Series in October. That makes six championships for the Patriots and four for the Red Sox since 2001, not to mention one each for the Celtics and Bruins. I know that many of you either may not care or find it really annoying that we keep winning, but since this is my one opportunity to gloat to 70,000 readers, I can’t help myself.
For a professional sports team like the Patriots, who by the way have played in nine of the last 18 Super Bowls, to have so much more success than other teams playing under the same set of rules got me thinking about standards in our industry. Just as the NFL defines rules for everything from on-field play to salary-cap and other personnel issues, an industry standard specifies the rules that vendors and users must follow to allow fair competition. The excitement comes from one team’s (or one company’s) ability to innovate within those rules to surpass the competition and provide value to their fans (or users).
by Ashish Darbari - Axiomise Limited
About four years ago I gave a couple of talks on the myths surrounding formal. Although formal has seen more adoption since then, we have a long way to go before it is recognized as a mainstream technology used throughout design and verification. I still see some of these myths clouding the judgement of end users and their managers. Last year at DVCon US, I was invited to join a very entertaining panel on whether one should go deep or go broad with formal. We discussed whether one needs a PhD to do formal! Well, one thing we could all agree on was that you don't. What you do need is the combination of a disciplined process, a plan, and the execution of good rules.
Over the last decade, I have worked on projects with engineers at the coalface of design verification and trained nearly a hundred engineers at some of the leading user and EDA companies in our industry in how to apply scalable formal verification techniques in practice. That experience has reinforced my belief that, for those of us who have moved beyond the myths and are busy applying formal on projects, there are a few ground rules that help you get the most out of formal.
by Bill Au - Mentor, A Siemens Business
When we spend hours, days, or even weeks putting our hearts and minds into creating something, we have a tendency to emphasize its strengths and minimize its weaknesses. This is why verification engineers have a blind spot for their own verification platforms. This blind spot, or bias, often leads to overlooking those areas where bugs may lurk, only to emerge at the worst possible time when errors are most costly and take longer to fix. Compounding the problem is that these “late bugs” become progressively more expensive to detect, fix, and revalidate as the design under test (DUT) grows in complexity and the project nears completion.
Every verification engineer has learned this lesson one way or another. Every verification engineer eventually comes to realize that they need an easy way to overcome that bias and uncover those hidden or unforeseen bugs. An increasingly popular approach has been to employ a comprehensive, constrained-random, coverage-driven testbench development flow, such as the Universal Verification Methodology (UVM). Indeed, when random stimulus generation is well executed it often finds bugs “that no one thought of.” However, this method's success is limited to smaller DUTs as it is essentially impossible to do an exhaustive analysis of large, complex DUTs within a realistic project schedule. You simply can't write enough constrained-random tests to cover everything, leaving numerous state spaces and control logic combinations unverified and opening up the possibility of corner-case bugs escaping and Trojan paths going undetected.
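As a hypothetical illustration of the constrained-random style described above, a UVM sequence item might declare random fields whose legal values are bounded by constraints; the class and field names here are invented for the sketch:

```systemverilog
// Hypothetical UVM sequence item: fields are randomized under
// constraints, so each run exercises a different legal stimulus.
class bus_item extends uvm_sequence_item;
  rand bit [31:0] addr;
  rand bit [7:0]  len;
  rand bit        write;

  // Keep transactions inside a legal address window and burst length.
  constraint c_addr { addr inside {[32'h0000_1000 : 32'h0000_FFFF]}; }
  constraint c_len  { len  inside {[1:16]}; }

  `uvm_object_utils(bus_item)

  function new(string name = "bus_item");
    super.new(name);
  endfunction
endclass

// In a sequence body, item.randomize() picks values satisfying all
// constraints; functional coverage then measures what was actually hit.
```

Each call to `randomize()` explores a different corner of the legal space, which is exactly why well-written constraints find bugs "that no one thought of" — and why, on large DUTs, the space is too big to cover exhaustively this way.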
by Arushi Jain - Mentor, A Siemens Business
An assertion is a conditional statement that flags an error when a design behaves incorrectly, thereby catching bugs. Assertions are used to validate a hardware design at different stages of its life-cycle, such as formal verification, dynamic validation, runtime monitoring, and emulation. Assertion-based verification provides significant benefits to the design and verification process: it aids the detection of functional bugs, allows the user to find bugs closer to their actual cause, and ensures that bugs are found early in the design process. Assertions bring immediate benefits to the whole design and verification cycle; thus any challenges engineers face in coding and testing them are worth resolving.
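To make this concrete, a minimal SystemVerilog Assertions (SVA) sketch of the kind of conditional check described above might look like the following; the signal names and timing window are invented for illustration:

```systemverilog
// Hypothetical handshake check: every request must be granted
// within 1 to 3 clock cycles, otherwise the assertion fires.
property p_req_gnt;
  @(posedge clk) disable iff (!rst_n)
    req |-> ##[1:3] gnt;
endproperty

assert property (p_req_gnt)
  else $error("req was not granted within 3 cycles");
```

The same property can be exercised in simulation or emulation, or handed to a formal tool to be proven for all reachable states, which is what lets assertions serve the whole life-cycle listed above.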
A callback in UVM is a mechanism for changing the behavior of a verification component (such as a driver or monitor) without actually changing the component's code. The uvm_callback class provides a base class for implementing callbacks. However, because extending a class from uvm_callback is not a recommended coding practice (it can lead to ordering issues), callbacks in Questa® Verification IP (QVIP) are implemented by extending a base class and populating it with the methods required for the callback implementation, such as replacing a sequence item with another sequence item containing a new set of attributes.
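For context, the standard UVM callback pattern (the uvm_callback-based approach that, as noted above, QVIP deliberately moves away from) looks roughly like this; all class, task, and method names here are invented for the sketch:

```systemverilog
// Hypothetical callback hook class: users override pre_send to alter
// a transaction without touching the driver's source code.
class my_driver_cb extends uvm_callback;
  `uvm_object_utils(my_driver_cb)
  function new(string name = "my_driver_cb");
    super.new(name);
  endfunction
  // Empty default hook; a test extends this class and overrides it.
  virtual function void pre_send(uvm_sequence_item item);
  endfunction
endclass

class my_driver extends uvm_driver #(uvm_sequence_item);
  `uvm_component_utils(my_driver)
  `uvm_register_cb(my_driver, my_driver_cb)
  function new(string name, uvm_component parent);
    super.new(name, parent);
  endfunction
  task run_phase(uvm_phase phase);
    forever begin
      seq_item_port.get_next_item(req);
      // Invoke every registered callback before driving the item.
      `uvm_do_callbacks(my_driver, my_driver_cb, pre_send(req))
      // ... drive req onto the interface here ...
      seq_item_port.item_done();
    end
  endtask
endclass
```

Because callbacks registered this way execute in registration order, tests that rely on several interacting callbacks can hit the ordering issues the article mentions, which motivates QVIP's alternative base-class approach.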
by Progyna Khondkar - Mentor, A Siemens Business
UPF is the prevailing abstraction for low power methodologies today. It provides the concepts and artifacts of power management architecture, power aware verification, and low power implementation for any design. Although UPF is well defined in the IEEE 1801 LRM, it is often difficult to comprehend many primitive and inherent features of individual UPF commands and options, or the relations between different varieties of them. The semantic contexts of most UPF commands are orthogonal. However, the fundamental constituent parts of UPF that build up the power management architecture are inherently linked because of their transitive nature, specifically the UPF commands that establish links with DUT objects such as instances, ports, and nets.
In this article, we present a simple approach to finding the inherent links between UPF commands and options through their transitive nature. We also explain how these inherent features help establish exact relationships between UPF and DUT objects, both to develop UPF for power management and implementation and to conduct power aware verification.
by Matthew Ballance - Mentor, A Siemens Business
As designs, especially System on Chip (SoC) designs, have become more complex, the need for good automated stimulus generation across the verification spectrum has increased. Today, the need for verification reuse and automated stimulus is clearly seen from block to subsystem to SoC-level verification. The Accellera Portable Test and Stimulus Standard (PSS) addresses this need with a language for capturing test scenarios in such a way that they are reusable across verification levels (block, subsystem, and SoC) and across execution environments (simulation, emulation, and prototype), as illustrated in Figure 1.
A language with such an ambitious scope must, of course, support a wide variety of applications and use models. This is important and necessary, but can also make it complicated to classify how PSS is being applied in a specific case. Recently, I've found it useful to categorize Portable Stimulus applications according to what type of reuse is most central to the application. I call these types of reuse the Axes of Reuse.