- Bill Au, Mentor, A Siemens Business
When we spend hours, days, or even weeks putting our hearts and minds into creating something, we tend to emphasize its strengths and minimize its weaknesses. This is why verification engineers have a blind spot for their own verification platforms. This blind spot, or bias, often leads to overlooking the areas where bugs may lurk, only for them to emerge at the worst possible time, when errors are most costly and take the longest to fix. Compounding the problem, these “late bugs” become progressively more expensive to detect, fix, and revalidate as the design under test (DUT) grows in complexity and the project nears completion. Every verification engineer learns this lesson one way or another, and eventually comes to realize that they need an easy way to overcome that bias and uncover those hidden or unforeseen bugs.
An increasingly popular approach has been to employ a comprehensive, constrained-random, coverage-driven testbench development flow, such as the Universal Verification Methodology (UVM). Indeed, when random stimulus generation is well executed, it often finds bugs “that no one thought of.” However, this method’s success is limited to smaller DUTs, as it is essentially impossible to perform an exhaustive analysis of large, complex DUTs within a realistic project schedule. You simply can’t write enough constrained-random tests to cover everything, leaving numerous state spaces and control-logic combinations unverified and opening up the possibility of corner-case bugs escaping and Trojan paths going undetected.
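To make the coverage shortfall concrete, here is a deliberately tiny, hypothetical sketch in plain Python (not UVM or real RTL; the “DUT,” its trigger pattern, and all names are invented for illustration). The toy design misbehaves only when the stimulus contains one specific 16-cycle input pattern, so purely random tests almost never expose it:

```python
import random

# Toy, hypothetical "DUT": its bug fires only when the stimulus contains
# one specific 16-cycle input pattern -- a deep corner case.
TRIGGER = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1, 1, 0, 0, 1, 0, 1]

def dut_has_bug(stimulus):
    """True if the stimulus drives the toy DUT into its buggy state."""
    n = len(TRIGGER)
    return any(stimulus[i:i + n] == TRIGGER
               for i in range(len(stimulus) - n + 1))

def random_campaign(num_tests, test_len, seed=1):
    """A constrained-random-style campaign: count tests that expose the bug."""
    rng = random.Random(seed)
    return sum(
        dut_has_bug([rng.randint(0, 1) for _ in range(test_len)])
        for _ in range(num_tests)
    )

# Each random test of length 24 hits the pattern with probability of only
# about 9 / 2**16, so even thousands of tests will usually find nothing.
```

A directed test containing `TRIGGER` exposes the bug immediately, while the random campaign almost never does; real DUT state spaces are astronomically larger than this 16-cycle toy, which is exactly why corner cases escape.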
In contrast, formal verification tools build a mathematical model of a given DUT, enabling formal analysis to exhaustively verify all inputs and outputs against the expected behaviors of all the states in the DUT, for all time. Until recently, such analysis could be done only on very small DUTs, but advances in formal algorithms and their corresponding “engine” implementations make it possible to verify millions of gates of logic, including, in some cases, whole systems. The result is that formal-based RTL analysis effectively provides an exhaustive, unbiased “audit” of any DUT, finding corner-case bugs or dead code that other, non-exhaustive verification methods can miss.
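The essence of this exhaustiveness can be sketched with a minimal reachability traversal in Python (an assumption-laden toy, not any tool’s actual algorithm: the FSM, its state names, and its transition table are invented here). Where simulation visits only the states its stimulus happens to reach, an exhaustive traversal enumerates every reachable state and thereby proves which states can never occur:

```python
from collections import deque

# Hypothetical 4-state FSM given as a transition table; formal tools derive
# such a model from the RTL itself. "ERROR" is defined but never targeted.
TABLE = {
    ("IDLE", 0): "IDLE",  ("IDLE", 1): "REQ",
    ("REQ", 0): "IDLE",   ("REQ", 1): "GRANT",
    ("GRANT", 0): "IDLE", ("GRANT", 1): "GRANT",
    ("ERROR", 0): "IDLE", ("ERROR", 1): "ERROR",
}
ALL_STATES = {"IDLE", "REQ", "GRANT", "ERROR"}

def reachable_states(initial="IDLE", inputs=(0, 1)):
    """Exhaustive breadth-first traversal of the FSM's state space."""
    seen = {initial}
    frontier = deque([initial])
    while frontier:
        state = frontier.popleft()
        for inp in inputs:
            nxt = TABLE[(state, inp)]
            if nxt not in seen:
                seen.add(nxt)
                frontier.append(nxt)
    return seen

def dead_states():
    """States the design can provably never enter, for any input sequence."""
    return ALL_STATES - reachable_states()
```

Because every state/input pair is explored, the conclusion that `ERROR` is unreachable holds for all possible stimulus, for all time; no amount of random simulation can make that claim.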
A Multi-Tool For Formal Analysis:
The problem with formal is that it can be hard to do. That’s why we developed the formal verification apps for the most common verification tasks, including clock domain crossing (CDC), reset domain crossing (RDC), X-state propagation, finite state machine (FSM) traversal, and dead code analysis. Because the formal apps automate these analysis tasks, you don’t have to know how to write assertions or understand how formal analysis works under the hood. Each of these automated tasks is integrated into a holistic formal analysis workflow that lets you use what you need when you need it, just like a Swiss Army Knife®.
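To give a flavor of one of these tasks, X-state propagation can be modeled with a simple three-valued (0/1/X) logic, sketched below in Python. This is only an illustrative sketch of the underlying idea, assuming nothing about any tool’s implementation; `X` stands for an unknown value, such as an uninitialized register:

```python
# Three-valued (0 / 1 / X) logic sketch of X-propagation. 'X' models an
# unknown value, e.g. from an uninitialized or unreset flip-flop.
X = "X"

def and3(a, b):
    """3-valued AND: a controlling 0 masks an unknown input."""
    if a == 0 or b == 0:
        return 0
    if a == X or b == X:
        return X
    return 1

def or3(a, b):
    """3-valued OR: a controlling 1 masks an unknown input."""
    if a == 1 or b == 1:
        return 1
    if a == X or b == X:
        return X
    return 0

def mux3(sel, d0, d1):
    """3-valued mux: an unknown select is harmless only if both data agree."""
    if sel == 0:
        return d0
    if sel == 1:
        return d1
    return d0 if d0 == d1 else X
```

The interesting cases are the pessimistic ones: `and3(1, X)` and `mux3(X, 0, 1)` both yield `X`, meaning the unknown propagates downstream, while `and3(0, X)` and `mux3(X, 1, 1)` show where a design safely masks it. Tracing which Xs reach state or outputs is, in miniature, what an X-propagation check does.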
Analysis results are presented as familiar, easy-to-understand waveforms, annotated state diagrams, and straightforward text log files that can be read by both humans and machines. The best part is that this automation can be run before a testbench exists: only the DUT RTL is needed, truly enabling a “shift left” in the verification schedule.
To give an idea of the utility of this approach, let’s briefly look at some common verification challenges and how specific Questa® Formal Apps handle them.
View & Download:
Read the entire “Formal Apps Take the Bias Out of Functional Verification” technical paper.