Minimizing Constraints to Debug Vacuous Proofs

Verification Horizons, November 2015 - Tom Fitzpatrick, Editor

by Anshul Jain, Verification Engineer, Oski Technology

Most false positives (i.e., missing design bugs) encountered during model checking of industrial designs can be reduced to the problem of a failing cover. Debugging the root cause of such a failing cover can be a laborious process when the formal testbench has many constraints. This article describes a solution that minimizes the number of model checking runs needed to isolate a minimal set of constraints necessary for the failure, improving formal verification productivity.

1. INTRODUCTION

Formal verification in the form of model checking has become an integral part of the chip design sign-off process at many semiconductor companies. The process involves building formal testbenches that can exhaustively verify certain design units. This process complements the simulation testbenches that typically run at full-chip or subsystem level.

There are many reasons why any verification methodology (simulation or formal) can give rise to false negatives or false positives, undermining the confidence built by the verification process. False negatives are typically less worrisome because they are accompanied by a failure report or a counterexample that can be analyzed to determine whether the root cause is a design bug. False positives in formal verification are important to eliminate, and debugging their root cause can be very tedious.

This topic assumes even greater importance because formal verification is a newer methodology and its false positives arise for different reasons than in simulation. One of the main causes is the presence of over-constraints, often in the form of constraints that unexpectedly conflict with one another. The focus of this article is to make such false positives easier to debug.

The first place to start is to identify the presence of false positives. Some symptoms of false positives are:

  • A user-written cover property fails unexpectedly
  • A tool-generated cover on a sub-formula of a user-written assert fails (typically reported as a vacuous proof by a formal tool)
  • The proof time for an assert is much faster than expected
  • Dead-ends are reported by either the simulation or the formal tool

For the remainder of this article, we will assume that the formal user is adept at identifying such symptoms and reducing them to a failing cover on the design, although this process in itself can be non-trivial.

Debugging the root cause of a failing cover can be a hard problem, especially in formal testbenches that consist of hundreds or thousands of constraints. In every situation we have run into, such a cover also fails in the presence of a small subset of these constraints, typically between 2 and 5. Once such a subset has been identified, we have usually been able to relate it to an unintentional mistake made by the formal engineer. In this article, we describe a method to minimize the number of model checking runs needed to identify a minimal failing subset (MFS) of constraints. We have run into the problem of debugging vacuous proofs often, and most formal tools do not provide this solution automatically. Therefore, it is important that formal users know how to solve the problem manually with minimum effort.

To solve the problem, we find the MFS by running the model checking tool multiple times, after modifying a set of asserts, covers or constraints. We will call each such run a step, and our goal is to minimize the number of steps. Assuming that the initial failing cover does not incur a large run-time, most such steps are likely to finish quickly as well. Nevertheless, since each step requires some human time, we would like to avoid hundreds or thousands of runs when the number of constraints n is large; a solution that takes O(n) steps is therefore unacceptable for our problem.
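
For contrast, the straightforward baseline takes exactly n steps: starting from the full FAILing set, try dropping each constraint once, and keep the drop whenever the cover still FAILs. The sketch below is illustrative only; the `fails` oracle and constraint names are hypothetical stand-ins, with each `fails` call representing one model checking run.

```python
def linear_mfs(constraints, fails):
    """O(n)-step baseline: one model checking run (a 'fails' call) per constraint.

    Precondition: fails(constraints) is True.  Drop each constraint in turn;
    if the cover still FAILs without it, discard it for good.  The survivors
    form a minimal FAILing subset, found at the cost of exactly n runs.
    """
    kept = list(constraints)
    for c in list(kept):
        trial = [k for k in kept if k != c]
        if fails(trial):  # cover still FAILs without c, so c is not needed
            kept = trial
    return kept

# Toy oracle: the cover FAILs exactly when two conflicting constraints coexist.
fails = lambda subset: "C3" in subset and "C7" in subset
print(linear_mfs([f"C{i}" for i in range(1, 9)], fails))  # ['C3', 'C7']
```

With n = 1000 constraints, this costs a thousand runs regardless of how small the conflicting subset is, which is exactly the O(n) behavior we want to avoid.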

2. DATA PATH CONTROL DESIGN

To illustrate the problem, we use a simplified version of a real verification example. Consider a design (Fig. 1, shown below) consisting of two source ports A and B, and a single destination port. Note that the interfaces for sources A and B differ from each other.

Fig. 1. Block Diagram of Data Path Control Design

Fig. 2. State Machine of Data Path Design

2.1 Design Functionality

The design arbitrates between the two sources under a strict priority scheme, i.e., source A is serviced at a higher priority than source B. To back-pressure the data coming from the sources, the design uses the ready_a output signal on the source A interface and the wait_b output signal on the source B interface. A state machine in the design (Fig. 2) governs which source is being serviced; the outputs ready_a and wait_b both remain asserted as long as the design arbiter is in the SERVICE_A state.

2.2 Formal Verification Testbench

The formal verification testbench consists of a set of constraints and a set of checkers as shown in Fig. 3. The testbench contains the following checkers on the outputs of the design:

  • A1: Data correctness checker that data sent for source A must match the expected data ref_data_a, modeled in a scoreboard:

    source_a_sel && valid_out |-> (data_out == ref_data_a)

  • A2: Data correctness checker that data sent for source B must match the expected data ref_data_b:

    source_b_sel && valid_out |-> (data_out == ref_data_b)

In addition, the testbench contains the following constraints on the inputs of the design:

  • C1: Once asserted, valid_a should remain asserted until ready_a is asserted

    valid_a && !ready_a |-> ##1 valid_a

  • C2: If valid data is sent from source A, it should hold its value until ready_a is asserted

    valid_a && !ready_a |-> ##1 (data_a == $past(data_a))

  • C3: If wait_b is asserted, then valid_b should be deasserted in the next clock cycle

    wait_b |-> ##1 !valid_b

  • C4: If valid_b is deasserted, then start_b should also be deasserted

    !valid_b |-> !start_b

  • C5: If valid_b is deasserted, then end_b should also be deasserted

    !valid_b |-> !end_b

  • C6: start_b and end_b should be mutually exclusive

    !(start_b && end_b)

  • C7: If valid_b and wait_b are asserted, then valid_b should hold its value in the next clock cycle

    valid_b && wait_b |-> ##1 (valid_b == $past(valid_b))

  • C8: If valid_b and wait_b are asserted, then data_b should hold its value in the next clock cycle

    valid_b && wait_b |-> ##1 (data_b == $past(data_b))

Fig. 3. Formal verification setup of Data Path Control design.

2.3 Vacuous Proof

The formal engineer ended up with the above-mentioned constraints after a typical process of iterating through false failures and adding constraints as needed, as she developed a deeper understanding of the design and its interfaces. At this point, each of the two checkers A1 and A2 passed in the model checking run. However, the alert engineer noticed that the tool was warning that checker A2 was passing vacuously. This was a surprising result, and it necessitated debugging the root cause of the vacuous proof. Clearly, some input must have been over-constrained accidentally. Even after a tedious manual review of the hundreds of constraints (of which 8 are mentioned above), it was not clear where the mistake was.

It is not hard to get the tool to report why checker A2 was passing vacuously. In fact, it was easy enough to reduce that problem to the surprising result that the following cover was failing (i.e., the design was never sending any data out for source B!):

witness_src_b_data_w: cover property (
  @(posedge clk) disable iff (rst)
  source_b_sel && valid_out
);

2.4 Minimal Failing Subset of Constraints

Debugging the vacuous proof in the previous section forced the engineer to identify a minimal subset of constraints sufficient to cause the cover to fail. Once she solved that problem, she realized that of the hundreds of constraints, C3 and C7 alone were sufficient to produce the unexpected failing cover. At this point, a manual review of these two constraints was feasible, and she realized that C3 and C7 conflict with each other: in state SERVICE_A the output wait_b is asserted, and if valid_b is asserted there, C3 forces valid_b to be deasserted in the next clock cycle, whereas C7 forces valid_b to remain asserted. C3 was the incorrect constraint. It had crept in because, at some point in the project, she had realized that her understanding of the source B interface was incorrect and had modeled the corrected behavior with C7 and C8, but she had forgotten to remove C3, which was based on her earlier, incorrect understanding!
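
The C3/C7 conflict can be confirmed mechanically. As an illustration only (the trace encoding below is an assumption, not part of the original testbench), a brute-force scan over all finite valid_b traces, with wait_b held asserted as in the SERVICE_A state, shows that the only trace satisfying both constraints keeps valid_b low forever, so the cover on source B data can never be reached:

```python
from itertools import product

def legal(valid_b, wait_b):
    """True if a finite trace satisfies constraints C3 and C7 from the article."""
    for t in range(len(valid_b) - 1):
        # C3: wait_b |-> ##1 !valid_b
        if wait_b[t] and valid_b[t + 1]:
            return False
        # C7: valid_b && wait_b |-> ##1 (valid_b == $past(valid_b))
        if valid_b[t] and wait_b[t] and valid_b[t + 1] != valid_b[t]:
            return False
    return True

# While the arbiter sits in SERVICE_A, wait_b stays asserted.
T = 6
wait_b = (1,) * T
survivors = [vb for vb in product((0, 1), repeat=T) if legal(vb, wait_b)]
print(survivors)  # [(0, 0, 0, 0, 0, 0)] -- valid_b can never rise
```

Any trace that ever raises valid_b violates C3 (if valid_b drops next cycle, C7 instead); only the all-zero trace survives, which is exactly the over-constraint behind the vacuous proof.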

The need to debug failing covers arises often enough during formal verification, and the debug is tedious enough, that we were motivated to solve this problem more efficiently.

3. PROBLEM AND SOLUTIONS

We are given a model checking tool that can verify whether a cover passes or fails on a given design under any set S of n constraints (the cover is said to fail if the model checking tool proves the cover is unreachable, and it is said to pass if the tool finds a witness to reach the cover from reset). To avoid verbosity, if the cover passes, we will say S PASSes; conversely, if the cover fails, we will say S FAILs. The goal is to find any minimal FAILing subset (MFS) of S using as few steps as possible.

If no MFS is larger than m (note that we do not know m beforehand), we call this a problem of size (m, n). (Note that 1 ≤ m ≤ n.)

The following solution returns a minimal T ⊆ S such that T ∪ S_aux FAILs. To find an MFS of S, we simply run the algorithm with S_aux = ∅.
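
The algorithm itself appears as a figure in the original article. As a stand-in with the same interface, the sketch below is a QuickXplain-style divide-and-conquer routine; it is illustrative of the approach, not necessarily the authors' exact algorithm, and the `fails` oracle (one model checking run per call) and constraint names are hypothetical.

```python
def minimal_failing(S, aux, fails):
    """Return a minimal T ⊆ S such that T ∪ aux FAILs.

    Precondition: S ∪ aux FAILs.  Each 'fails' call stands for one model
    checking run; the divide-and-conquer recursion keeps the run count
    near O(m log n), where m is the size of the subset found.
    """
    if fails(aux):        # aux alone already FAILs: nothing from S is needed
        return []
    if len(S) == 1:       # aux alone PASSes, so this lone constraint is needed
        return list(S)
    mid = len(S) // 2
    S1, S2 = S[:mid], S[mid:]
    T2 = minimal_failing(S2, aux + S1, fails)  # minimal part of S2, given all of S1
    T1 = minimal_failing(S1, aux + T2, fails)  # minimal part of S1, given only T2
    return T1 + T2

# To find an MFS of S, run with aux = [] (i.e., S_aux = ∅).
fails = lambda subset: "C3" in subset and "C7" in subset  # toy conflict oracle
constraints = [f"C{i}" for i in range(1, 9)]
print(minimal_failing(constraints, [], fails))  # ['C3', 'C7']
```

Each recursive call discards half of one side whenever the other half suffices to FAIL, which is where the O(m log n) step count comes from; Junker's QUICKXPLAIN follows the same divide-and-conquer pattern.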

Fig. 4 and 5 illustrate some example cases.

Fig. 4. A sample case where m = 1 and n = 8.

Fig. 5. A sample case where m = 2 and n = 8.

A similar algorithm for this problem was published earlier by Junker [1]. Both Junker's algorithm and ours have the same worst-case complexity O(m log n). We implemented both algorithms, and the results appear in Table 1. When there is only one MFS, i.e., only one value of m, both algorithms take similar numbers of steps. However, when there are multiple MFSes, especially when they are disparately sized, our algorithm finishes in far fewer steps. This is because Junker's algorithm is biased towards finding an MFS that contains lower-numbered constraints, even when such an MFS may be much larger than one containing only higher-numbered constraints, which puts it at a disadvantage.

Table 1. Average number of steps with n = 1000, over 100,000 trials (each row lists the sizes of the MFSes present)

MFS sizes      Ours      Junker
{1}            15.01     15.02
{2}            27.55     27.55
{3}            39.11     39.11
{4}            49.93     49.94
{10}          106.56    106.58
{1, 1}         14.52     14.52
{1, 2}         16.60     16.63
{1, 10}        15.06     22.96
{2, 2}         26.49     26.81
{2, 3}         30.02     31.42
{2, 10}        28.83     40.18
{1, 2, 10}     16.64     20.74
{1, 10, 100}   15.06     23.85

4. CONCLUSION

This article discussed a solution that requires O(m log n) model checking runs to find a minimal FAILing subset of constraints; since typically m << n, this is a significant saving. The solution has increased our productivity by reducing the iterations required to debug vacuous proofs.

REFERENCES

[1] Junker, Ulrich. "QUICKXPLAIN: Preferred Explanations and Relaxations for Over-Constrained Problems." Proceedings of AAAI-04, 2004.
