by Roger Sabbagh, Mentor Graphics
Most things in life are not evenly distributed. Consider, for example, the sun and the rain. The city of Portland, Oregon gets much more than its fair share of rainy days, 164 per year on average, while in Yuma, Arizona, 90% of all daylight hours are sunny.1 Or how about life as an adolescent? Do you remember how some of us were "blessed" with acne, buck teeth and short-sightedness, while others with seemingly perfect skin could spend the money they saved on skin cream, braces and eyeglasses to buy all the trendiest designer clothes? No, things are not always meted out in equal measures.
So it is with the effort required to achieve code coverage closure. A state-of-the-art, constrained-random simulation environment will achieve a fairly high level of coverage as a by-product of verifying the functionality of the design. It is typically expected to achieve >90% coverage quite rapidly. However, getting closure on the last few percent of the coverage bins is typically where the effort starts to balloon. The traditional process that is followed to achieve coverage closure is depicted in Figure 1.
Figure 1. Traditional Code Coverage Closure Process
While it looks quite straightforward, this flow actually presents a number of serious challenges. First, as part of this process, design and verification engineers must spend a lot of time reviewing the coverage holes to determine whether or not they are coverable, and then writing the additional tests or waivers. For large designs with millions of coverage bins, this can take many man-weeks of effort. Furthermore, it is a tedious and error-prone task that runs the risk of mistakenly ignoring reachable coverage goals and missing bugs. Finally, it is not easily repeated as the design undergoes change, and manually written waivers must be maintained or they become stale.
That's simply not fair! What can be done to turn the tables here and get the upper hand in this situation? In this article, we will explore how formal methods are being used to do just that. Using formal for code coverage closure is one of the top 5 formal apps being used in the industry today. We will explain how it helps by bringing schedule predictability and improved design quality to the process, and potentially makes designs more verifiable. We will use the Ethernet MAC design from OpenCores2 as a case study.
QUESTA COVERCHECK FOR CODE COVERAGE CLOSURE
Questa CoverCheck is the formal app for code coverage closure. It targets all the different flavors of code coverage bins with formal analysis to determine if the coverage points are reachable or not. The results can be used to generate waivers automatically for unreachable coverage bins and to see how the reachable ones can be hit. The code coverage closure flow using CoverCheck is depicted in Figure 2.
Figure 2. CoverCheck Code Coverage Closure Process
AUTOMATIC GENERATION OF WAIVERS
In any given design, there are many code coverage bins which are not expected to be covered. This occurs for a variety of reasons, including the possibility that the coding style requires some lines of code for synthesis that will never be exercised in simulation.
For the purposes of this article, we will examine only the statement coverage, but the concepts presented here could be extended to branch, condition, FSM and toggle coverage as well. In the Ethernet MAC design, the default statement coverage achieved in simulation is 96.6%, with 62 statements missed.
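The flagged listing from the Ethernet MAC is not reproduced here, but the pattern it exhibits looks like the following sketch (illustrative only, with hypothetical signal names; this is not the design's actual source):

```verilog
// Illustrative sketch only -- hypothetical signal names, not the actual
// Ethernet MAC code. A 2-bit selector covers all four encodings explicitly,
// so the default branch can never execute in simulation, yet many coding
// styles require it for synthesis (e.g., to avoid inferred latches).
always @(*) begin
  case (sel)            // sel is 2 bits wide: all four values are listed
    2'b00: y = a;
    2'b01: y = b;
    2'b10: y = c;
    2'b11: y = d;
    default: y = 1'bx;  // unreachable in simulation; flagged by formal analysis
  endcase
end
```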
The code above contains a case statement default branch which can never be activated in simulation.
This is an example of the type of code that is flagged as unreachable by CoverCheck and automatically pruned from the coverage model. These types of targets are essentially "noise" and removing them improves the fidelity of the code coverage metrics. After updating the Unified Coverage Database (UCDB) with the CoverCheck results, the statement coverage has now risen to 98.8% (as shown below).
Of course, care must be taken to review the unreachable items to be certain they are expected. Some coverage items may be unreachable due to a design bug that overconstrains the operation of the logic, such as the case where the code is related to a mode of operation of a reused block that is not relevant to the current design.
The Ethernet MAC testbench does not test the design in half-duplex mode. Since this mode is not intended to be verified, the code related to that mode can be ignored for the purposes of code coverage measurements. But, rather than manually reviewing the code coverage holes to determine which ones are related to the half-duplex mode of operation, CoverCheck can automatically derive that information if a simple constraint is specified to indicate that the design should only be tested in full-duplex mode.
The following TCL directive sets the mode register bit that controls operation of the device to full-duplex mode:
netlist constant ethreg1.MODER_1.DataOut\[2\] 1'b1
Running CoverCheck with this constraint and updating the UCDB again shows that the statement coverage is actually sitting at 99.3% (as shown above).
GUIDANCE TO WRITE ADDITIONAL TESTS
Now we've reduced the number of coverage holes to be investigated by a factor of 5, which isn't bad. But what about the remaining 20%, or in this case 12 missed statements? The CoverCheck results show that 11 out of these 12 statements are in fact reachable.
Formal analysis can show how to exercise these tough to reach places in the design. For example, line 314 of the txethmac1 block related to the transmit packet retry function is not covered in simulation (shown above).
CoverCheck provides a waveform (below) that shows the sequence of input stimulus that will get the design into this state. This can be directly converted into a simulation testbench or it can be used to guide the manual creation of new tests to hit this coverage point.
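A waveform like this can be translated fairly mechanically into a directed test. A minimal sketch of what that might look like, assuming the counterexample showed a collision forcing the retry (signal names and timing here are hypothetical, not the actual testbench):

```verilog
// Hypothetical directed test derived from a formal counterexample waveform.
// Signal and state names are illustrative only.
initial begin
  // Replay the stimulus sequence reported by the tool, cycle by cycle.
  @(posedge MTxClk) TxStartFrm = 1'b1;   // begin transmitting a frame
  @(posedge MTxClk) MColl      = 1'b1;   // inject a collision mid-frame
  @(posedge MTxClk) MColl      = 1'b0;   // collision ends
  // The design should now enter its retry state, hitting the missed statement.
  wait (RetryCnt != 0);
  $display("transmit retry path exercised");
end
```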
DESIGN FOR VERIFIABILITY
So, at this point, we are down to 1 out of the original list of 62 missed statements that would have required manual review in the traditional flow. We have addressed the vast majority of the issues (98.4% to be precise). The last inconclusive coverage point would have to be reviewed by the designers to determine if it can be ignored, if it must be covered or if it is related to a bug in the design that makes it difficult or impossible to reach. The line of code in question is related to the excessive deferral of packets in the eth_txethmac block.
Above is an example of a line of code that can't be reached through simulation regressions and that is inconclusive under formal analysis. This indicates a very complex, potentially overly complex, piece of logic. When this type of situation occurs, the question could be asked: could this part of the logic be redesigned in such a way as to make it more easily coverable and verifiable?3
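As a generic illustration of that idea (a sketch unrelated to the actual deferral logic, with invented signal names): splitting one deeply nested condition into named intermediate signals gives both simulation coverage and formal analysis smaller, independently observable targets.

```verilog
// Before: one opaque condition, hard to cover and hard to reason about.
// assign ExcessiveDefer = StartDefer & (DeferCnt == MaxDefer) & ~Collision & Enable;

// After: named intermediate terms, each a small, separately coverable target.
wire DeferLimitHit = (DeferCnt == MaxDefer);   // counter reached its limit
wire DeferActive   = StartDefer & ~Collision;  // deferral still in progress
assign ExcessiveDefer = DeferActive & DeferLimitHit & Enable;
```

The behavior is unchanged, but each intermediate wire now has its own toggle and expression coverage bins, making a stuck or unreachable term far easier to localize.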
Even a few percent of missed targets on a large design will take a disproportionate amount of time and effort to review and close in the traditional way. Using an automatic formal application like CoverCheck reduces the pain by at least an order of magnitude. Not only does it speed up the process, but it ensures that excluded coverage bins have been formally proven to be safe to ignore, delivering higher design quality. Finally, it provides feedback that is very useful in guiding designers to give more consideration to design for verifiability.
2. OpenCores Ethernet MAC project, http://opencores.org/project,ethmac
3. C. Richard Ho, Michael Theobald, Martin M. Deneroff, Ron O. Dror, Joseph Gagliardo, and David E. Shaw, "Early Formal Verification of Conditional Coverage Points to Identify Intrinsically Hard-to-Verify Logic," Proceedings of the 45th Annual Design Automation Conference (DAC '08), Anaheim, California, June 9–13, 2008.