INTRODUCTION
Formal assertion-based verification uses formal technologies to analyze whether a design satisfies a given set of properties. Formal verification doesn’t need simulation testbenches and can start much earlier in the verification process. There are three possible results for an assertion after a formal run: “proven,” “fired,” and “inconclusive.” Proven means that formal has done exhaustive mathematical analysis and proved that the design satisfies the assertion and that no legal stimulus sequence can violate it. Fired means that formal has found an error trace showing a violation of the assertion. Inconclusive means that formal has found only a bounded proof result, i.e., the assertion holds up to the analyzed cycle depth but may be true or false for deeper cycles. An inconclusive result cannot guarantee that the design function associated with the assertion is correct.
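As a simple illustration, consider the following SVA assertion (the signal names are hypothetical), which states that every request is acknowledged within four cycles; a formal run on it ends in exactly one of the three results above:

// Hypothetical handshake check: req must be followed by ack within 1 to 4 cycles.
// Formal proves it for all legal stimulus, fires it with an error trace,
// or returns only a bounded (inconclusive) result.
req_ack_check: assert property (@(posedge clk) disable iff (!rstn)
    req |-> ##[1:4] ack);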
Considering the size and complexity of today’s designs, formal users often encounter inconclusive results for some assertions, especially when running formal on design blocks that have long sequential depths, a huge number of state bits, and complicated mathematical operations. When we get inconclusive results, we can try some simple solutions, such as increasing the run time, running on more cores, or using bigger machines. If the assertion still cannot converge, we need to debug its root cause. In this article, we discuss a flow for debugging inconclusive assertions and use an ECC design as an example to show a decomposition technique for handling them.
DEBUGGING INCONCLUSIVE ASSERTIONS
The flow for debugging the root causes of inconclusive assertions is shown in Figure 1 and Figure 2, below. When an assertion cannot converge, the user first has to check whether the design is correctly initialized. Questa® PropCheck reports the registers that are still X or Z after initialization in the generated report “formal_verify.rpt.” The user can look at the two tables “X Registers in Starting State” and “Z Registers in Starting State” in the file to find out whether the design is properly initialized. If it is not, the user has to fix it, either by changing the design, writing a correct initialization sequence, or using the tool command “netlist initial.”
Figure 1. Flowchart (1)
Figure 2. Flowchart (2)
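As a sketch of the first option, changing the design: adding a reset branch to a register that the report lists as X gives formal a defined starting state. The register names below are hypothetical.

// Before the fix the register had no reset branch, so formal starts it at X.
// The reset branch defines the starting state.
always_ff @(posedge clk or negedge rstn)
    if (!rstn) state_q <= '0;
    else       state_q <= state_d;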
Incorrect constraints may increase formal complexity. Questa® PropCheck reports which ports are constrained and which assumptions are used in the “formal_verify.rpt” file. The user can set some ports to constants, such as setting a mode pin to one mode at a time, to reduce the complexity of the circuit. The user can also get rid of some complicated logic by cutting signals or by blackboxing modules or instances. These are easy steps to take. After fixing the initial state, modifying port constraints, or removing some complicated logic, the user can rerun the test. If these simple steps cannot solve the inconclusive issue, the user has to do more analysis on assumptions, assertions, and design logic. Further steps are shown in Figure 2 above.
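A minimal sketch of these reductions, assuming a design with a 2-bit “mode” port, a debug instance “u_dbg,” and an internal signal “dbg_status” (all hypothetical); the “netlist constant” and “netlist blackbox” commands are assumed to be available alongside the “netlist cutpoint” command used later in this article:

# Pin a mode port to one value per run to shrink the analyzed state space.
netlist constant mode 2'b00
# Remove a complicated instance from the analysis entirely.
netlist blackbox u_dbg
# Or detach a single internal signal from its driving logic.
netlist cutpoint dbg_status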
Sometimes removing assumptions can reduce complexity for formal. If an assumption drags extra design signals into the analysis that are not in the cone of influence (COI) of the assertion, removing the assumption may be helpful. If an assumption is really complicated, introducing lots of extra state bits, removing it may also help. Note that removing constraints may allow illegal stimulus that results in false firings. When the assertion fires, the user can check the error trace generated by the tool to figure out the root cause: either a design bug or a missing constraint. If an assertion can be proven under-constrained, it is still true with more constraints, i.e., the proof is valid. Sometimes adding more constraints can reduce complexity; reducing the number of values an input can take may help. When inputs are over-constrained, a firing is still valid, but a proof may be false.
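For example, a case-splitting assumption like the one below (with a hypothetical “opcode” input) over-constrains the design to one operation per run. A firing found this way is a valid counterexample, but a complete proof requires separate runs covering the other opcode values:

# Over-constrain: analyze only one opcode value in this run.
netlist property -name constr_opcode0 -assume {opcode == 4'b0000}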
If assertions are not written in a formal-friendly style, performance can be affected significantly. The user may consider rewriting an assertion to reduce its state bits and sequential latency. If an assertion verifies an end-to-end property of a big design, the chance of convergence is small. Decomposing an end-to-end property into several simpler properties with small latencies can improve performance a lot. Since Questa® PropCheck can automatically use proven assertions as assumptions when proving other assertions (called assume-guarantee reasoning), the user can add helper assertions to leverage this tool feature and increase performance.
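A minimal sketch of such a decomposition, with hypothetical signals: instead of proving in one step that a request propagates through two pipeline stages, each hop is checked with a short-latency helper assertion, and once the first is proven the tool can assume it while proving the second:

// End-to-end form (hard to converge): req |-> ##[2:8] done
// Decomposed into two short-latency helper assertions:
helper_stage1: assert property (@(posedge clk) disable iff (!rstn)
    req |-> ##[1:4] stage1_done);
helper_stage2: assert property (@(posedge clk) disable iff (!rstn)
    stage1_done |-> ##[1:4] done);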
Big memories and counters can introduce a lot of state bits and large cycle counts for formal to analyze, which can be the root cause of an inconclusive issue. The user can find out which memories and counters Questa® PropCheck uses for a specific assertion with “formal verify -show_active_logic.” The tool reports the active logic used by the formal engines and orders the signals according to their significance; the signal affecting formal the most is listed on top. The user can reduce the size of the memories and counters in the list, or remodel them.
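One common remodeling step, sketched below for a hypothetical FIFO with a DEPTH parameter: instantiate the block with a much smaller depth for the formal run, since most control bugs do not depend on the full depth. The user is still responsible for justifying that the reduction is sound, or for verifying the full-size configuration separately.

// Formal-only wrapper that shrinks a DEPTH-parameterized FIFO to 4 entries.
module fifo_formal_wrap (
    input         clk, rstn, push, pop,
    input  [25:0] data_in,
    output [25:0] data_out
);
    fifo #(.DEPTH(4)) u_fifo (.clk(clk), .rstn(rstn), .push(push), .pop(pop),
                              .data_in(data_in), .data_out(data_out));
endmodule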
If the user has tried all the steps introduced above but the assertion still cannot converge, the user can contact the Mentor verification support team for help.
A CASE STUDY
The design is the ECC logic shown in Figure 3. The encoder generates a syndrome; the data and syndrome are written to the FIFO; and the decoder reads the data and syndrome from the FIFO and decodes them. The design can correct a maximum of 2 random bit errors in the data.
Figure 3. ECC Design
The end-to-end property verifies that the data can be corrected when fewer than 3 bits are in error. This assertion may not converge within a reasonable time due to the complicated mathematical operations in the encoder and decoder and the long latency and large number of state bits introduced by the FIFO. To verify this design, we can verify the encoder and decoder separately from the FIFO. We create a wrapper that instantiates only the encoder and the decoder, as shown in Figure 4.
Figure 4. Wrapper for the Encoder and the Decoder
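A minimal sketch of the wrapper, with hypothetical module and port names inferred from the rest of the example: the encoder feeds the decoder directly, “data” and “syn” are routed through named wires so that cutpoints can be placed on them, and “randbit” is left undriven so that formal treats it as a free variable (as explained below).

module wrapper (
    input         clk,
    input         rstn,
    input  [15:0] din,
    output [15:0] dout
);
    wire [15:0] data;    // encoder data output; cutpoint target
    wire [9:0]  syn;     // encoder syndrome output; cutpoint target
    wire [25:0] randbit; // undriven: free variable controlling error injection

    ecc_encoder u_enc (.clk(clk), .rstn(rstn), .din(din), .data(data), .syn(syn));
    ecc_decoder u_dec (.clk(clk), .rstn(rstn), .data(data), .syn(syn), .dout(dout));
endmodule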
With the FIFO excluded, the formal complexity of the property verifying the correctness of the encoder and the decoder is reduced a lot, and the property also becomes much simpler to write. The property can be written using SystemVerilog Assertions (SVA) or the tool command “netlist property,” as follows:
SVA:
Check_err_corrected: assert property (@(posedge clk) disable iff (!rstn) dout == din);
Tool directive:
netlist property -name Check_err_corrected -assert {dout == din}
We can use Questa® PropCheck’s command “netlist cutpoint” to inject random errors, using the following Tcl file, “inject_errors.tcl”:
# Cut each "data" bit (0..15) or "syn" bit (0..9) when the
# corresponding "randbit" bit is 1, injecting an error on that bit.
for {set i 0} {$i < 26} {incr i} {
    if {$i < 16} {
        netlist cutpoint data\[$i\] -cond (randbit\[$i\]==1'b1)
    } else {
        netlist cutpoint syn\[[expr {$i-16}]\] -cond (randbit\[$i\]==1'b1)
    }
}
# Constrain the free variable "randbit" to at most 2 bits set,
# so that at most 2 errors are injected.
netlist property -name assume_2errs -assume {$countones(randbit) <= 2}
Here “randbit” is an undriven wire defined in the wrapper. Formal treats undriven wires as free variables that can take any value. When any bit of “randbit” is 1, the associated “data” bit or “syn” bit is cut. Since we define “assume_2errs” to constrain “randbit” to have a maximum of 2 bits of value 1, a maximum of 2 “data” and “syn” bits can be cut. When a signal is cut, it is no longer controlled by the design logic; instead it becomes a free variable for formal to control. A cut signal bit can take any value; an error is injected when it takes a value different from the correct one. The above code therefore injects 0, 1, or 2 errors at random. When formal proves “Check_err_corrected,” it proves that “dout” always equals “din” under all combinations of 0, 1, or 2 injected errors. The Makefile to run the verification task is as follows:
run: compile formal

compile:
	vlib work
	vlog rtl/wrapper.v -y ./rtl +incdir+./rtl +libext+.v

formal:
	qverify -c -od log -do " \
		do inject_errors.tcl; \
		netlist property -name Check_err_corrected -assert {dout==din}; \
		formal compile -d wrapper; \
		formal verify; \
		exit"
Now we have verified the encoder and the decoder. If the FIFO module has design bugs, the original ECC still won’t work, so we have to verify the FIFO to make sure it is correct. We can use the Questa® Formal Library (QFL) to verify the FIFO. To use the library, we simply copy the template “qfl_fifo_single_clock.template.sv” from the QFL installation directory to “bind_qfl_fifo.sv,” and modify the parameters and signal names to match the ECC design, as shown below.
bind fifo qfl_fifo_single_clock
#(
    .WIDTH          (26),
    .DEPTH          (64),
    .TIMEOUT        (0),
    .ENQ_WIDTH      (1),
    .DEQ_WIDTH      (1),
    .ENABLE_ASSERT  (1),
    .ENABLE_COVER   (1)
)
QFL_INSTANCE_FIFO_SINGLE_CLOCK
(
`ifdef QFL_TEMPLATE_PORT_CONNECTIONS_ALL
    .*
`else
    /*input [ENQ_WIDTH - 1:0]         */ .enq      (push),
    /*input [ENQ_WIDTH * WIDTH - 1:0] */ .enq_data (data_in),
    /*input [DEQ_WIDTH - 1:0]         */ .deq      (pop),
    /*input [DEQ_WIDTH * WIDTH - 1:0] */ .deq_data (data_out),
    /*input                           */ .clk      (clk),
    /*input                           */ .resetn   (rstn)
`endif
);
When verifying the FIFO, we can read in the original ECC design so that the FIFO is verified in the context of the design. For verifying the FIFO, we don’t care about the actual values of the data written into it. To reduce complexity, we cut the signals “data” and “syn” to remove the complicated encoder logic; “data” and “syn” then become free inputs to the FIFO for formal. If the tool can verify that, for any values written to the FIFO, the values are read out in the correct order, the data-integrity feature of the FIFO is proven. Using the Questa® QFL FIFO library, the tool automatically verifies FIFO data integrity, no overflow, no underflow, and no timeout.
The Makefile to verify the FIFO is as follows. The first “vlog” command compiles the RTL files of the design; the second “vlog” command compiles the QFL library and the bind file. For the FIFO to work correctly, the primary inputs “wr_en” and “rd_en” have to be constrained such that “rd_en” cannot be issued when the FIFO is empty, and “wr_en” cannot be issued when the FIFO is full. We use the “netlist property” command to define these two constraints in the example.
INSTALL := $(shell qverify -install_path)

run: compile formal

compile:
	vlib work
	vlog rtl/ecc_top.v -y ./rtl +incdir+./rtl +libext+.v
	vlog -sv -mfcu -cuname mybind qft_files/bind_qfl_fifo.sv \
		-y $(INSTALL)/share/QFL/checkers +libext+.sv \
		+incdir+$(INSTALL)/share/QFL/checkers

formal:
	qverify -c -od log -do " \
		netlist cutpoint data; \
		netlist cutpoint e_syn; \
		netlist property -name constr_no_underflow -assume {!(rd_en && empty)}; \
		netlist property -name constr_no_overflow -assume {!(wr_en && full)}; \
		formal compile -d ecc_top -cuname mybind; \
		formal verify; \
		exit"
SUMMARY
In this article we discussed a flow for debugging inconclusive assertions, along with some techniques to reduce the formal complexity that may cause them, including fixing an incorrect initial state or incorrect constraints, removing complicated design logic by blackboxing modules or instances or by cutting design signals, and reducing assertion complexity. We looked at a test case where we reduced the complexity of the original end-to-end property by verifying the complicated arithmetic blocks and the FIFO separately. There are many creative ways to reduce formal complexity and get more results from formal runs. To learn more, check the Verification Academy website for the course “Handling Inconclusive Assertions in Formal Verification.”