Improving FPGA Debugging with Assertions

Harry Foster
Chief Scientist Verification,
Mentor Graphics Corp., Wilsonville, OR, USA

Here’s one reason why FPGA design starts dwarf ASIC design starts: choosing flexible, inexpensive, and readily available FPGAs is one fairly obvious way to reduce risk when designing complex SoCs for everything from mobile devices and smartphones to automobile electronics. In fact, Gartner reported that FPGAs now have a 30-to-1 edge over ASICs in terms of new design starts. This ratio is startling since FPGAs have traditionally been relegated to glue logic, low-volume production, or prototype parts used for analysis.

But the landscape is changing. Transistor counts and advanced features found in today’s FPGAs have increased dramatically to compete with capabilities traditionally offered by ASICs alone. These innovations are helping meet the inexorable requirement that more functionality be packed into smaller and more power-efficient form factors. This demand, coupled with the change in FPGA capabilities, has resulted in the emergence of advanced FPGA system-on-chip (SoC) solutions, including the integration of third-party IP, DSPs, and multiple microprocessors, all connected through advanced, high-speed bus protocols.

Accompanying these changes has been an increase in design and verification complexity, which traditional FPGA flows are generally not prepared to address. To reduce the development costs associated with complex FPGA-based SoC designs, project teams have been forced to evolve and mature their development processes. One question many engineers ask when thinking about improving processes is “where should I start?” As the saying goes, “when eating an elephant, take one bite at a time.” Hence, focus on the existing bottlenecks.

When asked what the biggest bottleneck in today’s FPGA flows is, most engineers will, without hesitation, say it is debugging. This is true both when tracking down an issue found in the lab and when tracking down bugs found during simulation. Anything that can be done to reduce the debugging effort is a huge win for the overall project schedule, and this is where assertions can help. In fact, multiple industry case studies have shown that assertions can reduce debugging time by up to 50 percent.


Informally, an assertion is a statement of design intent that can be used to specify design behavior. Assertions may specify internal design implementation features, for example, some aspect of a specific FIFO structure. Alternatively, assertions may specify external architectural (or micro-architectural) features such as a bus protocol or even higher-level, end-to-end behavior that spans multiple design blocks.

One key characteristic of assertions is that they allow us to specify what the design is supposed to do at a high level of abstraction, without having to describe the details of how the design was implemented. Thus, this abstract view of the design intent is ideal for the verification process, whether we are specifying high-level requirements or lower-level implementation behaviors by means of assertions.
Let’s examine a simple assertion. For this example we use a simple FIFO design, as illustrated in Figure 1.

Figure 1. Simple FIFO design and a VHDL OVL assertion checker


A requirement of our simple FIFO is that it must not overflow or underflow under any condition. Figure 1 also shows a simple VHDL OVL assertion checker that has been embedded in our RTL design to verify this condition. Any improper or unexpected behavior associated with the FIFO can now be caught closer to the source of the design error (in terms of both time and location). Hence, the use of assertions dramatically reduces our debugging effort.
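Since Figure 1 is not reproduced here, the following sketch shows what such an embedded checker might look like. It assumes the OVL VHDL library (the ovl_never checker, whose test expression must never evaluate true) and hypothetical signal names (clk, rst_n, wr_en, rd_en, full, empty); your FIFO's names and the exact OVL generics may differ.

```vhdl
-- Hypothetical sketch: OVL checkers embedded in a FIFO's architecture body.
-- Assumes the Accellera OVL VHDL library and illustrative signal names.
library accellera_ovl_vhdl;
use accellera_ovl_vhdl.std_ovl.all;

-- ... inside the FIFO's architecture ...

-- Overflow: a write must never be accepted while the FIFO is full.
chk_overflow : ovl_never
  generic map (msg => "FIFO overflow: write while full")
  port map (clock     => clk,
            reset     => rst_n,
            enable    => '1',
            test_expr => wr_en and full,
            fire      => open);

-- Underflow: a read must never be accepted while the FIFO is empty.
chk_underflow : ovl_never
  generic map (msg => "FIFO underflow: read while empty")
  port map (clock     => clk,
            reset     => rst_n,
            enable    => '1',
            test_expr => rd_en and empty,
            fire      => open);
```

If either test expression ever evaluates true during simulation, the checker fires immediately, at the cycle and location of the error, rather than hundreds of cycles later at the design outputs.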


To understand how assertions can cut debugging time in half, let’s discuss the concepts of controllability and observability. Controllability refers to the ability to activate an embedded finite state machine, structure, or specific logic element within a design by stimulating various input ports. While it is possible to achieve high controllability of the design’s input ports, test stimuli may not be able to achieve much controllability of an internal structure within the design. Observability refers to the ability to observe the effects of a specific internal finite state machine, structure, or stimulated logic element. For example, a testbench generally has limited observability if it can only observe the external ports of the design model.

To identify a design error using a simulation testbench approach, the following conditions must hold:

  • The testbench must generate a proper input stimulus to activate a design error.
  • The testbench must generate a proper input stimulus to propagate all effects resulting from the design error to an output port.

It is possible, however, to set up a condition where the input stimulus activates a design error that does not propagate to an observable output port, as illustrated in Figure 2.

Figure 2. Poor observability and controllability misses bugs


Embedding assertions in the design model increases observability. In this way, the testbench no longer depends on generating input stimulus to propagate a design error to an observable port. In fact, one observation made by teams just getting started with assertion-based verification (ABV) is that after the assertions are enabled, their simulations that had previously been working often fail as new bugs are identified due to improved observability. The result: an overall reduction in debugging time.


Following a few simple principles and guidelines will greatly simplify the process of creating assertions. The most important principle is to keep it simple and short! Engineers often overthink the type of assertion they want to create. For example, many engineers think in terms of higher-level, end-to-end checks, similar to what they might consider when creating a directed test. Or they might think in terms of a complex sequence they want to check as specified by a specific requirement. One problem with this approach is that many of these complex assertions require a rocket scientist to write. But more importantly, they will not provide you with the simulation debugging benefits that lower-level assertions provide.

Hence, when getting started, my first recommendation is that you don’t think of assertions the same way you would think of directed end-to-end tests. You should write simple assertions that check lower-level discrete behavior, such as a FIFO that must not underflow or overflow, an input packet’s tag that must have a legal value, or grants from an arbiter that must be mutually exclusive. The advantage of keeping it simple and short is that you will find these assertions much easier to write. Furthermore, they simplify your debugging effort in simulation by localizing the problem. In contrast, high-level, end-to-end assertions generally do not reduce your debugging effort.

My second recommendation is that you write an assertion in place of a comment whenever possible. For example, many engineers write a comment about some assumption they have made about the design or some aspect of the design that is concerning them—such as the two control signals must never be high at the same time or the design will not function correctly. Adding an assertion instead of a comment is a great way to validate your assumptions during simulation and monitor specific design concerns.
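As a concrete illustration of promoting a comment to an assertion, the following sketch again assumes the OVL VHDL library and hypothetical signal names (ctrl_a, ctrl_b, clk, rst_n):

```vhdl
-- Instead of the comment:
--   -- ASSUMPTION: ctrl_a and ctrl_b are never high at the same time
-- embed a checker that validates the assumption on every simulation run:
chk_ctrl_mutex : ovl_never
  generic map (msg => "ctrl_a and ctrl_b active simultaneously")
  port map (clock     => clk,
            reset     => rst_n,
            enable    => '1',
            test_expr => ctrl_a and ctrl_b,
            fire      => open);
```

Unlike the comment, the checker is verified continuously, so the moment the assumption is violated you know exactly where and when.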

My third recommendation is, whenever possible and efficient, to create assertions on the interface ports of your design block to validate basic control signals or legal command values. For example, if a block has two input control signals (say, en1 and en2) that must never be active at the same time, then a simple assertion can check this condition. Or, if there is a specific timing relationship or restriction between interface control signals, such as a grant must follow a request within 10 cycles, then a simple assertion can quickly identify when the timing restriction is violated. Note that I am recommending you evolve your assertion skills first before you take on creating more complex interface assertions, such as a set of assertions checking a bus protocol. Remember: keep it simple and short!
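The grant/request timing restriction above could be checked with the OVL frame checker, which opens a timing window when a start event occurs and requires the test expression to hold within it. This is a sketch only: the signal names (req, gnt) are hypothetical, and the generic names of ovl_frame may differ between OVL releases, so consult your OVL documentation.

```vhdl
-- Hypothetical interface check: grant must follow request within 10 cycles.
-- Assumes OVL's ovl_frame checker; generic names may vary by OVL release.
chk_grant_timing : ovl_frame
  generic map (min_cks => 1,
               max_cks => 10,
               msg     => "grant did not follow request within 10 cycles")
  port map (clock       => clk,
            reset       => rst_n,
            enable      => '1',
            start_event => req,
            test_expr   => gnt,
            fire        => open);
```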

My final recommendation is to evolve your skills over time by analyzing bug escapes, and then learn how to improve your assertion-creating skills from these mistakes. For example, when a bug slips through simulation into the lab, try to identify whether some lower-level or discrete internal error could have been checked with an assertion to quickly identify the issue that manifested itself into the larger issue found in the lab. This approach enables you to mature your skills over time, and increase the quality and density of assertions in your design.


Even after learning the syntax of writing assertions, many engineers experience what is often referred to as blank-page syndrome and don’t have a clue how to get started. This section describes a handful of common design errors that you might want to monitor with assertions. Obviously, these suggestions are not comprehensive in terms of the types of assertions you could write. However, in the spirit of evolving your assertion creation skills from crawl, to walk, to run, I recommend that you start with these simple assertions since they require minimal effort to add to your design and generally provide high value.

  • FIFO Overflow and Underflow Condition: Very often, complex system-level bugs are the result of a much lower-level design bug, such as a FIFO overflowing or underflowing, that manifests itself as a complex failure observed at the system level. The good news is that the FIFO overflow and underflow assertions are two of the simplest assertions you can add to your design. Note that in Section 3 of this paper we provide a step-by-step example of adding OVL assertions to a VHDL design that contains a FIFO, along with details on how to run simulation on this example.
  • One-Hot Control Condition: Many designs implement a one-hot control structure, which is particularly useful for managing shared design resources, such as a bus or memory. If the control logic inadvertently enables multiple resources at once, an error will occur in the design. The effect of this bug might not show up until hundreds of cycles later, at the outputs your testbench observes. By adding an assertion to check for a one-hot condition (for example, exactly one of the memory enables en1 and en2 must be active at a time), you can catch the bug closer to its source.
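A one-hot condition on the two memory enables could be checked with OVL's ovl_one_hot checker, which requires exactly one bit of its test expression to be asserted each cycle. As before, this is a sketch with hypothetical signal names (en1, en2, clk, rst_n):

```vhdl
-- Hypothetical one-hot check on two memory enables using OVL's ovl_one_hot.
-- Fires if zero bits or more than one bit of test_expr is asserted.
chk_one_hot : ovl_one_hot
  generic map (width => 2,
               msg   => "memory enables en1/en2 are not one-hot")
  port map (clock     => clk,
            reset     => rst_n,
            enable    => '1',
            test_expr => en2 & en1,  -- concatenate enables into a vector
            fire      => open);
```

If at-most-one (rather than exactly-one) enable is legal, the related ovl_zero_one_hot checker would be the appropriate choice.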


Do not wait until RTL coding is complete to add your assertions. The reason is that there are many assumptions and potential design concerns whose details you are likely to have forgotten if you wait. I recommend that as you are writing your RTL, whenever you make an assumption or identify a concern, you add an appropriate assertion to check that condition.


To learn more about ABV and how to evolve your organization’s verification process capabilities, visit the Verification Academy web site, a major training initiative at Mentor Graphics.

Harry Foster is chief scientist for Mentor Graphics' Design Verification Technology Division. He holds multiple patents in verification and has co-authored seven books on verification, including the 2008 Springer book Creating Assertion-Based IP.  Foster was the 2006 recipient of the Accellera Technical Excellence Award for his contributions to developing industry standards, and was the original creator of the Accellera Open Verification Library (OVL) assertion standard.
