
    1. Simulation Overview

      Simulation plays a pivotal role in the digital design and verification process. Its primary purpose is to validate whether the design being created functions according to the specified requirements. By running simulations early in the design phase, potential issues can be identified, thus minimizing the need for extensive code revisions. Simulations can be performed at different levels of abstraction and at various stages throughout the design process.

      From an abstraction perspective, simulations can be performed at the register-transfer level (RTL) and at the gate level. Gate-level simulation can be further subdivided into timing and zero-delay simulation, depending on whether timing information is considered during the simulation process.

      In the course of electronic system design, simulations are performed at different levels: block level and system level. Block-level simulations are shorter in runtime and less intricate than system-level simulations. System-level simulations encompass the entire system under test and typically involve a verification framework to drive the inputs and observe the outputs.

    2. Simulation Metrics

      It would be impossible to simply look at the outputs of the Design Under Test (DUT) and draw any conclusions about the quality of the code. Metrics are needed to determine whether the DUT is performing according to the specification and whether there are unforeseen issues in the code.

      Code Coverage

      One of the main and most commonly used metrics is code coverage. Code coverage is the measure of how much of the code written is executed during the simulation. There are various types of code coverage, and each of them gives a different perspective on which parts of the code are executed during the simulation.

      Statement coverage shows whether a line in the code is executed or not.

      Branch coverage shows whether all the possible paths through a branch (if-else, case statements) have been taken during the simulation.

      Expression coverage shows whether all the possible ways an expression can be assigned have been exercised.

      Toggle coverage shows whether all the bits of a bus have toggled between 0 and 1.

      Extending this, state machine coverage provides metrics related to the state machines in the design. It offers information on the active states, transitions between states, and other relevant aspects.
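      Simulators such as QuestaSim collect these code-coverage metrics automatically; as a minimal illustration of the bookkeeping involved, the following hypothetical Python sketch tallies toggle coverage for a bus from a list of sampled values (the function and sample data are invented for illustration):

```python
# Hypothetical sketch of toggle-coverage bookkeeping.
# A real simulator tracks this per signal bit during simulation.

def toggle_coverage(samples, width):
    """Return the fraction of bits that toggled both 0->1 and 1->0."""
    rose = [False] * width   # bit saw a 0 -> 1 transition
    fell = [False] * width   # bit saw a 1 -> 0 transition
    for prev, curr in zip(samples, samples[1:]):
        for bit in range(width):
            p = (prev >> bit) & 1
            c = (curr >> bit) & 1
            if p == 0 and c == 1:
                rose[bit] = True
            elif p == 1 and c == 0:
                fell[bit] = True
    covered = sum(r and f for r, f in zip(rose, fell))
    return covered / width

# Values sampled over time on a 4-bit bus: only bits 0 and 2 fully toggle.
print(toggle_coverage([0b0000, 0b0101, 0b0000], 4))  # -> 0.5
```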

      Functional Coverage

      Functional coverage plays a crucial role in the verification of electronic systems. Unlike code coverage, which focuses on the lines of code being executed, functional coverage is a user-defined metric that assesses how much of the design specification is covered during simulation. Verification engineers find functional coverage particularly valuable because it allows them to explicitly define coverage models. By doing so, they can verify the exact behavior of the design and ensure all expected values are observed during simulation.
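      In SystemVerilog, functional coverage models are typically written as covergroups with bins. As a rough, hypothetical analogy, the Python sketch below defines bins for a packet-length input (the spec values are assumed for illustration) and reports how much of the model was hit:

```python
# Hypothetical sketch of a user-defined functional-coverage model,
# loosely analogous to a SystemVerilog covergroup with bins.

class Coverpoint:
    def __init__(self, bins):
        # bins: name -> predicate deciding whether a sampled value hits the bin
        self.bins = bins
        self.hits = {name: 0 for name in bins}

    def sample(self, value):
        for name, pred in self.bins.items():
            if pred(value):
                self.hits[name] += 1

    def coverage(self):
        """Fraction of bins hit at least once."""
        return sum(1 for h in self.hits.values() if h > 0) / len(self.bins)

# Assumed spec for illustration: packet lengths 1..64, with corner
# cases at the minimum and maximum lengths.
pkt_len = Coverpoint({
    "min": lambda v: v == 1,
    "mid": lambda v: 1 < v < 64,
    "max": lambda v: v == 64,
})

for length in [1, 20, 33]:
    pkt_len.sample(length)

print(pkt_len.coverage())  # the "max" bin was never hit
```

Because the bins come straight from the specification, a coverage hole points directly at a spec behavior that was never stimulated.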


      Assertions

      Assertions provide a powerful means to verify and monitor the behavior of a design. By incorporating assertions into the design, one can express specific rules based on the design specification. These rules act as checks that the simulation tool evaluates during simulation runs. If an assertion fails – meaning the specified condition is not met – the tool reports the violation. This feedback allows one to investigate why the design is not functioning as intended.
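      In RTL these rules are usually written as SystemVerilog Assertions evaluated by the simulator on every clock. As a hypothetical illustration of the idea only, the sketch below checks a simple rule – every request must be granted within two cycles – against a recorded trace (the signal names and rule are invented):

```python
# Hypothetical sketch of an assertion check: every 'req' must be
# followed by a 'gnt' within max_latency clock cycles. A simulator
# evaluates rules like this cycle by cycle and reports each failure.

def check_req_gnt(trace, max_latency=2):
    """trace: list of (req, gnt) samples, one per clock cycle.
    Returns the cycle numbers where the rule was violated."""
    violations = []
    for cycle, (req, gnt) in enumerate(trace):
        if req:
            window = trace[cycle + 1 : cycle + 1 + max_latency]
            if not any(g for _, g in window):
                violations.append(cycle)
    return violations

#        cyc0    cyc1    cyc2    cyc3    cyc4    cyc5
trace = [(1, 0), (0, 1), (0, 0), (1, 0), (0, 0), (0, 0)]
print(check_req_gnt(trace))  # -> [3]: the request at cycle 3 was never granted
```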

      Constrained-Random Stimulus

      The quality of the verification of the DUT relies heavily on the type of stimulus driving the inputs. Manually toggling inputs through all possible combinations and sequences is impractical due to the sheer complexity of modern designs. Therefore, there needs to be a practical approach to drive input ports.

      Traditionally, verification engineers wrote directed tests – tests that explicitly set input values and change them at specific times. This approach becomes very cumbersome as the design gets more complex. To address this, random number generation was introduced. Randomly cycling through all the possible input combinations seems like a solution, but it quickly becomes impractical: the number of possible combinations is astronomically large, leading to long simulation runs and leaving verification engineers spending a lot of time hoping to catch corner cases.

      A good middle ground between directed tests and random stimulus is constrained-random stimulus generation. By constraining the random value generation to the values the inputs can legally take according to the specification, we significantly reduce the number of input combinations to explore and thereby the total simulation time.
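      SystemVerilog expresses this with rand variables and constraint blocks solved by the simulator. As a hypothetical Python sketch of the same idea, the generator below draws random packet lengths but rejects any value the (assumed, invented) specification declares illegal:

```python
import random

# Hypothetical sketch of constrained-random stimulus generation,
# analogous to a SystemVerilog rand variable with a constraint block.
# Assumed spec for illustration: lengths 1..256 are valid, but odd
# lengths above 128 are illegal.

def legal(length):
    return 1 <= length <= 256 and not (length > 128 and length % 2 == 1)

def random_length(rng):
    """Draw until a value satisfies the constraint (rejection sampling)."""
    while True:
        length = rng.randrange(0, 512)  # unconstrained draw
        if legal(length):
            return length

rng = random.Random(42)  # seeded, so failing runs can be reproduced
stimulus = [random_length(rng) for _ in range(5)]
print(stimulus)
assert all(legal(v) for v in stimulus)
```

Seeding the generator is the key practical detail: a failure found with random stimulus is only useful if the same sequence can be replayed during debug.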

    3. Verification Methodologies

      In today’s complex digital designs, a good verification methodology plays a vital role in determining how effectively and efficiently your simulations run. Different verification methodologies exist, tailored to the programming languages you are most comfortable with.

      The Universal Verification Methodology (UVM) is the go-to choice for SystemVerilog users. It has widespread adoption and offers guidelines, libraries and tools to create reusable and scalable test environments.

      VHDL users can leverage UVM as well. By integrating UVM with VHDL, they benefit from established practices while staying within their preferred design language. For those committed to exclusive use of VHDL, there are UVVM (Universal VHDL Verification Methodology) and OSVVM (Open Source VHDL Verification Methodology).

      Python enthusiasts can harness cocotb – a coroutine-based co-simulation testbench environment that uses a simulator’s Programming Language Interface (PLI) to drive stimulus from a Python environment to the underlying SystemVerilog or VHDL DUT.