by Tom Fitzpatrick - Mentor, A Siemens Business
I am a big fan of the television show Jeopardy! and, like many of you, I have been intrigued by James Holzhauer, who, as of this writing, is the second-winningest contestant in the show’s history. During his current 22-game winning streak, Holzhauer has amassed $1,691,008 in winnings, including the top 11 single-game totals in the show’s history. (For comparison, Ken Jennings won 74 consecutive games and $2,520,700 in 2004.) For those of you not familiar with the show, it is a general-knowledge quiz show consisting of two rounds in which players choose from six clue categories, each containing five clues of increasing dollar value from top to bottom. A correct response earns the player the clue’s dollar amount and the next choice of clue; an incorrect response deducts that amount. The third round, called “Final Jeopardy,” consists of a single question on which contestants can wager as much of their winnings as they wish.
Holzhauer has achieved his amazing success through two complementary approaches. The first is his analytics-based approach to the game. Instead of starting at the top (lowest-value) clue in a category and proceeding down through the same category, as most players tend to do, he starts with the bottom (highest-value) clue of a given category and proceeds across the bottom of the board, accumulating large amounts of money very quickly. Since the “Daily Double” squares (which allow the contestant to bet as much money as they wish) tend to be towards the bottom of the board, Holzhauer is often able to further enrich his total before his opponents have really gotten started. Of course, the other key to his success is that he has answered 97% of the questions correctly.
The only comparable Jeopardy! performance I can recall is watching Bill Murray in the movie Groundhog Day, where his character knows all the answers because he has seen the same episode over and over again and has obviously memorized them. I think both of these examples show that just doing things the way you’ve always done them is not the path to success, especially when your competition is changing the game. So, with this thought in mind, we’ve assembled a great DAC edition of Verification Horizons for you, with each article showing how you can take a new approach to some aspect of functional verification.
by Keroles Khalil and Magdy A. El-Moursy - Mentor, A Siemens Business
An integrated framework to simulate electronic systems (including digital and analog devices) together with the mechanical parts of a heterogeneous automotive system is presented. The electronic system, consisting of many electronic control units (ECUs), is modeled to simulate the mechatronic system functionality. The recently developed Functional Mock-up Interface (FMI) standard is used to create a model for a complex cyber-physical automotive system. The framework simulates real systems, including the hardware (HW) and the software (SW) that runs on the virtual ECUs. It allows co-development of the automotive system SW and HW while the mechanical system is in the loop. Hardware and software debugging is demonstrated using the developed methodology. The development cycle for the automotive mechatronic system can be greatly shortened using the proposed framework.
Autonomous driving (AD) requires sophisticated electronic systems. Today’s automotive electronic systems contain many systems-on-chip (SoCs).[1, 2] Deep verification is required to ensure that there are no bugs or risks of failure. The safety of advanced driver assistance systems (ADAS) cannot be guaranteed without verification of the whole system, including thermal, mechanical, and electrical parts. Cars with ADAS have extensive data processing and decision-making capabilities. A driver is either warned so a potentially dangerous situation can be avoided, or the systems on board take control once a dangerous situation is inevitable (e.g., by steering away from the impact point). In this article, modeling the whole automotive system is adopted to verify the operation of ADAS, including the software and hardware.
by Rich Edelman - Mentor, A Siemens Business
In a SystemVerilog UVM [2] testbench, most activity is generated by writing sequences. This article outlines how to build and write basic sequences, then extends into more advanced usage. The reader will learn about sequences that generate sequence items, sequences that start other sequences, and sequences that manage sequences on other sequencers. Sequences that generate out-of-order transactions will be investigated, and self-checking sequences will be written.
A UVM sequence is a collection of SystemVerilog code that runs to make “things happen”. There are many things that can happen. Most commonly, a sequence creates a transaction, randomizes it, and sends it to a sequencer, which passes it on to a driver. In the driver, the generated transaction will normally cause some activity on the interface pins. For example, a WRITE_READ_SEQUENCE could generate a random WRITE transaction and send it to the sequencer and driver. The driver will interpret the WRITE transaction payload and cause a write with the specified address and data.
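To make that concrete, here is a minimal sketch of what such a sequence might look like. The transaction class bus_txn, its cmd and addr fields, and the WRITE/READ enum values are assumptions for illustration, not code from the article itself:

```systemverilog
// Hypothetical WRITE-then-READ sequence; bus_txn and its fields are
// assumed to exist in the testbench.
class write_read_sequence extends uvm_sequence #(bus_txn);
  `uvm_object_utils(write_read_sequence)

  function new(string name = "write_read_sequence");
    super.new(name);
  endfunction

  task body();
    bus_txn wr, rd;

    // WRITE: create, randomize, and hand to the sequencer/driver.
    wr = bus_txn::type_id::create("wr");
    start_item(wr);
    if (!wr.randomize() with { cmd == WRITE; })
      `uvm_error("SEQ", "WRITE randomization failed")
    finish_item(wr);

    // READ back from the same address the WRITE used.
    rd = bus_txn::type_id::create("rd");
    start_item(rd);
    if (!rd.randomize() with { cmd == READ; addr == local::wr.addr; })
      `uvm_error("SEQ", "READ randomization failed")
    finish_item(rd);
  endtask
endclass
```

A driver that pulls these items with get_next_item()/item_done() would then wiggle the interface pins as the article describes.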
by Matthew Ballance - Mentor, A Siemens Business
Portable Stimulus is one of the latest hot topics in the verification space. Mentor and other vendors have had tools in this space for some time, and Accellera recently released the Portable Test and Stimulus Standard, a standard language that can be used to capture Portable Stimulus semantics.
From the name, one very obvious application of Portable Stimulus is to enable a test scenario to be reused easily across test-execution platforms or levels of verification. As the figure above shows, Portable Stimulus does allow test intent to be reused from block level to subsystem level to SoC level. It also enables a Portable Stimulus tool to create tests that are appropriate for the variety of test platforms on which that verification is carried out – typically SystemVerilog for block and subsystem level, and C tests for SoC level.
However, Portable Stimulus offers more than just test portability. It enables a high degree of automation in the test-creation process, and it lets the user describe tests at a far higher level of abstraction than is possible with techniques like SystemVerilog and UVM.
by Mark Eslinger and Ping Yeung - Mentor, A Siemens Business
Formal verification has been used successfully to verify today’s SoC designs. Traditional formal verification, which starts from time 0, is good for early design verification, but it is inefficient for hunting complex functional bugs. In our experience, complex bugs happen when multiple events interact under uncommon scenarios. Our methodology leverages functional simulation activity and starts formal verification from interesting “fishing spots” in the simulation traces. In this article, we share these interesting fishing spots and explain how formal engine health is used to prioritize and guide the bug-hunting process.
Formal verification has been used successfully by many companies to verify complex SoCs[2] and safety-critical designs.[3] The ABCs of formal have been used extensively, as described in[5] and illustrated in the sketch after this list:
- Assurance: to prove and confirm the correctness of design behavior
- Bug hunting: to find known bugs or explore unknown bugs in the design
- Coverage closure: to determine if a coverage statement/bin/element is reachable or unreachable
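The snippets below sketch how each of these uses typically looks in SVA. They are generic examples with assumed signal names (clk, rst_n, req, gnt, fifo_cnt, DEPTH), not assertions from the article:

```systemverilog
// Assurance: prove a safety property holds on all paths from reset,
// e.g., every request is granted within four cycles.
assert property (@(posedge clk) disable iff (!rst_n)
                 req |-> ##[1:4] gnt);

// Bug hunting: target a suspected illegal condition and let the
// formal engine search for a trace that violates it.
assert property (@(posedge clk) disable iff (!rst_n)
                 !(fifo_cnt > DEPTH));

// Coverage closure: ask the formal tool whether a coverage
// statement/bin is reachable at all.
cover property (@(posedge clk) fifo_cnt == DEPTH);
```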
by Sukriti Bisht, Sulabh Kumar Khare, Ashish Hari, and Kurt Takara - Mentor, A Siemens Business
Clock-domain crossing (CDC) issues are the second most common reason for silicon re-spins. Most modern designs have more than one clock, many of which are asynchronous to one another. Signals that pass between logic clocked by different asynchronous clocks are called CDC signals. Due to the asynchronous nature of the clocks, a CDC signal can go metastable and propagate incorrect logic downstream, resulting in functional failure of the design. To mitigate these problems, synchronizers are used on CDC paths.
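For readers new to the topic, here is a minimal sketch of the most common such structure, a two-flop synchronizer for a single-bit CDC signal. It is a generic textbook example, not code from the article:

```systemverilog
// Two-flop synchronizer: the first flop may go metastable when
// capturing the asynchronous input; the second flop gives it a full
// destination-clock cycle to resolve before the value is used.
module sync_2ff (
  input  logic clk_dst,   // destination-domain clock
  input  logic rst_n,
  input  logic d_async,   // signal arriving from another clock domain
  output logic q_sync     // version synchronized to clk_dst
);
  logic meta;
  always_ff @(posedge clk_dst or negedge rst_n)
    if (!rst_n) begin
      meta   <= 1'b0;
      q_sync <= 1'b0;
    end else begin
      meta   <= d_async;  // metastability can occur here
      q_sync <= meta;     // resolved value presented downstream
    end
endmodule
```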
Each synchronizer depends on a set of assumptions, or protocols, which, when violated, can make the transfer unreliable. To avoid such issues, it is crucial to validate each synchronizer for reliability. Engineers have been using assertion-based verification methods to verify synchronizer protocols in formal and simulation environments, but these methods have fallen short of fully addressing the issue.
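As one purely illustrative example of such a protocol, data crossing through an enable-based (MUX) synchronizer must remain stable long enough for the destination domain to capture it. The signal names and the three-cycle hold window below are assumptions, not a rule from the article:

```systemverilog
// Protocol check (illustrative): once tx_valid rises, tx_data must
// hold its value for the next three source-clock cycles so the
// destination domain is guaranteed a clean capture. The required
// window depends on the actual synchronizer and clock ratio.
assert property (@(posedge clk_src) disable iff (!rst_n)
                 $rose(tx_valid) |=> $stable(tx_data) [*3])
  else $error("CDC protocol violation: tx_data changed mid-crossing");
```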
In this article, we will present the most prominent challenges with the existing methodology for verifying synchronizer protocols and propose a new methodology to overcome them.
by Amanjyot Kaur and Louie De Luna - Agnisys
The Portable Test and Stimulus Standard (PSS) v1.0a aims to help the user describe the high-level test intent and create code for any downstream verification platform. This article starts out with a quick introduction to PSS; it then shows how to use Questa® inFact to create a PSS description and generate C & UVM code. We will then show how you can use ISequenceSpec™ to describe the Hardware/Software Interface (HSI) layer to create truly portable tests.
The HSI layer, or Hardware Abstraction Layer (HAL), is used in virtually all modern digital systems. It abstracts the details of the hardware away from the higher layers of the system, such as the device driver, firmware, and software stack. For example, a power-up or initialization sequence could have a list of specific steps that need to be carried out in a certain order. These steps could include register field writes, waits for a specific period of time or for events, or setting up clock PLLs by trimming values stored in register fields.
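A sketch of what such an initialization sequence could look like when coded against a UVM register model is shown below. The block, register, and field names (my_reg_block, pll_cfg, trim, enable, pll_status.locked) and the trim value are invented for illustration:

```systemverilog
// Hypothetical power-up sequence expressed as register-field steps.
class soc_init_seq extends uvm_reg_sequence;
  `uvm_object_utils(soc_init_seq)

  function new(string name = "soc_init_seq");
    super.new(name);
  endfunction

  task body();
    uvm_status_e   status;
    uvm_reg_data_t lock_val;
    my_reg_block   regs;
    $cast(regs, model);  // register model is set by the caller

    // Step 1: trim the PLL, then enable it (ordered field writes).
    regs.pll_cfg.trim.write(status, 8'h3C);
    regs.pll_cfg.enable.write(status, 1'b1);

    // Step 2: wait for PLL lock before proceeding; polling a status
    // field stands in for a wait-for-event step.
    do
      regs.pll_status.locked.read(status, lock_val);
    while (lock_val == 0);
  endtask
endclass
```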
Typically, an SoC consists of several subsystems, and each subsystem can contain a multitude of blocks or IPs. Each IP could have its own HSI layer, a set of sequences that abstract the core functionality of the IP. A subsystem could use these IP-level sequences to build its own hierarchical sequences. Then, at the top (SoC) level, there could be sequences built from the subsystem sequences. These hierarchical sequences are analogous to the register-level (or RTL) hierarchy.
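The sketch below shows the shape of that hierarchy in UVM terms: a subsystem bring-up sequence composed of IP-level sequences. All class names are hypothetical:

```systemverilog
// A subsystem-level sequence reusing IP-level sequences, mirroring
// the register/RTL hierarchy described above.
class subsys_bringup_seq extends uvm_sequence;
  `uvm_object_utils(subsys_bringup_seq)

  function new(string name = "subsys_bringup_seq");
    super.new(name);
  endfunction

  task body();
    uart_init_seq uart_seq = uart_init_seq::type_id::create("uart_seq");
    dma_init_seq  dma_seq  = dma_init_seq::type_id::create("dma_seq");
    // Run the IP-level sequences in the order the subsystem requires;
    // an SoC-level sequence would compose these the same way.
    uart_seq.start(m_sequencer, this);
    dma_seq.start(m_sequencer, this);
  endtask
endclass
```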
by George Stevens - DesignLinx Solutions
The basis of this article was derived from practical experience. The scenario was this:
“Here is a DUT specification, we have no UVM environment for you to start with as a template, so go and find out how to generate one with Mentor’s UVM Framework (UVMF) template generation methodology.”
The Mentor UVMF documentation and examples provided great direction on how to generate a UVMF framework from scratch via, in this case, Python scripting. And then, boom, in fairly short order you are suddenly presented with a UVM framework that has hundreds of files and a large directory structure consisting of tests, sequences, transactions, drivers, monitors, predictors, scoreboards, configuration, and coverage. All connected, compilable, and simulatable… and, from an architectural standpoint, a bit overwhelming.
So, what do you do next? The generated tests have no idea what is in your DUT or DVE (Design Verification Environment) functional specifications, nor do they know how to get stimulus all the way to the DUT from the top test. Also, your predictors do not know what to expect and are not ready to support scoreboarding. Of course, there are other customization requirements, such as synchronizing the drivers and monitors to the DUT and to the framework itself, which are well covered in the existing Mentor documentation and are all good subjects for white papers nonetheless. This article concentrates on one of these areas: how to customize the front-end test control structure. It uses the primary Mentor UVMF tutorial, called the “Generator Tutorial,” as the base example. This article is really a “how-to” guide rather than a technical dissertation, and should be immediately useful to verification folks with limited UVM background who are engaged in something similar to the scenario stated at the beginning of this text. Additionally, these techniques should be directly applicable to any UVM template generated by Mentor’s UVMF template generation methodology.
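In generic UVM terms, that kind of test-control customization often boils down to extending the generated base test and swapping in your own top-level sequence through the factory. The class names below (test_top, generator_bench_sequence_base, my_dut_sequence) are placeholders, not necessarily the names UVMF generates:

```systemverilog
// Sketch: extend the generated base test and use a factory override
// so the environment builds our customized sequence instead of the
// generated default.
class my_dut_test extends test_top;
  `uvm_component_utils(my_dut_test)

  function new(string name = "my_dut_test", uvm_component parent = null);
    super.new(name, parent);
  endfunction

  virtual function void build_phase(uvm_phase phase);
    // Wherever the generated code creates the base sequence, the
    // factory will now construct my_dut_sequence in its place.
    generator_bench_sequence_base::type_id::set_type_override(
        my_dut_sequence::get_type());
    super.build_phase(phase);
  endfunction
endclass
```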