Verification Horizons Articles:
by Tom Fitzpatrick, Siemens EDA
Welcome to our DVCon U.S. 2021 edition of Verification Horizons! You’ll notice that a few things look different about this issue, and that’s because we are no longer Mentor Graphics, but are now Siemens EDA. This is a huge milestone for us as a company and we hope it will prove to be even more significant for our industry as a whole, giving us the opportunity to share so much more with you.
I’m writing this note to you on Groundhog Day. For those of you not familiar with this particular American phenomenon, on Groundhog Day, February 2nd, the groundhog comes out of his hole and looks around. If he sees his shadow, he goes back in his hole and we get six more weeks of winter. If he doesn’t see his shadow, he stays out and we get an early spring. Of course, Groundhog Day is also the backdrop for the movie of the same name, considered by many to be a masterpiece of both comedy and philosophy, in which Bill Murray plays Phil Connors, a cynical weatherman who gets stuck living the same day over and over again until he eventually learns what it means to live well.
by Mark Peryer, Siemens EDA
Version 1.0 of the UVM class library was released by Accellera at the end of February 2011, the result of a unique collaborative effort between fierce competitors (Siemens EDA, formerly Mentor Graphics; Cadence; and Synopsys) and a small number of activist user companies. The objective was to provide an industry-standard, SystemVerilog-based verification methodology. To celebrate the 10th anniversary of the UVM, I would like to take the opportunity to reflect on how well it has achieved its objectives and what its future might hold.
SOME HISTORY
I am old enough to remember the days before verification methodologies. My first experience writing a testbench involved writing parallel patterns that were fired at a design in a simulator, where every bit in every pattern had to be right and abstraction meant using hex rather than binary format. With the advent of Verilog and VHDL in the late 1980s, abstraction improved substantially, but there was no real industry consensus on how to go about writing testbenches. I can remember spending weeks figuring out the best way to structure a Verilog or VHDL testbench around a design, and I can remember spending months struggling to master testbenches that colleagues or customers had constructed. This was not a scalable approach, and over time the limitations of these verification languages became all too apparent as designs increased in size and complexity.
by Nikhil Jain and Gaurav Manocha, Siemens EDA
OVERVIEW
The massive growth in the production and consumption of data, particularly unstructured data like images, digitized speech, and video, has driven an enormous increase in the use of accelerators. The growing trend toward heterogeneous computing in the data center means that, increasingly, different processors and co-processors must work together efficiently, sharing memory and utilizing caches for data sharing. Sharing memory through caches, however, brings a formidable technical challenge known as coherency, which is addressed by Compute Express Link (CXL).
WHAT IS CXL?
CXL is a technology that enables high-bandwidth, low-latency connectivity between the host processor and devices such as accelerators, memory buffers, and smart I/O devices. CXL is based on the PCI Express® (PCIe®) 5.0 physical layer infrastructure, i.e., it uses PCIe electricals and standard PCIe form factors for the add-in card. Leveraging the PCIe 5.0 infrastructure makes it easy for devices and platforms to adopt CXL without designing and validating the PHY, the channel, any channel extension devices such as retimers, or the upper layers of PCIe, including the software stack. CXL is designed to address growing high-performance computational workloads, supporting heterogeneous processing and memory systems with applications in Artificial Intelligence, Machine Learning, communication systems, and High-Performance Computing, by enabling coherency and memory semantics.
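As a rough illustration of how these pieces relate, the sketch below models a CXL link configuration in SystemVerilog. It is purely illustrative: the class, fields, and defaults are assumptions made for this article and do not correspond to any CXL VIP API, although the protocol names (CXL.io, CXL.cache, CXL.mem) and the 32 GT/s PCIe 5.0 signaling rate are taken from the published specifications.

```systemverilog
// Illustrative only: this class does not model any specific CXL or PCIe VIP.
typedef enum {CXL_IO, CXL_CACHE, CXL_MEM} cxl_protocol_e;

class cxl_link_cfg;
  // CXL reuses the PCIe 5.0 electricals, so the physical link looks like Gen5
  int unsigned lanes      = 16;   // typical add-in card width (assumption)
  int unsigned gt_per_sec = 32;   // PCIe 5.0 signaling rate per lane
  bit          enabled[cxl_protocol_e];

  // A coherent accelerator negotiates CXL.io plus the cache/memory protocols,
  // which is what provides the coherency and memory semantics discussed above.
  function void configure_accelerator();
    enabled[CXL_IO]    = 1;
    enabled[CXL_CACHE] = 1;
    enabled[CXL_MEM]   = 1;
  endfunction
endclass
```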
by Michelle Lange, Jeff Reeve and Tammy Reeve, Patmos Engineering Services
If you are a hardware design or verification engineer, you probably have a good idea of what verification entails. However, add compliance to RTCA/DO-254 as a requirement, and suddenly the definition of “verification” may not be so clear. First, the term “verification” must be understood alongside the synergistic term “validation.” Next, in a DO-254 context, verification spans a wider scope than it does traditionally, so understanding this is crucial. Also, while “advanced verification” is required for the more safety-critical designs, it might not mean what you think it means. Add to this DO-254 terms and concepts like requirements-based testing, elemental analysis, robustness testing, target testing, and independence, and suddenly the realm of verification might feel quite foreign. If DO-254 verification is on your horizon, keep reading to understand the scope, expectations, and nuances of DO-254 verification.
VERIFICATION AND VALIDATION
One of the first clarifications needed when understanding “verification” in the scope of DO-254 is how verification and validation are both intricately synergistic and yet subtly different. RTCA/DO-254 defines validation as “The process of determining that the requirements are the correct requirements and that they are complete” and defines verification as “The evaluation of an implementation of requirements to determine that they have been met.” In simple terms, validation ensures the item is correctly defined, while verification ensures the item operates as per its (validated) definition. Together, validation and verification (referred to as V&V) ensure the hardware item is what it is supposed to be and does what it should do.
by Ajeetha Kumari, Arunachalam R and Satheesh Ethiraj, VerifWorks
INTRODUCTION
Coverage closure is a key step in any IP design verification project. Code coverage is a much-needed metric in most modern-day IP designs. It helps teams ensure that all RTL code written is indeed exercised and verified prior to tape-out. Without such a guarantee, a semiconductor design house may well be risking millions of dollars on a potential bug escape to silicon. Because the code coverage process is well automated, it is widely used in the industry.
One of the challenges in coverage closure is the time taken to get to 100% with potential waivers/exclusions. Traditionally, teams have deployed a battery of RTL designers to review each and every uncovered item to ensure it is indeed a genuine hole and can be safely excluded. This process is expensive, time-consuming, and prone to human error.
Formal verification can help in such scenarios by automating the bulk of this exclusion flow.
In this article, the authors share their experience of using Questa CoverCheck in various IP verification projects, along with the results.
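As a concrete illustration of the kind of coverage hole this flow targets, consider the sketch below. It is not taken from the article, and the module and signal names are invented. The default branch can never execute because the select is only two bits wide, so simulation coverage will always report it as a hole; a formal unreachability analysis can prove the branch dead and flag it as a safe exclusion instead of leaving it to a manual review.

```systemverilog
// Hypothetical RTL used only to illustrate an unreachable coverage item.
module mode_decode (
  input  logic [1:0] mode,
  output logic [3:0] sel
);
  always_comb begin
    case (mode)
      2'b00:   sel = 4'b0001;
      2'b01:   sel = 4'b0010;
      2'b10:   sel = 4'b0100;
      2'b11:   sel = 4'b1000;
      default: sel = 4'b0000; // unreachable: all 2-bit values are covered above
    endcase
  end
endmodule
```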
by Dr. Sam Elliott, Imagination Technologies
With more emphasis within the electronics industry on high performance and shorter time to market, the need for high-confidence, high-quality, end-to-end verification is becoming more and more important. This is especially true for the heavily optimized arithmetic datapath blocks widely used in modern compute and neural network applications. Missed bugs can delay projects and be costly to reputations, and at the very least they are likely to sap performance.
Given that physical hardware is inherently fixed and unchangeable, it is extremely hard, if not impossible, to correct bugs and design faults after release. As such, the verification of hardware IP well before the move to full silicon tape-out is an established and widespread practice. A hardware verification engineer has several tools at their disposal, including:
Simulation: Compiling the hardware design into a software model and applying stimuli to test various aspects of the expected functionality.
Emulation: Compiling the hardware design onto an emulation device (an FPGA or similar) and, much as with simulation, applying stimuli, but often at higher throughput and capacity than the software model could achieve.
Formal Verification: A set of systematic techniques to prove mathematically that the hardware design functions according to its specification.
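To make the contrast concrete, here is a minimal sketch of the formal style (the handshake signal names are assumptions made for illustration, not from the article). The property claims that every request is acknowledged within four cycles; a formal tool attempts to prove this for all possible input sequences, whereas simulation and emulation only check the sequences actually applied.

```systemverilog
// Illustrative checker: signal names and the 4-cycle bound are assumptions.
module req_ack_checker (
  input logic clk,
  input logic rst_n,
  input logic req,
  input logic ack
);
  // Every request must be followed by an acknowledge within 1 to 4 cycles
  property p_req_gets_ack;
    @(posedge clk) disable iff (!rst_n)
      req |-> ##[1:4] ack;
  endproperty

  a_req_gets_ack: assert property (p_req_gets_ack);
endmodule
```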
by Ashish Darbari, Axiomise
In this article, we discuss why formal verification adoption has been limited in industry, and how an abstraction-based methodology in formal verification can help DV engineers successfully adopt formal property checking more widely. Abstraction is the key to obtaining scalability and predictability: it provides an efficient bug-hunting technique and helps establish exhaustive proof convergence. We illustrate our methodology on a FIFO in this article, but similar methods are used to verify designs ranging from a RISC-V processor to multi-million-gate designs.
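To give a flavor of the approach, below is a minimal sketch of a data-abstraction check on a FIFO. The signal names, the show-ahead read behavior, and the free start_track input are assumptions for illustration, not the article's methodology code. Instead of modeling the entire FIFO contents, the checker tracks one symbolic word chosen by the tool and asserts it is popped in order; this is the kind of abstraction that keeps proofs tractable on deep FIFOs.

```systemverilog
// Illustrative sketch only. Assumes a show-ahead FIFO (dout presents the head
// word in the cycle it is popped) and push/pop already qualified against
// full/empty conditions.
module fifo_ordering_check #(parameter int W = 32) (
  input logic         clk, rst_n,
  input logic         push, pop,
  input logic [W-1:0] din, dout,
  input logic         start_track  // left unconstrained: the tool picks the sampling cycle
);
  // Occupancy reconstructed from the interface alone
  int unsigned level;
  always_ff @(posedge clk or negedge rst_n)
    if (!rst_n) level <= 0;
    else        level <= level + (push && !pop) - (pop && !push);

  // Environment assumption (illustrative): never pop an empty FIFO
  asm_no_pop_empty: assume property (@(posedge clk) disable iff (!rst_n) pop |-> level > 0);

  // Track one symbolic word and count the older words queued in front of it
  logic         tracked_valid;
  logic [W-1:0] tracked_data;
  int unsigned  ahead;
  always_ff @(posedge clk or negedge rst_n)
    if (!rst_n) tracked_valid <= 1'b0;
    else if (!tracked_valid && start_track && push) begin
      tracked_valid <= 1'b1;
      tracked_data  <= din;
      ahead         <= level - (pop ? 1 : 0);  // a same-cycle pop removes one older word
    end else if (tracked_valid && pop) begin
      if (ahead == 0) tracked_valid <= 1'b0;   // our word just left; tracking complete
      else            ahead         <= ahead - 1;
    end

  // In-order delivery: when our word reaches the head, the pop must return it
  a_in_order: assert property (@(posedge clk) disable iff (!rst_n)
    (tracked_valid && pop && ahead == 0) |-> dout == tracked_data);
endmodule
```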
INTRODUCTION
Despite its rich history, formal verification adoption is growing mainly through the use of automated apps, but its full potential is hardly exploited, with only 35-40% of projects using formal property checking, according to Harry Foster’s 2020 Wilson Research report. Formal is not the main functional verification technology of choice in the industry, and we believe that this affects the cost of verification, the overall time-to-market, and the quality of results.
by Sandeep Nasa and Sagar Thakran, Logic Fruit Technologies
Verifying a RISC-V Core-Based Design: A Primer
This article focuses on providing a jump start on RISC-V development. It shows how to quickly build a verification environment involving a RISC-V core and the peripherals required for selected applications.
RISC-V is an open-source instruction set architecture (ISA) specification. It is a general-purpose ISA developed at U.C. Berkeley, designed to support a wide variety of applications, from micro-power embedded devices to high-performance cloud server multi-processors, and it is freely available for anyone to build a processor core compliant with the ISA. Because it is open source, the processor core can be customized to application requirements while remaining compliant with the RISC-V ISA, which has led to a rapidly growing ecosystem in the market. These customized cores are put to work in various applications by integrating the required peripherals. Many companies and organizations have developed RISC-V cores for targeted applications and made them available for further enhancement via open source.
To make a working application, we need a verification environment to verify the intended functionality. A good verification environment and flow are required to verify the targeted application and showcase its performance for commercial needs. To fulfill the goal of creating a system with the core and peripherals based on an application, we need to bring up the verification environment along with selected test cases that suit our needs. To achieve this, we need to select one of the standard cores along with its test suite. We have selected a 32-bit RISC-V core from Western Digital, the SweRV EH1.
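As a rough sketch of what the top level of such an environment can look like, the skeleton below compiles on its own, but every name in it (the wrapper module, its memory, the hex file) is a placeholder rather than the actual SweRV EH1 integration; the core's own integration guide defines the real interface and test-suite flow.

```systemverilog
// Placeholder wrapper standing in for the real core-plus-peripherals subsystem.
module riscv_soc_wrapper (input logic clk, input logic rst_n);
  // Simple instruction memory model that the test program is loaded into.
  logic [31:0] imem [0:4095];
endmodule

// Minimal harness: generate clock and reset, then preload the program image.
module tb_top;
  logic clk = 0;
  logic rst_n;

  always #5 clk = ~clk;   // free-running clock for simulation

  riscv_soc_wrapper dut (.clk(clk), .rst_n(rst_n));

  initial begin
    // Load a compiled test (e.g., from the core's test suite) into memory.
    $readmemh("test_program.hex", dut.imem);

    rst_n = 0;
    repeat (10) @(posedge clk);
    rst_n = 1;
  end
endmodule
```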
by Lee Moore, Simon Davidmann, Larry Lapides and Kevin McDermott, Imperas Software, Ltd.
The open standard ISA of RISC-V also allows SoC developers to build or modify a processor core optimized for the application requirements. SoC verification tasks are adapting to address the resulting significant increases in complexity. This article covers the six key components of RISC-V processor verification: the DV plan, the RTL DUT, the testbench, the tests, the reference model, and the Siemens EDA Questa SystemVerilog simulation environment.
Within the RISC-V specification, many standard extensions and options are available, in addition to any user-defined custom instructions. While some processor DV aspects may appear similar to a modern SoC verification flow, the flexibility of the open standard ISA of RISC-V makes almost every step uniquely challenging. This article provides some insights from the development of the latest architectural validation test suites for the RISC-V Vector draft specification, using Siemens EDA Questa with a UVM SystemVerilog testbench, including coverage analysis and results.
INTRODUCTION
RISC-V processor verification is growing almost as fast as the adoption of RISC-V in SoC designs. This is due in part to the flexibility permitted by an open standard ISA (Instruction Set Architecture), which allows SoC developers to build or modify the processor design. As SoC developers address the additional processor verification tasks in their SoC design verification (DV) plans, they face significant increases in verification complexity. This article covers the six key components of RISC-V processor verification.
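As one common pattern for the reference-model component, the sketch below compares each retired instruction from the DUT against the reference model. The class and field names are assumptions for illustration, not the testbench described in the article.

```systemverilog
// Illustrative step-and-compare scoreboard; field names are assumptions.
class retired_instr;
  bit [31:0] pc;        // program counter of the retired instruction
  bit [31:0] instr;     // encoding, useful for debug messages
  bit [4:0]  rd;        // destination register index
  bit [31:0] rd_value;  // value written to rd
endclass

class step_compare_scoreboard;
  int unsigned mismatches;

  // Called once per retired instruction with the DUT and reference results
  function void compare(retired_instr dut_tr, retired_instr ref_tr);
    if (dut_tr.pc       != ref_tr.pc ||
        dut_tr.rd       != ref_tr.rd ||
        dut_tr.rd_value != ref_tr.rd_value) begin
      mismatches++;
      $error("Mismatch at PC 0x%08h: DUT wrote x%0d=0x%08h, ref wrote x%0d=0x%08h",
             dut_tr.pc, dut_tr.rd, dut_tr.rd_value, ref_tr.rd, ref_tr.rd_value);
    end
  endfunction
endclass
```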