-
by Tom Fitzpatrick - Mentor, A Siemens Business
Welcome once again to our annual Super-Sized DAC Edition of Verification Horizons!
There are two similar experiences I’d like to share with you before we get into the great articles in this edition.
A few weeks ago, I attended the Ultimate Frisbee College National Championships to cheer on my son's Georgetown team, which had qualified for the tournament for the first time ever. They were seeded 19th out of 20 teams and hadn't expected to qualify at all, so most of the team, fellow students, alumni, and parents were just happy to be there to support them. No one had great expectations going into the tournament. Yet to our surprise, the team did well enough to find themselves in a qualifying match for the championship bracket. They played really well but unfortunately lost the game 13-12.
-
by Anwesha Choudhury, Ashish Hari, Aditya Vij, and Ping Yeung - Mentor, A Siemens Business
The size and complexity of designs, and the way they are assembled, are changing the clock-domain crossing (CDC) verification landscape. It is now common for these complex SoCs to have hundreds of asynchronous clocks.
As CDC signals can lead to metastability, CDC metastability issues have become one of the leading causes of design re-spins. This trend has made CDC verification an even more critical step in the design verification cycle than before. Naturally, this requires more time and effort to verify CDC issues and poses multiple challenges for CDC verification at the SoC level.
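The cost of a metastability escape can be made concrete with the standard synchronizer MTBF model. The sketch below is illustrative only: the formula is the widely used MTBF relation for a synchronizer stage, but the process parameters and frequencies are made-up numbers, not values from the article.

```python
import math

def synchronizer_mtbf(t_resolve, tau, t_window, f_clk, f_data):
    """Mean time between metastability failures (in seconds) for one
    synchronizer stage, using the standard model:
        MTBF = e^(t_resolve / tau) / (t_window * f_clk * f_data)
    """
    return math.exp(t_resolve / tau) / (t_window * f_clk * f_data)

# Illustrative (made-up) process and clocking numbers:
mtbf_one_stage = synchronizer_mtbf(
    t_resolve=1e-9,   # slack available for the flop to resolve (s)
    tau=50e-12,       # metastability resolution time constant (s)
    t_window=20e-12,  # metastability capture window (s)
    f_clk=500e6,      # receiving clock frequency (Hz)
    f_data=100e6,     # data toggle rate (Hz)
)
# A single stage here resolves to an MTBF of only minutes; adding a
# second synchronizer flop grows the exponent by roughly one more
# clock period, improving MTBF by many orders of magnitude.
```

This is why CDC tools insist on proper synchronizer structures rather than relying on any one delay assumption.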
-
by Matthew Ballance - Mentor, A Siemens Business
Writing and reading registers is the primary way that the behavior of most IPs is controlled and queried. Because registers are so fundamental to the correct operation of designs, register tests are a seemingly simple but important aspect of design verification and bring-up. At the IP level, the correct implementation of registers must be verified: that they are accessible from the interfaces on the IP block and that they have the correct reset values. At the subsystem level, verifying access to registers helps to confirm that the interconnect network and address decode have been implemented as per the spec. At the SoC level, verifying access to registers confirms that the processor byte order matches the interconnect implementation, and that the boot code properly configures the memory management unit (MMU) such that IP registers are visible to the processor.
In this article, we will explore how portable stimulus, via Accellera’s Portable Stimulus Standard (PSS), can leverage information captured in a register model to automate creation of block, subsystem, and SoC register-access tests.
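The core idea of deriving register-access tests from a register model can be sketched in a few lines. The following is a minimal illustration, not PSS itself: the `Register` fields and the two generator functions are hypothetical names chosen for this sketch, standing in for the reset-value and write/read-back checks the article describes.

```python
from dataclasses import dataclass

@dataclass
class Register:
    name: str
    offset: int          # byte offset within the block
    reset_value: int
    writable_mask: int   # bits that are software-writable

def generate_reset_tests(base_addr, regs):
    """Emit (address, expected value) pairs: after reset, a read of
    each register should return its documented reset value."""
    return [(base_addr + r.offset, r.reset_value) for r in regs]

def generate_rw_tests(base_addr, regs, pattern=0xA5A5A5A5):
    """Emit (address, write value, expected readback) tuples for a
    write/read-back check, masking off read-only bits."""
    tests = []
    for r in regs:
        wr = pattern & r.writable_mask
        expected = wr | (r.reset_value & ~r.writable_mask & 0xFFFF_FFFF)
        tests.append((base_addr + r.offset, pattern, expected))
    return tests

# Hypothetical two-register block:
regs = [
    Register("CTRL",   0x00, 0x0000_0000, 0x0000_00FF),
    Register("STATUS", 0x04, 0x0000_0001, 0x0000_0000),  # read-only
]
reset_tests = generate_reset_tests(0x4000_0000, regs)
rw_tests = generate_rw_tests(0x4000_0000, regs)
```

Because the tests are generated from the model rather than hand-written, the same description can drive block-level bus transactions, subsystem interconnect checks, or SoC boot code.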
-
by Doug Smith - Mentor, A Siemens Business
The ISO 26262 automotive safety standard requires evaluation of safety goal violations due to random hardware faults to determine diagnostic coverage (DC) for calculating safety metrics. Injecting faults using simulation can be time-consuming and tedious, and may not activate the design in a way that propagates the faults for testing. With formal verification, however, faults are easily categorized by structural analysis, providing a worst-case DC. Furthermore, faults are analyzed to verify propagation, safety goal violations, and fault detection, providing accurate, high-confidence results. This article describes in detail how to run a better fault campaign using formal.
The automotive functional safety standard, ISO 26262, defines two verification requirements for integrated circuits used in automotive applications—systematic failure verification and random hardware failure verification. Systematic failures are design bugs found with typical functional verification. Random hardware failure verification is where faults are injected into a design to test assumptions about the design’s safety mechanisms to see if they work. Just as functional verification uses coverage to determine completeness, the ISO standard defines a form of coverage for random hardware failures called diagnostic coverage, which represents the percentage of the safety element that is covered by the safety mechanism.
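The diagnostic coverage arithmetic itself is simple once a fault campaign has classified each fault. The sketch below is a minimal illustration with made-up tallies; the classification terms follow the common safe / detected / undetected split, not any specific tool's report format.

```python
def diagnostic_coverage(detected, undetected):
    """DC as the percentage of dangerous (safety-goal-violating)
    faults that the safety mechanism detects:
        DC = detected / (detected + undetected) * 100
    Safe faults (those that cannot violate the safety goal) are
    excluded from the denominator."""
    total_dangerous = detected + undetected
    return 100.0 * detected / total_dangerous

# Illustrative fault-campaign tallies (made-up numbers):
safe = 700        # faults proven unable to violate the safety goal
detected = 270    # dangerous faults caught by the safety mechanism
undetected = 30   # dangerous faults that escape detection
dc = diagnostic_coverage(detected, undetected)  # 90.0 percent
```

Formal structural analysis helps on both sides of this fraction: proving faults safe shrinks the dangerous population, and proving propagation or detection moves faults out of the pessimistic "undetected" bucket.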
-
by Anurag Singh - Mentor, A Siemens Business
Verification planning requires identifying the key features in the design specification, then prioritizing and testing that functionality; this process leads to the development of a coverage model.
Functional coverage is an integral part of verifying the completeness of an IP. The key to functional coverage is an exhaustive verification plan containing coverpoints and crosses extracted from the protocol specification. This article explains the need for coverage-driven verification for Non-Volatile Memory Express (NVMe) and how QVIP achieves it.
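To make the coverpoint/cross idea concrete, here is a toy coverage model in Python. This is only a sketch of the mechanism; the bin names are illustrative, NVMe-flavored placeholders, not coverpoints from the actual QVIP plan.

```python
class Covergroup:
    """Toy functional-coverage model: coverpoints are named bin
    lists, and each sample also records a cross (the tuple of all
    sampled values)."""
    def __init__(self, coverpoints):
        self.coverpoints = coverpoints             # name -> list of bins
        self.hits = {n: set() for n in coverpoints}
        self.cross_hits = set()

    def sample(self, **values):
        for name, value in values.items():
            if value in self.coverpoints[name]:
                self.hits[name].add(value)
        self.cross_hits.add(tuple(values[n] for n in self.coverpoints))

    def coverage(self):
        total = sum(len(bins) for bins in self.coverpoints.values())
        hit = sum(len(h) for h in self.hits.values())
        return 100.0 * hit / total

# Hypothetical NVMe-flavored coverpoints (names illustrative only):
cg = Covergroup({
    "opcode":     ["read", "write", "flush"],
    "queue_type": ["admin", "io"],
})
cg.sample(opcode="read",  queue_type="io")
cg.sample(opcode="write", queue_type="io")
print(f"{cg.coverage():.1f}% coverage")  # 2 of 3 opcodes, 1 of 2 queue types hit
```

An exhaustive verification plan is essentially a much larger version of this: every field of every command and completion becomes a coverpoint, and protocol-relevant combinations become crosses.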
NVMe is an interface specification for PCI Express®-based solid state drives focusing on low latency, scalability, and security. NVMe is a host controller interface and storage protocol that provides high-speed data transfer between enterprise and client systems and SSDs over a computer's PCIe bus; i.e., it operates at the application layer.
-
by Progyna Khondkar - Mentor, A Siemens Business
Part I of this article provided a consolidated approach to understanding verification tools and methodologies that apply a set of pre-defined power aware (PA) or multi-voltage (MV) rules, based on the power requirements, statically on the structures of the design. More precisely, the rule sets are applied on the physical structure, architecture, and microarchitecture of the design, in conjunction with the UPF specification but without the requirement of any external stimulus or testbenches. This makes the PA-Static tool simpler and a popular choice for verifying various features of UPF strategies (e.g., whether UPF strategy definitions are correct, incorrect, missing, or redundant; whether MV cells are correct, incorrect, missing, or redundant; and whether UPF strategies are present but cells are missing, or vice versa).
Part I also explained that PA-Static checks work beyond UPF strategies. Part I: List 2, Essential PA-Static Checks at Different Design Abstraction Levels, is the best reference for a complete list of PA-Static checks. In addition, Part I: List 4, Summary of Information Analyzed by the PA-Static Tool, showed that the tool may be deployed to conduct verification as early as RTL using the first five of the listed categories of information (i.e., power domains, power domain boundaries, power domain crossings, and power states). The last two, the cell- and pin-level attributes, are mandatory information for the GL-netlist and PG-netlist levels of the design. Part I also revisited the library requirements and processing techniques for PA-Static tools. In Part II of the article, we conclude this discussion with a real example, analyzing PA-Static results and reporting as well as efficient debugging of PA-Static anomalies.
-
by Charley Selvidge and Vijay Chobisa - Mentor, A Siemens Business
A significant evolution is underway in SoC verification and validation.
The complexity of SoC designs has resulted in the need to perform both comprehensive verification as well as system-level validation very early in the design cycle, often before stable RTL code is available for the entire design. This same complexity has also created the need for extensive internal visibility into the design to understand subtle problems that can occur during silicon bring-up.
While the needed level of visibility can be provided by a model of the design, the modeling environment must execute fast enough to run content that matches silicon tests and highlight issues. Hardware emulation has the execution speed, full visibility capabilities, and ease of use in model creation and model updates to span the entire range of needs throughout the life of the design development process.
-
by Ishanee Bajpai - Agnisys
Design complexity is increasing along with advancing technology. Reusing the same design in different applications and for multiple configurations is one way to deal with this complexity. Often this leads to the addressable registers in a design becoming more complex as well, to support a multitude of functionalities.
Unlike simple addressable registers, complex addressable registers are not easy to implement in UVM and RTL using a simple script or an off-the-shelf generator. These complex addressable registers have special functionalities based on the particular application and are used to control various aspects of the design.
Often such registers are used in mission-critical applications. If a designer uses a simple script to create the RTL, they will need to change the RTL and UVM code manually to describe the complex registers, then re-simulate and verify the RTL.
Additionally, complex registers may have side effects. They may interact with other register fields or hardware signals, they may have parameterized fields, and they may be wider than the software bus, making them non-atomic to read or write.
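The non-atomic access problem, and one common hardware remedy for it, can be sketched in software. The model below is illustrative only: the class name and snapshot-on-low-read scheme are one well-known pattern for wide counters, not a description of any particular customer register.

```python
class WideRegister:
    """Model of a 64-bit register behind a 32-bit software bus.
    Software must read it as two 32-bit halves, so the access is not
    atomic. A common remedy (modeled here) is to latch a snapshot of
    the full value when the low half is read, and serve the high half
    from that snapshot, guaranteeing a coherent 64-bit view."""
    def __init__(self):
        self.value = 0        # live hardware value (updated by HW)
        self._snapshot = 0    # value latched on low-half read

    def hw_update(self, new_value):
        self.value = new_value & 0xFFFF_FFFF_FFFF_FFFF

    def bus_read_low(self):
        self._snapshot = self.value     # latch a coherent 64-bit view
        return self._snapshot & 0xFFFF_FFFF

    def bus_read_high(self):
        return (self._snapshot >> 32) & 0xFFFF_FFFF

counter = WideRegister()
counter.hw_update(0x0000_0001_FFFF_FFFF)
low = counter.bus_read_low()              # 0xFFFF_FFFF
counter.hw_update(0x0000_0002_0000_0000)  # hardware rolls over mid-read
high = counter.bus_read_high()            # 0x0000_0001, from the snapshot
```

Without the snapshot, the two reads would combine into 0x0000_0002_FFFF_FFFF, a value the counter never held. Side-effect behavior like this is exactly what a simple register-generation script cannot express, and what the generated UVM model must also mimic.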
In this article, we will discuss some complex registers that we have seen our customers use in mission-critical applications.
-
by Ajay Daga, Naveen Battu - FishTail Design Automation, Inc.
It is important that certain timing endpoints on a design are safe from glitches. For example, it is necessary that an asynchronous reset never have a glitch that momentarily resets a flop. It is also necessary that multi-cycle paths are safe from glitches, i.e., it should not be the case that while a cycle accurate simulation of the RTL shows correct multi-cycle behavior, once delays are accounted for a glitch can propagate along the path resulting in a single-cycle path.
Traditionally, engineers have verified that a design is safe from glitches with delay-annotated gate-level simulation. There are several issues with this approach: it only confirms that there is no glitch for the specific delay values simulated, and it happens late in the design cycle. In other words, it is late and incomplete. What engineers want is to establish, once their RTL is frozen, that it is impossible for certain timing endpoints to glitch regardless of the final gate-level implementation and circuit delays. They want safety from glitches by design.
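The gap between cycle-accurate and delay-annotated behavior is easy to demonstrate with a toy model. The sketch below, written for this article summary rather than taken from the FishTail flow, simulates a mux whose select and inverted-select legs have unequal path delays: zero-delay RTL simulation sees a constant output, while the delay-annotated trace dips low.

```python
def simulate_mux_glitch(delay_true, delay_inv, horizon=15):
    """Toy delay-annotated simulation of y = (a & sel) | (b & ~sel)
    with a = b = 1. In a zero-delay (cycle-accurate) view, y is
    constantly 1; with unequal path delays on sel and ~sel, both mux
    legs can be momentarily off, and the output glitches low."""
    sel = lambda t: 1 if t >= 5 else 0      # sel rises at t = 5
    y = []
    for t in range(horizon):
        sel_true = sel(t - delay_true)      # delayed true select
        sel_inv = 1 - sel(t - delay_inv)    # delayed inverted select
        y.append(sel_true | sel_inv)        # a = b = 1 folds away
    return y

trace = simulate_mux_glitch(delay_true=4, delay_inv=2)
# trace dips to 0 at t = 7..8: a glitch invisible to zero-delay RTL sim
zero_delay = simulate_mux_glitch(delay_true=0, delay_inv=0)  # all 1s
```

A formal glitch check must prove the absence of such pulses for every legal delay assignment, which is exactly what specific-delay gate-level simulation cannot do.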
-
by Marcela Zachariášová and Luboš Moravec - Codasip Ltd., John Stickley and Shakeel Jeeawoody - Mentor, A Siemens Business
RISC-V is a free-to-use and open ISA developed at the University of California, Berkeley, now officially supported by the RISC-V Foundation [1][2]. It was originally designed for research and education, but it is currently being adopted in many commercial implementations, much like ARM cores. This flexibility is reflected in its many ISA extensions. In addition to the basic Integer ("I"/"E") ISA, many instruction extensions are supported, including the multiplication and division extension ("M"), compressed instructions extension ("C"), atomic operations extension ("A"), floating-point extension ("F"), double-precision floating-point extension ("D"), quad-precision floating-point extension ("Q"), and others. By combining them, more than 100 viable ISAs can be created.
Codasip is a company that delivers RISC-V IP cores, internally named Codix Berkelium (Bk). In contrast to the standard design flow, as defined for example in [3][4], the design flow utilized by Codasip is highly automated; see Fig. 1. Codasip describes processors at a higher abstraction level using an architecture description language called CodAL. Each processor is described by two CodAL models: the instruction-accurate (IA) model and the cycle-accurate (CA) model. The IA model describes the syntax and semantics of the instructions and their functional behavior without any micro-architectural details. To complement it, the CA model describes micro-architectural details such as pipelines, decoding, timing, etc. From these two CodAL models, Codasip tools can automatically generate SDK tools (assembler, linker, C-compiler, simulators, profilers, debuggers) together with RTL and UVM verification environments, as described in [5]. In UVM, the IA model is used as a golden predictor model, and the RTL generated from the CA model is used as the Design Under Test (DUT). Such a high level of automation allows for very fast exploration of the design space, producing a unique processor IP with all the software tools in minutes.
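The "more than 100 viable ISAs" figure can be sanity-checked by simple enumeration. The count below considers only the two bases and six extensions named above and deliberately ignores dependency rules between extensions (for example, "D" requires "F" in the RISC-V specification), so it is an upper bound rather than an exact tally.

```python
from itertools import combinations

# Base integer ISAs and the optional extensions named in the text.
bases = ["I", "E"]
extensions = ["M", "C", "A", "F", "D", "Q"]

def enumerate_isas(bases, extensions):
    """Enumerate every base-plus-extension-subset combination.
    Dependency rules between extensions are ignored, so this is an
    upper bound on the number of truly viable ISAs."""
    isas = []
    for base in bases:
        for r in range(len(extensions) + 1):
            for combo in combinations(extensions, r):
                isas.append(base + "".join(combo))
    return isas

isas = enumerate_isas(bases, extensions)
print(len(isas))  # 2 * 2**6 = 128, consistent with "more than 100"
```

Each such combination needs its own compiler, simulator, and verification environment, which is why generating all of them automatically from the CodAL models matters.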