Verification Horizons Articles:
by Tom Fitzpatrick, Editor and Verification Technologist, Mentor Graphics Corporation
As I write this, we're experiencing yet another winter storm here in New England. It started this morning, and the timing was fortuitous since my wife had scheduled a maintenance visit by the oil company to fix a minor problem with the pipes before it really started snowing heavily. While the kids were sleeping in due to school being cancelled, the plumber worked in our basement to make sure everything was working well. It turned out that he had to replace the water feeder valve on the boiler, which was preventing enough water from circulating in the heating pipes. Aside from being inefficient, this also caused the pipes to make a gurgling sound, which was the key symptom that led to the service call in the first place. As I see the snow piling up outside my window (6-8 inches and counting), it's easy to picture the disaster that this could have become had we not identified the problem early and gotten it fixed.
by Stuart Sutherland, Sutherland HDL, Inc.
The little things engineers do when coding RTL models can add up to a significant boost in verification productivity. A significant portion of SystemVerilog is synthesizable, and taken individually, these synthesizable RTL modeling constructs might seem insignificant and, therefore, easy to overlook when developing RTL models. These "little things," however, are like getting free assertions embedded directly in the RTL code, some of which would be quite complex to write by hand. Using these SystemVerilog constructs in RTL modeling can reduce verification and debug time. This article presents several features that SystemVerilog adds to traditional Verilog RTL modeling that can help catch subtle coding errors and make verification easier and more efficient.
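As a hypothetical illustration of the kind of "free assertions" the article refers to (this sketch is not taken from the article; the module and signal names are invented):

```systemverilog
// Minimal sketch: synthesizable SystemVerilog constructs that embed
// checking "for free". Names (alu, opcode, etc.) are illustrative only.
module alu (
  input  logic [1:0] opcode,
  input  logic [7:0] a, b,
  output logic [7:0] result
);
  // always_comb (vs. Verilog's always @*) lets tools warn if the block
  // accidentally infers a latch or is not purely combinational.
  always_comb begin
    // unique case acts like a built-in assertion: simulation reports an
    // error if opcode matches no branch or more than one branch.
    unique case (opcode)
      2'b00: result = a + b;
      2'b01: result = a - b;
      2'b10: result = a & b;
      2'b11: result = a | b;
    endcase
  end
endmodule
```

Writing the equivalent checks by hand would require separate assertions for latch inference and case completeness; here they come along with the RTL coding style itself.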
by Gaurav Jalan & Senthil Duraisamy, SmartPlay Technologies
The internet revolution has changed the way we share content, and the mobile revolution has boosted this phenomenon in terms of content creation and consumption. Moving forward, the Internet of Things will further drive this explosion of data. As a result, the area and performance battle has taken a back seat, and optimizing power consumption is at the forefront. Longer battery life for handhelds and for devices running on coin cell batteries is the primary driver for this change. Initiatives to reduce global energy consumption also play a vital role in promoting it further. Power consumption on silicon is a result of switching activity (dynamic power) and leakage (static power), with the latter claiming prominence at lower process nodes. The functional profile of devices targeting the Internet of Things, e.g. sensors, suggests that there will be minimal switching activity throughout the day, so leakage will be the main contributor to power consumption if the circuit is on all the time. Such products demand implementation of features like power shutoff, multiple voltage domains, and voltage/frequency scaling. Traditional HDLs (Hardware Description Languages) and simulators have no notion of power on/off or voltage variations, so an additional scheme is needed to describe the power intent of a design, along with a simulator that can validate it. The IEEE 1801 UPF (Unified Power Format) standard defines the semantics to represent the low power features intended for a design, and Questa is a simulator solution that enables verification of power aware designs. This article discusses how power aware simulations using Questa helped us identify power related bugs early in the design cycle.
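To make "power intent" concrete, here is a minimal hypothetical UPF fragment (not from the article; the domain, instance, and signal names are invented) showing a switchable power domain with output isolation:

```tcl
# Hypothetical UPF fragment: a switchable power domain for a sensor
# block, with isolation so its outputs are clamped to 0 while it is off.
# All names (PD_SENSOR, u_sensor, iso_en, ...) are invented for illustration.
create_power_domain PD_SENSOR -elements {u_sensor}
create_supply_port VDD_SENSOR
create_supply_net  VDD_SENSOR -domain PD_SENSOR
set_isolation iso_sensor -domain PD_SENSOR \
    -isolation_signal iso_en -clamp_value 0 -applies_to outputs
```

A power aware simulator reads this description alongside the HDL, corrupts the domain's state while power is off, and checks that isolation is asserted at the right times, which is how the power bugs the article describes can be caught in simulation.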
by CJ Clark & Craig Stephan, Intellitech Corporation
IEEE 1149.1-2013 is not your father's JTAG. The new release in June of 2013 represents a major leap forward in standardizing how FPGAs, SoCs and 3D-SICs can be debugged and tested. The standard defines register-level descriptions of on-chip IP, with operational descriptions via the new 1149.1 Procedural Description Language (PDL).[1,2,3] IEEE 1149.1-2013 adds support for TAP-based access to on-chip IP; configuring I/O parameters such as differential voltage swing; crossing power domains and controlling on-chip power; segmented scan chains; and interfacing to IEEE 1500 Wrapper Serial Ports and WIRs. The standard is architected to lower the cost of electronic products by enabling reuse of on-chip instruments through all phases of the IC life cycle. It takes a 'divide and conquer' approach, allowing IP (instrument) providers, who have the most domain expertise in their IP, to communicate the structure and operation of that IP in computer-readable formats.
by Hari Patel & Dhaval Prajapati, eInfochips
UVM/OVM methodologies are the first choice in the semiconductor industry today for creating verification environments. Because UVM/OVM are based on TLM (Transaction-Level Modeling), sequences and sequence items play vital roles and must be created as efficiently as possible in order to reduce rework and simulation time, and to make the verification environment user friendly. This article covers how to write generic, reusable sequences so that it is easy to add a new test case or sequence. We use the SRIO (Serial RapidIO) protocol as an example.
In UVM- and OVM-based environments, sequences are the basic building blocks directing the scenario generation process. Scenario generation consists of a sequence of transactions generated using UVM/OVM sequencers, so it is important to write sequences efficiently. A key point to keep in mind is that sequences are not just for generating one random scenario for a specific test case. Sequences should be user- and protocol-friendly, usable at the test case level, and flexible enough to allow maximum reuse when randomly generating scenarios.
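One common way to get that flexibility is to expose randomizable "knobs" on a base sequence so tests constrain it rather than rewrite it. A hypothetical sketch (the class and field names such as srio_base_seq and num_pkts are invented, not taken from the article):

```systemverilog
// Minimal sketch of a reusable UVM sequence with test-level knobs.
// All names (srio_base_seq, srio_item, err_en, ...) are illustrative.
class srio_base_seq extends uvm_sequence #(srio_item);
  `uvm_object_utils(srio_base_seq)

  rand int unsigned num_pkts;     // how many transactions to generate
  rand bit          allow_errors; // enable protocol error injection

  constraint c_default { num_pkts inside {[1:20]}; }

  function new(string name = "srio_base_seq");
    super.new(name);
  endfunction

  virtual task body();
    repeat (num_pkts) begin
      srio_item item = srio_item::type_id::create("item");
      start_item(item);
      if (!item.randomize() with { err_en == allow_errors; })
        `uvm_error(get_type_name(), "randomize failed")
      finish_item(item);
    end
  endtask
endclass

// A directed test case then reuses the same sequence with an
// inline constraint instead of writing a new one:
//   srio_base_seq seq = srio_base_seq::type_id::create("seq");
//   if (!seq.randomize() with { num_pkts == 5; allow_errors == 0; })
//     `uvm_error("TEST", "randomize failed")
//   seq.start(env.agent.sequencer);
```

With this pattern, adding a new test case is usually a matter of a new constraint set, not a new sequence class.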
by Martin Vlach, Mentor Graphics
I don't know how this came about, but the other day I got hired to do something called AMS Verification. It seems that there is this chip design that combines digital and analog stuff, and I was asked to make sure that all of it works when it's put together and that it does what it was meant to do when they got going in the first place.
Not knowing any better, I guess I'll start by hoping that all of the pieces they handed to me were done just right as far as those designer dudes understood when they got handed their jobs. So here's what I'm thinking: The dudes may be real good in designing, but they are humans too and so they probably missed some points, and misunderstood some others, and when it's all put together, Murphy says that things will go wrong. Besides, those analog and digital dudes don't talk to each other anyway.
by Matthew Ballance, Mentor Graphics
We've come a long way since digital designs were sketched as schematics by hand on paper and tested in the lab by wiring together discrete integrated circuits, applying generated signals and checking for proper behavior. Design evolved to gate-level on a workstation and on to RTL, while verification evolved from simple directed tests to directed random, constrained-random, and systematic testing. At each step in this evolution, significant investment has been made in training, development of reusable infrastructure, and tools. This level of investment means that switching to a new verification environment, for example, has a cost and tends to be a carefully-planned migration rather than an abrupt switch. In any migration process, technologies that help to bring new advances into the existing environment while continuing to be valuable in the future are critical methodological "bridges".