Verification Horizons - Tom Fitzpatrick, Editor

The advent of new technologies — such as constrained-random data generation, assertion-based verification, coverage-driven verification, formal model checking and Intelligent Testbench Automation, to name a few — has changed the way we see functional verification productivity. An advanced verification process enables users to manage the application of these new technologies in a complementary way, providing confidence that the myriad corner cases of today's increasingly complex designs have been covered.

The Verification Horizons publication expands upon verification topics to provide concepts, values, methodologies and examples to assist with the understanding of what these advanced functional verification technologies can do and how to most effectively apply them.

In addition, the Verification Horizons Blog offers an online forum with updates on concepts, values, standards, methodologies and examples to help readers understand what advanced functional verification technologies can do and how to apply them most effectively.

Verification Horizons Issues:

2014: March
2013: October | June | February
2012: October | June | February
2011: November | June | February
2010: November | June | February


March 2014 | Volume 10, Issue 1

Verification Horizons Articles:

Whether It's Fixing a Boiler, or Getting to Tapeout, It's Productivity that Matters

by Tom Fitzpatrick, Editor and Verification Technologist, Mentor Graphics Corporation

As I write this, we're experiencing yet another winter storm here in New England. It started this morning, and the timing was fortuitous since my wife had scheduled a maintenance visit by the oil company to fix a minor problem with the pipes before it really started snowing heavily. While the kids were sleeping in due to school being cancelled, the plumber worked in our basement to make sure everything was working well. It turned out that he had to replace the water feeder valve on the boiler, which was preventing enough water from circulating in the heating pipes. Aside from being inefficient, this also caused the pipes to make a gurgling sound, which was the key symptom that led to the service call in the first place. As I see the snow piling up outside my window (6-8 inches and counting), it's easy to picture the disaster that this could have become had we not identified the problem early and gotten it fixed. Whether It's Fixing a Boiler, or Getting to Tapeout, It's Productivity that Matters.

Don't Forget the Little Things That Can Make Verification Easier

by Stuart Sutherland, Sutherland HDL, Inc.

The little things engineers can do when coding RTL models can add up to a significant boost in verification productivity. A significant portion of SystemVerilog is synthesizable. Taken individually, these synthesizable RTL modeling constructs might seem insignificant, and, therefore, easy to overlook when developing RTL models. These "little things", however, are like getting free assertions embedded directly in the RTL code, some of which would be quite complex to write by hand. Using these SystemVerilog constructs in RTL modeling can reduce verification and debug time. This article presents several features that SystemVerilog adds to traditional Verilog RTL modeling that can help catch subtle RTL coding errors, and make verification easier and more efficient. Don't Forget the Little Things That Can Make Verification Easier
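
As a flavor of what this means in practice, here is a minimal sketch (not code from the article) built around an invented state machine: always_ff and always_comb document design intent and eliminate sensitivity-list mistakes, an enumerated type restricts the state signal to named values, and unique case adds a built-in run-time check, much like a free embedded assertion.

```systemverilog
// Minimal sketch (not code from the article); the state machine is invented.
module fsm_example (
  input  logic clk, rst_n, start, done,
  output logic busy
);
  typedef enum logic [1:0] {IDLE, RUN, HOLD} state_e;  // only named states are legal
  state_e state, next;

  always_ff @(posedge clk or negedge rst_n)   // declares sequential intent
    if (!rst_n) state <= IDLE;
    else        state <= next;

  always_comb begin               // declares combinational intent; the tool
    next = state;                 // infers the complete sensitivity list
    unique case (state)           // run-time check that exactly one branch
      IDLE: if (start) next = RUN;     // matches, which acts like a free assertion
      RUN:  if (done)  next = HOLD;
      HOLD:            next = IDLE;
    endcase
  end

  assign busy = (state != IDLE);
endmodule
```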

Taming Power Aware Bugs with Questa®

by Gaurav Jalan & Senthil Duraisamy, SmartPlay Technologies

The internet revolution has changed the way we share content, and the mobile revolution has boosted this phenomenon in terms of content creation and consumption. Moving forward, the Internet of Things will further drive this explosion of data. As a result, the area and performance battle has taken a back seat and optimizing power consumption is at the forefront. Longer battery life for handhelds and for devices running on coin cell batteries is the primary driver for this change. Initiatives to reduce global energy consumption also play a vital role in promoting this further. Power consumption in silicon is a result of switching activity (dynamic power) and leakage (static power), with the latter claiming prominence at lower process nodes. The functional profile of devices targeting the Internet of Things, e.g. sensors, suggests minimal switching activity throughout the day, so leakage would be the main contributor to power consumption if the circuit were on all the time. Such products demand implementation of features like power shut-off, multi-voltage operation and voltage/frequency scaling. Traditional HDLs (hardware description languages) and simulators are alien to power on/off and voltage variations, so an additional scheme is needed to describe the power intent of the design, along with a simulator to validate it. The IEEE 1801 UPF (Unified Power Format) standard defines the semantics to represent the low power features intended for a design, and Questa is a simulator that enables verification of power aware designs. This article discusses how power aware simulations using Questa helped us identify power related bugs early in the design cycle. Taming Power Aware Bugs with Questa®

Using Mentor Questa® for Pre-silicon Validation of IEEE 1149.1-2013 based Silicon Instruments

by CJ Clark & Craig Stephan, Intellitech Corporation

IEEE 1149.1-2013 is not your father's JTAG. The new release in June of 2013 represents a major leap forward in standardizing how FPGAs, SoCs and 3D-SICs can be debugged and tested. The standard defines register level descriptions of on-chip IP with operational descriptions via the new 1149.1 Procedural Description Language [1, 2, 3]. IEEE 1149.1-2013 adds support for TAP based access to on-chip IP; configuring I/O parameters such as differential voltage swing; crossing power domains and controlling on-chip power; segmented scan chains; and interfacing into IEEE 1500 Wrapper Serial Ports and WIRs. The standard is architected to lower the cost of electronic products by enabling re-use of on-chip instruments through all phases of the IC life-cycle. The standard takes a 'divide and conquer' approach, allowing IP (instrument) providers, who have the most domain expertise of their IP, to communicate the structure and operation of their IP in computer readable formats. Using Mentor Questa® for Pre-silicon Validation of IEEE 1149.1-2013 based Silicon Instruments

Dealing With UVM and OVM Sequences

by Hari Patel & Dhaval Prajapati, eInfochips

UVM/OVM methodologies are the first choice in the semiconductor industry today for creating verification environments. Because UVM/OVM are based on TLM (Transaction Level Modeling), sequences and sequence items play vital roles and must be created in the most efficient way possible in order to reduce rework and simulation time, and to make the verification environment user friendly. This article covers how to write generic and reusable sequences so that it is easy to add a new test case or sequence, using the SRIO (Serial RapidIO) protocol as an example.

In UVM- and OVM-based environments, sequences are the basic building blocks directing the scenario generation process. Scenario generation consists of a series of transactions generated using UVM/OVM sequencers, so it's important to write sequences efficiently. The key point to keep in mind is that sequences are not just for generating one random scenario for a specific test case. Sequences should be user- and protocol-friendly, usable at the test case level, and flexible enough to allow for maximum reuse in randomly generating scenarios. Dealing With UVM and OVM Sequences
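
To illustrate the idea, here is a minimal hedged sketch of a generic, reusable sequence; the item and field names (srio_item, ftype, num_items) are invented for illustration and are not taken from the article's SRIO example.

```systemverilog
// Minimal sketch of the idea; srio_item, its fields and num_items are
// invented names, not the article's SRIO code.
import uvm_pkg::*;
`include "uvm_macros.svh"

class srio_item extends uvm_sequence_item;
  rand bit [3:0]  ftype;        // hypothetical request-type field
  rand bit [63:0] payload;
  `uvm_object_utils(srio_item)
  function new(string name = "srio_item"); super.new(name); endfunction
endclass

// Generic base sequence: its knobs are rand fields, so tests reuse it by
// constraining it rather than rewriting it.
class srio_base_seq extends uvm_sequence #(srio_item);
  rand int unsigned num_items;
  constraint c_num { num_items inside {[1:20]}; }
  `uvm_object_utils(srio_base_seq)
  function new(string name = "srio_base_seq"); super.new(name); endfunction

  task body();
    repeat (num_items) begin
      req = srio_item::type_id::create("req");
      start_item(req);
      if (!req.randomize()) `uvm_error("SEQ", "randomize failed")
      finish_item(req);
    end
  endtask
endclass

// A test then layers a scenario with a constraint instead of a new sequence:
//   srio_base_seq seq = srio_base_seq::type_id::create("seq");
//   if (!seq.randomize() with { num_items == 8; }) `uvm_error("TEST", "randomize failed")
//   seq.start(env.agent.sequencer);
```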

Stories of an AMS Verification Dude: Putting Stuff Together

by Martin Vlach, Mentor Graphics

I don't know how this came about, but the other day I got hired to do something called AMS Verification. It seems that there is this chip design that combines digital and analog stuff, and I was asked to make sure that all of it works when it's put together and that it does what it was meant to do when they got going in the first place.

Not knowing any better, I guess I'll start by hoping that all of the pieces they handed to me were done just right as far as those designer dudes understood when they got handed their jobs. So here's what I'm thinking: The dudes may be real good in designing, but they are humans too and so they probably missed some points, and misunderstood some others, and when it's all put together, Murphy says that things will go wrong. Besides, those analog and digital dudes don't talk to each other anyway. Stories of an AMS Verification Dude - Putting Stuff Together

Portable VHDL Testbench Automation with Intelligent Testbench Automation

by Matthew Ballance, Mentor Graphics

We've come a long way since digital designs were sketched as schematics by hand on paper and tested in the lab by wiring together discrete integrated circuits, applying generated signals and checking for proper behavior. Design evolved to gate-level on a workstation and on to RTL, while verification evolved from simple directed tests to directed random, constrained-random, and systematic testing. At each step in this evolution, significant investment has been made in training, development of reusable infrastructure, and tools. This level of investment means that switching to a new verification environment, for example, has a cost and tends to be a carefully-planned migration rather than an abrupt switch. In any migration process, technologies that help to bring new advances into the existing environment while continuing to be valuable in the future are critical methodological "bridges". Portable VHDL Testbench Automation with Intelligent Testbench Automation

October 2013 | Volume 9, Issue 3

Verification Horizons Articles:

It's Surprising What the Boy Scouts Can Teach Us about Verification

by Tom Fitzpatrick, Editor and Verification Technologist, Mentor Graphics Corporation

Hello, and welcome to our Autumn, 2013 edition of Verification Horizons. I love autumn here in New England. The air is crisp and clean, the trees are turning beautiful colors, and it's time for David and me to begin another year of camping and other fun activities with the Boy Scouts. David is an Eagle Scout and our troop's Senior Patrol Leader, which means that he's in charge of running troop meetings and campouts. I'm the Scoutmaster, so it's my job to support him and the other boys, make sure they're adequately trained and keep them safe. What that really means is that David and I have to work together to make sure that the troop functions well as a unit and that everyone has fun. Those of you with teenage sons may recognize the potential pitfalls of such an arrangement, but so far we've done pretty well, if I do say so myself. You'll see these ideas of teamwork, planning and safety in our articles for this issue. It's Surprising What the Boy Scouts Can Teach Us about Verification.

Software-Driven Testing of AXI Bus in a Dual Core ARM System

by Mark Olen, Mentor Graphics

Here we present an architecture for verifying proper operation and performance of a complex AXI bus fabric in a dual-core ARM processor system using a combination of SystemVerilog and C software-driven test techniques.

Specifically, we describe deployment of an advanced graph-based solution that provides the capability for checking full protocol compliance, an engine for continuous traffic generation, and precise control and configurability for shaping the form and type of traffic needed to test the fabric. The resulting tests were easier to construct, easier to analyze and review, and more efficient at achieving coverage with the graph-based approach than with constrained-random or directed OVM sequences. For us, this architecture yielded successful completion of the verification process ahead of schedule. Software-Driven Testing of AXI Bus in a Dual Core ARM System

Caching in on Analysis

by Mark Peryer, Mentor Graphics

The on-chip bus interconnect has become a critical subsystem of a System On a Chip (SoC). Its function is to route data between different parts of the system at a rate that allows the system to meet its performance goals. The scale and complexity of the interconnect varies from a single switching block, routing transactions between bus masters and bus slaves, to a hierarchy of switches organized to maximize throughput. Verifying that the interconnect works correctly requires that the various bus ports in the system adhere to the protocol specification; that the system address map has been implemented correctly; and that the overall interconnect fabric delivers on its performance goals.

The advent of heterogeneous, multi-processor systems with multiple caches sharing the same data has added significantly to the complexity of interconnect design and has prompted the development of cache coherent protocols such as AMBA® 4 ACE™ and AMBA 5 CHI™. Questa SLI supports interconnect verification with a range of capabilities for both simulation and emulation. These capabilities include testbench and instrumentation generation based on Questa VIP; stimulus targeting interconnect and cache coherency verification; and visualization and analysis techniques aimed at giving insight into system level behavior. Caching in on Analysis

DDR SDRAM Bus Monitoring using Mentor Verification IP

by Nikhil Jain, Mentor Graphics

This article describes how Mentor's verification IP (VIP) for various double-data rate (DDR) memory standards can act as a bus monitor and help yield fast, flexible verification and easy debugging. We discuss how to boost productivity using the built-in coverage collector and checker assertions by passively monitoring data on the bus. Mentor provides verification IP that spans multiple engines: simulation, formal and acceleration (emulation). This allows effective test reuse from simulation through to acceleration, but this article focuses on simulation using Questa Verification IP.

Verifying and debugging DDR SDRAM memory designs is challenging because of the speed and complex timing of the signals that need to be acquired and debugged. We can reduce this complexity using the DDRx Questa VIP (QVIP) as a bus monitor. In general, the bus monitor's task is to observe, so it should be configured to be passive and not inject errors. A monitor must have protocol knowledge in order to detect recognizable patterns in signal activity. QVIP has all these features and thus can be a boon to any verification team. DDR SDRAM Bus Monitoring using Mentor Verification IP

Simulation + Emulation = Verification Success

by Lanfranco Salinari, Alberto Allara, and Alessandro Daolio, STMicroelectronics

The fact is there's little that's simple about designing and verifying SoCs. One reason stems from the choice and flexibility — all benefits have a downside — that come with assembling the chips. In the case of ARM, for example, companies can either buy off-the-shelf processor designs made by the British company or build their own processors that run on the ARM instruction set. Then comes the task of linking these processors to other necessary components, including memory, which also can be purchased and tweaked or built from scratch. Eventually the paradox of choice kicks in — more options about the way to solve a given problem eventually can lead to engineering anxiety and almost always slows things down. The second reason has to do with Moore's Law, which continues to plod ahead, seemingly oblivious to regular reports of its impending demise. According to the 2012 Wilson Research Group Functional Verification Study sponsored by Mentor Graphics, about a third of new chip designs target a feature size of 28 nm or less and contain more than 20 million gates, and 17% of all new designs are greater than 60 million gates. About 78% of all new designs have one or more embedded processors. These numbers should give some pause, particularly given the rule of thumb that verification complexity scales exponentially with gate count. Simulation + Emulation = Verification Success

Life Isn't Fair, So Use Formal

by Roger Sabbagh, Mentor Graphics

Most things in life are not evenly distributed. Consider for example, the sun and the rain. The city of Portland, Oregon gets much more than its fair share of rainy days per year at 164 on average, while in Yuma, Arizona, 90% of all daylight hours are sunny. Or, how about life as an adolescent? Do you remember how some of us were "blessed" with acne, buck teeth and short sightedness, while others with seemingly perfect skin could spend the money they saved on skin cream, braces and eyeglasses to buy all the trendiest designer clothes? No, things are not always meted out in equal measures.

So it is with the effort required to achieve code coverage closure. A state-of-the-art, constrained-random simulation environment will achieve a fairly high level of coverage as a by-product of verifying the functionality of the design. It is typically expected to achieve >90% coverage quite rapidly. However, getting closure on the last few percent of the coverage bins is typically where the effort starts to balloon. Life Isn't Fair, So Use Formal

AMS Verification for High Reliability and Safety Critical Applications

by Martin Vlach, Mentor Graphics

Today, very high expectations are placed on electronic systems in terms of functional safety and reliability. Users expect their planes, automobiles, and pacemakers to work perfectly, and keep on working for years. A reboot of a smartphone is annoying, but rebooting the airplane or car electronics while underway could be catastrophic, and a glitch in an implanted medical device could be life threatening.

The extremely successful digital abstraction allows us to decompose the problem of ensuring that a digital circuit "works" into the separate steps of functional and physical verification. EDA tools take the design either from an algorithm, or from RTL, all the way to implementation. Functional verification of digital systems is primarily concerned with verifying that the logic design at the algorithmic and RTL level conforms to specification, and as a final check, physical verification is performed to make sure that nothing in the automation went wrong. Verifying that the logic is correct and verifying that its circuit implementation is correct are orthogonal problems. AMS Verification for High Reliability and Safety Critical Applications

Assertions Instead of FSMs/logic for Scoreboarding and Verification

by Ben Cohen, Accellera Systems Initiative, VhdlCohen Publishing

Monitors, scoreboards, and verification logic are typically implemented using FSMs, logic, and tasks. With UVM, this logic is hosted in classes. This article demonstrates another option: implementing some monitors and scoreboards using SVA assertions hosted in SystemVerilog interfaces. The basic concept involves using assertion statements along with functions, called from sequence match items, to collect the desired scoreboard information, to compare expected results to collected data, and to trigger covergroups. This concept is demonstrated using a UART transmitter as the DUT. Since the purpose of this model is to demonstrate the use of assertions to emulate verification logic, the driver for this DUT originates directly from a top level module. To demonstrate the difference between an assertion-based verification solution and a class-based monitor/scoreboard solution, a monitor class was also implemented. Assertions Instead of FSMs/logic for Scoreboarding and Verification
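
The flavor of the technique looks roughly like the sketch below; the UART-style signal names and the capture function are invented for illustration and are not the article's code.

```systemverilog
// Minimal sketch; the UART-style interface, signals and queue are invented.
interface uart_tx_sb_if (input logic clk,
                         input logic start,          // byte accepted by transmitter
                         input logic [7:0] tx_data,  // byte being transmitted
                         input logic done);          // serial shift complete

  logic [7:0] expected_q[$];      // scoreboard storage lives in the interface

  function void capture(logic [7:0] d);
    expected_q.push_back(d);      // record what we expect to see on the line
  endfunction

  // Each accepted byte is captured via a sequence match item and must
  // eventually complete; the 2000-cycle bound is an assumed maximum bit time.
  property p_byte_sent;
    @(posedge clk) (start, capture(tx_data)) |-> ##[1:2000] done;
  endproperty
  ap_byte_sent: assert property (p_byte_sent);
endinterface
```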

June 2013 | Volume 9, Issue 2

Verification Horizons Articles:

An Up-sized DAC Issue Takes the Stage

by Tom Fitzpatrick, Editor and Verification Technologist, Mentor Graphics Corporation

Building a theater set is not unlike what we do as verification engineers. It involves modeling the "real world," often at a higher level of abstraction, and it has hard deadlines. "The show must go on," after all. Productivity is also key because all of us building the set are volunteers. We reuse set pieces whenever we can, whether it's something we built for a past production or something we borrowed from a neighboring group. And we often have to deal with shifting requirements when the director changes his mind about something during rehearsals. Oh, and it also involves tools – lots of tools. However, there is one important way in which set construction is different from verification. Those of us building theater sets know our work only has to stay up for two weeks and that no one in the audience is going to see it from closer than thirty feet away. In contrast, all engineers know the chips they verify must work a lot longer under much closer scrutiny. An Up-sized DAC Issue Takes the Stage

Interviewing a Verification Engineer

by Akiva Michelson, Ace Verification

A key challenge today is choosing the right staff for achieving excellent verification results. Indeed, the defining moment for most projects is when the staff is selected, since the right combination of skills and personality can lead to outstanding technical outcomes (while the wrong combination can lead to disaster). Verification engineers differ significantly from other engineers in terms of skill sets required for success. Due to the nature and breadth of verification tasks, a verification engineer needs to have excellent communication and interpersonal skills, a passion for verification, and the technical know-how to complete tasks in a highly dynamic environment. This article provides a basic interview framework for identifying a capable verification engineer who will work well with your team. Questions about previous projects, verification skills, debug skills and programming skills are described, as well as how to plan the content of the interview, what to look for in the answers, and what traits are most important in a prospective candidate. Interviewing a Verification Engineer

Maximum Productivity with Verification IP

by Joe Rodriguez, Raghu Ardeishar, and Rich Edelman, Mentor Graphics

When beginning a new design, it's common to evaluate how to build a verification infrastructure in the shortest amount of time. Of course, it's never just about quick deployment; verification also has to be complete enough to improve confidence in the design. Rapid bring-up and improving the quality of your design are excellent goals. However, you should not forget that your environment should be efficient to use during the verification process. This is where you will spend most of your time, slugging it out day after day. Arguably, debugging design bugs is one of the most time-consuming tasks of any project. Transaction Level Modeling (TLM) will change the way you think about debug productivity, especially if you have recently experienced the long and difficult task of deciphering PCIe's training sequences, data transfers and completion codes at the pin level. Maximum Productivity with Verification IP

Power Up Hardware/Software Verification Productivity

by Matthew Ballance, Mentor Graphics

Today's complex designs increasingly include at least one, and often more, embedded processors. Given software's increasing role in the overall design functionality, it has become ever more important to leverage the embedded processors in verifying hardware/software interactions during system-level verification. Comprehensively verifying low-level hardware/software interactions early in the verification process helps to uncover bugs that would otherwise not be found until operating system or application bring-up – potentially in the lab. Characterizing, debugging, and correcting this type of bug is easier, faster, and thus less expensive, early in the verification cycle. Power Up Hardware/Software Verification Productivity

Non-invasive Software Verification using Vista Virtual Platforms

by Alex Rozenman, Vladimir Pilko, and Nilay Mitash, Mentor Graphics

With SoCs now supporting multi-core processors, complex ASICs and combinations that include systems on a board, SoC implementations have become an ever growing challenge for software development. Software development has to be supported not only by the inclusion of an RTOS; many SoC providers now also have to rely on the bare-metal concept to meet the demands of today's SoCs. However, there is still a huge gap in the software development arena when it comes to verifying not only the software/hardware interactions but also the software itself in a hardware context. This has become almost a necessity in today's security-based systems marketplace. Non-invasive Software Verification using Vista Virtual Platforms

QVM: Enabling Organized, Predictable, and Faster Verification Closure

by Gaurav Jalan, SmartPlay Technologies, and Pradeep Salla, Mentor Graphics

Until recently, the semiconductor industry religiously followed Moore's Law by doubling the number of transistors on a given die approximately every two years. This predictable growth allowed ecosystem partners to plan and deal with rising demands on tools, flows and methodologies. Then came the mobile revolution, which opened up new markets and further shifted the industry's focus to consumers. The diversified nature of this market posed many, often conflicting development challenges: How to speed time to market while building products rich in functionality? How to boost performance while keeping both power consumption and cost at modest levels? Wading through these questions contributed to a multifold increase in verification complexity. QVM: Enabling Organized, Predictable, and Faster Verification Closure

Verifying High Speed Peripheral IPs

by Sreekanth Ravindran and Chakravarthi M.G., Mobiveil

High speed serial interconnect bus fabric is the SoC backbone, managing dataflow and keeping up with the dynamic bandwidth requirements of high speed applications. Verification of high speed interconnect IPs presents critical challenges not only in terms of complying with standards, but also in ensuring that the design is robust and flexible enough to handle and manage a large amount of time-critical data transfers. Acquiring such expertise requires years of verification experience. In this article, Silicon IP and platform enabled solution provider Mobiveil shares its story of verifying high speed bus protocol standards like PCI Express and Serial RapidIO, including what considerations are required when verifying high speed designs. In addition, Mobiveil highlights the benefits of using the Mentor Graphics Questa Verification Platform, including Questa Advanced Simulator, Questa CoverCheck, and Questa Clock-Domain Crossing (CDC) Verification, which together facilitate smart functional verification, debug and reporting of high speed designs. Verifying High Speed Peripheral IPs

Confidence in the Face of the Unknown: X-state Verification

by Kaowen Liu, MediaTek Inc., and Roger Sabbagh, Mentor Graphics

Unknown signal values in simulation are represented as X-state logic levels, while the same X-states are interpreted as don't-care values by synthesis. This can result in the hazardous situation where silicon behaves differently from what was observed in simulation. Although general awareness of X-state issues among designers is good, gotchas remain a risk that traditional verification flows are not well equipped to guard against. The unknown simulation semantics of X have two oft-discussed pitfalls: X-pessimism and X-optimism.

X-optimism is most worrisome as it can mask true design behavior by blocking the propagation of X-states and instead propagating a deterministic value through the design in simulation, when in reality various possible values will be seen in the hardware. If the unexplored values cause the design to malfunction, then X-optimism has masked a design bug that will only be discovered in silicon. Confidence in the Face of the Unknown: X-state Verification
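
A minimal sketch of X-optimism, with invented signal names (it is not taken from the article), shows how an if statement hides an unknown select:

```systemverilog
// Minimal sketch with invented signal names; not taken from the article.
module x_optimism_demo (
  input  logic       sel,    // suppose this is X, e.g. from an uninitialized flop
  input  logic [7:0] a, b,
  output logic [7:0] y
);
  always_comb begin
    if (sel) y = a;   // with sel = X, simulation takes the else branch and
    else     y = b;   // reports a clean, deterministic value of b, even though
  end                 // real hardware could just as easily produce a
endmodule
```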

Making it Easy to Deploy the UVM

by Dr. Christoph Sühnel, frobas GmbH

The Universal Verification Methodology (UVM) is becoming the dominant approach for the verification of large digital designs. However, new users often express concern about the effort required to generate a complete and useful UVM testbench. But the practical experience collected in numerous OVM and UVM projects during the last few years shows a different view. The UVM is a very suitable methodology for any kind of design and implementation, i.e. ASIC and FPGA due to the availability of the UVM library and the well-defined testbench structure. This allows the automation of essential steps in employing the UVM.

This article describes a UVM approach that reduces testbench implementation effort, guarantees early success and streamlines the processing of test results. Depending on the number of functional interfaces and the design complexity, up to six weeks of implementation effort, or even more, can be saved. A runnable UVM testbench is handed over to the verification team at the very beginning of the project; the verification engineers have to implement only the corresponding drivers, monitors and functional coverage models. Later on, the needed scoreboards have to be architected and implemented. Making it Easy to Deploy the UVM

NoC Generic Scoreboard VIP

by François Cerisier and Mathieu Maisonneuve, Test and Verification Solutions

The increase in SoC complexity with more cores, IPs and other subsystems has led SoC architects to demand more from the main interconnect or network-on-chip (NoC), which is thus becoming a key component of the system. Power management, multiple clocks, protocol conversions, security management, virtual address spaces and cache coherency are among the features that must be managed by the main interconnect and that demand proper verification.

In addition, IP reuse and NoC generation solutions have enabled the conception of new SoC architectures within months or even weeks. Simple point-to-point scoreboard methodology is taught in most good verification methodology books and tutorials. However, building a generic verification solution for an SoC interconnect that can quickly adapt to any bus protocol and SoC architecture, and can deal with advanced SoC features, requires much more than point-to-point transaction matching. NoC Generic Scoreboard VIP
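
For reference, the simple point-to-point scoreboard mentioned above looks roughly like this minimal sketch; the class names and in-order matching policy are assumptions for illustration, not the article's VIP.

```systemverilog
// Minimal sketch of a point-to-point, in-order scoreboard; the names and
// matching policy are assumptions for illustration, not the article's VIP.
import uvm_pkg::*;
`include "uvm_macros.svh"

`uvm_analysis_imp_decl(_exp)
`uvm_analysis_imp_decl(_obs)

class p2p_scoreboard #(type T = uvm_sequence_item) extends uvm_scoreboard;
  `uvm_component_param_utils(p2p_scoreboard #(T))
  uvm_analysis_imp_exp #(T, p2p_scoreboard #(T)) exp_imp;  // from the master-side monitor
  uvm_analysis_imp_obs #(T, p2p_scoreboard #(T)) obs_imp;  // from the slave-side monitor
  T exp_q[$];

  function new(string name, uvm_component parent);
    super.new(name, parent);
    exp_imp = new("exp_imp", this);
    obs_imp = new("obs_imp", this);
  endfunction

  function void write_exp(T t);
    exp_q.push_back(t);                                // queue the expected item
  endfunction

  function void write_obs(T t);
    if (exp_q.size() == 0) `uvm_error("SB", "observed item with nothing expected")
    else if (!t.compare(exp_q.pop_front())) `uvm_error("SB", "observed item mismatch")
  endfunction
endclass
```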

Flexible UVM Components: Configuring Bus Functional Models

by Gunther Clasen, Ensilica

Modern object-oriented testbenches using SystemVerilog and OVM/UVM have been using SystemVerilog interface constructs in the testbench and virtual interfaces in the class based verification structure to connect the two worlds of static modules and dynamic classes. This has certain limitations, like the use of parameterized interfaces, which are overcome by using Bus Functional Models. BFMs are now increasingly adopted in UVM testbenches, but this causes other problems, particularly for complex BFMs: They cannot be configured from the test environment, thus significantly reducing code reuse.

This article shows how to write BFMs in such a way that they can be configured like any other UVM component using uvm_config_db. This allows a uniform configuration approach and eases reuse. All code examples use UVM, but work equally well with the set_config_*() functions in OVM. Flexible UVM Components: Configuring Bus Functional Models
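
As a rough sketch of the general idiom (not necessarily the exact scheme used in the article), a BFM task inside an interface can read its knobs from uvm_config_db just as a UVM component would; the interface and field names here are invented.

```systemverilog
// Rough sketch of the general idiom (invented names, not the article's code):
// a BFM task in an interface reads a knob from uvm_config_db, so a test can
// configure it like any other UVM component, e.g.
//   uvm_config_db#(int unsigned)::set(null, "*", "idle_cycles", 2);
interface bus_bfm (input logic clk);
  import uvm_pkg::*;

  task automatic write(input logic [31:0] addr, input logic [31:0] data);
    int unsigned idle_cycles = 0;
    void'(uvm_config_db#(int unsigned)::get(null, "", "idle_cycles", idle_cycles));
    repeat (idle_cycles) @(posedge clk);   // honor the configured knob
    // ... drive the bus pins here ...
  endtask
endinterface
```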

Monitors, Monitors Everywhere – Who Is Monitoring the Monitors?

by Rich Edelman and Raghu Ardeishar, Mentor Graphics

The reader of this article should be interested in predicting or monitoring the behavior of his hardware. This article will review phase-level monitoring, transaction-level monitoring, general monitoring, and in-order and out-of-order transaction-level monitors. A protocol-specific AXI monitor written at the transaction level of abstraction will be demonstrated. Under certain AXI usages, problems arise. For example, partially written data may be read by an overlapping READ. This kind of behavior cannot be modeled by the "complete transaction" kind of monitor; it must be modeled by a phase-level monitor. All of these monitoring and scoreboard discussions can be widely applied to many protocols and many monitoring situations.

The task of a monitor is to monitor activity on a set of DUT pins. This could be as simple as looking at READ/WRITE pins or as complex as a complete protocol bus, such as AXI or PCIe. In a very simple case a monitor can be looking at a set of pins and generating an event every time there is a change in signal values. The event can trigger a scoreboard or coverage collector. This monitor is typically very slow and not very useful as it generates a lot of irrelevant data. Monitors, Monitors Everywhere – Who Is Monitoring the Monitors?
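
The "very simple case" described above might look like the following minimal sketch; the interface, item and monitor names are invented for illustration.

```systemverilog
// Minimal sketch with invented names: publish an item whenever the watched
// pins show activity; a scoreboard or coverage collector subscribes to ap.
import uvm_pkg::*;
`include "uvm_macros.svh"

interface simple_if (input logic clk);
  logic rd, wr;
endinterface

class simple_item extends uvm_sequence_item;
  bit rd, wr;
  `uvm_object_utils(simple_item)
  function new(string name = "simple_item"); super.new(name); endfunction
endclass

class simple_mon extends uvm_monitor;
  `uvm_component_utils(simple_mon)
  virtual simple_if vif;                         // assumed to be set via uvm_config_db
  uvm_analysis_port #(simple_item) ap;

  function new(string name, uvm_component parent);
    super.new(name, parent);
    ap = new("ap", this);
  endfunction

  task run_phase(uvm_phase phase);
    simple_item it;
    forever begin
      @(posedge vif.clk iff (vif.rd || vif.wr)); // wait for any pin activity
      it = simple_item::type_id::create("it");
      it.rd = vif.rd;
      it.wr = vif.wr;
      ap.write(it);                              // broadcast to subscribers
    end
  endtask
endclass
```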

The Need for Speed: Understanding Design Factors that Make Multi-core Parallel Simulations Efficient

by Shobana Sudhakar & Rohit Jain, Mentor Graphics

Running a parallel simulation may be as easy as flipping on a switch with the progressive and maturing solutions available today, but do people really take full advantage of the technology? It is true that in some scenarios the overhead of communication and synchronization needed for parallel simulation can negate any substantial performance gains. However, there are scenarios where deploying the parallel simulation technology can provide tremendous benefit. A long running simulation that exercises large blocks of a design concurrently and independently is one good example.

Designers need to be aware of the factors that can inhibit the advantages of parallel simulation even in these best-case scenarios, the main one being inflexibility due to the way designs are modeled today. This article focuses on these factors and is an effort to educate on the best design principles and practices to maximize the advantage of simulation with parallel computing. The discussion also extends to the three fundamental features of parallel simulation: load balancing, concurrency and communication. Designers need to understand how their designs run in simulation with respect to these factors to ensure they get the maximum out of parallel simulations. The Need for Speed: Understanding Design Factors that Make Multi-core Parallel Simulations Efficient

February 2013 | Volume 9, Issue 1

Verification Horizons Articles:

What Do Meteorologists and Verification Technologists Have in Common?

by Tom Fitzpatrick, Editor and Verification Technologist, Mentor Graphics Corporation

As verification engineers, we have to be able to forecast the accurate completion of our projects and also be able to cope with problems that may occur. Unfortunately, there are severe consequences when we get it wrong. And the stakes keep getting higher. We've spoken for years about designs getting more complex, but it's not just the number of gates anymore. The last few years have shown a continuous trend towards more embedded processors in designs, which brings software increasingly into the verification process. These systems-on-chip (SoCs) also tend to have large numbers of clock domains, which require additional verification. On top of this, we add multiple power domains and we need not only to verify the basic functionality of an SoC, but also to verify that the functionality is still correct when the power circuitry and control logic (much of which is software, by the way) is layered on top of the problem. What Do Meteorologists and Verification Technologists Have in Common?

Using Formal Analysis to "Block and Tackle"

by Paul B. Egan, Rockwell Automation

Traditionally, connectivity verification at the block level has been completed using dynamic simulation, and typically involves writing directed tests to toggle the top level signals, then debugging why signal values did not transition as expected. For modules with a high number of wires and many modes of operation, the number of required tests quickly becomes unmanageable. We were searching for a better approach. This article explains how we applied formal analysis at the block level, how we extended it to the full chip, and how we significantly reduced verification time at both the block and chip level. Just as a block and tackle provides a mechanical advantage, the formal connectivity flow provides a verification advantage. Using Formal Analysis to "Block and Tackle"
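
As a hedged illustration of the general idea (not the authors' flow), a connectivity property like the one below can be proven exhaustively by a formal tool instead of being exercised by directed tests; all signal names are invented.

```systemverilog
// Minimal sketch of a connectivity property; all names are invented and this
// is not the authors' flow, just the general shape of such a check.
module pad_connectivity_props (
  input logic clk,
  input logic mode_uart,   // pin-mux mode select
  input logic pad_tx,      // top-level pad
  input logic uart_tx,     // internal UART output
  input logic gpio_out     // internal GPIO output
);
  // In UART mode the pad must follow the UART output...
  ap_uart: assert property (@(posedge clk)  mode_uart |-> (pad_tx == uart_tx));
  // ...otherwise it must follow the GPIO output.
  ap_gpio: assert property (@(posedge clk) !mode_uart |-> (pad_tx == gpio_out));
endmodule
// Typically bound into the design, e.g.:
//   bind chip_top pad_connectivity_props u_conn (.*);
```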

Bringing Verification and Validation under One Umbrella

by Hemant Sharma and Hans van der Schoot, Mentor Graphics

The standard practice of developing RTL verification and validation platforms as separate flows forgoes large opportunities to improve productivity and quality that could be gained through the sharing of modules and methods between the two. Bringing these two flows together would save an immense amount of duplicate effort and time while reducing the introduction of errors, because less code needs to be developed and maintained.

A unified flow for RTL verification and pre-silicon validation of hardware/software integration is accomplished by combining a mainstream, transaction-level verification methodology – the Universal Verification Methodology (UVM) – with a hardware-assisted simulation acceleration platform (also known as co-emulation). Necessary testbench modifications to enable this combination are generally nonintrusive and require no third-party class libraries; thus, verification components from customer environments are readily reusable for pure simulation environments, different designs using the same block, and different verification groups. Bringing Verification and Validation under One Umbrella

System Level Code Coverage using Vista Architect and SystemC

by Ken P. McCarty, Mentor Graphics

System Level Code Coverage gives architects and designers the opportunity to validate that their verification strategy is comprehensive and the design is optimal. It is important to understand that System Level Code Coverage is a forward-looking strategy, from an architectural perspective; it is performed even before the first line of RTL is ever written. System Level Code Coverage using Vista Architect and SystemC

The Evolution of UPF: What's Next?

by Erich Marschner, Product Manager, Questa Power Aware Simulation, Mentor Graphics

Usage of the Unified Power Format (UPF) is growing rapidly as low power design and verification becomes increasingly necessary. In parallel, the UPF standard has continued to evolve. A previous article [1] described and compared the initial UPF standard, defined by Accellera, and the more recent IEEE 1801-2009 UPF standard, also known as UPF 2.0. The IEEE definition of UPF is the current version of the standard, at least for now, but that is about to change. The next version, UPF 2.1, is scheduled for review by the IEEE Standards Review Committee in early March. When it is approved, UPF 2.1 will become the current version of the UPF standard.

UPF 2.1 is an incremental update of UPF 2.0, not a major revision. That said, UPF 2.1 contains a large number of small changes, ranging from subtle refinements of existing commands to improve usability, to new concepts that help ensure accurate modeling of power management effects. This article describes some of the more interesting enhancements and refinements coming soon in UPF 2.1. The Evolution of UPF: What's Next?

Top Five Reasons Why Every DV Engineer Will Love the Latest SystemVerilog 2012 Features

by Ajeetha Kumari, Srinivasan Venkataramanan, CVC Pvt. Ltd.

SVA in a UVM Class-based Environment

by Ben Cohen, author, consultant, and trainer.

Verification can be defined as the check that the design meets its requirements. How can this be achieved? Many verification approaches have been used over the years, and they are not necessarily independent but often complementary. For example, simulation may be performed on some partitions while emulation is used on others. The verification process in simulation has evolved from simple visual (and very painful) examination of waveforms, with the DUT driven by a driver using directed tests, to transaction-based pseudo-random tests and checker modules. This has led to the development of class-based frameworks (e.g., e, VMM, OVM, UVM) that separate the tests from the environment, are capable of generating various concurrent scenarios with randomness and constraints, and are capable of easily transferring data to class-based subscribers for analysis and verification. In parallel, designers and verification engineers have improved the detection and localization of bugs in their design code using assertion-based languages (e.g., PSL, SVA). SVA in a UVM Class-based Environment

The Formal Verification of Design Constraints

by Ajay Daga, CEO, FishTail Design Automation Inc.

There are two approaches to the verification of design constraints: formal verification and structural analysis. Structural analysis refers to the type of analysis performed by a static timing tool where timing paths either exist or not based on constant settings and constant propagation. Formal verification, on the other hand, establishes the condition under which a timing path exists based on the propagation requirements for the path. These path propagation requirements are then used to prove or disprove constraint properties based on a formal analysis of the design. Structural analysis is fast because it is simple. Formal verification, however, is more complete and less noisy. Formal verification allows engineers to guarantee that their design is safe from silicon issues that result from an incorrect constraint specification. Structural analysis cannot make this claim because it cannot independently establish the correctness of a variety of design constraints. The Formal Verification of Design Constraints

OVM to UVM Migration, or "There and Back Again: A Consultant's Tale."

by Mark Litterick, Verification Consultant, Verilab GmbH

While it is generally accepted that the Universal Verification Methodology (UVM) is the way forward for modern SystemVerilog-based verification environments, many companies have an extensive legacy of verification components and environments based on its predecessor, the Open Verification Methodology (OVM). Furthermore, a single-push migration of all block-level and top-level environments between the two methodologies may be considered too disruptive for most companies, taking into account the families of complex derivatives under development and scheduled for the near future. Several of our major clients are in the transition phase of initiating new product developments in UVM while having to maintain ongoing development in OVM. Furthermore, it is imperative for the new development that we migrate the vast majority of existing verification components from OVM to UVM, even though the projects providing the VIP remain with OVM for the time being. OVM to UVM Migration, or "There and Back Again: A Consultant's Tale."

October 2012 | Volume 8, Issue 3

Verification Horizons Articles:

There's Productivity. And Then There's Productivity

by Tom Fitzpatrick, Editor and Verification Technologist, Mentor Graphics Corporation

Ah, metrics! There is no substitute for clear, objective data when evaluating a player, or a process for that matter. A productive quarterback is measured most of all on how many times he wins, especially in the Super Bowl (advantage: Brady), but there are other metrics that can be used as well. For verification, productivity really comes down to being able to reliably determine if your chip will run correctly as efficiently as possible. Just as the quarterback with the better completion percentage doesn't always win, verification productivity is about much more than just how fast your simulator runs a particular test. We must think about the full spectrum of verification tasks and the tools and technology available to apply to them. Just as a football game is ultimately determined by the final score, all that matters to us as verification engineers is how well we can determine the correctness of our design within the schedule allotted. Keep these thoughts in mind as you read this month's issue of Verification Horizons. There's Productivity. And Then There's Productivity

ST-Ericsson Speeds Time to Functional Verification Closure with Questa Verification Platform

by Rachida El IDRISSI, ST-Ericsson

As is true in many engineering projects, the most difficult step in verification is knowing when you are done. The challenge of reaching verification closure stems mostly from increasing complexity. Recall that as a general rule of thumb, verification cycles rise at some multiple to the number of gates in a design, which is enough to give one pause in the age of billion-gate designs. Other obstacles on the way to closure include new verification methodologies and their sometimes-steep learning curves, aggressive milestones that sometimes force verification teams to truncate their efforts, and the difficulty of reusing verification components. ST-Ericsson Speeds Time to Functional Verification Closure with Questa Verification Platform

The Top Five Formal Verification Applications

by Roger Sabbagh, Mentor Graphics

It's no secret. Silicon development teams are increasingly adopting formal verification to complement their verification flow in key areas. Formal verification statically analyzes a design's behavior with respect to a given set of properties. Traditional formal verification comes in the form of model checking, which requires hand-coded properties along with design constraints. While there certainly are some design groups who continue to be successful with that approach, the approaches gaining more widespread adoption in the industry are the automatic ones, which require much less manual setup. Let's take a look at the top five applications being used across the industry today. The Top Five Formal Verification Applications

Three Steps to Unified SoC Design and Verification

by Shabtay Matalon and Mark Peryer, Mentor Graphics

Developing a SoC is a risky business in terms of getting it right considering the technical complexity involved, managing the mixture of hardware and software design disciplines, and finding an optimal trade-off between design performance and power. One way to reduce these risks is to use a design and verification flow that is scalable enough to handle the complexity and is flexible enough to explore architectural alternatives early in the design cycle before implementation starts.

Mentor's System Design and Verification flow encapsulates a range of design and verification disciplines such as: embedded software; SystemC platform design and validation tools; and HDL simulation tools that can be used in various combinations using common interfaces. The flow originally came out of pilot work for the TSMC ESL reference flow and has since matured into a methodology that allows Mentor's Sourcery™ CodeBench, the Vista™ ESL platform, and the Questa® platform to be used together. Three Steps to Unified SoC Design and Verification

Evolution of UPF: Getting Better All the Time

by Erich Marschner, Product Manager, Questa Power Aware Simulation, Mentor Graphics

Power management is a critical aspect of chip design today. This is especially true for chips designed for portable consumer electronics applications such as cell phones and laptop computers, but even non-portable systems are increasingly optimizing power usage to minimize operation costs and infrastructure requirements. Power management requirements must be considered right from the beginning, and the design and implementation of power management must occur throughout the flow, from early RTL design on through physical implementation. Verification of the power management logic is also essential, to ensure that a device operates correctly even when the power to various subsystems or components is turned off or varied to optimally meet operating requirements. Evolution of UPF: Getting Better All the Time

Improving Analog/Mixed-Signal Verification Productivity

by Ahmed Eisawy, Mentor Graphics

Nearly all of today's chips contain Analog/Mixed-Signal circuits. Although these often constitute only 25% of the total die, they may be 100% of the product differentiation and also, unfortunately, 80% of the problems in actually getting the chip to market in a cost-effective and timely way. With growing complexity and shrinking time-to-market, Mixed-Signal verification is becoming an enormous challenge for designers, and improving Mixed-Signal verification performance and quality is critical for today's complex designs.

The challenges in mixed-signal verification stem from two opposing forces, time-to-market constraints and shrinking process technologies. Improving Analog/Mixed-Signal Verification Productivity

VHDL-2008: Why It Matters

by Jim Lewis, SynthWorks VHDL Training

VHDL-2008 (IEEE 1076-2008) is here! It is time to start using the new language features to simplify your RTL coding and facilitate the creation of advanced verification environments.

VHDL-2008 is the largest change to VHDL since 1993. An abbreviated list of changes includes:

• Enhanced Generics = better reuse
• Assertion language (PSL) = better verification
• Fixed and floating point packages = better math
• Composite types with elements that are unconstrained arrays = better data structures
• Hierarchical reference = easier verification
• Simplified sensitivity lists = fewer errors and less work
• Simplified conditionals (if, ...) = less work
• Simplified case statements = less work
VHDL-2008: Why It Matters

June 2012 | Volume 8, Issue 2

Verification Horizons Articles:

"Murphy" on Verification: If It Isn't Covered, It Doesn't Work

by Tom Fitzpatrick, Editor and Verification Technologist, Mentor Graphics Corporation

On a recent visit to the Evergreen Aviation & Space Museum in Oregon, I had an opportunity to see some great examples of what, for their time, were incredibly complex pieces of engineering. From a replica of the Wright brothers' Flyer, to the World War I Sopwith Camel (sure to be recognized by you Snoopy fans out there), to various World War II-era aircraft, including the Spruce Goose, the aviation side of the museum does a great job showing how the technology of flight evolved in just over 40 years. The space side of the museum has a marvelous collection of jet airplanes and rockets, such as replicas of the Gemini and Apollo spacecraft, including the Lunar Module, and pieces of a Saturn V. When you think about the effort and risk people took to advance from Kitty Hawk, North Carolina to the moon in under 70 years, it gives a whole new perspective to the idea of verification. "Murphy" on Verification: If It Isn't Covered, It Doesn't Work

Mentor Has Accellera's Latest Standard Covered

by Darron May, Product Marketing Manager, Abigail Moorhouse, R & D Manager, Mentor Graphics Corporation

If you can't measure something, you can't improve it. For years, verification engineers have used "coverage" as a way to measure completeness of the verification effort. Of course, there are many types of coverage, from different types of code coverage to functional coverage, as well as many tools, both dynamic and static, that provide coverage information. Simply put, coverage is a way to count interesting things that happen during verification and the measure of coverage is being able to correlate those things that happened back to a list of things you wanted to happen (also called a verification plan). Meaningful analysis requires a standard method of storing coverage data from multiple tools and languages, which is what Accellera's Unified Coverage Interoperability Standard (UCIS) finally delivers. Mentor Has Accellera's Latest Standard Covered

Is Intelligent Testbench Automation For You?

by Mark Olen, Product Marketing Manager, Mentor Graphics Corporation

Intelligent Testbench Automation (iTBA) is being successfully adopted by more verification teams every day. There have been multiple technical papers demonstrating successful verification applications and panel sessions comparing the merits to both Constrained Random Testing (CRT) and Directed Testing (DT) methods. Technical conferences including DAC, DVCon, and others have joined those interested in better understanding this new technology. An entire course curriculum is available at the Verification Academy. And many articles have been published by various technical journals, including in this and previous Verification Horizons editions. So with all of the activity, how do verification teams separate out the signal from the noise? Is Intelligent Testbench Automation For You?

Automated Generation of Functional Coverage Metrics for Input Stimulus

by Mike Andrews, Verification Technologist, Mentor Graphics Corporation

Verification teams are always under pressure to meet their project schedules, while at the same time the consequences of not adequately verifying the design can be severe. This puts the team between a rock and a hard place as they say. The main value of Questa® inFact is to help with the problem of meeting the schedule requirements by more efficiently, and more predictably, generating the tests needed to meet coverage goals in the case where coverage metrics are being used to determine 'completeness' of the verification project. Automated Generation of Functional Coverage Metrics for Input Stimulus

Targeting Internal-State Scenarios in an Uncertain World

by Matthew Ballance, Verification Technologist, Mentor Graphics Corporation

The challenges inherent in verifying today's complex designs are widely understood. Just identifying and exercising all the operating modes of one of today's complex designs can be challenging. Creating tests that will exercise all these input cases is, likewise, challenging and labor-intensive. Using directed-test methodology, it is extremely challenging to create sufficiently-comprehensive tests to ensure design quality, due to the amount of engineering effort needed to design, implement, and manage the test suite. Targeting Internal-State Scenarios in an Uncertain World

Virtualization Delivers Total Verification of SoC Hardware, Software, and Interfaces

by Jim Kenney, Marketing Director, Emulation Division, Mentor Graphics Corporation

With the majority of designs today containing one or more embedded processors, the verification landscape is transforming as more companies grapple with the limitations of traditional verification tools. Comprehensive verification of multi-core SoCs cannot be accomplished without including the software that will run on the hardware. Emulation has the speed and capacity to do this before the investment is made in prototypes or silicon. Virtualization Delivers Total Verification of SoC Hardware, Software, and Interfaces

On the Fly Reset

by Mark Peryer, Verification Methodologist, Mentor Graphics Corporation

A common verification requirement is to reset a design part of the way through a simulation to check that it will come out of reset correctly and that any non-volatile settings survive the process. Almost all testbenches are designed to go through some form of reset and initialization process at their beginning, but applying reset at a mid-point in the simulation can be problematic. The Accellera UVM phasing subcommittee has been trying to resolve how to handle resets for a long time and has yet to reach a conclusion. On the Fly Reset
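
For context, the scenario itself is easy to sketch outside of UVM phasing; the following minimal example (with invented delays and signal names) simply pulses reset again part-way through the run.

```systemverilog
// Minimal sketch with invented delays and signal names; this shows only the
// scenario itself, not a UVM phasing solution.
module tb_otf_reset;
  logic clk = 0, rst_n;
  always #5 clk = ~clk;

  // dut u_dut (.clk(clk), .rst_n(rst_n) /* ... */);   // hypothetical DUT hookup

  initial begin
    rst_n = 0;                          // normal power-on reset
    repeat (4) @(posedge clk);
    rst_n = 1;
    // ... normal stimulus runs here ...
    #($urandom_range(20_000, 10_000));  // pick a mid-simulation point
    @(posedge clk) rst_n = 0;           // on-the-fly reset
    repeat (4) @(posedge clk);
    rst_n = 1;                          // design must come out cleanly and any
                                        // non-volatile settings must survive
  end
endmodule
```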

Relieving the Parameterized Coverage Headache

This article will discuss the coverage-related pitfalls and solutions when dealing with parameterized designs. Relieving the Parameterized Coverage Headache

Four Best Practices for Prototyping MATLAB and Simulink Algorithms on FPGAs

by Stephan van Beek, Sudhir Sharma, and Sudeepa Prakash, MathWorks

Using FPGAs to process large test data sets enables engineers to rapidly evaluate algorithm and architecture tradeoffs. They can also test designs under real-world scenarios without incurring the heavy time penalty associated with HDL simulators. System-level design and verification tools such as MATLAB® and Simulink® help engineers realize these benefits by rapidly prototyping algorithms on FPGAs. Four Best Practices for Prototyping MATLAB and Simulink Algorithms on FPGAs

Better Living Through Better Class-Based SystemVerilog Debug

by Rich Edelman, Raghu Ardeishar, John Amouroux, Mentor Graphics Corporation

This article covers simulator independent debugging techniques that can be adopted as aids during the testbench debug period. Each technique is a small step for better debug, but when considered together can compensate for tool limitations, staff training issues, or other issues with adopting the SystemVerilog language and verification libraries. Better Living Through Better Class-Based SystemVerilog Debug


February 2012 | Volume 8, Issue 1

Verification Horizons Complete Issue:

Verification Horizons Articles:

Planning, Contingencies, and Constrained-Random Stimulus...in the Kitchen

by Tom Fitzpatrick, Editor and Verification Technologist, Mentor Graphics Corporation

Our featured articles in this issue introduce two new UVM initiatives that we think you'll find useful. Both are intended to extend UVM accessibility to new groups of users. The first article, "Introducing UVM Connect," by my friend and colleague Adam Erickson, gives an introductory look at our open-source UVM Connect library, which provides TLM1 and TLM2 connectivity and object passing between SystemC and SystemVerilog models and components, as well as a UVM Command API for accessing and controlling UVM simulation from SystemC (or C or C++). The library makes it possible to use UVM in a mixed-language environment in which the strengths of each language can be applied to the problem as needed. Planning, Contingencies, and Constrained-Random Stimulus...in the Kitchen.

Introducing UVM Connect

by Adam Erickson, Verification Technologist, Mentor Graphics Corporation

So what does this new capability allow you to do? UVM Connect enables the following use models, all designed to maximize IP reuse. Abstraction refinement: reuse your SystemC architectural models as reference models in UVM verification, and reuse your SystemVerilog stimulus-generation agents to verify models in SystemC. Expansion of VIP inventory: more off-the-shelf VIP is available when you are no longer confined to VIP written in one language, increasing IP reuse; to properly verify large SoC systems, verification environments are becoming more of an integration problem than a design problem. Leveraging language strengths: each language has its strengths. You can leverage SV's powerful constraint solvers and UVM's sequences to provide random stimulus to your SC architectural models, and you can leverage SC's speed and capacity for verification of untimed or loosely timed system-level environments. Access to SV UVM from SC: the UVM Command API provides a bridge between SC and UVM simulation in SV. With this API you can wait for and control UVM phase transitions, set and get configuration, issue UVM-style reports, set factory type and instance overrides, and more. Introducing UVM Connect
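A minimal sketch of the SystemVerilog side of such a binding is shown below, assuming the connect-by-lookup-string style described in the UVM Connect documentation. The component, socket, and lookup-string names are invented, and the exact uvmc_tlm signature should be checked against the library itself.

    // Sketch only: bind an SV TLM2 socket to an SC-side socket by lookup string.
    import uvm_pkg::*;
    `include "uvm_macros.svh"
    import uvmc_pkg::*;

    class sv_stim extends uvm_component;
      `uvm_component_utils(sv_stim)
      uvm_tlm_b_initiator_socket #() out;   // TLM2 blocking socket, generic payload
      function new(string name, uvm_component parent);
        super.new(name, parent);
      endfunction
      function void build_phase(uvm_phase phase);
        out = new("out", this);
      endfunction
      function void connect_phase(uvm_phase phase);
        // Register this socket for cross-language binding; the SystemC side
        // would register its target socket with the same string, roughly:
        //   uvmc_connect(refmodel.in, "stim2ref");
        uvmc_tlm #()::connect(out, "stim2ref");
      endfunction
    endclass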

UVM Express

Extracted from the UVM/OVM Online Methodology Cookbook

UVM Express provides a small first step toward UVM adoption. UVM Express is a way to build your testbench environment, a way to raise your abstraction level, a way to check the quality of your tests and a way to think about writing your tests. Each of the steps outlined for UVM Express is a reusable piece of verification infrastructure. These UVM Express steps are a way to progressively adopt a UVM methodology, while getting verification productivity and verification results at each step. Using UVM Express is not a replacement for full UVM, but instead enables full UVM migration or co-existence at any time. UVM Express is UVM – just organized in a way that allows progressive adoption and a value proposition with each step. UVM Express

Automation Management: Are You Living a Scripted Life?

by Darron May, Manager of Verification Analysis Solutions, Mentor Graphics Corporation

Increasing demand for high-quality IP and SoC designs, together with ever-shortening design cycles, puts pressure on IP and SoC houses to leverage automation as much as possible throughout the entire electronic design and verification process. This is indeed widely seen in the verification space, where verification engineers have to contend with tasks such as coverage closure, bug hunting, and smoke and soak testing, all of which are done by running lots of regressions. How automation is applied to a verification process can have a massive impact (positive and negative) on the overall productivity of that process. Are You Living a Scripted Life?

Efficient Project Management and Verification Sign-off Using Questa Verification Management

by Suresh Babu P., Chakravarthi M.G., Test and Verification Solutions India Pvt. Ltd.

Test and Verification Solutions (TVS) uses Questa Verification Management (Questa VM) for both project management and verification sign-off for its asureVIP development program. Questa VM can manage verification data, processes and tools with features such as testplan tracking, trend analysis, results analysis and run management. TVS has benefited specifically from Questa VRM (Verification Run Manager), which automates regression runs and monitors coverage status. These features help us identify a bug during a regression run, start the debug process immediately, and then easily monitor project status. Efficient Project Management and Verification Sign-off Using Questa Verification Management

Using Formal Technology To Improve Coverage Results

by Roger Sabbagh, Product Marketing Manager Design Verification & Harry Foster, Chief Verification Scientist, Mentor Graphics

Debugging continues to be one of the biggest bottlenecks in today's design flow. Yet, when the topic of debugging comes up among project teams, the first thing most engineers think of is the process of finding bugs in architectural and RTL models, as well as in verification code and tests. However, debugging touches all processes within a design flow, including the painful task of coverage closure. In fact, one of the most frustrating aspects of debugging is tracking down a particular coverage item that has not been hit, only to learn that the coverage item is unreachable. In this article we explore the debugging aspect of coverage closure, with a focus on the unique ability of formal technology to automatically generate simulation exclusion files that improve coverage results while reducing the time wasted trying to hit unreachable states. Using Formal Technology To Improve Coverage Results

Hiding the Guts

by Ray Salemi, Senior Verification Consultant, Mentor Graphics

We verification testbench designers are happy sausage makers, merrily turning out complex and powerful verification environments. To us, object-oriented programming environments not only greatly enhance our productivity but also make us feel smarter. Who doesn't like to throw around words such as extend, factory, and, of course, polymorphism? It's good for the ego and the soul.

However, our test-writing coworkers, the ones who will use our environment to drive stimulus into the DUT, look upon object-oriented programming as a horrible goo that some people need to touch to make the sausages, but that they would rather ignore. When you tell these folks, "Just extend the basic test class, and override the environment in the factory," they look at you as if you had asked them to plunge their hands into a bowl of freshly ground pork. Hiding the Guts
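For readers wondering what that quoted advice actually looks like in code, here is a small sketch in standard UVM; the class names are invented and the base classes are deliberately minimal.

    // "Extend the base test and override the environment in the factory,"
    // spelled out with invented class names.
    import uvm_pkg::*;
    `include "uvm_macros.svh"

    // A deliberately tiny "guts" layer the test writer never needs to open up.
    class base_env extends uvm_env;
      `uvm_component_utils(base_env)
      function new(string name, uvm_component parent); super.new(name, parent); endfunction
    endclass

    class error_env extends base_env;          // specialization a test may swap in
      `uvm_component_utils(error_env)
      function new(string name, uvm_component parent); super.new(name, parent); endfunction
    endclass

    class base_test extends uvm_test;
      `uvm_component_utils(base_test)
      base_env env;
      function new(string name, uvm_component parent); super.new(name, parent); endfunction
      function void build_phase(uvm_phase phase);
        env = base_env::type_id::create("env", this);   // built through the factory
      endfunction
    endclass

    // The test writer's job: extend the base test and let the factory
    // substitute the environment, without touching the "sausage making".
    class error_test extends base_test;
      `uvm_component_utils(error_test)
      function new(string name, uvm_component parent); super.new(name, parent); endfunction
      function void build_phase(uvm_phase phase);
        set_type_override_by_type(base_env::get_type(), error_env::get_type());
        super.build_phase(phase);
      endfunction
    endclass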

A Methodology for Advanced Block Level Verification

by Ashish Aggarwal, Verification Technologist and Ravindra K. Aneja, Verification Technology Manager, Mentor Graphics

This paper outlines the process for advanced verification methods at the block level. Design and verification issues can be divided into four major categories, each of which we briefly address in this paper: RTL development, verification of standard protocol interfaces, end-to-end verification using a simulation-based environment, and effective management of coverage closure. A Methodology for Advanced Block Level Verification


November 2011 | Volume 7, Issue 3

Verification Horizons Complete Issue:

Verification Horizons Articles:

With Vision and Planning, You Can Actually Enjoy Upgrading Your Process. Seriously

by Tom Fitzpatrick, Editor and Verification Technologist, Mentor Graphics Corporation

We're finally doing it. My wife and I have been in our house for 13½ years and we're finally redoing our kitchen, which my wife has wanted to do for about 11 years now. This is kind of a big step for us, as those of you who have gone through the exercise can attest, I'm sure. When we had the house built, we chose to limit our budget to keep "expensive" from turning into "exorbitant," and we've been living with formica countertops and a linoleum floor ever since. It's rather exciting seeing the first steps of converting these into granite and hardwood. As you might imagine, I find myself thinking of how this relates to upgrading a verification methodology because I'm sure that, by now, you know that that's how my mind works. With Vision and Planning, You Can Actually Enjoy Upgrading Your Process. Seriously


Graph-Based IP Verification in an ARM SoC Environment

by Andreas Meyer, Verification Technologist, Mentor Graphics Corporation

The use of graph-based verification methods for block designs has been shown to provide a significant time and cost reduction when compared to more traditional constrained-random techniques. By filling the constraint space in a controlled-random fashion, the full set of constraint combinations can be covered significantly faster than a random constraint solver can achieve. Perhaps more importantly, a graph described in canonical form is likely to be easier to modify and maintain over the life of an IP block than constraint statements. By combining graphs with a UVM environment, it is possible to extend block-level verification components into an SoC-level test environment with little additional verification development. Graph-Based IP Verification in an ARM SoC Environment


Use Scripting Language in Combination with the Verification Testbench to Define Powerful Test Cases

by Franz Pammer, Infineon

This article focuses on a methodology that makes writing test cases for mixed analog-digital designs easier. It explains the advantage of using a scripting language together with a Hardware Description Language (HDL) environment, and it shows a way to give HDL engineers the ability to use constrained-random test cases without needing detailed knowledge of the underlying testbench. Two possible scenarios for defining your test cases will be demonstrated: one for a traditional VHDL testbench and another for a SystemVerilog (SV) Open Verification Methodology (OVM) environment. In both cases a scripting language is used to write the test cases. By the end you will see that, with this methodology, design engineers can also benefit fully from verifying their own designs. Use Scripting Language in Combination with the Verification Testbench to Define Powerful Test Cases

STMicroelectronics Engineers Find Mixed Analog-Digital Verification Success with OVM

by Alberto Allara, Matteo Barbati, and Fabio Brognara, STMicroelectronics

By now it's a cliché to speak of the rise of digital technology. Follow technology coverage in the media for any length of time and it doesn't take long to note the tacit assumption that nearly anything with an on/off switch will eventually communicate with the world at large exclusively in strings of 0s and 1s. Of course, as long as the electronics industry is built on harnessing the laws of physics, the importance of analog signals will never go away. Nature speaks in waveforms, not regimented bitstreams. So the challenge, and one that must be repeatedly solved by those building ever more complex semiconductor devices, is how to verify what's happening at the analog-digital interface. STMicroelectronics Engineers Find Mixed Analog-Digital Verification Success with OVM

Polymorphic Interfaces: An Alternative for SystemVerilog Interfaces

by Shashi Bhutada, Mentor Graphics Corporation

The SystemVerilog language is seeing increasing adoption for advanced verification techniques, enabling, among other things, the creation of dynamic test environments using SystemVerilog classes. The SystemVerilog virtual interface has been the primary method for connecting class-based SystemVerilog testbenches with a VHDL or Verilog DUT, but this construct has certain limitations, especially when the interface is parameterized. In this article we discuss some of these limitations and demonstrate an alternative approach called the Polymorphic Interface. The recommended approach also works generically wherever virtual interfaces are used, not just with parameterized interfaces. For the SystemVerilog testbench we use OVM infrastructure, but the same discussion applies to UVM testbenches. This article assumes familiarity with SystemVerilog OVM/UVM class and interface syntax. An Alternative for SystemVerilog Interfaces
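The general idea behind such alternatives is often an abstract class whose concrete implementation lives inside the interface, so class-based code depends only on a parameter-free API. The sketch below illustrates that pattern with invented names; the article's own implementation may differ in its details.

    // Sketch of the abstract/concrete-class idea behind "polymorphic"
    // alternatives to virtual interfaces (invented names).
    package bus_api_pkg;
      virtual class bus_api;                   // abstract, parameter-free API
        pure virtual task drive(input logic [31:0] data);
      endclass
    endpackage

    interface bus_if #(parameter WIDTH = 32) (input logic clk);
      logic [WIDTH-1:0] data;

      import bus_api_pkg::*;
      // Concrete implementation declared inside the interface, so it can
      // touch the (parameterized) signals directly.
      class bus_api_impl extends bus_api;
        task drive(input logic [31:0] d);
          @(posedge clk);
          data <= d[WIDTH-1:0];
        endtask
      endclass
      bus_api_impl api = new();                // handle handed to the testbench
    endinterface

    // A class-based driver now depends only on the abstract bus_api type,
    // regardless of how bus_if is parameterized.
    class my_driver;
      bus_api_pkg::bus_api api;
      function new(bus_api_pkg::bus_api api);
        this.api = api;
      endfunction
      task run();
        api.drive(32'hDEAD_BEEF);
      endtask
    endclass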


Layering in UVM

Extracted from the UVM/OVM Online Methodology Cookbook

Many protocols have a hierarchical definition. For example, PCI Express, USB 3.0, and MIPI LLI all have a Transaction Layer, a Transport Layer, and a Physical Layer. Sometimes we want to create a protocol-independent layer on top of a standard protocol so that we can create protocol-independent verification components (for example, TLM 2.0 GP over AMBA AHB). All these cases require that we deconstruct sequence items of the higher-level protocol into sequence items of the lower-level protocol in order to stimulate the bus, and that we reconstruct higher-level sequence items from the lower-level sequence items in the analysis datapath. Layering in UVM
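The deconstruction direction can be sketched as a translator sequence that pulls higher-level items and sends the resulting lower-level items to the sequencer it runs on. The transaction and sequencer names below are invented for illustration.

    // Sketch of the "deconstruct" step of layering (invented names).
    import uvm_pkg::*;
    `include "uvm_macros.svh"

    class txn_item extends uvm_sequence_item;          // higher-level item
      `uvm_object_utils(txn_item)
      rand byte unsigned payload[];
      function new(string name = "txn_item"); super.new(name); endfunction
    endclass

    class phy_item extends uvm_sequence_item;          // lower-level item
      `uvm_object_utils(phy_item)
      rand byte unsigned data;
      function new(string name = "phy_item"); super.new(name); endfunction
    endclass

    class txn_to_phy_seq extends uvm_sequence #(phy_item);
      `uvm_object_utils(txn_to_phy_seq)
      // Upstream sequencer that supplies transaction-layer items
      uvm_sequencer #(txn_item) up_sequencer;
      function new(string name = "txn_to_phy_seq"); super.new(name); endfunction

      task body();
        txn_item t;
        forever begin
          up_sequencer.get_next_item(t);               // pull a high-level item
          foreach (t.payload[i]) begin                 // break it into low-level items
            phy_item p = phy_item::type_id::create("p");
            start_item(p);
            p.data = t.payload[i];
            finish_item(p);
          end
          up_sequencer.item_done();
        end
      endtask
    endclass

The reconstruction direction in the analysis datapath is the mirror image: a monitor-side component collects low-level observations and publishes reassembled higher-level items.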


Successful Adoption of OVM and What We Learned Along the Way

by Mike Bartley, Founder, TVS and Steve Holloway, Senior Verification Engineer, Dialog Semiconductor

OVM has gained a lot of momentum in the market to become the dominant verification "methodology," and the indications are that UVM will gain even greater adoption. OVM is built around SystemVerilog and provides libraries that allow users to build advanced verification testbenches more quickly. There is extensive documentation, training and support for how best to develop such testbenches and encourage testbench reuse. However, there is less advice on how to adapt the verification processes on your project to best use OVM, and even less on how to do this for company-wide adoption. In this article we discuss the authors' experiences of a company-wide adoption of OVM. We consider the original motivations for that adoption and the expected benefits, the actual results achieved, and the problems that have been overcome. Successful Adoption of OVM and What We Learned Along the Way

Process Management: Are You Driving in the Dark with Faulty Headlights?

by Darron May, Manager of Verification Analysis Solutions, Mentor Graphics Corporation

If your car's headlights were faulty, would you even think about leaving home in the dark bound for a distant destination to which you've never been? Even if you were armed with a map, a detailed set of instructions and a good flashlight, the blackness on the other side of your windshield would still make it difficult to navigate. And even with reliable headlights and maps, you'll invariably still encounter obstacles and detours. These hassles are nowadays mitigated somewhat by car navigation systems that tell us where we are, how far we have travelled, and how bad the traffic is ahead, so we can consider alternate routes and estimate how long it will take to reach our final destination. Verification of SoCs and electronic systems is certainly a little more complex than road navigation. However, the process of planning what you are verifying and then constantly measuring where you are in the process is equally important whether your final destination is a swanky new hotel a few hours away or a successful tape-out by the end of the year. Are You Driving in the Dark with Faulty Headlights?


June 2011 | Volume 7, Issue 2

Verification Horizons Complete Issue:

Verification Horizons Articles:

How Do You Know You Have the Right Tool for the Right Job?

by Tom Fitzpatrick, Editor and Verification Technologist, Mentor Graphics Corporation

As I write this, spring appears to have finally arrived here in New England – about a month and a half later than the calendar says it should have. As much as I love warm spring weather, though, it means that I now have to deal with my lawn again. I know that many people actually enjoy working on the lawn, but as far as I'm concerned, the greatest advance in lawn-care technology happened last year when my son became old enough to drive the lawn mower. If you've ever seen a 13-year-old boy driving a lawn tractor, you'll understand my characterizing him as "constrained-random" when it comes to getting the lawn cut. I handle the "directed testing" by taking care of the edging and hard-to-reach spots, and together we manage to get the lawn done in considerably less time than it used to take me alone. How Do You Know You Have the Right Tool for the Right Job?

First Principles: Why Bother with This Methodology Stuff, Anyway?

by Joshua Rensch, Verification Lead, Lockheed Martin and Tom Fitzpatrick, Verification Methodologist, Mentor Graphics Corporation

Many of us are so used to the idea of "verification methodology," including constrained random and functional coverage, that we sometimes lose sight of the fact that there is still a large section of the industry to whom these are new concepts. Every once in a while, it's a good idea to go back to "first principles" and understand how we got where we are and why things like the OVM and UVM are so popular. Both of us have found ourselves in the situation of trying to explain these ideas to colleagues, and we thought it might be helpful to document some of the discussions we've had. Why Bother with This Methodology Stuff, Anyway?

Online UVM/OVM Methodology Cookbook: Registers/Overview

by Mark Peryer, Verification Methodologist, Mentor Graphics Corporation

The UVM register model provides a way of tracking the register content of a DUT and a convenience layer for accessing register and memory locations within the DUT. The register model abstraction reflects the structure of a hardware-software register specification, since that is the common reference specification for hardware design and verification engineers, and it is also used by software engineers developing firmware layer software. It is very important that all three groups reference a common specification and it is crucial that the design is verified against an accurate model. Online UVM/OVM Methodology Cookbook: Registers/Overview
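A small sketch of the abstraction the paragraph describes is shown below: fields grouped into a register, and registers grouped into a block with an address map, mirroring the hardware-software register specification. The register, field, and block names are invented.

    // Sketch of a UVM register model (invented register/field names).
    import uvm_pkg::*;
    `include "uvm_macros.svh"

    class ctrl_reg extends uvm_reg;
      `uvm_object_utils(ctrl_reg)
      rand uvm_reg_field enable;
      rand uvm_reg_field mode;

      function new(string name = "ctrl_reg");
        super.new(name, 32, UVM_NO_COVERAGE);
      endfunction

      virtual function void build();
        // configure(parent, size, lsb, access, volatile, reset,
        //           has_reset, is_rand, individually_accessible)
        enable = uvm_reg_field::type_id::create("enable");
        enable.configure(this, 1, 0, "RW", 0, 1'b0, 1, 1, 0);
        mode   = uvm_reg_field::type_id::create("mode");
        mode.configure(this, 4, 1, "RW", 0, 4'h0, 1, 1, 0);
      endfunction
    endclass

    class dut_reg_block extends uvm_reg_block;
      `uvm_object_utils(dut_reg_block)
      rand ctrl_reg ctrl;

      function new(string name = "dut_reg_block");
        super.new(name, UVM_NO_COVERAGE);
      endfunction

      virtual function void build();
        default_map = create_map("default_map", 'h0, 4, UVM_LITTLE_ENDIAN);
        ctrl = ctrl_reg::type_id::create("ctrl");
        ctrl.configure(this);
        ctrl.build();
        default_map.add_reg(ctrl, 'h0, "RW");
        lock_model();
      endfunction
    endclass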

A Methodology for Hardware-Assisted Acceleration of OVM and UVM Testbenches

by Hans van der Schoot, Anoop Saha, Ankit Garg, Krishnamurthy Suresh, Emulation Division, Mentor Graphics Corporation

A methodology is presented for writing modern SystemVerilog testbenches that can be used not only for software simulation, but especially for hardware-assisted acceleration. The methodology is founded on a transaction-based co-emulation approach and enables truly single-source, fully IEEE 1800 SystemVerilog compliant, transaction-level testbenches that work for both simulation and acceleration. Substantial run-time improvements are possible in acceleration mode without sacrificing simulator verification capabilities and integrations, including SystemVerilog coverage-driven, constrained-random and assertion-based techniques, as well as prevalent verification methodologies like OVM or UVM. A Methodology for Hardware-Assisted Acceleration of OVM and UVM Testbenches

Combining Algebraic Constraints with Graph-based Intelligent Testbench Automation

by Mike Andrews, Verification Technologist, Mentor Graphics

The Questa® inFact intelligent testbench automation tool is already proven to help verification teams dramatically reduce the time it takes to reach their coverage goals. It does this by intelligently traversing a graph-based description of the test sequences and allowing the user to prioritize the input combinations required to meet the testbench coverage metrics, while still delivering those sequences in a pseudo-random order to the device under test (DUT). The rule language, an extended Backus-Naur Form (BNF) used to describe the graph structure, has recently been enhanced with two powerful new features. Combining Algebraic Constraints with Graph-based Intelligent Testbench Automation

Data Management: Is There Such a Thing as an Optimized Unified Coverage Database?

by Darron May, Manager of Verification Analysis Solutions and Gabriel Chidolue, Verification Technologist, Mentor Graphics Corporation

With the sheer volume of data produced by today's verification environments, there is a real need for solutions that deliver both the highest capacities and the performance to enable the data to be accessed and analyzed in a timely manner. There is no single coverage metric that can be used to measure functional verification completeness, and today's complex systems demand multiple verification methods. Is There Such a Thing as an Optimized Unified Coverage Database?

A Unified Verification Flow Using Assertion Synthesis Technology

by Yuan Lu, Nextop Software Inc., and Ping Yeung, Mentor Graphics Corporation

As SoC integration complexity has grown tremendously over the last decade, traditional black-box, checker-based verification methodologies have failed to keep up and provide the observability needed. Assertion-based verification (ABV) methodology is widely recognized as a solution to this problem. ABV is a methodology in which designers use assertions to capture specific internal design intent or interface specifications and, through simulation, formal verification, or emulation of these assertions, verify that the design correctly implements that intent. Assertions actively monitor a design (or testbench) to ensure correct functional behavior. A Unified Verification Flow Using Assertion Synthesis Technology
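As a generic illustration of the kind of assertion being discussed (not taken from the article), the SystemVerilog Assertions fragment below monitors a simple piece of design intent: every request must be granted within a bounded number of cycles. The module and signal names are invented.

    // Generic SVA illustration (invented names): every request must be
    // granted within 1 to 4 cycles.
    module arb_checks (input logic clk, rst_n, req, gnt);

      property p_req_gets_gnt;
        @(posedge clk) disable iff (!rst_n)
          req |-> ##[1:4] gnt;
      endproperty

      // The same property can be exercised in simulation, formal
      // verification, or emulation flows.
      a_req_gets_gnt: assert property (p_req_gets_gnt)
        else $error("Request not granted within 4 cycles");

      c_req_then_gnt: cover property (@(posedge clk) req ##[1:4] gnt);
    endmodule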

Benchmarking Functional Verification

by Mike Bartley and Mike Benjamin, Test and Verification Solutions

Over the years there have been numerous attempts to develop benchmarking methodologies. One of the most widely used is the Capability Maturity Model Integration (CMMI) developed by the Software Engineering Institute at Carnegie Mellon University. Although aimed at software engineering, it provides a framework that is widely applicable to most business activities. However, whilst we have drawn considerable inspiration from CMMI, it has a number of serious limitations when used to benchmark a highly specific activity such as functional verification. Benchmarking Functional Verification

Universal Verification Methodology (UVM)-based SystemVerilog Testbench for VITAL Models

by Tanja Cotra, Program Manager, HDL Design House

With the increasing number of different VITAL model families, there is a need to develop a base Verification Environment (VE) which can be reused with each new VITAL model family. UVM methodology applied to the SystemVerilog Testbench for the VITAL models should provide a unique VE. The reusability of such a UVM VE is the key benefit compared to the standard approach (direct testing) of verifying VITAL models. Also, it incorporates the rule of "4 Cs" (Configuration, Constraints, Checkers and Coverage). Thus, instead of writing specific tests for each DUT feature, a single test can be randomized and run as part of regression, which speeds up the collection of functional coverage. Universal Verification Methodology (UVM)-based SystemVerilog Testbench for VITAL Models

Efficient Failure Triage with Automated Debug: a Case Study

by Sean Safarpour, Evean Qin, and Mustafa Abbas, Vennsa Technologies Inc.

Functional debug is a dreadful yet necessary part of today's verification effort. At the 2010 Microprocessor Test and Verification Workshop experts agreed that debug consumes approximately one third of the design development time. Typically, debugging occurs in two steps, triage and root cause analysis. The triage step is applied when bugs are first discovered through debugging regression failures.  a Case Study

Are OVM & UVM Macros Evil? A Cost-Benefit Analysis

by Adam Erickson, Mentor Graphics Corporation

Are macros evil? Well, yes and no. Macros are an unavoidable and integral part of any piece of software, and the Open Verification Methodology (OVM) and Universal Verification Methodology (UVM) libraries are no exception. Macros should be employed sparingly to ease repetitive typing of small bits of code, to hide implementation differences or limitations among the vendors' simulators, or to ensure correct operation of critical features. Although the benefits of the OVM and UVM macros may be obvious and immediate, benchmarks and recurring support issues have exposed their hidden costs. Are OVM & UVM Macros Evil? A Cost-Benefit Analysis
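The classic example of this trade-off is the field-automation macros versus hand-written do_copy/do_compare methods. The sketch below shows both styles on an invented transaction class (UVM flavor shown); it illustrates the general pattern rather than the article's own benchmarks.

    // The trade-off in miniature (invented transaction class).
    import uvm_pkg::*;
    `include "uvm_macros.svh"

    // Macro style: one line per field buys copy/compare/print/record support.
    class pkt_m extends uvm_sequence_item;
      rand int  addr;
      rand byte data;
      `uvm_object_utils_begin(pkt_m)
        `uvm_field_int(addr, UVM_ALL_ON)
        `uvm_field_int(data, UVM_ALL_ON)
      `uvm_object_utils_end
      function new(string name = "pkt_m"); super.new(name); endfunction
    endclass

    // Manual style: a little more typing, but straightforward code to debug
    // and no macro-generated machinery at run time.
    class pkt extends uvm_sequence_item;
      `uvm_object_utils(pkt)
      rand int  addr;
      rand byte data;
      function new(string name = "pkt"); super.new(name); endfunction

      virtual function void do_copy(uvm_object rhs);
        pkt rhs_;
        super.do_copy(rhs);
        if (!$cast(rhs_, rhs)) `uvm_fatal("PKT", "do_copy cast failed")
        addr = rhs_.addr;
        data = rhs_.data;
      endfunction

      virtual function bit do_compare(uvm_object rhs, uvm_comparer comparer);
        pkt rhs_;
        if (!$cast(rhs_, rhs)) return 0;
        return super.do_compare(rhs, comparer)
               && (addr == rhs_.addr) && (data == rhs_.data);
      endfunction
    endclass

The article weighs exactly this kind of convenience against the run-time and support costs that the generated code can introduce.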


February 2011 | Volume 7, Issue 1

Verification Horizons Complete Issue:
