Search Results


1772 Results

  • Better Living Through Better Class-Based SystemVerilog Debug

    Debugging large testbenches has changed recently. Testbenches are larger than they used to be, and they are more like software than they used to be. In addition, the testbench language features object-oriented constructs and may use a new library of verification components. Each of these characteristics adds to the pain of debugging the testbench, which must be done before debugging of the actual device under test can begin.

  • Relieving the Parameterized Coverage Headache

    Modern FPGA and ASIC verification environments use coverage metrics to help determine how thorough the verification effort has been. Practices for creating, collecting, merging and analyzing this coverage information are well documented for designs that operate in a single configuration only. However, complications arise when parameters are introduced into the design, especially when creating customizable IP. This article will discuss the coverage-related pitfalls and solutions when dealing with parameterized designs.

  • Targeting Internal-State Scenarios in an Uncertain World

    The challenges inherent in verifying today's complex designs are widely understood. Just identifying and exercising all the operating modes of one of these designs can be difficult, and creating tests that exercise all of these input cases is likewise labor-intensive. With a directed-test methodology, it is extremely hard to create sufficiently comprehensive tests to ensure design quality, due to the amount of engineering effort needed to design, implement, and manage the test suite. Random test methodology helps to address the productivity and management challenges, since automation is leveraged more efficiently. However, ensuring that all critical cases are hit with random testing is difficult, due to the inherent redundancy of randomly generated stimulus.

  • Is Intelligent Testbench Automation For You?

    Intelligent Testbench Automation (iTBA) is being successfully adopted by more verification teams every day. There have been multiple technical papers demonstrating successful verification applications, and panel sessions comparing its merits to both Constrained Random Testing (CRT) and Directed Testing (DT) methods. Technical conferences, including DAC and DVCon, have joined those interested in better understanding this new technology.

  • VHDL-2008: Why It Matters

    VHDL-2008 (IEEE 1076-2008) is here! It is time to start using the new language features to simplify your RTL coding and facilitate the creation of advanced verification environments.

  • Improving Analog/Mixed-Signal Verification Productivity

    Nearly all of today's chips contain Analog/Mixed-Signal circuits. Although these often constitute only 25% of the total die, they may be 100% of the product differentiation and also, unfortunately, 80% of the problems in actually getting the chip to market in a cost-effective and timely way. With growing complexity and shrinking time-to-market, Mixed-Signal verification is becoming an enormous challenge for designers, and improving Mixed-Signal verification performance and quality is critical for today's complex designs.

  • Maximum Productivity with Verification IP

    When beginning a new design, it's common to evaluate how to build a verification infrastructure in the shortest amount of time. Of course, it's never enough to be quick to deploy; verification also has to be complete enough to improve confidence in the design. Rapid bring-up and improving the quality of your design are excellent goals, but you should not forget that your environment should also be efficient to use during the verification process. This is where you will spend most of your time, slugging it out day after day. Arguably, debugging design bugs is one of the most time-consuming tasks of any project. Transaction-Level Modeling (TLM) will change the way you think about debug productivity, especially if you have recently experienced the long and difficult task of deciphering PCIe's training sequences, data transfers, and completion codes at the pin level.

  • Caching in on Analysis

    The on-chip bus interconnect has become a critical subsystem of a System on a Chip (SoC). Its function is to route data between different parts of the system at a rate that allows the system to meet its performance goals. The scale and complexity of the interconnect varies from a single switching block, routing transactions between bus masters and bus slaves, to a hierarchy of switches organized to maximize throughput. Verifying that the interconnect works correctly requires that the various bus ports in the system adhere to the protocol specification; that the system address map has been implemented correctly; and that the overall interconnect fabric delivers on its performance goals.

  • Portable VHDL Testbench Automation with Intelligent Testbench Automation

    We've come a long way since digital designs were sketched as schematics by hand on paper and tested in the lab by wiring together discrete integrated circuits, applying generated signals, and checking for proper behavior. Design evolved to gate-level on a workstation and on to RTL, while verification evolved from simple directed tests to directed-random, constrained-random, and systematic testing. At each step in this evolution, significant investment has been made in training, development of reusable infrastructure, and tools. This level of investment means that switching to a new verification environment, for example, has a cost and tends to be a carefully planned migration rather than an abrupt switch. In any migration process, technologies that help to bring new advances into the existing environment while continuing to be valuable in the future are critical methodological "bridges".

  • Please! Can Someone Make UVM Easier to Use?

    UVM was designed as a means of simplifying and standardizing verification, which had become fragmented across the many methodologies in use, such as eRM, VMM, and OVM. It started off quite simple. Later on, as a result of feature creep, many of the issues with the older methodologies found their way into UVM. This article looks at some of those issues and suggests ways of simplifying the verification environment.

  • Does Design Size Influence First Silicon Success?

    In 2002 and 2004, Collett International Research, Inc. conducted its well-known ASIC/IC functional verification studies, which provided invaluable insight into the state of the electronic industry and its trends in design and verification at that point in time. However, after the 2004 study, no additional Collett studies were conducted, which left a void in identifying industry trends. To address this dearth of knowledge, over the years Mentor Graphics has commissioned multiple world-wide, double-blind, functional verification studies, covering all electronic industry market segments. In this article, we present a few highlights from our most recent study, and try to address the question: “Does design size influence the likelihood of achieving first silicon success?”

  • Unit Testing Your Way to a Reliable Testbench

    Writing tests, particularly unit tests, can be a tedious chore. Even more tedious, not to mention frustrating, is debugging testbench code as project schedules tighten and release pressure builds. With quality being a non-negotiable aspect of hardware development, verification is a pay-me-now or pay-me-later activity that cannot be avoided. Building and running unit tests has a cost, but there is a greater cost of not unit testing. Unit testing is a proactive pay-now technique that helps avoid running up debts that become much more expensive to pay later.

  • New School Thinking for Fast and Efficient Verification Using EZ-VIP

    This session will show how to swiftly move through VIP instantiation, connection, configuration, and protocol initialization, covering the use of UVM-based verification IP for protocols such as PCI Express and MIPI CSI and DSI.

  • New School Regression Control

    Getting the very best from your verification resources requires a regression system that understands the verification process and is tightly integrated with workload management and distributed resource management software. Both requirements depend on visibility into available software and hardware resources, and by combining their strengths, users can massively improve productivity by reducing unnecessary verification cycles.

  • Evolution of UPF: Getting Better All the Time

  • Evolution of Debug

    In this session, Gordon Allan takes a critical look at the past, present and future challenges for debug, exploring real world situations drawn from years of experience in SoC design and verification, and describing leading-edge techniques and compelling solutions.

  • Accelerating RTL Simulation Techniques

  • Unleashing the Full Power of UPF Power States

  • UVM Sans UVM - An Approach to Automating UVM Testbench Writing

  • EZ Verification with Questa Verification IP

  • Automated Generation of Functional Coverage Metrics for Input Stimulus

  • Successive Refinement: A Methodology for Incremental Specification of Power Intent

  • New School Connectivity Checking

    This session discusses the use of a new school formal verification method which can be easily applied to solve the problem of connectivity checking with detailed case studies of how this formal app was used to automatically verify connectivity and accelerate the debug process.

  • Power Aware CDC Verification

    In this track, you will learn low power CDC methodology: we discuss the low power CDC challenges, describe the UPF-related power logic structures relevant to CDC analysis, and explain a low power CDC verification methodology.