Search Results

2056 Results

  • UVM Forum - All Slides

  • UVM Forum Seminar - 2015: UVM Enabled Advanced Storage IP Silicon Success

  • UVM and Emulation - Easing the Path to Advanced Verification and Analysis

  • Automating Scenario-Level UVM Tests with Portable Stimulus

    In this session, you will learn how to easily leverage lower-level descriptions, such as sequence items, in larger scenarios, and how to exercise the scenario space efficiently and predictably, ensuring high-quality verification results. (A minimal sequence-item sketch appears after the results list below.)

  • UVM Forum Seminar - 2015: Improving UVM Testbench Debug Productivity and Visibility

  • Creating UVM Testbenches for Simulation & Emulation Platform Portability

    In this session, you will learn the fundamentals of hardware-assisted testbench acceleration.

  • UVM Forum Seminar - 2015: UVM Technology Overview

  • UVM Forum Seminar - 2015: UVM Everywhere: Industry Drivers, Best Practices, and Solutions

  • ISO 26262 Fault Analysis – Worst Case is Really the Worst

    Imagine you’re a verification engineer being asked to get a small 10K-gate design ISO 26262 certified. Assuming you don’t take the smart decision to quit your job, what would be your first step? If you had been asked to do plain functional verification of the design, you would obviously start by reading the DUT spec.

  • Memories Are Made Like This

    One of the most common requirements for the verification of a chip, board or system is the ability to model the behavior of memory components, which is why memory models are one of the most prevalent types of Verification IP (VIP). (A minimal sparse-memory sketch appears after the results list below.)

  • A New Stimulus Model for CPU Instruction Sets

    Verifying that a specific implementation of a processor is fully compliant with the specification is a difficult task. Due to the very large stimulus space, it is difficult, if not impossible, to ensure that every architectural and micro-architectural feature has been exercised. Typical approaches involve collecting large test suites of real software, as well as using program generators based on constrained-random generation of instruction streams, but there are drawbacks to each. (An illustrative constrained-random generator sketch appears after the results list below.)

  • On-Chip Debug – Reducing Overall ASIC Development Schedule Risk

    With ASIC complexity on the increase and unrelenting time-to-market pressure, many silicon design teams still face serious schedule risk from unplanned spins and long post-silicon debug cycles. However, there are opportunities on both the pre-silicon and post-silicon sides that can be systematically improved using on-chip debug solutions.

  • Hardware Emulation: Three Decades of Evolution - Part III

    At the beginning of the third decade, circa 2005, system and chip engineers were developing ever more complex designs that mixed many interconnected blocks, embedded multicore processors, digital signal processors (DSPs) and a plethora of peripherals, supported by large memories. The combination of all of these components gave real meaning to the designation system on chip (SoC).

  • QVIP Provides Thoroughness in Verification

    Present-day designs use standard interfaces for the connection and management of functional blocks in Systems on Chip (SoCs). These interface protocols are so complex that creating in-house VIP can consume a great deal of engineering time. A fully verified interface should include complete protocol compliance checking, generation and application of different test case scenarios, and more.

  • Minimizing Constraints to Debug Vacuous Proofs

    Most false positives (i.e., missed design bugs) during the practice of model checking on industrial designs can be reduced to the problem of a failing cover. Debugging the root cause of such a failing cover can be a laborious process when the formal testbench has many constraints. This article describes a solution that minimizes the number of model checking runs needed to isolate a minimal set of constraints necessary for the failure, which helps improve formal verification productivity.

  • A Generic UVM Scoreboard

    All UVM engineers employ scoreboarding for checking DUT/reference model behavior, but only a few spend their time wisely by employing an existing scoreboard architecture. The main reason is that existing frameworks have inadequately served user needs and have failed to improve user effectiveness during debug. This article presents a better UVM scoreboard framework, focusing on scalability, architectural separation and connectivity to foreign environments. (A bare-bones scoreboard sketch appears after the results list below.)

  • Getting ISO 26262 Faults Straight

    Random hardware faults – i.e. individual gates going nuts and driving a value they’re not supposed to – are practically expected in every electronic device, at a very low probability. For mobile or home entertainment devices, we can live with their impact. But in safety-critical designs, such as automotive or medical, we could well die from them. That explains why the ISO 26262 automotive safety standard is obsessed with analyzing and minimizing the risk they pose.

  • Getting ISO 26262 Faults Straight

    The ISO 26262 automotive standard requires that the impact of random hardware faults on hardware used in vehicles be thoroughly analyzed, and that the risk of safety-critical failures due to such faults be shown to be below a certain threshold.

  • Low Power Verification Techniques

    This session highlights a "new school" low power methodology termed "successive refinement," which applies the strengths of UPF in a structured, incremental approach.

  • Targeting Internal-State Scenarios in an Uncertain World

    The challenges inherent in verifying today's complex designs are widely understood. Just identifying and exercising all the operating modes of one of these designs can be difficult, and creating tests that exercise all the corresponding input cases is likewise labor-intensive. With a directed-test methodology, the engineering effort needed to design, implement, and manage the test suite makes it extremely hard to create sufficiently comprehensive tests that ensure design quality. Random test methodology helps to address the productivity and management challenges, since automation is leveraged more efficiently. However, ensuring that all critical cases are hit with random testing is difficult, due to the inherent redundancy of randomly-generated stimulus.

  • Is Intelligent Testbench Automation For You?

    Intelligent Testbench Automation (iTBA) is being successfully adopted by more verification teams every day. Multiple technical papers have demonstrated successful verification applications, and panel sessions have compared its merits to those of both Constrained Random Testing (CRT) and Directed Testing (DT) methods. Technical conferences, including DAC and DVCon, have joined those interested in better understanding this new technology.

  • VHDL-2008: Why It Matters

    VHDL-2008 (IEEE 1076-2008) is here! It is time to start using the new language features to simplify your RTL coding and facilitate the creation of advanced verification environments.

  • Improving Analog/Mixed-Signal Verification Productivity

    Nearly all of today's chips contain Analog/Mixed-Signal circuits. Although these often constitute only 25% of the total die, they may be 100% of the product differentiation and also, unfortunately, 80% of the problems in actually getting the chip to market in a cost-effective and timely way. With growing complexity and shrinking time-to-market, Mixed-Signal verification is becoming an enormous challenge for designers, and improving Mixed-Signal verification performance and quality is critical for today's complex designs.

  • Portable VHDL Testbench Automation with Intelligent Testbench Automation

    We've come a long way since digital designs were sketched as schematics by hand on paper and tested in the lab by wiring together discrete integrated circuits, applying generated signals and checking for proper behavior. Design evolved to gate-level on a workstation and on to RTL, while verification evolved from simple directed tests to directed-random, constrained-random, and systematic testing. At each step in this evolution, significant investment has been made in training, development of reusable infrastructure, and tools. This level of investment means that switching to a new verification environment, for example, has a cost and tends to be a carefully-planned migration rather than an abrupt switch. In any migration process, technologies that help to bring new advances into the existing environment while continuing to be valuable in the future are critical methodological "bridges".

  • Please! Can Someone Make UVM Easier to Use?

    UVM was designed to simplify and standardize verification, which had become fragmented across many methodologies such as eRM, VMM, and OVM. It started off quite simple. Later on, as a result of feature creep, many of the issues with the older methodologies found their way into UVM. This article looks at some of those issues and suggests ways of simplifying the verification environment.
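
The sketches below expand on a few of the results above. First, for "Automating Scenario-Level UVM Tests with Portable Stimulus": a purely illustrative UVM sketch of the kind of lower-level description (a sequence item) and hand-written scenario sequence the session builds on. The names (mem_item, mem_burst_seq) are hypothetical, and the hand-coded loop is exactly the composition work that Portable Stimulus tooling is meant to automate; this is not the session's own code.

    import uvm_pkg::*;
    `include "uvm_macros.svh"

    // One "lower-level description": a simple read/write transaction item.
    class mem_item extends uvm_sequence_item;
      rand bit        write;
      rand bit [31:0] addr;
      rand bit [31:0] data;
      `uvm_object_utils(mem_item)
      function new(string name = "mem_item"); super.new(name); endfunction
    endclass

    // A hand-written scenario that strings items together. Portable Stimulus
    // tooling generates and schedules this kind of composition automatically,
    // exercising the scenario space instead of one hand-coded burst.
    class mem_burst_seq extends uvm_sequence #(mem_item);
      rand int unsigned len;
      constraint c_len { len inside {[4:16]}; }
      `uvm_object_utils(mem_burst_seq)
      function new(string name = "mem_burst_seq"); super.new(name); endfunction
      task body();
        mem_item it;
        repeat (len) begin
          it = mem_item::type_id::create("it");
          start_item(it);
          if (!it.randomize()) `uvm_error("RAND", "randomize failed")
          finish_item(it);
        end
      endtask
    endclass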
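
For "Memories Are Made Like This": a bare-bones sketch, under assumed names, of what a simple sparse memory model does: remember writes and return them on reads. Real memory VIP layers protocol timing, backdoor access, initialization from files, error injection, and coverage on top of this idea.

    class sparse_mem #(int unsigned AW = 32, int unsigned DW = 32);
      // Associative array keyed by address: only locations that have actually
      // been written consume simulator memory, so very large address spaces
      // can be modeled cheaply.
      local bit [DW-1:0] mem [bit [AW-1:0]];

      function void write(bit [AW-1:0] addr, bit [DW-1:0] data);
        mem[addr] = data;
      endfunction

      function logic [DW-1:0] read(bit [AW-1:0] addr);
        if (mem.exists(addr)) return mem[addr];
        return 'x;  // location never written: return unknowns
      endfunction
    endclass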
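
For "A New Stimulus Model for CPU Instruction Sets": an illustrative constrained-random instruction-stream generator of the kind the article contrasts with suites of real software. The five-opcode instruction set and the distribution weights are invented for the example and do not come from the article.

    // A tiny RISC-like instruction with randomizable fields.
    class instr;
      typedef enum bit [2:0] {ADD, SUB, LOAD, STORE, BRANCH} opcode_e;
      rand opcode_e op;
      rand bit [4:0] rd, rs1, rs2;
      rand bit signed [11:0] imm;
      // Bias the stream toward memory and branch operations, a typical way to
      // steer a generator at interesting micro-architectural behavior.
      constraint c_mix {
        op dist {LOAD := 3, STORE := 3, BRANCH := 2, ADD := 1, SUB := 1};
      }
    endclass

    module gen_demo;
      initial begin
        instr i = new();
        repeat (10) begin
          void'(i.randomize());
          $display("%s rd=%0d rs1=%0d rs2=%0d imm=%0d",
                   i.op.name(), i.rd, i.rs1, i.rs2, i.imm);
        end
      end
    endmodule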
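
For "A Generic UVM Scoreboard": a minimal in-order compare scoreboard, written from scratch to show the per-project boilerplate that a generic, reusable scoreboard framework aims to eliminate. The transaction type parameter and FIFO names are placeholders; the article's own architecture is more elaborate (scalable compare strategies, architectural separation, and connectivity to foreign environments).

    import uvm_pkg::*;
    `include "uvm_macros.svh"

    class my_sb #(type T = uvm_sequence_item) extends uvm_scoreboard;
      `uvm_component_param_utils(my_sb #(T))
      uvm_tlm_analysis_fifo #(T) exp_fifo;   // expected items from the reference model
      uvm_tlm_analysis_fifo #(T) act_fifo;   // actual items from the DUT monitor

      function new(string name, uvm_component parent);
        super.new(name, parent);
        exp_fifo = new("exp_fifo", this);
        act_fifo = new("act_fifo", this);
      endfunction

      // Blocking in-order compare: take one expected and one actual item at a
      // time and flag any mismatch.
      task run_phase(uvm_phase phase);
        T exp, act;
        forever begin
          exp_fifo.get(exp);
          act_fifo.get(act);
          if (!exp.compare(act))
            `uvm_error("SB_MISMATCH", {"exp: ", exp.sprint(), " act: ", act.sprint()})
        end
      endtask
    endclass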