Search Results

2072 Results

  • A New Stimulus Model for CPU Instruction Sets

    Verifying that a specific implementation of a processor is fully compliant with the specification is a difficult task. Due to the very large total stimulus space, it is difficult, if not impossible, to ensure that every architectural and micro-architectural feature has been exercised. Typical approaches involve collecting large test suites of real SW, as well as using program generators based on constrained-random generation of instruction streams, but there are drawbacks to each. (A minimal constrained-random instruction sketch appears after this results list.)

  • On-Chip Debug – Reducing Overall ASIC Development Schedule Risk

    With ASIC complexity on the increase and unrelenting time-to-market pressure, many silicon design teams still face serious schedule risk from unplanned spins and long post-silicon debug cycles. However, there are opportunities on both the pre-silicon and post-silicon sides where on-chip debug solutions can systematically reduce that risk.

  • Hardware Emulation: Three Decades of Evolution - Part III

    At the beginning of the third decade, circa 2005, system and chip engineers were developing ever more complex designs that mixed many interconnected blocks, embedded multicore processors, digital signal processors (DSPs) and a plethora of peripherals, supported by large memories. The combination of all of these components gave real meaning to the designation system on chip (SoC).

  • QVIP Provides Thoroughness in Verification

    Present-day designs use standard interfaces for the connection and management of functional blocks in systems on chip (SoCs). These interface protocols are so complex that creating in-house VIPs can consume a great deal of engineering development time. A fully verified interface should include all the complex protocol compliance checking, generation and application of different test case scenarios, and so on.

  • Minimizing Constraints to Debug Vacuous Proofs

    Most false positives (i.e., missed design bugs) during the practice of model checking on industrial designs can be reduced to the problem of a failing cover. Debugging the root cause of such a failing cover can be a laborious process when the formal testbench has many constraints. This article describes a solution that minimizes the number of model checking runs needed to isolate a minimal set of constraints necessary for the failure, which helps improve formal verification productivity.

  • A Generic UVM Scoreboard

    All UVM engineers employ scoreboarding for checking DUT/reference model behavior, but only a few spend their time wisely by employing an existing scoreboard architecture. The main reason is that existing frameworks have inadequately served user needs and have failed to improve user effectiveness during debug. This article presents a better UVM scoreboard framework, focusing on scalability, architectural separation and connectivity to foreign environments. (A minimal scoreboard sketch appears after this results list.)

  • Getting ISO 26262 Faults Straight

    Random hardware faults – i.e. individual gates going nuts and driving a value they’re not supposed to – are practically expected in every electronic device, at a very low probability. When we talk about mobile or home entertainment devices, we could live with their impact. But when we talk about safety critical designs, such as automotive or medical, we could well die from it. That explains why the ISO 26262 automotive safety standard is obsessed with analyzing and minimizing the risk they pose.

  • Getting ISO 26262 Faults Straight

    ISO 26262 for automotive requires that the impacts of random hardware faults on hardware used in vehicles are thoroughly analyzed and the risk of safety critical failures due to such faults is shown to be below a certain threshold.

  • Low Power Verification Techniques

    This session highlights a "new school" low power methodology termed "successive refinement" that uses the strength of UPF in just such a structured approach.

  • Targeting Internal-State Scenarios in an Uncertain World

    The challenges inherent in verifying today's complex designs are widely understood. Just identifying and exercising all the operating modes of one of these designs can be difficult, and creating tests that exercise all the relevant input cases is likewise labor-intensive. With a directed-test methodology, the engineering effort needed to design, implement, and manage a sufficiently comprehensive test suite makes it extremely hard to ensure design quality. Random test methodology helps to address the productivity and management challenges, since automation is leveraged more efficiently. However, ensuring that all critical cases are hit with random testing is difficult, due to the inherent redundancy of randomly-generated stimulus. (A minimal functional-coverage sketch for tracking such critical cases appears after this results list.)

  • Is Intelligent Testbench Automation For You?

    Intelligent Testbench Automation (iTBA) is being successfully adopted by more verification teams every day. There have been multiple technical papers demonstrating successful verification applications and panel sessions comparing the merits to both Constrained Random Testing (CRT) and Directed Testing (DT) methods. Technical conferences including DAC, DVCon, and others have joined those interested in better understanding this new technology.

  • VHDL-2008: Why It Matters

    VHDL-2008 (IEEE 1076-2008) is here! It is time to start using the new language features to simplify your RTL coding and facilitate the creation of advanced verification environments.

  • Improving Analog/Mixed-Signal Verification Productivity

    Nearly all of today's chips contain Analog/Mixed-Signal circuits. Although these often constitute only 25% of the total die, they may be 100% of the product differentiation and also, unfortunately, 80% of the problems in actually getting the chip to market in a cost effective and timely way. With growing complexity and shrinking time-to-market, Mixed-Signal verification is becoming an enormous challenge for designers, and improving Mixed-Signal verification performance and quality is critical for today's complex designs.

  • Portable VHDL Testbench Automation with Intelligent Testbench Automation

    We've come a long way since digital designs were sketched as schematics by hand on paper and tested in the lab by wiring together discrete integrated circuits, applying generated signals and checking for proper behavior. Design evolved to gate-level on a workstation and on to RTL, while verification evolved from simple directed tests to directed-random, constrained-random, and systematic testing. At each step in this evolution, significant investment has been made in training, development of reusable infrastructure, and tools. This level of investment means that switching to a new verification environment, for example, has a cost and tends to be a carefully-planned migration rather than an abrupt switch. In any migration process, technologies that help to bring new advances into the existing environment while continuing to be valuable in the future are critical methodological "bridges".

  • Please! Can Someone Make UVM Easier to Use?

    UVM was designed as a means of simplifying and standardizing verification, which had become fragmented across the many methodologies then in use, such as eRM, VMM, and OVM. It started off quite simple. Later on, as a result of feature creep, many of the issues with the older methodologies found their way into UVM. This article looks at some of those issues and suggests ways of simplifying the verification environment.

  • New School Thinking for Fast and Efficient Verification Using EZ-VIP

    The session will show how to swiftly move through VIP instantiation, connection, configuration and protocol initialization, covering the use of UVM based verification IP for protocols such as PCI Express and MIPI CSI and DSI.

  • New School Regression Control

    Getting the very best from your verification resources requires a regression system that understands the verification process and is tightly integrated with workload management and distributed resource management software. Both requirements depend on visibility into available software and hardware resources, and by combining their strengths, users can massively improve productivity by reducing unnecessary verification cycles.

  • Evolution of Debug

    In this session, Gordon Allan takes a critical look at the past, present and future challenges for debug, exploring real world situations drawn from years of experience in SoC design and verification, and describing leading-edge techniques and compelling solutions.

  • Evolution of Debug

  • Unleashing the Full Power of UPF Power States

  • UVM Sans UVM - An Approach to Automating UVM Testbench Writing

  • EZ Verification with Questa Verification IP

  • Automated Generation of Functional Coverage Metrics for Input Stimulus

  • New School Connectivity Checking

    This session discusses the use of a new school formal verification method that can be easily applied to the problem of connectivity checking, with detailed case studies of how this formal app was used to automatically verify connectivity and accelerate the debug process. (A minimal hand-written connectivity assertion appears after this results list.)

  • Power Aware CDC Verification

    In this track, you will learn a low power CDC methodology: the session discusses the low power CDC challenges, describes the UPF-related power logic structures relevant to CDC analysis, and explains a low power CDC verification methodology.
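
A minimal sketch of the constrained-random instruction-stream generation mentioned in "A New Stimulus Model for CPU Instruction Sets" above. The opcode names, field widths and constraints are hypothetical illustrations, not the article's actual stimulus model.

    // Constrained-random instruction generation (hypothetical ISA).
    typedef enum bit [3:0] { ADD, SUB, LOAD, STORE, BRANCH } opcode_e;

    class instruction;
      rand opcode_e   opcode;
      rand bit [4:0]  rd, rs1, rs2;   // register indices
      rand bit [11:0] imm;            // immediate field

      // Keep the destination off register 0 for ALU ops and bias the
      // stream toward memory accesses.
      constraint c_dest { opcode inside {ADD, SUB} -> rd != 0; }
      constraint c_mix  { opcode dist { LOAD := 3, STORE := 3, ADD := 2,
                                        SUB := 2, BRANCH := 1 }; }
    endclass

    module stimulus_sketch;
      initial begin
        instruction instr = new();
        repeat (10) begin
          if (!instr.randomize()) $error("randomization failed");
          $display("op=%s rd=%0d rs1=%0d rs2=%0d imm=0x%0h",
                   instr.opcode.name(), instr.rd, instr.rs1, instr.rs2, instr.imm);
        end
      end
    endmodule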
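
As a rough illustration of the scoreboarding idea in "A Generic UVM Scoreboard" above, the sketch below compares expected transactions from a reference model against actual transactions from a DUT monitor. The transaction class, field names and FIFO names are assumptions; the article's framework is considerably more general.

    // Minimal UVM scoreboard: expected and actual items arrive over analysis
    // FIFOs and are compared in order.
    `include "uvm_macros.svh"
    import uvm_pkg::*;

    class my_item extends uvm_sequence_item;   // hypothetical transaction
      rand bit [31:0] addr, data;
      `uvm_object_utils(my_item)
      function new(string name = "my_item"); super.new(name); endfunction
    endclass

    class my_scoreboard extends uvm_scoreboard;
      `uvm_component_utils(my_scoreboard)

      uvm_tlm_analysis_fifo #(my_item) exp_fifo;   // written by reference model
      uvm_tlm_analysis_fifo #(my_item) act_fifo;   // written by DUT monitor

      function new(string name, uvm_component parent);
        super.new(name, parent);
        exp_fifo = new("exp_fifo", this);
        act_fifo = new("act_fifo", this);
      endfunction

      task run_phase(uvm_phase phase);
        my_item exp, act;
        forever begin
          exp_fifo.get(exp);   // blocks until an expected item is available
          act_fifo.get(act);   // blocks until an actual item is available
          if (exp.addr != act.addr || exp.data != act.data)
            `uvm_error("SB_MISMATCH",
                       $sformatf("exp addr=0x%0h data=0x%0h, act addr=0x%0h data=0x%0h",
                                 exp.addr, exp.data, act.addr, act.data))
        end
      endtask
    endclass

In a real environment, the monitor's analysis port would be connected to act_fifo.analysis_export (and the reference model's output to exp_fifo.analysis_export) in the enclosing environment's connect_phase.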
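
The concern in "Targeting Internal-State Scenarios in an Uncertain World" above, that random testing may leave critical cases unhit, is commonly measured with functional coverage. The covergroup below is a hypothetical sketch; the mode names, bins and sampling scheme are invented for illustration.

    // Track which operating modes and transfer lengths the stimulus has hit.
    module coverage_sketch;
      typedef enum bit [1:0] { IDLE, SINGLE, BURST, WRAP } mode_e;

      mode_e    mode;
      bit [7:0] len;
      bit       clk;

      covergroup op_cg @(posedge clk);
        cp_mode : coverpoint mode;
        cp_len  : coverpoint len {
          bins small  = {[1:4]};
          bins medium = {[5:64]};
          bins large  = {[65:255]};
        }
        // The cross exposes corner cases (e.g. WRAP with a large length)
        // that purely random stimulus can easily miss.
        mode_x_len : cross cp_mode, cp_len;
      endgroup

      op_cg cg = new();

      initial begin
        clk = 0;
        forever #5 clk = ~clk;
      end

      initial begin
        repeat (20) begin
          @(negedge clk);
          mode = mode_e'($urandom_range(0, 3));
          len  = 8'($urandom_range(1, 255));
        end
        $display("mode x len coverage = %0.2f%%", cg.mode_x_len.get_coverage());
        $finish;
      end
    endmodule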
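
Connectivity checking of the kind described in "New School Connectivity Checking" above is often expressed as simple assertions that a formal tool can prove exhaustively. The signal and mode names below are hypothetical, and the session covers a dedicated formal app rather than hand-written properties like these.

    // Prove that a top-level pad is wired, through pad muxing, to the
    // intended block-level source.
    module connectivity_checks (
      input logic clk,
      input logic sel_dbg,       // hypothetical pad-mux select
      input logic blk_uart_tx,   // block-level source in functional mode
      input logic blk_dbg_out,   // block-level source in debug mode
      input logic pad_out        // top-level pad
    );
      // Functional mode: the pad must follow the UART transmit output.
      a_uart_conn : assert property (@(posedge clk) !sel_dbg |-> pad_out == blk_uart_tx);

      // Debug mode: the pad must follow the debug output instead.
      a_dbg_conn  : assert property (@(posedge clk)  sel_dbg |-> pad_out == blk_dbg_out);
    endmodule

Such a checker would typically be attached to the design with a bind statement and the properties proven formally rather than simulated.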