Verification Horizons Articles:
by Tom Fitzpatrick - Mentor, A Siemens Business
I may have mentioned over the years that I am an avid golfer. In order to have an excuse to play weekly, I joined a local golf league last year, and this year's season just started. In an effort to improve my swing, and consequently my score, I decided to start this season by taking a lesson with the pro at my local course. After looking at my swing, the pro told me he was going to make four small changes to my swing and could guarantee that my score would improve by 10 shots per round. He showed me the four things, and they all make sense. Some of them are easy, like changing my grip slightly. Some are harder, like keeping a steeper wrist angle during my downswing. Let's just say that, since I haven't reliably been able to do everything he told me yet, I haven't achieved the promised results, although when I played earlier this week I did hit a number of very good approach shots. So I'm encouraged.
by Hamilton Carter - Mentor, A Siemens Business
The EDA industry is increasingly avaricious for the benefits of big data. While functional verification has been a producer of big data for several years, paradoxically, big data analysis adoption may not have progressed as quickly as it could have due to a shortage of big data consumers. Most verification engineers have participated in a project where EDA big data was ignored until near the end of the project—if it was gathered at all—at which point there was a mad dash to complete coverage closure. This article describes a methodology—parallel debug—as well as a supporting Jenkins framework, enabled by the massive processor and disk farms that are commonplace on chip design projects. Parallel debug is an objective, disciplined methodology wherein the engineer changes one and only one aspect of a complex problem based on a hypothesis, and then tests the hypothesis. That is to say, it's the scientific method repackaged as a debug technique. Many engineers skip this discipline, making multiple changes to their code at once in (oftentimes vain) hopes of saving time by reducing the number of simulations. Parallel debug provides a methodology for specifying multiple hypotheses; tracking the associated individual code changes via revision control; and—as the name implies—using compute farms to perform all the specified experiments in parallel.
by Mike Andrews and Mike Fingeroff - Mentor, A Siemens Business
Portable Stimulus has become quite the buzzword in the verification community in the last year or two, but like most 'new' concepts it has evolved from already established tools and methodologies. For example, having a common stimulus model between different levels of design abstraction has been possible for many years with graph-based stimulus automation tools like Questa® inFact. High-Level Synthesis (HLS), which synthesizes SystemC/C++ to RTL, has also been available for many years, with most users doing functional verification at the C level using a mixture of home-grown environments and directed C tests. With HLS now capable of handling very large hierarchical designs, however, there is a growing need for a verification methodology that enables high-performance, production-worthy constrained-random stimulus for SystemC/C++, so that coverage closure can be achieved at the C level and that exact stimulus can then be reproduced against the synthesized RTL for confidence.
This article describes a methodology where a stimulus model can be defined (and refined) to help reach 100% code coverage of the C++ HLS DUT, and then reused in a SystemVerilog or UVM testbench with the synthesized RTL. Given a truly common model, it is also possible to maintain random stability between the two environments, allowing some issues to be found in one domain and then debugged in the other.
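As a flavor of what such a common model captures, the sketch below shows a generic constrained-random transaction description in SystemVerilog (the RTL-side reuse); the field names, ranges, and constraints are purely illustrative and are not the article's inFact graph syntax. The same legality rules, captured once, drive both the C++ HLS testbench and the RTL testbench, which is what makes cross-domain reproduction of a failing scenario possible.

// A purely illustrative constrained-random transaction; names and ranges
// are hypothetical, not taken from the article or from Questa inFact.
class filter_txn;
  rand bit [3:0]  coeff_sel;   // which coefficient set to exercise
  rand bit [15:0] sample;      // input sample value
  rand bit        bypass;      // bypass the datapath entirely

  // The same legality rules applied at the C level drive the RTL test,
  // so a failure seen in one domain can be reproduced in the other.
  constraint c_legal {
    coeff_sel inside {[0:9]};  // assume only 10 coefficient sets exist
    bypass -> sample == '0;    // assume bypass traffic carries no payload
  }
endclass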
by Matthew Ballance - Mentor, A Siemens Business
Designs are becoming more complex, and increasingly include a processor – and often multiple processors. Because the processor is an integral part of the design, it's important to verify the interactions between software running on the processor and the rest of the design. Verification and validation of the hardware/software boundary cannot reasonably be deferred until prototype bring-up in the lab, because software is so critical to the operation of today's systems; verification teams that defer it do so at their own peril. I'm sure we've all heard the nightmare scenarios where, for example, a team discovered in the lab that the processor's bus was connected to the design in reverse order, or that the processor was unable to power up again after entering low-power mode.
by Kishan Kalavadiya and Bhavinkumar Rajubhai Patel - eInfochips
Verification of a complex SoC (System on Chip) requires tracking of all the low-level data (i.e., regression results and functional and code coverage). Usually, verification engineers do this tracking manually or with some scripted automation. Relying on manual effort to gather this information while verifying a complex SoC can delay project execution. A verification planning tool can help reduce such manual effort and make the tracking process more efficient. Mentor, A Siemens Business offers such a verification planning capability for QuestaSim within its Verification Management tool suite, known as Questa® Testplan Tracking. This article presents detailed steps for using this tracking process, along with key features that can reduce the time spent in the verification cycle tracking verification progress.
by Ivan Ristic - HDL Design House
The purpose of this article is to present the verification process of the HDL Design House MIPI® CSI2 TX IP core using Questa® VIPs from Mentor, A Siemens Business.
ABOUT THE CSI2 PROTOCOL
Camera Serial Interface 2 (CSI2) defines a communication protocol between a peripheral device (a camera) and a host processor. It is intended for point-to-point image and video transmission between a transmitter (the camera) and a receiver (the host processor). It is used mostly in the mobile and automotive industries. High performance and low power are its key features.
CSI2 TRANSMITTER IP
HDL Design House's CSI2 Transmitter IP (HIP3900), a silicon-proven IP, was successfully integrated into a million-gate LSI implemented in Fujitsu's 65nm process technology. It supports a high-speed video transmission protocol used in automotive applications – the Automotive Pixel Link (APIX®).
by Anand Paralkar and Pervez Bharucha - Silicon Interfaces
Two common queries that customers pose to a design house are whether an existing or new IP can be made "low power" and whether "power aware" verification can be carried out on an IP. The IEEE P1801 standard captures what one may call the syntax and semantics to express the intent of the power architecture of a design. Merely adopting the standard (commonly known as the Unified Power Format) doesn't help. What it really takes to successfully achieve a low power design is design team know-how and a simulation tool that is geared towards low power implementation. This article captures how we use Mentor's Questa® Simulator to lower the power usage of our legacy USB IP. We also share how to start a low power implementation and provide some examples from our current effort.
by Progyna Khondkar - Mentor, A Siemens Business
The Unified Power Format (UPF) plays a central role in mitigating dynamic and static power in the battle for low power in advanced process technologies. A more advanced process node is definitely attractive, as more functionality can be integrated in a smaller die area at a lower cost. In reality, however, this comes at the cost of exponentially increasing leakage power. This is because the minimum gate-to-source voltage differential needed in CMOS devices to create a conducting path between the source and the drain terminals (known as the threshold voltage) has been pushed to its limit. Leakage power is a function of the threshold voltage, and at smaller device geometries its contribution to total energy dissipation becomes significant. Device supply voltage and leakage current directly contribute to leakage power, while the switching activity of the capacitive load on the supply voltage and its switching frequency contribute to dynamic power.
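In familiar first-order terms (standard textbook expressions, not specific to this article), these relationships are:

\[
P_{\text{leak}} \approx V_{DD}\, I_{\text{leak}}, \qquad
P_{\text{dyn}} \approx \alpha\, C_{L}\, V_{DD}^{2}\, f
\]

where \(V_{DD}\) is the supply voltage, \(I_{\text{leak}}\) the leakage current (which grows rapidly as the threshold voltage \(V_{th}\) is reduced), \(\alpha\) the switching activity, \(C_{L}\) the switched capacitive load, and \(f\) the switching frequency.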
by Marcela Zachariasova and Lubos Moravec - Codasip Ltd
The open RISC-V Instruction Set Architecture (ISA), managed by the RISC-V Foundation[1] and backed by an ever-increasing number of the who's who of the semiconductor and systems world, provides an alternative to legacy proprietary ISAs. It delivers a high level of flexibility, allowing the development of very effective application-optimized processors targeted at domains that require high performance, low area, or low power.
The RISC-V ISA standard is layered and contains a small set of mandatory instructions, optional instruction set extensions, and finally custom instructions defined by the intended application. As a result, when searching for the best functionality/performance combination, we can end up with at least 100 viable standard ISA configuration variants, and nearly unlimited combinations when custom extensions are taken into account. This flexibility is without a doubt a good thing from the system design perspective, but it generates significant verification challenges, which will be discussed throughout this article.
by Vijay Chobisa - Mentor, A Siemens Business
There are many reasons why hardware-based emulation is a "must have" for an effective verification flow. Increased complexity, protocols, embedded software, power and verification at the system level all drive the need for the kind of performance, capacity, and "shift-left" methodology that only emulation delivers.
But this growing need for emulation is still met with some resistance by management and verification teams, primarily because some regard emulation as an expensive and hard-to-use resource. What is at the heart of these perceptions, and how can an emulation job management strategy address these misconceptions?
by Anwesha Choudhury, Ashish Hari, and Joe Hupcey III - Mentor, A Siemens Business
CDC verification ensures that signals pass across asynchronous clock domains without being missed or causing metastability. Traditionally, CDC verification is done on a register-transfer level (RTL) representation of the design. However, during the synthesis stage, when the design is transformed from RTL to gate level, various new issues can be introduced that may eventually lead to chip failures. So, even after CDC verification closure at the RTL, it is important to perform CDC verification on the gate-level design to detect and address any new issues.
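As a concrete (and deliberately simple) illustration of what is at stake, the sketch below shows the classic two-flop synchronizer for a single-bit crossing, written in SystemVerilog; the module and signal names are illustrative, not from the article. One of the things gate-level CDC verification must confirm is that structures like this survive synthesis intact, with no glitch-prone combinational logic inserted in front of the first flop.

// A minimal two-flop synchronizer sketch for a single-bit control signal
// crossing from a clk_a domain into the clk_b domain (names are illustrative).
module sync_2ff (
  input  logic clk_b,     // destination-domain clock
  input  logic rst_b_n,   // destination-domain reset, active low
  input  logic d_async,   // signal launched in the source (clk_a) domain
  output logic q_sync     // metastability-hardened copy in the clk_b domain
);
  logic meta;
  always_ff @(posedge clk_b or negedge rst_b_n) begin
    if (!rst_b_n) begin
      meta   <= 1'b0;
      q_sync <= 1'b0;
    end else begin
      meta   <= d_async;  // first stage may go metastable
      q_sync <= meta;     // second stage gives it a cycle to settle
    end
  end
endmodule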
by Rusty Stuber - Mentor, A Siemens Business
Formal property verification is sometimes considered a niche methodology ideal for control-path applications. However, with a solid methodology base and upfront planning, the benefits of formal property verification, such as full-path confidence and requirements-based property definition, can also be leveraged for protocol-driven datapaths. Incorporating layered SystemVerilog constructs to provide a transaction-like protocol description simplifies property creation for both well-formed packets and error scenarios. Ultimately, though, the key to successful formal datapath analysis is reduction of the typically large state spaces resulting from variable and dynamic packet sizes. Proper interleaving of SystemVerilog helper constructs with protocol-targeted assumptions defines a manageable state space and unlocks the promise of formal-driven, full-path verification for datapaths too.
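To give a flavor of the approach, the checker sketch below (every interface name, width, and bound is a hypothetical example, not the article's) shows the two ingredients working together: an assumption layer that bounds packet length to keep the state space manageable, and a small SystemVerilog helper that latches the first payload word so a full-path delivery property can refer to it.

// Illustrative formal checker, intended to be bound to a packet datapath DUT.
// Every signal name and numeric bound here is a hypothetical example.
module pkt_datapath_fv_sketch (
  input logic        clk,
  input logic        rst_n,
  input logic        pkt_valid,  // a beat of an incoming packet is present
  input logic        pkt_sop,    // start-of-packet marker
  input logic [7:0]  pkt_len,    // declared packet length, in beats
  input logic [31:0] data_in,    // incoming payload word
  input logic        out_valid,  // a beat of the outgoing packet is present
  input logic [31:0] data_out    // outgoing payload word
);
  // Assumption layer: restrict packet length to a small, representative
  // range so the formal state space stays tractable.
  asm_len_bounded: assume property (@(posedge clk) disable iff (!rst_n)
    (pkt_valid && pkt_sop) |-> pkt_len inside {[1:4]});

  // Helper construct: latch the first payload word of each packet so the
  // end-to-end property below can refer to it.
  logic [31:0] first_word;
  always_ff @(posedge clk or negedge rst_n)
    if (!rst_n)                    first_word <= '0;
    else if (pkt_valid && pkt_sop) first_word <= data_in;

  // Full-path check: once a packet starts, its first word must appear on
  // the output within a bounded window (window and single-packet-in-flight
  // assumption are illustrative simplifications).
  ast_first_word_delivered: assert property (@(posedge clk) disable iff (!rst_n)
    (pkt_valid && pkt_sop) |-> ##[1:16] (out_valid && data_out == first_word));
endmodule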