-
by Tom Fitzpatrick - Mentor, A Siemens Business
As I write this, it is autumn in New England, which means two things. First, the leaves are turning colors to make some of the most beautiful scenery to be found anywhere. Second, the leaves will be covering my lawn, which also means that I’ll have to remove them so I can cut the grass. I’ve lived in my house for over 20 years, and my standard leaf-removal method is to use the lawn mower, starting alongside the house, in ever-widening circles to cut the grass and blow the leaves and clippings out away from the house. We're fortunate to live on a lot surrounded by conservation land, so we don’t have to worry about blowing things onto a neighbor’s property.
This method of leaf removal has served me adequately for many years, but the inefficiency has always bothered me. Due to the shape of our yard, I wind up going over areas that are already clear so that I can loop around to the back where there is more area to cover. So last year, I purchased a leaf blower. As the name implies, a leaf blower is a specialized power tool that produces a jet of air strong enough to blow the leaves off the lawn, giving me a nice leaf-free lawn to mow and standing the grass up so it cuts more evenly. So, even after 20 years of doing something the same way, the right technology can still improve your life. This issue of Verification Horizons will likewise give you some new tools to improve your verification efforts.
-
by Harry D. Foster - Mentor, A Siemens Business
The 2019 global semiconductor market was valued at $385.4 billion after experiencing a 15% decline, due largely to a 32% drop in the memory IC market, which is expected to recover in 2021[1]. The FPGA portion of the semiconductor market is valued at about $5 billion[2]. The FPGA semiconductor market is expected to reach $7.5 billion by 2030, growing at a compound annual growth rate (CAGR) of 4.4% over this forecast period. Growth in this market is being driven by new and expanding end-user applications in data center computing, networking, and storage, as well as communications.
Historically, FPGAs have offered two primary advantages over ASICs. First, due to their low NRE[3], FPGAs are generally more cost-effective than ICs/ASICs for low-volume production. Second, FPGAs' rapid prototyping capabilities and flexibility can shorten the development schedule, since a majority of verification and validation cycles have traditionally been performed in the lab. More recently, FPGAs have also offered performance advantages for certain accelerated applications by exploiting hardware parallelism (e.g., AI neural networks).
-
by Vikas Sharma - Mentor, A Siemens Business
This article describes the verification process of the ARASAN MIPI® CSI-2-RX IP core using Questa® VIPs by Mentor, A Siemens Business.
MIPI Camera Serial Interface 2 (CSI-2) provides an interface between a peripheral device (such as a camera module) and a host processor (such as a baseband or application engine). It is mostly used in the mobile and automotive industries. High performance and low power are the key features of this protocol.
MIPI CSI-2 provides two high-speed serial data transmission interface options. The first option, which is referred to as the D-PHY physical layer option, uses a differential interface with one 2-wire clock lane and one or more 2-wire data lanes. The second high-speed data transmission interface option, which is referred to as the C-PHY physical layer option, uses one or more unidirectional 3-wire serial data lanes, each of which has its own embedded clock.
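To make the contrast concrete, the hypothetical SystemVerilog configuration sketch below models only the lane topology of the two options; the type and field names, and the lane-count limits shown, are our own illustration rather than anything defined by the MIPI specification or the Questa VIP API.

typedef enum {DPHY, CPHY} phy_type_e;

class csi2_lane_cfg;
  rand phy_type_e   phy_type;   // physical layer option
  rand int unsigned data_lanes; // number of serial data lanes

  // D-PHY: one 2-wire clock lane plus one or more 2-wire data lanes.
  // C-PHY: one or more 3-wire data lanes, each with its own embedded clock.
  constraint lane_limits_c {
    (phy_type == DPHY) -> data_lanes inside {[1:4]};  // illustrative limit
    (phy_type == CPHY) -> data_lanes inside {[1:3]};  // illustrative limit
  }

  // Total wire count of the high-speed interface for the chosen option.
  function int unsigned wire_count();
    return (phy_type == DPHY) ? 2 + 2*data_lanes : 3*data_lanes;
  endfunction
endclass

Randomizing such a configuration object is one way a testbench can sweep both PHY options and lane counts without duplicating tests.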
-
by Ridham Kothari - Mentor, A Siemens Business
As always, we must continue to reduce the time-to-market of SoCs and complex systems. An FPGA prototype of such a system can serve as a basis for early software and firmware development, hardware-software co-verification, and system validation, all before actual silicon is available. Because FPGA prototyping systems can also be used to validate system-level functionality and are fast enough to develop application code running on top of an OS, their adoption is increasing.
An important part of system-level validation is to ensure that performance meets expectations and that the system is capable of supporting anticipated workloads. In order to validate critical scenarios, it is important to accurately model the clock-level timing of target memory types such as DDR4, DDR5, LPDDR4, and LPDDR5, so that the whole-system effects of running with different levels of memory subsystem loading can be observed, albeit at a scaled frequency. This article focuses on how Memory Softmodels are used in the validation process and how they provide the foundation for performance validation accuracy.
-
by Karthik Bandaru, Priyanka Gharat, and Sastry Puranapanda - Silicon Interfaces®
Applying constrained randomization to create test cases is based on the developer's or verification engineer's perception of which test vectors are required, and that perception can easily allow hidden bugs to be overlooked. Traditionally, coverage goals were reached by writing more test cases on unpredictable schedules, often impacting time-to-market. Functional coverage defines the critical states, and constrained randomization exercises those states in unpredictable ways. However, constrained randomization often repeats states unnecessarily or, worse, misses a coverage point that hides a bug, since the coverage points are written by the verification engineer using coverage bins. We chose constrained random, stimulus-based coverage verification because we have found it to be a simple and flexible technique that saves time, reduces complexity, and improves coverage performance through techniques such as bins and coverage transitions. Another advantage of functional coverage is that the critical states we define can be targeted first with fewer tests (provided we know which paths are critical); otherwise, the verification engineer would need to write more test cases to meet coverage goals.
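As a minimal sketch of the coverage constructs mentioned above (the state names and sequences below are invented for illustration, not taken from the design under test), a SystemVerilog covergroup can capture critical states with explicit bins and critical state sequences with transition bins:

typedef enum logic [1:0] {IDLE, SYNC, XMIT, ERR} tx_state_e;

class tx_coverage;
  tx_state_e state;

  covergroup state_cg;
    option.per_instance = 1;
    cp_state : coverpoint state {
      bins idle = {IDLE};
      bins sync = {SYNC};
      bins xmit = {XMIT};
      bins err  = {ERR};
      // Transition coverage: critical sequences the stimulus must exercise.
      bins sync_to_xmit = (SYNC => XMIT);
      bins err_recovery = (ERR => IDLE => SYNC);
    }
  endgroup

  function new();
    state_cg = new();
  endfunction

  // Called by the monitor on every state change.
  function void sample_state(tx_state_e s);
    state = s;
    state_cg.sample();
  endfunction
endclass

The coverage report then shows which bins, including the transition bins, remain unhit after constrained random simulation.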
This article explores the shortcomings of standard constrained randomization techniques in attaining coverage, and shows how coverage automation tools with intelligent routines and algorithms can analyze the coverage matrix and traverse the identified paths to detect otherwise undetectable bugs and maximize functional coverage. We apply these techniques to a Zetta-Hz High Speed CDMA transceiver.
-
by Milan Patel - eInfochips, LTD (an Arrow Company)
Growing complexity and the need for faster throughput further increase the complexity of the finite state machines (FSMs) in serial protocols such as USB, PCIe, Ethernet, and OTN. These complex FSMs contain a large number of states, transition conditions, timeout mechanisms, and state-specific design behavior. FSMs are a common source of functional bugs in any protocol, and it is a very tough job to functionally verify an FSM across all of its state transition conditions and corner scenarios.
The purpose of this article is to share a strategy for verifying any simple or complex FSM in an organized, robust, manageable, and efficient way. To verify such FSMs thoroughly, we need random scenarios that cover all possible state transition conditions, corner and boundary conditions, and the relevant functional behavior. For that, we require a strong base entity that helps generate random scenarios to easily cover all FSM entry/exit conditions and error scenarios. This base entity also helps choose a random FSM path to reach the expected states and perform various error operations.
This article describes an approach to developing an organized base entity that produces random scenarios to minimize verification time and effort during test case implementation.
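As a rough sketch of what such a base entity could look like in SystemVerilog (the state names and transition table below are hypothetical, not taken from the article), a randomizable walk through the FSM can be constrained to legal transitions and steered toward a target state:

typedef enum {RESET, POLLING, CONFIG, ACTIVE, RECOVERY} fsm_state_e;

class fsm_path_gen;
  rand fsm_state_e path[];          // randomly chosen walk through the FSM
  fsm_state_e      target = ACTIVE; // state the test wants to reach

  constraint size_c  { path.size() inside {[2:8]}; }
  constraint start_c { path[0] == RESET; }   // every walk starts in reset

  // Only legal transitions, per a simplified, illustrative transition table.
  constraint legal_c {
    foreach (path[i]) if (i > 0) {
      (path[i-1] == RESET)    -> path[i] == POLLING;
      (path[i-1] == POLLING)  -> path[i] inside {CONFIG, RESET};
      (path[i-1] == CONFIG)   -> path[i] inside {ACTIVE, RESET};
      (path[i-1] == ACTIVE)   -> path[i] inside {RECOVERY, RESET};
      (path[i-1] == RECOVERY) -> path[i] inside {POLLING, RESET};
    }
  }

  // The walk must end on the requested state.
  constraint end_c { path[path.size()-1] == target; }
endclass

A test can then set target, call randomize(), and drive the resulting path, layering error injection on top of any step of the walk.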
-
by Mike Bartley - Tessolve; and Lavanya Jagan, G S Madhusudan, and Neel Gala - InCore Semiconductors Pvt. Ltd.
As the RISC-V architecture becomes increasingly popular, it is being adopted across a diverse range of products, from in-house cores with specialized instructions to functionally safe SoCs and security processors for a variety of verticals. This adoption brings several verification challenges, which are discussed in this article along with potential approaches and solutions.
This article first considers verification of the core. The core is made up of several blocks, including a fetch unit, execution units, instruction and data caches, TLBs, and complex control logic for functions like branch prediction and out-of-order execution. We discuss the pros and cons of performing block-level verification. Either way, a core-level verification strategy is required, covering constrained random instruction generation targeted at micro-architectural features, corner cases captured in functional coverage, and architectural compliance. The article outlines the tools and compliance suites that are required and available.
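For flavor, the fragment below sketches what constrained random instruction generation aimed at a micro-architectural corner case might look like; it is a deliberately tiny, hypothetical subset of the ISA, not the generators or compliance suites the article surveys.

// Hypothetical sketch only; real RISC-V generators model the full ISA,
// privilege modes, and exceptions.
typedef enum {ADD, SUB, LW, SW, BEQ, JAL} opcode_e;

class rv_instr;
  rand opcode_e   opc;
  rand bit [4:0]  rd, rs1, rs2;
  rand bit [11:0] imm;

  // Bias toward loads/stores and branches to stress hazards and prediction.
  constraint dist_c { opc dist {ADD := 2, SUB := 2, LW := 3, SW := 3, BEQ := 3, JAL := 1}; }
  constraint rd_c   { (opc inside {SW, BEQ}) -> rd == 0; }  // no destination register
endclass

class rv_stream;
  rand rv_instr instrs[10];

  // Micro-architectural corner case: force some read-after-write dependencies
  // between adjacent instructions to exercise forwarding and stall logic.
  constraint raw_c {
    foreach (instrs[i]) if (i > 0)
      (i % 3 == 0) -> instrs[i].rs1 == instrs[i-1].rd;
  }

  function new();
    foreach (instrs[i]) instrs[i] = new();
  endfunction
endclass

Corner cases such as the forced read-after-write dependency here are the kind of micro-architectural features a functional coverage model would then track.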
Moving out from the core, the article considers both fabric integration verification and SoC-level verification, where safety, security, and low-power features come into consideration. It also covers the verification of RISC-V-specific features, including multiple architectural options, configurability, and the ability to add new instructions.