Verification Horizons Articles:
by Tom Fitzpatrick, Editor and Verification Technologist, Mentor Graphics
Our remarkable run here in New England continues. The New England Patriots just won their fifth Super Bowl championship. After falling behind 28-3, they staged a record-setting comeback to win in overtime. It was incredibly exciting, and I'm not ashamed to admit that my family and I were literally jumping up and down and screaming when the Patriots scored the winning touchdown. No team had ever come back in a Super Bowl from a deficit of more than ten points, so going by past performance, the Patriots' comeback was "impossible." My daughter actually turned to me at one point and said, "They're going to lose, aren't they?" I replied that it would require a historic comeback, but I never counted them out. In the end, it took an incredible combination of plays (and, admittedly, some bad decisions by the Falcons) for them to pull it off, but they did it! The emotion of this win is just what we need to keep us warm as we face yet another snowstorm while I write this.
The morning after the victory, Patriots coach Bill Belichick was asked how he felt about the game. After admitting how special the game was and how proud he was of his team, he said something that shows why he's now considered the greatest coach of all time. When asked what his plans were, he said, "As far as I'm concerned, we're now five weeks behind all the other teams in preparing for next season." His constant devotion to preparation, planning, and teaching his players is what sets him apart. While we may not be able to reach that level of focus, we can always try to be better at planning and preparing for success.
by Harry D. Foster, Mentor Graphics
Perhaps my interest in data mining and analytics originated from Steven Levitt and Stephen Dubner's 2005 bestselling book Freakonomics: A Rogue Economist Explores the Hidden Side of Everything. In that book the authors apply economic theory (a science of measurement) to a diverse set of subjects not usually covered by "traditional" economists, such as detecting cheating among teachers and sumo wrestlers. The book inspired me to look at data differently. In that spirit, I decided to have some fun with the data from our 2016 Wilson Research Group Functional Verification Study by examining interesting correlations in an attempt to uncover unexpected observations. For example, in the March 2015 issue of Verification Horizons, I correlated design size with first silicon success from our previous industry study, and the results were non-intuitive: the smaller the design, the lower the likelihood of achieving first silicon success. This observation concerning design size and first silicon success still holds true today.
For this issue of Verification Horizons, I have decided to do a deeper dive into our 2016 industry study and see what other non-intuitive observations could be uncovered. Specifically, I wanted to answer the following questions: (1) Does verification maturity impact silicon success (in terms of functional quality)? (2) Does the adoption of safety critical design practices improve silicon success?
by Jamil R. Mazzawi and Amir N. Rahat, Optima Design Automation Ltd.
The computers are fleeing their cages. Until recently, people interacted with computers in a virtual world of screens and mice. That world had many security risks but relatively few safety risks, mostly electrocution or having a PC fall on your foot. But in the last few years a new wave of computers is invading the real world, and physically interacting with it. This trend is expected to explode in the near future, with self-driving cars and drones leading the rush. This raises totally new safety concerns for the teams designing the semiconductor parts used in these markets. In the good old days, a HW bug would cause a blue-screen and everyone would blame Microsoft®. Nowadays, a HW bug can trigger a criminal trial for involuntary manslaughter.
To prevent such problems, at least in the automotive market, the International Organization for Standardization (ISO) published the first version of ISO 26262, "Road vehicles — Functional safety," in 2011. The second revision is being completed now and should be published in about a year. While focused on road vehicles, this standard can easily be adapted to related areas that do not yet have their own safety standard, such as drones, since it is in fact an adaptation of IEC 61508, the basic standard for Functional Safety of all Electrical/Electronic/Programmable Electronic Safety-related Systems.
This article discusses functional safety. The International Electrotechnical Commission (IEC), which owns the ultimate standard in this area, defines safety as freedom from unacceptable risk of physical injury or of damage to the health of people, either directly, or indirectly as a result of damage to property or to the environment. Functional safety is the part of overall safety that depends on a system or equipment operating correctly in response to its inputs: the detection of a potentially dangerous condition, resulting in the activation of a protective or corrective device or mechanism to prevent hazardous events from arising, or providing mitigation to reduce the consequences of a hazardous event.
The following discussion is based on ISO 26262, and so targets people in the Automotive market. But it is general enough to be useful for anyone who worries about the functional safety of their semiconductor products.
by Manasa Nair, Sunil Kumar, Pranesh Sairam, Srinivasan Venkataramanan, CVC Pvt. Ltd.
The world of ASIC and FPGA design has been adopting the Universal Verification Methodology (UVM) over the last several years. UVM is a culmination of well-known ideas, thoughts and best practices. Though UVM-1.1d is the most popular and default UVM version, UVM-1.2 has been around for a few years and has been adopted by many leading-edge semiconductor design houses. The upcoming IEEE version of UVM (IEEE P1800.2) is set to make UVM even more widely adopted, just like many other IEEE standards.
While UVM is great for building test scenarios and sequences, its primary objective is to build robust, reusable testbenches. For IP- and subsystem-level verification, these scenarios may be constrained-random and/or directed. Ideally, constrained-random sequences run over multiple seeds should attain very high functional coverage. In practice, however, it takes quite a few redundant stimuli to hit a given set of coverage points through traditional constrained-random techniques, a result that can be shown mathematically via the "coupon collector's problem."
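The coupon collector's result can be made concrete. Under a simplified model (an assumption for illustration) in which each random stimulus hits one of N equally likely coverage bins, the expected number of stimuli T needed to hit all N bins is:

```latex
\mathbb{E}[T] \;=\; N \sum_{k=1}^{N} \frac{1}{k} \;=\; N\,H_N \;\approx\; N\left(\ln N + 0.5772\right)
```

For N = 100 bins this works out to roughly 519 random stimuli, more than five times the ideal 100, which is why purely random generation becomes expensive as coverage models grow.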
At the system level (SoCs with multiple embedded processors, for instance) the scenarios tend to mimic real-life application models and use cases. A pure constrained-random approach falls short quickly at this level because the level of coordination needed across IPs, peripherals, processors, and subsystems within the SoC is very high. Though in theory one could develop a sophisticated constrained model for an end-to-end application scenario entirely within UVM/SystemVerilog, it is likely more painful than it is worth. Also, with embedded processors now part of every modern SoC, UVM scenarios alone do not suffice: processors (such as ARM® Cortex®) do not understand UVM; in real life they run C or assembly code.
by Matthew Ballance, Mentor Graphics
Over the past few years, a great deal of energy has been invested in improving the productivity and quality-of-results of design verification. The bulk of this effort has focused on techniques that are most applicable at the block level. These techniques, such as constrained-random transaction generation, functional coverage, and the UVM, have dramatically improved verification quality and productivity. However, while they have been successful at the block level, verification continues to grow more challenging at the subsystem and SoC levels, and a new approach is called for.
Both commercial and in-house tools have been developed to improve the productivity and efficiency of verification. Mentor's Questa® inFact™ is one example of a commercial tool that raises the level of abstraction (boosting productivity), increases test-generation efficiency, and can be applied across a wide variety of verification environments.
As interest in bringing automated tests to environments beyond transaction-oriented block-level environments has increased, so has interest in having a standardized input-specification language with which to specify these tests. In response, Accellera launched a working group, titled the Portable Stimulus Working Group (PSWG), to collect requirements, garner technology contributions, and specify a standardized input language that can be used to specify test intent that can be targeted to a variety of verification platforms. Mentor has been participating and driving the activity in the PSWG, and we've contributed our technology and expertise to the standardization process.
by Sandeep Nasa and Shankar Arora, Logic Fruit Technologies Pvt. Ltd.
UVM is the most widely used methodology for functional verification of digital hardware (described using Verilog, SystemVerilog, or VHDL at an appropriate abstraction level). It is based on OVM and was developed by Accellera. It consists of base libraries written in SystemVerilog that enable end users to create testbench components quickly. Thanks to benefits such as reusability, efficiency, and automation macros, it is a widely accepted verification methodology.
UVM has many features, so it is difficult for a new user to apply it efficiently. Better efficiency can be obtained by customizing the UVM base library and applying certain tips and tricks while building UVM testbenches, which is the main purpose of this article.
Engineers who are new to UVM, or whose experience is mainly in RTL design, may not be able to create efficient and productive testbenches due to unfamiliarity with object-oriented programming (OOP) concepts, the UVM base class library, and UVM verification environment architecture.
This article furnishes several examples that improve the performance of a UVM testbench by applying optimization techniques to random access generation, the configuration database, the objection mechanism, sequence generation, and loop usage.
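As one illustration of the objection mechanism mentioned above, a common tip is to raise and drop objections once at the test level rather than in every sequence, passing a description string to aid debug. A minimal sketch, with hypothetical names (base_test, my_main_seq, m_env and its children) standing in for a real environment:

```systemverilog
import uvm_pkg::*;
`include "uvm_macros.svh"

class base_test extends uvm_test;
  `uvm_component_utils(base_test)

  function new(string name, uvm_component parent);
    super.new(name, parent);
  endfunction

  task run_phase(uvm_phase phase);
    my_main_seq seq = my_main_seq::type_id::create("seq");
    // Raise the objection once, at the test level, with a reason
    // string that appears in objection trace/debug output.
    phase.raise_objection(this, "main stimulus running");
    seq.start(m_env.m_agent.m_sequencer);
    phase.drop_objection(this, "main stimulus done");
  endtask
endclass
```

Keeping objection calls out of individual sequences avoids the overhead of frequent raise/drop traffic in long-running tests.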
by Progyna Khondkar, Mentor Graphics
The Questa® Power Aware (PA) dynamic simulator (PA-SIM) provides a wide range of automated assertions in the form of dynamic sequence checkers that cover every possible PA dynamic verification scenario. However, design-specific PA verification complexities may arise from the adoption of one or more power-reduction techniques, from a multitude of design features (such as UPF strategies), and from target design implementation objectives. Hence, apart from tool-automated checks and PA-annotated testbenches, additional customized PA assertions, checkers, and monitors sometimes need to be incorporated in a design.
But a design may already contain plentiful assertions from functional verification, often written as SystemVerilog Assertions (SVA) and bound with the language's bind construct. SystemVerilog's powerful bind construct specifies one or more instantiations of a module, interface, program, or checker without modifying the code of the target. So, for example, instrumentation code or assertions encapsulated in a module, interface, program, or checker can be instantiated in a target module or a module instance in a non-intrusive manner. Still, customized PA checks, assertions, and monitors are often expected to be kept separate, not only from the design code but also from the functional SVA.
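As a sketch of the bind construct described above (module and signal names here are hypothetical), an assertion checker can be attached to every instance of a design module without touching its source:

```systemverilog
// Checker module encapsulating a protocol assertion.
module fifo_push_chk (input logic clk, push, full);
  // A push must never be attempted while the FIFO is full.
  a_no_push_when_full: assert property (
    @(posedge clk) !(push && full)
  ) else $error("push asserted while FIFO full");
endmodule

// Bind the checker into every instance of the (hypothetical)
// design module fifo_ctrl; fifo_ctrl's source is not modified.
bind fifo_ctrl fifo_push_chk u_fifo_push_chk (
  .clk  (clk),
  .push (push),
  .full (full)
);
```

The checker's ports connect by name to signals visible inside the target module, so the design and its verification code remain completely separate files.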
UPF provides a mechanism to separate the binding of such customized PA assertions from both functional SVA and the design. The UPF bind_checker command and its affiliated options allow users to insert checker modules into a design without modifying or interfering with the original design code or introducing functional changes. UPF inherits the mechanism for binding checkers to design instances from the SystemVerilog bind directive. Hence, similar to SVA, the UPF bind_checker directive causes one module to be instantiated within another without explicitly altering the code of either, facilitating complete separation between the design implementation and any associated verification code.
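For comparison, a UPF bind_checker command might look roughly like the following. This is a sketch only: the checker module, target module, and signal names are hypothetical, and the exact option set should be confirmed against the IEEE 1801 (UPF) revision in use.

```tcl
# Instantiate a (hypothetical) PA checker module pa_iso_chk inside
# every instance of the design module core_blk, without editing RTL.
bind_checker u_pa_iso_chk \
    -module pa_iso_chk \
    -bind_to core_blk \
    -ports { {clk clk} {iso_en iso_ctrl} }
```

Because the binding lives in the UPF file, the PA checks stay separate from both the design sources and any functional SVA bind files.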
by Rick Eram, Excellicon
Since the advent of formal techniques, formal analysis has helped designers achieve more in-depth analysis and coverage in functional verification. However, what has spurred the growth and popularity of these techniques has been specific, targeted applications of formal analysis.
Functional verification is often focused on verifying the logical functions of the design. An overlooked area closely related to functional verification is proper implementation of the logic. In recent years, requirements for better power performance have brought the power implementation aspect of design more and more into functional verification, as design functionality has become more dependent on new structures designed for power savings.
Similarly, on the timing side, the increased complexity of implementation, the number of clocks and their complexity, and the greater challenge of closing timing all require a closer look at verifying the functional aspects of the design early on. The relationship between the functional and timing sides of the equation in clock-domain crossing verification requires close analysis of timing in order to gain accuracy and efficiency on the functional side.
Much of the timing information in the front-end design stage is not generally used during functional verification. This information, however, can provide a great deal of guidance and initial seeds for many downstream steps in the chip design process, including activities related to functional verification.