The following papers were presented at DVCon 2014.
Presentation Slides by Tom Fitzpatrick and Dave Rich, Mentor Graphics Corporation
When developed and used wisely, standards such as SystemVerilog and UVM can have a profound impact on both users and tool developers. Users benefit from having a common code base that they can use across multiple tools, and tool developers benefit from (ideally) only having to support one language or library, freeing them up to develop tools and technologies that improve the lives of their users. However, keeping in mind the old maxim that "a camel is a horse designed by committee," the downside of standards development is that all standards eventually reach a point of diminishing returns: the standard delivers on the needs of the vast majority of the user community, but development in committee continues to be driven by a few atypical users who try to include ever more "pet features" that they believe will make the standard "better." Unfortunately, these features often make the standard bloated, more complex, inefficient, and frequently incompatible with previous versions, detracting from the primary goal of providing a stable platform to the user community.
This paper takes the position that SystemVerilog and UVM have both reached this stage and that, rather than continuing to add features driven by a small percentage of the user community, a "cooling off" period should be adopted to allow tool developers a stable platform on which to innovate.
by Rich Edelman and Raghu Ardeishar, Mentor Graphics Corporation
This paper is for readers interested in writing reusable tests using UVM and C. C tests, such as device drivers, are very commonly used in the industry and are written by system-level designers. In many cases they must be translated or rewritten as UVM-based tests to test the design. This paper shows how the translation step can be rendered obsolete by mapping the C tests into the UVM environment. C tests include application-layer software such as eHCI and xHCI for USB. These pre-written tests will be mapped to UVM sequences. The key issues of communication between SystemVerilog classes, UVM threads, and C routines are shown in detail. An example is provided of a UVM-based environment in which a legacy C test is also present.
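The layering the paper describes (a legacy C test driving a UVM environment through DPI) can be sketched language-neutrally: the legacy test is written against a tiny read/write API, and the backend behind that API is swapped without touching the test. The sketch below uses Python purely for illustration, and all names (StubBus, register offsets) are hypothetical, not from the paper.

```python
# Illustration of the mapping concept: a legacy test knows only a bus
# read/write API; the object implementing that API can be a simple stub
# (here) or, in the real flow, a DPI-C bridge into UVM sequence items.
# All names are hypothetical.

class StubBus:
    """Stand-in for the DUT; a DPI-backed class would forward to UVM."""
    def __init__(self):
        self.regs = {}

    def write(self, addr, data):
        self.regs[addr] = data

    def read(self, addr):
        return self.regs.get(addr, 0)

CTRL_REG, DATA_REG = 0x0, 0x4

def legacy_ctrl_test(bus):
    """'Legacy C test' logic, unchanged: it only sees read/write calls."""
    bus.write(CTRL_REG, 0x1)        # enable the block
    bus.write(DATA_REG, 0xCAFE)     # push a data word
    return bus.read(DATA_REG) == 0xCAFE and bus.read(CTRL_REG) == 0x1

assert legacy_ctrl_test(StubBus())
```

Because the test never names its backend, pointing the same routine at a UVM-connected implementation requires no translation step, which is the obsolescence the paper claims.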
Presentation Slides by Avidan Efody, Mentor Graphics Corporation
Today's SoCs often contain tens or even hundreds of standard interfaces such as AXI, PCIe, USB, DDR and similar. From a verification point of view these standard interfaces can be an ideal point to extract analysis data from the Device Under Test (DUT), since users can connect to those interfaces using off-the-shelf or internal Verification IPs (VIPs), rather than their own custom code. Starting with a few common SoC verification challenges, such as debug and integration validation, we explain how connecting VIPs to the available standard interfaces can help users overcome them. We then describe the flip side of the coin and point out the problems associated with a large-scale deployment of VIPs in a single verification environment, focusing mainly on creation effort, integration effort and performance penalty. Finally, we suggest a SystemVerilog/UVM testbench structure that addresses these problems. We show how creation effort can be largely mitigated by automation, and how integration effort can be avoided using separate hierarchies for all VIP code. As for VIP performance penalty, we place it under full user control by allowing any instantiated VIP to be turned on or off at run time.
Presentation Slides by Gordon Allan, Mentor Graphics Corporation
Simulation of today's SoC devices is not easy. A typical SoC in the 'mobile' application domain consists of one or more processors, one or more levels of bus fabric, one or more internal memories or caches, one or more off-chip memory interfaces, one or more peripheral interfaces, one or more timing/control resources, one or more specialized processing elements or data pipes, etc., etc.
The emphasis is normally on the 'more' rather than the 'one' as process technologies scale to allow us to integrate ever more levels of detail in one device, and mobile device manufacturers compete for functionality and performance. In 2011 the average high-end mobile SoC gate count was 104 million gates [Gary Smith 2012], growing by 10% per year.
That's a lot of gates to simulate at once.
Adding hardware-assistance such as emulation can solve some verification problems, but for many projects there remains a need to simulate the final integrated RTL and/or gate-level SoC design in its entirety, at least to check power up, boot up from reset, and key architectural interactions. Given this need, many project teams still perform a significant portion of verification on this medium.
Apart from the main challenge of scale and performance, which we will address, there are other challenges. Those simulations take the longest to run and are also the hardest to debug, and the hardest to develop. Providing the right stimulus to exercise system-level and chip-level concerns is not easy. Stimulus has to be orchestrated so that the different peripheral features of the SoC are being exercised in concert with test firmware running on the processor or more likely on multiple processors.
In this paper we present a set of standard, freely available, easy to use techniques to accelerate all aspects of SoC simulation, allowing more rapid development, regression, and debug cycles during the integration phase of an SoC project.
by Rich Edelman and Raghu Ardeishar, Mentor Graphics Corporation
Debug is a traditional art and a common occurrence in design and verification. As larger systems are built, a functionally correct system may still need to be debugged by understanding its performance characteristics. Performance analysis can be used effectively for debug, but the required data can be hard to collect and organize. This paper will discuss a variety of solutions for collecting and organizing performance data, along with some useful metrics and recent situations where they have proven useful.
by Matthew Ballance, Mentor Graphics Corporation
Hardware verification typically uses two primary types of stimulus generation: engineer-directed stimulus generation and open-loop random generation. This paper proposes a third approach for generating stimulus that automatically identifies and produces high-value test patterns from a constraint-based stimulus model. A trial implementation built on top of an intelligent testbench automation tool is described.
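The paper's third approach can be illustrated generically: enumerate the legal solutions of a constraint model, then keep one representative per coverage bin so each distinct behavior is exercised once, rather than sampling the legal space at random. The constraints, fields, and bins below are hypothetical, and this is a sketch of the idea, not the tool's actual algorithm.

```python
# A generic sketch of high-value pattern extraction from a constraint
# model: walk the legal solution space and keep the first solution that
# lands in each coverage bin. Constraints and bins are hypothetical.
from itertools import product

def legal(burst_len, addr_align, mode):
    """Hypothetical constraints on a bus transaction."""
    if mode == "wrap" and burst_len not in (2, 4, 8):
        return False
    return addr_align % 4 == 0

def high_value_patterns():
    seen_bins = set()
    picks = []
    for burst_len, addr_align, mode in product(
            (1, 2, 4, 8, 16), (0, 2, 4, 8), ("incr", "wrap")):
        if not legal(burst_len, addr_align, mode):
            continue
        cov_bin = (mode, burst_len > 4)   # hypothetical coverage bins
        if cov_bin not in seen_bins:      # keep one pattern per bin
            seen_bins.add(cov_bin)
            picks.append((burst_len, addr_align, mode))
    return picks

# Four bins (incr/wrap x short/long burst) yield four patterns instead of
# the dozens of legal combinations an open-loop random generator would
# need to cover the same bins.
```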
by David Crutchfield, Cypress Semiconductor, Thom Ellis, Mentor Graphics Corporation
Today's verification projects are responsible for verifying some of the largest and most complex designs we have ever seen. Accordingly, the gathering and tracking of development and verification metrics, including coverage and test results, is more important than ever to project success. From figuring out what files are necessary to build a DUT (Design Under Test) and testbench to knowing what development and verification metrics need to be gathered and tracked, the task can be significant. Like many others, teams at Cypress have traditionally created verification management environments to meet a specific project need. Scripts were either borrowed from other projects or created from scratch and tweaked for the targeted project. Over time this ad-hoc script management often transformed verification environments into an unintelligible mass of interconnected files. Managing such environments requires dedicated resources for each individual project, wasting scarce time and money as verification demands continue to grow.
This paper will focus on an infrastructure created at Cypress to abstract away file list and metric gathering by providing a uniform front-end shell and back-end database, boosting predictability of testbench creation and metric tracking across multiple projects. Additionally, this paper will discuss various metrics collected and the use of Mentor's Verification Run Manager (VRM) toolset in gathering metrics, tracking coverage and reducing test suites to quickly and efficiently obtain coverage goals.
by Fritz Ferstl, Univa, Darron May, Mentor Graphics Corporation
Getting the very best from your verification resources requires a regression system that understands the verification process and is tightly integrated with Workload Management and Distributed Resource Management software. Both requirements depend on visibility into available software and hardware resources, and by combining their strengths, users can massively improve productivity by reducing unnecessary verification cycles.
by Gaurav Kumar Verma and Doug Warmke, Mentor Graphics Corporation
Technology advances allow for the creation of larger and more complex designs. This poses new challenges, including efforts to balance verification completeness with minimization of overall verification effort and cycle time. It is practically impossible to enumerate all of the conditions and states to perform an exhaustive test. Therefore, it is imperative to use well-defined criteria to measure and check when the verification is sufficiently complete and meets a reasonable quality threshold. Code coverage is a popular measure of design quality.
This paper focuses on expression coverage, one of the most complex and least understood types of code coverage, and discusses 'Rapid Expression Coverage' (REC), a new metric for expression coverage, comparing it with some popular metrics used to evaluate expression coverage in the industry today. Although this paper describes REC in the context of code coverage for designs, the same techniques could also be applied to coverage tools for software languages such as C, C++, or Java.
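REC itself is the paper's contribution, but the kind of question any expression-coverage metric answers can be shown with a classic focused-expression-coverage (FEC) style computation: a test vector counts toward covering an input only if flipping that single input flips the expression's value, i.e. the input controls the result for that vector. This sketch is a generic illustration, not the REC algorithm.

```python
# Generic FEC-style expression coverage illustration (not the paper's REC
# metric): for each input of a boolean expression, find the vectors where
# that input alone controls the result.
from itertools import product

def fec_targets(expr, n_inputs):
    targets = {}   # input index -> set of vectors where it is controlling
    for vec in product((0, 1), repeat=n_inputs):
        base = expr(*vec)
        for i in range(n_inputs):
            flipped = list(vec)
            flipped[i] ^= 1            # flip only input i
            if expr(*flipped) != base: # result changed => i controls here
                targets.setdefault(i, set()).add(vec)
    return targets

# Example: a && (b || c). Input a is controlling on 6 of the 8 vectors,
# while b and c are each controlling on only 2, so far fewer vectors are
# available (and needed) to demonstrate coverage of b and c.
t = fec_targets(lambda a, b, c: a and (b or c), 3)
```

Metrics differ in how many of these controlling vectors a test suite must hit; that trade-off between rigor and test count is exactly the design space the paper explores.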
by Alan Hunder, ARM, Andreas Meyer, Mentor Graphics Corporation
Developing and maintaining an effective and efficient verification suite for a complex system requires the ability to measure, understand, and improve the environment. Distributed, hierarchical caches are an example of interacting components within an SoC. Understanding how well the components are verified is a challenge since the cache interactions are complex, the components are distributed across an environment, and the data is spread across one or more regressions. This paper discusses the challenges of collecting metrics, providing the visualization to understand complex state machine interactions, and then reviews results of a regression analysis.
A complex ARM-based SoC with multiple processors, and a coherent distributed multi-level cache is the basis of our study. A modern constrained-random test suite is used to generate regression suites, with traditional code and functional coverage methods used to steer and grade the test suite. While this approach has been successful, it does require significant compute resources to reach coverage closure, and it has been difficult to determine how to improve the efficiency and quality of the test suite.
We introduce statistical coverage as an approach to provide new coverage analysis capability. Within our ARM SoC project, we show how we are able to find and fix significant weaknesses in the stimulus that could not be seen using traditional code and functional coverage metrics. The stimulus improvements provided improved regression efficiency, found areas that had not been fully explored, and as a result found additional RTL issues.
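One way to picture the idea behind statistical coverage, sketched below under assumed thresholds and field names, is that instead of recording a binary "bin hit / not hit", it inspects the distribution of a stimulus field across a whole regression and flags values that are exercised far below their expected share, even though every bin is technically hit.

```python
# Hedged sketch of the statistical-coverage idea: flag stimulus values
# whose observed share across a regression falls well below a uniform
# share. The 0.5 threshold and the transaction names are hypothetical.
from collections import Counter

def underweighted_bins(samples, threshold=0.5):
    """Return values whose share is < threshold * the uniform share."""
    counts = Counter(samples)
    uniform = 1.0 / len(counts)          # expected share if balanced
    total = len(samples)
    return sorted(v for v, c in counts.items()
                  if c / total < threshold * uniform)

# e.g. transaction types observed across a regression: every type was
# hit (so functional coverage reports 100%), but two are barely exercised.
seen = ["read"] * 480 + ["write"] * 460 + ["evict"] * 50 + ["snoop"] * 10
```

A traditional cover bin on transaction type would report all four values covered; the distributional view reveals the stimulus weakness.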
Presentation Slides by Amit Srivastava and Madhur Bhargava, Mentor Graphics Corporation
The increasing complexity of and growing demand for energy-efficient electronic systems has resulted in sophisticated power management architectures. To keep up with the pace, the power formats have also evolved over the years. With the recent release of IEEE 1801-2013 (UPF 2.1), several new features have been added, along with clarifications of existing features. It has also bridged the gap between UPF and CPF to provide much-needed convergence. However, it has also posed some questions about compatibility, differences, and challenges related to migration and its impact on verification.
In this paper, we will provide an in-depth analysis and relevant examples of all the new features introduced by the UPF 2.1 along with highlighting any semantics differences with the earlier versions to help the user easily migrate to the new standard.
by Durgesh Prasad and Jitesh Bansal, Mentor Graphics Corporation
X-optimism is a pernicious problem in RTL simulation: it can hide X bugs that cause serious issues in real silicon. Such hidden bugs are aggravated in power-aware simulation due to the injection of additional 'X' values from powered-down regions. Traditional verification techniques such as tool-generated assertions and custom bind checkers cannot catch such issues. X-propagation is a newer technique used to catch X-optimism issues in RTL simulation, but it lacks knowledge of the design's power intent, causes unnecessary noise, and is therefore not very useful in power-aware simulation. In this paper, the authors describe an effective technique to catch X-related issues such as reset failures, wake-up failures, and X-optimism issues in power-aware simulation.
In this paper we present a method that applies power-aware knowledge to the existing X-propagation technique for comprehensive X verification that is fully automated and easy to debug. Our solution selectively applies X-propagation according to the system power state in a controlled way. This dynamic selection and controllability ensures minimal noise, relevant X-propagation, and better debug capability. The propagated X values can be observed in simulation waveforms and debug tools. This catches X-optimism issues in power-aware simulation that are known to cause design failures at the synthesis level.
Also, our solution automatically inserts SystemVerilog assertions to catch X errors at the source. These assertions are active according to the current simstate of the system, and they can also be used as an alternative to custom bind checkers or low-power assertion checks. This solution has the advantage of being fully automatic, comprehensive, and free from user input. The downside is that it could generate some noise, because the RTL morphing can be overly pessimistic.
In the paper we will further discuss the tradeoffs and methodology in detail.
by Vidya Bellippady and Sundar Haran, Microsemi Corporation, Jay O'Donnell, Mentor Graphics Corporation
This paper describes a new verification technique using Test-IP: pre-built UVM test sequences implemented using a combination of directed, intelligent testbench automation (iTBA), and random methods. Test-IP converts an abstract test description defined in the UVM test into a series of protocol-specific burst sequence items passed to a standard verification-IP driver. This paper describes why the technique was first developed for AXI bus fabric applications and references a case study where it was used to verify a 2-port AXI DDR controller.
Presentation Slides by Kenneth Bakalar and Eric Jeandeau, Mentor Graphics Corporation
IEEE Std 1801-2009 (1) defines the Unified Power Format (UPF) for specifying the power distribution and control information required by digital RTL and gate models to represent the structure and behavior of designs with active power management. We propose an interpretation of UPF for designs that include analog and mixed-signal elements coded in Verilog-AMS, VHDL-AMS, or SPICE. No changes to the UPF syntax or file format are required. We offer as proof of concept a complete implementation and a demonstration of its use in a sample case. The implementation consists of a modified elaborator and an extensible set of power-aware interface elements.
by Hardik Parekh, Manish Kumar Karna, Mohit Jain, ST Microelectronics Pvt. Ltd., Atul Pandey and Sandeep Mittal, Mentor Graphics Corporation
Mixed signal system-on-chips (MSSoC) integrate digital and analog functions on the same chip. The increased analog content in today's SoCs is tightly integrated to the digital portion of the design. Market segments such as power management, automotive, communication, and security applications are driving ever more integration of analog and digital content on MSSoCs. SoC verification requires a lot of effort to achieve good functional coverage. Additional complexity in MSSoCs arises from the interconnection of signals flowing between digital and analog domains. To achieve good verification coverage on mixed signal SoCs, abstract models of analog components (henceforth called analog IP) are used. These abstract models, commonly called behavioral models, capture functional features of analog behavior in digital HDL languages and are orders of magnitude faster than simulating SPICE views of analog IP. To effectively use the behavioral models in SoC level verification, it is important to establish the equivalence between the model and the implementation (SPICE). This paper will present essential components of an equivalence validation environment and commonly used methods.