Bringing Regression Systems into the 21st Century
Today's verification projects are responsible for verifying some of the largest and most complex designs we have ever seen. Accordingly, the gathering and tracking of development and verification metrics, including coverage and test results, is more important than ever to project success.
From determining which files are necessary to build a DUT (Design Under Test) and testbench to knowing which development and verification metrics must be gathered and tracked, the task can be significant. Like many others, teams at Cypress had traditionally created verification management environments to meet a specific project's needs. Scripts were either borrowed from other projects or created from scratch and tweaked for the targeted project.
Over time, this ad-hoc script management often transformed verification environments into an unintelligible mass of interconnected files. Managing such environments required dedicated resources for each individual project, wasting scarce time and money as verification demands continued to grow. This paper will focus on an infrastructure created at Cypress to abstract away file list and metric gathering by providing a uniform front-end shell and back-end database, boosting predictability of testbench creation and metric tracking across multiple projects.
Additionally, this paper will discuss various metrics collected and the use of Mentor's Verification Run Manager (VRM) toolset in gathering metrics, tracking coverage and reducing test suites to quickly and efficiently obtain coverage goals.

Introduction

For years, verification management environments within Cypress have been tailored to suit a given project's needs.
This led to widely diverging script variants, each of which had to be managed by dedicated resources. Maintaining these multiple management environments quickly became an overwhelming task given increasing complexity and other demands on verification environments. Additionally, having to interface with multiple management environments created a drag on efficiency, making both resources and projects less portable and eliminating opportunities for shared learning and project reuse. A standard methodology for managing verification was required to provide a uniform interface to users and promote IP reuse.
Over time, as enhancements to this standard methodology were made, the entire company benefitted, rather than only pockets within the company making improvements. Ideally, each project can benefit from performance enhancements or efficiency improvements, as well as from changes to metrics collection, progress tracking and reporting. Furthermore, with consistent metric collection, project regressions can be compared both within and across projects company-wide. This paper will discuss the verification management infrastructure, which includes the following: the division of labor between Ruby scripts and Questa's Verification Run Manager (VRM); design and testbench tree and file list gathering; and test tree infrastructure and test lists.
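To make the file list gathering concrete, the following is a minimal Ruby sketch of walking a design or testbench tree and emitting a simulator-consumable file list. The class name, recognized extensions, and output format are illustrative assumptions for this paper's discussion, not the actual Cypress implementation.

```ruby
require 'find'

# Illustrative sketch: collect HDL sources under a DUT or testbench
# tree in a deterministic order, so every project builds its compile
# list the same way. Names and extensions here are assumptions.
class FileListGatherer
  SOURCE_EXTENSIONS = %w[.v .sv .svh .vhd].freeze

  # Recursively gather source files below +root+, sorted for
  # reproducible compile ordering across regression runs.
  def gather(root)
    files = []
    Find.find(root) do |path|
      files << path if File.file?(path) &&
                       SOURCE_EXTENSIONS.include?(File.extname(path))
    end
    files.sort
  end

  # Write one file per line, suitable for a simulator's -f option.
  def write_filelist(files, out_path)
    File.write(out_path, files.join("\n") + "\n")
  end
end
```

A uniform front end like this is what lets the back-end database record the same tree and file list metadata for every project, regardless of how each team organizes its sources.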
This paper will also discuss metrics and report generation, which includes: metrics gathering; test-level output and reports; regression-level output and reports; and trending of data. Finally, efficient coverage through ranking, including automated seed generation and test ranking with Questa's Verification Management tool, will be discussed.
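As a rough illustration of what coverage-based test ranking accomplishes, the Ruby sketch below treats ranking as a greedy set cover over coverage bins: repeatedly keep the test that adds the most not-yet-covered bins, and stop when no test adds new coverage. This is only a way to visualize how a ranked, reduced test list can reach the same coverage goals; it is not the algorithm Questa's Verification Management tool actually uses, and the data shape is assumed.

```ruby
require 'set'

# Greedy set-cover sketch of test ranking. Input is a Hash mapping
# a test name to the Set of coverage bins it hits (an assumed data
# shape for illustration). Returns test names in ranked order,
# omitting tests that contribute no new coverage.
def rank_tests(coverage_by_test)
  covered = Set.new
  ranked = []
  remaining = coverage_by_test.dup
  until remaining.empty?
    # Pick the test with the largest incremental coverage gain.
    best, bins = remaining.max_by { |_test, b| (b - covered).size }
    break if (bins - covered).empty? # nothing left adds coverage
    ranked << best
    covered.merge(bins)
    remaining.delete(best)
  end
  ranked
end
```

Running a regression in this ranked order front-loads coverage, so a truncated suite still approaches the full suite's coverage goals.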