Increased Efficiency with Questa® VRM and Jenkins Continuous Integration
by Thomas Ellis, Mentor Graphics
"Time is really the only capital that any human being has, and the only thing he can't afford to lose."
—Thomas Edison
For all the incredible technological advances to date, no one has found a way to generate additional time. Consequently, there never seems to be enough of it. Since time cannot be created, it is critically important to ensure that it is spent as wisely as possible. Applying automation to common tasks and identifying problems earlier are just two proven ways to make the best use of time during the verification process. Continuous Integration (CI) is a software practice focused on doing precisely that, resulting in a more efficient use of time.
WHAT IS CONTINUOUS INTEGRATION?
The basic principle behind Continuous Integration is that the longer a branch of code is checked out, the more it begins to drift away from what is stored in the repository. The more the two diverge, the more complicated it becomes to eventually merge in changes easily, ultimately leading to what is commonly referred to as "integration hell". To avoid this, and ultimately save engineers time, CI calls for integrating regularly and often (typically daily).
Regular check-ins are, of course, only half the equation; you also need to be able to verify those changes quickly, otherwise many small check-ins over several days are no different from one large check-in at week's end. Commonly, in a Continuous Integration environment, a CI server monitors the source control system for check-ins, which in turn trigger a CI process (time-based triggers are also common). This process builds the necessary design files and runs the requisite integration tests. Once complete, the results of the tests are reported back to the user and, assuming everything passed, the changes can now be safely committed to the repository.
By following this model, issues can be caught earlier in the development process and resolved more quickly, as there is less variance between check-ins.
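To make that flow concrete, here is a minimal sketch of such a process expressed as a Jenkins declarative pipeline. It is illustrative only: the flow described in this article is configured through the Jenkins web interface rather than a pipeline file, and the script names (compile_design.sh, run_tests.sh) are assumed placeholders.

```groovy
// Jenkinsfile sketch: a generic CI flow for a hardware design repository.
// Script names (compile_design.sh, run_tests.sh) are assumed placeholders.
pipeline {
    agent any

    // Poll the source repository for new check-ins every 15 minutes;
    // a purely time-based trigger such as cron('H 2 * * *') is also common.
    triggers {
        pollSCM('H/15 * * * *')
    }

    stages {
        stage('Checkout') {
            steps {
                checkout scm               // pull the latest design and testbench sources
            }
        }
        stage('Build') {
            steps {
                sh './compile_design.sh'   // build the necessary design files
            }
        }
        stage('Integration tests') {
            steps {
                sh './run_tests.sh'        // run the requisite integration tests
            }
        }
    }

    post {
        // Report the outcome of the run back to the user.
        success { echo 'Tests passed - the changes are safe to commit.' }
        failure { echo 'Tests failed - investigate before merging the changes.' }
    }
}
```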
This practice has been used successfully for many years in the software industry, so much so that it is fairly commonplace today. The idea of Continuous Integration is still fairly new in the realm of hardware verification, however, so it is difficult to find published metrics on its usage in that space specifically. One of the benefits of adopting a more mature technology, though, is that you can avoid some of the pitfalls which plagued early adopters. Since Continuous Integration has been used by software teams for some time, you can glean a general idea of both how widespread its usage has become and which technologies have risen to the top.
ZeroTurnaround is a development company which, amongst other things, conducts an annual global survey of Java developers and produces a report of the tools and technologies most commonly used in the industry. In 2014, it received responses from nearly 2,200 developers covering many topics, one of which was their usage of Continuous Integration technologies. In that survey, roughly 80% (or four out of five) of developers reported using Continuous Integration in their teams, a number which itself showed fairly significant growth, up from 68% the prior year.
Another interesting aspect of the report is the breakdown of which Continuous Integration servers were most commonly used. Far and away the most popular was Jenkins, reportedly used by 70% of the developers who said they use CI; the second-place tool was used by a mere 9%. So what is Jenkins, and why is it the favorite CI tool of so many users?
MEET JENKINS
Jenkins is a freely available, open-source continuous integration tool (released under the MIT license).
A quick bit of background: Jenkins was initially developed by Kohsuke Kawaguchi in 2004 while he was working at Sun Microsystems, though at the time the project was named Hudson. After its initial release in 2005, it quickly became a favorite open-source build server. In 2010, issues began to arise between the open-source community working on Hudson and Oracle (which had since acquired Sun), eventually requiring a vote on whether or not to fork the project. Based on an overwhelmingly supportive community vote, Jenkins was created as a fork of Hudson, and the majority of those working on or using Hudson at the time eventually migrated to Jenkins. Currently there are at least 127,000 installations of Jenkins (based on the tool's anonymous usage statistics). Remember the ZeroTurnaround study? It found only 8% of users still using Hudson.
Apart from being open source, Jenkins is easy to install and highly configurable via its web interface. While Jenkins offers a lot itself, it is also highly extensible via plug-ins. At present, it boasts 1,350+ plug-ins from 580+ contributors, performing a myriad of different tasks and allowing many third-party tools to leverage the power of Jenkins.
JENKINS AND VRM
On the surface, one might think that Jenkins and VRM are competing technologies; after all, both tools can build, run and report on regressions. In actuality, however, they are complementary. By marrying the two together, you can benefit from the strengths of both and create an extremely powerful solution for building and testing hardware designs.
While Jenkins is extremely flexible and can run just about anything, with lots of neat bells and whistles to boot, nothing within the Jenkins core is knowledgeable about hardware verification. In the same way that VRM does not natively monitor code repositories for developer check-ins, concepts like merging SystemVerilog functional coverage or recognizing why a UVM testbench failed are not native to Jenkins, whereas they are at the core of VRM. What you want to do is leverage Jenkins' strengths as a build system: let it monitor your source repository and launch your regressions. What it actually launches, though, is VRM, which manages the individual verification tasks by integrating with your grid software, collecting and merging the coverage and results, and so on. Once the regression is complete, Jenkins can then ask VRM to supply metrics for what was accomplished during the run and display those results in its web dashboard.
A REGRESSION IN JENKINS
Let's take a quick look at setting up a project to run VRM in Jenkins, starting from the project configuration page.
The configuration page walks you through the basic steps of setting up a project. Tasks in Jenkins are represented by builds: a build could be a complete regression, a set of unit tests, or any other task you may wish Jenkins to automate.
First, you specify when to run your tests via a build trigger. A build trigger can be a period of time, a specific time, or you can even have Jenkins monitor your repository for changes and automatically start a build for you. Jenkins then runs whatever you tell it to, which in this example is to launch VRM.
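As a rough sketch, those same two pieces (a trigger plus a build step that hands the regression to VRM) might look like this in pipeline form. The rmdb file name and the exact vrun invocation are assumptions for illustration; the options used at your site come from the Questa VRM documentation.

```groovy
// Illustrative sketch: a time-based (or SCM) trigger plus a build step that
// launches Questa VRM. The rmdb file name is a placeholder and the exact
// vrun options are an assumption; use the ones from your own VRM setup.
pipeline {
    agent any

    triggers {
        cron('H 22 * * 1-5')          // e.g. kick off a nightly regression on weekdays
        // pollSCM('H/30 * * * *')    // or monitor the repository for check-ins instead
    }

    stages {
        stage('Regression via VRM') {
            steps {
                // VRM manages the individual tests, grid submission and merging;
                // Jenkins only needs to launch it and wait for the result.
                sh 'vrun -rmdb regressions.rmdb'
            }
        }
    }
}
```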
Finally, Jenkins reports the results of the regression run (or build). Out of the box, Jenkins gives you basic pass/fail information and some basic reporting of results; however, its lack of the metrics verification engineers are most commonly interested in makes it feel a bit empty. To solve this shortcoming, you need one last piece to truly tie everything together neatly.
VRM JENKINS PLUG-IN
As mentioned earlier, one of the key benefits of Jenkins is that it is highly extensible through plug-ins. To make Jenkins more useful for running regressions with VRM, you can leverage the VRM Jenkins plug-in. Simply install it through the Jenkins plug-in manager, and Jenkins gains the ability to understand code and functional coverage, determine where log files reside, monitor host utilization, and perform many other verification-centric tasks.
To display the VRM results and enable these features, you simply need to add what is called a post-build action (in Jenkins terms), which has Jenkins call the plug-in to make sense of the regression results.
The setup is very straightforward: you simply tell Jenkins where the regression ran. Additionally, you can optionally enable a few other features, such as creating HTML reports and publishing a coverage graph to the project page. That's it! Jenkins and VRM will do the rest.
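If the job is driven from a pipeline rather than a freestyle project, the same post-build action would be expressed as a publisher step supplied by the plug-in. The step name and parameters below are hypothetical placeholders meant only to show where such a call sits; the actual names come from the plug-in's own documentation.

```groovy
// Hypothetical sketch only: the step name and parameters below are placeholders,
// not the plug-in's actual API. In a freestyle project this is simply the
// Questa VRM post-build action configured through the web UI.
post {
    always {
        questaVrmPublisher(vrmData: 'regression_results/',  // where the regression ran
                           htmlReport: true,                // optionally generate HTML reports
                           coverageGraph: true)             // publish a coverage graph to the project page
    }
}
```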
VRM REGRESSION RESULTS IN JENKINS
One of the great features of Jenkins is its web dashboard. Now that it is using the VRM Jenkins Plug-in, you get access to a lot of great information at a glance. There is far too much to show in this short article, but here are a few examples.
The main project page has two graphs which show a trend of the test results, as well as the coverage results, from all your past regression runs. You also get a summary table listing the last several regressions, including information on their duration, pass/fail statistics and coverage. There are also quick links to the HTML coverage report, as well as the latest test results.
If you dig into the most recent build, you can get more detailed data on that particular regression.
Here, in addition to pass/fail and coverage results, you can also see a list of the specific tests which failed, providing a means for easy high-level inspection. Expanding a given test gives you both the reason for the failure and the standard output for the test in question.
The plug-in leverages the vast amount of data VRM collects from regression runs, allowing all sorts of data to be analyzed that would otherwise need to be collected and reported manually. Otherwise difficult questions become easy to answer: Has this test, with this seed, ever failed before? What is the host utilization like during a nightly regression? When did coverage drop off?
SUMMARY
Continuous Integration with Jenkins, coupled with Questa Verification Run Manager, provides a powerful automated solution for build and regression management. By automating the regression process and helping to identify problem areas earlier, the two allow verification engineers to make more efficient use of their time, even on the tightest of schedules.
Increased Regression Efficiency with Jenkins Continuous Integration Before You Finish Your Morning Coffee
by Thomas Ellis, Mentor Graphics
Introduction:
As verification engineers, we are always looking for ways to automate otherwise manual tasks. In case you have not heard, we are constantly trying to do more with less. Continuous Integration is a practice which has been widely and successfully used in the software realm for many years. Deploying a continuous integration server such as Jenkins not only provides a way to automate the running of jobs and the collection of results, it also allows teams to reap the benefits of continuous integration practices, which ultimately leads to a cleaner repository with fewer integration headaches. Among many other benefits, Jenkins also provides a web dashboard to view and analyze results in a common place, regardless of how spread out your team may be. It's open source, has a strong community behind it, and you can start seeing the benefits by getting it up and running a regression in your environment before you even finish your morning cup of coffee.
Abstract:
The topic of Continuous Integration (CI) has become more and more common in the world of verification. For those unfamiliar with CI, it is a concept often associated with Agile programming practices, and it runs on the basic principle that the longer a branch of code is checked out, the more it begins to drift away from what is stored in the repository. The more the two diverge, the more complicated it becomes to eventually merge in changes, ultimately leading to what is commonly referred to as "integration hell". To avoid this, and ultimately save engineers time, CI calls for integrating regularly and often (typically daily).
Regular check-ins are, of course, only half the equation; you also need to be able to verify those changes quickly, otherwise many small check-ins over several days are no different from one large check-in at week's end. Commonly, in a Continuous Integration environment, a CI server monitors the source control system for check-ins, which in turn trigger a CI process (time-based triggers are also common). This process builds the necessary design files and runs the requisite integration tests. Once complete, the results of the tests are reported back to the user and, assuming everything passed, the changes can now be safely committed to the repository.
By following this model, issues can be caught earlier in the development process and resolved more quickly, as there is less variance between check-ins.
There are several options to choose from when it comes to CI; however, far and away the most common solution today is Jenkins. So what is Jenkins, and why is it the favorite CI tool of so many users? This paper takes a look at Jenkins and why it is such a popular choice for CI.
Getting Jenkins to run a regression is a simple and straightforward process, and one aim of this paper is to walk through that process. It also shows the types of data that can be extracted from a regression, both for an individual regression run and as historical analysis over an entire project. This allows a team to see trends in metrics such as build times, test pass and fail results, and coverage, all directly from within the Jenkins web dashboard.
We will also look at how having an automation server tied directly into your source code management (SCM) system allows tests to be run automatically every time a user checks in code. This helps to ensure that a stable branch of code always exists and, with frequent check-ins, helps teams spend less time integrating large changes, which often result in multiple issues. In the event of a failed regression, the responsible submitter can be quickly alerted so that a solution can be found, ensuring the repository returns to a stable state as soon as possible.
One of Jenkins' biggest advantages is the large community behind it, which continually creates new plug-ins that enhance and add to Jenkins' features. We will also take a look at how to leverage the vast array of plug-ins available from the Jenkins community to better analyze the results generated from Jenkins and get the most out of the new CI environment.
Mentor Graphics Integrates its Questa Verification Solution with Jenkins Ecosystem Enabling Maximum Regression Speed
WILSONVILLE, Ore., October 20, 2016 – Mentor Graphics® Corporation (NASDAQ: MENT) today announced the integration of its Questa® Verification Solution with the Jenkins Continuous Integration and Source Code Management (CI/SCM) ecosystem. As a result, Mentor Graphics Questa users are now able to manage and control regressions running Questa Formal and Questa Simulation tools from within the Jenkins dashboard. They can also use Jenkins to interpret and analyze results, generate a complete suite of charts and graphs, and automatically generate status and trend reports for management. By integrating Questa tools with the Jenkins ecosystem, Mentor Graphics doubles verification management productivity; verification management is, according to the Wilson Research Group, one of the top four time-consuming tasks faced by both ASIC and FPGA verification teams around the world.
Jenkins is the leading open-source automation server that enables engineers around the world to reliably build, test, and deploy their software projects. Over the past several years, many project teams have begun to adopt Jenkins to support their verification projects as well. However, with existing solutions, verification engineers can only run their verification engines from within Jenkins. They receive very little meaningful information about their verification results through Jenkins, requiring them to manually interpret and analyze results. Jenkins is a continuous integration management environment, but it does not directly support the management, execution, and analysis of verification toolsets. The Mentor Graphics Questa Verification Run Manager (VRM) plugin gives Jenkins the ability to utilize regression run results from the Questa VRM system, in addition to letting users manage and run their regressions through Jenkins.
“We cordially welcome Mentor Graphics into the Jenkins ecosystem and community,” said Oleg Nenashev, engineer at CloudBees and Jenkins core team member. “The Questa VRM plugin is one of the very first open-source plugins tightly integrating an EDA tool into Jenkins. Such plugins will greatly help with setting up efficient, continuous integration and delivery in hardware and embedded projects. As one of the top EDA vendors, Mentor Graphics shows a great example to all users and competitors. We are looking forward to more integrations with the Mentor Graphics tool chain and increased use of Jenkins in the area.”
The Questa VRM plugin pulls in important information about regression runs and displays it in the Jenkins cockpit, enabling users to easily observe, analyze, and report the latest regression results, as well as regression trends over time. Questa VRM also enables users to launch any tool within the Mentor Graphics Enterprise Verification Platform™ including its portable stimulus generator, performance analysis, and debugging environments.
"Integrating the Mentor Graphics Questa Solution into the Jenkins CI/SCM ecosystem lets users take advantage of the benefits of both systems in a single environment”, says Mark Olen, Mentor Graphics product marketing manager, Design Verification Technology Division. “The two complementary environments create a more productive and efficient verification flow, enabling customers to achieve maximum utilization of their formal and simulation systems.”
Integration with the Jenkins CI/SCM ecosystem is available to all Mentor Graphics Questa users. The Questa VRM plugin is available for free download on the Jenkins open source automation server.
(Mentor Graphics and Questa are registered trademarks of Mentor Graphics Corporation. Enterprise Verification Platform is a trademark of Mentor Graphics Corporation. All other company or product names are the registered trademarks or trademarks of their respective owners.)
For more information
Laura Parker
laura_parker@mentor.com
Mentor Graphics
503.685.1775
About Mentor Graphics
Mentor Graphics Corporation is a world leader in electronic hardware and software design solutions, providing products, consulting services and award-winning support for the world’s most successful electronic, semiconductor and systems companies. Established in 1981, the company reported revenues in the last fiscal year of approximately $1.18 billion. Corporate headquarters are located at 8005 S.W. Boeckman Road, Wilsonville, Oregon 97070-7777. World Wide Web site: http://www.mentor.com/.
Scalable and Modular Verification Management
When verification is not under control, project schedules slip, quality is jeopardized and the risk of re-spins soars. What’s required is a common platform and environment that provides all parties – system architects, software engineers, designers and verification specialists – with real-time visibility into the project. And not just to the verification plan, but also to the specifications and the design, both of which change over time. There are three dimensions to any IC design project: the process, the tools and the data. Questa® offers a comprehensive approach to the problem with its verification management option, which handles all three within a scalable and modular solution.
BENEFITS
- Visibility to hit market windows on schedule
- Reduce the volume of data and track project progress
- Manage the risk to keep resources on track
- Reduce maintenance and improve automation
- Jump start the debug process
FEATURES
Data Management
Questa's verification management capabilities are built upon the Unified Coverage Database (UCDB). The UCDB captures any source of coverage data generated by verification tools and processes; Questa and ModelSim use this format natively to store code coverage, functional coverage and assertion data in all supported languages.
The UCDB also captures information about the broader verification context and process, including which verification tools were used and even which parameters constrained those tools.
The result is a rich verification history, one that tracks user information about individual test runs and also shows how tests contribute to coverage objects.
Process Management
Verification is driven by requirements concerning both the functionality of the final product and the intended methods of testing this functionality. By providing tools to import verification or test plans and then guide the overall process, Questa verification management helps deal with this complexity and shepherd a project toward verification closure. It also provides the ability to store snapshots of data across the lifetime of a project, which helps to concentrate efforts where they are most needed.
Test Plan Tracking
Projects are tracked in spreadsheets or documents created by a range of applications, from Microsoft Excel and Word to OpenOffice Calc and Writer. So it's critical that a verification management tool be open to a range of file formats, a basic feature of Questa, which is built on the premise that a user should be able to use any capture tool to record and manage the plan.
This document becomes the guide for the verification process. Within Questa's user interface, the plan's data can be sorted, filtered and subjected to complex queries, such as which tests are most effective at exercising a particular feature or which set of tests needs to be run to get the best coverage for a modified instance of the design.
Trend Analysis
Understanding the progress of a dynamic verification process requires an ability to view coverage data. Accordingly, a verification management tool needs to provide the means to manage, view and analyze this data, whether it's generated from a single test or the combination of a complete regression run.
Just producing and managing individual snapshots of coverage data can be difficult due to the huge amounts of data involved. The Questa UCDB affords this ability, reducing the regression data from multiple snapshots to a single database and then querying that database for trends.
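A minimal sketch of such a merge, expressed here as a Jenkins pipeline stage to match the Jenkins integration described under Verification Run Management below; the file names and the report invocation are assumptions for illustration, and the options accepted may vary by release.

```groovy
// Illustrative pipeline stage: merge per-test UCDB files into one database and
// produce a textual summary. File names are placeholders and vcover is assumed
// to be on the PATH.
stage('Merge coverage') {
    steps {
        sh 'vcover merge merged.ucdb results/*.ucdb'          // combine per-test coverage databases
        sh 'vcover report merged.ucdb > coverage_summary.txt' // write a summary for archiving
        archiveArtifacts artifacts: 'merged.ucdb, coverage_summary.txt'
    }
}
```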
Tool Management
Verification management means balancing various tools and techniques to get to closure, often with an infrastructure built on home-grown scripting and lots of manual maintenance. And as verification complexity ascends, so too does the need for a more flexible, automated solution.
Verification Run Management
Questa's verification run manager is one such solution, bringing consistency to a project through heavy doses of automation. This improves time to coverage and time to next bug, and enhances the ability of dispersed project teams to accurately estimate the time to completion. Additionally, its integration with Jenkins furthers these benefits by providing an intuitive web dashboard to observe and analyze project results and trends.
Verification Results Analysis
Questa's verification results analysis speeds the ability to address failures identified during a regression, which helps a verification project stay on schedule. The technology brings together the results of multiple verification runs, assisting in grouping, sorting, triaging and filtering messages over the complete set of regression tests. The results analysis can be triggered automatically and used by the run manager, allowing the results of a given test to control whether, and what, should be saved in a triage database for further analysis.
Questa Verification Management is the most effective and modular solution available in the industry today. It manages all three dimensions of the verification process, providing incremental improvements to any verification environment as well as the ability to manage the complete flow.