In late 2014, we found ourselves in a project to develop a custom interconnect UVM-compliant VIP. Not only did we need to develop the VIP from scratch, we also had to plug it into a DUT with PCIe and Avalon Streaming interfaces and perform advanced verification using our custom UVM VIP.
The challenges were:
- Developing a custom interconnect UVM compliant VIP from scratch
- Verification of a DUT using the custom interconnect UVM VIP
- Verification of the PCIe interface of the DUT
- Making sure that the documentation created during the verification is DO-254 compliant
- Creating requirements, traceability information, functional coverage reports, and code coverage reports
STEP 1: CREATING A PROJECT MANAGEMENT ENVIRONMENT
This verification project had to be compliant with DO-254. Therefore, not only did we have to plan the UVM environment and how we were going to do the verification, we also had to plan how we would manage different aspects of the project life cycle.
Requirements Management and Tracing:
We used conventional methods here: Word and Excel. The requirements were captured in MS Word, the verification plan was developed in MS Word as well, and the requirements traceability matrix in MS Excel. And this is REALLY cumbersome. Why? Here are the reasons why we are going to switch to a professional requirements management and tracing system in our next project.
Requirements review is hard with MS Word. Reviewers were using the "Track Changes" feature of MS Word, and each review was committed to the revision control system. Tracking when a review was done and who did it was all manual. It is also not possible to generate queries related to reviews. Requirement numbering was manual as well, which is cumbersome when new requirements have to be defined. A requirements management tool would be a better choice and would make our lives easier in terms of review logging and tracking. It would also make it possible to use queries to extract different kinds of requirements-related information.
SVN was used for revision control. It is free and does the job. We chose it for its tight integration with Trac, and because our configuration management environment is based on SVN.
Configuration management is very important in a DO-254 compliant project. We used an internal configuration management tool built on SVN. It allowed us to create configuration items, generate SVN tags specific to configuration items, build configurations, and embed configuration items in configurations. This way we could run regressions on tagged top-level configurations and track which configuration item is used inside a top-level configuration. The verification environment is reproducible, and reverification of a released top-level configuration is possible even months later. Release notes for a configuration are also generated automatically based on which files were modified on SVN: the tool provides a list of modifications before tagging a configuration item, so engineers can generate release-note data without having to remember which changes were made to which files.
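As an illustration of that release-note step, a small script can group the files changed between two tags into added/modified/deleted sections. The sketch below is hypothetical Python, not our internal tool; it parses the text that `svn diff --summarize old-tag new-tag` prints:

```python
from collections import defaultdict

# Map svn status codes (first column of `svn diff --summarize`)
# to release-note section labels.
STATUS_LABELS = {"A": "Added", "M": "Modified", "D": "Deleted"}

def release_notes(svn_summary: str) -> str:
    """Group a `svn diff --summarize` listing into release-note sections."""
    groups = defaultdict(list)
    for line in svn_summary.strip().splitlines():
        status, path = line.split(None, 1)
        groups[STATUS_LABELS.get(status[0], "Other")].append(path.strip())
    sections = []
    for label in ("Added", "Modified", "Deleted", "Other"):
        if groups[label]:
            sections.append(label + ":")
            sections.extend("  - " + p for p in sorted(groups[label]))
    return "\n".join(sections)
```

Feeding this the summarize output for two configuration-item tags yields the modification list an engineer can paste straight into the release note.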
There needs to be a change/defect management system deployed and used throughout the project, especially in a DO-254 compliant hardware development project. We used Trac for defect and change management, with SVN as the revision control system. Trac is a great (and free) tool which can interface to Subversion, Git, and other version control systems. Trac allows wiki markup in issue descriptions and commit messages, creating links and seamless references between bugs, tasks, change sets, files, and wiki pages. A timeline shows all current and past project events in order, making it easy to get an overview of the project and track progress. The roadmap shows the road ahead, listing the upcoming milestones. See trac.edgewall.org for further information.
STEP 2: GENERATE THE REQUIREMENTS AND THE PROJECT ENVIRONMENT
Two months after project kick-off we had the first version of the requirements. Then we started to develop the UVM environment with the initial requirements at hand. However, the requirements were not frozen. Have you seen any project where the requirements are frozen throughout the project? Probably not. And our case was not an exception. The requirements were changed, modified, some of them removed, and then changed and modified again, and so on… This is why managing and tracking requirements with MS Word and MS Excel is not a good idea: when a requirement is changed, modified, or deleted, we also have to modify the verification plan and the requirements traceability matrix, and the three documents are not linked.
This modification process is error prone and, more than that, time consuming. Lesson learned: in future projects we will use a requirements management and traceability tool like ReqTracer™.
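To make the linking problem concrete, here is a minimal Python sketch (our project used Word and Excel, not a script like this) that checks a traceability matrix against the current requirement list. It reports requirements that no test case traces to, and stale links left behind when a requirement is deleted:

```python
def check_traceability(requirements, matrix):
    """requirements: set of requirement IDs currently in the spec.
    matrix: dict mapping test-case name -> list of requirement IDs it
    claims to cover.  Returns (uncovered, stale): requirements no test
    traces to, and (test, requirement) links to deleted requirements."""
    covered = {req for reqs in matrix.values() for req in reqs}
    uncovered = sorted(requirements - covered)
    stale = sorted((tc, req) for tc, reqs in matrix.items()
                   for req in reqs if req not in requirements)
    return uncovered, stale
```

A requirements management tool runs exactly this kind of consistency query automatically on every change; with unlinked Word and Excel documents it has to be done by hand.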
STEP 3: PLAN & DEVELOP UVM ENVIRONMENT
The initial work was to generate a top-level block diagram of the UVM environment and reach a common agreement within the team on what would be coded and which UVM items, sequences, monitors, drivers, and agents would be developed; then create a plan and estimate the workload for the entire verification development. We did the micro-planning, identified the milestones, and entered them into Trac. Knowing your milestones is crucial, as your defects, change requests, tasks with owners, priorities, and related configuration items will all be linked to milestones. We had to know which milestone each item we entered into Trac belonged to. This way, the weekly progress of the project could be tracked for every item in the Trac system. The project plan was also aligned to the items in the Trac system, which facilitated project management.
For the custom interconnect UVM VIP, the initial work was to develop the UVM environment base classes for the agent, sequence item, monitor, sequencer, and environment configuration. This took almost one month with two people.
Another engineer integrated Mentor's PCIe VIP with the DUT and ran the initial tests. Thanks to Mentor's Altera kit for integrating PCIe QVIP, it took only a couple of days to get the verification environment up and ready. Mentor provided an example kit to integrate their PCIe VIP into Altera's PCIe Hard IP, and this made our life easier during the first bring-up of the PCIe interface: we saw that a link was established and a bus enumeration could be done. The documentation was also good quality, and most of the time we were able to find the answer to a PCIe VIP question in the documentation without contacting Mentor support.
STEP 4: TEST CASE IMPLEMENTATION & REGRESSION & RESULT ANALYSIS
We had close to 200 test cases to be implemented. All of them were random test cases, and they were linked to the requirements. For each test case, we defined the assertions, cover directives, coverpoints, and covergroups for our DUT. Questa® Prime was used as the HDL simulator, and we gathered the test cases in a test plan in XML format to be able to create a test_plan.ucdb. Creating a test_plan.ucdb allowed us to link the test cases defined in our verification plan to the regression results UCDB file. This way, after running a regression, it was possible to see the total coverage (code or functional) of the regression as a whole and for each test case. It was also possible to analyze which cover items were not covered by which test case, which made the verification engineers' coverage analysis much easier and faster.
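The per-test analysis that the UCDB linkage enables can be sketched in plain Python (a simplified model for illustration, not the Questa UCDB API): given the set of cover items each test hit, compute the total coverage of the regression and the items each individual test missed:

```python
def coverage_report(per_test_hits, all_items):
    """per_test_hits: dict mapping test name -> set of cover items that
    test hit.  all_items: set of all cover items defined in the plan.
    Returns the regression's total coverage fraction and, per test, the
    sorted list of items that test did not cover."""
    hit_anywhere = set().union(*per_test_hits.values()) if per_test_hits else set()
    total = len(hit_anywhere & all_items) / len(all_items)
    missed = {tc: sorted(all_items - hits) for tc, hits in per_test_hits.items()}
    return total, missed
```

The point of the test_plan.ucdb flow is that the simulator maintains exactly this mapping for you, at the granularity of every coverpoint bin, instead of engineers reconstructing it from individual run logs.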
We initially created a regression suite based on C-shell scripts. We created a C-shell script which could run a single test case, or run a regression by reading in a test file where all the test case names were defined. The script accepted command-line arguments such as GUI mode, test case name, UVM verbosity level, waveform dump file, coverage on/off, seed, etc. Result analysis was done at the end of each test case run by checking for predefined strings like "Error:", "UVM_ERROR", "UVM_FATAL", etc. There were two drawbacks to this regression C-shell script:
- Running test cases in parallel to speed up the regression time was not possible
- Running the tests with different repeat counts was not supported by the script. This was needed to increase functional coverage by running each test case multiple times
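The pass/fail check the script performed can be sketched as follows; this is illustrative Python rather than our actual C-shell code. One subtlety worth showing: UVM prints summary lines such as `UVM_ERROR :    0` at the end of every run, and those must not themselves be counted as failures:

```python
import re

# End-of-run summary lines like "UVM_ERROR :    0" report counts and
# must be excluded when grepping for failure markers.
SUMMARY = re.compile(r"UVM_(ERROR|FATAL|WARNING)\s*:\s*\d+")
FAILURE = re.compile(r"Error:|UVM_ERROR|UVM_FATAL")

def classify_run(log_text):
    """Return ("PASS", []) or ("FAIL", offending_lines) for one run log."""
    fails = [ln for ln in log_text.splitlines()
             if FAILURE.search(ln) and not SUMMARY.search(ln)]
    return ("FAIL", fails) if fails else ("PASS", [])
```

Collecting the offending lines, not just a verdict, is what makes the subsequent debug triage fast.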
As a result of the needs listed above, we did a pilot evaluation of the Verification Run Manager (VRM) tool from Mentor. The deployment was easy, thanks to the support from Mentor. Once the regression environment was up and running with VRM, our regression times improved (of course, you are limited here by the number of Questa licenses you have, as the runs are parallel), and we had a very nice way to run the test cases in a regression with different seeds, different repeat counts, and different run-time options. VRM also works in an integrated way with Questa; hence, we were able to interactively debug a failing test straight out of a regression run. Another benefit was result analysis: VRM can capture UVM errors, fatals, warnings, and infos for each test case run and generate a nice report for further debugging and project reporting. What is captured can also be customized, meaning we can search for a specific word or sentence in the test case results and place the matches in a result analysis group for further investigation.
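The scheduling pattern VRM gave us over the C-shell script, expanding each test into several seeded runs and launching them in parallel, can be sketched in Python (hypothetical helper names; `run_one` stands in for the actual simulator invocation, and `max_parallel` models the license limit):

```python
import random
from concurrent.futures import ThreadPoolExecutor

def build_jobs(test_cases, repeats, seed0=0):
    """Expand each test case into `repeats` jobs, each with its own
    randomly drawn (but reproducible) simulation seed."""
    rng = random.Random(seed0)
    return [(tc, rng.randrange(2**31))
            for tc in test_cases for _ in range(repeats)]

def run_regression(jobs, run_one, max_parallel=4):
    """Run jobs in parallel; run_one(test, seed) is a stand-in for
    launching one simulation.  max_parallel caps concurrent runs the
    way the Questa license count does in practice."""
    with ThreadPoolExecutor(max_workers=max_parallel) as pool:
        futures = [pool.submit(run_one, tc, seed) for tc, seed in jobs]
        return [f.result() for f in futures]
```

With repeats driven per test case, the coverage-hungry random tests can be rerun many times with fresh seeds while directed-style tests run once.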
Developing a UVM environment, especially for a VIP, takes time and requires experience. However, once it is there, with the right tools in hand, confidence in the verification environment definitely increases: we now know that the methods and the environment itself can be reused in our next projects. And because it is UVM, when a new engineer with UVM knowledge joins our team, he/she will already be equipped with the necessary information to get up and running with us. There is no need to teach him/her company-internal methods, tools, or methodologies for verification.
We now have ReqTracer, Register Assistant, and Questa inFact on our "To Do" list to be learned and deployed.