Version 1.0 of the UVM class library was released by Accellera at the end of February 2011, the result of a unique collaborative effort between fierce competitors (Siemens EDA, formerly Mentor Graphics; Cadence; and Synopsys) and a small number of activist user companies. The objective was to provide an industry-standard, SystemVerilog-based verification methodology. To celebrate the 10th anniversary of the UVM, I would like to take the opportunity to reflect on how well it has achieved its objectives and what its future might hold.
I am old enough to remember the days before verification methodologies. My first experience writing a testbench was to write parallel patterns that were fired at a design in a simulator, where every bit in every pattern had to be right and abstraction meant using a hex rather than binary format. With the advent of Verilog and VHDL in the late 1980s, abstraction improved substantially, but there was no real industry consensus on how to go about writing testbenches. I can remember spending weeks figuring out the best way to structure a Verilog or VHDL testbench around a design, and I can remember spending months struggling to master testbenches that colleagues or customers had constructed. This was not a scalable approach, and with time the limitations of these verification languages became all too apparent as designs started to increase in size and complexity.
Figure 1 - Where the major features of the UVM came from
The end of the 1990s saw the evolution of libraries and languages that started to address the problems of abstraction and scalability. The “e” language gave us constrained random verification, and also led to the key concepts of agents and functional coverage through the eRM verification methodology. But all of these good things were locked up in proprietary software. A little later, the SystemVerilog language evolved out of Vera and Superlog and merged with Verilog.
In 2006, Siemens EDA released the AVM as an open source class library. This started out as a SystemVerilog implementation of the SystemC TLM standard but quickly evolved to support a standardized testbench methodology. Siemens EDA and Cadence then collaborated to develop the OVM, which was an open source SystemVerilog class library combining eRM and AVM features to create a methodology that the user community recognized as a viable proposition, since they were no longer forced to use proprietary software. The initial release of the OVM was in January 2008. Usage of the OVM started to ramp up with various companies starting to supply verification IP (VIP).
Then in April 2010, the Accellera VIP interoperability technical committee voted to take the OVM forward as an industry standard, with Synopsys and the user community joining Siemens EDA and Cadence in the effort to produce the UVM, which was largely based on the OVM, adding run-time phases, the register package, and TLM2 support.
Figure 2 - How the UVM code base breaks down by lines of code (LOC)
COMPARING THE VISION AND THE REALITY
Taking on the baton from the AVM and the OVM, the UVM was developed as an open source SystemVerilog library with the view that it would run on any IEEE 1800 SystemVerilog simulator, a move aimed at enabling growth in simulation usage and the development of a verification ecosystem. Today, UVM VIP and UVM-based simulation environments are easily migrated between simulators. Having an industry standard methodology has brought a number of advantages, not least of which is that verification teams can focus on developing verification environments and tests rather than having to develop a project- or company-specific methodology and testbench infrastructure from scratch.
Since UVM 1.0 an ecosystem has developed. It is now possible to license UVM compliant VIP from multiple vendors and integrate it into a testbench based on well understood patterns. Understanding how to get the best out of the VIP might take a while, but, like understanding the DUT, that is a domain specific problem and outside the scope of a methodology.
Tools that support the development and debug of UVM code are another valuable part of the ecosystem. Several companies have emerged as providers of authoring tools, linting tools, or register specification and code generators. Most up-to-date debug GUIs now offer support for UVM debug or leverage UVM abstractions to help with protocol debug.
Another aspect of the vision was how the methodology might impact verification project management. For instance, one strategy for tackling a project is to have an expert UVM architect who assembles the verification environment and develops specialized components, while a team of verification engineers, who need only minimal knowledge of UVM to start with, uses the environment to focus on test case generation, bug hunting, debug, and closure.
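This division of labor can be illustrated with a minimal sketch. Note that `smoke_test`, `my_base_test`, `read_write_seq`, and the `m_env.m_agent.m_sequencer` handle are all hypothetical names standing in for what an architect would actually provide; this is not code from any particular project.

```systemverilog
// Hedged sketch: the base test (written by the architect) builds the whole
// environment; the test writer only extends it and launches a sequence.
class smoke_test extends my_base_test;
  `uvm_component_utils(smoke_test)

  function new(string name = "smoke_test", uvm_component parent = null);
    super.new(name, parent);
  endfunction

  // The test writer picks a sequence and a sequencer; environment
  // construction is inherited from the base test.
  task run_phase(uvm_phase phase);
    read_write_seq seq = read_write_seq::type_id::create("seq");
    phase.raise_objection(this);
    seq.start(m_env.m_agent.m_sequencer);
    phase.drop_objection(this);
  endtask
endclass
```

With this pattern, the depth of UVM knowledge needed to be productive is limited to sequences, objections, and the factory `create` idiom.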
Verification reuse was touted as another advantage of the UVM, with block level testbench code reused vertically at successive levels of design integration and horizontally between derivative projects. Exactly how this has played out is probably very dependent on the wisdom, skill, foresight, and luck prevailing at different companies. However, the general consensus seems to be that getting vertical reuse to work has not been straightforward.
From the human resources perspective, there are now several respected training companies that offer UVM training, and many more websites that offer online learning material and examples, so learning the UVM class library is well supported. Once you have learned UVM and gained some experience, it is fairly easy to transfer your skills between projects or even jobs. Over time a talent pool has developed that makes it easier to recruit and staff verification projects. From the verification engineer’s viewpoint, unless you can demonstrate UVM knowledge it is harder to get a verification job.
UVM ADOPTION – WILLING OR OTHERWISE
For verification practitioners who had adopted the OVM, moving to the UVM was fairly straightforward, and there was a migration script provided to help change OVM code to UVM, which did a bit more than change “ovm” to “uvm” but not much. As a result, the UVM became mainstream within 6-12 months of its debut and has since continued to grow in usage. This only serves to illustrate that the UVM was a compelling solution to the issues that were prevalent in the industry.
For other adopters, who were more Verilog or VHDL oriented and to whom object-oriented programming (OOP) was a complete mystery, the initial adoption of the UVM arose out of practical necessity rather than choice. As an example, anyone who is using an IP block with a complex protocol, such as PCIe or DisplayPort, will quickly realize that the make-versus-buy decision for VIP has to be a buy, and as a consequence, the only viable way forward is to adopt UVM. Another example is in the FPGA domain where at one time teams may have been able to check design functionality with multiple iterations on a development board, but as devices grew in size and place-and-route times increased this became too painful, forcing another class of user to embrace simulation based verification and, with it, the UVM.
A common complaint from new users is that the UVM is just too big and confusing, with several possible ways of doing the same thing. In many cases, its class-based OOP implementation poses a steep learning curve to those who are more used to writing Verilog or RTL code. Training, guidance, and mentoring are some help, but the persistence of the problem has been recognized for some time. The UVM Cookbook on the Verification Academy promotes the use of a subset of the UVM that allows you to steer a straight course through the implementation of a verification environment without getting side-tracked into the more esoteric corners. Other organizations have produced training courses or template libraries that promote an “easier” UVM by taking a similar approach. For instance, the UVM Framework (UVMF) shipped with Questa, from Siemens EDA, adds a layer on top of UVM that makes it quick and easy to construct a testbench by passing a VIP agent as a parameter to a class that then takes care of the implementation.
One positive thing to note here is that the component side of a UVM environment uses fairly regular patterns and much of the implementation detail is boilerplate, which means that once the testbench VIP has been selected the generation of the testbench code can be automated. As an example, the Questa VIP configuration tool generates UVM testbench code that instantiates VIP agents, connects the associated interfaces to the DUT, and configures them based on selections made in a GUI. Using this approach gets you to the point where you can start writing tests and sequences in minutes without having to understand how the UVM components (test, env, and agents) are connected.
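The regularity of those component patterns is easy to see in a sketch. Here `top_env` and `bus_agent` are hypothetical names, not part of any real VIP catalog; the point is the shape of the code, which is the same for nearly every environment.

```systemverilog
// Illustrative boilerplate only: every UVM env follows the same
// create-in-build_phase recipe, which is why this layer of the testbench
// is a good candidate for automatic generation.
class top_env extends uvm_env;
  `uvm_component_utils(top_env)

  bus_agent m_agent; // hypothetical VIP agent

  function new(string name, uvm_component parent);
    super.new(name, parent);
  endfunction

  function void build_phase(uvm_phase phase);
    super.build_phase(phase);
    // Factory creation so tests can override the agent type if needed
    m_agent = bus_agent::type_id::create("m_agent", this);
  endfunction
endclass
```

Because every agent, env, and test follows this recipe, a generator only needs the VIP selections and DUT interface connections as input; everything else is mechanical.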
Figure 3 - Verification methodology adoption rates versus UVM versions - Data source: Wilson Research Group Verification Survey Results 2020
A GROWING PROBLEM WITH VERSIONS
In the early days of the UVM, there were regular updates to the library, occurring about twice a year and aimed mainly at fixing bugs and addressing usage problems. This situation prevailed for about two years, covering UVM versions 1.0 through 1.1d. Then there was a three-year gap, at the end of which UVM 1.2 was released with a view to IEEE standardization of the UVM reference guide. UVM 1.2 added and changed a number of things, some of which were esoteric and unnecessary. This made users of UVM 1.1 pause. The IEEE standardization process resulted in the 1800.2-2017 LRM, followed by a reference implementation in 2018, effectively a two-year gap after the release of UVM 1.2. The experience of implementing the IEEE 1800.2-2017 version of the library flushed out a number of errata and inconsistencies in the IEEE LRM, and the standard was revised, resulting in a 2020 edition. The Accellera UVM working group followed through with version 1.0 of the IEEE 1800.2-2020 implementation in December 2020, which addressed a few changes due to LRM clarifications along with a number of bug fixes.
However, a growing problem has emerged in that the IEEE versions of the UVM library are not being adopted. When users report bugs in the Accellera Mantis system, they are reporting them for pre-IEEE versions of the UVM library. There is other anecdotal evidence pointing to a lack of adoption of more recent versions of the library. Discussions with users about versions quickly highlight the reality of project schedules and the perceived risks associated with migrating to a more up-to-date version. This leaves the Accellera working group in a quandary: when they fix a bug or add a feature, it gets added to the next release of the 1800.2 implementation, which is of no help to someone using a 1.1x or 1.2 version. Some teams monitor the bug fixes or come up with their own and integrate them into their own version of the UVM source code, negating one of the main points of having an industry standard. Obviously, the ideal situation would be that most users were on the latest version of the library and that there existed a healthy environment where library quality is steadily improving because errata are addressed in a timely manner for industry wide consumption.
The Accellera working group is seeking to take steps to mitigate this problem by developing and publishing a migration guide and, potentially, tools to help users migrate to an up-to-date version of the library.
FACING EMERGING OPPORTUNITIES
The UVM is of its time. It was a big step forward for the verification community. But to some extent, it is becoming a victim of its own success. Since 2011, the UVM has enabled verification to scale, but at the same time, chip designs have grown in complexity and the associated software content has become a significant part of the problem. This is leading to a shift in emphasis towards system validation whereby an SoC needs to be prototyped using different layers of software and different long-running use case scenarios need to be explored before going to silicon. During the validation process, the overall system needs to be checked to make sure that not only is it functional but it also meets performance and power goals. This is a long way from the original scope of the UVM, which was to provide a reusable verification methodology to support constrained random verification based on SystemVerilog. The UVM working group is seeking input on what needs to be put in place so that the UVM continues to be relevant to the needs of its users and support the next 15–20 years of methodology developments.
The shift to validation has prompted users to adopt hardware engines such as emulators and FPGA prototypes to increase overall throughput. At the same time, there is growing interest in using stimulus and analysis that is portable between simulation, emulation, and FPGA prototyping engines and the final silicon implementation. A number of standardization efforts have been under way to support this trend. There are several ways the UVM library might develop in order to support these interests, the most obvious one being to evolve to provide an implementation layer for Portable Stimulus, either software driven or based on the Accellera PSS. In order to make the solution viable, it would have to provide alternative architectures tuned for getting transactions in and out of different hardware engines at a rate that does not impact the overall throughput. Another possible contribution would be to focus on increasing the throughput of stimulus and analysis transactions in UVM testbenches.
Another long-standing area of concern is the UVM register layer, which has a number of inconsistencies and is not really architected to scale to represent a register map with multiple views of hundreds of thousands of registers, typical of a large scale device. An update to the register layer should be aligned with industry standards, such as IEEE 1685-2014 IP-XACT and Accellera SystemRDL.
Another potential area for improvement would be to support the increasing use of other languages, such as Python and C, in verification. These languages offer higher performance than SystemVerilog for solving certain types of modeling and generation problems, and they provide access to extensive libraries for processing different types of data. The UVM could address this by providing an industry standard inter-language interface solution.
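SystemVerilog can already reach C today through DPI-C, the mechanism any standardized inter-language layer would most likely build on. The sketch below is purely illustrative: `gen_payload` is a hypothetical C function (compiled and linked through the simulator's normal DPI flow), not a real UVM or VIP API.

```systemverilog
// Hedged sketch of C-backed stimulus via DPI-C. gen_payload is an assumed
// C function with signature: int gen_payload(int seed, unsigned char data[16]);
import "DPI-C" function int gen_payload(input int seed,
                                        output byte unsigned data[16]);

class c_backed_item extends uvm_sequence_item;
  byte unsigned payload[16];
  `uvm_object_utils(c_backed_item)

  function new(string name = "c_backed_item");
    super.new(name);
  endfunction

  // Fill the payload from the C model instead of SystemVerilog randomization
  function void build_from_c(int seed);
    void'(gen_payload(seed, payload));
  endfunction
endclass
```

What DPI-C does not standardize is how such foreign models plug into UVM phasing, configuration, and analysis, which is where a UVM-level inter-language interface could add value.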
Celebrating the 10th anniversary of the UVM means acknowledging what has been achieved, but also facing up to some challenges. The Accellera UVM working group and IEEE 1800.2 committee are staffed with volunteers who have given up their spare time to keep the standard moving forward. They need our collective support and input to help the verification community overcome the hurdles of migrating to the latest library versions and to define what additions need to be made to underpin the future of verification.
You might not be receiving an invite to an anniversary party, but please be on the lookout for invitations from the Accellera UVM working group to take part in consultation meetings and surveys over the next few months.