Why do we use drivers?

Hi, I’m a new verification engineer.

I am wondering why we use drivers in UVM.

I know the driver’s role, but I think it is easier to drive signals directly from the sequence.

If we put the virtual interface in the virtual sequencer and access the virtual sequencer from sequences through p_sequencer, then we can use the clocks and interface signals directly in the sequence.
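
Roughly what I mean is something like this (just a rough sketch; vif, my_vsequencer, and the signal names are placeholders for whatever is in my bench):

  class direct_drive_seq extends uvm_sequence #(uvm_sequence_item);
    `uvm_object_utils(direct_drive_seq)
    // virtual sequencer that holds the virtual interface handle
    `uvm_declare_p_sequencer(my_vsequencer)

    function new(string name = "direct_drive_seq");
      super.new(name);
    endfunction

    task body();
      // Drive the DUT pins directly through the virtual interface
      // stored in the virtual sequencer - no driver involved.
      @(posedge p_sequencer.vif.clk);
      p_sequencer.vif.valid <= 1'b1;
      p_sequencer.vif.data  <= 8'hA5;
      @(posedge p_sequencer.vif.clk);
      p_sequencer.vif.valid <= 1'b0;
    endtask
  endclass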

I think it is easier to test this way.

But many verification engineers use agents that include a driver.

Is it just a framework convention, or are there problems with using driving sequences?

Thank you

In reply to HanP:

Writing UVM tests is a software project. Splitting the sequence and driver into separate objects follows the separation of concerns design principle.

Typically your sequences generate the data and commands to be sent to the DUT at a high level of abstraction (the What), and the driver implements the protocol and timing that translate that high-level transaction into lower-level pin wiggles (the How).
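
A minimal sketch of that split, using placeholder names (my_item, my_seq, my_driver, my_if) rather than anyone's actual code:

  // The transaction: what gets sent, with no notion of timing or pins.
  class my_item extends uvm_sequence_item;
    rand bit [31:0] addr;
    rand bit [31:0] data;
    `uvm_object_utils(my_item)
    function new(string name = "my_item"); super.new(name); endfunction
  endclass

  // The "What": the sequence generates randomized transactions.
  class my_seq extends uvm_sequence #(my_item);
    `uvm_object_utils(my_seq)
    function new(string name = "my_seq"); super.new(name); endfunction
    task body();
      repeat (10) begin
        my_item item = my_item::type_id::create("item");
        start_item(item);
        if (!item.randomize()) `uvm_error("RAND", "randomize failed")
        finish_item(item);
      end
    endtask
  endclass

  // The "How": the driver owns the virtual interface, protocol, and timing.
  class my_driver extends uvm_driver #(my_item);
    `uvm_component_utils(my_driver)
    virtual my_if vif;
    function new(string name, uvm_component parent); super.new(name, parent); endfunction
    task run_phase(uvm_phase phase);
      forever begin
        seq_item_port.get_next_item(req);
        @(posedge vif.clk);        // protocol timing lives here,
        vif.addr  <= req.addr;     // not in the sequence
        vif.wdata <= req.data;
        vif.valid <= 1'b1;
        @(posedge vif.clk);
        vif.valid <= 1'b0;
        seq_item_port.item_done();
      end
    endtask
  endclass

If the interface protocol changes, only my_driver changes; the sequences keep generating the same my_item transactions.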

Here is a link to some more benefits: Separation of Concerns in Software Design - Alexey Naumov

In reply to dave_59:

Hi, Dave.

So if I understand you correctly, there is nothing wrong with the behavior of my approach, and the reason we use a driver is the UVM methodology?

In my UVM testbench, a virtual sequencer accessed from the sequence controls the DUT’s I/O and timing through interface signals, and there is no driver. But the test still finishes successfully.

I know my current UVM approach is unusual, but could it also cause problems in behavior?

I’m still a newbie, so I’m sorry if this sounds strange.

Thank you

In reply to HanP:

Reuse is one of the main benefits of using the UVM, and it is supported through the agent-based approach: an agent can be reused in another testbench simply with copy/paste, without any modifications. What you are doing is a traditional testbench approach that uses SystemVerilog and some UVM constructs without using the real power of the UVM.
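
For illustration only (the my_* names are placeholders), the agent is the unit of reuse because it packages the sequencer, driver, and monitor together with their connections:

  class my_agent extends uvm_agent;
    `uvm_component_utils(my_agent)

    my_sequencer sqr;
    my_driver    drv;
    my_monitor   mon;

    function new(string name, uvm_component parent);
      super.new(name, parent);
    endfunction

    function void build_phase(uvm_phase phase);
      super.build_phase(phase);
      mon = my_monitor::type_id::create("mon", this);
      if (get_is_active() == UVM_ACTIVE) begin
        sqr = my_sequencer::type_id::create("sqr", this);
        drv = my_driver::type_id::create("drv", this);
      end
    endfunction

    function void connect_phase(uvm_phase phase);
      if (get_is_active() == UVM_ACTIVE)
        drv.seq_item_port.connect(sqr.seq_item_export);
    endfunction
  endclass

Another environment can instantiate this agent as-is; a testbench that drives pins from sequences has to carry all of its interface handling along by hand instead.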

In reply to HanP:

With VHDL and the OSVVM methodology we do the same sort of separation of concerns; we just use slightly different terminology. Verification components (VCs) drive the interface behavior, and the test sequencer makes calls to a transaction API, which tells the verification component what signal wiggling to do.

In hardware verification, there is a long history of separating transaction initiation from interface stimulus generation - I started doing this in QuickSim gate simulations and continued when I started VHDL.

Why do we do this?

  • Maintenance. Ever had an interface specification change? If you have already coded thousands of interface actions by directly driving the I/O, it can be maddening to correct. With a VC/driver, you correct a couple of lines of code and you are done.
  • Readability. It is easier to see what is going on in the test sequencer - I like to think that if you explain what the transaction API does, then System and Software engineers should be able to understand the test case.
  • Adaptability. OSVVM has codified its transaction API for address-bus/memory-mapped interfaces (AXI, Avalon, Wishbone) and streaming interfaces (AxiStream, UART, …). Hence, to switch from Avalon to AXI in the testbench, you only need to switch the VC/driver, update the DUT for the new interface, and rerun the tests. Of course, any interface-specific tests need to be rewritten - for example, on AXI you need to test simultaneous transfers on both the Write and Read interfaces (it is split-transaction and they operate independently), whereas other interfaces may not have this sort of capability.
  • Adaptability Continued. The same sort of thing can be done when moving from RTL (unit) tests to core tests to full-chip tests. The address bus interface signaling may change as you move through the different levels of the design, but a write on the interface to Address AAA using data DDD still transfers the same information - so the VC/driver changes, but the test sequencing does not (see the sketch after this list).
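
The OSVVM code itself is VHDL, but the same idea can be sketched in UVM terms (all names here are placeholders, not anyone's actual library): the test sequence calls a small transaction API and never touches pins, so only the driver/VC underneath knows which bus it is talking to.

  // Transaction used by the API below.
  class bus_item extends uvm_sequence_item;
    rand bit [31:0] addr;
    rand bit [31:0] data;
    `uvm_object_utils(bus_item)
    function new(string name = "bus_item"); super.new(name); endfunction
  endclass

  // Transaction API layered on a sequence: write(addr, data).
  class bus_base_seq extends uvm_sequence #(bus_item);
    `uvm_object_utils(bus_base_seq)
    function new(string name = "bus_base_seq"); super.new(name); endfunction

    task write(bit [31:0] addr, bit [31:0] data);
      bus_item item = bus_item::type_id::create("item");
      start_item(item);
      item.addr = addr;
      item.data = data;
      finish_item(item);
    endtask
  endclass

  // The test intent reads like API calls and is interface-independent;
  // switching from Avalon to AXI means switching the driver, not this code.
  class my_test_seq extends bus_base_seq;
    `uvm_object_utils(my_test_seq)
    function new(string name = "my_test_seq"); super.new(name); endfunction
    task body();
      write(32'h0000_0040, 32'hDEAD_BEEF);
      write(32'h0000_0044, 32'h0000_0001);
    endtask
  endclass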