Why bother with agents for miscellaneous IO?

After many years, projects and clients, I have yet to find a persuasive argument for using an agent, and all the rigmarole that implies, simply to drive and monitor those miscellaneous IO signals that don’t form part of any particular interface and don’t have a defined protocol (control and status bits/bytes etc.). I have used and/or coded multiple variants of such an agent, and in all cases the same pattern emerges (a sketch of the resulting transaction follows the list):

  • The transaction consists of a WRITE and a READ command (and sometimes a WAIT_FOR command, but I prefer to do that in a sequence).
  • The driver either drives the signal value for a WRITE or just passes back the current value for a READ.
  • The monitor outputs a READ transaction every time a signal changes state.
  • The top-level interface is parameterized to the width of the signals and put into the ConfigDB to be fetched by the agent in the usual way.
  • Top-level virtual sequences run either a WRITE or a READ transaction to drive and/or check the signals.
  • Scoreboards implement analysis ports to monitor the signal states.
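
For concreteness, the transaction class ends up looking something like the sketch below (the names are illustrative, not from any particular library):

```systemverilog
import uvm_pkg::*;
`include "uvm_macros.svh"

// A minimal misc-IO transaction: a command plus a value.
class misc_io_item #(int WIDTH = 8) extends uvm_sequence_item;
  typedef enum {WRITE, READ} cmd_e;

  rand cmd_e             cmd;
  rand logic [WIDTH-1:0] value;  // driven for a WRITE, returned for a READ

  `uvm_object_param_utils(misc_io_item #(WIDTH))

  function new(string name = "misc_io_item");
    super.new(name);
  endfunction
endclass
```

Add to that a driver, a monitor, a sequencer and the agent itself, all for a handful of wires.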

This is a whole heap of hassle and code (surely a tautology), and it’s even more of a fiddle for virtual sequences or scoreboards that want to wait on events, which happens a lot!

Why not just have the scoreboard or sequence get the VIF straight out of the ConfigDB? If you’re like me and like to use a virtual sequencer to run virtual sequences, you can even fetch the VIF in its build_phase and keep the handle there, accessible via p_sequencer. All my interfaces use a “set” and “get” task to drive and return the signal values, so these can be called directly by the virtual sequence, and I always put a “changed” event in the interface, which makes waiting on events a lot simpler too.
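
Concretely, the pattern looks something like this (just a sketch - the names are my own convention and I’ve fixed the width at 8 for brevity):

```systemverilog
import uvm_pkg::*;
`include "uvm_macros.svh"

// The interface owns the signal, the access tasks and the event.
interface misc_io_if #(int WIDTH = 8) ();
  logic [WIDTH-1:0] sig;
  event             changed;

  // Fire the event on any transition, whoever caused it.
  always @(sig) -> changed;

  task set(input logic [WIDTH-1:0] value);
    sig <= value;
  endtask

  task get(output logic [WIDTH-1:0] value);
    value = sig;
  endtask
endinterface

// The virtual sequencer fetches the VIF once in build_phase and keeps
// the handle for virtual sequences to use.
class top_vsequencer extends uvm_sequencer;
  `uvm_component_utils(top_vsequencer)

  virtual misc_io_if #(8) misc_vif;

  function new(string name, uvm_component parent);
    super.new(name, parent);
  endfunction

  function void build_phase(uvm_phase phase);
    super.build_phase(phase);
    if (!uvm_config_db #(virtual misc_io_if #(8))::get(this, "", "misc_vif", misc_vif))
      `uvm_fatal("NOVIF", "misc_io_if not found in config DB")
  endfunction
endclass
```

A virtual sequence can then call p_sequencer.misc_vif.set(...) or block on @(p_sequencer.misc_vif.changed) with no driver, monitor or analysis port in the loop.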

So, anyone got a good reason to use an agent instead of just the VIF? I’ve never yet had reason to extend such an agent or do a factory override on one. Please let me have any thoughts - I’d love a reason to believe that my efforts have been to some purpose after all.

In reply to James Ferris:

This is just TLM purism, usually defended with “the methodology says that scoreboards have to be fed transactions”, etc. It’s similar to something I once saw: a clock transaction that got sent whenever a clock tick was seen. The TB writer wanted to implement some cycle-accurate modeling of something (a whole different discussion in itself) and didn’t use an interface that contained the clock, because in UVM everything has to be transactions.

In my interpretation, transactions are only useful when you can abstract the behavior of multiple signals over time. For example, for an APB transfer I don’t care that there are PSEL, PENABLE and PREADY signals or that data is driven on either PWDATA or PRDATA. In high-level code I just care that there is data and whether it’s a read or a write. Aside from the obvious advantage of making things easier to reason about, it’s also more efficient to work with transactions (from a simulation-speed point of view). All of this falls apart if you don’t have signal protocols and you’re trying to shoehorn transactions where they don’t belong (see the clock transaction example above - no added abstraction and obviously slower).
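
To illustrate, the transaction only needs to capture the abstract view (a sketch with illustrative names - PSEL, PENABLE and PREADY would live only in the driver and monitor):

```systemverilog
import uvm_pkg::*;
`include "uvm_macros.svh"

// An APB transfer as high-level code sees it: data plus direction.
class apb_transfer extends uvm_sequence_item;
  typedef enum {READ, WRITE} dir_e;

  rand dir_e      dir;
  rand bit [31:0] addr;
  rand bit [31:0] data;  // maps to PWDATA on writes, PRDATA on reads

  `uvm_object_utils(apb_transfer)

  function new(string name = "apb_transfer");
    super.new(name);
  endfunction
endclass
```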

In your case, maybe you could get the first benefit (abstraction) if you don’t think in terms of signal changes, but assign semantic meaning to those changes. For control signals this means something like “do_command_X” instead of “drive signal ‘cmd’ high for one cycle”. For status signals you would have something like “design responds with X” instead of “‘rsp_valid’ signal goes high and ‘rsp’ signal is driven to X”.
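
As a sketch of what I mean (hypothetical names, borrowing the ‘cmd’, ‘rsp_valid’ and ‘rsp’ signals from above):

```systemverilog
// The interface exports commands, not signal wiggles.
interface cmd_if (input logic clk);
  logic       cmd;
  logic       rsp_valid;
  logic [7:0] rsp;

  // "do_command_X" instead of "drive signal 'cmd' high for one cycle".
  task do_command_x();
    cmd <= 1'b1;
    @(posedge clk);
    cmd <= 1'b0;
  endtask

  // "design responds with X" instead of "'rsp_valid' goes high and
  // 'rsp' is driven to X".
  task wait_response(output logic [7:0] value);
    @(posedge clk iff rsp_valid);
    value = rsp;
  endtask
endinterface
```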

In reply to Tudor Timi:

Thanks Tudor.

I agree with all you said, but it’s good to hear it from others. The idea that “scoreboards have to be fed transactions” is reasonable - but only to a point. In my experience, scoreboards of the “tick off expected vs simulated events” variety lend themselves quite well to UVM-style abstraction, and I quite often use them to check off bus transactions, but they always have an accompanying “predictor” that produces the “expected” transactions, and these I usually find are not sensibly UVMifiable. Again, for single I/O pins with no protocol, prediction is just “pin should go high soon”, and the overhead (in both simulation and coding time) of converting all these transitions into expected and simulated transactions seems pointless.
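
For such a pin, the whole “prediction” can be a direct check on the VIF - a sketch reusing the misc_io_if from my earlier post (the timeout is arbitrary, and it assumes the usual uvm_pkg import and uvm_macros.svh include):

```systemverilog
// Check "pin should go high soon" without any transactions in between.
task automatic check_pin_goes_high(virtual misc_io_if #(1) vif,
                                   time timeout = 1us);
  fork begin
    fork
      wait (vif.sig === 1'b1);
      begin
        #timeout;
        `uvm_error("PIN_CHECK", "pin did not go high in time")
      end
    join_any
    disable fork;  // the outer fork...join isolates this from sibling processes
  end join
endtask
```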

The abstraction to “commands” is a good one, but I do that by writing sequences. For the UVMophiles, these can run on an agent, but they could also be virtual sequences that read/write the I/O directly (via read/write tasks in the interface, which can be fetched from the config DB). This also allows for the abstraction of “interesting” data sequences (e.g. a 1:10 PWM sequence, a bit-walking sequence etc.), so you can keep the high-level abstraction without the UVM baggage and, as far as I can see, without any loss of portability.
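
For example, a bit-walking sequence done this way might look like the following sketch (it assumes the misc_io_if from earlier was set into the config DB under a global “misc_vif” key, and the hold time per pattern is arbitrary):

```systemverilog
// Walk a single 1 across the bus, straight through the VIF.
class walking_ones_vseq extends uvm_sequence;
  `uvm_object_utils(walking_ones_vseq)

  function new(string name = "walking_ones_vseq");
    super.new(name);
  endfunction

  task body();
    virtual misc_io_if #(8) vif;
    if (!uvm_config_db #(virtual misc_io_if #(8))::get(null, "", "misc_vif", vif))
      `uvm_fatal("NOVIF", "misc_io_if not found in config DB")
    for (int i = 0; i < 8; i++) begin
      vif.set(8'h01 << i);  // one hot bit at position i
      #10ns;                // hold each pattern (arbitrary in this sketch)
    end
  endtask
endclass
```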