Multiple analysis ports to single implementation

Hi,

I have an arbiter with 16 data-valid inputs that I want to verify. I’m trying to use 16 data-valid agents that will send the captured data to the scoreboard through analysis ports, while being able to distinguish which input the data came from, so I can store it in different queues for further operations.

I know I can have a different implementation for each agent, but they would all be the same, and I’m trying to avoid code duplication.

Is there a way to implement this effectively with the data-valid agents and a single implementation?
Maybe use a different TLM component?

Thanks

Hello Avichay,

I’m not sure how this could be achieved with any TLM component other than analysis ports—perhaps someone else might have additional suggestions.

Here’s what I would propose:

Your “data-valid agent” should include a member variable that represents the agent index, for example, m_agent_index.
In mesh/fabric architectures, this could be split into m_agent_x_index and m_agent_y_index.

When the monitor component collects a “data valid” transaction, it should assign its agent’s index to the collected transaction (e.g., data_valid_trans.m_agent_index = this.m_agent_index).
This index is a non-physical field—essentially auxiliary data attached to the transaction.
(I’m borrowing the term non-physical from E/Specman)

In the scoreboard verification component, you can then use this non-physical field of the actual monitored transaction, which is passed through the analysis_port, to determine which input of the arbiter generated the transaction.
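A minimal sketch of this idea, assuming illustrative names (`data_valid_trans`, `m_agent_index`, and the 16-entry queue array are not from any real agent):

```systemverilog
// Transaction carries a non-physical index field.
class data_valid_trans extends uvm_sequence_item;
  `uvm_object_utils(data_valid_trans)
  bit [63:0] data;
  int        m_agent_index; // auxiliary data, not driven on any interface
  function new(string name = "data_valid_trans");
    super.new(name);
  endfunction
endclass

// In the monitor, tag each collected transaction before broadcasting:
//   trans.m_agent_index = m_agent_index; // index configured per agent
//   ap.write(trans);

// Scoreboard: a single analysis imp, per-input queues selected by the index.
class scbd extends uvm_scoreboard;
  `uvm_component_utils(scbd)
  uvm_analysis_imp #(data_valid_trans, scbd) a_imp;
  data_valid_trans queues[16][$]; // one queue per arbiter input

  function new(string name, uvm_component parent);
    super.new(name, parent);
    a_imp = new("a_imp", this);
  endfunction

  function void write(data_valid_trans t);
    queues[t.m_agent_index].push_back(t);
  endfunction
endclass
```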

Does this approach make sense?

Another option would be to add instrumentation RTL code (non-synthesizable RTL) to retrieve the arbiter input index, but that would require effort from more than one engineer. :slightly_smiling_face:


Thanks @MichaelP

This makes sense, and I had actually thought of expanding the data to include the index hard-coded in the testbench: if I have 16 agents that each collect 64 bits of data, I would expand the structure to 68 bits (64 for the data and 4 more for the index). This way I don’t change the agent behavior.

But since I’m working to make my environment modular, and I want to be able to integrate it into other environments, I tried to avoid such a solution.

As for other TLM components, I’m not really familiar with the uvm_tlm_analysis_fifo or uvm_tlm_fifo components, but I thought I could use them, one instance per agent, and have a task managing their arbitration. What do you think?
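If you go the uvm_tlm_analysis_fifo route, a sketch could look like the following (one fifo per agent, so the fifo index itself identifies the source and the transaction needs no extra field; `data_valid_trans` and the other names are illustrative assumptions):

```systemverilog
class scbd extends uvm_scoreboard;
  `uvm_component_utils(scbd)
  // One analysis fifo per arbiter input. Connect each agent's monitor
  // analysis port to fifos[i].analysis_export in the env's connect_phase.
  uvm_tlm_analysis_fifo #(data_valid_trans) fifos[16];

  function new(string name, uvm_component parent);
    super.new(name, parent);
    foreach (fifos[i])
      fifos[i] = new($sformatf("fifo_%0d", i), this);
  endfunction

  task run_phase(uvm_phase phase);
    foreach (fifos[i]) begin
      automatic int idx = i;
      fork
        forever begin
          data_valid_trans t;
          fifos[idx].get(t); // blocking get; fifo index = input index
          // ... process t for arbiter input idx
        end
      join_none
    end
  endtask
endclass
```

One consumer process per fifo (via fork...join_none) lets you model any arbitration between the inputs in the scoreboard itself.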

This solution requires a change within the agent, but it doesn’t seem to harm other instances which already exist.

Thanks @gkaragat

Typically we connect one analysis port to zero or more analysis imp ports.

In your case, you could simply connect each of the analysis ports (within each agent) to a single analysis imp in the scoreboard.

Since the parameterization of each analysis port and the analysis imp would be the same, it should work. Within the write function, the id field (in the parameterized transaction class) would help identify the originating agent.
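The fan-in wiring itself is just a loop in the env's connect_phase; a sketch, assuming an `agents[16]` array and a scoreboard handle `m_scbd` (both names hypothetical):

```systemverilog
function void connect_phase(uvm_phase phase);
  super.connect_phase(phase);
  // All 16 analysis ports fan in to the single analysis imp;
  // the scoreboard's write() runs once per broadcast transaction.
  foreach (agents[i])
    agents[i].monitor.ap.connect(m_scbd.a_imp);
endfunction
```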

@AGS91 I’m not sure I understood your point. My issue was with identifying the originating agent. How could I get the id? Could you add an example?

I was under the impression that you wanted to connect the 16 analysis ports to a single analysis imp.

There are 2 solutions that I can think of:

(1) As Michael suggested above, the transaction type which is broadcast via ap.write() should have a property named id. Within the write function in the analysis imp, you could check this id field to determine the source:

```systemverilog
class transaction extends uvm_sequence_item;
  int id;
  // ... other fields, utils macro, constructor
endclass

// Within each of the agents:
uvm_analysis_port #(transaction) ap;

// Prior to broadcasting via the analysis port, create a transaction
// object and assign its id field.

// Within your scoreboard:
uvm_analysis_imp #(transaction, scbd) a_imp;

function void write(transaction t);
  case (t.id)
    // one branch per originating agent
  endcase
endfunction
```

(2) Each of the 16 agents would have a unique name (as the 1st argument to the create function). You could use a suffix _0 to _15 for each of them and then use the same suffix as an argument while creating the transaction. Within the write function you could call txn.get_name() to identify the source:

```systemverilog
// Within the env, each agent is created with a unique suffix "_N"
// e.g.: agent0 = agent::type_id::create("agent_0", this);

// When you create the transaction before broadcasting it via ap.write(),
// use the same suffix as the unique identifier:
txn = transaction::type_id::create("txn_0", this);

// Assign properties
ap.write(txn);

// Now within the scoreboard's write function:
function void write(transaction t);
  case (t.get_name())
    // one branch per name suffix
  endcase
endfunction
```

I would suggest first reading the UVM source code and documentation of these classes (`uvm_tlm_analysis_fifo` and `uvm_tlm_fifo`).

Adding links to the official Accellera UVM release on GitHub, IEEE 1800.2-2020 (source code):

Documentation compliant with the UVM 1.2 release:

Summary by one of the LLM chatbots:

The main differences between uvm_tlm_fifo and uvm_tlm_analysis_fifo are:

uvm_tlm_fifo

  • Blocking communication: Uses put() and get() methods that block when FIFO is full or empty

  • Point-to-point: Connects one producer to one consumer

  • Buffered storage: Stores transactions with configurable size (default 1)

  • Consumes data: Once get() is called, the transaction is removed from FIFO

  • Typical use: Between sequencer and driver, or for pipelined communication

uvm_tlm_analysis_fifo

  • Non-blocking broadcast: Uses write() method (never blocks)

  • One-to-many: Can connect to multiple subscribers via analysis_export

  • Unlimited buffered storage: No size limit (grows as needed)

  • Non-consuming: Transactions remain until explicitly retrieved via get()

  • Typical use: Between monitor and scoreboard/coverage collectors for broadcasting

Key takeaway: Use uvm_tlm_fifo for controlled data flow between two components. Use uvm_tlm_analysis_fifo for broadcasting monitored data to multiple analysis components.
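As a quick illustration of the behavioral difference (a sketch only, not a compilable unit; `txn_t` is a hypothetical transaction type):

```systemverilog
// Declared and constructed inside a component:
uvm_tlm_fifo          #(txn_t) fifo  = new("fifo", this, 4); // bounded, size 4
uvm_tlm_analysis_fifo #(txn_t) afifo = new("afifo", this);   // unbounded

// Producer side:
fifo.put(t);    // blocks while the fifo already holds 4 entries
afifo.write(t); // never blocks; the fifo grows as needed

// Consumer side (both types):
fifo.get(t);    // blocks until an entry is available, then removes it
afifo.get(t);   // same blocking get/remove semantics
```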

My personal suggestion based on my experience and the “Key takeaway” above:

  1. Adding to your data-valid agent what I have suggested is really a minor change in the agent.
  2. Each monitor of the 16 agents will broadcast its monitored/collected transaction using a write() method to a uvm_tlm_analysis_fifo in your scoreboard/reference-model code.
    You don’t really need blocking-relationship modeling here in the verification
    scoreboard/reference-model.

Please update us if it worked, or in case you have further problems/issues.