uvm_analysis_imp_decl write function ordering

I have a scoreboard with a LOT of agents connected to it, that can be broadly categorized into three sections:

  • Input to RTL
  • RTL internal process
  • Output from RTL

I have connected all of these up with uvm_analysis_imp_decl write functions, but now I’m facing some race condition scenarios. In the RTL a lot of the logic is combinatorial, so if an Input happens at the same time as a Process, the Input is taken into account. In my scoreboard I need to ensure that the Input write functions are called before the Process functions, but I cannot work out how to do this using the uvm_analysis_imp_decl macros.
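For reference, the hookup is the usual imp_decl pattern, roughly like this (the transaction and component names below are placeholders, not my actual code):

```systemverilog
import uvm_pkg::*;
`include "uvm_macros.svh"

// Placeholder transaction types, just for this sketch
typedef uvm_sequence_item input_txn;
typedef uvm_sequence_item process_txn;
typedef uvm_sequence_item output_txn;

`uvm_analysis_imp_decl(_input)
`uvm_analysis_imp_decl(_process)
`uvm_analysis_imp_decl(_output)

class my_scoreboard extends uvm_scoreboard;
  `uvm_component_utils(my_scoreboard)

  uvm_analysis_imp_input   #(input_txn,   my_scoreboard) input_imp;
  uvm_analysis_imp_process #(process_txn, my_scoreboard) process_imp;
  uvm_analysis_imp_output  #(output_txn,  my_scoreboard) output_imp;

  function new(string name, uvm_component parent);
    super.new(name, parent);
    input_imp   = new("input_imp",   this);
    process_imp = new("process_imp", this);
    output_imp  = new("output_imp",  this);
  endfunction

  // Each write_* is called directly by its monitor's analysis_port.write();
  // when two monitors publish in the same time step, the order in which
  // these functions execute is not under the scoreboard's control.
  function void write_input(input_txn t);     endfunction
  function void write_process(process_txn t); endfunction
  function void write_output(output_txn t);   endfunction
endclass
```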

Given the amount of work it would require, I would prefer NOT to switch over to using exports/tlm_fifos and forever begin loops.

Any suggestions?

In reply to Cmdr_Vimes:

There are a lot of missing details about the timing. Let’s say you have a total of N agents connected to your scoreboard. Will all N agents send one transaction for every check your scoreboard needs to perform? If that’s the case, you can set up a counter that each write function increments. The write function call that reaches N can execute your check and reset the counter.
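A minimal sketch of that idea, assuming each of the N agents delivers exactly one transaction per check (NUM_AGENTS, the transaction types, and do_check are placeholders):

```systemverilog
// Inside the scoreboard: every write_* bumps a shared counter; the call that
// brings it to NUM_AGENTS performs the check and resets the counter.
localparam int unsigned NUM_AGENTS = 8;  // total number of connected agents (assumed)
int unsigned rx_count = 0;

function void write_input(input_txn t);
  // ... store t for the upcoming check ...
  count_and_check();
endfunction

function void write_process(process_txn t);
  // ... store t for the upcoming check ...
  count_and_check();
endfunction

function void count_and_check();
  rx_count++;
  if (rx_count == NUM_AGENTS) begin
    do_check();   // all N transactions for this round have now arrived
    rx_count = 0;
  end
endfunction
```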

Otherwise, I can’t see how you can do this without using an analysis FIFO, given the details provided.

In reply to Cmdr_Vimes:

The race conditions you are facing might be caused by a poor architecture.
You say you have a lot of agents in the categories you mention above. Your agents should not be organized with respect to inputs and outputs; they should be specified with respect to functional interfaces. The key question for your UVM testbench is: how many functional interfaces do you have? Each agent should correspond to one of these interfaces. Then you might avoid your races.

The best way, I think, is to use a uvm_tlm_analysis_fifo in the scoreboard.
Connect all of the process analysis ports to the FIFO so that no transactions are lost.
Then, once the input write functions have executed in the scoreboard, start getting the process transactions from the TLM FIFO and parse them.
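A rough sketch of that arrangement (class, FIFO, and method names are invented for illustration):

```systemverilog
import uvm_pkg::*;
`include "uvm_macros.svh"

typedef uvm_sequence_item process_txn;  // placeholder transaction type

class my_scoreboard extends uvm_scoreboard;
  `uvm_component_utils(my_scoreboard)

  // The "process" monitors publish into this FIFO instead of an imp,
  // so nothing is lost while the input write functions run.
  uvm_tlm_analysis_fifo #(process_txn) process_fifo;

  function new(string name, uvm_component parent);
    super.new(name, parent);
    process_fifo = new("process_fifo", this);
  endfunction

  // In the env's connect_phase (for reference):
  //   process_monitor.ap.connect(scoreboard.process_fifo.analysis_export);

  task run_phase(uvm_phase phase);
    process_txn t;
    forever begin
      // get() wakes up in a later delta than the write() that stored t, which
      // in practice lets same-time input write functions run first; add a flag
      // or uvm_event if you need a hard guarantee.
      process_fifo.get(t);
      handle_process(t);
    end
  endtask

  virtual function void handle_process(process_txn t);
    // placeholder: uses the state already updated by the input write functions
  endfunction
endclass
```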

Another way is to use uvm_events: when the inputs are done, trigger an event, and only then let the monitor send the RTL internal process transactions.
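Roughly like this, as fragments from the two sides (the event name and where exactly you trigger it are up to you):

```systemverilog
// Shared event from the global pool, visible to both the scoreboard and the monitor
uvm_event inputs_done = uvm_event_pool::get_global("inputs_done");

// Input side: once the inputs for this time step have been handled
inputs_done.trigger();

// Process monitor, before publishing (inside its run_phase)
inputs_done.wait_ptrigger();  // also catches a trigger earlier in the same time step
ap.write(t);
```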

My RTL has multiple input and output buses, some using proper communication protocols and some just being wired signal buses (interrupts, status registers, etc.). Each bus has its own agent; communication bus monitors report transactions to the scoreboard on the usual valid/ready combination, while wire monitors report transactions on a change in the wires.

All of that logic feeds into some non-deterministic round robin type request handler, so in order to make the scoreboard handle all this I have agents at a key stage in the middle of the design telling the scoreboard exactly what is being processed. I wish I could blackbox this properly but I can’t.

The scoreboard observes inputs and processes and creates expected output transactions, and then compares those when the outputs occur. The issue I am facing is that if the scoreboard sees certain inputs BEFORE it sees the processes then it predicts the outputs incorrectly.

I first turned all of the imp_decl write functions into tasks (I got lots of compilation warnings from the simulator) and added small delays to stagger the triggers so that they go inputs > processes > outputs, but this caused other issues to appear in the scoreboard. I backed out of this fix (though in retrospect I believe I had the timescale set incorrectly, so I shall try this again in future).

The current solution I have is to connect the agents to slightly delayed clocks. Active agents operate on the same clock as the RTL, while passive agents operate on the delayed clock. This ensures that my inputs are seen before my processes. This is working so far.
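Roughly along these lines in the testbench top (the delay value and names are arbitrary):

```systemverilog
// tb_top: the RTL and active agents use clk; the passive "process"/"output"
// monitors sample on a slightly skewed copy, so inputs are always observed first.
logic clk;
logic clk_dly;

always @(clk) clk_dly <= #1ns clk;  // transport-style skew of the monitor clock
```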

In reply to Cmdr_Vimes:

It is unclear to me what you are explaining here, and it is definitely a bad idea to change the write functions into tasks.
I also do not understand what kind of race conditions you have on the transaction level. Race conditions are limited to the RTL or gate level.
Transactions have no timing; they have only a determined order. You have to make sure you do not lose any transactions and you do not change their order. Using a uvm_tlm_analysis_fifo helps you meet these two requirements.

In reply to chr_sue:

Basically I can have two transactions occurring at exactly the same time, so they get processed by my scoreboard at the same time… but one transaction can set a variable in my scoreboard and the other uses that variable, so the exact order in which the transactions are processed affects the output.

I think “race condition” was a bit vague; it appears to be a simulation delta-cycle ordering issue.

In reply to Cmdr_Vimes:

In this situation you have to store the transactions, waiting until the configuration information itself is available. Then you can start the compare process by doing get() on both stores.
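Something along these lines, using two analysis FIFOs as the stores (transaction and method names are illustrative):

```systemverilog
// Inside the scoreboard: buffer both streams and only consume the dependent
// transaction after the configuration transaction has been applied.
uvm_tlm_analysis_fifo #(cfg_txn)  cfg_fifo;   // the transaction that sets the variable
uvm_tlm_analysis_fifo #(data_txn) data_fifo;  // the transaction that uses it

task run_phase(uvm_phase phase);
  cfg_txn  c;
  data_txn d;
  forever begin
    cfg_fifo.get(c);    // wait until the configuration information is available
    apply_cfg(c);       // update the scoreboard variable
    data_fifo.get(d);   // now the dependent transaction can be processed safely
    do_compare(d);
  end
endtask
```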