In reply to chr_sue:
OK. Let's talk first about the architecture.
When using randomized input patterns, it is costly to generate them separately and store them in a file.
What I am proposing is the following:
(1) We are using sequences to generate seq_items with randomized (constrained) data. These data are used in the driver to stimulate the DUT. Additionally, we send these items through an analysis port in the driver to the scoreboard (see the driver sketch after this list).
(2) In the scoreboard we store these input seq_items/transactions in an analysis FIFO. This FIFO keeps the input transactions in the right order.
(3) In the scoreboard component we call the reference model, which is C/C++ code imported through the DPI. When we get a transaction from the DUT, we do a get/try_get on the FIFO holding the input transactions and send this data to the reference model. The expected data are generated in zero simulation time and can be compared with the data coming from the DUT.
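To illustrate step (1), here is a minimal driver sketch. It assumes the input sequence item class is called in_item (matching the scoreboard below); the names my_driver and ap are placeholders, and the actual pin-level driving code is omitted:

class my_driver extends uvm_driver #(in_item);
  `uvm_component_utils(my_driver)
  uvm_analysis_port #(in_item) ap; // broadcasts every driven input item to the scoreboard

  function new(string name, uvm_component parent);
    super.new(name, parent);
    ap = new("ap", this);
  endfunction

  virtual task run_phase(uvm_phase phase);
    forever begin
      seq_item_port.get_next_item(req); // randomized item from the sequence
      // drive req onto the DUT interface here (pin-level code omitted)
      ap.write(req);                    // the same input item goes to the scoreboard FIFO
      seq_item_port.item_done();
    end
  endtask
endclass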
The scoreboard code could look like this:
import "DPI-C" function void ref_model( <arguments>, inputs/outputs);
//------------------------------------------------------------------------------
class my_sb extends uvm_subscriber #(out_item); // out_item comes from the DUT
//------------------------------------------------------------------------------
`uvm_component_utils(my_sb)
uvm_tlm_analysis_fifo #(in_item) in_fifo; // connected to the driver's analysis_port
.....
virtual function void process_refs(output <data type> ref_item);
in_item i_item; // input transaction taken from in_fifo
......
if (in_fifo.try_get(i_item)) begin // try_get is non-blocking
....
ref_model(<arguments>);
end
endfunction
virtual function void write(out_item t);
<data type> result;
...
process_refs(result);
...
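// here: compare the expected result with the DUT transaction t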
endfunction
endclass
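To tie the pieces together, a minimal sketch of the environment connections could look like this. It assumes an agent containing the driver and a monitor with an analysis port ap for the DUT output transactions (these names are placeholders); the monitor feeds the subscriber's built-in analysis_export, which calls write(), and the driver's analysis port fills in_fifo:

class my_env extends uvm_env;
  `uvm_component_utils(my_env)
  my_agent agent; // hypothetical agent containing my_driver and a monitor
  my_sb    sb;

  function new(string name, uvm_component parent);
    super.new(name, parent);
  endfunction

  virtual function void build_phase(uvm_phase phase);
    agent = my_agent::type_id::create("agent", this);
    sb    = my_sb::type_id::create("sb", this);
  endfunction

  virtual function void connect_phase(uvm_phase phase);
    // DUT output transactions from the monitor trigger my_sb::write()
    agent.monitor.ap.connect(sb.analysis_export);
    // input transactions from the driver are stored in the scoreboard FIFO
    agent.driver.ap.connect(sb.in_fifo.analysis_export);
  endfunction
endclass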
I know this is very rudimentary code. If you have more questions or need help, you can share your code with me privately by email:
christoph@christoph-suehnel.de