Issue with the use of analysis FIFO

Hi All,

This is a strange issue I found with the analysis FIFO.
Possibly it will also happen with the simple TLM FIFO.

I have a component A with an analysis port that is connected to an analysis FIFO inside another component B.
Whenever a transaction is sent through the analysis port, it is received in component B successfully.

But after returning from the write() call, component A makes some changes to the object it sent. This should not be reflected in component B's received transaction.

Yet it does happen: any change component A makes to the object after write() is directly reflected in component B's object.

The code is somewhat like this.

class item extends ovm_transaction;
  int d;

  `ovm_object_utils_begin(item)
    `ovm_field_int(d, OVM_ALL_ON)
  `ovm_object_utils_end

  // constructor
  function new (string name = "simple_item");
    super.new(name);
  endfunction : new

  function bit comp (item t);
    return compare(t);
  endfunction : comp
endclass : item
class A extends ovm_component;
  ovm_analysis_port #(item) port1;
  item i1;

  function new (string name, ovm_component parent);
    super.new(name, parent);
    port1 = new("port1", this);
  endfunction

  task run;
    i1 = new("i1");
    i1.d = 16'h0055;
    port1.write(i1);
    #10 i1.d = 16'h0199;
  endtask
endclass
class B extends ovm_component;
  tlm_analysis_fifo #(item) fifo;
  item a;

  function new (string name, ovm_component parent);
    super.new(name, parent);
    fifo = new("fifo", this);
  endfunction

  task run;
    forever begin
      fifo.get(a);
      #10 $display("a data is %0h", a.d);
    end
  endtask : run
endclass

The value of a.d displayed is 16’h0199 instead of 16’h0055.

Thanks in advance.

If you put a $display right after the fifo.get and before the #10 delay, I’ll bet you see the value of a.d as 16’h0055.
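Something like this in B::run should show it (just the extra $display added to your original loop):

  fifo.get(a);
  $display("a data right after get is %0h", a.d);  // expect 16'h0055 here
  #10 $display("a data is %0h", a.d);              // 16'h0199, after A has modified i1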

When the type of the object to be stored in a fifo (i.e. the parameter) is a class type, then when you put something into the fifo you are putting a handle to the object, not a copy of it. Thus when it is retrieved on the other side, in class B in this case, both the producer and consumer have the same handle to the object. That means if one component modifies the object the other component will see the modifications. In some cases that is desired, but in most it is not. You must take care to isolate components from this problem when you expect that the object will be modified.

The general principle is called Copy on Write, or COW for short. The idea is that anytime you modify an object you must make a copy of it first. The component that knows about the modifications should do the copying. In your case the producer knows that it will modify the object after it sends it to B via the analysis fifo. So it should make a duplicate of the object and send the duplicate, thus isolating component B from having to worry about whether or not the object might be modified.

In A::run(), before the call to port1.write() you should make a copy of i1. To do this you will need to implement clone() or use the field automation macros to build an implementation for you. Then your code might look something like this:

task run;
  item i1_copy;

  i1 = new();
  i1.d = 16'h0055;
  $cast(i1_copy, i1.clone());  // clone() returns an ovm_object, so cast it back to item
  port1.write(i1_copy);
  #10 i1.d = 16'h0199;
endtask

The cloned copy will be sent to component B through port1. Later when you modify i1 the changes will not be visible in component B.

– Mark

Thanks Mark. I got the point.

That means if one component modifies the object the other component will see the modifications. In some cases that is desired, but in most it is not. You must take care to isolate components from this problem when you expect that the object will be modified.

In my opinion, this kind of behavior is never required.
A transaction should only be shared between two components when they explicitly intend to share it.

So I think this is something that should be taken care of by the TLM mechanism.
By default it should behave the way that is needed in most cases.

Anyways I enjoyed this learning. :)

Another way is to use begin_tr and end_tr.

i) Whenever there is a begin_tr, create a new transaction using new().
ii) Right after end_tr, use port.write(tr) to broadcast the transaction (see the sketch below).
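
A minimal sketch of that flow, assuming a producer component along these lines (the my_producer name and the repeat/#10 stimulus are placeholders, not from this thread):

  class my_producer extends ovm_component;
    ovm_analysis_port #(item) ap;

    function new (string name, ovm_component parent);
      super.new(name, parent);
      ap = new("ap", this);
    endfunction

    task run;
      item tr;
      repeat (4) begin
        tr = new("tr");            // i) fresh object for every transaction
        void'(tr.begin_tr());      // mark the start of the transaction
        tr.d = $urandom_range(0, 255);
        #10;                       // stand-in for actually driving the item
        tr.end_tr();               // mark the end of the transaction
        ap.write(tr);              // ii) broadcast after end_tr; tr is never reused
      end
    endtask
  endclass

Because a new object is constructed each time around the loop, a subscriber that stored the previous handle never sees it change.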

Kal

The default implementation holds only one copy of the item, which is the memory-optimized implementation.

I think that's what is required. If we need an extra copy, then we have to take care of it ourselves.

Regards,
Kinjal

Hi Chip Maker,

Another way is to use begin_tr and end_tr.
i) Whenever there is a begin_tr, create a new transaction using new().
ii) Right after end_tr, use port.write(tr) to broadcast the transaction.

begin_tr & end_tr are used for transaction recording. How can they be used to create a new transaction? Please explain with an example.

Regards,
Vishnu

In my opinion, this kind of behavior is never required.
A transaction should only be shared between two components when they explicitly intend to share it.
So I think this is something that should be taken care of by the TLM mechanism.
By default it should behave the way that is needed in most cases.

There are many cases where cloning of a transaction is not required. An analysis port typically broadcasts a transaction to many subscribers, and each may put that transaction into a FIFO. No cloning is needed unless one of those subscribers modifies the transaction, which is usually not the case. Arbitration is another example - transactions come in on one FIFO and go out on another with no need for cloning.
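
For example, the wiring in an env's connect() might look like this (a_inst, sb_fifo and cov_fifo are hypothetical names, not from this thread); both FIFOs end up holding the very same handle, which is fine as long as neither consumer modifies it:

  a_inst.port1.connect(sb_fifo.analysis_export);
  a_inst.port1.connect(cov_fifo.analysis_export);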

So you can follow the Manual Object Clone On Write (Mo-Cow) guidelines, or you can perform a more detailed analysis of the lifetime of your object to see which modifications require cloning.

If the TLM mechanism had clone as the default, getting the no-clone behavior would have been more difficult; and we’d have people complaining the other way around. :rolleyes:

Dave

If the TLM mechanism had clone as the default, getting the no-clone behavior would have been more difficult; and we’d have people complaining the other way around.

I see,
This makes sense.

Thanks Dave.

Clone operations are considered to be relatively expensive. However, cloning each transaction as it is passed through each TLM interface is the safest thing you could do. So, to clone or not to clone turns into a tradeoff between performance and safety.

The decision by the OSCI TLM Working Group, when they created TLM-1.0, was to focus on performance and let the user deal with safety issues. No clone as a default, requiring the user to decide when to clone, gives the highest performance. The alternative would be to clone every time, producing many clones, many of them unnecessary even in the most conservative of designs.

The rules we recommend, which are called MCOW, are quite simple and intuitive. MCOW stands for Manual Copy on Write, and, as the name implies, it suggests that you clone (or copy) a transaction object only when you are going to modify it. Analysis components, scoreboards and coverage collectors, for instance, only read data from transactions and never modify them, so there is no need to clone in those components. A driver may only read data from a transaction to turn it into pin wiggles, and it also does not need to clone the transaction. In a packet-based design, a component may compute error correction codes, for example, and would need to clone each transaction when it modifies it with new error correction codes.
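
A sketch of that last case, assuming a filter-style component (the ecc_stamper name and the XOR stand-in for real error correction are made up for illustration): the component that modifies the transaction is the one that clones it, and it publishes only the copy.

  class ecc_stamper extends ovm_component;
    tlm_analysis_fifo #(item) fifo_in;
    ovm_analysis_port  #(item) ap_out;

    function new (string name, ovm_component parent);
      super.new(name, parent);
      fifo_in = new("fifo_in", this);
      ap_out  = new("ap_out", this);
    endfunction

    task run;
      item t, t_mod;
      forever begin
        fifo_in.get(t);              // shared handle from upstream: read-only so far
        $cast(t_mod, t.clone());     // MCOW: copy before the first modification
        t_mod.d = t_mod.d ^ 16'hFF;  // stand-in for adding error correction codes
        ap_out.write(t_mod);         // downstream components only ever see the copy
      end
    endtask
  endclass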

Having no-clone as the default lets you make the safety vs performance tradeoff, recognizing that quite often there’s no need to clone to maintain a safe and secure design.

– Mark

Hi Chip Maker,
begin_tr & end_tr are used for transaction recording. How can they be used to create a new transaction? Please explain with an example.
Regards,
Vishnu

begin_tr & end_tr mark the transaction boundaries properly, and they help when doing a table print of a transaction to see when it started and ended.

Kal