Vivado UVM Simulation Memory Leak

I have developed a UVM testbench for an AXIS FIFO. There is a memory leak in the simulation, but I have not been able to solve the issue. The simulation tool is Vivado 2022.1, and the UVM version is 1.2. You can access the code with this link.

How can I get rid of the memory leak?

In reply to atabey:

You have to explain what you mean by ‘memory leak’. The SystemVerilog language handles all memory allocation/deallocation for you, so there is nothing you as a user can do to control it.

In reply to cgales:


That is what I expected, given the garbage collection mechanism. However, memory usage keeps increasing steadily for as long as the simulation runs.

In reply to atabey:

You are describing functional issues, not memory leaks.

You need to start with the basics and debug your environment:

  • Are your clocks/reset connected and functional?
  • Is your DUT/interfaces connected correctly?
  • Does your simulation finish?
  • Do your drivers work? Do you see correct transactions at the DUT interfaces?
  • Does the monitor work?
  • Do the scoreboards work?

You have some significant issues in your code, as well as some extremely poor implementations of the AXI streaming protocol. To start, your sequence item should encapsulate an entire data stream packet, not each individual beat. The driver should handle the protocol handshake, and not just mirror sequence item data to the interface.
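To illustrate the suggestion above, here is a minimal sketch of what a packet-level sequence item might look like. All names here (axis_packet, data, the size bounds) are illustrative, not taken from the original code: the point is that one item carries an entire stream packet as a dynamic array of beats, and the driver then walks that array, asserting tlast on the final beat.

```systemverilog
// Hypothetical packet-level AXIS sequence item: one randomized item
// represents a whole data stream packet, not a single beat.
class axis_packet extends uvm_sequence_item;
  rand bit [31:0] data[];                     // all beats of one packet

  constraint c_len { data.size() inside {[1:256]}; }

  `uvm_object_utils_begin(axis_packet)
    `uvm_field_array_int(data, UVM_ALL_ON)
  `uvm_object_utils_end

  function new(string name = "axis_packet");
    super.new(name);
  endfunction
endclass
```

The driver would then own the beat-by-beat valid/ready handshake in a loop over data, which keeps protocol timing out of the sequences entirely.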

In reply to cgales:


  • Clock and reset signals are connected to the DUT. I am not sure what "functional" means for clock/reset here.

  • Yes, the interfaces are connected correctly; I can see the transactions in the waveform analyzer.

  • The simulation finishes successfully.

  • I think the drivers run correctly, since I can see the expected changes in the waveform analyzer.

  • The monitor and scoreboard are fine in simulation; the monitor can send the packets to the scoreboard.

Thanks for your comments about the code. I do not understand how to encapsulate the entire data stream in one sequence item; I will think about that approach. The driver does handle the protocol handshake: I wrote the code to wait for the ready signal when there is valid data on the interface. These are the handshake lines in the axis_driver code:


// Hold valid high and wait until the slave asserts ready
do begin
    @(posedge vif.clk);
end while (req.valid && !vif.ready);


// Hold ready high and wait until the master asserts valid
do begin
    @(posedge vif.clk);
end while (req.ready && !vif.valid);

In reply to atabey:

Your latest post contradicts your earlier post. You earlier stated “the memory usage increases gradually forever when the simulation starts to run”, but your last post states that simulation finishes successfully. Does the simulation run forever or does it finish?

If your simulation finishes successfully, then I don’t understand your comment about memory leaks. It seems like your testbench is functioning correctly.

In reply to cgales:


I keep the number of items being sent limited, so the simulation finishes at some point. I can change this number, which changes the simulation runtime. For example, if the number of items doubles, the simulation time and memory usage also double. This issue prevents running the simulation with longer packets.

In reply to atabey:

You need to take a look at the memory profiler of your tool. It should be able to tell you which objects are consuming the most amount of memory. But I would start by looking at the transactions you are creating going into your scoreboard and make sure you do not keep accumulating transactions.
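One common way a scoreboard accumulates transactions is pushing expected and actual items into queues and never removing them. A hypothetical compare-and-discard fragment, assuming queue names exp_q and act_q fed by analysis-port write() callbacks (none of these names are from the original code), could look like:

```systemverilog
// Hypothetical scoreboard check: pop matched items off both queues so
// they become eligible for garbage collection after the compare.
function void axis_scoreboard::check_pairs();
  axis_item exp, act;
  while (exp_q.size() > 0 && act_q.size() > 0) begin
    exp = exp_q.pop_front();   // removed from the queue, so the object
    act = act_q.pop_front();   // is reclaimable once compare() returns
    if (!exp.compare(act))
      `uvm_error("SCB", "Mismatch between expected and actual item")
  end
endfunction
```

If items are instead kept in the queues (or in an associative array that is never cleared), memory grows linearly with the number of transactions, which matches the "double the items, double the memory" symptom described above.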

In reply to dave_59:


Thanks for your advice. I am not sure whether the Vivado simulator provides a memory profiler. This tutorial claims that Questa has a memory profiler, but only in a paid edition. I will check the transactions between the monitor and scoreboard.