Using `timescale to speed up simulation run time

Hi,

Currently I have `timescale 1ns/100ps. Is there any way to speed up my simulation run time by changing this timescale alone? If so, what value would speed up the simulation?

Thanks.

In reply to Bibin Paul:
The `timescale compiler directive specifies the default time unit and precision for all design elements that follow it. It does not affect simulation speed.

To speed up your simulation, you can use several coding tricks given in the DVCon paper “Yikes! Why is My SystemVerilog Testbench So Slooooow?”
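
For reference, here is a minimal sketch of what the directive actually controls (my example, not from the original reply): with `timescale 1ns/100ps, delay values are read in units of 1ns and rounded to a precision of 100ps.

`timescale 1ns/100ps
module timescale_demo;
  // #3.14159 is read as 3.14159ns and rounded to the 100ps precision,
  // so this statement fires at 3.1ns
  initial #3.14159 $display("fired at %t", $realtime);
endmodule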

In reply to Bibin Paul:

Time is an abstraction in a Verilog simulator. If you write a clock generator with a #50 delay, it really does not matter whether the #50 means 50ns or 50ps; the code it executes is the same. If you are simulating a purely RTL design, the clock is probably the only place a delay appears, so the simulator just advances time from 0, to 50, to 100, to 150. Changing the timescale therefore has no effect on the simulation speed.
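
As a minimal sketch of this point (my example, not code from the thread), the clock generator below executes exactly the same statements whether the timescale makes #50 mean 50ns or 50ps; only the time labels in logs and waveforms change.

`timescale 1ns/100ps // change to 1ps/1ps and the executed code is identical
module clk_gen;
  bit clk;
  // The simulator advances time 0, 50, 100, 150, ... in abstract units
  initial forever #50 clk = ~clk;
endmodule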

It starts to matter when you have many different clock rates, when you introduce gates with delays, and depending on how much precision those delays are specified with. Sometimes poorly written code with combinational feedback loops can oscillate, and the relative size of the delays in the loop determines how many times it oscillates per clock cycle.

It would help to have a good understanding of how timescales work, and how delays are specified throughout your design. An inadvertent #1 placed in your code can have various consequences if you do not understand the timescale.
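
To illustrate that pitfall with a hedged sketch of my own (assuming the two modules below are compiled together): the same literal #1 means different absolute delays under different timescale directives.

`timescale 1ns/100ps
module a;
  initial #1 $display("a fires at 1ns"); // #1 is read in units of 1ns
endmodule

`timescale 1ps/1ps
module b;
  initial #1 $display("b fires at 1ps"); // the same #1 is now 1000x shorter
endmodule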

In reply to bdreku:

In reply to Bibin Paul:
The `timescale compiler directive specifies the default time unit and precision for all design elements that follow it. It does not affect simulation speed.
To speed up your simulation, you can use several coding tricks given in the DVCon paper “Yikes! Why is My SystemVerilog Testbench So Slooooow?”

Hi,

Could you please let me know: if I write a particular piece of code in a concurrent way versus a sequential way, which will be faster or slower? Does one have an advantage over the other, or do they perform the same?

Example code snippets:

Code1:
if (en) begin
  count = count + 1;
  $display("new count value %0d", count);
end

VS:

Now let's assume count is being incremented in some other parallel block; to display the value of count, we use a parallel thread like the one below.

Code2:
fork
  forever @(count)
    $display("new count value %0d", count);
join_none

Which of the two code snippets will make the simulator run faster, and why?

Thanks

In reply to dave_59:

Hi Dave,

I have a question regarding your statement:
"So changing the timescale will have no effect on the simulation speed. "

Suppose my TB uses `timescale 1fs/1fs and a specific task takes 6ms to complete.
Now suppose I change the timescale to 1ns/1ns. You have mentioned:
“Then the simulator just advances time from 0, to 50, to 100, to 150.”

Wouldn't the simulator then be taking larger steps to reach 6ms, and hence complete the simulation faster?

Regards,
Parvathy

In reply to paru0885:

Yes, but that's what I meant by having different clock rates. Adding a specific task that takes 6ms is the same as adding a clock with a 6ms period. A larger ratio between the periods means it takes more of the smaller clock periods of activity to reach one of the larger periods. The absolute time scales do not matter for performance, only the ratios.
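
A small experiment may make the ratio argument concrete (my sketch, not from the reply above; the module and signal names are made up). The simulator's work is dominated by the number of scheduled events, and 6ms of a 10ns clock is 6ms / 10ns = 600,000 clock periods whether the precision is 1ns or 1fs. The time literals (#5ns, #6ms) keep the behavior identical under either directive.

`timescale 1ns/1ns // rerun with `timescale 1fs/1fs: same event count
module ratio_demo;
  bit clk;
  int unsigned edges;
  initial forever #5ns clk = ~clk; // 10ns period, i.e. 100 MHz
  always @(posedge clk) edges++;
  initial begin
    #6ms; // stands in for the 6ms task
    $display("posedges seen: %0d", edges); // 600,000 either way
    $finish;
  end
endmodule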

In reply to dave_59:

Hi Dave,

I am not sure I completely got your point about the different clock rates.
Consider my TB parameters as below :

Scenario 1: `timescale 1ns/1ns
Clock frequency - 100 MHz (10ns period)
Task1 takes 6ms of simulation time (the test ended in 30 min of wall-clock time)

Scenario 2: `timescale 1fs/1fs
Clock frequency - 100 MHz (10ns period)

I understand that Task1 will still take 6ms of simulation time. But can we expect the test to end in less than 30 min of wall-clock time?

Regards,
Parvathy

In reply to paru0885:

Hi All,

I think `timescale only specifies the reference time unit and precision for the simulation; it has nothing to do with simulation performance. Even after simulation, you can change the time scale in vendor waveform tools to rescale the plot.

In reply to paru0885:

As was stated before, if the only delay in your entire simulation description (DUT + testbench) is the single clock-period delay used to generate a clock in your testbench, then the timescale/time precision has no effect on simulation performance.