Using system functions $realtime and $time as arguments to the format specification %t

While revisiting my Verilog slides I came across an example:


`timescale 10 ns / 1 ns 
module time_test;
 integer a=0, b=0, c=0;

 initial begin
   #1.53  a=6;  $display($realtime,," a == 6  @ T == %0t ",$realtime );
   #2.56  b=9;  $display($realtime,," b == 9  @ T == %0t ",$realtime );
   #1.547 c=4;  $display($realtime,," c == 4  @ T == %0t ",$realtime );
 end 
endmodule

Based on the time precision, the delays are rounded off to 1 decimal place: #1.53 would be #1.5, #2.56 would be #2.6 and #1.547 would be #1.5.
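For example, taking the first delay (a rough check of the arithmetic):

#1.53  ->  1.53 x 10ns = 15.3ns  ->  rounded to the 1ns precision = 15ns  ->  1.5 time units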

Without using %t, $realtime would return the same rounded-off real numbers: 1.5, 4.1 and 5.6.

(a) However, when I use %t along with $realtime, why do we observe 15, 41 and 56 in the result?

Then I replaced $realtime with $time in the above example:

Without using %t, $time would return an integer after rounding off: 2, 4 and 6.

(b) But when I use %t, I observe 20, 40 and 60 in the result.

One intuition I have is that format specification %t displays simulation time.

$realtime returns a real number scaled to the time unit of 10ns.
Hence the simulation time would be obtained by multiplying these real values (i.e. the argument $realtime) by the time unit of 10ns.

Therefore we observe 15, 41 and 56 (which are actually in units of ns).

Similarly, for $time we simply multiply the integer value returned by $time by the time unit of 10ns.

Hence we observe 20, 40 and 60 (which are again in units of ns).
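Putting rough numbers on the first time stamp (assuming the scaling works as I describe):

1.5 ( first value of $realtime ) x 10ns = 15ns  ->  %0t prints 15
2   ( first value of $time )     x 10ns = 20ns  ->  %0t prints 20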

NOTE: These values returned by $time are different from the actual simulation times at which the assignments occur.

Is my initial thought process correct? Please correct me if I am wrong.

Thanks.

In reply to Have_A_Doubt:

  1. Using a timescale of 10ns or 100ns is asking for trouble in my opinion.

  2. Specify a unit with your delays to remove any ambiguity, such as 10ns.

  3. Have a look at $timeformat(), and use %t with $realtime:

$timeformat [ ( units_number , precision_number , suffix_string , minimum_field_width ) ] ;
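For example, a minimal sketch keeping your 10ns/1ns timescale (the module name and argument values here are just one illustrative choice):

`timescale 10 ns / 1 ns
module tf_demo;
 initial begin
   $timeformat(-9, 3, " ns", 10);             // report times in ns with 3 decimal digits
   #1.53 $display("T == %t", $realtime);      // expect something like "15.000 ns"
 end
endmodule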

In reply to chrisspear:

Hi Chris,

My intent is to understand how the above code works. Any inputs about the output are welcome.

Have a look at $timeformat(), and use %t with $realtime

Yes, I did go through this part. As %t works with $timeformat,
by default the first argument (units_number) would be 1ns in this case, precision_number has a default value of 0, and suffix_string is "".
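So, if I read the table correctly, the default behavior here should match an explicit call of:

$timeformat(-9, 0, "", 20);  // 1ns units, 0 digits, no suffix, default minimum field width of 20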

In reply to Have_A_Doubt:

When you say “without using %t”, what are you doing in its place?

In reply to Have_A_Doubt:

My intent is to know the working of the above code.

IMHO: Don’t worry about Verilog slides. Unless you are a teacher, you don’t ship PPT, you create Verilog code to make chips. Write clear, reusable Verilog code. A timescale of 10ns/1ns and delays of #1.53 are just bugs waiting to bite you.

Use a timescale of 1ns, write the delay as #15.3ns, and use $timeformat(-9, 0, "ns", 0) to show you the resulting time in nanoseconds. Better yet, if you know that the delays are going to be rounded, use just #15ns.
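i.e. something along these lines (just a sketch, trimmed to the first delay):

`timescale 1 ns / 1 ns
module time_test;
 integer a = 0;

 initial begin
   $timeformat(-9, 0, "ns", 0);
   #15.3ns a = 6;   // or simply #15ns if the rounding is expected anyway
   $display("a == 6  @ T == %0t", $realtime);  // should print something like: a == 6  @ T == 15ns
 end
endmodule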

In reply to dave_59:

Hi Dave,

without using %t

I provide $realtime / $time as an argument without the format specification %t.

E.g.: $display($realtime,," a == 6  @ T == %0t ",$realtime );

The first part of the display does not use %t and passes $realtime directly as an argument;
the second part uses %t as the format specification with $realtime as the argument.

The output is different when I use %t with $realtime / $time compared to using only $realtime / $time.

In reply to Have_A_Doubt:

When using %t without first using $timeformat, the default is to scale the formatted time to the global time precision, 1ns.

In reply to dave_59:

the default is to scale the formatted time to the global time precision

Yes, I do get this part. As Chris pointed out, it's the default $timeformat at work.

I wanted to confirm whether the following is correct:


Since %t is used to display time / simulation time, the argument to it ($realtime) is first scaled to the time unit, i.e. (1.5 / 4.1 / 5.6) x 10ns.

Then the result is scaled to the global time precision.
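For example, for the first value: 1.5 x 10ns = 15ns, and then 15ns / 1ns (the global time precision) = 15, which is what %0t prints.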

If the argument to %t is a real variable / parameter:


  `timescale 10 ns / 1 ns 
   module top;
   parameter  param = 1.9890 ;

   initial  $display(" param is %0t " , param );
   endmodule

In this case the parameter would first be scaled to the time unit to calculate the simulation time, i.e. 1.9890 x 10ns = 19.890ns.
Then the result is scaled to the global time precision, i.e. 20ns.
As a result we observe 20 in the output (no "ns" suffix, since the default value of the suffix_string argument is "").



EDIT: After scaling the simulation time based on the units_number argument of $timeformat,
the result is then rounded off as per the precision_number argument of $timeformat.
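For instance (just a sketch to check the rounding step), adding a $timeformat call with precision_number = 2 to the example above should show 19.89 instead of 20:

   `timescale 10 ns / 1 ns
   module top;
   parameter  param = 1.9890 ;

   initial begin
     $timeformat(-9, 2, "", 0);             // keep 2 digits after scaling to 1ns
     $display(" param is %0t " , param );   // expect 19.89 rather than 20
   end
   endmodule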
 

In reply to Have_A_Doubt:

Correct. There was a recent request to make this clearer in the 1800 LRM.

In reply to dave_59:

Hi Dave,

When using %t without first using $timeformat, the default is to scale the formatted time to the global time precision, 1ns

In the code above, since we only have the `timescale compiler directive, the global time precision is 1ns.

However, if I were to add a timeunit declaration within the module:


`timescale 10 ns / 1 ns 
module time_test;  
 timeunit 1ns / 1ps ;  //  Takes priority over `timescale
 integer a=0, b=0, c=0;
 
 initial begin
   #1.53  a=6;  $display($realtime,," a == 6  @ T == %0t ",$realtime );
   #2.56  b=9;  $display($realtime,," b == 9  @ T == %0t ",$realtime );
   #1.547 c=4;  $display($realtime,," c == 4  @ T == %0t ",$realtime );
 end 
endmodule

// In this case the global time precision, aka the simulation time unit, is 1ps

Table 20-3 of the LRM mentions the default for the units_number argument as:

The smallest time precision argument of all the `timescale compiler directives in the source description

Although the smallest time precision argument of all the `timescale compiler directive(s) is 1ns, why does %t use the global time precision of 1ps?
The output for %t is different from what Table 20-3 states.

In reply to ABD_91:

It works correctly on 4 out of 5 simulators on EDAPlayground. You must be using the 5th.

In reply to dave_59:

On EDA Playground, 4 tools give the output as:

1.53  a == 6  @ T == 1530 
4.09  b == 9  @ T == 4090 
5.637  c == 4  @ T == 5637

For the output using %t, the global precision of 1ps is being used, whereas as per LRM Table 20-3
the default value of the units_number argument uses the smallest time precision of all `timescale compiler directives, i.e. 1ns.
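A quick check of the observed values against a 1ps scale:

1.53 ( $realtime, in the module time unit of 1ns ) x 1ns = 1.53ns ; 1.53ns / 1ps = 1530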

The LRM doesn't mention whether the default value of units_number depends on the timeprecision construct.
It only mentions the `timescale compiler directive.

Hence I expected the value for %t to be displayed in units of 1ns (the smallest time precision specified in a `timescale):

1.53  a == 6  @ T == 1.53
4.09  b == 9  @ T == 4.09
5.637  c == 4  @ T == 5.637

Am I missing something ?

You previously mentioned:

the default is to scale the formatted time to the global time precision

With the timeunit addition within the module, isn't it (i.e. your quote above) different from what LRM Table 20-3 says?

In reply to ABD_91:

It's actually Table 20-4. That appears to be an oversight.

Section 3.14.3 Simulation time unit says

The global time precision, also called the simulation time unit, is the minimum of all the timeprecision statements, all the time precision arguments to timeunit declarations, and the smallest time precision argument of all the `timescale compiler directives in the design.

I think that is what is meant.
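For example (illustrative only), with these two modules in one design, the global time precision / simulation time unit is 1ps, the minimum across all of them:

`timescale 10 ns / 1 ns
module m1;              // time precision from the directive: 1ns
endmodule

module m2;
 timeunit 1ns / 1ps;    // time precision from the declaration: 1ps
endmodule

// global time precision = min(1ns, 1ps) = 1ps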

In reply to dave_59:

Section 3.14.3 Simulation time unit says :
The global time precision, also called the simulation time unit, is the minimum of all the timeprecision statements, all the time precision arguments to timeunit declarations, and the smallest time precision argument of all the `timescale compiler directives in the design.

So this means that the global time precision considers `timescale as well as the timeunit and timeprecision constructs, and uses the minimum of them.

However, in the LRM I can't find the relation between the global time precision and the default value of the units_number argument.

My understanding was that the global time precision is related to #1step of a clocking block,
whereas the format specification %t would use the default value of the units_number argument (in the absence of $timeformat).

Section 20.4.2 $timeformat simply mentions the following about units_number:

The smallest time precision argument of all the `timescale compiler directives in the source description

Currently the LRM doesn't say that the default value of the units_number argument uses the global time precision or the timeunit / timeprecision constructs.
It only says that, of all the `timescale compiler directive(s) present, the minimum time precision is used.

My interpretation is that the default value of units_number is independent of the timeunit and timeprecision constructs.

Am I missing something in LRM ?

In reply to ABD_91:

This is an obvious omission in the LRM. There can be only one global time precision, and that has to include precisions from all constructs. I have made a note of it here.