While revisiting my Verilog slides I came across an example:

```
`timescale 10 ns / 1 ns
module time_test;
  integer a = 0, b = 0, c = 0;
  initial begin
    #1.53  a = 6; $display($realtime,, " a == 6 @ T == %0t ", $realtime);
    #2.56  b = 9; $display($realtime,, " b == 9 @ T == %0t ", $realtime);
    #1.547 c = 4; $display($realtime,, " c == 4 @ T == %0t ", $realtime);
  end
endmodule
```

Based on the time precision, the delays are rounded to one decimal place (the 1 ns precision is 0.1 of the 10 ns time unit): #1.53 becomes #1.5, #2.56 becomes #2.6, and #1.547 becomes #1.5.

Without using %t, $realtime returns these rounded-off real numbers accumulated over the sequential delays: 1.5, 4.1 and 5.6 (in units of the 10 ns time unit).
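To double-check that arithmetic, here is a small sketch in Python rather than Verilog, purely to model the rounding and accumulation (the variable names are illustrative, not part of the example above):

```python
# Model of `timescale 10 ns / 1 ns: each #-delay is quantized to the
# nearest 0.1 time units (1 ns precision / 10 ns unit), and the three
# sequential delays accumulate into the times $realtime reports.
delays = [1.53, 2.56, 1.547]              # the #-delays, in 10 ns units
rounded = [round(d, 1) for d in delays]   # quantized to precision

now = 0.0
realtimes = []
for d in rounded:                         # sequential delays accumulate
    now = round(now + d, 1)
    realtimes.append(now)

print(rounded)     # [1.5, 2.6, 1.5]
print(realtimes)   # [1.5, 4.1, 5.6] -- what $realtime returns
```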

(a) However, **when I use %t along with $realtime, why do we observe 15, 41 and 56 in the result?**

**Then I replaced $realtime with $time in the example above:**

Without using %t, **$time returns an integer after rounding off**: 2, 4 and 6.

(b) But when I use %t, **I observe 20, 40 and 60 in the result**.

One intuition I have is that **the format specification %t displays simulation time.**

$realtime returns a real number scaled to the time unit of 10 ns.

**Hence the simulation time would be obtained by multiplying these real values (i.e. the argument $realtime) by the time unit of 10 ns.**

Therefore we observe 15, 41 and 56 (which are actually in units of ns).

**Similarly, for $time we simply multiply the integer value returned by $time by the time unit of 10 ns.**

Hence we observe 20, 40 and 60 (which are again in units of ns).
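The multiplication described above can be written out in a small Python sketch (a model of my intuition, not of the simulator's internals): the raw value, expressed in the module's 10 ns time unit, is rescaled by unit/precision = 10 ns / 1 ns = 10, so %t effectively prints in 1 ns ticks.

```python
# %t rescales its argument from the module's time unit (10 ns) into
# 1 ns display units -- a factor of unit/precision = 10.
UNIT_NS, PRECISION_NS = 10, 1
scale = UNIT_NS // PRECISION_NS        # 10

realtime_vals = [1.5, 4.1, 5.6]        # returned by $realtime
time_vals = [2, 4, 6]                  # returned by $time (rounded to integers)

print([round(v * scale) for v in realtime_vals])   # [15, 41, 56]
print([v * scale for v in time_vals])              # [20, 40, 60]
```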

NOTE: These values returned by $time are different from the actual simulation times at which the assignments occur.

**Is my initial thought process correct? Please correct me if I am wrong.**

Thanks.