Difference between input skews: #1step vs. #1

Hi All,

If a user were to declare clocking blocks as:


 clocking bus1 @(posedge clock);
   default input  #1;

   input  a ;
 endclocking

 clocking bus2 @(posedge clock);
   default input  #1step;  // default input skew

   input  a ;
 endclocking

[Q] What's the difference between the input skews #1step and #1?

LRM describes #1step as :
A 1step input skew allows input signals to sample their steady-state values in the time step immediately before the clock event (i.e., in the preceding Postponed region).

#1step gives the same value as in the Preponed region of the same time step (essentially, values don't change between the Postponed region of the previous time step and the Preponed region of the current time step).

Is there a case where the value sampled for clockvar a would differ between the two clocking blocks?

In reply to Have_A_Doubt:

#1step is a delay representing the global time precision. If #1 is used in a timescale whose time unit is the same as that global precision, then they represent the same delay. But that is rarely the case.
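To make that concrete, here is a minimal sketch (hypothetical module and signal names; it assumes the design's global time precision is 1ps) where the two skews sample different values:

```systemverilog
// Assumption: 1ps is the finest precision anywhere in this design,
// so #1step equals 1ps here.
`timescale 1ns/1ps
module skew_diff;
  logic clock = 0, a = 0;

  clocking bus1 @(posedge clock);
    default input #1;     // sample 1ns (one time unit) before the edge
    input a;
  endclocking

  clocking bus2 @(posedge clock);
    default input #1step; // sample 1ps before the edge, i.e., the
                          // Preponed-region value at the edge itself
    input a;
  endclocking

  initial begin
    #9.5 a = 1;     // 'a' changes 0.5ns before the posedge at t = 10ns
    #0.5 clock = 1; // posedge at t = 10ns:
                    //   bus1.a was sampled at t = 9ns  -> 0
                    //   bus2.a is the Preponed value   -> 1
  end
endmodule
```

So whenever a changes inside the window between #1 before the edge and the edge itself, the two clockvars disagree.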

In reply to dave_59:

Adding a note from LRM for future reference :


3.14.3 Simulation time unit:

The global time precision, also called the simulation time unit, is the minimum of all the timeprecision statements, all the time precision arguments to timeunit declarations, and the smallest time precision argument of all the `timescale compiler directives in the design. The step time unit is equal to the global time precision.
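As a hedged illustration of that definition (hypothetical modules), the global time precision is simply the finest precision found anywhere in the design:

```systemverilog
`timescale 1ns/1ps    // precision argument: 1ps
module m1; endmodule

`timescale 1us/100ps  // precision argument: 100ps
module m2;
  timeprecision 10fs; // finest precision in the design
endmodule

// Global time precision = min(1ps, 100ps, 10fs) = 10fs,
// so #1step anywhere in this design is a delay of 10fs.
```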

Dave,

If a user defines both a `timescale and a timeunit, which one takes priority?


`timescale 1ns / 10ps
 module A (...);
  timeunit 100ps;
  timeprecision 10fs;
  
  // What's the time unit and precision used within module A?
...
endmodule

module B (...);
...
endmodule

If the files were compiled in the order A, B, then module B would simulate with a time unit of nanoseconds.

For module A, which one takes precedence?

In reply to Have_A_Doubt:

See section 3.14.2.3 Precedence of timeunit, timeprecision, and `timescale

In reply to dave_59:


LRM 3.14.2.3: If a timeunit is not specified within a module, program, package, or interface definition, then the time unit shall be determined using the following rules of precedence:

a) If the module or interface definition is nested, then the time unit shall be inherited from the enclosing module or interface (programs and packages cannot be nested).
b) Else, if a `timescale directive has been previously specified (within the compilation unit), then the time unit shall be set to the units of the last `timescale directive.
c) Else, if the compilation-unit scope specifies a time unit (outside all other declarations), then the time unit shall be set to the time units of the compilation unit.
d) Else, the default time unit shall be used.

The time unit of the compilation-unit scope can only be set by a timeunit declaration, not a `timescale directive. If it is not specified, then the default time unit shall be used.
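A small sketch of rule a) (hypothetical module names): a nested module definition with no timeunit of its own inherits from its enclosing module, not from the `timescale directive:

```systemverilog
`timescale 1ns/1ps
module outer;
  timeunit 100ps;
  timeprecision 10fs;

  // Nested definition with no timeunit/timeprecision of its own:
  // rule a) says it inherits 100ps / 10fs from 'outer',
  // not 1ns / 1ps from the `timescale directive.
  module inner;
    initial $printtimescale();
  endmodule

  inner i1();
endmodule
```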

So if a timeunit is defined within the module, it takes precedence.


`timescale 1ns / 10ps

module A ; 
   timeunit 100ps;
   timeprecision 10fs;

   initial  $printtimescale() ;
endmodule

module B;

    initial  $printtimescale() ;
endmodule

module  Precedence ;

 A a1() ;

 B b1() ;

    initial  $printtimescale() ;
endmodule

The output I observe :

Time scale of (Precedence.a1) is 100ps / 10fs
Time scale of (Precedence.b1) is 1ns / 10ps
Time scale of (Precedence) is 1ns / 10ps

In reply to dave_59:

Hi Dave ,

A few questions on the same topic :

(a) As you mentioned, 1step is the minimum of all the timeprecision statements, all the time precision arguments to timeunit declarations, and the smallest time precision argument of all the `timescale compiler directives in the design.

[Q1] If I write #1, does it use the time precision or the time unit (or a combination of both) of the scope where the clocking block is defined?

(b) I then tried an example using real numbers as skews, mainly to check whether they are valid:


interface intf ( input logic clk );

  timeunit       1ns;
  timeprecision  1ps;

  wire [7:0] ip1 , ip2 ;

  clocking cb @(posedge clk);
    default input #1.2565 output #2.3598;  // real-number skews are valid indeed!
    output ip1 , ip2  ;                       
  endclocking
endinterface

`timescale  1ps / 1fs 
module  top_tb ;
  .......
  intf  int_f( clk ) ;
  ...........
endmodule


So 1step would mean a time precision of 1fs (as it's the minimum of all precisions).

But since the timeunit and timeprecision are specified within the interface, will the skews be rounded to 3 decimal places?

[Q2] Will an input skew of 1.2565 mean 1.257ns prior to the clocking edge?