Properly Constraining a Signed Logic Dynamic Array

I am trying to properly constrain an array of signed logic. I need to constrain rx_data to the range [-0.7999, 0.4999]. However, when I run the test, I get a constraint solver error.

class my_seq_item extends uvm_sequence_item;
  `uvm_object_utils(my_seq_item)

  rand logic signed [13:0] rx_data[];

  constraint rx_data_c {
    foreach (rx_data[i]) {
      rx_data[i][17:0] inside {['h13108:'h0800]};
    }
  }
endclass : my_seq_item

Constraint solver error

# /home/my_sequence.sv(63): randomize() failed due to conflicts between the following constraints:
#       /home/my_seq_item.sv(77): rx_data_c { (rx_data[0][13:0] inside { [196608:2048] }); }
# Given:
#       logic signed [27:0] rx_data[0]
# ** Warning: (vsim-7084) No solutions exist which satisfy the specified constraints; randomize() failed.

What is the proper method in SystemVerilog to constrain positive/negative logic data values?

A couple of problems.

rx_data is an array of 14-bit signed numbers, yet 'h13108 (78088 decimal) needs 17 bits to be represented as a bit pattern, and 18 bits as a positive signed number.

The part-select rx_data[i][17:0] is out of bounds on a 14-bit vector. Maybe you meant to declare rx_data as an array of 18-bit signed values.

When comparing signed numbers, all operands must be signed, or the entire expression gets treated as unsigned. The literals you used are unsigned; signed hex literals use the 'sh prefix, e.g. 14'sh0800 for a 14-bit signed value.

The range on the inside operator is [low:high]. If the bound to the left of the colon is greater than the bound to the right, the range is empty and contains no values. A corrected sketch follows below.
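
Putting those points together, a minimal corrected sketch might look like this. The bounds are placeholders (it is not clear which end of your range was meant to be negative), and the size constraint is my addition so that randomize() does not leave the dynamic array empty:

class my_seq_item extends uvm_sequence_item;
  `uvm_object_utils(my_seq_item)

  // 18 bits wide so 'h13108 fits as a positive signed value
  rand logic signed [17:0] rx_data[];

  // the original code never constrained the array size; without this,
  // randomize() may leave rx_data empty
  constraint size_c { rx_data.size() inside {[1:8]}; }

  constraint rx_data_c {
    foreach (rx_data[i]) {
      // signed 'sh literals and [low:high] ordering
      rx_data[i] inside {[-18'sh0800 : 18'sh13108]};
    }
  }
endclass : my_seq_item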

BTW, the IEEE 1800-2023 SystemVerilog LRM has been extended to directly randomize real datatypes. Check with your tool vendor for support.
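
As a sketch of what that could look like, assuming your simulator supports the 2023 feature, the real range could then be constrained directly:

class real_item;
  // rand real requires IEEE 1800-2023 support in the solver
  rand real rx_real;
  constraint rx_real_c { rx_real inside {[-0.7999 : 0.4999]}; }
endclass : real_item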

Would a simple solution be something similar to this:

class my_seq_item extends uvm_sequence_item;
  `uvm_object_utils(my_seq_item)

  rand bit signed [17:0] rx_data[];

  constraint rx_data_c {
    foreach (rx_data[i]) {
      rx_data[i][17:0] inside {[-0.7999, 0.4999]};
    }
  }
endclass : my_seq_item

This would be similar to a solution found here:

https://verificationacademy.com/forums/t/constrain-a-relation-between-arrays-of-signed-values-in-systemverilog/39108

Another idea I am working on can be found in this Q&A:

https://verificationacademy.com/topics/systemverilog/verilog-basics-for-systemverilog-constrained-random-verification/

It is important that rx_data is properly constrained to the range [-0.7999, 0.4999]; otherwise the verification solution is incomplete.

I tried a simple test to see whether I could get the correct results.

class my_seq_item extends uvm_sequence_item;
  `uvm_object_utils(my_seq_item)
  
  rand int int_val; 
  real real_val;
  logic [17:0] test_val;
  typedef logic [17:0] ulogic18_t;
  
  constraint range { int_val inside {[-1000:1000]};}

  function void post_randomize();
    real_val = int_val/1000.0;          // scale to [-1, 1]
    test_val = $realtobits(real_val);   // returns 64-bit IEEE 754 bits; silently truncated to 18 bits
  endfunction

endclass : my_seq_item

However, the results show bit truncation:

# real_val : -0.734000
# real_val (ulogic18_t) : 3ffff
# real_val ($realtobits) : bfe77ced916872b0
# test_val : 72b0

How can we properly yield an 18-bit result without rounding?

Please see "Why does 0.1 not exist in floating-point?"

You say that you want rx_data to have a range [-0.7999, 0.4999], yet it is declared as an 18-bit integral type. I assume it represents some fixed-point type. You need to clarify. Where did the value 'h13108 come from?
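
For illustration only: if it were, say, a Q1.17 fixed-point format (1 sign bit, 17 fractional bits), which is purely a guess on my part, converting between real and your 18-bit type would be a simple scaling:

// Q1.17 is assumed here purely for illustration
function logic signed [17:0] real_to_q1_17(real r);
  return $rtoi(r * (2.0 ** 17));  // scale, then truncate toward zero
endfunction

function real q1_17_to_real(logic signed [17:0] q);
  return q / (2.0 ** 17);
endfunction

With a representation like that, -0.7999 and 0.4999 would map to integral bounds you could use directly in an inside constraint.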

The range [-0.7999, 0.4999] is used for test purposes; I cannot post the actual range here. However, the concept asked about here is important to resolve. The range is expressed as a real data type range, and I need to convert it to logic/bit for driving the input to the DUT.

The value 'h13108 came from a Python script I am using to verify test code, not from the actual verification solution.

Hopefully the above explanation sheds some light on our thought process and the possible solutions we are working on.

That does not help. It would really help if you showed us a complete, reproducible example that displays the results you are getting, and then showed us the results you are expecting. You have still never shown us how you are representing a real number stored in an 18-bit integral type.

Unfortunately, this project has been sidelined for higher-priority tasks. I may get back to it sometime in the future, so for now I will close this thread and return if/when needed.