UVM reg atomicity for write/read tasks

I noticed that each uvm_reg object has a local semaphore m_atomic, which is initialized with one key by m_atomic = new(1). When we call

reg.write

(without overriding any of the tasks), there will be two calls to

XatomicX(0);

which puts two keys back into the semaphore, and hence the whole atomic concept for a register access is lost. Can someone shed light on this, in case I am overlooking something?

Here is the code snippet

task uvm_reg::write(...);
  XatomicX(1);   // take the key for this access
  ...
  do_write(...);
  ...
  XatomicX(0);   // release the key
endtask

task uvm_reg::do_write(...);
  XatomicX(1);   // nested call from the same process: returns immediately
  ...
  XatomicX(0);   // second release: puts the key back while write() is still active
endtask

task uvm_reg::XatomicX(bit on);
   process m_reg_process;
   m_reg_process = process::self();

   if (on) begin
     // Re-entrant for the same process: a nested call (e.g. from do_write)
     // returns without taking another key
     if (m_reg_process == m_process)
       return;
     m_atomic.get(1);
     m_process = m_reg_process;
   end
   else begin
     // Maybe a key was put back in by a spurious call to reset()
     void'(m_atomic.try_get(1));
     m_atomic.put(1);
     m_process = null;
   end
endtask: XatomicX

So what seems to be missing is a process check in

uvm_reg::XatomicX()

when putting the key back, the same way it is checked before taking the key.
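
For illustration only, a guard like that on the release path might look roughly like the sketch below. This is not the actual library code, just the suggestion spelled out, and it is not a claim that it closes every race window:

task uvm_reg::XatomicX(bit on);
   process m_reg_process;
   m_reg_process = process::self();

   if (on) begin
     if (m_reg_process == m_process)
       return;
     m_atomic.get(1);
     m_process = m_reg_process;
   end
   else begin
     // Sketch: only the process recorded as the owner releases the key;
     // any other process leaves the semaphore untouched
     if (m_reg_process != m_process)
       return;
     void'(m_atomic.try_get(1));
     m_atomic.put(1);
     m_process = null;
   end
endtask: XatomicX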

In reply to agkiran:

It’s protected:

      void'(m_atomic.try_get(1));
      m_atomic.put(1);

Before adding a key to the semaphore, it tries to get a key. A second call to XatomicX(0) results in any existing key getting removed (back down to 0 keys again), followed by adding a key (back up to 1).

That’s still racy, though. If two threads somehow executed those same two lines at the same time, Verilog’s scheduler rules allow both try_gets to be evaluated first, followed by both puts. That would leave two keys in the semaphore. In practice I don’t think any simulator would actually do that, but it’s allowed by the LRM.
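
The window can be shown with a small standalone sketch (plain SystemVerilog, no UVM; the module and task names are made up). Whether the bad interleaving ever happens depends entirely on the simulator’s scheduling, as noted above:

module xatomic_release_race;
  semaphore m_atomic;

  // Same shape as the XatomicX(0) branch: drain a key if one is there,
  // then put one back
  task automatic release_key(string who);
    void'(m_atomic.try_get(1));
    m_atomic.put(1);
    $display("%s released", who);
  endtask

  initial begin
    m_atomic = new(1);
    // The LRM allows both try_gets to run before either put, which would
    // leave two keys; most simulators run each branch to completion instead
    fork
      release_key("thread 1");
      release_key("thread 2");
    join
    if (m_atomic.try_get(2)) $display("2 keys in semaphore - atomicity lost");
    else                     $display("1 key in semaphore");
  end
endmodule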

In reply to warnerrs:

I have a test where I do the following

fork
  reg.write(...);
  reg.read(...);
  reg.write(...);
join

I have attached a frontdoor sequence to this register, so there is a single sequence instance for it. Every time I perform an operation on this register, that same sequence instance is executed. Since more than one key can end up in the semaphore, two threads can take a key at the same time, and because the same sequence instance is used, I get a UVM_FATAL for the second one saying the sequence is already started.
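
The hookup being described is roughly the following sketch (class, register and map names are placeholders, not the original code, and it assumes uvm_pkg and the UVM macros are visible):

class my_reg_frontdoor extends uvm_reg_frontdoor;
  `uvm_object_utils(my_reg_frontdoor)

  function new(string name = "my_reg_frontdoor");
    super.new(name);
  endfunction

  virtual task body();
    // rw_info describes the pending access (kind, value, status, ...)
    `uvm_info("FD", $sformatf("frontdoor %s started", rw_info.kind.name()), UVM_LOW)
    // ... translate rw_info into bus sequence items here ...
  endtask
endclass

// Registered once, e.g. in the environment:
//   my_reg_frontdoor fd = my_reg_frontdoor::type_id::create("fd");
//   regmodel.my_reg.set_frontdoor(fd, regmodel.default_map);
// Every write()/read() on that register then starts this same object on the
// sequencer, which is why a second concurrent access trips SEQ_NOT_DONE.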

This is the actual error I am getting:
UVM_FATAL @ 10000030: uvm_test_top.m_env.m_agent.m_seqr@@seq [SEQ_NOT_DONE] Sequence uvm_test_top.m_env.m_agent.m_seqr.seq already started

I think we need more details. Does this frontdoor sequence consume time, or does it execute in zero time?

How about adding some log messages to those threads, so we can see when each call starts and finishes? And similar start/end messages in your frontdoor sequence.

Most revealing would be to see the order in which those m_atomic statements are executed, from each thread, relative to each other. That is probably easiest to do from the debugger.
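
A minimal way to instrument the threads might be the following sketch (the register name, data values and the status/rdata locals are placeholders, not from the original test):

uvm_status_e   status;
uvm_reg_data_t rdata;

fork
  begin
    `uvm_info("ATOM_DBG", "write #1 start", UVM_LOW)
    regmodel.my_reg.write(status, 'hA5);
    `uvm_info("ATOM_DBG", "write #1 end", UVM_LOW)
  end
  begin
    `uvm_info("ATOM_DBG", "read start", UVM_LOW)
    regmodel.my_reg.read(status, rdata);
    `uvm_info("ATOM_DBG", "read end", UVM_LOW)
  end
join
// ...plus matching start/end messages in the frontdoor sequence's body()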

In reply to warnerrs:

I found the same issue in my testbench. The XatomicX usage by uvm_reg::write and uvm_reg::do_write causes it:

  1. thread 1: writes a reg. XatomicX(1) is called from uvm_reg::write; m_atomic key 1 → 0.
  2. thread 2: XatomicX(1) is called from uvm_reg::write on the same reg object. m_atomic key == 0, so thread 2 blocks.
  3. thread 1: XatomicX(1) is called from uvm_reg::do_write. Nothing happens (same process).
  4. thread 1: XatomicX(0) is called from uvm_reg::do_write. m_atomic.try_get(1) fails; m_atomic.put(1) brings the key count 0 → 1, which unblocks thread 2 and immediately takes the count back to 0.
  5. thread 1: XatomicX(0) is called from uvm_reg::write. m_atomic.try_get(1) fails; m_atomic.put(1) brings the key count 0 → 1.
  6. thread 1: writes the reg again. XatomicX(1) is called from uvm_reg::write; m_atomic key 1 → 0.
  7. thread 2: resumes and returns from the m_atomic.get(1) that was granted in step 4. The non-null m_process set by thread 1 in step 6 is now overwritten with thread 2's m_reg_process.
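
Until the library itself guards this, a common user-side mitigation is to serialize the competing accesses in the test. A minimal sketch with placeholder names, assuming the accesses are issued from one place:

semaphore reg_lock = new(1);   // one key: only one access in flight on this register

task automatic locked_write(uvm_reg rg, uvm_reg_data_t value,
                            output uvm_status_e status);
  reg_lock.get(1);             // wait until no other thread is inside write()/read()
  rg.write(status, value);
  reg_lock.put(1);
endtask

// In the test, the forked calls then go through the wrapper:
//   fork
//     locked_write(regmodel.my_reg, 'hA5, status1);
//     locked_write(regmodel.my_reg, 'h5A, status2);
//   join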

I am also facing the same issue. Is there any workaround or fix for this?