Preliminary verification of the DUT by designers

Hi
I was wondering, how much effort should the design team put into a preliminary verification of their design before delivering it to the verification team? On the one hand, we may say that the designers probably shouldn’t spend too much time on verification because that task belongs to the verification team. On the other hand, if the design contains a lot of elementary bugs, the code will be passed between the two teams many times, which may impose its own overhead. There seems to be a trade-off here, and my question is how we get to the optimal point of this trade-off.
The second question is, how should the designers do their initial verification? Is it a good idea for them to use the UVM or SV infrastructure developed by the verification team? Or should they just pre-verify their design in any manner they like? I mean, ideally, the testbench and the design should be developed in parallel with each other, by two separate teams. So if the TB is ready by the time the design is finished, can the designers use the TB to do a preliminary verification of their design?
Thank you

In reply to Farhad:

  • …how much effort should the design team put in order to do a preliminary verification of their design, before delivering it to the verification team?
    Good question. It is my belief that RTL designers should do code reviews of the RTL and the assertions with whoever wants to listen, and also do sanity checks. Those assertions can originate from both the verification team and the designer, who clarifies the internals of the design.
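    As a concrete illustration of the kind of assertion a designer might write to document an internal assumption (the signal names req, ack, clk, and rst_n are hypothetical, not from any specific design):

    property p_req_ack;
      @(posedge clk) disable iff (!rst_n)
        req |-> ##[1:4] ack;  // every request must be acknowledged within 1 to 4 clocks
    endproperty
    ap_req_ack : assert property (p_req_ack)
      else $error("req not acknowledged within 4 clocks");

    An assertion like this serves double duty: it documents the handshake requirement for the review, and it fires during the sanity check if the RTL violates it.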

  • What is a sanity check? I use the following checks when I verify my assertions. The TB basically contains the instantiation of the DUT, variable declarations and clocks, and an initial statement to generate constrained-random tests. Below is my template for that initial statement. The randomization weights can be adjusted. You’ll be surprised at the number of errors you can catch.

initial begin
  bit v_a, v_b, v_err;
  repeat (200) begin
    @(posedge clk);
    // Scope randomize with weighted distributions; tune the weights as needed
    if (!randomize(v_a, v_b, v_err) with {
          v_a   dist {1'b1 := 1, 1'b0 := 1};
          v_b   dist {1'b1 := 1, 1'b0 := 2};
          v_err dist {1'b1 := 1, 1'b0 := 15};
        })
      `uvm_error("MYERR", "This is a randomize error");
    a <= v_a;
    // Occasionally inject an error by inverting b
    if (v_err == 0) b <= v_b;
    else            b <= !v_b;
  end
  $finish;
end
  • Is it a good idea for them to use the UVM or SV infrastructure developed by the verification team?
    If the UVM model is available, I have no objections to using it. Typically, when I was doing designs, the DUT consisted of small partitions that were integrated as a whole to make the complete design. Will the UVM accommodate testing those partitions? What modifications do you need to make it work?

  • **My points:** There may be disagreement with what I am about to say, but I believe that before getting into simulation or formal verification, one needs a good set of assertions at the system and internal level to clarify the requirements and assumptions. Those assertions and the RTL code should be reviewed by engineers; those reviews address the RTL approach and the understanding of the requirements. The designer can do the sanity check during the design process and after the reviews. The TB test team can then take it from there.

Ben Cohen
http://www.systemverilog.us/ ben@systemverilog.us
** SVA Handbook 4th Edition, 2016 ISBN 978-1518681448

  1. SVA Package: Dynamic and range delays and repeats SVA: Package for dynamic and range delays and repeats - SystemVerilog - Verification Academy
  2. Free books: Component Design by Example https://rb.gy/9tcbhl
    Real Chip Design and Verification Using Verilog and VHDL($3) https://rb.gy/cwy7nb
  3. Papers:

Udemy courses by Srinivasan Venkataramanan (http://cvcblr.com/home.html)
https://www.udemy.com/course/sva-basic/
https://www.udemy.com/course/sv-pre-uvm/


In reply to ben@SystemVerilog.us:

Thank you very much for your instructive answer, Ben.
Just to make sure that I understand you correctly: what you mean by a sanity check is applying some simple constrained-random inputs to the DUT and monitoring the assertions to detect unexpected behavior, right? This is what I understand from the example you gave above. The assertions are used during the sanity check so that the check goes faster and without a need for a reference model of the design. If this is the case, then the designer doesn’t need to actually compare the DUT output to some reference model’s output during the sanity check; he can be happy if no failures are reported by the assertions. Then he can deliver the code to the verification team. Please correct me if I misunderstood you.
Thanks

In reply to Farhad:

… what you mean by a sanity check is applying some simple constrained random inputs to the DUT and monitor the assertions to detect unexpected behavior, right?

[Ben] This is correct to some extent. The goal is obviously to have an RTL that meets the requirements. SVA facilitates that aspect because it forces the writer of those assertions to clearly document those requirements and assumptions to the best of his understanding.
A review of SVA and the RTL architecture is necessary, even prior to coding because the design would then be better characterized to meet those requirements. As you know, SVA is just a short-form executable notation to define requirements, but a lot can also be unsaid or assumed, and that is why a review is necessary.
The use of constrained-random tests for a partition facilitates the generation of test vectors; it also provides a good mix of conditions. I would use this approach to initially test the SVA. You can also use constrained-random tests to drive the inputs to the partition to see how the RTL responds. You most likely will get passes and fails, and you can examine many of those for quality. You may also have to write some directed tests as needed.
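A directed test targeting a specific scenario might look like the following sketch (reusing the a, b, and clk names from the template above; the targeted corner case is hypothetical):

// Directed test for a corner case the random mix may rarely hit:
// back-to-back flips on b while a is held high.
initial begin
  @(posedge clk); a <= 1'b1; b <= 1'b0;
  repeat (4) begin
    @(posedge clk); b <= !b;  // consecutive transitions on b
  end
  @(posedge clk); a <= 1'b0;
end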
I believe that this approach will increase your level of confidence in the quality of your design.

… The assertions are used during the sanity check so that the check goes faster and without a need for a reference model of the design. If this is the case, then the designer doesn’t need to actually compare the DUT output to some reference model’s output during the sanity check, he can be happy if no failures are reported by the assertions. Then he can deliver the code to the verification team.

If you have a reference model and a set of tests that reflects the system under normal and error conditions, definitely use that. Chances are you may not have this. The constrained-random tests approach is a first stab at it for your partition. Refinements on that can be generated by tuning the randomized weights and by directed tests.
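If a reference model does exist, a minimal self-checking sketch could look like this (ref_model and dut_out are hypothetical names, not from the template above):

logic exp;
always @(posedge clk) begin
  // Compare the DUT output against the prediction made last cycle
  if (dut_out !== exp)
    $error("mismatch: dut_out=%0b expected=%0b", dut_out, exp);
  exp <= ref_model(a, b);  // golden prediction for the next cycle
end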

I shared your question on LinkedIn and received some very interesting feedback; a copy is below.

  1. Bert Verrycken, 23+ yrs of semiconductors.
    We went from all-in-one front-end engineers to partitioning of design, verification, synthesis/STA and DFT. (Hyperbole warning!) The driver is cost, since multi-skilled people are performing one task but need to be paid for their expertise. Thus single-skilled engineers are cheaper, but a designer doesn’t know how to verify and a verification engineer doesn’t know how to design anymore. So designers make a complex design and throw it over the wall to verification without pipe cleaning it. And pipe cleaning usually means some form of directed testing or a version of the eventual unit verification that the verification team is going to do anyway. Then you get discussions. Design engineers want access to the testbenches to verify changes they are making to the design, but the verification team doesn’t want that (it will be more like hacking the TB than implementing extra TB functionality). But the verification team has other priorities or is otherwise engaged. It makes more sense than ever to do pipe cleaning of the RTL before it is handed off to verification. But how to organize this without duplicating the verification team’s work is a balancing act.
  2. Jeremy Ralph, Principal Verification Engineer
    Excellent topic! In my experience requiring designers to do a basic verification (dynamic non-UVM sanity check or FPV) before throwing it over the wall to the verification team is far superior to throwing untested RTL. FPV is really really good if the designer has that skill.
    I don’t know how any designer can deliver code that they don’t test, but that’s just the way I operate… iterations of code and test as I tinker away.
  3. Paul Scheidt, Staff Engineer
    A designer doesn’t need to know UVM to write a unit test bench to pipe clean their design. Straight SystemVerilog or Verilator will do just fine. It’s a requirement on my team for designers to sanity check their work. That’s not DV’s job.


In reply to ben@SystemVerilog.us:

Thank you very much for your comprehensive answer, Ben.