Is there any good resource that covers performance verification at the block/system level?
Most UVM/SystemVerilog discussions focus on functional verification, and I can't find a good discussion or tutorial on this topic (Google, this forum, etc.).
For example, how do you verify an arbiter's or scheduler's performance in a UVM/SystemVerilog environment?
In reply to javatea:
By performance, I assume you mean having enough resources to handle the traffic and load on the system.
A generic example would be a restaurant: can it handle the traffic and demand? Do I have enough ovens, burners, cooks, tables, waiters, etc.? Also, what are the best- and worst-case latencies? You ask similar questions if, instead of a restaurant, you are dealing with a CPU or a bus interface.
The way to handle this class of problems is to abstract the resources in terms of consumption and response time, and to drive this abstracted model with demand loads that match the expected traffic.
SystemVerilog lets you model the abstracted parts with features like queues and forked tasks. It also gives you constrained-random transactions to model the load, and coverage capabilities to gather the statistics.
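To make this concrete, here is a minimal sketch of such an abstracted model: a producer pushes timestamped requests into a queue at randomized intervals (the load), and a consumer services them with a fixed delay (the abstracted resource) while collecting latency statistics. All names and numbers here (the 5-unit service time, the 1-10 inter-arrival range, 100 requests) are illustrative assumptions, not anything from a real design.

```systemverilog
// Hypothetical sketch: abstract producer/consumer performance model.
// The queue holds request timestamps; the consumer models a resource
// with a fixed service time and records per-request latency.
module abstract_perf_model;
  int unsigned req_q[$];             // timestamps of outstanding requests
  int unsigned sum_lat, max_lat, n;  // latency statistics

  // Producer: drives the load with randomized inter-arrival gaps.
  task automatic producer(int count);
    repeat (count) begin
      #($urandom_range(1, 10));      // assumed arrival distribution
      req_q.push_back($time);
    end
  endtask

  // Consumer: abstract resource with a fixed 5-unit service time.
  task automatic consumer(int count);
    int unsigned t0, lat;
    repeat (count) begin
      wait (req_q.size() > 0);       // block until a request is pending
      t0 = req_q.pop_front();
      #5;                            // assumed service time
      lat = $time - t0;
      sum_lat += lat;
      if (lat > max_lat) max_lat = lat;
      n++;
    end
  endtask

  initial begin
    fork
      producer(100);
      consumer(100);
    join
    $display("avg latency = %0d, max latency = %0d", sum_lat / n, max_lat);
  end
endmodule
```

In a real environment the fixed `#5` would be replaced by the DUT (or a calibrated model of it), and the producer by constrained-random sequences shaped to the expected traffic profile.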
Thanks for your feedback. An abstract model of producing/consuming rates is one of the key points, and figuring out the corner-case and max/min performance scenarios is another topic in itself.
Doing a good job of verifying architectural/system performance is very challenging.
Appreciate any good link/paper/blog.
At a very high level, performance can be categorized into throughput and latency.
If there are targets for these, they have to be met.
On top of those two, you have to add application-specific performance numbers.
This is the tricky part, as these vary from application to application.
For example, in case of wireless functions, latency is highly critical and is actually specified in the standard itself.
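Where the spec pins down a latency budget like that, a monitor can check every observed transaction against it. A minimal sketch, assuming a monitor that records request and response timestamps; `MAX_LAT`, the function name, and the budget value are all made-up placeholders, not from any real standard:

```systemverilog
// Hypothetical sketch: flag any transaction whose measured latency
// exceeds a spec-mandated budget. MAX_LAT is an assumed figure.
module latency_check;
  parameter int unsigned MAX_LAT = 100;  // time units; assumed budget

  // Called by a monitor when it pairs a response with its request.
  function automatic void check_latency(int unsigned req_time,
                                        int unsigned rsp_time);
    int unsigned lat = rsp_time - req_time;
    if (lat > MAX_LAT)
      $error("latency %0d exceeds budget %0d", lat, MAX_LAT);
  endfunction
endmodule
```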
Why don't we normally see much performance verification in simulation?
Performance verification requires appropriate traffic-loading conditions, which are difficult to create in simulation. It may take a long time to reach such conditions; for example, 2-3 seconds of actual simulated time may be needed to build up proper loading in the design.
That said, we have often ended up doing some performance verification anyway. Latency measurement is frequently insensitive to design load, so we have measured latency in simulation. For throughput, we have mostly done piecemeal measurements against block-level specifications; if those targets are met, we assume it is fine.
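That piecemeal throughput measurement can be sketched as a windowed beat counter checked against a block-level target. All names and numbers below (`WINDOW`, `MIN_BEATS`, the monitor interface) are assumptions for illustration:

```systemverilog
// Hypothetical sketch: count accepted data beats per fixed window and
// compare against an assumed block-level throughput target.
module tput_check;
  parameter int unsigned WINDOW    = 1000; // time units per window; assumed
  parameter int unsigned MIN_BEATS = 800;  // assumed block-level target

  int unsigned beats;
  bit beat_seen;   // assumed: a monitor pulses this on each accepted beat

  always @(posedge beat_seen) beats++;

  initial forever begin
    #WINDOW;
    if (beats < MIN_BEATS)
      $error("throughput %0d beats/window below target %0d",
             beats, MIN_BEATS);
    beats = 0;     // start the next measurement window
  end
endmodule
```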
However, you cannot sign off a design with only basic testing like that. It is for this reason that many performance tests are delegated to emulation or prototyping platforms these days.