Hi. I’m new to UVM and verification in general, and I’m learning as I go on a project I’m working on. Here is a challenge we’re facing on the project, and I hope to hear some feedback on our proposed approach.
We’re obtaining an IP from a provider. The provider’s entire SystemVerilog VIP is written in UVM. The interface between the stimulus and the DUT is proprietary, and the response path uses a second proprietary interface. In our project, the IP will be modified by removing the proprietary interfaces and replacing them with industry-standard AHB and AXI interfaces. Eventually, we want the stimulus agent to be an AHB master agent. The AXI interface carries the response, and the AHB side is the master with respect to the AXI side; the DUT is a slave with respect to the AHB agent.
We would like to leverage the test suite provided by the IP provider and keep the provider’s UVM agents for driving the DUT, thereby minimizing the changes to the testbench itself. However, because the driver has a proprietary interface to the DUT, while our design will have AHB/AXI interfaces to the DUT, our plan is to convert the lowest-level stimulus generator to drive the DUT in AHB terms instead of the proprietary protocol. That is, we want to keep the higher-level stimulus generator classes from our IP provider, but at the lowest level translate from the proprietary protocol to AHB/AXI in order to drive our DUT. Would this approach work? Should we be able to use off-the-shelf SystemVerilog AHB agents, or should we use a totally different AHB agent under this type of interface conversion?
The question you should ask yourself is: what are you trying to verify? As I read your project’s objectives, you are converting from proprietary interfaces to industry-standard AXI and AHB interfaces. If nothing else is changing, then that would seem to be where the risk in the project lies.
When using a proprietary interface you can do your own thing, and you often know the bounds of the problem; if something doesn’t quite work, you either leave it out of the specification or it becomes a feature. When you move to industry-standard interfaces the game changes: there is the published specification, and there is the interpretation of the specification. Depending on the application of the design, you will probably want to cover all possible variants of transfers over the protocol (compliance).
Therefore, I’d advocate using third-party VIP for the AXI and AHB interfaces, and I would look at abstraction techniques such as layering the stimulus, so that you can transition from the proprietary interfaces to AXI and AHB by swapping the agents without having to change the stimulus.
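To make the layering idea concrete, here is a minimal sketch of the standard UVM translator-sequence pattern: a sequence running on the AHB agent’s sequencer pulls items from an upper-layer sequencer carrying the proprietary transactions and converts each one into an AHB transfer. All class and field names here (`prop_item`, `ahb_item`, `addr`, `hwrite`, `data`) are placeholders, not the vendor’s or any particular VIP’s actual names.

```systemverilog
// Translator sequence: runs on the downstream (AHB) sequencer and
// consumes proprietary items from an upstream sequencer.
class prop_to_ahb_seq extends uvm_sequence #(ahb_item);
  `uvm_object_utils(prop_to_ahb_seq)

  // Handle to the upper-layer sequencer producing proprietary items;
  // set by the environment before starting this sequence.
  uvm_sequencer #(prop_item) up_sequencer;

  function new(string name = "prop_to_ahb_seq");
    super.new(name);
  endfunction

  virtual task body();
    prop_item p;
    ahb_item  a;
    forever begin
      up_sequencer.get_next_item(p);        // pull a proprietary item
      a = ahb_item::type_id::create("a");
      start_item(a);
      // Map proprietary fields onto the AHB transfer (placeholder mapping)
      a.addr   = p.address;
      a.hwrite = p.is_write;
      a.data   = p.payload;
      finish_item(a);
      up_sequencer.item_done();             // release the upper layer
    end
  endtask
endclass
```

With this structure, the provider’s sequences keep running unchanged on the upper sequencer, and moving to a different bus later means writing only another translator sequence.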
I have a different viewpoint on this, however. The programming interface currently used by the IP is proprietary, and the interface the IP uses to talk to the DRAM is a proprietary AXI interface. From a stimulus point of view, the VIP comes with a “CPU agent” that mimics what is being programmed.
We will be modifying the IP so that the programming interface is AHB (as opposed to the proprietary one) and the link to the external world for data transfer is AXI (instead of their proprietary one). As you mentioned, the biggest thing we need to test in the DUT is the changes for AHB and AXI. Since the master interface BFM is simple, and we want to reuse the tests, the best approach seems to be to keep the CPU agent and all its classes at the higher levels, while at the lowest level swapping the driver for one that speaks AHB.
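The driver swap described above can be done without touching the vendor’s agent or tests by using a UVM factory type override. This is only a sketch under assumed names: `vendor_cpu_driver`, `vendor_item`, and `ahb_if` are hypothetical stand-ins for the provider’s driver class, its sequence item, and our AHB signal interface.

```systemverilog
// Replacement driver: same sequence-level API as the vendor driver,
// but the pin-level activity is re-implemented in AHB terms.
class ahb_translating_driver extends vendor_cpu_driver;
  `uvm_component_utils(ahb_translating_driver)

  // Virtual interface to the AHB signals instead of the proprietary bus
  virtual ahb_if vif;

  function new(string name, uvm_component parent);
    super.new(name, parent);
  endfunction

  // Override the pin wiggling only; sequences above are untouched.
  virtual task drive_item(vendor_item item);
    // ... translate item into AHB address/data phases on vif ...
  endtask
endclass

// In the test's build_phase, before the vendor agent is built:
//   set_type_override_by_type(vendor_cpu_driver::get_type(),
//                             ahb_translating_driver::get_type());
```

Because the override is applied through the factory, the vendor agent’s `build_phase` still creates what it thinks is its own driver, and everything above the driver (sequencer, sequences, tests) is reused as-is.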
From an AXI perspective, we’ll have a fully featured AXI slave BFM in which we’ll vary everything the AXI spec allows us to vary. This, in my mind, might work better than trying to retrofit a fully featured AHB agent.