How do people use clocking blocks in synthesizable interfaces?

I’ve recently discovered that adding a clocking block to an interface, and having the clocking block drive an output, is akin to adding a driver to that interface signal.

LRM 14.6: “For each clocking block output whose target is a net, a driver on that net shall be created. The driver so created shall have (strong1, strong0) drive strength and shall be updated as if by a continuous assignment from a variable inside the clocking block.”
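To make that concrete, here is a minimal sketch (the interface and signal names are my own). Because data is a net, the clocking block output creates exactly the (strong1, strong0) continuous driver the LRM describes:

```systemverilog
interface bus_if (input logic clk);
  wire  data;    // net target: the clocking block output below adds a driver to it
  logic valid;

  clocking cb @(posedge clk);
    default input #1step output #2;
    output data;   // per LRM 14.6, this creates a continuous driver on "data"
    input  valid;
  endclocking
endinterface
```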

If you are content with using interfaces as top-level test harness constructs with a single modport direction… great. But this is super frustrating if you want to use the interface internally for synthesis, because…

The driver created by the clocking block now conflicts with the drivers in the synthesizable code, and you get a (suppressible) error.
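For example (a hypothetical module reusing the sketch above), the synthesizable side adds its own driver to the same net, and it now collides with the implicit driver the clocking block created:

```systemverilog
module producer (bus_if bus);
  logic data_q;

  always_ff @(posedge bus.clk)
    data_q <= ~data_q;      // synthesizable logic computing the value

  // Second driver on the same net that bus_if.cb already drives:
  // most tools report the multiple drivers with a (suppressible) error.
  assign bus.data = data_q;
endmodule
```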

Things are worse if you want to use a single interface with a modport to declare the direction, as now you might want to drive every pin from a clocking block. AFAIK there is no way to associate a clocking block with a modport.

Now, this only gets flagged as a suppressible error… so you can still use the clocking block… but YUCK.

I dealt with this by just using the clocking block as a monitor (everything an input) and then manually adding the output delays everywhere my driver drives an output. As a designer this feels quite frustrating, because it felt like this was exactly the convenience clocking blocks were supposed to provide.
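Here is a sketch of that workaround (names are mine): the clocking block only samples, and the output skew a clocking block output would have applied automatically is re-created by hand in the drive path:

```systemverilog
interface bus_if (input logic clk);
  logic data, valid;

  // Monitor-only clocking block: everything is an input, so no driver is created.
  clocking mon_cb @(posedge clk);
    default input #1step;
    input data, valid;
  endclocking

  // Hand-written drive task: the output delay is added manually instead.
  task automatic drive_data(input logic d);
    @(posedge clk);
    data <= #2 d;   // manually re-created output skew (2 time units)
  endtask
endinterface
```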

I’m guessing that this requirement comes from the simulator’s need to optimize, and from wanting simple rules about who is a driver, so it can remove intermediate steps?

But I gather I’m not the only one using interfaces to connect my design internally. It’s so handy to have the interfaces inside the design, so I can just point to them. But if I have to choose between the convenience of using the clocking block to drive outputs and using interfaces to connect my synthesizable blocks, I’m going to choose to connect my synthesizable blocks. It feels like an artificial choice, though… Clocking blocks are purely a simulation-only construct; I keep mine inside a synthesis translate_off region.

Why aren’t clocking block drives treated like any other assignment from a class to an interface signal, i.e. a deposit, instead of a full-on driver? Is it because counting how many depositors there might be on an interface member is difficult, which makes optimization difficult?
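(For reference, the translate_off fencing mentioned above looks roughly like this; the exact pragma spelling varies by tool:)

```systemverilog
// Inside the otherwise-synthesizable interface body:

// synthesis translate_off
clocking mon_cb @(posedge clk);
  default input #1step;
  input data, valid;
endclocking
// synthesis translate_on
```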

Back to my title question.

How do other people deal with this?
(1) Do you not use clocking block drivers on internal interfaces?
(2) Do you just suppress the error?
(3) Is there a magical workaround to this issue I don’t know about? (hopeful!)

The recommended approach is to use an interface intended purely for the design, containing only the required signals and modports.

For verification, a BFM interface containing the required tasks/functions/clocking blocks can be used, which references the design interface.

This ensures a separation of intent: the design interface is only modified by the designer, and the verification interface is only modified by the verification engineer.
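A minimal sketch of this split (all names are hypothetical; one common way to reference the design interface is to pass it into the BFM interface as an interface port and bind the clocking signals hierarchically):

```systemverilog
// Design interface: signals and modports only, fully synthesizable.
interface bus_if (input logic clk);
  logic data, valid;
  modport src (output data, valid);
  modport dst (input  data, valid);
endinterface

// Verification-only BFM interface: owns the clocking block and driver
// tasks, and references the design interface through its port.
interface bus_bfm (bus_if bus);
  clocking cb @(posedge bus.clk);
    default input #1step output #2;
    output data  = bus.data;    // hierarchical clocking item targets
    output valid = bus.valid;
  endclocking

  task automatic send(input logic d);
    cb.data  <= d;
    cb.valid <= 1'b1;
    @(cb);
    cb.valid <= 1'b0;
  endtask
endinterface

// Testbench usage (sketch):
//   bus_if  dif (clk);
//   bus_bfm bfm (dif);
```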

Thanks for the reply, cgales!

I can see using separate interfaces for verification and design, especially if you are buying third-party VIP and it comes with its own set of interfaces.

I forget that a sizable portion of the industry has siloed off into verification/design specialization (especially the readers of this forum), and there might be some needed separation.

I had a dream I could just use a virtual interface to point to the interfaces in my design and pass it to my agent but, to quote Fantine from Les Misérables, “Life killed the dream I dreamed.”