Changing clock frequency during simulation run

Hi, I am new to OVM and have a specific question about the DUT and the environment I am creating for it:

The DUT is a serial flash memory that accepts, for example, a READ instruction at one clock frequency but outputs data at a different clock frequency.
Currently, I am generating a constant-frequency clock from the top-level file.

My question is: how do I set up the environment so that the clock frequency can be changed during the simulation run?

There is nothing magic about the clock. Your tests can just drive the clock through a virtual interface, though that could be slow.

One of my recent projects had many clock domains and I needed to test small PPM differences between them. For that project I wrote a clock generator and instantiated several of them at the top, each controlled through a virtual interface that dictated frequency, PPM error, drift, jitter, etc. It was nice but a bit of overkill.
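As a rough illustration of what such a generator might look like, here is a minimal sketch. All names (`clk_ctrl_if`, `period_ns`, `ppm_error`) are my own invention, not from any standard library, and a real version would add drift and jitter on top:

```systemverilog
// Hypothetical sketch of a clock generator controlled through an interface.
interface clk_ctrl_if;
  realtime period_ns = 10ns;   // nominal period, adjustable at run time
  real     ppm_error = 0.0;    // small systematic frequency error in PPM
endinterface

module clk_gen (clk_ctrl_if ctrl, output bit clk);
  initial forever begin
    // apply the PPM error to the nominal half-period
    #(ctrl.period_ns * (1.0 + ctrl.ppm_error * 1.0e-6) / 2.0);
    clk = ~clk;
  end
endmodule
```

A test holding a virtual handle to `clk_ctrl_if` can then change `period_ns` or `ppm_error` at any point in the run and the generator picks up the new values on the next half-period.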

For a later project, I included the standard clock inside top, but I added an interface with enable and clk wires. If “enable” is asserted then the “clk” wire in that interface drives the DUT’s clock. I threw that virtual interface out into the config system with set_config_object() so a test can grab it if necessary. (Kind of like an analysis port publishes information without knowing or caring who is listening.) Under that system, any test can have complete control of the clock when it wants, but “ordinary” tests can just accept the basic clock.
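A bare-bones sketch of that enable/clk scheme might look like the following (signal and instance names are illustrative, not from the actual project):

```systemverilog
// Hypothetical sketch of the enable/clk override described above.
interface clk_override_if;
  logic enable = 0;   // when asserted, 'clk' below drives the DUT's clock
  logic clk;          // test-driven clock
endinterface

module top;
  bit default_clk;
  always #5 default_clk = ~default_clk;   // the ordinary, fixed clock

  clk_override_if ovr ();
  // The DUT sees the test's clock only while the override is enabled
  wire dut_clk = ovr.enable ? ovr.clk : default_clk;

  // A virtual handle to 'ovr' would then be published with
  // set_config_object() (wrapped in an ovm_object container, since a
  // virtual interface is not itself an ovm_object) so any test can grab it.
endmodule
```

Tests that never assert `enable` simply run on `default_clk`, which is what makes the "ordinary tests don't have to care" property work.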

Thanks for the reply.

In this specific case, one sequence (READ instruction code + address) is sent from the test to the driver. One clock frequency is used for the instruction and address, and then the output data is collected at a different clock frequency.
So I think I will have to change the frequency from the driver somehow, because when the driver finishes driving the instruction and address, it must somehow signal the clock generator in the top file to change the clock frequency at that exact moment.

I am thinking of creating a task, called from the driver, that sets the clock to the appropriate frequency (something like: if it is a READ and the instruction and address have been driven, call this task to change the clock frequency).

One clock frequency is used for instruction and address and then output data is collected on different clock freq. So I think I will have to change the freq from the driver somehow

Clocks don’t usually change that way in the real world, so that’s a very strange DUT. I would advise you to double (and triple) check that you really need to alter the clock frequency in such a dynamic way. Are you absolutely certain there aren’t two different input clocks to this DUT?

If the fundamental clock frequency must change according to the operation, it would make perfect sense for the driver to adjust the clock frequency according to the type of operation it’s injecting. You would need some kind of mechanism to alter the clock rate in the virtual interface – it could be as simple as a single wire to select between clock rate A and B.
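As a sketch of that single-wire mechanism, assuming a select signal I've called `rate_sel` and purely illustrative periods:

```systemverilog
// Hypothetical sketch: one wire in the interface selects the clock rate.
interface dut_if;
  logic rate_sel = 0;   // 0 = rate A (instruction/address), 1 = rate B (data out)
  logic clk      = 0;
endinterface

module clk_gen (dut_if dif);
  initial forever begin
    // half-period values are illustrative only
    if (dif.rate_sel) #4ns  dif.clk = ~dif.clk;   // rate B
    else              #10ns dif.clk = ~dif.clk;   // rate A
  end
endmodule
```

The driver would then flip the select through its virtual interface after the instruction and address phase, e.g. something along the lines of `vif.rate_sel = 1;` once the last address bit has been driven.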

But this sounds very, very odd.

I double-checked and I agree, it is very strange (so much for clock domain crossing, I guess).
But I think the bottom line is that the DUT does not care whether the frequency changes, because it waits for the rising clock edge either way. It is up to the testbench to incorporate this kind of clock generation.

Thanks for your help.

By the way, when creating the SystemVerilog interface, should the clock be an input to the interface:

interface dut_if (input clk);

endinterface

or is the clock one of the signals in the interface:

interface dut_if;
logic clk;

endinterface

I’m now terribly curious, but this treads close to what might be considered proprietary information so you don’t need to answer. In the real world, how does the clock change frequency so quickly? If this were some kind of power-saving mode, where the DUT was still able to read while running at a lower frequency (but it could not write), it might make some sense. But I’ve never heard of a device that dynamically switched frequencies from one operation to the next.

By the way, when creating the SystemVerilog interface, should the clock be input to the interface […] or is the clock one of the signals in the interface

That’s a good question, and I’m not 100% confident in my own answer, but I’ve always created interfaces without ports (the IEEE 1800/D8 spec calls these ‘simple interfaces’). The clock is just another wire to me – just another stimulus that I’m allowed to control to steer the DUT into a dark corner (or into the ditch).

Typically, you use ports of an interface when signals need to be shared with another interface or module. A global clock or reset would be an example. You would not want a separate reset signal inside each interface instance.
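To make the distinction concrete, here is a small sketch of the port style, where one clock generated in top is shared by several interface instances (the `req`/`ack` signals are just illustrative):

```systemverilog
// Sketch: an interface port lets multiple instances share one global clock.
interface dut_if (input logic clk);
  logic req, ack;   // illustrative DUT signals
endinterface

module top;
  bit clk;
  always #5 clk = ~clk;   // one global clock

  dut_if if_a (clk);      // both instances see the same clock,
  dut_if if_b (clk);      // with no duplicated generator logic
endmodule
```

With the portless style, each interface instance would instead own its own `logic clk`, which is exactly what you want when each agent should be free to drive its clock independently.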

Dave