Chip-level simulation optimization

The most general way to think about this is: how can you reduce the number of events in your simulation?

You absolutely should insert log markers that help you track the windows of time where the simulation is doing things you care about, versus where it’s just crunching away to set up for the next interesting thing. Then you need to find ways to accelerate or skip the windows that aren’t interesting.
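
Something as simple as a greppable `$display` marker at each phase boundary works. A minimal sketch; the task name and tags here are my own, not any standard API:

```systemverilog
module tb;
  // Print a greppable marker with the current simulation time.
  task automatic phase_marker(string tag);
    $display("[PHASE] %s @ %0t", tag, $time);
  endtask

  initial begin
    phase_marker("reset_start");
    // ... reset and register configuration: setup, not interesting ...
    phase_marker("config_done");
    // ... the traffic we actually care about ...
    phase_marker("traffic_start");
    phase_marker("traffic_done");
  end
endmodule
```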

If you have a block that generates a lot of simulation events but ends up being a don’t-care for your test, then, as you say, black-boxing that block is one tool in the box. Or you can just force the inputs of that block to constants, which removes its simulation activity.
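
A sketch of the force option, dropped into the testbench top; the hierarchy (`dut.u_crypto`) and port names are placeholders for whatever the noisy block is in your design:

```systemverilog
initial begin
  // With its inputs pinned to constants, nothing toggles inside the
  // block, so it stops contributing simulation events.
  force dut.u_crypto.valid_in = 1'b0;
  force dut.u_crypto.data_in  = '0;
  force dut.u_crypto.clk_en   = 1'b0;  // if the block has a clock-gate enable
end
```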

You could try using your tool’s checkpoint/restore facility to save state after reset and start your tests from there.
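
The checkpoint mechanism itself is tool-specific (most simulators expose some form of save/restore from their control shell; check your tool’s docs), so this sketch just breaks to the tool prompt at the point worth snapshotting. `reset_done` is a placeholder signal:

```systemverilog
initial begin
  wait (dut.reset_done);  // reset and bring-up finished
  $display("[PHASE] checkpoint point @ %0t", $time);
  $stop;  // take the tool's checkpoint/save here; later runs restore from it
end
```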

You can use forces to load configuration values into registers, bypassing software-based configuration.
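
A sketch with made-up register names: instead of simulating the whole firmware write sequence, jam in the values the software would eventually have written. For static configuration it’s simplest to just leave the forces in place for the whole test:

```systemverilog
initial begin
  @(posedge dut.rst_n);  // wait for reset deassertion
  // Post-configuration values, applied directly.
  force dut.u_cfg.ctrl_enable = 1'b1;
  force dut.u_cfg.dma_burst   = 8'd16;
  force dut.u_cfg.irq_mask    = 32'h0000_000F;
end
```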

You can use forces to set up for and skip past state machine sequences you don’t care about, i.e., ones you’ve already verified in some other test.
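
Same idea for a long training/init sequence; the names and state encoding below are hypothetical. Force the state to where the sequence would have ended up, let it settle for a cycle, then release so the FSM runs normally from there:

```systemverilog
initial begin
  @(posedge dut.rst_n);
  force dut.u_link.state      = 3'd5;  // ST_LINK_UP in this made-up encoding
  force dut.u_link.link_ready = 1'b1;  // output the skipped sequence would have set
  @(posedge dut.clk);
  release dut.u_link.state;            // FSM continues from the forced state
  release dut.u_link.link_ready;
end
```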

RTL can be ifdef’d to replace long counter values with much shorter ones. Programmable counters can simply be programmed to smaller values.
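
A sketch of the ifdef approach; `FAST_SIM` is a made-up define you’d pass only to the fast builds (e.g. `+define+FAST_SIM`):

```systemverilog
`ifdef FAST_SIM
  localparam int unsigned WAKEUP_CYCLES = 100;        // shortened for simulation
`else
  localparam int unsigned WAKEUP_CYCLES = 1_000_000;  // real hardware value
`endif

logic [$clog2(WAKEUP_CYCLES)-1:0] wakeup_cnt;
logic wakeup_done;

always_ff @(posedge clk or negedge rst_n) begin
  if (!rst_n)            wakeup_cnt <= '0;
  else if (!wakeup_done) wakeup_cnt <= wakeup_cnt + 1'b1;
end
assign wakeup_done = (wakeup_cnt == WAKEUP_CYCLES - 1);
```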

You can re-evaluate your test plan. Are you sending lots of huge transactions through the design, where 90% of the transaction time isn’t interesting? Can you use small transaction sizes instead? Can you ifdef the RTL to use smaller-than-actual sizes? Obviously, don’t do all your testing that way.
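
For constrained-random stimulus, shrinking transactions can be as simple as an extra constraint compiled into the fast builds. A sketch; the class, field, and define names are hypothetical:

```systemverilog
class pkt_txn;
  rand bit [15:0] len;
  constraint c_legal { len inside {[1:4096]}; }
`ifdef FAST_SIM
  // Short payloads still exercise the control paths, at a fraction of
  // the data-phase events. Don't run all regressions this way.
  constraint c_small { len inside {[1:64]}; }
`endif
endclass
```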

There are a lot of strategies and no magic bullets. Use all of these approaches to the extent that they work and don’t hide real bugs.