What does vertical scalability mean in UVM?

I came across an article that says: “Virtual sequencer in UVM eliminates the problem of vertical scalability if the test is later encapsulated in a higher-level test.” I don’t understand what this means in the big picture.
Can someone please provide a better explanation for this?

In reply to pare:

For me this means the same as “vertical reuse.” There are many aspects of UVM that enable vertical reuse of UVM components from block level to top level. I’m not sure that virtual sequencers by themselves eliminate the problem of vertical scalability - I suspect the point is that without a virtual sequence the same coordination code would end up in a test, which is certainly less vertically reusable than a virtual sequence. One thing to keep in mind with vertical reuse, though, is that a lot of what was active at lower levels becomes passive at higher levels, meaning parts of the virtual sequence aren’t needed or reused there.
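
To make the “put it in a virtual sequence rather than in the test” idea concrete, here is a minimal sketch, assuming a typical virtual sequencer that only holds handles to the real interface sequencers. All the names (blk_virtual_sequencer, axi_sequencer, apb_config_seq, etc.) are made up for illustration, not taken from any particular environment:

```systemverilog
// Illustrative sketch only: a virtual sequencer holding handles to the
// interface sequencers, and a virtual sequence coordinating them.
class blk_virtual_sequencer extends uvm_sequencer;
  `uvm_component_utils(blk_virtual_sequencer)

  // Assigned by the environment in connect_phase
  axi_sequencer axi_sqr;
  apb_sequencer apb_sqr;

  function new(string name, uvm_component parent);
    super.new(name, parent);
  endfunction
endclass

class blk_bringup_vseq extends uvm_sequence;
  `uvm_object_utils(blk_bringup_vseq)
  `uvm_declare_p_sequencer(blk_virtual_sequencer)

  function new(string name = "blk_bringup_vseq");
    super.new(name);
  endfunction

  virtual task body();
    apb_config_seq  cfg = apb_config_seq::type_id::create("cfg");
    axi_traffic_seq trf = axi_traffic_seq::type_id::create("trf");
    cfg.start(p_sequencer.apb_sqr); // program the block over APB
    trf.start(p_sequencer.axi_sqr); // then drive AXI traffic
  endtask
endclass
```

Because this coordination lives in a sequence rather than a test, a higher-level virtual sequence can call it (or skip the parts that no longer apply) when the block is integrated.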

One of the most important things is that the block-level environment can work either passively or actively on each of its interfaces. At lower levels, such as block level, the environment actively drives the interfaces to provide stimulus to the DUT and has complete control. At higher levels of integration, where the block environment becomes part of a larger environment (perhaps a subsystem or chip-level environment), the block environment no longer has active control over all of its interfaces. For this reason, it’s important that all scoreboarding and functional coverage work passively, so the environment can be reused when it doesn’t control the interfaces and doesn’t know about any stimulus into the block other than what it can passively sniff off the buses via its monitors.
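
A hedged sketch of how that active/passive reuse is typically wired up follows. The agent, monitor, driver, and scoreboard names are hypothetical; the is_active / UVM_ACTIVE / UVM_PASSIVE mechanism itself is standard UVM:

```systemverilog
class blk_axi_agent extends uvm_agent;
  `uvm_component_utils(blk_axi_agent)

  axi_monitor   m_mon; // always built: passive observation feeds scoreboard/coverage
  axi_driver    m_drv; // built only when the agent is active
  axi_sequencer m_sqr; // built only when the agent is active

  function new(string name, uvm_component parent);
    super.new(name, parent);
  endfunction

  function void build_phase(uvm_phase phase);
    super.build_phase(phase);
    // Let the enclosing test/environment decide active vs. passive;
    // default to active for standalone block-level use.
    if (!uvm_config_db#(uvm_active_passive_enum)::get(this, "", "is_active", is_active))
      is_active = UVM_ACTIVE;

    m_mon = axi_monitor::type_id::create("m_mon", this);
    if (get_is_active() == UVM_ACTIVE) begin
      m_drv = axi_driver::type_id::create("m_drv", this);
      m_sqr = axi_sequencer::type_id::create("m_sqr", this);
    end
  endfunction

  function void connect_phase(uvm_phase phase);
    super.connect_phase(phase);
    if (get_is_active() == UVM_ACTIVE)
      m_drv.seq_item_port.connect(m_sqr.seq_item_export);
  endfunction
endclass

// In the block-level test's build_phase: this environment owns the interface.
uvm_config_db#(uvm_active_passive_enum)::set(this, "m_env.m_axi_agent", "is_active", UVM_ACTIVE);

// In the chip-level test's build_phase: the surrounding design drives this
// interface, so the reused block environment only observes it.
uvm_config_db#(uvm_active_passive_enum)::set(this, "m_soc_env.m_blk_env.m_axi_agent", "is_active", UVM_PASSIVE);
```

Because the monitor, scoreboard, and coverage paths never depend on the driver or sequencer existing, the same environment checks the block at both levels.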

Building vertically reusable components is much harder than building ones that are not.

Something else I would say about vertical reuse is that block environments must be able to handle prediction and checking of all the stimulus they may encounter, or they will break at higher levels of vertical integration. For example, if one has 5 tests for 5 different interfaces in isolation and then merges the results to get 100% functional coverage at the block level, there is a good chance the related prediction logic will fail when all 5 features occur concurrently at the higher level of integration. I have seen this many times, and it’s one reason (of many) that the block level should be heavily randomized and have fewer, broader tests rather than many narrow ones.
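
As a sketch of what that broader, randomized stimulus can look like (again with hypothetical sequence and sequencer handle names, reusing the blk_virtual_sequencer idea above), a single virtual sequence can drive several interfaces concurrently - exactly the situation that per-interface directed tests never expose the predictor to:

```systemverilog
class blk_concurrent_vseq extends uvm_sequence;
  `uvm_object_utils(blk_concurrent_vseq)
  `uvm_declare_p_sequencer(blk_virtual_sequencer)

  function new(string name = "blk_concurrent_vseq");
    super.new(name);
  endfunction

  virtual task body();
    // Run randomized traffic on several interfaces at the same time so
    // the scoreboard/predictor must handle overlapping transactions.
    fork
      begin
        axi_rand_seq s = axi_rand_seq::type_id::create("axi_s");
        s.start(p_sequencer.axi_sqr);
      end
      begin
        apb_rand_seq s = apb_rand_seq::type_id::create("apb_s");
        s.start(p_sequencer.apb_sqr);
      end
    join
  endtask
endclass
```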

In reply to jeremy.ralph:

That was very helpful Jeremy! Thank you.