"As verification engineers, we have to be able to accurately forecast the completion of our projects and to cope with problems that may occur. Unfortunately, there are severe consequences when we get it wrong."
As I write this, New England is bracing for a snowstorm that could bring as much as two feet of snow in the next day or two. By the time you read this, we'll know a) whether the forecasters were correct and b) how well we hardy New Englanders were able to cope. I often joke that I'm going to encourage my children to become meteorologists because that's the one job where, apparently, you can be consistently wrong and not suffer any consequences (except for Bill Murray in the movie "Groundhog Day"). As verification engineers, we have to be able to accurately forecast the completion of our projects and to cope with problems that may occur. Unfortunately, there are severe consequences when we get it wrong.
And the stakes keep getting higher. We've spoken for years about designs getting more complex, but it's not just the number of gates anymore. The last few years have shown a continuous trend toward more embedded processors in designs, which brings software increasingly into the verification process. These systems-on-chip (SoCs) also tend to have large numbers of clock domains, which require additional verification. On top of this, we add multiple power domains, so we need not only to verify the basic functionality of an SoC, but also to verify that the functionality remains correct when the power circuitry and control logic (much of which, by the way, is software) are layered on top. Who wouldn't feel "snowed under"?
In this issue, we're going to try to help you dig yourself out. As always, we've brought together authors from all over to give you their perspectives on key issues you're likely to face in assembling and verifying your SoC. Our featured article, "Using Formal Analysis to 'Block and Tackle'," comes from our friend Paul Egan at Rockwell Automation. One of the critical stages of the SoC process is putting the blocks together and verifying that they are connected correctly. This is a great application for formal analysis, and Paul shows how his team used formal to verify connectivity at both the block and chip levels. As you'll see, the process is the same regardless of the size of the block, and it also makes it easy to verify late-stage changes.
I mentioned how important software is in the SoC process. Our next article, from my Mentor colleagues Hans van der Schoot and Hemant Sharma, shows how "Bringing Verification and Validation Under One Umbrella" can both speed up functional verification and give you a platform on which your software team can validate the software pre-silicon. This early detection of hardware/software integration issues greatly simplifies post-silicon validation, since problems are much easier to debug pre-silicon.
Often when developing an SoC, we want to start with a system-level model of the design so we can analyze its performance and other characteristics to make sure it's right before we build it. As our next article, "System Level Code Coverage using Vista Architect and SystemC," shows, you can gain important insight into the completeness of your system-level testbench, as well as into whether certain blocks you've included in your design are actually contributing. You've probably used code coverage on RTL, but now you can apply it at the system level.
The Unified Power Format (UPF) is the standard for specifying low-power design intent in a way that is orthogonal to the RTL. The standard is being widely adopted and is being updated by the IEEE. In "The Evolution of UPF: What's Next?" my friend Erich Marschner describes some of the interesting enhancements in UPF 2.1 that help UPF model power-management effects more easily and accurately.
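To give a flavor of what "design intent orthogonal to the RTL" looks like in practice, here is a minimal UPF-style sketch: it defines one switchable power domain and an isolation rule, entirely outside the HDL source. All names (PD_core, u_core, iso_en) are illustrative, and exact command options vary between UPF versions and tools.

```tcl
# Define a power domain covering the instance u_core (names illustrative)
create_power_domain PD_core -elements {u_core}

# Declare a top-level supply port and net, and connect them
create_supply_port VDD
create_supply_net  VDD -domain PD_core
connect_supply_net VDD -ports VDD

# Clamp the domain's outputs to 0 whenever the isolation enable is high,
# so downstream always-on logic never sees floating values during power-down
set_isolation iso_core -domain PD_core \
    -isolation_signal iso_en -isolation_sense high \
    -clamp_value 0 -applies_to outputs
```

Because the power intent lives in a side file like this rather than in the RTL, the same design can be verified with different power architectures without touching the HDL.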