by Sreekanth Ravindran and Chakravarthi M.G., Mobiveil
High speed serial interconnect bus fabric is the SoC backbone, managing dataflow and keeping up with the dynamic bandwidth requirements of high speed applications. Verification of high speed interconnect IPs presents critical challenges, not only in complying with standards but also in ensuring that the design is robust and flexible enough to handle and manage a large volume of time-critical data transfers. Acquiring such expertise requires years of verification experience. In this article, silicon IP and platform solution provider Mobiveil shares its story of verifying high speed bus protocol standards like PCI Express and Serial RapidIO, including the considerations that come into play when verifying high speed designs. In addition, Mobiveil highlights the benefits of using the Mentor Graphics Questa Verification Platform, including Questa Advanced Simulator, Questa CoverCheck, and Questa Clock-Domain Crossing (CDC) Verification, which together facilitate smart functional verification, debug, and reporting for high speed designs.
Speeds of high speed interconnect bus fabrics have transitioned from hundreds of MHz to multiple GHz in a few short years. This is driven by the critical bandwidth requirements of applications that must interface with peripheral devices having widely varying needs for speed, QoS, classes of service, and so on. The fast-processor/slow-peripherals tradeoff is fast becoming a story of the past decade: peripheral speeds and the huge amounts of data transferred over the interface now resemble the data transfer between a processor and its immediate interfaces, such as RAM and cache. To enable real-time data processing applications, devices have to deal with vast amounts of data and need a reliable bus fabric delivering the lowest bit error rate (BER) possible. Over the last decade, bus fabric bandwidths have moved from tens of Gbps to hundreds of Gbps, an increase driven by the data density that comes with better fidelity, clarity and real-time data processing. Such a trend poses critical challenges in architecting and constructing the design and verification infrastructure, as it means dealing with higher throughput, more gates, and faster clocks. Attempts to meet demands for high performance can bump up against low power requirements, which call for careful construction of clocking logic and power distribution that is economical, viable and widely acceptable across applications. Most high speed standard specifications provide several features for improving throughput, power optimization, efficiency, and coherency. Depending on the application's requirements, design and verification engineers must carefully address the tradeoffs, which usually call for a highly configurable design and a complementary verification infrastructure.
Configurability In Design And Verification
Based in Milpitas, Calif., Mobiveil is an industry leader in licensing high-speed interconnect silicon IP cores. The Mobiveil team has successfully delivered such IP to customers worldwide for over a decade, in the process gaining an unparalleled understanding of what brings value to a customer procuring third-party IP. This experience has led to a unique perspective on how to architect a highly configurable design and verification infrastructure: one that is self-contained, with its own independent ecosystem, and that can fit into the customer's verification ecosystem with minimal changes while providing a high degree of vertical and horizontal reuse. The mantra "correct by construction for configurability" drives all work on configurable design and verification solutions at Mobiveil.
Executing verification goals requires good EDA infrastructure. The Questa Verification Platform provides such infrastructure, not only addressing most of the pain points but also enhancing design quality.
Verification Strategy And Goals
The goal of most design verification activities includes three key aspects:
- Check (compliance)
- Coverage (functional)
- Constraints (randomization)
Certainly vendors offering configurability in their IPs, where multiple features need to be supported, must account for all three aspects. For IP-level verification, even minute feature coverage details can prove critical when interoperability with different applications is taken into account. In contrast, verification goals may not be as stringent for system-level design work, where redundancy or data path limitations mean that certain traffic conditions never occur. Hence the importance of choosing the right strategy.
There has been a lot of buzz in the last few years about adopting a proper verification strategy, with particular attention paid to constrained random verification (CRV) and its ability to deal with check, coverage and constraints. CRV has taken precedence over the conventional directed/targeted verification approach, a move that carries risk, since redundancy in the simulators or the language can shift too much of the emphasis onto the constraints. This risk is high enough that, for high speed designs, engineering teams are ill-advised to adopt CRV as their sole verification strategy. CRV, by virtue of its advantages, belongs in almost every verification effort, though it requires that these key questions be answered:
- What should be "directed", and what should be "constrained"?
- What should be "randomized (directed/constrained)"?
- When and where should both converge?
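To make these questions concrete, here is a minimal SystemVerilog sketch, with hypothetical class and field names rather than anything drawn from a Mobiveil IP, showing how a single transaction class can serve both the directed and the constrained random sides of the strategy:

```systemverilog
// Hypothetical sketch: one transaction class serving the directed,
// constrained, and fully random phases of the strategy discussed above.
class pkt_txn;
  rand bit [1:0]  lane;         // four-lane link
  rand bit [10:0] payload_len;

  // "Constrained": keep stimulus inside legal protocol bounds.
  constraint legal_c { payload_len inside {[1:1024]}; }
endclass

module tb;
  initial begin
    pkt_txn t = new();

    // "Directed": pin the fields of interest, randomize the rest.
    if (!t.randomize() with { lane == 0; payload_len == 64; })
      $error("directed randomization failed");

    // "Constrained random": let the solver roam within legal_c.
    if (!t.randomize())
      $error("constrained randomization failed");
  end
endmodule
```

With the constraints relaxed or disabled, the same class also supports the directed random baseline phase described below.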
High speed designs with multiple-Gbps throughput mostly work with multiple clock domains, as clocking the entire design at GHz speeds makes no sense given power and technology constraints. As a technology-independent configurable design provider, Mobiveil must take care of clock partitioning and power requirements in its design and verification work. The goal is always for a given design to meet or exceed the specification margins, if any, for power requirements.
The pie chart below outlines the typical high speed design verification effort. Directed random is the preferred strategy during the initial phase of establishing the baseline requirement for the verification effort. This differs from conventional directed/targeted testing, where the time spent on testing is similar to that spent when using CRV infrastructure. It is important not to lose focus on the objective that the verification infrastructure be configurable, random and "correct by construction." Hence, establishing a baseline with directed random takes a little more time and more iterations than doing so with conventional directed/targeted testing. Constraints still must be established, though this requirement is relaxed almost to the point of being disabled. After the baseline phase comes constrained random testing, where constraints are in full swing and progress is actively evaluated by looking at coverage metrics. Analyzing the metrics provides ways to further iterate and update the verification plan. Multiple iterations of this process continue until coverage saturates, with functional coverage at almost 100% (a maximum deviation of 1%). The final phase may begin when code coverage is still only around 95% (a maximum deviation of -5%). This last phase consists of multiple iterations, which today are highly tool assisted. Intelligent tools such as Questa CoverCheck provide early visibility into redundant/dead code; other tools such as Questa inFact provide graph-based stimulus aimed at particular targeted coverage points.
High speed interconnect verification strategy pie
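The coverage metrics that drive these iterations come from functional covergroups. The following illustrative sketch, with hypothetical names and bins not drawn from any specific protocol, shows the kind of covergroup a constrained random campaign is measured against:

```systemverilog
// Illustrative covergroup (names and bins are hypothetical): the per-bin
// hit counts are the metrics the constrained random iterations above are
// measured against.
class pkt_cov;
  covergroup pkt_cg with function sample(bit [1:0]  lane,
                                         bit [10:0] payload_len);
    cp_lane : coverpoint lane;                 // one auto bin per lane
    cp_len  : coverpoint payload_len {
      bins small_pkt  = {[1:64]};
      bins medium_pkt = {[65:512]};
      bins large_pkt  = {[513:1024]};
    }
    lane_x_len : cross cp_lane, cp_len;        // corner-case combinations
  endgroup

  function new();
    pkt_cg = new();
  endfunction
endclass
```

Each regression samples the covergroup from a monitor, and the merged coverage database then shows which bins and crosses remain unhit, feeding the next iteration of the plan.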
Engineers planning verification of high speed serial interfaces must contemplate many details, including:
- Deal with multiple specifications, including any hardware design-specific details
- Try to account for any and all CDC paths
- Identify critical design elements through the data/control path
- Segment the verification plan
- Weigh the plan segments, feed the resulting data back into the verification plan, and add strongly typed assertions on all applicable interfaces
- Account for the performance (throughput, bandwidth, etc.) and power budgets, as these may be driven by the protocol specification; configurable designs should try to accommodate the trade-off considerations
Different teams (including architects, designers, verification, compliance, QA and customers) should review the verification plan for compliance with the verification goals. A "correct by construction" verification plan should ensure that iterations decrease towards the end of the verification task. These reviews lead to the "acceptance test criteria document" for the design under test, which gives a quantified and qualified representation of the verification goal and implies the necessary strategy to achieve it.
The Questa Advanced Simulator and Verification Run Manager solution is the right kind of platform to plan high speed interconnect verification. Its various features provide confidence to the ecosystem of customers, architects, designers, and other technical staff working on verification, compliance, implementation and QA. The Questa Verification Run Manager (QVRM) gives consistency across teams by providing a unified regression environment that can be integrated with cluster setups, as well.
Integration of verification planning and tracking
Building A Configurable Design And Complementary Verification Environment
Configurable blocks of design IPs allow customers to choose what functionality and features best suit their needs. This configurability should be easy to use and comprehend, and should come without any redundancy. There are multiple features and levels at which configurability gets defined. Different languages and methodologies allow for controlling configurability by means of soft and hard constructs, compiler directives, generates, etc.
Over time, Mobiveil has developed in-house configurability policies, practices and syntaxes based on its work building IP for multiple silicon-based designs. Hence at Mobiveil, the idea of configurability is included from the point of inception of the design itself. A high-speed interconnect definition, for instance, has around 2,000-odd taps for controlling configurability, yet remains simple to manage: default settings automatically arrange the design block, reducing the overhead of understanding each tap. Customers provide a set of options based on their acceptance criteria, set the parameters and pass them on; with configurability in place, generating a design that meets customer requirements becomes push-button simple.
This push-button approach also generates approximate data upfront that can help customers understand whether their selections are good (or help them add customization, if necessary). Available upfront data can include logical gate count, the maximum frequency at which the core can operate, power budget and more. Having such estimates enables customers to avoid iterations.
Hence, customers ultimately see only the code that fits their requirements. All redundant, dead and conditional code is out of the way, which makes it easier for customers to integrate these design blocks, especially into their tool chains.
The verification environment is also configured in a way that complements the design topology. Most of the configuration is hard configured, in the sense that there are no redundant object handles or code. This makes the verification environment easy to understand, enables reuse and, most importantly, results in optimized databases when these designs are simulated, giving optimum coverage results and minimal, near zero, excludes/waivers. Some amount of soft configurability remains, which may involve programming or setting certain design variables to a desired mode of operation.
Here is an example to help illustrate this homegrown configuration mechanism. Consider a design IP that operates on four lanes. This parameter is defined as hardware configurable, such that the design elements generate no redundant code for lanes beyond four. There are no generate/compiler directives to deal with; the customer sees the actual code for four lanes only. The verification infrastructure is likewise built to complement the design. A particular mode that operates this as a two-lane design (down-configure), for example, becomes a soft configurable option, programmed into the design and set by the verification environment.
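A minimal SystemVerilog sketch of this scheme, with hypothetical module and port names, might look as follows: the lane count is the hard, generation-time parameter, and the down-configure option is a run-time input:

```systemverilog
// Hypothetical sketch of the lane example: NUM_LANES is the hard,
// generation-time configuration (a 4-lane build contains no logic for
// lanes beyond four), while active_lanes is the soft, run-time option
// that down-configures the link, e.g. from four lanes to two.
module link_core #(parameter int NUM_LANES = 4) (
  input  logic                       clk,
  input  logic                       rst_n,
  input  logic [$clog2(NUM_LANES):0] active_lanes, // soft down-configure
  input  logic [NUM_LANES-1:0]       rx_valid,
  output logic [NUM_LANES-1:0]       lane_en
);
  always_ff @(posedge clk or negedge rst_n) begin
    if (!rst_n)
      lane_en <= '0;
    else
      // Only lanes below the configured count can ever be enabled.
      for (int i = 0; i < NUM_LANES; i++)
        lane_en[i] <= (i < active_lanes) && rx_valid[i];
  end
endmodule
```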
"Correct by construction" indicates that over subsequent specification updates, the designs are kept tidy, even if they contain multiple levels of features. At the same time, these revisions do away with features not necessary for a particular customer design. However, as work progresses all details about configurability options associated with the IP are presented. This skill set is built into the engineering team and has made Mobiveil successful with many design wins. Multiple variants of the Mobiveil IP design blocks are working effectively in a range of different products made by a host of technology firms. Hence Mobiveil's strategy of "correct by construction" for configurability seems to be working. As a result of this strategy, design cycles are extended during the construction time, but it is worth it in terms of allowing different customers to meet a range of preferences for optimizing their interconnect controllers.
Mobiveil RapidIO configurability at a glance
High-speed Serial Interconnect Verification Challenges
High-speed verification presents critical challenges (see bulleted list below) in ensuring the creation of a robust and flexible design verification environment.
- Meeting the coverage goals: Configurability in the verification environment is required to chase the cover points and groups applicable to achieving functional coverage. Questa verification technologies augment this task, in part with CoverCheck, which uses formal engines to mask unreachable code. The tool also provides stimuli for reachable code that is not yet tested, thus helping verification engineers modify the verification plan for coverage.
Mentor Graphics Questa CoverCheck enables verification teams to chase the right code coverage
- Choosing the right configurability with the right verification infrastructure: It is important to be able to reach the coverage goals described earlier and also to run the simulations with the right set of data, which should be optimized by virtue of construction. Questa Advanced Simulator provides such optimization options automatically and also facilitates interoperability with other platforms. This is critical in helping all stakeholders look at the right data, thus making debug easier and improving productivity.
- Partitioning the verification space to accommodate clock domain crossings: For power budgeting there should ideally be one high speed clock domain and one or more low speed clock domains, as determined by the architecture. The verification infrastructure should have a real-time understanding of these clocks and should model them as closely as possible to the real system. Identifying the various clock domains is extremely critical for understanding the design elements used to interface between these domains, including the control/data path, and for identifying corner/stress cases.
- Understanding design elements critical to stress testing: Verification needs to allow for early visibility into, and understanding of, critical design elements in the control/data path that can be origins of corner case behavior. Such details should be incorporated without fail into the verification plan, and updated accordingly if there are changes in the design. (When the architecture is sound, such updates should rarely be needed.) Knowing the corner cases upfront allows for proper construction of the verification infrastructure to address them, avoiding a last-minute search for corner case solutions. All this helps make sure that verification closure proceeds according to plan and is not clogged by ambiguity.
- Strict and stringent interface assertions: All interfaces to, from and within the design that are critical in the control/data path should have strongly typed assertions. This also allows for extending the verification to a formal infrastructure and to assertion-based verification (ABV), if required. Having strong assertions helps cover each and every possible scenario that might impact the interface; a minimal sketch appears after this list.
- Dealing with software programmable design configurability: Verification must deal with a multitude of registers. Managing so many registers, including front-door and back-door accesses, is very complex. Mentor Graphics UVM Register Assistant creates register models and blocks, and thus saves vast amounts of time; a minimal sketch of such a model follows the figure below.
UVM Register Assistant makes register management easier and friendlier
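For reference, here is a minimal hand-written sketch of the kind of UVM register model such a generator produces; the register and field names are hypothetical:

```systemverilog
// Minimal hand-written sketch of the kind of UVM register model a
// generator such as Register Assistant produces (register and field
// names are hypothetical).
import uvm_pkg::*;
`include "uvm_macros.svh"

class link_ctrl_reg extends uvm_reg;
  `uvm_object_utils(link_ctrl_reg)
  rand uvm_reg_field active_lanes;

  function new(string name = "link_ctrl_reg");
    super.new(name, 32, UVM_NO_COVERAGE);  // 32-bit register
  endfunction

  virtual function void build();
    active_lanes = uvm_reg_field::type_id::create("active_lanes");
    // 3-bit RW field at bit 0; reset value 4; has_reset=1, is_rand=1,
    // individually_accessible=0.
    active_lanes.configure(this, 3, 0, "RW", 0, 3'h4, 1, 1, 0);
  endfunction
endclass
```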
- Understanding the right clock and reset strategy: The random verification infrastructure must know the allowable margins and create random stimuli, even for targeted cases, with variances within these margins. Creating an acceptable duty cycle, and modeling the right kind of skew in clocks across different interfaces, allows certain corner cases to be caught upfront.
- Performance and latency monitoring in addition to protocol and data integrity checks: Qualification of these high speed IPs is measured in terms of these parameters, and any verification effort that does not account for them is ultimately insufficient. Immediate feedback on these parameters and test scenarios gets the right data to designers, who can then do the block-by-block work to improve the quality of the design IP. Well architected design blocks should not run into this problem, but building in these additional components allows the QoS to be monitored correctly.
- Interoperability with multiple third-party components, while keeping the verification infrastructure maximally independent of those components: Interoperability tests of third-party and homegrown components at the functional verification level are critical to avoiding iterations later on. Challenges include tuning the third-party components to match the configurability requirements of the in-house components, and optimizing testing to focus on the features of interest alone rather than on the whole design.
- Keeping the verification infrastructure reusable in the horizontal and vertical space: Customers want a verification infrastructure that allows plug-and-play integration from the IP level to the system level. This helps them leverage the verification infrastructure and turn the effort spent at the IP level into reuse at the system level. Plug-and-play verification infrastructure must be correct by construction, something generally achieved only after success on several interoperability platforms.
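As promised above, here is a minimal sketch of a strongly typed interface assertion: a simple handshake stability check, with hypothetical signal names not tied to any particular protocol. Bound into the design, a checker like this also carries over directly to formal/ABV flows:

```systemverilog
// Minimal sketch of a strongly typed interface assertion: once valid is
// asserted, data must hold steady until ready accepts it (signal names
// are hypothetical).
module handshake_checker (
  input logic        clk,
  input logic        rst_n,
  input logic        valid,
  input logic        ready,
  input logic [31:0] data
);
  property p_data_stable;
    @(posedge clk) disable iff (!rst_n)
      valid && !ready |=> valid && $stable(data);
  endproperty

  a_data_stable : assert property (p_data_stable)
    else $error("data changed while valid was stalled");
endmodule
```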
If a successful high-speed interconnect verification infrastructure has accounted for all of the above and has the relevant data to prove it, it may well be used by the next level(s) of integrators as well.
Verification Data For Analysis
The verification goal must define acceptance criteria for closure (signoff). The verification plan acts as the starting point for any iteration and defines the acceptance criteria of the test infrastructure. Individual tests are executed and coverage is measured across these multiple test runs. This coverage data is analyzed against the defined acceptance criteria (generally set at 100%). Functional coverage iteration repeats the flow back to the verification plan, whereas code coverage analysis can further be assisted by tools such as Questa CoverCheck. CoverCheck provides visibility into unreachable states/code in the design, which immediately get classified either as waivers/exclusions or as reachable states that are iterated back, thus refining or augmenting the test plan.
It is critical that this analysis (especially functional coverage) is performed on the actual data. If the verification environment is not configured correctly, engineers wind up hunting cover points that are not relevant for a particular design. Incorrect configuration of a verification environment might, for example, show N times more or fewer cover points than the correct number. To keep engineers focused on the actual required data, it is therefore important that all data relevant to functional coverage of the current design configuration be available through configurability options, and that any unwanted data be hidden.
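One way to keep only the relevant cover points visible is to build the coverage model from the same configuration values that shape the design. A hypothetical sketch:

```systemverilog
// Hypothetical illustration of hiding non-applicable coverage: the lane
// bins are created from the configured lane count, so a two-lane
// configuration never reports, or misses, cover points for lanes 2 and 3.
class link_coverage;
  covergroup lane_cg (int unsigned max_lane) with function sample(int unsigned l);
    cp_lane : coverpoint l { bins active[] = {[0:max_lane]}; }
  endgroup

  function new(int unsigned num_lanes);
    lane_cg = new(num_lanes - 1);  // e.g. num_lanes = 2 -> bins 0..1 only
  endfunction
endclass
```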
Specialization, Experience And Expertise
High-speed interconnect verification requires engineers equipped with specific domain expertise to develop a highly interoperable component that will, in all likelihood, be used in different target applications. This work also requires that verification engineers understand the dynamics of designs operating in the high GHz domain.
Though standards should ensure QoS and reliability by virtue of protocol guidelines, handling these speeds still requires unique domain expertise, such as modeling the dynamics of signal integrity, jitter and other artifacts of high speed interface verification. If these dynamics aren't accounted for, the designs can have interoperability issues on different platforms.
Domain expertise also matters in focusing on the right issue. Sometimes all features of the design must be covered and configured. Other times, certain features may be more important or might come with difficult corner cases. Identifying priorities and appropriately scheduling the verification effort requires experience handling high-speed interconnects.
It does not take much effort or many man-hours to establish the baseline for high speed verification; indeed, such baseline work takes a similar amount of time in many other types of designs. Modeling the verification environment to meet and match the actual dynamic environment, however, does bring challenges when it comes to the EDA tool chain. Having a simulator such as Questa Advanced Simulator saves quite a lot of time in closing on the acceptance metrics, in part with assistance from CoverCheck and Register Assistant. Performing CDC formal verification and identifying design elements of interest in these paths for stress testing is greatly simplified by the combined use of Questa CDC and Questa Formal Verification.
Verification iterations can be a nightmare if not well-architected and partitioned; such iterations can take two to three times longer than they should if not planned properly. Identifying tough spots early on requires background understanding of the dynamics of high-speed serial interconnects. It also requires knowledge of the physical (PHY) layer elements and of how the design translates in terms of BER, clock skew, jitter and signal integrity concerns, which at these speeds can be as tough as radio frequency (RF) verification. Having gone through these iterations and learned these guidelines the hard way makes Mobiveil an ideal partner for taking high-speed verification to the next level and leveraging the benefits of advanced verification concepts and powerful EDA tools.
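As an illustration of the PHY-level dynamics mentioned above, a testbench clock source can perturb its period with bounded random jitter so that synchronizers and elastic buffers are stressed. A hedged sketch, with hypothetical parameter values:

```systemverilog
// Hedged sketch of modeling clock imperfection in the testbench: each
// half-period is perturbed by a bounded random jitter (values hypothetical).
`timescale 1ns/1ps

module jittery_clk_gen #(
  parameter real NOMINAL_PS    = 1000.0,  // 1 ns period, ~1 GHz
  parameter int  MAX_JITTER_PS = 10
) (output logic clk);
  initial begin
    clk = 0;
    forever begin
      // Signed jitter in the range [-MAX_JITTER_PS, +MAX_JITTER_PS].
      int jitter_ps = int'($urandom_range(0, 2*MAX_JITTER_PS)) - MAX_JITTER_PS;
      #((NOMINAL_PS/2.0 + jitter_ps) * 1ps) clk = ~clk;
    end
  end
endmodule
```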
Integrated Verification Flow