-
by Darron May - Siemens EDA
Big data is a term that has been around for many years. The list of applications for big data is endless, but the process stays the same: capture, process, and analyze. With new, enabling verification solutions, big data technologies can improve your verification process efficiency and predict your next chip sign-off.
The ability to see gathered metrics over time can provide great insights into the process. Historical coverage data trended over time can, on its own, give an indication of how much more time is needed to complete sign-off. Being able to plot these individual metrics together on the same graph also reveals information that is often lost. Providing a big data infrastructure, built with state-of-the-art technologies, within the verification environment allows all verification metrics to be combined, so that resources are used as efficiently as possible and process improvements can be driven by predictive analysis.
Predictive analytics is a branch of machine learning (ML) and one of the key capabilities that makes big data so compelling when applied to the verification process.
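As a back-of-the-envelope illustration of what trending a single metric makes possible (a linear sketch for illustration, not the method described in the article): if functional coverage has been climbing at an average rate of $r$ percentage points per week over recent history, then the time remaining to reach a sign-off target $C_{\text{target}}$ from the current level $C_{\text{now}}$ is roughly

\[ t_{\text{remaining}} \approx \frac{C_{\text{target}} - C_{\text{now}}}{r} \]

Predictive models built on the complete metric history can of course do considerably better than this simple extrapolation, which is exactly where big data and ML add value.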
-
by Sumit Vishwakarma - Siemens EDA
As the world of technology continues to evolve, the way we design and verify circuits is also evolving. The next-generation automotive, imaging, IoT, 5G, computing, and storage markets are driving the strong demand for increasing mixed-signal content in modern systems on chips (SoCs). Mixed-signal designs are a combination of tightly interwoven analog and digital circuitry. There are two main reasons for the increased mixed-signal content in today's SoCs.
Firstly, machines today are consuming more and more real-world analog information, such as light, touch, sound, vibration, pressure, or temperature, and bringing it into the digital world for processing. Secondly, technology scaling is enabling the integration of more functionality, higher compute power, and lower power consumption in new-generation chips. While digital circuits have seen improved power, performance, and area (PPA) efficiency thanks to CMOS scaling, the same has not been true for analog circuits. As a result, integrated circuits that were primarily analog are transitioning into a new category of "digitally-assisted analog" to take advantage of the benefits of scaling.
-
by Sachin Mishra - Siemens EDA
PCI Express® (PCIe) announced its fourth generation (the PCIe 4.0 standard) in 2017. With PCIe Gen 3 the speed of operation was 8 GT/s (gigatransfers per second) and the bit error rate was manageable (on the order of 10⁻¹²), but as the data rate doubles with each successive generation, performance degradation becomes more pronounced for a variety of reasons: losses in the channel caused by its different components, reflections in the channel, jitter, crosstalk between lanes in a multi-lane system, and other parameters that vary with process, voltage, and temperature (PVT). Solving these problems is one challenge for designers, but the first task is to identify them in the live system as they occur, since there was previously no standard approach to identify or test faulty links in such complex systems. Debugging with probes is also difficult and expensive, and sometimes next to impossible, given the increasing complexity and compactness of today's systems on chips.
This issue is finally addressed by the introduction of lane margining at the receiver in PCIe Gen 4, which allows designers to measure the performance variation in their systems. Lane margining measures the available electrical margin at each receiver in a system. This article describes the lane margining feature and how it helps system designers deliver a more robust system.
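To make the idea concrete, the sketch below shows the basic control flow of a receiver timing-margin sweep: step the sampling point away from its nominal position, dwell while traffic runs, and record how far it can move before errors appear. The register-access tasks are hypothetical stubs (in a real PCIe Gen 4 system these accesses go through the standard Lane Margining registers), and the step range and dwell time are illustrative only.

`timescale 1ns/1ps

// A minimal, self-contained sketch of a receiver timing-margin sweep.
module margin_sweep_sketch;

  task automatic set_rx_timing_offset(input int step);
    // hypothetical stub: would shift the receiver sampling point by 'step' margin steps
  endtask

  function automatic int unsigned get_rx_error_count();
    // hypothetical stub: would read the receiver error counter; always clean here
    return 0;
  endfunction

  // Sweep the sampling point across the eye and report the error-free window.
  task automatic sweep_timing_margin(input int max_steps);
    int left_edge  = 0;
    int right_edge = 0;
    for (int step = -max_steps; step <= max_steps; step++) begin
      set_rx_timing_offset(step);            // move the sampling point
      #1_000_000;                            // dwell while traffic runs (illustrative)
      if (get_rx_error_count() == 0) begin   // still error-free at this offset?
        if (step < left_edge)  left_edge  = step;
        if (step > right_edge) right_edge = step;
      end
    end
    $display("Error-free timing window: %0d to %0d steps", left_edge, right_edge);
  endtask

  initial sweep_timing_margin(16);
endmodule

The same style of sweep applied to voltage offsets measures eye height, giving a picture of the margin available at that receiver.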
-
by Aimee Sutton, Lee Moore, Kevin McDermott - Imperas Software
The open standard ISA (Instruction Set Architecture) of RISC-V is at the forefront of a new wave of design innovation. The flexibility to configure and optimize a processor for the unique requirements of the target application has a lot of appeal in emerging and established markets alike. RISC-V can address the full range of compute requirements, from an entry-level microcontroller or a support processor (for functions such as power management and security) right up to state-of-the-art processor arrays with vector extensions for advanced AI (Artificial Intelligence) applications and HPC (High-Performance Computing).
This wave of innovation is generating a tsunami in verification as more and more SoC development teams face the complexities of RISC-V processor verification. Processor verification is not new, but in the past most processor IP was single-sourced, and the SoC verification plan was built on the assumption of high-quality, pre-verified IP cores.
-
by Laurent Arditi, Paul Sargent, Thomas Aird, Lauranne Choquin - Codasip
The openness of RISC-V allows customizing and extending the architecture and microarchitecture of a RISC-V based core to meet specific requirements. This appetite for more design freedom is also shifting the verification responsibility to a growing community of developers. Processor verification, however, is never easy. The very novelty and flexibility of the new specification result in new functionality that inadvertently creates specification and design bugs.
During the development of an average-complexity RISC-V processor core, you can discover hundreds or even thousands of bugs. As you introduce more advanced features, you introduce new bugs that vary in complexity, and certain types of bugs are simply too complex for simulation to find. You must therefore augment your RTL verification methods with formal verification. From corner cases to hidden bugs, formal verification allows you to exhaustively explore all states within a reasonable amount of processing time.
In this article, we go through a formal-based, easy-to-deploy RISC-V processor verification application. We show how, together with a RISC-V ISA golden model and automatically generated RISC-V compliance checks, we can efficiently target bugs that would be out of reach for simulation. By bringing a high degree of automation through a dedicated set of assertion templates for each instruction, this approach removes the need to devise assertions manually, improving the productivity of your formal verification team.
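As a flavor of the checks such templates automate, here is a minimal, self-contained sketch for a single instruction. The signal names (retire_valid, retire_instr, rs1_value, rs2_value, rd_wdata) are hypothetical stand-ins for a core's retirement interface, and a real flow would generate one such property per instruction from the ISA model rather than writing them by hand.

// All port names are hypothetical stand-ins for a core's retirement interface.
module add_check_sketch (
  input logic        clk,
  input logic        retire_valid,   // an instruction retires this cycle
  input logic [31:0] retire_instr,   // the retired instruction word
  input logic [31:0] rs1_value,      // source operand values at retirement
  input logic [31:0] rs2_value,
  input logic [31:0] rd_wdata        // value written to the destination register
);
  // Decode RV32I ADD: opcode 0110011, funct3 000, funct7 0000000.
  wire is_add = (retire_instr[6:0]   == 7'b0110011) &&
                (retire_instr[14:12] == 3'b000)     &&
                (retire_instr[31:25] == 7'b0000000);

  // A retired ADD must write rs1 + rs2 to its destination
  // (corner cases such as rd = x0 are ignored in this sketch).
  a_add_result: assert property (@(posedge clk)
    (retire_valid && is_add) |-> (rd_wdata == rs1_value + rs2_value));
endmodule

Bound to the core and handed to the formal tool, a property like this is proven for all reachable operand values at once, which is what puts bugs within reach that directed or random simulation is unlikely to hit.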
-
by Doug Smith - Doulos
An advantage of using formal verification is how quickly a formal environment can be created with a few simple properties that immediately start finding design issues. However, not all design behaviors are easily modeled using SystemVerilog's property syntax, resulting in complex or numerous properties, or behaviors that require more than just SVA. That is where helper code comes to the rescue. Helper code can significantly reduce the complexity of properties and can also be used to constrain formal analysis. Likewise, formal analysis may need the complexity of the problem and its state space reduced, and helper code can help there as well. So where and when should helper code be used? This article looks at how helper code can be used to simplify our properties, model formal abstractions, constrain formal inputs, and aid formal analysis.
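As a small illustration (with hypothetical signal names, not an example from the article), consider checking that requests and responses on an interface stay balanced. Written purely in SVA this needs sequences that track every pairing; with a few lines of helper code keeping the count, the properties collapse to one line each, and the same helper can be reused in an assumption to constrain the formal inputs.

// Hypothetical request/response interface; 'req' is driven by the environment,
// 'rsp' by the design under test.
module outstanding_helper_sketch (
  input logic clk,
  input logic rst_n,
  input logic req,   // request accepted this cycle
  input logic rsp    // response returned this cycle
);
  // Helper code: an auxiliary counter of requests still awaiting a response.
  logic [3:0] outstanding;
  always_ff @(posedge clk or negedge rst_n)
    if (!rst_n)
      outstanding <= '0;
    else
      outstanding <= outstanding + (req ? 4'd1 : 4'd0) - (rsp ? 4'd1 : 4'd0);

  // With the bookkeeping done by the helper, the property is one line:
  // the design never responds when nothing is outstanding (this sketch
  // assumes a response arrives at least one cycle after its request).
  a_no_spurious_rsp: assert property (@(posedge clk) disable iff (!rst_n)
    (outstanding == 0) |-> !rsp);

  // The same helper constrains the formal inputs: the environment stops
  // issuing requests once 8 are in flight, keeping the state space bounded.
  m_max_in_flight: assume property (@(posedge clk) disable iff (!rst_n)
    (outstanding == 4'd8) |-> !req);
endmodule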
-
by Priyanka Changan, Darshan Sarode, Avnita Pal, Priyanka Gharat - Silicon Interfaces
The increasing complexity of SoCs has resulted in a higher demand for clock domain crossing verification. As more functionality is integrated into chips and data is constantly being transferred between clock domains, ensuring proper communication across these domains has become a critical aspect of deep submicron design verification.
This article provides an in-depth guide on establishing a clock domain crossing verification environment using the Questa CDC tool. As an example, we delve into the use of synchronizers to mitigate metastability in an I2C design.
Our goal is to demonstrate how utilizing the Questa CDC tool can improve the stability of your clock domain crossings and the quality of your design output. To better illustrate the concepts, we present the I2C protocol with two separate clock domains as a case study. This section will clarify key concepts related to implementing two clock domains in a circuit.
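For reference, the structure a CDC tool expects on a single-bit crossing is the classic two-flip-flop synchronizer sketched below. The signal names are illustrative (a done/status flag moving from the I2C clock domain into the system clock domain) rather than taken from the article's design.

// Signal names are illustrative: a status flag from the I2C clock domain
// being brought into the system clock domain.
module two_ff_sync (
  input  logic sys_clk,       // destination (receiving) clock domain
  input  logic sys_rst_n,
  input  logic i2c_done,      // single-bit signal from the I2C clock domain
  output logic i2c_done_sync  // safe to use in the sys_clk domain
);
  logic meta;  // first stage: may go metastable, never used directly

  always_ff @(posedge sys_clk or negedge sys_rst_n)
    if (!sys_rst_n) begin
      meta          <= 1'b0;
      i2c_done_sync <= 1'b0;
    end else begin
      meta          <= i2c_done;  // capture the asynchronous input
      i2c_done_sync <= meta;      // second stage lets metastability resolve
    end
endmodule

Multi-bit buses need a different treatment (gray-coded counters, handshakes, or an asynchronous FIFO), which is exactly the kind of crossing a structural CDC analysis flags for review.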