Verification Horizons Articles:
by Tom Fitzpatrick - Mentor, A Siemens Business
I admit it. I’m a huge Harry Potter fan. I own a complete first-edition set of the books and actually drove over an hour and a half each way to pick up my niece and nephew as my “excuse” to see the first movie when it came out, since my own children were too young at the time. I have read all the books (or listened to the audiobooks) and watched all the movies more times than I can count. When my son got older I read the books to him at bedtime, and later did the same with my daughter.
My rule has always been that you have to read the book before you can see the movie — a rule that I scrupulously apply to any book-to-movie endeavor. If you see the movie first, then when you read the book all you will see in your mind is the corresponding scene from the movie. But if you read the book first, then your imagination can run wild and bring the story to life in a way that a movie simply can’t match. Also, there are usually details in the book that they can’t fit into the movie (with the possible exception of the 9-hour trilogy version of The Hobbit). So, in a way, the book is the specification and we as consumers judge how well the filmmakers did in implementing the story. So, we’re kind of like verification engineers! You knew I’d bring it around to verification, didn’t you?
by Jean-Marie Brunet - Mentor, A Siemens Business, and Lauro Rizzatti - Verification Expert
There is no doubt that computers have changed our lives forever. Still, as much as computers outperform humans at complex tasks like solving intricate mathematical equations in almost zero time, they may underperform at tasks humans find easy, such as image identification. Anyone in the world can identify a picture of a cat in no time at all. The most powerful PC in the world may take hours to reach the same answer.
The problem lies in the traditional central-processing-unit (CPU) von Neumann architecture. Devised to overcome the inflexibility of early computers that were hardwired to perform a single task, the stored-program computer, credited to von Neumann, gained the flexibility to execute any program at the expense of lower performance.
Limitations of the stored-program computer, compounded by the limited data available for analysis and inadequate algorithms to perform that analysis, conspired to delay the implementation of artificial intelligence (AI) and its sub-classes, machine learning (ML) and deep learning (DL), for decades.
by Jeremy Levitt, Zyad Hassan, and Joe Hupcey III - Mentor, A Siemens Business
You are about to go into a planning meeting for a new project when you get a call from one of your company’s Customer Advocate Managers: a product you worked on has just started shipping in volume, and there is a growing number of field reports that the system randomly freezes up after a few weeks in operation. While a soft reset “works,” the system will run anywhere from 5 to 10 days before it has to be rebooted again.
The highly variable MTBF suggests the problem is not a clock-domain crossing (CDC) issue (after all, crashes from CDC issues would never have made it out of the pre-production lab). The unfortunate conclusion is that the design’s logic itself has a problem. Somehow the code reviews, linting, and constrained-random simulations missed the case(s) where the system could deadlock. The only people happy about this are your competitors’ sales reps…
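A corner-case deadlock of this kind is exactly what formal property checking is built to find. As a minimal, hypothetical sketch (the clk, rst_n, req, and gnt signals are assumed for illustration, not taken from any particular design), a liveness assertion such as the following lets a formal tool exhaustively search for a state in which a request is never granted:

// Hypothetical liveness check: every request must eventually be granted.
// A formal tool will report a counterexample trace if the design can
// reach a state where req asserts but gnt never follows.
module deadlock_checks (input logic clk, rst_n, req, gnt);
  property p_no_starvation;
    @(posedge clk) disable iff (!rst_n)
      req |-> s_eventually gnt;  // strong eventuality (SystemVerilog-2009)
  endproperty
  a_no_starvation: assert property (p_no_starvation);
endmodule

Unlike constrained-random simulation, which can only sample the state space, a proof of this property rules out the deadlock for all reachable states.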
by Matthew Ballance - Mentor, A Siemens Business
Almost every non-trivial design contains at least one state machine, and exercising that state machine through its legal states, state transitions, and the different reasons for state transitions is key to verifying the design’s functionality. In some cases, we can exercise a state machine simply as a side-effect of performing normal operations on the design. In other cases, the state machine may be sufficiently complex that we must take explicit targeted steps to effectively exercise the state machine. In this article, we will see how inFact’s systematic stimulus generation and ability to generate constraint-aware functional coverage simplify the process of exercising a state machine by generating command sequences.
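To make that goal concrete, the following generic SystemVerilog sketch (not inFact-specific; the FSM states and transition bins are hypothetical) shows the kind of state and state-transition functional coverage such a strategy targets:

// Hypothetical FSM coverage model: every state, one legal transition
// path, and a transition that must never occur.
typedef enum logic [1:0] {IDLE, LOAD, RUN, DONE} state_e;

class fsm_coverage;
  state_e state;

  covergroup fsm_cg;
    cp_state: coverpoint state {
      bins states[]   = {IDLE, LOAD, RUN, DONE};                // visit each state
      bins legal_path = (IDLE => LOAD => RUN => DONE => IDLE);  // one legal sequence
      illegal_bins bad_tr = (IDLE => DONE);                     // skipping ahead is illegal
    }
  endgroup

  function new();
    fsm_cg = new();
  endfunction

  // Call on every clock (or on every state change) from the testbench.
  function void sample_state(state_e s);
    state = s;
    fsm_cg.sample();
  endfunction
endclass

Hand-writing such bins for a large state machine is tedious and error-prone, which is where generating constraint-aware coverage automatically earns its keep.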
by Matthew Ballance - Mentor, A Siemens Business
Creating sufficient tests to verify today’s complex designs is a key verification challenge, and it spans everything from IP block-level verification to SoC validation. The Accellera Portable Test and Stimulus Standard (PSS) [1] promises to boost verification reuse by allowing a single description of test intent to be reused across IP block, subsystem, and SoC verification environments, and it provides powerful language features that address verification needs across these levels and the specific requirements of verification reuse. However, just as the powerful object-oriented features of Java and C++ did not automatically produce high-quality reusable code, the PSS language features on their own do not guarantee productive reuse of test intent. Judiciously applied, reuse of design IP and test intent can dramatically reduce rework and avoid the mistakes introduced during rework. And just as reuse of design IP accelerates the creation of new designs, reuse of test intent accelerates the creation of new test scenarios. Effective reuse of test intent, however, requires up-front planning, in the same way that reuse of design IP or software code does. Without a well-planned process, reuse can backfire, requiring more work without delivering proportionate benefits. This article will help you design a PSS reuse strategy that matches the goals and profile of your organization and maximizes the benefits you receive from adopting PSS.
by Abdelouahab Ayari, Sukriti Bisht, Sulabh Kumar Khare, Ashish Hari, and Kurt Takara - Mentor, A Siemens Business
Today’s complex designs include multiple asynchronous clocks, and signals crossing between asynchronous clock domains can cause functional errors. When a signal from one clock domain is sampled by a register in a different, asynchronous clock domain, the setup/hold timing requirements of the destination register can be violated. Such a violation means the destination register may become metastable, settling to an unpredictable value and possibly causing a functional error. Although clock-domain crossing (CDC) verification is a critical task in design verification projects, many design teams only statically verify their CDC synchronization structures.
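For reference, the canonical structure that static CDC tools look for on single-bit crossings is the two-flop synchronizer. A minimal sketch (module and signal names are illustrative):

// Two-flop synchronizer for a single-bit crossing: the first flop may go
// metastable, and the second gives it a full destination cycle to resolve.
module sync_2ff (
  input  logic clk_dst,   // destination-domain clock
  input  logic rst_n,     // destination-domain reset
  input  logic d_async,   // signal arriving from the source clock domain
  output logic q_sync     // synchronized copy, safe to use in clk_dst
);
  logic meta;
  always_ff @(posedge clk_dst or negedge rst_n)
    if (!rst_n) begin
      meta   <= 1'b0;
      q_sync <= 1'b0;
    end else begin
      meta   <= d_async;
      q_sync <= meta;
    end
endmodule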
When designers add synchronization logic to prevent the propagation of metastable events, they should also implement and verify the correct CDC protocol. Without a correctly implemented protocol, a CDC structure will not function correctly and can lose or corrupt data or propagate metastability. It is common practice for CDC tools to generate assertions that check protocol adherence, but assertion generation alone is not sufficient for designers to verify CDC protocols.
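As an illustration of what those protocol obligations look like, consider a request/acknowledge handshake crossing (the signal names here are assumed for the sketch): the source domain must hold its data stable, and keep the request asserted, until the acknowledge returns. Assertions a CDC tool might generate for this protocol look roughly like:

// Hypothetical generated-style protocol checks for a req/ack CDC handshake.
module cdc_protocol_checks (
  input logic       clk_src, rst_n,
  input logic       req, ack,
  input logic [7:0] data
);
  // Data must stay stable while a request is outstanding.
  a_data_stable: assert property (
    @(posedge clk_src) disable iff (!rst_n)
    req && !ack |=> $stable(data)
  );

  // The request must be held until it is acknowledged.
  a_req_held: assert property (
    @(posedge clk_src) disable iff (!rst_n)
    req && !ack |=> req
  );
endmodule

Generating such assertions is only the first step; they must also be exercised, in simulation or formal verification, before the protocol can be considered verified.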