ISO 26262 Fault Analysis – Worst Case is Really the Worst

by Avidan Efody, Verification Architect, Mentor Graphics.


In our previous article on ISO 26262 requirements with regard to random hardware faults, we explained what random hardware faults are, where they come from, how ISO requires that you classify them, and why classifying them wrong can cost a lot of money and time. We also promised that we would show you how to classify them right, and in this post we will try to keep that promise. We will do so by examining a series of techniques, starting from the simplest and cheapest and working our way up to the most elaborate ones. As we go, we will see how an increasing number of worst-case assumptions gives way to real-life results, making the good fault bins grow at the expense of the bad ones.

Introduction

Imagine you’re a verification engineer asked to get a small 10K-gate design ISO 26262 certified. Assuming you don’t take the smart decision to quit your job, what would be your first step? If you had been asked to do plain functional verification of the design, you would obviously start by reading the DUT spec. With random hardware fault analysis you’ll do just the same, only your spec will be called a “hardware safety requirements specification” (ISO 26262 5.6) instead of just “spec”, and it will contain a description of how your design should behave in order not to violate any safety goals. The behavior specified might concern some outputs – for example, “output A should never send out a wrong value” (1); relationships between outputs – for example, “output A should never assume the value X when output B is at the value Y” (2); or internal state – for example, “the state machine must not jump from state X to state Y” (3). If we follow ISO to the letter, the same document will also describe the safety mechanisms that should be implemented in the design to make sure the specified behavior is met. As a hardware design engineer, your role would be to implement the safety mechanisms. As a hardware verification engineer, your job would be to verify that the above requirements are met. A good chunk of the job is just plain functional verification of requirements (1), (2) and (3). The additional, special part is that you also have to show that, under a certain dose of random hardware faults, the requirements will still hold with a probability above a given threshold. Applying that to requirement (2), we get something like “in the presence of random hardware faults, the probability of output A assuming the value X when output B is at the value Y should be less than once per 10⁹ hours” (4), also known as 1 FIT.

Taking requirement (4) above and ignoring the others, and assuming no safety mechanisms are in place (we will factor those in later), we could initially make the very prudent assumption that a fault in any gate in our design could lead to its violation. This would put the probability of a safety goal violation at 10,000 × (the fault probability of a single gate), which will most probably be much higher than 1 FIT. It is easy to see that this very prudent worst-case assumption is also quite unrealistic, since it is highly unlikely that all 10K gates of the design are even physically connected to A or B. In this article we will look at three different ways of refining this worst-case analysis and replacing this exaggerated estimate with a more realistic one.
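To get a feel for how far over budget the raw worst case lands, here is a minimal back-of-the-envelope sketch in Python; the per-gate failure rate is an assumed, illustrative figure, not a number from any standard or datasheet.

    # Worst case: every fault in every gate is assumed to violate the
    # safety goal. All figures are illustrative.
    GATE_COUNT = 10_000        # gates in the design
    LAMBDA_GATE_FIT = 0.01     # assumed per-gate failure rate, in FIT
                               # (1 FIT = 1 failure per 10^9 device-hours)
    BUDGET_FIT = 1.0           # budget implied by requirement (4)

    worst_case_fit = GATE_COUNT * LAMBDA_GATE_FIT
    print(f"worst case: {worst_case_fit} FIT, budget: {BUDGET_FIT} FIT")
    # -> worst case: 100.0 FIT, budget: 1.0 FIT

Under these assumptions the worst case overshoots the budget by two orders of magnitude, which is exactly why the refinements below are worth the effort.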

Formal analysis

Just about any design includes some debug or status logic that can't drive the outputs unless it has telepathic powers. There's no reason in the world to include this logic in the risk probability calculation, especially since taking it out is so easy. All it requires is a good formal tool that can calculate the fan-in cone of any safety-critical outputs or internal signals, and exclude all other parts of the design from the risk probability calculation. This usually gives a 10-20% reduction in risk probability, practically without having to get off the sofa. Formal tools can actually go further than that, and also take out of the calculation any logic that can't influence A or B under certain assumptions about the inputs.
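As a rough illustration of the structural half of this analysis, the sketch below collects the fan-in cone of the safety-critical signals over a toy netlist and marks everything else as excludable. The netlist and signal names are hypothetical; a real formal tool goes further, also pruning logic that cannot influence the outputs under input assumptions.

    from collections import deque

    def fan_in_cone(drivers, critical_signals):
        """Collect every node that can structurally reach the given signals."""
        cone = set(critical_signals)
        work = deque(critical_signals)
        while work:
            for drv in drivers.get(work.popleft(), ()):
                if drv not in cone:
                    cone.add(drv)
                    work.append(drv)
        return cone

    # Toy netlist: signal -> the gates driving it. 'dbg' has no path to 'A'.
    drivers = {"A": ["g1"], "g1": ["g2", "g3"], "dbg": ["g3"]}
    cone = fan_in_cone(drivers, ["A"])
    excluded = set(drivers) - cone    # safe to drop from the calculation
    print(sorted(excluded))           # -> ['dbg']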

Dynamic analysis

Adding a gray scale to our black-and-white analysis is a big step up, both in pain and in gain. Instead of asking whether a gate can or can't influence the safety goal, we ask what the gate's chances of influencing the safety goal are. Take an OR gate, for example: as long as one input is '1', the other inputs can go as crazy as they want. If one input is '1' for 99.9% of the time, faults in anything connected to the other inputs can be left out. That's where the big savings are hiding, but to get that information we must perform some measurements on a "running" design, i.e., consider not only connections but also actual operation. One way of doing that is to log selected gates' inputs while the design is running, then run a script to find out the percentage of time during which each input really influences the output; a sketch of such a script follows.
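Here is a minimal sketch of such a post-processing script for a single OR gate, with invented log data; for an OR gate, an input only controls the output in cycles where all the other inputs are '0'.

    # Per input of an OR gate, measure the fraction of logged cycles in
    # which that input actually controls the output. Samples are invented.
    samples = [                       # one (a, b, c) tuple per logged cycle
        (1, 0, 0), (1, 0, 0), (1, 1, 0), (1, 0, 0), (0, 0, 1),
    ]

    n = len(samples[0])
    influence = [0] * n
    for values in samples:
        for i in range(n):
            if not any(values[:i] + values[i+1:]):  # all other inputs are '0'
                influence[i] += 1                   # so input i drives the output

    for i, hits in enumerate(influence):
        print(f"input {i}: controls the output {100*hits/len(samples):.0f}% of the time")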

Sounds simple? Well, that's because I intentionally left a big gotcha out. To be really efficient, and to provide a reduction that is worth the investment, the gates for which we perform this analysis have to meet a somewhat difficult condition: the logic in the fan-in cone of their inputs should have no other physical connection to the safety-critical areas. Why? Basically, if faults in the fan-in cone have no other way out, then we can multiply their cumulative probability of failure by the input's chances of influencing the output. If the faults do have another way out, then we need to count them towards the safety goal violation probability anyhow. The short numerical example below makes this concrete.
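With invented figures, and reusing the assumed 0.01 FIT per gate from the earlier sketch:

    # A fan-in cone of 2,000 gates whose only path to the safety-critical
    # area is through an OR-gate input that controls the output 0.1% of
    # the time. All figures are illustrative.
    cone_fit = 2_000 * 0.01            # cumulative cone failure rate: 20 FIT
    effective_fit = cone_fit * 0.001   # scaled by the input's influence window
    print(effective_fit)               # -> 0.02 FIT instead of 20 FIT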

At a high level, gates that meet the criterion above can be found at "strategic" points in the design: for example, at the interface between two blocks, at arbitration points, or wherever multiplexing/de-multiplexing takes place. A pretty common example of gates that can be excluded from the calculation using this method are those belonging to configuration interface logic (say, APB or AHB). While configuration registers are usually read quite often by the DUT and its safety-critical logic, they are written only rarely. This means that faults in the bus logic are buffered from the safety-critical logic, and have a very low probability of impacting it.

Fault simulation

Another way to determine the probability of safe/dangerous faults is to inject a large enough sample of faults into the design and check their impact. The brute-force method of doing this, which has been around for some time, is to run a test, force an internal signal to a wrong value, and then diff the outputs, or some internal signals, against the same test run without a fault. If a difference is found, the fault is usually counted as dangerous. The sketch below shows the shape of such a campaign.
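A minimal, simulator-free sketch of the brute-force flow, on a hypothetical two-gate "design" evaluated in pure Python; the fault is modeled as a node stuck at a wrong value for the whole test.

    import random

    def run(stimulus, fault=None):
        """Evaluate out = (a & b) | c per cycle; `fault` forces node n1."""
        trace = []
        for a, b, c in stimulus:
            n1 = a & b
            if fault is not None and fault[0] == "n1":
                n1 = fault[1]                      # inject the wrong value
            trace.append(n1 | c)
        return trace

    random.seed(0)
    stimulus = [tuple(random.randint(0, 1) for _ in range(3))
                for _ in range(1000)]
    golden = run(stimulus)                         # reference run, no fault

    for fault in [("n1", 0), ("n1", 1)]:           # sampled fault list
        verdict = "dangerous" if run(stimulus, fault) != golden else "safe"
        print(fault, "->", verdict)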

Since the fault sample required might be quite big, the brute-force method will obviously have a significant cost in regression time. An easy way to avoid additional runs is to look at the results of tests run without fault injection, and try to recalculate what their result would have been had a fault been injected at some point. Since most random faults are masked by logic just after they happen, this can eliminate a good chunk of the sample; for example, a fault injected at point A at time 200 ns may already be masked one gate downstream, at point B. Note that in some cases the fault will propagate into a large cone of logic, or into registers/memory. In such cases this sort of "what if" analysis becomes increasingly complex, until it takes just as long as a new run. Hence, it makes sense to stop the analysis if the fault isn't masked after an arbitrary number of cycles, as in the sketch below.
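A sketch of the bounded "what if" check, under heavy simplifying assumptions: a purely combinational toy netlist, one "cycle" per logic level, and an invented waveform format; all names are hypothetical.

    MAX_CYCLES = 8

    # Toy netlist: node -> (function, input nodes); primary inputs have no entry.
    netlist = {
        "B": (lambda a, en: a and en, ("A", "en")),
        "O": (lambda b, c: b or c,    ("B", "c")),
    }

    def fanout(node):
        return [g for g, (_, ins) in netlist.items() if node in ins]

    def is_masked(wave, fault_node, t):
        """True if flipping fault_node at cycle t dies out within MAX_CYCLES."""
        flipped = {fault_node: 1 - wave[fault_node][t]}
        for cycle in range(t, t + MAX_CYCLES):
            nxt = {}
            for node in flipped:
                for g in fanout(node):
                    fn, ins = netlist[g]
                    vals = [flipped.get(i, wave[i][cycle]) for i in ins]
                    if int(fn(*vals)) != wave[g][cycle]:
                        nxt[g] = int(fn(*vals))
            if not nxt:
                return True        # the difference was fully masked
            flipped = nxt          # keep chasing the difference forward
        return False               # give up: count the fault as unmasked

    # 'en' is 0 throughout, so a flip on 'A' is masked at gate B right away.
    wave = {"A": [1]*8, "en": [0]*8, "c": [0]*8, "B": [0]*8, "O": [0]*8}
    print(is_masked(wave, "A", 0))  # -> True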

Wrap up

The fault analysis techniques above should all be used together to determine the safe/dangerous fault probability. Formal analysis bounds the "spatial" dimension, removing logic that is not physically connected to safety-critical areas, or that has no way of impacting them under given assumptions. Dynamic analysis bounds the time dimension, removing logic that influences safety-critical areas only during negligible time windows. Finally, fault simulation is used to determine the probability of safe/dangerous faults in the remaining logic. In our next article we will bring the results derived from these various analysis techniques together, and show how they can be turned into meaningful ISO 26262 metrics.
