Over the course of this blog series, I have talked about getting realistic failure rate data, where this failure data comes from, and how different methods of failure data analysis compare. If you understand these topics, you will begin to get a very good feel for what it takes to generate realistic failure data. This is a subject I find very important, and I hope you will find your time reading it well spent.
In Part 1, I wrote about the fundamental concepts of IEC 61511, the functional safety standard for the process industries, as well as the design phase of the safety lifecycle.
In Part 2, I explained two fundamental techniques that have been developed in the field of reliability engineering: failure rate estimation techniques and failure rate prediction techniques.
Part 3 was about field data collection standards and tools, as well as prevalent prediction techniques like the B10 and FMEDA approaches. Part 4 covered FMEDA results and accuracy, and Part 5 focused on comparing failure rates.
In this final part of the blog series, I will compare actuator data and run through some frequently asked questions about getting failure rate data.
Actuator Certificate Data
Let's look at actuators. Here's a certificate that shows some failure rates: a dangerous failure rate of around 4E-8, depending on whether it's a simple or tandem design. The method used was not declared; the report did not say how the data was generated, but one might guess it was based on manufacturer warranty data.
Comparison of Actuator Data
The Dow plant study for actuators reported 2.10E-07. The FMEDA results are pretty close to that; the FMEDA average is 2.78E-07. The certificate, by contrast, shows numbers like 4E-8 or 8E-9, up to more than an order of magnitude lower.
Here's a graphic of those numbers. The exida results show different types of actuators based on their design complexity. Comparing the average to the Dow field data, we're sitting about 25% too high, which is probably a good place to be for safety verification purposes. Manufacturer warranty data from other certificates is dangerously low.
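As a quick sanity check on the comparison above, the ratios can be computed directly from the rounded figures quoted in the text. (The exact percentage depends on the underlying averages; the roughly 25% figure from the graphic was presumably computed from unrounded values, so the rounded numbers here land slightly higher.)

```python
# Rounded failure rates quoted in this post (per hour).
dow_field_rate = 2.10e-7            # Dow plant field study for actuators
fmeda_average = 2.78e-7             # average of the exida FMEDA predictions
certificate_rates = [4e-8, 8e-9]    # dangerous failure rates from the certificate

# The FMEDA average sits above the field data, i.e. it is conservative.
conservatism = fmeda_average / dow_field_rate - 1
print(f"FMEDA average vs. field data: {conservatism:+.0%}")

# The certificate numbers sit below the field data, i.e. they are optimistic,
# by factors ranging from several times to tens of times.
for rate in certificate_rates:
    factor = dow_field_rate / rate
    print(f"Certificate rate {rate:.0E} is {factor:.0f}x below the field rate")
```

For safety verification, erring high (conservative) is the safe direction; numbers well below field experience understate risk.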
Getting Failure Data
So when we talk about getting field failure data, we look at all these different methods:
- Manufacturer field return data studies
- Industry databases
- End-user field failure data studies
- B10 data
- Calibrated FMEDA
What we've clearly concluded at exida is that we cannot use manufacturer field return data studies for absolute failure rates. We should also never use B10 cycle testing for process industry applications where equipment does not move frequently, meaning at least once or twice a week. Industry databases like OREDA, however, provide excellent total failure rates. Unfortunately, for safety analysis we need much more than the total: we need failure rates broken down by failure mode, diagnostic coverage, proof test coverage, and useful life. And it is very good data, because the OREDA people and the SINTEF people really know what they're doing.

End-user field failure data studies can be accurate if most or all failures are reported and the data collection system is very good. A calibrated FMEDA, on the other hand, can provide accurate predictive data, which is what is needed when a product is newly developed. FMEDAs can work extremely well if the component database is properly calibrated for design strength versus a predefined environment.
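To make the B10 point concrete, here is a minimal sketch of the cycle-based conversion commonly used in the machinery sector, in the style of the ISO 13849-1 relation MTTFd = B10d / (0.1 × n_op). All the numbers below are made-up assumptions for illustration, not data from this post; the point is that the resulting failure rate scales with how often the equipment moves, so it says nothing about the dormant failure mechanisms that dominate a valve that rarely strokes.

```python
# Hypothetical illustration: converting B10d cycle-test data to a dangerous
# failure rate via the ISO 13849-1 style relation
#     MTTFd = B10d / (0.1 * n_op)
# where n_op is the number of operations per year.

HOURS_PER_YEAR = 8760

def lambda_d_from_b10(b10d_cycles: float, ops_per_year: float) -> float:
    """Dangerous failure rate (per hour) implied by B10d cycle data."""
    mttfd_years = b10d_cycles / (0.1 * ops_per_year)
    return 1.0 / (mttfd_years * HOURS_PER_YEAR)

b10d = 500_000  # assumed cycles at which 10% have failed dangerously

# A machine cycling twice a week vs. a process valve stroked once a year:
for label, ops in [("machinery, 2 cycles/week", 104),
                   ("process valve, 1 stroke/year", 1)]:
    print(f"{label}: lambda_d = {lambda_d_from_b10(b10d, ops):.1E} /h")
```

The valve's number comes out vanishingly small only because it barely moves; the cycle-based model cannot see corrosion, sticking, or other time-dependent dormant failure modes, which is exactly why B10 data is misleading for low-demand process applications.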
It has taken a decade of research into this component database (literally thousands of man-hours), and we think the industry is finally able to get relatively accurate field failure data, thanks to the excellent work of the industry database people, end users who are collecting good data, the nuclear industry, and everyone who uses a calibrated FMEDA.
To conclude this blog series, below are some questions that were raised when I conducted a webinar on the matter.
There's no tool or means in the market to push even end users to report failure rates.
There is an excellent tool from exida called SILStat. Take a look at it on the website.
How can the oil & gas end user field failure data studies be relied on?
The exida team will rely on end-user field failure data studies for the purpose of comparison. We use it to calibrate our component database, but we only do so when we have audited the data collection process and vetted the data.
In the case that I use as an example (the Dow data), several site visits were used to understand the process and to judge the quality of the data. I will give them great credit: that group of people did an excellent job. The bottom line is, you can depend on end-user field failure data studies if the data is collected by a good tool and vetted by experts.
It seems like the whole field of accelerated stress testing is missing; typically a more conservative number is taken.

I used to be a very strong believer in accelerated stress testing, and I'm pretty convinced that some of those methods have a lot to contribute, but there's no strong connection: I haven't seen proof that it can be used to help calibrate component databases.

You say "it seems like each method has its weaknesses and is just a number, but not necessarily an accurate representation of what happens in the field."
I would agree until you compare the methods to what happens in the field. When you compare the methods to what happens in the field, and you can see that they match, then you know you have realistic data.
SILStat does not push end users to schedule routine failure data recording.
No tool forces people to do things. People have to understand and want to do things. That’s a fact.