Engineers and managers who work with safety equipment have likely seen a failure rate of some kind quoted for the equipment. Some equipment even comes with a safety certification stating the failure rate given certain assumptions about how the equipment will be used.
The numbers sometimes turn out to be rather small (e.g., 4 × 10⁻⁹ failures/hour, i.e., 4 failures per billion hours of operation), and few people have a detailed understanding of where these numbers come from or what they really mean. Yet these numbers are often used to make critical decisions that affect safe operations.
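To make the quoted rate concrete, here is a minimal sketch of what a constant failure rate of 4 × 10⁻⁹ failures/hour implies. The rate comes from the example above; the mission time and the exponential (constant-rate) survival model are illustrative assumptions, not claims about any particular piece of equipment.

```python
import math

failure_rate = 4e-9  # failures per hour (the rate quoted above)

# Expected failures over one billion device-hours of operation.
expected_failures = failure_rate * 1e9  # -> 4.0, matching the text

# Mean time between failures implied by a constant failure rate.
mtbf_hours = 1 / failure_rate  # 250 million hours

# Probability a single unit survives a 20-year mission, assuming
# a constant-rate (exponential) failure model -- an assumption.
mission_hours = 20 * 365 * 24  # illustrative mission length
p_survive = math.exp(-failure_rate * mission_hours)
```

Note that an MTBF of 250 million hours is far longer than any test program can directly observe for a single unit, which is exactly why the provenance of such numbers matters.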
Therefore, a fair question to ask is: how can an equipment user know that the quoted failure rates are correct and will accurately predict expected failures in a given application? Put another way, have the failure rate numbers been validated against independent data* for that application, and is clear documentation of the resulting validation available for examination? Unless failure rates have been validated, they cannot be presumed correct, and therefore they are not meaningful or useful numbers.
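One simple form such a validation check could take is a statistical comparison of independent field experience against the quoted rate. The sketch below assumes failures follow a Poisson model with a constant rate; the field-hours and observed-failure figures are purely illustrative assumptions, not data from any real product.

```python
import math

def poisson_tail(k_observed, expected):
    """P(X >= k_observed) for X ~ Poisson(expected)."""
    cdf = sum(math.exp(-expected) * expected**i / math.factorial(i)
              for i in range(k_observed))
    return 1.0 - cdf

quoted_rate = 4e-9        # failures/hour, as quoted by the vendor
field_hours = 5e8         # independent field data: device-hours (assumed)
observed_failures = 9     # failures seen in that field data (assumed)

expected = quoted_rate * field_hours  # 2.0 expected failures
p = poisson_tail(observed_failures, expected)
# A very small p (here well under 1%) suggests the quoted rate
# understates the failure rate actually seen in the field.
```

This is only one possible check; the point of the article stands regardless of method: whatever analysis is used, it should be documented and open to examination.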
So the next time you are quoted a failure rate, be sure to ask to see the validation data and analysis. If it has been peer-reviewed and published, that’s even better. Question anything you don’t understand. Otherwise you may be making critical decisions based on numbers that are not truly meaningful to your application.
*Independent data means data that comes from a different source than the data used to assess the failure rates in the first place.
Tagged as: Julia Bukowski, Failure Rates