Oh Really?

It must have been the week of the Proof Test Coverage (PTC) questions. In the latest marketing wars between vendors, Proof Test Coverage has been used as a weapon. Who would ever have thought of using Proof Test Coverage to show that product A is better than product B, without looking at the complete quantitative analysis?

As functional safety science evolves, our estimates for Proof Test Coverage become more and more accurate. Using predictive analytics, we are able to understand what the failures and failure modes are, and which of these modes can be detected as part of a proof test. Also refer back to a previous blog entry: Getting Good Proof Test Coverage Numbers.

Keep in mind that the objective of a proof test is to reveal any failures that remain undetected during normal operation. In other words, we want to detect Dangerous Undetected failures, as well as any diagnostic failures that would prevent the detection of failures during normal operation.

So what does it mean when a product has a Proof Test Coverage of a certain percentage? It simply means that when we perform the proof test that was considered in the predictive analytics, we will detect that percentage of all Dangerous Undetected failures. Consider a Generic Air Operated Ball Valve with a Soft Seat. The exida Safety Equipment Reliability Handbook shows that a Proof Test Coverage of 60% can be claimed when a full stroke test is done. We can also see that if we perform partial stroke testing during normal operation, the Proof Test Coverage of the full stroke test will decrease, because fewer faults remain for the proof test to detect. So how we use a product during normal operation can impact the achievable Proof Test Coverage.
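To make the arithmetic concrete, here is a minimal sketch using the 60% full stroke figure discussed above; the 20 FIT Dangerous Undetected failure rate is an assumed number for illustration, not a handbook value:

```python
# Illustrative sketch of what a Proof Test Coverage claim means.
# The 20 FIT DU failure rate is assumed for illustration; 60% is the
# full stroke Proof Test Coverage discussed above.
FIT = 1e-9                  # 1 FIT = one failure per 10^9 hours

lambda_du = 20 * FIT        # Dangerous Undetected (DU) failure rate
ptc = 0.60                  # Proof Test Coverage of the full stroke test

revealed = lambda_du * ptc          # DU failures the proof test reveals
unrevealed = lambda_du * (1 - ptc)  # DU failures that stay hidden

print(f"revealed:   {revealed / FIT:.0f} FIT")    # 12 FIT
print(f"unrevealed: {unrevealed / FIT:.0f} FIT")  # 8 FIT
```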

We can also compare different proof tests. For example, the 60% Proof Test Coverage for the Generic Air Operated Ball Valve with a Soft Seat is based on performing a full stroke and a visual inspection for corrosion, etc. If we added a leak test on top of the full stroke, we would expect a higher Proof Test Coverage, simply because we would be able to detect certain seat leakage failures that would remain undetected if we only did a full stroke. So our predictive analytics may show a Proof Test Coverage of 90% if we perform a full stroke test in combination with a leak test. Based on that, yes, the higher the Proof Test Coverage the better; after all, we would detect more failures. However, we need to keep in mind that we are looking at the exact same product in this comparison of proof test effectiveness.
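One way to picture why the combined test scores higher is to split the DU failure rate over failure modes and mark which test reveals which mode. The mode shares below are invented so that the totals reproduce the 60% and 90% figures above; they are not handbook data:

```python
# Hypothetical failure-mode split for the valve, showing why adding a leak
# test on top of a full stroke test raises the overall Proof Test Coverage.
# Mode shares and detectabilities are made up for illustration.
modes = {
    # mode: (share of DU failure rate, full stroke detects?, leak test detects?)
    "stuck / slow movement": (0.60, True,  False),
    "seat leakage":          (0.30, False, True),
    "other":                 (0.10, False, False),
}

ptc_full_stroke = sum(s for s, fs, lt in modes.values() if fs)
ptc_combined    = sum(s for s, fs, lt in modes.values() if fs or lt)
print(f"{ptc_full_stroke:.0%} / {ptc_combined:.0%}")  # 60% / 90%
```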

When looking at two different products, this conclusion may not be valid. Let’s assume that Product A has a 90% Proof Test Coverage and Product B has a 50% Proof Test Coverage. If you conclude that Product A is therefore better, you did not understand what the Proof Test Coverage indicates. It indicates the percentage of Dangerous Undetected failures that can be detected given a specific proof test. Even if we assume that both Proof Test Coverage percentages are based on the best-case proof test scenario for each product, that conclusion may be incorrect, as the number of failures available for detection could be a lot lower for Product B than for Product A.

Consider the example as shown in the table below.

                                                   Product A   Product B
  Dangerous Undetected Failures                    20 FIT      2 FIT
  Proof Test Coverage                              90%         50%
  Failures remaining unrevealed after proof test   2 FIT       1 FIT
The reason the Proof Test Coverage of Product B is only 50% is that there are very few failures left to reveal. If we assume that Product A and Product B are of equal complexity and have similar total failure rates, the correct conclusion would be that the internal diagnostic capabilities of Product A are inferior to those of Product B. Furthermore, if the PFDavg calculation models for Product A and B are identical, we can conclude that Product A will end up with a higher PFDavg than Product B (roughly double).
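As a sanity check on the "roughly double" claim, here is a simplified first-order PFDavg sketch. The 1-year proof test interval and 20-year mission time are assumed values, and the failure rates follow the example above (assumed 20 FIT DU at 90% coverage for Product A, 2 FIT DU at 50% coverage for Product B):

```python
# Simplified first-order PFDavg sketch: DU failures the proof test reveals
# contribute over the proof test interval; DU failures it misses accumulate
# over the whole mission. Interval and mission time are assumed values.
FIT = 1e-9
HOURS_PER_YEAR = 8760
ti = 1 * HOURS_PER_YEAR      # proof test interval (assumed: 1 year)
mt = 20 * HOURS_PER_YEAR     # mission time (assumed: 20 years)

def pfd_avg(lambda_du, ptc):
    revealed = lambda_du * ptc * ti / 2          # restored at every proof test
    unrevealed = lambda_du * (1 - ptc) * mt / 2  # never restored in the mission
    return revealed + unrevealed

pfd_a = pfd_avg(20 * FIT, 0.90)  # Product A
pfd_b = pfd_avg(2 * FIT, 0.50)   # Product B
print(f"A: {pfd_a:.2e}  B: {pfd_b:.2e}  ratio: {pfd_a / pfd_b:.1f}")
```

With these assumptions the ratio comes out at about 2.8; as the mission time grows relative to the proof test interval, the unrevealed terms dominate and the ratio approaches the 2:1 ratio of the unrevealed failure rates.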

We should, however, tread very carefully: as we can see, simple conclusions based on one set of numbers, without looking at the complete picture, are most likely incorrect. If you really want to compare apples with apples, compare the resulting PFDavg numbers for the different products, obtained after (comparable) detailed modeling of the two products/systems.

Tagged as: SERH, Safety Equipment Reliability Handbook, PTC, Proof Test Coverage, PFDavg, Iwan van Beurden, exida
